Patents by Inventor Sayed Ali Emami

Sayed Ali Emami is a named inventor on the following patent filings. The listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO). All four records below share the same abstract describing one video object-tracking method; a hedged code sketch of that method follows the listing.

  • Patent number: 10002309
    Abstract: A method includes the following steps. A video sequence including detection results from one or more detectors is received, the detection results identifying one or more objects. A clustering framework is applied to the detection results to identify one or more clusters associated with the one or more objects. The clustering framework is applied to the video sequence on a frame-by-frame basis. Spatial and temporal information for each of the one or more clusters are determined. The one or more clusters are associated to the detection results based on the spatial and temporal information in consecutive frames of the video sequence to generate tracking information. One or more target tracks are generated based on the tracking information for the one or more clusters. The one or more target tracks are consolidated to generate refined tracks for the one or more objects.
    Type: Grant
    Filed: November 10, 2016
    Date of Patent: June 19, 2018
    Assignee: International Business Machines Corporation
    Inventors: Lisa M. Brown, Sayed Ali Emami, Mehrtash Harandi, Sharathchandra U. Pankanti
  • Publication number: 20170061239
    Abstract: A method includes the following steps. A video sequence including detection results from one or more detectors is received, the detection results identifying one or more objects. A clustering framework is applied to the detection results to identify one or more clusters associated with the one or more objects. The clustering framework is applied to the video sequence on a frame-by-frame basis. Spatial and temporal information for each of the one or more clusters are determined. The one or more clusters are associated to the detection results based on the spatial and temporal information in consecutive frames of the video sequence to generate tracking information. One or more target tracks are generated based on the tracking information for the one or more clusters. The one or more target tracks are consolidated to generate refined tracks for the one or more objects.
    Type: Application
    Filed: November 10, 2016
    Publication date: March 2, 2017
    Inventors: Lisa M. Brown, Sayed Ali Emami, Mehrtash Harandi, Sharathchandra U. Pankanti
  • Patent number: 9582895
    Abstract: A method includes the following steps. A video sequence including detection results from one or more detectors is received, the detection results identifying one or more objects. A clustering framework is applied to the detection results to identify one or more clusters associated with the one or more objects. The clustering framework is applied to the video sequence on a frame-by-frame basis. Spatial and temporal information for each of the one or more clusters are determined. The one or more clusters are associated to the detection results based on the spatial and temporal information in consecutive frames of the video sequence to generate tracking information. One or more target tracks are generated based on the tracking information for the one or more clusters. The one or more target tracks are consolidated to generate refined tracks for the one or more objects.
    Type: Grant
    Filed: May 22, 2015
    Date of Patent: February 28, 2017
    Assignees: International Business Machines Corporation, The University of Queensland
    Inventors: Lisa M. Brown, Sayed Ali Emami, Mehrtash Harandi, Sharathchandra U. Pankanti
  • Publication number: 20160343146
    Abstract: A method includes the following steps. A video sequence including detection results from one or more detectors is received, the detection results identifying one or more objects. A clustering framework is applied to the detection results to identify one or more clusters associated with the one or more objects. The clustering framework is applied to the video sequence on a frame-by-frame basis. Spatial and temporal information for each of the one or more clusters are determined. The one or more clusters are associated to the detection results based on the spatial and temporal information in consecutive frames of the video sequence to generate tracking information. One or more target tracks are generated based on the tracking information for the one or more clusters. The one or more target tracks are consolidated to generate refined tracks for the one or more objects.
    Type: Application
    Filed: May 22, 2015
    Publication date: November 24, 2016
    Inventors: Lisa M. Brown, Sayed Ali Emami, Mehrtash Harandi, Sharathchandra U. Pankanti
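
The shared abstract above outlines a pipeline: cluster per-frame detection results, compute spatial and temporal information for each cluster, associate clusters with detections across consecutive frames to build tracking information, generate target tracks, and consolidate them into refined tracks. The sketch below illustrates that general flow only; the Detection/Cluster/Track classes, the greedy centroid-distance clustering, the nearest-centroid association gate, and the length-based consolidation are all illustrative assumptions by the editor, not the clustering framework actually claimed in these patents.

```python
"""Hedged sketch of a frame-by-frame cluster-and-associate tracking pipeline.

All names, thresholds, and the specific clustering/association rules are
illustrative assumptions, not the patented implementation.
"""
from dataclasses import dataclass, field
from typing import List


@dataclass
class Detection:
    frame: int   # frame index in the video sequence (temporal information)
    x: float     # detection centre, x coordinate (spatial information)
    y: float     # detection centre, y coordinate (spatial information)


@dataclass
class Cluster:
    frame: int
    detections: List[Detection]

    @property
    def centroid(self):
        # Spatial summary of the cluster: mean position of its detections.
        xs = [d.x for d in self.detections]
        ys = [d.y for d in self.detections]
        return (sum(xs) / len(xs), sum(ys) / len(ys))


@dataclass
class Track:
    clusters: List[Cluster] = field(default_factory=list)


def cluster_frame(detections: List[Detection], radius: float = 30.0) -> List[Cluster]:
    """Group one frame's detections by centroid distance (assumed clustering step)."""
    clusters: List[Cluster] = []
    for det in detections:
        for cl in clusters:
            cx, cy = cl.centroid
            if (det.x - cx) ** 2 + (det.y - cy) ** 2 <= radius ** 2:
                cl.detections.append(det)
                break
        else:
            clusters.append(Cluster(frame=det.frame, detections=[det]))
    return clusters


def associate(tracks: List[Track], clusters: List[Cluster], gate: float = 50.0) -> None:
    """Link this frame's clusters to existing tracks using spatial proximity
    between consecutive frames; unmatched clusters start new tracks."""
    unmatched = list(clusters)
    for track in tracks:
        if not track.clusters or not unmatched:
            continue
        px, py = track.clusters[-1].centroid
        best = min(unmatched,
                   key=lambda c: (c.centroid[0] - px) ** 2 + (c.centroid[1] - py) ** 2)
        bx, by = best.centroid
        if (bx - px) ** 2 + (by - py) ** 2 <= gate ** 2:
            track.clusters.append(best)
            unmatched.remove(best)
    tracks.extend(Track(clusters=[c]) for c in unmatched)


def consolidate(tracks: List[Track], min_length: int = 2) -> List[Track]:
    """Consolidation step: drop very short tracks to produce refined tracks."""
    return [t for t in tracks if len(t.clusters) >= min_length]


def run_pipeline(frames: List[List[Detection]]) -> List[Track]:
    tracks: List[Track] = []
    for detections in frames:          # frame-by-frame processing
        clusters = cluster_frame(detections)
        associate(tracks, clusters)
    return consolidate(tracks)


if __name__ == "__main__":
    # Two detections of one drifting object over three frames, plus a
    # one-off spurious detection that consolidation removes.
    frames = [
        [Detection(0, 100, 100), Detection(0, 105, 102)],
        [Detection(1, 110, 104), Detection(1, 400, 50)],
        [Detection(2, 118, 108)],
    ]
    for i, tr in enumerate(run_pipeline(frames)):
        print(f"track {i}: {[c.centroid for c in tr.clusters]}")
```

Running the example merges the two nearby detections of the drifting object into a single refined track and discards the isolated spurious detection, mirroring, under the stated assumptions, the clustering, association, and consolidation steps named in the abstract.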