Patents by Inventor Yun Zhai

Yun Zhai has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9104919
    Abstract: Multiple discrete objects within a scene image captured by a single camera are distinguished as un-labeled object blobs from a background model within a first frame of a video data input. Object position, object appearance and/or object size attributes are determined for each of the blobs, and costs of assigning each un-labeled blob to each of the existing object tracks are determined as a function of the determined attributes. The un-labeled object blob that has the lowest cost of association with any of the existing object tracks is labeled with the label of the track having that lowest cost, that track is removed from consideration for labeling the remaining un-labeled object blobs, and the process is iteratively repeated until each of the track labels has been used to label one of the un-labeled blobs.
    Type: Grant
    Filed: October 6, 2014
    Date of Patent: August 11, 2015
    Assignee: International Business Machines Corporation
    Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Yun Zhai
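A minimal sketch of the greedy lowest-cost label assignment described in patent 9104919 above, assuming simple dictionary blobs and tracks and an illustrative position-plus-size cost; the weights and cost form are placeholders, not the patented method.

```python
# Greedy label assignment sketch (illustrative assumptions throughout).
import math

def assignment_cost(blob, track, w_pos=1.0, w_size=0.5):
    """Cost of associating an un-labeled blob with an existing track,
    combining position distance and size difference."""
    dx = blob["x"] - track["x"]
    dy = blob["y"] - track["y"]
    return w_pos * math.hypot(dx, dy) + w_size * abs(blob["area"] - track["area"])

def label_blobs(blobs, tracks):
    """Repeatedly pick the (blob, track) pair with the globally lowest
    cost, assign the track's label, and retire that track."""
    blobs, tracks = list(blobs), list(tracks)
    labels = {}
    while blobs and tracks:
        b, t = min(((b, t) for b in blobs for t in tracks),
                   key=lambda bt: assignment_cost(*bt))
        labels[b["id"]] = t["label"]
        blobs.remove(b)
        tracks.remove(t)
    return labels

# Toy example with two blobs and two existing tracks.
blobs = [{"id": 1, "x": 10, "y": 12, "area": 400},
         {"id": 2, "x": 90, "y": 40, "area": 900}]
tracks = [{"label": "car_A", "x": 12, "y": 11, "area": 410},
          {"label": "car_B", "x": 88, "y": 42, "area": 880}]
print(label_blobs(blobs, tracks))  # {1: 'car_A', 2: 'car_B'}
```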
  • Patent number: 9082201
    Abstract: A computer receives a first set of spectral information for a first surface, wherein the first set of spectral information includes a pixel count for each color value of a range of color values, with regard to each color, measured at time one. The computer determines, with regard to the first set, whether dispersion of the pixel count across the range of color values, with regard to each color, exceeds a first threshold value. The computer determines, with regard to the first set, a surface contamination level based on at least whether the dispersion of the pixel count across the range of color values, with regard to each color, exceeds the first threshold value.
    Type: Grant
    Filed: January 4, 2013
    Date of Patent: July 14, 2015
    Assignee: International Business Machines Corporation
    Inventors: Ira L. Allen, Rogerio S. Feris, Yun Zhai
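A rough sketch of the per-channel dispersion test described in patent 9082201 above. Treating dispersion as the standard deviation of the color-value distribution and using a fixed threshold are assumptions for illustration only.

```python
# Surface-contamination sketch based on histogram dispersion (assumed model).
import numpy as np

def channel_dispersion(channel, bins=256):
    """How widely the per-value pixel counts are spread across the 0..255
    color range: the standard deviation of the value distribution implied
    by the histogram of pixel counts."""
    counts, edges = np.histogram(channel, bins=bins, range=(0, 256))
    centers = (edges[:-1] + edges[1:]) / 2.0
    probs = counts / counts.sum()
    mean = (probs * centers).sum()
    return float(np.sqrt((probs * (centers - mean) ** 2).sum()))

def contamination_level(image_rgb, threshold=20.0):
    """Crude 0..3 contamination level: the number of color channels whose
    dispersion exceeds the threshold."""
    return sum(1 for c in range(3)
               if channel_dispersion(image_rgb[..., c]) > threshold)

# Toy example: a noisy surface image versus a nearly uniform one.
rng = np.random.default_rng(0)
noisy = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
flat = np.full((64, 64, 3), 128, dtype=np.uint8)
print(contamination_level(noisy), contamination_level(flat))  # 3 0
```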
  • Patent number: 9070020
    Abstract: Foreground feature data and motion feature data are determined for frames of video data acquired from a train track area region of interest. The frames are labeled as "train present" if the determined foreground feature data value meets a threshold value, else as "train absent"; and as "motion present" if the motion feature data meets a motion threshold, else as "static." The labels are used to classify segments of the video data comprising groups of consecutive video frames: as within a "no train present" segment for groups with "train absent" and "static" labels; within a "train present and in transition" segment for groups with "train present" and "motion present" labels; and within a "train present and stopped" segment for groups with "train present" and "static" labels. The presence or motion state of a train at a time of inquiry is thereby determined from the respective segment classification.
    Type: Grant
    Filed: August 21, 2012
    Date of Patent: June 30, 2015
    Assignee: International Business Machines Corporation
    Inventors: Russell P. Bobbitt, Rogerio S. Feris, Yun Zhai
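A simplified sketch of the frame labeling and segment classification described in patent 9070020 above; the feature scores, thresholds, and the handling of the uncovered "motion with no train" combination are illustrative assumptions.

```python
# Train presence/motion segmentation sketch (assumed scores and thresholds).
from itertools import groupby

def label_frame(foreground_score, motion_score,
                train_threshold=0.4, motion_threshold=0.1):
    presence = "train present" if foreground_score >= train_threshold else "train absent"
    motion = "motion present" if motion_score >= motion_threshold else "static"
    return presence, motion

def classify_segments(frame_features):
    """Group consecutive frames with the same labels into classified segments."""
    labels = [label_frame(f, m) for f, m in frame_features]
    segments = []
    for (presence, motion), group in groupby(labels):
        length = len(list(group))
        if presence == "train absent" and motion == "static":
            kind = "no train present"
        elif presence == "train present" and motion == "motion present":
            kind = "train present and in transition"
        elif presence == "train present" and motion == "static":
            kind = "train present and stopped"
        else:
            kind = "unclassified"  # motion with no train; treated as noise here
        segments.append((kind, length))
    return segments

# Per-frame (foreground, motion) feature values for a short clip.
clip = ([(0.0, 0.0)] * 3 + [(0.9, 0.8)] * 4 + [(0.9, 0.0)] * 5
        + [(0.9, 0.7)] * 2 + [(0.0, 0.0)] * 3)
print(classify_segments(clip))
```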
  • Patent number: 9069104
    Abstract: A computer generates a three dimensional map of a pathway area using a plurality of overhead images. The computer determines a forecasted weather pattern to occur in the pathway area. The computer analyzes the three dimensional map and the forecasted weather pattern to predict one or more violations of the pathway. The computer generates a priority for the one or more predicted violations of the pathway. The computer generates a plan for pathway management of the pathway area.
    Type: Grant
    Filed: December 12, 2012
    Date of Patent: June 30, 2015
    Assignee: International Business Machines Corporation
    Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Yun Zhai
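A very high-level sketch of the processing sequence named in patent 9069104 above; every helper passed in is a hypothetical placeholder for the components the abstract names.

```python
# Pathway-management pipeline sketch (all helpers are hypothetical stand-ins).
def pathway_management_plan(overhead_images, pathway_area,
                            build_3d_map, forecast_weather,
                            predict_violations, prioritize, plan_management):
    """Chain the steps: 3-D map -> weather forecast -> predicted violations
    -> prioritized violations -> management plan."""
    terrain = build_3d_map(overhead_images)
    weather = forecast_weather(pathway_area)
    violations = predict_violations(terrain, weather)
    ranked = prioritize(violations)
    return plan_management(pathway_area, ranked)

# Toy usage with placeholder components.
plan = pathway_management_plan(
    overhead_images=["tile_0.png", "tile_1.png"],
    pathway_area="corridor-7",
    build_3d_map=lambda imgs: {"tiles": len(imgs)},
    forecast_weather=lambda area: {"wind_mph": 45},
    predict_violations=lambda terrain, weather: (
        ["vegetation encroaching on pathway"] if weather["wind_mph"] > 40 else []),
    prioritize=lambda violations: sorted(violations),
    plan_management=lambda area, ranked: {"area": area, "work_orders": ranked},
)
print(plan)
```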
  • Publication number: 20150178570
    Abstract: Foreground object image features are extracted from input video via application of a background subtraction mask, and optical flow image features are extracted from a region of the input video image data defined by the extracted foreground object image features. If estimated movement features indicate that the underlying object is in motion, a dominant moving direction of the underlying object is determined. If the dominant moving direction is parallel to an orientation of the second, crossed thoroughfare, an event alarm indicating that a static object is blocking travel on the crossing second thoroughfare is not generated. If the estimated movement features indicate that the underlying object is static, or that its determined dominant moving direction is not parallel to the second thoroughfare, an appearance of the foreground object region is determined and a static-ness timer is run while the foreground object region comprises the extracted foreground object image features.
    Type: Application
    Filed: March 5, 2015
    Publication date: June 25, 2015
    Inventors: Rogerio S. Feris, Yun Zhai
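A condensed sketch of the decision flow described in publication 20150178570 above (and in patent 9008359 below); the optical-flow averaging, angle tolerance, motion threshold, and timer limit are illustrative assumptions.

```python
# Blocked-thoroughfare alarm sketch (assumed thresholds and helpers).
import math

PARALLEL_TOLERANCE_DEG = 20.0

def is_parallel(direction_deg, thoroughfare_deg, tol=PARALLEL_TOLERANCE_DEG):
    """True when the dominant moving direction is (anti)parallel to the
    crossing thoroughfare's orientation, within a tolerance."""
    diff = abs(direction_deg - thoroughfare_deg) % 180.0
    return min(diff, 180.0 - diff) <= tol

def should_alarm(flow_vectors, thoroughfare_deg, static_seconds, timer_limit=30.0):
    """flow_vectors: (dx, dy) optical-flow samples in the foreground region.
    Returns True when a static (or cross-moving) object has persisted past
    the static-ness timer limit."""
    speed = sum(math.hypot(dx, dy) for dx, dy in flow_vectors) / max(len(flow_vectors), 1)
    if speed > 1.0:  # the object appears to be in motion
        dx = sum(v[0] for v in flow_vectors)
        dy = sum(v[1] for v in flow_vectors)
        direction_deg = math.degrees(math.atan2(dy, dx))
        if is_parallel(direction_deg, thoroughfare_deg):
            return False  # normal travel along the crossing thoroughfare
    # Static, or moving across rather than along the thoroughfare:
    # the static-ness timer keeps running and the alarm fires when it expires.
    return static_seconds >= timer_limit

print(should_alarm([(5.0, 0.2)] * 10, thoroughfare_deg=0.0, static_seconds=45.0))  # False
print(should_alarm([(0.1, 0.1)] * 10, thoroughfare_deg=0.0, static_seconds=45.0))  # True
```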
  • Patent number: 9064325
    Abstract: Multi-mode video event indexing includes determining a quality of object distinctiveness with respect to images from a video stream input. A high-quality analytic mode is selected from multiple modes and applied to video input images via a hardware device to determine object activity within the video input images if the determined level of detected quality of object distinctiveness meets a threshold level of quality, else a low-quality analytic mode is selected and applied to the video input images via a hardware device to determine object activity within the video input images, wherein the low-quality analytic mode is different from the high-quality analytic mode.
    Type: Grant
    Filed: August 21, 2013
    Date of Patent: June 23, 2015
    Assignee: International Business Machines Corporation
    Inventors: Russell P. Bobbitt, Lisa M. Brown, Rogerio S. Feris, Arun Hampapur, Yun Zhai
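A bare-bones sketch of the mode-selection step described in patent 9064325 above; the distinctiveness score, threshold, and the two toy analytics are placeholder assumptions.

```python
# Multi-mode video event indexing sketch (placeholder analytics and score).
def index_events(frames, distinctiveness, high_quality_analytic,
                 low_quality_analytic, quality_threshold=0.6):
    """Pick the analytic mode from the measured quality of object
    distinctiveness, then run it over the video frames."""
    analytic = (high_quality_analytic if distinctiveness >= quality_threshold
                else low_quality_analytic)
    return [analytic(frame) for frame in frames]

# Toy analytics: the high-quality mode pretends to count objects,
# the low-quality mode only reports coarse activity.
def high_quality(frame):
    return {"mode": "high", "objects": frame.count("x")}

def low_quality(frame):
    return {"mode": "low", "activity": "x" in frame}

print(index_events(["xx.", "..x"], distinctiveness=0.8,
                   high_quality_analytic=high_quality, low_quality_analytic=low_quality))
print(index_events(["xx.", "..x"], distinctiveness=0.3,
                   high_quality_analytic=high_quality, low_quality_analytic=low_quality))
```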
  • Publication number: 20150154457
    Abstract: Automatic object retrieval from input video is based on learned, complementary detectors created for each of a plurality of different motionlet clusters. The motionlet clusters are partitioned from a dataset of training vehicle images as a function of determining that vehicles within each of the scenes of the images in each cluster share similar two-dimensional motion direction attributes within their scenes. To train the complementary detectors, a first detector is trained on motion blobs of vehicle objects detected and collected within each of the training dataset vehicle images within the motionlet cluster via a background modeling process; a second detector is trained on each of the training dataset vehicle images within the motionlet cluster that have motion blobs of the vehicle objects but are misclassified by the first detector; and the training repeats until all of the training dataset vehicle images have been eliminated as false positives or correctly classified.
    Type: Application
    Filed: February 12, 2015
    Publication date: June 4, 2015
    Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Yun Zhai
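A schematic sketch of the complementary-detector training loop described in publication 20150154457 above (and in patent 9002060 below); train_detector and classify are hypothetical stand-ins for a real detector pipeline operating on one motionlet cluster.

```python
# Complementary-detector training sketch (hypothetical helpers).
def train_complementary_detectors(cluster_images, train_detector, classify,
                                  max_detectors=10):
    """cluster_images: training images from one motionlet cluster, each with
    motion blobs of vehicle objects. Keep training new detectors on the
    images the previous detectors still misclassify."""
    detectors = []
    remaining = list(cluster_images)
    while remaining and len(detectors) < max_detectors:
        detector = train_detector(remaining)
        detectors.append(detector)
        # Keep only the images this detector misclassifies.
        remaining = [img for img in remaining if not classify(detector, img)]
    return detectors

# Toy stand-ins: a "detector" memorizes half of what it was trained on.
def toy_train(images):
    return set(images[::2])

def toy_classify(detector, image):
    return image in detector

images = [f"img_{i}" for i in range(8)]
print(len(train_complementary_detectors(images, toy_train, toy_classify)))  # 4
```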
  • Patent number: 9008359
    Abstract: Foreground object image features are extracted from input video via application of a background subtraction mask, and optical flow image features are extracted from a region of the input video image data defined by the extracted foreground object image features. If estimated movement features indicate that the underlying object is in motion, a dominant moving direction of the underlying object is determined. If the dominant moving direction is parallel to an orientation of the second, crossed thoroughfare, an event alarm indicating that a static object is blocking travel on the crossing second thoroughfare is not generated. If the estimated movement features indicate that the underlying object is static, or that its determined dominant moving direction is not parallel to the second thoroughfare, an appearance of the foreground object region is determined and a static-ness timer is run while the foreground object region comprises the extracted foreground object image features.
    Type: Grant
    Filed: June 28, 2012
    Date of Patent: April 14, 2015
    Assignee: International Business Machines Corporation
    Inventors: Rogerio S. Feris, Yun Zhai
  • Publication number: 20150098661
    Abstract: A system comprises an input component, a feature extractor, an object classifier, an adaptation component and a calibration tool. The input component is configured to receive one or more images, and the feature extractor is configured to extract features for one or more objects in the one or more images, the extracted features comprising at least one view-independent feature. The object classifier is configured to classify the one or more objects based at least in part on the extracted features and one or more object classification parameters, and the adaptation component is configured to adjust the classification of at least one of the objects based on one or more contextual parameters. The calibration tool is configured to adjust one or more of the object classification parameters based on likelihoods for characteristics associated with one or more object classes.
    Type: Application
    Filed: April 15, 2014
    Publication date: April 9, 2015
    Applicant: International Business Machines Corporation
    Inventors: Lisa Marie Brown, Longbin Chen, Rogerio Schmidt Feris, Arun Hampapur, Yun Zhai
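A minimal structural sketch of the components named in publication 20150098661 above (input, feature extraction, classification, contextual adaptation, calibration); the class name, features, thresholds, and the toy calibration rule are all assumptions.

```python
# Object-classification pipeline sketch (all names and rules are assumed).
from dataclasses import dataclass, field

@dataclass
class ObjectClassifierPipeline:
    classification_params: dict = field(default_factory=lambda: {"size_split": 50.0})
    contextual_params: dict = field(default_factory=lambda: {"sidewalk_region": True})

    def extract_features(self, obj):
        """Feature extractor: includes at least one view-independent feature (area)."""
        return {"area": float(obj["w"] * obj["h"]), "aspect": obj["w"] / obj["h"]}

    def classify(self, features):
        """Object classifier driven by the classification parameters."""
        return "vehicle" if features["area"] >= self.classification_params["size_split"] else "person"

    def adapt(self, label, features):
        """Adaptation component: adjust a label using contextual parameters."""
        if self.contextual_params.get("sidewalk_region") and features["area"] < 80.0:
            return "person"
        return label

    def calibrate(self, class_size_likelihoods):
        """Calibration tool: move the size split toward the midpoint of the
        per-class expected sizes."""
        person = class_size_likelihoods["person"]
        vehicle = class_size_likelihoods["vehicle"]
        self.classification_params["size_split"] = (person + vehicle) / 2.0

pipeline = ObjectClassifierPipeline()
features = pipeline.extract_features({"w": 8, "h": 9})          # area 72
print(pipeline.adapt(pipeline.classify(features), features))     # "person" after adaptation
pipeline.calibrate({"person": 30.0, "vehicle": 150.0})
print(pipeline.classification_params["size_split"])              # 90.0
```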
  • Patent number: 9002060
    Abstract: Automatic object retrieval from input video is based on learned, complementary detectors created for each of a plurality of different motionlet clusters. The motionlet clusters are partitioned from a dataset of training vehicle images as a function of determining that vehicles within each of the scenes of the images in each cluster share similar two-dimensional motion direction attributes within their scenes. To train the complementary detectors, a first detector is trained on motion blobs of vehicle objects detected and collected within each of the training dataset vehicle images within the motionlet cluster via a background modeling process; a second detector is trained on each of the training dataset vehicle images within the motionlet cluster that have motion blobs of the vehicle objects but are misclassified by the first detector; and the training repeats until all of the training dataset vehicle images have been eliminated as false positives or correctly classified.
    Type: Grant
    Filed: June 28, 2012
    Date of Patent: April 7, 2015
    Assignee: International Business Machines Corporation
    Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Yun Zhai
  • Publication number: 20150062340
    Abstract: A method and system are provided for demonstrating, from within a vehicle, compliance with a requirement of a high occupancy lane so as to obtain a reduced toll charge for the vehicle. The system includes a housing, an infrared camera within the housing, a GPS unit, a transceiver and a control within the housing. The infrared camera images one or more people in the vehicle. The transceiver detects an RF signal indicating that the vehicle is located at or near a toll booth for the high occupancy lane. The control triggers the infrared camera to image the one or more people and transmit a current time, a current location of the vehicle from the GPS unit, and the triggered image of the one or more people, to a server to demonstrate compliance with the requirement of the high occupancy lane for the reduced toll charge for the vehicle.
    Type: Application
    Filed: September 3, 2013
    Publication date: March 5, 2015
    Applicant: International Business Machines Corporation
    Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra Pankanti, Yun Zhai
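A high-level sketch of the trigger-and-report flow described in publication 20150062340 above; the hardware interfaces (GPS read, infrared capture, server upload) are hypothetical stand-ins.

```python
# HOV-lane compliance reporting sketch (hypothetical device interfaces).
import time

def handle_toll_booth_signal(read_gps, capture_infrared_image, send_to_server):
    """Called when the transceiver detects the toll-booth RF signal:
    capture an occupant image and report time, location, and image."""
    report = {
        "time": time.time(),
        "location": read_gps(),                    # (latitude, longitude) from the GPS unit
        "occupant_image": capture_infrared_image() # infrared image of the occupants
    }
    send_to_server(report)
    return report

# Toy stand-ins for the hardware and network interfaces.
report = handle_toll_booth_signal(
    read_gps=lambda: (40.72, -74.00),
    capture_infrared_image=lambda: b"<ir-frame-bytes>",
    send_to_server=lambda payload: None,
)
print(report["location"])
```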
  • Publication number: 20150055830
    Abstract: Field of view overlap among multiple cameras is automatically determined as a function of the temporal overlap of object tracks determined within their fields-of-view. Object tracks with the highest similarity value are assigned into pairs, and portions of the assigned object track pairs having a temporally overlapping period of time are determined. Scene entry points are determined from object locations on the tracks at a beginning of the temporally overlapping period of time, and scene exit points from object locations at an ending of the temporally overlapping period of time. Boundary lines for the overlapping fields-of-view portions within the corresponding camera fields-of-view are defined as a function of the determined entry and exit points in their respective fields-of-view.
    Type: Application
    Filed: November 4, 2014
    Publication date: February 26, 2015
    Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Yun Zhai
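A simplified sketch of the entry/exit point extraction described in publication 20150055830 above, applied to a single already-paired track; the (t, x, y) track format is an assumption.

```python
# Scene entry/exit extraction sketch for one paired track (assumed format).
def overlap_boundary_points(track_a, track_b):
    """Each track is a time-sorted list of (t, x, y) observations from one
    camera. Returns, per camera, the object location at the start (scene
    entry) and end (scene exit) of the temporally overlapping period."""
    start = max(track_a[0][0], track_b[0][0])
    end = min(track_a[-1][0], track_b[-1][0])
    if start > end:
        return None  # the tracks never overlap in time

    def at_edges(track):
        inside = [(x, y) for t, x, y in track if start <= t <= end]
        return {"entry": inside[0], "exit": inside[-1]}

    return {"camera_a": at_edges(track_a), "camera_b": at_edges(track_b)}

# Toy tracks from two cameras observing the same object.
track_a = [(0, 10, 5), (1, 20, 5), (2, 30, 5), (3, 40, 5)]
track_b = [(2, 100, 50), (3, 110, 50), (4, 120, 50)]
print(overlap_boundary_points(track_a, track_b))
```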
  • Patent number: 8948454
    Abstract: A method and system for training a special object detector to distinguish a foreground object appearing in a sequence of frames for a target domain. The sequence of frames depicts motion of the foreground object in a non-uniform background. The foreground object is detected in a high-confidence subwindow of an initial frame of the sequence, which includes computing a measure of confidence that the high-confidence subwindow includes the foreground object and determining that the measure of confidence exceeds a specified confidence threshold. The foreground object is tracked in respective positive subwindows of subsequent frames appearing after the initial frame. The subsequent frames are within a specified short period of time. The positive subwindows are used to train the special object detector to detect the foreground object in the target domain. The positive subwindows include the subwindow of the initial frame and the respective subwindows of the subsequent frames.
    Type: Grant
    Filed: January 2, 2013
    Date of Patent: February 3, 2015
    Assignee: International Business Machines Corporation
    Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Yun Zhai
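A schematic sketch of the positive-sample harvesting described in patent 8948454 above; detect, track, and train are hypothetical stand-ins for real detector and tracker components.

```python
# Self-training detector sketch (hypothetical detect/track/train helpers).
def harvest_and_train(frames, detect, track, train, fps=30.0,
                      confidence_threshold=0.8, max_seconds=2.0):
    """detect(frame) -> (subwindow, confidence); track(frame, subwindow)
    -> subwindow in the next frame; train(subwindows) -> detector."""
    subwindow, confidence = detect(frames[0])
    if confidence <= confidence_threshold:
        return None  # no high-confidence seed, nothing to train on

    positives = [subwindow]
    max_frames = int(max_seconds * fps)  # only a short period of time
    for frame in frames[1:1 + max_frames]:
        subwindow = track(frame, subwindow)
        positives.append(subwindow)
    return train(positives)

# Toy stand-ins: frames are integers, subwindows are (x, y, w, h) boxes.
detector = harvest_and_train(
    frames=list(range(10)),
    detect=lambda frame: ((5, 5, 20, 20), 0.95),
    track=lambda frame, box: (box[0] + 1, box[1], box[2], box[3]),
    train=lambda boxes: {"positives": len(boxes)},
)
print(detector)  # {'positives': 10}
```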
  • Publication number: 20150023560
    Abstract: Multiple discrete objects within a scene image captured by a single camera are distinguished as un-labeled object blobs from a background model within a first frame of a video data input. Object position, object appearance and/or object size attributes are determined for each of the blobs, and costs of assigning each un-labeled blob to each of the existing object tracks are determined as a function of the determined attributes. The un-labeled object blob that has the lowest cost of association with any of the existing object tracks is labeled with the label of the track having that lowest cost, that track is removed from consideration for labeling the remaining un-labeled object blobs, and the process is iteratively repeated until each of the track labels has been used to label one of the un-labeled blobs.
    Type: Application
    Filed: October 6, 2014
    Publication date: January 22, 2015
    Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Yun Zhai
  • Publication number: 20140376775
    Abstract: Objects within two-dimensional video data are modeled by three-dimensional models as a function of object type and motion through manually calibrating a two-dimensional image to the three spatial dimensions of a three-dimensional modeling cube. Calibrated three-dimensional locations of an object in motion in the two-dimensional image field of view of a video data input are determined and used to determine a heading direction of the object as a function of the camera calibration and determined movement between the determined three-dimensional locations. The two-dimensional object image is replaced in the video data input with an object-type three-dimensional polygonal model having a projected bounding box that best matches a bounding box of an image blob, the model oriented in the determined heading direction. The bounding box of the replacing model is then scaled to fit the object image blob bounding box, and rendered with extracted image features.
    Type: Application
    Filed: September 5, 2014
    Publication date: December 25, 2014
    Inventors: Ankur Datta, Rogerio S. Feris, Yun Zhai
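A small sketch of the heading-direction and bounding-box scaling steps described in publication 20140376775 above (and in patent 8842163 below); the ground-plane heading formula and per-axis scale factors are illustrative simplifications.

```python
# Heading and bounding-box scaling sketch (simplified geometry).
import math

def heading_degrees(prev_xyz, curr_xyz):
    """Heading on the ground plane from two calibrated 3-D locations of the
    object, taken at consecutive times."""
    dx = curr_xyz[0] - prev_xyz[0]
    dz = curr_xyz[2] - prev_xyz[2]
    return math.degrees(math.atan2(dz, dx))

def scale_to_blob(model_box, blob_box):
    """Scale factors that fit the model's projected bounding box
    (width, height) to the image blob's bounding box."""
    return blob_box[0] / model_box[0], blob_box[1] / model_box[1]

print(heading_degrees((0.0, 0.0, 0.0), (3.0, 0.0, 3.0)))  # 45.0
print(scale_to_blob((40.0, 20.0), (80.0, 50.0)))          # (2.0, 2.5)
```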
  • Patent number: 8913791
    Abstract: Field of view overlap among multiple cameras is automatically determined as a function of the temporal overlap of object tracks determined within their fields-of-view. Object tracks with the highest similarity value are assigned into pairs, and portions of the assigned object track pairs having a temporally overlapping period of time are determined. Scene entry points are determined from object locations on the tracks at a beginning of the temporally overlapping period of time, and scene exit points from object locations at an ending of the temporally overlapping period of time. Boundary lines for the overlapping fields-of-view portions within the corresponding camera fields-of-view are defined as a function of the determined entry and exit points in their respective fields-of-view.
    Type: Grant
    Filed: March 28, 2013
    Date of Patent: December 16, 2014
    Assignee: International Business Machines Corporation
    Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Yun Zhai
  • Patent number: 8885885
    Abstract: Multiple discrete objects within a scene image captured by a single camera are distinguished as un-labeled object blobs from a background model within a first frame of a video data input. Object position and object appearance and/or object size attributes are determined for each of the blobs, and costs of assigning each un-labeled blob to each of the existing object tracks are determined as a function of the determined attributes and combined to generate respective combination costs. The un-labeled object blob that has the lowest combined cost of association with any of the existing object tracks is labeled with the label of the track having that lowest combined cost, that track is removed from consideration for labeling the remaining un-labeled object blobs, and the process is iteratively repeated until each of the track labels has been used to label one of the un-labeled blobs.
    Type: Grant
    Filed: October 5, 2012
    Date of Patent: November 11, 2014
    Assignee: International Business Machines Corporation
    Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Yun Zhai
  • Publication number: 20140293043
    Abstract: A camera is positioned at a fixed vertical height above a reference plane, with the axis of the camera lens at an acute angle with respect to a perpendicular of the reference plane. One or more processors receive images of different people. The vertical measurement values of the images of different people are determined. The one or more processors determine a first statistical measure associated with a statistical distribution of the vertical measurement values. The known heights of people from a known statistical distribution of heights of people are transformed to normalized measurements, based in part on a focal length of the camera lens, the angle of the camera, and a division operator in an objective function of differences between the normalized measurements and the vertical measurement values. The fixed vertical height of the camera is determined, based at least on minimizing the objective function.
    Type: Application
    Filed: March 28, 2013
    Publication date: October 2, 2014
    Applicant: International Business Machines Corporation
    Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Yun Zhai
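A toy sketch of the calibration idea described in publication 20140293043 above: choose the camera height that minimizes an objective over differences between measured vertical values and known person heights pushed through a projection model. The single-term projection model and the grid search used here are deliberate simplifications, not the patented formulation.

```python
# Camera-height calibration sketch (simplified projection model, grid search).
import math

def predicted_pixels(person_height_m, camera_height_m, focal_px, tilt_rad):
    """Assumed projection: image height scales with focal length and person
    height, and inversely with camera height."""
    return focal_px * person_height_m * math.cos(tilt_rad) / camera_height_m

def estimate_camera_height(measured_px, known_heights_m, focal_px, tilt_rad,
                           candidates=None):
    """Grid-search the camera height minimizing the sum of squared differences
    between predicted and measured vertical measurement values."""
    candidates = candidates or [h / 10.0 for h in range(20, 101)]  # 2.0 m .. 10.0 m

    def objective(camera_h):
        return sum((predicted_pixels(h, camera_h, focal_px, tilt_rad) - v) ** 2
                   for h, v in zip(known_heights_m, measured_px))

    return min(candidates, key=objective)

# Synthetic example: measurements generated with a true camera height of 4.0 m.
focal, tilt, true_height = 800.0, math.radians(30.0), 4.0
people = [1.6, 1.7, 1.8]
pixels = [predicted_pixels(h, true_height, focal, tilt) for h in people]
print(estimate_camera_height(pixels, people, focal, tilt))  # 4.0
```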
  • Publication number: 20140294231
    Abstract: Field of view overlap among multiple cameras is automatically determined as a function of the temporal overlap of object tracks determined within their fields-of-view. Object tracks with the highest similarity value are assigned into pairs, and portions of the assigned object track pairs having a temporally overlapping period of time are determined. Scene entry points are determined from object locations on the tracks at a beginning of the temporally overlapping period of time, and scene exit points from object locations at an ending of the temporally overlapping period of time. Boundary lines for the overlapping fields-of-view portions within the corresponding camera fields-of-view are defined as a function of the determined entry and exit points in their respective fields-of-view.
    Type: Application
    Filed: March 28, 2013
    Publication date: October 2, 2014
    Applicant: International Business Machines Corporation
    Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Yun Zhai
  • Patent number: 8842163
    Abstract: Objects within two-dimensional (2D) video data are modeled by three-dimensional (3D) models as a function of object type and motion through manually calibrating a 2D image to the three spatial dimensions of a 3D modeling cube. Calibrated 3D locations of an object in motion in the 2D image field of view of a video data input are computed and used to determine a heading direction of the object as a function of the camera calibration and determined movement between the computed 3D locations. The 2D object image is replaced in the video data input with an object-type 3D polygonal model having a projected bounding box that best matches a bounding box of an image blob, the model oriented in the determined heading direction. The bounding box of the replacing model is then scaled to fit the object image blob bounding box, and rendered with extracted image features.
    Type: Grant
    Filed: June 7, 2011
    Date of Patent: September 23, 2014
    Assignee: International Business Machines Corporation
    Inventors: Ankur Datta, Rogerio S. Feris, Yun Zhai