Patents by Inventor Rogerio S. Feris

Rogerio S. Feris has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20140122470
    Abstract: In response to a query of discernable facial attributes, the locations of distinct and different facial regions are estimated from face image data, each relevant to different attributes. Different features are extracted from the estimated facial regions from database facial images, which are ranked in base layer rankings as a function of relevance of extracted features to attributes relevant to the estimated regions, and in second-layer rankings as a function of combinations of the base layer rankings and relevance of the extracted features to common ones of the attributes relevant to the estimated regions. The images are ranked in relevance to the query as a function of the second-layer rankings.
    Type: Application
    Filed: January 6, 2014
    Publication date: May 1, 2014
    Applicant: International Business Machines Corporation
    Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Daniel A. Vaquero
  • Publication number: 20140098989
    Abstract: Multiple discrete objects within a scene image captured by a single camera are distinguished as un-labeled foreground blobs against a background model within a first frame of a video data input. Object position and object appearance and/or object size attributes are determined for each of the blobs, and costs of assigning each blob to the existing object tracks are determined as a function of the determined attributes and combined to generate respective combined costs. The un-labeled object blob that has the lowest combined cost of association with any of the existing object tracks is labeled with the label of the track having that lowest combined cost, that track is removed from consideration for labeling the remaining un-labeled object blobs, and the process is iteratively repeated until each of the track labels has been used to label one of the un-labeled blobs.
    Type: Application
    Filed: October 5, 2012
    Publication date: April 10, 2014
    Applicant: International Business Machines Corporation
    Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Yun Zhai
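The greedy label-assignment loop this abstract describes can be sketched as follows. This is an illustrative simplification, not the patented method: the cost here combines only position distance and relative size difference, and all names and data shapes are hypothetical.

```python
import math

def associate_blobs(blobs, tracks):
    """Greedily assign existing track labels to un-labeled blobs.

    blobs:  list of dicts with 'pos' (x, y) and 'size' (area).
    tracks: dict mapping track label -> dict with 'pos' and 'size'.
    Returns a dict mapping blob index -> assigned track label.
    """
    assignments = {}
    remaining = dict(tracks)          # tracks still available for labeling
    unassigned = set(range(len(blobs)))
    while unassigned and remaining:
        # Combined cost: position distance plus relative size difference.
        i, label, _ = min(
            ((i, label,
              math.dist(blobs[i]['pos'], t['pos'])
              + abs(blobs[i]['size'] - t['size']) / max(t['size'], 1))
             for i in unassigned for label, t in remaining.items()),
            key=lambda c: c[2])
        assignments[i] = label        # lowest combined cost wins the label
        unassigned.discard(i)
        remaining.pop(label)          # track removed from further consideration
    return assignments
```

Each pass labels exactly one blob and retires its track, matching the iterative repeat-until-exhausted structure of the claim.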
  • Publication number: 20140098221
    Abstract: An approach for re-identifying an object in a first test image is presented. Brightness transfer functions (BTFs) between respective pairs of training images are determined. Respective similarity measures are determined between the first test image and each of the training images captured by the first camera (first training images). A weighted brightness transfer function (WBTF) is determined by combining the BTFs weighted by weights of the first training images. The weights are based on the similarity measures. The first test image is transformed by the WBTF to better match one of the training images captured by the second camera. Another test image, captured by the second camera, is identified because it is closer in appearance to the transformed test image than other test images captured by the second camera. An object in the identified test image is a re-identification of the object in the first test image.
    Type: Application
    Filed: October 9, 2012
    Publication date: April 10, 2014
    Applicant: International Business Machines Corporation
    Inventors: Lisa M. Brown, Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti
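A minimal sketch of the two building blocks named in this abstract: a brightness transfer function (BTF) estimated by cumulative-histogram matching between an image pair, and a weighted BTF (WBTF) that combines per-pair BTFs using similarity-derived weights. The histogram-matching estimator and function names are illustrative assumptions, not the patent's exact formulation.

```python
import numpy as np

def btf(src, dst, levels=256):
    """Brightness transfer function mapping src brightness levels onto
    dst's, estimated by matching the two images' cumulative histograms."""
    src_cdf = np.cumsum(np.bincount(src.ravel(), minlength=levels)) / src.size
    dst_cdf = np.cumsum(np.bincount(dst.ravel(), minlength=levels)) / dst.size
    # For each source level, find the dst level with the closest CDF value.
    return np.searchsorted(dst_cdf, src_cdf).clip(0, levels - 1).astype(np.uint8)

def weighted_btf(btfs, similarities):
    """Combine per-training-pair BTFs into one WBTF, weighting each by the
    similarity of its training image to the test image."""
    w = np.asarray(similarities, dtype=float)
    w /= w.sum()
    return (np.stack(btfs) * w[:, None]).sum(axis=0).round().astype(np.uint8)
```

The WBTF is then applied as a lookup table to the first test image before appearance matching against the second camera's images.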
  • Patent number: 8675917
    Abstract: Methods and apparatus are provided for improved abandoned object recognition using pedestrian detection. An abandoned object is detected in one or more images by determining if one or more detected objects in a foreground of the images comprises a potential abandoned object; applying a trained pedestrian detector to the potential abandoned object to determine if the potential abandoned object comprises at least a portion of a pedestrian; and classifying the potential abandoned object as an abandoned object if the potential abandoned object is not at least a portion of a pedestrian. The trained pedestrian detector is trained using positive training samples comprised of at least portions of human bodies in one or more poses and/or negative training samples comprised of at least portions of abandoned objects.
    Type: Grant
    Filed: October 31, 2011
    Date of Patent: March 18, 2014
    Assignee: International Business Machines Corporation
    Inventors: Lisa M. Brown, Rogerio S. Feris, Frederik C. Kjeldsen, Kristina Scherbaum
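The gating logic of this entry reduces to a single check: a static foreground candidate is only flagged as abandoned when the pedestrian detector rejects it. A minimal sketch, assuming a hypothetical detector interface that returns a confidence score:

```python
def classify_foreground_object(region, pedestrian_detector, threshold=0.5):
    """Label a candidate static foreground region.

    pedestrian_detector: callable returning a confidence in [0, 1] that the
    region contains at least part of a person (hypothetical interface;
    the threshold value is illustrative).
    """
    score = pedestrian_detector(region)
    # Only regions that are NOT pedestrians are flagged as abandoned objects.
    return 'pedestrian' if score >= threshold else 'abandoned_object'
```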
  • Patent number: 8670611
    Abstract: Long-term understanding of background modeling includes determining first and second dimension gradient model derivatives of image brightness data of an image pixel along respective dimensions of two-dimensional, single channel image brightness data of a static image scene. The determined gradients are averaged with previous determined gradients of the image pixels, and with gradients of neighboring pixels as a function of their respective distances to the image pixel, the averaging generating averaged pixel gradient models for each of a plurality of pixels of the video image data of the static image scene that each have mean values and weight values. Background models for the static image scene are constructed as a function of the averaged pixel gradients and weights, wherein the background model pixels are represented by averaged pixel gradient models having similar orientation and magnitude and weights meeting a threshold weight requirement.
    Type: Grant
    Filed: October 24, 2011
    Date of Patent: March 11, 2014
    Assignee: International Business Machines Corporation
    Inventors: Rogerio S. Feris, Yun Zhai
  • Publication number: 20140056476
    Abstract: A moving object tracked within a field of view environment of a two-dimensional data feed of a calibrated video camera is represented by a three-dimensional model. An appropriate three-dimensional mesh-based volumetric model for the object is initialized by using a back-projection of a corresponding two-dimensional image. A texture of the object is projected onto the three-dimensional model, and two-dimensional tracks of the object are upgraded to three-dimensional motion to drive a three-dimensional model.
    Type: Application
    Filed: October 29, 2013
    Publication date: February 27, 2014
    Applicant: International Business Machines Corporation
    Inventors: Lisa M. Brown, Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti
  • Publication number: 20140056479
    Abstract: Foreground feature data and motion feature data are determined for frames of video data acquired from a train track area region of interest. The frames are labeled as “train present” if the determined foreground feature data value meets a threshold value, and otherwise as “train absent”; and as “motion present” if the motion feature data meets a motion threshold, and otherwise as “static.” The labels are used to classify segments of the video data comprising groups of consecutive video frames: within a “no train present” segment for groups with “train absent” and “static” labels; within a “train present and in transition” segment for groups with “train present” and “motion present” labels; and within a “train present and stopped” segment for groups with “train present” and “static” labels. The presence or motion state of a train at a time of inquiry is thereby determined from the respective segment classification.
    Type: Application
    Filed: August 21, 2012
    Publication date: February 27, 2014
    Applicant: International Business Machines Corporation
    Inventors: Russell P. Bobbitt, Rogerio S. Feris, Yun Zhai
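The per-frame labeling and three-way segment classification above can be sketched as a small state function. Threshold names are illustrative, and one assumption is made: the abstract only defines “no train present” for frames labeled both “train absent” and “static,” so the “train absent”/“motion present” combination is folded into “no train present” here.

```python
def label_and_segment(frames, fg_thresh, motion_thresh):
    """frames: iterable of (foreground_value, motion_value) per video frame.
    Returns one segment label per frame, following the three segment
    classes of the abstract."""
    labels = []
    for fg, motion in frames:
        train = fg >= fg_thresh            # "train present" vs "train absent"
        moving = motion >= motion_thresh   # "motion present" vs "static"
        if train and moving:
            labels.append('train present and in transition')
        elif train:
            labels.append('train present and stopped')
        else:
            labels.append('no train present')
    return labels
```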
  • Publication number: 20140050356
    Abstract: Multi-mode video event indexing includes determining a quality of object distinctiveness with respect to images from a video stream input. A high-quality analytic mode is selected from multiple modes and applied to video input images via a hardware device to determine object activity within the video input images if the determined level of detected quality of object distinctiveness meets a threshold level of quality, else a low-quality analytic mode is selected and applied to the video input images via a hardware device to determine object activity within the video input images, wherein the low-quality analytic mode is different from the high-quality analytic mode.
    Type: Application
    Filed: August 21, 2013
    Publication date: February 20, 2014
    Applicant: International Business Machines Corporation
    Inventors: Russell P. Bobbitt, Lisa M. Brown, Rogerio S. Feris, Arun Hampapur, Yun Zhai
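The mode-selection step of this multi-mode indexing scheme can be sketched in a few lines. All callables and the threshold are hypothetical stand-ins; the actual high- and low-quality analytics are not specified by the abstract.

```python
def index_events(frames, distinctiveness_fn, quality_threshold,
                 high_quality_mode, low_quality_mode):
    """Run, per frame, the analytic selected by the measured quality of
    object distinctiveness (all callables are illustrative stand-ins)."""
    results = []
    for frame in frames:
        quality = distinctiveness_fn(frame)
        analytic = (high_quality_mode if quality >= quality_threshold
                    else low_quality_mode)
        results.append(analytic(frame))
    return results
```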
  • Patent number: 8655018
    Abstract: A method, system and computer program product for detecting presence of an object in an image are disclosed. According to an embodiment, a method for detecting a presence of an object in an image comprises: receiving multiple training image samples; determining a set of adaptive features for each training image sample, the set of adaptive features matching the local structure of each training image sample; integrating the sets of adaptive features of the multiple training image samples to generate an adaptive feature pool; determining a general feature based on the adaptive feature pool; and examining the image using a classifier determined based on the general feature to detect the presence of the object.
    Type: Grant
    Filed: January 19, 2012
    Date of Patent: February 18, 2014
    Assignee: International Business Machines Corporation
    Inventors: Rogerio S. Feris, Arun Hampapur, Ying-Li Tian
  • Patent number: 8639689
    Abstract: In response to a query of discernable facial attributes, the locations of distinct and different facial regions are estimated from face image data, each relevant to different attributes. Different features are extracted from the estimated facial regions from database facial images, which are ranked in base layer rankings by matching feature vectors to a base layer ranking sequence as a function of edge weights. Second-layer rankings define second-layer attribute vectors as combinations of the base-layer feature vectors and associated base layer parameter vectors for common attributes, which are matched to a second-layer ranking sequence as a function of edge weights. The images are thus ranked for relevance to the query as a function of the second-layer rankings.
    Type: Grant
    Filed: January 9, 2013
    Date of Patent: January 28, 2014
    Assignee: International Business Machines Corporation
    Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Daniel A. Vaquero
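The two-layer ranking idea of this entry, base-layer rankings per facial region fused into a second-layer ranking, can be illustrated with a simple weighted Borda-style aggregation. This is a deliberate simplification: the patent uses learned edge weights over feature and attribute vectors, whereas the weights here are plain scalars.

```python
def second_layer_rank(base_rankings, weights):
    """Fuse per-region base-layer rankings into a final image ranking.

    base_rankings: list of rankings, each a list of image ids, best first.
    weights: one scalar weight per base ranking (illustrative stand-ins
    for the patent's learned parameters).
    """
    scores = {}
    for ranking, w in zip(base_rankings, weights):
        n = len(ranking)
        for pos, image_id in enumerate(ranking):
            # Borda-style: earlier positions contribute larger scores.
            scores[image_id] = scores.get(image_id, 0.0) + w * (n - pos)
    return sorted(scores, key=scores.get, reverse=True)
```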
  • Patent number: 8630460
    Abstract: A moving object detected and tracked within a field of view environment of a two-dimensional data feed of a calibrated video camera is represented by a three-dimensional model through localizing a centroid of the object and determining an intersection with a ground-plane within the field of view environment. An appropriate three-dimensional mesh-based volumetric model for the object is initialized by using a back-projection of a corresponding two-dimensional image as a function of the centroid and the determined ground-plane intersection. A texture of the object is projected onto the three-dimensional model, and two-dimensional tracks of the object are upgraded to three-dimensional motion to drive a three-dimensional model.
    Type: Grant
    Filed: May 9, 2013
    Date of Patent: January 14, 2014
    Assignee: International Business Machines Corporation
    Inventors: Lisa M. Brown, Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti
  • Publication number: 20140003724
    Abstract: Foreground object image features are extracted from input video via application of a background subtraction mask, and optical flow image features are extracted from a region of the input video image data defined by the extracted foreground object image features. If estimated movement features indicate that the underlying object is in motion, a dominant moving direction of the underlying object is determined. If the dominant moving direction is parallel to an orientation of the second, crossed thoroughfare, an event alarm indicating that a static object is blocking travel on the crossing second thoroughfare is not generated. If the estimated movement features indicate that the underlying object is static, or that its determined dominant moving direction is not parallel to the second thoroughfare, an appearance of the foreground object region is determined and a static-ness timer run while the foreground object region comprises the extracted foreground object image features.
    Type: Application
    Filed: June 28, 2012
    Publication date: January 2, 2014
    Applicant: International Business Machines Corporation
    Inventors: Rogerio S. Feris, Yun Zhai
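The parallel-direction test and the alarm gate described above can be sketched as follows. The angle tolerance and function names are illustrative assumptions; directions and road orientations are taken in degrees.

```python
def is_parallel(direction_deg, road_orientation_deg, tol_deg=15.0):
    """True if a dominant moving direction is (anti)parallel to the crossed
    thoroughfare's orientation, within a tolerance (tol_deg is illustrative)."""
    diff = abs(direction_deg - road_orientation_deg) % 180.0
    return min(diff, 180.0 - diff) <= tol_deg

def should_start_staticness_timer(is_static, direction_deg, road_orientation_deg):
    """Alarm path per the abstract: the timer runs when the object is static,
    or is moving but not along the crossed thoroughfare."""
    return is_static or not is_parallel(direction_deg, road_orientation_deg)
```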
  • Publication number: 20140003708
    Abstract: Automatic object retrieval from input video is based on learned, complementary detectors created for each of a plurality of different motionlet clusters. The motionlet clusters are partitioned from a dataset of training vehicle images as a function of determining that vehicles within each of the scenes of the images in each cluster share similar two-dimensional motion direction attributes within their scenes. To train the complementary detectors, a first detector is trained on motion blobs of vehicle objects detected and collected within each of the training dataset vehicle images within the motionlet cluster via a background modeling process; a second detector is trained on each of the training dataset vehicle images within the motionlet cluster that have motion blobs of the vehicle objects but are misclassified by the first detector; and the training repeats until all of the training dataset vehicle images have been eliminated as false positives or correctly classified.
    Type: Application
    Filed: June 28, 2012
    Publication date: January 2, 2014
    Applicant: International Business Machines Corporation
    Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Yun Zhai
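The repeat-until-exhausted training loop for the complementary detectors can be sketched generically: each new detector is trained on exactly the samples the detectors so far still misclassify. The trainer interface is a hypothetical stand-in for the patent's per-motionlet-cluster detector training.

```python
def train_complementary_detectors(samples, train_fn):
    """Iteratively train detectors so that each new detector focuses on
    samples the previous ones misclassified.

    samples:  list of training samples (e.g. vehicle motion-blob features).
    train_fn: hypothetical trainer; takes samples, returns a classifier
              callable that is True when it correctly fires on a sample.
    """
    detectors = []
    remaining = list(samples)
    while remaining:
        det = train_fn(remaining)
        detectors.append(det)
        # Keep only the samples this detector still gets wrong.
        remaining = [s for s in remaining if not det(s)]
    return detectors
```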
  • Patent number: 8620026
    Abstract: Training data object images are clustered as a function of motion direction attributes and resized from their respective original aspect ratios into the same aspect ratio. Motionlet detectors are learned for each of the sets from features extracted from the resized object blobs. A deformable sliding window is applied to detect an object blob in input video by varying window size, shape or aspect ratio to conform to the shape of the detected input video object blob. A motion direction of an underlying image patch of the detected input video object blob is extracted, and motionlet detectors with similar motion directions are selected and applied. An object is thus detected within the detected blob, and semantic attributes of an underlying image patch are extracted if a motionlet detector fires, the extracted semantic attributes then being available for use in searching for the detected object.
    Type: Grant
    Filed: April 13, 2011
    Date of Patent: December 31, 2013
    Assignee: International Business Machines Corporation
    Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Behjat Siddiquie, Yun Zhai
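The detector-selection step, choosing only the motionlet detectors whose cluster motion direction is close to that of the detected blob's image patch, can be sketched with a wrap-around angular comparison. The pairing format and the 30-degree cutoff are illustrative assumptions.

```python
def select_motionlet_detectors(patch_direction, detectors, max_angle_deg=30.0):
    """Pick the detectors whose cluster motion direction is close to the
    motion direction of the detected blob's image patch.

    detectors: list of (direction_deg, detector) pairs (hypothetical form).
    """
    def angular_diff(a, b):
        # Shortest difference between two angles, handling 360-degree wrap.
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)
    return [det for d, det in detectors
            if angular_diff(d, patch_direction) <= max_angle_deg]
```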
  • Publication number: 20130336581
    Abstract: Foreground objects of interest are distinguished from a background model by dividing a region of interest of a video data image into a grid array of individual cells that are each smaller than a foreground object of interest. More particularly, image data of the foreground object of interest spans a contiguous plurality of the cells. Each of the cells is labeled as foreground if accumulated edge energy within the cell meets an edge energy threshold, if color intensities for different colors within the cell differ by a color intensity differential threshold, or as a function of combinations of said determinations in view of one or more combination rules.
    Type: Application
    Filed: June 14, 2012
    Publication date: December 19, 2013
    Applicant: International Business Machines Corporation
    Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Xiaoyu Wang
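The per-cell labeling rule above can be sketched directly: a cell is foreground if its accumulated edge energy meets a threshold OR its per-channel mean intensities spread beyond a color differential. Cell size and both thresholds below are illustrative values, and a simple finite-difference gradient stands in for the patent's edge-energy measure.

```python
import numpy as np

def label_cells(frame, cell=8, edge_thresh=40.0, color_thresh=25.0):
    """Label each grid cell of an RGB frame as foreground (True) or not."""
    h, w, _ = frame.shape
    gray = frame.mean(axis=2)
    # Gradient-magnitude "edge energy" via finite differences.
    gy, gx = np.gradient(gray)
    energy = np.hypot(gx, gy)
    labels = np.zeros((h // cell, w // cell), dtype=bool)
    for i in range(labels.shape[0]):
        for j in range(labels.shape[1]):
            ys = slice(i * cell, (i + 1) * cell)
            xs = slice(j * cell, (j + 1) * cell)
            e = energy[ys, xs].sum()                       # accumulated edge energy
            means = frame[ys, xs].reshape(-1, 3).mean(axis=0)
            labels[i, j] = (e >= edge_thresh
                            or (means.max() - means.min()) >= color_thresh)
    return labels
```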
  • Publication number: 20130336535
    Abstract: Multi-mode video event indexing includes determining a quality of object distinctiveness with respect to images from a video stream input. A high-quality analytic mode is selected from multiple modes and applied to video input images via a hardware device to determine object activity within the video input images if the determined level of detected quality of object distinctiveness meets a threshold level of quality, else a low-quality analytic mode is selected and applied to the video input images via a hardware device to determine object activity within the video input images, wherein the low-quality analytic mode is different from the high-quality analytic mode.
    Type: Application
    Filed: August 21, 2013
    Publication date: December 19, 2013
    Applicant: International Business Machines Corporation
    Inventors: Russell P. Bobbitt, Lisa M. Brown, Rogerio S. Feris, Arun Hampapur, Yun Zhai
  • Publication number: 20130336534
    Abstract: Multi-mode video event indexing includes determining a quality of object distinctiveness with respect to images from a video stream input. A high-quality analytic mode is selected from multiple modes and applied to video input images via a hardware device to determine object activity within the video input images if the determined level of detected quality of object distinctiveness meets a threshold level of quality, else a low-quality analytic mode is selected and applied to the video input images via a hardware device to determine object activity within the video input images, wherein the low-quality analytic mode is different from the high-quality analytic mode.
    Type: Application
    Filed: August 21, 2013
    Publication date: December 19, 2013
    Applicant: International Business Machines Corporation
    Inventors: Russell P. Bobbitt, Lisa M. Brown, Rogerio S. Feris, Arun Hampapur, Yun Zhai
  • Publication number: 20130272573
    Abstract: View-specific object detectors are learned as a function of scene geometry and object motion patterns. Motion directions are determined for object images extracted from a training dataset and collected from different camera scene viewpoints. The object images are categorized into clusters as a function of similarities of their determined motion directions, the object images in each cluster are acquired from the same camera scene viewpoint. Zenith angles are estimated for object image poses in the clusters relative to a position of a horizon in the cluster camera scene viewpoint, and azimuth angles of the poses as a function of a relation of the determined motion directions of the clustered images to the cluster camera scene viewpoint. Detectors are thus built for recognizing objects in input video, one for each of the clusters, and associated with the estimated zenith angles and azimuth angles of the poses of the respective clusters.
    Type: Application
    Filed: June 7, 2013
    Publication date: October 17, 2013
    Inventors: Rogerio S. Feris, Sharathchandra U. Pankanti, Behjat Siddiquie
  • Publication number: 20130243256
    Abstract: Techniques, systems, and articles of manufacture for multispectral detection of attributes for video surveillance. A method includes generating one or more training sets of one or more multispectral images, generating a group of one or more multispectral box features, using the one or more training sets to select one or more of the one or more multispectral box features to generate a multispectral attribute detector, and using the multispectral attribute detector to identify a location of an attribute in video surveillance, wherein using the multispectral attribute detector comprises, for one or more locations on each spectral band level of the multispectral image, applying the multispectral attribute detector and producing an output indicating attribute detection or an output indicating no attribute detection, and wherein the attribute corresponds to the multispectral attribute detector.
    Type: Application
    Filed: May 7, 2013
    Publication date: September 19, 2013
    Applicant: International Business Machines Corporation
    Inventors: Lisa Marie Brown, Rogerio S. Feris, Arun Hampapur, Daniel Andre Vaquero
  • Publication number: 20130241928
    Abstract: A moving object detected and tracked within a field of view environment of a two-dimensional data feed of a calibrated video camera is represented by a three-dimensional model through localizing a centroid of the object and determining an intersection with a ground-plane within the field of view environment. An appropriate three-dimensional mesh-based volumetric model for the object is initialized by using a back-projection of a corresponding two-dimensional image as a function of the centroid and the determined ground-plane intersection. A texture of the object is projected onto the three-dimensional model, and two-dimensional tracks of the object are upgraded to three-dimensional motion to drive a three-dimensional model.
    Type: Application
    Filed: May 9, 2013
    Publication date: September 19, 2013
    Applicant: International Business Machines Corporation
    Inventors: Lisa M. Brown, Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti