Patents by Inventor Rogerio S. Feris

Rogerio S. Feris has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20160140732
    Abstract: Image matching tracks the movements of objects from initial camera scenes to ending camera scenes across non-overlapping cameras. Paths are defined through the scenes for pairings of initial and ending cameras by their respective scene entry and exit points. For each camera pairing, the combination of one path through the initial scene and one path through the ending scene having the highest total number of tracked movements, relative to all other path combinations, is chosen; the scene exit point of the selected path through the initial camera scene and the scene entry point of the selected path into the ending camera scene define a path connection from the initial camera scene to the ending camera scene.
    Type: Application
    Filed: January 22, 2016
    Publication date: May 19, 2016
    Inventors: Lisa M. Brown, Ankur Datta, Rogerio S. Feris, Sharathchandra Pankanti
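
A minimal sketch of the path-selection step described in the abstract above, assuming tracked movements have already been counted per (exit path, entry path) combination for one camera pairing; the function name and dictionary layout are illustrative, not taken from the patent.

```python
def select_path_connection(movement_counts):
    """Pick the (initial-scene exit, ending-scene entry) combination with the
    highest total number of tracked movements for one camera pairing.

    movement_counts: dict mapping (initial_path_exit, ending_path_entry)
    tuples to counts of image-matched object movements between them.
    Returns the exit and entry points that define the path connection.
    """
    best_pair = max(movement_counts, key=movement_counts.get)
    exit_point, entry_point = best_pair
    return exit_point, entry_point

# Toy example: counts of matched movements for three path combinations.
counts = {
    ("exit_A", "entry_X"): 42,
    ("exit_A", "entry_Y"): 7,
    ("exit_B", "entry_X"): 15,
}
print(select_path_connection(counts))  # ('exit_A', 'entry_X')
```
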
  • Patent number: 9342594
    Abstract: An approach that indexes and searches according to a set of attributes of a person is provided. In one embodiment, there is an extensible indexing and search tool, including an extraction component configured to extract a set of attributes of a person monitored by a set of sensors in a zone of interest. An index component is configured to index each of the set of attributes of the person within an index of an extensible indexing and search tool. A search component is configured to enable a search of the index of the extensible indexing and search tool according to at least one of the set of attributes of the person.
    Type: Grant
    Filed: October 29, 2008
    Date of Patent: May 17, 2016
    Assignee: International Business Machines Corporation
    Inventors: Lisa M. Brown, Raymond A. Cooke, Rogerio S. Feris, Arun Hampapur, Frederik C. M. Kjeldsen, Christopher S. Milite, Stephen R. Russo, Chiao-Fe Shu, Ying-li Tian, Yun Zhai, Zuoxuan Lu
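
A highly simplified sketch of the indexing-and-search idea in the abstract above, using an in-memory inverted index from person attributes to observation records; the attribute names and API are illustrative assumptions, not the tool's actual interface.

```python
from collections import defaultdict

class AttributeIndex:
    """Toy inverted index from person attributes to observation records."""

    def __init__(self):
        self._index = defaultdict(set)   # (attribute, value) -> record ids
        self._records = {}               # record id -> attribute dict

    def add(self, record_id, attributes):
        """Index one person observation, e.g. {'eyewear': 'glasses', ...}."""
        self._records[record_id] = attributes
        for name, value in attributes.items():
            self._index[(name, value)].add(record_id)

    def search(self, **query):
        """Return ids of records matching all queried attribute values."""
        sets = [self._index[(k, v)] for k, v in query.items()]
        return set.intersection(*sets) if sets else set()

index = AttributeIndex()
index.add("cam3_t1042", {"eyewear": "glasses", "torso_color": "red"})
index.add("cam1_t0093", {"eyewear": "none", "torso_color": "red"})
print(index.search(torso_color="red", eyewear="glasses"))  # {'cam3_t1042'}
```
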
  • Publication number: 20160124996
    Abstract: Images are retrieved and ranked according to relevance to the attributes of a multi-attribute query by training image attribute detectors for different attributes annotated in a training dataset. Pair-wise correlations are learned between pairs of the annotated attributes from the training dataset of images. Image datasets may be searched via the trained attribute detectors for images comprising the attributes in a multi-attribute query. The retrieved images are ranked as a function of comprising attributes that are not within the query subset of attributes but are paired by the pair-wise correlations to one of the query attributes, wherein the ranking is an order of likelihood that those other attributes will appear in an image together with the paired query attribute.
    Type: Application
    Filed: January 13, 2016
    Publication date: May 5, 2016
    Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Behjat Siddiquie
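
A minimal numpy sketch of the ranking idea in the abstract above: detector scores for the queried attributes are combined with scores of non-queried attributes weighted by learned pair-wise correlations. The scoring formula and data layout are simplifying assumptions, not the patented method.

```python
import numpy as np

def rank_images(scores, query, correlation, names):
    """Rank images for a multi-attribute query.

    scores: (n_images, n_attributes) attribute-detector responses in [0, 1].
    query: list of queried attribute names.
    correlation: (n_attributes, n_attributes) learned pair-wise correlations.
    names: attribute names aligned with the score columns.
    Returns image indices, best match first.
    """
    q = np.array([names.index(a) for a in query])
    others = np.array([i for i in range(len(names)) if i not in q])
    # Base relevance: how strongly the queried attributes fire.
    base = scores[:, q].sum(axis=1)
    # Bonus: non-queried attributes weighted by their strongest
    # correlation to any queried attribute.
    weights = correlation[np.ix_(others, q)].max(axis=1)
    bonus = scores[:, others] @ weights
    return np.argsort(-(base + bonus))

names = ["hat", "sunglasses", "beard"]
scores = np.array([[0.9, 0.1, 0.8],
                   [0.7, 0.9, 0.1],
                   [0.2, 0.3, 0.2]])
corr = np.array([[1.0, 0.6, 0.2],
                 [0.6, 1.0, 0.1],
                 [0.2, 0.1, 1.0]])
print(rank_images(scores, ["hat"], corr, names))  # [1 0 2]
```
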
  • Patent number: 9330111
    Abstract: In response to a query of discernable facial attributes, the locations of distinct and different facial regions are estimated from face image data, each relevant to different attributes. Different features are extracted from the estimated facial regions from database facial images, which are ranked in base layer rankings as a function of relevance of extracted features to attributes relevant to the estimated regions, and in second-layer rankings as a function of combinations of the base layer rankings and relevance of the extracted features to common ones of the attributes relevant to the estimated regions. The images are ranked in relevance to the query as a function of the second-layer rankings.
    Type: Grant
    Filed: July 20, 2015
    Date of Patent: May 3, 2016
    Assignee: International Business Machines Corporation
    Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Daniel A. Vaquero
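
A compact sketch of the two-layer ranking scheme in the abstract above, assuming per-region relevance scores are already available; the region names and the fusion rule (a weighted sum of base-layer ranks) are illustrative stand-ins for the patented combination.

```python
import numpy as np

def two_layer_rank(region_scores, region_weights):
    """Combine per-region base-layer rankings into a second-layer ranking.

    region_scores: dict mapping a facial region name (e.g. 'eyes', 'mouth')
    to an (n_images,) array of relevance scores of the features extracted
    from that region to the query attributes.
    region_weights: dict mapping region name to its fusion weight.
    Returns image indices ordered by the fused second-layer score.
    """
    regions = sorted(region_scores)
    # Base layer: rank images independently per region (rank 0 = best).
    base_ranks = {r: np.argsort(np.argsort(-region_scores[r])) for r in regions}
    # Second layer: weighted fusion of base-layer ranks (lower = better).
    n = len(next(iter(region_scores.values())))
    fused = np.zeros(n)
    for r in regions:
        fused += region_weights[r] * base_ranks[r]
    return np.argsort(fused)

scores = {"eyes": np.array([0.9, 0.4, 0.7]), "mouth": np.array([0.2, 0.8, 0.6])}
weights = {"eyes": 0.7, "mouth": 0.3}
print(two_layer_rank(scores, weights))  # image indices, best match first
```
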
  • Patent number: 9330312
    Abstract: Techniques, systems, and articles of manufacture for multispectral detection of attributes for video surveillance. A method includes generating one or more training sets of one or more multispectral images, generating a group of one or more multispectral box features, using the one or more training sets to select one or more of the one or more multispectral box features to generate a multispectral attribute detector, and using the multispectral attribute detector to identify a location of an attribute in video surveillance, wherein using the multispectral attribute detector comprises, for one or more locations on each spectral band level of the multispectral image, applying the multispectral attribute detector and producing an output indicating attribute detection or an output indicating no attribute detection, and wherein the attribute corresponds to the multispectral attribute detector.
    Type: Grant
    Filed: May 7, 2013
    Date of Patent: May 3, 2016
    Assignee: International Business Machines Corporation
    Inventors: Lisa Marie Brown, Rogerio S. Feris, Arun Hampapur, Daniel Andre Vaquero
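
A small numpy sketch of a box-feature response computed on spectral bands with integral images, in the spirit of the abstract above; the specific feature (difference of two box sums taken on two bands) and the detection threshold are illustrative assumptions, not the patented detector.

```python
import numpy as np

def integral_image(band):
    """Cumulative-sum table so any rectangular sum is four lookups."""
    return np.pad(band, ((1, 0), (1, 0))).cumsum(0).cumsum(1)

def box_sum(ii, y0, x0, y1, x1):
    """Sum of the band over rows y0..y1-1 and columns x0..x1-1."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

def multispectral_box_feature(bands, box_a, box_b, band_a, band_b):
    """Difference of two box sums taken on two different spectral bands."""
    ia, ib = integral_image(bands[band_a]), integral_image(bands[band_b])
    return box_sum(ia, *box_a) - box_sum(ib, *box_b)

# Toy multispectral patch: 3 bands of an 8x8 region.
rng = np.random.default_rng(0)
bands = rng.random((3, 8, 8))
response = multispectral_box_feature(bands, (0, 0, 4, 4), (4, 4, 8, 8), 0, 2)
detected = response > 1.0   # in practice the threshold is selected during training
print(response, detected)
```
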
  • Patent number: 9322647
    Abstract: A camera is positioned at a fixed vertical height above a reference plane, with the axis of the camera lens at an acute angle with respect to a perpendicular to the reference plane. One or more processors receive images of different people, and the vertical measurement values of the images of the different people are determined. The one or more processors determine a first statistical measure associated with a statistical distribution of the vertical measurement values. The known heights of people from a known statistical distribution of heights are transformed to normalized measurements, based in part on a focal length of the camera lens, the angle of the camera, and a division operator in an objective function of differences between the normalized measurements and the vertical measurement values. The fixed vertical height of the camera is determined based at least on minimizing the objective function.
    Type: Grant
    Filed: March 28, 2013
    Date of Patent: April 26, 2016
    Assignee: International Business Machines Corporation
    Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Yun Zhai
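
A toy sketch of the height-calibration idea in the abstract above: candidate camera heights are scored by how well the known height distribution, pushed through a simplified pinhole relation, explains the observed vertical measurements. The projection formula and the grid search here are deliberately simplified assumptions, not the patented objective function.

```python
import numpy as np

def predicted_measurement(person_height_m, camera_height_m, focal_px, tilt_rad):
    """Toy normalized vertical measurement for a person seen by a tilted camera:
    apparent size shrinks as the camera is mounted higher. This is a stand-in
    for the patent's normalization, not its actual formula."""
    return focal_px * person_height_m * np.cos(tilt_rad) / camera_height_m

def estimate_camera_height(observed_px, person_heights_m, focal_px, tilt_rad,
                           candidates_m=np.linspace(2.0, 10.0, 81)):
    """Grid-search the camera height minimizing squared differences between
    sorted observed measurements and sorted predictions from the known
    height distribution (a crude quantile match)."""
    errors = []
    for h in candidates_m:
        pred = predicted_measurement(person_heights_m, h, focal_px, tilt_rad)
        errors.append(np.mean((np.sort(observed_px) - np.sort(pred)) ** 2))
    return candidates_m[int(np.argmin(errors))]

# Synthetic check: observations generated with the same toy model at 4.5 m.
rng = np.random.default_rng(1)
true_heights = rng.normal(1.7, 0.08, 200)            # known height statistics
obs = predicted_measurement(true_heights, 4.5, 800.0, np.deg2rad(30))
print(estimate_camera_height(obs, rng.normal(1.7, 0.08, 200), 800.0,
                             np.deg2rad(30)))        # approximately 4.5
```
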
  • Patent number: 9299162
    Abstract: Multi-mode video event indexing includes determining a quality of object distinctiveness with respect to images from a video stream input. If the determined level of detected quality of object distinctiveness meets a threshold level of quality, a high-quality analytic mode is selected from multiple modes and applied to the video input images via a hardware device to determine object activity within the video input images; otherwise, a low-quality analytic mode, different from the high-quality analytic mode, is selected and applied to the video input images via a hardware device to determine object activity within the video input images.
    Type: Grant
    Filed: July 31, 2015
    Date of Patent: March 29, 2016
    Assignee: International Business Machines Corporation
    Inventors: Russell P. Bobbitt, Lisa M. Brown, Rogerio S. Feris, Arun Hampapur, Yun Zhai
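
A short sketch of the mode-selection logic in the abstract above; the distinctiveness measure (foreground/background contrast) and the two analytic stubs are placeholder assumptions.

```python
import numpy as np

def object_distinctiveness(frame, background):
    """Crude quality measure: mean absolute contrast between the current
    frame and a background estimate (both scaled to [0, 1])."""
    return float(np.mean(np.abs(frame - background)))

def analyze(frame, background, quality_threshold=0.15):
    """Pick the high-quality analytic when objects are distinct enough,
    otherwise fall back to a cheaper low-quality analytic."""
    if object_distinctiveness(frame, background) >= quality_threshold:
        return high_quality_analytic(frame)
    return low_quality_analytic(frame)

def high_quality_analytic(frame):   # e.g. per-object tracking (placeholder)
    return {"mode": "high", "activity": "tracked objects"}

def low_quality_analytic(frame):    # e.g. coarse motion histogram (placeholder)
    return {"mode": "low", "activity": "aggregate motion"}

frame = np.random.default_rng(2).random((120, 160))
background = np.full_like(frame, 0.5)
print(analyze(frame, background))
```
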
  • Patent number: 9280833
    Abstract: Image matching tracks the movements of objects from initial camera scenes to ending camera scenes across non-overlapping cameras. Paths are defined through the scenes for pairings of initial and ending cameras by their respective scene entry and exit points. For each camera pairing, the combination of one path through the initial scene and one path through the ending scene having the highest total number of tracked movements, relative to all other path combinations, is chosen; the scene exit point of the selected path through the initial camera scene and the scene entry point of the selected path into the ending camera scene define a path connection from the initial camera scene to the ending camera scene.
    Type: Grant
    Filed: March 5, 2013
    Date of Patent: March 8, 2016
    Assignee: International Business Machines Corporation
    Inventors: Lisa M. Brown, Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti
  • Patent number: 9262445
    Abstract: Images are retrieved and ranked according to relevance to the attributes of a multi-attribute query by training image attribute detectors for different attributes annotated in a training dataset. Pair-wise correlations are learned between pairs of the annotated attributes from the training dataset of images. Image datasets may be searched via the trained attribute detectors for images comprising the attributes in a multi-attribute query. The retrieved images are ranked as a function of comprising attributes that are not within the query subset of attributes but are paired by the pair-wise correlations to one of the query attributes, wherein the ranking is an order of likelihood that those other attributes will appear in an image together with the paired query attribute.
    Type: Grant
    Filed: October 17, 2014
    Date of Patent: February 16, 2016
    Assignee: International Business Machines Corporation
    Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Behjat Siddiquie
  • Patent number: 9251425
    Abstract: Automatic object retrieval from input video is based on learned, complementary detectors created for each of a plurality of different motionlet clusters. The motionlet clusters are partitioned from a dataset of training vehicle images as a function of determining that vehicles within each of the scenes of the images in each cluster share similar two-dimensional motion direction attributes within their scenes. To train the complementary detectors, a first detector is trained on motion blobs of vehicle objects detected and collected within each of the training dataset vehicle images within the motionlet cluster via a background modeling process; a second detector is trained on each of the training dataset vehicle images within the motionlet cluster that have motion blobs of the vehicle objects but are misclassified by the first detector; and the training repeats until all of the training dataset vehicle images have been eliminated as false positives or correctly classified.
    Type: Grant
    Filed: February 12, 2015
    Date of Patent: February 2, 2016
    Assignee: International Business Machines Corporation
    Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Yun Zhai
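
A miniature sketch of the complementary-detector training loop described in the abstract above, with a nearest-centroid classifier standing in for the real detectors and random vectors standing in for motion-blob features; the stopping rule follows the abstract (repeat until every remaining training image is correctly classified or the detector budget is exhausted).

```python
import numpy as np

class CentroidDetector:
    """Tiny stand-in detector: classifies by the nearer class centroid."""
    def fit(self, X, y):
        self.pos = X[y == 1].mean(axis=0)
        self.neg = X[y == 0].mean(axis=0)
        return self
    def predict(self, X):
        d_pos = np.linalg.norm(X - self.pos, axis=1)
        d_neg = np.linalg.norm(X - self.neg, axis=1)
        return (d_pos < d_neg).astype(int)

def train_complementary(X, y, max_detectors=5):
    """Train a chain of detectors, each on the examples the previous ones
    still misclassify, until nothing is left or the budget runs out."""
    detectors, remaining = [], np.arange(len(y))
    while len(remaining) and len(detectors) < max_detectors:
        if len(np.unique(y[remaining])) < 2:
            break                              # cannot train on a single class
        det = CentroidDetector().fit(X[remaining], y[remaining])
        detectors.append(det)
        wrong = det.predict(X[remaining]) != y[remaining]
        remaining = remaining[wrong]           # keep only misclassified images
    return detectors

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(2, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
print(len(train_complementary(X, y)), "detector(s) trained")
```
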
  • Publication number: 20160012606
    Abstract: Foreground objects of interest are distinguished from a background model by dividing a region of interest of a video data image into a grid array of individual cells. Each of the cells is labeled as foreground if the accumulated edge energy within the cell meets an edge energy threshold, or if the color intensities for different colors within the cell differ by a color intensity differential threshold, or as a function of combinations of these determinations.
    Type: Application
    Filed: September 22, 2015
    Publication date: January 14, 2016
    Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Xiaoyu Wang
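
A numpy-only sketch of the cell-labeling rule in the abstract above: the region of interest is divided into a grid, and a cell is marked foreground when its accumulated gradient (edge) energy or its inter-channel intensity spread exceeds a threshold. The thresholds and the gradient operator are illustrative choices.

```python
import numpy as np

def label_cells(frame_rgb, cell=16, edge_thresh=40.0, color_thresh=25.0):
    """Return a boolean grid marking foreground cells.

    frame_rgb: (H, W, 3) uint8 image region of interest.
    A cell is foreground if its mean edge energy meets edge_thresh or the
    spread between its mean color channels meets color_thresh.
    """
    gray = frame_rgb.astype(float).mean(axis=2)
    gy, gx = np.gradient(gray)
    edge_energy = np.hypot(gx, gy)
    rows, cols = gray.shape[0] // cell, gray.shape[1] // cell
    labels = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            ys = slice(r * cell, (r + 1) * cell)
            xs = slice(c * cell, (c + 1) * cell)
            energy = edge_energy[ys, xs].mean()
            channel_means = frame_rgb[ys, xs].reshape(-1, 3).mean(axis=0)
            spread = channel_means.max() - channel_means.min()
            labels[r, c] = energy >= edge_thresh or spread >= color_thresh
    return labels

frame = np.zeros((64, 64, 3), dtype=np.uint8)
frame[20:40, 20:40] = (200, 30, 30)     # a red square triggers both cues
print(label_cells(frame).astype(int))
```
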
  • Publication number: 20150379729
    Abstract: Field-of-view overlap among multiple cameras is automatically determined as a function of the temporal overlap of object tracks determined within their fields of view. Object tracks with the highest similarity value are assigned into pairs, and the portions of the assigned object track pairs having a temporally overlapping period of time are determined. Scene entry points are determined from object locations on the tracks at the beginning of the temporally overlapping period of time, and scene exit points from object locations at the ending of that period. Boundary lines for the overlapping field-of-view portions within the corresponding camera fields of view are defined as a function of the determined entry and exit points in their respective fields of view.
    Type: Application
    Filed: September 14, 2015
    Publication date: December 31, 2015
    Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Yun Zhai
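
A condensed sketch of the overlap-estimation idea in the abstract above: a paired track from each of two cameras is clipped to the common time window, and the positions at the start and end of that window give the scene entry and exit points. The track format is an assumption, and the similarity-based pairing step is assumed to have been done already.

```python
def overlapping_window(track_a, track_b):
    """Tracks are dicts mapping frame timestamp -> (x, y) in their own view.
    Returns the temporally overlapping timestamps, sorted."""
    return sorted(set(track_a) & set(track_b))

def entry_exit_points(track, window):
    """Scene entry point = location at the start of the overlap window,
    scene exit point = location at its end."""
    return track[window[0]], track[window[-1]]

# Toy paired tracks of the same object seen by camera A and camera B.
track_a = {t: (10 + 2 * t, 50) for t in range(0, 20)}
track_b = {t: (300 - 3 * t, 80) for t in range(8, 30)}
window = overlapping_window(track_a, track_b)
print("camera A entry/exit:", entry_exit_points(track_a, window))
print("camera B entry/exit:", entry_exit_points(track_b, window))
# Entry and exit points accumulated over many track pairs define the boundary
# of the overlapping field-of-view region in each camera.
```
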
  • Publication number: 20150379768
    Abstract: Objects within two-dimensional video data are modeled by three-dimensional models as a function of object type and motion through manually calibrating a two-dimensional image to the three spatial dimensions of a three-dimensional modeling cube. Calibrated three-dimensional locations of an object in motion in the two-dimensional image field of view of a video data input are determined and used to determine a heading direction of the object as a function of the camera calibration and determined movement between the determined three-dimensional locations. The two-dimensional object image is replaced in the video data input with an object-type three-dimensional polygonal model having a projected bounding box that best matches a bounding box of an image blob, the model oriented in the determined heading direction. The bounding box of the replacing model is then scaled to fit the object image blob bounding box, and rendered with extracted image features.
    Type: Application
    Filed: September 9, 2015
    Publication date: December 31, 2015
    Inventors: Ankur Datta, Rogerio S. Feris, Yun Zhai
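
A tiny sketch of the heading-direction step in the abstract above: given two calibrated ground-plane positions of a moving object, the heading is the angle of the displacement vector. The image-to-world calibration itself is assumed to exist.

```python
import math

def heading_direction(p_prev, p_curr):
    """Heading in degrees, counter-clockwise from +X on the ground plane,
    from two calibrated 3-D positions (x, y, z) of the tracked object."""
    dx, dy = p_curr[0] - p_prev[0], p_curr[1] - p_prev[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0

# Object moved 3 m east and 3 m north between samples -> 45 degrees.
print(heading_direction((0.0, 0.0, 0.0), (3.0, 3.0, 0.0)))
```

The 3-D polygonal model would then be oriented to this heading before its bounding box is scaled to the image blob.
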
  • Patent number: 9224049
    Abstract: Foreground object image features are extracted from input video via application of a background subtraction mask, and optical flow image features are extracted from a region of the input video image data defined by the extracted foreground object image features. If the estimated movement features indicate that the underlying object is in motion, a dominant moving direction of the underlying object is determined. If the dominant moving direction is parallel to the orientation of the second, crossed thoroughfare, an event alarm indicating that a static object is blocking travel on the crossing second thoroughfare is not generated. If the estimated movement features indicate that the underlying object is static, or that its determined dominant moving direction is not parallel to the second thoroughfare, an appearance of the foreground object region is determined and a static-ness timer is run while the foreground object region comprises the extracted foreground object image features.
    Type: Grant
    Filed: March 5, 2015
    Date of Patent: December 29, 2015
    Assignee: International Business Machines Corporation
    Inventors: Rogerio S. Feris, Yun Zhai
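
A brief sketch of the decision logic in the abstract above: the dominant motion direction of the foreground region is compared against the orientation of the crossing thoroughfare, and a static-ness timer is advanced only when the region neither moves along that road nor disappears. The angle conventions, the static-motion threshold, and the parallel tolerance are illustrative assumptions.

```python
import math

def dominant_direction(flow_vectors):
    """Average optical-flow direction (degrees, modulo 180) of a foreground
    region; returns None when the region is essentially static."""
    sx = sum(v[0] for v in flow_vectors)
    sy = sum(v[1] for v in flow_vectors)
    if math.hypot(sx, sy) < 1e-3 * max(len(flow_vectors), 1):
        return None
    return math.degrees(math.atan2(sy, sx)) % 180.0

def update_staticness_timer(flow_vectors, road_orientation_deg, timer,
                            frame_dt=1.0, parallel_tol_deg=15.0):
    """Advance the static-ness timer unless the object is moving parallel to
    the crossing thoroughfare (in which case no blocking alarm applies)."""
    direction = dominant_direction(flow_vectors)
    if direction is not None:
        diff = abs(direction - road_orientation_deg % 180.0)
        if min(diff, 180.0 - diff) <= parallel_tol_deg:
            return 0.0            # moving along the crossing road: reset
    return timer + frame_dt       # static or moving off-axis: keep timing

timer = 0.0
for _ in range(30):               # 30 frames of a static foreground region
    timer = update_staticness_timer([(0.0, 0.0)] * 50, 90.0, timer)
print("static-ness timer:", timer)   # an alarm fires when this exceeds a limit
```
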
  • Patent number: 9224046
    Abstract: View-specific object detectors are learned as a function of scene geometry and object motion patterns. Motion directions are determined for object images extracted from a training dataset and collected from different camera scene viewpoints. The object images are categorized into clusters as a function of the similarities of their determined motion directions, wherein the object images in each cluster are acquired from the same camera scene viewpoint. Zenith angles are estimated for object image poses in the clusters relative to the position of the horizon in the cluster's camera scene viewpoint, and azimuth angles of the poses as a function of the relation of the determined motion directions of the clustered images to that viewpoint. Detectors are thus built for recognizing objects in input video, one for each of the clusters, and associated with the estimated zenith and azimuth angles of the poses of the respective clusters.
    Type: Grant
    Filed: January 19, 2015
    Date of Patent: December 29, 2015
    Assignee: International Business Machines Corporation
    Inventors: Rogerio S. Feris, Sharathchandra U. Pankanti, Behjat Siddiquie
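
A small sketch of the clustering step in the abstract above: object images from one camera viewpoint are grouped by binning their motion directions, and each cluster is assigned an azimuth estimate from its circular mean direction. The zenith estimate from the horizon position is omitted, and the binning granularity is an assumption.

```python
import math
from collections import defaultdict

def cluster_by_motion(samples, bin_deg=45.0):
    """Group (image_id, motion_direction_deg) samples from one camera
    viewpoint into clusters of similar motion direction."""
    clusters = defaultdict(list)
    for image_id, direction in samples:
        clusters[int((direction % 360.0) // bin_deg)].append((image_id, direction))
    return clusters

def cluster_azimuth(cluster):
    """Azimuth estimate for a cluster's pose: circular mean of its motion
    directions (objects are assumed to face their direction of travel)."""
    sx = sum(math.cos(math.radians(d)) for _, d in cluster)
    sy = sum(math.sin(math.radians(d)) for _, d in cluster)
    return math.degrees(math.atan2(sy, sx)) % 360.0

samples = [("img1", 10.0), ("img2", 17.0), ("img3", 182.0), ("img4", 190.0)]
for key, members in cluster_by_motion(samples).items():
    print(key, [m[0] for m in members], round(cluster_azimuth(members), 1))
```

One view-specific detector would then be trained per cluster and tagged with that cluster's estimated pose angles.
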
  • Publication number: 20150356352
    Abstract: Long-term understanding of background modeling includes determining first and second dimension gradient model derivatives of the image brightness data of an image pixel along the respective dimensions of two-dimensional, single-channel image brightness data of a static image scene. The determined gradients are averaged with previously determined gradients of the image pixel, and with the gradients of neighboring pixels as a function of their respective distances to the image pixel, the averaging generating averaged pixel gradient models, each having mean values and weight values, for each of a plurality of pixels of the video image data of the static image scene. Background models for the static image scene are constructed as a function of the averaged pixel gradients and weights, wherein the background model pixels are represented by averaged pixel gradient models having similar orientation and magnitude and weights meeting a threshold weight requirement.
    Type: Application
    Filed: August 11, 2015
    Publication date: December 10, 2015
    Inventors: Rogerio S. Feris, Yun Zhai
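
A condensed numpy sketch of the gradient-averaging idea in the abstract above: per-pixel brightness gradients along the two image dimensions are blended into a running model together with a spatial contribution from neighboring pixels. The blending weights and the neighborhood kernel (a 3x3 box) are illustrative choices, not the patented scheme.

```python
import numpy as np

def update_gradient_background(model, frame, alpha=0.05, spatial=0.25):
    """model: dict with 'gx', 'gy' (running mean gradients) and 'weight'.
    frame: (H, W) single-channel brightness image as float."""
    gy, gx = np.gradient(frame)

    def blur(a):
        # Simple 3x3 box blur standing in for distance-weighted neighbors.
        p = np.pad(a, 1, mode="edge")
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    gx = (1 - spatial) * gx + spatial * blur(gx)
    gy = (1 - spatial) * gy + spatial * blur(gy)
    # Running (exponential) average of the per-pixel gradient model.
    model["gx"] = (1 - alpha) * model["gx"] + alpha * gx
    model["gy"] = (1 - alpha) * model["gy"] + alpha * gy
    model["weight"] = np.minimum(model["weight"] + alpha, 1.0)
    return model

frames = np.random.default_rng(4).random((10, 48, 64))
model = {"gx": np.zeros((48, 64)), "gy": np.zeros((48, 64)),
         "weight": np.zeros((48, 64))}
for f in frames:
    model = update_gradient_background(model, f)
background_mask = model["weight"] >= 0.3   # pixels with a stable enough model
print(background_mask.mean())
```
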
  • Publication number: 20150356745
    Abstract: Multi-mode video event indexing includes determining a quality of object distinctiveness with respect to images from a video stream input. If the determined level of detected quality of object distinctiveness meets a threshold level of quality, a high-quality analytic mode is selected from multiple modes and applied to the video input images via a hardware device to determine object activity within the video input images; otherwise, a low-quality analytic mode, different from the high-quality analytic mode, is selected and applied to the video input images via a hardware device to determine object activity within the video input images.
    Type: Application
    Filed: August 19, 2015
    Publication date: December 10, 2015
    Inventors: Russell P. Bobbitt, Lisa M. Brown, Rogerio S. Feris, Arun Hampapur, Yun Zhai
  • Publication number: 20150339831
    Abstract: Multi-mode video event indexing includes determining a quality of object distinctiveness with respect to images from a video stream input. If the determined level of detected quality of object distinctiveness meets a threshold level of quality, a high-quality analytic mode is selected from multiple modes and applied to the video input images via a hardware device to determine object activity within the video input images; otherwise, a low-quality analytic mode, different from the high-quality analytic mode, is selected and applied to the video input images via a hardware device to determine object activity within the video input images.
    Type: Application
    Filed: July 31, 2015
    Publication date: November 26, 2015
    Inventors: Russell P. Bobbitt, Lisa M. Brown, Rogerio S. Feris, Arun Hampapur, Yun Zhai
  • Publication number: 20150324368
    Abstract: In response to a query of discernable facial attributes, the locations of distinct and different facial regions are estimated from face image data, each relevant to different attributes. Different features are extracted from the estimated facial regions from database facial images, which are ranked in base layer rankings as a function of relevance of extracted features to attributes relevant to the estimated regions, and in second-layer rankings as a function of combinations of the base layer rankings and relevance of the extracted features to common ones of the attributes relevant to the estimated regions. The images are ranked in relevance to the query as a function of the second-layer rankings.
    Type: Application
    Filed: July 20, 2015
    Publication date: November 12, 2015
    Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Daniel A. Vaquero
  • Publication number: 20150310630
    Abstract: A method and system for real-time processing of a sequence of video frames. A current frame in the sequence and at least one frame in the sequence occurring prior to the current frame are analyzed. The sequence of video frames is received in synchronization with a recording of the video frames in real time. The analyzing includes performing a background subtraction on the at least one frame, which determines a background image and a static region mask associated with a static region consisting of a contiguous distribution of pixels in the current frame. The static region mask identifies each pixel in the static region upon the static region mask being superimposed on the current frame. A status of a static object is determined as either an abandoned status if the static object is an abandoned object or a removed status if the static object is a removed object.
    Type: Application
    Filed: June 10, 2015
    Publication date: October 29, 2015
    Inventors: Rogerio S. Feris, Arun Hampapur, Zuoxuan Lu, Ying-li Tian
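
A minimal sketch of the abandoned-versus-removed decision in the abstract above: once a static region mask is available, the sketch asks whether the masked region blends with its immediate surroundings in the background image or in the current frame, and classifies the object accordingly. The contrast measure (mean absolute difference against a one-pixel ring) is an illustrative assumption, not the patented test.

```python
import numpy as np

def region_surround_contrast(image, mask):
    """Absolute difference between the mean of the masked region and the mean
    of a one-pixel ring around it; low contrast means the region blends in."""
    inner = image[mask].mean()
    ring = np.zeros_like(mask)
    ys, xs = np.where(mask)
    for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        ring[np.clip(ys + dy, 0, mask.shape[0] - 1),
             np.clip(xs + dx, 0, mask.shape[1] - 1)] = True
    ring &= ~mask
    return abs(inner - image[ring].mean())

def static_object_status(current, background, mask):
    """'abandoned' if the static region stands out in the current frame more
    than it does in the background image, else 'removed'."""
    if region_surround_contrast(current, mask) > region_surround_contrast(background, mask):
        return "abandoned"
    return "removed"

background = np.full((60, 60), 100.0)
current = background.copy()
current[20:30, 20:30] = 220.0                     # a bright bag appears
mask = np.zeros((60, 60), dtype=bool)
mask[20:30, 20:30] = True
print(static_object_status(current, background, mask))   # 'abandoned'
```
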