Patents by Inventor Rogerio S. Feris
Rogerio S. Feris has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9594963
Abstract: Foreground feature data and motion feature data are determined for frames of video data acquired from an object region of interest. The frames are labeled as “object present” if the determined foreground feature data value meets a threshold value, else as “object absent”; and as “motion present” if the motion feature data meets a motion threshold, else as “static.” The labels are used to classify segments of the video data comprising groups of consecutive video frames, namely as within a “no object present” segment for groups with “object absent” and “static” labels; within an “object present and in transition” segment for groups with “object present” and “motion present” labels; and within an “object present and stopped” segment for groups with “object present” and “static” labels. The presence or motion state of an object at a time of inquiry is thereby determined from the respective segment classification.
Type: Grant
Filed: September 29, 2016
Date of Patent: March 14, 2017
Assignee: International Business Machines Corporation
Inventors: Russell P. Bobbitt, Rogerio S. Feris, Yun Zhai
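The labeling and segment-classification rules in this abstract translate almost directly into code. The sketch below is illustrative only; the thresholds, feature-value ranges, and function names are assumptions, not taken from the patent.

```python
# Sketch of the frame-labeling and segment-classification scheme.
# Thresholds and helper names are illustrative assumptions.

def label_frame(foreground_value, motion_value,
                foreground_threshold=0.5, motion_threshold=0.5):
    """Label one frame from its foreground and motion feature values."""
    presence = "object present" if foreground_value >= foreground_threshold else "object absent"
    motion = "motion present" if motion_value >= motion_threshold else "static"
    return presence, motion

def classify_segment(labels):
    """Classify a group of consecutive, identically labeled frames."""
    presence, motion = labels
    if presence == "object absent" and motion == "static":
        return "no object present"
    if presence == "object present" and motion == "motion present":
        return "object present and in transition"
    if presence == "object present" and motion == "static":
        return "object present and stopped"
    return "unclassified"

# Example: high foreground energy but no motion means a stopped object.
print(classify_segment(label_frame(0.9, 0.1)))  # object present and stopped
```

Querying the presence or motion state at a time of inquiry then reduces to looking up which classified segment contains that time.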
-
Publication number: 20170046596
Abstract: Embodiments are directed to an object detection system having at least one processor circuit configured to receive a series of image regions and apply to each image region in the series a detector, which is configured to determine a presence of a predetermined object in the image region. The object detection system performs a method of selecting and applying the detector from among a plurality of foreground detectors and a plurality of background detectors in a repeated pattern that includes sequentially selecting a selected one of the plurality of foreground detectors; sequentially applying the selected one of the plurality of foreground detectors to one of the series of image regions until all of the plurality of foreground detectors have been applied; selecting a selected one of the plurality of background detectors; and applying the selected one of the plurality of background detectors to one of the series of image regions.
Type: Application
Filed: August 12, 2015
Publication date: February 16, 2017
Inventors: Russell P. Bobbitt, Rogerio S. Feris, Chiao-Fe Shu, Yun Zhai
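The repeated selection pattern reads as an interleaving of two detector pools: all foreground detectors in turn, then one background detector, then the pattern repeats. This can be sketched as a generator; the detector names below are placeholders standing in for real detector objects.

```python
from itertools import cycle

def detector_schedule(foreground_detectors, background_detectors):
    """Yield detectors in the repeated pattern the abstract describes:
    each foreground detector in sequence, then one background detector,
    cycling through the background detectors across repetitions."""
    background = cycle(background_detectors)
    while True:
        for fg_detector in foreground_detectors:
            yield fg_detector
        yield next(background)

# Example with placeholder names; each yielded detector would be applied
# to the next image region in the series.
schedule = detector_schedule(["fg1", "fg2"], ["bg1", "bg2"])
first_six = [next(schedule) for _ in range(6)]
print(first_six)  # ['fg1', 'fg2', 'bg1', 'fg1', 'fg2', 'bg2']
```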
-
Publication number: 20170046587
Abstract: Embodiments are directed to an object detection system having at least one processor circuit configured to receive a series of image regions and apply to each image region in the series a detector, which is configured to determine a presence of a predetermined object in the image region. The object detection system performs a method of selecting and applying the detector from among a plurality of foreground detectors and a plurality of background detectors in a repeated pattern that includes sequentially selecting a selected one of the plurality of foreground detectors; sequentially applying the selected one of the plurality of foreground detectors to one of the series of image regions until all of the plurality of foreground detectors have been applied; selecting a selected one of the plurality of background detectors; and applying the selected one of the plurality of background detectors to one of the series of image regions.
Type: Application
Filed: December 8, 2015
Publication date: February 16, 2017
Inventors: Russell P. Bobbitt, Rogerio S. Feris, Chiao-Fe Shu, Yun Zhai
-
Publication number: 20170017845
Abstract: Foreground feature data and motion feature data are determined for frames of video data acquired from an object region of interest. The frames are labeled as “object present” if the determined foreground feature data value meets a threshold value, else as “object absent”; and as “motion present” if the motion feature data meets a motion threshold, else as “static.” The labels are used to classify segments of the video data comprising groups of consecutive video frames, namely as within a “no object present” segment for groups with “object absent” and “static” labels; within an “object present and in transition” segment for groups with “object present” and “motion present” labels; and within an “object present and stopped” segment for groups with “object present” and “static” labels. The presence or motion state of an object at a time of inquiry is thereby determined from the respective segment classification.
Type: Application
Filed: September 29, 2016
Publication date: January 19, 2017
Inventors: Russell P. Bobbitt, Rogerio S. Feris, Yun Zhai
-
Publication number: 20160371560
Abstract: Long-term understanding of background modeling includes determining first and second dimension gradient model derivatives of image brightness data of an image pixel along respective dimensions of two-dimensional, single channel image brightness data of a static image scene. The determined gradients are averaged with previously determined gradients of the image pixels, and with gradients of neighboring pixels as a function of their respective distances to the image pixel, the averaging generating averaged pixel gradient models for each of a plurality of pixels of the video image data of the static image scene that each have mean values and weight values. Background models for the static image scene are constructed as a function of the averaged pixel gradients and weights, wherein the background model pixels are represented by averaged pixel gradient models having similar orientation and magnitude and weights meeting a threshold weight requirement.
Type: Application
Filed: August 31, 2016
Publication date: December 22, 2016
Inventors: Rogerio S. Feris, Yun Zhai
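One update step of the averaged pixel gradient model might look like the sketch below. The running-average rule and learning rate are assumptions, and the distance-weighted averaging over neighboring pixels is omitted for brevity.

```python
import numpy as np

def update_gradient_model(mean_gx, mean_gy, weight, frame, alpha=0.05):
    """One illustrative update of an averaged pixel gradient model.
    mean_gx/mean_gy hold per-pixel mean gradients along the two image
    dimensions; weight tracks model confidence. alpha is an assumed rate."""
    # First- and second-dimension brightness gradients of the current frame.
    gy, gx = np.gradient(frame.astype(float))
    # Running average of gradients with the previously determined gradients.
    mean_gx = (1 - alpha) * mean_gx + alpha * gx
    mean_gy = (1 - alpha) * mean_gy + alpha * gy
    weight = (1 - alpha) * weight + alpha
    return mean_gx, mean_gy, weight

# Example: a brightness ramp along the second dimension has unit gradient there.
frame = np.tile(np.arange(4.0), (4, 1))
mean_gx, mean_gy, weight = update_gradient_model(
    np.zeros((4, 4)), np.zeros((4, 4)), 0.0, frame)
print(mean_gx[0, 0], weight)  # 0.05 0.05
```

Background-model pixels would then be selected from models whose orientation and magnitude are stable and whose weight meets a threshold.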
-
Publication number: 20160350600
Abstract: Methods and systems are provided for monitoring a queue. A method includes receiving, by sensors, a non-visual identifier at predefined locations of a queue. Further, the method includes capturing, by image capture devices, images of an object possessing the non-visual identifier at the predefined locations of the queue. Further, the method includes visually tracking another object in the queue based on transformations of a predefined feature extracted from the images of the object possessing the non-visual identifier at the predefined locations.
Type: Application
Filed: August 9, 2016
Publication date: December 1, 2016
Inventors: Ira L. Allen, Russell P. Bobbitt, Rogerio S. Feris, Yun Zhai
-
Publication number: 20160335798
Abstract: Objects within two-dimensional video data are modeled by three-dimensional models as a function of object type and motion through manually calibrating a two-dimensional image to the three spatial dimensions of a three-dimensional modeling cube. Calibrated three-dimensional locations of an object in motion in the two-dimensional image field of view of a video data input are determined and used to determine a heading direction of the object as a function of the camera calibration and determined movement between the determined three-dimensional locations. The two-dimensional object image is replaced in the video data input with an object-type three-dimensional polygonal model having a projected bounding box that best matches a bounding box of an image blob, the model oriented in the determined heading direction. The bounding box of the replacing model is then scaled to fit the object image blob bounding box, and rendered with extracted image features.
Type: Application
Filed: July 29, 2016
Publication date: November 17, 2016
Inventors: Ankur Datta, Rogerio S. Feris, Yun Zhai
-
Patent number: 9495599
Abstract: Foreground feature data and motion feature data are determined for frames of video data acquired from a train track area region of interest. The frames are labeled as “train present” if the determined foreground feature data value meets a threshold value, else as “train absent”; and as “motion present” if the motion feature data meets a motion threshold, else as “static.” The labels are used to classify segments of the video data comprising groups of consecutive video frames, namely as within a “no train present” segment for groups with “train absent” and “static” labels; within a “train present and in transition” segment for groups with “train present” and “motion present” labels; and within a “train present and stopped” segment for groups with “train present” and “static” labels. The presence or motion state of a train at a time of inquiry is thereby determined from the respective segment classification.
Type: Grant
Filed: May 14, 2015
Date of Patent: November 15, 2016
Assignee: International Business Machines Corporation
Inventors: Russell P. Bobbitt, Rogerio S. Feris, Yun Zhai
-
Patent number: 9477890
Abstract: Techniques for object detection are provided that employ limited learned attribute ranges. One or more objects are initially detected for a full range of one or more attributes at each location of an image. Thereafter, a set of positional constraints is generated indicating an expected range of values for each position in the image for one or more of the attributes, based on the detected objects employing a geometric model of a scene in the image. Objects are then detected in the image using the expected range of values for each position in the image for the one or more attributes. The attributes comprise, for example, one or more of size, pose and rotation of the objects. A best fit is computed to the geometric model to generate the set of positional constraints, for example, using a least squares approach.
Type: Grant
Filed: November 6, 2013
Date of Patent: October 25, 2016
Assignee: International Business Machines Corporation
Inventors: Lisa M. Brown, Rogerio S. Feris
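The least-squares step can be illustrated with the size attribute: under a ground-plane scene model, expected object size is often well approximated as a linear function of image row. The linear model, tolerance band, and function names below are simplifying assumptions for illustration.

```python
import numpy as np

def fit_size_constraint(rows, sizes):
    """Least-squares fit of expected object size as a linear function of
    image row (a simple stand-in for a geometric scene model)."""
    A = np.column_stack([rows, np.ones(len(rows))])
    (slope, intercept), *_ = np.linalg.lstsq(A, np.asarray(sizes, float), rcond=None)
    return slope, intercept

def expected_size_range(row, slope, intercept, tolerance=0.2):
    """Expected size interval at a given row, +/- an assumed relative tolerance."""
    size = slope * row + intercept
    return (1 - tolerance) * size, (1 + tolerance) * size

# Initial full-range detections: objects lower in the image (nearer the
# camera) appear larger; the fit then constrains later detections.
slope, intercept = fit_size_constraint([100, 200, 300], [20, 40, 60])
print(expected_size_range(200, slope, intercept))  # approximately (32.0, 48.0)
```

A second detection pass would then discard candidate detections whose size falls outside the expected range at their position.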
-
Patent number: 9471852
Abstract: An aspect of providing user-configurable settings for content obfuscation includes, for each media segment in a media file, inputting the media segment to a neural network, applying a classifier to features output by the neural network, and determining from results of the classifier images in the media segment that contain the sensitive characteristics. The classifier specifies images that are predetermined to include sensitive characteristics. An aspect further includes assigning a tag to each of the images in the media segment that contain the sensitive characteristics. The tag indicates a type of sensitivity. An aspect also includes receiving at least one user-defined sensitivity, the user-defined sensitivity indicating an action or condition that is considered objectionable to a user, identifying a subset of the tagged images that correlate to the user-defined sensitivity, and visually modifying, during playback of the media file, an appearance of the subset of the tagged images.
Type: Grant
Filed: November 11, 2015
Date of Patent: October 18, 2016
Assignee: International Business Machines Corporation
Inventors: Rogerio S. Feris, Itzhack Goldberg, Minkyong Kim, Clifford A. Pickover, Neil Sondhi
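The step of correlating tagged images with a user-defined sensitivity can be sketched as simple set matching over sensitivity tags; the tag names and data layout below are invented for illustration, and the classifier and playback-modification stages are omitted.

```python
def select_frames_to_obfuscate(tagged_frames, user_sensitivities):
    """Return indices of frames whose sensitivity tags overlap any
    user-defined sensitivity (tag vocabulary here is hypothetical)."""
    return [index for index, tags in tagged_frames
            if tags & user_sensitivities]

# Each entry pairs a frame index with the sensitivity tags assigned to it.
frames = [(0, {"violence"}),
          (1, set()),
          (2, {"medical", "violence"}),
          (3, {"medical"})]

# A user who finds medical imagery objectionable gets those frames modified.
print(select_frames_to_obfuscate(frames, {"medical"}))  # [2, 3]
```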
-
Patent number: 9460361
Abstract: Techniques for performing foreground analysis are provided. The techniques include identifying a region of interest in a video scene; detecting a static foreground object in the region of interest; and performing a foreground analysis based on tracking information to determine whether the static foreground object is abandoned or removed.
Type: Grant
Filed: August 13, 2014
Date of Patent: October 4, 2016
Assignee: International Business Machines Corporation
Inventors: Rogerio S. Feris, Arun Hampapur, Frederik C. Kjeldsen, Hao-Wei Liu
-
Patent number: 9460349
Abstract: Long-term understanding of background modeling includes determining first and second dimension gradient model derivatives of image brightness data of an image pixel along respective dimensions of two-dimensional, single channel image brightness data of a static image scene. The determined gradients are averaged with previously determined gradients of the image pixels, and with gradients of neighboring pixels as a function of their respective distances to the image pixel, the averaging generating averaged pixel gradient models for each of a plurality of pixels of the video image data of the static image scene that each have mean values and weight values. Background models for the static image scene are constructed as a function of the averaged pixel gradients and weights, wherein the background model pixels are represented by averaged pixel gradient models having similar orientation and magnitude and weights meeting a threshold weight requirement.
Type: Grant
Filed: August 11, 2015
Date of Patent: October 4, 2016
Assignee: International Business Machines Corporation
Inventors: Rogerio S. Feris, Yun Zhai
-
Publication number: 20160284097
Abstract: Foreground objects of interest are distinguished from a background model by dividing a region of interest of a video data image into a grid array of individual cells. Each of the cells is labeled as foreground if accumulated edge energy within the cell meets an edge energy threshold, or if color intensities for different colors within each cell differ by a color intensity differential threshold, or as a function of combinations of said determinations.
Type: Application
Filed: June 8, 2016
Publication date: September 29, 2016
Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Xiaoyu Wang
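The per-cell labeling rule can be sketched directly; the cell size, thresholds, and array layout below are assumptions, and the spread of per-color mean intensities stands in for the color intensity differential.

```python
import numpy as np

def label_cells(edge_energy, color_planes, edge_threshold, color_threshold, cell=8):
    """Label each grid cell foreground if its accumulated edge energy meets
    the edge threshold, or if its per-color mean intensities differ by at
    least the color threshold."""
    h, w = edge_energy.shape
    labels = np.zeros((h // cell, w // cell), dtype=bool)
    for i in range(h // cell):
        for j in range(w // cell):
            window = (slice(i * cell, (i + 1) * cell),
                      slice(j * cell, (j + 1) * cell))
            energy = edge_energy[window].sum()
            means = [plane[window].mean() for plane in color_planes]
            color_spread = max(means) - min(means)
            labels[i, j] = energy >= edge_threshold or color_spread >= color_threshold
    return labels

# Example: one strong edge in the top-left cell, uniform color everywhere.
edge = np.zeros((16, 16))
edge[0, 0] = 10.0
planes = [np.zeros((16, 16))] * 3
labels = label_cells(edge, planes, edge_threshold=5.0, color_threshold=50.0)
print(labels)  # only the top-left cell is True
```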
-
Patent number: 9443148
Abstract: Methods and systems are provided for monitoring a queue. A method includes receiving, by sensors, a non-visual identifier at predefined locations of a queue. Further, the method includes capturing, by image capture devices, images of an object possessing the non-visual identifier at the predefined locations of the queue. Further, the method includes visually tracking another object in the queue based on transformations of a predefined feature extracted from the images of the object possessing the non-visual identifier at the predefined locations.
Type: Grant
Filed: March 15, 2013
Date of Patent: September 13, 2016
Assignee: International Business Machines Corporation
Inventors: Ira L. Allen, Russell P. Bobbitt, Rogerio S. Feris, Yun Zhai
-
Patent number: 9430874
Abstract: Objects within two-dimensional video data are modeled by three-dimensional models as a function of object type and motion through manually calibrating a two-dimensional image to the three spatial dimensions of a three-dimensional modeling cube. Calibrated three-dimensional locations of an object in motion in the two-dimensional image field of view of a video data input are determined and used to determine a heading direction of the object as a function of the camera calibration and determined movement between the determined three-dimensional locations. The two-dimensional object image is replaced in the video data input with an object-type three-dimensional polygonal model having a projected bounding box that best matches a bounding box of an image blob, the model oriented in the determined heading direction. The bounding box of the replacing model is then scaled to fit the object image blob bounding box, and rendered with extracted image features.
Type: Grant
Filed: September 9, 2015
Date of Patent: August 30, 2016
Assignee: International Business Machines Corporation
Inventors: Ankur Datta, Rogerio S. Feris, Yun Zhai
-
Patent number: 9424659
Abstract: A method and system for real time processing of a sequence of video frames. A current frame in the sequence and at least one frame in the sequence occurring prior to the current frame is analyzed. Each frame includes a two-dimensional array of pixels. The sequence of video frames is received in synchronization with a recording of the video frames in real time. The analyzing includes performing a background subtraction on the at least one frame, which determines a background image and a static region mask associated with a static region consisting of a contiguous distribution of pixels in the current frame. The static region mask identifies each pixel in the static region upon the static region mask being superimposed on the current frame. A determination is made that a persistence requirement, both a non-persistence duration requirement and a persistence duration requirement, or a combination thereof has been satisfied.
Type: Grant
Filed: June 10, 2015
Date of Patent: August 23, 2016
Assignee: International Business Machines Corporation
Inventors: Rogerio S. Feris, Arun Hampapur, Zouxuan Lu, Ying-li Tian
-
Patent number: 9418445
Abstract: A method and system for real time processing of a sequence of video frames. A current frame in the sequence and at least one frame in the sequence occurring prior to the current frame is analyzed. The sequence of video frames is received in synchronization with a recording of the video frames in real time. The analyzing includes performing a background subtraction on the at least one frame, which determines a background image and a static region mask associated with a static region consisting of a contiguous distribution of pixels in the current frame, which includes executing a mixture of 3 to 5 Gaussians algorithm coupled together in a linear combination by Gaussian weight coefficients to generate the background model, a foreground image, and the static region. The static region mask identifies each pixel in the static region upon the static region mask being superimposed on the current frame.
Type: Grant
Filed: June 10, 2015
Date of Patent: August 16, 2016
Assignee: International Business Machines Corporation
Inventors: Rogerio S. Feris, Arun Hampapur, Zouxuan Lu, Ying-li Tian
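A single-pixel update step of such a Gaussian-mixture background model might look like the simplified sketch below. The matching rule, learning rate, and the variance assigned to a replaced Gaussian are assumptions; a real implementation runs this per pixel per frame and derives the background model from the highest-weighted Gaussians.

```python
import numpy as np

def update_mog(means, variances, weights, value, alpha=0.05, match_sigmas=2.5):
    """One per-pixel update of a K-Gaussian mixture background model,
    a simplified sketch of the 3-to-5 Gaussian scheme the abstract names."""
    matched = None
    for k in range(len(means)):
        if abs(value - means[k]) <= match_sigmas * np.sqrt(variances[k]):
            matched = k
            break
    if matched is None:
        # No Gaussian explains the pixel: replace the least-weighted one
        # with a new Gaussian centered on the observation (assumed variance).
        matched = int(np.argmin(weights))
        means[matched], variances[matched] = float(value), 100.0
    else:
        # Pull the matched Gaussian toward the observation.
        means[matched] += alpha * (value - means[matched])
        variances[matched] += alpha * ((value - means[matched]) ** 2 - variances[matched])
    # Update the linear-combination weight coefficients.
    for k in range(len(weights)):
        weights[k] = (1 - alpha) * weights[k] + alpha * (1.0 if k == matched else 0.0)
    return means, variances, weights

# Example: an observation near the dominant Gaussian nudges its mean.
means, variances, weights = update_mog(
    [100.0, 50.0, 0.0], [25.0] * 3, [0.8, 0.1, 0.1], 102.0)
print(round(means[0], 3))  # 100.1
```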
-
Patent number: 9418444
Abstract: A method and system for real time processing of a sequence of video frames. A current frame in the sequence and at least one frame in the sequence occurring prior to the current frame is analyzed. The sequence of video frames is received in synchronization with a recording of the video frames in real time. The analyzing includes performing a background subtraction on the at least one frame, which determines a background image and a static region mask associated with a static region consisting of a contiguous distribution of pixels in the current frame. The static region mask identifies each pixel in the static region upon the static region mask being superimposed on the current frame. A status of a static object is determined as either an abandoned status if the static object is an abandoned object or a removed status if the static object is a removed object.
Type: Grant
Filed: June 10, 2015
Date of Patent: August 16, 2016
Assignee: International Business Machines Corporation
Inventors: Rogerio S. Feris, Arun Hampapur, Zouxuan Lu, Ying-li Tian
-
Patent number: 9396548
Abstract: Foreground objects of interest are distinguished from a background model by dividing a region of interest of a video data image into a grid array of individual cells. Each of the cells is labeled as foreground if accumulated edge energy within the cell meets an edge energy threshold, or if color intensities for different colors within each cell differ by a color intensity differential threshold, or as a function of combinations of said determinations.
Type: Grant
Filed: September 22, 2015
Date of Patent: July 19, 2016
Assignee: International Business Machines Corporation
Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Xiaoyu Wang
-
Publication number: 20160203611
Abstract: A camera is positioned at a fixed vertical height above a reference plane, with the axis of the camera lens at an acute angle with respect to a perpendicular of the reference plane. One or more processors receive camera images of a multiplicity of people of unknown height, and the vertical axes of the images are transformed into pixel counts. The known heights of people from a known statistical distribution of heights of people are received by one or more processors and transformed to a normalized measurement of pixel counts, based in part on a focal length of the camera lens, the angle of the camera, and an objective function summing differences between pixel counts of the known heights of people and the unknown heights of people. The fixed vertical height of the camera is determined by adjusting the estimated camera height to minimize the objective function.
Type: Application
Filed: March 22, 2016
Publication date: July 14, 2016
Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Yun Zhai
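The height-estimation idea can be illustrated with a grid search that minimizes a summed pixel-count mismatch. The pinhole projection model below (a person of height H at distance d projects to roughly f·H/d pixels) and all numbers are deliberate simplifications, not the patent's actual objective function, which also accounts for the camera angle.

```python
def predicted_pixels(person_height, distance, focal_length):
    """Assumed pinhole model: projected height in pixels."""
    return focal_length * person_height / distance

def objective(camera_height, observations, mean_person_height, focal_length):
    """Sum of absolute differences between observed pixel counts and the
    counts predicted for the statistically expected person height."""
    total = 0.0
    for ground_range, pixels in observations:
        # Lens-to-person distance, via the candidate camera height.
        distance = (camera_height ** 2 + ground_range ** 2) ** 0.5
        total += abs(pixels - predicted_pixels(mean_person_height, distance, focal_length))
    return total

def estimate_camera_height(observations, mean_person_height, focal_length, candidates):
    """Grid search over candidate heights for the objective minimizer."""
    return min(candidates,
               key=lambda h: objective(h, observations, mean_person_height, focal_length))

# Synthetic observations generated for a true camera height of 3.0 m,
# mean person height 1.7 m, focal length 1000 px.
observations = [(r, predicted_pixels(1.7, (3.0 ** 2 + r ** 2) ** 0.5, 1000.0))
                for r in (5.0, 10.0, 15.0)]
best = estimate_camera_height(observations, 1.7, 1000.0, [2.0, 2.5, 3.0, 3.5])
print(best)  # 3.0
```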