Patents by Inventor Rogerio S. Feris
Rogerio S. Feris has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10037607
Abstract: Image-matching tracks the movements of objects from initial camera scenes to ending camera scenes across non-overlapping cameras. Paths are defined through the scenes for pairings of initial and ending cameras by their respective scene entry and exit points. For each camera pairing, the combination of one path through the initial camera scene and one path through the ending camera scene having the highest total number of tracked movements relative to all other combinations is chosen, and the scene exit point of the selected path through the initial camera and the scene entry point of the selected path into the ending camera define a path connection of the initial camera scene to the ending camera scene.
Type: Grant
Filed: January 22, 2016
Date of Patent: July 31, 2018
Assignee: International Business Machines Corporation
Inventors: Lisa M. Brown, Ankur Datta, Rogerio S. Feris, Sharathchandra Pankanti
-
Patent number: 10037604
Abstract: Foreground objects of interest are distinguished from a background model by dividing a region of interest of a video data image into a grid array of individual cells. Each of the cells is labeled as foreground if the accumulated edge energy within the cell meets an edge energy threshold, if the color intensities for different colors within the cell differ by a color intensity differential threshold, or as a function of a combination of these determinations.
Type: Grant
Filed: June 8, 2016
Date of Patent: July 31, 2018
Assignee: International Business Machines Corporation
Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Xiaoyu Wang
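The cell-labeling rule in the abstract above can be sketched in a few lines. This is a minimal illustration, not the patented implementation: the neighbor-difference edge energy and both threshold values are hypothetical choices.

```python
# Sketch: label a grid cell as foreground by edge energy or color spread.

def cell_edge_energy(cell):
    """Sum of absolute horizontal and vertical brightness differences."""
    energy = 0.0
    for r in range(len(cell)):
        for c in range(len(cell[0])):
            if c + 1 < len(cell[0]):
                energy += abs(cell[r][c] - cell[r][c + 1])
            if r + 1 < len(cell):
                energy += abs(cell[r][c] - cell[r + 1][c])
    return energy

def label_cell(gray_cell, rgb_means, edge_thresh=10.0, color_thresh=30.0):
    """Return 'foreground' if edge energy or per-channel color spread
    crosses its (hypothetical) threshold, else 'background'."""
    if cell_edge_energy(gray_cell) >= edge_thresh:
        return "foreground"
    if max(rgb_means) - min(rgb_means) >= color_thresh:
        return "foreground"
    return "background"
```

A flat, gray cell stays background; a high-contrast or strongly tinted cell is labeled foreground.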
-
Publication number: 20180107830
Abstract: Techniques are described for video redaction. In one aspect, the techniques include receiving a video for redaction; analyzing the video to generate an appearance model for the video; providing a user interface (UI) that allows a user to modify the appearance model; and, responsive to the user selecting an object from the appearance model, extending and completing the trajectory of the selected object with enhanced marking based on the user input.
Type: Application
Filed: October 15, 2016
Publication date: April 19, 2018
Inventors: Russell P. Bobbitt, Curtis H. Brobst, Rogerio S. Feris, Yun Zhai
-
Patent number: 9946951
Abstract: Embodiments are directed to an object detection system having at least one processor circuit configured to receive a series of image regions and apply to each image region in the series a detector, which is configured to determine a presence of a predetermined object in the image region. The object detection system selects and applies the detector from among a plurality of foreground detectors and a plurality of background detectors in a repeated pattern that includes: sequentially selecting one of the plurality of foreground detectors; applying the selected foreground detector to one of the series of image regions until all of the plurality of foreground detectors have been applied; selecting one of the plurality of background detectors; and applying the selected background detector to one of the series of image regions.
Type: Grant
Filed: August 12, 2015
Date of Patent: April 17, 2018
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Russell P. Bobbitt, Rogerio S. Feris, Chiao-Fe Shu, Yun Zhai
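The repeated pattern described above (cycle through every foreground detector one region at a time, then apply one background detector, then repeat) can be sketched as a schedule. The function name and the `("fg", i)` / `("bg", j)` tuple encoding are illustrative assumptions; real detectors would be callables applied to each region.

```python
# Sketch: alternate through all foreground detectors, then one background
# detector, repeating until every image region has been assigned a detector.

def detector_schedule(n_foreground, n_background, n_regions):
    """Return one ('fg', i) or ('bg', j) pick per image region."""
    picks = []
    fg_index = 0   # next foreground detector in the current sweep
    bg_index = 0   # background detectors rotate across sweeps
    while len(picks) < n_regions:
        if fg_index < n_foreground:
            picks.append(("fg", fg_index))
            fg_index += 1
        else:
            picks.append(("bg", bg_index))
            bg_index = (bg_index + 1) % n_background
            fg_index = 0  # start the next foreground sweep
    return picks
```

With two detectors of each kind and six regions this yields fg0, fg1, bg0, fg0, fg1, bg1.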
-
Publication number: 20180039867
Abstract: An embodiment of the invention provides a method for finding missing persons by learning features for person attribute classification based on deep learning. A first component of a neural network identifies geographic locations of training images, and a second component of the neural network identifies weather information for each of the identified geographic locations. A third component of the neural network generates image pairs from the training images and, for each image pair, determines whether the images of the pair include the same person. The neural network generates neural network parameters from the identified geographic locations, the weather information for each of the identified geographic locations, and the determination of whether the images of the image pairs include the same person.
Type: Application
Filed: August 2, 2016
Publication date: February 8, 2018
Applicant: International Business Machines Corporation
Inventors: Yu Cheng, Rogerio S. Feris, Clifford A. Pickover, Maja Vukovic, Jing Wang
-
Patent number: 9885568
Abstract: A camera is positioned at a fixed vertical height above a reference plane, with the axis of the camera lens at an acute angle with respect to the perpendicular of the reference plane. One or more processors receive camera images of a multiplicity of people of unknown height, and the vertical extents of the people in the images are transformed into pixel counts. Known heights of people, from a known statistical distribution of human heights, are received by the one or more processors and transformed into a normalized measurement of pixel counts, based in part on the focal length of the camera lens, the angle of the camera, and an objective function summing the differences between the pixel counts of the known heights and the unknown heights. The fixed vertical height of the camera is determined by adjusting the estimated camera height to minimize the objective function.
Type: Grant
Filed: March 22, 2016
Date of Patent: February 6, 2018
Assignee: International Business Machines Corporation
Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Yun Zhai
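The estimation idea can be illustrated with a toy grid search: predict pixel counts from a known mean person height under each candidate camera height, and pick the candidate minimizing the summed error against the observed pixel counts. The pinhole-style model below (pixels ≈ focal × person height ÷ camera height) is a deliberate simplification of the patented geometry, and all names and default values are hypothetical.

```python
# Toy sketch: estimate camera height by minimizing a pixel-count objective.

def predicted_pixels(person_height, camera_height, focal=1000.0):
    """Simplified projection: apparent pixel height of a person."""
    return focal * person_height / camera_height

def estimate_camera_height(observed_pixels, mean_person_height,
                           candidates, focal=1000.0):
    """Pick the candidate camera height minimizing summed squared error
    between predicted and observed pixel counts."""
    def objective(h_cam):
        return sum((predicted_pixels(mean_person_height, h_cam, focal) - p) ** 2
                   for p in observed_pixels)
    return min(candidates, key=objective)
```

If people of mean height 1.7 m appear 425 pixels tall under this model, a 4.0 m candidate height zeroes the objective and is selected.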
-
Patent number: 9858483
Abstract: Long-term understanding of background modeling includes determining first and second dimension gradient model derivatives of image brightness data of an image pixel along respective dimensions of two-dimensional, single-channel image brightness data of a static image scene. The determined gradients are averaged with previously determined gradients of the image pixels, and with gradients of neighboring pixels as a function of their respective distances to the image pixel, the averaging generating averaged pixel gradient models, each having mean values and weight values, for each of a plurality of pixels of the video image data of the static image scene. Background models for the static image scene are constructed as a function of the averaged pixel gradients and weights, wherein the background model pixels are represented by averaged pixel gradient models having similar orientation and magnitude, and weights meeting a threshold weight requirement.
Type: Grant
Filed: August 31, 2016
Date of Patent: January 2, 2018
Assignee: International Business Machines Corporation
Inventors: Rogerio S. Feris, Yun Zhai
-
Patent number: 9805505
Abstract: Objects within two-dimensional video data are modeled by three-dimensional models as a function of object type and motion by manually calibrating a two-dimensional image to the three spatial dimensions of a three-dimensional modeling cube. Calibrated three-dimensional locations of an object in motion within the two-dimensional image field of view of a video data input are determined and used to derive a heading direction of the object as a function of the camera calibration and the determined movement between the three-dimensional locations. The two-dimensional object image is replaced in the video data input with an object-type three-dimensional polygonal model whose projected bounding box best matches the bounding box of the image blob, the model oriented in the determined heading direction. The bounding box of the replacing model is then scaled to fit the object image blob bounding box and rendered with extracted image features.
Type: Grant
Filed: July 29, 2016
Date of Patent: October 31, 2017
Assignee: International Business Machines Corporation
Inventors: Ankur Datta, Rogerio S. Feris, Yun Zhai
-
Patent number: 9773195
Abstract: An aspect of providing user-configurable settings for content obfuscation includes, for each media segment in a media file, inputting the media segment to a neural network, applying a classifier to features output by the neural network, and determining from the results of the classifier which images in the media segment contain sensitive characteristics. The classifier specifies images that are predetermined to include sensitive characteristics. An aspect further includes assigning a tag, indicating a type of sensitivity, to each of the images in the media segment that contain the sensitive characteristics. An aspect also includes receiving at least one user-defined sensitivity, indicating an action or condition that the user considers objectionable, identifying a subset of the tagged images that correlate to the user-defined sensitivity, and visually modifying, during playback of the media file, the appearance of that subset of the tagged images.
Type: Grant
Filed: October 5, 2016
Date of Patent: September 26, 2017
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Rogerio S. Feris, Itzhack Goldberg, Minkyong Kim, Clifford A. Pickover, Neil Sondhi
-
Patent number: 9736446
Abstract: An aspect of automated color adjustment of media files includes receiving profile data corresponding to a subject of image capture. The profile data indicates color values of an element associated with the subject. An aspect also includes storing the profile data in a memory device coupled to a computer processor, capturing an image of the subject, and processing the image and adjusting color aspects based on the color values associated with the element.
Type: Grant
Filed: January 28, 2016
Date of Patent: August 15, 2017
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Rogerio S. Feris, Matous Havlena, Minkyong Kim, Ying Li, Clifford A. Pickover, Valentina Salapura
-
Publication number: 20170223326
Abstract: An aspect of automated color adjustment of media files includes receiving profile data corresponding to a subject of image capture. The profile data indicates color values of an element associated with the subject. An aspect also includes storing the profile data in a memory device coupled to a computer processor, capturing an image of the subject, and processing the image and adjusting color aspects based on the color values associated with the element.
Type: Application
Filed: January 28, 2016
Publication date: August 3, 2017
Inventors: Rogerio S. Feris, Matous Havlena, Minkyong Kim, Ying Li, Clifford A. Pickover, Valentina Salapura
-
Patent number: 9710924
Abstract: Field-of-view overlap among multiple cameras is automatically determined as a function of the temporal overlap of object tracks determined within their fields of view. Object tracks with the highest similarity value are assigned into pairs, and the portions of the assigned object track pairs having a temporally overlapping period of time are determined. Scene entry points are determined from object locations on the tracks at the beginning of the temporally overlapping period of time, and scene exit points from object locations at the end of that period. Boundary lines for the overlapping field-of-view portions within the corresponding camera fields of view are defined as a function of the determined entry and exit points in their respective fields of view.
Type: Grant
Filed: September 14, 2015
Date of Patent: July 18, 2017
Assignee: International Business Machines Corporation
Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Yun Zhai
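The entry/exit step for an already-paired track can be sketched as follows, assuming each track is a dict mapping shared timestamps to (x, y) positions in its own camera view. The similarity-based pairing itself is omitted, and the names are illustrative.

```python
# Sketch: read scene entry/exit points off the temporally overlapping
# span of a paired object track, one point per camera view.

def overlap_entry_exit(track_a, track_b):
    """Return ((entry_a, exit_a), (entry_b, exit_b)) over the shared
    time span, or None if the tracks never overlap in time."""
    common = sorted(set(track_a) & set(track_b))
    if not common:
        return None
    t_start, t_end = common[0], common[-1]
    return ((track_a[t_start], track_a[t_end]),
            (track_b[t_start], track_b[t_end]))
```

The returned points per camera are the raw material for the boundary lines the abstract describes.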
-
Publication number: 20170200051
Abstract: An approach for re-identifying an object in a test image is presented. Similarity measures between the test image and training images captured by a first camera are determined. The similarity measures are based on Bhattacharyya distances between feature representations of an estimated background region of the test image and feature representations of background regions of the training images. A transformed test image based on the Bhattacharyya distances has a brightness that is different from the test image's brightness, and matches a brightness of training images captured by a second camera. An appearance of the transformed test image resembles an appearance of a capture of the test image by the second camera. Another image included in test images captured by the second camera is identified as being closest in appearance to the transformed test image, and another object in the identified other image is a re-identification of the object.
Type: Application
Filed: March 28, 2017
Publication date: July 13, 2017
Inventors: Lisa M. Brown, Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti
-
Publication number: 20170174343
Abstract: Coffee or another drink, for example a caffeine-containing drink, is delivered by an unmanned aerial vehicle (UAV)/drone to individuals who would like the drink or who have a predetermined cognitive state. The drink is connected to the UAV, and the UAV flies to an area including people and uses sensors to scan for an individual who has gestured that they would like the drink, or for whom an electronic analysis of sensor data indicates a predetermined cognitive state. The UAV then flies to the individual to deliver the drink. The analysis can include profile data of people, including electronic calendar data, which can be used to determine a potentially predetermined cognitive state.
Type: Application
Filed: December 22, 2015
Publication date: June 22, 2017
Inventors: Thomas David ERICKSON, Rogerio S. FERIS, Clifford A. PICKOVER, Maja VUKOVIC
-
Publication number: 20170178218
Abstract: An information processing system, a computer readable storage medium, and a method for providing a recommendation for a plaything as a recommended item can include analyzing information received from a person monitoring system to provide an analysis for providing the recommendation for the plaything, and based on the analysis, sending a representation of the recommended item in a signal to a shopping cart such as an online shopping cart. The system can include an analysis module that receives information from a person monitoring system, and at least one processor configured to analyze information received from the person monitoring system (for one or more persons) to provide an analysis. The analysis provides a recommendation for a plaything. The processor can further send a representation of the recommended item in a signal to a shopping cart to upload into the shopping cart based on the analysis. Other embodiments are disclosed.
Type: Application
Filed: December 17, 2015
Publication date: June 22, 2017
Inventors: Rogerio S. FERIS, James R. KOZLOSKI, Clifford A. PICKOVER, Maja VUKOVIC
-
Publication number: 20170168703
Abstract: Information relating to at least one of a user and a user environment is acquired. A user cognitive state is determined based on the acquired information. A graphical control element is automatically configured based on the user cognitive state. The graphical control element is automatically presented on a display interface of a user device to control viewing of content displayed on the user device.
Type: Application
Filed: December 15, 2015
Publication date: June 15, 2017
Inventors: Rogerio S. Feris, James R. Kozloski, Clifford A. Pickover, Maja Vukovic
-
Patent number: 9652863
Abstract: Multi-mode video event indexing includes determining a quality of object distinctiveness with respect to images from a video stream input. If the detected quality of object distinctiveness meets a threshold level, a high-quality analytic mode is selected from multiple modes and applied to the video input images via a hardware device to determine object activity within them; otherwise, a low-quality analytic mode, different from the high-quality analytic mode, is selected and applied instead.
Type: Grant
Filed: August 19, 2015
Date of Patent: May 16, 2017
Assignee: International Business Machines Corporation
Inventors: Russell P. Bobbitt, Lisa M. Brown, Rogerio S. Feris, Arun Hampapur, Yun Zhai
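The mode-selection rule above reduces to a threshold test. A minimal sketch, in which the mode labels, the default threshold, and the distinctiveness score are all hypothetical:

```python
# Sketch: choose the analytic mode from an object-distinctiveness score.

def select_mode(distinctiveness, threshold=0.5):
    """Pick the high-quality analytic mode only when the measured
    object-distinctiveness score meets the threshold."""
    return "high_quality" if distinctiveness >= threshold else "low_quality"
```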
-
Publication number: 20170132495
Abstract: An aspect of providing user-configurable settings for content obfuscation includes, for each media segment in a media file, inputting the media segment to a neural network, applying a classifier to features output by the neural network, and determining from the results of the classifier which images in the media segment contain sensitive characteristics. The classifier specifies images that are predetermined to include sensitive characteristics. An aspect further includes assigning a tag, indicating a type of sensitivity, to each of the images in the media segment that contain the sensitive characteristics. An aspect also includes receiving at least one user-defined sensitivity, indicating an action or condition that the user considers objectionable, identifying a subset of the tagged images that correlate to the user-defined sensitivity, and visually modifying, during playback of the media file, the appearance of that subset of the tagged images.
Type: Application
Filed: October 5, 2016
Publication date: May 11, 2017
Inventors: Rogerio S. Feris, Itzhack Goldberg, Minkyong Kim, Clifford A. Pickover, Neil Sondhi
-
Patent number: 9633263
Abstract: An approach for re-identifying an object in a first test image is presented. Brightness transfer functions (BTFs) between respective pairs of training images are determined. Respective similarity measures are determined between the first test image and each of the training images captured by the first camera (the first training images). A weighted brightness transfer function (WBTF) is determined by combining the BTFs weighted by weights of the first training images. The weights are based on the similarity measures. The first test image is transformed by the WBTF to better match one of the training images captured by the second camera. Another test image, captured by the second camera, is identified because it is closer in appearance to the transformed test image than other test images captured by the second camera. An object in the identified test image is a re-identification of the object in the first test image.
Type: Grant
Filed: October 9, 2012
Date of Patent: April 25, 2017
Assignee: International Business Machines Corporation
Inventors: Lisa M. Brown, Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti
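The WBTF combination step can be sketched with each pairwise BTF represented as a lookup table over brightness values: normalize the similarity-based weights, then average the tables. The table representation and names are illustrative assumptions; real BTFs would be derived from matched training-image histograms.

```python
# Sketch: combine per-pair brightness transfer functions (as lookup
# tables) into one weighted brightness transfer function (WBTF).

def weighted_btf(btfs, weights):
    """Average BTF lookup tables using normalized similarity weights."""
    total = sum(weights)
    norm = [w / total for w in weights]
    return [sum(w * btf[i] for btf, w in zip(btfs, norm))
            for i in range(len(btfs[0]))]

def apply_btf(table, pixels):
    """Map each pixel brightness through the transfer table."""
    return [table[p] for p in pixels]
```

With two 3-entry tables and weights 1 and 3, the WBTF is the 0.25/0.75 blend of the tables, which is then applied per pixel.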
-
Patent number: 9633045
Abstract: Images are retrieved and ranked according to relevance to the attributes of a multi-attribute query by training image attribute detectors for the different attributes annotated in a training dataset. Pair-wise correlations between pairs of the annotated attributes are learned from the training dataset of images. Image datasets are searched via the trained attribute detectors for images comprising the attributes in a multi-attribute query. The retrieved images are ranked as a function of comprising attributes that are not within the query subset of attributes but are paired to one of the query attributes by the pair-wise correlations, wherein the ranking is ordered by the likelihood that those attributes will appear in an image with the paired query attribute.
Type: Grant
Filed: January 13, 2016
Date of Patent: April 25, 2017
Assignee: International Business Machines Corporation
Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Behjat Siddiquie
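The ranking step can be sketched by scoring each retrieved image on how strongly its extra (non-query) attributes correlate pair-wise with the query attributes. The dict-of-frozensets correlation store and all names are illustrative; the correlations themselves would be learned from the annotated training set.

```python
# Sketch: re-rank retrieved images by pair-wise attribute correlations.

def rank_by_correlation(images, query_attrs, correlations):
    """Sort images by summed pair-wise correlation of their extra
    attributes with the query attributes, strongest first.

    images: list of (name, set_of_detected_attributes)
    correlations: dict mapping frozenset({attr_a, attr_b}) -> correlation
    """
    def score(item):
        _, attrs = item
        extra = attrs - set(query_attrs)
        return sum(correlations.get(frozenset({e, q}), 0.0)
                   for e in extra for q in query_attrs)
    return sorted(images, key=score, reverse=True)
```

For a query on "hat", an image whose extra attribute "sunglasses" correlates strongly with "hat" outranks one whose extra attribute correlates weakly.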