Patents by Inventor Jeyasri Subramanian
Jeyasri Subramanian has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240087287
Abstract: A system determines an input video and a first annotated image from the input video which identifies an object of interest. The system initiates a tracker based on the first annotated image and the input video. The tracker generates, based on the first annotated image and the input video, information including: a sliding window for false positives; a first set of unlabeled images from the input video; and at least two images with corresponding labeled states. A semi-supervised classifier classifies, based on the information, the first set of unlabeled images from the input video. If a first unlabeled image is classified as a false positive, the system reinitiates the tracker based on a second annotated image occurring in a frame prior to a frame with the false positive. The system generates an output video comprising the input video displayed with tracking on the object of interest.
Type: Application
Filed: September 8, 2022
Publication date: March 14, 2024
Applicant: Palo Alto Research Center Incorporated
Inventors: Matthew A. Shreve, Robert R. Price, Jeyasri Subramanian, Sumeet Menon
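The reinitialization loop this abstract describes can be illustrated with a toy Python sketch. Everything here is a hypothetical simplification, not the patented implementation: `step` stands in for the tracker's per-frame proposal, `classify` stands in for the semi-supervised classifier, and a flagged false positive causes the tracker to restart from the last trusted frame.

```python
def track_with_reinit(num_frames, first_annotation, step, classify):
    """Track an object through num_frames frames.

    step(prev_annotation) -> proposed annotation for the next frame
    classify(frame_index, annotation) -> True if it is a false positive
    """
    annotations = {0: first_annotation}
    trusted = 0  # index of the last frame whose annotation was accepted
    for i in range(1, num_frames):
        annotations[i] = step(annotations[i - 1])   # tracker proposes a box
        if classify(i, annotations[i]):             # classifier flags it
            # Reinitiate from an annotation in a frame *prior* to the failure.
            annotations[i] = annotations[trusted]
        else:
            trusted = i
    return annotations
```

With a drifting "tracker" (`lambda a: a + 1`) and a classifier that rejects any annotation above 2, the loop repeatedly snaps back to the last trusted value instead of drifting further.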
-
Publication number: 20240071132
Abstract: A method of image annotation includes obtaining a candidate annotation map for an annotation task for an image from each of a set of annotation models, wherein each of the candidate annotation maps includes suggested annotations for the image; receiving user selections or modifications of at least one of the suggested annotations from one or more of the candidate annotation maps; and generating a final annotation map based on the user selections or modifications from the one or more of the candidate annotation maps.
Type: Application
Filed: November 6, 2023
Publication date: February 29, 2024
Inventors: Matthew Shreve, Raja Bala, Jeyasri Subramanian
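The merge step can be sketched in a few lines of Python. This is an illustrative toy, not the patented method: `candidate_maps`, `selections`, and `modifications` are assumed data shapes chosen for the example.

```python
def build_final_map(candidate_maps, selections, modifications=None):
    """Merge user-accepted suggestions from several candidate maps.

    candidate_maps: {model_name: {region_id: suggested_label}}
    selections: iterable of (model_name, region_id) pairs the user accepted
    modifications: optional {region_id: replacement_label} user edits
    """
    final = {}
    for model, region in selections:
        final[region] = candidate_maps[model][region]
    for region, label in (modifications or {}).items():
        final[region] = label  # user modifications override suggestions
    return final
```

A user might accept region `r2` from one model, region `r1` from another, then hand-edit `r1`'s label; the final map reflects all three decisions.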
-
Publication number: 20240044783
Abstract: Generating one or more high-resolution atmospheric gas concentration maps using geography-informed machine learning includes obtaining a remote sensing dataset constrained by at least one temporal window and at least one spatial window defining a first geographic area. The remote sensing dataset includes at least a first set of atmospheric gas concentration data for a plurality of atmospheric gases. A training dataset is generated based on the remote sensing dataset. A machine learning model is trained with the training dataset to predict a plurality of atmospheric gas concentration values for at least one atmospheric gas of the plurality of atmospheric gases in a given geographic area and with a spatial resolution that is greater than a spatial resolution of the atmospheric gas concentration data provided as an input to the machine learning model.
Type: Application
Filed: August 2, 2022
Publication date: February 8, 2024
Inventors: Kalaivani Ramea Kubendran, Md Nurul Huda, David Schwartz, Jeyasri Subramanian
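The core idea (predicting gas concentrations at a finer grid than the input data) can be illustrated with a deliberately tiny stand-in for the machine learning model. This toy learns only a per-land-use-class mean from coarse observations and then assigns that mean to each fine-resolution cell by its land-use class; the patent's actual geography-informed model is far richer, and all names here are invented for the example.

```python
from collections import defaultdict

def train_landuse_model(coarse_values, coarse_landuse):
    """Learn the mean gas concentration per land-use class from
    coarse-resolution observations (toy stand-in for the ML model)."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for value, cls in zip(coarse_values, coarse_landuse):
        sums[cls] += value
        counts[cls] += 1
    return {cls: sums[cls] / counts[cls] for cls in sums}

def predict_fine(model, fine_landuse):
    """Predict a concentration for every fine-resolution cell from its
    land-use class, yielding a higher-spatial-resolution map."""
    return [model[cls] for cls in fine_landuse]
```

The fine grid can have many more cells than the coarse one, which is what raises the output's spatial resolution above that of the input concentrations.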
-
Publication number: 20240046143
Abstract: A geography-informed machine learning (GIML) model is trained on a first remote sensing dataset corresponding to a first geographic area and including a first set of atmospheric gas concentration data for at least one atmospheric gas, a first set of multispectral data, and a first set of spatially autocorrelated land use classifications. The GIML model receives input including a second remote sensing dataset corresponding to a second geographic area. The second remote sensing dataset includes a second set of atmospheric gas concentration data for the atmospheric gas, a second set of multispectral data, and a second set of spatially autocorrelated land use classifications. The GIML model generates, for the second geographic area, a plurality of predicted atmospheric gas concentration values for the atmospheric gas having a spatial resolution that is greater than a spatial resolution of the first and second sets of atmospheric gas concentration data.
Type: Application
Filed: August 2, 2022
Publication date: February 8, 2024
Inventors: Kalaivani Ramea Kubendran, Md Nurul Huda, David Schwartz, Jeyasri Subramanian
-
Publication number: 20240046568
Abstract: A system is provided which mixes static scene and live annotations for labeled dataset collection. A first recording device obtains a 3D mesh of a scene with physical objects. The first recording device marks, while in a first mode, first annotations for a physical object displayed in the 3D mesh. The system switches to a second mode. The system displays, on the first recording device while in the second mode, the 3D mesh including a first projection indicating a 2D bounding area corresponding to the marked first annotations. The first recording device marks, while in the second mode, second annotations for the physical object or another physical object displayed in the 3D mesh. The system switches to the first mode. The first recording device displays, while in the first mode, the 3D mesh including a second projection indicating a 2D bounding area corresponding to the marked second annotations.
Type: Application
Filed: August 2, 2022
Publication date: February 8, 2024
Applicant: Palo Alto Research Center Incorporated
Inventors: Matthew A. Shreve, Jeyasri Subramanian
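The "projection indicating a 2D bounding area" step can be sketched with a simple pinhole camera model: project each corner of a 3D annotation into the image plane and take the extremes. This is a generic computer-graphics illustration under assumed conventions (camera at the origin, +z forward), not the patented projection pipeline.

```python
def project_point(point, focal=1.0):
    """Pinhole projection of a 3D point (camera at origin, +z forward)."""
    x, y, z = point
    return (focal * x / z, focal * y / z)

def bounding_area_2d(corners_3d, focal=1.0):
    """2D bounding area (xmin, ymin, xmax, ymax) enclosing the projected
    corners of a 3D annotation."""
    pts = [project_point(c, focal) for c in corners_3d]
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return (min(xs), min(ys), max(xs), max(ys))
```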
-
Patent number: 11810396
Abstract: A method of image annotation includes selecting a plurality of annotation models related to an annotation task for an image, obtaining a candidate annotation map for the image from each of the plurality of annotation models, and selecting at least one of the candidate annotation maps to be displayed via a user interface, the candidate annotation maps comprising suggested annotations for the image. The method further includes receiving user selections or modifications of at least one of the suggested annotations from the candidate annotation map and generating a final annotation map based on the user selections or modifications.
Type: Grant
Filed: April 16, 2021
Date of Patent: November 7, 2023
Assignee: Xerox Corporation
Inventors: Matthew Shreve, Raja Bala, Jeyasri Subramanian
-
Publication number: 20220335239
Abstract: A method of image annotation includes selecting a plurality of annotation models related to an annotation task for an image, obtaining a candidate annotation map for the image from each of the plurality of annotation models, and selecting at least one of the candidate annotation maps to be displayed via a user interface, the candidate annotation maps comprising suggested annotations for the image. The method further includes receiving user selections or modifications of at least one of the suggested annotations from the candidate annotation map and generating a final annotation map based on the user selections or modifications.
Type: Application
Filed: April 16, 2021
Publication date: October 20, 2022
Inventors: Matthew Shreve, Raja Bala, Jeyasri Subramanian
-
Patent number: 11431894
Abstract: One embodiment can include a system for providing an image-capturing recommendation. During operation, the system receives, from a mobile computing device, one or more images. The one or more images are captured by one or more cameras associated with the mobile computing device. The system analyzes the received images to obtain image-capturing conditions for capturing images of a target within a physical space; determines, based on the obtained image-capturing conditions and a predetermined image-quality requirement, one or more image-capturing settings; and recommends the determined one or more image-capturing settings to a user.
Type: Grant
Filed: February 6, 2020
Date of Patent: August 30, 2022
Assignee: Palo Alto Research Center Incorporated
Inventors: Matthew A. Shreve, Raja Bala, Jeyasri Subramanian
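The analyze-then-recommend flow can be sketched with a toy Python example. The brightness measure, thresholds, and setting values below are all invented for illustration; the patent does not specify them.

```python
def analyze_conditions(pixel_means):
    """Reduce the received images to one brightness estimate (0-255 scale).

    pixel_means: one mean pixel intensity per captured image.
    """
    return sum(pixel_means) / len(pixel_means)

def recommend_settings(brightness, target=128):
    """Map measured capture conditions to recommended settings.

    Thresholds and ISO values are illustrative placeholders for a
    predetermined image-quality requirement.
    """
    if brightness < target - 40:        # too dark for the requirement
        return {"flash": True, "iso": 800}
    if brightness > target + 40:        # overexposed scene
        return {"flash": False, "iso": 100}
    return {"flash": False, "iso": 200} # conditions are acceptable
```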
-
Publication number: 20210250492
Abstract: One embodiment can include a system for providing an image-capturing recommendation. During operation, the system receives, from a mobile computing device, one or more images. The one or more images are captured by one or more cameras associated with the mobile computing device. The system analyzes the received images to obtain image-capturing conditions for capturing images of a target within a physical space; determines, based on the obtained image-capturing conditions and a predetermined image-quality requirement, one or more image-capturing settings; and recommends the determined one or more image-capturing settings to a user.
Type: Application
Filed: February 6, 2020
Publication date: August 12, 2021
Applicant: Palo Alto Research Center Incorporated
Inventors: Matthew A. Shreve, Raja Bala, Jeyasri Subramanian
-
Patent number: 11068746
Abstract: A method for predicting the realism of an object within an image includes generating a training image set for a predetermined object type. The training image set comprises one or more training images at least partially generated using a computer. A pixel level training spatial realism map is generated for each training image of the one or more training images. Each training spatial realism map is configured to represent a perceptual realism of the corresponding training image. A predictor is trained using the training image set and the corresponding training spatial realism maps. An image of the predetermined object is received. A spatial realism map of the received image is produced using the trained predictor.
Type: Grant
Filed: December 28, 2018
Date of Patent: July 20, 2021
Assignee: Palo Alto Research Center Incorporated
Inventors: Raja Bala, Matthew Shreve, Jeyasri Subramanian, Pei Li
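The train-then-predict structure can be shown with a deliberately crude predictor: it memorizes the mean realism score seen for each pixel intensity and emits a per-pixel map for new images. The real predictor would be a learned model; this lookup table is purely an illustrative assumption, as are the flat pixel-list data shapes.

```python
from collections import defaultdict

def train_realism_predictor(images, realism_maps):
    """Toy predictor trainer.

    images: list of flat pixel-intensity lists
    realism_maps: matching per-pixel realism scores in [0, 1]
    Returns a function mapping an image to its predicted realism map.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for img, rmap in zip(images, realism_maps):
        for px, score in zip(img, rmap):
            sums[px] += score
            counts[px] += 1
    table = {px: sums[px] / counts[px] for px in sums}
    # Unseen intensities fall back to a neutral 0.5 realism score.
    return lambda img: [table.get(px, 0.5) for px in img]
```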
-
Publication number: 20200210770
Abstract: A method for predicting the realism of an object within an image includes generating a training image set for a predetermined object type. The training image set comprises one or more training images at least partially generated using a computer. A pixel level training spatial realism map is generated for each training image of the one or more training images. Each training spatial realism map is configured to represent a perceptual realism of the corresponding training image. A predictor is trained using the training image set and the corresponding training spatial realism maps. An image of the predetermined object is received. A spatial realism map of the received image is produced using the trained predictor.
Type: Application
Filed: December 28, 2018
Publication date: July 2, 2020
Inventors: Raja Bala, Matthew Shreve, Jeyasri Subramanian, Pei Li
-
Patent number: 9767349
Abstract: A method for determining an emotional state of a subject taking an assessment. The method includes eliciting predicted facial expressions from a subject who is administered questions, each intended to elicit a certain facial expression that conveys a baseline characteristic of the subject; receiving a video sequence capturing the subject answering the questions; determining an observable physical behavior exhibited by the subject across a series of frames corresponding to a sample question; associating the observed behavior with the emotional state that corresponds with the facial expression; and training a classifier using the associations. The method includes receiving a second video sequence capturing the subject during an assessment and applying features extracted from the second video sequence to the classifier for determining the emotional state of the subject in response to an assessment item administered during the assessment.
Type: Grant
Filed: May 9, 2016
Date of Patent: September 19, 2017
Assignee: Xerox Corporation
Inventors: Matthew Adam Shreve, Jayant Kumar, Raja Bala, Phillip J. Emmett, Megan Clar, Jeyasri Subramanian, Eric Harte
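The calibrate-then-classify pattern can be sketched as a one-nearest-neighbour lookup over the baseline associations. Reducing each observed behavior to a single numeric feature and using 1-NN are simplifying assumptions for this example, not the patented classifier.

```python
def train_emotion_classifier(calibration):
    """calibration: (behavior_feature, emotion) pairs collected while the
    baseline questions elicit known facial expressions."""
    return list(calibration)

def classify_emotion(model, feature):
    """Return the emotion whose calibrated feature value is closest
    (1-nearest-neighbour over the baseline associations)."""
    _, emotion = min(model, key=lambda fe: abs(fe[0] - feature))
    return emotion
```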
-
Patent number: 9354711
Abstract: A method, non-transitory computer-readable medium, and apparatus for localizing a region of interest using a dynamic hand gesture are disclosed. For example, the method captures the ego-centric video containing the dynamic hand gesture, analyzes a frame of the ego-centric video to detect pixels that correspond to a fingertip using a hand segmentation algorithm, analyzes temporally one or more frames of the ego-centric video to compute a path of the fingertip in the dynamic hand gesture, localizes the region of interest based on the path of the fingertip in the dynamic hand gesture, and performs an action based on an object in the region of interest.
Type: Grant
Filed: November 25, 2014
Date of Patent: May 31, 2016
Assignee: Xerox Corporation
Inventors: Jayant Kumar, Xiaodong Yang, Qun Li, Raja Bala, Edgar A. Bernal, Jeyasri Subramanian
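The final localization step, turning a fingertip path into a region of interest, can be sketched as the bounding box of the fingertip positions across frames. Treating the ROI as an axis-aligned bounding box is an assumption made for this example; the patent covers more general gesture paths.

```python
def localize_roi(fingertip_path):
    """Axis-aligned region of interest (xmin, ymin, xmax, ymax) enclosing
    the fingertip's (x, y) positions computed across video frames."""
    xs = [p[0] for p in fingertip_path]
    ys = [p[1] for p in fingertip_path]
    return (min(xs), min(ys), max(xs), max(ys))
```

An action (e.g. recognizing the object inside the box) would then be performed on the image region this tuple delimits.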
-
Publication number: 20160091976
Abstract: A method, non-transitory computer-readable medium, and apparatus for localizing a region of interest using a dynamic hand gesture are disclosed. For example, the method captures the ego-centric video containing the dynamic hand gesture, analyzes a frame of the ego-centric video to detect pixels that correspond to a fingertip using a hand segmentation algorithm, analyzes temporally one or more frames of the ego-centric video to compute a path of the fingertip in the dynamic hand gesture, localizes the region of interest based on the path of the fingertip in the dynamic hand gesture, and performs an action based on an object in the region of interest.
Type: Application
Filed: November 25, 2014
Publication date: March 31, 2016
Inventors: Jayant Kumar, Xiaodong Yang, Qun Li, Raja Bala, Edgar A. Bernal, Jeyasri Subramanian