Patents by Inventor Jeyasri Subramanian

Jeyasri Subramanian has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240087287
    Abstract: A system determines an input video and a first annotated image from the input video which identifies an object of interest. The system initiates a tracker based on the first annotated image and the input video. The tracker generates, based on the first annotated image and the input video, information including: a sliding window for false positives; a first set of unlabeled images from the input video; and at least two images with corresponding labeled states. A semi-supervised classifier classifies, based on the information, the first set of unlabeled images from the input video. If a first unlabeled image is classified as a false positive, the system reinitiates the tracker based on a second annotated image occurring in a frame prior to a frame with the false positive. The system generates an output video comprising the input video displayed with tracking on the object of interest.
    Type: Application
    Filed: September 8, 2022
    Publication date: March 14, 2024
    Applicant: Palo Alto Research Center Incorporated
    Inventors: Matthew A. Shreve, Robert R. Price, Jeyasri Subramanian, Sumeet Menon
  • Publication number: 20240071132
    Abstract: A method of image annotation includes obtaining a candidate annotation map for an annotation task for an image from each of a set of annotation models, wherein each of the candidate annotation maps includes suggested annotations for the image; receiving user selections or modifications of at least one of the suggested annotations from one or more of the candidate annotation maps; and generating a final annotation map based on the user selections or modifications from the one or more of the candidate annotation maps.
    Type: Application
    Filed: November 6, 2023
    Publication date: February 29, 2024
    Inventors: Matthew Shreve, Raja Bala, Jeyasri Subramanian
  • Publication number: 20240044783
    Abstract: Generating one or more high-resolution atmospheric gas concentration maps using geography-informed machine learning includes obtaining a remote sensing dataset constrained by at least one temporal window and at least one spatial window defining a first geographic area. The remote sensing dataset includes at least a first set of atmospheric gas concentration data for a plurality of atmospheric gases. A training dataset is generated based on the remote sensing dataset. A machine learning model is trained with the training dataset to predict a plurality of atmospheric gas concentration values for at least one atmospheric gas of the plurality of atmospheric gases in a given geographic area and with a spatial resolution that is greater than a spatial resolution of atmospheric gas concentration data provided as an input to the machine learning model.
    Type: Application
    Filed: August 2, 2022
    Publication date: February 8, 2024
    Inventors: Kalaivani Ramea Kubendran, Md Nurul Huda, David Schwartz, Jeyasri Subramanian
  • Publication number: 20240046143
    Abstract: A geography-informed machine learning (GIML) model is trained on a first remote sensing dataset corresponding to a first geographic area and including a first set of atmospheric gas concentration data for at least one atmospheric gas, a first set of multispectral data, and a first set of spatially autocorrelated land use classifications. The GIML model receives input including a second remote sensing dataset corresponding to a second geographic area. The second remote sensing dataset includes a second set of atmospheric gas concentration data for the atmospheric gas, a second set of multispectral data, and a second set of spatially autocorrelated land use classifications. The GIML model generates, for the second geographic area, a plurality of predicted atmospheric gas concentration values for the atmospheric gas having a spatial resolution that is greater than a spatial resolution of the first and second sets of atmospheric gas concentration data.
    Type: Application
    Filed: August 2, 2022
    Publication date: February 8, 2024
    Inventors: Kalaivani Ramea Kubendran, Md Nurul Huda, David Schwartz, Jeyasri Subramanian
  • Publication number: 20240046568
    Abstract: A system is provided which mixes static scene and live annotations for labeled dataset collection. A first recording device obtains a 3D mesh of a scene with physical objects. The first recording device marks, while in a first mode, first annotations for a physical object displayed in the 3D mesh. The system switches to a second mode. The system displays, on the first recording device while in the second mode, the 3D mesh including a first projection indicating a 2D bounding area corresponding to the marked first annotations. The first recording device marks, while in the second mode, second annotations for the physical object or another physical object displayed in the 3D mesh. The system switches to the first mode. The first recording device displays, while in the first mode, the 3D mesh including a second projection indicating a 2D bounding area corresponding to the marked second annotations.
    Type: Application
    Filed: August 2, 2022
    Publication date: February 8, 2024
    Applicant: Palo Alto Research Center Incorporated
    Inventors: Matthew A. Shreve, Jeyasri Subramanian
  • Patent number: 11810396
    Abstract: A method of image annotation includes selecting a plurality of annotation models related to an annotation task for an image, obtaining a candidate annotation map for the image from each of the plurality of annotation models, and selecting at least one of the candidate annotation maps to be displayed via a user interface, the candidate annotation maps comprising suggested annotations for the image. The method further includes receiving user selections or modifications of at least one of the suggested annotations from the candidate annotation map and generating a final annotation map based on the user selections or modifications.
    Type: Grant
    Filed: April 16, 2021
    Date of Patent: November 7, 2023
    Assignee: Xerox Corporation
    Inventors: Matthew Shreve, Raja Bala, Jeyasri Subramanian
  • Publication number: 20220335239
    Abstract: A method of image annotation includes selecting a plurality of annotation models related to an annotation task for an image, obtaining a candidate annotation map for the image from each of the plurality of annotation models, and selecting at least one of the candidate annotation maps to be displayed via a user interface, the candidate annotation maps comprising suggested annotations for the image. The method further includes receiving user selections or modifications of at least one of the suggested annotations from the candidate annotation map and generating a final annotation map based on the user selections or modifications.
    Type: Application
    Filed: April 16, 2021
    Publication date: October 20, 2022
    Inventors: Matthew Shreve, Raja Bala, Jeyasri Subramanian
  • Patent number: 11431894
    Abstract: One embodiment can include a system for providing an image-capturing recommendation. During operation, the system receives, from a mobile computing device, one or more images. The one or more images are captured by one or more cameras associated with the mobile computing device. The system analyzes the received images to obtain image-capturing conditions for capturing images of a target within a physical space; determines, based on the obtained image-capturing conditions and a predetermined image-quality requirement, one or more image-capturing settings; and recommends the determined one or more image-capturing settings to a user.
    Type: Grant
    Filed: February 6, 2020
    Date of Patent: August 30, 2022
    Assignee: Palo Alto Research Center Incorporated
    Inventors: Matthew A. Shreve, Raja Bala, Jeyasri Subramanian
  • Publication number: 20210250492
    Abstract: One embodiment can include a system for providing an image-capturing recommendation. During operation, the system receives, from a mobile computing device, one or more images. The one or more images are captured by one or more cameras associated with the mobile computing device. The system analyzes the received images to obtain image-capturing conditions for capturing images of a target within a physical space; determines, based on the obtained image-capturing conditions and a predetermined image-quality requirement, one or more image-capturing settings; and recommends the determined one or more image-capturing settings to a user.
    Type: Application
    Filed: February 6, 2020
    Publication date: August 12, 2021
    Applicant: Palo Alto Research Center Incorporated
    Inventors: Matthew A. Shreve, Raja Bala, Jeyasri Subramanian
  • Patent number: 11068746
    Abstract: A method for predicting the realism of an object within an image includes generating a training image set for a predetermined object type. The training image set comprises one or more training images at least partially generated using a computer. A pixel-level training spatial realism map is generated for each training image of the one or more training images. Each training spatial realism map is configured to represent the perceptual realism of the corresponding training image. A predictor is trained using the training image set and the corresponding training spatial realism maps. An image of the predetermined object is received. A spatial realism map of the received image is produced using the trained predictor.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: July 20, 2021
    Assignee: Palo Alto Research Center Incorporated
    Inventors: Raja Bala, Matthew Shreve, Jeyasri Subramanian, Pei Li
  • Publication number: 20200210770
    Abstract: A method for predicting the realism of an object within an image includes generating a training image set for a predetermined object type. The training image set comprises one or more training images at least partially generated using a computer. A pixel-level training spatial realism map is generated for each training image of the one or more training images. Each training spatial realism map is configured to represent the perceptual realism of the corresponding training image. A predictor is trained using the training image set and the corresponding training spatial realism maps. An image of the predetermined object is received. A spatial realism map of the received image is produced using the trained predictor.
    Type: Application
    Filed: December 28, 2018
    Publication date: July 2, 2020
    Inventors: Raja Bala, Matthew Shreve, Jeyasri Subramanian, Pei Li
  • Patent number: 9767349
    Abstract: A method for determining an emotional state of a subject taking an assessment. The method includes eliciting predicted facial expressions from a subject administered questions each intended to elicit a certain facial expression that conveys a baseline characteristic of the subject; receiving a video sequence capturing the subject answering the questions; determining an observable physical behavior experienced by the subject across a series of frames corresponding to the sample question; associating the observed behavior with the emotional state that corresponds with the facial expression; and training a classifier using the associations. The method includes receiving a second video sequence capturing the subject during an assessment and applying features extracted from the second image data to the classifier for determining the emotional state of the subject in response to an assessment item administered during the assessment.
    Type: Grant
    Filed: May 9, 2016
    Date of Patent: September 19, 2017
    Assignee: Xerox Corporation
    Inventors: Matthew Adam Shreve, Jayant Kumar, Raja Bala, Phillip J. Emmett, Megan Clar, Jeyasri Subramanian, Eric Harte
  • Patent number: 9354711
    Abstract: A method, non-transitory computer-readable medium, and apparatus for localizing a region of interest using a dynamic hand gesture are disclosed. For example, the method captures the ego-centric video containing the dynamic hand gesture, analyzes a frame of the ego-centric video to detect pixels that correspond to a fingertip using a hand segmentation algorithm, analyzes temporally one or more frames of the ego-centric video to compute a path of the fingertip in the dynamic hand gesture, localizes the region of interest based on the path of the fingertip in the dynamic hand gesture and performs an action based on an object in the region of interest.
    Type: Grant
    Filed: November 25, 2014
    Date of Patent: May 31, 2016
    Assignee: Xerox Corporation
    Inventors: Jayant Kumar, Xiaodong Yang, Qun Li, Raja Bala, Edgar A. Bernal, Jeyasri Subramanian
  • Publication number: 20160091976
    Abstract: A method, non-transitory computer-readable medium, and apparatus for localizing a region of interest using a dynamic hand gesture are disclosed. For example, the method captures the ego-centric video containing the dynamic hand gesture, analyzes a frame of the ego-centric video to detect pixels that correspond to a fingertip using a hand segmentation algorithm, analyzes temporally one or more frames of the ego-centric video to compute a path of the fingertip in the dynamic hand gesture, localizes the region of interest based on the path of the fingertip in the dynamic hand gesture and performs an action based on an object in the region of interest.
    Type: Application
    Filed: November 25, 2014
    Publication date: March 31, 2016
    Inventors: Jayant Kumar, Xiaodong Yang, Qun Li, Raja Bala, Edgar A. Bernal, Jeyasri Subramanian
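
Illustrative sketches

The short sketches below illustrate, in simplified Python, several of the mechanisms described in the abstracts above. They are reconstructions from the abstract text alone, not the patented implementations; every class name, data layout, and helper function in them is an assumption made for illustration.

For publication 20240087287, one plausible shape of the track-classify-reinitialize loop is the following, with a toy Tracker and a stub classifier standing in for the real components:

```python
# Sketch only: toy tracker and stub classifier; not the patented implementation.
from dataclasses import dataclass


@dataclass
class Annotation:
    frame_index: int
    bbox: tuple  # (x, y, w, h) of the object of interest


class Tracker:
    """Toy tracker: propagates the last known bounding box forward unchanged."""

    def __init__(self, annotation, frames):
        self.annotation = annotation
        self.frames = frames

    def run(self, start):
        # Yield a (frame_index, bbox) pair for every frame from `start` onward.
        for i in range(start, len(self.frames)):
            yield i, self.annotation.bbox


def is_false_positive(frame, bbox):
    # Stand-in for the semi-supervised classifier: each toy frame is a dict
    # carrying a precomputed flag.
    return frame.get("lost", False)


def track_with_recovery(frames, first_annotation):
    """Track the object; when a frame is flagged as a false positive,
    re-initialize the tracker from the frame just before it."""
    results = {}
    annotation = first_annotation
    start = first_annotation.frame_index
    while start < len(frames):
        restarted = False
        for i, bbox in Tracker(annotation, frames).run(start):
            if is_false_positive(frames[i], bbox):
                prior = max(i - 1, 0)
                annotation = Annotation(prior, results.get(prior, bbox))
                start = i + 1  # resume after the rejected frame
                restarted = True
                break
            results[i] = bbox
        if not restarted:
            break
    return results


if __name__ == "__main__":
    frames = [{"lost": False}] * 4 + [{"lost": True}] + [{"lost": False}] * 3
    print(track_with_recovery(frames, Annotation(0, (10, 10, 32, 32))))
```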
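
For publication 20240071132 and patent 11810396, a minimal sketch of collecting candidate annotation maps from several models and merging user selections and modifications into a final map; the dict-of-boxes layout and the selection format are assumptions:

```python
# Sketch only: the dict-of-boxes layout and selection format are assumptions.

def candidate_maps(image_id, models):
    # Ask each annotation model for its suggested annotations for the image.
    return {name: model(image_id) for name, model in models.items()}


def build_final_map(candidates, user_choices, user_edits=None):
    """user_choices: (model_name, index) pairs picking suggested boxes;
    user_edits: optional replacement boxes keyed by the same pairs."""
    user_edits = user_edits or {}
    final = []
    for key in user_choices:
        model_name, idx = key
        suggestion = candidates[model_name][idx]
        final.append(user_edits.get(key, suggestion))  # apply any user modification
    return final


if __name__ == "__main__":
    models = {
        "detector_a": lambda img: [(10, 10, 50, 40), (80, 20, 30, 30)],
        "detector_b": lambda img: [(12, 11, 48, 39)],
    }
    cands = candidate_maps("img_001", models)
    # The user keeps detector_b's box and nudges detector_a's second box.
    final = build_final_map(
        cands,
        user_choices=[("detector_b", 0), ("detector_a", 1)],
        user_edits={("detector_a", 1): (82, 22, 30, 30)},
    )
    print(final)  # [(12, 11, 48, 39), (82, 22, 30, 30)]
```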
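
For publications 20240044783 and 20240046143, a minimal sketch of the training idea: learn a mapping from coarse gas readings plus multispectral and land-use features to finer-resolution concentrations. NumPy, scikit-learn, the random-forest regressor, and the synthetic data are choices made for this sketch, not details from the filings:

```python
# Sketch only: synthetic data; NumPy/scikit-learn chosen for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_pixels = 2000

# Stand-ins for the remote sensing dataset at fine-grid pixels:
coarse_no2 = rng.uniform(10, 60, n_pixels)        # coarse-grid gas value mapped to each pixel
multispectral = rng.uniform(0, 1, (n_pixels, 4))  # e.g. four reflectance bands
land_use = rng.integers(0, 5, n_pixels)           # land use class (spatial autocorrelation omitted)

# Synthetic fine-resolution target so the example is self-contained.
fine_no2 = (coarse_no2 + 3.0 * multispectral[:, 0]
            - 2.0 * (land_use == 1) + rng.normal(0, 0.5, n_pixels))

X = np.column_stack([coarse_no2, multispectral, land_use])
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:1500], fine_no2[:1500])    # train on one portion of the area

pred = model.predict(X[1500:])          # predict fine-resolution values elsewhere
print("mean absolute error:", np.abs(pred - fine_no2[1500:]).mean())
```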
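
For patent 11431894, a minimal sketch of deriving image-capturing conditions from sample images and mapping them, together with a quality requirement, to recommended settings; the brightness heuristic and the setting rules are invented placeholders:

```python
# Sketch only: the brightness heuristic and setting rules are invented placeholders.

def estimate_conditions(images):
    # Placeholder analysis: mean pixel value as a crude lighting estimate.
    brightness = sum(sum(img) / len(img) for img in images) / len(images)
    return {"brightness": brightness}


def recommend_settings(conditions, quality_requirement):
    settings = {"flash": conditions["brightness"] < 0.3}
    # A stricter quality requirement asks for a lower ISO ceiling.
    settings["max_iso"] = 400 if quality_requirement == "high" else 1600
    return settings


if __name__ == "__main__":
    sample_images = [[0.2, 0.25, 0.3], [0.1, 0.15, 0.2]]  # toy grayscale pixel lists
    conditions = estimate_conditions(sample_images)
    print(recommend_settings(conditions, quality_requirement="high"))
    # {'flash': True, 'max_iso': 400}
```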
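
For patent 11068746, a minimal sketch of training a per-pixel predictor on (image, realism map) pairs and producing a realism map for a new image; flattening pixels into independent samples and using logistic regression are simplifications for illustration:

```python
# Sketch only: per-pixel logistic regression on synthetic 8x8 images.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)


def pixel_features(image):
    # Use each pixel's raw channel values as its feature vector.
    return image.reshape(-1, image.shape[-1])


# Synthetic training set: 8x8 RGB images standing in for partly computer-generated images.
train_images = rng.uniform(0, 1, (10, 8, 8, 3))
# Per-pixel realism labels (1 = looks real, 0 = looks synthetic), from a toy rule.
train_maps = (train_images[..., 0] > 0.5).astype(int)

X = np.concatenate([pixel_features(im) for im in train_images])
y = np.concatenate([m.reshape(-1) for m in train_maps])
predictor = LogisticRegression().fit(X, y)

# Produce a spatial realism map for a new image of the same object type.
new_image = rng.uniform(0, 1, (8, 8, 3))
realism_map = predictor.predict_proba(pixel_features(new_image))[:, 1].reshape(8, 8)
print(realism_map.round(2))
```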
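
For patent 9354711, a minimal sketch of tracking a fingertip across frames and taking the bounding box of its path as the region of interest; the fingertip detector is a stub where the patent uses a hand segmentation algorithm:

```python
# Sketch only: the fingertip detector is a stub for the patent's hand segmentation.

def detect_fingertip(frame):
    # Each toy "frame" is a dict carrying a precomputed fingertip (x, y).
    return frame["fingertip"]


def localize_roi(frames):
    # Temporally analyze the frames to build the fingertip path ...
    path = [detect_fingertip(f) for f in frames]
    xs, ys = zip(*path)
    # ... then take the box enclosing the path as the region of interest.
    return min(xs), min(ys), max(xs), max(ys)


if __name__ == "__main__":
    # A circling gesture around an object near (100, 80) in ego-centric video.
    video = [{"fingertip": p} for p in [(90, 70), (110, 72), (112, 90), (88, 88)]]
    print(localize_roi(video))  # (88, 70, 112, 90)
```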