Patents by Inventor Subhashree Radhakrishnan

Subhashree Radhakrishnan has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250029409
    Abstract: Approaches are disclosed herein for an automatic segmentation labeling system that identifies objects for potential open-class categories and generates segmentation masks for objects. The disclosed system may use a training pipeline that trains two segmentation models. The training pipeline may take, as input, a set of images with bounding boxes and class labels. The set of images may be fed into a first segmentation network with the bounding boxes used as ground truth for weak supervision. The first segmentation network may be trained to generate pseudo segmentation masks. In a second stage, the trained first segmentation network is used to generate pseudo masks for a set of input images. The generated pseudo masks are provided as input, along with the corresponding images, to a second segmentation network to be used as a type of ground truth data for training the second segmentation network to generate high-quality segmentation masks.
    Type: Application
    Filed: July 18, 2023
    Publication date: January 23, 2025
    Inventors: Subhashree Radhakrishnan, Ramanathan Arunachahalam, Farzin Aghdasi, Zhiding Yu, Shiyi Lan
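    The two-stage pipeline in the abstract above can be pictured with the minimal sketch below, assuming PyTorch-style components; the network definitions, data loader, and helper names are placeholders introduced for illustration only, not the patented implementation.

        import torch
        import torch.nn.functional as F


        def boxes_to_weak_mask(boxes, height, width):
            """Rasterize the bounding boxes of one image into a coarse binary mask."""
            mask = torch.zeros(height, width)
            for x1, y1, x2, y2 in boxes:
                mask[int(y1):int(y2), int(x1):int(x2)] = 1.0
            return mask


        def train_stage_one(seg_net_a, loader, lr=1e-4):
            """Stage 1: train the first network with boxes as weak ground truth."""
            opt = torch.optim.Adam(seg_net_a.parameters(), lr=lr)
            for images, boxes_per_image in loader:                # images: (N, 3, H, W)
                h, w = images.shape[-2:]
                weak = torch.stack([boxes_to_weak_mask(b, h, w) for b in boxes_per_image])
                logits = seg_net_a(images).squeeze(1)             # (N, H, W) mask logits
                loss = F.binary_cross_entropy_with_logits(logits, weak)
                opt.zero_grad()
                loss.backward()
                opt.step()


        def train_stage_two(seg_net_a, seg_net_b, loader, lr=1e-4):
            """Stage 2: network A's pseudo masks serve as ground truth for network B."""
            opt = torch.optim.Adam(seg_net_b.parameters(), lr=lr)
            seg_net_a.eval()
            for images, _ in loader:
                with torch.no_grad():
                    pseudo = (torch.sigmoid(seg_net_a(images)) > 0.5).float()
                loss = F.binary_cross_entropy_with_logits(seg_net_b(images), pseudo)
                opt.zero_grad()
                loss.backward()
                opt.step()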
  • Publication number: 20250005956
    Abstract: In various examples, sensor data—such as masked sensor data—may be used as input to a machine learning model to determine a confidence for object to person associations. The masked sensor data may focus the machine learning model on particular regions of the image that correspond to persons, objects, or some combination thereof. In some embodiments, coordinates corresponding to persons, objects, or combinations thereof, in addition to area ratios between various regions of the image corresponding to the persons, objects, or combinations thereof, may be used to further aid the machine learning model in focusing on important regions of the image for determining the object to person associations.
    Type: Application
    Filed: September 9, 2024
    Publication date: January 2, 2025
    Inventors: Parthasarathy Sriram, Fnu Ratnesh Kumar, Anil Ubale, Farzin Aghdasi, Yan Zhai, Subhashree Radhakrishnan
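    As a loose illustration of the inputs described in the abstract above, the masked image and the coordinate/area-ratio features for one person/object pair might be assembled as sketched below; this is an assumed sketch with placeholder names, not the claimed method.

        import numpy as np


        def build_association_input(image, person_box, object_box):
            """Return a masked image plus auxiliary features for one person/object pair."""
            h, w = image.shape[:2]
            masked = np.zeros_like(image)
            for x1, y1, x2, y2 in (person_box, object_box):
                masked[y1:y2, x1:x2] = image[y1:y2, x1:x2]    # keep only the relevant regions

            def area(box):
                x1, y1, x2, y2 = box
                return max(0, x2 - x1) * max(0, y2 - y1)

            # Normalized box coordinates plus area ratios give the model extra cues
            # about where to look when scoring the association.
            scale = np.array([w, h, w, h], dtype=np.float32)
            coords = np.concatenate([np.array(person_box) / scale, np.array(object_box) / scale])
            ratios = np.array([area(object_box) / max(area(person_box), 1),
                               (area(person_box) + area(object_box)) / float(h * w)], dtype=np.float32)
            return masked, np.concatenate([coords, ratios]).astype(np.float32)

    A downstream classifier (not shown) would then consume the masked image and feature vector and output an association confidence.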
  • Publication number: 20240386586
    Abstract: In various examples, systems and methods are disclosed relating to using neural networks for object detection or instance/semantic segmentation for, without limitation, autonomous or semi-autonomous systems and applications. In some implementations, one or more neural networks receive an image (or other sensor data representation) and a bounding shape corresponding to at least a portion of an object in the image. The bounding shape can include or be labeled with an identifier, class, and/or category of the object. The neural network can determine a mask for the object based at least on processing the image and the bounding shape. The mask can be used for various applications, such as annotating masks for vehicle or machine perception and navigation processes.
    Type: Application
    Filed: May 19, 2023
    Publication date: November 21, 2024
    Applicant: NVIDIA Corporation
    Inventors: Alperen Degirmenci, Jiwoong Choi, Zhiding Yu, Ke Chen, Shubhranshu Singh, Yashar Asgarieh, Subhashree Radhakrishnan, James Skinner, Jose Manuel Alvarez Lopez
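    A hypothetical interface sketch for such a box-prompted mask network follows; the architecture below is a stand-in, not the disclosed network. The bounding shape is encoded as an extra input channel alongside the image, and the model returns a per-pixel mask.

        import torch
        import torch.nn as nn


        class BoxPromptedMaskNet(nn.Module):
            """Toy stand-in: image + bounding-shape prompt in, per-pixel mask out."""

            def __init__(self, channels=16):
                super().__init__()
                # 3 image channels + 1 channel encoding the bounding shape as a binary map.
                self.net = nn.Sequential(
                    nn.Conv2d(4, channels, 3, padding=1),
                    nn.ReLU(),
                    nn.Conv2d(channels, 1, 1),
                )

            def forward(self, image, boxes):
                n, _, h, w = image.shape
                prompt = torch.zeros(n, 1, h, w, device=image.device)
                for i, (x1, y1, x2, y2) in enumerate(boxes):
                    prompt[i, 0, int(y1):int(y2), int(x1):int(x2)] = 1.0
                return torch.sigmoid(self.net(torch.cat([image, prompt], dim=1)))


        # Usage: mask = BoxPromptedMaskNet()(torch.rand(1, 3, 64, 64), [(8, 8, 40, 48)])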
  • Patent number: 12087077
    Abstract: In various examples, sensor data—such as masked sensor data—may be used as input to a machine learning model to determine a confidence for object to person associations. The masked sensor data may focus the machine learning model on particular regions of the image that correspond to persons, objects, or some combination thereof. In some embodiments, coordinates corresponding to persons, objects, or combinations thereof, in addition to area ratios between various regions of the image corresponding to the persons, objects, or combinations thereof, may be used to further aid the machine learning model in focusing on important regions of the image for determining the object to person associations.
    Type: Grant
    Filed: July 5, 2023
    Date of Patent: September 10, 2024
    Assignee: NVIDIA Corporation
    Inventors: Parthasarathy Sriram, Fnu Ratnesh Kumar, Anil Ubale, Farzin Aghdasi, Yan Zhai, Subhashree Radhakrishnan
  • Publication number: 20240221166
    Abstract: Video instance segmentation is a computer vision task that aims to detect, segment, and track objects continuously in videos. It can be used in numerous real-world applications, such as video editing, three-dimensional (3D) reconstruction, 3D navigation (e.g., for autonomous driving and/or robotics), and viewpoint estimation. However, current machine learning-based processes employed for video instance segmentation are lacking, particularly because the densely annotated videos needed for supervised training of high-quality models are not readily available and are not easily generated. To address the issues in the prior art, the present disclosure provides point-level supervision for video instance segmentation in a manner that allows the resulting machine learning model to handle any object category.
    Type: Application
    Filed: December 22, 2023
    Publication date: July 4, 2024
    Inventors: Zhiding Yu, Shuaiyi Huang, De-An Huang, Shiyi Lan, Subhashree Radhakrishnan, Jose M. Alvarez Lopez, Anima Anandkumar
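    Point-level supervision can be pictured, very roughly, as computing the segmentation loss only at a handful of annotated points instead of over a dense mask. The snippet below is a generic sketch under that assumption and is not the specific method disclosed in this publication.

        import torch
        import torch.nn.functional as F


        def point_supervised_loss(mask_logits, point_coords, point_labels):
            """mask_logits: (H, W); point_coords: (P, 2) as (y, x); point_labels: (P,) in {0, 1}."""
            ys, xs = point_coords[:, 0], point_coords[:, 1]
            sampled = mask_logits[ys, xs]                 # evaluate only at the labeled points
            return F.binary_cross_entropy_with_logits(sampled, point_labels.float())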
  • Publication number: 20240169545
    Abstract: Class-agnostic object mask generation uses a vision-transformer-based auto-labeling framework requiring only images and object bounding boxes to generate object (segmentation) masks. The generated object masks, images, and object labels may then be used to train instance segmentation models or other neural networks to localize and segment objects with pixel-level accuracy. The generated object masks may supplement or replace conventional human-generated annotations. The human-generated annotations may be misaligned with the object boundaries, resulting in poor-quality labeled segmentation masks. In contrast with conventional techniques, the generated object masks are class agnostic and are automatically generated based only on a bounding box image region without relying on either labels or semantic information.
    Type: Application
    Filed: July 20, 2023
    Publication date: May 23, 2024
    Inventors: Shiyi Lan, Zhiding Yu, Subhashree Radhakrishnan, Jose Manuel Alvarez Lopez, Animashree Anandkumar
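    Sketched in code, with assumed shapes and a placeholder mask_model standing in for the vision-transformer-based generator, the class-agnostic auto-labeling loop feeds only the cropped box region to the mask generator and pastes the result back at full resolution.

        import torch
        import torch.nn.functional as F


        def autolabel_masks(mask_model, image, boxes, crop_size=224):
            """Return one full-resolution binary mask per bounding box; no class labels used."""
            _, h, w = image.shape
            masks = []
            for x1, y1, x2, y2 in boxes:
                crop = image[:, y1:y2, x1:x2].unsqueeze(0)                      # (1, 3, bh, bw)
                crop = F.interpolate(crop, size=(crop_size, crop_size), mode="bilinear",
                                     align_corners=False)
                local = torch.sigmoid(mask_model(crop))                         # (1, 1, S, S)
                local = F.interpolate(local, size=(y2 - y1, x2 - x1), mode="bilinear",
                                      align_corners=False)
                full = torch.zeros(h, w)
                full[y1:y2, x1:x2] = (local[0, 0] > 0.5).float()                # paste back in place
                masks.append(full)
            return masks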
  • Patent number: 11899749
    Abstract: In various examples, training methods are described to generate a trained neural network that is robust to various environmental features. In an embodiment, training includes modifying images of a dataset and generating bounding boxes and/or other segmentation information for the modified images, which are then used to train a neural network.
    Type: Grant
    Filed: March 15, 2021
    Date of Patent: February 13, 2024
    Assignee: NVIDIA Corporation
    Inventors: Subhashree Radhakrishnan, Partha Sriram, Farzin Aghdasi, Seunghwan Cha, Zhiding Yu
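    A simplified picture of this training recipe is sketched below; the horizontal flip is just a stand-in for the environmental modifications, and the detector and dataset are placeholders rather than the patented method. Each image is modified, its bounding boxes are recomputed to match, and both versions feed the training loop.

        import torch


        def flip_image_and_boxes(image, boxes):
            """Flip an image left-right and recompute box coordinates to match."""
            _, _, w = image.shape
            flipped = torch.flip(image, dims=[-1])
            new_boxes = [(w - x2, y1, w - x1, y2) for (x1, y1, x2, y2) in boxes]
            return flipped, new_boxes


        def augmented_samples(dataset):
            """Yield the original and the modified version of every sample."""
            for image, boxes in dataset:
                yield image, boxes
                yield flip_image_and_boxes(image, boxes)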
  • Publication number: 20230351795
    Abstract: In various examples, sensor data—such as masked sensor data—may be used as input to a machine learning model to determine a confidence for object to person associations. The masked sensor data may focus the machine learning model on particular regions of the image that correspond to persons, objects, or some combination thereof. In some embodiments, coordinates corresponding to persons, objects, or combinations thereof, in addition to area ratios between various regions of the image corresponding to the persons, objects, or combinations thereof, may be used to further aid the machine learning model in focusing on important regions of the image for determining the object to person associations.
    Type: Application
    Filed: July 5, 2023
    Publication date: November 2, 2023
    Inventors: Parthasarathy Sriram, Fnu Ratnesh Kumar, Anil Ubale, Farzin Aghdasi, Yan Zhai, Subhashree Radhakrishnan
  • Patent number: 11741736
    Abstract: In various examples, sensor data—such as masked sensor data—may be used as input to a machine learning model to determine a confidence for object to person associations. The masked sensor data may focus the machine learning model on particular regions of the image that correspond to persons, objects, or some combination thereof. In some embodiments, coordinates corresponding to persons, objects, or combinations thereof, in addition to area ratios between various regions of the image corresponding to the persons, objects, or combinations thereof, may be used to further aid the machine learning model in focusing on important regions of the image for determining the object to person associations.
    Type: Grant
    Filed: December 20, 2021
    Date of Patent: August 29, 2023
    Assignee: NVIDIA Corporation
    Inventors: Parthasarathy Sriram, Fnu Ratnesh Kumar, Anil Ubale, Farzin Aghdasi, Yan Zhai, Subhashree Radhakrishnan
  • Publication number: 20230078218
    Abstract: Apparatuses, systems, and techniques for training an object detection model using transfer learning.
    Type: Application
    Filed: September 16, 2021
    Publication date: March 16, 2023
    Inventors: Yu Wang, Farzin Aghdasi, Parthasarathy Sriram, Subhashree Radhakrishnan
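    The abstract is terse, so the snippet below shows only a generic transfer-learning recipe using torchvision, not the claimed technique: a detector pretrained on a large dataset is adapted by swapping its classification head and fine-tuning.

        import torchvision
        from torchvision.models.detection.faster_rcnn import FastRCNNPredictor


        def build_transfer_detector(num_classes):
            """Adapt a pretrained detector to a new task by replacing its box predictor."""
            model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
            in_features = model.roi_heads.box_predictor.cls_score.in_features
            model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
            # Optionally freeze the backbone so only the new head and later layers adapt.
            for p in model.backbone.body.parameters():
                p.requires_grad = False
            return model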
  • Publication number: 20220327318
    Abstract: Apparatuses, systems, and techniques to perform action recognition. In at least one embodiment, action recognition is performed using one or more neural networks and hardware accelerators, in which the one or more neural networks are processed based on, for example, one or more quantization and pruning processes.
    Type: Application
    Filed: April 8, 2021
    Publication date: October 13, 2022
    Inventors: Subhashree Radhakrishnan, Farzin Aghdasi
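    As a generic illustration of quantization and pruning, applied here to a toy stand-in model rather than the disclosed action-recognition networks:

        import torch
        import torch.nn as nn
        import torch.nn.utils.prune as prune

        # Toy classifier over flattened 3x8x32x32 video clips; a stand-in only.
        model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 32 * 32, 256),
                              nn.ReLU(), nn.Linear(256, 10))

        # Prune 30% of the smallest-magnitude weights in each linear layer.
        for module in model.modules():
            if isinstance(module, nn.Linear):
                prune.l1_unstructured(module, name="weight", amount=0.3)
                prune.remove(module, "weight")          # bake the pruning into the weights

        # Quantize linear layers to int8 for faster inference on supported hardware.
        quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)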
  • Publication number: 20220292306
    Abstract: In various examples, training methods are described to generate a trained neural network that is robust to various environmental features. In an embodiment, training includes modifying images of a dataset and generating bounding boxes and/or other segmentation information for the modified images, which are then used to train a neural network.
    Type: Application
    Filed: March 15, 2021
    Publication date: September 15, 2022
    Inventors: Subhashree Radhakrishnan, Partha Sriram, Farzin Aghdasi, Seunghwan Cha, Zhiding Yu
  • Publication number: 20220261593
    Abstract: Apparatuses, systems, and techniques to train one or more neural networks. In at least one embodiment, one or more neural networks are trained to perform segmentation tasks based at least in part on training data comprising bounding box annotations.
    Type: Application
    Filed: February 16, 2021
    Publication date: August 18, 2022
    Inventors: Zhiding Yu, Shiyi Lan, Chris Choy, Subhashree Radhakrishnan, Guilin Liu, Yuke Zhu, Anima Anandkumar
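    One common way to derive a mask training signal from box annotations alone, shown purely as an assumed illustration of the idea and not as the claimed training procedure, is to push the predicted mask's projections onto the x and y axes toward the box extents.

        import torch
        import torch.nn.functional as F


        def box_projection_loss(mask_logits, box):
            """mask_logits: (H, W); box: (x1, y1, x2, y2) in pixel coordinates."""
            h, w = mask_logits.shape
            x1, y1, x2, y2 = box
            prob = torch.sigmoid(mask_logits)
            tx = torch.zeros(w)
            ty = torch.zeros(h)
            tx[x1:x2] = 1.0                           # target projection along x: 1 inside the box
            ty[y1:y2] = 1.0                           # target projection along y: 1 inside the box
            px = prob.max(dim=0).values               # predicted projection onto the x axis
            py = prob.max(dim=1).values               # predicted projection onto the y axis
            return F.binary_cross_entropy(px, tx) + F.binary_cross_entropy(py, ty)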
  • Publication number: 20220114800
    Abstract: In various examples, sensor data—such as masked sensor data—may be used as input to a machine learning model to determine a confidence for object to person associations. The masked sensor data may focus the machine learning model on particular regions of the image that correspond to persons, objects, or some combination thereof. In some embodiments, coordinates corresponding to persons, objects, or combinations thereof, in addition to area ratios between various regions of the image corresponding to the persons, objects, or combinations thereof, may be used to further aid the machine learning model in focusing on important regions of the image for determining the object to person associations.
    Type: Application
    Filed: December 20, 2021
    Publication date: April 14, 2022
    Inventors: Parthasarathy Sriram, Fnu Ratnesh Kumar, Anil Ubale, Farzin Aghdasi, Yan Zhai, Subhashree Radhakrishnan
  • Patent number: 11205086
    Abstract: In various examples, sensor data—such as masked sensor data—may be used as input to a machine learning model to determine a confidence for object to person associations. The masked sensor data may focus the machine learning model on particular regions of the image that correspond to persons, objects, or some combination thereof. In some embodiments, coordinates corresponding to persons, objects, or combinations thereof, in addition to area ratios between various regions of the image corresponding to the persons, objects, or combinations thereof, may be used to further aid the machine learning model in focusing on important regions of the image for determining the object to person associations.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: December 21, 2021
    Assignee: NVIDIA Corporation
    Inventors: Parthasarathy Sriram, Fnu Ratnesh Kumar, Anil Ubale, Farzin Aghdasi, Yan Zhai, Subhashree Radhakrishnan
  • Publication number: 20200151489
    Abstract: In various examples, sensor data—such as masked sensor data—may be used as input to a machine learning model to determine a confidence for object to person associations. The masked sensor data may focus the machine learning model on particular regions of the image that correspond to persons, objects, or some combination thereof. In some embodiments, coordinates corresponding to persons, objects, or combinations thereof, in addition to area ratios between various regions of the image corresponding to the persons, objects, or combinations thereof, may be used to further aid the machine learning model in focusing on important regions of the image for determining the object to person associations.
    Type: Application
    Filed: November 8, 2019
    Publication date: May 14, 2020
    Inventors: Parthasarathy Sriram, Fnu Ratnesh Kumar, Anil Ubale, Farzin Aghdasi, Yan Zhai, Subhashree Radhakrishnan