Patents by Inventor Aayush Prakash

Aayush Prakash has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11715251
    Abstract: Training deep neural networks requires a large amount of labeled training data. Conventionally, labeled training data is generated by gathering real images that are manually labeled, which is very time-consuming. Instead of manually labeling a training dataset, a domain randomization technique is used to generate training data that is automatically labeled. The generated training data may be used to train neural networks for object detection and segmentation (labeling) tasks. In an embodiment, the generated training data includes synthetic input images generated by rendering three-dimensional (3D) objects of interest in a 3D scene. In an embodiment, the generated training data includes synthetic input images generated by rendering 3D objects of interest on a 2D background image. The 3D objects of interest are objects that a neural network is trained to detect and/or label. (A brief code sketch of the compositing approach follows this entry.)
    Type: Grant
    Filed: October 21, 2021
    Date of Patent: August 1, 2023
    Assignee: NVIDIA Corporation
    Inventors: Jonathan Tremblay, Aayush Prakash, Mark A. Brophy, Varun Jampani, Cem Anil, Stanley Thomas Birchfield, Thang Hong To, David Jesus Acuna Marrero
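
The 2D-background variant in the abstract above is easy to illustrate. Below is a minimal sketch, assuming a pre-rendered RGBA object crop stands in for the patent's 3D rendering step and that the crop fits inside the background; the function names and value ranges are invented for illustration, not taken from the patent:

```python
import random
from PIL import Image

def synthesize_example(background: Image.Image, obj: Image.Image,
                       num_objects: int = 3):
    """Return (image, boxes); each box is (x0, y0, x1, y1) for one pasted object.

    Assumes `obj` is an RGBA crop small enough to fit inside `background`.
    """
    canvas = background.copy()
    boxes = []
    for _ in range(num_objects):
        scale = random.uniform(0.3, 1.0)           # randomized object scale
        w, h = int(obj.width * scale), int(obj.height * scale)
        sprite = obj.resize((w, h))
        x = random.randint(0, canvas.width - w)    # randomized placement
        y = random.randint(0, canvas.height - h)
        canvas.paste(sprite, (x, y), sprite)       # alpha-composite the object
        boxes.append((x, y, x + w, y + h))         # the label comes "for free"
    return canvas, boxes
```

Because the script chooses every paste position itself, each bounding box is known exactly, which is the automatic-labeling property the abstract emphasizes.
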
  • Publication number: 20230229919
    Abstract: In various examples, a generative model is used to synthesize datasets for use in training a downstream machine learning model to perform an associated task. The synthesized datasets may be generated by sampling a scene graph from a scene grammar—such as a probabilistic grammar—and applying the scene graph to the generative model to compute updated scene graphs more representative of object attribute distributions of real-world datasets. The downstream machine learning model may be validated against a real-world validation dataset, and the performance of the model on the real-world validation dataset may be used as an additional factor in further training or fine-tuning the generative model for generating the synthesized datasets specific to the task of the downstream machine learning model. (A toy grammar-sampling sketch follows this entry.)
    Type: Application
    Filed: March 20, 2023
    Publication date: July 20, 2023
    Inventors: Amlan Kar, Aayush Prakash, Ming-Yu Liu, David Jesus Acuna Marrero, Antonio Torralba Barriuso, Sanja Fidler
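
The sampling step described above, drawing a scene graph from a probabilistic grammar, can be pictured with a toy grammar. The rules, symbols, and attribute priors below are invented stand-ins; the patented pipeline would further pass such graphs through a generative model to better match real-world attribute distributions:

```python
import random

# nonterminal -> (possible expansions, selection probabilities); toy values
RULES = {
    "scene": ([["road", "car", "car"], ["road", "car"]], [0.7, 0.3]),
}

def sample_scene_graph(symbol: str = "scene") -> dict:
    expansions, probs = RULES[symbol]
    expansion = random.choices(expansions, weights=probs, k=1)[0]
    children = [{
        "type": child,
        # attributes drawn from simple priors; a generative model would later
        # update them toward the object-attribute statistics of real data
        "position": (random.uniform(-10.0, 10.0), random.uniform(0.0, 50.0)),
        "rotation_deg": random.uniform(0.0, 360.0),
    } for child in expansion]
    return {"type": symbol, "children": children}
```
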
  • Publication number: 20230177826
    Abstract: Approaches are presented for training and using scene graph generators for transfer learning. A scene graph generation technique can decompose a domain gap into individual types of discrepancies, such as appearance, label, and prediction discrepancies. These discrepancies can be reduced, at least in part, by aligning the corresponding latent and output distributions using one or more gradient reversal layers (GRLs). Label discrepancies can be addressed using self-pseudo-statistics collected from target data. Pseudo-statistic-based self-learning and adversarial techniques can be used to manage these discrepancies without the need for costly supervision from a real-world dataset. (A sketch of a gradient reversal layer follows this entry.)
    Type: Application
    Filed: February 2, 2023
    Publication date: June 8, 2023
    Inventors: Aayush Prakash, Shoubhik Debnath, Jean-Francois Lafleche, Eric Cameracci, Gavriel State, Marc Teva Law
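
The gradient reversal layer mentioned in the abstract is a standard construction in domain-adversarial training (in the style of Ganin and Lempitsky). A common PyTorch implementation of just the layer, not the patent's full scene-graph pipeline, looks like this:

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) gradients in backward."""

    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # The reversed gradient pushes the feature extractor toward
        # representations a domain classifier cannot separate, aligning the
        # source and target distributions.
        return -ctx.lambda_ * grad_output, None

def grad_reverse(x, lambda_: float = 1.0):
    return GradReverse.apply(x, lambda_)
```

Placing this between the latent features and a domain classifier yields the distribution alignment the abstract refers to.
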
  • Patent number: 11610115
    Abstract: In various examples, a generative model is used to synthesize datasets for use in training a downstream machine learning model to perform an associated task. The synthesized datasets may be generated by sampling a scene graph from a scene grammar—such as a probabilistic grammar—and applying the scene graph to the generative model to compute updated scene graphs more representative of object attribute distributions of real-world datasets. The downstream machine learning model may be validated against a real-world validation dataset, and the performance of the model on the real-world validation dataset may be used as an additional factor in further training or fine-tuning the generative model for generating the synthesized datasets specific to the task of the downstream machine learning model. (A sketch of the validation-feedback loop follows this entry.)
    Type: Grant
    Filed: November 15, 2019
    Date of Patent: March 21, 2023
    Assignee: NVIDIA Corporation
    Inventors: Amlan Kar, Aayush Prakash, Ming-Yu Liu, David Jesus Acuna Marrero, Antonio Torralba Barriuso, Sanja Fidler
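
The closing idea of this abstract, feeding the downstream model's real-world validation score back into generator tuning, reduces to a simple control loop. Every component below is a hypothetical stub, and a crude random search stands in for the patent's fine-tuning of the generative model, purely to show the flow:

```python
import random

def sample_synthetic_dataset(gen_params: dict) -> list:
    """Stub: would render a labeled dataset from sampled scene graphs."""
    return [("image", "label")] * int(64 * gen_params.get("size_factor", 1.0))

def train_downstream(dataset: list) -> dict:
    """Stub: would train the task model (e.g., a detector) on the dataset."""
    return {"examples_seen": len(dataset)}

def evaluate_on_real(model: dict, real_val_set) -> float:
    """Stub: would report, e.g., mAP on the real-world validation set."""
    return random.random()

def tune_generator(gen_params: dict, real_val_set, steps: int = 5) -> dict:
    """Keep whichever generator parameters score best on real validation data."""
    best_score, best_params = -1.0, dict(gen_params)
    for _ in range(steps):
        candidate = {**gen_params, "size_factor": random.uniform(0.5, 2.0)}
        model = train_downstream(sample_synthetic_dataset(candidate))
        score = evaluate_on_real(model, real_val_set)  # real-world feedback
        if score > best_score:
            best_score, best_params = score, candidate
    return best_params
```
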
  • Patent number: 11574155
    Abstract: Approaches are presented for training and using scene graph generators for transfer learning. A scene graph generation technique can decompose a domain gap into individual types of discrepancies, such as appearance, label, and prediction discrepancies. These discrepancies can be reduced, at least in part, by aligning the corresponding latent and output distributions using one or more gradient reversal layers (GRLs). Label discrepancies can be addressed using self-pseudo-statistics collected from target data. Pseudo-statistic-based self-learning and adversarial techniques can be used to manage these discrepancies without the need for costly supervision from a real-world dataset. (A sketch of pseudo-statistic collection follows this entry.)
    Type: Grant
    Filed: April 9, 2021
    Date of Patent: February 7, 2023
    Assignee: NVIDIA Corporation
    Inventors: Aayush Prakash, Shoubhik Debnath, Jean-Francois Lafleche, Eric Cameracci, Gavriel State, Marc Teva Law
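
The "self-pseudo-statistics collected from target data" mentioned above can be pictured as aggregating confident model predictions over unlabeled target samples. A minimal sketch, with the confidence threshold and the `predict` interface as assumptions:

```python
from collections import Counter

def collect_pseudo_statistics(predict, target_samples, threshold: float = 0.9):
    """predict(sample) -> (label, confidence); returns per-label frequencies."""
    counts = Counter()
    for sample in target_samples:
        label, conf = predict(sample)
        if conf >= threshold:            # trust only confident pseudo-labels
            counts[label] += 1
    total = sum(counts.values()) or 1
    return {label: n / total for label, n in counts.items()}
```

Such frequencies could then, for example, reweight a loss or calibrate predictions during self-learning on the target domain.
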
  • Publication number: 20230004760
    Abstract: Apparatuses, systems, and techniques to identify objects within an image using self-supervised machine learning. In at least one embodiment, a machine learning system is trained to recognize objects by training a first network to recognize objects within images that are generated by a second network. In at least one embodiment, the second network is a controllable network. (A sketch of this training scheme follows this entry.)
    Type: Application
    Filed: June 28, 2021
    Publication date: January 5, 2023
    Inventors: Siva Karthik Mustikovela, Shalini De Mello, Aayush Prakash, Umar Iqbal, Sifei Liu, Jan Kautz
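
The two-network scheme in this abstract, a controllable generator whose known control inputs supervise a recognizer, can be sketched as a single training step. The modules, control dimensionality, and MSE objective below are placeholder assumptions, not the patent's architecture:

```python
import torch
import torch.nn as nn

def train_step(recognizer: nn.Module, generator: nn.Module,
               optimizer: torch.optim.Optimizer, batch_size: int = 8) -> float:
    # e.g., object pose/position knobs; dimensionality is invented here
    controls = torch.rand(batch_size, 4)
    with torch.no_grad():
        images = generator(controls)        # second network synthesizes images
    preds = recognizer(images)              # first network predicts the knobs
    # the known control inputs act as free labels (self-supervision)
    loss = nn.functional.mse_loss(preds, controls)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Here `recognizer` and `generator` are any modules mapping images to controls and controls to images, respectively, with the optimizer covering the recognizer's parameters.
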
  • Publication number: 20220044075
    Abstract: Training deep neural networks requires a large amount of labeled training data. Conventionally, labeled training data is generated by gathering real images that are manually labeled, which is very time-consuming. Instead of manually labeling a training dataset, a domain randomization technique is used to generate training data that is automatically labeled. The generated training data may be used to train neural networks for object detection and segmentation (labeling) tasks. In an embodiment, the generated training data includes synthetic input images generated by rendering three-dimensional (3D) objects of interest in a 3D scene. In an embodiment, the generated training data includes synthetic input images generated by rendering 3D objects of interest on a 2D background image. The 3D objects of interest are objects that a neural network is trained to detect and/or label.
    Type: Application
    Filed: October 21, 2021
    Publication date: February 10, 2022
    Inventors: Jonathan Tremblay, Aayush Prakash, Mark A. Brophy, Varun Jampani, Cem Anil, Stanley Thomas Birchfield, Thang Hong To, David Jesus Acuna Marrero
  • Publication number: 20210374489
    Abstract: Approaches are presented for training and using scene graph generators for transfer learning. A scene graph generation technique can decompose a domain gap into individual types of discrepancies, such as appearance, label, and prediction discrepancies. These discrepancies can be reduced, at least in part, by aligning the corresponding latent and output distributions using one or more gradient reversal layers (GRLs). Label discrepancies can be addressed using self-pseudo-statistics collected from target data. Pseudo-statistic-based self-learning and adversarial techniques can be used to manage these discrepancies without the need for costly supervision from a real-world dataset.
    Type: Application
    Filed: April 9, 2021
    Publication date: December 2, 2021
    Inventors: Aayush Prakash, Shoubhik Debnath, Jean-Francois Lafleche, Eric Cameracci, Gavriel State, Marc Teva Law
  • Patent number: 11182649
    Abstract: Training deep neural networks requires a large amount of labeled training data. Conventionally, labeled training data is generated by gathering real images that are manually labeled, which is very time-consuming. Instead of manually labeling a training dataset, a domain randomization technique is used to generate training data that is automatically labeled. The generated training data may be used to train neural networks for object detection and segmentation (labeling) tasks. In an embodiment, the generated training data includes synthetic input images generated by rendering three-dimensional (3D) objects of interest in a 3D scene. In an embodiment, the generated training data includes synthetic input images generated by rendering 3D objects of interest on a 2D background image. The 3D objects of interest are objects that a neural network is trained to detect and/or label. (A sketch of training a detector on such data follows this entry.)
    Type: Grant
    Filed: December 11, 2020
    Date of Patent: November 23, 2021
    Assignee: NVIDIA Corporation
    Inventors: Jonathan Tremblay, Aayush Prakash, Mark A. Brophy, Varun Jampani, Cem Anil, Stanley Thomas Birchfield, Thang Hong To, David Jesus Acuna Marrero
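
Data with this (image, boxes) structure plugs directly into off-the-shelf detectors. As one hedged example, torchvision's Faster R-CNN could consume the auto-labeled synthetic images from the compositing sketch earlier; the `weights=` keyword assumes torchvision 0.13 or newer (older releases used `pretrained=`), and the single-object-class setup is an assumption:

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights=None, num_classes=2)  # background + object
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()

def train_on_batch(images, boxes_per_image) -> float:
    """images: list of CHW float tensors; boxes_per_image: list of (N, 4) tensors."""
    targets = [{"boxes": b, "labels": torch.ones(len(b), dtype=torch.int64)}
               for b in boxes_per_image]
    loss_dict = model(images, targets)   # train mode returns component losses
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```
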
  • Publication number: 20210097346
    Abstract: Training deep neural networks requires a large amount of labeled training data. Conventionally, labeled training data is generated by gathering real images that are manually labeled, which is very time-consuming. Instead of manually labeling a training dataset, a domain randomization technique is used to generate training data that is automatically labeled. The generated training data may be used to train neural networks for object detection and segmentation (labeling) tasks. In an embodiment, the generated training data includes synthetic input images generated by rendering three-dimensional (3D) objects of interest in a 3D scene. In an embodiment, the generated training data includes synthetic input images generated by rendering 3D objects of interest on a 2D background image. The 3D objects of interest are objects that a neural network is trained to detect and/or label.
    Type: Application
    Filed: December 11, 2020
    Publication date: April 1, 2021
    Inventors: Jonathan Tremblay, Aayush Prakash, Mark A. Brophy, Varun Jampani, Cem Anil, Stanley Thomas Birchfield, Thang Hong To, David Jesus Acuna Marrero
  • Patent number: 10867214
    Abstract: Training deep neural networks requires a large amount of labeled training data. Conventionally, labeled training data is generated by gathering real images that are manually labeled, which is very time-consuming. Instead of manually labeling a training dataset, a domain randomization technique is used to generate training data that is automatically labeled. The generated training data may be used to train neural networks for object detection and segmentation (labeling) tasks. In an embodiment, the generated training data includes synthetic input images generated by rendering three-dimensional (3D) objects of interest in a 3D scene. In an embodiment, the generated training data includes synthetic input images generated by rendering 3D objects of interest on a 2D background image. The 3D objects of interest are objects that a neural network is trained to detect and/or label.
    Type: Grant
    Filed: January 24, 2019
    Date of Patent: December 15, 2020
    Assignee: NVIDIA Corporation
    Inventors: Jonathan Tremblay, Aayush Prakash, Mark A. Brophy, Varun Jampani, Cem Anil, Stanley Thomas Birchfield, Thang Hong To, David Jesus Acuna Marrero
  • Publication number: 20200160178
    Abstract: In various examples, a generative model is used to synthesize datasets for use in training a downstream machine learning model to perform an associated task. The synthesized datasets may be generated by sampling a scene graph from a scene grammar—such as a probabilistic grammar—and applying the scene graph to the generative model to compute updated scene graphs more representative of object attribute distributions of real-world datasets. The downstream machine learning model may be validated against a real-world validation dataset, and the performance of the model on the real-world validation dataset may be used as an additional factor in further training or fine-tuning the generative model for generating the synthesized datasets specific to the task of the downstream machine learning model.
    Type: Application
    Filed: November 15, 2019
    Publication date: May 21, 2020
    Inventors: Amlan Kar, Aayush Prakash, Ming-Yu Liu, David Jesus Acuna Marrero, Antonio Torralba Barriuso, Sanja Fidler
  • Publication number: 20190251397
    Abstract: Training deep neural networks requires a large amount of labeled training data. Conventionally, labeled training data is generated by gathering real images that are manually labeled, which is very time-consuming. Instead of manually labeling a training dataset, a domain randomization technique is used to generate training data that is automatically labeled. The generated training data may be used to train neural networks for object detection and segmentation (labeling) tasks. In an embodiment, the generated training data includes synthetic input images generated by rendering three-dimensional (3D) objects of interest in a 3D scene. In an embodiment, the generated training data includes synthetic input images generated by rendering 3D objects of interest on a 2D background image. The 3D objects of interest are objects that a neural network is trained to detect and/or label.
    Type: Application
    Filed: January 24, 2019
    Publication date: August 15, 2019
    Inventors: Jonathan Tremblay, Aayush Prakash, Mark A. Brophy, Varun Jampani, Cem Anil, Stanley Thomas Birchfield, Thang Hong To, David Jesus Acuna Marrero