Patents by Inventor Amlan Kar

Amlan Kar is named as an inventor on the following patent filings. This listing includes pending patent applications as well as patents granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11816790
    Abstract: A rule set or scene grammar can be used to generate a scene graph that represents the structure and visual parameters of objects in a scene. A renderer can take this scene graph as input and, with a library of content for assets identified in the scene graph, can generate a synthetic image of a scene that has the desired scene structure without the need for manual placement of any of the objects in the scene. Images or environments synthesized in this way can be used to, for example, generate training data for real world navigational applications, as well as to generate virtual worlds for games or virtual reality experiences.
    Type: Grant
    Filed: December 10, 2020
    Date of Patent: November 14, 2023
    Assignee: NVIDIA Corporation
    Inventors: Jeevan Devaranjan, Sanja Fidler, Amlan Kar
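
A minimal sketch of the pipeline described in the abstract above: a small rule set expands into a scene graph (a tree of nodes with visual parameters), which a renderer could then turn into a synthetic image. The grammar rules, the sampled attributes, and the stub renderer below are illustrative assumptions, not the patented implementation.

import random

# Hypothetical rule set: each symbol expands into child symbols with a probability.
GRAMMAR = {
    "scene":      [(["road", "sidewalk"], 0.7), (["road"], 0.3)],
    "road":       [(["car", "car"], 0.5), (["car"], 0.5)],
    "sidewalk":   [(["pedestrian"], 0.6), ([], 0.4)],
    "car":        [([], 1.0)],
    "pedestrian": [([], 1.0)],
}

def sample_scene_graph(symbol="scene"):
    """Recursively expand a symbol into a scene-graph node with sampled visual parameters."""
    options, weights = zip(*GRAMMAR[symbol])
    children = random.choices(options, weights=weights, k=1)[0]
    return {
        "asset": symbol,
        # Illustrative visual parameters; a real grammar would constrain these.
        "position": (round(random.uniform(0, 100), 1), round(random.uniform(0, 100), 1)),
        "rotation": round(random.uniform(0, 360), 1),
        "children": [sample_scene_graph(c) for c in children],
    }

def render(node, depth=0):
    """Stand-in for a renderer: walk the scene graph and report asset placements."""
    print("  " * depth + f"{node['asset']} at {node['position']} rot {node['rotation']}")
    for child in node["children"]:
        render(child, depth + 1)

if __name__ == "__main__":
    graph = sample_scene_graph()
    render(graph)
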
  • Publication number: 20230229919
    Abstract: In various examples, a generative model is used to synthesize datasets for use in training a downstream machine learning model to perform an associated task. The synthesized datasets may be generated by sampling a scene graph from a scene grammar—such as a probabilistic grammar—and applying the scene graph to the generative model to compute updated scene graphs more representative of object attribute distributions of real-world datasets. The downstream machine learning model may be validated against a real-world validation dataset, and the performance of the model on the real-world validation dataset may be used as an additional factor in further training or fine-tuning the generative model for generating the synthesized datasets specific to the task of the downstream machine learning model.
    Type: Application
    Filed: March 20, 2023
    Publication date: July 20, 2023
    Inventors: Amlan Kar, Aayush Prakash, Ming-Yu Liu, David Jesus Acuna Marrero, Antonio Torralba Barriuso, Sanja Fidler
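
A minimal, self-contained sketch of the feedback loop described in the abstract above, in which the parameters of a synthetic-data generator are tuned using the downstream model's performance on real-world validation data. The Gaussian attribute model, the simulated validation score, and the finite-difference update below are illustrative assumptions rather than the claimed method.

import random

REAL_WORLD_MEAN = 2.5   # hidden attribute statistic of the "real" validation data

def synthesize_dataset(mean, n=200):
    """Sample an object attribute (e.g. object scale) for n synthetic scenes."""
    return [random.gauss(mean, 0.5) for _ in range(n)]

def downstream_validation_score(dataset):
    """Stand-in for training a task model and evaluating it on real data:
    the closer the synthetic attribute distribution is to the real one,
    the higher the (simulated) validation score."""
    synthetic_mean = sum(dataset) / len(dataset)
    return -abs(synthetic_mean - REAL_WORLD_MEAN)

def tune_generator(initial_mean=0.0, steps=50, eps=0.1, lr=0.5):
    """Hill-climb the generator parameter using the validation score as feedback."""
    mean = initial_mean
    for _ in range(steps):
        score_plus = downstream_validation_score(synthesize_dataset(mean + eps))
        score_minus = downstream_validation_score(synthesize_dataset(mean - eps))
        mean += lr * (score_plus - score_minus) / (2 * eps)  # finite-difference gradient estimate
    return mean

if __name__ == "__main__":
    tuned = tune_generator()
    print(f"tuned attribute mean: {tuned:.2f} (target {REAL_WORLD_MEAN})")
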
  • Patent number: 11610115
    Abstract: In various examples, a generative model is used to synthesize datasets for use in training a downstream machine learning model to perform an associated task. The synthesized datasets may be generated by sampling a scene graph from a scene grammar—such as a probabilistic grammar—and applying the scene graph to the generative model to compute updated scene graphs more representative of object attribute distributions of real-world datasets. The downstream machine learning model may be validated against a real-world validation dataset, and the performance of the model on the real-world validation dataset may be used as an additional factor in further training or fine-tuning the generative model for generating the synthesized datasets specific to the task of the downstream machine learning model.
    Type: Grant
    Filed: November 15, 2019
    Date of Patent: March 21, 2023
    Assignee: NVIDIA Corporation
    Inventors: Amlan Kar, Aayush Prakash, Ming-Yu Liu, David Jesus Acuna Marrero, Antonio Torralba Barriuso, Sanja Fidler
  • Patent number: 11556797
    Abstract: The present invention relates generally to object annotation, specifically to polygonal annotations of objects. Described are methods of annotating an object including steps of receiving an image depicting an object, generating a set of image features using a CNN encoder implemented on one or more computers, and producing a polygon object annotation via a recurrent decoder or a Graph Neural Network. The recurrent decoder may include a recurrent neural network, a graph neural network or a gated graph neural network. A system for annotating an object and a method of training an object annotation system are also described.
    Type: Grant
    Filed: March 23, 2020
    Date of Patent: January 17, 2023
    Inventors: Sanja Fidler, Amlan Kar, Huan Ling, Jun Gao, Wenzheng Chen, David Jesus Acuna Marrero
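
A minimal PyTorch sketch of the annotation architecture outlined in the abstract above: a small CNN encoder extracts image features and a recurrent (GRU) decoder emits a sequence of polygon vertices. The layer sizes, the fixed vertex count, and the coordinate regression head are illustrative assumptions, not the patented design.

import torch
import torch.nn as nn

class PolygonAnnotator(nn.Module):
    def __init__(self, feat_dim=64, hidden_dim=128, max_vertices=16):
        super().__init__()
        self.max_vertices = max_vertices
        # CNN encoder: image -> spatial features -> pooled feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Recurrent decoder: previous vertex + image feature -> next vertex.
        self.decoder = nn.GRUCell(feat_dim + 2, hidden_dim)
        self.vertex_head = nn.Linear(hidden_dim, 2)  # (x, y) in [0, 1]

    def forward(self, image):
        feat = self.encoder(image)                      # (B, feat_dim)
        batch = image.shape[0]
        hidden = feat.new_zeros(batch, self.decoder.hidden_size)
        vertex = feat.new_zeros(batch, 2)               # start token: origin
        vertices = []
        for _ in range(self.max_vertices):
            hidden = self.decoder(torch.cat([feat, vertex], dim=1), hidden)
            vertex = torch.sigmoid(self.vertex_head(hidden))
            vertices.append(vertex)
        return torch.stack(vertices, dim=1)             # (B, max_vertices, 2)

if __name__ == "__main__":
    model = PolygonAnnotator()
    polygons = model(torch.randn(2, 3, 64, 64))
    print(polygons.shape)  # torch.Size([2, 16, 2])
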
  • Patent number: 11176715
    Abstract: There is provided a system and method for color representation generation. In an aspect, the method includes: receiving three base colors; receiving a patchwork parameter; and generating a color representation having each of the three base colors at a vertex of a triangular face, the triangular face having a color distribution therein, the color distribution discretized into discrete portions, the amount of discretization based on the patchwork parameter, each discrete portion having an interpolated color determined to be a combination of the base colors at respective coordinates of such discrete portion. In further aspects, one or more color representations are generated based on an input image and can be used to modify colors of a reconstructed image.
    Type: Grant
    Filed: November 24, 2020
    Date of Patent: November 16, 2021
    Assignee: THE GOVERNING COUNCIL OF THE UNIVERSITY OF TORONTO
    Inventors: Maria Shugrina, Amlan Kar, Sanja Fidler, Karan Singh
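
A minimal sketch of the triangular color representation described in the abstract above: three base colors sit at the vertices of a triangular face, the face is discretized into a patchwork of cells controlled by a patchwork parameter, and each cell receives a color interpolated from the base colors at its coordinates. The particular cell layout below is an illustrative assumption.

def make_patchwork(base_colors, patchwork=4):
    """base_colors: three (r, g, b) tuples at the triangle vertices.
    patchwork: controls how finely the face is discretized.
    Returns a list of (barycentric_coords, interpolated_rgb) cells."""
    c0, c1, c2 = base_colors
    cells = []
    for i in range(patchwork):
        for j in range(patchwork - i):
            # Barycentric coordinates of the cell centre (a hypothetical layout).
            w1 = (i + 0.5) / patchwork
            w2 = (j + 0.5) / patchwork
            w0 = 1.0 - w1 - w2
            color = tuple(
                round(w0 * a + w1 * b + w2 * c)
                for a, b, c in zip(c0, c1, c2)
            )
            cells.append(((w0, w1, w2), color))
    return cells

if __name__ == "__main__":
    palette = make_patchwork([(255, 0, 0), (0, 255, 0), (0, 0, 255)], patchwork=3)
    for coords, rgb in palette:
        print(coords, rgb)
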
  • Publication number: 20210275918
    Abstract: A rule set or scene grammar can be used to generate a scene graph that represents the structure and visual parameters of objects in a scene. A renderer can take this scene graph as input and, with a library of content for assets identified in the scene graph, can generate a synthetic image of a scene that has the desired scene structure without the need for manual placement of any of the objects in the scene. Images or environments synthesized in this way can be used to, for example, generate training data for real world navigational applications, as well as to generate virtual worlds for games or virtual reality experiences.
    Type: Application
    Filed: December 10, 2020
    Publication date: September 9, 2021
    Inventors: Jeevan Devaranjan, Sanja Fidler, Amlan Kar
  • Publication number: 20210082159
    Abstract: There is provided a system and method for color representation generation. In an aspect, the method includes: receiving three base colors; receiving a patchwork parameter; and generating a color representation having each of the three base colors at a vertex of a triangular face, the triangular face having a color distribution therein, the color distribution discretized into discrete portions, the amount of discretization based on the patchwork parameter, each discrete portion having an interpolated color determined to be a combination of the base colors at respective coordinates of such discrete portion. In further aspects, one or more color representations are generated based on an input image and can be used to modify colors of a reconstructed image.
    Type: Application
    Filed: November 24, 2020
    Publication date: March 18, 2021
    Inventors: Maria Shugrina, Amlan Kar, Sanja Fidler, Karan Singh
  • Patent number: 10896524
    Abstract: There is provided a system and method for color representation generation. In an aspect, the method includes: receiving three base colors; receiving a patchwork parameter; and generating a color representation having each of the three base colors at a vertex of a triangular face, the triangular face having a color distribution therein, the color distribution discretized into discrete portions, the amount of discretization based on the patchwork parameter, each discrete portion having an interpolated color determined to be a combination of the base colors at respective coordinates of such discrete portion. In further aspects, one or more color representations are generated based on an input image and can be used to modify colors of a reconstructed image.
    Type: Grant
    Filed: May 17, 2019
    Date of Patent: January 19, 2021
    Assignee: THE GOVERNING COUNCIL OF THE UNIVERSITY OF TORONTO
    Inventors: Maria Shugrina, Amlan Kar, Sanja Fidler, Karan Singh
  • Publication number: 20200302250
    Abstract: A generative model can be used for generation of spatial layouts and graphs. Such a model can progressively grow these layouts and graphs based on local statistics, where nodes can represent spatial control points of the layout, and edges can represent segments or paths between nodes, such as may correspond to road segments. A generative model can utilize an encoder-decoder architecture where the encoder is a recurrent neural network (RNN) that encodes local incoming paths into a node and the decoder is another RNN that generates outgoing nodes and edges connecting an existing node to the newly generated nodes. Generation is done iteratively, and can finish once all nodes are visited or another end condition is satisfied. Such a model can generate layouts by additionally conditioning on a set of attributes, giving control to a user in generating the layout.
    Type: Application
    Filed: March 20, 2020
    Publication date: September 24, 2020
    Inventors: Hang Chu, Daiqing Li, David Jesus Acuna Marrero, Amlan Kar, Maria Shugrina, Ming-Yu Liu, Antonio Torralba Barriuso, Sanja Fidler
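
A minimal PyTorch sketch of the iterative layout generation described in the abstract above: an encoder RNN summarizes the path arriving at a node, and a decoder RNN proposes outgoing nodes and edges until a stop condition is met. The network sizes, the stop criterion, and the untrained random weights are illustrative assumptions, not the claimed model.

import torch
import torch.nn as nn

class LayoutGenerator(nn.Module):
    def __init__(self, hidden=32, max_out=3):
        super().__init__()
        self.max_out = max_out
        self.encoder = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
        self.decoder = nn.GRUCell(2, hidden)
        self.offset_head = nn.Linear(hidden, 2)   # offset to the next node position
        self.stop_head = nn.Linear(hidden, 1)     # probability of stopping

    def expand(self, incoming_path):
        """incoming_path: (1, T, 2) tensor of node positions leading to the current node."""
        _, h = self.encoder(incoming_path)        # encode the local incoming path
        h = h.squeeze(0)                          # (1, hidden)
        prev = incoming_path[:, -1, :]            # current node position
        out_nodes = []
        for _ in range(self.max_out):
            h = self.decoder(prev, h)
            if torch.sigmoid(self.stop_head(h)).item() > 0.5:
                break
            prev = prev + self.offset_head(h)     # propose a new outgoing node
            out_nodes.append(prev.squeeze(0).tolist())
        return out_nodes

if __name__ == "__main__":
    torch.manual_seed(0)
    gen = LayoutGenerator()
    frontier = [torch.zeros(1, 1, 2)]             # start from a single root node
    edges = []
    for _ in range(4):                            # a few growth iterations
        path = frontier.pop(0)
        for node in gen.expand(path):
            new = torch.tensor(node).view(1, 1, 2)
            edges.append((path[0, -1].tolist(), node))
            frontier.append(torch.cat([path, new], dim=1))
        if not frontier:
            break
    print(f"generated {len(edges)} edges")
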
  • Publication number: 20200226474
    Abstract: The present invention relates generally to object annotation, specifically to polygonal annotations of objects. Described are methods of annotating an object including steps of receiving an image depicting an object, generating a set of image features using a CNN encoder implemented on one or more computers, and producing a polygon object annotation via a recurrent decoder or a Graph Neural Network. The recurrent decoder may include a recurrent neural network, a graph neural network or a gated graph neural network. A system for annotating an object and a method of training an object annotation system are also described.
    Type: Application
    Filed: March 23, 2020
    Publication date: July 16, 2020
    Inventors: Sanja Fidler, Amlan Kar, Huan Ling, Jun Gao, Wenzheng Chen, David Jesus Acuna Marrero
  • Publication number: 20200160178
    Abstract: In various examples, a generative model is used to synthesize datasets for use in training a downstream machine learning model to perform an associated task. The synthesized datasets may be generated by sampling a scene graph from a scene grammar—such as a probabilistic grammar—and applying the scene graph to the generative model to compute updated scene graphs more representative of object attribute distributions of real-world datasets. The downstream machine learning model may be validated against a real-world validation dataset, and the performance of the model on the real-world validation dataset may be used as an additional factor in further training or fine-tuning the generative model for generating the synthesized datasets specific to the task of the downstream machine learning model.
    Type: Application
    Filed: November 15, 2019
    Publication date: May 21, 2020
    Inventors: Amlan Kar, Aayush Prakash, Ming-Yu Liu, David Jesus Acuna Marrero, Antonio Torralba Barriuso, Sanja Fidler
  • Patent number: 10643130
    Abstract: The present invention relates generally to object annotation, specifically to polygonal annotations of objects. Described are methods of annotating an object including steps of receiving an image depicting an object, generating a set of image features using a CNN encoder implemented on one or more computers, and producing a polygon object annotation via a recurrent decoder or a Graph Neural Network. The recurrent decoder may include a recurrent neural network, a graph neural network or a gated graph neural network. A system for annotating an object and a method of training an object annotation system are also described.
    Type: Grant
    Filed: March 25, 2019
    Date of Patent: May 5, 2020
    Inventors: Sanja Fidler, Amlan Kar, Huan Ling, Jun Gao, Wenzheng Chen, David Jesus Acuna Marrero
  • Publication number: 20190355155
    Abstract: There is provided a system and method for color representation generation. In an aspect, the method includes: receiving three base colors; receiving a patchwork parameter; and generating a color representation having each of the three base colors at a vertex of a triangular face, the triangular face having a color distribution therein, the color distribution discretized into discrete portions, the amount of discretization based on the patchwork parameter, each discrete portion having an interpolated color determined to be a combination of the base colors at respective coordinates of such discrete portion. In further aspects, one or more color representations are generated based on an input image and can be used to modify colors of a reconstructed image.
    Type: Application
    Filed: May 17, 2019
    Publication date: November 21, 2019
    Inventors: Maria Shugrina, Amlan Kar, Sanja Fidler, Karan Singh
  • Publication number: 20190294970
    Abstract: The present invention relates generally to object annotation, specifically to polygonal annotations of objects. Described are methods of annotating an object including steps of receiving an image depicting an object, generating a set of image features using a CNN encoder implemented on one or more computers, and producing a polygon object annotation via a recurrent decoder or a Graph Neural Network. The recurrent decoder may include a recurrent neural network, a graph neural network or a gated graph neural network. A system for annotating an object and a method of training an object annotation system are also described.
    Type: Application
    Filed: March 25, 2019
    Publication date: September 26, 2019
    Inventors: Sanja Fidler, Amlan Kar, Huan Ling, Jun Gao, Wenzheng Chen, David Jesus Acuna Marrero