Patents by Inventor Antonio Torralba Barriuso
Antonio Torralba Barriuso has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240096064
Abstract: Apparatuses, systems, and techniques to annotate images using neural models. In at least one embodiment, neural networks generate mask information from labels of one or more objects within one or more images identified by one or more other neural networks.
Type: Application
Filed: June 3, 2022
Publication date: March 21, 2024
Inventors: Daiqing Li, Huan Ling, Seung Wook Kim, Karsten Julian Kreis, Sanja Fidler, Antonio Torralba Barriuso
-
Publication number: 20230377324
Abstract: In various examples, systems and methods are disclosed relating to multi-domain generative adversarial networks with learned warp fields. Input data can be generated according to a noise function and provided as input to a generative machine-learning model. The generative machine-learning model can determine a plurality of output images each corresponding to one of a respective plurality of image domains. The generative machine-learning model can include at least one layer to generate a plurality of morph maps each corresponding to one of the respective plurality of image domains. The output images can be presented using a display device.
Type: Application
Filed: May 18, 2023
Publication date: November 23, 2023
Applicant: NVIDIA Corporation
Inventors: Seung Wook Kim, Karsten Julian Kreis, Daiqing Li, Sanja Fidler, Antonio Torralba Barriuso
-
Publication number: 20230229919
Abstract: In various examples, a generative model is used to synthesize datasets for use in training a downstream machine learning model to perform an associated task. The synthesized datasets may be generated by sampling a scene graph from a scene grammar—such as a probabilistic grammar—and applying the scene graph to the generative model to compute updated scene graphs more representative of object attribute distributions of real-world datasets. The downstream machine learning model may be validated against a real-world validation dataset, and the performance of the model on the real-world validation dataset may be used as an additional factor in further training or fine-tuning the generative model for generating the synthesized datasets specific to the task of the downstream machine learning model.
Type: Application
Filed: March 20, 2023
Publication date: July 20, 2023
Inventors: Amlan Kar, Aayush Prakash, Ming-Yu Liu, David Jesus Acuna Marrero, Antonio Torralba Barriuso, Sanja Fidler
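A minimal sketch of the scene-grammar sampling step this abstract describes: a probabilistic grammar assigns each node type a set of possible children with sampling probabilities, and a scene graph is grown recursively. The grammar, node types, and probabilities below are illustrative only, not taken from the patent.

```python
import random

# Hypothetical probabilistic scene grammar: parent node type -> list of
# (child node type, probability of emitting that child).
GRAMMAR = {
    "scene": [("road", 1.0)],
    "road": [("car", 0.7), ("pedestrian", 0.3)],
}

def sample_scene_graph(node="scene", rng=None, depth=0, max_depth=3):
    """Recursively sample a scene graph from the grammar."""
    rng = rng or random.Random(0)
    children = []
    if depth < max_depth:
        for child_type, prob in GRAMMAR.get(node, []):
            if rng.random() < prob:
                children.append(
                    sample_scene_graph(child_type, rng, depth + 1, max_depth))
    return {"type": node, "children": children}
```

In the patented pipeline, sampled graphs like this would then be passed through the generative model to shift object attributes toward real-world distributions; that refinement step is omitted here.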
-
Publication number: 20230134690
Abstract: Approaches are presented for training an inverse graphics network. An image synthesis network can generate training data for an inverse graphics network. In turn, the inverse graphics network can teach the synthesis network about the physical three-dimensional (3D) controls. Such an approach can provide for accurate 3D reconstruction of objects from 2D images using the trained inverse graphics network, while requiring little annotation of the provided training data. Such an approach can extract and disentangle 3D knowledge learned by generative models by utilizing differentiable renderers, enabling a disentangled generative model to function as a controllable 3D “neural renderer,” complementing traditional graphics renderers.
Type: Application
Filed: November 7, 2022
Publication date: May 4, 2023
Inventors: Wenzheng Chen, Yuxuan Zhang, Sanja Fidler, Huan Ling, Jun Gao, Antonio Torralba Barriuso
-
Patent number: 11610115
Abstract: In various examples, a generative model is used to synthesize datasets for use in training a downstream machine learning model to perform an associated task. The synthesized datasets may be generated by sampling a scene graph from a scene grammar—such as a probabilistic grammar—and applying the scene graph to the generative model to compute updated scene graphs more representative of object attribute distributions of real-world datasets. The downstream machine learning model may be validated against a real-world validation dataset, and the performance of the model on the real-world validation dataset may be used as an additional factor in further training or fine-tuning the generative model for generating the synthesized datasets specific to the task of the downstream machine learning model.
Type: Grant
Filed: November 15, 2019
Date of Patent: March 21, 2023
Assignee: NVIDIA Corporation
Inventors: Amlan Kar, Aayush Prakash, Ming-Yu Liu, David Jesus Acuna Marrero, Antonio Torralba Barriuso, Sanja Fidler
-
Publication number: 20220383570
Abstract: In various examples, high-precision semantic image editing for machine learning systems and applications are described. For example, a generative adversarial network (GAN) may be used to jointly model images and their semantic segmentations based on a same underlying latent code. Image editing may be achieved by using segmentation mask modifications (e.g., provided by a user, or otherwise) to optimize the latent code to be consistent with the updated segmentation, thus effectively changing the original (e.g., RGB) image. To improve efficiency of the system, and to not require optimizations for each edit on each image, editing vectors may be learned in latent space that realize the edits, and that can be directly applied on other images with or without additional optimizations. As a result, a GAN in combination with the optimization approaches described herein may simultaneously allow for high precision editing in real-time with straightforward compositionality of multiple edits.
Type: Application
Filed: May 27, 2022
Publication date: December 1, 2022
Inventors: Huan Ling, Karsten Kreis, Daiqing Li, Seung Wook Kim, Antonio Torralba Barriuso, Sanja Fidler
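The core loop in this abstract — optimize the latent code until the generated segmentation matches an edited mask, which in turn changes the image — can be illustrated with a toy scalar "generator." The linear generator, learning rate, and step count below are stand-ins, not the patented architecture.

```python
def generator(z):
    """Toy generator: latent scalar z -> (image, soft segmentation mask).
    A real GAN would map a latent vector to pixels; here both outputs are
    simple linear functions of z so the gradient is analytic."""
    image = [2.0 * z, z + 1.0]
    mask = [0.5 * z, 0.25 * z]
    return image, mask

def edit_latent(z, target_mask, lr=0.1, steps=200):
    """Gradient descent on z so the generated mask matches target_mask;
    the image changes as a side effect of moving z."""
    for _ in range(steps):
        _, mask = generator(z)
        # d/dz of sum_i (mask_i - target_i)^2, with mask = [0.5 z, 0.25 z]
        grad = (2 * (mask[0] - target_mask[0]) * 0.5
                + 2 * (mask[1] - target_mask[1]) * 0.25)
        z -= lr * grad
    return z
```

The learned "editing vectors" mentioned in the abstract would amortize this per-image optimization; they are not modeled here.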
-
Patent number: 11494976
Abstract: Approaches are presented for training an inverse graphics network. An image synthesis network can generate training data for an inverse graphics network. In turn, the inverse graphics network can teach the synthesis network about the physical three-dimensional (3D) controls. Such an approach can provide for accurate 3D reconstruction of objects from 2D images using the trained inverse graphics network, while requiring little annotation of the provided training data. Such an approach can extract and disentangle 3D knowledge learned by generative models by utilizing differentiable renderers, enabling a disentangled generative model to function as a controllable 3D “neural renderer,” complementing traditional graphics renderers.
Type: Grant
Filed: March 5, 2021
Date of Patent: November 8, 2022
Assignee: Nvidia Corporation
Inventors: Wenzheng Chen, Yuxuan Zhang, Sanja Fidler, Huan Ling, Jun Gao, Antonio Torralba Barriuso
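The differentiable-renderer idea underlying this patent family can be sketched as analysis-by-synthesis: because the renderer is differentiable, a 3D parameter can be recovered from a target image by gradient descent through the rendering step. The three-pixel linear "renderer" below is a deliberately tiny stand-in for a real differentiable rasterizer.

```python
WEIGHTS = (0.2, 0.5, 0.9)  # fixed per-pixel response to object scale

def render(scale):
    """Toy differentiable renderer: 3D object scale -> pixel intensities."""
    return [scale * w for w in WEIGHTS]

def recover_scale(target, scale=0.0, lr=0.5, steps=100):
    """Analysis-by-synthesis: descend the pixel-space squared error
    by backpropagating through the (linear) renderer."""
    for _ in range(steps):
        pred = render(scale)
        grad = sum(2 * (p - t) * w for p, t, w in zip(pred, target, WEIGHTS))
        scale -= lr * grad
    return scale
```

In the patent, the inverse graphics network predicts such parameters directly, and the differentiable renderer supplies the training gradient; this sketch shows only the gradient-through-rendering mechanism.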
-
Publication number: 20220269937
Abstract: Apparatuses, systems, and techniques to use one or more neural networks to generate one or more images based, at least in part, on one or more spatially-independent features within the one or more images. In at least one embodiment, the one or more neural networks determine spatially-independent information and spatially-dependent information of the one or more images and process the spatially-independent information and the spatially-dependent information to generate the one or more spatially-independent features and one or more spatially-dependent features within the one or more images.
Type: Application
Filed: February 24, 2021
Publication date: August 25, 2022
Inventors: Seung Wook Kim, Jonah Philion, Sanja Fidler, Antonio Torralba Barriuso
-
Publication number: 20220083807
Abstract: Apparatuses, systems, and techniques to determine pixel-level labels of a synthetic image. In at least one embodiment, the synthetic image is generated by one or more generative networks and the pixel-level labels are generated using a combination of data output by a plurality of layers of the generative networks.
Type: Application
Filed: September 14, 2020
Publication date: March 17, 2022
Inventors: Yuxuan Zhang, Huan Ling, Jun Gao, Wenzheng Chen, Antonio Torralba Barriuso, Sanja Fidler
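The "combination of data output by a plurality of layers" step can be sketched as: upsample each generator layer's feature map to image resolution, concatenate the features per pixel, and run a small classifier over them. The feature shapes and nearest-prototype classifier below are illustrative assumptions, not the patented design.

```python
def concat_layer_features(layer_outputs):
    """Concatenate per-pixel features from several generator layers.
    layer_outputs: list of per-layer maps, each [n_pixels][channels_i],
    already upsampled to a shared resolution -> [n_pixels][sum channels_i]."""
    n_pixels = len(layer_outputs[0])
    return [sum((layer[i] for layer in layer_outputs), [])
            for i in range(n_pixels)]

def classify_pixels(features, class_weights):
    """Label each pixel by the class whose weight vector scores highest
    against the pixel's concatenated feature vector."""
    labels = []
    for f in features:
        scores = [sum(fi * wi for fi, wi in zip(f, w)) for w in class_weights]
        labels.append(scores.index(max(scores)))
    return labels
```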
-
Publication number: 20210390778
Abstract: Apparatuses, systems, and techniques are presented to generate a simulated environment. In at least one embodiment, one or more neural networks are used to generate a simulated environment based, at least in part, on stored information associated with objects within the simulated environment.
Type: Application
Filed: June 10, 2020
Publication date: December 16, 2021
Inventors: Seung Wook Kim, Sanja Fidler, Jonah Philion, Antonio Torralba Barriuso
-
Publication number: 20210279952
Abstract: Approaches are presented for training an inverse graphics network. An image synthesis network can generate training data for an inverse graphics network. In turn, the inverse graphics network can teach the synthesis network about the physical three-dimensional (3D) controls. Such an approach can provide for accurate 3D reconstruction of objects from 2D images using the trained inverse graphics network, while requiring little annotation of the provided training data. Such an approach can extract and disentangle 3D knowledge learned by generative models by utilizing differentiable renderers, enabling a disentangled generative model to function as a controllable 3D “neural renderer,” complementing traditional graphics renderers.
Type: Application
Filed: March 5, 2021
Publication date: September 9, 2021
Inventors: Wenzheng Chen, Yuxuan Zhang, Sanja Fidler, Huan Ling, Jun Gao, Antonio Torralba Barriuso
-
Publication number: 20200302250
Abstract: A generative model can be used for generation of spatial layouts and graphs. Such a model can progressively grow these layouts and graphs based on local statistics, where nodes can represent spatial control points of the layout, and edges can represent segments or paths between nodes, such as may correspond to road segments. A generative model can utilize an encoder-decoder architecture where the encoder is a recurrent neural network (RNN) that encodes local incoming paths into a node and the decoder is another RNN that generates outgoing nodes and edges connecting an existing node to the newly generated nodes. Generation is done iteratively, and can finish once all nodes are visited or another end condition is satisfied. Such a model can generate layouts by additionally conditioning on a set of attributes, giving control to a user in generating the layout.
Type: Application
Filed: March 20, 2020
Publication date: September 24, 2020
Inventors: Hang Chu, Daiqing Li, David Jesus Acuna Marrero, Amlan Kar, Maria Shugrina, Ming-Yu Liu, Antonio Torralba Barriuso, Sanja Fidler
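The iterative growth procedure in this abstract — visit a node, decode outgoing nodes and edges, stop when every node is visited — can be sketched with the RNN encoder/decoder replaced by a deterministic placeholder. Node placement, branching factor, and stopping condition below are illustrative assumptions.

```python
from collections import deque

def generate_layout(max_nodes=6, branching=2):
    """Iteratively grow a road-layout graph: pop a frontier node, emit
    outgoing nodes/edges (stand-in for the decoder RNN), and finish when
    all nodes are visited or the node budget is reached."""
    nodes = {0: (0.0, 0.0)}          # node id -> 2D control point
    edges = []                       # (parent id, child id) road segments
    frontier = deque([0])
    next_id = 1
    while frontier and next_id < max_nodes:
        parent = frontier.popleft()  # a real model would RNN-encode the
        px, py = nodes[parent]       # incoming paths of this node here
        for k in range(branching):
            if next_id >= max_nodes:
                break
            nodes[next_id] = (px + 1.0, py + float(k))  # placeholder decode
            edges.append((parent, next_id))
            frontier.append(next_id)
            next_id += 1
    return nodes, edges
```

The attribute conditioning mentioned at the end of the abstract (user control over the generated layout) would modulate the decode step and is omitted from this sketch.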
-
Publication number: 20200160178
Abstract: In various examples, a generative model is used to synthesize datasets for use in training a downstream machine learning model to perform an associated task. The synthesized datasets may be generated by sampling a scene graph from a scene grammar—such as a probabilistic grammar—and applying the scene graph to the generative model to compute updated scene graphs more representative of object attribute distributions of real-world datasets. The downstream machine learning model may be validated against a real-world validation dataset, and the performance of the model on the real-world validation dataset may be used as an additional factor in further training or fine-tuning the generative model for generating the synthesized datasets specific to the task of the downstream machine learning model.
Type: Application
Filed: November 15, 2019
Publication date: May 21, 2020
Inventors: Amlan Kar, Aayush Prakash, Ming-Yu Liu, David Jesus Acuna Marrero, Antonio Torralba Barriuso, Sanja Fidler
-
Patent number: 9754177
Abstract: One or more aspects of the subject disclosure are directed towards identifying objects within an image via image searching/matching. In one aspect, an image is processed into bounding boxes, with the bounding boxes further processed to each surround a possible object. A sub-image of pixels corresponding to the bounding box is featurized for matching with tagged database images. The information (tags) associated with any matched images is processed to identify/categorize the sub-image and thus the object corresponding thereto.
Type: Grant
Filed: June 21, 2013
Date of Patent: September 5, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Ce Liu, Yair Weiss, Antonio Torralba Barriuso
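The featurize-and-match pipeline in this abstract can be sketched in a few lines: extract a feature vector from the sub-image inside a bounding box, then transfer the tag of the nearest tagged database image. The mean/variance feature and nearest-neighbor match below are toy stand-ins for the patent's actual descriptors and search.

```python
def featurize(pixels):
    """Toy descriptor: mean intensity and variance of a sub-image's pixels."""
    mean = sum(pixels) / len(pixels)
    var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    return (mean, var)

def match_tag(sub_image, tagged_db):
    """Transfer the tag of the database image nearest in feature space.
    tagged_db: list of {'pixels': [...], 'tag': str} entries."""
    f = featurize(sub_image)
    def dist(entry):
        g = featurize(entry["pixels"])
        return (f[0] - g[0]) ** 2 + (f[1] - g[1]) ** 2
    return min(tagged_db, key=dist)["tag"]
```

In the patented system, this match would run once per candidate bounding box, with the box-proposal step (not shown) supplying the sub-images.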
-
Publication number: 20140376819
Abstract: One or more aspects of the subject disclosure are directed towards identifying objects within an image via image searching/matching. In one aspect, an image is processed into bounding boxes, with the bounding boxes further processed to each surround a possible object. A sub-image of pixels corresponding to the bounding box is featurized for matching with tagged database images. The information (tags) associated with any matched images is processed to identify/categorize the sub-image and thus the object corresponding thereto.
Type: Application
Filed: June 21, 2013
Publication date: December 25, 2014
Inventors: Ce Liu, Yair Weiss, Antonio Torralba Barriuso