Patents by Inventor Sanja Fidler

Sanja Fidler has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250111109
    Abstract: In various examples, systems and methods are disclosed relating to generating tokens for traffic modeling. One or more circuits can identify a plurality of trajectories in a dataset and generate a plurality of actions from the identified trajectories. The one or more circuits can generate, based at least on the plurality of actions and at least one trajectory of the plurality of trajectories, a set of tokens representing actions to generate trajectories of one or more agents in a simulation. The one or more circuits may update a transformer model to generate simulated actions for simulated agents based at least on tokens generated from the trajectories in the dataset.
    Type: Application
    Filed: May 15, 2024
    Publication date: April 3, 2025
    Applicant: NVIDIA Corporation
    Inventors: Jonah PHILION, Sanja FIDLER, Jason PENG
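    The abstract above describes turning recorded trajectories into discrete action tokens that a transformer can model. As a rough, hypothetical illustration (not the patented method), the tokenization step might discretize per-step motion deltas into a small action vocabulary and replay tokens to regenerate a trajectory; the action set and scheme below are invented for the sketch:

```python
# Toy sketch: discretize trajectory steps into action tokens (hypothetical scheme).

ACTIONS = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (0.0, 1.0), 3: (-1.0, 0.0), 4: (0.0, -1.0)}

def nearest_action(dx, dy):
    """Return the token whose motion delta is closest to (dx, dy)."""
    return min(ACTIONS, key=lambda t: (ACTIONS[t][0] - dx) ** 2 + (ACTIONS[t][1] - dy) ** 2)

def tokenize(trajectory):
    """Map a sequence of (x, y) waypoints to a sequence of action tokens."""
    return [nearest_action(x1 - x0, y1 - y0)
            for (x0, y0), (x1, y1) in zip(trajectory, trajectory[1:])]

def rollout(start, tokens):
    """Replay action tokens from a start point to regenerate a trajectory."""
    traj = [start]
    for t in tokens:
        dx, dy = ACTIONS[t]
        x, y = traj[-1]
        traj.append((x + dx, y + dy))
    return traj

traj = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (2.0, 1.0)]
tokens = tokenize(traj)
print(tokens)
print(rollout(traj[0], tokens))  # reproduces the original waypoints
```

    A sequence model such as the transformer mentioned in the abstract would then be trained on token sequences like these, rather than on raw continuous trajectories.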
  • Publication number: 20250111588
    Abstract: Systems and methods of the present disclosure include interactive editing for generated three-dimensional (3D) models, such as those represented by neural radiance fields (NeRFs). A 3D model may be presented to a user in which the user may identify one or more localized regions for editing and/or modification. The localized regions may be selected and a corresponding 3D volume for that region may be provided to one or more generative networks, along with a prompt, to generate new content for the localized regions. The original NeRF and the newly generated NeRF for the new content may then be combined into a single NeRF for a combined 3D representation with the original content and the localized modifications.
    Type: Application
    Filed: October 2, 2023
    Publication date: April 3, 2025
    Inventors: Karsten Julian Kreis, Maria Shugrina, Ming-Yu Liu, Or Perel, Sanja Fidler, Towaki Alan Takikawa, Tsung-Yi Lin, Xiaohui Zeng
  • Publication number: 20250095275
    Abstract: In various examples, images (e.g., novel views) of an object may be rendered using an optimized number of samples of a 3D representation of the object. The optimized number of the samples may be determined based at least on casting rays into a scene that includes the 3D representation of the object and/or an acceleration data structure corresponding to the object. The acceleration data structure may include features corresponding to characteristics of the object, and the features may be indicative of the number of samples to be obtained from various portions of the 3D representation of the object to render the images. In some examples, the 3D representation may be a neural radiance field that includes, as a neural output, a spatially varying kernel size predicting the characteristics of the object, and the features of the acceleration data structure may be related to the spatially varying kernel size.
    Type: Application
    Filed: April 9, 2024
    Publication date: March 20, 2025
    Inventors: Zian Wang, Tianchang Shen, Jun Gao, Merlin Nimier-David, Thomas Müller-Höhne, Alexander Keller, Sanja Fidler, Zan Gojcic, Nicholas Mark Worth Sharp
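    The abstract above describes rendering with a per-region sample budget read from an acceleration data structure. As a simplified, hypothetical illustration (a 1D ray and a hand-filled per-cell budget standing in for the acceleration structure's features):

```python
# Toy sketch: a coarse grid stores how many samples each region needs
# (standing in for the acceleration structure's per-region features).

SAMPLES_PER_CELL = {  # hypothetical: cell index -> sample budget
    0: 1, 1: 4, 2: 8, 3: 2,
}

def sample_positions(ray_origin, ray_dir, cell_size=1.0, n_cells=4):
    """Place ray samples densely where the grid asks for more samples."""
    positions = []
    for cell in range(n_cells):
        t0 = cell * cell_size
        n = SAMPLES_PER_CELL[cell]
        for i in range(n):
            t = t0 + (i + 0.5) * cell_size / n   # stratified within the cell
            positions.append(ray_origin + t * ray_dir)
    return positions

pts = sample_positions(0.0, 1.0)
print(len(pts))  # total samples = 1 + 4 + 8 + 2
```

    In the patent's setting, the budget would come from learned features (e.g., a spatially varying kernel size) rather than a fixed table, concentrating samples where the object's appearance varies quickly.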
  • Publication number: 20250095229
    Abstract: Apparatuses, systems, and techniques to generate an image of an environment. In at least one embodiment, one or more neural networks are used to identify one or more static and dynamic features of an environment to be used to generate a representation of the environment.
    Type: Application
    Filed: December 27, 2023
    Publication date: March 20, 2025
    Inventors: Yue Wang, Jiawei Yang, Boris Ivanovic, Xinshuo Weng, Or Litany, Danfei Xu, Seung Wook Kim, Sanja Fidler, Marco Pavone, Boyi Li, Tong Che
  • Publication number: 20250086922
    Abstract: Apparatuses, systems, and techniques use one or more neural networks to generate a modified bounding box based, at least in part, on one or more second bounding boxes.
    Type: Application
    Filed: September 7, 2023
    Publication date: March 13, 2025
    Inventors: David Jesus Acuna Marrero, Rafid Mahmood, James Robert Lucas, Yuan-Hong Liao, Sanja Fidler
  • Publication number: 20250086896
    Abstract: In various examples, systems and methods are disclosed relating to neural networks for three-dimensional (3D) scene representations and modifying the 3D scene representations. In some implementations, a diffusion model can be configured to modify selected portions of 3D scenes represented using neural radiance fields, without painting back in content of the selected portions that was originally present. A first view of the neural radiance fields can be inpainted to remove a target feature from the first view, and used as guidance for updating the neural radiance field so that the target feature can be realistically removed from various second views of the neural radiance fields while context is retained outside of the selected portions.
    Type: Application
    Filed: September 12, 2023
    Publication date: March 13, 2025
    Applicant: NVIDIA Corporation
    Inventors: Or LITANY, Sanja FIDLER, Cho-Ying WU, Huan LING, Zan GOJCIC, Riccardo DE LUTIO, Sameh KHAMIS
  • Patent number: 12243152
    Abstract: In various examples, information may be received for a 3D model, such as 3D geometry information, lighting information, and material information. A machine learning model may be trained to disentangle the 3D geometry information, the lighting information, and/or material information from input data to provide the information, which may be used to project geometry of the 3D model onto an image plane to generate a mapping between pixels and portions of the 3D model. Rasterization may then use the mapping to determine which pixels are covered and in what manner, by the geometry. The mapping may also be used to compute radiance for points corresponding to the one or more 3D models using light transport simulation. Disclosed approaches may be used in various applications, such as image editing, 3D model editing, synthetic data generation, and/or data set augmentation.
    Type: Grant
    Filed: February 14, 2024
    Date of Patent: March 4, 2025
    Assignee: NVIDIA Corporation
    Inventors: Wenzheng Chen, Joey Litalien, Jun Gao, Zian Wang, Clement Tse Tsian Christophe Louis Fuji Tsang, Sameh Khamis, Or Litany, Sanja Fidler
  • Publication number: 20250061153
    Abstract: A generative model can be used for generation of spatial layouts and graphs. Such a model can progressively grow these layouts and graphs based on local statistics, where nodes can represent spatial control points of the layout, and edges can represent segments or paths between nodes, such as may correspond to road segments. A generative model can utilize an encoder-decoder architecture where the encoder is a recurrent neural network (RNN) that encodes local incoming paths into a node and the decoder is another RNN that generates outgoing nodes and edges connecting an existing node to the newly generated nodes. Generation is done iteratively, and can finish once all nodes are visited or another end condition is satisfied. Such a model can generate layouts by additionally conditioning on a set of attributes, giving control to a user in generating the layout.
    Type: Application
    Filed: November 1, 2024
    Publication date: February 20, 2025
    Inventors: Hang Chu, Daiqing Li, David Jesus Acuna Marrero, Amlan Kar, Maria Shugrina, Ming-Yu Liu, Antonio Torralba Barriuso, Sanja Fidler
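    The abstract above describes growing a layout graph iteratively, where a decoder proposes outgoing nodes and edges until every node has been visited. As a rough sketch of just the grow loop (the RNN encoder/decoder is stubbed with a fixed rule, and all names are hypothetical):

```python
# Minimal sketch of the iterative grow loop; the decoder RNN is replaced by a
# deterministic stub so the control flow is runnable on its own.
from collections import deque

def propose_outgoing(node, depth):
    """Stand-in for the decoder RNN: propose outgoing control points."""
    if depth >= 2:            # stop condition stands in for the decoder's halt output
        return []
    x, y = node
    return [(x + 1, y), (x, y + 1)]

def grow_layout(root):
    """Iteratively expand a layout graph until every node has been visited."""
    nodes, edges = {root}, []
    queue = deque([(root, 0)])
    while queue:              # generation finishes once all nodes are visited
        node, depth = queue.popleft()
        for child in propose_outgoing(node, depth):
            edges.append((node, child))   # edge = segment/path between nodes
            if child not in nodes:
                nodes.add(child)
                queue.append((child, depth + 1))
    return nodes, edges

nodes, edges = grow_layout((0, 0))
print(len(nodes), len(edges))
```

    In the described system, `propose_outgoing` would be a learned RNN conditioned on encoded incoming paths and user-supplied attributes, so the same loop yields road-like layouts under user control.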
  • Publication number: 20250054288
    Abstract: Various examples relate to translating image labels from one domain (e.g., a synthetic domain) to another domain (e.g., a real-world domain) to improve model performance on real-world datasets and applications. Systems and methods are disclosed that provide an unsupervised label translator that may employ a generative adversarial network (GAN)-based approach. In contrast to conventional systems, the disclosed approach can employ a data-centric perspective that addresses systematic mismatches between datasets from different sources.
    Type: Application
    Filed: August 7, 2023
    Publication date: February 13, 2025
    Applicant: NVIDIA Corporation
    Inventors: Yuan-Hong LIAO, David Jesus ACUNA MARRERO, James LUCAS, Rafid MAHMOOD, Sanja FIDLER, Viraj Uday PRABHU
  • Publication number: 20250045980
    Abstract: Aspects of this technical solution can obtain, according to a plurality of cameras oriented toward the surface of a three-dimensional (3D) model having a surface including a two-dimensional (2D) texture model, input according to corresponding views from the plurality of cameras of the 2D texture model on the surface of the 3D model, and generate, according to the input and according to a model configured to generate a two-dimensional (2D) image, an output including a 2D texture for the 3D model, the output responsive to receiving an indication of the 3D model and the 2D texture.
    Type: Application
    Filed: July 31, 2023
    Publication date: February 6, 2025
    Applicant: NVIDIA Corporation
    Inventors: Tianshi CAO, Kangxue YIN, Nicholas Mark Worth SHARP, Karsten Julian KREIS, Sanja FIDLER
  • Publication number: 20250029351
    Abstract: Generation of three-dimensional (3D) object models may be challenging for users without a sufficient skill set for content creation and may also be resource intensive. One or more style transfer networks may be used for part-aware style transformation of both geometric features and textural components of a source asset to a target asset. The source asset may be segmented into particular parts and then ellipsoid approximations may be warped according to correspondence of the particular parts to the target assets. Moreover, a texture associated with the target asset may be used to warp or adjust a source texture, where the new texture can be applied to the warped parts.
    Type: Application
    Filed: October 3, 2024
    Publication date: January 23, 2025
    Inventors: Kangxue Yin, Jun Gao, Masha Shugrina, Sameh Khamis, Sanja Fidler
  • Publication number: 20250029334
    Abstract: Approaches presented herein provide systems and methods for generating three-dimensional (3D) objects using compressed data as an input. One or more models may learn from a hash table of latent features to map different features to a reconstruction domain, using a hash function as part of a learned process. A 3D shape for an object may be encoded to a multi-layered grid and represented by a series of embeddings, where a given point within the grid may be interpolated based on the embeddings for a given layer of the multi-layered grid. A decoder may then be trained to use the embeddings to generate an output object.
    Type: Application
    Filed: July 21, 2023
    Publication date: January 23, 2025
    Inventors: Xingguang Yan, Or Perel, James Robert Lucas, Towaki Takikawa, Karsten Julian Kreis, Maria Shugrina, Sanja Fidler, Or Litany
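    The abstract above describes hashing grid embeddings and interpolating a point's features per layer. As a simplified, hypothetical 1D illustration (toy hash function and stand-in latent tables, not the patented encoding):

```python
# Toy 1D sketch of a multi-level hash grid: each level hashes grid corners into
# a small latent table, and a query point linearly interpolates corner features.

TABLE_SIZE = 16

def hash_corner(level, index):
    """Hash a (level, corner-index) pair into the latent table (toy hash)."""
    return (index * 2654435761 + level * 97) % TABLE_SIZE

def encode(x, tables, resolutions):
    """Collect interpolated features from every level of the grid for x in [0, 1]."""
    feats = []
    for level, res in enumerate(resolutions):
        u = x * res                  # position in this level's grid
        i = int(u)                   # left corner index
        w = u - i                    # linear interpolation weight
        left = tables[level][hash_corner(level, i)]
        right = tables[level][hash_corner(level, i + 1)]
        feats.append((1 - w) * left + w * right)
    return feats

resolutions = [4, 8, 16]                 # coarse-to-fine layers
tables = [[float(j) for j in range(TABLE_SIZE)] for _ in resolutions]  # stand-in latents
print(encode(0.5, tables, resolutions))
```

    A decoder network would then consume the per-level features (typically concatenated) to reconstruct the shape; here the latent tables are fixed numbers purely so the lookup is inspectable.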
  • Publication number: 20250020481
    Abstract: Apparatuses, systems, and techniques are presented to make determinations about objects in an environment. In at least one embodiment, a neural network can be used to determine one or more positions of one or more objects within a three-dimensional (3D) environment and to generate a segmented map of the 3D environment based, at least in part, on one or more two-dimensional (2D) images of the one or more objects.
    Type: Application
    Filed: April 7, 2022
    Publication date: January 16, 2025
    Inventors: Enze Xie, Zhiding Yu, Jonah Philion, Anima Anandkumar, Sanja Fidler, Jose Manuel Alvarez Lopez
  • Patent number: 12192547
    Abstract: In various examples, systems and methods are disclosed relating to aligning images into frames of a first video using at least one first temporal attention layer of a neural network model. The first video has a first spatial resolution. A second video having a second spatial resolution is generated by up-sampling the first video using at least one second temporal attention layer of an up-sampler neural network model, wherein the second spatial resolution is higher than the first spatial resolution.
    Type: Grant
    Filed: March 10, 2023
    Date of Patent: January 7, 2025
    Assignee: NVIDIA Corporation
    Inventors: Karsten Julian Kreis, Robin Rombach, Andreas Blattmann, Seung Wook Kim, Huan Ling, Sanja Fidler, Tim Dockhorn
  • Patent number: 12141986
    Abstract: Various types of image analysis benefit from a multi-stream architecture that allows the analysis to consider shape data. A shape stream can process image data in parallel with a primary stream, where data from layers of a network in the primary stream is provided as input to a network of the shape stream. The shape data can be fused with the primary analysis data to produce more accurate output, such as to produce accurate boundary information when the shape data is used with semantic segmentation data produced by the primary stream. A gate structure can be used to connect the intermediate layers of the primary and shape streams, using higher level activations to gate lower level activations in the shape stream. Such a gate structure can help focus the shape stream on the relevant information and reduce any additional weight of the shape stream.
    Type: Grant
    Filed: June 12, 2023
    Date of Patent: November 12, 2024
    Assignee: NVIDIA Corporation
    Inventors: David Jesus Acuna Marrero, Towaki Takikawa, Varun Jampani, Sanja Fidler
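    The gate described above uses higher-level activations to modulate lower-level shape-stream activations. As a minimal, hypothetical sketch of that idea (elementwise sigmoid gating on toy activation vectors, not the patented layer):

```python
# Toy sketch of the gate: a higher-level activation, squashed to [0, 1],
# scales the lower-level shape-stream activation elementwise.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gate(shape_acts, primary_acts):
    """Use primary-stream activations to gate shape-stream activations."""
    return [s * sigmoid(p) for s, p in zip(shape_acts, primary_acts)]

shape = [0.8, 0.5, 0.9]      # lower-level shape-stream activations
primary = [4.0, -4.0, 0.0]   # higher-level primary-stream activations
print(gate(shape, primary))  # shape features survive where primary evidence is strong
```

    The effect is that shape features are suppressed wherever the primary stream sees little relevant structure, which is what lets the shape stream stay lightweight.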
  • Publication number: 20240371096
    Abstract: Approaches presented herein provide systems and methods for disentangling identity from expression in input models. One or more machine learning systems may be trained directly from three-dimensional (3D) points to develop unique latent codes for expressions associated with different identities. These codes may then be mapped to different identities to independently model an object, such as a face, to generate a new mesh including an expression for an independent identity. A pipeline may include a set of machine learning systems to determine model parameters and also adjust input expression codes using gradient backpropagation in order to train models for incorporation into a content development pipeline.
    Type: Application
    Filed: May 4, 2023
    Publication date: November 7, 2024
    Inventors: Sameh Khamis, Koki Nagano, Jan Kautz, Sanja Fidler
  • Publication number: 20240362897
    Abstract: In various examples, systems and methods are disclosed relating to synthetic data generation using viewpoint augmentation for autonomous and semi-autonomous systems and applications. One or more circuits can identify a set of sequential images corresponding to a first viewpoint and generate a first transformed image corresponding to a second viewpoint using a first image of the set of sequential images as input to a machine-learning model. The one or more circuits can update the machine-learning model based at least on a loss determined according to the first transformed image and a second image of the set of sequential images.
    Type: Application
    Filed: April 12, 2024
    Publication date: October 31, 2024
    Applicant: NVIDIA Corporation
    Inventors: Tzofi Klinghoffer, Jonah Philion, Zan Gojcic, Sanja Fidler, Or Litany, Wenzheng Chen, Jose Manuel Alvarez Lopez
  • Patent number: 12112445
    Abstract: Generation of three-dimensional (3D) object models may be challenging for users without a sufficient skill set for content creation and may also be resource intensive. One or more style transfer networks may be used for part-aware style transformation of both geometric features and textural components of a source asset to a target asset. The source asset may be segmented into particular parts and then ellipsoid approximations may be warped according to correspondence of the particular parts to the target assets. Moreover, a texture associated with the target asset may be used to warp or adjust a source texture, where the new texture can be applied to the warped parts.
    Type: Grant
    Filed: September 7, 2021
    Date of Patent: October 8, 2024
    Assignee: NVIDIA Corporation
    Inventors: Kangxue Yin, Jun Gao, Masha Shugrina, Sameh Khamis, Sanja Fidler
  • Publication number: 20240312123
    Abstract: In various examples, systems and methods are disclosed that relate to data augmentation for training/updating perception models in autonomous or semi-autonomous systems and applications. For example, a system may receive data associated with a set of frames that are captured using a plurality of cameras positioned in fixed relation relative to the machine; generate a panoramic view based at least on the set of frames; provide data associated with the panoramic view to a model to cause the model to generate a high dynamic range (HDR) panoramic view; determine lighting information associated with a light distribution map based at least on the HDR panoramic view; determine a virtual scene; and render an asset and a shadow on at least one of the frames, based at least on the virtual scene and the light distribution map, the shadow being a shadow corresponding to the asset.
    Type: Application
    Filed: February 29, 2024
    Publication date: September 19, 2024
    Applicant: NVIDIA Corporation
    Inventors: Malik Aqeel Anwar, Tae Eun Choe, Zian Wang, Sanja Fidler, Minwoo Park
  • Publication number: 20240296623
    Abstract: Approaches presented herein provide for the reconstruction of implicit multi-dimensional shapes. In one embodiment, oriented point cloud data representative of an object can be obtained using a physical scanning process. The point cloud data can be provided as input to a trained density model that can infer density functions for various points. The points can be mapped to a voxel hierarchy, allowing density functions to be determined for those voxels at the various levels that are associated with at least one point of the input point cloud. Contribution weights can be determined for the various density functions for the sparse voxel hierarchy, and the weighted density functions combined to obtain a density field. The density field can be evaluated to generate a geometric mesh where points having a zero, or near-zero, value are determined to contribute to the surface of the object.
    Type: Application
    Filed: February 15, 2023
    Publication date: September 5, 2024
    Inventors: Jiahui Huang, Francis Williams, Zan Gojcic, Matan Atzmon, Or Litany, Sanja Fidler
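    The abstract above describes blending per-voxel density functions with contribution weights into a single field whose zero set is the surface. As a rough, hypothetical 1D illustration (signed-distance stand-ins for the learned density functions):

```python
# Toy 1D sketch: per-voxel density functions are blended with contribution
# weights into one field, and the surface is where the field crosses zero.

def density_field(x, voxels):
    """Blend per-voxel signed-density functions f with weights w at point x."""
    total_w = sum(w for _, w, _ in voxels)
    return sum(w * f(x - c) for c, w, f in voxels) / total_w

# Each voxel (center c, weight 1.0) contributes a signed distance to the
# "surface" at x = 2, standing in for a density function inferred per point.
voxels = [(c, 1.0, lambda d, c=c: (c + d) - 2.0) for c in (0.0, 1.0, 3.0)]

# Scan for the zero crossing: that point lies on the reconstructed surface.
xs = [i * 0.5 for i in range(9)]          # 0.0 .. 4.0
values = [density_field(x, voxels) for x in xs]
crossing = next(x for x, v in zip(xs, values) if abs(v) < 1e-9)
print(crossing)
```

    In the described approach the per-voxel functions come from a trained density model over a sparse voxel hierarchy, and mesh extraction (rather than a 1D scan) evaluates where the blended field is zero or near zero.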