Patents by Inventor Sanja Fidler

Sanja Fidler has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250142145
    Abstract: In various examples, systems and methods are disclosed relating to aligning images into frames of a first video using at least one first temporal attention layer of a neural network model. The first video has a first spatial resolution. A second video having a second spatial resolution is generated by up-sampling the first video using at least one second temporal attention layer of an up-sampler neural network model, wherein the second spatial resolution is higher than the first spatial resolution.
    Type: Application
    Filed: January 6, 2025
    Publication date: May 1, 2025
    Applicant: NVIDIA Corporation
    Inventors: Karsten Julian Kreis, Robin Rombach, Andreas Blattmann, Seung Wook Kim, Huan Ling, Sanja Fidler, Tim Dockhorn
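
The entry above describes a cascade in which a base video is generated at a first resolution and an up-sampler model, itself equipped with temporal attention, produces the higher-resolution result while keeping frames aligned. Below is a minimal sketch of what a temporal attention layer does, assuming PyTorch; the shapes and module names are illustrative and are not taken from the filing.

```python
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    """Self-attention across the time axis, applied independently at each spatial location."""
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (batch, channels, time, height, width)
        b, c, t, h, w = video.shape
        x = video.permute(0, 3, 4, 2, 1).reshape(b * h * w, t, c)  # one sequence per pixel
        y, _ = self.attn(self.norm(x), self.norm(x), self.norm(x))
        x = x + y                                                   # residual connection
        return x.reshape(b, h, w, t, c).permute(0, 4, 3, 1, 2)

# Toy usage: upsample spatially, then let temporal attention mix information across frames.
frames = torch.randn(1, 32, 8, 16, 16)          # (batch, channels, time, H, W) latent video
upsampled = nn.functional.interpolate(
    frames, scale_factor=(1, 2, 2), mode="trilinear", align_corners=False)
aligned = TemporalAttention(channels=32)(upsampled)
print(aligned.shape)                             # torch.Size([1, 32, 8, 32, 32])
```

Attending along the time axis at each spatial position lets the up-sampler share information between neighboring frames, which is what keeps the super-resolved video temporally consistent.
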
  • Publication number: 20250139783
    Abstract: Various types of image analysis benefit from a multi-stream architecture that allows the analysis to consider shape data. A shape stream can process image data in parallel with a primary stream, where data from layers of a network in the primary stream is provided as input to a network of the shape stream. The shape data can be fused with the primary analysis data to produce more accurate output, such as accurate boundary information when the shape data is used with semantic segmentation data produced by the primary stream. A gate structure can be used to connect the intermediate layers of the primary and shape streams, using higher level activations to gate lower level activations in the shape stream. Such a gate structure can help focus the shape stream on the relevant information and reduce the additional weight of the shape stream.
    Type: Application
    Filed: November 11, 2024
    Publication date: May 1, 2025
    Inventors: David Jesus Acuna Marrero, Towaki Takikawa, Varun Jampani, Sanja Fidler
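
To make the gating idea above concrete, here is a minimal, hypothetical sketch (PyTorch assumed; layer sizes and names are illustrative, not the patented implementation): higher-level activations from the primary stream produce a sigmoid gate that modulates lower-level activations in the shape stream before fusion.

```python
import torch
import torch.nn as nn

class GatedShapeConnection(nn.Module):
    """Gates shape-stream features using activations coming from the primary stream."""
    def __init__(self, shape_ch: int, primary_ch: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(shape_ch + primary_ch, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        self.refine = nn.Conv2d(shape_ch, shape_ch, kernel_size=3, padding=1)

    def forward(self, shape_feat, primary_feat):
        # Resize the (typically coarser) primary-stream features to the shape stream's resolution.
        primary_feat = nn.functional.interpolate(
            primary_feat, size=shape_feat.shape[-2:], mode="bilinear", align_corners=False)
        alpha = self.gate(torch.cat([shape_feat, primary_feat], dim=1))  # attention map in [0, 1]
        return self.refine(shape_feat * alpha)      # keep only the boundary-relevant signal

# Toy usage: a 1/4-resolution shape stream gated by 1/8-resolution primary features.
shape_feat = torch.randn(2, 16, 64, 64)
primary_feat = torch.randn(2, 64, 32, 32)
out = GatedShapeConnection(16, 64)(shape_feat, primary_feat)
```
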
  • Patent number: 12288277
    Abstract: In various examples, high-precision semantic image editing for machine learning systems and applications is described. For example, a generative adversarial network (GAN) may be used to jointly model images and their semantic segmentations based on a same underlying latent code. Image editing may be achieved by using segmentation mask modifications (e.g., provided by a user, or otherwise) to optimize the latent code to be consistent with the updated segmentation, thus effectively changing the original (e.g., RGB) image. To improve the efficiency of the system, and to avoid a separate optimization for each edit on each image, editing vectors may be learned in latent space that realize the edits and that can be directly applied to other images with or without additional optimization. As a result, a GAN in combination with the optimization approaches described herein may simultaneously allow for high-precision editing in real time with straightforward compositionality of multiple edits.
    Type: Grant
    Filed: May 27, 2022
    Date of Patent: April 29, 2025
    Assignee: NVIDIA Corporation
    Inventors: Huan Ling, Karsten Kreis, Daiqing Li, Seung Wook Kim, Antonio Torralba Barriuso, Sanja Fidler
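
A minimal sketch of the optimization loop such an approach implies, assuming a hypothetical `generator(z)` that returns an RGB image and segmentation logits from a shared latent code (illustrative only; not the patented procedure):

```python
import torch

def edit_latent(generator, z_init, edited_mask, edit_region, steps=200, lr=0.05):
    """Optimize a latent code so the jointly generated segmentation matches an edited mask.

    edited_mask: (N, H, W) long tensor of class ids after the user's edit.
    edit_region: (N, H, W) bool mask marking where the user edited.
    """
    z = z_init.clone().detach().requires_grad_(True)
    with torch.no_grad():
        rgb_ref, _ = generator(z_init)             # original image, used to preserve unedited pixels
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        rgb, seg_logits = generator(z)
        seg_loss = torch.nn.functional.cross_entropy(seg_logits, edited_mask)
        keep = (~edit_region).float()              # 1 outside the user's edit, 0 inside
        preserve_loss = ((rgb - rgb_ref) ** 2 * keep.unsqueeze(1)).mean()
        loss = seg_loss + 10.0 * preserve_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    # z - z_init can be reused as an "editing vector" on other latents, as the abstract notes.
    return z.detach(), (z - z_init).detach()
```
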
  • Publication number: 20250131680
    Abstract: Disclosed are systems and methods relating to extracting 3D features, such as bounding boxes. Based on a condition view image associated with a source image that depicts a scene using a first set of camera parameters, the systems can apply an epipolar geometric warping to one or more features of the source image to determine a second set of camera parameters. The systems can generate, using a neural network, a synthetic image representing the one or more features and corresponding to the second set of camera parameters.
    Type: Application
    Filed: August 13, 2024
    Publication date: April 24, 2025
    Applicant: NVIDIA Corporation
    Inventors: Or Litany, Sanja Fidler, Huan Ling, Chenfeng Xu
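
The abstract above leans on epipolar geometry between a source view and a target view. As a small, self-contained refresher (standard multi-view geometry, not code from the filing), the fundamental matrix relating two camera parameter sets can be built from their projection matrices and used to map a source pixel to its epipolar line in the other view:

```python
import numpy as np

def fundamental_from_projections(P1: np.ndarray, P2: np.ndarray) -> np.ndarray:
    """F such that x2^T F x1 = 0, built from 3x4 projection matrices (standard formula)."""
    _, _, vt = np.linalg.svd(P1)
    c1 = vt[-1]                                   # camera-1 centre = null space of P1
    e2 = P2 @ c1                                  # epipole of camera 1 seen in image 2
    e2_cross = np.array([[0, -e2[2], e2[1]],
                         [e2[2], 0, -e2[0]],
                         [-e2[1], e2[0], 0]])
    return e2_cross @ P2 @ np.linalg.pinv(P1)

# Toy usage: the epipolar line in view 2 that corresponds to a pixel in view 1.
K = np.diag([500.0, 500.0, 1.0])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[0.2], [0.0], [0.0]])])   # small baseline shift
F = fundamental_from_projections(P1, P2)
line = F @ np.array([320.0, 240.0, 1.0])          # ax + by + c = 0 in the second image
```
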
  • Publication number: 20250131685
    Abstract: In various examples, a technique for modeling equivariance in point neural networks includes generating, via execution of one or more layers included in a neural network, a set of features associated with a first partition prediction for a plurality of points included in a scene. The technique also includes applying, to the set of features, one or more transformations included in a frame associated with the plurality of points to generate a set of equivariant features. The technique further includes generating a second partition prediction for the plurality of points based at least on the set of equivariant features, and causing an object recognition result associated with the plurality of points to be generated based at least on the second partition prediction.
    Type: Application
    Filed: May 24, 2024
    Publication date: April 24, 2025
    Inventors: Sanja Fidler, Matan Atzmon, Jiahui Huang, Or Litany, Francis Williams
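
The "frame" in the abstract above is a set of transformations attached to the point cloud; computing features under each transformation and averaging yields equivariant outputs. A generic, hypothetical example of such a construction is a PCA-based frame (illustrative frame averaging, not the patented method):

```python
import numpy as np

def pca_frames(points: np.ndarray):
    """Enumerate PCA-derived rotations (frames) of a point set.

    Sign flips of the principal axes give up to four right-handed rotations in 3D;
    features computed in each frame and mapped back can be averaged to obtain
    equivariant outputs.
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    _, vecs = np.linalg.eigh(centered.T @ centered)    # principal axes (columns)
    frames = []
    for sx in (1, -1):
        for sy in (1, -1):
            R = vecs * np.array([sx, sy, 1.0])
            R[:, 2] = np.cross(R[:, 0], R[:, 1])       # force a right-handed frame
            frames.append(R)
    return centroid, frames

def equivariant_features(points, backbone):
    """Average vector-valued per-point features over all frames (backbone is any callable)."""
    centroid, frames = pca_frames(points)
    feats = [backbone((points - centroid) @ R) @ R.T for R in frames]
    return np.mean(feats, axis=0)

# Toy usage with an identity "backbone" on a random cloud.
cloud = np.random.randn(100, 3)
print(equivariant_features(cloud, lambda x: x).shape)   # (100, 3)
```
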
  • Publication number: 20250131700
    Abstract: In various examples, a technique for modeling equivariance in point neural networks includes determining a first partition prediction associated with partitioning of a plurality of points included in a scene into a first set of parts. The technique also includes generating, using a neural network, a second partition prediction associated with partitioning of the plurality of points into a second set of parts based at least on one or more aggregations associated with the first set of parts. The technique further includes determining a plurality of piecewise equivariant regions included in the scene based on the second partition prediction and generating an object recognition result associated with the plurality of points based on the plurality of piecewise equivariant regions.
    Type: Application
    Filed: May 24, 2024
    Publication date: April 24, 2025
    Inventors: Sanja Fidler, Matan Atzmon, Jiahui Huang, Or Litany, Francis Williams
  • Publication number: 20250124640
    Abstract: Apparatuses, systems, and techniques to train one or more neural networks using stratified sampled training data parameters. In at least one embodiment, one or more stochastic training data parameters may be stratified sampled from one or more sampling ranges to compute a gradient for updating the one or more neural networks.
    Type: Application
    Filed: October 12, 2023
    Publication date: April 17, 2025
    Inventors: Jonathan Peter Lorraine, Cheng (Kevin) Xie, Xiaohui Zeng, Jun Gao, Sanja Fidler, James Lucas
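
A minimal sketch of stratified sampling of a stochastic training data parameter, as opposed to plain i.i.d. sampling (the per-sample gradient function below is a hypothetical stand-in for a real training step):

```python
import numpy as np

def stratified_samples(low: float, high: float, strata: int) -> np.ndarray:
    """Draw one uniform sample per stratum of [low, high] instead of i.i.d. samples.

    Stratification keeps the samples spread over the whole range, which typically
    lowers the variance of a Monte Carlo gradient estimate averaged over them.
    """
    rng = np.random.default_rng()
    edges = np.linspace(low, high, strata + 1)
    return edges[:-1] + rng.random(strata) * (edges[1:] - edges[:-1])

# Hypothetical use: average gradients over stratified samples of an augmentation strength.
strengths = stratified_samples(0.0, 1.0, strata=8)
grad_for_strength = lambda s: 2.0 * s            # toy "gradient" depending on the parameter
grad_estimate = np.mean([grad_for_strength(s) for s in strengths])
```
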
  • Publication number: 20250111109
    Abstract: In various examples, systems and methods are disclosed relating to generating tokens for traffic modeling. One or more circuits can identify trajectories in a dataset, and generate actions from the identified trajectories. The one or more circuits can generate, based at least on the plurality of actions and at least one trajectory of the plurality of trajectories, a set of tokens representing actions to generate trajectories of one or more agents in a simulation. The one or more circuits may update a transformer model to generate simulated actions for simulated agents based at least on tokens generated from the trajectories in the dataset.
    Type: Application
    Filed: May 15, 2024
    Publication date: April 3, 2025
    Applicant: NVIDIA Corporation
    Inventors: Jonah Philion, Sanja Fidler, Jason Peng
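
A toy example of turning trajectories into action tokens that a transformer could model autoregressively (the action parameterization and bin counts are assumptions for illustration, not the patented tokenizer):

```python
import numpy as np

def tokenize_trajectory(xy: np.ndarray, n_speed_bins=16, n_turn_bins=16,
                        max_speed=30.0, max_turn=0.5, dt=0.1) -> np.ndarray:
    """Quantize per-step (speed, heading change) actions of one trajectory into token ids.

    The vocabulary has n_speed_bins * n_turn_bins discrete actions; a transformer can
    then be trained over such token sequences to produce simulated agent actions.
    """
    deltas = np.diff(xy, axis=0)                       # (T-1, 2) displacement per step
    speed = np.linalg.norm(deltas, axis=1) / dt
    heading = np.arctan2(deltas[:, 1], deltas[:, 0])
    turn = np.diff(heading, prepend=heading[:1])       # heading change per step
    s_bin = np.clip((speed / max_speed * n_speed_bins).astype(int), 0, n_speed_bins - 1)
    t_bin = np.clip(((turn + max_turn) / (2 * max_turn) * n_turn_bins).astype(int),
                    0, n_turn_bins - 1)
    return s_bin * n_turn_bins + t_bin                 # one token id per time step

# Toy usage on a gently curving trajectory sampled at 10 Hz.
t = np.linspace(0, 5, 51)
trajectory = np.stack([10 * t, 0.5 * t ** 2], axis=1)
tokens = tokenize_trajectory(trajectory)
```
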
  • Publication number: 20250111588
    Abstract: Systems and methods of the present disclosure include interactive editing for generated three-dimensional (3D) models, such as those represented by neural radiance fields (NeRFs). A 3D model may be presented to a user in which the user may identify one or more localized regions for editing and/or modification. The localized regions may be selected and a corresponding 3D volume for that region may be provided to one or more generative networks, along with a prompt, to generate new content for the localized regions. Each of the original NeRF and the newly generated NeRF for the new content may then be combined into a single NeRF for a combined 3D representation with the original content and the localized modifications.
    Type: Application
    Filed: October 2, 2023
    Publication date: April 3, 2025
    Inventors: Karsten Julian Kreis, Maria Shugrina, Ming-Yu Liu, Or Perel, Sanja Fidler, Towaki Alan Takikawa, Tsung-Yi Lin, Xiaohui Zeng
  • Publication number: 20250095275
    Abstract: In various examples, images (e.g., novel views) of an object may be rendered using an optimized number of samples of a 3D representation of the object. The optimized number of the samples may be determined based at least on casting rays into a scene that includes the 3D representation of the object and/or an acceleration data structure corresponding to the object. The acceleration data structure may include features corresponding to characteristics of the object, and the features may be indicative of the number of samples to be obtained from various portions of the 3D representation of the object to render the images. In some examples, the 3D representation may be a neural radiance field that includes, as a neural output, a spatially varying kernel size predicting the characteristics of the object, and the features of the acceleration data structure may be related to the spatially varying kernel size.
    Type: Application
    Filed: April 9, 2024
    Publication date: March 20, 2025
    Inventors: Zian Wang, Tianchang Shen, Jun Gao, Merlin Nimier-David, Thomas Müller-Höhne, Alexander Keller, Sanja Fidler, Zan Gojcic, Nicholas Mark Worth Sharp
  • Publication number: 20250095229
    Abstract: Apparatuses, systems, and techniques to generate an image of an environment. In at least one embodiment, one or more neural networks are used to identify one or more static and dynamic features of an environment to be used to generate a representation of the environment.
    Type: Application
    Filed: December 27, 2023
    Publication date: March 20, 2025
    Inventors: Yue Wang, Jiawei Yang, Boris Ivanovic, Xinshuo Weng, Or Litany, Danfei Xu, Seung Wook Kim, Sanja Fidler, Marco Pavone, Boyi Li, Tong Che
  • Publication number: 20250086922
    Abstract: Apparatuses, systems, and techniques use one or more neural networks to generate a modified bounding box based, at least in part, on one or more second bounding boxes.
    Type: Application
    Filed: September 7, 2023
    Publication date: March 13, 2025
    Inventors: David Jesus Acuna Marrero, Rafid Mahmood, James Robert Lucas, Yuan-Hong Liao, Sanja Fidler
  • Publication number: 20250086896
    Abstract: In various examples, systems and methods are disclosed relating to neural networks for three-dimensional (3D) scene representations and modifying the 3D scene representations. In some implementations, a diffusion model can be configured to modify selected portions of 3D scenes represented using neural radiance fields, without painting back in content of the selected portions that was originally present. A first view of the neural radiance fields can be inpainted to remove a target feature from the first view, and used as guidance for updating the neural radiance field so that the target feature can be realistically removed from various second views of the neural radiance fields while context is retained outside of the selected portions.
    Type: Application
    Filed: September 12, 2023
    Publication date: March 13, 2025
    Applicant: NVIDIA Corporation
    Inventors: Or Litany, Sanja Fidler, Cho-Ying Wu, Huan Ling, Zan Gojcic, Riccardo de Lutio, Sameh Khamis
  • Patent number: 12243152
    Abstract: In various examples, information may be received for a 3D model, such as 3D geometry information, lighting information, and material information. A machine learning model may be trained to disentangle the 3D geometry information, the lighting information, and/or material information from input data to provide the information, which may be used to project geometry of the 3D model onto an image plane to generate a mapping between pixels and portions of the 3D model. Rasterization may then use the mapping to determine which pixels are covered by the geometry, and in what manner. The mapping may also be used to compute radiance for points corresponding to the one or more 3D models using light transport simulation. Disclosed approaches may be used in various applications, such as image editing, 3D model editing, synthetic data generation, and/or data set augmentation.
    Type: Grant
    Filed: February 14, 2024
    Date of Patent: March 4, 2025
    Assignee: NVIDIA Corporation
    Inventors: Wenzheng Chen, Joey Litalien, Jun Gao, Zian Wang, Clement Tse Tsian Christophe Louis Fuji Tsang, Sameh Khamis, Or Litany, Sanja Fidler
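
Once rasterization has produced a per-pixel mapping to the 3D model (positions, normals, and disentangled material/albedo), radiance can be computed per pixel. The snippet below is a deliberately simple single-bounce Lambertian point-light evaluation, standing in for the light transport simulation mentioned above; it is not the patented renderer.

```python
import numpy as np

def shade_gbuffer(position, normal, albedo, light_pos, light_intensity):
    """Single-bounce Lambertian shading of a rasterized G-buffer ((H, W, 3) arrays)."""
    to_light = light_pos - position                                  # per-pixel light vector
    dist2 = np.sum(to_light ** 2, axis=-1, keepdims=True)
    wi = to_light / np.sqrt(dist2)
    cos_term = np.clip(np.sum(normal * wi, axis=-1, keepdims=True), 0.0, None)
    # Diffuse BRDF (albedo / pi) times incoming radiance with inverse-square falloff.
    return (albedo / np.pi) * cos_term * light_intensity / dist2

# Toy G-buffer: a flat plane facing +z, lit from above.
h, w = 64, 64
ys, xs = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w), indexing="ij")
position = np.stack([xs, ys, np.zeros_like(xs)], axis=-1)
normal = np.tile(np.array([0.0, 0.0, 1.0]), (h, w, 1))
albedo = np.full((h, w, 3), 0.7)
image = shade_gbuffer(position, normal, albedo, np.array([0.0, 0.0, 2.0]), 5.0)
```
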
  • Publication number: 20250061153
    Abstract: A generative model can be used for generation of spatial layouts and graphs. Such a model can progressively grow these layouts and graphs based on local statistics, where nodes can represent spatial control points of the layout, and edges can represent segments or paths between nodes, such as may correspond to road segments. A generative model can utilize an encoder-decoder architecture where the encoder is a recurrent neural network (RNN) that encodes local incoming paths into a node and the decoder is another RNN that generates outgoing nodes and edges connecting an existing node to the newly generated nodes. Generation is done iteratively, and can finish once all nodes are visited or another end condition is satisfied. Such a model can generate layouts by additionally conditioning on a set of attributes, giving control to a user in generating the layout.
    Type: Application
    Filed: November 1, 2024
    Publication date: February 20, 2025
    Inventors: Hang Chu, Daiqing Li, David Jesus Acuna Marrero, Amlan Kar, Maria Shugrina, Ming-Yu Liu, Antonio Torralba Barriuso, Sanja Fidler
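
A compact, untrained sketch of the encoder-decoder growth loop described above (PyTorch assumed; module and variable names are hypothetical): an RNN encodes the incoming path into a node state, a second RNN emits that node's outgoing neighbors, and generation proceeds from a frontier until an end condition is met.

```python
import torch
import torch.nn as nn

class PathEncoder(nn.Module):
    """RNN that encodes the local incoming path (a short sequence of 2D points) into a node state."""
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)

    def forward(self, path):                      # path: (1, T, 2)
        _, h = self.rnn(path)
        return h[-1]                              # (1, hidden)

class EdgeDecoder(nn.Module):
    """RNN that emits outgoing node positions until it decides to stop."""
    def __init__(self, hidden=64, max_children=3):
        super().__init__()
        self.cell = nn.GRUCell(2, hidden)
        self.to_xy = nn.Linear(hidden, 2)
        self.to_stop = nn.Linear(hidden, 1)
        self.max_children = max_children

    def forward(self, state, last_xy):
        children = []
        for _ in range(self.max_children):
            state = self.cell(last_xy, state)
            if torch.sigmoid(self.to_stop(state)) > 0.5:
                break
            last_xy = self.to_xy(state)
            children.append(last_xy)
        return children

# Iterative growth: pop a frontier node, encode its incoming path, decode its children.
encoder, decoder = PathEncoder(), EdgeDecoder()
nodes, edges = [torch.zeros(1, 2)], []
frontier = [0]
while frontier and len(nodes) < 20:               # node cap acts as the "end condition"
    i = frontier.pop(0)
    incoming = nodes[max(0, i - 3):i + 1]         # toy simplification of the incoming path
    state = encoder(torch.stack(incoming, dim=1))
    for child_xy in decoder(state, nodes[i]):
        edges.append((i, len(nodes)))
        frontier.append(len(nodes))
        nodes.append(child_xy.detach())
```
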
  • Publication number: 20250054288
    Abstract: Various examples relate to translating image labels from one domain (e.g., a synthetic domain) to another domain (e.g., a real-world domain) to improve model performance on real-world datasets and applications. Systems and methods are disclosed that provide an unsupervised label translator that may employ a generative adversarial network (GAN)-based approach. In contrast to conventional systems, the disclosed approach can employ a data-centric perspective that addresses systematic mismatches between datasets from different sources.
    Type: Application
    Filed: August 7, 2023
    Publication date: February 13, 2025
    Applicant: NVIDIA Corporation
    Inventors: Yuan-Hong Liao, David Jesus Acuna Marrero, James Lucas, Rafid Mahmood, Sanja Fidler, Viraj Uday Prabhu
  • Publication number: 20250045980
    Abstract: Aspects of this technical solution can obtain input from a plurality of cameras oriented toward the surface of a three-dimensional (3D) model, the surface including a two-dimensional (2D) texture model, where the input corresponds to the cameras' views of the 2D texture model on the surface of the 3D model. According to the input and a model configured to generate a 2D image, an output including a 2D texture for the 3D model can be generated, the output responsive to receiving an indication of the 3D model and the 2D texture.
    Type: Application
    Filed: July 31, 2023
    Publication date: February 6, 2025
    Applicant: NVIDIA Corporation
    Inventors: Tianshi Cao, Kangxue Yin, Nicholas Mark Worth Sharp, Karsten Julian Kreis, Sanja Fidler
  • Publication number: 20250029334
    Abstract: Approaches presented herein provide systems and methods for generating three-dimensional (3D) objects using compressed data as an input. One or more models may learn from a hash table of latent features to map different features to a reconstruction domain, using a hash function as part of a learned process. A 3D shape for an object may be encoded to a multi-layered grid and represented by a series of embeddings, where a given point within the grid may be interpolated based on the embeddings for a given layer of the multi-layered grid. A decoder may then be trained to use the embeddings to generate an output object.
    Type: Application
    Filed: July 21, 2023
    Publication date: January 23, 2025
    Inventors: Xingguang Yan, Or Perel, James Robert Lucas, Towaki Takikawa, Karsten Julian Kreis, Maria Shugrina, Sanja Fidler, Or Litany
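
A generic illustration of a hashed, multi-level grid of latent features with interpolation at query points, in the spirit of the abstract above (the hash function, level counts, and table sizes are assumptions, not the patented encoding):

```python
import numpy as np

PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def hash_grid_lookup(points, tables, resolutions):
    """Interpolate per-level embeddings from hash tables of latent features (3D points in [0, 1)^3).

    Each level hashes the 8 corner cells around a point into a fixed-size table and
    trilinearly interpolates their embeddings; the levels are then concatenated.
    """
    outputs = []
    for table, res in zip(tables, resolutions):
        scaled = points * res
        base = np.floor(scaled).astype(np.uint64)
        frac = scaled - base
        level = np.zeros((len(points), table.shape[1]))
        for corner in range(8):
            offset = np.array([(corner >> d) & 1 for d in range(3)], dtype=np.uint64)
            prod = (base + offset) * PRIMES                     # spatial hash, wraps on overflow
            idx = (prod[:, 0] ^ prod[:, 1] ^ prod[:, 2]) % np.uint64(table.shape[0])
            weight = np.prod(np.where(offset == 1, frac, 1.0 - frac), axis=1)
            level += weight[:, None] * table[idx.astype(int)]
        outputs.append(level)
    return np.concatenate(outputs, axis=1)

# Toy usage: two levels, 2-D embeddings per entry, queried at random points.
tables = [np.random.randn(2 ** 14, 2) for _ in range(2)]
features = hash_grid_lookup(np.random.rand(5, 3), tables, resolutions=[16, 64])
```

A decoder network, as the abstract notes, would then be trained to map such interpolated embeddings back to the object's geometry.
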
  • Publication number: 20250029351
    Abstract: Generation of three-dimensional (3D) object models may be challenging for users without a sufficient skill set for content creation and may also be resource intensive. One or more style transfer networks may be used for part-aware style transformation of both geometric features and textural components of a source asset to a target asset. The source asset may be segmented into particular parts and then ellipsoid approximations may be warped according to correspondence of the particular parts to the target assets. Moreover, a texture associated with the target asset may be used to warp or adjust a source texture, where the new texture can be applied to the warped parts.
    Type: Application
    Filed: October 3, 2024
    Publication date: January 23, 2025
    Inventors: Kangxue Yin, Jun Gao, Masha Shugrina, Sameh Khamis, Sanja Fidler
  • Publication number: 20250020481
    Abstract: Apparatuses, systems, and techniques are presented to make determinations about objects in an environment. In at least one embodiment, a neural network can be used to determine one or more positions of one or more objects within a three-dimensional (3D) environment and to generate a segmented map of the 3D environment based, at least in part, on one or more two-dimensional (2D) images of the one or more objects.
    Type: Application
    Filed: April 7, 2022
    Publication date: January 16, 2025
    Inventors: Enze Xie, Zhiding Yu, Jonah Philion, Anima Anandkumar, Sanja Fidler, Jose Manuel Alvarez Lopez