Patents by Inventor Or Litany

Or Litany has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250232471
    Abstract: Approaches presented herein provide for visual localization by matching features of a query image with features obtained from a representation of a three-dimensional (3D) environment. A model such as a neural radiance field (NeRF) can be trained to represent the 3D environment. When a query image is received, query features can be extracted at two different resolutions. A lower-resolution set of query features can be compared against NeRF descriptor features for a set of training images to narrow the search space by finding a set of coarse matches. Higher-resolution query features can then be compared against sampled features of these coarse matches to identify 2D-3D correspondences that can be used to calculate camera pose information for the query image. (An illustrative code sketch of this two-stage matching follows this entry.)
    Type: Application
    Filed: January 9, 2025
    Publication date: July 17, 2025
    Inventors: Qunjie Zhou, Maxim Maximov, Or Litany, Laura Leal-Taixe
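
Below is a minimal, illustrative sketch of the two-stage matching flow described in this entry, using random NumPy data. The feature extractors, the per-image NeRF descriptor table, all array shapes, and the confidence threshold are assumptions made for the sketch, not the patented implementation.

```python
# Hypothetical coarse-to-fine matching sketch; not the patented method.
import numpy as np

rng = np.random.default_rng(0)

def cosine_sim(a, b):
    """Pairwise cosine similarity between rows of a and rows of b."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

# Stage 1: coarse retrieval. One low-resolution query descriptor is
# compared against per-training-image NeRF descriptors to shrink the
# search space to the k most similar training views.
query_coarse = rng.normal(size=(1, 128))          # low-res query descriptor
train_descriptors = rng.normal(size=(500, 128))   # one per training image
k = 5
coarse_ids = np.argsort(-cosine_sim(query_coarse, train_descriptors)[0])[:k]

# Stage 2: fine matching. High-resolution query features are matched
# against features sampled from the NeRF at 3D points visible in the
# retrieved views, yielding 2D-3D correspondences for pose estimation.
query_fine = rng.normal(size=(200, 128))          # per-pixel query features
pts3d = rng.normal(size=(k * 300, 3))             # sampled 3D points
pts_feat = rng.normal(size=(k * 300, 128))        # NeRF features at pts3d

sim = cosine_sim(query_fine, pts_feat)
best = sim.argmax(axis=1)                          # one 3D match per 2D feature
conf = sim.max(axis=1)
keep = conf > 0.2                                  # drop weak correspondences
corr_2d3d = list(zip(np.nonzero(keep)[0], pts3d[best[keep]]))
# corr_2d3d would then feed a standard PnP + RANSAC solver to recover
# the query camera pose.
print(f"{len(corr_2d3d)} putative 2D-3D correspondences")
```
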
  • Publication number: 20250166288
    Abstract: Systems and methods of the present disclosure include providing higher levels of detail (LODs) for generated three-dimensional (3D) models, such as those represented by neural radiance fields (NeRFs). A 3D model may be presented to a user, who may request additional LODs, such as by zooming into the image or requesting information about features within the image. A request to generate finer levels of detail may include using one or more diffusion models to generate images at higher resolutions and/or to hallucinate finer details based on information extracted from the original image or text prompts. Newly generated images may then be added to the set of images associated with the 3D model so that later model generation has finer details. (A sketch of this detail-on-demand loop follows this entry.)
    Type: Application
    Filed: November 17, 2023
    Publication date: May 22, 2025
    Inventors: Or Perel, Maria Shugrina, Yoni Kasten, Or Litany, Gal Chechik, Sanja Fidler
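
A workflow sketch of the detail-on-demand loop above. The functions `diffusion_upsample` and `retrain_model` are hypothetical stubs standing in for a diffusion super-resolution model and a NeRF (re)fitting step; they exist only so the sketch runs end to end.

```python
# Workflow sketch only; the stubs below are placeholders, not a real API.
import numpy as np

def diffusion_upsample(image, scale=2, prompt=None):
    # Stand-in for a diffusion super-resolution / detail-hallucination
    # model; here we just repeat pixels so the sketch runs.
    return image.repeat(scale, axis=0).repeat(scale, axis=1)

def retrain_model(images):
    # Stand-in for (re)fitting the 3D representation, e.g. a NeRF,
    # on the current image set.
    return {"num_views": len(images)}

training_images = [np.zeros((64, 64, 3)) for _ in range(10)]
model = retrain_model(training_images)

# User zooms in: generate a finer view and fold it back into the image
# set, so the next reconstruction carries the hallucinated detail.
zoom_crop = training_images[0][16:48, 16:48]
finer = diffusion_upsample(zoom_crop, scale=4, prompt="close-up detail")
training_images.append(finer)
model = retrain_model(training_images)
print(model)
```
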
  • Publication number: 20250131680
    Abstract: Disclosed are systems and methods relating to extracting 3D features, such as bounding boxes. The systems can apply an epipolar geometric warping to one or more features of a source image that depicts a scene using a first set of camera parameters, based on a condition view image associated with the source image, to determine a second set of camera parameters. The systems can generate, using a neural network, a synthetic image representing the one or more features and corresponding to the second set of camera parameters. (A sketch of the underlying epipolar geometry follows this entry.)
    Type: Application
    Filed: August 13, 2024
    Publication date: April 24, 2025
    Applicant: NVIDIA Corporation
    Inventors: Or Litany, Sanja Fidler, Huan Ling, Chenfeng Xu
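
A small sketch of the epipolar geometry that constrains a feature warped between the two camera parameter sets mentioned above. The intrinsics and relative pose are invented values, and the learned feature-warping network of the application is not reproduced.

```python
# Standard two-view epipolar constraint; values are made up for the sketch.
import numpy as np

def skew(t):
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

K1 = K2 = np.array([[500., 0., 320.],
                    [0., 500., 240.],
                    [0., 0., 1.]])
R = np.eye(3)                      # relative rotation, source -> condition view
t = np.array([0.2, 0.0, 0.0])      # relative translation (baseline)

# Fundamental matrix: x2^T F x1 = 0 for corresponding pixels x1, x2.
F = np.linalg.inv(K2).T @ skew(t) @ R @ np.linalg.inv(K1)

x1 = np.array([300., 200., 1.])    # a feature location in the source image
line2 = F @ x1                     # its epipolar line in the second view
# A feature warped from x1 must land on line2; sampling along this line
# is the geometric constraint behind the warping step.
print(line2 / np.linalg.norm(line2[:2]))
```
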
  • Publication number: 20250131685
    Abstract: In various examples, a technique for modeling equivariance in point neural networks includes generating, via execution of one or more layers included in a neural network, a set of features associated with a first partition prediction for a plurality of points included in a scene. The technique also includes applying, to the set of features, one or more transformations included in a frame associated with the plurality of points to generate a set of equivariant features. The technique further includes generating a second partition prediction for the plurality of points based at least on the set of equivariant features, and causing an object recognition result associated with the plurality of points to be generated based at least on the second partition prediction. (A frame-averaging sketch follows this entry.)
    Type: Application
    Filed: May 24, 2024
    Publication date: April 24, 2025
    Inventors: Sanja Fidler, Matan Atzmon, Jiahui Huang, Or Litany, Francis Williams
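
One common way to realize the frame-based equivariance described above is frame averaging: compute features under each transformation in a frame built from the point cloud, then aggregate. The PCA-based frame, the stub feature network, and all sizes here are assumptions for the sketch.

```python
# Frame-averaging sketch; not necessarily the construction in the application.
import numpy as np

rng = np.random.default_rng(1)
points = rng.normal(size=(100, 3))

def pca_frame(pts):
    """Build a small frame of rotations from the point cloud's principal
    axes; sign choices enumerate the frame elements."""
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    frames = []
    for sx in (1, -1):
        for sy in (1, -1):
            frames.append(np.diag([sx, sy, sx * sy]) @ vt)
    return frames

def feature_net(pts):
    # Stand-in for the network layers producing per-point features
    # (a fixed random linear map, just for the sketch).
    W = np.random.default_rng(2).normal(size=(3, 16))
    return pts @ W

# Averaging features computed in every frame yields an output that is
# insensitive to the input's orientation; applying the inverse frame to
# per-point vector features instead would give equivariant features.
frames = pca_frame(points)
feats = np.mean([feature_net(points @ R.T) for R in frames], axis=0)
print(feats.shape)
```
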
  • Publication number: 20250095229
    Abstract: Apparatuses, systems, and techniques to generate an image of an environment. In at least one embodiment, one or more neural networks are used to identify one or more static and dynamic features of an environment to be used to generate a representation of the environment.
    Type: Application
    Filed: December 27, 2023
    Publication date: March 20, 2025
    Inventors: Yue Wang, Jiawei Yang, Boris Ivanovic, Xinshuo Weng, Or Litany, Danfei Xu, Seung Wook Kim, Sanja Fidler, Marco Pavone, Boyi Li, Tong Che
  • Patent number: 12243152
    Abstract: In various examples, information may be received for a 3D model, such as 3D geometry information, lighting information, and material information. A machine learning model may be trained to disentangle the 3D geometry information, the lighting information, and/or material information from input data to provide the information, which may be used to project geometry of the 3D model onto an image plane to generate a mapping between pixels and portions of the 3D model. Rasterization may then use the mapping to determine which pixels are covered and in what manner, by the geometry. The mapping may also be used to compute radiance for points corresponding to the one or more 3D models using light transport simulation. Disclosed approaches may be used in various applications, such as image editing, 3D model editing, synthetic data generation, and/or data set augmentation. (A toy pixel-to-geometry rasterization sketch follows this entry.)
    Type: Grant
    Filed: February 14, 2024
    Date of Patent: March 4, 2025
    Assignee: NVIDIA Corporation
    Inventors: Wenzheng Chen, Joey Litalien, Jun Gao, Zian Wang, Clement Tse Tsian Christophe Louis Fuji Tsang, Sameh Khamis, Or Litany, Sanja Fidler
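
A toy version of the pixel-to-geometry mapping step described in this entry (and its related applications below): project a triangle onto the image plane and record, per covered pixel, the covering primitive and its barycentric weights. The shading and light-transport stages are not reproduced, and all camera values are made up.

```python
# Toy rasterization-style coverage test; illustrative only.
import numpy as np

verts = np.array([[0.0, 0.0, 2.0],
                  [1.0, 0.0, 2.0],
                  [0.0, 1.0, 2.0]])     # one camera-space triangle
f, cx, cy = 32.0, 16.0, 16.0            # pinhole intrinsics (made up)

# Perspective projection to pixel coordinates.
xy = verts[:, :2] / verts[:, 2:3]
pix = xy * f + np.array([cx, cy])

H = W = 32
mapping = {}                             # pixel -> (triangle_id, barycentrics)
a, b, c = pix
Minv = np.linalg.inv(np.column_stack([b - a, c - a]))
for y in range(H):
    for x in range(W):
        uv = Minv @ (np.array([x + 0.5, y + 0.5]) - a)
        bary = np.array([1 - uv.sum(), uv[0], uv[1]])
        if (bary >= 0).all():            # pixel center inside the triangle
            mapping[(x, y)] = (0, bary)
# `mapping` is the per-pixel link between image and geometry that a
# downstream shading / radiance computation would consume.
print(f"{len(mapping)} pixels covered")
```
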
  • Patent number: 12217519
    Abstract: A method for 3D object detection is described. The method includes predicting, using a trained monocular depth network, an estimated monocular input depth map of a monocular image of a video stream and an estimated depth uncertainty map associated with the estimated monocular input depth map. The method also includes feeding back a depth uncertainty regression loss associated with the estimated monocular input depth map during training of the trained monocular depth network to update the estimated monocular input depth map. The method further includes detecting 3D objects from a 3D point cloud computed from the estimated monocular input depth map based on seed positions selected from the 3D point cloud and the estimated depth uncertainty map. The method also includes selecting 3D bounding boxes of the 3D objects detected from the 3D point cloud based on the seed positions and an aggregated depth uncertainty. (A sketch of uncertainty-weighted seed selection follows this entry.)
    Type: Grant
    Filed: December 6, 2021
    Date of Patent: February 4, 2025
    Assignees: TOYOTA RESEARCH INSTITUTE, INC., THE BOARD OF TRUSTEES OF THE LELAND STANFORD JUNIOR UNIVERSITY
    Inventors: Rares Andrei Ambrus, Or Litany, Vitor Guizilini, Leonidas Guibas, Adrien David Gaidon, Jie Li
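
A sketch of selecting detection seeds from a back-projected point cloud while down-weighting uncertain depth estimates. The inverse-uncertainty sampling rule here is an assumption standing in for the patented selection procedure.

```python
# Uncertainty-aware seed sampling sketch; rule and sizes are assumptions.
import numpy as np

rng = np.random.default_rng(3)
points = rng.normal(size=(5000, 3))          # point cloud from predicted depth
uncertainty = rng.uniform(0.05, 1.0, 5000)   # per-point depth uncertainty

# Favor confident points: sample seeds with probability proportional to
# the inverse uncertainty.
weights = 1.0 / uncertainty
probs = weights / weights.sum()
seed_ids = rng.choice(len(points), size=256, replace=False, p=probs)
seeds = points[seed_ids]

# A detection head would vote/regress 3D boxes from these seeds; box
# selection could again aggregate the seeds' uncertainties.
print(seeds.shape, uncertainty[seed_ids].mean())
```
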
  • Publication number: 20250029334
    Abstract: Approaches presented herein provide systems and methods for generating three-dimensional (3D) objects using compressed data as an input. One or more models may learn from a hash table of latent features to map different features to a reconstruction domain, using a hash function as part of a learned process. A 3D shape for an object may be encoded to a multi-layered grid and represented by a series of embeddings, where a given point within the grid may be interpolated based on the embeddings for a given layer of the multi-layered grid. A decoder may then be trained to use the embeddings to generate an output object. (A minimal hash-grid lookup sketch follows this entry.)
    Type: Application
    Filed: July 21, 2023
    Publication date: January 23, 2025
    Inventors: Xingguang Yan, Or Perel, James Robert Lucas, Towaki Takikawa, Karsten Julian Kreis, Maria Shugrina, Sanja Fidler, Or Litany
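
A minimal multi-level hash-grid lookup in the spirit of this entry: a hash function indexes per-level tables of latent embeddings, and a query point trilinearly interpolates its cell's corner embeddings before a decoder would consume them. The table sizes, hash constants, and feature widths are standard but assumed values.

```python
# Multi-resolution hash-grid encoding sketch; constants are assumptions.
import numpy as np

rng = np.random.default_rng(4)
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def hash_corner(ijk, table_size):
    h = np.uint64(0)
    for v, p in zip(ijk, PRIMES):
        h ^= np.uint64(v) * p
    return int(h % np.uint64(table_size))

levels = [(16, 2**12), (32, 2**14), (64, 2**16)]   # (resolution, table size)
tables = [rng.normal(size=(size, 2)) for _, size in levels]

def encode(x):
    """Concatenate trilinearly interpolated embeddings across levels."""
    feats = []
    for (res, size), table in zip(levels, tables):
        g = x * (res - 1)
        lo = np.floor(g).astype(np.uint64)
        w = g - lo
        acc = np.zeros(2)
        for dz in (0, 1):
            for dy in (0, 1):
                for dx in (0, 1):
                    corner = lo + np.array([dx, dy, dz], dtype=np.uint64)
                    weight = ((w[0] if dx else 1 - w[0]) *
                              (w[1] if dy else 1 - w[1]) *
                              (w[2] if dz else 1 - w[2]))
                    acc += weight * table[hash_corner(corner, size)]
        feats.append(acc)
    return np.concatenate(feats)   # would be fed to a small decoder MLP

print(encode(np.array([0.3, 0.7, 0.1])).shape)     # (6,)
```
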
  • Publication number: 20240362897
    Abstract: In various examples, systems and methods are disclosed relating to synthetic data generation using viewpoint augmentation for autonomous and semi-autonomous systems and applications. One or more circuits can identify a set of sequential images corresponding to a first viewpoint and generate a first transformed image corresponding to a second viewpoint using a first image of the set of sequential images as input to a machine-learning model. The one or more circuits can update the machine-learning model based at least on a loss determined according to the first transformed image and a second image of the set of sequential images. (A training-loop sketch follows this entry.)
    Type: Application
    Filed: April 12, 2024
    Publication date: October 31, 2024
    Applicant: NVIDIA Corporation
    Inventors: Tzofi Klinghoffer, Jonah Philion, Zan Gojcic, Sanja Fidler, Or Litany, Wenzheng Chen, Jose Manuel Alvarez Lopez
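
A training-loop sketch of the viewpoint-augmentation idea above: synthesize a frame of a monocular sequence from a shifted viewpoint and supervise against a neighboring real frame. The view-synthesis model is a stub (a horizontal roll), and the photometric L1 loss is one plausible choice, not necessarily the one in the application.

```python
# Viewpoint-augmentation training sketch; model and loss are stand-ins.
import numpy as np

def novel_view_model(image, viewpoint_shift):
    # Placeholder for the learned view-synthesis network: a simple
    # horizontal roll stands in for a small camera translation.
    return np.roll(image, int(viewpoint_shift * image.shape[1]), axis=1)

sequence = [np.random.default_rng(i).random((48, 64, 3)) for i in range(4)]

# Ego-motion between frames provides "free" second viewpoints:
# synthesize frame 0 as seen from (roughly) frame 1's pose and compare.
pred = novel_view_model(sequence[0], viewpoint_shift=0.1)
target = sequence[1]
loss = np.abs(pred - target).mean()     # photometric L1 supervision
# In training, `loss` would be backpropagated to update the model.
print(f"photometric loss: {loss:.4f}")
```
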
  • Publication number: 20240296623
    Abstract: Approaches presented herein provide for the reconstruction of implicit multi-dimensional shapes. In one embodiment, oriented point cloud data representative of an object can be obtained using a physical scanning process. The point cloud data can be provided as input to a trained density model that can infer density functions for various points. The points can be mapped to a voxel hierarchy, allowing density functions to be determined for those voxels at the various levels that are associated with at least one point of the input point cloud. Contribution weights can be determined for the various density functions for the sparse voxel hierarchy, and the weighted density functions combined to obtain a density field. The density field can be evaluated to generate a geometric mesh where points having a zero, or near-zero, value are determined to contribute to the surface of the object. (A sketch of the weighted density evaluation follows this entry.)
    Type: Application
    Filed: February 15, 2023
    Publication date: September 5, 2024
    Inventors: Jiahui Huang, Francis Williams, Zan Gojcic, Matan Atzmon, Or Litany, Sanja Fidler
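
A rough sketch of evaluating a density field as a weighted combination of local, normal-signed basis functions attached to the input points. Gaussian bases and random weights stand in for the learned per-voxel density functions and contribution weights described above.

```python
# Weighted density-field evaluation sketch; bases and weights are assumed.
import numpy as np

rng = np.random.default_rng(5)
surface_pts = rng.normal(size=(200, 3))        # oriented point cloud positions
normals = rng.normal(size=(200, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
weights = rng.uniform(0.5, 1.5, 200)           # learned contribution weights

def density(q, sigma=0.3):
    """Signed density at query q: sum of weighted, normal-signed
    Gaussian contributions from nearby points."""
    d = q - surface_pts
    signed = np.einsum('ij,ij->i', d, normals)         # side of local plane
    rbf = np.exp(-np.sum(d * d, axis=1) / (2 * sigma**2))
    return float(np.sum(weights * signed * rbf))

# Marching cubes over a grid of such evaluations would extract the mesh
# at the zero crossing of this field.
print(density(surface_pts[0] + 0.01 * normals[0]))
```
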
  • Publication number: 20240257443
    Abstract: A technique for reconstructing a three-dimensional scene from monocular video adaptively allocates an explicit sparse-dense voxel grid with dense voxel blocks around surfaces in the scene and sparse voxel blocks further from the surfaces. In contrast to conventional systems, the two-level voxel grid can be efficiently queried and sampled. In an embodiment, the scene surface geometry is represented as a signed distance field (SDF). Representation of the scene surface geometry can be extended to multi-modal data such as semantic labels and color. Because properties stored in the sparse-dense voxel grid structure are differentiable, the scene surface geometry can be optimized via differentiable volume rendering. (A toy sparse-dense grid follows this entry.)
    Type: Application
    Filed: November 30, 2023
    Publication date: August 1, 2024
    Inventors: Christopher B. Choy, Or Litany, Charles Loop, Yuke Zhu, Animashree Anandkumar, Wei Dong
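
A toy two-level sparse-dense grid as described in this entry: a sparse dictionary of coarse blocks, each holding a dense array of fine voxel values (SDF values here). The block size and storage layout are illustrative assumptions.

```python
# Two-level sparse-dense voxel grid sketch; layout is an assumption.
import numpy as np

BLOCK = 8                                       # fine voxels per block edge

class SparseDenseGrid:
    def __init__(self):
        self.blocks = {}                        # coarse index -> dense block

    def _split(self, ijk):
        coarse = tuple(int(c) // BLOCK for c in ijk)
        fine = tuple(int(c) % BLOCK for c in ijk)
        return coarse, fine

    def set(self, ijk, value):
        coarse, fine = self._split(ijk)
        # Dense blocks are only allocated near surfaces; elsewhere the
        # grid stays sparse (missing keys).
        block = self.blocks.setdefault(coarse, np.full((BLOCK,) * 3, np.inf))
        block[fine] = value

    def query(self, ijk):
        coarse, fine = self._split(ijk)
        block = self.blocks.get(coarse)
        return np.inf if block is None else block[fine]

grid = SparseDenseGrid()
grid.set((10, 3, 77), -0.02)                    # SDF value just inside a surface
print(grid.query((10, 3, 77)), grid.query((100, 100, 100)))
```
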
  • Publication number: 20240185506
    Abstract: In various examples, information may be received for a 3D model, such as 3D geometry information, lighting information, and material information. A machine learning model may be trained to disentangle the 3D geometry information, the lighting information, and/or material information from input data to provide the information, which may be used to project geometry of the 3D model onto an image plane to generate a mapping between pixels and portions of the 3D model. Rasterization may then use the mapping to determine which pixels are covered and in what manner, by the geometry. The mapping may also be used to compute radiance for points corresponding to the one or more 3D models using light transport simulation. Disclosed approaches may be used in various applications, such as image editing, 3D model editing, synthetic data generation, and/or data set augmentation.
    Type: Application
    Filed: February 14, 2024
    Publication date: June 6, 2024
    Inventors: Wenzheng Chen, Joey Litalien, Jun Gao, Zian Wang, Clement Tse Tsian Christophe Louis Fuji Tsang, Sameh Khamis, Or Litany, Sanja Fidler
  • Publication number: 20240160888
    Abstract: In various examples, systems and methods are disclosed relating to neural networks for realistic and controllable agent simulation using guided trajectories. The neural networks can be configured using training data including trajectories and other state data associated with subjects or agents and remote or neighboring subjects or agents, as well as context data representative of an environment in which the subjects are present. The trajectories can be determined using the neural networks together with various forms of guidance for controllability, such as waypoint navigation, obstacle avoidance, and group movement. (A guided-denoising sketch follows this entry.)
    Type: Application
    Filed: March 31, 2023
    Publication date: May 16, 2024
    Applicant: NVIDIA Corporation
    Inventors: Davis Winston Rempe, Karsten Julian Kreis, Sanja Fidler, Or Litany, Jonah Philion
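
A sketch of guidance during trajectory denoising: at each step the sample is nudged down the gradient of a guidance cost, here the distance of the trajectory endpoint to a waypoint. The denoiser is a stub and the schedule and guidance scale are invented; obstacle-avoidance or group-movement costs would add further gradient terms in the same loop.

```python
# Guided trajectory denoising sketch; denoiser and scales are stand-ins.
import numpy as np

rng = np.random.default_rng(6)

def denoise_step(traj):
    # Placeholder for the learned denoiser: shrink the noise slightly.
    return traj * 0.95

def waypoint_grad(traj, waypoint):
    # Gradient of 0.5 * ||traj[-1] - waypoint||^2 w.r.t. the trajectory.
    g = np.zeros_like(traj)
    g[-1] = traj[-1] - waypoint
    return g

waypoint = np.array([5.0, 2.0])
traj = rng.normal(size=(20, 2))                 # noisy (T, xy) trajectory
guidance_scale = 0.3

for _ in range(50):
    traj = denoise_step(traj)
    traj -= guidance_scale * waypoint_grad(traj, waypoint)

print("endpoint:", traj[-1])                    # pulled toward the waypoint
```
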
  • Publication number: 20240161377
    Abstract: In various examples, systems and methods are disclosed relating to generating a simulated environment and updating a machine learning model to move each of a plurality of human characters, having a plurality of body shapes, to follow a corresponding trajectory within the simulated environment, conditioned on the respective body shape. The simulated human characters can have diverse characteristics (such as gender, body proportions, body shape, and so on) as observed in real-life crowds. A machine learning model can determine an action for a human character in a simulated environment based at least on a humanoid state, a body shape, and task-related features. The task-related features can include an environmental feature and a trajectory. (A sketch of the policy input assembly follows this entry.)
    Type: Application
    Filed: March 31, 2023
    Publication date: May 16, 2024
    Applicant: NVIDIA Corporation
    Inventors: Zhengyi Luo, Jason Peng, Sanja Fidler, Or Litany, Davis Winston Rempe, Ye Yuan
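
A sketch of assembling the policy observation described above: humanoid state, a body-shape code, and task-related features are concatenated into a single input vector. All dimensions (e.g. the SMPL-style 10-dimensional shape code) are illustrative assumptions.

```python
# Policy observation assembly sketch; all dimensions are assumptions.
import numpy as np

rng = np.random.default_rng(7)
humanoid_state = rng.normal(size=69)           # joint positions/velocities
body_shape = rng.normal(size=10)               # e.g. SMPL-style betas
terrain_feat = rng.normal(size=32)             # local environment sample
trajectory = rng.normal(size=(10, 2)).ravel()  # future waypoints to follow

obs = np.concatenate([humanoid_state, body_shape, terrain_feat, trajectory])
# A policy network maps obs -> action; conditioning on body_shape lets
# one model control characters with diverse proportions.
print(obs.shape)
```
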
  • Publication number: 20240096017
    Abstract: Apparatuses, systems, and techniques are presented to generate digital content. In at least one embodiment, one or more neural networks are used to generate one or more textured three-dimensional meshes corresponding to one or more objects based, at least in part, on one or more two-dimensional images of the one or more objects.
    Type: Application
    Filed: August 25, 2022
    Publication date: March 21, 2024
    Inventors: Jun Gao, Tianchang Shen, Zan Gojcic, Wenzheng Chen, Zian Wang, Daiqing Li, Or Litany, Sanja Fidler
  • Patent number: 11922558
    Abstract: In various examples, information may be received for a 3D model, such as 3D geometry information, lighting information, and material information. A machine learning model may be trained to disentangle the 3D geometry information, the lighting information, and/or material information from input data to provide the information, which may be used to project geometry of the 3D model onto an image plane to generate a mapping between pixels and portions of the 3D model. Rasterization may then use the mapping to determine which pixels are covered and in what manner, by the geometry. The mapping may also be used to compute radiance for points corresponding to the one or more 3D models using light transport simulation. Disclosed approaches may be used in various applications, such as image editing, 3D model editing, synthetic data generation, and/or data set augmentation.
    Type: Grant
    Filed: May 27, 2022
    Date of Patent: March 5, 2024
    Assignee: NVIDIA Corporation
    Inventors: Wenzheng Chen, Joey Litalien, Jun Gao, Zian Wang, Clement Tse Tsian Christophe Louis Fuji Tsang, Sameh Khamis, Or Litany, Sanja Fidler
  • Publication number: 20240005604
    Abstract: Approaches presented herein provide for the unconditional generation of novel three-dimensional (3D) object shape representations, such as point clouds or meshes. In at least one embodiment, a first denoising diffusion model (DDM) can be trained to synthesize a 1D shape latent from Gaussian noise, and a second DDM can be trained to generate a set of latent points conditioned on this 1D shape latent. The shape latent and set of latent points can be provided to a decoder to generate a 3D point cloud representative of a random object from among the object classes on which the models were trained. A surface reconstruction process may be used to generate a surface mesh from this generated point cloud. Such an approach can scale to complex and/or multimodal distributions, and can be highly flexible as it can be adapted to various tasks such as multimodal voxel- or text-guided synthesis. (A two-stage sampling sketch follows this entry.)
    Type: Application
    Filed: May 19, 2023
    Publication date: January 4, 2024
    Inventors: Karsten Julian Kreis, Xiaohui Zeng, Arash Vahdat, Francis Williams, Zan Gojcic, Or Litany, Sanja Fidler
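
A two-stage sampling sketch for the hierarchy in this entry: one diffusion model yields a global 1D shape latent, a second yields latent points conditioned on it, and a decoder emits the point cloud. All three components are stubs standing in for the trained DDMs and decoder.

```python
# Hierarchical latent sampling sketch; all models are stubs.
import numpy as np

rng = np.random.default_rng(8)

def sample_shape_latent():
    z = rng.normal(size=128)                # start from Gaussian noise
    for _ in range(50):                     # stubbed denoising steps
        z = 0.98 * z
    return z

def sample_latent_points(shape_latent, n=2048):
    pts = rng.normal(size=(n, 4))           # xyz + a per-point feature
    for _ in range(50):
        pts = 0.98 * pts + 0.02 * shape_latent[:4]   # crude conditioning
    return pts

def decode(shape_latent, latent_points):
    return latent_points[:, :3]             # stub decoder -> 3D point cloud

z = sample_shape_latent()
cloud = decode(z, sample_latent_points(z))
# A surface-reconstruction pass (e.g. marching cubes on a fitted field)
# would turn `cloud` into a mesh.
print(cloud.shape)
```
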
  • Publication number: 20220391781
    Abstract: A method performed by a server is provided. The method comprises sending copies of a set of parameters of a hyper network (HN) to at least one client device, receiving from each client device in the at least one client device, a corresponding set of updated parameters of the HN, and determining a next set of parameters of the HN based on the corresponding sets of updated parameters received from the at least one client device. Each client device generates the corresponding set of updated parameters based on a local model architecture of the client device. (A server-side aggregation sketch follows this entry.)
    Type: Application
    Filed: May 27, 2022
    Publication date: December 8, 2022
    Inventors: Or Litany, Haggai Maron, David Jesus Acuna Marrero, Jan Kautz, Sanja Fidler, Gal Chechik
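
A server-side sketch of one round of the method above: broadcast the hypernetwork (HN) parameters, collect per-client updates, and form the next parameter set. Client-side training is a stub, and plain averaging is one simple choice for the aggregation step; the abstract leaves the exact rule open.

```python
# Federated hypernetwork round sketch; client training is a stub.
import numpy as np

rng = np.random.default_rng(9)
hn_params = rng.normal(size=1000)            # flattened HN parameters

def client_update(params, client_id):
    # Placeholder for local training: each client adapts the HN to its
    # own local model architecture and data.
    local_rng = np.random.default_rng(client_id)
    return params - 0.01 * local_rng.normal(size=params.shape)

for _ in range(3):                           # three communication rounds
    updates = [client_update(hn_params.copy(), cid) for cid in range(5)]
    hn_params = np.mean(updates, axis=0)     # next set of HN parameters
print(hn_params[:3])
```
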
  • Publication number: 20220383582
    Abstract: In various examples, information may be received for a 3D model, such as 3D geometry information, lighting information, and material information. A machine learning model may be trained to disentangle the 3D geometry information, the lighting information, and/or material information from input data to provide the information, which may be used to project geometry of the 3D model onto an image plane to generate a mapping between pixels and portions of the 3D model. Rasterization may then use the mapping to determine which pixels are covered and in what manner, by the geometry. The mapping may also be used to compute radiance for points corresponding to the one or more 3D models using light transport simulation. Disclosed approaches may be used in various applications, such as image editing, 3D model editing, synthetic data generation, and/or data set augmentation.
    Type: Application
    Filed: May 27, 2022
    Publication date: December 1, 2022
    Inventors: Wenzheng Chen, Joey Litalien, Jun Gao, Zian Wang, Clement Tse Tsian Christophe Louis Fuji Tsang, Sameh Khamis, Or Litany, Sanja Fidler
  • Patent number: 10387743
    Abstract: A method for image reconstruction includes defining a dictionary including a set of atoms selected such that patches of natural images can be represented as linear combinations of the atoms. A binary input image, including a single bit of input image data per input pixel, is captured using an image sensor. A maximum-likelihood (ML) estimator is applied, subject to a sparse synthesis prior derived from the dictionary, to the input image data so as to reconstruct an output image comprising multiple bits per output pixel of output image data. (A sparse-reconstruction sketch follows this entry.)
    Type: Grant
    Filed: March 15, 2017
    Date of Patent: August 20, 2019
    Assignee: Ramot at Tel-Aviv University Ltd.
    Inventors: Alex Bronstein, Or Litany, Tal Remez, Yoseff Shachar
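
A hedged sketch of dictionary-sparse reconstruction related to this patent. The patent's maximum-likelihood estimator models the binary sensing process directly; here a simpler surrogate (ISTA on a squared-error data term with an L1 sparsity penalty) and a random dictionary stand in, purely for illustration.

```python
# ISTA surrogate for sparse-prior reconstruction; not the patented ML estimator.
import numpy as np

rng = np.random.default_rng(10)
n_pix, n_atoms = 64, 256
D = rng.normal(size=(n_pix, n_atoms))
D /= np.linalg.norm(D, axis=0)               # unit-norm dictionary atoms

truth_code = np.zeros(n_atoms)
truth_code[rng.choice(n_atoms, 5, replace=False)] = rng.normal(size=5)
patch = D @ truth_code                        # "natural image" patch
binary = (patch > 0).astype(float)            # 1-bit observation

def ista(y, D, lam=0.05, steps=200):
    """Iterative shrinkage-thresholding for min ||y - Da||^2 + lam*|a|_1."""
    a = np.zeros(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2             # Lipschitz constant of gradient
    for _ in range(steps):
        grad = D.T @ (D @ a - y)
        a = a - grad / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)
    return a

code = ista(binary - 0.5, D)                  # center the binary data first
recon = D @ code                              # multi-bit output patch
print(np.corrcoef(recon, patch)[0, 1])
```
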