Patents by Inventor Varun Jampani
Varun Jampani has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250191206
Abstract: A method includes determining, based on an image having an initial viewpoint, a depth image, and determining a foreground visibility map including visibility values that are inversely proportional to a depth gradient of the depth image. The method also includes determining, based on the depth image, a background disocclusion mask indicating a likelihood that a pixel of the image will be disoccluded by a viewpoint adjustment. The method additionally includes generating, based on the image, the depth image, and the background disocclusion mask, an inpainted image and an inpainted depth image. The method further includes generating, based on the depth image and the inpainted depth image, respectively, a first three-dimensional (3D) representation of the image and a second 3D representation of the inpainted image, and generating a modified image having an adjusted viewpoint by combining the first and second 3D representations based on the foreground visibility map.
Type: Application
Filed: February 24, 2025
Publication date: June 12, 2025
Inventors: Varun Jampani, Huiwen Chang, Kyle Sargent, Abhishek Kar, Richard Tucker, Dominik Kaeser, Brian L. Curless, David Salesin, William T. Freeman, Michael Krainin, Ce Liu
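The core idea of the visibility map above can be illustrated with a small sketch. This is not the patented implementation; it is a minimal NumPy illustration assuming visibility of the form 1 / (1 + alpha * |∇depth|), where `alpha` is a made-up sharpness parameter, so that pixels at sharp depth discontinuities (foreground boundaries) receive low visibility:

```python
import numpy as np

def foreground_visibility(depth, alpha=10.0):
    """Toy visibility map: visibility is inversely proportional to the
    local depth-gradient magnitude, so pixels at sharp depth edges
    (foreground/background boundaries) get low visibility values."""
    gy, gx = np.gradient(depth.astype(np.float64))
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)
    return 1.0 / (1.0 + alpha * grad_mag)

# A depth image with a step edge: a near object (depth 1) on a far wall (depth 5).
depth = np.full((8, 8), 5.0)
depth[2:6, 2:6] = 1.0
vis = foreground_visibility(depth)
```

Flat regions keep visibility 1.0, while pixels straddling the depth edge drop toward 0, which is what lets the method fall back to the inpainted background layer there.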
-
Publication number: 20250139783
Abstract: Various types of image analysis benefit from a multi-stream architecture that allows the analysis to consider shape data. A shape stream can process image data in parallel with a primary stream, where data from layers of a network in the primary stream is provided as input to a network of the shape stream. The shape data can be fused with the primary analysis data to produce more accurate output, such as to produce accurate boundary information when the shape data is used with semantic segmentation data produced by the primary stream. A gate structure can be used to connect the intermediate layers of the primary and shape streams, using higher level activations to gate lower level activations in the shape stream. Such a gate structure can help focus the shape stream on the relevant information and reduce any additional weight of the shape stream.
Type: Application
Filed: November 11, 2024
Publication date: May 1, 2025
Inventors: David Jesus Acuna Marrero, Towaki Takikawa, Varun Jampani, Sanja Fidler
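The gating idea above — a higher-level primary-stream activation gating lower-level shape-stream features — can be sketched as an elementwise attention product. This is a simplified illustration, not the patented architecture; the sigmoid attention and the array shapes are assumptions for the sketch:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_connection(shape_feat, primary_act):
    """Gate lower-level shape-stream features with a higher-level
    activation from the primary stream: an attention map derived from
    the primary activation scales the shape features elementwise,
    suppressing regions the primary stream considers irrelevant."""
    attention = sigmoid(primary_act)  # values in (0, 1)
    return shape_feat * attention

shape_feat = np.ones((4, 4))
primary_act = np.full((4, 4), -10.0)   # primary stream: mostly irrelevant...
primary_act[1:3, 1:3] = 10.0           # ...except a small boundary region
gated = gated_connection(shape_feat, primary_act)
```

Shape features survive only where the primary activation is high, which is how the gate keeps the shape stream focused on boundary-relevant pixels without adding much capacity.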
-
Patent number: 12266144
Abstract: Apparatuses, systems, and techniques to identify orientations of objects within images. In at least one embodiment, one or more neural networks are trained to identify an orientation of one or more objects based, at least in part, on one or more characteristics of the object other than the object's orientation.
Type: Grant
Filed: November 20, 2019
Date of Patent: April 1, 2025
Assignee: NVIDIA Corporation
Inventors: Siva Karthik Mustikovela, Varun Jampani, Shalini De Mello, Sifei Liu, Umar Iqbal, Jan Kautz
-
Patent number: 12260572
Abstract: A method includes determining, based on an image having an initial viewpoint, a depth image, and determining a foreground visibility map including visibility values that are inversely proportional to a depth gradient of the depth image. The method also includes determining, based on the depth image, a background disocclusion mask indicating a likelihood that a pixel of the image will be disoccluded by a viewpoint adjustment. The method additionally includes generating, based on the image, the depth image, and the background disocclusion mask, an inpainted image and an inpainted depth image. The method further includes generating, based on the depth image and the inpainted depth image, respectively, a first three-dimensional (3D) representation of the image and a second 3D representation of the inpainted image, and generating a modified image having an adjusted viewpoint by combining the first and second 3D representations based on the foreground visibility map.
Type: Grant
Filed: August 5, 2021
Date of Patent: March 25, 2025
Assignee: Google LLC
Inventors: Varun Jampani, Huiwen Chang, Kyle Sargent, Abhishek Kar, Richard Tucker, Dominik Kaeser, Brian L. Curless, David Salesin, William T. Freeman, Michael Krainin, Ce Liu
-
Patent number: 12182940
Abstract: Apparatuses, systems, and techniques to identify a shape or camera pose of a three-dimensional object from a two-dimensional image of the object. In at least one embodiment, objects are identified in an image using one or more neural networks that have been trained on objects of a similar category and a three-dimensional mesh template.
Type: Grant
Filed: January 18, 2022
Date of Patent: December 31, 2024
Assignee: NVIDIA Corporation
Inventors: Xueting Li, Sifei Liu, Kihwan Kim, Shalini De Mello, Varun Jampani, Jan Kautz
-
Publication number: 20240412458
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for editing images based on decoder-based accumulative score sampling (DASS) losses.
Type: Application
Filed: June 12, 2024
Publication date: December 12, 2024
Inventors: Varun Jampani, Chun-Han Yao, Amit Raj, Wei-Chih Hung, Ming-Hsuan Yang, Michael Rubinstein, Yuanzhen Li
-
Patent number: 12141986
Abstract: Various types of image analysis benefit from a multi-stream architecture that allows the analysis to consider shape data. A shape stream can process image data in parallel with a primary stream, where data from layers of a network in the primary stream is provided as input to a network of the shape stream. The shape data can be fused with the primary analysis data to produce more accurate output, such as to produce accurate boundary information when the shape data is used with semantic segmentation data produced by the primary stream. A gate structure can be used to connect the intermediate layers of the primary and shape streams, using higher level activations to gate lower level activations in the shape stream. Such a gate structure can help focus the shape stream on the relevant information and reduce any additional weight of the shape stream.
Type: Grant
Filed: June 12, 2023
Date of Patent: November 12, 2024
Assignee: Nvidia Corporation
Inventors: David Jesus Acuna Marrero, Towaki Takikawa, Varun Jampani, Sanja Fidler
-
Publication number: 20240320912
Abstract: A fractional training process can be performed with training images on an instance of a machine-learned generative image model to obtain a partially trained instance of the model. A fractional optimization process can be performed, with the partially trained instance, on an instance of a machine-learned three-dimensional (3D) implicit representation model to obtain a partially optimized instance of the model. Based on the training images, pseudo multi-view subject images can be generated with the partially optimized instance of the 3D implicit representation model and a fully trained instance of the generative image model. The partially trained instance of the model can be trained with a set of training data. The partially optimized instance of the machine-learned 3D implicit representation model can be trained with the machine-learned multi-view image model.
Type: Application
Filed: March 20, 2024
Publication date: September 26, 2024
Inventors: Yuanzhen Li, Amit Raj, Varun Jampani, Benjamin Joseph Mildenhall, Benjamin Michael Poole, Jonathan Tilton Barron, Kfir Aberman, Michael Niemeyer, Michael Rubinstein, Nataniel Ruiz Gutierrez, Shiran Elyahu Zada, Srinivas Kaza
-
Publication number: 20240296596
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a text-to-image model so that the text-to-image model generates images that each depict a variable instance of an object class when the object class without a unique identifier is provided as a text input, and generates images that each depict a same subject instance of the object class when the unique identifier is provided as the text input.
Type: Application
Filed: August 23, 2023
Publication date: September 5, 2024
Inventors: Kfir Aberman, Nataniel Ruiz Gutierrez, Michael Rubinstein, Yuanzhen Li, Yael Pritch Knaan, Varun Jampani
-
Publication number: 20240249422
Abstract: A method includes determining, based on an image having an initial viewpoint, a depth image, and determining a foreground visibility map including visibility values that are inversely proportional to a depth gradient of the depth image. The method also includes determining, based on the depth image, a background disocclusion mask indicating a likelihood that a pixel of the image will be disoccluded by a viewpoint adjustment. The method additionally includes generating, based on the image, the depth image, and the background disocclusion mask, an inpainted image and an inpainted depth image. The method further includes generating, based on the depth image and the inpainted depth image, respectively, a first three-dimensional (3D) representation of the image and a second 3D representation of the inpainted image, and generating a modified image having an adjusted viewpoint by combining the first and second 3D representations based on the foreground visibility map.
Type: Application
Filed: August 5, 2021
Publication date: July 25, 2024
Inventors: Varun Jampani, Huiwen Chang, Kyle Sargent, Abhishek Kar, Richard Tucker, Dominik Kaeser, Brian L. Curless, David Salesin, William T. Freeman, Michael Krainin, Ce Liu
-
Patent number: 11907846
Abstract: One embodiment of the present invention sets forth a technique for performing spatial propagation. The technique includes generating a first directed acyclic graph (DAG) by connecting spatially adjacent points included in a set of unstructured points via directed edges along a first direction. The technique also includes applying a first set of neural network layers to one or more images associated with the set of unstructured points to generate (i) a set of features for the set of unstructured points and (ii) a set of pairwise affinities between the spatially adjacent points connected by the directed edges. The technique further includes generating a set of labels for the set of unstructured points by propagating the set of features across the first DAG based on the set of pairwise affinities.
Type: Grant
Filed: September 10, 2020
Date of Patent: February 20, 2024
Assignee: NVIDIA Corporation
Inventors: Sifei Liu, Shalini De Mello, Varun Jampani, Jan Kautz, Xueting Li
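The propagation step described above — carrying features across a DAG weighted by learned pairwise affinities — can be sketched for the simplest DAG, a one-directional chain. This is a toy illustration, not the patented method; the linear blending rule `(1 - w) * own + w * predecessor` is an assumption standing in for the learned propagation:

```python
import numpy as np

def propagate_chain(features, affinities):
    """One-directional linear propagation over a chain-structured DAG:
    each node blends its own feature with its predecessor's propagated
    state, weighted by the pairwise affinity on the connecting edge."""
    out = np.empty_like(features, dtype=np.float64)
    out[0] = features[0]
    for i in range(1, len(features)):
        w = affinities[i - 1]
        out[i] = (1.0 - w) * features[i] + w * out[i - 1]
    return out

# One labeled point at the head; high affinities carry its value down the chain.
x = np.array([1.0, 0.0, 0.0, 0.0])
h = propagate_chain(x, np.array([0.9, 0.9, 0.9]))
```

With high affinities the head label decays slowly along the chain (1.0, 0.9, 0.81, 0.729); a low affinity on any edge would cut the propagation at that point, which is how affinities encode object boundaries.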
-
Publication number: 20240013497
Abstract: A computing system and method can be used to render a 3D shape from one or more images. In particular, the present disclosure provides a general pipeline for learning articulated shape reconstruction from images (LASR). The pipeline can reconstruct rigid or nonrigid 3D shapes. In particular, the pipeline can automatically decompose non-rigidly deforming shapes into rigid motions near rigid bones. This pipeline incorporates an analysis-by-synthesis strategy and forward-renders silhouette, optical flow, and color images, which can be compared against the video observations to adjust the internal parameters of the model. By inverting a rendering pipeline and incorporating optical flow, the pipeline can recover a mesh of a 3D model from the one or more images input by a user.
Type: Application
Filed: December 21, 2020
Publication date: January 11, 2024
Inventors: Deqing Sun, Varun Jampani, Gengshan Yang, Daniel Vlasic, Huiwen Chang, Forrester H. Cole, Ce Liu, William Tafel Freeman
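The analysis-by-synthesis loop above boils down to a render-and-compare objective over silhouette, optical flow, and color. A minimal sketch of such an objective follows; the dict layout, array shapes, and equal term weights are illustrative assumptions, not the actual LASR formulation:

```python
import numpy as np

def render_and_compare_loss(pred, obs, weights=(1.0, 1.0, 1.0)):
    """Toy analysis-by-synthesis objective: compare forward-rendered
    silhouette, optical flow, and color against video observations.
    `pred` and `obs` are dicts of arrays; a lower loss means the model's
    internal parameters better explain the video."""
    w_sil, w_flow, w_rgb = weights
    loss = w_sil * np.mean((pred["silhouette"] - obs["silhouette"]) ** 2)
    loss += w_flow * np.mean((pred["flow"] - obs["flow"]) ** 2)
    loss += w_rgb * np.mean((pred["color"] - obs["color"]) ** 2)
    return loss

obs = {"silhouette": np.ones((4, 4)),
       "flow": np.zeros((4, 4, 2)),
       "color": np.full((4, 4, 3), 0.5)}
perfect = {k: v.copy() for k, v in obs.items()}
offset = {k: v + 0.1 for k, v in obs.items()}
```

In the full pipeline this scalar would be minimized with gradient descent through a differentiable renderer, updating bone transforms and mesh vertices rather than the rendered images directly.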
-
Publication number: 20230342941
Abstract: Various types of image analysis benefit from a multi-stream architecture that allows the analysis to consider shape data. A shape stream can process image data in parallel with a primary stream, where data from layers of a network in the primary stream is provided as input to a network of the shape stream. The shape data can be fused with the primary analysis data to produce more accurate output, such as to produce accurate boundary information when the shape data is used with semantic segmentation data produced by the primary stream. A gate structure can be used to connect the intermediate layers of the primary and shape streams, using higher level activations to gate lower level activations in the shape stream. Such a gate structure can help focus the shape stream on the relevant information and reduce any additional weight of the shape stream.
Type: Application
Filed: June 12, 2023
Publication date: October 26, 2023
Inventors: David Jesus Acuna Marrero, Towaki Takikawa, Varun Jampani, Sanja Fidler
-
Patent number: 11748887
Abstract: Systems and methods to detect one or more segments of one or more objects within one or more images based, at least in part, on a neural network trained in an unsupervised manner to infer the one or more segments. Systems and methods to help train one or more neural networks to detect one or more segments of one or more objects within one or more images in an unsupervised manner.
Type: Grant
Filed: April 8, 2019
Date of Patent: September 5, 2023
Assignee: NVIDIA Corporation
Inventors: Varun Jampani, Wei-Chih Hung, Sifei Liu, Pavlo Molchanov, Jan Kautz
-
Patent number: 11715251
Abstract: Training deep neural networks requires a large amount of labeled training data. Conventionally, labeled training data is generated by gathering real images that are manually labeled, which is very time-consuming. Instead of manually labeling a training dataset, a domain randomization technique is used to generate training data that is automatically labeled. The generated training data may be used to train neural networks for object detection and segmentation (labeling) tasks. In an embodiment, the generated training data includes synthetic input images generated by rendering three-dimensional (3D) objects of interest in a 3D scene. In an embodiment, the generated training data includes synthetic input images generated by rendering 3D objects of interest on a 2D background image. The 3D objects of interest are objects that a neural network is trained to detect and/or label.
Type: Grant
Filed: October 21, 2021
Date of Patent: August 1, 2023
Assignee: NVIDIA Corporation
Inventors: Jonathan Tremblay, Aayush Prakash, Mark A. Brophy, Varun Jampani, Cem Anil, Stanley Thomas Birchfield, Thang Hong To, David Jesus Acuna Marrero
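The key property described above — synthetic data that arrives with its labels for free — can be sketched with a toy compositing step: paste a rendered object crop onto a background at a random location, and the segmentation mask falls out of the paste itself. This is a simplified illustration, not the patented pipeline; the single-object paste and the array shapes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def synth_example(bg, obj_patch, obj_label=1):
    """Paste a rendered object crop at a random location on a 2D
    background image and emit a pixel-accurate segmentation mask
    for free (no manual labeling)."""
    img = bg.copy()
    mask = np.zeros(bg.shape[:2], dtype=np.int64)
    h, w = obj_patch.shape[:2]
    y = rng.integers(0, bg.shape[0] - h + 1)
    x = rng.integers(0, bg.shape[1] - w + 1)
    img[y:y + h, x:x + w] = obj_patch
    mask[y:y + h, x:x + w] = obj_label
    return img, mask

bg = np.zeros((16, 16, 3))        # stand-in 2D background image
patch = np.ones((4, 4, 3))        # stand-in rendered 3D object crop
img, mask = synth_example(bg, patch)
```

Domain randomization then varies the background, pose, lighting, and textures across examples so the trained detector transfers to real images despite the synthetic appearance.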
-
Patent number: 11676284
Abstract: Various types of image analysis benefit from a multi-stream architecture that allows the analysis to consider shape data. A shape stream can process image data in parallel with a primary stream, where data from layers of a network in the primary stream is provided as input to a network of the shape stream. The shape data can be fused with the primary analysis data to produce more accurate output, such as to produce accurate boundary information when the shape data is used with semantic segmentation data produced by the primary stream. A gate structure can be used to connect the intermediate layers of the primary and shape streams, using higher level activations to gate lower level activations in the shape stream. Such a gate structure can help focus the shape stream on the relevant information and reduce any additional weight of the shape stream.
Type: Grant
Filed: March 20, 2020
Date of Patent: June 13, 2023
Assignee: Nvidia Corporation
Inventors: David Jesus Acuna Marrero, Towaki Takikawa, Varun Jampani, Sanja Fidler
-
Patent number: 11636668
Abstract: A method includes filtering a point cloud transformation of a 3D object to generate a 3D lattice and processing the 3D lattice through a series of bilateral convolution networks (BCL), each BCL in the series having a lower lattice feature scale than a preceding BCL in the series. The output of each BCL in the series is concatenated to generate an intermediate 3D lattice. Further filtering of the intermediate 3D lattice generates a first prediction of features of the 3D object.
Type: Grant
Filed: May 22, 2018
Date of Patent: April 25, 2023
Inventors: Varun Jampani, Hang Su, Deqing Sun, Ming-Hsuan Yang, Jan Kautz
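The series-of-coarsening-filters-then-concatenate pattern above can be sketched in 1D. This is only an analogy: a moving-average filter stands in for a bilateral convolution layer, with a larger window playing the role of a lower (coarser) lattice feature scale, and the window sizes are made-up parameters:

```python
import numpy as np

def lattice_filter(x, window):
    """Stand-in for one bilateral convolution layer: averaging over a
    neighborhood whose size plays the role of the lattice scale."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

def multiscale_features(x, windows=(1, 3, 5)):
    """Run the signal through a series of progressively coarser filters
    (each 'layer' has a lower feature scale than the preceding one) and
    concatenate every intermediate output into one feature stack."""
    feats = []
    cur = x
    for w in windows:
        cur = lattice_filter(cur, w)
        feats.append(cur)
    return np.stack(feats)  # shape: (num_scales, num_points)

x = np.arange(6.0)
F = multiscale_features(x)
```

Each point ends up with one feature per scale, so the final prediction stage can see both fine local detail and coarse context, mirroring the concatenated intermediate 3D lattice in the abstract.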
-
Publication number: 20220254029
Abstract: The neural network includes an encoder, a common decoder, and a residual decoder. The encoder encodes input images into a latent space. The latent space disentangles unique features from other common features. The common decoder decodes common features resident in the latent space to generate translated images which lack the unique features. The residual decoder decodes unique features resident in the latent space to generate image deltas corresponding to the unique features. The neural network combines the translated images with the image deltas to generate combined images that may include both common features and unique features. The combined images can be used to drive autoencoding. Once training is complete, the residual decoder can be modified to generate segmentation masks that indicate any regions of a given input image where a unique feature resides.
Type: Application
Filed: October 13, 2021
Publication date: August 11, 2022
Inventors: Eugene Vorontsov, Wonmin Byeon, Shalini De Mello, Varun Jampani, Ming-Yu Liu, Pavlo Molchanov
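The combine step above — common-decoder output plus residual-decoder delta — reduces to an image sum, and the abstract's segmentation trick follows from thresholding the delta. A toy sketch, with the clipping, the 0.1 threshold, and the array values all illustrative assumptions rather than the patented design:

```python
import numpy as np

def combine(translated, delta):
    """Combined image = common decoder's translated image (common
    features only) + residual decoder's image delta (unique features)."""
    return np.clip(translated + delta, 0.0, 1.0)

translated = np.full((4, 4), 0.5)   # common content only
delta = np.zeros((4, 4))
delta[1, 1] = 0.4                   # one unique feature (e.g. an anomaly)

combined = combine(translated, delta)
# After training, the residual output doubles as a segmentation cue:
seg_mask = (np.abs(delta) > 0.1).astype(int)
```

Because the delta is nonzero only where unique features live, the residual path localizes them without any pixel-level supervision during training.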
-
Patent number: 11328169
Abstract: A temporal propagation network (TPN) system learns the affinity matrix for video image processing tasks. An affinity matrix is a generic matrix that defines the similarity of two points in space. The TPN system includes a guidance neural network model and a temporal propagation module and is trained for a particular computer vision task to propagate visual properties from a key-frame represented by dense data (color) to another frame that is represented by coarse data (grey-scale). The guidance neural network model generates an affinity matrix referred to as a global transformation matrix from task-specific data for the key-frame and the other frame. The temporal propagation module applies the global transformation matrix to the key-frame property data to produce propagated property data (color) for the other frame. For example, the TPN system may be used to colorize several frames of greyscale video using a single manually colorized key-frame.
Type: Grant
Filed: March 14, 2019
Date of Patent: May 10, 2022
Assignee: NVIDIA Corporation
Inventors: Sifei Liu, Shalini De Mello, Jinwei Gu, Varun Jampani, Jan Kautz
-
Patent number: 11328173
Abstract: A temporal propagation network (TPN) system learns the affinity matrix for video image processing tasks. An affinity matrix is a generic matrix that defines the similarity of two points in space. The TPN system includes a guidance neural network model and a temporal propagation module and is trained for a particular computer vision task to propagate visual properties from a key-frame represented by dense data (color) to another frame that is represented by coarse data (grey-scale). The guidance neural network model generates an affinity matrix referred to as a global transformation matrix from task-specific data for the key-frame and the other frame. The temporal propagation module applies the global transformation matrix to the key-frame property data to produce propagated property data (color) for the other frame. For example, the TPN system may be used to colorize several frames of greyscale video using a single manually colorized key-frame.
Type: Grant
Filed: October 27, 2020
Date of Patent: May 10, 2022
Assignee: NVIDIA Corporation
Inventors: Sifei Liu, Shalini De Mello, Jinwei Gu, Varun Jampani, Jan Kautz
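The propagation step in the two TPN entries above — applying the global transformation matrix to key-frame property data — is a matrix-vector product per property channel. A minimal sketch on a 3-pixel toy frame; the row normalization and the particular matrix values are illustrative assumptions, not the learned matrix:

```python
import numpy as np

def propagate_properties(G, key_props):
    """Apply a global transformation (affinity) matrix G to the
    key-frame's property data (here, RGB colors, one row per pixel)
    to produce the property map for another frame. Rows are
    normalized so each target pixel is a convex combination of
    key-frame pixels."""
    G = G / G.sum(axis=1, keepdims=True)
    return G @ key_props

# Key-frame colors: three pixels, pure red / green / blue.
key_colors = np.array([[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0],
                       [0.0, 0.0, 1.0]])
# Affinities: target pixel 0 matches key pixel 1, pixel 1 matches key
# pixel 0, and pixel 2 is an even blend of key pixels 0 and 2.
G = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.5, 0.0, 0.5]])
new_colors = propagate_properties(G, key_colors)
```

In the real system the guidance network predicts `G` from the grey-scale frames themselves, so one hand-colorized key-frame is enough to colorize the frames around it.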