Patents by Inventor Shalini De Mello

Shalini De Mello has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220284621
    Abstract: One embodiment of a method includes calculating one or more activation values of one or more neural networks trained to infer eye gaze information based, at least in part, on eye position of one or more images of one or more faces indicated by an infrared light reflection from the one or more images.
    Type: Application
    Filed: May 2, 2022
    Publication date: September 8, 2022
    Inventors: Joohwan Kim, Michael Stengel, Zander Majercik, Shalini De Mello, Samuli Laine, Morgan McGuire, David Luebke
  • Publication number: 20220270318
    Abstract: A three-dimensional (3D) object reconstruction neural network system learns to predict a 3D shape representation of an object from a video that includes the object. The 3D reconstruction technique may be used for content creation, such as generation of 3D characters for games, movies, and 3D printing. When 3D characters are generated from video, the content may also include motion of the character, as predicted based on the video. The 3D object reconstruction technique exploits temporal consistency to reconstruct a dynamic 3D representation of the object from an unlabeled video. Specifically, an object in a video has a consistent shape and consistent texture across multiple frames. Texture, base shape, and part correspondence invariance constraints may be applied to fine-tune the neural network system. The reconstruction technique generalizes well, particularly for non-rigid objects.
    Type: Application
    Filed: May 2, 2022
    Publication date: August 25, 2022
    Inventors: Xueting Li, Sifei Liu, Kihwan Kim, Shalini De Mello, Jan Kautz
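
The temporal-consistency idea in the abstract above lends itself to a short illustration. Below is a minimal PyTorch sketch of texture and base-shape invariance penalties across frames, assuming a hypothetical predictor that outputs a per-frame texture map and a deformation of a shared base shape; the loss forms and the weighting are illustrative assumptions, not the patented formulation.

```python
import torch

def consistency_losses(textures, deformed_shapes, base_shape):
    """textures: (T, C, H, W) per-frame texture maps; deformed_shapes: (T, V, 3)
    per-frame mesh vertices; base_shape: (V, 3) shared rest shape."""
    # Texture invariance: the object's texture should agree across frames.
    tex_loss = (textures - textures.mean(dim=0, keepdim=True)).abs().mean()
    # Base-shape invariance: per-frame shapes should stay near one shared rest shape.
    shape_loss = ((deformed_shapes - base_shape.unsqueeze(0)) ** 2).mean()
    return tex_loss, shape_loss

T, C, H, W, V = 8, 3, 64, 64, 642
tex_loss, shape_loss = consistency_losses(
    torch.rand(T, C, H, W), torch.randn(T, V, 3), torch.randn(V, 3))
loss = tex_loss + 0.1 * shape_loss  # illustrative weighting
```
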
  • Publication number: 20220254029
    Abstract: The neural network includes an encoder, a common decoder, and a residual decoder. The encoder encodes input images into a latent space. The latent space disentangles unique features from other common features. The common decoder decodes common features resident in the latent space to generate translated images which lack the unique features. The residual decoder decodes unique features resident in the latent space to generate image deltas corresponding to the unique features. The neural network combines the translated images with the image deltas to generate combined images that may include both common features and unique features. The combined images can be used to drive autoencoding. Once training is complete, the residual decoder can be modified to generate segmentation masks that indicate any regions of a given input image where a unique feature resides.
    Type: Application
    Filed: October 13, 2021
    Publication date: August 11, 2022
    Inventors: Eugene Vorontsov, Wonmin Byeon, Shalini De Mello, Varun Jampani, Ming-Yu Liu, Pavlo Molchanov
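
To make the encoder / common-decoder / residual-decoder layout in the entry above concrete, here is a minimal PyTorch sketch with toy convolutional stacks. The layer sizes, the additive combination, and the mask threshold are assumptions for illustration, not the patented architecture.

```python
import torch
import torch.nn as nn

class ResidualTranslator(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        # Decodes common features: a translated image without unique features.
        self.common_decoder = nn.Conv2d(ch, 3, 3, padding=1)
        # Decodes unique features: an image delta (later usable for masks).
        self.residual_decoder = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, x):
        z = self.encoder(x)                    # latent space
        translated = self.common_decoder(z)    # common content only
        delta = self.residual_decoder(z)       # unique-feature residual
        combined = translated + delta          # drives autoencoding loss vs. x
        return translated, delta, combined

model = ResidualTranslator()
x = torch.rand(1, 3, 64, 64)
translated, delta, combined = model(x)
recon_loss = (combined - x).abs().mean()
# After training, |delta| highlights regions where a unique feature resides.
mask = delta.abs().mean(dim=1, keepdim=True) > 0.1  # illustrative threshold
```
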
  • Publication number: 20220222832
    Abstract: A method and system are provided for tracking instances within a sequence of video frames. The method includes the steps of processing an image frame by a backbone network to generate a set of feature maps, processing the set of feature maps by one or more prediction heads, and analyzing the embedding features corresponding to a set of instances in two or more image frames of the sequence of video frames to establish a one-to-one correlation between instances in different image frames. The one or more prediction heads includes an embedding head configured to generate a set of embedding features corresponding to one or more instances of an object identified in the image frame. The method may also include training the one or more prediction heads using a set of annotated image frames and/or a plurality of sequences of unlabeled video frames.
    Type: Application
    Filed: January 6, 2022
    Publication date: July 14, 2022
    Inventors: Yang Fu, Sifei Liu, Umar Iqbal, Shalini De Mello, Jan Kautz
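
The association step described above, matching embedding features one-to-one across frames, can be sketched as follows. The cosine-similarity cost and Hungarian matcher (scipy's linear_sum_assignment) are common choices assumed here; the patent does not specify them.

```python
import torch
import torch.nn.functional as F
from scipy.optimize import linear_sum_assignment

def match_instances(emb_a, emb_b):
    """emb_a: (Na, D) instance embeddings from frame t; emb_b: (Nb, D) from frame t+1."""
    # Pairwise cosine similarity between all instances in the two frames.
    sim = F.normalize(emb_a, dim=1) @ F.normalize(emb_b, dim=1).T  # (Na, Nb)
    # Hungarian matching maximizes total similarity under a one-to-one constraint.
    rows, cols = linear_sum_assignment((-sim).numpy())
    return list(zip(rows.tolist(), cols.tolist()))

pairs = match_instances(torch.randn(3, 128), torch.randn(4, 128))
print(pairs)  # e.g. [(0, 2), (1, 0), (2, 3)]
```
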
  • Patent number: 11375176
    Abstract: When a 3D scene is projected into a 2D image, the viewpoint of objects in the image, relative to the camera, must be determined. Since the image itself will not have sufficient information to determine the viewpoint of the various objects in the image, techniques to estimate the viewpoint must be employed. To date, neural networks have been used to infer such viewpoint estimates on an object category basis, but they must first be trained with numerous examples that have been manually created. The present disclosure provides a neural network that is trained to learn, from just a few example images, a unique viewpoint estimation network capable of inferring viewpoint estimations for a new object category.
    Type: Grant
    Filed: February 3, 2020
    Date of Patent: June 28, 2022
    Assignee: NVIDIA Corporation
    Inventors: Hung-Yu Tseng, Shalini De Mello, Jonathan Tremblay, Sifei Liu, Jan Kautz, Stanley Thomas Birchfield
  • Patent number: 11354847
    Abstract: A three-dimensional (3D) object reconstruction neural network system learns to predict a 3D shape representation of an object from a video that includes the object. The 3D reconstruction technique may be used for content creation, such as generation of 3D characters for games, movies, and 3D printing. When 3D characters are generated from video, the content may also include motion of the character, as predicted based on the video. The 3D object reconstruction technique exploits temporal consistency to reconstruct a dynamic 3D representation of the object from an unlabeled video. Specifically, an object in a video has a consistent shape and consistent texture across multiple frames. Texture, base shape, and part correspondence invariance constraints may be applied to fine-tune the neural network system. The reconstruction technique generalizes well, particularly for non-rigid objects.
    Type: Grant
    Filed: July 31, 2020
    Date of Patent: June 7, 2022
    Assignee: NVIDIA Corporation
    Inventors: Xueting Li, Sifei Liu, Kihwan Kim, Shalini De Mello, Jan Kautz
  • Patent number: 11328173
    Abstract: A temporal propagation network (TPN) system learns the affinity matrix for video image processing tasks. An affinity matrix is a generic matrix that defines the similarity of two points in space. The TPN system includes a guidance neural network model and a temporal propagation module and is trained for a particular computer vision task to propagate visual properties from a key-frame represented by dense data (color) to another frame that is represented by coarse data (grayscale). The guidance neural network model generates an affinity matrix, referred to as a global transformation matrix, from task-specific data for the key-frame and the other frame. The temporal propagation module applies the global transformation matrix to the key-frame property data to produce propagated property data (color) for the other frame. For example, the TPN system may be used to colorize several frames of grayscale video using a single manually colorized key-frame.
    Type: Grant
    Filed: October 27, 2020
    Date of Patent: May 10, 2022
    Assignee: NVIDIA Corporation
    Inventors: Sifei Liu, Shalini De Mello, Jinwei Gu, Varun Jampani, Jan Kautz
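
The propagation step described in the entry above can be illustrated in a few lines: a global transformation matrix G, which the patented system would obtain from the guidance neural network, is applied to the key-frame's per-pixel color. The softmax row normalization and the random G used here are assumptions of this sketch, not the learned TPN components.

```python
import torch

def propagate(key_color, G):
    """key_color: (N, C) per-pixel color of the key-frame; G: (N, N) global
    transformation matrix from the guidance network (stubbed with noise here)."""
    # Row-normalize so each output pixel is a weighted combination of key pixels.
    G = torch.softmax(G, dim=1)
    return G @ key_color

N, C = 16 * 16, 3
key_color = torch.rand(N, C)   # dense property data (color) for the key-frame
G = torch.randn(N, N)          # would come from the guidance neural network
new_color = propagate(key_color, G)  # propagated property data for the other frame
```
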
  • Patent number: 11328169
    Abstract: A temporal propagation network (TPN) system learns the affinity matrix for video image processing tasks. An affinity matrix is a generic matrix that defines the similarity of two points in space. The TPN system includes a guidance neural network model and a temporal propagation module and is trained for a particular computer vision task to propagate visual properties from a key-frame represented by dense data (color) to another frame that is represented by coarse data (grayscale). The guidance neural network model generates an affinity matrix, referred to as a global transformation matrix, from task-specific data for the key-frame and the other frame. The temporal propagation module applies the global transformation matrix to the key-frame property data to produce propagated property data (color) for the other frame. For example, the TPN system may be used to colorize several frames of grayscale video using a single manually colorized key-frame.
    Type: Grant
    Filed: March 14, 2019
    Date of Patent: May 10, 2022
    Assignee: NVIDIA Corporation
    Inventors: Sifei Liu, Shalini De Mello, Jinwei Gu, Varun Jampani, Jan Kautz
  • Publication number: 20220139037
    Abstract: Apparatuses, systems, and techniques to identify a shape or camera pose of a three-dimensional object from a two-dimensional image of the object. In at least one embodiment, objects are identified in an image using one or more neural networks that have been trained on objects of a similar category and a three-dimensional mesh template.
    Type: Application
    Filed: January 18, 2022
    Publication date: May 5, 2022
    Inventors: Xueting Li, Sifei Liu, Kihwan Kim, Shalini De Mello, Varun Jampani, Jan Kautz
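
A minimal sketch of the mesh-template idea above: a network (stubbed here) predicts per-vertex offsets of a category-level 3D mesh template plus a camera pose from a 2D image. All names, shapes, and the 6-DoF pose parameterization are illustrative assumptions.

```python
import torch

template = torch.randn(642, 3)  # category-level 3D mesh template vertices

def predict(image):
    """Placeholder for the trained network: returns vertex offsets and camera pose."""
    return torch.zeros(642, 3), torch.zeros(6)  # offsets, 6-DoF pose (assumed)

offsets, cam_pose = predict(torch.rand(3, 224, 224))
shape = template + offsets  # reconstructed object shape in the template's topology
```
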
  • Patent number: 11321865
    Abstract: One embodiment of a method includes calculating one or more activation values of one or more neural networks trained to infer eye gaze information based, at least in part, on eye position of one or more images of one or more faces indicated by an infrared light reflection from the one or more images.
    Type: Grant
    Filed: March 15, 2019
    Date of Patent: May 3, 2022
    Assignee: NVIDIA Corporation
    Inventors: Joohwan Kim, Michael Stengel, Zander Majercik, Shalini De Mello, Samuli Laine, Morgan McGuire, David Luebke
  • Publication number: 20220076128
    Abstract: One embodiment of the present invention sets forth a technique for performing spatial propagation. The technique includes generating a first directed acyclic graph (DAG) by connecting spatially adjacent points included in a set of unstructured points via directed edges along a first direction. The technique also includes applying a first set of neural network layers to one or more images associated with the set of unstructured points to generate (i) a set of features for the set of unstructured points and (ii) a set of pairwise affinities between the spatially adjacent points connected by the directed edges. The technique further includes generating a set of labels for the set of unstructured points by propagating the set of features across the first DAG based on the set of pairwise affinities.
    Type: Application
    Filed: September 10, 2020
    Publication date: March 10, 2022
    Inventors: Sifei Liu, Shalini De Mello, Varun Jampani, Jan Kautz
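
One directional pass of the scheme above can be sketched as follows: points are sorted along a direction to form a DAG, and each point's feature is blended with its predecessor's using a pairwise affinity. The chain connectivity and the Gaussian affinity stand-in are simplifying assumptions; the patent learns affinities with neural network layers and uses spatial adjacency, not a single chain.

```python
import torch

def directional_propagate(points, feats, affinity_fn, direction):
    """points: (N, 3) unstructured points; feats: (N, D); direction: (3,).
    Propagates features along the DAG induced by sorting points on `direction`."""
    order = torch.argsort(points @ direction)  # directed edges follow this sweep
    out = feats.clone()
    for prev, cur in zip(order[:-1], order[1:]):
        a = affinity_fn(points[prev], points[cur])  # pairwise affinity in [0, 1]
        out[cur] = a * out[prev] + (1 - a) * out[cur]
    return out

# Stand-in for the learned pairwise affinity: nearby points propagate strongly.
affinity = lambda p, q: torch.exp(-((p - q) ** 2).sum())
feats = directional_propagate(torch.randn(100, 3), torch.randn(100, 8),
                              affinity, torch.tensor([1.0, 0.0, 0.0]))
```
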
  • Publication number: 20220036635
    Abstract: A three-dimensional (3D) object reconstruction neural network system learns to predict a 3D shape representation of an object from a video that includes the object. The 3D reconstruction technique may be used for content creation, such as generation of 3D characters for games, movies, and 3D printing. When 3D characters are generated from video, the content may also include motion of the character, as predicted based on the video. The 3D object reconstruction technique exploits temporal consistency to reconstruct a dynamic 3D representation of the object from an unlabeled video. Specifically, an object in a video has a consistent shape and consistent texture across multiple frames. Texture, base shape, and part correspondence invariance constraints may be applied to fine-tune the neural network system. The reconstruction technique generalizes well, particularly for non-rigid objects.
    Type: Application
    Filed: July 31, 2020
    Publication date: February 3, 2022
    Inventors: Xueting Li, Sifei Liu, Kihwan Kim, Shalini De Mello, Jan Kautz
  • Patent number: 11238650
    Abstract: Apparatuses, systems, and techniques to identify a shape or camera pose of a three-dimensional object from a two-dimensional image of the object. In at least one embodiment, objects are identified in an image using one or more neural networks that have been trained on objects of a similar category and a three-dimensional mesh template.
    Type: Grant
    Filed: April 15, 2020
    Date of Patent: February 1, 2022
    Assignee: NVIDIA Corporation
    Inventors: Xueting Li, Sifei Liu, Kihwan Kim, Shalini De Mello, Varun Jampani, Jan Kautz
  • Patent number: 11132543
    Abstract: A method, computer readable medium, and system are disclosed for performing unconstrained appearance-based gaze estimation. The method includes the steps of identifying an image of an eye and a head orientation associated with the image of the eye, determining an orientation for the eye by analyzing, within a convolutional neural network (CNN), the image of the eye and the head orientation associated with the image of the eye, and returning the orientation of the eye.
    Type: Grant
    Filed: December 27, 2017
    Date of Patent: September 28, 2021
    Assignee: NVIDIA Corporation
    Inventors: Rajeev Ranjan, Shalini De Mello, Jan Kautz
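
The appearance-based estimator above maps naturally to a small PyTorch sketch: a CNN encodes the eye image, the head orientation is concatenated into the fully connected stage, and the network regresses a gaze direction. Layer sizes, the input resolution, and the two-angle (pitch, yaw) output are assumptions, not the patented configuration.

```python
import torch
import torch.nn as nn

class GazeNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Appearance branch: encodes the eye image.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        # Head-orientation (2 angles, assumed) joins the fully connected stage.
        self.head = nn.Sequential(nn.Linear(32 + 2, 64), nn.ReLU(),
                                  nn.Linear(64, 2))  # gaze pitch and yaw

    def forward(self, eye_image, head_pose):
        f = self.conv(eye_image).flatten(1)  # appearance features
        return self.head(torch.cat([f, head_pose], dim=1))

gaze = GazeNet()(torch.rand(1, 1, 36, 60), torch.rand(1, 2))
```
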
  • Publication number: 20210287430
    Abstract: Apparatuses, systems, and techniques to identify a shape or camera pose of a three-dimensional object from a two-dimensional image of the object. In at least one embodiment, objects are identified in an image using one or more neural networks that have been trained on objects of a similar category and a three-dimensional mesh template.
    Type: Application
    Filed: April 15, 2020
    Publication date: September 16, 2021
    Inventors: Xueting Li, Sifei Liu, Kihwan Kim, Shalini De Mello, Varun Jampani, Jan Kautz
  • Publication number: 20210150757
    Abstract: Apparatuses, systems, and techniques to identify orientations of objects within images. In at least one embodiment, one or more neural networks are trained to identify an orientation of one or more objects based, at least in part, on one or more characteristics of the object other than the object's orientation.
    Type: Application
    Filed: November 20, 2019
    Publication date: May 20, 2021
    Inventors: Siva Karthik Mustikovela, Varun Jampani, Shalini De Mello, Sifei Liu, Umar Iqbal, Jan Kautz
  • Publication number: 20210073575
    Abstract: A temporal propagation network (TPN) system learns the affinity matrix for video image processing tasks. An affinity matrix is a generic matrix that defines the similarity of two points in space. The TPN system includes a guidance neural network model and a temporal propagation module and is trained for a particular computer vision task to propagate visual properties from a key-frame represented by dense data (color) to another frame that is represented by coarse data (grayscale). The guidance neural network model generates an affinity matrix, referred to as a global transformation matrix, from task-specific data for the key-frame and the other frame. The temporal propagation module applies the global transformation matrix to the key-frame property data to produce propagated property data (color) for the other frame. For example, the TPN system may be used to colorize several frames of grayscale video using a single manually colorized key-frame.
    Type: Application
    Filed: October 27, 2020
    Publication date: March 11, 2021
    Inventors: Sifei Liu, Shalini De Mello, Jinwei Gu, Varun Jampani, Jan Kautz
  • Publication number: 20200334543
    Abstract: A neural network is trained to identify one or more features of an image. The neural network is trained using a small number of original images, from which a plurality of additional images are derived. The additional images are generated by rotating and decoding embeddings of the image in a latent space generated by an autoencoder. The images generated by the rotation and decoding exhibit changes to a feature in proportion to the amount of rotation.
    Type: Application
    Filed: April 19, 2019
    Publication date: October 22, 2020
    Inventors: Seonwook Park, Shalini De Mello, Pavlo Molchanov, Umar Iqbal, Jan Kautz
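
The augmentation idea above can be sketched briefly: encode an image, rotate a slice of its latent code, and decode, so each original image yields variants whose feature change scales with the rotation angle. The encoder/decoder stubs below stand in for a trained autoencoder, and rotating only the first two latent dimensions is an assumption of this sketch.

```python
import math
import torch

def rotate_latent(z, angle):
    """Rotate the first two latent dimensions of z (B, D) by `angle` radians."""
    c, s = math.cos(angle), math.sin(angle)
    R = torch.tensor([[c, -s], [s, c]])
    z = z.clone()
    z[:, :2] = z[:, :2] @ R.T
    return z

encode = lambda x: x.flatten(1)         # placeholder for a trained encoder
decode = lambda z: z.view(-1, 1, 8, 8)  # placeholder for the matching decoder

x = torch.rand(4, 1, 8, 8)
# A few original images become many derived training images.
augmented = [decode(rotate_latent(encode(x), a))
             for a in torch.linspace(-0.5, 0.5, steps=5)]
```
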
  • Patent number: 10762425
    Abstract: A spatial linear propagation network (SLPN) system learns the affinity matrix for vision tasks. An affinity matrix is a generic matrix that defines the similarity of two points in space. The SLPN system is trained for a particular computer vision task and refines an input map (i.e., affinity matrix) that indicates pixels that share a particular property (e.g., color, object, texture, shape). Inputs to the SLPN system are input data (e.g., pixel values for an image) and the input map corresponding to the input data to be propagated. The input data is processed to produce task-specific affinity values (guidance data). The task-specific affinity values are applied to values in the input map, with at least two weighted values from each column contributing to a value in the refined map data for the adjacent column.
    Type: Grant
    Filed: September 18, 2018
    Date of Patent: September 1, 2020
    Assignee: NVIDIA Corporation
    Inventors: Sifei Liu, Shalini De Mello, Jinwei Gu, Ming-Hsuan Yang, Jan Kautz
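
A single left-to-right pass of the refinement above might look like the following sketch, where each pixel in a column is updated from three weighted neighbors in the previous column plus its own value. The three-neighbor connectivity matches the abstract's "at least two weighted values from each column"; the random guidance weights stand in for the task-specific network output.

```python
import torch

def slp_pass(inp, weights):
    """inp: (H, W) map to refine; weights: (H, W, 3) affinities to the three
    nearest pixels of the previous column (up-left, left, down-left)."""
    out = inp.clone()
    H, W = inp.shape
    for x in range(1, W):
        prev = out[:, x - 1]
        up = torch.roll(prev, 1)     # neighbor one row up in the previous column
        down = torch.roll(prev, -1)  # neighbor one row down
        w = weights[:, x, :]
        gathered = w[:, 0] * up + w[:, 1] * prev + w[:, 2] * down
        # Blend the propagated values with the pixel's own input value.
        out[:, x] = (1 - w.sum(dim=1)) * inp[:, x] + gathered
    return out

refined = slp_pass(torch.rand(32, 32), torch.rand(32, 32, 3) / 3)
```
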
  • Publication number: 20200252600
    Abstract: When a 3D scene is projected into a 2D image, the viewpoint of objects in the image, relative to the camera, must be determined. Since the image itself will not have sufficient information to determine the viewpoint of the various objects in the image, techniques to estimate the viewpoint must be employed. To date, neural networks have been used to infer such viewpoint estimates on an object category basis, but they must first be trained with numerous examples that have been manually created. The present disclosure provides a neural network that is trained to learn, from just a few example images, a unique viewpoint estimation network capable of inferring viewpoint estimations for a new object category.
    Type: Application
    Filed: February 3, 2020
    Publication date: August 6, 2020
    Inventors: Hung-Yu Tseng, Shalini De Mello, Jonathan Tremblay, Sifei Liu, Jan Kautz, Stanley Thomas Birchfield