Patents by Inventor Umar Iqbal

Umar Iqbal has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250111474
    Abstract: Systems and methods are disclosed that relate to synthesizing high-resolution 3D geometry and strictly view-consistent images that maintain image quality without relying on post-processing super resolution. For instance, embodiments of the present disclosure describe techniques, systems, and/or methods to scale neural volume rendering to the much higher resolution of native 2D images, thereby resolving fine-grained 3D geometry with unprecedented detail. Embodiments of the present disclosure employ learning-based samplers for accelerating neural rendering for 3D GAN training using up to five times fewer depth samples, which enables embodiments of the present disclosure to explicitly “render every pixel” of the full-resolution image during training and inference without post-processing super-resolution in 2D.
    Type: Application
    Filed: September 11, 2024
    Publication date: April 3, 2025
    Inventors: Koki Nagano, Alexander Trevithick, Matthew Aaron Wong Chan, Towaki Takikawa, Umar Iqbal, Shalini De Mello
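
The abstract above mentions learning-based samplers that place far fewer depth samples along each ray so that every pixel can be volume-rendered at full resolution. The following is a minimal sketch of that general idea only, not the patented implementation; the module name, feature dimension, and depth range are illustrative assumptions.

```python
# Minimal sketch (not the patented implementation) of a learned ray sampler:
# a small MLP proposes a handful of depth samples per ray instead of dense
# stratified sampling, so every pixel can be volume-rendered at full resolution.
import torch
import torch.nn as nn

class DepthSampler(nn.Module):
    """Predicts K sample depths per ray from a low-resolution ray feature."""
    def __init__(self, feat_dim=32, num_samples=12):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, num_samples),
        )

    def forward(self, ray_feat, near=0.5, far=2.5):
        # Map unconstrained logits to monotonically increasing depths in [near, far].
        w = torch.softmax(self.mlp(ray_feat), dim=-1)   # (R, K), positive weights
        t = torch.cumsum(w, dim=-1)                     # in (0, 1], strictly increasing
        return near + (far - near) * t                  # (R, K) sample depths

rays = 4                                                # e.g. one ray per pixel
sampler = DepthSampler()
depths = sampler(torch.randn(rays, 32))
print(depths.shape, bool((depths[:, 1:] >= depths[:, :-1]).all()))
```
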
  • Patent number: 12266144
    Abstract: Apparatuses, systems, and techniques to identify orientations of objects within images. In at least one embodiment, one or more neural networks are trained to identify an orientation of one or more objects based, at least in part, on one or more characteristics of the object other than the object's orientation.
    Type: Grant
    Filed: November 20, 2019
    Date of Patent: April 1, 2025
    Assignee: NVIDIA Corporation
    Inventors: Siva Karthik Mustikovela, Varun Jampani, Shalini De Mello, Sifei Liu, Umar Iqbal, Jan Kautz
  • Publication number: 20250061729
    Abstract: Apparatuses, systems, and techniques to identify three-dimensional positions of partially occluded objects in images. In at least one embodiment, one or more neural networks identify the three-dimensional positions of occluded portions of objects in a first image based, at least in part, on one or more second images including non-occluded objects.
    Type: Application
    Filed: August 25, 2023
    Publication date: February 20, 2025
    Inventors: Prash Goel, Umar Iqbal, Akarsh Umesh Zingade, Pavlo Molchanov
  • Publication number: 20250051443
    Abstract: Disclosed herein are isolated or purified monoclonal antibodies, or antigen-binding fragments thereof, which bind to human CD3 and which comprise: a CDRH1 amino acid sequence of SEQ ID NO: 150, a CDRH2 amino acid sequence of SEQ ID NO: 151, a CDRH3 amino acid sequence of SEQ ID NO: 152, a CDRL1 amino acid sequence of SEQ ID NO: 153, a CDRL2 amino acid sequence of SEQ ID NO: 154, and a CDRL3 amino acid sequence of SEQ ID NO: 155. Also provided are recombinant polypeptides comprising said monoclonal antibodies, or antigen-binding fragments thereof, such as multivalent antibodies, including bispecific T-cell engagers. Also described are chimeric antigen receptors (CARs) for CAR-T therapy comprising any one of the monoclonal antibodies, or antigen-binding fragments thereof. Nucleic acids encoding the aforementioned antibodies, fragments, and polypeptides are also disclosed, along with therapeutic applications in the treatment of an autoinflammatory disease or cancer.
    Type: Application
    Filed: October 5, 2022
    Publication date: February 13, 2025
    Inventors: Anne MARCIL, Robert PON, Scott MCCOMB, Umar IQBAL
  • Publication number: 20250022290
    Abstract: In various examples, image-based three-dimensional occupant assessment for in-cabin monitoring systems and applications are provided. An evaluation function may determine a 3D representation of an occupant of a machine by evaluating sensor data comprising an image frame from an optical image sensor. The 3D representation may comprise at least one characteristic representative of a size of the occupant (e.g., a 3D pose and/or 3D shape), which may be used to derive other characteristics such as, but not limited to, weight, height, and/or age. A first processing path may generate a representation of one or more features corresponding to at least a portion of the occupant based on optical image data, and a second processing path may determine a depth corresponding to the one or more features based on depth data derived from the optical image data and ground truth depth data corresponding to the interior of the machine.
    Type: Application
    Filed: July 10, 2023
    Publication date: January 16, 2025
    Inventors: Sakthivel SIVARAMAN, Arjun Guru, Rajath Shetty, Umar Iqbal, Orazio Gallo, Hang Su, Abhishek Badki, Varsha Hedau
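
The abstract above describes a two-path arrangement: one path extracts occupant features from the optical image, while a second path derives depth by combining image-based depth with ground-truth depth of the cabin interior. Below is an illustrative sketch of that structure only; the networks, shapes, and the residual-depth idea are assumptions, not the disclosed system.

```python
# Illustrative two-path sketch: one path produces occupant features from the RGB
# frame, a second path produces depth by comparing an image-derived depth estimate
# with a known (ground-truth) depth map of the empty cabin. Names/shapes assumed.
import torch
import torch.nn as nn

class OccupantFeaturePath(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                                      nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
    def forward(self, rgb):
        return self.backbone(rgb)                    # (B, 32, H/4, W/4) occupant features

class DepthPath(nn.Module):
    def __init__(self):
        super().__init__()
        self.depth_head = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                        nn.Conv2d(16, 1, 3, padding=1))
    def forward(self, rgb, cabin_depth):
        pred = self.depth_head(rgb)                  # monocular depth estimate
        # Residual w.r.t. the known empty-cabin depth localizes the occupant in 3D.
        return pred, pred - cabin_depth

rgb = torch.randn(1, 3, 64, 64)
cabin_depth = torch.ones(1, 1, 64, 64)
feats = OccupantFeaturePath()(rgb)
depth, residual = DepthPath()(rgb, cabin_depth)
print(feats.shape, depth.shape, residual.shape)
```
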
  • Publication number: 20240404174
    Abstract: Systems and methods are disclosed that animate a source portrait image with motion (i.e., pose and expression) from a target image. In contrast to conventional systems, given an unseen single-view portrait image, an implicit three-dimensional (3D) head avatar is constructed that not only captures photo-realistic details within and beyond the face region, but also is readily available for animation without requiring further optimization during inference. In an embodiment, three processing branches of a system produce three tri-planes representing coarse 3D geometry for the head avatar, detailed appearance of a source image, as well as the expression of a target image. By applying volumetric rendering to a combination of the three tri-planes, an image of the desired identity, expression and pose is generated.
    Type: Application
    Filed: May 2, 2024
    Publication date: December 5, 2024
    Inventors: Xueting Li, Shalini De Mello, Sifei Liu, Koki Nagano, Umar Iqbal, Jan Kautz
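
The abstract above describes three branches that each produce a tri-plane (coarse geometry, source appearance, target expression), which are then combined and volume-rendered. The sketch below shows only the tri-plane fusion and sampling step, with shapes, the summation fusion, and the plane layout chosen as assumptions for illustration.

```python
# Schematic sketch (assumptions throughout) of combining three tri-plane feature
# volumes -- coarse head geometry, source appearance, target expression -- before
# volumetric rendering, as the abstract describes at a high level.
import torch
import torch.nn as nn

B, C, R = 1, 16, 64                                  # batch, channels, plane resolution

def make_triplane():
    # Three axis-aligned feature planes: XY, XZ, YZ.
    return torch.randn(B, 3, C, R, R)

geometry_planes   = make_triplane()                  # branch 1: coarse 3D geometry
appearance_planes = make_triplane()                  # branch 2: source-image detail
expression_planes = make_triplane()                  # branch 3: target expression

# Simple fusion by summation; a learned fusion network is equally plausible.
fused = geometry_planes + appearance_planes + expression_planes

def sample_triplane(planes, xyz):
    """Bilinearly sample each of the three planes at projected 3D points."""
    feats = []
    for i, dims in enumerate([(0, 1), (0, 2), (1, 2)]):          # XY, XZ, YZ
        grid = xyz[..., dims].view(B, -1, 1, 2)                  # (B, N, 1, 2) in [-1, 1]
        f = nn.functional.grid_sample(planes[:, i], grid, align_corners=True)
        feats.append(f.view(B, C, -1))
    return torch.cat(feats, dim=1)                               # (B, 3C, N)

points = torch.rand(B, 128, 3) * 2 - 1               # query points along camera rays
features = sample_triplane(fused, points)            # fed to a renderer / decoder
print(features.shape)
```
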
  • Patent number: 12100113
    Abstract: In order to determine accurate three-dimensional (3D) models for objects within a video, the objects are first identified and tracked within the video, and a pose and shape are estimated for these tracked objects. A translation and global orientation are removed from the tracked objects to determine local motion for the objects, and motion infilling is performed to fill in any missing portions for the object within the video. A global trajectory is then determined for the objects within the video, and the infilled motion and global trajectory are then used to determine infilled global motion for the object within the video. This enables the accurate depiction of each object as a 3D pose sequence for that model that accounts for occlusions and global factors within the video.
    Type: Grant
    Filed: January 25, 2022
    Date of Patent: September 24, 2024
    Assignee: NVIDIA CORPORATION
    Inventors: Ye Yuan, Umar Iqbal, Pavlo Molchanov, Jan Kautz
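
The abstract above walks through a pipeline: track objects, estimate pose and shape, strip translation and global orientation to obtain local motion, infill missing motion, estimate a global trajectory, and recombine. The sketch below mirrors those steps with stand-in components (linear interpolation for infilling, a fixed per-frame displacement for the trajectory), which are not the patented networks; shapes are assumed.

```python
# High-level pipeline sketch following the abstract's steps; the infilling and
# trajectory models are stand-ins, not the disclosed method.
import numpy as np

def remove_global(poses):
    """Subtract root translation so only local (body-relative) motion remains."""
    root = poses[:, :1, :]                           # (T, 1, 3) root joint per frame
    return poses - root, root

def infill_local_motion(local, visible):
    """Stand-in infilling: linearly interpolate joints over occluded frames."""
    filled = local.copy()
    t = np.arange(len(local))
    for j in range(local.shape[1]):
        for d in range(3):
            filled[~visible, j, d] = np.interp(t[~visible], t[visible], local[visible, j, d])
    return filled

def estimate_global_trajectory(local):
    """Stand-in trajectory model: integrate a constant per-frame displacement."""
    step = np.array([0.02, 0.0, 0.0])                # assumed forward motion per frame
    return np.cumsum(np.tile(step, (len(local), 1)), axis=0)[:, None, :]

T, J = 30, 24                                        # frames, joints (SMPL-like count)
poses = np.random.randn(T, J, 3)                     # tracked 3D joints, world frame
visible = np.ones(T, dtype=bool); visible[10:15] = False   # an occluded span

local, _ = remove_global(poses)
infilled = infill_local_motion(local, visible)
trajectory = estimate_global_trajectory(infilled)
global_motion = infilled + trajectory                # infilled global 3D pose sequence
print(global_motion.shape)
```
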
  • Publication number: 20240169636
    Abstract: Systems and methods are disclosed that improve performance of synthesized motion generated by a diffusion neural network model. A physics-guided motion diffusion model incorporates physical constraints into the diffusion process to model the complex dynamics induced by forces and contact. Specifically, a physics-based motion projection module uses motion imitation in a physics simulator to project the denoised motion of a diffusion step to a physically plausible motion. The projected motion is further used in the next diffusion iteration to guide the denoising diffusion process. The use of physical constraints in the physics-guided motion diffusion model iteratively pulls the motion toward a physically-plausible space, reducing artifacts such as floating, foot sliding, and ground penetration.
    Type: Application
    Filed: May 15, 2023
    Publication date: May 23, 2024
    Inventors: Ye Yuan, Jiaming Song, Umar Iqbal, Arash Vahdat, Jan Kautz
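
The abstract above describes projecting the denoised motion at each diffusion step toward physical plausibility and feeding the projected result back into the next iteration. The loop below sketches only that guidance pattern; the denoiser and the "projection" (a toy clamp above the ground plane) are stand-ins for the physics-simulator imitation described in the abstract.

```python
# Sketch of the guidance loop only: at every reverse-diffusion step the denoised
# motion is projected toward physical plausibility before the next step.
import torch

T, J = 16, 24                                        # frames, joints
num_steps = 10

def denoise_step(x_t, step):
    """Placeholder denoiser: shrink noise a little each step."""
    return 0.9 * x_t

def physics_projection(motion):
    """Toy projection: no joint may fall below the ground plane (y >= 0)."""
    projected = motion.clone()
    projected[..., 1] = projected[..., 1].clamp(min=0.0)
    return projected

x = torch.randn(T, J, 3)                             # start from Gaussian noise
for step in reversed(range(num_steps)):
    denoised = denoise_step(x, step)                 # diffusion model's estimate
    projected = physics_projection(denoised)         # pull toward plausible motion
    # Use the projected motion to guide the next iteration (simple blend here).
    x = 0.5 * denoised + 0.5 * projected

print(x.shape, float(x[..., 1].min()))
```
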
  • Publication number: 20240153188
    Abstract: In various examples, systems and methods are disclosed relating to generating physics-plausible whole body motion, including determining a mesh sequence corresponding to a motion of at least one dynamic character of one or more dynamic characters and a mesh of a terrain using a video sequence, determining, using a generative model and based at least on the mesh sequence and the mesh of the terrain, an occlusion-free motion of the at least one dynamic character by infilling physics-plausible character motions in the mesh sequence for at least one frame of the video sequence that includes an occlusion of at least a portion of the at least one dynamic character, and determining physics-plausible whole body motion of the at least one dynamic character by applying physics-based imitation upon the occlusion-free motion.
    Type: Application
    Filed: August 24, 2023
    Publication date: May 9, 2024
    Applicant: NVIDIA Corporation
    Inventors: Jingbo WANG, Ye YUAN, Cheng XIE, Sanja FIDLER, Jan KAUTZ, Umar IQBAL, Zan GOJCIC, Sameh KHAMIS
  • Publication number: 20240070874
    Abstract: Estimating motion of a human or other object in video is a common computer task with applications in robotics, sports, mixed reality, etc. However, motion estimation becomes difficult when the camera capturing the video is moving, because the observed object and camera motions are entangled. The present disclosure provides for joint estimation of the motion of a camera and the motion of articulated objects captured in video by the camera.
    Type: Application
    Filed: April 17, 2023
    Publication date: February 29, 2024
    Inventors: Muhammed Kocabas, Ye Yuan, Umar Iqbal, Pavlo Molchanov, Jan Kautz
  • Publication number: 20230368501
    Abstract: A neural network is trained to identify one or more features of an image. The neural network is trained using a small number of original images, from which a plurality of additional images are derived. The additional images are generated by rotating and decoding embeddings of the image in a latent space generated by an autoencoder. The images generated by the rotation and decoding exhibit changes to a feature that are in proportion to the amount of rotation.
    Type: Application
    Filed: February 24, 2023
    Publication date: November 16, 2023
    Inventors: Seonwook Park, Shalini De Mello, Pavlo Molchanov, Umar Iqbal, Jan Kautz
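
The abstract above describes deriving additional training images by rotating an image's embedding in an autoencoder's latent space and decoding the result. The sketch below illustrates that augmentation pattern with a toy autoencoder; the architecture, latent size, and choice of latent coordinates to rotate are all assumptions.

```python
# Illustrative sketch: an autoencoder embeds an image, a rotation is applied to a
# 2D slice of the embedding, and the decoder produces an additional training image
# whose controlled feature varies in proportion to the rotation angle.
import math
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, latent_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 3 * 32 * 32))
    def encode(self, x):
        return self.enc(x)
    def decode(self, z):
        return self.dec(z).view(-1, 3, 32, 32)

def rotate_embedding(z, angle_rad, dims=(0, 1)):
    """Rotate two chosen latent coordinates by the given angle."""
    i, j = dims
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    z = z.clone()
    zi, zj = z[:, i].clone(), z[:, j].clone()
    z[:, i] = c * zi - s * zj
    z[:, j] = s * zi + c * zj
    return z

ae = TinyAutoencoder()
image = torch.rand(1, 3, 32, 32)                     # one original training image
z = ae.encode(image)
augmented = [ae.decode(rotate_embedding(z, math.radians(a))) for a in (5, 10, 15)]
print(len(augmented), augmented[0].shape)
```
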
  • Publication number: 20230214784
    Abstract: Various embodiments of a predictive double-booking system for use in medical appointment booking applications are described herein.
    Type: Application
    Filed: May 18, 2021
    Publication date: July 6, 2023
    Inventors: Sunilkumar Kakade, Umar Iqbal, Mark Page, Jordan Durham, Nilesh Mehta, Harry Schned, Daniel Hagen
  • Publication number: 20230144458
    Abstract: In examples, locations of facial landmarks may be applied to one or more machine learning models (MLMs) to generate output data indicating profiles corresponding to facial expressions, such as facial action coding system (FACS) values. The output data may be used to determine geometry of a model. For example, video frames depicting one or more faces may be analyzed to determine the locations. The facial landmarks may be normalized, then be applied to the MLM(s) to infer the profile(s), which may then be used to animate the model for expression retargeting from the video. The MLM(s) may include sub-networks that each analyze a set of input data corresponding to a region of the face to determine profiles that correspond to the region. The profiles from the sub-networks, along with global locations of facial landmarks, may be used by a subsequent network to infer the profiles for the overall face.
    Type: Application
    Filed: October 31, 2022
    Publication date: May 11, 2023
    Inventors: Alexander Malafeev, Shalini De Mello, Jaewoo Seo, Umar Iqbal, Koki Nagano, Jan Kautz, Simon Yuen
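
The abstract above outlines per-region sub-networks whose outputs, together with the global landmark layout, feed a subsequent network that infers whole-face expression profiles. Below is a structural sketch only; the region splits, landmark normalization, feature dimensions, and FACS count are assumptions.

```python
# Structural sketch: per-region sub-networks map normalized landmark subsets to
# regional expression profiles; a subsequent network combines those profiles with
# the global landmark layout to infer whole-face FACS values.
import torch
import torch.nn as nn

NUM_LANDMARKS, NUM_FACS = 68, 52                      # assumed landmark/FACS counts
REGIONS = {"eyes": range(36, 48), "brows": range(17, 27), "mouth": range(48, 68)}

def normalize(landmarks):
    """Zero-center and scale landmarks to remove head position and size."""
    centered = landmarks - landmarks.mean(dim=1, keepdim=True)
    scale = centered.norm(dim=-1).max(dim=1, keepdim=True).values.unsqueeze(-1)
    return centered / (scale + 1e-8)

class RegionNet(nn.Module):
    def __init__(self, n_points, out_dim=16):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(n_points * 2, 64), nn.ReLU(), nn.Linear(64, out_dim))
    def forward(self, pts):
        return self.mlp(pts.flatten(1))

class FaceNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.regions = nn.ModuleDict({k: RegionNet(len(v)) for k, v in REGIONS.items()})
        self.fusion = nn.Sequential(
            nn.Linear(16 * len(REGIONS) + NUM_LANDMARKS * 2, 128), nn.ReLU(),
            nn.Linear(128, NUM_FACS))
    def forward(self, landmarks):
        lm = normalize(landmarks)                                  # (B, 68, 2)
        regional = [self.regions[k](lm[:, list(v)]) for k, v in REGIONS.items()]
        fused = torch.cat(regional + [lm.flatten(1)], dim=-1)
        return self.fusion(fused)                                  # per-frame FACS profile

facs = FaceNet()(torch.rand(2, NUM_LANDMARKS, 2))
print(facs.shape)                                                  # (2, 52)
```
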
  • Publication number: 20230137403
    Abstract: Apparatuses, systems, and techniques are presented to generate one or more images. In at least one embodiment, one or more neural networks are used to generate one or more images of one or more objects in two or more different poses from two or more different points of view.
    Type: Application
    Filed: October 29, 2021
    Publication date: May 4, 2023
    Inventors: Orazio Gallo, Umar Iqbal, Atsuhiro Noguchi
  • Publication number: 20230070514
    Abstract: In order to determine accurate three-dimensional (3D) models for objects within a video, the objects are first identified and tracked within the video, and a pose and shape are estimated for these tracked objects. A translation and global orientation are removed from the tracked objects to determine local motion for the objects, and motion infilling is performed to fill in any missing portions for the object within the video. A global trajectory is then determined for the objects within the video, and the infilled motion and global trajectory are then used to determine infilled global motion for the object within the video. This enables the accurate depiction of each object as a 3D pose sequence for that model that accounts for occlusions and global factors within the video.
    Type: Application
    Filed: January 25, 2022
    Publication date: March 9, 2023
    Inventors: Ye Yuan, Umar Iqbal, Pavlo Molchanov, Jan Kautz
  • Patent number: 11593661
    Abstract: A neural network is trained to identify one or more features of an image. The neural network is trained using a small number of original images, from which a plurality of additional images are derived. The additional images are generated by rotating and decoding embeddings of the image in a latent space generated by an autoencoder. The images generated by the rotation and decoding exhibit changes to a feature that are in proportion to the amount of rotation.
    Type: Grant
    Filed: April 19, 2019
    Date of Patent: February 28, 2023
    Assignee: NVIDIA Corporation
    Inventors: Seonwook Park, Shalini De Mello, Pavlo Molchanov, Umar Iqbal, Jan Kautz
  • Publication number: 20230004760
    Abstract: Apparatuses, systems, and techniques to identify objects within an image using self-supervised machine learning. In at least one embodiment, a machine learning system is trained to recognize objects by training a first network to recognize objects within images that are generated by a second network. In at least one embodiment, the second network is a controllable network.
    Type: Application
    Filed: June 28, 2021
    Publication date: January 5, 2023
    Inventors: Siva Karthik Mustikovela, Shalini De Mello, Aayush Prakash, Umar Iqbal, Sifei Liu, Jan Kautz
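
The abstract above describes training a first network to recognize objects using images produced by a second, controllable network. The toy training loop below sketches that self-supervision pattern under stated assumptions: the generator's control value (here, a single pose scalar) doubles as a free label for the recognizer; both networks are placeholders.

```python
# Sketch only: a controllable generator synthesizes images from chosen attributes,
# and those synthetic image/attribute pairs supervise a recognition network with
# no manual labels. Both networks are toy stand-ins.
import torch
import torch.nn as nn

class ControllableGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1 + 8, 128), nn.ReLU(), nn.Linear(128, 3 * 16 * 16))
    def forward(self, pose, z):
        return self.net(torch.cat([pose, z], dim=-1)).view(-1, 3, 16, 16)

class Recognizer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 16 * 16, 64), nn.ReLU(), nn.Linear(64, 1))
    def forward(self, img):
        return self.net(img)

gen, rec = ControllableGenerator(), Recognizer()
opt = torch.optim.Adam(rec.parameters(), lr=1e-3)
for _ in range(3):                                   # a few self-supervised steps
    pose = torch.rand(32, 1) * 2 - 1                 # chosen control value = free label
    z = torch.randn(32, 8)
    with torch.no_grad():
        fake = gen(pose, z)                          # the generator provides the data
    loss = nn.functional.mse_loss(rec(fake), pose)   # the recognizer learns the control
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```
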
  • Patent number: 11488418
    Abstract: Estimating a three-dimensional (3D) pose of an object, such as a hand or body (human, animal, robot, etc.), from a 2D image is necessary for human-computer interaction. A hand pose can be represented by a set of points in 3D space, called keypoints. Two coordinates (x,y) represent spatial displacement and a third coordinate represents a depth of every point with respect to the camera. A monocular camera is used to capture an image of the 3D pose, but does not capture depth information. A neural network architecture is configured to generate a depth value for each keypoint in the captured image, even when portions of the pose are occluded, or the orientation of the object is ambiguous. Generation of the depth values enables estimation of the 3D pose of the object.
    Type: Grant
    Filed: December 28, 2020
    Date of Patent: November 1, 2022
    Assignee: NVIDIA Corporation
    Inventors: Umar Iqbal, Pavlo Molchanov, Thomas Michael Breuel, Jan Kautz
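
The abstract above describes predicting, for each keypoint, image coordinates plus a depth value so the 3D pose can be recovered from a monocular image. The sketch below shows that 2.5D pattern with a toy network and an assumed back-projection using camera intrinsics; the keypoint count, intrinsics, and root depth are placeholders.

```python
# Minimal sketch of the 2.5D keypoint idea: the model outputs (x, y) plus a
# root-relative depth per keypoint, and 3D points are recovered by back-projection
# with assumed camera intrinsics.
import torch
import torch.nn as nn

NUM_KEYPOINTS = 21                                   # e.g. a hand skeleton

class Pose25D(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(16, NUM_KEYPOINTS * 3)  # (x, y, relative depth) per keypoint
    def forward(self, img):
        return self.head(self.backbone(img)).view(-1, NUM_KEYPOINTS, 3)

def backproject(pred, root_depth, fx=500.0, fy=500.0, cx=128.0, cy=128.0):
    """Lift predicted (x, y, z_rel) to camera-space 3D points."""
    x_img, y_img = pred[..., 0], pred[..., 1]
    z = pred[..., 2] + root_depth                     # absolute depth per keypoint
    x = (x_img - cx) * z / fx
    y = (y_img - cy) * z / fy
    return torch.stack([x, y, z], dim=-1)

pred = Pose25D()(torch.rand(1, 3, 256, 256))
points_3d = backproject(pred, root_depth=600.0)
print(points_3d.shape)                                # (1, 21, 3)
```
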
  • Patent number: 11417011
    Abstract: Learning to estimate a 3D body pose, and likewise the pose of any type of object, from a single 2D image is of great interest for many practical graphics applications and generally relies on neural networks that have been trained with sample data which annotates (labels) each sample 2D image with a known 3D pose. Requiring this labeled training data however has various drawbacks, including for example that traditionally used training data sets lack diversity and therefore limit the extent to which neural networks are able to estimate 3D pose. Expanding these training data sets is also difficult since it requires manually provided annotations for 2D images, which is time consuming and prone to errors. The present disclosure overcomes these and other limitations of existing techniques by providing a model that is trained from unlabeled multi-view data for use in 3D pose estimation.
    Type: Grant
    Filed: June 9, 2020
    Date of Patent: August 16, 2022
    Assignee: NVIDIA CORPORATION
    Inventors: Umar Iqbal, Pavlo Molchanov, Jan Kautz
  • Publication number: 20220222832
    Abstract: A method and system are provided for tracking instances within a sequence of video frames. The method includes the steps of processing an image frame by a backbone network to generate a set of feature maps, processing the set of feature maps by one or more prediction heads, and analyzing the embedding features corresponding to a set of instances in two or more image frames of the sequence of video frames to establish a one-to-one correlation between instances in different image frames. The one or more prediction heads includes an embedding head configured to generate a set of embedding features corresponding to one or more instances of an object identified in the image frame. The method may also include training the one or more prediction heads using a set of annotated image frames and/or a plurality of sequences of unlabeled video frames.
    Type: Application
    Filed: January 6, 2022
    Publication date: July 14, 2022
    Inventors: Yang Fu, Sifei Liu, Umar Iqbal, Shalini De Mello, Jan Kautz
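
The abstract above describes an embedding head whose per-instance features are compared across frames to establish a one-to-one correlation between instances. The sketch below illustrates only that association step; the pooled-feature "embedding head", the boxes, and the greedy cosine-similarity matching are assumptions standing in for the disclosed prediction heads.

```python
# Sketch of the matching stage only: an embedding head produces one feature vector
# per detected instance, and instances in consecutive frames are associated by
# greedy matching on cosine similarity of those embeddings.
import torch
import torch.nn.functional as F

def embed_instances(feature_map, boxes):
    """Stand-in embedding head: average-pool backbone features inside each box."""
    embs = []
    for x0, y0, x1, y1 in boxes:
        crop = feature_map[:, y0:y1, x0:x1]
        embs.append(crop.mean(dim=(1, 2)))
    return torch.stack(embs)                          # (num_instances, C)

def match(prev_emb, curr_emb):
    """Greedy one-to-one association by descending cosine similarity."""
    sim = F.cosine_similarity(prev_emb[:, None], curr_emb[None, :], dim=-1)
    pairs, used = [], set()
    for i in sim.max(dim=1).values.argsort(descending=True).tolist():
        j = int(sim[i].argmax())
        if j not in used:
            pairs.append((i, j)); used.add(j)
    return pairs

feat_t0, feat_t1 = torch.rand(32, 64, 64), torch.rand(32, 64, 64)   # backbone outputs
boxes_t0 = [(5, 5, 20, 20), (30, 30, 50, 50)]
boxes_t1 = [(8, 6, 22, 21), (28, 31, 49, 52)]
prev = embed_instances(feat_t0, boxes_t0)
curr = embed_instances(feat_t1, boxes_t1)
print(match(prev, curr))                              # e.g. [(0, 0), (1, 1)]
```
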