Patents by Inventor Umar Iqbal

Umar Iqbal has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250239036
    Abstract: Apparatuses, systems, and techniques to generate 3D models. In at least one embodiment, a 3D model, generated by a second neural network, is refined by a first neural network. In at least one embodiment, the first neural network is adjusted based on a determination made by the first neural network.
    Type: Application
    Filed: January 22, 2024
    Publication date: July 24, 2025
    Inventors: Ye Yuan, Umar Iqbal, Jiaming Song, Arash Vahdat, Jan Kautz
  • Publication number: 20250232504
    Abstract: In various examples, systems and methods are disclosed relating to receiving at least one of a text prompt or a kinematic constraint and determining first human motion data using a motion model by applying the at least one of the text prompt or the kinematic constraint to the motion model. The motion model is updated by generating, using the motion model, second human motion data by applying motion capture (mocap) data and video reconstruction data as inputs to the motion model, receiving user feedback information for the second human motion data, and updating the motion model based on the user feedback information. The video reconstruction data is generated by reconstructing human motions from a plurality of videos. Physically implausible artifacts are filtered from the video reconstruction data using a motion imitation controller. The motion imitation controller is updated using at least one of Reinforcement Learning (RL) or physics-based character simulations.
    Type: Application
    Filed: January 16, 2024
    Publication date: July 17, 2025
    Applicant: NVIDIA Corporation
    Inventors: Jason PENG, Ye YUAN, Davis Winston REMPE, Umar IQBAL, Or LITANY, Tingwu WANG, Chen TESSLER, Jan KAUTZ, Sanja FIDLER, Michael BUTTNER
  • Publication number: 20250232505
    Abstract: Systems and methods are disclosed relating to receiving at least one of a text prompt or a kinematic constraint and generating, by a motion model including a first model and a second model, human motion data of a human character by applying random noise and the at least one of the text prompt or the kinematic constraint to the motion model. Generating the human motion data includes, for each iteration of diffusion, determining, using the first model, global root motion by applying noisy global root motion and noisy local joint motion as inputs to the first model, and determining, using the second model, local joint motion by applying the noisy local joint motion and local root motion as inputs to the second model. The local root motion is determined based on the global root motion. The human motion data includes the local joint motion and the global root motion.
    Type: Application
    Filed: January 16, 2024
    Publication date: July 17, 2025
    Applicant: NVIDIA Corporation
    Inventors: Jason PENG, Ye YUAN, Davis Winston REMPE, Umar IQBAL, Or LITANY, Tingwu WANG, Chen TESSLER, Jan KAUTZ, Sanja FIDLER, Michael BUTTNER
  • Publication number: 20250225706
    Abstract: In various examples, a timeline of text prompt(s) specifying any number of (e.g., sequential and/or simultaneous) actions may be specified or generated, and the timeline may be used to drive a diffusion model to generate compositional human motion that implements the arrangement of action(s) specified by the timeline. For example, at each denoising step, a pre-trained motion diffusion model may be used to denoise a motion segment corresponding to each text prompt independently of the others, and the resulting denoised motion segments may be temporally stitched, and/or spatially stitched based on body part labels associated with each text prompt. As such, the techniques described herein may be used to synthesize realistic motion that accurately reflects the semantics and timing of the text prompt(s) specified in the timeline.
    Type: Application
    Filed: January 4, 2024
    Publication date: July 10, 2025
    Inventors: Mathis PETROVICH, Xue Bin PENG, Davis REMPE, Umar IQBAL, Or LITANY, Sanja FIDLER
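The stitching scheme described in this abstract (denoise each prompt's motion segment independently, then blend segments at their boundaries) can be illustrated with a toy numpy sketch. Everything here is hypothetical: `denoise_segment` stands in for a pre-trained motion diffusion model, and the linear crossfade is only one simple way to stitch temporally.

```python
import numpy as np

def denoise_segment(noisy):
    # Hypothetical stand-in for one pass of a pre-trained motion
    # diffusion model; here it simply damps the noise.
    return noisy * 0.5

def stitch_temporal(segments, overlap):
    """Stitch motion segments in time, linearly crossfading
    over `overlap` frames at each boundary."""
    out = segments[0]
    for seg in segments[1:]:
        w = np.linspace(1.0, 0.0, overlap)[:, None]  # fade-out weights
        blended = w * out[-overlap:] + (1.0 - w) * seg[:overlap]
        out = np.concatenate([out[:-overlap], blended, seg[overlap:]])
    return out

# Two 30-frame motion segments of 24 joint channels, one per text prompt.
rng = np.random.default_rng(0)
segments = [rng.normal(size=(30, 24)) for _ in range(2)]

# Denoise each segment independently, then stitch the results in time.
denoised = [denoise_segment(s) for s in segments]
motion = stitch_temporal(denoised, overlap=5)
print(motion.shape)  # (55, 24)
```

A real implementation would run this stitching inside every denoising step rather than once at the end, and could additionally stitch spatially using the body-part labels associated with each text prompt, as the abstract describes.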
  • Publication number: 20250222129
    Abstract: The present document describes a pharmaceutical composition comprising a) a lipid nanoparticle operable to encapsulate a therapeutic agent, comprising a core and an external surface, said therapeutic agent being encapsulated within said core; said lipid nanoparticle having a size of from about 30 to about 80 nm, or a pegylated lipid comprising a distearoyl-rac-glycerol (DSG)-PEG and 1,2-distearoyl-sn-glycero-3-phosphoethanolamine-N-(DSPE)-PEG-DBCO; or a combination of: a size of from about 30 to about 80 nm and a pegylated lipid comprising a DSG-PEG and DSPE-PEG-DBCO; and b) an antibody or antigen-binding fragment thereof operable to transmigrate the blood-brain barrier (BBB), wherein the antibody or antigen-binding fragment thereof comprises complementarity determining regions (CDR1, CDR2 and CDR3), operably linked to said external surface of said lipid nanoparticle.
    Type: Application
    Filed: March 21, 2023
    Publication date: July 10, 2025
    Applicant: National Research Council of Canada
    Inventors: Abedelnasser Abulrob, Danica Stanimirovic, Umar Iqbal, Bryan Simard, Michel Gilbert, Yves Durocher, Warren Wakarchuk
  • Publication number: 20250111474
    Abstract: Systems and methods are disclosed that relate to synthesizing high-resolution 3D geometry and strictly view-consistent images that maintain image quality without relying on post-processing super resolution. For instance, embodiments of the present disclosure describe techniques, systems, and/or methods to scale neural volume rendering to the much higher resolution of native 2D images, thereby resolving fine-grained 3D geometry with unprecedented detail. Embodiments of the present disclosure employ learning-based samplers for accelerating neural rendering for 3D GAN training using up to five times fewer depth samples, which enables embodiments of the present disclosure to explicitly “render every pixel” of the full-resolution image during training and inference without post-processing super-resolution in 2D.
    Type: Application
    Filed: September 11, 2024
    Publication date: April 3, 2025
    Inventors: Koki Nagano, Alexander Trevithick, Matthew Aaron Wong Chan, Towaki Takikawa, Umar Iqbal, Shalini De Mello
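The motivation for learning-based depth samplers (fewer samples per ray without losing image quality) can be seen in a small numpy sketch of standard volume-rendering quadrature. This is an illustration of the underlying idea only: the "learned" sampler is emulated by a hand-placed proposal near a known surface, and nothing here reflects the patented architecture.

```python
import numpy as np

def volume_render(sigmas, colors, deltas):
    """Standard volume-rendering quadrature along a single ray."""
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

def sigma_at(z):
    # Toy density field: a thin surface near depth 2.0.
    return 50.0 * np.exp(-((z - 2.0) ** 2) / 0.01)

# Uniform sampling needs many samples to resolve the thin surface; a
# sampler that concentrates samples near the surface (emulated here by
# a hand-placed proposal) matches it with far fewer samples.
z_uniform = np.linspace(0.0, 4.0, 256)
z_proposal = np.linspace(1.8, 2.2, 48)

renders = []
for z in (z_uniform, z_proposal):
    deltas = np.diff(z, append=z[-1] + (z[1] - z[0]))
    colors = np.ones((len(z), 3))  # constant white emitter
    renders.append(volume_render(sigma_at(z), colors, deltas))
```

Both rays converge to essentially the same color, with the concentrated sampler using roughly five times fewer samples, which is the kind of saving that makes rendering every pixel at full resolution tractable.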
  • Patent number: 12266144
    Abstract: Apparatuses, systems, and techniques to identify orientations of objects within images. In at least one embodiment, one or more neural networks are trained to identify an orientation of one or more objects based, at least in part, on one or more characteristics of the objects other than their orientation.
    Type: Grant
    Filed: November 20, 2019
    Date of Patent: April 1, 2025
    Assignee: NVIDIA Corporation
    Inventors: Siva Karthik Mustikovela, Varun Jampani, Shalini De Mello, Sifei Liu, Umar Iqbal, Jan Kautz
  • Publication number: 20250061729
    Abstract: Apparatuses, systems, and techniques to identify three-dimensional positions of partially occluded objects in images. In at least one embodiment, one or more neural networks identify the three-dimensional positions of occluded portions of objects in a first image based, at least in part, on one or more second images including non-occluded objects.
    Type: Application
    Filed: August 25, 2023
    Publication date: February 20, 2025
    Inventors: Prash Goel, Umar Iqbal, Akarsh Umesh Zingade, Pavlo Molchanov
  • Publication number: 20250051443
    Abstract: Disclosed herein are isolated or purified monoclonal antibodies, or antigen-binding fragments thereof, which bind to human CD3 and which comprise: a CDRH1 amino acid sequence of SEQ ID NO: 150, a CDRH2 amino acid sequence of SEQ ID NO: 151, a CDRH3 amino acid sequence of SEQ ID NO: 152, a CDRL1 amino acid sequence of SEQ ID NO: 153, a CDRL2 amino acid sequence of SEQ ID NO: 154, and a CDRL3 amino acid sequence of SEQ ID NO: 155. Also provided are recombinant polypeptides comprising said monoclonal antibodies, or antigen-binding fragments thereof, such as multivalent antibodies, including bispecific T-cell engagers. Also described are chimeric antigen receptors (CARs) for CAR-T therapy comprising any one of the monoclonal antibodies, or antigen-binding fragments thereof. Nucleic acids encoding the aforementioned antibodies, fragments, and polypeptides are also disclosed, along with therapeutic applications in the treatment of an autoinflammatory disease or cancer.
    Type: Application
    Filed: October 5, 2022
    Publication date: February 13, 2025
    Inventors: Anne MARCIL, Robert PON, Scott MCCOMB, Umar IQBAL
  • Publication number: 20250022290
    Abstract: In various examples, image-based three-dimensional occupant assessment for in-cabin monitoring systems and applications are provided. An evaluation function may determine a 3D representation of an occupant of a machine by evaluating sensor data comprising an image frame from an optical image sensor. The 3D representation may comprise at least one characteristic representative of a size of the occupant (e.g., a 3D pose and/or 3D shape), which may be used to derive other characteristics such as, but not limited to, weight, height, and/or age. A first processing path may generate a representation of one or more features corresponding to at least a portion of the occupant based on optical image data, and a second processing path may determine a depth corresponding to the one or more features based on depth data derived from the optical image data and ground truth depth data corresponding to the interior of the machine.
    Type: Application
    Filed: July 10, 2023
    Publication date: January 16, 2025
    Inventors: Sakthivel SIVARAMAN, Arjun Guru, Rajath Shetty, Umar Iqbal, Orazio Gallo, Hang Su, Abhishek Badki, Varsha Hedau
  • Publication number: 20240404174
    Abstract: Systems and methods are disclosed that animate a source portrait image with motion (i.e., pose and expression) from a target image. In contrast to conventional systems, given an unseen single-view portrait image, an implicit three-dimensional (3D) head avatar is constructed that not only captures photo-realistic details within and beyond the face region, but also is readily available for animation without requiring further optimization during inference. In an embodiment, three processing branches of a system produce three tri-planes representing coarse 3D geometry for the head avatar, detailed appearance of a source image, as well as the expression of a target image. By applying volumetric rendering to a combination of the three tri-planes, an image of the desired identity, expression and pose is generated.
    Type: Application
    Filed: May 2, 2024
    Publication date: December 5, 2024
    Inventors: Xueting Li, Shalini De Mello, Sifei Liu, Koki Nagano, Umar Iqbal, Jan Kautz
  • Patent number: 12100113
    Abstract: In order to determine accurate three-dimensional (3D) models for objects within a video, the objects are first identified and tracked within the video, and a pose and shape are estimated for these tracked objects. A translation and global orientation are removed from the tracked objects to determine local motion for the objects, and motion infilling is performed to fill in any missing portions for the object within the video. A global trajectory is then determined for the objects within the video, and the infilled motion and global trajectory are then used to determine infilled global motion for the object within the video. This enables the accurate depiction of each object as a 3D pose sequence for that model that accounts for occlusions and global factors within the video.
    Type: Grant
    Filed: January 25, 2022
    Date of Patent: September 24, 2024
    Assignee: NVIDIA Corporation
    Inventors: Ye Yuan, Umar Iqbal, Pavlo Molchanov, Jan Kautz
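The decomposition this abstract describes (strip global translation and orientation, infill the missing local motion, then recombine with an estimated global trajectory) can be sketched in one dimension. The linear infilling below is purely illustrative; the patented motion infilling is learned, not interpolation.

```python
import numpy as np

def infill_linear(seq):
    """Fill NaN (occluded) frames of a 1-D sequence by linear interpolation."""
    t = np.arange(len(seq))
    ok = ~np.isnan(seq)
    return np.interp(t, t[ok], seq[ok])

# Toy 8-frame clip: a global root trajectory plus one joint coordinate
# expressed locally (translation already removed). NaN marks occlusion.
root = np.linspace(0.0, 7.0, 8)
local = np.array([1.0, 1.1, np.nan, np.nan, 1.4, 1.5, 1.6, 1.7])

local_filled = infill_linear(local)     # motion infilling in local space
global_motion = local_filled + root     # recombine with global trajectory
print(global_motion[2])  # ≈ 3.2 (1.2 infilled + 2.0 root)
```

Infilling in the local, trajectory-free space is the key move: occluded frames are completed from nearby local poses without being distorted by the object's overall movement through the scene.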
  • Publication number: 20240169636
    Abstract: Systems and methods are disclosed that improve performance of synthesized motion generated by a diffusion neural network model. A physics-guided motion diffusion model incorporates physical constraints into the diffusion process to model the complex dynamics induced by forces and contact. Specifically, a physics-based motion projection module uses motion imitation in a physics simulator to project the denoised motion of a diffusion step to a physically plausible motion. The projected motion is further used in the next diffusion iteration to guide the denoising diffusion process. The use of physical constraints in the physics-guided motion diffusion model iteratively pulls the motion toward a physically-plausible space, reducing artifacts such as floating, foot sliding, and ground penetration.
    Type: Application
    Filed: May 15, 2023
    Publication date: May 23, 2024
    Inventors: Ye Yuan, Jiaming Song, Umar Iqbal, Arash Vahdat, Jan Kautz
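The guidance loop in this abstract alternates a denoising step with a physics-based projection, each projected result seeding the next iteration. A toy numpy sketch of that loop structure follows; the real projection runs motion imitation in a physics simulator, whereas the placeholder below merely clamps joints above the ground plane to remove penetration.

```python
import numpy as np

def denoise(x, t):
    # Hypothetical stand-in for one reverse-diffusion step of a
    # motion diffusion model.
    return 0.9 * x

def physics_project(motion):
    # Stand-in for physics-based motion projection: clamp each joint's
    # vertical coordinate to the ground plane, eliminating penetration.
    projected = motion.copy()
    projected[..., 2] = np.maximum(projected[..., 2], 0.0)
    return projected

rng = np.random.default_rng(1)
x = rng.normal(size=(60, 22, 3))   # 60 frames, 22 joints, xyz
for t in range(10, 0, -1):
    x = denoise(x, t)              # denoise the current motion estimate
    x = physics_project(x)         # project to a physically plausible motion
    # the projected motion guides the next diffusion iteration
```

Running the projection inside the loop, rather than once after sampling, is what iteratively pulls the trajectory toward the physically plausible space at every denoising step.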
  • Publication number: 20240153188
    Abstract: In various examples, systems and methods are disclosed relating to generating physics-plausible whole body motion, including determining a mesh sequence corresponding to a motion of at least one dynamic character of one or more dynamic characters and a mesh of a terrain using a video sequence; determining, using a generative model and based at least on the mesh sequence and the mesh of the terrain, an occlusion-free motion of the at least one dynamic character by infilling physics-plausible character motions in the mesh sequence for at least one frame of the video sequence that includes an occlusion of at least a portion of the at least one dynamic character; and determining physics-plausible whole body motion of the at least one dynamic character by applying physics-based imitation upon the occlusion-free motion.
    Type: Application
    Filed: August 24, 2023
    Publication date: May 9, 2024
    Applicant: NVIDIA Corporation
    Inventors: Jingbo WANG, Ye YUAN, Cheng XIE, Sanja FIDLER, Jan KAUTZ, Umar IQBAL, Zan GOJCIC, Sameh KHAMIS
  • Publication number: 20240070874
    Abstract: Estimating motion of a human or other object in video is a common computer task with applications in robotics, sports, mixed reality, etc. However, motion estimation becomes difficult when the camera capturing the video is moving, because the observed object and camera motions are entangled. The present disclosure provides for joint estimation of the motion of a camera and the motion of articulated objects captured in video by the camera.
    Type: Application
    Filed: April 17, 2023
    Publication date: February 29, 2024
    Inventors: Muhammed Kocabas, Ye Yuan, Umar Iqbal, Pavlo Molchanov, Jan Kautz
  • Publication number: 20230368501
    Abstract: A neural network is trained to identify one or more features of an image. The neural network is trained using a small number of original images, from which a plurality of additional images are derived. The additional images are generated by rotating and decoding embeddings of the image in a latent space generated by an autoencoder. The images generated by the rotation and decoding exhibit changes to a feature that are in proportion to the amount of rotation.
    Type: Application
    Filed: February 24, 2023
    Publication date: November 16, 2023
    Inventors: Seonwook Park, Shalini De Mello, Pavlo Molchanov, Umar Iqbal, Jan Kautz
  • Publication number: 20230214784
    Abstract: Various embodiments of a predictive double-booking system for use in medical appointment booking applications are described herein.
    Type: Application
    Filed: May 18, 2021
    Publication date: July 6, 2023
    Inventors: Sunilkumar Kakade, Umar Iqbal, Mark Page, Jordan Durham, Nilesh Mehta, Harry Schned, Daniel Hagen
  • Publication number: 20230144458
    Abstract: In examples, locations of facial landmarks may be applied to one or more machine learning models (MLMs) to generate output data indicating profiles corresponding to facial expressions, such as facial action coding system (FACS) values. The output data may be used to determine geometry of a model. For example, video frames depicting one or more faces may be analyzed to determine the locations. The facial landmarks may be normalized, then applied to the MLM(s) to infer the profile(s), which may then be used to animate the model for expression retargeting from the video. The MLM(s) may include sub-networks that each analyze a set of input data corresponding to a region of the face to determine profiles that correspond to the region. The profiles from the sub-networks, along with global locations of facial landmarks, may be used by a subsequent network to infer the profiles for the overall face.
    Type: Application
    Filed: October 31, 2022
    Publication date: May 11, 2023
    Inventors: Alexander Malafeev, Shalini De Mello, Jaewoo Seo, Umar Iqbal, Koki Nagano, Jan Kautz, Simon Yuen
  • Publication number: 20230137403
    Abstract: Apparatuses, systems, and techniques are presented to generate one or more images. In at least one embodiment, one or more neural networks are used to generate one or more images of one or more objects in two or more different poses from two or more different points of view.
    Type: Application
    Filed: October 29, 2021
    Publication date: May 4, 2023
    Inventors: Orazio Gallo, Umar Iqbal, Atsuhiro Noguchi
  • Publication number: 20230070514
    Abstract: In order to determine accurate three-dimensional (3D) models for objects within a video, the objects are first identified and tracked within the video, and a pose and shape are estimated for these tracked objects. A translation and global orientation are removed from the tracked objects to determine local motion for the objects, and motion infilling is performed to fill in any missing portions for the object within the video. A global trajectory is then determined for the objects within the video, and the infilled motion and global trajectory are then used to determine infilled global motion for the object within the video. This enables the accurate depiction of each object as a 3D pose sequence for that model that accounts for occlusions and global factors within the video.
    Type: Application
    Filed: January 25, 2022
    Publication date: March 9, 2023
    Inventors: Ye Yuan, Umar Iqbal, Pavlo Molchanov, Jan Kautz