Patents by Inventor Hairong Jiang

Hairong Jiang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240104879
    Abstract: In various examples, calibration techniques for interior depth sensors and image sensors for in-cabin monitoring systems and applications are provided. An intermediary coordinate system may be generated using calibration targets distributed within an interior space to reference 3D positions of features detected by both depth-perception and optical image sensors. Rotation-translation transforms may be determined to compute a first transform (H1) between the depth-perception sensor's 3D coordinate system and the 3D intermediary coordinate system, and a second transform (H2) between the optical image sensor's 2D coordinate system and the intermediary coordinate system. A third transform (H3) between the depth-perception sensor's 3D coordinate system and the optical image sensor's 2D coordinate system can be computed as a function of H1 and H2. The calibration targets may comprise a structural substrate that includes one or more fiducial point markers and one or more motion targets.
    Type: Application
    Filed: September 26, 2022
    Publication date: March 28, 2024
    Inventors: Hairong JIANG, Yuzhuo REN, Nitin BHARADWAJ, Chun-Wei CHEN, Varsha Chandrashekhar HEDAU
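The transform chaining described in the abstract above can be sketched in a few lines of numpy. This is a minimal illustration, not the patented method: it treats every frame as a 3D rigid frame for simplicity (the patent's H2 involves the image sensor's 2D coordinate system), assumes H1 and H2 map the depth-sensor and image-sensor frames into the intermediary frame, and composes H3 = inv(H2) * H1; all names and values are illustrative.

```python
import numpy as np

def make_rt(rotation_deg, translation):
    """Build a 4x4 homogeneous rotation(about z)-translation matrix."""
    th = np.radians(rotation_deg)
    H = np.eye(4)
    H[:2, :2] = [[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]]
    H[:3, 3] = translation
    return H

# H1: depth-sensor frame -> intermediary frame (illustrative values)
H1 = make_rt(30.0, [0.1, -0.2, 0.05])
# H2: image-sensor frame -> intermediary frame (illustrative values)
H2 = make_rt(-15.0, [0.0, 0.3, 0.0])

# H3: depth-sensor frame -> image-sensor frame, composed from H1 and H2
H3 = np.linalg.inv(H2) @ H1

# A 3D point seen by the depth sensor maps into the image sensor's frame:
p_depth = np.array([1.0, 2.0, 0.5, 1.0])   # homogeneous point
p_image = H3 @ p_depth
# Sanity check: routing through the intermediary frame gives the same result
assert np.allclose(p_image, np.linalg.inv(H2) @ (H1 @ p_depth))
```

Routing both sensors through a shared intermediary frame built from in-cabin calibration targets means neither sensor needs to observe the other directly to obtain their relative pose.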
  • Publication number: 20240104941
    Abstract: In various examples, sensor parameter calibration techniques for in-cabin monitoring systems and applications are presented. An occupant monitoring system (OMS) is an example of a system that may be used within a vehicle or machine cabin to perform real-time assessments of driver and occupant presence, gaze, alertness, and/or other conditions. In some embodiments, a calibration parameter for an interior image sensor is determined so that the coordinates of features detected in 2D captured images may be referenced to an in-cabin 3D coordinate system. In some embodiments, a processing unit may detect fiducial points using an image of an interior space captured by a sensor, determine a 2D image coordinate for a fiducial point using the image, determine a 3D coordinate for the fiducial point, determine a calibration parameter comprising a rotation-translation transform from the 2D image coordinate and the 3D coordinate, and configure an operation based on the calibration parameter.
    Type: Application
    Filed: September 26, 2022
    Publication date: March 28, 2024
    Inventors: Yuzhuo REN, Hairong JIANG, Niranjan AVADHANAM, Varsha Chandrashekhar HEDAU
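Under the standard pinhole camera model, the calibration parameter this abstract describes, a rotation-translation transform relating 2D fiducial detections to in-cabin 3D coordinates, ties the two together as u ~ K[R|t]X. A hedged numpy sketch with assumed intrinsics and an assumed [R|t]; none of these values come from the patent:

```python
import numpy as np

# Illustrative pinhole intrinsics (focal length and principal point assumed)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Assumed rotation-translation calibration: cabin 3D frame -> camera frame
R = np.eye(3)                    # identity rotation for simplicity
t = np.array([0.0, 0.0, 2.0])    # cabin origin 2 m in front of the camera

def project(X_cabin):
    """Project a 3D cabin-frame fiducial point to 2D pixel coordinates."""
    X_cam = R @ X_cabin + t      # apply the rotation-translation transform
    uvw = K @ X_cam              # apply intrinsics
    return uvw[:2] / uvw[2]      # perspective divide

fiducial_3d = np.array([0.5, -0.25, 0.0])   # a fiducial point in the cabin frame
pixel = project(fiducial_3d)
```

In practice the transform is estimated in the other direction, from matched 2D/3D fiducial pairs (e.g. with a PnP solver), and then lets an OMS map image-space detections back into the cabin's 3D frame.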
  • Publication number: 20240062067
    Abstract: Apparatuses, systems, and techniques are described to determine locations of objects using images including digital representations of those objects. In at least one embodiment, a gaze of one or more occupants of a vehicle is determined independently of a location of one or more sensors used to detect those occupants.
    Type: Application
    Filed: October 30, 2023
    Publication date: February 22, 2024
    Inventors: Feng Hu, Niranjan Avadhanam, Yuzhuo Ren, Sujay Yadawadkar, Sakthivel Sivaraman, Hairong Jiang, Siyue Wu
  • Patent number: 11886634
    Abstract: In various examples, systems and methods are disclosed that provide highly accurate gaze predictions that are specific to a particular user by generating and applying, in deployment, personalized calibration functions to outputs and/or layers of a machine learning model. The calibration functions corresponding to a specific user may operate on outputs (e.g., gaze predictions from a machine learning model) to provide updated values and gaze predictions. The calibration functions may also be applied to one or more last layers of the machine learning model to operate on features identified by the model and provide values that are more accurate. The calibration functions may be generated using explicit calibration methods by instructing users to gaze at a number of identified ground truth locations within the interior of the vehicle. Once generated, the calibration functions may be modified or refined through implicit gaze calibration points and/or regions based on gaze saliency maps.
    Type: Grant
    Filed: March 19, 2021
    Date of Patent: January 30, 2024
    Assignee: NVIDIA Corporation
    Inventors: Nuri Murat Arar, Sujay Yadawadkar, Hairong Jiang, Nishant Puri, Niranjan Avadhanam
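The explicit-calibration step in the abstract above, fitting a personalized function from a user's gaze at known ground-truth locations, can be sketched as a least-squares affine correction applied to the model's raw predictions. The affine form and all values are assumptions for illustration; the patent does not specify this functional form:

```python
import numpy as np

# Ground-truth gaze targets the user was instructed to look at (illustrative)
truth = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
# Raw model predictions for those targets, with a systematic per-user bias
pred = truth * 0.9 + np.array([0.05, -0.05])

# Fit an affine calibration  y ~ A x + b  by linear least squares
X = np.hstack([pred, np.ones((len(pred), 1))])   # append a bias column
W, *_ = np.linalg.lstsq(X, truth, rcond=None)    # solves X W ~ truth

def calibrate(raw):
    """Apply the personalized calibration to a raw gaze prediction."""
    return np.append(raw, 1.0) @ W
```

Once fitted, the same correction can be refined online from implicit calibration points, as the abstract notes, without re-running the explicit procedure.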
  • Patent number: 11841987
    Abstract: Machine learning systems and methods that learn glare, and thus determine gaze direction in a manner more resilient to the effects of glare on input images. The machine learning systems have an isolated representation of glare, e.g., information on the locations of glare points in an image, as an explicit input, in addition to the image itself. In this manner, the machine learning systems explicitly consider glare while making a determination of gaze direction, thus producing more accurate results for images containing glare.
    Type: Grant
    Filed: May 23, 2022
    Date of Patent: December 12, 2023
    Assignee: NVIDIA Corporation
    Inventors: Hairong Jiang, Nishant Puri, Niranjan Avadhanam, Nuri Murat Arar
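One simple way to give a model "an isolated representation of glare ... as an explicit input, in addition to the image itself" is to append a glare-location mask as an extra input channel. A sketch under that assumption; channel stacking is illustrative, not necessarily the patented mechanism:

```python
import numpy as np

def add_glare_channel(image, glare_points):
    """Append a glare-location mask as an extra input channel.

    image: (H, W, C) float array; glare_points: list of (row, col) pixels.
    """
    h, w, _ = image.shape
    mask = np.zeros((h, w, 1), dtype=image.dtype)
    for r, c in glare_points:
        mask[r, c, 0] = 1.0          # mark each detected glare point
    return np.concatenate([image, mask], axis=-1)

frame = np.zeros((4, 6, 3))          # tiny stand-in for a camera frame
augmented = add_glare_channel(frame, [(1, 2), (3, 5)])
```

Because the glare locations arrive as a separate channel, the network does not have to rediscover them from pixel intensities before compensating for them.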
  • Publication number: 20230356728
    Abstract: Approaches for an advanced AI-assisted vehicle can utilize an extensive suite of sensors inside and outside the vehicle, providing information to a computing platform running one or more neural networks. The neural networks can perform functions such as facial recognition, eye tracking, gesture recognition, head-position tracking, and gaze tracking to monitor the condition and safety of the driver and passengers. The system also identifies and tracks body pose and signals of people inside and outside the vehicle to understand their intent and actions. The system can track driver gaze to identify objects the driver might not see, such as cross-traffic and approaching cyclists. The system can provide notification of potential hazards, advice, and warnings. The system can also take corrective action, which may include controlling one or more vehicle subsystems, or when necessary, autonomously controlling the entire vehicle. The system can work with vehicle systems for enhanced analytics and recommendations.
    Type: Application
    Filed: May 8, 2023
    Publication date: November 9, 2023
    Inventors: Anshul Jain, Ratin Kumar, Feng Hu, Niranjan Avadhanam, Atousa Torabi, Hairong Jiang, Ram Ganapathi, Taek Kim
  • Patent number: 11803759
    Abstract: Apparatuses, systems, and techniques are described to determine locations of objects using images including digital representations of those objects. In at least one embodiment, a gaze of one or more occupants of a vehicle is determined independently of a location of one or more sensors used to detect those occupants.
    Type: Grant
    Filed: October 11, 2021
    Date of Patent: October 31, 2023
    Assignee: Nvidia Corporation
    Inventors: Feng Hu, Niranjan Avadhanam, Yuzhuo Ren, Sujay Yadawadkar, Sakthivel Sivaraman, Hairong Jiang, Siyue Wu
  • Publication number: 20230316635
    Abstract: In various examples, an environment surrounding an ego-object is visualized using an adaptive 3D bowl that models the environment with a shape that changes based on distance (and direction) to one or more representative point(s) on detected objects. Distance (and direction) to detected objects may be determined using 3D object detection or a top-down 2D or 3D occupancy grid, and used to adapt the shape of the adaptive 3D bowl in various ways (e.g., by sizing its ground plane to fit within the distance to the closest detected object, fitting a shape using an optimization algorithm). The adaptive 3D bowl may be enabled or disabled during each time slice (e.g., based on ego-speed), and the 3D bowl for each time slice may be used to render a visualization of the environment (e.g., a top-down projection image, a textured 3D bowl, and/or a rendered view thereof).
    Type: Application
    Filed: February 23, 2023
    Publication date: October 5, 2023
    Inventors: Hairong JIANG, Nuri Murat ARAR, Orazio GALLO, Jan KAUTZ, Ronan LETOQUIN
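The bowl-sizing idea in the abstract above, fitting the ground plane within the distance to the closest detected object and gating the adaptive bowl on ego-speed, might be sketched as follows. The margin, default radius, and speed threshold are invented for illustration:

```python
import numpy as np

def bowl_ground_radius(object_points_xy, margin=0.5, default=10.0):
    """Size the bowl's flat ground plane to fit inside the nearest object.

    object_points_xy: (N, 2) representative points of detected objects in
    the ego frame (ego at the origin). All parameter values are illustrative.
    """
    if len(object_points_xy) == 0:
        return default                # nothing detected: fall back to a default
    nearest = np.min(np.linalg.norm(object_points_xy, axis=1))
    return max(nearest - margin, 1.0) # shrink inside the closest object

def adaptive_bowl_enabled(ego_speed_mps, threshold=3.0):
    """Per time slice, use the adaptive bowl only below a speed threshold."""
    return ego_speed_mps < threshold
```

Shrinking the flat ground plane so it ends before the nearest obstacle keeps close-by objects from being smeared across the bowl's floor in the rendered surround view.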
  • Publication number: 20230244941
    Abstract: Systems and methods for determining the gaze direction of a subject and projecting this gaze direction onto specific regions of an arbitrary three-dimensional geometry. In an exemplary embodiment, gaze direction may be determined by a regression-based machine learning model. The determined gaze direction is then projected onto a three-dimensional map or set of surfaces that may represent any desired object or system. Maps may represent any three-dimensional layout or geometry, whether actual or virtual. Gaze vectors can thus be used to determine the object of gaze within any environment. Systems can also readily and efficiently adapt for use in different environments by retrieving a different set of surfaces or regions for each environment.
    Type: Application
    Filed: April 10, 2023
    Publication date: August 3, 2023
    Inventors: Nuri Murat Arar, Hairong Jiang, Nishant Puri, Rajath Shetty, Niranjan Avadhanam
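Projecting a gaze vector onto a set of 3D surfaces, as the abstract above describes, reduces to ray-surface intersection: cast a ray from the eye along the estimated gaze direction and keep the nearest surface it hits. A sketch using bounded planar regions as the "map"; the region schema and all values are illustrative assumptions:

```python
import numpy as np

def gaze_region(origin, direction, regions):
    """Project a gaze ray onto planar regions; return the nearest one hit.

    Each region (illustrative schema): (name, point_on_plane, normal, radius),
    where 'radius' bounds how far from point_on_plane a hit may land.
    """
    best, best_t = None, np.inf
    for name, p0, n, radius in regions:
        denom = np.dot(direction, n)
        if abs(denom) < 1e-9:
            continue                   # ray is parallel to the plane
        t = np.dot(p0 - origin, n) / denom
        if t <= 0 or t >= best_t:
            continue                   # behind the eye, or farther than best
        hit = origin + t * direction
        if np.linalg.norm(hit - p0) <= radius:
            best, best_t = name, t
    return best

regions = [("mirror", np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0]), 0.2),
           ("screen", np.array([0.0, -0.3, 0.8]), np.array([0.0, 0.0, -1.0]), 0.35)]
looked_at = gaze_region(np.zeros(3), np.array([0.0, 0.0, 1.0]), regions)
```

Swapping in a different region set is all that is needed to reuse the same gaze model in a different cabin or environment, which is the adaptability the abstract highlights.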
  • Patent number: 11704814
    Abstract: In various examples, an adaptive eye tracking machine learning model engine (“adaptive-model engine”) for an eye tracking system is described. The adaptive-model engine may include an eye tracking or gaze tracking development pipeline (“adaptive-model training pipeline”) that supports collecting data, training, optimizing, and deploying an adaptive eye tracking model that is a customized eye tracking model based on a set of features of an identified deployment environment. The adaptive-model engine supports ensembling the adaptive eye tracking model that may be trained on gaze vector estimation in surround environments and ensemble based on a plurality of eye tracking variant models and a plurality of facial landmark neural network metrics.
    Type: Grant
    Filed: May 13, 2021
    Date of Patent: July 18, 2023
    Assignee: NVIDIA Corporation
    Inventors: Nuri Murat Arar, Niranjan Avadhanam, Hairong Jiang, Nishant Puri, Rajath Shetty, Shagan Sah
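The ensembling the abstract mentions, combining eye-tracking variant models using facial-landmark network metrics, could be as simple as a confidence-weighted average of unit gaze vectors. This weighting scheme is an assumption for illustration, not the patented ensemble:

```python
import numpy as np

def ensemble_gaze(gaze_vectors, landmark_confidences):
    """Confidence-weighted ensemble of per-model gaze direction estimates.

    gaze_vectors: (M, 3) unit gaze vectors from M variant models;
    landmark_confidences: (M,) quality metrics used as weights (illustrative).
    """
    w = np.asarray(landmark_confidences, dtype=float)
    w = w / w.sum()                            # normalize the weights
    blended = w @ np.asarray(gaze_vectors)     # weighted sum of gaze vectors
    return blended / np.linalg.norm(blended)   # renormalize to a unit vector
```

Weighting by landmark quality lets models that saw a well-localized face dominate the final gaze estimate in each deployment environment.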
  • Patent number: 11657263
    Abstract: Systems and methods for determining the gaze direction of a subject and projecting this gaze direction onto specific regions of an arbitrary three-dimensional geometry. In an exemplary embodiment, gaze direction may be determined by a regression-based machine learning model. The determined gaze direction is then projected onto a three-dimensional map or set of surfaces that may represent any desired object or system. Maps may represent any three-dimensional layout or geometry, whether actual or virtual. Gaze vectors can thus be used to determine the object of gaze within any environment. Systems can also readily and efficiently adapt for use in different environments by retrieving a different set of surfaces or regions for each environment.
    Type: Grant
    Filed: August 28, 2020
    Date of Patent: May 23, 2023
    Assignee: NVIDIA Corporation
    Inventors: Nuri Murat Arar, Hairong Jiang, Nishant Puri, Rajath Shetty, Niranjan Avadhanam
  • Publication number: 20220366568
    Abstract: In various examples, an adaptive eye tracking machine learning model engine (“adaptive-model engine”) for an eye tracking system is described. The adaptive-model engine may include an eye tracking or gaze tracking development pipeline (“adaptive-model training pipeline”) that supports collecting data, training, optimizing, and deploying an adaptive eye tracking model that is a customized eye tracking model based on a set of features of an identified deployment environment. The adaptive-model engine supports ensembling the adaptive eye tracking model that may be trained on gaze vector estimation in surround environments and ensemble based on a plurality of eye tracking variant models and a plurality of facial landmark neural network metrics.
    Type: Application
    Filed: May 13, 2021
    Publication date: November 17, 2022
    Inventors: Nuri Murat Arar, Niranjan Avadhanam, Hairong Jiang, Nishant Puri, Rajath Shetty, Shagan Sah
  • Publication number: 20220300072
    Abstract: In various examples, systems and methods are disclosed that provide highly accurate gaze predictions that are specific to a particular user by generating and applying, in deployment, personalized calibration functions to outputs and/or layers of a machine learning model. The calibration functions corresponding to a specific user may operate on outputs (e.g., gaze predictions from a machine learning model) to provide updated values and gaze predictions. The calibration functions may also be applied to one or more last layers of the machine learning model to operate on features identified by the model and provide values that are more accurate. The calibration functions may be generated using explicit calibration methods by instructing users to gaze at a number of identified ground truth locations within the interior of the vehicle. Once generated, the calibration functions may be modified or refined through implicit gaze calibration points and/or regions based on gaze saliency maps.
    Type: Application
    Filed: March 19, 2021
    Publication date: September 22, 2022
    Inventors: Nuri Murat Arar, Sujay Yadawadkar, Hairong Jiang, Nishant Puri, Niranjan Avadhanam
  • Publication number: 20220283638
    Abstract: Machine learning systems and methods that learn glare, and thus determine gaze direction in a manner more resilient to the effects of glare on input images. The machine learning systems have an isolated representation of glare, e.g., information on the locations of glare points in an image, as an explicit input, in addition to the image itself. In this manner, the machine learning systems explicitly consider glare while making a determination of gaze direction, thus producing more accurate results for images containing glare.
    Type: Application
    Filed: May 23, 2022
    Publication date: September 8, 2022
    Inventors: Hairong Jiang, Nishant Puri, Niranjan Avadhanam, Nuri Murat Arar
  • Patent number: 11340701
    Abstract: Machine learning systems and methods that learn glare, and thus determine gaze direction in a manner more resilient to the effects of glare on input images. The machine learning systems have an isolated representation of glare, e.g., information on the locations of glare points in an image, as an explicit input, in addition to the image itself. In this manner, the machine learning systems explicitly consider glare while making a determination of gaze direction, thus producing more accurate results for images containing glare.
    Type: Grant
    Filed: June 16, 2020
    Date of Patent: May 24, 2022
    Assignee: NVIDIA Corporation
    Inventors: Hairong Jiang, Nishant Puri, Niranjan Avadhanam, Nuri Murat Arar
  • Publication number: 20220026987
    Abstract: Apparatuses, systems, and techniques are described to determine locations of objects using images including digital representations of those objects. In at least one embodiment, a gaze of one or more occupants of a vehicle is determined independently of a location of one or more sensors used to detect those occupants.
    Type: Application
    Filed: October 11, 2021
    Publication date: January 27, 2022
    Inventors: Feng Hu, Niranjan Avadhanam, Yuzhuo Ren, Sujay Yadawadkar, Sakthivel Sivaraman, Hairong Jiang, Siyue Wu
  • Patent number: 11144754
    Abstract: Apparatuses, systems, and techniques are described to determine locations of objects using images including digital representations of those objects. In at least one embodiment, a gaze of one or more occupants of a vehicle is determined independently of a location of one or more sensors used to detect those occupants.
    Type: Grant
    Filed: August 19, 2019
    Date of Patent: October 12, 2021
    Assignee: Nvidia Corporation
    Inventors: Feng Hu, Niranjan Avadhanam, Yuzhuo Ren, Sujay Yadawadkar, Sakthivel Sivaraman, Hairong Jiang, Siyue Wu
  • Publication number: 20210181837
    Abstract: Machine learning systems and methods that learn glare, and thus determine gaze direction in a manner more resilient to the effects of glare on input images. The machine learning systems have an isolated representation of glare, e.g., information on the locations of glare points in an image, as an explicit input, in addition to the image itself. In this manner, the machine learning systems explicitly consider glare while making a determination of gaze direction, thus producing more accurate results for images containing glare.
    Type: Application
    Filed: June 16, 2020
    Publication date: June 17, 2021
    Inventors: Hairong Jiang, Nishant Puri, Niranjan Avadhanam, Nuri Murat Arar
  • Publication number: 20210182609
    Abstract: Systems and methods for determining the gaze direction of a subject and projecting this gaze direction onto specific regions of an arbitrary three-dimensional geometry. In an exemplary embodiment, gaze direction may be determined by a regression-based machine learning model. The determined gaze direction is then projected onto a three-dimensional map or set of surfaces that may represent any desired object or system. Maps may represent any three-dimensional layout or geometry, whether actual or virtual. Gaze vectors can thus be used to determine the object of gaze within any environment. Systems can also readily and efficiently adapt for use in different environments by retrieving a different set of surfaces or regions for each environment.
    Type: Application
    Filed: August 28, 2020
    Publication date: June 17, 2021
    Inventors: Nuri Murat Arar, Hairong Jiang, Nishant Puri, Rajath Shetty, Niranjan Avadhanam
  • Publication number: 20210056306
    Abstract: Apparatuses, systems, and techniques are described to determine locations of objects using images including digital representations of those objects. In at least one embodiment, a gaze of one or more occupants of a vehicle is determined independently of a location of one or more sensors used to detect those occupants.
    Type: Application
    Filed: August 19, 2019
    Publication date: February 25, 2021
    Inventors: Feng Hu, Niranjan Avadhanam, Yuzhuo Ren, Sujay Yadawadkar, Sakthivel Sivaraman, Hairong Jiang, Siyue Wu