Patents by Inventor Nuri Murat ARAR

Nuri Murat ARAR has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11978266
    Abstract: In various examples, estimated field of view or gaze information of a user may be projected external to a vehicle and compared to vehicle perception information corresponding to an environment outside of the vehicle. As a result, interior monitoring of a driver or occupant of the vehicle may be used to determine whether the driver or occupant has processed or seen certain object types, environmental conditions, or other information exterior to the vehicle. For a more holistic understanding of the state of the user, attentiveness and/or cognitive load of the user may be monitored to determine whether one or more actions should be taken. Notifications, automatic emergency braking (AEB) system activations, and/or other actions may then be determined based on a more complete state of the user, determined from cognitive load, attentiveness, and/or a comparison between the vehicle's external perception and the user's estimated perception.
    Type: Grant
    Filed: October 21, 2020
    Date of Patent: May 7, 2024
    Assignee: NVIDIA Corporation
    Inventors: Nuri Murat Arar, Niranjan Avadhanam, Yuzhuo Ren
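
To make the gaze-versus-exterior-perception comparison in the abstract above concrete, here is a minimal sketch. The cone-shaped field-of-view test, the vehicle-frame coordinates, the 0.35 rad half-angle, and the `ExteriorObject` record are illustrative assumptions, not details taken from the patent.

```python
# Hypothetical sketch: flag exterior detections that fall outside the
# driver's estimated field of view (a cone around the gaze ray).
from dataclasses import dataclass
import numpy as np

@dataclass
class ExteriorObject:
    label: str
    position: np.ndarray  # object center in the vehicle frame, meters

def angle_between(u: np.ndarray, v: np.ndarray) -> float:
    """Angle in radians between two 3D vectors."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def unseen_objects(eye_pos, gaze_dir, objects, fov_half_angle=0.35):
    """Return detections the user has likely not seen."""
    return [obj for obj in objects
            if angle_between(gaze_dir, obj.position - eye_pos) > fov_half_angle]

# Interior monitoring supplies an eye position and gaze direction in the
# vehicle frame; exterior perception supplies detected objects.
eye = np.array([0.0, 0.4, 1.2])
gaze = np.array([1.0, 0.0, 0.0])          # looking straight ahead
detections = [ExteriorObject("pedestrian", np.array([12.0, 5.0, 1.0])),
              ExteriorObject("vehicle", np.array([30.0, 0.5, 1.0]))]
for obj in unseen_objects(eye, gaze, detections):
    print(f"Driver likely has not seen: {obj.label}")  # escalate warning / AEB
```

In a full system this check would be one input among several; the abstract also conditions any action on monitored attentiveness and cognitive load.
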
  • Publication number: 20240143072
    Abstract: In various examples, systems and methods are disclosed that provide highly accurate gaze predictions that are specific to a particular user by generating and applying, in deployment, personalized calibration functions to outputs and/or layers of a machine learning model. The calibration functions corresponding to a specific user may operate on outputs (e.g., gaze predictions from a machine learning model) to provide updated values and gaze predictions. The calibration functions may also be applied to one or more of the last layers of the machine learning model to operate on features identified by the model and provide values that are more accurate. The calibration functions may be generated using explicit calibration methods by instructing users to gaze at a number of identified ground truth locations within the interior of the vehicle. Once generated, the calibration functions may be modified or refined through implicit gaze calibration points and/or regions based on gaze saliency maps.
    Type: Application
    Filed: January 11, 2024
    Publication date: May 2, 2024
    Inventors: Nuri Murat Arar, Sujay Yadawadkar, Hairong Jiang, Nishant Puri, Niranjan Avadhanam
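
A rough sketch of a per-user calibration function operating on model outputs, as described in the abstract above: fit a correction from explicit calibration samples (base-model predictions paired with known ground-truth gaze points), then apply it to later predictions. The affine form and the least-squares fit are assumptions for illustration; the patent does not commit to this functional form.

```python
# Hypothetical sketch: personalize a generic gaze model with an affine
# correction learned from explicit calibration samples.
import numpy as np

def fit_calibration(pred: np.ndarray, truth: np.ndarray) -> np.ndarray:
    """Least-squares affine map f(x) = A x + b from predicted to true gaze."""
    X = np.hstack([pred, np.ones((len(pred), 1))])   # add bias column
    W, *_ = np.linalg.lstsq(X, truth, rcond=None)    # (3, 2) weights
    return W

def apply_calibration(W: np.ndarray, pred: np.ndarray) -> np.ndarray:
    """Apply the fitted per-user correction to new model outputs."""
    X = np.hstack([pred, np.ones((len(pred), 1))])
    return X @ W

# Explicit calibration: the user looks at known in-cabin locations while
# the base model predicts 2D gaze points.
raw = np.array([[0.1, 0.2], [0.5, 0.4], [0.9, 0.8], [0.3, 0.7]])
gt = np.array([[0.15, 0.25], [0.55, 0.45], [0.95, 0.85], [0.35, 0.75]])
W = fit_calibration(raw, gt)
print(apply_calibration(W, np.array([[0.5, 0.5]])))  # personalized prediction
```

The abstract's alternative of calibrating the model's last layers would follow the same idea but operate on internal features rather than on final predictions.
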
  • Patent number: 11934955
    Abstract: Systems and methods for more accurate and robust determination of subject characteristics from an image of the subject. One or more machine learning models receive as input an image of a subject, and output both facial landmarks and associated confidence values. Confidence values represent the degrees to which portions of the subject's face corresponding to those landmarks are occluded, i.e., the amount of uncertainty in each landmark's position. These landmark points and their associated confidence values, and/or associated information, may then be input to another set of one or more machine learning models which may output any facial analysis quantity or quantities, such as the subject's gaze direction, head pose, drowsiness state, cognitive load, or distraction state.
    Type: Grant
    Filed: October 31, 2022
    Date of Patent: March 19, 2024
    Assignee: NVIDIA Corporation
    Inventors: Nuri Murat Arar, Niranjan Avadhanam, Nishant Puri, Shagan Sah, Rajath Shetty, Sujay Yadawadkar, Pavlo Molchanov
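
A minimal sketch of the two-stage pipeline this abstract describes: one stage yields facial landmarks with per-landmark confidences, and a second stage consumes both, downweighting landmarks that are likely occluded. The random stand-in models and the confidence-weighted average below are illustrative placeholders for the actual networks.

```python
# Hypothetical sketch: landmarks + confidences feed a downstream
# facial-analysis stage that discounts occluded points.
import numpy as np

def landmark_stage(image: np.ndarray):
    """Stand-in for the landmark model: (N, 2) points and (N,) confidences,
    where low confidence marks landmarks that are likely occluded."""
    rng = np.random.default_rng(0)
    landmarks = rng.uniform(0, image.shape[0], size=(68, 2))
    confidence = rng.uniform(0.2, 1.0, size=68)
    return landmarks, confidence

def analysis_stage(landmarks: np.ndarray, confidence: np.ndarray):
    """Stand-in for a downstream analysis model: here, a confidence-weighted
    face center, so uncertain landmarks contribute less."""
    w = confidence / confidence.sum()
    return (landmarks * w[:, None]).sum(axis=0)

image = np.zeros((256, 256))
lm, conf = landmark_stage(image)
print("confidence-weighted face center:", analysis_stage(lm, conf))
```
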
  • Patent number: 11886634
    Abstract: In various examples, systems and methods are disclosed that provide highly accurate gaze predictions that are specific to a particular user by generating and applying, in deployment, personalized calibration functions to outputs and/or layers of a machine learning model. The calibration functions corresponding to a specific user may operate on outputs (e.g., gaze predictions from a machine learning model) to provide updated values and gaze predictions. The calibration functions may also be applied to one or more of the last layers of the machine learning model to operate on features identified by the model and provide values that are more accurate. The calibration functions may be generated using explicit calibration methods by instructing users to gaze at a number of identified ground truth locations within the interior of the vehicle. Once generated, the calibration functions may be modified or refined through implicit gaze calibration points and/or regions based on gaze saliency maps.
    Type: Grant
    Filed: March 19, 2021
    Date of Patent: January 30, 2024
    Assignee: NVIDIA Corporation
    Inventors: Nuri Murat Arar, Sujay Yadawadkar, Hairong Jiang, Nishant Puri, Niranjan Avadhanam
  • Patent number: 11841987
    Abstract: Machine learning systems and methods that learn glare, and thus determine gaze direction in a manner more resilient to the effects of glare on input images. The machine learning systems have an isolated representation of glare, e.g., information on the locations of glare points in an image, as an explicit input, in addition to the image itself. In this manner, the machine learning systems explicitly consider glare while making a determination of gaze direction, thus producing more accurate results for images containing glare.
    Type: Grant
    Filed: May 23, 2022
    Date of Patent: December 12, 2023
    Assignee: NVIDIA Corporation
    Inventors: Hairong Jiang, Nishant Puri, Niranjan Avadhanam, Nuri Murat Arar
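
The key move in the abstract above is giving the gaze model an isolated representation of glare as an explicit input alongside the image. A minimal sketch of that input construction follows; the saturation threshold and the extra-channel encoding are illustrative assumptions, not the patented network design.

```python
# Hypothetical sketch: derive glare-point locations and stack them with
# the image so the gaze model can condition on glare explicitly.
import numpy as np

def glare_mask(gray: np.ndarray, thresh: float = 0.95) -> np.ndarray:
    """Binary mask of near-saturated pixels, a crude proxy for glare points."""
    return (gray >= thresh).astype(np.float32)

def model_input(gray: np.ndarray) -> np.ndarray:
    """Stack the image with its glare mask; the network then sees glare
    as data rather than having to infer it implicitly."""
    return np.stack([gray, glare_mask(gray)], axis=0)  # (2, H, W)

frame = np.random.default_rng(1).uniform(0.0, 1.0, (64, 64)).astype(np.float32)
x = model_input(frame)
print(x.shape, "glare pixels:", int(x[1].sum()))
```
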
  • Publication number: 20230316458
    Abstract: In various examples, dynamic seam placement is used to position seams in regions of overlapping image data to avoid crossing salient objects or regions. Objects may be detected from image frames representing overlapping views of an environment surrounding an ego-object such as a vehicle. The images may be aligned to create an aligned composite image or surface (e.g., a panorama, a 360° image, a bowl-shaped surface) with regions of overlapping image data, and a representation of the detected objects and/or salient regions (e.g., a saliency mask) may be generated and projected onto the aligned composite image or surface. Seams may be positioned in the overlapping regions to avoid or minimize crossing salient pixels represented in the projected masks, and the image data may be blended at the seams to create a stitched image or surface (e.g., a stitched panorama, stitched 360° image, stitched textured surface).
    Type: Application
    Filed: February 23, 2023
    Publication date: October 5, 2023
    Inventors: Yuzhuo REN, Kenneth TURKOWSKI, Nuri Murat ARAR, Orazio GALLO, Jan KAUTZ, Niranjan AVADHANAM, Hang SU
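
A minimal sketch of seam placement in an overlap region, following the cost idea in the abstract above: choose the cut that crosses the least total saliency from the projected mask. The straight vertical cut and hard transition below are simplifications; a production stitcher would typically use a dynamic-programming seam and feathered blending.

```python
# Hypothetical sketch: place a stitching seam where it crosses the fewest
# salient pixels of two aligned, overlapping views.
import numpy as np

def best_seam_column(saliency: np.ndarray) -> int:
    """Column in the overlap whose cut crosses the least total saliency."""
    return int(np.argmin(saliency.sum(axis=0)))

def stitch(left: np.ndarray, right: np.ndarray, saliency: np.ndarray):
    """Compose two aligned overlapping views at the chosen seam column."""
    col = best_seam_column(saliency)
    out = left.copy()
    out[:, col:] = right[:, col:]   # hard cut; real systems blend at the seam
    return out

rng = np.random.default_rng(2)
left, right = rng.random((48, 96)), rng.random((48, 96))
saliency = np.zeros((48, 96))
saliency[:, 30:50] = 1.0            # a detected object inside the overlap
pano = stitch(left, right, saliency)
print("seam column:", best_seam_column(saliency), "stitched:", pano.shape)
```
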
  • Publication number: 20230319218
    Abstract: In various examples, a state machine is used to select between a default seam placement or dynamic seam placement that avoids salient regions, and to enable and disable dynamic seam placement based on speed of ego-motion, direction of ego-motion, proximity to salient objects, active viewport, driver gaze, and/or other factors. Images representing overlapping views of an environment may be aligned to create an aligned composite image or surface (e.g., a panorama, a 360° image, a bowl-shaped surface) with overlapping regions of image data, and a default or dynamic seam placement may be selected based on driving scenario (e.g., driving direction, speed, proximity to nearby objects). As such, seams may be positioned in the overlapping regions of image data, and the image data may be blended at the seams to create a stitched image or surface (e.g., a stitched panorama, stitched 360° image, stitched textured surface).
    Type: Application
    Filed: February 23, 2023
    Publication date: October 5, 2023
    Inventors: Yuzhuo REN, Nuri Murat ARAR, Orazio GALLO, Jan KAUTZ, Niranjan AVADHANAM, Hang SU
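
A minimal sketch of the seam-mode selection the abstract above describes. The two-state policy, the thresholds, and the specific inputs used are illustrative; the abstract lists additional factors (active viewport, driver gaze, ego-motion direction) that a real state machine would also consider.

```python
# Hypothetical sketch: enable dynamic, saliency-avoiding seams only in
# driving scenarios where they are likely to matter.
from enum import Enum

class SeamMode(Enum):
    DEFAULT = "default seam placement"
    DYNAMIC = "dynamic, saliency-avoiding seams"

def next_mode(speed_mps: float, nearest_object_m: float,
              reversing: bool) -> SeamMode:
    """Slow maneuvering near obstacles (e.g., parking or reversing)
    justifies the extra cost of dynamic seams."""
    if reversing or (speed_mps < 5.0 and nearest_object_m < 3.0):
        return SeamMode.DYNAMIC
    return SeamMode.DEFAULT   # e.g., highway cruising: cheap static seams

print(next_mode(speed_mps=2.0, nearest_object_m=1.5, reversing=False))
print(next_mode(speed_mps=25.0, nearest_object_m=40.0, reversing=False))
```
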
  • Publication number: 20230316635
    Abstract: In various examples, an environment surrounding an ego-object is visualized using an adaptive 3D bowl that models the environment with a shape that changes based on distance (and direction) to one or more representative point(s) on detected objects. Distance (and direction) to detected objects may be determined using 3D object detection or a top-down 2D or 3D occupancy grid, and used to adapt the shape of the adaptive 3D bowl in various ways (e.g., by sizing its ground plane to fit within the distance to the closest detected object, fitting a shape using an optimization algorithm). The adaptive 3D bowl may be enabled or disabled during each time slice (e.g., based on ego-speed), and the 3D bowl for each time slice may be used to render a visualization of the environment (e.g., a top-down projection image, a textured 3D bowl, and/or a rendered view thereof).
    Type: Application
    Filed: February 23, 2023
    Publication date: October 5, 2023
    Inventors: Hairong JIANG, Nuri Murat ARAR, Orazio GALLO, Jan KAUTZ, Ronan LETOQUIN
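
A minimal sketch of the adaptive-bowl idea in the abstract above: size the flat ground plane of the rendering bowl so it ends just inside the distance to the closest detected object, with the bowl wall rising beyond it. The quadratic wall profile and the margin value are illustrative choices, not the optimization-fitted shape the abstract mentions.

```python
# Hypothetical sketch: adapt a 3D bowl's ground radius to the nearest
# detected object so close obstacles render on the wall, not the floor.
import numpy as np

def bowl_height(r: np.ndarray, ground_radius: float, steepness: float = 0.5):
    """Bowl surface height at radial distance r from the ego vehicle:
    flat inside ground_radius, quadratic wall outside it."""
    wall = np.maximum(r - ground_radius, 0.0)
    return steepness * wall ** 2

def adapt_ground_radius(object_distances, margin: float = 0.5) -> float:
    """Fit the ground plane inside the distance to the closest object."""
    return max(min(object_distances) - margin, 1.0)

dists = [4.2, 7.9, 12.5]            # distances to detected objects, meters
g = adapt_ground_radius(dists)
r = np.linspace(0.0, 15.0, 7)
print("ground radius:", g, "heights:", np.round(bowl_height(r, g), 2))
```

Per the abstract, this adaptation could be recomputed (or disabled) each time slice, e.g. based on ego-speed, before rendering the visualization.
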
  • Publication number: 20230244941
    Abstract: Systems and methods for determining the gaze direction of a subject and projecting this gaze direction onto specific regions of an arbitrary three-dimensional geometry. In an exemplary embodiment, gaze direction may be determined by a regression-based machine learning model. The determined gaze direction is then projected onto a three-dimensional map or set of surfaces that may represent any desired object or system. Maps may represent any three-dimensional layout or geometry, whether actual or virtual. Gaze vectors can thus be used to determine the object of gaze within any environment. Systems can also readily and efficiently adapt for use in different environments by retrieving a different set of surfaces or regions for each environment.
    Type: Application
    Filed: April 10, 2023
    Publication date: August 3, 2023
    Inventors: Nuri Murat Arar, Hairong Jiang, Nishant Puri, Rajath Shetty, Niranjan Avadhanam
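
A minimal sketch of projecting a gaze vector onto a set of named 3D regions, as in the abstract above: intersect the gaze ray with each region and report the nearest hit. Representing regions as circular planar patches is an illustrative stand-in for the retrieved environment map of surfaces.

```python
# Hypothetical sketch: resolve a gaze ray to a named region of a 3D map
# via ray-plane intersection.
from dataclasses import dataclass
import numpy as np

@dataclass
class Region:
    name: str
    point: np.ndarray    # a point on the region's plane
    normal: np.ndarray   # unit plane normal
    radius: float        # patch extent around `point`

def gazed_region(origin, direction, regions):
    """Name of the nearest region whose patch the gaze ray intersects."""
    best, best_t = None, np.inf
    for reg in regions:
        denom = np.dot(reg.normal, direction)
        if abs(denom) < 1e-6:
            continue                      # ray parallel to this plane
        t = np.dot(reg.normal, reg.point - origin) / denom
        hit = origin + t * direction
        if 0 < t < best_t and np.linalg.norm(hit - reg.point) <= reg.radius:
            best, best_t = reg.name, t
    return best

# Swapping in a different region set adapts the same system to a new
# environment, as the abstract notes.
cabin = [Region("mirror", np.array([0.5, 0.6, 1.4]), np.array([-1.0, 0.0, 0.0]), 0.2),
         Region("cluster", np.array([0.7, 0.0, 1.0]), np.array([-1.0, 0.0, 0.0]), 0.3)]
print(gazed_region(np.array([0.0, 0.0, 1.1]), np.array([1.0, 0.0, -0.1]), cabin))
```
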
  • Patent number: 11704814
    Abstract: In various examples, an adaptive eye tracking machine learning model engine (“adaptive-model engine”) for an eye tracking system is described. The adaptive-model engine may include an eye tracking or gaze tracking development pipeline (“adaptive-model training pipeline”) that supports collecting data, training, optimizing, and deploying an adaptive eye tracking model that is a customized eye tracking model based on a set of features of an identified deployment environment. The adaptive-model engine supports ensembling the adaptive eye tracking model that may be trained on gaze vector estimation in surround environments and ensembled based on a plurality of eye tracking variant models and a plurality of facial landmark neural network metrics.
    Type: Grant
    Filed: May 13, 2021
    Date of Patent: July 18, 2023
    Assignee: NVIDIA Corporation
    Inventors: Nuri Murat Arar, Niranjan Avadhanam, Hairong Jiang, Nishant Puri, Rajath Shetty, Shagan Sah
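
A minimal sketch loosely following the ensembling step in the abstract above: fuse gaze estimates from several variant models, weighting each by a per-environment quality metric. The specific variants, scores, and weighted-average fusion below are illustrative assumptions about the pipeline, not its specified design.

```python
# Hypothetical sketch: metric-weighted ensemble of gaze vectors from
# several eye-tracking variant models.
import numpy as np

def ensemble_gaze(variant_preds: dict, variant_scores: dict) -> np.ndarray:
    """Fuse unit gaze vectors, weighting each variant by its score."""
    names = list(variant_preds)
    w = np.array([variant_scores[n] for n in names], dtype=float)
    w /= w.sum()
    stacked = np.stack([variant_preds[n] for n in names])
    fused = (stacked * w[:, None]).sum(axis=0)
    return fused / np.linalg.norm(fused)   # renormalize to a unit vector

# Scores might come from facial-landmark quality metrics measured on data
# from the identified deployment environment.
preds = {"rgb": np.array([0.10, -0.20, -0.97]),
         "ir": np.array([0.15, -0.10, -0.98])}
scores = {"rgb": 0.6, "ir": 0.9}
print(ensemble_gaze(preds, scores))
```
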
  • Patent number: 11657263
    Abstract: Systems and methods for determining the gaze direction of a subject and projecting this gaze direction onto specific regions of an arbitrary three-dimensional geometry. In an exemplary embodiment, gaze direction may be determined by a regression-based machine learning model. The determined gaze direction is then projected onto a three-dimensional map or set of surfaces that may represent any desired object or system. Maps may represent any three-dimensional layout or geometry, whether actual or virtual. Gaze vectors can thus be used to determine the object of gaze within any environment. Systems can also readily and efficiently adapt for use in different environments by retrieving a different set of surfaces or regions for each environment.
    Type: Grant
    Filed: August 28, 2020
    Date of Patent: May 23, 2023
    Assignee: NVIDIA Corporation
    Inventors: Nuri Murat Arar, Hairong Jiang, Nishant Puri, Rajath Shetty, Niranjan Avadhanam
  • Publication number: 20230078171
    Abstract: Systems and methods for more accurate and robust determination of subject characteristics from an image of the subject. One or more machine learning models receive as input an image of a subject, and output both facial landmarks and associated confidence values. Confidence values represent the degrees to which portions of the subject's face corresponding to those landmarks are occluded, i.e., the amount of uncertainty in each landmark's position. These landmark points and their associated confidence values, and/or associated information, may then be input to another set of one or more machine learning models which may output any facial analysis quantity or quantities, such as the subject's gaze direction, head pose, drowsiness state, cognitive load, or distraction state.
    Type: Application
    Filed: October 31, 2022
    Publication date: March 16, 2023
    Inventors: Nuri Murat Arar, Niranjan Avadhanam, Nishant Puri, Shagan Sah, Rajath Shetty, Sujay Yadawadkar, Pavlo Molchanov
  • Publication number: 20220366568
    Abstract: In various examples, an adaptive eye tracking machine learning model engine (“adaptive-model engine”) for an eye tracking system is described. The adaptive-model engine may include an eye tracking or gaze tracking development pipeline (“adaptive-model training pipeline”) that supports collecting data, training, optimizing, and deploying an adaptive eye tracking model that is a customized eye tracking model based on a set of features of an identified deployment environment. The adaptive-model engine supports ensembling the adaptive eye tracking model that may be trained on gaze vector estimation in surround environments and ensembled based on a plurality of eye tracking variant models and a plurality of facial landmark neural network metrics.
    Type: Application
    Filed: May 13, 2021
    Publication date: November 17, 2022
    Inventors: Nuri Murat Arar, Niranjan Avadhanam, Hairong Jiang, Nishant Puri, Rajath Shetty, Shagan Sah
  • Patent number: 11487968
    Abstract: Systems and methods for more accurate and robust determination of subject characteristics from an image of the subject. One or more machine learning models receive as input an image of a subject, and output both facial landmarks and associated confidence values. Confidence values represent the degrees to which portions of the subject's face corresponding to those landmarks are occluded, i.e., the amount of uncertainty in each landmark's position. These landmark points and their associated confidence values, and/or associated information, may then be input to another set of one or more machine learning models which may output any facial analysis quantity or quantities, such as the subject's gaze direction, head pose, drowsiness state, cognitive load, or distraction state.
    Type: Grant
    Filed: August 27, 2020
    Date of Patent: November 1, 2022
    Assignee: NVIDIA Corporation
    Inventors: Nuri Murat Arar, Niranjan Avadhanam, Nishant Puri, Shagan Sah, Rajath Shetty, Sujay Yadawadkar, Pavlo Molchanov
  • Publication number: 20220300072
    Abstract: In various examples, systems and methods are disclosed that provide highly accurate gaze predictions that are specific to a particular user by generating and applying, in deployment, personalized calibration functions to outputs and/or layers of a machine learning model. The calibration functions corresponding to a specific user may operate on outputs (e.g., gaze predictions from a machine learning model) to provide updated values and gaze predictions. The calibration functions may also be applied to one or more of the last layers of the machine learning model to operate on features identified by the model and provide values that are more accurate. The calibration functions may be generated using explicit calibration methods by instructing users to gaze at a number of identified ground truth locations within the interior of the vehicle. Once generated, the calibration functions may be modified or refined through implicit gaze calibration points and/or regions based on gaze saliency maps.
    Type: Application
    Filed: March 19, 2021
    Publication date: September 22, 2022
    Inventors: Nuri Murat Arar, Sujay Yadawadkar, Hairong Jiang, Nishant Puri, Niranjan Avadhanam
  • Publication number: 20220283638
    Abstract: Machine learning systems and methods that learn glare, and thus determine gaze direction in a manner more resilient to the effects of glare on input images. The machine learning systems have an isolated representation of glare, e.g., information on the locations of glare points in an image, as an explicit input, in addition to the image itself. In this manner, the machine learning systems explicitly consider glare while making a determination of gaze direction, thus producing more accurate results for images containing glare.
    Type: Application
    Filed: May 23, 2022
    Publication date: September 8, 2022
    Inventors: Hairong Jiang, Nishant Puri, Niranjan Avadhanam, Nuri Murat Arar
  • Patent number: 11340701
    Abstract: Machine learning systems and methods that learn glare, and thus determine gaze direction in a manner more resilient to the effects of glare on input images. The machine learning systems have an isolated representation of glare, e.g., information on the locations of glare points in an image, as an explicit input, in addition to the image itself. In this manner, the machine learning systems explicitly consider glare while making a determination of gaze direction, thus producing more accurate results for images containing glare.
    Type: Grant
    Filed: June 16, 2020
    Date of Patent: May 24, 2022
    Assignee: NVIDIA Corporation
    Inventors: Hairong Jiang, Nishant Puri, Niranjan Avadhanam, Nuri Murat Arar
  • Publication number: 20220121867
    Abstract: In various examples, estimated field of view or gaze information of a user may be projected external to a vehicle and compared to vehicle perception information corresponding to an environment outside of the vehicle. As a result, interior monitoring of a driver or occupant of the vehicle may be used to determine whether the driver or occupant has processed or seen certain object types, environmental conditions, or other information exterior to the vehicle. For a more holistic understanding of the state of the user, attentiveness and/or cognitive load of the user may be monitored to determine whether one or more actions should be taken. Notifications, automatic emergency braking (AEB) system activations, and/or other actions may then be determined based on a more complete state of the user, determined from cognitive load, attentiveness, and/or a comparison between the vehicle's external perception and the user's estimated perception.
    Type: Application
    Filed: October 21, 2020
    Publication date: April 21, 2022
    Inventors: Nuri Murat Arar, Niranjan Avadhanam, Yuzhuo Ren
  • Publication number: 20210181837
    Abstract: Machine learning systems and methods that learn glare, and thus determine gaze direction in a manner more resilient to the effects of glare on input images. The machine learning systems have an isolated representation of glare, e.g., information on the locations of glare points in an image, as an explicit input, in addition to the image itself. In this manner, the machine learning systems explicitly consider glare while making a determination of gaze direction, thus producing more accurate results for images containing glare.
    Type: Application
    Filed: June 16, 2020
    Publication date: June 17, 2021
    Inventors: Hairong Jiang, Nishant Puri, Niranjan Avadhanam, Nuri Murat Arar
  • Publication number: 20210182609
    Abstract: Systems and methods for determining the gaze direction of a subject and projecting this gaze direction onto specific regions of an arbitrary three-dimensional geometry. In an exemplary embodiment, gaze direction may be determined by a regression-based machine learning model. The determined gaze direction is then projected onto a three-dimensional map or set of surfaces that may represent any desired object or system. Maps may represent any three-dimensional layout or geometry, whether actual or virtual. Gaze vectors can thus be used to determine the object of gaze within any environment. Systems can also readily and efficiently adapt for use in different environments by retrieving a different set of surfaces or regions for each environment.
    Type: Application
    Filed: August 28, 2020
    Publication date: June 17, 2021
    Inventors: Nuri Murat Arar, Hairong Jiang, Nishant Puri, Rajath Shetty, Niranjan Avadhanam