Patents by Inventor Niranjan Avadhanam

Niranjan Avadhanam has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250117981
    Abstract: In various examples, infrared image data (e.g., frames of an infrared (IR) video feed) may be colorized by transferring color statistics from an RGB image with an overlapping field of view, by modifying one or more dimensions of an encoded representation of a generated RGB image, and/or otherwise. For example, segmentation may be applied to the IR and RGB image data, and the one or more colors or statistics may be transferred from a segmented region of the RGB image data to a corresponding segmented region of the IR image data. In some embodiments, synthesized RGB image data may be fine-tuned by transferring color or color statistic(s) from corresponding real RGB image data, and/or by modifying one or more dimensions of an encoded representation of the synthesized RGB image data.
    Type: Application
    Filed: October 10, 2023
    Publication date: April 10, 2025
    Inventors: Yuzhuo REN, Niranjan AVADHANAM
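The region-wise color-statistic transfer described in this abstract can be sketched as per-channel mean/variance matching (Reinhard-style) between corresponding segmented regions; the function name and mask inputs below are illustrative, not from the patent:

```python
import numpy as np

def transfer_color_stats(ir_rgb, ref_rgb, ir_mask, ref_mask):
    """Transfer per-channel mean/std from a segmented region of a reference
    RGB image to the corresponding region of a colorized IR frame.

    ir_rgb, ref_rgb: float arrays of shape (H, W, 3), values in [0, 1]
    ir_mask, ref_mask: boolean masks selecting the corresponding regions
    """
    out = ir_rgb.copy()
    src = ir_rgb[ir_mask]                               # pixels to recolor, (N, 3)
    ref = ref_rgb[ref_mask]                             # reference pixels, (M, 3)
    src_mean, src_std = src.mean(axis=0), src.std(axis=0) + 1e-6
    ref_mean, ref_std = ref.mean(axis=0), ref.std(axis=0)
    # Match first- and second-order statistics channel by channel.
    out[ir_mask] = (src - src_mean) / src_std * ref_std + ref_mean
    return np.clip(out, 0.0, 1.0)
```

After the transfer, the recolored region's per-channel mean matches the reference region's mean, which is the core of statistic-based colorization.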
  • Patent number: 12236351
    Abstract: Apparatuses, systems, and techniques are described to determine locations of objects using images including digital representations of those objects. In at least one embodiment, a gaze of one or more occupants of a vehicle is determined independently of a location of one or more sensors used to detect those occupants.
    Type: Grant
    Filed: October 30, 2023
    Date of Patent: February 25, 2025
    Assignee: Nvidia Corporation
    Inventors: Feng Hu, Niranjan Avadhanam, Yuzhuo Ren, Sujay Yadawadkar, Sakthivel Sivaraman, Hairong Jiang, Siyue Wu
  • Patent number: 12230040
    Abstract: State information can be determined for a subject that is robust to different inputs or conditions. For drowsiness, facial landmarks can be determined from captured image data and used to determine a set of blink parameters. These parameters can be used, such as with a temporal network, to estimate a state (e.g., drowsiness) of the subject. To improve robustness, an eye state determination network can determine eye state from the image data, without reliance on intermediate landmarks, that can be used, such as with another temporal network, to estimate the state of the subject. A weighted combination of these values can be used to determine an overall state of the subject. To improve accuracy, individual behavior patterns and context information can be utilized to account for variations in the data due to subject variation or current context rather than changes in state.
    Type: Grant
    Filed: November 21, 2023
    Date of Patent: February 18, 2025
    Assignee: Nvidia Corporation
    Inventors: Yuzhuo Ren, Niranjan Avadhanam
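The weighted combination of the two branch estimates (the blink-parameter branch and the landmark-free eye-state branch) can be sketched as a confidence-weighted average; the parameter names are illustrative:

```python
def fuse_drowsiness(blink_score, eye_state_score,
                    blink_conf=0.5, eye_conf=0.5):
    """Weighted combination of two drowsiness estimates in [0, 1].

    blink_score: estimate from the blink-parameter (landmark) branch
    eye_state_score: estimate from the landmark-free eye-state branch
    Weights are normalized so a degraded branch (low confidence, e.g.
    occluded landmarks) contributes less to the fused state.
    """
    total = blink_conf + eye_conf
    if total == 0:
        raise ValueError("at least one branch must have nonzero confidence")
    return (blink_conf * blink_score + eye_conf * eye_state_score) / total
```

Setting one confidence to zero falls back entirely to the other branch, which is how such a fusion stays robust when one input is unreliable.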
  • Publication number: 20250050831
    Abstract: In various examples, systems and methods are disclosed that accurately identify driver and passenger in-cabin activities that may indicate a biomechanical distraction that prevents a driver from being fully engaged in driving a vehicle. In particular, image data representative of an image of an occupant of a vehicle may be applied to one or more deep neural networks (DNNs). Using the DNNs, data indicative of key point locations corresponding to the occupant may be computed, a shape and/or a volume corresponding to the occupant may be reconstructed, a position and size of the occupant may be estimated, hand gesture activities may be classified, and/or body postures or poses may be classified. These determinations may be used to determine operations or settings for the vehicle to increase not only the safety of the occupants, but also of surrounding motorists, bicyclists, and pedestrians.
    Type: Application
    Filed: October 30, 2024
    Publication date: February 13, 2025
    Inventors: Atousa Torabi, Sakthivel Sivaraman, Niranjan Avadhanam, Shagan Sah
  • Publication number: 20250042413
    Abstract: State information can be determined for a subject that is robust to different inputs or conditions. For drowsiness, facial landmarks can be determined from captured image data and used to determine a set of blink parameters. These parameters can be used, such as with a temporal network, to estimate a state (e.g., drowsiness) of the subject. To improve robustness, an eye state determination network can determine eye state from the image data, without reliance on intermediate landmarks, that can be used, such as with another temporal network, to estimate the state of the subject. A weighted combination of these values can be used to determine an overall state of the subject. To improve accuracy, individual behavior patterns and context information can be utilized to account for variations in the data due to subject variation or current context rather than changes in state.
    Type: Application
    Filed: October 21, 2024
    Publication date: February 6, 2025
    Inventors: Yuzhuo Ren, Niranjan Avadhanam
  • Patent number: 12208732
    Abstract: Systems and methods for a self-adjusting vehicle mirror. The mirror automatically locates the face of the driver or another passenger, and orients the mirror to provide the driver/passenger face with a desired view from the mirror. The mirror may continue to reorient itself as the driver or passenger shifts position, to continuously provide a desired field of view even as he or she changes position over time. In certain embodiments, the mirror system of the disclosure can be a self-contained system, with the mirror, mirror actuator, camera, and computing device all contained within the mirror housing as a single integrated unit.
    Type: Grant
    Filed: January 27, 2020
    Date of Patent: January 28, 2025
    Assignee: NVIDIA Corporation
    Inventors: Feng Hu, Niranjan Avadhanam, Ratin Kumar, Simon John Baker
  • Patent number: 12211308
    Abstract: Interactions with virtual systems may be difficult when users inadvertently fail to provide sufficient information to proceed with their requests. Certain types of inputs, such as auditory inputs, may lack sufficient information to properly provide a response to the user. Additional information, such as image data, may enable user gestures or poses to supplement the auditory inputs to enable response generation without requesting additional information from users.
    Type: Grant
    Filed: August 31, 2021
    Date of Patent: January 28, 2025
    Assignee: Nvidia Corporation
    Inventors: Sakthivel Sivaraman, Nishant Puri, Yuzhuo Ren, Atousa Torabi, Shubhadeep Das, Niranjan Avadhanam, Sumit Kumar Bhattacharya, Jason Roche
  • Publication number: 20250022218
    Abstract: In various examples, updates to a dynamic seam placement and/or fitted 3D bowl may be at least partially concealed using temporal masking. A future time in which a predicted change in dynamic seam placement and/or fitted 3D bowl exceeds some threshold may be determined. A predicted dynamic seam placement and/or fitted 3D bowl update may be temporally masked by triggering the update before arriving at the future time to compensate for the latency of the temporal filtering and/or by adjusting the temporal filter size (e.g., shortening a temporal window over which temporal filtering is applied) in anticipation of the predicted dynamic seam placement and/or fitted 3D bowl update, effectively maintaining some of the smoothing effects of temporal filtering, while reducing the latency.
    Type: Application
    Filed: July 17, 2023
    Publication date: January 16, 2025
    Inventors: Nuri Murat ARAR, Niranjan AVADHANAM, Yuzhuo REN, Hairong JIANG
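The idea of shortening the temporal window in anticipation of a predicted seam update can be sketched with a simple moving-average filter; the class and its parameters are hypothetical, not from the patent:

```python
from collections import deque

class MaskedTemporalFilter:
    """Moving-average filter over per-frame seam positions whose window
    shrinks when a seam-placement update is predicted soon, trading some
    smoothing for lower latency."""

    def __init__(self, max_window=8, min_window=2):
        self.max_window = max_window
        self.min_window = min_window
        self.history = deque(maxlen=max_window)

    def update(self, seam_pos, frames_until_update=None):
        self.history.append(seam_pos)
        window = self.max_window
        if frames_until_update is not None and frames_until_update < self.max_window:
            # Predicted update is near: shorten the temporal window so the
            # filtered seam can move before the update actually lands.
            window = max(self.min_window, frames_until_update)
        recent = list(self.history)[-window:]
        return sum(recent) / len(recent)
```

With a long window the filtered seam lags a sudden jump; announcing the predicted update shortens the window, so the output tracks the new placement with less latency while keeping some smoothing.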
  • Publication number: 20250022223
    Abstract: In various examples, a visualization of an environment may be generated using a Panini projection that is optimized based on detected scene content. For example, image data of an environment may be perspective projected (e.g., using a rectilinear projection) to generate a reference projection image, which may be analyzed to detect the presence of vanishing points and/or horizontal lines (e.g., in a central region). The image data of the environment may be projected using a Panini projection that is optimized based on distances to detected objects, the absence of a detected vanishing point, and/or the presence of a detected horizontal line to generate a Panini projection image. In some embodiments, vertical compression is applied to the Panini projection image to correct for distortion of horizontal lines (e.g., based on the presence of a detected horizontal line).
    Type: Application
    Filed: July 12, 2023
    Publication date: January 16, 2025
    Inventors: Yuzhuo REN, Niranjan AVADHANAM
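One common formulation of the Panini (Pannini) projection maps a view direction to image coordinates with a single distance parameter d that blends between projections; this sketch uses that published general form, not necessarily the exact optimization the patent describes:

```python
import math

def panini_project(azimuth, elevation, d=1.0):
    """Map a view direction (azimuth, elevation, in radians) to Panini
    image coordinates. d blends projections: d=0 reduces to a rectilinear
    (perspective) projection, d=1 is the classic Panini."""
    s = (d + 1.0) / (d + math.cos(azimuth))  # per-ray scale factor
    x = s * math.sin(azimuth)
    y = s * math.tan(elevation)
    return x, y
```

Tuning d based on scene content (detected vanishing points, object distances) selects how much the projection straightens radial lines versus preserving object shape; the vertical compression the abstract mentions would be a subsequent scaling of y to reduce horizontal-line bending.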
  • Publication number: 20250022224
    Abstract: In various examples, updates to a dynamic seam placement and/or fitted 3D bowl may be at least partially concealed using spatial masking. A future time in which a predicted change in dynamic seam placement and/or fitted 3D bowl exceeds some threshold may be determined, and a predicted dynamic seam movement and/or fitted 3D bowl update may be spatially masked by triggering a viewport switch to coincide with (a) the predicted dynamic seam placement and/or fitted 3D bowl update and/or (b) a relaxation or disabling of temporal filtering. Additionally or alternatively to predicting that a future change will exceed a threshold, the determination of the change may occur based on a change between a current and previous frame. In some embodiments that employ viewport switching to spatially mask visualization updates, the switch may be to one of a plurality of candidate viewports for an applicable scene maintained in a scene catalog.
    Type: Application
    Filed: July 14, 2023
    Publication date: January 16, 2025
    Inventors: Nuri Murat ARAR, Niranjan AVADHANAM, Yuzhuo REN, Hairong JIANG
  • Patent number: 12198450
    Abstract: In various examples, systems and methods are disclosed herein for a vehicle command operation system that may use technology across multiple modalities to cause vehicular operations to be performed in response to determining a focal point based on a gaze of an occupant. The system may utilize sensors to receive first data indicative of an eye gaze of an occupant of the vehicle. The system may utilize sensors to receive second data indicative of other data from the occupant. The system may then calculate a gaze vector based on the data indicative of the eye gaze of the occupant. The system may determine a focal point based on the gaze vector. In response to determining the focal point, the system causes an operation to be performed in the vehicle based on the second data.
    Type: Grant
    Filed: October 5, 2023
    Date of Patent: January 14, 2025
    Assignee: NVIDIA Corporation
    Inventors: Jason Conrad Roche, Niranjan Avadhanam
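Determining a focal point from a gaze vector can be sketched as a ray-versus-bounding-sphere test against candidate in-cabin objects; the object representation here is an illustrative simplification:

```python
import math

def find_focal_point(eye_origin, gaze_dir, objects):
    """Return (name, hit_distance) for the object whose bounding sphere the
    gaze ray hits first, or None. objects: list of (name, center, radius).
    All vectors are 3-tuples; gaze_dir need not be normalized."""
    norm = math.sqrt(sum(c * c for c in gaze_dir))
    d = tuple(c / norm for c in gaze_dir)
    best = None
    for name, center, radius in objects:
        oc = tuple(c - o for c, o in zip(center, eye_origin))
        t = sum(a * b for a, b in zip(oc, d))   # projection onto the ray
        if t < 0:
            continue                            # object is behind the viewer
        # Squared distance from the sphere center to the gaze ray.
        dist2 = sum((o + t * a - c) ** 2
                    for o, a, c in zip(eye_origin, d, center))
        if dist2 <= radius ** 2 and (best is None or t < best[1]):
            best = (name, t)
    return best
```

Once the focal point resolves to a named object (e.g., a control surface), the second modality, such as a voice command, can be interpreted relative to that object.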
  • Patent number: 12162418
    Abstract: In various examples, systems and methods are disclosed that accurately identify driver and passenger in-cabin activities that may indicate a biomechanical distraction that prevents a driver from being fully engaged in driving a vehicle. In particular, image data representative of an image of an occupant of a vehicle may be applied to one or more deep neural networks (DNNs). Using the DNNs, data indicative of key point locations corresponding to the occupant may be computed, a shape and/or a volume corresponding to the occupant may be reconstructed, a position and size of the occupant may be estimated, hand gesture activities may be classified, and/or body postures or poses may be classified. These determinations may be used to determine operations or settings for the vehicle to increase not only the safety of the occupants, but also of surrounding motorists, bicyclists, and pedestrians.
    Type: Grant
    Filed: October 5, 2023
    Date of Patent: December 10, 2024
    Assignee: NVIDIA Corporation
    Inventors: Atousa Torabi, Sakthivel Sivaraman, Niranjan Avadhanam, Shagan Sah
  • Publication number: 20240404296
    Abstract: In various examples, low power proximity based threat detection using optical flow for vehicle systems and applications are provided. Some embodiments may use a tiered framework that applies sensor fusion techniques to detect and track the movement of a threat candidate, and perform a threat classification and/or intent prediction as the threat candidate approaches. Relative depth indications from optical flow, computed using data from image sensors, can be used to initially segment and track a moving object over a sequence of image frames. Additional sensors and processing may be brought online when a moving object becomes close enough to be considered a higher risk threat candidate. A threat response system may generate a risk score based on a predicted intent of a threat candidate, and when the risk score exceeds a certain threshold, then the threat response system may respond accordingly based on the threat classification and/or risk score.
    Type: Application
    Filed: June 1, 2023
    Publication date: December 5, 2024
    Inventors: Shagan Sah, Niranjan Avadhanam, Rajath Shetty, Ratin Kumar, Yile Chen
  • Publication number: 20240371136
    Abstract: In various examples, the present disclosure relates to using temporal filters for automated real-time classification. The technology described herein improves the performance of a multiclass classifier that may be used to classify a temporal sequence of input signals, such as input signals representative of video frames. A performance improvement may be achieved, at least in part, by applying a temporal filter to an output of the multiclass classifier. For example, the temporal filter may leverage classifications associated with preceding input signals to improve the final classification given to a subsequent signal. In some embodiments, the temporal filter may also use data from a confusion matrix to correct for the probable occurrence of certain types of classification errors. The temporal filter may be a linear filter, a nonlinear filter, an adaptive filter, and/or a statistical filter.
    Type: Application
    Filed: July 12, 2024
    Publication date: November 7, 2024
    Inventors: Sakthivel Sivaraman, Shagan Sah, Niranjan Avadhanam
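The combination of temporal filtering and confusion-matrix correction described in this abstract can be sketched as an exponential moving average over per-frame class probabilities followed by a reweighting using P(true class | predicted class); the function and its inputs are illustrative:

```python
import numpy as np

def smooth_and_correct(prob_seq, confusion, alpha=0.6):
    """Temporally filter per-frame class probabilities with an exponential
    moving average, then correct each smoothed distribution using a
    confusion matrix: confusion[i, j] counts frames of true class i
    predicted as class j, so its columns give P(true | predicted).

    prob_seq: array of shape (T, C); returns a filtered/corrected (T, C).
    """
    # Column-normalize: P(true = i | predicted = j).
    p_true_given_pred = confusion / confusion.sum(axis=0, keepdims=True)
    out = np.empty_like(prob_seq, dtype=float)
    state = prob_seq[0].astype(float)
    for t, probs in enumerate(prob_seq):
        state = alpha * probs + (1.0 - alpha) * state   # EMA smoothing
        corrected = p_true_given_pred @ state           # shift mass to likely true class
        out[t] = corrected / corrected.sum()
    return out
```

With an identity confusion matrix the correction is a no-op; a classifier that systematically confuses two classes produces off-diagonal mass that redistributes probability toward the classes actually responsible for those predictions.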
  • Patent number: 12122392
    Abstract: State information can be determined for a subject that is robust to different inputs or conditions. For drowsiness, facial landmarks can be determined from captured image data and used to determine a set of blink parameters. These parameters can be used, such as with a temporal network, to estimate a state (e.g., drowsiness) of the subject. To improve robustness, an eye state determination network can determine eye state from the image data, without reliance on intermediate landmarks, that can be used, such as with another temporal network, to estimate the state of the subject. A weighted combination of these values can be used to determine an overall state of the subject. To improve accuracy, individual behavior patterns and context information can be utilized to account for variations in the data due to subject variation or current context rather than changes in state.
    Type: Grant
    Filed: August 24, 2021
    Date of Patent: October 22, 2024
    Assignee: Nvidia Corporation
    Inventors: Yuzhuo Ren, Niranjan Avadhanam
  • Patent number: 12073604
    Abstract: In various examples, the present disclosure relates to using temporal filters for automated real-time classification. The technology described herein improves the performance of a multiclass classifier that may be used to classify a temporal sequence of input signals—such as input signals representative of video frames. A performance improvement may be achieved, at least in part, by applying a temporal filter to an output of the multiclass classifier. For example, the temporal filter may leverage classifications associated with preceding input signals to improve the final classification given to a subsequent signal. In some embodiments, the temporal filter may also use data from a confusion matrix to correct for the probable occurrence of certain types of classification errors. The temporal filter may be a linear filter, a nonlinear filter, an adaptive filter, and/or a statistical filter.
    Type: Grant
    Filed: June 12, 2023
    Date of Patent: August 27, 2024
    Assignee: NVIDIA Corporation
    Inventors: Sakthivel Sivaraman, Shagan Sah, Niranjan Avadhanam
  • Publication number: 20240265254
    Abstract: Systems and methods for more accurate and robust determination of subject characteristics from an image of the subject. One or more machine learning models receive as input an image of a subject, and output both facial landmarks and associated confidence values. Confidence values represent the degrees to which portions of the subject's face corresponding to those landmarks are occluded, i.e., the amount of uncertainty in the position of each landmark location. These landmark points and their associated confidence values, and/or associated information, may then be input to another set of one or more machine learning models which may output any facial analysis quantity or quantities, such as the subject's gaze direction, head pose, drowsiness state, cognitive load, or distraction state.
    Type: Application
    Filed: March 14, 2024
    Publication date: August 8, 2024
    Inventors: Nuri Murat Arar, Niranjan Avadhanam, Nishant Puri, Shagan Sah, Rajath Shetty, Sujay Yadawadkar, Pavlo Molchanov
  • Publication number: 20240257539
    Abstract: In various examples, estimated field of view or gaze information of a user may be projected external to a vehicle and compared to vehicle perception information corresponding to an environment outside of the vehicle. As a result, interior monitoring of a driver or occupant of the vehicle may be used to determine whether the driver or occupant has processed or seen certain object types, environmental conditions, or other information exterior to the vehicle. For a more holistic understanding of the state of the user, attentiveness and/or cognitive load of the user may be monitored to determine whether one or more actions should be taken. As a result, notifications, AEB system activations, and/or other actions may be determined based on a more complete state of the user as determined based on cognitive load, attentiveness, and/or a comparison between external perception of the vehicle and estimated perception of the user.
    Type: Application
    Filed: March 20, 2024
    Publication date: August 1, 2024
    Inventors: Nuri Murat Arar, Niranjan Avadhanam, Yuzhuo Ren
  • Patent number: 12005855
    Abstract: Systems and methods for machine learning based seatbelt position detection and classification. A number of fiducial markers are placed on a vehicle seatbelt. A camera or other sensor is placed within the vehicle, to capture images or other data relating positions of the fiducial markers when the seatbelt is in use. One or more models such as machine learning models may then determine the spatial positions of the fiducial markers from the captured image information, and determine the worn state of the seatbelt. In particular, the system may determine whether the seatbelt is being worn in one or more improper states, such as not being worn or being worn in an unsafe or dangerous manner, and if so, the system may alert the vehicle to take corrective action. In this manner, the system provides constant and real-time monitoring of seatbelts to improve seatbelt usage and safety.
    Type: Grant
    Filed: June 18, 2020
    Date of Patent: June 11, 2024
    Assignee: NVIDIA Corporation
    Inventors: Feng Hu, Niranjan Avadhanam
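A rule-based stand-in for the learned seatbelt-state classifier can illustrate how detected fiducial-marker positions map to a worn state; the thresholds and geometry below are assumed for illustration, where the patented system learns this mapping from data:

```python
def classify_seatbelt(markers, image_height):
    """Classify seatbelt state from detected fiducial marker positions.

    markers: list of (x, y) detected marker positions, in pixels.
    Returns "not_worn", "worn_properly", or "worn_improperly" based on
    whether markers are visible and span the torso diagonally.
    """
    if len(markers) < 3:
        return "not_worn"            # belt (and its markers) not visible
    xs = [m[0] for m in markers]
    ys = [m[1] for m in markers]
    x_span = max(xs) - min(xs)
    y_span = max(ys) - min(ys)
    # A properly worn shoulder belt crosses the torso, so markers spread
    # both horizontally and vertically. A belt worn under the arm or
    # behind the back leaves markers clustered with little vertical spread.
    if y_span > 0.25 * image_height and x_span > 0:
        return "worn_properly"
    return "worn_improperly"
```

An improper classification would then trigger the corrective action the abstract describes, such as alerting the vehicle.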
  • Patent number: 11978266
    Abstract: In various examples, estimated field of view or gaze information of a user may be projected external to a vehicle and compared to vehicle perception information corresponding to an environment outside of the vehicle. As a result, interior monitoring of a driver or occupant of the vehicle may be used to determine whether the driver or occupant has processed or seen certain object types, environmental conditions, or other information exterior to the vehicle. For a more holistic understanding of the state of the user, attentiveness and/or cognitive load of the user may be monitored to determine whether one or more actions should be taken. As a result, notifications, AEB system activations, and/or other actions may be determined based on a more complete state of the user as determined based on cognitive load, attentiveness, and/or a comparison between external perception of the vehicle and estimated perception of the user.
    Type: Grant
    Filed: October 21, 2020
    Date of Patent: May 7, 2024
    Assignee: NVIDIA Corporation
    Inventors: Nuri Murat Arar, Niranjan Avadhanam, Yuzhuo Ren
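Comparing the user's projected gaze with external perception, as in the two entries above, can be sketched as a cone test: an externally perceived object counts as "seen" if it lies within an estimated field-of-view cone around the gaze direction. The shared coordinate frame and cone width are assumptions of this sketch:

```python
import math

def object_seen(gaze_origin, gaze_dir, obj_pos, fov_deg=30.0):
    """Return True if an externally perceived object lies within the
    user's estimated field-of-view cone projected outside the vehicle.
    All positions are 3-tuples in a shared vehicle coordinate frame."""
    to_obj = tuple(o - g for o, g in zip(obj_pos, gaze_origin))
    dot = sum(a * b for a, b in zip(gaze_dir, to_obj))
    n1 = math.sqrt(sum(a * a for a in gaze_dir))
    n2 = math.sqrt(sum(a * a for a in to_obj))
    if n2 == 0:
        return True                  # object at the gaze origin itself
    cos_angle = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_angle)) <= fov_deg / 2.0
```

Objects the vehicle perceives but the user has not seen under this test could then feed notification or AEB decisions, weighted by the monitored attentiveness and cognitive load.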