Patents by Inventor Abhishek Bajpayee

Abhishek Bajpayee has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230214654
    Abstract: In various examples, one or more deep neural networks (DNNs) are executed to regress on control points of a curve, and the control points may be used to perform a curve fitting operation—e.g., Bezier curve fitting—to identify landmark locations and geometries in an environment. The outputs of the DNN(s) may thus indicate the two-dimensional (2D) image-space and/or three-dimensional (3D) world-space control point locations, and post-processing techniques—such as clustering and temporal smoothing—may be executed to determine landmark locations and poses with precision and in real time. As a result, reconstructed curves corresponding to the landmarks—e.g., lane line, road boundary line, crosswalk, pole, text, etc.—may be used by a vehicle to perform one or more operations for navigating an environment. (An illustrative sketch of the curve-reconstruction step follows this entry.)
    Type: Application
    Filed: February 27, 2023
    Publication date: July 6, 2023
    Inventors: Minwoo Park, Yilin Yang, Xiaolin Lin, Abhishek Bajpayee, Hae-Jong Seo, Eric Jonathan Yuan, Xudong Chen
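A minimal sketch of the curve-reconstruction step described in the abstract above: given control points regressed by a DNN, the landmark geometry can be recovered by evaluating the Bezier curve they define. The control-point values, curve degree, and sampling density below are illustrative assumptions, not details taken from the patent.

```python
# Sketch: rebuilding a landmark polyline from DNN-regressed Bezier control points.
# The DNN itself is not shown; its output is assumed to be an (N, D) array of
# control points (D=2 for image space, D=3 for world space).
import numpy as np
from math import comb


def bezier_curve(control_points: np.ndarray, num_samples: int = 50) -> np.ndarray:
    """Evaluate a Bezier curve of arbitrary degree from its control points."""
    n = len(control_points) - 1                       # curve degree
    t = np.linspace(0.0, 1.0, num_samples)[:, None]   # curve parameter, shape (num_samples, 1)
    curve = np.zeros((num_samples, control_points.shape[1]))
    for i, point in enumerate(control_points):
        # Bernstein basis polynomial B_{i,n}(t) weighting the i-th control point.
        basis = comb(n, i) * (t ** i) * ((1.0 - t) ** (n - i))
        curve += basis * point
    return curve


# Hypothetical control points for one lane line in 2D image space (pixels).
ctrl = np.array([[100.0, 700.0], [300.0, 500.0], [500.0, 350.0], [700.0, 250.0]])
lane_polyline = bezier_curve(ctrl)   # (50, 2) polyline usable for downstream planning
```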
  • Publication number: 20230186593
    Abstract: In various examples, contrast values corresponding to pixels of one or more images generated using one or more sensors of a vehicle may be computed to detect and identify objects that trigger glare-mitigating operations. Pixel luminance values are determined and used to compute a contrast value by comparing the pixel luminance values to a reference luminance value derived from a set of the pixels and their corresponding luminance values. A contrast threshold may be applied to the computed contrast values to identify glare in the image data and trigger glare-mitigating operations, so that the vehicle may modify the configuration of one or more illumination sources to reduce glare experienced by occupants and/or sensors of the vehicle. (An illustrative sketch of the contrast computation follows this entry.)
    Type: Application
    Filed: December 13, 2021
    Publication date: June 15, 2023
    Inventors: Igor Tryndin, Abhishek Bajpayee, Yu Wang, Hae-Jong Seo
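A minimal sketch of the contrast computation described in the abstract above, assuming a Weber-style contrast against a frame-wide reference luminance with Rec. 709 luminance weights. The specific contrast formulation, the reference pixel set, and the threshold value are assumptions for illustration; the abstract only specifies comparing pixel luminance values to a reference luminance and applying a contrast threshold.

```python
# Sketch: flagging glare pixels by thresholding a contrast value computed
# against a reference luminance derived from a set of pixels (here, the whole frame).
import numpy as np


def detect_glare(image_rgb: np.ndarray, contrast_threshold: float = 4.0) -> np.ndarray:
    """Return a boolean mask of pixels whose contrast exceeds the threshold."""
    # Per-pixel luminance (Rec. 709 weights, an assumed choice).
    luminance = (0.2126 * image_rgb[..., 0]
                 + 0.7152 * image_rgb[..., 1]
                 + 0.0722 * image_rgb[..., 2])
    # Reference luminance computed from the pixel set; epsilon avoids division by zero.
    reference = float(luminance.mean()) + 1e-6
    # Weber-style contrast of each pixel relative to the reference.
    contrast = (luminance - reference) / reference
    return contrast > contrast_threshold


# A glare-mitigating operation (e.g., dimming an illumination source) could be
# triggered when the flagged region is large, say detect_glare(frame).mean() > 0.01.
```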
  • Publication number: 20230177839
    Abstract: In various examples, methods and systems are provided for determining, using a machine learning model, one or more of the following operational domain conditions related to an autonomous and/or semi-autonomous machine: amount of camera blindness, blindness classification, illumination level, path surface condition, visibility distance, scene type classification, and distance to a scene. Once one or more of these conditions are determined, an operational level may be selected for the machine, and the machine may be controlled according to that operational level. (An illustrative sketch of this condition-to-level mapping follows this entry.)
    Type: Application
    Filed: December 2, 2021
    Publication date: June 8, 2023
    Inventors: Abhishek Bajpayee, Arjun Gupta, Dylan Doblar, Hae-Jong Seo, George Tang, Keerthi Raj Nagaraja
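A minimal sketch of the second step described above: mapping the predicted operational domain conditions to an operational level for the machine. The condition fields, level names, and thresholds are hypothetical placeholders, since the abstract does not specify how the mapping is performed.

```python
# Sketch: rule-based mapping from predicted domain conditions to an operational level.
from dataclasses import dataclass
from enum import Enum


class OperationalLevel(Enum):
    FULL_AUTONOMY = 3
    REDUCED_SPEED = 2
    DRIVER_TAKEOVER = 1


@dataclass
class DomainConditions:
    camera_blindness: float       # fraction of the image that is blinded, 0..1
    illumination_level: float     # normalized scene brightness, 0..1
    visibility_distance_m: float  # estimated visibility distance in meters


def operational_level(c: DomainConditions) -> OperationalLevel:
    """Select an operational level from the predicted conditions (thresholds are illustrative)."""
    if c.camera_blindness > 0.5 or c.visibility_distance_m < 20.0:
        return OperationalLevel.DRIVER_TAKEOVER
    if (c.camera_blindness > 0.2 or c.illumination_level < 0.1
            or c.visibility_distance_m < 80.0):
        return OperationalLevel.REDUCED_SPEED
    return OperationalLevel.FULL_AUTONOMY


# Example: heavy camera blindness forces a takeover request.
assert operational_level(DomainConditions(0.6, 0.4, 150.0)) is OperationalLevel.DRIVER_TAKEOVER
```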
  • Patent number: 11651215
    Abstract: In various examples, one or more deep neural networks (DNNs) are executed to regress on control points of a curve, and the control points may be used to perform a curve fitting operation—e.g., Bezier curve fitting—to identify landmark locations and geometries in an environment. The outputs of the DNN(s) may thus indicate the two-dimensional (2D) image-space and/or three-dimensional (3D) world-space control point locations, and post-processing techniques—such as clustering and temporal smoothing—may be executed to determine landmark locations and poses with precision and in real time. As a result, reconstructed curves corresponding to the landmarks—e.g., lane line, road boundary line, crosswalk, pole, text, etc.—may be used by a vehicle to perform one or more operations for navigating an environment.
    Type: Grant
    Filed: December 2, 2020
    Date of Patent: May 16, 2023
    Assignee: NVIDIA Corporation
    Inventors: Minwoo Park, Yilin Yang, Xiaolin Lin, Abhishek Bajpayee, Hae-Jong Seo, Eric Jonathan Yuan, Xudong Chen
  • Publication number: 20230110027
    Abstract: In various examples, systems and methods are disclosed that use one or more machine learning models (MLMs), such as deep neural networks (DNNs), to compute outputs indicative of an estimated visibility distance corresponding to sensor data generated using one or more sensors of an autonomous or semi-autonomous machine. Once the visibility distance is computed using the one or more MLMs, the usability of the sensor data for one or more downstream tasks of the machine may be evaluated. As such, where the estimated visibility distance is low, the corresponding sensor data may be relied upon for fewer tasks than when the visibility distance is high. (An illustrative sketch of this usability gating follows this entry.)
    Type: Application
    Filed: September 29, 2021
    Publication date: April 13, 2023
    Inventors: Abhishek Bajpayee, Arjun Gupta, George Tang, Hae-Jong Seo
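A minimal sketch of the usability evaluation described above: the MLM-estimated visibility distance gates which downstream tasks may rely on the sensor data, so that fewer tasks qualify when visibility is low. The task names and per-task distance requirements are hypothetical.

```python
# Sketch: gating downstream tasks by the estimated visibility distance.
from typing import List

# Minimum visibility distance (meters) each task needs before it may rely on
# the current sensor data. Values are illustrative assumptions.
TASK_REQUIREMENTS_M = {
    "lane_keeping": 30.0,
    "obstacle_avoidance": 50.0,
    "highway_pilot": 150.0,
}


def usable_tasks(estimated_visibility_m: float) -> List[str]:
    """Return the downstream tasks that may rely on the current sensor data."""
    return [task for task, required in TASK_REQUIREMENTS_M.items()
            if estimated_visibility_m >= required]


# With 60 m of estimated visibility, only the first two tasks qualify.
assert usable_tasks(60.0) == ["lane_keeping", "obstacle_avoidance"]
```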
  • Publication number: 20230012645
    Abstract: In various examples, a deep neural network (DNN) is trained for sensor blindness detection using a region and context-based approach. Using sensor data, the DNN may compute locations of blindness or compromised visibility regions as well as the associated blindness classifications and/or blindness attributes. In addition, the DNN may predict the usability of each instance of the sensor data for performing one or more operations—such as operations associated with semi-autonomous or autonomous driving. The combination of the outputs of the DNN may be used to filter out instances of the sensor data—or to filter out portions of instances of the sensor data determined to be compromised—that may lead to inaccurate or ineffective results for the one or more operations of the system. (An illustrative sketch of this filtering step follows this entry.)
    Type: Application
    Filed: September 26, 2022
    Publication date: January 19, 2023
    Inventors: Hae-Jong Seo, Abhishek Bajpayee, David Nister, Minwoo Park, Neda Cvijetic
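A minimal sketch of the filtering step described in the abstract above: the DNN's per-region blindness output and its per-frame usability prediction are combined to drop, or partially mask, compromised sensor data before it reaches downstream operations. The output format (a boolean blindness mask plus a usability flag) and the blind-fraction threshold are assumptions for illustration.

```python
# Sketch: filtering out compromised sensor data using DNN blindness outputs.
from typing import Optional
import numpy as np


def filter_frame(frame: np.ndarray,
                 blindness_mask: np.ndarray,    # boolean mask, True where the sensor is blinded
                 usable: bool,                  # DNN's per-frame usability prediction
                 max_blind_fraction: float = 0.25) -> Optional[np.ndarray]:
    """Return a frame safe to pass downstream, or None if it should be dropped."""
    if not usable:
        return None                             # DNN predicted the whole frame is unusable
    if blindness_mask.mean() > max_blind_fraction:
        return None                             # too much of the frame is compromised
    cleaned = frame.copy()
    cleaned[blindness_mask] = 0                 # mask out the compromised regions only
    return cleaned
```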
  • Patent number: 11508049
    Abstract: In various examples, a deep neural network (DNN) is trained for sensor blindness detection using a region and context-based approach. Using sensor data, the DNN may compute locations of blindness or compromised visibility regions as well as the associated blindness classifications and/or blindness attributes. In addition, the DNN may predict the usability of each instance of the sensor data for performing one or more operations—such as operations associated with semi-autonomous or autonomous driving. The combination of the outputs of the DNN may be used to filter out instances of the sensor data—or to filter out portions of instances of the sensor data determined to be compromised—that may lead to inaccurate or ineffective results for the one or more operations of the system.
    Type: Grant
    Filed: September 13, 2019
    Date of Patent: November 22, 2022
    Assignee: NVIDIA Corporation
    Inventors: Hae-Jong Seo, Abhishek Bajpayee, David Nister, Minwoo Park, Neda Cvijetic
  • Publication number: 20210166052
    Abstract: In various examples, one or more deep neural networks (DNNs) are executed to regress on control points of a curve, and the control points may be used to perform a curve fitting operation—e.g., Bezier curve fitting—to identify landmark locations and geometries in an environment. The outputs of the DNN(s) may thus indicate the two-dimensional (2D) image-space and/or three-dimensional (3D) world-space control point locations, and post-processing techniques—such as clustering and temporal smoothing—may be executed to determine landmark locations and poses with precision and in real time. As a result, reconstructed curves corresponding to the landmarks—e.g., lane line, road boundary line, crosswalk, pole, text, etc.—may be used by a vehicle to perform one or more operations for navigating an environment.
    Type: Application
    Filed: December 2, 2020
    Publication date: June 3, 2021
    Inventors: Minwoo Park, Yilin Yang, Xiaolin Lin, Abhishek Bajpayee, Hae-Jong Seo, Eric Jonathan Yuan, Xudong Chen
  • Publication number: 20200090322
    Abstract: In various examples, a deep neural network (DNN) is trained for sensor blindness detection using a region and context-based approach. Using sensor data, the DNN may compute locations of blindness or compromised visibility regions as well as the associated blindness classifications and/or blindness attributes. In addition, the DNN may predict the usability of each instance of the sensor data for performing one or more operations—such as operations associated with semi-autonomous or autonomous driving. The combination of the outputs of the DNN may be used to filter out instances of the sensor data—or to filter out portions of instances of the sensor data determined to be compromised—that may lead to inaccurate or ineffective results for the one or more operations of the system.
    Type: Application
    Filed: September 13, 2019
    Publication date: March 19, 2020
    Inventors: Hae-Jong Seo, Abhishek Bajpayee, David Nister, Minwoo Park, Neda Cvijetic