Patents by Inventor Ashish TAWARI

Ashish TAWARI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11216001
    Abstract: A system and method for outputting vehicle dynamic controls using deep neural networks that include receiving, from at least one sensor of a vehicle, environmental sensor data of the surrounding environment of the vehicle. The system and method also include inputting the environmental sensor data to a primary deep neural network structure and inputting an intermediate representation, at least one applicable traffic rule, and at least one applicable vehicle maneuver to a secondary deep neural network structure. The system and method further include outputting vehicle dynamic controls to autonomously control the vehicle to navigate within the surrounding environment of the vehicle based on the at least one applicable traffic rule and the at least one applicable vehicle maneuver.
    Type: Grant
    Filed: March 20, 2019
    Date of Patent: January 4, 2022
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Teruhisa Misu, Ashish Tawari, Sujitha Catherine Martin
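The two-stage pipeline the abstract describes (a primary network producing an intermediate representation, a secondary network consuming that representation plus rule and maneuver encodings) can be sketched roughly as below. All layer sizes, encodings, and function names are illustrative assumptions, not the patented implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, weights):
    """Apply a stack of dense layers with tanh activations."""
    for w in weights:
        x = np.tanh(x @ w)
    return x

# Primary network (assumed shapes): raw sensor data -> intermediate representation.
primary_weights = [rng.standard_normal((16, 32)), rng.standard_normal((32, 8))]
# Secondary network: intermediate representation + rule/maneuver encodings -> controls.
secondary_weights = [rng.standard_normal((8 + 2 + 2, 16)), rng.standard_normal((16, 2))]

def vehicle_controls(sensor_data, traffic_rule, maneuver):
    """Return a hypothetical [steering, throttle] pair in [-1, 1] for one step."""
    intermediate = mlp(sensor_data, primary_weights)
    fused = np.concatenate([intermediate, traffic_rule, maneuver])
    return mlp(fused, secondary_weights)

controls = vehicle_controls(
    sensor_data=rng.standard_normal(16),   # e.g. flattened camera/lidar features
    traffic_rule=np.array([1.0, 0.0]),     # e.g. one-hot "speed limit active"
    maneuver=np.array([0.0, 1.0]),         # e.g. one-hot "lane change"
)
```

The key structural point is that rules and maneuvers bypass the primary network and are fused only at the secondary stage.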
  • Patent number: 11188766
    Abstract: A system and method for providing context aware road user importance estimation that include receiving at least one image of a vicinity of an ego vehicle. The system and method also include analyzing the at least one image to determine a local context associated with at least one road user located within the vicinity of the ego vehicle. The system and method additionally include determining a global context associated with the ego vehicle. The system and method further include fusing the local context and the global context to classify at least one highly important road user that is to be accounted for with respect to operating the ego vehicle.
    Type: Grant
    Filed: August 16, 2019
    Date of Patent: November 30, 2021
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Alireza Rahimpour, Sujitha Catherine Martin, Ashish Tawari, Hairong Qi
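The local/global fusion step in this abstract amounts to concatenating per-road-user features with ego-vehicle features before classification. A minimal sketch, with all feature choices and weights assumed for illustration:

```python
import numpy as np

def importance_score(local_context, global_context, w_local, w_global, bias=0.0):
    """Fuse per-road-user local features with ego-vehicle global features and
    score how important that road user is for operating the ego vehicle."""
    fused = np.concatenate([local_context, global_context])
    logit = fused @ np.concatenate([w_local, w_global]) + bias
    return 1.0 / (1.0 + np.exp(-logit))   # sigmoid -> importance in (0, 1)

# Hypothetical features: [proximity, closing speed] locally; [ego speed, turning] globally.
local = np.array([0.2, 0.9])
global_ctx = np.array([0.7, 1.0])
score = importance_score(local, global_ctx,
                         w_local=np.array([-1.0, 2.0]),
                         w_global=np.array([0.5, 1.0]))
is_highly_important = score > 0.5
```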
  • Patent number: 11150656
    Abstract: Systems and techniques for autonomous vehicle decision making may include training an autonomous vehicle decision making database by capturing an image including a first training object and a second training object during a training phase. The first training object may be classified as a first class and the second training object may be classified as a second class based on a driver gaze location associated with a driver of the vehicle. The database may be built based on classification of the first training object and the second training object. The autonomous vehicle decision making database may be utilized to classify a first object as a first class and a second object as a second class during an operation phase. A processor may perform a first computation associated with the first object based on the classification of the first object and the classification of the second object.
    Type: Grant
    Filed: November 19, 2018
    Date of Patent: October 19, 2021
    Assignee: Honda Motor Co., Ltd.
    Inventors: Teruhisa Misu, Ashish Tawari, Sujitha Catherine Martin
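The training-phase idea here, classifying detected objects by whether the driver's gaze landed on them, can be sketched as a simple distance test. The pixel-distance rule and two-class labels are assumptions for illustration only:

```python
def classify_by_gaze(objects, gaze_xy, radius=50.0):
    """Label each detected object by whether the driver's gaze point falls
    within `radius` pixels of the object's image location."""
    labels = {}
    for name, (x, y) in objects.items():
        dist = ((x - gaze_xy[0]) ** 2 + (y - gaze_xy[1]) ** 2) ** 0.5
        labels[name] = "attended" if dist <= radius else "unattended"
    return labels

labels = classify_by_gaze(
    {"pedestrian": (310, 205), "parked_car": (40, 400)},
    gaze_xy=(300, 200),
)
```

During the operation phase such labels would prioritize computation toward the "attended" class.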
  • Publication number: 20210248399
    Abstract: The present disclosure provides a method and system to operationalize driver eye movement data analysis based on moving objects of interest. Correlation and/or regression analyses between indirect (e.g., eye-tracking) and/or direct measures of driver awareness may identify variables that feature spatial and/or temporal aspects of driver eye glance behavior relative to objects of interest. The proposed systems and methods may be further combined with computer-vision techniques such as object recognition to (e.g., fully) automate eye movement data processing as well as machine learning approaches to improve the accuracy of driver awareness estimation.
    Type: Application
    Filed: November 17, 2020
    Publication date: August 12, 2021
    Inventors: Sujitha Catherine MARTIN, Teruhisa MISU, Hyungil KIM, Ashish TAWARI, Joseph L. GABBARD
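The correlation analysis between indirect eye-tracking measures and direct awareness measures could, at its simplest, be a Pearson correlation per object of interest. The data below are invented for illustration:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between an eye-glance metric and an awareness measure."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xd, yd = x - x.mean(), y - y.mean()
    return float((xd @ yd) / np.sqrt((xd @ xd) * (yd @ yd)))

# Hypothetical per-object measurements: total glance time (s) toward each
# object of interest vs. a direct awareness rating for the same object.
glance_time = [0.1, 0.4, 0.8, 1.2, 1.5]
awareness = [0.2, 0.35, 0.7, 0.8, 0.95]
r = pearson_r(glance_time, awareness)
```

A strong correlation would support using the glance variable as a proxy for awareness in the automated pipeline.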
  • Publication number: 20210081780
    Abstract: A system and method for providing object-level driver attention reasoning with a graph convolution network that include receiving image data associated with a plurality of image clips of a surrounding environment of a vehicle and determining anchor object-ness scores and anchor importance scores associated with relevant objects included within the plurality of image clips. The system and method also include analyzing the anchor object-ness scores and anchor importance scores associated with relevant objects and determining top relevant objects with respect to an operation of the vehicle. The system and method further include passing object node features and edges of an interaction graph through the graph convolution network to update features of each object node through interaction with other object nodes and determining importance scores for the top relevant objects.
    Type: Application
    Filed: January 17, 2020
    Publication date: March 18, 2021
    Inventors: Ashish Tawari, Zehua Zhang
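The node-update step of the graph convolution network described above, where each object node aggregates features from interacting object nodes, can be sketched as one mean-aggregation GCN layer. Shapes, the identity weight, and the fully connected interaction graph are illustrative assumptions:

```python
import numpy as np

def gcn_layer(node_features, adjacency, weight):
    """One graph-convolution step: each object node averages features over its
    neighbours (plus itself via a self-loop), then applies a linear map + ReLU."""
    a_hat = adjacency + np.eye(adjacency.shape[0])       # add self-loops
    deg_inv = 1.0 / a_hat.sum(axis=1, keepdims=True)     # mean aggregation
    return np.maximum(0.0, (deg_inv * a_hat) @ node_features @ weight)

# Three candidate objects with 4-d features, all interacting with each other.
features = np.arange(12, dtype=float).reshape(3, 4)
adjacency = np.ones((3, 3)) - np.eye(3)
updated = gcn_layer(features, adjacency, weight=np.eye(4))
```

Stacking such layers lets each object's importance score depend on the other objects it interacts with, which is the point of the abstract's interaction graph.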
  • Patent number: 10902279
    Abstract: Saliency training may be provided to build a saliency database, which may be utilized to facilitate operation of an autonomous vehicle. The saliency database may be built by minimizing a loss function between a saliency prediction result and a saliency mapper result. The saliency mapper result may be obtained from a ground truth database, which includes image frames of an operation environment where objects or regions within respective image frames are associated with a positive saliency, a neutral saliency, or a negative saliency. Neutral saliency may be indicative of a detected gaze location of a driver corresponding to the object or region at a time prior to the time associated with a given image frame. The saliency prediction result may be generated based on features extracted from respective image frames, depth-wise concatenations associated with respective image frames, and a long short-term memory layer or a recurrent neural network.
    Type: Grant
    Filed: September 25, 2018
    Date of Patent: January 26, 2021
    Assignee: Honda Motor Co., Ltd.
    Inventors: Ashish Tawari, Sujitha Catherine Martin
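The three-valued ground truth (positive, neutral, negative saliency) suggests a loss that simply skips neutral pixels. A toy sketch, with NaN chosen here, as an assumption, to encode neutral saliency and mean-squared error standing in for whatever loss the patent actually uses:

```python
import numpy as np

def saliency_loss(prediction, ground_truth):
    """Mean-squared error between a predicted saliency map and the saliency
    mapper result, skipping pixels marked neutral (encoded here as NaN)."""
    mask = ~np.isnan(ground_truth)
    diff = prediction[mask] - ground_truth[mask]
    return float(np.mean(diff ** 2))

pred = np.array([[0.9, 0.1], [0.2, 0.8]])
# 1.0 = positive saliency, 0.0 = negative, NaN = neutral (previously gazed-at).
truth = np.array([[1.0, 0.0], [np.nan, 1.0]])
loss = saliency_loss(pred, truth)
```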
  • Publication number: 20200384981
    Abstract: Aspects of the present disclosure may include methods, apparatuses, and computer readable media for receiving one or more images having a plurality of objects, receiving a notification from an occupant of the self-driving vehicle, generating an attention map highlighting the plurality of objects based on at least one of the one or more images and the notification, and providing at least one of a steering control or a velocity control to operate the self-driving vehicle based on the attention map and the notification.
    Type: Application
    Filed: June 10, 2019
    Publication date: December 10, 2020
    Inventors: Ashish Tawari, Yi-Ting Chen, Teruhisa Misu, John F. Canny, Jinkyu Kim
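The flow this abstract describes, an occupant notification reshaping an attention map that then drives a velocity control, might look roughly like the following. The boost factor, hazard list, and linear slow-down rule are all invented for illustration:

```python
def attention_map(detections, notified_object, boost=2.0):
    """Weight each detected object; the object named in the occupant's
    notification gets boosted attention. Weights are normalised to sum to 1."""
    weights = {name: 1.0 for name in detections}
    if notified_object in weights:
        weights[notified_object] *= boost
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

def velocity_control(attn, hazards, base_speed=30.0):
    """Slow down in proportion to the attention placed on hazardous objects."""
    hazard_attn = sum(attn.get(h, 0.0) for h in hazards)
    return base_speed * (1.0 - hazard_attn)

attn = attention_map(["cyclist", "truck", "sign"], notified_object="cyclist")
speed = velocity_control(attn, hazards=["cyclist"])
```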
  • Publication number: 20200377111
    Abstract: A trainer device trains an automated driver system. The trainer device may include a vehicle manager that manages data associated with controlling a vehicle and a simulation manager that manages data associated with simulating the vehicle. The vehicle manager may analyze vehicle data to identify an intervention event, and the simulation manager obtains a portion of the vehicle data corresponding to the intervention event to generate simulation data, obtains user data associated with the simulation data, analyzes the user data to determine whether the user data satisfies a predetermined intervention threshold, and, on condition that the user data satisfies the predetermined intervention threshold, transmits the user data to the vehicle manager for modifying the control data of the vehicle.
    Type: Application
    Filed: May 30, 2019
    Publication date: December 3, 2020
    Inventors: Teruhisa Misu, Ashish Tawari, Sujitha Catherine Martin
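The intervention-threshold gate in this abstract reduces to a simple check over user feedback collected from the simulation. The voting scheme below is an assumed stand-in for whatever user-data measure the patent contemplates:

```python
def should_forward(user_ratings, intervention_threshold=0.7):
    """Forward simulation feedback to the vehicle manager only when the
    fraction of users judging the intervention necessary clears a threshold."""
    if not user_ratings:
        return False
    return sum(user_ratings) / len(user_ratings) >= intervention_threshold

forward = should_forward([1, 1, 0, 1, 1])  # 4 of 5 users agreed: 0.8 >= 0.7
```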
  • Patent number: 10809799
    Abstract: Systems and methods for estimating an object of fixation from a gaze of a driver. The method includes: receiving image data from a plurality of input devices; processing the image data from a first one of the plurality of input devices to identify an object track; analyzing the image data from a second one of the plurality of input devices and the image data from the first one of the plurality of input devices to determine a projected gaze of a driver; analyzing the object track and the projected gaze to identify a plurality of objects in the gaze of the driver; performing a probability analysis to estimate the object of fixation from among the plurality of objects; and generating an output image identifying the estimated object of fixation.
    Type: Grant
    Filed: May 22, 2018
    Date of Patent: October 20, 2020
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Sujitha Martin, Ashish Tawari
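The probability analysis for picking the object of fixation from several candidates in the driver's gaze can be sketched as a Gaussian likelihood over angular distance between each object's bearing and the projected gaze. The Gaussian model and sigma value are illustrative assumptions:

```python
import numpy as np

def object_of_fixation(objects, gaze_direction, sigma=0.2):
    """Pick the tracked object whose bearing best matches the projected gaze,
    scoring each candidate with a Gaussian likelihood over angular distance."""
    gaze = np.asarray(gaze_direction) / np.linalg.norm(gaze_direction)
    scores = {}
    for name, direction in objects.items():
        d = np.asarray(direction) / np.linalg.norm(direction)
        angle = np.arccos(np.clip(gaze @ d, -1.0, 1.0))
        scores[name] = np.exp(-(angle ** 2) / (2 * sigma ** 2))
    total = sum(scores.values())
    probs = {name: s / total for name, s in scores.items()}
    return max(probs, key=probs.get), probs

best, probs = object_of_fixation(
    {"traffic_light": (0.05, 1.0), "cyclist": (0.9, 0.5)},
    gaze_direction=(0.0, 1.0),
)
```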
  • Publication number: 20200298847
    Abstract: Determining object importance in vehicle control systems can include obtaining, for a vehicle in operation, an image of a dynamic scene, identifying an object type associated with one or more objects in the image, determining, based on the object type and a goal associated with the vehicle, an importance metric associated with the one or more objects, and controlling the vehicle based at least in part on the importance metric associated with the one or more objects.
    Type: Application
    Filed: March 20, 2019
    Publication date: September 24, 2020
    Inventors: Ashish Tawari, Sujitha Catherine Martin, Mingfei Gao
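The dependence of the importance metric on both object type and vehicle goal can be sketched as a lookup keyed on the (type, goal) pair, with control conditioned on the highest importance present. All values and the brake rule are invented for illustration:

```python
# Hypothetical lookup: how much each object type matters under each driving goal.
IMPORTANCE = {
    ("pedestrian", "turn_right"): 0.95,
    ("pedestrian", "go_straight"): 0.6,
    ("parked_car", "turn_right"): 0.3,
    ("parked_car", "go_straight"): 0.1,
}

def importance_metric(object_type, goal, default=0.5):
    """Importance of one object given the vehicle's current goal."""
    return IMPORTANCE.get((object_type, goal), default)

def controls_for(objects, goal, brake_threshold=0.9):
    """Brake if any detected object's importance for the current goal is high."""
    top = max((importance_metric(t, goal) for t in objects), default=0.0)
    return "brake" if top >= brake_threshold else "proceed"

action = controls_for(["pedestrian", "parked_car"], goal="turn_right")
```

The same pedestrian scores lower under "go_straight", which is the abstract's point: importance is goal-conditional, not a fixed property of the object type.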
  • Publication number: 20200301437
    Abstract: A system and method for outputting vehicle dynamic controls using deep neural networks that include receiving, from at least one sensor of a vehicle, environmental sensor data of the surrounding environment of the vehicle. The system and method also include inputting the environmental sensor data to a primary deep neural network structure and inputting an intermediate representation, at least one applicable traffic rule, and at least one applicable vehicle maneuver to a secondary deep neural network structure. The system and method further include outputting vehicle dynamic controls to autonomously control the vehicle to navigate within the surrounding environment of the vehicle based on the at least one applicable traffic rule and the at least one applicable vehicle maneuver.
    Type: Application
    Filed: March 20, 2019
    Publication date: September 24, 2020
    Inventors: Teruhisa Misu, Ashish Tawari, Sujitha Catherine Martin
  • Patent number: 10759424
    Abstract: The systems and methods provided herein are directed to the uploading and transmission of vehicle data to a remote system when a physiological event for a driver has been detected using one or more sensors. Information such as the driver's heart rate, temperature, voice inflection or facial expression may be monitored to detect the physiological event. Vehicle data, such as gathering or control system data, may be sent once the event has been detected. Selected vehicle data associated with the event or all data during the time of the event may be sent. After receiving the vehicle data, the remote system may process or store it where it may be used to modify automated driving functionalities.
    Type: Grant
    Filed: August 16, 2016
    Date of Patent: September 1, 2020
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Teruhisa Misu, Nanxiang Li, Ashish Tawari
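The event-triggered upload this abstract describes has two parts: detecting a physiological event from a monitored signal and selecting the vehicle data around it. A minimal sketch, where the heart-rate ratio threshold and the time window are assumed values:

```python
def detect_physiological_event(heart_rate_bpm, baseline_bpm, threshold=1.3):
    """Flag an event when heart rate spikes well above the driver's baseline."""
    return heart_rate_bpm > baseline_bpm * threshold

def select_upload(vehicle_log, event_time, window_s=10.0):
    """Select the vehicle-data records within a window around the event,
    rather than uploading the whole log."""
    return [rec for rec in vehicle_log if abs(rec["t"] - event_time) <= window_s]

log = [{"t": 0.0, "speed": 20.1}, {"t": 8.0, "speed": 19.5}, {"t": 25.0, "speed": 21.0}]
upload = []
if detect_physiological_event(heart_rate_bpm=118, baseline_bpm=72):
    upload = select_upload(log, event_time=5.0)
```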
  • Publication number: 20200250437
    Abstract: A system and method for providing context aware road user importance estimation that include receiving at least one image of a vicinity of an ego vehicle. The system and method also include analyzing the at least one image to determine a local context associated with at least one road user located within the vicinity of the ego vehicle. The system and method additionally include determining a global context associated with the ego vehicle. The system and method further include fusing the local context and the global context to classify at least one highly important road user that is to be accounted for with respect to operating the ego vehicle.
    Type: Application
    Filed: August 16, 2019
    Publication date: August 6, 2020
    Inventors: Alireza Rahimpour, Sujitha Catherine Martin, Ashish Tawari, Hairong Qi
  • Publication number: 20200159214
    Abstract: Systems and techniques for autonomous vehicle decision making may include training an autonomous vehicle decision making database by capturing an image including a first training object and a second training object during a training phase. The first training object may be classified as a first class and the second training object may be classified as a second class based on a driver gaze location associated with a driver of the vehicle. The database may be built based on classification of the first training object and the second training object. The autonomous vehicle decision making database may be utilized to classify a first object as a first class and a second object as a second class during an operation phase. A processor may perform a first computation associated with the first object based on the classification of the first object and the classification of the second object.
    Type: Application
    Filed: November 19, 2018
    Publication date: May 21, 2020
    Inventors: Teruhisa Misu, Ashish Tawari, Sujitha Catherine Martin
  • Publication number: 20200097754
    Abstract: Saliency training may be provided to build a saliency database, which may be utilized to facilitate operation of an autonomous vehicle. The saliency database may be built by minimizing a loss function between a saliency prediction result and a saliency mapper result. The saliency mapper result may be obtained from a ground truth database, which includes image frames of an operation environment where objects or regions within respective image frames are associated with a positive saliency, a neutral saliency, or a negative saliency. Neutral saliency may be indicative of a detected gaze location of a driver corresponding to the object or region at a time prior to the time associated with a given image frame. The saliency prediction result may be generated based on features extracted from respective image frames, depth-wise concatenations associated with respective image frames, and a long short-term memory layer or a recurrent neural network.
    Type: Application
    Filed: September 25, 2018
    Publication date: March 26, 2020
    Inventors: Ashish Tawari, Sujitha Catherine Martin
  • Publication number: 20190361522
    Abstract: Systems and methods for estimating an object of fixation from a gaze of a driver. The method includes: receiving image data from a plurality of input devices; processing the image data from a first one of the plurality of input devices to identify an object track; analyzing the image data from a second one of the plurality of input devices and the image data from the first one of the plurality of input devices to determine a projected gaze of a driver; analyzing the object track and the projected gaze to identify a plurality of objects in the gaze of the driver; performing a probability analysis to estimate the object of fixation from among the plurality of objects; and generating an output image identifying the estimated object of fixation.
    Type: Application
    Filed: May 22, 2018
    Publication date: November 28, 2019
    Inventors: Sujitha MARTIN, Ashish TAWARI
  • Publication number: 20180225554
    Abstract: Systems and methods for estimating a saliency of one or more targets of a drive scene are provided. In some aspects, the system includes a memory that stores instructions for executing processes for estimating the saliency of the one or more targets of the drive scene. The system further includes a processor configured to execute the instructions. In various aspects, the processes include generating a Bayesian framework to model visual attention of a driver, the Bayesian framework comprising a bottom-up saliency element and a top-down saliency element. In various aspects, the processes also include generating a fully convolutional neural network, based on the Bayesian framework, to generate a visual saliency model of the one or more targets in the driving scene. In further aspects, the processes include outputting the visual saliency model to indicate features that attract attention of the driver.
    Type: Application
    Filed: May 30, 2017
    Publication date: August 9, 2018
    Inventors: Ashish TAWARI, Byeongkeun Kang
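The Bayesian framework here combines a stimulus-driven bottom-up saliency element with a task-driven top-down element. Treating one as a likelihood and the other as a prior gives a posterior saliency map; the toy 2x2 maps below are invented for illustration:

```python
import numpy as np

def bayesian_saliency(bottom_up, top_down):
    """Combine a bottom-up saliency map (stimulus-driven likelihood) with a
    top-down map (task-driven prior) and renormalise, per Bayes' rule."""
    posterior = bottom_up * top_down
    return posterior / posterior.sum()

# A bright region (bottom-up) coinciding with a task-relevant region (top-down)
# dominates the posterior map.
bottom_up = np.array([[0.4, 0.1], [0.1, 0.4]])
top_down = np.array([[0.7, 0.1], [0.1, 0.1]])
saliency = bayesian_saliency(bottom_up, top_down)
peak = np.unravel_index(np.argmax(saliency), saliency.shape)
```

In the patent's framing, a fully convolutional network is then trained to produce such posterior maps directly from the drive-scene image.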
  • Publication number: 20180050696
    Abstract: The systems and methods provided herein are directed to the uploading and transmission of vehicle data to a remote system when a physiological event for a driver has been detected using one or more sensors. Information such as the driver's heart rate, temperature, voice inflection or facial expression may be monitored to detect the physiological event. Vehicle data, such as gathering or control system data, may be sent once the event has been detected. Selected vehicle data associated with the event or all data during the time of the event may be sent. After receiving the vehicle data, the remote system may process or store it where it may be used to modify automated driving functionalities.
    Type: Application
    Filed: August 16, 2016
    Publication date: February 22, 2018
    Inventors: Teruhisa MISU, Nanxiang LI, Ashish TAWARI