Patents by Inventor Sujitha Catherine Martin

Sujitha Catherine Martin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11954921
    Abstract: A system and method for improving driver situation awareness prediction using human visual sensory and memory mechanism that includes receiving data associated with a driving scene of a vehicle and an eye gaze of a driver of the vehicle. The system and method also include analyzing the data and extracting features associated with objects located within the driving scene and determining a situational awareness score that is associated with a situational awareness of the driver with respect to each of the objects located within the driving scene. The system and method further include communicating control signals to electronically control at least one system of the vehicle based on the situational awareness score that is associated with each of the objects located within the driving scene.
    Type: Grant
    Filed: May 19, 2021
    Date of Patent: April 9, 2024
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Haibei Zhu, Teruhisa Misu, Sujitha Catherine Martin, Xingwei Wu, Kumar Akash
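A minimal sketch of the general idea described in the abstract above, not the patented method: score each object in the driving scene by combining current gaze proximity with a memory-decay term, then gate a control signal on objects that fall below a threshold. The names (SceneObject, awareness_score) and the constants GAZE_SIGMA, MEMORY_HALF_LIFE, and ALERT_THRESHOLD are illustrative assumptions.

```python
# Illustrative sketch only: a toy per-object situational-awareness score that
# combines gaze proximity with an exponential memory decay, then gates an
# alert signal. Names and constants are hypothetical, not taken from the patent.
import math
from dataclasses import dataclass

GAZE_SIGMA = 120.0      # pixels; spread of the gaze "attention" kernel (assumed)
MEMORY_HALF_LIFE = 4.0  # seconds; how quickly awareness of an object fades (assumed)
ALERT_THRESHOLD = 0.3   # below this score the object is treated as unnoticed (assumed)

@dataclass
class SceneObject:
    object_id: str
    x: float                  # object centroid in image coordinates
    y: float
    last_fixation_age: float  # seconds since the driver last fixated near the object

def awareness_score(obj: SceneObject, gaze_x: float, gaze_y: float) -> float:
    """Blend current gaze proximity with decayed memory of past fixations."""
    dist = math.hypot(obj.x - gaze_x, obj.y - gaze_y)
    gaze_term = math.exp(-(dist ** 2) / (2 * GAZE_SIGMA ** 2))
    memory_term = 0.5 ** (obj.last_fixation_age / MEMORY_HALF_LIFE)
    return max(gaze_term, memory_term)

def control_signal(objects, gaze_x, gaze_y):
    """Return the ids of objects the driver appears unaware of."""
    return [o.object_id for o in objects
            if awareness_score(o, gaze_x, gaze_y) < ALERT_THRESHOLD]

if __name__ == "__main__":
    scene = [SceneObject("pedestrian", 420, 260, last_fixation_age=8.0),
             SceneObject("lead_car", 650, 300, last_fixation_age=0.5)]
    print(control_signal(scene, gaze_x=640, gaze_y=310))  # -> ['pedestrian']
```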
  • Patent number: 11538259
    Abstract: The present disclosure provides a method and system to operationalize driver eye movement data analysis based on moving objects of interest. Correlation and/or regression analyses between indirect (e.g., eye-tracking) and/or direct measures of driver awareness may identify variables that feature spatial and/or temporal aspects of driver eye glance behavior relative to objects of interest. The proposed systems and methods may be further combined with computer-vision techniques such as object recognition to (e.g., fully) automate eye movement data processing as well as machine learning approaches to improve the accuracy of driver awareness estimation.
    Type: Grant
    Filed: November 17, 2020
    Date of Patent: December 27, 2022
    Assignee: Honda Motor Co., Ltd.
    Inventors: Sujitha Catherine Martin, Teruhisa Misu, Hyungil Kim, Ashish Tawari, Joseph L. Gabbard
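A hedged sketch of the kind of analysis the abstract above describes: correlating an eye-tracking-derived glance feature with a direct awareness measure and fitting a one-variable regression as a crude awareness estimator. The feature name (glance_duration_s) and the sample values are invented for illustration; they are not the patented pipeline.

```python
# Minimal sketch, not the patented analysis: Pearson correlation between an
# indirect (eye-tracking) measure and a direct awareness measure, plus a
# least-squares fit used as a toy awareness estimator. Data are hypothetical.
import numpy as np

# Per-object-of-interest samples (hypothetical).
glance_duration_s = np.array([0.2, 0.8, 1.5, 0.1, 2.3, 0.9, 0.0, 1.1])
reported_awareness = np.array([0.1, 0.5, 0.8, 0.2, 0.9, 0.6, 0.0, 0.7])  # 0..1 scale

# Correlation between the indirect and direct measures.
r = np.corrcoef(glance_duration_s, reported_awareness)[0, 1]

# Least-squares fit: awareness ~ a * duration + b.
a, b = np.polyfit(glance_duration_s, reported_awareness, 1)

def estimate_awareness(duration_s: float) -> float:
    """Predict awareness of an object from how long the driver glanced at it."""
    return float(np.clip(a * duration_s + b, 0.0, 1.0))

print(f"Pearson r = {r:.2f}, fit: awareness = {a:.2f}*duration + {b:.2f}")
print(estimate_awareness(1.0))
```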
  • Publication number: 20220277165
Abstract: A system and method for improving driver situation awareness prediction using human visual sensory and memory mechanism that includes receiving data associated with a driving scene of a vehicle and an eye gaze of a driver of the vehicle. The system and method also include analyzing the data and extracting features associated with objects located within the driving scene and determining a situational awareness score that is associated with a situational awareness of the driver with respect to each of the objects located within the driving scene. The system and method further include communicating control signals to electronically control at least one system of the vehicle based on the situational awareness score that is associated with each of the objects located within the driving scene.
    Type: Application
    Filed: May 19, 2021
    Publication date: September 1, 2022
    Inventors: Haibei ZHU, Teruhisa MISU, Sujitha Catherine MARTIN, Xingwei WU, Kumar AKASH
  • Patent number: 11420623
    Abstract: Determining object importance in vehicle control systems can include obtaining, for a vehicle in operation, an image of a dynamic scene, identifying an object type associated with one or more objects in the image, determining, based on the object type and a goal associated with the vehicle, an importance metric associated with the one or more objects, and controlling the vehicle based at least in part on the importance metric associated with the one or more objects.
    Type: Grant
    Filed: March 20, 2019
    Date of Patent: August 23, 2022
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Ashish Tawari, Sujitha Catherine Martin, Mingfei Gao
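A hedged illustration of the flow described above: the patent describes determining an importance metric from object type and the vehicle's goal; this sketch replaces the learned metric with a hand-written lookup so the control path is concrete. The table values, Goal enum, and plan_speed rule are assumptions, not taken from the patent.

```python
# Illustrative only: importance-by-type-and-goal lookup driving a simple speed plan.
from enum import Enum

class Goal(Enum):
    GO_STRAIGHT = "go_straight"
    TURN_LEFT = "turn_left"

# importance[goal][object_type] in [0, 1]; higher means more safety-relevant (assumed values).
IMPORTANCE_TABLE = {
    Goal.GO_STRAIGHT: {"lead_vehicle": 0.9, "oncoming_vehicle": 0.2, "pedestrian": 0.7},
    Goal.TURN_LEFT:   {"lead_vehicle": 0.4, "oncoming_vehicle": 0.9, "pedestrian": 0.8},
}

def importance(object_type: str, goal: Goal) -> float:
    return IMPORTANCE_TABLE[goal].get(object_type, 0.1)

def plan_speed(detected_types, goal, cruise_speed_mps=13.0):
    """Slow down in proportion to the most important object in view."""
    top = max((importance(t, goal) for t in detected_types), default=0.0)
    return cruise_speed_mps * (1.0 - 0.5 * top)

print(plan_speed(["oncoming_vehicle", "pedestrian"], Goal.TURN_LEFT))  # 13 * (1 - 0.45)
```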
  • Patent number: 11216001
    Abstract: A system and method for outputting vehicle dynamic controls using deep neural networks that include receiving environmental sensor data from at least one sensor of a vehicle of a surrounding environment of the vehicle. The system and method also include inputting the environmental sensor data to a primary deep neural network structure and inputting intermediate representation, at least one applicable traffic rule, and at least one applicable vehicle maneuver to a secondary deep neural network structure. The system and method further include outputting vehicle dynamic controls to autonomously control the vehicle to navigate within the surrounding environment of the vehicle based on the at least one applicable traffic rule and the at least one applicable vehicle maneuver.
    Type: Grant
    Filed: March 20, 2019
    Date of Patent: January 4, 2022
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Teruhisa Misu, Ashish Tawari, Sujitha Catherine Martin
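A sketch under stated assumptions, not the patented architecture: a primary network compresses raw sensor input into an intermediate representation, and a secondary network combines that representation with encoded traffic-rule and maneuver inputs to emit steering and throttle commands. All layer sizes, encodings, and tensor names are invented for illustration.

```python
# Toy two-stage network mirroring the primary/secondary structure described above.
import torch
import torch.nn as nn

class PrimaryNet(nn.Module):
    def __init__(self, sensor_dim=64, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(sensor_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
    def forward(self, sensor_data):
        return self.encoder(sensor_data)  # intermediate representation

class SecondaryNet(nn.Module):
    def __init__(self, latent_dim=32, rule_dim=8, maneuver_dim=4):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(latent_dim + rule_dim + maneuver_dim, 64),
                                  nn.ReLU(),
                                  nn.Linear(64, 2))  # [steering, throttle]
    def forward(self, latent, traffic_rule, maneuver):
        return self.head(torch.cat([latent, traffic_rule, maneuver], dim=-1))

primary, secondary = PrimaryNet(), SecondaryNet()
sensor = torch.randn(1, 64)                         # stand-in for fused environmental sensor data
rule = torch.zeros(1, 8); rule[0, 2] = 1.0          # e.g. one-hot "speed limit" rule (assumed encoding)
maneuver = torch.zeros(1, 4); maneuver[0, 1] = 1.0  # e.g. one-hot "lane change" maneuver (assumed)
controls = secondary(primary(sensor), rule, maneuver)
print(controls.shape)  # torch.Size([1, 2])
```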
  • Patent number: 11188766
    Abstract: A system and method for providing context aware road user importance estimation that include receiving at least one image of a vicinity of an ego vehicle. The system and method also include analyzing the at least one image to determine a local context associated with at least one road user located within the vicinity of the ego vehicle. The system and method additionally include determining a global context associated with the ego vehicle. The system and method further include fusing the local context and the global context to classify at least one highly important road user that is to be accounted for with respect to operating the ego vehicle.
    Type: Grant
    Filed: August 16, 2019
    Date of Patent: November 30, 2021
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Alireza Rahimpour, Sujitha Catherine Martin, Ashish Tawari, Hairong Qi
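Inventors: Alireza Rahimpour, Sujitha Catherine Martin, Ashish Tawari, Hairong Qi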
  • Patent number: 11150656
    Abstract: Systems and techniques for autonomous vehicle decision making may include training an autonomous vehicle decision making database by capturing an image including a first training object and a second training object during a training phase. The first training object may be classified as a first class and the second training object may be classified as a second class based on a driver gaze location associated with a driver of the vehicle. The database may be built based on classification of the first training object and the second training object. The autonomous vehicle decision making database may be utilized to classify a first object as a first class and a second object as a second class during an operation phase. A processor may perform a first computation associated with the first object based on the classification of the first object and the classification of the second object.
    Type: Grant
    Filed: November 19, 2018
    Date of Patent: October 19, 2021
    Assignee: Honda Motor Co., Ltd.
    Inventors: Teruhisa Misu, Ashish Tawari, Sujitha Catherine Martin
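Illustrative only: the entry above describes labeling training objects by whether the driver's gaze fell on them, storing those labels in a database, and using the database at run time to decide where to spend computation. The distance threshold, class names, and the nearest-gaze rule below are assumptions made for the sketch.

```python
# Toy gaze-based two-class labeling and a run-time priority lookup.
import math

GAZE_RADIUS_PX = 100.0  # assumed: an object this close to the gaze point counts as "attended"

database = []  # (object_type, class_label) records collected during the training phase

def classify_by_gaze(obj_xy, gaze_xy):
    return "attended" if math.dist(obj_xy, gaze_xy) <= GAZE_RADIUS_PX else "ignored"

def train_step(objects, gaze_xy):
    """objects: list of (object_type, (x, y)) detected in one captured image."""
    for obj_type, xy in objects:
        database.append((obj_type, classify_by_gaze(xy, gaze_xy)))

def operation_priority(obj_type):
    """During the operation phase, prioritize types the driver usually attended to."""
    labels = [cls for t, cls in database if t == obj_type]
    attended = labels.count("attended")
    return attended / len(labels) if labels else 0.5  # unseen types get middling priority

train_step([("cyclist", (300, 220)), ("billboard", (900, 80))], gaze_xy=(320, 240))
print(operation_priority("cyclist"), operation_priority("billboard"))  # 1.0 0.0
```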
  • Publication number: 20210248399
    Abstract: The present disclosure provides a method and system to operationalize driver eye movement data analysis based on moving objects of interest. Correlation and/or regression analyses between indirect (e.g., eye-tracking) and/or direct measures of driver awareness may identify variables that feature spatial and/or temporal aspects of driver eye glance behavior relative to objects of interest. The proposed systems and methods may be further combined with computer-vision techniques such as object recognition to (e.g., fully) automate eye movement data processing as well as machine learning approaches to improve the accuracy of driver awareness estimation.
    Type: Application
    Filed: November 17, 2020
    Publication date: August 12, 2021
    Inventors: Sujitha Catherine MARTIN, Teruhisa MISU, Hyungil KIM, Ashish TAWARI, Joseph L. GABBARD
  • Publication number: 20210232913
    Abstract: In some examples, a dynamic system, including a vehicle, may be represented using a graph-based representation. One or more nodes in the graph-based representation may correspond to one or more agents in the dynamic system, and one or more edges between the nodes in the graph-based representation may correspond to one or more interactions between the agents in the dynamic system. The interactions may be defined based on human domain knowledge of the dynamic system. The dynamic system may be modeled using a respective machine learning model that includes a reward decoder that operates on the graph-based representation and evaluates one or more reward functions for the dynamic system. The one or more reward functions may be defined based on the human domain knowledge of the dynamic system. Autonomous operation of the vehicle may be controlled based on the modeling of the dynamic system.
    Type: Application
    Filed: September 4, 2020
    Publication date: July 29, 2021
    Inventors: Sujitha Catherine MARTIN, Chen TANG
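A sketch under loud assumptions: the publication above describes a graph of agents (nodes) and interactions (edges) plus a learned reward decoder; here the decoder is a hand-written reward over edge features so the data flow is concrete. Nothing below reproduces the actual model or its reward functions.

```python
# Toy graph-based representation: agents as nodes, interactions as edges,
# and a stand-in reward decoder over that graph.
from dataclasses import dataclass

@dataclass
class Agent:              # graph node
    agent_id: str
    speed_mps: float

@dataclass
class Interaction:        # graph edge, defined from domain knowledge (e.g. "following")
    src: str
    dst: str
    gap_m: float          # headway between the two agents

def reward_decoder(agents, interactions):
    """Toy reward: penalize short gaps, mildly reward forward progress."""
    progress = 0.1 * sum(a.speed_mps for a in agents.values())
    safety = -sum(max(0.0, 10.0 - e.gap_m) for e in interactions)  # penalty inside 10 m
    return progress + safety

agents = {"ego": Agent("ego", 12.0), "lead": Agent("lead", 11.0)}
edges = [Interaction("ego", "lead", gap_m=6.5)]
print(reward_decoder(agents, edges))  # 0.1*23 - 3.5, approximately -1.2
```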
  • Patent number: 10902279
    Abstract: Saliency training may be provided to build a saliency database, which may be utilized to facilitate operation of an autonomous vehicle. The saliency database may be built by minimizing a loss function between a saliency prediction result and a saliency mapper result. The saliency mapper result may be obtained from a ground truth database, which includes image frames of an operation environment where objects or regions within respective image frames are associated with a positive saliency, a neutral saliency, or a negative saliency. Neutral saliency may be indicative of a detected gaze location of a driver corresponding to the object or region at a time prior to the time associated with a given image frame. The saliency prediction result may be generated based on features extracted from respective image frames, depth-wise concatenations associated with respective image frames, and a long short-term memory layer or a recurrent neural network.
    Type: Grant
    Filed: September 25, 2018
    Date of Patent: January 26, 2021
    Assignee: Honda Motor Co., Ltd.
    Inventors: Ashish Tawari, Sujitha Catherine Martin
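A minimal sketch, assuming the positive/neutral/negative saliency labels reduce to per-pixel targets of +1, 0, and -1, with neutral pixels excluded from the loss. The masked mean-squared-error loss and tensor shapes are illustrative, not the patented training procedure (which also involves depth-wise concatenations and a recurrent layer).

```python
# Toy masked loss between a saliency prediction and a three-valued saliency map.
import torch

def masked_saliency_loss(prediction: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """prediction, target: (H, W); target values in {-1, 0, +1}, where 0 = neutral."""
    mask = target != 0                     # neutral pixels contribute nothing
    diff = (prediction - target.float()) * mask
    return (diff ** 2).sum() / mask.sum().clamp(min=1)

pred = torch.zeros(4, 4)
tgt = torch.zeros(4, 4)
tgt[0, 0], tgt[3, 3] = 1.0, -1.0           # one positive and one negative pixel
print(masked_saliency_loss(pred, tgt))     # mean over the 2 labeled pixels -> 1.0
```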
  • Publication number: 20200377111
    Abstract: A trainer device trains an automated driver system. The trainer device may include a vehicle manager that manages data associated with controlling a vehicle and a simulation manager that manages data associated with simulating the vehicle. The vehicle manager may analyze vehicle data to identify an intervention event, and the simulation manager obtains a portion of the vehicle data corresponding to the intervention event to generate simulation data, obtains user data associated with the simulation data, analyzes the user data to determine whether the user data satisfies a predetermined intervention threshold, and, on condition that the user data satisfies the predetermined intervention threshold, transmits the user data to the vehicle manager for modifying the first control data.
    Type: Application
    Filed: May 30, 2019
    Publication date: December 3, 2020
    Inventors: Teruhisa Misu, Ashish Tawari, Sujitha Catherine Martin
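A hedged sketch of the flow described above: detect intervention events in logged vehicle data, replay the corresponding portion in simulation, and only forward user data that clears a predetermined threshold. The event criterion (hard braking) and both threshold values are illustrative assumptions, not the claimed method.

```python
# Toy intervention-event detection and threshold gating of user feedback.
from dataclasses import dataclass

HARD_BRAKE_MPS2 = -4.0          # assumed: deceleration beyond this marks an intervention event
INTERVENTION_THRESHOLD = 0.7    # assumed: minimum user score required to accept the feedback

@dataclass
class VehicleSample:
    t: float
    accel_mps2: float

def find_intervention_events(log):
    """Return timestamps where the driver intervened (hard braking, by assumption)."""
    return [s.t for s in log if s.accel_mps2 <= HARD_BRAKE_MPS2]

def accept_user_feedback(user_score: float) -> bool:
    """Only forward simulation-derived user data that satisfies the threshold."""
    return user_score >= INTERVENTION_THRESHOLD

log = [VehicleSample(0.0, -0.5), VehicleSample(1.0, -5.2), VehicleSample(2.0, -1.0)]
print(find_intervention_events(log), accept_user_feedback(0.85))  # [1.0] True
```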
  • Publication number: 20200298847
    Abstract: Determining object importance in vehicle control systems can include obtaining, for a vehicle in operation, an image of a dynamic scene, identifying an object type associated with one or more objects in the image, determining, based on the object type and a goal associated with the vehicle, an importance metric associated with the one or more objects, and controlling the vehicle based at least in part on the importance metric associated with the one or more objects.
    Type: Application
    Filed: March 20, 2019
    Publication date: September 24, 2020
    Inventors: Ashish Tawari, Sujitha Catherine Martin, Mingfei Gao
  • Publication number: 20200301437
    Abstract: A system and method for outputting vehicle dynamic controls using deep neural networks that include receiving environmental sensor data from at least one sensor of a vehicle of a surrounding environment of the vehicle. The system and method also include inputting the environmental sensor data to a primary deep neural network structure and inputting intermediate representation, at least one applicable traffic rule, and at least one applicable vehicle maneuver to a secondary deep neural network structure. The system and method further include outputting vehicle dynamic controls to autonomously control the vehicle to navigate within the surrounding environment of the vehicle based on the at least one applicable traffic rule and the at least one applicable vehicle maneuver.
    Type: Application
    Filed: March 20, 2019
    Publication date: September 24, 2020
    Inventors: Teruhisa Misu, Ashish Tawari, Sujitha Catherine Martin
  • Publication number: 20200250437
    Abstract: A system and method for providing context aware road user importance estimation that include receiving at least one image of a vicinity of an ego vehicle. The system and method also include analyzing the at least one image to determine a local context associated with at least one road user located within the vicinity of the ego vehicle. The system and method additionally include determining a global context associated with the ego vehicle. The system and method further include fusing the local context and the global context to classify at least one highly important road user that is to be accounted for with respect to operating the ego vehicle.
    Type: Application
    Filed: August 16, 2019
    Publication date: August 6, 2020
Inventors: Alireza Rahimpour, Sujitha Catherine Martin, Ashish Tawari, Hairong Qi
  • Publication number: 20200159214
    Abstract: Systems and techniques for autonomous vehicle decision making may include training an autonomous vehicle decision making database by capturing an image including a first training object and a second training object during a training phase. The first training object may be classified as a first class and the second training object may be classified as a second class based on a driver gaze location associated with a driver of the vehicle. The database may be built based on classification of the first training object and the second training object. The autonomous vehicle decision making database may be utilized to classify a first object as a first class and a second object as a second class during an operation phase. A processor may perform a first computation associated with the first object based on the classification of the first object and the classification of the second object.
    Type: Application
    Filed: November 19, 2018
    Publication date: May 21, 2020
    Inventors: Teruhisa Misu, Ashish Tawari, Sujitha Catherine Martin
  • Publication number: 20200097754
    Abstract: Saliency training may be provided to build a saliency database, which may be utilized to facilitate operation of an autonomous vehicle. The saliency database may be built by minimizing a loss function between a saliency prediction result and a saliency mapper result. The saliency mapper result may be obtained from a ground truth database, which includes image frames of an operation environment where objects or regions within respective image frames are associated with a positive saliency, a neutral saliency, or a negative saliency. Neutral saliency may be indicative of a detected gaze location of a driver corresponding to the object or region at a time prior to the time associated with a given image frame. The saliency prediction result may be generated based on features extracted from respective image frames, depth-wise concatenations associated with respective image frames, and a long short-term memory layer or a recurrent neural network.
    Type: Application
    Filed: September 25, 2018
    Publication date: March 26, 2020
    Inventors: Ashish Tawari, Sujitha Catherine Martin