Patents by Inventor Teruhisa Misu

Teruhisa Misu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11447127
    Abstract: Aspects of the present disclosure may include methods, apparatuses, and computer-readable media for receiving one or more images having a plurality of objects, receiving a notification from an occupant of a self-driving vehicle, generating an attention map highlighting the plurality of objects based on at least one of the one or more images and the notification, and providing at least one of a steering control or a velocity control to operate the self-driving vehicle based on the attention map and the notification. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: June 10, 2019
    Date of Patent: September 20, 2022
    Assignees: HONDA MOTOR CO., LTD., THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Ashish Tawari, Yi-Ting Chen, Teruhisa Misu, John F. Canny, Jinkyu Kim
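    The following Python sketch is only a rough illustration of the idea in the abstract above: per-object saliency is boosted when the occupant's notification mentions an object's class, the result is rendered as an attention map, and a conservative velocity command is derived from it. The function names, the boost factor, and the velocity rule are assumptions, not details from the patent.

    ```python
    # Minimal sketch (not the patented implementation): combine per-object
    # saliency with an occupant notification to form an attention map and a
    # conservative velocity command. All names and weights are illustrative.
    import numpy as np

    def build_attention_map(objects, notification, shape=(120, 160)):
        """objects: list of dicts with 'box' (x0, y0, x1, y1), 'label', 'saliency'."""
        attn = np.zeros(shape, dtype=np.float32)
        for obj in objects:
            weight = obj["saliency"]
            if obj["label"] in notification.lower():   # occupant mentioned this object class
                weight *= 2.0                          # illustrative boost factor
            x0, y0, x1, y1 = obj["box"]
            attn[y0:y1, x0:x1] = np.maximum(attn[y0:y1, x0:x1], weight)
        return attn / max(attn.max(), 1e-6)            # normalize to [0, 1]

    def velocity_command(attn, base_speed=10.0):
        """Slow down as highlighted regions occupy more of the scene."""
        coverage = float((attn > 0.5).mean())
        return base_speed * (1.0 - 0.5 * coverage)

    objects = [{"box": (40, 60, 80, 110), "label": "pedestrian", "saliency": 0.7}]
    attn = build_attention_map(objects, "watch the pedestrian on the right")
    print(velocity_command(attn))
    ```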
  • Publication number: 20220277165
    Abstract: A system and method for improving driver situation awareness prediction using human visual sensory and memory mechanisms that include receiving data associated with a driving scene of a vehicle and an eye gaze of a driver of the vehicle. The system and method also include analyzing the data and extracting features associated with objects located within the driving scene and determining a situational awareness score that is associated with the situational awareness of the driver with respect to each of the objects located within the driving scene. The system and method further include communicating control signals to electronically control at least one system of the vehicle based on the situational awareness score that is associated with each of the objects located within the driving scene. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: May 19, 2021
    Publication date: September 1, 2022
    Inventors: Haibei ZHU, Teruhisa MISU, Sujitha Catherine MARTIN, Xingwei WU, Kumar AKASH
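    A minimal sketch of one way to turn eye-gaze samples into per-object situational awareness scores, with an exponential decay standing in for the memory mechanism mentioned above. The decay rate, the squashing function, and all names are illustrative assumptions.

    ```python
    # Illustrative sketch only: approximate a per-object situational awareness
    # score from gaze samples, with exponential decay standing in for the
    # memory mechanism described in the abstract. Parameter values are assumed.
    import math

    def awareness_scores(gaze_samples, objects, decay_per_s=0.1, now=None):
        """gaze_samples: list of (t, x, y); objects: dict name -> (x0, y0, x1, y1)."""
        if now is None:
            now = max(t for t, _, _ in gaze_samples)
        scores = {name: 0.0 for name in objects}
        for t, x, y in gaze_samples:
            age = now - t
            for name, (x0, y0, x1, y1) in objects.items():
                if x0 <= x <= x1 and y0 <= y <= y1:
                    scores[name] += math.exp(-decay_per_s * age)  # older fixations count less
        # squash to [0, 1] so downstream control logic can threshold it
        return {name: 1.0 - math.exp(-s) for name, s in scores.items()}

    objects = {"lead_car": (300, 200, 380, 260), "cyclist": (100, 210, 140, 270)}
    gaze = [(0.0, 320, 230), (0.5, 330, 235), (2.0, 120, 240)]
    print(awareness_scores(gaze, objects))
    ```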
  • Patent number: 11410048
    Abstract: According to one aspect, anomalous event detection based on deep learning may include a system for anomalous event detection for a device. The system includes a computing device having a processor, an encoding module, and a decoding module. The processor is configured to receive sensor data. The encoding module generates reconstruction data based on the sensor data, identifies at least one reconstruction error in the reconstruction data, and determines an anomaly score based on the at least one reconstruction error. The decoding module generates an action prediction based on the sensor data and determines a likelihood value based on the action prediction. The processor can then calculate a scaled anomaly score based on the anomaly score and the likelihood value and execute an action based on the scaled anomaly score. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: May 17, 2019
    Date of Patent: August 9, 2022
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Teruhisa Misu, Vidyasagar Sadhu, Dario Pompili
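    The sketch below mirrors only the scoring flow described above: a reconstruction-error anomaly score, an action-prediction likelihood, and a combined scaled score. The stand-in "reconstruction" model and the division-based combination rule are assumptions; the abstract does not disclose the exact formula.

    ```python
    # Hedged sketch of the scoring flow only: the encoder/decoder here are
    # trivial stand-ins, and the way the two signals are combined is an
    # assumption, not the formula from the patent.
    import numpy as np

    def reconstruction_anomaly(sensor_window, mean, std):
        """Anomaly score as normalized reconstruction error against a simple model."""
        reconstruction = mean                       # stand-in for an autoencoder output
        error = np.abs(sensor_window - reconstruction) / (std + 1e-6)
        return float(error.mean())

    def action_likelihood(predicted_probs, observed_action):
        """Likelihood of the action the driver/vehicle actually took."""
        return float(predicted_probs[observed_action])

    def scaled_anomaly(anomaly, likelihood, eps=1e-3):
        # Assumption: surprising sensor data matters more when the predicted
        # action was also unlikely, so divide by the likelihood.
        return anomaly / (likelihood + eps)

    window = np.array([0.1, 0.4, 3.2, 0.2])
    score = reconstruction_anomaly(window, mean=np.full(4, 0.2), std=np.full(4, 0.3))
    print(scaled_anomaly(score, action_likelihood(np.array([0.7, 0.2, 0.1]), 2)))
    ```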
  • Publication number: 20220204020
    Abstract: In some examples, one or more characteristics of one or more driving scenes may be obtained. Based at least on the one or more characteristics, one or more behaviors of a simulated driver may be simulated via a machine learning model. An operation associated with one or more advanced driving assistance system (ADAS) functions may be performed based at least on the simulated one or more behaviors.
    Type: Application
    Filed: December 31, 2020
    Publication date: June 30, 2022
    Inventor: Teruhisa MISU
  • Patent number: 11370446
    Abstract: A system and method for learning naturalistic driving behavior based on vehicle dynamic data that include receiving vehicle dynamic data and image data and analyzing the vehicle dynamic data and the image data to detect a plurality of behavioral events. The system and method also include classifying at least one behavioral event as a stimulus-driven action and predicting at least one behavioral event as a goal-oriented action based on the stimulus-driven action. The system and method additionally include building a naturalistic driving behavior data set that includes annotations that are based on the at least one behavioral event that is classified as the stimulus-driven action. The system and method further include controlling a vehicle to be autonomously driven based on the naturalistic driving behavior data set. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: November 9, 2018
    Date of Patent: June 28, 2022
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Teruhisa Misu, Yi-Ting Chen
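    A deliberately simple, rule-based stand-in for the classification step described above: a behavioral event that closely follows a detected hazard is labeled stimulus-driven, otherwise goal-oriented, and the result is stored as an annotation. The hazard list, time window, and field names are assumptions.

    ```python
    # Minimal rule-based sketch, not the learned model from the patent: a braking
    # or swerving event that coincides with a detected hazard in the image data
    # is treated as stimulus-driven; otherwise it is treated as goal-oriented.
    def classify_behavioral_event(event, detections, reaction_window_s=2.0):
        """event: dict with 'type', 't'; detections: list of dicts with 'label', 't'."""
        hazards = {"pedestrian", "cut_in_vehicle", "red_light"}
        for det in detections:
            if det["label"] in hazards and 0.0 <= event["t"] - det["t"] <= reaction_window_s:
                return {"event": event["type"], "class": "stimulus-driven", "cause": det["label"]}
        return {"event": event["type"], "class": "goal-oriented", "cause": None}

    annotations = [
        classify_behavioral_event({"type": "brake", "t": 12.4},
                                  [{"label": "pedestrian", "t": 11.1}]),
        classify_behavioral_event({"type": "lane_change", "t": 30.0}, []),
    ]
    print(annotations)   # illustrative naturalistic driving behavior data set entries
    ```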
  • Patent number: 11332165
    Abstract: An autonomous driving agent is provided. The autonomous driving agent determines a set of observations from sensor information of a sensor system of a vehicle. The set of observations includes human attention information for a scene of the surrounding environment and a level of human reliance as indicated by human inputs to the autonomous driving agent. The autonomous driving agent estimates, based on the set of observations, belief states for a first state of human trust in the autonomous driving agent and a second state of the human's cognitive workload during a journey. The autonomous driving agent selects, based on the estimated belief states, a first value for a first action associated with a level of automation transparency between a human user and the autonomous driving agent, and controls a display system based on the selected first value to display a cue for calibration of the human trust in the autonomous driving agent. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: January 27, 2020
    Date of Patent: May 17, 2022
    Assignee: Honda Motor Co., Ltd.
    Inventors: Kumar Akash, Teruhisa Misu
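    The sketch below illustrates the general shape of such an agent: a discrete Bayesian belief update over combined trust/workload states and a threshold policy for picking a transparency level. The state space, observation model, and policy are invented for the example and are not the patented model.

    ```python
    # Illustrative discrete belief update over (trust, workload) states followed
    # by a lookup of a transparency level; the state space, observation model,
    # and policy table are all assumptions for this sketch.
    import numpy as np

    STATES = [("low_trust", "low_load"), ("low_trust", "high_load"),
              ("high_trust", "low_load"), ("high_trust", "high_load")]

    # P(observation | state) for a binary observation "driver relied on automation".
    P_RELY_GIVEN_STATE = np.array([0.2, 0.3, 0.8, 0.6])

    def update_belief(belief, driver_relied: bool):
        likelihood = P_RELY_GIVEN_STATE if driver_relied else 1.0 - P_RELY_GIVEN_STATE
        posterior = belief * likelihood
        return posterior / posterior.sum()

    def transparency_action(belief):
        # Assumed policy: show more explanation cues when low-trust states dominate.
        p_low_trust = belief[0] + belief[1]
        return "detailed_cues" if p_low_trust > 0.5 else "minimal_cues"

    belief = np.full(len(STATES), 0.25)          # uniform prior over the four states
    for relied in [False, False, True]:          # observations from the sensor system
        belief = update_belief(belief, relied)
    print(belief, transparency_action(belief))
    ```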
  • Patent number: 11216001
    Abstract: A system and method for outputting vehicle dynamic controls using deep neural networks that include receiving environmental sensor data associated with a surrounding environment of a vehicle from at least one sensor of the vehicle. The system and method also include inputting the environmental sensor data to a primary deep neural network structure and inputting an intermediate representation, at least one applicable traffic rule, and at least one applicable vehicle maneuver to a secondary deep neural network structure. The system and method further include outputting vehicle dynamic controls to autonomously control the vehicle to navigate within the surrounding environment of the vehicle based on the at least one applicable traffic rule and the at least one applicable vehicle maneuver. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: March 20, 2019
    Date of Patent: January 4, 2022
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Teruhisa Misu, Ashish Tawari, Sujitha Catherine Martin
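    A structural sketch (in PyTorch, with assumed layer sizes) of the two-network arrangement described above: a primary network produces an intermediate representation from sensor data, and a secondary network combines it with encoded traffic rules and a candidate maneuver to output steering and velocity values.

    ```python
    # Architectural sketch only (assumed layer sizes): a primary network maps
    # environmental sensor data to an intermediate representation, and a
    # secondary network consumes that representation together with encoded
    # traffic rules and a maneuver to output steering and velocity values.
    import torch
    import torch.nn as nn

    class PrimaryNet(nn.Module):
        def __init__(self, sensor_dim=64, hidden=128, rep_dim=32):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(sensor_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, rep_dim), nn.ReLU())
        def forward(self, sensors):
            return self.net(sensors)                      # intermediate representation

    class SecondaryNet(nn.Module):
        def __init__(self, rep_dim=32, rule_dim=8, maneuver_dim=4):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(rep_dim + rule_dim + maneuver_dim, 64),
                                     nn.ReLU(), nn.Linear(64, 2))   # [steering, velocity]
        def forward(self, rep, rule, maneuver):
            return self.net(torch.cat([rep, rule, maneuver], dim=-1))

    primary, secondary = PrimaryNet(), SecondaryNet()
    sensors = torch.randn(1, 64)                          # placeholder sensor batch
    rule = torch.zeros(1, 8); rule[0, 2] = 1.0            # one-hot "yield" rule (assumed)
    maneuver = torch.zeros(1, 4); maneuver[0, 1] = 1.0    # one-hot "left turn" (assumed)
    controls = secondary(primary(sensors), rule, maneuver)
    print(controls.shape)                                 # torch.Size([1, 2])
    ```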
  • Patent number: 11150656
    Abstract: Systems and techniques for autonomous vehicle decision making may include training an autonomous vehicle decision making database by capturing an image including a first training object and a second training object during a training phase. The first training object may be classified as a first class and the second training object may be classified as a second class based on a driver gaze location associated with a driver of a vehicle. The database may be built based on classification of the first training object and the second training object. The autonomous vehicle decision making database may be utilized to classify a first object as a first class and a second object as a second class during an operation phase. A processor may perform a first computation associated with the first object based on the classification of the first object and the classification of the second object. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: November 19, 2018
    Date of Patent: October 19, 2021
    Assignee: Honda Motor Co., Ltd.
    Inventors: Teruhisa Misu, Ashish Tawari, Sujitha Catherine Martin
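    One simple way to realize the gaze-based split described above is to test whether the driver's gaze point falls within (or near) each object's bounding box, as in this sketch; the margin and the two-class naming are assumptions.

    ```python
    # Hedged sketch: classify training objects by whether the driver's gaze
    # point falls inside (or near) their bounding box, one simple way to split
    # objects into a gaze-relevant first class and a second class.
    def classify_by_gaze(objects, gaze_xy, margin=10):
        """objects: list of dicts with 'id' and 'box' (x0, y0, x1, y1)."""
        gx, gy = gaze_xy
        first_class, second_class = [], []
        for obj in objects:
            x0, y0, x1, y1 = obj["box"]
            if (x0 - margin) <= gx <= (x1 + margin) and (y0 - margin) <= gy <= (y1 + margin):
                first_class.append(obj["id"])
            else:
                second_class.append(obj["id"])
        return {"class_1": first_class, "class_2": second_class}

    frame_objects = [{"id": "veh_3", "box": (200, 150, 260, 200)},
                     {"id": "sign_1", "box": (500, 40, 530, 90)}]
    print(classify_by_gaze(frame_objects, gaze_xy=(215, 170)))
    ```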
  • Publication number: 20210248399
    Abstract: The present disclosure provides a method and system to operationalize driver eye movement data analysis based on moving objects of interest. Correlation and/or regression analyses between indirect (e.g., eye-tracking) and/or direct measures of driver awareness may identify variables that feature spatial and/or temporal aspects of driver eye glance behavior relative to objects of interest. The proposed systems and methods may be further combined with computer-vision techniques such as object recognition to automate (e.g., fully automate) eye movement data processing, as well as with machine learning approaches to improve the accuracy of driver awareness estimation. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: November 17, 2020
    Publication date: August 12, 2021
    Inventors: Sujitha Catherine MARTIN, Teruhisa MISU, Hyungil KIM, Ashish TAWARI, Joseph L. GABBARD
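    The correlation/regression step can be illustrated with a few lines of NumPy relating a glance feature (dwell time on an object of interest) to a direct awareness measure; the data values below are made up purely for the example.

    ```python
    # Sketch of the correlation step only: relate a spatial/temporal glance
    # feature (total dwell time on an object of interest) to a direct awareness
    # measure (e.g., a post-drive quiz score). Data values are illustrative.
    import numpy as np

    dwell_time_s    = np.array([0.2, 0.9, 1.5, 0.4, 2.1, 0.0, 1.1])   # eye-tracking feature
    awareness_score = np.array([0.1, 0.6, 0.9, 0.3, 1.0, 0.0, 0.7])   # direct measure

    r = np.corrcoef(dwell_time_s, awareness_score)[0, 1]              # Pearson correlation
    slope, intercept = np.polyfit(dwell_time_s, awareness_score, deg=1)  # simple regression
    print(f"r={r:.2f}, slope={slope:.2f}, intercept={intercept:.2f}")
    ```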
  • Publication number: 20210229707
    Abstract: An autonomous driving agent is provided. The autonomous driving agent determines a set of observations from sensor information of a sensor system of a vehicle. The set of observations includes human attention information for a scene of the surrounding environment and a level of human reliance as indicated by human inputs to the autonomous driving agent. The autonomous driving agent estimates, based on the set of observations, belief states for a first state of human trust in the autonomous driving agent and a second state of the human's cognitive workload during a journey. The autonomous driving agent selects, based on the estimated belief states, a first value for a first action associated with a level of automation transparency between a human user and the autonomous driving agent, and controls a display system based on the selected first value to display a cue for calibration of the human trust in the autonomous driving agent.
    Type: Application
    Filed: January 27, 2020
    Publication date: July 29, 2021
    Inventors: Kumar Akash, Teruhisa Misu
  • Patent number: 11042156
    Abstract: A system and method for learning and executing naturalistic driving behavior that include classifying a driving maneuver as a goal-oriented action or a stimulus-driven action based on data associated with a trip of a vehicle. The system and method also include determining a cause associated with the driving maneuver classified as a stimulus-driven action and determining an attention capturing traffic related object associated with the driving maneuver. The system and method additionally include building a naturalistic driving behavior data set that includes at least one of: an annotation of the driving maneuver based on a classification of the driving maneuver, an annotation of the cause, and an annotation of the attention capturing traffic object. The system and method further include controlling the vehicle to be autonomously driven based on the naturalistic driving behavior data set.
    Type: Grant
    Filed: May 14, 2018
    Date of Patent: June 22, 2021
    Assignee: Honda Motor Co., Ltd.
    Inventors: Yi-Ting Chen, Teruhisa Misu, Vasili Ramanishka
  • Publication number: 20210078608
    Abstract: A system and method for providing adaptive trust calibration in driving automation that include receiving image data of a vehicle and vehicle automation data associated with automated driving of the vehicle. The system and method also include analyzing the image data and vehicle automation data, determining an eye gaze direction of a driver of the vehicle and a driver reliance upon automation of the vehicle, and processing a Markov decision process model based on the eye gaze direction and the driver reliance to model effects of human trust and workload on observable variables to determine a control policy that provides an optimal level of automation transparency. The system and method further include controlling autonomous transparency of at least one driving function of the vehicle based on the control policy.
    Type: Application
    Filed: February 21, 2020
    Publication date: March 18, 2021
    Inventor: Teruhisa Misu
  • Patent number: 10943154
    Abstract: Multi-modal data representing driving events and corresponding actions related to the driving events can be obtained and used to train a neural network at least in part by using a triplet loss computed for the driving events as a regression loss to determine an embedding of driving event data. In some cases, using the trained neural network, a retrieval request for an input driving event and corresponding action can be processed by determining, from the neural network, one or more similar driving events or corresponding actions in the multi-modal data. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: March 9, 2021
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Ahmed Taha, Yi-Ting Chen, Teruhisa Misu, Larry Davis, Xitong Yang
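    A minimal PyTorch sketch of the training signal named above: a triplet margin loss shapes an embedding of driving-event features so that similar events sit close together, after which retrieval is nearest-neighbor search. The encoder, feature dimensions, and batch contents are assumptions.

    ```python
    # Minimal sketch of the training signal (assumed encoder and dimensions): a
    # triplet margin loss pulls an anchor driving event toward a similar event
    # and pushes it away from a dissimilar one in the learned embedding space.
    import torch
    import torch.nn as nn

    encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
    triplet_loss = nn.TripletMarginLoss(margin=1.0)

    anchor_feats   = torch.randn(8, 128)   # multi-modal features of the anchor events
    positive_feats = torch.randn(8, 128)   # events with similar corresponding actions
    negative_feats = torch.randn(8, 128)   # events with different corresponding actions

    loss = triplet_loss(encoder(anchor_feats), encoder(positive_feats), encoder(negative_feats))
    loss.backward()                        # gradients flow into the event encoder

    # Retrieval then reduces to nearest-neighbor search over embedded events:
    query = encoder(torch.randn(1, 128))
    database = encoder(torch.randn(100, 128))
    nearest = torch.cdist(query, database).argmin(dim=1)
    ```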
  • Patent number: 10902303
    Abstract: Methods, systems, and computer-readable mediums storing computer-executable code for visual recognition implementing a triplet loss function are provided. The method includes receiving an image generated from an image source associated with a vehicle. The method may also include analyzing the image based on a convolutional neural network. The convolutional neural network may apply both a triplet loss function and a softmax loss function to the image to determine classification logits. The method may also include classifying the image into a predetermined class distribution based upon the determined classification logits. The method may also include instructing the vehicle to perform a specific task based upon the classified image. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: January 26, 2021
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Ahmed Taha, Yi-Ting Chen, Teruhisa Misu, Larry Davis
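    A short sketch of a joint objective of the kind described above: a shared backbone feeds both a softmax/cross-entropy classification head and a triplet loss on the embedding. The loss weighting, layer sizes, and class count are assumptions.

    ```python
    # Sketch of the joint objective only (weights and network are assumptions):
    # a shared backbone produces both classification logits (cross-entropy /
    # softmax loss) and an embedding used by a triplet loss.
    import torch
    import torch.nn as nn

    backbone = nn.Sequential(nn.Linear(128, 64), nn.ReLU())
    classifier = nn.Linear(64, 10)          # 10 assumed image classes
    ce_loss = nn.CrossEntropyLoss()
    triplet_loss = nn.TripletMarginLoss(margin=0.5)

    anchor, positive, negative = (torch.randn(16, 128) for _ in range(3))
    labels = torch.randint(0, 10, (16,))

    emb_a, emb_p, emb_n = backbone(anchor), backbone(positive), backbone(negative)
    logits = classifier(emb_a)

    loss = ce_loss(logits, labels) + 0.5 * triplet_loss(emb_a, emb_p, emb_n)  # assumed weighting
    loss.backward()
    predicted_class = logits.argmax(dim=1)  # classification used to instruct the vehicle task
    ```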
  • Publication number: 20200384981
    Abstract: Aspects of the present disclosure may include methods, apparatuses, and computer-readable media for receiving one or more images having a plurality of objects, receiving a notification from an occupant of a self-driving vehicle, generating an attention map highlighting the plurality of objects based on at least one of the one or more images and the notification, and providing at least one of a steering control or a velocity control to operate the self-driving vehicle based on the attention map and the notification.
    Type: Application
    Filed: June 10, 2019
    Publication date: December 10, 2020
    Inventors: Ashish Tawari, Yi-Ting Chen, Teruhisa Misu, John F. Canny, Jinkyu Kim
  • Publication number: 20200377111
    Abstract: A trainer device trains an automated driver system. The trainer device may include a vehicle manager that manages data associated with controlling a vehicle and a simulation manager that manages data associated with simulating the vehicle. The vehicle manager may analyze vehicle data to identify an intervention event, and the simulation manager obtains a portion of the vehicle data corresponding to the intervention event to generate simulation data, obtains user data associated with the simulation data, analyzes the user data to determine whether the user data satisfies a predetermined intervention threshold, and, on condition that the user data satisfies the predetermined intervention threshold, transmits the user data to the vehicle manager for modifying the control data used to control the vehicle.
    Type: Application
    Filed: May 30, 2019
    Publication date: December 3, 2020
    Inventors: Teruhisa Misu, Ashish Tawari, Sujitha Catherine Martin
  • Publication number: 20200364579
    Abstract: According to one aspect, anomalous event detection based on deep learning may include a system for anomalous event detection for a device. The system includes a computing device having a processor, an encoding module, and a decoding module. The processor is configured to receive sensor data. The encoding module generates reconstruction data based on the sensor data, identifies at least one reconstruction error in the reconstruction data, and determines an anomaly score based on the at least one reconstruction error. The decoding module generates an action prediction based on the sensor data and determines a likelihood value based on the action prediction. The processor can then calculate a scaled anomaly score based on the anomaly score and the likelihood value and execute an action based on the scaled anomaly score.
    Type: Application
    Filed: May 17, 2019
    Publication date: November 19, 2020
    Inventors: Teruhisa Misu, Vidyasagar Sadhu, Dario Pompili
  • Publication number: 20200301437
    Abstract: A system and method for outputting vehicle dynamic controls using deep neural networks that include receiving environmental sensor data associated with a surrounding environment of a vehicle from at least one sensor of the vehicle. The system and method also include inputting the environmental sensor data to a primary deep neural network structure and inputting an intermediate representation, at least one applicable traffic rule, and at least one applicable vehicle maneuver to a secondary deep neural network structure. The system and method further include outputting vehicle dynamic controls to autonomously control the vehicle to navigate within the surrounding environment of the vehicle based on the at least one applicable traffic rule and the at least one applicable vehicle maneuver.
    Type: Application
    Filed: March 20, 2019
    Publication date: September 24, 2020
    Inventors: Teruhisa Misu, Ashish Tawari, Sujitha Catherine Martin
  • Patent number: 10759424
    Abstract: The systems and methods provided herein are directed to the uploading and transmission of vehicle data to a remote system when a physiological event for a driver has been detected using one or more sensors. Information such as the driver's heart rate, temperature, voice inflection, or facial expression may be monitored to detect the physiological event. Vehicle data, such as gathering or control system data, may be sent once the event has been detected. Selected vehicle data associated with the event, or all data during the time of the event, may be sent. After receiving the vehicle data, the remote system may process or store it, where it may be used to modify automated driving functionalities. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: August 16, 2016
    Date of Patent: September 1, 2020
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Teruhisa Misu, Nanxiang Li, Ashish Tawari
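    A toy version of the trigger logic described above: a heart-rate spike relative to a personalized baseline is treated as a physiological event, and only vehicle records near that moment are selected for upload. The z-score threshold, window, and record format are assumptions, and the actual transmission is replaced by a print.

    ```python
    # Illustrative trigger logic only: a heart-rate spike above a personalized
    # baseline stands in for physiological event detection, and vehicle data
    # recorded around that time is selected for upload. Thresholds and the
    # upload step are assumptions for this sketch.
    import statistics

    def detect_physiological_event(heart_rates, z_threshold=3.0):
        baseline = statistics.mean(heart_rates[:-1])
        spread = statistics.pstdev(heart_rates[:-1]) or 1.0
        return (heart_rates[-1] - baseline) / spread > z_threshold

    def select_vehicle_data(log, event_time, window_s=10.0):
        """Keep only records within +/- window_s of the detected event."""
        return [rec for rec in log if abs(rec["t"] - event_time) <= window_s]

    heart_rates = [62, 64, 63, 61, 65, 63, 112]        # last sample looks like an event
    vehicle_log = [{"t": 98.0, "speed": 21.0}, {"t": 103.5, "speed": 9.5},
                   {"t": 140.0, "speed": 22.0}]
    if detect_physiological_event(heart_rates):
        payload = select_vehicle_data(vehicle_log, event_time=103.5)
        print("upload to remote system:", payload)      # stand-in for the actual transmission
    ```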
  • Publication number: 20200234086
    Abstract: Multi-modal data representing driving events and corresponding actions related to the driving events can be obtained and used to train a neural network at least in part by using a triplet loss computed for the driving events as a regression loss to determine an embedding of driving event data. In some cases, using the trained neural network, a retrieval request for an input driving event and corresponding action can be processed by determining, from the neural network, one or more similar driving events or corresponding actions in the multi-modal data.
    Type: Application
    Filed: January 22, 2019
    Publication date: July 23, 2020
    Inventors: Ahmed TAHA, Yi-Ting CHEN, Teruhisa MISU, Larry DAVIS, Xitong YANG