Patents by Inventor Teruhisa Misu
Teruhisa Misu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11954921
Abstract: A system and method for improving driver situation awareness prediction using human visual sensory and memory mechanism that includes receiving data associated with a driving scene of a vehicle and an eye gaze of a driver of the vehicle. The system and method also include analyzing the data and extracting features associated with objects located within the driving scene and determining a situational awareness score that is associated with a situational awareness of the driver with respect to each of the objects located within the driving scene. The system and method further include communicating control signals to electronically control at least one system of the vehicle based on the situational awareness score that is associated with each of the objects located within the driving scene.
Type: Grant
Filed: May 19, 2021
Date of Patent: April 9, 2024
Assignee: HONDA MOTOR CO., LTD.
Inventors: Haibei Zhu, Teruhisa Misu, Sujitha Catherine Martin, Xingwei Wu, Kumar Akash
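The abstract above combines gaze behavior with a memory mechanism to score per-object awareness. A minimal sketch of that idea, assuming a hypothetical dwell-time term and an exponential forgetting curve (the object names, saturation point, and half-life are invented for illustration, not taken from the patent):

```python
# Illustrative sketch only, not the patented method: a toy per-object
# situational-awareness score from gaze dwell time plus memory decay.

def awareness_score(fixation_ms, seconds_since_last_fixation, half_life_s=5.0):
    """fixation_ms: total gaze dwell on the object (ms);
    seconds_since_last_fixation: recency of the last fixation;
    half_life_s: hypothetical memory half-life modelling forgetting."""
    attention = min(fixation_ms / 1000.0, 1.0)            # saturate at 1 s dwell
    memory = 0.5 ** (seconds_since_last_fixation / half_life_s)
    return attention * memory

# Hypothetical scene: (dwell ms, seconds since last fixation) per object.
scene = {"pedestrian": (800, 1.0), "lead_car": (1500, 0.0), "cyclist": (100, 12.0)}
scores = {obj: awareness_score(ms, dt) for obj, (ms, dt) in scene.items()}
```

A recently and thoroughly observed object scores near 1.0, while an object glanced at briefly long ago decays toward 0, which is the shape of behavior the abstract's control signals would key off.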
-
Publication number: 20240071090
Abstract: Provided is a mobile object control device including a storage medium storing a computer-readable command and a processor connected to the storage medium, the processor executing the computer-readable command to: acquire a photographed image, which is obtained by photographing surroundings of a mobile object by a camera mounted on the mobile object, and an input instruction sentence, which is input by a user of the mobile object; detect a stop position of the mobile object corresponding to the input instruction sentence in the photographed image by inputting at least the photographed image and the input instruction sentence into a trained model including a pre-trained visual-language model, the trained model being trained so as to receive input of at least an image and an instruction sentence to output a stop position of the mobile object corresponding to the instruction sentence in the image; and cause the mobile object to travel to the stop position.
Type: Application
Filed: August 25, 2022
Publication date: February 29, 2024
Inventors: Naoki Hosomi, Teruhisa Misu, Kentaro Yamada
-
Publication number: 20240043027
Abstract: According to one aspect, an adaptive driving style system may include a set of two or more sensors, a memory, and a processor. The set of two or more sensors may receive two or more sensor signals. The memory may store one or more instructions. The processor may execute one or more of the instructions stored on the memory to perform one or more acts, actions, or steps, including training a trust model using two or more of the sensor signals as input, training a preference model using the trust model and two or more of the sensor signals as input, and generating a driving style preference based on an adaptive driving style model including the trust model and the preference model.
Type: Application
Filed: August 8, 2022
Publication date: February 8, 2024
Inventors: Zhaobo K. ZHENG, Teruhisa MISU, Kumar AKASH
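The two-stage pipeline described here (trust model feeding a preference model) can be sketched with toy stand-ins; the linear form, thresholds, and signal names below are invented for illustration and are not the patented models:

```python
# Hypothetical sketch of a trust model whose output feeds a preference model.

def trust_estimate(takeover_rate, gaze_on_road_ratio):
    """Toy trust model: fewer takeovers and less road monitoring -> more trust.
    Coefficients are invented; output is clamped to [0, 1]."""
    return max(0.0, min(1.0, 1.0 - 0.7 * takeover_rate - 0.3 * gaze_on_road_ratio))

def driving_style_preference(trust, speed_pref):
    """Toy preference model: map trust plus a raw speed preference to a style."""
    if trust < 0.3:
        return "conservative"
    return "assertive" if speed_pref > 0.6 else "moderate"

style = driving_style_preference(
    trust_estimate(takeover_rate=0.1, gaze_on_road_ratio=0.2), speed_pref=0.8)
```

The point of the staging is that the preference model never sees raw sensor signals alone; it conditions on the estimated trust state, so the same speed preference can yield different styles for a trusting versus a distrusting occupant.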
-
Publication number: 20240025418
Abstract: According to one aspect, profile modeling may be achieved by receiving a first set of data and performing feature selection on the first set of data, receiving a second set of data and performing classification on the second set of data using fuzzy logic inference, receiving a third set of data and performing clustering on the third set of data using hierarchical cluster analysis, and generating a prediction model based on the first set of data, the second set of data, and the third set of data. The prediction model may generate a prediction for profile modeling by receiving a first input of the same data type as the first set of data, a second input of the same data type as the second set of data, and outputting the prediction for profile modeling having the same data type as the third set of data.
Type: Application
Filed: July 20, 2022
Publication date: January 25, 2024
Inventors: Xishun LIAO, Shashank MEHROTRA, Chun-Ming Samson HO, Teruhisa MISU
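The fuzzy-logic classification stage named in the abstract is commonly built from membership functions. A minimal sketch using triangular memberships; the speed classes and breakpoints are hypothetical, not from the patent:

```python
# Minimal fuzzy-membership sketch for a classification stage.

def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 at a and c, 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify_speed(kmh):
    """Fuzzy memberships in three hypothetical driver-speed classes."""
    return {
        "slow": triangular(kmh, 0, 30, 60),
        "normal": triangular(kmh, 40, 70, 100),
        "fast": triangular(kmh, 80, 120, 160),
    }

m = classify_speed(55)   # partially "slow", mostly "normal", not "fast"
```

Unlike a crisp classifier, a reading near a class boundary belongs to both neighboring classes with graded membership, which is what makes downstream profile clustering tolerant of noisy driving data.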
-
Publication number: 20230391366
Abstract: A system and method for detecting a perceived level of driver discomfort in an automated vehicle that include receiving image data associated with a driving scene of an ego vehicle, dynamic data associated with an operation of the ego vehicle, and driver data associated with a driver of the ego vehicle during autonomous operation of the ego vehicle. The system and method also include analyzing the image data, the dynamic data, and the driver data and extracting features associated with a plurality of modalities. The system and method additionally include analyzing the extracted features and detecting the perceived level of driver discomfort. The system and method further include analyzing the perceived level of driver discomfort and detecting a probable driver takeover intent of the driver of the ego vehicle to take over manual operation of the ego vehicle.
Type: Application
Filed: June 1, 2022
Publication date: December 7, 2023
Inventors: Zhaobo K. ZHENG, Kumar AKASH, Teruhisa MISU
-
Publication number: 20230326348
Abstract: A control system comprises a travel control unit configured to cause a moving body to travel to a designated place designated by a dispatch request, in response to the dispatch request from a user. Interactive communication is performed between the user and the moving body via a mobile terminal held by the user, before the moving body reaches the designated place. Image data in which surroundings of the designated place are captured by an imaging unit is acquired, after the moving body has reached the designated place as a result of the interactive communication. Information that has been transmitted from the user in the interactive communication is evaluated, based on the image data that has been acquired.
Type: Application
Filed: March 24, 2022
Publication date: October 12, 2023
Inventors: Teruhisa MISU, Kentaro YAMADA
-
Publication number: 20230326048
Abstract: A system including an acquisition unit configured to acquire, from a user via a communication device associated with the user, target object data including a feature of a target object selected by the user, an analysis unit configured to analyze whether the target object data that has been acquired by the acquisition unit includes, as the feature, at least one of data of a proper noun or data of a character string related to the target object, and whether the target object data includes data of a color related to the target object, and an estimation unit configured to estimate a distance from the target object to the user, based on an analysis result of the analysis unit, wherein the estimation unit estimates the distance from the target object to the user such that the distance from the target object to the user in a case where the target object data includes at least one of the data of the proper noun or the data of the character string is shorter than the distance from the target object to the user in a ca…
Type: Application
Filed: March 24, 2022
Publication date: October 12, 2023
Inventors: Teruhisa MISU, Naoki HOSOMI, Kentaro YAMADA
-
Patent number: 11745744
Abstract: A system and method for determining object-wise situational awareness that include receiving data associated with a driving scene of a vehicle, an eye gaze of a driver of the vehicle, and alerts that are provided to the driver of the vehicle. The system and method also include analyzing the data and extracting features associated with dynamic objects located within the driving scene, the eye gaze of the driver of the vehicle, and the alerts provided to the driver of the vehicle. The system and method additionally include determining a level of situational awareness of the driver with respect to each of the dynamic objects based on the features. The system and method further include communicating control signals to electronically control at least one component of the vehicle based on the situational awareness of the driver.
Type: Grant
Filed: December 15, 2021
Date of Patent: September 5, 2023
Assignee: HONDA MOTOR CO., LTD.
Inventors: Teruhisa Misu, Chun Ming Samson Ho, Kumar Akash, Xiaofeng Gao, Xingwei Wu
-
Publication number: 20230256973
Abstract: A system and method for predicting a driver's situational awareness that include receiving driving scene data associated with a driving scene of an ego vehicle and eye gaze data to track a driver's eye gaze behavior with respect to the driving scene. The system and method also include analyzing the eye gaze data and determining an eye gaze fixation value associated with each object that is located within the driving scene and analyzing the driving scene data and determining a situational awareness probability value associated with each object that is located within the driving scene that is based on a salience, effort, expectancy, and a cost value associated with each of the objects within the driving scene. The system and method further include communicating control signals to electronically control at least one component based on the situational awareness probability value and the eye gaze fixation value.
Type: Application
Filed: March 30, 2022
Publication date: August 17, 2023
Inventors: Teruhisa MISU, Kumar AKASH
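The abstract names four factors (salience, effort, expectancy, cost/value) behind the awareness probability. A sketch of one plausible way to combine them, assuming normalized inputs, unit weights, and a logistic squash; the abstract names the factors but not this formula, so everything below is an illustrative assumption:

```python
# Sketch of a SEEV-style awareness probability: salience, expectancy, and
# value raise it, required effort lowers it. Weights are hypothetical.
import math

def seev_probability(salience, effort, expectancy, value, w=(1.0, 1.0, 1.0, 1.0)):
    """All factor inputs are assumed normalized to [0, 1]; effort detracts."""
    score = w[0] * salience - w[1] * effort + w[2] * expectancy + w[3] * value
    return 1.0 / (1.0 + math.exp(-score))   # logistic squash to a probability

# A salient, expected, high-value object vs. a dim, effortful, low-value one.
p_truck = seev_probability(salience=0.9, effort=0.1, expectancy=0.7, value=0.8)
p_sign = seev_probability(salience=0.2, effort=0.6, expectancy=0.3, value=0.2)
```

The sign convention is the key design choice: effort enters negatively because an object that is harder to attend to (far from the current gaze, occluded) is less likely to be in the driver's awareness.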
-
Publication number: 20230202525
Abstract: A system and method for providing a situational awareness based adaptive driver vehicle interface that include receiving data associated with a driving scene of an ego vehicle and eye gaze data and analyzing the driving scene and the eye gaze data and performing real time fixation detection pertaining to the driver's eye gaze behavior to determine a level of situational awareness with respect to objects that are located within the driving scene. The system and method also include determining at least one level of importance associated with each of the objects and communicating control signals to control at least one component based on at least one of: the at least one level of importance associated with each of the objects that are located within the driving scene and the level of situational awareness with respect to each of the objects that are located within the driving scene.
Type: Application
Filed: March 16, 2022
Publication date: June 29, 2023
Inventors: Tong WU, Enna SACHDEVA, Kumar AKASH, Teruhisa MISU
-
Publication number: 20230182745
Abstract: A system and method for determining object-wise situational awareness that include receiving data associated with a driving scene of a vehicle, an eye gaze of a driver of the vehicle, and alerts that are provided to the driver of the vehicle. The system and method also include analyzing the data and extracting features associated with dynamic objects located within the driving scene, the eye gaze of the driver of the vehicle, and the alerts provided to the driver of the vehicle. The system and method additionally include determining a level of situational awareness of the driver with respect to each of the dynamic objects based on the features. The system and method further include communicating control signals to electronically control at least one component of the vehicle based on the situational awareness of the driver.
Type: Application
Filed: December 15, 2021
Publication date: June 15, 2023
Inventors: Teruhisa MISU, Chun Ming Samson HO, Kumar AKASH, Xiaofeng GAO, Xingwei WU
-
Publication number: 20230128456
Abstract: An adaptive trust calibration based autonomous vehicle may include vehicle systems, a system behavior controller, and a driving automation controller. The system behavior controller may generate a driving automation signal indicative of a desired autonomous driving adaptation. The driving automation controller may control the vehicle systems based on parameters including a desired velocity, current velocity of the autonomous vehicle, desired minimum gap distance between the autonomous vehicle and a detected object, current gap distance between the autonomous vehicle and a detected object, relative velocity of the detected object with respect to the autonomous vehicle, desired time headway, desired maximum acceleration, desired braking deceleration, and an exponent.
Type: Application
Filed: October 25, 2021
Publication date: April 27, 2023
Inventors: Manisha NATARAJAN, Kumar AKASH, Teruhisa MISU
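The parameter list in this abstract (desired velocity, current and desired gap, relative velocity, time headway, maximum acceleration, braking deceleration, exponent) matches the inputs of the classic Intelligent Driver Model of car following. The sketch below is the standard textbook IDM, shown only because the parameters line up; it is not claimed to be the patented controller:

```python
# Standard Intelligent Driver Model (IDM) acceleration, for reference.
import math

def idm_acceleration(v, v0, s, s0, dv, T, a_max, b, delta=4.0):
    """v: current velocity, v0: desired velocity, s: current gap,
    s0: desired minimum gap, dv: closing speed (v - v_lead),
    T: desired time headway, a_max: maximum acceleration,
    b: comfortable braking deceleration, delta: exponent."""
    s_star = s0 + v * T + (v * dv) / (2.0 * math.sqrt(a_max * b))
    return a_max * (1.0 - (v / v0) ** delta - (s_star / s) ** 2)

# Free road (huge gap) -> accelerate; small closing gap -> brake.
a_free = idm_acceleration(v=20, v0=30, s=1e6, s0=2, dv=0, T=1.5, a_max=1.5, b=2.0)
a_close = idm_acceleration(v=20, v0=30, s=10, s0=5, dv=5, T=1.5, a_max=1.5, b=2.0)
```

An adaptive trust calibration scheme could then be read as modulating these parameters (e.g., a longer time headway or gentler deceleration when occupant trust is low), though the abstract does not spell out the mapping.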
-
Publication number: 20230109171
Abstract: According to one aspect, systems, methods, and/or techniques associated with operator take-over prediction may include receiving a two-channel series of images of an operating environment through which a vehicle is travelling. A first series of images may be represented by labels corresponding to classified objects. A second series of images may be represented as a gaze heatmap or an eye tracking heatmap. Additionally, feeding encoded series of images through a three-dimensional (3D) convolutional neural network (CNN) to produce a first output, receiving sets of information corresponding to a first series of images and a second series of images in time, feeding sets of information through processing layers to produce additional outputs, concatenating the first output and the additional outputs to produce a concatenation output, and feeding the concatenation output through additional processing layers to generate an operator take-over prediction may be performed.
Type: Application
Filed: January 7, 2022
Publication date: April 6, 2023
Inventors: Yuning QIU, Teruhisa MISU, Kumar AKASH
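The dataflow here is a late-fusion pattern: per-stream encoders, concatenation, then a prediction head. A purely illustrative miniature of that wiring, with tiny hand-written linear "layers" standing in for the 3D CNN and processing layers; every weight and dimension below is invented:

```python
# Toy late-fusion wiring: two streams -> concatenate -> take-over probability.
# The linear stand-ins are NOT the patented network; they only show the shape.
import math

def linear(x, w, b):
    """One dense layer: rows of w dot the input, plus a bias per row."""
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi for row, bi in zip(w, b)]

def predict_takeover(scene_feats, gaze_feats):
    scene_out = linear(scene_feats, [[0.5, -0.2], [0.1, 0.3]], [0.0, 0.0])
    gaze_out = linear(gaze_feats, [[0.4, 0.4]], [0.1])
    fused = scene_out + gaze_out                       # the concatenation step
    logit = linear(fused, [[1.0, 1.0, 1.0]], [-0.5])[0]
    return 1.0 / (1.0 + math.exp(-logit))              # take-over probability

p = predict_takeover(scene_feats=[0.8, 0.3], gaze_feats=[0.9, 0.7])
```

Fusing after per-stream encoding lets each modality (scene semantics vs. gaze heatmaps) be processed by an architecture suited to it before the joint prediction head sees both.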
-
Patent number: 11584379
Abstract: A system and method for learning naturalistic driving behavior based on vehicle dynamic data that include receiving vehicle dynamic data and image data and analyzing the vehicle dynamic data and the image data to detect a plurality of behavioral events. The system and method also include classifying at least one behavioral event as a stimulus-driven action and building a naturalistic driving behavior data set that includes annotations that are based on the at least one behavioral event that is classified as the stimulus-driven action. The system and method further include controlling a vehicle to be autonomously driven based on the naturalistic driving behavior data set.
Type: Grant
Filed: August 6, 2018
Date of Patent: February 21, 2023
Assignee: HONDA MOTOR CO., LTD.
Inventors: Teruhisa Misu, Yi-Ting Chen
-
Patent number: 11538259
Abstract: The present disclosure provides a method and system to operationalize driver eye movement data analysis based on moving objects of interest. Correlation and/or regression analyses between indirect (e.g., eye-tracking) and/or direct measures of driver awareness may identify variables that feature spatial and/or temporal aspects of driver eye glance behavior relative to objects of interest. The proposed systems and methods may be further combined with computer-vision techniques such as object recognition to (e.g., fully) automate eye movement data processing as well as machine learning approaches to improve the accuracy of driver awareness estimation.
Type: Grant
Filed: November 17, 2020
Date of Patent: December 27, 2022
Assignee: Honda Motor Co., Ltd.
Inventors: Sujitha Catherine Martin, Teruhisa Misu, Hyungil Kim, Ashish Tawari, Joseph L. Gabbard
-
Publication number: 20220396287
Abstract: Aspects of adaptive trust calibration may include receiving a trust model for an occupant of an autonomous vehicle calculated based on occupant sensor data and a first scene context sensor data, and/or receiving a second scene context sensor data associated with an environment of the autonomous vehicle, determining an over trust scenario or an under trust scenario based on the trust model and a trust model threshold, and generating and implementing a human machine interface (HMI) action or a driving automation action based on the determination of the over trust scenario or the determination of the under trust scenario, and/or the second scene context sensor data.
Type: Application
Filed: June 10, 2021
Publication date: December 15, 2022
Inventors: Kumar AKASH, Teruhisa MISU
-
Publication number: 20220396273
Abstract: Systems and methods for clustering human trust dynamics are provided. In one embodiment, a computer implemented method for clustering human trust dynamics is provided. The computer implemented method includes receiving trust data for a plurality of participants interacting with one or more agents in an interaction. The computer implemented method also includes identifying a plurality of phases for the interaction. The computer implemented method further includes extracting features characterizing trust dynamics from the trust data for at least one interaction for each participant of the plurality of participants. The at least one interaction is between the participant and an agent of the one or more agents. The computer implemented method yet further includes assigning the features characterizing trust dynamics to a phase of the plurality of phases. The computer implemented method includes grouping a subset of the participants of the plurality of participants based on the features characterizing trust dynamics.
Type: Application
Filed: March 4, 2022
Publication date: December 15, 2022
Inventors: Kumar AKASH, Teruhisa MISU, Xingwei WU, Jundi LIU
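The feature-extraction step (per-phase trust-dynamics features from each participant's trust trace) can be sketched simply; the choice of features (phase mean and net change) and the example phase boundaries are hypothetical, not from the publication:

```python
# Toy per-phase trust-dynamics features from a single participant's trace.

def phase_features(trust_trace, phases):
    """trust_trace: sampled trust values in [0, 1];
    phases: list of (start, end) index pairs partitioning the interaction."""
    feats = {}
    for i, (s, e) in enumerate(phases):
        seg = trust_trace[s:e]
        feats[f"phase{i}_mean"] = sum(seg) / len(seg)   # average trust level
        feats[f"phase{i}_delta"] = seg[-1] - seg[0]     # net change in phase
    return feats

# Hypothetical trace: trust builds, then dips after an automation error.
trace = [0.5, 0.6, 0.7, 0.4, 0.3, 0.35]
f = phase_features(trace, phases=[(0, 3), (3, 6)])
```

Vectors like these, one per participant, are what a clustering step would then group, separating, say, fast-recovering participants from those whose trust stays depressed after an error.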
-
Publication number: 20220383509
Abstract: A system and method for learning temporally consistent video synthesis using fake optical flow that include receiving data associated with a source video and a target video. The system and method also include processing image-to-image translation across domains of the source video and the target video and processing a synthesized temporally consistent video based on the image-to-image translation. The system and method further include training a neural network with data that is based on synthesizing of the source video and the target video.
Type: Application
Filed: October 13, 2021
Publication date: December 1, 2022
Inventors: Teruhisa MISU, Kumar AKASH, Kaihong WANG
-
Patent number: 11498591
Abstract: A system and method for providing adaptive trust calibration in driving automation that include receiving image data of a vehicle and vehicle automation data associated with automated driving of the vehicle. The system and method also include analyzing the image data and vehicle automation data and determining an eye gaze direction of a driver of the vehicle and a driver reliance upon automation of the vehicle and processing a Markov decision process model based on the eye gaze direction and the driver reliance to model effects of human trust and workload on observable variables to determine a control policy to provide an optimal level of automation transparency. The system and method further include controlling autonomous transparency of at least one driving function of the vehicle based on the control policy.
Type: Grant
Filed: February 21, 2020
Date of Patent: November 15, 2022
Assignee: HONDA MOTOR CO., LTD.
Inventor: Teruhisa Misu
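Solving a Markov decision process for a control policy, as the abstract describes, is classically done with value iteration. A toy two-state example choosing a transparency action; the states, actions, transition probabilities, and rewards below are all invented for illustration and are not the model from the patent:

```python
# Toy value iteration on a two-state "trust" MDP choosing transparency.

STATES = ["low_trust", "high_trust"]
ACTIONS = ["low_transparency", "high_transparency"]
# P[s][a] = list of (probability, next_state); R[s][a] = immediate reward.
P = {
    "low_trust": {"low_transparency": [(0.9, "low_trust"), (0.1, "high_trust")],
                  "high_transparency": [(0.4, "low_trust"), (0.6, "high_trust")]},
    "high_trust": {"low_transparency": [(0.3, "low_trust"), (0.7, "high_trust")],
                   "high_transparency": [(0.1, "low_trust"), (0.9, "high_trust")]},
}
R = {"low_trust": {"low_transparency": 0.0, "high_transparency": 0.5},
     "high_trust": {"low_transparency": 1.0, "high_transparency": 0.8}}

def value_iteration(gamma=0.9, iters=200):
    V = {s: 0.0 for s in STATES}
    for _ in range(iters):   # Bellman backup until (near) convergence
        V = {s: max(R[s][a] + gamma * sum(p * V[ns] for p, ns in P[s][a])
                    for a in ACTIONS) for s in STATES}
    policy = {s: max(ACTIONS, key=lambda a: R[s][a] + gamma *
                     sum(p * V[ns] for p, ns in P[s][a])) for s in STATES}
    return V, policy

V, policy = value_iteration()
```

With these invented numbers, the greedy policy raises transparency when trust is low (to rebuild it) and lowers it when trust is high, which mirrors the calibration behavior the abstract is after.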
-
Publication number: 20220324490
Abstract: A system and method for providing an RNN-based human trust model that include receiving a plurality of inputs related to an autonomous operation of a vehicle and a driving scene of the vehicle and analyzing the plurality of inputs to determine automation variables and scene variables. The system and method also include outputting a short-term trust recurrent neural network state that captures an effect of the driver's experience with respect to an instantaneous vehicle maneuver and a long-term trust recurrent neural network state that captures the effect of the driver's experience with respect to the autonomous operation of the vehicle during a traffic scenario. The system and method further include predicting a take-over intent of the driver to take over control of the vehicle from an automated operation of the vehicle during the traffic scenario.
Type: Application
Filed: September 3, 2021
Publication date: October 13, 2022
Inventors: Kumar AKASH, Teruhisa MISU, Xingwei WU
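The short-term versus long-term trust states can be illustrated with a hand-rolled two-timescale recurrence: a fast state reacting to each maneuver and a slow state tracking it across the scenario. The update rules and rates are invented stand-ins, not the patent's RNN:

```python
# Two-timescale recurrent sketch of short-term vs. long-term trust states.

def trust_dynamics(maneuver_outcomes, fast=0.5, slow=0.05):
    """maneuver_outcomes: 1.0 for a smooth maneuver, 0.0 for an unsettling one.
    fast/slow are hypothetical learning rates for the two states."""
    short, long_ = 0.5, 0.5                  # neutral initial trust
    for outcome in maneuver_outcomes:
        short += fast * (outcome - short)    # reacts sharply per maneuver
        long_ += slow * (short - long_)      # drifts slowly over the scenario
    return short, long_

# A sudden hard brake (0.0) after smooth driving hits short-term trust hard,
# while long-term trust barely moves.
short, long_ = trust_dynamics([1.0, 1.0, 1.0, 0.0])
```

Separating the timescales is what lets a take-over prediction distinguish a momentary startle (short-term dip, long-term intact) from eroded confidence in the automation overall.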