Patents by Inventor Kumar Akash
Kumar Akash has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11954921
Abstract: A system and method for improving driver situation awareness prediction using human visual sensory and memory mechanisms that includes receiving data associated with a driving scene of a vehicle and an eye gaze of a driver of the vehicle. The system and method also include analyzing the data and extracting features associated with objects located within the driving scene and determining a situational awareness score that is associated with a situational awareness of the driver with respect to each of the objects located within the driving scene. The system and method further include communicating control signals to electronically control at least one system of the vehicle based on the situational awareness score that is associated with each of the objects located within the driving scene.
Type: Grant
Filed: May 19, 2021
Date of Patent: April 9, 2024
Assignee: HONDA MOTOR CO., LTD.
Inventors: Haibei Zhu, Teruhisa Misu, Sujitha Catherine Martin, Xingwei Wu, Kumar Akash
-
Publication number: 20240043027
Abstract: According to one aspect, an adaptive driving style system may include a set of two or more sensors, a memory, and a processor. The set of two or more sensors may receive two or more sensor signals. The memory may store one or more instructions. The processor may execute one or more of the instructions stored on the memory to perform one or more acts, actions, or steps, including training a trust model using two or more of the sensor signals as input, training a preference model using the trust model and two or more of the sensor signals as input, and generating a driving style preference based on an adaptive driving style model including the trust model and the preference model.
Type: Application
Filed: August 8, 2022
Publication date: February 8, 2024
Inventors: Zhaobo K. ZHENG, Teruhisa MISU, Kumar AKASH
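The staged pipeline this abstract describes (sensor signals feed a trust model, whose output feeds a preference model, which yields a driving style) can be sketched as a simple composition. The stand-in models below are illustrative assumptions, not the patented models: trust falling with vehicle jerk and preference depending on trust and reported discomfort are invented for the example.

```python
def make_pipeline(trust_model, preference_model):
    """Compose: sensor signals -> trust -> preference -> driving style."""
    def driving_style(sensors):
        trust = trust_model(sensors)              # trust model on sensors
        pref = preference_model(trust, sensors)   # preference model uses trust
        return "sporty" if pref >= 0.5 else "conservative"
    return driving_style

# Illustrative stand-in models (assumptions): trust falls with jerk;
# a sporty preference requires high trust and low reported discomfort.
trust_model = lambda s: max(0.0, 1.0 - s["jerk"])
preference_model = lambda trust, s: trust * (1.0 - s["discomfort"])
driving_style = make_pipeline(trust_model, preference_model)

smooth_ride = driving_style({"jerk": 0.1, "discomfort": 0.1})
rough_ride = driving_style({"jerk": 0.8, "discomfort": 0.5})
```

The point of the composition is the one named in the abstract: the preference model is trained on top of the trust model's output rather than on raw sensor signals alone.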
-
Publication number: 20230391366
Abstract: A system and method for detecting a perceived level of driver discomfort in an automated vehicle that include receiving image data associated with a driving scene of an ego vehicle, dynamic data associated with an operation of the ego vehicle, and driver data associated with a driver of the ego vehicle during autonomous operation of the ego vehicle. The system and method also include analyzing the image data, the dynamic data, and the driver data and extracting features associated with a plurality of modalities. The system and method additionally include analyzing the extracted features and detecting the perceived level of driver discomfort. The system and method further include analyzing the perceived level of driver discomfort and detecting a probable driver takeover intent of the driver of the ego vehicle to take over manual operation of the ego vehicle.
Type: Application
Filed: June 1, 2022
Publication date: December 7, 2023
Inventors: Zhaobo K. ZHENG, Kumar AKASH, Teruhisa MISU
-
Patent number: 11745744
Abstract: A system and method for determining object-wise situational awareness that includes receiving data associated with a driving scene of a vehicle, an eye gaze of a driver of the vehicle, and alerts that are provided to the driver of the vehicle. The system and method also include analyzing the data and extracting features associated with dynamic objects located within the driving scene, the eye gaze of the driver of the vehicle, and the alerts provided to the driver of the vehicle. The system and method additionally include determining a level of situational awareness of the driver with respect to each of the dynamic objects based on the features. The system and method further include communicating control signals to electronically control at least one component of the vehicle based on the situational awareness of the driver.
Type: Grant
Filed: December 15, 2021
Date of Patent: September 5, 2023
Assignee: HONDA MOTOR CO., LTD.
Inventors: Teruhisa Misu, Chun Ming Samson Ho, Kumar Akash, Xiaofeng Gao, Xingwei Wu
-
Publication number: 20230256973
Abstract: A system and method for predicting a driver's situational awareness that includes receiving driving scene data associated with a driving scene of an ego vehicle and eye gaze data to track a driver's eye gaze behavior with respect to the driving scene. The system and method also include analyzing the eye gaze data and determining an eye gaze fixation value associated with each object that is located within the driving scene and analyzing the driving scene data and determining a situational awareness probability value associated with each object that is located within the driving scene that is based on a salience, effort, expectancy, and a cost value associated with each of the objects within the driving scene. The system and method further include communicating control signals to electronically control at least one component based on the situational awareness probability value and the eye gaze fixation value.
Type: Application
Filed: March 30, 2022
Publication date: August 17, 2023
Inventors: Teruhisa MISU, Kumar AKASH
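The salience, effort, expectancy, and cost factors named in this abstract echo SEEV-style attention modeling, in which salience, expectancy, and value attract a driver's gaze while effort discourages it. A minimal sketch of such a per-object score follows; the unit weights and the logistic squashing are assumptions for illustration, not taken from the publication.

```python
import math

def awareness_probability(salience, effort, expectancy, value_cost,
                          weights=(1.0, 1.0, 1.0, 1.0)):
    """SEEV-style per-object score: salience, expectancy, and the cost of
    missing the object attract attention; gaze-shift effort discourages it.
    Squashed through a logistic to give a probability-like value."""
    w_s, w_e, w_x, w_v = weights
    score = w_s * salience - w_e * effort + w_x * expectancy + w_v * value_cost
    return 1.0 / (1.0 + math.exp(-score))

# A salient, expected, high-cost object near the current gaze point should
# score higher than a dim, unexpected, low-cost one far from the gaze.
p_near = awareness_probability(salience=0.9, effort=0.1,
                               expectancy=0.8, value_cost=0.7)
p_far = awareness_probability(salience=0.2, effort=0.9,
                              expectancy=0.1, value_cost=0.1)
```

In the system described, a value like this would be combined with the eye gaze fixation value before any control signal is issued.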
-
Publication number: 20230202525
Abstract: A system and method for providing a situational awareness based adaptive driver vehicle interface that include receiving data associated with a driving scene of an ego vehicle and eye gaze data and analyzing the driving scene and the eye gaze data and performing real time fixation detection pertaining to the driver's eye gaze behavior to determine a level of situational awareness with respect to objects that are located within the driving scene. The system and method also include determining at least one level of importance associated with each of the objects and communicating control signals to control at least one component based on at least one of: the at least one level of importance associated with each of the objects that are located within the driving scene and the level of situational awareness with respect to each of the objects that are located within the driving scene.
Type: Application
Filed: March 16, 2022
Publication date: June 29, 2023
Inventors: Tong WU, Enna SACHDEVA, Kumar AKASH, Teruhisa MISU
-
Publication number: 20230182745
Abstract: A system and method for determining object-wise situational awareness that includes receiving data associated with a driving scene of a vehicle, an eye gaze of a driver of the vehicle, and alerts that are provided to the driver of the vehicle. The system and method also include analyzing the data and extracting features associated with dynamic objects located within the driving scene, the eye gaze of the driver of the vehicle, and the alerts provided to the driver of the vehicle. The system and method additionally include determining a level of situational awareness of the driver with respect to each of the dynamic objects based on the features. The system and method further include communicating control signals to electronically control at least one component of the vehicle based on the situational awareness of the driver.
Type: Application
Filed: December 15, 2021
Publication date: June 15, 2023
Inventors: Teruhisa MISU, Chun Ming Samson HO, Kumar AKASH, Xiaofeng GAO, Xingwei WU
-
Publication number: 20230128456
Abstract: An adaptive trust calibration based autonomous vehicle may include vehicle systems, a system behavior controller, and a driving automation controller. The system behavior controller may generate a driving automation signal indicative of a desired autonomous driving adaptation. The driving automation controller may control the vehicle systems based on parameters including a desired velocity, current velocity of the autonomous vehicle, desired minimum gap distance between the autonomous vehicle and a detected object, current gap distance between the autonomous vehicle and a detected object, relative velocity of the detected object with respect to the autonomous vehicle, desired time headway, desired maximum acceleration, desired braking deceleration, and an exponent.
Type: Application
Filed: October 25, 2021
Publication date: April 27, 2023
Inventors: Manisha NATARAJAN, Kumar AKASH, Teruhisa MISU
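The parameter list in this abstract (desired and current velocity, desired minimum and current gap, relative velocity, time headway, maximum acceleration, braking deceleration, and an exponent) matches the form of the Intelligent Driver Model (IDM) car-following law. The abstract does not name IDM, so the sketch below is an assumption about the underlying controller, shown with standard IDM terms.

```python
import math

def idm_acceleration(v, v_desired, gap, rel_v, s0, T, a_max, b, delta):
    """IDM-style acceleration from the parameters listed in the abstract.

    v: current velocity; v_desired: desired velocity;
    gap: current gap distance to the detected object;
    rel_v: closing speed relative to the detected object;
    s0: desired minimum gap distance; T: desired time headway;
    a_max: desired maximum acceleration; b: desired braking deceleration;
    delta: the exponent.
    """
    # Desired dynamic gap: minimum gap + headway term + braking term.
    s_star = s0 + max(0.0, v * T + v * rel_v / (2.0 * math.sqrt(a_max * b)))
    # Free-road acceleration term minus interaction (gap) term.
    return a_max * (1.0 - (v / v_desired) ** delta - (s_star / gap) ** 2)

# Closing fast on a nearby object at the desired speed -> braking.
acc_brake = idm_acceleration(v=30.0, v_desired=30.0, gap=20.0, rel_v=5.0,
                             s0=2.0, T=1.5, a_max=1.5, b=2.0, delta=4.0)
# Well below desired speed with a large open gap -> accelerating.
acc_free = idm_acceleration(v=10.0, v_desired=30.0, gap=100.0, rel_v=0.0,
                            s0=2.0, T=1.5, a_max=1.5, b=2.0, delta=4.0)
```

Adapting such a controller for trust calibration would amount to tuning parameters like T, a_max, and b in response to the trust signal.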
-
Publication number: 20230109171
Abstract: According to one aspect, systems, methods, and/or techniques associated with operator take-over prediction may include receiving a two-channel series of images of an operating environment through which a vehicle is travelling. A first series of images may be represented by labels corresponding to classified objects. A second series of images may be represented as a gaze heatmap or an eye tracking heatmap. Additionally, feeding the encoded series of images through a three-dimensional (3D) convolutional neural network (CNN) to produce a first output, receiving sets of information corresponding to a first series of images and a second series of images in time, feeding the sets of information through processing layers to produce additional outputs, concatenating the first output and the additional outputs to produce a concatenation output, and feeding the concatenation output through additional processing layers to generate an operator take-over prediction may be performed.
Type: Application
Filed: January 7, 2022
Publication date: April 6, 2023
Inventors: Yuning QIU, Teruhisa MISU, Kumar AKASH
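The fusion structure described here — a 3D CNN output concatenated with outputs from auxiliary information branches, then reduced by further layers — can be sketched with stand-in encoders. The mean-pooling "networks" below are placeholders for the real learned layers; shapes, pooling, and the final reduction are assumptions made only to show the data flow.

```python
def cnn3d_stub(frames):
    """Stand-in for the 3D CNN: collapse a (time, H, W, 2) series of
    two-channel frames (object labels + gaze heatmap) to one mean per
    channel. A real model would learn spatiotemporal filters instead."""
    t, h, w = len(frames), len(frames[0]), len(frames[0][0])
    feat = [0.0, 0.0]
    for frame in frames:
        for row in frame:
            for label, gaze in row:
                feat[0] += label
                feat[1] += gaze
    return [f / (t * h * w) for f in feat]

def branch_stub(vec):
    """Stand-in for a processing-layer branch on an auxiliary series."""
    return [sum(vec) / len(vec)]

def takeover_logit(frames, info_a, info_b):
    # First output from the 3D CNN, additional outputs from the branches,
    # concatenated, then reduced by the final processing layers (a sum here).
    concat = cnn3d_stub(frames) + branch_stub(info_a) + branch_stub(info_b)
    return sum(concat)

# Four 1x2 two-channel frames: (object label, gaze intensity) per pixel.
frames = [[[(1.0, 0.5), (0.0, 0.25)]] for _ in range(4)]
logit = takeover_logit(frames, info_a=[0.2, 0.4], info_b=[0.6])
```

The design point is the late fusion: the image stream and the per-frame auxiliary information are encoded separately and only meet at the concatenation before the final prediction layers.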
-
Publication number: 20220396273
Abstract: Systems and methods for clustering human trust dynamics are provided. In one embodiment, a computer implemented method for clustering human trust dynamics is provided. The computer implemented method includes receiving trust data for a plurality of participants interacting with one or more agents in an interaction. The computer implemented method also includes identifying a plurality of phases for the interaction. The computer implemented method further includes extracting features characterizing trust dynamics from the trust data for at least one interaction for each participant of the plurality of participants. The at least one interaction is between the participant and an agent of the one or more agents. The computer implemented method yet further includes assigning the features characterizing trust dynamics to a phase of the plurality of phases. The computer implemented method includes grouping a subset of the participants of the plurality of participants based on the features characterizing trust dynamics.
Type: Application
Filed: March 4, 2022
Publication date: December 15, 2022
Inventors: Kumar AKASH, Teruhisa MISU, Xingwei WU, Jundi LIU
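The pipeline this abstract walks through — slice each participant's trust trace into interaction phases, extract per-phase dynamics features, then group participants on those features — can be sketched directly. The feature choice (per-phase mean and net change) and the two-group split are illustrative assumptions, not the method claimed.

```python
def phase_features(trust_series, phases):
    """Slice one participant's trust trace into named phases and compute,
    per phase, (mean trust, net trust change) as the dynamics features."""
    feats = {}
    for name, (start, end) in phases.items():
        seg = trust_series[start:end]
        feats[name] = (sum(seg) / len(seg), seg[-1] - seg[0])
    return feats

def group_participants(all_series, phases, phase_name):
    """Group participants by whether trust grew or decayed in one phase.
    A real pipeline would cluster the feature vectors instead."""
    gainers, decayers = [], []
    for pid, series in all_series.items():
        _, delta = phase_features(series, phases)[phase_name]
        (gainers if delta >= 0 else decayers).append(pid)
    return gainers, decayers

phases = {"familiarization": (0, 3), "interaction": (3, 6)}
traces = {
    "p1": [0.5, 0.6, 0.7, 0.7, 0.8, 0.9],  # trust builds over the session
    "p2": [0.8, 0.7, 0.6, 0.6, 0.4, 0.3],  # trust decays over the session
}
gainers, decayers = group_participants(traces, phases, "interaction")
```

Grouping on dynamics features per phase, rather than on raw trust values, is what lets participants with different starting trust levels but similar trajectories land in the same cluster.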
-
Publication number: 20220396287
Abstract: Aspects of adaptive trust calibration may include receiving a trust model for an occupant of an autonomous vehicle calculated based on occupant sensor data and a first scene context sensor data, and/or receiving a second scene context sensor data associated with an environment of the autonomous vehicle, determining an over trust scenario or an under trust scenario based on the trust model and a trust model threshold, and generating and implementing a human machine interface (HMI) action or a driving automation action based on the determination of the over trust scenario or the determination of the under trust scenario, and/or the second scene context sensor data.
Type: Application
Filed: June 10, 2021
Publication date: December 15, 2022
Inventors: Kumar AKASH, Teruhisa MISU
-
Publication number: 20220383509
Abstract: A system and method for learning temporally consistent video synthesis using fake optical flow that include receiving data associated with a source video and a target video. The system and method also include processing image-to-image translation across domains of the source video and the target video and processing a synthesized temporally consistent video based on the image-to-image translation. The system and method further include training a neural network with data that is based on synthesizing of the source video and the target video.
Type: Application
Filed: October 13, 2021
Publication date: December 1, 2022
Inventors: Teruhisa MISU, Kumar AKASH, Kaihong WANG
-
Publication number: 20220324490
Abstract: A system and method for providing an RNN-based human trust model that include receiving a plurality of inputs related to an autonomous operation of a vehicle and a driving scene of the vehicle and analyzing the plurality of inputs to determine automation variables and scene variables. The system and method also include outputting a short-term trust recurrent neural network state that captures an effect of the driver's experience with respect to an instantaneous vehicle maneuver and a long-term trust recurrent neural network state that captures the effect of the driver's experience with respect to the autonomous operation of the vehicle during a traffic scenario. The system and method further include predicting a take-over intent of the driver to take over control of the vehicle from an automated operation of the vehicle during the traffic scenario.
Type: Application
Filed: September 3, 2021
Publication date: October 13, 2022
Inventors: Kumar AKASH, Teruhisa MISU, Xingwei WU
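The two recurrent trust states described — a short-term state reacting to the instantaneous maneuver and a long-term state integrating the whole traffic scenario — can be sketched as coupled exponential-smoothing updates. This is a deliberate simplification of a trained RNN; the time constants, the neutral initial trust, and the takeover threshold rule are all assumptions for illustration.

```python
def step_trust(short_term, long_term, maneuver_quality,
               alpha=0.5, beta=0.05):
    """One recurrent update: short-term trust chases the quality of the
    instantaneous maneuver quickly (alpha), long-term trust integrates
    the short-term state slowly (beta)."""
    short_term += alpha * (maneuver_quality - short_term)
    long_term += beta * (short_term - long_term)
    return short_term, long_term

def predict_takeover(maneuver_qualities, threshold=0.35):
    """Predict take-over intent when both trust states fall below an
    assumed threshold after a sequence of maneuvers."""
    st = lt = 0.5  # neutral initial trust
    for q in maneuver_qualities:
        st, lt = step_trust(st, lt, q)
    return st < threshold and lt < threshold

# A long run of poor maneuvers (quality 0) erodes both states and
# triggers a predicted take-over; consistently good maneuvers do not.
bad_run = predict_takeover([0.0] * 40)
good_run = predict_takeover([1.0] * 40)
```

Separating the two timescales is the point: one sharply bad maneuver dents short-term trust without immediately dragging down the long-term state, which only a sustained pattern can move.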
-
Publication number: 20220277165
Abstract: A system and method for improving driver situation awareness prediction using human visual sensory and memory mechanisms that includes receiving data associated with a driving scene of a vehicle and an eye gaze of a driver of the vehicle. The system and method also include analyzing the data and extracting features associated with objects located within the driving scene and determining a situational awareness score that is associated with a situational awareness of the driver with respect to each of the objects located within the driving scene. The system and method further include communicating control signals to electronically control at least one system of the vehicle based on the situational awareness score that is associated with each of the objects located within the driving scene.
Type: Application
Filed: May 19, 2021
Publication date: September 1, 2022
Inventors: Haibei ZHU, Teruhisa MISU, Sujitha Catherine MARTIN, Xingwei WU, Kumar AKASH
-
Patent number: 11332165
Abstract: An autonomous driving agent is provided. The autonomous driving agent determines a set of observations from sensor information of a sensor system of a vehicle. The set of observations includes human attention information for a scene of the surrounding environment and a level of human reliance as indicated by human inputs to the autonomous driving agent. The autonomous driving agent estimates, based on the set of observations, belief states for a first state of human trust in the autonomous driving agent and a second state of the human's cognitive workload during the journey. The autonomous driving agent selects, based on the estimated belief states, a first value for a first action associated with a level of automation transparency between a human user and the autonomous driving agent and controls a display system based on the selected first value to display a cue for calibration of the human trust in the autonomous driving agent.
Type: Grant
Filed: January 27, 2020
Date of Patent: May 17, 2022
Assignee: Honda Motor Co., Ltd.
Inventors: Kumar Akash, Teruhisa Misu
-
Publication number: 20210229707
Abstract: An autonomous driving agent is provided. The autonomous driving agent determines a set of observations from sensor information of a sensor system of a vehicle. The set of observations includes human attention information for a scene of the surrounding environment and a level of human reliance as indicated by human inputs to the autonomous driving agent. The autonomous driving agent estimates, based on the set of observations, belief states for a first state of human trust in the autonomous driving agent and a second state of the human's cognitive workload during the journey. The autonomous driving agent selects, based on the estimated belief states, a first value for a first action associated with a level of automation transparency between a human user and the autonomous driving agent and controls a display system based on the selected first value to display a cue for calibration of the human trust in the autonomous driving agent.
Type: Application
Filed: January 27, 2020
Publication date: July 29, 2021
Inventors: Kumar Akash, Teruhisa Misu