Patents by Inventor Sujitha Catherine Martin
Sujitha Catherine Martin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11954921
Abstract: A system and method for improving driver situation awareness prediction using human visual sensory and memory mechanism that includes receiving data associated with a driving scene of a vehicle and an eye gaze of a driver of the vehicle. The system and method also include analyzing the data and extracting features associated with objects located within the driving scene and determining a situational awareness score that is associated with a situational awareness of the driver with respect to each of the objects located within the driving scene. The system and method further include communicating control signals to electronically control at least one system of the vehicle based on the situational awareness score that is associated with each of the objects located within the driving scene.
Type: Grant
Filed: May 19, 2021
Date of Patent: April 9, 2024
Assignee: HONDA MOTOR CO., LTD.
Inventors: Haibei Zhu, Teruhisa Misu, Sujitha Catherine Martin, Xingwei Wu, Kumar Akash
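The abstract above maps per-object situational awareness scores to vehicle control signals. A minimal sketch of that gating step, assuming an illustrative score threshold and action names (none of which come from the patent itself):

```python
# Hypothetical sketch: gate a per-object control action on the driver's
# situational awareness score for that object. The threshold value and
# action names are assumptions for illustration, not from the patent.

AWARENESS_THRESHOLD = 0.5  # assumed cutoff separating "aware" from "unaware"

def control_signals(object_scores):
    """Map {object: awareness score in [0, 1]} to per-object actions."""
    signals = {}
    for obj, score in object_scores.items():
        # Low awareness of an object triggers an alert/control action.
        signals[obj] = "alert" if score < AWARENESS_THRESHOLD else "none"
    return signals

print(control_signals({"pedestrian": 0.2, "lead_vehicle": 0.9}))
```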
-
Patent number: 11538259
Abstract: The present disclosure provides a method and system to operationalize driver eye movement data analysis based on moving objects of interest. Correlation and/or regression analyses between indirect (e.g., eye-tracking) and/or direct measures of driver awareness may identify variables that feature spatial and/or temporal aspects of driver eye glance behavior relative to objects of interest. The proposed systems and methods may be further combined with computer-vision techniques such as object recognition to (e.g., fully) automate eye movement data processing as well as machine learning approaches to improve the accuracy of driver awareness estimation.
Type: Grant
Filed: November 17, 2020
Date of Patent: December 27, 2022
Assignee: Honda Motor Co., Ltd.
Inventors: Sujitha Catherine Martin, Teruhisa Misu, Hyungil Kim, Ashish Tawari, Joseph L. Gabbard
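One spatial glance variable of the kind the abstract alludes to is how much of the driver's gaze lands on a tracked object. A sketch under assumed conventions (bounding boxes from an object recognizer, gaze samples in the same image coordinates; the specific feature is illustrative, not taken from the patent):

```python
# Hypothetical glance feature: fraction of gaze samples falling inside a
# moving object's bounding box. Coordinate conventions are assumptions.

def gaze_dwell_fraction(gaze_points, bbox):
    """Share of (x, y) gaze samples inside bbox = (x0, y0, x1, y1)."""
    if not gaze_points:
        return 0.0
    x0, y0, x1, y1 = bbox
    hits = sum(1 for x, y in gaze_points if x0 <= x <= x1 and y0 <= y <= y1)
    return hits / len(gaze_points)

# Two of four samples land on the object.
print(gaze_dwell_fraction([(5, 5), (8, 2), (50, 50), (90, 10)], (0, 0, 10, 10)))
```

Features like this, computed per object and per time window, are the kind of variable a correlation or regression analysis against direct awareness measures could then rank.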
-
Publication number: 20220277165
Abstract: A system and method for improving driver situation awareness prediction using human visual sensory and memory mechanism that includes receiving data associated with a driving scene of a vehicle and an eye gaze of a driver of the vehicle. The system and method also include analyzing the data and extracting features associated with objects located within the driving scene and determining a situational awareness score that is associated with a situational awareness of the driver with respect to each of the objects located within the driving scene. The system and method further include communicating control signals to electronically control at least one system of the vehicle based on the situational awareness score that is associated with each of the objects located within the driving scene.
Type: Application
Filed: May 19, 2021
Publication date: September 1, 2022
Inventors: Haibei ZHU, Teruhisa MISU, Sujitha Catherine MARTIN, Xingwei WU, Kumar AKASH
-
Patent number: 11420623
Abstract: Determining object importance in vehicle control systems can include obtaining, for a vehicle in operation, an image of a dynamic scene, identifying an object type associated with one or more objects in the image, determining, based on the object type and a goal associated with the vehicle, an importance metric associated with the one or more objects, and controlling the vehicle based at least in part on the importance metric associated with the one or more objects.
Type: Grant
Filed: March 20, 2019
Date of Patent: August 23, 2022
Assignee: HONDA MOTOR CO., LTD.
Inventors: Ashish Tawari, Sujitha Catherine Martin, Mingfei Gao
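The abstract describes an importance metric conditioned on both object type and the vehicle's goal. A toy sketch of that dependency (the table entries, default value, and helper names are all illustrative assumptions):

```python
# Hypothetical sketch: importance depends jointly on object type and the
# ego vehicle's current goal. Values and default are assumed, not patented.

IMPORTANCE = {
    ("pedestrian", "turn_right"): 0.9,   # crosses the path of the turn
    ("pedestrian", "go_straight"): 0.6,
    ("parked_car", "go_straight"): 0.1,
}

def importance_metric(obj_type, goal):
    """Look up importance for (object type, goal); 0.5 is an assumed default."""
    return IMPORTANCE.get((obj_type, goal), 0.5)

def control_priority(objects, goal):
    """Order detected objects by descending importance for the current goal."""
    return sorted(objects, key=lambda o: importance_metric(o, goal), reverse=True)

print(control_priority(["parked_car", "pedestrian"], "go_straight"))
```

In a trained system the lookup table would be replaced by a learned model, but the goal-conditioning structure is the same.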
-
Patent number: 11216001
Abstract: A system and method for outputting vehicle dynamic controls using deep neural networks that include receiving environmental sensor data from at least one sensor of a vehicle of a surrounding environment of the vehicle. The system and method also include inputting the environmental sensor data to a primary deep neural network structure and inputting intermediate representation, at least one applicable traffic rule, and at least one applicable vehicle maneuver to a secondary deep neural network structure. The system and method further include outputting vehicle dynamic controls to autonomously control the vehicle to navigate within the surrounding environment of the vehicle based on the at least one applicable traffic rule and the at least one applicable vehicle maneuver.
Type: Grant
Filed: March 20, 2019
Date of Patent: January 4, 2022
Assignee: HONDA MOTOR CO., LTD.
Inventors: Teruhisa Misu, Ashish Tawari, Sujitha Catherine Martin
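The two-stage structure above (sensor data → primary network → intermediate representation; intermediate representation + rule + maneuver → secondary network → controls) can be sketched with tiny stand-in functions in place of the trained deep networks. Everything below is an illustrative assumption about the data flow only:

```python
# Hypothetical sketch of the primary/secondary pipeline. The toy functions
# stand in for trained deep neural networks; names and semantics are assumed.

def primary_network(sensor_data):
    """Stand-in feature extractor: mean and peak of the sensor readings."""
    return [sum(sensor_data) / len(sensor_data), max(sensor_data)]

def secondary_network(intermediate, rule_ok, maneuver_speed):
    """Stand-in controller: throttle obeys the rule flag and maneuver target."""
    mean_proximity = intermediate[0]
    throttle = max(0.0, maneuver_speed - mean_proximity) if rule_ok else 0.0
    return {"throttle": throttle}

features = primary_network([0.2, 0.4, 0.6])
print(secondary_network(features, rule_ok=True, maneuver_speed=1.0))
```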
-
Patent number: 11188766
Abstract: A system and method for providing context aware road user importance estimation that include receiving at least one image of a vicinity of an ego vehicle. The system and method also include analyzing the at least one image to determine a local context associated with at least one road user located within the vicinity of the ego vehicle. The system and method additionally include determining a global context associated with the ego vehicle. The system and method further include fusing the local context and the global context to classify at least one highly important road user that is to be accounted for with respect to operating the ego vehicle.
Type: Grant
Filed: August 16, 2019
Date of Patent: November 30, 2021
Assignee: HONDA MOTOR CO., LTD.
Inventors: Alireza Rahimpour, Sujitha Catherine Martin, Ashish Tawari, Hairong Qi
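The fusion step can be pictured as concatenating a per-road-user local context vector with the ego vehicle's global context vector and thresholding a score over the result. The linear scorer, weights, and threshold below are illustrative assumptions, not the patented model:

```python
# Hypothetical sketch of local/global context fusion for road user
# importance. Weights and threshold are assumed stand-ins for a learned model.

WEIGHTS = [0.6, 0.4, 0.8]      # assumed weights over the fused feature vector
IMPORTANT_THRESHOLD = 1.0      # assumed cutoff for "highly important"

def fuse_and_classify(local_context, global_context):
    """Fuse contexts by concatenation; True = highly important road user."""
    fused = local_context + global_context          # list concatenation
    score = sum(w * f for w, f in zip(WEIGHTS, fused))
    return score > IMPORTANT_THRESHOLD

# A pedestrian close to the ego path while the ego vehicle is turning:
print(fuse_and_classify([0.9, 0.8], [0.7]))
```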
-
Patent number: 11150656
Abstract: Systems and techniques for autonomous vehicle decision making may include training an autonomous vehicle decision making database by capturing an image including a first training object and a second training object during a training phase. The first training object may be classified as a first class and the second training object may be classified as a second class based on a driver gaze location associated with a driver of the vehicle. The database may be built based on classification of the first training object and the second training object. The autonomous vehicle decision making database may be utilized to classify a first object as a first class and a second object as a second class during an operation phase. A processor may perform a first computation associated with the first object based on the classification of the first object and the classification of the second object.
Type: Grant
Filed: November 19, 2018
Date of Patent: October 19, 2021
Assignee: Honda Motor Co., Ltd.
Inventors: Teruhisa Misu, Ashish Tawari, Sujitha Catherine Martin
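The training phase above labels objects in a captured frame by whether the driver's gaze falls on them, and accumulates those labels into a database. A minimal sketch, with all names, bounding-box conventions, and class labels assumed for illustration:

```python
# Hypothetical sketch of the gaze-based training step: each object in a frame
# is assigned a class from the driver gaze location, and the labeled frame is
# appended to a decision-making database. All names are assumptions.

def classify_by_gaze(objects, gaze):
    """objects: {name: (x0, y0, x1, y1)}; label each by gaze containment."""
    gx, gy = gaze
    labels = {}
    for name, (x0, y0, x1, y1) in objects.items():
        attended = x0 <= gx <= x1 and y0 <= gy <= y1
        labels[name] = "first_class" if attended else "second_class"
    return labels

database = []  # accumulated labeled frames for the operation phase
frame = {"cyclist": (0, 0, 10, 10), "billboard": (40, 40, 60, 60)}
database.append(classify_by_gaze(frame, gaze=(5, 5)))
print(database[-1])
```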
-
Publication number: 20210248399
Abstract: The present disclosure provides a method and system to operationalize driver eye movement data analysis based on moving objects of interest. Correlation and/or regression analyses between indirect (e.g., eye-tracking) and/or direct measures of driver awareness may identify variables that feature spatial and/or temporal aspects of driver eye glance behavior relative to objects of interest. The proposed systems and methods may be further combined with computer-vision techniques such as object recognition to (e.g., fully) automate eye movement data processing as well as machine learning approaches to improve the accuracy of driver awareness estimation.
Type: Application
Filed: November 17, 2020
Publication date: August 12, 2021
Inventors: Sujitha Catherine MARTIN, Teruhisa MISU, Hyungil KIM, Ashish TAWARI, Joseph L. GABBARD
-
Publication number: 20210232913
Abstract: In some examples, a dynamic system, including a vehicle, may be represented using a graph-based representation. One or more nodes in the graph-based representation may correspond to one or more agents in the dynamic system, and one or more edges between the nodes in the graph-based representation may correspond to one or more interactions between the agents in the dynamic system. The interactions may be defined based on human domain knowledge of the dynamic system. The dynamic system may be modeled using a respective machine learning model that includes a reward decoder that operates on the graph-based representation and evaluates one or more reward functions for the dynamic system. The one or more reward functions may be defined based on the human domain knowledge of the dynamic system. Autonomous operation of the vehicle may be controlled based on the modeling of the dynamic system.
Type: Application
Filed: September 4, 2020
Publication date: July 29, 2021
Inventors: Sujitha Catherine MARTIN, Chen TANG
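The structure described above (agents as nodes, hand-defined interactions as edges, a reward decoder evaluating reward functions over the graph) can be sketched as follows. The specific reward term and 1-D agent state are illustrative stand-ins for domain-knowledge-defined rewards:

```python
# Hypothetical sketch of the graph-based representation and reward decoder.
# The distance penalty is an assumed example of a human-defined reward term.

def distance_reward(a, b):
    """Penalize two agents for being close (1-D positions for simplicity)."""
    return -1.0 if abs(a["pos"] - b["pos"]) < 2.0 else 0.0

def decode_reward(nodes, edges, reward_fns):
    """Sum every reward function over every interaction edge in the graph."""
    total = 0.0
    for i, j in edges:
        for fn in reward_fns:
            total += fn(nodes[i], nodes[j])
    return total

nodes = {"ego": {"pos": 0.0}, "car_1": {"pos": 1.5}, "ped_1": {"pos": 8.0}}
edges = [("ego", "car_1"), ("ego", "ped_1")]  # interactions from domain knowledge
print(decode_reward(nodes, edges, [distance_reward]))
```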
-
Patent number: 10902279
Abstract: Saliency training may be provided to build a saliency database, which may be utilized to facilitate operation of an autonomous vehicle. The saliency database may be built by minimizing a loss function between a saliency prediction result and a saliency mapper result. The saliency mapper result may be obtained from a ground truth database, which includes image frames of an operation environment where objects or regions within respective image frames are associated with a positive saliency, a neutral saliency, or a negative saliency. Neutral saliency may be indicative of a detected gaze location of a driver corresponding to the object or region at a time prior to the time associated with a given image frame. The saliency prediction result may be generated based on features extracted from respective image frames, depth-wise concatenations associated with respective image frames, and a long short-term memory layer or a recurrent neural network.
Type: Grant
Filed: September 25, 2018
Date of Patent: January 26, 2021
Assignee: Honda Motor Co., Ltd.
Inventors: Ashish Tawari, Sujitha Catherine Martin
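The training objective is a loss between the predicted saliency map and the saliency-mapper (ground truth) result. The patent does not specify the loss; mean squared error is used below purely as an illustrative choice:

```python
# Hypothetical sketch of the saliency training objective: a loss between the
# prediction and the mapper result. MSE is an assumed, illustrative loss.

def saliency_loss(predicted, mapped):
    """Mean squared error between two flattened saliency maps."""
    assert len(predicted) == len(mapped)
    return sum((p - m) ** 2 for p, m in zip(predicted, mapped)) / len(predicted)

# A prediction closer to the mapper result yields a smaller loss:
print(saliency_loss([0.9, 0.1], [1.0, 0.0]))
print(saliency_loss([0.0, 1.0], [1.0, 0.0]))
```

Minimizing this quantity over many frames is what "building the saliency database" amounts to in the abstract's terms.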
-
Publication number: 20200377111
Abstract: A trainer device trains an automated driver system. The trainer device may include a vehicle manager that manages data associated with controlling a vehicle and a simulation manager that manages data associated with simulating the vehicle. The vehicle manager may analyze vehicle data to identify an intervention event, and the simulation manager obtains a portion of the vehicle data corresponding to the intervention event to generate simulation data, obtains user data associated with the simulation data, analyzes the user data to determine whether the user data satisfies a predetermined intervention threshold, and, on condition that the user data satisfies the predetermined intervention threshold, transmits the user data to the vehicle manager for modifying the first control data.
Type: Application
Filed: May 30, 2019
Publication date: December 3, 2020
Inventors: Teruhisa Misu, Ashish Tawari, Sujitha Catherine Martin
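The gating logic of the simulation manager (forward user data to the vehicle manager only when it satisfies a predetermined intervention threshold) can be sketched minimally. The threshold's semantics here, a minimum count of agreeing user corrections, are an assumption for illustration:

```python
# Hypothetical sketch of the intervention-threshold check. The meaning of the
# threshold (a minimum number of user corrections) is an assumed example.

INTERVENTION_THRESHOLD = 3  # assumed: minimum user corrections before transmit

def should_transmit(user_corrections):
    """Forward user data to the vehicle manager only above the threshold."""
    return len(user_corrections) >= INTERVENTION_THRESHOLD

corrections = ["brake_earlier", "brake_earlier"]
print(should_transmit(corrections))   # 2 < 3: withheld
corrections.append("steer_left")
print(should_transmit(corrections))   # 3 >= 3: transmitted
```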
-
Publication number: 20200298847
Abstract: Determining object importance in vehicle control systems can include obtaining, for a vehicle in operation, an image of a dynamic scene, identifying an object type associated with one or more objects in the image, determining, based on the object type and a goal associated with the vehicle, an importance metric associated with the one or more objects, and controlling the vehicle based at least in part on the importance metric associated with the one or more objects.
Type: Application
Filed: March 20, 2019
Publication date: September 24, 2020
Inventors: Ashish Tawari, Sujitha Catherine Martin, Mingfei Gao
-
Publication number: 20200301437
Abstract: A system and method for outputting vehicle dynamic controls using deep neural networks that include receiving environmental sensor data from at least one sensor of a vehicle of a surrounding environment of the vehicle. The system and method also include inputting the environmental sensor data to a primary deep neural network structure and inputting intermediate representation, at least one applicable traffic rule, and at least one applicable vehicle maneuver to a secondary deep neural network structure. The system and method further include outputting vehicle dynamic controls to autonomously control the vehicle to navigate within the surrounding environment of the vehicle based on the at least one applicable traffic rule and the at least one applicable vehicle maneuver.
Type: Application
Filed: March 20, 2019
Publication date: September 24, 2020
Inventors: Teruhisa Misu, Ashish Tawari, Sujitha Catherine Martin
-
Publication number: 20200250437
Abstract: A system and method for providing context aware road user importance estimation that include receiving at least one image of a vicinity of an ego vehicle. The system and method also include analyzing the at least one image to determine a local context associated with at least one road user located within the vicinity of the ego vehicle. The system and method additionally include determining a global context associated with the ego vehicle. The system and method further include fusing the local context and the global context to classify at least one highly important road user that is to be accounted for with respect to operating the ego vehicle.
Type: Application
Filed: August 16, 2019
Publication date: August 6, 2020
Inventors: Alireza Rahimpour, Sujitha Catherine Martin, Ashish Tawari, Hairong Qi
-
Publication number: 20200159214
Abstract: Systems and techniques for autonomous vehicle decision making may include training an autonomous vehicle decision making database by capturing an image including a first training object and a second training object during a training phase. The first training object may be classified as a first class and the second training object may be classified as a second class based on a driver gaze location associated with a driver of the vehicle. The database may be built based on classification of the first training object and the second training object. The autonomous vehicle decision making database may be utilized to classify a first object as a first class and a second object as a second class during an operation phase. A processor may perform a first computation associated with the first object based on the classification of the first object and the classification of the second object.
Type: Application
Filed: November 19, 2018
Publication date: May 21, 2020
Inventors: Teruhisa Misu, Ashish Tawari, Sujitha Catherine Martin
-
Publication number: 20200097754
Abstract: Saliency training may be provided to build a saliency database, which may be utilized to facilitate operation of an autonomous vehicle. The saliency database may be built by minimizing a loss function between a saliency prediction result and a saliency mapper result. The saliency mapper result may be obtained from a ground truth database, which includes image frames of an operation environment where objects or regions within respective image frames are associated with a positive saliency, a neutral saliency, or a negative saliency. Neutral saliency may be indicative of a detected gaze location of a driver corresponding to the object or region at a time prior to the time associated with a given image frame. The saliency prediction result may be generated based on features extracted from respective image frames, depth-wise concatenations associated with respective image frames, and a long short-term memory layer or a recurrent neural network.
Type: Application
Filed: September 25, 2018
Publication date: March 26, 2020
Inventors: Ashish Tawari, Sujitha Catherine Martin