Patents by Inventor Simon A.I. Stent
Simon A.I. Stent has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250121819
Abstract: Systems, methods, and other embodiments described herein relate to predicting future trajectories of ado vehicles and an ego vehicle based on the awareness of the driver of the ego vehicle towards the ado vehicles. In one embodiment, a method includes determining an awareness of a driver of an ego vehicle to ado vehicles in the vicinity of the ego vehicle. The method also includes altering track data of ado vehicles based on a lack of awareness of the driver towards the ado vehicles. The method also includes transmitting altered track data of the ado vehicles to a prediction module. The prediction module predicts future trajectories of the ado vehicles and the ego vehicle based on the altered track data and track data of the ego vehicle.
Type: Application
Filed: March 5, 2024
Publication date: April 17, 2025
Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
Inventors: John H. Gideon, Guy Rosman, Simon A.I. Stent, Kimimasa Tamura, Abhijat Biswas
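The abstract does not say how track data is altered, only that a lack of driver awareness drives the alteration before prediction. The sketch below is a minimal illustration of one possible scheme; the function names, the 0.5 awareness threshold, and the truncation-plus-noise degradation are all assumptions, not the patented method.

```python
import numpy as np

def alter_tracks_by_awareness(ado_tracks, awareness, decay=0.5, horizon=5):
    """Degrade track data for ado vehicles the driver appears unaware of,
    before handing the tracks to a trajectory prediction module.

    ado_tracks: dict vehicle_id -> (T, 2) array of past x/y positions
    awareness:  dict vehicle_id -> float in [0, 1] (1 = fully attended)
    """
    altered = {}
    for vid, track in ado_tracks.items():
        a = awareness.get(vid, 0.0)
        if a < 0.5:
            # Keep only the most recent observations and inflate noise,
            # reflecting reduced confidence in unattended vehicles.
            kept = track[-horizon:]
            noise = np.random.normal(scale=decay * (1.0 - a), size=kept.shape)
            altered[vid] = kept + noise
        else:
            altered[vid] = track
    return altered
```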
-
Publication number: 20250058792
Abstract: Systems and methods are provided for an advanced driver-assistance system (ADAS) that obtains data from a plurality of sensors. In some embodiments, the system can retrieve data regarding a user's past interactions and analyze the data with the sensor data to determine the user's behavior. In some embodiments, the ADAS can determine, based on this behavior, whether a user is unaware of an ADAS feature and generate a prompt that recommends the ADAS feature. The user's response to this prompt may be incorporated into the user's behavior for future recommendations.
Type: Application
Filed: October 31, 2024
Publication date: February 20, 2025
Inventors: Simon A. I. Stent, Guy Rosman
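A toy sketch of the recommendation loop the abstract describes: infer from past interactions whether the user seems unaware of a feature, prompt at most once, and fold the response back into the interaction history. The class, the opportunity-count heuristic, and the thresholds are assumptions for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class FeatureUsageModel:
    interactions: dict = field(default_factory=dict)   # feature -> use count
    prompted: set = field(default_factory=set)

    def record_use(self, feature: str):
        self.interactions[feature] = self.interactions.get(feature, 0) + 1

    def maybe_prompt(self, feature: str, opportunity_count: int) -> bool:
        # Heuristic (assumed): many opportunities, zero uses, not yet prompted.
        unaware = self.interactions.get(feature, 0) == 0 and opportunity_count >= 3
        if unaware and feature not in self.prompted:
            self.prompted.add(feature)
            return True
        return False

    def record_response(self, feature: str, accepted: bool):
        # An accepted recommendation counts as a use for future decisions.
        if accepted:
            self.record_use(feature)
```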
-
Patent number: 12195006
Abstract: Extended reality content in a video can be varied based on driver attentiveness. The video can be of an external environment of a vehicle and can be presented in real-time on a display located within the vehicle. The display can be a video pass-through display. The display can be an in-vehicle display, or it can be a part of a video pass-through extended reality headset. The video can present a view of an external environment of the vehicle as well as extended reality content. A level of attentiveness of a driver of the vehicle can be determined. An amount of the extended reality content presented in the video can be varied based on the level of attentiveness.
Type: Grant
Filed: February 22, 2022
Date of Patent: January 14, 2025
Assignee: Toyota Research Institute, Inc.
Inventors: Hiroshi Yasuda, Simon A. I. Stent
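One way to realize "vary the amount of extended reality content with attentiveness" is to gate how many overlay layers are drawn. The sketch below assumes priority-sorted layers and a simple proportional rule; it is an illustration, not the patented mapping.

```python
def visible_xr_layers(layers, attentiveness, min_keep=1):
    """Return the subset of extended-reality overlays to draw, scaled by
    driver attentiveness in [0, 1]; a less attentive driver sees fewer
    non-essential overlays. Layers are assumed pre-sorted by priority."""
    keep = max(min_keep, round(attentiveness * len(layers)))
    return layers[:keep]

# Example: with low attentiveness only the highest-priority overlay remains.
overlays = ["collision_warning", "navigation_arrow", "poi_labels", "ambient_art"]
print(visible_xr_layers(overlays, attentiveness=0.3))  # ['collision_warning']
```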
-
Patent number: 12162507
Abstract: Systems and methods are provided for an advanced driver-assistance system (ADAS) that obtains data from a plurality of sensors. In some embodiments, the system can retrieve data regarding a user's past interactions and analyze the data with the sensor data to determine the user's behavior. In some embodiments, the ADAS can determine, based on this behavior, whether a user is unaware of an ADAS feature and generate a prompt that recommends the ADAS feature. The user's response to this prompt may be incorporated into the user's behavior for future recommendations.
Type: Grant
Filed: March 7, 2022
Date of Patent: December 10, 2024
Assignee: Toyota Research Institute, Inc.
Inventors: Simon A. I. Stent, Guy Rosman
-
Patent number: 12157483
Abstract: Systems and methods for use of operator state conditions to generate, adapt or otherwise produce visual signals relaying information to the operator are disclosed. A monitoring system may observe and analyze a vehicle operator to determine an operator state. The monitoring system may transmit the observed driver state to an operator alert system that generates, conditions and controls the transmission of signals to the operator.
Type: Grant
Filed: May 5, 2022
Date of Patent: December 3, 2024
Assignees: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
Inventor: Simon A. I. Stent
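A minimal sketch of the "operator state conditions the visual signal" idea: map an observed state to display parameters for the alert system. The state names and the brightness/blink/position parameters are assumptions introduced for illustration.

```python
from enum import Enum

class OperatorState(Enum):
    ALERT = 0
    DISTRACTED = 1
    DROWSY = 2

def condition_signal(state: OperatorState) -> dict:
    """Map an observed operator state to parameters controlling how the
    alert system renders its visual signal."""
    if state is OperatorState.ALERT:
        return {"brightness": 0.4, "blink_hz": 0.0, "position": "cluster"}
    if state is OperatorState.DISTRACTED:
        return {"brightness": 0.8, "blink_hz": 1.0, "position": "hud_center"}
    return {"brightness": 1.0, "blink_hz": 2.0, "position": "hud_center"}
```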
-
Publication number: 20240391485
Abstract: A method for triggering capture of diverse driving data from captions is described. The method includes training a discriminator network to identify similarities between a received text description and a received scene description. The method also includes feeding real scene information, along with text/sentence descriptions, to the trained discriminator network to verify whether the real scene information matches the text/sentence description. The method further includes generating a dataset of diverse driving scenarios retrieved from a dataset of vehicle driving log data in response to a text/sentence query.
Type: Application
Filed: May 26, 2023
Publication date: November 28, 2024
Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
Inventors: Guy Rosman, Yen-Ling Kuo, Stephen G. McGill, Simon A.I. Stent
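The sketch below shows one plausible shape for the described pipeline: a small discriminator scores how well a scene matches a text query, and logged scenes are ranked by that score to build a dataset. The architecture, feature dimensions, and scoring scheme are assumptions; only the match-then-retrieve structure comes from the abstract.

```python
import torch
import torch.nn as nn

class SceneTextDiscriminator(nn.Module):
    """Toy discriminator: scores how well a scene feature vector matches a
    text embedding (both assumed precomputed with dimension d)."""
    def __init__(self, d=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * d, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, scene_feat, text_feat):
        return self.net(torch.cat([scene_feat, text_feat], dim=-1)).squeeze(-1)

def retrieve_scenarios(model, scene_feats, text_feat, k=10):
    """Rank logged scenes against a text query and return the indices of
    the top-k matches, forming a dataset of matching driving scenarios."""
    with torch.no_grad():
        scores = model(scene_feats, text_feat.expand(scene_feats.shape[0], -1))
    return torch.topk(scores, k=min(k, scene_feats.shape[0])).indices
```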
-
Patent number: 12139072
Abstract: A vehicular warning system control system can include a processor and a memory in communication with the processor. The memory can include a warning system control module having instructions that, when executed by the processor, cause the processor to detect, using sensor data having information about a gaze of each eye of a driver of a vehicle, an abnormality of a gaze of the driver. The instructions further cause the processor to modify, using the sensor data, a signal emitted by the vehicle when the abnormality is detected.
Type: Grant
Filed: March 9, 2022
Date of Patent: November 12, 2024
Assignee: Toyota Research Institute, Inc.
Inventors: Simon A.I. Stent, Heishiro Toyoda
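A minimal sketch of the two steps named in the abstract: flag a gaze abnormality from per-eye gaze data, then modify an emitted signal when one is detected. The divergence-angle test, the 7-degree threshold, and the volume boost are assumptions chosen for illustration.

```python
import numpy as np

def gaze_abnormality(left_gaze, right_gaze, divergence_deg=7.0):
    """Flag an abnormal gaze when the two eyes' gaze directions (unit
    vectors) diverge beyond a threshold angle."""
    cos = float(np.clip(np.dot(left_gaze, right_gaze), -1.0, 1.0))
    return np.degrees(np.arccos(cos)) > divergence_deg

def adjust_warning(volume, abnormal):
    # Boost the emitted warning signal when an abnormality is detected.
    return min(1.0, volume * 1.5) if abnormal else volume
```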
-
Patent number: 12084080
Abstract: Systems and methods for learning and managing robot user interfaces are disclosed herein. One embodiment generates, based on input data including information about past interactions of a particular user with a robot and with existing HMIs of the robot, a latent space using one or more encoder neural networks, wherein the latent space is a reduced-dimensionality representation of learned behavior and characteristics of the particular user, and uses the latent space as input to train a decoder neural network associated with (1) a new HMI distinct from the existing HMIs or (2) a particular HMI among the existing HMIs to alter operation of the particular HMI. The trained decoder neural network is deployed in the robot to control, at least in part, operation of the new HMI or the particular HMI in accordance with the learned behavior and characteristics of the particular user.
Type: Grant
Filed: August 26, 2022
Date of Patent: September 10, 2024
Assignees: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
Inventors: Guy Rosman, Daniel J. Brooks, Simon A. I. Stent, Tiffany Chen, Emily Sarah Sumner, Shabnam Hakimi, Jonathan DeCastro, Deepak Edakkattil Gopinath
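A compact sketch of the encoder/decoder split described above: an encoder compresses a user's interaction history into a latent vector, and a decoder maps that latent vector to parameters for a specific HMI. Layer sizes, the MLP architectures, and the interpretation of the outputs are assumptions.

```python
import torch
import torch.nn as nn

class UserEncoder(nn.Module):
    """Compress a user's interaction history into a low-dimensional latent
    vector representing learned behavior and characteristics."""
    def __init__(self, in_dim=64, latent_dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, latent_dim))

    def forward(self, history):
        return self.net(history)

class HMIDecoder(nn.Module):
    """Map the user latent vector to parameters controlling one HMI,
    e.g. prompt timing and verbosity."""
    def __init__(self, latent_dim=8, out_dim=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, out_dim))

    def forward(self, z):
        return self.net(z)

# Usage: encode the history once, then decode HMI settings per interface.
z = UserEncoder()(torch.randn(1, 64))
hmi_params = HMIDecoder()(z)
```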
-
Patent number: 12036006
Abstract: Systems and methods for training remote photoplethysmography ("PPG") models that output a subject PPG signal based on a video clip of a subject are described herein. The system may have a processor and a memory in communication with the processor. The memory may include a training module having instructions that, when executed by the processor, cause the processor to train the remote PPG model in a self-supervised contrastive learning manner using an unlabeled video clip having a sequence of images of a face of a person.
Type: Grant
Filed: May 11, 2021
Date of Patent: July 16, 2024
Assignee: Toyota Research Institute, Inc.
Inventors: John H. Gideon, Simon A. I. Stent
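For context, a generic InfoNCE-style contrastive loss is sketched below: predicted PPG embeddings from two views of the same unlabeled clip are pulled together while embeddings from other clips are pushed away. The choice of views, negatives, and temperature is an assumption, not the specific training procedure claimed in the patent.

```python
import torch
import torch.nn.functional as F

def contrastive_ppg_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for self-supervised rPPG training.

    anchor, positive: (B, D) embeddings of predicted PPG signals
    negatives:        (B, K, D) embeddings drawn from other clips
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    pos = (anchor * positive).sum(-1, keepdim=True) / temperature      # (B, 1)
    neg = torch.einsum("bd,bkd->bk", anchor, negatives) / temperature  # (B, K)
    logits = torch.cat([pos, neg], dim=1)
    labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, labels)
```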
-
Patent number: 12005919
Abstract: Extended reality content in a video can be varied based on a risk level of an external environment of a vehicle. The video can be of an external environment of a vehicle and can be presented in real-time on a display located within the vehicle. The display can be a video pass-through display. The display can be an in-vehicle display, or it can be a part of a video pass-through extended reality headset. The video can present a view of an external environment of the vehicle as well as extended reality content. A risk level of the external environment of the vehicle can be determined. An amount of the extended reality content presented in the video can be varied based on the risk level.
Type: Grant
Filed: February 22, 2022
Date of Patent: June 11, 2024
Assignee: Toyota Research Institute, Inc.
Inventors: Hiroshi Yasuda, Simon A. I. Stent
-
Patent number: 11999356
Abstract: A system includes a camera configured to capture image data of an environment, a monitoring system configured to generate gaze sequences of a subject, and a computing device communicatively coupled to the camera and the monitoring system. The computing device is configured to receive the image data from the camera and the gaze sequences from the monitoring system, implement a machine learning model comprising a convolutional encoder-decoder neural network configured to process the image data and a side-channel configured to inject the gaze sequences into a decoder stage of the convolutional encoder-decoder neural network, generate, with the machine learning model, a gaze probability density heat map, and generate, with the machine learning model, an attended awareness heat map.
Type: Grant
Filed: June 18, 2021
Date of Patent: June 4, 2024
Assignee: Toyota Research Institute, Inc.
Inventors: Guy Rosman, Simon A. I. Stent, Luke Fletcher, John Leonard, Deepak Gopinath, Katsuya Terahata
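The sketch below illustrates the side-channel idea in isolation: one decoder stage of a convolutional encoder-decoder that concatenates a rasterized gaze map with the image features before upsampling. Channel sizes and the concatenation scheme are assumptions; the actual network in the patent is not reproduced here.

```python
import torch
import torch.nn as nn

class GazeInjectedDecoderBlock(nn.Module):
    """One decoder stage in which a gaze feature map (the side channel) is
    concatenated with image features before upsampling."""
    def __init__(self, feat_ch=64, gaze_ch=1, out_ch=32):
        super().__init__()
        self.conv = nn.Conv2d(feat_ch + gaze_ch, out_ch, kernel_size=3, padding=1)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)

    def forward(self, feats, gaze_map):
        # gaze_map: (B, 1, H, W), rasterized from the subject's gaze sequence.
        x = torch.cat([feats, gaze_map], dim=1)
        return self.up(torch.relu(self.conv(x)))

# Two single-channel heads on the final decoder output could then produce the
# gaze probability density heat map and the attended awareness heat map.
```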
-
Publication number: 20240104905
Abstract: A method for multi-view dataset formation from fleet data is described. The method includes detecting at least a pair of vehicles within a vicinity of one another, and having overlapping viewing frustums of a scene. The method also includes triggering a capture of sensor data from the pair of vehicles. The method further includes synchronizing the sensor data captured by the pair of vehicles. The method also includes registering the sensor data captured by the pair of vehicles within a shared coordinate system to form a multi-view dataset of the scene.
Type: Application
Filed: September 28, 2022
Publication date: March 28, 2024
Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
Inventors: Simon A.I. Stent, Dennis Park
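The capture trigger hinges on detecting overlapping viewing frustums. The rough 2-D check below (proximity plus each camera seeing the shared region) is a deliberate simplification with assumed field-of-view and range parameters; it is not the geometric test used in the application.

```python
import numpy as np

def frustums_overlap(pos_a, yaw_a, pos_b, yaw_b, fov_deg=90.0, max_range=50.0):
    """Rough 2-D test for overlapping camera frustums: the vehicles are near
    each other and each camera sees the region midway between them."""
    pos_a, pos_b = np.asarray(pos_a, float), np.asarray(pos_b, float)
    if np.linalg.norm(pos_b - pos_a) > max_range:
        return False

    def sees(origin, yaw, target):
        v = target - origin
        heading = np.array([np.cos(yaw), np.sin(yaw)])
        cos = np.clip(v @ heading / (np.linalg.norm(v) + 1e-9), -1.0, 1.0)
        return np.degrees(np.arccos(cos)) < fov_deg / 2

    midpoint = (pos_a + pos_b) / 2
    return sees(pos_a, yaw_a, midpoint) and sees(pos_b, yaw_b, midpoint)
```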
-
Publication number: 20240092356
Abstract: Systems and methods for training a policy are disclosed. In one example, a system includes a processor and a memory with instructions that cause the processor to train the policy using a training data set with training scenes to generate an identification policy and perform a closed-loop simulation on the identification policy to collect closed-loop metrics. Based on the closed-loop metrics, the instructions cause the processor to construct an error set of the training scenes and construct an upsampled training set by upsampling the error set. After that, the policy is trained using the upsampled training set to generate a final policy.
Type: Application
Filed: February 15, 2023
Publication date: March 21, 2024
Applicant: Woven by Toyota, Inc.
Inventors: Eesha Kumar, Yiming Zhang, Stefano Pini, Simon A.I. Stent, Ana Sofia Rufino Ferreira, Sergey Zagoruyko, Christian Samuel Perone
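The abstract describes a train / simulate / collect-errors / upsample / retrain loop. The sketch below mirrors that structure with assumed callables `train_fn` (fits a policy on scenes) and `simulate_fn` (returns True when the policy passes a scene in closed loop); the round count and upsampling factor are placeholders.

```python
import random

def train_with_error_upsampling(train_scenes, train_fn, simulate_fn,
                                rounds=2, upsample_factor=3):
    """Closed-loop hard-example mining: train, simulate, collect the scenes
    where the policy fails, duplicate them, and retrain on the upsampled set."""
    data = list(train_scenes)
    policy = train_fn(data)                      # initial identification policy
    for _ in range(rounds):
        error_set = [s for s in train_scenes if not simulate_fn(policy, s)]
        if not error_set:
            break
        data = list(train_scenes) + error_set * upsample_factor
        random.shuffle(data)
        policy = train_fn(data)                  # retrain on upsampled set
    return policy                                # final policy
```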
-
Publication number: 20240010218
Abstract: Systems and methods for learning and managing robot user interfaces are disclosed herein. One embodiment generates, based on input data including information about past interactions of a particular user with a robot and with existing HMIs of the robot, a latent space using one or more encoder neural networks, wherein the latent space is a reduced-dimensionality representation of learned behavior and characteristics of the particular user, and uses the latent space as input to train a decoder neural network associated with (1) a new HMI distinct from the existing HMIs or (2) a particular HMI among the existing HMIs to alter operation of the particular HMI. The trained decoder neural network is deployed in the robot to control, at least in part, operation of the new HMI or the particular HMI in accordance with the learned behavior and characteristics of the particular user.
Type: Application
Filed: August 26, 2022
Publication date: January 11, 2024
Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
Inventors: Guy Rosman, Daniel J. Brooks, Simon A.I. Stent, Tiffany Chen, Emily Sarah Sumner, Shabnam Hakimi, Jonathan DeCastro, Deepak Edakkattil Gopinath
-
Publication number: 20230356736
Abstract: Systems and methods for use of operator state conditions to generate, adapt or otherwise produce visual signals relaying information to the operator are disclosed. A monitoring system may observe and analyze a vehicle operator to determine an operator state. The monitoring system may transmit the observed driver state to an operator alert system that generates, conditions and controls the transmission of signals to the operator.
Type: Application
Filed: May 5, 2022
Publication date: November 9, 2023
Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
Inventor: Simon A.I. Stent
-
Patent number: 11810372
Abstract: A control system, computer-readable storage medium and method of preventing occlusion of and minimizing shadows on the driver's face for driver monitoring. The system includes a steering wheel, a plurality of fiberscopes arranged evenly spaced around the steering wheel, and one or more video cameras arranged at remote ends of the plurality of fiberscopes. Distal ends of the fiberscopes emerge to a surface of the steering wheel through holes that are perpendicular to an axis of rotation of the steering wheel. Each of the distal ends of the fiberscopes includes a lens. The system includes a plurality of light sources and an electronic control unit connected to the one or more video cameras and the light sources.
Type: Grant
Filed: November 22, 2022
Date of Patent: November 7, 2023
Assignee: Toyota Jidosha Kabushiki Kaisha
Inventors: Thomas Balch, Simon A. I. Stent, Guy Rosman, John Gideon
-
Publication number: 20230331240
Abstract: Disclosed are systems and methods for training at least one policy using a framework for encoding human behaviors and preferences in a driving environment. In one example, the method includes the steps of setting parameters of rewards and a Markov Decision Process (MDP) of the at least one policy that models a simulated human driver of a simulated vehicle and an adaptive human-machine interface (HMI) system configured to interact with each other, and training the at least one policy to maximize a total reward based on the parameters of the rewards of the at least one policy.
Type: Application
Filed: January 19, 2023
Publication date: October 19, 2023
Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
Inventors: Jonathan DeCastro, Guy Rosman, Simon A.I. Stent, Emily Sumner, Shabnam Hakimi, Deepak Edakkattil Gopinath, Allison Morgan
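The abstract only states that reward parameters of an MDP are set and that the policy is trained to maximize a total reward. As a concrete illustration of "parameters of rewards," the sketch below composes a per-step reward from progress, safety, and HMI-intervention terms; the terms, weights, and their combination are assumptions, not the reward structure in the application.

```python
from dataclasses import dataclass

@dataclass
class RewardParams:
    # Weights on the reward terms; the values here are placeholders.
    w_progress: float = 1.0
    w_safety: float = 5.0
    w_intervention_cost: float = 0.2

def step_reward(p: RewardParams, progress_m: float, collision: bool,
                hmi_intervened: bool) -> float:
    """One possible per-step reward shared by the simulated-driver and
    adaptive-HMI policies interacting in the same environment."""
    r = p.w_progress * progress_m
    if collision:
        r -= p.w_safety
    if hmi_intervened:
        r -= p.w_intervention_cost
    return r
```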
-
Patent number: 11772671
Abstract: Embodiments described herein include systems and methods for predicting a transfer of control of a vehicle to a driver. A method includes receiving information about an environment of the vehicle, identifying at least one condition represented in the information about the environment of the vehicle that corresponds to at least one of one or more known conditions that lead to a handback of operational control of the vehicle to the driver, and predicting the transfer of control of the vehicle to the driver based on the at least one condition identified from the information about the environment of the vehicle.
Type: Grant
Filed: June 3, 2019
Date of Patent: October 3, 2023
Assignee: Toyota Research Institute, Inc.
Inventor: Simon A. I. Stent
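A minimal sketch of the condition-matching step described above: compare observed environment conditions against a list of known handback triggers and flag a predicted transfer of control when any match. The condition names and the dictionary representation are illustrative assumptions.

```python
def predict_handback(environment, known_conditions):
    """Flag an imminent transfer of control when any observed environment
    condition matches a known handback trigger."""
    matched = [c for c in known_conditions if environment.get(c, False)]
    return bool(matched), matched

# Example with hypothetical condition names.
print(predict_handback(
    {"construction_zone": True, "heavy_rain": False},
    ["construction_zone", "heavy_rain", "lane_markings_missing"],
))  # (True, ['construction_zone'])
```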
-
Publication number: 20230298199
Abstract: Systems and methods for detecting occluded objects are disclosed. In one embodiment, a method of determining a shape and pose of an object occluded by an occlusion object includes receiving, by a generative model, a latent vector, and iteratively performing an optimization routine until a loss is less than a loss threshold. The optimization routine includes generating, by the generative model, a predicted object having a shape and a pose from the latent vector, generating a predicted shadow cast by the predicted object, calculating the loss by comparing the predicted shadow with an observed shadow, and modifying the latent vector when the loss is greater than the loss threshold. The method further includes selecting the predicted object as the object when the loss is less than the loss threshold.
Type: Application
Filed: February 10, 2023
Publication date: September 21, 2023
Applicants: Toyota Research Institute, Inc., Columbia University, Toyota Jidosha Kabushiki Kaisha
Inventors: Simon A.I. Stent, Ruoshi Liu, Sachit Menon, Chengzhi Mao, Dennis Park, Carl M. Vondrick
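The optimization routine in the abstract maps directly onto a gradient-based latent search. The sketch below assumes differentiable callables `generator` (latent vector to object shape and pose) and `render_shadow` (object to shadow image), plus an MSE loss and Adam optimizer chosen for illustration.

```python
import torch

def fit_latent_to_shadow(generator, render_shadow, observed_shadow,
                         latent_dim=64, steps=500, loss_threshold=1e-3, lr=0.05):
    """Adjust a latent vector until the shadow rendered from the generated
    object matches the observed shadow, then return the predicted object."""
    z = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    predicted_object = generator(z)
    for _ in range(steps):
        predicted_object = generator(z)                   # shape and pose
        predicted_shadow = render_shadow(predicted_object)
        loss = torch.nn.functional.mse_loss(predicted_shadow, observed_shadow)
        if loss.item() < loss_threshold:
            break                                         # accept prediction
        opt.zero_grad()
        loss.backward()
        opt.step()                                        # modify latent vector
    return predicted_object, z.detach()
```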
-
Publication number: 20230286437
Abstract: A vehicular warning system control system can include a processor and a memory in communication with the processor. The memory can include a warning system control module having instructions that, when executed by the processor, cause the processor to detect, using sensor data having information about a gaze of each eye of a driver of a vehicle, an abnormality of a gaze of the driver. The instructions further cause the processor to modify, using the sensor data, a signal emitted by the vehicle when the abnormality is detected.
Type: Application
Filed: March 9, 2022
Publication date: September 14, 2023
Inventors: Simon A.I. Stent, Heishiro Toyoda