Patents by Inventor Simon Stent
Simon Stent has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240269844
Abstract: A method for interacting with an object includes identifying, via a keypoint classifier, one or more uncertainty regions in an environment based on an estimated trajectory of an object in the environment. The method also includes planning an interaction with the object based on identifying the one or more uncertainty regions, the planned interaction being within a region of the environment that is different from the one or more uncertainty regions. The method further includes interacting with the object based on planning the interaction.
Type: Application
Filed: February 6, 2024
Publication date: August 15, 2024
Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha, The Trustees of Columbia University in the City of New York
Inventors: Pavel Tokmakov, Ishaan Chandratreya, Shuran Song, Carl Vondrick, Simon Stent, Huy Ha
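The planning idea in this abstract can be illustrated with a minimal sketch: score per-region keypoint uncertainty along the object's estimated trajectory, then choose an interaction point outside the high-uncertainty regions. The grid representation, threshold, and function names are illustrative assumptions, not the patented method.

```python
# Hypothetical sketch: flag grid cells whose keypoint-classifier variance
# along the object's estimated trajectory is high, then plan the
# interaction in a cell outside those uncertainty regions.

def find_uncertainty_regions(keypoint_vars, threshold=0.5):
    """Return indices of cells whose keypoint variance exceeds a threshold."""
    return {i for i, v in enumerate(keypoint_vars) if v > threshold}

def plan_interaction(candidate_cells, uncertain_cells):
    """Choose the first candidate interaction cell outside uncertain regions."""
    for cell in candidate_cells:
        if cell not in uncertain_cells:
            return cell
    return None  # no certain region available; defer the interaction

# Per-cell variances of the estimated trajectory (illustrative numbers).
vars_ = [0.1, 0.9, 0.2, 0.7, 0.05]
uncertain = find_uncertainty_regions(vars_)
print(plan_interaction([1, 3, 4], uncertain))  # 4: first candidate outside the uncertainty regions
```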
-
Publication number: 20240249637
Abstract: A driving simulator may include a controller programmed to simulate operation of a vehicle being driven by a driver, the vehicle including assistive driving technology, receive driver data associated with the driver, determine whether the driver is distracted based on the driver data, and upon determination that the driver is distracted, simulate a particular driving event.
Type: Application
Filed: January 19, 2023
Publication date: July 25, 2024
Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
Inventors: Simon Stent, Andrew P. Best, Shabnam Hakimi, Guy Rosman, Emily S. Sumner, Jonathan DeCastro
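The controller logic described above reduces to a simple conditional: classify distraction from driver data, and trigger a simulated driving event when distraction is detected. The gaze-off-road threshold below is an invented illustrative criterion, not the patent's distraction test.

```python
# Toy sketch of the simulator controller: decide distraction from driver
# data, and trigger a particular driving event only when distracted.

def is_distracted(driver_data, gaze_off_road_limit=2.0):
    """Flag distraction when gaze has been off the road too long (assumed rule)."""
    return driver_data["gaze_off_road_seconds"] > gaze_off_road_limit

def simulator_step(driver_data):
    if is_distracted(driver_data):
        return "simulate_driving_event"   # e.g. a cut-in requiring a response
    return "continue_normal_simulation"

print(simulator_step({"gaze_off_road_seconds": 3.5}))  # simulate_driving_event
```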
-
Patent number: 11475720
Abstract: An embodiment takes the form of a vehicle that generates a data collection configuration for one or more vehicle sensors of a vehicle based on an estimated information gain to a neural network were the vehicle to provision the neural network with notional sensor data, and based on a vehicle resource consumption by the vehicle were the vehicle to provision the neural network with the notional sensor data. The notional sensor data comprises sensor data that would be collected from a given sensor among the vehicle sensors according to a respective sensor configuration of the given sensor. The vehicle collects sensor data from the vehicle sensors according to the generated data collection configuration.
Type: Grant
Filed: March 31, 2020
Date of Patent: October 18, 2022
Assignee: Toyota Research Institute, Inc.
Inventors: Stephen G. McGill, Guy Rosman, Luke S. Fletcher, John J. Leonard, Simon Stent
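The trade-off this abstract describes — estimated information gain to the network versus vehicle resource consumption — can be sketched as a simple utility maximization over candidate sensor configurations. The linear gain-minus-cost utility, the numbers, and the configuration names are assumptions for illustration only.

```python
# Illustrative sketch: pick a data collection configuration by weighing a
# (notional) information-gain estimate against the vehicle resources each
# sensor configuration would consume.

def choose_configuration(configs, resource_weight=1.0):
    """Pick the config maximizing estimated info gain minus weighted cost."""
    return max(configs, key=lambda c: c["info_gain"] - resource_weight * c["cost"])

configs = [
    {"name": "lidar_high_rate", "info_gain": 5.0, "cost": 4.0},
    {"name": "camera_only",     "info_gain": 3.0, "cost": 1.0},
    {"name": "all_sensors",     "info_gain": 6.0, "cost": 6.0},
]
print(choose_configuration(configs)["name"])  # camera_only: best gain/cost balance
```

With `resource_weight=0.0` the choice degenerates to pure information gain, which is why the resource term matters on a vehicle with limited bandwidth and power.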
-
Patent number: 11335104
Abstract: An embodiment takes the form of a training server that presents a video comprising a plurality of frames, each comprising a respective scene representation of a scene at a respective time. The scene representations comprise respective representations of a feature in the scene. The training server presents a respective gaze representation of a driver gaze for each frame. The gaze representations comprise respective representations of driver gaze locations at the times of the respective scene representations. The training server generates an awareness prediction via a neural network based on the driver gaze locations, the awareness prediction reflecting a predicted driver awareness of the feature. The training server receives an awareness indication associated with the video and the gaze representations, and trains the neural network based on a comparison of the awareness prediction with the awareness indication.
Type: Grant
Filed: March 31, 2020
Date of Patent: May 17, 2022
Assignee: Toyota Research Institute, Inc.
Inventors: Stephen G. McGill, Guy Rosman, Simon Stent, Luke S. Fletcher, Deepak Edakkattil Gopinath
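The training loop described above — predict awareness from gaze, compare with an annotated awareness indication, and update the network — can be shown with a deliberately tiny stand-in model. A logistic model over a single gaze-to-feature distance replaces the patent's neural network; the feature, learning rate, and data are all illustrative assumptions.

```python
# Minimal stand-in for the awareness training loop: a logistic model maps
# a gaze-to-feature distance to a predicted awareness probability and is
# fit against annotated awareness indications (labels).
import math

def predict_awareness(w, b, gaze_feature_distance):
    return 1.0 / (1.0 + math.exp(-(w * gaze_feature_distance + b)))

def train(samples, lr=0.5, epochs=2000):
    """samples: list of (gaze-to-feature distance, awareness label) pairs."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            p = predict_awareness(w, b, x)
            w -= lr * (p - y) * x   # cross-entropy gradient step
            b -= lr * (p - y)
    return w, b

# Drivers tend to be aware of features near their gaze (small distance).
data = [(0.1, 1), (0.2, 1), (0.9, 0), (1.0, 0)]
w, b = train(data)
print(predict_awareness(w, b, 0.15) > 0.5)  # True: near-gaze feature predicted aware
```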
-
Publication number: 20210304524
Abstract: An embodiment takes the form of a vehicle that generates a data collection configuration for one or more vehicle sensors of a vehicle based on an estimated information gain to a neural network were the vehicle to provision the neural network with notional sensor data, and based on a vehicle resource consumption by the vehicle were the vehicle to provision the neural network with the notional sensor data. The notional sensor data comprises sensor data that would be collected from a given sensor among the vehicle sensors according to a respective sensor configuration of the given sensor. The vehicle collects sensor data from the vehicle sensors according to the generated data collection configuration.
Type: Application
Filed: March 31, 2020
Publication date: September 30, 2021
Applicant: Toyota Research Institute, Inc.
Inventors: Stephen G. McGill, Guy Rosman, Luke S. Fletcher, John J. Leonard, Simon Stent
-
Publication number: 20210303888
Abstract: An embodiment takes the form of a training server that presents a video comprising a plurality of frames, each comprising a respective scene representation of a scene at a respective time. The scene representations comprise respective representations of a feature in the scene. The training server presents a respective gaze representation of a driver gaze for each frame. The gaze representations comprise respective representations of driver gaze locations at the times of the respective scene representations. The training server generates an awareness prediction via a neural network based on the driver gaze locations, the awareness prediction reflecting a predicted driver awareness of the feature. The training server receives an awareness indication associated with the video and the gaze representations, and trains the neural network based on a comparison of the awareness prediction with the awareness indication.
Type: Application
Filed: March 31, 2020
Publication date: September 30, 2021
Applicant: Toyota Research Institute, Inc.
Inventors: Stephen G. McGill, Guy Rosman, Simon Stent, Luke S. Fletcher, Deepak Edakkattil Gopinath
-
Publication number: 20210300354
Abstract: Systems, vehicles, devices, and methods for controlling an operation of a vehicle feature according to a learned risk preference are disclosed. An embodiment is a vehicle that controls an operation of a vehicle feature of the vehicle according to an initial risk preference. The operation of the vehicle feature is according to a risk estimate and a driver risk preference that is set to the initial risk preference. The vehicle acquires an observation of a driver behavior. The driver behavior comprises a behavior of the driver when the vehicle is in a context associated with the risk estimate, and represents a risk tolerance of the driver. The vehicle updates the driver risk preference to a learned risk preference based on a comparison of the risk estimate with the risk tolerance of the driver, and controls an operation of the vehicle feature according to the learned risk preference.
Type: Application
Filed: March 31, 2020
Publication date: September 30, 2021
Applicant: Toyota Research Institute, Inc.
Inventors: Stephen G. McGill, Guy Rosman, Luke S. Fletcher, Simon Stent
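The update step in this abstract — comparing the risk estimate with the driver's observed risk tolerance and moving the stored preference accordingly — can be sketched as a smoothing rule. The exponential-smoothing form and the numbers below are illustrative assumptions, not the patented comparison.

```python
# Hedged sketch: nudge the stored driver risk preference toward the risk
# tolerance revealed by driver behavior in a risk-estimated context.

def update_risk_preference(current_pref, risk_estimate, observed_tolerance, alpha=0.25):
    """Move the preference toward the observed tolerance when behavior in a
    context with this risk estimate disagrees with the current preference."""
    if observed_tolerance != current_pref:
        return (1 - alpha) * current_pref + alpha * observed_tolerance
    return current_pref

pref = 0.2                       # initial (conservative) risk preference
for _ in range(3):               # three observations of bolder behavior
    pref = update_risk_preference(pref, risk_estimate=0.5, observed_tolerance=0.6)
print(round(pref, 3))  # 0.431: the learned preference drifts toward the driver's tolerance
```

The feature is then operated against the learned preference instead of the factory-set initial one, which is the behavior the claim describes.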
-
Patent number: 11042994
Abstract: A system for determining the gaze direction of a subject includes a camera, a computing device and a machine-readable instruction set. The camera is positioned in an environment to capture image data of the head of a subject. The computing device is communicatively coupled to the camera and includes a processor and a non-transitory computer-readable memory. The machine-readable instruction set is stored in the non-transitory computer-readable memory and causes the computing device to: receive image data from the camera, analyze the image data using a convolutional neural network trained on an image dataset comprising images of a head of a subject captured from viewpoints distributed around up to 360 degrees of head yaw, and predict a gaze direction vector of the subject based upon a combination of head appearance and eye appearance image data from the image dataset.
Type: Grant
Filed: October 12, 2018
Date of Patent: June 22, 2021
Assignee: Toyota Research Institute, Inc.
Inventors: Simon Stent, Adria Recasens, Antonio Torralba, Petr Kellnhofer, Wojciech Matusik
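The "combination of head appearance and eye appearance" can be pictured as a two-pathway model: separate encoders for the head crop and the eye crop, fused to regress a 3D gaze direction vector. The layer shapes and random weights below are placeholders for illustration; the patent's CNN architecture is not reproduced here.

```python
# Sketch of a two-pathway gaze regressor: encode head and eye appearance
# separately, concatenate the features, and project to a unit 3D gaze vector.
import numpy as np

rng = np.random.default_rng(0)

def encoder(image, weights):
    """Stand-in for a convolutional encoder: flatten and project nonlinearly."""
    return np.tanh(weights @ image.ravel())

def predict_gaze(head_img, eye_img, w_head, w_eye, w_out):
    fused = np.concatenate([encoder(head_img, w_head), encoder(eye_img, w_eye)])
    v = w_out @ fused
    return v / np.linalg.norm(v)          # unit gaze direction vector

head = rng.normal(size=(8, 8))            # toy head crop
eye = rng.normal(size=(4, 4))             # toy eye crop
gaze = predict_gaze(head, eye,
                    rng.normal(size=(16, 64)), rng.normal(size=(16, 16)),
                    rng.normal(size=(3, 32)))
print(gaze.shape, round(float(np.linalg.norm(gaze)), 3))  # (3,) 1.0
```

Keeping a head pathway is what lets such a model degrade gracefully when the eyes are not visible, e.g. at extreme head yaw, which is the regime the 360-degree training set targets.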
-
Publication number: 20190147607
Abstract: A system for determining the gaze direction of a subject includes a camera, a computing device and a machine-readable instruction set. The camera is positioned in an environment to capture image data of the head of a subject. The computing device is communicatively coupled to the camera and includes a processor and a non-transitory computer-readable memory. The machine-readable instruction set is stored in the non-transitory computer-readable memory and causes the computing device to: receive image data from the camera, analyze the image data using a convolutional neural network trained on an image dataset comprising images of a head of a subject captured from viewpoints distributed around up to 360 degrees of head yaw, and predict a gaze direction vector of the subject based upon a combination of head appearance and eye appearance image data from the image dataset.
Type: Application
Filed: October 12, 2018
Publication date: May 16, 2019
Applicants: Toyota Research Institute, Inc., Massachusetts Institute of Technology
Inventors: Simon Stent, Adria Recasens, Antonio Torralba, Petr Kellnhofer, Wojciech Matusik
-
Patent number: 9916522
Abstract: A source deconvolutional network is adaptively trained to perform semantic segmentation. Image data is then input to the source deconvolutional network (S-Net) and the outputs of the S-Net are measured. The same image data and the measured outputs of the source deconvolutional network are then used to train a target deconvolutional network. The target deconvolutional network is defined by substantially fewer numerical parameters than the source deconvolutional network.
Type: Grant
Filed: April 5, 2016
Date of Patent: March 13, 2018
Assignee: Kabushiki Kaisha Toshiba
Inventors: German Ros Sanchez, Simon Stent, Pablo Alcantarilla
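The compression scheme described above is a form of knowledge distillation: a small target model is fit to the measured outputs of a larger source model on the same inputs, rather than to the original labels. This toy sketch distills a 1-D function into a low-parameter polynomial; the models and data are illustrative assumptions, not the patent's deconvolutional networks.

```python
# Toy distillation: fit a small "target" model to the measured outputs of a
# larger "source" model on the same inputs.
import numpy as np

x = np.linspace(-1, 1, 50)                # shared input data

def source(x):                            # stand-in for the large source network
    return np.sin(2 * x)

soft_targets = source(x)                  # measured source outputs on the inputs

# Target model: a cubic polynomial with far fewer parameters, fit by least
# squares to the source's outputs rather than to any ground truth.
coeffs = np.polyfit(x, soft_targets, deg=3)
target = np.poly1d(coeffs)

err = float(np.max(np.abs(target(x) - soft_targets)))
print(err < 0.1)  # True: 4 parameters closely mimic the source on this domain
```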
-
Publication number: 20170262735
Abstract: A source deconvolutional network is adaptively trained to perform semantic segmentation. Image data is then input to the source deconvolutional network (S-Net) and the outputs of the S-Net are measured. The same image data and the measured outputs of the source deconvolutional network are then used to train a target deconvolutional network. The target deconvolutional network is defined by substantially fewer numerical parameters than the source deconvolutional network.
Type: Application
Filed: April 5, 2016
Publication date: September 14, 2017
Applicant: Kabushiki Kaisha Toshiba
Inventors: German Ros Sanchez, Simon Stent, Pablo Alcantarilla