Patents by Inventor Simon Stent

Simon Stent has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11475720
    Abstract: An embodiment takes the form of a vehicle that generates a data collection configuration for one or more vehicle sensors of a vehicle based on an estimated information gain to a neural network were the vehicle to provision the neural network with notional sensor data, and based on a vehicle resource consumption by the vehicle were the vehicle to provision the neural network with the notional sensor data. The notional sensor data comprises sensor data that would be collected from a given sensor among the vehicle sensors according to a respective sensor configuration of the given sensor. The vehicle collects sensor data from the vehicle sensors according to the generated data collection configuration.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: October 18, 2022
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Stephen G. McGill, Guy Rosman, Luke S. Fletcher, John J. Leonard, Simon Stent
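The gain-versus-cost trade-off described in the abstract above can be sketched in miniature. This is an illustrative assumption, not the patented method: each candidate sensor configuration carries a made-up estimated information gain and resource cost, and the vehicle picks the configuration with the best gain-minus-weighted-cost score.

```python
# Hypothetical sketch: selecting a data collection configuration by trading
# off estimated information gain to a neural network against the vehicle
# resource consumption of provisioning that data. All numbers are placeholders.

def select_configuration(candidates, resource_weight=1.0):
    """Return the candidate configuration with the best gain-minus-cost score.

    Each candidate is a dict with:
      'name' - configuration label
      'gain' - estimated information gain to the neural network
      'cost' - vehicle resource consumption (e.g. bandwidth, power)
    """
    def score(cfg):
        return cfg["gain"] - resource_weight * cfg["cost"]
    return max(candidates, key=score)

candidates = [
    {"name": "camera_low_rate",  "gain": 0.30, "cost": 0.10},
    {"name": "camera_high_rate", "gain": 0.55, "cost": 0.50},
    {"name": "lidar_full_sweep", "gain": 0.70, "cost": 0.45},
]

best = select_configuration(candidates, resource_weight=1.0)
print(best["name"])  # lidar_full_sweep scores 0.25, the highest here
```

In this toy version, raising `resource_weight` makes the vehicle favor cheaper configurations, mirroring the abstract's balance between information gain and resource consumption.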
  • Patent number: 11335104
    Abstract: An embodiment takes the form of a training server that presents a video comprising a plurality of frames, each comprising a respective scene representation of a scene at a respective time. The scene representations comprise respective representations of a feature in the scene. The training server presents a respective gaze representation of a driver gaze for each frame. The gaze representations comprise respective representations of driver gaze locations at the times of the respective scene representations. The training server generates an awareness prediction via a neural network based on the driver gaze locations, the awareness prediction reflecting a predicted driver awareness of the feature. The training server receives an awareness indication associated with the video and the gaze representations, and trains the neural network based on a comparison of the awareness prediction with the awareness indication.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: May 17, 2022
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Stephen G. McGill, Guy Rosman, Simon Stent, Luke S. Fletcher, Deepak Edakkattil Gopinath
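The training loop described above can be caricatured with a single logistic unit standing in for the neural network: it predicts driver awareness of a scene feature from per-frame gaze-to-feature distances, and is trained against a human-provided awareness indication. The data, model, and update rule are invented for illustration.

```python
import math

# Toy sketch (assumptions, not the patented system): predict awareness of a
# feature from how close the driver's gaze came to it across video frames,
# and train the predictor against a labeled awareness indication.

def predict_awareness(distances, w, b):
    # Use the closest approach of the gaze to the feature across frames.
    closeness = -min(distances)
    return 1.0 / (1.0 + math.exp(-(w * closeness + b)))

def train(examples, steps=500, lr=0.5):
    """examples: list of (per-frame gaze distances, awareness label in {0, 1})."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        for distances, label in examples:
            p = predict_awareness(distances, w, b)
            grad = p - label                  # d(logistic loss)/d(logit)
            closeness = -min(distances)
            w -= lr * grad * closeness
            b -= lr * grad
    return w, b

examples = [
    ([0.1, 0.4, 0.5], 1),   # gaze passed near the feature -> aware
    ([2.0, 1.8, 2.2], 0),   # gaze stayed far away -> unaware
]
w, b = train(examples)
aware = predict_awareness([0.1, 0.4], w, b)     # high: gaze came close
unaware = predict_awareness([1.8, 2.0], w, b)   # low: gaze stayed far
```

The real system compares a network's awareness prediction with the received awareness indication; here the logistic-loss gradient plays that comparison role.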
  • Publication number: 20210303888
    Abstract: An embodiment takes the form of a training server that presents a video comprising a plurality of frames, each comprising a respective scene representation of a scene at a respective time. The scene representations comprise respective representations of a feature in the scene. The training server presents a respective gaze representation of a driver gaze for each frame. The gaze representations comprise respective representations of driver gaze locations at the times of the respective scene representations. The training server generates an awareness prediction via a neural network based on the driver gaze locations, the awareness prediction reflecting a predicted driver awareness of the feature. The training server receives an awareness indication associated with the video and the gaze representations, and trains the neural network based on a comparison of the awareness prediction with the awareness indication.
    Type: Application
    Filed: March 31, 2020
    Publication date: September 30, 2021
    Applicant: Toyota Research Institute, Inc.
    Inventors: Stephen G. McGill, Guy Rosman, Simon Stent, Luke S. Fletcher, Deepak Edakkattil Gopinath
  • Publication number: 20210304524
    Abstract: An embodiment takes the form of a vehicle that generates a data collection configuration for one or more vehicle sensors of a vehicle based on an estimated information gain to a neural network were the vehicle to provision the neural network with notional sensor data, and based on a vehicle resource consumption by the vehicle were the vehicle to provision the neural network with the notional sensor data. The notional sensor data comprises sensor data that would be collected from a given sensor among the vehicle sensors according to a respective sensor configuration of the given sensor. The vehicle collects sensor data from the vehicle sensors according to the generated data collection configuration.
    Type: Application
    Filed: March 31, 2020
    Publication date: September 30, 2021
    Applicant: Toyota Research Institute, Inc.
    Inventors: Stephen G. McGill, Guy Rosman, Luke S. Fletcher, John J. Leonard, Simon Stent
  • Publication number: 20210300354
    Abstract: Systems, vehicles, devices, and methods for controlling an operation of a vehicle feature according to a learned risk preference are disclosed. An embodiment is a vehicle that controls an operation of a vehicle feature of the vehicle according to an initial risk preference. The vehicle feature operates according to a risk estimate and a driver risk preference that is set to the initial risk preference. The vehicle acquires an observation of a driver behavior. The driver behavior comprises a behavior of the driver when the vehicle is in a context associated with the risk estimate, and represents a risk tolerance of the driver. The vehicle updates the driver risk preference to a learned risk preference based on a comparison of the risk estimate with the risk tolerance of the driver, and controls an operation of the vehicle feature according to the learned risk preference.
    Type: Application
    Filed: March 31, 2020
    Publication date: September 30, 2021
    Applicant: Toyota Research Institute, Inc.
    Inventors: Stephen G. McGill, Guy Rosman, Luke S. Fletcher, Simon Stent
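One simple way to realize the comparison-and-update step above is a bounded incremental rule: if the driver's observed risk tolerance exceeds the system's risk estimate, the learned preference moves toward more risk, and vice versa. The update rule and constants here are illustrative assumptions, not the patented method.

```python
# Hypothetical sketch of updating a driver risk preference from observed
# behavior, on a [0, 1] scale where 0 = maximally cautious and
# 1 = maximally risk-tolerant.

def update_risk_preference(preference, risk_estimate, observed_tolerance,
                           learning_rate=0.2):
    """Nudge the stored preference by the tolerance-vs-estimate gap."""
    adjusted = preference + learning_rate * (observed_tolerance - risk_estimate)
    return min(1.0, max(0.0, adjusted))

pref = 0.5                          # initial (default) risk preference
# The driver repeatedly accepts more risk than the system estimated:
for _ in range(3):
    pref = update_risk_preference(pref, risk_estimate=0.4, observed_tolerance=0.8)
print(round(pref, 2))  # 0.74
```

The clamp keeps the learned preference in range even after many consistent observations, so the feature never drifts to an unbounded risk setting.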
  • Patent number: 11042994
    Abstract: A system for determining the gaze direction of a subject includes a camera, a computing device and a machine-readable instruction set. The camera is positioned in an environment to capture image data of the head of a subject. The computing device is communicatively coupled to the camera and the computing device includes a processor and a non-transitory computer-readable memory. The machine-readable instruction set is stored in the non-transitory computer-readable memory and causes the computing device to: receive image data from the camera, analyze the image data using a convolutional neural network trained on an image dataset comprising images of a head of a subject captured from viewpoints distributed around up to 360 degrees of head yaw, and predict a gaze direction vector of the subject based upon a combination of head appearance and eye appearance image data from the image dataset.
    Type: Grant
    Filed: October 12, 2018
    Date of Patent: June 22, 2021
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Simon Stent, Adria Recasens, Antonio Torralba, Petr Kellnhofer, Wojciech Matusik
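The head-plus-eye combination in the abstract above can be sketched geometrically (this is an illustrative assumption, not the patented CNN): fuse a head-direction estimate with an eye-direction estimate into one gaze vector, falling back to the head estimate alone when the head is turned so far that the eyes are not visible to the camera. That fallback is one reason training over up to 360 degrees of head yaw matters.

```python
import math

# Toy fusion of head and eye direction estimates into a unit gaze vector.
# The blending weight and 90-degree visibility cutoff are invented placeholders.

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def fuse_gaze(head_dir, eye_dir, head_yaw_deg, eye_weight=0.7):
    """Blend eye and head direction estimates into a gaze vector.

    Falls back to the head estimate alone when the eyes face away
    from the camera (|yaw| beyond 90 degrees) or no eye estimate exists.
    """
    if abs(head_yaw_deg) > 90 or eye_dir is None:
        return normalize(head_dir)
    blended = tuple(eye_weight * e + (1 - eye_weight) * h
                    for e, h in zip(eye_dir, head_dir))
    return normalize(blended)

# Eyes visible: the fused gaze leans toward the eye estimate.
g = fuse_gaze(head_dir=(0.0, 0.0, 1.0), eye_dir=(1.0, 0.0, 0.0), head_yaw_deg=10)
# Head turned away from the camera: head estimate alone.
h = fuse_gaze(head_dir=(0.0, 0.0, -1.0), eye_dir=None, head_yaw_deg=170)
```

In the patented system a trained network learns this weighting from appearance data rather than using a fixed blend.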
  • Publication number: 20190147607
    Abstract: A system for determining the gaze direction of a subject includes a camera, a computing device and a machine-readable instruction set. The camera is positioned in an environment to capture image data of the head of a subject. The computing device is communicatively coupled to the camera and the computing device includes a processor and a non-transitory computer-readable memory. The machine-readable instruction set is stored in the non-transitory computer-readable memory and causes the computing device to: receive image data from the camera, analyze the image data using a convolutional neural network trained on an image dataset comprising images of a head of a subject captured from viewpoints distributed around up to 360 degrees of head yaw, and predict a gaze direction vector of the subject based upon a combination of head appearance and eye appearance image data from the image dataset.
    Type: Application
    Filed: October 12, 2018
    Publication date: May 16, 2019
    Applicants: Toyota Research Institute, Inc., Massachusetts Institute of Technology
    Inventors: Simon Stent, Adria Recasens, Antonio Torralba, Petr Kellnhofer, Wojciech Matusik
  • Patent number: 9916522
    Abstract: A source deconvolutional network (S-Net) is adaptively trained to perform semantic segmentation. Image data is then input to the S-Net and its outputs are measured. The same image data and the measured outputs of the S-Net are then used to train a target deconvolutional network. The target deconvolutional network is defined by substantially fewer numerical parameters than the S-Net.
    Type: Grant
    Filed: April 5, 2016
    Date of Patent: March 13, 2018
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: German Ros Sanchez, Simon Stent, Pablo Alcantarilla
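The distillation idea in the abstract above can be shown in miniature. As an illustrative assumption (not the patented method), both networks are replaced by trivial stand-ins: a small "target" model is fit to reproduce the outputs of a larger, already-trained "source" model on the same inputs, which is how the target gets away with far fewer parameters.

```python
# Minimal distillation sketch: train a tiny target model to match the
# source model's outputs on shared input data via per-sample MSE updates.

def source_model(x):
    # Stand-in for the large, adaptively trained source network.
    return 3.0 * x + 1.0

def distill(inputs, steps=200, lr=0.05):
    """Fit target parameters (w, b) to the source model's outputs."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        for x in inputs:
            err = (w * x + b) - source_model(x)   # target output vs teacher
            w -= lr * err * x
            b -= lr * err
    return w, b

w, b = distill([0.0, 0.5, 1.0, 1.5, 2.0])
print(round(w, 2), round(b, 2))  # approaches 3.0 and 1.0
```

A real system would distill a deconvolutional segmentation network into a smaller one using full images and pixel-wise outputs, but the supervision signal (the source network's measured outputs) plays the same role as the teacher values here.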
  • Publication number: 20170262735
    Abstract: A source deconvolutional network (S-Net) is adaptively trained to perform semantic segmentation. Image data is then input to the S-Net and its outputs are measured. The same image data and the measured outputs of the S-Net are then used to train a target deconvolutional network. The target deconvolutional network is defined by substantially fewer numerical parameters than the S-Net.
    Type: Application
    Filed: April 5, 2016
    Publication date: September 14, 2017
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: German Ros Sanchez, Simon Stent, Pablo Alcantarilla