Patents by Inventor Simon I

Simon I has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11488479
    Abstract: A method and system for generating and outputting a targeted warning in association with a road agent is provided. The method includes detecting, by a sensor operating in conjunction with a computing device of a vehicle, a road agent at a distance from the vehicle, analyzing, by the computing device of the vehicle, one or more characteristics of the road agent at the distance, generating, by the computing device, a targeted warning based on analyzing the one or more characteristics of the road agent at the distance, and outputting, by a component operating in conjunction with the computing device of the vehicle, the targeted warning in association with the road agent at the distance.
    Type: Grant
    Filed: December 10, 2020
    Date of Patent: November 1, 2022
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventor: Simon A. I. Stent
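The detect, analyze, generate, and output steps of the targeted-warning method above could be sketched as follows. This is a minimal illustration, not the patented implementation: the agent categories, risk weights, and warning thresholds are all assumptions chosen for the example.

```python
import math

def analyze_road_agent(agent_type, distance_m, closing_speed_mps):
    """Return an illustrative risk score in [0, 1] from simple agent characteristics."""
    # Assumed base risk per agent category (not from the patent).
    base = {"pedestrian": 0.8, "cyclist": 0.6, "vehicle": 0.4}.get(agent_type, 0.3)
    # Risk grows as distance shrinks and closing speed rises.
    proximity = math.exp(-distance_m / 50.0)
    speed = min(closing_speed_mps / 20.0, 1.0)
    return min(base * (0.5 * proximity + 0.5 * speed) * 2.0, 1.0)

def generate_targeted_warning(agent_type, distance_m, closing_speed_mps):
    """Map the analyzed characteristics to a warning message, or None."""
    risk = analyze_road_agent(agent_type, distance_m, closing_speed_mps)
    if risk > 0.7:
        return f"URGENT: {agent_type} {distance_m:.0f} m ahead"
    if risk > 0.4:
        return f"Caution: {agent_type} {distance_m:.0f} m ahead"
    return None
```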
  • Publication number: 20220300764
    Abstract: Systems and methods for training a model are described herein. In one example, a system for training the model includes a processor and a memory in communication with the processor having a training module. The training module has instructions that cause the processor to determine a contrastive loss using a self-supervised contrastive loss function and adjust, based on the contrastive loss, model weights of a visual backbone that generated feature maps and/or a textual backbone that generated feature vectors. The training module also has instructions that cause the processor to determine a localized loss using a supervised loss function that compares an image-caption attention map with visual identifiers and adjust, based on the localized loss, the model weights of the visual backbone and/or the textual backbone.
    Type: Application
    Filed: May 18, 2021
    Publication date: September 22, 2022
    Inventors: Zhijian Liu, Simon A.I. Stent, John H. Gideon, Jie Li
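The two-part objective described above, a self-supervised contrastive loss between visual and textual embeddings plus a supervised localized loss on an image-caption attention map, could be sketched as below. The tiny embedding vectors, the InfoNCE-style formulation, the mean-squared localized loss, and the weighting factor are illustrative assumptions, not the patented formulation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors given as lists of floats."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(visual, textual, temperature=0.1):
    """InfoNCE-style loss: each visual embedding should match its own caption."""
    loss = 0.0
    for i, v in enumerate(visual):
        logits = [cosine(v, t) / temperature for t in textual]
        log_denom = math.log(sum(math.exp(l) for l in logits))
        loss += log_denom - logits[i]  # -log softmax of the matching pair
    return loss / len(visual)

def localized_loss(attention_map, visual_identifier_mask):
    """Mean squared error between attention weights and a supervised region mask."""
    n = len(attention_map)
    return sum((a - m) ** 2 for a, m in zip(attention_map, visual_identifier_mask)) / n

def total_loss(visual, textual, attn, mask, weight=1.0):
    """Combined objective; `weight` balancing the two terms is an assumption."""
    return contrastive_loss(visual, textual) + weight * localized_loss(attn, mask)
```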
  • Publication number: 20220296116
    Abstract: Systems and methods for training a remote photoplethysmography (“PPG”) model that outputs a subject PPG signal based on a subject video clip of a subject are described herein. The system may have a processor and a memory in communication with the processor. The memory may include a training module having instructions that, when executed by the processor, cause the processor to train the remote PPG model in a self-supervised contrastive learning manner using an unlabeled video clip having a sequence of images of a face of a person.
    Type: Application
    Filed: May 11, 2021
    Publication date: September 22, 2022
    Inventors: John H. Gideon, Simon A.I. Stent
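A hedged sketch of the self-supervised idea above: two views of the same unlabeled face clip should yield PPG predictions with matching dominant frequencies (heart rate), while views from different clips generally should not. The frequency-distance comparison below is an illustrative assumption, not the patented loss.

```python
import math

def dominant_frequency(signal, fps=30.0):
    """Crude dominant frequency (Hz) of a sampled signal via a DFT power scan."""
    n = len(signal)
    best_f, best_p = 0.0, -1.0
    for k in range(1, n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        power = re * re + im * im
        if power > best_p:
            best_f, best_p = k * fps / n, power
    return best_f

def contrastive_ppg_distance(sig_a, sig_b):
    """Distance between two predicted PPG signals as a dominant-frequency gap (Hz)."""
    return abs(dominant_frequency(sig_a) - dominant_frequency(sig_b))
```

A contrastive objective would pull this distance toward zero for positive pairs (same clip) and push it apart for negatives (different clips).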
  • Patent number: 11430084
    Abstract: A method includes receiving, with a computing device, an image, identifying one or more salient features in the image, and generating a saliency map of the image including the one or more salient features. The method further includes sampling the image based on the saliency map such that the one or more salient features are sampled at a first density of sampling and at least one portion of the image other than the one or more salient features are sampled at a second density of sampling, where the first density of sampling is greater than the second density of sampling, and storing the sampled image in a non-transitory computer readable memory.
    Type: Grant
    Filed: September 5, 2018
    Date of Patent: August 30, 2022
    Assignees: TOYOTA RESEARCH INSTITUTE, INC., MASSACHUSETTS INSTITUTE OF TECHNOLOGY
    Inventors: Simon A. I. Stent, Adrià Recasens, Antonio Torralba, Petr Kellnhofer, Wojciech Matusik
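The variable-density sampling described above can be sketched as a two-stride grid: pixels in salient regions are kept on a fine grid and the rest of the image on a coarse one. The threshold and stride values here are assumptions for the example.

```python
def saliency_sample(image, saliency_map, threshold=0.5,
                    salient_stride=1, background_stride=4):
    """Sample an image at two densities based on a saliency map.

    `image` and `saliency_map` are 2-D lists of the same shape; returns a
    list of (row, col, value) samples, denser where saliency is high.
    """
    samples = []
    for r, row in enumerate(image):
        for c, value in enumerate(row):
            # Fine grid inside salient regions, coarse grid elsewhere.
            stride = salient_stride if saliency_map[r][c] >= threshold else background_stride
            if r % stride == 0 and c % stride == 0:
                samples.append((r, c, value))
    return samples
```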
  • Publication number: 20220189308
    Abstract: A method and system for generating and outputting a targeted warning in association with a road agent is provided. The method includes detecting, by a sensor operating in conjunction with a computing device of a vehicle, a road agent at a distance from the vehicle, analyzing, by the computing device of the vehicle, one or more characteristics of the road agent at the distance, generating, by the computing device, a targeted warning based on analyzing the one or more characteristics of the road agent at the distance, and outputting, by a component operating in conjunction with the computing device of the vehicle, the targeted warning in association with the road agent at the distance.
    Type: Application
    Filed: December 10, 2020
    Publication date: June 16, 2022
    Applicant: TOYOTA RESEARCH INSTITUTE, INC.
    Inventor: Simon A.I. Stent
  • Publication number: 20220153278
    Abstract: A system includes a camera configured to capture image data of an environment, a monitoring system configured to generate gaze sequences of a subject, and a computing device communicatively coupled to the camera and the monitoring system. The computing device is configured to receive the image data from the camera and the gaze sequences from the monitoring system, implement a machine learning model comprising a convolutional encoder-decoder neural network configured to process the image data and a side-channel configured to inject the gaze sequences into a decoder stage of the convolutional encoder-decoder neural network, generate, with the machine learning model, a gaze probability density heat map, and generate, with the machine learning model, an attended awareness heat map.
    Type: Application
    Filed: June 18, 2021
    Publication date: May 19, 2022
    Applicant: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Guy Rosman, Simon A.I. Stent, Luke Fletcher, John Leonard, Deepak Gopinath, Katsuya Terahata
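One ingredient of the system above, turning a gaze sequence into a gaze probability density heat map, could be sketched with Gaussian kernels centred on each fixation. The grid size and kernel width are illustrative assumptions; the patented system produces this map with a learned encoder-decoder network rather than fixed kernels.

```python
import math

def gaze_heatmap(gaze_points, width=8, height=8, sigma=1.0):
    """Build a normalised heat map from (x, y) gaze fixations on a grid."""
    heat = [[0.0] * width for _ in range(height)]
    for gx, gy in gaze_points:
        for r in range(height):
            for c in range(width):
                # Gaussian falloff from the fixation point.
                d2 = (c - gx) ** 2 + (r - gy) ** 2
                heat[r][c] += math.exp(-d2 / (2 * sigma ** 2))
    total = sum(sum(row) for row in heat)
    return [[v / total for v in row] for row in heat]  # normalise to a density
```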
  • Patent number: 11314402
    Abstract: In some implementations, a user may zoom in on a particular asset to show an all assets view that displays larger assets in a grid, and zoom out to show multiple smaller assets in another grid at different zoom levels while maintaining focus on the particular asset. Particularly, a GUI may display cells of a grid at a first zoom level, receive zoom input to transition to a second zoom level, and display cells of a different size in a second grid while maintaining focus on and positioning of the particular asset across the zoom levels.
    Type: Grant
    Filed: September 4, 2019
    Date of Patent: April 26, 2022
    Assignee: Apple Inc.
    Inventors: Andreas J. Karlsson, Matthieu Lucas, Serhii Tatarchuk, Simon I. Bovet, Graham R. Clarke
  • Publication number: 20220121866
    Abstract: A control system, computer-readable storage medium, and method of preventing occlusion of, and minimizing shadows on, the driver's face for driver monitoring. The system includes a steering wheel, a plurality of fiberscopes arranged evenly spaced around the steering wheel, and one or more video cameras arranged at remote ends of the plurality of fiberscopes. Distal ends of the fiberscopes emerge to a surface of the steering wheel through holes that are perpendicular to an axis of rotation of the steering wheel. Each of the distal ends of the fiberscopes includes a lens. The system includes a plurality of light sources and an electronic control unit connected to the one or more video cameras and the light sources.
    Type: Application
    Filed: October 20, 2020
    Publication date: April 21, 2022
    Applicant: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Thomas BALCH, Simon A. I. STENT, Guy ROSMAN, John GIDEON
  • Patent number: 11221671
    Abstract: A system includes a camera positioned in an environment to capture image data of a subject; a computing device communicatively coupled to the camera, the computing device comprising a processor and a non-transitory computer-readable memory; and a machine-readable instruction set stored in the non-transitory computer-readable memory. The machine-readable instruction set causes the computing device to perform at least the following when executed by the processor: receive the image data from the camera; analyze the image data captured by the camera using a neural network trained on training data generated from a 360-degree panoramic camera configured to collect image data of a subject and a visual target that is moved about an environment; and predict a gaze direction vector of the subject with the neural network.
    Type: Grant
    Filed: January 16, 2020
    Date of Patent: January 11, 2022
    Assignees: TOYOTA RESEARCH INSTITUTE, INC., MASSACHUSETTS INSTITUTE OF TECHNOLOGY
    Inventors: Simon A. I. Stent, Adrià Recasens, Petr Kellnhofer, Wojciech Matusik, Antonio Torralba
  • Patent number: 11144052
    Abstract: A vehicle control handoff system includes a controller comprising a processor and a non-transitory computer readable memory, one or more environment sensors and an imaging device communicatively coupled to the controller, and a machine-readable instruction set stored in the non-transitory computer readable memory of the controller. The machine-readable instruction set causes the system to: receive image data from at least one imaging device, receive one or more signals corresponding to an environment of a vehicle from the one or more environment sensors, define a gaze pattern comprising a first gaze direction corresponding to a first location within the environment of the vehicle, determine a first gaze based on the image data of the driver, determine whether the first gaze corresponds to at least one gaze direction of the gaze pattern, and transfer control of a vehicle operation from control by the controller to the driver in response to determining that the first gaze corresponds to the gaze pattern.
    Type: Grant
    Filed: December 7, 2018
    Date of Patent: October 12, 2021
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventor: Simon A. I. Stent
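The gaze-pattern check in the handoff abstract above could be sketched as an angular comparison: control transfers to the driver only when the observed gaze aligns with a direction in the defined gaze pattern. The angular tolerance and vector representation are assumptions for the example.

```python
import math

def angle_between(u, v):
    """Angle in degrees between two 3-D gaze direction vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))

def gaze_matches_pattern(observed_gaze, gaze_pattern, tolerance_deg=10.0):
    """True if the observed gaze aligns with any direction in the pattern."""
    return any(angle_between(observed_gaze, g) <= tolerance_deg for g in gaze_pattern)

def transfer_control(observed_gaze, gaze_pattern):
    """Hand control to the driver only when the gaze pattern is satisfied."""
    return "driver" if gaze_matches_pattern(observed_gaze, gaze_pattern) else "controller"
```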
  • Publication number: 20210300428
    Abstract: Systems and methods for driver dazzle detection for a subject vehicle may include: using a plurality of sensors to gather data regarding driver characteristics and light source characteristics in an environment of the subject vehicle; evaluating sensor data received from the plurality of sensors to determine at least one of a driver characteristic and a light source characteristic; determining a level of dazzling of a driver of the vehicle based on the determined at least one of a driver characteristic and a light source characteristic; and engaging remedial action based on the determined level of dazzle of the driver of the vehicle, wherein the remedial action comprises at least one of switching a control of the vehicle from a manual drive mode to an autonomous drive mode and engaging an ADAS feature if it is detected that the determined level of dazzling is above a dazzling threshold.
    Type: Application
    Filed: March 30, 2020
    Publication date: September 30, 2021
    Inventor: SIMON A. I. STENT
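The threshold logic in the dazzle abstract above could be sketched as follows; the scoring weights, the gaze-path multiplier, and the dazzle threshold are invented for illustration and are not the patented method of combining driver and light-source characteristics.

```python
def dazzle_level(pupil_constriction, light_intensity, light_in_gaze_path):
    """Combine driver and light-source characteristics into a level in [0, 1]."""
    level = 0.5 * pupil_constriction + 0.5 * light_intensity
    # A light source in the driver's gaze path amplifies the level (assumed factor).
    return min(level * (1.5 if light_in_gaze_path else 1.0), 1.0)

def remedial_action(pupil_constriction, light_intensity, light_in_gaze_path,
                    threshold=0.6):
    """Engage remedial action only above the dazzling threshold."""
    if dazzle_level(pupil_constriction, light_intensity, light_in_gaze_path) > threshold:
        return "switch_to_autonomous"
    return "no_action"
```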
  • Publication number: 20210300397
    Abstract: Systems, vehicles and methods for determining wrong direction driving are disclosed. In one embodiment, a system for determining a vehicle traveling in a wrong direction includes one or more sensors that produce sensor data, one or more processors, and one or more non-transitory computer-readable medium storing computer readable-instructions. When the computer-readable instructions are executed by the one or more processors, the computer-readable instructions cause the one or more processors to determine one or more lanes within a roadway using the sensor data, determine a direction of travel of the one or more lanes using the sensor data, and identify a non-compliant vehicle traveling in a direction in the one or more lanes that is different from the determined direction of travel in the one or more lanes.
    Type: Application
    Filed: March 31, 2020
    Publication date: September 30, 2021
    Applicant: Toyota Research Institute, Inc.
    Inventors: Stephen G. McGill, Guy Rosman, Luke S. Fletcher, Simon A. I. Stent
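The non-compliance check above reduces to comparing a vehicle's heading against the lane's determined direction of travel. A minimal sketch, where the 90-degree decision boundary is an illustrative assumption:

```python
def heading_difference_deg(vehicle_heading_deg, lane_direction_deg):
    """Smallest angular difference between two compass headings, in degrees."""
    diff = abs(vehicle_heading_deg - lane_direction_deg) % 360.0
    return min(diff, 360.0 - diff)

def is_wrong_way(vehicle_heading_deg, lane_direction_deg):
    """Flag a vehicle whose heading opposes the lane's direction of travel."""
    return heading_difference_deg(vehicle_heading_deg, lane_direction_deg) > 90.0
```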
  • Patent number: 11126257
    Abstract: A system for gaze and gesture detection in unconstrained environments includes a 360-degree (omnidirectional) camera system, one or more depth sensors, and associated memory, processors and programming instructions to determine an object of a human user's attention in the unconstrained environment. The illustrative system may identify the object using eye gaze, gesture detection, and/or speech recognition. The system may generate a saliency map and identify areas of interest. A directionality vector may be projected on the saliency map to find intersecting areas of interest. The system may identify the object of attention once the object of attention is located.
    Type: Grant
    Filed: April 17, 2018
    Date of Patent: September 21, 2021
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventor: Simon A. I. Stent
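Projecting a directionality vector onto a saliency map to find the attended object, as the abstract above describes, could be sketched as a ray walk over a saliency grid. The grid, step size, and saliency threshold are assumptions for the example.

```python
def attended_object(origin, direction, saliency_map, threshold=0.5, max_steps=50):
    """Walk along the direction vector; return the first salient cell hit, or None.

    `origin` and `direction` are (x, y) pairs; `saliency_map` is a 2-D list
    indexed as saliency_map[row][col].
    """
    height, width = len(saliency_map), len(saliency_map[0])
    x, y = origin
    dx, dy = direction
    for _ in range(max_steps):
        x, y = x + dx, y + dy
        r, c = int(round(y)), int(round(x))
        if not (0 <= r < height and 0 <= c < width):
            return None  # the ray left the map without hitting an area of interest
        if saliency_map[r][c] >= threshold:
            return (r, c)
    return None
```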
  • Patent number: 11079758
    Abstract: A method of collecting data regarding the operation of a vehicle that includes receiving sensor data regarding one or more objects or events in a surrounding environment using one or more vehicle sensors, classifying, by one or more processors, the one or more detected objects or events, generating one or more vehicle inquiries based on the classification of the one or more detected objects or events, presenting one or more vehicle inquiries, and receiving user feedback to the one or more vehicle inquiries.
    Type: Grant
    Filed: January 12, 2018
    Date of Patent: August 3, 2021
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: John J. Leonard, Simon A. I. Stent, Luke S. Fletcher, Stephen G. McGill
  • Patent number: 10916030
    Abstract: A system for selectively activating infrared lights in a vehicle cabin includes a controller comprising a processor and a non-transitory computer readable memory, two or more infrared illumination sources positioned within the vehicle cabin, the two or more infrared illumination sources communicatively coupled to the controller, an imaging device communicatively coupled to the controller and a machine-readable instruction set stored in the non-transitory computer readable memory of the controller. The machine-readable instruction set causes the system to perform at least the following when executed by the processor: receive image data from the imaging device, determine a location of an occupant in the vehicle cabin based on the image data, and activate a first infrared illumination source of the two or more infrared illumination sources that corresponds to the location of the occupant in the vehicle cabin.
    Type: Grant
    Filed: December 7, 2018
    Date of Patent: February 9, 2021
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventor: Simon A. I. Stent
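The selective activation above can be sketched as zone lookup: the infrared source whose zone contains the detected occupant is switched on. The one-dimensional zone boundaries are assumptions for the example.

```python
def select_ir_source(occupant_x, zones):
    """Return the index of the illumination source covering occupant_x.

    `zones` is a list of (min_x, max_x) intervals, one per IR source.
    """
    for i, (lo, hi) in enumerate(zones):
        if lo <= occupant_x < hi:
            return i
    return None

def activate(occupant_x, zones):
    """Per-source activation flags for the occupant's determined location."""
    idx = select_ir_source(occupant_x, zones)
    return [i == idx for i in range(len(zones))]
```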
  • Patent number: 10871868
    Abstract: Systems, methods, and computer-readable medium are provided for presenting a synchronized content scrubber. For example, a user device may store digital content items for presentation on a screen of the user device. A user interface may be configured with multiple viewing areas. An image that represents a content item may be presented in the first viewing area and the second viewing area. However, in the second viewing area, the image that represents the content item may be presented in a visually distinct manner from other images in the second viewing area.
    Type: Grant
    Filed: June 5, 2015
    Date of Patent: December 22, 2020
    Assignee: Apple Inc.
    Inventors: Britt S. Miura, Andreas J. Karlsson, Daniel E. Gobera Rubalcava, Justin S. Titi, Simon I. Bovet, Nicholas D. Lupinetti
  • Patent number: 10866635
    Abstract: A method of training a gaze estimation model includes displaying a target image at a known location on a display in front of a subject and receiving images captured from a plurality of image sensors surrounding the subject, wherein each image sensor has a known location relative to the display. The method includes determining a reference gaze vector for one or more eyes of the subject based on the images and the known location of the target image and then determining, with the model, a gaze direction vector of each of the one or more eyes of the subject from data captured by an eye-tracker. The method further includes determining, with the model, an uncertainty in measurement of the gaze direction vector and an error between the reference gaze vector and the gaze direction vector and providing feedback based on at least one of the uncertainty and the error.
    Type: Grant
    Filed: September 13, 2018
    Date of Patent: December 15, 2020
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventor: Simon A. I. Stent
  • Publication number: 20200379460
    Abstract: Embodiments described herein include systems and methods for predicting a transfer of control of a vehicle to a driver. A method includes receiving information about an environment of the vehicle, identifying at least one condition represented in the information about the environment of the vehicle that corresponds to at least one of one or more known conditions that lead to a handback of operational control of the vehicle to the driver, and predicting the transfer of control of the vehicle to the driver based on the at least one condition identified from the information about the environment of the vehicle.
    Type: Application
    Filed: June 3, 2019
    Publication date: December 3, 2020
    Applicant: Toyota Research Institute, Inc.
    Inventor: Simon A.I. Stent
  • Publication number: 20200379631
    Abstract: In some implementations, a user may zoom in on a particular asset to show an all assets view that displays larger assets in a grid, and zoom out to show multiple smaller assets in another grid at different zoom levels while maintaining focus on the particular asset. Particularly, a GUI may display cells of a grid at a first zoom level, receive zoom input to transition to a second zoom level, and display cells of a different size in a second grid while maintaining focus on and positioning of the particular asset across the zoom levels.
    Type: Application
    Filed: September 4, 2019
    Publication date: December 3, 2020
    Applicant: Apple Inc.
    Inventors: Andreas J. Karlsson, Matthieu Lucas, Serhii Tatarchuk, Simon I. Bovet, Graham R. Clarke
  • Publication number: 20200342303
    Abstract: A system for predicting a hazardous event from road-scene data includes an electronic control unit configured to implement a neural network and a camera communicatively coupled to the electronic control unit, wherein the camera generates the road-scene data. The electronic control unit is configured to receive the road-scene data from the camera, and predict, with the neural network, an occurrence of the hazardous event within the road-scene data from the camera.
    Type: Application
    Filed: April 24, 2019
    Publication date: October 29, 2020
    Applicant: Toyota Research Institute, Inc.
    Inventor: Simon A.I. Stent