Patents by Inventor Adrià Recasens

Adrià Recasens has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11430084
    Abstract: A method includes receiving, with a computing device, an image, identifying one or more salient features in the image, and generating a saliency map of the image including the one or more salient features. The method further includes sampling the image based on the saliency map such that the one or more salient features are sampled at a first density of sampling and at least one portion of the image other than the one or more salient features is sampled at a second density of sampling, where the first density of sampling is greater than the second density of sampling, and storing the sampled image in a non-transitory computer-readable memory.
    Type: Grant
    Filed: September 5, 2018
    Date of Patent: August 30, 2022
    Assignees: TOYOTA RESEARCH INSTITUTE, INC., MASSACHUSETTS INSTITUTE OF TECHNOLOGY
    Inventors: Simon A. I. Stent, Adrià Recasens, Antonio Torralba, Petr Kellnhofer, Wojciech Matusik
  • Patent number: 11221671
    Abstract: A system includes a camera positioned in an environment to capture image data of a subject; a computing device communicatively coupled to the camera, the computing device comprising a processor and a non-transitory computer-readable memory; and a machine-readable instruction set stored in the non-transitory computer-readable memory. The machine-readable instruction set causes the computing device to perform at least the following when executed by the processor: receive the image data from the camera; analyze the image data captured by the camera using a neural network trained on training data generated from a 360-degree panoramic camera configured to collect image data of a subject and a visual target that is moved about an environment; and predict a gaze direction vector of the subject with the neural network.
    Type: Grant
    Filed: January 16, 2020
    Date of Patent: January 11, 2022
    Assignees: TOYOTA RESEARCH INSTITUTE, INC., MASSACHUSETTS INSTITUTE OF TECHNOLOGY
    Inventors: Simon A. I. Stent, Adrià Recasens, Petr Kellnhofer, Wojciech Matusik, Antonio Torralba
  • Patent number: 11042994
    Abstract: A system for determining the gaze direction of a subject includes a camera, a computing device, and a machine-readable instruction set. The camera is positioned in an environment to capture image data of the head of a subject. The computing device is communicatively coupled to the camera and includes a processor and a non-transitory computer-readable memory. The machine-readable instruction set is stored in the non-transitory computer-readable memory and causes the computing device to: receive image data from the camera, analyze the image data using a convolutional neural network trained on an image dataset comprising images of a head of a subject captured from viewpoints distributed around up to 360 degrees of head yaw, and predict a gaze direction vector of the subject based upon a combination of head appearance and eye appearance image data from the image dataset.
    Type: Grant
    Filed: October 12, 2018
    Date of Patent: June 22, 2021
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Simon Stent, Adria Recasens, Antonio Torralba, Petr Kellnhofer, Wojciech Matusik
  • Publication number: 20200249753
    Abstract: A system includes a camera positioned in an environment to capture image data of a subject; a computing device communicatively coupled to the camera, the computing device comprising a processor and a non-transitory computer-readable memory; and a machine-readable instruction set stored in the non-transitory computer-readable memory. The machine-readable instruction set causes the computing device to perform at least the following when executed by the processor: receive the image data from the camera; analyze the image data captured by the camera using a neural network trained on training data generated from a 360-degree panoramic camera configured to collect image data of a subject and a visual target that is moved about an environment; and predict a gaze direction vector of the subject with the neural network.
    Type: Application
    Filed: January 16, 2020
    Publication date: August 6, 2020
    Applicants: Toyota Research Institute, Inc., Massachusetts Institute of Technology
    Inventors: Simon A.I. Stent, Adrià Recasens, Petr Kellnhofer, Wojciech Matusik, Antonio Torralba
  • Publication number: 20200074589
    Abstract: A method includes receiving, with a computing device, an image, identifying one or more salient features in the image, and generating a saliency map of the image including the one or more salient features. The method further includes sampling the image based on the saliency map such that the one or more salient features are sampled at a first density of sampling and at least one portion of the image other than the one or more salient features is sampled at a second density of sampling, where the first density of sampling is greater than the second density of sampling, and storing the sampled image in a non-transitory computer-readable memory.
    Type: Application
    Filed: September 5, 2018
    Publication date: March 5, 2020
    Applicants: Toyota Research Institute, Inc., Massachusetts Institute of Technology
    Inventors: Simon A.I. Stent, Adrià Recasens, Antonio Torralba, Petr Kellnhofer, Wojciech Matusik
  • Publication number: 20190147607
    Abstract: A system for determining the gaze direction of a subject includes a camera, a computing device, and a machine-readable instruction set. The camera is positioned in an environment to capture image data of the head of a subject. The computing device is communicatively coupled to the camera and includes a processor and a non-transitory computer-readable memory. The machine-readable instruction set is stored in the non-transitory computer-readable memory and causes the computing device to: receive image data from the camera, analyze the image data using a convolutional neural network trained on an image dataset comprising images of a head of a subject captured from viewpoints distributed around up to 360 degrees of head yaw, and predict a gaze direction vector of the subject based upon a combination of head appearance and eye appearance image data from the image dataset.
    Type: Application
    Filed: October 12, 2018
    Publication date: May 16, 2019
    Applicants: Toyota Research Institute, Inc., Massachusetts Institute of Technology
    Inventors: Simon Stent, Adria Recasens, Antonio Torralba, Petr Kellnhofer, Wojciech Matusik
  • Publication number: 20160299515
    Abstract: The object of the system is the integral control of spaces based on primary low-level information obtained from a distribution of sensors and their corresponding actuators, while also permitting interaction with sensor-actuators of an office-automation or administrative character. Unlike known systems, the system of the invention is a unitary, general-purpose system in the sense that the same system (hardware and software) serves any space to be controlled (parking lots, libraries, hospitals, hotels, building spaces, and others).
    Type: Application
    Filed: December 11, 2013
    Publication date: October 13, 2016
    Inventors: Juan Ramon SAGARRA RIUS, Adria RECASENS CONTINENTE, Berta PONS DOZ, Joaquim GUILANIU BOU, Jose Manuel HUERTAS JANOT, Rafael VAQUES BROC, Ramon GARCIA-BRAGADO ACIN, Ricard MASO CERDA, Javier VILADEGUT GARRAY, Yolanda BLASCO RODRIGUEZ
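
The abstracts above describe the inventions only at a high level. The sketches that follow are illustrative only: minimal Python approximations of the general ideas, not the patented implementations. Every function name, threshold, coordinate, and layer size in them is an assumption made for the example.

A minimal sketch of saliency-guided non-uniform sampling in the spirit of patent 11430084 / publication 20200074589: pixels in salient regions are kept at a higher sampling density than the rest of the image. The gradient-magnitude saliency heuristic and the step sizes are assumed for illustration.

# Sketch of saliency-guided non-uniform image sampling (illustrative only).
# The gradient-magnitude "saliency" and the density parameters are assumptions,
# not the patented method.
import numpy as np


def saliency_map(image: np.ndarray) -> np.ndarray:
    """Toy saliency: normalized gradient magnitude of a grayscale image."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-8)


def sample_image(image, saliency, dense_step=1, sparse_step=4, threshold=0.3):
    """Keep every dense_step-th pixel in salient regions and every
    sparse_step-th pixel elsewhere; returns (row, col, value) samples."""
    samples = []
    h, w = image.shape
    for r in range(h):
        for c in range(w):
            step = dense_step if saliency[r, c] >= threshold else sparse_step
            if r % step == 0 and c % step == 0:
                samples.append((r, c, float(image[r, c])))
    return samples


if __name__ == "__main__":
    img = np.zeros((64, 64))
    img[24:40, 24:40] = 1.0          # a bright square stands in for a salient feature
    kept = sample_image(img, saliency_map(img))
    print(f"kept {len(kept)} of {img.size} pixels")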
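A minimal sketch of how ground-truth gaze labels could be produced for training data of the kind described in patent 11221671 / publication 20200249753, where a visual target is moved about the environment: the label is simply the unit vector from the subject's eyes to the tracked target. The 3D coordinates and this simple geometry are assumptions, not the patented data-collection pipeline.

# Sketch of gaze-label generation from tracked positions (illustrative only).
import numpy as np


def gaze_label(eye_position: np.ndarray, target_position: np.ndarray) -> np.ndarray:
    """Unit vector pointing from the subject's eyes toward the visual target."""
    direction = target_position - eye_position
    return direction / np.linalg.norm(direction)


if __name__ == "__main__":
    eyes = np.array([0.0, 1.6, 0.0])     # assumed eye position, meters
    target = np.array([1.0, 1.2, 2.0])   # assumed target position, meters
    print(gaze_label(eyes, target))      # ground-truth gaze direction vector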
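A minimal PyTorch sketch of a two-stream gaze estimator in the spirit of patents 11042994 and 11221671: separate encoders for head appearance and eye appearance are fused to regress a 3D gaze direction vector. The layer sizes, crop sizes, and fusion scheme are assumptions, not the patented network.

# Sketch of a two-stream gaze-direction regressor (illustrative only).
import torch
import torch.nn as nn


def conv_encoder(out_dim: int = 128) -> nn.Sequential:
    """Small convolutional encoder for a 3x64x64 crop."""
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, out_dim), nn.ReLU(),
    )


class GazeNet(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.head_net = conv_encoder()   # encodes the head-appearance crop
        self.eye_net = conv_encoder()    # encodes the eye-region crop
        self.fuse = nn.Sequential(
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 3),           # unnormalized 3D gaze direction
        )

    def forward(self, head: torch.Tensor, eyes: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([self.head_net(head), self.eye_net(eyes)], dim=1)
        gaze = self.fuse(feats)
        return gaze / gaze.norm(dim=1, keepdim=True)  # unit gaze vector


if __name__ == "__main__":
    model = GazeNet()
    head_crop = torch.randn(2, 3, 64, 64)
    eye_crop = torch.randn(2, 3, 64, 64)
    print(model(head_crop, eye_crop).shape)  # torch.Size([2, 3])

In practice the crops would come from a face and eye detector, and such a network would be trained with a cosine or angular loss against labels like those computed in the previous sketch.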