Patents by Inventor Adria Recasens
Adria Recasens has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250191194
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for tracking query points in videos using a point tracking neural network.
Type: Application
Filed: March 7, 2023
Publication date: June 12, 2025
Inventors: Carl Doersch, Ankush Gupta, Larisa Markeeva, Klaus Greff, Andrea Tagliasacchi, Adrià Recasens Continente, Yusuf Aytar, Joao Carreira, Andrew Zisserman, Yi Yang
-
Publication number: 20250181887
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing network inputs using a neural network that implements partitioned attention.
Type: Application
Filed: March 7, 2023
Publication date: June 5, 2025
Inventors: Adrià Recasens Continente, Jason Jiachen Lin, Luyu Wang, Jean-Baptiste Alayrac, Andrew Coulter Jaegle, Joao Carreira, Pauline Luc, Antoine Miech, Lucas De Freitas Smaira, Ross Hemsley, Andrew Zisserman
-
Publication number: 20250103856
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for using a neural network to generate a network output that characterizes an entity. In one aspect, a method includes: obtaining a representation of the entity as a set of data element embeddings, obtaining a set of latent embeddings, and processing: (i) the set of data element embeddings, and (ii) the set of latent embeddings, using the neural network to generate the network output. The neural network includes a sequence of neural network blocks including: (i) one or more local cross-attention blocks, and (ii) an output block. Each local cross-attention block partitions the set of latent embeddings and the set of data element embeddings into proper subsets, and updates each proper subset of the set of latent embeddings using attention over only the corresponding proper subset of the set of data element embeddings.
Type: Application
Filed: January 30, 2023
Publication date: March 27, 2025
Inventors: Joao Carreira, Andrew Coulter Jaegle, Skanda Kumar Koppula, Daniel Zoran, Adrià Recasens Continente, Catalin-Dumitru Ionescu, Olivier Jean Hénaff, Evan Gerard Shelhamer, Relja Arandjelovic, Matthew Botvinick, Oriol Vinyals, Karen Simonyan, Andrew Zisserman
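The local cross-attention described in the abstract can be illustrated with a minimal sketch. This is not the patented implementation; the function names, group counts, and single-head attention are assumptions made for illustration. The key property is that each subset of latents attends only over its corresponding subset of data element embeddings:

```python
# Hypothetical sketch of local cross-attention: latents and data elements
# are partitioned into matching proper subsets, and each latent subset is
# updated by attending only over its corresponding data subset.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def local_cross_attention(latents, data, num_groups):
    """latents: (L, d), data: (N, d); L and N divisible by num_groups."""
    latent_groups = np.split(latents, num_groups)  # proper subsets of latents
    data_groups = np.split(data, num_groups)       # corresponding data subsets
    updated = []
    for q, kv in zip(latent_groups, data_groups):
        scores = q @ kv.T / np.sqrt(q.shape[-1])   # scores within the group only
        updated.append(softmax(scores) @ kv)       # attend over this subset alone
    return np.vstack(updated)

latents = np.random.randn(8, 16)
data = np.random.randn(32, 16)
out = local_cross_attention(latents, data, num_groups=4)
print(out.shape)  # (8, 16)
```

Because each group's attention is confined to its own subset, the cost scales with the subset size rather than the full set of data element embeddings.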
-
Patent number: 11430084
Abstract: A method includes receiving, with a computing device, an image, identifying one or more salient features in the image, and generating a saliency map of the image including the one or more salient features. The method further includes sampling the image based on the saliency map such that the one or more salient features are sampled at a first density of sampling and at least one portion of the image other than the one or more salient features are sampled at a second density of sampling, where the first density of sampling is greater than the second density of sampling, and storing the sampled image in a non-transitory computer readable memory.
Type: Grant
Filed: September 5, 2018
Date of Patent: August 30, 2022
Assignees: TOYOTA RESEARCH INSTITUTE, INC., MASSACHUSETTS INSTITUTE OF TECHNOLOGY
Inventors: Simon A. I. Stent, Adrià Recasens, Antonio Torralba, Petr Kellnhofer, Wojciech Matusik
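The sampling scheme in the abstract can be sketched as follows. This is an illustrative toy version, not the patented method; the stride values, threshold, and function name are assumptions. Pixels inside salient regions are kept at a higher density (finer stride) than the rest of the image:

```python
# Illustrative saliency-guided sampling: salient pixels are sampled at a
# first (dense) stride, all other pixels at a second (sparse) stride.
import numpy as np

def sample_by_saliency(image, saliency, dense_stride=1, sparse_stride=4, thresh=0.5):
    """Return (row, col, value) samples; salient pixels use the finer stride."""
    samples = []
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            stride = dense_stride if saliency[r, c] > thresh else sparse_stride
            if r % stride == 0 and c % stride == 0:
                samples.append((r, c, image[r, c]))
    return samples

image = np.arange(64.0).reshape(8, 8)
saliency = np.zeros((8, 8))
saliency[2:4, 2:4] = 1.0  # one salient patch
kept = sample_by_saliency(image, saliency)
```

In this toy example all four salient pixels are kept, while the non-salient background is reduced to a coarse grid, so storage cost concentrates where the saliency map is high.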
-
Patent number: 11221671
Abstract: A system includes a camera positioned in an environment to capture image data of a subject; a computing device communicatively coupled to the camera, the computing device comprising a processor and a non-transitory computer-readable memory; and a machine-readable instruction set stored in the non-transitory computer-readable memory. The machine-readable instruction set causes the computing device to perform at least the following when executed by the processor: receive the image data from the camera; analyze the image data captured by the camera using a neural network trained on training data generated from a 360-degree panoramic camera configured to collect image data of a subject and a visual target that is moved about an environment; and predict a gaze direction vector of the subject with the neural network.
Type: Grant
Filed: January 16, 2020
Date of Patent: January 11, 2022
Assignees: TOYOTA RESEARCH INSTITUTE, INC., MASSACHUSETTS INSTITUTE OF TECHNOLOGY
Inventors: Simon A. I. Stent, Adrià Recasens, Petr Kellnhofer, Wojciech Matusik, Antonio Torralba
-
Patent number: 11042994
Abstract: A system for determining the gaze direction of a subject includes a camera, a computing device and a machine-readable instruction set. The camera is positioned in an environment to capture image data of the head of a subject. The computing device is communicatively coupled to the camera and the computing device includes a processor and a non-transitory computer-readable memory. The machine-readable instruction set is stored in the non-transitory computer-readable memory and causes the computing device to: receive image data from the camera, analyze the image data using a convolutional neural network trained on an image dataset comprising images of a head of a subject captured from viewpoints distributed around up to 360 degrees of head yaw, and predict a gaze direction vector of the subject based upon a combination of head appearance and eye appearance image data from the image dataset.
Type: Grant
Filed: October 12, 2018
Date of Patent: June 22, 2021
Assignee: TOYOTA RESEARCH INSTITUTE, INC.
Inventors: Simon Stent, Adria Recasens, Antonio Torralba, Petr Kellnhofer, Wojciech Matusik
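The final prediction step described in the abstract — fusing head-appearance and eye-appearance information into a gaze direction vector — can be sketched minimally. This is not the patented network; the linear fusion, feature sizes, and weight matrices here are hypothetical stand-ins for the learned convolutional features:

```python
# Minimal illustrative sketch: combine head-appearance and eye-appearance
# feature vectors into a single unit-length gaze direction vector.
import numpy as np

rng = np.random.default_rng(0)
W_head = rng.standard_normal((3, 128))  # hypothetical learned head-feature weights
W_eye = rng.standard_normal((3, 128))   # hypothetical learned eye-feature weights

def predict_gaze(head_features, eye_features):
    raw = W_head @ head_features + W_eye @ eye_features  # fuse both appearance cues
    return raw / np.linalg.norm(raw)                     # unit gaze direction vector

gaze = predict_gaze(rng.standard_normal(128), rng.standard_normal(128))
print(np.linalg.norm(gaze))  # ~1.0
```

Normalizing the output keeps the prediction a pure direction, which is the quantity the abstract's gaze direction vector encodes.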
-
Publication number: 20200249753
Abstract: A system includes a camera positioned in an environment to capture image data of a subject; a computing device communicatively coupled to the camera, the computing device comprising a processor and a non-transitory computer-readable memory; and a machine-readable instruction set stored in the non-transitory computer-readable memory. The machine-readable instruction set causes the computing device to perform at least the following when executed by the processor: receive the image data from the camera; analyze the image data captured by the camera using a neural network trained on training data generated from a 360-degree panoramic camera configured to collect image data of a subject and a visual target that is moved about an environment; and predict a gaze direction vector of the subject with the neural network.
Type: Application
Filed: January 16, 2020
Publication date: August 6, 2020
Applicants: Toyota Research Institute, Inc., Massachusetts Institute of Technology
Inventors: Simon A.I. Stent, Adrià Recasens, Petr Kellnhofer, Wojciech Matusik, Antonio Torralba
-
Publication number: 20200074589
Abstract: A method includes receiving, with a computing device, an image, identifying one or more salient features in the image, and generating a saliency map of the image including the one or more salient features. The method further includes sampling the image based on the saliency map such that the one or more salient features are sampled at a first density of sampling and at least one portion of the image other than the one or more salient features are sampled at a second density of sampling, where the first density of sampling is greater than the second density of sampling, and storing the sampled image in a non-transitory computer readable memory.
Type: Application
Filed: September 5, 2018
Publication date: March 5, 2020
Applicants: Toyota Research Institute, Inc., Massachusetts Institute of Technology
Inventors: Simon A.I. Stent, Adrià Recasens, Antonio Torralba, Petr Kellnhofer, Wojciech Matusik
-
Publication number: 20190147607
Abstract: A system for determining the gaze direction of a subject includes a camera, a computing device and a machine-readable instruction set. The camera is positioned in an environment to capture image data of the head of a subject. The computing device is communicatively coupled to the camera and the computing device includes a processor and a non-transitory computer-readable memory. The machine-readable instruction set is stored in the non-transitory computer-readable memory and causes the computing device to: receive image data from the camera, analyze the image data using a convolutional neural network trained on an image dataset comprising images of a head of a subject captured from viewpoints distributed around up to 360 degrees of head yaw, and predict a gaze direction vector of the subject based upon a combination of head appearance and eye appearance image data from the image dataset.
Type: Application
Filed: October 12, 2018
Publication date: May 16, 2019
Applicants: Toyota Research Institute, Inc., Massachusetts Institute of Technology
Inventors: Simon Stent, Adria Recasens, Antonio Torralba, Petr Kellnhofer, Wojciech Matusik