Patents by Inventor Antonio Torralba

Antonio Torralba has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200249753
    Abstract: A system includes a camera positioned in an environment to capture image data of a subject; a computing device communicatively coupled to the camera, the computing device comprising a processor and a non-transitory computer-readable memory; and a machine-readable instruction set stored in the non-transitory computer-readable memory. The machine-readable instruction set causes the computing device to perform at least the following when executed by the processor: receive the image data from the camera; analyze the image data captured by the camera using a neural network trained on training data generated from a 360-degree panoramic camera configured to collect image data of a subject and a visual target that is moved about an environment; and predict a gaze direction vector of the subject with the neural network.
    Type: Application
    Filed: January 16, 2020
    Publication date: August 6, 2020
    Applicants: Toyota Research Institute, Inc., Massachusetts Institute of Technology
    Inventors: Simon A.I. Stent, Adrià Recasens, Petr Kellnhofer, Wojciech Matusik, Antonio Torralba
  • Publication number: 20200160178
    Abstract: In various examples, a generative model is used to synthesize datasets for use in training a downstream machine learning model to perform an associated task. The synthesized datasets may be generated by sampling a scene graph from a scene grammar—such as a probabilistic grammar—and applying the scene graph to the generative model to compute updated scene graphs more representative of object attribute distributions of real-world datasets. The downstream machine learning model may be validated against a real-world validation dataset, and the performance of the model on the real-world validation dataset may be used as an additional factor in further training or fine-tuning the generative model for generating the synthesized datasets specific to the task of the downstream machine learning model.
    Type: Application
    Filed: November 15, 2019
    Publication date: May 21, 2020
    Inventors: Amlan Kar, Aayush Prakash, Ming-Yu Liu, David Jesus Acuna Marrero, Antonio Torralba Barriuso, Sanja Fidler
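The scene-graph sampling step described in the abstract can be illustrated with a minimal Python sketch. The grammar, symbol names, and attribute layout below are invented for illustration and are not taken from the patent; a real system would apply a learned generative model to the sampled graph.

```python
import random

# Hypothetical probabilistic scene grammar: each non-terminal expands to one
# of several child lists, chosen with the listed probability.
GRAMMAR = {
    "scene": [(0.7, ["road", "car", "car"]), (0.3, ["road", "car"])],
    "road":  [(1.0, ["lane", "lane"])],
}

def sample_scene_graph(rng, symbol="scene"):
    """Recursively expand `symbol` into a scene-graph node."""
    rules = GRAMMAR.get(symbol)
    if rules is None:  # terminal symbol: leaf object with a sampled attribute
        return {"type": symbol, "attrs": {"x": rng.random()}}
    r, acc = rng.random(), 0.0
    for prob, children in rules:
        acc += prob
        if r <= acc:
            break
    return {"type": symbol,
            "children": [sample_scene_graph(rng, c) for c in children]}

graph = sample_scene_graph(random.Random(42))
```

Each sampled graph is a tree of typed objects with attributes, which is the form a downstream generative model could refine toward real-world attribute distributions.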
  • Publication number: 20200143177
    Abstract: The present disclosure provides systems and methods to detect occluded objects using shadow information to anticipate moving obstacles that are occluded behind a corner or other obstacle. The system may perform a dynamic threshold analysis on enhanced images allowing the detection of even weakly visible shadows. The system may classify an image sequence as either “dynamic” or “static”, enabling an autonomous vehicle, or other moving platform, to react and respond to a moving, yet occluded object by slowing down or stopping.
    Type: Application
    Filed: November 2, 2018
    Publication date: May 7, 2020
    Inventors: Felix Maximilian NASER, Igor GILITSCHENSKI, Guy ROSMAN, Alexander Andre AMINI, Fredo DURAND, Antonio TORRALBA, Gregory WORNELL, William FREEMAN, Sertac KARAMAN, Daniela RUS
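The dynamic-threshold classification of a sequence as “dynamic” or “static” can be sketched in a few lines of Python. The gain, threshold rule, and array shapes are illustrative assumptions, not values from the patent.

```python
import numpy as np

def classify_sequence(frames, gain=50.0, k=4.0):
    """Label a (T, H, W) image sequence 'dynamic' or 'static'.  Frame
    differences are amplified by `gain` so even weakly visible shadows
    register, and the decision threshold adapts to the sequence's own
    noise level (a stand-in for the patent's dynamic threshold analysis)."""
    diffs = np.abs(np.diff(frames, axis=0)) * gain   # enhanced frame-to-frame change
    energy = diffs.mean(axis=(1, 2))                 # motion energy per step
    floor = np.median(energy)                        # typical (noise) level
    return "dynamic" if energy.max() > k * floor + 1e-6 else "static"
```

A vehicle could run such a classifier on the image region around a corner and slow down whenever the label switches to "dynamic".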
  • Publication number: 20200074589
    Abstract: A method includes receiving, with a computing device, an image, identifying one or more salient features in the image, and generating a saliency map of the image including the one or more salient features. The method further includes sampling the image based on the saliency map such that the one or more salient features are sampled at a first density of sampling and at least one portion of the image other than the one or more salient features is sampled at a second density of sampling, where the first density of sampling is greater than the second density of sampling, and storing the sampled image in a non-transitory computer readable memory.
    Type: Application
    Filed: September 5, 2018
    Publication date: March 5, 2020
    Applicants: Toyota Research Institute, Inc., Massachusetts Institute of Technology
    Inventors: Simon A.I. Stent, Adrià Recasens, Antonio Torralba, Petr Kellnhofer, Wojciech Matusik
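The two-density sampling scheme above can be sketched directly. The stride, threshold, and grayscale image shape below are illustrative assumptions; the patent does not prescribe specific values.

```python
import numpy as np

def sample_image(image, saliency, lo_stride=4, thresh=0.5):
    """Sample salient pixels at full density and the rest of the image on a
    coarse grid; returns the retained (rows, cols, values) samples."""
    h, w = image.shape
    keep = np.zeros((h, w), dtype=bool)
    keep[::lo_stride, ::lo_stride] = True    # second (coarse) sampling density
    keep |= saliency >= thresh               # first (dense) sampling density
    rows, cols = np.nonzero(keep)
    return rows, cols, image[rows, cols]
```

Only the retained coordinates and values need to be stored, so memory use falls roughly with the square of the coarse stride outside salient regions.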
  • Publication number: 20190188533
    Abstract: A method for pose recognition includes storing parameters for configuration of an automated pose recognition system for detection of a pose of a subject represented in a radio frequency input signal. The parameters are determined by a first process that includes accepting training data, comprising a number of images including poses of subjects and a corresponding number of radio frequency signals, and executing a parameter training procedure to determine the parameters. The parameter training procedure includes receiving features characterizing the poses in each of the images, and determining the parameters that configure the automated pose recognition system to match the features characterizing the poses from the corresponding radio frequency signals.
    Type: Application
    Filed: December 19, 2018
    Publication date: June 20, 2019
    Inventors: Dina Katabi, Antonio Torralba, Hang Zhao, Mingmin Zhao, Tianhong Li, Mohammad Abu Alsheikh, Yonglong Tian
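The cross-modal training procedure, where image-derived pose features supervise a model that sees only radio frequency signals, can be sketched with synthetic data. Every dimension, the linear model, and the least-squares fit are stand-ins chosen for illustration; the patent describes a trained recognition system, not this specific solver.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: each row of `rf` is a feature vector from one
# radio frequency snapshot; `keypoints` are pose features extracted from the
# synchronized camera images, acting as the supervision signal.
n, rf_dim, n_kpts = 200, 16, 14
W_true = rng.normal(size=(rf_dim, 2 * n_kpts))          # synthetic ground truth
rf = rng.normal(size=(n, rf_dim))
keypoints = rf @ W_true + 0.01 * rng.normal(size=(n, 2 * n_kpts))

# Parameter training procedure: determine parameters that reproduce the
# image-derived pose features from the corresponding RF signals (ordinary
# least squares stands in for the patent's recognition system).
W, *_ = np.linalg.lstsq(rf, keypoints, rcond=None)

def predict_pose(rf_signal):
    """Predict (n_kpts, 2) keypoint coordinates from one RF feature vector."""
    return (rf_signal @ W).reshape(n_kpts, 2)
```

Once the parameters are fit, the camera is no longer needed: poses are predicted from the radio frequency input alone.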
  • Publication number: 20190147607
    Abstract: A system for determining the gaze direction of a subject includes a camera, a computing device and a machine-readable instruction set. The camera is positioned in an environment to capture image data of head of a subject. The computing device is communicatively coupled to the camera and the computing device includes a processor and a non-transitory computer-readable memory. The machine-readable instruction set is stored in the non-transitory computer-readable memory and causes the computing device to: receive image data from the camera, analyze the image data using a convolutional neural network trained on an image dataset comprising images of a head of a subject captured from viewpoints distributed around up to 360-degrees of head yaw, and predict a gaze direction vector of the subject based upon a combination of head appearance and eye appearance image data from the image dataset.
    Type: Application
    Filed: October 12, 2018
    Publication date: May 16, 2019
    Applicants: Toyota Research Institute, Inc., Massachusetts Institute of Technology
    Inventors: Simon Stent, Adrià Recasens, Antonio Torralba, Petr Kellnhofer, Wojciech Matusik
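The final prediction step, fusing head-appearance and eye-appearance features into a gaze direction vector, can be sketched as follows. The embedding sizes, random stand-in features, and single fusion layer are illustrative assumptions; in the patent the features would come from a trained convolutional neural network.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical embeddings: in practice these would be CNN features computed
# from head and eye crops; random vectors stand in for them here.
head_feat = rng.normal(size=128)
eye_feat = rng.normal(size=128)
W_fuse = rng.normal(size=(256, 3))   # fusion layer: concatenated code -> 3-D

def predict_gaze(head_feat, eye_feat, W):
    """Combine head-appearance and eye-appearance features into a unit
    3-D gaze direction vector."""
    fused = np.concatenate([head_feat, eye_feat])   # (256,)
    g = fused @ W                                   # raw direction
    return g / np.linalg.norm(g)

gaze = predict_gaze(head_feat, eye_feat, W_fuse)
```

Normalizing the output lets downstream consumers treat the prediction purely as a direction, independent of network output scale.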
  • Patent number: 9754177
    Abstract: One or more aspects of the subject disclosure are directed towards identifying objects within an image via image searching/matching. In one aspect, an image is processed into bounding boxes, with the bounding boxes further processed to each surround a possible object. A sub-image of pixels corresponding to the bounding box is featurized for matching with tagged database images. The information (tags) associated with any matched images is processed to identify/categorize the sub-image and thus the object corresponding thereto.
    Type: Grant
    Filed: June 21, 2013
    Date of Patent: September 5, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ce Liu, Yair Weiss, Antonio Torralba Barriuso
  • Publication number: 20140376819
    Abstract: One or more aspects of the subject disclosure are directed towards identifying objects within an image via image searching/matching. In one aspect, an image is processed into bounding boxes, with the bounding boxes further processed to each surround a possible object. A sub-image of pixels corresponding to the bounding box is featurized for matching with tagged database images. The information (tags) associated with any matched images is processed to identify/categorize the sub-image and thus the object corresponding thereto.
    Type: Application
    Filed: June 21, 2013
    Publication date: December 25, 2014
    Inventors: Ce Liu, Yair Weiss, Antonio Torralba Barriuso
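The featurize-and-match pipeline shared by the two entries above can be sketched with a toy featurizer. The histogram features, distance metric, and tagged examples below are illustrative assumptions; a production system would use richer features and a large tagged image database.

```python
import numpy as np

def featurize(patch, bins=8):
    """Hypothetical featurizer: a normalized intensity histogram of the
    sub-image of pixels inside one bounding box."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def identify(patch, database):
    """Match the featurized sub-image against (feature, tag) pairs from
    tagged database images and return the nearest neighbor's tag."""
    f = featurize(patch)
    dists = [np.linalg.norm(f - g) for g, tag in database]
    return database[int(np.argmin(dists))][1]

# Toy tagged database: bright patches tagged "sky", dark ones "road".
db = [(featurize(np.full((4, 4), 0.9)), "sky"),
      (featurize(np.full((4, 4), 0.1)), "road")]
```

The tag of the best-matching database image then serves to identify or categorize the object inside the bounding box.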
  • Publication number: 20040268192
    Abstract: Starting from an unlimited number of events or conditions (1), (1′) . . . (1n), together with their respective detectors (2), (2′) . . . (2n), which are combined with a transition detector (3) that receives information from the “external reactivation” pin (4) and from an external “system control” pin (5), it is possible to watch over both combinational and time events. More precisely, the circuit watches over combinational events during execution and, in combination with time dependence, generates a flag or signal to register an event, freezes the circuit, communicates that an event has been produced, and aids in the identification of the event and in the analysis of the system from a graphical or similar user interface.
    Type: Application
    Filed: August 30, 2004
    Publication date: December 30, 2004
    Inventors: Miguel Angel Aguirre Echanove, Jonathan Tombs, Antonio Torralba Silgado, Leopoldo Garcia Franquelo
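The behavior of the watchdog circuit described above can be modeled in software. The class below is a hypothetical sketch, not the circuit itself: detectors are predicates, time is a tick counter, and the reactivation pin becomes a method.

```python
# Hypothetical software model of the described circuit: condition detectors
# feed a monitor that latches a flag when a watched condition persists past
# a time limit, freezes, and waits for external reactivation.
class EventWatchdog:
    def __init__(self, detectors, time_limit):
        self.detectors = detectors    # one predicate per watched condition
        self.time_limit = time_limit  # ticks a condition may persist
        self.frozen = False
        self.flag = None
        self._since = {}              # first tick each condition was seen

    def tick(self, t, state):
        """Evaluate every detector at tick `t`; latch a flag and freeze
        when a condition has held for `time_limit` ticks."""
        if self.frozen:
            return self.flag
        for i, detect in enumerate(self.detectors):
            if detect(state):
                self._since.setdefault(i, t)
                if t - self._since[i] >= self.time_limit:
                    self.flag, self.frozen = ("event", i), True
            else:
                self._since.pop(i, None)
        return self.flag

    def reactivate(self):
        """Model of the external reactivation pin: clear and resume."""
        self.frozen, self.flag, self._since = False, None, {}
```

Freezing on the first latched event preserves the state that triggered it, which is what allows later identification and analysis from a user interface.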