Patents Assigned to Toyota Research Institute, Inc.
  • Patent number: 11989835
    Abstract: A computing device configured to display a virtual representation of an environment of a robot includes a display device, a memory, and a processor coupled to the memory. The processor is configured to receive data from one or more sensors of the robot with respect to an object within the environment of the robot. The processor is also configured to display a virtual representation of the object within a virtual mapping of the environment based on the data received from the one or more sensors. The processor is further configured to receive input data selecting the virtual representation of the object. The processor is also further configured to send instructions to the robot to act in response to the received input data.
    Type: Grant
    Filed: July 18, 2018
    Date of Patent: May 21, 2024
    Assignee: Toyota Research Institute, Inc.
    Inventors: Matthew Amacker, Arshan Poursohi, Allison Thackston
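The final step of the abstract above, turning a user's selection of a virtual object into an instruction for the robot, might look like the following minimal Python sketch. All names and the command schema (`VirtualObject`, `instruction_for_selection`) are hypothetical illustrations, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class VirtualObject:
    """A sensed object rendered in the virtual map (hypothetical schema)."""
    object_id: str
    position: tuple  # (x, y, z) in the robot's map frame

def instruction_for_selection(obj: VirtualObject, action: str = "pick_up") -> dict:
    """Translate the selection of a virtual object into a robot command."""
    return {"command": action, "target_id": obj.object_id, "target_pose": obj.position}

# A user taps the virtual mug; the device sends the robot a pick-up command.
mug = VirtualObject("mug-01", (1.2, 0.4, 0.9))
cmd = instruction_for_selection(mug)
```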
  • Publication number: 20240161389
    Abstract: Systems and methods described herein support enhanced computer vision capabilities which may be applicable to, for example, autonomous vehicle operation. An example method includes generating a latent space and a decoder based on image data that includes multiple images, where each image has a different viewing frame of a scene. The method also includes generating a volumetric embedding that is representative of a novel viewing frame of the scene. The method includes decoding, with the decoder, the latent space using cross-attention with the volumetric embedding, and generating a novel viewing frame of the scene based on an output of the decoder.
    Type: Application
    Filed: August 3, 2023
    Publication date: May 16, 2024
    Applicants: Toyota Research Institute, Inc., Massachusetts Institute of Technology, Toyota Jidosha Kabushiki Kaisha
    Inventors: Vitor Guizilini, Rares A. Ambrus, Jiading Fang, Sergey Zakharov, Vincent Sitzmann, Igor Vasiljevic, Adrien Gaidon
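The decoding step, where embeddings of novel rays attend over a learned latent space via cross-attention, can be illustrated with a minimal NumPy sketch. The shapes and single-head formulation are invented for illustration and do not reflect the patented architecture:

```python
import numpy as np

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: queries attend over a latent set."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)          # (Q, L) similarity scores
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over latents
    return weights @ values                         # (Q, D) decoded features

rng = np.random.default_rng(0)
latents = rng.normal(size=(64, 32))        # stand-in for the scene latent space
ray_embeddings = rng.normal(size=(8, 32))  # volumetric embedding of novel rays
decoded = cross_attention(ray_embeddings, latents, latents)
```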
  • Publication number: 20240161510
    Abstract: Systems and methods described herein support enhanced computer vision capabilities which may be applicable to, for example, autonomous vehicle operation. An example method includes training a shared latent space and a first decoder based on first image data that includes multiple images, and training the shared latent space and a second decoder based on second image data that includes multiple images. The method also includes generating a volumetric embedding that is representative of a novel viewing frame of the first scene. Further, the method includes decoding, with the first decoder, the shared latent space with the volumetric embedding, and generating the novel viewing frame of the first scene based on an output of the first decoder.
    Type: Application
    Filed: August 3, 2023
    Publication date: May 16, 2024
    Applicants: Toyota Research Institute, Inc., Massachusetts Institute of Technology, Toyota Jidosha Kabushiki Kaisha
    Inventors: Vitor Guizilini, Rares A. Ambrus, Jiading Fang, Sergey Zakharov, Vincent Sitzmann, Igor Vasiljevic, Adrien Gaidon
  • Publication number: 20240161471
    Abstract: Systems and methods described herein support enhanced computer vision capabilities which may be applicable to, for example, autonomous vehicle operation. An example method includes generating, through training, a shared latent space based on (i) image data that includes multiple images, where each image has a different viewing frame of a scene, and (ii) first and second types of embeddings, and training a decoder based on the first type of embeddings. The method also includes generating an embedding based on the first type of embeddings that is representative of a novel viewing frame of the scene, decoding, with the decoder, the shared latent space using cross-attention with the generated embedding, and generating the novel viewing frame of the scene based on an output of the decoder.
    Type: Application
    Filed: August 3, 2023
    Publication date: May 16, 2024
    Applicants: Toyota Research Institute, Inc., Massachusetts Institute of Technology, Toyota Jidosha Kabushiki Kaisha
    Inventors: Vitor Guizilini, Rares A. Ambrus, Jiading Fang, Sergey Zakharov, Vincent Sitzmann, Igor Vasiljevic, Adrien Gaidon
  • Publication number: 20240157977
    Abstract: Systems and methods for modeling and predicting scene occupancy in an environment of a robot are disclosed herein. One embodiment processes past agent-trajectory data, map data, and sensor data using one or more encoder neural networks to produce combined encoded input data; generates a weights vector for a Gaussian Mixture Model (GMM) based on the combined encoded input data; produces a volumetric spatio-temporal representation of occupancy in an environment of a robot by generating, for a plurality of modes of the GMM in accordance with the weights vector, corresponding sample probability distributions of scene occupancy based on respective means and variances of the plurality of modes, wherein the respective means and variances sample coefficients of a set of learned basis functions; and controls the operation of the robot based, at least in part, on the volumetric spatio-temporal representation of occupancy.
    Type: Application
    Filed: November 16, 2022
    Publication date: May 16, 2024
    Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
    Inventors: Guy Rosman, Igor Gilitschenski, Xin Huang
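The occupancy representation above, a Gaussian Mixture Model whose modes are combined by a predicted weights vector, can be sketched in one dimension with NumPy. The 1-D simplification and all numbers are illustrative; the patented method operates on volumetric spatio-temporal representations:

```python
import numpy as np

def gmm_occupancy(query, means, variances, weights):
    """Evaluate a 1-D Gaussian mixture as an occupancy score at query points."""
    # means, variances, weights: (K,) per-mode parameters; query: (N,) positions
    diff = query[:, None] - means[None, :]                          # (N, K)
    comp = np.exp(-0.5 * diff**2 / variances) / np.sqrt(2 * np.pi * variances)
    return comp @ weights                                           # (N,) mixture density

weights = np.array([0.7, 0.3])       # predicted mode weights
means = np.array([0.0, 4.0])         # per-mode means, e.g. positions along a lane
variances = np.array([1.0, 0.5])
occ = gmm_occupancy(np.array([0.0, 4.0, 10.0]), means, variances, weights)
```

Positions near a mode mean receive high occupancy scores; far-away positions score near zero.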
  • Publication number: 20240160998
    Abstract: A method for representing atomic structures as Gaussian processes is described. The method includes mapping a crystal structure of chemical elements in a real space, in which atoms of the chemical elements are represented in a unit cell. The method also includes learning, by a machine learning model, a 3D embedding of each of the chemical elements in the real space according to the mapping of the crystal structure of the chemical elements. The method further includes training the machine learning model according to a representation of the atoms of the chemical elements in the unit cell based on the mapping of the crystal structure of the chemical elements. The method also includes predicting a material property corresponding to a point within the real space.
    Type: Application
    Filed: November 15, 2022
    Publication date: May 16, 2024
    Applicants: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Jens Strabo HUMMELSHØJ, Joseph Harold MONTOYA
  • Publication number: 20240153101
    Abstract: A method for scene synthesis from human motion is described. The method includes computing three-dimensional (3D) human pose trajectories of human motion in a scene. The method also includes generating contact labels of unseen objects in the scene based on the computing of the 3D human pose trajectories. The method further includes estimating contact points between human body vertices of the 3D human pose trajectories and the contact labels of the unseen objects that are in contact with the human body vertices. The method also includes predicting object placements of the unseen objects in the scene based on the estimated contact points.
    Type: Application
    Filed: October 25, 2023
    Publication date: May 9, 2024
    Applicants: TOYOTA RESEARCH INSTITUTE, INC., THE BOARD OF TRUSTEES OF THE LELAND STANFORD JUNIOR UNIVERSITY
    Inventors: Sifan YE, Yixing WANG, Jiaman LI, Dennis PARK, C. Karen LIU, Huazhe XU, Jiajun WU
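The contact-point estimation step can be caricatured as distance thresholding between body vertices and candidate object points. The real method uses learned pose trajectories and contact labels; this pure-Python sketch, with invented coordinates and threshold, only illustrates the pairing idea:

```python
import math

def estimate_contacts(body_vertices, object_points, threshold=0.05):
    """Pair body vertices with object points closer than a contact threshold."""
    contacts = []
    for i, v in enumerate(body_vertices):
        for j, p in enumerate(object_points):
            if math.dist(v, p) <= threshold:
                contacts.append((i, j))  # (body vertex index, object point index)
    return contacts

# A hand vertex nearly coincides with a point on an (unseen) chair surface.
body = [(0.0, 0.0, 1.0), (0.0, 0.0, 1.5)]
chair = [(2.0, 2.0, 0.0), (0.0, 0.03, 1.0)]
contacts = estimate_contacts(body, chair)
```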
  • Publication number: 20240153197
    Abstract: An example method includes generating embeddings of image data that includes multiple images, where each image has a different viewpoint of a scene; generating a latent space and a decoder, where the decoder receives embeddings as input to generate an output viewpoint; determining, for each viewpoint in the image data, a volumetric rendering view synthesis loss and a multi-view photometric loss; and applying an optimization algorithm to the latent space and the decoder over a number of epochs until the volumetric rendering view synthesis loss is within a volumetric threshold and the multi-view photometric loss is within a multi-view threshold.
    Type: Application
    Filed: August 3, 2023
    Publication date: May 9, 2024
    Applicants: Toyota Research Institute, Inc., Massachusetts Institute of Technology, Toyota Jidosha Kabushiki Kaisha
    Inventors: Vitor Guizilini, Rares A. Ambrus, Jiading Fang, Sergey Zakharov, Vincent Sitzmann, Igor Vasiljevic, Adrien Gaidon
  • Publication number: 20240153107
    Abstract: Systems and methods for performing three-dimensional multi-object tracking are disclosed herein. In one example, a method includes the steps of determining a residual based on augmented current frame detection bounding boxes, augmented previous frame detection bounding boxes, augmented current frame shape descriptors, and augmented previous frame shape descriptors and predicting an affinity matrix using the residual. The residual indicates a spatiotemporal and shape similarity between current detections in a current frame point cloud data and previous detections in a previous frame point cloud data. The affinity matrix indicates associations between the previous detections and the current detections, as well as the augmented anchors.
    Type: Application
    Filed: May 10, 2023
    Publication date: May 9, 2024
    Applicants: Toyota Research Institute, Inc., The Board of Trustees of the Leland Stanford Junior University, Toyota Jidosha Kabushiki Kaisha
    Inventors: Jie Li, Rares A. Ambrus, Taraneh Sadjadpour, Christin Jeannette Bohg
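The residual-to-affinity idea above can be illustrated with a toy version: the patented method predicts the affinity matrix with a learned network over augmented bounding boxes and shape descriptors, whereas this sketch simply combines centroid distance (spatiotemporal) and descriptor distance (shape) into a score per detection pair:

```python
import math

def affinity_matrix(prev_centroids, curr_centroids, prev_shapes, curr_shapes):
    """Score every (previous, current) detection pair; higher = better match."""
    A = []
    for pc, ps in zip(prev_centroids, prev_shapes):
        row = []
        for cc, cs in zip(curr_centroids, curr_shapes):
            spatial = math.dist(pc, cc)                      # spatiotemporal residual
            shape = sum(abs(a - b) for a, b in zip(ps, cs))  # shape residual
            row.append(-(spatial + shape))                   # small residual -> high affinity
        A.append(row)
    return A

# Two tracked objects, each moving slightly between frames.
prev_c, curr_c = [(0.0, 0.0), (5.0, 5.0)], [(0.2, 0.0), (5.1, 5.0)]
prev_s, curr_s = [(1.0,), (2.0,)], [(1.0,), (2.0,)]
A = affinity_matrix(prev_c, curr_c, prev_s, curr_s)
```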
  • Patent number: 11975725
    Abstract: A computer-implemented method for determining optimal values of operational parameters for a model predictive controller that controls a vehicle can receive, from a data store or a graphical user interface, ranges for one or more operational parameters. The method can determine optimum values for those parameters by simulating vehicle operation across the ranges, solving a vehicle control problem, and determining an output of the control problem based on the result of the simulated operation. A vehicle can include a processing component configured to adjust a control input for an actuator of the vehicle according to a control algorithm and based on the optimum parameter values determined by the method.
    Type: Grant
    Filed: February 2, 2021
    Date of Patent: May 7, 2024
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Michael Thompson, Carrie Bobier-Tiu, Manuel Ahumada, Arjun Bhargava, Avinash Balachandran
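The sweep over parameter ranges can be sketched as a grid search whose objective stands in for solving the vehicle control problem in simulation. The cost function, parameter names, and ranges below are hypothetical:

```python
import itertools

def tune_parameters(ranges, simulate):
    """Grid-search candidate parameter sets; keep the one with the best
    simulated outcome (a stand-in for the vehicle control problem)."""
    best_params, best_cost = None, float("inf")
    for combo in itertools.product(*ranges.values()):
        params = dict(zip(ranges.keys(), combo))
        cost = simulate(params)
        if cost < best_cost:
            best_params, best_cost = params, cost
    return best_params, best_cost

# Hypothetical tracking-error cost minimized at gain=2.0, horizon=20.
ranges = {"gain": [1.0, 2.0, 3.0], "horizon": [10, 20]}
best, cost = tune_parameters(
    ranges, lambda p: (p["gain"] - 2.0) ** 2 + abs(p["horizon"] - 20) * 0.01)
```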
  • Publication number: 20240134898
    Abstract: A method for inferring intent and discrepancies in a label coding scheme is described. The method includes compiling data indicating how one or more individuals labeled unstructured content according to the label coding scheme comprising a plurality of labels. The method also includes analyzing a context associated with content labeled in a particular manner by the one or more individuals. The method further includes detecting discrepancies of meaning for a particular label used by the one or more individuals. The method also includes inferring the strategic thinking of the one or more individuals associated with the discrepancies of meaning detected for the particular label. The method further includes displaying recorded metadata associated with the strategic thinking and the discrepancies of meaning detected for the particular label between the one or more individuals regarding a coded dataset.
    Type: Application
    Filed: October 19, 2022
    Publication date: April 25, 2024
    Applicants: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Yin-Ying CHEN, Shabnam HAKIMI, Kenton Michael LYONS, Yanxia ZHANG, Matthew Kyung-Soo HONG, Totte HARINEN, Monica PhuongThao VAN, Charlene WU
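The discrepancy-detection step can be reduced to its simplest form: grouping each item's labels across annotators and flagging items where the labels diverge. The data layout and labels below are hypothetical; the patented method additionally analyzes context and infers intent:

```python
from collections import defaultdict

def label_discrepancies(annotations):
    """Find items whose label differs across annotators.

    annotations: list of (annotator, item_id, label) tuples.
    Returns {item_id: set of conflicting labels}.
    """
    by_item = defaultdict(set)
    for annotator, item, label in annotations:
        by_item[item].add(label)
    return {item: labels for item, labels in by_item.items() if len(labels) > 1}

rows = [("ann1", "utt-7", "complaint"), ("ann2", "utt-7", "request"),
        ("ann1", "utt-8", "praise"), ("ann2", "utt-8", "praise")]
conflicts = label_discrepancies(rows)
```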
  • Publication number: 20240135721
    Abstract: A method for improving 3D object detection via object-level augmentations is described. The method includes recognizing, using an image recognition model of a differentiable data generation pipeline, an object in an image of a scene. The method also includes generating, using a 3D reconstruction model, a 3D reconstruction of the scene from the image including the recognized object. The method further includes manipulating, using an object level augmentation model, a random property of the object by a random magnitude at an object level to determine a set of properties and a set of magnitudes of an object manipulation that maximizes a loss function of the image recognition model. The method also includes training a downstream task network based on a set of training data generated based on the set of properties and the set of magnitudes of the object manipulation, such that the loss function is minimized.
    Type: Application
    Filed: October 12, 2022
    Publication date: April 25, 2024
    Applicants: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Rares Andrei AMBRUS, Sergey ZAKHAROV, Vitor GUIZILINI, Adrien David GAIDON
  • Patent number: 11966234
    Abstract: A method for controlling an ego agent includes capturing a two-dimensional (2D) image of an environment adjacent to the ego agent. The method also includes generating a semantically segmented image of the environment based on the 2D image. The method further includes generating a depth map of the environment based on the semantically segmented image. The method additionally includes generating a three-dimensional (3D) estimate of the environment based on the depth map. The method also includes controlling an action of the ego agent based on a location identified from the 3D estimate.
    Type: Grant
    Filed: July 23, 2020
    Date of Patent: April 23, 2024
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Vitor Guizilini, Jie Li, Rares A. Ambrus, Sudeep Pillai, Adrien Gaidon
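The step from a depth map to a 3D estimate is, at its core, pinhole unprojection: each pixel is lifted into 3D using its depth and the camera intrinsics. This sketch uses toy intrinsics and a 2x2 depth map purely for illustration:

```python
def unproject(depth, fx, fy, cx, cy):
    """Lift a depth map (row-major 2-D list) to 3-D points via a pinhole model."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            # Back-project pixel (u, v) at depth z into camera coordinates.
            points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

# A flat 2x2 depth map at 2.0 m with unit focal lengths and a centered principal point.
pts = unproject([[2.0, 2.0], [2.0, 2.0]], fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```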
  • Patent number: 11964610
    Abstract: Systems and methods are provided for adjusting headlight properties according to the speed of the vehicle. In particular, some embodiments aim to optimize a vehicle's lighting in suboptimal conditions. Using the data processed by the ADAS, the system is able to optimize the vehicle's lighting by taking into account various factors beyond the current speed limit on the road.
    Type: Grant
    Filed: April 19, 2022
    Date of Patent: April 23, 2024
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Guillermo Pita Gil, Jaime S. Camhi
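A minimal sketch of speed-dependent headlight adjustment: illumination range grows with speed (longer stopping distances need a longer throw) and is derated in poor conditions. All thresholds and the visibility factor are invented for illustration, not taken from the patent:

```python
def beam_range_m(speed_kph, visibility=1.0, min_range=40.0, max_range=120.0):
    """Scale illumination range with speed, derated by a 0..1 visibility factor."""
    base = min_range + (max_range - min_range) * min(speed_kph, 130.0) / 130.0
    return max(min_range, base * visibility)
```

For example, a stationary vehicle keeps the minimum range, while highway speed in clear conditions reaches the maximum.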
  • Patent number: 11967141
    Abstract: One or more embodiments of the present disclosure include systems and methods that use neural architecture fusion to learn how to combine multiple separate pre-trained networks by fusing their architectures into a single network for better computational efficiency and higher accuracy. For example, a computer implemented method of the disclosure includes obtaining multiple trained networks. Each of the trained networks may be associated with a respective task and has a respective architecture. The method further includes generating a directed acyclic graph that represents at least a partial union of the architectures of the trained networks. The method additionally includes defining a joint objective for the directed acyclic graph that combines a performance term and a distillation term. The method also includes optimizing the joint objective over the directed acyclic graph.
    Type: Grant
    Filed: January 30, 2023
    Date of Patent: April 23, 2024
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Adrien David Gaidon, Jie Li
  • Patent number: 11958498
    Abstract: Systems and methods for trajectory planning for an autonomous vehicle, may include: computing features for each of the plurality of candidate trajectories; computing scores for the features of the candidate trajectories, wherein the scores are based on parameter values associated with their corresponding final trajectories; determining, based on the computed scores, a trajectory of the candidate trajectories to be used as a warm-start trajectory for trajectory optimization and applying the warm-start trajectory to develop a final trajectory for the vehicle; and autonomously operating the autonomous vehicle in accordance with the final trajectory.
    Type: Grant
    Filed: August 24, 2020
    Date of Patent: April 16, 2024
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Miroslav Baric, Jin Ge, Timothee Cazenave
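The warm-start selection can be sketched as scoring each candidate trajectory with weighted features and picking the best. The toy 1-D trajectories and the smoothness/progress features below are hypothetical stand-ins for the patented feature computation:

```python
def pick_warm_start(candidates, feature_fns, weights):
    """Score each candidate trajectory with weighted features; best score wins."""
    def score(traj):
        return sum(w * f(traj) for f, w in zip(feature_fns, weights))
    return max(candidates, key=score)

# Toy 1-D position trajectories, scored by smoothness and forward progress.
trajs = [[0, 1, 2, 3], [0, 3, 1, 4], [0, 1, 1, 2]]
smooth = lambda t: -sum(abs(t[i + 1] - t[i]) for i in range(len(t) - 1))
progress = lambda t: t[-1] - t[0]
warm = pick_warm_start(trajs, [smooth, progress], [1.0, 2.0])
```

The chosen trajectory then seeds ("warm-starts") the downstream trajectory optimization.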
  • Publication number: 20240119857
    Abstract: System, methods, and other embodiments described herein relate to training a scene simulator for rendering 2D scenes using data from real and simulated agents. In one embodiment, a method includes acquiring trajectories and three-dimensional (3D) views for multiple agents from observations of real vehicles. The method also includes generating a 3D scene having the multiple agents using the 3D views and information from simulated agents. The method also includes training a scene simulator to render scene projections using the 3D scene. The method also includes outputting a 2D scene having simulated observations for a driving scene using the scene simulator.
    Type: Application
    Filed: September 27, 2022
    Publication date: April 11, 2024
    Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha, Massachusetts Institute of Technology
    Inventors: Tsun-Hsuan Wang, Alexander Amini, Wilko Schwarting, Igor Gilitschenski, Sertac Karaman, Daniela Rus
  • Patent number: 11954919
    Abstract: Systems and methods are provided for developing/updating training datasets for traffic light detection/perception models. V2I-based information may indicate a particular traffic light state/state of transition. This information can be compared to a traffic light perception prediction. When the prediction is inconsistent with the V2I-based information, data regarding the condition(s)/traffic light(s)/etc. can be saved and uploaded to a training database to update/refine the training dataset(s) maintained therein. In this way, an existing traffic light perception model can be updated/improved and/or a better traffic light perception model can be developed.
    Type: Grant
    Filed: October 8, 2020
    Date of Patent: April 9, 2024
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Kun-Hsin Chen, Peiyan Gong, Shunsho Kaku, Sudeep Pillai, Hai Jin, Sarah Yoo, David L. Garber, Ryan W. Wolcott
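The core filtering step, keeping only frames where the perception prediction disagrees with the V2I-reported light state, can be sketched in a few lines. The frame schema is hypothetical:

```python
def collect_training_cases(frames):
    """Keep frames where the perception prediction disagrees with the V2I
    ground-truth state; these are uploaded to refine the training dataset.

    frames: list of dicts with 'predicted' and 'v2i' state strings.
    """
    return [f for f in frames if f["predicted"] != f["v2i"]]

frames = [{"id": 1, "predicted": "red", "v2i": "red"},
          {"id": 2, "predicted": "green", "v2i": "yellow"}]
hard_cases = collect_training_cases(frames)
```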
  • Patent number: 11951633
    Abstract: Systems and methods for determining a location of a robot are provided. A method includes receiving, by a processor, a signal from a deformable sensor including data with respect to a deformation region in a deformable membrane of the deformable sensor resulting from contact with a first object. The data associated with the contact with the first object is compared, by the processor, to information associated with a plurality of objects stored in a database. The first object is identified, by the processor, as a first identified object of the plurality of objects stored in the database. The first identified object is the object of the plurality of objects stored in the database that is most similar to the first object. The location of the robot is determined, by the processor, based on a location of the first identified object.
    Type: Grant
    Filed: January 10, 2023
    Date of Patent: April 9, 2024
    Assignees: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
    Inventors: Alexander Alspach, Naveen Suresh Kuppuswamy, Avinash Uttamchandani, Samson F. Creasey, Russell L Tedrake, Kunimatsu Hashimoto, Erik C. Sobel, Takuya Ikeda
  • Patent number: 11948310
    Abstract: Systems and methods described herein relate to jointly training a machine-learning-based monocular optical flow, depth, and scene flow estimator. One embodiment processes a pair of temporally adjacent monocular image frames using a first neural network structure to produce a first optical flow estimate; processes the pair of temporally adjacent monocular image frames using a second neural network structure to produce an estimated depth map and an estimated scene flow; processes the estimated depth map and the estimated scene flow using the second neural network structure to produce a second optical flow estimate; and imposes a consistency loss between the first optical flow estimate and the second optical flow estimate that minimizes a difference between the first optical flow estimate and the second optical flow estimate to improve performance of the first neural network structure in estimating optical flow and the second neural network structure in estimating depth and scene flow.
    Type: Grant
    Filed: September 29, 2021
    Date of Patent: April 2, 2024
    Assignee: Toyota Research Institute, Inc.
    Inventors: Vitor Guizilini, Rares A. Ambrus, Kuan-Hui Lee, Adrien David Gaidon
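The consistency loss between the two optical-flow estimates can be written directly as a mean absolute difference; minimizing it pushes the direct-flow branch and the depth-plus-scene-flow branch toward agreement. The tensor shapes and values below are illustrative only:

```python
import numpy as np

def consistency_loss(flow_a, flow_b):
    """Mean absolute difference between two optical-flow estimates."""
    return float(np.mean(np.abs(flow_a - flow_b)))

flow_net1 = np.zeros((4, 4, 2))       # flow from the direct optical-flow branch
flow_net2 = np.full((4, 4, 2), 0.5)   # flow composed from depth + scene flow
loss = consistency_loss(flow_net1, flow_net2)
```

During joint training, this scalar would be added to each branch's own supervision so both benefit from the agreement constraint.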