Patents Assigned to Toyota Research Institute, Inc.
  • Publication number: 20240153101
    Abstract: A method for scene synthesis from human motion is described. The method includes computing three-dimensional (3D) human pose trajectories of human motion in a scene. The method also includes generating contact labels of unseen objects in the scene based on the computing of the 3D human pose trajectories. The method further includes estimating contact points between human body vertices of the 3D human pose trajectories and the contact labels of the unseen objects that are in contact with the human body vertices. The method also includes predicting object placements of the unseen objects in the scene based on the estimated contact points.
    Type: Application
    Filed: October 25, 2023
    Publication date: May 9, 2024
    Applicants: TOYOTA RESEARCH INSTITUTE, INC., THE BOARD OF TRUSTEES OF THE LELAND STANFORD JUNIOR UNIVERSITY
    Inventors: Sifan YE, Yixing WANG, Jiaman LI, Dennis PARK, C. Karen LIU, Huazhe XU, Jiajun WU
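    A minimal sketch of the contact-driven placement step described in this abstract, assuming Python/NumPy; the array shapes, data, and the centroid/support heuristic are illustrative assumptions, not the patented method:

```python
# Toy sketch: place an unseen object from estimated human-object contact points.
# Assumes per-frame body vertices and boolean contact labels are already available;
# the real method uses learned pose and contact estimators.
import numpy as np

def predict_object_placement(body_vertices, contact_labels):
    """body_vertices: (T, V, 3) trajectory of V body vertices over T frames.
    contact_labels: (T, V) booleans marking vertices in contact with the object."""
    contact_points = body_vertices[contact_labels]        # (N, 3) contacted vertices
    if contact_points.size == 0:
        return None                                        # no contact observed
    centroid = contact_points.mean(axis=0)                 # coarse object position
    support_height = contact_points[:, 2].min()            # rest the object on the lowest contact
    return np.array([centroid[0], centroid[1], support_height])

# Example: a seated pose whose pelvis vertices touch an unseen chair.
T, V = 10, 100
verts = np.random.rand(T, V, 3)
contacts = np.zeros((T, V), dtype=bool)
contacts[:, :5] = True                                      # pretend vertices 0-4 are in contact
print(predict_object_placement(verts, contacts))
```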
  • Publication number: 20240153107
    Abstract: Systems and methods for performing three-dimensional multi-object tracking are disclosed herein. In one example, a method includes the steps of determining a residual based on augmented current frame detection bounding boxes, augmented previous frame detection bounding boxes, augmented current frame shape descriptors, and augmented previous frame shape descriptors and predicting an affinity matrix using the residual. The residual indicates a spatiotemporal and shape similarity between current detections in a current frame point cloud data and previous detections in a previous frame point cloud data. The affinity matrix indicates associations between the previous detections and the current detections, as well as the augmented anchors.
    Type: Application
    Filed: May 10, 2023
    Publication date: May 9, 2024
    Applicants: Toyota Research Institute, Inc., The Board of Trustees of the Leland Stanford Junior University, Toyota Jidosha Kabushiki Kaisha
    Inventors: Jie Li, Rares A. Ambrus, Taraneh Sadjadpour, Christin Jeannette Bohg
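    An illustrative sketch of the association step, assuming Python with NumPy/SciPy; the hand-crafted residual and softmax affinity below are stand-ins for the learned residual and predicted affinity matrix in the abstract:

```python
# Toy frame-to-frame association: pairwise residuals between previous- and
# current-frame box descriptors, turned into a row-normalized affinity matrix,
# then matched with the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

def affinity(prev_feats, curr_feats):
    """prev_feats: (M, D), curr_feats: (N, D) box center + size + shape descriptors."""
    residual = prev_feats[:, None, :] - curr_feats[None, :, :]   # (M, N, D) pairwise residuals
    dist = np.linalg.norm(residual, axis=-1)                      # spatiotemporal/shape distance
    aff = np.exp(-dist)
    return aff / aff.sum(axis=1, keepdims=True)                   # affinity between detections

prev = np.random.rand(3, 7)
curr = np.random.rand(4, 7)
aff = affinity(prev, curr)
rows, cols = linear_sum_assignment(-aff)    # associate previous tracks with current detections
print(list(zip(rows, cols)))
```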
  • Publication number: 20240153197
    Abstract: An example method includes generating embeddings of image data that includes multiple images, where each image has a different viewpoint of a scene, generating a latent space and a decoder, wherein the decoder receives embeddings as input to generate an output viewpoint, for each viewpoint in the image data, determining a volumetric rendering view synthesis loss and a multi-view photometric loss, and applying an optimization algorithm to the latent space and the decoder over a number of epochs until the volumetric rendering view synthesis loss is within a volumetric threshold and the multi-view photometric loss is within a multi-view threshold.
    Type: Application
    Filed: August 3, 2023
    Publication date: May 9, 2024
    Applicants: Toyota Research Institute, Inc., Massachusetts Institute of Technology, Toyota Jidosha Kabushiki Kaisha
    Inventors: Vitor Guizilini, Rares A. Ambrus, Jiading Fang, Sergey Zakharov, Vincent Sitzmann, Igor Vasiljevic, Adrien Gaidon
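    A minimal sketch of the optimization loop described above, assuming PyTorch; the tiny decoder, the stand-in target views, and both loss terms are placeholders for the volumetric rendering view-synthesis and multi-view photometric losses, not the patented models:

```python
import torch

latent = torch.zeros(1, 16, requires_grad=True)                  # scene latent code
decoder = torch.nn.Sequential(torch.nn.Linear(16, 64), torch.nn.ReLU(),
                              torch.nn.Linear(64, 3 * 8 * 8))    # decodes latent -> tiny image
target_views = torch.rand(4, 3 * 8 * 8)                          # stand-in for 4 input viewpoints
opt = torch.optim.Adam([latent, *decoder.parameters()], lr=1e-2)

for epoch in range(2000):
    opt.zero_grad()
    rendered = decoder(latent).expand(len(target_views), -1)
    view_synthesis_loss = torch.nn.functional.mse_loss(rendered, target_views)
    photometric_loss = (rendered - target_views).abs().mean()
    (view_synthesis_loss + photometric_loss).backward()
    opt.step()
    if view_synthesis_loss < 0.05 and photometric_loss < 0.2:    # illustrative thresholds
        break
```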
  • Patent number: 11975725
    Abstract: A computer implemented method for determining optimal values for operational parameters for a model predictive controller for controlling a vehicle can receive, from a data store or a graphical user interface, ranges for one or more external parameters. The computer implemented method can determine optimum values for external parameters of the vehicle by simulating a vehicle operation across the ranges of the one or more operational parameters by solving a vehicle control problem and determining an output of the vehicle control problem based on a result for the simulated vehicle operation. A vehicle can include a processing component configured to adjust a control input for an actuator of the vehicle according to a control algorithm and based on the optimum values of the vehicle parameter as determined by the computer implemented method.
    Type: Grant
    Filed: February 2, 2021
    Date of Patent: May 7, 2024
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Michael Thompson, Carrie Bobier-Tiu, Manuel Ahumada, Arjun Bhargava, Avinash Balachandran
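    A hedged sketch of the parameter sweep, assuming Python/NumPy; the point-mass model, proportional controller, and cost below are illustrative stand-ins for the patented simulation and control problem:

```python
# Sweep a range of controller parameter values, simulate a simple vehicle model
# for each value, and keep the value with the best accumulated cost.
import numpy as np

def simulate(gain, target_speed=20.0, dt=0.1, steps=200):
    speed, cost = 0.0, 0.0
    for _ in range(steps):
        accel = gain * (target_speed - speed)        # proportional stand-in for the MPC solve
        speed += accel * dt
        cost += (target_speed - speed) ** 2 * dt     # tracking error accumulated over time
    return cost

gains = np.linspace(0.1, 2.0, 20)                    # range received from a data store or GUI
costs = [simulate(g) for g in gains]
best_gain = gains[int(np.argmin(costs))]
print(f"optimum parameter value: {best_gain:.2f}")
```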
  • Publication number: 20240134898
    Abstract: A method for inferring intent and discrepancies in a label coding scheme is described. The method includes compiling data indicating how one or more individuals labeled unstructured content according to the label coding scheme comprising a plurality of labels. The method also includes analyzing a context associated with a content labeled in a particular manner by the one or more individuals. The method further includes detecting discrepancies of meaning for a particular label used by the one or more individuals. The method also includes inferring a strategic thinking of the one or more individuals associated with the discrepancies of meaning detected for the particular label. The method further includes displaying recorded metadata associated with the strategic thinking and the discrepancies of meaning detected for the particular label between the one or more individuals regarding a coded dataset.
    Type: Application
    Filed: October 19, 2022
    Publication date: April 25, 2024
    Applicants: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Yin-Ying CHEN, Shabnam HAKIMI, Kenton Michael LYONS, Yanxia ZHANG, Matthew Kyung-Soo HONG, Totte HARINEN, Monica PhuongThao VAN, Charlene WU
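    A toy sketch of the discrepancy-detection step, in Python; the annotation records and field names are hypothetical, and the inference of annotator intent is omitted:

```python
# For each item labeled by multiple annotators, flag cases where the same coding
# scheme was applied differently, as candidate discrepancies of meaning.
from collections import defaultdict

annotations = [
    {"item": "utterance-1", "annotator": "A", "label": "request"},
    {"item": "utterance-1", "annotator": "B", "label": "command"},
    {"item": "utterance-2", "annotator": "A", "label": "request"},
    {"item": "utterance-2", "annotator": "B", "label": "request"},
]

by_item = defaultdict(dict)
for record in annotations:
    by_item[record["item"]][record["annotator"]] = record["label"]

discrepancies = {item: labels for item, labels in by_item.items()
                 if len(set(labels.values())) > 1}
print(discrepancies)   # items where annotators used the coding scheme differently
```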
  • Publication number: 20240135721
    Abstract: A method for improving 3D object detection via object-level augmentations is described. The method includes recognizing, using an image recognition model of a differentiable data generation pipeline, an object in an image of a scene. The method also includes generating, using a 3D reconstruction model, a 3D reconstruction of the scene from the image including the recognized object. The method further includes manipulating, using an object level augmentation model, a random property of the object by a random magnitude at an object level to determine a set of properties and a set of magnitudes of an object manipulation that maximizes a loss function of the image recognition model. The method also includes training a downstream task network based on a set of training data generated based on the set of properties and the set of magnitudes of the object manipulation, such that the loss function is minimized.
    Type: Application
    Filed: October 12, 2022
    Publication date: April 25, 2024
    Applicants: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Rares Andrei AMBRUS, Sergey ZAKHAROV, Vitor GUIZILINI, Adrien David GAIDON
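    A short sketch of the adversarial-augmentation idea in Python; the property list, magnitudes, and loss function are invented stand-ins for evaluating the recognition model on a re-rendered scene:

```python
# Sample random object-level edits, keep the property/magnitude pair that maximizes
# the detector's loss, and emit that sample as training data.
import random

PROPERTIES = ["scale", "rotation", "translation", "texture"]

def detector_loss(prop, magnitude):
    # Stand-in for evaluating the image recognition model on the augmented scene.
    return magnitude * (1.5 if prop == "rotation" else 1.0)

best = max(((prop, random.uniform(0.0, 1.0)) for prop in PROPERTIES for _ in range(10)),
           key=lambda pm: detector_loss(*pm))
print("hardest augmentation:", best)   # add the re-rendered sample to the training set
```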
  • Patent number: 11964610
    Abstract: Systems and methods are provided for adjusting headlight properties according to the speed of the vehicle. In particular, some embodiments aim to optimize a vehicle's lighting in suboptimal conditions. Using the data processed by the ADAS, the system is able to optimize the vehicle's lighting by taking into account various factors beyond the current speed limit on the road.
    Type: Grant
    Filed: April 19, 2022
    Date of Patent: April 23, 2024
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Guillermo Pita Gil, Jaime S. Camhi
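    An illustrative sketch only, in Python; the thresholds and inputs are invented for the example and are not taken from the patent:

```python
# Map vehicle speed and visibility to a headlight setting.
def headlight_setting(speed_kph, visibility_m, oncoming_traffic):
    if oncoming_traffic:
        return "low beam"
    if speed_kph > 90 and visibility_m > 150:
        return "high beam, extended range"
    if visibility_m < 50:
        return "low beam, fog pattern"
    return "low beam"

print(headlight_setting(speed_kph=110, visibility_m=200, oncoming_traffic=False))
```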
  • Patent number: 11967141
    Abstract: One or more embodiments of the present disclosure include systems and methods that use neural architecture fusion to learn how to combine multiple separate pre-trained networks by fusing their architectures into a single network for better computational efficiency and higher accuracy. For example, a computer implemented method of the disclosure includes obtaining multiple trained networks. Each of the trained networks may be associated with a respective task and has a respective architecture. The method further includes generating a directed acyclic graph that represents at least a partial union of the architectures of the trained networks. The method additionally includes defining a joint objective for the directed acyclic graph that combines a performance term and a distillation term. The method also includes optimizing the joint objective over the directed acyclic graph.
    Type: Grant
    Filed: January 30, 2023
    Date of Patent: April 23, 2024
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Adrien David Gaidon, Jie Li
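    A minimal Python sketch of the fusion idea; the layer graphs and both objective terms are placeholders, not the patented networks or losses:

```python
# Represent each pre-trained network as a layer graph, take the union of their
# edges to form a single DAG, and score it with a joint objective that combines
# a performance term and a distillation term.
net_a = {("input", "conv1"), ("conv1", "conv2"), ("conv2", "head_a")}
net_b = {("input", "conv1"), ("conv1", "conv3"), ("conv3", "head_b")}

fused_dag = net_a | net_b          # partial union of the two architectures

def joint_objective(dag, performance_weight=1.0, distill_weight=0.5):
    performance = -len(dag) * 0.01        # placeholder task-performance term
    distillation = -0.1                   # placeholder agreement with the teacher networks
    return performance_weight * performance + distill_weight * distillation

print(sorted(fused_dag))
print("joint objective:", joint_objective(fused_dag))
```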
  • Patent number: 11966234
    Abstract: A method for controlling an ego agent includes capturing a two-dimensional (2D) image of an environment adjacent to the ego agent. The method also includes generating a semantically segmented image of the environment based on the 2D image. The method further includes generating a depth map of the environment based on the semantically segmented image. The method additionally includes generating a three-dimensional (3D) estimate of the environment based on the depth map. The method further includes identifying a location of the ego agent based on the 3D estimate of the environment. The method also includes controlling an action of the ego agent based on the identified location.
    Type: Grant
    Filed: July 23, 2020
    Date of Patent: April 23, 2024
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Vitor Guizilini, Jie Li, Rares A. Ambrus, Sudeep Pillai, Adrien Gaidon
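    A sketch of the pipeline shape in Python/NumPy; the segmentation and depth models are empty stand-ins, while the unprojection uses the standard pinhole relation (with assumed intrinsics) to lift a depth map into a 3D point cloud:

```python
import numpy as np

def segment(image):            # stand-in for the semantic segmentation network
    return np.zeros(image.shape[:2], dtype=int)

def estimate_depth(seg):       # stand-in for the segmentation-conditioned depth network
    return np.ones(seg.shape)

def unproject(depth, fx=500.0, fy=500.0, cx=64.0, cy=64.0):
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)   # (H, W, 3) 3D estimate of the scene

image = np.zeros((128, 128, 3))
points = unproject(estimate_depth(segment(image)))
print(points.shape)
```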
  • Patent number: 11958498
    Abstract: Systems and methods for trajectory planning for an autonomous vehicle may include: computing features for each of the plurality of candidate trajectories; computing scores for the features of the candidate trajectories, wherein the scores are based on parameter values associated with their corresponding final trajectories; determining, based on the computed scores, a trajectory of the candidate trajectories to be used as a warm-start trajectory for trajectory optimization and applying the warm-start trajectory to develop a final trajectory for the vehicle; and autonomously operating the autonomous vehicle in accordance with the final trajectory.
    Type: Grant
    Filed: August 24, 2020
    Date of Patent: April 16, 2024
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Miroslav Baric, Jin Ge, Timothee Cazenave
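    A toy Python version of warm-start selection; the features, weights, and the final "optimizer" step are illustrative stand-ins, not the patented scoring or trajectory optimization:

```python
# Score candidate trajectories with simple features, pick the best-scoring one,
# and hand it to the optimizer as the initial guess.
import numpy as np

def features(traj):
    jerk = np.abs(np.diff(traj, n=3)).sum() if len(traj) > 3 else 0.0
    length = np.abs(np.diff(traj)).sum()
    return np.array([jerk, length])

def score(traj, weights=np.array([-1.0, -0.1])):
    return float(weights @ features(traj))

candidates = [np.linspace(0, 10, 50),
              np.linspace(0, 10, 50) ** 1.1,
              np.sqrt(np.linspace(0, 100, 50))]
warm_start = max(candidates, key=score)
final_trajectory = warm_start                  # placeholder for the full trajectory optimization
print("warm-start score:", score(warm_start))
```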
  • Publication number: 20240119857
    Abstract: Systems, methods, and other embodiments described herein relate to training a scene simulator for rendering 2D scenes using data from real and simulated agents. In one embodiment, a method includes acquiring trajectories and three-dimensional (3D) views for multiple agents from observations of real vehicles. The method also includes generating a 3D scene having the multiple agents using the 3D views and information from simulated agents. The method also includes training a scene simulator to render scene projections using the 3D scene. The method also includes outputting a 2D scene having simulated observations for a driving scene using the scene simulator.
    Type: Application
    Filed: September 27, 2022
    Publication date: April 11, 2024
    Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha, Massachusetts Institute of Technology
    Inventors: Tsun-Hsuan Wang, Alexander Amini, Wilko Schwarting, Igor Gilitschenski, Sertac Karaman, Daniela Rus
  • Patent number: 11951633
    Abstract: Systems and methods for determining a location of a robot are provided. A method includes receiving, by a processor, a signal from a deformable sensor including data with respect to a deformation region in a deformable membrane of the deformable sensor resulting from contact with a first object. The data associated with contact with the first object is compared, by the processor, to information associated with a plurality of objects stored in a database. The first object is identified, by the processor, as a first identified object of the plurality of objects stored in the database. The first identified object is an object of the plurality of objects stored in the database that is most similar to the first object. The location of the robot is determined, by the processor, based on a location of the first identified object.
    Type: Grant
    Filed: January 10, 2023
    Date of Patent: April 9, 2024
    Assignees: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
    Inventors: Alexander Alspach, Naveen Suresh Kuppuswamy, Avinash Uttamchandani, Samson F. Creasey, Russell L. Tedrake, Kunimatsu Hashimoto, Erik C. Sobel, Takuya Ikeda
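    A toy Python/NumPy sketch of the matching step; the contact descriptors, database entries, and nearest-neighbor comparison are invented for illustration:

```python
# Compare a contact descriptor from the deformable sensor against a database of
# known objects and return the stored location of the closest match.
import numpy as np

database = {
    "table_corner": {"descriptor": np.array([0.9, 0.1, 0.3]), "location": (2.0, 1.0)},
    "door_handle":  {"descriptor": np.array([0.2, 0.8, 0.5]), "location": (5.5, 0.2)},
}

def localize(contact_descriptor):
    name = min(database,
               key=lambda k: np.linalg.norm(database[k]["descriptor"] - contact_descriptor))
    return name, database[name]["location"]    # robot location inferred from the matched object

print(localize(np.array([0.85, 0.15, 0.25])))
```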
  • Patent number: 11954919
    Abstract: Systems and methods are provided for developing/updating training datasets for traffic light detection/perception models. V2I-based information may indicate a particular traffic light state/state of transition. This information can be compared to a traffic light perception prediction. When the prediction is inconsistent with the V2I-based information, data regarding the condition(s)/traffic light(s)/etc. can be saved and uploaded to a training database to update/refine the training dataset(s) maintained therein. In this way, an existing traffic light perception model can be updated/improved and/or a better traffic light perception model can be developed.
    Type: Grant
    Filed: October 8, 2020
    Date of Patent: April 9, 2024
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Kun-Hsin Chen, Peiyan Gong, Shunsho Kaku, Sudeep Pillai, Hai Jin, Sarah Yoo, David L. Garber, Ryan W. Wolcott
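    A short Python sketch of the mining loop; the record fields and states are illustrative, and the upload/retraining steps are omitted:

```python
# When the camera-based traffic light prediction disagrees with the V2I-reported
# state, save the sample so it can be added to the training dataset.
training_database = []

def reconcile(sample):
    if sample["predicted_state"] != sample["v2i_state"]:
        training_database.append(sample)        # hard example used to refine the model

reconcile({"image_id": 101, "predicted_state": "green", "v2i_state": "red"})
reconcile({"image_id": 102, "predicted_state": "red", "v2i_state": "red"})
print(len(training_database), "mismatched samples queued for upload")
```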
  • Patent number: 11948310
    Abstract: Systems and methods described herein relate to jointly training a machine-learning-based monocular optical flow, depth, and scene flow estimator. One embodiment processes a pair of temporally adjacent monocular image frames using a first neural network structure to produce a first optical flow estimate; processes the pair of temporally adjacent monocular image frames using a second neural network structure to produce an estimated depth map and an estimated scene flow; processes the estimated depth map and the estimated scene flow using the second neural network structure to produce a second optical flow estimate; and imposes a consistency loss between the first optical flow estimate and the second optical flow estimate that minimizes a difference between the first optical flow estimate and the second optical flow estimate to improve performance of the first neural network structure in estimating optical flow and the second neural network structure in estimating depth and scene flow.
    Type: Grant
    Filed: September 29, 2021
    Date of Patent: April 2, 2024
    Assignee: Toyota Research Institute, Inc.
    Inventors: Vitor Guizilini, Rares A. Ambrus, Kuan-Hui Lee, Adrien David Gaidon
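    The consistency term itself is simple; a hedged PyTorch sketch with random stand-in flow fields, omitting the two networks that produce the estimates:

```python
import torch

flow_from_flow_net = torch.rand(1, 2, 64, 64, requires_grad=True)   # first optical flow estimate
flow_from_depth_and_scene_flow = torch.rand(1, 2, 64, 64)           # second optical flow estimate

consistency_loss = (flow_from_flow_net - flow_from_depth_and_scene_flow).abs().mean()
consistency_loss.backward()    # gradients push the two estimates toward agreement
print(float(consistency_loss))
```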
  • Patent number: 11948309
    Abstract: Systems and methods described herein relate to jointly training a machine-learning-based monocular optical flow, depth, and scene flow estimator. One embodiment processes a pair of temporally adjacent monocular image frames using a first neural network structure to produce an optical flow estimate and to extract, from at least one image frame in the pair of temporally adjacent monocular image frames, a set of encoded image context features; triangulates the optical flow estimate to generate a depth map; extracts a set of encoded depth context features from the depth map using a depth context encoder; and combines the set of encoded image context features and the set of encoded depth context features to improve performance of a second neural network structure in estimating depth and scene flow.
    Type: Grant
    Filed: September 29, 2021
    Date of Patent: April 2, 2024
    Assignee: Toyota Research Institute, Inc.
    Inventors: Vitor Guizilini, Rares A. Ambrus, Kuan-Hui Lee, Adrien David Gaidon
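    A heavily simplified Python/NumPy stand-in for the triangulation step: under an assumed pure sideways camera translation (baseline B), horizontal flow acts as a disparity and depth = f·B/disparity; the general two-view case requires the full relative pose and is not shown:

```python
import numpy as np

def depth_from_flow(horizontal_flow_px, focal_px=500.0, baseline_m=0.2):
    disparity = np.clip(np.abs(horizontal_flow_px), 1e-3, None)   # avoid division by zero
    return focal_px * baseline_m / disparity

flow_u = np.random.uniform(1.0, 20.0, size=(64, 64))   # stand-in horizontal flow field
depth_map = depth_from_flow(flow_u)                     # fed to the depth context encoder
print(depth_map.min(), depth_map.max())
```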
  • Patent number: 11945342
    Abstract: A method for adjusting a seat in a vehicle includes adjusting the seat from an initial position to a passenger adjusted position based on receiving an input from a passenger. The method further includes determining the passenger exited the vehicle after adjusting the seat and predicting a likelihood of the passenger returning to the vehicle based on determining the passenger exited the vehicle. The method also includes adjusting the seat to the initial position based on the likelihood of the passenger returning being less than a passenger returning threshold. The method still further includes identifying a person approaching the vehicle after the passenger exited the vehicle based on information captured by one or more sensors of the vehicle. The method also includes adjusting the seat to a position associated with the previous passenger based on identifying the person approaching the vehicle as the previous passenger.
    Type: Grant
    Filed: June 30, 2021
    Date of Patent: April 2, 2024
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Hiromitsu Urano, Kentaro Ichikawa, Junya Ueno
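    A sketch of the decision logic only, in Python; the likelihood estimate, identification, thresholds, and seat representation are placeholders for the learned or sensor-driven versions:

```python
def on_passenger_exit(seat, return_likelihood, threshold=0.5):
    if return_likelihood < threshold:
        seat["position"] = seat["initial_position"]       # restore the default position

def on_person_approach(seat, person_id, previous_passenger_id, saved_position):
    if person_id == previous_passenger_id:
        seat["position"] = saved_position                 # restore the passenger's adjustment

seat = {"initial_position": 0, "position": 7}
on_passenger_exit(seat, return_likelihood=0.2)
on_person_approach(seat, person_id="p42", previous_passenger_id="p42", saved_position=7)
print(seat["position"])
```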
  • Publication number: 20240104905
    Abstract: A method for multi-view dataset formation from fleet data is described. The method includes detecting at least a pair of vehicles within a vicinity of one another, and having overlapping viewing frustums of a scene. The method also includes triggering a capture of sensor data from the pair of vehicles. The method further includes synchronizing the sensor data captured by the pair of vehicles. The method also includes registering the sensor data captured by the pair of vehicles within a shared coordinate system to form a multi-view dataset of the scene.
    Type: Application
    Filed: September 28, 2022
    Publication date: March 28, 2024
    Applicants: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Simon A.I. STENT, Dennis PARK
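    A toy Python/NumPy sketch of the pairing and registration steps; the distance-only overlap test, 2D poses, and random point clouds are simplified placeholders for the frustum check and full sensor registration:

```python
import numpy as np

def frustums_overlap(pose_a, pose_b, max_range=50.0):
    return np.linalg.norm(pose_a[:2] - pose_b[:2]) < max_range   # crude proximity check

def to_shared_frame(points, pose):
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    return points @ R.T + np.array([x, y])        # vehicle frame -> shared world frame

pose_a, pose_b = np.array([0.0, 0.0, 0.0]), np.array([10.0, 2.0, np.pi])
if frustums_overlap(pose_a, pose_b):
    cloud_a = to_shared_frame(np.random.rand(100, 2), pose_a)
    cloud_b = to_shared_frame(np.random.rand(100, 2), pose_b)
    multi_view_dataset = {"scene": np.vstack([cloud_a, cloud_b])}
    print(multi_view_dataset["scene"].shape)
```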
  • Patent number: 11940798
    Abstract: An autonomous vehicle configured to autonomously pass a cyclist includes an imaging device and processing circuitry configured to receive information from the imaging device. Additionally, the processing circuitry of the autonomous vehicle is configured to identify a cyclist passing situation based on the information received from the imaging device, and plan a path of an autonomous vehicle based on the cyclist passing situation. The autonomous vehicle also includes a positioning system and the processing circuitry is further configured to receive information from the positioning system, determine if the cyclist passing situation is sufficiently identified, and identify the cyclist passing situation based on the information from the imaging device and the positioning system when the cyclist passing situation is not sufficiently identified based on the information received from the imaging device.
    Type: Grant
    Filed: July 6, 2018
    Date of Patent: March 26, 2024
    Assignees: TOYOTA RESEARCH INSTITUTE, INC., The Regents of the University of Michigan
    Inventors: Michael J. Delp, Ruijia Feng, Shan Bao
  • Patent number: 11934476
    Abstract: A method for search construct validation is described. The method includes determining a construct of a search query recognized on a search engine of a third party webpage and related constructs represented in a plurality of search results generated based on the search query. The method includes mapping the determined construct to the related constructs represented in the plurality of search results generated based on the search query. The method includes generating a knowledge graph illustrating the mapping between the determined construct and the related constructs and a hierarchy and a strength of a conceptual relationship between the determined construct and the related constructs. The method includes displaying, via an interactive user interface, an interactive graph illustrating the mapping between the determined construct and the plurality of related constructs and the hierarchy and the strength of the conceptual relationship between the determined construct and the plurality of related constructs.
    Type: Grant
    Filed: October 28, 2021
    Date of Patent: March 19, 2024
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Shabnam Hakimi, Charlene C. Wu, Matthew Len Lee, Nikos Arechiga
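    A small Python sketch of the mapping as a weighted graph; the construct, related constructs, and strength values are invented for illustration:

```python
knowledge_graph = {
    "trust": {                       # construct recognized in the search query
        "reliability": 0.8,          # related construct and conceptual-relationship strength
        "transparency": 0.6,
        "risk perception": 0.4,
    }
}

for related, strength in sorted(knowledge_graph["trust"].items(), key=lambda kv: -kv[1]):
    print(f"trust -> {related} (strength {strength})")
```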
  • Patent number: 11935427
    Abstract: A driver training system includes an unmanned aerial vehicle (UAV) including a processor and a memory communicably coupled to the processor and storing a UAV control module including computer-readable instructions that when executed by the processor cause the processor to control operation of a driver training interface operably connected to the UAV to communicate driving instruction information in a manner configured to be perceptible by a human driver in a driver training vehicle following behind the UAV.
    Type: Grant
    Filed: April 27, 2022
    Date of Patent: March 19, 2024
    Assignees: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
    Inventors: Manuel Ludwig Kuehner, Hiroshi Yasuda, Alexander R. Green