Patents by Inventor Adrien D. Gaidon

Adrien D. Gaidon has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240171724
    Abstract: The present disclosure provides neural fields for sparse novel view synthesis of outdoor scenes. Given just a single or a few input images from a novel scene, the disclosed technology can render new 360° views of complex unbounded outdoor scenes. This can be achieved by constructing an image-conditional triplanar representation to model the 3D surrounding from various perspectives. The disclosed technology can generalize across novel scenes and viewpoints for complex 360° outdoor scenes.
    Type: Application
    Filed: October 16, 2023
    Publication date: May 23, 2024
    Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
    Inventors: Muhammad Zubair Irshad, Sergey Zakharov, Katherine Y. Liu, Vitor Guizilini, Thomas Kollar, Adrien D. Gaidon, Rares A. Ambrus
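The image-conditional triplanar representation above factorizes the 3D surroundings into three axis-aligned feature planes, so querying any 3D point reduces to three 2D lookups. A minimal sketch of that lookup, assuming nearest-neighbour sampling, a 32×32 plane resolution, and summation as the aggregation rule (none of which are specified by the abstract):

```python
import numpy as np

def sample_triplane(planes, xyz, extent=1.0):
    """Sample features for 3D points from a triplanar representation:
    project each point onto the XY, XZ and YZ feature planes (nearest
    neighbour for brevity) and sum the three sampled features."""
    res, c = planes["xy"].shape[0], planes["xy"].shape[-1]

    def grid(p2d):
        # Map coordinates in [-extent, extent] to integer plane indices.
        return np.clip(((p2d + extent) / (2 * extent) * res).astype(int), 0, res - 1)

    feats = np.zeros((len(xyz), c))
    for name, dims in (("xy", (0, 1)), ("xz", (0, 2)), ("yz", (1, 2))):
        idx = grid(xyz[:, dims])
        feats += planes[name][idx[:, 0], idx[:, 1]]
    return feats

rng = np.random.default_rng(1)
planes = {k: rng.normal(size=(32, 32, 8)) for k in ("xy", "xz", "yz")}
pts = rng.uniform(-1, 1, size=(10, 3))
print(sample_triplane(planes, pts).shape)  # (10, 8)
```

In the disclosed system the planes would be predicted from the input image(s) by an encoder, and the sampled features decoded into color and density for rendering the new 360° views; this sketch covers only the per-point lookup.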
  • Publication number: 20240029286
    Abstract: A method of generating additional supervision data to improve learning of a geometrically-consistent latent scene representation with a geometric scene representation architecture is provided. The method includes receiving, with a computing device, a latent scene representation encoding a pointcloud from images of a scene captured by a plurality of cameras each with known intrinsics and poses, generating a virtual camera having a viewpoint different from viewpoints of the plurality of cameras, projecting information from the pointcloud onto the viewpoint of the virtual camera, and decoding the latent scene representation based on the virtual camera thereby generating an RGB image and depth map corresponding to the viewpoint of the virtual camera for implementation as additional supervision data.
    Type: Application
    Filed: February 16, 2023
    Publication date: January 25, 2024
    Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha, Toyota Technological Institute at Chicago
    Inventors: Vitor Guizilini, Igor Vasiljevic, Adrien D. Gaidon, Jiading Fang, Gregory Shakhnarovich, Matthew R. Walter, Rares A. Ambrus
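The core geometric step above is projecting the pointcloud onto the viewpoint of a virtual camera. A self-contained sketch of just that projection, assuming a standard pinhole model (the intrinsics, image size, and identity pose are illustrative; the latent-scene decoding into RGB is omitted):

```python
import numpy as np

def project_to_virtual_camera(points_world, K, T_cam_world, hw=(48, 64)):
    """Project world-frame points into a virtual pinhole camera to obtain a
    sparse depth map, one form of additional supervision data."""
    h, w = hw
    pts_h = np.concatenate([points_world, np.ones((len(points_world), 1))], axis=1)
    pts_cam = (T_cam_world @ pts_h.T).T[:, :3]      # world -> camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 1e-6]         # keep points in front of camera
    uvz = (K @ pts_cam.T).T
    uv = (uvz[:, :2] / uvz[:, 2:3]).astype(int)     # perspective divide
    keep = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    depth = np.zeros((h, w), dtype=np.float32)
    depth[uv[keep, 1], uv[keep, 0]] = pts_cam[keep, 2]
    return depth

K = np.array([[50.0, 0, 32], [0, 50.0, 24], [0, 0, 1]])  # assumed intrinsics
T = np.eye(4)                                            # virtual camera at the origin
pts = np.random.default_rng(0).uniform([-1, -1, 2], [1, 1, 6], size=(200, 3))
depth = project_to_virtual_camera(pts, K, T)
print(depth.shape)  # (48, 64)
```

Varying `T_cam_world` yields viewpoints different from those of the physical cameras, which is what makes the rendered RGB/depth pairs usable as extra supervision.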
  • Publication number: 20240028792
    Abstract: The disclosure provides implicit representations for multi-object 3D shape, 6D pose and size, and appearance optimization, including obtaining shape, 6D pose and size, and appearance codes. Training is employed using shape and appearance priors from an implicit joint differential database. 2D masks are also obtained and are used in an optimization process that utilizes a combined loss minimizing function and an Octree-based coarse-to-fine differentiable optimization to jointly optimize the latest shape, appearance, pose and size, and 2D masks. An object surface is recovered from the latest shape codes to a desired resolution level. The database represents shapes as Signed Distance Fields (SDF), and appearance as Texture Fields (TF).
    Type: Application
    Filed: July 19, 2022
    Publication date: January 25, 2024
    Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
    Inventors: Muhammad Zubair Irshad, Sergey Zakharov, Rares A. Ambrus, Adrien D. Gaidon
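Because the database represents shapes as Signed Distance Fields, recovering an object surface "to a desired resolution level" amounts to querying the SDF on a grid and extracting its zero level set. A toy sketch with an analytic sphere SDF standing in for the learned shape codes (the octree-based coarse-to-fine optimization is not modeled here):

```python
import numpy as np

def sphere_sdf(xyz, center, radius):
    """Signed distance to a sphere: negative inside, positive outside."""
    return np.linalg.norm(xyz - center, axis=-1) - radius

def surface_points(sdf_fn, res=32, extent=1.0, tol=None):
    """Recover near-surface grid points from an SDF at a chosen resolution,
    a crude stand-in for marching-cubes style surface extraction."""
    tol = tol if tol is not None else 2 * extent / res
    axis = np.linspace(-extent, extent, res)
    grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1).reshape(-1, 3)
    d = sdf_fn(grid)
    return grid[np.abs(d) < tol]

pts = surface_points(lambda p: sphere_sdf(p, np.zeros(3), 0.5))
# All recovered points lie close to the radius-0.5 sphere.
print(np.allclose(np.linalg.norm(pts, axis=1), 0.5, atol=0.1))  # True
```

Raising `res` tightens the band around the true surface, which is the sense in which the resolution level is "desired" rather than fixed.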
  • Publication number: 20240013409
    Abstract: A method for multiple object tracking includes receiving, with a computing device, a point cloud dataset, detecting one or more objects in the point cloud dataset, each of the detected one or more objects defined by points of the point cloud dataset and a bounding box, querying one or more historical tracklets for historical tracklet states corresponding to each of the one or more detected objects, implementing a 4D encoding backbone comprising two branches: a first branch configured to compute per-point features for each of the one or more objects and the corresponding historical tracklet states, and a second branch configured to obtain 4D point features, concatenating the per-point features and the 4D point features, and predicting, with a decoder receiving the concatenated per-point features, current tracklet states for each of the one or more objects.
    Type: Application
    Filed: May 26, 2023
    Publication date: January 11, 2024
    Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha, The Board of Trustees of the Leland Stanford Junior University
    Inventors: Colton Stearns, Jie Li, Rares A. Ambrus, Vitor Campagnolo Guizilini, Sergey Zakharov, Adrien D. Gaidon, Davis Rempe, Tolga Birdal, Leonidas J. Guibas
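The fusion-and-decoding step in the abstract (concatenate per-point features with 4D point features, then predict current tracklet states) can be sketched as follows. The random features, max-pooling, linear decoder, and the assumed state layout (x, y, z, yaw) are all illustrative stand-ins for the learned 4D encoding backbone and decoder:

```python
import numpy as np

def decode_tracklet_state(per_point_feats, point4d_feats, w):
    """Concatenate the two feature branches per point, pool over points with a
    permutation-invariant max, and apply a linear decoder to predict a
    tracklet state (assumed here to be x, y, z, yaw)."""
    fused = np.concatenate([per_point_feats, point4d_feats], axis=-1)  # (N, d1+d2)
    pooled = fused.max(axis=0)
    return w @ pooled  # (4,)

rng = np.random.default_rng(0)
n_points, d1, d2 = 128, 16, 8
per_point = rng.normal(size=(n_points, d1))   # branch 1: per-point features
p4d = rng.normal(size=(n_points, d2))         # branch 2: 4D point features
w = rng.normal(size=(4, d1 + d2))             # toy linear decoder
state = decode_tracklet_state(per_point, p4d, w)
print(state.shape)  # (4,)
```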
  • Publication number: 20230237807
    Abstract: A method for tracking occluded objects includes encoding locations of a plurality of objects in an environment, determining a target object, receiving a first end point corresponding to a position of the target object before occlusion behind an occlusion object, distributing a hypothesis between both sides of the occlusion object during occlusion from a subsequent frame of a sequence of frames, receiving a second end point corresponding to a position of the target object after emerging from occlusion from another subsequent frame of the sequence of frames, and determining a trajectory of the target object when occluded by the occlusion object by performing inferences using a spatio-temporal probabilistic graph based on the current frame and the subsequent frames of the sequence of frames. The trajectory of the target object when occluded is used as a learning model for future target objects that are occluded by the occlusion object.
    Type: Application
    Filed: May 6, 2022
    Publication date: July 27, 2023
    Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha, The Regents of the University of California
    Inventors: Pavel Tokmakov, Allan Jabri, Adrien D. Gaidon
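The key inference is filling in the trajectory between the two end points (last position before occlusion, first position after re-emergence). The patent uses a spatio-temporal probabilistic graph; as the simplest possible stand-in, a constant-velocity interpolation between the end points looks like this:

```python
import numpy as np

def occluded_trajectory(p_before, p_after, t_before, t_after, frames):
    """Linearly interpolate the target's position over the occluded frames,
    a constant-velocity stand-in for the patented probabilistic-graph
    inference."""
    ts = np.asarray(frames, dtype=float)
    alpha = (ts - t_before) / (t_after - t_before)
    return p_before + alpha[:, None] * (p_after - p_before)

# Target last seen at (0, 0) in frame 0, re-emerges at (4, 2) in frame 4.
traj = occluded_trajectory(np.array([0.0, 0.0]), np.array([4.0, 2.0]), 0, 4, [1, 2, 3])
print(traj)  # [[1. 0.5], [2. 1.], [3. 1.5]]
```

The probabilistic-graph formulation generalizes this by maintaining hypotheses on both sides of the occluder rather than committing to a single straight-line path.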
  • Publication number: 20230029993
    Abstract: Systems, methods, computer-readable media, techniques, and methodologies are disclosed for generating vehicle controls and/or driving policies based on machine learning models that utilize an intermediate representation of driving scenes as well as demonstrations (e.g., by behavioral cloning). An intermediate representation that includes inductive biases about the structure of driving scenes for a vehicle can be generated by a self-supervised first machine learning model. A driving policy for the vehicle can be determined by a second machine learning model trained by a set of expert demonstrations and based on the intermediate representation. The expert demonstrations can include labeled data. An appropriate vehicle action may then be determined based on the driving policy. A control signal indicative of this vehicle action may then be output to cause an autonomous vehicle, for example, to implement the appropriate vehicle action.
    Type: Application
    Filed: July 28, 2021
    Publication date: February 2, 2023
    Inventors: Albert Zhao, Rares A. Ambrus, Adrien D. Gaidon
  • Patent number: 11501490
    Abstract: The embodiments disclosed herein describe vehicles, systems and methods for multi-resolution fusion of pseudo-LiDAR features. In one aspect, a method for multi-resolution fusion of pseudo-LiDAR features includes receiving image data from one or more image sensors, generating a point cloud from the image data, generating, from the point cloud, a first bird's eye view map having a first resolution, generating, from the point cloud, a second bird's eye view map having a second resolution, and generating a combined bird's eye view map by combining features of the first bird's eye view map with features from the second bird's eye view map.
    Type: Grant
    Filed: July 28, 2020
    Date of Patent: November 15, 2022
    Assignee: Toyota Research Institute, Inc.
    Inventors: Victor Vaquero Gomez, Rares A. Ambrus, Vitor Guizilini, Adrien D. Gaidon
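The method above is concrete enough to sketch end to end: rasterize the (pseudo-LiDAR) point cloud into bird's eye view maps at two resolutions, then combine their features. Binary occupancy, the specific resolutions, nearest-neighbour upsampling, and channel stacking as the fusion rule are all assumptions; the patent does not fix these choices:

```python
import numpy as np

def bev_map(points, grid_size, extent=10.0):
    """Rasterize a point cloud (N, 3) into a square bird's eye view
    occupancy grid covering [-extent, extent] in x and y."""
    ij = ((points[:, :2] + extent) / (2 * extent) * grid_size).astype(int)
    ij = np.clip(ij, 0, grid_size - 1)
    bev = np.zeros((grid_size, grid_size), dtype=np.float32)
    bev[ij[:, 0], ij[:, 1]] = 1.0
    return bev

def fuse_multiresolution(points, fine=64, coarse=32):
    """Generate a fine and a coarse BEV map, upsample the coarse one
    (nearest neighbour via a Kronecker product), and stack both as channels
    to form the combined BEV map."""
    fine_map = bev_map(points, fine)
    coarse_map = bev_map(points, coarse)
    upsampled = np.kron(coarse_map, np.ones((fine // coarse, fine // coarse)))
    return np.stack([fine_map, upsampled], axis=0)

points = np.random.default_rng(0).uniform(-10, 10, size=(500, 3))
fused = fuse_multiresolution(points)
print(fused.shape)  # (2, 64, 64)
```

The coarse channel trades spatial precision for robustness to the depth noise typical of camera-derived (pseudo-LiDAR) point clouds, which is the motivation for fusing resolutions rather than picking one.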
  • Patent number: 11378965
    Abstract: Exemplary implementations may: generate output signals conveying contextual information and vehicle information; determine, based on the output signals, the contextual information; determine, based on the output signals, the vehicle information; determine, in an ongoing manner, based on the contextual information and/or the vehicle information, values of a complexity metric, the complexity metric quantifying predicted complexity of a current contextual environment and/or predicted complexity of a likely needed response to a change in the contextual information; filter, based on the values of the complexity metric, the contextual information spatially; and control, based on the vehicle information and the spatially filtered contextual information, the vehicle such that the likely needed response is satisfied.
    Type: Grant
    Filed: November 15, 2018
    Date of Patent: July 5, 2022
    Assignee: Toyota Research Institute, Inc.
    Inventors: Felipe Codevilla, Eder Santana, Adrien D. Gaidon
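The spatial-filtering step driven by the complexity metric can be illustrated with a toy rule. The monotone mapping from complexity to retention radius is an assumption for illustration, not the patented filter:

```python
import numpy as np

def spatial_filter(context_xy, ego_xy, complexity, r_min=10.0, r_max=50.0):
    """Spatially filter contextual items using a complexity metric in [0, 1]:
    higher predicted complexity retains context over a wider radius
    (a hypothetical monotone rule, not the patented one)."""
    c = float(np.clip(complexity, 0.0, 1.0))
    radius = r_min + c * (r_max - r_min)
    dists = np.linalg.norm(context_xy - ego_xy, axis=1)
    return context_xy[dists <= radius]

context = np.array([[5.0, 0.0], [20.0, 0.0], [45.0, 0.0]])  # item positions (m)
ego = np.array([0.0, 0.0])
print(len(spatial_filter(context, ego, complexity=0.0)))  # radius 10 m -> 1 item
print(len(spatial_filter(context, ego, complexity=1.0)))  # radius 50 m -> 3 items
```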
  • Patent number: 11347788
    Abstract: Systems and methods for generating a requested image view are disclosed. Exemplary implementations may: electronically store map information and contextual information for an area; receive a query for the requested image view; determine, based on the parameter values specified by the query and the map information, values of a physics-based metric; translate the contextual information to a translated representation of the contextual information; encode, based on the translated representation of the contextual information and the values of the physics-based metric, an image file that defines the requested image view such that the translated representation of the contextual information and the values of the physics-based metric are combined; and generate the requested image view by decoding the image file.
    Type: Grant
    Filed: January 16, 2019
    Date of Patent: May 31, 2022
    Assignee: Toyota Research Institute, Inc.
    Inventors: Felipe Codevilla, Eder Santana, Adrien D. Gaidon
  • Publication number: 20220036650
    Abstract: The embodiments disclosed herein describe vehicles, systems and methods for multi-resolution fusion of pseudo-LiDAR features. In one aspect, a method for multi-resolution fusion of pseudo-LiDAR features includes receiving image data from one or more image sensors, generating a point cloud from the image data, generating, from the point cloud, a first bird's eye view map having a first resolution, generating, from the point cloud, a second bird's eye view map having a second resolution, and generating a combined bird's eye view map by combining features of the first bird's eye view map with features from the second bird's eye view map.
    Type: Application
    Filed: July 28, 2020
    Publication date: February 3, 2022
    Applicant: Toyota Research Institute, Inc.
    Inventors: Victor Vaquero Gomez, Rares A. Ambrus, Vitor Guizilini, Adrien D. Gaidon
  • Publication number: 20220024048
    Abstract: A deformable sensor comprises an enclosure comprising a deformable membrane, the enclosure configured to be filled with a medium, and an imaging sensor, disposed within the enclosure, having a field of view configured to be directed toward a bottom surface of the deformable membrane. The imaging sensor is configured to capture an image of the deformable membrane. The deformable sensor is configured to determine depth values for a plurality of points on the deformable membrane based on the image captured by the imaging sensor and a trained neural network.
    Type: Application
    Filed: January 13, 2021
    Publication date: January 27, 2022
    Applicant: Toyota Research Institute, Inc.
    Inventors: Rares A. Ambrus, Vitor Guizilini, Naveen Suresh Kuppuswamy, Andrew M. Beaulieu, Adrien D. Gaidon, Alexander Alspach
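The final step above maps the captured membrane image to depth values through a trained neural network. As a placeholder for that network, a single affine layer mapping the image to a coarse grid of depths conveys the data flow (all shapes, weights, and the nominal membrane depth are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

def predict_membrane_depth(image, weights, bias):
    """Hypothetical stand-in for the trained network: map the flattened
    membrane image to a 4x4 grid of depth values with one affine layer."""
    return (weights @ image.ravel() + bias).reshape(4, 4)

image = rng.uniform(0, 1, size=(16, 16))          # camera view of the membrane
weights = rng.normal(scale=0.01, size=(16, 256))  # toy learned weights
bias = np.full(16, 0.03)                          # nominal membrane depth (m)
depth_grid = predict_membrane_depth(image, weights, bias)
print(depth_grid.shape)  # (4, 4)
```

In the disclosed sensor the network is trained on images of known deformations, so the predicted depths recover the membrane's shape where an object presses into it.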
  • Patent number: 11157940
    Abstract: The systems and methods described herein disclose providing compensation for data transmission during a refill event. As described here, a vehicle collects operation data sets during movement in the vehicular environment. Vehicles can then transfer one or more of the operation data sets during the refill of the collecting vehicle. Thus, an operator can determine the desirability and value of trading the upload time for compensation. The systems and methods can include detecting a refill event for a collecting vehicle. A data analysis can then be received for one or more operation data sets produced by the collecting vehicle. A data value can then be determined from the data analysis, with the operator determining whether to transfer one or more operation data sets from the collecting vehicle during the refill event. Once received, compensation can be provided to the collecting vehicle for the received operation data sets based on the data value.
    Type: Grant
    Filed: July 27, 2018
    Date of Patent: October 26, 2021
    Assignee: Toyota Research Institute, Inc.
    Inventors: Adrien D. Gaidon, Nikolaos Michalakis
  • Patent number: 10866588
    Abstract: System, methods, and other embodiments described herein relate to improving training of sub-modules for autonomously controlling a vehicle. In one embodiment, a method includes generating projected controls for autonomously controlling the vehicle through a driving scene by analyzing sensor data about the driving scene using an end-to-end (E2E) model. The E2E model is based, at least in part, on at least one of the sub-modules. The method includes training the E2E model according to the projected controls and labels for the sensor data that indicate expected controls for driving the vehicle through the driving scene. The method includes transferring electronic data of the E2E model into the at least one of the sub-modules associated with the E2E model to initialize the at least one of the sub-modules to improve operation of the at least one of the sub-modules at sub-tasks for autonomously controlling the vehicle.
    Type: Grant
    Filed: March 7, 2018
    Date of Patent: December 15, 2020
    Assignee: Toyota Research Institute, Inc.
    Inventors: Shyamal D. Buch, Adrien D. Gaidon
  • Patent number: 10824909
    Abstract: System, methods, and other embodiments described herein relate to conditionally generating custom images by sampling latent space of a generator. In one embodiment, a method includes, in response to receiving a request to generate a custom image, generating a component instruction by translating a description about requested characteristics for an object instance into a vector that identifies a portion of a latent space within a respective generator. The method includes computing the object instance by controlling the respective one of the generators according to the component instruction to produce the object instance. The respective one of the generators is configured to generate objects within a semantic object class. The method includes generating the custom image from at least the object instance to produce the custom image from the description as a photorealistic image approximating a real image corresponding to the description.
    Type: Grant
    Filed: May 15, 2018
    Date of Patent: November 3, 2020
    Assignee: Toyota Research Institute, Inc.
    Inventors: German Ros Sanchez, Adrien D. Gaidon, Kuan-Hui Lee, Jie Li
  • Publication number: 20200226175
    Abstract: Systems and methods for generating a requested image view are disclosed. Exemplary implementations may: electronically store map information and contextual information for an area; receive a query for the requested image view; determine, based on the parameter values specified by the query and the map information, values of a physics-based metric; translate the contextual information to a translated representation of the contextual information; encode, based on the translated representation of the contextual information and the values of the physics-based metric, an image file that defines the requested image view such that the translated representation of the contextual information and the values of the physics-based metric are combined; and generate the requested image view by decoding the image file.
    Type: Application
    Filed: January 16, 2019
    Publication date: July 16, 2020
    Inventors: Felipe Codevilla, Eder Santana, Adrien D. Gaidon
  • Patent number: 10713569
    Abstract: System, methods, and other embodiments described herein relate to improving the generation of realistic images. In one embodiment, a method includes acquiring a synthetic image including identified labels of simulated components within the synthetic image. The synthetic image is a simulated visualization and the identified labels distinguish between the components within the synthetic image. The method includes computing, from the simulated components, translated components that visually approximate real instances of the simulated components by using a generative module comprised of neural networks that are configured to separately generate the translated components. The method includes blending the translated components together to produce a new image from the simulated components of the synthetic image.
    Type: Grant
    Filed: May 31, 2018
    Date of Patent: July 14, 2020
    Assignee: Toyota Research Institute, Inc.
    Inventors: German Ros Sanchez, Adrien D. Gaidon, Kuan-Hui Lee, Jie Li
  • Publication number: 20200159231
    Abstract: Exemplary implementations may: generate output signals conveying contextual information and vehicle information; determine, based on the output signals, the contextual information; determine, based on the output signals, the vehicle information; determine, in an ongoing manner, based on the contextual information and/or the vehicle information, values of a complexity metric, the complexity metric quantifying predicted complexity of a current contextual environment and/or predicted complexity of a likely needed response to a change in the contextual information; filter, based on the values of the complexity metric, the contextual information spatially; and control, based on the vehicle information and the spatially filtered contextual information, the vehicle such that the likely needed response is satisfied.
    Type: Application
    Filed: November 15, 2018
    Publication date: May 21, 2020
    Inventors: Felipe Codevilla, Eder Santana, Adrien D. Gaidon
  • Patent number: 10643320
    Abstract: Systems and methods for generating photorealistic images include training a generative adversarial network (GAN) model by jointly learning a first generator, a first discriminator, and a set of predictors through an iterative process of optimizing a minimax objective. The first discriminator learns to determine a synthetic-to-real image from a real image. The first generator learns to generate the synthetic-to-real image from a synthetic image such that the first discriminator determines the synthetic-to-real image is real. The set of predictors learn to predict at least one of a semantic segmentation labeled data and a privileged information from the synthetic-to-real image based on at least one of a known semantic segmentation labeled data and a known privileged information corresponding to the synthetic image. Once trained, the GAN model may generate one or more photorealistic images using the trained GAN model.
    Type: Grant
    Filed: February 12, 2018
    Date of Patent: May 5, 2020
    Assignee: Toyota Research Institute, Inc.
    Inventors: Kuan-Hui Lee, German Ros, Adrien D. Gaidon, Jie Li
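The minimax objective at the heart of the abstract can be made concrete for the generator/discriminator pair alone. This sketch uses the standard non-saturating GAN losses on raw discriminator logits; the patent's additional predictor terms (semantic segmentation, privileged information) are omitted, and the logit values are made up for the example:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gan_losses(d_real_logits, d_fake_logits):
    """Non-saturating GAN objective: the discriminator maximizes
    log D(real) + log(1 - D(fake)); the generator maximizes log D(fake).
    Both are returned as losses to minimize."""
    eps = 1e-8
    d_loss = -np.mean(np.log(sigmoid(d_real_logits) + eps)
                      + np.log(1.0 - sigmoid(d_fake_logits) + eps))
    g_loss = -np.mean(np.log(sigmoid(d_fake_logits) + eps))
    return d_loss, g_loss

# Discriminator confidently separates real from synthetic-to-real here,
# so its loss is small and the generator's loss is large.
d_loss, g_loss = gan_losses(np.array([2.0, 3.0]), np.array([-2.0, -3.0]))
print(round(float(d_loss), 3), round(float(g_loss), 3))  # 0.176 2.588
```

Alternating gradient steps on these two losses is the "iterative process of optimizing a minimax objective"; the jointly learned predictors add supervised terms on top of `g_loss`.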
  • Publication number: 20200034871
    Abstract: The systems and methods described herein disclose providing compensation for data transmission during a refill event. As described here, a vehicle collects operation data sets during movement in the vehicular environment. Vehicles can then transfer one or more of the operation data sets during the refill of the collecting vehicle. Thus, an operator can determine the desirability and value of trading the upload time for compensation. The systems and methods can include detecting a refill event for a collecting vehicle. A data analysis can then be received for one or more operation data sets produced by the collecting vehicle. A data value can then be determined from the data analysis, with the operator determining whether to transfer one or more operation data sets from the collecting vehicle during the refill event. Once received, compensation can be provided to the collecting vehicle for the received operation data sets based on the data value.
    Type: Application
    Filed: July 27, 2018
    Publication date: January 30, 2020
    Inventors: Adrien D. Gaidon, Nikolaos Michalakis
  • Publication number: 20190370666
    Abstract: System, methods, and other embodiments described herein relate to improving the generation of realistic images. In one embodiment, a method includes acquiring a synthetic image including identified labels of simulated components within the synthetic image. The synthetic image is a simulated visualization and the identified labels distinguish between the components within the synthetic image. The method includes computing, from the simulated components, translated components that visually approximate real instances of the simulated components by using a generative module comprised of neural networks that are configured to separately generate the translated components. The method includes blending the translated components together to produce a new image from the simulated components of the synthetic image.
    Type: Application
    Filed: May 31, 2018
    Publication date: December 5, 2019
    Inventors: German Ros Sanchez, Adrien D. Gaidon, Kuan-Hui Lee, Jie Li