Patents by Inventor Adrien David GAIDON

Adrien David GAIDON has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220148203
    Abstract: Systems, methods, and other embodiments described herein relate to training a depth model for joint depth completion and prediction. In one arrangement, a method includes generating depth features from sparse depth data according to a sparse auxiliary network (SAN) of a depth model. The method includes generating a first depth map from a monocular image and a second depth map from the monocular image and the depth features using the depth model. The method includes generating a depth loss from the second depth map and the sparse depth data and an image loss from the first depth map and the sparse depth data. The method includes updating the depth model including the SAN using the depth loss and the image loss.
    Type: Application
    Filed: January 21, 2021
    Publication date: May 12, 2022
    Inventors: Vitor Guizilini, Rares A. Ambrus, Adrien David Gaidon
  • Publication number: 20220148202
    Abstract: System, methods, and other embodiments described herein relate to determining depths of a scene from a monocular image. In one embodiment, a method includes generating depth features from depth data using a sparse auxiliary network (SAN) by i) sparsifying the depth data, ii) applying sparse residual blocks of the SAN to the depth data, and iii) densifying the depth features. The method includes generating a depth map from the depth features and a monocular image that corresponds with the depth data according to a depth model that includes the SAN. The method includes providing the depth map as depth estimates of objects represented in the monocular image.
    Type: Application
    Filed: January 7, 2021
    Publication date: May 12, 2022
    Inventors: Vitor Guizilini, Rares A. Ambrus, Adrien David Gaidon
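As an illustration of the sparsify-convolve-densify idea in the two SAN abstracts above, here is a minimal sketch of a sparsity-aware convolution that averages only over valid measurements and propagates the validity mask. This is a toy stand-in: the function names, the 3x3 mean filter, and the mask-propagation rule are assumptions, not the patented SAN architecture.

```python
import numpy as np

def sparse_conv(depth, mask, k=3):
    """Sparsity-aware k x k mean filter: averages only over valid pixels.

    depth: HxW array with zeros where no measurement exists.
    mask:  HxW binary validity mask.
    Returns (filtered_depth, propagated_mask).
    """
    pad = k // 2
    d = np.pad(depth * mask, pad)
    m = np.pad(mask.astype(float), pad)
    H, W = depth.shape
    out = np.zeros((H, W))
    new_mask = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            window_d = d[i:i + k, j:j + k]
            window_m = m[i:i + k, j:j + k]
            valid = window_m.sum()
            if valid > 0:
                out[i, j] = (window_d * window_m).sum() / valid
                new_mask[i, j] = 1.0
    return out, new_mask

# Toy 4x4 sparse depth map with two measurements.
depth = np.zeros((4, 4))
mask = np.zeros((4, 4))
depth[1, 1], mask[1, 1] = 5.0, 1
depth[2, 3], mask[2, 3] = 7.0, 1
dense, valid = sparse_conv(depth, mask)
print(valid.sum())  # densified: more pixels now carry a depth estimate
```

Each application of the filter grows the valid region by one pixel in every direction, which is the "densifying" half of the sparsify/densify loop described in the abstracts.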
  • Patent number: 11328517
    Abstract: A system and method generate feature space data that may be used for object detection. The system includes one or more processors and a memory. The memory may include one or more modules having instructions that, when executed by the one or more processors, cause the one or more processors to obtain a two-dimensional image of a scene, generate an output depth map based on the two-dimensional image of the scene, generate a pseudo-LIDAR point cloud based on the output depth map, generate a bird's eye view (BEV) feature space based on the pseudo-LIDAR point cloud, and modify the BEV feature space to generate an improved BEV feature space using a feature space neural network that was trained by using a training LIDAR feature space as a ground truth based on a LIDAR point cloud.
    Type: Grant
    Filed: May 20, 2020
    Date of Patent: May 10, 2022
    Assignee: Toyota Research Institute, Inc.
    Inventors: Victor Vaquero Gomez, Rares A. Ambrus, Vitor Guizilini, Adrien David Gaidon
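The depth map to pseudo-LIDAR to BEV pipeline in the abstract above can be sketched with a pinhole back-projection followed by an occupancy-count grid. This is illustrative only: the intrinsics, grid size, and simple count-based binning are assumptions, not the patented feature space construction.

```python
import numpy as np

def depth_to_bev(depth, fx, fy, cx, cy, grid=(10, 10), cell=1.0):
    """Unproject a depth map to a pseudo-LIDAR point cloud, then bin
    the points into a bird's-eye-view (BEV) occupancy grid.

    depth: HxW metric depth map (zeros are treated as invalid).
    fx, fy, cx, cy: pinhole camera intrinsics.
    grid: (cells_x, cells_z) BEV extent; cell: metres per cell.
    """
    H, W = depth.shape
    v, u = np.mgrid[0:H, 0:W]
    z = depth
    x = (u - cx) * z / fx          # right
    y = (v - cy) * z / fy          # down
    pts = np.stack([x, y, z], -1).reshape(-1, 3)
    pts = pts[pts[:, 2] > 0]       # drop invalid depths

    bev = np.zeros(grid)
    ix = np.floor(pts[:, 0] / cell + grid[0] / 2).astype(int)  # lateral
    iz = np.floor(pts[:, 2] / cell).astype(int)                # forward
    keep = (ix >= 0) & (ix < grid[0]) & (iz >= 0) & (iz < grid[1])
    np.add.at(bev, (ix[keep], iz[keep]), 1)                    # occupancy counts
    return pts, bev

depth = np.full((4, 4), 5.0)       # toy scene: flat wall 5 m ahead
pts, bev = depth_to_bev(depth, fx=2.0, fy=2.0, cx=2.0, cy=2.0)
print(bev.sum())                   # all 16 back-projected points land in the grid
```

In the patent, the resulting BEV feature space is then refined by a neural network trained against a LiDAR-derived feature space; the grid here only shows the geometric plumbing up to that point.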
  • Patent number: 11321859
    Abstract: A method for scene reconstruction includes generating a depth estimate and a first pose estimate from a current image. The method also includes generating a second pose estimate based on the current image and one or more previous images in a sequence of images. The method further includes generating a warped image by warping each pixel in the current image based on the depth estimate, the first pose estimate, and the second pose estimate. The method still further includes controlling an action of an agent based on the warped image.
    Type: Grant
    Filed: June 22, 2020
    Date of Patent: May 3, 2022
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Vitor Guizilini, Adrien David Gaidon
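The core of the depth-and-pose warping described above is reprojecting each pixel through the estimated depth and a rigid pose. A minimal single-pixel sketch, assuming a pinhole camera model; the intrinsics and pose values are invented for illustration and are not taken from the patent.

```python
import numpy as np

def reproject(u, v, d, K, T):
    """Lift pixel (u, v) with depth d to 3D, apply a 4x4 camera pose T,
    and project back to pixel coordinates in the other view."""
    K_inv = np.linalg.inv(K)
    p_cam = K_inv @ np.array([u, v, 1.0]) * d      # 3D point in source camera
    p_h = T @ np.append(p_cam, 1.0)                # rigid transform (pose)
    p_img = K @ p_h[:3]
    return p_img[:2] / p_img[2]                    # perspective divide

K = np.array([[100.0, 0, 64], [0, 100.0, 48], [0, 0, 1]])
T = np.eye(4)
T[0, 3] = 0.5                                      # pure 0.5 m lateral shift
u2, v2 = reproject(64, 48, 5.0, K, T)
print(u2, v2)   # principal-point pixel shifts laterally by fx * tx / d pixels
```

Applying this to every pixel, and sampling the source image at the resulting coordinates, yields the warped image the abstract refers to; the photometric difference between warped and actual images is what typically drives self-supervised training of the depth and pose estimates.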
  • Patent number: 11321863
    Abstract: Systems, methods, and other embodiments described herein relate to generating depth estimates of an environment depicted in a monocular image. In one embodiment, a method includes identifying semantic features in the monocular image according to a semantic model. The method includes injecting the semantic features into a depth model using pixel-adaptive convolutions. The method includes generating a depth map from the monocular image using the depth model that is guided by the semantic features. The pixel-adaptive convolutions are integrated into a decoder of the depth model. The method includes providing the depth map as the depth estimates for the monocular image.
    Type: Grant
    Filed: February 28, 2020
    Date of Patent: May 3, 2022
    Assignee: Toyota Research Institute, Inc.
    Inventors: Vitor Guizilini, Rares A. Ambrus, Jie Li, Adrien David Gaidon
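Pixel-adaptive convolutions modulate a fixed kernel by an affinity between guiding features at each position. The 1D sketch below is a simplified, normalized variant (essentially a joint bilateral filter, not the patent's exact formulation) that shows how semantic guidance can keep depth edges sharp where a plain filter would blur them.

```python
import numpy as np

def guided_conv1d(x, guide, w, sigma=1.0):
    """1D feature-guided convolution: the fixed kernel w is reweighted at
    each position by a Gaussian affinity between guiding features, then
    renormalized, so edges in the guide are preserved in the output."""
    k = len(w)
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    gp = np.pad(guide, pad, mode="edge")
    out = np.zeros(len(x))
    for i in range(len(x)):
        g0 = gp[i + pad]
        aff = np.exp(-0.5 * ((gp[i:i + k] - g0) / sigma) ** 2)
        out[i] = np.sum(aff * w * xp[i:i + k]) / np.sum(aff * w)
    return out

x = np.array([1.0, 1.0, 1.0, 5.0, 5.0, 5.0])        # depth-like signal
guide = np.array([0.0, 0.0, 0.0, 10.0, 10.0, 10.0])  # semantic features
w = np.array([1 / 3, 1 / 3, 1 / 3])                  # plain box filter
out = guided_conv1d(x, guide, w)
print(out)   # the step at index 3 stays sharp instead of being blurred
```

An unguided box filter would smear the step across three pixels; because the affinity term vanishes across the semantic boundary, the guided version leaves it intact, which is the effect the abstract attributes to injecting semantic features into the depth decoder.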
  • Patent number: 11315269
    Abstract: A system for generating point clouds having surface normal information includes one or more processors and a memory having a depth map generating module, a point cloud generating module, and a surface normal generating module. The depth map generating module causes the one or more processors to generate a depth map from one or more images of a scene. The point cloud generating module causes the one or more processors to generate a point cloud from the depth map having a plurality of points corresponding to one or more pixels of the depth map. The surface normal generating module causes the one or more processors to generate surface normal information for at least a portion of the one or more pixels of the depth map and inject the surface normal information into the point cloud such that the plurality of points of the point cloud include three-dimensional location information and surface normal information.
    Type: Grant
    Filed: August 24, 2020
    Date of Patent: April 26, 2022
    Assignee: Toyota Research Institute, Inc.
    Inventors: Victor Vaquero Gomez, Rares A. Ambrus, Vitor Guizilini, Adrien David Gaidon
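A common way to obtain per-pixel surface normals from a depth-derived point map, as in the abstract above, is to cross the horizontal and vertical tangent vectors at each pixel. The sketch below assumes that construction; it is an illustration, not the patent's implementation.

```python
import numpy as np

def normals_from_points(points):
    """Estimate per-pixel surface normals from an HxWx3 back-projected
    point map by crossing the horizontal and vertical tangent vectors."""
    dx = points[1:-1, 2:] - points[1:-1, :-2]   # horizontal tangent
    dy = points[2:, 1:-1] - points[:-2, 1:-1]   # vertical tangent
    n = np.cross(dx, dy)
    n /= np.linalg.norm(n, axis=-1, keepdims=True) + 1e-12
    return n    # (H-2, W-2, 3) unit normals

# Toy point map: a flat ground plane y = 0, sampled on a regular grid.
xs, zs = np.meshgrid(np.arange(5.0), np.arange(5.0))
points = np.stack([xs, np.zeros_like(xs), zs], -1)
n = normals_from_points(points)
print(n[0, 0])   # every normal is perpendicular to the plane
```

Attaching these unit vectors to the corresponding 3D points gives a point cloud whose entries carry both location and surface normal information, as the abstract describes.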
  • Publication number: 20220108463
    Abstract: A method for using an artificial neural network associated with an agent to estimate depth, includes receiving, at the artificial neural network, an input image captured via a sensor associated with the agent. The method also includes upsampling, at each decoding layer of a plurality of decoding layers of the artificial neural network, decoded features associated with the input image to a resolution associated with a final output of the artificial neural network. The method further includes concatenating, at each decoding layer, the upsampled decoded features with features obtained at a convolution layer associated with a respective decoding layer. The method still further includes estimating, at a recurrent module of the artificial neural network, a depth of the input image based on receiving the concatenated upsampled decoded features from each decoding layer. The method also includes controlling an action of an agent based on the depth estimate.
    Type: Application
    Filed: December 17, 2021
    Publication date: April 7, 2022
    Applicant: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Vitor GUIZILINI, Adrien David GAIDON
  • Publication number: 20220080585
    Abstract: Systems and methods described herein relate to controlling a robot. One embodiment receives an initial state of the robot, an initial nominal control trajectory of the robot, and a Kullback-Leibler (KL) divergence bound between a modeled probability distribution for a stochastic disturbance and an unknown actual probability distribution for the stochastic disturbance; solves a bilevel optimization problem subject to the modeled probability distribution and the KL divergence bound using an iterative Linear-Exponential-Quadratic-Gaussian (iLEQG) algorithm and a cross-entropy process, the iLEQG algorithm outputting an updated nominal control trajectory, the cross-entropy process outputting a risk-sensitivity parameter; and controls operation of the robot based, at least in part, on the updated nominal control trajectory and the risk-sensitivity parameter.
    Type: Application
    Filed: February 12, 2021
    Publication date: March 17, 2022
    Inventors: Haruki Nishimura, Negar Zahedi Mehr, Adrien David Gaidon, Mac Schwager
  • Publication number: 20220066460
    Abstract: A mobile robot can be caused to move according to a planned trajectory. The mobile robot can be a vehicle. Information about agents in an environment of the mobile robot can be received from sensors. At a first time, a spatiotemporal graph can be produced. The spatiotemporal graph can represent relationships among the agents in the environment. The mobile robot can be one of the agents in the environment. Information from the spatiotemporal graph can be input to neural networks to produce information for a mixture of affine time-varying systems. The mixture of affine time-varying systems can represent an evolution of agent states of the agents. Using the mixture of affine time-varying systems and information associated with the first time, a prediction of the agent states at a second time can be calculated. The mobile robot can be caused to move according to the planned trajectory determined from the prediction.
    Type: Application
    Filed: April 12, 2021
    Publication date: March 3, 2022
    Inventors: Boris Ivanovic, Amine Elhafsi, Guy Rosman, Adrien David Gaidon, Marco Pavone
  • Publication number: 20220058817
    Abstract: A system for generating point clouds having surface normal information includes one or more processors and a memory having a depth map generating module, a point cloud generating module, and a surface normal generating module. The depth map generating module causes the one or more processors to generate a depth map from one or more images of a scene. The point cloud generating module causes the one or more processors to generate a point cloud from the depth map having a plurality of points corresponding to one or more pixels of the depth map. The surface normal generating module causes the one or more processors to generate surface normal information for at least a portion of the one or more pixels of the depth map and inject the surface normal information into the point cloud such that the plurality of points of the point cloud include three-dimensional location information and surface normal information.
    Type: Application
    Filed: August 24, 2020
    Publication date: February 24, 2022
    Inventors: Victor Vaquero Gomez, Rares A. Ambrus, Vitor Guizilini, Adrien David Gaidon
  • Publication number: 20220055663
    Abstract: A method for behavior cloned vehicle trajectory planning is described. The method includes perceiving vehicles proximate an ego vehicle in a driving environment, including a scalar confidence value for each perceived vehicle. The method also includes generating a bird's-eye-view (BEV) grid showing the ego vehicle and each perceived vehicle based on the scalar confidence value of each perceived vehicle. The method further includes ignoring at least one of the perceived vehicles when the scalar confidence value of the at least one of the perceived vehicles is less than a predetermined value. The method also includes selecting an ego vehicle trajectory based on a cloned expert vehicle behavior policy according to the remaining perceived vehicles.
    Type: Application
    Filed: August 21, 2020
    Publication date: February 24, 2022
    Applicant: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Andreas BUEHLER, Adrien David GAIDON, Rares A. AMBRUS, Guy ROSMAN, Wolfram BURGARD
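The confidence-based filtering step above amounts to thresholding perceived vehicles before rasterizing them onto the BEV grid. A toy sketch with hypothetical detections and threshold, not values from the patent:

```python
# Hypothetical perception output: (x, y, confidence) per perceived vehicle.
detections = [
    (12.0, -1.5, 0.95),
    (30.0,  2.0, 0.40),   # low-confidence ghost detection
    (8.0,   0.5, 0.88),
]

CONF_THRESHOLD = 0.5      # the abstract's "predetermined value" (assumed)

def filter_detections(dets, threshold=CONF_THRESHOLD):
    """Drop perceived vehicles whose scalar confidence is below threshold,
    so only the remaining vehicles are rasterized onto the BEV grid."""
    return [d for d in dets if d[2] >= threshold]

remaining = filter_detections(detections)
print(len(remaining))   # 2: the 0.40-confidence detection is ignored
```

The cloned expert policy then selects a trajectory conditioned only on the remaining, trusted detections.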
  • Patent number: 11257231
    Abstract: A method for monocular depth/pose estimation in a camera agnostic network is described. The method includes training a monocular depth model and a monocular pose model to learn monocular depth estimation and monocular pose estimation based on a target image and context images from monocular video captured by the camera agnostic network. The method also includes lifting 3D points from image pixels of the target image according to the context images. The method further includes projecting the lifted 3D points onto an image plane according to a predicted ray vector based on the monocular depth model, the monocular pose model, and a camera center of the camera agnostic network. The method also includes predicting a warped target image from a predicted depth map of the monocular depth model, a ray surface of the predicted ray vector, and a projection of the lifted 3D points according to the camera agnostic network.
    Type: Grant
    Filed: June 17, 2020
    Date of Patent: February 22, 2022
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Vitor Guizilini, Sudeep Pillai, Adrien David Gaidon, Rares A. Ambrus, Igor Vasiljevic
  • Publication number: 20220032960
    Abstract: A method for risk-aware game-theoretic trajectory planning is described. The method includes modeling an ego vehicle and at least one other vehicle as risk-aware agents in a game-theoretic driving environment. The method also includes ranking upcoming planned trajectories according to a risk-aware cost function of the ego vehicle and a risk-sensitivity of the other vehicle associated with each of the upcoming planned trajectories. The method further includes selecting a vehicle trajectory according to the ranking of the upcoming planned trajectories based on the risk-aware cost function and the risk-sensitivity of the other vehicle associated with each of the upcoming planned trajectories to reach a target destination according to a mission plan.
    Type: Application
    Filed: July 29, 2020
    Publication date: February 3, 2022
    Applicants: TOYOTA RESEARCH INSTITUTE, INC., THE BOARD OF TRUSTEES OF THE LELAND STANFORD JUNIOR UNIVERSITY
    Inventors: Mingyu WANG, Negar ZAHEDI MEHR, Adrien David GAIDON, Mac SCHWAGER
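One standard way to make a trajectory cost risk-sensitive, in the spirit of the abstract above (and of the iLEQG formulation in publication 20220080585), is the entropic risk measure. The candidate trajectories, sampled costs, and risk-sensitivity value below are invented for illustration; the patent's actual cost function is game-theoretic and jointly models the other vehicle.

```python
import math

def risk_sensitive_cost(costs, theta):
    """Entropic risk measure (1/theta) * log E[exp(theta * J)]: as the
    risk-sensitivity theta grows, high-cost outcomes are penalized more."""
    n = len(costs)
    return math.log(sum(math.exp(theta * c) for c in costs) / n) / theta

# Hypothetical sampled costs for three candidate ego trajectories.
candidates = {
    "keep_lane":  [1.0, 1.1, 0.9],   # low mean, low spread
    "overtake":   [0.2, 0.3, 4.0],   # low mean, rare bad outcome
    "hard_brake": [2.0, 2.1, 1.9],
}

theta = 2.0   # risk-averse setting
ranking = sorted(candidates, key=lambda k: risk_sensitive_cost(candidates[k], theta))
print(ranking[0])   # the risk-aware ranking prefers the low-variance plan
```

Under a risk-neutral mean, "overtake" would win; the entropic cost demotes it because of its rare high-cost outcome, which is the qualitative behavior risk-aware ranking is meant to produce.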
  • Publication number: 20220036126
    Abstract: A detector system having a detector model includes one or more processors and a memory. The memory includes an image acquisition module, a training module, and a label propagating module. The modules cause the one or more processors to obtain a first training set, train the detector model using the first training set and a first loss function, propagate labels to a second training set using the detector model after the detector model is trained with the first training set, and train the detector model using the first training set, the second training set, the first loss function, and a discriminative loss function. The detector model is trained through an intermediate multidimensional feature predicted at each pixel location of the one or more objects of the first training set and the second training set. The intermediate multidimensional feature is an instance identifier expressing the temporal consistency of objects along the temporal axis.
    Type: Application
    Filed: July 30, 2020
    Publication date: February 3, 2022
    Inventors: Adrien David Gaidon, Jie Li
  • Patent number: 11238601
    Abstract: A method for estimating depth is presented. The method includes generating, at each decoding layer of a neural network, decoded features of an input image. The method also includes upsampling, at each decoding layer, the decoded features to a resolution of a final output of the neural network. The method still further includes concatenating, at each decoding layer, the upsampled decoded features with features generated at a convolution layer of the neural network. The method additionally includes sequentially receiving the concatenated upsampled decoded features at a long-short term memory (LSTM) module of the neural network from each decoding layer. The method still further includes generating, at the LSTM module, a depth estimate of the input image after receiving the concatenated upsampled decoded features from a final layer of a decoder of the neural network. The method also includes controlling an action of an agent based on the depth estimate.
    Type: Grant
    Filed: June 11, 2020
    Date of Patent: February 1, 2022
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Vitor Guizilini, Adrien David Gaidon
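The decoder-side upsample-and-concatenate step described in this abstract (and in the related publication 20220108463 above) can be sketched as follows. The nearest-neighbour upsampling and the three-level feature pyramid are assumptions made for illustration; the patent does not prescribe either.

```python
import numpy as np

def upsample_nearest(feat, out_hw):
    """Nearest-neighbour upsample a CxHxW feature map to out_hw."""
    C, H, W = feat.shape
    oh, ow = out_hw
    rows = np.arange(oh) * H // oh
    cols = np.arange(ow) * W // ow
    return feat[:, rows][:, :, cols]

# Hypothetical decoder pyramid: features at 1/4, 1/2 and full resolution.
out_hw = (8, 8)
pyramid = [np.ones((4, 2, 2)), np.ones((3, 4, 4)), np.ones((2, 8, 8))]

# Upsample every level to the final output resolution and concatenate
# along the channel axis, ready to be fed to the recurrent (LSTM) module.
stacked = np.concatenate([upsample_nearest(f, out_hw) for f in pyramid], axis=0)
print(stacked.shape)   # (9, 8, 8): 4 + 3 + 2 channels at full resolution
```

Bringing every decoding layer to the output resolution first is what lets the recurrent module consume the layers sequentially while always working on spatially aligned features.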
  • Publication number: 20220005217
    Abstract: A method for estimating depth of a scene includes selecting an image of the scene from a sequence of images of the scene captured via an in-vehicle sensor of a first agent. The method also includes identifying previously captured images of the scene. The method further includes selecting a set of images from the previously captured images based on each image of the set of images satisfying depth criteria. The method still further includes estimating the depth of the scene based on the selected image and the selected set of images.
    Type: Application
    Filed: July 6, 2021
    Publication date: January 6, 2022
    Applicant: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Jiexiong TANG, Rares Andrei AMBRUS, Sudeep PILLAI, Vitor GUIZILINI, Adrien David GAIDON
  • Publication number: 20210407117
    Abstract: Systems and methods for extracting ground plane information directly from monocular images using self-supervised depth networks are disclosed. Self-supervised depth networks are used to generate a three-dimensional reconstruction of observed structures. From this reconstruction the system may generate surface normals. The surface normals can be calculated directly from depth maps in a way that is much less computationally expensive than, and more accurate than, surface normal extraction from standard LiDAR data. Surface normals facing substantially the same direction and facing upwards may be determined to reflect a ground plane.
    Type: Application
    Filed: June 26, 2020
    Publication date: December 30, 2021
    Applicant: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Vitor GUIZILINI, Rares A. AMBRUS, Adrien David GAIDON
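Classifying upward-facing surface normals as ground candidates, as the abstract describes, reduces to thresholding each normal's dot product with an up vector. The normals, camera-frame convention, and threshold below are illustrative assumptions, not values from the patent.

```python
import numpy as np

# Hypothetical per-pixel unit normals (camera frame: y points down,
# so an upward-facing ground normal is close to (0, -1, 0)).
normals = np.array([
    [0.02, -0.99, 0.05],   # road surface
    [0.01, -0.98, 0.10],   # road surface
    [0.99,  0.05, 0.02],   # building wall
    [0.00,  0.00, 1.00],   # surface facing the camera
])

UP = np.array([0.0, -1.0, 0.0])
COS_THRESHOLD = 0.9        # keep normals within roughly 25 degrees of vertical

def ground_mask(normals, up=UP, thresh=COS_THRESHOLD):
    """Mark pixels whose surface normal points substantially upward,
    i.e. candidates for the ground plane."""
    return normals @ up > thresh

mask = ground_mask(normals)
print(mask)   # only the two road-surface normals are kept
```

Pixels passing this test, and agreeing with each other in direction, form the ground-plane hypothesis the abstract refers to.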
  • Publication number: 20210407115
    Abstract: Systems and methods for generating depth models and depth maps from images obtained from an imaging system are presented. A self-supervised neural network may be capable of regularizing depth information from surface normals. Rather than relying on separate depth and surface normal networks, surface normal information is extracted from the depth information, and a smoothness function is applied to the surface normals instead of a depth gradient. Smoothing the surface normals may provide an improved representation of environmental structures, smoothing texture-less areas while preserving sharp boundaries between structures.
    Type: Application
    Filed: June 26, 2020
    Publication date: December 30, 2021
    Applicant: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Vitor GUIZILINI, Adrien David GAIDON, Rares A. AMBRUS
  • Patent number: 11210802
    Abstract: Systems, methods, and other embodiments described herein relate to self-supervised training for monocular depth estimation. In one embodiment, a method includes filtering disfavored images from first training data to produce second training data that is a subsampled version of the first training data. The disfavored images correspond with anomaly maps within a set of depth maps. The first depth model is trained according to the first training data and generates the depth maps from the first training data after initially being trained with the first training data. The method includes training a second depth model according to a self-supervised training process using the second training data. The method includes providing the second depth model to infer distances from monocular images.
    Type: Grant
    Filed: March 24, 2020
    Date of Patent: December 28, 2021
    Assignee: Toyota Research Institute, Inc.
    Inventors: Vitor Guizilini, Rares A. Ambrus, Rui Hou, Jie Li, Adrien David Gaidon
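The filtering step above, dropping disfavored images whose depth maps contain anomalies, can be sketched as a simple threshold on a per-image anomaly score. The records, scores, and threshold are hypothetical; how the patent actually detects anomaly maps is not specified in the abstract.

```python
# Hypothetical training records: (image_id, anomaly_score) where the
# anomaly score summarizes artifacts found in the image's depth map.
first_training_data = [
    ("img_000", 0.02),
    ("img_001", 0.75),   # depth map flagged as anomalous
    ("img_002", 0.10),
    ("img_003", 0.90),   # depth map flagged as anomalous
]

ANOMALY_THRESHOLD = 0.5

def subsample(records, threshold=ANOMALY_THRESHOLD):
    """Filter out disfavored images whose depth maps look anomalous,
    producing the second (subsampled) training set."""
    return [r for r in records if r[1] < threshold]

second_training_data = subsample(first_training_data)
print([r[0] for r in second_training_data])   # ['img_000', 'img_002']
```

The second depth model is then trained self-supervised on this cleaned subset, which is the two-stage scheme the abstract describes.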
  • Publication number: 20210398302
    Abstract: A method for scene reconstruction includes generating a depth estimate and a first pose estimate from a current image. The method also includes generating a second pose estimate based on the current image and one or more previous images in a sequence of images. The method further includes generating a warped image by warping each pixel in the current image based on the depth estimate, the first pose estimate, and the second pose estimate. The method still further includes controlling an action of an agent based on the warped image.
    Type: Application
    Filed: June 22, 2020
    Publication date: December 23, 2021
    Applicant: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Vitor GUIZILINI, Adrien David GAIDON