Patents by Inventor Jiexiong Tang

Jiexiong Tang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220005217
    Abstract: A method for estimating depth of a scene includes selecting an image of the scene from a sequence of images of the scene captured via an in-vehicle sensor of a first agent. The method also includes identifying previously captured images of the scene. The method further includes selecting a set of images from the previously captured images based on each image of the set of images satisfying depth criteria. The method still further includes estimating the depth of the scene based on the selected image and the selected set of images.
    Type: Application
    Filed: July 6, 2021
    Publication date: January 6, 2022
    Applicant: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Jiexiong TANG, Rares Andrei AMBRUS, Sudeep PILLAI, Vitor GUIZILINI, Adrien David GAIDON
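The abstract above leaves the "depth criteria" unspecified. A minimal sketch of one plausible criterion — keeping only previously captured frames whose camera baseline to the current frame falls in a usable range — is shown below; the function name, the baseline thresholds, and the pose representation (translation vectors) are illustrative assumptions, not the claimed method.

```python
import numpy as np

def select_support_images(current_pose, candidate_poses,
                          min_baseline=0.5, max_baseline=5.0, k=3):
    """Pick up to k previously captured frames whose translation baseline to
    the current frame lies in a usable range -- one plausible 'depth
    criterion'. Poses are given here as 3-vector camera positions."""
    baselines = [np.linalg.norm(p[:3] - current_pose[:3]) for p in candidate_poses]
    # Keep frames that are neither too close (no parallax) nor too far.
    eligible = [i for i, b in enumerate(baselines)
                if min_baseline <= b <= max_baseline]
    # Prefer the widest baselines among the eligible frames.
    eligible.sort(key=lambda i: -baselines[i])
    return eligible[:k]
```

Frames failing the baseline test contribute little parallax (or none at all) and would degrade a multi-view depth estimate, which is why a selection step of this kind is useful.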
  • Publication number: 20210326601
    Abstract: A method for keypoint matching includes determining a first set of keypoints corresponding to a current environment of the agent. The method further includes determining a second set of keypoints from a pre-built map of the current environment. The method still further includes identifying matching pairs of keypoints from the first set of keypoints and the second set of keypoints based on geometrical similarities between respective keypoints of the first set of keypoints and the second set of keypoints. The method also includes determining a current location of the agent based on the identified matching pairs of keypoints. The method further includes controlling an action of the agent based on the current location.
    Type: Application
    Filed: April 15, 2021
    Publication date: October 21, 2021
    Applicant: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Jiexiong TANG, Rares Andrei AMBRUS, Jie LI, Vitor GUIZILINI, Sudeep PILLAI, Adrien David GAIDON
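A sketch of the matching step described above, using mutual-nearest-neighbour descriptor matching with Lowe's ratio test — a standard stand-in; the application's geometric-similarity criterion may differ:

```python
import numpy as np

def match_keypoints(desc_a, desc_b, ratio=0.8):
    """Match two descriptor sets by mutual nearest neighbour plus a ratio
    test. Returns (index_in_a, index_in_b) pairs."""
    # Pairwise Euclidean distances between all descriptors.
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=-1)
    matches = []
    for i in range(len(desc_a)):
        order = np.argsort(dists[i])
        j = order[0]
        # Ratio test: best match must clearly beat the runner-up.
        if len(order) > 1 and dists[i, j] > ratio * dists[i, order[1]]:
            continue
        # Mutual check: i must also be j's nearest neighbour in desc_a.
        if np.argmin(dists[:, j]) == i:
            matches.append((i, j))
    return matches
```

Given matched pairs against keypoints from a pre-built map, the agent's pose can then be solved with a standard PnP or alignment step.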
  • Publication number: 20210319577
    Abstract: A method for depth estimation performed by a depth estimation system of an autonomous agent includes determining a first pose of a sensor based on a first image captured by the sensor and a second image captured by the sensor. The method also includes determining a first depth of the first image and a second depth of the second image. The method further includes generating a warped depth image based on at least the first depth and the first pose. The method still further includes determining a second pose based on the warped depth image and the second depth. The method also includes updating the first pose based on the second pose and updating the warped depth image based on the updated first pose.
    Type: Application
    Filed: April 14, 2021
    Publication date: October 14, 2021
    Applicant: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Jiexiong TANG, Rares Andrei AMBRUS, Vitor GUIZILINI, Adrien David GAIDON
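The core "warped depth image" operation above can be sketched as classic inverse warping: unproject every pixel with the intrinsics, move it by the relative pose, and reproject. This minimal version (no occlusion handling or resampling, hypothetical function name) only illustrates the geometry:

```python
import numpy as np

def warp_depth(depth, K, T):
    """Warp a depth map into another camera frame.
    depth: HxW depth map; K: 3x3 intrinsics; T: 4x4 relative pose.
    Returns warped pixel coordinates (HxWx2) and transformed depths (HxW)."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T  # 3 x N
    # Back-project each pixel to a 3D point scaled by its depth.
    pts = np.linalg.inv(K) @ pix * depth.reshape(-1)
    pts_h = np.vstack([pts, np.ones(pts.shape[1])])
    moved = (T @ pts_h)[:3]
    # Reproject into the second camera.
    proj = K @ moved
    uv = proj[:2] / proj[2]
    return uv.T.reshape(h, w, 2), moved[2].reshape(h, w)
```

Comparing the warped depths against the independently predicted second depth yields the residual that drives the pose update loop the abstract describes.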
  • Publication number: 20210318140
    Abstract: A method for localization performed by an agent includes receiving a query image of a current environment of the agent captured by a sensor integrated with the agent. The method also includes receiving a target image comprising a first set of keypoints matching a second set of keypoints of the query image. The first set of keypoints may be generated based on a task specified for the agent. The method still further includes determining a current location based on the target image.
    Type: Application
    Filed: April 14, 2021
    Publication date: October 14, 2021
    Applicant: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Jiexiong TANG, Rares Andrei AMBRUS, Hanme KIM, Vitor GUIZILINI, Adrien David GAIDON, Xipeng WANG, Jeff WALLS, SR., Sudeep PILLAI
  • Publication number: 20210319236
    Abstract: A method for keypoint matching includes receiving an input image obtained by a sensor of an agent. The method also includes identifying a set of keypoints of the received image. The method further includes augmenting a descriptor of each of the keypoints with semantic information of the input image. The method also includes identifying a target image based on one or more semantically augmented descriptors of the target image matching one or more semantically augmented descriptors of the input image. The method further includes controlling an action of the agent in response to identifying the target image.
    Type: Application
    Filed: April 14, 2021
    Publication date: October 14, 2021
    Applicant: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Jiexiong TANG, Rares Andrei AMBRUS, Vitor GUIZILINI, Adrien David GAIDON
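One simple way to "semantically augment" a descriptor, as sketched below, is to look up each keypoint's class in a per-pixel semantic segmentation map and append it as a one-hot vector; the application's actual encoding may differ, and the function name is an assumption:

```python
import numpy as np

def augment_descriptors(descriptors, keypoints, semantic_map, num_classes):
    """Append a one-hot semantic label to every keypoint descriptor.
    descriptors: N x D array; keypoints: list of (x, y) pixel coordinates;
    semantic_map: H x W integer class map."""
    labels = np.array([semantic_map[int(y), int(x)] for x, y in keypoints])
    one_hot = np.eye(num_classes)[labels]               # N x num_classes
    return np.concatenate([descriptors, one_hot], axis=1)
```

Matching on the augmented vectors then implicitly rejects pairs whose appearance is similar but whose semantics (e.g. sign vs. building) disagree.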
  • Publication number: 20210237764
    Abstract: A method for learning depth-aware keypoints and associated descriptors from monocular video for ego-motion estimation is described. The method includes training a keypoint network and a depth network to learn depth-aware keypoints and the associated descriptors. The training is based on a target image and a context image from successive images of the monocular video. The method also includes lifting 2D keypoints from the target image to learn 3D keypoints based on a learned depth map from the depth network. The method further includes estimating ego-motion from the target image to the context image based on the learned 3D keypoints.
    Type: Application
    Filed: November 9, 2020
    Publication date: August 5, 2021
    Applicant: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Jiexiong TANG, Rares A. AMBRUS, Vitor GUIZILINI, Sudeep PILLAI, Hanme KIM, Adrien David GAIDON
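The lifting and ego-motion steps above (shared with the sibling application 20210237774 below) can be sketched with classical geometry: unproject 2D keypoints using the predicted depth, then align the two 3D point sets. The Kabsch alignment here is a classical stand-in for the learned pose step, and both function names are assumptions:

```python
import numpy as np

def lift_keypoints(kps, depth_map, K):
    """Lift 2D keypoints (x, y) to 3D using a depth map and intrinsics K."""
    pts = []
    for x, y in kps:
        d = depth_map[int(y), int(x)]
        pts.append(d * (np.linalg.inv(K) @ np.array([x, y, 1.0])))
    return np.array(pts)

def estimate_ego_motion(pts_a, pts_b):
    """Rigid transform (R, t) with pts_b ~= R @ pts_a + t, via the Kabsch
    algorithm on centered point sets."""
    ca, cb = pts_a.mean(0), pts_b.mean(0)
    H = (pts_a - ca).T @ (pts_b - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # fix an improper (reflective) solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca
```

Chaining the per-frame motions over the video gives the odometry/trajectory estimate the two related applications describe.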
  • Publication number: 20210237774
    Abstract: A method for learning depth-aware keypoints and associated descriptors from monocular video for monocular visual odometry is described. The method includes training a keypoint network and a depth network to learn depth-aware keypoints and the associated descriptors. The training is based on a target image and a context image from successive images of the monocular video. The method also includes lifting 2D keypoints from the target image to learn 3D keypoints based on a learned depth map from the depth network. The method further includes estimating a trajectory of an ego-vehicle based on the learned 3D keypoints.
    Type: Application
    Filed: November 9, 2020
    Publication date: August 5, 2021
    Applicant: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Jiexiong TANG, Rares A. AMBRUS, Vitor GUIZILINI, Sudeep PILLAI, Hanme KIM, Adrien David GAIDON
  • Publication number: 20210089836
    Abstract: Systems and methods for training a neural keypoint detection network are disclosed herein. One embodiment extracts a portion of an input image; applies a transformation to the portion of the input image to produce a transformed portion of the input image; processes the portion of the input image and the transformed portion of the input image using the neural keypoint detection network to identify one or more candidate keypoint pairs between the portion of the input image and the transformed portion of the input image; and processes the one or more candidate keypoint pairs using an inlier-outlier neural network, the inlier-outlier neural network producing an indirect supervisory signal to train the neural keypoint detection network to identify one or more candidate keypoint pairs between the portion of the input image and the transformed portion of the input image.
    Type: Application
    Filed: March 31, 2020
    Publication date: March 25, 2021
    Inventors: Jiexiong Tang, Rares A. Ambrus, Vitor Guizilini, Sudeep Pillai, Hanme Kim
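The self-supervision in the abstract above comes from applying a known transformation to an extracted portion of the image, so ground-truth correspondences exist without labels. In this minimal sketch a random integer translation stands in for the transformation (the application does not specify one), and the function name is hypothetical:

```python
import numpy as np

def training_pair(image, patch=32, max_shift=4, rng=None):
    """Crop a patch and a known-shifted copy of it from one image.
    Returns (src, dst, (dx, dy)): src pixel (v, u) appears at
    dst[v - dy, u - dx], so correspondences are known exactly."""
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = image.shape[:2]
    y0 = int(rng.integers(0, h - patch - max_shift))
    x0 = int(rng.integers(0, w - patch - max_shift))
    dy = int(rng.integers(0, max_shift + 1))
    dx = int(rng.integers(0, max_shift + 1))
    src = image[y0:y0 + patch, x0:x0 + patch]
    dst = image[y0 + dy:y0 + dy + patch, x0 + dx:x0 + dx + patch]
    return src, dst, (dx, dy)
```

Candidate keypoint pairs detected between `src` and `dst` can then be scored by an inlier/outlier classifier against the known shift, providing the indirect supervisory signal the abstract mentions.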
  • Publication number: 20210089890
    Abstract: Systems and methods for detecting and matching keypoints between different views of a scene are disclosed herein. One embodiment acquires first and second images; subdivides the first and second images into first and second pluralities of cells, respectively; processes both pluralities of cells using a neural keypoint detection network to identify a first keypoint for a particular cell in the first plurality of cells and a second keypoint for a particular cell in the second plurality of cells, at least one of the first and second keypoints lying in a cell other than the particular cell in the first or second plurality of cells for which it was identified; and classifies the first keypoint and the second keypoint as a matching keypoint pair based, at least in part, on a comparison between a first descriptor associated with the first keypoint and a second descriptor associated with the second keypoint.
    Type: Application
    Filed: March 31, 2020
    Publication date: March 25, 2021
    Inventors: Jiexiong Tang, Rares A. Ambrus, Vitor Guizilini, Sudeep Pillai, Hanme Kim
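The cell subdivision described above is a common grid-based detection scheme: pick one keypoint per cell from a dense score map. This sketch takes the per-cell maximum (the networks in the application regress offsets that may place a keypoint outside its own cell, which this simplification omits):

```python
import numpy as np

def cell_keypoints(score_map, cell=8):
    """Return one (x, y) keypoint per cell: the highest-scoring pixel inside
    each cell x cell block of a dense detection score map."""
    h, w = score_map.shape
    kps = []
    for cy in range(0, h, cell):
        for cx in range(0, w, cell):
            block = score_map[cy:cy + cell, cx:cx + cell]
            dy, dx = np.unravel_index(np.argmax(block), block.shape)
            kps.append((cx + dx, cy + dy))
    return kps
```

Taking exactly one keypoint per cell spreads detections evenly across the image, which stabilizes the descriptor comparison used to classify matching pairs.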