Patents by Inventor Hanme Kim

Hanme Kim has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11256986
    Abstract: Systems and methods for training a neural keypoint detection network are disclosed herein. One embodiment extracts a portion of an input image; applies a transformation to the portion of the input image to produce a transformed portion of the input image; processes the portion of the input image and the transformed portion of the input image using the neural keypoint detection network to identify one or more candidate keypoint pairs between the portion of the input image and the transformed portion of the input image; and processes the one or more candidate keypoint pairs using an inlier-outlier neural network, the inlier-outlier neural network producing an indirect supervisory signal to train the neural keypoint detection network to identify one or more candidate keypoint pairs between the portion of the input image and the transformed portion of the input image.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: February 22, 2022
    Assignee: Toyota Research Institute, Inc.
    Inventors: Jiexiong Tang, Rares A. Ambrus, Vitor Guizilini, Sudeep Pillai, Hanme Kim
  • Publication number: 20210318140
    Abstract: A method for localization performed by an agent includes receiving a query image of a current environment of the agent captured by a sensor integrated with the agent. The method also includes receiving a target image comprising a first set of keypoints matching a second set of keypoints of the query image. The first set of keypoints may be generated based on a task specified for the agent. The method still further includes determining a current location based on the target image.
    Type: Application
    Filed: April 14, 2021
    Publication date: October 14, 2021
    Applicant: Toyota Research Institute, Inc.
    Inventors: Jiexiong Tang, Rares Andrei Ambrus, Hanme Kim, Vitor Guizilini, Adrien David Gaidon, Xipeng Wang, Jeff Walls, Sr., Sudeep Pillai
  • Publication number: 20210237764
    Abstract: A method for learning depth-aware keypoints and associated descriptors from monocular video for ego-motion estimation is described. The method includes training a keypoint network and a depth network to learn depth-aware keypoints and the associated descriptors. The training is based on a target image and a context image from successive images of the monocular video. The method also includes lifting 2D keypoints from the target image to learn 3D keypoints based on a learned depth map from the depth network. The method further includes estimating ego-motion from the target image to the context image based on the learned 3D keypoints.
    Type: Application
    Filed: November 9, 2020
    Publication date: August 5, 2021
    Applicant: Toyota Research Institute, Inc.
    Inventors: Jiexiong Tang, Rares A. Ambrus, Vitor Guizilini, Sudeep Pillai, Hanme Kim, Adrien David Gaidon
  • Publication number: 20210237774
    Abstract: A method for learning depth-aware keypoints and associated descriptors from monocular video for monocular visual odometry is described. The method includes training a keypoint network and a depth network to learn depth-aware keypoints and the associated descriptors. The training is based on a target image and a context image from successive images of the monocular video. The method also includes lifting 2D keypoints from the target image to learn 3D keypoints based on a learned depth map from the depth network. The method further includes estimating a trajectory of an ego-vehicle based on the learned 3D keypoints.
    Type: Application
    Filed: November 9, 2020
    Publication date: August 5, 2021
    Applicant: Toyota Research Institute, Inc.
    Inventors: Jiexiong Tang, Rares A. Ambrus, Vitor Guizilini, Sudeep Pillai, Hanme Kim, Adrien David Gaidon
  • Publication number: 20210089890
    Abstract: Systems and methods for detecting and matching keypoints between different views of a scene are disclosed herein. One embodiment acquires first and second images; subdivides the first and second images into first and second pluralities of cells, respectively; processes both pluralities of cells using a neural keypoint detection network to identify a first keypoint for a particular cell in the first plurality of cells and a second keypoint for a particular cell in the second plurality of cells, at least one of the first and second keypoints lying in a cell other than the particular cell in the first or second plurality of cells for which it was identified; and classifies the first keypoint and the second keypoint as a matching keypoint pair based, at least in part, on a comparison between a first descriptor associated with the first keypoint and a second descriptor associated with the second keypoint.
    Type: Application
    Filed: March 31, 2020
    Publication date: March 25, 2021
    Inventors: Jiexiong Tang, Rares A. Ambrus, Vitor Guizilini, Sudeep Pillai, Hanme Kim
  • Publication number: 20210089836
    Abstract: Systems and methods for training a neural keypoint detection network are disclosed herein. One embodiment extracts a portion of an input image; applies a transformation to the portion of the input image to produce a transformed portion of the input image; processes the portion of the input image and the transformed portion of the input image using the neural keypoint detection network to identify one or more candidate keypoint pairs between the portion of the input image and the transformed portion of the input image; and processes the one or more candidate keypoint pairs using an inlier-outlier neural network, the inlier-outlier neural network producing an indirect supervisory signal to train the neural keypoint detection network to identify one or more candidate keypoint pairs between the portion of the input image and the transformed portion of the input image.
    Type: Application
    Filed: March 31, 2020
    Publication date: March 25, 2021
    Inventors: Jiexiong Tang, Rares A. Ambrus, Vitor Guizilini, Sudeep Pillai, Hanme Kim
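Patent 11256986 and publication 20210089836 describe the same self-supervised training scheme: warp an image patch, detect candidate keypoint pairs between the original and warped versions, and let an inlier-outlier network weight those pairs to produce an indirect supervisory signal. The NumPy sketch below illustrates only the shape of that signal; the closed-form reprojection weighting (`inlier_weights`) is a hypothetical stand-in for the patent's trained inlier-outlier network, and all sizes and values are illustrative, not from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_homography(strength=0.05):
    """Small random perturbation of the identity, standing in for the
    transformation applied to the extracted image patch."""
    H = np.eye(3) + strength * rng.standard_normal((3, 3))
    H[2, :2] = 0.0   # keep the warp mild and well-conditioned
    H[2, 2] = 1.0
    return H

def warp(points, H):
    """Apply a 3x3 homography to Nx2 points."""
    p = np.hstack([points, np.ones((len(points), 1))])
    q = p @ H.T
    return q[:, :2] / q[:, 2:3]

def inlier_weights(src, dst, H, sigma=2.0):
    """Stand-in for the inlier-outlier network: down-weight candidate
    pairs whose reprojection error under H is large."""
    err = np.linalg.norm(warp(src, H) - dst, axis=1)
    return np.exp(-(err / sigma) ** 2)

# mock candidate keypoint pairs between a patch and its warped copy
src = rng.uniform(0, 64, size=(50, 2))
H = random_homography()
dst = warp(src, H) + rng.normal(0, 0.5, size=src.shape)  # mostly inliers
dst[:10] = rng.uniform(0, 64, size=(10, 2))              # inject outliers

w = inlier_weights(src, dst, H)
# weighted reprojection loss: the "indirect supervisory signal"
loss = np.sum(w * np.linalg.norm(warp(src, H) - dst, axis=1)) / np.sum(w)
print(round(float(loss), 3))
```

In the patent the weights come from a learned network rather than a fixed kernel, but the effect is the same: outlier pairs contribute little gradient, so the detector is trained mostly on geometrically consistent matches.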
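Publications 20210237764 and 20210237774 both hinge on lifting 2D keypoints to 3D using a learned depth map, then estimating ego-motion (or a trajectory) from the 3D points. Below is a minimal sketch of that lifting step under a standard pinhole camera model, with a closed-form Kabsch/Procrustes alignment as a stand-in for the learned ego-motion estimate; the intrinsics and flat depth map are toy values, not from the publications.

```python
import numpy as np

def lift_to_3d(keypoints, depth_map, K):
    """Back-project Nx2 pixel keypoints (u, v) to 3D camera coordinates
    using a per-pixel depth map and camera intrinsics K."""
    u = keypoints[:, 0].astype(int)
    v = keypoints[:, 1].astype(int)
    z = depth_map[v, u]                            # per-keypoint depth
    pix = np.vstack([keypoints.T, np.ones(len(keypoints))])
    rays = np.linalg.inv(K) @ pix                  # normalized rays, 3xN
    return (rays * z).T                            # Nx3 points

def rigid_transform(A, B):
    """Kabsch alignment: least-squares R, t with B ~ A @ R.T + t, a
    stand-in for the ego-motion estimate between two frames."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # reject reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

# toy intrinsics (fx = fy = 100, principal point (32, 32)) and flat depth
K = np.array([[100.0, 0.0, 32.0], [0.0, 100.0, 32.0], [0.0, 0.0, 1.0]])
depth = np.full((64, 64), 5.0)
kps = np.array([[32.0, 32.0], [42.0, 32.0], [32.0, 42.0]])
pts = lift_to_3d(kps, depth, K)
print(pts)   # rows: (0, 0, 5), (0.5, 0, 5), (0, 0.5, 5)
```

The keypoint at the principal point lifts to a point straight ahead at the depth-map value; a 10-pixel horizontal offset at depth 5 m becomes a 0.5 m lateral offset, which is exactly the fx-scaled pinhole geometry the depth network must be consistent with.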
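Publication 20210089890 subdivides each image into cells, identifies one keypoint per cell (notably, the keypoint may land in a cell other than the one it was identified for), and classifies pairs as matches by comparing descriptors. The sketch below mimics that structure with a random score map, a regressed per-pixel offset that can cross cell borders, and mutual nearest-neighbour descriptor matching; it is a toy illustration, not the publication's network.

```python
import numpy as np

rng = np.random.default_rng(1)

def detect_per_cell(score, offset, cell=8):
    """One keypoint per cell: take the score-map argmax inside each cell,
    then add a regressed 2D offset, which may push the final keypoint
    into a neighboring cell."""
    H, W = score.shape
    kps = []
    for cy in range(0, H, cell):
        for cx in range(0, W, cell):
            block = score[cy:cy + cell, cx:cx + cell]
            dy, dx = np.unravel_index(block.argmax(), block.shape)
            y, x = cy + dy, cx + dx
            kps.append([x + offset[y, x, 0], y + offset[y, x, 1]])
    return np.array(kps)

def mutual_matches(desc1, desc2, max_dist=0.7):
    """Classify keypoint pairs as matches by mutual nearest-neighbour
    descriptor distance."""
    d = np.linalg.norm(desc1[:, None] - desc2[None, :], axis=2)
    nn12, nn21 = d.argmin(axis=1), d.argmin(axis=0)
    return [(i, int(nn12[i])) for i in range(len(desc1))
            if nn21[nn12[i]] == i and d[i, nn12[i]] < max_dist]

# mock score/offset maps for a 32x32 image: 4x4 cells of size 8
score = rng.random((32, 32))
offset = rng.uniform(-4, 4, (32, 32, 2))  # offsets can cross cell borders
kps = detect_per_cell(score, offset)       # 16 keypoints, one per cell
desc = rng.standard_normal((len(kps), 32))
pairs = mutual_matches(desc, desc.copy())
print(len(pairs))   # identical descriptors -> all 16 keypoints match
```

Letting the offset escape the source cell is the detail the abstract calls out: it decouples the per-cell detection grid from where keypoints may actually lie.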
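Publication 20210318140 determines an agent's current location from a target image whose keypoints match those of the query image. As a rough illustration of that retrieval step only (the publication's task-conditioned keypoint generation is not modeled here), the sketch below picks the database image with the most close descriptor matches and returns its stored location; the database, labels, and thresholds are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def match_count(query_desc, target_desc, thresh=0.5):
    """Count query descriptors whose nearest target descriptor is close."""
    d = np.linalg.norm(query_desc[:, None] - target_desc[None, :], axis=2)
    return int((d.min(axis=1) < thresh).sum())

def localize(query_desc, database):
    """Return the stored location of the target image whose keypoint
    descriptors best match the query image's descriptors."""
    return max(database, key=lambda loc: match_count(query_desc, database[loc]))

# hypothetical database: location label -> descriptors of that place's image
db = {loc: rng.standard_normal((30, 16)) for loc in ("dock", "lobby", "garage")}
# query image revisits the lobby: same descriptors plus small noise
query = db["lobby"] + 0.05 * rng.standard_normal((30, 16))
print(localize(query, db))   # -> lobby
```

A real system would follow this coarse retrieval with geometric verification of the matched keypoints to refine the pose, but the retrieval step is what turns keypoint matching into a location estimate.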