Patents by Inventor Davis Rempe

Davis Rempe has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240013409
    Abstract: A method for multiple object tracking includes receiving, with a computing device, a point cloud dataset and detecting one or more objects in the point cloud dataset, each of the detected objects defined by points of the point cloud dataset and a bounding box. One or more historical tracklets are queried for historical tracklet states corresponding to each of the detected objects. A 4D encoding backbone comprising two branches is implemented: a first branch configured to compute per-point features for each of the objects and the corresponding historical tracklet states, and a second branch configured to obtain 4D point features. The per-point features and the 4D point features are concatenated, and a decoder receiving the concatenated per-point features predicts current tracklet states for each of the objects.
    Type: Application
    Filed: May 26, 2023
    Publication date: January 11, 2024
    Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha, The Board of Trustees of the Leland Stanford Junior University
    Inventors: Colton Stearns, Jie Li, Rares A. Ambrus, Vitor Campagnolo Guizilini, Sergey Zakharov, Adrien D. Gaidon, Davis Rempe, Tolga Birdal, Leonidas J. Guibas
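As a heavily simplified illustration of the tracking loop this abstract describes, the sketch below mimics the data flow of detecting objects, querying historical tracklet states, and predicting updated states. The learned 4D encoding backbone and decoder are replaced here by a plain nearest-centroid association; all function names, state representations, and thresholds are hypothetical, not taken from the patent.

```python
import math

def update_tracklets(detections, tracklets, max_dist=2.0):
    """detections: list of (x, y) centroids of detected bounding boxes.
    tracklets: dict mapping tracklet id -> last known (x, y) state.
    Returns a new dict of predicted current tracklet states."""
    updated = dict(tracklets)
    next_id = max(tracklets, default=-1) + 1
    for det in detections:
        # "Query historical tracklets": find the closest prior state.
        best = min(tracklets, key=lambda t: math.dist(det, tracklets[t]),
                   default=None)
        if best is not None and math.dist(det, tracklets[best]) <= max_dist:
            updated[best] = det     # predict the current state of that tracklet
        else:
            updated[next_id] = det  # no plausible history: start a new tracklet
            next_id += 1
    return updated
```

In the patented method the association and state prediction are learned (per-point features concatenated with 4D point features, then decoded); the snippet only shows the surrounding bookkeeping.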
  • Patent number: 11721056
    Abstract: In some embodiments, a model training system obtains a set of animation models. For each of the animation models, the model training system renders the animation model to generate a sequence of video frames containing a character using a set of rendering parameters and extracts joint points of the character from each frame of the sequence of video frames. The model training system further determines, for each frame of the sequence of video frames, whether a subset of the joint points are in contact with a ground plane in a three-dimensional space and generates contact labels for the subset of the joint points. The model training system trains a contact estimation model using training data containing the joint points extracted from the sequences of video frames and the generated contact labels. The contact estimation model can be used to refine a motion model for a character.
    Type: Grant
    Filed: January 12, 2022
    Date of Patent: August 8, 2023
    Assignee: Adobe Inc.
    Inventors: Jimei Yang, Davis Rempe, Bryan Russell, Aaron Hertzmann
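A minimal sketch of the contact-labeling step described in this abstract: joints extracted from rendered frames are labeled as in contact with the ground plane when they are both near the plane and nearly stationary. The thresholds and the speed heuristic are illustrative assumptions, not values from the patent.

```python
def contact_labels(joint_heights, joint_speeds,
                   height_thresh=0.05, speed_thresh=0.01):
    """Label a joint as in ground contact when it is both close to the
    ground plane and nearly stationary (thresholds are illustrative)."""
    return [h <= height_thresh and s <= speed_thresh
            for h, s in zip(joint_heights, joint_speeds)]

# One frame: planted heel, lifting toe, hip well above the ground.
labels = contact_labels([0.01, 0.20, 0.95], [0.0, 0.30, 0.10])
```

Labels generated this way over many rendered sequences form the training data for the contact estimation model.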
  • Publication number: 20220139019
    Abstract: In some embodiments, a model training system obtains a set of animation models. For each of the animation models, the model training system renders the animation model to generate a sequence of video frames containing a character using a set of rendering parameters and extracts joint points of the character from each frame of the sequence of video frames. The model training system further determines, for each frame of the sequence of video frames, whether a subset of the joint points are in contact with a ground plane in a three-dimensional space and generates contact labels for the subset of the joint points. The model training system trains a contact estimation model using training data containing the joint points extracted from the sequences of video frames and the generated contact labels. The contact estimation model can be used to refine a motion model for a character.
    Type: Application
    Filed: January 12, 2022
    Publication date: May 5, 2022
    Inventors: Jimei Yang, Davis Rempe, Bryan Russell, Aaron Hertzmann
  • Patent number: 11238634
    Abstract: In some embodiments, a motion model refinement system receives an input video depicting a human character and an initial motion model describing motions of individual joint points of the human character in a three-dimensional space. The motion model refinement system identifies foot joint points of the human character that are in contact with a ground plane using a trained contact estimation model. The motion model refinement system determines the ground plane based on the foot joint points and the initial motion model and constructs an optimization problem for refining the initial motion model. The optimization problem minimizes the difference between the refined motion model and the initial motion model under a set of plausibility constraints including constraints on the contact foot joint points and a time-dependent inertia tensor-based constraint. The motion model refinement system obtains the refined motion model by solving the optimization problem.
    Type: Grant
    Filed: April 28, 2020
    Date of Patent: February 1, 2022
    Assignee: Adobe Inc.
    Inventors: Jimei Yang, Davis Rempe, Bryan Russell, Aaron Hertzmann
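The refinement objective in this abstract balances staying close to the initial motion model against plausibility constraints on contact feet. The sketch below reduces that to a one-dimensional toy problem with a closed-form solution: penalize deviation of a foot's height from its initial estimate while pulling contact frames toward the ground plane. The weights and the omission of the other constraints (e.g. the inertia-tensor term) are simplifying assumptions.

```python
def refine_heights(initial_heights, in_contact, w_data=1.0, w_contact=10.0):
    """Per-frame closed-form minimum of
        w_data * (h - h_init)**2 + w_contact * [in contact] * h**2,
    i.e. h* = w_data * h_init / (w_data + w_contact) when in contact,
    and h* = h_init otherwise."""
    return [w_data * h / (w_data + (w_contact if c else 0.0))
            for h, c in zip(initial_heights, in_contact)]
```

A contact frame with an initial height of 0.11 is pulled down toward the ground plane, while a non-contact frame keeps its initial value; the full method solves a joint optimization over all joints and frames rather than per-frame minima.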
  • Publication number: 20210335028
    Abstract: In some embodiments, a motion model refinement system receives an input video depicting a human character and an initial motion model describing motions of individual joint points of the human character in a three-dimensional space. The motion model refinement system identifies foot joint points of the human character that are in contact with a ground plane using a trained contact estimation model. The motion model refinement system determines the ground plane based on the foot joint points and the initial motion model and constructs an optimization problem for refining the initial motion model. The optimization problem minimizes the difference between the refined motion model and the initial motion model under a set of plausibility constraints including constraints on the contact foot joint points and a time-dependent inertia tensor-based constraint. The motion model refinement system obtains the refined motion model by solving the optimization problem.
    Type: Application
    Filed: April 28, 2020
    Publication date: October 28, 2021
    Inventors: Jimei Yang, Davis Rempe, Bryan Russell, Aaron Hertzmann