Patents by Inventor Kevin Margo

Kevin Margo is named as an inventor on the following patent filings. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO). A short illustrative sketch of the animation-stitching approach described in the abstracts follows the listing.

  • Patent number: 12333639
    Abstract: In various examples, animations may be generated using audio-driven body animation synthesized with voice tempo. For example, full body animation may be driven from an audio input representative of recorded speech, where voice tempo (e.g., a number of phonemes per unit time) may be used to generate a 1D audio signal for comparing to datasets including data samples that each include an animation and a corresponding 1D audio signal. One or more loss functions may be used to compare the 1D audio signal from the input audio to the audio signals of the datasets, as well as to compare joint information of joints of an actor between animations of two or more data samples, in order to identify optimal transition points between the animations. The animations may then be stitched together—e.g., using interpolation and/or a neural network trained to seamlessly stitch sequences together—using the transition points.
    Type: Grant
    Filed: November 8, 2021
    Date of Patent: June 17, 2025
    Assignee: NVIDIA Corporation
    Inventors: Evgeny Aleksandrovich Tumanov, Dmitry Aleksandrovich Korobchenko, Simon Yuen, Kevin Margo
  • Publication number: 20240233229
    Abstract: In various examples, animations may be generated using audio-driven body animation synthesized with voice tempo. For example, full body animation may be driven from an audio input representative of recorded speech, where voice tempo (e.g., a number of phonemes per unit time) may be used to generate a 1D audio signal for comparing to datasets including data samples that each include an animation and a corresponding 1D audio signal. One or more loss functions may be used to compare the 1D audio signal from the input audio to the audio signals of the datasets, as well as to compare joint information of joints of an actor between animations of two or more data samples, in order to identify optimal transition points between the animations. The animations may then be stitched together—e.g., using interpolation and/or a neural network trained to seamlessly stitch sequences together—using the transition points.
    Type: Application
    Filed: November 8, 2021
    Publication date: July 11, 2024
    Inventors: Evgeny Aleksandrovich Tumanov, Dmitry Aleksandrovich Korobchenko, Simon Yuen, Kevin Margo
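
The abstracts above describe a pipeline: derive a 1D voice-tempo signal (phonemes per unit time) from input audio, compare it against dataset samples that pair an animation with its own 1D audio signal, use additional joint-based losses to find transition points between animations, and then stitch the selected animations together. The Python sketch below illustrates one possible reading of that pipeline under stated assumptions; every function name, array shape, dataset layout, and loss formulation here is an assumption made for illustration, and the stitching step uses plain interpolation rather than the neural network the abstract also mentions. It is not the patented implementation.

```python
# Hypothetical sketch of audio-driven body-animation stitching, loosely following
# the steps named in the abstracts above. All names, shapes, and losses are
# illustrative assumptions, not details taken from the patent.
import numpy as np

def voice_tempo_signal(phoneme_times, duration, frame_rate=30):
    """Build a 1D voice-tempo signal (phonemes per unit time), sampled per frame."""
    frames = int(duration * frame_rate)
    counts = np.zeros(frames)
    for t in phoneme_times:
        counts[min(int(t * frame_rate), frames - 1)] += 1.0
    # Smooth per-frame counts into a rate estimate (assumed one-second window).
    kernel = np.ones(frame_rate) / frame_rate
    return np.convolve(counts, kernel, mode="same") * frame_rate

def audio_loss(query, candidate):
    """L2 distance between two 1D tempo signals (assumed loss)."""
    n = min(len(query), len(candidate))
    return float(np.mean((query[:n] - candidate[:n]) ** 2))

def joint_loss(anim_a, anim_b):
    """Distance between the last pose of anim_a and the first pose of anim_b.
    Animations are assumed to be arrays of shape (frames, joints, 3)."""
    return float(np.mean((anim_a[-1] - anim_b[0]) ** 2))

def pick_transition(query_tempo, dataset, audio_weight=1.0, joint_weight=1.0):
    """Pick the pair of data samples with the lowest combined audio + joint loss.
    Each dataset entry is assumed to be a dict with 'tempo' and 'animation' keys."""
    best, best_cost = None, np.inf
    for i, a in enumerate(dataset):
        for j, b in enumerate(dataset):
            if i == j:
                continue
            cost = (audio_weight * (audio_loss(query_tempo, a["tempo"])
                                    + audio_loss(query_tempo, b["tempo"]))
                    + joint_weight * joint_loss(a["animation"], b["animation"]))
            if cost < best_cost:
                best, best_cost = (i, j), cost
    return best

def stitch(anim_a, anim_b, blend_frames=10):
    """Stitch two animations by linearly interpolating poses across the transition."""
    w = np.linspace(0.0, 1.0, blend_frames)[:, None, None]
    blend = (1.0 - w) * anim_a[-blend_frames:] + w * anim_b[:blend_frames]
    return np.concatenate([anim_a[:-blend_frames], blend, anim_b[blend_frames:]])
```

As a usage sketch, one could compute `query_tempo = voice_tempo_signal(phoneme_times, duration)` from the recorded speech, call `pick_transition(query_tempo, dataset)` to choose two candidate animations, and pass them to `stitch` to produce the blended sequence.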