Patents by Inventor Martin Guay

Martin Guay has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240312035
    Abstract: The disclosed matting technique comprises receiving a video feed comprising a plurality of temporally ordered video frames as the video frames are captured by a video capture device; generating, using one or more machine learning models, an image mask and a depth estimate corresponding to each video frame included in the video feed; and, for each video frame in the video feed, transmitting, in real time, the video frame, the corresponding image mask, and the corresponding depth estimate to a compositing system. The compositing system composites a computer-generated element with the video frame based on the corresponding image mask and the corresponding depth estimate.
    Type: Application
    Filed: March 16, 2023
    Publication date: September 19, 2024
    Inventors: Martin Guay, Tunc Ozan Aydin, Mattia Gustavo Bruno Paolo Ryffel
  • Publication number: 20240282028
    Abstract: One embodiment of the present invention sets forth a technique for training a neural motion controller. The technique includes determining a first set of features associated with a first control signal for a virtual character. The technique also includes matching the first set of features to a first sequence of motions included in a plurality of sequences of motions. The technique further includes training the neural motion controller based on one or more motions included in the first sequence of motions and the first control signal.
    Type: Application
    Filed: February 21, 2023
    Publication date: August 22, 2024
    Inventors: Martin Guay, Dhruv Agrawal, Dominik Tobias Borer, Jakob Joachim Buhmann, Mattia Gustavo Bruno Paolo Ryffel, Robert Walker Sumner
  • Publication number: 20240176360
    Abstract: A method and controller for autonomous control and station-keeping of an unmanned vehicle uses sensor data corresponding to at least the geographical location and vertical position of the unmanned vehicle within a fluid environment to compute a gradient of the unmanned vehicle's movement. The gradient is used to estimate a vertical profile of a feature of the fluid environment that affects the gradient, and a favourable position is identified in the vertical profile where the feature would transport the unmanned vehicle in a direction that minimizes a performance metric. A control signal output based on the favourable position controls at least one actuator that causes the unmanned vehicle to ascend or descend to the favourable position in the vertical profile. The unmanned vehicle may be a high-altitude platform (HAP).
    Type: Application
    Filed: November 29, 2023
    Publication date: May 30, 2024
    Inventors: Martin Guay, Telema Harry
  • Patent number: 11995749
    Abstract: Various embodiments disclosed herein provide techniques for generating image data of a three-dimensional (3D) animatable asset. A rendering module executing on a computer system accesses a machine learning model that has been trained via first image data of the 3D animatable asset generated from first rig vector data. The rendering module receives second rig vector data. The rendering module generates, via the machine learning model, second image data of the 3D animatable asset based on the second rig vector data.
    Type: Grant
    Filed: March 5, 2020
    Date of Patent: May 28, 2024
    Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Dominik Borer, Jakob Buhmann, Martin Guay
  • Patent number: 11704853
    Abstract: Techniques are disclosed for learning a machine learning model that maps control data, such as renderings of skeletons, and associated three-dimensional (3D) information to two-dimensional (2D) renderings of a character. The machine learning model may be an adaptation of the U-Net architecture that accounts for 3D information and is trained using a perceptual loss between images generated by the machine learning model and ground truth images. Once trained, the machine learning model may be used to animate a character, such as in the context of previsualization or a video game, based on control of associated control points.
    Type: Grant
    Filed: July 15, 2019
    Date of Patent: July 18, 2023
    Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Dominik Tobias Borer, Martin Guay, Jakob Joachim Buhmann, Robert Walker Sumner
  • Publication number: 20230177784
    Abstract: An image processing system includes a computing platform having a hardware processor and a system memory storing an image augmentation software code, a three-dimensional (3D) shapes library, and/or a 3D poses library. The image processing system also includes a two-dimensional (2D) pose estimation module communicatively coupled to the image augmentation software code. The hardware processor executes the image augmentation software code to provide an image to the 2D pose estimation module and to receive 2D pose data generated by the 2D pose estimation module based on the image. The image augmentation software code identifies a 3D shape and/or a 3D pose corresponding to the image using an optimization algorithm applied to the 2D pose data and one or both of the 3D poses library and the 3D shapes library, and may output the 3D shape and/or 3D pose to render an augmented image on a display.
    Type: Application
    Filed: January 27, 2023
    Publication date: June 8, 2023
    Inventors: Martin Guay, Gökcen Cimen, Christoph Maurhofer, Mattia Ryffel, Robert W. Sumner
  • Patent number: 11669999
    Abstract: In various embodiments, a training application generates training items for three-dimensional (3D) pose estimation. The training application generates multiple posed 3D models based on multiple 3D poses and a 3D model of a person wearing a costume that is associated with multiple visual attributes. For each posed 3D model, the training application performs rendering operation(s) to generate synthetic image(s). For each synthetic image, the training application generates a training item based on the synthetic image and the 3D pose associated with the posed 3D model from which the synthetic image was rendered. The synthetic images are included in a synthetic training dataset that is tailored for training a machine-learning model to compute estimated 3D poses of persons from two-dimensional (2D) input images. Advantageously, the synthetic training dataset can be used to train the machine-learning model to accurately infer the orientations of persons across a wide range of environments.
    Type: Grant
    Filed: May 26, 2020
    Date of Patent: June 6, 2023
    Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Martin Guay, Maurizio Nitti, Jakob Joachim Buhmann, Dominik Tobias Borer
  • Patent number: 11600047
    Abstract: An image processing system includes a computing platform having a hardware processor and a system memory storing an image augmentation software code, a three-dimensional (3D) shapes library, and/or a 3D poses library. The image processing system also includes a two-dimensional (2D) pose estimation module communicatively coupled to the image augmentation software code. The hardware processor executes the image augmentation software code to provide an image to the 2D pose estimation module and to receive 2D pose data generated by the 2D pose estimation module based on the image. The image augmentation software code identifies a 3D shape and/or a 3D pose corresponding to the image using an optimization algorithm applied to the 2D pose data and one or both of the 3D poses library and the 3D shapes library, and may output the 3D shape and/or 3D pose to render an augmented image on a display.
    Type: Grant
    Filed: July 17, 2018
    Date of Patent: March 7, 2023
    Assignees: Disney Enterprises, Inc., ETH Zürich
    Inventors: Martin Guay, Gokcen Cimen, Christoph Maurhofer, Mattia Ryffel, Robert Sumner
  • Publication number: 20220392099
    Abstract: One embodiment of the present invention sets forth a technique for generating a pose estimation model. The technique includes generating one or more trained components included in the pose estimation model based on a first set of training images and a first set of labeled poses associated with the first set of training images, wherein each labeled pose includes a first set of positions on a left side of an object and a second set of positions on a right side of the object. The technique also includes training the pose estimation model based on a set of reconstructions of a second set of training images, wherein the set of reconstructions is generated by the pose estimation model from a set of predicted poses outputted by the one or more trained components.
    Type: Application
    Filed: May 19, 2022
    Publication date: December 8, 2022
    Inventors: Martin Guay, Dominik Tobias Borer, Jakob Joachim Buhmann
  • Publication number: 20210374993
    Abstract: In various embodiments, a training application generates training items for three-dimensional (3D) pose estimation. The training application generates multiple posed 3D models based on multiple 3D poses and a 3D model of a person wearing a costume that is associated with multiple visual attributes. For each posed 3D model, the training application performs rendering operation(s) to generate synthetic image(s). For each synthetic image, the training application generates a training item based on the synthetic image and the 3D pose associated with the posed 3D model from which the synthetic image was rendered. The synthetic images are included in a synthetic training dataset that is tailored for training a machine-learning model to compute estimated 3D poses of persons from two-dimensional (2D) input images. Advantageously, the synthetic training dataset can be used to train the machine-learning model to accurately infer the orientations of persons across a wide range of environments.
    Type: Application
    Filed: May 26, 2020
    Publication date: December 2, 2021
    Inventors: Martin Guay, Maurizio Nitti, Jakob Joachim Buhmann, Dominik Tobias Borer
  • Publication number: 20210233300
    Abstract: Various embodiments disclosed herein provide techniques for generating image data of a three-dimensional (3D) animatable asset. A rendering module executing on a computer system accesses a machine learning model that has been trained via first image data of the 3D animatable asset generated from first rig vector data. The rendering module receives second rig vector data. The rendering module generates, via the machine learning model, second image data of the 3D animatable asset based on the second rig vector data.
    Type: Application
    Filed: March 5, 2020
    Publication date: July 29, 2021
    Inventors: Dominik Borer, Jakob Buhmann, Martin Guay
  • Patent number: 10916046
    Abstract: Techniques are disclosed for estimating poses from images. In one embodiment, a machine learning model, referred to herein as the “detector,” is trained to estimate animal poses from images in a bottom-up fashion. In particular, the detector may be trained using rendered images depicting animal body parts scattered over realistic backgrounds, as opposed to renderings of full animal bodies. In order to make appearances of the rendered body parts more realistic so that the detector can be trained to estimate poses from images of real animals, the body parts may be rendered using textures that are determined from a translation of rendered images of the animal into corresponding images with more realistic textures via adversarial learning. Three-dimensional poses may also be inferred from estimated joint locations using, e.g., inverse kinematics.
    Type: Grant
    Filed: February 28, 2019
    Date of Patent: February 9, 2021
    Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Martin Guay, Dominik Tobias Borer, Ahmet Cengiz Öztireli, Robert W. Sumner, Jakob Joachim Buhmann
  • Publication number: 20210019928
    Abstract: Techniques are disclosed for learning a machine learning model that maps control data, such as renderings of skeletons, and associated three-dimensional (3D) information to two-dimensional (2D) renderings of a character. The machine learning model may be an adaptation of the U-Net architecture that accounts for 3D information and is trained using a perceptual loss between images generated by the machine learning model and ground truth images. Once trained, the machine learning model may be used to animate a character, such as in the context of previsualization or a video game, based on control of associated control points.
    Type: Application
    Filed: July 15, 2019
    Publication date: January 21, 2021
    Inventors: Dominik Tobias Borer, Martin Guay, Jakob Joachim Buhmann, Robert Walker Sumner
  • Patent number: 10885708
    Abstract: An automated costume augmentation system includes a computing platform having a hardware processor and a system memory storing a software code. The hardware processor executes the software code to provide an image including a posed figure to an artificial neural network (ANN), receive from the ANN 2D skeleton data including joint positions corresponding to the posed figure, and determine a 3D pose corresponding to the posed figure using an optimization algorithm applied to the skeleton data. The software code further identifies one or more proportion(s) of the posed figure based on the skeleton data, determines bone directions corresponding to the posed figure using another optimization algorithm applied to the 3D pose, parameterizes a costume for the posed figure based on the 3D pose, the proportion(s), and the bone directions, and outputs an enhanced image including the posed figure augmented with the fitted costume for rendering on a display.
    Type: Grant
    Filed: October 16, 2018
    Date of Patent: January 5, 2021
    Assignees: Disney Enterprises, Inc., ETH Zürich
    Inventors: Martin Guay, Gökcen Cimen, Christoph Maurhofer, Mattia Ryffel, Robert W. Sumner
  • Publication number: 20200279428
    Abstract: Techniques are disclosed for estimating poses from images. In one embodiment, a machine learning model, referred to herein as the “detector,” is trained to estimate animal poses from images in a bottom-up fashion. In particular, the detector may be trained using rendered images depicting animal body parts scattered over realistic backgrounds, as opposed to renderings of full animal bodies. In order to make appearances of the rendered body parts more realistic so that the detector can be trained to estimate poses from images of real animals, the body parts may be rendered using textures that are determined from a translation of rendered images of the animal into corresponding images with more realistic textures via adversarial learning. Three-dimensional poses may also be inferred from estimated joint locations using, e.g., inverse kinematics.
    Type: Application
    Filed: February 28, 2019
    Publication date: September 3, 2020
    Inventors: Martin Guay, Dominik Tobias Borer, Ahmet Cengiz Öztireli, Robert W. Sumner, Jakob Joachim Buhmann
  • Publication number: 20200118333
    Abstract: An automated costume augmentation system includes a computing platform having a hardware processor and a system memory storing a software code. The hardware processor executes the software code to provide an image including a posed figure to an artificial neural network (ANN), receive from the ANN 2D skeleton data including joint positions corresponding to the posed figure, and determine a 3D pose corresponding to the posed figure using an optimization algorithm applied to the skeleton data. The software code further identifies one or more proportion(s) of the posed figure based on the skeleton data, determines bone directions corresponding to the posed figure using another optimization algorithm applied to the 3D pose, parameterizes a costume for the posed figure based on the 3D pose, the proportion(s), and the bone directions, and outputs an enhanced image including the posed figure augmented with the fitted costume for rendering on a display.
    Type: Application
    Filed: October 16, 2018
    Publication date: April 16, 2020
    Inventors: Martin Guay, Gökcen Cimen, Christoph Maurhofer, Mattia Ryffel, Robert W. Sumner
  • Patent number: 10553009
    Abstract: Techniques for generating locomotion data for animating a virtual quadruped model, starting at an origin point and travelling to a destination point along a defined path. A virtual skeletal structure for the virtual quadruped model is analyzed to identify a torso region, limbs each ending in a respective end effector, and limb attributes. A predefined locomotion template for virtual quadruped characters is retrieved and mapped to the virtual quadruped model by aligning the torso region and the plurality of limbs of the virtual skeletal structure for the virtual quadruped with a second torso region and a second plurality of limbs of the predefined locomotion template. Locomotion data is generated for the virtual quadruped model based on the defined path and by upscaling the mapped predefined locomotion template, based at least on the set of limb attributes determined by analyzing the virtual skeletal structure for the virtual quadruped model.
    Type: Grant
    Filed: March 15, 2018
    Date of Patent: February 4, 2020
    Assignee: Disney Enterprises, Inc.
    Inventors: Martin Guay, Moritz Geilinger, Stelian Coros, Ye Yuan, Robert W. Sumner
  • Publication number: 20200027271
    Abstract: An image processing system includes a computing platform having a hardware processor and a system memory storing an image augmentation software code, a three-dimensional (3D) shapes library, and/or a 3D poses library. The image processing system also includes a two-dimensional (2D) pose estimation module communicatively coupled to the image augmentation software code. The hardware processor executes the image augmentation software code to provide an image to the 2D pose estimation module and to receive 2D pose data generated by the 2D pose estimation module based on the image. The image augmentation software code identifies a 3D shape and/or a 3D pose corresponding to the image using an optimization algorithm applied to the 2D pose data and one or both of the 3D poses library and the 3D shapes library, and may output the 3D shape and/or 3D pose to render an augmented image on a display.
    Type: Application
    Filed: July 17, 2018
    Publication date: January 23, 2020
    Inventors: Martin Guay, Gokcen Cimen, Christoph Maurhofer, Mattia Ryffel, Robert Sumner
  • Patent number: 10445940
    Abstract: A simulation engine models interactions between a simulated character and real-world objects to produce a physically realistic augmented reality (AR) simulation. The simulation engine recognizes a given real-world object and then identifies, within a library of object models, an object model corresponding to that object. The simulation engine projects the object model onto the real-world object such that the object model is geometrically aligned with the real-world object. When the simulated character encounters the real-world object, the simulation engine models interactions between the simulated character and the real-world object by adjusting the kinematics of the simulated character relative to the object model associated with the real-world object.
    Type: Grant
    Filed: March 15, 2018
    Date of Patent: October 15, 2019
    Assignees: Disney Enterprises, Inc., ETH Zürich
    Inventors: Martin Guay, Gökçen Çimen, Dominik Tobias Borer, Simone Guggiari, Ye Yuan, Stelian Coros, Robert Walker Sumner
  • Publication number: 20190287305
    Abstract: A simulation engine models interactions between a simulated character and real-world objects to produce a physically realistic augmented reality (AR) simulation. The simulation engine recognizes a given real-world object and then identifies, within a library of object models, an object model corresponding to that object. The simulation engine projects the object model onto the real-world object such that the object model is geometrically aligned with the real-world object. When the simulated character encounters the real-world object, the simulation engine models interactions between the simulated character and the real-world object by adjusting the kinematics of the simulated character relative to the object model associated with the real-world object.
    Type: Application
    Filed: March 15, 2018
    Publication date: September 19, 2019
    Inventors: Martin Guay, Gökçen Çimen, Dominik Tobias Borer, Simone Guggiari, Ye Yuan, Stelian Coros, Robert Walker Sumner
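For readers skimming these abstracts, a few of the recurring techniques can be illustrated in code. The depth-aware compositing described in publication 20240312035, for example, reduces per pixel to a matte test plus a depth comparison between the video frame and the computer-generated (CG) element. The sketch below is purely illustrative; the function names, threshold, and toy pixel values are hypothetical and do not come from the patent.

```python
# Hypothetical sketch of depth-aware compositing: a CG pixel replaces a
# video pixel only where the matte marks no live-action foreground AND
# the CG element sits in front of the estimated scene depth.

def composite_pixel(frame_px, frame_depth, cg_px, cg_depth, mask):
    """mask = 1.0 marks live-action foreground that must stay on top."""
    if mask >= 0.5 or cg_depth > frame_depth:
        return frame_px   # foreground actor, or CG element occluded by scene
    return cg_px          # CG element visible in front of the scene

def composite(frame, depth, cg, cg_depth, mask):
    """Composite a whole frame, pixel by pixel (flat lists for brevity)."""
    return [composite_pixel(f, d, c, cd, m)
            for f, d, c, cd, m in zip(frame, depth, cg, cg_depth, mask)]

# Three illustrative pixels: an actor in front, a wall behind the CG
# element, and a wall in front of the CG element.
frame = ["actor", "wall", "wall"]
depth = [1.0, 5.0, 5.0]
cg = ["robot", "robot", "robot"]
cg_depth = [3.0, 3.0, 6.0]
mask = [1.0, 0.0, 0.0]
print(composite(frame, depth, cg, cg_depth, mask))
# -> ['actor', 'robot', 'wall']
```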
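The feature-matching step in publication 20240282028 (matching features of a control signal to a stored motion sequence before training the neural motion controller) can likewise be sketched as a nearest-neighbor lookup over feature vectors. All names and the toy feature vectors below are hypothetical stand-ins, not material from the application.

```python
# Hypothetical sketch of feature-based motion matching: given a feature
# vector derived from a control signal, return the stored motion
# sequence whose features are nearest in Euclidean distance.

from math import sqrt

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_motion(control_features, motion_database):
    """Name of the motion sequence closest to the control features."""
    return min(motion_database,
               key=lambda name: euclidean(control_features, motion_database[name]))

# Toy database: each sequence summarized by (speed, turning rate) --
# purely illustrative features.
database = {
    "walk_forward": [1.0, 0.0],
    "run_forward": [3.0, 0.0],
    "turn_left": [1.0, 0.8],
}
print(match_motion([1.1, 0.1], database))  # -> walk_forward
```

In practice the matched sequence then supplies the motions used as training targets for the neural controller, per the abstract.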
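Finally, the image-augmentation entries (e.g., patents 10885708 and 11600047) repeatedly describe identifying a 3D pose via an optimization algorithm applied to 2D pose data and a 3D poses library. A minimal hypothetical version of that lookup, assuming a simple orthographic projection (dropping the z coordinate), is a brute-force search for the library pose with the smallest 2D reprojection error; every name and coordinate below is invented for illustration.

```python
# Hypothetical sketch: choose the 3D pose from a small library whose
# orthographic 2D projection best explains observed 2D joint positions.

def reprojection_error(pose_3d, joints_2d):
    """Sum of squared distances between projected 3D joints (x, y) and
    observed 2D joints (u, v)."""
    return sum((x - u) ** 2 + (y - v) ** 2
               for (x, y, z), (u, v) in zip(pose_3d, joints_2d))

def best_pose(library, joints_2d):
    """Library key whose pose minimizes the reprojection error."""
    return min(library, key=lambda k: reprojection_error(library[k], joints_2d))

# Two-joint toy library: a standing pose and a crouching pose.
library = {
    "standing": [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
    "crouching": [(0.0, 0.0, 0.0), (0.0, 0.5, 0.3)],
}
observed = [(0.0, 0.05), (0.0, 0.55)]  # noisy 2D joint detections
print(best_pose(library, observed))  # -> crouching
```

A production system would use a real camera model and continuous optimization rather than exhaustive search; the brute-force form just makes the objective explicit.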