Patents by Inventor Dominik Tobias BORER
Dominik Tobias BORER has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240282028
Abstract: One embodiment of the present invention sets forth a technique for training a neural motion controller. The technique includes determining a first set of features associated with a first control signal for a virtual character. The technique also includes matching the first set of features to a first sequence of motions included in a plurality of sequences of motions. The technique further includes training the neural motion controller based on one or more motions included in the first sequence of motions and the first control signal.
Type: Application
Filed: February 21, 2023
Publication date: August 22, 2024
Inventors: Martin GUAY, Dhruv AGRAWAL, Dominik Tobias BORER, Jakob Joachim BUHMANN, Mattia Gustavo Bruno Paolo RYFFEL, Robert Walker SUMNER
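The matching step in this abstract, pairing control-signal features with the closest stored motion sequence, can be sketched as a nearest-neighbor lookup. This is a minimal illustration only, not the patented method; the function and the toy feature vectors are hypothetical:

```python
import numpy as np

def match_features(control_features, motion_features):
    """Return the index of the motion sequence whose summary features
    are closest (Euclidean distance) to the control-signal features."""
    dists = np.linalg.norm(motion_features - control_features, axis=1)
    return int(np.argmin(dists))

# Toy database: 3 motion sequences, each summarized by a 2-D feature vector.
motion_db = np.array([[0.0, 1.0],   # walk
                      [1.0, 0.0],   # run
                      [0.5, 0.5]])  # jog
control = np.array([0.9, 0.1])      # control signal resembling "run"
print(match_features(control, motion_db))  # -> 1
```

The matched sequence's motions would then serve as training targets for the neural controller, conditioned on the control signal.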
-
Patent number: 11704853
Abstract: Techniques are disclosed for learning a machine learning model that maps control data, such as renderings of skeletons, and associated three-dimensional (3D) information to two-dimensional (2D) renderings of a character. The machine learning model may be an adaptation of the U-Net architecture that accounts for 3D information and is trained using a perceptual loss between images generated by the machine learning model and ground truth images. Once trained, the machine learning model may be used to animate a character, such as in the context of previsualization or a video game, based on control of associated control points.
Type: Grant
Filed: July 15, 2019
Date of Patent: July 18, 2023
Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
Inventors: Dominik Tobias Borer, Martin Guay, Jakob Joachim Buhmann, Robert Walker Sumner
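The perceptual loss mentioned in this abstract compares images in a feature space rather than pixel space. A minimal sketch follows, with a random linear projection standing in for the real feature extractor (a production setup would typically use pretrained network activations such as VGG features); all names are hypothetical:

```python
import numpy as np

def perceptual_loss(feat_extractor, generated, target):
    """L2 distance between feature representations of a generated image
    and a ground-truth image -- the 'perceptual' part comes from
    comparing extracted features rather than raw pixels."""
    fg = feat_extractor(generated)
    ft = feat_extractor(target)
    return float(np.mean((fg - ft) ** 2))

# Stand-in feature extractor: a fixed random linear projection.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))
extract = lambda img: W @ img.reshape(-1)

img_a = rng.standard_normal((4, 4))
print(perceptual_loss(extract, img_a, img_a))  # identical images -> 0.0
```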
-
Patent number: 11669999
Abstract: In various embodiments, a training application generates training items for three-dimensional (3D) pose estimation. The training application generates multiple posed 3D models based on multiple 3D poses and a 3D model of a person wearing a costume that is associated with multiple visual attributes. For each posed 3D model, the training application performs rendering operation(s) to generate synthetic image(s). For each synthetic image, the training application generates a training item based on the synthetic image and the 3D pose associated with the posed 3D model from which the synthetic image was rendered. The synthetic images are included in a synthetic training dataset that is tailored for training a machine-learning model to compute estimated 3D poses of persons from two-dimensional (2D) input images. Advantageously, the synthetic training dataset can be used to train the machine-learning model to accurately infer the orientations of persons across a wide range of environments.
Type: Grant
Filed: May 26, 2020
Date of Patent: June 6, 2023
Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
Inventors: Martin Guay, Maurizio Nitti, Jakob Joachim Buhmann, Dominik Tobias Borer
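The pipeline this abstract describes (pose the model, render it, pair each rendering with its source pose) can be outlined as a loop. This is a structural sketch only; the posing and rendering callbacks are hypothetical stand-ins:

```python
def build_synthetic_dataset(poses, pose_model_fn, render_fn):
    """For each 3D pose: pose the costumed character model, render it
    (possibly from several views), and pair every rendered image with
    the pose it was rendered from, yielding labeled training items."""
    dataset = []
    for pose in poses:
        posed_model = pose_model_fn(pose)
        for image in render_fn(posed_model):
            dataset.append({"image": image, "pose_label": pose})
    return dataset

# Toy stand-ins: posing tags the model, rendering yields two "views".
poses = ["T-pose", "crouch"]
data = build_synthetic_dataset(
    poses,
    pose_model_fn=lambda p: f"model<{p}>",
    render_fn=lambda m: [f"{m}/view{i}" for i in range(2)],
)
print(len(data))  # -> 4
```

Each item pairs a synthetic image with a known 3D pose, which is what makes the dataset usable as supervised training data for a pose estimator.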
-
Publication number: 20220392099
Abstract: One embodiment of the present invention sets forth a technique for generating a pose estimation model. The technique includes generating one or more trained components included in the pose estimation model based on a first set of training images and a first set of labeled poses associated with the first set of training images, wherein each labeled pose includes a first set of positions on a left side of an object and a second set of positions on a right side of the object. The technique also includes training the pose estimation model based on a set of reconstructions of a second set of training images, wherein the set of reconstructions is generated by the pose estimation model from a set of predicted poses outputted by the one or more trained components.
Type: Application
Filed: May 19, 2022
Publication date: December 8, 2022
Inventors: Martin GUAY, Dominik Tobias BORER, Jakob Joachim BUHMANN
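The second training stage here uses reconstructions as a supervision signal: a pose is predicted from an image, an image is reconstructed from that pose, and the reconstruction is scored against the original. A minimal analysis-by-synthesis sketch, with deliberately trivial stand-ins for both the predictor and the renderer (all names hypothetical):

```python
import numpy as np

def reconstruction_loss(images, predict_pose, reconstruct):
    """Self-supervision signal: predict a pose from each image, render a
    reconstruction from the predicted pose, and average how far each
    reconstruction is (mean squared error) from its original image."""
    total = 0.0
    for img in images:
        pose = predict_pose(img)
        recon = reconstruct(pose)
        total += float(np.mean((img - recon) ** 2))
    return total / len(images)

# Toy stand-ins: the "pose" is just the image mean, and reconstruction
# fills an image with that mean value.
imgs = [np.full((2, 2), 3.0), np.full((2, 2), 7.0)]
loss = reconstruction_loss(imgs, predict_pose=np.mean,
                           reconstruct=lambda p: np.full((2, 2), p))
print(loss)  # constant images reconstruct exactly -> 0.0
```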
-
Publication number: 20210374993
Abstract: In various embodiments, a training application generates training items for three-dimensional (3D) pose estimation. The training application generates multiple posed 3D models based on multiple 3D poses and a 3D model of a person wearing a costume that is associated with multiple visual attributes. For each posed 3D model, the training application performs rendering operation(s) to generate synthetic image(s). For each synthetic image, the training application generates a training item based on the synthetic image and the 3D pose associated with the posed 3D model from which the synthetic image was rendered. The synthetic images are included in a synthetic training dataset that is tailored for training a machine-learning model to compute estimated 3D poses of persons from two-dimensional (2D) input images. Advantageously, the synthetic training dataset can be used to train the machine-learning model to accurately infer the orientations of persons across a wide range of environments.
Type: Application
Filed: May 26, 2020
Publication date: December 2, 2021
Inventors: Martin GUAY, Maurizio NITTI, Jakob Joachim BUHMANN, Dominik Tobias BORER
-
Patent number: 10916046
Abstract: Techniques are disclosed for estimating poses from images. In one embodiment, a machine learning model, referred to herein as the “detector,” is trained to estimate animal poses from images in a bottom-up fashion. In particular, the detector may be trained using rendered images depicting animal body parts scattered over realistic backgrounds, as opposed to renderings of full animal bodies. In order to make appearances of the rendered body parts more realistic so that the detector can be trained to estimate poses from images of real animals, the body parts may be rendered using textures that are determined from a translation of rendered images of the animal into corresponding images with more realistic textures via adversarial learning. Three-dimensional poses may also be inferred from estimated joint locations using, e.g., inverse kinematics.
Type: Grant
Filed: February 28, 2019
Date of Patent: February 9, 2021
Assignees: Disney Enterprises, Inc., ETH Zurich (Eidgenoessische Technische Hochschule Zurich)
Inventors: Martin Guay, Dominik Tobias Borer, Ahmet Cengiz Öztireli, Robert W. Sumner, Jakob Joachim Buhmann
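The final step this abstract mentions, inferring 3D poses from estimated joint locations via inverse kinematics, can be illustrated with the classic analytic solution for a planar two-link limb. This is a textbook IK sketch, not the patent's method:

```python
import math

def two_joint_ik(target_x, target_y, l1, l2):
    """Analytic inverse kinematics for a planar two-link limb: recover
    the shoulder and elbow angles that place the end effector at the
    target, given link lengths l1 and l2."""
    d2 = target_x ** 2 + target_y ** 2
    # Law of cosines gives the elbow angle; clamp for numerical safety.
    cos_elbow = (d2 - l1 ** 2 - l2 ** 2) / (2 * l1 * l2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    shoulder = math.atan2(target_y, target_x) - math.atan2(
        l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
    return shoulder, elbow

# Fully extended limb reaching along the x-axis: both angles are ~0.
s, e = two_joint_ik(2.0, 0.0, 1.0, 1.0)
print(round(s, 6), round(e, 6))  # -> 0.0 0.0
```

In a full pipeline, the detector's estimated 2D joint locations would play the role of the targets, and a chain of such solves would lift them into a 3D skeletal pose.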
-
Publication number: 20210019928
Abstract: Techniques are disclosed for learning a machine learning model that maps control data, such as renderings of skeletons, and associated three-dimensional (3D) information to two-dimensional (2D) renderings of a character. The machine learning model may be an adaptation of the U-Net architecture that accounts for 3D information and is trained using a perceptual loss between images generated by the machine learning model and ground truth images. Once trained, the machine learning model may be used to animate a character, such as in the context of previsualization or a video game, based on control of associated control points.
Type: Application
Filed: July 15, 2019
Publication date: January 21, 2021
Inventors: Dominik Tobias BORER, Martin GUAY, Jakob Joachim BUHMANN, Robert Walker SUMNER
-
Publication number: 20200279428
Abstract: Techniques are disclosed for estimating poses from images. In one embodiment, a machine learning model, referred to herein as the “detector,” is trained to estimate animal poses from images in a bottom-up fashion. In particular, the detector may be trained using rendered images depicting animal body parts scattered over realistic backgrounds, as opposed to renderings of full animal bodies. In order to make appearances of the rendered body parts more realistic so that the detector can be trained to estimate poses from images of real animals, the body parts may be rendered using textures that are determined from a translation of rendered images of the animal into corresponding images with more realistic textures via adversarial learning. Three-dimensional poses may also be inferred from estimated joint locations using, e.g., inverse kinematics.
Type: Application
Filed: February 28, 2019
Publication date: September 3, 2020
Inventors: Martin GUAY, Dominik Tobias BORER, Ahmet Cengiz ÖZTIRELI, Robert W. SUMNER, Jakob Joachim BUHMANN
-
Patent number: 10445940
Abstract: A simulation engine models interactions between a simulated character and real-world objects to produce a physically realistic augmented reality (AR) simulation. The simulation engine recognizes a given real-world object and then identifies, within a library of object models, an object model corresponding to that object. The simulation engine projects the object model onto the real-world object such that the object model is geometrically aligned with the real-world object. When the simulated character encounters the real-world object, the simulation engine models interactions between the simulated character and the real-world object by adjusting the kinematics of the simulated character relative to the object model associated with the real-world object.
Type: Grant
Filed: March 15, 2018
Date of Patent: October 15, 2019
Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH
Inventors: Martin Guay, Gökçen Çimen, Dominik Tobias Borer, Simone Guggiari, Ye Yuan, Stelian Coros, Robert Walker Sumner
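The geometric-alignment step this abstract describes can be reduced to its simplest form: translating a library object model so it coincides with the detected real-world object. The sketch below aligns centroids only; a real system would also solve for rotation and scale (e.g. a rigid registration). All names are hypothetical:

```python
import numpy as np

def align_model(model_points, detected_points):
    """Align a library object model with a detected real-world object by
    translating the model so its centroid matches the detection's
    centroid (translation-only alignment, for illustration)."""
    offset = detected_points.mean(axis=0) - model_points.mean(axis=0)
    return model_points + offset

model = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])      # library model
detected = np.array([[5.0, 2.0, 0.0], [6.0, 2.0, 0.0]])   # observed object
aligned = align_model(model, detected)
print(aligned.tolist())  # -> [[5.0, 2.0, 0.0], [6.0, 2.0, 0.0]]
```

Once aligned, the model's geometry gives the simulation engine a collision proxy against which the character's kinematics can be adjusted.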
-
Publication number: 20190287305
Abstract: A simulation engine models interactions between a simulated character and real-world objects to produce a physically realistic augmented reality (AR) simulation. The simulation engine recognizes a given real-world object and then identifies, within a library of object models, an object model corresponding to that object. The simulation engine projects the object model onto the real-world object such that the object model is geometrically aligned with the real-world object. When the simulated character encounters the real-world object, the simulation engine models interactions between the simulated character and the real-world object by adjusting the kinematics of the simulated character relative to the object model associated with the real-world object.
Type: Application
Filed: March 15, 2018
Publication date: September 19, 2019
Inventors: Martin GUAY, Gökçen ÇIMEN, Dominik Tobias BORER, Simone GUGGIARI, Ye YUAN, Stelian COROS, Robert Walker SUMNER