Patents Assigned to ETH ZURICH (EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZURICH)
  • Patent number: 11875441
    Abstract: A modeling engine generates a prediction model that quantifies and predicts secondary dynamics associated with the face of a performer enacting a performance. The modeling engine generates a set of geometric representations that represents the face of the performer enacting different facial expressions under a range of loading conditions. For a given facial expression and specific loading condition, the modeling engine trains a Machine Learning model to predict how soft tissue regions of the face of the performer change in response to external forces applied to the performer during the performance. The modeling engine combines different expression models associated with different facial expressions to generate a prediction model. The prediction model can be used to predict and remove secondary dynamics from a given geometric representation of a performance or to generate and add secondary dynamics to a given geometric representation of a performance.
    Type: Grant
    Filed: October 11, 2022
    Date of Patent: January 16, 2024
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Dominik Thabo Beeler, Derek Edward Bradley, Eftychios Dimitrios Sifakis, Gaspard Zoss
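The pipeline in the abstract above can be illustrated with a minimal sketch: a per-expression ridge regression stands in for the patent's machine-learning model, mapping an external force to soft-tissue vertex displacements, and several expression models are blended into one predictor. All names, dimensions, and the linear-model choice are illustrative assumptions, not taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: for one facial expression, learn a linear map from an
# external force vector (3,) to soft-tissue vertex displacements (n_verts*3,).
n_verts, n_samples = 50, 200
true_map = rng.normal(size=(n_verts * 3, 3))
forces = rng.normal(size=(n_samples, 3))
displacements = forces @ true_map.T + 0.01 * rng.normal(size=(n_samples, n_verts * 3))

# Ridge regression: a simple stand-in for the per-expression model.
lam = 1e-3
A = forces.T @ forces + lam * np.eye(3)
learned_map = np.linalg.solve(A, forces.T @ displacements).T  # (n_verts*3, 3)

def predict_dynamics(force, expression_weights, expression_maps):
    """Blend per-expression models into one prediction model, then apply it."""
    blended = sum(w * m for w, m in zip(expression_weights, expression_maps))
    return blended @ force

# Removing secondary dynamics: subtract the prediction from captured geometry.
secondary = predict_dynamics(np.array([0.0, -9.8, 0.0]), [1.0], [learned_map])
captured = rng.normal(size=n_verts * 3)
stabilized = captured - secondary
```

Adding secondary dynamics is the same operation with the sign flipped: add the predicted displacement to a clean geometric representation instead of subtracting it.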
  • Patent number: 11836860
    Abstract: Methods and systems for performing facial retargeting using a patch-based technique are disclosed. One or more three-dimensional (3D) representations of a source character's (e.g., a human actor's) face can be transferred onto one or more corresponding representations of a target character's (e.g., a cartoon character's) face, enabling filmmakers to transfer a performance by a source character to a target character. The source character's 3D facial shape can be separated into patches. For each patch, a patch combination (representing that patch as a combination of source reference patches) can be determined. The patch combinations and target reference patches can then be used to create target patches corresponding to the target character. The target patches can be combined using an anatomical local model solver to produce a 3D facial shape corresponding to the target character, effectively transferring a facial performance by the source character to the target character.
    Type: Grant
    Filed: January 27, 2022
    Date of Patent: December 5, 2023
    Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Prashanth Chandran, Loïc Florian Ciccone, Derek Edward Bradley
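The per-patch step in the abstract above can be sketched as a least-squares fit: express a source patch as a combination of source reference patches, then apply the same combination weights to the corresponding target reference patches. This is a toy illustration with made-up dimensions; the patent's final step of combining target patches with an anatomical local model solver is not shown.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: each patch is a flat vector of vertex coordinates, and
# k reference expressions are available for both characters.
k, dim_src, dim_tgt = 5, 30, 24
src_refs = rng.normal(size=(k, dim_src))   # source reference patches
tgt_refs = rng.normal(size=(k, dim_tgt))   # corresponding target reference patches

def retarget_patch(src_patch, src_refs, tgt_refs):
    """Express src_patch as a combination of source references, then apply
    the same combination weights to the target references."""
    # Least-squares weights w such that src_refs.T @ w ~= src_patch.
    w, *_ = np.linalg.lstsq(src_refs.T, src_patch, rcond=None)
    return tgt_refs.T @ w  # the combined target patch

# A source patch that is an exact blend of references transfers exactly.
w_true = np.array([0.5, 0.2, 0.1, 0.1, 0.1])
src_patch = src_refs.T @ w_true
tgt_patch = retarget_patch(src_patch, src_refs, tgt_refs)
```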
  • Publication number: 20230260186
    Abstract: Embodiments of the present disclosure are directed to methods and systems for generating three-dimensional (3D) models and facial hair models representative of subjects (e.g., actors or actresses) using facial scanning technology. Methods according to embodiments may be useful for performing facial capture on subjects with dense facial hair. Initial subject facial data, including facial frames and facial performance frames (e.g., images of the subject collected from a capture system) can be used to accurately predict the structure of the subject’s face underneath their facial hair to produce a reference 3D facial shape of the subject. Likewise, image processing techniques can be used to identify facial hairs and generate a reference facial hair model. The reference 3D facial shape and reference facial hair model can subsequently be used to generate performance 3D facial shapes and a performance facial hair model corresponding to a performance by the subject (e.g., reciting dialog).
    Type: Application
    Filed: January 27, 2023
    Publication date: August 17, 2023
    Applicants: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Sebastian Winberg, Prashanth Chandran, Paulo Fabiano Urnau Gotardo, Gaspard Zoss, Derek Edward Bradley
  • Patent number: 11716376
    Abstract: A system for managing non-linear transmedia content data is provided. Memory stores a plurality of transmedia content data items and associated linking data which define time-ordered content links between the plurality of transmedia content data items. The plurality of transmedia content data items are arranged into linked transmedia content subsets comprising different groups of the transmedia content data items and different content links therebetween. A control engine receives one or more instructions to create a new time-ordered content link between at least two of the plurality of transmedia content data items. The control engine modifies the linking data stored in the memory to include the new time-ordered content link.
    Type: Grant
    Filed: September 26, 2016
    Date of Patent: August 1, 2023
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Max Grosse, Barbara Solenthaler, Peter Kaufmann, Markus Gross, Sasha Schriber
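The claimed data layout above — content items plus modifiable, time-ordered links grouped into subsets — can be sketched as a small directed graph store. All class and method names here are illustrative, not from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class TransmediaStore:
    items: dict = field(default_factory=dict)  # item_id -> content payload
    links: set = field(default_factory=set)    # (from_id, to_id) time-ordered links

    def add_item(self, item_id, payload):
        self.items[item_id] = payload

    def add_link(self, from_id, to_id):
        """Modify the linking data to include a new time-ordered content link."""
        if from_id not in self.items or to_id not in self.items:
            raise KeyError("both items must exist before linking")
        self.links.add((from_id, to_id))

    def subset_after(self, item_id):
        """A linked subset: items reachable from item_id following links forward."""
        seen, stack = set(), [item_id]
        while stack:
            cur = stack.pop()
            for a, b in self.links:
                if a == cur and b not in seen:
                    seen.add(b)
                    stack.append(b)
        return seen

store = TransmediaStore()
for i, name in enumerate(["intro_video", "comic_ch1", "game_level1"]):
    store.add_item(i, name)
store.add_link(0, 1)
store.add_link(1, 2)
```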
  • Publication number: 20230237739
    Abstract: Methods and systems for performing facial retargeting using a patch-based technique are disclosed. One or more three-dimensional (3D) representations of a source character's (e.g., a human actor's) face can be transferred onto one or more corresponding representations of a target character's (e.g., a cartoon character's) face, enabling filmmakers to transfer a performance by a source character to a target character. The source character's 3D facial shape can be separated into patches. For each patch, a patch combination (representing that patch as a combination of source reference patches) can be determined. The patch combinations and target reference patches can then be used to create target patches corresponding to the target character. The target patches can be combined using an anatomical local model solver to produce a 3D facial shape corresponding to the target character, effectively transferring a facial performance by the source character to the target character.
    Type: Application
    Filed: January 27, 2022
    Publication date: July 27, 2023
    Applicants: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Prashanth Chandran, Loïc Florian Ciccone, Derek Edward Bradley
  • Publication number: 20230237753
    Abstract: Embodiments of the present disclosure are directed to methods and systems for generating three-dimensional (3D) models and facial hair models representative of subjects (e.g., actors or actresses) using facial scanning technology. Methods according to embodiments may be useful for performing facial capture on subjects with dense facial hair. Initial subject facial data, including facial frames and facial performance frames (e.g., images of the subject collected from a capture system) can be used to accurately predict the structure of the subject's face underneath their facial hair to produce a reference 3D facial shape of the subject. Likewise, image processing techniques can be used to identify facial hairs and generate a reference facial hair model. The reference 3D facial shape and reference facial hair model can subsequently be used to generate performance 3D facial shapes and a performance facial hair model corresponding to a performance by the subject (e.g., reciting dialog).
    Type: Application
    Filed: January 27, 2023
    Publication date: July 27, 2023
    Applicants: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Derek Edward Bradley, Paulo Fabiano Urnau Gotardo, Gaspard Zoss, Prashanth Chandran, Sebastian Winberg
  • Patent number: 11704853
    Abstract: Techniques are disclosed for learning a machine learning model that maps control data, such as renderings of skeletons, and associated three-dimensional (3D) information to two-dimensional (2D) renderings of a character. The machine learning model may be an adaptation of the U-Net architecture that accounts for 3D information and is trained using a perceptual loss between images generated by the machine learning model and ground truth images. Once trained, the machine learning model may be used to animate a character, such as in the context of previsualization or a video game, based on control of associated control points.
    Type: Grant
    Filed: July 15, 2019
    Date of Patent: July 18, 2023
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Dominik Tobias Borer, Martin Guay, Jakob Joachim Buhmann, Robert Walker Sumner
  • Patent number: 11669723
    Abstract: A system includes a computing platform having a hardware processor and a memory storing a software code and a neural network (NN) having multiple layers including a last activation layer and a loss layer. The hardware processor executes the software code to identify different combinations of layers for testing the NN, each combination including candidate function(s) for the last activation layer and candidate function(s) for the loss layer. For each different combination, the software code configures the NN based on the combination, inputs, into the configured NN, a training dataset including multiple data objects, receives, from the configured NN, a classification of the data objects, and generates a performance assessment for the combination based on the classification. The software code determines a preferred combination of layers for the NN including selected candidate functions for the last activation layer and the loss layer, based on a comparison of the performance assessments.
    Type: Grant
    Filed: September 16, 2022
    Date of Patent: June 6, 2023
    Assignees: Disney Enterprises, Inc., ETH Zürich (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Hayko Jochen Wilhelm Riemenschneider, Leonhard Markus Helminger, Christopher Richard Schroers, Abdelaziz Djelouah
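The layer-combination search above can be sketched as a grid search: configure a toy network with each (last activation, loss) candidate pair, assess it on a dataset, and keep the best combination. The one-layer "network" and candidate functions below are illustrative stand-ins, not from the patent.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)

# Toy binary-classification data and a fixed single-layer "network";
# only the last activation and the loss layer vary.
X = rng.normal(size=(100, 4))
w_true = np.array([1.0, -2.0, 0.5, 0.0])
y = (X @ w_true > 0).astype(float)
w = w_true + 0.3 * rng.normal(size=4)  # imperfect learned weights

activations = {
    "sigmoid": lambda z: 1.0 / (1.0 + np.exp(-z)),
    "tanh01": lambda z: 0.5 * (np.tanh(z) + 1.0),
}
losses = {
    "bce": lambda p, t: -np.mean(t * np.log(p + 1e-9) + (1 - t) * np.log(1 - p + 1e-9)),
    "mse": lambda p, t: np.mean((p - t) ** 2),
}

# Generate a performance assessment for every candidate combination,
# then select the preferred one by comparing the assessments.
assessments = {}
for a_name, l_name in product(activations, losses):
    p = activations[a_name](X @ w)          # classification by the configured net
    assessments[(a_name, l_name)] = losses[l_name](p, y)

best_combo = min(assessments, key=assessments.get)
```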
  • Patent number: 11669999
    Abstract: In various embodiments, a training application generates training items for three-dimensional (3D) pose estimation. The training application generates multiple posed 3D models based on multiple 3D poses and a 3D model of a person wearing a costume that is associated with multiple visual attributes. For each posed 3D model, the training application performs rendering operation(s) to generate synthetic image(s). For each synthetic image, the training application generates a training item based on the synthetic image and the 3D pose associated with the posed 3D model from which the synthetic image was rendered. The synthetic images are included in a synthetic training dataset that is tailored for training a machine-learning model to compute estimated 3D poses of persons from two-dimensional (2D) input images. Advantageously, the synthetic training dataset can be used to train the machine-learning model to accurately infer the orientations of persons across a wide range of environments.
    Type: Grant
    Filed: May 26, 2020
    Date of Patent: June 6, 2023
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Martin Guay, Maurizio Nitti, Jakob Joachim Buhmann, Dominik Tobias Borer
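The training-item generation above can be sketched end to end if the rendering operation is reduced to a toy orthographic projection of joint positions into a blank image (a real renderer with a costumed 3D model is assumed in the patent; everything below is a simplified stand-in).

```python
import numpy as np

rng = np.random.default_rng(8)

n_joints, img_size = 5, 32

def render(pose_3d):
    """Toy 'renderer': orthographic projection of 3D joints into a 2D image."""
    img = np.zeros((img_size, img_size))
    # Drop z and map x, y from [-1, 1] to pixel coordinates.
    px = np.clip(((pose_3d[:, :2] + 1) / 2 * (img_size - 1)).astype(int),
                 0, img_size - 1)
    img[px[:, 1], px[:, 0]] = 1.0
    return img

# Each training item pairs a synthetic image with the 3D pose it was
# rendered from, giving a supervised dataset for 3D pose estimation.
poses = rng.uniform(-1, 1, size=(10, n_joints, 3))
training_set = [(render(p), p) for p in poses]
```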
  • Patent number: 11645813
    Abstract: Techniques are disclosed for creating digital faces. In some examples, an anatomical face model is generated from a data set including captured facial geometries of different individuals and associated bone geometries. A model generator segments each of the captured facial geometries into patches, compresses the segmented geometry associated with each patch to determine local deformation subspaces of the anatomical face model, and determines corresponding compressed anatomical subspaces of the anatomical face model. A sculpting application determines, based on sculpting input from a user, constraints for an optimization to determine parameter values associated with the anatomical face model. The parameter values can be used, along with the anatomical face model, to generate facial geometry that reflects the sculpting input.
    Type: Grant
    Filed: July 6, 2020
    Date of Patent: May 9, 2023
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Aurel Gruber, Marco Fratarcangeli, Derek Edward Bradley, Gaspard Zoss, Dominik Thabo Beeler
  • Patent number: 11615555
    Abstract: A method of generating a training data set for training an image matting machine learning model includes receiving a plurality of foreground images, generating a plurality of composited foreground images by compositing randomly selected foreground images from the plurality of foreground images, and generating a plurality of training images by compositing each composited foreground image with a randomly selected background image. The training data set includes the plurality of training images.
    Type: Grant
    Filed: April 9, 2021
    Date of Patent: March 28, 2023
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Tunc Ozan Aydin, Ahmet Cengiz Öztireli, Jingwei Tang, Yagiz Aksoy
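The compositing pipeline above can be sketched with standard premultiplied-alpha "over" compositing: merge two randomly selected foregrounds into one layer, then place that layer over a randomly selected background, yielding a training image paired with its ground-truth matte. Shapes and data below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def make_training_image(foregrounds, alphas, backgrounds, rng):
    """Composite two randomly chosen foregrounds into one layer, then place
    it over a randomly chosen background (a sketch of the claimed pipeline)."""
    i, j = rng.choice(len(foregrounds), size=2, replace=False)
    # Premultiplied-alpha 'over' compositing of foreground i over foreground j.
    pre_i = alphas[i][..., None] * foregrounds[i]
    pre_j = alphas[j][..., None] * foregrounds[j]
    merged_pre = pre_i + (1.0 - alphas[i][..., None]) * pre_j
    merged_alpha = alphas[i] + (1.0 - alphas[i]) * alphas[j]
    bg = backgrounds[rng.integers(len(backgrounds))]
    image = merged_pre + (1.0 - merged_alpha[..., None]) * bg
    return image, merged_alpha  # training input and its ground-truth matte

foregrounds = rng.uniform(0, 1, size=(4, 8, 8, 3))
alphas = rng.uniform(0, 1, size=(4, 8, 8))
backgrounds = rng.uniform(0, 1, size=(2, 8, 8, 3))
image, matte = make_training_image(foregrounds, alphas, backgrounds, rng)
```

Repeating the call with different random draws yields the plurality of training images that makes up the dataset.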
  • Patent number: 11587276
    Abstract: A modeling engine generates a prediction model that quantifies and predicts secondary dynamics associated with the face of a performer enacting a performance. The modeling engine generates a set of geometric representations that represents the face of the performer enacting different facial expressions under a range of loading conditions. For a given facial expression and specific loading condition, the modeling engine trains a Machine Learning model to predict how soft tissue regions of the face of the performer change in response to external forces applied to the performer during the performance. The modeling engine combines different expression models associated with different facial expressions to generate a prediction model. The prediction model can be used to predict and remove secondary dynamics from a given geometric representation of a performance or to generate and add secondary dynamics to a given geometric representation of a performance.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: February 21, 2023
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Dominik Thabo Beeler, Derek Edward Bradley, Eftychios Dimitrios Sifakis, Gaspard Zoss
  • Patent number: 11574209
    Abstract: A system for hyper-dimensional computing for inference tasks may be provided. The device comprises an item memory for storing hyper-dimensional item vectors, a query transformation unit connected to the item memory, the query transformation unit being adapted for forming a hyper-dimensional query vector from a query input and hyper-dimensional base vectors stored in the item memory, and an associative memory adapted for storing a plurality of hyper-dimensional profile vectors and for determining a distance between the hyper-dimensional query vector and the plurality of hyper-dimensional profile vectors, wherein the item memory and the associative memory are adapted for in-memory computing using memristive devices.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: February 7, 2023
    Assignees: International Business Machines Corporation, ETH ZURICH (EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZURICH)
    Inventors: Kumudu Geethan Karunaratne, Manuel Le Gallo-Bourdeau, Giovanni Cherubini, Abu Sebastian, Abbas Rahimi, Luca Benini
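The hyper-dimensional computing flow above can be sketched in software: an item memory of random bipolar hypervectors, a query transformation that bundles base vectors into a query vector, and an associative memory that returns the profile vector closest to the query. The in-memory/memristive hardware aspect is not modeled; symbols and profiles below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
D = 10_000  # hyper-dimensional vector width

def hv(rng):
    """Random bipolar hypervector (software stand-in for the item memory)."""
    return rng.choice([-1, 1], size=D)

# Item memory: one base hypervector per query symbol.
item_memory = {sym: hv(rng) for sym in "abc"}

def encode_query(symbols):
    """Query transformation: bundle base vectors into one query hypervector
    (an odd symbol count avoids sign ties)."""
    return np.sign(np.sum([item_memory[x] for x in symbols], axis=0))

# Associative memory: one profile hypervector per class (here built by
# bundling a few vectors; purely illustrative).
profiles = {
    "class_ab": np.sign(item_memory["a"] + item_memory["b"] + hv(rng)),
    "class_c": np.sign(item_memory["c"] + hv(rng) + hv(rng)),
}

def classify(symbols):
    """Inference: distance (here, normalized dot product) between the query
    vector and each stored profile vector; return the closest profile."""
    q = encode_query(symbols)
    sims = {name: (q @ p) / D for name, p in profiles.items()}
    return max(sims, key=sims.get)
```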
  • Patent number: 11568524
    Abstract: Techniques are disclosed for changing the identities of faces in images. In embodiments, a tunable model for changing facial identities in images includes an encoder, a decoder, and dense layers that generate either adaptive instance normalization (AdaIN) coefficients that control the operation of convolution layers in the decoder or the values of weights within such convolution layers, allowing the model to change the identity of a face in an image based on a user selection. A separate set of dense layers may be trained to generate AdaIN coefficients for each of a number of facial identities, and the AdaIN coefficients output by different sets of dense layers can be combined to interpolate between facial identities. Alternatively, a single set of dense layers may be trained to take as input an identity vector and output AdaIN coefficients or values of weights within convolution layers of the decoder.
    Type: Grant
    Filed: April 16, 2020
    Date of Patent: January 31, 2023
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Leonard Markus Helminger, Jacek Krzysztof Naruniec, Romann Matthew Weber, Christopher Richard Schroers
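The AdaIN mechanism above can be sketched directly: normalize each channel of a decoder feature map, then rescale with identity-specific coefficients, and interpolate coefficient sets to morph between identities. The feature map and coefficients below are random placeholders for what the patent's encoder and dense layers would produce.

```python
import numpy as np

rng = np.random.default_rng(5)

def adain(features, gamma, beta, eps=1e-5):
    """Adaptive instance normalization: normalize each channel of a feature
    map, then rescale with identity-specific coefficients (gamma, beta)."""
    mean = features.mean(axis=(1, 2), keepdims=True)
    std = features.std(axis=(1, 2), keepdims=True)
    normed = (features - mean) / (std + eps)
    return gamma[:, None, None] * normed + beta[:, None, None]

# Hypothetical decoder feature map (channels, height, width) and per-identity
# AdaIN coefficients, standing in for the outputs of the dense layers.
feat = rng.normal(size=(8, 4, 4))
id_a = {"gamma": rng.uniform(0.5, 1.5, 8), "beta": rng.normal(size=8)}
id_b = {"gamma": rng.uniform(0.5, 1.5, 8), "beta": rng.normal(size=8)}

def blend(id_x, id_y, t):
    """Combine AdaIN coefficients to interpolate between facial identities."""
    return {k: (1 - t) * id_x[k] + t * id_y[k] for k in id_x}

out_a = adain(feat, **id_a)
out_mid = adain(feat, **blend(id_a, id_b, 0.5))
```

After AdaIN, each channel's statistics match the chosen identity's coefficients: the channel mean equals beta and the channel standard deviation approximately equals gamma, which is how the coefficients steer the decoder toward one identity.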
  • Patent number: 11568212
    Abstract: In various embodiments, a relevance application quantifies how a trained neural network operates. In operation, the relevance application generates a set of input distributions based on a set of input points associated with the trained neural network. Each input distribution is characterized by a mean and a variance associated with a different neuron included in the trained neural network. The relevance application propagates the set of input distributions through a probabilistic neural network to generate at least a first output distribution. The probabilistic neural network is derived from at least a portion of the trained neural network. Based on the first output distribution, the relevance application computes a contribution of a first input point included in the set of input points to a difference between a first output point associated with a first output of the trained neural network and an estimated mean prediction associated with the first output.
    Type: Grant
    Filed: August 6, 2019
    Date of Patent: January 31, 2023
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Ahmet Cengiz Öztireli, Markus Gross, Marco Ancona
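The distribution propagation above can be sketched for a single linear layer under an independence assumption: for y = Wx + b with elementwise input means and variances, the output mean is W @ mu + b and the output variance is (W**2) @ var. A simple contribution measure then fixes one input at its observed value and compares the re-propagated output mean against the estimated mean prediction. Numbers below are made up; the patent covers full probabilistic networks, not just one layer.

```python
import numpy as np

def propagate(W, b, mu, var):
    """Moment propagation through a linear layer with independent inputs."""
    return W @ mu + b, (W ** 2) @ var

W = np.array([[1.0, -2.0, 0.5]])
b = np.array([0.1])
mu = np.array([0.0, 1.0, 2.0])     # input means
var = np.array([1.0, 4.0, 0.25])   # input variances

mean_out, var_out = propagate(W, b, mu, var)

# Contribution of input 0: fix it at an observed value (zero variance),
# re-propagate, and compare against the estimated mean prediction.
x0_observed = 2.0
mu_fixed, var_fixed = mu.copy(), var.copy()
mu_fixed[0], var_fixed[0] = x0_observed, 0.0
mean_fixed, _ = propagate(W, b, mu_fixed, var_fixed)
contribution = mean_fixed[0] - mean_out[0]
```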
  • Patent number: 11570397
    Abstract: One embodiment of the present invention sets forth a technique for performing deinterlacing. The technique includes separating a first interlaced video frame into a first sequence of fields ordered by time, the first sequence of fields including a first field. The technique also includes generating, by applying a deinterlacing network to a first field in the first sequence, a second field that is missing from the first sequence of fields and is complementary to the first field. The technique further includes constructing a progressive video frame based on the first field and the second field.
    Type: Grant
    Filed: July 10, 2020
    Date of Patent: January 31, 2023
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Michael Bernasconi, Daniel Konrad Dorda, Abdelaziz Djelouah, Shinobu Hattori, Christopher Richard Schroers
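The field handling in the technique above can be sketched with array slicing: split an interlaced frame into even/odd line fields, synthesize the complementary field (a simple intra-field interpolation stands in for the patent's deinterlacing network), and weave the two fields into a progressive frame.

```python
import numpy as np

def split_fields(frame):
    """Separate an interlaced frame into its even and odd line fields."""
    return frame[0::2], frame[1::2]

def predict_complementary(field):
    """Stand-in for the deinterlacing network: synthesize the missing
    complementary field by averaging vertically adjacent lines."""
    padded = np.vstack([field, field[-1:]])  # repeat last line at the border
    return 0.5 * (padded[:-1] + padded[1:])

def weave(top_field, bottom_field):
    """Construct a progressive frame by interleaving the two fields."""
    h = top_field.shape[0] + bottom_field.shape[0]
    frame = np.empty((h, top_field.shape[1]))
    frame[0::2] = top_field
    frame[1::2] = bottom_field
    return frame

# A vertical ramp image: linear interpolation recovers interior lines exactly.
frame = np.arange(32, dtype=float).reshape(8, 4)
even, odd = split_fields(frame)
progressive = weave(even, predict_complementary(even))
```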
  • Publication number: 20220327717
    Abstract: Some implementations of the disclosure are directed to capturing facial training data for one or more subjects, the captured facial training data including each of the one or more subject's facial skin geometry tracked over a plurality of times and the subject's corresponding jaw poses for each of those plurality of times; and using the captured facial training data to create a model that provides a mapping from skin motion to jaw motion. Additional implementations of the disclosure are directed to determining a facial skin geometry of a subject; using a model that provides a mapping from skin motion to jaw motion to predict a motion of the subject's jaw from a rest pose given the facial skin geometry; and determining a jaw pose of the subject using the predicted motion of the subject's jaw.
    Type: Application
    Filed: June 28, 2022
    Publication date: October 13, 2022
    Applicants: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Dominik Thabo Beeler, Derek Edward Bradley, Gaspard Zoss
  • Patent number: 11393107
    Abstract: Some implementations of the disclosure are directed to capturing facial training data for one or more subjects, the captured facial training data including each of the one or more subject's facial skin geometry tracked over a plurality of times and the subject's corresponding jaw poses for each of those plurality of times; and using the captured facial training data to create a model that provides a mapping from skin motion to jaw motion. Additional implementations of the disclosure are directed to determining a facial skin geometry of a subject; using a model that provides a mapping from skin motion to jaw motion to predict a motion of the subject's jaw from a rest pose given the facial skin geometry; and determining a jaw pose of the subject using the predicted motion of the subject's jaw.
    Type: Grant
    Filed: July 12, 2019
    Date of Patent: July 19, 2022
    Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Dominik Thabo Beeler, Derek Edward Bradley, Gaspard Zoss
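The skin-to-jaw mapping in the two entries above can be sketched as a regression fit on captured training pairs: skin-vertex motion in, jaw motion from the rest pose out. A plain least-squares model stands in for whatever learned mapping the patent uses; all dimensions and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical training capture: per-frame skin motion vectors paired with
# ground-truth jaw motion (a 6-DoF delta from the rest pose).
n_frames, n_skin = 300, 60
true_map = rng.normal(size=(6, n_skin))
skin_motion = rng.normal(size=(n_frames, n_skin))
jaw_motion = skin_motion @ true_map.T + 0.01 * rng.normal(size=(n_frames, 6))

# Fit the mapping from skin motion to jaw motion by least squares.
mapping, *_ = np.linalg.lstsq(skin_motion, jaw_motion, rcond=None)

def predict_jaw(skin):
    """Predict the jaw's motion from the rest pose given skin motion; the
    jaw pose is the rest pose composed with this predicted motion."""
    return skin @ mapping

pred = predict_jaw(skin_motion[0])
```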
  • Patent number: 11335051
    Abstract: Techniques for animation are provided. A first trajectory for a first element in a first animation is determined. A first approximation is generated based on the first trajectory, and the first approximation is modified based on an updated state of the first element. The first trajectory is then refined based on the modified first approximation.
    Type: Grant
    Filed: March 11, 2020
    Date of Patent: May 17, 2022
    Assignees: Disney Enterprises, Inc., ETH Zurich (Eidgenoessische Technische Hochschule Zurich)
    Inventors: Robert W. Sumner, Alba Maria Rios Rodriguez, Maurizio Nitti, Mattia Ryffel, Steven C. Poulakos
  • Patent number: 11276231
    Abstract: Techniques are disclosed for training and applying nonlinear face models. In embodiments, a nonlinear face model includes an identity encoder, an expression encoder, and a decoder. The identity encoder takes as input a representation of a facial identity, such as a neutral face mesh minus a reference mesh, and outputs a code associated with the facial identity. The expression encoder takes as input a representation of a target expression, such as a set of blendweight values, and outputs a code associated with the target expression. The codes associated with the facial identity and the facial expression can be concatenated and input into the decoder, which outputs a representation of a face having the facial identity and expression. The representation of the face can include vertex displacements for deforming the reference mesh.
    Type: Grant
    Filed: March 4, 2020
    Date of Patent: March 15, 2022
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Prashanth Chandran, Dominik Thabo Beeler, Derek Edward Bradley
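The code-concatenation structure above can be sketched with toy linear "encoders" and "decoder" (the real model uses learned nonlinear networks): encode an identity delta and a blendweight vector into codes, concatenate them, and decode per-vertex displacements that deform the reference mesh. All weight matrices and dimensions below are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(7)

id_dim, expr_dim, n_verts = 4, 3, 20

def encode_identity(neutral_minus_reference, W_id):
    """Toy identity encoder: identity delta -> identity code."""
    return W_id @ neutral_minus_reference

def encode_expression(blendweights, W_ex):
    """Toy expression encoder: blendweight values -> expression code."""
    return W_ex @ blendweights

def decode(code, W_dec):
    """Toy decoder: concatenated code -> per-vertex displacements."""
    return (W_dec @ code).reshape(n_verts, 3)

W_id = rng.normal(size=(id_dim, n_verts * 3))
W_ex = rng.normal(size=(expr_dim, 5))
W_dec = rng.normal(size=(n_verts * 3, id_dim + expr_dim))

reference = rng.normal(size=(n_verts, 3))
neutral = reference + 0.1 * rng.normal(size=(n_verts, 3))
blendweights = np.array([0.8, 0.0, 0.2, 0.0, 0.0])

# Concatenate the identity and expression codes, decode displacements,
# and deform the reference mesh into the final face.
code = np.concatenate([
    encode_identity((neutral - reference).ravel(), W_id),
    encode_expression(blendweights, W_ex),
])
face = reference + decode(code, W_dec)
```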