Patents by Inventor Dominik Thabo Beeler

Dominik Thabo Beeler has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11875441
    Abstract: A modeling engine generates a prediction model that quantifies and predicts secondary dynamics associated with the face of a performer enacting a performance. The modeling engine generates a set of geometric representations that represents the face of the performer enacting different facial expressions under a range of loading conditions. For a given facial expression and specific loading condition, the modeling engine trains a Machine Learning model to predict how soft tissue regions of the face of the performer change in response to external forces applied to the performer during the performance. The modeling engine combines different expression models associated with different facial expressions to generate a prediction model. The prediction model can be used to predict and remove secondary dynamics from a given geometric representation of a performance or to generate and add secondary dynamics to a given geometric representation of a performance.
    Type: Grant
    Filed: October 11, 2022
    Date of Patent: January 16, 2024
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Dominik Thabo Beeler, Derek Edward Bradley, Eftychios Dimitrios Sifakis, Gaspard Zoss
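The abstract above describes fitting one model per facial expression that maps external loading to soft-tissue displacements, then blending the per-expression models into a single prediction model. A minimal sketch of that structure, using plain least-squares fits as a stand-in for the trained machine learning models (all names and dimensions are illustrative, not from the patent):

```python
import numpy as np

def fit_expression_model(forces, displacements):
    """Least-squares map from external force to per-vertex displacement
    for one facial expression under a range of loading conditions."""
    # forces: (n_samples, 3), displacements: (n_samples, n_vertices * 3)
    W, *_ = np.linalg.lstsq(forces, displacements, rcond=None)
    return W  # (3, n_vertices * 3)

def predict_secondary_dynamics(models, weights, force):
    """Blend per-expression models by expression weights, then predict
    the soft-tissue displacement caused by an external force."""
    W = sum(w * M for w, M in zip(weights, models))
    return force @ W

rng = np.random.default_rng(0)
n_vertices = 5
smile_W = fit_expression_model(rng.normal(size=(20, 3)),
                               rng.normal(size=(20, n_vertices * 3)))
neutral_W = fit_expression_model(rng.normal(size=(20, 3)),
                                 rng.normal(size=(20, n_vertices * 3)))

delta = predict_secondary_dynamics([smile_W, neutral_W], [0.7, 0.3],
                                   np.array([0.0, -9.8, 0.0]))
# Subtracting delta from captured geometry would remove secondary dynamics;
# adding it to a static geometry would synthesize them.
```

The blend-then-predict step mirrors how the abstract combines different expression models into one prediction model.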
  • Patent number: 11645813
    Abstract: Techniques are disclosed for creating digital faces. In some examples, an anatomical face model is generated from a data set including captured facial geometries of different individuals and associated bone geometries. A model generator segments each of the captured facial geometries into patches, compresses the segmented geometry associated with each patch to determine local deformation subspaces of the anatomical face model, and determines corresponding compressed anatomical subspaces of the anatomical face model. A sculpting application determines, based on sculpting input from a user, constraints for an optimization to determine parameter values associated with the anatomical face model. The parameter values can be used, along with the anatomical face model, to generate facial geometry that reflects the sculpting input.
    Type: Grant
    Filed: July 6, 2020
    Date of Patent: May 9, 2023
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Aurel Gruber, Marco Fratarcangeli, Derek Edward Bradley, Gaspard Zoss, Dominik Thabo Beeler
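The patch-wise compression described above can be illustrated with per-patch PCA: each patch's geometry across identities is compressed into a local deformation subspace, and sculpting input becomes a constraint solved for subspace coefficients. A hedged sketch under those assumptions (the patch size, subspace rank, and variable names are invented for illustration):

```python
import numpy as np

def patch_subspace(patch_geometries, k):
    """PCA-compress one patch's geometry across identities into a
    local deformation subspace of rank k."""
    mean = patch_geometries.mean(axis=0)
    centered = patch_geometries - mean
    # SVD rows of Vt are the principal deformation directions of this patch
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return mean, Vt[:k]

rng = np.random.default_rng(1)
geoms = rng.normal(size=(30, 12))   # 30 identities, 4 vertices * 3 coords
mean, basis = patch_subspace(geoms, k=3)

# Sculpting step: find subspace coefficients that best match a
# user-edited target position for the patch (a least-squares constraint).
target = rng.normal(size=12)
coeffs, *_ = np.linalg.lstsq(basis.T, target - mean, rcond=None)
reconstructed = mean + coeffs @ basis
```

A full anatomical model would couple many such patch subspaces with bone-geometry constraints; this shows only the compress-and-solve pattern for a single patch.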
  • Publication number: 20230065700
    Abstract: A modeling engine generates a prediction model that quantifies and predicts secondary dynamics associated with the face of a performer enacting a performance. The modeling engine generates a set of geometric representations that represents the face of the performer enacting different facial expressions under a range of loading conditions. For a given facial expression and specific loading condition, the modeling engine trains a Machine Learning model to predict how soft tissue regions of the face of the performer change in response to external forces applied to the performer during the performance. The modeling engine combines different expression models associated with different facial expressions to generate a prediction model. The prediction model can be used to predict and remove secondary dynamics from a given geometric representation of a performance or to generate and add secondary dynamics to a given geometric representation of a performance.
    Type: Application
    Filed: October 11, 2022
    Publication date: March 2, 2023
    Inventors: Dominik Thabo BEELER, Derek Edward BRADLEY, Eftychios Dimitrios SIFAKIS, Gaspard ZOSS
  • Patent number: 11587276
    Abstract: A modeling engine generates a prediction model that quantifies and predicts secondary dynamics associated with the face of a performer enacting a performance. The modeling engine generates a set of geometric representations that represents the face of the performer enacting different facial expressions under a range of loading conditions. For a given facial expression and specific loading condition, the modeling engine trains a Machine Learning model to predict how soft tissue regions of the face of the performer change in response to external forces applied to the performer during the performance. The modeling engine combines different expression models associated with different facial expressions to generate a prediction model. The prediction model can be used to predict and remove secondary dynamics from a given geometric representation of a performance or to generate and add secondary dynamics to a given geometric representation of a performance.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: February 21, 2023
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Dominik Thabo Beeler, Derek Edward Bradley, Eftychios Dimitrios Sifakis, Gaspard Zoss
  • Publication number: 20220327717
    Abstract: Some implementations of the disclosure are directed to capturing facial training data for one or more subjects, the captured facial training data including each of the one or more subject's facial skin geometry tracked over a plurality of times and the subject's corresponding jaw poses for each of those plurality of times; and using the captured facial training data to create a model that provides a mapping from skin motion to jaw motion. Additional implementations of the disclosure are directed to determining a facial skin geometry of a subject; using a model that provides a mapping from skin motion to jaw motion to predict a motion of the subject's jaw from a rest pose given the facial skin geometry; and determining a jaw pose of the subject using the predicted motion of the subject's jaw.
    Type: Application
    Filed: June 28, 2022
    Publication date: October 13, 2022
    Applicants: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Dominik Thabo Beeler, Derek Edward Bradley, Gaspard Zoss
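The skin-to-jaw mapping above can be sketched as a regression from tracked skin motion to a 6-DoF jaw pose (three rotation plus three translation parameters). This is a minimal stand-in for the learned model, with marker counts and dimensions chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

n_frames, n_markers = 200, 10
# Training data: skin-marker displacements from rest, with the
# corresponding jaw pose for each frame
skin_motion = rng.normal(size=(n_frames, n_markers * 3))
true_map = rng.normal(size=(n_markers * 3, 6))
jaw_poses = skin_motion @ true_map

# Create the mapping from skin motion to jaw motion via least squares
learned_map, *_ = np.linalg.lstsq(skin_motion, jaw_poses, rcond=None)

# Predict the jaw pose for a newly observed facial skin geometry
new_skin = rng.normal(size=n_markers * 3)
predicted_jaw = new_skin @ learned_map   # rotation + translation from rest
```

With noiseless synthetic data and more frames than unknowns, the least-squares fit recovers the generating map exactly; real capture data would of course be noisy and call for the trained model the abstract describes.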
  • Patent number: 11393107
    Abstract: Some implementations of the disclosure are directed to capturing facial training data for one or more subjects, the captured facial training data including each of the one or more subject's facial skin geometry tracked over a plurality of times and the subject's corresponding jaw poses for each of those plurality of times; and using the captured facial training data to create a model that provides a mapping from skin motion to jaw motion. Additional implementations of the disclosure are directed to determining a facial skin geometry of a subject; using a model that provides a mapping from skin motion to jaw motion to predict a motion of the subject's jaw from a rest pose given the facial skin geometry; and determining a jaw pose of the subject using the predicted motion of the subject's jaw.
    Type: Grant
    Filed: July 12, 2019
    Date of Patent: July 19, 2022
    Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Dominik Thabo Beeler, Derek Edward Bradley, Gaspard Zoss
  • Publication number: 20220092293
    Abstract: Techniques are disclosed for capturing facial appearance properties. In some examples, a facial capture system includes light source(s) that produce linearly polarized light, at least one camera that is cross-polarized with respect to the polarization of light produced by the light source(s), and at least one other camera that is not cross-polarized with respect to the polarization of the light produced by the light source(s). Images captured by the cross-polarized camera(s) are used to determine facial appearance properties other than specular intensity, such as diffuse albedo, while images captured by the camera(s) that are not cross-polarized are used to determine facial appearance properties including specular intensity. In addition, a coarse-to-fine optimization procedure is disclosed for determining appearance and detailed geometry maps based on images captured by the cross-polarized camera(s) and the camera(s) that are not cross-polarized.
    Type: Application
    Filed: December 2, 2021
    Publication date: March 24, 2022
    Inventors: Jeremy RIVIERE, Paulo Fabiano URNAU GOTARDO, Abhijeet GHOSH, Derek Edward BRADLEY, Dominik Thabo BEELER
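The core idea above exploits that a cross-polarized camera blocks specular reflection, so its image approximates the diffuse component, while a non-cross-polarized view contains both. A minimal sketch of that separation on toy pixel values (the clipping and the function name are illustrative assumptions):

```python
import numpy as np

def separate_appearance(cross_polarized, unpolarized):
    """Cross-polarized pixels see (approximately) only diffuse reflection,
    so the residual against a non-cross-polarized view estimates
    specular intensity."""
    diffuse = cross_polarized
    specular = np.clip(unpolarized - diffuse, 0.0, None)
    return diffuse, specular

cross = np.array([[0.4, 0.4], [0.2, 0.5]])
unpol = np.array([[0.9, 0.4], [0.3, 0.5]])
diffuse, specular = separate_appearance(cross, unpol)
```

The patent's coarse-to-fine optimization would refine maps like these jointly with detailed geometry; this only shows the polarization-difference principle.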
  • Patent number: 11276231
    Abstract: Techniques are disclosed for training and applying nonlinear face models. In embodiments, a nonlinear face model includes an identity encoder, an expression encoder, and a decoder. The identity encoder takes as input a representation of a facial identity, such as a neutral face mesh minus a reference mesh, and outputs a code associated with the facial identity. The expression encoder takes as input a representation of a target expression, such as a set of blendweight values, and outputs a code associated with the target expression. The codes associated with the facial identity and the facial expression can be concatenated and input into the decoder, which outputs a representation of a face having the facial identity and expression. The representation of the face can include vertex displacements for deforming the reference mesh.
    Type: Grant
    Filed: March 4, 2020
    Date of Patent: March 15, 2022
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Prashanth Chandran, Dominik Thabo Beeler, Derek Edward Bradley
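The encode-concatenate-decode structure described above can be shown with a toy forward pass. Single linear-plus-tanh layers stand in for the trained encoders and decoder; all weights, code sizes, and inputs here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def encoder(x, W):
    """One linear layer with tanh, standing in for a trained encoder."""
    return np.tanh(x @ W)

n_verts = 4
W_id = rng.normal(size=(n_verts * 3, 8))   # identity encoder weights
W_ex = rng.normal(size=(5, 8))             # expression encoder weights
W_dec = rng.normal(size=(16, n_verts * 3)) # decoder weights

neutral_minus_ref = rng.normal(size=n_verts * 3)  # neutral mesh - reference
blendweights = rng.normal(size=5)                 # target expression

# Concatenate the identity and expression codes, then decode
code = np.concatenate([encoder(neutral_minus_ref, W_id),
                       encoder(blendweights, W_ex)])
vertex_displacements = code @ W_dec   # displacements applied to the reference
```

The output matches the abstract's description: vertex displacements that deform the reference mesh into a face with the given identity and expression.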
  • Patent number: 11257276
    Abstract: Techniques are disclosed for generating digital faces. In some examples, a style-based generator receives as inputs initial tensor(s) and style vector(s) corresponding to user-selected semantic attribute styles, such as the desired expression, gender, age, identity, and/or ethnicity of a digital face. The style-based generator is trained to process such inputs and output low-resolution appearance map(s) for the digital face, such as a texture map, a normal map, and/or a specular roughness map. The low-resolution appearance map(s) are further processed using a super-resolution generator that is trained to take the low-resolution appearance map(s) and low-resolution 3D geometry of the digital face as inputs and output high-resolution appearance map(s) that align with high-resolution 3D geometry of the digital face. Such high-resolution appearance map(s) and high-resolution 3D geometry can then be used to render standalone images or the frames of a video that include the digital face.
    Type: Grant
    Filed: June 8, 2020
    Date of Patent: February 22, 2022
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Prashanth Chandran, Dominik Thabo Beeler, Derek Edward Bradley
  • Publication number: 20220004741
    Abstract: Techniques are disclosed for capturing facial appearance properties. In some examples, a facial capture system includes light source(s) that produce linearly polarized light, at least one camera that is cross-polarized with respect to the polarization of light produced by the light source(s), and at least one other camera that is not cross-polarized with respect to the polarization of the light produced by the light source(s). Images captured by the cross-polarized camera(s) are used to determine facial appearance properties other than specular intensity, such as diffuse albedo, while images captured by the camera(s) that are not cross-polarized are used to determine facial appearance properties including specular intensity. In addition, a coarse-to-fine optimization procedure is disclosed for determining appearance and detailed geometry maps based on images captured by the cross-polarized camera(s) and the camera(s) that are not cross-polarized.
    Type: Application
    Filed: July 2, 2020
    Publication date: January 6, 2022
    Inventors: Jeremy RIVIERE, Paulo Urnau GOTARDO, Abhijeet GHOSH, Derek Edward BRADLEY, Dominik Thabo BEELER
  • Publication number: 20220005268
    Abstract: Techniques are disclosed for creating digital faces. In some examples, an anatomical face model is generated from a data set including captured facial geometries of different individuals and associated bone geometries. A model generator segments each of the captured facial geometries into patches, compresses the segmented geometry associated with each patch to determine local deformation subspaces of the anatomical face model, and determines corresponding compressed anatomical subspaces of the anatomical face model. A sculpting application determines, based on sculpting input from a user, constraints for an optimization to determine parameter values associated with the anatomical face model. The parameter values can be used, along with the anatomical face model, to generate facial geometry that reflects the sculpting input.
    Type: Application
    Filed: July 6, 2020
    Publication date: January 6, 2022
    Inventors: Aurel GRUBER, Marco FRATARCANGELI, Derek Edward BRADLEY, Gaspard ZOSS, Dominik Thabo BEELER
  • Patent number: 11216646
    Abstract: Techniques are disclosed for capturing facial appearance properties. In some examples, a facial capture system includes light source(s) that produce linearly polarized light, at least one camera that is cross-polarized with respect to the polarization of light produced by the light source(s), and at least one other camera that is not cross-polarized with respect to the polarization of the light produced by the light source(s). Images captured by the cross-polarized camera(s) are used to determine facial appearance properties other than specular intensity, such as diffuse albedo, while images captured by the camera(s) that are not cross-polarized are used to determine facial appearance properties including specular intensity. In addition, a coarse-to-fine optimization procedure is disclosed for determining appearance and detailed geometry maps based on images captured by the cross-polarized camera(s) and the camera(s) that are not cross-polarized.
    Type: Grant
    Filed: July 2, 2020
    Date of Patent: January 4, 2022
    Assignee: Disney Enterprises, Inc.
    Inventors: Jeremy Riviere, Paulo Urnau Gotardo, Abhijeet Ghosh, Derek Edward Bradley, Dominik Thabo Beeler
  • Patent number: 11170550
    Abstract: A retargeting engine automatically performs a retargeting operation. The retargeting engine generates an anatomical local model of a digital character based on performance capture data and/or a 3D model of the digital character. The anatomical local model includes an anatomical model corresponding to internal features of the digital character and a local model corresponding to external features of the digital character. The retargeting engine includes a Machine Learning model that maps a set of locations associated with the face of a performer to a corresponding set of locations associated with the face of the digital character. The retargeting engine includes a solver that modifies a set of parameters associated with the anatomical local model to cause the digital character to exhibit one or more facial expressions enacted by the performer, thereby retargeting those facial expressions onto the digital character.
    Type: Grant
    Filed: November 26, 2019
    Date of Patent: November 9, 2021
    Assignee: Disney Enterprises, Inc.
    Inventors: Derek Edward Bradley, Dominik Thabo Beeler
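The retargeting flow above has two pieces: a learned map from performer facial locations to character locations, and a solver that adjusts rig parameters so the character reproduces those locations. A hedged sketch with a linear map and a linear rig as stand-ins (landmark counts, parameter counts, and names are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

n_lm = 8
performer_lm = rng.normal(size=n_lm * 3)   # tracked performer landmarks

# Stand-in for the learned performer-to-character landmark mapping
M = rng.normal(size=(n_lm * 3, n_lm * 3))
character_targets = M @ performer_lm

# Stand-in rig: character landmarks are linear in 10 model parameters
rig_basis = rng.normal(size=(n_lm * 3, 10))
rig_rest = rng.normal(size=n_lm * 3)

# Solver step: parameters that best reproduce the mapped landmarks,
# retargeting the performer's expression onto the character
params, *_ = np.linalg.lstsq(rig_basis, character_targets - rig_rest,
                             rcond=None)
retargeted_lm = rig_rest + rig_basis @ params
```

An anatomical local model would replace the linear rig with coupled internal (bone) and external (skin) components, but the map-then-solve loop is the same.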
  • Patent number: 11151767
    Abstract: A removal model is trained to predict secondary dynamics associated with an individual enacting a performance. For a given sequence of frames that includes an individual enacting a performance and secondary dynamics, a retargeting application identifies a set of rigid points that correspond to skeletal regions of the individual and a set of non-rigid points that correspond to non-skeletal regions of the individual. For each frame in the sequence of frames, the application applies the removal model that takes as inputs a velocity history of a non-rigid point and a velocity history of the rigid points in a temporal window around the frame, and outputs a delta vector for the non-rigid point indicating a displacement for reducing secondary dynamics in the frame. In addition, a trained synthesis model can be applied to determine a delta vector for every non-rigid point indicating displacements for adding new secondary dynamics.
    Type: Grant
    Filed: July 2, 2020
    Date of Patent: October 19, 2021
    Assignee: Disney Enterprises, Inc.
    Inventors: Gaspard Zoss, Eftychios Sifakis, Dominik Thabo Beeler, Derek Edward Bradley
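The removal model above consumes velocity histories in a temporal window and emits a per-point delta vector. A minimal sketch with finite-difference velocities and a linear map standing in for the trained network (window size, frame index, and weights are illustrative):

```python
import numpy as np

def velocity_history(positions, frame, window):
    """Finite-difference velocities of one point in a temporal window
    around `frame`."""
    return np.diff(positions[frame - window: frame + window + 1], axis=0)

rng = np.random.default_rng(5)
positions = rng.normal(size=(50, 3))   # one non-rigid point over 50 frames

# Velocity history around frame 25, flattened into a feature vector
hist = velocity_history(positions, frame=25, window=4).ravel()

# Stand-in linear "removal model": velocity history -> delta vector
removal_W = rng.normal(size=(hist.size, 3))
delta = hist @ removal_W

# Displacing the point by delta reduces secondary dynamics in that frame
corrected = positions[25] + delta
```

The real model also conditions on the rigid (skeletal) points' velocities; the synthesis model described in the abstract has the same shape but predicts deltas that add dynamics instead.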
  • Publication number: 20210279938
    Abstract: Techniques are disclosed for generating digital faces. In some examples, a style-based generator receives as inputs initial tensor(s) and style vector(s) corresponding to user-selected semantic attribute styles, such as the desired expression, gender, age, identity, and/or ethnicity of a digital face. The style-based generator is trained to process such inputs and output low-resolution appearance map(s) for the digital face, such as a texture map, a normal map, and/or a specular roughness map. The low-resolution appearance map(s) are further processed using a super-resolution generator that is trained to take the low-resolution appearance map(s) and low-resolution 3D geometry of the digital face as inputs and output high-resolution appearance map(s) that align with high-resolution 3D geometry of the digital face. Such high-resolution appearance map(s) and high-resolution 3D geometry can then be used to render standalone images or the frames of a video that include the digital face.
    Type: Application
    Filed: June 8, 2020
    Publication date: September 9, 2021
    Inventors: Prashanth CHANDRAN, Dominik Thabo BEELER, Derek Edward BRADLEY
  • Publication number: 20210279956
    Abstract: Techniques are disclosed for training and applying nonlinear face models. In embodiments, a nonlinear face model includes an identity encoder, an expression encoder, and a decoder. The identity encoder takes as input a representation of a facial identity, such as a neutral face mesh minus a reference mesh, and outputs a code associated with the facial identity. The expression encoder takes as input a representation of a target expression, such as a set of blendweight values, and outputs a code associated with the target expression. The codes associated with the facial identity and the facial expression can be concatenated and input into the decoder, which outputs a representation of a face having the facial identity and expression. The representation of the face can include vertex displacements for deforming the reference mesh.
    Type: Application
    Filed: March 4, 2020
    Publication date: September 9, 2021
    Inventors: Prashanth CHANDRAN, Dominik Thabo BEELER, Derek Edward BRADLEY
  • Publication number: 20210166458
    Abstract: A modeling engine generates a prediction model that quantifies and predicts secondary dynamics associated with the face of a performer enacting a performance. The modeling engine generates a set of geometric representations that represents the face of the performer enacting different facial expressions under a range of loading conditions. For a given facial expression and specific loading condition, the modeling engine trains a Machine Learning model to predict how soft tissue regions of the face of the performer change in response to external forces applied to the performer during the performance. The modeling engine combines different expression models associated with different facial expressions to generate a prediction model. The prediction model can be used to predict and remove secondary dynamics from a given geometric representation of a performance or to generate and add secondary dynamics to a given geometric representation of a performance.
    Type: Application
    Filed: December 3, 2019
    Publication date: June 3, 2021
    Inventors: Dominik Thabo BEELER, Derek Edward BRADLEY, Eftychios Dimitrios Sifakis, Gaspard Zoss
  • Publication number: 20210158590
    Abstract: A retargeting engine automatically performs a retargeting operation. The retargeting engine generates an anatomical local model of a digital character based on performance capture data and/or a 3D model of the digital character. The anatomical local model includes an anatomical model corresponding to internal features of the digital character and a local model corresponding to external features of the digital character. The retargeting engine includes a Machine Learning model that maps a set of locations associated with the face of a performer to a corresponding set of locations associated with the face of the digital character. The retargeting engine includes a solver that modifies a set of parameters associated with the anatomical local model to cause the digital character to exhibit one or more facial expressions enacted by the performer, thereby retargeting those facial expressions onto the digital character.
    Type: Application
    Filed: November 26, 2019
    Publication date: May 27, 2021
    Inventors: Derek Edward BRADLEY, Dominik Thabo BEELER
  • Publication number: 20210012512
    Abstract: Some implementations of the disclosure are directed to capturing facial training data for one or more subjects, the captured facial training data including each of the one or more subject's facial skin geometry tracked over a plurality of times and the subject's corresponding jaw poses for each of those plurality of times; and using the captured facial training data to create a model that provides a mapping from skin motion to jaw motion. Additional implementations of the disclosure are directed to determining a facial skin geometry of a subject; using a model that provides a mapping from skin motion to jaw motion to predict a motion of the subject's jaw from a rest pose given the facial skin geometry; and determining a jaw pose of the subject using the predicted motion of the subject's jaw.
    Type: Application
    Filed: July 12, 2019
    Publication date: January 14, 2021
    Applicants: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Dominik Thabo Beeler, Derek Edward Bradley, Gaspard Zoss
  • Patent number: 10609365
    Abstract: The present disclosure relates to a system and method for calibrating an optical device, such as a camera. In one example, the system includes a light-emitting device that generates light patterns and a ray generator that is positioned between the light-emitting device and the optical device. The ray generator separates the light emitted as part of the light patterns into a plurality of directional rays. The optical device then captures the directional rays, and the captured data, along with data corresponding to the light pattern and the ray generator, are used to calibrate the optical device.
    Type: Grant
    Filed: April 5, 2016
    Date of Patent: March 31, 2020
    Assignee: DISNEY ENTERPRISES, INC.
    Inventors: Anselm Grundhöfer, Dominik Thabo Beeler
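Once the ray generator yields known directional rays and the camera records where each ray lands, calibration reduces to fitting camera parameters to ray/pixel correspondences. A minimal sketch that fits only a focal length for an ideal pinhole camera with a known principal point (a deliberate simplification of a full calibration; names and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

def fit_focal(rays, pixels, center):
    """Least-squares focal length of a pinhole camera from directional rays.

    rays: (n, 3) ray directions in camera space (from the decoded light
    pattern and ray-generator geometry); pixels: (n, 2) observed positions.
    """
    proj = rays[:, :2] / rays[:, 2:3]      # normalized image coordinates
    offsets = (pixels - center).ravel()
    p = proj.ravel()
    return (p @ offsets) / (p @ p)

# Synthetic directional rays with positive depth component
rays = rng.normal(size=(100, 3))
rays[:, 2] = np.abs(rays[:, 2]) + 1.0
center = np.array([320.0, 240.0])
true_f = 800.0
pixels = true_f * (rays[:, :2] / rays[:, 2:3]) + center

estimated_f = fit_focal(rays, pixels, center)
```

With noiseless synthetic correspondences the recovered focal length matches exactly; a practical calibration would also estimate the principal point and lens distortion from the same ray data.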