Patents by Inventor Thibaut WEISE

Thibaut WEISE has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11836838
    Abstract: A method of animating a digital character according to facial expressions of a user, comprising the steps of: (a) obtaining a 2D image and a 3D depth map of the face of the user; (b) determining expression parameters for a user-specific expression model so that a facial expression of the model represents the face of the user shown in the 2D image and 3D depth map; (c) using the expression parameters and an animation prior to determine animation parameters usable to animate a digital character, wherein the animation prior is a sequence of animation parameters representing predefined animations of a digital character; and (d) using the animation parameters to animate a digital character so that it mimics the face of the user.
    Type: Grant
    Filed: December 3, 2020
    Date of Patent: December 5, 2023
    Assignee: Apple Inc.
    Inventors: Thibaut Weise, Sofien Bouaziz, Hao Li, Mark Pauly
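
The pipeline this family's abstract describes (fit expression parameters from the depth data, then regularize them through an animation prior) can be illustrated in a few lines. The sketch below is a minimal NumPy reading of it, assuming a blendshape-style expression model and a PCA-style prior; all names, shapes, and the least-squares solver are illustrative choices, not the patented implementation.

```python
# A minimal NumPy sketch of the claimed pipeline. The blendshape model,
# PCA-style prior, and all shapes are illustrative assumptions.
import numpy as np

def fit_expression_params(depth_points, neutral, blendshapes):
    """Step (b): least-squares fit of expression weights so that the
    user-specific expression model matches the observed 3D depth map.

    depth_points: (N, 3) observed face points from the depth map
    neutral:      (N, 3) neutral face of the user-specific model
    blendshapes:  (K, N, 3) expression basis, stored as offsets from neutral
    """
    K = blendshapes.shape[0]
    A = blendshapes.reshape(K, -1).T          # (3N, K) basis matrix
    b = (depth_points - neutral).ravel()      # (3N,) observed offsets
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.clip(w, 0.0, 1.0)               # expression weights in [0, 1]

def apply_animation_prior(expr_params, prior_mean, prior_basis):
    """Step (c): project raw expression parameters onto an animation prior
    (here a PCA basis over predefined animation sequences) so the output
    stays on the manifold of plausible character motion."""
    coeffs = prior_basis @ (expr_params - prior_mean)   # encode
    return prior_mean + prior_basis.T @ coeffs          # decode

# Quick check with synthetic data: the fit recovers a known weight.
rng = np.random.default_rng(0)
neutral = rng.normal(size=(468, 3))
blendshapes = rng.normal(size=(52, 468, 3))
observed = neutral + 0.3 * blendshapes[0]     # "user" shows expression 0 at 30%
w = fit_expression_params(observed, neutral, blendshapes)
print(round(w[0], 3))                         # ~0.3
# Step (d) would feed the prior-filtered parameters to the character rig
# each frame, e.g. rig.pose(anim_params) in a hypothetical engine API.
```
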
  • Publication number: 20230343013
    Abstract: A method of animating a digital character according to facial expressions of a user, comprising the steps of: (a) obtaining a 2D image and a 3D depth map of the face of the user; (b) determining expression parameters for a user-specific expression model so that a facial expression of the model represents the face of the user shown in the 2D image and 3D depth map; (c) using the expression parameters and an animation prior to determine animation parameters usable to animate a digital character, wherein the animation prior is a sequence of animation parameters representing predefined animations of a digital character; and (d) using the animation parameters to animate a digital character so that it mimics the face of the user.
    Type: Application
    Filed: May 9, 2023
    Publication date: October 26, 2023
    Inventors: Thibaut Weise, Sofien Bouaziz, Hao Li, Mark Pauly
  • Patent number: 11120600
    Abstract: Systems and methods for generating a video of an emoji that has been puppeted using image, depth, and audio inputs. The inputs can capture facial expressions of a user, including eye, eyebrow, mouth, and head movements. A pose held by the user can be detected and used to generate supplemental animation. The emoji can further be animated using physical properties associated with the emoji and the captured movements; an emoji of a dog, for example, can have its ears move in response to an up-and-down movement or a shaking of the head. The video can be sent in a message to one or more recipients, and a sending device can render the puppeted video in accordance with the hardware and software capabilities of a recipient's device.
    Type: Grant
    Filed: February 14, 2019
    Date of Patent: September 14, 2021
    Assignee: Apple Inc.
    Inventors: Justin D. Stoyles, Alexandre R. Moha, Nicolas V. Scapel, Guillaume P. Barlier, Aurelio Guzman, Bruno M. Sommer, Nina Damasky, Thibaut Weise, Thomas Goossens, Hoan Pham, Brian Amberg
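
The physics-driven secondary motion the abstract above describes (a dog emoji's ears reacting to head movement) is commonly modeled as a spring-damper system. Below is a minimal sketch under that assumption; the `EarPhysics` class and all constants are invented for illustration and are not taken from the patent.

```python
# A spring-damper sketch of the "dog ears" secondary animation. The class
# name and constants are invented; only the idea follows the abstract.
import numpy as np

class EarPhysics:
    def __init__(self, stiffness=120.0, damping=12.0):
        self.k = stiffness            # spring pulling the ear to its rest pose
        self.c = damping              # damping so oscillation dies out
        self.offset = np.zeros(3)     # ear displacement from rest
        self.velocity = np.zeros(3)

    def step(self, head_accel, dt=1.0 / 60.0):
        """Advance one frame: head acceleration acts as an inertial force
        on the ear; the spring pulls the ear back toward rest."""
        force = -head_accel - self.k * self.offset - self.c * self.velocity
        self.velocity += force * dt
        self.offset += self.velocity * dt
        return self.offset            # drive the ear joint of the emoji rig

# A sharp head shake along x makes the ears swing, then settle.
ear = EarPhysics()
for frame in range(5):
    accel = np.array([8.0, 0.0, 0.0]) if frame == 0 else np.zeros(3)
    print(ear.step(accel))
```
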
  • Patent number: 11068698
    Abstract: A three-dimensional model (e.g., a motion capture model) of a user is generated from captured images or captured video of the user. A machine learning network may track poses and expressions of the user to generate and refine the three-dimensional model, and refinement of the model may provide more accurate tracking of the user's face. Refinement may include refining the determined poses and expressions at defined locations in the model (e.g., eye corners and/or nose) and may occur in an iterative process. Tracking the three-dimensional model over time (e.g., during video capture) may be used to generate an animated three-dimensional model (e.g., an animated puppet) of the user that simulates the user's poses and expressions.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: July 20, 2021
    Assignee: Apple Inc.
    Inventors: Sofien Bouaziz, Brian Amberg, Thibaut Weise, Patrick Snape, Stefan Brugger, Alex Mansfield, Reinhard Knothe, Thomas Kiser
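
The iterative refinement at defined landmark locations that the abstract above describes can be pictured with a simple fitting loop. The sketch below is a toy version, assuming rigid translation only and a fixed step size; a real system would refine full pose and expression parameters with a learned network, which this code does not attempt.

```python
# A toy version of iterative refinement at defined landmark locations.
# Real systems refine full pose and expression with a learned network;
# this loop refines rigid translation only, to show the iteration.
import numpy as np

def refine_pose(model_landmarks, observed_landmarks, iters=10, lr=0.5):
    """Each pass shrinks the residual at the defined locations
    (e.g., eye corners and nose tip)."""
    t = np.zeros(3)                                   # translation estimate
    for _ in range(iters):
        residual = observed_landmarks - (model_landmarks + t)
        t += lr * residual.mean(axis=0)               # step on the mean error
    return t

observed = np.array([[0.10, 2.00, 0.30],   # left eye corner
                     [1.10, 2.10, 0.20],   # right eye corner
                     [0.60, 1.50, 0.40]])  # nose tip
model = observed - np.array([0.05, -0.02, 0.01])      # slightly offset model
print(refine_pose(model, observed))                   # -> ~[0.05, -0.02, 0.01]
```
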
  • Publication number: 20210174567
    Abstract: A method of animating a digital character according to facial expressions of a user, comprising the steps of: (a) obtaining a 2D image and a 3D depth map of the face of the user; (b) determining expression parameters for a user-specific expression model so that a facial expression of the model represents the face of the user shown in the 2D image and 3D depth map; (c) using the expression parameters and an animation prior to determine animation parameters usable to animate a digital character, wherein the animation prior is a sequence of animation parameters representing predefined animations of a digital character; and (d) using the animation parameters to animate a digital character so that it mimics the face of the user.
    Type: Application
    Filed: December 3, 2020
    Publication date: June 10, 2021
    Inventors: Thibaut Weise, Sofien Bouaziz, Hao Li, Mark Pauly
  • Publication number: 20210090314
    Abstract: Systems and methods for animating an avatar are provided. An example method of animating an avatar includes, at an electronic device having one or more processors and memory: receiving an audio input; receiving a video input including at least a portion of a user's face, wherein the video input is separate from the audio input; determining one or more movements of the user's face based on the received audio and video inputs; and generating, using a neural network separately trained with a set of audio training data and a set of video training data, a set of characteristics for controlling an avatar that represents the one or more movements of the user's face.
    Type: Application
    Filed: December 20, 2019
    Publication date: March 25, 2021
    Inventors: Ahmed Serag El Din Hussen Abdelaziz, Nicholas Apostoloff, Justin Binder, Paul Richard Dixon, Sachin Kajarekar, Reinhard Knothe, Sebastian Martin, Barry-John Theobald, Thibaut Weise
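
The key idea in the abstract above, separate audio and video streams feeding a network that outputs avatar control characteristics, can be sketched with two small feed-forward branches and a fusion layer. Everything below (layer sizes, random untrained weights, the 52-value output) is an assumption for illustration; the publication does not specify an architecture.

```python
# A two-stream sketch with invented layer sizes and random (untrained)
# weights; the 52-value output stands in for avatar blendshape controls.
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, w2):
    """Tiny two-layer network: linear, ReLU, linear."""
    return np.maximum(x @ w1, 0.0) @ w2

# Separately parameterized branches, standing in for networks trained on
# separate audio and video data sets as the abstract describes.
w_audio = (rng.normal(size=(40, 64)), rng.normal(size=(64, 32)))
w_video = (rng.normal(size=(128, 64)), rng.normal(size=(64, 32)))
w_fuse = (rng.normal(size=(64, 64)), rng.normal(size=(64, 52)))

def avatar_controls(audio_feat, video_feat):
    a = mlp(audio_feat, *w_audio)      # speech-driven mouth cues
    v = mlp(video_feat, *w_video)      # vision-driven face cues
    fused = np.concatenate([a, v])     # combine both modalities
    return mlp(fused, *w_fuse)         # characteristics controlling the avatar

controls = avatar_controls(rng.normal(size=40), rng.normal(size=128))
print(controls.shape)                  # (52,)
```
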
  • Patent number: 10861211
    Abstract: A method of animating a digital character according to facial expressions of a user, comprising the steps of: (a) obtaining a 2D image and a 3D depth map of the face of the user; (b) determining expression parameters for a user-specific expression model so that a facial expression of the model represents the face of the user shown in the 2D image and 3D depth map; (c) using the expression parameters and an animation prior to determine animation parameters usable to animate a digital character, wherein the animation prior is a sequence of animation parameters representing predefined animations of a digital character; and (d) using the animation parameters to animate a digital character so that it mimics the face of the user.
    Type: Grant
    Filed: July 2, 2018
    Date of Patent: December 8, 2020
    Assignee: Apple Inc.
    Inventors: Thibaut Weise, Sofien Bouaziz, Hao Li, Mark Pauly
  • Patent number: 10820795
    Abstract: In one implementation, a method includes: determining an interpupillary distance (IPD) measurement for a user as a function of depth data obtained by a depth sensor and image data obtained by an image sensor; and calibrating a head-mounted device (HMD) that delivers augmented reality/virtual reality (AR/VR) content by setting one or more presentation parameters of the HMD based on the IPD measurement, in order to tailor one or more AR/VR displays of the HMD to the user's field of view.
    Type: Grant
    Filed: June 22, 2018
    Date of Patent: November 3, 2020
    Assignee: Apple Inc.
    Inventors: Thibaut Weise, Justin D. Stoyles, Michael Kuhn, Reinhard Klapfer, Stefan Misslinger
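
One plausible reading of the IPD measurement in the abstract above is to back-project the two pupil pixels through a pinhole camera model using the depth data and take the 3D distance between them. That reading is sketched below; the intrinsics, pixel coordinates, and flat depth function are illustrative assumptions, not values from the patent.

```python
# IPD from image + depth under a pinhole camera model. All numbers are
# illustrative; a real device would detect the pupils and read true depth.
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Pixel (u, v) with depth in meters -> 3D point in camera space."""
    return np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth])

def measure_ipd(left_px, right_px, depth_at, intrinsics):
    """IPD as the Euclidean distance between the back-projected pupils."""
    l = backproject(*left_px, depth_at(*left_px), *intrinsics)
    r = backproject(*right_px, depth_at(*right_px), *intrinsics)
    return float(np.linalg.norm(l - r))

intrinsics = (600.0, 600.0, 320.0, 240.0)     # fx, fy, cx, cy (assumed)
depth_at = lambda u, v: 0.45                  # flat 45 cm face, for the demo
ipd_m = measure_ipd((290, 230), (372, 230), depth_at, intrinsics)
print(f"IPD = {ipd_m * 1000:.1f} mm")         # -> drives HMD lens/render offsets
```
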
  • Publication number: 20200125835
    Abstract: A three-dimensional model (e.g., a motion capture model) of a user is generated from captured images or captured video of the user. A machine learning network may track poses and expressions of the user to generate and refine the three-dimensional model, and refinement of the model may provide more accurate tracking of the user's face. Refinement may include refining the determined poses and expressions at defined locations in the model (e.g., eye corners and/or nose) and may occur in an iterative process. Tracking the three-dimensional model over time (e.g., during video capture) may be used to generate an animated three-dimensional model (e.g., an animated puppet) of the user that simulates the user's poses and expressions.
    Type: Application
    Filed: September 27, 2019
    Publication date: April 23, 2020
    Applicant: Apple Inc.
    Inventors: Sofien Bouaziz, Brian Amberg, Thibaut Weise, Patrick Snape, Stefan Brugger, Alex Mansfield, Reinhard Knothe, Thomas Kiser
  • Patent number: 10452896
    Abstract: Techniques are disclosed for creating avatars from image data of a person. According to these techniques, spatial facial attributes of a subject may be measured from an image representing the subject, and the measured facial attributes may be matched to a three-dimensional avatar template. Other attributes of the subject may be identified, such as hair type, hair color, eye color, and skin color. For hair type, hair may be generated for the avatar by measuring the spatial locations of the subject's hair and comparing the measured locations to the locations of hair represented by a plurality of hair templates; a matching hair template may be selected from the plurality and used in generating the avatar. An avatar may be generated from the matching avatar template, which may be deformed according to the measured spatial facial attributes, and from the other attributes.
    Type: Grant
    Filed: September 6, 2017
    Date of Patent: October 22, 2019
    Assignee: Apple Inc.
    Inventors: Thibaut Weise, Sofien Bouaziz, Atulit Kumar, Sarah Amsellem
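
The hair-template matching step described above can be illustrated with a nearest-template search over measured 3D hair locations. The one-sided Chamfer distance below is one plausible metric; the patent does not name a specific one, and all data in the example is synthetic.

```python
# Nearest-template selection over measured hair point locations. The
# Chamfer-style metric is an assumed, illustrative choice.
import numpy as np

def template_distance(measured, template):
    """Mean distance from each measured hair point to its nearest
    template point (a one-sided Chamfer distance)."""
    diffs = measured[:, None, :] - template[None, :, :]   # (M, T, 3)
    return np.linalg.norm(diffs, axis=-1).min(axis=1).mean()

def best_hair_template(measured_points, templates):
    scores = [template_distance(measured_points, t) for t in templates]
    return int(np.argmin(scores))      # index of the matching template

measured = np.random.default_rng(1).normal(size=(50, 3))
templates = [measured + 0.05, measured + 2.0]   # template 0 is the close one
print(best_hair_template(measured, templates))  # -> 0
```
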
  • Patent number: 10430642
    Abstract: A three-dimensional model (e.g., a motion capture model) of a user is generated from captured images or captured video of the user. A machine learning network may track poses and expressions of the user to generate and refine the three-dimensional model, and refinement of the model may provide more accurate tracking of the user's face. Refinement may include refining the determined poses and expressions at defined locations in the model (e.g., eye corners and/or nose) and may occur in an iterative process. Tracking the three-dimensional model over time (e.g., during video capture) may be used to generate an animated three-dimensional model (e.g., an animated puppet) of the user that simulates the user's poses and expressions.
    Type: Grant
    Filed: March 23, 2018
    Date of Patent: October 1, 2019
    Assignee: Apple Inc.
    Inventors: Sofien Bouaziz, Brian Amberg, Thibaut Weise, Patrick Snape, Stefan Brugger, Alex Mansfield, Reinhard Knothe, Thomas Kiser
  • Publication number: 20190251728
    Abstract: Systems and methods for generating a video of an emoji that has been puppeted using image, depth, and audio inputs. The inputs can capture facial expressions of a user, including eye, eyebrow, mouth, and head movements. A pose held by the user can be detected and used to generate supplemental animation. The emoji can further be animated using physical properties associated with the emoji and the captured movements; an emoji of a dog, for example, can have its ears move in response to an up-and-down movement or a shaking of the head. The video can be sent in a message to one or more recipients, and a sending device can render the puppeted video in accordance with the hardware and software capabilities of a recipient's device.
    Type: Application
    Filed: February 14, 2019
    Publication date: August 15, 2019
    Inventors: Justin D. Stoyles, Alexandre R. Moha, Nicolas V. Scapel, Guillaume P. Barlier, Aurelio Guzman, Bruno M. Sommer, Nina Damasky, Thibaut Weise, Thomas Goossens, Hoan Pham, Brian Amberg
  • Publication number: 20190180084
    Abstract: A three-dimensional model (e.g., a motion capture model) of a user is generated from captured images or captured video of the user. A machine learning network may track poses and expressions of the user to generate and refine the three-dimensional model, and refinement of the model may provide more accurate tracking of the user's face. Refinement may include refining the determined poses and expressions at defined locations in the model (e.g., eye corners and/or nose) and may occur in an iterative process. Tracking the three-dimensional model over time (e.g., during video capture) may be used to generate an animated three-dimensional model (e.g., an animated puppet) of the user that simulates the user's poses and expressions.
    Type: Application
    Filed: March 23, 2018
    Publication date: June 13, 2019
    Inventors: Sofien Bouaziz, Brian Amberg, Thibaut Weise, Patrick Snape, Stefan Brugger, Alex Mansfield, Reinhard Knothe, Thomas Kiser
  • Publication number: 20190139287
    Abstract: A method of animating a digital character according to facial expressions of a user, comprising the steps of: (a) obtaining a 2D image and a 3D depth map of the face of the user; (b) determining expression parameters for a user-specific expression model so that a facial expression of the model represents the face of the user shown in the 2D image and 3D depth map; (c) using the expression parameters and an animation prior to determine animation parameters usable to animate a digital character, wherein the animation prior is a sequence of animation parameters representing predefined animations of a digital character; and (d) using the animation parameters to animate a digital character so that it mimics the face of the user.
    Type: Application
    Filed: July 2, 2018
    Publication date: May 9, 2019
    Inventors: Thibaut Weise, Sofien Bouaziz, Hao Li, Mark Pauly
  • Patent number: 10210648
    Abstract: Systems and methods for generating a video of an emoji that has been puppeted using image, depth, and audio inputs. The inputs can capture facial expressions of a user, including eye, eyebrow, mouth, and head movements. A pose held by the user can be detected and used to generate supplemental animation. The emoji can further be animated using physical properties associated with the emoji and the captured movements; an emoji of a dog, for example, can have its ears move in response to an up-and-down movement or a shaking of the head. The video can be sent in a message to one or more recipients, and a sending device can render the puppeted video in accordance with the hardware and software capabilities of a recipient's device.
    Type: Grant
    Filed: November 10, 2017
    Date of Patent: February 19, 2019
    Assignee: Apple Inc.
    Inventors: Justin D. Stoyles, Alexandre R. Moha, Nicolas V. Scapel, Guillaume P. Barlier, Aurelio Guzman, Bruno M. Sommer, Nina Damasky, Thibaut Weise, Thomas Goossens, Hoan Pham, Brian Amberg
  • Publication number: 20180336714
    Abstract: Systems and methods for generating a video of an emoji that has been puppeted using image, depth, and audio inputs. The inputs can capture facial expressions of a user, including eye, eyebrow, mouth, and head movements. A pose held by the user can be detected and used to generate supplemental animation. The emoji can further be animated using physical properties associated with the emoji and the captured movements; an emoji of a dog, for example, can have its ears move in response to an up-and-down movement or a shaking of the head. The video can be sent in a message to one or more recipients, and a sending device can render the puppeted video in accordance with the hardware and software capabilities of a recipient's device.
    Type: Application
    Filed: November 10, 2017
    Publication date: November 22, 2018
    Inventors: Justin D. Stoyles, Alexandre R. Moha, Nicolas V. Scapel, Guillaume P. Barlier, Aurelio Guzman, Bruno M. Sommer, Nina Damasky, Thibaut Weise, Thomas Goossens, Hoan Pham, Brian Amberg
  • Patent number: 10013787
    Abstract: A method of animating a digital character according to facial expressions of a user, comprising the steps of: (a) obtaining a 2D image and a 3D depth map of the face of the user; (b) determining expression parameters for a user-specific expression model so that a facial expression of the model represents the face of the user shown in the 2D image and 3D depth map; (c) using the expression parameters and an animation prior to determine animation parameters usable to animate a digital character, wherein the animation prior is a sequence of animation parameters representing predefined animations of a digital character; and (d) using the animation parameters to animate a digital character so that it mimics the face of the user.
    Type: Grant
    Filed: December 12, 2011
    Date of Patent: July 3, 2018
    Assignee: Faceshift AG
    Inventors: Thibaut Weise, Sofien Bouaziz, Hao Li, Mark Pauly
  • Publication number: 20180089880
    Abstract: In an embodiment, a method of online video communication is disclosed. An online video communication is established between a source device and a receiving device. The source device captures a live video recording of a sending user, and the captured recording is analyzed to identify one or more characteristics of the sending user. The source device then generates avatar data corresponding to the identified characteristics. The avatar data is categorized into a plurality of groups, wherein a first group of the plurality comprises avatar data that is more specific to the sending user. Finally, at least the first group of the plurality of groups is transmitted to the receiving device. The transmitted first group of avatar data defines, at least in part, how to animate an avatar that mimics one or more physical characteristics of the sending user.
    Type: Application
    Filed: September 22, 2017
    Publication date: March 29, 2018
    Inventors: Christopher M. Garrido, Brian Amberg, David L. Biderman, Eric L. Chien, Haitao Guo, Sarah Amsellem, Thibaut Weise, Timothy L. Bienz
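
The grouping scheme the abstract above describes, identity-specific avatar data in a first group and more generic per-frame data in the rest, can be sketched as below. The field names, the split criterion, and the bandwidth gate are all invented for illustration; the publication does not specify them.

```python
# A minimal sketch of splitting avatar data into a sender-specific group
# (always sent) and a generic group (sent when bandwidth allows).
from dataclasses import dataclass, field

@dataclass
class AvatarData:
    # More specific to the sending user: identity-carrying parameters.
    face_shape: list = field(default_factory=list)
    skin_tone: tuple = (0, 0, 0)
    # More generic: per-frame expression any avatar could replay.
    expression_weights: list = field(default_factory=list)

def categorize(data: AvatarData):
    group1 = {"face_shape": data.face_shape, "skin_tone": data.skin_tone}
    group2 = {"expression_weights": data.expression_weights}
    return group1, group2

def transmit(send, data: AvatarData, bandwidth_ok: bool):
    """Always send the identity group; send the rest when bandwidth allows."""
    group1, group2 = categorize(data)
    send(group1)
    if bandwidth_ok:
        send(group2)

transmit(print, AvatarData([0.2, 0.7], (210, 180, 160), [0.1] * 52), True)
```
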
  • Publication number: 20130147788
    Abstract: A method of animating a digital character according to facial expressions of a user, comprising the steps of: (a) obtaining a 2D image and a 3D depth map of the face of the user; (b) determining expression parameters for a user-specific expression model so that a facial expression of the model represents the face of the user shown in the 2D image and 3D depth map; (c) using the expression parameters and an animation prior to determine animation parameters usable to animate a digital character, wherein the animation prior is a sequence of animation parameters representing predefined animations of a digital character; and (d) using the animation parameters to animate a digital character so that it mimics the face of the user.
    Type: Application
    Filed: December 12, 2011
    Publication date: June 13, 2013
    Inventors: Thibaut Weise, Sofien Bouaziz, Hao Li, Mark Pauly