Patents by Inventor Thibaut WEISE
Thibaut WEISE has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11836838
Abstract: A method of animating a digital character according to facial expressions of a user, comprising the steps of: (a) obtaining a 2D image and 3D depth map of the face of the user; (b) determining expression parameters for a user-specific expression model so that a facial expression of the user-specific expression model represents the face of the user shown in the 2D image and 3D depth map; (c) using the expression parameters and an animation prior to determine animation parameters usable to animate a digital character, wherein the animation prior is a sequence of animation parameters which represent predefined animations of a digital character; and (d) using the animation parameters to animate a digital character so that the digital character mimics the face of the user.
Type: Grant
Filed: December 3, 2020
Date of Patent: December 5, 2023
Assignee: Apple Inc.
Inventors: Thibaut Weise, Sofien Bouaziz, Hao Li, Mark Pauly
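The abstract above outlines a two-stage pipeline: fit expression parameters to the captured 2D/3D data, then use an animation prior to turn them into parameters that drive the character. The sketch below illustrates that flow under simplifying assumptions (a linear blendshape model, least-squares fitting, and nearest-frame blending against the prior sequence); it is not the patented method, and all names and dimensions are made up for illustration.

```python
# Minimal sketch of the pipeline described in the abstract: fit expression
# parameters to depth-derived geometry, then use an "animation prior" (a
# prerecorded sequence of animation parameters) to produce the parameters that
# drive the character. Blendshape model, sizes, and blending are assumptions.
import numpy as np

def fit_expression_parameters(neutral, blendshapes, observed):
    """Least-squares blendshape weights so that neutral + B @ w ~ observed."""
    B = blendshapes.reshape(blendshapes.shape[0], -1).T   # (3V, K)
    d = (observed - neutral).reshape(-1)                  # (3V,)
    w, *_ = np.linalg.lstsq(B, d, rcond=None)
    return np.clip(w, 0.0, 1.0)

def apply_animation_prior(weights, prior_sequence, blend=0.3):
    """Pull the fitted weights toward the closest frame of the prior sequence."""
    dists = np.linalg.norm(prior_sequence - weights, axis=1)
    closest = prior_sequence[np.argmin(dists)]
    return (1.0 - blend) * weights + blend * closest

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    V, K, T = 500, 20, 100                       # vertices, blendshapes, prior frames
    neutral = rng.normal(size=(V, 3))
    blendshapes = rng.normal(size=(K, V, 3)) * 0.05
    true_w = rng.uniform(0, 1, size=K)
    observed = neutral + np.tensordot(true_w, blendshapes, axes=1)  # synthetic fit target
    prior = rng.uniform(0, 1, size=(T, K))       # stand-in for predefined animations

    w = fit_expression_parameters(neutral, blendshapes, observed)
    anim_params = apply_animation_prior(w, prior)
    print("animation parameters:", np.round(anim_params[:5], 3))
```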
-
Publication number: 20230343013
Abstract: A method of animating a digital character according to facial expressions of a user, comprising the steps of: (a) obtaining a 2D image and 3D depth map of the face of the user; (b) determining expression parameters for a user-specific expression model so that a facial expression of the user-specific expression model represents the face of the user shown in the 2D image and 3D depth map; (c) using the expression parameters and an animation prior to determine animation parameters usable to animate a digital character, wherein the animation prior is a sequence of animation parameters which represent predefined animations of a digital character; and (d) using the animation parameters to animate a digital character so that the digital character mimics the face of the user.
Type: Application
Filed: May 9, 2023
Publication date: October 26, 2023
Inventors: Thibaut Weise, Sofien Bouaziz, Hao Li, Mark Pauly
-
Patent number: 11120600
Abstract: Systems and methods for generating a video of an emoji that has been puppeted using inputs from image, depth, and audio. The inputs can capture a user's facial expressions and eye, eyebrow, mouth, and head movements. A pose held by the user can be detected and used to generate supplemental animation. The emoji can further be animated using physical properties associated with the emoji and the captured movements. An emoji of a dog, for example, can have its ears move in response to an up-and-down movement or a shaking of the head. The video can be sent in a message to one or more recipients. A sending device can render the puppeted video in accordance with hardware and software capabilities of a recipient's computer device.
Type: Grant
Filed: February 14, 2019
Date of Patent: September 14, 2021
Assignee: Apple Inc.
Inventors: Justin D. Stoyles, Alexandre R. Moha, Nicolas V. Scapel, Guillaume P. Barlier, Aurelio Guzman, Bruno M. Sommer, Nina Damasky, Thibaut Weise, Thomas Goossens, Hoan Pham, Brian Amberg
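One concrete way to read the "physical properties" part of this abstract is as secondary motion: an attachment such as a dog's ears lags behind the tracked head with spring-like dynamics. The toy sketch below, with assumed spring constants and a 1-D motion signal, is only meant to make that idea tangible, not to reproduce the patented animation system.

```python
# Hedged sketch of secondary motion: the character's ears follow head movement
# through a damped spring, so nodding or head shaking makes them swing.
# Constants and the 1-D spring model are illustrative assumptions.
import numpy as np

def ear_motion(head_positions, dt=1/30, stiffness=80.0, damping=8.0):
    """Return per-frame ear offsets driven by head motion (1-D damped spring)."""
    offset, velocity = 0.0, 0.0
    offsets = []
    for prev, cur in zip(head_positions[:-1], head_positions[1:]):
        target = -(cur - prev) / dt          # ears react opposite to head velocity
        accel = stiffness * (target - offset) - damping * velocity
        velocity += accel * dt
        offset += velocity * dt
        offsets.append(offset)
    return np.array(offsets)

if __name__ == "__main__":
    t = np.linspace(0, 2, 61)
    head_y = 0.05 * np.sin(2 * np.pi * 2 * t)   # nodding up and down at 2 Hz
    print(np.round(ear_motion(head_y)[:10], 4))
```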
-
Patent number: 11068698
Abstract: A three-dimensional model (e.g., motion capture model) of a user is generated from captured images or captured video of the user. A machine learning network may track poses and expressions of the user to generate and refine the three-dimensional model. Refinement of the three-dimensional model may provide more accurate tracking of the user's face. Refining of the three-dimensional model may include refining the determinations of poses and expressions at defined locations (e.g., eye corners and/or nose) in the three-dimensional model. The refining may occur in an iterative process. Tracking of the three-dimensional model over time (e.g., during video capture) may be used to generate an animated three-dimensional model (e.g., an animated puppet) of the user that simulates the user's poses and expressions.
Type: Grant
Filed: September 27, 2019
Date of Patent: July 20, 2021
Assignee: Apple Inc.
Inventors: Sofien Bouaziz, Brian Amberg, Thibaut Weise, Patrick Snape, Stefan Brugger, Alex Mansfield, Reinhard Knothe, Thomas Kiser
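The refinement loop described here (iteratively re-estimating pose and expression at defined landmarks such as eye corners and the nose) can be pictured with a much simpler stand-in: repeatedly re-fitting a 2-D similarity transform that aligns model landmarks to detected ones. The sketch below uses that stand-in; the landmark set, transform model, and iteration count are assumptions, and no machine learning network is involved.

```python
# Rough sketch of iterative landmark refinement: repeatedly re-estimate a
# similarity transform so model landmarks (eye corners, nose, ...) line up
# with detected ones. Everything here is an illustrative assumption.
import numpy as np

def fit_similarity(src, dst):
    """Closed-form 2-D similarity (scale, rotation, translation) via least squares."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    a = (src_c * dst_c).sum() / (src_c ** 2).sum()
    b = (src_c[:, 0] * dst_c[:, 1] - src_c[:, 1] * dst_c[:, 0]).sum() / (src_c ** 2).sum()
    R = np.array([[a, -b], [b, a]])            # scaled rotation
    t = dst.mean(0) - src.mean(0) @ R.T
    return R, t

def refine(model_pts, detected_pts, iters=5):
    pts = model_pts.copy()
    for _ in range(iters):                     # iterative refinement, as in the abstract
        R, t = fit_similarity(pts, detected_pts)
        pts = pts @ R.T + t
    return pts

if __name__ == "__main__":
    model = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, -0.5], [0.0, 1.0]])  # eye corners, nose, chin
    theta = 0.3
    Rt = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
    detected = 1.2 * model @ Rt.T + np.array([0.4, -0.2])
    print(np.round(refine(model, detected) - detected, 4))   # residual should approach zero
```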
-
Publication number: 20210174567
Abstract: A method of animating a digital character according to facial expressions of a user, comprising the steps of: (a) obtaining a 2D image and 3D depth map of the face of the user; (b) determining expression parameters for a user-specific expression model so that a facial expression of the user-specific expression model represents the face of the user shown in the 2D image and 3D depth map; (c) using the expression parameters and an animation prior to determine animation parameters usable to animate a digital character, wherein the animation prior is a sequence of animation parameters which represent predefined animations of a digital character; and (d) using the animation parameters to animate a digital character so that the digital character mimics the face of the user.
Type: Application
Filed: December 3, 2020
Publication date: June 10, 2021
Inventors: Thibaut Weise, Sofien Bouaziz, Hao Li, Mark Pauly
-
Publication number: 20210090314
Abstract: Systems and methods for animating an avatar are provided. An example method of animating an avatar includes, at an electronic device having one or more processors and memory: receiving an audio input; receiving a video input including at least a portion of a user's face, wherein the video input is separate from the audio input; determining one or more movements of the user's face based on the received audio input and received video input; and generating, using a neural network separately trained with a set of audio training data and a set of video training data, a set of characteristics for controlling an avatar representing the one or more movements of the user's face.
Type: Application
Filed: December 20, 2019
Publication date: March 25, 2021
Inventors: Ahmed Serag El Din HUSSEN ABDELAZIZ, Nicholas APOSTOLOFF, Justin BINDER, Paul Richard DIXON, Sachin KAJAREKAR, Reinhard KNOTHE, Sebastian MARTIN, Barry-John THEOBALD, Thibaut WEISE
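A minimal two-branch network makes the audio-plus-video fusion in this abstract concrete: separate encoders for audio and video features whose outputs are combined into a set of avatar control characteristics. Layer sizes, feature dimensions, and the fusion-by-concatenation scheme below are assumptions for illustration, not the disclosed architecture or training procedure.

```python
# Illustrative two-branch network: an audio encoder and a video encoder whose
# fused features predict characteristics that drive an avatar. All dimensions
# and the architecture are assumptions, not the disclosed design.
import torch
import torch.nn as nn

class AvatarControlNet(nn.Module):
    def __init__(self, audio_dim=40, video_dim=136, n_characteristics=51):
        super().__init__()
        self.audio_branch = nn.Sequential(nn.Linear(audio_dim, 64), nn.ReLU())
        self.video_branch = nn.Sequential(nn.Linear(video_dim, 64), nn.ReLU())
        self.fusion = nn.Linear(128, n_characteristics)   # e.g. blendshape-like weights

    def forward(self, audio_feat, video_feat):
        fused = torch.cat([self.audio_branch(audio_feat),
                           self.video_branch(video_feat)], dim=-1)
        return torch.sigmoid(self.fusion(fused))          # characteristics in [0, 1]

if __name__ == "__main__":
    net = AvatarControlNet()
    audio = torch.randn(1, 40)     # e.g. one spectrogram frame (assumed input)
    video = torch.randn(1, 136)    # e.g. 68 facial landmarks, flattened (assumed)
    print(net(audio, video).shape)  # -> torch.Size([1, 51])
```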
-
Patent number: 10861211
Abstract: A method of animating a digital character according to facial expressions of a user, comprising the steps of: (a) obtaining a 2D image and 3D depth map of the face of the user; (b) determining expression parameters for a user-specific expression model so that a facial expression of the user-specific expression model represents the face of the user shown in the 2D image and 3D depth map; (c) using the expression parameters and an animation prior to determine animation parameters usable to animate a digital character, wherein the animation prior is a sequence of animation parameters which represent predefined animations of a digital character; and (d) using the animation parameters to animate a digital character so that the digital character mimics the face of the user.
Type: Grant
Filed: July 2, 2018
Date of Patent: December 8, 2020
Assignee: Apple Inc.
Inventors: Thibaut Weise, Sofien Bouaziz, Hao Li, Mark Pauly
-
Patent number: 10820795
Abstract: In one implementation, a method includes: determining an interpupillary distance (IPD) measurement for a user based on a function of depth data obtained by the depth sensor and image data obtained by the image sensor; and calibrating a head-mounted device (HMD) provided to deliver augmented reality/virtual reality (AR/VR) content by setting one or more presentation parameters of the HMD based on the IPD measurement in order to tailor one or more AR/VR displays of the HMD to a field-of-view of the user.
Type: Grant
Filed: June 22, 2018
Date of Patent: November 3, 2020
Assignee: Apple Inc.
Inventors: Thibaut Weise, Justin D. Stoyles, Michael Kuhn, Reinhard Klapfer, Stefan Misslinger
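The IPD measurement step can be pictured as back-projecting the two pupil pixels through a pinhole camera model using the depth map and taking their 3-D distance. The sketch below does exactly that with assumed camera intrinsics and synthetic pixel coordinates; the calibration of HMD presentation parameters is only hinted at in a comment and is not part of the sketch.

```python
# Back-of-the-envelope IPD estimate: back-project the two detected pupil
# pixels with a pinhole model and the depth map, then take their 3-D distance.
# Intrinsics and pixel coordinates are illustrative assumptions.
import numpy as np

def back_project(pixel, depth_m, fx, fy, cx, cy):
    """Pinhole back-projection of a pixel with known depth into camera space."""
    u, v = pixel
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

def measure_ipd(left_pupil_px, right_pupil_px, depth_map, fx, fy, cx, cy):
    dl = depth_map[left_pupil_px[1], left_pupil_px[0]]
    dr = depth_map[right_pupil_px[1], right_pupil_px[0]]
    left = back_project(left_pupil_px, dl, fx, fy, cx, cy)
    right = back_project(right_pupil_px, dr, fx, fy, cx, cy)
    return np.linalg.norm(left - right)

if __name__ == "__main__":
    depth = np.full((480, 640), 0.45)                    # synthetic: face 45 cm from the sensor
    ipd = measure_ipd((278, 240), (362, 240), depth, fx=600, fy=600, cx=320, cy=240)
    print(f"estimated IPD: {ipd * 1000:.1f} mm")          # a value like this would drive display offsets
```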
-
Publication number: 20200125835
Abstract: A three-dimensional model (e.g., motion capture model) of a user is generated from captured images or captured video of the user. A machine learning network may track poses and expressions of the user to generate and refine the three-dimensional model. Refinement of the three-dimensional model may provide more accurate tracking of the user's face. Refining of the three-dimensional model may include refining the determinations of poses and expressions at defined locations (e.g., eye corners and/or nose) in the three-dimensional model. The refining may occur in an iterative process. Tracking of the three-dimensional model over time (e.g., during video capture) may be used to generate an animated three-dimensional model (e.g., an animated puppet) of the user that simulates the user's poses and expressions.
Type: Application
Filed: September 27, 2019
Publication date: April 23, 2020
Applicant: Apple Inc.
Inventors: Sofien Bouaziz, Brian Amberg, Thibaut Weise, Patrick Snape, Stefan Brugger, Alex Mansfield, Reinhard Knothe, Thomas Kiser
-
Patent number: 10452896
Abstract: Techniques are disclosed for creating avatars from image data of a person. According to these techniques, spatial facial attributes of a subject may be measured from an image representing the subject. The measured facial attributes may be matched to a three-dimensional avatar template. Other attributes of the subject may be identified, such as hair type, hair color, eye color, skin color and the like. For hair type, hair may be generated for the avatar by measuring spatial locations of hair of the subject, and the measured hair locations may be compared to locations of hair represented by a plurality of hair templates. A matching hair template may be selected from the plurality of hair templates, which may be used in generating the avatar. An avatar may be generated from the matching avatar template, which may be deformed according to the measured spatial facial attributes, and from the other attributes.
Type: Grant
Filed: September 6, 2017
Date of Patent: October 22, 2019
Assignee: Apple Inc.
Inventors: Thibaut Weise, Sofien Bouaziz, Atulit Kumar, Sarah Amsellem
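The hair-template matching described here can be illustrated with a simple overlap score: compare the measured hair-pixel locations against each template's hair mask and keep the best match. The IoU metric, the mask representation, and the template set in the sketch below are assumptions, not the patented selection procedure.

```python
# Toy sketch of hair-template selection: score each template's hair mask
# against the measured hair locations and keep the best overlap. The IoU
# score and the templates are illustrative assumptions.
import numpy as np

def iou(mask_a, mask_b):
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

def select_hair_template(measured_mask, templates):
    """Return the name of the template whose hair mask best matches the subject."""
    return max(templates, key=lambda name: iou(measured_mask, templates[name]))

if __name__ == "__main__":
    h, w = 64, 64
    measured = np.zeros((h, w), bool)
    measured[:20, :] = True                                            # hair on top of the head
    templates = {
        "short": np.pad(np.ones((15, w), bool), ((0, h - 15), (0, 0))),
        "long":  np.pad(np.ones((40, w), bool), ((0, h - 40), (0, 0))),
        "bald":  np.zeros((h, w), bool),
    }
    print(select_hair_template(measured, templates))   # -> "short"
```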
-
Patent number: 10430642
Abstract: A three-dimensional model (e.g., motion capture model) of a user is generated from captured images or captured video of the user. A machine learning network may track poses and expressions of the user to generate and refine the three-dimensional model. Refinement of the three-dimensional model may provide more accurate tracking of the user's face. Refining of the three-dimensional model may include refining the determinations of poses and expressions at defined locations (e.g., eye corners and/or nose) in the three-dimensional model. The refining may occur in an iterative process. Tracking of the three-dimensional model over time (e.g., during video capture) may be used to generate an animated three-dimensional model (e.g., an animated puppet) of the user that simulates the user's poses and expressions.
Type: Grant
Filed: March 23, 2018
Date of Patent: October 1, 2019
Assignee: Apple Inc.
Inventors: Sofien Bouaziz, Brian Amberg, Thibaut Weise, Patrick Snape, Stefan Brugger, Alex Mansfield, Reinhard Knothe, Thomas Kiser
-
Publication number: 20190251728
Abstract: Systems and methods for generating a video of an emoji that has been puppeted using inputs from image, depth, and audio. The inputs can capture a user's facial expressions and eye, eyebrow, mouth, and head movements. A pose held by the user can be detected and used to generate supplemental animation. The emoji can further be animated using physical properties associated with the emoji and the captured movements. An emoji of a dog, for example, can have its ears move in response to an up-and-down movement or a shaking of the head. The video can be sent in a message to one or more recipients. A sending device can render the puppeted video in accordance with hardware and software capabilities of a recipient's computer device.
Type: Application
Filed: February 14, 2019
Publication date: August 15, 2019
Inventors: Justin D. STOYLES, Alexandre R. MOHA, Nicolas V. SCAPEL, Guillaume P. BARLIER, Aurelio GUZMAN, Bruno M. SOMMER, Nina DAMASKY, Thibaut WEISE, Thomas GOOSSENS, Hoan PHAM, Brian AMBERG
-
Publication number: 20190180084
Abstract: A three-dimensional model (e.g., motion capture model) of a user is generated from captured images or captured video of the user. A machine learning network may track poses and expressions of the user to generate and refine the three-dimensional model. Refinement of the three-dimensional model may provide more accurate tracking of the user's face. Refining of the three-dimensional model may include refining the determinations of poses and expressions at defined locations (e.g., eye corners and/or nose) in the three-dimensional model. The refining may occur in an iterative process. Tracking of the three-dimensional model over time (e.g., during video capture) may be used to generate an animated three-dimensional model (e.g., an animated puppet) of the user that simulates the user's poses and expressions.
Type: Application
Filed: March 23, 2018
Publication date: June 13, 2019
Inventors: Sofien Bouaziz, Brian Amberg, Thibaut Weise, Patrick Snape, Stefan Brugger, Alex Mansfield, Reinhard Knothe, Thomas Kiser
-
Publication number: 20190139287
Abstract: A method of animating a digital character according to facial expressions of a user, comprising the steps of: (a) obtaining a 2D image and 3D depth map of the face of the user; (b) determining expression parameters for a user-specific expression model so that a facial expression of the user-specific expression model represents the face of the user shown in the 2D image and 3D depth map; (c) using the expression parameters and an animation prior to determine animation parameters usable to animate a digital character, wherein the animation prior is a sequence of animation parameters which represent predefined animations of a digital character; and (d) using the animation parameters to animate a digital character so that the digital character mimics the face of the user.
Type: Application
Filed: July 2, 2018
Publication date: May 9, 2019
Inventors: Thibaut Weise, Sofien Bouaziz, Hao Li, Mark Pauly
-
Patent number: 10210648
Abstract: Systems and methods for generating a video of an emoji that has been puppeted using inputs from image, depth, and audio. The inputs can capture a user's facial expressions and eye, eyebrow, mouth, and head movements. A pose held by the user can be detected and used to generate supplemental animation. The emoji can further be animated using physical properties associated with the emoji and the captured movements. An emoji of a dog, for example, can have its ears move in response to an up-and-down movement or a shaking of the head. The video can be sent in a message to one or more recipients. A sending device can render the puppeted video in accordance with hardware and software capabilities of a recipient's computer device.
Type: Grant
Filed: November 10, 2017
Date of Patent: February 19, 2019
Assignee: Apple Inc.
Inventors: Justin D. Stoyles, Alexandre R. Moha, Nicolas V. Scapel, Guillaume P. Barlier, Aurelio Guzman, Bruno M. Sommer, Nina Damasky, Thibaut Weise, Thomas Goossens, Hoan Pham, Brian Amberg
-
Publication number: 20180336714
Abstract: Systems and methods for generating a video of an emoji that has been puppeted using inputs from image, depth, and audio. The inputs can capture a user's facial expressions and eye, eyebrow, mouth, and head movements. A pose held by the user can be detected and used to generate supplemental animation. The emoji can further be animated using physical properties associated with the emoji and the captured movements. An emoji of a dog, for example, can have its ears move in response to an up-and-down movement or a shaking of the head. The video can be sent in a message to one or more recipients. A sending device can render the puppeted video in accordance with hardware and software capabilities of a recipient's computer device.
Type: Application
Filed: November 10, 2017
Publication date: November 22, 2018
Inventors: Justin D. Stoyles, Alexandre R. Moha, Nicolas V. Scapel, Guillaume P. Barlier, Aurelio Guzman, Bruno M. Sommer, Nina Damasky, Thibaut Weise, Thomas Goossens, Hoan Pham, Brian Amberg
-
Patent number: 10013787
Abstract: A method of animating a digital character according to facial expressions of a user, comprising the steps of: (a) obtaining a 2D image and 3D depth map of the face of the user; (b) determining expression parameters for a user-specific expression model so that a facial expression of the user-specific expression model represents the face of the user shown in the 2D image and 3D depth map; (c) using the expression parameters and an animation prior to determine animation parameters usable to animate a digital character, wherein the animation prior is a sequence of animation parameters which represent predefined animations of a digital character; and (d) using the animation parameters to animate a digital character so that the digital character mimics the face of the user.
Type: Grant
Filed: December 12, 2011
Date of Patent: July 3, 2018
Assignee: Faceshift AG
Inventors: Thibaut Weise, Sofien Bouaziz, Hao Li, Mark Pauly
-
Publication number: 20180089880
Abstract: In an embodiment, a method of online video communication is disclosed. An online video communication is established between a source device and a receiving device. The source device captures a live video recording of a sending user. The captured recording is analyzed to identify one or more characteristics of the sending user. The source device then generates avatar data corresponding to the identified characteristics. The avatar data is categorized into a plurality of groups, wherein a first group of the plurality of groups comprises avatar data that is more specific to the sending user. Finally, at least the first group of the plurality of groups is transmitted to the receiving device. The transmitted first group of avatar data defines, at least in part, how to animate an avatar that mimics one or more physical characteristics of the sending user.
Type: Application
Filed: September 22, 2017
Publication date: March 29, 2018
Inventors: Christopher M. Garrido, Brian Amberg, David L. Biderman, Eric L. Chien, Haitao Guo, Sarah Amsellem, Thibaut Weise, Timothy L. Bienz
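The grouping idea in this abstract, splitting avatar data into a user-specific group that is transmitted first and a more generic remainder, can be sketched as a simple partition of the captured parameters. The group membership, parameter names, and JSON transport below are illustrative assumptions only.

```python
# Hedged sketch of the grouping step: partition captured avatar data into a
# group specific to the sending user and a generic remainder, then transmit
# the user-specific group first. Keys and transport are assumptions.
import json

USER_SPECIFIC_KEYS = {"identity_shape", "skin_tone", "hair_template"}   # assumed grouping

def categorize(avatar_data):
    first = {k: v for k, v in avatar_data.items() if k in USER_SPECIFIC_KEYS}
    second = {k: v for k, v in avatar_data.items() if k not in USER_SPECIFIC_KEYS}
    return first, second

def transmit(group, send):
    send(json.dumps(group).encode("utf-8"))    # stand-in for the real transport

if __name__ == "__main__":
    data = {"identity_shape": [0.1, 0.7, 0.3], "skin_tone": "III",
            "smile": 0.8, "jaw_open": 0.2, "hair_template": "short"}
    first_group, second_group = categorize(data)
    transmit(first_group, send=lambda payload: print("sending", payload))
```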
-
Publication number: 20130147788
Abstract: A method of animating a digital character according to facial expressions of a user, comprising the steps of: (a) obtaining a 2D image and 3D depth map of the face of the user; (b) determining expression parameters for a user-specific expression model so that a facial expression of the user-specific expression model represents the face of the user shown in the 2D image and 3D depth map; (c) using the expression parameters and an animation prior to determine animation parameters usable to animate a digital character, wherein the animation prior is a sequence of animation parameters which represent predefined animations of a digital character; and (d) using the animation parameters to animate a digital character so that the digital character mimics the face of the user.
Type: Application
Filed: December 12, 2011
Publication date: June 13, 2013
Inventors: Thibaut Weise, Sofien Bouaziz, Hao Li, Mark Pauly