Patents by Inventor Mark Pauly
Mark Pauly has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240212251
Abstract: Embodiments relate to a method for real-time facial animation, and a processing device for real-time facial animation. The method includes providing a dynamic expression model, receiving tracking data corresponding to a facial expression of a user, estimating tracking parameters based on the dynamic expression model and the tracking data, and refining the dynamic expression model based on the tracking data and estimated tracking parameters. The method may further include generating a graphical representation corresponding to the facial expression of the user based on the tracking parameters. Embodiments pertain to a real-time facial animation system.
Type: Application
Filed: February 29, 2024
Publication date: June 27, 2024
Inventors: Sofien Bouaziz, Mark Pauly
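The "estimating tracking parameters" step these facial-animation abstracts describe is, in blendshape-based systems, typically a least-squares fit of expression weights so the model matches the tracked face. The sketch below is purely illustrative (tiny made-up 1D data, two blendshapes solved via 2x2 normal equations), not code from the patent:

```python
# Hypothetical sketch of blendshape-weight estimation; all names and
# data are illustrative, not taken from the patent.

def fit_weights(neutral, deltas, target):
    """Least-squares fit of two blendshape weights via 2x2 normal equations."""
    r = [t - n for t, n in zip(target, neutral)]   # residual to explain
    d0, d1 = deltas
    a00 = sum(x * x for x in d0)
    a01 = sum(x * y for x, y in zip(d0, d1))
    a11 = sum(y * y for y in d1)
    b0 = sum(x * v for x, v in zip(d0, r))
    b1 = sum(y * v for y, v in zip(d1, r))
    det = a00 * a11 - a01 * a01
    w0 = (a11 * b0 - a01 * b1) / det
    w1 = (a00 * b1 - a01 * b0) / det
    return w0, w1

neutral = [0.0, 0.0, 0.0]
deltas = ([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])  # orthogonal deltas for clarity
target = [0.5, 0.25, 0.0]
w = fit_weights(neutral, deltas, target)
# for these orthogonal deltas, w recovers (0.5, 0.25)
```

Real trackers solve this over thousands of vertices and dozens of blendshapes per frame, with regularization; the refinement step in the abstract would additionally update the model itself between frames.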
-
Patent number: 11948238
Abstract: Embodiments relate to a method for real-time facial animation, and a processing device for real-time facial animation. The method includes providing a dynamic expression model, receiving tracking data corresponding to a facial expression of a user, estimating tracking parameters based on the dynamic expression model and the tracking data, and refining the dynamic expression model based on the tracking data and estimated tracking parameters. The method may further include generating a graphical representation corresponding to the facial expression of the user based on the tracking parameters. Embodiments pertain to a real-time facial animation system.
Type: Grant
Filed: May 27, 2022
Date of Patent: April 2, 2024
Assignee: Apple Inc.
Inventors: Sofien Bouaziz, Mark Pauly
-
Patent number: 11836838
Abstract: A method of animating a digital character according to facial expressions of a user, comprising the steps of: (a) obtaining a 2D image and 3D depth map of the face of the user; (b) determining expression parameters for a user-specific expression model so that a facial expression of the model represents the face of the user shown in the 2D image and 3D depth map; (c) using the expression parameters and an animation prior to determine animation parameters usable to animate a digital character, wherein the animation prior is a sequence of animation parameters representing predefined animations of a digital character; and (d) using the animation parameters to animate a digital character so that the digital character mimics the face of the user.
Type: Grant
Filed: December 3, 2020
Date of Patent: December 5, 2023
Assignee: Apple Inc.
Inventors: Thibaut Weise, Sofien Bouaziz, Hao Li, Mark Pauly
-
Publication number: 20230343013
Abstract: A method of animating a digital character according to facial expressions of a user, comprising the steps of: (a) obtaining a 2D image and 3D depth map of the face of the user; (b) determining expression parameters for a user-specific expression model so that a facial expression of the model represents the face of the user shown in the 2D image and 3D depth map; (c) using the expression parameters and an animation prior to determine animation parameters usable to animate a digital character, wherein the animation prior is a sequence of animation parameters representing predefined animations of a digital character; and (d) using the animation parameters to animate a digital character so that the digital character mimics the face of the user.
Type: Application
Filed: May 9, 2023
Publication date: October 26, 2023
Inventors: Thibaut Weise, Sofien Bouaziz, Hao Li, Mark Pauly
-
Patent number: 11568601
Abstract: Technologies are provided herein for modeling and tracking physical objects, such as human hands, within a field of view of a depth sensor. A sphere-mesh model of the physical object can be created and used to track the physical object in real time. The sphere-mesh model comprises an explicit skeletal mesh and an implicit convolution surface generated based on the skeletal mesh. The skeletal mesh parameterizes the convolution surface, and distances between points in data frames received from the depth sensor and the sphere-mesh model can be efficiently determined using the skeletal mesh. The sphere-mesh model can be automatically calibrated by dynamically adjusting positions and associated radii of vertices in the skeletal mesh to fit the convolution surface to a particular physical object.
Type: Grant
Filed: August 14, 2017
Date of Patent: January 31, 2023
Assignee: UVic Industry Partnerships Inc.
Inventors: Andrea Tagliasacchi, Anastasia Tkach, Mark Pauly
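A minimal sketch of the distance query this abstract describes: the distance from a depth-sensor point to one sphere-swept segment of a sphere-mesh model. Linear radius interpolation along the segment is a simplification of the true convolution (tangent-cone) surface, and all names and data here are illustrative:

```python
# Illustrative point-to-capsule distance, a simplified stand-in for the
# sphere-mesh distance query; not code from the patent.

def capsule_distance(p, a, ra, b, rb):
    """Approximate signed distance from point p to the sphere-swept
    segment with endpoints a, b and radii ra, rb (negative = inside)."""
    ab = [bb - aa for aa, bb in zip(a, b)]
    ap = [pp - aa for aa, pp in zip(a, p)]
    denom = sum(x * x for x in ab) or 1.0   # guard degenerate segment
    t = max(0.0, min(1.0, sum(x * y for x, y in zip(ap, ab)) / denom))
    closest = [aa + t * x for aa, x in zip(a, ab)]
    dist = sum((pp - cc) ** 2 for pp, cc in zip(p, closest)) ** 0.5
    radius = ra + t * (rb - ra)             # simplification of the tangent cone
    return dist - radius

d = capsule_distance((0.0, 2.0, 0.0), (0.0, 0.0, 0.0), 0.5, (1.0, 0.0, 0.0), 0.5)
# the point is 2 units from endpoint a; minus the 0.5 radius leaves 1.5
```

Tracking then minimizes the sum of such distances over all data points and all segments, while calibration adjusts the vertex positions and radii themselves.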
-
Publication number: 20230024768
Abstract: Embodiments relate to a method for real-time facial animation, and a processing device for real-time facial animation. The method includes providing a dynamic expression model, receiving tracking data corresponding to a facial expression of a user, estimating tracking parameters based on the dynamic expression model and the tracking data, and refining the dynamic expression model based on the tracking data and estimated tracking parameters. The method may further include generating a graphical representation corresponding to the facial expression of the user based on the tracking parameters. Embodiments pertain to a real-time facial animation system.
Type: Application
Filed: May 27, 2022
Publication date: January 26, 2023
Inventors: Sofien Bouaziz, Mark Pauly
-
Patent number: 11348299
Abstract: Embodiments relate to a method for real-time facial animation, and a processing device for real-time facial animation. The method includes providing a dynamic expression model, receiving tracking data corresponding to a facial expression of a user, estimating tracking parameters based on the dynamic expression model and the tracking data, and refining the dynamic expression model based on the tracking data and estimated tracking parameters. The method may further include generating a graphical representation corresponding to the facial expression of the user based on the tracking parameters. Embodiments pertain to a real-time facial animation system.
Type: Grant
Filed: January 27, 2020
Date of Patent: May 31, 2022
Assignee: Apple Inc.
Inventors: Sofien Bouaziz, Mark Pauly
-
Patent number: 11298966
Abstract: The invention relates to a thin optical security element comprising a reflective or refractive light-redirecting surface having a relief pattern operable to redirect incident light from a light source and form a projected image on a projection surface, the projected image comprising a caustic pattern reproducing a reference pattern that is easily visually recognizable by a person. The invention also relates to a method for designing a relief pattern of a light-redirecting surface of a thin optical security element.
Type: Grant
Filed: September 28, 2018
Date of Patent: April 12, 2022
Assignee: SICPA Holding SA
Inventors: Andrea Callegari, Pierre Degott, Todor Dinoev, Christophe Garnier, Alain Mayer, Yuliy Schwartzburg, Romain Testuz, Mark Pauly
-
Patent number: 11292283
Abstract: The invention relates to a thin optical security element comprising a reflective or refractive light-redirecting surface having a relief pattern operable to redirect incident light from a light source and form a projected image on a projection surface, the optical parameters of this optical security element fulfilling a specific projection criterion such that the projected image comprises a caustic pattern reproducing a reference pattern that is easily visually recognizable by a person. The invention also relates to a method for designing a relief pattern of an optical security element.
Type: Grant
Filed: September 28, 2018
Date of Patent: April 5, 2022
Assignee: SICPA Holding SA
Inventors: Andrea Callegari, Pierre Degott, Todor Dinoev, Christophe Garnier, Alain Mayer, Yuliy Schwartzburg, Romain Testuz, Mark Pauly
-
Publication number: 20210174567
Abstract: A method of animating a digital character according to facial expressions of a user, comprising the steps of: (a) obtaining a 2D image and 3D depth map of the face of the user; (b) determining expression parameters for a user-specific expression model so that a facial expression of the model represents the face of the user shown in the 2D image and 3D depth map; (c) using the expression parameters and an animation prior to determine animation parameters usable to animate a digital character, wherein the animation prior is a sequence of animation parameters representing predefined animations of a digital character; and (d) using the animation parameters to animate a digital character so that the digital character mimics the face of the user.
Type: Application
Filed: December 3, 2020
Publication date: June 10, 2021
Inventors: Thibaut Weise, Sofien Bouaziz, Hao Li, Mark Pauly
-
Patent number: 10861211
Abstract: A method of animating a digital character according to facial expressions of a user, comprising the steps of: (a) obtaining a 2D image and 3D depth map of the face of the user; (b) determining expression parameters for a user-specific expression model so that a facial expression of the model represents the face of the user shown in the 2D image and 3D depth map; (c) using the expression parameters and an animation prior to determine animation parameters usable to animate a digital character, wherein the animation prior is a sequence of animation parameters representing predefined animations of a digital character; and (d) using the animation parameters to animate a digital character so that the digital character mimics the face of the user.
Type: Grant
Filed: July 2, 2018
Date of Patent: December 8, 2020
Assignee: Apple Inc.
Inventors: Thibaut Weise, Sofien Bouaziz, Hao Li, Mark Pauly
-
Publication number: 20200269627
Abstract: The invention relates to a thin optical security element comprising a reflective or refractive light-redirecting surface having a relief pattern operable to redirect incident light from a light source and form a projected image on a projection surface, the projected image comprising a caustic pattern reproducing a reference pattern that is easily visually recognizable by a person. The invention also relates to a method for designing a relief pattern of a light-redirecting surface of a thin optical security element.
Type: Application
Filed: September 28, 2018
Publication date: August 27, 2020
Inventors: Andrea Callegari, Pierre Degott, Todor Dinoev, Christophe Garnier, Alain Mayer, Yuliy Schwartzburg, Romain Testuz, Mark Pauly
-
Patent number: 10732405
Abstract: A method of designing a refractive surface, comprising: providing a refractive object having a refractive surface with an initial geometry; determining a refraction of incident illumination through the refractive surface with the initial geometry to create a source irradiance distribution E_S on a receiver; and determining a shape of the refractive surface of the refractive object such that a resulting irradiance distribution on the receiver matches a desired target irradiance E_T provided by a user.
Type: Grant
Filed: June 11, 2015
Date of Patent: August 4, 2020
Assignee: École Polytechnique Fédérale de Lausanne (EPFL)
Inventors: Mark Pauly, Romain P. Testuz, Yuliy Schwartzburg
-
Publication number: 20200230995
Abstract: The invention relates to a thin optical security element comprising a reflective or refractive light-redirecting surface having a relief pattern operable to redirect incident light from a light source and form a projected image on a projection surface, the optical parameters of this optical security element fulfilling a specific projection criterion such that the projected image comprises a caustic pattern reproducing a reference pattern that is easily visually recognizable by a person. The invention also relates to a method for designing a relief pattern of an optical security element.
Type: Application
Filed: September 28, 2018
Publication date: July 23, 2020
Inventors: Andrea Callegari, Pierre Degott, Todor Dinoev, Christophe Garnier, Alain Mayer, Yuliy Schwartzburg, Romain Testuz, Mark Pauly
-
Publication number: 20200160582
Abstract: Embodiments relate to a method for real-time facial animation, and a processing device for real-time facial animation. The method includes providing a dynamic expression model, receiving tracking data corresponding to a facial expression of a user, estimating tracking parameters based on the dynamic expression model and the tracking data, and refining the dynamic expression model based on the tracking data and estimated tracking parameters. The method may further include generating a graphical representation corresponding to the facial expression of the user based on the tracking parameters. Embodiments pertain to a real-time facial animation system.
Type: Application
Filed: January 27, 2020
Publication date: May 21, 2020
Inventors: Sofien Bouaziz, Mark Pauly
-
Publication number: 20200151290
Abstract: The present invention concerns a method for encoding a given 3D shape into a target 2D linkage. The method comprises: (a) providing an initial 2D surface; and (b) defining on the initial 2D surface an auxetic pattern of geometric elements planarly linked to one another to obtain the target 2D linkage, the pattern allowing the target 2D linkage to be virtually stretched. The target 2D linkage has a spatially varying scale factor, thereby spatially varying the stretching capability of the 2D linkage.
Type: Application
Filed: November 12, 2018
Publication date: May 14, 2020
Applicants: École Polytechnique Fédérale de Lausanne (EPFL), Carnegie Mellon University
Inventors: Mina Konakovic-Lukovic, Keenan Crane, Mark Pauly, Julian Panetta
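For intuition on a "spatially varying scale factor": in the classic rotating-squares auxetic linkage (used here only as a well-known stand-in for the patent's geometric elements), opening the hinges by angle theta scales the unit cell linearly by cos(theta) + sin(theta), which peaks at sqrt(2). A sketch mapping a spatially varying target scale field to per-cell hinge angles:

```python
import math

def opening_angle(scale):
    """Hinge rotation (radians) producing linear scale factor `scale` in the
    rotating-squares auxetic linkage, where
    scale(theta) = cos(theta) + sin(theta) = sqrt(2) * sin(theta + pi/4)."""
    if not 1.0 <= scale <= math.sqrt(2.0):
        raise ValueError("rotating squares reach linear scales in [1, sqrt(2)]")
    return math.asin(scale / math.sqrt(2.0)) - math.pi / 4.0

# A spatially varying target scale field maps to per-cell hinge angles:
field = [1.0, 1.2, math.sqrt(2.0)]
angles = [opening_angle(s) for s in field]
# closed cell -> 0 rad; fully opened cell -> pi/4 rad
```

Encoding a 3D shape works in the opposite direction: the desired surface dictates how much each region must stretch, and the pattern is graded so each cell's reachable scale range covers that demand.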
-
Patent number: 10586372
Abstract: Embodiments relate to a method for real-time facial animation, and a processing device for real-time facial animation. The method includes providing a dynamic expression model, receiving tracking data corresponding to a facial expression of a user, estimating tracking parameters based on the dynamic expression model and the tracking data, and refining the dynamic expression model based on the tracking data and estimated tracking parameters. The method may further include generating a graphical representation corresponding to the facial expression of the user based on the tracking parameters. Embodiments pertain to a real-time facial animation system.
Type: Grant
Filed: January 28, 2019
Date of Patent: March 10, 2020
Assignee: Apple Inc.
Inventors: Sofien Bouaziz, Mark Pauly
-
Publication number: 20190272670
Abstract: Technologies are provided herein for modeling and tracking physical objects, such as human hands, within a field of view of a depth sensor. A sphere-mesh model of the physical object can be created and used to track the physical object in real time. The sphere-mesh model comprises an explicit skeletal mesh and an implicit convolution surface generated based on the skeletal mesh. The skeletal mesh parameterizes the convolution surface, and distances between points in data frames received from the depth sensor and the sphere-mesh model can be efficiently determined using the skeletal mesh. The sphere-mesh model can be automatically calibrated by dynamically adjusting positions and associated radii of vertices in the skeletal mesh to fit the convolution surface to a particular physical object.
Type: Application
Filed: August 14, 2017
Publication date: September 5, 2019
Applicant: UVic Industry Partnerships Inc.
Inventors: Andrea Tagliasacchi, Anastasia Tkach, Mark Pauly
-
Publication number: 20190156549
Abstract: Embodiments relate to a method for real-time facial animation, and a processing device for real-time facial animation. The method includes providing a dynamic expression model, receiving tracking data corresponding to a facial expression of a user, estimating tracking parameters based on the dynamic expression model and the tracking data, and refining the dynamic expression model based on the tracking data and estimated tracking parameters. The method may further include generating a graphical representation corresponding to the facial expression of the user based on the tracking parameters. Embodiments pertain to a real-time facial animation system.
Type: Application
Filed: January 28, 2019
Publication date: May 23, 2019
Inventors: Sofien Bouaziz, Mark Pauly
-
Publication number: 20190139287
Abstract: A method of animating a digital character according to facial expressions of a user, comprising the steps of: (a) obtaining a 2D image and 3D depth map of the face of the user; (b) determining expression parameters for a user-specific expression model so that a facial expression of the model represents the face of the user shown in the 2D image and 3D depth map; (c) using the expression parameters and an animation prior to determine animation parameters usable to animate a digital character, wherein the animation prior is a sequence of animation parameters representing predefined animations of a digital character; and (d) using the animation parameters to animate a digital character so that the digital character mimics the face of the user.
Type: Application
Filed: July 2, 2018
Publication date: May 9, 2019
Inventors: Thibaut Weise, Sofien Bouaziz, Hao Li, Mark Pauly