Patents by Inventor Mark Pauly

Mark Pauly has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240125782
    Abstract: The present invention relates to a particle collection for detecting a pathogen-neutralizing molecule. The present invention further relates to a composition comprising a particle collection. The present invention also relates to a method of detecting a pathogen-neutralizing molecule. Furthermore, the present invention relates to a kit for detecting a pathogen-neutralizing molecule. The present invention further relates to a point-of-care device, and to a use of a particle collection or a composition in a method of detecting a pathogen-neutralizing molecule.
    Type: Application
    Filed: December 22, 2021
    Publication date: April 18, 2024
    Inventors: Antje Bäumner, Mark-Steven Steiner, Diana Pauly, Ralf Wagner
  • Patent number: 11948238
    Abstract: Embodiments relate to a method for real-time facial animation, and a processing device for real-time facial animation. The method includes providing a dynamic expression model, receiving tracking data corresponding to a facial expression of a user, estimating tracking parameters based on the dynamic expression model and the tracking data, and refining the dynamic expression model based on the tracking data and estimated tracking parameters. The method may further include generating a graphical representation corresponding to the facial expression of the user based on the tracking parameters. Embodiments pertain to a real-time facial animation system.
    Type: Grant
    Filed: May 27, 2022
    Date of Patent: April 2, 2024
    Assignee: Apple Inc.
    Inventors: Sofien Bouaziz, Mark Pauly
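The "dynamic expression model" in the abstract above is commonly realized as a blendshape basis, and the tracking-parameter estimation step then amounts to fitting per-expression weights to observed face geometry. A minimal sketch of that fitting step as a linear least-squares problem (the function name and toy data are illustrative, not taken from the patent):

```python
import numpy as np

def estimate_tracking_parameters(blendshapes, neutral, observed):
    """Least-squares estimate of blendshape weights w such that
    neutral + sum_k w_k * blendshapes[k] approximates the observed mesh."""
    K = blendshapes.shape[0]
    B = blendshapes.reshape(K, -1).T           # (3N, K) basis matrix
    d = (observed - neutral).ravel()           # per-vertex displacement
    w, *_ = np.linalg.lstsq(B, d, rcond=None)
    return np.clip(w, 0.0, 1.0)                # expression weights stay in [0, 1]

# toy data: 2 expression blendshapes over 4 vertices
rng = np.random.default_rng(0)
neutral = rng.standard_normal((4, 3))
blendshapes = rng.standard_normal((2, 4, 3))
true_w = np.array([0.5, 0.2])
observed = neutral + np.tensordot(true_w, blendshapes, axes=1)

w = estimate_tracking_parameters(blendshapes, neutral, observed)
```

The refinement step the abstract also mentions (updating the expression model itself from tracking data) would alternate with this fit; it is omitted here for brevity.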
  • Patent number: 11836838
    Abstract: A method of animating a digital character according to facial expressions of a user, comprising the steps of: (a) obtaining a 2D image and a 3D depth map of the face of the user; (b) determining expression parameters for a user-specific expression model so that a facial expression of the user-specific expression model represents the face of the user shown in the 2D image and 3D depth map; (c) using the expression parameters and an animation prior to determine animation parameters usable to animate a digital character, wherein the animation prior is a sequence of animation parameters which represent predefined animations of a digital character; and (d) using the animation parameters to animate a digital character so that the digital character mimics the face of the user.
    Type: Grant
    Filed: December 3, 2020
    Date of Patent: December 5, 2023
    Assignee: Apple Inc.
    Inventors: Thibaut Weise, Sofien Bouaziz, Hao Li, Mark Pauly
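The "animation prior" described above pairs stored expression parameters with the animation parameters of predefined character animations. One very simplified way to use such a prior is a nearest-neighbour lookup; a hedged sketch (the patent's actual mapping is more elaborate, and all names and values here are illustrative):

```python
import numpy as np

def retarget_with_prior(expr_params, prior_expr, prior_anim):
    """Map user expression parameters to character animation parameters
    by matching against a prior: a paired sequence of expression
    parameters (prior_expr) and animation parameters (prior_anim)."""
    dists = np.linalg.norm(prior_expr - expr_params, axis=1)
    return prior_anim[np.argmin(dists)]     # animation params of closest frame

# toy prior: three frames (neutral, smile, frown) with one animation channel
prior_expr = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
prior_anim = np.array([[0.0], [0.9], [-0.9]])
anim = retarget_with_prior(np.array([0.8, 0.1]), prior_expr, prior_anim)
```

A practical system would interpolate between prior frames rather than snap to the closest one, but the lookup conveys the role the prior plays.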
  • Publication number: 20230343013
    Abstract: A method of animating a digital character according to facial expressions of a user, comprising the steps of: (a) obtaining a 2D image and a 3D depth map of the face of the user; (b) determining expression parameters for a user-specific expression model so that a facial expression of the user-specific expression model represents the face of the user shown in the 2D image and 3D depth map; (c) using the expression parameters and an animation prior to determine animation parameters usable to animate a digital character, wherein the animation prior is a sequence of animation parameters which represent predefined animations of a digital character; and (d) using the animation parameters to animate a digital character so that the digital character mimics the face of the user.
    Type: Application
    Filed: May 9, 2023
    Publication date: October 26, 2023
    Inventors: Thibaut Weise, Sofien Bouaziz, Hao Li, Mark Pauly
  • Patent number: 11568601
    Abstract: Technologies are provided herein for modeling and tracking physical objects, such as human hands, within a field of view of a depth sensor. A sphere-mesh model of the physical object can be created and used to track the physical object in real time. The sphere-mesh model comprises an explicit skeletal mesh and an implicit convolution surface generated based on the skeletal mesh. The skeletal mesh parameterizes the convolution surface, and distances between points in data frames received from the depth sensor and the sphere-mesh model can be efficiently determined using the skeletal mesh. The sphere-mesh model can be automatically calibrated by dynamically adjusting positions and associated radii of vertices in the skeletal mesh to fit the convolution surface to a particular physical object.
    Type: Grant
    Filed: August 14, 2017
    Date of Patent: January 31, 2023
    Assignee: UVic Industry Partnerships Inc.
    Inventors: Andrea Tagliasacchi, Anastasia Tkach, Mark Pauly
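The efficient distance query the abstract above attributes to the skeletal mesh can be sketched for a single sphere-mesh element: a skeleton segment whose sphere radius interpolates between its two endpoints. The interpolated-radius formula below is a common approximation of the true convolution-surface distance, not the patent's exact computation:

```python
import numpy as np

def sphere_mesh_distance(p, a, b, ra, rb):
    """Approximate signed distance from point p to one sphere-mesh element:
    a skeleton segment a-b whose sphere radius interpolates from ra to rb.
    Negative values lie inside the implicit surface."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    center = a + t * ab                  # closest point on the skeleton segment
    radius = (1.0 - t) * ra + t * rb     # interpolated sphere radius there
    return np.linalg.norm(p - center) - radius

# finger-like element: segment from the origin to (0, 0, 2), radii 0.5 -> 0.3
d = sphere_mesh_distance(np.array([1.0, 0.0, 1.0]),
                         np.array([0.0, 0.0, 0.0]),
                         np.array([0.0, 0.0, 2.0]),
                         0.5, 0.3)
```

Tracking then minimizes such distances over all depth-frame points and all elements, and calibration adjusts the vertex positions and radii, as the abstract describes.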
  • Publication number: 20230024768
    Abstract: Embodiments relate to a method for real-time facial animation, and a processing device for real-time facial animation. The method includes providing a dynamic expression model, receiving tracking data corresponding to a facial expression of a user, estimating tracking parameters based on the dynamic expression model and the tracking data, and refining the dynamic expression model based on the tracking data and estimated tracking parameters. The method may further include generating a graphical representation corresponding to the facial expression of the user based on the tracking parameters. Embodiments pertain to a real-time facial animation system.
    Type: Application
    Filed: May 27, 2022
    Publication date: January 26, 2023
    Inventors: Sofien Bouaziz, Mark Pauly
  • Patent number: 11348299
    Abstract: Embodiments relate to a method for real-time facial animation, and a processing device for real-time facial animation. The method includes providing a dynamic expression model, receiving tracking data corresponding to a facial expression of a user, estimating tracking parameters based on the dynamic expression model and the tracking data, and refining the dynamic expression model based on the tracking data and estimated tracking parameters. The method may further include generating a graphical representation corresponding to the facial expression of the user based on the tracking parameters. Embodiments pertain to a real-time facial animation system.
    Type: Grant
    Filed: January 27, 2020
    Date of Patent: May 31, 2022
    Assignee: Apple Inc.
    Inventors: Sofien Bouaziz, Mark Pauly
  • Patent number: 11298966
    Abstract: The invention relates to a thin optical security element comprising a reflective or refractive light-redirecting surface having a relief pattern operable to redirect incident light from a light source and form a projected image on a projection surface, the projected image comprising a caustic pattern reproducing a reference pattern that is easily visually recognizable by a person. The invention also relates to a method for designing a relief pattern of a light-redirecting surface of a thin optical security element.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: April 12, 2022
    Assignee: SICPA HOLDING SA
    Inventors: Andrea Callegari, Pierre Degott, Todor Dinoev, Christophe Garnier, Alain Mayer, Yuliy Schwartzburg, Romain Testuz, Mark Pauly
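The caustic security elements above rely on a relief pattern whose local facet orientations steer incident light onto a target image. A small-angle sketch for the reflective case, where a facet tilted by slope s deflects a normally incident ray by about 2s radians (a textbook approximation; the function and values are illustrative, not the patented design method):

```python
import numpy as np

def mirror_slope_for_target(x_surface, x_target, distance):
    """Small-angle model of a reflective relief: a facet tilted by slope s
    deflects normally incident light by about 2*s radians. Solve for the
    per-facet slope that steers light from x_surface to the target point
    of the caustic image on a projection surface at the given distance."""
    return (x_target - x_surface) / (2.0 * distance)

# steer light from three relief facets to a common bright spot at x = 0.1 m
facets = np.array([0.0, 0.02, 0.04])
slopes = mirror_slope_for_target(facets, 0.1, 0.5)   # receiver 0.5 m away
```

Repeating this per facet over the whole relief, and integrating the slope field into a height field, yields a surface whose caustic concentrates light into the reference pattern.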
  • Patent number: 11292283
    Abstract: The invention relates to a thin optical security element comprising a reflective or refractive light-redirecting surface having a relief pattern operable to redirect incident light from a light source and form a projected image on a projection surface, the optical parameters of this optical security element fulfilling a specific projection criterion such that the projected image comprises a caustic pattern reproducing a reference pattern that is easily visually recognizable by a person. The invention also relates to a method for designing a relief pattern of an optical security element.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: April 5, 2022
    Assignee: SICPA HOLDING SA
    Inventors: Andrea Callegari, Pierre Degott, Todor Dinoev, Christophe Garnier, Alain Mayer, Yuliy Schwartzburg, Romain Testuz, Mark Pauly
  • Publication number: 20210174567
    Abstract: A method of animating a digital character according to facial expressions of a user, comprising the steps of: (a) obtaining a 2D image and a 3D depth map of the face of the user; (b) determining expression parameters for a user-specific expression model so that a facial expression of the user-specific expression model represents the face of the user shown in the 2D image and 3D depth map; (c) using the expression parameters and an animation prior to determine animation parameters usable to animate a digital character, wherein the animation prior is a sequence of animation parameters which represent predefined animations of a digital character; and (d) using the animation parameters to animate a digital character so that the digital character mimics the face of the user.
    Type: Application
    Filed: December 3, 2020
    Publication date: June 10, 2021
    Inventors: Thibaut Weise, Sofien Bouaziz, Hao Li, Mark Pauly
  • Patent number: 10861211
    Abstract: A method of animating a digital character according to facial expressions of a user, comprising the steps of: (a) obtaining a 2D image and a 3D depth map of the face of the user; (b) determining expression parameters for a user-specific expression model so that a facial expression of the user-specific expression model represents the face of the user shown in the 2D image and 3D depth map; (c) using the expression parameters and an animation prior to determine animation parameters usable to animate a digital character, wherein the animation prior is a sequence of animation parameters which represent predefined animations of a digital character; and (d) using the animation parameters to animate a digital character so that the digital character mimics the face of the user.
    Type: Grant
    Filed: July 2, 2018
    Date of Patent: December 8, 2020
    Assignee: Apple Inc.
    Inventors: Thibaut Weise, Sofien Bouaziz, Hao Li, Mark Pauly
  • Publication number: 20200269627
    Abstract: The invention relates to a thin optical security element comprising a reflective or refractive light-redirecting surface having a relief pattern operable to redirect incident light from a light source and form a projected image on a projection surface, the projected image comprising a caustic pattern reproducing a reference pattern that is easily visually recognizable by a person. The invention also relates to a method for designing a relief pattern of a light-redirecting surface of a thin optical security element.
    Type: Application
    Filed: September 28, 2018
    Publication date: August 27, 2020
    Inventors: Andrea Callegari, Pierre Degott, Todor Dinoev, Christophe Garnier, Alain Mayer, Yuliy Schwartzburg, Romain Testuz, Mark Pauly
  • Patent number: 10732405
    Abstract: Method of designing a refractive surface, comprising: providing a refractive object having a refractive surface with an initial geometry; determining a refraction of incident illumination through the refractive surface with the initial geometry to create a source irradiance distribution E_S on a receiver; and determining a shape of the refractive surface of the refractive object such that a resulting irradiance distribution on the receiver matches a desired target irradiance E_T provided by a user.
    Type: Grant
    Filed: June 11, 2015
    Date of Patent: August 4, 2020
    Assignee: ÉCOLE POLYTECHNIQUE FÉDÉRALE DE LAUSANNE (EPFL)
    Inventors: Mark Pauly, Romain P. Testuz, Yuliy Schwartzburg
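The refractive counterpart of the caustic designs above redirects light by surface slope as well, but through Snell's law. In the thin-prism (paraxial) approximation, a locally planar refractive surface with slope s deflects a normally incident ray by about (n - 1)·s radians; inverting this gives the slope needed to land a ray on a chosen receiver point (a sketch of the deflection principle only, not the patented irradiance-matching optimization):

```python
import numpy as np

def prism_slope_for_target(x_surface, x_target, distance, n=1.5):
    """Thin-prism (paraxial) approximation: a refractive facet with slope s
    deflects a normally incident ray by about (n - 1) * s radians. Solve for
    the slope that sends the ray from x_surface to x_target on a receiver at
    the given distance."""
    deflection = (x_target - x_surface) / distance   # required angle (rad, small)
    return deflection / (n - 1.0)

# steer a ray hitting the surface at x = 0 to x = 0.05 m on a receiver 1 m away
s = prism_slope_for_target(0.0, 0.05, 1.0)
```

Matching a full target irradiance E_T then becomes a problem of choosing where each patch of source light should land so that the accumulated ray density reproduces E_T, with the slopes integrated into a smooth surface.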
  • Publication number: 20200230995
    Abstract: The invention relates to a thin optical security element comprising a reflective or refractive light-redirecting surface having a relief pattern operable to redirect incident light from a light source and form a projected image on a projection surface, the optical parameters of this optical security element fulfilling a specific projection criterion such that the projected image comprises a caustic pattern reproducing a reference pattern that is easily visually recognizable by a person. The invention also relates to a method for designing a relief pattern of an optical security element.
    Type: Application
    Filed: September 28, 2018
    Publication date: July 23, 2020
    Inventors: Andrea Callegari, Pierre Degott, Todor Dinoev, Christophe Garnier, Alain Mayer, Yuliy Schwartzburg, Romain Testuz, Mark Pauly
  • Publication number: 20200160582
    Abstract: Embodiments relate to a method for real-time facial animation, and a processing device for real-time facial animation. The method includes providing a dynamic expression model, receiving tracking data corresponding to a facial expression of a user, estimating tracking parameters based on the dynamic expression model and the tracking data, and refining the dynamic expression model based on the tracking data and estimated tracking parameters. The method may further include generating a graphical representation corresponding to the facial expression of the user based on the tracking parameters. Embodiments pertain to a real-time facial animation system.
    Type: Application
    Filed: January 27, 2020
    Publication date: May 21, 2020
    Inventors: Sofien Bouaziz, Mark Pauly
  • Publication number: 20200151290
    Abstract: The present invention concerns a method for encoding a given 3D shape into a target 2D linkage. The method comprises: (a) providing an initial 2D surface; and (b) defining on the initial 2D surface an auxetic pattern of geometric elements linked to one another in the plane to obtain the target 2D linkage, the pattern allowing the target 2D linkage to be virtually stretched. The target 2D linkage has a spatially varying scale factor, thereby spatially varying the stretching capability of the 2D linkage.
    Type: Application
    Filed: November 12, 2018
    Publication date: May 14, 2020
    Applicants: ECOLE POLYTECHNIQUE FEDERALE DE LAUSANNE (EPFL), CARNEGIE MELLON UNIVERSITY
    Inventors: Mina Konakovic-Lukovic, Keenan Crane, Mark Pauly, Julian Panetta
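The "spatially varying scale factor" in the abstract above controls how much each region of the auxetic linkage can expand. The classic rotating-squares mechanism gives a closed form for this: a hinge opening angle theta yields a linear expansion of cos(theta/2) + sin(theta/2), between 1 (closed) and sqrt(2) (fully open). The patent's linkages use more general geometric elements; this standard textbook law is used here purely as an illustration of inverting a target scale map into per-region opening angles:

```python
import numpy as np

def hinge_angle_for_scale(s):
    """Invert the rotating-squares expansion law
    s(theta) = cos(theta/2) + sin(theta/2), theta in [0, pi/2],
    giving the hinge opening angle for a linear scale s in [1, sqrt(2)].
    Uses s = sqrt(2) * sin(theta/2 + pi/4)."""
    return 2.0 * (np.arcsin(np.clip(s / np.sqrt(2.0), -1.0, 1.0)) - np.pi / 4.0)

# a spatially varying target scale map (flat region, mild stretch, max stretch)
scales = np.array([1.0, 1.2, np.sqrt(2.0)])
thetas = hinge_angle_for_scale(scales)   # 0 when scale is 1, pi/2 at max stretch
```

Varying the cut geometry across the surface changes the attainable scale range per region, which is what lets a flat linkage deploy into a doubly curved 3D shape.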
  • Patent number: 10586372
    Abstract: Embodiments relate to a method for real-time facial animation, and a processing device for real-time facial animation. The method includes providing a dynamic expression model, receiving tracking data corresponding to a facial expression of a user, estimating tracking parameters based on the dynamic expression model and the tracking data, and refining the dynamic expression model based on the tracking data and estimated tracking parameters. The method may further include generating a graphical representation corresponding to the facial expression of the user based on the tracking parameters. Embodiments pertain to a real-time facial animation system.
    Type: Grant
    Filed: January 28, 2019
    Date of Patent: March 10, 2020
    Assignee: Apple Inc.
    Inventors: Sofien Bouaziz, Mark Pauly
  • Publication number: 20190272670
    Abstract: Technologies are provided herein for modeling and tracking physical objects, such as human hands, within a field of view of a depth sensor. A sphere-mesh model of the physical object can be created and used to track the physical object in real time. The sphere-mesh model comprises an explicit skeletal mesh and an implicit convolution surface generated based on the skeletal mesh. The skeletal mesh parameterizes the convolution surface, and distances between points in data frames received from the depth sensor and the sphere-mesh model can be efficiently determined using the skeletal mesh. The sphere-mesh model can be automatically calibrated by dynamically adjusting positions and associated radii of vertices in the skeletal mesh to fit the convolution surface to a particular physical object.
    Type: Application
    Filed: August 14, 2017
    Publication date: September 5, 2019
    Applicant: UVic Industry Partnerships Inc.
    Inventors: Andrea Tagliasacchi, Anastasia Tkach, Mark Pauly
  • Publication number: 20190156549
    Abstract: Embodiments relate to a method for real-time facial animation, and a processing device for real-time facial animation. The method includes providing a dynamic expression model, receiving tracking data corresponding to a facial expression of a user, estimating tracking parameters based on the dynamic expression model and the tracking data, and refining the dynamic expression model based on the tracking data and estimated tracking parameters. The method may further include generating a graphical representation corresponding to the facial expression of the user based on the tracking parameters. Embodiments pertain to a real-time facial animation system.
    Type: Application
    Filed: January 28, 2019
    Publication date: May 23, 2019
    Inventors: Sofien Bouaziz, Mark Pauly
  • Publication number: 20190139287
    Abstract: A method of animating a digital character according to facial expressions of a user, comprising the steps of: (a) obtaining a 2D image and a 3D depth map of the face of the user; (b) determining expression parameters for a user-specific expression model so that a facial expression of the user-specific expression model represents the face of the user shown in the 2D image and 3D depth map; (c) using the expression parameters and an animation prior to determine animation parameters usable to animate a digital character, wherein the animation prior is a sequence of animation parameters which represent predefined animations of a digital character; and (d) using the animation parameters to animate a digital character so that the digital character mimics the face of the user.
    Type: Application
    Filed: July 2, 2018
    Publication date: May 9, 2019
    Inventors: Thibaut Weise, Sofien Bouaziz, Hao Li, Mark Pauly