Patents by Inventor Wan-Chun Ma

Wan-Chun Ma has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11288844
    Abstract: Systems, methods, and computer program products are described that implement obtaining, at an electronic computing device and for at least one image of a scene rendered in an Augmented Reality (AR) environment, a scene lighting estimation captured at a first time period. The scene lighting estimation may include at least a first image measurement associated with the scene. The implementations may include determining, at the electronic computing device, a second image measurement associated with the scene at a second time period, and determining a function of the first image measurement and the second image measurement. Based on the determined function, the implementations may also include triggering calculation of a partial lighting estimation update or triggering calculation of a full lighting estimation update, and rendering, on a screen of the electronic computing device and for the scene, the scene using the partial lighting estimation update or the full lighting estimation update.
    Type: Grant
    Filed: October 16, 2019
    Date of Patent: March 29, 2022
    Assignee: Google LLC
    Inventors: Chloe LeGendre, Laurent Charbonnel, Christina Tong, Konstantine Nicholas John Tsotsos, Wan-Chun Ma, Paul Debevec
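A minimal sketch of the trigger logic in the entry above (patent 11288844), assuming mean frame brightness as the image measurement and an absolute difference against a fixed threshold as the comparison function; the names, the threshold, and the measurement choice are illustrative assumptions, not the claimed method.

```python
import numpy as np

def average_brightness(frame: np.ndarray) -> float:
    """Toy image measurement: mean luminance of an RGB frame with values in [0, 1]."""
    return float(frame.mean())

def choose_update(first_measurement: float,
                  second_measurement: float,
                  threshold: float = 0.05) -> str:
    """Large changes between the two measurements trigger a full lighting
    estimation update; small changes trigger a cheaper partial update."""
    delta = abs(second_measurement - first_measurement)  # the "function" of the two measurements
    return "full" if delta > threshold else "partial"

# Two synthetic frames captured at different time periods.
frame_t1 = np.full((8, 8, 3), 0.40)
frame_t2 = np.full((8, 8, 3), 0.55)
print(choose_update(average_brightness(frame_t1), average_brightness(frame_t2)))  # -> "full"
```

In a renderer, the returned mode would decide whether only part of the lighting representation is refreshed or the whole estimate is recomputed before the scene is re-rendered.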
  • Publication number: 20220027659
    Abstract: Techniques of estimating lighting from portraits include generating a lighting estimate from a single image of a face based on a machine learning (ML) system using multiple bidirectional reflection distribution functions (BRDFs) as a loss function. In some implementations, the ML system is trained using images of faces formed with HDR illumination computed from LDR imagery. The technical solution includes training a lighting estimation model in a supervised manner using a dataset of portraits and their corresponding ground truth illumination.
    Type: Application
    Filed: September 21, 2020
    Publication date: January 27, 2022
    Inventors: Chloe LeGendre, Paul Debevec, Wan-Chun Ma, Rohit Pandey, Sean Ryan Francesco Fanello, Christina Tong
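A rough sketch of what a loss built from multiple BRDFs can look like, in the spirit of the abstract above (publication 20220027659): lighting is assumed to be a handful of directional lights, BRDFs are approximated by Phong-style lobes of varying shininess, and a sphere of normals stands in for the rendered reference object. All of these choices are toy assumptions, not the application's ML system.

```python
import numpy as np

def render_ball(normals, light_dirs, light_rgb, shininess):
    """Shade unit normals under directional lights with a Phong-style lobe;
    shininess=1 is roughly diffuse, larger values are glossier."""
    ndotl = np.clip(normals @ light_dirs.T, 0.0, None) ** shininess  # (P, L)
    return ndotl @ light_rgb                                         # (P, 3)

def multi_brdf_loss(pred_lighting, gt_lighting, normals, shininess_set=(1.0, 8.0, 64.0)):
    """Sum of per-BRDF L2 errors between a sphere lit by the predicted
    lighting and the same sphere lit by the ground-truth lighting."""
    loss = 0.0
    for s in shininess_set:
        pred = render_ball(normals, *pred_lighting, s)
        gt = render_ball(normals, *gt_lighting, s)
        loss += float(np.mean((pred - gt) ** 2))
    return loss

# Example: evaluate the loss between two random lighting environments.
rng = np.random.default_rng(1)
normals = rng.normal(size=(500, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

def random_lighting(n_lights=4):
    dirs = rng.normal(size=(n_lights, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return dirs, rng.uniform(0.0, 1.0, size=(n_lights, 3))

print(multi_brdf_loss(random_lighting(), random_lighting(), normals))
```

Using several lobes at once penalizes lighting estimates that match a diffuse rendering but miss the highlights a glossier material would reveal, which is the intuition behind a multi-BRDF loss.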
  • Publication number: 20210406581
    Abstract: An example method, apparatus, and computer-readable storage medium are provided to predict high-dynamic range (HDR) lighting from low-dynamic range (LDR) background images. In an example implementation, a method may include receiving low-dynamic range (LDR) background images of scenes, each LDR background image captured with the appearance of one or more reference objects with different reflectance properties; and training a lighting estimation model based at least on the received LDR background images to predict high-dynamic range (HDR) lighting based at least on the trained model. In another example implementation, a method may include capturing a low-dynamic range (LDR) background image of a scene from an LDR video captured by a camera of the electronic computing device; predicting high-dynamic range (HDR) lighting for the image, the predicting, using a trained model, based at least on the LDR background image; and rendering a virtual object based at least on the predicted HDR lighting.
    Type: Application
    Filed: November 15, 2019
    Publication date: December 30, 2021
    Inventors: Chloe LeGendre, Wan-Chun Ma, Graham Fyffe, John Flynn, Jessica Busch, Paul Debevec
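A toy sketch of the training setup suggested by the abstract above (publication 20210406581), assuming the lighting estimation model is an ordinary linear regressor and that LDR backgrounds and HDR lighting targets are flattened into fixed-length vectors; the feature and coefficient counts, learning rate, and random data are all placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy dataset: flattened LDR background features paired with HDR
# lighting targets (e.g. environment-map coefficients that may exceed 1.0).
ldr_images = rng.uniform(0.0, 1.0, size=(256, 192))    # 256 images, 192 features each
hdr_lighting = rng.uniform(0.0, 4.0, size=(256, 27))   # 27 lighting coefficients per image

W = np.zeros((192, 27))
lr = 1e-2
for _ in range(500):                                    # least-squares regression by gradient descent
    residual = ldr_images @ W - hdr_lighting
    W -= lr * (ldr_images.T @ residual) / len(ldr_images)

# Inference: predict HDR lighting for a new LDR background; the prediction
# could then drive the shading of a virtual object composited into the scene.
new_background = rng.uniform(0.0, 1.0, size=(192,))
predicted_lighting = new_background @ W
```

In the application the targets come from reference objects with different reflectance properties photographed in each scene; the random targets here only illustrate the input and output shapes of such a model.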
  • Patent number: 11189084
    Abstract: The present specification describes systems and methods for automatically generating personalized blendshapes from actor performance measurements, while preserving the semantics of a template facial animation rig. The disclosed inventions facilitate the creation of an ensemble of realistic digital double face rigs, one for each individual, with consistent behaviour across the set, using sophisticated iterative optimization techniques.
    Type: Grant
    Filed: January 17, 2020
    Date of Patent: November 30, 2021
    Assignee: Activision Publishing, Inc.
    Inventors: Wan-Chun Ma, III, Chongyang Ma
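A small sketch of one way to personalize a template blendshape rig from performance measurements, in the spirit of the Activision entries with this abstract (patent 11189084 and its published application): solve for the actor's blendshape deltas in a least-squares sense while regularizing toward the template so each shape keeps its meaning. The matrix shapes and the regularizer are assumptions for illustration, not the patented optimization.

```python
import numpy as np

def personalize_blendshapes(template_deltas, scans, neutral, weights, reg=0.1):
    """Solve for personalized blendshape deltas B minimizing
        || W @ B - (scans - neutral) ||^2  +  reg * || B - template_deltas ||^2,
    i.e. fit the actor's measured frames while pulling each shape toward its
    template counterpart (a simple stand-in for preserving rig semantics).

    template_deltas: (K, V3) template shapes; weights: (F, K) per-frame
    activations; scans: (F, V3) measured frames; neutral: (V3,) neutral face."""
    W = weights
    D = scans - neutral
    A = W.T @ W + reg * np.eye(W.shape[1])
    b = W.T @ D + reg * template_deltas
    return np.linalg.solve(A, b)          # (K, V3) personalized deltas
```

The regularizer is what keeps a "smile" shape a smile for every individual in the ensemble; larger values of reg stay closer to the template, smaller values fit the measurements more tightly.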
  • Publication number: 20210166437
    Abstract: Systems, methods, and computer program products are described that implement obtaining, at an electronic computing device and for at least one image of a scene rendered in an Augmented Reality (AR) environment, a scene lighting estimation captured at a first time period. The scene lighting estimation may include at least a first image measurement associated with the scene. The implementations may include determining, at the electronic computing device, a second image measurement associated with the scene at a second time period, and determining a function of the first image measurement and the second image measurement. Based on the determined function, the implementations may also include triggering calculation of a partial lighting estimation update or triggering calculation of a full lighting estimation update, and rendering, on a screen of the electronic computing device and for the scene, the scene using the partial lighting estimation update or the full lighting estimation update.
    Type: Application
    Filed: October 16, 2019
    Publication date: June 3, 2021
    Inventors: Chloe LeGendre, Laurent Charbonnel, Christina Tong, Konstantine Nicholas John Tsotsos, Wan-Chun Ma, Paul Debevec
  • Publication number: 20200226821
    Abstract: The present specification describes systems and methods for automatically generating personalized blendshapes from actor performance measurements, while preserving the semantics of a template facial animation rig. The disclosed inventions facilitate the creation of an ensemble of realistic digital double face rigs, one for each individual, with consistent behaviour across the set, using sophisticated iterative optimization techniques.
    Type: Application
    Filed: January 17, 2020
    Publication date: July 16, 2020
    Inventors: Wan-Chun Ma, III, Chongyang Ma
  • Patent number: 10586380
    Abstract: The present specification describes systems and methods for automatically animating personalized blendshapes from three dimensional stereo reconstruction data. The disclosed inventions facilitate the animation of blendshapes using an optimization process that applies to a frame a fitting process to yield a set of weighted blendshapes, applies to that frame a temporal smoothing process to yield another set of weighted blendshapes, and repeats that optimization process for a predetermined number of iterations to yield a final set of weighted blendshapes.
    Type: Grant
    Filed: October 21, 2016
    Date of Patent: March 10, 2020
    Assignee: Activision Publishing, Inc.
    Inventors: Wan-Chun Ma, Chongyang Ma
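A compact sketch of the alternating fit-and-smooth loop described in the abstract above (patent 10586380), assuming simple clipped least-squares fitting with a pull toward the previous iteration's smoothed weights and a three-tap smoothing kernel; the prior term, kernel, and iteration count are illustrative assumptions.

```python
import numpy as np

def fit_frame(blendshapes, neutral, target, prior, reg=0.1):
    """Blendshape weights for one reconstructed frame: least-squares fit of
    (target - neutral) plus a pull toward the prior (the temporally smoothed
    weights from the previous iteration), clipped to [0, 1]."""
    A = blendshapes @ blendshapes.T + reg * np.eye(blendshapes.shape[0])
    b = blendshapes @ (target - neutral) + reg * prior
    return np.clip(np.linalg.solve(A, b), 0.0, 1.0)

def smooth(weights, kernel=np.array([0.25, 0.5, 0.25])):
    """Temporal smoothing of each weight curve along the frame axis."""
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, weights)

def animate(blendshapes, neutral, frames, iterations=3):
    """Alternate per-frame fitting and temporal smoothing for a fixed number
    of iterations to yield the final weighted blendshapes per frame.
    blendshapes: (K, V3); neutral: (V3,); frames: (F, V3) stereo reconstructions."""
    weights = np.zeros((len(frames), blendshapes.shape[0]))
    for _ in range(iterations):
        weights = np.stack([fit_frame(blendshapes, neutral, f, w)
                            for f, w in zip(frames, weights)])   # fitting pass
        weights = smooth(weights)                                # smoothing pass
    return weights
```

Iterating lets the smoothed curves from one pass act as a prior for the next fit, trading a little per-frame accuracy for temporal stability.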
  • Patent number: 10573065
    Abstract: The present specification describes systems and methods for automatically generating personalized blendshapes from actor performance measurements, while preserving the semantics of a template facial animation rig. The disclosed inventions facilitate the creation of an ensemble of realistic digital double face rigs, one for each individual, with consistent behaviour across the set, using sophisticated iterative optimization techniques.
    Type: Grant
    Filed: October 21, 2016
    Date of Patent: February 25, 2020
    Assignee: Activision Publishing, Inc.
    Inventors: Wan-Chun Ma, Chongyang Ma
  • Publication number: 20180033189
    Abstract: The present specification describes systems and methods for automatically generating personalized blendshapes from actor performance measurements, while preserving the semantics of a template facial animation rig. The disclosed inventions facilitate the creation of an ensemble of realistic digital double face rigs, one for each individual, with consistent behaviour across the set, using sophisticated iterative optimization techniques.
    Type: Application
    Filed: October 21, 2016
    Publication date: February 1, 2018
    Inventors: Wan-Chun Ma, Chongyang Ma
  • Publication number: 20180033190
    Abstract: The present specification describes systems and methods for automatically animating personalized blendshapes from three dimensional stereo reconstruction data. The disclosed inventions facilitate the animation of blendshapes using an optimization process that applies to a frame a fitting process to yield a set of weighted blendshapes, applies to that frame a temporal smoothing process to yield another set of weighted blendshapes, and repeats that optimization process for a predetermined number of iterations to yield a final set of weighted blendshapes.
    Type: Application
    Filed: October 21, 2016
    Publication date: February 1, 2018
    Inventors: Wan-Chun Ma, Chongyang Ma
  • Patent number: 8902232
    Abstract: Acquisition, modeling, compression, and synthesis of realistic facial deformations using polynomial displacement maps are described. An analysis phase can be included where the relationship between motion capture markers and detailed facial geometry is inferred. A synthesis phase can be included where detailed animated facial geometry is driven by a sparse set of motion capture markers. For analysis, an actor can be recorded wearing facial markers while performing a set of training expression clips. Real-time high-resolution facial deformations are captured, including dynamic wrinkle and pore detail, using interleaved structured light 3D scanning and photometric stereo. Next, displacements are calculated between a neutral mesh driven by the motion capture markers and the high-resolution captured expressions. These geometric displacements are stored in one or more polynomial displacement maps parameterized according to the local deformations of the motion capture dots.
    Type: Grant
    Filed: February 2, 2009
    Date of Patent: December 2, 2014
    Assignee: University of Southern California
    Inventors: Paul E. Debevec, Wan-Chun Ma, Timothy Hawkins
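A minimal sketch of a polynomial displacement map for the two USC entries with this abstract (patent 8902232 and publication 20090195545): per-texel polynomial coefficients are fit that map local deformation parameters derived from the motion-capture markers to measured fine-scale displacements, and are then evaluated at synthesis time. The quadratic basis in two parameters is an assumption for illustration, not the exact parameterization used.

```python
import numpy as np

def poly_basis(params):
    """Quadratic polynomial basis in two local deformation parameters (u, v),
    e.g. local stretch along two directions around a mocap dot."""
    u, v = params
    return np.array([1.0, u, v, u * u, u * v, v * v])

def fit_pdm(train_params, train_displacements):
    """Least-squares fit of per-texel polynomial coefficients from training
    expressions: train_params is N parameter pairs, train_displacements is
    (N, T) measured displacements for T texels."""
    B = np.stack([poly_basis(p) for p in train_params])      # (N, 6)
    coeffs, *_ = np.linalg.lstsq(B, train_displacements, rcond=None)
    return coeffs                                             # (6, T)

def synthesize(coeffs, params):
    """Evaluate the polynomial displacement map for new deformation
    parameters driven by a sparse set of mocap markers."""
    return poly_basis(params) @ coeffs                        # (T,) displacements
```

At runtime the synthesized displacements would be added on top of the neutral mesh deformed by the markers, restoring wrinkle- and pore-scale detail that the sparse markers alone cannot carry.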
  • Publication number: 20120062719
    Abstract: A camera may capture a sequence of images of a face while the face changes. A camera support may cause the field of view of the camera to remain substantially fixed with respect to the face, notwithstanding movement of the head. A lighting system may light the face from multiple directions. A lighting system support may cause each of the directions of the light from the lighting system to remain substantially fixed with respect to the face, notwithstanding movement of the head. Sequential images of the face may be computed as it changes based on the captured images. Each computed image may include at least per-pixel surface normals of the face that are calculated based on multiple, separate images of the face. Each separate image may be representative of the face being lit by the lighting system from a different one of the separate directions.
    Type: Application
    Filed: September 9, 2011
    Publication date: March 15, 2012
    Applicant: University of Southern California
    Inventors: Paul E. Debevec, Andrew Jones, Graham Leslie Fyffe, Wan-Chun Ma, Xueming Yu, Jay Busch
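A brief sketch of how per-pixel surface normals can be computed from several images of the face, each lit from a different known direction, as the abstract above describes; this is classic Lambertian photometric stereo and only a stand-in for the system in the application.

```python
import numpy as np

def photometric_normals(images, light_dirs):
    """Solve, per pixel, for an albedo-scaled normal from intensities observed
    under several known directional lights (Lambertian assumption).
    images: list of (H, W) grayscale frames; light_dirs: (L, 3) unit vectors."""
    H, W = images[0].shape
    I = np.stack([img.reshape(-1) for img in images], axis=1)   # (P, L) per-pixel intensities
    L = np.asarray(light_dirs)                                  # (L, 3)
    G, *_ = np.linalg.lstsq(L, I.T, rcond=None)                 # (3, P) = albedo * normal
    norms = np.linalg.norm(G, axis=0, keepdims=True) + 1e-8
    normals = (G / norms).T.reshape(H, W, 3)
    albedo = norms.reshape(H, W)
    return normals, albedo
```

Because the camera and lighting supports keep the face's framing and the light directions fixed relative to the face even as the head moves, the same light-direction calibration can be reused across the whole captured sequence.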
  • Patent number: 8134555
    Abstract: An apparatus for generating a surface normal map of an object may include a plurality of light sources having intensities that are controllable so as to generate one or more gradient illumination patterns. The light sources are configured and arranged to illuminate the surface of the object with the gradient illumination patterns. A camera may receive light reflected from the illuminated surface of the object, and generate data representative of the reflected light. A processing system may process the data so as to estimate the surface normal map of the surface of the object. A specular normal map and a diffuse normal map of the surface of the object may be generated separately, by placing polarizers on the light sources and in front of the camera so as to illuminate the surface of the object with polarized spherical gradient illumination patterns.
    Type: Grant
    Filed: April 17, 2008
    Date of Patent: March 13, 2012
    Assignee: University of Southern California
    Inventors: Paul E. Debevec, Wan-Chun Ma, Timothy Hawkins
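A short sketch of normal recovery from spherical gradient illumination as described in the entry above (patent 8134555): under gradient patterns aligned with the X, Y and Z axes plus a constant full-on pattern, the ratio of gradient-lit to fully-lit intensity at a pixel is roughly (d_i + 1) / 2 for the direction d at the center of the reflectance lobe, so a direction estimate falls out of simple image ratios. The epsilon and normalization details here are assumptions.

```python
import numpy as np

def gradient_illumination_normals(img_x, img_y, img_z, img_full, eps=1e-6):
    """Per-pixel normal map from four photographs: three lit by spherical
    gradient patterns along X, Y, Z and one lit by the constant pattern."""
    full = np.maximum(img_full, eps)
    n = np.stack([2.0 * img_x / full - 1.0,
                  2.0 * img_y / full - 1.0,
                  2.0 * img_z / full - 1.0], axis=-1)
    return n / (np.linalg.norm(n, axis=-1, keepdims=True) + eps)
```

With polarizers on the lights and camera as the abstract describes, running the same computation on the cross-polarized (diffuse-only) images and on the specular-only difference images yields the diffuse and specular normal maps separately.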
  • Publication number: 20090195545
    Abstract: Acquisition, modeling, compression, and synthesis of realistic facial deformations using polynomial displacement maps are described. An analysis phase can be included where the relationship between motion capture markers and detailed facial geometry is inferred. A synthesis phase can be included where detailed animated facial geometry is driven by a sparse set of motion capture markers. For analysis, an actor can be recorded wearing facial markers while performing a set of training expression clips. Real-time high-resolution facial deformations are captured, including dynamic wrinkle and pore detail, using interleaved structured light 3D scanning and photometric stereo. Next, displacements are calculated between a neutral mesh driven by the motion capture markers and the high-resolution captured expressions. These geometric displacements are stored in one or more polynomial displacement maps parameterized according to the local deformations of the motion capture dots.
    Type: Application
    Filed: February 2, 2009
    Publication date: August 6, 2009
    Applicant: University of Southern California
    Inventors: Paul E. Debevec, Wan-Chun Ma, Timothy Hawkins
  • Publication number: 20080304081
    Abstract: An apparatus for generating a surface normal map of an object may include a plurality of light sources having intensities that are controllable so as to generate one or more gradient illumination patterns. The light sources are configured and arranged to illuminate the surface of the object with the gradient illumination patterns. A camera may receive light reflected from the illuminated surface of the object, and generate data representative of the reflected light. A processing system may process the data so as to estimate the surface normal map of the surface of the object. A specular normal map and a diffuse normal map of the surface of the object may be generated separately, by placing polarizers on the light sources and in front of the camera so as to illuminate the surface of the object with polarized spherical gradient illumination patterns.
    Type: Application
    Filed: April 17, 2008
    Publication date: December 11, 2008
    Inventors: Paul E. Debevec, Wan-Chun Ma, Timothy Hawkins