Patents by Inventor Wan-Chun Ma
Wan-Chun Ma has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11288844
Abstract: Systems, methods, and computer program products are described that implement obtaining, at an electronic computing device and for at least one image of a scene rendered in an Augmented Reality (AR) environment, a scene lighting estimation captured at a first time period. The scene lighting estimation may include at least a first image measurement associated with the scene. The implementations may include determining, at the electronic computing device, a second image measurement associated with the scene at a second time period, and determining a function of the first image measurement and the second image measurement. Based on the determined function, the implementations may also include triggering calculation of a partial lighting estimation update or a full lighting estimation update, and rendering, on a screen of the electronic computing device, the scene using the partial or full lighting estimation update.
Type: Grant
Filed: October 16, 2019
Date of Patent: March 29, 2022
Assignee: Google LLC
Inventors: Chloe LeGendre, Laurent Charbonnel, Christina Tong, Konstantine Nicholas John Tsotsos, Wan-Chun Ma, Paul Debevec
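The abstract above describes choosing between a partial and a full lighting-estimation update based on a function of two image measurements taken at different times. A minimal sketch of that decision logic, assuming mean luminance as the image measurement and hypothetical thresholds (the patent does not disclose specific measurements or values):

```python
import numpy as np

# Hypothetical thresholds; the patent does not disclose specific values.
PARTIAL_THRESHOLD = 0.05
FULL_THRESHOLD = 0.25

def image_measurement(frame: np.ndarray) -> float:
    """One possible scene measurement: mean luminance of an RGB frame in [0, 1]."""
    luminance = frame @ np.array([0.2126, 0.7152, 0.0722])
    return float(luminance.mean())

def choose_update(prev_measurement: float, curr_measurement: float) -> str:
    """Trigger a partial or full lighting-estimation update based on a
    function (here, absolute difference) of the two measurements."""
    delta = abs(curr_measurement - prev_measurement)
    if delta >= FULL_THRESHOLD:
        return "full"
    if delta >= PARTIAL_THRESHOLD:
        return "partial"
    return "none"
```

A small change in scene brightness then triggers only the cheap partial update, while a large change (e.g. lights switched off) forces a full re-estimation.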
-
Publication number: 20220027659
Abstract: Techniques for estimating lighting from portraits include generating a lighting estimate from a single image of a face based on a machine learning (ML) system that uses multiple bidirectional reflectance distribution functions (BRDFs) as a loss function. In some implementations, the ML system is trained using images of faces formed with HDR illumination computed from LDR imagery. The technical solution includes training a lighting estimation model in a supervised manner using a dataset of portraits and their corresponding ground-truth illumination.
Type: Application
Filed: September 21, 2020
Publication date: January 27, 2022
Inventors: Chloe LeGendre, Paul Debevec, Wan-Chun Ma, Rohit Pandey, Sean Ryan Francesco Fanello, Christina Tong
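The BRDF-based loss described above compares renderings under the predicted and ground-truth lighting rather than the lighting coefficients themselves. A minimal sketch of the idea, assuming a single Lambertian BRDF and a discretized set of directional lights (the abstract specifies multiple BRDFs; all function names here are illustrative):

```python
import numpy as np

def lambertian_render(normals, light_dirs, light_colors):
    """Render per-pixel diffuse shading from a discretized environment:
    sum over lights of max(0, n.l) * color.
    normals: (P, 3); light_dirs: (L, 3); light_colors: (L, 3)."""
    cos = np.clip(normals @ light_dirs.T, 0.0, None)   # (P, L)
    return cos @ light_colors                          # (P, 3) RGB per pixel

def render_loss(pred_lights, gt_lights, normals, light_dirs):
    """Image-based loss: L2 between renderings under predicted and
    ground-truth lighting, rather than on the lighting values directly."""
    pred = lambertian_render(normals, light_dirs, pred_lights)
    gt = lambertian_render(normals, light_dirs, gt_lights)
    return float(np.mean((pred - gt) ** 2))
```

Because the loss is measured on rendered appearance, lighting errors that are invisible after convolution with the BRDF are penalized less than errors that change the shaded image.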
-
Publication number: 20210406581
Abstract: An example method, apparatus, and computer-readable storage medium are provided to predict high-dynamic-range (HDR) lighting from low-dynamic-range (LDR) background images. In an example implementation, a method may include receiving LDR background images of scenes, each captured with the appearance of one or more reference objects with different reflectance properties, and training a lighting estimation model based at least on the received LDR background images to predict HDR lighting. In another example implementation, a method may include capturing an LDR background image of a scene from an LDR video captured by a camera of an electronic computing device; predicting HDR lighting for the image using the trained model, based at least on the LDR background image; and rendering a virtual object based at least on the predicted HDR lighting.
Type: Application
Filed: November 15, 2019
Publication date: December 30, 2021
Inventors: Chloe LeGendre, Wan-Chun Ma, Graham Fyffe, John Flynn, Jessica Busch, Paul Debevec
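The training pairs above couple an LDR observation with HDR ground truth because LDR capture destroys information: the sensor clips bright light sources that dominate the true illumination. A small sketch of that forward capture model (expose, clip, gamma-encode) shows what the trained model must invert; the function and parameter names are illustrative:

```python
import numpy as np

def hdr_to_ldr(hdr: np.ndarray, exposure: float = 1.0, gamma: float = 2.2) -> np.ndarray:
    """Simulate LDR capture of linear HDR radiance: apply exposure,
    clip to [0, 1], then gamma-encode. Radiance above the clip point
    is lost, which is the information the lighting model must infer."""
    return np.clip(hdr * exposure, 0.0, 1.0) ** (1.0 / gamma)
```

A bright source of linear radiance 4.0 and one of 40.0 both map to the same clipped pixel value 1.0, so the network can only recover the difference from indirect cues such as the shading of objects in the scene.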
-
Patent number: 11189084
Abstract: The present specification describes systems and methods for automatically generating personalized blendshapes from actor performance measurements, while preserving the semantics of a template facial animation rig. The disclosed inventions facilitate the creation of an ensemble of realistic digital-double face rigs for each individual, with consistent behaviour across the set, using sophisticated iterative optimization techniques.
Type: Grant
Filed: January 17, 2020
Date of Patent: November 30, 2021
Assignee: Activision Publishing, Inc.
Inventors: Wan-Chun Ma, III, Chongyang Ma
-
Publication number: 20210166437
Abstract: Systems, methods, and computer program products are described that implement obtaining, at an electronic computing device and for at least one image of a scene rendered in an Augmented Reality (AR) environment, a scene lighting estimation captured at a first time period. The scene lighting estimation may include at least a first image measurement associated with the scene. The implementations may include determining, at the electronic computing device, a second image measurement associated with the scene at a second time period, and determining a function of the first image measurement and the second image measurement. Based on the determined function, the implementations may also include triggering calculation of a partial lighting estimation update or a full lighting estimation update, and rendering, on a screen of the electronic computing device, the scene using the partial or full lighting estimation update.
Type: Application
Filed: October 16, 2019
Publication date: June 3, 2021
Inventors: Chloe LeGendre, Laurent Charbonnel, Christina Tong, Konstantine Nicholas John Tsotsos, Wan-Chun Ma, Paul Debevec
-
Publication number: 20200226821
Abstract: The present specification describes systems and methods for automatically generating personalized blendshapes from actor performance measurements, while preserving the semantics of a template facial animation rig. The disclosed inventions facilitate the creation of an ensemble of realistic digital-double face rigs for each individual, with consistent behaviour across the set, using sophisticated iterative optimization techniques.
Type: Application
Filed: January 17, 2020
Publication date: July 16, 2020
Inventors: Wan-Chun Ma, III, Chongyang Ma
-
Patent number: 10586380
Abstract: The present specification describes systems and methods for automatically animating personalized blendshapes from three-dimensional stereo reconstruction data. The disclosed inventions facilitate the animation of blendshapes using an optimization process that applies a fitting process to a frame to yield a set of weighted blendshapes, applies a temporal smoothing process to that frame to yield another set of weighted blendshapes, and repeats that optimization for a predetermined number of iterations to yield a final set of weighted blendshapes.
Type: Grant
Filed: October 21, 2016
Date of Patent: March 10, 2020
Assignee: Activision Publishing, Inc.
Inventors: Wan-Chun Ma, Chongyang Ma
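The fit-then-smooth iteration described above can be sketched as alternating a regularized per-frame least-squares fit of blendshape weights with a temporal smoothing pass over neighboring frames. This is an illustrative reconstruction, not the patented algorithm; the regularization and smoothing parameters are assumptions:

```python
import numpy as np

def animate(targets, neutral, blendshapes, iterations=3, smooth=0.5, reg=0.1):
    """Alternate per-frame weight fitting with temporal smoothing for a
    fixed number of iterations.
    targets: list of (V, 3) reconstructed meshes, one per frame
    neutral: (V, 3) neutral mesh; blendshapes: (K, V, 3) shape targets."""
    B = (blendshapes - neutral).reshape(len(blendshapes), -1).T  # (3V, K) deltas
    K = B.shape[1]
    weights = np.zeros((len(targets), K))
    for _ in range(iterations):
        # Fitting step: regularized least squares per frame, pulled toward
        # the current (smoothed) weights so iterations make progress.
        A = B.T @ B + reg * np.eye(K)
        for i, t in enumerate(targets):
            b = B.T @ (t - neutral).ravel() + reg * weights[i]
            weights[i] = np.linalg.solve(A, b)
        # Temporal smoothing step: blend each frame toward the mean of
        # its neighbors (edge frames replicate their single neighbor).
        padded = np.pad(weights, ((1, 1), (0, 0)), mode="edge")
        weights = (1 - smooth) * weights + smooth * 0.5 * (padded[:-2] + padded[2:])
    return weights
```

Coupling the fit to the smoothed weights via the regularizer is one way to make the two passes cooperate instead of undoing each other; the patent leaves the exact coupling unspecified.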
-
Patent number: 10573065
Abstract: The present specification describes systems and methods for automatically generating personalized blendshapes from actor performance measurements, while preserving the semantics of a template facial animation rig. The disclosed inventions facilitate the creation of an ensemble of realistic digital-double face rigs for each individual, with consistent behaviour across the set, using sophisticated iterative optimization techniques.
Type: Grant
Filed: October 21, 2016
Date of Patent: February 25, 2020
Assignee: Activision Publishing, Inc.
Inventors: Wan-Chun Ma, Chongyang Ma
-
Publication number: 20180033189
Abstract: The present specification describes systems and methods for automatically generating personalized blendshapes from actor performance measurements, while preserving the semantics of a template facial animation rig. The disclosed inventions facilitate the creation of an ensemble of realistic digital-double face rigs for each individual, with consistent behaviour across the set, using sophisticated iterative optimization techniques.
Type: Application
Filed: October 21, 2016
Publication date: February 1, 2018
Inventors: Wan-Chun Ma, Chongyang Ma
-
Publication number: 20180033190
Abstract: The present specification describes systems and methods for automatically animating personalized blendshapes from three-dimensional stereo reconstruction data. The disclosed inventions facilitate the animation of blendshapes using an optimization process that applies a fitting process to a frame to yield a set of weighted blendshapes, applies a temporal smoothing process to that frame to yield another set of weighted blendshapes, and repeats that optimization for a predetermined number of iterations to yield a final set of weighted blendshapes.
Type: Application
Filed: October 21, 2016
Publication date: February 1, 2018
Inventors: Wan-Chun Ma, Chongyang Ma
-
Patent number: 8902232
Abstract: Acquisition, modeling, compression, and synthesis of realistic facial deformations using polynomial displacement maps are described. An analysis phase can be included where the relationship between motion capture markers and detailed facial geometry is inferred. A synthesis phase can be included where detailed animated facial geometry is driven by a sparse set of motion capture markers. For analysis, an actor can be recorded wearing facial markers while performing a set of training expression clips. Real-time high-resolution facial deformations are captured, including dynamic wrinkle and pore detail, using interleaved structured-light 3D scanning and photometric stereo. Next, displacements are calculated between a neutral mesh driven by the motion capture markers and the high-resolution captured expressions. These geometric displacements are stored in one or more polynomial displacement maps parameterized according to the local deformations of the motion capture dots.
Type: Grant
Filed: February 2, 2009
Date of Patent: December 2, 2014
Assignee: University of Southern California
Inventors: Paul E. Debevec, Wan-Chun Ma, Timothy Hawkins
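A polynomial displacement map as described above stores, per texel, the coefficients of a low-order polynomial that maps local marker deformation to geometric displacement. A minimal sketch with a biquadratic basis in two deformation coordinates (u, v) — the basis choice and function names here are illustrative, not taken from the patent:

```python
import numpy as np

def pdm_basis(u: float, v: float) -> np.ndarray:
    """Biquadratic basis in the local deformation coordinates (u, v)."""
    return np.array([1.0, u, v, u * v, u * u, v * v])

def evaluate_pdm(coeffs: np.ndarray, u: float, v: float) -> float:
    """Synthesis: evaluate one texel's polynomial displacement map at the
    current local deformation to get a scalar displacement."""
    return float(coeffs @ pdm_basis(u, v))

def fit_pdm(samples_uv, displacements):
    """Analysis: least-squares fit of the per-texel coefficients from
    observed (deformation, displacement) training pairs."""
    A = np.array([pdm_basis(u, v) for u, v in samples_uv])
    coeffs, *_ = np.linalg.lstsq(A, displacements, rcond=None)
    return coeffs
```

At animation time only the sparse marker motion and the small coefficient maps are needed, which is what makes the representation a compression of the captured high-resolution detail.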
-
Publication number: 20120062719
Abstract: A camera may capture a sequence of images of a face while the face changes. A camera support may cause the field of view of the camera to remain substantially fixed with respect to the face, notwithstanding movement of the head. A lighting system may light the face from multiple directions. A lighting system support may cause each of the directions of the light from the lighting system to remain substantially fixed with respect to the face, notwithstanding movement of the head. Sequential images of the face may be computed as it changes, based on the captured images. Each computed image may include at least per-pixel surface normals of the face that are calculated based on multiple, separate images of the face. Each separate image may be representative of the face being lit by the lighting system from a different one of the separate directions.
Type: Application
Filed: September 9, 2011
Publication date: March 15, 2012
Applicant: University of Southern California
Inventors: Paul E. Debevec, Andrew Jones, Graham Leslie Fyffe, Wan-Chun Ma, Xueming Yu, Jay Busch
-
Patent number: 8134555
Abstract: An apparatus for generating a surface normal map of an object may include a plurality of light sources having intensities that are controllable so as to generate one or more gradient illumination patterns. The light sources are configured and arranged to illuminate the surface of the object with the gradient illumination patterns. A camera may receive light reflected from the illuminated surface of the object, and generate data representative of the reflected light. A processing system may process the data so as to estimate the surface normal map of the surface of the object. A specular normal map and a diffuse normal map of the surface of the object may be generated separately, by placing polarizers on the light sources and in front of the camera so as to illuminate the surface of the object with polarized spherical gradient illumination patterns.
Type: Grant
Filed: April 17, 2008
Date of Patent: March 13, 2012
Assignee: University of Southern California
Inventors: Paul E. Debevec, Wan-Chun Ma, Timothy Hawkins
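The core of the gradient-illumination technique above: for a diffuse surface lit by a linear spherical gradient pattern along axis i, the reflected intensity relative to a constant full-on pattern encodes that normal component, I_i / I_full ≈ (n_i + 1) / 2. A minimal sketch of recovering per-pixel normals from the four illumination conditions (a simplification that omits the polarization-based specular/diffuse separation the patent also covers):

```python
import numpy as np

def surface_normals(img_x, img_y, img_z, img_full):
    """Estimate per-pixel surface normals from four spherical illumination
    conditions: linear gradients along x, y, z and a constant full-on
    pattern. Each argument is an (H, W) intensity image; the ratio of a
    gradient image to the full-on image cancels albedo, leaving
    n_i = 2 * I_i / I_full - 1 per component."""
    n = np.stack([2 * img_x / img_full - 1,
                  2 * img_y / img_full - 1,
                  2 * img_z / img_full - 1], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)  # unit normals (H, W, 3)
```

Because the four patterns can be interleaved at video rate, this yields normal maps of a moving subject without per-pixel optimization.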
-
Publication number: 20090195545
Abstract: Acquisition, modeling, compression, and synthesis of realistic facial deformations using polynomial displacement maps are described. An analysis phase can be included where the relationship between motion capture markers and detailed facial geometry is inferred. A synthesis phase can be included where detailed animated facial geometry is driven by a sparse set of motion capture markers. For analysis, an actor can be recorded wearing facial markers while performing a set of training expression clips. Real-time high-resolution facial deformations are captured, including dynamic wrinkle and pore detail, using interleaved structured-light 3D scanning and photometric stereo. Next, displacements are calculated between a neutral mesh driven by the motion capture markers and the high-resolution captured expressions. These geometric displacements are stored in one or more polynomial displacement maps parameterized according to the local deformations of the motion capture dots.
Type: Application
Filed: February 2, 2009
Publication date: August 6, 2009
Applicant: University of Southern California
Inventors: Paul E. Debevec, Wan-Chun Ma, Timothy Hawkins
-
Publication number: 20080304081
Abstract: An apparatus for generating a surface normal map of an object may include a plurality of light sources having intensities that are controllable so as to generate one or more gradient illumination patterns. The light sources are configured and arranged to illuminate the surface of the object with the gradient illumination patterns. A camera may receive light reflected from the illuminated surface of the object, and generate data representative of the reflected light. A processing system may process the data so as to estimate the surface normal map of the surface of the object. A specular normal map and a diffuse normal map of the surface of the object may be generated separately, by placing polarizers on the light sources and in front of the camera so as to illuminate the surface of the object with polarized spherical gradient illumination patterns.
Type: Application
Filed: April 17, 2008
Publication date: December 11, 2008
Inventors: Paul E. Debevec, Wan-Chun Ma, Timothy Hawkins