Patents by Inventor Peihong Guo

Peihong Guo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240078754
    Abstract: In one embodiment, a computing system may access a first image of a first portion of a face of a user captured by a first camera from a first viewpoint and a second image of a second portion of the face captured by a second camera from a second viewpoint. The system may generate, using a machine-learning model and the first and second images, a synthesized image corresponding to a third portion of the face of the user as viewed from a third viewpoint. The system may access a three-dimensional (3D) facial model representative of the face and generate a texture image for the face by projecting at least the synthesized image onto the 3D facial model from a predetermined camera pose corresponding to the third viewpoint. The system may cause an output image to be rendered using at least the 3D facial model and the texture image.
    Type: Application
    Filed: October 31, 2023
    Publication date: March 7, 2024
    Inventors: James Allan Booth, Elif Albuz, Peihong Guo, Tong Xiao
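    The abstract above describes a two-camera capture, a learned view-synthesis step, and a projection step that turns the synthesized view into a texture for a 3D facial model. A minimal Python sketch of that pipeline follows; every helper here (synthesize, project_to_texture, render) is a hypothetical stand-in, since the publication does not name concrete components.

```python
def render_face(img_a, img_b, synthesize, facial_model, third_view_pose):
    """Hedged sketch of the described pipeline, not the patented implementation.

    img_a, img_b    -- images of two portions of the face, captured by two
                       cameras from two different viewpoints.
    synthesize      -- hypothetical machine-learning model: given the two
                       partial views, predicts the face from a third viewpoint.
    facial_model    -- hypothetical 3D facial model object exposing
                       project_to_texture() and render().
    third_view_pose -- predetermined camera pose for the third viewpoint.
    """
    # Generate the synthesized image of the face as seen from the third viewpoint.
    synthesized = synthesize(img_a, img_b)

    # Build a texture image by projecting the synthesized image onto the
    # 3D facial model from the predetermined camera pose.
    texture = facial_model.project_to_texture(synthesized, third_view_pose)

    # Render the output image using the 3D facial model and the texture.
    return facial_model.render(texture)
```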
  • Publication number: 20240029331
    Abstract: A method for personalizing stylized avatars for enhanced reality applications is provided. The method includes capturing an image of a facial expression of a first subject, identifying, in the image, one or more features indicative of a personal characteristic of the first subject, identifying, from a set of standard expressions in a human model, a selected expression based on the facial expression of the first subject, transferring the one or more features indicative of the personal characteristic of the first subject to the selected expression in the human model, and providing the human model to an immersive reality application for display on a client device. A memory storing instructions and a processor that, when executing the instructions, causes a system to perform the above method are also provided.
    Type: Application
    Filed: November 18, 2022
    Publication date: January 25, 2024
    Inventors: Christopher John Ocampo, Milton Cadogan, Peihong Guo
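    The method boils down to picking the standard expression closest to the captured one and carrying the subject's distinguishing features over to it. A minimal sketch, assuming expressions are plain parameter vectors and the human model exposes set_expression() and apply_features() — all illustrative assumptions, not the patent's API:

```python
import numpy as np

def personalize_avatar(captured_expression, standard_expressions,
                       personal_features, human_model):
    """Hypothetical sketch: select the closest standard expression, then
    transfer the subject's personal features onto it."""
    # Identify the standard expression most similar to the captured
    # facial expression (nearest neighbour in expression-parameter space).
    name = min(standard_expressions,
               key=lambda k: np.linalg.norm(standard_expressions[k]
                                            - captured_expression))

    # Transfer the features indicative of the subject's personal
    # characteristics to the selected expression in the human model.
    human_model.set_expression(standard_expressions[name])
    human_model.apply_features(personal_features)
    return human_model  # ready for the immersive-reality application
```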
  • Patent number: 11842442
    Abstract: In one embodiment, one or more computing systems may access a plurality of images corresponding to a portion of a face of a user. The plurality of images is captured from different viewpoints by a plurality of cameras coupled to an artificial-reality system worn by the user. The one or more computing systems may use a machine-learning model to generate a synthesized image corresponding to the portion of the face of the user. The one or more computing systems may access a three-dimensional (3D) facial model representative of the face of the user and generate a texture image by projecting the synthesized image onto the 3D facial model from a specific camera pose. The one or more computing systems may cause an output image of a facial representation of the user to be rendered using at least the 3D facial model and the texture image.
    Type: Grant
    Filed: December 22, 2022
    Date of Patent: December 12, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: James Allan Booth, Elif Albuz, Peihong Guo, Tong Xiao
  • Patent number: 11734952
    Abstract: Examples described herein include systems for reconstructing facial image data from partial frame data and landmark data. Systems for generating the partial frame data and landmark data are described. Neural networks may be used to reconstruct the facial image data and/or generate the partial frame data. In this manner, compression of facial image data may be achieved in some examples.
    Type: Grant
    Filed: August 22, 2022
    Date of Patent: August 22, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Elif Albuz, Bryan Michael Anenberg, Peihong Guo, Ronit Kassis
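    The compression scheme amounts to transmitting only landmark data plus a small partial frame and letting a neural network reconstruct the full facial image at the receiver. A minimal sketch, with the landmark detector, cropping rule, and reconstruction network all left as hypothetical callables (the patent does not fix any of them):

```python
def encode_frame(frame, detect_landmarks, crop_partial):
    """Sender side (sketch): keep only landmarks and a small partial crop."""
    landmarks = detect_landmarks(frame)       # e.g. an Nx2 array of facial landmarks
    partial = crop_partial(frame, landmarks)  # small region of the frame to transmit
    return {"landmarks": landmarks, "partial": partial}

def decode_frame(payload, reconstruction_net):
    """Receiver side (sketch): a neural network reconstructs the facial image
    from the partial frame data and the landmark data."""
    return reconstruction_net(payload["partial"], payload["landmarks"])
```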
  • Publication number: 20230206560
    Abstract: In one embodiment, one or more computing systems may access a plurality of images corresponding to a portion of a face of a user. The plurality of images is captured from different viewpoints by a plurality of cameras coupled to an artificial-reality system worn by the user. The one or more computing systems may use a machine-learning model to generate a synthesized image corresponding to the portion of the face of the user. The one or more computing systems may access a three-dimensional (3D) facial model representative of the face of the user and generate a texture image by projecting the synthesized image onto the 3D facial model from a specific camera pose. The one or more computing systems may cause an output image of a facial representation of the user to be rendered using at least the 3D facial model and the texture image.
    Type: Application
    Filed: December 22, 2022
    Publication date: June 29, 2023
    Inventors: James Allan Booth, Elif Albuz, Peihong Guo, Tong Xiao
  • Patent number: 11562535
    Abstract: In one embodiment, one or more computing systems may receive an image of a portion of a face of a first user. The one or more computing systems may access a three-dimensional (3D) facial model representative of the face of the first user. The one or more computing systems may identify one or more facial features captured in the image and determine a camera pose relative to the 3D facial model based on the identified one or more facial features in the image and predetermined feature locations on the 3D facial model. The one or more computing systems may determine a mapping relationship between the image and the 3D facial model by projecting the image of the portion of the face of the first user onto the 3D facial model from the camera pose and cause an output image of a facial representation of the first user to be rendered.
    Type: Grant
    Filed: September 22, 2020
    Date of Patent: January 24, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: James Allan Booth, Elif Albuz, Peihong Guo, Tong Xiao
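    Determining a camera pose from detected 2D facial features and their predetermined 3D locations on the facial model is a Perspective-n-Point problem. A minimal sketch using OpenCV's generic solver (the patent does not say OpenCV is used; it is simply a standard tool for this step):

```python
import numpy as np
import cv2

def estimate_camera_pose(image_points, model_points, camera_matrix):
    """Recover the camera pose relative to the 3D facial model.

    image_points  -- Nx2 facial-feature positions detected in the image.
    model_points  -- Nx3 predetermined feature locations on the 3D facial model.
    camera_matrix -- 3x3 intrinsic matrix of the capturing camera.
    """
    ok, rvec, tvec = cv2.solvePnP(model_points.astype(np.float64),
                                  image_points.astype(np.float64),
                                  camera_matrix.astype(np.float64),
                                  None)       # assume no lens distortion
    if not ok:
        raise RuntimeError("pose estimation failed")
    rotation, _ = cv2.Rodrigues(rvec)         # rotation vector -> 3x3 matrix
    return rotation, tvec                     # pose for projecting the image
                                              # onto the 3D facial model
```

    With the pose in hand, the mapping step projects the captured image onto the facial model from exactly that viewpoint, so each visible part of the model can look up its appearance in the image.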
  • Patent number: 11423692
    Abstract: Examples described herein include systems for reconstructing facial image data from partial frame data and landmark data. Systems for generating the partial frame data and landmark data are described. Neural networks may be used to reconstruct the facial image data and/or generate the partial frame data. In this manner, compression of facial image data may be achieved in some examples.
    Type: Grant
    Filed: October 24, 2019
    Date of Patent: August 23, 2022
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Elif Albuz, Bryan Michael Anenberg, Peihong Guo, Ronit Kassis
  • Publication number: 20220092853
    Abstract: In one embodiment, one or more computing systems may receive an image of a portion of a face of a first user. The one or more computing systems may access a three-dimensional (3D) facial model representative of the face of the first user. The one or more computing systems may identify one or more facial features captured in the image and determine a camera pose relative to the 3D facial model based on the identified one or more facial features in the image and predetermined feature locations on the 3D facial model. The one or more computing systems may determine a mapping relationship between the image and the 3D facial model by projecting the image of the portion of the face of the first user onto the 3D facial model from the camera pose and cause an output image of a facial representation of the first user to be rendered.
    Type: Application
    Filed: September 22, 2020
    Publication date: March 24, 2022
    Inventors: James Allan Booth, Elif Albuz, Peihong Guo, Tong Xiao
  • Patent number: 11217036
    Abstract: An avatar personalization engine can generate personalized avatars for a user by creating a 3D user model based on one or more images of the user. The avatar personalization engine can compute a delta between the 3D user model and an average person model, which is a model created based on the average measurements from multiple people. The avatar personalization engine can then apply the delta to a generic avatar model by changing measurements of particular features of the generic avatar model by amounts specified for corresponding features identified in the delta. This personalizes the generic avatar model to resemble the user. Additional features matching characteristics of the user can be added to further personalize the avatar model, such as a hairstyle, eyebrow geometry, facial hair, glasses, etc.
    Type: Grant
    Filed: October 7, 2019
    Date of Patent: January 4, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Elif Albuz, Chad Vernon, Shu Liang, Peihong Guo
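    The core of the delta transfer is simple arithmetic: measure how the user deviates from the average person, then shift the generic avatar's corresponding measurements by the same amount. A minimal sketch, assuming each model is reduced to a dict of named feature measurements (an illustrative representation, not the patent's):

```python
def personalize(generic_avatar, user_model, average_model):
    """Shift the generic avatar by the user's deviation from the average person.

    All three arguments map a feature name (e.g. "nose_width") to a
    measurement; the keys and units are illustrative only.
    """
    personalized = dict(generic_avatar)
    for feature, user_value in user_model.items():
        delta = user_value - average_model[feature]        # user minus average
        personalized[feature] = generic_avatar[feature] + delta
    return personalized

# Example: if the user's nose_width is 0.4 units above the average person's,
# the generic avatar's nose_width is raised by the same 0.4 units.
```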
  • Patent number: 11113859
    Abstract: Disclosed herein are a system, a method, and a non-transitory computer readable medium for rendering a three-dimensional (3D) model of an avatar according to an audio stream including a vocal output of a person and image data capturing a face of the person. In one aspect, phonemes of the vocal output are predicted according to the audio stream, and the predicted phonemes of the vocal output are translated into visemes. In one aspect, a plurality of blendshapes and corresponding weights are determined, according to the corresponding image data of the face, to form the 3D model of the avatar of the person. The visemes may be combined with the 3D model of the avatar to form a 3D representation of the avatar, by synchronizing the visemes with the 3D model of the avatar in time.
    Type: Grant
    Filed: July 10, 2019
    Date of Patent: September 7, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Tong Xiao, Sidi Fu, Mengqian Liu, Peihong Guo, Shu Liang, Evgeny Zatepyakin
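    Two pieces of the abstract lend themselves to a short sketch: translating predicted phonemes into visemes, and combining blendshapes by their weights. The phoneme-to-viseme table below is a generic illustration (not the patented mapping), and the mesh math is the standard linear blendshape formula neutral + sum(w_i * (shape_i - neutral)):

```python
import numpy as np

# Illustrative (not exhaustive) phoneme -> viseme lookup.
PHONEME_TO_VISEME = {"AA": "open", "IY": "wide", "UW": "round",
                     "M": "closed", "B": "closed", "F": "lip_bite"}

def phonemes_to_visemes(phonemes):
    """Translate predicted phonemes into viseme labels."""
    return [PHONEME_TO_VISEME.get(p, "neutral") for p in phonemes]

def blend(neutral_mesh, blendshapes, weights):
    """Linear blendshape combination driven by per-shape weights.

    neutral_mesh -- Nx3 array of rest-pose vertices.
    blendshapes  -- dict name -> Nx3 vertex array for that shape.
    weights      -- dict name -> weight determined from the face image data.
    """
    mesh = neutral_mesh.astype(float)
    for name, shape in blendshapes.items():
        mesh += weights.get(name, 0.0) * (shape - neutral_mesh)
    return mesh
```

    Synchronizing the visemes with the 3D model then reduces to time-aligning the viseme sequence with the audio before each frame is rendered.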
  • Patent number: 8687908
    Abstract: The disclosure relates to adjusting intensities of images. The method includes receiving information identifying a plurality of regions within an image; receiving an intensity adjustment of at least one of the plurality of regions; adjusting the intensities of the at least one of the plurality of regions based on the received intensity adjustment; interconnecting at least two of the plurality of regions by applying a two-dimensional method; generating intensity adjustments for at least one pixel outside the plurality of regions based on the received intensity adjustment of at least one of the plurality of regions and the interconnection of at least two of the plurality of regions; and applying the generated intensity adjustments to the image.
    Type: Grant
    Filed: March 29, 2013
    Date of Patent: April 1, 2014
    Assignee: Peking University
    Inventors: Xiaoru Yuan, Peihong Guo
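    The key step is spreading a handful of per-region intensity adjustments smoothly across the rest of the image. A minimal sketch that reads the claimed two-dimensional interconnection as inverse-distance weighting from region centers; this is one plausible interpolation chosen for illustration, not necessarily the patented method:

```python
import numpy as np

def propagate_adjustments(image, region_masks, adjustments):
    """Apply user adjustments inside marked regions and interpolate outside.

    image        -- HxW grayscale intensity array (float).
    region_masks -- list of HxW boolean masks, one per user-marked region.
    adjustments  -- list of scalar intensity adjustments, one per region.
    """
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    centers = [(ys[m].mean(), xs[m].mean()) for m in region_masks]

    # Inverse-distance weights from every pixel to every region center.
    weights = np.stack([1.0 / (np.hypot(ys - cy, xs - cx) + 1e-6)
                        for cy, cx in centers])
    field = (weights * np.array(adjustments)[:, None, None]).sum(0) / weights.sum(0)

    # Inside a marked region the user's own adjustment applies exactly.
    for mask, adj in zip(region_masks, adjustments):
        field[mask] = adj
    return image + field
```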
  • Patent number: 8515167
    Abstract: The disclosure relates generally to receiving original image data, decomposing the original image data into layers, compressing a dynamic range of each of the layers, and integrating the compressed layers to form a final image.
    Type: Grant
    Filed: August 31, 2009
    Date of Patent: August 20, 2013
    Assignee: Peking University
    Inventors: Xiaoru Yuan, Peihong Guo
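    A minimal sketch of the layer-based pipeline, using a Gaussian base/detail decomposition and a scaled log-domain base layer; these are common choices for this kind of tone mapping and are assumptions here, since the patent does not commit to specific operators:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def compress_dynamic_range(hdr_luminance, sigma=8.0, compression=0.5):
    """Decompose the image into layers, compress, and re-integrate.

    hdr_luminance -- HxW array of positive high-dynamic-range luminance values.
    sigma         -- blur radius for the base/detail decomposition.
    compression   -- factor applied to the log-domain base layer; <1 compresses.
    """
    log_lum = np.log1p(hdr_luminance)         # work in the log domain
    base = gaussian_filter(log_lum, sigma)    # large-scale (base) layer
    detail = log_lum - base                   # fine-detail layer

    # Compress the base layer's range, keep the detail layer intact, then
    # integrate the layers back into the final displayable image.
    return np.expm1(compression * base + detail)
```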
  • Patent number: 8433150
    Abstract: The disclosure relates to adjusting intensities of images. The method includes receiving information identifying a plurality of regions within an image; receiving an intensity adjustment of at least one of the plurality of regions; adjusting the intensities of the at least one of the plurality of regions based on the received intensity adjustment; interconnecting at least two of the plurality of regions by applying a two-dimensional method; generating intensity adjustments for at least one pixel outside the plurality of regions based on the received intensity adjustment of at least one of the plurality of regions and the interconnection of at least two of the plurality of regions; and applying the generated intensity adjustments to the image.
    Type: Grant
    Filed: September 28, 2009
    Date of Patent: April 30, 2013
    Assignee: Peking University
    Inventors: Xiaoru Yuan, Peihong Guo
  • Publication number: 20110075944
    Abstract: The disclosure relates to adjusting intensities of images. The method includes receiving information identifying a plurality of regions within an image; receiving an intensity adjustment of at least one of the plurality of regions; adjusting the intensities of the at least one of the plurality of regions based on the received intensity adjustment; interconnecting at least two of the plurality of regions by applying a two-dimensional method; generating intensity adjustments for at least one pixel outside the plurality of regions based on the received intensity adjustment of at least one of the plurality of regions and the interconnection of at least two of the plurality of regions; and applying the generated intensity adjustments to the image.
    Type: Application
    Filed: September 28, 2009
    Publication date: March 31, 2011
    Inventors: Xiaoru Yuan, Peihong Guo
  • Publication number: 20110052088
    Abstract: The disclosure relates to receiving original image data, decomposing the original image data into a plurality of layers, compressing a dynamic range of each of the plurality of layers, and integrating the plurality of compressed layers to form a final image.
    Type: Application
    Filed: August 31, 2009
    Publication date: March 3, 2011
    Inventors: Xiaoru Yuan, Peihong Guo