Patents by Inventor Peihong Guo
Peihong Guo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240078754
Abstract: In one embodiment, a computing system may access a first image of a first portion of a face of a user captured by a first camera from a first viewpoint and a second image of a second portion of the face captured by a second camera from a second viewpoint. The system may generate, using a machine-learning model and the first and second images, a synthesized image corresponding to a third portion of the face of the user as viewed from a third viewpoint. The system may access a three-dimensional (3D) facial model representative of the face and generate a texture image for the face by projecting at least the synthesized image onto the 3D facial model from a predetermined camera pose corresponding to the third viewpoint. The system may cause an output image to be rendered using at least the 3D facial model and the texture image.
Type: Application
Filed: October 31, 2023
Publication date: March 7, 2024
Inventors: James Allan Booth, Elif Albuz, Peihong Guo, Tong Xiao
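The projection step described above (mapping a synthesized view onto the 3D facial model from a known camera pose to obtain a texture) can be sketched as follows. This is a minimal illustration assuming a pinhole camera and nearest-neighbor sampling; the function names (`project_vertices`, `sample_texture`) and toy numbers are hypothetical, not from the patent:

```python
import numpy as np

def project_vertices(vertices, K, R, t):
    """Project 3D mesh vertices into image pixel coordinates
    with a pinhole camera (intrinsics K, extrinsics R, t)."""
    cam = vertices @ R.T + t           # world -> camera coordinates
    uv = cam @ K.T                     # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]      # perspective divide

def sample_texture(image, vertices, K, R, t):
    """Build a per-vertex texture by sampling the image at each
    projected vertex position (nearest-neighbor for simplicity)."""
    uv = project_vertices(vertices, K, R, t)
    h, w = image.shape[:2]
    cols = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
    rows = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
    return image[rows, cols]

# Toy example: a 4x4 grayscale "synthesized image" and two vertices.
K = np.array([[2.0, 0.0, 2.0], [0.0, 2.0, 2.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 4.0])
verts = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 0.0]])
img = np.arange(16, dtype=float).reshape(4, 4)
texture = sample_texture(img, verts, K, R, t)
```

A production pipeline would rasterize into a UV texture atlas and blend contributions from multiple camera views; the per-vertex sampling here only shows the geometric core of the projection.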
-
Publication number: 20240029331
Abstract: A method for personalizing stylized avatars for enhanced reality applications is provided. The method includes capturing an image of a facial expression of a first subject, identifying, in the image, one or more features indicative of a personal characteristic of the first subject, identifying, from a set of standard expressions in a human model, a selected expression based on the facial expression of the first subject, transferring the one or more features indicative of the personal characteristic of the first subject to the selected expression in the human model, and providing the human model to an immersive reality application for display on a client device. A system including a memory storing instructions and a processor that performs the above method when executing the instructions is also provided.
Type: Application
Filed: November 18, 2022
Publication date: January 25, 2024
Inventors: Christopher John Ocampo, Milton Cadogan, Peihong Guo
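The two core steps of this abstract (selecting the closest standard expression, then transferring personal features onto it) can be sketched as below. The matching criterion (mean squared landmark distance) and all names are illustrative assumptions, not the patent's actual method:

```python
import numpy as np

def select_expression(captured_landmarks, standard_expressions):
    """Pick the standard expression whose landmarks are closest
    (in mean squared distance) to the captured facial expression."""
    names = list(standard_expressions)
    dists = [np.mean((standard_expressions[n] - captured_landmarks) ** 2)
             for n in names]
    return names[int(np.argmin(dists))]

def personalize(model, features):
    """Transfer identified personal characteristics (e.g. eyebrow
    thickness) onto the selected expression of the human model."""
    out = dict(model)
    out.update(features)
    return out

# Toy 2-point landmark sets for two standard expressions.
standards = {
    "neutral": np.array([[0.0, 0.0], [1.0, 0.0]]),
    "smile":   np.array([[0.0, 0.2], [1.0, 0.2]]),
}
captured = np.array([[0.0, 0.18], [1.0, 0.21]])
chosen = select_expression(captured, standards)
avatar = personalize({"expression": chosen}, {"eyebrow_thickness": 0.7})
```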
-
Patent number: 11842442
Abstract: In one embodiment, one or more computing systems may access a plurality of images corresponding to a portion of a face of a user. The plurality of images is captured from different viewpoints by a plurality of cameras coupled to an artificial-reality system worn by the user. The one or more computing systems may use a machine-learning model to generate a synthesized image corresponding to the portion of the face of the user. The one or more computing systems may access a three-dimensional (3D) facial model representative of the face of the user and generate a texture image by projecting the synthesized image onto the 3D facial model from a specific camera pose. The one or more computing systems may cause an output image of a facial representation of the user to be rendered using at least the 3D facial model and the texture image.
Type: Grant
Filed: December 22, 2022
Date of Patent: December 12, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: James Allan Booth, Elif Albuz, Peihong Guo, Tong Xiao
-
Patent number: 11734952
Abstract: Examples described herein include systems for reconstructing facial image data from partial frame data and landmark data. Systems for generating the partial frame data and landmark data are described. Neural networks may be used to reconstruct the facial image data and/or generate the partial frame data. In this manner, compression of facial image data may be achieved in some examples.
Type: Grant
Filed: August 22, 2022
Date of Patent: August 22, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: Elif Albuz, Bryan Michael Anenberg, Peihong Guo, Ronit Kassis
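The compression idea here (send occasional full frames plus compact landmark vectors, and let a decoder reconstruct the in-between frames) can be illustrated with a back-of-the-envelope encoder sketch. The keyframe interval, sizes, and names are assumptions for illustration; the reconstruction network itself is omitted:

```python
import numpy as np

def encode_stream(frames, landmarks, keyframe_interval=30):
    """Encoder sketch: transmit a full frame only at keyframes; for
    the frames in between, transmit only the landmark vector, which
    is orders of magnitude smaller than the pixel data."""
    packets = []
    for i, (frame, lm) in enumerate(zip(frames, landmarks)):
        if i % keyframe_interval == 0:
            packets.append(("frame", frame))      # partial frame data
        else:
            packets.append(("landmarks", lm))     # compact landmark data
    return packets

def payload_bytes(packets):
    return sum(p.nbytes for _, p in packets)

# 60 frames of 64x64 float32 pixels vs. 68 2-D float32 landmarks each.
frames = [np.zeros((64, 64), dtype=np.float32) for _ in range(60)]
lms = [np.zeros((68, 2), dtype=np.float32) for _ in range(60)]
packets = encode_stream(frames, lms, keyframe_interval=30)
ratio = payload_bytes(packets) / sum(f.nbytes for f in frames)
```

Even this naive accounting gives a payload well under a tenth of the raw stream; a learned decoder on the receiving side is what makes the landmark-only frames usable.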
-
Publication number: 20230206560
Abstract: In one embodiment, one or more computing systems may access a plurality of images corresponding to a portion of a face of a user. The plurality of images is captured from different viewpoints by a plurality of cameras coupled to an artificial-reality system worn by the user. The one or more computing systems may use a machine-learning model to generate a synthesized image corresponding to the portion of the face of the user. The one or more computing systems may access a three-dimensional (3D) facial model representative of the face of the user and generate a texture image by projecting the synthesized image onto the 3D facial model from a specific camera pose. The one or more computing systems may cause an output image of a facial representation of the user to be rendered using at least the 3D facial model and the texture image.
Type: Application
Filed: December 22, 2022
Publication date: June 29, 2023
Inventors: James Allan Booth, Elif Albuz, Peihong Guo, Tong Xiao
-
Patent number: 11562535
Abstract: In one embodiment, one or more computing systems may receive an image of a portion of a face of a first user. The one or more computing systems may access a three-dimensional (3D) facial model representative of the face of the first user. The one or more computing systems may identify one or more facial features captured in the image and determine a camera pose relative to the 3D facial model based on the identified one or more facial features in the image and predetermined feature locations on the 3D facial model. The one or more computing systems may determine a mapping relationship between the image and the 3D facial model by projecting the image of the portion of the face of the first user onto the 3D facial model from the camera pose and cause an output image of a facial representation of the first user to be rendered.
Type: Grant
Filed: September 22, 2020
Date of Patent: January 24, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: James Allan Booth, Elif Albuz, Peihong Guo, Tong Xiao
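The pose-determination step (matching detected 2D facial features to predetermined feature locations on the 3D model) is, in general, a perspective-n-point problem. A deliberately simplified sketch under a weak-perspective camera assumption, with rotation folded into the model and all names and numbers hypothetical:

```python
import numpy as np

def estimate_pose_weak_perspective(model_pts, image_pts):
    """Fit scale s and 2-D translation t so that
    image ~= s * model_xy + t, i.e. a weak-perspective camera
    (a deliberate simplification of full 6-DoF pose)."""
    model_xy = model_pts[:, :2]
    mc = model_xy - model_xy.mean(axis=0)      # centered model points
    ic = image_pts - image_pts.mean(axis=0)    # centered detections
    s = np.sum(ic * mc) / np.sum(mc * mc)      # least-squares scale
    t = image_pts.mean(axis=0) - s * model_xy.mean(axis=0)
    return s, t

# Predetermined feature locations on the 3D facial model
# (eye corners, nose tip, chin) and their detected image positions.
model = np.array([[-1.0, 1.0, 0.2], [1.0, 1.0, 0.2],
                  [0.0, 0.0, 1.0], [0.0, -1.5, 0.1]])
image = 40.0 * model[:, :2] + np.array([320.0, 240.0])
s, t = estimate_pose_weak_perspective(model, image)
```

With the pose in hand, the image can be projected onto the model to establish the mapping relationship the abstract describes; a real system would solve full perspective PnP rather than this 3-parameter fit.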
-
Patent number: 11423692
Abstract: Examples described herein include systems for reconstructing facial image data from partial frame data and landmark data. Systems for generating the partial frame data and landmark data are described. Neural networks may be used to reconstruct the facial image data and/or generate the partial frame data. In this manner, compression of facial image data may be achieved in some examples.
Type: Grant
Filed: October 24, 2019
Date of Patent: August 23, 2022
Assignee: META PLATFORMS TECHNOLOGIES, LLC
Inventors: Elif Albuz, Bryan Michael Anenberg, Peihong Guo, Ronit Kassis
-
Publication number: 20220092853
Abstract: In one embodiment, one or more computing systems may receive an image of a portion of a face of a first user. The one or more computing systems may access a three-dimensional (3D) facial model representative of the face of the first user. The one or more computing systems may identify one or more facial features captured in the image and determine a camera pose relative to the 3D facial model based on the identified one or more facial features in the image and predetermined feature locations on the 3D facial model. The one or more computing systems may determine a mapping relationship between the image and the 3D facial model by projecting the image of the portion of the face of the first user onto the 3D facial model from the camera pose and cause an output image of a facial representation of the first user to be rendered.
Type: Application
Filed: September 22, 2020
Publication date: March 24, 2022
Inventors: James Allan Booth, Elif Albuz, Peihong Guo, Tong Xiao
-
Patent number: 11217036
Abstract: An avatar personalization engine can generate personalized avatars for a user by creating a 3D user model based on one or more images of the user. The avatar personalization engine can compute a delta between the 3D user model and an average person model, which is a model created based on the average measurements from multiple people. The avatar personalization engine can then apply the delta to a generic avatar model by changing measurements of particular features of the generic avatar model by amounts specified for corresponding features identified in the delta. This personalizes the generic avatar model to resemble the user. Additional features matching characteristics of the user can be added to further personalize the avatar model, such as a hairstyle, eyebrow geometry, facial hair, glasses, etc.
Type: Grant
Filed: October 7, 2019
Date of Patent: January 4, 2022
Assignee: Facebook Technologies, LLC
Inventors: Elif Albuz, Chad Vernon, Shu Liang, Peihong Guo
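The delta-transfer idea above reduces to simple vector arithmetic once facial measurements are encoded as feature vectors. A minimal sketch, with the feature names and values purely illustrative:

```python
import numpy as np

def personalize_avatar(user_model, average_model, generic_avatar):
    """Compute the user's deviation (delta) from the average person
    model and apply that delta to the generic avatar model."""
    delta = user_model - average_model
    return generic_avatar + delta

# Toy measurement vectors: [nose_length, jaw_width, eye_spacing]
average_person = np.array([1.0, 2.0, 1.5])
user = np.array([1.2, 1.8, 1.5])            # from the user's 3D model
generic_avatar = np.array([0.8, 2.5, 1.4])  # stylized base avatar
personalized = personalize_avatar(user, average_person, generic_avatar)
```

Because only the delta is transferred, the result keeps the generic avatar's stylization while shifting exactly the features in which the user deviates from the average person.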
-
Patent number: 11113859
Abstract: Disclosed herein are a system, a method, and a non-transitory computer readable medium for rendering a three-dimensional (3D) model of an avatar according to an audio stream including a vocal output of a person and image data capturing a face of the person. In one aspect, phonemes of the vocal output are predicted according to the audio stream, and the predicted phonemes of the vocal output are translated into visemes. In one aspect, a plurality of blendshapes and corresponding weights are determined, according to the corresponding image data of the face, to form the 3D model of the avatar of the person. The visemes may be combined with the 3D model of the avatar to form a 3D representation of the avatar, by synchronizing the visemes with the 3D model of the avatar in time.
Type: Grant
Filed: July 10, 2019
Date of Patent: September 7, 2021
Assignee: Facebook Technologies, LLC
Inventors: Tong Xiao, Sidi Fu, Mengqian Liu, Peihong Guo, Shu Liang, Evgeny Zatepyakin
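The two mechanisms in this abstract (a many-to-one phoneme-to-viseme mapping, and a weighted sum of blendshapes forming the face state) can be sketched as follows. The mapping table, viseme labels, and blendshape vectors are hypothetical placeholders, not the patent's actual data:

```python
import numpy as np

# Hypothetical many-to-one phoneme -> viseme mapping: visually
# similar phonemes (p/b/m, f/v) share a single mouth shape.
PHONEME_TO_VISEME = {"p": "PP", "b": "PP", "m": "PP",
                     "f": "FF", "v": "FF", "aa": "AA"}

def phonemes_to_visemes(timed_phonemes):
    """Translate (time, phoneme) predictions from the audio stream
    into (time, viseme) keyframes for time synchronization."""
    return [(t, PHONEME_TO_VISEME[p]) for t, p in timed_phonemes]

def blend(blendshapes, weights):
    """Weighted sum of blendshape vertex offsets, giving the face
    state of the 3D avatar model at one instant."""
    return sum(w * blendshapes[name] for name, w in weights.items())

# Toy 2-component "vertex offset" blendshapes.
shapes = {"jaw_open": np.array([0.0, 1.0]), "smile": np.array([1.0, 0.0])}
face = blend(shapes, {"jaw_open": 0.5, "smile": 0.25})
visemes = phonemes_to_visemes([(0.00, "m"), (0.12, "aa"), (0.30, "f")])
```

Synchronizing the viseme keyframes with the blendshape animation over time yields the combined 3D representation the abstract describes.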
-
Patent number: 8687908
Abstract: The disclosure relates to adjusting intensities of images. The method includes receiving information identifying a plurality of regions within an image; receiving an intensity adjustment of at least one of the plurality of regions; adjusting the intensities of the at least one region based on the received intensity adjustment; interconnecting at least two of the plurality of regions by applying a two-dimensional method; generating intensity adjustments for at least one pixel outside the plurality of regions based on the received intensity adjustment of at least one of the plurality of regions and the interconnection of at least two of the plurality of regions; and applying the generated intensity adjustments to the image.
Type: Grant
Filed: March 29, 2013
Date of Patent: April 1, 2014
Assignee: Peking University
Inventors: Xiaoru Yuan, Peihong Guo
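The propagation step (generating adjustments for pixels outside the user-marked regions from the adjustments inside them) can be sketched with Gaussian distance weighting, a simple stand-in for the patent's two-dimensional interconnection method. All names and parameters here are illustrative assumptions:

```python
import numpy as np

def propagate_adjustments(image, region_centers, adjustments, sigma=2.0):
    """Spread user intensity adjustments, specified at a few region
    centers, to every pixel via normalized Gaussian distance weights."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    num = np.zeros((h, w))
    den = np.zeros((h, w))
    for (cy, cx), a in zip(region_centers, adjustments):
        wgt = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
        num += wgt * a        # weighted adjustment contribution
        den += wgt            # normalization term
    return image + num / den

# Uniform image; brighten near one corner, darken near the other.
img = np.full((5, 5), 100.0)
out = propagate_adjustments(img, [(0, 0), (4, 4)], [+10.0, -10.0])
```

Pixels near the brightened region get most of its +10 adjustment, pixels near the darkened region get the -10, and the midpoint receives a balanced blend: exactly the smooth interpolation between regions the abstract calls for.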
-
Patent number: 8515167
Abstract: The disclosure relates generally to receiving original image data, decomposing the original image data into layers, compressing a dynamic range of each of the layers, and integrating the compressed layers to form a final image.
Type: Grant
Filed: August 31, 2009
Date of Patent: August 20, 2013
Assignee: Peking University
Inventors: Xiaoru Yuan, Peihong Guo
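A common instance of this decompose-compress-integrate pattern is tone mapping: split log luminance into a smooth base layer and a detail layer, compress the base, and recombine. The sketch below uses a crude box blur as the decomposition; the specific layer split and compression factor are illustrative assumptions, not the patent's method:

```python
import numpy as np

def box_blur(img, k=3):
    """Crude box blur used to extract the smooth base layer."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def tone_map(hdr, compression=0.5):
    """Decompose log luminance into base + detail layers, compress
    only the base layer's range, then recombine and exponentiate."""
    log_l = np.log(hdr + 1e-6)
    base = box_blur(log_l)
    detail = log_l - base
    return np.exp(compression * base + detail)

# HDR ramp spanning many orders of magnitude.
hdr = np.exp(np.arange(-4.0, 5.0)).reshape(3, 3)
ldr = tone_map(hdr)
```

Compressing only the base layer shrinks the overall dynamic range while the detail layer preserves local contrast, which is the point of operating on separate layers rather than on the raw pixels.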
-
Patent number: 8433150
Abstract: The disclosure relates to adjusting intensities of images. The method includes receiving information identifying a plurality of regions within an image; receiving an intensity adjustment of at least one of the plurality of regions; adjusting the intensities of the at least one region based on the received intensity adjustment; interconnecting at least two of the plurality of regions by applying a two-dimensional method; generating intensity adjustments for at least one pixel outside the plurality of regions based on the received intensity adjustment of at least one of the plurality of regions and the interconnection of at least two of the plurality of regions; and applying the generated intensity adjustments to the image.
Type: Grant
Filed: September 28, 2009
Date of Patent: April 30, 2013
Assignee: Peking University
Inventors: Xiaoru Yuan, Peihong Guo
-
Publication number: 20110075944
Abstract: The disclosure relates to adjusting intensities of images. The method includes receiving information identifying a plurality of regions within an image; receiving an intensity adjustment of at least one of the plurality of regions; adjusting the intensities of the at least one region based on the received intensity adjustment; interconnecting at least two of the plurality of regions by applying a two-dimensional method; generating intensity adjustments for at least one pixel outside the plurality of regions based on the received intensity adjustment of at least one of the plurality of regions and the interconnection of at least two of the plurality of regions; and applying the generated intensity adjustments to the image.
Type: Application
Filed: September 28, 2009
Publication date: March 31, 2011
Inventors: Xiaoru Yuan, Peihong Guo
-
Publication number: 20110052088
Abstract: The disclosure relates to receiving original image data, decomposing the original image data into a plurality of layers, compressing a dynamic range of each of the plurality of layers, and integrating the plurality of compressed layers to form a final image.
Type: Application
Filed: August 31, 2009
Publication date: March 3, 2011
Inventors: Xiaoru Yuan, Peihong Guo