Patents by Inventor Gengdai LIU

Gengdai LIU has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11972527
    Abstract: Provided is a method for reconstructing a face mesh model. The method includes: acquiring face scanning data to be reconstructed and a three-dimensional face mesh template; obtaining a target face mesh model by hierarchically extracting key feature points in the three-dimensional face mesh template and by sequentially deforming the template based on posture matching positions of the hierarchically extracted key feature points in the face scanning data; and obtaining a reconstructed face mesh model by acquiring global feature points in the target face mesh model and by deforming the target face mesh model based on the posture matching positions of the global feature points in the face scanning data.
    Type: Grant
    Filed: November 13, 2019
    Date of Patent: April 30, 2024
    Assignee: BIGO TECHNOLOGY PTE. LTD.
    Inventor: Gengdai Liu
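
The coarse-to-fine flow in the abstract can be sketched as follows. The nearest-vertex matching, the per-level snap deformation, and the toy data are illustrative assumptions, not the patented implementation:

```python
import numpy as np

def nearest_points(points, scan):
    """For each point, the closest scan vertex (a stand-in for posture matching)."""
    d = np.linalg.norm(points[:, None, :] - scan[None, :, :], axis=2)
    return scan[d.argmin(axis=1)]

def deform_level(mesh, key_idx, scan):
    """Pull one level of feature points onto their matched scan positions."""
    out = mesh.copy()
    out[key_idx] = nearest_points(mesh[key_idx], scan)
    return out

def reconstruct(template, levels, scan):
    """Sequential coarse-to-fine passes over key-point levels, then a global pass."""
    model = template
    for key_idx in levels:                     # hierarchically extracted key points
        model = deform_level(model, key_idx, scan)
    return deform_level(model, np.arange(len(model)), scan)  # global feature points

# Toy data: a 4-vertex "template" and a scan shifted 0.1 along x.
template = np.array([[0.0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]])
scan = template + np.array([0.1, 0.0, 0.0])
model = reconstruct(template, [np.array([0, 3]), np.array([1, 2])], scan)
```

With this toy setup every vertex lands on its shifted counterpart, illustrating how later levels refine the coarse alignment produced by earlier ones.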
  • Publication number: 20240062579
    Abstract: Provided is a face tracking method. The method includes: in a process of tracking a face in a video frame, determining whether an optimization thread is running; in response to the optimization thread running and the video frame being a key frame, updating a second keyframe data set based on the video frame; in response to receiving a clear instruction from the optimization thread, clearing the video frame in the second keyframe data set, and updating the second keyframe data set to a first keyframe data set; in response to the optimization thread not running and the video frame being the key frame, updating the first keyframe data set based on the video frame and the second keyframe data set; and making the optimization thread optimize a facial identity based on the first keyframe data set by invoking the optimization thread upon updating the first keyframe data set.
    Type: Application
    Filed: January 4, 2022
    Publication date: February 22, 2024
    Inventors: Wenyu CHEN, Gengdai LIU
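
The double-buffered keyframe bookkeeping described above can be sketched as a small state machine. The key-frame test and the synchronous "clear instruction" callback are placeholders for the real optimizer thread:

```python
class FaceTracker:
    """Two-buffer keyframe bookkeeping sketched from the abstract: the first
    keyframe data set feeds the facial-identity optimizer, while the second
    buffers key frames that arrive when the optimizer is busy."""

    def __init__(self):
        self.first_set = []       # consumed by the facial-identity optimizer
        self.second_set = []      # staging buffer while the optimizer runs
        self.optimizing = False

    def is_key_frame(self, frame):
        return frame.get("key", False)            # placeholder criterion

    def on_frame(self, frame):
        if not self.is_key_frame(frame):
            return
        if self.optimizing:
            self.second_set.append(frame)         # optimizer busy: stage the frame
        else:
            self.first_set.extend(self.second_set)  # fold in staged frames
            self.second_set.clear()
            self.first_set.append(frame)
            self.optimizing = True                # invoke the optimization thread

    def on_clear_instruction(self):
        """Called by the optimizer when it finishes: promote the staged frames."""
        self.first_set = list(self.second_set)
        self.second_set.clear()
        self.optimizing = False
```

The split lets the tracking loop keep accepting key frames without blocking on the (slow) identity optimization.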
  • Patent number: 11908236
    Abstract: An illumination detection method comprises: acquiring a face image to be detected and a three-dimensional face mesh template; deforming the three-dimensional face mesh template according to the face image to obtain a reconstructed face mesh model; determining, according to the deformation positions in the reconstructed face mesh model of key feature points in the three-dimensional face mesh template, the brightness of the feature points in the face image that correspond to the key feature points; and determining illumination information of the face image according to a predetermined relationship between the brightness of the key feature points and the illumination, and the brightness of the corresponding feature points in the face image.
    Type: Grant
    Filed: December 4, 2019
    Date of Patent: February 20, 2024
    Assignee: BIGO TECHNOLOGY PTE. LTD.
    Inventors: Feiqian Zhang, Gengdai Liu, Leju Yan
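
One common way to relate per-point brightness to illumination is a Lambertian linear model, which the following sketch uses as an illustrative stand-in for the patented brightness/illumination relationship:

```python
import numpy as np

def estimate_light(normals, brightness):
    """Least-squares fit of a directional light from surface normals at the key
    feature points (known from the reconstructed mesh) and the brightness
    observed at the corresponding image points, under a Lambertian model
    b ~ n . l."""
    light, *_ = np.linalg.lstsq(normals, brightness, rcond=None)
    return light

# Toy check: axis-aligned normals plus one oblique normal, lit by a known light.
true_light = np.array([0.2, 0.3, 0.9])
normals = np.array([[1.0, 0, 0], [0, 1, 0], [0, 0, 1], [0.577, 0.577, 0.577]])
brightness = normals @ true_light
estimated = estimate_light(normals, brightness)
```

Because the mesh supplies the normals, only the brightness at a handful of key feature points is needed to recover the light.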
  • Publication number: 20240037852
    Abstract: A method for reconstructing three-dimensional faces is provided. The method includes: estimating a dynamic reconstruction parameter of a current video frame for three-dimensional face reconstruction by inputting, in response to a steady-state reconstruction parameter of the current video frame for the three-dimensional face reconstruction having been estimated by a pre-constructed teacher network model, the current video frame into a student network model distilled from the teacher network model; and reconstructing a three-dimensional face corresponding to the current video frame by inputting the steady-state reconstruction parameter and the dynamic reconstruction parameter into a pre-constructed three-dimensional deformation model.
    Type: Application
    Filed: December 28, 2021
    Publication date: February 1, 2024
    Inventors: Xiaowei ZHANG, Zhongyuan HU, Gengdai LIU
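
The teacher/student split above can be sketched as follows; the two network stubs and the toy morphable-model bases are illustrative assumptions, not the patented networks:

```python
import numpy as np

MEAN_FACE = np.zeros((3, 3))               # toy 3-vertex mean face
ID_BASIS = np.ones((1, 3, 3))              # identity basis (steady-state part)
EXP_BASIS = np.full((1, 3, 3), 0.5)        # expression basis (dynamic part)

def morphable_model(steady, dynamic):
    """3DMM-style linear model: mean + identity offset + expression offset."""
    return (MEAN_FACE + np.tensordot(steady, ID_BASIS, 1)
            + np.tensordot(dynamic, EXP_BASIS, 1))

def teacher(frame):
    """Stub for the heavy teacher network: steady-state (identity) parameters."""
    return np.array([frame.mean()])

def student(frame):
    """Stub for the distilled student network: per-frame dynamic parameters."""
    return np.array([frame.max()])

def reconstruct_frame(frame, steady=None):
    """Run the teacher only when steady-state parameters are not yet available;
    afterwards each frame needs just the cheap student pass."""
    if steady is None:
        steady = teacher(frame)
    return morphable_model(steady, student(frame)), steady
```

Caching the steady-state parameters is what makes per-frame reconstruction cheap: after the first frame, only the distilled student runs.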
  • Patent number: 11775059
    Abstract: A method for determining human eye close degrees includes: acquiring a face image; determining a human eye open amplitude and a reference distance in the face image; calculating a relative amplitude of the human eye open amplitude relative to the reference distance; acquiring a maximum relative amplitude; and calculating a human eye close weight in the face image based on the relative amplitude and the maximum relative amplitude, the human eye close weight being configured to measure a human eye close degree.
    Type: Grant
    Filed: June 24, 2020
    Date of Patent: October 3, 2023
    Assignee: BIGO TECHNOLOGY PTE. LTD.
    Inventors: Feiqian Zhang, Gengdai Liu
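
The arithmetic in the abstract reduces to a normalisation and a linear mapping; the sketch below assumes a linear relation between relative amplitude and close weight, which is an illustrative choice:

```python
def eye_close_weight(open_amplitude, reference_distance, max_relative_amplitude):
    """Eye-close weight in [0, 1] (0 = fully open, 1 = fully closed).
    Dividing the open amplitude by a reference facial distance makes the
    measure scale-invariant; the max relative amplitude calibrates what
    counts as fully open for this face."""
    relative = open_amplitude / reference_distance
    weight = 1.0 - relative / max_relative_amplitude
    return min(max(weight, 0.0), 1.0)
```

Normalising by a reference distance (e.g. an inter-landmark distance) keeps the weight stable as the face moves closer to or farther from the camera.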
  • Publication number: 20230260184
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, relate to a method for training a machine learning network to generate facial expressions for rendering, within a video communication platform, an avatar representing a video conference participant. Video images may be processed by the machine learning network to generate facial expression values. The generated facial expression values may be modified or adjusted, and the modified or adjusted values may then be used to render a digital representation of the video conference participant in the form of an avatar.
    Type: Application
    Filed: March 17, 2022
    Publication date: August 17, 2023
    Inventors: Wenyu Chen, Chichen Fu, Qiang Li, Wenchong Lin, Bo Ling, Gengdai Liu
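
The adjustment step described above can be sketched as a post-processing pass over the predicted values. The blendshape-style dict, the gain, and the [0, 1] range are illustrative assumptions, not the patented adjustment:

```python
def adjust_expressions(values, gain=1.2, limits=(0.0, 1.0)):
    """Post-process network-predicted facial expression values before the
    avatar is rendered: exaggerate each value by a gain, then clamp it to
    the valid range."""
    lo, hi = limits
    return {name: min(max(v * gain, lo), hi) for name, v in values.items()}
```

Such a pass is where stylistic choices (exaggeration, damping, suppressing noisy channels) can be applied without retraining the network.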
  • Publication number: 20230222721
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, relate to a method for generating an avatar within a video communication platform. The system may receive a selection of an avatar model from a group of one or more avatar models. The system receives a first video stream and audio data of a first video conference participant. The system analyzes image frames of the first video stream to determine a group of pixels representing the first video conference participant, and determines a plurality of facial expression parameter values associated with the determined group of pixels. Based on the determined facial expression parameter values, the system generates a first modified video stream depicting a digital representation of the first video conference participant in an avatar form.
    Type: Application
    Filed: January 31, 2022
    Publication date: July 13, 2023
    Inventors: Wenyu Chen, Chichen Fu, Guozhu Hu, Qiang Li, Wenhao Li, Wenchong Lin, Bo Ling, Gengdai Liu, Geng Wang, Kai Wei, Yian Zhu
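
The per-frame pipeline (segment the participant, estimate expression parameters, render) can be sketched as below; the threshold segmentation and summary-statistic "estimator" are toy stand-ins for the real models:

```python
import numpy as np

def participant_mask(frame, background_threshold=0.1):
    """Determine the group of pixels representing the participant by
    thresholding against a plain background (an illustrative stand-in for a
    real segmentation model)."""
    return frame > background_threshold

def expression_parameters(frame, mask):
    """Toy stand-in for the expression-parameter estimator: summary
    statistics over the participant pixels."""
    pixels = frame[mask]
    return {"mouth_open": float(pixels.mean()), "brow_raise": float(pixels.max())}

def avatar_stream(frames):
    """Per-frame pipeline from the abstract: segment the participant,
    estimate expression parameters, and hand them to an avatar renderer."""
    for frame in frames:
        mask = participant_mask(frame)
        yield expression_parameters(frame, mask)   # a renderer would consume these
```

Restricting parameter estimation to the participant's pixels keeps background changes from leaking into the avatar's expression.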
  • Publication number: 20230177702
    Abstract: Provided is a method for training an adaptive rigid prior model. The method includes: initializing model parameters of the adaptive rigid prior model; acquiring a plurality of frames of facial data of a same face by face tracking on training video data using the adaptive rigid prior model; updating the model parameters based on the plurality of frames of facial data; determining whether a condition for stopping the update of the model parameters is satisfied; in response to the condition being satisfied, stopping updating the model parameters and acquiring a final adaptive rigid prior model; and in response to the condition not being satisfied, returning to the step of acquiring the plurality of frames of facial data of the same face by face tracking on the training video data using the adaptive rigid prior model.
    Type: Application
    Filed: April 15, 2021
    Publication date: June 8, 2023
    Inventors: Wenyu CHEN, Gengdai LIU
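
The track/update/check loop in the abstract can be sketched generically. The `track` and `update` callables, the scalar parameter, and the convergence test are illustrative stand-ins for the patented steps:

```python
def train_rigid_prior(video_frames, track, update, max_iters=10, tol=1e-3):
    """Alternating loop from the abstract: track faces with the current prior,
    update the prior's parameters from the tracked facial data, and stop when
    the parameters converge (or an iteration budget runs out)."""
    params = 0.0                             # initialize model parameters
    for _ in range(max_iters):
        facial_data = track(video_frames, params)
        new_params = update(facial_data)
        if abs(new_params - params) < tol:   # stopping condition satisfied
            return new_params
        params = new_params                  # otherwise, track again
    return params
```

With a contractive update the loop converges to a fixed point, which plays the role of the final adaptive rigid prior.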
  • Publication number: 20220292776
    Abstract: Provided is a method for reconstructing a face mesh model. The method includes: acquiring face scanning data to be reconstructed and a three-dimensional face mesh template; obtaining a target face mesh model by hierarchically extracting key feature points in the three-dimensional face mesh template and by sequentially deforming the template based on posture matching positions of the hierarchically extracted key feature points in the face scanning data; and obtaining a reconstructed face mesh model by acquiring global feature points in the target face mesh model and by deforming the target face mesh model based on the posture matching positions of the global feature points in the face scanning data.
    Type: Application
    Filed: November 13, 2019
    Publication date: September 15, 2022
    Inventor: Gengdai LIU
  • Publication number: 20220261070
    Abstract: Provided is a method for determining human eye close degrees, including: acquiring a face image; determining a human eye open amplitude and a reference distance in the face image; calculating a relative amplitude of the human eye open amplitude relative to the reference distance; acquiring a maximum relative amplitude; and calculating a human eye close weight in the face image based on the relative amplitude and the maximum relative amplitude, the human eye close weight being configured to measure a human eye close degree.
    Type: Application
    Filed: June 24, 2020
    Publication date: August 18, 2022
    Inventors: Feiqian ZHANG, Gengdai LIU
  • Publication number: 20220254058
    Abstract: Provided is a method for detecting line-of-sight. The method for detecting line-of-sight includes: determining, based on a key feature point in a face image, a face posture and an eye pupil rotational displacement corresponding to the face image, wherein the eye pupil rotational displacement is a displacement of a pupil center relative to an eyeball center in the face image; and acquiring a line-of-sight direction of an actual face by back-projecting, based on a preset projection function and the face posture, the eye pupil rotational displacement to a three-dimensional space where the actual face is located.
    Type: Application
    Filed: June 22, 2020
    Publication date: August 11, 2022
    Inventor: Gengdai LIU
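
The back-projection step can be sketched under an orthographic camera model; the unit-eyeball assumption and the projection model are illustrative, not the patented projection function:

```python
import numpy as np

def gaze_direction(pupil_disp_2d, head_rotation, scale=1.0):
    """Back-project the 2D pupil rotational displacement (pupil centre minus
    eyeball centre in the image) to a 3D line-of-sight direction, assuming an
    orthographic projection of a unit eyeball and a known face-posture
    rotation matrix."""
    x, y = np.asarray(pupil_disp_2d, dtype=float) / scale  # undo projection scale
    z = -np.sqrt(max(1.0 - x * x - y * y, 0.0))  # pupil lies on the front hemisphere
    cam_dir = np.array([x, y, z])                # direction in camera coordinates
    world_dir = head_rotation.T @ cam_dir        # undo the face-posture rotation
    return world_dir / np.linalg.norm(world_dir)
```

Because the eyeball centre is not directly visible, using the displacement of the pupil relative to it folds both eye rotation and head posture into a single recoverable direction.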
  • Publication number: 20220075992
    Abstract: An illumination detection method comprises: acquiring a face image to be detected and a three-dimensional face mesh template; deforming the three-dimensional face mesh template according to the face image to obtain a reconstructed face mesh model; determining, according to the deformation positions in the reconstructed face mesh model of key feature points in the three-dimensional face mesh template, the brightness of the feature points in the face image that correspond to the key feature points; and determining illumination information of the face image according to a predetermined relationship between the brightness of the key feature points and the illumination, and the brightness of the corresponding feature points in the face image.
    Type: Application
    Filed: December 4, 2019
    Publication date: March 10, 2022
    Inventors: Feiqian ZHANG, Gengdai LIU, Leju YAN