Patents by Inventor Xiangkai LIN

Xiangkai LIN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11961325
    Abstract: An image processing method and apparatus, a computer-readable medium, and an electronic device are provided. The image processing method includes: respectively projecting, according to a plurality of view angle parameters corresponding to a plurality of view angles, a face model of a target object onto a plurality of face images of the target object acquired from the plurality of view angles, to determine correspondences between regions on the face model and regions on the face images; respectively extracting, based on the correspondences and a target region in the face model for which a texture image needs to be generated, images corresponding to the target region from the plurality of face images; and fusing the images that correspond to the target region and that are respectively extracted from the plurality of face images, to generate the texture image.
    Type: Grant
    Filed: February 24, 2021
    Date of Patent: April 16, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xiangkai Lin, Linchao Bao, Yonggen Ling, Yibing Song, Wei Liu
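The abstract above describes a multi-view texture-fusion step. Below is a minimal sketch of that kind of per-region blending in NumPy; the per-view partial textures, visibility masks, and uniform weighting are illustrative assumptions, not the patented pipeline, which derives the regions by projecting the face model into each view.
```python
# Illustrative per-pixel fusion of per-view partial textures in UV space,
# assuming the projection step has already produced a partial texture and a
# visibility mask per view angle. The uniform weighting is an assumption, not
# the patented fusion rule.
import numpy as np

def fuse_partial_textures(partial_textures, visibility_masks):
    """partial_textures: list of (H, W, 3) arrays; visibility_masks: list of (H, W) arrays in [0, 1]."""
    accum = np.zeros_like(partial_textures[0], dtype=np.float64)
    weight = np.zeros(partial_textures[0].shape[:2], dtype=np.float64)
    for tex, mask in zip(partial_textures, visibility_masks):
        accum += tex * mask[..., None]               # accumulate colour weighted by visibility
        weight += mask
    weight = np.maximum(weight, 1e-8)                # avoid division by zero where no view covers a pixel
    return accum / weight[..., None]

views = [np.random.rand(64, 64, 3) for _ in range(3)]   # stand-ins for three view angles
masks = [np.random.rand(64, 64) for _ in range(3)]
fused = fuse_partial_textures(views, masks)
print(fused.shape)                                   # (64, 64, 3)
```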
  • Patent number: 11941737
    Abstract: Embodiments of this application disclose an artificial intelligence-based (AI-based) animation character control method. When one animation character has a corresponding face customization base and another animation character does not, the character having the face customization base may be used as the driving character, and the character without a face customization base may be used as the driven character.
    Type: Grant
    Filed: September 27, 2021
    Date of Patent: March 26, 2024
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventors: Sheng Wang, Xing Ji, Zhantu Zhu, Xiangkai Lin, Linchao Bao
  • Patent number: 11900557
    Abstract: A three-dimensional face model generation method is provided. The method includes: obtaining an inputted three-dimensional face mesh of a target object; aligning the three-dimensional face mesh with a first three-dimensional face model of a standard object according to face keypoints; performing fitting on the three-dimensional face mesh and a local area of the first three-dimensional face model, to obtain a second three-dimensional face model after local fitting; and performing fitting on the three-dimensional face mesh and a global area of the second three-dimensional face model, to obtain a three-dimensional face model of the target object after global fitting.
    Type: Grant
    Filed: October 22, 2021
    Date of Patent: February 13, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xiangkai Lin, Linchao Bao
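The entry above centers on aligning an input face mesh to a standard model using face keypoints before local and global fitting. Below is a minimal sketch of only that alignment step, a standard similarity (Kabsch/Umeyama) fit; the keypoint arrays and synthetic target are stand-ins, and the fitting stages themselves are not reproduced.
```python
# Minimal keypoint alignment sketch (Umeyama/Kabsch similarity fit): only the
# "align the mesh to the standard model by face keypoints" step, with invented
# keypoints; the local and global fitting stages are not reproduced.
import numpy as np

def align_by_keypoints(src_kpts, dst_kpts):
    """Return scale s, rotation R, translation t with dst ~= s * R @ src + t."""
    mu_s, mu_d = src_kpts.mean(axis=0), dst_kpts.mean(axis=0)
    src_c, dst_c = src_kpts - mu_s, dst_kpts - mu_d
    U, S, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # keep a proper rotation (det = +1)
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    s = (S * np.diag(D)).sum() / (src_c ** 2).sum()
    t = mu_d - s * (R @ mu_s)
    return s, R, t

src = np.random.rand(68, 3)                          # e.g. 68 keypoints on the input face mesh
dst = src * 2.0 + 1.0                                # synthetic keypoints on the standard model
s, R, t = align_by_keypoints(src, dst)
aligned = src @ (s * R).T + t                        # aligned keypoints, approximately equal to dst
```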
  • Patent number: 11798190
    Abstract: Embodiments of this application disclose a method, performed at an electronic device, for displaying a virtual character in a plurality of real-world images captured by a camera. The method includes: capturing an initial real-world image using the camera; simulating a display of the virtual character in the initial real-world image; capturing a subsequent real-world image using the camera after a movement of the camera; determining position and pose updates of the camera associated with the movement of the camera from tracking one or more feature points in the initial real-world image and the subsequent real-world image; and adjusting the display of the virtual character in the subsequent real-world image in accordance with the position and pose updates of the camera associated with the movement of the camera.
    Type: Grant
    Filed: December 6, 2021
    Date of Patent: October 24, 2023
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xiangkai Lin, Liang Qiao, Fengming Zhu, Yu Zuo, Zeyu Yang, Yonggen Ling, Linchao Bao
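The method above adjusts a virtual character's on-screen placement as the camera moves. The sketch below illustrates just the geometric idea under assumed conventions: a world-anchored character is re-projected after composing the camera pose with a relative motion that, in the patent, would come from feature-point tracking.
```python
# Geometric illustration under assumed conventions: a virtual character anchored
# in world space is re-projected after the camera pose is composed with a
# relative motion. In the method above that motion comes from tracking feature
# points between the two real-world images; here it is just an assumed input.
import numpy as np

def compose_pose(T_world_cam, R_delta, t_delta):
    """Apply a relative camera motion to a 4x4 camera-to-world pose."""
    T_delta = np.eye(4)
    T_delta[:3, :3] = R_delta
    T_delta[:3, 3] = t_delta
    return T_world_cam @ T_delta

def project(point_world, T_world_cam, K):
    """Project a world-space anchor point into the current image."""
    p_cam = np.linalg.inv(T_world_cam) @ np.append(point_world, 1.0)
    uvw = K @ p_cam[:3]
    return uvw[:2] / uvw[2]

K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
pose = np.eye(4)                                    # pose for the initial real-world image
anchor = np.array([0.0, 0.0, 2.0])                  # character placed 2 m in front of the camera
print(project(anchor, pose, K))                     # on-screen position before the movement
pose = compose_pose(pose, np.eye(3), np.array([0.1, 0.0, 0.0]))  # camera slides 10 cm to the right
print(project(anchor, pose, K))                     # the character shifts left in the image
```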
  • Patent number: 11748934
    Abstract: This application provides a three-dimensional (3D) expression base generation method performed by a computer device. The method includes: obtaining image pairs of a target object in n types of head postures, each image pair including a color feature image and a depth image in a head posture; constructing a 3D human face model of the target object according to the n image pairs; and generating a set of expression bases of the target object according to the 3D human face model of the target object. According to this application, based on a reconstructed 3D human face model, a set of expression bases of a target object is further generated, so that more diversified product functions may be expanded based on the set of expression bases.
    Type: Grant
    Filed: October 15, 2021
    Date of Patent: September 5, 2023
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xiangkai Lin, Linchao Bao
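The abstract above concerns generating a personalized set of expression bases. As a hedged illustration of how such a set is typically consumed, not how it is generated, the sketch below blends per-expression vertex offsets onto a neutral mesh; the vertex count, the 52-offset set, and the weights are invented stand-ins.
```python
# Illustration of consuming a set of expression bases (not of generating it):
# a posed face is the neutral mesh plus a weighted sum of per-expression vertex
# offsets. The vertex count, 52-offset set, and weights are invented stand-ins.
import numpy as np

def blend_expression(neutral_vertices, expression_deltas, weights):
    """neutral_vertices: (V, 3); expression_deltas: (B, V, 3); weights: (B,)."""
    return neutral_vertices + np.tensordot(weights, expression_deltas, axes=1)

neutral = np.random.rand(1000, 3)              # stand-in for the reconstructed 3D human face model
deltas = np.random.rand(52, 1000, 3) * 0.01    # stand-in set of 52 expression offsets
weights = np.zeros(52)
weights[3] = 0.8                               # e.g. apply 80% of one expression
posed = blend_expression(neutral, deltas, weights)
print(posed.shape)                             # (1000, 3)
```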
  • Publication number: 20230123433
    Abstract: This application discloses an artificial intelligence (AI) based animation character drive method. A first expression base of a first animation character corresponding to a speaker is determined by acquiring media data including a facial expression change while the speaker delivers a speech, and the first expression base may reflect different expressions of the first animation character. After target text information is obtained, an acoustic feature and a target expression parameter corresponding to the target text information are determined according to the target text information, the foregoing acquired media data, and the first expression base. A second animation character having a second expression base may be driven according to the acoustic feature and the target expression parameter, so that the second animation character may simulate the speaker's sound and facial expression when saying the target text information, thereby improving the user's experience of interacting with the animation character.
    Type: Application
    Filed: December 13, 2022
    Publication date: April 20, 2023
    Inventors: Linchao BAO, Shiyin KANG, Sheng WANG, Xiangkai LIN, Xing JI, Zhantu ZHU, Kuongchi LEI, Deyi TUO, Peng LIU
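The publication above drives a second animation character, which has its own expression base, from parameters derived for a first character. A minimal stand-in for one piece of that idea is sketched below: re-expressing a facial offset built from one expression base in terms of another base via least squares. The flattened bases, dimensions, and weights are assumptions, and the acoustic-feature and text processing are not reproduced.
```python
# Stand-in retargeting sketch: express a facial offset produced with a first
# character's expression base in terms of a second character's base via least
# squares. The flattened bases, blendshape counts, and per-frame weights are
# invented; the text/audio-driven prediction in the publication is not reproduced.
import numpy as np

def retarget_weights(base_a, weights_a, base_b):
    """base_a: (Ba, V*3) flattened vertex deltas; base_b: (Bb, V*3); weights_a: (Ba,)."""
    target_offset = base_a.T @ weights_a                    # facial offset shown by the first character
    w_b, *_ = np.linalg.lstsq(base_b.T, target_offset, rcond=None)
    return w_b                                              # weights for the second expression base

rng = np.random.default_rng(0)
base_a = rng.normal(size=(46, 300))                         # first character: 46 blendshapes (stand-in)
base_b = rng.normal(size=(52, 300))                         # second character: 52 blendshapes (stand-in)
weights_a = rng.uniform(0.0, 1.0, size=46)                  # e.g. one frame of predicted expression parameters
print(retarget_weights(base_a, weights_a, base_b).shape)    # (52,)
```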
  • Patent number: 11605193
    Abstract: This application discloses an artificial intelligence (AI) based animation character drive method. A first expression base of a first animation character corresponding to a speaker is determined by acquiring media data including a facial expression change while the speaker delivers a speech, and the first expression base may reflect different expressions of the first animation character. After target text information is obtained, an acoustic feature and a target expression parameter corresponding to the target text information are determined according to the target text information, the foregoing acquired media data, and the first expression base. A second animation character having a second expression base may be driven according to the acoustic feature and the target expression parameter, so that the second animation character may simulate the speaker's sound and facial expression when saying the target text information, thereby improving the user's experience of interacting with the animation character.
    Type: Grant
    Filed: August 18, 2021
    Date of Patent: March 14, 2023
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Linchao Bao, Shiyin Kang, Sheng Wang, Xiangkai Lin, Xing Ji, Zhantu Zhu, Kuongchi Lei, Deyi Tuo, Peng Liu
  • Patent number: 11605214
    Abstract: Embodiments of this application disclose a method for determining camera pose information of a camera of a mobile terminal. The method includes: obtaining a first image, a second image, and a template image, the first image being the image frame preceding the second image, and the first image and the second image each including a respective instance of the template image and being captured by the mobile terminal using the camera at a corresponding spatial position; determining a first homography between the template image and the second image; determining a second homography between the first image and the second image; and performing complementary filtering processing on the first homography and the second homography, to obtain camera pose information of the camera, wherein the camera pose information of the camera represents a spatial position of the mobile terminal when the mobile terminal captures the second image using the camera.
    Type: Grant
    Filed: February 23, 2021
    Date of Patent: March 14, 2023
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xiangkai Lin, Linchao Bao, Wei Liu
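The abstract above fuses a template-to-frame homography with a frame-to-frame homography through complementary filtering. The sketch below shows one naive way to blend the two 3x3 matrices; the fixed blending weight alpha and the H[2,2] normalization are illustrative assumptions rather than the patented filter.
```python
# Naive complementary-filter sketch: blend the template-to-current-frame
# homography (drift-free but noisy) with the estimate propagated by the
# frame-to-frame homography (smooth but drifting). The blending weight alpha
# and the H[2,2] normalization are illustrative assumptions.
import numpy as np

def complementary_filter(H_template, H_prev_fused, H_frame_to_frame, alpha=0.3):
    """Return a fused 3x3 template-to-current-frame homography."""
    H_propagated = H_frame_to_frame @ H_prev_fused   # propagate the previous fused estimate
    H_fused = alpha * H_template + (1.0 - alpha) * H_propagated
    return H_fused / H_fused[2, 2]                   # keep the usual H[2, 2] = 1 scale

H_template = np.eye(3) + 0.01 * np.random.randn(3, 3)    # noisy "absolute" measurement
H_prev_fused = np.eye(3)                                 # previous fused estimate
H_frame_to_frame = np.eye(3)                             # frame-to-frame tracking result
print(np.round(complementary_filter(H_template, H_prev_fused, H_frame_to_frame), 3))
```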
  • Patent number: 11481923
    Abstract: This application discloses a repositioning method and apparatus in a camera pose tracking process, a device, and a storage medium, belonging to the field of augmented reality (AR).
    Type: Grant
    Filed: June 29, 2020
    Date of Patent: October 25, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xiangkai Lin, Yonggen Ling, Linchao Bao, Wei Liu
  • Publication number: 20220284679
    Abstract: A method, an apparatus, and a storage medium for constructing a three-dimensional (3D) facial mesh using artificial intelligence are disclosed. The method includes: obtaining a facial point cloud of a target object; determining, through an expansion calculation, pixel coordinates on a facial texture image of the target object that correspond to 3D data points in the facial point cloud, as index information of the 3D data points; performing triangulation on pixels on the facial texture image to obtain triangulation information; constructing an initial 3D facial mesh according to the triangulation information, the index information, and the facial point cloud; determining a non-core region in the initial 3D facial mesh; smoothing the non-core region in the initial 3D facial mesh; and replacing the non-core region in the initial 3D facial mesh with the smoothed non-core region to obtain a 3D facial mesh of the target object.
    Type: Application
    Filed: May 23, 2022
    Publication date: September 8, 2022
    Inventors: Xiangkai LIN, Sheng WANG
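The abstract above triangulates pixels on the facial texture image and smooths a non-core region of the resulting mesh. The sketch below illustrates those two steps with SciPy's Delaunay triangulation and a simple one-ring Laplacian smoother; the random UV coordinates, point cloud, and choice of non-core indices are placeholders.
```python
# Two steps from the abstract above, sketched with SciPy: Delaunay triangulation
# of texture-image (UV) coordinates and simple one-ring Laplacian smoothing of a
# chosen "non-core" vertex subset. All inputs and the non-core selection are
# placeholders.
import numpy as np
from scipy.spatial import Delaunay

def laplacian_smooth(vertices, faces, smooth_idx, iterations=5):
    """Repeatedly replace each selected vertex with the mean of its one-ring neighbours."""
    neighbours = {i: set() for i in range(len(vertices))}
    for a, b, c in faces:
        neighbours[a] |= {b, c}
        neighbours[b] |= {a, c}
        neighbours[c] |= {a, b}
    v = vertices.copy()
    for _ in range(iterations):
        v_new = v.copy()
        for i in smooth_idx:
            v_new[i] = np.mean(v[list(neighbours[i])], axis=0)
        v = v_new
    return v

uv = np.random.rand(500, 2)              # stand-in pixel (UV) coordinates from the expansion calculation
points3d = np.random.rand(500, 3)        # stand-in facial point cloud
faces = Delaunay(uv).simplices           # triangulate in the texture-image plane
non_core = np.arange(400, 500)           # assume the last 100 points form the non-core region
smoothed = laplacian_smooth(points3d, faces, non_core)
```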
  • Publication number: 20220222893
    Abstract: Embodiments of this application disclose a method and an apparatus for generating a three-dimensional face model. The method includes obtaining a plurality of target face images and a plurality of depth images corresponding to the plurality of target face images, the plurality of target face images comprising a same face; obtaining, according to an image type of each target face image, a regional face image that is in each target face image and that corresponds to the image type of each target face image, the image type comprising a front face type, a left face type, a right face type, or a head-up type; obtaining a regional depth image in a corresponding depth image according to the regional face image; and performing image fusion based on the plurality of obtained regional face images and the plurality of obtained regional depth images, to generate a three-dimensional face model.
    Type: Application
    Filed: March 29, 2022
    Publication date: July 14, 2022
    Inventors: Wenpan LI, Linchao BAO, Xiangkai LIN
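The method above extracts a regional face image and a matching regional depth image according to each image's type (front, left, right, or head-up). The sketch below is a loose stand-in for that selection step; the fixed crop boxes per image type are invented for illustration, and the fusion step is omitted.
```python
# Loose stand-in for the regional selection step: crop both the face image and
# its depth image with a fixed box chosen by image type. The crop boxes are
# invented for illustration; the fusion step is omitted.
import numpy as np

REGION_BY_TYPE = {                       # (row0, row1, col0, col1) in a 256x256 image (assumed sizes)
    "front":   (0, 256, 64, 192),
    "left":    (0, 256, 0, 128),
    "right":   (0, 256, 128, 256),
    "head_up": (0, 128, 0, 256),
}

def regional_pair(face_img, depth_img, image_type):
    """Return the regional face image and the matching regional depth image."""
    r0, r1, c0, c1 = REGION_BY_TYPE[image_type]
    return face_img[r0:r1, c0:c1], depth_img[r0:r1, c0:c1]

face = np.random.rand(256, 256, 3)
depth = np.random.rand(256, 256)
region_face, region_depth = regional_pair(face, depth, "left")
print(region_face.shape, region_depth.shape)   # (256, 128, 3) (256, 128)
```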
  • Publication number: 20220165031
    Abstract: A method for constructing a 3D model of a target object, performed by a computer device, is provided, the method including: obtaining at least two initial images of a target object from a plurality of shooting angles, the at least two initial images respectively including depth information of the target object, and the depth information indicating distances between a plurality of points of the target object and a reference position; obtaining first point cloud information corresponding to the at least two initial images respectively according to the depth information in the at least two initial images; fusing the first point cloud information respectively corresponding to the at least two initial images into second point cloud information; and constructing a 3D model of the target object according to the second point cloud information.
    Type: Application
    Filed: February 8, 2022
    Publication date: May 26, 2022
    Inventor: Xiangkai LIN
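The abstract above converts per-view depth information into point clouds and fuses them before reconstruction. The sketch below shows the standard back-projection and concatenation under assumed, known camera intrinsics and per-view poses; the patented registration and model-construction details are not reproduced.
```python
# Standard back-projection and fusion sketch: turn each depth map into a
# camera-space point cloud using known intrinsics, move it into a shared world
# frame using a known per-view pose, and concatenate. Intrinsics and poses are
# assumed inputs; the patented registration details are not reproduced.
import numpy as np

def depth_to_points(depth, K):
    """Back-project an (H, W) depth map into camera-space 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    x = (u.ravel() - K[0, 2]) * z / K[0, 0]
    y = (v.ravel() - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=1)

def fuse_point_clouds(depth_maps, poses, K):
    """poses: list of 4x4 camera-to-world transforms, one per depth map."""
    clouds = []
    for depth, T in zip(depth_maps, poses):
        pts = depth_to_points(depth, K)
        pts_h = np.c_[pts, np.ones(len(pts))]
        clouds.append((pts_h @ T.T)[:, :3])          # move into the shared world frame
    return np.concatenate(clouds, axis=0)

K = np.array([[525.0, 0.0, 160.0], [0.0, 525.0, 120.0], [0.0, 0.0, 1.0]])
depths = [np.full((240, 320), 1.5), np.full((240, 320), 1.5)]   # two stand-in depth images
poses = [np.eye(4), np.eye(4)]
print(fuse_point_clouds(depths, poses, K).shape)     # (153600, 3)
```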
  • Publication number: 20220138974
    Abstract: A method for acquiring a texture of a three-dimensional (3D) model includes: acquiring at least two 3D networks generated from a target object at a plurality of angles, the at least two 3D networks including a first correspondence between point cloud information and color information of the target object, and first camera poses of the target object; acquiring an offset between 3D points used for recording the same position of the target object in the at least two 3D networks according to the first camera poses respectively included in the at least two 3D networks; updating the first correspondence according to the offset, to acquire a second correspondence between the point cloud information and the color information of the target object; and acquiring a surface color texture of a 3D model of the target object according to the second correspondence.
    Type: Application
    Filed: January 19, 2022
    Publication date: May 5, 2022
    Inventor: Xiangkai LIN
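The abstract above corrects the point-to-color correspondence using an offset between 3D points that record the same positions across captures. The sketch below is a rough stand-in: it estimates a single rigid offset between two point sets, after moving them into a shared frame via their camera poses, and then re-looks up colors; the single-offset model and brute-force nearest-neighbour search are simplifying assumptions.
```python
# Rough stand-in: estimate a single offset between two reconstructions of the
# same surface points (after moving both into a shared frame with their camera
# poses) and use it to correct the point-to-colour lookup. The single-offset
# model and brute-force nearest-neighbour search are simplifying assumptions.
import numpy as np

def offset_corrected_colors(points_a, colors_a, points_b, T_a, T_b):
    """Return colours for points_b after compensating the offset between the two captures."""
    to_world = lambda pts, T: (np.c_[pts, np.ones(len(pts))] @ T.T)[:, :3]
    a_w, b_w = to_world(points_a, T_a), to_world(points_b, T_b)
    offset = (a_w - b_w).mean(axis=0)                # one rigid offset, for illustration only
    b_corrected = b_w + offset
    # Brute-force nearest-neighbour colour lookup (small clouds only).
    dists = np.linalg.norm(b_corrected[:, None, :] - a_w[None, :, :], axis=2)
    return colors_a[dists.argmin(axis=1)]

pa = np.random.rand(200, 3)                          # points from the first capture
ca = np.random.rand(200, 3)                          # their colours
pb = pa + 0.02                                       # the same points, offset in the second capture
print(offset_corrected_colors(pa, ca, pb, np.eye(4), np.eye(4)).shape)   # (200, 3)
```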
  • Patent number: 11321870
    Abstract: Embodiments of this application disclose a camera attitude tracking method and apparatus, a device, and a system in the field of augmented reality (AR). The method includes receiving, by a second device with a camera, an initial image and an initial attitude parameter that are transmitted by a first device; obtaining, by the second device, a second image acquired by the camera; obtaining, by the second device, a camera attitude variation of the second image relative to the initial image; and obtaining, by the second device, according to the initial attitude parameter and the camera attitude variation, a second camera attitude parameter, the second camera attitude parameter corresponding to the second image.
    Type: Grant
    Filed: August 24, 2020
    Date of Patent: May 3, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xiangkai Lin, Yonggen Ling, Linchao Bao, Xiaolong Zhu, Liang Qiao, Wei Liu
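The abstract above has the second device combine an initial attitude parameter received from the first device with its own tracked attitude variation. The sketch below shows that composition step for a rotation-plus-translation attitude; the left-multiplication convention and the example angles are assumptions.
```python
# Illustrative composition of the initial attitude (received from the first
# device) with the attitude variation tracked by the second device. The
# convention that the variation is applied on the left is an assumption.
import numpy as np

def rot_y(theta):
    """Rotation of `theta` radians about the y axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def compose_attitude(R_initial, t_initial, R_variation, t_variation):
    """Second camera attitude = tracked variation applied to the shared initial attitude."""
    R = R_variation @ R_initial
    t = R_variation @ t_initial + t_variation
    return R, t

R0, t0 = rot_y(np.deg2rad(10)), np.zeros(3)                 # initial attitude from the first device
dR, dt = rot_y(np.deg2rad(5)), np.array([0.05, 0.0, 0.0])   # variation tracked by the second device
R2, t2 = compose_attitude(R0, t0, dR, dt)
print(np.round(R2, 3), t2)
```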
  • Publication number: 20220092813
    Abstract: Embodiments of this application disclose a method, performed at an electronic device, for displaying a virtual character in a plurality of real-world images captured by a camera. The method includes: capturing an initial real-world image using the camera; simulating a display of the virtual character in the initial real-world image; capturing a subsequent real-world image using the camera after a movement of the camera; determining position and pose updates of the camera associated with the movement of the camera from tracking one or more feature points in the initial real-world image and the subsequent real-world image; and adjusting the display of the virtual character in the subsequent real-world image in accordance with the position and pose updates of the camera associated with the movement of the camera.
    Type: Application
    Filed: December 6, 2021
    Publication date: March 24, 2022
    Inventors: Xiangkai LIN, Liang QIAO, Fengming ZHU, Yu ZUO, Zeyu YANG, Yonggen LING, Linchao BAO
  • Patent number: 11276183
    Abstract: A relocalization method includes: obtaining, by a front-end program run on a device, a target image acquired after an ith marker image in a plurality of marker images; determining, by the front-end program, the target image as an (i+1)th marker image when the target image satisfies a relocalization condition, and transmitting the target image to a back-end program; and performing, by the front-end program, feature point tracking, relative to the target image, on a current image acquired after the target image, to obtain a first pose parameter. The back-end program performs relocalization on the target image to obtain a second pose parameter, and transmits the second pose parameter to the front-end program. The front-end program calculates a current pose parameter of the current image according to the first pose parameter and the second pose parameter.
    Type: Grant
    Filed: July 1, 2020
    Date of Patent: March 15, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xiangkai Lin, Yonggen Ling, Linchao Bao, Wei Liu
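The abstract above ends with the front-end program combining its tracked pose, which is relative to a marker image, with the back end's relocalized pose of that marker image. The sketch below shows that composition with 4x4 transforms under an assumed camera-to-world convention; the tracking and relocalization themselves are not reproduced.
```python
# Illustrative final step of the relocalization scheme above: compose the back
# end's relocalized pose of the marker image with the front end's tracked pose
# of the current image relative to that marker image. A camera-to-world 4x4
# convention is assumed here.
import numpy as np

def make_pose(R, t):
    """Build a 4x4 rigid transform from a rotation matrix and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def combine_poses(T_marker_in_world, T_current_in_marker):
    """Current image pose in the world frame (second pose parameter composed with the first)."""
    return T_marker_in_world @ T_current_in_marker

T_reloc = make_pose(np.eye(3), np.array([1.0, 0.0, 0.0]))   # from the back-end relocalization
T_track = make_pose(np.eye(3), np.array([0.0, 0.2, 0.0]))   # from front-end feature point tracking
print(combine_poses(T_reloc, T_track)[:3, 3])               # -> [1.  0.2 0. ]
```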
  • Patent number: 11270460
    Abstract: A method for determining a pose of an image capturing device is performed at an electronic device. The electronic device acquires a plurality of image frames captured by the image capturing device, extracts a plurality of matching feature points from the plurality of image frames and determines first position information of each of the matching feature points in each of the plurality of image frames. After estimating second position information of each of the matching feature points in a current image frame in the plurality of image frames by using the first position information of each of the matching feature points extracted from a previous image frame in the plurality of image frames, the electronic device determines a pose of the image capturing device based on the first position information and the second position information of each of the matching feature points in the current image frame.
    Type: Grant
    Filed: July 27, 2020
    Date of Patent: March 8, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Liang Qiao, Xiangkai Lin, Linchao Bao, Yonggen Ling, Fengming Zhu
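The abstract above determines the camera pose from the predicted and observed positions of matched feature points. As a hedged, simplified stand-in, the sketch below fits only a 2D affine motion to such matched positions by least squares; the full pose estimation in the patent is not reproduced.
```python
# Simplified stand-in: fit a 2D affine motion between feature positions
# predicted from the previous frame and those observed in the current frame.
# The patented method determines a full camera pose; this only illustrates
# least-squares fitting to matched point positions.
import numpy as np

def fit_affine(predicted_xy, observed_xy):
    """Solve observed ~= A @ [x, y, 1]^T for a 2x3 affine matrix A."""
    ones = np.ones((len(predicted_xy), 1))
    X = np.hstack([predicted_xy, ones])               # (n, 3) design matrix
    A, *_ = np.linalg.lstsq(X, observed_xy, rcond=None)
    return A.T                                         # (2, 3)

pred = np.random.rand(50, 2) * 100                     # positions predicted from the previous frame
obs = pred + np.array([3.0, -2.0])                     # observed positions: a pure image-plane shift
print(np.round(fit_affine(pred, obs), 3))              # last column is approximately [3, -2]
```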
  • Publication number: 20220044491
    Abstract: A three-dimensional face model generation method is provided. The method includes: obtaining an inputted three-dimensional face mesh of a target object; aligning the three-dimensional face mesh with a first three-dimensional face model of a standard object according to face keypoints; performing fitting on the three-dimensional face mesh and a local area of the first three-dimensional face model, to obtain a second three-dimensional face model after local fitting; and performing fitting on the three-dimensional face mesh and a global area of the second three-dimensional face model, to obtain a three-dimensional face model of the target object after global fitting.
    Type: Application
    Filed: October 22, 2021
    Publication date: February 10, 2022
    Inventors: Xiangkai LIN, Linchao BAO
  • Publication number: 20220036636
    Abstract: This application provides a three-dimensional (3D) expression base generation method performed by a computer device. The method includes: obtaining image pairs of a target object in n types of head postures, each image pair including a color feature image and a depth image in a head posture; constructing a 3D human face model of the target object according to the n image pairs; and generating a set of expression bases of the target object according to the 3D human face model of the target object. According to this application, based on a reconstructed 3D human face model, a set of expression bases of a target object is further generated, so that more diversified product functions may be expanded based on the set of expression bases.
    Type: Application
    Filed: October 15, 2021
    Publication date: February 3, 2022
    Inventors: Xiangkai LIN, Linchao BAO
  • Publication number: 20220012930
    Abstract: Embodiments of this application disclose an artificial intelligence-based (AI-based) animation character control method. When one animation character has a corresponding face customization base and another animation character does not, the character having the face customization base may be used as the driving character, and the character without a face customization base may be used as the driven character.
    Type: Application
    Filed: September 27, 2021
    Publication date: January 13, 2022
    Applicant: Tencent Technology (Shenzhen) Company Limited
    Inventors: Sheng WANG, Xing JI, Zhantu ZHU, Xiangkai LIN, Linchao BAO