Patents by Inventor Yonggen Ling

Yonggen Ling has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220092813
    Abstract: Embodiments of this application disclose a method, performed at an electronic device, for displaying a virtual character in a plurality of real-world images captured by a camera. The method includes: capturing an initial real-world image using the camera; simulating a display of the virtual character in the initial real-world image; capturing a subsequent real-world image using the camera after a movement of the camera; determining position and pose updates of the camera associated with the movement of the camera by tracking one or more feature points across the initial real-world image and the subsequent real-world image; and adjusting the display of the virtual character in the subsequent real-world image in accordance with the position and pose updates of the camera associated with the movement of the camera.
    Type: Application
    Filed: December 6, 2021
    Publication date: March 24, 2022
    Inventors: Xiangkai LIN, Liang QIAO, Fengming ZHU, Yu ZUO, Zeyu YANG, Yonggen LING, Linchao BAO
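    Illustrative sketch: a minimal, hypothetical illustration of the tracking step in this abstract, assuming OpenCV. Feature points are detected in the initial image, tracked into the subsequent image with KLT optical flow, and the camera's rotation and up-to-scale translation are recovered; the intrinsics K and all names are assumptions, not the patent's implementation.

      import cv2
      import numpy as np

      K = np.array([[700.0, 0.0, 320.0],     # assumed pinhole intrinsics
                    [0.0, 700.0, 240.0],
                    [0.0, 0.0, 1.0]])

      def pose_update(initial_gray, subsequent_gray):
          # Detect feature points in the initial real-world image.
          p0 = cv2.goodFeaturesToTrack(initial_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=8)
          # Track them into the subsequent image (KLT optical flow).
          p1, status, _ = cv2.calcOpticalFlowPyrLK(initial_gray,
                                                   subsequent_gray, p0, None)
          good0 = p0[status.ravel() == 1].reshape(-1, 2)
          good1 = p1[status.ravel() == 1].reshape(-1, 2)
          # Recover the camera's rotation R and unit-scale translation t.
          E, inliers = cv2.findEssentialMat(good0, good1, K, method=cv2.RANSAC)
          _, R, t, _ = cv2.recoverPose(E, good0, good1, K, mask=inliers)
          return R, t   # used to re-render the virtual character consistently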
  • Patent number: 11276183
    Abstract: A relocalization method includes: obtaining, by a front-end program run on a device, a target image acquired after an ith marker image in a plurality of marker images; determining, by the front-end program, the target image as an (i+1)th marker image when the target image satisfies a relocalization condition, and transmitting the target image to a back-end program; and performing, by the front-end program, feature point tracking, relative to the target image, on a current image acquired after the target image, to obtain a first pose parameter. The back-end program performs relocalization on the target image to obtain a second pose parameter and transmits the second pose parameter to the front-end program. The front-end program then calculates a current pose parameter of the current image according to the first pose parameter and the second pose parameter.
    Type: Grant
    Filed: July 1, 2020
    Date of Patent: March 15, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xiangkai Lin, Yonggen Ling, Linchao Bao, Wei Liu
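    Illustrative sketch: the fusion step at the end of this abstract, reduced to pose composition. Under the assumption that poses are 4x4 homogeneous matrices, the back-end's relocalized pose of the marker image (second pose parameter) is composed with the pose the front-end tracked relative to that marker image (first pose parameter); all names are assumptions.

      import numpy as np

      def current_pose(T_marker_in_world, T_current_in_marker):
          # T_marker_in_world: back-end relocalization result (second pose).
          # T_current_in_marker: front-end tracking result (first pose).
          return T_marker_in_world @ T_current_in_marker  # current -> world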
  • Patent number: 11270460
    Abstract: A method for determining a pose of an image capturing device is performed at an electronic device. The electronic device acquires a plurality of image frames captured by the image capturing device, extracts a plurality of matching feature points from the plurality of image frames and determines first position information of each of the matching feature points in each of the plurality of image frames. After estimating second position information of each of the matching feature points in a current image frame in the plurality of image frames by using the first position information of each of the matching feature points extracted from a previous image frame in the plurality of image frames, the electronic device determines a pose of the image capturing device based on the first position information and the second position information of each of the matching feature points in the current image frame.
    Type: Grant
    Filed: July 27, 2020
    Date of Patent: March 8, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Liang Qiao, Xiangkai Lin, Linchao Bao, Yonggen Ling, Fengming Zhu
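    Illustrative sketch: one plausible reading of the final pose solve in this abstract, assuming OpenCV. If the "first position information" has been lifted to 3D map points and the "second position information" gives their predicted 2D locations in the current frame, the device pose follows from a RANSAC PnP solve; the 3D lifting and all names are assumptions.

      import cv2
      import numpy as np

      def capturing_device_pose(pts3d, pts2d, K):
          # pts3d: Nx3 float32 map points; pts2d: Nx2 float32 predicted
          # locations in the current image frame; K: 3x3 camera intrinsics.
          ok, rvec, tvec, inliers = cv2.solvePnPRansac(pts3d, pts2d, K, None)
          R, _ = cv2.Rodrigues(rvec)   # rotation of the image capturing device
          return R, tvec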
  • Publication number: 20220051061
    Abstract: An artificial intelligence-based action recognition method includes: determining, according to video data comprising an interactive object, node sequence information corresponding to video frames in the video data, the node sequence information of each video frame including position information of nodes in a node sequence, the nodes in the node sequence being nodes of the interactive object that are moved to implement a corresponding interactive action; determining action categories corresponding to the video frames in the video data, including: determining, according to the node sequence information corresponding to N consecutive video frames in the video data, action categories respectively corresponding to the N consecutive video frames; and determining, according to the action categories corresponding to the video frames in the video data, a target interactive action made by the interactive object in the video data.
    Type: Application
    Filed: November 1, 2021
    Publication date: February 17, 2022
    Inventors: Wanchao CHI, Chong ZHANG, Yonggen LING, Wei LIU, Zhengyou ZHANG, Zejian YUAN, Ziyang SONG, Ziyi YIN
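    Illustrative sketch: per-window action classification over node sequences, assuming PyTorch. N consecutive frames of J nodes with (x, y) position information are scored by a classifier, giving one action category per window; the window length, node count, and the tiny MLP are illustrative assumptions.

      import torch
      import torch.nn as nn

      N, J = 16, 17                      # assumed window length and node count
      classifier = nn.Sequential(
          nn.Flatten(),                  # (batch, N, J, 2) -> (batch, N*J*2)
          nn.Linear(N * J * 2, 128),
          nn.ReLU(),
          nn.Linear(128, 10),            # assumed 10 action categories
      )

      def per_window_categories(node_seq):
          # node_seq: (num_frames, J, 2) tensor of node position information.
          windows = node_seq.unfold(0, N, 1)       # (num_windows, J, 2, N)
          windows = windows.permute(0, 3, 1, 2)    # (num_windows, N, J, 2)
          logits = classifier(windows)
          return logits.argmax(dim=1)              # one category per window

    A majority vote over these per-window categories would then yield the target interactive action made in the video data.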
  • Patent number: 11222440
    Abstract: Embodiments of this application disclose a position and pose determining method performed at an electronic device. The method includes: acquiring, by tracking a first feature point extracted from a marked image, position and pose parameters of a first image captured by a camera relative to the marked image; extracting a second feature point from the first image in a case that the first image fails to meet a feature point tracking condition; and acquiring, by tracking the first feature point and the second feature point, position and pose parameters of a second image captured by the camera relative to the marked image, and determining a position and a pose of the camera according to the position and pose parameters, the second image being an image captured by the camera after the first image.
    Type: Grant
    Filed: June 26, 2020
    Date of Patent: January 11, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xiangkai Lin, Liang Qiao, Fengming Zhu, Yu Zuo, Zeyu Yang, Yonggen Ling, Linchao Bao
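    Illustrative sketch: the replenishment logic in this abstract, assuming OpenCV. The first feature set is tracked frame to frame; when too few points survive (the tracking condition fails), a second set is extracted from the current image and both sets are tracked from then on. The threshold and names are assumptions.

      import cv2
      import numpy as np

      MIN_TRACKED = 50   # assumed feature point tracking condition

      def track_or_replenish(prev_gray, curr_gray, points):
          # points: Nx1x2 float32 feature points tracked so far.
          nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                    points, None)
          survivors = nxt[status.ravel() == 1]     # first feature points kept
          if len(survivors) < MIN_TRACKED:         # condition fails:
              fresh = cv2.goodFeaturesToTrack(curr_gray, 300, 0.01, 8)
              survivors = np.vstack([survivors, fresh])   # add second set
          return survivors.reshape(-1, 1, 2)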
  • Patent number: 11205282
    Abstract: This application discloses a repositioning method performed by an electronic device in a camera pose tracking process, belonging to the field of augmented reality (AR). The method includes: obtaining a current image acquired by the camera after an ith anchor image in a plurality of anchor images; obtaining an initial feature point and an initial pose parameter in a first anchor image in the plurality of anchor images in a case that the current image satisfies a repositioning condition; performing feature point tracking on the current image relative to the first anchor image, to obtain a target feature point; calculating a pose change amount of the camera from a first camera pose to a target camera pose according to the initial feature point and the target feature point; and performing repositioning according to the initial pose parameter and the pose change amount to obtain a target pose parameter.
    Type: Grant
    Filed: June 29, 2020
    Date of Patent: December 21, 2021
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xiangkai Lin, Yonggen Ling, Linchao Bao, Wei Liu
  • Patent number: 11189037
    Abstract: This application discloses a repositioning method performed by an electronic device in a camera pose tracking process, belonging to the field of augmented reality (AR). The method includes: obtaining a current image acquired by the camera after an ith anchor image in a plurality of anchor images; performing first repositioning on the current image relative to the first anchor image in the plurality of anchor images in a case that the current image satisfies a repositioning condition; selecting a target keyframe from a keyframe database according to hash index information; performing second repositioning on the current image relative to the target keyframe; and calculating a camera pose parameter of the camera during acquisition of the current image according to a positioning result of the first repositioning and a positioning result of the second repositioning. When different keyframes cover the surrounding area of the camera acquisition scene, repositioning is highly likely to succeed, which improves the success rate of the repositioning process.
    Type: Grant
    Filed: June 12, 2020
    Date of Patent: November 30, 2021
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xiangkai Lin, Yonggen Ling, Linchao Bao, Wei Liu
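    Illustrative sketch: hash-indexed keyframe selection, the retrieval step named in this abstract. Each keyframe is filed under a compact binary hash of a global image descriptor, and a query image retrieves the candidates in its bucket; the random-projection hashing here is an illustrative stand-in, not the patent's index.

      import numpy as np

      rng = np.random.default_rng(0)
      PLANES = rng.standard_normal((16, 128))  # 16-bit hash of 128-D descriptors

      def hash_key(descriptor):
          bits = (PLANES @ descriptor > 0).astype(int)
          return int("".join(map(str, bits)), 2)

      keyframe_db = {}                         # hash bucket -> list of keyframes

      def add_keyframe(descriptor, keyframe):
          keyframe_db.setdefault(hash_key(descriptor), []).append(keyframe)

      def select_target_keyframes(query_descriptor):
          # Candidates covering the same area tend to share a bucket.
          return keyframe_db.get(hash_key(query_descriptor), [])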
  • Publication number: 20210342643
    Abstract: A computer device extracts local features of sample images based on a first part of a convolutional neural network (CNN) model. The sample images comprise a plurality of images taken at the same place. The device aggregates the local features into feature vectors having a first dimensionality based on a second part of the CNN model. The device obtains compressed representation vectors of the feature vectors based on a third part of the CNN model. The compressed representation vectors have a second dimensionality less than the first dimensionality. The device trains the CNN model, and obtains a trained CNN model satisfying a preset condition in accordance with the training.
    Type: Application
    Filed: July 13, 2021
    Publication date: November 4, 2021
    Inventors: Dongdong Bai, Yonggen Ling, Wei Liu
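    Illustrative sketch: the three-part structure of this abstract, assuming PyTorch. A convolutional backbone extracts local features, a pooling stage aggregates them into a fixed-length vector of the first dimensionality, and a linear head compresses it to the smaller second dimensionality; the layer sizes are illustrative assumptions.

      import torch
      import torch.nn as nn

      class PlaceCNN(nn.Module):
          def __init__(self, agg_dim=512, compressed_dim=64):
              super().__init__()
              self.local = nn.Sequential(            # part 1: local features
                  nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(64, agg_dim, 3, padding=1), nn.ReLU(),
              )
              self.aggregate = nn.AdaptiveAvgPool2d(1)             # part 2
              self.compress = nn.Linear(agg_dim, compressed_dim)   # part 3

          def forward(self, images):                 # images: (B, 3, H, W)
              feats = self.local(images)             # (B, 512, H, W)
              vec = self.aggregate(feats).flatten(1) # (B, 512): first dim.
              return self.compress(vec)              # (B, 64): compressed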
  • Patent number: 11158083
    Abstract: Embodiments of this application disclose a position and attitude determining method. The method includes: acquiring, by tracking a feature point of a first marked image, position and attitude parameters of an image captured by a camera; using a previous image of a first image as a second marked image in response to the previous image meeting a feature point tracking condition and the first image failing to meet the condition; acquiring position and attitude parameters of the image captured by the camera relative to the second marked image; acquiring overall position and attitude parameters according to the position and attitude parameters of the image relative to the second marked image and the position and attitude parameters of each marked image relative to a previous marked image; and determining a position and an attitude of the camera according to the overall position and attitude parameters.
    Type: Grant
    Filed: June 30, 2020
    Date of Patent: October 26, 2021
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xiangkai Lin, Liang Qiao, Fengming Zhu, Yu Zuo, Zeyu Yang, Yonggen Ling, Linchao Bao
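    Illustrative sketch: the pose chaining described in this abstract, with 4x4 homogeneous matrices as an assumed representation. The camera's overall pose is the product of each marked image's pose relative to the previous marked image, times the pose tracked relative to the latest marked image; all names are assumptions.

      import numpy as np
      from functools import reduce

      def overall_pose(marker_chain, pose_wrt_latest_marker):
          # marker_chain: [T_marker1, T_marker2_wrt_1, ..., T_markerK_wrt_prev]
          chain = reduce(np.matmul, marker_chain, np.eye(4))
          return chain @ pose_wrt_latest_marker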
  • Patent number: 11145078
    Abstract: A depth information determining method for dual cameras is provided. A t-th left eye matching similarity from a left eye image captured by a first camera of the dual cameras to a right eye image captured by a second camera of the dual cameras is obtained. A t-th right eye matching similarity from the right eye image to the left eye image is obtained. The t-th left eye matching similarity and a (t-1)-th left eye attention map are processed with a neural network model, to obtain a t-th left eye disparity map. The t-th right eye matching similarity and a (t-1)-th right eye attention map are processed with the neural network model, to obtain a t-th right eye disparity map. First depth information is determined according to the t-th left eye disparity map. Second depth information is determined according to the t-th right eye disparity map.
    Type: Grant
    Filed: June 11, 2020
    Date of Patent: October 12, 2021
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Zequn Jie, Yonggen Ling, Wei Liu
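    Illustrative sketch: the disparity-to-depth step that closes this abstract, assuming a rectified stereo pair, where depth = focal length x baseline / disparity; the intrinsics are placeholder values.

      import numpy as np

      FOCAL_PX = 700.0     # assumed focal length of the cameras, in pixels
      BASELINE_M = 0.12    # assumed distance between the two cameras, in meters

      def depth_from_disparity(disparity_map):
          disp = np.asarray(disparity_map, dtype=np.float64)
          depth = np.full_like(disp, np.inf)  # zero disparity = infinitely far
          valid = disp > 0
          depth[valid] = FOCAL_PX * BASELINE_M / disp[valid]
          return depth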
  • Publication number: 20210286977
    Abstract: A computer application method for generating a three-dimensional (3D) face model is provided, performed by a computer device running a face model generation model, the method including: obtaining a two-dimensional (2D) face image as input to the face model generation model; extracting global features and local features of the 2D face image; obtaining a 3D face model parameter based on the global features and the local features; and outputting a 3D face model corresponding to the 2D face image based on the 3D face model parameter.
    Type: Application
    Filed: June 3, 2021
    Publication date: September 16, 2021
    Applicant: Tencent Technology (Shenzhen) Company Limited
    Inventors: Yajing CHEN, Yibing SONG, Yonggen LING, Linchao BAO, Wei LIU
  • Publication number: 20210272294
    Abstract: A method for determining motion information is provided to be executed by a computing device. The method includes: determining a first image and a second image, the first image and the second image each including an object; determining a first pixel region based on a target feature point on the object in the first image; determining a second pixel region, comprising a plurality of second pixel points, according to a pixel difference among a plurality of first pixel points in the first pixel region and further according to the target feature point; and obtaining motion information of the target feature point according to the plurality of second pixel points and the second image, the motion information being used for indicating changes in locations of the target feature point between the first image and the second image.
    Type: Application
    Filed: May 18, 2021
    Publication date: September 2, 2021
    Inventors: Yonggen LING, Shenghao ZHANG
  • Publication number: 20210247764
    Abstract: A method for controlling a UAV in an environment includes receiving first and second sensing signals from a vision sensor and a proximity sensor, respectively, coupled to the UAV. The first and second sensing signals include first and second depth information of the environment, respectively. The method further includes selecting the first and second sensing signals for generating first and second portions of an environmental map, respectively, based on a suitable criterion associated with distinct characteristics of various portions of the environment or distinct capabilities of the vision sensor and the proximity sensor, generating first and second sets of depth images for the first and second portions of the environmental map, respectively, based on the first and second sensing signals, respectively, combining the first and second sets of depth images to generate the environmental map; and effecting the UAV to navigate in the environment using the environmental map.
    Type: Application
    Filed: January 25, 2021
    Publication date: August 12, 2021
    Inventors: Ang LIU, Weiyu MO, Yonggen LING
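    Illustrative sketch: the per-portion selection criterion in this abstract, reduced to a confidence gate. Where the vision sensor's depth is judged reliable it is used; elsewhere the proximity sensor's reading fills in, and the two resulting sets of depth values are combined into one map. The confidence test is an assumed placeholder, not the patent's criterion.

      import numpy as np

      def combine_depth(vision_depth, vision_conf, proximity_depth,
                        conf_thresh=0.6):
          # vision_conf: per-cell reliability of the vision sensor, in [0, 1];
          # e.g. low in textureless or poorly lit portions of the environment.
          use_vision = vision_conf >= conf_thresh   # selection criterion
          return np.where(use_vision, vision_depth, proximity_depth)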
  • Publication number: 20210241521
    Abstract: A face image generation method includes: determining, according to a first face image, a three-dimensional morphable model (3DMM) corresponding to the first face image as a first model; determining, according to a reference element, a 3DMM corresponding to the reference element as a second model, the reference element representing a posture and/or an expression of a target face image; determining, according to the first model and the second model, an initial optical flow map corresponding to the first face image, and deforming the first face image according to the initial optical flow map to obtain an initial deformation map; obtaining, through a convolutional neural network, an optical flow increment map and a visibility probability map that correspond to the first face image; and generating the target face image according to the first face image, the initial optical flow map, the optical flow increment map, and the visibility probability map.
    Type: Application
    Filed: April 20, 2021
    Publication date: August 5, 2021
    Applicant: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xue Fei ZHE, Yonggen LING, Lin Chao BAO, Yi Bing SONG, Wei LIU
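    Illustrative sketch: the deformation step in this abstract, assuming OpenCV. The first face image is warped by an optical-flow field (here, whatever the 3DMM comparison produced, passed in as an (H, W, 2) array) to give the initial deformation map; the names are assumptions.

      import cv2
      import numpy as np

      def deform_with_flow(image, flow):
          # flow[..., 0] / flow[..., 1]: per-pixel x / y displacement.
          h, w = flow.shape[:2]
          grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
          map_x = (grid_x + flow[..., 0]).astype(np.float32)
          map_y = (grid_y + flow[..., 1]).astype(np.float32)
          return cv2.remap(image, map_x, map_y,
                           interpolation=cv2.INTER_LINEAR)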
  • Publication number: 20210183166
    Abstract: This application provides a method for configuring parameters of a three-dimensional face model. The method includes: obtaining a reference face image; identifying a key facial point on the reference face image to obtain key point coordinates as reference coordinates; and determining a recommended parameter set in a face parameter value space according to the reference coordinates. First projected coordinates are the coordinates of the key facial point obtained by projecting a three-dimensional face model corresponding to the recommended parameter set onto a coordinate system, and the recommended parameter set is such that the proximity of the first projected coordinates to the reference coordinates meets a preset condition.
    Type: Application
    Filed: February 26, 2021
    Publication date: June 17, 2021
    Inventors: Mu HU, Sirui GAO, Yonggen LING, Yitong WANG, Linchao BAO, Wei LIU
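    Illustrative sketch: the parameter search described above, with a random linear keypoint model standing in for a real 3DMM. Candidate parameter sets are projected to key-facial-point coordinates and scored by proximity to the reference coordinates; the random search, dimensions, and names are assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      MEAN = rng.standard_normal((68, 2))       # assumed 68 projected keypoints
      BASIS = rng.standard_normal((10, 68, 2))  # assumed 10-D parameter space

      def project_keypoints(params):
          # First projected coordinates for one candidate parameter set.
          return MEAN + np.tensordot(params, BASIS, axes=1)

      def recommended_parameter_set(reference_coords, trials=2000):
          best, best_err = None, np.inf
          for _ in range(trials):
              params = rng.standard_normal(10)
              err = np.linalg.norm(project_keypoints(params) - reference_coords)
              if err < best_err:          # proximity to the reference improves
                  best, best_err = params, err
          return best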
  • Publication number: 20210183044
    Abstract: An image processing method and apparatus, a computer-readable medium, and an electronic device are provided. The image processing method includes: respectively projecting, according to a plurality of view angle parameters corresponding to a plurality of view angles, a face model of a target object onto a plurality of face images of the target object acquired from the plurality of view angles, to determine correspondences between regions on the face model and regions on the face images; respectively extracting from the plurality of face images, based on the correspondences, images corresponding to a target region of the face model for which a texture image needs to be generated; and fusing the images that correspond to the target region and that are respectively extracted from the plurality of face images, to generate the texture image.
    Type: Application
    Filed: February 24, 2021
    Publication date: June 17, 2021
    Inventors: Xiangkai LIN, Linchao BAO, Yonggen LING, Yibing SONG, Wei LIU
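    Illustrative sketch: the final fusion step, blending the per-view crops of one face-model region into a single texture. The visibility-weighted average is an assumed blending scheme, not the patent's.

      import numpy as np

      def fuse_region(view_crops, view_weights):
          # view_crops: list of (H, W, 3) arrays cropped from each face image;
          # view_weights: one visibility weight per view.
          crops = np.stack([c.astype(np.float64) for c in view_crops])
          w = np.asarray(view_weights, dtype=np.float64).reshape(-1, 1, 1, 1)
          return (crops * w).sum(axis=0) / w.sum(axis=0)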
  • Publication number: 20210152751
    Abstract: A model training method includes: obtaining an image sample set and brief-prompt information; generating a content mask set according to the image sample set and the brief-prompt information; generating a to-be-trained image set according to the content mask set; obtaining, based on the image sample set and the to-be-trained image set, a predicted image set through a to-be-trained information synthesis model, the predicted image set comprising at least one predicted image, each predicted image corresponding to an image sample; and training, based on the predicted image set and the image sample set, the to-be-trained information synthesis model by using a target loss function, to obtain an information synthesis model.
    Type: Application
    Filed: December 1, 2020
    Publication date: May 20, 2021
    Inventors: Haozhi HUANG, Jiawei LI, Li SHEN, Yonggen LING, Wei LIU, Dong YU
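    Illustrative sketch: one plausible form of the target loss, assuming PyTorch. The to-be-trained information synthesis model is penalized where its prediction departs from the image sample, restricted to the content mask; the L1 form is an assumption.

      import torch

      def target_loss(predicted, sample, mask):
          # predicted, sample: (B, C, H, W); mask: broadcastable 0/1 weights.
          diff = (predicted - sample).abs() * mask
          return diff.sum() / mask.sum().clamp(min=1)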
  • Patent number: 10901419
    Abstract: A method for controlling an unmanned aerial vehicle (UAV) includes receiving first sensor data relative to a first coordinate system and second sensor data relative to a second coordinate system from a first sensor and a second sensor, respectively. The first and second sensor data includes first and second obstacle occupancy information indicative of relative locations of a first and a second sets of obstacles in reference to the UAV in the first and second coordinate systems, respectively. The first and second sets of obstacles have at least a subset of obstacles in common. The method further includes converting the first and second sensor data into a single coordinate system using sensor calibration data to generate an obstacle occupancy grid map based on the first and second obstacle occupancy information, and effecting the UAV to navigate using the obstacle occupancy grid map to perform obstacle avoidance.
    Type: Grant
    Filed: September 18, 2019
    Date of Patent: January 26, 2021
    Assignee: SZ DJI TECHNOLOGY CO., LTD.
    Inventors: Ang Liu, Weiyu Mo, Yonggen Ling
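    Illustrative sketch: the coordinate unification at the heart of this abstract. Obstacle points from each sensor are mapped into a single UAV-centric frame with the sensor calibration extrinsics, then rasterized into one occupancy grid; grid size, resolution, and the extrinsics are assumptions.

      import numpy as np

      RES = 0.5     # assumed meters per grid cell
      SIZE = 64     # assumed 64x64 cells centered on the UAV

      def occupancy_grid(pts_a, pts_b, T_a, T_b):
          # pts_*: Nx3 obstacle points in each sensor's own coordinate system.
          # T_*: 4x4 sensor-to-UAV transforms from sensor calibration data.
          grid = np.zeros((SIZE, SIZE), dtype=bool)
          for pts, T in ((pts_a, T_a), (pts_b, T_b)):
              homo = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous
              unified = (homo @ T.T)[:, :3]      # single coordinate system
              idx = np.floor(unified[:, :2] / RES).astype(int) + SIZE // 2
              keep = ((idx >= 0) & (idx < SIZE)).all(axis=1)
              grid[idx[keep, 0], idx[keep, 1]] = True   # occupied cells
          return grid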
  • Publication number: 20200388051
    Abstract: Embodiments of this application disclose a camera attitude tracking method and apparatus, a device, and a system in the field of augmented reality (AR). The method includes receiving, by a second device with a camera, an initial image and an initial attitude parameter that are transmitted by a first device; obtaining, by the second device, a second image acquired by the camera; obtaining, by the second device, a camera attitude variation of the second image relative to the initial image; and obtaining, by the second device, according to the initial attitude parameter and the camera attitude variation, a second camera attitude parameter, the second camera attitude parameter corresponding to the second image.
    Type: Application
    Filed: August 24, 2020
    Publication date: December 10, 2020
    Inventors: Xiangkai LIN, Yonggen LING, Linchao BAO, Xiaolong ZHU, Liang QIAO, Wei LIU
  • Publication number: 20200357136
    Abstract: A method for determining a pose of an image capturing device is performed at an electronic device. The electronic device acquires a plurality of image frames captured by the image capturing device, extracts a plurality of matching feature points from the plurality of image frames and determines first position information of each of the matching feature points in each of the plurality of image frames. After estimating second position information of each of the matching feature points in a current image frame in the plurality of image frames by using the first position information of each of the matching feature points extracted from a previous image frame in the plurality of image frames, the electronic device determines a pose of the image capturing device based on the first position information and the second position information of each of the matching feature points in the current image frame.
    Type: Application
    Filed: July 27, 2020
    Publication date: November 12, 2020
    Inventors: Liang Qiao, Xiangkai Lin, Linchao Bao, Yonggen Ling, Fengming Zhu