Patents by Inventor Yonggen Ling

Yonggen Ling has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220258356
    Abstract: This disclosure relates to a spatial calibration method, apparatus, and storage medium for a robot ontology coordinate system based on a visual perception device. The method includes: obtaining first transformation relationships; obtaining second transformation relationships; using a transformation relationship between a visual perception coordinate system and an ontology coordinate system as an unknown variable; and resolving the unknown variable based on an equivalence relationship between (a) a transformation relationship obtained according to the first transformation relationships and the unknown variable and (b) a transformation relationship obtained according to the second transformation relationships and the unknown variable, to obtain the transformation relationship between the visual perception coordinate system and the ontology coordinate system.
    Type: Application
    Filed: May 2, 2022
    Publication date: August 18, 2022
    Applicant: Tencent Technology (Shenzhen) Company Limited
    Inventors: Wanchao CHI, Le CHEN, Yonggen LING, Shenghao ZHANG, Yu ZHENG, Xinyang JIANG, Zhengyou ZHANG
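    Sketch: The resolution step above has the structure of classic hand-eye calibration: each observed motion yields A_i·X = X·B_i, where A_i comes from the first transformation relationships, B_i from the second, and X is the unknown visual-perception-to-ontology transform. The following is a minimal illustrative solver, not the patent's algorithm; the 6-DoF parameterization and least-squares formulation are assumptions.
      # Minimal solver for A_i @ X = X @ B_i over a set of motion pairs.
      import numpy as np
      from scipy.optimize import least_squares
      from scipy.spatial.transform import Rotation

      def to_mat(x):
          """6-vector (rotation vector + translation) -> 4x4 homogeneous matrix."""
          T = np.eye(4)
          T[:3, :3] = Rotation.from_rotvec(x[:3]).as_matrix()
          T[:3, 3] = x[3:]
          return T

      def solve_extrinsic(As, Bs):
          """Find X minimizing ||A_i @ X - X @ B_i|| over all motion pairs."""
          def residuals(x):
              X = to_mat(x)
              return np.concatenate([(A @ X - X @ B).ravel() for A, B in zip(As, Bs)])
          return to_mat(least_squares(residuals, np.zeros(6)).x)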
  • Publication number: 20220223182
    Abstract: A video sound-picture matching method includes: acquiring a voice sequence; acquiring a voice segment from the voice sequence; acquiring an initial position of a start-stop mark and a moving direction of the start-stop mark from an image sequence; determining an active segment according to the initial position of the start-stop mark, the moving direction of the start-stop mark, and the voice segment; and synthesizing the voice segment and the active segment to obtain a video segment. In the video synthesis process, the present disclosure uses start-stop marks to locate the positions of active segments in an image sequence and match those action-bearing segments with voice segments, so that the synthesized video segments better follow the natural behavior of a speaking character and look more authentic.
    Type: Application
    Filed: April 1, 2022
    Publication date: July 14, 2022
    Inventors: Yonggen LING, Haozhi HUANG, Li SHEN
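    Sketch: A hypothetical illustration of how a start-stop mark's initial position and moving direction could delimit an active segment whose length matches the voice segment; the frame indexing and mark semantics here are assumptions, not the patent's definitions.
      def find_active_segment(mark_pos, direction, voice_len_frames, num_frames):
          """direction: +1 if the mark moves forward (segment extends ahead of it),
          -1 if it moves backward (segment extends behind it)."""
          if direction > 0:
              start, end = mark_pos, min(mark_pos + voice_len_frames, num_frames)
          else:
              start, end = max(mark_pos - voice_len_frames, 0), mark_pos
          return start, end

      def synthesize(frames, voice_samples_per_frame, segment):
          """Naive sound-picture pairing: one slice of audio per active frame."""
          start, end = segment
          return [(frames[i], voice_samples_per_frame[i - start]) for i in range(start, end)]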
  • Patent number: 11380050
    Abstract: A face image generation method includes: determining, according to a first face image, a three dimensional morphable model (3DMM) corresponding to the first face image as a first model; determining, according to a reference element, a 3DMM corresponding to the reference element as a second model, the reference element representing a posture and/or an expression of a target face image; determining, according to the first model and the second model, an initial optical flow map corresponding to the first face image, and deforming the first face image according to the initial optical flow map to obtain an initial deformation map; obtaining, through a convolutional neural network, an optical flow increment map and a visibility probability map that correspond to the first face image; and generating the target face image according to the first face image, the initial optical flow map, the optical flow increment map, and the visibility probability map.
    Type: Grant
    Filed: April 20, 2021
    Date of Patent: July 5, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xue Fei Zhe, Yonggen Ling, Lin Chao Bao, Yi Bing Song, Wei Liu
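    Sketch: The final composition step can be read as: refine the flow (initial map plus predicted increment), warp the source face with it, and down-weight occluded pixels by the visibility probability. A minimal NumPy/OpenCV sketch that takes the network outputs as given; the blending rule is an assumption.
      import cv2
      import numpy as np

      def compose_target(src, init_flow, flow_inc, visibility):
          """src: HxWx3; init_flow, flow_inc: HxWx2; visibility: HxW in [0, 1]."""
          h, w = src.shape[:2]
          flow = init_flow + flow_inc                        # refined optical flow
          gx, gy = np.meshgrid(np.arange(w), np.arange(h))
          map_x = (gx + flow[..., 0]).astype(np.float32)
          map_y = (gy + flow[..., 1]).astype(np.float32)
          warped = cv2.remap(src, map_x, map_y, cv2.INTER_LINEAR)
          return warped.astype(np.float32) * visibility[..., None]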
  • Patent number: 11373384
    Abstract: This application provides a method for configuring parameters of a three-dimensional face model. The method includes: obtaining a reference face image; identifying a key facial point on the reference face image to obtain key point coordinates as reference coordinates; and determining a recommended parameter set in a face parameter value space according to the reference coordinates. First projected coordinates are obtained by projecting the key facial point of a three-dimensional face model corresponding to the recommended parameter set onto a coordinate system, and the proximity of the first projected coordinates to the reference coordinates meets a preset condition.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: June 28, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Mu Hu, Sirui Gao, Yonggen Ling, Yitong Wang, Linchao Bao, Wei Liu
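    Sketch: A hedged illustration of the recommendation criterion: search the face parameter value space for parameters whose projected key facial points land closest to the reference coordinates. `project_keypoints`, the initial guess x0, and the tolerance are hypothetical stand-ins.
      import numpy as np
      from scipy.optimize import minimize

      def recommend_parameters(reference_xy, project_keypoints, x0, tol=2.0):
          """project_keypoints: params -> (K, 2) projected key point coordinates."""
          def proximity(params):
              return float(np.linalg.norm(project_keypoints(params) - reference_xy))
          res = minimize(proximity, x0, method="Nelder-Mead")
          return res.x if proximity(res.x) <= tol else None  # preset condition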
  • Publication number: 20220180543
    Abstract: A method of depth map completion is described. A color map and a sparse depth map of a target scenario can be received. Resolutions of the color map and the sparse depth map are adjusted to generate n pairs of color maps and sparse depth maps of n different resolutions. The n pairs of color maps and sparse depth maps can be processed to generate n prediction result maps using a cascade hourglass network including n levels of hourglass networks. Each of the n pairs is input to a respective one of the n levels to generate a respective one of the n prediction result maps. The n prediction result maps each include a dense depth map of the same resolution as the corresponding pair. A final dense depth map of the target scenario can be generated according to the dense depth maps.
    Type: Application
    Filed: February 25, 2022
    Publication date: June 9, 2022
    Applicant: Tencent Technology (Shenzhen) Company Limited
    Inventors: Yonggen LING, Wanchao CHI, Chong ZHANG, Shenghao ZHANG, Zhengyou ZHANG, Zejian YUAN, Ang LI, Zidong CAO
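    Sketch: The cascade structure only, with each hourglass level replaced by a caller-supplied stand-in; the coarse-to-fine scaling and the averaging fusion at the end are assumptions, not the patent's fusion rule.
      import cv2
      import numpy as np

      def cascade_completion(color, sparse_depth, levels):
          """levels: list of n callables, each mapping (color, sparse_depth) at one
          resolution to a dense depth map of that same resolution (assumed)."""
          h, w = color.shape[:2]
          dense_maps = []
          for i, hourglass in enumerate(levels):
              scale = 1.0 / (2 ** (len(levels) - 1 - i))     # coarse (i=0) to fine
              size = (int(w * scale), int(h * scale))
              c = cv2.resize(color, size, interpolation=cv2.INTER_LINEAR)
              d = cv2.resize(sparse_depth, size, interpolation=cv2.INTER_NEAREST)
              dense_maps.append(hourglass(c, d))
          ups = [cv2.resize(m, (w, h), interpolation=cv2.INTER_LINEAR) for m in dense_maps]
          return np.mean(ups, axis=0)                        # final dense depth map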
  • Publication number: 20220143125
    Abstract: The present invention discloses Fuke Qianjin Capsules and a quality control method therefor. The capsules are made from Radix Et Caulis Flemingiae, Caulis Mahoniae, Herba Andrographis, Zanthoxylum dissitum Hemsl., Caulis Spatholobi, Radix Angelicae Sinensis, Radix Codonopsis, and Radix Rosa Laevigata as raw materials. Each Fuke Qianjin Capsule contains not less than 2.0 mg of Z-ligustilide, and the total amount of andrographolide and dehydroandrographolide is not less than 1.9 mg. A new standard for controlling the quality of Fuke Qianjin Capsules has been established through an analysis of their chemical ingredients; it adds content requirements for several core ingredients to the existing pharmacopoeia standards. Capsules made within these ranges show more consistent efficacy across batches, and the more core ingredients whose content is controlled, the more stable the drug's effect.
    Type: Application
    Filed: January 14, 2020
    Publication date: May 12, 2022
    Applicant: QIANJIN PHARMACEUTICAL CO., LTD.
    Inventors: Shun JIAN, Yun GONG, Peng ZHANG, Fujun LI, Yonggen LING, Juanjuan HE, Kanghua WANG, Xiuwei YANG
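    Sketch: The abstract's quantitative quality criteria expressed as a simple per-capsule compliance check; the parameter names are illustrative, amounts in mg.
      def meets_standard(z_ligustilide, andrographolide, dehydroandrographolide):
          """True if a capsule satisfies both content thresholds from the abstract."""
          return z_ligustilide >= 2.0 and (andrographolide + dehydroandrographolide) >= 1.9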
  • Patent number: 11321870
    Abstract: Embodiments of this application disclose a camera attitude tracking method and apparatus, a device, and a system in the field of augmented reality (AR). The method includes receiving, by a second device with a camera, an initial image and an initial attitude parameter that are transmitted by a first device; obtaining, by the second device, a second image acquired by the camera; obtaining, by the second device, a camera attitude variation of the second image relative to the initial image; and obtaining, by the second device, according to the initial attitude parameter and the camera attitude variation, a second camera attitude parameter, the second camera attitude parameter corresponding to the second image.
    Type: Grant
    Filed: August 24, 2020
    Date of Patent: May 3, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xiangkai Lin, Yonggen Ling, Linchao Bao, Xiaolong Zhu, Liang Qiao, Wei Liu
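    Sketch: The last step composes the initial attitude transmitted by the first device with the attitude variation tracked on the second device; a minimal sketch assuming 4x4 homogeneous-matrix pose conventions.
      import numpy as np

      def second_camera_attitude(T_initial, T_variation):
          """Second camera attitude parameter = initial attitude composed with the
          tracked camera attitude variation (both 4x4 homogeneous matrices)."""
          return T_initial @ T_variation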
  • Publication number: 20220092813
    Abstract: Embodiments of this application disclose a method, performed at an electronic device, for displaying a virtual character in a plurality of real-world images captured by a camera. The method includes: capturing an initial real-world image using the camera; simulating a display of the virtual character in the initial real-world image; capturing a subsequent real-world image using the camera after a movement of the camera; determining position and pose updates of the camera associated with the movement of the camera from tracking one or more feature points in the initial real-world image and the subsequent real-world image; and adjusting the display of the virtual character in the subsequent real-world image in accordance with the position and pose updates of the camera associated with the movement of the camera.
    Type: Application
    Filed: December 6, 2021
    Publication date: March 24, 2022
    Inventors: Xiangkai LIN, Liang QIAO, Fengming ZHU, Yu ZUO, Zeyu YANG, Yonggen LING, Linchao BAO
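    Sketch: A minimal stand-in for the feature-point tracking that drives the position and pose updates, using standard OpenCV corner detection and pyramidal Lucas-Kanade optical flow rather than the patent's tracker.
      import cv2

      def track_points(prev_gray, next_gray):
          """Returns matched point pairs between two consecutive grayscale frames."""
          pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                        qualityLevel=0.01, minDistance=7)
          nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
          good = status.ravel() == 1
          return pts[good].reshape(-1, 2), nxt[good].reshape(-1, 2)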
  • Patent number: 11276183
    Abstract: A relocalization method includes: obtaining, by a front-end program run on a device, a target image acquired after an ith marker image in a plurality of marker images; determining, by the front-end program, the target image as an (i+1)th marker image when the target image satisfies a relocalization condition, and transmitting the target image to a back-end program; and performing, by the front-end program, feature point tracking, relative to the target image, on a current image acquired after the target image, to obtain a first pose parameter. The back-end program performs relocalization on the target image to obtain a second pose parameter, and transmits the second pose parameter to the front-end program. The front-end program then calculates a current pose parameter of the current image according to the first pose parameter and the second pose parameter.
    Type: Grant
    Filed: July 1, 2020
    Date of Patent: March 15, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xiangkai Lin, Yonggen Ling, Linchao Bao, Wei Liu
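    Sketch: The front-end/back-end split can be illustrated with a background thread: the front end keeps tracking against the target image (first pose parameter) while relocalization of that image (second pose parameter) runs asynchronously; the threading model and function names are assumptions.
      import threading

      def start_relocalization(target_image, relocalize, on_done):
          """relocalize: slow back-end routine, target_image -> second pose parameter;
          on_done: front-end callback that fuses it with the first pose parameter."""
          threading.Thread(target=lambda: on_done(relocalize(target_image)),
                           daemon=True).start()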
  • Patent number: 11270460
    Abstract: A method for determining a pose of an image capturing device is performed at an electronic device. The electronic device acquires a plurality of image frames captured by the image capturing device, extracts a plurality of matching feature points from the plurality of image frames and determines first position information of each of the matching feature points in each of the plurality of image frames. After estimating second position information of each of the matching feature points in a current image frame in the plurality of image frames by using the first position information of each of the matching feature points extracted from a previous image frame in the plurality of image frames, the electronic device determines a pose of the image capturing device based on the first position information and the second position information of each of the matching feature points in the current image frame.
    Type: Grant
    Filed: July 27, 2020
    Date of Patent: March 8, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Liang Qiao, Xiangkai Lin, Linchao Bao, Yonggen Ling, Fengming Zhu
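    Sketch: A standard two-view pose recovery from the first (observed) and second (estimated) positions of the matching feature points, standing in for the patent's estimator; known intrinsics K and OpenCV's essential-matrix routines are assumptions.
      import cv2

      def pose_from_matches(pts_prev, pts_cur, K):
          """pts_prev, pts_cur: Nx2 float32 matched points; K: 3x3 intrinsics."""
          E, inliers = cv2.findEssentialMat(pts_prev, pts_cur, K,
                                            method=cv2.RANSAC, threshold=1.0)
          _, R, t, _ = cv2.recoverPose(E, pts_prev, pts_cur, K, mask=inliers)
          return R, t   # rotation and unit-scale translation of the camera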
  • Publication number: 20220051061
    Abstract: An artificial intelligence-based action recognition method includes: determining, according to video data comprising an interactive object, node sequence information corresponding to video frames in the video data, the node sequence information of each video frame including position information of nodes in a node sequence, the nodes in the node sequence being nodes of the interactive object that are moved to implement a corresponding interactive action; determining action categories corresponding to the video frames in the video data, including: determining, according to the node sequence information corresponding to N consecutive video frames in the video data, action categories respectively corresponding to the N consecutive video frames; and determining, according to the action categories corresponding to the video frames in the video data, a target interactive action made by the interactive object in the video data.
    Type: Application
    Filed: November 1, 2021
    Publication date: February 17, 2022
    Inventors: Wanchao CHI, Chong ZHANG, Yonggen LING, Wei LIU, Zhengyou ZHANG, Zejian YUAN, Ziyang SONG, Ziyi YIN
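    Sketch: The N-consecutive-frame step can be pictured as a sliding window over per-frame node sequences, with a reduction over the per-window categories; the window classifier is a hypothetical stand-in and majority voting is an assumed reduction.
      from collections import Counter

      def recognize(node_seqs, classify_window, N):
          """node_seqs: length-T list of per-frame node position arrays;
          classify_window: maps N consecutive frames to one action category (assumed)."""
          per_window = [classify_window(node_seqs[t:t + N])
                        for t in range(len(node_seqs) - N + 1)]
          return Counter(per_window).most_common(1)[0][0]   # target interactive action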
  • Patent number: 11222440
    Abstract: Embodiments of this application disclose a position and pose determining method performed at an electronic device. The method includes: acquiring, by tracking a first feature point extracted from a marked image, position and pose parameters of a first image captured by a camera relative to the marked image; extracting a second feature point from the first image in a case that the first image fails to meet a feature point tracking condition; and acquiring, by tracking the first feature point and the second feature point, position and pose parameters of a second image captured by the camera relative to the marked image, and determining a position and a pose of the camera according to the position and pose parameters, the second image being an image captured by the camera after the first image.
    Type: Grant
    Filed: June 26, 2020
    Date of Patent: January 11, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xiangkai Lin, Liang Qiao, Fengming Zhu, Yu Zuo, Zeyu Yang, Yonggen Ling, Linchao Bao
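    Sketch: The fallback described above, illustrated with OpenCV: keep tracking the first feature set, and extract a second set when too few tracks survive; the survival threshold and detector parameters are assumptions.
      import cv2
      import numpy as np

      MIN_TRACKED = 50   # assumed feature point tracking condition

      def update_tracks(prev_gray, cur_gray, pts):
          """pts: Nx1x2 float32 tracked points; returns the updated point set."""
          nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
          tracked = nxt[status.ravel() == 1].reshape(-1, 1, 2)
          if len(tracked) < MIN_TRACKED:                     # condition not met:
              extra = cv2.goodFeaturesToTrack(cur_gray, 200, 0.01, 7)
              if extra is not None:                          # add a second feature set
                  tracked = np.vstack([tracked, extra])
          return tracked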
  • Patent number: 11205282
    Abstract: This application discloses a repositioning method performed by an electronic device in a camera pose tracking process, belonging to the field of augmented reality (AR). The method includes: obtaining a current image acquired by the camera after an ith anchor image in a plurality of anchor images; obtaining an initial feature point and an initial pose parameter in a first anchor image in the plurality of anchor images in a case that the current image satisfies a repositioning condition; performing feature point tracking on the current image relative to the first anchor image, to obtain a target feature point; calculating a pose change amount of a camera from a first camera pose to a target camera pose according to the initial feature point and the target feature point; and performing repositioning according to the initial pose parameter and the pose change amount to obtain a target pose parameter.
    Type: Grant
    Filed: June 29, 2020
    Date of Patent: December 21, 2021
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xiangkai Lin, Yonggen Ling, Linchao Bao, Wei Liu
  • Patent number: 11189037
    Abstract: This application discloses a repositioning method performed by an electronic device in a camera pose tracking process, belonging to the field of augmented reality (AR). The method includes: obtaining a current image acquired by the camera after an ith anchor image in a plurality of anchor images; selecting a target keyframe from a keyframe database according to Hash index information in a case that the current image satisfies a repositioning condition; performing second repositioning on the current image relative to the target keyframe; and calculating a camera pose parameter of a camera during acquisition of the current image according to a positioning result of the first repositioning and a positioning result of the second repositioning. In a case that there are different keyframes covering a surrounding area of a camera acquisition scene, it is highly probable that repositioning can succeed, thereby improving the success probability of a repositioning process.
    Type: Grant
    Filed: June 12, 2020
    Date of Patent: November 30, 2021
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xiangkai Lin, Yonggen Ling, Linchao Bao, Wei Liu
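    Sketch: One way Hash index information can narrow a keyframe search is to bucket binary descriptors by their high bits, so a repositioning query touches only a few candidate keyframes; this scheme is illustrative only, not the patent's index.
      from collections import defaultdict

      class KeyframeIndex:
          """Buckets keyframes by the top bits of an integer descriptor (illustrative)."""
          def __init__(self, shift=16):
              self.shift = shift                  # keep the descriptor's top bits
              self.buckets = defaultdict(list)

          def add(self, keyframe_id, descriptor):  # descriptor: e.g. a 32-bit int
              self.buckets[descriptor >> self.shift].append(keyframe_id)

          def candidates(self, descriptor):
              return self.buckets[descriptor >> self.shift]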
  • Publication number: 20210342643
    Abstract: A computer device extracts local features of sample images based on a first part of a convolutional neural network (CNN) model. The sample images comprise a plurality of images taken at the same place. The device aggregates the local features into feature vectors having a first dimensionality based on a second part of the CNN model. The device obtains compressed representation vectors of the feature vectors based on a third part of the CNN model. The compressed representation vectors have a second dimensionality less than the first dimensionality. The device trains the CNN model, and obtains a trained CNN model satisfying a preset condition in accordance with the training.
    Type: Application
    Filed: July 13, 2021
    Publication date: November 4, 2021
    Inventors: Dongdong Bai, Yonggen Ling, Wei Liu
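    Sketch: The three stages reduced to linear stand-ins: local features are aggregated into one first-dimensionality vector, then compressed to the lower second dimensionality; the learned CNN parts are replaced by sum-pooling and a fixed projection matrix here.
      import numpy as np

      def aggregate(local_feats):
          """(N, D) local features -> normalized (D,) feature vector."""
          v = local_feats.sum(axis=0)
          return v / (np.linalg.norm(v) + 1e-12)

      def compress(vec, W):
          """W: (D, d) projection with d < D (e.g. a PCA basis) -> (d,) vector."""
          return vec @ W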
  • Patent number: 11158083
    Abstract: Embodiments of this application disclose a position and attitude determining method. The method includes acquiring, by tracking a feature point of a first marked image, position and attitude parameters of an image captured by a camera; using a previous image of a first image as a second marked image in response to the previous image of the first image meeting a feature point tracking condition and the first image failing to meet the feature point tracking condition; acquiring, position and attitude parameters of the image captured by the camera relative to the second marked image; acquiring position and attitude parameters according to the position and attitude parameters of the image relative to the second marked image and position and attitude parameters of each marked image relative to a previous marked image; and determining a position and an attitude of the camera according to the position and attitude parameters.
    Type: Grant
    Filed: June 30, 2020
    Date of Patent: October 26, 2021
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xiangkai Lin, Liang Qiao, Fengming Zhu, Yu Zuo, Zeyu Yang, Yonggen Ling, Linchao Bao
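    Sketch: The accumulation named in the abstract: chain each marked image's pose relative to the previous marked image, then apply the current image's pose relative to the latest marked image; 4x4 homogeneous-matrix conventions are assumed.
      import numpy as np

      def camera_pose(relative_marker_poses, pose_wrt_last_marker):
          """relative_marker_poses: list of 4x4 transforms, each marked image
          relative to its predecessor; pose_wrt_last_marker: 4x4."""
          T = np.eye(4)
          for T_rel in relative_marker_poses:
              T = T @ T_rel
          return T @ pose_wrt_last_marker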
  • Patent number: 11145078
    Abstract: A depth information determining method for dual cameras is provided. A t-th left eye matching similarity from a left eye image captured by a first camera of the dual cameras to a right eye image captured by a second camera of the dual cameras is obtained. A t-th right eye matching similarity from the right eye image to the left eye image is obtained. The t-th left eye matching similarity and a (t−1)-th left eye attention map are processed with a neural network model, to obtain a t-th left eye disparity map. The t-th right eye matching similarity and a (t−1)-th right eye attention map are processed with the neural network model, to obtain a t-th right eye disparity map. First depth information is determined according to the t-th left eye disparity map. Second depth information is determined according to the t-th right eye disparity map.
    Type: Grant
    Filed: June 11, 2020
    Date of Patent: October 12, 2021
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Zequn Jie, Yonggen Ling, Wei Liu
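    Sketch: The recurrence only: at step t, the model consumes the current matching similarity and the previous step's attention map for each eye and emits a disparity map plus the next attention map; the model is a hypothetical callable and the loop count is an assumption.
      def refine_disparity(model, left_sim, right_sim, steps):
          """model: (similarity, prev_attention) -> (disparity, attention); steps >= 1."""
          left_att = right_att = None          # no attention map at t = 0
          for _ in range(steps):
              left_disp, left_att = model(left_sim, left_att)
              right_disp, right_att = model(right_sim, right_att)
          return left_disp, right_disp         # depth follows from each disparity map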
  • Publication number: 20210286977
    Abstract: A computer application method for generating a three-dimensional (3D) face model is provided, performed by a face model generation model running on a computer device, the method including: obtaining a two-dimensional (2D) face image as input to the face model generation model; extracting global features and local features of the 2D face image; obtaining a 3D face model parameter based on the global features and the local features; and outputting a 3D face model corresponding to the 2D face image based on the 3D face model parameter.
    Type: Application
    Filed: June 3, 2021
    Publication date: September 16, 2021
    Applicant: Tencent Technology (Shenzhen) Company Limited
    Inventors: Yajing CHEN, Yibing SONG, Yonggen LING, Linchao BAO, Wei LIU
  • Publication number: 20210272294
    Abstract: A method for determining motion information is provided to be executed by a computing device. The method includes: determining a first image and a second image, the first image and the second image each including an object; determining a first pixel region based on a target feature point on the object in the first image; determining a second pixel region according to a pixel difference among a plurality of first pixel points in the first pixel region and further according to the target feature point; and obtaining motion information of the target feature point according to the plurality of second pixel points and the second image, the motion information being used for indicating changes in locations of the target feature point in the first image and the second image.
    Type: Application
    Filed: May 18, 2021
    Publication date: September 2, 2021
    Inventors: Yonggen LING, Shenghao ZHANG
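    Sketch: A hedged reading of the region selection: within a patch around the target feature point, keep the pixels with the largest intensity differences, since they constrain motion estimation best; the scoring rule and patch size are assumptions.
      import numpy as np

      def select_second_pixels(image, pt, half=8, keep=64):
          """pt: (row, col) feature point at least `half` pixels from the border;
          returns (keep, 2) image coordinates of the selected second pixel points."""
          y, x = pt
          patch = image[y - half:y + half + 1, x - half:x + half + 1].astype(np.float32)
          diff = np.abs(patch - patch.mean())                # pixel difference score
          ys, xs = np.unravel_index(np.argsort(diff, axis=None)[-keep:], patch.shape)
          return np.stack([ys + y - half, xs + x - half], axis=1)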
  • Publication number: 20210247764
    Abstract: A method for controlling a UAV in an environment includes receiving first and second sensing signals from a vision sensor and a proximity sensor, respectively, coupled to the UAV. The first and second sensing signals include first and second depth information of the environment, respectively. The method further includes: selecting the first and second sensing signals for generating first and second portions of an environmental map, respectively, based on a criterion associated with distinct characteristics of various portions of the environment or distinct capabilities of the vision sensor and the proximity sensor; generating first and second sets of depth images for the two portions of the environmental map based on the respective sensing signals; combining the two sets of depth images to generate the environmental map; and causing the UAV to navigate in the environment using the environmental map.
    Type: Application
    Filed: January 25, 2021
    Publication date: August 12, 2021
    Inventors: Ang LIU, Weiyu MO, Yonggen LING
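    Sketch: The select-and-combine idea reduced to a per-pixel rule: prefer the vision sensor's depth where it is valid and within its reliable range, and fall back to the proximity sensor elsewhere; this particular criterion is an assumption, not the patent's.
      import numpy as np

      def fuse_depth(vision_depth, proximity_depth, vision_max_range=10.0):
          """Both inputs are aligned HxW depth maps; NaN marks invalid vision pixels."""
          vision_ok = np.isfinite(vision_depth) & (vision_depth < vision_max_range)
          return np.where(vision_ok, vision_depth, proximity_depth)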