Patents by Inventor Yonggen Ling

Yonggen Ling has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11978219
    Abstract: A method for determining motion information is provided to be executed by a computing device. The method includes: determining a first image and a second image, the first image and the second image each including an object; determining a first pixel region based on a target feature point on the object in the first image; determining a second pixel region, comprising a plurality of second pixel points, according to a pixel difference among a plurality of first pixel points in the first pixel region and further according to the target feature point; and obtaining motion information of the target feature point according to the plurality of second pixel points and the second image, the motion information indicating changes in the location of the target feature point between the first image and the second image.
    Type: Grant
    Filed: May 18, 2021
    Date of Patent: May 7, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Yonggen Ling, Shenghao Zhang
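The region-based matching described in the abstract can be sketched as a minimal block-matching search: compare a pixel region around the feature point in the first image against shifted regions in the second image and keep the best match. The window sizes, the sum-of-squared-differences criterion, and all names below are illustrative assumptions, not the patented method.

```python
import numpy as np

def estimate_motion(img1, img2, pt, patch=3, search=5):
    """Estimate the motion of a feature point between two images by
    comparing the pixel region around the point (sum of squared
    differences) against shifted regions in the second image."""
    y, x = pt
    ref = img1[y - patch:y + patch + 1, x - patch:x + patch + 1].astype(float)
    best_ssd, best_dxy = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = img2[y + dy - patch:y + dy + patch + 1,
                        x + dx - patch:x + dx + patch + 1].astype(float)
            ssd = float(np.sum((ref - cand) ** 2))
            if best_ssd is None or ssd < best_ssd:
                best_ssd, best_dxy = ssd, (dy, dx)
    return best_dxy

# Synthetic example: a bright square shifted by (2, 3) pixels.
img1 = np.zeros((40, 40))
img1[10:16, 10:16] = 1.0
img2 = np.zeros((40, 40))
img2[12:18, 13:19] = 1.0
motion = estimate_motion(img1, img2, (12, 12))  # → (2, 3)
```

A real system would refine this with sub-pixel interpolation or a pyramidal Lucas-Kanade tracker rather than an exhaustive search.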
  • Patent number: 11972778
    Abstract: A video sound-picture matching method includes: acquiring a voice sequence; acquiring a voice segment from the voice sequence; acquiring an initial position of a start-stop mark and a moving direction of the start-stop mark from an image sequence; determining an active segment according to the initial position of the start-stop mark, the moving direction of the start-stop mark, and the voice segment; and synthesizing the voice segment and the active segment to obtain a video segment. In the video synthesizing process, the present disclosure uses start-stop marks to locate the positions of active segments in an image sequence, matching active segments that contain actions with voice segments, so that the synthesized video segments better follow the natural behavior of a speaking character and appear more authentic.
    Type: Grant
    Filed: April 1, 2022
    Date of Patent: April 30, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Yonggen Ling, Haozhi Huang, Li Shen
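One way to picture the start-stop-mark step is as follows: walk through the image sequence from the mark's initial position, in its moving direction, for as long as the voice segment lasts. The frame rate, function names, and the clamping rule below are assumptions for illustration only.

```python
FPS = 25  # assumed image-sequence frame rate

def active_segment(start_frame, direction, voice_duration_s, num_frames):
    """Walk from the start-stop mark in its moving direction for as many
    frames as the voice segment spans, clamped to the sequence bounds."""
    span = round(voice_duration_s * FPS)
    end_frame = start_frame + direction * span
    lo, hi = sorted((start_frame, end_frame))
    return max(lo, 0), min(hi, num_frames - 1)

# A 2-second voice segment paired with frames walked forward from frame 100.
seg = active_segment(start_frame=100, direction=1,
                     voice_duration_s=2.0, num_frames=500)  # → (100, 150)
```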
  • Patent number: 11961325
    Abstract: An image processing method and apparatus, a computer-readable medium, and an electronic device are provided. The image processing method includes: respectively projecting, according to a plurality of view angle parameters corresponding to a plurality of view angles, a face model of a target object onto a plurality of face images of the target object acquired from the plurality of view angles, to determine correspondences between regions on the face model and regions on the face images; respectively extracting, based on the correspondences and a target region of the face model for which a texture image is to be generated, images corresponding to the target region from the plurality of face images; and fusing the images that correspond to the target region and that are respectively extracted from the plurality of face images, to generate the texture image.
    Type: Grant
    Filed: February 24, 2021
    Date of Patent: April 16, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xiangkai Lin, Linchao Bao, Yonggen Ling, Yibing Song, Wei Liu
  • Patent number: 11914369
    Abstract: A method for controlling a UAV in an environment includes receiving first and second sensing signals from a vision sensor and a proximity sensor, respectively, coupled to the UAV. The first and second sensing signals include first and second depth information of the environment, respectively. The method further includes selecting the first and second sensing signals for generating first and second portions of an environmental map, respectively, based on a suitable criterion associated with distinct characteristics of various portions of the environment or distinct capabilities of the vision sensor and the proximity sensor, generating first and second sets of depth images for the first and second portions of the environmental map, respectively, based on the first and second sensing signals, respectively, combining the first and second sets of depth images to generate the environmental map; and effecting the UAV to navigate in the environment using the environmental map.
    Type: Grant
    Filed: January 25, 2021
    Date of Patent: February 27, 2024
    Assignee: SZ DJI TECHNOLOGY CO., LTD.
    Inventors: Ang Liu, Weiyu Mo, Yonggen Ling
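The per-portion sensor selection above can be sketched as a per-pixel fusion rule: trust the proximity sensor at close range (where it is typically more reliable) and the vision sensor farther out, falling back to whichever source has a valid reading. The 1.5 m crossover threshold and the NaN-for-invalid convention are illustrative assumptions, not values from the patent.

```python
import numpy as np

NEAR_LIMIT = 1.5  # metres; assumed crossover range between the two sensors

def fuse_depth(vision_depth, proximity_depth):
    """Select proximity-sensor depth at close range, vision depth
    elsewhere, with NaN (invalid) entries filled from the other source."""
    fused = np.where(proximity_depth < NEAR_LIMIT, proximity_depth, vision_depth)
    fused = np.where(np.isnan(fused), vision_depth, fused)
    fused = np.where(np.isnan(fused), proximity_depth, fused)
    return fused

vision = np.array([[np.nan, 3.0], [2.0, 4.0]])
proximity = np.array([[0.5, 2.0], [0.8, np.nan]])
fused = fuse_depth(vision, proximity)  # → [[0.5, 3.0], [0.8, 4.0]]
```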
  • Patent number: 11798190
    Abstract: Embodiments of this application disclose a method, performed at an electronic device, for displaying a virtual character in a plurality of real-world images captured by a camera. The method includes: capturing an initial real-world image using the camera; simulating a display of the virtual character in the initial real-world image; capturing a subsequent real-world image using the camera after a movement of the camera; determining position and pose updates of the camera associated with the movement of the camera by tracking one or more feature points in the initial real-world image and the subsequent real-world image; and adjusting the display of the virtual character in the subsequent real-world image in accordance with the position and pose updates of the camera associated with the movement of the camera.
    Type: Grant
    Filed: December 6, 2021
    Date of Patent: October 24, 2023
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xiangkai Lin, Liang Qiao, Fengming Zhu, Yu Zuo, Zeyu Yang, Yonggen Ling, Linchao Bao
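The adjustment step amounts to keeping the character's world-space anchor fixed and re-projecting it each time the camera pose changes. The sketch below shows this with a pinhole model; the intrinsics and poses are illustrative values, not from the patent.

```python
import numpy as np

# Assumed pinhole intrinsics (focal length 500 px, principal point 320, 240).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])

def project(anchor_world, R, t):
    """Project a 3-D world point into pixel coordinates for pose (R, t),
    where p_cam = R @ p_world + t."""
    p_cam = R @ anchor_world + t
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

anchor = np.array([0.0, 0.0, 2.0])  # character anchored 2 m in front
uv0 = project(anchor, np.eye(3), np.zeros(3))            # → (320, 240)
# Camera centre moves 0.1 m left, so the anchor shifts right in the image.
uv1 = project(anchor, np.eye(3), np.array([0.1, 0.0, 0.0]))  # → (345, 240)
```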
  • Patent number: 11636613
    Abstract: A computer application method for generating a three-dimensional (3D) face model is provided, performed by a face model generation model running on a computer device, the method including: obtaining a two-dimensional (2D) face image as input to the face model generation model; extracting global features and local features of the 2D face image; obtaining a 3D face model parameter based on the global features and the local features; and outputting a 3D face model corresponding to the 2D face image based on the 3D face model parameter.
    Type: Grant
    Filed: June 3, 2021
    Date of Patent: April 25, 2023
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventors: Yajing Chen, Yibing Song, Yonggen Ling, Linchao Bao, Wei Liu
  • Publication number: 20230076589
    Abstract: A method for controlling motion of a legged robot includes determining one or more candidate landing points for each foot of the robot. The method further includes determining a first correlation between a center of mass position change parameter, candidate landing points, and foot contact force. The method further includes determining, under a constraint condition set and based on the first correlation, a target center of mass position change parameter, a target step order, and a target landing point for each foot selected among the one or more candidate landing points for the respective foot, the constraint condition set constraining a step order. The method further includes controlling, according to the target center of mass position change parameter, the target step order, and the target landing point for each foot, motion of the legged robot in a preset period.
    Type: Application
    Filed: November 15, 2022
    Publication date: March 9, 2023
    Applicant: Tencent Technology (Shenzhen) Company Limited
    Inventors: Yu ZHENG, Xinyang JIANG, Wanchao CHI, Yonggen LING, Shenghao ZHANG, Zhengyou ZHANG
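A toy, brute-force reading of the selection problem: choose one landing point per foot and a step order that minimize a cost, subject to a constraint on the step order. The candidates, cost function, and constraint below are invented for illustration; the patent formulates a much richer optimization over center-of-mass motion and contact forces.

```python
# Candidate landing points (x, y) per foot and admissible step orders.
candidates = {
    "front": [(0.30, 0.1), (0.35, 0.1)],
    "rear":  [(0.00, 0.1), (0.05, 0.1)],
}
orders = [("front", "rear"), ("rear", "front")]

def cost(points, order):
    # Prefer a long stride; penalize stepping the front foot first
    # (an assumed stability preference, purely illustrative).
    stride = abs(points["front"][0] - points["rear"][0])
    penalty = 0.1 if order[0] == "front" else 0.0
    return -stride + penalty

best_cost, best_plan = float("inf"), None
for f in candidates["front"]:
    for r in candidates["rear"]:
        for order in orders:
            if order[0] != "rear":   # constraint set: rear foot steps first
                continue
            plan = ({"front": f, "rear": r}, order)
            c = cost(*plan)
            if c < best_cost:
                best_cost, best_plan = c, plan
# best_plan picks the widest stride with the rear foot stepping first.
```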
  • Publication number: 20230030054
    Abstract: A method for controlling motion of a legged robot includes: determining, according to state data of the legged robot at a start moment in a preset period, a candidate landing point of each foot in the preset period; determining, according to the state data at the start moment and the candidate landing point of each foot, a first correlation between a centroid position change parameter, a step duty ratio, a candidate landing point, and a foot contact force; determining, under a constraint of a constraint condition set, a target centroid position change parameter, a target step duty ratio, and a target landing point satisfying the first correlation; and controlling, according to the target centroid position change parameter, the target step duty ratio, and the target landing point, motion of the legged robot in the preset period.
    Type: Application
    Filed: September 27, 2022
    Publication date: February 2, 2023
    Inventors: Yu ZHENG, Xinyang JIANG, Wanchao CHI, Yonggen LING, Shenghao ZHANG, Zhengyou ZHANG
  • Publication number: 20230015214
    Abstract: This application relates to a planar contour recognition method and apparatus, a computer device, and a storage medium. The method includes obtaining a target frame image collected from a target environment; fitting edge points of an object plane in the target frame image and edge points of a corresponding object plane in a previous frame image to obtain a fitting graph, the previous frame image being collected from the target environment before the target frame image; deleting, from the fitting graph, edge points that do not appear on the object plane of the previous frame image; and recognizing the contour constructed by the remaining edge points in the fitting graph as a planar contour.
    Type: Application
    Filed: September 29, 2022
    Publication date: January 19, 2023
    Inventors: Shenghao ZHANG, Yonggen LING, Wanchao CHI, Yu ZHENG, Xinyang JIANG
  • Publication number: 20220415064
    Abstract: The present disclosure provides an image processing method and apparatus, an electronic device, and a computer-readable storage medium. The method includes: obtaining a first three-dimensional image of a target object in a three-dimensional coordinate system; determining a target plane of the target object in the first three-dimensional image, the target plane comprising target three-dimensional points; projecting the target three-dimensional points to a two-dimensional coordinate system defined on the target plane, to obtain target two-dimensional points; determining a target polygon and a minimum circumscribed target graphic of the target polygon according to the target two-dimensional points; and recognizing the minimum circumscribed target graphic as a first target graphic of the target object in the first three-dimensional image.
    Type: Application
    Filed: September 1, 2022
    Publication date: December 29, 2022
    Inventors: Shenghao ZHANG, Yonggen Ling, Wanchao Chi, Yu Zheng, Xinyang Jiang
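The projection step above (mapping coplanar 3-D points into a 2-D coordinate system on the plane) can be sketched by building an orthonormal basis on the plane from its normal. The helper names and the basis-seeding trick are illustrative, not from the patent.

```python
import numpy as np

def plane_coordinates(points, normal, origin):
    """Express coplanar 3-D points in a 2-D orthonormal basis (u, v)
    lying on the plane with the given normal, relative to origin."""
    n = normal / np.linalg.norm(normal)
    # Pick any vector not parallel to the normal to seed the basis.
    seed = np.array([1.0, 0.0, 0.0])
    if abs(n @ seed) > 0.9:
        seed = np.array([0.0, 1.0, 0.0])
    u = seed - (seed @ n) * n
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    d = points - origin
    return np.stack([d @ u, d @ v], axis=1)

# Points on the z = 1 plane reduce to their (x, y) offsets from the origin.
pts = np.array([[0.0, 0.0, 1.0], [2.0, 0.0, 1.0], [2.0, 3.0, 1.0]])
uv = plane_coordinates(pts, normal=np.array([0.0, 0.0, 1.0]), origin=pts[0])
# → [[0, 0], [2, 0], [2, 3]]
```

From the resulting 2-D points, a convex hull and a minimum circumscribed rectangle or circle can be computed with standard computational-geometry routines.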
  • Publication number: 20220414910
    Abstract: A scene contour recognition method is provided. In the method, a plurality of scene images of an environment is obtained. Three-dimensional information of a target plane in the plurality of scene images is determined based on depth information for each of the plurality of scene images. The target plane corresponds to a target object in the plurality of scene images. A three-dimensional contour corresponding to the target object is generated by fusing the target plane in each of the plurality of scene images based on the three-dimensional information of the target plane in each of the plurality of scene images. A contour diagram of the target object is generated by projecting the three-dimensional contour onto a two-dimensional plane.
    Type: Application
    Filed: August 22, 2022
    Publication date: December 29, 2022
    Applicant: Tencent Technology (Shenzhen) Company Limited
    Inventors: Shenghao ZHANG, Yonggen LING, Wanchao CHI, Yu ZHENG, Xinyang JIANG
  • Patent number: 11481923
    Abstract: This application discloses a repositioning method and apparatus in a camera pose tracking process, a device, and a storage medium, belonging to the field of augmented reality (AR).
    Type: Grant
    Filed: June 29, 2020
    Date of Patent: October 25, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xiangkai Lin, Yonggen Ling, Linchao Bao, Wei Liu
  • Publication number: 20220274254
    Abstract: A computing device creates first relation data indicating a relation between an interval duration and a center of mass position of a legged robot. The first relation data comprises a first constant, C. The computing device creates second relation data corresponding to at least one leg of the legged robot and a contact force between the at least one leg and the ground. The second relation data comprises the first constant, C. The computing device creates third relation data according to the second relation data. The device determines a value of the first constant, C, for which a target value J is minimized, and obtains the first relation data according to the determined value of the first constant, C.
    Type: Application
    Filed: May 12, 2022
    Publication date: September 1, 2022
    Inventors: Yu ZHENG, Xinyang JIANG, Wanchao CHI, Yonggen LING, Shenghao ZHANG, Zhengyou ZHANG
  • Publication number: 20220258356
    Abstract: This disclosure relates to a spatial calibration method and apparatus of a robot ontology coordinate system based on a visual perception device and a storage medium. The method includes: obtaining first transformation relationships; obtaining second transformation relationships; using a transformation relationship between a visual perception coordinate system and an ontology coordinate system as an unknown variable; and resolving the unknown variable based on an equivalence relationship between a transformation relationship obtained according to the first transformation relationships and the unknown variable and a transformation relationship obtained according to the second transformation relationships and the unknown variable, to obtain the transformation relationship between the visual perception coordinate system and the ontology coordinate system.
    Type: Application
    Filed: May 2, 2022
    Publication date: August 18, 2022
    Applicant: Tencent Technology (Shenzhen) Company Limited
    Inventors: Wanchao CHI, Le CHEN, Yonggen LING, Shenghao ZHANG, Yu ZHENG, Xinyang JIANG, Zhengyou ZHANG
  • Publication number: 20220223182
    Abstract: A video sound-picture matching method includes: acquiring a voice sequence; acquiring a voice segment from the voice sequence; acquiring an initial position of a start-stop mark and a moving direction of the start-stop mark from an image sequence; determining an active segment according to the initial position of the start-stop mark, the moving direction of the start-stop mark, and the voice segment; and synthesizing the voice segment and the active segment to obtain a video segment. In the video synthesizing process, the present disclosure uses start-stop marks to locate the positions of active segments in an image sequence, matching active segments that contain actions with voice segments, so that the synthesized video segments better follow the natural behavior of a speaking character and appear more authentic.
    Type: Application
    Filed: April 1, 2022
    Publication date: July 14, 2022
    Inventors: Yonggen LING, Haozhi HUANG, Li SHEN
  • Patent number: 11380050
    Abstract: A face image generation method includes: determining, according to a first face image, a three dimensional morphable model (3DMM) corresponding to the first face image as a first model; determining, according to a reference element, a 3DMM corresponding to the reference element as a second model, the reference element representing a posture and/or an expression of a target face image; determining, according to the first model and the second model, an initial optical flow map corresponding to the first face image, and deforming the first face image according to the initial optical flow map to obtain an initial deformation map; obtaining, through a convolutional neural network, an optical flow increment map and a visibility probability map that correspond to the first face image; and generating the target face image according to the first face image, the initial optical flow map, the optical flow increment map, and the visibility probability map.
    Type: Grant
    Filed: April 20, 2021
    Date of Patent: July 5, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xue Fei Zhe, Yonggen Ling, Lin Chao Bao, Yi Bing Song, Wei Liu
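The deformation step (warping the first face image according to an optical flow map) can be sketched as backward warping with nearest-neighbour sampling. The flow convention (per-pixel dy, dx offsets) is an assumption for illustration; real pipelines use bilinear sampling.

```python
import numpy as np

def warp(image, flow):
    """Backward-warp an image with a dense flow map: each output pixel
    samples the input at (y + dy, x + dx), nearest-neighbour."""
    h, w = image.shape
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            sy = int(round(float(y + flow[y, x, 0])))
            sx = int(round(float(x + flow[y, x, 1])))
            if 0 <= sy < h and 0 <= sx < w:
                out[y, x] = image[sy, sx]
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
flow = np.zeros((4, 4, 2))
flow[..., 1] = 1.0          # every output pixel samples one column right
warped = warp(img, flow)    # first row → [1, 2, 3, 0]
```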
  • Patent number: 11373384
    Abstract: This application provides a method for configuring parameters of a three-dimensional face model. The method includes: obtaining a reference face image; identifying a key facial point on the reference face image to obtain key point coordinates as reference coordinates; and determining a recommended parameter set in a face parameter value space according to the reference coordinates. First projected coordinates are the projected coordinates of the key facial point obtained by projecting a three-dimensional face model corresponding to the recommended parameter set onto a coordinate system, and the proximity of the first projected coordinates to the reference coordinates meets a preset condition.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: June 28, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Mu Hu, Sirui Gao, Yonggen Ling, Yitong Wang, Linchao Bao, Wei Liu
  • Publication number: 20220180543
    Abstract: A method of depth map completion is described. A color map and a sparse depth map of a target scenario can be received. Resolutions of the color map and the sparse depth map are adjusted to generate n pairs of color maps and sparse depth maps of n different resolutions. The n pairs of color maps and sparse depth maps can be processed to generate n prediction result maps using a cascade hourglass network including n levels of hourglass networks. Each of the n pairs is input to a respective one of the n levels to generate the respective one of the n prediction result maps. The n prediction result maps each include a dense depth map of the same resolution as the corresponding pair. A final dense depth map of the target scenario can be generated according to the dense depth maps.
    Type: Application
    Filed: February 25, 2022
    Publication date: June 9, 2022
    Applicant: Tencent Technology (Shenzhen) Company Limited
    Inventors: Yonggen LING, Wanchao CHI, Chong ZHANG, Shenghao ZHANG, Zhengyou ZHANG, Zejian YUAN, Ang LI, Zidong CAO
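Preparing the n input pairs amounts to building a resolution pyramid of the color map and the sparse depth map. The sketch below halves each map repeatedly; averaging for color and max-pooling for sparse depth (so isolated samples survive downsampling) are illustrative choices, not the method's.

```python
import numpy as np

def halve(a, reduce_fn):
    """Downsample a 2-D map by a factor of two using 2x2 block pooling."""
    h, w = a.shape
    blocks = a[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2)
    return reduce_fn(blocks, axis=(1, 3))

def build_pyramid(color, sparse_depth, n):
    """Return n (color, sparse_depth) pairs at successively halved
    resolutions, highest resolution first."""
    pairs = [(color, sparse_depth)]
    for _ in range(n - 1):
        color = halve(color, np.mean)
        sparse_depth = halve(sparse_depth, np.max)  # keep sparse samples alive
        pairs.append((color, sparse_depth))
    return pairs

color = np.ones((8, 8))
depth = np.zeros((8, 8))
depth[2, 2] = 1.0
pairs = build_pyramid(color, depth, n=3)  # 8x8, 4x4, and 2x2 pairs
```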
  • Publication number: 20220143125
    Abstract: The present invention discloses Fuke Qianjin Capsules and a quality control method therefor. The capsules are made of Radix Et Caulis Flemingiae, Caulis Mahoniae, Herba Andrographis, Zanthoxylum dissitum Hemsl., Caulis Spatholobi, Radix Angelicae Sinensis, Radix Codonopsis, and Radix Rosa Laevigata as raw materials. Each of the Fuke Qianjin Capsules contains not less than 2.0 mg of Z-ligustilide, and the total amount of andrographolide and dehydroandrographolide is not less than 1.9 mg. A new standard for controlling the quality of Fuke Qianjin Capsules has been established through an analysis of the chemical ingredients in the capsules. This standard adds content limits for a variety of core ingredients to the existing pharmacopoeia standards. For Fuke Qianjin Capsules made within this range, the consistency of effects between different batches is more stable; moreover, the more core-ingredient types whose content is limited, the more stable the consistency of the drug effect.
    Type: Application
    Filed: January 14, 2020
    Publication date: May 12, 2022
    Applicant: QIANJIN PHARMACEUTICAL CO., LTD.
    Inventors: Shun JIAN, Yun GONG, Peng ZHANG, Fujun LI, Yonggen LING, Juanjuan HE, Kanghua WANG, Xiuwei YANG
  • Patent number: 11321870
    Abstract: Embodiments of this application disclose a camera attitude tracking method and apparatus, a device, and a system in the field of augmented reality (AR). The method includes receiving, by a second device with a camera, an initial image and an initial attitude parameter that are transmitted by a first device; obtaining, by the second device, a second image acquired by the camera; obtaining, by the second device, a camera attitude variation of the second image relative to the initial image; and obtaining, by the second device, according to the initial attitude parameter and the camera attitude variation, a second camera attitude parameter, the second camera attitude parameter corresponding to the second image.
    Type: Grant
    Filed: August 24, 2020
    Date of Patent: May 3, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xiangkai Lin, Yonggen Ling, Linchao Bao, Xiaolong Zhu, Liang Qiao, Wei Liu
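The pose propagation in this last entry composes the initial attitude received from the first device with the variation the second device tracks locally. A minimal sketch with 4x4 homogeneous matrices (the poses are illustrative values):

```python
import numpy as np

def se3(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

initial = se3(np.eye(3), np.array([1.0, 0.0, 0.0]))    # from the first device
variation = se3(np.eye(3), np.array([0.0, 2.0, 0.0]))  # tracked by the second
# Second camera attitude parameter = initial attitude composed with variation.
second = initial @ variation   # translation → (1, 2, 0)
```

In practice the rotations would come from a tracker (e.g. as quaternions) and be composed the same way; the composition order matters and depends on the chosen frame convention.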