Patents by Inventor Wanchao CHI

Wanchao CHI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240157555
    Abstract: A method for controlling a legged robot is performed by an electronic device. The legged robot includes a base and at least two robotic legs. Each of the robotic legs includes at least one joint. The method includes: determining a first expected moving trajectory corresponding to the legged robot and determining a second expected moving trajectory corresponding to the legged robot in response to the legged robot falling to contact a plane, the first expected moving trajectory indicating an expected moving trajectory of a center of mass of the legged robot, and the second expected moving trajectory indicating an expected moving trajectory of a foot end of each of the at least two robotic legs; and controlling, based on a dynamic model corresponding to the legged robot, the first expected moving trajectory, and the second expected moving trajectory, an action of each joint after the legged robot contacts the plane.
    Type: Application
    Filed: January 22, 2024
    Publication date: May 16, 2024
    Inventors: Shuai WANG, Yu ZHENG, Wanchao CHI, Jingfan ZHANG
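
The abstract above describes mapping expected center-of-mass and foot-end trajectories to joint actions through a dynamic model. The snippet below is only a minimal illustration of that idea, not the patented controller: it assumes precomputed CoM and foot Jacobians (made-up inputs here) and substitutes a Jacobian-transpose PD law for the full dynamics.

```python
import numpy as np

def joint_torques_from_task_refs(J_com, J_foot, com_err, com_vel_err,
                                 foot_err, foot_vel_err,
                                 kp_com=200.0, kd_com=20.0,
                                 kp_foot=100.0, kd_foot=10.0):
    """Map CoM and foot-end tracking errors to joint torques.

    A Jacobian-transpose PD law standing in for the dynamics-model-based
    controller the abstract describes; J_com and J_foot are assumed to be
    the (3 x n_joints) Jacobians of the center of mass and of one foot end.
    """
    f_com = kp_com * com_err + kd_com * com_vel_err       # virtual force on the CoM
    f_foot = kp_foot * foot_err + kd_foot * foot_vel_err  # virtual force on the foot end
    return J_com.T @ f_com + J_foot.T @ f_foot

# Toy call with made-up Jacobians for a 3-joint leg; real values would come
# from the robot's kinematics at the moment the plane contact is detected.
rng = np.random.default_rng(0)
tau = joint_torques_from_task_refs(rng.normal(size=(3, 3)), rng.normal(size=(3, 3)),
                                   com_err=np.array([0.0, 0.0, 0.05]),
                                   com_vel_err=np.zeros(3),
                                   foot_err=np.array([0.01, 0.0, 0.0]),
                                   foot_vel_err=np.zeros(3))
print(tau)
```
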
  • Publication number: 20230076589
    Abstract: A method for controlling motion of a legged robot includes determining one or more candidate landing points for each foot of the robot. The method further includes determining a first correlation between a center of mass position change parameter, candidate landing points, and foot contact force. The method further includes determining, under a constraint condition set and based on the first correlation, a target center of mass position change parameter, a target step order, and a target landing point for each foot selected among the one or more candidate landing points for the respective foot, the constraint condition set constraining a step order. The method further includes controlling, according to the target center of mass position change parameter, the target step order, and the target landing point for each foot, motion of the legged robot in the preset period.
    Type: Application
    Filed: November 15, 2022
    Publication date: March 9, 2023
    Applicant: Tencent Technology (Shenzhen) Company Limited
    Inventors: Yu ZHENG, Xinyang JIANG, Wanchao CHI, Yonggen LING, Shenghao ZHANG, Zhengyou ZHANG
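
This application and the three that follow revolve around choosing landing points and a center-of-mass motion subject to a contact-force relation and a constraint set. The sketch below is a toy version of that selection step, not the patented optimization: it enumerates one candidate landing point per foot, fits contact forces to the centroidal wrench by least squares, and rejects combinations whose vertical forces would be negative. The step-order and duty-ratio variables in the abstracts are omitted, and the mass and coordinates are made up.

```python
import itertools
import numpy as np

MASS, GRAVITY = 12.0, np.array([0.0, 0.0, -9.81])   # hypothetical robot mass

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def force_cost(com, com_acc_des, feet):
    """Best-fit contact forces for the desired centroidal wrench, plus how
    well the chosen landing points can realize it (wrench residual)."""
    n = len(feet)
    A = np.vstack([np.tile(np.eye(3), (1, n)),
                   np.hstack([skew(p - com) for p in feet])])
    b = np.concatenate([MASS * (com_acc_des - GRAVITY), np.zeros(3)])
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    forces = f.reshape(n, 3)
    if np.any(forces[:, 2] < 0.0):                   # crude unilateral-contact check
        return np.inf, forces
    residual = np.linalg.norm(A @ f - b)
    return residual + 1e-3 * np.sum(forces ** 2), forces

def pick_landing_points(com, com_acc_des, candidates_per_foot):
    """Brute-force search over one candidate landing point per foot."""
    best = (np.inf, None, None)
    for combo in itertools.product(*candidates_per_foot):
        cost, forces = force_cost(com, com_acc_des, combo)
        if cost < best[0]:
            best = (cost, combo, forces)
    return best

candidates = [[np.array([0.25, 0.15, 0.0]), np.array([0.30, 0.15, 0.0])],
              [np.array([-0.25, -0.15, 0.0]), np.array([-0.20, -0.15, 0.0])]]
cost, points, forces = pick_landing_points(np.array([0.0, 0.0, 0.45]),
                                           np.array([0.2, 0.0, 0.0]),
                                           candidates)
print(cost, points)
```
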
  • Publication number: 20230055206
    Abstract: A legged robot motion control method includes: acquiring centroid state data of a spatial path start point and a spatial path end point of a motion path; determining a target landing point of a foot of the legged robot in the motion path based on the spatial path start point and the spatial path end point; determining a change relationship between a centroid position change coefficient and a foot contact force based on the centroid state data; selecting, under constraint of a constraint condition set, a target centroid position change coefficient that meets the change relationship, the constraint condition set including a spatial landing point constraint condition; determining a target motion control parameter according to the target centroid position change coefficient and the target landing point of the foot; and controlling, based on the target motion control parameter, the legged robot to perform motion according to the motion path.
    Type: Application
    Filed: October 20, 2022
    Publication date: February 23, 2023
    Inventors: Yu ZHENG, Xinyang JIANG, Wanchao CHI, Zhengyou ZHANG
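
The "change relationship between a centroid position change coefficient and a foot contact force" can be pictured, in the simplest Newtonian reading, as a linear map from polynomial CoM coefficients to the net contact force required at sampled times. The block below shows only that textbook relation, not the patented formulation; the polynomial degree, coefficient values, and mass are illustrative.

```python
import numpy as np

MASS, GRAVITY = 12.0, np.array([0.0, 0.0, -9.81])

def coeff_to_accel_matrix(times, degree=4):
    """Linear map from polynomial CoM coefficients to CoM acceleration.

    With the CoM parameterized as p(t) = p0 + sum_k c_k * t**k (k >= 1),
    Newton's law f_net(t) = m * (p''(t) - g) makes the required net foot
    contact force an affine function of the coefficients c_k; the matrix
    below evaluates p''(t) from those coefficients at the sample times.
    """
    # Row for time t holds [k*(k-1)*t**(k-2) for k = 2..degree].
    return np.stack([[k * (k - 1) * t ** (k - 2) for k in range(2, degree + 1)]
                     for t in times])

times = np.linspace(0.0, 0.5, 6)
A = coeff_to_accel_matrix(times)                 # shape (6, degree - 1), per axis
coeffs_z = np.array([0.4, -0.6, 0.2])            # hypothetical c_2..c_4 on the z axis
force_z = MASS * (A @ coeffs_z - GRAVITY[2])     # required net vertical contact force
print(force_z)
```
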
  • Publication number: 20230030054
    Abstract: A method for controlling motion of a legged robot includes: determining, according to state data of the legged robot at a start moment in a preset period, a candidate landing point of each foot in the preset period; determining, according to the state data at the start moment and the candidate landing point of each foot, a first correlation between a centroid position change parameter, a step duty ratio, a candidate landing point, and a foot contact force; determining, under a constraint of a constraint condition set, a target centroid position change parameter, a target step duty ratio, and a target landing point satisfying the first correlation; and controlling, according to the target centroid position change parameter, the target step duty ratio, and the target landing point, motion of the legged robot in the preset period.
    Type: Application
    Filed: September 27, 2022
    Publication date: February 2, 2023
    Inventors: Yu ZHENG, Xinyang JIANG, Wanchao CHI, Yonggen LING, Shenghao ZHANG, Zhengyou ZHANG
  • Publication number: 20230016514
    Abstract: A legged robot motion control method, apparatus, and device, and a storage medium. The method includes: acquiring center of mass state data corresponding to a spatial path starting point and spatial path ending point of a motion path; determining a candidate foothold of each foot in the motion path based on the spatial path starting point and the spatial path ending point; determining a variation relationship between a center of mass position variation coefficient and a foot contact force based on the center of mass state data; screening out, under restrictions of a constraint set, a target center of mass position variation coefficient and target foothold that satisfy the variation relationship; determining a target motion control parameter according to the target center of mass position variation coefficient and the target foothold; and controlling a legged robot based on the target motion control parameter to move according to the motion path.
    Type: Application
    Filed: September 15, 2022
    Publication date: January 19, 2023
    Applicant: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Yu ZHENG, Xinyang JIANG, Wanchao CHI, Zhengyou ZHANG
  • Publication number: 20230015214
    Abstract: This application relates to a planar contour recognition method and apparatus, a computer device, and a storage medium. The method includes obtaining a target frame image collected from a target environment; fitting edge points of an object plane in the target frame image and edge points of a corresponding object plane in a previous frame image to obtain a fitting graph, the previous frame image being collected from the target environment before the target frame image; deleting, from the fitting graph, edge points that do not appear on the object plane of the previous frame image; and recognizing a contour constructed by the remaining edge points in the fitting graph as a planar contour.
    Type: Application
    Filed: September 29, 2022
    Publication date: January 19, 2023
    Inventors: Shenghao ZHANG, Yonggen LING, Wanchao CHI, Yu ZHENG, Xinyang JIANG
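
As a rough picture of the deletion step in this abstract, the sketch below keeps only edge points that lie near a plane carried over from the previous frame and orders the survivors with a convex hull. The plane parameters, the tolerance, and the convex-hull simplification are assumptions, not the patented procedure.

```python
import numpy as np
from scipy.spatial import ConvexHull

def filter_and_trace_contour(edge_pts, plane_n, plane_d, tol=0.02):
    """Keep edge points lying on a previously fitted plane, then trace a contour.

    A simplified stand-in for the abstract's fitting-and-deletion step:
    points whose distance to the previous frame's plane (n.x + d = 0, |n| = 1)
    exceeds `tol` are discarded, and the survivors are ordered with a convex
    hull as a crude planar contour.
    """
    dist = np.abs(edge_pts @ plane_n + plane_d)
    kept = edge_pts[dist < tol]
    if len(kept) < 3:
        return kept
    # Drop the coordinate most aligned with the normal before hulling.
    axes = np.argsort(np.abs(plane_n))[:2]
    hull = ConvexHull(kept[:, axes])
    return kept[hull.vertices]

rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200),
                       rng.normal(0.0, 0.005, 200)])
print(filter_and_trace_contour(pts, np.array([0.0, 0.0, 1.0]), 0.0).shape)
```
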
  • Publication number: 20220415064
    Abstract: The present disclosure provides an image processing method and apparatus, an electronic device, and a computer-readable storage medium. The method includes: obtaining a first three-dimensional image of a target object in a three-dimensional coordinate system; determining a target plane of the target object in the first three-dimensional image, the target plane comprising target three-dimensional points; projecting the target three-dimensional points to a two-dimensional coordinate system defined on the target plane, to obtain target two-dimensional points; determining a target polygon and a minimum circumscribed target graphic of the target polygon according to the target two-dimensional points; and recognizing the minimum circumscribed target graphic as a first target graphic of the target object in the first three-dimensional image.
    Type: Application
    Filed: September 1, 2022
    Publication date: December 29, 2022
    Inventors: Shenghao ZHANG, Yonggen LING, Wanchao CHI, Yu ZHENG, Xinyang JIANG
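
The projection-and-circumscription steps in this abstract can be approximated with standard geometry: project the plane's 3D points into an in-plane 2D basis, take the convex hull as the target polygon, and fit a minimum-area rectangle using the classic hull-edge-orientation trick. The sketch below does only that; the basis construction and the choice of a rectangle as the circumscribed graphic are assumptions.

```python
import numpy as np
from scipy.spatial import ConvexHull

def plane_basis(n):
    """Two orthonormal in-plane axes for a unit normal n."""
    a = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(n, a)
    u /= np.linalg.norm(u)
    return u, np.cross(n, u)

def min_area_rect(points_3d, normal):
    """Project plane points to 2D and fit a minimum-area enclosing rectangle.

    Relies on the observation that the minimum-area rectangle shares an
    orientation with one convex-hull edge; returns (area, side lengths).
    """
    u, v = plane_basis(normal)
    pts2d = points_3d @ np.column_stack([u, v])      # 2D coordinates on the plane
    hull = pts2d[ConvexHull(pts2d).vertices]         # the "target polygon"
    best = (np.inf, None)
    for i in range(len(hull)):
        e = hull[(i + 1) % len(hull)] - hull[i]
        ang = np.arctan2(e[1], e[0])
        c, s = np.cos(-ang), np.sin(-ang)
        rot = hull @ np.array([[c, -s], [s, c]]).T   # rotate edge onto the x axis
        size = rot.max(axis=0) - rot.min(axis=0)
        if size[0] * size[1] < best[0]:
            best = (size[0] * size[1], size)
    return best

pts = np.array([[0, 0, 1], [2, 0, 1], [2, 1, 1], [0, 1, 1], [1, 0.5, 1]], float)
print(min_area_rect(pts, np.array([0.0, 0.0, 1.0])))
```
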
  • Publication number: 20220414910
    Abstract: A scene contour recognition method is provided. In the method, a plurality of scene images of an environment is obtained. Three-dimensional information of a target plane in the plurality of scene images is determined based on depth information for each of the plurality of scene images. The target plane corresponds to a target object in the plurality of scene images. A three-dimensional contour corresponding to the target object is generated by fusing the target plane in each of the plurality of scene images based on the three-dimensional information of the target plane in each of the plurality of scene images. A contour diagram of the target object is generated by projecting the three-dimensional contour onto a two-dimensional plane.
    Type: Application
    Filed: August 22, 2022
    Publication date: December 29, 2022
    Applicant: Tencent Technology (Shenzhen) Company Limited
    Inventors: Shenghao ZHANG, Yonggen LING, Wanchao CHI, Yu ZHENG, Xinyang JIANG
  • Publication number: 20220274657
    Abstract: A robot comprises a wheel-footed bimodal mechanical leg having a driving apparatus, a thigh unit, and a calf unit. A joint end of the thigh unit is hingedly connected to a joint end of the calf unit by a rotary shaft. The driving apparatus is connected to the rotary shaft by a transmission apparatus. The calf unit comprises a locking mechanism. The robot can operate in a footed mode and a wheeled mode. In the footed mode, the calf units and the rotary shafts in n mechanical legs are fixedly connected to each other, where n is an integer that is at least two. In the wheeled mode, the calf units and the rotary shafts in at least two wheel-footed bimodal mechanical legs are rotatably connected to each other.
    Type: Application
    Filed: May 18, 2022
    Publication date: September 1, 2022
    Inventors: Dongsheng ZHANG, Kun XIONG, Xiangyu CHEN, Sicheng YANG, Qinqin ZHOU, Liangwei XU, Qiwei XU, Wanchao CHI, Xiong LI, Zhengyou ZHANG
  • Publication number: 20220274254
    Abstract: A computing device creates first relation data indicating a relation between an interval duration and a center of mass position of a legged robot. The first relation data comprises a first constant, C. The computing device creates second relation data corresponding to at least one leg of the legged robot and a contact force between the at least one leg and the ground. The second relation data comprises the first constant, C. The computing device creates third relation data according to the second relation data. The computing device determines a value of the first constant, C, when a target value J is a minimum value, and obtains the first relation data according to the determined value of the first constant, C.
    Type: Application
    Filed: May 12, 2022
    Publication date: September 1, 2022
    Inventors: Yu ZHENG, Xinyang JIANG, Wanchao CHI, Yonggen LING, Shenghao ZHANG, Zhengyou ZHANG
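
Read generically, "determine the value of the first constant C at which the target value J is a minimum" is an ordinary least-squares problem whenever J is quadratic in C. The abstract does not give J's form, so the lines below only show that generic case with made-up relation data.

```python
import numpy as np

# If stacking the relation data makes J(C) = ||A @ C - b||**2 (a quadratic in
# the constant C), the minimizing C is the ordinary least-squares solution.
# A and b are hypothetical placeholders for those stacked relations.
rng = np.random.default_rng(2)
A = rng.normal(size=(20, 4))
b = rng.normal(size=20)
C, *_ = np.linalg.lstsq(A, b, rcond=None)
print("C minimizing J:", C)
```
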
  • Publication number: 20220266448
    Abstract: This application relates to a leg assembly and device for a robot. The leg assembly includes: a connection assembly and a sole assembly; the connection assembly is configured to connect the leg assembly and a robot body. The sole assembly includes a sole plate, a first force sensor, a distance sensor, and an attitude sensor. The connection assembly includes a second force sensor and a shank connector. The first force sensor is configured to detect a normal reaction force exerted on the sole plate after it comes into contact with an obstacle; the second force sensor is configured to detect a resultant force of the reaction forces exerted on the sole plate after it comes into contact with the obstacle.
    Type: Application
    Filed: May 13, 2022
    Publication date: August 25, 2022
    Inventors: Wanchao CHI, Yu ZHENG, Yuan DAI, Kun XIONG, Xiangyu CHEN, Qinqin ZHOU, Zhengyou ZHANG
  • Publication number: 20220258356
    Abstract: This disclosure relates to a spatial calibration method and apparatus of a robot ontology coordinate system based on a visual perception device, and a storage medium. The method includes: obtaining first transformation relationships; obtaining second transformation relationships; using a transformation relationship between a visual perception coordinate system and an ontology coordinate system as an unknown variable; and resolving the unknown variable based on an equivalence relationship between a transformation relationship obtained according to the first transformation relationships and the unknown variable and a transformation relationship obtained according to the second transformation relationships and the unknown variable, to obtain the transformation relationship between the visual perception coordinate system and the ontology coordinate system.
    Type: Application
    Filed: May 2, 2022
    Publication date: August 18, 2022
    Applicant: Tencent Technology (Shenzhen) Company Limited
    Inventors: Wanchao CHI, Le CHEN, Yonggen LING, Shenghao ZHANG, Yu ZHENG, Xinyang JIANG, Zhengyou ZHANG
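
The equivalence relationship in this abstract has the structure of classical hand-eye calibration, A_i X = X B_i, where A_i are body-frame motions, B_i the matching camera-frame motions, and X the unknown transform between the visual perception and ontology coordinate systems. The sketch below solves that textbook form (rotation by aligning rotation axes, translation by stacked least squares); it is a standard formulation, not the patented resolution method.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def solve_ax_xb(As, Bs):
    """Solve A_i @ X = X @ B_i for the 4x4 transform X (Tsai/Lenz-style)."""
    # Rotation: the rotation axes of each (A_i, B_i) pair are related by R_X.
    a_axes = np.stack([R.from_matrix(A[:3, :3]).as_rotvec() for A in As])
    b_axes = np.stack([R.from_matrix(B[:3, :3]).as_rotvec() for B in Bs])
    Rx, _ = R.align_vectors(a_axes, b_axes)          # finds Rx with Rx @ b ~ a
    Rx = Rx.as_matrix()
    # Translation: stack (R_A - I) t_X = Rx t_B - t_A and solve least squares.
    M = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    d = np.concatenate([Rx @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    tx, *_ = np.linalg.lstsq(M, d, rcond=None)
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = Rx, tx
    return X

# Synthetic check: build motions from a known X_true and recover it.
rng = np.random.default_rng(3)
X_true = np.eye(4)
X_true[:3, :3] = R.random().as_matrix()
X_true[:3, 3] = rng.normal(size=3)
Bs = []
for _ in range(5):
    B = np.eye(4)
    B[:3, :3] = R.random().as_matrix()
    B[:3, 3] = rng.normal(size=3)
    Bs.append(B)
As = [X_true @ B @ np.linalg.inv(X_true) for B in Bs]
print(np.allclose(solve_ax_xb(As, Bs), X_true, atol=1e-6))
```
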
  • Publication number: 20220180543
    Abstract: A method of depth map completion is described. A color map and a sparse depth map of a target scenario can be received. Resolutions of the color map and the sparse depth map are adjusted to generate n pairs of color maps and sparse depth maps of n different resolutions. The n pairs of color maps and sparse depth maps can be processed to generate n prediction result maps using a cascade hourglass network including n levels of hourglass networks. Each of the n pairs is input to a respective one of the n levels to generate the respective one of the n prediction result maps. The n prediction result maps each include a dense depth map of the same resolution as the corresponding pair. A final dense depth map of the target scenario can be generated according to the dense depth maps.
    Type: Application
    Filed: February 25, 2022
    Publication date: June 9, 2022
    Applicant: Tencent Technology (Shenzhen) Company Limited
    Inventors: Yonggen LING, Wanchao CHI, Chong ZHANG, Shenghao ZHANG, Zhengyou ZHANG, Zejian YUAN, Ang LI, Zidong CAO
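
A minimal cascade along the lines sketched in this abstract: each level is a tiny encoder-decoder that sees the color map and sparse depth at its resolution plus the upsampled prediction from the coarser level, and emits a dense depth map. Layer sizes, the number of levels, and the internal downsampling are placeholders rather than the published architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyHourglass(nn.Module):
    """One encoder-decoder level; far smaller than any real hourglass."""
    def __init__(self, in_ch):
        super().__init__()
        self.down = nn.Conv2d(in_ch, 32, 3, stride=2, padding=1)
        self.up = nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1)
        self.head = nn.Conv2d(16, 1, 3, padding=1)

    def forward(self, x):
        return self.head(F.relu(self.up(F.relu(self.down(x)))))

class CascadeDepthCompletion(nn.Module):
    """n levels, coarse to fine; each level sees RGB + sparse depth at its
    resolution plus the upsampled prediction from the previous level."""
    def __init__(self, n_levels=3):
        super().__init__()
        self.levels = nn.ModuleList(
            [TinyHourglass(in_ch=4 if i == 0 else 5) for i in range(n_levels)])

    def forward(self, rgb, sparse_depth):
        n = len(self.levels)
        preds, prev = [], None
        for i, level in enumerate(self.levels):
            scale = 1 / 2 ** (n - 1 - i)                   # coarse -> fine
            if scale != 1:
                rgb_i = F.interpolate(rgb, scale_factor=scale, mode='bilinear',
                                      align_corners=False)
                sd_i = F.interpolate(sparse_depth, scale_factor=scale, mode='nearest')
            else:
                rgb_i, sd_i = rgb, sparse_depth
            x = torch.cat([rgb_i, sd_i], dim=1)
            if prev is not None:
                prev_up = F.interpolate(prev, size=rgb_i.shape[-2:], mode='bilinear',
                                        align_corners=False)
                x = torch.cat([x, prev_up], dim=1)
            prev = level(x)
            preds.append(prev)
        return preds                                        # one dense map per level

model = CascadeDepthCompletion()
rgb = torch.randn(1, 3, 64, 64)
sparse = torch.randn(1, 1, 64, 64)
print([p.shape for p in model(rgb, sparse)])
```
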
  • Publication number: 20220051061
    Abstract: An artificial intelligence-based action recognition method includes: determining, according to video data comprising an interactive object, node sequence information corresponding to video frames in the video data, the node sequence information of each video frame including position information of nodes in a node sequence, the nodes in the node sequence being nodes of the interactive object that are moved to implement a corresponding interactive action; determining action categories corresponding to the video frames in the video data, including: determining, according to the node sequence information corresponding to N consecutive video frames in the video data, action categories respectively corresponding to the N consecutive video frames; and determining, according to the action categories corresponding to the video frames in the video data, a target interactive action made by the interactive object in the video data.
    Type: Application
    Filed: November 1, 2021
    Publication date: February 17, 2022
    Inventors: Wanchao CHI, Chong ZHANG, Yonggen LING, Wei LIU, Zhengyou ZHANG, Zejian YUAN, Ziyang SONG, Ziyi YIN
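
The per-window classification and video-level aggregation in this abstract can be illustrated with a small recurrent classifier over N consecutive frames of node coordinates followed by a majority vote. The network, window length, joint count, and vote rule below are stand-ins; the patent does not commit to any of them.

```python
import torch
import torch.nn as nn

class WindowActionClassifier(nn.Module):
    """Classify a window of N skeleton frames into an action category.

    A small GRU over per-frame joint coordinates; the abstract does not
    specify the classifier, so this is only a plausible stand-in.
    """
    def __init__(self, n_nodes=17, n_actions=10, hidden=64):
        super().__init__()
        self.gru = nn.GRU(input_size=n_nodes * 2, hidden_size=hidden,
                          batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, windows):                    # (batch, N, n_nodes, 2)
        b, n, j, c = windows.shape
        _, h = self.gru(windows.reshape(b, n, j * c))
        return self.head(h[-1])                    # (batch, n_actions)

def video_action(model, node_seq, N=16):
    """Per-window categories via sliding N-frame windows, then a majority vote."""
    windows = torch.stack([node_seq[i:i + N]
                           for i in range(len(node_seq) - N + 1)])
    with torch.no_grad():
        window_cats = model(windows).argmax(dim=1)
    return torch.mode(window_cats).values.item()   # most frequent category wins

model = WindowActionClassifier()
seq = torch.randn(60, 17, 2)                       # 60 frames, 17 joints, (x, y)
print(video_action(model, seq))
```
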