Patents by Inventor Shenghao ZHANG
Shenghao ZHANG has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12293532
Abstract: A method of depth map completion is described. A color map and a sparse depth map of a target scenario can be received. Resolutions of the color map and the sparse depth map are adjusted to generate n pairs of color maps and sparse depth maps of n different resolutions. The n pairs of color maps and sparse depth maps can be processed to generate n prediction result maps using a cascade hourglass network including n levels of hourglass networks. Each of the n pairs is input to a respective one of the n levels to generate the respective one of the n prediction result maps. The n prediction result maps each include a dense depth map of the same resolution as the corresponding pair. A final dense depth map of the target scenario can be generated according to the dense depth maps.
Type: Grant
Filed: February 25, 2022
Date of Patent: May 6, 2025
Assignee: Tencent Technology (Shenzhen) Company Limited
Inventors: Yonggen Ling, Wanchao Chi, Chong Zhang, Shenghao Zhang, Zhengyou Zhang, Zejian Yuan, Ang Li, Zidong Cao
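As an aside, the multi-resolution pairing this abstract describes can be sketched in a few lines. The Python/NumPy snippet below only illustrates building n input pairs at successively halved resolutions; the `downsample` scheme, level count, and all names are assumptions for illustration, not the patented cascade hourglass network:

```python
import numpy as np

def downsample(img, factor):
    # Naive strided subsampling; a real pipeline would use area interpolation.
    return img[::factor, ::factor]

def make_pyramid(color, sparse_depth, n):
    """Build n (color map, sparse depth map) pairs, halving resolution per level."""
    return [(downsample(color, 2 ** i), downsample(sparse_depth, 2 ** i))
            for i in range(n)]

color = np.random.rand(64, 64, 3)   # toy color map
sparse = np.random.rand(64, 64)     # toy sparse depth map
pairs = make_pyramid(color, sparse, 3)
print([p[0].shape for p in pairs])  # [(64, 64, 3), (32, 32, 3), (16, 16, 3)]
```

In the patented design, each such pair would feed one level of the hourglass cascade, and the per-level dense depth predictions would then be fused into the final map.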
-
Patent number: 12202152
Abstract: This disclosure relates to a spatial calibration method and apparatus of a robot ontology coordinate system based on a visual perception device and a storage medium. The method includes: obtaining first transformation relationships; obtaining second transformation relationships; using a transformation relationship between a visual perception coordinate system and an ontology coordinate system as an unknown variable; and resolving the unknown variable based on an equivalence relationship between a transformation relationship obtained according to the first transformation relationships and the unknown variable and a transformation relationship obtained according to the second transformation relationships and the unknown variable, to obtain the transformation relationship between the visual perception coordinate system and the ontology coordinate system.
Type: Grant
Filed: May 2, 2022
Date of Patent: January 21, 2025
Assignee: Tencent Technology (Shenzhen) Company Limited
Inventors: Wanchao Chi, Le Chen, Yonggen Ling, Shenghao Zhang, Yu Zheng, Xinyang Jiang, Zhengyou Zhang
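The equivalence-based solve described here has the shape of the classic hand-eye calibration equation AX = XB, where X is the unknown vision-to-ontology transform. The sketch below recovers the rotation part only, using the standard textbook fact that for A = X B Xᵀ the rotation axes satisfy axis(A) = X·axis(B); this is a generic technique chosen for illustration, not necessarily the claimed method, and all numbers are invented:

```python
import numpy as np

def rot(axis, theta):
    """Rotation matrix from axis-angle (Rodrigues' formula)."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def rotation_axis(R):
    """Unit rotation axis of R, read off the skew-symmetric part."""
    v = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return v / np.linalg.norm(v)

# Unknown vision-to-ontology rotation X, to be recovered.
X_true = rot([1, 2, 3], 0.9)

# Two relative motions B_i observed in the ontology frame; the vision
# frame observes A_i = X B_i X^T, so axis(A_i) = X @ axis(B_i).
Bs = [rot([1, 0, 0], 0.6), rot([0, 1, 0], 1.2)]
As = [X_true @ B @ X_true.T for B in Bs]

a = np.stack([rotation_axis(A) for A in As])  # rows: axes in vision frame
b = np.stack([rotation_axis(B) for B in Bs])  # rows: axes in ontology frame

# Kabsch alignment: the rotation X minimizing sum ||a_i - X b_i||^2.
H = b.T @ a
U, _, Vt = np.linalg.svd(H)
d = np.sign(np.linalg.det(Vt.T @ U.T))
X_est = Vt.T @ np.diag([1, 1, d]) @ U.T
print(np.allclose(X_est, X_true, atol=1e-6))  # True
```

A full solution would also recover the translation from the same equivalence relationships.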
-
Publication number: 20240227203
Abstract: In a state estimation method for a legged robot, first sensor information and second sensor information of the legged robot are received. First state information of the legged robot for a period of time is determined, via a first Kalman filter, based on the first sensor information and the second sensor information. Third sensor information of the legged robot is received. Second state information of the legged robot is determined, via a second Kalman filter, based on the third sensor information and the first state information for the period of time. First state information of the legged robot at a current time is updated based on the second state information of the legged robot, to determine state information of the legged robot at the current time.
Type: Application
Filed: March 21, 2024
Publication date: July 11, 2024
Applicant: Tencent Technology (Shenzhen) Company Limited
Inventors: Yanming WU, Wanchao CHI, Haitao WANG, Xinyang JIANG, Shenghao ZHANG, Yu ZHENG
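A two-stage filter of the kind this abstract describes can be illustrated with scalar Kalman measurement updates. Everything below (sensor readings, noise values, the scalar state) is a made-up toy; it only shows the cascade structure of a first filter whose output feeds a second:

```python
def kf_update(x, P, z, R):
    """Scalar Kalman measurement update: fuse estimate (x, P) with reading z."""
    K = P / (P + R)          # Kalman gain
    x = x + K * (z - x)      # corrected state
    P = (1 - K) * P          # corrected covariance
    return x, P

# Stage 1: fuse two fast sensors (say, IMU and joint encoders) over a window.
x, P = 0.0, 1.0
for z in [0.9, 1.1, 1.0]:    # readings from the first sensor
    x, P = kf_update(x, P, z, R=0.5)
for z in [1.05, 0.95]:       # readings from the second sensor
    x, P = kf_update(x, P, z, R=0.5)

# Stage 2: a second filter refines the stage-1 estimate with a slower but
# more accurate sensor, and the result becomes the current-time state.
x2, P2 = kf_update(x, P, z=1.02, R=0.1)
print(round(x2, 3))  # 0.962
```

Note how the stage-2 covariance P2 is smaller than the stage-1 covariance P: the refinement step genuinely tightens the estimate.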
-
Patent number: 11978219
Abstract: A method for determining motion information is provided to be executed by a computing device. The method includes: determining a first image and a second image, the first image and the second image each including an object; determining a first pixel region based on a target feature point on the object in the first image; determining a second pixel region according to a pixel difference among a plurality of first pixel points in the first pixel region and further according to the target feature point; and obtaining motion information of the target feature point according to a plurality of second pixel points in the second pixel region and the second image, the motion information indicating changes in the locations of the target feature point between the first image and the second image.
Type: Grant
Filed: May 18, 2021
Date of Patent: May 7, 2024
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Yonggen Ling, Shenghao Zhang
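The idea of matching a pixel region around a feature point across two images can be pictured with a brute-force sum-of-squared-differences search. This is a generic block-matching sketch with invented parameters, not the region-selection scheme actually claimed:

```python
import numpy as np

def track_point(img1, img2, pt, patch=3, search=5):
    """Find the displacement (dy, dx) whose patch in img2 best matches the
    patch around pt in img1, by exhaustive SSD search."""
    r = patch // 2
    y, x = pt
    ref = img1[y - r:y + r + 1, x - r:x + r + 1]
    best, best_d = None, np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = img2[y + dy - r:y + dy + r + 1, x + dx - r:x + dx + r + 1]
            d = np.sum((ref - cand) ** 2)
            if d < best_d:
                best, best_d = (dy, dx), d
    return best

# Synthetic check: shift an image by (2, 1) and recover the motion.
rng = np.random.default_rng(0)
img1 = rng.random((32, 32))
img2 = np.roll(img1, shift=(2, 1), axis=(0, 1))
print(track_point(img1, img2, (16, 16)))  # (2, 1)
```

The patent's contribution lies in how the second pixel region is chosen from pixel differences; the exhaustive window above is just the simplest stand-in.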
-
Publication number: 20230077413
Abstract: Onboarding one or more wireless devices to different wireless networks in a wireless system. A gateway/access point apparatus selects from a user interface a trigger service set identifier (SSID) among a plurality of available onboarding trigger SSIDs, each onboarding trigger SSID corresponding to a different wireless network. The gateway/access point apparatus transmits the onboarding trigger SSID to a wireless device, initiates an onboarding procedure between the wireless device and the gateway/access point apparatus, and establishes a network connection between the wireless device and a wireless network based on the onboarding procedure, the wireless network corresponding to the transmitted onboarding trigger SSID. The selecting, the transmitting, the initiating, and the establishing are performed for each of the one or more wireless devices, establishing a network connection to a different one of the wireless networks using the corresponding onboarding trigger SSID.
Type: Application
Filed: February 18, 2020
Publication date: March 16, 2023
Inventors: Xiangzhong JIAO, Feng ZHENG, Shenghao ZHANG, Yonghui WU, Shukai YANG, Fangli LIAO, Peng TAO
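The per-network trigger-SSID dispatch at the heart of this abstract can be pictured as a simple lookup. The SSID names and network identifiers below are invented placeholders, and the handshake steps are stubbed out:

```python
# Hypothetical mapping of onboarding trigger SSIDs to target networks.
TRIGGER_SSIDS = {
    "Onboard-Main": "home-main",
    "Onboard-IoT": "home-iot",
    "Onboard-Guest": "home-guest",
}

def onboard(device, trigger_ssid):
    """Walk one device through the trigger-SSID onboarding flow."""
    network = TRIGGER_SSIDS[trigger_ssid]  # each trigger maps to one network
    # 1. transmit the trigger SSID to the device,
    # 2. run the onboarding procedure with the gateway/access point,
    # 3. connect the device to the corresponding network (all stubbed here).
    return {"device": device, "network": network}

print(onboard("sensor-1", "Onboard-IoT")["network"])  # home-iot
```

Repeating the call per device, each with its own trigger SSID, mirrors the abstract's loop over the one or more wireless devices.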
-
Publication number: 20230076589
Abstract: A method for controlling motion of a legged robot includes determining one or more candidate landing points for each foot of the robot. The method further includes determining a first correlation between a center of mass position change parameter, candidate landing points, and foot contact force. The method further includes determining, under a constraint condition set and based on the first correlation, a target center of mass position change parameter, a target step order, and a target landing point for each foot selected among the one or more candidate landing points for the respective foot, the constraint condition set constraining a step order. The method further includes controlling, according to the target center of mass position change parameter, the target step order, and the target landing point for each foot, motion of the legged robot in a preset period.
Type: Application
Filed: November 15, 2022
Publication date: March 9, 2023
Applicant: Tencent Technology (Shenzhen) Company Limited
Inventors: Yu ZHENG, Xinyang JIANG, Wanchao CHI, Yonggen LING, Shenghao ZHANG, Zhengyou ZHANG
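Selecting a target landing point under a constraint set can be pictured as a small constrained search over candidates. The candidate points, cost, and stance-length constraint below are invented toy values; the patented formulation jointly optimizes center-of-mass motion, step order, and contact forces, which this sketch does not attempt:

```python
import itertools

# Hypothetical candidate landing points (x, y) for two feet, and a desired
# center-of-mass target for the feet's midpoint.
candidates = {
    "front": [(0.30, 0.10), (0.35, 0.12), (0.28, 0.08)],
    "rear":  [(-0.30, 0.10), (-0.26, 0.12)],
}
com_target = (0.02, 0.10)

def cost(front, rear):
    # Squared distance of the stance midpoint from the desired COM target.
    mx = (front[0] + rear[0]) / 2
    my = (front[1] + rear[1]) / 2
    return (mx - com_target[0]) ** 2 + (my - com_target[1]) ** 2

def feasible(front, rear):
    # Toy constraint: keep a minimum stance length between the two feet.
    return front[0] - rear[0] >= 0.55

best = min(
    (p for p in itertools.product(candidates["front"], candidates["rear"])
     if feasible(*p)),
    key=lambda p: cost(*p),
)
print(best)  # ((0.3, 0.1), (-0.26, 0.12))
```

With handfuls of candidates per foot this brute force is fine; the real controller must solve the coupled problem far more efficiently.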
-
Publication number: 20230030054
Abstract: A method for controlling motion of a legged robot includes: determining, according to state data of the legged robot at a start moment in a preset period, a candidate landing point of each foot in the preset period; determining, according to the state data at the start moment and the candidate landing point of each foot, a first correlation between a centroid position change parameter, a step duty ratio, a candidate landing point, and a foot contact force; determining, under a constraint of a constraint condition set, a target centroid position change parameter, a target step duty ratio, and a target landing point satisfying the first correlation; and controlling, according to the target centroid position change parameter, the target step duty ratio, and the target landing point, motion of the legged robot in the preset period.
Type: Application
Filed: September 27, 2022
Publication date: February 2, 2023
Inventors: Yu ZHENG, Xinyang JIANG, Wanchao CHI, Yonggen LING, Shenghao ZHANG, Zhengyou ZHANG
-
Publication number: 20230015214
Abstract: This application relates to a planar contour recognition method and apparatus, a computer device, and a storage medium. The method includes: obtaining a target frame image collected from a target environment; fitting edge points of an object plane in the target frame image and edge points of a corresponding object plane in a previous frame image to obtain a fitting graph, the previous frame image being collected from the target environment before the target frame image; deleting, from the fitting graph, edge points that do not appear on the object plane of the previous frame image; and recognizing a contour constructed by the remaining edge points in the fitting graph as a planar contour.
Type: Application
Filed: September 29, 2022
Publication date: January 19, 2023
Inventors: Shenghao ZHANG, Yonggen LING, Wanchao CHI, Yu ZHENG, Xinyang JIANG
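In its simplest form, the step of discarding edge points absent from the previous frame is a set intersection. The pixel coordinates below are invented:

```python
# Hypothetical edge points (pixel coords) of the same object plane in two
# consecutive frames; keeping only points present in both mimics the
# "delete edge points absent from the previous frame" step.
prev_edges = {(10, 5), (11, 5), (12, 6), (13, 7), (14, 8)}
curr_edges = {(11, 5), (12, 6), (13, 7), (14, 8), (40, 40)}  # (40, 40) is noise

stable = sorted(curr_edges & prev_edges)
print(stable)  # the spurious (40, 40) point is gone
```

The actual method fits the two frames' edge points into a fitting graph first, so the comparison happens between fitted structures rather than raw pixel sets.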
-
Publication number: 20220414910
Abstract: A scene contour recognition method is provided. In the method, a plurality of scene images of an environment is obtained. Three-dimensional information of a target plane in the plurality of scene images is determined based on depth information for each of the plurality of scene images. The target plane corresponds to a target object in the plurality of scene images. A three-dimensional contour corresponding to the target object is generated by fusing the target plane in each of the plurality of scene images based on the three-dimensional information of the target plane in each of the plurality of scene images. A contour diagram of the target object is generated by projecting the three-dimensional contour onto a two-dimensional plane.
Type: Application
Filed: August 22, 2022
Publication date: December 29, 2022
Applicant: Tencent Technology (Shenzhen) Company Limited
Inventors: Shenghao ZHANG, Yonggen LING, Wanchao CHI, Yu ZHENG, Xinyang JIANG
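Projecting a three-dimensional contour onto a two-dimensional plane, as in the last step of this abstract, can be sketched by building an orthonormal basis in the plane. The basis construction and sample points below are illustrative assumptions, not the claimed procedure:

```python
import numpy as np

def project_to_plane(points, origin, normal):
    """Project 3-D contour points onto the plane (origin, normal) and
    express them in 2-D plane coordinates."""
    normal = normal / np.linalg.norm(normal)
    # Build an orthonormal basis (u, v) spanning the plane.
    u = np.cross(normal, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-8:        # normal parallel to z: pick the x axis
        u = np.array([1.0, 0.0, 0.0])
    else:
        u = u / np.linalg.norm(u)
    v = np.cross(normal, u)
    rel = points - origin
    return np.stack([rel @ u, rel @ v], axis=1)

# Toy contour on the plane z = 2 with upward normal.
pts = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 2.0], [-1.0, 0.0, 2.0]])
uv = project_to_plane(pts, origin=np.array([0.0, 0.0, 2.0]),
                      normal=np.array([0.0, 0.0, 1.0]))
print(uv)  # [[ 1.  0.] [ 0.  1.] [-1.  0.]]
```

The resulting 2-D coordinates are what a contour diagram would be drawn from.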
-
Publication number: 20220415064
Abstract: The present disclosure provides an image processing method and apparatus, an electronic device, and a computer-readable storage medium. The method includes: obtaining a first three-dimensional image of a target object in a three-dimensional coordinate system; determining a target plane of the target object in the first three-dimensional image, the target plane comprising target three-dimensional points; projecting the target three-dimensional points to a two-dimensional coordinate system defined on the target plane, to obtain target two-dimensional points; determining a target polygon and a minimum circumscribed target graphic of the target polygon according to the target two-dimensional points; and recognizing the minimum circumscribed target graphic as a first target graphic of the target object in the first three-dimensional image.
Type: Application
Filed: September 1, 2022
Publication date: December 29, 2022
Inventors: Shenghao ZHANG, Yonggen Ling, Wanchao Chi, Yu Zheng, Xinyang Jiang
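The "target polygon" step can be approximated by taking the convex hull of the projected two-dimensional points. The monotone-chain algorithm below is a standard technique used purely for illustration; the abstract's minimum circumscribed graphic is a further step this sketch omits:

```python
def convex_hull(points):
    """Andrew's monotone chain: the convex polygon enclosing 2-D points,
    standing in for the abstract's 'target polygon'."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                       # build the lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build the upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]      # drop duplicated endpoints

pts = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]  # (1, 1) is interior
print(convex_hull(pts))  # [(0, 0), (2, 0), (2, 2), (0, 2)]
```

A minimum circumscribed rectangle could then be found from this hull, e.g. by rotating calipers.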
-
Publication number: 20220274254
Abstract: A computing device creates first relation data indicating a relation between an interval duration and a center of mass position of a legged robot. The first relation data comprises a first constant, C. The computing device creates second relation data corresponding to at least one leg of the legged robot and a contact force between the at least one leg and the ground. The second relation data comprises the first constant, C. The computing device creates third relation data according to the second relation data. The device determines a value of the first constant, C, at which a target value J is minimized, and obtains the first relation data according to the determined value of the first constant, C.
Type: Application
Filed: May 12, 2022
Publication date: September 1, 2022
Inventors: Yu ZHENG, Xinyang JIANG, Wanchao CHI, Yonggen LING, Shenghao ZHANG, Zhengyou ZHANG
-
Publication number: 20220258356
Abstract: This disclosure relates to a spatial calibration method and apparatus of a robot ontology coordinate system based on a visual perception device and a storage medium. The method includes: obtaining first transformation relationships; obtaining second transformation relationships; using a transformation relationship between a visual perception coordinate system and an ontology coordinate system as an unknown variable; and resolving the unknown variable based on an equivalence relationship between a transformation relationship obtained according to the first transformation relationships and the unknown variable and a transformation relationship obtained according to the second transformation relationships and the unknown variable, to obtain the transformation relationship between the visual perception coordinate system and the ontology coordinate system.
Type: Application
Filed: May 2, 2022
Publication date: August 18, 2022
Applicant: Tencent Technology (Shenzhen) Company Limited
Inventors: Wanchao CHI, Le CHEN, Yonggen LING, Shenghao ZHANG, Yu ZHENG, Xinyang JIANG, Zhengyou ZHANG
-
Publication number: 20220180543
Abstract: A method of depth map completion is described. A color map and a sparse depth map of a target scenario can be received. Resolutions of the color map and the sparse depth map are adjusted to generate n pairs of color maps and sparse depth maps of n different resolutions. The n pairs of color maps and sparse depth maps can be processed to generate n prediction result maps using a cascade hourglass network including n levels of hourglass networks. Each of the n pairs is input to a respective one of the n levels to generate the respective one of the n prediction result maps. The n prediction result maps each include a dense depth map of the same resolution as the corresponding pair. A final dense depth map of the target scenario can be generated according to the dense depth maps.
Type: Application
Filed: February 25, 2022
Publication date: June 9, 2022
Applicant: Tencent Technology (Shenzhen) Company Limited
Inventors: Yonggen LING, Wanchao CHI, Chong ZHANG, Shenghao ZHANG, Zhengyou ZHANG, Zejian YUAN, Ang LI, Zidong CAO
-
Publication number: 20210272294
Abstract: A method for determining motion information is provided to be executed by a computing device. The method includes: determining a first image and a second image, the first image and the second image each including an object; determining a first pixel region based on a target feature point on the object in the first image; determining a second pixel region according to a pixel difference among a plurality of first pixel points in the first pixel region and further according to the target feature point; and obtaining motion information of the target feature point according to a plurality of second pixel points in the second pixel region and the second image, the motion information indicating changes in the locations of the target feature point between the first image and the second image.
Type: Application
Filed: May 18, 2021
Publication date: September 2, 2021
Inventors: Yonggen LING, Shenghao ZHANG