Patents by Inventor Zhaoxiang Liu
Zhaoxiang Liu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11321583
Abstract: An image annotating method includes: acquiring an image collected at a terminal; acquiring voice information associated with the image; annotating the image according to the voice information; and storing an annotated result of the image.
Type: Grant
Filed: September 19, 2019
Date of Patent: May 3, 2022
Assignee: CLOUDMINDS ROBOTICS CO., LTD.
Inventors: Shiguo Lian, Zhaoxiang Liu, Ning Wang, Yibing Nan
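The claimed flow (acquire image, acquire associated voice, annotate, store) could be sketched as below. The `AnnotatedImage` type, the in-memory `store`, and the `transcribe` stub are illustrative assumptions, not the patented implementation; a real system would call a speech-to-text engine where the stub is.

```python
from dataclasses import dataclass

@dataclass
class AnnotatedImage:
    """An image paired with the annotation derived from its voice information."""
    image_id: str
    annotation: str

def transcribe(voice_clip: str) -> str:
    # Stand-in for a real speech-to-text engine; here the "clip" is already
    # text so the sketch stays self-contained.
    return voice_clip.strip().lower()

def annotate_image(image_id: str, voice_clip: str, store: dict) -> AnnotatedImage:
    """Acquire the image, acquire the voice info, annotate, and store the result."""
    result = AnnotatedImage(image_id=image_id, annotation=transcribe(voice_clip))
    store[image_id] = result  # store the annotated result
    return result

store: dict = {}
annotate_image("img-001", "  A Red Bicycle ", store)
print(store["img-001"].annotation)  # prints "a red bicycle"
```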
-
Patent number: 11175668
Abstract: The embodiment of the present invention provides a navigation method and apparatus, and a terminal device, relating to the technical field of navigation, for reducing or avoiding the impact of network delay on real-time obstacle detection and avoidance. The method includes: detecting an obstacle to obtain first obstacle information; obtaining scene information and sending the scene information to a remote server, so that the remote server obtains second obstacle information according to the scene information, wherein the accuracy of the second obstacle information is greater than the accuracy of the first obstacle information; and if the second obstacle information sent by the remote server is not received, avoiding the obstacle according to the first obstacle information. The embodiment of the present invention is applied to navigation.
Type: Grant
Filed: June 10, 2019
Date of Patent: November 16, 2021
Assignee: CLOUDMINDS (SHANGHAI) ROBOTICS CO., LTD.
Inventors: Zhaoxiang Liu, Shiguo Lian
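The fallback logic described above (prefer the more accurate server-side result, but never let network delay block avoidance) could be sketched with a timeout, as follows. The deadline value, the simulated server delay, and the stub detectors are assumptions for illustration only.

```python
import queue
import threading
import time

def detect_locally():
    # First obstacle information from on-board sensing (coarse but immediate).
    return {"source": "local", "obstacles": ["box"]}

def ask_remote_server(scene, result_q, delay):
    # Simulated round trip: the server returns the more accurate second
    # obstacle information after `delay` seconds.
    time.sleep(delay)
    result_q.put({"source": "remote", "obstacles": ["box", "cable"]})

def avoid(scene, deadline=0.05, server_delay=0.2):
    local = detect_locally()
    q = queue.Queue()
    threading.Thread(target=ask_remote_server,
                     args=(scene, q, server_delay), daemon=True).start()
    try:
        # Use the remote (second) obstacle information if it arrives in time.
        return q.get(timeout=deadline)
    except queue.Empty:
        # Not received: avoid the obstacle per the local (first) information.
        return local

print(avoid("hallway")["source"])  # slow server -> "local"
```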
-
Publication number: 20210271253
Abstract: A method and apparatus for controlling a device to move, a storage medium, and an electronic device are provided. The method includes: collecting a first RGB-D image of a surrounding environment of a target device according to a preset period when the target device moves; obtaining a second RGB-D image of a preset number of frames from the first RGB-D image; obtaining a pre-trained deep Q-network (DQN) training model, and performing migration training on the DQN training model according to the second RGB-D image to obtain a target DQN model; obtaining a target RGB-D image of the current surrounding environment of the target device; inputting the target RGB-D image into the target DQN model to obtain a target output parameter, and determining a target control strategy according to the target output parameter; and controlling the target device to move according to the target control strategy.
Type: Application
Filed: May 14, 2021
Publication date: September 2, 2021
Applicant: CloudMinds (Shanghai) Robotics Co., Ltd.
Inventors: Zhaoxiang Liu, Shiguo Lian, Shaohua Li
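The inference side of the pipeline (buffer recent RGB-D frames, query the migration-trained DQN, pick the highest-Q action) could be sketched as follows. The buffer size, action set, and `stub_q_model` are illustrative assumptions; the real model would be a trained network.

```python
from collections import deque

def act_from_frames(frames, q_model, actions):
    """Pick the control action whose Q-value is highest for the stacked frames."""
    q_values = q_model(frames)
    return max(zip(actions, q_values), key=lambda pair: pair[1])[0]

# Hypothetical: keep the last N RGB-D frames collected at the preset period.
N = 4
frame_buffer = deque(maxlen=N)
for t in range(6):
    frame_buffer.append(f"rgbd_frame_{t}")  # stand-in for a real RGB-D image

def stub_q_model(frames):
    # Stand-in for the target DQN obtained via migration training; returns
    # one Q-value per candidate action.
    return [0.1, 0.9, 0.3]

action = act_from_frames(list(frame_buffer), stub_q_model,
                         ["left", "forward", "right"])
print(action)  # prints "forward"
```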
-
Patent number: 11036990
Abstract: A target identification method includes: using information of a to-be-detected target acquired within a predetermined time period as judgment information; acquiring an identification result of the to-be-detected target at a current time and outputting the identification result; judging whether the attribute type corresponding to the identification result is an attribute type having the highest priority; and if the attribute type corresponding to the identification result is not the attribute type having the highest priority, using information of the to-be-detected target acquired within a next predetermined time period as the judgment information, and returning to the step of acquiring an identification result of the to-be-detected target at a current time and outputting the identification result.
Type: Grant
Filed: March 13, 2020
Date of Patent: June 15, 2021
Assignee: CLOUDMINDS (BEIJING) TECHNOLOGIES CO., LTD.
Inventors: Shiguo Lian, Zhaoxiang Liu, Ning Wang
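The output-then-refine loop above could be sketched as follows. The numeric priority scheme (smaller number = higher priority), the window contents, and the `classify` stub are assumptions for illustration; so is the reading that each new period's information replaces the judgment information rather than accumulating.

```python
def identify_with_priority(windows, classify, priority):
    """Output a result per time window, stopping once the result's attribute
    type has the highest priority (smallest number, by assumption here)."""
    top = min(priority.values())
    outputs = []
    for judgment in windows:         # info acquired in one predetermined period
        result = classify(judgment)  # identification result at the current time
        outputs.append(result)       # output the identification result
        if priority[result] == top:  # highest-priority attribute type reached?
            break                    # else: next period's info becomes the judgment info
    return outputs

# Hypothetical example: a coarse "person" result refines to "employee"
# once a badge is visible in a later time window.
priority = {"person": 2, "employee": 1}
classify = lambda info: "employee" if "badge" in info else "person"
results = identify_with_priority([["blur"], ["face", "badge"]], classify, priority)
print(results)  # prints ['person', 'employee']
```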
-
Publication number: 20200218897
Abstract: A target identification method includes: using information of a to-be-detected target acquired within a predetermined time period as judgment information; acquiring an identification result of the to-be-detected target at a current time and outputting the identification result; judging whether the attribute type corresponding to the identification result is an attribute type having the highest priority; and if the attribute type corresponding to the identification result is not the attribute type having the highest priority, using information of the to-be-detected target acquired within a next predetermined time period as the judgment information, and returning to the step of acquiring an identification result of the to-be-detected target at a current time and outputting the identification result.
Type: Application
Filed: March 13, 2020
Publication date: July 9, 2020
Inventors: Shiguo Lian, Zhaoxiang Liu, Ning Wang
-
Publication number: 20200125920
Abstract: An interaction method and apparatus for a virtual robot, a storage medium, and an electronic device are provided. The method includes: obtaining interaction information input by a user for interacting with the virtual robot; inputting the interaction information into a control model of the virtual robot, wherein the control model is trained using interaction information input by users of a live video platform and an anchor's behavior response information to that interaction information as model training samples; and performing behavior control on the virtual robot according to behavior control information output by the control model based on the interaction information. The method achieves interaction between the virtual robot and the user, improving the instantaneity, flexibility, and applicability of the virtual robot, and meeting the emotional and action communication demands between the user and the virtual robot.
Type: Application
Filed: September 12, 2019
Publication date: April 23, 2020
Inventors: Zhaoxiang Liu, Shiguo Lian, Ning Wang
-
Publication number: 20200090057
Abstract: A human-computer hybrid decision method and apparatus, relating to the field of artificial intelligence, are presented to solve the problem that system reliability is difficult to ensure with artificial intelligence alone. The method includes: determining a confidence coefficient of an artificial intelligence (AI) module for target information, wherein the confidence coefficient indicates a probability that the AI module makes a correct decision according to the target information; in response to the confidence coefficient being greater than a preset threshold, obtaining decision information made by the AI module according to the target information to serve as actual decision information; and in response to the confidence coefficient being less than the preset threshold, displaying the target information, providing an interaction interface, and obtaining manual decision information received by the interaction interface to serve as the actual decision information.
Type: Application
Filed: December 7, 2016
Publication date: March 19, 2020
Inventors: Shiguo Lian, Zhaoxiang Liu, Kai Wang, Yimin Lin, Qiang Li
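The threshold-gated handoff above could be sketched as follows. The threshold value and the three lambda stand-ins (AI module, confidence estimator, interaction interface) are assumptions for illustration, not the claimed implementation.

```python
def hybrid_decide(target_info, ai_module, confidence_of, ask_human, threshold=0.8):
    """Human-computer hybrid decision: trust the AI module when its confidence
    coefficient exceeds the threshold, otherwise hand the target information
    to an interaction interface and take the human's decision instead."""
    if confidence_of(target_info) > threshold:
        return ai_module(target_info)  # AI decision becomes the actual decision
    return ask_human(target_info)      # manual decision becomes the actual decision

# Hypothetical stand-ins for the AI module, its confidence estimate, and
# the interaction interface.
ai = lambda info: "go"
conf = lambda info: 0.95 if info == "clear road" else 0.4
human = lambda info: "stop"

print(hybrid_decide("clear road", ai, conf, human))  # prints "go"
print(hybrid_decide("fog", ai, conf, human))         # prints "stop"
```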
-
Publication number: 20200012888
Abstract: An image annotating method includes: acquiring an image collected at a terminal; acquiring voice information associated with the image; annotating the image according to the voice information; and storing an annotated result of the image.
Type: Application
Filed: September 19, 2019
Publication date: January 9, 2020
Inventors: Shiguo Lian, Zhaoxiang Liu, Ning Wang, Yibing Nan
-
Publication number: 20190294172
Abstract: The embodiment of the present invention provides a navigation method and apparatus, and a terminal device, relating to the technical field of navigation, for reducing or avoiding the impact of network delay on real-time obstacle detection and avoidance. The method includes: detecting an obstacle to obtain first obstacle information; obtaining scene information and sending the scene information to a remote server, so that the remote server obtains second obstacle information according to the scene information, wherein the accuracy of the second obstacle information is greater than the accuracy of the first obstacle information; and if the second obstacle information sent by the remote server is not received, avoiding the obstacle according to the first obstacle information. The embodiment of the present invention is applied to navigation.
Type: Application
Filed: June 10, 2019
Publication date: September 26, 2019
Inventors: Zhaoxiang Liu, Shiguo Lian
-
Patent number: 10129471
Abstract: A method for detecting a location of a laser point on a screen includes acquiring a first image frame captured by a first camera among N cameras; detecting whether a first laser point exists in the first image frame; and when the first laser point exists in the first image frame, and no laser point exists in an image frame captured by another camera except the first camera among the N cameras, using the first laser point as the laser point on the screen, and acquiring information about a location of the first laser point on the screen. In the present application, images captured by all cameras do not need to be fused first; instead, laser point detection is performed directly on a captured image frame. This greatly improves efficiency of the laser point detection and improves detection accuracy.
Type: Grant
Filed: July 11, 2016
Date of Patent: November 13, 2018
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Zhaoxiang Liu, Shiguo Lian
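The fusion-free shortcut described above (detect per camera frame; accept a point only when exactly one camera sees it) could be sketched as follows. The `detect` stub and the per-camera `to_screen` mapping are hypothetical stand-ins; a real system would use a laser-point detector and a per-camera homography to screen coordinates.

```python
def locate_laser_point(frames, detect, to_screen):
    """Run detection directly on each camera's frame (no fusion). If exactly
    one camera sees a laser point, map its image coordinates to the screen."""
    hits = [(cam, detect(frame)) for cam, frame in frames.items()]
    hits = [(cam, pt) for cam, pt in hits if pt is not None]
    if len(hits) == 1:              # one camera sees it, no others: accept it
        cam, pt = hits[0]
        return to_screen(cam, pt)   # location of the laser point on the screen
    return None                     # absent or seen by multiple cameras

# Hypothetical stand-ins for the detector and the per-camera mapping.
frames = {"cam0": "bright_dot", "cam1": "dark"}
detect = lambda frame: (120, 80) if frame == "bright_dot" else None
to_screen = lambda cam, pt: (pt[0] * 2, pt[1] * 2)

print(locate_laser_point(frames, detect, to_screen))  # prints (240, 160)
```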
-
Patent number: 9965026
Abstract: A method, a device, and a system for interactive video display relate to the field of multimedia technologies and can resolve a prior-art problem that a user's browsing experience is not realistic because the visible region is changed only according to a panning change of the user's location. A specific solution is: acquiring posture information of a user, where the posture information includes viewing angle information and location information of the user; and adjusting, according to the posture information, an unadjusted visible region in a video to be displayed on a client, in order to obtain an adjusted visible region currently to be displayed on the client. The embodiments of the present invention are used for performing interactive video display.
Type: Grant
Filed: March 4, 2016
Date of Patent: May 8, 2018
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Kai Wang, Shiguo Lian, Zhaoxiang Liu
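The adjustment described above (drive the visible region from both location and viewing angle, not location alone) could be sketched as a simple linear model. The gain values and the region/posture encodings are assumptions for illustration only.

```python
def adjust_visible_region(region, posture, pan_gain=1.0, angle_gain=10.0):
    """Shift the visible region using BOTH the user's location change
    (panning) and viewing angle; gains are illustrative, not claimed."""
    x, y, w, h = region
    dx_loc, dy_loc = posture["location"]       # location information of the user
    yaw, pitch = posture["viewing_angle"]      # viewing angle information
    return (x + pan_gain * dx_loc + angle_gain * yaw,
            y + pan_gain * dy_loc + angle_gain * pitch,
            w, h)

# Hypothetical posture: user moved 2 units right and turned slightly.
posture = {"location": (2.0, 0.0), "viewing_angle": (0.5, -0.1)}
adjusted = adjust_visible_region((0.0, 0.0, 640.0, 360.0), posture)
print(adjusted)  # prints (7.0, -1.0, 640.0, 360.0)
```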
-
Publication number: 20170046843
Abstract: A method for detecting a location of a laser point on a screen includes acquiring a first image frame captured by a first camera among N cameras; detecting whether a first laser point exists in the first image frame; and when the first laser point exists in the first image frame, and no laser point exists in an image frame captured by another camera except the first camera among the N cameras, using the first laser point as the laser point on the screen, and acquiring information about a location of the first laser point on the screen. In the present application, images captured by all cameras do not need to be fused first; instead, laser point detection is performed directly on a captured image frame. This greatly improves efficiency of the laser point detection and improves detection accuracy.
Type: Application
Filed: July 11, 2016
Publication date: February 16, 2017
Inventors: Zhaoxiang Liu, Shiguo Lian
-
Publication number: 20160259403
Abstract: A method, a device, and a system for interactive video display relate to the field of multimedia technologies and can resolve a prior-art problem that a user's browsing experience is not realistic because the visible region is changed only according to a panning change of the user's location. A specific solution is: acquiring posture information of a user, where the posture information includes viewing angle information and location information of the user; and adjusting, according to the posture information, an unadjusted visible region in a video to be displayed on a client, in order to obtain an adjusted visible region currently to be displayed on the client. The embodiments of the present invention are used for performing interactive video display.
Type: Application
Filed: March 4, 2016
Publication date: September 8, 2016
Inventors: Kai Wang, Shiguo Lian, Zhaoxiang Liu