ROBOT AND CONTROL METHOD THEREFOR

The embodiments of the present disclosure provide a robot and a control method therefor. In the robot control method, a robot may acquire posture data of a user in response to a posture interaction wakeup instruction, determine a target operation region according to the posture data of the user, and, in case that the target operation region is different from a region that a current position of the robot belongs to, move to the target operation region so as to perform a set operation task. In this way, the robot may move and operate according to postures of the user without being limited by region division, thereby further improving the flexibility of robot control.

Description
CROSS-REFERENCE TO RELATED APPLICATIONS

The present disclosure claims priority to Chinese Patent Application No. 2020100435390, filed on Jan. 15, 2020 and entitled “Robot and Control Method Therefor”, the contents of which are hereby incorporated by reference in their entirety.

TECHNICAL FIELD

The present disclosure relates to the technical field of intelligent devices, and particularly to a robot and a control method therefor.

BACKGROUND

With the development of science and technology, intelligent robots have gradually entered people's daily life and brought more convenience to it, and users' demands for interaction with robots have also increased.

In the prior art, a robot is unable to understand a control intention represented by a posture of a user. Therefore, a solution needs to be proposed.

SUMMARY

Multiple aspects of the present disclosure provide a robot and a control method therefor, so as to improve the flexibility of robot control.

An aspect of the present disclosure provides a robot control method, including: acquiring posture data of a user in response to a posture interaction wakeup instruction; determining, according to the posture data, a target operation region specified by the user, the target operation region being different from a region that a current position of a robot belongs to; and causing the robot to move to the target operation region so as to perform a set operation task.

Another aspect of the present disclosure provides a robot, including: a robot body, as well as a sensor component, controller, and motion component that are mounted to the robot body. The sensor component is configured to acquire posture data of a user in response to an operation control instruction of the user. The controller is configured to determine, according to the posture data of the user, a target operation region specified by the user, and control the motion component to move to the target operation region so as to perform an operation task.

In embodiments of the present disclosure, a robot may acquire posture data of a user in response to a posture interaction wakeup instruction, determine a target operation region according to the posture data of the user, and, in case that the target operation region is different from a region where a current position of the robot is located, move to the target operation region so as to perform a set operation task. In this way, the robot may move and operate according to postures of the user without being limited by region division, thereby further improving the flexibility of robot control.

BRIEF DESCRIPTION OF THE DRAWINGS

In order to describe the technical solutions in the embodiments of the present invention or the prior art more clearly, the drawings required to be used in descriptions about the embodiments or the prior art will be introduced briefly below. Apparently, the drawings in the description below are some embodiments of the present invention. Those of ordinary skill in the art may further obtain other drawings according to these drawings without creative work.

In the drawings:

FIG. 1 is a schematic structural diagram of a robot according to an exemplary embodiment of the present disclosure;

FIG. 2 is a schematic principle diagram of three-dimensional depth measurement according to an exemplary embodiment of the present disclosure;

FIG. 3 is a schematic flowchart of a robot control method according to an exemplary embodiment of the present disclosure;

FIG. 4a is a schematic flowchart of a robot control method according to another exemplary embodiment of the present disclosure;

FIG. 4b is a schematic diagram of acquiring posture data and detecting key points according to an exemplary embodiment of the present disclosure;

FIGS. 4c to 4d are schematic diagrams of determining a target operation direction according to a space coordinate corresponding to a gesture according to an exemplary embodiment of the present disclosure;

FIG. 5a is a schematic diagram of an operating logic of a sweeping robot according to an application scenario embodiment of the present disclosure; and

FIGS. 5b to 5d are schematic diagrams of performing, by a sweeping robot, a cleaning task according to a posture of a user according to an application scenario embodiment of the present disclosure.

DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

In order to make the objectives, technical solutions, and advantages of the present disclosure clearer, the technical solutions of the present disclosure will be described clearly and completely below in combination with specific embodiments and corresponding drawings of the present disclosure. Apparently, the described embodiments are not all but only part of embodiments of the present disclosure. All other embodiments obtained by those of ordinary skill in the art based on the embodiments in the present disclosure without creative work shall fall within the scope of protection of the present disclosure.

In the prior art, methods for specifying an operation region of a robot are relatively limited. For example, for a sweeping robot with a cleaning function, a user usually needs to specify a region to be cleaned on a navigation map of the robot provided by a terminal device, and the sweeping robot performs a cleaning task according to the region to be cleaned specified by the user on the navigation map. However, this method is heavily dependent on the terminal device. In addition, in some typical scenarios, the navigation map of the robot is incomplete, so the sweeping robot is unable to perform a cleaning task in a region not included in the navigation map, and the flexibility is relatively low.

For the foregoing technical problem, some exemplary embodiments of the present disclosure provide a robot and a robot control method. The technical solutions provided in each embodiment of the present disclosure will be described below in detail in combination with the drawings.

It is to be noted that the same reference signs represent the same objects in the following drawings and embodiments. Therefore, a certain object, once defined in one drawing, need not be discussed further in subsequent drawings.

FIG. 1 is a schematic structural diagram of a robot according to an exemplary embodiment of the present disclosure. As shown in FIG. 1, the robot includes a body 10, as well as a sensor component 20, controller 30, and motion component 40 that are mounted to the body 10.

In the present embodiment, the robot refers to an electronic device capable of moving autonomously and implementing intelligent control. In some scenarios, the robot is implemented as a robot capable of performing sweeping and cleaning tasks, such as a sweeping robot that sweeps floors, a scrubbing robot that cleans floors, walls, ceilings, glass, and motor vehicles, and an air purification robot that purifies the air. FIG. 1 illustrates the structure of the robot provided in the embodiment of the present disclosure taking a sweeping robot as an example. However, this does not mean that the robot provided in the present disclosure may be implemented as a sweeping robot only.

In some other scenarios, the robot may be implemented as a warehouse logistics robot, such as a freight robot and a delivery robot. In some other scenarios, the robot may be implemented as a robot waiter, such as a greeting robot and serving robot of a hotel, and a guide robot of a mall or store, and illustrations are omitted.

It is to be noted that the autonomous moving function of the robot may include a function of moving on the ground or a function of autonomously flying in the air. If including the function of flying in the air, the robot may be implemented as an unmanned aerial vehicle, and elaborations are omitted.

Certainly, the robots listed above are only for exemplary description, and the present embodiment includes but is not limited to them.

In the robot, the sensor component 20 is mainly configured to acquire posture data of a user in response to an operation control instruction of the user. A posture refers to a pose struck by the user, such as a head pose, a hand pose, and a leg pose. In various application scenarios of the embodiment of the present disclosure, the user may interact with the robot through a posture, and the posture data of the user is data obtained by the sensor component 20 by acquiring the posture of the user.

The sensor component 20 may be implemented by any one or more sensors capable of acquiring the posture data of the user, and no limits are made in the present embodiment. In some optional embodiments, the sensor component 20 may be implemented as a three-dimensional depth sensor configured to perform three-dimensional measurement on the user to obtain three-dimensional measurement data. The three-dimensional measurement data includes an image obtained by shooting the user and a distance between the user and the robot. The image may be a Red Green Blue (RGB) image or a gray-scale image. The distance between the user and the robot is also referred to as a depth of the object to be measured.

Implementation modes in which the three-dimensional depth sensor acquires the RGB image of the object to be measured and senses the depth of the object to be measured will be exemplarily described below in combination with optional implementation forms of the three-dimensional depth sensor.

In some embodiments, the three-dimensional depth sensor is implemented based on a binocular camera, and obtains the three-dimensional measurement data based on a binocular depth recovery technology. In this solution, two monocular cameras may be fixed to a single module, and angles and distances of the two cameras are fixed, thereby forming a stable binocular structure.

In this solution, the binocular camera may shoot the object to be measured to obtain the RGB image of the object to be measured. Meanwhile, the distance between the object to be measured and the camera may be obtained based on a triangulation ranging method and a parallax principle. When the two cameras simultaneously aim at the object to be measured, each camera forms an image of the object to be measured. Since there is a certain distance between the two cameras, the positions of the image points corresponding to the same point on the object to be measured are different in the two images. Based on this, two corresponding feature points may be extracted from the images shot by the two cameras, and the coordinate difference between the two corresponding feature points is calculated. Then, the distance between the object to be measured and the baseline of the binocular camera may be calculated by the triangulation ranging method based on the coordinate difference between the two corresponding feature points, the distance between the two cameras, and the focal length of the cameras. Further descriptions will be made below in combination with FIG. 2.

As shown in FIG. 2, the distance between the two cameras of the binocular camera is the baseline distance B, the focal length of the cameras is f, and the binocular camera shoots the same feature point P(x_c, y_c, z_c) of a space object at the same time, where (x_c, y_c, z_c) is the coordinate of the feature point P in a camera coordinate system xyz. After the feature point P(x_c, y_c, z_c) is imaged by the binocular camera, the corresponding image coordinates are p_left(x_left, y_left) and p_right(x_right, y_right) respectively.

If the images shot by the binocular camera are on the same plane, the y-axis coordinates of the points obtained after the feature point P is imaged are the same, namely y_left = y_right. Then, the following Formula 1 is obtained according to a triangle geometry relationship:

x_left = f · x_c / z_c
x_right = f · (x_c − B) / z_c
y = f · y_c / z_c        (Formula 1)

The parallax of the two points obtained after the feature point P is imaged is Δ = x_left − x_right. The three-dimensional coordinate (x_c, y_c, z_c) of the feature point P in the camera coordinate system xyz may be calculated accordingly:

x_c = B · x_left / Δ
y_c = B · y / Δ
z_c = B · f / Δ        (Formula 2)
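As an illustration, a minimal Python sketch of the depth recovery described by Formula 1 and Formula 2 is given below; the baseline, focal length, and matched pixel coordinates are hypothetical example values.

```python
def triangulate_point(x_left, x_right, y, baseline_b, focal_f):
    """Recover the camera-frame coordinate (x_c, y_c, z_c) of a feature point
    from its rectified left/right image coordinates, following Formula 1 and Formula 2."""
    disparity = x_left - x_right            # Δ = x_left - x_right
    if disparity <= 0:
        raise ValueError("Disparity must be positive for a point in front of the camera")
    x_c = baseline_b * x_left / disparity   # x_c = B * x_left / Δ
    y_c = baseline_b * y / disparity        # y_c = B * y / Δ
    z_c = baseline_b * focal_f / disparity  # z_c = B * f / Δ (depth)
    return x_c, y_c, z_c

# Hypothetical example: 12 cm baseline, focal length of 700 pixels,
# matched x-coordinates of 320 and 292 pixels, y-coordinate of 180 pixels.
print(triangulate_point(320.0, 292.0, 180.0, baseline_b=0.12, focal_f=700.0))
```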

Optionally, the binocular camera may be an infrared binocular camera, and thus may further capture depth information of the object to be measured in a low-light and even dark environment based on light of an infrared lamp.

In some other embodiments, the three-dimensional depth sensor may be implemented based on a projector capable of projecting structured light and a camera. The camera may shoot the object to be measured to obtain the RGB image of the object to be measured. The projector may project structured light of a known pattern to the object to be measured. The camera may acquire a pattern formed by reflected-back structured light. Then, the pattern formed by the projected structured light is compared with the pattern formed by the reflected-back structured light. Depth information of the object to be measured may be calculated by a triangulation ranging method based on a pattern comparison result and a fixed distance between the projector and the camera.

Optionally, the structured light projected by the projector may be speckle structured light or coded structured light. No limits are made in the present embodiment.

In some other embodiments, the three-dimensional depth sensor may be implemented based on a camera as well as an electromagnetic wave sensor such as a laser radar or a millimeter wave radar. The camera may shoot the object to be measured to obtain the RGB image of the object to be measured. An electromagnetic wave signal emitted by the laser radar or the millimeter wave radar returns after arriving at the object to be measured. The time spent by the electromagnetic wave signal on the round trip to and from the object to be measured is measured, and the distance between the object to be measured and the sensor is calculated based on this time and the transmission speed of the electromagnetic wave.
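A minimal sketch of this distance calculation, assuming the transmission speed is the speed of light and using a hypothetical round-trip time:

```python
SPEED_OF_LIGHT = 299_792_458.0  # transmission speed of the electromagnetic wave, in m/s

def range_from_round_trip(round_trip_time_s: float) -> float:
    """Distance to the object: the signal travels out and back, hence the division by 2."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

print(range_from_round_trip(20e-9))  # a 20 ns round trip corresponds to roughly 3 m
```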

Certainly, the implementation forms of the three-dimensional depth sensor listed above are only for exemplary description, and the present embodiment includes but is not limited to them.

The controller 30 is configured to determine, according to the posture data of the user acquired by the sensor component 20, a target operation region specified by the user, and control the motion component 40 to move to the target operation region so as to perform an operation task.

Optionally, the controller 30 may be implemented by various Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field-Programmable Gate Arrays (FPGAs), microcontrollers, microprocessors, Micro Control Units (MCUs), or other electronic components. No limits are made in the present embodiment.

The motion component 40 refers to a device mounted to the robot for autonomous movement of the robot, such as a moving chassis and roller of the robot. No limits are made in the present embodiment.

It is to be noted that, besides the components recorded in the above-mentioned embodiment, the robot provided in the embodiment of the present disclosure may further include a memory mounted to the body 10. The memory is configured to store a computer program, and may be configured to store various other data so as to support the operations on the robot. Examples of the data include instructions of any applications or methods operated on the robot.

The memory may be implemented by a volatile or non-volatile storage device of any type or a combination thereof, such as a Static Random Access Memory (SRAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), an Erasable Programmable Read-Only Memory (EPROM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), a magnetic memory, a flash memory, a magnetic disk, or an optical disk.

In some embodiments, the robot may further include a display component mounted to the body 10. The display component may include a screen that may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If including a TP, the screen may be implemented as a touch screen to receive an input signal from the user. The TP includes one or more touch sensors to sense touches, swipes, and gestures on the TP. The touch sensor may not only sense a boundary of a touch or swipe action but also detect a duration and pressure associated with the touch or swipe action.

In some embodiments, the robot may further include a power component mounted to the body 10. The power component may provide power for various components on the robot. The power component may include a power management system, one or more power supplies, and other components associated with the generation, management, and distribution of power for the robot. Elaborations are omitted.

Based on the robot provided in the above-mentioned embodiment, the embodiments of the present disclosure also provide a robot control method. Specific descriptions will be made below in combination with the drawings.

FIG. 3 is a schematic flowchart of a robot control method according to an exemplary embodiment of the present disclosure. As shown in FIG. 3, the method includes the following steps.

In step 301, a robot acquires posture data of a user in response to a posture interaction wakeup instruction.

In step 302, the robot determines, according to the posture data, a target operation region specified by the user, the target operation region being different from a region that a current position of the robot belongs to.

In step 303, the robot moves to the target operation region so as to perform a set operation task.

The posture interaction wakeup instruction refers to an instruction for waking up a posture interaction function of the robot. The posture interaction function refers to a function that the robot may capture a posture of the user, recognize an interaction content corresponding to the posture of the user, and perform a corresponding operation task according to the recognized interaction content. In the present embodiment, the posture interaction wakeup instruction may be given by the user directly, or given by the user through a terminal device.

In some embodiments, if being given by the user directly, the posture interaction wakeup instruction is implemented as a voice instruction given by the user to wake up the posture interaction function of the robot, such as “Look at my gesture”, “Follow my gesture”, or other voice instructions. Alternatively, the posture interaction wakeup instruction may be implemented as a gesture instruction given by the user to wake up the posture interaction function of the robot, and the gesture instruction may be defined by the user. No limits are made in the present embodiment.

In some other embodiments, the user may initiate a control operation on the robot through the terminal device to wake up the posture interaction function of the robot. Based on this, the posture interaction wakeup instruction may be implemented as a control instruction sent by the terminal device to wake up the posture interaction function of the robot. The terminal device may be implemented as a device such as a mobile phone, a tablet computer, a smart watch, a smart band, and an intelligent speaker.

Generally, the terminal device may include an electronic display screen, and the user may initiate the control operation on the robot through the display screen. The electronic display screen may include an LCD and a TP. If including a TP, the electronic display screen may be implemented as a touch screen capable of receiving an input signal from the user so as to detect the control operation of the user on the robot. Certainly, in other optional embodiments, the terminal device may include a physical button configured to provide a robot control operation for the user, a voice input device, or the like. Elaborations are omitted herein.

The terminal device is bound with the robot in advance, and they may establish a communication relationship in a wired or wireless communication mode. Based on this, the operation that the user gives the posture interaction wakeup instruction to the robot through the terminal device may be implemented based on a communication message between the terminal device and the robot.

The wireless communication mode between the terminal device and the robot includes a short-distance communication mode such as Bluetooth, ZigBee, infrared ray, and Wireless Fidelity (WiFi), a long-distance wireless communication mode such as Long Range (LoRa), or a mobile-network-based wireless communication mode. When a communication connection is established through a mobile network, a network system of the mobile network may be any one of 2nd-Generation (2G) (Global System for Mobile Communications (GSM)), 2.5th-Generation (2.5G) (General Packet Radio Service (GPRS)), 3rd-Generation (3G) (Wideband Code Division Multiple Access (WCDMA), Time Division-Synchronization Code Division Multiple Access (TD-SCDMA), Code Division Multiple Access 2000 (CDMA2000), and Universal Mobile Telecommunications System (UMTS)), 4th-Generation (4G) (Long Term Evolution (LTE)), 4G+ (LTE+), 5th-Generation (5G), World Interoperability for Microwave Access (WiMax), etc. No limits are made in the present embodiment.

The posture data of the user refers to data obtained by acquiring a pose struck by the user, such as a head pose, hand pose, and leg pose of the user.

In some optional implementation modes, the posture data of the user may be acquired by a sensor component mounted to the robot, specifically referring to the records in the above-mentioned embodiment. Elaborations are omitted herein.

In some other optional implementation modes, the posture data of the user may be acquired by a posture sensor worn on the user. For example, the posture data of the user may be acquired by a gyroscope, inertial sensor, etc., worn on an arm of the user. No limits are made in the present embodiment.

In some other optional implementation modes, the posture data of the user may be acquired by multiple sensors mounted in a space where the user is located. For example, when the robot is used in a specific space, monitoring cameras mounted in the space may be reused to shoot the user from multiple angles, and the posture data of the user is acquired based on shooting results of the multiple angles. Elaborations are omitted herein.

In the application scenario provided in the embodiment of the present disclosure, the user may instruct the robot through different poses to move to specific operation regions for operation tasks. For example, a user in a family may strike a pose of pointing to a certain room with an arm to instruct a sweeping robot to move to the specified room for a cleaning task. For another example, a foreman user of a hotel may turn the head to strike a pose of facing a certain region to instruct a robot waiter to move to the specified region for a service task.

The target operation region refers to a region that is recognized according to the posture data of the user and where the user instructs the robot to go. The target operation region is different from the region that the current position of the robot belongs to. For example, the target operation region and the current position of the robot belong to different rooms, or the target operation region and the current position of the robot are divided artificially into two different regions. After determining the target operation region, the robot may move to the target operation region across rooms or regions, so as to perform the set operation task.

In the present embodiment, the robot may acquire the posture data of the user in response to the posture interaction wakeup instruction, determine the target operation region according to the posture data of the user, and, in case that the target operation region is different from the region that the current position of the robot belongs to, move to the target operation region so as to perform the set operation task. In this way, the robot may move and operate according to postures of the user without being limited by region division, thereby further improving the flexibility of robot control.

FIG. 4a is a schematic flowchart of a robot control method according to another exemplary embodiment of the present disclosure. As shown in FIG. 4a, the method includes the following steps.

Step 401, a robot performs, in response to a posture interaction wakeup instruction, three-dimensional measurement on a user through a sensor component mounted to the robot to obtain three-dimensional measurement data.

Step 402, the robot acquires a space coordinate corresponding to a gesture of the user according to the three-dimensional measurement data.

Step 403, the robot determines, according to the space coordinate corresponding to the gesture of the user, a target operation direction specified by the user.

Step 404, the robot determines, from a candidate operation region, an operation region adapted to the target operation direction as a target operation region, the target operation region being different from a region that a current position of the robot belongs to.

Step 405, the robot moves to the target operation region so as to perform a set operation task.

In step 401, the three-dimensional measurement data optionally includes an image obtained by shooting the user and a distance between the user and the robot. A specific method for acquiring the three-dimensional measurement data may refer to the records in the above-mentioned embodiment, and will not be elaborated herein.

In step 402, the space coordinate corresponding to the gesture of the user may be acquired according to the three-dimensional measurement data.

Optionally, image recognition may be performed on the image in the three-dimensional measurement data to obtain posture key points of the user. Optionally, a method for recognizing the posture key points from the image may be implemented based on a deep learning algorithm. For example, an image recognition model may be trained based on a Convolutional Neural Network (CNN) or a Graph Convolutional Network (GCN). Exemplary descriptions will be made below with the CNN as an example.

Optionally, multiple postures of the user may be shot to obtain a large number of images. Then, posture key points on the images are labeled to obtain training samples, and the training samples are input to a CNN model for iterative training. During training, a model parameter in the CNN model may be continuously adjusted by taking the posture key points labeled on the samples as learning targets of the model until a loss function converges to a certain range.

In response to the posture interaction wakeup instruction, the robot, after shooting the image of the user, may input the image to the CNN model, and acquire the posture key points on the image according to an output of the CNN model.

The posture key points may include feature points corresponding to key parts of the user on the image, such as the eyes, the nose, the shoulders, the elbows, the wrists, the hips, the knees, and the ankles, as shown in FIG. 4b. The left and right parts of the user on the image may be distinguished while recognizing the posture key points.
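As an illustration, the following minimal sketch assumes the trained CNN outputs one heatmap per key part (a common design, though the embodiment does not mandate it) and decodes each heatmap into an image coordinate and a confidence; the part list is a hypothetical subset.

```python
import numpy as np

# Hypothetical part list; the embodiment mentions eyes, nose, shoulders, elbows,
# wrists, hips, knees, and ankles, with left and right distinguished.
PART_NAMES = ["nose", "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
              "left_wrist", "right_wrist"]

def decode_heatmaps(heatmaps: np.ndarray) -> dict:
    """heatmaps: array of shape (num_parts, H, W) produced by a trained CNN.
    Returns {part_name: (x, y, confidence)} with (x, y) in image pixels."""
    keypoints = {}
    for idx, name in enumerate(PART_NAMES):
        hm = heatmaps[idx]
        y, x = np.unravel_index(np.argmax(hm), hm.shape)  # location of the peak response
        keypoints[name] = (int(x), int(y), float(hm[y, x]))
    return keypoints

# Usage with a dummy output; the real heatmaps would come from the trained CNN model.
dummy = np.random.rand(len(PART_NAMES), 64, 48)
print(decode_heatmaps(dummy))
```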

In some scenarios, considering the convenience for posture interaction with the robot, the user may perform gesture interaction with the robot by a gesture. The technical solution provided in the embodiment of the present disclosure will be exemplarily described below with a gesture as an example.

The gesture includes a specific action and body posture presented by the user with an arm. After the posture key points are recognized, a target key point used to represent the gesture of the user may be determined from the posture key points. The arm is highly flexible, and when the user moves an arm, the fingers, the wrist, the elbow, the shoulder, and other joints may be driven to move together. Based on this, in some embodiments, when the target key points representing the gesture of the user are determined, at least a key point corresponding to the elbow of the user and a key point corresponding to the wrist may be determined, as shown in FIG. 4c. In some other embodiments, three key points corresponding to the shoulder, elbow, and wrist of the user may be determined, as shown in FIG. 4d. In some other embodiments, in order to recognize the gesture more accurately, key points corresponding to the fingers of the user may further be acquired in addition to the key point corresponding to the shoulder, the key point corresponding to the elbow, and the key point corresponding to the wrist, and illustrations are omitted.

For the distance between the user and the robot in the three-dimensional measurement data, a distance between the target key point and the robot may be determined from the distance between the user and the robot according to a coordinate of the target key point on the image. For example, when the sensor component is implemented as a binocular camera, a target key point may be recognized from an image shot by the binocular camera, and depth information corresponding to the target key point is acquired, based on a binocular depth recovery technology, from the images shot by the binocular camera as the distance between the target key point and the robot.

Then, the space coordinate corresponding to the gesture of the user may be determined according to the coordinate of the target key point and the distance between the target key point and the robot. It is to be understood that a two-dimensional coordinate of the target key point in a camera coordinate system may be acquired based on the shot image, and a third-dimension coordinate of the target key point in the camera coordinate system may be acquired based on the distance between the target key point and the robot. By combining these coordinates, the three-dimensional coordinate of the target key point in the camera coordinate system may be obtained.

Next, coordinate system conversion is performed on the three-dimensional coordinate of the target key point to convert it from the camera coordinate system into a world coordinate system, thereby obtaining the space coordinate corresponding to the gesture of the user in the world coordinate system.
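A minimal sketch of these two conversions, assuming a pinhole camera model with known intrinsics (f_x, f_y, c_x, c_y) and a known camera-to-world rotation R and translation t obtained from calibration and the robot pose; all numeric values are hypothetical.

```python
import numpy as np

def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project an image coordinate (u, v) with a known depth into the camera coordinate system."""
    x_c = (u - cx) * depth / fx
    y_c = (v - cy) * depth / fy
    return np.array([x_c, y_c, depth])

def camera_to_world(p_camera, rotation_cw, translation_cw):
    """Convert a camera-frame point into the world coordinate system: p_w = R * p_c + t."""
    return rotation_cw @ p_camera + translation_cw

# Hypothetical wrist key point at pixel (350, 210) with a measured depth of 2.1 m.
p_cam = pixel_to_camera(350, 210, 2.1, fx=700.0, fy=700.0, cx=320.0, cy=240.0)
R = np.eye(3)                      # assumed camera orientation in the world frame
t = np.array([0.5, 0.0, 0.3])      # assumed camera position in the world frame, in meters
print(camera_to_world(p_cam, R, t))
```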

In step 403, the operation direction specified by the user may be determined according to the space coordinate corresponding to the gesture of the user.

Optionally, in this step, straight line fitting may be performed according to the space coordinates corresponding to the gesture of the user to obtain a space straight line, and then a direction of extension of the space straight line toward the end of the gesture of the user is determined as the operation direction specified by the user, as shown in FIGS. 4c and 4d. If the key points acquired from the image include the key point corresponding to the shoulder, the direction of extension toward the end of the gesture of the user refers to a direction of extension from the shoulder to the elbow, or a direction of extension from the shoulder to the wrist, or a direction of extension from the shoulder to the finger. If the key points acquired from the image include the key point corresponding to the elbow, the direction of extension toward the end of the gesture of the user refers to a direction of extension from the elbow to the wrist, or a direction of extension from the elbow to the finger. Elaborations are omitted.
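A minimal sketch of the straight line fitting, assuming the space coordinates of the shoulder, elbow, and wrist key points are already available in the world coordinate system; the direction is oriented toward the end of the gesture by pointing it from the first key point toward the last, and the coordinates below are hypothetical.

```python
import numpy as np

def fit_gesture_line(points):
    """points: array of shape (N, 3), ordered from shoulder/elbow toward the wrist/finger.
    Returns a point on the fitted space straight line and a unit direction that
    extends toward the end of the gesture."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The principal axis of the centered points is the best-fit line direction.
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0]
    # Orient the direction from the first key point toward the last (end of the gesture).
    if np.dot(direction, pts[-1] - pts[0]) < 0:
        direction = -direction
    return centroid, direction / np.linalg.norm(direction)

# Hypothetical shoulder, elbow, and wrist coordinates in meters.
origin, direction = fit_gesture_line([[0.0, 0.0, 1.4], [0.3, 0.1, 1.2], [0.6, 0.2, 1.0]])
print(origin, direction)
```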

In step 404, that the target operation region is different from the region that the current position of the robot belongs to may include that there is a physical obstacle or virtual obstacle between the target operation region and the region that the current position belongs to. Exemplary descriptions will be made below in combination with different application scenarios.

In a typical application scenario, the robot is implemented as a household sweeping robot, the target operation region may be implemented as a certain room in a house, and there is a wall or door between the room and a room where the robot is currently located. For example, the robot is currently in a living room, and the target operation region is a bedroom.

In another typical application scenario, the robot is implemented as a robot waiter for a restaurant. If the restaurant is relatively large, different service regions may be divided for different robots so as to ensure that services are provided to customers in an orderly manner. In this scenario, virtual walls may be set between different service regions in the restaurant so as to generate navigation maps available for the robots according to the virtual walls. The virtual walls exist not in the physical space but on the navigation maps of the robots, and the robots may move and operate within the operation ranges respectively specified for them according to the navigation maps generated according to the virtual walls. In this scenario, the target operation region may be implemented as another region separated by a virtual wall from the region where the robot is currently located. For example, the robot is currently in dining region A, the target operation region is in dining region B, and there is a virtual wall between dining region A and dining region B.

The candidate operation region refers to all regions that the robot may move to and perform operation tasks in. For example, in a family environment, the candidate operation region includes all rooms in a house. In a restaurant environment, the candidate operation region includes all dining regions provided by the restaurant. For another example, in a mall environment, the candidate operation region may include all shop regions in the mall. Elaborations are omitted.

The target operation direction refers to the direction in which the user instructs the robot to move. Therefore, after the target operation direction is acquired, the operation region adapted to the target operation direction may be determined from the candidate operation region as the target operation region.

The operation direction specified by the user is represented by the direction of extension of the space straight line to the end of the gesture of the user. Based on this, an intersection position of the space straight line and a plane where the candidate operation region is located may be calculated, and the target operation region specified by the user is determined according to the intersection position.

When the intersection position of the space straight line and the plane where the candidate operation region is located is calculated, the plane where the candidate operation region is located is taken as a space plane, and the process of calculating the intersection position is converted into a process of calculating an intersection of the space straight line and the space plane.

Generally, the robot and the candidate operation region are on the same plane. If a three-dimensional coordinate system XYZ is established according to a space where the robot is located, the plane where the candidate operation region is located may be regarded as a plane whose Z is 0.
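Since the plane where the candidate operation region is located can be regarded as the plane Z = 0, the intersection position reduces to a line-plane intersection. A minimal sketch under that assumption, using hypothetical values:

```python
import numpy as np

def intersect_with_ground(line_point, line_direction):
    """Intersection of the space straight line p = p0 + t * d with the plane z = 0.
    Returns the (x, y) intersection position, or None if the line runs parallel to
    the ground or the ground lies behind the direction of extension."""
    p0 = np.asarray(line_point, dtype=float)
    d = np.asarray(line_direction, dtype=float)
    if abs(d[2]) < 1e-9:
        return None                 # gesture direction parallel to the ground plane
    t = -p0[2] / d[2]
    if t < 0:
        return None                 # the ground is behind the direction of extension
    hit = p0 + t * d
    return hit[0], hit[1]

# Hypothetical line: origin near shoulder height, pointing forward and downward.
print(intersect_with_ground([0.0, 0.0, 1.4], [0.83, 0.28, -0.55]))
```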

In case of different intersection positions, implementation modes of determining the target operation region specified by the user are also different. Exemplary descriptions will be made below.

Implementation mode 1: if the intersection position is within a known operation region of the robot, the operation region where the intersection position is located is determined as the target operation region. The known operation region refers to a region included in a navigation map of the robot.

Implementation mode 2: if the intersection position is not within the known operation region of the robot, and an included angle between the space straight line and the plane where the candidate operation region is located is greater than a set angle threshold, an operation region closest to the current position of the robot in the operation direction specified by the user may be determined from the known operation region as the target operation region.

The included angle between the space straight line and the plane where the candidate operation region is located is as shown in FIG. 4c. The set angle threshold may be set as practically needed. Optionally, a maximum gesture angle capable of covering the whole candidate region is calculated according to the area of the candidate region, and the maximum gesture angle is determined as the angle threshold. If the included angle between the space straight line and the plane where the candidate operation region is located is greater than the set angle threshold, it is considered that the angle to which the gesture of the user points is inappropriate, for example, the arm of the user is raised too high and is even nearly parallel to the ground.

When the operation region closest to the current position of the robot in the operation direction specified by the user is determined from the known operation region, the space straight line may be projected to the navigation map of the robot to obtain a projected straight line. Then, an operation region intersecting the projected straight line and closest to the current position of the robot on the navigation map is determined as the target operation region.
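A minimal sketch of implementation modes 1 and 2, assuming for simplicity that the known operation regions are stored as axis-aligned rectangles on the navigation map (real maps may store arbitrary region shapes); the region names, rectangles, and numeric values are hypothetical.

```python
import numpy as np

def point_in_rect(p, rect):
    (xmin, ymin, xmax, ymax) = rect
    return xmin <= p[0] <= xmax and ymin <= p[1] <= ymax

def select_target_region(intersection, robot_xy, direction_xy, regions,
                         angle_deg, angle_threshold_deg):
    """regions: {name: (xmin, ymin, xmax, ymax)} rectangles on the navigation map (assumed format)."""
    # Implementation mode 1: the intersection position lies within a known operation region.
    for name, rect in regions.items():
        if point_in_rect(intersection, rect):
            return name
    # Implementation mode 2: intersection outside the known regions and angle above the threshold:
    # walk along the projected straight line and return the first known region it enters,
    # i.e., the region closest to the robot in the specified operation direction.
    if angle_deg > angle_threshold_deg:
        d = np.asarray(direction_xy, float)
        d = d / np.linalg.norm(d)
        for step in np.arange(0.1, 30.0, 0.1):          # sample the projected ray every 10 cm
            p = np.asarray(robot_xy, float) + step * d
            for name, rect in regions.items():
                if point_in_rect(p, rect):
                    return name
    return None  # fall through to implementation mode 3 (searching for an unknown region)

regions = {"room_1": (0.0, 3.0, 4.0, 6.0), "room_2": (5.0, 3.0, 9.0, 6.0)}
print(select_target_region((12.0, 8.0), (1.0, 0.0), (1.0, 0.6), regions,
                           angle_deg=40, angle_threshold_deg=30))
```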

Implementation mode 3: if the intersection position is not within the known operation region of the robot, and the included angle between the space straight line and the plane is less than or equal to the angle threshold, the operation direction specified by the user is searched for the target operation region according to the intersection position.

When the included angle between the space straight line and the plane is less than or equal to the angle threshold, it may be considered that the user points to a reasonable angle, but there is a missing operation region on the navigation map of the robot. For example, taking a sweeping robot as an example, the gesture of the user specifies a kitchen as a target region to be swept, but there is no kitchen on the navigation map of the robot.

In such case, the operation direction specified by the user may be searched for the target operation region according to the intersection position, thereby completing the operation task specified by the user.

Optionally, the robot may move in the operation direction specified by the user until encountering a target obstacle. The target obstacle may be a wall. After encountering the target obstacle, the robot may move along an edge of the target obstacle in a direction approaching the intersection position until detecting an entrance. The entrance is usually where the obstacle is interrupted, such as a door in the wall. If an operation region that the entrance belongs to is not within the known operation region, the operation region that the entrance belongs to may be determined as the target operation region.

It is to be noted that, in some cases, the entrance detected by the robot is on the navigation map of the robot, and the operation region that the entrance belongs to is a part of the known operation region of the robot. In such case, the robot may consider that an entrance of the target operation region has not yet been detected. Then, the robot may enter the known operation region from the detected entrance, and continue to move along an edge of an obstacle in the known operation region in a direction approaching the intersection position until detecting a new entrance. If an operation region that the new entrance belongs to is not within the known operation region, the operation region that the new entrance belongs to may be determined as the target operation region. Based on this, the robot realizes a function of moving to a region not included in the navigation map for operation tasks, whereby the hands of the user are further freed.
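A high-level sketch of this search procedure; the robot object and its methods (move_toward, wait_for_obstacle, follow_edge_until_entrance, region_of, enter) are hypothetical placeholders for whatever motion-control and mapping interfaces a particular robot provides.

```python
def search_unknown_region(robot, direction, intersection, known_regions, max_entrances=10):
    """Move in the specified operation direction, then follow obstacle edges toward the
    intersection position until an entrance leading out of the known operation region is found.
    `robot` is a hypothetical controller object; its methods are placeholders."""
    robot.move_toward(direction)                  # move until a target obstacle (e.g., a wall) is hit
    obstacle = robot.wait_for_obstacle()
    for _ in range(max_entrances):
        # Follow the obstacle edge in the direction that approaches the intersection position.
        entrance = robot.follow_edge_until_entrance(obstacle, approach=intersection)
        if entrance is None:
            return None                            # no entrance found; give up the search
        region = robot.region_of(entrance)         # region that the detected entrance belongs to
        if region not in known_regions:
            return region                          # an unknown region: this is the target operation region
        # The entrance leads into an already-known region: enter it and keep searching
        # along the edges of obstacles inside that region.
        robot.enter(entrance)
        obstacle = robot.wait_for_obstacle()
    return None
```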

It is to be noted that the navigation map of the robot is usually generated according to a historical movement trajectory of the robot, and the navigation map includes the known operation region of the robot. For example, for a sweeping robot, when used by the user in a house for the first time, the sweeping robot may move to clean accessible rooms in the house, and synchronously draw a navigation map according to its movement trajectory.

During the first cleaning, if a door of a certain room in the house happens to be closed and the robot does not move into this room, the generated navigation map does not include a map region corresponding to this room. When the robot is used next time, the door of this room is opened, but the robot does not know that the cleaning environment has changed, and thus may still fail to clean this region in time. If the user instructs the sweeping robot with a gesture to clean this room, the robot may look for this room according to the method provided in implementation mode 3, and clean it.

It is to be noted that, in implementation mode 3, the robot, after finding the target operation region in the operation direction specified by the user, may perform the operation task in the target operation region, and further update a navigation map corresponding to the known operation region according to a trajectory formed by performing the operation task. Based on this, the exploration of an unknown operation region and the real-time updating of the navigation map are implemented, which contributes more to improving the efficiency of subsequently performing operation tasks.

In step 405, optionally, when the robot moves to the target operation region so as to perform the set operation task, if the target operation region is within the known operation region, as in implementation mode 1 and implementation mode 2, the robot may plan a path to the target operation region according to a navigation map corresponding to the known operation region, and move to the target operation region along the planned path.
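The embodiment does not name a particular planning algorithm; as one possible sketch, a breadth-first search over an occupancy-grid navigation map (a simplified representation) yields a shortest cell path from the robot's current cell to a cell inside the target operation region. The grid, start, and goal below are hypothetical.

```python
from collections import deque

def plan_path(grid, start, goal):
    """grid: 2D list where 0 = free cell and 1 = obstacle (a simplified navigation map).
    Returns a list of (row, col) cells from start to goal, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:               # walk back through the predecessors
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

grid = [[0, 0, 0, 1, 0],
        [1, 1, 0, 1, 0],
        [0, 0, 0, 0, 0]]
print(plan_path(grid, start=(0, 0), goal=(2, 4)))
```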

In the present embodiment, the robot, after acquiring the three-dimensional measurement data of the user, may acquire, according to the three-dimensional measurement data, the target operation direction specified by the user, and determine the operation region adapted to the target operation direction from the candidate operation region as the target operation region. In case that the target operation region is different from the region that the current position of the robot belongs to, the robot may move to the target operation region so as to perform the set operation task. In this way, the robot may move and operate according to postures of the user without being limited by region division, thereby further improving the flexibility of controlling the robot by the user.

It is to be noted that the execution body of each step of the method provided in the above-mentioned embodiment may be the same device, or the method may be executed by different devices. For example, the execution body of steps 401 to 403 may be device A. For another example, the execution body of steps 401 and 402 may be device A, and the execution body of step 403 may be device B.

In addition, some flows described in the above-mentioned embodiments and the drawings include multiple operations executed according to a specific sequence. However, it is to be clearly understood that these operations may be executed in sequences different from those specified herein or concurrently. The sequence numbers of the operations, such as 401 and 402, are only for distinguishing different operations and do not represent any execution sequence.

It is to be noted that descriptions such as “first” and “second” herein are used to distinguish different messages, devices, modules, etc., and do not represent a sequence, nor do they limit “first” and “second” to be of different types.

The robot control method provided in the embodiments of the present disclosure will further be described below with a specific application scenario in combination with FIGS. 5a to 5d.

In a typical application scenario, the robot provided in each of the above-mentioned embodiments is implemented as a sweeping robot. The sweeping robot may execute the robot control method provided in each of the above-mentioned embodiments.

When the sweeping robot is used, the user may wake up a posture interaction function of the sweeping robot through a voice instruction, and control the sweeping robot with gestures to move to different rooms for cleaning. For example, the user may speak “Follow my instruction” to the sweeping robot. After the posture interaction function of the sweeping robot is woken up, the user may be shot by a binocular camera mounted to the sweeping robot to obtain an image of the user. Then, a human body imaging result on the image is recognized by a deep learning technology, and each key point of the human body is recognized on the imaging result. Next, target key points corresponding to a gesture, i.e., the key point corresponding to the elbow and the key point corresponding to the wrist, are selected from the recognized key points.

Then, depth information of the target key points is acquired based on a binocular depth recovery technology from the image acquired by the binocular camera. Three-dimensional coordinates of the target key points may be obtained based on coordinates of the target key points on the image and the calculated depth information.

Then, a direction specified by the gesture of the human body is calculated according to the three-dimensional coordinates of the target key points, and a position coordinate specified by the gesture of the human body on the ground is calculated, i.e., an intersection of a space straight line formed by the elbow key point and the wrist key point and the ground. For ease of description, the intersection is described as a specified point.

Then, whether the specified point is within a current navigation map region of the sweeping robot is judged.

In one case, as shown in FIG. 5b, the specified point is within the current navigation map region. In such case, the specified point is usually close to the user, and a position of the specified point is clear. Then, the robot may directly move to a room region where the specified point belongs to for cleaning. As illustrated in FIG. 5b, the specified point is in room 2 on the navigation map. In such case, the sweeping robot may move to room 2 to perform a cleaning task.

In another case, the specified point is not within the current navigation map region. In such case, the specified point is usually far from the user. There may be two reasons for this case.

First, the angle to which the gesture of the user points is unreasonable, for example, the gesture points at an excessively large angle or is even horizontal, and as a result, the specified point does not fall on the current navigation map or is beyond a maximum range reachable for the robot. Then, it may be considered that the region to be cleaned is relatively far, as shown in FIG. 5c. In such case, the sweeping robot may search the specified direction across rooms for a room closest to the region where the robot is currently located for cleaning.

For example, as shown in FIG. 5c, the specified point is beyond the maximum range reachable for the robot. Room 3 is closest to the region where the sweeping robot is currently located in the specified direction, so room 3 may be determined as a target region to be cleaned, and the sweeping robot may move to room 3 to perform a cleaning task.

Second, the angle to which the gesture of the user points is reasonable, but the specified point does not exist in the current navigation map, or the specified point is not on the current navigation map but within the maximum range reachable for the robot. Then, it may be considered that the navigation map is incomplete and there is a missing room.

In such case, as shown in FIG. 5d, the sweeping robot may move to an edge of a closest obstacle in the specified direction, and then start searching for an accessible door or entrance along the edge. After finding the door or entrance, the robot may enter the region that the door or entrance belongs to; if this region is on the current navigation map, the robot may continue to move in this region to an edge of a closest obstacle in the specified direction until finding a next door or entrance whose region is not on the navigation map.

For example, as shown in FIG. 5d, the sweeping robot, after finding a door of room 1 and entering room 1, finds that room 1 is on the navigation map, and then may continue to search the specified direction in room 1 for an accessible door or entrance. The sweeping robot, after finding an entrance of room 3, finds that room 3 is not on the navigation map, and then may determine room 3 as a target region to be cleaned and start to clean room 3. The sweeping robot, when cleaning room 3, may record a map of room 3 according to a movement trajectory, and accordingly update the current navigation map.

Based on the above-mentioned implementation mode, in the application scenario of the sweeping robot, the user may conveniently interact with the sweeping robot through a posture, and the sweeping robot may accurately reach a region to be cleaned according to an instruction of the user without being limited by obstacles (room walls, etc.). Therefore, personalized cleaning requirements of a family are met, and the hands of the user are further freed.

In another typical application scenario, the robot provided in each of the above-mentioned embodiments is implemented as an air purification robot. The air purification robot may execute the robot control method provided in each of the above-mentioned embodiments.

When the air purification robot is used, the user may wake up a posture interaction function of the air purification robot through a voice instruction, and control the air purification robot with gestures to move to different rooms for air purification tasks. For example, the user may speak “Follow my gesture” to the air purification robot. After the posture interaction function of the air purification robot is woken up, the user may be shot by a binocular camera mounted to the air purification robot to obtain an image of the user. Then, a three-dimensional coordinate of a target key point used to represent a gesture of the user is acquired based on a deep learning technology and a binocular depth recovery technology from the acquired image.

Then, a direction specified by the gesture of the human body is calculated according to the three-dimensional coordinate of the target key point, and a position coordinate specified by the gesture of the human body on the ground is calculated, i.e., an intersection of a space straight line formed by an elbow key point and a wrist key point and the ground. For ease of description, the intersection is described as a specified point.

Then, whether the specified point is within a current navigation map region of the air purification robot is judged.

If the specified point is within the current navigation map region, the robot may determine a movement path according to the navigation map, and directly move into a room region that the specified point belongs to so as to perform an air purification task.

If the specified point is not within the current navigation map region, whether the angle to which the gesture of the user points is reasonable may further be judged. If the angle to which the gesture of the user points is unreasonable, the air purification robot may search the specified direction across rooms for a room closest to the region where the robot is currently located to perform the air purification task. If the angle to which the gesture of the user points is reasonable, but the specified point does not exist in the current navigation map, or the specified point is not on the current navigation map but within a maximum range reachable for the robot, it may be considered that the navigation map is incomplete and there is a missing room.

If there is a missing room, the air purification robot may move to an edge of a closest obstacle in the specified direction, and then start searching for an accessible door or entrance along the edge. After finding the door or entrance, the robot may enter the region that the door or entrance belongs to; if this region is on the current navigation map, the robot may continue to move in this region to an edge of a closest obstacle in the specified direction until finding a next door or entrance whose region is not on the navigation map. In such case, the air purification robot may determine the region that the door or entrance belongs to as a target air purification region, and start to perform the air purification task. The air purification robot, when performing the air purification task, may further record a map of the target air purification region according to its movement trajectory, and accordingly update the current navigation map.

In another typical application scenario, the robot provided in each of the above-mentioned embodiments is implemented as an unmanned aerial vehicle. The unmanned aerial vehicle may execute the robot control method provided in each of the above-mentioned embodiments.

It is assumed that the unmanned aerial vehicle needs to perform an aerial photographing task in a relatively large park that includes multiple buildings, and the multiple buildings belong to different regions on a navigation map of the unmanned aerial vehicle. When the unmanned aerial vehicle is controlled, the user may wake up a posture interaction function of the unmanned aerial vehicle through a voice instruction, and control the unmanned aerial vehicle with gestures to move to different buildings for photographing tasks.

For example, the user may speak “Follow my gesture” to the unmanned aerial vehicle. After the posture interaction function of the unmanned aerial vehicle is woken up, the user may be shot by a binocular camera mounted to the unmanned aerial vehicle to obtain an image of the user. Then, a three-dimensional coordinate of a target key point used to represent a gesture of the user is acquired based on a deep learning technology and a binocular depth recovery technology from the shot image.

Then, a direction specified by the gesture of the human body is calculated according to the three-dimensional coordinate of the target key point, and a position coordinate specified by the gesture of the human body on the ground is calculated, i.e., an intersection of a space straight line formed by an elbow key point and a wrist key point and the ground. For ease of description, the intersection is described as a specified point.

Then, whether the specified point is within a current navigation map region of the unmanned aerial vehicle is judged.

If the specified point is within the current navigation map region, the unmanned aerial vehicle may determine a flight path according to the navigation map, and directly fly to the region that the specified point belongs to so as to perform a photographing task.

If the specified point is not within the current navigation map region, whether the angle to which the gesture of the user points is reasonable may further be judged. If the angle to which the gesture of the user points is unreasonable, the unmanned aerial vehicle may search the specified direction across regions for a building closest to the region where the unmanned aerial vehicle is currently located to perform the photographing task. If the angle to which the gesture of the user points is reasonable, but the specified point does not exist in the current navigation map, or the specified point is not on the current navigation map but within the maximum range reachable for the unmanned aerial vehicle in the park, it may be considered that the navigation map of the park is incomplete and there is a missing building.

If there is a missing building, the unmanned aerial vehicle may fly along the specified direction until it finds a new region that corresponds to the specified direction but is not on the navigation map. In such case, the unmanned aerial vehicle may determine the new region as the target region to be photographed and start to perform the photographing task. When performing the photographing task, the unmanned aerial vehicle may further draw a map of the target region according to the position distribution and region area of the target region, and update the current navigation map accordingly.
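As a rough illustration of the map update, the navigation map is modelled below as a plain dictionary of axis-aligned region bounding boxes computed from the recorded trajectory; this representation is an assumption made only for the sketch and is not the representation used by the disclosure.

```python
# Minimal sketch, assuming the map is a dict of bounding boxes keyed by region name.

def update_navigation_map(nav_map, region_name, visited_positions):
    """Add a bounding box covering the recorded trajectory of the new region."""
    xs = [p[0] for p in visited_positions]
    ys = [p[1] for p in visited_positions]
    nav_map[region_name] = {
        "x_min": min(xs), "x_max": max(xs),
        "y_min": min(ys), "y_max": max(ys),
    }
    return nav_map

nav_map = {}
trajectory = [(10.0, 2.0), (12.5, 2.0), (12.5, 5.0), (10.0, 5.0)]
update_navigation_map(nav_map, "new_building", trajectory)
```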

It is to be noted that the embodiments of the present disclosure also provide a computer-readable storage medium storing a computer program. When the computer program is executed, each step of the robot control method in the method embodiment may be implemented.

It is to be understood by those skilled in the art that the embodiment of the present invention may be provided as a method, a system, or a computer program product. Therefore, the form of a pure hardware embodiment, a pure software embodiment, or an embodiment combining software and hardware may be used in the present invention. Moreover, the form of a computer program product implemented on one or more computer-available storage media (including, but not limited to, a disk memory, a Compact Disc Read-Only Memory (CD-ROM), an optical memory, etc.) including computer-available program codes may be used in the present invention.

The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present invention. It is to be understood that each flow and/or block in the flowcharts and/or the block diagrams and combinations of the flows and/or blocks in the flowcharts and/or the block diagrams may be implemented by computer program instructions. These computer program instructions may be provided for a general-purpose computer, a special-purpose computer, an embedded processor, or a controller of another programmable data processing device to generate a machine, so that a device for realizing functions specified in one or more flows in the flowcharts and/or one or more blocks in the block diagrams is generated by the instructions executed by the computer or the controller of the other programmable data processing device.

Alternatively, these computer program instructions may be stored in a computer-readable memory capable of guiding the computer or the other programmable data processing device to work in a specific manner, so that a product including an instruction device may be generated by the instructions stored in the computer-readable memory, the instruction device realizing the functions specified in one or more flows in the flowcharts and/or one or more blocks in the block diagrams.

Alternatively, these computer program instructions may be loaded onto the computer or the other programmable data processing device, so that a series of operating steps are executed in the computer or the other programmable data processing device to generate processing implemented by the computer, and steps for realizing the functions specified in one or more flows in the flowcharts and/or one or more blocks in the block diagrams are provided by the instructions executed on the computer or the other programmable data processing device.

In a typical configuration, a computing device includes one or more Central Processing Units (CPUs), an input/output interface, a network interface, and an internal memory.

The internal memory may include a volatile memory such as a Random Access Memory (RAM), and/or a non-volatile memory such as a Read-Only Memory (ROM) or a flash RAM, among computer-readable media. The internal memory is an example of the computer-readable medium.

The computer-readable medium includes non-volatile and volatile, as well as removable and non-removable media, and may store information by any method or technology. The information may be a computer-readable instruction, a data structure, a program module, or other data. Examples of the computer storage medium include, but are not limited to, a Phase-change RAM (PRAM), a Static RAM (SRAM), a Dynamic RAM (DRAM), a RAM of another type, a ROM, an Electrically Erasable Programmable ROM (EEPROM), a flash memory or another memory technology, a CD-ROM, a Digital Video Disk (DVD) or another optical memory, a cassette tape, a magnetic tape/disk memory or another magnetic storage device, or any other non-transmission medium. The storage medium may be configured to store information accessible to the computing device. As defined herein, the computer-readable medium does not include transitory media, such as a modulated data signal and a carrier.

It is also to be noted that terms "include" and "contain" or any other variations thereof are intended to cover nonexclusive inclusions, such that a process, method, commodity, or device including a series of elements not only includes those elements, but may also include other elements that are not clearly listed, or further includes elements inherent to the process, the method, the commodity, or the device. With no further restrictions, an element defined by the statement "including a/an . . . " does not exclude the existence of another identical element in the process, method, commodity, or device including the element.

The above are only embodiments of the present disclosure and are not intended to limit the present disclosure. For those skilled in the art, various modifications and variations may be made to the present disclosure. Any modifications, equivalent replacements, improvements, etc., made within the spirit and principle of the present disclosure shall fall within the scope of protection of the claims of the present disclosure.

Claims

1. A robot control method, comprising:

acquiring posture data of a user in response to a posture interaction wakeup instruction;
determining, according to the posture data, a target operation region specified by the user; wherein the target operation region is different from a region that a current position of a robot belongs to; and
causing the robot to move to the target operation region so as to perform a set operation task.

2. The method according to claim 1, wherein the posture interaction wakeup instruction comprises at least one of:

a voice instruction given by the user to wake up a posture interaction function of the robot;
a control instruction given by the user through a terminal device to wake up the posture interaction function of the robot; and
a gesture instruction given by the user to wake up the posture interaction function of the robot.

3. The method according to claim 1, wherein the acquiring posture data of a user comprises:

performing three-dimensional measurement on the user through a sensor component mounted to the robot to obtain three-dimensional measurement data; and
acquiring a space coordinate corresponding to a gesture of the user as the posture data of the user according to the three-dimensional measurement data.

4. The method according to claim 3, wherein the three-dimensional measurement data comprises an image obtained by shooting the user and a distance between the user and the robot.

5. The method according to claim 4, wherein the acquiring a space coordinate corresponding to a gesture of the user according to the three-dimensional measurement data comprises:

recognizing the image to obtain posture key points of the user;
determining, from the posture key points, a target key point used to represent the gesture of the user;
determining a distance between the target key point and the robot according to the distance between the user and the robot; and
determining the space coordinate corresponding to the gesture of the user according to a coordinate of the target key point and the distance between the target key point and the robot.

6. The method according to claim 3, wherein the determining, according to the posture data, a target operation region specified by the user comprises:

determining, according to the space coordinate corresponding to the gesture of the user, a target operation direction specified by the user; and
determining, from a candidate operation region, an operation region adapted to the target operation direction as the target operation region.

7. The method according to claim 6, wherein the determining, according to the space coordinate corresponding to the gesture of the user, a target operation direction specified by the user comprises:

performing straight line fitting according to the space coordinate corresponding to the gesture of the user to obtain a space straight line; and
determining a direction of extension of the space straight line to an end of the gesture of the user as the operation direction specified by the user.

8. The method according to claim 7, wherein the determining, from a candidate operation region, an operation region adapted to the target operation direction as the target operation region comprises:

calculating an intersection position of the space straight line and a plane where the candidate operation region is located; and
determining, according to the intersection position, the target operation region specified by the user.

9. The method according to claim 8, wherein the determining, according to the intersection position, the target operation region specified by the user comprises any one of the following steps:

if the intersection position is within a known operation region of the robot, determining the operation region where the intersection position is located as the target operation region;
if the intersection position is not within the known operation region of the robot, and an included angle between the space straight line and the plane is greater than a set angle threshold, determining, from the known operation region, an operation region closest to the current position of the robot in the operation direction specified by the user as the target operation region; and
if the intersection position is not within the known operation region of the robot, and the included angle between the space straight line and the plane is less than or equal to the angle threshold, searching the operation direction specified by the user for the target operation region according to the intersection position.

10. The method according to claim 9, wherein the searching the operation direction specified by the user for the target operation region according to the intersection position comprises:

causing the robot to move to the operation direction specified by the user until encountering a target obstacle;
causing the robot to move to a direction of approaching the intersection position along an edge of the target obstacle until detecting an entrance; and
determining an operation region that the entrance belongs to as the target operation region, the operation region that the entrance belongs to being not within the known operation region.

11. The method according to claim 9, further comprising:

after the target operation region is found in the operation direction specified by the user, performing the operation task in the target operation region, and updating a navigation map corresponding to the known operation region according to a trajectory formed by performing the operation task.

12. The method according to claim 9, wherein the causing the robot to move to the target operation region so as to perform a set operation task comprises:

if the target operation region is within the known operation region, planning a path to the target operation region according to a navigation map corresponding to the known operation region; and
causing the robot to move to the target operation region according to the path to the target operation region.

13. The method according to claim 1, wherein there is a physical obstacle or virtual obstacle between the target operation region and the region that the current position of the robot belongs to.

14. A robot, comprising a robot body, a sensor component, a controller, and a motion component that are mounted to the robot body; wherein,

the sensor component is configured to acquire posture data of a user in response to an operation control instruction of the user; and
the controller is configured to determine, according to the posture data of the user, a target operation region specified by the user, and control the motion component to move to the target operation region so as to perform an operation task.

15. The robot according to claim 14, wherein the sensor component comprises a depth sensor.

16. The method according to claim 2, wherein there is a physical obstacle or virtual obstacle between the target operation region and the region that the current position of the robot belongs to.

17. The method according to claim 3, wherein there is a physical obstacle or virtual obstacle between the target operation region and the region that the current position of the robot belongs to.

18. The method according to claim 4, wherein there is a physical obstacle or virtual obstacle between the target operation region and the region that the current position of the robot belongs to.

19. The method according to claim 5, wherein there is a physical obstacle or virtual obstacle between the target operation region and the region that the current position of the robot belongs to.

20. The method according to claim 6, wherein there is a physical obstacle or virtual obstacle between the target operation region and the region that the current position of the robot belongs to.

Patent History
Publication number: 20230057965
Type: Application
Filed: Dec 31, 2020
Publication Date: Feb 23, 2023
Applicant: ECOVACS ROBOTICS CO., LTD. (Suzhou)
Inventors: Rui PENG (Suzhou), Qingxiang SONG (Suzhou)
Application Number: 17/793,356
Classifications
International Classification: A47L 11/40 (20060101); A47L 9/00 (20060101);