COMPUTER-IMPLEMENTED METHOD FOR CONTROLLING A ROBOT, ROBOT CONTROL METHOD, SYSTEM, ARTICLE MANUFACTURING METHOD, AND RECORDING MEDIUM
A computer-implemented method for controlling a robot includes acquiring a current image showing a positional relationship between a first object held by the robot and a second object, determining whether the first object and the second object are in a reference positional relationship based on the current image, and controlling, in a case where a computer determines that the first object and the second object are in the reference positional relationship based on the current image, a position and a posture of the robot according to a trajectory of the robot set before the determining such that the first object and the second object have a target positional relationship.
The present invention relates to a robot control method and the like.
Description of the Related Art
Robots are being introduced in various fields including production sites. As one method for controlling a position and a posture of a robot, visual servoing in which a change in position of a target is measured as visual information, and the visual information is used as feedback information, is known.
JP 2015-74058 A discloses a robot control device that performs control by adding an output of visual servoing control based on visual information and an output of force control based on a force detection result at different ratios according to a distance from a target position.
JP 2013-180380 A discloses a control device in which a force control system is incorporated in a visual servoing system, and which controls a robot based on both load information applied to a target object and positional relationship between the target object and a component to be assembled.
For example, in a case of causing a robot hand to grip component 1 and perform work of assembling component 1 to component 2, when the robot is controlled by a control method according to a related art, the assembling work may not be performed correctly. Specifically, in a case where unintended deviation occurs in a relative position between component 1 and component 2 while the robot hand grips and moves component 1, the robot hand may not correctly assemble the components even when the robot hand is moved according to a taught trajectory.
Therefore, for the control of robots, there has been a demand for a technology that is advantageous in positioning two objects in an appropriate target positional relationship.
SUMMARY OF THE INVENTION
According to a first aspect of the present invention, a computer-implemented method for controlling a robot includes acquiring a current image showing a positional relationship between a first object held by the robot and a second object, determining whether the first object and the second object are in a reference positional relationship based on the current image, and controlling, in a case where a computer determines that the first object and the second object are in the reference positional relationship based on the current image, a position and a posture of the robot according to a trajectory of the robot set before the determining such that the first object and the second object have a target positional relationship.
According to a second aspect of the present invention, a robot control method includes setting a position and a posture of a mechanical device such that a first teaching object held by the mechanical device and a second teaching object have a target positional relationship, changing the position and the posture of the mechanical device holding the first teaching object such that the first teaching object and the second teaching object that are in the target positional relationship have a reference positional relationship, acquiring a reference image showing that the first teaching object and the second teaching object are in the reference positional relationship, and controlling, by a computer, in a case where the computer acquires a current image showing a positional relationship between a first object held by a robot and a second object and determines that the first object and the second object are in the reference positional relationship based on the reference image and the current image, a position and a posture of the robot holding the first object such that the first object and the second object have the target positional relationship.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
A robot control method and the like according to an embodiment of the present invention will be described with reference to the drawings. The embodiments described below are merely examples, and for example, detailed configurations can be appropriately changed and implemented by those skilled in the art without departing from the gist of the present invention. When describing control of an operation and a position and posture of a robot, contents of the description may include an operation of stopping the robot and control of maintaining the position and posture of the robot.
In the drawings referred to in the following description of the embodiments and examples, elements denoted by the same reference signs have the same functions unless otherwise specified. In the drawings, in a case where a plurality of the same elements is arranged, reference signs and a description thereof may be omitted.
In addition, the drawings may be schematic for convenience of illustration and description, and thus, the shape, size, arrangement, and the like of elements in the drawings may not strictly match those of actual ones.
First Embodiment
As a first embodiment, a control method for causing a robot to perform work of assembling a first workpiece to a second workpiece will be exemplified. In the assembling work, the first workpiece is held by the robot to approach the second workpiece, and the first workpiece is moved to a position where the first workpiece and the second workpiece have a reference positional relationship, that is, an approach position. In the present specification, the “positional relationship” refers to a relative relationship including not only a position but also a posture. The approach position is set in advance as a positional relationship (reference positional relationship) in which the held first workpiece can be assembled to the second workpiece with high reliability by the robot. A robot operation of moving the first workpiece from the approach position to an assembly completion position is taught in advance to a control unit of the robot.
In the assembling work, the control unit acquires a current image obtained by imaging at least the first workpiece using an imaging unit, and determines whether or not the first workpiece and the second workpiece are in the reference positional relationship. In a case where it is determined that the first workpiece and the second workpiece are in the reference positional relationship, the control unit controls the position and posture of the robot according to a control program taught in advance, and moves the first workpiece to the assembly completion position (target positional relationship).
In order to make it possible to determine whether or not the first workpiece and the second workpiece are in the reference positional relationship, the control unit acquires in advance a target image (reference image) as visual feedback teaching data. The reference image is an image obtained by arranging a first teaching workpiece and a second teaching workpiece so as to have the reference positional relationship, and performing imaging using the imaging unit so as to include at least the first teaching workpiece. Except for manufacturing errors, the first teaching workpiece is substantially identical in shape to the first workpiece, and the second teaching workpiece is substantially identical in shape to the second workpiece. The first teaching workpiece may also be referred to as a first teaching object, and the second teaching workpiece may also be referred to as a second teaching object. That is, in the embodiment, the reference image showing that the first teaching object and the second teaching object are in the reference positional relationship is acquired in advance.
By comparing the current image with the reference image, the control unit can accurately determine whether or not the first workpiece is at the approach position, that is, whether or not the first workpiece and the second workpiece are in the reference positional relationship.
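The comparison described above can be illustrated with a short sketch. The following Python code is a minimal, illustrative example and not part of the original disclosure; the function name in_reference_relationship and the tolerance value are assumptions introduced only for illustration, and the feature extraction itself is treated as given.

```python
import numpy as np

def in_reference_relationship(current_features: np.ndarray,
                              reference_features: np.ndarray,
                              tolerance: float = 0.5) -> bool:
    """True when every feature component matches the reference within tolerance."""
    return bool(np.all(np.abs(current_features - reference_features) <= tolerance))

# Hypothetical usage: only start the pre-taught final assembly motion once the
# current image's features match the reference image's features.
#   if in_reference_relationship(extract(current_image), extract(reference_image)):
#       run_taught_trajectory()
```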
In the method according to the related art, whether or not a robot hand is in a predetermined position and posture for starting an approach operation during assembling work is determined using a position sensor of the robot hand or an image obtained by imaging the robot hand. Even in a case where it is determined that the robot hand is in the predetermined position and posture, the first workpiece is not in the predetermined position and posture with respect to the second workpiece when so-called gripping displacement occurs. The gripping displacement refers to a phenomenon in which the first workpiece is displaced with respect to the robot hand when or after the robot hand grips the first workpiece. When the assembling work is continued in a state in which the gripping displacement has occurred, not only is the assembly not performed correctly, but the workpieces may also be damaged by collision or the like.
In the control according to the present embodiment, the control unit that controls the robot holding the first workpiece acquires the current image that enables determination of the positional relationship between the first workpiece and the second workpiece, and determines whether or not the first workpiece and the second workpiece are in the preset reference positional relationship based on the current image. The reference positional relationship is set in advance before the determination is performed, for example, before the current image is acquired; it can be set when the target image is captured. When it is determined that the first workpiece and the second workpiece are in the reference positional relationship (approach position), the position and the posture of the robot are controlled such that the first workpiece and the second workpiece are in the target positional relationship (assembly completed state). As described above, in the present embodiment, it is first confirmed from the image that the first workpiece and the second workpiece are in a positional relationship in which the robot can appropriately assemble the held first workpiece to the second workpiece (that is, the reference positional relationship), and only then is the robot caused to perform the final assembling operation. Therefore, even in a case where so-called gripping displacement occurs, there is no concern that the final assembling operation will be continued unless it is confirmed that the first workpiece and the second workpiece are in the reference positional relationship. Moreover, even when the gripping displacement occurs, if it is confirmed that the first workpiece and the second workpiece are in the reference positional relationship, the final assembling operation can be performed by, for example, force control.
In the present embodiment, a robot control method for moving the first workpiece to the approach position is not particularly limited, and for example, the control unit may move the robot hand according to a taught trajectory, or a user may move the robot hand by using a pendant.
At least the first teaching workpiece is included in the target image (reference image). For example, both the first teaching workpiece and the second teaching workpiece having the reference positional relationship may be included in the reference image. The control unit can make determination by comparing the positional relationship between the first workpiece and the second workpiece shown in the current image with the positional relationship between the first teaching workpiece and the second teaching workpiece shown in the target image (reference image).
In a case where there is a position reference object (for example, a jig for fixing the second workpiece) having a fixed positional relationship with respect to the second workpiece, it is sufficient if the first teaching workpiece and the position reference object are included in the target image (reference image). The control unit can perform determination by comparing a positional relationship between the first workpiece and the position reference object shown in the current image and the positional relationship between the first teaching workpiece and the position reference object shown in the reference image.
A robot operation of moving the first workpiece, confirmed from the current image to be in the reference positional relationship, from the approach position to the assembly completion position is preferably performed by the force control. However, in a case where there is no problem even if the first workpiece is moved to the assembly completion position by position control, the robot operation may be performed by the position control. For example, in a case where a large clearance between the first workpiece and another object (including the second workpiece) is secured along the path over which the first workpiece is moved to the assembly completion position and the possibility of interference is extremely small, the first workpiece may be moved by the position control. Alternatively, in a case where the first workpiece or the second workpiece is formed of a flexible material and there is no problem even if minor contact due to a control error occurs, the first workpiece may be moved by the position control. Furthermore, also in a case where a compliance mechanism such as a remote center compliance (RCC) device is provided in the robot, the first workpiece may be moved by the position control.
Hereinafter, a specific description will be given.
Configuration of Robot System
The control device 400 serving as the control unit is a computer that controls the entire robot system 1000 and performs various types of work such as the assembling work. The control device 400 includes a central processing unit (CPU) 401 serving as a processor.
The control device 400 includes, for example, a read only memory (ROM) 402, a random-access memory (RAM) 403, and a hard disk drive (HDD) 404 as a storage unit. In addition, the control device 400 includes a recording disk drive 405 and interfaces 406 to 410 serving as a plurality of input/output interfaces (I/Fs).
The ROM 402, the RAM 403, the HDD 404, the recording disk drive 405, and the interfaces 406 to 410 are connected to the CPU 401 via a bus 420. The ROM 402 stores a basic program such as a basic input/output system (BIOS). The RAM 403 is a storage device that temporarily stores various types of data such as an arithmetic processing result of the CPU 401.
The HDD 404 is a non-transitory computer-readable recording medium that stores a processing program executed by the CPU 401, an arithmetic processing result of the CPU 401, various types of data acquired from the outside, and the like. A program 430 for causing the CPU 401 to execute arithmetic processing and visual feedback teaching data 432 are recorded in the HDD 404. The CPU 401 executes each processing of an article manufacturing method based on the program 430 recorded (stored) in the HDD 404. Each processing can include an image processing method.
In the present embodiment, the program 430 and the visual feedback teaching data 432 are stored in the HDD 404. However, the program 430 and the visual feedback teaching data 432 may also be stored in a recording medium other than the HDD 404. That is, the program 430 and the visual feedback teaching data 432 may be recorded in any recording medium as long as the recording medium is a non-transitory computer-readable recording medium. For example, a flexible disk, a hard disk, an optical disk, a magneto-optical disk, a magnetic tape, or a non-volatile memory can be used. The optical disk is, for example, a disk medium such as a Blu-ray disk, a DVD, or a CD. The non-volatile memory is, for example, a storage device such as a USB memory, a memory card, a ROM, or an SSD. Various types of data, programs, and the like recorded in a recording disk 431 can be read by the recording disk drive 405.
The input device 500 is connected to the interface 406 of the control device 400. The CPU 401 acquires input data (input information) from the input device 500 via the interface 406 and the bus 420. The input device 500 is a device that can be operated by the user to input various types of information to the control device 400. For example, if a teaching pendant is used as the input device 500, an operator (user) can use the teaching pendant to teach a motion of a robot arm 200.
The display 600, which is an example of a display unit, is connected to the interface 407 of the control device 400. The display 600 can display various images output from the control device 400.
An external storage device 700 that is a storage unit such as a rewritable non-volatile memory or an external HDD can be connected to the interface 408 of the control device 400.
The servo control unit 230 of the robot 100 is connected to the interface 409 of the control device 400. The servo control unit 230 will be described below.
The visual sensor 800 serving as the imaging unit is connected to the interface 410 of the control device 400. The visual sensor 800 is an example of the imaging unit that captures an image and transmits the image to the control device 400, and is, for example, a digital camera. The visual sensor 800 is a two-dimensional camera, and can acquire two-dimensional image information by imaging a subject. The visual sensor 800 is not limited to a two-dimensional camera, and may be, for example, a three-dimensional camera. The visual sensor 800 performs imaging at predetermined time intervals (for example, every 30 ms) under the control of the CPU 401. As a result, the CPU 401 can acquire visual information, that is, data of a captured image, from the visual sensor 800 at predetermined time intervals (for example, every 30 ms).
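As an illustrative sketch of the periodic image acquisition described above, the following Python code captures frames at a fixed interval. It assumes an OpenCV-compatible camera; the device index, period, and frame count are illustrative values, not values from the disclosure.

```python
import time
import cv2

def capture_periodically(device_index: int = 0,
                         period_s: float = 0.030,
                         n_frames: int = 10):
    """Capture n_frames images at a fixed period (e.g., every 30 ms)."""
    cap = cv2.VideoCapture(device_index)
    frames = []
    try:
        next_tick = time.monotonic()
        for _ in range(n_frames):
            ok, frame = cap.read()
            if ok:
                frames.append(frame)
            next_tick += period_s
            time.sleep(max(0.0, next_tick - time.monotonic()))
    finally:
        cap.release()
    return frames
```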
In the present embodiment, the control system 440 includes the input device 500, the display 600, the visual sensor 800, the control device 400, and the servo control unit 230.
The CPU 401 of the control device 400 can acquire angle information from an angle sensor 250 of each joint via the servo control unit 230, the interface 409, and the bus 420. Similarly, the CPU 401 can acquire force sensing information from a force sensing sensor 260 of each joint. The force sensing sensor 260 may be a torque sensor that detects a torque that is one of the force sensing information, may be a force sensor that detects a force that is one of the force sensing information, or may be a sensor that detects both a torque and a force. The servo control unit 230 may divide an angle of a motor 231 detected using the angle sensor 250 by a reduction ratio of a speed reducer (not illustrated), convert the divided angle into angle information of a corresponding joint, and transmit the angle information to the CPU 401.
The CPU 401 of the control device 400 outputs data of a command value corresponding to each of joints J1 to J6 to the servo control unit 230 via the bus 420 and the interface 409 at a predetermined time interval (for example, 1 ms). The servo control unit 230 controls driving of the motor 231 of each joint based on the command value corresponding to each of the joints J1 to J6 acquired from the control device 400 such that an angle or torque of each of the joints J1 to J6 follows the command value. That is, the servo control unit 230 is configured to be able to perform the position control or the torque control on each joint of the robot 100.
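The two operations described above, converting a detected motor angle into a joint angle using the reduction ratio and issuing command values at a fixed period, can be sketched as follows. This is an illustrative Python sketch; the controller and servo objects in the commented loop are hypothetical placeholders, not a real API.

```python
def motor_to_joint_angle(motor_angle_deg: float, reduction_ratio: float) -> float:
    """Divide the detected motor angle by the speed reducer's reduction ratio."""
    return motor_angle_deg / reduction_ratio

# Hypothetical fixed-period command loop (placeholders, not a real API):
#   while running:
#       command = controller.compute_command()  # one value per joint J1..J6
#       servo.send(command)                     # servo makes angle/torque follow it
#       time.sleep(0.001)                       # e.g., a 1 ms period
```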
Robot
The robot 100 includes the robot arm 200 and the robot hand 300 serving as an end effector.
The robot 100 is installed in a manufacturing line, for example, and is used to perform various works for manufacturing an article. Examples of the work for manufacturing an article can include conveyance work, assembling work, processing work, or coating work. Examples of the processing work can include cutting work, grinding work, polishing work, or sealing work.
An end effector corresponding to a work content is attached to the robot arm 200. A link 216 positioned at a distal end portion of the robot arm 200 is a support portion configured to mount and support the end effector. In the present embodiment, a control method in the assembling work will be described by taking, as an example, a case where the robot hand 300 is mounted on the robot arm 200 as the end effector.
The robot arm 200 includes a base 209, links 210 to 216, and the joints J1 to J6. The base 209, which is a base end (fixed end), is installed on an upper surface of a pedestal B1. The links 210 to 216 are connected in series via the joints J1 to J6 in this order. Each of the joints J1 to J6 includes the motor 231, the angle sensor 250, and the force sensing sensor 260.
The position and posture of the robot arm 200 can be expressed by coordinate systems. A coordinate system To is a reference coordinate system set at a fixed position such as the pedestal B1, and a coordinate system Te is a coordinate system set on the robot hand 300. Each of the coordinate systems is represented by orthogonal coordinates of three axes including XYZ axes.
A coordinate system Tc is a coordinate system set at the center of the visual sensor 800, is represented by orthogonal coordinates of three axes including XYZ axes similarly to the coordinate system To and the coordinate system Te, and is set such that an optical axis direction of the visual sensor 800 is a Z-axis direction. In the present embodiment, a case where the visual sensor 800 is fixed to a predetermined position based on the coordinate system To, for example, the pedestal B1, is described. However, the visual sensor 800 may be fixed to the robot arm 200 or the robot hand 300.
The servo control unit 230 controls driving of the motor 231 included in each of the joints J1 to J6. The servo control unit 230 is electrically connected to the motor 231, the angle sensor 250, and the force sensing sensor 260 included in each of the joints J1 to J6. The motor 231 is, for example, a brushless DC motor or AC motor, and rotationally drives the joint via the speed reducer (not illustrated). The angle sensor 250 is, for example, a rotary encoder, is attached to the motor 231, and is configured to be able to detect a rotation angle of the motor 231. The force sensing sensor 260 is configured to be able to detect a torque applied to the joint.
The servo control unit 230 is disposed inside the base 209, for example, and may also be disposed at another position. For example, the servo control unit 230 may be disposed inside a casing of the control device 400. That is, the servo control unit 230 may be a part of the configuration of the control device 400.
Work
Next, work performed by the robot 100 in the present embodiment will be described.
For example, the imaging area A1 can be set such that both the first workpiece W1 gripped by the robot hand 300 at the approach position and the second workpiece W2 fixed to the workpiece fixing jig M1 are included in the angle of view. With this setting, it is possible to extract the first workpiece W1 and the second workpiece W2 from the captured image by image processing and calculate the relative positional relationship between the first workpiece W1 and the second workpiece W2.
In a case where a positional relationship between the second workpiece W2 and the workpiece fixing jig M1 is known in advance, the imaging area A1 can be set such that both the first workpiece W1 gripped by the robot hand 300 at the approach position and the workpiece fixing jig M1 are included in the angle of view. With this setting, the relative positional relationship between the first workpiece W1 and the second workpiece W2 can be calculated by extracting the first workpiece W1 and the workpiece fixing jig M1 from the captured image by image processing and obtaining a relative positional relationship between the first workpiece W1 and the workpiece fixing jig M1.
In addition, the imaging area A1 can be set such that both the first workpiece W1 and the second workpiece W2 are included in the angle of view not only at a time point when the robot hand 300 is at the approach position but also from the start of the assembling work. Such setting is suitable for moving the first workpiece W1 to the approach position by visual servoing.
Since the visual sensor 800 only needs to be able to acquire an image for determining the positional relationship between the first workpiece W1 and the second workpiece W2, the visual sensor 800 does not necessarily have to be installed at the illustrated position.
In the present embodiment, the control device 400 controls the robot arm 200 using visual feedback to move it to the approach position, and determines whether or not the first workpiece W1 and the second workpiece W2 are in the reference positional relationship.
In step S11, the CPU 401 acquires the visual feedback teaching data 432 recorded in the HDD 404. Here, the visual feedback teaching data 432 is, for example, a target image IG (reference image) captured by the visual sensor 800 when the first workpiece W1 and the second workpiece W2 are in the reference positional relationship.
In step S12, the CPU 401 extracts a target image feature amount Fg from the visual feedback teaching data 432.
The extraction of the edge f111 and the edge f121 and the extraction of the portion f112 and the portion f122 are desirably performed by different methods. For example, a line segment detection method using the Hough transform can be used to extract the edge f111 and the edge f121, and template matching can be used to extract the portion f112 and the portion f122. In this case, even when an erroneous image feature is extracted due to a failure of the image processing, the error can be detected by comparing the extraction results of the edge f111 and the portion f112 or the extraction results of the edge f121 and the portion f122. In a case where the template matching is used to extract the portion f112 and the portion f122, in step S12, input values input by the operator or values recorded in the HDD 404 or the like are registered as positions and sizes of the portion f112 and the portion f122. Then, the corresponding range of the target image IG may be registered as a template. The registered template is temporarily recorded in the RAM 403 or the like.

A coordinate system Tc′ is a coordinate system on the target image IG in which a perpendicular bisector of the edge f111 is an X′ axis and a direction along the edge f111 is a Y′ axis. The target image feature amount Fg is calculated as coordinates on the coordinate system Tc′. For example, the target image feature amount Fg includes three values: a distance f131 between an intersection of the edge f111 with the X′ axis and an intersection of the edge f121 with the X′ axis, a difference f132 between positions of the predetermined portion f112 and the predetermined portion f122 in a Y′-axis direction, and an angle f133 formed by the edge f111 and the edge f121. That is, the target image feature amount Fg is represented as a three-dimensional vector Fg = [f131 f132 f133]T including the distance f131, the difference f132, and the angle f133, where the superscript T represents transposition of a vector or a matrix.
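As an illustrative sketch of the two extraction methods named above, the following Python code uses OpenCV's probabilistic Hough transform for line-segment (edge) detection and normalized template matching for the registered portions. The threshold values are illustrative assumptions, and the feature names in the comments mirror f111 and f112 in the text.

```python
import cv2
import numpy as np

def extract_edge(gray: np.ndarray):
    """Detect a line segment (a candidate for an edge such as f111 or f121)."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=40, maxLineGap=5)
    return None if lines is None else lines[0][0]  # [x1, y1, x2, y2]

def locate_portion(gray: np.ndarray, template: np.ndarray):
    """Locate a registered portion (such as f112 or f122) by template matching."""
    result = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_loc, max_val  # top-left corner of the best match and its score
```

Because the two methods are independent, a failure of one of them can be detected by cross-checking their results, as described above.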
Next, when the actual assembling work is started, in step S13, the CPU 401 causes the visual sensor 800 to perform imaging to acquire a current image IC. The current image IC is temporarily recorded in the RAM 403 or the like, for example.
In step S14, the CPU 401 extracts a current image feature amount Fc from the current image IC by a method similar to the method for extracting the target image feature amount Fg from the target image IG in step S12. Hereinafter, the respective components of the target image feature amount Fg are denoted as fg131, fg132, and fg133, and the respective components of the current image feature amount Fc are denoted as fc131, fc132, and fc133. That is, Fg=[fg131 fg132 fg133]T and Fc=[fc131 fc132 fc133]T. At this time, the extracted current image feature amount Fc may be displayed on the display 600.
In step S15, the CPU 401 calculates a control amount qv of each of the joints J1 to J6 of the robot from the target image feature amount Fg and the current image feature amount Fc, and operates the robot. In calculating the control amount qv, first, a feature amount difference Fe (F is in bold) between the current image feature amount Fc and the target image feature amount Fg is calculated according to the following Equation (1):

Fe = Fc − Fg   (1)
Subsequently, the CPU 401 calculates a matrix of an image Jacobian Jimg (J is in bold) and a matrix of a robot Jacobian Jr (J is in bold). The image Jacobian Jimg is a matrix of three rows and three columns associating minute displacement amounts of the coordinate system Te set on the robot hand 300 with minute displacement amounts of the current image feature amount Fc. The robot Jacobian Jr is a matrix of three rows and six columns associating minute displacement amounts of the joints J1 to J6 of the robot arm 200 with minute displacement amounts of the coordinate system Te set on the robot hand 300. The image Jacobian Jimg and the robot Jacobian Jr are defined according to the following Equation (2):

Jimg = ∂Fc/∂xe,  Jr = ∂xe/∂q   (2)
Here, xe (x is in bold) is a position vector xe=[Xe Ye αe]T of three degrees of freedom of the coordinate system Te in the coordinate system To, and αe is a rotation angle around a Z axis of the coordinate system To. q (q is in bold) represents a joint angle vector q=[q1 . . . q6]T of each of the joints J1 to J6 of the robot arm 200.
Subsequently, the CPU 401 calculates the control amount qv (q is in bold) of each of the joints J1 to J6 of the robot. The control amount qv is calculated, for example, according to the following Equation (3):

qv = −Jr⁺ Jimg⁻¹ λFe   (3)
λ (λ is in bold) in Equation (3) is a feedback gain represented by a three-dimensional vector, the superscript −1 represents an inverse matrix, and the superscript + represents a pseudo-inverse matrix. A method for calculating the control amount qv from the feature amount difference Fe is not limited to the above-described method, and other known methods may be freely used. The CPU 401 calculates a new angle command value by adding the control amount qv to the previous angle command value of each of the joints J1 to J6 of the robot. Then, the CPU 401 performs a visual feedback operation by operating the robot arm 200 via the servo control unit 230 based on the angle command value.
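Equations (1) to (3) as reconstructed above can be sketched in a few lines of numpy. The negative sign and the element-wise application of the gain λ are assumptions following the usual image-based visual servoing convention; the Jacobian contents themselves would come from calibration and the robot's kinematics.

```python
import numpy as np

def visual_feedback_step(Fc: np.ndarray,     # current image feature amount, shape (3,)
                         Fg: np.ndarray,     # target image feature amount, shape (3,)
                         J_img: np.ndarray,  # image Jacobian Jimg, shape (3, 3)
                         J_r: np.ndarray,    # robot Jacobian Jr, shape (3, 6)
                         gain: np.ndarray    # feedback gain lambda, shape (3,)
                         ) -> np.ndarray:
    """One visual feedback step: joint-angle increments qv for joints J1 to J6."""
    Fe = Fc - Fg                            # Equation (1)
    v = np.linalg.inv(J_img) @ (gain * Fe)  # image space -> hand-tip space
    qv = -np.linalg.pinv(J_r) @ v           # Equation (3), hand tip -> joints
    return qv                               # added to the previous angle command
```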
In step S16, the CPU 401 determines whether or not the position correction operation by the visual feedback has been completed, that is, whether or not the reference positional relationship has been achieved. If the feature amount difference Fe is equal to or smaller than a predetermined value, it is determined that the reference positional relationship has been achieved, and the processing proceeds to step S17. If the feature amount difference Fe is larger than the predetermined value, the processing returns to step S13, and the visual feedback operation is performed again. Here, the predetermined value is set such that the determination has sufficient accuracy for the subsequent assembling work by the force control to be performed appropriately. The same value may be used for all the components of the feature amount difference Fe, or the determination may be made using a different value for each component, with the correction operation continued until all the components satisfy the condition.
In step S17, the CPU 401 operates the robot arm 200 by using the force control, causes the first workpiece W1 to follow the second workpiece W2, and performs the assembling operation so as to achieve the target positional relationship. A motion amount of the robot hand 300 for transitioning from the reference positional relationship to the target positional relationship is taught to the control device 400 in advance.
The force control method includes hybrid control, in which a position and a force are completely separated and individually controlled, and impedance control, in which a position and a force are controlled in a linear relationship. The impedance control is a method in which mechanical impedance characteristics of the hand tip of the robot are set to appropriate values in advance, and a force applied to a target object by the hand tip is indirectly controlled through adjustment of a hand tip target position. The hybrid control is a method in which a position control direction of the hand tip and a force control direction are clearly distinguished, and a force control rule is used for the force control direction. In the impedance control, both an elastic term and a viscous term can be considered in the calculation of a target value; alternatively, only the elastic term can be used while the viscous term is ignored, in which case the impedance control can be referred to as compliance control. The compliance control is treated as a type of impedance control. In the present embodiment, the force control may be either the hybrid control or the impedance control, but it is preferable to use the impedance control for the operation from the reference positional relationship to the target positional relationship.

In step S17, a case where the first workpiece W1 is caused to follow the second workpiece W2 by the force control and the assembling operation is performed has been described, but the assembling operation may be performed by the position control depending on the situation. For example, in a case where a gap between the workpieces at the time of assembly is sufficiently large, in a case where the material is soft, or in a case where the compliance mechanism is provided in the robot, the workpieces can be assembled by the position control. In step S17, the position and the posture of the first workpiece W1 may instead be maintained, and the second workpiece W2 may be moved to follow the first workpiece W1 such that the first workpiece W1 and the second workpiece W2 have the target positional relationship.
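As an illustrative sketch of the compliance control mentioned above (impedance control with the viscous term ignored), the following Python function shifts the commanded hand-tip position so that the elastic term produces the desired contact force. The per-axis stiffness and the surrounding interfaces are assumptions, not the disclosed implementation.

```python
import numpy as np

def compliant_target(x_ref: np.ndarray,      # nominal hand-tip target position
                     f_desired: np.ndarray,  # desired contact force per axis
                     f_measured: np.ndarray, # force sensed at the hand tip
                     stiffness: np.ndarray   # elastic term K per axis (> 0)
                     ) -> np.ndarray:
    """Shift the commanded position so the elastic term yields f_desired."""
    return x_ref + (f_desired - f_measured) / stiffness
```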
As described above, according to the present embodiment, the visual feedback control is performed using the distance f131, the difference f132, and the angle f133 which are the image features determined by the positional relationship between the first workpiece W1 and the second workpiece W2. According to the present embodiment, even when the first workpiece W1 and the robot hand 300 are misaligned due to the gripping displacement or the like, whether or not the first workpiece W1 and the second workpiece W2 are in the predetermined reference positional relationship is confirmed by using an image, and whether or not the final operation can be performed is determined.
Further, if the approach position is set to a position immediately before the first workpiece W1 and the second workpiece W2 come into contact with each other, and switching from the visual feedback control to the force control is made at the approach position, it is possible to prevent the visual feedback control from becoming unstable due to occlusion or the like. In addition, by minimizing a movement amount of the robot hand 300 in the force control, the assembling operation can be completed without deteriorating the accuracy, and work performance of the robot can be improved.
Second Embodiment
As a second embodiment, a modified example of a control method for causing a robot to perform work of assembling a first workpiece to a second workpiece will be described. A description of matters common to the first embodiment will be simplified or omitted. The present embodiment is characterized by a method for setting an approach position and a method for acquiring a reference image (and an image feature amount) serving as visual feedback teaching data.
As described above, when determining that a first workpiece W1 and a second workpiece W2 are in a preset reference positional relationship (that is, that the first workpiece W1 is at a predetermined approach position), a control unit can cause the robot to perform the final movement operation in an assembling operation. In order to perform precise assembly, it is necessary to appropriately set the approach position so that the first workpiece W1 easily enters an inlet of the second workpiece W2 in the final movement operation. In the present embodiment, a reference image is captured by setting the approach position by a method described below, and a control device 400 acquires the visual feedback teaching data. That is, the processes described above as step S11 and step S12 are performed by the procedure of steps S21 to S25 described below.
In step S21, an operator manually operates (a so-called jog operation) the robot arm 200 via the input device 500 to move the first workpiece W1 and assemble it to the second workpiece W2, so that the first workpiece W1 and the second workpiece W2 have the target positional relationship.
Next, in step S22, the operator manually operates the robot arm 200 to offset the first workpiece W1 in a direction opposite to an assembling direction in the previous step, and pulls out the first workpiece W1 from the second workpiece W2. That is, the first workpiece W1 is linearly moved in a negative direction of a Z axis of the coordinate system Te to be pulled out from the second workpiece W2.
In step S23, the operator inputs an acquisition command for the target image IG to the control device 400. The control device 400 receives the input acquisition command for the target image IG and sends an imaging command to the visual sensor 800 via an interface 410 and a bus 420. The control device 400 acquires a captured image of the imaging area A1 from the visual sensor 800, and temporarily stores the captured image in a RAM 403 as the target image IG (reference image). The target image IG is image data (visual information) obtained by imaging the first workpiece W1 at the approach position set by the pull-out operation in step S22.
In step S24, the control device 400 extracts an image feature amount between the end portion E1 of the first workpiece W1 on the distal end side and the end portion E2 of the second workpiece W2 on the opening portion side from the target image IG acquired in step S23. A method for extracting the image feature amount is similar to the method described above, and a description thereof will be omitted here.
Finally, in step S25, the control device 400 temporarily stores the image feature amount extracted in step S24 in the RAM 403 as a target image feature amount Fg. The target image feature amount Fg is a feature amount extracted from an image captured in a state in which the first workpiece W1 and the second workpiece W2 have a predetermined assembly start positional relationship (reference positional relationship) in step S23. That is, a robot 100 can move the first workpiece W1 to an accurate assembly start position (approach position) by performing the visual feedback operation described above with respect to the target image feature amount Fg. If it is confirmed that there is no problem when actual assembling work is performed using the temporarily stored target image feature amount Fg, the control device 400 can store the target image feature amount Fg in an HDD 404 and reuse the target image feature amount Fg according to an instruction of the operator.
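Steps S21 to S25 can be summarized in a short sketch. The following Python code is illustrative only; robot.move_linear, camera.capture, and the extract_features callable are hypothetical placeholders standing in for the robot, the visual sensor 800, and the feature extraction of step S24.

```python
from typing import Callable, Tuple
import numpy as np

def teach_approach_position(robot, camera,
                            extract_features: Callable[[np.ndarray], np.ndarray],
                            retract_offset_z: float) -> Tuple[np.ndarray, np.ndarray]:
    """Record the target image IG and its feature amount Fg (steps S22 to S25)."""
    robot.move_linear(dz=-retract_offset_z)  # S22: pull out opposite to assembly
    target_image = camera.capture()          # S23: acquire the target image IG
    Fg = extract_features(target_image)      # S24: extract the image feature amount
    return target_image, Fg                  # S25: keep as visual feedback teaching data
```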
As described above, with the assembly start position teaching method according to the embodiment described above, it is possible to easily acquire the target image IG and the target image feature amount Fg at the accurate assembly start position by performing offsetting based on a position where the assembly has been completed once.
Although a case where the operator performs the jog operation on the robot arm 200 via the input device 500 to assemble the first workpiece W1 to the second workpiece W2 has been described in step S21, the assembly may also be performed by direct teaching, in which the operator directly touches and operates the robot. The direct teaching is a teaching method in which the robot arm 200 is made flexible with respect to an external force by the impedance control or the like so as to follow a reaction force from the operator or the workpiece. That is, not only does the operation of the robot arm 200 become intuitive, but a load applied to the workpiece until the workpiece assembling completion position is reached can also be reduced.
The control device 400 can acquire an execution program for the final operation performed in step S17.
In addition, although a case where steps S22 and S23 are performed by the operator performing an input operation on the control device 400 has been described, the execution method is not limited thereto. For example, if design information such as the angle of view of the visual sensor 800, dimensions of the first workpiece W1 and the second workpiece W2, or the assembling direction is known in advance, a direction in which the first workpiece W1 is pulled out and the offset amount can also be determined in advance. That is, steps S22 to S25 may be programmed in advance, and after step S21, a series of teaching work may be automatically performed by execution of a program.
In the embodiment described above, a case where both the first workpiece W1 and the second workpiece W2 are included in the target image IG has been described, but the first workpiece W1 and the second workpiece W2 do not always have to be included in the target image IG. For example, in a case where a positional relationship of one workpiece with respect to the visual sensor 800 is known in advance or in a case where the change can be ignored, it is sufficient if one of the first workpiece W1 and the second workpiece W2 is included in the target image IG. Alternatively, in a case where there is a position reference object (for example, a jig for fixing the second workpiece W2) having a fixed positional relationship with respect to the second workpiece W2, it is sufficient if the first workpiece W1 and the position reference object are included in the target image. The control unit can perform determination by comparing a positional relationship between the first workpiece and the position reference object shown in the current image and the positional relationship between the first teaching workpiece and the position reference object shown in the target image.
Also in the present embodiment, in the assembling work, the control unit acquires the current image obtained by imaging at least the first workpiece using an imaging unit, and determines whether or not the first workpiece and the second workpiece are in the reference positional relationship. In a case where it is determined that the reference positional relationship is achieved, that is, in a case where it is determined that the first workpiece is at the approach position, the control unit controls the position and posture of the robot according to a control program taught in advance, and moves the first workpiece to the assembly completion position (target positional relationship).
In the present embodiment, when the approach position is set and the target image is captured in advance, after the assembly of the first workpiece and the second workpiece is once completed and the target positional relationship is achieved, the first workpiece is moved in a direction opposite to that of the final operation at the time of assembly, the approach position is set, and imaging is performed. Therefore, in the present embodiment, the approach position can be set as a positional relationship in which the held first workpiece can be appropriately assembled to the second workpiece by the robot. Also in the present embodiment, as in the first embodiment, it is not necessary to use the same object as a work workpiece and a teaching workpiece as long as various positional relationships can be recognized. In addition, it is not necessary to use the same mechanical device (robot arm) as a work robot arm and a teaching robot arm as long as various positional relationships can be recognized and a teaching content and a work content do not deviate from each other.
Modified Example
The present invention is not limited to the embodiments described above, and many modifications can be made within the technical idea of the present invention. In addition, the effects described in the embodiments merely enumerate the most preferable effects that result from the present invention, and the effects of the present invention are not limited to those described in the embodiments.
In the above-described embodiments, a case where the feedback control unit 450 is a part of the function of the CPU 401 and the servo control unit 230 is implemented by a device different from the CPU 401 has been described, but the present technology is not limited thereto. The CPU 401 may be configured to implement some or all of the functions of the servo control unit 230 based on the program 430.
Furthermore, in the above-described embodiments, the current image feature amount Fc is displayed using the control device 400 and the display 600, but the present technology is not limited thereto. For example, an electronic device including a CPU and a display device such as a display panel may be separately used. The electronic device may be an information processing device such as a desktop personal computer (PC), a laptop PC, a tablet PC, or a smartphone. Furthermore, in a case where the input device 500 is a teaching pendant including a display device, the current image feature amount Fc, the current image IC, the target image feature amount Fg, and the like may be displayed on the display device.
Furthermore, in the above-described embodiments, a case where the robot arm 200 is a vertical articulated robot arm has been described, but the present technology is not limited thereto. The robot arm 200 may be various robot arms such as a horizontal articulated robot arm, a parallel link robot arm, and an orthogonal robot. That is, the present invention can be implemented in various articulated robots.
Furthermore, in the above-described embodiments, a case where the robot hand 300 is attached to the robot arm 200 has been described, but the present technology is not limited thereto. A holding mechanism capable of holding a holding object such as a workpiece may be attached to the robot arm 200 as the end effector. Examples of the holding mechanism include a mechanism for holding a workpiece by suction. In addition, a tool for machining a workpiece or the like may be attached to the robot arm 200 as the end effector.
Although the first workpiece W1 and the second workpiece W2 serving as objects are typically components, the present invention may be implemented in control of a robot that performs work of bringing a tool serving as the object into contact with a component. For example, the first workpiece may be a tool attached to the robot, and the second workpiece W2 may be a component. Alternatively, the first workpiece may be a tool held by the robot, and the second workpiece W2 may be a component. Alternatively, the present technology may be implemented as control when the robot performs work in which the robot grips a driver, a screw (first workpiece W1) is set in the driver by a magnet, and the screw is inserted into a screw hole of a component that is the second workpiece W2. As described above, the workpiece and the tool can be collectively referred to as the objects and can be referred to as a work object and a teaching object.
The control method, the control device, and the like according to the present invention can be applied to control of devices and facilities including various mechanical devices such as a production device including a movable portion, an industrial robot, a service robot, a medical robot, and a machine tool operated by numerical control by a computer.
The present invention can also be implemented by processing in which a program for implementing one or more functions of the embodiments is supplied to a system or a device via a network or a storage medium, and one or more processors in a computer of the system or the device read and execute the program. The present invention can also be implemented by a circuit (for example, an application specific integrated circuit (ASIC)) that implements one or more functions.
Furthermore, the contents of disclosure in the present specification include not only contents described in the present specification but also all of the items which are understandable from the present specification and the drawings accompanying the present specification. Moreover, the contents of disclosure in the present specification include a complementary set of the concepts described in the present specification. Thus, if, in the present specification, there is a description indicating that, for example, “A is B”, even when a description indicating that “A is not B” is omitted, the present specification can be said to disclose a description indicating that “A is not B”. This is because a description indicating that “A is B” is premised on consideration of the case where “A is not B”.
Other Embodiments
Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2023-166526, filed Sep. 27, 2023, which is hereby incorporated by reference herein in its entirety.
Claims
1. A computer-implemented method for controlling a robot, the method comprising:
- acquiring a current image showing a positional relationship between a first object held by the robot and a second object;
- determining whether the first object and the second object are in a reference positional relationship based on the current image; and
- controlling, in a case where a computer determines that the first object and the second object are in the reference positional relationship based on the current image, a position and a posture of the robot according to a trajectory of the robot set before the determining such that the first object and the second object have a target positional relationship.
2. The method according to claim 1,
- wherein both the first object and the second object are included in the current image.
3. The method according to claim 1,
- wherein the computer is configured to acquire an image feature amount according to a positional relationship between a portion of the first object and a portion of the second object from the current image, and to perform the determining based on the image feature amount.
4. The method according to claim 1,
- wherein in the controlling of the position and the posture of the robot, the position and the posture of the robot are changed according to an operation of the robot instructed by a user after the acquiring of the current image.
5. The method according to claim 1,
- wherein the computer is configured to control, in a case where it is determined that the first object and the second object are in the reference positional relationship, the position and the posture of the robot by force control such that the first object and the second object have the target positional relationship.
6. The method according to claim 1,
- wherein the computer is configured to control, in a case where it is determined that the first object and the second object are in the reference positional relationship, the position and the posture of the robot by position control such that the first object and the second object have the target positional relationship.
7. The method according to claim 1, wherein
- the first object and the second object are separated from each other in the reference positional relationship, and
- the first object and the second object are in contact with each other in the target positional relationship.
8. The method according to claim 2,
- wherein an imaging unit configured to capture the current image is disposed so as to move according to a change in the position and the posture of the robot.
9. The method according to claim 1,
- wherein the position and the posture of the robot are controlled by visual servoing such that the first object and the second object have the reference positional relationship.
10. The method according to claim 1,
- wherein the computer is configured to acquire a reference image showing that a first teaching object held by the robot and a second teaching object are in the reference positional relationship, and to determine whether the first object and the second object are in the reference positional relationship by using the current image and the reference image.
11. A robot control method comprising:
- setting a position and a posture of a mechanical device such that a first teaching object held by the mechanical device and a second teaching object have a target positional relationship;
- changing the position and the posture of the mechanical device holding the first teaching object such that the first teaching object and the second teaching object that are in the target positional relationship have a reference positional relationship;
- acquiring a reference image showing that the first teaching object and the second teaching object are in the reference positional relationship; and
- controlling, by a computer, in a case where the computer acquires a current image showing a positional relationship between a first object held by a robot and a second object and determines that the first object and the second object are in the reference positional relationship based on the reference image and the current image, a position and a posture of the robot holding the first object such that the first object and the second object have the target positional relationship.
12. The robot control method according to claim 11,
- wherein in the controlling of the position and the posture of the robot, the position and the posture of the robot are changed according to an operation of the robot set before the determination or an operation of the robot instructed by a user after acquiring the current image.
13. The robot control method according to claim 11,
- wherein in the controlling of the position and the posture of the robot, the position and the posture of the robot are changed based on transition of the position and the posture of the mechanical device in the changing.
14. The robot control method according to claim 11, wherein
- the second teaching object is included in the reference image, and
- the second object is included in the current image.
15. The robot control method according to claim 11,
- wherein the computer is configured to control, in a case where it is determined that the first object and the second object are in the reference positional relationship, the position and the posture of the robot by force control such that the first object and the second object have the target positional relationship.
16. The robot control method according to claim 11,
- wherein the computer is configured to control, in a case where it is determined that the first object and the second object are in the reference positional relationship, the position and the posture of the robot by position control such that the first object and the second object have the target positional relationship.
17. The robot control method according to claim 11, wherein
- the first teaching object and the second teaching object are separated from each other in the reference positional relationship, and
- the first teaching object and the second teaching object are in contact with each other in the target positional relationship.
18. The robot control method according to claim 11,
- wherein an imaging unit configured to capture the current image is disposed so as to move according to a change in the position and the posture of the robot.
19. The robot control method according to claim 11,
- wherein the position and the posture of the robot are controlled by visual servoing such that the first object and the second object have the reference positional relationship.
20. The robot control method according to claim 11,
- wherein the mechanical device is the robot.
21. A system configured to execute the method according to claim 1, the system comprising the computer.
22. The system according to claim 21, further comprising:
- the robot; and
- a control unit configured to control the robot.
23. The system according to claim 21, wherein
- the robot is an articulated robot, and
- a force sensing sensor is provided in at least one joint of the articulated robot.
24. The system according to claim 21,
- further comprising an imaging unit configured to capture the current image.
25. An article manufacturing method comprising:
- controlling the robot by the method according to claim 1 to assemble the first object to the second object.
26. A recording medium storing a program for causing the computer to execute the method according to claim 1.
Type: Application
Filed: Sep 13, 2024
Publication Date: Mar 27, 2025
Inventors: HIROTO MIZOHANA (Tokyo), RYOJI TSUKAMOTO (Kanagawa), TOMOHIRO IZUMI (Kanagawa)
Application Number: 18/884,301