AUTOMATED PACKAGE REGISTRATION SYSTEMS, DEVICES, AND METHODS
A method of operating a robotic system includes: receiving first image data representative of a package surface; identifying a pair of edges based on the first image data; determining a minimum viable region based on the pair of edges; gripping and lifting the package based on the minimum viable region; receiving second image data representative of the package after the lift; and creating registration data based on the first and second image data.
This application is a continuation of U.S. patent application Ser. No. 17/313,921 filed May 6, 2021, now allowed, which is a continuation of U.S. patent application Ser. No. 16/736,667 filed Jan. 7, 2020, issued as U.S. Pat. No. 11,034,025 on Jun. 15, 2021, which is a continuation of U.S. patent application Ser. No. 16/443,743 filed Jun. 17, 2019, issued as U.S. Pat. No. 10,562,188 on Feb. 18, 2020, which is a continuation of U.S. patent application Ser. No. 16/290,741 filed Mar. 1, 2019, issued as U.S. Pat. No. 10,369,701 on Aug. 6, 2019, which claims the benefit of U.S. Provisional Patent Application No. 62/752,756 filed Oct. 30, 2018, all of which are incorporated by reference herein in their entireties.
This application contains subject matter related to U.S. patent application Ser. No. 16/443,757, filed Jun. 17, 2019, issued as U.S. Pat. No. 10,562,189, and titled “AUTOMATED PACKAGE REGISTRATION SYSTEMS, DEVICES, AND METHODS,” which is incorporated herein by reference in its entirety.
BACKGROUND
Oftentimes, packages are palletized for shipment to a destination, where they are de-palletized. Sometimes, they are de-palletized by human workers, which can be expensive and risks bodily injury. In industrial settings, de-palletizing operations are often performed by industrial robots, such as a robotic arm that grips, lifts, transports, and delivers the package to a release point. Also, an imaging device may be employed to capture an image of a stack of packages loaded on the pallet. A system may process the image to ensure the package is efficiently handled by the robotic arm, such as by comparing the captured image with a registered image stored in a registration data source.
On occasion, the captured image of a package may fail to match a registered image. As a result, physical characteristics (e.g., measurements of a package's dimensions, weight, and/or center of mass) of the imaged objects may be unknown. Failure to correctly identify the physical characteristics can lead to a variety of unwanted outcomes. For example, such failure could cause a stoppage, which may require manual registration of the package. Also, such failure could result in a package being mishandled, especially if the package is relatively heavy and/or lop-sided.
In the following description, several specific details are presented to provide a thorough understanding of embodiments of the inventive concepts disclosed herein. One skilled in the relevant art will recognize, however, that embodiments of the present technology disclosed herein can be practiced without one or more of the specific details, or in combination with other components, etc. In other instances, well-known implementations or operations are not shown or described in detail to avoid obscuring aspects of various embodiments of the present technology disclosed herein.
The de-palletizing platform 110 can include any platform, surface, and/or structure upon which a plurality of packages 112 (singularly, “package 112”) may be stacked and/or staged and ready to be transported to the receiving platform 120. It should be noted that, although the terms “package” and “packages” will be used herein, the terms include any other container that, as discussed in detail below, is capable of being gripped, lifted, transported, and delivered, such as, but not limited to, a “case,” “box,” “carton,” or any combination thereof. Moreover, although rectangular boxes are illustrated in the drawings disclosed herein, the shapes of the boxes are not limited to such a shape but include any regular or irregular shape that, as discussed in detail below, is capable of being gripped, lifted, transported, and delivered.
Like the de-palletizing platform 110, the receiving platform 120 can include any platform, surface, and/or structure designated to receive the packages 112 for further tasks/operations. In some embodiments, the receiving platform 120 can include a conveyor system for transporting the package 112 from one location (e.g., a release point as discussed below) to another location for further operations (e.g., sorting and/or storage).
The robotic arm system 130 can include a set of link structures, joints, motors/actuators, sensors, or a combination thereof configured to manipulate (e.g., transfer and/or rotate or reorient) the packages 112. The robotic arm system 130 can include a robotic arm 132 which, for the purpose of illustration herein and not of limitation, could be an articulated, six-axis robotic arm structure. Although the discussion herein will be drawn to the robotic arm system 130, the embodiments disclosed herein are not limited to such system but include any robotic system that may be configured to perform the actions disclosed herein.
The end effector 140 can include any component or components coupled to a distal end of the robotic arm 132 configured to interact with the plurality of packages 112. For example, the end effector 140 can include structures (e.g., vacuum-based grippers) configured to grip and hold the packages 112. In some embodiments, the end effector 140 could include a force-torque (F-T) sensor 142, an arm interface 144, a gripper system 146, and/or a gripper interface 148 (as shown in
The PU 150 of
The PU 150 can include any electronic data processing unit which executes software or computer instruction code that could be stored, permanently or temporarily, in a digital memory storage device or a non-transitory computer-readable media (generally, a memory 152 of
The PU 150 may be electronically coupled (via, e.g., wires, buses, and/or wireless connections) to systems and/or sources to facilitate the receipt of input data. In some embodiments, operatively coupled may be considered as interchangeable with electronically coupled. It is not necessary that a direct connection be made; instead, such receipt of input data and the providing of output data could be provided through a bus, through a wireless network, or as a signal received and/or transmitted by the PU 150 via a physical or a virtual computer port. The PU 150 may be programmed or configured to execute the method discussed in detail below. In some embodiments, the PU 150 may be programmed or configured to receive data from various systems and/or units including, but not limited to, the image system 160, the RDS 170, the HDS 180, and/or the RPS 190. In some embodiments, the PU 150 may be programmed or configured to provide output data to various systems and/or units including, but not limited to, the robotic arm system 130, the end effector 140, and the RDS 170.
The image system 160 could include one or more sensors 162 configured to capture image data representative of one or more SIs of the packages 112 located on the de-palletizing platform 110. In some embodiments, the image data can represent visual designs and/or markings appearing on one or more surfaces of the package 112 from which a determination of a registration status of the package 112 may be made. In some embodiments, the image system 160 can include one or more cameras designed to work within a targeted (e.g., visible and/or infrared) electromagnetic spectrum bandwidth and used to detect light/energy within the corresponding spectrum. In some embodiments, the image data could be a set of data points forming a point cloud, a depth map, or a combination thereof captured from one or more three-dimensional (3-D) cameras and/or one or more two-dimensional (2-D) cameras. From these cameras, distances or depths between the image system 160 and one or more exposed (e.g., relative to a line of sight for the image system 160) surfaces of the packages 112 may be determined. In some embodiments, the distances or depths can be determined through the use of an image recognition algorithm(s), such as contextual image classification algorithm(s) and/or edge detection algorithm(s). Once determined, the distance/depth values may be used to manipulate the packages 112 via the robotic arm system 130. For example, the PU 150 and/or the robotic arm system 130 can use the distance/depth values for calculating the position from where the package 112 may be lifted and/or gripped. It should be noted that data described herein, such as the image data, can include any analog or digital signal, either discrete or continuous, which could contain information or be indicative of information.
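As a non-limiting sketch of how an exposed surface could be isolated from such depth data, assuming a downward-facing sensor so that the smallest depth corresponds to the top surface (the helper name and tolerance value are illustrative, not from the disclosure):

```python
# Sketch only: isolate the exposed top surface in a depth map from a
# downward-facing sensor. The helper name and tolerance are illustrative.

def top_surface_mask(depth_map, tolerance=5.0):
    """Return (surface_depth, mask): the smallest depth in the map (the surface
    nearest the sensor) and a per-pixel flag for pixels within `tolerance` of it."""
    surface_depth = min(min(row) for row in depth_map)
    mask = [[abs(d - surface_depth) <= tolerance for d in row] for row in depth_map]
    return surface_depth, mask

# A 3x3 depth map in mm: one package top at ~1000 mm, pallet surface at 1500 mm.
depth = [[1000.0, 1002.0, 1500.0],
         [1001.0, 1000.0, 1500.0],
         [1500.0, 1500.0, 1500.0]]
surface_depth, mask = top_surface_mask(depth)
```

The masked pixels would then be the candidates for edge detection and for calculating a gripping position.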
The image system 160 can include at least one display unit 164 configured to present an image of the package(s) 112 captured by the sensors 162 that may be viewed by one or more operators of the robotic system 100 as discussed in detail below. In addition, the display units 164 can be configured to present other information such as, but not limited to, symbology representative of registered and/or unregistered instances of the packages 112 as discussed in detail below.
The RDS 170 could include any database and/or memory storage device (e.g., a non-transitory computer-readable media) configured to store the registration records 172 for a plurality of the packages 112. For example, the RDS 170 can include read-only memory (ROM), compact disc (CD), solid-state memory, secure digital cards, compact flash cards, and/or data storage servers or remote storage devices.
In some embodiments, the registration records 172 can each include physical characteristics or attributes for a corresponding package 112. For example, each registration record 172 can include, but not be limited to, one or more template SIs, 2-D or 3-D size measurements, a weight, and/or center of mass (CoM) information. The template SIs can represent known or previously determined visible characteristics of the package 112 including the design, marking, appearance, exterior shape/outline, or a combination thereof of the package 112. The 2-D or 3-D size measurements can include lengths, widths, heights, or combination thereof for the known/expected packages.
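For illustration only, the attributes listed above could be organized as a simple record structure; the field names and units below are assumptions, not part of the disclosure:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class RegistrationRecord:
    """Hypothetical layout for a registration record 172; fields mirror the
    attributes named in the text (template SIs, sizes, weight, CoM)."""
    template_sis: List[bytes] = field(default_factory=list)  # known surface images
    dimensions: Optional[Tuple[float, float, float]] = None  # length, width, height (mm)
    weight: Optional[float] = None                           # kg
    center_of_mass: Optional[Tuple[float, float]] = None     # offset from surface center

record = RegistrationRecord(dimensions=(300.0, 200.0, 150.0), weight=5.0)
```

Optional fields reflect that a record may be created incrementally, as measurements become available during handling.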
In some embodiments, the RDS 170 can be configured to receive a new instance of the registration record 172 (e.g., for a previously unknown package and/or a previously unknown aspect of a package) created in accordance with the embodiments disclosed below. Accordingly, the robotic system 100 can automate the process for registering the packages 112 by expanding the number of registration records 172 stored in the RDS 170, thereby making a de-palletizing operation more efficient with fewer unregistered instances of the packages 112. By dynamically (e.g., during operation/deployment) updating the registration records 172 in the RDS 170 using live/operational data, the robotic system 100 can efficiently implement a computer-learning process that can account for previously unknown or unexpected conditions (e.g., lighting conditions, unknown orientations, and/or stacking inconsistencies) and/or newly encountered packages. Accordingly, the robotic system 100 can reduce the failures resulting from “unknown” conditions/packages, associated human operator interventions, and/or associated task failures (e.g., lost packages and/or collisions).
The HDS 180 can include components configured to provide a vertical measurement of an object (e.g., the package 112) relative to a reference point (e.g., a surface associated with the HDS 180). For the example illustrated in
The RPS 190 can include components/circuits configured to trigger a signal that is provided to the PU 150 when the package 112 crosses or contacts a horizontal plane associated with the RPS 190. In some embodiments, a signal triggered by the RPS 190 may be used to determine the position at which the gripper system 146 releases the package 112 onto the receiving platform 120. In some embodiments, the RPS 190 could be installed on the receiving platform 120 as shown in
Referring now to
The arm interface 144 could be any device configured to couple the distal end of the robotic arm 132 of
Referring now to
The gripper interface 148 of
In some embodiments, the robotic system 100 can make 2-D measurements (e.g., lengths and widths) of packages from the image data (through image recognition algorithm(s) that could include edge detection algorithm(s)). For example, the robotic system 100 can use image recognition algorithm(s) (e.g., edge detection algorithm(s) and/or mapping algorithm(s)) to make the 2-D measurements of the packages 112 at the de-palletizing platform 110, such as the packages 112-1 through 112-3, 112-13 through 112-14, and 112-21 through 112-24. The robotic system 100 can make the 2-D measurements based on depth to the measured surface. The robotic system 100 can compare (e.g., once the measured package is identified, such as via image recognition of its exposed surface) the 2-D measurements of the corresponding package 112 to their registration records 172 to confirm the accuracy of the registration records 172.
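As a non-limiting sketch of how a 2-D measurement could be scaled by the depth to the measured surface, assuming an idealized pinhole-camera model (the function name and focal-length parameter are illustrative assumptions):

```python
def pixel_to_world(length_px, depth_mm, focal_length_px):
    """Pinhole-camera relation: physical size = pixel size * depth / focal length
    (all camera parameters here are illustrative assumptions)."""
    return length_px * depth_mm / focal_length_px

# An edge spanning 200 px on a surface 1000 mm away, with an 800 px focal length:
length_mm = pixel_to_world(200, 1000.0, 800.0)  # 250.0 mm
```

The same pixel span measures a longer physical edge on a deeper surface, which is why the depth term is needed before comparing against stored dimensions.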
In some embodiments, CoM information stored in the registration record 172 may be provided to the PU 150 for the purpose of positioning the end effector 140 and/or the gripper system 146 of
In addition, the packages 112-31, 112-32, 112-44, and 112-52 through 112-54 can represent unregistered and/or erroneously processed/matched instances of the packages 112, which may not correspond to the registration record 172 stored in the RDS 170 of
As shown in
In addition, example MVRs 112-31f, 112-44f, and 112-53f are illustrated in
In some embodiments with reference made to the Cartesian coordinate system, force measurement(s) along one or more axes (i.e., F(x-axis), F(y-axis), and/or F(z-axis)) and/or moment measurement(s) about one or more axes (i.e., M(x-axis), M(y-axis), and/or M(z-axis)) may be captured via the F-T sensor 142. By applying CoM calculation algorithms, the weight of the package may be computed as a function of the force measurement(s), and the CoM of the package may be computed as a function of the force measurement(s) and the moment measurement(s). These measurements can be added to a new instance of the registration record 172 of
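The weight/CoM computation described above can be sketched as follows, assuming a simple quasi-static model and one possible sensor-frame sign convention; a deployed F-T sensor's axes and signs may differ:

```python
def weight_and_com(f_z, m_x, m_y):
    """Quasi-static estimate: weight from the vertical force, CoM offset (x, y)
    in the sensor frame from the moments. Sign convention is an assumption."""
    weight = abs(f_z)
    x = m_y / weight    # M(y-axis) grows with a CoM offset along +x
    y = -m_x / weight   # M(x-axis) grows with a CoM offset along -y
    return weight, (x, y)

# A ~5 kg package (49.05 N) hanging 0.02 m along +x from the sensor axis:
weight, (com_x, com_y) = weight_and_com(f_z=-49.05, m_x=0.0, m_y=0.981)
```

Both outputs could then populate the weight and CoM fields of a new registration record.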
While in the raised position, the robotic system can reimage the lifted package to clarify the previously unclear edges. Second image data representative of a partial SI of 112-44 (i.e., the portion of the entire SI that is not blocked by the gripper system 146) may be captured by the sensors 162 of the image system 160 of
The robotic system 100 can include a capture module 202. The capture module 202 captures the SI as the first image data. For example, the capture module 202 can capture the first image data with the sensor(s) 162 of
The robotic system 100 can include the region module 204, which can be coupled to the capture module 202. The region module 204 calculates the MVR. For example, the region module 204 can calculate the MVR based on the package 112, the registration record 172 of
In some embodiments, the capture module 202 can compare the received image data (e.g., the first image data) to the registration record 172. For example, the capture module 202 can compare the first image data and/or any processing results thereof (e.g., dimension/size estimates and/or visual markings derived from the image data) to existing descriptions/templates of known or previously encountered packages. Based on the comparison, the region module 204 can determine whether the first image data matches corresponding information of a known or previously encountered package as stored in the registration records 172.
The region module 204 can calculate the MVR in a number of ways, including, for example, calculating the MVR based on whether the package 112 is registered as the registration record 172 (e.g., whether comparison of the first image data to the registration records 172 returns a match). More specifically as an example, if the first image data matches one of the registration records 172 (e.g., the package 112 is registered as one or more of the registration records 172), the region module 204 can avoid calculating the MVR. In contrast, the region module 204 can calculate the MVR when the first image data does not match any of the registration records 172. For example, the first image data captured can represent the top surface of the unregistered instance of the package 112. Since the package 112 is unregistered, some of the edges of the package 112 can be unclear. More specifically as an example, the unregistered package 112-31 of
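One possible (hypothetical) form of the registration-match check described above, comparing measured dimensions against stored records with a tolerance; the record layout and tolerance value are assumptions, not from the disclosure:

```python
def find_matching_record(measured_dims, records, tol=10.0):
    """Return the first record whose stored (length, width) match the measured
    dimensions within `tol`; None means the package is treated as unregistered."""
    for rec in records:
        length, width = rec["dimensions"]
        if abs(length - measured_dims[0]) <= tol and abs(width - measured_dims[1]) <= tol:
            return rec
    return None

records = [{"name": "carton_A", "dimensions": (300.0, 200.0)},
           {"name": "carton_B", "dimensions": (450.0, 450.0)}]
match = find_matching_record((302.0, 198.0), records)  # matches carton_A
miss = find_matching_record((600.0, 100.0), records)   # no match: unregistered
```

A real system would also compare visual templates (the SIs); a `None` result is what would trigger the MVR calculation.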
For a specific example, the first image data of the package 112-31 can include two clear edges 112-31b of
For a further example, design and/or markings appearing on the SI of the package 112 can include a straight line. The straight line can be mistaken as the edge of the surface of the package 112. To reduce potential misidentification of the edge, the region module 204 can calculate the MVR excluding the portion of the surface with the straight line. More specifically as an example, the region module 204 can calculate the MVR to be smaller than the boundary of the surface area that contains the straight line.
For a different example, the region module 204 can calculate the MVR based on the location of the package 112. For example, the pallet can include more than one package 112 (registered and/or unregistered) as shown in
In contrast, some of the edges for the unregistered instance of the package 112 can be unknown, such as due to the package 112-52 being unregistered. Furthermore, an unknown package (e.g., the package 112-52) can be surrounded by other packages, such as the packages 112-31, 112-32, 112-33, 112-43, 112-44, 112-51, 112-53, and/or 112-54 as illustrated in
In some instances, one or more of the surrounding packages (e.g., the package 112-32) can also be unregistered/unmatched according to the registration records 172, which can introduce further uncertainties about the remaining/unknown edges of the package 112-52. Without a clear boundary established between the package 112-52 and the package 112-32, the SI for the package 112-52 and the SI for the package 112-32 can overlap with each other.
In contrast, while the package 112-31 may also be unregistered/unmatched, the sensors 162 can detect a unique location/state of the package 112-31 relative to the de-palletizing platform 110 and/or other packages. For example, the region module 204 can determine that the package 112-31 satisfies a predetermined location/state when at least one edge of the package 112-31 is not adjacent to/abutting another package 112. More specifically as an example, the two edges 112-31b
Accordingly, the region module 204 can determine the unregistered instance of the package 112 to be at or near the outer perimeter of the de-palletizing platform 110 based on the visibility of the edge(s), the corner, or a combination thereof of the package 112. In some embodiments, the region module 204 can further determine the unregistered instance of the package 112 to be exposed (e.g., not adjacent to other packages, such as due to removal of a previously adjacent package) along one or more horizontal directions. The region module 204 can prioritize the calculation of the MVR for the unregistered instance of the package 112 that is exposed and/or at the exterior over other unregistered packages (e.g., the unregistered package 112-52 located at the horizontally interior portion of the stack/layer).
In some embodiments, the region module 204 can prioritize the calculation of the MVR for the package with the greater number of edges that are clearly visible and/or exposed over the package with fewer such edges. Accordingly, the robotic system 100 can reduce the risk of grasping the package with SI that overlaps with another package 112 and reduce the corresponding gripping/lifting failures. For the example illustrated in FIG. 4B, the region module 204 can determine the MVR for the packages 112-31, 112-53, and/or 112-44 before the packages 112-32, 112-54, and/or 112-52. The region module 204 can transmit the MVR to a lift module 206.
The region module 204 can calculate the MVR based on two unclear corners. More specifically as an example, the two unclear corners can be comprised of a combination of at least three unclear edges, two clear edges and one unclear edge, or two unclear edges and one clear edge. As discussed above, the region module 204 can predict the boundary of the surface of the package 112 by extending each edge to intersect the other edge to create a corner. The region module 204 can calculate the MVR based on the boundary of the surface area created by the three edges/two corners by determining the MVR to be smaller than the boundary created by the two unclear corners.
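A minimal geometric sketch of the MVR determination described above, under the simplifying assumption of an axis-aligned surface with one clear corner and two clear edges; the margin fraction used to keep the region smaller than the predicted boundary is illustrative:

```python
def minimum_viable_region(corner, edge1_len, edge2_len, margin=0.2):
    """Axis-aligned MVR anchored at a clear corner, shrunk by `margin` so it
    stays inside the true (possibly larger) package surface."""
    x0, y0 = corner
    return (x0, y0, x0 + edge1_len * (1.0 - margin), y0 + edge2_len * (1.0 - margin))

# Clear corner at the origin with ~300 mm and ~200 mm clear edges:
mvr = minimum_viable_region((0.0, 0.0), 300.0, 200.0)  # approximately (0, 0, 240, 160)
```

Shrinking toward the clear corner is what keeps the gripper over the actual package even when the far edges were misjudged.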
The robotic system 100 calculating the MVR dynamically and in real-time provides improved accuracy and performance in grasping the unregistered instance of the package 112. By calculating the MVR, the robotic system 100 can estimate the surface area (e.g., the corresponding edges/boundaries) of the package 112 where the gripper system 146 of
The robotic system 100 can include the lift module 206, which can be coupled to the region module 204. The lift module 206 implements (via, e.g., communicating and/or executing) the command for the robotic arm 132 of
The lift module 206 can operate in a number of ways. For example, the lift module 206 can execute the lift command for the robotic arm 132 to grasp the package 112 within the MVR where the unclear edge is visible to the sensors 162. For a specific example, as shown in
For a different example, the lift module 206 can determine the weight of the package 112. More specifically as an example, the lift module 206 can determine the weight of the unregistered instance of the package 112 using the F-T sensor 142 of
In some embodiments, the capture module 202 can further capture, as the second image data, the SI of the unregistered instance of the package 112 after the lift and/or as the package 112 is being lifted by the grasp on the MVR. More specifically as an example, the capture module 202 can capture the second image data based on the package 112 being lifted for the lift check distance to include the now visible two clear edges 112-44g of
The robotic system 100 can include the extraction module 208, which can be coupled to the lift module 206. The extraction module 208 extracts the third image data. For example, the extraction module 208 can extract the third image data based on the first image data, the second image data, or a combination thereof.
The extraction module 208 can extract the third image data in a number of ways. For example, the extraction module 208 can determine an image difference based on comparing the first image data and the second image data. The image difference can represent the difference between the first image data and the second image data of the same unregistered instance of the package 112.
More specifically as an example, the first image data can include the SI with the design and/or markings of the package 112. However, since the package 112 is unregistered, the edges of the package 112 can be unclear or not definitively determined. Thus, the first image data can include the SI of the package 112 with an edge that is unclear or overlapped with the SI of another package 112. For a further example, the second image data can include the SI of the package 112 after being lifted for the lift check distance. More specifically as an example, the second image data can include the SI of the package 112 with the previously unclear edges (e.g., edges 112-44b) becoming clear edges (e.g., edges 112-44g) after the package 112 is lifted. The unclear edges can become clear edges after the package 112 is lifted because the package 112 becomes separate (e.g., at a higher height) from other adjacent packages 112. The sensors 162 can distinguish between different packages 112 because the distances or depths from the sensors 162 to the lifted package 112 and to the adjacent packages 112 are different.
The extraction module 208 can extract the third image data based on combining the image differences between the first image data and the second image data. For example, the first image data of the package 112-44 can include the design and/or markings, the two clear edges 112-44b, the corner 112-44c, or a combination thereof. For a further example, the second image data of the package 112-44 can include the two clear edges 112-44g. The extraction module 208 can extract the third image data of the package 112-44 by including the design and/or markings, the two clear edges 112-44b, the corner 112-44c, the two clear edges 112-44g, or a combination thereof.
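The image-difference step described above might be sketched as a per-pixel comparison of two grayscale frames; the threshold and the list-of-lists data layout are assumptions for illustration:

```python
def image_difference(first, second, threshold=10):
    """Per-pixel absolute difference of two equally sized grayscale frames;
    pixels whose change exceeds `threshold` are flagged True (e.g., an edge
    revealed once the package is lifted clear of its neighbors)."""
    return [[abs(a - b) > threshold for a, b in zip(row1, row2)]
            for row1, row2 in zip(first, second)]

before = [[10, 10, 200],
          [10, 10, 200]]
after = [[10, 10, 10],
         [10, 10, 10]]
changed = image_difference(before, after)  # True only in the rightmost column
```

The flagged pixels would mark where the lift changed the scene, and combining them with the features already clear in the first image data yields the third image data.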
For a further example, the extraction module 208 can determine the length, width, or a combination thereof of the package 112 based on the third image data. More specifically as an example, based on the clear edges, the extraction module 208 can determine the dimension including the length, the width, or a combination thereof. The extraction module 208 can transmit the third image data, the length, the width, or a combination thereof to the registration module 212.
Extracting the third image data dynamically and in real-time improves the performance and accuracy of the robotic system 100 in identifying and grasping the unregistered instance of the package 112. By extracting the third image data, the robotic system 100 can identify the edges of the package 112 to differentiate it from another package 112. By clearly identifying the boundaries/edges of the package 112, the robotic system 100 can efficiently place the gripper system 146 on the package 112 to securely grasp and transfer the package 112. As a result, the robotic system 100 can continue to transfer packages 112 that are unregistered with the robotic system 100, thereby improving the performance of the workflow for depalletizing the packages 112.
For illustrative purposes, the lift module 206 is described as executing the command to lift the package 112, but the lift module 206 can operate differently. For example, the lift module 206 can determine the CoM based on lifting the package 112. More specifically as an example, the lift module 206 can determine whether the location within the MVR where the package 112 is gripped and lifted by the gripper system 146 is above the CoM of the package 112. For a further example, the lift module 206 can determine whether the CoM is within the horizontal (x-y) area represented by the MVR.
The lift module 206 can determine the CoM in a number of ways. For example, the lift module 206 can determine the CoM with the F-T sensor 142 as described above. For a further example, the lift module 206 can determine whether the CoM is under the surface area represented as the MVR with the F-T sensor 142, the CoM algorithm, or a combination thereof.
Using the F-T sensor 142 and the CoM algorithms, the lift module 206 can also determine whether the location/portion within the MVR that contacts or is covered by the gripper coincides with or includes the CoM of the package 112. For a further example, the lift module 206 can determine a new location within the MVR for the gripper system 146 to grip the package 112 if the original gripping location is not above the CoM of the package 112. More specifically as an example, the lift module 206 can determine, relative to the original gripping location, a new location to grip that is above the CoM using the CoM algorithm discussed above. For example, the lift module 206 can calculate a vector direction based on a measured torque and/or a direction thereof. Based on the torque, the lift module 206 can estimate a location/direction of a downward force relative to the F-T sensor 142 and/or the gripper system 146. Also, the lift module 206 can calculate a distance based on a magnitude of the measured torque, a measured weight of the lifted package 112, a relationship between the gripping location and the package boundaries/edges, or a combination thereof. The lift module 206 can check if the new location (e.g., the vector direction and the distance) to grip is within the MVR. If the new location is above the CoM, the lift module 206 can verify that the CoM of the package 112 is properly determined.
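The vector-direction and distance computation described above could be sketched as follows, again assuming a particular sensor-frame sign convention and an axis-aligned MVR rectangle (both assumptions for illustration):

```python
import math

def regrip_offset(m_x, m_y, weight):
    """Direction (radians) and distance of the CoM from the current grip point,
    estimated from residual moments; the sign convention is an assumption."""
    x = m_y / weight
    y = -m_x / weight
    return math.atan2(y, x), math.hypot(x, y)

def within_mvr(grip_xy, direction, distance, mvr):
    """Check that the proposed new grip point still lands inside the MVR."""
    nx = grip_xy[0] + distance * math.cos(direction)
    ny = grip_xy[1] + distance * math.sin(direction)
    x0, y0, x1, y1 = mvr
    return x0 <= nx <= x1 and y0 <= ny <= y1

# Residual moment implies the CoM sits 0.02 m along +x from the grip point:
direction, distance = regrip_offset(m_x=0.0, m_y=0.981, weight=49.05)
ok = within_mvr((0.10, 0.08), direction, distance, (0.0, 0.0, 0.24, 0.16))
```

If the shifted point falls outside the MVR, the system would keep the original grip rather than regrip over an unverified surface region.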
If the lift module 206 determines that the location within the MVR gripped is not the CoM, the lift module 206 can execute the command to drop or lower the package 112 to where the gripper system 146 had lifted the package 112. Furthermore, the lift module 206 can execute the command for the gripper system 146 to grasp the package 112 at the new location within the MVR (by, e.g., lowering and releasing the package 112, repositioning the gripper system 146, and then regripping the package 112) and for the robotic arm 132 to lift the package 112 for the lift check distance. For an additional example, the gripper system 146 can grasp at the new location within the MVR without blocking the sensors 162 from detecting the unclear edge of the package 112. The lift module 206 can transmit the CoM to the registration module 212.
The robotic system 100 determining the CoM of the package 112 dynamically and in real-time provides improved performance of the robotic system 100 in transferring unregistered instances of the package 112. By accurately identifying the CoM of unregistered/unrecognized packages, the robotic system 100 improves the stability of the gripper system 146 in grasping the package 112. As a result, the robotic system 100 can continue to transfer packages 112 that are unregistered with the robotic system 100, thereby improving the performance of the workflow for depalletizing the packages 112.
For illustrative purposes, the capture module 202 is described as capturing the first image data, the second image data, or a combination thereof, but the capture module 202 can operate differently. For example, the capture module 202 can capture regripped image data representing the SI of the package 112 after the CoM is correctly determined, as discussed above. More specifically as an example, the robotic arm 132 can lift the package 112 after the gripper system 146 grasps the package 112 at the new location within the MVR. The capture module 202 can capture the regripped image data representing the SI of the package 112 gripped at the new location within the MVR and above the CoM. The capture module 202 can transmit the regripped image data to the extraction module 208.
For illustrative purposes, the extraction module 208 is described as extracting the third image data based on the first image data, the second image data, or a combination thereof, but the extraction module 208 can operate differently. For example, the extraction module 208 can extract the third image data based on the first image data, the regripped image data, or a combination thereof, similarly as described above for the extraction module 208 extracting the third image data based on the first image data, the second image data, or a combination thereof. The extraction module 208 can determine the dimension including the length, the width, or a combination thereof of the package 112 as discussed above. The extraction module 208 can transmit the third image data, the length, the width, or a combination thereof to the registration module 212.
The robotic system 100 can include a transfer module 210, which can be coupled to the extraction module 208. The transfer module 210 executes the command to transfer the package 112 to the receiving platform 120 of
The transfer module 210 can execute the command in a number of ways. For example, the transfer module 210 can execute the command to transfer the package 112 based on the registration status of the package 112. More specifically as an example, if the registration record 172 exists for the package 112, the transfer module 210 can execute the command for the robotic arm 132 grasping the registered instance of the package 112 to transfer the package 112 for placing on the receiving platform 120.
For a different example, if the registration record 172 does not exist, the third image data of the package 112 will be extracted as discussed above. Moreover, the transfer module 210 can execute the command for the robotic arm 132 grasping the unregistered instance of the package 112 to transfer the package 112 to the receiving platform 120. For a further example, when the robotic arm 132 lowers the package 112 to the receiving platform 120, the bottom extent of the package 112 can trigger the HDS 180 of
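The branching described in the two examples above can be summarized as a dispatch on registration status. The following is an illustrative sketch only; the function and action names are hypothetical and stand in for the commands executed by the transfer module 210.

```python
def transfer_actions(package_id, registration_records):
    """Return the action sequence for transferring a package, branching on
    whether a registration record exists for it (hypothetical sketch).

    registration_records: mapping of package identifier -> stored record,
    standing in for the registration record 172 lookup.
    """
    if package_id in registration_records:
        # Registered instance: grasp and transfer directly.
        return ["grasp_registered", "transfer", "place_on_receiving_platform"]
    # Unregistered instance: the third image data is extracted, and lowering
    # the package past the HDS triggers the height measurement.
    return ["extract_third_image", "grasp_unregistered", "transfer",
            "lower_past_hds", "place_on_receiving_platform"]
```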
The height of the HDS 180 relative to the floor can be predefined, as the height of the receiving platform 120 can be predefined. The transfer module 210 can determine the height of the package 112 based on a time when the bottom of the package 112 crosses the HDS 180 and a height of the gripper system 146. More specifically as an example, the transfer module 210 can determine the height of the package 112 based on the distance/difference between the location or height of the gripper system 146 when the signal (i.e., representative of the package 112 crossing the HDS 180) is received and the predefined height of the HDS 180. The transfer module 210 can transmit the height of the package 112 to the registration module 212.
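The height computation described above reduces to a single subtraction. The sketch below is illustrative; the numeric HDS height and the function name are assumptions, not values from the disclosure.

```python
HDS_HEIGHT = 0.45  # predefined height of the HDS above the floor (assumed value, meters)

def package_height(gripper_height_at_crossing, hds_height=HDS_HEIGHT):
    """Package height = height of the gripper system at the moment the
    package bottom crosses the HDS, minus the predefined HDS height."""
    return gripper_height_at_crossing - hds_height
```

For example, if the gripper is 0.75 m above the floor when the crossing signal is received and the HDS sits at 0.45 m, the package is 0.30 m tall.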
The robotic system 100 determining the height of the unregistered instance of the package 112 dynamically and in real-time provides improved performance of the robotic system 100 depalletizing the packages 112. By determining the height of unregistered/unrecognized packages, the robotic system 100 can accurately identify the attributes of the packages 112 to safely grasp them. As a result, the robotic system 100 can continuously transfer the packages 112 of the same type to improve the performance of the workflow for depalletizing the packages 112.
The robotic system 100 can include the registration module 212, which can be coupled to the transfer module 210. The registration module 212 registers the attribute of the package 112. For example, the registration module 212 can register (e.g., by tying or storing together) the third image data, the length, the width, the height, the weight, the CoM, or a combination thereof of the unregistered instance of the package 112. More specifically as an example, the registration module 212 can generate the registration record 172 to convert the unregistered instance of the package 112 into the registered instance of the package 112.
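The registration step above amounts to bundling the measured attributes into a new record. The following sketch is a hypothetical simplification of generating the registration record 172; the field names are assumptions.

```python
def create_registration_record(surface_image, length, width, height, weight, com):
    """Tie the extracted attributes of an unregistered package together into
    a new registration record, converting it into a registered instance
    (illustrative sketch)."""
    return {
        "surface_image": surface_image,   # e.g., the third image data
        "length": length,
        "width": width,
        "height": height,                 # e.g., from the HDS crossing
        "weight": weight,
        "center_of_mass": com,
    }
```

A subsequent lookup of this record lets the same package type be handled as a registered instance on later transfers.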
It should be noted that the steps of the method described above may be embodied as computer instruction code stored in a non-transitory computer-readable medium. The method may include one or more of the steps described herein, which one or more steps may be carried out in any desired order including being carried out simultaneously with one another. For example, two or more of the steps disclosed herein may be combined in a single step and/or one or more of the steps may be carried out as two or more sub-steps. Further, steps not expressly disclosed or inherently present herein may be interspersed with or added to the steps described herein, or may be substituted for one or more of the steps described herein as will be appreciated by a person of ordinary skill in the art having the benefit of the instant disclosure.
The above Detailed Description of examples of the disclosed technology is not intended to be exhaustive or to limit the disclosed technology to the precise form disclosed above. While specific examples for the disclosed technology are described above for illustrative purposes, various equivalent modifications are possible within the scope of the disclosed technology, as those skilled in the relevant art will recognize. For example, while processes or modules are presented in a given order, alternative implementations may perform routines having steps, or employ systems having modules, in a different order, and some processes or modules may be deleted, moved, added, subdivided, combined, and/or modified to provide alternative or sub-combinations. Each of these processes or modules may be implemented in a variety of different ways. Also, while processes or modules are at times shown as being performed in series, these processes or modules may instead be performed or implemented in parallel, or may be performed at different times. Further, any specific numbers noted herein are only examples; alternative implementations may employ differing values or ranges.
As used herein, the term “embodiment” means an embodiment that serves to illustrate by way of example but not limitation. It will be appreciated to those skilled in the art that the preceding examples and embodiments are exemplary and not limiting to the broad scope of the inventive concepts disclosed herein. It is intended that all modifications, permutations, enhancements, equivalents, and improvements thereto that are apparent to those skilled in the art upon a reading of the specification and a study of the drawings are included within the broad scope of the inventive concepts disclosed herein. It is therefore intended that the following appended claims include all such modifications, permutations, enhancements, equivalents, and improvements falling within the broad scope of the inventive concepts disclosed herein.
Claims
1. A method of operating a robotic system, the method comprising:
- identifying a pair of edges based on image data that represents a package;
- determining a minimum viable region based on the pair of edges;
- generating one or more commands for moving the package based on the minimum viable region;
- obtaining an attribute associated with the package; and
- creating registration data representative of a new registration record of the package based on at least the attribute associated with the package.
2. The method of claim 1, wherein the attribute includes additional image data representing the package.
3. The method of claim 1, wherein the attribute includes a length, a width, a height, a weight, or a center of mass (CoM) of the package.
4. The method of claim 1, wherein the attribute includes a design, marking, appearance, or exterior shape/outline of the package.
5. The method of claim 1, wherein creating the registration data based on at least the attribute associated with the package includes comparing the attribute to a template surface image representing known or previously determined visible characteristics of the package.
6. The method of claim 1, wherein the attribute is received after moving the package.
7. The method of claim 1, wherein:
- the image data includes a three-dimensional depth map and a two-dimensional surface image of a group of objects including the package;
- the pair of edges are identified based on the three-dimensional depth map; and
- the attribute is obtained based on the two-dimensional surface image or the three-dimensional depth map.
8. The method of claim 1, wherein determining the minimum viable region includes determining a size of the minimum viable region based on a gripper size representative of one or more lateral dimensions of an end-effector.
9. The method of claim 1, further comprising:
- receiving data after moving the package, wherein the data represents a measurement from a sensor connected to or integral with an end-effector;
- wherein obtaining the attribute includes determining a center of mass location of the package based on the received data; and
- when the determined center of mass location is outside of the minimum viable region, generating one or more commands for (1) releasing the package from the end-effector, (2) placing the end-effector over the center of mass location, and (3) regripping the package based on the center of mass location.
10. A robotic system comprising:
- at least one processor; and
- at least one memory coupled to the at least one processor, the memory including instructions executable by the at least one processor to: identify a pair of edges based on image data that represents a package; determine a minimum viable region based on the pair of edges; generate one or more commands for moving the package based on the minimum viable region; obtain an attribute associated with the package; and create registration data representative of a new registration record of the package based on at least the attribute associated with the package.
11. The system of claim 10, wherein the attribute includes additional image data representing the package.
12. The system of claim 10, wherein the attribute includes a length, a width, a height, a weight, or a center of mass (CoM) of the package.
13. The system of claim 10, wherein the attribute includes a design, marking, appearance, or exterior shape/outline of the package.
14. The system of claim 10, wherein creating the registration data based on at least the attribute associated with the package includes comparing the attribute to a template surface image representing known or previously determined visible characteristics of the package.
15. The system of claim 10, wherein the attribute is received after moving the package.
16. A non-transitory memory medium storing computer-executable instructions that, when executed by a computing system, cause the computing system to perform a computer-implemented method, the method comprising:
- identifying a pair of edges based on image data that represents a package;
- determining a minimum viable region based on the pair of edges;
- generating one or more commands for moving the package based on the minimum viable region;
- obtaining an attribute associated with the package; and
- creating registration data representative of a new registration record of the package based on at least the attribute associated with the package.
17. The non-transitory memory medium of claim 16, wherein the attribute includes additional image data representing the package.
18. The non-transitory memory medium of claim 16, wherein the attribute includes a length, a width, a height, a weight, or a center of mass (CoM) of the package.
19. The non-transitory memory medium of claim 16, wherein the attribute includes a design, marking, appearance, or exterior shape/outline of the package.
20. The non-transitory memory medium of claim 16, wherein creating the registration data based on at least the attribute associated with the package includes comparing the attribute to a template surface image representing known or previously determined visible characteristics of the package.
Type: Application
Filed: Sep 7, 2023
Publication Date: Dec 28, 2023
Inventors: Rosen Diankov (Tokyo), Huan Liu (Tokyo), Xutao Ye (Tokyo), Jose Jeronimo Moreira Rodrigues (Tokyo), Yoshiki Kanemoto (Tokyo), Jinze Yu (Tokyo), Russell Islam (Tokyo)
Application Number: 18/463,233