CASED GOODS INSPECTION SYSTEM AND METHOD
A method of inspecting cased goods includes advancing at least one case of goods on a conveyor, generating an illumination sheet of parallel illuminating rays with at least one electromagnetic source, and capturing an image, formed by the illumination sheet passing through a diffuser, with at least one camera located so as to capture illumination from diffused parallel rays of the illumination sheet, where the image embodies a cased goods image that is generated by the case of goods moving between the electromagnetic source and the at least one camera, where part of the parallel sheet of illumination is at least partially blocked by the case of goods, thus generating a gray level image.
This application is a continuation of U.S. Application No. 17/216,803, filed Mar. 30, 2021 (now U.S. Pat. No. 11,449,978, issued on Sep. 20, 2022), which is a continuation of U.S. Application No. 15/416,922, filed Jan. 26, 2017 (now U.S. Pat. No. 10,964,007, issued on Mar. 30, 2021), which is a non-provisional of and claims the benefit of U.S. Provisional Pat. Application No. 62/287,128, filed on Jan. 26, 2016, the disclosures of which are incorporated herein by reference in their entireties.
1. TECHNICAL FIELD

This invention relates to product inspection, and in particular to cased goods inspection systems and methods therefor.
2. BACKGROUND

There is a need to improve cased goods inspection systems and methods.
Other cased goods inspection systems are mostly built on LED curtain lighting. The LEDs in these arrays have considerable spacing between them (> 5 mm), so they only produce a ‘sampled’ image instead of imaging the cased goods completely. The other main disadvantage of LED light curtains is the lack of transparency measurement: the light curtains only detect the presence of material in their path. They are therefore unable to differentiate a piece of shrink wrap from an opaque cardboard piece.
Other approaches use the laser triangulation method. This approach is fast, precise and robust, but is sensitive to reflective surfaces such as shrink wrap. Also, like the previous method, it cannot differentiate a piece of shrink wrap from cardboard.
Therefore, there is a need in the market for a new approach that integrates speed, resolution, robustness and cost for inspecting cased goods.
SUMMARY

In accordance with an aspect of the proposed solution there is provided a cased goods inspection system adapted to determine the presence of product being scanned and to obtain at least “real box”, “max box”, “max bulge”, “orientation angle”, “distance from one side of the conveyor”, etc. measurements.
In accordance with another aspect of the proposed solution there is provided a cased goods inspection system adapted to reject or accept product based on allowable dimensions applied at least to “real box”, “max box” and “max bulge” measurements.
In accordance with a further aspect of the proposed solution there is provided a cased goods inspection system with sensor(s) adapted to sense incident light intensity variations, and detector(s) detecting and accounting for such incident light intensity variations experienced by a vision system of the cased goods inspection system, to reduce false product detection and false measurements, including rejection of adverse effects on measurement accuracy from opaque or translucent packaging materials such as plastic shrink wrap.
In accordance with a further aspect of the proposed solution there is provided a cased goods inspection system adapted to identify the presence of debris on a window of a camera subsystem of the cased goods inspection system to reduce false product detection or false measurements.
In accordance with a further aspect of the proposed solution there is provided a cased goods inspection system adapted to inspect an array of goods at least partially encased in shrink wrap packaging.
In accordance with a further aspect of the proposed solution there is provided a cased goods inspection system adapted to inspect an array of goods having complex shapes encased in shrink wrap packaging.
The use of the word “a” or “an” when used in conjunction with the term “comprising” in the claims and/or the specification may mean “one”, but it is also consistent with the meaning of “one or more”, “at least one”, and “one or more than one”. Similarly, the word “another” may mean “at least a second” or “more”.
As used in this specification and claim(s), the words “comprising” (and any form of comprising, such as “comprise” and “comprises”), “having” (and any form of having, such as “have” and “has”), “including” (and any form of including, such as “include” and “includes”) or “containing” (and any form of containing, such as “contain” and “contains”), are inclusive or open-ended and do not exclude additional, unrecited elements.
The invention will be better understood by way of the following detailed description of embodiments of the proposed solution with reference to the appended drawings, in which:
wherein similar features bear similar labels throughout the drawings. References to “top” and “bottom” qualifiers in the present specification are made solely with reference to the orientation of the drawings as presented in the application and do not imply any absolute spatial orientation.
A proposed cased goods inspection system will now be described with reference to illustrative embodiments thereof.
System Input and Output

With reference to
One example of a case of good(s) is a shrink wrapped product and/or product container or array of one or more product containers or product(s) included in shrink wrap as illustrated in
With reference to
At least some of these system components will now be described in more detail.
Input conveyor B transports incoming products from external equipment. Conveyor B, for example, includes a mat top high-grip conveyor, but can be of any other form (such as roller bed) allowing moving product A into and through the vision system C with reduced vibration and slippage. For example, acceptable vibration can be under a vibration threshold limit.
The output conveyor J transports product A away from vision system C. Conveyor J, for example, includes a mat top low-grip conveyor, but can be of any other form allowing moving product A through and away from the vision system C with reduced vibration. For example, acceptable vibration can be under a vibration threshold limit.
Vision system C is positioned, at least in part, around and about the conveyors B and/or J for measuring the top and side profiles of product A.
In accordance with one aspect, the vision system C can include a first light source F which emits a (first) sheet of light (not shown), e.g. a continuous plane of substantially parallel light, within a small gap between conveyors B and J. For example first light source F can be located above conveyors B and J as otherwise shown in
The vision system further includes a first camera system G located for example opposite first light source F with respect to conveyors B and J, positioned to receive the parallel light emitted by first light source F. For example if first light source F is located above conveyors B and J, then the first camera system G is located below conveyors B and J. In other aspects, the orientation of first light source and first camera system may be rotated as desired about the axis defined by the direction of travel of conveyors B and J maintaining the relationship between light source emitter and camera system receiver.
A second light source H emits a (second) sheet of light, i.e. a continuous plane of substantially parallel light, over the small gap between conveyors B and J. For example second light source H can be located on one side of conveyors B and J (transmission of the parallel light beams of the second sheet being substantially orthogonal to the first plane).
A second camera system I is correspondingly located to receive illumination from (e.g. opposite) second light source H with respect to conveyors B and J, positioned to receive the parallel light emitted by second light source H. For example if second light source H is located on one side of conveyors B and J, then second camera system I is located on the opposite side of conveyors B and J.
A controller or any other device or system (local or remote) operably coupled to the system, includes a computer program that is capable of registering and analyzing image data to calculate desired measurements of products A.
Without limiting the invention, at least one light source F or H can include a light shaper LS made with lenses or mirrors. The light source itself can be a laser, an LED or another type such as a gas lamp. In other aspects the source may be any other device of EM radiation suitable for EM illumination of a target object, the reflection or transmission of which may be captured by an appropriate imaging system generating an image or pseudo image of the illuminated target object.
The collimated output beam provides the sheet of parallel propagating light which, when impeded by the product A, casts an orthographic projection shadow onto an input window of the corresponding camera system G or I opposite the corresponding light source F or H. In this regard, the camera system G or I receives an incident collimated input beam output by the corresponding light source.
In the illustrated example, both camera systems G and I include a camera CAM, an optional mirror MIR and a diffusion screen DIF. The optional mirror MIR is, for example, employed to reduce the footprint of the overall cased goods inspection system by redirecting the sheet of light parallel to the conveyor. The diffusion screen DIF, which may be any suitable type of illumination diffuser, is an example of an input beam shaper spreading the input beam by diffusing the parallel light incident thereon from the corresponding light source F or H, so that the corresponding camera G or I (e.g. the camera imaging array having a desired predetermined width, defined structurally or by the controller) can capture and digitize diffused light from the full width of the corresponding light sheet emitted by the light source. As may be realized, the camera(s) G, I may image a case(s) and/or products within the full width of the light sheets (which, as may be further realized, may span the lateral bounds of the conveyor and the height of the inspection system opening).
In order to reduce the light footprint or to be able to use a less powerful laser class light source, smaller sheets of parallel light can be used, with overlap to maintain continuity and cover the larger surface. A calibration procedure can be used to realign these separate sheets as a single one by software.
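The software realignment of overlapping sheets may be sketched as follows (an illustrative, non-limiting Python sketch, not part of the specification: it assumes 1-D intensity profiles and a simple linear-ramp blend over the shared pixels, whereas a deployed system would use a measured per-pixel calibration map):

```python
import numpy as np

def stitch_sheets(left, right, overlap):
    """Merge the intensity profiles of two overlapping illumination sheets
    into one logical sheet by linearly blending the shared `overlap`
    pixels (weight 0 keeps the left sheet, weight 1 the right sheet)."""
    left, right = np.asarray(left, float), np.asarray(right, float)
    w = np.linspace(0.0, 1.0, overlap)
    blend = (1.0 - w) * left[-overlap:] + w * right[:overlap]
    return np.concatenate([left[:-overlap], blend, right[overlap:]])
```

The resulting single logical profile can then be fed to the same detection logic as a physically continuous sheet.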
Operation

For example, in operation, a product A arrives on conveyor B in any orientation and position. Without limiting the invention, the position includes a distance or gap from one side of conveyor B.
Vision system C makes repeated image acquisitions, block 310, into image cache storage (such as of the controller processor), for example triggered by an input conveyor encoder or alternatively by a stepper motor drive circuit advancing at least one of the conveyors B and J.
For each acquired image (
Using, for description purposes, the image acquired from the camera system G located below the conveyors B and J, the controller verifies 330 whether pixels in a considered portion of an acquired image (registered by at least one or, if desired, acquired images registered by both cameras G, I) have a drop in intensity of more than, for example, about 40% compared to the normalized intensity value, and represent a width of, for example, about 30 mm or more of the full width of the illumination sheet captured by the acquired image. As may be realized, the width referred to herein as the threshold width of reduced intensity portions of the acquired image may be set as desired based on environmental conditions. The reduced intensity width of the image portion corresponds to and is the result of a spatial intensity reduction caused by sustained disruption and/or obstruction, over the duration of the acquired image(s), of at least a portion of the input beam(s) forming the illumination sheet, such as due to an object, which may be opaque or translucent in part, passing through the beam/sheet. In other words, passage of product, case and/or wrapping through the sheet produces what may also be referred to as a gray level image for at least part of the acquired image width. If this is the case, the controller considers that there is a potential detection 332 of a product. The threshold values (both threshold width and threshold intensity variance) of the drop in intensity may be modified as desired (for example, the intensity drop threshold may be a 10% drop from normalized, and the threshold width may be about 5 mm). As may be realized, both threshold settings are determinative of the portion of opaque or translucent material in the illumination sheet, the disruption thereof resulting in a gray image in which such material is both detectable and measurable, as will be further described.
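The block 330 threshold test may be sketched as follows (an illustrative, non-limiting Python sketch, not a definitive implementation: the 40% drop and 30 mm width are the example values above, and the mm-per-pixel scale is an assumed calibration constant):

```python
import numpy as np

def detect_potential_product(scanline, normalized, drop_frac=0.40,
                             min_width_mm=30.0, mm_per_pixel=0.5):
    """Flag a potential product detection (block 332) when pixels darkened
    by more than drop_frac relative to the normalized (unobstructed)
    intensity span at least min_width_mm of the illumination sheet."""
    darkened = scanline < (1.0 - drop_frac) * normalized
    width_mm = np.count_nonzero(darkened) * mm_per_pixel
    return bool(width_mm >= min_width_mm)
```

Lowering `drop_frac` (e.g. to 0.10) makes the test sensitive to translucent material such as shrink wrap, consistent with the adjustable thresholds described above.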
By comparison, a wholly opaque material will reflect, resulting in substantially complete obstruction of illumination and consequently a substantially black region in the relevant portion of the orthographic projection image.
The above process steps 310–340 are repeated as long as the number of pixels in a given acquired image which have a drop in intensity of more than the predetermined threshold intensity drop (which may also be represented as an absolute intensity value threshold), for example about 40%, represents a width greater than the predetermined threshold width of, for example, about 30 mm or more; the process stops when this condition is no longer true. While this first condition is true (established by exceeding both thresholds), if the number of images that meet this condition represents a potential product length of about 60 mm or more (as may be determined by a suitable encoder synchronizing the acquisition rate and identifying conveyor displacement and rate so as to be correlated or proportional to acquired images and/or image frames), the controller considers that a product was detected, in other words confirming detection as true 336 (the potential product length for confirmation of product may be set more or less, such as 10 mm of displacement). In this case, the controller combines previously acquired upstream images representing, for example, 60 mm of conveyor displacement (the representative length may be more or less, e.g. 10 mm) in front of the image setting the detection of the detected product, the number of images in which product was detected, and subsequently acquired downstream images representing, for example, 60 mm of conveyor displacement after the product detection assertion, from both camera systems I and G, to construct 340 a composite, contiguous, complete combined image of the product from the series of images acquired during the aforementioned durations before and after product detection (the duration(s) pre and/or post detection may be varied and need not be symmetrical). If the number of images that meet the first and second conditions (i.e. threshold and duration 330, 336) represents a potential product of less than, for example, 60 mm in width/length, the controller asserts that the detection 337 was a false detection, or that the detected product is below the minimal accepted length/width, and the image acquisition process continues normally. This makes the system robust to noise or parasitic signals such as falling debris.
As noted, while both conditions are asserted, contiguous construction of the combined image (or pseudo image) of the scanned product continues past, for example, 60 mm until a maximum accepted product dimension is reached. In other words, upon controller determination that acquired image(s) (corresponding to desired conveyor travel, for example 60 mm) of camera system G (though such determination may be effected from acquired images of both cameras G, I) no longer satisfy the above noted thresholds (e.g. the considered portion of the acquired images has neither a width nor an intensity drop greater than the set thresholds (e.g. 30 mm, 40% drop)), the controller registers the accepted product dimension (such as from the registered conveyor displacement from the encoder coincident with image acquisitions that exceed the thresholds). Accordingly, the controller (via suitable programming) effects raw image acquisition for combination into the scanned product combined image, which may continue for another, for example, 60 mm after the maximum accepted product dimension is surpassed. It is understood that the “combined image” (or pseudo image) and the “combined product image” correspond to the relative positions and orientations of the illumination sources and include images of substantially orthogonal sides of the product, such as side and top view images.
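The detection-and-combination logic of blocks 330–340 can be sketched as follows (illustrative only, not the claimed implementation: per-frame booleans stand in for the threshold test, and the `pre`/`post`/`confirm` frame counts stand in for encoder-measured conveyor displacement, e.g. 60 mm at an assumed per-frame advance):

```python
import numpy as np

def combine_product_images(frames, frame_hits, pre=3, post=3, confirm=4):
    """Group consecutive frames whose threshold test is true (blocks 330,
    336) into confirmed detections, prepending `pre` upstream and appending
    `post` downstream context frames before stacking them into a combined
    image (block 340). Runs shorter than `confirm` frames are rejected as
    false detections (block 337), e.g. falling debris."""
    products = []
    i, n = 0, len(frames)
    while i < n:
        if frame_hits[i]:
            j = i
            while j < n and frame_hits[j]:
                j += 1
            if j - i >= confirm:                 # second condition met
                lo, hi = max(0, i - pre), min(n, j + post)
                products.append(np.vstack(frames[lo:hi]))
            i = j                                # short runs: noise, skip
        else:
            i += 1
    return products
```

Each entry of `products` is then a contiguous two-dimensional composite, one scanline per row, from which the measurements below can be computed.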
Once, and if desired substantially coincident with, processor construction of the composite image(s) of a complete product image as noted, the controller calculates a variety of quantitative measurements by the process steps illustrated in
“Real box” measurements, block 710, include dimensions of the best-fit shape, which can be determined based on, or obtained from, the combined product image. For example, the shape employed in the fit is a box having a length, width and height. Alternatively, the shape employed can be a sphere having a center and a radius. Various other shapes can be employed in the fit, such as but not limited to a cylinder, ovaloid, cone, etc.
“Outside box” measurements, block 712, include dimensions of the smallest shape that contains the entire product, which can be determined based on, or obtained from, the combined product image (as may include protrusions seen by the vision system, including distressed product portions, labels and wrapping). For example, the shape employed in the fit is a box having a length, width and height indicative of the largest rectangular footprint of the product A on the conveyor B / J. Alternatively, the shape employed can be a sphere having a center and a radius. Various other shapes can be employed in the fit, such as but not limited to a cylinder, ovaloid, cone, etc.
The “max bulge” measurement, block 714, is the longest dimension obtained from the product A being inspected.
The product “orientation angle” is the angle of the product’s main axis relative to the travel direction of product A on the conveyors B / J.
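The measurements described above can be approximated from a binary silhouette as in the following sketch (illustrative assumptions, not the claimed method: an axis-aligned footprint stands in for the fitted “outside box”, the O(n²) pairwise distance for “max bulge” is only practical for small masks, the orientation angle is estimated by principal component analysis, and `mm_per_pixel` is an assumed calibration constant):

```python
import numpy as np

def silhouette_measurements(mask, mm_per_pixel=1.0):
    """Approximate measurements from a binary top-view silhouette
    (True = product): footprint length/width, "max bulge" as the longest
    dimension, and "orientation angle" of the main axis vs. the x
    (travel) direction."""
    ys, xs = np.nonzero(mask)
    pts = np.column_stack((xs, ys)).astype(float)
    # Axis-aligned bounding extents of the silhouette (footprint).
    length = (xs.max() - xs.min() + 1) * mm_per_pixel
    width = (ys.max() - ys.min() + 1) * mm_per_pixel
    # Max bulge: largest distance between any two silhouette points.
    d = pts[:, None, :] - pts[None, :, :]
    max_bulge = np.sqrt((d ** 2).sum(-1)).max() * mm_per_pixel
    # Orientation angle: principal axis of the point cloud (PCA).
    centered = pts - pts.mean(0)
    eigvals, eigvecs = np.linalg.eigh(centered.T @ centered)
    main = eigvecs[:, np.argmax(eigvals)]
    angle_deg = np.degrees(np.arctan2(main[1], main[0])) % 180.0
    return {"length": length, "width": width,
            "max_bulge": max_bulge, "orientation_angle": angle_deg}
```

A rotated minimum-area rectangle, rather than the axis-aligned extents used here, would more closely match the best-fit “real box” of block 710.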
With reference to
For certainty, the proposed solution is not limited to performing the steps illustrated in
Once a substantial number of the above mentioned measurements are determined, the image analysis computer program compares them in block 718 with nominal values and accepted tolerances provided in block 716 to the cased goods inspection system. For example, a Programmable Logic Controller (PLC) (not shown) can provide at least some of the nominal values and accepted tolerances for the given case inspected by the inspection system. According to preferences of the user, the “real box”, the “outside box” or the “max bulge” can be considered to accept or reject product A.
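The block 718 comparison may be sketched as follows (a minimal illustration; the dictionary keys and the symmetric-tolerance form are assumptions for the sketch, not limitations of the system):

```python
def accept_or_reject(measured, nominal, tolerance):
    """Compare measurements (e.g. "real box", "outside box", "max bulge")
    against nominal values and accepted tolerances, such as those supplied
    by a PLC (blocks 716/718). Returns the accept decision and any failing
    measurements (decision 720A/720B)."""
    failures = {key: measured[key] for key in nominal
                if abs(measured[key] - nominal[key]) > tolerance[key]}
    return len(failures) == 0, failures
```

Per-measurement asymmetric limits, or enabling only the measurements the user prefers, fit the same structure.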
The vision system C then sends its decision (accept or reject), 720A, 720B, as well as the various measurements taken, to another controller for subsequent use by the user 722. For example, at least conveyors B / J can be operated in a special way to retract or dump rejected product; alternatively, paddles (not shown) can be actuated to deflect rejected product, possibly onto another conveyor (not shown). In another implementation, large “orientation angles” can be reduced by actuating components of the cased goods inspection system. For certainty, a decision to reverse conveyors B / J and rescan the product is not excluded from the scope of the proposed solution.
In accordance with a preferred embodiment, as can be seen from the raw image example illustrated in
By using the above-mentioned process, the vision system C can automatically compensate for debris or the like present on a window panel of the camera system I / G. When such a situation arises, the raw constructed/combined image shows a narrow line of constant pixel intensity 12D as shown within stitched line 12A, in
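This debris signature can be sketched as follows (a minimal, non-limiting illustration: a pixel whose intensity does not vary across many acquisitions, while product and background pass by, likely images debris on the window; the variance tolerance is an assumed value):

```python
import numpy as np

def debris_columns(frames, var_tol=1e-6):
    """Flag pixel positions whose intensity stays constant across a series
    of acquisitions: a narrow band of constant intensity persisting through
    many images suggests debris on the camera window rather than passing
    product. Returns a boolean mask over the sheet width."""
    stack = np.asarray(frames, dtype=float)   # shape: (n_frames, width)
    return stack.var(axis=0) <= var_tol
```

Flagged positions can then be excluded from the threshold test of block 330 so that debris does not trigger false product detection.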
In accordance with another preferred embodiment of the proposed solution, the cased goods inspection system is employed to inspect an array of goods at least partially encased in shrink wrap following the contour of the goods in the array. For simple shaped goods, the acceptable “outside box” and “real box” dimensions are similar within tolerances. Allowance is provided for goods having complex shapes (such as may include one or more nonlinearities in the shape) as may be formed by one or more simple shaped products placed in an ordered array within packaging (e.g. shrink wrapping). For example, with reference to product illustrated in
The capacity to detect semi-transparency proves useful to ignore translucent or semi-transparent wrap bulge in the measurements. It is to be noted that many other modifications can be made to the cased goods inspection system described hereinabove and illustrated in the appended drawings. For example: it is to be understood that embodiments of the cased goods inspection system are not limited in their application to the details of construction and parts illustrated in the accompanying drawings and described hereinabove. Other embodiments can be foreseen and practiced in various ways. It is also to be understood that the phraseology or terminology used herein is for the purpose of description and not limitation.
While some reference is made herein to a “vision system”, the invention is not limited to any single nor to any combination of camera systems operating in the millimeter wave, Infra Red, visual, microwave, X-ray, gamma ray, etc. spectra. While composite camera can be employed, separate spectrum specific camera can also be employed severally or in combination. Any reference to cased goods comprising food stuffs is incidental and not intended to limit the scope of the invention. For certainty, durable goods cased in metal shipping containers can be inspected via appropriate sizing of the cased goods inspection system and appropriate selection of vision systems operating in metal shell penetrating spectra (X-ray), including electromagnetic sources operating in corresponding spectra absorbable by the cased goods.
In accordance with one or more aspects of the disclosed embodiment, a scanner apparatus for cased goods inspection is provided. The scanner apparatus includes at least one conveyor for advancing a case of goods past the scanner apparatus, at least one electromagnetic (EM) source configured to transmit a sheet of parallel propagating EM radiation illumination of predetermined width towards a vision system disposed to receive the EM radiation illumination from the at least one EM source, the vision system including a diffuser and at least one camera wherein the diffuser diffuses the EM radiation illumination received from the at least one EM source so that the at least one camera captures the predetermined EM radiation illumination sheet width in entirety, the at least one camera is configured to digitize images of at least a portion of the EM radiation illumination sheet so that the case of goods advanced by the at least one conveyor through the transmitted EM radiation illumination sheet casts a shadow thereon, and a processor operably coupled to the at least one conveyor and vision system and including an image acquisition component configured to acquire the digitized images as the case of goods is advanced past the at least one camera, and an image combiner configured to selectively combine acquired digitized images into a combined image based on sustained input beam spatial intensity reduction below a first threshold, wherein the processor is configured to determine from the combined image dimensional measurements about the case of goods including one or more of length, width, height, angle, “real box”, “max box” and “max bulge”.
In accordance with one or more aspects of the disclosed embodiment, the processor is configured so as to ascertain presence of the case of goods based on sustained input beam spatial intensity reduction below a second threshold discriminating presence of shrink wrap translucency disposed on product in the case of goods.
In accordance with one or more aspects of the disclosed embodiment, the at least one conveyor is configured to advance the case of goods at a rate of advance, the image acquisition component being configured to acquire the digitized images at an acquisition rate proportional to the rate of advance of the case of goods.
In accordance with one or more aspects of the disclosed embodiment, the image acquisition rate is synchronized by using an encoder or by a stepper motor drive circuit.
In accordance with one or more aspects of the disclosed embodiment, the at least one EM source includes a substantially point source lamp having an output light beam, an output beam shaper configured to redirect the output light beam into the sheet of collimated illumination having parallel light rays of the output light beam, and an optional mirror to reduce a foot print of the apparatus.
In accordance with one or more aspects of the disclosed embodiment, the image acquisition component includes an image cache storage.
In accordance with one or more aspects of the disclosed embodiment, the vision system is configured to determine an ambient light intensity from a sample buffer of cached images.
In accordance with one or more aspects of the disclosed embodiment, the vision system is configured to identify presence of debris on an input window of the vision system based on common pixels of same intensity across a number of digitized images.
In accordance with one or more aspects of the disclosed embodiment, the image combiner is configured to selectively combine acquired digitized images into a potential product combined image if a number of pixels digitized in an image having a reduced intensity below the first predetermined threshold define an image width greater than a second threshold.
In accordance with one or more aspects of the disclosed embodiment, the image combiner is configured to selectively combine acquired digitized images into forming the combined image if a number of pixels digitized across sequential images having reduced intensity below the first predetermined threshold and a second threshold represent a predetermined combined image length.
In accordance with one or more aspects of the disclosed embodiment, the processor is configured to determine dimensions from the combined image of: a first shape best fitting in the combined image, a second shape circumscribing the combined image, and differences between the first and second shapes.
In accordance with one or more aspects of the disclosed embodiment, the processor is configured to determine from the combined image an orientation angle of the case of goods with respect to the at least one conveyor.
In accordance with one or more aspects of the disclosed embodiment, the processor is configured to determine from the combined image a distance of the case of goods from one side of the at least one conveyor.
In accordance with one or more aspects of the disclosed embodiment, a method of inspecting cased goods is provided. The method includes advancing at least one case of goods on a conveyor, generating an illumination sheet of parallel illuminating rays with at least one electromagnetic source, capturing an image, formed by the illumination sheet passing through a diffuser, with at least one camera located so as to capture illumination from diffused parallel rays of the illumination sheet, where the image embodies a cased goods image that is generated by the case of goods moving between the at least one electromagnetic source and the at least one camera, where part of the parallel sheet of illumination is at least partially blocked by the case of goods, thus generating a gray level image.
In accordance with one or more aspects of the disclosed embodiment, capturing comprises capturing a number of serial images, each new image is compared with a normalized intensity from a predetermined number of previously acquired images, and a cased goods is detected, with a processor, when there is a drop of intensity more than a predetermined threshold.
In accordance with one or more aspects of the disclosed embodiment, a combined image of the cased goods is constructed with the processor by adding a series of images.
In accordance with one or more aspects of the disclosed embodiment, various characteristics of the cased goods are computed based on the combined image of the cased goods including one or more of the length, the width, the height, the angle, the “real box”, the “max box” and the “max bulge”.
In accordance with one or more aspects of the disclosed embodiment, a scanner apparatus for cased goods inspection is provided. The scanner apparatus includes at least one conveyor for advancing a case of goods past the scanner apparatus, at least one electromagnetic (EM) source configured to transmit a sheet of parallel propagating EM radiation illumination of predetermined width towards a vision system disposed to receive the EM radiation illumination from the at least one EM source, the vision system including a diffuser and at least one camera wherein the diffuser diffuses the EM radiation illumination received from the at least one EM source so that the at least one camera captures the predetermined EM radiation illumination sheet width in entirety, the at least one camera is configured to digitize images of at least a portion of the EM radiation illumination sheet so that the case of goods advanced by the at least one conveyor through the transmitted EM radiation illumination sheet casts a shadow thereon, and a processor operably coupled to the at least one conveyor and vision system and including an image acquisition component configured to acquire the digitized images as the case of goods is advanced past the camera, and an image combiner configured to selectively combine acquired images into a combined image based on sustained input beam spatial intensity reduction below a first threshold, wherein the processor is configured so as to ascertain presence of the case of goods based on sustained input beam spatial intensity reduction below a second threshold discriminating presence of shrink wrap translucency disposed on product in the case of goods.
In accordance with one or more aspects of the disclosed embodiment, the processor is configured to determine from the combined image dimensional measurements about the case of goods including one or more of length, width, height, angle, “real box”, “max box” and “max bulge”.
In accordance with one or more aspects of the disclosed embodiment, the at least one conveyor is configured to advance the case of goods at a rate of advance, the image acquisition component being configured to acquire the digitized images at an acquisition rate proportional to the rate of advance of the case of goods, and wherein the image acquisition rate is synchronized by using an encoder or by a stepper motor drive circuit.
In accordance with one or more aspects of the disclosed embodiment, the at least one EM source includes a substantially point source lamp having an output light beam, an output beam shaper configured to redirect the output light beam into the sheet of collimated illumination having parallel light rays of the output light beam, and an optional mirror to reduce a foot print of the apparatus.
In accordance with one or more aspects of the disclosed embodiment, the vision system is configured to identify presence of debris on an input window of the vision system based on common pixels of same intensity across a number of digitized images.
In accordance with one or more aspects of the disclosed embodiment, the image combiner is configured to selectively combine acquired digitized images into a potential product combined image if a number of pixels digitized in an image having a reduced intensity below the first predetermined threshold define an image width greater than a second threshold.
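The two-threshold combiner test above can be sketched as follows: pixels below the intensity threshold must span a width greater than the width threshold before the line is treated as part of a potential product. The function name and the span-based width measure are illustrative assumptions.

```python
def is_potential_product(line, intensity_threshold, min_width):
    """Decide whether a digitized line image belongs in a potential-product
    combined image: pixels whose intensity falls below the first
    (intensity) threshold must define an image width greater than the
    second (width) threshold."""
    below = [i for i, v in enumerate(line) if v < intensity_threshold]
    if not below:
        return False
    width = below[-1] - below[0] + 1  # span of shadowed pixels, in pixels
    return width > min_width

# A three-pixel shadow passes a width threshold of 2; a single dark pixel
# (e.g. dust) does not.
wide = is_potential_product([255, 10, 10, 10, 255], 50, 2)
narrow = is_potential_product([255, 10, 255, 255, 255], 50, 2)
```

Requiring a minimum shadow width is what lets the combiner ignore isolated dark pixels caused by noise or dust while still accumulating lines that genuinely contain a case of goods.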
In accordance with one or more aspects of the disclosed embodiment, the processor is configured to determine dimensions from the combined image of: a first shape best fitting in the combined image, a second shape circumscribing the combined image, and differences between the first and second shapes.
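A greatly simplified sketch of the first/second-shape comparison: here the circumscribing (second) shape is taken as the axis-aligned bounding box of the combined shadow image, and the reported difference is the area by which that box exceeds the shadow itself, a rough proxy for bulge or overhang. The actual apparatus may fit rotated or non-rectangular shapes; everything below the disclosure's general description is an assumption.

```python
def shape_metrics(mask):
    """From a binary combined shadow image, compute a circumscribing
    axis-aligned bounding box and the area difference between that box
    and the shadow it encloses."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for c in range(len(mask[0])) if any(row[c] for row in mask)]
    r0, r1, c0, c1 = rows[0], rows[-1], cols[0], cols[-1]
    box_area = (r1 - r0 + 1) * (c1 - c0 + 1)
    shadow_area = sum(sum(1 for v in row if v) for row in mask)
    return {"bbox": (r0, c0, r1, c1),
            "box_area": box_area,
            "excess_area": box_area - shadow_area}

# An L-shaped shadow: its bounding box covers 4 cells, the shadow only 3.
m = shape_metrics([[0, 1, 1],
                   [0, 1, 0],
                   [0, 0, 0]])
```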
While the invention has been shown and described with reference to preferred embodiments thereof, it will be recognized by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims
1. An induction station of case goods in a logistic facility, the induction station comprising:
- at least one conveyor for advancing a case of goods into the facility;
- a case inspection apparatus disposed for inspection of the cased goods advanced past the case inspection apparatus by the at least one conveyor;
- at least one electromagnetic (EM) source configured to transmit an illumination sheet, of parallel illumination rays, having a predetermined width towards a vision system disposed to receive the illumination sheet;
- the vision system with at least one camera wherein: the at least one camera is configured so as to capture an image formed by diffused parallel rays of the illumination sheet diffused by a diffuser, and the image captured by the at least one camera embodies a case of goods image that is generated by the case of goods advanced by the at least one conveyor through the illumination sheet, the case of goods at least partially blocking the parallel rays of the illumination sheet; and the image generated spans the predetermined width of the illumination sheet and encompasses a gray scale image of part of the illumination sheet blocked by at least a portion of the case of goods.
2. The induction station of claim 1, further comprising a processor operably coupled to the at least one conveyor and vision system and including:
- an image acquisition component configured to acquire more than one of the digitized images for each of the case of goods as the case of goods is advanced past the at least one camera, and
- an image combiner configured to selectively combine a number of acquired digitized images, different than the more than one of the digitized images, into a combined image based on sustained input beam spatial intensity reduction below a first threshold over a duration of the more than one of the acquired digitized images;
wherein the processor is configured to determine from the combined image a predetermined characteristic intrinsic to the case of goods.
3. The induction station of claim 1, wherein the gray scale image is of part of the illumination sheet blocked by and transmitted through the at least a portion of the case of goods.
4. The induction station of claim 2, wherein the processor is configured to determine from the combined image dimensional measurements about the case of goods including one or more of length, width, height, angle, "real box", "max box" and "max bulge".
5. The induction station of claim 2, where the processor is configured to ascertain presence of the case of goods based on sustained input beam spatial intensity reduction below a second threshold discriminating presence of translucent shrink wrap disposed on product in the case of goods.
6. The induction station of claim 1, wherein the at least one conveyor is configured to advance the case of goods at a rate of advance, the image acquisition component being configured to acquire the digitized images at an acquisition rate proportional to the rate of advance of the case of goods.
7. The induction station of claim 6, wherein the image acquisition rate is synchronized by using an encoder or by a stepper motor drive circuit.
8. The induction station of claim 1, wherein the at least one EM source comprises:
- a substantially point source lamp having an output light beam;
- an output beam shaper configured to redirect the output light beam into the sheet of collimated illumination having parallel light rays of the output light beam; and
- a mirror to reduce a footprint of the apparatus.
9. The induction station of claim 1, wherein the image acquisition component comprises an image cache storage.
10. The induction station of claim 1, wherein the vision system is configured to determine an ambient light intensity from a sample buffer of cached images.
11. The induction station of claim 1, wherein the vision system is configured to identify presence of debris on an input window of the vision system based on common pixels of same intensity across a number of digitized images.
12. The induction station of claim 1, wherein the image combiner is configured to selectively combine acquired digitized images into a potential product combined image where a number of pixels digitized in an image having a reduced intensity below the first predetermined threshold define an image width greater than a second threshold.
13. The induction station of claim 1, wherein the image combiner is configured to selectively combine acquired digitized images into forming the combined image where a number of pixels digitized across sequential images having reduced intensity below the first predetermined threshold and a second threshold represent a predetermined combined image length.
14. The induction station of claim 1, wherein the processor is configured to determine dimensions from the combined image of: a first shape best fitting in the combined image, a second shape circumscribing the combined image, and differences between the first and second shapes.
15. The induction station of claim 1, wherein the processor is configured to determine from the combined image an orientation angle of the case of goods with respect to the at least one conveyor.
16. The induction station of claim 1, wherein the processor is configured to determine from the combined image a distance of the case of goods from one side of the at least one conveyor.
Type: Application
Filed: Sep 20, 2022
Publication Date: Jan 12, 2023
Inventors: Marc DUCHARME (Boucherville), Robert JODOIN (Montréal), Benoit LAROUCHE (Montréal), Sylvain-Paul MORENCY (Laval), Christian SIMON (Laval)
Application Number: 17/933,671