IMAGE RETRIEVAL
Methods and systems for search and retrieval of images with an image processing system are disclosed. With an input interface, a first image to use in a search query is received. The first image is processed with the image processing system to create first information and determine a first code associated with the first image. A plurality of images are received from an image database and processed to respectively create information at least related to each of the plurality of images. A plurality of codes associated with the plurality of images is determined. The first code is compared with the plurality of codes to find a subset of the plurality of images. The first information is compared with information for the subset. Finally, the image processing system determines if the subset compares favorably with the first image based, at least in part, on an outcome from comparing the first information with the information from the subset.
This application is a divisional of U.S. application Ser. No. 12/975,192 filed on Dec. 21, 2010; which is a divisional of U.S. application Ser. No. 10/767,216 filed on Jan. 29, 2004, which issued as U.S. Pat. No. 7,860,854 on Dec. 28, 2010; which is a continuation of U.S. application Ser. No. 10/216,550 filed on Aug. 8, 2002, which issued as U.S. Pat. No. 6,721,733 on Apr. 13, 2004; which is a divisional of U.S. application Ser. No. 09/179,541 filed on Oct. 26, 1998, which issued as U.S. Pat. No. 6,463,426 on Oct. 8, 2002, which claims the benefit of U.S. Provisional Patent Application No. 60/063,623 filed on Oct. 27, 1997.
STATEMENTS REGARDING FEDERALLY SPONSORED RESEARCH
This invention was made with government support under Grant No. N00014-95-1-0600 awarded by the U.S. Navy. The government has certain rights in the invention.
BACKGROUND OF THE INVENTION
This invention relates to information search and retrieval systems and more particularly to search and retrieval systems which utilize in whole or in part image processing techniques.
As is known in the art, a digital image is an image which may be represented as an array of pixels with each of the pixels represented by a digital word. Often the array is provided as a two-dimensional array of pixels. With the increase in the number of available digital pictures, the need has arisen for more complete and efficient annotation (attaching identifying labels to images) and indexing (accessing specific images from the database) systems. Digital image/video database annotation and indexing services provide users, such as advertisers, news agencies and magazine publishers, with the ability to browse through and retrieve images or video segments from such databases via queries to an image search system.
As is also known, a content based image retrieval system is an image retrieval system which classifies, detects and retrieves images from digital libraries by utilizing directly the content of the image. Content based image processing systems refer to systems which process information in an image by classifying or otherwise identifying subject matter within the image. Such systems may be used in a variety of applications including, but not limited to, art gallery and museum management, architectural image and design, interior design, remote sensing and management of earth resources, geographic information systems, scientific database management, weather forecasting, retailing, fabric and fashion design, trademark and copyright database management, law enforcement and criminal investigation, picture archiving and communication systems, and inspection systems including circuit inspection systems.
Conventional content based image/video retrieval systems utilize images or video frames which have been supplemented with text corresponding to explanatory notes or key words associated with the images. A user retrieves desired images from an image database, for example, by submitting textual queries to the system using one or a combination of these key words. One problem with such systems is that they rely on restricted predefined textual annotations rather than on the content of the still or video images in the database.
Still other systems attempt to retrieve images based on a specified shape. For example, to find images of a fish, such systems would be provided with a specification of a shape of a fish. This specification would then be used to find images of a fish in the database. One problem with this approach, however, is that fish do not have a standard shape and thus the shape specification is limited to classifying or identifying fish having the same or a very similar shape.
Still other systems classify images or video frames by using image statistics including color and texture. The difficulty with these systems is that for a given query image, even though the images located by the system may have the same color, textural, or other statistical properties as the example image, the images may not be part of the same class as the query image. That is, if the query image belongs to the class of images identified as human faces, then systems which classify images or video frames based on image statistics including color and texture may return images which fall within the desired color and texture image statistics but which are not human faces.
It would thus be desirable to provide a system and technique which may be used in a general image search and retrieval system and which allows searching of a plurality of different types of images including but not limited to human or animal faces, fabric patterns, symbols, logos, art gallery and museum management, architectural image and design, interior design, remote sensing and management of earth resources, geographic information systems, scientific database management, weather forecasting, retailing, fabric and fashion design, trademark and copyright database management, law enforcement and criminal investigation, picture archiving, communication systems and inspection systems including circuit inspection systems. It would be particularly desirable to have the system be capable of automatically learning which factors are most important in searching for a particular image or for a particular type of image.
BRIEF SUMMARY OF THE INVENTION
Methods and systems for search and retrieval of images with an image processing system are disclosed. With an input interface, a first image to use in a search query is received. The first image is processed with the image processing system to create first information and determine a first code associated with the first image. A plurality of images are received from an image database and processed to respectively create information at least related to each of the plurality of images. A plurality of codes associated with the plurality of images is determined. The first code is compared with the plurality of codes to find a subset of the plurality of images. The first information is compared with information for the subset. Finally, the image processing system determines if the subset compares favorably with the first image based, at least in part, on an outcome from comparing the first information with the information from the subset.
An example of a non-transitory storage medium, according to the disclosure, includes computer-readable instructions for processing circuit board images to compare a first image with a plurality of images to find a second image. The instructions comprise code for reading the first image of a circuit board, processing the first image to create first information at least related to the first image, reading the plurality of images, and processing each of the plurality of images to respectively create information at least related to each of the plurality of images. The instructions further comprise code for comparing a code related with the first image with a plurality of codes related with the plurality of images to find a subset of the plurality of images. The code related with the first image can indicate a category of the first image, and the plurality of codes related with the plurality of images can indicate one or more categories of the plurality of images. The instructions further comprise code for comparing the first information with the information at least related to each image of the subset, and determining if the subset compares favorably with the first image based, at least in part, on an outcome from comparing the first information with the information for the subset.
The example non-transitory storage medium can include one or more of the following features. The instructions can further comprise code for displaying the plurality of images, wherein an order of the plurality of images corresponds with a likelihood of a match to the first image. The first information can be gathered from the first image and a negative example that does not match the first image. The first information can be gathered from two or more images.
An example method for processing images to compare a first image with a plurality of images to find a second image, according to the disclosure, can include reading the first image, processing the first image to create first information at least related to the first image, and reading a first code associated with the first image. The first code categorizes the first image. The method further can include reading the plurality of images, processing each of the plurality of images to respectively create information at least related to each of the plurality of images, and reading a plurality of codes associated with the plurality of images. The codes categorize the plurality of images. The method further can include comparing the first code with the plurality of codes to find a subset of the plurality of images, comparing the first information with the information at least related to each image of the subset, and determining if the subset compares favorably with the first image based, at least in part, on an outcome from comparing the first information with the information for the subset.
The example method for processing images to compare a first image with a plurality of images to find a second image can include one or more of the following features. The first code can be determined by a human. The first information can be gathered from two or more images. The first image can be of a circuit board. The plurality of images can be displayed, and an order of the plurality of images can correspond with a likelihood of a match to the first image. Processing the first image can include logically combining the first image and a second image to create the first information. The first information can be gathered from the first image and a negative example that does not match the first image.
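The two-stage structure of the method just described, a coarse filter on categorization codes followed by a detailed comparison of extracted information, can be sketched in Python. This is an illustrative sketch only; the names `ImageRecord`, `compare_info`, and the equality test on codes are assumptions made for the example, not elements of the disclosure.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ImageRecord:
    pixels: object        # raw image data
    code: str             # categorization code (e.g., assigned by a human)
    info: Dict            # extracted information, e.g., per-region features

def retrieve(query: ImageRecord,
             database: List[ImageRecord],
             compare_info: Callable[[Dict, Dict], float],
             threshold: float) -> List[ImageRecord]:
    # Stage 1: compare the first code with the plurality of codes to
    # find the subset of database images in the same category.
    subset = [img for img in database if img.code == query.code]

    # Stage 2: compare the first information with the information for
    # each image of the subset; lower scores indicate closer matches.
    scored = sorted(((compare_info(query.info, img.info), img) for img in subset),
                    key=lambda pair: pair[0])
    return [img for score, img in scored if score <= threshold]
```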
An example system for processing images to compare a first image with a plurality of images to find a second image, according to the disclosure, can include an input system, an image database, and an image processing system coupled with the input system and the image database. The image processing system can be configured to receive the first image from the input system, process the first image to create first information at least related to the first image, and determine a first code associated with the first image. The first code can categorize the first image. The image processing system can be configured to further receive the plurality of images from the image database, process each of the plurality of images to respectively create information at least related to each of the plurality of images, and determine a plurality of codes associated with the plurality of images. The plurality of codes can categorize the plurality of images. Finally, the image processing system can be configured to further compare the first code with the plurality of codes to find a subset of the plurality of images, compare the first information with the information at least related to each image of the subset, and determine if the subset compares favorably with the first image based, at least in part, on an outcome from comparing the first information with the information for the subset.
The example system for processing images to compare the first image with the plurality of images to find the second image can include one or more of the following features. The first image can be of a circuit board. At least one plug-in module can be configured to provide application information used by the image processing system for at least one application. The at least one plug-in module can provide application information used by the image processing system to process images having at least one item from the group of items consisting of a printed circuit board, a face, a scene, a fabric, and a trademark. The image processing system can be configured to determine the first code based, at least in part, on input from a human. The image processing system can be configured to create the first information by processing two or more images. The system can include an output system, where the image processing system is configured to display the plurality of images on the output system, and an order of the plurality of images corresponds with a likelihood of a match to the first image. The image processing system can be configured to process the first image by at least logically combining the first image and a second image to create the first information. The first information can be gathered from the first image and a negative example that does not match the first image.
In accordance with the present invention, an image processing system includes a search engine coupled to an image analyzer. The image analyzer and search engine are coupled to one or more feature modules which provide the information necessary to describe how to optimize the image analyzer for a particular application. With this particular arrangement, an image processing system which can rapidly match a primary image to a target image is provided. Each feature module defines particular regions of an image and particular measurements to make on pixels within the defined image region, as well as the measurements to make on neighboring pixels in neighboring image regions, for a given application. The feature modules thus specify parameters and characteristics which are important in a particular image match/search routine. The feature modules communicate this application-specific information to the image analyzer. The information specified by a particular feature module will vary greatly depending upon the particular application. By using the feature modules, generic search engines and image analyzers can be used. Thus the system can be rapidly adapted to operate in applications as widely varying as inspection of printed circuit boards or integrated circuits and searching for trademark images. In each application, the particular parameters and characteristics which are important are provided by the feature module.
It should thus be noted that the techniques of the present invention have applicability to a wide variety of different types of image processing applications. For example, the techniques may be used in biometric applications and systems to identify and/or verify the identity of a particular person or thing, inspection systems including inspection and test of printed circuit boards (including without limitation any type of circuit board or module or integrated circuit) in all stages of manufacture from raw boards to boards which are fully assembled and/or fully loaded with components and sub-assemblies (including hybrids), including solder joint inspection, post paste inspection and post placement inspection, inspection and test of semiconductor chips in all stages of manufacture, from wafers to finished chips, image or video classification systems used to search image or video archives for a particular type of image or a particular type of video clip, and medical image processing applications to identify particular characteristics in an image, such as a tumor. Thus the phrase “image processing application” or more simply “application” as used herein below refers to a wide variety of uses.
In accordance with a further aspect of the present invention, a process for comparing two images includes the steps of (a) aligning a target image and a selected image, (b) dividing the selected image into a plurality of image regions, (c) collapsing properties in predetermined image regions, (d) selecting a primary image region, (e) selecting a target image region, and (f) comparing one or more properties of the selected primary image region to corresponding one or more properties in the target image region. With this particular arrangement, a technique for rapidly comparing two images is provided. By selecting predetermined features and image regions to compare between the two images, the amount of time required to process the images is reduced. By combining or collapsing features in a selected region of an image for compact representation, a comparison of a relatively large amount of information can be accomplished rapidly.
In accordance with a still further aspect of the present invention, a method of manufacturing a printed circuit board includes the steps of (a) performing a manufacturing operation on a printed circuit board; and (b) inspecting the result of the manufacturing operation by comparing an image of the actual operation being performed to a target image of the manufacturing operation. With this particular arrangement, the efficiency of manufacturing a printed circuit board is increased while reducing the cost of manufacturing the printed circuit board. The manufacturing can correspond to any one or more steps in the printed circuit board (PCB) manufacturing process. For example, when the manufacturing process corresponds to a solder manufacturing process, then the inspection technique may be used before and/or after the post-paste, post-placement and post-reflow operations. For example, the manufacturing process can include inspection before and/or after solder application, component placement, solder reflow, solder joint inspection or any other manufacturing step. By inspecting after predetermined steps in the manufacturing process, it is possible to detect defects early in the manufacturing process which can be corrected prior to continuing the manufacturing process. By detecting defects early in the manufacturing process, the expense and time associated with manufacturing a PCB which cannot pass a final inspection test are avoided.
In one embodiment, methods and systems for processing images are disclosed. Two images are compared using categorization codes and image analysis. The first image and the second image are read from a network. A first categorization code associated with the first image and a second categorization code associated with the second image are read. The first and second codes are analyzed, and the first and second images are compared. It is determined if the first and second images are likely to compare favorably based, at least in part, on outcomes from the comparing and analyzing.
The foregoing features of this invention, as well as the invention itself, may be more fully understood from the following detailed description of the drawings.
Before describing an image search and retrieval system and the techniques associated therewith, some introductory concepts and terminology are explained.
An analog or continuous parameter image such as a still photograph may be represented as a matrix of digital values and stored in a storage device of a computer or other digital processing device. Thus, as described herein, the matrix of digital data values is generally referred to as a “digital image” or more simply an “image” and may be stored in a digital data storage device, such as a memory for example, as an array of numbers representing the spatial distribution of energy at different wavelengths in a scene.
Similarly, an image sequence such as a view of a moving roller coaster, for example, may be converted to a digital video signal as is generally known. The digital video signal is provided from a sequence of discrete digital images or frames. Each frame may be represented as a matrix of digital data values which may be stored in a storage device of a computer or other digital processing device. Thus in the case of video signals, as described herein, a matrix of digital data values is generally referred to as an “image frame” or more simply an “image” or a “frame.” Each of the images in the digital video signal may be stored in a digital data storage device, such as a memory for example, as an array of numbers representing the spatial distribution of energy at different wavelengths in a scene in a manner similar to the manner in which an image of a still photograph is stored.
Whether provided from a still photograph or a video sequence, each of the numbers in the array corresponds to a digital word (e.g. an eight-bit binary value) typically referred to as a “picture element” or a “pixel” or as “image data.” The image may be divided into a two-dimensional array of pixels with each of the pixels represented by a digital word.
Reference is sometimes made herein to images represented with only a luminance component. Such images are known as gray scale images. Thus, a pixel represents a single sample which is located at specific spatial coordinates in the image. It should be noted that the techniques described herein may be applied equally well to either gray scale images or color images.
In the case of a gray scale image, the value of each digital word corresponds to the intensity of the pixel and thus the image at that particular pixel location.
In the case of a color image, reference is sometimes made herein to each pixel being represented by a predetermined number of bits (e.g. eight bits) which represent the color red (R bits), a predetermined number of bits (e.g. eight bits) which represent the color green (G bits) and a predetermined number of bits (e.g. eight bits) which represent the color blue (B bits) using the so-called RGB color scheme in which a color and luminance value for each pixel can be computed from the RGB values. Thus, in an eight-bit color RGB representation, a pixel may be represented by a twenty-four bit digital word.
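As a concrete illustration (a sketch, not drawn from the specification itself), an eight-bit-per-channel RGB pixel can be packed into and recovered from a twenty-four bit digital word as follows:

```python
def pack_rgb(r: int, g: int, b: int) -> int:
    """Pack three 8-bit channel values into one 24-bit word."""
    return (r << 16) | (g << 8) | b

def unpack_rgb(word: int) -> tuple:
    """Recover the 8-bit R, G, and B values from a 24-bit word."""
    return (word >> 16) & 0xFF, (word >> 8) & 0xFF, word & 0xFF

# Round trip: packing and unpacking preserves the channel values.
assert unpack_rgb(pack_rgb(200, 100, 50)) == (200, 100, 50)
```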
It is of course possible to use greater or fewer than eight bits for each of the RGB values. It is also possible to represent color pixels using other color schemes such as a hue, saturation, brightness (HSB) scheme or a cyan, magenta, yellow, black (CMYK) scheme. It should thus be noted that the techniques described herein are applicable to a plurality of color schemes including but not limited to the above mentioned RGB, HSB and CMYK schemes, as well as the luminosity and color axes a & b (Lab) scheme, the YUV color difference color coordinate system, the Karhunen-Loeve color coordinate system, the retinal cone color coordinate system and the X, Y, Z scheme.
Reference is also sometimes made herein to an image as a two-dimensional pixel array. An example array size is 512×512. One of ordinary skill in the art will of course recognize that the techniques described herein are applicable to various sizes and shapes of pixel arrays including irregularly shaped pixel arrays.
A “scene” is an image or a single representative frame of video in which the contents and the associated relationships within the image can be assigned a semantic meaning. A still image may be represented, for example, as a pixel array having 512 rows and 512 columns. An “object” is an identifiable entity in a scene in a still image or a moving or non-moving entity in a video image. For example, a scene may correspond to an entire image while a boat might correspond to an object in the scene. Thus, a scene typically includes many objects and image regions while an object corresponds to a single entity within a scene.
An “image region” or more simply a “region” is a portion of an image. For example, if an image is provided as a 32×32 pixel array, a region may correspond to a 4×4 portion of the 32×32 pixel array.
Before describing the processing to be performed on images, it should be appreciated that, in an effort to promote clarity, reference is sometimes made herein to one or more “features” or “information” in “blocks” or “regions” of an image. It should be understood that the features can correspond to any characteristic of the block, including its relationship to other blocks within the same or a different image. Also, such image blocks or regions should be understood as not being limited to any particular type, size or shape of a portion of an image (i.e. the block need not have a square or a rectangular shape). It should also be understood that the image need not be any particular type of image.
Similarly, reference is also sometimes made herein to comparison of the “features” or “information” in one or more blocks to “features” or “information” in one or more other blocks. The other blocks may be from the same or a different image than the first blocks. Also, the processing of the blocks need not be for any specific type of image processing application. Rather, the processing of the blocks applies in a wide variety of image processing applications.
Accordingly, those of ordinary skill in the art will appreciate that the description and processing taking place on “blocks” and “regions” could equally be taking place on portions of an image having a square, rectangular, triangular, circular, or elliptical shape of any size. Likewise, the particular field in which the image processing systems and techniques of the present invention may be used includes, but is not limited to, biometric applications and systems to identify and/or verify the identity of a particular person or thing, inspection systems including inspection and test of printed circuit boards (including without limitation any type of circuit board or module or integrated circuit) in all stages of manufacture from raw boards to boards which are fully assembled and/or fully loaded with components and sub-assemblies (including hybrids), including solder joint inspection, post paste inspection and post placement inspection, inspection and test of semiconductor chips in all stages of manufacture, from wafers to finished chips, image or video classification systems used to search image or video archives for a particular type of image or a particular type of video clip, and medical image processing applications to identify particular characteristics in an image, such as a tumor.
Referring now to
For example, Network Connection 14e allows input system 14 to receive image data from a global information network (e.g., the Internet), an intranet, or any other type of local or global network. Also coupled to image processing system 12 is an image storage device 18 which may, for example, be provided as one or more image databases having stored therein a plurality of images 20a-20N generally denoted 20. The images 20 may be provided as still images or alternatively the images may correspond to selected frames of a video signal which are treated as still images.
Image processing system 12 also includes an image retrieval mechanism to retrieve images from the storage device 18. Alternatively, images may be provided to image processing system 12 via the graphical user interface 14a using one of a number of commercially available drawing packages such as Microsoft Paint or any similar package. Alternatively still, a camera such as camera 14c or other image capture device may be used to feed images to the processing system 12. Thus system 12 can receive real time or “live” camera images instead of retrieving images from a database or other storage device. Alternatively still, images may be fed to the image processing system 12 via the facsimile system 14b, the scanner 14d or the like. Regardless of how images are fed to image processing system 12, image processing system 12 receives the images and processes them in accordance with techniques to be described below.
In general overview, image processing system 12 provides the results of the query to output system 16 in a pictorial format as a plurality of images 22a-22h. Each of the result images 22a-22h has a quantitative value 24a-24h associated therewith. The quantitative values 24a-24h indicate the closeness of the match between the query image and the image stored in the storage device 18. A quantitative value 24 closest to a value of zero indicates the closest match while higher values indicate a match which is not as close. The quantitative values allow the image processing system 12 to order the result images. The relative differences between the quantitative values associated with each image can be used to cluster groups of images based upon the closeness of the match to the query image. For instance, there is a relatively large difference between the scores associated with images 22b and 22c. This provides an indication that images 22a and 22b are closer in terms of visual similarity to the query image 20 than the images 22c, 22d and beyond. The difference in scores between 22f and 22g is much smaller than the difference in scores between 22b and 22c. This indicates that images 22f and 22g are very similar in relation to the query image 20. Thus, it should be appreciated that it is the relative values of the scores, rather than the absolute scores, that can be used to order the images found as a result of the search.
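One way such gap-based grouping might be realized is sketched below. The patent does not prescribe a specific clustering rule, so the mean-gap heuristic and the `gap_factor` parameter are illustrative assumptions.

```python
def cluster_by_score_gaps(scores, gap_factor=3.0):
    """Group sorted match scores into clusters separated by large gaps.

    scores: non-empty ascending list of quantitative values (0 = best match).
    A new cluster starts wherever a gap between consecutive scores exceeds
    gap_factor times the mean gap.
    """
    gaps = [b - a for a, b in zip(scores, scores[1:])]
    mean_gap = sum(gaps) / len(gaps) if gaps else 0.0
    clusters, current = [], [scores[0]]
    for score, gap in zip(scores[1:], gaps):
        if mean_gap and gap > gap_factor * mean_gap:
            clusters.append(current)
            current = []
        current.append(score)
    clusters.append(current)
    return clusters

# The large jump after the second score separates the close matches from
# the rest, mirroring the 22b/22c distinction described above.
print(cluster_by_score_gaps([0.1, 0.15, 2.0, 2.1, 2.2, 2.21]))
# [[0.1, 0.15], [2.0, 2.1, 2.2, 2.21]]
```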
Referring now to
The function of the plug-in-modules 30 is to provide the information necessary to describe how to optimize the image analyzer 38 for a particular application. Each of the plug-in-modules 30 defines the particular regions of an image and particular measurements to make on pixels within that image region, as well as the measurements to make on neighboring pixels in neighboring image regions, for a given application. For example, Module 30a specifies parameters, constraints and characteristics to be used when performing trademark image searches. Module 30b specifies parameters, constraints and characteristics to be used when performing searches upon images of faces. Module 30k specifies parameters, constraints and characteristics to be used when performing searches of scenes (e.g., images of waterfalls, fields, etc.). Module 30l specifies parameters, constraints and characteristics to be used when performing searches for fabrics. Module 30m specifies parameters, constraints and characteristics to be used when performing inspection of printed circuit boards. Module 30N specifies parameters, constraints and characteristics to be used when performing searches on a stream of video images. The plug-in-modules 30 thus specify parameters and characteristics which are important in a particular image match/search routine. The plug-in-modules communicate this application-specific information to image analyzer 38.
The plug-in-modules 30 may be implemented in a variety of ways. The modules 30 themselves may implement a technique that takes as input two or more image regions, compares them to each other, and returns scores of how similar they are to each other. The plug-in-modules 30 may contain a list of image attributes that are important for computing image similarity for a particular application, such as color, luminance, and texture. The plug-in modules may also suggest how to preprocess images before the images are sent from storage device interface 39 to image analyzer 38.
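An interface of this kind might look like the following sketch. The class and method names are hypothetical; the patent describes the modules' role but does not define a programming interface.

```python
from typing import Protocol, Sequence

class PlugInModule(Protocol):
    """Illustrative interface for an application-specific optimizer."""

    # Attributes the module declares important for its application,
    # e.g. ("color", "luminance", "texture"), with relative weights.
    attributes: Sequence[str]
    weights: Sequence[float]

    def preprocess(self, image):
        """Optionally transform an image before analysis (e.g., delineate
        a face, normalize illumination)."""
        ...

    def compare(self, region_a, region_b) -> float:
        """Return a similarity score for two image regions; by convention
        here, lower means more similar."""
        ...
```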
For example, if the query image is a trademark image and it is desirable to search the storage device 18 for similar trademark images, then plug-in-module 30a, which corresponds to a trademark optimizer, would be used.
On the other hand, if the application is face recognition and the query image contains a human face, then plug-in module 30b, which corresponds to a face optimizer, would be used. The face plug-in-module 30b contains information describing that global configuration is important and that relative luminance or relative color information should be used in the matching process to be robust against illumination changes. Although not necessary, plug-in-module 30b may implement a technique that describes how to delineate the boundaries of a face in an image. This technique may be used to preprocess the query and target images prior to analysis. Such a technique may be used to align the query and the target image (as shown in step 40 of the flow diagram described below).
The scene plug-in-module 30k emphasizes color and local and global structure as important attributes when comparing two scenes. The fabric optimizer 30l utilizes a weighted combination of color, texture, local and global structure in a similarity metric which takes as input two or more images of fabric samples. A video sequence plug-in-module can also be used. In the case of the video optimizer 30N, it must describe how to match one still image to another image and also how to take into account that these images are part of a time sequence. For instance, it may be important to track objects over time. It may also be important to calculate the relative position, color, or other attribute from a frame at time t1 and a frame at time t2. These calculations can be made independently for two or more image sequences. The processed image sequences may then be compared.
It is not necessary to have a predefined specialized plug-in-module in order for image analyzer 38 to compare two images. The image analyzer may use a default plug-in-module 30x, which contains generic biases to compare two or more images of unknown content. For instance, the default plug-in-module 30x utilizes color, luminance, and local relative structure. If feedback is provided to the image processor 24 and the plug-in module, the default plug-in-module 30x can generate a list of associations to important image attributes. Measuring and comparing these attributes will therefore take precedence over measuring and comparing other attributes. In the learning step the default plug-in-module 30x can also identify significant internal parameters and ranges for their values. This learning or refining procedure can be applied to the default plug-in-module 30x or any of the application-specific plug-in-modules 30a-30N. The learning procedure can be used to generate a set of parameters, characteristics and constraints for a new plug-in-module or to train an existing plug-in-module to utilize a set of parameters, characteristics and constraints for a new application.
Many of the applications described are image search applications. The basic premise is that one or more images are provided to the image processor 24 as positive or negative examples or samples. Together these examples define a semantic class of images. The task of the image processor 24 is to search one or more images (which may be stored in one or more databases) to find other images in this class. Search engine 36 can be optimized to perform searches rapidly. In its most generic form, the search engine 36 is used to retrieve images from a storage device and to channel those images sequentially or in batches to the image analyzer. Images from the search engine 36, which are identified by the image analyzer 38 or by a user, may be used to refine a search or to perform a new search. This is further explained below.
Alternatively, the processing blocks represent steps performed by functionally equivalent circuits such as a digital signal processor circuit or an application specific integrated circuit (ASIC). The flow diagram does not depict the syntax of any particular programming language. Rather, the flow diagram illustrates the functional information one of ordinary skill in the art requires to fabricate circuits, or to generate computer software to perform the processing required of the particular apparatus. It should be noted that many routine program elements, such as initialization of loops and variables and the use of temporary variables, are not shown.
Some of the processing blocks can represent an empirical or manual procedure or a database function while others can represent computer software instructions or groups of instructions. Thus, some of the steps described in the flow diagram may be implemented via computer software while others may be implemented in a different manner, e.g. manually, via an empirical procedure, or via a combination of manual and empirical procedures.
It will be appreciated by those of ordinary skill in the art that unless otherwise indicated herein, the particular sequence of steps described is illustrative only and can be varied without departing from the spirit of the invention.
Referring now to
Either after or before the alignment step, the primary and target images are each divided or segmented into a plurality of sub regions (or more simply regions) or blocks as shown in step 42. It should be noted that divide step 42 may be performed as a so-called intelligent divide. An intelligent division of the image is a procedure in which regions of pixels that have similar attributes are formed into groups. One example is to group regions according to color. For a field scene, which has a blue sky and a green field, the intelligent divide would partition the image into the blue and green parts. Another example of an intelligent divide is to group pixels that belong to the same object. For instance, in an image of a printed circuit board, the intelligent divide would group image regions that belong to the same component. In an alternate embodiment, the intelligent divide can be performed on the target images in place of or in addition to the primary image.
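A minimal sketch of such attribute-based grouping follows, here assigning pixels to groups by nearest reference color; the reference colors and the squared-distance measure are illustrative assumptions, not part of the specification.

```python
def intelligent_divide(pixels, reference_colors):
    """Group pixel coordinates by their nearest reference color.

    pixels: dict mapping (row, col) -> (r, g, b)
    reference_colors: list of (r, g, b) prototypes, e.g. sky blue and
    field green for the field-scene example above.
    Returns one list of coordinates per reference color.
    """
    def dist(c1, c2):
        # Squared Euclidean distance in RGB space.
        return sum((a - b) ** 2 for a, b in zip(c1, c2))

    groups = [[] for _ in reference_colors]
    for coord, color in pixels.items():
        nearest = min(range(len(reference_colors)),
                      key=lambda i: dist(color, reference_colors[i]))
        groups[nearest].append(coord)
    return groups
```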
In step 44, the properties in predetermined ones of the subregions or blocks of the primary image are combined or “collapsed” to provide a relatively compact representation of the primary image. One particular technique to collapse the data in the subregions is to compute the average value of a particular parameter over the whole subregion. For instance, this collapsing step could be to find the average luminance of the subregion. Another particular technique for combining the block properties is described hereinbelow.
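For instance, collapsing a block to its average luminance, one of the techniques mentioned above, can be sketched as:

```python
def collapse_region(block):
    """Collapse a 2-D block of gray-scale pixel values to one number:
    the average luminance over the whole subregion."""
    values = [v for row in block for v in row]
    return sum(values) / len(values)

# A 2x2 block collapses to a single representative value.
print(collapse_region([[10, 20], [30, 40]]))  # 25.0
```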
Processing then flows to step 46 in which a region or block within the primary image (referred to herein as a primary image region) is selected. Each primary image region has associated therewith a set of properties or characteristics or features. In some embodiments, the primary image region and regions within a predetermined distance of the primary image region (referred to herein as neighbor regions) are used in the matching process.
In the case where neighbor regions are used, the properties of both the primary image region and the neighbor regions are used in the matching process. In one embodiment, a radius R1 is selected to determine which neighbor regions around the primary image region to include for further processing. For example, a radius of zero (i.e. R1=0) indicates that no neighbor regions are included, while a radius greater than zero (e.g. R1=1) indicates that at least some neighbor regions are to be included. The particular number of neighbor regions to include should be selected in accordance with a variety of factors including but not limited to the particular application, the processing speed and the relative importance of particular structure in the matching process. Thus, one reason to include neighbors in the matching process is if the structure included in the neighbor is an important consideration in the matching process. For instance, when comparing facial images, the structure of the face is very important. In all cases there should be regions corresponding to the eyes, nose, mouth, cheeks, etc. These must be configured in the correct spatial organization (e.g. the cheeks should be on either side of the nose). The magnitude of the radius R1 indicates the level at which the structure is important. A radius R1 having a relatively small value indicates that local structure is important. As the value of radius R1 increases to the limit of the height and width of the image, then global structure is emphasized.
It should be noted that not all the neighbor regions around the primary image region are important. For instance, when comparing images of solder joints, assuming the solder joints are arranged vertically in the image, it may be that only the regions above and below the primary region should be included in the calculations. It is also not necessary for neighbor regions to have a common boundary to the primary region. For example, one definition of neighbors could be regions that are 1 region away from the primary image region.
Processing then flows to step 48 where a target image region or block is selected. In a manner similar to the selection of the primary image region, regions which are neighbors to the target image region can also be selected. In this case the neighbor regions are with respect to the target image region. For example, a radius R2 may be used to determine the neighbor regions which are within a predefined distance from the target image region. In most cases the neighbors for the primary image region and the target image region should be computed in the same fashion. If this is not the case, for instance if the radius R1 is greater than or equal to 2 (i.e. R1>=2) and radius R2 is set equal to 1 (i.e. R2=1), then only the neighbor regions one step away from the primary image region and the target image region should be considered in the computation. With R2 set to zero, a comparison between one or more properties of the primary image region and the target image region is made.
Processing then flows to step 50, where one or more properties of the selected primary image region and any neighboring regions are compared to corresponding properties in the target image region and any neighboring regions. In step 52, a score indicating the difference between the primary image region and its neighbors and the target image region and its neighbors is computed and associated with the target image region. A score, for instance, could be based upon the characteristics of luminance and position. If the radii R1 and R2 are 0, the score can be computed as a linear combination of the absolute difference in luminance and the absolute difference in position of the primary image region and the target image region. If radii R1 and R2 are greater than zero, then the score can be augmented by adding in the differences in luminance and position of the corresponding primary and target region neighbors. Often, the differences in the neighbors' characteristics are weighted less than the difference in the regions themselves. For instance, the weights of the neighbors can be calculated by a Gaussian function centered on the location of the primary image region. Other weighting functions may of course also be used.
If radii R1 and R2 are greater than zero, then relative measures can be made between the primary image region and its neighbors or between the neighbors themselves. For instance, one can compute whether the neighbors have greater, lesser, or equal luminance relative to the primary image region. The same computation can be made between the primary target region and its neighbors. The difference between the primary image region and the target image region will be greater if the order of these relative relationships is violated in the target image region. Differences in relative measures may be weighted and added into the total score.
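Steps 50 and 52, together with the Gaussian neighbor weighting and relative measures just described, might be combined as in the following sketch. The region representation, the default sigma, and the fixed penalty for a violated luminance ordering are illustrative assumptions, not values from the specification.

```python
import math

def region_score(primary, target, sigma=1.0, order_penalty=1.0):
    """Score the difference between a primary region and a target region.

    Each region is a dict with:
      "lum": luminance, "pos": (row, col),
      "neighbors": list of (offset, luminance) pairs, where offset is the
      (row, col) displacement from the region itself. Neighbors are assumed
      to be listed in corresponding order for the two regions.
    Lower scores mean better matches.
    """
    def gauss(offset):
        # Weight neighbors by a Gaussian centered on the region location.
        return math.exp(-(offset[0] ** 2 + offset[1] ** 2) / (2 * sigma ** 2))

    # Linear combination of absolute luminance and position differences.
    score = abs(primary["lum"] - target["lum"])
    score += sum(abs(a - b) for a, b in zip(primary["pos"], target["pos"]))

    for (off_p, lum_p), (off_t, lum_t) in zip(primary["neighbors"],
                                              target["neighbors"]):
        # Neighbor differences count, but are weighted less than the
        # difference between the regions themselves.
        score += gauss(off_p) * abs(lum_p - lum_t)
        # Relative measure: penalize a violated brighter/darker ordering
        # between each neighbor and its region.
        rel_p = (lum_p > primary["lum"]) - (lum_p < primary["lum"])
        rel_t = (lum_t > target["lum"]) - (lum_t < target["lum"])
        if rel_p != rel_t:
            score += order_penalty
    return score
```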
Processing then flows to decision block 54 where it is determined whether it is necessary to compare the properties of the primary image region to the properties of a next target primary image region. If it is necessary to compare the primary image region to a next target image region, then processing flows to step 56, where a next target image region is selected and then to step 58, where the primary image region is aligned with the next target primary region for purposes of comparison. Processing then returns to step 48. Thus, steps 54, 56 and 58 implement a loop in which the primary image region is moved over a predetermined number of target primary image regions. In some cases, the primary image region may be moved over or compared to all image regions in the target image. In other cases, the primary image region may be moved only over a selected number of image regions in the target image.
Once a decision is made in decision step 54 not to move to a next target image region, processing flows to step 60 in which the best match is found between the primary image region and a target image region by comparing the scores computed in step 52.
Processing then flows to step 62 where a decision is made as to whether more primary image regions should be processed. If more primary image regions remain to be processed, then processing flows to step 64 where the next primary image region is selected. It should be noted that in some cases, each subregion of the primary image is selected as a primary image subregion for processing. In other cases, however, it may not be necessary or desirable to utilize each image region in the primary image. By processing fewer than all subregions in the primary image, it may be possible to increase the speed with which a primary image and a target image can be matched. After step 64, processing returns to step 48 and steps 48-62 are repeated until there are no more primary image regions to process.
In decision block 62, when the decision is made that no more primary image regions remain to be processed, processing flows to step 66 in which the best scores associated with each primary image region are combined in order to provide a score of how much deformation was required to map the primary image to the target image. One way to calculate the amount of deformation is to add together the best scores associated with each of the primary image regions.
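The overall loop of steps 46 through 66 can then be organized as in the sketch below, where `region_score` stands for a scoring function like the one sketched earlier; the sum of the per-region best scores is the total deformation.

```python
def match_images(primary_regions, target_regions, region_score):
    """Map each primary image region to its best-scoring target region
    and return the total deformation (the sum of the best scores)."""
    total_deformation = 0.0
    for p in primary_regions:
        # Compare this primary region against every candidate target
        # region and keep the best (lowest) score.
        best = min(region_score(p, t) for t in target_regions)
        total_deformation += best
    return total_deformation
```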
Processing then flows to step 68 in which an output is provided to a user.
Referring now to
In this particular example, three different axes of a Cartesian coordinate system represent three different attributes of an image: attribute 1, attribute 2 and attribute 3. Image regions c1-c8 and q1-q8 are plotted along the axes. A distance between the attributes in the query image and attributes in a candidate image is computed. The distances between the points c1-c8 and q1-q8 are totaled, and the minimum value is selected as the distance corresponding to the best match. Thus the best match is defined as the image having the least amount of deformation with respect to the query image. The distance is computed as a function of the properties of the query image and the properties of the candidate image. The function used can be, for example, the absolute difference between the property of the ith query image region (denoted as prop(qi)) and the property of the jth candidate image region (denoted as prop(cj)) multiplied by a weight value.
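Restated symbolically (a transcription of the example just given, with $w$ denoting the weight value), the pairwise distance and best-match criterion are:

$$d(q_i, c_j) = w \cdot \left| \operatorname{prop}(q_i) - \operatorname{prop}(c_j) \right|, \qquad \text{best match} = \min_{c} \sum_{i=1}^{8} d(q_i, c_i).$$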
Referring now to
Referring now to
Each block formed by the intersection of a row and column will be denoted as 76XX where XX corresponds to an index identifier for a particular block. For example, reference numeral 76aa denotes the block in the upper left-hand corner of image 70, while reference numeral 76hh denotes the block in the lower right hand corner of primary image 70. The block structure can be referred to as a grid.
The particular number of rows and columns to be used in any particular application to create image blocks may be selected in accordance with a variety of factors including but not limited to the desire to have distinctive or important features in the image delineated by one or more blocks. Consider a field scene in which the important regions are those corresponding to the sky and the field. It is undesirable to have a very coarse grid where many important features of the image are merged together. The most extreme example of this coarseness is to consider the whole image as one block (1 row, 1 column). In this case, it would not be possible to distinguish the sky from the field. It is also undesirable to have a grid where unimportant details in the image are separated by blocks. Consider the same field scene and a fine grid which delineates each blade of grass. In the most extreme case the blocks are made up of individual pixels.
Although primary image 70 is here shown having a square shape, those of ordinary skill in the art will appreciate that other shapes may also be used. Although primary image 70 has been divided into a plurality of square regions of the same size, those of ordinary skill in the art will appreciate that in some applications it may be desirable to use other sized or shaped regions and a different region reference method could be used. For example, in some applications processing speed could be improved or the complexity of particular computations could be simplified by selecting blocks having a rectangular, a circular or a triangular shape.
Target image 80 has likewise been divided into a predetermined number of rows and a predetermined number of columns. It is most common to divide the target image in the same manner as the primary image. In this case target image 80 has the same number of rows and columns as primary image 70.
The primary image 70 is aligned with a target image 80. This step can be performed before or after dividing the images into regions or blocks.
In primary image 70, a first primary image block, 76bg, is selected. Next, a plurality of neighbors proximate primary image block 76bg are selected. In this case, the neighboring blocks are 76af, 76ag, 76bf, 76cf, and 76cg. The primary image block 76bg and the neighboring blocks 76af, 76ag, 76bf, 76cf, and 76cg form a selected image region 78.
After choosing a first primary region, a target image block is selected. Also, a neighbor region is defined around the target image block. In this particular example, a first target primary block, 86gb, is identified and neighboring blocks 86fa, 86fb, 86ga, 86ha, and 86hb are also selected. Thus, a target image region 88 is defined within the target image.
It should be noted that in this particular example, the size and shape of the primary image region 78 and the target region 88 have been selected to be the same. It should be noted, however, that it is not necessary for the size of the regions 78 and 88 to be the same. For example, in some embodiments it may be desirable not to identify any blocks neighboring block 86gb. In this case, only the properties of region 76bg would be compared with the properties of region 86gb to determine a match.
As shown, however, one or more selected properties of each of the blocks within region 78 are compared with like properties of the blocks within region 88.
The region 78 is then moved to a next portion of the target image. It should be noted that for an ideal match for primary image block 76bg, the ideal location should be region 86bg in the target image.
It should also be noted that in some embodiments it may be desirable or necessary to place constraints on the positions within target image 80 at which region 78 may be placed.
The neighbor properties are matched when the selected region is compared with the region of the target image, but the neighbor comparisons may be given less weight than the comparison of the primary image region itself. For example, in one embodiment, a Gaussian weighting scheme can be used.
Thus, the properties of primary image block 76bg are compared with the properties of target image block 86gb, while the properties of block 76ag are compared with properties of block 86fb. The properties of block 76af are compared with the properties of block 86fa, and the properties of block 76bf are compared with the properties of block 86ga. The properties of neighbor block 76cf are compared with the properties of block 86ha, and the properties of neighbor block 76cg are compared with the properties of block 86hb.
The appropriate plug-in module (e.g., one of modules 30) specifies which properties to compare and whether the relative relationships among the blocks are expected to be consistent.
If the properties are not consistent, then this factor would be taken into account when computing a total score for the match at that particular location.
The aggregate score, given a primary image region, a target image region, and a set of properties, is computed as a function of the difference between each of the actual values of the properties in the primary and target regions. A simple example is a function which increases linearly with the difference between the primary and target properties.
Referring now to
If image 80 is considered the primary image and image 70 is considered the target image, then the primary image 80 can be compared with target image 70 using the comparison method described above.
An example can be used to illustrate this point. Let the regions in image 70 be all black. Let one region in the center of image 80 be black and the rest white. Let luminance be the only property to be considered. When image 70 is compared to image 80, all the regions in image 70 will find an exact luminance match with the middle region in image 80. Thus, the match between image 70 and image 80 in this case is 0, the best possible match. Now, reverse the roles of the two images. When image 80 is compared to image 70, only the center black region will have a good luminance match to regions in image 70. The rest will have a high luminance difference to all the regions in image 70. The match from 80 to 70 gives a low degree of similarity (or a high match score). Thus, the match computation as it stands is not symmetric.
It is sometimes desirable to have a symmetric measurement. For instance, this will help to ensure that the same images are returned as the result of a database search with similar query images.
Processing begins in step 90 where a first image (image A) is matched to a second image (e.g., image B) and an aggregate score is computed. The processing performed to match image A to image B is the processing described above.
Processing then flows to step 92 in which the second image (i.e. image B) is matched to the first image (i.e. image A) and a second aggregate score is computed using the processing steps described above.
Next, as shown in processing step 94, the scores from the two searches are combined to provide a composite score. The scores may be combined, for example, by simply computing a mathematical average or alternatively, the scores may be combined using a different mathematical technique which includes a weighting function or other technique to emphasize one of the scores.
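A sketch of this combination step follows; the simple average corresponds to `weight = 0.5`, and other weights realize the mentioned emphasis of one score over the other.

```python
def symmetric_score(score_a_to_b, score_b_to_a, weight=0.5):
    """Combine the two directional match scores into one composite score.

    weight = 0.5 gives the plain mathematical average; other weights
    emphasize one direction of the comparison over the other.
    """
    return weight * score_a_to_b + (1.0 - weight) * score_b_to_a
```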
One could also retain the scores for each property in steps 90 and 92.
Referring now to
Likewise, image portions 102c, 102d are combined and represented as image portion 104b in image region 100h; image portions 102e, 102f are combined and represented as image portion 104c in image region 100j; and image portions 102g, 102h are combined and represented as image portion 104d in image region 100c. Thus image 102 may be compactly and efficiently represented as image 104 comprising image portions 104a-104d. Image 104 is used to perform searching within an image database such as database 18.
Each image within the database 18 is likewise compactly represented such that when two images are compared, a relatively large amount of image information is contained within a relatively compact representation thus allowing rapid comparisons to be made. Also, by utilizing the “collapsed image” approach, an image can be stored in a relatively small amount of storage space.
Referring now to
For example, query image portion 106a appears in the same image segment as database image portion 108a and thus no deformation of the query image is required to match the two image portions. Query image portion 106b must be deformed or moved in a downward direction by one image segment to align with the corresponding database image segment. The same matching steps are carried out for each of the query image portions 106a-106d with respect to each segment and portion of the database image 108. Once each of the query image portions 106a-106d has been tested for a match, the process is repeated for the next target image, which may, for example, be retrieved from an image database.
Referring now to
It should be noted that the selected query section 136 is not limited to just one image. The results section 138 can contain one or more results, not necessarily the eight images shown here. Images in the results section 138 can also be selected and put into the selected query region 136.
In this particular example, a display of three images 134a-134c as query choices is shown. The rightmost image 134c (the triangle with the swirl) was chosen as the query image and placed in the query location 136. A search system using the techniques described above performed the search.
The system orders the whole database in terms of how similar each database image is to the query image and displays the results in order of their computed visual similarity to the query image.
In the case of this example, the result images 138a-138h are displayed in the results section in order of decreasing similarity, with the image computed to be most visually similar to the query image appearing first.
It should be noted that the display may be dynamically updated during a search; the positions of images 138a-138h may change as additional database images are processed and scored.
For example, the order of the result images in section 138 may change as better-matching images are found, with newly found images displacing earlier results.
Such image updates can be provided to a user in real or pseudo-real time. To a user viewing this dynamic update process, in which some images move from one location to another, some images move off the display, and other new images appear on the display, the effect appears as a "ripple effect."
Referring now to the next figure, a flow diagram of the processing performed during such an incremental search is shown.
Processing begins in block 160 where a query image and a set of target images are selected for processing. Typically, a predetermined number of target images (N) are selected for processing. Processing then proceeds to block 162 in which the query image is compared with a subset of the target images. The comparison may be accomplished using the techniques described above.
Processing then proceeds to block 164 in which the processed target images and their respective scores are stored in a list of processed images. Next, as shown in block 166, the list of processed images is sorted by similarity score.
In block 168, the processed target images are removed from the set of target images and the value of N is decreased. In this example, the number of images left to process, denoted N, is decreased by a value x corresponding to the size of the subset. Processing then proceeds to block 170 where the best M images are displayed to the user. Processing then flows to decision block 172 where a decision is made as to whether there are any more target images to be processed. If there are more target images to be processed, processing flows to block 162 and the processing in blocks 162-170 is repeated until there are no more target images to be processed.
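A minimal sketch of the loop of blocks 160-172, assuming a hypothetical `similarity` function in which lower scores mean better matches:

```python
def incremental_search(query, targets, similarity, x=10, m=8, display=print):
    """Score targets in batches of x, keep a sorted list of all processed
    images, and show the user the best m results after each batch."""
    processed = []                        # list of (score, image) pairs
    remaining = list(targets)             # the N images left to process
    while remaining:                                      # decision block 172
        batch, remaining = remaining[:x], remaining[x:]   # blocks 162 and 168
        processed.extend((similarity(query, t), t) for t in batch)  # block 164
        processed.sort(key=lambda pair: pair[0])          # block 166
        display([img for _, img in processed[:m]])        # block 170
    return processed
```

Because the displayed top-m list is refreshed after every batch, images enter, shift, and leave the display as the search proceeds, producing the "ripple effect" described above.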
Referring now to the next figure, a display of result images 182a-182h produced during such a search is shown.
It may be desirable to be able to stop the search and view the result images at times prior to the completion of the search (e.g., prior to processing all images in a database) since any of the resulting images 182a-182h can be selected and used as starting points for a new search.
It may also be desirable to stop the search so that any result images 182a-182h that are interesting to the user can be stored.
Furthermore, by allowing the search to be interrupted, a user watching the visual display can stop the search if a match is found, without waiting for the image processing system to complete a search of the entire database, thereby decreasing the amount of time needed to satisfy a query, in this case finding a similar or infringing trademark.
It should be appreciated that although the images 182a-182h in this example are trademark images, the same techniques may be used with other types of images and in other applications.
Referring now to the next figures, the manner in which one or more regions of a query image, such as regions 192, 194, may be selected is shown.
Referring now to FIGS. 9 and 9E-F, the manner in which one or more portions of a query image may be emphasized is shown.
Also, a relative relationship can be defined between each of the selected image regions, such as between regions 192, 194. An example of a relative relationship between regions is a spatial relationship.
It should also be noted that portions of the image may be removed or erased to completely de-emphasize a feature. For instance, a portion of image 196 may be erased so that the erased portion does not contribute to the match.
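One plausible realization of region emphasis, de-emphasis, and erasure, sketched under assumed structures (per-region feature arrays and user-assigned weights, neither of which is specified here):

```python
import numpy as np

def weighted_region_score(query_regions, target_regions, weights):
    """Weighted sum of per-region differences; lower is a better match.
    Weights above 1 emphasize a region, weights below 1 de-emphasize it,
    and a weight of 0 removes the region entirely (the "erased" case)."""
    total, weight_sum = 0.0, 0.0
    for q, t, w in zip(query_regions, target_regions, weights):
        if w == 0.0:
            continue                     # erased region: ignored entirely
        diff = np.abs(np.asarray(q, float) - np.asarray(t, float))
        total += w * float(np.mean(diff))
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0
```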
Referring now to the next figure, a flow diagram of the processing performed to match a selected subregion of a primary image to a target image is shown.
Processing then flows to decision block 202 where a decision is made as to whether the target or the primary subregion should be resized. If the decision is made to resize one of the images (e.g., the primary subregion), then processing flows to block 204, where the one or more resized images are generated. It should be appreciated that in some applications it may be desirable to resize the target image and the primary subregion such that they are the same size, in which case processing then flows to block 206 and the matching steps described above are performed.
Processing then flows to decision block 207 in which a decision is made as to whether any more resized images remain to be processed. If more images remain to be processed then processing returns to block 205. If no more images remain to be processed then processing ends.
In the image pyramid case, each image in the image pyramid corresponds to the primary subregion at a different scale. In this case, each scaled primary subregion is compared to the target image and the origin of each of the images in the image pyramid is aligned at a particular location with the target image. Processing to compare the images in accordance with the steps explained above is then performed.
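By way of illustration only, a factor-of-two pyramid and origin alignment might look as follows; the number of levels and the choice of `compare` (any of the matching functions sketched earlier) are assumptions.

```python
import numpy as np

def build_pyramid(region: np.ndarray, levels: int = 3):
    """Return the region at full, half, quarter ... resolution (2x2 averaging)."""
    pyramid = [region.astype(float)]
    for _ in range(levels - 1):
        r = pyramid[-1]
        h, w = (r.shape[0] // 2) * 2, (r.shape[1] // 2) * 2
        r = r[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(r)
    return pyramid

def best_scale_score(region, target, compare, levels=3):
    """Align each scaled copy of the region at the target origin and keep
    the best (lowest) score over all scales at which the region fits."""
    scores = []
    for scaled in build_pyramid(region, levels):
        h, w = scaled.shape
        if h > target.shape[0] or w > target.shape[1]:
            continue                  # the scaled region must fit the target
        scores.append(compare(scaled, target[:h, :w]))
    return min(scores) if scores else None
```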
If in decision block 202 a decision is made to not resize an image, then processing flows to block 214 where the origin of the selected region and a reference point in the target image are selected.
Processing then flows to block 215 where certain conditions and constraints are specified. For example, boundary conditions may be specified. The boundary conditions can, for example, place limits on the positions at which region 222 may be placed relative to the target image. One such boundary condition may be that no portion of region 222 may lie outside any region of the target image.
When selecting the origin, care should be taken to not select a reference point which causes the subregion (e.g., region 222) to violate a boundary condition, such as by extending beyond the edge of the target image.
In some embodiments it may be desirable to specify the boundary conditions prior to selecting a reference point. Regardless of the order in which steps 214 and 215 are performed, the boundary conditions are compared with the selected reference points to determine whether a selected reference point is valid.
In this particular example, subregion 222 is assumed to have an origin at its upper left-most corner. It should be appreciated, however, that other origins could also be selected for subregion 222. For example, the lowermost right-hand corner of subregion 222 could also serve as the origin. Thus, although in this example the origin is assumed to be in the upper left-hand corner, it is not necessary for this to be so.
Once the origin of the subregion 222 and its reference point in target image 224 are identified, it is then possible to place subregion 222 at that reference point in target image 224. In this particular example, the subregion 222 is shown having its origin located at the same position as the origin of the target image 224.
If the subregion and the target image are not the same size, it should be noted that it is not necessary to place the origin of subregion 222 at the origin of the target image 224. The origin of subregion 222 may be placed at any location within target image 224 as long as all other constraints (including boundary conditions) are satisfied.
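For illustration, the set of reference points satisfying the example boundary condition (no portion of the subregion outside the target) can be enumerated directly; the upper-left origin convention follows the example above.

```python
def valid_reference_points(sub_h, sub_w, target_h, target_w):
    """All (row, col) origins that keep a sub_h x sub_w subregion, with its
    origin at its upper-left corner, entirely inside the target image."""
    return [(r, c)
            for r in range(target_h - sub_h + 1)
            for c in range(target_w - sub_w + 1)]
```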
It should be noted that in some applications it may be desirable to select more than one region in the primary image for matching to the target image.
Processing then flows to block 216 where the processing to compare the images in accordance with the steps explained above is performed.
If in decision block 218 a decision is made that more reference points remain to be processed, then processing returns to block 215. If no more reference points remain to be processed then processing ends.
The query provided to the image processing system 12 may be composed of more than one image.
Turning now to the next figure, a flow diagram of the processing performed when a query is composed of multiple primary images is shown. Processing begins by selecting a primary image and a target image, computing a similarity score between them, and storing the primary and target image identifiers and the associated score in a storage device.
Processing then flows to decision block 236 where a decision is made as to whether more target images should be processed with the same primary image. If the decision is made to process more target images, then processing flows to block 240 where a next target image is selected and a similarity score between the primary image and the next selected target image is again computed. The primary and target image identifiers and the associated score are again stored in a storage device. This loop is repeated until it is not desired to select any more target images.
Processing then flows to decision block 238 where a decision is made as to whether more primary images should be processed. If a decision is made to process more primary images, then processing flows to block 242 where a next primary image is processed and blocks 232 through 242 are repeated until it is not desired to process any more primary images. Processing then flows to block 244 where a score is computed and an output is provided to a user. The particular manner in which these scores could be computed is described below.
Processing then flows to block 250 where the aggregate score is associated with the target image. Processing then flows to decision block 252 in which a decision is made as to whether more target images should be processed. If more target images should be processed, then processing flows to block 254 and blocks 248 and 250 are repeated. This loop is repeated until there are no more target images to process. When the decision is made in decision block 252 that there are no more target images, then processing flows to block 256 in which the target images are sorted according to their aggregate score. Processing then flows to block 258 in which the sorted images and associated scores are provided to the user. Such output may be provided in the form of a visual display, an audio display or a printed display.
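A compact sketch of the aggregate-scoring loop of blocks 248 through 258, assuming a hypothetical `similarity` function (lower is better) and using a plain average as the aggregate:

```python
def rank_by_aggregate(primaries, targets, similarity):
    """For each target, average its scores against all primary images,
    then sort the targets by that aggregate score."""
    ranked = []
    for target in targets:                               # blocks 248-254
        scores = [similarity(p, target) for p in primaries]
        ranked.append((sum(scores) / len(scores), target))   # block 250
    ranked.sort(key=lambda pair: pair[0])                # block 256
    return ranked                                        # block 258: output
```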
If in decision block 266 a decision is made to not process more primary images, then processing flows to block 270 in which a threshold number, N, is selected. In block 272, target images that are in the first N positions in all the lists associated with the primary images are identified. In block 274, the resulting subset of target images is sorted by best score or best average score and provided to a user. The information may be provided in the form of a visual display, an audio display or a printed display or using any other display technique well known to those of ordinary skill in the art.
There are many other techniques to compute a similarity score based on a set of multiple query or primary images. For example, one may match each primary image to each of the target images individually. A score is then associated with each target image and this record is put into a list. The list can be sorted based on the computed scores. The system should return a non-duplicative set of the images having the lowest individual score with respect to at least one primary image.
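The alternative just described might be sketched as follows; pairing each image with a best score and de-duplicating by object identity are illustrative assumptions, since the specification does not fix a record format.

```python
def best_individual_ranking(primaries, targets, similarity):
    """Keep, for each target, its lowest score against any single primary
    image, and return the targets sorted by that best individual score."""
    best = {}                            # target identity -> (score, target)
    for target in targets:
        score = min(similarity(p, target) for p in primaries)
        key = id(target)                 # a real system would use image IDs
        if key not in best or score < best[key][0]:
            best[key] = (score, target)
    return sorted(best.values(), key=lambda pair: pair[0])
```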
Rather than trying to combine multiple scores derived from multiple primary images, one may calculate the common characteristics across the primary images to produce a new "condensed" query image that embodies those characteristics. The matching may then be performed using this new query image and a target image as described above.
Alternatively still, one may match multiple query images to each other before processing the target images. The goal is to find the system parameters that produce the most consistent, best similarity scores calculated between the query images. In other words, the goal is to find the system parameters that best explain the similarities between the query images. These system parameters and one or more of the query images may then be used to calculate a similarity score to the target images.
It should also be noted that the query may be refined using positive examples. The results may be calculated using one or more query images. The system parameters may be refined by having the user choose images from the result set as "positive examples". One way to refine the parameters is to alter them such that the resulting measure of similarity gives the lowest (or best) possible scores to the positive examples.
Alternatively, the query may be refined using negative examples. The results may be calculated using one or more query images where now the user may specify both positive examples (images that fit a particular criterion or embody the perceptual concept) and negative examples (images that do not embody the perceptual concept). One way to incorporate positive and negative examples is to alter the system parameters such that the resulting measure of similarity produces a maximal difference between the scores for the positive examples and the negative examples. Another method is to compute the commonalities between the positive examples and then to remove any features contained in the negative examples. A new query image that embodies the characteristics of the positive examples and does not contain the characteristics of the negative examples may then be produced and used to perform the matching.
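As one hypothetical concretization (the specification leaves the system parameters unspecified), the parameters can be taken to be per-feature weights and refined by a brute-force search for the weighting that best separates positive from negative examples; this is practical only for a handful of features and is shown purely to illustrate the idea.

```python
import itertools
import numpy as np

def score(query_feats, target_feats, weights):
    """Weighted feature difference; lower is a better match."""
    return float(np.dot(weights, np.abs(query_feats - target_feats)))

def refine_weights(query_feats, positives, negatives,
                   candidate_levels=(0.0, 0.5, 1.0)):
    """Pick the per-feature weighting that maximizes the gap between the
    mean score of the negative examples and that of the positive examples."""
    best_weights, best_gap = None, -np.inf
    for w in itertools.product(candidate_levels, repeat=len(query_feats)):
        w = np.asarray(w)
        if not w.any():
            continue                  # all-zero weighting is meaningless
        pos = np.mean([score(query_feats, p, w) for p in positives])
        neg = np.mean([score(query_feats, n, w) for n in negatives])
        if neg - pos > best_gap:      # negatives should score worse (higher)
            best_weights, best_gap = w, neg - pos
    return best_weights
```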
Referring now to the next figure, a flow diagram of processing to provide a user with a sorted list of target images is shown. Processing begins by matching a query image to each target image in a list using the techniques described above.
In block 286, the similarity score with each target image in the list is stored. In block 288, the list of target images is sorted by similarity score and in block 290, an output of the sorted list is provided to a user.
Referring now to the next figure, an exemplary image database in which text is associated with each stored image is shown. The database includes a plurality of image data fields 296.
Associated with each image data field 296 is a text field 298. The text field 298 includes a textual description of the image provided by the image data 296. Thus, in this particular example, image 296a is described by text 298a as being a triangle, swirl or geometric object. The particular text associated with each image is typically subjective and descriptive of the particular image. Image 296b is an image of an apple and thus the text 298b associated with image 296b is apple, fruit and food. Other text, such as the color red, for example, may also be associated with the apple. In addition, the text information need not consist of a meaningful word. Often codes are used to describe images, such as the numeric coding scheme used by the Patent and Trademark Office to describe trademark images. An example of such a code is the text string 01.01.03, which is known to be associated with a five-pointed star.
In an exemplary combined search, a text query is first used to select from the database those images whose associated text matches the query. The query image is then matched against the selected images using the visual similarity techniques described above.
The use of text-based search with visual similarity matching has several beneficial attributes. The text-based search ensures that the target images that are returned are in some way related to the query image. Unfortunately, the resulting set of images may be extremely long and unsorted. Without visual similarity matching, a user would have to look at each image to find the best match. Visual similarity matching sorts the resulting set of images in terms of visual closeness to the query image. Thus, the most visually similar target images are given priority in the list over less similar images. This last step brings the most salient images to the attention of the user first. The user may use the similarity scores to determine when to stop looking at the list of images. If the scores get too high or there is a large jump in scores, this information may signal to the user that images after that point may not have much visual similarity with the query image.
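A minimal sketch of this two-stage search, assuming each database record is an (image, set-of-text-codes) pair in the style of the design codes mentioned above:

```python
def text_then_visual_search(query_image, query_codes, database, similarity):
    """Filter the database by shared text codes, then sort the surviving
    images by visual similarity so the most similar appear first."""
    subset = [img for img, codes in database
              if codes & set(query_codes)]            # text-based filter
    scored = [(similarity(query_image, img), img) for img in subset]
    scored.sort(key=lambda pair: pair[0])             # most similar first
    return scored
```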
Referring now to the next figure, an image of a circuit component 330 having a lead 332 coupled to a circuit line 334 by a solder joint 336 is shown.
The particular manner in which the circuit component 330 is mounted, the circuit line 334 is fabricated, and the lead 332 is coupled to the circuit line, as well as the soldering technique used to couple the lead to the circuit line, may have an impact on the specific processing which takes place during an inspection process. During the inspection of the circuit component, different features or characteristics may become more or less important depending upon the manner and/or technique used to mount the circuit component, couple the circuit component to the circuit line, and fabricate the circuit line.
In this particular example, inspection of the gull wing lead 332 and solder joint 336 is described. It is recognized that the relative brightness between predetermined regions at and proximate to the solder joint 336 can be used to verify the joint as an acceptable or an unacceptable solder joint. Specifically, a top surface 330a of circuit component 330 reflects a certain amount of light. The angled portion 332a of lead 332 reflects less light and thus appears darker than region 330a. Similarly, flat lead portion 332b appears brighter than lead portion 332a, and a first portion 336a of solder joint 336 appears darker than lead portion 332b. Solder portion 336b appears brighter than region 336a, and solder region 336c appears darker than region 336b due to the angle at which the solder contacts the circuit line. Circuit line portion 334a corresponds to a flat portion of the circuit line 334 and thus reflects light at a different angle than solder region 336c and appears brighter than solder region 336c. Thus, when inspecting the solder joint 336, the relative luminance of each of the regions can be used in the inspection process, and it is these relative luminance characteristics which can be specified in a plug-in module to expedite the image matching process.
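The relative-luminance rule described above could be encoded in a plug-in module along the following illustrative lines; the region names follow the example, while the extraction of per-region mean brightness and the margin value are assumptions.

```python
EXPECTED_ORDER = [            # (brighter region, darker region) pairs
    ("330a", "332a"),         # component surface brighter than angled lead
    ("332b", "332a"),         # flat lead brighter than angled lead
    ("332b", "336a"),         # flat lead brighter than first solder portion
    ("336b", "336a"),
    ("336b", "336c"),
    ("334a", "336c"),         # flat circuit line brighter than solder edge
]

def joint_acceptable(mean_luminance, margin=5.0):
    """mean_luminance maps region name to average brightness (0-255);
    the joint passes if every expected brightness relation holds."""
    return all(mean_luminance[brighter] > mean_luminance[darker] + margin
               for brighter, darker in EXPECTED_ORDER)
```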
Referring now to the next figure, a flow diagram of the processing performed to inspect leads and solder joints using the image matching techniques described above is shown. Processing begins by capturing an image of each lead and solder joint to be inspected.
Processing then proceeds to step 346 where each lead and solder joint is treated as a primary image. The target image may be another stored image (or images) of a good lead and solder joint or a synthetic or composite image of the lead and joint. The target lead/joint is processed in accordance with the steps explained above and a similarity score is computed. In step 348, the score is compared with a threshold to determine whether the lead and joint pass inspection.
It should be noted that the threshold may have to be adjusted for different types of leads. Leads may vary in many aspects such as shape, size, and pitch. The threshold value may also change depending on the goals of the manufacturer. The manufacturer, when setting up a production line, may want to find all the bad leads at the expense of some false positives. In this case, the threshold in step 348 should be biased towards the low side. The manufacturer can analyze where and why the bad leads occur and change the production process to compensate for these errors. (This is explained more fully below.)
It should be noted that in step 346, the target images could be pictures (real or composite) of bad leads. It would be ideal if this set of target images spanned the whole class of lead and joint defects. It would also be desirable if each of the target images had information regarding the defect it portrays. In this case, a new lead image, such as image portions 332-334, may be matched against the defect examples; a strong match to a particular example both detects a defect and identifies its type.
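Matching against a library of labeled bad examples might be sketched as follows, with the record format and threshold as assumptions and `similarity` again returning lower scores for better matches:

```python
def classify_defect(lead_image, defect_examples, similarity, threshold):
    """defect_examples is a list of (image, defect_label) pairs. Return the
    label of the best-matching defect example, or None if no example
    matches strongly enough for the lead to be failed."""
    best_score, best_label = min(
        (similarity(lead_image, img), label) for img, label in defect_examples)
    return best_label if best_score <= threshold else None
```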
It is very common for one component to have multiple leads on multiple sides. It should be noted that images of one (or more) of the leads from the component may act as the primary image(s). Images of the rest of the leads from that component may act as the target images. For instance, in step 346, image 337 can be treated as the primary image as discussed above.
The target image does not have to contain both the lead and the joint. Just the solder joint, as shown in image 339, may be used.
Referring now to the next figure, a flow diagram of a printed circuit board assembly process which includes a plurality of inspection stations is shown. Processing begins in block 350 where a printed circuit board is provided to a solder paste application station. In block 352, solder paste is applied to the printed circuit board and, in block 354, a post-paste inspection station inspects the applied solder paste using the image matching techniques described above.
Processing then flows to block 356 in which the printed circuit board with the solder paste properly applied thereon is provided to a component placement station. The component placement station can include a so-called pick-and-place machine or, alternatively, the placement station may involve manual placement of circuit components on the printed circuit board. The decision to use automated or manual component placement techniques is made in accordance with a variety of factors including, but not limited to, the complexity of the circuit component, the sensitivity of the circuit component to manual or machine handling, technical limitations of automated systems to handle circuit components of particular sizes and shapes, and the cost effectiveness of using automated versus manual systems.
Once the circuit component is placed on the printed circuit board, processing moves to block 358 in which a placement inspection station performs an inspection of the placed circuit component. The placement inspection system includes an image capturing device to capture an image of each of the circuit components of interest. The captured image of the circuit component is compared to a target image of a properly placed circuit component using the techniques described above.
Once a determination is made in block 358 that the circuit component is properly placed and no other defects are detected, processing flows to block 360 in which a solder reflow station reflows the solder, thus coupling the circuit component to the printed circuit board. The solder reflow station may be provided as an automated station or as a manual station. After solder reflow in block 360, processing flows to block 362 where a placement and solder joint inspection station inspects each circuit component and solder joint of interest. If no defects are detected, then processing flows to block 364 where a populated printed circuit board is provided.
It should be noted that in the above description, the target images are all good examples of paste application, component placement, or reflowed solder joints. It is also possible to have a mixed population of both good and bad examples in the target images. The results of matching the input image to the good and bad examples will tell the system whether the image looks more similar to the good examples than to the bad examples (or vice versa). This discrimination can be very important in the decision of whether to pass or fail the image. Similarity of the primary image to a specific bad example can provide information regarding the type of defect. This information is extremely desirable for purposes of tuning the production process.
Good and bad examples can be collected over time as multiple boards are sent through stages 350 to 362.
Each inspection stage (stages 354, 358 and 362) can collect data regarding the number and type of defects found. This data can be analyzed and fed back into the system to tune the process. For instance, the post-paste inspection stage 354 may consistently report insufficient paste over the whole board. This may mean that not enough paste is being applied in step 352. Alternatively, the inspection system may consistently report insufficient paste in one area of the board, suggesting a clogged opening in the paste stencil. A similar analysis may be done for post-placement and post-reflow defects. Feedback regarding errors may be sent from any inspection station to any part of the process prior to that inspection stage. For instance, consistent detection of insufficient solder in step 362 (post-reflow) may mean that not enough paste is being applied in step 352 (pre-reflow).
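As an illustration of the feedback analysis (the report format and counting thresholds are assumptions), defect reports from the inspection stages can be aggregated to flag board-wide and location-specific patterns:

```python
from collections import Counter

def analyze_defects(reports, min_count=10):
    """reports is a list of (stage, defect_type, board_location) tuples."""
    by_stage_type = Counter((s, d) for s, d, _ in reports)
    by_location = Counter(reports)
    findings = []
    for (stage, defect), n in by_stage_type.items():
        if n >= min_count:            # e.g., insufficient paste at stage 354
            findings.append(f"stage {stage}: recurring '{defect}' ({n} reports)")
    for (stage, defect, loc), n in by_location.items():
        if n >= min_count:            # e.g., a clogged stencil opening
            findings.append(f"stage {stage}: '{defect}' concentrated at {loc}")
    return findings
```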
Having described the preferred embodiments of the invention, it will now become apparent to one of ordinary skill in the art that other embodiments incorporating their concepts may be used. It is felt, therefore, that the invention should not be limited to the disclosed embodiments but rather should be limited only by the spirit and scope of the appended claims. All publications and references cited herein are expressly incorporated herein by reference in their entirety.
Claims
1. A non-transitory storage medium with computer-readable instructions for processing circuit board images to compare a first image with a plurality of images to find a second image, the instructions comprising code for:
- reading the first image, wherein the first image is of a circuit board;
- processing the first image to create first information at least related to the first image;
- reading the plurality of images;
- processing each of the plurality of images to respectively create information at least related to each of the plurality of images;
- comparing a code related with the first image with a plurality of codes related with the plurality of images to find a subset of the plurality of images, wherein: the code related with the first image indicates a category of the first image; and the plurality of codes related with the plurality of images indicates one or more categories of the plurality of images;
- comparing the first information with the information at least related to each image of the subset; and
- determining if the subset compares favorably with the first image based, at least in part, on an outcome from the comparing the first information with the information for the subset.
2. The non-transitory storage medium with the computer-readable instructions for processing the circuit board images to compare the first image with the plurality of images to find the second image as recited in claim 1, wherein the instructions further comprise code for displaying the plurality of images, wherein an order of the plurality of images corresponds with a likelihood of a match to the first image.
3. The non-transitory storage medium with the computer-readable instructions for processing the circuit board images to compare the first image with the plurality of images to find the second image as recited in claim 1, wherein the first information is gathered from the first image and a negative example that does not match the first image.
4. The non-transitory storage medium with the computer-readable instructions for processing the circuit board images to compare the first image with the plurality of images to find the second image as recited in claim 1, wherein the first information is gathered from two or more images.
5. A method for processing images to compare a first image with a plurality of images to find a second image, the method comprising:
- reading the first image;
- processing the first image to create first information at least related to the first image;
- reading a first code associated with the first image, wherein the first code categorizes the first image;
- reading the plurality of images;
- processing each of the plurality of images to respectively create information at least related to each of the plurality of images;
- reading a plurality of codes associated with the plurality of images, wherein the codes categorize the plurality of images;
- comparing the first code with the plurality of codes to find a subset of the plurality of images;
- comparing the first information with the information at least related to each image of the subset; and
- determining if the subset compares favorably with the first image based, at least in part, on an outcome from the comparing the first information with the information for the subset.
6. The method for processing images as recited in claim 5, wherein the first code is determined by a human.
7. The method for processing images as recited in claim 5, wherein the first information is gathered from two or more images.
8. The method for processing images as recited in claim 5, wherein the first image is of a circuit board.
9. The method for processing images as recited in claim 5, further comprising displaying the plurality of images, wherein an order of the plurality of images corresponds with a likelihood of a match to the first image.
10. The method for processing images as recited in claim 5, wherein the processing the first image comprises logically combining the first image and a second image to create the first information.
11. The method for processing images as recited in claim 5, wherein the first information is gathered from the first image and a negative example that does not match the first image.
12. A system for processing images to compare a first image with a plurality of images to find a second image, the system comprising:
- an input system;
- an image database; and
- an image processing system coupled with the input system and the image database, the image processing system configured to: receive the first image from the input system; process the first image to create first information at least related to the first image; determine a first code associated with the first image, wherein the first code categorizes the first image; receive the plurality of images from the image database; process each of the plurality of images to respectively create information at least related to each of the plurality of images; determine a plurality of codes associated with the plurality of images, wherein the plurality of codes categorize the plurality of images; compare the first code with the plurality of codes to find a subset of the plurality of images; compare the first information with the information at least related to each image of the subset; and determine if the subset compares favorably with the first image based, at least in part, on an outcome from the comparing the first information with the information for the subset.
13. The system for processing images to compare the first image with the plurality of images to find the second image as recited in claim 12, wherein the first image is of a circuit board.
14. The system for processing images to compare the first image with the plurality of images to find the second image as recited in claim 12, further comprising at least one plug-in module configured to provide application information used by the image processing system for at least one application.
15. The system for processing images to compare the first image with the plurality of images to find the second image as recited in claim 14, wherein the at least one plug-in module provides application information used by the image processing system to process images having at least one item from the group of items consisting of:
- a printed circuit board,
- a face,
- a scene,
- a fabric, and
- a trademark.
16. The system for processing images to compare the first image with the plurality of images to find the second image as recited in claim 12, wherein the image processing system is configured to determine the first code based, at least in part, on input from a human.
17. The system for processing images to compare the first image with the plurality of images to find the second image as recited in claim 12, wherein the image processing system is configured to create the first information by processing two or more images.
18. The system for processing images to compare the first image with the plurality of images to find the second image as recited in claim 12, further comprising an output system, wherein:
- the image processing system is configured to display the plurality of images on the output system; and
- an order of the plurality of images corresponds with a likelihood of a match to the first image.
19. The system for processing images to compare the first image with the plurality of images to find the second image as recited in claim 12, wherein the image processing system is configured to process the first image by at least logically combining the first image and a second image to create the first information.
20. The system for processing images to compare the first image with the plurality of images to find the second image as recited in claim 12, wherein the first information is gathered from the first image and a negative example that does not match the first image.
Type: Application
Filed: Jun 9, 2011
Publication Date: Nov 17, 2011
Applicant: Massachusetts Institute of Technology (Cambridge, MA)
Inventors: Pamela R. Lipson (Cambridge, MA), Pawan Sinha (Cambridge, MA)
Application Number: 13/156,673
International Classification: G06F 17/30 (20060101);