System and method for reduction of speckle noise in an image
The present invention includes methods for the reduction of speckle noise in an image and methods for segmenting an image. Each of the methods disclosed herein includes steps for analyzing the uniformity of a pixel within a plurality of pixels forming a portion of the image and, based on the uniformity of the intensity of the plurality of pixels, adjusting and/or replacing the pixel in order to produce a speckle-noise reduced image, a segmented image, or a segmented and speckle-noise reduced image. The methods of the present invention can employ for example conditional probability density functions, nonlinear estimator functions, convex energy functions and simulated annealing algorithms in the performance of their respective steps.
This application is a divisional of and claims the benefit of priority under 35 U.S.C. §120 to U.S. patent application Ser. No. 11/831,353, filed on Jul. 31, 2007, which issued on Jun. 14, 2011, as U.S. Pat. No. 7,961,975, entitled “SYSTEM AND METHOD FOR REDUCTION OF SPECKLE NOISE IN AN IMAGE,” which claims the benefit of priority under 35 U.S.C. §119(e) to U.S. Provisional Patent Application Ser. No. 60/834,508, filed on Jul. 31, 2006, entitled “SPECKLE NOISE REDUCTION BASED ON MARKOV RANDOM FIELD MODEL,” the benefit of priority of each of which is claimed hereby, and each of which are incorporated by reference herein in its entirety.
BACKGROUND
1. Field of the Present Invention
The present invention relates generally to the field of image processing, and more specifically to the field of de-noising and segmenting coherent images.
2. History of the Related Art
Coherent imaging has a number of practical uses, for example in synthetic aperture radar (SAR) and ultrasonic imaging. For example, SAR has a number of advantages over other passive imaging systems because, as the SAR system emits its own radiation, it is not dependent upon any external source of radiation. Moreover, due to the long wavelengths, most SAR systems are capable of imaging the Earth's surface independent of inclement or adverse weather.
Unfortunately, the efficiency of aerial data collection and visualization with SAR systems is often impeded by their high susceptibility to speckle noise. A SAR system measures both the amplitude and the phase of the signals echoed from the Earth's surface. Due to the microscopic roughness of the reflecting objects on the surface, the amplitudes of the echoed signals reflected from the locality of each targeted spot have random phases. The amplitudes of these signals interfere coherently at the antenna, which ultimately gives rise to the signal-dependent and grainy speckle noise formed in the SAR imagery. Similarly, speckle noise in ultrasonic imaging is caused by the interference of energy from randomly distributed scatters, too small to be resolved by the imaging system. Speckle noise degrades both the spatial and contrast resolution in ultrasonic imaging and thereby reduces the diagnostic value of the images.
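The coherent-interference origin of speckle can be illustrated with a short simulation. The sketch below is generic (not taken from the patent; the function name and parameters are illustrative): each resolution cell sums many random phasors, and the detected intensity exhibits the characteristic unit-contrast, grainy statistics of fully developed speckle.

```python
import numpy as np

rng = np.random.default_rng(0)

def speckle_intensity(mean_intensity, n_scatterers=50, shape=(64, 64)):
    # Each resolution cell coherently sums echoes from many sub-resolution
    # scatterers whose phases are uniformly random on [0, 2*pi).
    phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_scatterers,) + shape)
    amplitude = np.sqrt(mean_intensity / n_scatterers)
    field = np.sum(amplitude * np.exp(1j * phases), axis=0)
    return np.abs(field) ** 2  # the detected intensity is |field|^2

noisy = speckle_intensity(mean_intensity=100.0)
# Fully developed speckle has unit contrast: std(I) is close to mean(I).
print(noisy.mean(), noisy.std())
```

The near-unity ratio of standard deviation to mean is why speckle so severely degrades contrast resolution: the "noise" is as large as the signal itself.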
There have been a number of speckle noise reduction techniques developed in the image processing field. Some example techniques include the Lee filter and its derivatives, the geometric filter, the Kuan filter, the Frost filter and its derivatives, the Gamma MAP filter, the wavelet approach and some other Markov-based techniques. Unfortunately, each of these approaches assumes that speckle noise is multiplicative relative to the image intensity. While this assumption can be useful in simplifying the complex nature of speckle noise, it does not allow any of the foregoing techniques to substantially eradicate speckle noise from an image.
Similarly, image segmentation is often used in the automated analysis and interpretation of SAR data. Various segmentation approaches have been attempted in the past, such as for example edge detection, region growing technique and thresholding technique. As in the case of speckle noise, each of these techniques is fundamentally flawed in that they either require affirmative user input to segment the image and/or they are adversely affected by the speckle noise otherwise inherent in SAR images. As such, there is a need in the art of image processing for one or more methods, systems and/or devices for reducing speckle noise in an image as well as segmenting the same image for ease of analysis and interpretation of both SAR and ultrasound data.
SUMMARY OF THE PRESENT INVENTION
Accordingly, the present invention includes methods for the reduction of speckle noise within an image and for the segmentation of an image. The speckle noise reduction method includes the steps of receiving an image comprising a plurality of pixels and establishing a coherence factor, a noise threshold factor, a pixel threshold factor, and a neighborhood system for pixels. The speckle noise reduction method can also include the steps of performing a uniformity test on a subset of pixels comprising a portion of the plurality of pixels, performing a noise detection test on the subset of pixels, and performing an intensity update on a pixel within the subset of pixels in response to the pixel being substantially non-uniform with respect to its neighborhood. The speckle noise reduction method can further include the step of repeating some or all of the foregoing steps for substantially all of the plurality of pixels in order to produce a speckle-noise reduced image.
The present invention further includes a method of segmenting an image. The segmentation method includes the steps of receiving an image comprising a plurality of pixels and establishing a coherence parameter and a number of classes. For each of the plurality of pixels, the segmentation method includes steps for comparing an intensity of each pixel to an intensity of one or more neighboring pixels, classifying each pixel into a class in response to a maximum value of a conditional probability function in response to the intensity of each pixel, and providing a segmented image in response to the classification of each of the plurality of pixels.
The methods of the present invention are based on the physical statistical properties of one or more pixels in an image. The methods of the present invention are practicable in a number of environments, including for example image processing systems for SAR systems, ultrasound systems, and other coherent imaging systems. Each of the methods is practicable in real time or near real time, making them an efficient use of both time and computing power. Further details and advantages of the present invention are described in detail below with reference to the following Figures.
The following description of the preferred embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention as set forth in the appended claims.
The present invention includes a method of removing speckle noise from an image as well as a method of segmenting an image. Each of the methods of the present invention can be performed by automated systems, including for example image processing systems and the like. The methods of the present invention can be embodied in hardware, software, firmware or any other suitable means for causing an appropriate system to perform the requisite steps and provide a speckle noiseless and/or segmented image. The methods of the present invention are particularly well-suited for SAR and ultrasonic imaging applications or any other suitable imaging system. In particular, the methods of the present invention can be performed by SAR and/or ultrasound systems for improving the image quality of the respective systems. The methods of the present invention are described below with reference to the Figures. However, prior to providing a detailed description of the preferred embodiments, it is useful to provide the following definitions and mathematical framework for the methodology of the present invention.
As used herein, the term pixel is defined as the smallest complete portion of an image. In accordance with the methodology described herein, any plurality of pixels can be organized and analyzed using a Markov Random Field (MRF) algorithm. As shown in
From the graph G shown in
One or more pairwise cliques can be organized into a neighborhood system of pixels, an example of which is shown in
One suitable means for calculating the intensity Ikj at a point kj given the intensity Iki at a point ki is a conditional probability density function (CPDF). In order to reduce the speckle noise as a function of intensity, the present invention can use a spatially inhomogeneous variable θkj, representing the true intensity of the image at point kj. The true intensity of the image at point kj corresponds to the statistical optical properties of the image, which necessarily provides improvements in the present invention over the noted prior systems. Denoting a realization of the random variable Ikj at point kj by ikj yields a CPDF of the following form:
$$p_{I_{k_j}\mid I_{k_i}}(i_{k_j}\mid i_{k_i}) = \frac{\exp\!\left[-\dfrac{|\mu(r_{k_ik_j})|^2\, i_{k_i} + i_{k_j}}{\theta_{k_j}\left(1-|\mu(r_{k_ik_j})|^2\right)}\right]}{\theta_{k_j}\left(1-|\mu(r_{k_ik_j})|^2\right)}\; I_0\!\left(\frac{2\sqrt{i_{k_i}\, i_{k_j}}\;|\mu(r_{k_ik_j})|}{\theta_{k_j}\left(1-|\mu(r_{k_ik_j})|^2\right)}\right), \quad (1)$$
where θkj is defined as the true spatial intensity (based on the physical properties of the pixels) at a point kj, |μ(rkikj)| is defined as a coherence factor, rkikj is defined as the Euclidean distance between the points ki and kj, and I0 is the modified Bessel function of the first kind and zero order.
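Equation (1) can be evaluated directly with NumPy, whose `np.i0` supplies the required modified Bessel function. The sketch below is illustrative (argument names are chosen here, not taken from the patent):

```python
import numpy as np

def cpdf(i_kj, i_ki, theta, mu):
    # Equation (1): conditional density of intensity i_kj given neighbor
    # intensity i_ki; theta is the true intensity theta_kj and mu is the
    # coherence factor |mu(r_{k_i k_j})|.
    b = theta * (1.0 - mu ** 2)                 # theta_kj * (1 - |mu|^2)
    a = mu ** 2 * i_ki + i_kj                   # |mu|^2 * i_ki + i_kj
    c = 2.0 * np.sqrt(i_ki * i_kj) * mu         # 2 * sqrt(i_ki * i_kj) * |mu|
    return np.exp(-a / b) / b * np.i0(c / b)    # np.i0 is the Bessel I_0

# With mu = 0 the density reduces to the exponential form exp(-i/theta)/theta.
print(cpdf(1.0, 2.0, theta=1.0, mu=0.0))  # equals exp(-1) ~ 0.3679
```

The μ = 0 limit matches the uncorrelated exponential density discussed below, which is a convenient sanity check on any implementation.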
In one alternative to the method of the preferred embodiment, the methodology assumes that the coherence factor has the following form:
If rkikj is greater than one, then the CPDF in equation (1) becomes independent of iki and the density of the speckle intensity becomes an exponential function of the form p_{Ikj}(ikj) = exp(−ikj/θkj)/θkj.
In other alternative embodiments, the methodology of the present invention can implement a larger correlation, i.e. greater than one pixel, for certain types of images. For example, the methodology can be configured to preprocess the data or apply a spatial-interpolation or down-sampling scheme for images having a larger correlation. In such a manner, even images having a larger correlation can be processed according to the methodology described above.
Referring back to
$$p_{I_k\mid I_{k_1}\ldots I_{k_4}}(i_k\mid i_{k_1}\ldots i_{k_4}) = \frac{p_{I_k\mid I_{k_1}}(i_k\mid i_{k_1})\, p_{I_k\mid I_{k_2}}(i_k\mid i_{k_2})\, p_{I_k\mid I_{k_3}}(i_k\mid i_{k_3})\, p_{I_k\mid I_{k_4}}(i_k\mid i_{k_4})}{\left[p_{I_k}(i_k)\right]^3}. \quad (3)$$
As each term in equation (3) is known from equation (1), the CPDF of the center pixel can take the form:
$$p_{I_k\mid I_{k_1}\ldots I_{k_4}}(i_k\mid i_{k_1}\ldots i_{k_4}) = \exp\!\left\{\sum_{j=1}^{4}\left[-\ln B(i_k,i_{k_j}) - \frac{A(i_k,i_{k_j})}{B(i_k,i_{k_j})} + \ln I_0\!\left(\frac{C(i_k,i_{k_j})}{B(i_k,i_{k_j})}\right)\right] - 3\ln p_{I_k}(i_k)\right\}, \quad (4)$$

where A(ik, ikj) equals |μ(rkkj)|² ikj + ik, B(ik, ikj) equals (1 − |μ(rkkj)|²) θk, C(ik, ikj) equals 2(ik ikj)^{1/2} |μ(rkkj)|, the summation is from j=1 to 4, and I0 is the modified Bessel function of the first kind and zero order.
In another variation of the method of the preferred embodiment, the parameter θk, which represents the true pixel intensity at index “k,” can be approximated in equation (4) by the empirical average of the observed pixel values within a predetermined window, or matrix, of pixels. For example,
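This windowed approximation of θk can be sketched in pure NumPy as follows (the window size and reflective border handling are illustrative choices of this sketch, not mandated by the patent):

```python
import numpy as np

def local_mean(image, window=3):
    # theta_k is approximated by the empirical average of the observed
    # pixel intensities inside a window x window neighborhood of k.
    pad = window // 2
    padded = np.pad(image.astype(float), pad, mode='reflect')
    out = np.zeros(image.shape, dtype=float)
    for dy in range(window):
        for dx in range(window):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / (window * window)

img = np.arange(16.0).reshape(4, 4)
theta = local_mean(img)
print(theta[1, 1])  # mean of the 3x3 block around pixel (1, 1) -> 5.0
```

Because the estimate is a plain running average, it can be computed for every pixel in a single pass over the image, which supports the real-time operation noted earlier.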
As noted above, the methodology of the present invention can employ a MRF distribution function to update the intensity of one or more pixels in the image. Equation (4) can be rewritten as the following:
$$p_{I_k\mid I_{k_1}\ldots I_{k_4}}(i_k\mid i_{k_1}\ldots i_{k_4}) = \exp\!\left[-U(i_k, i_{k_1}\ldots i_{k_4})\right], \text{ where}$$
$$U(i_k, i_{k_1}\ldots i_{k_4}) = V_{C_1}(i_k) + V_{C_2}(i_k, i_{k_1}\ldots i_{k_4}),$$
$$V_{C_1}(i_k) = 3\ln p_{I_k}(i_k), \text{ and}$$
$$V_{C_2}(i_k, i_{k_1}\ldots i_{k_4}) = \sum_{j=1}^{4}\left[\frac{A(i_k,i_{k_j})}{B(i_k,i_{k_j})} + \ln B(i_k,i_{k_j}) - \ln I_0\!\left(\frac{C(i_k,i_{k_j})}{B(i_k,i_{k_j})}\right)\right]. \quad (5)$$
As in equation (4), the summation is from j=1 to 4, and I0 is the modified Bessel function of the first kind and zero order.
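The energy function can be sketched as follows, assuming the exponential marginal density noted earlier for p_{Ik}; the function and argument names are illustrative, and the sign of the ln B term follows from the 1/B normalization factor in equation (1):

```python
import numpy as np

def energy(i_k, neighbors, theta, mu, log_marginal):
    # U(i_k, i_k1...i_k4) built from equations (1) and (3):
    # U = 3*ln p_Ik(i_k) + sum_j [ A/B + ln B - ln I0(C/B) ]
    # (the +ln B term comes from the 1/B factor in equation (1)).
    b = theta * (1.0 - mu ** 2)
    u = 3.0 * log_marginal(i_k)
    for i_kj in neighbors:  # the four pairwise cliques
        a = mu ** 2 * i_kj + i_k
        c = 2.0 * np.sqrt(i_k * i_kj) * mu
        u += a / b + np.log(b) - np.log(np.i0(c / b))
    return u

# Exponential marginal, as in the uncorrelated limit discussed above.
log_pm = lambda i, theta=1.0: -i / theta - np.log(theta)
u = energy(2.0, [1.0, 2.0, 3.0, 4.0], theta=1.0, mu=0.0, log_marginal=log_pm)
# With mu = 0 the neighbors decouple, so exp(-U) collapses to the marginal.
print(np.exp(-u))  # ~ exp(-2) ~ 0.1353
```

The μ = 0 check is a useful regression test: when the coherence factor vanishes, equation (3) reduces to the marginal density, so exp(−U) must do the same.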
Given equation (5), it is straightforward to identify the energy function as U(ik,ik1 . . . ik4). Referring back to
The speckle-noise reduction method of the preferred embodiment includes the steps of receiving an image comprising a plurality of pixels and establishing a coherence factor, a noise threshold factor, a pixel threshold factor, and a neighborhood system for pixels. As used herein, the neighborhood system for pixels comprises a first pixel and one or more neighboring pixels as illustrated below. The speckle noise reduction method of the preferred embodiment also includes the steps of performing a uniformity test on a subset of pixels comprising a portion of the plurality of pixels, performing a noise-detection test on the subset of pixels, and performing an intensity update on a pixel within the subset of pixels in response to the pixel being substantially non-uniform with respect to its neighborhood. The speckle noise reduction method of the preferred embodiment can further include the step of repeating some or all of the foregoing steps for substantially all of the plurality of pixels in order to produce a speckle-noise reduced image.
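The patent leaves the precise statistics of the uniformity and noise-detection tests to the figures, but one plausible sketch using the pixel threshold γ (gamma) and noise threshold δ (delta) is the following; the test form and both parameter roles are assumptions of this sketch:

```python
import numpy as np

def is_noisy(window, gamma, delta):
    # Hypothetical form of the uniformity/noise-detection test: flag the
    # center pixel as noisy when the fraction of window pixels whose
    # intensity differs from the center by more than the pixel threshold
    # gamma exceeds the noise threshold delta.
    center = window[window.shape[0] // 2, window.shape[1] // 2]
    differing = np.abs(window - center) > gamma
    return float(differing.mean()) > delta

uniform = np.full((3, 3), 10.0)
outlier = uniform.copy()
outlier[1, 1] = 100.0
print(is_noisy(uniform, gamma=5.0, delta=0.5))   # False: homogeneous patch
print(is_noisy(outlier, gamma=5.0, delta=0.5))   # True: center is speckle-like
```

A homogeneous window passes the uniformity test and is left alone; a center pixel inconsistent with its neighborhood is forwarded to the intensity-update step.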
In
In step S104, the method of
Step S1042 of the illustrated method recites evaluating the intensity variation within the window relative to a plurality of parameters. As shown in
The intensity update of step S106 can also include a plurality of steps therein. One such step includes step S1060, in which a new pixel intensity ik_new is generated at random, wherein ik_new ∈ L\{ik} and L\{ik} is defined as the set of all or substantially all grey levels except ik. In step S1062, the temperature is updated to Tk = λ·Tk−1. In step S1064, the illustrated method recites computing an acceptance probability of the form p = min{1, exp(−ΔU/Tk)}, wherein ΔU = U(ik_new, ik1 . . . 4) − U(ik, ik1 . . . 4), and further wherein U is the function expressing the energy of the pixel as described above. The energy function gradually updates the intensity of the pixel as a function of the temperature, which gradually decreases as a function of λ. In step S1066, the illustrated method recites generating a uniformly distributed random variable R ∈ [0,1] for accepting or rejecting the pixel's updated intensity through a sampling scheme. In step S1068, the illustrated method queries whether R < p. If the answer is affirmative, then the updated intensity of the pixel is accepted in step S1072 as ik_new and the illustrated method proceeds to step S108. If the answer is negative, then the updated intensity of the pixel is rejected in step S1070 and the pixel intensity is maintained at the original ik, after which the illustrated method proceeds to step S108.
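The annealing update of steps S1060 through S1072 can be sketched generically as a Metropolis step with a cooling schedule. In the sketch below, a toy quadratic energy stands in for the energy function U, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def sa_update(i_k, neighbors, temperature, energy, grey_levels):
    # One annealing update (steps S1060-S1068): draw a candidate grey level
    # at random from L \ {i_k}, then accept it with probability
    # p = min(1, exp(-dU / T_k)), comparing against R ~ Uniform[0, 1).
    candidates = [g for g in grey_levels if g != i_k]
    i_new = candidates[rng.integers(len(candidates))]
    d_u = energy(i_new, neighbors) - energy(i_k, neighbors)
    if d_u <= 0.0:
        return i_new                  # downhill moves are always accepted
    return i_new if rng.uniform() < np.exp(-d_u / temperature) else i_k

# A toy quadratic energy stands in for U; annealing pulls the pixel toward
# its neighborhood mean as the temperature T_k = lam * T_{k-1} cools.
toy_energy = lambda i, nb: (i - float(np.mean(nb))) ** 2
i_k, temp, lam = 255, 100.0, 0.95
for _ in range(400):
    temp *= lam
    i_k = sa_update(i_k, [10, 12, 11, 13], temp, toy_energy, range(256))
print(i_k)  # settles near the neighborhood mean of 11.5
```

Early in the schedule the high temperature permits occasional uphill moves, which lets the update escape poor local configurations; as Tk shrinks, the update freezes into a low-energy (neighborhood-consistent) intensity.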
In step S108, the illustrated method queries whether the index k is greater than M×N, which is defined as the image size. If the answer is affirmative, then the illustrated method proceeds to step S110. If the answer is negative, then the illustrated method returns to step S104, at which time a new candidate pixel k+1 is selected for the foregoing processes. In step S110, the illustrated method queries whether the uniformity test, i.e. step S104 and its associated sub-steps, is true for almost every pixel. If the answer is negative, then the illustrated method returns to step S102 and selects another first pixel k of the image. If the answer is affirmative, then the illustrated method terminates at step S112, indicating that the speckle noise of the image has been substantially reduced and/or eliminated.
Upon completion of the illustrated method, the speckle reduced image can be provided to a user in any number of ways. For example, the image can be saved, displayed, transmitted, or otherwise made available to a user for further analysis, manipulation and/or modification.
In a variation of the method described above, the intensity update of step S106 can be performed using a different non-linear estimation approach instead of using the simulated annealing (SA) algorithm described above. Recalling from above the spatially inhomogeneous variable θk, representing the true intensity of the image at a pixel k of the image, the present invention provides a nonlinear estimator function for θk, defined by the conditional expectation:
Θk=E[Ik|I\{Ik}] (6)
where I\{Ik} is the set of all pixels in the image excluding Ik. Given the Markovian nature of I, equation (6) can be rewritten as:
Θk=E[Ik|Nk], (7)
where Nk = {Ik1, Ik2, Ik3, Ik4} constitutes the set of intensities of the four pixels adjacent to k, with the associated CPDF given above in equation (4).
Unlike the prior variation of the method of the preferred embodiment, the methodology including the foregoing non-linear estimator does not require a temperature parameter to be defined in the initialization phase. Otherwise, each of steps S100, S102 and S104 is identical to those described above with reference to
In order to perform the intensity update step using the non-linear estimator, this variation of the method recites computing p_{Ik|Ik1 . . . 4}(ik|ik1 . . . ik4) = exp[−U(ik, ik1 . . . ik4)] for ik = Wkj, wherein j ranges from zero to eight for a 3×3 window. Following this computation, this variation of the method of the preferred embodiment recites performing the intensity update ik ← Θk, wherein as noted above Θk = E[Ik|Ik1 . . . Ik4], which in turn can be written as Θk = Σ ik p_{Ik|Ik1 . . . 4}(ik|ik1 . . . ik4), summing from ik = Wk0 to Wk8 for a 3×3 window.
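A self-contained sketch of this estimator follows, using the exponential marginal for p_{Ik} and restricting the candidate grey levels to the values observed in the 3×3 window (the restricted variant described below); the function names and the test patch are illustrative:

```python
import numpy as np

def log_cpdf(i_k, neighbors, theta, mu):
    # ln of equation (4), with the exponential marginal for p_Ik.
    b = theta * (1.0 - mu ** 2)
    lp = -3.0 * (-i_k / theta - np.log(theta))   # -3 * ln p_Ik(i_k)
    for i_kj in neighbors:
        a = mu ** 2 * i_kj + i_k
        c = 2.0 * np.sqrt(i_k * i_kj) * mu
        lp += -a / b - np.log(b) + np.log(np.i0(c / b))
    return lp

def nonlinear_update(window, theta, mu):
    # Theta_k = E[I_k | N_k]: a probability-weighted average over candidate
    # grey levels, here restricted to the values observed in the 3x3 window.
    nb = [window[0, 1], window[1, 0], window[1, 2], window[2, 1]]
    cand = window.ravel().astype(float)
    w = np.exp([log_cpdf(c, nb, theta, mu) for c in cand])
    return float(np.sum(cand * w) / np.sum(w))

patch = np.array([[10., 11., 10.],
                  [12., 60., 11.],
                  [10., 12., 11.]])   # bright speckle spike at the center
# The estimate is pulled toward the neighborhood values, well below 60.
print(nonlinear_update(patch, theta=patch.mean(), mu=0.5))
```

Because the spike at 60 is improbable given its four neighbors, its CPDF weight is tiny and the conditional expectation lands near the surrounding intensities, which is precisely the de-speckling behavior the estimator is meant to provide.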
In this variation of the method of the preferred embodiment, the pixel is tested as before by computing the intensity variability within the window. As before, low variability in intensity within the window or along a direction (in the presence of lines) is indicative of relative intensity homogeneity, which in turn implies that the pixel is not sufficiently noisy as defined by the parameters δ and γ, described above. In this instance, the intensity of the pixel is not updated. However, in the case in which the variability of the intensity within the window is found to be sufficiently high, the pixel is replaced with a pixel having an intensity estimated according to the non-linear estimator function described above. In one alternative embodiment, the method can restrict the set of intensity values for ik to only those intensity values corresponding to pixels within the window, i.e. a total of eight intensity values for a 3×3 window. Alternatively, the set of intensity values for ik can be any set of potential values, such as for example the set {0, . . . , 255}.
After updating the pixel k using the non-linear estimator function described above, this variation of the method of the preferred embodiment proceeds to steps S108, S110 and S112, described above with reference to
The present invention further includes a method of segmenting an image. The segmentation method of the preferred embodiment includes the steps of receiving an image comprising a plurality of pixels and establishing a coherence parameter and a number of classes. For each of the plurality of pixels, the segmentation method of the preferred embodiment includes comparing an intensity of each pixel to an intensity of one or more neighboring pixels, classifying each pixel into a class in response to a maximum value of a conditional probability function in response to the intensity of each pixel, and providing a segmented image in response to the classification of each of the plurality of pixels. The segmentation method of the preferred embodiment is practicable in a number of environments, including for example image processing systems for SAR systems, ultrasound systems, and other coherent imaging systems.
One implementation of the segmentation method of the preferred embodiment is shown in the flowchart of
In step S2042, the class (CL) corresponding to the CPDF maximizing grey level is assigned to the pixel k, a process which is repeatable for every pixel in the image. As shown in
Step S208 terminates the method and produces the segmented image for the user. Upon completion of the illustrated method, the segmented image can be provided to a user in any number of ways. For example, the image can be saved, displayed, transmitted, or otherwise made available to a user for further analysis, manipulation and/or modification.
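The per-pixel classification of step S2042 can be sketched as follows. This is a hedged sketch: it assumes each class is represented by a single grey level that serves both as the candidate intensity ik and as the class intensity θ, a parameterization the patent leaves to the figures.

```python
import numpy as np

def classify_pixel(neighbors, class_levels, mu):
    # Score each class by the log of the conditional probability (4), with
    # the class's representative grey level used both as the candidate
    # intensity i_k and as the class intensity theta; pick the maximizer.
    def log_cpdf(i_k, theta):
        b = theta * (1.0 - mu ** 2)
        lp = 3.0 * (i_k / theta + np.log(theta))   # -3 * ln p_Ik(i_k)
        for i_kj in neighbors:
            a = mu ** 2 * i_kj + i_k
            c = 2.0 * np.sqrt(i_k * i_kj) * mu
            lp += -a / b - np.log(b) + np.log(np.i0(c / b))
        return lp
    return int(np.argmax([log_cpdf(v, v) for v in class_levels]))

levels = [10.0, 100.0, 200.0]   # representative intensity per class
print(classify_pixel([95.0, 105.0, 100.0, 98.0], levels, mu=0.5))  # -> 1
print(classify_pixel([9.0, 11.0, 10.0, 10.0], levels, mu=0.5))     # -> 0
```

Sweeping this classifier over every pixel yields a label map, i.e. the segmented image, and because each decision depends only on the four-neighbor clique, the pass is again a single, parallelizable sweep over the image.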
Those of skill in the art of image processing will appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The methods described herein can be readily introduced into a number of formats to cause one or more computers, systems, and/or image processors to perform the steps described above. For example, the various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration that is attachable, directly or indirectly to a system or device for receiving raw image data as an input.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.
The present invention has been described with reference to its preferred embodiments so as to enable any person skilled in the art to make or use the present invention. However, various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention as set forth in the following claims.
Claims
1. A method of segmenting an image comprising:
- (a) receiving an image comprising a plurality of pixels;
- (b) establishing a coherence parameter and a number of classes;
- (c) for each of the plurality of pixels, comparing an intensity of each pixel to an intensity of one or more neighboring pixels;
- (d) classifying each pixel into a class in response to a maximum value of a conditional probability function in response to the intensity of each pixel; and
- (e) providing a segmented image in response to the classification of each of the plurality of pixels.
2. The method of claim 1 wherein step (c) further comprises minimizing a cost function.
3. The method of claim 2 wherein the cost function comprises a convex Gibbs energy function.
4. The method of claim 2, wherein minimizing the cost function includes maximizing the conditional probability function for each pixel in the image.
5. The method of claim 1 wherein step (c) further comprises, for each pixel, computing the intensity of each pixel over all possible grey-level values.
6. The method of claim 1 wherein the conditional probability function is derived from one of a Gibbs distribution or a Markov random field distribution.
7. The method of claim 1 further comprising the step:
- (f) prior to performing step (a), removing speckle noise from the image.
8. The method of claim 1, wherein providing the segmented image includes providing the segmented image in a coherent imaging system.
9. The method of claim 8, wherein providing the segmented image in the coherent imaging system includes providing the segmented image in a synthetic aperture radar system or an ultrasonic imaging system.
10. A computer program product comprising:
- a non-transitory computer readable medium comprising:
  - code to cause at least an image processor to receive an image comprising a plurality of pixels;
  - code to cause at least an image processor to establish a coherence parameter and a number of classes;
  - code to cause at least an image processor to compare an intensity of each pixel to the intensity of one or more of a neighboring pixel, for each of the plurality of pixels;
  - code to cause at least an image processor to classify each pixel into a class in response to a maximum value of a conditional probability function in response to the intensity of each pixel; and
  - code to cause at least an image processor to provide a segmented image in response to the classification of each of the plurality of pixels.
11. The product of claim 10 further comprising code to cause at least an image processor to minimize a cost function in order to compare the intensity of each pixel to the intensity of one or more of a neighboring pixel.
12. The product of claim 11, wherein the code to cause at least an image processor to minimize a cost function includes code to maximize the conditional probability function for each pixel in the image.
13. The product of claim 11 wherein the cost function comprises a convex Gibbs energy function.
14. The product of claim 10 further comprising code for causing at least an image processor to compute, for each pixel, the intensity of each pixel over all possible grey-level values.
15. The product of claim 10 further comprising code to cause at least an image processor to remove speckle noise from the image prior to receiving the image.
16. The product of claim 10, wherein the code to cause at least the image processor to provide the segmented image includes code to provide the segmented image in a coherent imaging system.
17. The product of claim 16, wherein the code to provide the segmented image in the coherent imaging system includes code to provide the segmented image in a synthetic aperture radar system or an ultrasonic imaging system.
4887306 | December 12, 1989 | Hwang et al. |
5022091 | June 1991 | Carlson |
5452367 | September 19, 1995 | Bick et al. |
5754618 | May 19, 1998 | Okamoto et al. |
5910115 | June 8, 1999 | Rigby |
5987094 | November 16, 1999 | Clarke et al. |
6071240 | June 6, 2000 | Hall et al. |
6155978 | December 5, 2000 | Cline et al. |
6322509 | November 27, 2001 | Pan et al. |
6636645 | October 21, 2003 | Yu et al. |
6753965 | June 22, 2004 | Kumar et al. |
6990225 | January 24, 2006 | Tanaka et al. |
6990627 | January 24, 2006 | Uesugi et al. |
7520857 | April 21, 2009 | Chalana et al. |
7545979 | June 9, 2009 | Fidrich et al. |
7623709 | November 24, 2009 | Gering |
7744532 | June 29, 2010 | Ustuner et al. |
7860344 | December 28, 2010 | Fitzpatrick et al. |
7921717 | April 12, 2011 | Jackson et al. |
7961975 | June 14, 2011 | Lankoande et al. |
7983486 | July 19, 2011 | Zhou |
8050498 | November 1, 2011 | Wilensky et al. |
20030036703 | February 20, 2003 | Li |
20050134813 | June 23, 2005 | Yoshikawa et al. |
20080025619 | January 31, 2008 | Lankoande et al. |
20090175557 | July 9, 2009 | Lankoande et al. |
- “U.S. Appl. No. 11/831,353, Non Final Office Action mailed Oct. 31, 2010”, 8 pgs.
- “U.S. Appl. No. 11/831,353, Notice of Allowance mailed Feb. 7, 2011”, 7 pgs.
- “U.S. Appl. No. 11/831,353, Response filed Jan. 13, 2011 to Non Final Office Action mailed Oct. 13, 2010”, 9 pgs.
- “U.S. Appl. No. 11/831,353, Response filed Sep. 30, 2010 to Restriction Requirement mailed Sep. 1, 2010”, 7 pgs.
- “U.S. Appl. No. 11/831,353, Restriction Requirement mailed Sep. 1, 2010”, 5 pgs.
- Ersahin, Kaan, “Image Segmentation Using Binary Tree Structured Markov Random Fields”, Unknown Publisher, (Dec. 19, 2004), 9 pgs.
- Gencaga, D., “Image Enhancement Using Particle Filters”, Interactive Presentation, (Oct. 1, 2005), 2 pgs.
- Xie, Hua, “SAR Speckle Reduction Using Wavelet Denoising and Markov Random Field Modeling”, IEEE Transactions on Geoscience and Remote Sensing, vol. 40, No. 10, (Oct. 1, 2002), 17 pgs.
- “U.S. Appl. No. 12/406,730 , Response filed Jan. 9, 2012 to Non Final Office Action mailed Oct. 7, 2011”, 12 pgs.
- “U.S. Appl. No. 12/406,730, Non Final Office Action mailed Oct. 7, 2011”, 9 pgs.
Type: Grant
Filed: May 27, 2011
Date of Patent: Jun 26, 2012
Patent Publication Number: 20110229034
Assignee: STC.UNM (Albuquerque, NM)
Inventors: Ousseini Lankoande (Westfield, MA), Majeed M. Hayat (Albuquerque, NM), Balu Santhanam (Albuquerque, NM)
Primary Examiner: Kanjibhai Patel
Attorney: Schwegman, Lundberg & Woessner, P.A.
Application Number: 13/118,165
International Classification: G06K 9/34 (20060101); G06K 9/62 (20060101);