Processing a digital image of content using content aware despeckling
Systems and methods for removing artifacts from a page of a digital image are presented. More particularly, a digital image is obtained, the digital image having at least one page of content to be processed. A content bounding box is determined for the content of the page. Additionally, a set of segments is generated, the set corresponding to particular areas of the content within the content bounding box, each area associated with a type of content. For each segment of the set of segments, the following are performed. Despeckling criteria are selected for identifying artifacts according to the type associated with the segment. Artifacts are identified in the segment according to the despeckling criteria. The identified artifacts are then removed from the page. Thereafter, the updated digital image is stored in a content store.
This application is related to co-pending U.S. patent application Ser. No. 11/864,208, filed Sep. 28, 2007, entitled “Processing a Digital Image of Content.” This application is also related to co-pending U.S. patent application Ser. No. 11/864,180, filed Sep. 28, 2007, entitled “Processing a Digital Image of Content to Remove Border Artifacts.”
BACKGROUND

The publishing industry has greatly benefited from the many advances in digital imaging and printing technologies. Indeed, one of the many advances has been the creation of an on-demand printing market, in which a publisher prints small quantities of a book or other publication to satisfy orders for the publication as the orders are made. This is especially advantageous where requests for the publication are sporadic or limited, such that a typical print run would not be cost effective. Moreover, on-demand printing proves advantageous when the publisher is not the originator of the publication and has only a printed copy of it, since the publisher can scan the pages of the publication and generate a document therefrom.
Publishers, including on-demand publishers, receive source copies of a publication in a variety of formats, including digital and printed formats. Authors often provide electronic source copies of a document in a particular format generated by a word processor. This type of document has the advantage that its content is free of the extraneous speckles and lines (referred to as "artifacts") that arise when a source document originates from a scan or photocopy. For example,
Although many authors could provide publishers with a source copy of content that is free of artifacts, more often than not the source copies contain artifacts that detract from the overall quality of the book when printed in an on-demand manner. Unfortunately, on-demand publishers simply use the source copy provided to them by authors or content aggregators, relying upon them to keep the source copy artifact-free. Moreover, when the source copy is a printed document (such as a 100-year-old book that has long been out of print), even the best scanners and photocopiers frequently introduce artifacts.
The foregoing aspects and many of the attendant advantages of this invention will become more readily appreciated as the same become better understood by reference to the following detailed description, when taken in conjunction with the accompanying drawings, wherein:
Techniques, methods, and apparatus are described for processing content into print-ready content suitable for on-demand printing. In particular,
As shown in
A request may indicate a particular publication to be printed, such as an out-of-print book, or include the particular content 206 for the publishing service 210 to publish via its on-demand printing services. A given publication request may include the content to be published, or alternatively may identify a publication or other content that is available to the publishing service 210. For example, a client/vendor using computing device 204 may request that the publishing service 210 generate five copies of an out-of-print book 212 or other published, printed document that is available to the publishing service in physical form 212, or of digital content (not shown) stored in a content store 214. Alternatively, a client/author using computing device 202 may supply an original document to the publication service 210 for on-demand printing. Of course, a publication service 210 may also receive physical printed copies of content, or digital content on physical media, via a physical delivery means with an accompanying request to prepare and print the content.
The publication service 210 will typically have available or include the necessary hardware, such as a scanner 220 or other imaging device, to generate digital images of printed content. To this end, while not shown, the publication service 210 may also include a fax machine that may receive and store a faxed image of content, or print the faxed content for conversion to a digital image.
Once a request is received, and the printed content (if the request is in regard to printed content) is converted to an initial digital image, the publication service 210 processes the image to place it in a condition for on-demand printing. There are typically many aspects to processing an image of printed content for on-demand printing that are performed by the publication service 210. To that end,
As shown in
Also shown in the publication service 210 are various components that perform certain functions of processing a digital image for on-demand printing. However, it should be appreciated that these various components are logical components, not necessarily actual and/or discrete components, as any of the following-described components may be combined together or with other components, including components not discussed herein. Moreover, these various components of the publishing service 210 may be implemented in hardware, in software, or a combination thereof. The additional components may include, but are not limited to, an image segmentation component 304 for identifying content regions (image and text regions) which can be segmented from each page in a digital image, a border removal component 306 for removing border artifacts (such as border artifacts 110 or 112 of
Components further shown as included in the publishing service 210 may include a deskew component 312 for vertically aligning a content area in a digital image page, a despeckle component 314 for removing speckle artifacts (such as speckles 102-108 of
As mentioned previously in regard to
With regard to processing a digital image for on-demand printing,
In one embodiment, the routine 400 is initiated as the publishing service 210 receives a user request for on-demand printing of a particular document or other content. For the purpose of discussion, it is assumed that the content store 214 coupled to the publishing service 210 includes a digital image file representing the requested content. Moreover, it is also assumed that the digital image file has not yet been processed to make it suitable for on-demand printing. By way of example, the requested content may include, but is not limited to, a book, a magazine, an article, a newspaper, etc. It is also further assumed that the requested content is available as a digital image file and has not been previously processed and stored in the content store 214.
Beginning with block 402, the publishing service 210 obtains a digital image file from the content store 214 corresponding to the requested content. Alternatively, in the event where the digital image file corresponding to the requested content is not readily available from the content store 214, the digital image file may be obtained through scanning of a physical copy of the requested content. Irrespective of the manner or source from which the digital image file is generated and/or obtained, it should be appreciated that the digital image file will typically include a plurality of pages, each page corresponding to a printed page of content.
At control block 404, a looping construct, illustrated as a “for” loop, is begun that iterates through each page in the obtained digital image. Moreover, the control block 404 iterates through each page performing the steps between control block 404 and end control block 414. As those skilled in the art will appreciate, when all pages of the digital image have been processed, the routine 400 exits the looping construct and proceeds to block 416 as described below.
At block 406, a “deskew” process may be performed on the current page of the digital image. The content of a page in a digital image is often tilted or skewed during the scanning or imaging process. The deskew process corrects the skewing in that it aligns or orients the page to a desired alignment (i.e., such that the page of content will be aligned/oriented with the printed page). For example, as shown in
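The deskew step can be approximated with a projection-profile search. The following is a minimal pure-Python sketch, not the patent's implementation: the function names, the shear approximation (valid only for small angles), and the search range are all illustrative. A candidate angle scores well when shearing the columns by that angle concentrates the ink into sharp horizontal rows.

```python
import math

def skew_score(page, angle_deg):
    """Shear each column by the candidate angle and measure the variance
    of the horizontal projection profile; rows of text line up (and the
    variance peaks) at the angle that undoes the skew."""
    h, w = len(page), len(page[0])
    counts = [0] * h
    t = math.tan(math.radians(angle_deg))
    for c in range(w):
        shift = round(c * t)
        for r in range(h):
            if page[r][c]:
                rr = r + shift
                if 0 <= rr < h:
                    counts[rr] += 1
    mean = sum(counts) / h
    return sum((x - mean) ** 2 for x in counts)

def estimate_skew(page, max_deg=5.0, step=0.5):
    """Brute-force search over candidate angles (in degrees)."""
    n = int(max_deg / step)
    return max((i * step for i in range(-n, n + 1)),
               key=lambda a: skew_score(page, a))

# Synthetic page: a single "text line" drawn with a 3-degree downward drift.
W, H = 120, 20
page = [[0] * W for _ in range(H)]
for c in range(W):
    page[5 + round(c * math.tan(math.radians(3.0)))][c] = 1
print(estimate_skew(page))  # -3.0: shearing by -3 degrees re-aligns the line
```

A production implementation would rotate the full image rather than shear it, but the angle search works the same way.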
It should be appreciated that, for purposes of this disclosure, the term “orientation” is not a general reference to the arrangement of printed content on paper (commonly associated with terms such as “landscape” and “portrait”) but rather is a reference to the rotation and position of the image content with regard to page image boundaries. Moreover, as will be appreciated by one of ordinary skill in the art, deskewing typically comprises rotating the content (i.e., page 502) such that its bounding box 506 is aligned with the desired orientation. In one embodiment, a bounding box, as used herein, is a rectangle that delineates the content of the page 502 from non-content areas (margins) of the page, as shown in box 506. The result of the deskewing process of block 406 is shown in
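A content bounding box of this kind can be computed directly from a binarized page. A minimal sketch, assuming the page is represented as a list of rows of 0/1 pixel values; the function name and representation are illustrative, not from the patent:

```python
def content_bounding_box(page):
    """Smallest (top, left, bottom, right) rectangle, inclusive, that
    encloses every ink pixel (value 1); None for a blank page."""
    rows = [r for r, row in enumerate(page) if any(row)]
    if not rows:
        return None
    cols = [c for c in range(len(page[0])) if any(row[c] for row in page)]
    return (rows[0], cols[0], rows[-1], cols[-1])

page = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
print(content_bounding_box(page))  # (1, 1, 2, 3)
```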
At block 408, a segmentation process is performed on the current digital image page. The segmentation process identifies various areas or regions of the page and removes artifacts from those regions. The segmentation process may be repeated iteratively to identify various patterns, for example, text lines, graphics, background, margins, etc., in the content area in order to enhance artifact removal, including on or in between areas of the identified segments. In one embodiment, a segmentation process may be performed several times to achieve a different level of granularity in segmenting. In another embodiment, the type of content in a particular identified region determines the type of artifact removal that is performed. Additionally, in yet another embodiment, the segmentation process, in conjunction with the despeckle process, may be performed iteratively to enhance artifact identification and removal. Iterative segmentation and artifact removal are discussed later in greater detail in conjunction with
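One simple way to obtain text-line segments, as a hedged sketch of the kind of segmentation described (the patent does not specify an algorithm), is a horizontal projection: maximal runs of consecutive rows containing ink become text-line segments.

```python
def segment_text_lines(page, top, bottom):
    """Split rows [top, bottom] of a binary page into text-line segments:
    maximal runs of consecutive rows with at least one ink pixel.
    Returns a list of (first_row, last_row) pairs, inclusive."""
    lines, start = [], None
    for r in range(top, bottom + 1):
        if any(page[r]) and start is None:
            start = r
        elif not any(page[r]) and start is not None:
            lines.append((start, r - 1))
            start = None
    if start is not None:
        lines.append((start, bottom))
    return lines

page = [
    [0, 0, 0],
    [1, 1, 0],
    [0, 1, 1],
    [0, 0, 0],
    [1, 0, 0],
    [0, 0, 0],
]
print(segment_text_lines(page, 0, 5))  # [(1, 2), (4, 4)]
```

Running the same idea on columns within each line run would yield word- and character-level segments, which is how finer granularity could be reached on later iterations.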
At block 410, a despeckling process is performed on the segmented digital image page. As suggested above, the despeckling process removes speckle artifacts (such as speckles 102-108 of
At block 412, a border removal process may be performed for removing border artifacts (such as borders 110 and 112 of
After each page of the digital image has been processed (deskewed, segmented, despeckled, and borders removed, corresponding to blocks 406-412), the publishing service 210 may perform various other processes to further refine the digital image into print-ready content. One process, at block 416, may be a "reassembly" or "reconstruct" process that assembles individual pages in accordance with a desired page order of the on-demand print-ready document.
At block 418, an alignment process is performed across the reassembled pages. As will be well understood, in order to provide a pleasant look of the on-demand printed document, it is desirable that the content area on each digital image page be similarly aligned across all pages and placed at approximately the same position on each physical page of the on-demand printed document. However, while many digital images are generated by scanners, some digital images may be generated through different imaging devices, resulting in misalignment among digital image pages. Further, even before scanning of a book, the physical pages of the book can be misaligned due to various errors in the printing or binding process. In order to place the content area of each digital image page at approximately the same position on a physical page of the on-demand printed document, an anchor point of a bounding box, such as bounding box 506, may be used to align the content area across the digital image pages. The anchor point may be defined for the alignment process, for example, the top left corner of the bounding box or the center of the bounding box.
In some instances, some outermost bounding boxes of the digital image pages have differences in size. For example, a first digital image page for a chapter which contains fewer text lines may have a smaller outermost bounding box than the rest of the digital image pages. In such a case, the publishing service 210 may define margins relative to the outermost bounding boxes and control the margins on a digital image page based on the size of the outermost bounding box.
In one embodiment, the publishing service 210 may allow an end user to specify desirable margins for a physical page of the on-demand printed document. In one embodiment, for each image page, typical margins and minimal margins are identified. The typical margins may be determined from the outermost bounding box and the minimal margins may be determined from a theoretical box encompassing the content area and all other images on the page such as annotations, notes, headers and footers, and the like. The publishing service 210 may determine a final trim box that satisfies both typical margin and minimal margin requirements. Subsequently, all images on a digital image page are cropped by the final trim box. As a result, the output images will have consistent margins across the pages.
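The final trim box can be sketched as simple interval arithmetic over three boxes: the outermost content box (which sets the typical margins), the theoretical must-keep box (which sets the minimal margins), and the page itself. A hedged example with illustrative names, using an inclusive (top, left, bottom, right) convention:

```python
def final_trim_box(content_box, must_keep_box, page_box, desired_margin):
    """Trim box that leaves roughly desired_margin around the main content
    box (the 'typical' margins) yet never cuts into the must-keep box
    (content plus annotations, headers, footers), clamped to the page."""
    t = min(content_box[0] - desired_margin, must_keep_box[0])
    l = min(content_box[1] - desired_margin, must_keep_box[1])
    b = max(content_box[2] + desired_margin, must_keep_box[2])
    r = max(content_box[3] + desired_margin, must_keep_box[3])
    # Never trim outside the physical page.
    return (max(t, page_box[0]), max(l, page_box[1]),
            min(b, page_box[2]), min(r, page_box[3]))

# Content box, must-keep box (content + footer), page box, 5-unit margin:
print(final_trim_box((10, 10, 90, 70), (8, 10, 95, 70), (0, 0, 99, 79), 5))
# (5, 5, 95, 75)
```

Cropping every image on the page by this box yields the consistent margins described above.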
After the digital image pages are aligned and have defined margins, a print-ready document file is ready for on-demand printing of the content, and at block 420, the print-ready document is stored in the content store 214. Thereafter, the routine 400 terminates.
It is noted that the aforementioned processes and embodiments are described only for illustrative purposes and, thus, it is contemplated that all or some of the processes described in
In at least some embodiments described herein, the publishing service 210 may identify several layers or areas of a digital image page and apply a layered despeckling process that uses different speckle artifact removal criteria for each layer. The removal criteria may be chosen to maintain a balance between the quality of the content and the accuracy in removing speckle artifacts.
There are various ways to identify and generate layers of the digital image page. However, for the purpose of discussion, it is to be assumed that the layers of the digital image page are identified through an iterative segmentation process, such as that described in
Beginning at block 602, a first segmentation process is performed to produce several segmentation layers. Results of a segmentation process may include text regions, image regions, text lines, words, characters, and the like. In one embodiment, the segmentation process results in segments corresponding to the images, text regions, and, within text regions, text lines.
Each segment/layer is associated with a type of content within the segment. Correspondingly, based on the type of content, a suitable despeckling process or criteria is selected and applied. Thus, if a layer includes less important content or no content, such as the margins of a page, the publishing service may apply an aggressive despeckling process. Likewise, if a layer includes more important content, such as textual content, the publishing service may apply a conservative despeckling process.
To begin generating the layers/segments, as shown in
With regard to
At block 606, on the second layer (such as a content area 706 in
At block 608, another despeckling process may be applied to a third layer of segments to despeckle inter-regional areas. As shown in
As mentioned above, for each type of segment being processed, a corresponding despeckling algorithm is applied. In other words, the despeckling process is a content-aware despeckling process. For inter-regional areas (e.g., areas between image segments and text segments), a more aggressive despeckling algorithm or removal threshold may be applied. As one example, in the most aggressive despeckling, connected groups of fewer than 50 pixels (as determined by a connected component algorithm) are removed as superfluous noise. Alternatively, when despeckling within a text region, connected groups of fewer than 10 pixels are removed as superfluous noise. Still further, when despeckling between segments of text lines, a yet more conservative threshold is used, such as removing only connected groups of 5 pixels or fewer.
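The thresholds above (50, 10, and 5 pixels) can be wired into a connected-component pass. A minimal pure-Python sketch, assuming a binary page as a list of 0/1 rows; the function names, the 4-connectivity choice, and the dictionary keys are illustrative, not from the patent (note the text-line threshold is stored as 6 so that a strict comparison removes components of 5 pixels or fewer):

```python
from collections import deque

# Per-segment-type removal thresholds taken from the text; components
# strictly smaller than the threshold are erased as speckle noise.
THRESHOLDS = {"inter_region": 50, "text_region": 10, "text_line": 6}

def despeckle(page, segment_type):
    """Remove 4-connected components whose pixel count falls below the
    threshold for the given segment type. Modifies `page` in place."""
    limit = THRESHOLDS[segment_type]
    h, w = len(page), len(page[0])
    seen = [[False] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            if page[r][c] and not seen[r][c]:
                # Flood-fill one component, collecting its pixels.
                comp, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and page[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(comp) < limit:
                    for y, x in comp:
                        page[y][x] = 0
    return page

page = [[0] * 8 for _ in range(6)]
for r in range(1, 4):          # 3x4 block: 12 connected pixels, kept
    for c in range(1, 5):
        page[r][c] = 1
page[5][7] = 1                 # lone speckle, removed
despeckle(page, "text_region")
print(sum(sum(row) for row in page))  # 12
```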
As discussed above, the layered despeckling process, in conjunction with an iterative segmentation process, may improve the image quality of the on-demand print-ready document because after each despeckling pass, fewer speckle artifacts (noise) remain on the digital image page. In other words (while not shown), after segmenting and despeckling the digital image page, the process may be repeated. The despeckling enables a subsequent segmentation process to achieve improved segmentation. The publishing service 210 may continue this iteration of segmentation/despeckling until predetermined threshold criteria for the despeckling process have been met. Additionally, the iterative segmentation and despeckling process can be used to generate various outputs, including XML files, TIFF files, etc.
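The iteration can be sketched as a driver loop that alternates the two passes until a pass removes almost nothing. The callables and the stopping rule below are illustrative assumptions, not the patent's stated criteria:

```python
def iterative_cleanup(page, segment, despeckle_segment,
                      min_removed=1, max_rounds=5):
    """Alternate segmentation and despeckling until a full pass removes
    fewer than min_removed pixels (or a round cap is reached).
    `segment` returns (segment, type) pairs; `despeckle_segment` returns
    the number of pixels it removed."""
    for _ in range(max_rounds):
        removed = sum(despeckle_segment(seg, kind)
                      for seg, kind in segment(page))
        if removed < min_removed:
            break
    return page

# Toy stand-ins: each pass removes half of a fake "noise" counter.
state = {"noise": 8}
def fake_segment(page):
    return [(page, "text_region")]
def fake_despeckle(seg, kind):
    removed = state["noise"] // 2
    state["noise"] -= removed
    return removed

iterative_cleanup([[0]], fake_segment, fake_despeckle)
print(state["noise"])  # 1
```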
Referring now to
As will be appreciated by one of ordinary skill in the art, the despeckling criteria may be modified and tuned based on the importance of the content on the layer, the possibility to degrade the quality of the content by removing speckles, the richness of the content, etc. As an alternative to focusing on despeckling, and as shown in
For the purpose of discussion, it is assumed that a segmentation process may be used to identify several different layers, for example, a first layer (background, non-content area), a second layer (content area which is the union of all the content, text, and images), a third layer (text lines), etc. It is further assumed that a degree of border noise removal will be tailored for each layer based on the importance of the content or the richness of the content within the layer.
Beginning with block 1002, the page layers or segments are obtained, such as a first layer (boundary region or background), a second layer (a content box region) and a third layer (regions which the publishing service wants to preserve without any change, e.g., text lines regions). At block 1004, the first layer of segments is obtained.
At block 1006, border removal criteria for the current layer are selected. If this is the first layer, border removal criteria are selected for removing almost all border artifacts, noise, etc., from the first layer. As described above, the first layer is generally a background/non-content area, hence the aggressive removal process.
At block 1008, the border artifacts found in the current layer according to the selected border removal criteria are removed. In one embodiment, the publishing service may apply a connected component analysis to the first layer, which identifies border objects and analyzes pixels in the border objects to identify a border artifact that is to be removed. The connected component analysis and associated border removal criteria will be explained later in
At decision block 1010, a decision is made as to whether there are any additional content layers to be processed. If so, at block 1012 the next layer is selected, the routine proceeds to block 1006 where border removal criteria for the now-current layer is selected, and the above-described process is repeated. This continues until, at decision block 1010, there are no more layers to process and the routine 1000 terminates.
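The per-layer control flow of blocks 1002-1012 can be sketched as a table of criteria keyed by layer name. The criteria values and the "preserve" convention for text lines are illustrative assumptions, not values from the patent:

```python
# Hypothetical per-layer criteria: the background is cleaned aggressively,
# the content-box layer gently, and text-line regions are preserved.
BORDER_CRITERIA = {
    "background": {"max_keep_size": 50},
    "content_box": {"max_keep_size": 10},
    "text_lines": None,  # preserve unchanged
}

def remove_borders_by_layer(layers, remove_fn):
    """Drive blocks 1004-1012: visit each (name, layer) pair, select the
    criteria for that layer, and apply the removal step unless the
    layer is marked as preserved."""
    for name, layer in layers:
        criteria = BORDER_CRITERIA[name]
        if criteria is not None:
            remove_fn(layer, criteria)

calls = []
layers = [("background", "L0"), ("content_box", "L1"), ("text_lines", "L2")]
remove_borders_by_layer(
    layers, lambda layer, c: calls.append((layer, c["max_keep_size"])))
print(calls)  # [('L0', 50), ('L1', 10)]
```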
With each layer, suitable border removal criteria are selected. This selection assists in maintaining a balance between accurately identifying and removing border artifacts and preserving the important content.
Beginning at block 1102, border objects for the digital page image are identified. In particular, border objects are superfluous (i.e., not part of the page content) objects/artifacts that fall outside of the content area of a page image but within the page boundaries. Illustrative border objects are shown in
At control block 1106, a looping construct is begun to iterate through all of the identified border objects. For each border object, at least some of the steps up to end control block 1116 are executed. In particular, at decision block 1108, a determination is made as to whether the border object, such as border object 1205, touches or crosses within the page content's bounding box 1204. If it is determined at decision block 1108 that the border object does not touch the content bounding box 1204, at block 1110 the border object is evaluated according to various criteria to determine whether the border object should be removed. These criteria include by way of illustration, but are not limited to, whether the border object is closer to the content box 1204 or to the page boundary (with those closer to the page boundary more likely superfluous); whether the border object is aligned with, or oblique to, the nearest page boundary (indicating a possible intended page border); a ratio of the width to the height of the border object such that if greater than a threshold the border object is considered superfluous; and the like.
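The evaluation criteria of block 1110 can be sketched as a few geometric tests on axis-aligned boxes. The gap metrics and the aspect-ratio limit of 8 below are illustrative assumptions, not values from the patent:

```python
def box_gap(a, b):
    """Axis-aligned gap between two non-overlapping boxes (0 if touching).
    Boxes are (top, left, bottom, right), inclusive."""
    dy = max(b[0] - a[2], a[0] - b[2], 0)
    dx = max(b[1] - a[3], a[1] - b[3], 0)
    return max(dy, dx)

def edge_gap(obj, page):
    """Distance from a border object to the nearest page boundary."""
    return min(obj[0] - page[0], obj[1] - page[1],
               page[2] - obj[2], page[3] - obj[3])

def is_superfluous(obj, content, page, aspect_limit=8.0):
    """Evaluate a border object that does NOT touch the content box:
    flag it when it hugs the page boundary or looks like a stray line."""
    closer_to_edge = edge_gap(obj, page) < box_gap(obj, content)
    h = obj[2] - obj[0] + 1
    w = obj[3] - obj[1] + 1
    stray_line = max(w / h, h / w) > aspect_limit
    return closer_to_edge or stray_line

# An object hugging the top edge vs. one sitting just above the content:
print(is_superfluous((2, 30, 4, 40), (20, 15, 80, 65), (0, 0, 99, 79)))    # True
print(is_superfluous((17, 30, 18, 40), (20, 15, 80, 65), (0, 0, 99, 79)))  # False
```

The alignment criterion (whether the object runs parallel or oblique to the nearest page edge) would need the object's pixel list rather than just its box, so it is omitted here.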
At decision block 1112, if the evaluation of criteria indicates that the border object is superfluous, the routine 1100 proceeds to block 1114 where the border object is removed from the digital image page. Alternatively, if the evaluation of criteria indicates that the border object should be retained, or after deleting the border object, the routine 1100 proceeds to end control block 1116 where the routine 1100 returns to control block 1106 if there are additional border objects to process. Otherwise, the routine 1100 terminates.
Returning to decision block 1108, if the border object touches (or crosses within) the content's bounding box 1204, the routine 1100 proceeds to decision block 1118 (
At block 1122, the number of pixels of the border object within the content region is determined. If, at decision block 1124, the determined number of pixels exceeds a predetermined threshold, this may indicate that removing the border object may cause a degradation in the quality of the image/content. In such a case, at block 1126, the border object may not be deleted from the digital image page, to preserve the content region, and the routine 1100 proceeds to block 1106 as described above. Alternatively, if the number of pixels of the border object within the content region does not exceed the predetermined threshold, at block 1114 (
If it is determined at decision block 1118 that the border object 1205 does not touch any content regions, at block 1120, the number of pixels of the border object residing in the boundary area 1210 will be evaluated to determine whether the border object is a border artifact that is to be removed. If, at decision block 1124, the number of pixels of the border object residing within the boundary area does not exceed a predetermined threshold, at block 1114 the border object is deleted from the digital image page. Alternatively, if the number of pixels exceeds the threshold, this may indicate that the border object should not be deleted from the digital image page, and at block 1126 the border object is retained.
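The pixel-count test of blocks 1120-1126 reduces to counting the object's pixels that fall inside the relevant region and comparing against a threshold. A minimal sketch with illustrative names; the pixel-list representation and threshold value are assumptions:

```python
def remove_touching_border_object(obj_pixels, content_box, threshold):
    """A border object that touches the content bounding box is removed
    only if few enough of its pixels fall inside the content region;
    otherwise deleting it could degrade the content itself."""
    t, l, b, r = content_box
    inside = sum(1 for y, x in obj_pixels if t <= y <= b and l <= x <= r)
    return inside <= threshold  # True: safe to remove

# An object with only 2 of its 4 pixels inside the content box:
print(remove_touching_border_object(
    [(3, 6), (4, 6), (5, 6), (6, 6)], (5, 5, 10, 10), 3))  # True
```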
After all border objects in the page have been processed, as determined at end control block 1116, the routine terminates.
It is to be noted that the above-mentioned rules explained in
As mentioned above, one aspect of preparing a digital image for on-demand printing is to arrange all pages within the image so that the content of each page is similarly located. The need to arrange and align pages of a digital image arises due to a variety of factors, including issues with regard to scanning a printed copy of the content. By way of illustration,
At block 1506, the "regular" pages of the image are registered, meaning that a registration point common to all regular content boxes is placed at a common location for the page. While any number of points may be used as a registration point, in one embodiment, the top left corner of the content box is used as the registration point, and the content of the "regular" pages of the image is positioned on their respective pages with their registration point at a common location on the page. At block 1508, the remaining pages with content (i.e., the "irregular" pages) are registered with the "regular" pages such that content thereon is positioned as closely as possible, if not optimally, with the content of the other pages.
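Registering pages to a common anchor is a translation per page: move each content box so its top-left corner (the registration point) lands on the shared anchor. A small illustrative sketch, with names and the (top, left, bottom, right) convention assumed:

```python
def register_content(content_box, anchor):
    """(dy, dx) translation placing the content box's top-left corner
    (the registration point) at the shared anchor."""
    return anchor[0] - content_box[0], anchor[1] - content_box[1]

def translate_box(box, dy, dx):
    t, l, b, r = box
    return (t + dy, l + dx, b + dy, r + dx)

# Two pages whose content sits at slightly different offsets:
pages = [(12, 9, 90, 70), (15, 11, 93, 72)]
anchor = (10, 10)
registered = [translate_box(b, *register_content(b, anchor)) for b in pages]
print(registered)  # both content boxes now start at (10, 10)
```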
At block 1510, the margins for the pages are normalized, i.e., made the same. Normalizing page margins addresses the fact that the content of the pages may be of different sizes (whether or not the pages are regular or irregular pages). After normalizing the page margins for the pages in the image, adjustments may optionally be made for binding purposes. For example, binding width may be added to the left margin of odd numbered pages while, conversely, binding width may be added to the right margin of even numbered pages. Thereafter, the routine 1500 terminates.
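Margin normalization plus the binding adjustment can be sketched as splitting the free horizontal space evenly and then shifting the binding width toward the spine. The rounding behavior and parameter names are illustrative assumptions:

```python
def page_margins(page_w, content_w, page_num, binding_width=0):
    """Split the horizontal whitespace evenly, then add the binding width
    on the spine side: the left of odd (recto) pages and the right of
    even (verso) pages."""
    free = page_w - content_w
    left = right = (free - binding_width) // 2  # integer split of the rest
    if page_num % 2 == 1:   # odd page: binding on the left
        left += binding_width
    else:                   # even page: binding on the right
        right += binding_width
    return left, right

print(page_margins(100, 80, 1, binding_width=4))  # (12, 8)
print(page_margins(100, 80, 2, binding_width=4))  # (8, 12)
```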
While the above description is generally made in regard to receiving a client request for on-demand printing of content, it should be appreciated that this is an illustrative example, and the subject matter disclosed above should not be construed as limited to this particular scenario. In alternative embodiments, rather than waiting for a consumer's request for on-demand printing, a service may anticipatorily process content for on-demand printing and store the processed content in the content store 214. Still further, the processed content may be used in ways other than for on-demand printing. For example, the processed content may be used in electronic book readers, on user computers, and the like. Moreover, the processed content may be stored in any number of formats, both open and proprietary, such as XML, PDF, and the like.
While illustrative embodiments have been illustrated and described, it will be appreciated that various changes can be made therein without departing from the spirit and scope of the invention.
Claims
1. A method for removing a plurality of artifacts from a page of a digital image, the method comprising:
- obtaining the digital image having the page;
- determining a content bounding box for content of the page;
- generating a set of segments corresponding to areas of the content within the content bounding box according to a segment type in a plurality of segment types, each segment type corresponding to a type of content in a respective one of the segments;
- for each segment of the set of segments: determining a level of importance associated with the corresponding segment type, wherein the plurality of segment types comprises an image type and a text type; selecting despeckling criteria based at least in part on the level of importance for identifying superfluous artifacts, wherein a first despeckling criteria is selected based at least in part on a low level of importance and a second despeckling criteria is selected based at least in part on a high level of importance, wherein the second despeckling criteria is more conservative with respect to an artifact removal than the first despeckling criteria; identifying artifacts according to the despeckling criteria; and removing the artifacts from the page; and storing an updated digital image in a content store.
2. The method of claim 1, wherein the plurality of segment types further comprises a background type.
3. The method of claim 1, wherein the despeckling criteria for a background segment more aggressively identifies the artifacts than for a text region segment.
4. The method of claim 1, wherein selecting the despeckling criteria for identifying the artifacts comprises selecting a threshold count under which connected pixels are removed.
5. A method for removing a plurality of artifacts from a page of a digital image, the method comprising:
- obtaining the digital image having a plurality of pages;
- for each page of the digital image: determining a content bounding box for content on the page; segmenting the content within the content bounding box to generate a set of segments, each of the segments having a corresponding one of a plurality of segment types according to a type of content in a respective one of the segments, at least some of the segment types having a hierarchical relationship;
- for each segment of the set of segments: determining a level of importance associated with the corresponding segment type; selecting despeckling criteria for identifying the artifacts within the segment based at least in part on the level of importance associated with the corresponding segment type, wherein a first despeckling criteria is selected if a low level of importance is associated with the segment and a second despeckling criteria is selected if a high level of importance is associated with the segment, wherein the second despeckling criteria is more conservative with respect to an artifact removal than the first despeckling criteria; identifying the artifacts within the segment according to the selected despeckling criteria; and removing the identified artifacts from the segment of the page; and storing an updated digital image in a content store.
6. The method of claim 5, wherein the plurality of segment types comprises at least one of a background, an image, a text region, or a text-line region.
7. The method of claim 5, wherein selecting the despeckling criteria for identifying the artifacts within the segment comprises selecting a threshold count under which connected pixels are removed.
8. The method of claim 7, wherein the connected pixels are identified according to a connected component algorithm.
9. A non-transitory computer-readable medium bearing computer-executable instructions which, when executed on a computing device having a processor and a memory, and connected to a content store, carry out a method for processing a digital image of content to remove a plurality of artifacts from the content of the digital image, the method comprising:
- obtaining the digital image, wherein the digital image comprises a plurality of pages;
- for each page of the digital image: determining a content bounding box for the content on the page; segmenting the content within the content bounding box to generate a set of segments, each of the segments having a corresponding segment type according to a type of content in the segment;
- for each segment of the set of segments: determining a level of importance associated with the corresponding segment type, wherein the plurality of segment types comprises an image type and a text type; selecting a despeckling criteria for identifying the artifacts within the segment based at least in part on the level of importance associated with the segment, wherein a first despeckling criteria is selected if a low level of importance is associated with the segment and a second despeckling criteria is selected if a high level of importance is associated with the segment, wherein the second despeckling criteria is more conservative with respect to an artifact removal than the first despeckling criteria; identifying the artifacts within the segment according to the selected despeckling criteria; and removing the identified artifacts from the segment of the page; and storing an updated digital image in the content store.
10. A digital image processing system for processing the digital image to remove a plurality of artifacts from the digital image according to a type of content in which an artifact is found, the system comprising:
- a processor;
- a memory; and
- a content store storing a plurality of digital images of printed content, each digital image having at least one page of content;
- wherein the system is configured to obtain the digital image to process, and for each page of content in the digital image: determine a content bounding box for the content on the page; segment the content within the bounding box on the page to generate a set of segments, each segment having one of a plurality of segment types determined according to the type of content in the segment;
- for each segment of the set of segments: determine a level of importance associated with a corresponding segment type, wherein the plurality of segment types comprises an image type and a text type; select despeckling criteria to identify the artifacts within the segment based at least in part on the level of importance associated with the corresponding segment type, wherein a first despeckling criteria is selected if a low level of importance is associated with the segment and a second despeckling criteria is selected if a high level of importance is associated with the segment, wherein the second despeckling criteria is more conservative with respect to an artifact removal than the first despeckling criteria; identify the artifacts within the segment according to the selected despeckling criteria; and remove the identified artifacts from the segment of the page; and store a processed digital image in the content store.
11. The digital image processing system of claim 10, wherein the plurality of segment types comprises at least one of a background segment, an image segment, a text region segment, or a text-line region segment.
12. The digital image processing system of claim 10, wherein the despeckling criteria for an image segment and a text region segment are distinct.
13. The digital image processing system of claim 10, wherein selecting despeckling criteria for identifying the artifacts within the segment comprises selecting a threshold count under which a plurality of connected pixels are removed.
14. The digital image processing system of claim 13, wherein the connected pixels are identified according to a connected component algorithm.
15. The method of claim 5, wherein the plurality of segment types comprises a text region segment and a text lines segment.
16. The method of claim 15, wherein the plurality of segment types further comprises an inter-word text segment and an inter-character text segment.
17. The method of claim 15, wherein the plurality of segment types further comprises an inter-regional segment that falls outside of the text lines segment.
18. The method of claim 5, wherein the selected despeckling criteria corresponds to a filter threshold.
19. The system of claim 10, wherein the selected despeckling criteria corresponds to a filter threshold.
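Claims 7, 8, 13, and 14 describe despeckling by removing groups of connected pixels whose count falls under a threshold selected per segment type, with a more conservative (smaller) threshold for high-importance segments such as text. The sketch below is only an illustration of that idea, not the patented implementation; the threshold values and segment-type names are hypothetical, and the connected-component labeling is a plain 4-connected flood fill rather than any particular claimed algorithm.

```python
from collections import deque

# Hypothetical thresholds: text (high importance) gets a conservative
# threshold; background (low importance) gets an aggressive one.
THRESHOLDS = {"text": 2, "background": 5}

def connected_components(grid):
    """Group 4-connected foreground (1) pixels; return a list of pixel lists."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    comps = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                comp, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                comps.append(comp)
    return comps

def despeckle(grid, segment_type):
    """Clear components whose pixel count is under the segment's threshold."""
    limit = THRESHOLDS[segment_type]
    for comp in connected_components(grid):
        if len(comp) < limit:
            for y, x in comp:
                grid[y][x] = 0
    return grid
```

With the thresholds above, a three-pixel stroke survives despeckling in a text segment while isolated one-pixel specks are removed; in a background segment the aggressive threshold removes all of them.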
| Patent/Publication No. | Date | Inventors |
| --- | --- | --- |
| 5848184 | December 8, 1998 | Taylor et al. |
| 6400845 | June 4, 2002 | Volino |
| 6507670 | January 14, 2003 | Moed |
| 6976223 | December 13, 2005 | Nitschke |
| 6993185 | January 31, 2006 | Guo et al. |
| 7031543 | April 18, 2006 | Cheng et al. |
| 7337399 | February 26, 2008 | Jensen et al. |
| 7505632 | March 17, 2009 | Hu et al. |
| 7529407 | May 5, 2009 | Marquering et al. |
| 7557963 | July 7, 2009 | Bhattacharjya |
| 7773803 | August 10, 2010 | Fan |
| 20020118889 | August 29, 2002 | Shimizu |
| 20030118234 | June 26, 2003 | Tanaka et al. |
| 20050163374 | July 28, 2005 | Ferman et al. |
| 20060126093 | June 15, 2006 | Fedorovskaya et al. |
| 20070074108 | March 29, 2007 | Xie et al. |
| 20090022397 | January 22, 2009 | Nicholson |
| Document No. | Date | Country |
| --- | --- | --- |
| 4311172 | October 1993 | DE |
| WO2007/065087 | June 2007 | WO |
- Aiazzi, et al. "Multiresolution Adaptive Speckle Filtering: A Comparison of Algorithms." Geoscience and Remote Sensing, 1997 (IGARSS '97), Remote Sensing—A Scientific Vision for Sustainable Development, 1997 IEEE International. 2 (1057-1056): 1997. Print.
- Walessa, et al. “Model-Based Despeckling and Information Extraction from SAR Images.” IEEE Transactions on Geoscience and Remote Sensing. 38.5 (2000): 2258-2269. Print.
- Cannon, et al. “Quality Assessment and Restoration of Typewritten Document Images.” International Journal on Document Analysis and Recognition. 2. (1999): 80-89. Print.
- Silva, et al. "Background Removal of Document Images Acquired Using Portable Digital Cameras." Lecture Notes in Computer Science, ICIAR 2005. 3656. (2005): 278-285. Print.
- Yin, et al. “Multi-component Document Image Coding Using Regions-of-Interest.” Springer Lecture Notes in Computer Science—DAS 2004. 3163. (2004): 158-169. Print.
- European Patent Application No. 08252547.8 Search Report and Written Opinion mailed Apr. 27, 2009, 10 pages.
- Fan, J. et al, “A Comprehensive Image Processing Suite for Book Re-mastering,” International Conference on Document Analysis and Recognition, Aug. 2005, 5 pages, IEEE, USA.
- Baird, H.S., “Difficult and Urgent Open Problems in Document Image Analysis for Libraries,” International Workshop on Document Image Analysis for Libraries, Jan. 2004, 8 pages, IEEE, USA.
- Le Bourgeois, F. et al., “Document Images Analysis Solutions for Digital Libraries,” International Workshop on Document Image Analysis for Libraries, Jan. 2004, 23 pages, IEEE, USA.
- Cinque, L. et al., “Segmentation of Page Images Having Artifacts of Photocopying and Scanning,” Pattern Recognition Society, May 2002, 11 pages, Elsevier Science Ltd., Great Britain.
Type: Grant
Filed: Sep 28, 2007
Date of Patent: Oct 1, 2013
Assignee: Amazon Technologies, Inc. (Reno, NV)
Inventors: Sherif M. Yacoub (Seattle, WA), Jian Liang (Seattle, WA), Hanning Zhou (Seattle, WA)
Primary Examiner: Michael A Newman
Application Number: 11/864,187
International Classification: G06K 9/40 (20060101); G06K 9/34 (20060101); H04N 1/00 (20060101);