METHODS AND SYSTEMS FOR AUTOMATED SELECTION OF REGIONS OF AN IMAGE FOR SECONDARY FINISHING AND GENERATION OF MASK IMAGE OF SAME
Automated systems, methods, and tools are presented that automatically extract and select portions of an image and automatically generate a premium finish mask specific to the image, requiring little or no human intervention. Graphical user interface tools allowing a user to provide an image and to indicate regions of the image for application of a premium finish are also presented.
The present invention relates to designing and manufacturing products having first surface areas having a first type of finish applied and second surface areas having a second type of finish applied, and more particularly, to the methods, tools, and systems for designating areas of different types of finish in a design to be applied to a product based on an image of the design, and further to automated generation of a corresponding mask image associated with the design.
BACKGROUND OF THE INVENTION

Printing services Web sites allowing a user to access the site from the user's home or work and design a personalized product are well known and widely used by many consumers, professionals, and businesses. For example, Vistaprint markets a variety of printed products, such as business cards, postcards, brochures, holiday cards, announcements, and invitations, online through the site www.vistaprint.com. Printing services web sites often allow the user to review thumbnail images of a number of customizable design templates prepared by the site operator having a variety of different styles, formats, backgrounds, color schemes, fonts and graphics from which the user may choose. When the user has selected a specific product design template to customize, the sites typically provide online tools allowing the user to incorporate the user's personal information and content into the selected template to create a custom design. When the design is completed to the user's satisfaction, the user can place an order through the web site for production and delivery of a desired quantity of a product incorporating the corresponding customized design.
Finishes such as foil, gloss, raised print, vinyl, embossment, leather, cloth, and other textured finishes (hereinafter a “secondary finish”) that must be applied to a printed product separately from the traditional ink application are typically reserved only for premium printed products due to the expense, time, and equipment required for design, setup, and application of the premium finishes. To add a premium finish to a printed product, at least one premium finish mask, designating areas where the secondary finish is to be applied versus areas where the secondary finish is not to be applied, must be generated. The generation of the mask requires knowledge of which portions of the design are to be finished using the secondary finish (i.e., foiled, glossed, raised print, or other premium finishes).
Currently, there are no tools for automatically extracting regions of a design designated for application of a premium finish (e.g., foil, gloss, raised print, etc.) that produce, in most cases, results corresponding well to human judgment. In the printing world, premium finishes are generally used to accent or highlight features of an image. Generally, the determination of which features look good when accentuated using a premium finish is better left to a human designer because humans can more easily understand the meaning of, and relationships between, the different areas of the image and can quickly select those areas of an image that make sense (to other humans) to highlight, while disregarding those areas whose highlighting would detract from the overall aesthetics of the final product. It is difficult to translate this type of human judgment into a computer algorithm. Additionally, upon selection of regions of a given design for premium finish, there are currently no tools for automatically generating a secondary finish mask for the design. Thus, for every print design offered by a vendor, the vendor must expend resources designing one or more associated premium finish masks for that particular design. The design of a premium finish mask is therefore typically performed by a human graphics designer, often the same designer who created the print design. It would therefore be desirable to have automated tools that automatically extract regions of a given design for premium finish based on a selected area of the design image, and that automatically generate one or more premium finish masks specific to the given design, in order to minimize the amount of human designer time expended on the creation of a particular design without diminishing the quality of the end design and premium finish aesthetics.
Additionally, often customers of a retail printed products provider may wish to provide an image to be printed, and to apply a premium finish (e.g., foil, raised print, etc.) to portions of the image to be applied to the end printed product. No simple tools or techniques for indicating which portions of the image are to be premium finished exist. Instead, a trained designer must hand create an appropriate premium finish mask specific to the image the customer provided to ensure that the premium finish is applied in an aesthetically pleasing manner. Accordingly, it is difficult and expensive for end customers to add premium finish to their own images. It would therefore be desirable to have available web-enabled tools that allow a customer to provide an image and to indicate regions of the image for application of premium finish. It would further be desirable to provide an intuitive user interface that conforms to editing interfaces that customers are already used to dealing with in other document editing applications.
SUMMARY

Various embodiments of the invention include automated systems, methods and tools for automatically extracting and selecting portions of an image for application of a premium finish (i.e., a secondary finish separately applied to a product before or after application of a primary finish). Embodiments of the invention also automatically generate a premium finish mask specific to the image based on the automatic (and optionally, user-modified) segment selections. The system, method and tools require little or no human intervention.
In one embodiment, a method for generating by a computer system a premium finish mask image for applying a secondary finish to a product to be finished with a primary finish includes the steps of receiving an image comprising a plurality of pixels, generating a segmented image having a plurality of pixels corresponding to the respective plurality of pixels of the received image, the segmented image comprising discrete segments each comprising a subset of pixels, receiving a selection of segments designated for premium finish, and automatically generating a mask image having first areas indicating non-premium finish areas and second areas indicating premium finish areas, the premium finish areas corresponding to the selection of segments designated for premium finish.
In an embodiment, the automated premium finish segment selection is subjected to a confidence level test to ensure that the automated selection of segments corresponds well to a human value judgment relative to the aesthetics of premium finish applied to the selected segments in the final product.
In an embodiment, a graphical user interface provides user tools for allowing a user to modify an automatic segment selection.
In an embodiment, a computerized system automatically receives an image and generates an associated mask image for use in application of premium finish to a product.
It will be understood that, while the discussion herein describes an embodiment of the invention in the field of preparation of customized printed materials having premium finish regions such as foil, gloss, raised print, etc., the invention is not so limited and could be readily employed in any embodiment involving the presentation of an electronic image of any type of product wherein it is desired to indicate one or more areas of the image that are to be finished using a different finish than other areas of the image.
In accordance with certain aspects of the invention, described herein are automated systems, methods and tools that automatically extract regions of a given design and automatically generate one or more premium finish masks specific to the given design while requiring little or no human intervention. In accordance with other aspects of the invention, a set of web-enabled tools are provided which allow a customer to provide an image and to indicate regions of the image for application of premium finish. In accordance with other aspects of the invention, there are provided simple intuitive user interfaces for indicating regions of an image to be finished with a secondary finish. In accordance with other aspects of the invention, methods for capturing a designer's or customer's intent to premium finish specific parts of an image in the most intuitive, simple, reliable and useful way are described.
As is well known and understood in the art, color images displayed on computer monitors are comprised of many individual pixels with the displayed color of each individual pixel being the result of the combination of the three colors red, green and blue (RGB). In a typical display system providing twenty-four bits of color information for each pixel (eight bits per color component), red, green and blue are each assigned an intensity value in the range from 0, representing no color, to 255, representing full intensity of that color. By varying these three intensity values, a large number of different colors can be represented. Transparency is often defined through a separate channel (called the “alpha channel”) which includes a value that ranges between 0 and 100%. The transparency channel associated with each pixel is typically allocated 8 bits, where alpha channel values range between 0 (0%) and 255 (100%) representing the resulting proportional blending of the pixel of the image with the visible pixels in the layers below it.
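The proportional blending described above can be illustrated with a short sketch. The following is a minimal example (the helper name blend_pixel is our illustrative choice, not part of the described system), assuming 8-bit color components and an 8-bit alpha channel:

```python
def blend_pixel(fg, bg, alpha):
    """Alpha-composite one RGB foreground pixel over a background pixel.

    fg, bg: (r, g, b) tuples with 8-bit components.
    alpha:  8-bit alpha value, 0 = fully transparent, 255 = fully opaque.
    """
    return tuple(
        round((alpha * f + (255 - alpha) * b) / 255)
        for f, b in zip(fg, bg)
    )

# Fully opaque: the foreground pixel wins.
print(blend_pixel((200, 16, 16), (0, 0, 0), 255))  # (200, 16, 16)
# Fully transparent: the background shows through.
print(blend_pixel((200, 16, 16), (0, 0, 0), 0))    # (0, 0, 0)
```

The endpoints 0 and 255 correspond to the 0% and 100% transparency values described above, with intermediate alpha values producing a proportional blend of the two pixels.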
In general, given an image from which to generate a corresponding secondary finish mask, the image is segmented and then certain segments are selected for secondary finish, from which a corresponding mask image is automatically generated.
Turning now in detail to
As will be discussed in more detail hereinafter, in a preferred embodiment the original design image is segmented by first applying a smoothing filter which preserves strong edges (such as the well-known mean-shift filtering technique) (step 2a) followed by extracting color-based segment labels (for example, using connected component labeling based on color proximity) (step 2b).
As will also be discussed in more detail hereinafter, the selection of segments designated for premium finish received in step 3 may be obtained in one of several ways. In a first embodiment, the system is simply instructed which segments are to be premium finished; that is, the system receives the selection parameters from a remote entity (step 3a). In an alternative embodiment, the system heuristically determines the set of segments to be selected (also step 3a). For example, the system may be configured to receive a selected area of the design image (which can be the entire design, an area one to a few pixels just inside the outer boundaries of the image, or a portion of the image), and the system may be configured to separate the foreground from the background and to automatically select only the foreground areas for premium finish. In another embodiment, a set of user tools can be exposed to the user (step 3b) to allow the user to individually select and/or deselect individual segments.
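One simple way to approximate the heuristic foreground selection described above (a selection area at the outer boundary of the image, from which foreground is separated from background) is to flood-fill inward from the image border and treat every unreached pixel as foreground. This is an illustrative sketch under simplifying assumptions we chose for brevity (a roughly uniform background color sampled at the corner, grayscale values), not the patent's actual method:

```python
from collections import deque

def foreground_mask(pixels, tolerance=0):
    """Return a boolean grid: True where a pixel is foreground.

    pixels: 2-D list of grayscale values. Background is assumed to be
    whatever color touches the border; a flood fill seeded from every
    border pixel marks the background, and all remaining pixels are
    treated as foreground.
    """
    h, w = len(pixels), len(pixels[0])
    background = [[False] * w for _ in range(h)]
    queue = deque()
    # Seed the flood fill with every border pixel.
    for x in range(w):
        queue.append((0, x)); queue.append((h - 1, x))
    for y in range(h):
        queue.append((y, 0)); queue.append((y, w - 1))
    seed_color = pixels[0][0]
    while queue:
        y, x = queue.popleft()
        if not (0 <= y < h and 0 <= x < w) or background[y][x]:
            continue
        if abs(pixels[y][x] - seed_color) > tolerance:
            continue  # color differs from the background: foreground
        background[y][x] = True
        queue.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return [[not background[y][x] for x in range(w)] for y in range(h)]

# A 5x5 image: 0 = background, 9 = a dark foreground blob.
img = [[0, 0, 0, 0, 0],
       [0, 9, 9, 0, 0],
       [0, 9, 9, 0, 0],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0]]
mask = foreground_mask(img)
print(mask[1][1], mask[0][0])  # True False
```

Note that enclosed holes not reachable from the border are treated as foreground by this sketch, a limitation of the same kind discussed for off-the-shelf background-removal tools.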
In additional embodiments, the system and method may determine a selection confidence measure (step 8a) and compare it against a minimum confidence threshold (step 8b). In an embodiment, if the value of the selection confidence measure meets or exceeds the minimum confidence threshold, this indicates that the system is confident that the selected set of segments reflects the intent of the user/designer to designate those segments for premium finish. In such case, it may be desirable to display a preview image of the final product containing the premium finish in the selected areas of the design (step 9). If the value of the selection confidence measure does not meet the minimum confidence threshold, and/or optionally even in the case where the selection confidence measure does meet the minimum confidence threshold, the segmented image may be sent for human review and touch-up to finalize the selection of segments designated for premium finish.
One or more client computer(s) 110 (only one shown) is conventionally equipped with one or more processors 112, computer storage memory 113, 114 for storing program instructions and data, respectively, and communication hardware 116 configured to connect the client computer 110 to the server 120 via the network 101. The client 110 includes a display 117 and input hardware 118 such as a keyboard, mouse, etc., and executes a browser 119 which allows the customer to navigate to a web site served by the server 120 and displays web pages 127 served from the server 120 on the display 117.
Memory 122, 126, 113, and 114 may be embodied in any one or more computer-readable storage media of one or more types, such as but not limited to RAM, ROM, hard disk drives, optical drives, disk arrays, CD-ROMs, floppy disks, memory sticks, etc. Memory 122, 126, 113, and 114 may include permanent storage, removable storage, and cache storage, and further may comprise one contiguous physical computer readable storage medium, or may be distributed across multiple physical computer readable storage media, which may include one or more different types of media. Data memory 126 may store web pages 127, typically in HTML or other Web browser renderable format to be served to client computers 110 and rendered and displayed in client browsers 119. Data memory 126 also includes a content database 129 that stores content such as various layouts, patterns, designs, color schemes, font schemes and other information used by the server 120 to enable the creation and rendering of product templates and images. Co-owned U.S. Pat. No. 7,322,007 entitled “Electronic Document Modification”, and U.S. Pat. Publication No. 2005/0075746 A1 entitled “Electronic Product Design”, each of which describes a Web-based document editing system and method using separately selectable layouts, designs, color schemes, and font schemes, are each hereby incorporated by reference in their entirety into this application.
In preface to a more detailed discussion of the premium finish mask generation tool 130, which in one embodiment executes on the server 120 (but could alternatively be executed on a stand-alone system such as the client computer system 110), a brief discussion of digital image storage and display is useful. A digital image is composed of a 2-dimensional grid of pixels. For example, with reference to
A product vendor, a customer of a product vendor, or product designer may wish to generate a design, based on an image such as a photograph or graphic, which may be applied to a product, where the design incorporates one or more regions of premium finish. In more advanced applications, it may be desirable to allow a customer to upload an image to be converted to a design and to designate certain regions of the design (i.e., image segments) as premium finish regions. The premium finish regions are applied to the product via a separate process from the rest of the design. For example, the product may be a printed product such as a business card, and a customer may wish to have their company logo or customized text or even a photograph converted to a design that includes foiled areas. The addition of foil to a printed design requires a printing process and a foiling process, which are separate processes. An example of a foiling process is stamping, in which a metal plate engraved with an image of the areas to be foiled first strikes a foil film, causing the foil film to adhere to the engraved plate, and then strikes the substrate (e.g., business card) to transfer the foil to the desired areas of the design. Another foiling process is hot stamping (i.e., foil stamping with heat applied), while another type of foiling process involves applying adhesive ink in areas to be foiled, followed by application of a foil layer over the entire image, followed by foil removal, such that foil remains only in the areas where it adheres to the adhesive ink. Clearly, each foil process is a separate process from the printing process, and requires a different input, namely the mask image indicating the areas for application of foil. Other types of premium finish also require different and separate processes.
Embossment, for example, is typically performed by compressing a substrate (e.g., business card stock) between two dies engraved with an image representing areas where the substrate is to be embossed, one die having raised areas where the image is to be embossed and the other having recessed areas where the image is to be embossed. Clearly, the embossment process is a completely separate process from the printing of the design on the substrate. In general, then, application of a premium finish requires separate processing from the application of the primary finish, and the premium finish processing utilizes the premium finish mask image as instruction of where the premium finish should be deposited/applied to the product.
When a premium finish design is to be generated based on an image, areas of the image must be mapped to discrete segments of the image. In one embodiment, the problem of separating the image into segments and then determining which segments to select for premium finish is treated at least in part by determining and separating background regions from foreground regions of the image. In order to handle all types of image content, the background/foreground separation is performed for the general case (i.e., distinguishing between content in the foreground versus a multicolor background), rather than only from a single-color background.
In a preferred embodiment, segments of the image are extracted by first smoothing the image using a smoothing filter 131 (see
The image smoothing filter 131 may be implemented using other edge-preserving smoothing techniques in place of mean shift filtering. For example, an anisotropic filter may be applied instead, which sharpens detected boundaries by smoothing only along the detected edges where strong edges are detected, and smooths in all directions in areas where no (or only weak) edges are detected. Still other smoothing algorithms can be applied without departing from the scope of the invention, so long as the stronger edges in the image are preserved and the resulting smoothed image assists the final segmentation by reducing the final number of color segments to a manageable number (each of which can be independently selected or deselected) while simultaneously preserving significant image features.
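As an illustration of the edge-preserving property these smoothing techniques share, the sketch below implements a small bilateral-style filter for grayscale images. This is a stand-in we chose for illustration, not the mean-shift filter named in the preferred embodiment, but it exhibits the same key behavior: flat regions are smoothed while strong edges survive.

```python
import math

def bilateral_smooth(pixels, radius=1, sigma_s=1.0, sigma_r=25.0):
    """Edge-preserving smoothing of a 2-D grayscale image.

    Each output pixel is a weighted average of its neighbors, where the
    weight falls off both with spatial distance (sigma_s) and with
    difference in intensity (sigma_r), so averaging across a strong
    edge is suppressed and the edge is preserved.
    """
    h, w = len(pixels), len(pixels[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        spatial = math.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                        diff = pixels[ny][nx] - pixels[y][x]
                        range_w = math.exp(-(diff * diff) / (2 * sigma_r ** 2))
                        wgt = spatial * range_w
                        num += wgt * pixels[ny][nx]
                        den += wgt
            out[y][x] = num / den
    return out

# A hard edge between 0 and 200 survives smoothing almost untouched,
# because the range weight suppresses averaging across the edge.
row = [0, 0, 0, 200, 200, 200]
smoothed = bilateral_smooth([row])
```

With the intensity difference (200) far exceeding sigma_r, pixels on one side of the edge contribute essentially nothing to the other side, so the step edge remains sharp after filtering.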
Once the original image is smoothed by the smoothing filter 131, the image segmentation engine 132 (see
Below is a section of pseudo-code that exemplifies one implementation of the image segmentation engine 132 using connected component labeling based on color proximity:

First pass: pixel labeling
- Input: image with edge-preserving smoothing applied (2-dimensional array colors[height, width])
- Output: pixel label data (2-dimensional array labels[height, width])

Second pass: calculating colors of segments
- Input: image with edge-preserving smoothing applied (2-dimensional array colors[height, width])
- Input: pixel label data (2-dimensional array labels[height, width])
- Output: segmented image (2-dimensional array segments[height, width])
The result of the above algorithm is a segmented image labeled based on colors. Each segment comprises an exclusive subset of pixels set to the same color.
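The two-pass approach outlined above can be sketched as follows. This is our illustrative rendering, not the patent's actual code (the function names and grayscale simplification are ours; RGB would work the same way with a per-channel distance): a region-growing first pass labels pixels by color proximity, and a second pass flattens each segment to its average color.

```python
from collections import deque

def segment_by_color(colors, tolerance=10):
    """Label connected regions of similar color, then flatten each
    region to its average color. Returns (labels, segments)."""
    h, w = len(colors), len(colors[0])
    labels = [[-1] * w for _ in range(h)]
    next_label = 0

    # First pass: pixel labeling via region growing on color proximity.
    for y in range(h):
        for x in range(w):
            if labels[y][x] != -1:
                continue
            queue = deque([(y, x)])
            labels[y][x] = next_label
            while queue:
                cy, cx = queue.popleft()
                for ny, nx in ((cy + 1, cx), (cy - 1, cx),
                               (cy, cx + 1), (cy, cx - 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and labels[ny][nx] == -1
                            and abs(colors[ny][nx] - colors[cy][cx]) <= tolerance):
                        labels[ny][nx] = next_label
                        queue.append((ny, nx))
            next_label += 1

    # Second pass: set every pixel to the average color of its segment.
    totals = [0] * next_label
    counts = [0] * next_label
    for y in range(h):
        for x in range(w):
            totals[labels[y][x]] += colors[y][x]
            counts[labels[y][x]] += 1
    segments = [[totals[labels[y][x]] // counts[labels[y][x]]
                 for x in range(w)] for y in range(h)]
    return labels, segments

img = [[10, 12, 200],
       [11, 13, 205],
       [10, 12, 198]]
labels, segments = segment_by_color(img)
# Two segments result: a dark region and a light region, each set
# to its own average color.
```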
In an alternative embodiment, separation of the foreground regions from the background regions may be performed by one of several available tools, for example the “Remove Background” tool available in MS Office 2010, available from Microsoft Corporation, which implements a version of an algorithm referred to as the “GrabCut” algorithm, described in detail in C. Rother, V. Kolmogorov, and A. Blake, “GrabCut: Interactive foreground extraction using iterated graph cuts”, ACM Trans. Graph., vol. 23, pp. 309-314, 2004, which is hereby incorporated by reference for all that it teaches. Such products work well on photographic images but are not effective at detecting holes in the foreground and understanding that the holes are part of the background. This can be a problem when important areas of the image contain text: the amount of work required to remove holes from line-art type images is very large, since one would need to deselect every hole inside each individual letter/character when text is present, and this can be time-consuming. Furthermore, the currently available solutions are not very predictable: a small change in the position or size of the selection rectangle can cause abrupt changes in what gets removed (i.e., the algorithm is not local).
Other methods of performing color segmentation of the smoothed image may be applied in place of connected component labeling based on color proximity without departing from the scope of the invention. For example, in an alternative embodiment to separation of the foreground areas from the background areas of the image, it may instead be convenient to segment the areas of the image based on colors contained in the image itself. In general, a color segmented image is generated by iteratively merging pixel colors based on pixel color similarity and edge strength until the image contains only a predetermined number of colors or predetermined number of color segments. This process is known in the image processing art as “color reduction”. During a color reduction process, all of the pixels in the image are classified and set to the nearest one of a reduced number of colors. The reduced set of colors can be determined by iterative color merging, or by identifying the dominant or most frequently used colors in the image.
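A minimal sketch of the frequency-based variant of color reduction described above, using the most frequently occurring values as the reduced palette (the function name and the grayscale simplification are our illustrative assumptions):

```python
from collections import Counter

def reduce_colors(pixels, k=2):
    """Reduce a 2-D grayscale image to its k most frequent values,
    snapping every pixel to the nearest surviving palette value."""
    counts = Counter(v for row in pixels for v in row)
    palette = [v for v, _ in counts.most_common(k)]
    return [[min(palette, key=lambda p: abs(p - v)) for v in row]
            for row in pixels]

img = [[10, 11, 10, 250],
       [10, 249, 250, 250]]
print(reduce_colors(img, k=2))
# [[10, 10, 10, 250], [10, 250, 250, 250]]
```

Here the dominant values 10 and 250 survive, and near-miss values such as 11 and 249 are classified to the nearest palette entry, yielding an image with only two color segments.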
Upon production of a segmented image by the image segmentation engine 132, segments are designated for application of premium finish by the segment selection engine 133. The selection of the set of segments designated for premium finish may vary depending on the particular use of the system. In one embodiment, as indicated at step 3a in
Referring now to
For some systems, selecting the entire foreground portion of the image for premium finish may be what is desired. In other systems, however, premium finish is used more for highlighting certain features in the image, and thus it would not be appropriate and/or desired to apply premium finish to the entirety of the foreground regions of the image. One of the goals of the systems, methods and tools presented herein is to make premium finish mask creation as simple as possible for both designers and end-user customers of products incorporating applied designs. According to one aspect, this is achieved by simply automating the selection of the segments for premium finish. However, should it be desired to select less than the entire foreground content of the image, the determination of the segments to be premium finished can be quite difficult due to the amount of detail in most photographic images. Accordingly, in some embodiments, as indicated in step 3b of
In an embodiment the system and method includes tools and steps for exposing user tools on a user's (i.e., designer's or customer's) display on a client computer system 110 (step 3b in
The graphical user interface 200 may include one or more user tools 204, 205, 206 to individually select and/or deselect segments of the design (including segments of the image contained in image container 220) to include the user-selected segments or exclude the user-deselected segments from the set of segments designated for premium finish.
In the illustrative embodiment, the user segment selection/deselection tools include a Rectangle Selection tool 204, a FreeForm Selection tool 205, and a Selection/Deselection marker tool 206.
In
The FreeForm drawing may not always allow the level of accuracy that a user intends. For example, due to the imprecise mapping of cursor movement to pixel position, segments that the user does not intend to premium finish may occasionally be selected unintentionally, as illustrated in the gradient image in
In
When the user is satisfied with the premium finish selection, the user can save the design by clicking the save button 211 using the cursor 203. Furthermore, the user can instruct the system to generate a premium finish mask corresponding to the design by clicking on the Create Mask button 212 using the cursor 203. Alternatively, the mask creation may be performed automatically by the system when the user saves the design.
It is to be noted that the selection Rectangle 231, the selection FreeForm shape 232, and the selection/deselection markers 232, are not part of the design itself but are rendered on the user's display on a separate layer over the rendered design in work area 202 merely to allow the user to view where the user has placed the selection areas. In implementation, once a portion (or all) of the image is selected by a selection Rectangle or a selection FreeForm shape, the system analyzes the boundary of the selection shape and only selects segments, based on the color-segmented version (
In general, the selection engine operates to select for premium finish all segments within selection area bounded by a selection boundary. In embodiments which perform automatic selection of the entire foreground, for example in systems where user tools may not be offered, the automatic selection is achieved by automatically setting the selection area boundary to the outer pixels (or near the outer edges) of the image, as illustrated at 44a in
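The selection rule described above, in which the segments within the area bounded by a selection boundary are selected for premium finish, can be sketched for a rectangular boundary as follows. The function name and the entirely-within interpretation (segments crossing the boundary are not selected) are our illustrative assumptions:

```python
def segments_in_rect(labels, top, left, bottom, right):
    """Return the set of segment labels lying entirely within the
    (inclusive) rectangle; segments crossing the boundary are skipped.

    labels: 2-D grid of per-pixel segment labels.
    """
    inside, outside = set(), set()
    for y, row in enumerate(labels):
        for x, lab in enumerate(row):
            if top <= y <= bottom and left <= x <= right:
                inside.add(lab)
            else:
                outside.add(lab)
    # A segment is selected only if none of its pixels fall outside.
    return inside - outside

labels = [[0, 0, 1, 1],
          [0, 2, 2, 1],
          [0, 2, 2, 1]]
# A rectangle covering rows 1-2, columns 1-2 fully contains segment 2 only.
print(segments_in_rect(labels, 1, 1, 2, 2))  # {2}
```

Setting the rectangle to the full image bounds selects every segment, which mirrors the automatic whole-foreground selection case in which the boundary is placed at (or just inside) the outer edges of the image.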
Important to the ease of use of the present invention is that the segments automatically selected for premium finish accurately and aesthetically reflect what the user or other human would believe is a desirable and pleasing application of the premium finish. In other words, is the premium finish on the final product going to be applied in areas that enhance the overall appearance of the design, and does it make sense to an average consumer of the product?
In this regard, in a preferred embodiment, the system 100, and in particular the premium finish mask generation tool 130, includes a selection confidence calculator 134 (see
The system is thus preferably equipped with a selection confidence calculator 134 which calculates a value reflecting the level of confidence that the system's selection of segments for premium finish corresponds well to a human judgment about the quality of that selection. In an embodiment, the selection confidence calculator 134 generates a number (for example, but not by way of limitation, in the range [0 . . . 1]), referred to herein as the “Selection Confidence”, which in one embodiment reflects the degree of separation of foreground from background. In an embodiment, after the segmented image is generated, the selection confidence calculator 134 calculates the gradient of the original image (
The human eye is sensitive to image features such as edges. In image processing (by computers), image features are most easily extracted by computing the image gradient. The gradient between two pixels represents how quickly the color is changing and includes a magnitude component and a directional component. For a grayscale image of size M×N, with each pixel at coordinate (x, y), where x ranges over [0 . . . M−1] and y ranges over [0 . . . N−1], defined by a function f(x, y), the gradient of f at coordinates (x, y) is defined as the two-dimensional column vector

\[ \nabla f = \begin{bmatrix} g_x \\ g_y \end{bmatrix} = \begin{bmatrix} \partial f / \partial x \\ \partial f / \partial y \end{bmatrix} \]

The magnitude of the vector is defined as:

\[ \mathrm{mag}(\nabla f) = \sqrt{g_x^2 + g_y^2} \]

For a color image of size M×N, each pixel at coordinate (x, y) is defined by a vector having a Red component value R(x, y), a Green component value G(x, y), and a Blue component value B(x, y). The pixel can be notated by a vector of color components as:

\[ c(x, y) = \begin{bmatrix} R(x, y) \\ G(x, y) \\ B(x, y) \end{bmatrix} \]

Defining r, g, and b as unit vectors along the R, G, and B axes of the RGB color space, we can define the vectors

\[ u = \frac{\partial R}{\partial x} r + \frac{\partial G}{\partial x} g + \frac{\partial B}{\partial x} b, \qquad v = \frac{\partial R}{\partial y} r + \frac{\partial G}{\partial y} g + \frac{\partial B}{\partial y} b \]

and then define the terms g_xx, g_yy, and g_xy in terms of the dot products of these vectors, as follows:

\[ g_{xx} = u \cdot u, \qquad g_{yy} = v \cdot v, \qquad g_{xy} = u \cdot v \]

The direction of the gradient of the vector c at any point (x, y) is given by:

\[ \theta(x, y) = \tfrac{1}{2} \tan^{-1} \left[ \frac{2 g_{xy}}{g_{xx} - g_{yy}} \right] \]

and the magnitude of the rate of change at (x, y), in the direction of θ, is given by:

\[ F(\theta) = \left\{ \tfrac{1}{2} \left[ (g_{xx} + g_{yy}) + (g_{xx} - g_{yy}) \cos 2\theta + 2 g_{xy} \sin 2\theta \right] \right\}^{1/2} \]
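The quantities gxx, gyy, gxy, θ, and the directional rate of change described above can be computed directly. The following is a minimal pure-Python sketch using forward differences for the partial derivatives (our simplification for illustration; practical implementations typically use Sobel-style derivative operators):

```python
import math

def color_gradient_magnitude(image):
    """Per-pixel color (Di Zenzo-style) gradient magnitude for an RGB image.

    image: 2-D list of (R, G, B) tuples. Forward differences are used
    for the partial derivatives; the last row/column keep zero gradient.
    """
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            # Partial derivatives of each channel in x and y.
            u = [image[y][x + 1][c] - image[y][x][c] for c in range(3)]
            v = [image[y + 1][x][c] - image[y][x][c] for c in range(3)]
            gxx = sum(a * a for a in u)
            gyy = sum(a * a for a in v)
            gxy = sum(a * b for a, b in zip(u, v))
            # Direction of maximal rate of change, then its magnitude.
            theta = 0.5 * math.atan2(2 * gxy, gxx - gyy)
            f2 = 0.5 * ((gxx + gyy) + (gxx - gyy) * math.cos(2 * theta)
                        + 2 * gxy * math.sin(2 * theta))
            out[y][x] = math.sqrt(max(f2, 0.0))
    return out

# A vertical edge between a red column and a blue column shows up as a
# band of high gradient magnitude; flat regions stay at zero.
img = [[(255, 0, 0), (255, 0, 0), (0, 0, 255), (0, 0, 255)]
       for _ in range(3)]
grad = color_gradient_magnitude(img)
```

In flat regions all partial derivatives vanish and the magnitude is zero, while at the red/blue boundary the opposing R and B channel changes both contribute to gxx, producing a strong response of the kind the Selection Confidence calculator can compare between the original and segmented images.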
When the gradient is calculated across the image 30, the newly introduced edges 22a and 22b are easily detectable. A comparison of the gradient calculated across image 20 and across image 30 will reveal the newly introduced edges 22a, 22b, and this information can be used to inform the Selection Confidence calculator 134 that the selection of one or the other (but not both) of the segments 21a and 21b may result in an undesired premium finish result. That is, if one or the other of the segments 21a and 21b is selected for premium finish, the premium finish will only serve to highlight the newly introduced edge(s) 22a, 22b, which did not exist in the original image. This can adversely affect the aesthetics of the finished product.
Color segmentation can also result in the elimination of original boundaries. For example, referring again to
Referring now to
The gradient image of the original image of
Returning to
In an embodiment, in optional step 10a the design image and segment selections associated with the original design are sent to a human reviewer, who reviews the selected segments and touches them up in the case that the system-generated segment selection over-included or under-included segments for premium finish. For example, the segmentation may include small detailed segments that the user would not be aware of, which may inadvertently be included in the selection area and be considered foreground segments. While it would be difficult for a computerized system to recognize that such a segment should not be included for premium finish, a human reviewer would easily recognize that such segment(s) are not part of the foreground image and can remove the selection of such segments.
Step 10a may be performed only when the Selection Confidence does not meet the minimum confidence threshold, or may alternatively be performed even when the Selection Confidence meets the threshold.
In an embodiment, it may be desired to present a preview image of the finished product, for example using the techniques described in U.S. patent application Ser. No. 13/973,396, filed Aug. 22, 2013, and entitled “Methods and Systems for Simulating Areas of Texture of Physical Product on Electronic Display”, which is incorporated by reference herein for all that it teaches, in order to show the user/designer what the final product will look like with the premium finish applied.
In a system which sends designs and segment selections to a human reviewer for final touch-up, and especially in cases where the Selection Confidence is below the minimum confidence threshold such that the human reviewer would be likely to make noticeable changes to the set of segments selected for premium finish, it may not be desirable to display a preview image. Thus, in an embodiment, a preview image of the finished product is displayed (step 9) only when the Selection Confidence value meets or exceeds the minimum confidence threshold (for example, a 90% gradient match). In such an embodiment, if the Selection Confidence value does not meet the threshold, the preview image is not displayed.
In yet another embodiment, if the Selection Confidence value meets or exceeds the minimum confidence threshold, for example when the minimum confidence threshold is set to a high value (such as, for example and not limitation, 98%), the human review in step 10a may be skipped.
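The confidence-based routing described above can be sketched as follows. This is an illustrative sketch only: the function name, return values, and threshold defaults are assumptions for illustration (the text cites 90% and 98% as non-limiting example thresholds).

```python
def route_selection(selection_confidence,
                    min_threshold=0.90,
                    skip_review_threshold=0.98):
    """Illustrative routing of a design based on its Selection Confidence.

    Returns a (next_step, show_preview) pair. Threshold values are
    example figures from the text, not fixed system parameters.
    """
    if selection_confidence < min_threshold:
        # Below the minimum: send to a human reviewer and suppress the preview
        return ("human_review", False)
    if selection_confidence >= skip_review_threshold:
        # Very high confidence: human review (step 10a) may be skipped
        return ("generate_mask", True)
    # Meets the minimum: show the preview; human review remains optional
    return ("optional_review", True)
```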
Once the segments designated for premium finish are selected and finalized, the system generates a mask image indicating the selected segments (step 4). In this regard, in an embodiment a mask image is generated from a design image based on the segment selections. In an embodiment, in the corresponding mask image, pixels corresponding to the selected segments of the received image are set to a first predetermined RGBA (RGB plus Alpha) value, while pixels corresponding to the remaining portions of the image are set to a different predetermined RGBA value. In an embodiment, pixels corresponding to premium finish segments are set to a White RGB value, whereas pixels corresponding to non-premium finish segments are set to a Black RGB value. Other mask file formats and other color schemes may alternatively be used.
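A minimal sketch of the mask-generation step described above, using plain Python lists of per-pixel segment labels for clarity (a production implementation would operate on image buffers and could emit RGBA values instead; the white/black convention follows the embodiment above):

```python
WHITE = 255  # pixel value for premium finish segments (per the embodiment)
BLACK = 0    # pixel value for non-premium (primary) finish areas

def generate_mask(segment_labels, selected_segments):
    """Build a mask image from a label image.

    segment_labels : 2-D list of integers, one segment label per pixel.
    selected_segments : collection of labels designated for premium finish.
    Pixels belonging to selected segments are set white; all others black.
    """
    selected = set(selected_segments)
    return [[WHITE if label in selected else BLACK for label in row]
            for row in segment_labels]
```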
Optionally, the original design image and corresponding mask image, and optionally the segmented image from which the mask image is generated, are passed to a human reviewer for a final review prior to manufacture of the design on a product. For example, in a production environment, members of a human production support staff may review designs and corresponding masks prior to manufacture to ensure that the mask image enhances areas which make sense using human judgment and does not enhance areas of the image that would not make sense. If the human reviewer judges that the automatically generated mask image indicates areas of premium finish that do not make sense based on the original image, or does not include areas that the human judges should be included in the premium finish, the human reviewer may touch up the mask image by adding or removing areas indicated for premium finish (step 10b).
In application, the image itself, or a design based on the image, is manufactured on a product (step 5). In the illustrative embodiment shown in
The order in which the primary finish and the premium finish are applied can vary depending on the desired finished effect. In one embodiment, the design is applied using the primary finish first (step 6a) and the premium finish is then applied to areas of the product as indicated in the mask image (step 7a). In an alternative embodiment, the premium finish is applied first (step 6b), with the mask image indicating the areas of the card stock to which the premium finish is applied. The primary finish is applied second (step 7b)—in the illustrative embodiment, for example, the design (as indicated in the design image) is then printed on the card stock (such that ink may be printed over the premium finish). It is to be noted that the segments designated as premium finish as described herein may be designated as such only for the purposes of premium finish mask generation. In general, the original design image will still be finished in full using the primary finish, whereas the premium finish will be applied only in areas indicated by the mask image. For example, in the embodiment illustrated in
The method disclosed in
In a second use case, the user actively requests a premium finish product and provides or selects the original image, but in this use case the user is provided with a set of user tools (step 3b) to allow the user to select one or more areas of the design in which premium finish is desired. In this use case, the system must detect and decode the user's selection to identify the segments of the image on which to base the premium finish mask image. The Selection Confidence may be calculated (step 8a) to ensure that the selected segments make sense according to human judgment given the original image; if the threshold is met, the mask is automatically generated, and otherwise the image and segment selection and/or mask image is sent to a human reviewer for touch-up (step 10a and/or 10b).
In a third use case, the user may provide a design image which is desired to be applied to a physical product, and the system then automatically segments the image, automatically selects segments for designation as premium finish (step 3b), and a selection confidence engine calculates the Selection Confidence (step 8a). Then, if the Selection Confidence value meets or exceeds a minimum confidence level (step 8b), a preview image of the design with premium finish applied according to the automatically selected segments is generated and shown to the user (step 9), and a product upgrade to a premium finished version of the product is offered to the user (step 11), namely, the same product designed by the user but enhanced with the application of premium finish. If the user accepts the offer (determined in step 12), the method continues as described.
It will be appreciated from the above discussion that the systems, methods and tools described herein allow a simple, efficient way to quickly generate a mask image indicating premium finish areas of a product to be manufactured. While embodiments are described in the context of generating mask images for applying premium finish (such as foil, gloss, or other separately applied texture) to a business card or other printed product, the invention is not so limited. The principles described herein may be applied to the manufacture of any product that requires a separate mask image for the application of a second finish to a physical product.
Those of skill in the art will appreciate that the invented method and apparatus described and illustrated herein may be implemented in software, firmware or hardware, or any suitable combination thereof. Preferably, the method and apparatus are implemented in software, for purposes of low cost and flexibility. Thus, those of skill in the art will appreciate that the method and apparatus of the invention may be implemented by a computer or microprocessor process in which instructions are executed, the instructions being stored for execution on a computer-readable medium and being executed by any suitable instruction processor. Alternative embodiments are contemplated, however, and are within the spirit and scope of the invention.
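The foreground/background classification recited in the claims below (a segment is foreground only if it is fully contained within the selection area; segments outside of, or overlapping, the selection boundary are background) can be sketched as follows. The data representation (segments and selection area as sets of pixel coordinates) is an assumption for illustration only.

```python
def classify_segments(segments, selection_area):
    """Classify segments against a selection area.

    segments : dict mapping a segment label to the set of (x, y) pixel
        coordinates belonging to that segment.
    selection_area : set of (x, y) coordinates inside the selection boundary.
    A segment is foreground only when every one of its pixels lies inside
    the selection area; any segment outside of, or merely overlapping,
    the boundary is classified as background.
    """
    foreground, background = set(), set()
    for label, pixels in segments.items():
        # set <= set tests full containment of the segment in the selection
        (foreground if pixels <= selection_area else background).add(label)
    return foreground, background
```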
Claims
1. A computerized apparatus executing a server which segments and designates premium finish regions based on an image received from a client device in communication with the server via a communications network, the server operating to:
- generate from the received image a segmented image having a plurality of pixels corresponding to the respective plurality of pixels of the received image, the segmented image comprising discrete segments each comprising a subset of pixels;
- process the segmented image to classify one or more of the discrete segments as foreground segments and one or more of the discrete segments as background segments by receiving a selection area boundary that corresponds to a selection area of the segmented image, classifying all segments that are fully contained within the selection area of the segmented image as foreground segments, and classifying all segments outside of or that overlap the selection area of the segmented image as background segments; and
- designate the foreground segments as premium finish regions and the background segments as primary finish regions; and
- provide the designation of the premium finish regions to the client device via the communications network.
2. The computerized apparatus of claim 1, the server further operating to:
- generate a mask image corresponding to the received image, the mask image having first areas indicating the primary finish regions and second areas indicating the premium finish regions; and
- provide the mask image as the designation of the premium finish regions to the client device via the communications network.
3. The computerized apparatus of claim 1, the server computer operating to generate the segmented image by
- smoothing the received image while preserving strong edges, and
- generating a color-segmented image from the smoothed image through extraction of segment labels based on color.
4. The computerized apparatus of claim 3, the server computer smoothing the received image by applying mean-shift filtering to the received image to generate the smoothed image.
5. The computerized apparatus of claim 4, the server computer generating the color-segmented image by applying connected component labeling to the smoothed image based on color proximity.
6. A computer implemented method for segmenting and designating premium finish regions based on an image at a client device, the client device in communication via a communications network with a server executing on a computer device, the method comprising:
- sending by the client device the image to the server via the communications network;
- receiving by the client device, from the server via the communications network, a designation of the premium finish regions in the image, the designation of the premium finish regions generated by the server by
- generating from the received image a segmented image having a plurality of pixels corresponding to the respective plurality of pixels of the received image, the segmented image comprising discrete segments each comprising a subset of pixels;
- processing the segmented image to classify one or more of the discrete segments as foreground segments and one or more of the discrete segments as background segments by receiving a selection area boundary that corresponds to a selection area of the segmented image, classifying all segments that are fully contained within the selection area of the segmented image as foreground segments, and classifying all segments outside of or that overlap the selection area of the segmented image as background segments; and
- designating the foreground segments as premium finish regions and the background segments as primary finish regions.
7. The method of claim 6, the designation of the premium finish regions generated by
- generating a mask corresponding to the received image, the mask image having first areas indicating the primary finish regions and second areas indicating the premium finish regions; and
- providing the mask image as the designation of the premium finish regions to the client device via the communications network.
8. The method of claim 6, the server generating the segmented image by
- smoothing the received image while preserving strong edges, and
- generating a color-segmented image from the smoothed image through extraction of segment labels based on color.
9. The method of claim 8, the server smoothing the received image by applying mean-shift filtering to the received image to generate the smoothed image.
10. The method of claim 9, the server generating the color-segmented image by applying connected component labeling to the smoothed image based on color proximity.
11. A computerized apparatus executing a client process in communication with a server via a communications network, the client operating to:
- send an image to the server via the communications network;
- receive, from the server via the communications network, a designation of premium finish regions for the image, as generated by the server, the server generating the designation of premium finish regions by generating from the received image a segmented image having a plurality of pixels corresponding to the respective plurality of pixels of the received image, the segmented image comprising discrete segments each comprising a subset of pixels;
- processing the segmented image to classify one or more of the discrete segments as foreground segments and one or more of the discrete segments as background segments by receiving a selection area boundary that corresponds to a selection area of the segmented image and classifying all segments that are fully contained within the selection area of the segmented image as foreground segments and classifying all segments outside of or that overlap the selection area of the segmented image as background segments, and
- designating the foreground segments as premium finish regions and the background segments as primary finish regions.
12. The computerized apparatus of claim 11, the designation of the premium finish regions further generated by
- generating a mask image corresponding to the received image, the mask image having first areas indicating the primary finish regions and second areas indicating the premium finish regions; and
- providing the mask image as the designation of the premium finish regions to the client device via the communications network.
13. The computerized apparatus of claim 11, the server generating the segmented image by smoothing the received image while preserving strong edges, and
- generating a color-segmented image from the smoothed image through extraction of segment labels based on color.
14. The computerized apparatus of claim 13, the server smoothing the received image by applying mean-shift filtering to the received image to generate the smoothed image.
15. The computerized apparatus of claim 14, the server generating the color-segmented image by applying connected component labeling to the smoothed image based on color proximity.
Type: Application
Filed: Jun 26, 2017
Publication Date: Nov 16, 2017
Applicant: Cimpress Schweiz GmbH (Winterthur)
Inventor: Vyacheslav Nykyforov (Littleton, MA)
Application Number: 15/633,021