Systems and Methods for Performing Image Inpainting
Various embodiments are disclosed for performing image inpainting. One embodiment is a method for editing a digital image in an image editing device that comprises obtaining a restoration region in the digital image and generating a structure strength map corresponding to the restoration region based on structure characteristics associated with each pixel in the restoration region. Based on the structure strength map, priority levels are determined for pixels in the restoration region. An inpainting operation is applied to the pixels in the restoration region, beginning with a pixel having a highest relative priority determined based on the structure characteristics.
Over the years, digital content has gained increasing popularity with consumers. With the ever-growing amount of digital content available through the Internet on computers, smart phones, and other devices, consumers have access to a vast amount of content. Furthermore, many devices (e.g., smartphones) and services are readily available that allow consumers to capture and generate digital images.
The process of inpainting involves reconstructing lost or deteriorated parts of images and videos. Specifically, restoration algorithms are applied to replace portions of an image. A user, for example, may wish to remove one or more regions within an image containing objects or defects. Some inpainting techniques involve filling in the restoration region in the image by searching for similar patches in a nearby source region of the image and then copying the pixels from the most similar patch into the restoration region.
SUMMARY

Briefly described, one embodiment, among others, is a method for editing a digital image in an image editing device that comprises obtaining a restoration region in the digital image and determining structure information corresponding to the restoration region. Based on the structure information, a structure strength map corresponding to the restoration region is generated. Based on the structure strength map, priority levels are determined for pixels in the restoration region and an inpainting operation is applied to pixels in the restoration region based on the corresponding priority levels.
Another embodiment is a system for editing a digital image. The system comprises a structure descriptor generator configured to determine structure descriptors corresponding to a restoration region within the digital image to undergo an inpainting operation and a structure strength map generator configured to generate a structure strength map corresponding to the restoration region based on the structure descriptors. The system further comprises a prioritizer configured to determine a priority level for pixels in the restoration region based on the structure strength map and an inpainting component configured to apply the inpainting operation to pixels in the restoration region based on the corresponding priority levels.
Another embodiment is a method for editing a digital image in an image editing device that comprises obtaining a restoration region in the digital image and generating a structure strength map corresponding to the restoration region based on structure characteristics associated with each pixel in the restoration region. Based on the structure strength map, priority levels are determined for pixels in the restoration region. An inpainting operation is applied to pixels in the restoration region, beginning with a pixel having a highest relative priority.
Other systems, methods, features, and advantages of the present disclosure will be or become apparent to one with skill in the art upon examination of the following drawings and detailed description. It is intended that all such additional systems, methods, features, and advantages be included within this description, be within the scope of the present disclosure, and be protected by the accompanying claims.
Many aspects of the disclosure can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views.
The process of inpainting involves reconstructing lost or deteriorated parts of images and videos. Specifically, restoration algorithms are applied to replace lost or corrupted portions of an image. Patch matching is a commonly used technique for inpainting. This technique works well in cases where the image exhibits regular texture and where the missing information resulting from removal of an object in the image can be reconstructed using suitable patches from areas in the image that are known (i.e., those areas outside the area to be restored). However, many images comprise unique, non-repetitive structures, and structure information associated with an image is typically not considered during the restoration process, thereby resulting in artifacts.
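To make the patch-matching fill concrete, the following is a minimal sketch in Python/NumPy. The function name, array conventions, and exhaustive search strategy are illustrative assumptions for a single-channel image, not part of the disclosed method: for a target patch in the restoration region, it searches the known area for the most similar source patch, comparing only on pixels that are already known.

```python
import numpy as np

def best_matching_patch(image, mask, target_top_left, patch_size=9):
    """Exhaustively search the known (unmasked) area for the source patch
    most similar to the known pixels of the target patch.

    image: HxW float array; mask: HxW bool array, True inside the
    restoration region. All names here are illustrative, not from the patent.
    """
    h, w = image.shape
    ty, tx = target_top_left
    target = image[ty:ty + patch_size, tx:tx + patch_size]
    target_known = ~mask[ty:ty + patch_size, tx:tx + patch_size]

    best_ssd, best_pos = np.inf, None
    for y in range(h - patch_size + 1):
        for x in range(w - patch_size + 1):
            # Source patches must lie entirely outside the restoration region.
            if mask[y:y + patch_size, x:x + patch_size].any():
                continue
            src = image[y:y + patch_size, x:x + patch_size]
            # Compare only on pixels already known in the target patch.
            diff = (src - target)[target_known]
            ssd = float(np.sum(diff * diff))
            if ssd < best_ssd:
                best_ssd, best_pos = ssd, (y, x)
    return best_pos, best_ssd
```

Once the best source position is found, the pixels of that patch would be copied into the unknown pixels of the target patch; that copy step is omitted here for brevity.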
Various embodiments are disclosed for improving the quality of an image after performing image inpainting by analyzing and utilizing information corresponding to image structure during the reconstruction of image pixels in the restoration region. For some embodiments, a structure strength map is derived and applied during image inpainting in order to ensure structural continuity in the area being restored. One embodiment, among others, is a method for editing a digital image in an image editing device, where the method comprises obtaining a restoration region in the digital image. For example, the restoration region may be manually defined by a user wishing to remove an object from a digital image.
The method further comprises determining structure information corresponding to the restoration region. Based on the structure information, a structure strength map corresponding to the restoration region is generated. Based on the structure strength map, a priority level for each pixel in the restoration region is determined. Priority-based image inpainting is then performed on the restoration region based on the corresponding priority levels, whereby structural continuity throughout the restoration region relative to the remainder of the digital image is maintained.
A system for facilitating image inpainting is now described, followed by a discussion of the operation of the components within the system.
For embodiments where the image editing system 102 is embodied as a smartphone 109 or tablet, the user may interface with the image editing system 102 via a touchscreen interface (not shown). In other embodiments, the image editing system 102 may be embodied as a video gaming console 171, which includes a video game controller 172 for receiving user preferences. For such embodiments, the video gaming console 171 may be connected to a television (not shown) or other display 104.
The image editing system 102 is configured to retrieve, via the media interface 112, digital media content 115 stored on a storage medium 120 such as, by way of example and without limitation, a compact disc (CD) or a universal serial bus (USB) flash drive, wherein the digital media content 115 may then be stored locally on a hard drive of the image editing system 102. As one of ordinary skill will appreciate, the digital media content 115 may be encoded in any of a number of formats including, but not limited to, JPEG (Joint Photographic Experts Group) files, TIFF (Tagged Image File Format) files, PNG (Portable Network Graphics) files, GIF (Graphics Interchange Format) files, BMP (bitmap) files or any number of other digital formats.
As depicted in
The digital camera 107 may also be coupled to the image editing system 102 over a wireless connection or other communication path. The image editing system 102 may be coupled to a network 118 such as, for example, the Internet, intranets, extranets, wide area networks (WANs), local area networks (LANs), wired networks, wireless networks, or other suitable networks, etc., or any combination of two or more such networks. Through the network 118, the image editing system 102 may receive digital media content 115 from another computing system 103. Alternatively, the image editing system 102 may access one or more image sharing websites 134 hosted on a server 137 via the network 118 to retrieve digital media content 115.
The structure descriptor generator 114 in the image editing system 102 is configured to analyze and identify structural attributes of the media content 115 retrieved by the media interface 112 in order to facilitate image inpainting of the media content 115 for editing purposes. For some embodiments, the structure descriptor generator 114 is configured to determine structure information corresponding to the restoration region, where such structure information may be based on, for example, textural details, level of detail (LOD) information, edge information, etc. found in the media content 115 being edited.
The structure strength map generator 116 is configured to generate a structure strength map corresponding to the restoration region based on the structure information derived by the structure descriptor generator 114. Based on the structure strength map, the prioritizer 119 is configured to determine a priority level for each pixel in the restoration region. The inpainting component 122 then performs priority-based image inpainting according to the respective priority levels of each pixel in the restoration region.
The processing device 202 may include any custom made or commercially available processor, a central processing unit (CPU) or an auxiliary processor among several processors associated with the image editing system 102, a semiconductor based microprocessor (in the form of a microchip), a macroprocessor, one or more application specific integrated circuits (ASICs), a plurality of suitably configured digital logic gates, and other well known electrical configurations comprising discrete elements both individually and in various combinations to coordinate the overall operation of the computing system.
The memory 214 can include any one of a combination of volatile memory elements (e.g., random-access memory (RAM, such as DRAM, and SRAM, etc.)) and nonvolatile memory elements (e.g., ROM, hard drive, tape, CDROM, etc.). The memory 214 typically comprises a native operating system 217, one or more native applications, emulation systems, or emulated applications for any of a variety of operating systems and/or emulated hardware platforms, emulated operating systems, etc.
The applications may include application specific software which may comprise some or all the components (media interface 112, structure descriptor generator 114, structure strength map generator 116, prioritizer 119, inpainting component 122) of the image editing system 102 depicted in
Input/output interfaces 204 provide any number of interfaces for the input and output of data. For example, where the image editing system 102 comprises a personal computer, these components may interface with one or more user input devices via the I/O interfaces 204, where the user input devices may comprise a keyboard 106 (
In the context of this disclosure, a non-transitory computer-readable medium stores programs for use by or in connection with an instruction execution system, apparatus, or device. More specific examples of a computer-readable medium may include by way of example and without limitation: a portable computer diskette, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM, EEPROM, or Flash memory), and a portable compact disc read-only memory (CDROM) (optical).
With further reference to
Reference is made to
Although the flowchart of
Beginning with block 310, the image editing system 102 obtains a restoration region in a digital image obtained by the media interface 112 (
In block 320, the structure descriptor generator 114 (
In this regard, a structure descriptor may comprise an edge vector that further comprises an edge magnitude, an edge direction, a vector representing the texture similarity, or a vector representing the level of detail information (or any combination thereof). As described in more detail below, structural attributes corresponding to objects both within the restoration region and outside the restoration region are derived based on edge detection and/or other image details in order to ensure structural continuity during the image inpainting process.
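The disclosure does not prescribe a particular edge detector for deriving the edge-vector descriptors. One plausible instantiation, sketched below with hypothetical names, computes a per-pixel edge magnitude and direction from simple finite-difference gradients:

```python
import numpy as np

def edge_vectors(image):
    """Compute a simple edge-vector descriptor (magnitude, direction) per
    pixel using finite-difference gradients. Illustrative only; the patent
    does not fix a particular edge detector (Sobel, etc. would also work)."""
    gy, gx = np.gradient(image.astype(float))   # gradients along y then x
    magnitude = np.hypot(gx, gy)                # edge strength per pixel
    direction = np.arctan2(gy, gx)              # radians from the x-axis
    return magnitude, direction
```

For a vertical step edge, this yields a nonzero magnitude at the step columns and a direction of zero (pointing along the x-axis), matching the intuition that the gradient is perpendicular to the edge.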
In block 330, the structure strength map generator 116 (
To further illustrate the various concepts disclosed, reference is made to
With reference to
As shown in the example in
Referring back to
Upon retrieving a restoration region 602, the structure descriptor generator 114 (
Reference is made to
Note, however, that for alternative embodiments, the pixels located directly on the boundary of the restoration region 602 may also be sampled. For other embodiments, both pixels on and near the boundary may be sampled. Furthermore, the number of points sampled may be based on a predetermined number. For example, a sample size of 100 pixels (each corresponding to a point B) may be utilized in deriving structure descriptors. In accordance with some embodiments, for each pixel in the restoration region 602 (point P), the correlation is determined relative to an edge vector associated with every pixel (point B) along the boundary.
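Identifying the candidate sample points near the boundary can be done directly from the restoration-region mask. The sketch below (function name and 4-neighborhood choice are assumptions) selects known pixels adjacent to the masked region, i.e., pixels just outside the boundary:

```python
import numpy as np

def boundary_pixels(mask):
    """Find pixels just outside the restoration region: known pixels that
    are 4-adjacent to at least one masked pixel. mask is True inside the
    region. Sampling 'near' rather than 'on' the boundary, per the text."""
    padded = np.pad(mask, 1, constant_values=False)
    # A known pixel whose 4-neighborhood touches the mask is a boundary pixel.
    neighbor_masked = (padded[:-2, 1:-1] | padded[2:, 1:-1] |
                       padded[1:-1, :-2] | padded[1:-1, 2:])
    return np.argwhere(~mask & neighbor_masked)
```

Sampling the pixels on the boundary itself, as in the alternative embodiments, would simply use the masked pixels adjacent to a known pixel instead.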
As shown in
Next, for a given point P (e.g., point P1), a BP vector is defined with respect to every pixel on the boundary of the restoration region (points B1 to BN). That is, for a given point P, BP vectors 702 are derived for every pixel (point B) on the boundary of the restoration region such that BP vectors 702 are formed for every boundary pixel (point B) relative to a common point P, as illustrated in
For example, the correlation between the B1P vector 702 and the edge vector for point B1 is calculated, followed by the correlation between the B2P vector 702 and the edge vector for point B2, and continuing on to the correlation between the BNP vector 702 and the edge vector for point BN, where N represents the total number of boundary pixels. The correlation corresponding to the (P, B) combination that exhibits the strongest correlation between the BP vector 702 and the edge vector of the corresponding point B is determined to be the structure strength value for the point P.
The correlation of a (P, B) combination is a function of the angle (θ) formed between the edge vector of point B and the BP vector extending from point B to point P, as represented by the following expression:
correlation(P,B)=f(θ,BP vector,B edge vector).
In the expression above, the function f( ) may represent, for example, a cosine function, which produces a higher value at 0 or 180 degrees. The function f( ) is also related to the magnitudes of the BP vector and the edge vector of B, as shown in the expression below:
To illustrate, reference is made to
Structure Strength(P) = max_Bi correlation(P, Bi)
That is, for a given point P, the structure strength value is calculated according to the highest correlation value corresponding to a given (P, B) combination when compared to all (P, B) combinations. This process is repeated for every pixel (point P) within the restoration region such that every point P is assigned a corresponding structure strength value.
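The computation above can be sketched as follows. Because the disclosure leaves the exact form of f( ) open, this sketch assumes one plausible instantiation: the absolute cosine of the angle between the B→P vector and B's edge vector (higher at 0 or 180 degrees, as the text suggests), weighted by B's edge magnitude. All function and parameter names are hypothetical.

```python
import numpy as np

def structure_strength(points_p, boundary_pts, edge_dirs, edge_mags):
    """For each restoration-region pixel P, take the maximum correlation
    over all boundary pixels B. Correlation is modeled as |cos(theta)|
    between the B->P vector and B's edge vector, scaled by B's edge
    magnitude -- one plausible choice for the f() left open in the text.

    points_p:     (M, 2) array of (y, x) pixels inside the region
    boundary_pts: (N, 2) array of (y, x) boundary pixels
    edge_dirs:    (N,) edge directions at the boundary pixels (radians)
    edge_mags:    (N,) edge magnitudes at the boundary pixels
    """
    strengths = np.empty(len(points_p))
    # Unit edge vectors as (dy, dx) pairs.
    edge_vecs = np.stack([np.sin(edge_dirs), np.cos(edge_dirs)], axis=1)
    for i, p in enumerate(points_p):
        bp = p - boundary_pts                    # B->P vectors, shape (N, 2)
        norms = np.linalg.norm(bp, axis=1)
        norms[norms == 0] = 1.0                  # guard against divide-by-zero
        cosines = np.abs(np.sum(bp * edge_vecs, axis=1) / norms)
        strengths[i] = np.max(cosines * edge_mags)
    return strengths
```

A point P lying along the continuation of a strong boundary edge receives a high strength value, while a point perpendicular to every edge receives a low one, which is exactly the behavior the priority scheme relies on.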
Priority-based image inpainting is then performed according to the structure strength value of each point P. Thus, for every point P within the restoration region, the edge vector of a boundary pixel that is most closely correlated with the vector formed between the point and that boundary pixel is identified. The correlation value serves as the structure strength or priority level for that pixel. The prioritizer 119 (
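The scheduling described above, in which the pixel with the highest structure strength is reconstructed first, can be sketched with a standard max-heap. This shows only the visit order; the per-pixel fill (e.g., the patch copy) is omitted, and the function name is hypothetical:

```python
import heapq

def inpaint_order(strengths):
    """Return pixel indices in the order the inpainting component would
    visit them: highest structure-strength (priority) first. Only the
    priority-based scheduling is sketched; the fill step is omitted."""
    # Negate strengths because heapq implements a min-heap.
    heap = [(-s, idx) for idx, s in enumerate(strengths)]
    heapq.heapify(heap)
    order = []
    while heap:
        _, idx = heapq.heappop(heap)
        order.append(idx)
    return order
```

In a full implementation the priorities would typically be recomputed as the region fills in, since newly reconstructed pixels change the boundary; a static ordering is shown here only for clarity.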
With reference to
Reference is made to
It should be emphasized that the above-described embodiments of the present disclosure are merely possible examples of implementations set forth for a clear understanding of the principles of the disclosure. Many variations and modifications may be made to the above-described embodiment(s) without departing substantially from the spirit and principles of the disclosure. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.
Claims
1. A method for editing a digital image in an image editing device, comprising:
- obtaining a restoration region in the digital image;
- determining structure information corresponding to the restoration region;
- based on the structure information, generating a structure strength map corresponding to the restoration region;
- based on the structure strength map, determining priority levels for pixels in the restoration region; and
- applying an inpainting operation to the pixels in the restoration region based on the corresponding priority levels derived based on the structure information.
2. The method of claim 1, wherein the restoration region is obtained via user input, wherein the user input corresponds to an object in the digital image.
3. The method of claim 1, wherein determining structure information comprises computing structure descriptors corresponding to at least one of:
- pixels near a boundary of the restoration region; and
- pixels on the boundary of the restoration region,
wherein the structure descriptors are computed based on structure analysis.
4. The method of claim 3, wherein the structure analysis comprises at least one of: edge detection analysis, texture synthesis analysis, and level of detail (LOD) analysis.
5. The method of claim 3, wherein each structure descriptor comprises at least one of: an edge vector comprising an edge magnitude and an edge direction, a vector representing texture similarity, and a vector representing level of detail information.
6. The method of claim 3, wherein determining structure information further comprises determining, for a pixel within the restoration region, vectors extending from the restoration region pixel to each of at least a portion of the pixels with a corresponding calculated structure descriptor.
7. The method of claim 6, wherein generating a structure strength map comprises determining correlation values, where each correlation value is calculated according to a vector extending from the restoration region pixel to each of at least a portion of the pixels with a corresponding calculated structure descriptor, and a corresponding edge vector of the pixel with the corresponding calculated structure descriptor.
8. The method of claim 7, wherein the structure strength map comprises maximum correlation values respectively calculated for each pixel in the restoration region with respect to all edge vectors.
9. The method of claim 7, wherein the structure strength map comprises relatively high correlation values calculated for each pixel in the restoration region with respect to all edge vectors.
10. A system for editing a digital image, comprising:
- a structure descriptor generator configured to determine structure descriptors corresponding to a restoration region within the digital image to undergo an inpainting operation;
- a structure strength map generator configured to generate a structure strength map corresponding to the restoration region based on the structure descriptors;
- a prioritizer configured to determine priority levels for pixels in the restoration region based on the structure strength map; and
- an inpainting component configured to apply the inpainting operation to the pixels in the restoration region based on the corresponding priority levels derived based on the structure descriptors.
11. The system of claim 10, wherein the structure descriptors comprise edge vectors representing an edge strength and edge direction, wherein the structure descriptor generator is configured to determine edge vectors for at least one of:
- pixels near a boundary of the restoration region; and
- pixels on the boundary of the restoration region.
12. The system of claim 11, wherein the structure descriptor generator determines, for each pixel in the restoration region, a vector extending from the restoration region pixel to each of at least a portion of the pixels with a corresponding calculated structure descriptor.
13. The system of claim 12, wherein the structure strength map generator is configured to assign, for each pixel within the restoration region, a structure strength value based on a restoration region pixel and an edge vector exhibiting a highest correlation value.
14. The system of claim 12, wherein the prioritizer is configured to determine the priority level based on the assigned structure strength values.
15. The system of claim 12, wherein the inpainting component is configured to apply the inpainting operation beginning with pixels in the restoration region having a highest priority relative to the remaining pixels in the restoration region.
16. A method for editing a digital image in an image editing device, comprising:
- obtaining a restoration region in the digital image;
- generating a structure strength map corresponding to the restoration region based on structure characteristics associated with each pixel in the restoration region;
- based on the structure strength map, determining priority levels for pixels in the restoration region; and
- applying an inpainting operation to pixels in the restoration region, beginning with a pixel having a highest relative priority determined based on the structure characteristics.
17. The method of claim 16, wherein the structure characteristics comprise an edge magnitude and an edge direction.
18. The method of claim 16, wherein for each restoration region pixel, the structure characteristics are calculated for at least one of:
- pixels near a boundary of the restoration region; and
- pixels on the boundary of the restoration region.
19. The method of claim 18, wherein for each restoration region pixel, a highest correlation value, associated with a combination of the restoration region pixel and a pixel having structure characteristics, is selected as a structure strength value for the restoration region pixel.
20. The method of claim 19, wherein the structure strength map comprises the structure strength values for each restoration region pixel.
21. The method of claim 16, wherein the structure characteristics comprise texture characteristics.
22. The method of claim 16, wherein the structure characteristics comprise level of detail (LOD) characteristics.
Type: Application
Filed: Nov 2, 2012
Publication Date: May 8, 2014
Applicant: CYBERLINK CORP. (Shindian City)
Inventors: Po-Yu Huang (New Taipei City), Ho-Chao Huang (New Taipei City)
Application Number: 13/667,074
International Classification: G06K 9/40 (20060101);