Method and system for image blurring processing
A system and a method for image blurring processing are provided. The method includes: obtaining an image and a depth information corresponding to the image; identifying an object and a background in the image according to the depth information, wherein the object includes a plurality of object pixels and the background includes a plurality of background pixels; defining a number of at least one edge pixel spaced between the object and the background; performing a blurring processing on the edge pixel and the background pixels respectively by using a plurality of masks with different sizes; combining the plurality of object pixels, the edge pixel after the blurring processing, and the background pixels after the blurring processing to generate an output image; and outputting the output image.
This application claims the priority benefit of Taiwan application serial no. 108106469, filed on Feb. 26, 2019. The entirety of the above-mentioned patent application is hereby incorporated by reference herein and made a part of this specification.
BACKGROUND OF THE INVENTION
Field of the Invention
The invention relates to a method and a system for image processing, and more particularly, to a method and a system for image blurring processing for blurring an object edge in a captured image.
Description of Related Art
With the development of optical sensing elements and optical lenses and the performance enhancement of image processing chips and software, digital cameras/video cameras and portable electronic products with lenses (such as mobile phones, tablets, and notebooks) have long been indispensable electronic products in everyday life, allowing us to take photos or selfies and store them as digital images anytime, anywhere. Moreover, in order to achieve a special visual effect with depth of field, some products blur the background of the captured image to highlight the subject.
However, in general, when the background of an image is blurred, only the subject (such as a person) remains clear to highlight the photographed subject, which may cause the edges of the subject to look too sharp and unnatural against the blurred background. Traditionally, one solution is to use a blurring algorithm that calculates the pixel values in a sliding window (or mask) over the entire image, but in this case the color of the subject may blend with the background color, causing the edges of the subject to blur and spread outward.
SUMMARY OF THE INVENTION
The invention provides a method for image blurring processing and a system for image blurring processing that allow a processed image to achieve the effect of a blurred background and a clear edge of the foreground subject.
The invention provides a method for image blurring processing. The method includes the following steps. An image and a depth information corresponding to the image are obtained. An object and a background are identified in the image according to the depth information, wherein the object includes a plurality of object pixels and the background includes a plurality of background pixels. A number of the at least one edge pixel spaced between the object and the background is defined. A blurring processing is performed on the edge pixel and the plurality of background pixels respectively using a plurality of masks with different sizes. The plurality of object pixels, the edge pixel after the blurring processing, and the plurality of background pixels after the blurring processing are combined to generate an output image. The output image is outputted.
The invention provides a system for image blurring processing, including: an image-capture circuit, a depth-sensing circuit, and a processor. The image-capture circuit is used for obtaining an image. The depth-sensing circuit is used for obtaining a depth information corresponding to the image. The processor is coupled to the image-capture circuit and the depth-sensing circuit. The processor is used for performing the following operations. An object and a background are identified in the image according to the depth information, wherein the object includes a plurality of object pixels and the background includes a plurality of background pixels. A number of the at least one edge pixel spaced between the object and the background is defined. A blurring processing is performed on the edge pixel and the plurality of background pixels respectively using a plurality of masks with different sizes. The plurality of object pixels, the edge pixel after the blurring processing, and the plurality of background pixels after the blurring processing are combined to generate an output image. The output image is outputted.
Based on the above, the method for image blurring processing and the system for image blurring processing of the invention may divide an image into a plurality of regions such as an object (i.e., foreground), an edge of the object, and a background and perform blurring processing individually on the regions and then recombine the regions into an output image. By performing the blurring processing on individual regions, the processed image may achieve the effect of a blurred background and a clear object (person) and edge.
In order to make the aforementioned features and advantages of the disclosure more comprehensible, embodiments accompanied with figures are described in detail below.
The accompanying drawings are included to provide a further understanding of the invention, and are incorporated in and constitute a part of this specification. The drawings illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention.
Descriptions of the invention are given with reference to the exemplary embodiments illustrated with accompanied drawings. In addition, whenever possible, elements/components having the same reference numerals represent the same or similar parts in the figures and the embodiments.
Referring to the figures, the system 1000 for image blurring processing includes a processor 240, an image-capture circuit 242, a depth-sensing circuit 244, an output circuit 246, and a storage circuit 248.
The processor 240 may be a central processing unit (CPU) or a programmable general-purpose or special-purpose microprocessor, digital signal processor (DSP), programmable controller, application-specific integrated circuit (ASIC), or other similar devices, or a combination thereof.
The image-capture circuit 242 is for capturing one or a plurality of images. For example, the image-capture circuit 242 may be equipped with a lens and a photosensitive element such as a charge-coupled device (CCD), a complementary metal-oxide-semiconductor (CMOS) device, or another type of photosensitive device.
The depth-sensing circuit 244 may be a second image-capture circuit identical to the image-capture circuit 242. In this case, the processor 240 may generate a depth map from the image captured by the depth-sensing circuit 244 and the image captured by the image-capture circuit 242, and thereby determine the depth of the object in the image captured by the image-capture circuit 242 (or in the image captured by the depth-sensing circuit 244). Alternatively, the depth-sensing circuit 244 may be an infrared sensor that emits infrared light and receives its reflections to obtain depth. Alternatively, the depth-sensing circuit 244 may be another type of sensor capable of obtaining depth information, such as a structured-light sensor, and is not limited herein.
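For the dual-camera case, a depth map can, for instance, be computed by stereo matching between the two captured images. The following is only a rough sketch using OpenCV's block matcher; the focal length and baseline values are assumptions for illustration and are not part of this disclosure.

```python
import cv2
import numpy as np

def estimate_depth(left_gray: np.ndarray, right_gray: np.ndarray,
                   focal_px: float = 700.0, baseline_m: float = 0.06) -> np.ndarray:
    """Estimate per-pixel depth (in meters) from a rectified grayscale stereo pair.
    focal_px and baseline_m are illustrative camera parameters, not values from the text."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan            # mark invalid matches
    return focal_px * baseline_m / disparity      # depth = f * B / disparity
```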
The output circuit 246 may be associated circuitry for outputting a signal to a display or other electronic devices.
The storage circuit 248 may be any type of fixed or removable random-access memory (RAM), read-only memory (ROM), flash memory, a similar device, or a combination of these devices. In the present exemplary embodiment, a plurality of code snippets is stored in the storage circuit 248, and after the code snippets are installed, they are executed by the processor 240. For example, the storage circuit 248 includes a plurality of modules respectively used to perform each operation of the method for image blurring processing of the invention, wherein each module is composed of one or a plurality of code snippets. However, the invention is not limited thereto. Each operation of the system 1000 for image blurring processing may also be implemented in other hardware forms, implemented with or integrated into other software, or implemented by replacing at least a part of the circuits with software.
Referring to the flow of the method for image blurring processing, first, the image-capture circuit 242 obtains an image, and the depth-sensing circuit 244 obtains depth information corresponding to the image.
Thereafter, the processor 240 may identify an object (i.e., the foreground) in the image captured by the image-capture circuit 242 according to the depth information (step S203). In detail, since the depth information obtained by the depth-sensing circuit 244 provides the depth of each pixel in the image captured by the image-capture circuit 242, a depth preset value (for example, two meters) may be set in advance. A pixel having a depth less than or equal to the depth preset value is identified as the foreground of the image, and a pixel having a depth greater than the depth preset value is identified as the background, wherein the object located in the foreground is regarded as the aforementioned object.
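A minimal sketch of this thresholding step, assuming the depth map is given in meters and using the two-meter preset value mentioned above (the function name is illustrative):

```python
import numpy as np

def split_foreground_background(depth_m: np.ndarray, preset_m: float = 2.0):
    """Pixels at or closer than the preset depth form the foreground (object);
    all other pixels form the background. Returns two boolean masks."""
    object_mask = depth_m <= preset_m
    background_mask = ~object_mask
    return object_mask, background_mask
```

The two masks can then be used to pick out the object pixels and the background pixels of the captured image.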
Next, the processor 240 defines the number of at least one edge pixel spaced between the object and the background according to a manufacturer/user setting or according to the magnitude of the color contrast between the foreground and the background (step S205).
In the image, the object pixels are located in a region R1, the edge pixels are located in regions R2 and R3, and the background pixels are located in a region R4. The processor 240 then performs the blurring processing on the edge pixels and the background pixels respectively using a plurality of masks with different sizes, wherein a mask (also referred to as a first mask) acting on the edge pixels is smaller than a mask (also referred to as a second mask) acting on the background pixels, so that the edge regions are less blurred than the background.
In particular, in the present embodiment, the edge pixels in regions R2 and R3 may also be divided into near-edge pixels and far-edge pixels according to their distance from the object. For example, the processor 240 may identify the pixels in region R2 as the near-edge pixels and the pixels in region R3 as the far-edge pixels. In the present embodiment, the distance between each near-edge pixel and the object is zero pixels, and the distance between each far-edge pixel and the object is one pixel. In other words, the distance between each near-edge pixel and the object is less than the distance between each far-edge pixel and the object. In the present embodiment, the mask acting on the edge regions may further include a mask (also referred to as a third mask) for performing the blurring processing on the near-edge pixels and a mask (also referred to as a fourth mask) for performing the blurring processing on the far-edge pixels. In the present embodiment, the size of the third mask (for example, 3×3) is smaller than the size of the fourth mask (for example, 5×5). In other words, after the blurring processing is performed on regions R2 and R3 using the third mask and the fourth mask respectively, region R3 is blurrier than region R2. In addition, region R4 is blurrier than region R3.
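One way to obtain such a layout is to grow the object mask outward one pixel at a time: the first ring around the object becomes region R2 (the near-edge pixels), the next ring becomes region R3 (the far-edge pixels), and everything beyond the last ring is region R4. The sketch below uses SciPy's morphological dilation and is only one possible realization, not the literal implementation of the disclosure; the function name and region keys are illustrative.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def build_regions(object_mask: np.ndarray, num_edge_rings: int = 2) -> dict:
    """Split an image into the object region (R1), one-pixel edge rings
    (R2, R3, ...), and the remaining background region."""
    regions = {"R1": object_mask}
    grown = object_mask
    for i in range(num_edge_rings):
        next_grown = binary_dilation(grown)            # grow the mask by one pixel
        regions[f"R{i + 2}"] = next_grown & ~grown     # keep only the newly added ring
        grown = next_grown
    regions[f"R{num_edge_rings + 2}"] = ~grown         # everything else is background
    return regions
```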
Then, the processor 240 combines the object pixels, the edge pixels after the blurring processing, and the background pixels after the blurring processing to generate an output image, and outputs the output image through the output circuit 246.
In the foregoing example, the processor 240 can, for example, take out the pixels without the blurring processing in region R1 from the image captured by the image-capture circuit 242, perform blurring processing on the image captured by the image-capture circuit 242 using the third mask and only take out the pixels after the blurring processing in region R2, perform blurring processing on the image captured by the image-capture circuit 242 using the fourth mask and only take out the pixels after the blurring processing in region R3, and perform blurring processing on the image captured by the image-capture circuit 242 using the second mask and only take out the pixels after the blurring processing in region R4. Thereafter, the processor 240 may combine the pixels without the blurring processing in region R1, the pixels after the blurring processing in region R2 (according to the third mask), the pixels after the blurring processing in region R3 (according to the fourth mask), and the pixels after the blurring processing in region R4 (according to the second mask) to generate an output image.
However, in other embodiments, the processor 240 may also respectively take out regions R1 to R4 from the image captured by the image-capture circuit 242 and perform the blurring processing on the pixels in the extracted region R2 using the third mask, on the pixels in the extracted region R3 using the fourth mask, and on the pixels in the extracted region R4 using the second mask. Thereafter, the processor 240 may combine the pixels without the blurring processing in region R1, the pixels after the blurring processing in region R2 (according to the third mask), the pixels after the blurring processing in region R3 (according to the fourth mask), and the pixels after the blurring processing in region R4 (according to the second mask) to generate an output image.
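Both variants amount to blurring with a separate mask per region and keeping each blurred result only inside that region. The sketch below follows the first variant (blur the whole image per mask, then take out the pixels of each region); it uses simple averaging (box) filters as stand-in masks with the 3×3 and 5×5 example sizes above, while the 9×9 background size and the region dictionary from the earlier sketch are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def blur_and_combine(image: np.ndarray, regions: dict) -> np.ndarray:
    """regions maps "R1".."R4" to HxW boolean masks (e.g. from build_regions above)."""
    mask_size = {"R1": 1, "R2": 3, "R3": 5, "R4": 9}    # 1 means no blurring
    img = image.astype(np.float32)
    output = np.zeros_like(img)
    for name, region in regions.items():
        size = mask_size[name]
        if size == 1:
            blurred = img                                # region R1 (the object) stays sharp
        else:
            # Blur spatially only; keep color channels separate if present.
            filt = (size, size) if img.ndim == 2 else (size, size, 1)
            blurred = uniform_filter(img, size=filt)
        output[region] = blurred[region]                 # take out only this region's pixels
    return output.astype(image.dtype)
```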
The method for image blurring processing of the invention is explained below by an example.
The example refers to the accompanying figures. Therefore, when the degree of blurring of the edge pixels between the foreground and the background is to be adjusted or controlled, different selected pixels in the figures may be used for the blurring processing.
In the present embodiment, the processor 240 may respectively obtain a plurality of color parameters of the object pixels of the object in a color space in step S205. In the present embodiment, the color space is the HSV color space, and the color parameters include hue, saturation, and value. The HSV color space represents the points of the RGB color space in a cylindrical coordinate system, and its geometry is more intuitive than that of the RGB color space. In particular, the hue (H) is the basic attribute of color, i.e., the color name such as red or yellow, and its range is between 0° and 360°. The saturation (S) refers to the purity of the color: the higher the saturation, the purer the color; the lower the saturation, the more the color fades toward gray. The saturation is between 0% and 100%. The value (V) is between 0% and 100%.
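For instance, the H, S, and V parameters of a pixel can be obtained from its RGB values with the standard conversion; scaling H to degrees and S and V to percentages matches the ranges described above. This is a generic conversion, not code taken from the disclosure.

```python
import colorsys

def rgb_to_hsv_params(r: int, g: int, b: int):
    """Convert 8-bit RGB values to (hue in degrees, saturation in %, value in %)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, s * 100.0, v * 100.0

print(rgb_to_hsv_params(255, 0, 0))   # a pure red pixel -> (0.0, 100.0, 100.0)
```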
When the difference between the object (i.e., foreground) and the background is substantial, the larger the number of the edge pixels spaced between the object and the background, the more smoothly the blurred edge of the subject diffuses into the background. If the maximum value of the preset number of the edge pixels spaced between the object and the background is N, the HSV color space may be divided into N equal parts, wherein N is a positive integer. As shown in Table 1, if the maximum value of the preset number of the edge pixels spaced between the object and the background is 4, the HSV color space may be equally divided into 4 parts. By detecting and determining the color difference between the object and the background, the processor 240 may automatically select the number of the edge pixels spaced between the object and the background in the image. In the present embodiment, only one of the difference values of the H, S, and V components needs to meet the preset condition.
For example, in step S205, the processor 240 respectively obtains the H value, S value, and V value of each object pixel of the object in the image in the HSV color space, and respectively obtains the H value, S value, and V value of each background pixel of the background in the image in the HSV color space. The processor 240 compares the H value, S value, and V value of each object pixel located at the edge among all the object pixels with the H value, S value, and V value of the background pixels adjacent to that object pixel to calculate the differences. Thereafter, the processor 240 selects a first pixel and defines the H value, S value, and V value of the first pixel as the "first color parameter". Moreover, the processor 240 further selects a second pixel and defines the H value, S value, and V value of the second pixel as the "second color parameter". In particular, the difference (also referred to as a first difference) between the first color parameter and the second color parameter is the largest. In other words, in selecting the first pixel of the object and the second pixel of the background, the two pixels having the largest H value, S value, or V value difference among all the adjacent object pixels and background pixels in the entire picture are selected. For example, the first difference may be a difference of the H value, and this H value difference is the largest in the image. Alternatively, the first difference may be a difference of the S value, and this S value difference is the largest in the image. Alternatively, the first difference may be a difference of the V value, and this V value difference is the largest in the image.
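A sketch of that search, assuming the object mask from the earlier steps and an HxWx3 array of per-pixel H, S, and V values. Adjacency is taken here as the 4-neighborhood, and hue wrap-around as well as the image-border wrap introduced by np.roll are ignored for brevity; the function name is illustrative.

```python
import numpy as np

def first_difference(hsv: np.ndarray, object_mask: np.ndarray):
    """Return the per-channel (H, S, V) differences of the adjacent object/background
    pixel pair whose largest single-channel difference is the biggest in the image."""
    background_mask = ~object_mask
    best_pair = (0.0, 0.0, 0.0)
    for dy, dx in ((0, 1), (1, 0), (0, -1), (-1, 0)):
        neighbor_bg = np.roll(background_mask, (dy, dx), axis=(0, 1))
        neighbor_hsv = np.roll(hsv, (dy, dx), axis=(0, 1))
        pairs = object_mask & neighbor_bg                      # object pixels with a background neighbor
        if pairs.any():
            diffs = np.abs(hsv[pairs] - neighbor_hsv[pairs])   # shape (n, 3)
            idx = int(diffs.max(axis=1).argmax())              # pair with the largest channel difference
            if diffs[idx].max() > max(best_pair):
                best_pair = tuple(float(d) for d in diffs[idx])
    return best_pair
```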
After the first difference is obtained, the processor 240 may define the number of the edge pixels spaced between the object and the background, for example, according to Table 1 and the first difference described above.
For example, take the case in which the first difference is a difference of the H value. When the differences between the three HSV values of the first pixel and the second pixel are respectively H: 105°, S: 20%, and V: 45%, the processor 240 may select the number of the edge pixels spaced between the object and the background to be three according to Table 1. That is, the processor 240 selects, from Table 1, a first color difference (for example, H: 90°, S: 50%, and V: 55% in Table 1) corresponding to the first difference among the plurality of color differences according to the first difference of the first color parameter and the second color parameter. Thereafter, the processor 240 obtains a first preset number (i.e., three edge pixels spaced between the object and the background) corresponding to the first color difference. Thereafter, the processor 240 defines the first preset number as the number of the edge pixels spaced between the object and the background.
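The mapping from the first difference to the number of edge pixels can be kept as a small per-channel lookup, as in the sketch below. The boundary values are placeholders chosen only so that the two worked examples here come out to three and four edge pixels; they are not the actual contents of Table 1, and the per-channel differences could be taken from the first_difference sketch above.

```python
import bisect

# Hypothetical per-channel boundaries (H in degrees, S and V in percent); crossing
# a boundary in any one channel increases the number of edge pixels (1 to 4).
H_BINS = [45.0, 90.0, 180.0]
S_BINS = [25.0, 50.0, 75.0]
V_BINS = [25.0, 50.0, 75.0]

def edge_pixel_count(h_diff: float, s_diff: float, v_diff: float) -> int:
    """Only one of the H, S, or V differences needs to reach a boundary, so the
    largest per-channel count is used."""
    return max(bisect.bisect_right(H_BINS, h_diff) + 1,
               bisect.bisect_right(S_BINS, s_diff) + 1,
               bisect.bisect_right(V_BINS, v_diff) + 1)

print(edge_pixel_count(105.0, 20.0, 45.0))   # -> 3, as in the H example above
print(edge_pixel_count(5.0, 5.0, 80.0))      # -> 4, as in the V example below
```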
Take the case in which the first difference is a difference of the V value as another example. When the differences between the three HSV values of the first pixel and the second pixel are respectively H: 5°, S: 5%, and V: 80%, the processor 240 may select the number of the edge pixels spaced between the object and the background to be four according to Table 1.
Based on the above, the method for image blurring processing and the system for image blurring processing of the invention may divide an image into a plurality of regions such as an object (i.e., foreground), an edge of the object, and a background, perform the blurring processing on the regions respectively using masks with different sizes, and then recombine the regions into an output image. By performing the blurring processing on individual regions, the processed image may achieve the effect of a blurred background and a clear object (person) and edge.
Although the invention has been described with reference to the above embodiments, it will be apparent to one of ordinary skill in the art that modifications to the described embodiments may be made without departing from the spirit of the invention. Accordingly, the scope of the invention is defined by the attached claims rather than by the above detailed descriptions.
Claims
1. A method for an image blurring processing, comprising:
- obtaining an image and a depth information corresponding to the image;
- identifying an object and a background in the image according to the depth information, wherein the object comprises a plurality of object pixels, and the background comprises a plurality of background pixels;
- determining a number of at least one edge pixel spaced between the object and the background by detecting a color difference between the adjacent object pixels and the background pixels;
- performing a blurring processing on the number of at least one edge pixel and the plurality of background pixels respectively using a plurality of masks with different sizes;
- combining the plurality of object pixels, the edge pixel after the blurring processing, and the plurality of background pixels after the blurring processing to generate an output image; and
- outputting the output image,
- wherein a first mask is used for the blurring processing performed on the edge pixel, a second mask is used for the blurring processing performed on the background pixels, and a size of the first mask is less than a size of the second mask,
- wherein the first mask comprises a plurality of first weights corresponding to at least one of the object pixels, the edge pixels, and the background pixels, and the second mask comprises a plurality of second weights corresponding to at least one of the object pixels, the edge pixels, and the background pixels,
- wherein during the blurring processing, at least one value of at least one of the edge pixels is multiplied by the plurality of the second weights to obtain at least one new value of the at least one edge pixel, and at least one value of at least one of the background pixels is multiplied by the plurality of the first weights to obtain at least one new value of the at least one processed background pixel.
2. The method for the image blurring processing of claim 1, wherein the edge pixel comprises a plurality of near-edge pixels and a plurality of far-edge pixels, and a distance between each of the plurality of near-edge pixels and the object is less than a distance between each of the plurality of far-edge pixels and the object.
3. The method for the image blurring processing of claim 2, wherein the first mask comprises a third mask for performing the blurring processing on the plurality of near-edge pixels and a fourth mask for performing the blurring processing on the plurality of far-edge pixels, and a size of the third mask is less than a size of the fourth mask.
4. The method for the image blurring processing of claim 1, wherein the step of determining the number of the edge pixel spaced between the object and the background by detecting the color difference between the adjacent object pixels and the background pixels comprises:
- obtaining, respectively, a plurality of color parameters of the plurality of object pixels in a color space;
- obtaining, respectively, a plurality of color parameters of the plurality of background pixels adjacent to the object pixels in the color space;
- comparing the color parameter of each of the adjacent object pixels with the color parameter of each of the background pixels and calculating a difference respectively;
- finding a largest first difference from all of the differences and selecting a first color parameter from the plurality of color parameters of the plurality of object pixels and selecting a second color parameter from the plurality of color parameters of the plurality of background pixels according to the first difference; and
- defining the number of the edge pixel spaced between the object and the background according to the first difference of the first color parameter and the second color parameter.
5. The method for the image blurring processing of claim 4, further comprising:
- pre-storing a corresponding relationship of a plurality of color differences and a plurality of preset numbers of the edge pixel,
- wherein the step of defining the number of the edge pixel spaced between the object and the background according to the first difference of the first color parameter and the second color parameter comprises: selecting a first color difference corresponding to the first difference in the plurality of color differences according to the first difference of the first color parameter and the second color parameter; obtaining a first preset number corresponding to the first color difference in the plurality of preset numbers according to the first color difference; and defining the first preset number as the number of the edge pixel spaced between the object and the background.
6. The method for the image blurring processing of claim 4, wherein the color space is a color space of HSV.
7. A system for an image blurring processing, comprising:
- an image-capture circuit for obtaining an image;
- a depth-sensing circuit for obtaining a depth information corresponding to the image; and
- a processor coupled to the image-capture circuit and the depth-sensing circuit, wherein
- the processor identifies an object and a background in the image according to the depth information, wherein the object comprises a plurality of object pixels, and the background comprises a plurality of background pixels,
- the processor determines a number of at least one edge pixel spaced between the object and the background by detecting a color difference between the adjacent object pixels and the background pixels,
- the processor performs a blurring processing on the number of at least one edge pixel and the plurality of background pixels of the background respectively using a plurality of masks with different sizes,
- the processor combines the plurality of object pixels, the edge pixel after the blurring processing, and the plurality of background pixels after the blurring processing to generate an output image, and
- the processor outputs the output image,
- wherein a first mask is used for the blurring processing performed on the edge pixel, a second mask is used for the blurring processing performed on the background pixels, and a size of the first mask is less than a size of the second mask,
- wherein the first mask comprises a plurality of first weights corresponding to at least one of the object pixels, the edge pixels, and the background pixels, and the second mask comprises a plurality of second weights corresponding to at least one of the object pixels, the edge pixels, and the background pixels,
- wherein during the blurring processing, at least one value of at least one of the edge pixels is multiplied by the plurality of the second weights to obtain at least one new value of the at least one edge pixel, and at least one value of at least one of the background pixels is multiplied by the plurality of the first weights to obtain at least one new value of the at least one processed background pixel.
8. The system for the image blurring processing of claim 7, wherein the edge pixel comprises a plurality of near-edge pixels and a plurality of far-edge pixels, and a distance between each of the plurality of near-edge pixels and the object is less than a distance between each of the plurality of far-edge pixels and the object.
9. The system for the image blurring processing of claim 8, wherein the first mask comprises a third mask for performing the blurring processing on the plurality of near-edge pixels and a fourth mask for performing the blurring processing on the plurality of far-edge pixels, and a size of the third mask is less than a size of the fourth mask.
10. The system for the image blurring processing of claim 7, wherein during the operation of determining the number of the edge pixel spaced between the object and the background by detecting the color difference between the adjacent object pixels and the background pixels,
- the processor obtains, respectively, a plurality of color parameters of the plurality of object pixels in a color space,
- the processor obtains, respectively, a plurality of color parameters of the plurality of background pixels adjacent to the object pixels in the color space,
- the processor compares the color parameter of each of the adjacent object pixels with the color parameter of each of the background pixels and calculates a difference respectively;
- the processor finds a largest first difference from all of the differences and selects a first color parameter from the plurality of color parameters of the plurality of object pixels and selects a second color parameter from the plurality of color parameters of the plurality of background pixels according to the first difference, and
- the processor defines the number of the edge pixel spaced between the object and the background according to a first difference of the first color parameter and the second color parameter.
11. The system for the image blurring processing of claim 10, further comprising:
- a storage circuit for pre-storing a corresponding relationship of a plurality of color differences and a plurality of preset numbers of the edge pixel,
- wherein during the operation of defining the number of the edge pixel spaced between the object and the background according to the first difference of the first color parameter and the second color parameter,
- the processor selects a first color difference corresponding to the first difference in the plurality of color differences according to the first difference of the first color parameter and the second color parameter,
- the processor obtains a first preset number corresponding to the first color difference in the plurality of preset numbers according to the first color difference, and
- the processor defines the first preset number as the number of the edge pixel spaced between the object and the background.
12. The system for the image blurring processing of claim 10, wherein the color space is a color space of HSV.
Type: Grant
Filed: May 7, 2019
Date of Patent: Nov 2, 2021
Patent Publication Number: 20200273152
Assignee: Wistron Corporation (New Taipei)
Inventor: Ge-Yi Lin (New Taipei)
Primary Examiner: Zhitong Chen
Application Number: 16/404,750
International Classification: G06T 5/00 (20060101); G06T 5/20 (20060101); G06T 7/50 (20170101); G06K 9/00 (20060101); G06T 5/50 (20060101);