IMAGE CAPTURE SYSTEM AND OPERATION METHOD THEREOF

- Etron Technology, Inc.

An image capture system includes a depth information generation unit, a feature extraction unit, and a merging unit. The depth information generation unit generates depth information corresponding to at least one object of an original image. The feature extraction unit generates feature information corresponding to the at least one object of the original image. The merging unit is coupled to the depth information generation unit and the feature extraction unit, and merges the depth information and the feature information into a feature depth map and outputs the feature depth map to an application unit.

Description
CROSS REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Application No. 61/831,620, filed on Jun. 6, 2013 and entitled “Depth Map Post Process System,” the contents of which are incorporated herein by reference.

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to an image capture system and an operation method thereof, and particularly to an image capture system and an operation method thereof that can generate and output a feature depth map simultaneously including depth information and feature information corresponding to at least one object of an original image, thereby decreasing the transmission data amount and bandwidth required for the feature depth map.

2. Description of the Prior Art

When a gesture application provided by the prior art needs to determine whether an operator is an effective operator, the simplest method executed by the gesture application is face detection or face determination. Generally speaking, the gesture application provided by the prior art utilizes gray-level information or color information of an original image to execute face detection, text recognition, or other pattern recognition (e.g. quick response (QR) code recognition), and utilizes depth information generated from the original image to execute gesture detection. However, because the gesture application provided by the prior art simultaneously needs the information of the original image and the depth information, it requires a larger transmission data amount and more bandwidth. Therefore, the gesture application provided by the prior art is not a good choice for a user.

SUMMARY OF THE INVENTION

An embodiment provides an image capture system. The image capture system includes a depth information generation unit, a feature extraction unit, and a merging unit. The depth information generation unit generates depth information corresponding to at least one object of an original image. The feature extraction unit generates feature information corresponding to the at least one object of the original image. The merging unit is coupled to the depth information generation unit and the feature extraction unit, and merges the depth information and the feature information into a feature depth map and outputs the feature depth map to an application unit.

Another embodiment provides an operation method of an image capture system, wherein the image capture system comprises a depth information generation unit, a feature extraction unit, and a merging unit. The operation method includes the depth information generation unit generating depth information corresponding to at least one object of an original image; the feature extraction unit generating feature information corresponding to the at least one object of the original image; and the merging unit merging the depth information and the feature information into a feature depth map and outputting the feature depth map to an application unit.

The present invention provides an image capture system and an operation method thereof. The image capture system and the operation method utilize a depth information generation unit of the image capture system to generate depth information corresponding to at least one object of an original image, a feature extraction unit of the image capture system to generate feature information corresponding to the at least one object of the original image, and a merging unit of the image capture system to generate a feature depth map by merging the depth information and the feature information. Compared to the prior art, because the feature depth map simultaneously includes the depth information and the feature information, the transmission data amount and bandwidth required for the feature depth map can be decreased.

These and other objectives of the present invention will no doubt become obvious to those of ordinary skill in the art after reading the following detailed description of the preferred embodiment that is illustrated in the various figures and drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a diagram illustrating an image capture system for outputting depth and feature information of at least one object in an original image according to an embodiment.

FIG. 2 is a diagram illustrating the original image.

FIG. 3 is a diagram illustrating depth information corresponding to an object of the original image.

FIG. 4 is a diagram illustrating a feature depth map.

FIG. 5 is a diagram illustrating a recognition result generated by the application unit after the application unit executes face recognition and gesture recognition on the feature depth map.

FIG. 6 is a flowchart illustrating an operation method of the image capture system 100 according to another embodiment.

DETAILED DESCRIPTION

Please refer to FIG. 1, FIG. 2, FIG. 3, and FIG. 4. FIG. 1 is a diagram illustrating an image capture system 100 according to an embodiment, FIG. 2 is a diagram illustrating an original image OIM, FIG. 3 is a diagram illustrating depth information corresponding to an object of the original image OIM, and FIG. 4 is a diagram illustrating a feature depth map. As shown in FIG. 1, the image capture system 100 includes a depth information generation unit 102, a feature extraction unit 104, and a merging unit 106. The depth information generation unit 102 generates depth information 108 (as shown in FIG. 3) corresponding to an object 110 of the original image OIM (as shown in FIG. 2). However, the present invention is not limited to the original image OIM including only the object 110; that is, the original image OIM can include at least one object. The feature extraction unit 104 generates feature information corresponding to the object 110 according to the original image OIM. For example, the feature extraction unit 104 can generate the feature information corresponding to the object 110 according to eye edges, a face edge, or a lip of the object 110 of the original image OIM, but the present invention is not limited to these features. As shown in FIG. 1, the merging unit 106 is coupled to the depth information generation unit 102 and the feature extraction unit 104. After the merging unit 106 receives the depth information 108 from the depth information generation unit 102 and the feature information from the feature extraction unit 104, the merging unit 106 gives a first weight to the depth information 108 and a second weight to the feature information corresponding to the object 110. Then, the merging unit 106 can merge the depth information 108 and the feature information corresponding to the object 110 into a feature depth map 112 (as shown in FIG. 4) according to the first weight and the second weight, and output the feature depth map 112 to an application unit 114, wherein the feature information corresponds to high frequency parts of the feature depth map 112 and the depth information 108 corresponds to low frequency parts of the feature depth map 112. In addition, the image capture system 100 can utilize a high-pass filter (not shown in FIG. 1) to filter the feature depth map 112 to generate the high frequency parts of the feature depth map 112, and a low-pass filter (not shown in FIG. 1) to filter the feature depth map 112 to generate the low frequency parts of the feature depth map 112.
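As a concrete illustration of the merging step, the following is a minimal sketch in Python, assuming 8-bit single-channel maps and scalar weights; the patent does not specify the weighting scheme, so the simple weighted blend below is only an assumption.

```python
# Minimal sketch of the merging unit's weighted blend (an assumption;
# the patent does not specify the exact merging scheme).
import numpy as np

def merge_feature_depth(depth, feature, first_weight=0.7, second_weight=0.3):
    """Blend depth information and feature (edge) information into one map.

    depth, feature: 2-D uint8 arrays of the same shape.
    first_weight, second_weight: the weights given by the merging unit.
    """
    merged = (first_weight * depth.astype(np.float32)
              + second_weight * feature.astype(np.float32))
    # Keep the result in the 8-bit range of a typical depth map.
    return np.clip(merged, 0, 255).astype(np.uint8)
```

Because a single map carries both kinds of information, only one image needs to be transmitted to the application unit, which is the source of the bandwidth saving described above.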

As shown in FIG. 1, after the application unit 114 receives the feature depth map 112, the application unit 114 utilizes the high frequency parts of the feature depth map 112 to execute face recognition corresponding to the object 110 and the low frequency parts of the feature depth map 112 to execute gesture recognition corresponding to the object 110. In addition, the application unit 114 can also utilize the high frequency parts of the feature depth map 112 to recognize patterns corresponding to the object 110 or characters shown in the original image OIM. In addition, in another embodiment of the present invention, after the application unit 114 receives the feature depth map 112, the application unit 114 can utilize the high frequency parts of the feature depth map 112 to execute face recognition, text recognition, QR code recognition, pattern recognition, or profile recognition corresponding to the object 110.
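One way to picture this separation on the receiving side: a low-pass filter recovers the depth-dominated part, and the residual carries the feature-dominated part. Below is a minimal sketch, assuming a uniform (box) filter as the low-pass filter; the patent names the filters but not their design, and `recognize_face` and `recognize_gesture` are hypothetical stand-ins for the application unit's recognizers.

```python
# Sketch of splitting a feature depth map into its frequency parts,
# assuming a box filter as the low-pass filter (the patent names the
# filters but does not specify their design).
import numpy as np
from scipy.ndimage import uniform_filter

def split_feature_depth(feature_depth_map, size=9):
    fdm = feature_depth_map.astype(np.float32)
    low = uniform_filter(fdm, size=size)  # low frequency: depth information
    high = fdm - low                      # high frequency: feature information
    return low, high

# Hypothetical dispatch inside the application unit:
#   low, high = split_feature_depth(fdm)
#   face = recognize_face(high)       # face/text/QR/pattern recognition
#   gesture = recognize_gesture(low)  # gesture recognition, distance estimation
```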

Please refer to FIG. 5. FIG. 5 is a diagram illustrating a recognition result 116 generated by the application unit 114 after the application unit 114 executes face recognition and gesture recognition on the feature depth map 112. As shown in FIG. 5, the recognition result 116 includes a face profile 1162 and a body profile 1164 corresponding to the object 110. Then, the application unit 114 can utilize the recognition result 116 to execute a corresponding operation. Further, in another embodiment of the present invention, the application unit 114 can utilize the low frequency parts of the feature depth map 112 to determine a distance between the object 110 and the image capture system 100. Further, in another embodiment of the present invention, the application unit 114 can simultaneously utilize the low frequency parts of the feature depth map 112 to execute gesture recognition corresponding to the object 110 and determine a distance between the object 110 and the image capture system 100.

Please refer to FIGS. 1 to 6. FIG. 6 is a flowchart illustrating an operation method of the image capture system 100 according to another embodiment. The operation method in FIG. 6 is illustrated using the image capture system 100 in FIG. 1. Detailed steps are as follows:

Step 600: Start.

Step 602: The depth information generation unit 102 generates depth information 108 corresponding to the object 110 according to the original image OIM.

Step 604: The feature extraction unit 104 generates feature information corresponding to the object 110 according to the original image OIM.

Step 606: The merging unit 106 gives a first weight to the depth information 108 and a second weight to the feature information corresponding to the object 110.

Step 608: The merging unit 106 merges the depth information 108 and the feature information corresponding to the object 110 to generate a feature depth map 112 according to the first weight and the second weight.

Step 610: End.
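Taken together, Steps 602 through 608 can be sketched as a single pipeline. In the sketch below, both unit stubs are assumptions: a constant map stands in for the depth information generation unit, and a gradient-magnitude edge map stands in for the feature extraction unit; neither reflects the patent's actual implementation.

```python
# Steps 602-608 of FIG. 6 as one pipeline. Both stubs are illustrative
# assumptions, not the patent's implementation.
import numpy as np

def generate_depth(original):
    # Step 602 (stub): a real unit would compute depth, e.g. from stereo.
    return np.full(original.shape, 128.0, dtype=np.float32)

def extract_features(original):
    # Step 604 (stub): edge strength, loosely matching "eye edges, a face
    # edge, or a lip" as edge-like features.
    gy, gx = np.gradient(original.astype(np.float32))
    return np.hypot(gx, gy)

def operate(original, first_weight=0.7, second_weight=0.3):
    depth = generate_depth(original)      # Step 602
    feature = extract_features(original)  # Step 604
    # Steps 606-608: give the two weights, then merge into the feature depth map.
    fdm = first_weight * depth + second_weight * feature
    return np.clip(fdm, 0, 255).astype(np.uint8)
```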

In Step 604, the feature extraction unit 104 can generate the feature information corresponding to the object 110 according to eye edges, a face edge, or a lip of the object 110 of the original image OIM, but the present invention is not limited to these features. In Step 606, after the merging unit 106 receives the depth information 108 from the depth information generation unit 102 and the feature information from the feature extraction unit 104, the merging unit 106 gives the first weight to the depth information 108 and the second weight to the feature information corresponding to the object 110. Then, in Step 608, the merging unit 106 can merge the depth information 108 and the feature information corresponding to the object 110 into the feature depth map 112 (as shown in FIG. 4) according to the first weight and the second weight, and output the feature depth map 112 to the application unit 114, wherein the feature information corresponds to high frequency parts of the feature depth map 112 and the depth information 108 corresponds to low frequency parts of the feature depth map 112. In addition, the image capture system 100 can utilize a high-pass filter (not shown in FIG. 1) to filter the feature depth map 112 to generate the high frequency parts of the feature depth map 112, and a low-pass filter (not shown in FIG. 1) to filter the feature depth map 112 to generate the low frequency parts of the feature depth map 112.

As shown in FIG. 1, after the application unit 114 receives the feature depth map 112, the application unit 114 can utilize the high frequency parts of the feature depth map 112 to execute face recognition corresponding to the object 110 and the low frequency parts of the feature depth map 112 to execute gesture recognition corresponding to the object 110.

In addition, in another embodiment of the present invention, after the application unit 114 receives the feature depth map 112, the application unit 114 can utilize the high frequency parts of the feature depth map 112 to execute face recognition, text recognition, QR code recognition, pattern recognition, or profile recognition corresponding to the object 110.

In addition, the application unit 114 can also utilize the high frequency parts of the feature depth map 112 to recognize veins corresponding to the object 110 or characters shown in the original image OIM.

In addition, after the application unit 114 executes face recognition and gesture recognition on the feature depth map 112, the application unit 114 can generate a recognition result 116. As shown in FIG. 5, the recognition result 116 includes a face profile 1162 and a body profile 1164 corresponding to the object 110. Then, the application unit 114 can utilize the recognition result 116 to execute a corresponding operation. Further, in another embodiment of the present invention, the application unit 114 can utilize the low frequency parts of the feature depth map 112 to determine a distance between the object 110 and the image capture system 100. Further, in another embodiment of the present invention, the application unit 114 can simultaneously utilize the low frequency parts of the feature depth map 112 to execute gesture recognition corresponding to the object 110 and determine a distance between the object 110 and the image capture system 100.
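The distance determination admits a minimal sketch as well, assuming the low-frequency values encode stereo disparity and assuming illustrative focal-length and baseline parameters; the patent does not state how depth values map to physical distance, so this entire mapping is an assumption.

```python
# Sketch of estimating the object's distance from the low frequency
# parts. The disparity interpretation, focal length, and baseline are
# assumptions; the patent does not specify this mapping.
import numpy as np

def object_distance(low_freq, object_mask, focal_px=700.0, baseline_m=0.06):
    """Median disparity inside the object's mask, converted to metres."""
    disparity = float(np.median(low_freq[object_mask]))
    if disparity <= 0.0:
        return float("inf")  # object too far to measure
    return focal_px * baseline_m / disparity
```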

To sum up, the image capture system and the operation method thereof utilize the depth information generation unit to generate depth information corresponding to at least one object of an original image, the feature extraction unit to generate feature information corresponding to the at least one object of the original image, and the merging unit to generate a feature depth map by merging the depth information and the feature information. Compared to the prior art, because the feature depth map simultaneously includes the depth information and the feature information, the transmission data amount and bandwidth required for the feature depth map can be decreased.

Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.

Claims

1. An image capture system comprising:

a depth information generation unit generating depth information corresponding to at least one object of an original image;
a feature extraction unit generating feature information corresponding to the at least one object of the original image; and
a merging unit coupled to the depth information generation unit and the feature extraction unit, the merging unit merging the depth information and the feature information into a feature depth map and outputting the feature depth map to an application unit.

2. The image capture system of claim 1, wherein the merging unit gives a first weight to the depth information and a second weight to the feature information, and the merging unit merges the depth information and the feature information into the feature depth map according to the first weight and the second weight.

3. The image capture system of claim 1, wherein the feature information corresponds to high frequency parts of the feature depth map and the depth information corresponds to low frequency parts of the feature depth map.

4. The image capture system of claim 3, wherein the application unit utilizes the high frequency parts of the feature depth map to execute face recognition corresponding to the at least one object.

5. The image capture system of claim 3, wherein the application unit utilizes the low frequency parts of the feature depth map to execute gesture recognition corresponding to the at least one object.

6. The image capture system of claim 3, wherein the application unit utilizes the low frequency parts of the feature depth map to determine a distance between the at least one object and the image capture system.

7. The image capture system of claim 3, wherein the application unit utilizes the low frequency parts of the feature depth map to execute gesture recognition corresponding to the at least one object and determine a distance between the at least one object and the image capture system.

8. An operation method of an image capture system, wherein the image capture system comprises a depth information generation unit, a feature extraction unit, and a merging unit, the operation method comprising:

the depth information generation unit generating depth information corresponding to at least one object of an original image;
the feature extraction unit generating feature information corresponding to the at least one object of the original image; and
the merging unit merging the depth information and the feature information into a feature depth map and outputting the feature depth map to an application unit.

9. The operation method of claim 8, wherein the merging unit merging the depth information and the feature information into the feature depth map comprises:

the merging unit giving a first weight to the depth information and a second weight to the feature information; and
the merging unit merging the depth information and the feature information into the feature depth map according to the first weight and the second weight.

10. The operation method of claim 8, wherein the feature information corresponds to high frequency parts of the feature depth map and the depth information corresponds to low frequency parts of the feature depth map.

11. The operation method of claim 10, wherein the application unit utilizes the high frequency parts of the feature depth map to execute face recognition corresponding to the at least one object.

12. The operation method of claim 10, wherein the application unit utilizes the low frequency parts of the feature depth map to execute gesture recognition corresponding to the at least one object.

13. The operation method of claim 10, wherein the application unit utilizes the low frequency parts of the feature depth map to determine a distance between the at least one object and the image capture system.

14. The operation method of claim 10, wherein the application unit utilizes the low frequency parts of the feature depth map to execute gesture recognition corresponding to the at least one object and determine a distance between the at least one object and the image capture system.

Patent History
Publication number: 20140363097
Type: Application
Filed: Mar 23, 2014
Publication Date: Dec 11, 2014
Applicant: Etron Technology, Inc. (Hsinchu)
Inventor: Chi-Feng Lee (Hsinchu County)
Application Number: 14/222,679
Classifications
Current U.S. Class: Combining Image Portions (e.g., Portions Of Oversized Documents) (382/284)
International Classification: G06T 7/00 (20060101); G06T 19/20 (20060101);