Regional Image Processing in an Image Capture Device
Disclosed are various embodiments of applying image processing techniques in an image capture device. Regions of an image can be isolated and a respective region type identified. The image capture device can apply various image processing techniques to various regions of the image based at least upon a region type that is identified for the various regions.
This application claims priority to co-pending U.S. provisional application entitled, “Image Capture Device Systems and Methods,” having Ser. No. 61/509,747, filed Jul. 20, 2011, which is entirely incorporated herein by reference.
BACKGROUND

Image capture devices (e.g., still cameras, video cameras, etc.) can apply various image processing techniques. These techniques can be applied globally or, in other words, to an entire image. Images captured by an image capture device can often contain various objects and/or subjects such that application of a single image processing technique to the entirety of the image can result in a less than desirable result.
Many aspects of the invention can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present invention. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views.
Embodiments of the present disclosure relate to systems and methods that can be executed in an image capture device. More specifically, embodiments of the present disclosure apply tailored regional image processing techniques to various regions of an image based at least upon an identification and characterization of various image elements and/or objects that can be isolated within the captured imagery and/or video. An image capture device can include a camera, a video camera, a mobile device with an integrated image capture device, or other devices suitable for capturing imagery and/or video as can be appreciated. In some embodiments, an image capture device according to an embodiment of the disclosure can include a device such as a smartphone, tablet computing system, laptop computer, desktop computer, or any other computing device that has the capability to receive and/or capture imagery via image capture hardware.
Accordingly, image capture device hardware can include components such as lenses, image sensors (e.g., charge coupled devices, CMOS image sensor, etc.), processor(s), image signal processor(s), a main processor, memory, mass storage, or any other hardware or software components that can facilitate capture of imagery and/or video. In some embodiments, an image signal processor can be incorporated as a part of a main processor in an image capture device module that is in turn incorporated into a device having its own processor, memory and other components.
An image capture device according to an embodiment of the disclosure can provide a user interface via a display that is integrated into the image capture device. The display can be integrated with a mobile device, such as a smartphone and/or tablet computing device, and can include a touchscreen input device (e.g., a capacitive touchscreen, etc.) with which a user may interact with the user interface that is presented thereon. The image capture device hardware can also include one or more buttons, dials, toggles, switches, or other input devices with which the user can interact with software executed in the image capture device.
Referring now to the drawings,
The mobile device 102 may be configured to execute various applications, such as a camera application that can interact with an image capture module that includes various hardware and/or software components that facilitate capture and/or storage of images and/or video. In one embodiment, the camera application can interact with application programming interfaces (API's) and/or other software libraries and/or drivers that are provided for the purpose of interacting with image capture hardware, such as the lens system and other image capture hardware. The camera application can be a special purpose application, a plug-in or executable library, one or more API's, image control algorithms, image capture device firmware, or other software that can facilitate communication with image capture hardware in communication with the mobile device 102. Accordingly, a camera application according to embodiments of the present disclosure can capture imagery and/or video via the various image capture hardware as well as facilitate storage of the captured imagery and/or video in memory and/or mass storage associated with the mobile device 102.
The image capture device 104 includes a lens system 200 that conveys images of viewed scenes to an image sensor 202. By way of example, the image sensor 202 comprises a charge-coupled device (CCD) or a complementary metal oxide semiconductor (CMOS) sensor that is driven by one or more sensor drivers 204. The analog image signals captured by the sensor 202 are provided to an analog-to-digital (A/D) converter 206 for conversion into binary code that can be processed by a processor 208. The processor can also execute a regional image processing application 151 that can carry out the regional image processing discussed herein. In some embodiments, the regional image processing application 151 can take the form of API's, control algorithms, or other software accessible to the image capture device 104 and/or a mobile device 102 or other system in which the image capture device 104 is integrated.
Operation of the sensor driver(s) 204 is controlled through a camera controller 210 that is in bi-directional communication with the processor 208. In some embodiments, the controller 210 can control one or more motors 212 that are used to drive the lens system 200 (e.g., to adjust focus, zoom, and/or aperture settings). The controller 210 can also communicate with a flash system, user input devices (e.g., buttons, dials, toggles, etc.) or other components associated with the image capture device 104. Operation of the camera controller 210 may be adjusted through manipulation of a user interface. A user interface comprises the various components used to enter selections and commands into the image capture device 104 and therefore can include various buttons as well as a menu system that is displayed to the user in, for example, a camera application executed on a mobile device 102 and/or on a back panel associated with a standalone digital camera.
The digital image signals are processed in accordance with instructions from an image signal processor 218 stored in permanent (non-volatile) device memory. Processed (e.g., compressed) images may then be stored in storage memory, such as that contained within a removable solid-state memory card (e.g., Flash memory card). The embodiment shown in
An image capture device (e.g., camera, mobile device with integrated camera, etc.) and/or processing system can be configured with tailored regional processing that is based at least upon an identification and characterization of various image elements. Image and video adjustments associated with prior art image capture systems (e.g., post processing outside of the camera or image capture device) are often applied to the entirety of an image. For example, adjusting brightness, tone, color intensity, contrast, gamma, etc., or other aspects of an image generally involves application of such an adjustment to the entire image or sequence of frames in prior art systems. The following drawings illustrate various examples of logic that can, alone or in combination, be implemented in an image capture device.
An image capture device according to an embodiment of the disclosure can apply various image processing techniques to various regions of an image that can be associated with a particular region type. The identification and characterization of regions within captured imagery, as well as application of various image processing techniques, can be accomplished via software executed by the processor 208, the ISP 218, and/or a processor associated with a device in communication with the image capture device 104. It should be appreciated that the specific implementation and/or embodiments disclosed herein are merely examples.
Accordingly, reference is now made to
Therefore,
The region library can specify parameters and/or signatures corresponding to various region types, which can include, but are not limited to, a landscape region, a portrait region, a low light region, a fireworks region, a backlight region, a high motion region, a facial region, or any other region for which various image parameters and/or ranges of parameters can be defined.
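As an illustrative sketch only (the disclosure does not specify a data format), such a region library might map region-type names to ranges of simple image statistics; the type names, statistic names, and numeric ranges below are hypothetical:

```python
# Hypothetical region library: each known region type maps to ranges of
# simple image statistics. All names and values here are illustrative,
# not taken from the disclosure.
REGION_LIBRARY = {
    "sky": {"mean_hue": (180, 250), "mean_brightness": (120, 255)},
    "face": {"mean_hue": (0, 50), "mean_brightness": (60, 220)},
    "low_light": {"mean_hue": (0, 360), "mean_brightness": (0, 60)},
}

def matches_region_type(stats, region_type, library=REGION_LIBRARY):
    """Return True when every measured statistic of a region falls inside
    the parameter range the library specifies for the region type."""
    params = library[region_type]
    return all(lo <= stats[name] <= hi for name, (lo, hi) in params.items())
```

A region whose measured statistics fall within all of a type's ranges would be characterized as that type; real implementations would of course use richer signatures than two scalar statistics.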
In the depicted example, the image capture device can employ facial recognition algorithms to isolate and/or determine whether a region in the image 300 corresponds to a human face. In the depicted example, the image capture device can determine whether region 302 corresponds to a human face by analyzing its relative size, color, shape, and other properties as can be appreciated. Accordingly, the image capture device can associate region 302 with a region type corresponding to a human face or head. The image capture device can employ various image recognition techniques to determine whether a portion of the image corresponds to a background and/or sky region type. In the depicted example, the image capture device can determine whether region 304 corresponds to a set of parameters and/or parameter ranges specified by a region library as associated with a sky. The image capture device can also calculate a confidence score that is based at least upon how closely a region isolated in the image 300 matches the parameters associated with a known region type specified by the region library. In other words, the image capture device can isolate the various regions of an image and characterize certain regions as a known region type if the parameters specified by the region library are within a certain range.
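The disclosure does not specify a confidence formula, so purely as a hedged sketch, one way such a score could be computed is to measure how far each region statistic sits from the center of the library's parameter range:

```python
def confidence_score(stats, params):
    """Hypothetical confidence measure: 1.0 when every statistic sits at
    the center of its library range, falling to 0.0 at the range edges.
    `params` maps statistic names to (low, high) range tuples."""
    scores = []
    for name, (lo, hi) in params.items():
        center = (lo + hi) / 2.0
        half_width = (hi - lo) / 2.0
        # Distance from the range center, normalized so the edge scores 0.
        distance = abs(stats[name] - center) / half_width
        scores.append(max(0.0, 1.0 - distance))
    # Average the per-parameter scores into one overall confidence.
    return sum(scores) / len(scores)
```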
Similarly, the image capture device can isolate other regions 306, 308, 310 and determine whether they correspond to known region types in a region library. In various embodiments, the region library can be stored in memory associated with a mobile device with which the image capture device is integrated, in memory associated with the image capture device, hard coded into the processor and/or ISP of the image capture device, or provided in other ways as can be appreciated.
Upon identification of region types associated with the various regions of the image, the image capture device can apply various image processing techniques that can be associated with the region types. Image processing techniques can include, but are not limited to, adjusting color levels, sharpness, brightness, contrast, or any other parameter or property associated with a region of the image. An image processing technique can also include, but is not limited to, the application of one or more signal processing techniques, filters, or any other process that receives as an input image data associated with a region and outputs image data that is altered or modified in some form. For example, one or more image processing techniques can be associated with a known region type corresponding to a human face and applied only to the region 302 corresponding to the face rather than globally to the entire image 300. Accordingly, the image capture device can apply smoothing, blemish removal, or other image processing techniques to the facial region 302.
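A minimal sketch of this region-scoped dispatch follows; the technique table and the simple averaging/brightening stand-ins are assumptions for illustration, not the patented processing techniques:

```python
def smooth(pixels):
    # Stand-in for facial smoothing: 3-tap average along each row.
    return [[(row[max(i - 1, 0)] + row[i] + row[min(i + 1, len(row) - 1)]) // 3
             for i in range(len(row))] for row in pixels]

def enhance_color(pixels):
    # Stand-in for sky color enhancement: boost values 10%, clamped to 255.
    return [[min(255, int(v * 1.1)) for v in row] for row in pixels]

# Preconfigured association of region types with processing techniques.
TECHNIQUES = {"face": smooth, "sky": enhance_color}

def process_region(pixels, region_type):
    """Apply only the technique preconfigured for the region's type,
    leaving regions of unknown type untouched."""
    technique = TECHNIQUES.get(region_type)
    return technique(pixels) if technique else pixels
```

The key point the sketch illustrates is that each technique receives only the pixels of its region, never the whole image.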
As another example, the image capture device can employ various image recognition techniques to determine whether a portion of the image 300 corresponds to a background or sky region. In the depicted example, the image capture device can determine whether region 304 corresponds to a sky region and apply image processing techniques specific to such a region type only to the region 304. For example, these image processing techniques can include color enhancement, adjustment of various color levels, modifying sharpness, contrast, application of one or more image filters, or other image processing techniques as can be appreciated. The image processing techniques associated with a particular region type can be preconfigured so that the image capture device applies these image processing techniques only to the region types that are identified within the image 300 rather than to the entire image 300 globally. Similarly, the image capture device can also determine whether the other regions 306, 308, 310 correspond to other region types for which image processing techniques are defined and apply the preconfigured image processing techniques that are associated with the identified region types to these regions.
Additionally, as noted above, the image capture device can calculate a confidence score that is associated with an identification of a region type in an image. Accordingly, the image capture device can apply the image processing techniques to an identified region at higher levels when a confidence score associated with identification of the region type is higher. In other words, the image capture device can more aggressively apply image processing techniques associated with a region type when a confidence score reflects a high degree of confidence that identification of a region is accurate. Additionally, in the case of video captured by the image capture device, the image capture device can employ the same techniques described above to each frame associated with a video. In some embodiments, the image capture device can apply the image processing techniques to a sampling of frames associated with a video. Additionally, the image capture device can employ object tracking techniques to track a particular object throughout the various frames of a video so that the same image processing techniques are applied to the object in the various video frames.
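The confidence-weighted application described above could be sketched as blending processed pixel values toward the originals in proportion to the confidence score; the linear blend is an assumption, since the disclosure says only that techniques are applied more aggressively at higher confidence:

```python
def apply_with_confidence(original, processed, confidence):
    """Blend processed pixel values with the originals in proportion to
    the confidence score: 0.0 keeps the original region unchanged,
    1.0 applies the processing technique at full strength."""
    return [[round(o + (p - o) * confidence) for o, p in zip(orow, prow)]
            for orow, prow in zip(original, processed)]
```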
In some embodiments, the image capture device can apply the image processing techniques that are associated with identified regions in an image and/or video frames by modifying the captured image data prior to storage in memory and/or a mass storage device. In other embodiments, the image capture device can record the image processing techniques that are applied in metadata associated with the image while retaining the originally captured image data. In such a scenario, a camera application or other software generating a user interface associated with content captured by the image capture device can display either the originally captured image data or the image after application of the image processing techniques.
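A rough sketch of the non-destructive variant, with hypothetical record and metadata keys, might look as follows:

```python
def record_techniques(image_record, region_id, techniques):
    """Attach the list of applied techniques to the image's metadata
    without modifying the originally captured pixel data. The
    "metadata"/"regions" keys are illustrative, not from the disclosure."""
    meta = image_record.setdefault("metadata", {})
    meta.setdefault("regions", {})[region_id] = list(techniques)
    return image_record

# Usage: the original pixels survive; only metadata is added.
record = {"pixels": [[1, 2], [3, 4]]}
record_techniques(record, "region_302", ["smoothing", "blemish_removal"])
```

A viewer application could then replay the recorded techniques on demand, or show the untouched capture.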
Referring next to
First, in box 501, image capture can be initiated in the image capture device so that one or more images are captured by the lens system, image sensor, and other image capture device hardware as discussed above. In box 503, the image capture device can isolate a region within the captured imagery. In box 505, the image capture device can associate the isolated region with a region type. If a region type can be identified, then in box 507 the image capture device can apply image processing techniques that can be preconfigured as associated with the identified region type. In box 509, the image capture device can determine whether there are additional regions to be processed in the captured image and repeat the process if so.
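The boxed steps can be sketched as a simple loop; isolation and identification are passed in as callables here, since the recognition techniques themselves are described earlier and this sketch only shows the control flow:

```python
def process_image(image, isolate_regions, identify_type, techniques):
    """Control flow of boxes 503-509: isolate each region, identify its
    type, and apply the preconfigured technique when a type is known.
    Returns (region, processed_region) pairs for the regions touched."""
    applied = []
    for region in isolate_regions(image):        # box 503 (looped via 509)
        region_type = identify_type(region)      # box 505
        technique = techniques.get(region_type)
        if technique is not None:                # box 507
            applied.append((region, technique(region)))
    return applied
```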
Embodiments of the present disclosure can be implemented in various devices, for example, having a processor, memory as well as image capture hardware that can be coupled to a local interface. The logic described herein can be executable by one or more processors integrated with a device. In one embodiment, an application executed in a computing device, such as a mobile device, can invoke API's that provide the logic described herein as well as facilitate interaction with image capture hardware. Where any component discussed herein is implemented in the form of software, any one of a number of programming languages may be employed such as, for example, processor specific assembler languages, C, C++, C#, Objective C, Java, Javascript, Perl, PHP, Visual Basic, Python, Ruby, Delphi, Flash, or other programming languages.
As such, these software components can be executable by one or more processors in various devices. In this respect, the term “executable” means a program file that is in a form that can ultimately be run by a processor. Examples of executable programs may be, for example, a compiled program that can be translated into machine code in a format that can be loaded into a random access portion of memory and run by a processor, source code that may be expressed in proper format such as object code that is capable of being loaded into a random access portion of the memory and executed by the processor, or source code that may be interpreted by another executable program to generate instructions in a random access portion of the memory to be executed by the processor, etc. An executable program may be stored in any portion or component of the memory including, for example, random access memory (RAM), read-only memory (ROM), hard drive, solid-state drive, USB flash drive, memory card, optical disc such as compact disc (CD) or digital versatile disc (DVD), floppy disk, magnetic tape, or other memory components.
Although various logic described herein may be embodied in software or code executed by general purpose hardware as discussed above, as an alternative the same may also be embodied in dedicated hardware or a combination of software/general purpose hardware and dedicated hardware. If embodied in dedicated hardware, each can be implemented as a circuit or state machine that employs any one of or a combination of a number of technologies. These technologies may include, but are not limited to, discrete logic circuits having logic gates for implementing various logic functions upon an application of one or more data signals, application specific integrated circuits having appropriate logic gates, or other components, etc. Such technologies are generally well known by those skilled in the art and, consequently, are not described in detail herein.
The flowchart of
Although the flowchart of
Also, any logic or application described herein that comprises software or code can be embodied in any non-transitory computer-readable medium for use by or in connection with an instruction execution system such as, for example, a processor in a computer device or other system. In this sense, the logic may comprise, for example, statements including instructions and declarations that can be fetched from the computer-readable medium and executed by the instruction execution system. In the context of the present disclosure, a “computer-readable medium” can be any medium that can contain, store, or maintain the logic or application described herein for use by or in connection with the instruction execution system. The computer-readable medium can comprise any one of many physical media such as, for example, magnetic, optical, or semiconductor media. More specific examples of a suitable computer-readable medium would include, but are not limited to, magnetic tapes, magnetic floppy diskettes, magnetic hard drives, memory cards, solid-state drives, USB flash drives, or optical discs. Also, the computer-readable medium may be a random access memory (RAM) including, for example, static random access memory (SRAM) and dynamic random access memory (DRAM), or magnetic random access memory (MRAM). In addition, the computer-readable medium may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or other type of memory device.
It should be emphasized that the above-described embodiments of the present disclosure are merely possible examples of implementations set forth for a clear understanding of the principles of the disclosure. Many variations and modifications may be made to the above-described embodiment(s) without departing substantially from the spirit and principles of the disclosure. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.
Claims
1. An image capture device, comprising:
- at least one image sensor; and
- an application executed in the image capture device, the application comprising: logic that initiates capture of at least one image via the at least one image sensor associated with the image capture device; logic that isolates at least one region of the image; logic that identifies a region type associated with the at least one region; logic that applies at least one image processing technique to the at least one region of the image based at least upon a preconfigured image processing configuration associated with the region type.
2. The image capture device of claim 1, the logic that isolates the at least one region of the image further comprises:
- logic that identifies at least one object in the image; and
- logic that designates the at least one object as at least one region of the image.
3. The image capture device of claim 2, wherein the image capture device is configured to capture a plurality of images associated with a plurality of frames of a video, and the image capture application further comprises:
- logic that tracks the object in subsequent frames of the video; and
- logic that applies the image processing technique associated with the region type to the at least one region in the subsequent frames.
4. The image capture device of claim 1, wherein the logic that isolates the at least one region of the image further comprises:
- logic that performs at least one edge recognition algorithm on the image; and
- logic that extracts at least one region of the image associated with at least one edge identified in the image.
5. The image capture device of claim 1, wherein the logic that identifies the region type associated with the at least one region further comprises performing at least one image recognition algorithm on the at least one region, the image recognition algorithm configured to determine whether the at least one region corresponds to a region type specified by a region library accessible to the image capture device.
6. The image capture device of claim 5, wherein the region library comprises at least one signature associated with a respective known region type, the at least one signature comprising at least one parameter uniquely associated with the respective known region type.
7. The image capture device of claim 6, wherein the logic that identifies the region type associated with the at least one region further comprises:
- logic that generates a respective signature associated with the at least one region; and
- logic that determines whether the respective signature is within a predetermined range of the at least one signature associated with the respective known region type.
8. The image capture device of claim 1, wherein the logic that applies the at least one image processing technique to the at least one region of the image further comprises recording the at least one image processing technique to metadata associated with the image.
9. The image capture device of claim 1, wherein the logic that identifies the region type associated with the at least one region further comprises:
- logic that generates a confidence score associated with identification of the region type, the confidence score corresponding to a confidence level of the identification; and
- logic that adjusts a level associated with the at least one image processing technique associated with the region type based at least upon the confidence score.
10. The image capture device of claim 1, wherein the region type is one of: a landscape region, a portrait region, a low light region, a fireworks region, a backlight region, a sky region, a high motion region, and a facial region.
11. The image capture device of claim 1, wherein the logic that identifies a region type associated with the at least one region further comprises:
- logic that identifies a first region type associated with a first region;
- logic that identifies a second region type associated with a second region;
- logic that applies a first image processing technique to the first region, the first image processing technique associated with the first region type; and
- logic that applies a second image processing technique to the second region, the second image processing technique associated with the second region type.
12. A method, comprising the steps of:
- capturing, in an image capture device, an image via an image sensor associated with the image capture device;
- isolating, in the image capture device, at least one region of the image;
- identifying, in the image capture device, a region type associated with the at least one region; and
- applying, in the image capture device, at least one image processing technique to the at least one region of the image based at least upon a preconfigured image processing configuration associated with the region type.
13. The method of claim 12, wherein the step of isolating the at least one region of the image further comprises:
- identifying at least one object in the image;
- extracting the object from the image; and
- designating the object as at least one region of the image.
14. The method of claim 12, wherein the step of isolating at least one region of the image further comprises:
- performing at least one edge recognition algorithm on the image; and
- extracting at least one region of the image associated with at least one edge identified in the image.
15. The method of claim 12, wherein the step of identifying the region type associated with the at least one region further comprises performing at least one image recognition algorithm on the at least one region, the image recognition algorithm configured to determine whether the at least one region corresponds to a region type specified by a region library accessible to the image capture device.
16. The method of claim 15, wherein the region library comprises at least one signature associated with a respective known region type, the at least one signature comprising at least one parameter uniquely associated with the respective known region type.
17. The method of claim 12, wherein the step of applying the at least one image processing technique to the at least one region of the image further comprises recording the at least one image processing technique to metadata associated with the image.
18. The method of claim 12, wherein the step of identifying a region type associated with the at least one region further comprises:
- generating a confidence score associated with identification of the region type, the confidence score corresponding to a confidence level of the identification; and
- adjusting a level associated with the at least one image processing technique associated with the region type based at least upon the confidence score.
19. The method of claim 12, wherein the step of identifying a region type associated with the at least one region further comprises:
- identifying a first region type associated with a first region;
- identifying a second region type associated with a second region;
- applying a first image processing technique to the first region, the first image processing technique associated with the first region type; and
- applying a second image processing technique to the second region, the second image processing technique associated with the second region type.
20. A system, comprising:
- means for capturing an image via an image sensor associated with an image capture device;
- means for isolating at least one region of the image;
- means for identifying a region type associated with the at least one region; and
- means for applying at least one image processing technique to the at least one region of the image based at least upon a preconfigured image processing configuration associated with the region type.
Type: Application
Filed: Sep 27, 2011
Publication Date: Jan 24, 2013
Applicant: BROADCOM CORPORATION (Irvine, CA)
Inventors: Benjamin Sewell (Truro), David Plowman (Great Chesterfield), Gordon (Chong Ming Gordon) Lee (Cambridge), Efrat Swissa (Pittsburgh, PA)
Application Number: 13/245,941
International Classification: H04N 5/228 (20060101); G06K 9/00 (20060101);