System and Method for Imaging and Image Processing

One or more objects of interest from a scene are selected. Depth information of the one or more objects is calculated. Additionally, depth information of the scene is calculated. The calculated depth information of the one or more objects is compared with calculated depth information of the scene. Based on the comparison a blur is applied to an image that includes the scene.

Description

The present invention relates to a system and method for creating an image having blurred and non-blurred areas using an image capturing device. Moreover, the invention relates to an apparatus for creating an image with a low depth of field appearance, to an apparatus for creating an image with highlighted areas of interest, and to an apparatus for creating an image with highlighted differences in an image sequence.

BACKGROUND OF THE INVENTION

WO 2006/039486 relates to a method for digitally imaging a scene, the method comprising: using a photo sensor array to simultaneously detect light from the scene that is passed to different locations on a focal plane; determining the angle of incidence of the light detected at the different locations on the focal plane; and using the determined angle of incidence and the determined depth of field to compute an output image in which at least a portion of the image is refocused. This International application discloses a system as well, comprising: a main lens; a photo sensor array for capturing a set of light rays; a microlens array between the main lens and the photo sensor array; a data processor to compute a synthesized refocused image via a virtual redirection of the set of light rays captured by the photo sensor array.

U.S. Pat. No. 7,224,384 relates to an optical imaging system comprising: a taking lens that collects light from a scene being imaged with the optical imaging system; a 3D camera comprising at least one photo surface that receives light from the taking lens simultaneously from all points in the scene and provides data for generating a depth map of the scene responsive to the light; and an imaging camera comprising at least one photo surface that receives light from the taking lens and provides a picture of the scene responsive to the light; and a light control system that controls an amount of light from the taking lens that reaches at least one of the 3D camera and the imaging camera without affecting an amount of light that reaches the other of the 3D camera and the imaging camera.

WO 2008/087652 relates to a method for mapping an object, comprising: illuminating the object with at least two beams of radiation having different beam characteristics; capturing at least one image of the object under illumination with each of the at least two beams; processing the at least one image to detect local differences in an intensity of the illumination cast on the object by the at least two beams; and analyzing the local differences in order to generate a three-dimensional (3D) map of the object.

An object of the present invention is to use information captured by the camera to blur only selected pixels in the image.

Another object of the present invention is to use depth information captured by the camera and a distance of interest set by an algorithm or by a user to blur only selected pixels.

Another object of the present invention is to use chromatic information captured by the camera and a spectrum of interest set by an algorithm or by a user to blur only selected pixels.

Another object of the present invention is to use difference information between two or more sequential frames to blur only selected pixels.

The term multi aperture digital camera as used herein means a camera that comprises more than one imaging lens, each lens having its own aperture and lens elements. The term imaging channel refers to the lens and sensor area of one aperture in a multi aperture digital camera.

Using a multi lens camera allows us to extract distance information for certain objects in a scene. The distance between the lenses of the different imaging channels creates a parallax effect, causing objects that are not at infinity to appear at different positions in the images of the different imaging channels. Calculating these position shifts using an algorithm such as auto-correlation allows us to determine the distance of each object in the scene. Using a time-of-flight system allows us to calculate depth information for objects in a scene by emitting light toward the scene and measuring the time it takes the light to return to the sensor: the farther away an object is, the longer the light takes to return.
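
As a hedged illustration of this parallax calculation, the sketch below converts per-pixel shifts (disparities) into distances, assuming a calibrated pair of imaging channels with a known baseline and focal length; the function names, the zero-mean correlation search, and all parameters are illustrative, not the application's specified algorithm.

```python
import numpy as np

def depth_from_disparity(disparity_px, baseline_m, focal_length_px):
    """Convert a per-pixel disparity map (pixels) into depth (meters).
    Objects at infinity show zero disparity; nearer objects shift more
    between the imaging channels."""
    disparity = np.asarray(disparity_px, dtype=np.float64)
    depth = np.full(disparity.shape, np.inf)
    near = disparity > 0
    depth[near] = focal_length_px * baseline_m / disparity[near]  # z = f*B/d
    return depth

def disparity_by_correlation(patch, search_strip):
    """Locate `patch` (from one channel) inside `search_strip` (the same
    row of another channel) by maximizing zero-mean correlation. If the
    strip starts at the patch's own x position, the returned shift is
    the patch's disparity in pixels."""
    h, w = patch.shape
    best_shift, best_score = 0, -np.inf
    for s in range(search_strip.shape[1] - w + 1):
        cand = search_strip[:h, s:s + w]
        score = float(np.sum((patch - patch.mean()) * (cand - cand.mean())))
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift
```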

Using a structured light system to calculate depth information of objects in a scene is based on a light emitting system in which light is emitted in a structured manner, such as a grid of dots. An imaging camera images these dots, and an algorithm measures the position of the dots in its image. Because the light emitting system and the imaging camera are separated laterally, a parallax effect is present, and by calculating the position of the dots (or any other pattern) the system can determine the distance of the object from which each dot was reflected.
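
The same triangulation geometry can be written down for a single projected dot. This is a generic structured light sketch under stated assumptions (a known projector-camera baseline and a calibrated reference position for the dot), not the application's own method:

```python
import numpy as np

def depth_from_dot_shift(dot_x_px, ref_x_px, baseline_m, focal_length_px):
    """Triangulate the depth of one projected dot from its lateral shift.
    `ref_x_px` is the dot's image position for a target at the calibration
    reference (effectively at infinity); closer surfaces shift it more."""
    shift = ref_x_px - dot_x_px
    if shift <= 0:
        return np.inf  # at or beyond the reference distance
    return focal_length_px * baseline_m / shift
```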

Using multiple cameras positioned at different locations likewise allows us to extract distance information for certain objects in a scene. The distance between the lenses of the different cameras creates a parallax effect, causing objects that are not at infinity to appear at different positions in the images of the different cameras. Calculating these position shifts using an algorithm such as auto-correlation allows us to determine the distance of each object in the scene.

The present inventors found that it is possible to blur selected parts of an image in order to create a low depth of field appearance and to highlight certain areas or objects in an image or image sequence. Humans, when looking at an image, tend to focus their attention on the areas that are sharpest relative to their surroundings; therefore, blurring areas of lower interest has a clear advantage.

When using camera lenses with a low F/# (focal length divided by aperture diameter), the depth of field becomes smaller as the F/# decreases. Although this effect may be considered a disadvantage, as objects not positioned at the focus distance are severely blurred, it can also create a three-dimensional impression of the scene. Using the method described above for obtaining object distances by calculating the local shift between the images of the different imaging channels, or using another technology described above, we can intentionally blur areas of the image that are far from the object of interest which we want to keep sharp.

The present invention relates to a system and method which may be applied to a variety of imaging systems. This system and method provide high quality imaging while considerably reducing the length of the camera as compared to other systems and methods.

Specifically, an object of the present invention is to provide a system and a method to improve image capturing devices while reducing their length. This may be accomplished by using two or more apertures, each using a lens. Each lens forms a small image of the scene, transferring light emitted or reflected from objects in the scene onto a proportional area of the detector. The optical track of each lens is proportional to the segment of the detector onto which the emitted or reflected light is projected. Therefore, when using smaller lenses, the area of the detector onto which the emitted or reflected light is projected, referred to hereinafter as the active area of the detector, is smaller. When the detector is used for each lens separately, each initial image formed is significantly smaller as compared to using one lens which forms an entire image; a single lens camera transfers emitted or reflected light onto the entire detector area.

According to an embodiment, the present invention relates to a method for creating an image having blurred and non-blurred areas using an image capturing device capable of depth mapping, comprising the steps of:

Selecting one or more objects of interest from the scene,

Calculating depth information of said one or more objects of interest from the scene,

Retrieving raw data of the complete scene from the multi aperture camera,

Calculating depth information of the complete scene,

Comparing the calculated depth information of the selected object of interest with the calculated depth information of the complete scene,

Applying a blur that is dependent on the result of the comparison.

The step of selecting can be done automatically by an algorithm that recognizes areas of interest, such as faces in conventional photography. Blurring can be achieved by means of convolution of an area of the image with a blur filter such as a Gaussian.
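
To make the comparison and blurring steps concrete, here is a minimal sketch assuming a depth map in meters is already available; the banded Gaussian approach, the name blur_by_depth, and the parameters sigma_per_meter and bands are our illustrative assumptions, not the application's specified implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_by_depth(image, depth_map, depth_of_interest,
                  sigma_per_meter=2.0, bands=4):
    """Blur each pixel according to how far its depth lies from the
    depth of interest; pixels nearest that depth stay sharp.

    Banded approximation: the image is blurred at a few discrete
    Gaussian sigmas and each pixel is taken from the band matching its
    depth difference. Expects an H x W x 3 image and an H x W depth
    map in meters."""
    diff = np.abs(depth_map - depth_of_interest)
    edges = np.linspace(0.0, diff.max() + 1e-9, bands + 1)
    img = image.astype(np.float64)
    out = np.zeros_like(image, dtype=np.float64)
    for b in range(bands):
        # Band 0 holds the object of interest and is left unblurred.
        sigma = 0.0 if b == 0 else sigma_per_meter * 0.5 * (edges[b] + edges[b + 1])
        blurred = img if sigma == 0.0 else gaussian_filter(img, sigma=(sigma, sigma, 0))
        mask = (diff >= edges[b]) & (diff < edges[b + 1])
        out[mask] = blurred[mask]
    return out.astype(image.dtype)
```

Quantizing the depth differences into a few bands keeps the number of Gaussian passes small while approximating a blur that grows with distance from the object of interest.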

In more detail, if a scene consists of a room with three people standing at 1, 2 and 3 meters from the camera respectively, the object of interest can be chosen as the person standing at 1 meter. According to the embodiment above, we first calculate the distance of the object of interest, then calculate the distances of all other objects and compare them. Based on this comparison we decide on the type or size of blur to apply to each object. In this case a small blur will be applied to the person standing at 2 meters and a larger blur will be applied to the person standing at 3 meters. The object of interest, the person standing at 1 meter, will not be blurred at all.
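
In terms of the hypothetical blur_by_depth sketch above, this three-person example might be invoked as follows (the values simply mirror the text):

```python
# The person at 1 m is selected; the depth map is in meters.
result = blur_by_depth(image, depth_map, depth_of_interest=1.0,
                       sigma_per_meter=2.0, bands=4)
# Depth differences: 0 m (sharp band), 1 m (small blur band) and
# 2 m (largest blur band) for the people at 1, 2 and 3 meters.
```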

The advantage of this embodiment is that a low depth of field appearance is achieved.

Another advantage is that the selection of the object of interest can be applied automatically, or by a user using a touch screen or an input device and a display, in one frame that can be part of a preview mode frame sequence, after which a full resolution image may be captured and processed to keep the object of interest in focus while blurring other objects according to their distance from the object of interest. This eliminates the need to apply the blur only after the image is captured.

According to another embodiment, the present invention relates to a method for creating an image having blurred and non-blurred areas using an image capturing device capable of depth mapping, comprising the steps of:

Capturing an image from the image capturing device,

Calculating a depth map,

Selecting one or more objects of interest from the image,

Comparing the calculated depth information of the selected object of interest with the calculated depth of the complete scene,

Applying a blur that is dependent on the result of the comparison.

Blurring can be achieved by means of convolution of an area of the image with a blur filter such as a Gaussian.

The advantage of this embodiment is that the selection of the object of interest is done after the capturing and the depth calculation. This allows the user to choose different objects of interest, or to correct a selection, while keeping the non-blurred information and the depth map. Another advantage is that selecting objects of interest, comparing their distances with those of the other objects, and blurring accordingly can be done at a different time from the image capturing, allowing these operations to be performed on a device other than the one used for capturing. For example, the image capturing device could be a multi aperture camera integrated into a mobile phone or tablet computer, and the selection of the object of interest and the blurring can be done on a tablet or laptop computer at a different time. Another advantage is that by saving the image and the depth information it is possible to select objects of interest and apply blur multiple times, saving each resulting image as a computer file. Each time, the selection of the object of interest may be different.
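
A minimal sketch of this deferred workflow, assuming the image and depth map are simply stored together as arrays; the file format, function names, and touch-driven selection are illustrative:

```python
import numpy as np

def save_capture(path, image, depth_map):
    """On the capture device: store the unblurred image together with
    its depth map so selection and blurring can happen later, elsewhere."""
    np.savez(path, image=image, depth_map=depth_map)

def reblur(path, x, y, blur_fn):
    """On a tablet or laptop, possibly much later: reload the capture and
    blur around a freshly selected object of interest; can be repeated
    with a different selection each time."""
    data = np.load(path)
    image, depth_map = data["image"], data["depth_map"]
    depth_of_interest = float(depth_map[y, x])  # e.g. from a touch event
    return blur_fn(image, depth_map, depth_of_interest)
```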

According to another embodiment, the present invention relates to a method for creating an image having blurred and non-blurred areas using an image capturing device, in which the method comprises the following steps:

Capturing an image from the image capturing device,

Calculating chromatic properties of objects appearing in the captured image,

Selecting one or more objects of interest from the image according to the calculated chromatic properties,

Applying a blur that is dependent on the result of the selection.

Blurring can be achieved by means of convolution of an area of the image with a blur filter such as a Gaussian.

The advantage of this embodiment is that we can highlight objects with a certain chromatic nature, such as tissue suspected of being harmful in an image captured by, for example, an endoscopic camera.
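
A sketch of chromatic selection and blurring, using red dominance as a stand-in criterion for the "spectrum of interest"; the criterion, threshold, and blur strength are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def highlight_by_chroma(image_rgb, red_ratio=1.3, sigma=5.0):
    """Keep pixels whose red channel dominates (an illustrative stand-in
    for a spectrum of interest); blur everything else."""
    img = image_rgb.astype(np.float64)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    # Chromatic property: red dominance over the mean of green and blue.
    of_interest = r > red_ratio * (0.5 * (g + b) + 1e-9)
    blurred = gaussian_filter(img, sigma=(sigma, sigma, 0))
    out = np.where(of_interest[..., None], img, blurred)
    return out.astype(image_rgb.dtype)
```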

According to another embodiment, the present invention relates to a method for creating an image having blurred and non-blurred areas using an image sequence capturing device, in which the method comprises the following steps:

Capturing an image sequence from the image sequence capturing device,

Calculating differences between the sequential images,

Selecting one or more pixel areas of interest from the images according to the differences calculated between the sequential frames,

Applying a blur that is dependent on the result of the selection.

Blurring can be achieved by means of convolution of an area of the image with a blur filter such as a Gaussian.

The advantage of this embodiment is that objects that are moving or changing will be highlighted by the effect of the blurring of all other areas of the image or image sequence.

An example of this embodiment is a surveillance camera coupled with a display that is observed by a human. The scene may contain many details and objects, which makes it more difficult for the human to detect moving objects. By blurring the objects that are not moving, we draw the attention of the observing human to the moving or changing objects.
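
A sketch of this frame-difference variant, assuming 8-bit RGB frames; the change threshold and blur strength are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def highlight_motion(prev_frame, curr_frame, diff_threshold=15, sigma=5.0):
    """Blur the static parts of `curr_frame` so that areas that changed
    since `prev_frame` stand out sharp (e.g. on a surveillance display)."""
    diff = np.abs(curr_frame.astype(np.int32) - prev_frame.astype(np.int32))
    moving = diff.max(axis=-1) > diff_threshold  # per-pixel change mask
    blurred = gaussian_filter(curr_frame.astype(np.float64),
                              sigma=(sigma, sigma, 0))
    out = np.where(moving[..., None], curr_frame, blurred)
    return out.astype(curr_frame.dtype)
```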

The present invention could be integrated into many devices, such as a digital camera, digital video camera, mobile phone, personal computer, tablet, PDA, notebook, gaming console, television, monitor, display, automotive camera, glasses, helmet, projector, microscope, imaging endoscope, imaging medical probe, surveillance system, inspection system, speed detection system, traffic management system, area access system, satellite imaging, machine vision and augmented reality system.

The invention will be more clearly understood by reference to the following description of preferred embodiments thereof, read in conjunction with the figures attached hereto. In the figures, identical structures, elements or parts which appear in more than one figure are labeled with the same numeral in all the figures in which they appear. Dimensions of components and features shown in the figures are chosen for convenience and clarity of presentation and are not necessarily shown to scale. The figures are listed below:

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 illustrates a side view of a single lens camera.

FIG. 2 illustrates a sensor array (201) having multiple pixels.

FIG. 3 illustrates a side view of a three lens camera having one sensor and three lenses.

FIG. 4 illustrates an example of a scene as projected on to the sensor.

FIG. 5 illustrates a front view of a three lens camera using one rectangular sensor divided in to three regions.

FIG. 6 illustrates a front view of a three lens camera having one sensor, one large lens and two smaller lenses.

FIG. 7 illustrates a front view of a four lens camera having one sensor (700) and four lenses.

FIG. 8 illustrates a 16 lens camera having four regions, each containing four lenses as illustrated in FIG. 7.

DETAILED DESCRIPTION OF THE DRAWINGS

FIG. 1 illustrates a side view of a single lens camera having a single lens (102) that can comprise one or more elements and a single sensor (101).

FIG. 2 illustrates a sensor array (201) having multiple pixels, where the positions of the green, red and blue filters are marked by (202), (203) and (204) respectively. An image taken using this configuration needs to be processed in order to separate the green, red and blue images.

FIG. 3 illustrates a side view of a three lens camera having one sensor (310) and three lenses (301), (302) and (303). Each one of the said lenses projects the image of the same scene onto a segment of the sensor, marked by (311), (312) and (313) respectively. Each one of the three lenses has a different color filter integrated within the lens, in front of it or between the lens and sensor (310). Using the described configuration, the image acquired by the sensor is composed of two or more smaller images, each containing information from the scene at a different spectrum.

FIG. 4 illustrates an example of a scene as projected onto the sensor (401). In each region of the sensor (402), (403) and (404) the same scene is projected, but each region contains information for light at different wavelengths, representing different colors according to the filter integrated within the lens that forms the image on that region.

The described configuration does not require the use of a color mask, and therefore the maximal spatial frequency that can be resolved by the sensor is higher. On the other hand, using a smaller lens and a smaller active area per channel necessarily means that the focal length of the lens is smaller, and therefore the spatial resolution in object space is decreased. Overall, the maximal resolvable resolution for each color remains the same.

The image acquired by the sensor is composed of two or more smaller images, each containing information of the same scene but in a different color. The complete image is then processed and separated into three or more smaller images that are combined together into one large color image.
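
A minimal sketch of this separation and recombination for a three-region layout such as FIG. 5, assuming equal side-by-side regions and sub-images that are already registered (in practice the parallax between channels would be corrected first); the function name and region order are illustrative:

```python
import numpy as np

def combine_subimages(sensor_image, order=("red", "green", "blue")):
    """Split a monochrome sensor readout into three side-by-side
    sub-images (one per color channel) and stack them into one RGB
    image. Assumes pre-registered, equally sized regions."""
    h, w = sensor_image.shape
    third = w // 3
    channels = [sensor_image[:, i * third:(i + 1) * third] for i in range(3)]
    rgb = {name: ch for name, ch in zip(order, channels)}
    return np.dstack([rgb["red"], rgb["green"], rgb["blue"]])
```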

The Described Method of Imaging has Many Advantages:

    • 1. Shorter lens track (height): since each one of the lenses used is smaller in size than a single lens covering the same field of view, the total track (height) of each lens is smaller, allowing the camera to be smaller in height, an important factor for mobile phone cameras, notebook cameras and other applications requiring a short optical track.
    • 2. Reduced color artifacts: since each color is captured separately, artifacts originating from the spatial dependency of each color in a color mask will not appear.
    • 3. Lens requirements: each lens does not have to be optimal for all spectrums used but only for one spectrum, which simplifies the lens design and possibly decreases the number of elements used in each lens, as no color correction is needed.
    • 4. Larger depth of focus: the depth of focus of a system depends on its focal length. Since we use smaller lenses with smaller focal lengths, we increase the depth of focus by the square of the scale factor (made concrete in the sketch after this list).
    • 5. Elimination of focus mechanism: focus mechanisms change the distance between the lens and the sensor to compensate for the change in object distance and to assure that the desired distance is in focus during the exposure time. Such a mechanism is costly and has many other disadvantages such as:
      • a. Size
      • b. Power consumption
      • c. Shutter lag
      • d. Reliability
      • e. Price
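
The scale-factor-squared statement in advantage 4 can be made concrete with the textbook hyperfocal distance approximation (our illustration, not a formula from the application), where f is the focal length, N the F/#, c the circle of confusion and k the lens scale factor:

```latex
H = \frac{f^{2}}{N\,c}, \qquad
f \mapsto \frac{f}{k} \;\Longrightarrow\; H' = \frac{(f/k)^{2}}{N\,c} = \frac{H}{k^{2}}
```

Focused at the hyperfocal distance, everything from roughly H/2 to infinity is acceptably sharp, so shrinking H by a factor of k² extends the in-focus range accordingly.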

Using a fourth lens, in addition to the three used for the colors red, green and blue (or other colors), with a broad spectral transmission can allow extension of the sensor's dynamic range and improve the signal-to-noise performance of the camera in low light conditions.
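
One plausible way such a broadband channel could be exploited, sketched under the assumptions of registered images and 8-bit data; this luminance-fusion scheme is our illustration, not the application's method:

```python
import numpy as np

def fuse_clear_channel(rgb, clear, alpha=0.5):
    """Blend a broad-spectrum ('clear') channel into the luminance of a
    registered RGB image to raise low-light SNR; `alpha` weights the
    cleaner broadband signal. Illustrative fusion only."""
    rgb = rgb.astype(np.float64)
    luma = rgb.mean(axis=-1)
    boosted = (1.0 - alpha) * luma + alpha * clear.astype(np.float64)
    gain = boosted / np.maximum(luma, 1e-9)
    return np.clip(rgb * gain[..., None], 0, 255).astype(np.uint8)
```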

All configurations described above using a fourth lens element can be applied to other configurations having two or more lenses.

Another proposed configuration uses two or more lenses with one sensor having a color mask, such as a Bayer mask, integrated into or placed on top of the sensor. In such a configuration no color filter is integrated into each lens channel, and every lens creates a color image on the sensor region corresponding to that lens. The resulting image is processed to form one large image combining the two or more color images projected onto the sensor.

Three Lens Camera:

Dividing the sensor's active area into three areas, one for each color red, green and blue (or other colors), can be achieved by placing three lenses one beside the other, as illustrated in FIG. 5 below. The resulting image will consist of three small images, where each contains information of the same scene in a different color. Such a configuration will comprise three lenses, where the focal length of each lens is 4/9 of that of an equivalent single lens camera that uses a color filter array; these values assume a 4:3 aspect ratio sensor.

FIG. 5 illustrates a front view of a three lens camera using one rectangular sensor (500) divided into three regions (501), (502) and (503). The three lenses (511), (512) and (513), each having a different color filter integrated within the lens, in front of the lens or between the lens and the sensor, are used to form an image of the same scene but in different colors. In this example each of the sensor regions (501), (502) and (503) is rectangular, with the longer dimension of the rectangle perpendicular to the long dimension of the complete sensor.

Other three lens configurations can be used, such as a larger green filtered lens and two smaller lenses for blue and red; such a configuration will result in higher spatial resolution in the green channel since more pixels are being used.

FIG. 6 illustrates a front view of a three lens camera having one sensor (600), one large lens (613) and two smaller lenses (611) and (612). The large lens (613) is used to form an image on the sensor segment marked (603) while the two smaller lenses form an image on the sensor's segments marked with (601) and (602) respectively. The larger lens (613) can use a green color filter while the two smaller lenses (611) and (612) can use a blue and red filter respectively. Other color filters could be used for each lens.

Four Lens Camera:

FIG. 7 illustrates a front view of a four lens camera having one sensor (700) and four lenses (711), (712), (713) and (714). Each lens forms an image on the corresponding sensor region marked with (701), (702), (703) and (704) respectively. Each one of the lenses is integrated with a color filter inside the lens, in front of the lens or between the lens and the sensor. All four lenses could be integrated with different color filters, or alternatively two of the four lenses could have the same color filter. For example, using two green filters, one blue filter and one red filter will allow more light collection in the green spectrum.

M×N Lens Camera:

Using M and/or N larger than 2 allows a higher shortening factor and a greater increase in depth of focus.

FIG. 8 illustrates a 16 lens camera having 4 regions (801), (802), (803) and (804) each containing four lenses as illustrated in FIG. 7.

Claims

1-15. (canceled)

16. A method for creating an image comprising:

selecting one or more objects of interest from a scene;
calculating first depth information of the one or more objects of interest;
calculating second depth information of the scene;
comparing the first depth information with the second depth information; and
creating an image having at least one blurred area and at least one non-blurred area based on the comparison.

17. The method of claim 16, wherein the first depth information is calculated using a multi aperture digital camera having a plurality of imaging channels.

18. The method of claim 17, wherein the plurality of imaging channels includes filters with identical chromatic transmission properties.

19. The method of claim 17, wherein the plurality of imaging channels each includes a filter with proportional chromatic transmission properties.

20. The method of claim 17, wherein the first depth information is calculated by comparing a plurality of respective images from the plurality of imaging channels.

21. The method of claim 16, wherein the first depth information is calculated using a time-of-flight system.

22. The method of claim 16, wherein the first depth information is calculated by comparing two or more images captured by differently positioned digital cameras.

23. The method of claim 16, wherein the image having at least one blurred area and at least one non-blurred area has a low depth of field appearance.

24. A method for creating an image having blurred and non blurred areas, the method comprising:

capturing an image;
calculating a depth map;
selecting one or more objects of interest from the image;
comparing the calculated depth map with depth information of the selected one or more objects; and
applying a blur to the image based on the comparison.

25. The method of claim 24, wherein responsive to applying the blur, the image has a low depth of field appearance.

26. A method for creating an image having blurred and non blurred areas, the method comprising:

capturing an image sequence comprising sequential images;
calculating differences between the sequential images;
selecting one or more pixel areas of interest from the sequential images based on the calculated differences; and
applying a blur to the image sequence based on the selection of the one or more pixel areas.

27. The method of claim 26, wherein responsive to applying the blur, the differences between the sequential images are highlighted in the image sequence.

Patent History
Publication number: 20140192238
Type: Application
Filed: Apr 24, 2011
Publication Date: Jul 10, 2014
Applicant: LINX COMPUTATIONAL IMAGING LTD. (Zichron Yaakov)
Inventors: Ziv Attar (Rotterdam), Chen Aharon-Attar (Rotterdam), Edwin Maria Wolterink (Valkenswaard)
Application Number: 13/881,039
Classifications
Current U.S. Class: With Transition Or Edge Sharpening (e.g., Aperture Correction) (348/252)
International Classification: H04N 5/232 (20060101);