IMAGING SYSTEM AND METHOD
A method of distance measuring includes obtaining a depth map and a stereo pair of images of a scene of interest, and enhancing a precision of the depth map based on disparity values of corresponding points between the images. The images have a higher resolution than the depth map. Enhancing the precision of the depth map includes optimizing an energy function of the images over a predetermined range of disparity values to obtain an optimized energy function; determining the disparity values based on the optimized energy function; and replacing low precision values of the depth map with corresponding high precision values based on the disparity values.
This application is a continuation of U.S. application Ser. No. 16/109,211, filed on Aug. 22, 2018, which is a continuation of International Application No. PCT/CN2016/074520, filed on Feb. 25, 2016, the entire contents of both of which are incorporated herein by reference.
COPYRIGHT NOTICE
A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
FIELD
The disclosed embodiments relate generally to digital imaging and more particularly, but not exclusively, to systems and methods for enhancing precision of depth perception in stereoscopic imaging.
BACKGROUND
Stereoscopic imaging, a technique whereby multiple imaging devices are used to form a three-dimensional image through stereopsis, is becoming increasingly common in many fields. Stereoscopic imaging is particularly useful in robotics, where it is often desirable to gather three-dimensional information about a machine's environment. Stereoscopic imaging simulates the binocular vision of human eyes, which applies the principle of stereopsis to achieve depth perception. This technique can be reproduced by artificial imaging devices that view a given object of interest from slightly different vantage points. Differences between the varying views of the object of interest convey depth information about the object, thereby enabling three-dimensional imaging of the object.
The ability of stereoscopic imaging to resolve depth is a function of the resolution of the images that are taken from different vantage points and compared. Higher resolution images yield more precise depth measurements. Obtaining greater precision of depth perception is especially important in applications for viewing distant objects, such as outdoor imaging applications. However, existing methods of determining depth by stereoscopic imaging scale poorly as image resolution increases and are ill-suited for such imaging applications. Accordingly, there is a need for systems and methods that more efficiently increase depth perception precision in stereoscopic imaging.
SUMMARY
In accordance with a first aspect disclosed herein, there is set forth a method of distance measuring, comprising: obtaining a depth map and a stereo pair of images of a scene of interest, the images having a higher resolution than the depth map; and enhancing a precision of the depth map based on disparity values of corresponding points between the images.
In accordance with another aspect disclosed herein, there is set forth an imaging system, comprising: a pair of imaging devices configured to obtain a stereo pair of images of a scene of interest; and one or more processors configured to enhance a precision of a depth map of the scene of interest based on disparity values of corresponding points between the images, wherein the images have a higher resolution than the depth map.
In accordance with another aspect disclosed herein, there is set forth an apparatus for imaging, comprising one or more processors configured to: obtain a stereo pair of images of a scene of interest; obtain a depth map of the scene of interest, the images having a higher resolution than the depth map; and enhance a precision of the depth map based on disparity values of corresponding points between the images.
In accordance with another aspect disclosed herein, there is set forth a computer readable storage medium, comprising: instructions for obtaining a depth map and a stereo pair of images of a scene of interest, the images having a higher resolution than the depth map; and instructions for enhancing a precision of the depth map based on disparity values of corresponding points between the images.
It should be noted that the figures are not drawn to scale and that elements of similar structures or functions are generally represented by like reference numerals for illustrative purposes throughout the figures. It also should be noted that the figures are only intended to facilitate the description of the embodiments. The figures do not illustrate every aspect of the described embodiments and do not limit the scope of the present disclosure.
DETAILED DESCRIPTION OF THE EMBODIMENTS
The present disclosure sets forth systems and methods for enhancing the precision of depth measurements obtained using stereoscopic imaging, which overcome limitations of traditional systems and methods. More particularly, prior systems and methods for finding a disparity between corresponding points in two separate images are inefficient, scaling with the cube of the linear resolution of the images. For example, doubling image resolution from 320×240 pixels to 640×480 pixels can increase computational costs by a factor of eight, even though the precision of the resulting depth map increases only by a factor of two. The present systems and methods significantly enhance the efficiency of obtaining high precision depth information.
Turning now to
The imaging system 100 can include any number of imaging devices 110, as desired, though two imaging devices 110a and 110b are shown for illustrative purposes only. For example, the imaging system 100 can have 2, 3, 4, 5, 6, or even a greater number of imaging devices 110. The imaging devices 110 can be arranged in any desired manner in the imaging system 100. The specific arrangement of imaging devices 110 can depend on the imaging application. In some embodiments, for example, a pair of imaging devices 110 can be positioned side-by-side as a left imaging device 110a and a right imaging device 110b. In some embodiments, the imaging devices 110a and 110b can be configured to have parallel optical axes (not shown in
Each imaging device 110 can perform the function of sensing light and converting the sensed light into electronic signals that can be ultimately rendered as an image. Exemplary imaging devices 110 suitable for use with the disclosed systems and methods include, but are not limited to, commercially-available cameras and camcorders. Suitable imaging devices 110 can include analog imaging devices (for example, video camera tubes) and/or digital imaging devices (for example, charge-coupled device (CCD), complementary metal-oxide-semiconductor (CMOS), N-type metal-oxide-semiconductor (NMOS) imaging devices, and hybrids/variants thereof). Digital imaging devices, for example, can include a two-dimensional grid or array of photosensor elements (not shown) that can each capture one pixel of image information. In some embodiments, each imaging device 110 has a resolution of at least 0.01 Megapixels, 0.02 Megapixels, 0.05 Megapixels, 0.1 Megapixels, 0.5 Megapixels, 1 Megapixel, 2 Megapixels, 5 Megapixels, 10 Megapixels, 20 Megapixels, 50 Megapixels, 100 Megapixels, or an even greater number of pixels. Exemplary image resolutions that can be used for the present systems and methods include 320×240 pixels, 640×480 pixels, 800×600 pixels, 1024×768 pixels, 1280×960 pixels, 1536×1180 pixels, 2048×1536 pixels, 2560×1920 pixels, 3032×2008 pixels, 3072×2304 pixels, 3264×2448 pixels, and other image resolutions.
Each imaging device 110 can also include a lens 105 for focusing light onto the photosensor elements, such as a digital single-lens reflex (DSLR) lens, pin-hole lens, biological lens, simple convex glass lens, macro lens, zoom lens, telephoto lens, fisheye lens, wide-angle lens, or the like.
Each imaging device 110 can also include apparatus (not shown) that separates and/or filters the sensed light based on color and directs the light onto the appropriate photosensor elements. For example, the imaging device 110 can include a color filter array that passes red, green, or blue light to selected pixel sensors and forms an interlaced color mosaic grid in a Bayer pattern. Alternatively, for example, each imaging device 110 can include an array of layered pixel photosensor elements that separates light of different wavelengths based on the properties of the photosensor elements.
Each imaging device 110 can have specialty functions for use in various applications such as thermography, creation of multi-spectral images, infrared detection, gamma detection, x-ray detection, and the like. Each imaging device 110 can include, for example, electro-optical sensors, thermal/infrared sensors, color or monochrome sensors, multi-spectral imaging sensors, spectrophotometers, spectrometers, thermometers, and/or illuminometers.
As shown in
In some embodiments, the processor 120 is physically located adjacent to the imaging devices 110, in which case data between the processor 120 and the imaging devices 110 can be communicated locally. An advantage of local communication is that transmission delay can be reduced to facilitate real-time image processing and depth precision enhancement. In other embodiments, the processor 120 can be located remotely from the imaging devices 110. Remote processing may be preferable, for example, because of weight restrictions or other reasons relating to an operational environment of the imaging system 100. As a non-limiting example, if the imaging devices 110 are mounted aboard a mobile platform, such as an unmanned aerial vehicle 50 (UAV) (shown in
As shown in
Data from the processors 120 and/or the memories 130 can be communicated with one or more input/output devices 140 (for example, buttons, a keyboard, keypad, trackball, displays, and/or a monitor). The input/output devices 140 can each have a suitable interface to deliver content to a user 20. The input/output devices 140 can be used to provide a user interface for interacting with the user 20 to obtain images and control a process for enhancing depth precision. Various user interface elements (for example, windows, buttons, menus, icons, pop-ups, tabs, controls, cursors, insertion points, and the like) can be used to interface with the user 20. The imaging system 100 can further include one or more additional hardware components (not shown), as desired.
Turning now to
Turning now to
where cx and cy represent respective center coordinates of the imaging devices 110a and 110b, xi and yi represent the coordinates of the object 150 of interest in one or both of the images 200a and 200b, b is the baseline (in other words, the distance between the center coordinates of the imaging devices 110a and 110b), f is the focal length of each imaging device 110a and 110b (assuming here that the imaging devices have the same focal length), i is an index over the objects 150 of interest, and di is the disparity of the object 150 of interest between the images 200a and 200b, represented as:
di=xli−xri. Equation (4)
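The disparity-to-depth relationship described above (depth proportional to f·b/d for a rectified pair) can be illustrated with a short sketch. This is not code from the patent; the function name and the numeric focal-length, baseline, and disparity values are illustrative assumptions.

```python
# Sketch (not from the patent text): recovering depth from disparity for a
# rectified stereo pair via Z = f * b / d, where f is the focal length in
# pixels, b is the baseline, and d is the disparity in pixels.

def depth_from_disparity(d: float, f: float, b: float) -> float:
    """Return the distance Z of a point whose stereo disparity is d pixels."""
    if d <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return f * b / d

# A disparity of 8 pixels with an assumed 700-pixel focal length and
# 0.12 m baseline:
z = depth_from_disparity(8.0, f=700.0, b=0.12)  # 10.5 m
```

Note how halving the disparity doubles the computed distance, which is why finer disparity granularity matters most for distant objects.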
Turning now to
In some embodiments, the low resolution left and right images 200a, 200b can be rectified prior to searching for pixel correspondence, so as to improve search performance. For example, the left and right images 200a, 200b can be rotated such that the horizontal axes of the images are parallel to each other. The left and right images 200a, 200b can be rectified prior to performing depth measurement precision enhancement, as described herein.
After a corresponding pixel is located, a disparity d can be found between corresponding pixels. The disparity d can be represented as a number of pixels or as an absolute distance (where the physical width of each pixel is known). As each pixel 210a, 210b produces a depth measurement for a corresponding pixel 310 in the low resolution depth map 300, the x-y resolution of the low resolution depth map 300 is dependent on the x-y resolution of the image pair 200a, 200b. Similarly, the depth precision of the low resolution depth map 300 is also dependent on the x-y resolution of the image pair 200a, 200b, as the depth precision is determined by the granularity of the disparity d. Thus, the low resolution images 200a, 200b can generate a corresponding low resolution depth map 300.
Depth precision of a depth map can be increased using higher resolution binocular image pairs (for example, 640×480 pixels rather than 320×240 pixels). As shown in
Values of disparity d for each pair of corresponding pixels can be used to produce a high resolution depth map 350, where each pixel 360 of the high resolution depth map 350 conveys the disparity d for a given location. In this example, the disparity d in the high resolution depth map 350 has twice the precision of the low resolution depth map 300, since the disparity range for any given object distance is represented by twice the number of pixels. However, to achieve this two-fold increase in depth precision, computational intensity increases by a factor of eight because pixel correspondence is searched in the x, y, and depth dimensions.
Turning now to
The low resolution depth map 300 can be obtained using any means. For example, the low resolution depth map 300 can be acquired from low resolution images 200a, 200b (shown in
In some embodiments, the low resolution depth map 300 can be obtained using the present systems and methods by a "bootstrapping" process using, as input, a depth map having a still lower resolution, as well as a pair of low resolution images 200a, 200b having the same resolution as the low resolution depth map 300. For example, a 320×240 pixel depth map can be constructed from a 160×120 pixel depth map, as well as a pair of images 200a, 200b having a 320×240 pixel resolution. The bootstrapping process can continue for multiple iterations. For example, a 160×120 pixel resolution depth map can be constructed from an 80×60 pixel depth map, as well as a pair of images having a 160×120 pixel resolution, and so forth. In some embodiments, a pair of images can be used as input for multiple levels of this bootstrapping process. For example, a given pair of 640×480 pixel resolution images 250a, 250b can be processed to reduce resolution to 320×240 pixels as input for one level of the bootstrapping process, reduced to a resolution of 160×120 pixels for a subsequent level of the process, and so forth. This bootstrapping process advantageously enables efficient scaling for obtaining more precise depth measurements during stereoscopic imaging.
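One way to picture this bootstrapping schedule is as a coarse-to-fine loop in which each level refines the current depth map against an image pair of matching resolution, then upsamples it for the next level. The sketch below is hypothetical: `refine` stands in for the precision-enhancement step described in this disclosure and is supplied by the caller; the nearest-neighbour upsampler is an illustrative choice.

```python
# Hypothetical sketch of the coarse-to-fine "bootstrapping" schedule
# described above. refine(depth, left, right) stands in for the energy
# optimization of this disclosure and is not reproduced here.

def upsample2x(depth):
    """Nearest-neighbour 2x upsampling (rows and columns duplicated)."""
    return [[v for v in row for _ in (0, 1)] for row in depth for _ in (0, 1)]

def bootstrap_depth(image_pyramid, coarse_depth, refine):
    """image_pyramid: list of (left, right) pairs ordered coarse -> fine.

    coarse_depth has the resolution of image_pyramid[0]; each iteration
    doubles the depth-map resolution to match the next image pair."""
    depth = refine(coarse_depth, *image_pyramid[0])
    for left, right in image_pyramid[1:]:
        depth = upsample2x(depth)           # match the next pair's resolution
        depth = refine(depth, left, right)  # enhance precision at this level
    return depth
```

For example, a three-level pyramid starting from a 2×2 depth map yields an 8×8 result, mirroring the 160×120 → 320×240 → 640×480 progression in the text.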
Accordingly, turning now to
Corresponding pixels 260a, 260b between the images 250a, 250b can be identified and/or acquired using any suitable method, such as machine vision and/or artificial intelligence methods, and the like. Suitable methods include feature detection, extraction and/or matching techniques such as RANSAC (RANdom SAmple Consensus), Shi & Tomasi corner detection, SURF blob (Speeded Up Robust Features) detection, MSER blob (Maximally Stable Extremal Regions) detection, SURF (Speeded Up Robust Features) descriptors, SIFT (Scale-Invariant Feature Transform), FREAK (Fast REtinA Keypoint) descriptors, BRISK (Binary Robust Invariant Scalable Keypoints) descriptors, HOG (Histogram of Oriented Gradients) descriptors, and the like. Size and shape filters can be applied to identify corresponding pixels 260a, 260b between the images 250a, 250b, as desired.
Turning now to
E(d)=Ed(d)+pEs(d) Equation (5)
wherein Ed(d) is a similarity component reflecting correspondences between pixel intensities of the images 200a, 200b, Es(d) is a smoothness component reflecting continuity of depth transitions between elements of the depth map 300, and p is a weighting term. The energy function E(d) is a function of the disparity values d of the depth map 300, such that optimizing the energy function E(d) can yield disparity values d that best reflect actual distances of objects imaged. In some embodiments, the similarity component Ed(d) can include a sum of absolute differences (SAD) of a pixel dissimilarity metric, such as a Birchfield-Tomasi (BT) pixel dissimilarity metric. An exemplary similarity component Ed(d) that includes a sum of absolute differences of a Birchfield-Tomasi pixel dissimilarity metric EdBT is shown in Equations (6)-(10) below:
Ed(d)=Σ(x, y)EdBT(x, y, d) Equation (6)
EdBT(x, y, d)=min(dL(x, y, d), dR(x, y, d)) Equation (7)
dL(x, y, d)=min x−d−1/2≤x′≤x−d+1/2 |IL(x, y)−ĨR(x′, y)| Equation (8)
dR(x, y, d)=min x−1/2≤x′≤x+1/2 |ĨL(x′, y)−IR(x−d, y)| Equation (9)
ĨR(x′, y)=(1−α)IR(⌊x′⌋, y)+αIR(⌊x′⌋+1, y), α=x′−⌊x′⌋ Equation (10)
wherein x and y are pixel coordinates, d is the disparity, IL is an array of image pixel intensities of a left image 200a, IR is an array of image pixel intensities of a right image 200b, and ĨL and ĨR are linearly interpolated intensities of the left and right images, respectively. Although a Birchfield-Tomasi pixel dissimilarity metric is shown herein for illustrative purposes only, any suitable pixel dissimilarity metric can be used for the present systems and methods.
In some embodiments, the smoothness component Es(d) can be based on a sum of trigger functions. An exemplary smoothness component Es(d) that is based on a sum of trigger functions is shown in Equation (11) below:
Es(d)=Σp1T(|d(x, y)−d(x′, y′)|==1)+p2T(|d(x, y)−d(x′, y′)|>1) Equation (11)
wherein T is a trigger function that evaluates to one when its argument is true and zero otherwise, p1 and p2 are weighting terms, and the sum is taken over neighboring pixels (for example, four neighboring pixels) of a pixel at pixel coordinates (x, y). Although a smoothness component Es(d) based on a sum of trigger functions is shown herein for illustrative purposes only, any suitable smoothness component Es(d) can be used for the present systems and methods.
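For illustration, a minimal evaluator for an energy of the form of Equation (5) might look as follows. It is a sketch, not the patent's implementation: a plain absolute-difference data term is substituted for the Birchfield-Tomasi metric, the trigger-function smoothness of Equation (11) is applied over horizontal and vertical neighbour pairs, and the weights p, p1, p2 are arbitrary example values.

```python
import numpy as np

def energy(d, left, right, p=1.0, p1=0.5, p2=2.0):
    """Evaluate E(d) = Ed(d) + p * Es(d) for a candidate disparity map d.

    Data term: |IL(x, y) - IR(x - d, y)| summed over all pixels (a stand-in
    for the Birchfield-Tomasi metric). Smoothness term: p1 for one-pixel
    disparity steps, p2 for larger jumps, over 4-neighbour pairs."""
    h, w = d.shape
    e_data = 0.0
    for y in range(h):
        for x in range(w):
            xr = max(x - int(d[y, x]), 0)   # pixel the disparity points at
            e_data += abs(float(left[y, x]) - float(right[y, xr]))
    e_smooth = 0.0
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):   # count each neighbour pair once
                y2, x2 = y + dy, x + dx
                if y2 < h and x2 < w:
                    jump = abs(int(d[y, x]) - int(d[y2, x2]))
                    if jump == 1:
                        e_smooth += p1        # small step: mild penalty
                    elif jump > 1:
                        e_smooth += p2        # large jump: strong penalty
    return e_data + p * e_smooth
```

A perfectly matched, constant disparity map scores zero; introducing a single one-pixel disparity step adds p1 for each affected neighbour pair, showing how the smoothness term discourages spurious jumps.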
At 1220, low precision values of the depth map 300 can be replaced with corresponding high precision depth values based on the disparity values d determined at 1210. In some embodiments, all low precision values of the depth map 300 can be replaced with corresponding high precision depth values. In some embodiments, some, but not all low precision values of the depth map 300 can be replaced with corresponding high precision depth values. In some embodiments, selected low precision values of the low precision depth map 300 can be replaced with corresponding high precision values based on the low precision values being within a predetermined threshold disparity dT. In some embodiments, the threshold disparity dT can correspond to a predetermined threshold distance DT.
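A sketch of the selective-replacement step, under the assumption that small disparities correspond to distant points: values are taken from the high-precision map only where the newly determined disparity falls within the threshold dT. The function and variable names are illustrative, not from the patent text.

```python
import numpy as np

def replace_distant(depth_low, depth_high, disparity_high, d_t=8):
    """Selectively replace low-precision depth values.

    Keep the low-precision value for nearby points (large disparity) and
    take the high-precision value for distant points, i.e. where the
    disparity is within the threshold d_t."""
    distant = disparity_high < d_t
    return np.where(distant, depth_high, depth_low)
```

With d_t=8, entries whose disparity is below 8 pixels come from the high-precision map, while the rest keep their original values.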
The method of replacing low precision values with high precision values in a depth map 300 is illustrated with reference to
In another embodiment, shown in the lower right portion of
In some embodiments, the efficiency of enhancing depth precision can be improved by optimizing the energy function E(d) over a predetermined range of disparity values d (rather than, for example, optimizing over all possible disparity values d). In some embodiments, the energy function can be optimized over a range of disparity values that are within a predetermined disparity threshold dT. The predetermined disparity threshold dT can correspond, for example, to a predetermined threshold distance DT. For example, to resolve distance measurements for distant objects, a disparity threshold dT of 8 pixels can be preset that corresponds to objects at distances of, for example, 100 meters or greater from the imaging device. Accordingly, only disparities from 0 to 7 pixels are sampled when optimizing the data term E(x, y, d) with respect to the disparity d. Optimizing the energy function E(d) over a predetermined range of disparity values d can advantageously reduce computational costs.
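The restricted search itself is simple to sketch: a winner-take-all selection that samples only disparities in [0, dT) instead of the full range. Here `cost` is a stand-in for the data term E(x, y, d); nothing in this sketch is taken from the patent text.

```python
# Sketch: restricting the disparity search to a preset range, as described
# above. cost(x, y, d) is a caller-supplied stand-in for the data term
# E(x, y, d); only disparities 0..d_t-1 are sampled.

def best_disparity(cost, x, y, d_t=8):
    """Winner-take-all over the restricted disparity range [0, d_t)."""
    return min(range(d_t), key=lambda d: cost(x, y, d))
```

The per-pixel work is proportional to d_t, so shrinking the sampled range (for example from a full 64-level search to 8 levels) reduces the search cost proportionally.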
In some embodiments, the efficiency of enhancing depth precision can be improved by optimizing the energy function E(d) using an interval sampling technique, as illustrated in
The energy function E(d) can be optimized using any suitable technique. In some embodiments, the energy function E(d) can be optimized using dynamic programming. An exemplary dynamic programming technique is based on the recurrence relation below:
wherein optimal values of the disparity d* can be given by:
d*=argmind ΣL(x, y, d) Equation (14)
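As a hedged illustration of the kind of scanline dynamic program referred to above (the disclosure's own recurrence is not reproduced here), the sketch below accumulates a per-pixel cost L(x, d) with a small penalty for one-pixel disparity steps and a larger penalty for jumps, then takes the argmin over d in the spirit of Equation (14). The penalty values and clamped border handling are assumptions.

```python
import numpy as np

def scanline_dp(C, p1=0.5, p2=2.0):
    """Aggregate a data-cost array C of shape (width, ndisp) along one
    scanline and return the per-pixel best disparity.

    L(x, d) = C(x, d) + min(L(x-1, d),             # same disparity
                            min(L(x-1, d±1)) + p1, # one-pixel step
                            min_k L(x-1, k) + p2)  # larger jump
              - min_k L(x-1, k)                    # keep values bounded
    """
    w, nd = C.shape
    L = np.empty_like(C, dtype=np.float64)
    L[0] = C[0]
    for x in range(1, w):
        prev = L[x - 1]
        best_prev = prev.min()
        for d in range(nd):
            same = prev[d]
            step = min(prev[max(d - 1, 0)], prev[min(d + 1, nd - 1)]) + p1
            jump = best_prev + p2
            L[x, d] = C[x, d] + min(same, step, jump) - best_prev
    return L.argmin(axis=1)
```

On a cost array that consistently favours one disparity, the aggregation returns that disparity at every pixel; its benefit over plain winner-take-all shows when the data term is ambiguous at isolated pixels.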
In some embodiments, the energy function E(d) can be further optimized using non-local optimization. An exemplary non-local optimization is recursive filtering. In some embodiments, non-local optimization of the energy function E(d) can be performed according to Equation (15) as follows:
E(d)=Σ|d(x, y)−d*(x, y)|2+Σexp(−(|IL(x, y)−IL(x′, y′)|+|x′−x|+|y′−y|))|d(x, y)−d(x′, y′)| Equation (15)
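Recursive filtering of the kind mentioned above can be illustrated with a one-dimensional, edge-aware pass: each disparity blends with its left neighbour using a weight that decays with the intensity difference, so smoothing stops at image edges. The weight formula and the sigma value are illustrative assumptions, not taken from the patent.

```python
import math

def recursive_filter_row(disp, intensity, sigma=10.0):
    """One left-to-right recursive pass over a row of disparities.

    The blending weight w shrinks as the intensity difference between
    neighbouring pixels grows, so disparities propagate within smooth
    regions but not across image edges."""
    out = list(disp)
    for x in range(1, len(out)):
        w = math.exp(-(abs(intensity[x] - intensity[x - 1]) + 1.0) / sigma)
        out[x] = (1.0 - w) * out[x] + w * out[x - 1]
    return out
```

A practical filter would also run a right-to-left pass (and vertical passes) so that information propagates in both directions, but a single pass suffices to show the edge-stopping behaviour.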
Depth precision enhancement according to the present systems and methods can be used for images taken by mobile platforms. In some embodiments, the mobile platform is an unmanned aerial vehicle (UAV) 50, as shown in
Turning now to
The disclosed embodiments are susceptible to various modifications and alternative forms, and specific examples thereof have been shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the disclosed embodiments are not to be limited to the particular forms or methods disclosed, but to the contrary, the disclosed embodiments are to cover all modifications, equivalents, and alternatives.
Claims
1. A method of distance measuring, comprising:
- obtaining a depth map and a stereo pair of images of a scene of interest, the images having a higher resolution than the depth map; and
- enhancing a precision of the depth map based on disparity values of corresponding points between the images, including: optimizing an energy function of the images over a predetermined range of disparity values to obtain an optimized energy function; determining the disparity values based on the optimized energy function; and replacing low precision values of the depth map with corresponding high precision values based on the disparity values.
2. The method of claim 1, wherein optimizing the energy function over the predetermined range of disparity values includes optimizing the energy function over a range of disparity values within a predetermined disparity threshold.
3. The method of claim 2, wherein the predetermined disparity threshold corresponds to a predetermined threshold distance.
4. The method of claim 1, wherein optimizing the energy function over the predetermined range of disparity values includes determining a similarity component of the energy function over the predetermined range of disparity values, the similarity component reflecting correspondences between pixel intensities of the images.
5. The method of claim 4, wherein determining the similarity component includes determining a sum of absolute differences of a pixel dissimilarity metric.
6. The method of claim 5, wherein determining the sum of the absolute differences of the pixel dissimilarity metric includes determining a sum of absolute differences of a Birchfield-Tomasi pixel dissimilarity metric.
7. The method of claim 1, wherein optimizing the energy function includes determining a smoothness component of the energy function reflecting continuity of depth values within the depth map.
8. The method of claim 7, wherein the smoothness component is a weighted sum of trigger functions.
9. The method of claim 8, wherein each of the trigger functions is a function of a disparity difference between a disparity value corresponding to a pixel within the depth map and a disparity value corresponding to one of a plurality of neighboring pixels of the pixel within the depth map.
10. The method of claim 9, wherein a first weight is applied to one or more of the trigger functions of disparity differences that are equal to a non-zero threshold, and a second weight is applied to another one or more of the trigger functions of disparity differences that are larger than the non-zero threshold.
11. The method of claim 1, wherein optimizing the energy function includes optimizing the energy function using at least one of dynamic programming or non-local optimization.
12. The method of claim 11, wherein optimizing the energy function includes optimizing the energy function using recursive filtering.
13. The method of claim 1, wherein replacing the low precision values of the depth map with the corresponding high precision values includes:
- replacing all low precision values of the depth map with corresponding high precision values.
14. The method of claim 1, wherein replacing the low precision values of the depth map with the corresponding high precision values includes:
- replacing selected low precision values of the depth map with corresponding high precision values based on the low precision values being within a predetermined threshold disparity.
15. The method of claim 1, wherein replacing the low precision values of the depth map with the corresponding high precision values includes:
- replacing selected low precision values of the depth map with corresponding high precision values based on the low precision values being within a disparity range that corresponds to a predetermined threshold distance.
16. The method of claim 1, wherein:
- the stereo pair of images are a first stereo pair of images of the scene of interest; and
- obtaining the depth map includes obtaining the depth map from a second stereo pair of images of the scene of interest, the second stereo pair of images having a same resolution as the depth map.
17. The method of claim 16, wherein:
- the depth map is a first depth map; and
- obtaining the first depth map includes obtaining the first depth map from the second stereo pair of images and a second depth map having a lower resolution than the second stereo pair of images.
18. The method of claim 1, further comprising:
- rectifying the stereo pair of images prior to enhancing the precision of the depth map.
19. An imaging system, comprising:
- a pair of imaging devices configured to obtain a stereo pair of images of a scene of interest; and
- one or more processors configured to enhance a precision of a depth map of the scene of interest based on disparity values of corresponding points between the images, the images having a higher resolution than the depth map, and enhancing the precision of the depth map includes: optimizing an energy function of the images over a predetermined range of disparity values to obtain an optimized energy function; determining the disparity values based on the optimized energy function; and replacing low precision values of the depth map with corresponding high precision values based on the disparity values.
20. A non-transitory computer readable storage medium, comprising:
- instructions for obtaining a depth map and a stereo pair of images of a scene of interest, the images having a higher resolution than the depth map; and
- instructions for enhancing a precision of the depth map based on disparity values of corresponding points between the images, including: instructions for optimizing an energy function of the images over a predetermined range of disparity values to obtain an optimized energy function; instructions for determining the disparity values based on the optimized energy function; and instructions for replacing low precision values of the depth map with corresponding high precision values based on the disparity values.
Type: Application
Filed: Jun 21, 2021
Publication Date: Oct 7, 2021
Inventors: Zhenyu ZHU (Shenzhen), Honghui ZHANG (Shenzhen), Zhiyuan WU (Shenzhen)
Application Number: 17/353,769