Global approximation to spatially varying tone mapping operators
Techniques to generate global tone-mapping operators (G-TMOs) that, when applied to high dynamic range images, visually approximate the use of spatially varying tone-mapping operators (SV-TMOs) are described. The disclosed G-TMOs provide substantially the same visual benefits as SV-TMOs but do not suffer from spatial artifacts such as halos and are, in addition, computationally efficient compared to SV-TMOs. In general, G-TMOs may be identified based on application of a SV-TMO to a down-sampled version of a full-resolution input image (e.g., a thumbnail). An optimized mapping between the SV-TMO's input and output constitutes the G-TMO. It has been unexpectedly discovered that when optimized (e.g., to minimize the error between the SV-TMO's input and output), G-TMOs so generated provide an excellent visual approximation to the SV-TMO (as applied to the full-resolution image).
This disclosure relates generally to the field of image processing and, more particularly, to techniques for generating global tone-mapping operators (also known as tone-mapping curves).
High Dynamic Range (HDR) images are formed by blending together multiple exposures of a common scene. Use of HDR techniques permits a large range of intensities in the original scene to be recorded (this is not the case for typical camera images, where highlights and shadows are often clipped). Many display devices, such as monitors and printers, however, cannot accommodate the large dynamic range present in an HDR image. To visualize HDR images on devices such as these, dynamic range compression is effected by one or more Tone-Mapping Operators (TMOs). In general, there are two types of TMOs: global (spatially uniform) and local (spatially varying).
Global TMOs (G-TMOs) are non-linear surjective functions that map an input HDR image to an output Low Dynamic Range (LDR) image. G-TMO functions are typically parameterized by image statistics drawn from the input image. Once a G-TMO function is defined, every pixel in an input image is mapped globally (independently of surrounding pixels in the image). By their very nature, G-TMOs compress or expand the dynamic range of the input signal (i.e., image). By way of example, if the slope of a G-TMO function is less than 1, the image's detail is compressed in the output image. Such compression often occurs in highlight areas of an image and, when this happens, the output image appears flat; G-TMOs often produce images lacking in contrast.
Spatially-varying TMOs (SV-TMOs), on the other hand, take into account the spatial context within an image when mapping input pixel values to output pixel values. Parameters of a non-linear SV-TMO function can change at each pixel according to local features extracted from neighboring pixels. This often leads to improved local contrast. It is known, however, that strong SV-TMOs can generate halo artifacts in output images (e.g., intensity inversions near high-contrast edges). Weaker SV-TMOs, while avoiding such halo artifacts, typically mute image detail (compared to the original, or input, image). As used herein, a “strong” SV-TMO is one in which local processing is significant compared to a “weak” SV-TMO (which, in the limit, tends toward output similar to that of a G-TMO). Still, it is generally recognized that people find images mapped using SV-TMOs more appealing than the same images mapped using G-TMOs. On the downside, SV-TMOs are generally far more complicated to implement than G-TMOs. Thus, there is a need for a fast-executing global tone-mapping operator that is able to produce appealing output images (comparable to those produced by spatially varying tone-mapping operators).
SUMMARY

In one embodiment the inventive concept provides a method to convert a high dynamic range (HDR) color input image to a low dynamic range output image. The method includes receiving an HDR color input image, from which a brightness or luminance (grayscale) image may be obtained, extracted or generated. The grayscale image may then be down-sampled to produce, for example, a thumbnail representation of the original HDR color input image. By way of example, the HDR input image may be an 8, 5 or 3 megapixel image while the down-sampled grayscale image may be significantly smaller (e.g., 1, 2, 3 or 5 kilobytes). A spatially varying tone-mapping operator (SV-TMO) may then be applied to the down-sampled image to produce a sample output image. A mapping from the down-sampled grayscale image to the sample output image may then be determined; this mapping constitutes a global tone-mapping operator (G-TMO). It has been discovered that this G-TMO, when applied to the full-resolution grayscale version of the HDR color input image, produces substantially the same visual result as if the SV-TMO were applied to the full-resolution grayscale image. This is so even though its generation was based on a down-sampled image which, as a result, has significantly less information content. In one embodiment the resulting low dynamic range (LDR) grayscale image may be used directly. In another embodiment the resulting LDR grayscale image may have detail restoration operations applied and then be used. In yet another embodiment, the resulting LDR grayscale image may have both detail and color restoration operations applied. In still another embodiment, the generated G-TMO may undergo smoothing operations prior to its use. In a similar manner, an unsharp mask may be applied to the resulting LDR image.
Methods in accordance with this disclosure may be encoded in any suitable programming language and used to control the operation of an electronic device. Illustrative electronic devices include, but are not limited to, desktop computer systems, notebook computer systems, tablet computer systems, and other portable devices such as mobile telephones and personal entertainment devices. The methods so encoded may also be stored in any suitable memory device (e.g., non-transitory, long-term and short-term electronic memories).
This disclosure pertains to systems, methods, and computer readable media for generating global tone-mapping operators (G-TMOs) that, when applied to high dynamic range (HDR) images, generate visually appealing low dynamic range (LDR) images. The described G-TMOs provide substantially the same visual benefits as spatially varying tone-mapping operators (SV-TMOs) but do not suffer from spatial artifacts such as halos and are, in addition, computationally efficient to implement compared to SV-TMOs. In general, techniques are disclosed in which a G-TMO may be identified based on application of a SV-TMO to a down-sampled version of a full-resolution input image (e.g., a thumbnail). More specifically, a mapping between the SV-TMO's input (i.e., the down-sampled input image) and output constitutes the G-TMO. It has been unexpectedly discovered that when optimized (e.g., to minimize the error between the SV-TMO's input and output), G-TMOs so generated may be applied to the full-resolution HDR input image to provide an excellent visual approximation to the SV-TMO (as applied to the full-resolution image).
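To make the disclosed flow concrete, the following Python sketch wires the pieces together under stated assumptions: `spatially_varying_tmo` is a placeholder for whatever SV-TMO a designer chooses, the thumbnail is produced by simple decimation, and scikit-learn's `IsotonicRegression` (a PAVA implementation) stands in as one readily available way to fit the monotonically increasing mapping discussed below. It is an illustration of the idea, not the patented implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.isotonic import IsotonicRegression

def spatially_varying_tmo(log_lum):
    """Placeholder SV-TMO: any local operator mapping log-luminance to [0, 1]."""
    base = gaussian_filter(log_lum, sigma=8)          # crude base/detail split
    detail = log_lum - base
    rng = base.max() - base.min() + 1e-6
    return np.clip((base - base.min()) / rng * 0.7 + detail, 0.0, 1.0)

def tone_map_via_global_approximation(hdr_lum, thumb_max_dim=128):
    """Approximate an SV-TMO with a global tone curve fitted on a thumbnail."""
    log_lum = np.log(hdr_lum + 1e-6)                  # work in the log domain
    stride = max(1, max(log_lum.shape) // thumb_max_dim)
    thumb = log_lum[::stride, ::stride]               # down-sampled input image
    mapped = spatially_varying_tmo(thumb)             # SV-TMO on the thumbnail only
    # Fit a monotonically increasing curve from SV-TMO input to output (the G-TMO).
    gtmo = IsotonicRegression(increasing=True, out_of_bounds="clip")
    gtmo.fit(thumb.ravel(), mapped.ravel())
    # Apply the G-TMO to the full-resolution log-luminance image.
    return gtmo.predict(log_lum.ravel()).reshape(log_lum.shape)

# Usage: hdr_lum is a positive float32 HxW luminance array from an HDR image.
# ldr = tone_map_via_global_approximation(hdr_lum)
```

Detail restoration, unsharp masking and color reconstruction, discussed later in this disclosure, would follow this step.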
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the inventive concept. As part of this description, some of this disclosure's drawings represent structures and devices in block diagram form in order to avoid obscuring the invention. In the interest of clarity, not all features of an actual implementation are described in this specification. Moreover, the language used in this disclosure has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter. Reference in this disclosure to “one embodiment” or to “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention, and multiple references to “one embodiment” or “an embodiment” should not be understood as necessarily all referring to the same embodiment.
It will be appreciated that, in the development of any actual implementation (as in any development project), numerous decisions must be made to achieve the developers' specific goals (e.g., compliance with system- and business-related constraints), and that these goals may vary from one implementation to another. It will also be appreciated that such development efforts might be complex and time-consuming, but would nevertheless be a routine undertaking for those of ordinary skill in the design and implementation of image processing systems having the benefit of this disclosure.
Referring to
It should be noted that optimal G-TMO 145 is not generally an arbitrary function. Rather, G-TMO 145 is typically monotonically increasing, both to avoid intensity inversions and to allow image manipulations to be undone. (Most tone-curve adjustments made to images, such as brightening, contrast changes and gamma, are monotonically increasing functions.) Thus, in accordance with one embodiment, operation 140 seeks to find a one-dimensional (1-D) surjective and monotonically increasing function that best maps modified input image 125 to temporary image 135. While not necessary to the described methodologies, input data values will be assumed to be in the log domain (e.g., HDR image 125 and LDR image 135 pixel values). This approach is adopted here because relative differences are most meaningful to human observers, whose visual systems have approximately a log response, and because, practically, most tone mappers (hardware and/or software) use the log of their input image as input.
Returning to
In one embodiment, optimal G-TMO 145, denoted $\hat{m}()$, may be found by minimizing the mapping error between the down-sampled input image's (sorted) pixel values $X_1 \le X_2 \le \cdots \le X_n$ and the corresponding SV-TMO output values $Y_1, Y_2, \ldots, Y_n$:

$\hat{m} = \arg\min_{m} \sum_{i=1}^{n} \big( m(X_i) - Y_i \big)^2$, EQ. 1

subject to $\hat{m}(X_1) \le \hat{m}(X_2) \le \cdots \le \hat{m}(X_n)$. In the embodiment represented by EQ. 1, the mapping error is minimized in a root mean squared error (RMSE) sense. A designer may use whatever minimization technique they deem appropriate for their particular implementation. For example, the monotonically increasing function $\hat{m}()$ may be found using Quadratic Programming (QP), an optimization technique for minimizing a sum-of-squares objective function (e.g., EQ. 1) subject to linear constraints (e.g., monotonicity). A QP solution can have the advantage that it permits “almost” monotonically increasing G-TMOs to be found. Another approach to minimizing EQ. 1 is the Pool Adjacent Violators Algorithm (PAVA).
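For illustration, EQ. 1 can also be written out as a small quadratic program by parameterizing the monotone function as a free offset plus non-negative increments, which lets SciPy's non-negative least-squares routine enforce the monotonicity constraint. This is a sketch of the idea rather than the QP formulation contemplated by the disclosure, and it is only practical for modest numbers of samples (e.g., the quantized data discussed later). The function name and variables are this sketch's own.

```python
import numpy as np
from scipy.optimize import nnls

def fit_monotone_lsq(x, y):
    """Monotone least-squares fit of EQ. 1 posed as non-negative least squares.

    x holds sample input values X_i and y the corresponding SV-TMO outputs Y_i.
    The fitted values m_i = b + sum_{j<i} d_j are non-decreasing in x because
    every increment d_j is constrained to be >= 0 (b is a free offset).
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    order = np.argsort(x)
    y_sorted = y[order]
    n = len(y_sorted)
    steps = np.tril(np.ones((n, n)), k=-1)[:, :-1]    # step j raises rows i > j
    A = np.hstack([np.ones((n, 1)), -np.ones((n, 1)), steps])
    coef, _ = nnls(A, y_sorted)                       # all coefficients >= 0
    m_sorted = A @ coef                               # monotone fitted values
    m = np.empty(n)
    m[order] = m_sorted                               # back to the original order
    return m
```

For thumbnail-sized data, the PAVA procedure described next is far more efficient than building the dense matrix above.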
Referring to
$Y^*_i = (Y_i + Y_{i-1} + \cdots + Y_{i-n+1}) / n$, EQ. 2
where ‘n’ represents the number of pixel values being pooled.
Following block 220, another check may be made to determine whether the average Y image pixel value ($Y^*_i$) is larger than the preceding Y image pixel value ($Y_{i-1}$) (block 225). If it is (the “YES” prong of block 225), the next largest X pixel value may be selected (block 215), whereafter operation 140 continues at block 210. If, on the other hand, $Y^*_i$ is not larger than $Y_{i-1}$ (the “NO” prong of block 225), pixel values to the “left” of the current pixel may be pooled together, replacing them with their average, $Y^*_i$ (block 230). Acts in accordance with block 230 may continue to pool to the left until the monotonicity requirement of block 225 is no longer violated.
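The pooling loop just described can be sketched in a few lines. The code below is an editorial illustration of the standard PAVA recursion written to mirror blocks 210-230, not a reproduction of the patent's figures: values are visited in order of increasing X, and whenever an average violates monotonicity it is pooled with the block to its left until the violation disappears.

```python
import numpy as np

def pava(y):
    """Pool Adjacent Violators: returns a non-decreasing fit to y (EQ. 1 with
    unit weights). Entry i of y is the SV-TMO output for the i-th smallest
    thumbnail input value."""
    y = np.asarray(y, dtype=float)
    # Each block tracks (sum of pooled values, count); its level is sum/count.
    sums, counts = [], []
    for value in y:                      # blocks 210/215: take the next value
        sums.append(value)
        counts.append(1)
        # Blocks 220-230: while the new block's average is below the one to
        # its left, pool them and replace both with their average (EQ. 2).
        while len(sums) > 1 and sums[-1] / counts[-1] < sums[-2] / counts[-2]:
            s, c = sums.pop(), counts.pop()
            sums[-1] += s
            counts[-1] += c
    # Expand the pooled block averages back to one fitted value per input.
    levels = [s / c for s, c in zip(sums, counts)]
    return np.repeat(levels, counts)

# Example: the violator pair (0.9, 0.4) is pooled, giving a monotone curve.
print(pava([0.1, 0.3, 0.9, 0.4, 0.8]))   # -> [0.1, 0.3, 0.65, 0.65, 0.8]
```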
In the end, operation 140 yields $\hat{m}()$ which, from EQ. 1, gives us approximated optimal G-TMO 145. It has been found, quite unexpectedly, that G-TMO 145 may be used to approximate the use of a SV-TMO on the full-resolution HDR grayscale input image (e.g., input image 115); this is so even though its development was based on a down-sampled input image (e.g., a thumbnail). As a consequence, HDR-to-LDR conversions in accordance with this disclosure can enjoy the benefits of SV-TMOs (e.g., improved local contrast and perceptually more appealing images) without incurring the computational cost (SV-TMOs are generally far more complicated to implement than G-TMOs), the intensity inversions near high-contrast edges (i.e., halos), or the muted image detail typical of SV-TMOs. It has been discovered that, in practice, using a SV-TMO on a down-sampled version of a full-resolution HDR input image (i.e., a thumbnail) to develop a G-TMO that optimally approximates the SV-TMO is significantly easier to implement and uses fewer computational resources (e.g., memory and processor time) than applying the same SV-TMO directly to the full-resolution HDR input image.
In one embodiment, G-TMO 145 may be applied directly to full-resolution grayscale image Iin 115. It has been found, however, that results obtained through the application of PAVA are sensitive to outliers. Outliers can cause PAVA to produce results (i.e., tone-mapping operators or functions) whose outputs exhibit long flat portions; visually, this means that a range of input values are all mapped to a common output value, with a potential loss of detail as a result. Thus, even though the basic PAVA solution may be optimal in terms of an RMSE criterion, flat regions in a tone curve (e.g., G-TMO 145 output) can result in visually poor quality images. It has been found that a smoothed PAVA curve has almost the same RMSE as a fully optimal tone-mapping operator (e.g., a non-smoothed G-TMO) and does not exhibit flat output regions.
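The disclosure does not specify a particular smoothing filter, so the sketch below simply resamples the PAVA curve on a regular grid and applies a Gaussian smoothing of assumed width; any low-pass filter that removes long flat runs while approximately preserving monotonicity and RMSE would serve the same purpose. Here `x` is assumed to hold the sorted thumbnail values and `m_fit` the PAVA output at those values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth_tone_curve(x, m_fit, num_samples=256, sigma=3.0):
    """Resample a fitted tone curve (x -> m_fit, both sorted) and smooth it.

    sigma is an assumed smoothing width; larger values remove flat plateaus
    more aggressively at the cost of a slightly higher RMSE.
    """
    grid = np.linspace(x.min(), x.max(), num_samples)
    curve = np.interp(grid, x, m_fit)            # sample the PAVA output
    smoothed = gaussian_filter1d(curve, sigma=sigma)
    # Guard against small edge ripples introduced by the filter's boundary handling.
    smoothed = np.maximum.accumulate(smoothed)
    return grid, smoothed

# Applying the smoothed curve to a full-resolution log-luminance image I:
# grid, curve = smooth_tone_curve(x_sorted, pava_fit)
# ldr = np.interp(I, grid, curve)
```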
Referring to
$I_{detail} = I_{in} + BF(I_{in},\, I_{g\text{-}tmo'} - I_{in})$, EQ. 3
where BF( ) represents a bilateral filter operation. As used here, a bilateral filter calculates a local average of an image, where the average for an image pixel I(x, y) weights neighboring pixels close to (x, y) more than pixels farther away. Often the averaging is proportional to a two-dimensional symmetric Gaussian weighting function. Further, the contribution of a pixel to the local average is proportional to a photometric weight calculated as f(I(x, y), I(x′, y′)), where f( ) returns 1 when the pixel values at locations (x′, y′) and (x, y) are similar. If the pixel values at (x′, y′) and (x, y) are dissimilar, f( ) returns a smaller value, falling to 0 if they differ too much. In one embodiment, pixel similarity may be calculated according to an “auxiliary image.” For example, if the ‘red’ channel of a color image is bilaterally filtered, the photometric distance might be measured according to a luminance image. Thus, the notation BF(I1, I2) signifies that I2 is ‘spatially averaged’ according to auxiliary image I1.
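As an illustration of the BF( ) term in EQ. 3, the brute-force joint (cross) bilateral filter below averages one image while drawing its photometric weights from the auxiliary image, matching the BF(I1, I2) notation above. The Gaussian photometric weight, window radius and sigma values are assumed choices, and the code favors clarity over speed.

```python
import numpy as np

def joint_bilateral(aux, img, radius=5, sigma_spatial=3.0, sigma_photo=0.4):
    """BF(aux, img): spatially average img, weighting neighbors by both their
    distance and the similarity of the *auxiliary* image's values."""
    aux = np.asarray(aux, dtype=float)
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    pad_aux = np.pad(aux, radius, mode="reflect")
    pad_img = np.pad(img, radius, mode="reflect")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial_w = np.exp(-(xs**2 + ys**2) / (2 * sigma_spatial**2))
    out = np.zeros_like(img)
    norm = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            a_shift = pad_aux[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            i_shift = pad_img[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            # Photometric weight f(): ~1 for similar aux values, -> 0 otherwise.
            photo_w = np.exp(-((a_shift - aux)**2) / (2 * sigma_photo**2))
            weight = spatial_w[dy + radius, dx + radius] * photo_w
            out += weight * i_shift
            norm += weight
    return out / norm

# Detail restoration per EQ. 3 (I_in and I_gtmo are log-domain images):
# I_detail = I_in + joint_bilateral(I_in, I_gtmo - I_in)
```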
While an image generated by G-TMO′ 310 may be similar to an image produced by a SV-TMO, it can look flat, especially in highlight regions (where the best global tone curve has a derivative less than 1). An unsharp mask can often ameliorate this problem; its application produces approximate output image Iapprox 340 (block 335). Substantially any operator that enhances edges and other high-frequency components in an image may be used in accordance with block 335.
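Block 335 may be substantially any edge enhancer; a conventional unsharp mask is sketched below, with the blur width and gain as assumed parameters and output values assumed to be normalized to [0, 1].

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, sigma=2.0, amount=0.5):
    """Boost high frequencies: add back a scaled copy of (image - blurred image)."""
    blurred = gaussian_filter(img, sigma=sigma)
    # Clipping assumes the tone-mapped image is normalized to [0, 1].
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)
```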
Because G-TMO′ 310 is applied to a brightness image (e.g., input image Iin 115), generating LDR color output image Iout 350 requires color reconstruction operations (block 345). In one embodiment, color reconstruction may be provided, for each pixel, as follows:

$C_{out} = C_{in} \times (L_{out} / L_{in})$, EQ. 4

where Cout represents one of the color channels (e.g., red, green, or blue) in LDR color output image Iout 350, Cin represents the color value for the corresponding pixel in original color HDR image Iorig 105, Lin represents the luminance value before operation 315 is applied, and Lout represents the corresponding pixel value after unsharp mask operation 335.
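A per-pixel sketch of EQ. 4's ratio-based color reconstruction follows. The simple scaling shown here matches the ratio described in the claims; saturation-preserving variants (e.g., exponentiating the ratio) are possible but not shown, and the array names are illustrative.

```python
import numpy as np

def reconstruct_color(hdr_rgb, lum_in, lum_out, eps=1e-6):
    """C_out = C_in * (L_out / L_in), applied identically to each channel (EQ. 4)."""
    ratio = lum_out / (lum_in + eps)                # per-pixel brightness change
    ldr_rgb = hdr_rgb * ratio[..., np.newaxis]      # scale R, G and B by the same ratio
    # Clipping assumes the LDR output is normalized to [0, 1].
    return np.clip(ldr_rgb, 0.0, 1.0)
```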
Determination of G-TMO 145, even when based on a down-sampled image, can be a computationally expensive operation. To reduce this cost, a PAVA operation such as that described above with respect to block 140 may be applied to a quantized version of the down-sampled image's pixel values, where the number of quantization levels $q_1, q_2, \ldots, q_m$ is less than the number of possible brightness levels a pixel may take. An input value X falling between adjacent quantization levels $q_n$ and $q_{n+1}$ may be expressed as a linear combination of those levels:

$X = (1 - a)\,q_n + a\,q_{n+1}$. EQ. 5
If it is assumed that the G-TMO's final output value is to be the same linear combination of these quantization levels, then:
$\hat{m}(X) = (1 - a)\,\hat{m}(q_n) + a\,\hat{m}(q_{n+1})$. EQ. 6
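Since EQ. 6 is exactly linear interpolation between the G-TMO's values at the two surrounding quantization levels, applying a quantized G-TMO reduces to a one-dimensional table lookup; NumPy's `interp` computes the (1 − a)/a blend of EQ. 6 for every pixel at once. The 64-level quantization in the usage note is an assumed value.

```python
import numpy as np

def apply_quantized_gtmo(log_lum, q_levels, m_at_q):
    """EQ. 6: m_hat(X) = (1 - a) * m_hat(q_n) + a * m_hat(q_n+1).

    q_levels are the quantization levels q_1..q_m of the thumbnail data and
    m_at_q the G-TMO values fitted at those levels (e.g., by PAVA); np.interp
    computes the interpolation weight a for every pixel at once.
    """
    return np.interp(log_lum, q_levels, m_at_q)

# Example with an assumed 64-level quantization of the log-luminance range:
# q = np.linspace(log_lum.min(), log_lum.max(), 64)
# ldr = apply_quantized_gtmo(log_lum, q, m_hat_at_q)
```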
Referring to
Referring to
Processor 505 may be a system-on-chip such as those found in mobile devices and may include a dedicated graphics processing unit (GPU). Processor 505 may be based on reduced instruction-set computer (RISC) or complex instruction-set computer (CISC) architectures or any other suitable architecture and may include one or more processing cores. Graphics hardware 520 may be special-purpose computational hardware for processing graphics and/or assisting processor 505 in processing graphics information. In one embodiment, graphics hardware 520 may include one or more programmable graphics processing units (GPUs) and other graphics-specific hardware (e.g., custom-designed image processing hardware).
Referring to
Processor 605 may execute instructions necessary to carry out or control the operation of many functions performed by device 600 (e.g., the generation and/or processing of images in accordance with operations 100, 300 and 400). Processor 605 may, for instance, drive display 610 and receive user input from user interface 615. User interface 615 can take a variety of forms, such as a button, keypad, dial, click wheel, keyboard, display screen and/or touch screen. Processor 605 may be a system-on-chip such as those found in mobile devices and may include a dedicated graphics processing unit (GPU). Processor 605 may be based on reduced instruction-set computer (RISC) or complex instruction-set computer (CISC) architectures or any other suitable architecture and may include one or more processing cores. Graphics hardware 620 may be special-purpose computational hardware for processing graphics and/or assisting processor 605 in processing graphics information. In one embodiment, graphics hardware 620 may include a programmable graphics processing unit (GPU).
Sensor and camera circuitry 650 may capture still and video images that may be processed to generate images in accordance with this disclosure. Output from camera circuitry 650 may be processed, at least in part, by video codec(s) 655 and/or processor 605 and/or graphics hardware 620, and/or a dedicated image processing unit incorporated within circuitry 650. Images so captured may be stored in memory 660 and/or storage 665. Memory 660 may include one or more different types of media used by processor 605, graphics hardware 620, and image capture circuitry 650 to perform device functions. For example, memory 660 may include memory cache, read-only memory (ROM), and/or random access memory (RAM). Storage 665 may store media (e.g., audio, image and video files), computer program instructions or software, preference information, device profile information, and any other suitable data. Storage 665 may include one or more non-transitory storage media including, for example, magnetic disks (fixed, floppy, and removable) and tape, optical media such as CD-ROMs and digital video disks (DVDs), and semiconductor memory devices such as Electrically Programmable Read-Only Memory (EPROM) and Electrically Erasable Programmable Read-Only Memory (EEPROM). Memory 660 and storage 665 may be used to retain computer program instructions or code organized into one or more modules and written in any desired computer programming language. When executed by, for example, processor 605, such computer program code may implement one or more of the methods described herein.
It is to be understood that the above description is intended to be illustrative, and not restrictive. The material has been presented to enable any person skilled in the art to make and use the invention as claimed and is provided in the context of particular embodiments, variations of which will be readily apparent to those skilled in the art (e.g., some of the disclosed embodiments may be used in combination with each other). For example, the quantization, G-TMO smoothing, and unsharp mask operations need not be performed. In addition, image operations in accordance with this disclosure may be used to convert HDR color input images to LDR grayscale images by omitting the color reconstruction operations described above (e.g., block 345).
Claims
1. A non-transitory program storage device comprising instructions stored thereon to cause one or more processors to:
- obtain a grayscale version of a high dynamic range (HDR) color input image;
- downsample the grayscale version of the HDR color input image to generate a down-sampled grayscale input image;
- apply a spatially varying tone mapping operator (SV-TMO) to the down-sampled grayscale input image to generate a down-sampled grayscale output image;
- determine a global tone mapping operator (G-TMO) according to, at least in part, a Pool-Adjacent-Violators-Algorithm (PAVA); and
- apply the G-TMO to the grayscale version of the HDR color input image to generate a low dynamic range (LDR) grayscale output image.
2. The non-transitory program storage device of claim 1, wherein the instructions to determine the G-TMO further comprise instructions to cause the one or more processors to:
- determine a first G-TMO according to the PAVA; and
- apply a smoothing filter to the first G-TMO to generate the G-TMO.
3. The non-transitory program storage device of claim 1, wherein the instructions to determine the G-TMO comprise instructions to cause the one or more processors to determine a mapping between each pixel in the down-sampled grayscale input image to a corresponding pixel in the down-sampled grayscale output image, wherein the mapping is selected to minimize a specified error criterion.
4. The non-transitory program storage device of claim 3, wherein the specified error criterion comprises a root mean squared error criterion.
5. The non-transitory program storage device of claim 1, further comprising instructions to cause the one or more processors to convert the LDR grayscale output image to a LDR color output image.
6. The non-transitory program storage device of claim 5, wherein each pixel in the color output image is based, in part, on a ratio of the corresponding pixels in the grayscale output image and the grayscale version of the color input image.
7. The non-transitory program storage device of claim 1, wherein the instructions to obtain the grayscale version of the HDR color input image comprise instructions to cause the one or more processors to obtain a brightness channel of the HDR color input image, wherein the HDR color input image comprises a brightness channel and one or more chrominance channels.
8. The non-transitory program storage device of claim 1, wherein the instructions to generate the down-sampled grayscale input image comprise instructions to cause the one or more processors to generate a thumbnail of the grayscale version of the HDR color input image.
9. The non-transitory program storage device of claim 1, wherein the instructions to determine the first G-TMO according to the PAVA comprise instructions to cause the one or more processors to quantize values from the down-sampled grayscale input image into a specified number of levels, wherein the specified number of levels is less than the number of possible brightness levels that a pixel in the down-sampled grayscale input image may have.
10. The non-transitory program storage device of claim 1, wherein the instructions to generate the down-sampled grayscale output image comprise instructions to cause the one or more processors to:
- apply the G-TMO to the grayscale version of the HDR color input image to generate a first grayscale image; and
- perform detail recovery operations on the first grayscale image to generate a grayscale output image.
11. The non-transitory program storage device of claim 10, wherein the instructions to perform the detail recovery operations comprise instructions to cause the one or more processors to apply a bilateral filter to a combination of the grayscale version of the HDR color input image and the first grayscale image.
12. The non-transitory program storage device of claim 10, wherein the instructions to generate the grayscale output image comprise instructions to cause the one or more processors to:
- perform detail recovery operations on the first grayscale image to generate a second grayscale image; and
- apply an unsharp mask to the second grayscale image to generate a grayscale output image.
13. An electronic device, comprising:
- a display element;
- a memory operatively coupled to the display element; and
- one or more processing units operatively coupled to the display element and the memory, and adapted to execute instructions stored in the memory to: obtain a grayscale version of a high dynamic range (HDR) color input image; downsample the grayscale version of the HDR color input image to generate a down-sampled grayscale input image; apply a spatially varying tone mapping operator (SV-TMO) to the down-sampled grayscale input image to generate a down-sampled grayscale output image; determine a global tone mapping operator (G-TMO) according to, at least in part, a Pool-Adjacent-Violators-Algorithm (PAVA); and apply the G-TMO to the grayscale version of the HDR color input image to generate a low dynamic range (LDR) grayscale output image.
14. The electronic device of claim 13, wherein the one or more processing units are adapted to execute instructions to:
- determine a first G-TMO according to the PAVA; and
- apply a smoothing filter to the first G-TMO to generate the G-TMO.
15. The electronic device of claim 13, wherein the one or more processing units are adapted to execute instructions to determine a mapping between each pixel in the down-sampled grayscale input image to a corresponding pixel in the down-sampled grayscale output image, wherein the mapping is selected to minimize a specified error criterion.
16. The electronic device of claim 15, wherein the specified error criterion comprises a root mean squared error criterion.
17. The electronic device of claim 13, wherein the one or more processing units are adapted to execute instructions to convert the LDR grayscale output image to a LDR color output image.
18. The electronic device of claim 17, wherein each pixel in the color output image is based, in part, on a ratio of the corresponding pixels in the grayscale output image and the grayscale version of the color input image.
19. The electronic device of claim 13, wherein the one or more processing units are adapted to execute instructions to obtain a brightness channel of the HDR color input image, wherein the HDR color input image comprises a brightness channel and one or more chrominance channels.
20. The electronic device of claim 13, wherein the one or more processing units are adapted to execute instructions to generate a thumbnail of the grayscale version of the HDR color input image.
21. The electronic device of claim 13, wherein the one or more processing units are adapted to execute instructions to quantize values from the down-sampled grayscale input image into a specified number of levels, wherein the specified number of levels is less than the number of possible brightness levels that a pixel in the down-sampled grayscale input image may have.
22. The electronic device of claim 13, wherein the one or more processing units are adapted to execute instructions to:
- apply the G-TMO to the grayscale version of the HDR color input image to generate a first grayscale image; and
- perform detail recovery operations on the first grayscale image to generate a grayscale output image.
23. The electronic device of claim 22, wherein the one or more processing units are adapted to execute instructions to apply a bilateral filter to a combination of the grayscale version of the HDR color input image and the first grayscale image.
24. The electronic device of claim 22, wherein the one or more processing units are adapted to execute instructions to:
- perform detail recovery operations on the first grayscale image to generate a second grayscale image; and
- apply an unsharp mask to the second grayscale image to generate a grayscale output image.
25. An image conversion method, comprising:
- obtaining a grayscale version of a high dynamic range (HDR) color input image;
- downsampling the grayscale version of the HDR color input image to generate a down-sampled grayscale input image;
- applying a spatially varying tone mapping operator (SV-TMO) to the down-sampled grayscale input image to generate a down-sampled grayscale output image;
- determining a global tone mapping operator (G-TMO) according to, at least in part, a Pool-Adjacent-Violators-Algorithm (PAVA); and
- applying the G-TMO to the grayscale version of the HDR color input image to generate a low dynamic range (LDR) grayscale output image.
Type: Grant
Filed: Aug 31, 2015
Date of Patent: Apr 18, 2017
Patent Publication Number: 20170018059
Assignee: Apple Inc. (Cupertino, CA)
Inventors: Graham D. Finlayson (Norwich), Jakkarin Singnoo (Norwich)
Primary Examiner: Li Liu
Application Number: 14/840,843
International Classification: G06K 9/00 (20060101); G06T 5/00 (20060101); G06T 5/40 (20060101);