Structured-stereo imaging assembly including separate imagers for different wavelengths
The present disclosure describes structured-stereo imaging assemblies including separate imagers for different wavelengths. The imaging assembly can include, for example, multiple imager sub-arrays, each of which includes a first imager to sense light of a first wavelength or range of wavelengths and a second imager to sense light of a different second wavelength or range of wavelengths. Images acquired from the imagers can be processed to obtain depth information and/or improved accuracy. Various techniques are described that can facilitate determining whether any of the imagers or sub-arrays are misaligned.
The present disclosure relates to structured-stereo imaging assemblies including separate imagers for different wavelengths.
BACKGROUND

Various techniques can be used to produce three-dimensional (3D) images under a variety of lighting conditions and for a variety of textures. For example, in a structured-light assembly, a pattern is projected onto a subject, an image of the pattern is obtained, the projected pattern is compared to the collected pattern, and differences between the two patterns are correlated with depth information. In other words, distortions in the pattern are correlated with depth. This technique can be useful for low-light and low-texture objects or scenes. In structured-light assemblies, the projected pattern often uses infra-red (IR) radiation. However, projected IR techniques cannot readily be used when ambient IR is high (e.g., outdoors). Further, the projected IR pattern typically cannot be projected over long distances, and the accuracy of the system tends to be highly sensitive to the accuracy of the projected pattern. Pattern inaccuracy can arise, for example, when part of the pattern is occluded by a contour or feature of the object or scene being imaged.
Stereoscopic image capture is another technique for producing 3D depth information. Stereoscopic image capture requires two imagers spatially separated by a baseline distance (B). Depth information (Z) can be extracted from the measured disparity (D) between matched pixels in each image and the focal length (F) of the imagers according to: Z=(F×B)/D. Distinct features and/or intensity variations in the object or scene, such as texture, are required to facilitate matching pixels in both images. Consequently, stereoscopic pixel matching is particularly challenging for objects or scenes with low texture, or for low-light scenes (where the observable features and intensity of textural differences are diminished). Often an object or scene possesses both high- and low-texture regions or both high- and low-light regions, in which case a combination of stereoscopic and structured-light image capture is useful for generating depth information for the entire scene.
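The triangulation relationship above can be sketched numerically. In this illustrative snippet the focal length is assumed to be expressed in pixels, so that the pixel units of disparity cancel and depth comes out in the same units as the baseline:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulated depth Z = (F x B) / D for a rectified stereo pair.

    focal_px     -- focal length F, expressed in pixels
    baseline_m   -- baseline B between the two imagers, in meters
    disparity_px -- horizontal shift D of a matched pixel, in pixels
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return (focal_px * baseline_m) / disparity_px

# A pixel shifted by 20 px, seen by imagers 50 mm apart with F = 800 px,
# lies at (800 * 0.05) / 20 = 2.0 m.
print(depth_from_disparity(800.0, 0.05, 20.0))  # → 2.0
```

Note that depth resolution degrades as disparity shrinks: halving the disparity doubles the computed depth, which is why the baseline and focal length must be chosen to fit the target detection range.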
Structured-stereo (also referred to as active stereo) combines the benefits of both structured-light and stereoscopic image capture. In structured-stereo assemblies, a pattern is projected onto a subject, and images of the pattern are collected by two spatially separated imagers (e.g., IR imagers). Pattern capture by two (or more) imagers can eliminate the problems of pattern occlusion. Moreover, in low-texture and low-light scenes the projected pattern simulates the essential scene texture required for stereoscopic matching of pixels. The disparity between pixels is then correlated with depth. In some cases, disparities between collected patterns may be correlated with depth.
In addition, more advanced structured-stereo embodiments combine image projection (e.g., IR) and pattern-facilitated stereoscopic matching with additional stereoscopic image capture (e.g., RGB). Such an implementation is useful to gather depth information of a scene that is characterized by both low and high texture and/or low and high lighting features. Both structured-stereo and additional stereoscopic image capture are employed to correct the deficiencies of each method. Specifically, depth information from low-texture/low-light regions of a scene is combined with depth information from high-texture/high-light regions of a scene, where each is gathered by structured-stereo (e.g., IR) and additional stereoscopic image capture (e.g., RGB), respectively.
Some imaging assemblies that employ structured-light combinations or structured-stereo project an IR pattern and collect images with an IR/RGB color filter array (CFA), where RGB refers to red (R), green (G) and blue (B) light in the visible part of the spectrum, and where the IR and RGB sensitive regions of the imager are contiguous (i.e., part of the same mosaic or CFA). An imaging assembly that uses such a combined IR/RGB CFA can present various issues. First, crosstalk from the IR pixels to RGB pixels of the imager can be significant, which can lead to images of reduced quality. Further, the combined IR/RGB CFA generally precludes incorporation of dedicated (e.g., wavelength-specific) optical elements such as lenses designed specifically for IR, red, green or blue light. Consequently, aberrations such as chromatic aberrations can be produced. In order to collect IR radiation, combined IR/RGB imagers typically do not have IR-cut filters. Consequently, RGB pixels may sense substantial IR radiation, resulting in significant noise.
Another issue that can arise in assemblies with spatially separated imagers, such as structured-stereo assemblies, is misalignment. Misalignment is particularly detrimental to the accuracy of the collected depth information. Calibration of the imaging assembly typically occurs before reaching the end-user (e.g., before the assembly leaves the manufacturing factory). During normal use, however, the user may drop the imaging assembly, causing some of the imagers to become misaligned.
SUMMARY

The present disclosure describes structured-stereo imaging assemblies including separate imagers for different wavelengths.
For example, in one aspect, a structured-stereo imaging assembly includes multiple imager sub-arrays, each of which includes a first imager to sense light of a first wavelength or range of wavelengths and a second imager to sense light of a different second wavelength or range of wavelengths. Each of the imager sub-arrays can be mounted on a support.
Some implementations include one or more of the following features. For example, each imager sub-array can include an IR-sensitive imager and an RGB-sensitive imager. In some cases, the imaging assembly includes an array of more than two imager sub-arrays, each of which includes a combination of IR, UV, RGB and/or monochrome imagers. In some implementations, the imaging assembly includes one or more light projection units arranged to project an IR pattern onto a three-dimensional object. If there are multiple pattern projection units, they may have different optical characteristics from one another.
In some cases, one or more processors are operable, collectively, to obtain, from the imager sub-arrays, signals that collectively allow the processor to generate one or more of the following: (i) a stereo image based on images acquired from the first imagers, (ii) a stereo IR image based on images acquired from the second imagers, (iii) a stereo image based on images acquired in response to the light pattern projected by the pattern projection unit, or (iv) a non-stereo image based on images acquired in response to the light pattern projected by the pattern projection unit.
Imaging assemblies are described that have different combinations of imagers in the respective sub-arrays. For example, in some cases, the imaging assembly includes first, second and third imager sub-arrays, each of which includes a first imager to sense light of a first wavelength or range of wavelengths and a second imager to sense light of a different second wavelength or range of wavelengths. In some instances, the imaging assembly includes first and second imager sub-arrays, each of which includes a first RGB-sensitive imager, a second IR-sensitive imager and a third IR-sensitive imager, wherein the second and third IR-sensitive imagers are sensitive to different, respective portions of the IR spectrum. In some implementations, the imaging assembly includes first and second imager sub-arrays, each of which includes a first RGB-sensitive imager, a second RGB-sensitive imager, and a third IR-sensitive imager, wherein one of the first or second RGB-sensitive imagers includes a neutral density filter. One or more processors collectively can acquire images from different ones of the imagers in the sub-arrays and can process the acquired images, for example, to obtain improved depth information and/or improved accuracy.
In some instances, the techniques can include matching images acquired from different sub-arrays (or groups of the sub-arrays), identifying disparities in pixel positions of the matched images, correlating the disparities with depth, and generating a depth map.
The disclosure also describes various techniques that can facilitate determining whether any of the imagers or sub-arrays are misaligned. For example, in one aspect, an imaging assembly includes multiple imager sub-arrays, each of which includes a first imager to sense light of a first wavelength or range of wavelengths and a second imager to sense light of a different second wavelength or range of wavelengths. One or more processors are configured, collectively, to obtain intra-array and inter-array information based on signals from the imager arrays, the information including intra-array depth information. The processor(s) can compare the intra-array depth information to inter-array depth information and determine whether one of the imager sub-arrays is misaligned based on the comparison.
Some implementations can achieve both structured and stereo benefits such as high-quality 3D imaging, even under low-light conditions, for low texture objects, and/or for images of objects where IR radiation is relatively high.
Other aspects, features and advantages will be apparent from the following detailed description, the accompanying drawings, and the claims.
As shown in
The IR-sensitive imager 14 and the RGB-sensitive imager 16 in a given imager sub-array 12 are separated spatially from one another (i.e., not contiguous) so as to reduce optical cross-talk and to reduce IR saturation of RGB pixels. As explained below, particular combinations of multiple-imager sub-arrays can be used for calibration. The placement of the sub-arrays 12 relative to each other, and the placement of the imagers 14, 16 (within each sub-array 12) relative to each other, is dictated generally by the baseline (B), which in turn is constrained by the focal length (F) and the pixel size. Each imager 14, 16 can have its own dedicated beam shaping element(s) (e.g., optical elements such as a lens stack disposed over the imager). Providing dedicated beam shaping elements can help reduce optical aberrations.
The imaging assembly 10 also includes a pattern projection unit 18 for projecting an IR or other optical pattern onto a 3D object. The pattern can be, for example, regular (e.g., grid) or irregular (e.g., speckles). The pattern projection unit 18 can include a light source and a pattern generator, and can be mounted on the same PCB as the imager sub-arrays 12. The pattern projection unit 18 can be positioned, for example, between a pair of imager sub-arrays 12. When an object is irradiated by light from the pattern projection unit 18, a light pattern appears on the surface of the object. If the pattern projection unit 18 generates IR radiation, then the light pattern may not be visible to the human eye. The projection distance should match the target system specification and system parameters. For proper functionality, the focal length (F) and baseline (B) values should be selected to fit the target depth resolution and maximum detection range. The illumination should be strong enough to achieve good signal-to-noise ratio (SNR) and contrast when sensing the pattern. These characteristics generally can be a function of pixel sensitivity, field-of-view (FOV), and processing capability. Preferably, the pattern projection unit 18 can be altered dynamically (e.g., intensity can change as a function of the object or scene conditions).
Some implementations include multiple pattern projection units, where the projectors have different characteristics from one another. For example, in some implementations, a first unit can project a pattern configured for close objects, and another unit can project a pattern optimized for objects farther away. Some implementations include pattern projection units configured to illuminate different regions of the scene such that, collectively, the projection units cover the entire scene. Some instances include multiple units, each of which projects a pattern using one or more wavelengths of light that differ from the other units. For example, the projection units can have different wavelengths/frequencies, where there are corresponding sensing channels with band-pass filters with similar frequency responses. An advantage of this arrangement is that the combined depth maps generated from multiple frequencies can increase the lateral resolution of the final depth map.
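The lateral-resolution benefit described above can be sketched with a simple merge. The sketch assumes the two wavelength channels happen to sample the scene at complementary lateral positions, so that interleaving the two per-channel depth maps doubles the sample density; the depth values are illustrative:

```python
def combine_depth_maps(map_a, map_b):
    """Interleave two depth maps sensed at different projector wavelengths.

    Assumes (for illustration) that each channel's pattern samples the scene
    at complementary lateral positions, so merging doubles sample density.
    """
    combined = []
    for z_a, z_b in zip(map_a, map_b):
        combined.extend([z_a, z_b])
    return combined

# Two coarse per-wavelength maps merged into one finer lateral grid.
print(combine_depth_maps([1.0, 1.2], [1.1, 1.3]))  # → [1.0, 1.1, 1.2, 1.3]
```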
Depending on the implementation, one or more processors can be associated with the imaging assembly 10. Collectively, the processors handle image processing and stereo-matching. In some cases, the imaging assembly 10 may include a dedicated processor 20, which can be implemented, for example, as an integrated circuit semiconductor chip with appropriate logic and other circuitry. The processor 20 also can be mounted, in some cases, on the same PCB as the imager sub-arrays 12. In some implementations, the dedicated processor 20 performs both image processing and stereo-matching. In other implementations, the processor 20 processes and stitches the images, but stereo-matching is performed by a processor 20A external to the imaging assembly 10. Such situations can be useful, for example, where the imaging assembly 10 is integrated into an electronic notebook or other host device 21 that has its own processor 20A (see
The processor 20 is configured to control illumination of an object by the pattern projection unit 18, to acquire signals generated by the imagers 14, 16 of the imager sub-arrays 12, and to process the acquired signals. In some implementations, the output from the sub-arrays 12 is a standard Bayer pattern or clear/IR monochromatic pattern. A processing chip having two standard sensor inputs (e.g., MIPI/parallel) can be used.
As illustrated by
Collectively, the imager sub-arrays 12 can generate depth information from stereoscopically matched RGB images, and depth information from stereoscopically matched IR images of the IR pattern projected by the unit 18. For example, by using the signals from the RGB imagers 16 in two or more sub-arrays 12, the processor 20 can generate a stereo RGB image of the object 22 by matching corresponding pixels in each image based on the detected visible light. As indicated by
For example, signals from two or more of the IR-sensitive imagers 14 on different sub-arrays 12 can be used by the processor 20 to generate depth information 30 based on the IR images acquired by the imagers 14 (inter-array depth information). Further, the signals from imagers on the same sub-array (e.g., imagers 16A and 14A, or imagers 16B and 14B) can generate depth information 34 that, when compared to inter-array depth information 30 or 32 (e.g., from imagers 16A and 16B, or imagers 14A and 14B), can be used for inter-array calibration (i.e., to correct for misalignment between imager sub-arrays when an entire sub-array is misaligned with respect to the other). In addition, the signal(s) from a single one of the IR imagers 14 can be used to generate depth information when the IR image of the projected pattern is compared to known patterns. Such a non-stereo image can be used, for example, for intra-array calibration 36, as described below.
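The comparison of intra-array and inter-array depth information can be sketched as a simple consistency check. The tolerance value here is an illustrative assumption, not a figure from the disclosure:

```python
def check_alignment(intra_depth_m, inter_depth_m, tolerance_m=0.01):
    """Compare depth for the same scene point derived from imagers on the
    same sub-array (intra-array) against imagers on different sub-arrays
    (inter-array). Since the intra-array baseline is fixed by construction,
    a deviation beyond the tolerance suggests inter-array misalignment."""
    deviation = abs(intra_depth_m - inter_depth_m)
    return deviation <= tolerance_m, deviation

# Intra-array depth of 1.500 m vs. inter-array depth of 1.530 m: the 30 mm
# mismatch exceeds the 10 mm tolerance, flagging a possible misalignment.
ok, dev = check_alignment(1.500, 1.530)
print(ok, round(dev, 3))  # → False 0.03
```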
In general, the processor 20 in the imaging assembly 10 can obtain inter-array and/or intra-array depth information using the various imagers 14, 16 in the sub-arrays 12. The depth information can be used, for example, to generate a depth map, which then may be used to generate a 3D image. Specific examples are described below.
A first example is described in connection with
A second example is described in connection with
The processor 20 also acquires images with two or more RGB imagers from two or more sub-arrays 12 (block 214) and matches corresponding pixels in the RGB images, for example, based on native features (block 216). The processor 20 then identifies disparities in the matched pixel position (block 218) and correlates the disparities with depth (block 220). In some cases, blocks 214 through 220 are performed in parallel with blocks 204 through 212.
As indicated by blocks 222 and 224, the processor compares the depth information derived in blocks 212 and 220, and determines to which regions in the image the respective depth information should be applied. The determination of block 224 can be based, for example, on statistics. For example, a first part of the scene may not have enough native features to derive good depth information, and thus the depth information derived in block 212 may be used to represent the first part of the scene in the final depth image. On the other hand, a second part of the scene may have sufficient texture so that depth information derived in block 220 may be used to represent the second part of the scene in the final depth map. The processor 20 then constructs a final depth map including depth information derived at least in part from the depth information obtained in blocks 212 and 220, such that each region in a scene (of low or high texture or lighting) is accurately mapped (block 224). In some cases, the processor 20 then may generate a final 3D image (block 226). In some cases, differences in color in the depth map can correspond to different depths.
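The region selection of blocks 222 and 224 can be sketched with a simple local statistic: per image tile, use the RGB-stereo depth where the scene has enough native texture, and fall back to the pattern-assisted IR-stereo depth elsewhere. The variance threshold and tile data are illustrative assumptions:

```python
def variance(values):
    """Population variance of a list of pixel intensities."""
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def fuse_depth(tiles_rgb_pixels, depth_ir, depth_rgb, texture_threshold=4.0):
    """Per tile: pick RGB-stereo depth if the tile's intensity variance
    indicates sufficient native texture, else the IR-stereo depth."""
    fused = []
    for pixels, z_ir, z_rgb in zip(tiles_rgb_pixels, depth_ir, depth_rgb):
        fused.append(z_rgb if variance(pixels) >= texture_threshold else z_ir)
    return fused

# Tile 0 is flat (low texture) -> IR depth; tile 1 is textured -> RGB depth.
tiles = [[10, 10, 10, 10], [3, 12, 5, 14]]
print(fuse_depth(tiles, depth_ir=[2.0, 2.1], depth_rgb=[2.4, 2.05]))
# → [2.0, 2.05]
```

A production system would use a richer statistic (e.g., local gradient energy or matching confidence) rather than raw variance, but the selection structure is the same.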
A third example is illustrated in
As further indicated by
Intra-array depth information, in combination with inter-array depth information, also can be used to calibrate the imaging assembly 10. The intra-array imager distance and orientation can be assumed to be fixed. Thus, the distance between the IR and RGB imagers 14, 16 on the same sub-array 12, as well as their relative orientation, can be assumed to remain the same. As the IR and RGB imagers 14, 16 on a given sub-array 12 are spatially separated from one another, images acquired by the two imagers can be used by the processor 20 to derive depth information. Inter-array depth information can be obtained as well. The processor 20 then can compare intra-array and inter-array depth information and determine whether one of the arrays is misaligned. In some cases, one of the sub-arrays (e.g., sub-array 12B) may be misaligned, or one or more of the imagers (e.g., imager 14B or 16B) in a particular sub-array may be misaligned (e.g., as a result of the imaging assembly 10 being dropped and hitting a hard surface). If the processor 20 identifies a disparity between the intra- and inter-array depth information, the difference can be used, for example, to correct for the misalignment. Such corrections can be used for automated inter-array calibration of the imaging assembly 10 either at the factory (i.e., during the manufacturing and production process) and/or by a consumer or other user at a later time. Specific calibration examples are described below.
A first calibration example is illustrated in
If the processor 20 determines that any of the distance values D3, D4, D5, D6 equals either D1 or D2, then the processor concludes that the imager combinations corresponding to values D3, D4, D5 and/or D6 that have a value equal to D1 or D2 do not contain the misaligned imager. For example, if D1≠D2 and D5=D3=D1, then the processor 20 concludes that the imagers 14A, 14B and 16A are not misaligned because collectively the values D5 and D3 are based on images from these imagers. Assuming further that the remaining distance values D4 and D6 have a value that differs from both D1 and D2, then the processor 20 concludes that the imager 16B is the misaligned imager. The processor 20 also can determine that the distances D1=D3=D5 represent the accurate distance to the object 22, because each of these distance values is based on images from the aligned imagers 14A, 14B and 16A. The processor 20 can apply correction factors to the misaligned imager (i.e., 16B in the example) to compensate for the misalignment.
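The elimination logic above can be sketched directly: with four imagers there are six pairwise depth estimates, and the imager that appears only in the inconsistent estimates is the suspect. The imager labels, pair values, and majority heuristic below are illustrative; the disclosed assembly would use its measured distance values D1 through D6:

```python
from itertools import combinations

def find_misaligned(pair_depths, tolerance=1e-6):
    """pair_depths maps an imager pair (frozenset of two labels) to its
    stereo distance estimate. The depth value reproduced by the most pairs
    is taken as correct; imagers appearing in agreeing pairs are trusted,
    and any imager that appears only in disagreeing pairs is flagged."""
    values = list(pair_depths.values())
    majority = max(set(values), key=values.count)
    trusted = set()
    for pair, z in pair_depths.items():
        if abs(z - majority) <= tolerance:
            trusted |= pair
    suspects = {i for pair, z in pair_depths.items()
                if abs(z - majority) > tolerance
                for i in pair} - trusted
    return suspects

# Imager "16B" skews every estimate it participates in; the other three
# pairwise estimates agree on the true distance of 2.0 m.
imagers = ["14A", "14B", "16A", "16B"]
depths = {frozenset(p): 2.0 for p in combinations(imagers, 2)}
depths[frozenset({"14A", "16B"})] = 2.3
depths[frozenset({"14B", "16B"})] = 2.4
depths[frozenset({"16A", "16B"})] = 2.5
print(find_misaligned(depths))  # → {'16B'}
```

Once the suspect is isolated, a correction factor for that imager can be derived from the difference between its pair estimates and the majority distance.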
To determine which of the sub-arrays 12 contains the misaligned imagers, a structured-light pattern is projected onto the object 22 from the light projection unit 18. Light from the projection unit 18 is shown in
In some implementations, it can be beneficial to acquire images from imagers in three or more different sub-arrays. The following techniques, which use at least three imagers, can be used in conjunction with any of the foregoing examples.
In some implementations, it can be beneficial to provide multiple IR-sensitive imagers in each sub-array, where each IR-sensitive imager in a particular sub-array is sensitive to a different portion of the near-IR spectrum. An example is illustrated in
Using the assembly of
In some implementations, it can be beneficial to provide multiple RGB-sensitive imagers in each sub-array, where at least one of the RGB-sensitive imagers in each particular sub-array includes a neutral density filter. An example is illustrated in
The imaging assembly of
As described above, the stereo imaging assembly 10 can include a light source (e.g., that is part of a pattern projection unit 18) for projecting a pattern onto an object in a scene. In some instances, the light source can be implemented as a laser diode that emits a predetermined narrow range of wavelengths in the IR part of the spectrum. For example, the laser diode in some cases may emit light in the range of about 850 nm±10 nm, or in the range of about 830 nm±10 nm, or in the range of about 940 nm±10 nm. Different wavelengths and ranges may be appropriate for other implementations depending, for example, on the availability of particular laser diodes. Structured light emitted by the laser diode may be reflected, for example, by an object in a scene such that the reflected light is directed toward the image sub-arrays 12A, 12B.
In some cases, each IR imager (e.g., 14A, 14B) includes a band-pass filter designed to filter out substantially all IR light except for IR light emitted by the laser diode. The band-pass filter preferably permits passage of a slightly wider range of wavelengths than is emitted by the laser diode. Further, the wider wavelength range that the band-pass filter allows to pass should be centered at slightly longer wavelengths relative to the narrow range of wavelengths emitted by the laser diode. This latter feature is inherent, for example, in dielectric-type band-pass filters, where the peak (passed) wavelength shifts to shorter wavelengths as the angle of incidence of impinging light increases. On the other hand, each of the visible-light imagers (e.g., 16A, 16B) can include an IR-cut filter designed to filter out substantially all IR light such that almost no IR light reaches the photosensitive region of the visible-light imagers. Thus, the IR-cut filter preferably allows only visible light to pass.
The foregoing features, which can be applied to any of the various implementations described above, can allow the same IR imager (e.g., 14A or 14B) to collect projected IR structured light as well as visible light without IR-washout. The IR imagers can be used, for example, to collect stereo images in various scenarios. For example, in some implementations, the IR imagers can collect patterned or textured IR light that has been projected onto an object or scene (i.e., light projected by a laser diode). In other implementations, the IR imagers in conjunction with the projected textured or patterned light can be used under ambient light conditions (e.g., outside, where there is a significant amount of natural sunlight including IR radiation). In some implementations, a visible-light imager (e.g., 16A or 16B) with an IR-cut filter and a single IR imager (e.g., 14A or 14B) with a band-pass filter can be used to acquire depth information. In some instances, the optical intensity output by the laser diode can be modulated to facilitate serial image capture using different amounts of projected power. For example, low power projected light may be used for closer objects, whereas high power projected light may be used for more distant objects. These features can, in some cases, improve pattern or texture resolution, and also may reduce power. In some implementations, instead of obtaining serial images as the laser diode is modulated between two different non-zero power modes, serial images can be obtained as the laser diode is switched between an ON mode and an OFF mode.
In the context of the present disclosure, the term “light” can include radiation in the visible portion of the spectrum as well as radiation in the IR (including near-IR or far-IR) or UV (including near-UV or far-UV) portions of the spectrum. Light in the visible region may be, for example, monochrome or RGB.
The imaging assemblies described above can be integrated into a wide range of mobile or portable devices, such as cellular phones, smart phones, personal digital assistants (PDAs), digital cameras, portable game consoles, portable multimedia players, handheld e-books, portable tablets, and laptops, among others.
Various modifications can be made to the foregoing examples within the spirit of the invention. Accordingly, other implementations are within the scope of the claims.
Claims
1. An apparatus comprising:
- an imaging assembly including: a plurality of imager sub-arrays, each of which includes a first imager to sense light of a first wavelength or range of wavelengths and a second imager to sense light of a different second wavelength or range of wavelengths, wherein each of the first and second imagers has its own respective one or more dedicated beam shaping elements; and a pattern projection unit to project a light pattern onto an object; and
- one or more processors operable collectively to receive and process signals from the first and second imagers in each imager sub-array,
- wherein the one or more processors are operable collectively to obtain, from the imager sub-arrays, signals that collectively allow the one or more processors to generate one or more of the following: (i) a stereo image based on images acquired from the first imagers, (ii) a stereo IR image based on images acquired from the second imagers, (iii) a stereo image based on images acquired in response to the light pattern projected by the pattern projection unit, or (iv) a non-stereo image based on images acquired in response to the light pattern projected by the pattern projection unit,
- wherein the one or more processors are operable collectively to obtain intra-array and inter-array depth information based on signals from the imager sub-arrays, to compare the intra-array depth information to inter-array depth information, to determine whether one of the imager sub-arrays is misaligned based on the comparison and, if so, to determine whether misalignment is intra-array or inter-array,
- wherein the intra-array information is based on pixel data from one of the first imagers and one of the second imagers in a same imager sub-array as one another, and wherein the inter-array information is based on pixel data from respective ones of the first and/or second imagers in different imager sub-arrays from one another.
2. The apparatus of claim 1 wherein the one or more processors are operable collectively to correct for intra-array misalignment.
3. The apparatus of claim 1 wherein the one or more processors are operable collectively to correct for inter-array misalignment.
4. The apparatus of claim 1 wherein the intra-array information comprises stereoscopically matched pixels from one of the first imagers and one of the second imagers in the same imager sub-array.
5. The apparatus of claim 1 wherein each first imager is operable to sense radiation in a visible part of the spectrum, and each second imager is operable to sense radiation in an IR part of the spectrum.
6. The apparatus of claim 1 wherein the one or more processors are operable to generate each of (i), (ii), (iii) and (iv).
7. The apparatus of claim 1 wherein the pattern projection unit is operable to project an IR pattern onto a scene, and wherein the one or more processors are operable collectively to:
- match corresponding pixels in IR images acquired from the imager sub-arrays,
- identify disparities in positions of the matched pixels, and
- correlate the disparities with depth.
8. The apparatus of claim 1 wherein the pattern projection unit is operable to project an IR pattern onto a scene, and wherein the one or more processors are operable collectively to:
- match corresponding IR-sensitive pixels in images acquired from different ones of the imager sub-arrays, identify first disparities in positions of the matched IR-sensitive pixels, and correlate the first disparities with depth to obtain first depth information, and
- match corresponding RGB-sensitive pixels in images acquired from different ones of the imager sub-arrays, identify second disparities in positions of the matched RGB-sensitive pixels, and correlate the second disparities with depth to obtain second depth information.
9. The apparatus of claim 8 wherein the one or more processors are further operable collectively to:
- compare the first and second depth information, and determine to which regions of an image representing the scene the respective depth information should be applied.
10. The apparatus of claim 8 wherein the one or more processors are further operable collectively to:
- construct a depth map including depth information derived at least in part from the first and second depth information.
11. The apparatus of claim 1 wherein the one or more processors are operable collectively to:
- match corresponding pixels in RGB images acquired from different ones of the imager sub-arrays,
- identify disparities in positions of the matched pixels, and
- correlate the disparities with depth.
12. The apparatus of claim 1 wherein the one or more processors are operable collectively to:
- match corresponding RGBIR-sensitive pixels in images acquired from different ones of the imager sub-arrays, identify first disparities in positions of the matched RGBIR-sensitive pixels, and correlate the first disparities with depth to obtain first depth information, and
- match corresponding RGB-sensitive pixels in images acquired from different ones of the imager sub-arrays, identify second disparities in positions of the matched RGB-sensitive pixels, and correlate the second disparities with depth to obtain second depth information.
13. The apparatus of claim 12 wherein the one or more processors are further operable collectively to:
- compare the first and second depth information, and determine to which regions of an image representing a scene the respective depth information should be applied.
Type: Grant
Filed: Mar 31, 2015
Date of Patent: Jul 9, 2019
Patent Publication Number: 20170034499
Assignee: ams Sensors Singapore Pte. Ltd. (Singapore)
Inventors: Moshe Doron (San Francisco, CA), Ohad Meitav (Sunnyvale, CA), Markus Rossi (Jona), Dmitry Ryuma (San Francisco, CA), Alireza Yasan (San Jose, CA)
Primary Examiner: Rowina J Cattungal
Application Number: 15/300,997
International Classification: H04N 13/25 (20180101); H04N 5/225 (20060101); H04N 5/33 (20060101); G06T 7/521 (20170101); G06T 7/593 (20170101);