ARCHITECTURES FOR IMAGER ARRAYS AND ARRAY CAMERAS
Architectures for imager arrays configured for use in array cameras in accordance with embodiments of the invention are described. One embodiment of the invention includes a plurality of focal planes, where each focal plane comprises a two dimensional arrangement of pixels having at least two pixels in each dimension and each focal plane is contained within a region of the imager array that does not contain pixels from another focal plane, control circuitry configured to control the capture of image information by the pixels within the focal planes, where the control circuitry is configured so that the capture of image information by the pixels in at least two of the focal planes is separately controllable, and sampling circuitry configured to convert pixel outputs into digital pixel data.
This application claims priority to U.S. Provisional Patent Application Ser. No. 61/334,011, filed on May 12, 2010, which is incorporated by reference herein in its entirety.
FIELD OF THE INVENTION
The present invention relates generally to imagers and more specifically to imager arrays used in array cameras.
BACKGROUND OF THE INVENTION
A sensor used in a conventional single sensor camera typically includes a row controller and one or more column read-out circuits. In the context of the array of pixels in an imager, the term “row” typically refers to a group of pixels that share a common control line (or lines) and the term “column” refers to a group of pixels that share a common read-out line (or lines). A number of array camera designs have been proposed that use either an array of individual cameras/sensors or a lens array focused on a single focal plane sensor. When multiple separate cameras are used in the implementation of an array camera, each camera has a separate I/O path and the camera controllers are typically required to be synchronized in some way. When a lens array focused on a single focal plane sensor is used to implement an array camera, the sensor is typically a conventional sensor similar to that used in a conventional camera. As such, the sensor does not possess the ability to independently control the pixels within the image circle of each lens in the lens array.
SUMMARY OF THE INVENTION
Systems and methods are disclosed in which an imager array is implemented as a monolithic integrated circuit in accordance with embodiments of the invention. In many embodiments, the imager array includes a plurality of imagers that are each independently controlled by control logic within the imager array and the image data captured by each imager is output from the imager array using a common I/O path. In a number of embodiments, the pixels of each imager are backside illuminated and the bulk silicon of the imager array is thinned to different depths in the regions corresponding to different imagers in accordance with the spectral wavelengths sensed by each imager.
One embodiment of the invention includes a plurality of focal planes, where each focal plane comprises a two dimensional arrangement of pixels having at least two pixels in each dimension and each focal plane is contained within a region of the imager array that does not contain pixels from another focal plane, control circuitry configured to control the capture of image information by the pixels within the focal planes, where the control circuitry is configured so that the capture of image information by the pixels in at least two of the focal planes is separately controllable, and sampling circuitry configured to convert pixel outputs into digital pixel data.
In a further embodiment, the plurality of focal planes is arranged as a two dimensional array of focal planes having at least three focal planes in one dimension.
In another embodiment, the plurality of focal planes is arranged as a two dimensional array of focal planes having at least three focal planes in both dimensions.
In a still further embodiment, the plurality of focal planes is arranged as an N×M array of focal planes comprising at least two focal planes configured to capture blue light, at least two focal planes configured to capture green light, and at least two focal planes configured to capture red light.
In still another embodiment, each focal plane comprises rows and columns of pixels.
In a yet further embodiment, the control circuitry is configured to control capture of image information by a pixel by controlling the resetting of the pixel.
In yet another embodiment, the control circuitry is configured to control capture of image information by a pixel by controlling the readout of the pixel.
In a further embodiment again, the control circuitry is configured to control capture of image information by controlling the integration time of each pixel.
In another embodiment again, the control circuitry is configured to control the processing of image information by controlling the gain of the sampling circuitry.
In a further additional embodiment, the control circuitry is configured to control the processing of image information by controlling the black level offset of each pixel.
In another additional embodiment, the control circuitry is configured to control the capture of image information by controlling readout direction.
In a still yet further embodiment, the read-out direction is selected from the group including top to bottom, and bottom to top.
In still yet another embodiment, the read-out direction is selected from the group including left to right, and right to left.
In a still further embodiment again, the control circuitry is configured to control the capture of image information by controlling the readout region of interest.
In still another embodiment again, the control circuitry is configured to control the capture of image information by controlling horizontal sub-sampling.
In a still further additional embodiment, the control circuitry is configured to control the capture of image information by controlling vertical sub-sampling.
In still another additional embodiment, the control circuitry is configured to control the capture of image information by controlling pixel charge-binning.
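The sub-sampling and charge-binning controls described above can be sketched as a minimal digital model (function names and the list-of-rows frame representation are illustrative assumptions, not taken from the patent; on-chip, binning sums charge in the analog domain rather than digital values):

```python
def subsample(frame, fx, fy):
    """Horizontal/vertical sub-sampling: keep every fx-th column and
    every fy-th row of a frame given as a list of rows."""
    return [row[::fx] for row in frame[::fy]]

def bin2x2(frame):
    """2x2 binning modeled digitally: each output pixel is the sum of a
    2x2 neighborhood of input pixels."""
    h, w = len(frame), len(frame[0])
    return [[frame[r][c] + frame[r][c + 1] + frame[r + 1][c] + frame[r + 1][c + 1]
             for c in range(0, w, 2)]
            for r in range(0, h, 2)]
```

Both modes reduce the amount of data read out per frame; binning additionally preserves collected signal at the cost of spatial resolution.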
In a yet further embodiment again, the imager array is a monolithic integrated circuit imager array.
In yet another embodiment again, a two dimensional array of adjacent pixels in at least one focal plane have the same capture band.
In a yet further additional embodiment, the capture band is selected from the group including blue light, cyan light, extended color light comprising visible light and near-infra red light, green light, infra-red light, magenta light, near-infra red light, red light, yellow light, and white light.
In a further additional embodiment again, a first array of adjacent pixels in a first focal plane has a first capture band, a second array of adjacent pixels in a second focal plane has a second capture band, where the first and second capture bands are the same, the peripheral circuitry is configured so that the integration time of the first array of adjacent pixels is a first time period, and the peripheral circuitry is configured so that the integration time of the second array of adjacent pixels is a second time period, where the second time period is longer than the first time period.
In another further embodiment, at least one of the focal planes includes an array of adjacent pixels, where the pixels in the array of adjacent pixels are configured to capture different colors of light.
In yet another further embodiment, the array of adjacent pixels employs a Bayer filter pattern.
In still another further embodiment, the plurality of focal planes is arranged as a 2×2 array of focal planes, a first focal plane in the array of focal planes includes an array of adjacent pixels that employ a Bayer filter pattern, a second focal plane in the array of focal planes includes an array of adjacent pixels configured to capture green light, a third focal plane in the array of focal planes includes an array of adjacent pixels configured to capture red light, and a fourth focal plane in the array of focal planes includes an array of adjacent pixels configured to capture blue light.
In another further embodiment again, the plurality of focal planes is arranged as a two dimensional array of focal planes having at least three focal planes in one dimension.
In another further additional embodiment, the plurality of focal planes is arranged as a two dimensional array of focal planes having at least three focal planes in both dimensions.
In still yet another further embodiment, the control circuitry comprises a global counter.
In still another further embodiment again, the control circuitry is configured to stagger the start points of image read-out such that each focal plane has a controlled temporal offset with respect to a global counter.
In still another further additional embodiment again, the control circuitry is configured to separately control the integration times of the pixels in each focal plane based upon the capture band of the pixels in the focal plane using the global counter.
In yet another further embodiment again, the control circuitry is configured to separately control the frame rate of each focal plane based upon the global counter.
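The staggered read-out start points and per-focal-plane frame rates driven by a global counter can be sketched as follows (a minimal model under assumed units; the function name, counter frequency, and tick arithmetic are illustrative, not specified by the patent):

```python
def row_readout_times(global_counter_hz, frame_rate_hz, num_rows, start_offset):
    """Counter ticks at which each row of one focal plane is read out.

    start_offset staggers this focal plane relative to the shared global
    counter; the frame period in ticks follows from the per-focal-plane
    frame rate, and rows are read out at evenly spaced intervals within it.
    """
    frame_period = global_counter_hz // frame_rate_hz   # ticks per frame
    row_period = frame_period // num_rows               # ticks per row
    return [start_offset + r * row_period for r in range(num_rows)]
```

Giving each focal plane its own `start_offset` and `frame_rate_hz` against one counter is what allows separately controlled, yet mutually synchronized, capture.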
In yet another further additional embodiment, the control circuitry further comprises a pair of pointers for each focal plane.
In a still further embodiment, the offset between the pointers specifies an integration time.
In still another embodiment, the offset between the pointers is programmable.
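The pointer pair described above behaves like a rolling-shutter reset pointer and read-out pointer sweeping the rows of a focal plane, with the programmable offset between them fixing the integration time in units of row periods. A minimal sketch (names and modular-wraparound behavior are assumptions for illustration):

```python
def pointer_positions(counter, num_rows, integration_rows):
    """Rows addressed by the reset and read-out pointers at a counter value.

    The read-out pointer trails the reset pointer by `integration_rows`
    row periods (wrapping at the frame boundary), so every row integrates
    for exactly that many row-times.
    """
    reset_row = counter % num_rows
    readout_row = (counter - integration_rows) % num_rows
    return reset_row, readout_row
```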
In a yet further embodiment, the control circuitry comprises a row controller dedicated to each focal plane.
In yet another embodiment, the imager array includes an array of M×N focal planes, and the control circuitry comprises a single row decoder circuit configured to address each row of pixels in each row of M focal planes.
In a further embodiment again, the control circuitry is configured to generate a first set of pixel level timing signals so that the row decoder and a column circuit sample a first row of pixels within a first focal plane, and the control circuitry is configured to generate a second set of pixel level timing signals so that the row decoder and a column circuit sample a second row of pixels within a second focal plane.
In another embodiment again, each focal plane has dedicated sampling circuitry.
In a further additional embodiment, at least a portion of the sampling circuitry is shared by a plurality of the focal planes.
In another additional embodiment, the imager array includes an array of M×N focal planes, and the sampling circuitry comprises M analog signal processors (ASPs) and each ASP is configured to sample pixels read-out from N focal planes.
In a still yet further embodiment, each ASP is configured to receive pixel output signals from the N focal planes via N inputs, and each ASP is configured to sequentially process each pixel output signal on its N inputs.
In still yet another embodiment, the control circuitry is configured so that a single common analog pixel signal readout line is shared by all pixels in a group of N focal planes, and the control circuitry is configured to control the capture of image data to time multiplex the pixel output signals received by each of the M ASPs.
In still another embodiment again, the imager array includes an array of M×N focal planes, the sampling circuitry comprises a plurality of analog signal processors (ASPs) and each ASP is configured to sample pixels read-out from a plurality of focal planes, the control circuitry is configured so that a single common analog pixel signal readout line is shared by all pixels in the plurality of focal planes, and the control circuitry is configured to control the capture of image data to time multiplex the pixel output signals received by each of the plurality of ASPs.
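The time multiplexing of pixel output signals onto shared ASPs can be sketched as a round-robin schedule (a simplified model with assumed names; the patent does not prescribe a particular slot assignment):

```python
def multiplex_schedule(m_asps, n_planes_per_asp, num_slots):
    """For each ASP, the sequence of focal-plane indices sampled per timeslot.

    Each ASP cycles round-robin through the N focal planes that share its
    readout line, so only one of those planes drives the line in any slot.
    """
    schedule = {}
    for asp in range(m_asps):
        planes = [asp * n_planes_per_asp + p for p in range(n_planes_per_asp)]
        schedule[asp] = [planes[t % n_planes_per_asp] for t in range(num_slots)]
    return schedule
```

Sharing one ASP across N focal planes trades per-plane sampling rate for a substantial reduction in analog circuit area and power.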
In a yet further embodiment again, the sampling circuitry comprises analog front end (AFE) circuitry and analog-to-digital conversion (ADC) circuitry.
In yet another embodiment again, the sampling circuitry is configured so that each focal plane has a dedicated AFE and at least one ADC is shared between at least two focal planes.
In a yet further additional embodiment, the sampling circuitry is configured so that at least one ADC is shared between a pair of focal planes.
In yet another additional embodiment, the sampling circuitry is configured so that at least one ADC is shared between four focal planes.
In a further additional embodiment again, the sampling circuitry is configured so that at least one AFE is shared between at least two focal planes.
In another additional embodiment again, the sampling circuitry is configured so that at least one AFE is shared between a pair of focal planes.
In another further embodiment, the sampling circuitry is configured so that two pairs of focal planes that each share an AFE collectively share an ADC.
In still another further embodiment, the control circuitry is configured to separately control the power down state of each focal plane and associated AFE circuitry or processing timeslot therein.
In yet another further embodiment, the control circuitry configures the pixels of at least one inactive focal plane to be in a constant reset state.
In another further embodiment again, at least one focal plane includes reference pixels to calibrate pixel data captured using the focal plane.
In another further additional embodiment, the control circuitry is configured to separately control the power down state of the focal plane's associated AFE circuitry or processing timeslot therein, and the control circuitry is configured to power down the focal plane's associated AFE circuitry or processing timeslot therein without powering down the associated AFE circuitry or processing timeslot therein for readout of the reference pixels of the focal plane.
In still yet another further additional embodiment, the pixels in the array of adjacent pixels share read-out circuitry.
In still another further embodiment again, the read-out circuit includes a reset transistor, a floating diffusion capacitor, and a source follower amplifier transistor.
In still another further additional embodiment, the array of adjacent pixels in the at least one focal plane is a first array of adjacent pixels, the imager array includes a second array of adjacent pixels within another of the plurality of focal planes, the capture band of the pixels in the second array of adjacent pixels differs from the capture band of the pixels in the first array of adjacent pixels, and the full well capacity of the pixels in the first array of adjacent pixels is different from the full well capacity of the pixels in the second array of adjacent pixels.
In yet another further embodiment again, the full well capacity of the pixels in the first array of adjacent pixels is configured so that each pixel well is filled by the number of electrons generated when the pixel is exposed for a predetermined integration time to light within the first capture band having a predetermined maximum spectral radiance, and the full well capacity of the pixels in the second array of adjacent pixels is configured so that each pixel well is filled by the number of electrons generated when the pixel is exposed for a predetermined integration time to light within the second capture band having a predetermined maximum spectral radiance.
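The well-sizing rule above amounts to setting the full well capacity equal to the electron count generated at the predetermined maximum spectral radiance over the predetermined integration time. A one-line sketch (the photon-rate parameterization and all numbers are illustrative assumptions):

```python
def required_full_well(photon_rate_per_s, quantum_efficiency, integration_time_s):
    """Electrons generated over the integration time; the pixel well is
    sized so that this count just fills it at the predetermined maximum
    spectral radiance for the pixel's capture band."""
    return photon_rate_per_s * quantum_efficiency * integration_time_s
```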
In yet another further additional embodiment, the floating diffusion capacitance determines the conversion gain of each pixel in the array of adjacent pixels.
In a further embodiment, the array of adjacent pixels in the at least one focal plane is a first array of adjacent pixels, the imager array includes a second array of adjacent pixels within another of the plurality of focal planes and the pixels in the second array of adjacent pixels have a floating diffusion capacitance that determines the conversion gain of each pixel in the second array of adjacent pixels, the capture band of the pixels in the second array of adjacent pixels differs from the capture band of the pixels in the first array of adjacent pixels, and the conversion gain of the pixels in the first array of adjacent pixels is different from the conversion gain of the pixels in the second array of adjacent pixels.
In another embodiment, the floating diffusion capacitors of the first and second arrays of adjacent pixels are configured to minimize the input referred noise of the pixel outputs.
In a still further embodiment, the array of adjacent pixels in the at least one focal plane is a first array of adjacent pixels, the imager array includes a second array of adjacent pixels within another of the plurality of focal planes and the pixels in the second array of adjacent pixels have a floating diffusion capacitance that determines the conversion gain of each pixel in the second array of adjacent pixels, the capture band of the pixels in the second array of adjacent pixels is the same as the capture band of the pixels in the first array of adjacent pixels, and the conversion gain of the pixels in the first array of adjacent pixels is different from the conversion gain of the pixels in the second array of adjacent pixels.
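The relationship between floating diffusion capacitance and conversion gain follows the standard pixel relation CG = q / C_FD (volts per electron): a smaller floating diffusion capacitance yields a higher conversion gain. A small sketch with an illustrative capacitance value:

```python
Q_E = 1.602e-19  # elementary charge, coulombs

def conversion_gain_uV_per_e(c_fd_farads):
    """Conversion gain in microvolts per electron for a floating diffusion
    capacitance C_FD, using CG = q / C_FD."""
    return Q_E / c_fd_farads * 1e6
```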
In still another embodiment, the source follower gain of each pixel in the array determines the output voltage of the pixels.
In a yet further embodiment, the array of adjacent pixels in the at least one focal plane is a first array of adjacent pixels, the imager array includes a second array of adjacent pixels within another of the plurality of focal planes and the pixels in the second array of adjacent pixels have a fixed source follower gain for each pixel in the second array of adjacent pixels, the capture band of the pixels in the second array of adjacent pixels differs from the capture band of the pixels in the first array of adjacent pixels, and the source follower gain of the pixels in the first array of adjacent pixels is different from the source follower gain of the pixels in the second array of adjacent pixels.
In yet another embodiment, the source follower gains of the first and second arrays of adjacent pixels are configured so that the maximum output signal swing of each pixel is the same.
In a further embodiment again, a first array of adjacent pixels in a first focal plane has a first capture band, a second array of adjacent pixels in a second focal plane has a second capture band, where the first and second capture bands differ, the imager array is backside illuminated, and the thinning depth of the imager array in the region containing the first array of adjacent pixels is different from the thinning depth of the region of the imager array containing the second array of adjacent pixels.
In another embodiment again, the first and second capture bands do not overlap.
In a further additional embodiment, the thinning depth of the imager array in the region containing the first array is related to the first capture band, and the thinning depth of the imager array in the region containing the second array is related to the second capture band.
In another additional embodiment, the first thinning depth is configured so as to position the peak carrier generation within the photodiode's depletion region given a nominal capture band wavelength of 450 nm.
In a still yet further embodiment, the first thinning depth is configured so as to position the peak carrier generation within the photodiode's depletion region given a nominal capture band wavelength of 550 nm.
In still yet another embodiment, the first thinning depth is configured so as to position the peak carrier generation within the photodiode's depletion region given a nominal capture band wavelength of 640 nm.
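The wavelength-dependent thinning depths above reflect how deeply light penetrates silicon: absorption follows Beer-Lambert decay, and the absorption length grows strongly with wavelength, so blue light generates carriers much nearer the illuminated surface than red light. The absorption lengths below are rough illustrative values, not design data; measured optical constants for silicon should be used in practice:

```python
import math

# Approximate absorption lengths (1/alpha) in crystalline silicon, micrometres.
# Illustrative round numbers only.
ABSORPTION_LENGTH_UM = {450: 0.4, 550: 1.2, 640: 3.0}

def fraction_absorbed(wavelength_nm, depth_um):
    """Fraction of incident light absorbed within depth_um of silicon,
    via Beer-Lambert: 1 - exp(-depth / absorption_length)."""
    return 1.0 - math.exp(-depth_um / ABSORPTION_LENGTH_UM[wavelength_nm])
```

Matching the thinning depth to the capture band positions the peak of this carrier generation profile inside the photodiode's depletion region for that band.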
In a still further embodiment again, a first array of adjacent pixels in a first focal plane has a first capture band, a second array of adjacent pixels in a second focal plane has a second capture band, where the first and second capture bands differ, the pixels in the first array of adjacent pixels have a first pixel size, the pixels in the second array of adjacent pixels have a second pixel size, and the first pixel size is larger than the second pixel size and the first capture band includes longer wavelengths of light than the second capture band.
In a still further additional embodiment, a first portion of the control circuitry is located on one side of a focal plane and a second portion of the control circuitry is located on the opposite side of the focal plane.
In still another additional embodiment, the first portion of the control circuitry is configured to control the capture of information by a plurality of pixels in a first focal plane and a plurality of pixels in a second focal plane located adjacent to the first focal plane.
In a yet further embodiment again, the imager array is configured to receive a lens array positioned above the focal planes of the imager array, and each of the plurality of focal planes is located within a region in the imager array corresponding to an image circle of the lens array, when a lens array is mounted to the imager array.
Yet another embodiment again also includes a cover-glass mounted above the focal planes of the imager array.
In a further additional embodiment again, adjacent focal planes are separated by a spacing distance.
In another additional embodiment again, control circuitry is located within the spacing distance between adjacent focal planes.
In another further embodiment, sampling circuitry is located within the spacing distance between adjacent focal planes.
Turning now to the drawings, architectures for imager arrays configured for use in array cameras in accordance with embodiments of the invention are illustrated. In many embodiments, a centralized controller on an imager array enables fine control of the capture time of each focal plane in the array. The term focal plane describes a two dimensional arrangement of pixels. Focal planes in an imager array are typically non-overlapping (i.e. each focal plane is located within a separate region on the imager array). The term imager is used to describe the combination of a focal plane and the control circuitry that controls the capture of image information using the pixels within the focal plane. In a number of embodiments, the focal planes of the imager array can be separately triggered. In several embodiments, the focal planes of the imager array utilize different integration times tailored to the capture band of the pixels within each focal plane. The capture band of a pixel typically refers to a contiguous sub-band of the electromagnetic spectrum to which a pixel is sensitive. In addition, the specialization of specific focal planes so that all or a majority of the pixels in the focal plane have the same capture band enables a number of pixel performance improvements and increases in the efficiency of utilization of peripheral circuitry within the imager array.
In a number of embodiments, the pixels of the imager array are backside illuminated and the substrate of the regions containing each of the focal planes are thinned to different depths depending upon the spectral wavelengths sensed by the pixels in each focal plane. In addition, the pixels themselves can be modified to improve the performance of the pixels with respect to specific capture bands. In many embodiments, the conversion gain, source follower gain and full well capacity of the pixels in each focal plane are determined to improve the performance of the pixels with respect to their specific capture bands.
In several embodiments, each focal plane possesses dedicated peripheral circuitry to control the capture of image information. In certain embodiments, the grouping of pixels intended to capture the same capture band into focal planes enables peripheral circuitry to be shared between the pixels. In many embodiments, the analog front end, analog to digital converter, and/or column read-out and control circuitry are shared between pixels within two or more focal planes.
In many embodiments, the imagers in an imager array can be placed in a lower power state to conserve power, which can be useful in operating modes that do not require all imagers to be used to generate the output image (e.g. lower resolution modes). In several embodiments, the pixels of imagers in the low power state are held with the transfer gate on so as to maintain the photodiode's depletion region at its maximum potential and carrier collection ability, thus minimizing the probability that photo-generated carriers created in an inactive imager migrate to the pixels of active imagers. Array cameras and imager arrays in accordance with embodiments of the invention are discussed further below.
1. Array Camera Architecture
An array camera architecture that can be used in a variety of array camera configurations in accordance with embodiments of the invention is illustrated in
The imager array 110 includes an M×N array of individual and independent focal planes, each of which receives light through a separate lens system. The imager array can also include other circuitry to control the capture of image data using the focal planes and one or more sensors to sense physical parameters. The control circuitry can control imaging and functional parameters such as exposure times, trigger times, gain, and black level offset. The control circuitry can also control the capture of image information by controlling read-out direction (e.g. top-to-bottom or bottom-to-top, and left-to-right or right-to-left). The control circuitry can also control read-out of a region of interest, horizontal sub-sampling, vertical sub-sampling, and/or charge-binning. In many embodiments, the circuitry for controlling imaging parameters may trigger each focal plane separately or in a synchronized manner. The imager array can include a variety of other sensors, including but not limited to, dark pixels to estimate dark current at the operating temperature. Imager arrays that can be utilized in array cameras in accordance with embodiments of the invention are disclosed in PCT Publication WO 2009/151903 to Venkataraman et al., the disclosure of which is incorporated herein by reference in its entirety. In a monolithic implementation, the imager array may be implemented as a monolithic integrated circuit. When an imager array in accordance with embodiments of the invention is implemented as a single self-contained SOC chip or die, the term imager array can be used to describe the semiconductor chip on which the focal planes and associated control, support, and read-out electronics are integrated.
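The per-focal-plane imaging and functional parameters enumerated above (exposure, trigger time, gain, black level, read-out direction, region of interest, sub-sampling, binning) can be pictured as one settings record per focal plane held by the controller. All field names and defaults here are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class FocalPlaneConfig:
    """Per-focal-plane capture settings of the kind enumerated above
    (field names and defaults are hypothetical)."""
    exposure_us: int = 10_000
    trigger_offset: int = 0          # stagger vs. a shared global counter
    analog_gain: float = 1.0
    black_level: int = 64
    readout_top_to_bottom: bool = True
    readout_left_to_right: bool = True
    roi: tuple = (0, 0, 400, 300)    # x, y, width, height
    h_subsample: int = 1
    v_subsample: int = 1
    charge_binning: bool = False
```

A controller for an M×N imager array would then hold one such record per focal plane, e.g. `configs = {(m, n): FocalPlaneConfig() for m in range(M) for n in range(N)}`, making each focal plane separately controllable.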
The image processing pipeline module 120 is hardware, firmware, software, or a combination thereof for processing the images received from the imager array 110. The image processing pipeline module 120 typically processes the multiple low resolution (LR) images captured by the camera array and produces a synthesized higher resolution image in accordance with an embodiment of the invention. In a number of embodiments, the image processing pipeline module 120 provides the synthesized image data via an output 122. Various image processing pipeline modules that can be utilized in a camera array in accordance with embodiments of the invention are disclosed in U.S. patent application Ser. No. 12/967,807 entitled “System and Methods for Synthesizing High Resolution Images Using Super-Resolution Processes” filed Dec. 14, 2010, the disclosure of which is incorporated by reference herein in its entirety.
The controller 130 is hardware, software, firmware, or a combination thereof for controlling various operation parameters of the imager array 110. In many embodiments, the controller 130 receives inputs 132 from a user or other external components and sends operation signals to control the imager array 110. The controller 130 can also send information to the image processing pipeline module 120 to assist processing of the LR images captured by the imager array 110.
Although a specific array camera architecture is illustrated in
An imager array in accordance with an embodiment of the invention is illustrated in
Focal plane array cores in accordance with embodiments of the invention include an array of imagers and dedicated peripheral circuitry for capturing image data using the pixels in each focal plane. Imager arrays in accordance with embodiments of the invention can include focal plane array cores that are configured in any of a variety of different configurations appropriate to a specific application. For example, customizations can be made to a specific imager array design including (but not limited to) customizations with respect to the focal planes, the pixels, and the dedicated peripheral circuitry. Various focal plane designs, pixel designs, and peripheral circuitry that can be incorporated into focal plane array cores in accordance with embodiments of the invention are discussed below.
3.1. Formation of Focal Planes on an Imager Array
An imager array can be constructed in which the focal planes are formed from an array of pixel elements, where each focal plane is a sub-array of pixels. In embodiments where each sub-array has the same number of pixels, the imager array includes a total of K×L pixel elements, which are segmented in M×N sub-arrays of X×Y pixels, such that K=M×X, and L=N×Y. In the context of an imager array, each sub-array or focal plane can be used to generate a separate image of the scene. Each sub-array of pixels provides the same function as the pixels of a conventional imager (i.e. the imager in a camera that includes a single focal plane).
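The segmentation above (K×L pixels tiled into M×N sub-arrays of X×Y pixels, with K=M×X and L=N×Y) implies a simple mapping from a global pixel coordinate to its focal plane and local coordinate. A sketch (the function name is an assumption; it follows directly from the arithmetic in the text):

```python
def locate_pixel(row, col, X, Y):
    """Map a global pixel coordinate in the K x L array to its focal plane
    (m, n) and the local coordinate within that X x Y sub-array.

    Assumes the focal planes tile the sensor with no gaps, so K = M*X
    and L = N*Y as in the text.
    """
    m, local_row = divmod(row, X)
    n, local_col = divmod(col, Y)
    return (m, n), (local_row, local_col)
```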
As is discussed further below, an imager array in accordance with embodiments of the invention can include a single controller that can separately sequence and control each focal plane. Having a common controller and I/O circuitry can provide important system advantages including lowering the cost of the system due to the use of less silicon area, decreasing power consumption due to resource sharing and reduced system interconnects, simpler system integration due to the host system only communicating with a single controller rather than M×N controllers and read-out I/O paths, simpler array synchronization due to the use of a common controller, and improved system reliability due to the reduction in the number of interconnects.
3.2. Layout of Imagers
As is disclosed in PCT Publication WO 2009/151903 (incorporated by reference above), an imager array can include any N×M array of focal planes such as the imager array (200) illustrated in
The human eye is more sensitive to green light than to red and blue light; therefore, an increase in the resolution of an image synthesized from the low resolution image data captured by an imager array can be achieved using an array that includes more focal planes that sense green light than focal planes that sense red or blue light. A 5×5 imager array (210) including 17 focal planes that sense green light (G), four focal planes that sense red light (R), and four focal planes that sense blue light (B) is illustrated in
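A layout of the kind just described can be represented as a small grid and its band counts checked programmatically. The arrangement below is one plausible green-weighted placement, not the specific arrangement shown in the patent's figure:

```python
# One plausible 5x5 arrangement with 17 green, 4 red, and 4 blue focal
# planes (illustrative; the patent's figure shows its own arrangement).
LAYOUT = [
    ["G", "R", "G", "B", "G"],
    ["B", "G", "G", "G", "R"],
    ["G", "G", "G", "G", "G"],
    ["R", "G", "G", "G", "B"],
    ["G", "B", "G", "R", "G"],
]

# Count how many focal planes sense each capture band.
counts = {band: sum(row.count(band) for row in LAYOUT) for band in "GRB"}
```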
Additional imager array configurations are disclosed in U.S. patent application Ser. No. 12/952,106 entitled “Capturing and Processing of Images Using Monolithic Camera Array with Heterogeneous Imagers” to Venkataraman et al., the disclosure of which is incorporated by reference herein in its entirety.
Although specific imager array configurations are disclosed above, any of a variety of regular or irregular layouts of imagers including imagers that sense visible light, portions of the visible light spectrum, near-IR light, other portions of the spectrum and/or combinations of different portions of the spectrum can be utilized to capture images that provide one or more channels of information for use in SR processes in accordance with embodiments of the invention. The construction of the pixels of an imager in an imager array in accordance with an embodiment of the invention can depend upon the specific portions of the spectrum imaged by the imager. Different types of pixels that can be used in the focal planes of an imager array in accordance with embodiments of the invention are discussed below.
3.3. Pixel Design

Within an imager array that is designed for color or multi-spectral capture, each individual focal plane can be designated to capture a sub-band of the visible spectrum. Each focal plane can be optimized in various ways in accordance with embodiments of the invention based on the spectral band it is designated to capture. These optimizations are difficult to perform in a legacy Bayer pattern based image sensor since the pixels capturing their respective sub-band of the visible spectrum are all interleaved within the same pixel array. In many embodiments of the invention, backside illumination is used where the imager array is thinned to different depths depending upon the capture band of a specific focal plane. In a number of embodiments, the sizes of the pixels in the imager array are determined based upon the capture band of the specific imager. In several embodiments, the conversion gains, source follower gains, and full well capacities of groups of pixels within a focal plane are determined based upon the capture band of the pixels. The various ways in which pixels can vary between focal planes in an imager array depending upon the capture band of the pixel are discussed further below.
3.3.1. Backside Illuminated Imager Array with Optimized Thinning Depths
A traditional image sensor is illuminated from the front side where photons must first travel through a dielectric stack before finally arriving at the photodiode, which lies at the bottom of the dielectric stack in the silicon substrate. The dielectric stack exists to support metal interconnects within the device. Front side illumination suffers from intrinsically poor Quantum Efficiency (QE) performance (the ratio of generated carriers to incident photons), due to problems such as the light being blocked by metal structures within the pixel. Improvement is typically achieved through the deposition of micro-lens elements on top of the dielectric stack for each pixel so as to focus the incoming light in a cone that attempts to avoid the metal structures within the pixel.
Backside illumination is a technique employed in image sensor fabrication so as to improve the QE performance of imagers. In backside illumination (BSI), the silicon substrate bulk is thinned (usually with a chemical etch process) to allow photons to reach the depletion region of the photodiode through the backside of the silicon substrate. When light is incident on the backside of the substrate, the problem of aperturing by metal structures inherent in frontside illumination is avoided. However, the absorption depth of light in silicon is proportional to the wavelength, such that red photons penetrate much deeper than blue photons. If the thinning process does not remove sufficient silicon, the depletion region will be too deep to collect photo electrons generated from blue photons. If the thinning process removes too much silicon, the depletion region can be too shallow and red photons may travel straight through without interacting and generating carriers. Red photons could also be reflected from the front surface back and interact with incoming photons to create constructive and destructive interference due to minor differences in the thickness of the device. The effects caused by variations in the thickness of the device can be evident as fringing patterns and/or as a spiky spectral QE response.
In a conventional imager, a mosaic of color filters (typically a Bayer filter) is often used to provide RGB color capture. When a mosaic based color imager is thinned for BSI, the thinning depth is typically the same for all pixels since the processes used do not thin individual pixels to different depths. The common thinning depth of the pixels results in a necessary balancing of QE performance between blue wavelengths and red/near-IR wavelengths. An imager array in accordance with embodiments of the invention includes an array of imagers, where each pixel in a focal plane senses the same spectral wavelengths. Different focal planes can sense different sub-bands of the visible spectrum or indeed any sub-band of the electromagnetic spectrum for which the band-gap energy of silicon has a quantum yield gain greater than 0. Therefore, performance of an imager array can be improved by using BSI where the thinning depth for the pixels of a focal plane is chosen to match optimally the absorption depth corresponding to the wavelengths of light each pixel is designed to capture. In a number of embodiments, the silicon bulk material of the imager array is thinned to different thicknesses to match the absorption depth of each camera's capture band within the depletion region of the photodiode so as to maximize the QE.
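The matching of thinning depth to capture band can be illustrated with rough, order-of-magnitude 1/e absorption depths of light in silicon; the numeric values below are illustrative assumptions, not figures from the text:

```python
# Hypothetical illustration: choosing a per-focal-plane BSI thinning
# depth from approximate 1/e absorption depths in silicon. The depths
# are order-of-magnitude values for illustration only.

ABSORPTION_DEPTH_UM = {
    "blue":  0.4,   # ~450 nm, absorbed very near the surface
    "green": 1.0,   # ~550 nm
    "red":   3.0,   # ~650 nm, penetrates far deeper before absorption
}

def thinning_depth_um(band, margin=2.0):
    """Target substrate thickness for a focal plane's capture band.
    A margin beyond the 1/e depth keeps most generated carriers within
    reach of the photodiode depletion region."""
    return ABSORPTION_DEPTH_UM[band] * margin
```

The key point from the text survives the simplification: focal planes sensing longer wavelengths are thinned less aggressively (left thicker) than those sensing blue light.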
An imager array in which the silicon substrate is thinned to different depths in regions corresponding to focal planes (i.e. sub-arrays) that sense different spectral bandwidths in accordance with an embodiment of the invention is conceptually illustrated in
In many embodiments, the designation of color channels to each imager within the array is achieved via a first filtration of the incoming photons through a band-pass filter within the optical path of the photons to the photodiodes. In several embodiments, the thinning depth itself is used to create the designation of capture wavelengths, since the depletion region depth defines the spectral QE of each imager.
3.3.2. Optimization of Pixel Size
Additional SNR benefits can be achieved by changing the pixel sizes used in the imagers designated to capture each sub-band of the spectrum. As pixel sizes shrink, the effective QE of the pixel decreases since the ratio of photodiode depletion region area to pixel area decreases. Microlenses are typically used to attempt to compensate for this, and they become more important as the pixel size shrinks. Another detriment to pixel performance from pixel size reduction is increased noise. To attempt to maintain the balance of photo-active to read-out circuit area, in many embodiments, the pixel transfer gate, source follower amplifier transistor and reset transistors are also made smaller. As these transistors reduce in size, numerous performance parameters are degraded, typically resulting in increased noise.
Electrical “cross-talk” also increases as a function of reduced pixel-to-pixel spacing. Long wavelength photons penetrate deeper into the substrate before interacting with the silicon to create a charge carrier. These charge carriers wander in a somewhat random fashion before resurfacing and being collected in a photodiode depletion region. This “circle” of probable resurfacing and collection increases as a function of generation depth. Thus the smaller the pixels become, the greater the number of pixels the circle of probable resurfacing covers. This effect results in a degradation of the Modulation Transfer Function (MTF) with increasing photon wavelength.
Imagers designated to capture longer wavelengths can therefore be optimized to improve system SNR by increasing the pixel size and thus increasing the QE of the pixel. Since MTF drops as a function of increased wavelength, the benefit of smaller pixels for resolution purposes is diminished with increased wavelength. Overall system resolution can thus be maintained while increasing the pixel size for longer wavelengths so as to improve QE and thus improve the overall system SNR.
In many embodiments, imager arrays in accordance with embodiments of the invention utilize pixels as small as can be manufactured. Accordingly, increasing pixel size in the manner outlined above is simply one technique that can be utilized to improve camera performance, and the specific pixel size chosen typically depends upon the specific application.
3.3.3. Imager Optimization

The push for smaller and smaller pixels has encouraged pixel designers to re-architect the pixels such that they share read-out circuits within a neighborhood. For example, a group of four photodiodes may share the same reset transistor, floating diffusion node and source follower amplifier transistors. When the four pixels are arranged in a Bayer pattern arrangement, the group of four pixels covers the full visible spectrum of capture. In imager arrays in accordance with embodiments of the invention, these shared pixel structures can be adapted to tailor the performance of pixels in a focal plane to a given capture band. The fact that these structures are shared by pixels that have different capture bands in a traditional color filter array based image sensor means that the same techniques for achieving performance improvements are typically not feasible. The improvement of the performance of pixels in a focal plane by selection of conversion gain, source follower gain, and full well capacity based upon the capture band of the pixels is discussed below. Although the discussion that follows is with reference to 4T CMOS pixels, similar improvements to pixel performance can be achieved in any imager array in which pixels share circuitry in accordance with embodiments of the invention.
3.3.3.1. Optimization of Conversion Gain

The performance of imagers within an imager array that are intended to capture specific sub-bands of the spectrum can be improved by utilizing pixels with different conversion gains tailored for each of the different capture bands. Conversion gain in a typical 4T CMOS pixel can be controlled by changing the size of the capacitance of the “sense node”, typically a floating diffusion capacitor (FD). The charge to voltage conversion follows the equation V=Q/C where Q is the charge, C is the capacitance and V is the voltage. Thus the smaller the capacitance, the higher the voltage resulting from a given charge and hence the higher the charge-to-voltage conversion gain of the pixel. The conversion gain obviously cannot be increased infinitely, however. The apparent full well capacity of the pixel (the number of photo-electrons the pixel can record) will decrease if the capacitance of the FD becomes too small. This is because the electrons from the photodiode transfer into the FD due to a potential difference acting on them. Charge transfer will stop when the potential difference is zero (or a potential barrier exists between the photodiode and the FD). Thus if the capacitance of the FD is too small, the potential equilibrium may be reached before all electrons have been transferred out of the photodiode.
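The V=Q/C relationship can be made concrete with a small worked example; the floating diffusion capacitance used below is an illustrative assumption, not a value from the text:

```python
# Worked example of the V = Q/C charge-to-voltage conversion described
# above (illustrative values only).

Q_E = 1.602e-19   # electron charge in coulombs

def conversion_gain_uV_per_e(fd_capacitance_farads):
    """Conversion gain in microvolts per electron for a given FD
    capacitance: one electron's charge divided by C."""
    return Q_E / fd_capacitance_farads * 1e6

# A 1.6 fF floating diffusion yields roughly 100 uV per electron;
# halving the capacitance doubles the conversion gain.
cg = conversion_gain_uV_per_e(1.6e-15)
```

This also illustrates the trade-off in the text: shrinking C raises the gain, but a fixed voltage range then corresponds to fewer electrons, reducing the apparent full well capacity.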
3.3.3.2. Optimization of Source Follower Gain

Additional performance gains can be achieved by changing the characteristics of the amplifiers in each pixel within a focal plane. The amplifier in a traditional 4T CMOS pixel is constructed from a Source Follower transistor. The Source Follower transistor amplifies the voltage across the FD so as to drive the pixel signal down the column line to the column circuit where the signal is subsequently sampled.
The output voltage swing as a function of the input voltage swing (i.e. the Source Follower amplifier's gain) can be controlled during fabrication by changing the implant doping levels. Given the pixel photodiode's full well capacity (in electrons) and the capacitance of the FD, a range of voltages is established at the input of the Source Follower transistor by the relationship Vin=Vrst−Q/C, where Vrst is the reset voltage of the FD, Q is the charge of the electrons transferred to the FD from the photodiode and C is the capacitance of the FD.
The photodiode is a pinned structure such that the range of charge that may be accumulated is between 0 electrons and the full well capacity. Therefore, with a given full well capacity of the photodiode and a given capacitance of the FD and a desired output signal swing of the source follower, the optimal gain or a near optimal gain for the source follower transistor can be selected.
3.3.3.3. Optimization of Full Well Capacity

Another optimization that can be performed is changing the full well capacity of the photodiodes. The full well capacity of the photodiode is the maximum number of electrons the photodiode can store in its maximally depleted state. The full well capacity of the pixels can be controlled through the x-y size of the photodiode, the doping levels of the implants that form the diode structure and the voltage used to reset the pixel.
3.3.3.4. Three Parameter Optimization

As can be seen in the previous sections, there are three main characteristics that can be tuned in order to configure pixels within a focal plane that have the same capture band for improved imaging performance. The optimal solution for all three parameters is dependent on the targeted behavior of a particular focal plane. Each focal plane can be tailored to the spectral band it is configured to capture. While the design of the pixel can be optimized, in many embodiments the performance of the pixels is simply improved with respect to a specific capture band (even though the improvement may not be optimal). An example optimization is as follows; similar processes can be used to simply improve the performance of a pixel with respect to a specific capture band:
a. Optimization of the Photodiode Full Well Capacity.
Given the speed of the optics and the transmittance of the color filters, it is possible to estimate the number of electrons that will be generated given a minimum integration time (e.g. 50 µs) for a given maximum spectral radiance. Each sub-band of the spectrum (color) will likely have a different number of electrons generated. The full well capacities of the photodiodes for each sub-band (color) can be chosen such that the maximum radiance within that band under minimum integration times will fill the well. The means by which this target full well capacity is achieved could be through changing the x-y dimensions, changing the doping levels during diode fabrication, changing the reset voltage of the pixels or a combination of two or more of these parameters.
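A back-of-envelope version of this electron-count estimate is sketched below; every input value (photon flux, pixel area, QE, filter transmittance, integration time) is an illustrative assumption, not a figure from the text:

```python
# Hypothetical estimate of electrons generated in one pixel during a
# minimum integration time, used to size the photodiode full well.
# All numbers are illustrative assumptions.

def generated_electrons(photon_flux_per_um2_per_s, pixel_area_um2,
                        qe, filter_transmittance, integration_s):
    """Photons arriving on the pixel, attenuated by the band-pass
    filter, converted to carriers at the quantum efficiency."""
    return (photon_flux_per_um2_per_s * pixel_area_um2
            * qe * filter_transmittance * integration_s)

# e.g. 1e9 photons/um^2/s at maximum band radiance, a 2.25 um^2 pixel,
# QE of 0.5, filter transmittance of 0.9, 50 us minimum integration:
n_e = generated_electrons(1e9, 2.25, 0.5, 0.9, 50e-6)
# The full well would then be chosen so this maximum just fills the well.
```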
b. Optimization of Conversion Gain
The next step is to optimize the conversion gain of the pixels. Given the number of electrons defined in the full well optimization step, an optimal capacitance for the floating diffusion can be chosen. The optimal capacitance is one which maintains a potential difference supporting charge transfer into the FD, such that the full well capacity can be transferred in a reasonable duration of time. The goal of this optimization is to choose the smallest capacitance possible, so that the charge to voltage conversion gain is as high as possible, the input referred noise is minimized, and hence the maximum SNR for each color channel is realized.
c. Optimization of Source Follower Gain
Once the optimal full-well capacity and charge to voltage conversion gain are determined, the source follower amplifier gain can be chosen. The difference between the reset voltage of the FD (Vrst) and the voltage of the FD containing a full well charge load (Vrst−Q/C) enables the definition of an optimal gain for the source follower amplifier. The source follower gain defines the output signal swing between Vrst and Vrst−Q/C. The optimal signal swing is defined by such parameters as the operating voltage of the analog signal processing and the A/D converter that sample and convert the pixel output signal. The source follower gain is chosen for each color channel such that their respective signal swings are all matched to each other and match the maximum signal swing supported by the analog signal processing and A/D converter circuits.
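The chain of the three steps above (full well determines the charge, the FD capacitance sets the input swing Q/C, and the source follower gain maps that swing onto the ADC range) can be sketched as follows; all numeric values are hypothetical:

```python
# Sketch of the three-parameter chain described above, with
# illustrative values: full well -> FD swing -> source follower gain.

Q_E = 1.602e-19   # electron charge in coulombs

def fd_swing_volts(full_well_e, c_fd_farads):
    """Input swing at the source follower gate. The swing between
    reset (Vrst) and full well (Vrst - Q/C) is simply Q/C."""
    return full_well_e * Q_E / c_fd_farads

def source_follower_gain(adc_swing_volts, full_well_e, c_fd_farads):
    """Gain that maps the FD swing onto the full ADC input range, so
    that the signal swings of all channels can be matched."""
    return adc_swing_volts / fd_swing_volts(full_well_e, c_fd_farads)

# A 10,000 e- full well on a 2 fF FD gives ~0.8 V of input swing;
# matching a 1.0 V ADC range then implies a gain of ~1.25.
g = source_follower_gain(1.0, 10_000, 2e-15)
```

In practice a source follower has gain below unity, so the matching described in the text would more likely be achieved by adjusting the full well and FD capacitance until the required gain is realizable; the sketch only shows the arithmetic of the matching condition.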
Having performed these pixel level optimizations on a per capture band basis, the system will have the maximum SNR and dynamic range for each capture band given linear operation. Although the process described above is designed to provide an optimal solution with regard to maximum SNR and dynamic range, other design criteria can be used in the selection of the three parameters described above to provide improved pixel performance with respect to a specific capture band or application specific desired behavior.
3.3.4. Dynamic Range Tailoring

Further optimizations of imager arrays can be achieved by using pixels of different conversion gains within the same spectral band. For example, the “green” imagers could be constructed from pixels that have two or more different conversion gains. Therefore, each “green” imager includes pixels that have a homogeneous conversion gain, which is different from the conversion gain of pixels in another of the “green” imagers in the array. Alternatively, each imager could be constructed from a mosaic of pixels having different conversion gains.
As mentioned previously, as the conversion gain increases beyond a certain threshold, the input referred noise continues to decrease but at the expense of effective full well capacity. This effect can be exploited to yield a system having a higher dynamic range. For example, half of all “green” focal planes could be constructed using a conversion gain that optimizes both input referred noise and full well capacity (a “normal green”). The other half of all “green” focal planes could be constructed from pixels that have a higher conversion gain, hence lower input referred noise and lower effective full well capacity (“fast green”). Areas of a scene having a lower light level could be recovered from the “fast green” pixels (that are not saturated) and areas of brighter light level could be recovered from the “normal green” pixels. The result is an overall increase in dynamic range of the system. Although a specific 50/50 allocation of focal planes between “fast green” and “normal green” is discussed above, the number of focal planes dedicated to “fast” imaging and the number of focal planes dedicated to “normal” imaging is entirely dependent upon the requirements of a specific application. In addition, separate focal planes dedicated to “fast” and “normal” imaging can be utilized to increase the dynamic range of other spectral bands; the technique is not limited to increasing the dynamic range with which an imager array captures green light.
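One plausible way to combine co-sited “normal” and “fast” samples into a single higher dynamic range estimate is sketched below; the full well value, saturation threshold, and electron-domain model are assumptions made for illustration:

```python
# Hypothetical sketch of "normal"/"fast" green combination for higher
# dynamic range. Signals are modeled in electrons for simplicity; the
# full well and threshold values are illustrative assumptions.

FAST_FULL_WELL_E = 4000   # "fast" pixels clip earlier (lower full well)

def combine(normal_e, fast_e, fast_full_well=FAST_FULL_WELL_E):
    """Dark regions: trust the low-noise "fast" sample. Bright regions
    where the "fast" pixel has saturated: fall back to "normal"."""
    if fast_e < 0.95 * fast_full_well:   # fast pixel not saturated
        return fast_e
    return normal_e
```

Real systems would also need to match the radiometric scales of the two channels and blend near the saturation threshold; the sketch shows only the selection logic that yields the dynamic range extension described above.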
A similar effect could be achieved by controlling the integration time of the “fast” and “normal” green sub-arrays such that the “fast” pixels integrate for longer. However, in a non-stationary scene, this could result in motion artifacts, since the “fast” pixels would integrate the scene motion for longer than the “normal” pixels, creating an apparent spatial disparity between the two green channels, which may be undesirable.
3.4. Peripheral Circuitry

In a conventional imager, pixels are typically accessed in a row-wise fashion using horizontal control lines that run across each row of pixels. Output signal lines that run vertically through each pixel are used to connect the pixel output to a sampling circuit at the column periphery. The horizontal control lines and the output signal lines are typically implemented as metal traces on silicon. The outputs from all pixels in a row are simultaneously sampled at the column periphery, and scanned out sequentially using column controllers. However, common row-wise access along the full row of K pixels in an imager array does not enable the imagers to be read out independently. As noted above, many of the benefits of utilizing an imager array derive from the independence of the focal planes and the ability for the imager array to separately control the capture of image information by the pixels in each focal plane. The ability to separately control the capture of information means that the capture of image information by the pixels in a focal plane can be customized to the spectral band the focal plane is configured to capture. In a number of embodiments, the ability to provide separate trigger times can be useful in synchronizing the capture of images using focal planes that have different integration times and in capturing sequences of images that can be registered to provide slow motion video sequences. In order to control the capture of image information by different focal planes within an imager array, independent read-out control can be provided for each focal plane. In several embodiments, the imager array has independent read-out control due to the fact that each focal plane has an associated row (column) controller, column (row) read-out circuits and a dedicated pixel signal analog processor and digitizer.
In many embodiments, separate control of the capture of image information by pixels in different focal planes is achieved using peripheral circuitry that is shared between focal planes. Imager arrays implemented using dedicated peripheral circuitry and shared peripheral circuitry in accordance with embodiments of the invention are discussed below.
3.4.1. Dedicated Peripheral Circuitry

An imager array including multiple focal planes having independent read-out control and pixel digitization, where each focal plane has dedicated peripheral circuitry, in accordance with embodiments of the invention is illustrated in
In a device including M×N focal planes, the read-out control logic includes M sets of column control outputs per row of focal planes (N). Each column sampling/read-out circuit 308 can also have dedicated sampling circuitry for converting the captured image information into digital pixel data. In many embodiments, the sampling circuitry includes an Analog Signal Processor (ASP), which includes an Analog Front End (AFE) amplifier circuit and an Analog to Digital Converter (ADC) 310. In other embodiments, any of a variety of analog circuitry can be utilized to convert captured image information into digitized pixel information. An ASP can be implemented in a number of ways, including but not limited to, as a single ASP operating at X pixel conversions per row period, where X is the number of pixels in a row of the focal plane served by the column sampling circuit (e.g. with a pipe-lined or SAR ADC), as X ASPs operating in parallel at 1 pixel conversion per row period, or as P ASPs operating in parallel at X/P conversions per row (see discussion below). A common read-out control circuit 312 controls the read-out of the columns in each imager.
In the illustrated embodiment, the master control logic circuitry 314 controls the independent read-out of each imager. The master control logic circuitry 314 includes high level timing control logic circuitry to control the image capture and read-out process of the individual focal plane. In a number of embodiments, the master control portion of this block can implement features including but not limited to: staggering the start points of image read-out such that each focal plane has a controlled temporal offset with respect to a global reference; controlling integration times of the pixels within specific focal planes to provide integration times specific to the spectral bandwidths being imaged; the horizontal and vertical read-out direction of each imager; the horizontal and vertical sub-sampling/binning/windowing of the pixels within each focal plane; the frame/row/pixel rate of each focal plane; and the power-down state control of each focal plane.
The master control logic circuitry 314 handles collection of pixel data from each of the imagers. In a number of embodiments, the master control logic circuitry packs the image data into a structured output format. Given that fewer than M×N output ports are used to output the image data (e.g. there are 2 output ports), the imager data is time multiplexed onto these output ports. In a number of embodiments, a small amount of memory (FIFO) is used to buffer the data from the pixels of the imagers until the next available time-slot on the output port 316. The master control logic circuitry 314 or other circuitry in the imager array periodically inserts codes into the data stream providing information including, but not limited to, information identifying a focal plane, information identifying a row and/or column within a focal plane, and/or information identifying the relative time at which the capture or read-out process began/ended for one or more of the focal planes. Relative time information can be derived from an on-chip timer or counter, whose instantaneous value can be captured at the start/end of read-out of the pixels from each imager either at a frame rate or a line rate. Additional codes can also be added to the data output so as to indicate operating parameters such as (but not limited to) the integration time of each focal plane, and channel gain. As is discussed further below, the host controller can fully re-assemble the data stream back into the individual images captured by each focal plane. In several embodiments, the imager array includes sufficient storage to buffer at least a complete row of image data from all focal planes so as to support reordering and/or retiming of the image data from all focal planes such that the data is always packaged with the same timing/ordering arrangement regardless of operating parameters such as (but not limited to) integration time and relative read-out positions.
In a number of embodiments, the imager array includes sufficient storage to buffer at least a complete line of image data from all focal planes so as to support reordering and/or retiming of the image data from all focal planes such that the data is packaged in a convenient manner to ease the host's reconstruction of the image data, for example retiming/reordering the image data to align the data from all focal planes to a uniform row start position for all focal planes irrespective of relative read-out position.
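The time-multiplexed packing with identifying codes described above might be sketched as follows; the header format shown is hypothetical and far simpler than a real output protocol:

```python
# Illustrative sketch of time-multiplexing row data from several focal
# planes onto one output stream with small header codes so the host can
# re-assemble the images. The format is hypothetical.

def pack_rows(rows):
    """rows: list of (focal_plane_id, row_index, pixel_list).
    Emits a header code before each row's pixel data."""
    stream = []
    for fp, row, pixels in rows:
        stream.append(("HDR", fp, row, len(pixels)))  # identifying code
        stream.extend(pixels)
    return stream

def unpack_rows(stream):
    """Host-side re-assembly: walk the stream, using each header to
    recover which focal plane and row the following pixels belong to."""
    out, i = [], 0
    while i < len(stream):
        _, fp, row, n = stream[i]
        out.append((fp, row, stream[i + 1:i + 1 + n]))
        i += 1 + n
    return out
```

A real implementation would also carry the timing and operating-parameter codes mentioned in the text (integration time, channel gain, relative capture time), but the round-trip structure is the same.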
3.4.2. ASP Sharing

The imager array illustrated in
Although the imager array illustrated in
Sharing of ADCs between pairs of focal planes in an imager array in accordance with embodiments of the invention is illustrated in
In many embodiments, the pixels within each focal plane are consistently processed through the same circuit elements at all times such that they have consistent offset and gain characteristics. In many embodiments, the control and read-out circuits and AFE are controlled by a common clocking circuit such that the phases and time slot assignment of each focal plane are consistent. An example of the phase shift between the column read-out of the different focal planes in accordance with embodiments of the invention is illustrated in
In order to support a reduction of power when certain focal planes are not imaging, the ASP, clocking, and bias/current schemes utilized within the imager array can support multiple sample rate configurations such that the sampling rate is always P times the pixel rate of a single focal plane, where P is the number of active focal planes being processed/sampled.
A rotated variation of the resource sharing architecture illustrated in
In another embodiment of the invention, fewer than M*N column circuits are used for sampling the pixel values of the focal planes in an imager array. An imager array 301 configured so that individual focal planes within a column of the imager array share a common column circuit block 308′ such that the device utilizes only M sets of column circuits in accordance with an embodiment of the invention is illustrated in
In several embodiments, the column circuits are time shared such that they enable read-out of pixels from focal planes above and below the column circuit. Sharing of a column circuit between pairs of focal planes within an imager array in accordance with embodiments of the invention is illustrated in
In other embodiments, variations on the imager array 301 illustrated in
A structural modification to the embodiment illustrated in
There are a number of different arrangements for the column sampling circuitry of imager arrays in accordance with embodiments of the invention. Often, the arrangement of the ASP circuitry follows a logical implementation of the column sampling circuits such that a single ASP is used per column circuit covering X pixels thus performing X conversions per row period. Alternatively, X ASPs can be utilized per column circuit performing one conversion per row period. In a general sense, embodiments of the invention can use P ASPs per column circuit of X pixels such that there are X/P conversions per row period. This approach is a means by which the conversion of the samples in any column circuit can be parallelized such that the overall ADC conversion process occurs at a slower rate. For example, in any of the configurations described herein it would be possible to take a column circuit arrangement that samples a number of pixels (T) and performs the analog-to-digital conversion using P ASPs, such that there are T/P conversions per row period. Given a fixed row period (as is the case with a fixed frame rate) the individual conversion rate of each ASP is reduced by the factor P. For example, if there are two ASPs, each runs at ½ the rate. If there are four, each ASP has to run at ¼ the rate. In this general sense, any number of ASPs running at a rate appropriate to a specific application irrespective of the configuration of the column circuitry can be utilized in accordance with embodiments of the invention.
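The trade-off described above, P ASPs each performing X/P conversions per row period, reduces to simple arithmetic:

```python
# Arithmetic from the paragraph above: with P ASPs sharing a column
# circuit that must convert X samples per row period, each ASP performs
# X/P conversions, so its individual conversion rate drops by factor P.

def conversions_per_asp(samples_per_row, num_asps):
    assert samples_per_row % num_asps == 0, "assume P divides X evenly"
    return samples_per_row // num_asps

# e.g. 400 samples per row period: 1 ASP performs 400 conversions,
# 2 ASPs perform 200 each, 4 ASPs perform 100 each.
```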
3.4.5. Row Decoder Optimization

Imager arrays in accordance with embodiments of the invention possess the ability to access different rows within each focal plane at a given instant so as to enable separate operating parameters with respect to the capture of image information by the pixels of each focal plane. The row decoder is typically formed from a first combinational decode of a physical address (represented as an E bit binary number) to as many as 2^E “enable” signals (often referred to as a “one-hot” representation). For example, an 8 bit physical address is decoded into 256 “enable” signals so as to support addressing into a pixel array having 256 rows of pixels. Each of these “enable” signals is in turn logically ANDed with pixel timing signals, the results of which are then applied to the pixel array so as to enable row based pixel operations such as pixel reset and pixel charge transfer.
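A minimal software model of the binary-to-one-hot decode and the per-row AND with pixel timing signals (the structure is assumed for illustration; real decoders are combinational logic, not software):

```python
# Model of the row decode described above: an E-bit address selects one
# of 2^E row "enable" signals; each enable is then ANDed with the pixel
# timing signals to drive row operations (reset, charge transfer, ...).

def one_hot(address, bits):
    """Decode an E-bit binary address into a 2^E-entry enable vector
    with exactly one bit set."""
    return [1 if i == address else 0 for i in range(1 << bits)]

def row_controls(address, bits, timing_signals):
    """Per-row control signals: each row's enable ANDed with the focal
    plane's dedicated pixel timing signals. Only the addressed row
    receives active timing."""
    return [[en & t for t in timing_signals]
            for en in one_hot(address, bits)]
```

In the shared scheme described next, one such one-hot decoder serves a whole row of focal planes, while the AND with each focal plane's dedicated timing signals remains local to that focal plane's (reduced) row decoder.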
The row decoders can be optimized to reduce silicon area through sharing of the binary to one-hot decode logic. Rather than each sub-array having a fully functional row decoder, including binary to one-hot decoding, many embodiments of the invention have a single binary to one-hot decoder for a given row of focal planes within the imager array. The “enable” outputs of this decoder are routed across all focal planes to each of the (now less functional) row decoders of each focal plane. Separate sets of pixel level timing signals would be dedicated to each focal plane (generated by the row timing and control logic circuitry) and the logical AND function would remain in each focal plane's row decoder.
Readout with such a scheme would be performed in time slots dedicated to each focal plane such that there are M timeslots per row of focal planes in the camera array. A first row within the first focal plane would be selected and the dedicated set of pixel level timing signals would be applied to its row decoder and the column circuit would sample these pixels. In the next time slot the physical address would change to point to the desired row in the next focal plane and another set of dedicated pixel level timing signals would be applied to its row decoder. Again, the column circuits would sample these pixels. The process would repeat until all focal planes within a row of focal planes in the camera array have been sampled. When the column circuits are available to sample another row from the imager array, the process can begin again.
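The timeslot-based readout sequence above can be sketched as a simple loop. This Python model is purely illustrative (the function name and the sampling callback are hypothetical): each of the M timeslots applies one focal plane's row address, and the shared column circuits sample that focal plane's row before the next timeslot begins.

```python
# Illustrative sketch of time-multiplexed readout across a row of M focal
# planes sharing one binary-to-one-hot decoder and one set of column circuits.
def readout_row_of_focal_planes(row_addresses, sample_fn):
    """row_addresses: the row to read in each focal plane this pass.
    sample_fn(plane_idx, row): models the column circuits sampling a row."""
    samples = []
    for timeslot, row in enumerate(row_addresses):
        # In each timeslot the physical address points at one focal plane's
        # desired row and that plane's dedicated timing signals are applied.
        samples.append(sample_fn(timeslot, row))
    return samples

# Four focal planes in one row of the array, each reading a different row index.
out = readout_row_of_focal_planes([10, 12, 10, 11],
                                  lambda plane, row: (plane, row))
assert out == [(0, 10), (1, 12), (2, 10), (3, 11)]
```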
3.5. Providing a Memory Structure to Store Image Data

An additional benefit of the separate control of the capture of image information by each focal plane in an imager array is the ability to support slow motion video capture without increasing the frame rate of the individual focal planes. In slow motion video, each focal plane is read out at a slightly offset point in time. In a traditional camera, the time delta between frames (i.e. the capture frame rate) is dictated by the read-out time of a single frame. In an imager array supporting independent read-out of the individual focal planes, the delta between frames can be less than the read-out time of an individual frame. For example, one focal plane can begin its frame read-out when another focal plane is halfway through the read-out of its frame. Therefore, an apparent doubling of the capture rate is achieved without requiring the focal planes to operate at double speed. However, when outputting the stream of images from the camera, this overlapping frame read-out from all focal planes means that there is continuous imagery to output.
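The staggered-start timing above can be illustrated numerically. The following Python sketch is purely hypothetical (the function name is not from the specification): with K focal planes whose frame starts are evenly offset within one frame read-out time, frames become available K times as often, even though no individual plane reads out any faster.

```python
# Illustrative sketch: staggered frame-start times for slow motion capture.
def staggered_starts(frame_readout_s: float, num_planes: int, num_frames: int):
    """Return the global start times of every frame across all focal planes,
    with each plane's read-out offset by an equal fraction of a frame."""
    offset = frame_readout_s / num_planes
    starts = [p * offset + f * frame_readout_s
              for f in range(num_frames) for p in range(num_planes)]
    return sorted(starts)

# Two planes, 10 ms read-out: a frame begins every 5 ms - an apparent
# doubling of the capture rate with no plane operating at double speed.
starts = staggered_starts(0.010, 2, 2)
deltas = [round(b - a, 6) for a, b in zip(starts, starts[1:])]
assert deltas == [0.005, 0.005, 0.005]
```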
Camera systems typically employ a period of time between read-out or display of image data known as the blanking period. Many systems require this blanking period in order to perform additional operations. For example, in a CRT the blanking interval is used to reposition the electron beam from the end of a line or frame to the beginning of the next line or frame. In an imager there are typically blanking intervals between lines to allow the next line of pixels to be addressed and the charge therein sampled by a sampling circuit. There can also be blanking intervals between frames to allow a longer integration time than the frame read-out time.
For an array camera operating in slow motion capture mode in accordance with an embodiment of the invention, the frame read-out is offset in time in all the focal planes such that all focal planes will enter their blanking intervals at different points in time. Therefore, there typically will not be a point in time where there is no image data to transmit. Array cameras in accordance with embodiments of the invention can include a retiming FIFO memory in the read-out path of the image data such that an artificial blanking period can be introduced during transmission. The retiming FIFO temporarily stores the image data to be transmitted from all the focal planes during the points in time where a blanking interval is introduced.
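The role of the retiming FIFO can be sketched in software. This Python model is purely illustrative (the class and method names are hypothetical, not from the specification): image data arriving continuously from the overlapped focal plane read-outs is buffered, transmission emits nothing during an artificially inserted blanking interval, and buffered data is then transmitted in order with no loss.

```python
# Illustrative sketch of a retiming FIFO in the image data read-out path.
from collections import deque

class RetimingFIFO:
    def __init__(self):
        self._buf = deque()

    def write(self, line):
        """Image data arrives continuously from the focal planes."""
        self._buf.append(line)

    def read(self, blanking: bool):
        """Return the next line to transmit, or None while an artificial
        blanking interval is being inserted into the output stream."""
        if blanking or not self._buf:
            return None  # nothing transmitted during the blanking interval
        return self._buf.popleft()

fifo = RetimingFIFO()
for line in ("L0", "L1", "L2"):
    fifo.write(line)
assert fifo.read(blanking=True) is None   # blanking: data held in the FIFO
assert fifo.read(blanking=False) == "L0"  # transmission resumes in order
```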
3.6. Imager Array Floor Plan

Imager arrays in accordance with embodiments of the invention can include floor plans that are optimized to minimize silicon area within the bounds of certain design constraints. Such design constraints include those imposed by the optical system. The sub-arrays of pixels forming each focal plane can be placed within the image circle of each individual lens stack of the lens array positioned above the imager array. Therefore, the manufacturing process of the lens elements typically imposes a minimum spacing distance on the imagers (i.e. a minimum pitch between the focal planes). Another optical constraint on the focal plane spacing is the magnitude of stray light that can be tolerated. In order to limit optical cross-talk between focal planes, many camera arrays in accordance with embodiments of the invention optically isolate the individual focal planes from each other. An opaque barrier can be created between the optical paths of adjacent focal planes within the lens stack. The opaque barrier extends down to the sensor cover-glass and can serve the additional purposes of providing a sensor-to-optics bonding surface and a back focus spacer. The incursion of the opaque shield into the imaging circle of the lens can result in some level of reflection back into the focal plane. In many embodiments, the complex interplay between the optics and the imager array results in the use of an iterative process to converge to an appropriate solution balancing the design constraints of a specific application.
The space between the focal planes (i.e. the spacing distance) can be used to implement control circuitry as well as sampling circuitry including (but not limited to) ASP circuits or other circuitry utilized during the operation of the imager array. The logic circuits within the imager array can also be broken up and implemented within the spacing distance between adjacent focal planes using automatic place and routing techniques.
Although specific constraints upon the floor plans of imager arrays are described above, additional constraints can be placed upon floor plans that enable the implementation of the various logic circuits of the imager array in different areas of the device in accordance with embodiments of the invention. In many embodiments, requirements such as pixel size/performance, the optical system of the array camera, the silicon real-estate cost, and the manufacturing process used to fabricate the imager array can all drive subtle variations in the imager array overall architecture and floor plan.
3.6.1. Sampling Diversity

In many embodiments, the floor plan also accommodates focal planes arranged to yield a preferred sampling diversity of the scene (i.e. the pixels within one focal plane collect light from a slightly shifted field of view with respect to other focal planes within the imager array). This can be achieved through a variety of techniques. In several embodiments, sampling diversity is achieved by constructing the imager array so that the focal planes are offset from the centers of their respective optical paths by different subpixel amounts through a relative subpixel shift in alignment between the focal planes and their respective lenses. In many embodiments, the optical fields of view are "aimed" slightly differently by an angle that corresponds to a subpixel shift in the image (an amount less than the solid angle corresponding to a single pixel). In a number of embodiments, slight microlens shifts between the focal planes are utilized to alter the particular solid angle of light captured by the microlens (which redirects the light to the pixel), thus achieving a slight subpixel shift. In certain embodiments, the focal planes are constructed with pixels having subtle differences in pixel pitch between focal planes such that sampling diversity is provided irrespective of optical alignment tolerances. For example, a 4×4 imager array can be constructed with focal planes having pixels with length and width dimensions of 2.0 um, 2.05 um, 2.1 um, 2.15 um and 2.2 um. In other embodiments, any of a variety of pixel dimensions and/or techniques for improving sampling diversity amongst the focal planes within the imager array can be utilized as appropriate to a specific application.
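The pixel-pitch approach to sampling diversity can be illustrated numerically. This Python sketch is purely illustrative (the function name is hypothetical): two focal planes whose pixel pitches differ by 0.05 um sample scene positions that drift apart by a sub-pixel amount with every pixel, so the planes never sample identical points regardless of alignment tolerances.

```python
# Illustrative sketch: sampling diversity arising from slightly different
# pixel pitches between focal planes.
def sample_positions(pitch_um: float, num_pixels: int) -> list:
    """Centre positions (um) of a row of pixels with the given pitch."""
    return [(i + 0.5) * pitch_um for i in range(num_pixels)]

a = sample_positions(2.00, 4)  # one focal plane's pitch
b = sample_positions(2.05, 4)  # a neighbouring focal plane's pitch
offsets = [round(y - x, 3) for x, y in zip(a, b)]
# The relative shift grows by 0.05 um (a fraction of a pixel) per pixel,
# guaranteeing sub-pixel sampling diversity across the field of view.
assert offsets == [0.025, 0.075, 0.125, 0.175]
```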
4. Focal Plane Timing and Control Circuitry

Referring back to
Traditional image sensors nominally employ two rolling address pointers into the pixel array, whose role is to indicate rows to receive pixel level charge transfer signals as well as “row select” signals for connecting a given row to the column lines enabling sampling of the sense node of the pixels. In many SOC image arrays in accordance with embodiments of the invention these two rolling address pointers are expanded to 2×M×N rolling address pointers. The pointer pairs for each focal plane can either address the same rows within each focal plane or can be offset from one another with respect to a global reference.
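The pointer-pair mechanism above lends itself to a brief sketch. This Python model is purely illustrative (the function name and the direction of the reset/read relationship are modeling assumptions, not from the specification): each focal plane carries its own pair of rolling pointers, the spacing between the pair sets the integration time in row periods, and each pair can be offset against a global reference.

```python
# Illustrative sketch: per-focal-plane rolling address pointer pairs.
def pointer_positions(global_row: int, num_rows: int,
                      integration_rows: int, plane_offset: int = 0):
    """Return (reset_row, read_row) for one focal plane at a global instant.
    The reset pointer leads the read pointer by `integration_rows`, so the
    pointer spacing sets that plane's integration time; `plane_offset`
    shifts the pair relative to the global row counter."""
    read_row = (global_row + plane_offset) % num_rows
    reset_row = (read_row + integration_rows) % num_rows
    return reset_row, read_row

# Two focal planes with different integration times and start offsets:
assert pointer_positions(100, 480, integration_rows=20) == (120, 100)
assert pointer_positions(100, 480, integration_rows=50, plane_offset=240) == (390, 340)
```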
Focal plane timing and control address pointer circuitry in accordance with an embodiment of the invention is illustrated in
The system power management and bias generation circuitry is configured to provide current and/or voltage references to analog circuitry, such as (but not limited to) the reference voltages against which an ADC measures the signal to be converted. In addition, system power management and bias generation circuitry in accordance with many embodiments of the invention can turn off the current/voltage references to certain circuits when they are not in use for power saving reasons. Additional power management techniques that can be implemented using power management circuitry in accordance with embodiments of the invention are discussed below.
5.1. Power Optimization

The master control block of an imager array in accordance with embodiments of the invention can manage the power consumption of the imager array. In many embodiments, the master control block reduces power consumption by "turning off" certain focal planes during modes of operation where the desired output resolution is less than the full performance of the device. In such modes, amplifiers, bias generators, ADCs and other clocked circuits associated with the focal planes that are not used are placed in a lower power state to minimize or eliminate static and dynamic power draw.
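The per-focal-plane power gating above can be sketched as a small state model. This Python class is purely illustrative (the class, method names, and power figures are hypothetical, not from the specification): the master control block marks the focal planes required by the current output mode as active and treats all remaining planes, together with their associated circuits, as powered down.

```python
# Illustrative sketch: per-focal-plane power management by a master control
# block, gating unused planes (and their ASP/bias circuits) in low-res modes.
class FocalPlanePower:
    def __init__(self, num_planes: int):
        self.active = [True] * num_planes  # all planes powered at start-up

    def set_mode(self, planes_needed):
        """Power down every focal plane not required for the requested
        output resolution; keep only the listed planes active."""
        self.active = [i in planes_needed for i in range(len(self.active))]

    def dynamic_power(self, per_plane_mw: float) -> float:
        """Total dynamic draw of the active planes (inactive planes ~0)."""
        return sum(per_plane_mw for a in self.active if a)

pm = FocalPlanePower(16)
pm.set_mode({0, 1, 4, 5})  # e.g. only a 2x2 sub-array of planes is required
assert pm.dynamic_power(10.0) == 40.0
```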
5.1.1. Preventing Carrier Migration During Imager Power Down
Despite a focal plane being in a powered down state, light remains incident upon the pixels in its sub-array. Incident photons will continue to create charge carriers in the silicon substrate. If the pixels in a powered-down focal plane are left floating, the charge carriers will fill the pixel well and deplete the potential barrier, making it unable to trap any further carriers. Excess carriers created by the persistent photon flux will then be left to wander the substrate. If these excess carriers wander from an inactive focal plane into an active focal plane and collect in the well of a pixel in the active focal plane, they would be erroneously measured as photo-electrons generated within that pixel. The result can be the appearance of blooming around the periphery of the active imager caused by the tide of free carriers migrating into the active focal plane from its inactive neighbors.
To mitigate the migration of excess carriers from inactive focal planes, the photodiodes in the pixels of an inactive focal plane are connected to the power supply via transistor switches within each pixel such that the pixel well is held open to its maximum electrical potential. Holding the well open enables the photodiode to constantly collect carriers generated by the incident light and thus reduces the problem of carrier migration from an inactive imager. The transistors in each pixel are part of the normal pixel architecture (i.e. the transfer gate), and it is the master control logic, along with the row controllers, that signals the transistors to hold the wells open.
5.1.2. Standby Mode

In many embodiments, reference pixels are used in the calibration of dark current and FPN. In several embodiments, the power management circuitry is configured to enable the powering down of the pixels in a focal plane in such a way that the reference pixels remain active. In several embodiments, this is achieved by powering the ASP during the readout of reference pixels but otherwise maintaining the ASP in a low power mode. In this way, the focal plane can be activated more rapidly by reducing the need to calibrate dark current and FPN when the focal plane is woken up. In many instances, where the reference pixels are powered down during the low power state of the focal plane, calibration with respect to dark current and FPN is performed when the focal plane is reactivated. In other embodiments, any of a variety of partial powering schemes can be utilized to reduce the current drawn by a focal plane and its associated peripheral circuitry in accordance with embodiments of the invention.
6. Focal Plane Data Collation and Framing Logic

Referring again to
Although specific imager array architectures are described above, alternative imager array architectures can be used to implement imager arrays based upon requirements including (but not limited to) pixel size/performance, the optical system of the array camera, the silicon real-estate cost, and the manufacturing process used to fabricate the imager array in accordance with embodiments of the invention. In addition, imager arrays in accordance with embodiments of the invention can be implemented using any of a variety of pixel shapes including (but not limited to) square pixels, rectangular pixels, and hexagonal pixels. Accordingly, the scope of the invention should be determined not by the embodiments illustrated, but by the appended claims and their equivalents.
Claims
1. An imager array, comprising:
- a plurality of focal planes, where each focal plane comprises a two dimensional arrangement of pixels having at least two pixels in each dimension and each focal plane is contained within a region of the imager array that does not contain pixels from another focal plane;
- control circuitry configured to control the capture of image information by the pixels within the focal planes, where the control circuitry is configured so that the capture of image information by the pixels in at least two of the focal planes is separately controllable; and
- sampling circuitry configured to convert pixel outputs into digital pixel data.
2. The imager array of claim 1, wherein the plurality of focal planes is arranged as a two dimensional array of focal planes having at least three focal planes in one dimension.
3. The imager array of claim 1, wherein the plurality of focal planes is arranged as a two dimensional array of focal planes having at least three focal planes in both dimensions.
4. The imager array of claim 1, wherein the plurality of focal planes is arranged as an N×M array of focal planes comprising at least two focal planes configured to capture blue light, at least two focal planes configured to capture green light, and at least two focal planes configured to capture red light.
5. The imager array of claim 1, wherein each focal plane comprises rows and columns of pixels.
6. The imager array of claim 1, wherein the control circuitry is configured to control capture of image information by a pixel by controlling the resetting of the pixel.
7. The imager array of claim 1, wherein the control circuitry is configured to control capture of image information by a pixel by controlling the readout of the pixel.
8. The imager array of claim 1, wherein the control circuitry is configured to control capture of image information by controlling the integration time of each pixel.
9. The imager array of claim 1, wherein the control circuitry is configured to control the processing of image information by controlling the gain of the sampling circuitry.
10. The imager array of claim 1, wherein the control circuitry is configured to control the processing of image information by controlling the black level offset of each pixel.
11. The imager array of claim 1, wherein the control circuitry is configured to control the capture of image information by controlling readout direction.
12. The imager array of claim 11, wherein the read-out direction is selected from the group consisting of:
- top to bottom; and
- bottom to top.
13. The imager array of claim 11, wherein the read-out direction is selected from the group consisting of:
- left to right; and
- right to left.
14. The imager array of claim 1, wherein the control circuitry is configured to control the capture of image information by controlling the readout region of interest.
15. The imager array of claim 1, wherein the control circuitry is configured to control the capture of image information by controlling horizontal sub-sampling.
16. The imager array of claim 1, wherein the control circuitry is configured to control the capture of image information by controlling vertical sub-sampling.
17. The imager array of claim 1, wherein the control circuitry is configured to control the capture of image information by controlling pixel charge-binning.
18. The imager array of claim 1, wherein the imager array is a monolithic integrated circuit imager array.
19. The imager array of claim 1, wherein a two dimensional array of adjacent pixels in at least one focal plane have the same capture band.
20. The imager array of claim 19, wherein the capture band is selected from the group consisting of:
- blue light;
- cyan light;
- extended color light comprising visible light and near-infra red light;
- green light;
- infra-red light;
- magenta light;
- near-infra red light;
- red light;
- yellow light; and
- white light.
21. The imager array of claim 1, wherein:
- a first array of adjacent pixels in a first focal plane have a first capture band;
- a second array of adjacent pixels in a second focal plane have a second capture band, where the first and second capture bands are the same;
- the peripheral circuitry is configured so that the integration time of the first array of adjacent pixels is a first time period; and
- the peripheral circuitry is configured so that the integration time of the second array of adjacent pixels is a second time period, where the second time period is longer than the first time period.
21. The imager array of claim 1, wherein at least one of the focal planes includes an array of adjacent pixels, where the pixels in the array of adjacent pixels are configured to capture different colors of light.
22. The imager array of claim 21, wherein the array of adjacent pixels employs a Bayer filter pattern.
23. The imager array of claim 22, wherein:
- the plurality of focal planes is arranged as a 2×2 array of focal planes;
- a first focal plane in the array of focal planes includes an array of adjacent pixels that employ a Bayer filter pattern;
- a second focal plane in the array of focal planes includes an array of adjacent pixels configured to capture green light;
- a third focal plane in the array of focal planes includes an array of adjacent pixels configured to capture red light; and
- a fourth focal plane in the array of focal planes includes an array of adjacent pixels configured to capture blue light.
24. The imager array of claim 22, wherein the plurality of focal planes is arranged as a two dimensional array of focal planes having at least three focal planes in one dimension.
25. The imager array of claim 22, wherein the plurality of focal planes is arranged as a two dimensional array of focal planes having at least three focal planes in both dimensions.
26. The imager array of claim 1, wherein the control circuitry comprises a global counter.
27. The imager array of claim 26, wherein the control circuitry is configured to stagger the start points of image read-out such that each focal plane has a controlled temporal offset with respect to a global counter.
28. The imager array of claim 26, wherein the control circuitry is configured to separately control the integration times of the pixels in each focal plane based upon the capture band of the pixels in the focal plane using the global counter.
29. The imager array of claim 26, wherein the control circuitry is configured to separately control the frame rate of each focal plane based upon the global counter.
30. The imager array of claim 26, wherein the control circuitry further comprises a pair of pointers for each focal plane.
31. The imager array of claim 30, wherein the offset between the pointers specifies an integration time.
32. The imager array of claim 30, wherein the offset between the pointers is programmable.
33. The imager array of claim 1, wherein the control circuitry comprises a row controller dedicated to each focal plane.
34. The imager array of claim 1, wherein:
- the imager array includes an array of M×N focal planes;
- the control circuitry comprises a single row decoder circuit configured to address each row of pixels in each row of M focal planes.
35. The imager array of claim 34, wherein:
- the control circuitry is configured to generate a first set of pixel level timing signals so that the row decoder and a column circuit sample a first row of pixels within a first focal plane; and
- the control circuitry is configured to generate a second set of pixel level timing signals so that the row decoder and a column circuit sample a second row of pixels within a second focal plane.
36. The imager array of claim 1, wherein each focal plane has dedicated sampling circuitry.
37. The imager array of claim 1, wherein at least a portion of the sampling circuitry is shared by a plurality of the focal planes.
38. The imager array of claim 37, wherein:
- the imager array includes an array of M×N focal planes;
- the sampling circuitry comprises M analog signal processors (ASPs) and each ASP is configured to sample pixels read-out from N focal planes.
39. The imager array of claim 38, wherein:
- each ASP is configured to receive pixel output signals from the N focal planes via N inputs; and
- each ASP is configured to sequentially process each pixel output signal on its N inputs.
40. The imager array of claim 38, wherein:
- the control circuitry is configured so that a single common analog pixel signal readout line is shared by all pixels in a group of N focal planes; and
- the control circuitry is configured to control the capture of image data to time multiplex the pixel output signals received by each of the M ASPs.
41. The imager array of claim 37, wherein:
- the imager array includes an array of M×N focal planes;
- the sampling circuitry comprises a plurality of analog signal processors (ASPs) and each ASP is configured to sample pixels read-out from a plurality of focal planes;
- the control circuitry is configured so that a single common analog pixel signal readout line is shared by all pixels in the plurality of focal planes; and
- the control circuitry is configured to control the capture of image data to time multiplex the pixel output signals received by each of the plurality of ASPs.
42. The imager array of claim 1, wherein the sampling circuitry comprises analog front end (AFE) circuitry and analog-to-digital conversion (ADC) circuitry.
43. The imager array of claim 42, wherein the sampling circuitry is configured so that each focal plane has a dedicated AFE and at least one ADC is shared between at least two focal planes.
44. The imager array of claim 43, wherein the sampling circuitry is configured so that at least one ADC is shared between a pair of focal planes.
45. The imager array of claim 43, wherein the sampling circuitry is configured so that at least one ADC is shared between four focal planes.
46. The imager array of claim 45, wherein the sampling circuitry is configured so that at least one AFE is shared between at least two focal planes.
47. The imager array of claim 46, wherein the sampling circuitry is configured so that at least one AFE is shared between a pair of focal planes.
48. The imager array of claim 47, wherein the sampling circuitry is configured so that two pairs of focal planes that each share an AFE collectively share an ADC.
49. The imager array of claim 1, wherein the control circuitry is configured to separately control the power down state of each focal plane and associated AFE circuitry or processing timeslot therein.
50. The imager array of claim 49, wherein the control circuitry configures the pixels of at least one inactive focal plane to be in a constant reset state.
51. The imager array of claim 1, wherein at least one focal plane includes reference pixels to calibrate pixel data captured using the focal plane.
52. The imager array of claim 51, wherein:
- the control circuitry is configured to separately control the power down state of the focal plane's associated AFE circuitry or processing timeslot therein; and
- the control circuitry is configured to power down the focal plane's associated AFE circuitry or processing timeslot therein without powering down the associated AFE circuitry or processing timeslot therein for readout of the reference pixels of the focal plane.
Type: Application
Filed: May 12, 2011
Publication Date: Jan 19, 2012
Applicant: Pelican Imaging Corporation (Mountain View, CA)
Inventors: Bedabrata Pain (Los Angeles, CA), Andrew Kenneth John McMahon (Menlo Park, CA)
Application Number: 13/106,797
International Classification: H01L 27/146 (20060101);