NON-SPAD PIXELS FOR DIRECT TIME-OF-FLIGHT RANGE MEASUREMENT
A Direct Time-of-Flight (DTOF) technique is combined with analog amplitude modulation within each pixel in a pixel array. No Single Photon Avalanche Diodes (SPADs) or Avalanche Photo Diodes (APDs) are used. Instead, each pixel has a Photo Diode (PD) with a conversion gain of over 400 μV/e− and Photon Detection Efficiency (PDE) of more than 45%, operating in conjunction with a Pinned Photo Diode (PPD). The TOF information is added to the received light signal by the analog domain-based single-ended to differential converter inside the pixel itself. The output of the PD in a pixel is used to control the operation of the PPD. The charge transfer from the PPD is stopped—and, hence, TOF value and range of an object are recorded—when the output from the PD in the pixel is triggered within a pre-defined time interval. Such pixels provide for an improved autonomous navigation system for drivers.
This application claims the priority benefit under 35 U.S.C. § 119(e) of U.S. Provisional Application No. 62/607,861 filed on Dec. 19, 2017, the disclosure of which is incorporated herein by reference in its entirety.
TECHNICAL FIELD

The present disclosure generally relates to image sensors. More specifically, and not by way of limitation, particular embodiments of the inventive aspects disclosed in the present disclosure are directed to a Time-of-Flight (TOF) image sensor in which a pixel uses a Photo Diode (PD) with a very high conversion gain to control the operation of a time-to-charge converter, such as a Pinned Photo Diode (PPD), to facilitate recording of TOF values and the range of a three-dimensional (3D) object.
BACKGROUND

Three-dimensional (3D) imaging systems are increasingly being used in a wide variety of applications such as, for example, industrial production, video games, computer graphics, robotic surgeries, consumer displays, surveillance videos, 3D modeling, real estate sales, autonomous navigation, and so on.
Existing 3D imaging technologies include, for example, time-of-flight (TOF) based range imaging, stereo vision systems, and structured light (SL) methods.
In the TOF method, distance to a 3D object is resolved based on the known speed of light, by measuring the round-trip time it takes for a light signal to travel between a camera and the 3D object for each point of the image. The outputs of pixels in the camera provide information about pixel-specific TOF values to generate a 3D depth profile of the object. A TOF camera may use a scanner-less approach to capture the entire scene with each laser or light pulse. In a direct TOF imager, a single laser pulse may be used to capture spatial and temporal data to record a 3D scene. This allows rapid acquisition and rapid real-time processing of scene information. Example applications of the TOF method include advanced automotive uses, such as autonomous navigation and active pedestrian safety or pre-crash detection based on distance images in real time; tracking movements of humans, such as during interaction with games on video game consoles; and industrial machine vision, such as classifying objects and helping robots find items on a conveyor belt.
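For illustration purposes only, the round-trip relationship underlying the TOF method may be expressed as a short numerical sketch; the timing value below is made up and is not taken from any embodiment described herein.

```python
# Illustrative only: one-way distance from a measured round-trip time of flight.
C = 299_792_458.0  # speed of light (m/s)

def distance_from_tof(t_round_trip_s: float) -> float:
    """The light covers the camera-object path twice, hence the division by 2."""
    return C * t_round_trip_s / 2.0

# A 100 ns round trip corresponds to a target roughly 15 m away.
print(distance_from_tof(100e-9))  # ~14.99 m
```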
Light Detection and Ranging (LiDAR) is an example of a direct TOF method that measures distance to a target by illuminating the target with a pulsed laser light and measuring the reflected pulses with a sensor. Differences in laser return times and wavelengths can then be used to make digital 3D representations of the target. LiDAR has terrestrial, airborne, and mobile applications. LiDAR is commonly used to make high-resolution maps such as, for example, in archaeology, geography, geology, forestry, and the like. LiDAR also has automotive applications such as, for example, for control and navigation in some autonomous cars.
In stereoscopic imaging or stereo vision systems, two cameras, displaced horizontally from one another, are used to obtain two differing views of a scene or of a 3D object in the scene. By comparing these two images, the relative depth information can be obtained for the 3D object. Stereo vision is highly important in fields such as robotics, where it is used to extract information about the relative position of 3D objects in the vicinity of autonomous systems/robots. Other applications for robotics include object recognition, where stereoscopic depth information allows a robotic system to separate occluding image components, which the robot may otherwise not be able to distinguish as two separate objects, such as one object in front of another that partially or fully hides the other object. 3D stereo displays are also used in entertainment and automated systems.
In the SL approach, the 3D shape of an object may be measured using projected light patterns and a camera for imaging. In the SL method, a known pattern of light (often grids, horizontal bars, or patterns of parallel stripes) is projected onto a scene or a 3D object in the scene. The projected pattern may get deformed or displaced when striking the surface of the 3D object. Such deformation may allow an SL vision system to calculate the depth and surface information of the object. Thus, projecting a narrow band of light onto a 3D surface may produce a line of illumination that appears distorted from perspectives other than that of the projector, and that line can be used for geometric reconstruction of the illuminated surface shape. SL-based 3D imaging may be used in different applications such as, for example, by a police force to photograph fingerprints in a 3D scene, inline inspection of components during a production process, in health care for live measurements of human body shapes or the micro structures of human skin, and the like.
SUMMARY

In one embodiment, the present disclosure is directed to a pixel in an image sensor. The pixel comprises: (i) a Photo Diode (PD) unit having at least one PD that converts received luminance into an electrical signal, wherein the at least one PD has a conversion gain that satisfies a threshold; (ii) an amplifier unit connected in series with the PD unit to amplify the electrical signal and to responsively generate an intermediate output; and (iii) a Time-to-Charge Converter (TCC) unit coupled to the amplifier unit and receiving the intermediate output therefrom. In the pixel, the TCC unit includes: (a) a device that stores an analog charge, and (b) a control circuit coupled to the device. The control circuit performs operations comprising: (1) initiating transfer of a portion of the analog charge from the device, (2) terminating the transfer in response to receipt of the intermediate output within a pre-defined time interval, and (3) generating a pixel-specific output for the pixel based on the portion of the analog charge transferred. In particular embodiments, the threshold for the conversion gain is at least 400 μV (microvolts) per photoelectron.
In another embodiment, the present disclosure is directed to a method, which comprises: (i) projecting a laser pulse onto a three-dimensional (3D) object; (ii) applying an analog modulating signal to a device in a pixel, wherein the device stores an analog charge; (iii) initiating transfer of a portion of the analog charge from the device based on modulation received from the analog modulating signal; (iv) detecting a returned pulse using the pixel, wherein the returned pulse is the projected laser pulse reflected from the 3D object, and wherein the pixel includes a Photo Diode (PD) unit having at least one PD that converts luminance received in the returned pulse into an electrical signal and that has a conversion gain that satisfies a threshold; (v) processing the electrical signal using an amplifier unit in the pixel to responsively generate an intermediate output; (vi) terminating the transfer of the portion of the analog charge in response to generation of the intermediate output within a pre-defined time interval; and (vii) determining a Time of Flight (TOF) value of the returned pulse based on the portion of the analog charge transferred upon termination. In some embodiments, the threshold for the conversion gain is at least 400 μV per photoelectron.
In yet another embodiment, the present disclosure is directed to a system, which comprises: (i) a light source; (ii) a plurality of pixels; (iii) a memory for storing program instructions; and (iv) a processor coupled to the memory and to the plurality of pixels. In the system, the light source projects a laser pulse onto a 3D object. In the plurality of pixels, each pixel includes: (a) a pixel-specific PD unit having at least one PD that converts luminance received in a returned pulse into an electrical signal, wherein the at least one PD has a conversion gain that satisfies a threshold, and wherein the returned pulse results from reflection of the projected laser pulse by the 3D object; (b) a pixel-specific amplifier unit connected in series with the pixel-specific PD unit to amplify the electrical signal and to responsively generate an intermediate output; and (c) a pixel-specific TCC unit coupled to the pixel-specific amplifier unit and receiving the intermediate output therefrom. In the system, the pixel-specific TCC unit includes: (i) a device that stores an analog charge, and (ii) a control circuit coupled to the device. The control circuit performs operations comprising: (a) initiating transfer of a pixel-specific first portion of the analog charge from the device; (b) terminating the transfer of the pixel-specific first portion upon receipt of the intermediate output within a pre-defined time interval; (c) generating a first pixel-specific output for the pixel based on the pixel-specific first portion of the analog charge transferred; (d) transferring a pixel-specific second portion of the analog charge from the device, wherein the pixel-specific second portion is substantially equal to a remainder of the analog charge after the pixel-specific first portion is transferred; and (e) generating a second pixel-specific output for the pixel based on the pixel-specific second portion of the analog charge transferred. In the system, the processor executes the program instructions, whereby the processor performs the following operations for each pixel in the plurality of pixels: (a) facilitating transfers of the pixel-specific first and second portions of the analog charge, respectively; (b) receiving the first and the second pixel-specific outputs; (c) generating a pixel-specific pair of signal values based on the first and the second pixel-specific outputs, respectively, wherein the pixel-specific pair of signal values includes a pixel-specific first signal value and a pixel-specific second signal value; (d) determining a corresponding pixel-specific TOF value of the returned pulse using the pixel-specific first signal value and the pixel-specific second signal value; and (e) determining a pixel-specific distance to the 3D object based on the pixel-specific TOF value. In certain embodiments, the threshold for the conversion gain is at least 400 μV per photoelectron.
In the following section, the inventive aspects of the present disclosure will be described with reference to exemplary embodiments illustrated in the figures.
In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the disclosure. However, it will be understood by those skilled in the art that the disclosed inventive aspects may be practiced without these specific details. In other instances, well-known methods, procedures, components and circuits have not been described in detail so as not to obscure the present disclosure. Additionally, the described inventive aspects can be implemented to perform low-power range measurements and 3D imaging in any imaging device or system, including, for example, a computer, an automobile navigation system, and the like.
Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” or “according to one embodiment” (or other phrases having similar import) in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Also, depending on the context of discussion herein, a singular term may include its plural forms and a plural term may include its singular form. Similarly, a hyphenated term (e.g., “three-dimensional,” “pre-defined”, “pixel-specific,” etc.) may be occasionally interchangeably used with its non-hyphenated version (e.g., “three dimensional,” “predefined”, “pixel specific,” etc.), and a capitalized entry (e.g., “Projector Module,” “Image Sensor,” “PIXOUT” or “Pixout,” etc.) may be interchangeably used with its non-capitalized version (e.g., “projector module,” “image sensor,” “pixout,” etc.). Such occasional interchangeable uses shall not be considered inconsistent with each other.
It is noted at the outset that the terms “coupled,” “operatively coupled,” “connected”, “connecting,” “electrically connected,” etc., may be used interchangeably herein to generally refer to the condition of being electrically/electronically connected in an operative manner. Similarly, a first entity is considered to be in “communication” with a second entity (or entities) when the first entity electrically sends and/or receives (whether through wireline or wireless means) information signals (whether containing address, data, or control information) to/from the second entity regardless of the type (analog or digital) of those signals. It is further noted that various figures (including component diagrams) shown and discussed herein are for illustrative purpose only, and are not drawn to scale. Similarly, various waveforms and timing diagrams are shown for illustrative purpose only.
The terms “first,” “second,” etc., as used herein, are used as labels for nouns that they precede, and do not imply any type of ordering (e.g., spatial, temporal, logical, etc.) unless explicitly defined as such. Furthermore, the same reference numerals may be used across two or more figures to refer to parts, components, blocks, circuits, units, or modules having the same or similar functionality. However, such usage is for simplicity of illustration and ease of discussion only; it does not imply that the construction or architectural details of such components or units are the same across all embodiments or such commonly-referenced parts/modules are the only way to implement the teachings of particular embodiments of the present disclosure.
It is observed here that the earlier-mentioned 3D technologies have many drawbacks. For example, a range gated TOF imager may use multiple laser pulses to provide illumination and an optical gate to allow the light to reach the imager only during a desired time period. Range gated TOF imagers can be used in two-dimensional (2D) imaging to suppress anything outside a specified distance range, such as to see through fog. However, gated TOF imagers may provide only Black-and-White (B&W) output and may not have 3D imaging capability. Furthermore, current TOF systems typically operate over a range of a few meters to several tens of meters, but their resolution may decrease for measurements over short distances, thereby making 3D imaging within a short distance, such as, for example, in fog or hard-to-see conditions, almost impractical. Also, pixels in existing TOF sensors may be vulnerable to ambient light.
Direct TOF (DTOF) LiDAR sensors typically use Single Photon Avalanche Diodes (SPADs) or Avalanche Photo Diodes (APDs) in their pixel arrays for DTOF range measurements. Generally, SPADs and APDs both require a high operating voltage, in the range of approximately 20V to 30V, and special fabrication processes. Furthermore, a SPAD has a low Photon Detection Efficiency (PDE) of approximately 5%. Thus, a SPAD-based imager may not be optimal for a high speed 3D imaging system for all-weather autonomous navigation.
The stereoscopic imaging approach generally works only with textured surfaces. It has high computational complexity because of the need to match features and find correspondences between the stereo pair of images of an object, which requires high system power. Furthermore, stereo imaging requires two regular, high bit resolution sensors along with two lenses, making the entire assembly unsuitable where space is at a premium, such as, for example, in an automobile-based autonomous navigation system. Additionally, stereo 3D cameras have difficulty seeing through fog and dealing with motion blur.
In contrast, particular embodiments of the present disclosure provide for implementing a low cost, high performance automotive LiDAR sensor or DTOF-based 3D imaging system in automobiles for all weather conditions. Thus, improved vision may be provided for drivers under difficult conditions such as, for example, low light, bad weather, fog, strong ambient light, and the like. A DTOF range measurement system as per particular embodiments of the present disclosure may not include imaging, but, instead, may provide an audible and/or a visible alert. The measured range may be used in autonomous control of a vehicle such as, for example, automatically stopping a vehicle to avoid collision with another object. As discussed in more detail below, in a single pulse-based direct TOF system as per particular embodiments of the present disclosure, the TOF information is added to the received signal by means of controlled charge transfer and an analog domain-based single-ended to differential converter inside the pixel itself. Thus, the present disclosure provides for a single chip solution that directly combines TOF and analog Amplitude Modulation (AM) within each pixel in the pixel array, using a high conversion gain Photo Diode (PD) having a PDE of 45% or more in conjunction with a single Pinned Photo Diode (PPD) (or another time-to-charge converter) in each pixel. The high conversion gain PDs replace the SPADs used in current LiDAR imagers for DTOF range measurements. The output of the PD in a pixel is used to control the operation of the PPD to facilitate recording of TOF values and the range of a 3D object. As a result, an improved autonomous navigation system may be offered that can "see through" inclement weather at short range and produce 3D images as well as 2D gray-scale images under a substantially lower operating voltage.
The system 15 may be any electronic device configured for 2D and 3D imaging applications as per teachings of the present disclosure. The system 15 may be portable or non-portable. Some examples of the portable version of the system 15 may include popular consumer electronic gadgets such as, for example, a mobile device, a cellphone, a smartphone, a User Equipment (UE), a tablet, a digital camera, a laptop or desktop computer, an automobile navigation unit, a Machine-to-Machine (M2M) communication unit, Virtual Reality (VR) equipment or a VR module, a robot, and the like. On the other hand, some examples of the non-portable version of the system 15 may include a game console in a video arcade, an interactive video terminal, an automobile with autonomous navigation capability, a machine vision system, an industrial robot, VR equipment, and so on. The 3D imaging functionality provided as per teachings of the present disclosure may be used in many applications such as, for example, automobile applications such as all-weather autonomous navigation and driver assistance in low light or inclement weather conditions, human-machine interface and gaming applications, machine vision and robotics applications, and the like.
In particular embodiments of the present disclosure, the imaging module 17 may include a projector module (or light source module) 22 and an image sensor unit 24, as discussed in more detail below.
In one embodiment, the processor 19 may be a Central Processing Unit (CPU), which can be a general purpose microprocessor. In the discussion herein, the terms “processor” and “CPU” may be used interchangeably for ease of discussion. However, it is understood that, instead of or in addition to the CPU, the processor 19 may contain any other type of processors such as, for example, a microcontroller, a Digital Signal Processor (DSP), a Graphics Processing Unit (GPU), a dedicated Application Specific Integrated Circuit (ASIC) processor, and the like. Furthermore, in one embodiment, the processor/host 19 may include more than one CPU, which may be operative in a distributed processing environment. The processor 19 may be configured to execute instructions and to process data according to a particular Instruction Set Architecture (ISA) such as, for example, an x86 instruction set architecture (32-bit or 64-bit versions), a PowerPC® ISA, or a MIPS (Microprocessor without Interlocked Pipeline Stages) instruction set architecture relying on RISC (Reduced Instruction Set Computer) ISA. In one embodiment, the processor 19 may be a System on Chip (SoC) having functionalities in addition to a CPU functionality.
In particular embodiments, the memory module 20 may be a Dynamic Random Access Memory (DRAM) such as, for example, a Synchronous DRAM (SDRAM), or a DRAM-based Three Dimensional Stack (3DS) memory module such as, for example, a High Bandwidth Memory (HBM) module, or a Hybrid Memory Cube (HMC) memory module. In other embodiments, the memory module 20 may be a Solid State Drive (SSD), a non-3DS DRAM module, or any other semiconductor-based storage system such as, for example, a Static Random Access Memory (SRAM), a Phase-Change Random Access Memory (PRAM or PCRAM), a Resistive Random Access Memory (RRAM or ReRAM), a Conductive-Bridging RAM (CBRAM), a Magnetic RAM (MRAM), a Spin-Transfer Torque MRAM (STT-MRAM), and the like.
The light source (or projector) module 22 may illuminate the 3D object 26 by projecting a short pulse 28, as shown by an exemplary arrow 30 associated with a corresponding dotted line 31 representing an illumination path of a light signal or optical radiation that may be used to project on the 3D object 26 within an optical Field Of View (FOV). The system 15 may be a direct TOF imager in which a single pulse may be used per image frame (of the pixel array). In certain embodiments, multiple short pulses may be transmitted onto the 3D object 26 as well. An optical radiation source, which, in one embodiment, may be a laser light source 33 operated and controlled by a laser controller 34, may be used to project the short pulse 28 (here, a laser pulse) onto the 3D object 26. The short pulse 28 from the laser light source 33 may be projected, under the control of the laser controller 34, onto the surface of the 3D object 26 via projection optics 35. The projection optics may be a focusing lens, a glass/plastic surface, or another cylindrical optical element.
In particular embodiments, the light source (or illumination source) 33 may be a diode laser or a Light Emitting Diode (LED) emitting visible light, a light source that produces light in the non-visible spectrum, an IR laser (for example, a Near Infrared (NIR) or a Short Wave Infrared (SWIR) laser), a point light source, a monochromatic illumination source (such as, for example, a combination of a white lamp and a monochromator) in the visible light spectrum, or any other type of laser light source. In autonomous navigation applications, the more unobtrusive NIR or SWIR laser may be preferred as the pulsed laser light source 33. In certain embodiments, the laser light source 33 may be one of many different types of laser light sources such as, for example, a point source with 2D scanning capability, a sheet source with one-dimensional (1D) scanning capability, or a diffused laser with an FOV matching that of the image sensor unit 24. In particular embodiments, the laser light source 33 may be fixed in one position within the housing of the device 15, but may be rotatable in X-Y directions. The laser light source 33 may be X-Y addressable (for example, by the laser controller 34) to perform a scan of the 3D object 26. The laser pulse 28 may be projected onto the surface of the 3D object 26 using a mirror (not shown), or the projection may be completely mirror-less. In particular embodiments, the projector module 22 may include more or fewer components than those described in this exemplary embodiment.
In TOF imaging, the light received from the illuminated 3D object 26 may be focused onto a 2D pixel array 42 via collection optics 44 in the image sensor unit 24. The pixel array 42 may include one or more pixels 43. Like the projection optics 35, the collection optics 44 may be a focusing lens, a glass/plastic surface, or another cylindrical optical element that concentrates the reflected light received from the 3D object 26 onto one or more pixels 43 in the 2D array 42. An optical band-pass filter (not shown) may be used as part of the collection optics 44 to pass only the light with the same wavelength as the wavelength of light in the laser pulse 28. This may help suppress collection/reception of non-relevant light and reduce noise.
The TOF-based 3D imaging as per particular embodiments of the present disclosure may be performed using many different combinations of 2D pixel arrays 42 and laser light sources 33 such as, for example: (i) a 2D color (RGB) sensor with a visible light laser source, in which the laser source may be a red (R), green (G), or blue (B) light laser, or a laser source producing a combination of these lights; (ii) a visible light laser with a 2D RGB color sensor having an Infrared (IR) cut filter; (iii) an NIR or SWIR laser with a 2D IR sensor; (iv) an NIR laser with a 2D NIR sensor; (v) an NIR laser with a 2D RGB sensor (without an IR cut filter); (vi) an NIR laser with a 2D RGB sensor (without an NIR cut filter); (vii) a 2D RGB-IR sensor with visible or IR laser; (viii) a 2D RGBW (red, green, blue, white) or RWB (red, white, blue) sensor with either visible or NIR laser; and so on. In the case of an NIR or other IR laser as, for example, in autonomous navigation applications, the 2D pixel array 42 may provide outputs to generate a grayscale image of the 3D object 26. These pixel outputs also may be processed to obtain the range measurements and, hence, to generate a 3D image of the object 26, as discussed in more detail below. Exemplary circuit details of individual pixels 43 are shown and discussed later.
The pixel array 42 may convert the received photons into corresponding electrical signals, which are then processed by the associated image processing unit 46 to determine the range and 3D depth image of the object 26. In one embodiment, the image processing unit 46 and/or the processor 19 may carry out range measurements.
The processor 19 may control the operations of the projector module 22 and the image sensor unit 24. Upon user input or automatically (as, for example, in a real-time autonomous navigation application), the processor 19 may repeatedly send a laser pulse 28 onto the surrounding 3D object(s) 26 and trigger the sensor unit 24 to receive and process incoming returned pulses 37. The processed image data received from the image processing unit 46 may be stored by the processor 19 in the memory 20 for TOF-based range computation and 3D image generation (if applicable). The processor 19 may also display a 2D image (for example, a grayscale image) and/or a 3D image on a display screen (not shown) of the device 15. The processor 19 may be programmed in software or firmware to carry out various processing tasks described herein. Alternatively or additionally, the processor 19 may comprise programmable hardware logic circuits for carrying out some or all of its functions. In particular embodiments, the memory 20 may store program code, look-up tables, and/or interim computational results to enable the processor 19 to carry out its functions.
In one embodiment, the second PD 56 may be similar to the first PD 55 in the sense that the second PD 56 also may be a low-voltage PD with a very high gain and high PDE. However, in contrast to the first PD 55, the second PD 56 may not be exposed to light, as illustrated by a grey circle around the PD 56.
It is noted here that, simply for ease of discussion and depending on the context, the same reference numeral may be used in the discussion of different embodiments to refer to parts or components with the same or similar functionality.
An amplifier unit 60 in the output unit 53 may be connected in series with the PDs 55-56, and may be operable to amplify the electrical signal 58. In some embodiments, the amplifier unit 60 may be a sense amplifier. Prior to such amplification, the sense amplifier 60 may reset the PDs 55-56. Thereafter, the PD 55 may receive the luminance 57 and generate the electrical signal 58. The sense amplifier 60 may operate to amplify the electrical signal only when an electronic shutter is turned on. Exemplary shutter signals are discussed later.
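For illustration purposes only, the role of the covered reference PD may be pictured as the second input of a differential sensing operation: the amplifier flags a photon detection event only when the exposed PD departs from the dark reference while the shutter is active. The behavioral sketch below rests on that assumption; the threshold value and the function name are illustrative and are not taken from the disclosure.

```python
# Behavioral sketch (not circuit-accurate) of a sense amplifier comparing the
# exposed PD against the covered "dark" reference PD while the shutter is on.
def sense_amp_output(v_signal_pd: float, v_reference_pd: float,
                     shutter_on: bool, threshold_v: float = 400e-6) -> bool:
    """Return True (intermediate output asserted) on a photon detection event.

    threshold_v is an assumed sensing margin; with a conversion gain above
    400 uV/e-, even a single photoelectron can clear such a margin.
    """
    if not shutter_on:
        return False
    return (v_signal_pd - v_reference_pd) > threshold_v

# Example: one photoelectron at an assumed 450 uV/e- versus a quiet reference.
print(sense_amp_output(v_signal_pd=450e-6, v_reference_pd=0.0, shutter_on=True))
```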
The PPD 89 may store analog charge similar to a capacitor. In one embodiment, the PPD 89 may be covered so that it does not respond to light. Thus, the PPD 89 may be used as a time-to-charge converter instead of as a light sensing element. However, as noted before, the light-sensing functionality may be accomplished through the high gain PD 55 or 70. In certain embodiments, a photogate, a capacitor, or another semiconductor device, with suitable circuit modifications, may be used as a charge storage device instead of a PPD in the TCC units described herein.
Under the operative control of the electronic Shutter signal 61, the charge transfer trigger portion, such as the logic unit 86, may generate a Transfer Enable (TXEN) signal 96 to trigger the transfer of charge stored in the PPD 89. A PD 55, 70 may detect a photon (which may be referred to as a "photon detection event") in the light pulse that was transmitted and reflected off an object, such as the object 26.
In the charge generation and transfer portion, the PPD 89 may be initially set to its full well capacity using a Reset (RST) signal 98 in conjunction with the third transistor 92. The first transistor 90 may receive a Transfer Voltage (VTX) signal 99 at its drain terminal and the TXEN signal 96 at its gate terminal. A TX signal 100 may be available at the source terminal of the first transistor 90 and applied to the gate terminal of the second transistor 91. As shown, the source terminal of the first transistor 90 may be connected to the gate terminal of the second transistor 91. As discussed below, the VTX signal 99 (or, equivalently, the TX signal 100) may be used as an analog modulating signal to control the analog charge to be transferred from the PPD 89, which may be connected to the source terminal of the transistor 91 in the configuration shown. The second transistor 91 may transfer the charge on the PPD 89 from its source terminal to its drain terminal, which may connect to the gate terminal of the fourth transistor 93 and form a charge "collection site" referred to as a Floating Diffusion (FD) node/junction 102. In particular embodiments, the charge transferred from the PPD 89 may depend on the modulation provided by the analog modulating signal 99 (or, equivalently, the TX signal 100).
In the charge collection and output portion, the third transistor 92 may receive the RST signal 98 at its gate terminal and a Pixel Voltage (VPIX) signal 104 at its drain terminal. The source terminal of the third transistor 92 may be connected to the FD node 102. In one embodiment, the voltage level of the VPIX signal 104 may be equal to the voltage level of the generic supply voltage VDD and may be in the range of 2.5V (volts) to 3V. The drain terminal of the fourth transistor 93 also may receive the VPIX signal 104 as shown. In particular embodiments, the fourth transistor 93 may operate as an NMOS source follower to function as a buffer amplifier. The source terminal of the fourth transistor 93 may be connected to the drain terminal of the fifth transistor 94, which may be in cascode with the source follower 93 and may receive a Select (SEL) signal 105 at its gate terminal. The charge transferred from the PPD 89 and "collected" at the FD node 102 may appear as the pixel-specific output PIXOUT 107 at the source terminal of the fifth transistor 94. The Pixout line/terminal 107 may represent either of the pixel-specific Pixout lines (such as the Pixout line 65) mentioned earlier.
Briefly, as mentioned before, the charge transferred from the PPD 89 to FD 102 is controlled by the VTX signal 99 (and, hence, the TX signal 100). The amount of charge reaching the FD node 102 is modulated by the TX signal 100. In one embodiment, the voltage VTX 99 (and, also, TX 100) may be ramped to gradually transfer charge from the PPD 89 to FD 102. Thus, the amount of charge transferred may be a function of the analog modulating voltage TX 100, and the ramping of the TX voltage 100 is a function of time. Hence, the charge transferred from the PPD 89 to the FD node 102 also is a function of time. If, during the transfer of charge from the PPD 89 to FD 102, the second transistor 91 is turned off (for example, becomes open-circuited) due to the generation of the TXEN signal 96 by the logic unit 86 upon a photon detection event for the PD 55 (or 70), the transfer of charge from the PPD 89 to the FD node 102 stops. Consequently, the amount of charge transferred to FD 102 and the amount of charge remaining in the PPD 89 are both a function of the TOF of the incoming photon(s). The result is a time-to-charge conversion and a single-ended to differential signal conversion. The PPD 89 thus operates as a time-to-charge converter. The more charge is transferred to the FD node 102, the more the voltage decreases on the FD node 102 and the more the voltage increases on the PPD 89. It is observed that the farther the object 26 is from the system, the later the returned pulse arrives and, hence, the more charge is transferred from the PPD 89 to the FD node 102 before the transfer is stopped.
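For illustration purposes only, the time-to-charge conversion described above may be mimicked with a simple discrete-time model in which a linearly ramped TX voltage moves charge from the PPD to the FD node at a steady rate until the photon detection event stops the transfer. All numbers (well capacity, timing) are assumed values, not parameters of any embodiment.

```python
# Discrete-time sketch of the PPD time-to-charge conversion described above.
# Assumed values throughout; a linear VTX/TX ramp moves charge from the PPD
# to the FD node until the photon detection event stops the transfer.
def ppd_transfer(full_well_e: float, t_dly: float, t_sh: float,
                 t_photon: float, steps: int = 10_000):
    """Return (P1, P2): charge moved to FD before detection, and the remainder."""
    dt = t_sh / steps
    rate = full_well_e / t_sh      # linear ramp: constant transfer rate
    ppd, fd = full_well_e, 0.0
    t = t_dly                      # transfer starts when the shutter opens
    for _ in range(steps):
        if t >= t_photon:          # photon detection event: transfer stops
            break
        moved = min(rate * dt, ppd)
        ppd -= moved
        fd += moved
        t += dt
    return fd, ppd                 # P1 (transferred) and P2 (remaining)

# A photon arriving 50 ns into a 100 ns shutter leaves half the well on FD.
p1, p2 = ppd_transfer(full_well_e=10_000, t_dly=20e-9, t_sh=100e-9,
                      t_photon=70e-9)
print(p1 / (p1 + p2))  # ~0.5, i.e., (Ttof - Tdly) / Tsh
```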
The voltage at the floating diffusion 102 may be later transferred as the Pixout signal 107 to an Analog-to-Digital Converter (ADC) unit (not shown) using the transistor 94 and converted into an appropriate digital signal/value for subsequent processing. More details of the timing and operation of the various signals involved are provided below.
In one embodiment, the ratio of one pixel output (for example, PIXOUT1) to the sum of the two pixel outputs (here, PIXOUT1+PIXOUT2) may be proportional to the time difference of the "Ttof" and "Tdly" values discussed below. This proportionality may be expressed as:

(Ttof − Tdly) ∝ Pixout1/(Pixout1 + Pixout2)   (1)

However, the present disclosure is not limited to the relationship in equation (1). As discussed below, the ratio in equation (1) may be used to calculate the depth or distance of a 3D object, and remains usable even when the sum Pixout1+Pixout2 is not the same from pixel to pixel, which makes the measurement less sensitive to pixel-to-pixel variations.

For ease of reference, the term "P1" may be used to refer to "Pixout1" and the term "P2" may be used to refer to "Pixout2" in the discussion below. It is seen from the relationship in equation (1) that the pixel-specific TOF value may be determined from the ratio of the pixel-specific output value P1 to the sum of the pixel-specific output values P1 and P2. In certain embodiments, once the pixel-specific TOF value is so determined, the pixel-specific distance ("D") or range ("R") to an object (such as the 3D object 26) may be determined as follows:

D = (c × Ttof)/2   (2)

where the parameter "c" refers to the speed of light. Alternatively, in some other embodiments where the modulating signal, such as the VTX signal 99 (or the TX signal 100), is ramped linearly while the electronic shutter is "on", the range may be obtained directly from the measured charge ratio as:

R = (c/2) × (Tdly + Tshutter × P1/(P1 + P2))   (3)

In equation (3), the parameter "Tshutter" is the shutter duration or shutter "ON" period. The parameter "Tshutter" is referred to as the parameter "Tsh" in the embodiments discussed herein.
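For illustration purposes only, a short worked example may help tie equations (1) through (3) together; the signal values below are made-up ADC readings chosen solely to illustrate the arithmetic.

```python
# Worked example for equations (1)-(3). P1 and P2 are made-up digitized pixel
# outputs; Tdly and Tshutter are assumed timing values.
C = 299_792_458.0        # speed of light (m/s)
P1, P2 = 1500.0, 2500.0  # assumed ADC codes for Pixout1 and Pixout2
T_DLY = 20e-9            # assumed delay before the shutter opens (s)
T_SHUTTER = 100e-9       # assumed shutter "on" period (s)

ttof = T_DLY + T_SHUTTER * P1 / (P1 + P2)  # TOF recovered from the charge ratio
rng = (C / 2.0) * ttof                     # one-way range per equations (2)/(3)
print(f"TOF = {ttof * 1e9:.1f} ns, range = {rng:.2f} m")
# TOF = 57.5 ns, range = 8.62 m
```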
In view of the present disclosure's analog modulation-based manipulation or control of the PPD charge distribution inside a pixel itself, the range measurement and resolution are also controllable. The pixel-level analog amplitude modulation of the PPD charge may work with an electronic shutter that may be a global shutter as, for example, in a Charge Coupled Device (CCD) image sensor. The global shutter may allow for a better image capture of a fast-moving object (such as a vehicle), which may be helpful in a driver assistance system or an autonomous navigation system. Furthermore, although the disclosure herein is primarily provided in the context of a one-pulse TOF imaging system, like the system 15 described above, the same teachings may be applied, with suitable modifications, to a system in which multiple short pulses are transmitted, as mentioned earlier.
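For illustration purposes only, a rough estimate of the achievable range resolution may be sketched as follows, under the assumptions that the modulating ramp spans the full shutter period and that the charge ratio is digitized by an ideal, noise-free N-bit ADC; neither assumption is taken from the disclosure.

```python
# Rough resolution estimate: the shutter period Tsh is mapped onto the PPD
# charge split, which is then digitized. With an assumed ideal N-bit ADC,
# one LSB spans Tsh / 2**N of time, i.e., (c/2) * Tsh / 2**N of range.
C = 299_792_458.0  # speed of light (m/s)

def range_resolution_m(t_shutter_s: float, adc_bits: int) -> float:
    return (C / 2.0) * t_shutter_s / (2 ** adc_bits)

# A 100 ns shutter digitized at an assumed 10 bits: ~1.5 cm per LSB.
print(range_resolution_m(100e-9, 10))
```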
The two-input logic OR gate 116 may include a first input connected to the output of the latch 115, a second input for receiving a signal (TXRMD) 117, and an output to provide the TXEN signal 96. In one embodiment, the TXRMD signal 117 may be generated internally within the relevant pixel 50 (or 67). The OR gate 116 may logically OR the output of the latch 115 with the TXRMD signal 117 to obtain the final TXEN signal 96. Such an internally-generated signal may remain low while the electronic shutter is "on", but may be asserted "high" so that the TXEN signal 96 goes to a logic 1 to facilitate the transfer of the remaining charge in the PPD 89 (at event 135, discussed below).
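For illustration purposes only, the latch-plus-OR arrangement just described may be modeled at the behavioral level as shown below, under the assumption of an active-high TXEN that enables the charge transfer; the model is a sketch, not a gate-level representation of the circuit.

```python
# Behavioral model of the TXEN generation described above, assuming an
# active-high TXEN: a latch captures the photon detection event while the
# shutter is on, and the OR gate lets TXRMD re-assert TXEN later to flush
# the remaining PPD charge.
class TxenLogic:
    def __init__(self) -> None:
        self.latch_q = 1  # latch output: high until a photon is detected

    def step(self, photon_event: bool, shutter_on: bool, txrmd: int) -> int:
        if shutter_on and photon_event:
            self.latch_q = 0               # detection latched: stop the ramp
        return int(self.latch_q or txrmd)  # the OR gate described in the text

logic = TxenLogic()
print(logic.step(photon_event=False, shutter_on=True, txrmd=0))   # 1: transferring
print(logic.step(photon_event=True, shutter_on=True, txrmd=0))    # 0: transfer stops
print(logic.step(photon_event=False, shutter_on=False, txrmd=1))  # 1: flush remainder
```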
In addition to various external signals (for example, VPIX 104, RST 98, and the like) and internal signals (for example, TX 100, TXEN 96, and the FD voltage 102), the timing diagram 120 illustrates the relative timing of the events and periods discussed herein.
The signals RST, VTX, VPIX, TX2, and SEL may be supplied to the TCC unit 140 from an external unit, such as, for example, the image processing unit 46.
It is noted that the PPD preset event 184, the delay time (Tdly) 185, the TOF period (Ttof) 186, the shutter "off" interval 187, the shutter "on" or "active" period (Tsh) 188 or 189, and the FD reset event 190 are similar to the corresponding events and periods discussed earlier and, hence, are not described again in detail.
During the first readout interval 191, after the initial charge is transferred from the SD node to the FD node and the TX2 signal 177 returns to the logic "low" level, the TXRMD signal 182 may be asserted (pulsed) "high" to generate a "high" pulse on the TXEN input 152, which, in turn, may generate a "high" pulse on the TX input 157 to allow transfer of the remaining charge in the PPD 142 to the SD node 175 (through the SD capacitor 172), as indicated by the reference numeral "183".
In summary, the pixel designs as per teachings of the present disclosure use one or more high-gain PDs in combination with a PPD (or similar analog charge storage device), which performs as a time-to-charge converter whose AM-based charge transfer operation is controlled by outputs from the one or more high-gain PDs in the pixel to determine TOF. In the present disclosure, the PPD charge transfer is stopped to record TOF only when an output from a high-gain PD is triggered within a very short, pre-defined time interval—such as, for example, when an electronic shutter is “on.” As a result, an all-weather autonomous navigation system as per teachings of the present disclosure may provide improved vision for drivers under difficult driving conditions such as, for example, low light, fog, bad weather, and so on.
At block 200, a returned pulse, such as the returned pulse 37, may be detected using the pixel 50 (or 67). As mentioned earlier, the returned pulse 37 is the projected laser pulse 28 reflected from the 3D object 26. As noted at block 200, the pixel 50 (or 67) may include a PD unit, such as the PD unit 52 (or the PD unit 68), having at least one PD, like the PD 55 (or the PD 70), that converts luminance received in the returned pulse 37 into an electrical signal and that has a conversion gain that satisfies a threshold. In particular embodiments, the threshold is at least 400 μV per photoelectron, as mentioned before. As noted at block 201, this electrical signal may be processed using an amplifier unit, such as the sense amplifier 60 (or the gain stage in the output unit 69), in the pixel 50 (or 67) to responsively generate an intermediate output.
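For illustration purposes only, the steps at blocks 200 and 201, together with the TOF and range determination discussed earlier, may be composed end to end as a high-level behavioral sketch; the function name, timing constants, and linear-transfer assumption are illustrative and are not part of the disclosed method.

```python
# Behavioral composition of the detection (block 200), amplification and
# transfer termination (block 201), and the TOF/range math of equations
# (1)-(3). Illustrative only; assumes a linear charge-transfer ramp.
C = 299_792_458.0  # speed of light (m/s)

def measure_range_one_pixel(t_photon_s: float,
                            t_dly: float = 20e-9,
                            t_sh: float = 100e-9) -> float:
    # The detection event freezes the linear charge transfer, so the
    # transferred fraction stands in for P1 / (P1 + P2).
    frac = min(max((t_photon_s - t_dly) / t_sh, 0.0), 1.0)
    # Recover TOF from the charge split, then convert to one-way range.
    ttof = t_dly + t_sh * frac
    return (C / 2.0) * ttof

print(f"{measure_range_one_pixel(70e-9):.2f} m")  # ~10.49 m for a 70 ns TOF
```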
As discussed earlier, the imaging module 17 may include the hardware described in the exemplary embodiments above.
As mentioned earlier, the system memory 20 may be any semiconductor-based storage system such as, for example, DRAM, SRAM, PRAM, RRAM, CBRAM, MRAM, STT-MRAM, and the like. In some embodiments, the memory unit 20 may include at least one 3DS memory module in conjunction with one or more non-3DS memory modules. The non-3DS memory may include Double Data Rate or Double Data Rate 2, 3, or 4 Synchronous Dynamic Random Access Memory (DDR/DDR2/DDR3/DDR4 SDRAM), or Rambus® DRAM, flash memory, various types of Read Only Memory (ROM), etc. Also, in some embodiments, the system memory 20 may include multiple different types of semiconductor memories, as opposed to a single type of memory. In other embodiments, the system memory 20 may be a non-transitory data storage medium.
The peripheral storage unit 206, in various embodiments, may include support for magnetic, optical, magneto-optical, or solid-state storage media such as hard drives, optical disks (such as Compact Disks (CDs) or Digital Versatile Disks (DVDs)), non-volatile Random Access Memory (RAM) devices, flash memories, and the like. In some embodiments, the peripheral storage unit 206 may include more complex storage devices/systems such as disk arrays (which may be in a suitable RAID (Redundant Array of Independent Disks) configuration) or Storage Area Networks (SANs), and the peripheral storage unit 206 may be coupled to the processor 19 via a standard peripheral interface such as a Small Computer System Interface (SCSI) interface, a Fibre Channel interface, a Firewire® (IEEE 1394) interface, a Peripheral Component Interface Express (PCI Express™) standard based interface, a Universal Serial Bus (USB) protocol based interface, or another suitable interface. Various such storage devices may be non-transitory data storage media.
The display unit 207 may be an example of an output device. Other examples of an output device include a graphics/display device, a computer screen, an alarm system, a CAD/CAM (Computer Aided Design/Computer Aided Machining) system, a video game station, a smartphone display screen, a dashboard-mounted display screen in an automobile, or any other type of data output device. In some embodiments, the input device(s), such as the imaging module 17, and the output device(s), such as the display unit 207, may be coupled to the processor 19 via an I/O or peripheral interface(s).
In one embodiment, the network interface 208 may communicate with the processor 19 to enable the system 15 to couple to a network (not shown). In another embodiment, the network interface 208 may be absent altogether. The network interface 208 may include any suitable devices, media and/or protocol content for connecting the system 15 to a network—whether wired or wireless. In various embodiments, the network may include Local Area Networks (LANs), Wide Area Networks (WANs), wired or wireless Ethernet, the Internet, telecommunication networks, satellite links, or other suitable types of network.
The system 15 may include an on-board power supply unit 210 to provide electrical power to the various system components.
In one embodiment, the imaging module 17 may be integrated with a high-speed interface such as, for example, a Universal Serial Bus 2.0 or 3.0 (USB 2.0 or 3.0) interface or above, that plugs into any Personal Computer (PC) or laptop. A non-transitory, computer-readable data storage medium, such as, for example, the system memory 20 or a peripheral data storage unit, such as a CD/DVD, may store program code or software. The processor 19 and/or the image processing unit 46 may be configured to execute the program code, whereby the device 15 may be operative to perform the TOF-based range measurements and 2D/3D imaging discussed herein.
In the preceding description, for purposes of explanation and not limitation, specific details are set forth (such as particular architectures, waveforms, interfaces, techniques, etc.) in order to provide a thorough understanding of the disclosed technology. However, it will be apparent to those skilled in the art that the disclosed technology may be practiced in other embodiments that depart from these specific details. That is, those skilled in the art will be able to devise various arrangements which, although not explicitly described or shown herein, embody the principles of the disclosed technology. In some instances, detailed descriptions of well-known devices, circuits, and methods are omitted so as not to obscure the description of the disclosed technology with unnecessary detail. All statements herein reciting principles, aspects, and embodiments of the disclosed technology, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, such as, for example, any elements developed that perform the same function, regardless of structure.
Thus, for example, it will be appreciated by those skilled in the art that block diagrams herein can represent conceptual views of illustrative circuitry or other functional units embodying the principles of the disclosed technology.
When certain inventive aspects require software-based processing, such software or program code may reside in a computer-readable data storage medium. As noted earlier, such data storage medium may be part of the peripheral storage 206, or may be part of the system memory 20 or any internal memory (not shown) of the image sensor unit 24, or the processor's 19 internal memory (not shown). In one embodiment, the processor 19 and/or the image processing unit 46 may execute instructions stored on such a medium to carry out the software-based processing. The computer-readable data storage medium may be a non-transitory data storage medium containing a computer program, software, firmware, or microcode for execution by a general purpose computer or a processor mentioned above. Examples of computer-readable storage media include a ROM, a RAM, a digital register, a cache memory, semiconductor memory devices, magnetic media such as internal hard disks, magnetic tapes and removable disks, magneto-optical media, and optical media such as CD-ROM disks and DVDs.
Alternative embodiments of the imaging module 17 or the system 15 comprising such an imaging module according to inventive aspects of the present disclosure may include additional components responsible for providing additional functionality, including any of the functionality identified above and/or any functionality necessary to support the solution as per the teachings of the present disclosure. Although features and elements are described above in particular combinations, each feature or element can be used alone without the other features and elements or in various combinations with or without other features. As mentioned before, various 2D and 3D imaging functions discussed herein may be provided through the use of hardware (such as circuit hardware) and/or hardware capable of executing software/firmware in the form of coded instructions or microcode stored on a computer-readable data storage medium (mentioned above). Thus, such functions and illustrated functional blocks are to be understood as being either hardware-implemented and/or computer-implemented, and thus machine-implemented.
The foregoing describes a system and method in which a DTOF technique is combined with analog amplitude modulation (AM) within each pixel in a pixel array. No SPADs or APDs are used in the pixels. Instead, each pixel has a PD with a conversion gain of over 400 μV/e− and PDE of more than 45%, operating in conjunction with a PPD (or a similar analog storage device). The TOF information is added to the received light signal by the analog domain-based single-ended to differential converter inside the pixel itself. The output of the PD in a pixel is used to control the operation of the PPD. The charge transfer from the PPD is stopped—and, hence, TOF value and range of an object are recorded—when the output from the PD in the pixel is triggered within a pre-defined time interval. Such pixels provide for an improved autonomous navigation system—with an AM-based DTOF sensor—for drivers under difficult driving conditions such as, for example, low light, fog, bad weather, and so on.
As will be recognized by those skilled in the art, the innovative concepts described in the present application can be modified and varied over a wide range of applications. Accordingly, the scope of patented subject matter should not be limited to any of the specific exemplary teachings discussed above, but is instead defined by the following claims.
Claims
1. A pixel in an image sensor, said pixel comprising:
- a Photo Diode (PD) unit having at least one PD that converts received luminance into an electrical signal, wherein the at least one PD has a conversion gain that satisfies a threshold;
- an amplifier unit connected in series with the PD unit to amplify the electrical signal and to responsively generate an intermediate output; and
- a Time-to-Charge Converter (TCC) unit coupled to the amplifier unit and receiving the intermediate output therefrom, wherein the TCC unit includes: a device that stores an analog charge, and a control circuit coupled to the device, wherein the control circuit performs operations comprising: initiating transfer of a first portion of the analog charge from the device, terminating the transfer in response to receipt of the intermediate output within a pre-defined time interval, and generating a first pixel-specific output for the pixel based on the first portion of the analog charge transferred.
2. The pixel of claim 1, wherein each of the PD unit, the amplifier unit, and the TCC unit comprises a Complementary Metal Oxide Semiconductor (CMOS) portion.
3. The pixel of claim 1, wherein the PD unit includes:
- a first PD that receives the luminance and generates the electrical signal in response thereto, wherein the first PD has the conversion gain that satisfies the threshold; and
- a second PD connected in parallel to the first PD, wherein the second PD is unexposed to the luminance and generates a reference signal based on a level of darkness detected thereby.
4. The pixel of claim 3, wherein the amplifier unit includes:
- a sense amplifier connected in series with the first and the second PDs to amplify the electrical signal upon sensing the electrical signal vis-à-vis the reference signal, wherein the sense amplifier generates the intermediate output upon amplifying the electrical signal in response to a control signal received thereby.
5. The pixel of claim 4, wherein the sense amplifier is a current sense amplifier.
6. The pixel of claim 1, wherein the device is one of the following:
- a Pinned Photo Diode (PPD);
- a photogate; and
- a capacitor.
7. The pixel of claim 1, wherein the control circuit includes an output terminal, and wherein the control circuit further performs the operations comprising:
- receiving an analog modulating signal;
- further receiving an external input;
- transferring the first portion of the analog charge as the first pixel-specific output through the output terminal in response to the external input and based on modulation provided by the analog modulating signal; and
- transferring a second portion of the analog charge as a second pixel-specific output through the output terminal in response to the external input, wherein the second portion is substantially equal to a remainder of the analog charge after the first portion is transferred.
8. The pixel of claim 7, wherein the control circuit includes a first node and a second node, and wherein the control circuit further performs the operations comprising:
- transferring the first portion of the analog charge from the device to the first node, from the first node to the second node, and from the second node to the output terminal as the first pixel-specific output; and
- transferring the second portion of the analog charge from the device to the first node, from the first node to the second node, and from the second node to the output terminal as the second pixel-specific output.
9. The pixel of claim 1, wherein the threshold is at least 400 μV per photoelectron.
10. A method comprising:
- projecting a laser pulse onto a three-dimensional (3D) object;
- applying an analog modulating signal to a device in a pixel, wherein the device stores an analog charge;
- initiating transfer of a first portion of the analog charge from the device based on modulation received from the analog modulating signal;
- detecting a returned pulse using the pixel, wherein the returned pulse is the projected laser pulse reflected from the 3D object, and wherein the pixel includes a Photo Diode (PD) unit having at least one PD that converts luminance received in the returned pulse into an electrical signal and that has a conversion gain that satisfies a threshold;
- processing the electrical signal using an amplifier unit in the pixel to responsively generate an intermediate output;
- terminating the transfer of the first portion of the analog charge in response to generation of the intermediate output within a pre-defined time interval; and
- determining a Time of Flight (TOF) value of the returned pulse based on the first portion of the analog charge transferred upon termination.
11. The method of claim 10, further comprising:
- generating a first pixel-specific output of the pixel from the first portion of the analog charge transferred from the device;
- transferring a second portion of the analog charge from the device, wherein the second portion is substantially equal to a remainder of the analog charge after the first portion is transferred;
- generating a second pixel-specific output of the pixel from the second portion of the analog charge transferred from the device;
- sampling the first and the second pixel-specific outputs using an Analog-to-Digital Converter (ADC) unit; and
- based on the sampling, generating a first signal value corresponding to the first pixel-specific output and a second signal value corresponding to the second pixel-specific output using the ADC unit.
12. The method of claim 11, further comprising:
- determining the TOF value of the returned pulse using a ratio of the first signal value to a total of the first and the second signal values.
13. The method of claim 12, further comprising:
- determining a distance to the 3D object based on the TOF value.
14. The method of claim 10, further comprising:
- further applying a shutter signal to the amplifier unit, wherein the shutter signal is applied a pre-determined time period after projecting the laser pulse;
- detecting the returned pulse using the pixel while the shutter signal as well as the analog modulating signal are active;
- providing a termination signal upon generation of the intermediate output while the shutter signal is active; and
- terminating the transfer of the first portion of the analog charge in response to the termination signal.
15. The method of claim 10, wherein detecting the returned pulse includes:
- receiving the luminance at a first PD in the PD unit, wherein the first PD has the conversion gain that satisfies the threshold;
- generating the electrical signal using the first PD; and
- further generating a reference signal using a second PD in the PD unit, wherein the second PD is connected in parallel to the first PD, is unexposed to the luminance, and generates the reference signal based on a level of darkness detected thereby.
16. The method of claim 15, wherein the amplifier unit is a sense amplifier connected in series with the first and the second PDs, and wherein processing the electrical signal includes:
- providing a shutter signal to the sense amplifier;
- sensing the electrical signal vis-à-vis the reference signal using the sense amplifier while the shutter signal is active; and
- generating the intermediate output by amplifying the electrical signal using the sense amplifier while the shutter signal is active.
17. The method of claim 10, wherein projecting the laser pulse includes:
- projecting the laser pulse using a light source that is one of the following: a laser light source; a light source that produces light in a visible spectrum; a light source that produces light in a non-visible spectrum; a monochromatic illumination source; an Infrared (IR) laser; an X-Y addressable light source; a point source with two-dimensional (2D) scanning capability; a sheet source with one-dimensional (1D) scanning capability; and a diffused laser.
18. The method of claim 10, wherein the threshold is at least 400 μV per photoelectron.
19. A system comprising:
- a light source that projects a laser pulse onto a three-dimensional (3D) object;
- a plurality of pixels, wherein each pixel includes: a pixel-specific Photo Diode (PD) unit having at least one PD that converts luminance received in a returned pulse into an electrical signal, wherein the at least one PD has a conversion gain that satisfies a threshold, and wherein the returned pulse results from reflection of the projected laser pulse by the 3D object, a pixel-specific amplifier unit connected in series with the pixel-specific PD unit to amplify the electrical signal and to responsively generate an intermediate output, and a pixel-specific Time-to-Charge Converter (TCC) unit coupled to the pixel-specific amplifier unit and receiving the intermediate output therefrom, wherein the pixel-specific TCC unit includes: a device that stores an analog charge, and a control circuit coupled to the device, wherein the control circuit performs operations comprising: initiating transfer of a pixel-specific first portion of the analog charge from the device, terminating the transfer of the pixel-specific first portion upon receipt of the intermediate output within a pre-defined time interval, generating a first pixel-specific output for the pixel based on the pixel-specific first portion of the analog charge transferred, transferring a pixel-specific second portion of the analog charge from the device, wherein the pixel-specific second portion is substantially equal to a remainder of the analog charge after the pixel-specific first portion is transferred, and generating a second pixel-specific output for the pixel based on the pixel-specific second portion of the analog charge transferred;
- a memory for storing program instructions; and
- a processor coupled to the memory and to the plurality of pixels, wherein the processor executes the program instructions, whereby the processor performs the following operations for each pixel in the plurality of pixels: facilitating transfers of the pixel-specific first and second portions of the analog charge, respectively, receiving the first and the second pixel-specific outputs, generating a pixel-specific pair of signal values based on the first and the second pixel-specific outputs, respectively, wherein the pixel-specific pair of signal values includes a pixel-specific first signal value and a pixel-specific second signal value, determining a corresponding pixel-specific Time of Flight (TOF) value of the returned pulse using the pixel-specific first signal value and the pixel-specific second signal value, and determining a pixel-specific distance to the 3D object based on the pixel-specific TOF value.
20. The system of claim 19, wherein the processor provides an analog modulating signal to the control circuit in the pixel-specific TCC unit in each pixel, and wherein the control circuit in the pixel-specific TCC unit controls an amount of the pixel-specific first portion of the analog charge to be transferred based on modulation provided by the analog modulating signal.
21. The system of claim 19, wherein the processor triggers the light source to project the laser pulse, wherein the light source is one of the following:
- a laser light source;
- a light source that produces light in a visible spectrum;
- a light source that produces light in a non-visible spectrum;
- a monochromatic illumination source;
- an Infrared (IR) laser;
- an X-Y addressable light source;
- a point source with two-dimensional (2D) scanning capability;
- a sheet source with one-dimensional (1D) scanning capability; and
- a diffused laser.
22. The system of claim 19, wherein the device in the pixel-specific TCC unit is one of the following:
- a Pinned Photo Diode (PPD);
- a photogate; and
- a capacitor.
23. The system of claim 19, wherein the threshold is at least 400 μV per photoelectron.
Type: Application
Filed: Mar 13, 2018
Publication Date: Jun 20, 2019
Inventor: Yibing Michelle WANG (Pasadena, CA)
Application Number: 15/920,430