Full-field light detection and ranging imaging system
Apparatuses and methods determine positional information from a reflected optical signal for an object on a per pixel basis. A spread spectrum imaging system includes a transmitting module transmitting an optical signal that illuminates a target space and contains a transmitted pulse modulated with a first pseudo-noise (PN) code. The imaging system includes a receiving module that receives a reflected optical signal from an object. The reflected signal is processed by an optical array that detects a detected signal from the reflected optical signal, where the detected signal contains a plurality of pixels spanning a target space. When the determined PN code corresponds to the selected PN code, image information and the positional information are presented for the object. When different positional information is obtained for different pixels in the image, the imaging system may determine that different objects appear in the received image.
The present invention relates to an imaging system that includes light detection and ranging (LIDAR) data for a pixel of imaging information.
BACKGROUND OF THE INVENTION

The United States military has designated mobile robotics and autonomous vehicle systems as having a high priority for future combat and warfare, and consequently there is a strong demand for more intelligent and reliable vision sensor subsystems in today's tactical vehicles. Future combat vehicles will be highly dependent upon a wide variety of robust sensor technologies both inside the vehicle (monitoring occupant presence and position) and external to the vehicle (monitoring vehicle orientation, relative closing velocities, and potential obstacles).
Prior art multi-camera vision systems typically used in military applications incorporate high-resolution, silicon-based day cameras as well as electro-optical infrared (EOIR) focal plane arrays. The far-infrared (FIR and LWIR) cameras typically operate in the 6 to 14 micrometer range and are usually based upon micro-bolometer devices, while the near-infrared systems (NIR and SWIR) are designed for ~850 nm to 1600 nm and use Si-photodiodes or narrow-bandgap materials. For most applications, time-critical image computing is crucial. Moreover, imaging systems span both military and commercial applications. Studies have shown, for example, that with an extra 0.5 second of pre-collision warning time, nearly 60% of all highway accidents involving conventional automobiles could be avoided altogether. For military combat vehicles, the time-critical requirements are even more demanding.
Current optical light detection and ranging (LIDAR) cameras are typically deployed along with conventional millimeter wave (mmW)-based RADAR throughout the future combat system (FCS) fleet, including applications for internal inspection, weapons targeting, boresighting, indirect driving, obstacle/enemy detection, and autonomous and semi-autonomous navigation. LIDAR cameras typically have a separate LIDAR detector since the imaging arrays have integration times that are too long for LIDAR data capture.
LIDAR systems have proven to be relatively reliable even when incorporated in low-cost systems. Multifunctional LIDAR systems not only measure distance but can also be pulsed and beam multiplexed to provide triangulation and angular information about potential road obstacles and targets, supporting environmental and situational awareness. LIDAR imaging systems offer a number of advantages over other conventional technologies. These advantages include:
- Good target discrimination and range resolution
- Capability of scanning both azimuthally and vertically using electronic monopulsing techniques
- Capability of imaging both near and far range objects using telescopic optics assuming a clear optical path
- Fast update and sampling rate (100 MHz is typical and is limited by the carrier frequency of 25 THz; 3 psec/m pulse transit time)
- Good temperature stability over a relatively wide range (−50° to 80° C.)
- The technology is highly developed and commercially expanding in optical communications, leading to both cost and technology windfalls.
Consequently, there is a real market need to provide LIDAR imaging systems that provide imaging and associated positional information (e.g., velocity and/or range of another vehicle) in an expeditious and efficient manner.
BRIEF SUMMARY OF THE INVENTION

An aspect of the invention provides an imaging system for determining positional information from a reflected optical signal for an object on a per pixel basis. Positional information may include ranging and/or velocity estimates of the object.
With another aspect of the invention, an imaging system includes a transmitting module that transmits an optical signal. The transmitted optical signal illuminates a target space (i.e., Field of View, or FOV) and contains a transmitted pulse that is modulated with a first pseudo-noise (PN) code.
With another aspect of the invention, an imaging system includes a receiving module that receives a reflected optical signal from an object. The reflected signal is processed by an optical array that detects a signal from the reflected optical signal, where the detected signal contains a plurality of pixels spanning a target space. A processing module determines pulse characteristics and subsequently obtains positional information on a per pixel basis. When the determined PN code corresponds to the selected PN code, image information and the positional information is presented for the object.
With another aspect of the invention, when different positional information is obtained for different pixels in the image, the imaging system determines that different objects appear in the image. The imaging system may distinguish the different objects in a display presented to a user.
With another aspect of the invention, an imaging system correlates a transmitted pulse and a received pulse to obtain a time delay and pulse width of the received pulse. The reflected optical signal is subsequently processed in accordance with the time delay and the pulse width.
With another aspect of the invention, an imaging system supports spread spectrum techniques, including direct sequence spread spectrum (DSSS), frequency hopping spread spectrum, and time hopping spread spectrum.
Not all of the above aspects are necessarily included in every embodiment of the invention.
A more complete understanding of the present invention and the advantages thereof may be acquired by referring to the following description in consideration of the accompanying drawings, in which like reference numbers indicate like features and wherein:
In the following description of the various embodiments, reference is made to the accompanying drawings which form a part hereof, and in which is shown by way of illustration various embodiments in which the invention may be practiced. It is to be understood that other embodiments may be utilized and structural and functional modifications may be made without departing from the scope of the present invention.
for an unresolved object (i.e., an object which is on the same order or smaller than the diffraction beam pattern), where σ is the target cross-section, AT corresponds to target aperture area 107, and AR corresponds to receive aperture area 109. For “large” objects (often referred as a resolved object):
As modeled in EQ. 1B, objects of interest for vehicle applications are relatively large under unity magnification and are thus characterized by their reflectivity more than their “radar” cross section. One notes that the above equations are formulated as optical power quantities, and in some embodiments of an imaging system the optical intensity may be expressed as
Referring to
From EQ. 1A, one observes that the transmitted power is scattered, attenuated, and diffracted isotropically and varies as 1/R⁴ for the unresolved case, but varies as 1/R² for the resolved case (corresponding to EQ. 1B). As a result, imaging and telescoping optics are typically utilized for long distances in order to keep beam divergence and optical losses as low as possible and to provide adequate optical power to the imaging system; the target and detection apertures 107 and 109 are then of the same order.
While
The closing velocity, or relative velocity, may be defined as νR=νsource−νtarget. In the above definition, one refers to light (optical) signals and quantities which occur at normal incidence, with the range effects and Doppler modulation occurring along the radial line connecting the imaged scene (a 2-D plane normal to the line of view) and the camera. To obtain vector motion, two such cameras would be necessary in order to determine, for example, the vector closing velocity or a distance parameter at an angle to the image plane. Also, the exemplary geometry shown in
Imaging system 200 generates transmitted optical signal 251 from baseband signal 261 through laser driver circuit 203 and laser diode array 205, which is modulated by PN-code generator 201. With an embodiment of the invention, the transmitted pulse is modulated with a selected pseudo-noise (PN) code (as shown in
With some embodiments of the invention, each PN code is orthogonal to the other PN codes contained in the set of orthogonal PN codes, where cn is the nth PN code and ∫cn·cm dt = 0 over the pulse duration when n ≠ m. As will be discussed, imaging system 200 performs an autocorrelation of the received optical signal to determine whether the received signal is modulated with the selected PN code. Exemplary PN codes may be obtained from Walsh functions having a desired degree of robustness against interference generated by other sources. Embodiments of the invention may also utilize semi-orthogonal codes, in which the corresponding peak autocorrelation value (e.g., as shown in
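As a rough illustration of this orthogonality property (a sketch, not part of the patent disclosure; the construction and names are illustrative), Walsh codes can be generated as rows of a Sylvester-type Hadamard matrix, and the zero inner-product condition checked in discrete form:

```python
import numpy as np

def walsh_codes(order: int) -> np.ndarray:
    """Return 2**order Walsh codes as rows of +/-1 chips."""
    h = np.array([[1]])
    for _ in range(order):
        h = np.block([[h, h], [h, -h]])  # Sylvester/Hadamard doubling step
    return h

codes = walsh_codes(5)            # 32 codes of 32 chips each
c_n, c_m = codes[3], codes[7]     # two distinct codes from the set

# Discrete analogue of the integral over the pulse duration:
assert np.dot(c_n, c_m) == 0          # distinct codes: inner product vanishes
assert np.dot(c_n, c_n) == len(c_n)   # same code: full autocorrelation peak
```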
While
Some embodiments of the invention process a received full-field LIDAR image from reflected optical signal 253 through receiver lens 209 with video-based camera sensor 211, which comprises a complementary metal-oxide semiconductor (CMOS), direct-addressable array. This arrangement circumvents the need to optically heterodyne the reflected optical or infrared carrier with an optical reference signal in order to obtain the Doppler frequency shift or time-of-flight (TOF) information. However, embodiments of the invention support other types of imaging arrays, including a charge-coupled device (CCD) array.
With embodiments of the invention, camera sensor 211 typically comprises a basic focal plane array (FPA). Camera sensor 211 may provide the same video output as a standard Si black-and-white or color camera. Typically, the array comprises a CMOS FPA, since one typically desires to rapidly scan each pixel value into both time delay correlator 215 and pulse width comparator 217 as well as into video ADC 219. System 200 typically scans an entire pulsed PN sequence with intensity information. However, the added circuitry that one often finds in an off-the-shelf, packaged camera is typically included as part of sensor 211. A typical camera includes the ADC, and usually a CODEC for modulating the video signal according to a particular data transmission standard (e.g., 10-GbE or DVI) with a particular compression format (e.g., H.264/MPEG-4/JVT, JPEG-2000, MPEG-7, etc.). A typical camera may also format the video signal as a Bayer pattern or as an RGB format for color images. However, with the operational characteristics of the exemplary embodiment shown in
Imaging system 200 supports measurements of pixel-specific range and radial closing velocity (i.e., the velocity of target vehicle 103 with respect to source vehicle 101) over the full image field. It thus supports applications requiring full-field distance and velocity information, such as a positioned military vehicle, complementing a standard visible camera or optical image sensor and greatly enhancing the performance of the latter. Imaging system 200 determines positional LIDAR data (e.g., velocity and/or ranging estimates) for each pixel of the overall image. The positional information may be subsequently added to image information on a per pixel basis.
Imaging system 200 may be applied to indirect and semiautonomous driving of tactical vehicles, obstacle detection, reconnaissance and surveillance, targeting for weapon systems, and adaptive, fully autonomous navigation of vehicles such as an autonomous robotic vehicle. In addition, imaging system 200 may be employed for standard ground or air vehicles such as automobiles and trucks to provide pre-crash detection, cruise control and parking assist among other functionalities. Another area of applicability encompasses machine vision for industrial automation in production and manufacturing environments, where semi-stationary robots are becoming more and more prevalent.
Each pixel of camera sensor 211 captures the relative velocity and range of the corresponding image points, instantaneously, at any given time. A pulsed active illumination is utilized and is modulated with a pseudo-noise (PN) coding of the pulse. The active illumination may typically be generated from a laser diode or other suitable coherent light source, e.g., a vertical cavity surface emitting laser (VCSEL) array, typically operating in the near infrared at ~800 to 1500 nm. The PN coding of the pulse provides a wideband modulated optical carrier that may allow the imaging system to discriminate from other active imaging systems or complementary active LIDAR detectors and may also reduce reflections or light signals from other sources, lowering noise and interference. The exemplary embodiment shown in
As previously discussed, transmitted optical signal 251 comprises a transmitted pulse that spans a target space (both the horizontal and vertical fields of view, denoted as HFOV and VFOV, respectively). Imaging system 200 processes the reflected optical signal 253 for each pixel of camera sensor 211 to obtain received baseband signal 263.
Reflected optical signal 253 comprises an ensemble of reflected optical components, each reflected component being reflected by a portion of an illuminated object and corresponding to a pixel. Ensemble components are spatially separated in the x and y directions so that an ensemble component (corresponding to a pixel) is selected through x-selection and y-selection circuitry of camera sensor 211. As shown in
Typically, imaging system 200 derives an analog signal for each pixel (e.g., 10 to 24-bit) with intensity information (and perhaps embedded color which is later converted to RGB information). The analog (quasi-digital) signal contains the original modulated PN code, which is used with correlator 215 and comparator 217. The correlation outputs are read into a high-speed electronic memory device 225 (e.g., an SDRAM, or other suitable memory device such as an FPGA, PLD, or Flash-RAM). The generated digital delay and digital pulse width then determine the time delay and pulse width used in subsequent processing.
High-speed electronic memory device 225 stores pulse characteristics for each pixel as determined by correlator 215 and comparator 217. Pulse characteristics may include the pulse delay and the pulse width of the ith pulse. Pulse characteristics may be different for different pixels. For example, pulse widths may be different if an optical signal is reflected by different objects moving at different velocities. Also, the pulse delay of a pulse varies with the range from system 200. Signal processor 221 subsequently utilizes the pulse characteristics to associate video information (obtained through video analog-to-digital converter 219) with each pixel.
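To make the roles of correlator 215 and comparator 217 concrete, here is a minimal sketch, assuming a sampled per-pixel waveform and a replica of the transmitted PN-coded pulse (the function and sample values are illustrative, not the patent's implementation):

```python
import numpy as np

def pulse_characteristics(rx: np.ndarray, ref: np.ndarray, fs: float):
    """Estimate time delay and pulse width (both in seconds) for one pixel.

    rx  -- sampled waveform detected at one pixel
    ref -- replica of the transmitted PN-coded pulse
    fs  -- sample rate in Hz
    """
    # Time-delay correlator: the lag of the cross-correlation peak.
    corr = np.correlate(rx, ref, mode="full")
    lag = int(np.argmax(np.abs(corr))) - (len(ref) - 1)
    delay = lag / fs

    # Pulse-width comparator: duration of the envelope above half maximum.
    env = np.abs(rx)
    above = np.flatnonzero(env >= 0.5 * env.max())
    width = (above[-1] - above[0] + 1) / fs
    return delay, width

# Example: the reference pulse echoed back 120 samples later.
fs = 1e9                                   # 1 GHz sampling, assumed
ref = np.repeat([1., -1., 1., 1., -1.], 20)
rx = np.concatenate([np.zeros(120), ref, np.zeros(60)])
print(pulse_characteristics(rx, ref, fs))  # -> (1.2e-07, 1e-07)
```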
In the following analysis one may assume that, in general, not all targets are resolvable, so that some images will not completely fill the field of receiver lens 209. Time-of-flight (TOF) techniques yield:

R = c·ΔtR/2 (EQ. 2)

where the range distance R is the straight-line distance between the transmitter (source vehicle 101) and the target (target vehicle 103), ΔtR equals the round-trip delay time for a pulse to return to the transceiver, and c equals the speed of light under atmospheric conditions. To obtain object-to-object spatial resolution within the round-trip delay time, one observes that in order to resolve the reflections of coherent light from two different objects, the reflections must be separated in time by an amount comparable to the pulse duration:

δR = c·τP/2 (EQ. 3)

which gives the resolution δR for a single-pixel LIDAR measurement, where τP is the pulse width. This is similar to the Rayleigh Criterion in optics, where
δx ≈ 1.22·λ·f/D

and D is the effective diameter of the optics/lens or aperture stops within the system, f is the focal length of these optics, λ is the optical carrier wavelength, and the resolution is given for two circular objects (as opposed to rectangular objects). One should note that the calculations by these two methods should, ostensibly, yield the same result. The sampling rate typically equals 1/τP. For distances relevant to the tactical vehicle range, such as obstacle detection and targeting, accuracies of 0.1 m or less may be desirable. This degree of accuracy gives a maximum pulse width of approximately 3 nsec. One may make a distinction here between resolution and precision. The resolution measures the ability of a LIDAR or RADAR to resolve or distinguish between separate targets that are close together in the measurement dimension. The precision gives the fluctuation or uncertainty in the measured value of a single target's position in the range, angle, or velocity dimension. The actual range error of each pixel, ∈RANGE, assuming that the pulse width is simply related to the bandwidth as B = 1/τPULSE, is given by:

∈RANGE ≈ c/(2B√SNR) (EQ. 4)

where SNR equals the signal-to-noise ratio for the LIDAR camera. One can determine the closing velocity resolution ∈VR for a relative or closing velocity, as defined previously, by:
νR = νSOURCE − νTARGET (EQ. 5)
If the source (source vehicle 101) and target (target vehicle 103) are moving toward each other, the reflected frequency will be increased (corresponding to a positive Doppler shift), whereas the opposite will occur if vehicles 101 and 103 are receding from each other (corresponding to a negative Doppler shift). The former situation is depicted in
where fR = f0 ± ΔfD = f0(1 ± 2νR/c). The plus and minus signs indicate whether the source (camera) and target objects are approaching or receding from each other, respectively. Consequently, the Doppler shift resolution is given by:
from which one can determine the error in the measured closing velocity as:
Using EQs. 2-8, calculations suggest that range/velocity measurement accuracy and resolution can be significantly increased as compared to current schemes that utilize kinematics-based algorithms to determine range rate or closing velocity. Kinematics-based algorithms are typically implemented as software schemes that perform image segmentation, feature extraction, edge detection, and image transformations. These techniques may require an extensive amount of computer memory, computational time for targeting assessment, and controller processing effort. Therefore, a significant decrease in image processing hardware and computing software can be achieved, reducing overall system latency, decreasing the software overhead, and freeing up valuable processor resources, thereby enabling full-field imaging. Consequently, the "raw" distance, angle, and image velocity parameters can be provided to image processor 221 without the need for the above statistical algorithms. In addition, continuous, stationary roadway and field obstacles or enemy vehicles, which presently cannot be targeted unambiguously by conventional optical or mmW RADAR systems, can be more effectively identified with the proposed method.
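As a rough numeric illustration of the range and Doppler relations above (a sketch using an assumed SNR and closing velocity, not figures from the patent):

```python
c = 3.0e8                      # speed of light, m/s

# Range resolution and precision for the 3 ns pulse cited in the text:
tau_p = 3e-9
B = 1 / tau_p                  # bandwidth, per the B = 1/tau assumption
delta_r = c * tau_p / 2        # two-target resolution: 0.45 m
snr = 20                       # ~13 dB signal-to-noise ratio, assumed
eps_range = c / (2 * B * snr ** 0.5)   # single-target precision: ~0.10 m

# Doppler shift from fR = f0(1 + 2*vR/c) for an approaching target:
lam0 = 910e-9                  # wavelength from the parameter list below
f0 = c / lam0                  # optical carrier, ~330 THz
v_r = 30.0                     # closing velocity, m/s (~108 km/h), assumed
delta_f_d = 2 * v_r * f0 / c   # ~66 MHz shift on the optical carrier
print(delta_r, eps_range, delta_f_d)
```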
Imaging system 200 allows a mapping of the dynamic LIDAR variables to a standard acquired image, by employing affine transformations and cosine matrices to correlate the LIDAR data with the standard image data, as well as utilizing extended Kalman filters for fusion with complementary image data such as mmW RADAR. Affine transformations are typically used to “boresight” or align the camera with other cameras or sensors on the vehicle. (An affine transform is a transformation that preserves collinearity between points, i.e., three points that lie on a line continue to be collinear after the transformation, and ratios of distances along a line.) In a general sense, any measurement may be translated to any other point on the vehicle (e.g., in alignment with a RADAR sensor, a weapon, or with the occupant of a passenger vehicle) by utilizing a cosine matrix corresponding to a process of applying an affine transformation. Kalman Filters are generally employed to do this for functions that are linear in 3-space. For non-linear functions of space, the extended Kalman filter is used.
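A minimal sketch of the boresighting idea (all values and names are illustrative, not the patent's transformation): a direction-cosine rotation plus a lever-arm offset carries a ranged point from the camera frame into another sensor's frame.

```python
import numpy as np

def boresight(points_cam: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Map 3-D points from the LIDAR-camera frame into another sensor frame.

    R -- 3x3 direction-cosine (rotation) matrix
    t -- 3-vector lever arm from the camera to the other sensor
    The affine map x' = R x + t preserves collinearity and distance ratios.
    """
    return points_cam @ R.T + t

# Example: a RADAR mounted 1.2 m forward and 0.3 m left of the camera,
# with a 2-degree yaw misalignment (assumed values for illustration).
yaw = np.deg2rad(2.0)
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0,          0.0,         1.0]])
t = np.array([1.2, -0.3, 0.0])
pts = np.array([[25.0, 1.5, 0.8]])   # one ranged pixel back-projected to 3-D
print(boresight(pts, R, t))
```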
As discussed above, in addition to object detection and invalid image detection (as will be discussed with
As an example of an application, consider a battlefield leader-follower situation for tactical vehicles which utilizes image data as the chaining modality. In order for the trailing vehicle to detect whether the leading vehicle is making a right turn, its prior art imaging system performs successive frame captures of the scene (or image) in front of it and uses feature extraction and statistical processing of the series of frames to determine whether the vectors from its own reference point to the various image points of the scene in front of it are changing and how they are changing successively. With imaging system 200, execution complexity may be significantly reduced to accessing the stored pixel data, which already contains each pixel range and velocity data. Near-range resolution may be obtained using pseudo-noise (PN) and correlation techniques, applied on a pixel by pixel basis. For suitable chip modulation, e.g., 1024 chips per pulse, a direct sequence PN scheme may achieve practically full discrimination against background noise.
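The discrimination afforded by a 1024-chip code can be sketched as follows (assumed amplitudes and random, merely semi-orthogonal codes for illustration): despreading with the selected code recovers the wanted echo, while an interfering code and background noise are suppressed by roughly the square root of the code length.

```python
import numpy as np

rng = np.random.default_rng(0)
n_chips = 1024                              # chips per pulse, from the text
own = rng.choice([-1.0, 1.0], n_chips)      # the selected PN code
other = rng.choice([-1.0, 1.0], n_chips)    # another source's PN code

# Received chip stream: weak echo of our code, an interferer, strong noise.
rx = 0.2 * own + 0.2 * other + rng.normal(0.0, 1.0, n_chips)

despread = np.dot(rx, own) / n_chips
# Own echo survives (~0.2); interferer and noise fold down by ~sqrt(1024),
# a processing gain of about 10*log10(1024) ~ 30 dB.
print(despread)
```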
Referring to
si(xi, yi) = si(ARi, τRi, t + ΔtRi) (EQ. 9)
This electrical signal is what is actually presented to the Digital Time Correlator 215, the Digital Pulse Comparator 217, as well as the Video ADC (analog-to-digital converter) 219.
With embodiments of the invention, image array 211 detects illumination in the 850 nm to 1500 nm range (near infrared or SWIR) using Si-CMOS photo detection. Image array 211 delivers image data, sometimes at data rates greater than 5 Gbits/sec depending upon the pixel resolution of the focal plane array camera (for reference, 2048×2048 ≈ 4 Mpixels). Image array 211 may utilize sophisticated, information-rich image processing and control systems in order to provide accurate image and target assessments that deliver the high mobility and lethality required for future combat systems. Image array 211 comprises a focal plane array which may be obtained by modifying an off-the-shelf camera or may be custom manufactured. Field programmable gate array (FPGA) 221 is typically deployed in imaging system 200 in order to provide high-bandwidth, parallel data processing. In addition, low sensor system data latency and high image quality and dynamic camera response (>100 dB) are typically desired. However, prior art LIDAR image cameras may not generate raw pixel data for relative velocity and range. Three-dimensional cameras typically incorporate the capability of extracting the image depth of focus but not ranging data. Flash cameras yield time-of-flight information using a single pulse but are not enabled for measuring image velocity. Instead, flash cameras typically use a standard CCD or CMOS focal-plane array coupled with a separate LIDAR subsystem (typically, a single-beam diode laser and a single Si-photodiode). The inclusion of LIDAR variable extraction may allow substantial relief to the on-board image processing and computing resources.
Components of imaging system 200 may be associated with different functional aspects including a transmitting module, a receiving module, and an optical array. As shown in
Imaging data is then selected by the CMOS de-multiplexing circuitry (corresponding to X-selection and Y-selection as shown in
While positional information is available on a per pixel basis, imaging system 200 may combine positional information (e.g., range estimates and velocity estimates) from a plurality of pixels to obtain estimates with greater certainty. For example, the range estimates and the velocity estimates over several pixels in an image may be averaged. If the difference between estimates of different pixels is sufficiently large (e.g., greater than a predetermined percentage of an averaged estimate), imaging system 200 may determine that more than one object appears in the image, where the different objects may have different ranges and different velocities. (Alternatively, imaging system 200 may determine an inconsistency in the determined positional information and consequently indicate the inconsistency.) If imaging system 200 determines that there are a plurality of objects (e.g., multiple target vehicles) in the reflected image, imaging system 200 may distinguish between different target vehicles in the display presented to a user. Process 300, as shown in
In step 303, imaging system 200 derives a received pulse for the first pixel and determines whether the received PN code equals the selected PN code of transmitted optical signal 601. Process 300 provides a degree of immunity to interference. For example, other source vehicles may be illuminating an approximate target space but with a different PN code. In addition, a target vehicle may be equipped with apparatus that provides countermeasures to the operation of imaging system 200. If step 305 determines that the received PN code does not equal the selected PN code, imaging system 200 indicates an invalid image detection.
When the received PN code equals the selected PN code, first positional information for the first pixel is obtained in step 307. For example, a range estimate of the target vehicle is determined from a delay of the received pulse with respect to the transmitted pulse. Also, a velocity estimate of the target vehicle with respect to the source vehicle may be determined from the Doppler frequency.
In step 309, received optical signal 801 is processed for a second pixel, which may not be adjacent to the first pixel. In step 309, a corresponding pulse is derived for the second pixel, and the received PN code is compared to the selected PN code in step 311. (While the exemplary embodiment shown in
With embodiments of the invention, if the first positional information is sufficiently different from the second positional information and the first and second pixels correspond to a sufficiently large spatial separation, imaging system 200 determines that a second object (e.g., a second target vehicle) is present within the target space. For example, if the first positional information includes ranging and/or velocity information that is sufficiently different from the ranging and/or velocity information of the second positional information, imaging system 200 may indicate that multiple objects are within the target space. If that is the case, imaging system 200 may include an indication (e.g., a text message informing the user of multiple objects) in a user output display (not shown in
With embodiments of the invention, if the first positional information is sufficiently similar to the second positional information, the positional information may be combined (e.g., averaging the ranging estimates and/or velocity estimates) to obtain combined positional information for the first and second pixels.
In step 315, imaging system 200 continues to process received optical signal 801 for the remaining pixels in a similar manner as for the first and second pixels (steps 303-313), as sketched below.
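Steps 303 through 315 can be summarized in a hypothetical sketch (the function, field names, and thresholding rule are illustrative assumptions, one plausible reading of the "sufficiently different" test):

```python
import numpy as np

C = 3.0e8  # speed of light, m/s

def process_frame(pixels, selected_code, f0, tol=0.1):
    """Per-pixel pass over one frame.

    pixels -- iterable of (pn_code, delay_s, doppler_hz) tuples per pixel
    Returns valid (range, velocity) estimates and a multiple-object flag.
    """
    estimates = []
    for pn_code, delay, doppler in pixels:
        if not np.array_equal(pn_code, selected_code):
            continue                      # wrong PN code: invalid image, discard
        rng_m = C * delay / 2             # range from the round-trip delay
        vel = C * doppler / (2 * f0)      # closing velocity from the Doppler shift
        estimates.append((rng_m, vel))

    if not estimates:
        return [], False
    ranges = np.array([r for r, _ in estimates])
    # Flag multiple objects when the range spread exceeds a fraction of the
    # mean; otherwise the per-pixel estimates may simply be averaged together.
    multiple_objects = bool(np.ptp(ranges) > tol * ranges.mean())
    return estimates, multiple_objects
```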
Typical system parameters for military or robotic applications are estimated as follows:
- Laser Output Power=10 watts continuous (40 dBm)
- Wavelength=910 nm or 850 nm or 1550 nm
- Pulse Duration=15 nsec.
- Emitted Beam Angle: Azimuth=20°
- Elevation=12°
- Detector Sensitivity < 1 μamp/μwatt
- Distance Range = 5 to 100 m
- Range Resolution < 1 cm
- System Temperature range=−45° C. to 125° C.
- Power Consumption=15 watts @ 28 volts nominal
Imaging system 200 may support different spread spectrum techniques, including direct-sequence spread spectrum, frequency-hopping spread spectrum, and time-hopping spread spectrum, in order to obtain positional information on a per pixel basis. By expanding the frequency spectrum of signal 251, imaging system 200 determines positional information from received signal 253 on a per pixel basis.
where φRi is the phase of the received optical intensity and φ0 is the phase of the transmitted beam.
Comparing
The baseband output subsequently modulates the transmitted optical signal through modulator 505. In this technique, the PRN sequence is applied directly to data entering the carrier modulator. The modulator therefore sees a much larger transmission rate, which corresponds to the chip rate of the PRN sequence. The result of modulating an RF carrier with such a code sequence is to produce a direct-sequence-modulated spread spectrum with a ((sin x)/x)² frequency spectrum, centered at the carrier frequency. The optical signal is amplified by amplifier 507 to obtain the desired power level for transmission through transmission lens 207 (shown in
The transmitted spectrum of a frequency-hopping signal is quite different from that of a direct sequence system. Instead of a ((sin x)/x)²-shaped envelope, the frequency hopper's output is flat over the band of frequencies used. The bandwidth of a frequency-hopping signal is simply the number of frequency slots available multiplied by N, where N is the bandwidth of each hop channel.
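The ((sin x)/x)² envelope of a direct-sequence signal can be verified numerically with a sketch like the following (the chip rate and lengths are assumed values):

```python
import numpy as np

rng = np.random.default_rng(1)
chip_rate = 100e6                        # 100 MHz chip rate, assumed
samples_per_chip, n_chips = 8, 4096
fs = chip_rate * samples_per_chip

# Baseband DSSS stream: random +/-1 chips with rectangular chip shaping.
x = np.repeat(rng.choice([-1.0, 1.0], n_chips), samples_per_chip)

spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)
freqs = np.fft.rfftfreq(len(x), 1.0 / fs)

# sinc^2 roll-off: power near DC dwarfs power near the first null, which
# falls at the chip rate (100 MHz).
near_dc = spec[freqs < 5e6].mean()
near_null = spec[(freqs > 95e6) & (freqs < 105e6)].mean()
print(near_dc / near_null)               # large ratio confirms the envelope
```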
While not explicitly shown in the figures, embodiments of the invention may support a time-hopping spread spectrum (THSS) signal, in which on and off sequences to a power amplifier are applied in accordance with the selected PN code.
One can express
such that the Doppler change in wavelength is proportional to the change in the Pulse Width, τRi. With respect to
where the M denotes the measured or correlated parameter. EQ. 11 becomes, for the ith pixel,
For the range, gating from Time Delay Correlator 215 yields ΔtMi, and a measurement of the range variable of:

Ri = c·ΔtMi/2

for the distance or range to the ith pixel object in the object field.
Referring to
iPD = η·e·POPT·λ0/(h·c) = η·e·POPT/(ℏω)

where iPD is the photodiode current, POPT is the optical power, h is Planck's constant, e is the electronic charge, η is the quantum efficiency, λ0 is the transmitter wavelength, c is the speed of light in the atmosphere, and ω is the optical carrier frequency. Therefore a 3-dB drop in optical power results in a 6-dB drop in electrical power. Thus, the electrical bandwidth f3dB,ELEC is the frequency at which the optical power is 1/√2 times the D.C. value. Hence, f3dB,ELEC is given by:
One notes that ΔτPULSE is the dispersion in the pulse width and is added to ΔτD, corresponding to the Doppler shift in the pulse-width.
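A quick numeric check of the photodiode relation above (with an assumed quantum efficiency; the wavelength is taken from the parameter list) reproduces the detector sensitivity figure of less than 1 μamp/μwatt quoted earlier:

```python
h = 6.626e-34          # Planck's constant, J*s
e = 1.602e-19          # electron charge, C
c = 3.0e8              # speed of light, m/s

eta = 0.8              # quantum efficiency, assumed
lam0 = 910e-9          # transmitter wavelength from the parameter list
p_opt = 1e-6           # 1 uW of received optical power

i_pd = eta * e * lam0 * p_opt / (h * c)   # photocurrent per the relation above
print(i_pd * 1e6)                         # ~0.59 uA, i.e. < 1 uA/uW sensitivity
```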
As discussed above, some embodiments of the invention support the following capabilities:
- Creates a full-field image of the LIDAR data on a per-pixel basis: Prior art imaging systems typically use a plurality of pixels to compute LIDAR data (ranging and/or velocity estimates) from several pixels or perhaps a line of pixels. For example, LIDAR data may be obtained from an average of image scene pixels from a forward-looking camera.
- Circumventing heterodyning: By measuring the correlation of the pulse width with an electronic version of the reference pulse, one may circumvent the complexities associated with externally heterodyning using beam splitters and associated optics. Prior art imaging systems may add a pulse train modulation on top of the single TOF pulse in order to reduce the difficulty of trying to measure the difference harmonics at 10 to 100 Terahertz (i.e., Doppler shift in frequency).
As can be appreciated by one skilled in the art, a computer system with an associated computer-readable medium containing instructions for controlling the computer system can be utilized to implement the exemplary embodiments that are disclosed herein. The computer system may include at least one computer such as a microprocessor, microcontroller, digital signal processor, and associated peripheral electronic circuitry.
While the invention has been described with respect to specific examples including presently preferred modes of carrying out the invention, those skilled in the art will appreciate that there are numerous variations and permutations of the above described systems and techniques that fall within the spirit and scope of the invention as set forth in the appended claims.
Claims
1. An imaging system, comprising:
- a transmitting module configured to transmit a transmitted optical signal containing a transmitted pulse, the transmitted pulse being modulated with a selected pseudo-noise (PN) code, the transmitted optical signal arranged to illuminate a target space containing a first object, wherein the transmitting module is configured to include identification information in addition to the selected PN code, the identification information is contained in the transmitted optical signal, the identification information comprises a plurality of bits that represent a source identification of the selected PN code, and the source identification comprises a vehicle identification;
- a receiving module configured to receive a reflected optical signal reflected from the first object and to derive a first received pulse from the reflected optical signal;
- an optical array configured to detect a detected signal from the reflected optical signal across a plurality of pixels of the optical array, the detected signal spanning the target space, a first pixel being contained in the plurality of pixels; and
- a processing module configured to process data for the plurality of pixels by:
- determining first pulse characteristics of the first received pulse;
- obtaining first positional information for the first object from the first pulse characteristics;
- when the selected PN code is detected from the first pixel, providing first image information and the first positional information for the first object; and
- discarding second pixel data for a second pixel when a PN code associated with the second pixel is different from the selected PN code, wherein the PN code associated with the second pixel is generated by a source that is different from the transmitting module that is associated with the selected PN code.
2. The imaging system of claim 1, the processing module further configured to obtain the first positional information by determining a first velocity estimate of the first object only from the first pulse characteristics.
3. The imaging system of claim 1, the processing module further configured to process the second pixel data for the second pixel by:
- determining second pulse characteristics of a second received pulse for the second pixel, the second received pulse being derived from the reflected optical signal;
- obtaining second positional information for a second object from the second pulse characteristics, the second positional information being sufficiently different from the first positional information, the target space containing the second object; and
- when the selected PN code is detected from the second pixel, providing second image information and the second positional information for the second object.
4. The imaging system of claim 3, the processing module further configured to provide an indication that distinguishes the first object from the second object.
5. The imaging system of claim 3, the processing module further configured to obtain the second positional information by determining a second range estimate and a second velocity estimate of the second object from the second pulse characteristics.
6. The imaging system of claim 1, the processing module further configured to process the second pixel data for the second pixel by:
- determining second pulse characteristics of a second received pulse, the second received pulse being derived from the reflected optical signal;
- obtaining second positional information for the first object from the second pulse characteristics, the second positional information being essentially equal to the first positional information; and
- when the selected PN code is detected from the second pixel, providing second image information for the first object.
7. The imaging system of claim 6, the processing module further configured to process the data by:
- determining combined positional information from the first positional information and the second positional information; and
- providing the second image information with the combined positional information.
8. The imaging system of claim 7, wherein the processing module is configured to average the first and second positional information to obtain the combined positional information.
9. The imaging system of claim 1, the processing module further configured to process the data for the plurality of pixels by:
- correlating the first pulse characteristics of the first received pulse and the transmitted pulse to obtain a time delay and a pulse width of the received pulse; and
- processing the reflected optical signal in accordance with the time delay and the pulse width.
10. The imaging system of claim 1, the optical array comprising a charge-coupled device (CCD) array.
11. The imaging system of claim 1, the optical array comprising a complementary metal-oxide semiconductor (CMOS) array.
12. The imaging system of claim 1, the transmitting module comprising a frequency hopping spread spectrum modulator.
13. The imaging system of claim 1, the transmitting module comprising a direct sequence spread spectrum (DSSS) modulator.
14. The imaging system of claim 1, the transmitting module comprising a time hopping spread spectrum modulator.
15. The imaging system of claim 1, the optical array configured to heterodyne an optical reference signal with the reflected optical signal.
16. The imaging system of claim 15, the transmitting module comprising an optical extractor extracting a portion of the transmitted optical signal to provide the optical reference signal.
17. The imaging system of claim 1, wherein the processing module is configured to discard third pixel data for a third pixel when received identification information in the third pixel data is different from the identification information.
18. The imaging system of claim 17, wherein a third pixel PN code associated with the third pixel equals the selected PN code.
19. The imaging system of claim 1, the processing module further configured to obtain the first positional information by determining a first range estimate of the first object from the first pulse characteristics.
20. A method for forming an image of a target space, comprising:
- transmitting an optical signal that illuminates the target space and contains a transmitted pulse modulated with a selected pseudo-noise (PN) code, the target space containing a first object;
- receiving a reflected optical signal reflected from the first object and deriving a first received pulse from the reflected optical signal;
- detecting a detected signal from the reflected optical signal across a plurality of pixels of an input optical array, the detected signal spanning the target space, a first pixel being contained in the plurality of pixels; and
- processing data for the plurality of pixels by:
- determining first pulse characteristics of the first received pulse;
- obtaining first positional information for the first object from the first pulse characteristics; and
- when the selected PN code is detected from the first pixel, providing first image information and the first positional information for the first object;
- discarding second pixel data for a second pixel when a PN code associated with the second pixel is different from the selected PN code, wherein the PN code associated with the second pixel is generated by a source that is different from a transmitting module that is associated with the selected PN code; and
- discarding third pixel data for a third pixel when received identification information is different from the identification information and a third pixel PN code associated with the third pixel matches the selected PN code, wherein the identification information is contained in the transmitted optical signal and comprises a plurality of bits that represent a source identification of the selected PN code.
21. The method of claim 20, wherein the obtaining the first positional information comprises:
- determining a first range estimate and a velocity estimate of the first object only from the first pulse characteristics.
22. The method of claim 20, further comprising:
- processing the second pixel data for the second pixel by:
- determining second pulse characteristics of a second received pulse, the second received pulse being derived from the reflected optical signal;
- obtaining second positional information for a second object from the second pulse characteristics, the second positional information being sufficiently different from the first positional information, the target space containing the second object; and
- when the selected PN code is detected from the second pixel, providing second image information and the second positional information for the second object.
23. The method of claim 22, further comprising:
- providing an indication that distinguishes the first object from the second object.
24. The method of claim 22, wherein the obtaining the second positional information further comprises:
- determining a second range estimate and a second velocity estimate of the second object from the second pulse characteristics.
25. The method of claim 20, further comprising:
- processing the second pixel data for the second pixel by:
- determining second pulse characteristics of a second received pulse, the second received pulse being derived from the reflected optical signal;
- obtaining second positional information for the first object from the second pulse characteristics, the second positional information being essentially equal to the first positional information; and
- when the selected PN code is detected from the second pixel, providing second image information for the first object.
26. The method of claim 25, further comprising:
- determining combined positional information from the first positional information and the second positional information; and
- providing the second image information with the combined positional information.
27. The method of claim 20, wherein the processing further comprises:
- correlating the first pulse characteristics of the first received pulse and the transmitted pulse to obtain a time delay and a pulse width of the received pulse; and
- processing the reflected optical signal in accordance with the time delay and the pulse width.
28. The method of claim 20, further comprising:
- repeating the transmitting, the receiving, the detecting, and the processing for another pixel from the plurality of pixels.
29. The method of claim 28, further comprising:
- presenting an image with the plurality of pixels, each pixel being associated with corresponding positional information.
30. The method of claim 20, further comprising:
- bypassing the processing and routing the detected signal to an optical display when activating night vision operation.
31. An imaging system, comprising:
- a transmitting module configured to transmit a transmitted optical signal containing a transmitted pulse modulated with a selected pseudo-noise (PN) code using a direct sequence spread spectrum (DSSS) technique, the transmitted optical signal illuminating a target space containing an object;
- a receiving module configured to receive a reflected optical signal reflected from the object and deriving a received pulse from the reflected optical signal;
- an optical array configured to detect a detected signal from the reflected optical signal across a plurality of pixels of the optical array, the detected signal spanning the target space, a processed pixel being contained in the plurality of pixels; and
- a processing module configured to process data for the processed pixel by:
- determining pulse characteristics of the received pulse;
- correlating the pulse characteristics of the received pulse and the transmitted pulse to obtain a time delay and a pulse width of the received pulse;
- processing the reflected optical signal in accordance with the time delay and the pulse width to obtain a range estimate and a velocity estimate for the object;
- when the selected PN code is detected from the processed pixel, providing image information and the range estimate and the velocity estimate for the object;
- when an associated PN code from the processed pixel is different from the selected PN code, discarding the image information, wherein the associated PN code is generated by a source that is different from the transmitting module that is associated with the selected PN code; and
- discarding third pixel data for a third pixel when received identification information is different from the identification information and a third pixel PN code associated with the third pixel matches the selected PN code, wherein the identification information is contained in the transmitted optical signal and comprises a plurality of bits that represent a source identification of the selected PN code.
References Cited

5751830 | May 12, 1998 | Hutchinson et al.
6031601 | February 29, 2000 | McCusker et al. |
6147747 | November 14, 2000 | Kavaya et al. |
6714286 | March 30, 2004 | Wheel |
6860350 | March 1, 2005 | Beuhler |
20040051664 | March 18, 2004 | Frank |
20040077306 | April 22, 2004 | Shor et al. |
20050278098 | December 15, 2005 | Breed |
20060119833 | June 8, 2006 | Hinderling et al. |
20060227317 | October 12, 2006 | Henderson et al. |
20080088818 | April 17, 2008 | Mori |
Type: Grant
Filed: Aug 28, 2007
Date of Patent: Jun 29, 2010
Patent Publication Number: 20090059201
Assignee: Science Applications International Corporation (San Diego, CA)
Inventors: Christopher Allen Willner (Rochester, MI), James McDowell (Rochester Hills, MI)
Primary Examiner: Thomas H Tarcza
Assistant Examiner: Luke D Ratcliffe
Attorney: Banner & Witcoff, Ltd.
Application Number: 11/845,858
International Classification: G01P 3/36 (20060101);