PROCESSING TIME-SERIES MEASUREMENTS FOR LIDAR ACCURACY
An optical measurement system may include a light source and corresponding photosensor configured to emit and detect photons reflected from objects in a surrounding environment for optical measurements. An initial peak can be identified as resulting from reflections off a housing of the optical measurement system. This peak can be removed or used to calibrate measurement calculations of the system. Peaks resulting from reflections off surrounding objects can be processed using on-chip filters to identify potential peaks, and the unfiltered data can be passed to an off-chip processor for distance calculations and other measurements. A spatial filtering technique may be used to combine values from histograms for spatially adjacent pixels in a pixel array. This combination can be used to increase the confidence for distance measurements.
This application is a continuation of International Application No. PCT/US2020/055265 filed Oct. 12, 2020, entitled “PROCESSING TIME-SERIES MEASUREMENTS FOR LIDAR ACCURACY” which claims the benefit of U.S. Provisional Patent Application No. 62/913,604, filed on Oct. 10, 2019, entitled “PROCESSING TIME-SERIES MEASUREMENTS FOR LIDAR ACCURACY.” The disclosures of these applications are incorporated herein by reference.
BACKGROUND
Light Detection And Ranging (LIDAR) systems are used for object detection and ranging, e.g., for vehicles such as cars, trucks, boats, etc. LIDAR systems also have uses in mobile applications (e.g., for face recognition), home entertainment (e.g., to capture gestures for video game input), and augmented reality. A LIDAR system measures the distance to an object by irradiating a landscape with pulses from a laser, and then measuring the time for photons to travel to an object and return after reflection, as measured by a receiver of the LIDAR system. A detected signal is analyzed to detect the presence of reflected signal pulses among background light. A distance to an object can be determined based on a time-of-flight from transmission of a pulse to reception of a corresponding reflected pulse.
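The time-of-flight relation described above can be sketched in a few lines of Python. This is an illustrative example, not part of the claimed embodiments: distance is half the measured round-trip time multiplied by the speed of light.

```python
# Illustrative time-of-flight distance calculation (not from the application).

C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Distance to a target given the measured round-trip time in seconds."""
    # The pulse travels to the object and back, so divide by two.
    return C * round_trip_s / 2.0

# A 100 ns round trip corresponds to roughly 15 m.
print(round(tof_distance_m(100e-9), 2))  # 14.99
```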
It can be difficult to provide robust distance accuracy down to a few centimeters in all conditions, particularly at an economical cost for the LIDAR system. Promising new detector technologies, like single photon avalanche diodes (SPADs), are attractive but have significant drawbacks when used to measure time-of-flight and other signal characteristics due to their limited dynamic range, particularly over a broad range of ambient conditions and target distances. Additionally, because of their sensitivity to even a small number of photons, SPADs can be very susceptible to ambient levels of background noise light.
BRIEF SUMMARY
In some embodiments, an optical measurement system may include a housing and a light source configured to transmit one or more pulse trains over one or more time intervals as part of an optical measurement, where each of the one or more time intervals may include one of the one or more pulse trains. The system may also include a photosensor configured to detect photons from the one or more pulse trains that are reflected off of the housing of the optical measurement system, and to detect photons from the one or more pulse trains that are reflected off of objects in an environment surrounding the optical measurement system. The system may additionally include a plurality of registers configured to accumulate photon counts from the photosensor received during the one or more time intervals. Each of the one or more time intervals may be subdivided into a plurality of time bins. Each of the plurality of registers may be configured to accumulate photon counts received during a corresponding one of the plurality of time bins in each of the one or more time intervals to represent a histogram of photon counts received during the one or more time intervals. The system may further include a circuit configured to identify an initial peak in the histogram of photon counts. The initial peak may represent the photons reflected from the housing of the optical measurement system.
In any embodiments, any or all of the following features may be included in any combination and without limitation. The circuit may be configured to identify the initial peak by identifying a predetermined number of registers in the plurality of registers that occur first in the plurality of registers. The circuit may be configured to identify the initial peak by identifying one or more registers in the plurality of registers storing a highest number of photon counts. The circuit may be configured to identify the initial peak by identifying registers in the plurality of registers with time bins that correspond to a distance between the light source and the housing of the optical measurement system. The circuit may be further configured to identify a subset of the plurality of registers that represents the initial peak. The subset of the plurality of registers may be identified by selecting a predetermined number of registers around a register storing a maximum value of the initial peak. The subset of the plurality of registers may be identified by selecting registers around a register storing a maximum value of the initial peak that store values that are within a predetermined percentage of the maximum value. The circuit may be further configured to estimate a distance between the light source and the housing of the optical measurement system based on a location of the initial peak in the plurality of registers. The circuit may be further configured to calibrate distance measurements using the distance estimated between the light source and the housing. The system may also include a processor configured to receive additional peaks in the histogram stored in the plurality of registers to calculate distances to objects in the surrounding environment corresponding to the additional peaks, where the initial peak may be excluded from the additional peaks received by the processor. 
The processor may be implemented in an integrated circuit that is separate and distinct from an integrated circuit in which the plurality of registers is implemented.
In some embodiments, a method of detecting a peak reflected from a housing in an optical measurement system may include transmitting one or more pulse trains over one or more time intervals as part of an optical measurement. Each of the one or more time intervals may include one of the one or more pulse trains. The method may also include detecting photons from the one or more pulse trains that are reflected off of the housing of the optical measurement system, and detecting photons from the one or more pulse trains that are reflected off of objects in an environment surrounding the optical measurement system. The method may additionally include accumulating counts of the photons received during the one or more time intervals into a plurality of registers. Each of the one or more time intervals may be subdivided into a plurality of time bins, and each of the plurality of registers may accumulate photon counts received during a corresponding one of the plurality of time bins in each of the one or more time intervals to represent a histogram of photon counts received during the one or more time intervals. The method may further include identifying an initial peak in the histogram of photon counts. The initial peak may represent the photons reflected from the housing of the optical measurement system.
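The register/histogram accumulation described above can be sketched as follows. This is an illustrative example under assumed inputs (per-shot, per-bin photon counts), not part of the claimed embodiments: each detection interval is divided into time bins, and one register per bin accumulates photon counts across all shots of a measurement.

```python
# Hypothetical sketch of histogram accumulation across shots (illustrative only).
from typing import List

def accumulate_histogram(shots: List[List[int]], num_bins: int) -> List[int]:
    """Sum per-shot, per-bin photon counts into a single histogram of registers."""
    registers = [0] * num_bins
    for shot in shots:
        for bin_idx, count in enumerate(shot):
            registers[bin_idx] += count
    return registers

# Three shots, four time bins each; bin 2 collects the reflected pulse.
shots = [[0, 1, 5, 0], [1, 0, 6, 1], [0, 0, 4, 0]]
print(accumulate_histogram(shots, 4))  # [1, 1, 15, 1]
```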
In any embodiments, any or all of the following features may be included in any combination and without limitation. The method may also include identifying a second initial peak as part of a second optical measurement; and comparing the second initial peak to the initial peak. The method may also include characterizing a change in transparency of a window in a housing of the optical measurement system based on comparing the second initial peak to the initial peak. The method may additionally include identifying a plurality of initial peaks detected by a plurality of different photosensors in the optical measurement system; and determining a level of transparency for corresponding sections of a window in a housing of the optical measurement system in front of each of the plurality of photosensors based on the plurality of initial peaks. The method may also include comparing a maximum value of the initial peak to a threshold; and determining whether a blockage is located external to the optical measurement system based on comparing the maximum value of the initial peak to the threshold. The method may additionally include identifying a plurality of initial peaks over a plurality of measurements; and storing a baseline initial peak based on a combination of the plurality of initial peaks over the plurality of measurements for comparison to future optical measurements. The method may also include subtracting the baseline initial peak from the plurality of registers. A second peak may at least partially overlap with the initial peak, and subtracting the baseline initial peak may cause the second peak to be detectable by a peak detection circuit. The second peak may correspond to an object in the environment surrounding the optical measurement system that is within two feet of the optical measurement system.
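The baseline-subtraction feature described above can be sketched as follows. This is an illustrative example with assumed histogram values, not part of the claimed embodiments: a stored baseline of the housing reflection is subtracted from the raw histogram so that a near-field return overlapping the initial peak becomes detectable.

```python
# Hypothetical sketch of baseline initial-peak subtraction (illustrative only).
from typing import List

def subtract_baseline(histogram: List[int], baseline: List[int]) -> List[int]:
    """Subtract a stored baseline initial peak, clamping counts at zero."""
    return [max(h - b, 0) for h, b in zip(histogram, baseline)]

# Bins 0-2 hold the housing reflection; a close object adds counts near bin 2.
raw      = [9, 14, 12, 2, 1, 0]
baseline = [9, 13, 4, 1, 0, 0]
print(subtract_baseline(raw, baseline))  # [0, 1, 8, 1, 1, 0]
```

After subtraction, the residual counts around bin 2 stand out as a detectable second peak even though it overlapped the housing reflection in the raw histogram.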
In some embodiments, an optical measurement system may include a light source configured to transmit one or more pulse trains over one or more time intervals as part of an optical measurement. Each of the one or more time intervals may include one of the one or more pulse trains. The system may also include a photosensor configured to detect photons from the one or more pulse trains that are reflected from an object in an environment surrounding the optical measurement system. The system may additionally include a plurality of registers configured to accumulate photon counts from the photosensor received during the one or more time intervals to represent an unfiltered histogram of photon counts received during the one or more time intervals. The system may further include a filter circuit configured to provide a filtered histogram of the photon counts from the plurality of registers. The system may also include a peak detection circuit configured to detect a location of a peak in the filtered histogram, and identify, using the location of the peak in the filtered histogram, locations in the plurality of registers storing an unfiltered representation of the peak.
In any embodiments, any or all of the following features may be included in any combination and without limitation. The system may also include a processor configured to receive the unfiltered representation of the peak and calculate a distance to the object in the environment surrounding the optical measurement system using the unfiltered representation of the peak. The filter circuit may be configured to provide the filtered histogram by applying a matched filter that corresponds to the one or more pulse trains. A pulse train in the one or more pulse trains may include a plurality of square pulses. The filter circuit may be configured to low-pass filter the unfiltered histogram. The system may also include a second plurality of registers that stores the filtered histogram. The filtered histogram may be generated on a single pass through the plurality of registers. The peak may be detected during the single pass through the plurality of registers such that the filtered histogram is not stored in its entirety. The peak detection circuit may be configured to detect the location of the peak by detecting increasing values followed by decreasing values in the plurality of registers. The processor may be implemented on an integrated circuit (IC) that is separate and distinct from an IC on which the plurality of registers is implemented. The light source and the photosensor may form a pixel in a plurality of pixels in the optical measurement system.
In some embodiments, a method of analyzing filtered and unfiltered data in an optical measurement system may include transmitting one or more pulse trains over one or more first time intervals as part of an optical measurement. Each of the one or more first time intervals may include one of the one or more pulse trains. The method may also include detecting photons from the one or more pulse trains that are reflected off an object in an environment surrounding the optical measurement system; populating a plurality of registers using the photons to represent an unfiltered histogram of photon counts received during the one or more first time intervals; filtering the unfiltered histogram in the plurality of registers to provide a filtered histogram of the photons from the plurality of registers; detecting a location of a peak in the filtered histogram; identifying, using the location of the peak in the filtered histogram, locations in the plurality of registers storing an unfiltered representation of the peak; and sending the unfiltered representation of the peak to a processor to calculate a distance to the object in the environment surrounding the optical measurement system using the unfiltered representation of the peak.
In any embodiments, any or all of the following features may be included in any combination and without limitation. Sending the unfiltered representation of the peak may include sending information identifying histogram time bins represented in the plurality of registers that store the unfiltered representation of the peak. Filtering the unfiltered histogram in the plurality of registers may include convolving the unfiltered histogram with at least one square filter having a plurality of identical values. The plurality of identical values may include a sequence of binary “1” values and/or a sequence of “−1” values. Filtering the unfiltered histogram in the plurality of registers may include convolving the unfiltered histogram with at least one sequence comprising a non-zero value followed by a plurality of zero values. The at least one sequence may include a single binary “1” value or a single binary “−1” value followed by a plurality of “0” values. The method may also include sending a filtered representation of the peak to the processor in addition to sending the unfiltered representation of the peak. The location of the peak in the filtered histogram may be detected as a single peak in the filtered histogram; and the unfiltered representation of the peak may include at least two peaks in the unfiltered histogram. One of the at least two peaks in the unfiltered histogram may represent a peak resulting from a reflection of the one or more pulse trains off a housing or window of the optical measurement system. The photons may be detected using a plurality of photodetectors in a photosensor.
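The filtered/unfiltered processing described above can be sketched as follows. This is an illustrative example with assumed values and helper names, not part of the claimed embodiments: the raw histogram is convolved with a square filter of identical "1" values, the peak of the filtered output is located, and that location is used to identify the raw registers holding the unfiltered representation of the peak.

```python
# Hypothetical sketch: square matched filter, peak detection, and mapping the
# filtered peak back to the unfiltered registers (illustrative only).
from typing import List

def matched_filter(histogram: List[int], width: int) -> List[int]:
    """Convolve with a square filter of `width` identical '1' values."""
    return [sum(histogram[i:i + width]) for i in range(len(histogram))]

def find_peak(filtered: List[int]) -> int:
    """Index of the maximum of the filtered histogram."""
    return max(range(len(filtered)), key=lambda i: filtered[i])

raw = [1, 0, 2, 7, 9, 8, 1, 0]
filt = matched_filter(raw, 3)
loc = find_peak(filt)
# The unfiltered registers covered by the filter at the peak location would be
# the ones sent to the off-chip processor for the distance calculation.
print(loc, raw[loc:loc + 3])  # 3 [7, 9, 8]
```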
In some embodiments, an optical measurement system may include a plurality of light sources configured to emit one or more pulse trains over one or more time intervals as part of an optical measurement. The system may also include a plurality of photosensors configured to detect reflected photons from the one or more pulse trains emitted from corresponding light sources in the plurality of light sources. The plurality of photosensors may include a first photosensor and one or more other photosensors that are spatially adjacent to the first photosensor. The system may additionally include a plurality of memory blocks configured to accumulate photon counts of the photons received during the one or more time intervals by corresponding photosensors in the plurality of photosensors to represent a plurality of histograms of photon counts. The plurality of histograms may include a first histogram corresponding to the first photosensor and one or more histograms corresponding to the one or more other photosensors. The system may further include a circuit configured to combine information from the first histogram with information from the one or more other histograms to generate a distance measurement for the first photosensor.
In any embodiments, any or all of the following features may be included in any combination and without limitation. The one or more photosensors may be physically adjacent to the first photosensor in an array of photosensors. The array of photosensors may include a solid-state array of photosensors. The one or more photosensors may include eight photosensors that are orthogonally adjacent or diagonally adjacent to the first photosensor. The one or more photosensors need not be physically adjacent to the first photosensor in an array of photosensors, but the one or more photosensors may be positioned to receive photons from physical areas that are adjacent to a physical area from which photons are received by the first photosensor. The plurality of photosensors may be arranged in an array of photosensors that rotates around a central axis of the optical measurement system. The information from the first histogram may include a first distance measurement calculated based on the first histogram; the information from the one or more histograms may include one or more other distance measurements calculated based on the one or more other histograms; and the distance measurement may include a combination of the first distance measurement with the one or more other distance measurements. The first distance measurement may be below a detection limit of the optical measurement system before combining the first distance measurement with the one or more other distance measurements. The distance measurement may be above the detection limit of the optical measurement system after combining the first distance measurement with the one or more other distance measurements. The detection limit may represent a minimum number of photons received by a corresponding photosensor.
The circuit to combine the information from the first histogram with the information from the one or more histograms may include a processor implemented on an integrated circuit that is different from an integrated circuit on which the plurality of memory blocks is implemented. The circuit and the plurality of memory blocks may be implemented on a same integrated circuit.
In some embodiments, a method of using spatially adjacent pixel information in an optical measurement system may include transmitting one or more pulse trains over one or more time intervals as part of an optical measurement; and detecting, using a plurality of photosensors, reflected photons from the one or more pulse trains. The plurality of photosensors may include a first photosensor and one or more photosensors that are spatially adjacent to the first photosensor. The method may also include accumulating photon counts received during the one or more time intervals by the plurality of photosensors to represent a plurality of histograms of photon counts. The plurality of histograms may include a first histogram corresponding to the first photosensor and one or more histograms corresponding to the one or more photosensors. The method may additionally include combining information from the first histogram with information from the one or more histograms to generate a distance measurement for the first photosensor.
In any embodiments, any or all of the following features may be included in any combination and without limitation. The reflected photons received by the first photosensor and received by the one or more photosensors may be reflected from a same object in the surrounding environment. The information from the first histogram may include photon counts in the first histogram; the information from the one or more histograms may include photon counts in the one or more histograms; and the distance measurement may be calculated based on an aggregation of the photon counts in the first histogram with the photon counts in the one or more histograms. The information from the first histogram may include first one or more peaks in the first histogram; the information from the one or more histograms may include second one or more peaks in the one or more histograms; and the distance measurement may be calculated based on a combination of the first one or more peaks and the second one or more peaks. The distance measurement may be calculated based on a summation of the first one or more peaks and the second one or more peaks. The distance measurement may be calculated based on a Gaussian combination of the first one or more peaks and the second one or more peaks. The distance measurement may be calculated based on a convolution of the first one or more peaks and the second one or more peaks. The distance measurement may be calculated based on a weighted combination of the first one or more peaks and the second one or more peaks.
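The spatial-aggregation feature described above can be sketched as follows. This is an illustrative example with assumed histogram values and a hypothetical `weight` parameter, not part of the claimed embodiments: photon counts from a pixel's histogram are summed, optionally with weights, with the histograms of its spatially adjacent pixels, raising a weak return above a detection limit.

```python
# Hypothetical sketch of combining spatially adjacent pixel histograms
# (illustrative only; the weighted combination is one of several options
# described above, alongside Gaussian combination and convolution).
from typing import List

def combine_neighbors(center: List[int],
                      neighbors: List[List[int]],
                      weight: float = 1.0) -> List[float]:
    """Weighted sum of the center histogram with its neighbors' histograms."""
    combined = [float(c) for c in center]
    for hist in neighbors:
        for i, count in enumerate(hist):
            combined[i] += weight * count
    return combined

center    = [0, 1, 3, 1]          # too weak to clear a threshold alone
neighbors = [[0, 1, 4, 0], [1, 0, 3, 1]]
print(combine_neighbors(center, neighbors))  # [1.0, 2.0, 10.0, 2.0]
```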
A further understanding of the nature and advantages of various embodiments may be realized by reference to the remaining portions of the specification and the drawings, wherein like reference numerals are used throughout the several drawings to refer to similar components. In some instances, a sub-label is associated with a reference numeral to denote one of multiple similar components. When reference is made to a reference numeral without specification to an existing sub-label, it is intended to refer to all such multiple similar components.
The term “ranging,” particularly when used in the context of methods and devices for measuring an environment or assisting with vehicle operations, may refer to determining a distance or a distance vector from one location or position to another location or position. “Light ranging” may refer to a type of ranging method that makes use of electromagnetic waves to perform ranging methods or functions. Accordingly, a “light ranging device” may refer to a device for performing light ranging methods or functions. “Lidar” or “LIDAR” may refer to a type of light ranging method that measures a distance to a target by illuminating the target with pulsed laser light, and thereafter measuring the reflected pulses with a sensor. Accordingly, a “lidar device” or “lidar system” may refer to a type of light ranging device for performing lidar methods or functions. A “light ranging system” may refer to a system comprising at least one light ranging device, e.g., a lidar device. The system may further comprise one or more other devices or components in various arrangements.
A “pulse train” may refer to one or more pulses that are transmitted together. The emission and detection of a pulse train may be referred to as a “shot.” A shot can occur over a “detection time interval” (or “detection interval”).
A “measurement” may include N multiple pulse trains that are emitted and detected over N shots, each lasting a detection time interval. An entire measurement can be over a measurement time interval (or just “measurement interval”), which may equal the N detection intervals of a measurement or be longer, e.g., when pauses occur between detection intervals.
A “photosensor” or “photosensitive element” can convert light into an electrical signal. A photosensor may include a plurality of “photodetectors,” e.g., single-photon avalanche diodes (SPADs). A photosensor can correspond to a particular pixel of resolution in a ranging measurement.
A “histogram” may refer to any data structure representing a series of values over time, as discretized over time bins. A histogram can have a value assigned to each time bin. For example, a histogram can store a counter of a number of photodetectors that fired during a particular time bin in each of one or more detection intervals. As another example, a histogram can correspond to the digitization of an analog signal at different times. A histogram can include signal (e.g., pulses) and noise. Thus, a histogram can be considered a combination of signal and noise as a photon time series or photon flux. A raw/digitized histogram (or accumulated photon time series) can contain the signal and the noise as digitized in memory without filtering. A “filtered histogram” may refer to the output after the raw histogram is passed through a filter.
An emitted signal/pulse may refer to the “nominal,” “ideal,” or “template” pulse or pulse train that is not distorted. A reflected signal/pulse may refer to the reflected laser pulse from an object and may be distorted. A digitized signal/pulse (or raw signal) may refer to the digitized result from the detection of one or more pulse trains of a detection interval as stored in memory, and thus may be equivalent to a portion of a histogram. A detected signal/pulse may refer to the location in memory where the signal was detected. A detected pulse train may refer to the actual pulse train found by a matched filter. An anticipated signal profile may refer to a shape of a digitized signal resulting from a particular emitted signal that has a particular distortion in the reflected signal.
DETAILED DESCRIPTION
The present disclosure relates generally to the field of object detection and ranging, and more particularly to the use of time-of-flight optical receiver systems for applications such as real-time three-dimensional mapping and object detection, tracking and/or classification. Various improvements can be realized with various embodiments of the present invention.
Sections below introduce an illustrative automotive LIDAR system, followed by descriptions of example techniques to detect signals by a light ranging system, and then different embodiments are described in more detail.
I. Illustrative Automotive Lidar System
The scanning LIDAR system 101 shown in
For a stationary architecture, like solid state LIDAR system 103 shown in
In either the scanning or stationary architectures, objects within the scene can reflect portions of the light pulses that are emitted from the LIDAR light sources. One or more reflected portions then travel back to the LIDAR system and can be detected by the detector circuitry. For example, reflected portion 117 can be detected by detector circuitry 109. The detector circuitry can be disposed in the same housing as the emitters. Aspects of the scanning system and stationary system are not mutually exclusive and thus can be used in combination. For example, the individual LIDAR subsystems 103a and 103b in
LIDAR system 200 can interact with one or more instantiations of user interface 215. The different instantiations of user interface 215 can vary and may include, e.g., a computer system with a monitor, keyboard, mouse, CPU and memory; a touch-screen in an automobile; a handheld device with a touch-screen; or any other appropriate user interface. The user interface 215 may be local to the object upon which the LIDAR system 200 is mounted but can also be a remotely operated system. For example, commands and data to/from the LIDAR system 200 can be routed through a cellular network (LTE, etc.), a personal area network (Bluetooth, Zigbee, etc.), a local area network (WiFi, IR, etc.), or a wide area network such as the Internet.
The user interface 215 of hardware and software can present the LIDAR data from the device to the user but can also allow a user to control the LIDAR system 200 with one or more commands. Example commands can include commands that activate or deactivate the LIDAR system; specify photo-detector exposure level, bias, sampling duration, and other operational parameters (e.g., emitted pulse patterns and signal processing); and specify light emitter parameters such as brightness. In addition, commands can allow the user to select the method for displaying results. The user interface can display LIDAR system results which can include, e.g., a single frame snapshot image, a constantly updated video image, and/or a display of other light measurements for some or all pixels. In some embodiments, user interface 215 can track distances (proximity) of objects from the vehicle, and potentially provide alerts to a driver or provide such tracking information for analytics of a driver's performance.
In some embodiments, the LIDAR system can communicate with a vehicle control unit 217 and one or more parameters associated with control of a vehicle can be modified based on the received LIDAR data. For example, in a fully autonomous vehicle, the LIDAR system can provide a real time 3D image of the environment surrounding the car to aid in navigation. In other cases, the LIDAR system can be employed as part of an advanced driver-assistance system (ADAS) or as part of a safety system that, e.g., can provide 3D image data to any number of different systems, e.g., adaptive cruise control, automatic parking, driver drowsiness monitoring, blind spot monitoring, collision avoidance systems, etc. When a vehicle control unit 217 is communicably coupled to light ranging device 210, alerts can be provided to a driver, or the proximity of an object can be tracked.
The LIDAR system 200 shown in
The Tx module 240 includes an emitter array 242, which can be a one-dimensional or two-dimensional array of emitters, and a Tx optical system 244, which when taken together can form an array of micro-optic emitter channels. Emitter array 242 or the individual emitters are examples of laser sources. The Tx module 240 further includes processor 245 and memory 246. In some embodiments, a pulse coding technique can be used, e.g., Barker codes and the like. In such cases, memory 246 can store pulse-codes that indicate when light should be transmitted. In one embodiment, the pulse-codes are stored in memory as a sequence of integers.
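As an illustration of why codes like Barker sequences are attractive for pulse coding, the sketch below (an assumption for illustration, not from the application) stores a length-5 Barker code as a sequence of integers and computes its autocorrelation: the zero-lag peak dominates the sidelobes, which aids matched filtering of the detected pulse train.

```python
# Hypothetical sketch: a Barker-5 pulse code and its autocorrelation
# (illustrative only; the application stores codes as integers in memory 246).

BARKER_5 = [1, 1, 1, -1, 1]  # classic length-5 Barker sequence

def autocorrelate(code, lag):
    """Aperiodic autocorrelation of the code at a given lag."""
    return sum(a * b for a, b in zip(code, code[lag:]))

# Sharp zero-lag peak (5) with sidelobes of magnitude at most 1.
print([autocorrelate(BARKER_5, lag) for lag in range(5)])  # [5, 0, 1, 0, 1]
```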
The Rx module 230 can include sensor array 236, which can be, e.g., a one-dimensional or two-dimensional array of photosensors. Each photosensor or photosensitive element (also referred to as a sensor) can include a collection of photodetectors, e.g., APDs or the like, or a sensor can be a single photon detector (e.g., a SPAD). Like the Tx module 240, the Rx module 230 includes an Rx optical system 237. The Rx optical system 237 and sensor array 236 taken together can form an array of micro-optic receiver channels. Each micro-optic receiver channel measures light that corresponds to an image pixel in a distinct field of view of the surrounding volume. Each sensor (e.g., a collection of SPADs) of sensor array 236 can correspond to a particular emitter of emitter array 242, e.g., as a result of a geometrical configuration of light sensing module 230 and light transmission module 240.
In one embodiment, the sensor array 236 of the Rx module 230 is fabricated as part of a monolithic device on a single substrate (using, e.g., CMOS technology) that includes both an array of photon detectors and an ASIC 231 for signal processing the raw histograms from the individual photon detectors (or groups of detectors) in the array. As an example of signal processing, for each photon detector or grouping of photon detectors, memory 234 (e.g., SRAM) of the ASIC 231 can accumulate counts of detected photons over successive time bins, and these time bins taken together can be used to recreate a time series of the reflected light pulse (i.e., a count of photons vs. time). This time-series of aggregated photon counts is referred to herein as an intensity histogram (or just histogram). The ASIC 231 can implement matched filters and peak detection processing to identify return signals in time. In addition, the ASIC 231 can accomplish certain signal processing techniques (e.g., by processor 238), such as multi-profile matched filtering to help recover a photon time series that is less susceptible to pulse shape distortion that can occur due to SPAD saturation and quenching. In some embodiments, all or parts of such filtering can be performed by processor 258, which may be embodied in an FPGA.
In some embodiments, the Rx optical system 237 can also be part of the same monolithic structure as the ASIC, with separate substrate layers for each receiver channel layer. For example, an aperture layer, collimating lens layer, an optical filter layer and a photo-detector layer can be stacked and bonded at the wafer level before dicing. The aperture layer can be formed by laying a non-transparent substrate on top of a transparent substrate or by coating a transparent substrate with an opaque film. In yet other embodiments, one or more components of the Rx module 230 may be external to the monolithic structure. For example, the aperture layer may be implemented as a separate metal sheet with pin-holes.
In some embodiments, the photon time-series output from the ASIC is sent to the ranging system controller 250 for further processing, e.g., the data can be encoded by one or more encoders of the ranging system controller 250 and then sent as data packets to user interface 215. The ranging system controller 250 can be realized in multiple ways, e.g., by using a programmable logic device such as an FPGA, as an ASIC or part of an ASIC, using a processor 258 with memory 254, or some combination of the above. The ranging system controller 250 can cooperate with a stationary base controller or operate independently of the base controller (via pre-programmed instructions) to control the light sensing module 230 by sending commands that include start and stop light detection controls and controls that adjust photo-detector parameters. Similarly, the ranging system controller 250 can control the light transmission module 240 by sending commands, or relaying commands from the base controller, that include start and stop light emission controls and controls that can adjust other light-emitter parameters (e.g., pulse codes). In some embodiments, the ranging system controller 250 has one or more wired interfaces or connectors for exchanging data with the light sensing module 230 and with the light transmission module 240. In other embodiments, the ranging system controller 250 communicates with the light sensing module 230 and light transmission module 240 over a wireless interconnect such as an optical communication link.
The electric motor 260 is an optional component that may be needed when system components, e.g., the Tx module 240 and/or Rx module 230, need to rotate. The system controller 250 controls the electric motor 260 and can start rotation, stop rotation, and vary the rotation speed.
II. Detection of Reflected Pulses
The photosensors can be arranged in a variety of ways for detecting reflected pulses. For example, the photosensors can be arranged in an array, and each photosensor can include an array of photodetectors (e.g., SPADs). Different patterns of pulses (pulse trains) transmitted during a detection interval are also described below.
A. Time-of-Flight Measurements and Detectors
A start time 315 for the transmission of the pulse does not need to coincide with the leading edge of the pulse. As shown, the leading edge of light pulse 310 may occur after the start time 315. Having the leading edge differ from the start time can be useful in situations where different patterns of pulses are transmitted at different times, e.g., for coded pulses.
An optical receiver system can start detecting received light at the same time as the laser is started, i.e., at the start time. In other embodiments, the optical receiver system can start at a later time, which is at a known time after the start time for the pulse. The optical receiver system detects background light 330 initially and after some time detects the laser pulse reflection 320. The optical receiver system can compare the detected light intensity against a threshold to identify the laser pulse reflection 320. The threshold can distinguish the background light 330 from light corresponding to the laser pulse reflection 320.
The time-of-flight 340 is the time difference between the pulse being sent and the pulse being received. The time difference can be measured by subtracting the transmission time of the pulse (e.g., as measured relative to the start time) from a received time of the laser pulse reflection 320 (e.g., also measured relative to the start time). The distance to the target can be determined as half the product of the time-of-flight and the speed of light. Pulses from the laser device reflect from objects in the scene at different times, and the pixel array detects the reflected pulses.
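The distance computation described above can be sketched directly. This is a minimal illustration; the constant name and the example time-of-flight value are our own, chosen only for demonstration.

```python
C_M_PER_NS = 0.299792458  # speed of light in meters per nanosecond

def distance_from_tof(time_of_flight_ns):
    """Round-trip time to one-way distance: half the product of
    the time-of-flight and the speed of light."""
    return C_M_PER_NS * time_of_flight_ns / 2.0

# A reflection received 340 ns after emission corresponds to a target
# roughly 51 m away.
d = distance_from_tof(340.0)
```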
B. Detection of Objects Using Array Lasers and Array of Photosensors
Light ranging system 400 includes a light emitter array 402 and a light sensor array 404. The light emitter array 402 includes an array of light emitters, e.g., an array of VCSELs and the like, such as emitter 403 and emitter 409. Light sensor array 404 includes an array of photosensors, e.g., sensors 413 and 415. The photosensors can be pixelated light sensors that employ, for each pixel, a set of discrete photodetectors such as single photon avalanche diodes (SPADs) and the like. However, various embodiments can deploy any type of photon sensors.
Each emitter can be slightly offset from its neighbor and can be configured to transmit light pulses into a different field of view from its neighboring emitters, thereby illuminating a respective field of view associated with only that emitter. For example, emitter 403 emits an illuminating beam 405 (formed from one or more light pulses) into the circular field of view 407 (the size of which is exaggerated for the sake of clarity). Likewise, emitter 409 emits an illuminating beam 406 (also called an emitter channel) into the circular field of view 410. While not shown in
Each field of view that is illuminated by an emitter can be thought of as a pixel or spot in the corresponding 3D image that is produced from the ranging data. Each emitter channel can be distinct to each emitter and be non-overlapping with other emitter channels, i.e., there is a one-to-one mapping between the set of emitters and the set of non-overlapping fields of view. Thus, in the example of
Each sensor can be slightly offset from its neighbor and, like the emitters described above, each sensor can see a different field of view of the scene in front of the sensor. Furthermore, each sensor's field of view substantially coincides with, e.g., overlaps with and is the same size as a respective emitter channel's field of view.
In
Because the fields of view of the emitters are overlapped with the fields of view of their respective sensors, each sensor channel can ideally detect the reflected illumination beam that originates from its respective emitter channel with no cross-talk, i.e., no reflected light from other illuminating beams is detected. Thus, each photosensor can correspond to a respective light source. For example, emitter 403 emits an illuminating beam 405 into the circular field of view 407, and some of the illuminating beam reflects from the object 408. Ideally, a reflected beam 411 is detected by sensor 413 only. Thus, emitter 403 and sensor 413 share the same field of view, e.g., field of view 407, and form an emitter-sensor pair. Likewise, emitter 409 and sensor 415 form an emitter-sensor pair, sharing field of view 410. While the emitter-sensor pairs are shown in
During a ranging measurement, the reflected light from the different fields of view distributed around the volume surrounding the LIDAR system is collected by the various sensors and processed, resulting in range information for any objects in each respective field of view. As described above, a time-of-flight technique can be used in which the light emitters emit precisely timed pulses, and the reflections of the pulses are detected by the respective sensors after some elapsed time. The elapsed time between emission and detection and the known speed of light is then used to compute the distance to the reflecting surface. In some embodiments, additional information can be obtained by the sensor to determine other properties of the reflecting surface in addition to the range. For example, the Doppler shift of a pulse can be measured by the sensor and used to compute the relative velocity between the sensor and the reflecting surface. The pulse strength can be used to estimate the target reflectivity, and the pulse shape can be used to determine if the target is a hard or diffuse material.
In some embodiments, the LIDAR system can be composed of a relatively large 2D array of emitter and sensor channels and operate as a solid state LIDAR, i.e., it can obtain frames of range data without the need to scan the orientation of the emitters and/or sensors. In other embodiments, the emitters and sensors can be scanned, e.g., rotated about an axis, to ensure that the fields of view of the sets of emitters and sensors sample a full 360 degree region (or some useful fraction of the 360 degree region) of the surrounding volume. The range data collected from the scanning system, e.g., over some predefined time period, can then be post-processed into one or more frames of data that can then be further processed into one or more depth images or 3D point clouds. The depth images and/or 3D point clouds can be further processed into map tiles for use in 3D mapping and navigation applications.
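Post-processing range samples into a 3D point cloud typically means converting each channel's measured range and pointing angles into Cartesian coordinates. The sketch below assumes a simple azimuth/elevation convention; the function name and the convention itself are illustrative assumptions, not specified by this document.

```python
import math

def spherical_to_cartesian(range_m, azimuth_deg, elevation_deg):
    """Convert one ranging sample (range plus the channel's pointing
    angles) into a 3D point for a point cloud."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# A 10 m return at 90 degrees azimuth, 0 degrees elevation lands on the y-axis.
point = spherical_to_cartesian(10.0, 90.0, 0.0)
```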
C. Multiple Photodetectors in Each Photosensor
Array 520 shows a magnified view of a portion of array 510. As can be seen, each photosensor 515 is composed of a plurality of photodetectors 525. Signals from the photodetectors of a pixel collectively contribute to a measurement for that pixel.
In some embodiments, each pixel has a multitude of single-photon avalanche diode (SPAD) units that increase the dynamic range of the pixel itself. Each SPAD can have an analog front-end circuit for biasing, quenching, and recharging. SPADs are normally biased with a bias voltage above the breakdown voltage. A suitable circuit senses the leading edge of the avalanche current, generates a standard output pulse synchronous with the avalanche build-up, quenches the avalanche by lowering the bias below the breakdown voltage, and restores the photodiode to the operative level.
The SPADs may be positioned so as to maximize the fill factor in their local area, or a microlens array may be used, which allows for high optical fill factors at the pixel level. Accordingly, an imager pixel can include an array of SPADs to increase the efficiency of the pixel detector. A diffuser may be used to spread rays passed through an aperture and collimated by a microlens. The diffuser can serve to spread the collimated rays so that all the SPADs belonging to the same pixel receive some radiation.
Binary signal 545, avalanche current 534, and pixel counters 550 are examples of data values that can be provided by a photosensor composed of one or more SPADs. The data values can be determined from respective signals from each of the plurality of photodetectors. Each of the respective signals can be compared to a threshold to determine whether a corresponding photodetector triggered. Avalanche current 534 is an example of an analog signal, and thus the respective signals can be analog signals.
Pixel counters 550 can use binary signal 545 to count the number of photodetectors for a given pixel that have been triggered by one or more photons during a particular time bin (e.g., a time window of 1, 2, 3, etc. ns) as controlled by periodic signal 560. Pixel counters 550 can store counters for each of a plurality of time bins for a given measurement. The value of the counter for each time bin can start at zero and be incremented based on binary signal 545 indicating a detection of a photon. The counter can increment when any photodetector of the pixel provides such a signal.
Periodic signal 560 can be produced by a phase-locked loop (PLL) or delay-locked loop (DLL) or any other method of producing a clock signal. The coordination of periodic signal 560 and pixel counter 550 can act as a time-to-digital converter (TDC), which is a device for recognizing events and providing a digital representation of the time they occurred. For example, a TDC can output the time of arrival for each detected photon or optical pulse. The measured time can be an elapsed time between two events (e.g., start time and detected photon or optical pulse) rather than an absolute time. Periodic signal 560 can be a relatively fast clock that switches between a bank of memory comprising pixel counter 550. Each register in memory can correspond to one histogram bin, and the clock can switch between them at the sampling interval. Accordingly, a binary value indicating a triggering can be sent to the histogram circuitry when the respective signal is greater than the threshold. The histogram circuitry can aggregate binary values across the plurality of photodetectors to determine a number of photodetectors that triggered during a particular time bin.
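The behavior of the pixel counters and the TDC-style binning described above can be emulated in software. This sketch bins hypothetical photodetector trigger times, measured relative to the start signal, into histogram counters; all names and values are illustrative.

```python
def bin_photon_counts(trigger_times_ns, bin_width_ns, num_bins):
    """Emulate the pixel counter: each photodetector trigger increments
    the histogram counter covering its arrival time relative to the
    start signal; triggers outside the measurement window are dropped."""
    counters = [0] * num_bins
    for t in trigger_times_ns:
        b = int(t // bin_width_ns)
        if 0 <= b < num_bins:
            counters[b] += 1
    return counters

# Three SPAD triggers at 0.4, 1.2, and 1.9 ns with 1 ns time bins:
# one count lands in bin 0, two counts land in bin 1.
hist = bin_photon_counts([0.4, 1.2, 1.9], bin_width_ns=1.0, num_bins=4)
```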
The time bins can be measured relative to a start signal, e.g., at start time 315 of
D. Pulse Trains
Ranging may also be accomplished by using a pulse train, defined as containing one or more pulses. Within a pulse train, the number of pulses, the widths of the pulses, and the time duration between pulses (collectively referred to as a pulse pattern) can be chosen based on a number of factors, some of which include:
- 1—Maximum laser duty cycle—The duty cycle is the fraction of time the laser is on. For a pulsed laser this could be determined by the FWHM as explained above and the number of pulses emitted during a given period.
- 2—Eye safety limits—This is determined by the maximum amount of radiation a device can emit without damaging the eyes of a bystander who happens to be looking in the direction of the LIDAR system.
- 3—Power consumption—This is the power that the emitter consumes for illuminating the scene.
For example, the spacing between pulses in a pulse train can be on the order of single digits or 10 s of nanoseconds.
Multiple pulse trains can be emitted during the time span of one measurement. Each pulse train can correspond to a different time interval, e.g., a subsequent pulse train is not emitted until an expiration of the time limit for detecting reflected pulses of a previous pulse train.
For a given emitter or laser device, the time between the emissions of pulse trains determines the maximum detectable range. For example, if pulse train A is emitted at time t0=0 ns, and pulse train B is emitted at time t1=1000 ns, then one must not assign reflected pulse trains detected after t1 to pulse train A, as they are much more likely to be reflections from pulse train B. Thus, the time between pulse trains and the speed of light define a maximum bound on the range of the system given in the following equation.
Rmax=c×(t1−t0)/2
The time between shots (emission and detection of pulse trains) can be on the order of 1 μs to allow enough time for the entire pulse train to travel to a distant object approximately 150 meters away and then back.
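The maximum-range bound above can be sketched numerically; the constant and function names are our own.

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458

def max_unambiguous_range(shot_period_s):
    """Rmax = c * (t1 - t0) / 2: reflections arriving after the next
    pulse train is emitted cannot be attributed unambiguously, so the
    time between shots bounds the detectable range."""
    return SPEED_OF_LIGHT_M_PER_S * shot_period_s / 2.0

# A 1 microsecond shot period bounds the range at about 150 m,
# consistent with the example above.
r_max = max_unambiguous_range(1e-6)
```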
III. Histogram Signals from Photodetectors
One mode of operation of a LIDAR system is time-correlated single photon counting (TCSPC), which is based on counting single photons in a periodic signal. This technique works well for low levels of periodic radiation which is suitable in a LIDAR system. This time correlated counting may be controlled by periodic signal 560 of
The frequency of the periodic signal can specify a time resolution within which data values of a signal are measured. For example, one measured value can be obtained for each photosensor per cycle of the periodic signal. In some embodiments, the measurement value can be the number of photodetectors that triggered during that cycle. The time period of the periodic signal corresponds to a time bin, with each cycle being a different time bin.
The counter for each of the time bins corresponds to a different bar in histogram 600. The counters at the early time bins are relatively low and correspond to background noise 630. At some point, a reflected pulse 620 is detected. The corresponding counters are much larger and may be above a threshold that discriminates between background and a detected pulse. The reflected pulse 620 (after digitizing) is shown corresponding to four time bins, which might result from a laser pulse of a similar width, e.g., a 4 ns pulse when time bins are each 1 ns. But, as described in more detail below, the number of time bins can vary, e.g., based on properties of a particular object and an angle of incidence of the laser pulse.
The temporal location of the time bins corresponding to reflected pulse 620 can be used to determine the received time, e.g., relative to start time 615. As described in more detail below, matched filters can be used to identify a pulse pattern, thereby effectively increasing the signal-to-noise ratio and also more accurately determining the received time. In some embodiments, the accuracy of determining a received time can be less than the time resolution of a single time bin. For instance, for a time bin of 1 ns, that resolution would correspond to about 15 cm. However, it can be desirable to have an accuracy of a few centimeters.
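One common way to achieve accuracy below the width of a single time bin is to interpolate around the peak, e.g., with a three-point parabolic fit. The document does not specify the exact method used, so the sketch below is only an assumed illustration of the idea.

```python
def subbin_peak_time(counts, peak_bin, bin_width_ns):
    """Refine a peak location below the single-bin resolution by fitting
    a parabola through the peak bin and its two neighbors; the vertex
    gives a fractional-bin offset from the peak bin's center."""
    y0, y1, y2 = counts[peak_bin - 1], counts[peak_bin], counts[peak_bin + 1]
    denom = y0 - 2 * y1 + y2
    offset = 0.0 if denom == 0 else 0.5 * (y0 - y2) / denom
    return (peak_bin + offset) * bin_width_ns

# A peak at bin 10 with an asymmetric shoulder (4 on the left, 8 on the
# right) lands slightly past the bin center.
t = subbin_peak_time([0] * 9 + [4, 10, 8, 0], peak_bin=10, bin_width_ns=1.0)
```

With 1 ns bins (about 15 cm of range resolution), a quarter-bin refinement like this corresponds to a few centimeters, matching the accuracy goal stated above.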
Accordingly, a detected photon can result in a particular time bin of the histogram being incremented based on its time of arrival relative to a start signal, e.g., as indicated by start time 615. The start signal can be periodic such that multiple pulse trains are sent during a measurement. Each start signal can be synchronized to a laser pulse train, with multiple start signals causing multiple pulse trains to be transmitted over multiple detection intervals. Thus, a time bin (e.g., from 200 to 201 ns after the start signal) would occur for each detection interval. The histogram can accumulate the counts, with the count of a particular time bin corresponding to a sum of the measured data values all occurring in that particular time bin across multiple shots. When the detected photons are histogrammed based on such a technique, the result is a return signal with a signal-to-noise ratio greater than that of a single pulse train by the square root of the number of shots taken.
In the first detected pulse train 710, the counters for time bins 712 and 714 are the same. This can result from a same number of photodetectors detecting a photon during the two time bins or, in other embodiments, from approximately the same number of photons being detected during the two time bins. In other embodiments, more than one consecutive time bin can have a non-zero value; but for ease of illustration, individual nonzero time bins have been shown.
Time bins 712 and 714 respectively occur 458 ns and 478 ns after start time 715. The displayed counters for the other detected pulse trains occur at the same time bins relative to their respective start times. In this example, start time 715 is identified as occurring at time 0, but the actual time is arbitrary. The first detection interval for the first detected pulse train can be 1 μs. Thus, the number of time bins measured from start time 715 can be 1,000. After this first detection interval ends, a new pulse train can be transmitted and detected. The start and end of the different time bins can be controlled by a clock signal, which can be part of circuitry that acts as a time-to-digital converter (TDC), e.g., as is described in
For the second detected pulse train 720, the start time 725 is at 1 μs, e.g., at which the second pulse train can be emitted. Such a separate detection interval can occur so that any pulses transmitted at the beginning of the first detection interval would have already been detected, and thus not cause confusion for pulses detected in the second time interval. For example, if there is no extra time between shots, then the circuitry could confuse a retroreflective stop sign at 200 m with a much less reflective object at 50 m (assuming a shot period of about 1 μs). The two detection time intervals for pulse trains 710 and 720 can be the same length and have the same relationship to the respective start time. Time bins 722 and 724 occur at the same relative times of 458 ns and 478 ns as time bins 712 and 714. Thus, when the accumulation step occurs, the corresponding counters can be added. For instance, the counter values at time bins 712 and 722 can be added together.
For the third detected pulse train 730, the start time 735 is at 2 μs, e.g., at which the third pulse train can be emitted. Time bins 732 and 734 also occur at 458 ns and 478 ns relative to their respective start time 735. The counters at different time bins may have different values even though the emitted pulses have a same power, e.g., due to the stochastic nature of the scattering process of light pulses off of objects.
Histogram 740 shows an accumulation of the counters from three detected pulse trains at time bins 742 and 744, which also correspond to 458 ns and 478 ns. Histogram 740 could have fewer time bins than are measured during the respective detection intervals, e.g., as a result of dropping time bins at the beginning or the end, or time bins that have values less than a threshold. In some implementations, about 10-30 time bins can have appreciable values, depending on the pattern for a pulse train.
As examples, the number of pulse trains emitted during a measurement to create a single histogram can be around 1-40 (e.g., 24), but can also be much higher, e.g., 50, 100, or 500. Once a measurement is completed, the counters for the histogram can be reset, and a set of pulse trains can be emitted to perform a new measurement. In various embodiments and depending on the number of detection intervals in the respective duration, measurements can be performed every 25, 50, 100, or 500 μs. In some embodiments, measurement intervals can overlap, e.g., so a given histogram corresponds to a particular sliding window of pulse trains. In such an example, memory can exist for storing multiple histograms, each corresponding to a different time window. Any weights applied to the detected pulses can be the same for each histogram, or such weights could be independently controlled.
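The accumulation of counts across multiple shots into a single histogram can be sketched as follows. The example uses Poisson-distributed toy data; all rates, bin positions, and names are illustrative assumptions. The signal bins grow linearly with the shot count while the noise fluctuations grow only with its square root, which is the SNR improvement described above.

```python
import numpy as np

def accumulate_shots(num_shots, num_bins, signal_bins, signal_rate,
                     noise_rate, rng):
    """Accumulate per-shot photon counts into one histogram: each shot
    contributes background counts in every bin plus extra counts in the
    bins covering the reflected pulse."""
    histogram = np.zeros(num_bins, dtype=int)
    for _ in range(num_shots):
        shot = rng.poisson(noise_rate, num_bins)
        shot[signal_bins] += rng.poisson(signal_rate, len(signal_bins))
        histogram += shot
    return histogram

# 24 shots (a value named above), a two-bin pulse at bins 45-46.
rng = np.random.default_rng(1)
hist = accumulate_shots(num_shots=24, num_bins=100,
                        signal_bins=[45, 46], signal_rate=3.0,
                        noise_rate=0.5, rng=rng)
peak_bin = int(hist.argmax())
```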
IV. Histogram Data Path
Each photodetector in the photosensor 802 may include analog front-end circuitry for generating an output signal indicating when photons are received by the photodetectors. For example, referring back to
An arithmetic logic unit (ALU) 804 may be used to implement the functionality of the pixel counter 550 from
The ALU 804 is designed specifically to receive at least a number of inputs that correspond to the number of photodetectors in the photosensor 802. In the example of
As described above, the output of the ALU 804 may characterize the total number of photons received by the photosensor 802 during a particular time bin. Each time the ALU 804 completes an aggregation operation, the total signal count can be added to a corresponding memory location in a memory 806 representing histogram 818. In some embodiments, the memory 806 may be implemented using an SRAM. Thus, over the course of multiple shots (with each shot including a pulse train) the total signal count from the ALU 804 can be aggregated with an existing value in a corresponding memory location in the memory 806. A single measurement may comprise a plurality of shots that populate the memory 806 to generate the histogram 818 of values in the time bins that can be used to detect reflected signals, background noise, peaks, and/or other signals of interest.
The ALU 804 may also perform a second aggregation operation that adds the total signal count to an existing value in a memory location of the memory 806. Recall from
As described above in relation to
A clock circuit 810 may be used to generate the periodic signal 560 based on inputs that define shots and measurements for the optical measurement system. For example, a shot input 814 may correspond to the start signal illustrated in
The memory 806 may include a plurality of registers that accumulate photon counts from photodetectors. By accumulating photon counts in respective registers corresponding to time bins, the registers in the memory 806 can store photon counts based on arrival times of the photons. For example, photons arriving in a first time bin can be stored in a first register in the memory 806, photons arriving in a second time bin can be stored in a second register in the memory 806, and so forth. Each “shot” may include one traversal through each of the registers in the memory 806 corresponding to a time bin for that photosensor. The shot signal 814 may be referred to as an “enable” signal for the plurality of registers in the memory 806 in that the shot signal 814 enables the registers in the memory 806 to store results from the ALU 804 during the current shot.
The periodic signal 560 can be generated such that it is configured to capture the set of signals 816 as they are provided asynchronously from the photosensor 802. For example, the threshold circuitry 540 may be configured to hold the output signal high for a predetermined time interval. The periodic signal 560 may be timed such that it has a period that is less than or equal to the hold time for the threshold circuitry 540. Alternatively, the period of the periodic signal 560 may be a percentage of the hold time for the threshold circuitry 540, such as 90%, 80%, 75%, 70%, 50%, 110%, 120%, 125%, 150%, 200%, and so forth. Some embodiments may use rising-edge detection circuitry as illustrated in
With each subsequent shot, the histogram can be constructed in the memory 806 as illustrated in
In some embodiments, the timing of the measurement signal 812, the shot signal 814, and the periodic signal 560 that clocks the ALU 804 may all be coordinated and generated relative to each other. Thus, the clocking of the ALU may be triggered by, and dependent on, the start signal for each shot. Additionally, the period of the periodic signal 560 may define the length of each time bin associated with each memory location in the histogram.
The data path illustrated in
V. Identifying Early Reflections from the System Housing
The examples above have illustrated single pulses or multiple pulses as part of a pulse train that are transmitted from a light source, reflected off of objects in the surrounding environment, and then detected by a photosensor. These reflections are recorded in a histogram memory and form “peaks” in the histogram after a number of repeated shots in a measurement. The locations of these peaks can then be used to determine a distance between the optical measurement system and objects in the surrounding environment. However, these examples are not meant to be limiting. In addition to the intended peaks caused by the pulse train reflecting off objects of interest in the surrounding environment, the histogram memory may also include other peaks resulting from more immediate, unintended reflections of the pulse train, as well as peaks that do not necessarily result from the pulse train at all. For example, additional peaks may correspond to external light sources, sunlight, reflections, or other ambient light phenomena in the surrounding environment. In these cases, instead of only detecting a single peak, the optical measurement system may detect multiple peaks that may consequently be present in the histogram memory. In addition to extraneous peaks resulting from external light sources, an initial peak may result from a reflection of the pulse train off of an internal housing of the optical measurement system.
In
As photon counts are received for the rest of the optical measurement, additional peaks may also be detected that result from photon reflections off of objects of interest in the surrounding environment. For example, peak 1004 in
However, some embodiments may receive and store the peak 1002 resulting from housing reflections in the histogram memory, and then proceed to use the peak to improve near-range performance of the optical measurement systems. These improvements may include calibrating the optical measurement system, characterizing the opacity of the window of the optical measurement system, detecting blockages of the optical measurement system, improving near-range performance of the optical measurement system, and so forth. Each of these various improvements that may be realized by first identifying the peak 1002 resulting from housing reflections are described in greater detail below.
At step 1102, the method may include transmitting one or more pulse trains from a light source over one or more first time intervals as a part of an optical measurement. Each of the one or more first time intervals may represent a “shot” that is repeated multiple times in the measurement. Each of the first time intervals may include one or more pulse trains that are encoded and transmitted by the light sources such that the pulse trains can be recognized as they are reflected off of objects in the surrounding environment. Each of the time intervals may be subdivided into a plurality of time bins such that each of the time bins represents a bin in a histogram of received photon counts during the optical measurement. An example of how a single measurement may include multiple shots subdivided into time bins that aggregate photon counts is described above in relation to
At step 1104, the method may also include detecting photons from the one or more pulse trains using a photosensor. As described in detail above, the photosensor may include a plurality of photodetectors, such as a plurality of SPADs. The photosensor may receive reflected light as well as ambient background noise received from the surrounding environment. The reflected light received by the photosensor may include light reflected off of objects of interest in the surrounding environment. For example, these objects of interest may be at least 30 cm away from the photosensor, and may represent surrounding vehicles, buildings, pedestrians, and/or any other object that may be encountered around the optical measurement system during use. These objects of interest may be distinguished from objects that are part of the optical measurement system itself, such as a housing. Reflected light received by the photosensor may also include light reflected off of the housing or other parts of the optical measurement system. These reflections may include primary reflections directly off the window or housing of the optical measurement system, as well as secondary reflections that are reflected around an interior of the optical measurement system. The photosensor may also be coupled to threshold detection circuitry and/or to arithmetic logic circuitry that accumulates photon counts. This combination may be referred to above as a “pixel.”
At step 1106, the method may also include accumulating photon counts from the photosensor into a plurality of registers to represent a histogram of photon counts received during the current measurement. These photon counts may be accumulated in a plurality of registers in a memory block corresponding to the photosensor. The plurality of registers may be implemented using the registers in an SRAM of a histogram data path as described above in
At step 1108, the method may additionally include identifying a peak in the histogram that represents the photons reflected off of the system housing. Detecting this peak may be performed using a number of different techniques, depending on the particular embodiment. In some embodiments, a pass may be made through the histogram memory to identify a first peak occurring in time. The system may sequentially access registers in the plurality of registers, starting at the beginning of the optical measurement, to identify a peak that occurs first in time. A center of a peak may be identified by locating a value in the histogram with smaller values in the time bins on either side. In the example of
Identifying the peak in the histogram may involve locating a window of time bins that occurs around the center of the peak. This process may involve expanding outwards from the peak location to determine a range of registers in the histogram memory that include the entire peak. As illustrated in
Instead of detecting a variable peak location for each measurement, some embodiments may assume a known location of the optical measurement housing and designate a specific window of registers that are likely to include values from early reflections off the system housing. In the example of
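The fixed-window approach to identifying the housing reflection can be sketched as follows. The window size and the toy histogram values are illustrative assumptions; the key idea is that the housing sits at a fixed, very short distance, so its reflection must land in the earliest time bins.

```python
def find_housing_peak(histogram, window_end=10):
    """Search only the earliest time bins, where reflections off the
    system housing must land given its fixed, very short distance,
    and return (peak_bin, peak_value)."""
    window = histogram[:window_end]
    peak_bin = max(range(len(window)), key=lambda i: window[i])
    return peak_bin, window[peak_bin]

# Toy histogram: housing reflection peaks in bin 3; a later object
# return around bin 60 is outside the designated early window.
hist = [2, 3, 9, 30, 11, 4, 2, 1, 2, 1] + [1] * 50 + [25, 60, 24] + [1] * 7
bin_idx, value = find_housing_peak(hist)
```

Because the search never leaves the designated window, a strong object return just beyond it cannot be mistaken for the housing reflection.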
It should be appreciated that the specific steps illustrated in
After identifying the location of a pulse representing reflections off the system housing, the location and/or peak values of that reflection may be used to improve or characterize the optical measurement system in a number of different ways. This may include calibrating a known distance for use in additional distance measurements by the optical measurement system, characterizing the transparency of the window on the optical measurement system, detecting blockages or other obstacles that impede the field-of-view of the optical measurement system, and/or other similar improvements.
A. Calibrations Using the Housing Reflection
However, distance calculations may be inaccurate if any of the values assumed above are not precise or vary between different optical measurement systems. For example, the actual time represented by each time interval may not be exactly as predicted based on an assumed clock cycle. Therefore, it may be useful for some systems to perform a self-calibration process that can be used to accurately determine different constants used by the distance calculations.
Some embodiments may use the location of the peak 1002 representing the housing reflections to calibrate values for the other distance calculations of the optical measurement system. The distance between the light source and the housing of the optical measurement system may be precisely known based on a manufacturing process of the optical measurement system. Similarly, the distance between the housing of the system and the photosensor may also be precisely known. These distances may be used in conjunction with peak 1002 to determine the precise values of constants used in the distance calculations, such as the time represented by each time bin and/or any offsets in the system. These constants may be part of a transformation function that receives a time of a peak and provides an accurate distance measurement. Such a transformation may be constant (e.g., uniform delay), a linear transformation (e.g., later time bins offset more/less than earlier time bins by an amount proportional to the time bin number), or a non-linear transformation.
In the example of
The precise values of constants, such as the time represented by each time bin, may then be used to calculate distances for other peaks in the measurement. In the example of
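The calibration idea can be illustrated with a hedged sketch: the known round-trip to the housing fixes the time represented by each bin, which then converts later peak locations into distances. The function names and all numeric values here are hypothetical.

```python
# Hypothetical self-calibration sketch: the housing distance is known from
# manufacturing, so the bin index of the housing peak determines the time
# (and hence distance) represented by each time bin.

C = 3.0e8  # speed of light, m/s

def calibrate_bin_duration(housing_bin, housing_distance_m, offset_bins=0.0):
    """Solve for seconds-per-bin from the known round-trip to the housing."""
    round_trip_s = 2.0 * housing_distance_m / C
    return round_trip_s / (housing_bin - offset_bins)

def bin_to_distance(bin_index, seconds_per_bin, offset_bins=0.0):
    """Convert a peak's bin index to a one-way distance in meters."""
    return (bin_index - offset_bins) * seconds_per_bin * C / 2.0

# Housing peak lands in bin 4; housing is assumed 0.06 m from the source.
spb = calibrate_bin_duration(housing_bin=4, housing_distance_m=0.06)
d = bin_to_distance(100, spb)  # distance for a later peak found at bin 100
```

A constant or linear transformation as described above would correspond to a nonzero `offset_bins` or a bin-dependent correction applied in `bin_to_distance`.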
Using the calibrated values from the peak reflected from the housing of the optical measurement system can thus be used to generate more accurate distance measurements. These techniques may overcome problems resulting from process variations and sensor drift over time. These techniques may also be used to detect movement of the system housing relative to the light source and/or photosensors. For example, if the location of peak 1002 changes over the lifecycle of the optical measurement system, this may indicate movement of the housing relative to the rest of the system.
B. Characterizing Window Transparency
In addition to calibrating values to be used in distance calculations, identifying the location of a reflection off the system housing may also be used to characterize various aspects of the optical measurement system during operation. Generally, the peak reflected off the housing of the system may be larger than peaks received from objects of interest in the surrounding environment. The size of this initial peak may be due to the proximity of the housing relative to the photosensors. Because the photosensors are so close to the reflection, the intensity of the reflected light may be relatively large. However, assuming that the size of the reflected peak is within the saturation limit of the histogram memory, certain measurements may be made based on the magnitude of the peak reflected off the housing to characterize portions of the housing itself.
Many practical applications of the optical measurement system may include uses in an open environment where the optical measurement system may be exposed to pollution, rain, snow, and/or other environmental elements that may damage, obscure, and/or otherwise affect the transparency of the window on the housing of the optical measurement system. For example, when the optical measurement system is installed in an automotive application, the system may be located on the top of a vehicle. The system may also be located on areas around the periphery of the vehicle, such as on the bumpers or sides of the vehicle. In these locations, the optical measurement system may be subjected to weather elements, rocks and debris, exhaust or fog, and/or other effects that may affect the transparency of the window. For example, rocks from the road may hit the optical measurement system, scratching or otherwise damaging the window. Humid environments may cause moisture to temporarily build up on the window. Rain or snow may accumulate on the outside of the window. Dirt, pollution, or grime from the roadway may build up on the exterior of the window. Many other adverse conditions in an automobile environment may also affect the window of the optical measurement system.
Each of these different environmental effects may cause the transparency of the window to change. For example, as dirt or scratches build up on the window, less light may be transmitted through the window. Consequently, more light may be reflected off of the window back to the photosensors. Therefore, considering the window to be part of the housing of the optical measurement system, the techniques described above for monitoring the magnitude of the peak reflected off the housing of the optical measurement system over time may also be used to track differences in the transparency of the window over time. It may be assumed that the transparency of the rest of the housing remains constant (e.g., the housing is completely opaque). Therefore, any changes to the magnitude of the peak reflected off the housing may be attributed to changes in the transparency of the window itself.
As described above, the magnitude 1308 of the initial peak 1302 may be recorded as a baseline measurement when the window of the housing is clean. This initial value may be stored over time and used as a baseline for comparisons with future peak magnitudes to track the transparency of the window over time. For example, some embodiments may track a difference 1306 between the initial magnitude 1308 and the current magnitude 1310 of the peaks 1302, 1304 over time. As the difference 1306 increases, various outputs may be provided by the optical measurement system. For example, some embodiments may provide an output that triggers a warning or message to a user or to an automobile system indicating that the window is dirty (“please clean the window on your LIDAR system”). Some embodiments may automatically trigger systems on the vehicle to clean the optical measurement system, such as systems that spray cleaner on the optical measurement system.
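As a rough illustration of this baseline comparison, assuming a hypothetical warning ratio and message text:

```python
# Illustrative transparency monitor: compare the current magnitude of the
# housing peak against a baseline recorded when the window was clean.
# The warning ratio and message text are assumptions, not from the source.

def transparency_status(baseline_mag, current_mag, warn_ratio=0.25):
    """Warn when the housing reflection has grown well beyond its baseline."""
    if baseline_mag > 0 and (current_mag - baseline_mag) / baseline_mag >= warn_ratio:
        return "dirty: please clean the window on your LIDAR system"
    return "ok"

status = transparency_status(baseline_mag=1000, current_mag=1400)  # 40% growth
```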
In some embodiments, the optical measurement system may shut down photosensors that are behind the portion 1404 of the window that is dirty or blocked, or, for rotating systems, shut down at times when the photosensors are behind portion 1404. Because of the intensity of the reflected pulse, the histogram data paths of these photosensors may saturate such that the pulses reflected off the window have a magnitude that is higher than a bit-limit of the histogram data path. Reflections off this portion 1404 of the window may also be scattered by the blockage and interfere with the photons received by other nearby photosensors. Because the measurements may be skewed by saturation, scattering, or other effects, the system may take corrective measures, such as causing these photosensors to no longer accumulate a photon count.
C. Detecting Window Blockage
In addition to detecting dirty or partially obscured windows, the first detected peak may also be used to determine whether the optical measurement system is blocked by a foreign object. As described above, many operating environments, such as an automotive environment, may provide situations where the optical measurement system may be subject to adverse conditions. Some of these environments may cause the optical measurement system to become completely blocked from the surrounding environment. For example, a plastic bag may blow onto the optical measurement system. A person may place their hand in front of the optical measurement system. The optical measurement system may be purposely disabled with tape or other objects placed in front of the optical measurement system. In any of these environments, it may be useful to detect when such a blockage occurs and where such blockage may be located.
When a blocking object is sufficiently close to the optical measurement system such that it prevents a substantial portion of the light emitted by the optical measurement system from being reflected off of other objects in the surrounding environment, the system may determine that a blockage has occurred. This determination may be made when the magnitude of the initial peak 1504 meets or exceeds the blockage threshold 1506. As illustrated in
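A hypothetical per-pixel check along these lines, with an assumed threshold value; flagging which pixels exceed the threshold also maps where the blockage is located:

```python
# Hypothetical blockage check: a foreign object close to the window
# reflects most of the emitted light back, so the initial peak meets or
# exceeds a blockage threshold. The threshold and magnitudes are invented.

def blocked_pixels(initial_peak_magnitudes, blockage_threshold=5000):
    """Flag each pixel whose initial peak indicates a blockage."""
    return [m >= blockage_threshold for m in initial_peak_magnitudes]

flags = blocked_pixels([800, 5200, 7600, 950])
# The True entries locate the blocked region of the field-of-view.
```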
The uses for a peak generated by reflections off the housing of the optical measurement system described above generally improve the performance of the optical measurement system by calibrating the system and compensating for external effects. Some embodiments may alternatively or additionally improve the performance of the optical measurement system by removing the peaks caused by the housing reflections from the histogram memory signals that are analyzed by a peak detection circuit or by an auxiliary processor. These embodiments may remove the initial peak from the housing reflections to improve the near-range accuracy of the optical measurement system.
A. Near-Range Detection
However, as the object moves closer to the optical measurement system, the residual effect of the initial peak 1602 may begin to affect the shape of the second peak 1604. For example, as photons begin to be received from reflections off the object, photons scattered off the internal housing of the optical measurement system may still be received by the photosensor. Because additional photon counts from reflected photons may continue to be stored in registers representing the time bins for the second peak 1604, the shape of the second peak 1604 may be skewed. When the shape changes, the peak detection circuitry may identify an inaccurate location of the peak. For example, if the beginning portion of the second peak 1604 is increased due to photons reflected off the housing, the center of the second peak 1604 may shift to the left as detected by the peak detection circuitry.
The overall effect of this combination of the initial peak 1602 and the second peak 1604 is to greatly reduce the near-range accuracy of the optical measurement system. Reflections off of objects a few feet from the optical measurement system may be affected by these housing reflections to the point that a “dead zone” may be created in the immediate vicinity of the optical measurement system.
B. Removing the Housing Reflections
In order to compensate for the effect of the reflections off the housing, the optical measurement system may use the baseline measurements for the initial peak as described above. Specifically, the optical measurement system may record a baseline shape and/or location of the initial peak as observed during measurements over time. In addition to using changes to the initial peak to characterize the window of the optical measurement system, the shape and/or location of the baseline measurement of the initial peak may be used to compensate for the reflections off the housing for near-range measurements.
The histogram data path illustrated in
The baseline peak 1804 may be provided to a subtraction circuit 1802 such that it may be subtracted from the corresponding registers in the histogram memory 806. For example, the memory interface may provide the first nine register values from the histogram memory 806 to the subtraction circuit 1802 and subtract the corresponding values from the baseline peak 1804 from those register values. The register values may then be returned to the histogram memory 806 such that the histogram memory 806 has the effects of the reflections off the housing removed from the histogram. The subtraction circuit 1802 may be implemented using an arithmetic logic unit (ALU) or digital logic gates implementing the subtraction function.
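The subtraction step might be sketched in software as follows, using the nine-register window from the example above; the counts, baseline values, and clamp-at-zero behavior are illustrative assumptions:

```python
# Sketch of the baseline subtraction: remove a stored baseline housing
# peak from the leading registers of the histogram memory, leaving later
# registers untouched. All values are invented.

def subtract_baseline(histogram, baseline_peak):
    """Clamp-at-zero subtraction of the baseline from the leading registers."""
    out = list(histogram)
    for i, b in enumerate(baseline_peak):
        out[i] = max(0, out[i] - b)
    return out

hist = [50, 120, 300, 520, 610, 500, 280, 110, 40, 5, 4, 6, 90, 150, 85, 5]
baseline = [48, 118, 295, 515, 600, 495, 275, 108, 38]  # first nine registers
cleaned = subtract_baseline(hist, baseline)
# Registers past the baseline window (e.g., the peak near bin 13) are unchanged.
```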
In some embodiments, the baseline peak 1804 may additionally or alternatively be provided to a processor 1806. Instead of compensating for the reflections off the housing using the subtraction circuit 1802, the baseline peak 1804 may be passed with the raw and/or filtered data in the histogram memory 806 to a processor 1806, which may then remove the baseline peak 1804 from the current measurement. The processor 1806 may be on a same integrated circuit as the rest of the optical measurement system, including the histogram memory 806 and the arithmetic logic circuit 804. Alternatively, the processor 1806 may be located on a separate integrated circuit that is distinct from an integrated circuit implementing the optical measurement system. The processor 1806 may receive filtered and/or raw data from the histogram memory 806, and then may use a set of instructions to perform operations that remove the effect of the reflections off the system housing as described above.
VIII. Detecting Peaks Using Matched Filters
As described in detail above, the optical measurement system may generate pulse trains that are emitted from a light source and directed into a surrounding environment. The light from these pulse trains may reflect off of objects in the surrounding environment and be received by a corresponding photosensor in the optical measurement system. As photon counts are received by the photosensor, they may be accumulated into registers of a histogram memory (e.g., an SRAM) to form a histogram representation of photons received during an optical measurement. The accumulated photon counts in the histogram memory may represent raw measurement data, which may include reflected photons from the light source as well as photons from background noise in the surrounding environment. Depending on the noise level of the background environment, this raw data in the histogram memory may have a relatively low signal-to-noise ratio (SNR). This level of noise in the histogram may increase the difficulty associated with performing on-chip peak detection. In order to improve the confidence in the on-chip peak detection circuitry, the optical measurement system may apply one or more filters to the raw data in the histogram memory, after which a peak detection algorithm using the filtered data may be performed.
While filtering the data does improve the SNR of the data in the histogram memory, it may also obscure characteristics of the histogram of reflected photons that may be useful for calculating distance measurements, performing statistical analyses of the histogram data, detecting edge cases, detecting near-range objects, distinguishing between adjacent reflected peaks, curve fitting, and so forth. The embodiments described below leverage the benefits of filtering the data in the histogram memory, as well as preserving the additional information provided by the raw data received from the photosensor. The filtered data may be used to perform on-chip peak detection to then identify time windows in the unfiltered histogram memory that may include detected peaks. After the time windows are identified using the filtered data, an integrated circuit in which the histogram memory is implemented may pass time bins containing the unfiltered histogram data during those time windows to an external processor for distance calculations and other analyses of the raw histogram data.
The sections below first describe how the filtering process may use matched filters for single pulse scenarios to detect time windows in the unfiltered data to be passed off-chip. Afterwards, more complex examples are presented using pulse trains that include multi-pulse codes and more complex matched filters. Some filters may be designed to compress the data for these longer pulse codes without sacrificing the shape information of the original data.
A. Far-Range Peak Detection
After the initial peak 1902, the histogram memory may later include one or more peaks resulting from photon reflections off of an object in the surrounding environment. For example, peak 1904 may occur subsequent to the initial peak 1902 and may represent photons reflected from an object in the surrounding environment. Note that both peak 1902 and peak 1904 may result from photons emitted as part of the same pulse train; however, peak 1902 may be reflected from the housing/window of the optical measurement system, while peak 1904 may be reflected from an object in the surrounding environment. The shape of the peaks 1902, 1904 may correspond roughly to the shape of the emitted pulse from the optical measurement system. For example, some embodiments may emit pulses having a square shape. As illustrated in
As described above, the photon counts in the histogram illustrated in
Some embodiments may use a matched filter to increase the SNR of the raw data in the histogram memory. The shape of the matched filter may correspond to a shape of the initial pulse emitted by the optical measurement system. For example, a matched filter 1906 may be applied to the raw data in the histogram memory having a generally square shape as illustrated in
In
Although the matched filter 1906 may cause the peaks 2002, 2004 to be more easily identified in the filtered histogram data, this filtering operation also distorts the shape of the histogram data. While this results in a higher SNR and a higher confidence level for peak detection, it also removes important information that may be represented in the unfiltered histogram data. This information may be useful for a processor performing distance calculations, statistical analyses, and other calculations that may be best performed using the detail available in the unfiltered data. Although some embodiments may pass peak 2002 and/or peak 2004 from the filtered histogram data to the processor for distance calculations, some embodiments may use the location of the peaks 2002, 2004 identified in the filtered histogram data to identify corresponding peak locations in the unfiltered histogram data, and then send the unfiltered peaks from the unfiltered histogram data to the processor. This process will be described in greater detail below.
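The matched filtering described above amounts to a sliding correlation of the histogram with the emitted pulse shape. A small sketch with an assumed four-bin square pulse and invented photon counts:

```python
# Sketch of a matched filter as a sliding correlation with the (square)
# emitted pulse shape. This illustrates the technique, not the on-chip
# implementation; the pulse width and counts are assumptions.

def matched_filter(hist, taps):
    """Correlate the histogram with the filter taps (valid region only)."""
    n = len(taps)
    return [sum(hist[i + j] * taps[j] for j in range(n))
            for i in range(len(hist) - n + 1)]

taps = [1, 1, 1, 1]  # four-bin square pulse => four unity taps
raw = [0, 1, 0, 5, 6, 5, 6, 1, 0, 1, 0, 0]
filtered = matched_filter(raw, taps)
peak_bin = filtered.index(max(filtered))  # peak stands out from the noise
```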
B. Adjacent Peak Detection
In addition to losing signal information through the filtering process described above, some embodiments may also have difficulty distinguishing between adjacent peaks without using the unfiltered histogram data.
Although the example above in
To solve these and other technical problems, some embodiments may filter the histogram data, but may pass the unfiltered data to a processor for distance calculations, statistical analyses, curve interpolation, peak fitting, and so forth. The filtered data may be used to identify the location of the peaks in the histogram; however, instead of only passing a window of time bins including the filtered peak data, these embodiments may alternatively or additionally use location(s) identified in the filtered histogram data to identify corresponding location(s) in the unfiltered histogram data. A window of time bins can then be sent to the processor from the unfiltered histogram data, and the processor can use the unfiltered histogram data to analyze peaks for distance calculations.
A. Locating Windows in Unfiltered Data
The peak detection circuit/algorithm may first identify the first peak 2002. However, as described above, the peak detection circuit may be configured to exclude the first peak 2002 detected in the filtered histogram data under the assumption that the first peak 2002 may correspond to an initial reflection of the light pulse off the housing/window of the optical measurement system. The peak detection algorithm may continue scanning through the filtered histogram data until the second peak 2004 is detected. Generally, the peak detection circuit/algorithm may identify the location of a maximum value of peak 2004. For example, the peak detection circuit/algorithm may identify a time bin with a local maximum around which peak 2004 is centered.
After detecting the center of peak 2004, the peak detection circuit/algorithm may identify surrounding time bins that may be considered part of peak 2004. For example, some embodiments may identify a predetermined number of time bins that are centered around the peak location (e.g., 17 time bins). Some embodiments may identify time bins that are within a threshold amount of the maximum value of the peak (e.g., within 50% of the maximum value). Some embodiments may select a number of time bins centered around the maximum value based on the width of the light pulse emitted by the optical measurement system and the width of the filter. For example, if an emitted light pulse is four time bins wide, and the corresponding matched filter is also four time bins wide, the resulting peak in the filtered histogram data may be expected to be at least eight time bins wide. Consequently, the peak detection circuit/algorithm may identify at least eight time bins (e.g., 10 time bins, 12 time bins, etc.) centered around the maximum value of the peak as representing the peak in the filtered histogram data.
Some embodiments may pass one or more windows of time bins off-chip to the processor, including the filtered peak data identified as described above. For example, an initial peak, along with three additional peaks corresponding to objects in the surrounding environment, may be identified by the algorithm described above. Windows of time bins centered around these peaks may be identified, and the filtered data in those time bins may be sent to the processor for processing. Using the filtered data may be acceptable in situations where it can adequately capture distance information regarding the surrounding environment.
Alternatively or additionally, some embodiments may use the location of the peaks identified in the filtered data to identify windows of time bins in the unfiltered data, and the windows of time bins in the unfiltered data may be passed to the processor for processing. These windows of unfiltered data may be passed in addition to or in the place of the windows of filtered data described above.
To identify a window of unfiltered data in the unfiltered histogram, the optical measurement system may use the location of the maximum value of the peak identified in the filtered histogram data. For example, a location of the maximum value for peak 2004 in the filtered histogram data may be used as the location of the maximum value of the peak 1904 in the unfiltered histogram data. Once this location is identified in the unfiltered histogram data, a window of time bins surrounding the maximum value can be identified using the techniques described above around the corresponding location in the unfiltered data (e.g., a predetermined number of time bins, time bins within a percentage range of the maximum value, etc.). As described above, the number of time bins identified in time bin windows of the filtered histogram data may be based on the width of the emitted light pulse and the width of the applied filter. This may result in a window of time bins that is at least twice the width of the emitted light pulse. However, when identifying a time bin window in the unfiltered histogram data, fewer time bins may be used. Instead of accounting for the way in which the low-pass filter increases the width of the peak 2004, the width of the unfiltered peak 1904 may be smaller. For example, some embodiments may use a number of time bins that is slightly larger than the width of the emitted light pulse (e.g., 2 time bins, 4 time bins, 6 time bins, 8 time bins larger than the emitted light pulse, etc.).
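A sketch of this filtered-to-unfiltered hand-off, with an assumed pulse width and padding; the `pad` parameter and data values are illustrative choices, not from the source:

```python
# Sketch of the window hand-off: the peak location found in the filtered
# data indexes into the unfiltered histogram, and a window slightly wider
# than the emitted pulse is extracted for the off-chip processor.

def unfiltered_window(raw_hist, filtered_peak_bin, pulse_width, pad=2):
    """Select raw bins around the peak located in the filtered data."""
    half = (pulse_width + pad) // 2
    lo = max(0, filtered_peak_bin - half)
    hi = min(len(raw_hist), filtered_peak_bin + half + 1)
    return list(range(lo, hi)), raw_hist[lo:hi]

raw = [0, 1, 0, 5, 6, 5, 6, 1, 0, 1, 0, 0]
bins, values = unfiltered_window(raw, filtered_peak_bin=5, pulse_width=4)
# The bin identifiers and their raw counts are what would be sent off-chip.
```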
This method of using filtered data to identify time windows in the unfiltered data preserves the benefits of using the unfiltered data while also preserving the benefits of using the filtered data. The higher SNR and smoother peak profiles may allow the on-chip peak detection circuitry to accurately identify peaks in the histogram. This allows the chip on which the histogram memory is implemented to pass only a small subset of the information stored in the histogram memory to the processor for processing. However, by passing unfiltered data in these windows of time bins to the processor, the processor may maintain all the benefits of using the full unfiltered data for processing.
Turning back briefly to
B. Circuitry for Passing Unfiltered Data Off-Chip
In some embodiments, the output of the filter 2302 may be stored in a separate buffer 2306. As described above, the separate buffer 2306 may not be necessary in embodiments where the peak detection circuit 2308 operates on the direct outputs of the filter 2302. However, embodiments that store the output of the filter 2302 in a separate buffer 2306 may then simultaneously store both the unfiltered histogram data in the histogram memory 806 and the filtered histogram data in the buffer 2306. The peak detection circuit 2308 may then make a pass through the buffer 2306 to detect peaks in the filtered histogram data. Storing the filtered histogram data in the buffer 2306 allows the peak detection circuit 2308 to make multiple passes through the buffer and employ iterative techniques for identifying peaks in the filtered histogram. For example, each pass through the buffer 2306 may detect a maximum value peak that is different from maximum values detected during previous iterations of the peak detection algorithm.
In some embodiments, the histogram data path depicted in
Once the maximum values in the filtered data are identified by the peak detection circuit 2308, one or more time bin windows 2310 may be identified in the unfiltered data in the histogram memory 806. These windows 2310 may be comprised of a plurality of time bins surrounding the maximum values of the peaks identified by the peak detection circuit 2308. The windows 2310 may be populated with data from the histogram memory 806 as unfiltered values 2312. These unfiltered values 2312 may then be passed to a processor 2314 for processing, such as distance calculations, curve fitting, and so forth. In some embodiments, bin identifiers may be passed to the processor 2314 along with, or in the place of, the photon counts in the bins themselves.
The processor 2314 may be implemented on an integrated circuit that is separate and distinct from an integrated circuit on which the histogram memory 806 is implemented. In some embodiments, a first integrated circuit, such as an application-specific integrated circuit (ASIC) may be fabricated that includes the arithmetic logic circuit 804 and the histogram memory 806. The ASIC may also include the peak detection circuit 2308 and the optional buffer 2306 when it is part of the design. Thus, when referring to operations that are performed “on-chip,” these operations may be performed on the first integrated circuit.
Some embodiments may also include a second integrated circuit comprising the processor 2314. The processor 2314 may include a microcontroller, a microprocessor, a field programmable gate array (FPGA) implementing a processor core, an ASIC implementing a processor core, and/or the like. The unfiltered values 2312 may be transmitted through printed leads on a circuit board between the first integrated circuit and the second integrated circuit comprising the processor 2314.
In the example of
The embodiments described above use single-pulse codes as a simplified example of how filtered data can be used to identify and pass unfiltered data to a processor. However, these techniques described above are equally applicable to more complex multi-pulse codes. A multi-pulse code may include a pulse train or a plurality of pulses that are sent as part of a single shot during a measurement. These pulse codes may be duplicated during each shot during the measurement. Using multi-pulse codes provides a more distinct pattern for detection purposes, and this can minimize false positives and other false peaks that may result from ambient noise rather than photons reflected off of objects of interest in the surrounding environment. The examples below illustrate how these techniques may be used with multi-pulse codes to send unfiltered data to the processor.
A. Low-Pass Summation Filter
1 1 1 1 0 0 0 0 1 1 1 1 0 0 0 0 −1 −1 −1 −1 0 0 0 0 1 1 1 1 0 0 0 0
This specific pulse code may be constructed from multiple shots from the light source of the optical measurement system during each shot in a measurement. For example, a first shot may include pulses with the positive “1” peaks in
When the multi-pulse code is received as reflected photons in the histogram memory, this may be represented as positive/negative peaks in the photon counts. In the example of
Using the techniques described above, the location of the peak 2504 in the filtered data can be used to identify the time bin window 2402 in the unfiltered data of
B. Compression Filters
In the example of
In order to preserve this shape in the filtered data, some embodiments may use a single binary designator that may still sum peaks 2604, 2606, 2608, 2609 into a single peak without also implementing the low-pass filter operation. The filter 2616 may use only a single value or designator at the beginning of each pulse instead of a uniform set of values throughout the duration of each pulse. In the example of
1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 −1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
This filter may generate a peak similar to the peak illustrated in
In some embodiments, both the filters illustrated in
Note that the example filters 2416, 2616 described above for multi-pulse codes use single bits as designators in the filter taps. However, this is done merely by way of example in this disclosure. The actual values for each filter may include multi-bit values, such as 10-bit values rather than single bits. As described above, these values can be convolved and calculated as a sliding window to detect the resulting peak in the filtered data.
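The two multi-pulse filters above can be contrasted with a short sketch; the tap sequences follow the single-bit examples in the text, while the histogram contents and pulse amplitude are invented:

```python
# Sketch contrasting the low-pass summation filter (full-width taps) with
# the compression filter (one designator per pulse) for a multi-pulse code.
# Tap patterns follow the examples above; the data are invented.

summation_taps = [1]*4 + [0]*4 + [1]*4 + [0]*4 + [-1]*4 + [0]*4 + [1]*4 + [0]*4
compression_taps = [1] + [0]*7 + [1] + [0]*7 + [-1] + [0]*7 + [1] + [0]*7

def correlate(hist, taps):
    """Slide the taps across the histogram and sum the products."""
    n = len(taps)
    return [sum(hist[i + j] * taps[j] for j in range(n))
            for i in range(len(hist) - n + 1)]

# A noiseless histogram containing the reflected multi-pulse code
# (positive/negative peaks from differential shots) starting at bin 4.
hist = [0]*4 + [3]*4 + [0]*4 + [3]*4 + [0]*4 + [-3]*4 + [0]*4 + [3]*4 + [0]*8
summed = correlate(hist, summation_taps)        # wide, smoothed response
compressed = correlate(hist, compression_taps)  # narrower, shape-preserving
```

Both outputs align the four pulses into a single positive peak at the code's start; the compression filter does so without summing across the full width of each pulse, which is why more of the original pulse shape survives.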
XI. Method for Providing Unfiltered Data
At step 2702, the method may include transmitting one or more pulse trains from a light source over one or more first time intervals as a part of an optical measurement. Each of the one or more first time intervals may represent a "shot" that is repeated multiple times in the measurement. Each of the first time intervals may include one or more pulse trains that are encoded and transmitted by the light sources such that the pulse trains can be recognized as they are reflected off of objects in the surrounding environment. Each of the time intervals may be subdivided into a plurality of time bins such that each of the time bins represents a bin in a histogram of received photon counts during the optical measurement. An example of how a single measurement may include multiple shots subdivided into time bins that aggregate photon counts is described above in relation to
At step 2704, the method may also include detecting photons from the one or more pulse trains using a photosensor. As described in detail above, the photosensor may include a plurality of photodetectors, such as a plurality of SPADs. The photosensor may receive reflected light as well as ambient background noise received from the surrounding environment. The reflected light received by the photosensor may include light reflected off of objects of interest in the surrounding environment. For example, these objects of interest may be at least 30 cm away from the photosensor, and may represent surrounding vehicles, buildings, pedestrians, and/or any other object that may be encountered around the optical measurement system during use. These objects of interest may be distinguished from objects that are part of the optical measurement system itself, such as a housing. In some embodiments, reflected light received by the photosensor may also include light reflected off of the housing or other parts of the optical measurement system. These reflections may include primary reflections directly off the window or housing of the optical measurement system, as well as secondary reflections that are reflected around an interior of the optical measurement system. The photosensor may also be coupled to threshold detection circuitry and/or to arithmetic logic circuitry that accumulates photon counts. This combination may be referred to above as a “pixel.”
At step 2706, the method may also include accumulating photon counts from the photosensor into a plurality of registers to represent a histogram of photon counts received during the current measurement. These photon counts may be accumulated in a plurality of registers in a memory block corresponding to the photosensor. The plurality of registers may be implemented using the registers in an SRAM of a histogram data path as described above in
At step 2708, the method may additionally include filtering the histogram in the plurality of registers to provide a filtered histogram of the photon counts from the plurality of registers. The filter may include a matched filter that is based on a shape of the one or more pulse trains transmitted from the optical measurement system. The filter may be convolved with the unfiltered histogram in the plurality of registers, and the resulting filtered histogram may be stored in a separate buffer. The filter may comprise a square pulse of repeated non-zero values. The filter may also comprise a single non-zero value followed by a plurality of approximately zero values. The filter may operate as described above in relation to
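As an illustrative sketch of this filtering step, the matched filter can be modeled as a convolution of the raw histogram with a kernel derived from the transmitted pulse shape; the kernel width and the histogram values below are hypothetical, not values from the disclosure:

```python
import numpy as np

def matched_filter(histogram, kernel):
    """Convolve the unfiltered histogram with a kernel based on the
    transmitted pulse shape; 'same' mode keeps one output per time bin."""
    return np.convolve(histogram, kernel, mode="same")

# A square-pulse kernel of repeated non-zero values (hypothetical width):
kernel = np.array([1, 1, 1, 1])
raw = np.array([0, 1, 0, 2, 6, 7, 5, 1, 0, 1])  # photon counts per time bin
filtered = matched_filter(raw, kernel)  # energy concentrates around bin 5
```

A kernel consisting of a single non-zero value followed by zeros would, by the same operation, pass the histogram through essentially unchanged.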
At step 2710, the method may further include detecting a location of a peak in the filtered histogram. Detecting this peak may be performed using a number of different techniques, depending on the particular embodiment. In some embodiments, a single pass may be made through the filtered histogram, sequentially accessing values starting at the beginning of the measurement. A peak may be identified by identifying increasing values followed by decreasing values in the filtered histogram. Similarly, a center of a peak may be identified by locating values in the histogram with smaller values in the time bins on either side.
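A minimal sketch of such a sequential peak search, under the assumption that a peak center is a bin whose immediate neighbors do not exceed it:

```python
def find_peak(filtered):
    """One pass through the filtered histogram: track the largest bin
    preceded by non-larger values and followed by non-larger values
    (increasing values followed by decreasing values)."""
    best_idx, best_val = None, float("-inf")
    for i in range(1, len(filtered) - 1):
        if filtered[i - 1] <= filtered[i] >= filtered[i + 1] and filtered[i] > best_val:
            best_idx, best_val = i, filtered[i]
    return best_idx

find_peak([1, 1, 3, 9, 15, 20, 19, 13, 7, 2])  # -> 5
```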
At step 2712, the method may also include identifying locations in the plurality of registers storing an unfiltered representation of the peak. These locations may be identified using the location of the peak in the filtered histogram. Thus, the filtered histogram data may be used to identify the location of a peak in the unfiltered histogram data. Identifying the peak in the unfiltered histogram may involve locating a window of time bins that occurs around the center of the peak. This process may involve expanding outwards from the peak location to determine a range of registers in the histogram memory that includes the entire peak. For example, some embodiments may identify a predetermined number of time bins around the location of the maximum value in the unfiltered histogram that may be designated as a peak. For example, a predetermined number of time bins, such as 3 time bins, 5 time bins, 9 time bins, 15 time bins, 17 time bins, and so forth, may be identified and centered around the maximum value to represent the whole peak. Some embodiments may identify surrounding time bins with values within a percentage of the maximum value in the peak register. This may result in a variable number of time bins being used to represent a peak, depending on the width of the peak. For example, time bins surrounding the maximum value may be included in the peak when their values are within 25% of the maximum value. This process of identifying a representation of the peak in the unfiltered histogram is described above in relation to
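Both windowing strategies can be sketched as follows; the fixed window width and the interpretation of the 25% rule (neighboring bins kept while they hold at least 25% of the peak value) are assumptions for illustration:

```python
def peak_window(raw_hist, peak_idx, fixed_bins=None, frac=0.25):
    """Return (start, end) register indices covering the unfiltered peak.
    Either center a fixed number of time bins on the peak, or expand
    outward while neighboring bins stay within `frac` of the peak value."""
    n = len(raw_hist)
    if fixed_bins is not None:
        half = fixed_bins // 2
        return max(0, peak_idx - half), min(n - 1, peak_idx + half)
    thresh = frac * raw_hist[peak_idx]
    lo, hi = peak_idx, peak_idx
    while lo > 0 and raw_hist[lo - 1] >= thresh:
        lo -= 1
    while hi < n - 1 and raw_hist[hi + 1] >= thresh:
        hi += 1
    return lo, hi

raw = [0, 1, 0, 2, 6, 7, 5, 1, 0, 1]
peak_window(raw, 5)                # variable width -> (3, 6)
peak_window(raw, 5, fixed_bins=5)  # fixed width    -> (3, 7)
```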
At step 2714, the method may also include sending the unfiltered representation of the peak to a processor to calculate a distance to the object. The distance to the object may represent a distance between the optical measurement system and the object in the surrounding environment. This calculation may be performed using the unfiltered representation of the peak sent to the processor. The processor may be physically separate and distinct from an integrated circuit on which the histogram memory is implemented. Thus, the histogram memory may reside on a first integrated circuit, and the processor may reside on a second integrated circuit, the two integrated circuits communicating with each other through a printed circuit board. An example of this circuitry is described above in relation to
It should be appreciated that the specific steps illustrated in
In some environmental conditions, the optical measurement system may have difficulty providing a reliable response above a predetermined confidence level when measuring distances to some objects. These environmental conditions may include weather phenomena, such as rain, mist, fog, etc. These conditions may also include objects that are simply too far away from the optical measurement system to generate a reliable measurement based on the number of photons that are actually reflected by objects in the environment and received by the photosensors. In any of these situations, the optical measurement system may use a detection threshold to prevent false positives from being detected by the system.
To improve the confidence level of some measurement conditions, some embodiments may use a form of spatial filtering. These embodiments may assume that there is a spatial correlation between photons received by spatially adjacent photosensors. For example, physically adjacent photosensors are likely to receive photons reflected off of a same object in the surrounding environment. Even if the responses of these individual photosensors fall below a detection threshold, techniques are described below for combining the responses of these adjacent photosensors to increase the confidence of the distance calculation such that the combined response is above the detection threshold.
Based on environmental conditions or the distance to the object 2800, some of the photosensor views may fail to generate results that meet or exceed a detection threshold. The detection threshold may be implemented in different ways depending on the embodiment. For example, the detection threshold may be based on receiving at least a threshold number of photon counts reflected from the objects 2800. The detection threshold may be based on a distance calculated to the object 2800. The detection threshold may be based on a confidence level in distinguishing peaks in the corresponding histogram from noise in the histogram. For example, particularly noisy environments with a low SNR may fail to generate results that meet or exceed the detection threshold. The detection threshold may also be implemented as a peak detection threshold for detecting peaks in a histogram.
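The different threshold formulations above can be illustrated with a hypothetical check combining a minimum photon count with a minimum signal-to-noise ratio; the specific limits and the background estimate are assumptions, not values from the disclosure:

```python
import statistics

def passes_detection_threshold(histogram, peak_idx, min_counts=30, min_snr=3.0):
    """Reject a candidate peak whose bin count is too low or whose ratio
    to the mean background (bins away from the peak) is too small."""
    peak = histogram[peak_idx]
    background = [v for i, v in enumerate(histogram) if abs(i - peak_idx) > 2]
    noise = statistics.mean(background) if background else 0.0
    snr = peak / noise if noise > 0 else float("inf")
    return peak >= min_counts and snr >= min_snr

passes_detection_threshold([1, 2, 1, 2, 40, 2, 1, 2, 1], 4)  # strong peak -> True
passes_detection_threshold([1, 2, 1, 2, 10, 2, 1, 2, 1], 4)  # too few counts -> False
```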
In the example of
Some embodiments may disregard any photosensors that fall below the detection limit. However, other embodiments may leverage the spatial correlation of adjacent pixels. An assumption may be made that more often than not, adjacent photosensor views are spatially correlated. In the example of
In the example of
A. Combining Distance Measurements
In this example, a photosensor array may include a photosensor 2906 comprising one or more individual photodetectors as illustrated in
To calculate or characterize a distance measurement for the photosensor 2906, some embodiments may rely solely on the photon counts that are reflected off the object 2800 from the photosensor view 2810. In cases where the resulting histogram and/or distance calculation meet or exceed the detection threshold, relying on the response of the photosensor 2906 by itself may be sufficient. However, in some cases, the resulting histogram for the photosensor 2906 and/or the resulting distance calculation may fall below a detection threshold based on various factors described above (e.g., environmental conditions, distance to the object 2800, ambient noise, etc.). In these situations, the optical measurement system may utilize the responses from the neighboring photosensors 2904, 2908 to increase the confidence of the measurement obtained by the photosensor 2906 such that the resulting peaks or calculations meet or exceed the detection threshold.
Some embodiments may be configured to combine information from a histogram associated with photosensor 2906 with information from histograms of neighboring photosensors, such as photosensors 2904, 2908. In some embodiments, this process may share data from adjacent photosensors with each other. For example, the photon counts from the histograms of photosensors 2904, 2908 can be added to the histogram of photosensor 2906. Since the photons are all reflecting off the same object 2800, this may have the effect of boosting the reflected signal in the histogram of photosensor 2906. Note that if the photosensor views 2808, 2810, 2812 are not spatially adjacent as assumed, then accumulating photons from photosensors 2904, 2908 likely will not have a negative effect on the histogram of photosensor 2906, since there would not be any reflections at that distance. In effect, this may approximate a spatial matched filter across the histograms of neighboring photosensors. If two peaks coincide in time, then the photosensor responses may be assumed to be spatially correlated, and the two signals can be combined to generate a larger resulting peak in the histogram.
To combine the information from the histograms, the outputs of the arithmetic logic circuits for each of the neighboring photosensors 2904, 2908 may be sent to the arithmetic logic circuit for photosensor 2906. This may accumulate photons received by any of the photosensors 2904, 2906, 2908 into a single histogram to boost the return signal. In some embodiments, this accumulation calculation may be augmented to use a weighted summation that applies a weight to the responses of various photosensors. Using weights may allow greater accuracy in determining how much a neighboring photosensor should influence the response of another photosensor. For example, since the neighboring photosensors that are orthogonally adjacent may have photosensor views that are closer to the central photosensor view than photosensors that are diagonally adjacent, these orthogonally adjacent photosensors may have higher weights applied than the diagonally adjacent photosensors when their photon counts are added to the histogram of the central photosensor. Using weights may also allow the optical measurement system to use photosensor responses that are not directly adjacent. As described above, the object 2800 may include more photosensor views than the nine photosensor views illustrated in
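The weighted accumulation described above might be sketched as follows, with the histograms and weights chosen purely for illustration (an orthogonal neighbor weighted more heavily than a diagonal one):

```python
import numpy as np

def combine_neighbor_histograms(center, neighbors, weights):
    """Add weighted photon counts from spatially adjacent histograms,
    bin by bin, into a copy of the central photosensor's histogram."""
    combined = np.asarray(center, dtype=float).copy()
    for hist, w in zip(neighbors, weights):
        combined += w * np.asarray(hist, dtype=float)
    return combined

center = np.array([0, 2, 5, 2, 0])
orthogonal = np.array([0, 1, 4, 1, 0])  # orthogonally adjacent view
diagonal = np.array([0, 1, 2, 1, 0])    # diagonally adjacent view
combined = combine_neighbor_histograms(center, [orthogonal, diagonal], [0.75, 0.25])
# Coinciding peaks reinforce each other: the central bin grows from 5 to 8.5.
```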
B. Combining Histograms/Peaks
Instead of simply generating a weighted summation of surrounding photosensor responses, some embodiments may use more complex methods of combining information from histograms of surrounding photosensor views into a histogram for a central photosensor view. In this example, the photosensor view 2810 may be surrounded by eight orthogonally/diagonally adjacent photosensor views as illustrated in
Some embodiments may combine information from the histograms of the surrounding photosensors to improve the detection of the central photosensor. The information from the histograms may be specific peaks in the histogram memories. For example, when peaks are located at the same distance in multiple adjacent photosensors, the system may determine that the corresponding photosensor views are spatially correlated and cause the associated histograms to be combined. Some embodiments may combine the entire histograms for each photosensor, while other embodiments may execute a peak detection circuit/algorithm, and then only combine specific peak locations. Therefore, the information that is combined from the histograms may include histogram values themselves, portions of histograms representing peaks, and/or additional values derived from the histograms.
Instead of simply generating a weighted sum of surrounding histograms, some embodiments may use a Gaussian combination of the adjacent histograms combined with a convolution process. For example, each of the histograms that are spatially adjacent to the histogram of the central photosensor view 2810 may have a Gaussian function applied before the combination. Instead of summing the resulting histogram values together, they may be convolved together in the same way that a matched filter would be convolved with a single histogram. As described above, this may approximate a spatial convolution between spatially adjacent photosensor responses.
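One plausible reading of this Gaussian combination, sketched below, convolves each adjacent histogram with a Gaussian kernel (in the manner a matched filter is convolved with a single histogram) before accumulating it into the central histogram; the kernel width and example values are assumptions:

```python
import numpy as np

def gaussian_kernel(sigma=1.0, radius=2):
    """Normalized Gaussian kernel over time bins."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def combine_with_gaussian(center, neighbors, sigma=1.0):
    """Convolve each spatially adjacent histogram with a Gaussian, then
    accumulate the smoothed counts into the central histogram."""
    out = np.asarray(center, dtype=float).copy()
    k = gaussian_kernel(sigma)
    for hist in neighbors:
        out += np.convolve(np.asarray(hist, dtype=float), k, mode="same")
    return out

out = combine_with_gaussian([0, 0, 10, 0, 0], [[0, 0, 8, 0, 0]])
# A coinciding neighbor peak still reinforces the central bin after smoothing.
```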
XIII. Photosensor Array Configurations

The examples described above use a rectangular photosensor array pattern with photosensors arranged in a grid layout. However, not all embodiments are limited to rectangular grids. Other embodiments may use different photosensor array patterns that may result in higher photosensor density. Other layout patterns may also be more amenable to spinning photosensor arrays. In any of these layout patterns, the techniques described above may be used to combine data from spatially adjacent photosensor views.
A. Solid State Array Configurations
To use the algorithms for spatial correlation described above with the rectangular photosensor array 3100, the histograms for adjacent photosensors may be assumed to be spatially adjacent or spatially correlated. For example, photosensor 3102 and photosensor 3104 are physically adjacent in the photosensor array 3100. Consequently, the corresponding photosensor views 3106, 3108 in the surrounding environment may also be adjacent.
B. Rotating Array Configurations
In contrast to the non-rotating photosensor array in
Another difference between the photosensor array 3204 of
This alternate pattern for the photosensor arrangement in the array 3204 may affect which photosensors are considered spatially adjacent to other photosensors in the array 3204. Physical adjacency need not necessarily imply spatial adjacency from the view of the photosensors in the surrounding environment. As the optical measurement system rotates, photosensors that are spatially adjacent to the photosensor 3202 may include photosensors on other photosensor arrays on other sides of the rotating member 3200 (not shown), as well as histograms from previous measurements that have been buffered and have not yet been overwritten. Spatial adjacency can accommodate any geometric arrangement of photosensors in an array configuration. Instead of relying on physical adjacency, embodiments can instead determine adjacent photosensor views in the surrounding environment during multiple measurements to perform the operations described above.
XIV. Circuit Implementations for Combining Pixels

To combine information from histograms as described above, the optical measurement system may include a summation/convolution circuit 3310 that combines the histograms together using mathematical operations. The summation/convolution circuit 3310 may be configured to scan or receive individual time bins from each of the memory blocks 3308 for a number of spatially adjacent photosensors. Each time bin from multiple histograms may be combined together into a time bin in a single histogram 3312 representing a single photosensor, such as photosensor 3302b. This circuit may use an arithmetic logic unit to add values together. This circuit may also use a multiplier to apply weights to each of the histograms 3306 as they are combined together. The final histogram 3312 can be passed through a peak detection circuit 3314 as described above to locate peaks in the histogram. Because the histograms 3306 have been combined from spatially adjacent photosensors, the resulting histogram 3312 is more likely to provide one or more peaks that exceed a detection threshold.
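The flow from the summation/convolution circuit 3310 through the peak detection circuit 3314 might be approximated as below; the weights and detection threshold are illustrative assumptions:

```python
import numpy as np

def combine_and_detect(histograms, weights, detection_threshold):
    """Combine time bins from several memory blocks into one histogram
    (optionally weighted), then report the peak bin only if the combined
    count clears the detection threshold."""
    hists = np.asarray(histograms, dtype=float)
    w = np.asarray(weights, dtype=float).reshape(-1, 1)
    combined = (hists * w).sum(axis=0)
    peak_idx = int(np.argmax(combined))
    if combined[peak_idx] >= detection_threshold:
        return peak_idx, combined
    return None, combined  # below threshold: no peak reported

# Three spatially adjacent photosensors with weak individual returns:
peak_idx, combined = combine_and_detect(
    [[0, 3, 1, 0], [0, 2, 2, 0], [0, 3, 0, 1]], [1.0, 1.0, 1.0], 7.0)
```

No individual histogram reaches the threshold of 7, but the combined histogram does, so a peak is reported at bin 1.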
Any identified peaks can be sent to a processor 3316 for distance calculations as described above. The processor 3316 may be implemented on an integrated circuit 3320 that is physically separate and distinct from an integrated circuit 3322 on which the arithmetic logic circuits 3304 and/or memory blocks 3308 are implemented.
In
In the example of
In this example, the circuit for combining information from the histograms may be implemented in the processor in the second integrated circuit 3420 that is different from an integrated circuit 3422 where the peaks are detected. As described in detail above, peaks can be detected on-chip, and windows of time bins can be passed to the processor for distance measurements. The processor may also execute a summation/convolution circuit 3425 to combine individual peaks to form a single peak 3426 representation for the photosensor 3302b. This single peak 3426 may exceed a detection threshold and may be used by a distance calculation algorithm 3427 on the processor.
At step 3602, the method may include transmitting one or more pulse trains over one or more first time intervals as part of an optical measurement. Each of the one or more time intervals may represent a “shot” that is repeated multiple times in the measurement. Each of the time intervals may include one or more pulse trains that are encoded and transmitted by the light sources such that the pulse trains can be recognized as they are reflected off of objects in the surrounding environment. Each of the time intervals may be subdivided into a plurality of time bins such that each of the time bins represents a bin in a histogram of received photon counts during the optical measurement. An example of how a single measurement may include multiple shots subdivided into time bins that aggregate photon counts is described above in relation to
At step 3604, the method may include detecting reflected photons from the one or more pulse trains. These reflected photons may be detected using a plurality of photosensors. The plurality of photosensors may include a first photosensor, along with one or more photosensors that are spatially adjacent to the first photosensor. As described above, a photosensor may be “spatially adjacent” to the first photosensor when the views of the two photosensors are adjacent in the environment surrounding the optical measurement system. In some configurations, this may include photosensors that are physically adjacent on the optical measurement system, while other configurations (e.g., spinning configurations) need not require physical adjacency. As described in detail above, each photosensor may include a plurality of photodetectors, such as a plurality of SPADs. The photosensors may receive reflected light as well as ambient background noise received from the surrounding environment. The reflected light received by the photosensor may include light reflected off of objects of interest in the surrounding environment. For example, these objects of interest may be at least 30 cm away from the photosensor, and may represent surrounding vehicles, buildings, pedestrians, and/or any other object that may be encountered around the optical measurement system during use. These objects of interest may be distinguished from objects that are part of the optical measurement system itself, such as a housing. The photosensor may also be coupled to threshold detection circuitry and/or to arithmetic logic circuitry that accumulates photon counts.
At step 3606, the method may include accumulating photon counts received during the one or more time intervals. The photons may be accumulated using an arithmetic logic circuit and may be accumulated into one or more memory blocks, such as the histogram memories described above. The memory blocks may be implemented using the registers in an SRAM of a histogram data path as described above in
At step 3608, the method may include combining information from the first histogram with information from the one or more histograms to generate a distance measurement for the photosensor. As described above, the information from the first histogram may include raw data from the histogram itself, identified peaks in the histogram, calculations or statistics derived from the histogram, distance measurements calculated based on the histogram, and/or any other information that may be calculated or derived from the photon counts in the histogram. Similarly, the information from the one or more histograms of the spatially adjacent photosensors may also include any of these types of information. Combining the information may include summing the histogram information, applying numerical weights to the histogram information, convolving the histogram information, applying Gaussian functions to the histogram information, and/or any other mathematical operation that may combine information from the different histogram information. The combination may result in a new histogram, a new peak in a histogram, a new distance measurement, and/or another numerical value. A circuit for performing this combination and/or generating the distance measurement for the first photosensor may include on-chip circuits for summing/convolving histogram information, as well as all or part of a separate processor configured to calculate distance measurements.
It should be appreciated that the specific steps illustrated in
While some embodiments disclosed herein have focused on the application of light ranging within the context of 3D sensing for automotive use cases, systems disclosed herein can be used in any application without departing from the scope of the present disclosure. For example, systems can have a small, or even miniature, form factor that enables a number of additional use cases, e.g., for solid-state light ranging systems. For example, systems can be used in 3D cameras and/or depth sensors within devices, such as mobile phones, tablet PCs, laptops, desktop PCs, or within other peripherals and/or user-interface devices. For example, one or more embodiments could be employed within a mobile device to support facial recognition and facial tracking capabilities, eye tracking capabilities, and/or for 3D scanning of objects. Other use cases include forward-facing depth cameras for augmented and virtual reality applications in mobile devices.
Other applications include deployment of one or more systems on airborne vehicles, such as airplanes, helicopters, drones, and the like. Such examples could provide 3D sensing and depth imaging to assist with navigation (autonomous or otherwise) and/or to generate 3D maps for later analysis, e.g., to support geophysical, architectural, and/or archeological analyses.
Systems can also be mounted to stationary objects and structures, such as buildings, walls, poles, bridges, scaffolding, and the like. In such cases, the systems can be used to monitor outdoor areas, such as manufacturing facilities, assembly lines, industrial facilities, construction sites, excavation sites, roadways, railways, bridges, etc. Furthermore, systems can be mounted indoors and used to monitor movement of persons and/or objects within a building, such as the movement of inventory within a warehouse or the movement of people, luggage, or goods within an office building, airport, train station, etc. As would be appreciated by one of ordinary skill in the art with the benefit of this disclosure, many different applications of light ranging systems are possible and, as such, the examples provided herein are provided for illustrative purposes only and shall not be construed to limit the uses of such systems to only the examples explicitly disclosed.
XVII. Computer System

Any of the computer systems or circuits mentioned herein may utilize any suitable number of subsystems. The subsystems can be connected via a system bus 75. As examples, subsystems can include input/output (I/O) devices, system memory, storage device(s), and network adapter(s) (e.g., Ethernet, Wi-Fi, etc.), which can be used to connect a computer system to other devices (e.g., an engine control unit). System memory and/or storage device(s) may embody a computer readable medium.
A computer system can include a plurality of the same components or subsystems, e.g., connected together by an external interface, by an internal interface, or via removable storage devices that can be connected and removed from one component to another component. In some embodiments, computer systems, subsystems, or apparatuses can communicate over a network.
Aspects of embodiments can be implemented in the form of control logic using hardware circuitry (e.g. an application specific integrated circuit or field programmable gate array) and/or using computer software with a generally programmable processor in a modular or integrated manner. As used herein, a processor can include a single-core processor, multi-core processor on a same integrated chip, or multiple processing units on a single circuit board or networked, as well as dedicated hardware. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will know and appreciate other ways and/or methods to implement embodiments of the present invention using hardware and a combination of hardware and software.
Any of the software components or functions described in this application may be implemented as software code to be executed by a processor using any suitable computer language such as, for example, Java, C, C++, C#, Objective-C, Swift, or scripting language such as Perl or Python using, for example, conventional or object-oriented techniques. The software code may be stored as a series of instructions or commands on a computer readable medium for storage and/or transmission. A suitable non-transitory computer readable medium can include random access memory (RAM), a read only memory (ROM), a magnetic medium such as a hard-drive or a floppy disk, or an optical medium such as a compact disk (CD) or DVD (digital versatile disk), flash memory, and the like. The computer readable medium may be any combination of such storage or transmission devices.
Such programs may also be encoded and transmitted using carrier signals adapted for transmission via wired, optical, and/or wireless networks conforming to a variety of protocols, including the Internet. As such, a computer readable medium may be created using a data signal encoded with such programs. Computer readable media encoded with the program code may be packaged with a compatible device or provided separately from other devices (e.g., via Internet download). Any such computer readable medium may reside on or within a single computer product (e.g. a hard drive, a CD, or an entire computer system), and may be present on or within different computer products within a system or network. A computer system may include a monitor, printer, or other suitable display for providing any of the results mentioned herein to a user.
Any of the methods described herein may be totally or partially performed with a computer system including one or more processors, which can be configured to perform the steps. Thus, embodiments can be directed to computer systems configured to perform the steps of any of the methods described herein, potentially with different components performing a respective step or a respective group of steps. Although presented as numbered steps, steps of methods herein can be performed at a same time or at different times or in a different order. Additionally, portions of these steps may be used with portions of other steps from other methods. Also, all or portions of a step may be optional. Additionally, any of the steps of any of the methods can be performed with modules, units, circuits, or other means of a system for performing these steps.
The specific details of particular embodiments may be combined in any suitable manner without departing from the spirit and scope of embodiments of the invention. However, other embodiments of the invention may be directed to specific embodiments relating to each individual aspect, or specific combinations of these individual aspects.
The above description of example embodiments of the invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form described, and many modifications and variations are possible in light of the teaching above.
A recitation of “a”, “an” or “the” is intended to mean “one or more” unless specifically indicated to the contrary. The use of “or” is intended to mean an “inclusive or,” and not an “exclusive or” unless specifically indicated to the contrary. Reference to a “first” component does not necessarily require that a second component be provided. Moreover, reference to a “first” or a “second” component does not limit the referenced component to a particular location unless expressly stated. The term “based on” is intended to mean “based at least in part on.”
All patents, patent applications, publications, and descriptions mentioned herein are incorporated by reference in their entirety for all purposes. None is admitted to be prior art.
Claims
1. An optical measurement system comprising:
- a plurality of light sources configured to emit one or more pulse trains over one or more time intervals as part of an optical measurement;
- a plurality of photosensors configured to detect reflected photons from the one or more pulse trains emitted from corresponding light sources in the plurality of light sources, wherein the plurality of photosensors comprises a first photosensor and one or more other photosensors that are spatially adjacent to the first photosensor;
- a plurality of memory blocks configured to accumulate photon counts of the photons received during the one or more time intervals by corresponding photosensors in the plurality of photosensors to represent a plurality of histograms of photon counts, wherein the plurality of histograms comprises a first histogram corresponding to the first photosensor and one or more histograms corresponding to the one or more other photosensors; and
- a circuit configured to combine information from the first histogram with information from the one or more other histograms to generate a distance measurement for the first photosensor.
2. The optical measurement system of claim 1, wherein the one or more photosensors are physically adjacent to the first photosensor in an array of photosensors.
3. The optical measurement system of claim 2, wherein the array of photosensors comprises a solid-state array of photosensors.
4. The optical measurement system of claim 2, wherein the one or more photosensors comprises eight photosensors that are orthogonally adjacent or diagonally adjacent to the first photosensor.
5. The optical measurement system of claim 1, wherein the one or more photosensors are not physically adjacent to the first photosensor in an array of photosensors, but wherein the one or more photosensors are positioned to receive photons from physical areas that are adjacent to a physical area from which photons are received by the first photosensor.
6. The optical measurement system of claim 5, wherein the plurality of photosensors are arranged in an array of photosensors that rotates around a central axis of the optical measurement system.
7. The optical measurement system of claim 1, wherein:
- the information from the first histogram comprises a first distance measurement calculated based on the first histogram;
- the information from the one or more histograms comprises one or more other distance measurements calculated based on the one or more other histograms; and
- the distance measurement comprises a combination of the first distance measurement with the one or more other distance measurements.
8. The optical measurement system of claim 7, wherein the first distance measurement is below a detection limit of the optical measurement system before combining the first distance measurement with the plurality of other distance measurements.
9. The optical measurement system of claim 8, wherein the distance measurement is above the detection limit of the optical measurement system after combining the first distance measurement with the plurality of other distance measurements.
10. The optical measurement system of claim 8, wherein the detection limit represents a minimum number of photons received by a corresponding photosensor.
11. The optical measurement system of claim 1, wherein the circuit to combine the information from the first histogram with the information from the one or more histograms comprises a processor implemented on an integrated circuit that is different from an integrated circuit on which the plurality of memory blocks is implemented.
12. The optical measurement system of claim 1, wherein the circuit and the plurality of memory blocks are implemented on a same integrated circuit.
13. A method of using spatially adjacent pixel information in an optical measurement system, the method comprising:
- transmitting one or more pulse trains over one or more first time intervals as part of an optical measurement;
- detecting, using a plurality of photosensors, reflected photons from the one or more pulse trains, wherein the plurality of photosensors comprises a first photosensor and one or more photosensors that are spatially adjacent to the first photosensor;
- accumulating photon counts received during the one or more first time intervals by the plurality of photosensors to represent a plurality of histograms of photon counts, wherein the plurality of histograms comprises a first histogram corresponding to the first photosensor and one or more histograms corresponding to the one or more photosensors; and
- combining information from the first histogram with information from the one or more histograms to generate a distance measurement for the first photosensor.
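The method of claim 13 (with the bin-wise photon-count aggregation of claim 15) can be sketched as follows. This is a minimal illustration, not the claimed implementation: the 1 ns bin width, 64-bin histogram, and photon-count figures are assumed values, and the function names are hypothetical.

```python
import random

C = 299_792_458.0    # speed of light (m/s)
BIN_WIDTH_S = 1e-9   # assumed 1 ns histogram time bin
NUM_BINS = 64        # assumed histogram length

def combine_histograms(first_hist, neighbor_hists):
    """Bin-wise sum of the first pixel's histogram with the
    histograms of its spatially adjacent pixels (one possible
    aggregation of photon counts per claim 15)."""
    combined = list(first_hist)
    for h in neighbor_hists:
        for i, count in enumerate(h):
            combined[i] += count
    return combined

def distance_from_histogram(hist):
    """Peak bin index -> round-trip time-of-flight -> one-way distance."""
    peak_bin = max(range(len(hist)), key=hist.__getitem__)
    return C * (peak_bin * BIN_WIDTH_S) / 2.0

# Sketch: each pixel sees a weak 2-photon return in bin 40 plus
# a few background photons; summing over 8 neighbors makes the
# true peak dominate the noise.
random.seed(0)
def noisy_hist():
    h = [0] * NUM_BINS
    h[40] += 2                           # weak return from the target
    for _ in range(3):                   # background photons
        h[random.randrange(NUM_BINS)] += 1
    return h

first = noisy_hist()
neighbors = [noisy_hist() for _ in range(8)]
combined = combine_histograms(first, neighbors)
print(round(distance_from_histogram(combined), 3))  # ≈ 5.996 m
```

The summed histogram has at least 18 counts in bin 40 versus at most a handful of background counts in any other bin, so the peak that was marginal in any single pixel's histogram becomes unambiguous after combination.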
14. The method of claim 13, wherein the reflected photons received by the first photosensor and received by the one or more photosensors are reflected from a same object in a surrounding environment.
15. The method of claim 13, wherein:
- the information from the first histogram comprises photon counts in the first histogram;
- the information from the one or more histograms comprises photon counts in the one or more histograms; and
- the distance measurement is calculated based on an aggregation of the photon counts in the first histogram with the photon counts in the one or more histograms.
16. The method of claim 13, wherein:
- the information from the first histogram comprises first one or more peaks in the first histogram;
- the information from the one or more histograms comprises second one or more peaks in the one or more histograms; and
- the distance measurement is calculated based on a combination of the first one or more peaks and the second one or more peaks.
17. The method of claim 16, wherein the distance measurement is calculated based on a summation of the first one or more peaks and the second one or more peaks.
18. The method of claim 16, wherein the distance measurement is calculated based on a Gaussian combination of the first one or more peaks and the second one or more peaks.
19. The method of claim 16, wherein the distance measurement is calculated based on a convolution of the first one or more peaks and the second one or more peaks.
20. The method of claim 16, wherein the distance measurement is calculated based on a weighted combination of the first one or more peaks and the second one or more peaks.
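One hypothetical instance of the weighted combination of claim 20 is an amplitude-weighted average of the peak positions detected in the first histogram and in the neighboring histograms; the function name and the choice of photon count as the weight are assumptions for illustration, not the claimed method:

```python
def weighted_peak_combination(peaks):
    """Combine (time_bin, amplitude) peak estimates from a pixel and
    its spatial neighbors, weighting each peak position by its
    amplitude (photon count). Returns the combined time-bin estimate."""
    total_weight = sum(amp for _, amp in peaks)
    return sum(t * amp for t, amp in peaks) / total_weight

# Center pixel with a strong peak at bin 40; neighbors with weaker
# peaks at nearby bins:
print(weighted_peak_combination([(40, 10), (41, 2), (39, 2), (40, 6)]))  # → 40.0
```

Because stronger peaks carry more weight, a high-confidence detection in one pixel dominates weaker, slightly offset detections in its neighbors, while still letting the neighbors refine the estimate when amplitudes are comparable.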
Type: Application
Filed: Oct 20, 2021
Publication Date: Feb 10, 2022
Applicant: Ouster, Inc. (San Francisco, CA)
Inventors: Angus PACALA (San Francisco, CA), Marvin SHU (San Francisco, CA)
Application Number: 17/451,634