LIDAR DEVICES WITH FREQUENCY AND TIME MULTIPLEXING OF SENSING SIGNALS

The subject matter of this specification can be implemented in, among other things, systems and methods of optical sensing that utilize time and frequency multiplexing of sensing signals. Described are, among other things, a light source subsystem to produce a first beam having a first frequency and a second beam having a second frequency, a modulator to impart a modulation to the second beam, and an optical interface subsystem to receive a third beam caused by interaction of the first beam with an object and a fourth beam caused by interaction of the second beam with the object. Also described are one or more circuits to determine, based on a first phase information carried by the third beam, a velocity of the object, and then determine, based on a second phase information carried by the fourth beam and the first phase information, a distance to the object.

RELATED APPLICATIONS

The instant specification claims the benefit of U.S. Provisional Application No. 63/199,207, filed Dec. 14, 2020, the entire contents of which are incorporated herein by reference.

TECHNICAL FIELD

The instant specification generally relates to range and velocity sensing in applications that involve determining locations and velocities of moving objects using optical signals reflected from the objects. More specifically, the instant specification relates to increasing efficiency and sensitivity of light detection and ranging (lidar) devices using frequency and/or time multiplexing of sensing signals.

BACKGROUND

Various automotive, aeronautical, marine, atmospheric, industrial, and other applications that involve tracking locations and motion of objects benefit from optical and radar detection technology. A rangefinder (radar or optical) device operates by emitting a series of signals that travel to an object and then detecting signals reflected back from the object. By determining a time delay between a signal emission and an arrival of the reflected signal, the rangefinder can determine a distance to the object. Additionally, the rangefinder can determine the velocity (the speed and the direction) of the object's motion by emitting two or more signals in quick succession and detecting a changing position of the object with each additional signal. Coherent rangefinders, which utilize the Doppler effect, can determine a longitudinal (radial) component of the object's velocity by detecting a change in the frequency of the arrived wave from the frequency of the emitted signal. When the object is moving away from (or towards) the rangefinder, the frequency of the arrived signal is lower (higher) than the frequency of the emitted signal, and the change in the frequency is proportional to the radial component of the object's velocity.

Autonomous (self-driving) vehicles operate by sensing an outside environment with various electromagnetic (radio, optical, infrared) sensors and charting a driving path through the environment based on the sensed data. Additionally, the driving path can be determined based on positioning (e.g., Global Positioning System (GPS)) and road map data. While the positioning and the road map data can provide information about static aspects of the environment (buildings, street layouts, etc.), dynamic information (such as information about other vehicles, pedestrians, cyclists, etc.) is obtained from contemporaneous electromagnetic sensing data. Precision and safety of the driving path and of the speed regime selected by the autonomous vehicle depend on the quality of the sensing data and on the ability of autonomous driving computing systems to process the sensing data and to provide appropriate instructions to the vehicle controls and the drivetrain.

BRIEF DESCRIPTION OF THE DRAWINGS

The present disclosure is illustrated by way of examples, and not by way of limitation, and can be more fully understood with reference to the following detailed description when considered in connection with the figures, in which:

FIG. 1 is a diagram illustrating components of an example autonomous vehicle that can deploy a lidar device capable of signal multiplexing for improved efficiency, accuracy, and speed of target characterization, in accordance with some implementations of the present disclosure.

FIG. 2A is a block diagram illustrating an example implementation of an optical sensing system capable of time multiplexing of lidar sensing signals, in accordance with some implementations of the present disclosure.

FIG. 2B is a schematic illustration of a phase encoding imparted to a sensing light beam transmitted by the optical sensing system of FIG. 2A, in accordance with some implementations of the present disclosure.

FIG. 2C is a schematic illustration of a frequency encoding imparted to a sensing light beam transmitted by the optical sensing system of FIG. 2A, in accordance with some implementations of the present disclosure.

FIG. 3A is a block diagram illustrating an example implementation of an optical sensing system capable of frequency multiplexing of lidar sensing signals, in accordance with some implementations of the present disclosure.

FIG. 3B is a schematic illustration of a phase encoding imparted to a sensing light beam transmitted by the optical sensing system of FIG. 3A, in accordance with some implementations of the present disclosure.

FIG. 3C is a schematic illustration of a frequency encoding imparted to a sensing light beam transmitted by the optical sensing system of FIG. 3A, in accordance with some implementations of the present disclosure.

FIG. 3D is a block diagram illustrating an example implementation of an optical sensing system that uses a frequency comb and frequency multiplexing for concurrent sensing of multiple objects, in accordance with some implementations of the present disclosure.

FIG. 4A is a block diagram illustrating an example implementation of an optical sensing system with frequency multiplexing in which one of the sensing signals is unmodulated and not shifted in frequency, in accordance with some implementations of the present disclosure.

FIG. 4B is a block diagram illustrating an example implementation of an optical sensing system with frequency multiplexing and efficient detection of targets in the presence of internal reflections and/or close-up returns, in accordance with some implementations of the present disclosure.

FIG. 4C is a block diagram illustrating an example implementation of an optical sensing system that uses optical locking to enable frequency multiplexing, in accordance with some implementations of the present disclosure.

FIG. 5A is a schematic illustration of a frequency encoding imparted to a sensing light beam together with a sequence of frequency chirps, for efficient disambiguation of returns from multiple objects, in accordance with some implementations of the present disclosure.

FIG. 5B is a schematic illustration of a phase encoding imparted to a sensing light beam together with a sequence of frequency chirps, for efficient disambiguation of returns from multiple objects, in accordance with some implementations of the present disclosure.

FIG. 6 depicts a flow diagram of an example method of time multiplexing of lidar sensing signals, in accordance with some implementations of the present disclosure.

FIG. 7 depicts a flow diagram of an example method of imparting a combination of frequency chirps together with a sequence of shifts, in accordance with some implementations of the present disclosure.

FIG. 8 depicts a flow diagram of an example method of frequency multiplexing of lidar sensing signals, in accordance with some implementations of the present disclosure.

SUMMARY

In one implementation, disclosed is a system that includes a light source subsystem configured to produce a first beam having a first frequency and a second beam having a second frequency; a modulator configured to impart a modulation to the second beam; an optical interface subsystem configured to: receive i) a third beam caused by interaction of the first beam with a first object and ii) a fourth beam caused by interaction of the second beam with the first object; and one or more circuits configured to: determine, based on a first phase information carried by the third beam, a velocity of the first object; and determine, based on a second phase information carried by the fourth beam and the first phase information, a distance to the first object.

In another implementation, disclosed is a system that includes a light source configured to generate a first beam; a first modulator configured to produce, based on the first beam, a second beam comprising a plurality of first portions interspersed with a plurality of second portions, wherein each of the plurality of second portions is modulated with a first sequence of shifts, the first sequence of shifts comprising at least one of a sequence of frequency shifts or a sequence of phase shifts; an optical interface subsystem configured to: receive a third beam caused by interaction of the second beam with an object, the third beam comprising a plurality of third portions interspersed with a plurality of fourth portions, wherein each of the plurality of fourth portions is modulated with a second sequence of shifts that is time-delayed relative to the first sequence of shifts; and one or more circuits configured to: determine a velocity of the object based on a Doppler frequency shift between the third beam and the second beam, identified using the plurality of first portions and the plurality of third portions; and determine, based on i) a time delay between the first sequence of shifts and the second sequence of shifts and ii) the identified Doppler frequency shift, a distance to the object.

In another implementation, disclosed is a system that includes a light source configured to generate a first beam; one or more modulators configured to produce, using the first beam, a second beam comprising a plurality of chirped portions, wherein each of the plurality of chirped portions comprises a monotonic modulation and a sequence of shifts, wherein the sequence of shifts comprises at least one of a sequence of frequency shifts or a sequence of phase shifts; an optical interface subsystem configured to: receive a third beam caused by interaction of the second beam with an object, the third beam comprising the plurality of chirped portions that are time-delayed; and one or more circuits configured to: determine, based on a phase difference between the third beam and a local oscillator (LO) copy of the second beam, a velocity of the object and a distance to the object.

DETAILED DESCRIPTION

An autonomous vehicle (AV) or a driver-operated vehicle that uses various driver-assistance technologies can employ a light detection and ranging (lidar) technology to detect distances to various objects in the environment and, sometimes, the velocities of such objects. A lidar emits one or more laser signals (pulses) that travel to an object and then detects arrived signals reflected from the object. By determining a time delay between the signal emission and the arrival of the reflected waves, a time-of-flight (ToF) lidar can determine the distance to the object. A typical lidar emits signals in multiple directions to obtain a wide view of the driving environment of the AV. The outside environment can be any environment, including any urban environment (e.g., streets), rural environment, highway environment, indoor environment (e.g., the environment of an industrial plant, a shipping warehouse, a hazardous area of a building, etc.), marine environment, and so on. The outside environment can include multiple stationary objects (roadways, buildings, bridges, road signs, shoreline, rocks, trees, etc.), multiple movable objects (e.g., vehicles, bicyclists, pedestrians, animals, ships, boats, etc.), and/or any other objects located outside the AV. For example, a lidar device can cover (e.g., scan) an entire 360-degree view by collecting a series of consecutive frames identified with timestamps. As a result, each sector in space is sensed in time increments that are determined by the angular velocity of the lidar's scanning. Sometimes, an entire 360-degree view of the outside environment can be obtained over a single scan of the lidar. Alternatively, any smaller sector, e.g., a 1-degree sector, a 5-degree sector, a 10-degree sector, or any other sector can be scanned, as desired.

ToF lidars can also be used to determine velocities of objects in the outside environment, e.g., by detecting two (or more) locations $\vec{r}(t_1)$, $\vec{r}(t_2)$ of some reference point of an object (e.g., the front end of a vehicle) and inferring the velocity as the ratio, $\vec{v} = [\vec{r}(t_2) - \vec{r}(t_1)]/(t_2 - t_1)$. By design, the measured velocity $\vec{v}$ is not the instantaneous velocity of the object but rather the velocity averaged over the time interval $t_2 - t_1$, as the ToF technology does not allow one to ascertain whether the object maintained the same velocity $\vec{v}$ during this time or experienced an acceleration or deceleration (with detection of acceleration/deceleration requiring additional locations $\vec{r}(t_3)$, $\vec{r}(t_4)$, . . . of the object).

Coherent or Doppler lidars operate by detecting, in addition to ToF, a change in the frequency of the reflected signal (the Doppler shift) indicative of the velocity of the reflecting surface. Measurements of the Doppler shift can be used to determine, based on a single sensing frame, radial components (along the line of beam propagation) of the velocities of various reflecting points belonging to one or more objects in the outside environment. A signal emitted by a coherent lidar can be modulated (in frequency and/or phase) with a radio frequency (RF) signal prior to being transmitted to a target. A local copy (referred to as a local oscillator (LO) herein) of the transmitted signal can be maintained on the lidar and mixed with a signal reflected from the target; a beating pattern between the two signals can be extracted and Fourier-analyzed to determine the Doppler shift and identify the radial velocity of the target. A coherent lidar can thus determine both the target's velocity and the distance to the lidar using a single beam: the RF modulation can be sufficiently complex and detailed to allow range detection based on the relative shift (caused by the time-of-flight delay) between the RF modulation of the LO copy and the RF modulation of the reflected beam.

For example, an output signal (also stored as the LO copy) of frequency $f$ may at time $t$ have a phase $2\pi f t + \phi(t)$ that includes a sequence of (typically discrete) time-dependent phase shifts (encoding) $\phi(t)$. A signal reflected from a target may have a different phase $2\pi f t + \phi_R(t)$, where $\phi_R(t)$ includes the phase change $2\pi f_D t$ due to the Doppler shift $f_D$ caused by a motion of the target and the time-delayed phase encoding: $\phi_R(t) = 2\pi f_D t + \phi(t-\tau)$. The delay time $\tau = 2L/c$ is representative of the distance $L$ to the target, with $c$ being the speed of light. Accordingly, if the phase encoding $\phi(t)$ is suitably engineered, the phase of the LO signal at time $t-\tau$ in the past, $\phi(t-\tau)$, is strongly correlated with the phase of the reflected signal, $\phi_R(t) - 2\pi f_D t$, from which the additional phase associated with the Doppler shift is subtracted. More specifically, the following correlation function,

$$K(f_D, \tau) = \int_t^{t+T} dt'\, e^{-i\phi(t'-\tau)} \times e^{i[\phi_R(t') - 2\pi f_D t']}, \qquad (1)$$

taken over, e.g., a time period $T$ of the phase encoding, has a much larger value for the true Doppler shift $f_D$ and the true delay time (time-of-flight) $\tau$ than for various other (hypothesized) values of Doppler shifts and delay times. Correspondingly, by analyzing the correlator $K(f_D, \tau)$ as a function of $f_D$ and $\tau$, it is possible to identify the values of $f_D$ and $\tau$ for which the correlator has a maximum. These values represent the actual Doppler shift and travel time, from which the (radial) velocity $V$ of the target relative to the lidar and the distance $L$ to the target may be determined:

$$V = \frac{c f_D}{2f}, \qquad L = \frac{c\tau}{2}.$$
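As a quick numeric illustration of these two conversions (a minimal sketch; the 1550 nm carrier wavelength and the "measured" values of $f_D$ and $\tau$ below are assumed for illustration only):

```python
# Minimal numeric sketch of the velocity/distance conversions above.
# The carrier wavelength and the "measured" f_D and tau are assumed values.
c = 299_792_458.0        # speed of light, m/s
wavelength = 1.55e-6     # assumed optical carrier wavelength, m
f = c / wavelength       # optical carrier frequency, Hz

f_D = 2.5e6              # hypothetical measured Doppler shift, Hz
tau = 0.5e-6             # hypothetical measured round-trip delay, s

V = c * f_D / (2 * f)    # radial velocity: V = c*f_D/(2*f) = wavelength*f_D/2
L = c * tau / 2          # distance: L = c*tau/2
print(f"V = {V:.2f} m/s, L = {L:.1f} m")   # ~1.94 m/s, ~75.0 m
```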

Finding the correct values of $f_D$ and $\tau$, however, requires performing a large number of computations: combing through a large number of possible (suitably discretized) pairs $(f_D, \tau)$ of Doppler shifts and delay times. On the other hand, reducing the number of pairs that are being evaluated leads to a lower resolution of the velocity $V$ and distance $L$ determination. Additionally, if the pairs $(f_D, \tau)$ are sparse, the peaks in the correlation function $K(f_D, \tau)$ may not be sufficiently pronounced for a reliable disambiguation. A brute-force search of this kind is sketched below.
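To make the computational burden concrete, the following minimal sketch (not the patent's implementation; the synthetic received signal, the random binary phase code, and all parameters are assumed) evaluates a discretized $|K(f_D, \tau)|$ over a grid of candidate pairs. The cost grows as the product of the two grid sizes and the record length, which is the processing load the multiplexing schemes described below are designed to reduce:

```python
import numpy as np

# Brute-force evaluation of |K(f_D, tau)| over a discretized (f_D, tau) grid.
# A sketch only: rx is a synthetic received baseband signal, phi a random
# binary phase code; all parameters are assumed.
rng = np.random.default_rng(0)
fs = 100e6                                  # sample rate, Hz (assumed)
n = 1024
t = np.arange(n) / fs
phi = np.pi * rng.integers(0, 2, n)         # binary phase encoding phi(t)
true_fd, true_lag = 3.2e6, 120              # "true" Doppler (Hz), delay (samples)
rx = np.exp(1j * (2*np.pi*true_fd*t + np.roll(phi, true_lag)))

fd_grid = np.arange(0, 10e6, 50e3)          # 200 candidate Doppler shifts
lag_grid = np.arange(0, 256)                # 256 candidate delays
K = np.empty((len(fd_grid), len(lag_grid)))
for i, fd in enumerate(fd_grid):            # cost ~ |fd_grid| * |lag_grid| * n
    dedopplered = rx * np.exp(-2j*np.pi*fd*t)   # strip hypothesized Doppler
    for j, lag in enumerate(lag_grid):
        # correlate against the conjugated, lag-shifted code
        K[i, j] = abs(np.vdot(np.exp(1j*np.roll(phi, lag)), dedopplered))
i, j = np.unravel_index(np.argmax(K), K.shape)
print(f"peak: f_D ~ {fd_grid[i]/1e6:.2f} MHz, delay ~ {lag_grid[j]} samples")
```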

Aspects and implementations of the present disclosure enable methods and systems that reduce processing load for reliable velocity and distance determination by multiplexing the output wave into a first wave whose reflection provides information about the Doppler shift (and velocity of the target) and a second wave, whose reflection provides information about the distance to the target. In some implementations, the first wave and the second wave are output concurrently and have different (e.g., shifted) frequencies. In some implementations, the first wave and the second wave have the same (or similar) frequencies and are multiplexed in time, e.g., transmitted one after another using the same carrier frequency. Numerous lidar system architectures that enable frequency and time multiplexing are disclosed. The advantages of the disclosed implementations include, but are not limited to, improving efficiency and speed of velocity and distance detections, reducing the amount of computations performed by lidar devices, and improving resolution of lidar detections. In turn, increasing the speed and accuracy of lidar detections improves the safety of lidar-based applications, such as autonomous vehicle driving missions.

FIG. 1 is a diagram illustrating components of an example autonomous vehicle (AV) 100 that can deploy a lidar device capable of signal multiplexing for improved efficiency, accuracy, and speed of target characterization, in accordance with some implementations of the present disclosure. Autonomous vehicles can include motor vehicles (cars, trucks, buses, motorcycles, all-terrain vehicles, recreational vehicles, any specialized farming or construction vehicles, and the like), aircraft (planes, helicopters, drones, and the like), naval vehicles (ships, boats, yachts, submarines, and the like), or any other self-propelled vehicles (e.g., robots, factory or warehouse robotic vehicles, sidewalk delivery robotic vehicles, etc.) capable of being operated in a self-driving mode (without a human input or with a reduced human input).

Vehicles, such as those described herein, may be configured to operate in one or more different driving modes. For instance, in a manual driving mode, a driver may directly control acceleration, deceleration, and steering via inputs such as an accelerator pedal, a brake pedal, a steering wheel, etc. A vehicle may also operate in one or more autonomous driving modes including, for example, a semi or partially autonomous driving mode in which a person exercises some amount of direct or remote control over driving operations, or a fully autonomous driving mode in which the vehicle handles the driving operations without direct or remote control by a person. These vehicles may be known by different names including, for example, autonomously driven vehicles, self-driving vehicles, and so on.

As described herein, in a semi or partially autonomous driving mode, even though the vehicle assists with one or more driving operations (e.g., steering, braking and/or accelerating to perform lane centering, adaptive cruise control, advanced driver assistance systems (ADAS), or emergency braking), the human driver is expected to be situationally aware of the vehicle's surroundings and supervise the assisted driving operations. Here, even though the vehicle may perform all driving tasks in certain situations, the human driver is expected to be responsible for taking control as needed.

Although, for brevity and conciseness, various systems and methods are described below in conjunction with autonomous vehicles, similar techniques can be used in various driver assistance systems that do not rise to the level of fully autonomous driving systems. In the United States, the Society of Automotive Engineers (SAE) has defined different levels of automated driving operations to indicate how much, or how little, a vehicle controls the driving, although different organizations, in the United States or in other countries, may categorize the levels differently. More specifically, disclosed systems and methods can be used in SAE Level 2 driver assistance systems that implement steering, braking, acceleration, lane centering, adaptive cruise control, etc., as well as other driver support. The disclosed systems and methods can be used in SAE Level 3 driving assistance systems capable of autonomous driving under limited (e.g., highway) conditions. Likewise, the disclosed systems and methods can be used in vehicles that use SAE Level 4 self-driving systems that operate autonomously under most regular driving situations and require only occasional attention of the human operator. In all such driving assistance systems, accurate lane estimation can be performed automatically without a driver input or control (e.g., while the vehicle is in motion) and result in improved reliability of vehicle positioning and navigation and the overall safety of autonomous, semi-autonomous, and other driver assistance systems. As previously noted, in addition to the way in which SAE categorizes levels of automated driving operations, other organizations, in the United States or in other countries, may categorize levels of automated driving operations differently. Without limitation, the disclosed systems and methods herein can be used in driving assistance systems defined by these other organizations' levels of automated driving operations.

A driving environment 110 can be or include any portion of the outside environment containing objects that can determine or affect how driving of the AV occurs. More specifically, a driving environment 110 can include any objects (moving or stationary) located outside the AV, such as roadways, buildings, trees, bushes, sidewalks, bridges, mountains, other vehicles, pedestrians, bicyclists, and so on. The driving environment 110 can be urban, suburban, rural, and so on. In some implementations, the driving environment 110 can be an off-road environment (e.g. farming or agricultural land). In some implementations, the driving environment can be inside a structure, such as the environment of an industrial plant, a shipping warehouse, a hazardous area of a building, and so on. In some implementations, the driving environment 110 can consist mostly of objects moving parallel to a surface (e.g., parallel to the surface of Earth). In other implementations, the driving environment can include objects that are capable of moving partially or fully perpendicular to the surface (e.g., balloons, leaves falling, etc.). The term “driving environment” should be understood to include all environments in which motion of self-propelled vehicles can occur. For example, “driving environment” can include any possible flying environment of an aircraft or a marine environment of a naval vessel. The objects of the driving environment 110 can be located at any distance from the AV, from close distances of several feet (or less) to several miles (or more).

The example AV 100 can include a sensing system 120. The sensing system 120 can include various electromagnetic (e.g., optical) and non-electromagnetic (e.g., acoustic) sensing subsystems and/or devices. The terms “optical” and “light,” as referenced throughout this disclosure, are to be understood to encompass any electromagnetic radiation (waves) that can be used in object sensing to facilitate autonomous driving, e.g., distance sensing, velocity sensing, acceleration sensing, rotational motion sensing, and so on. For example, “optical” sensing can utilize a range of light visible to a human eye (e.g., the 380 to 700 nm wavelength range), the UV range (below 380 nm), the infrared range (above 700 nm), the radio frequency range (above 1 m), etc. In implementations, “optical” and “light” can include any other suitable range of the electromagnetic spectrum.

The sensing system 120 can include a radar unit 126, which can be any system that utilizes radio or microwave frequency signals to sense objects within the driving environment 110 of the AV 100. Radar unit 126 may deploy a sensing technology that is similar to the lidar technology but uses the radio wave spectrum of electromagnetic waves. For example, radar unit 126 may use 10-100 GHz carrier radio frequencies. Radar unit 126 may be a pulsed ToF radar, which detects a distance to the objects from the time of signal propagation, or a continuously-operated coherent radar, which detects both the distance to the objects as well as the velocities of the objects, by determining a phase difference between transmitted and reflected radio signals. Compared with lidars, radar sensing units have lower spatial resolution (by virtue of a much longer wavelength), but lack expensive optical elements, are easier to maintain, have a longer working range, and are less sensitive to adverse weather conditions. An AV may often be outfitted with multiple radar transmitters and receivers as part of the radar unit 126. The radar unit 126 can be configured to sense both the spatial locations of the objects (including their spatial dimensions) and their velocities (e.g., using the radar Doppler shift technology). The sensing system 120 can include a lidar sensor 122 (e.g., a lidar rangefinder), which can be a laser-based unit capable of determining distances to the objects in the driving environment 110 as well as, in some implementations, velocities of such objects. The lidar sensor 122 can utilize wavelengths of electromagnetic waves that are shorter than the wavelengths of radio waves and can thus provide a higher spatial resolution and sensitivity compared with the radar unit 126. The lidar sensor 122 can include a ToF lidar and/or a coherent lidar sensor, such as a frequency-modulated continuous-wave (FMCW) lidar sensor, a phase-modulated lidar sensor, an amplitude-modulated lidar sensor, and the like. A coherent lidar sensor can use optical heterodyne detection for velocity determination. In some implementations, the functionality of the ToF lidar sensor and the coherent lidar sensor can be combined into a single (e.g., hybrid) unit capable of determining both the distance to and the radial velocity of the reflecting object. Such a hybrid unit can be configured to operate in an incoherent sensing mode (ToF mode) and/or a coherent sensing mode (e.g., a mode that uses heterodyne detection) or both modes at the same time. In some implementations, multiple lidar sensor units can be mounted on an AV, e.g., at different locations separated in space, to provide additional information about a transverse component of the velocity of the reflecting object.

Lidar sensor 122 can include one or more laser sources producing and emitting signals and one or more detectors of the signals reflected back from the objects. Lidar sensor 122 can include spectral filters to filter out spurious electromagnetic waves having wavelengths (frequencies) that are different from the wavelengths (frequencies) of the emitted signals. In some implementations, lidar sensor 122 can include directional filters (e.g., apertures, diffraction gratings, and so on) to filter out electromagnetic waves that can arrive at the detectors along directions different from the reflection directions for the emitted signals. Lidar sensor 122 can use various other optical components (lenses, mirrors, gratings, optical films, interferometers, spectrometers, local oscillators, and the like) to enhance sensing capabilities of the sensors.

In some implementations, lidar sensor 122 can include one or more 360-degree scanning units (which scan the outside environment in a horizontal direction, in one example). In some implementations, lidar sensor 122 can be capable of spatial scanning along both the horizontal and vertical directions. In some implementations, the field of view can be up to 90 degrees in the vertical direction (e.g., with at least a part of the region above the horizon scanned by the lidar signals or with at least part of the region below the horizon scanned by the lidar signals). In some implementations (e.g., in aeronautical environments), the field of view can be a full sphere (consisting of two hemispheres). For brevity and conciseness, when a reference to "lidar technology," "lidar sensing," "lidar data," and "lidar," in general, is made in the present disclosure, such reference shall be understood also to encompass other sensing technologies that operate, generally, at near-infrared wavelengths, but can include sensing technologies that operate at other wavelengths as well.

Lidar sensor 122 can include signal multiplexing function (SM) 124, which can include a combination of hardware elements and software components capable of implementing frequency and/or time multiplexing of lidar signals for improved efficiency, speed, and resolution of lidar sensing. SM 124 can deploy a variety of techniques as described below in conjunction with FIGS. 2-5. For example, SM 124 can include electronic circuitry and optical modulators that produce multiple signals with different frequencies, each signal enabling detection of a specific characteristic of targets, e.g., the velocity of the targets or the distance to the targets. The signals can be imparted, as a combination, to the same sensing optical beam and transmitted to one or more targets. In one example, the signals can have the same (or a similar) carrier frequency and be time multiplexed. For example, a first portion of the signal (e.g., of time duration T1) can be unmodulated while the second portion of the signal (e.g., of time duration T2) can be modulated with phase or frequency encoding. The first portion can be used for the Doppler shift (target velocity) detection and the second portion can be used (in conjunction with the Doppler shift detected using the first portion) for the range (distance) detection. As another example, an unmodulated first signal of a first frequency F1 can be combined with a modulated (e.g., with phase, frequency, and/or amplitude modulation) second signal of a second frequency F2. The first signal can be used for the Doppler shift detection and the second signal can be used for the range detection. In some implementations, the first and the second signals can be imparted to the same optical beam. In some implementations, the first and the second signals can be imparted to two separate (but coherent) beams that are subsequently combined and transmitted to the target. In some implementations, the two separate beams can be produced by the same light source (e.g., laser) using beam splitting. In some implementations, the two separate beams can be produced by different lasers that are synchronized using a coherent feedback loop with a controlled frequency offset. Numerous other implementations of SM 124 functionality are described below.
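As an illustrative sketch of the time-multiplexing scheme just described (a minimal model; the durations, sample rate, and random binary code are assumed rather than taken from the disclosure), the complex baseband waveform below leaves the first portion of each period unmodulated and phase-encodes the second portion:

```python
import numpy as np

# Sketch of a time-multiplexed sensing waveform: each period of duration
# T1 + T2 has an unmodulated first portion (pilot tone, used for Doppler
# detection) and a phase-encoded second portion (used for ranging).
# All parameters and the random binary code are assumed for illustration.
fs = 100e6                          # sample rate, Hz
T1, T2 = 2e-6, 8e-6                 # durations of the two portions, s
n1, n2 = int(T1 * fs), int(T2 * fs)
rng = np.random.default_rng(1)

def one_period() -> np.ndarray:
    pilot = np.ones(n1, dtype=complex)          # unmodulated portion
    code = np.pi * rng.integers(0, 2, n2)       # binary phase code for T2
    return np.concatenate([pilot, np.exp(1j * code)])

waveform = np.concatenate([one_period() for _ in range(4)])   # 4 periods
print(waveform.shape)               # (4 * (n1 + n2),) complex baseband samples
```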

The sensing system 120 can further include one or more cameras 129 to capture images of the driving environment 110. The images can be two-dimensional projections of the driving environment 110 (or parts of the driving environment 110) onto a projecting plane of the cameras (flat or non-flat, e.g. fisheye cameras). Some of the cameras 129 of the sensing system 120 can be video cameras configured to capture a continuous (or quasi-continuous) stream of images of the driving environment 110. Some of the cameras 129 of the sensing system 120 can be high resolution cameras (HRCs) and some of the cameras 129 can be surround view cameras (SVCs). The sensing system 120 can also include one or more sonars 128, which can be ultrasonic sonars, in some implementations.

The sensing data obtained by the sensing system 120 can be processed by a data processing system 130 of AV 100. In some implementations, the data processing system 130 can include a perception system 132. Perception system 132 can be configured to detect and track objects in the driving environment 110 and to recognize/identify the detected objects. For example, the perception system 132 can analyze images captured by the cameras 129 and can be capable of detecting traffic light signals, road signs, roadway layouts (e.g., boundaries of traffic lanes, topologies of intersections, designations of parking places, and so on), presence of obstacles, and the like. The perception system 132 can further receive the lidar sensing data (Doppler data and/or ToF data) to determine distances to various objects in the driving environment 110 and velocities (radial and transverse) of such objects. In some implementations, the perception system 132 can also receive the radar sensing data, which may similarly include distances to various objects as well as velocities of those objects. Radar data can be complementary to lidar data, e.g., whereas lidar data may include high-resolution data for low and mid-range distances (e.g., up to several hundred meters), radar data may include lower-resolution data collected from longer distances (e.g., up to several kilometers or more). In some implementations, perception system 132 can use the lidar data and/or radar data in combination with the data captured by the camera(s) 129. In one example, the camera(s) 129 can detect an image of road debris partially obstructing a traffic lane. Using the data from the camera(s) 129, perception system 132 can be capable of determining the angular extent of the debris. Using the lidar data, the perception system 132 can determine the distance from the debris to the AV and, therefore, by combining the distance information with the angular size of the debris, the perception system 132 can determine the linear dimensions of the debris as well.

In another implementation, using the lidar data, the perception system 132 can determine how far a detected object is from the AV and can further determine the component of the object's velocity along the direction of the AV's motion. Furthermore, using a series of quick images obtained by the camera, the perception system 132 can also determine the lateral velocity of the detected object in a direction perpendicular to the direction of the AV's motion. In some implementations, the lateral velocity can be determined from the lidar data alone, for example, by recognizing an edge of the object (using horizontal scanning) and further determining how quickly the edge of the object is moving in the lateral direction. The perception system 132 can receive one or more sensor data frames from the sensing system 120. Each of the sensor frames can include multiple points. Each point can correspond to a reflecting surface from which a signal emitted by the sensing system 120 (e.g., lidar sensor 122) is reflected. The type and/or nature of the reflecting surface can be unknown. Each point can be associated with various data, such as a timestamp of the frame, coordinates of the reflecting surface, radial velocity of the reflecting surface, intensity of the reflected signal, and so on.

The perception system 132 can further receive information from a positioning subsystem, which can include a GPS transceiver (not shown), configured to obtain information about the position of the AV relative to Earth and its surroundings. The positioning data processing module 134 can use the positioning data (e.g., GPS and IMU data) in conjunction with the sensing data to help accurately determine the location of the AV with respect to fixed objects of the driving environment 110 (e.g. roadways, lane boundaries, intersections, sidewalks, crosswalks, road signs, curbs, surrounding buildings, etc.) whose locations can be provided by map information 135. In some implementations, the data processing system 130 can receive non-electromagnetic data, such as audio data (e.g., ultrasonic sensor data, or data from a mic picking up emergency vehicle sirens), temperature sensor data, humidity sensor data, pressure sensor data, meteorological data (e.g., wind speed and direction, precipitation data), and the like.

Data processing system 130 can further include an environment monitoring and prediction component 136, which can monitor how the driving environment 110 evolves with time, e.g., by keeping track of the locations and velocities of the moving objects. In some implementations, environment monitoring and prediction component 136 can keep track of the changing appearance of the driving environment due to motion of the AV relative to the environment. In some implementations, driving environment monitoring and prediction component 136 can make predictions about how various moving objects of the driving environment 110 will be positioned within a prediction time horizon. The predictions can be based on the current locations and velocities of the moving objects as well as on the tracked dynamics of the moving objects during a certain (e.g., predetermined) period of time. For example, based on stored data for object A indicating accelerated motion of object A during the previous 3-second period of time, environment monitoring and prediction component 136 can conclude that object A is resuming its motion from a stop sign or a red traffic light signal. Accordingly, environment monitoring and prediction component 136 can predict, given the layout of the roadway and presence of other vehicles, where object A is likely to be within the next 3 or 5 seconds of motion. As another example, based on stored data for object B indicating decelerated motion of object B during the previous 2-second period of time, environment monitoring and prediction component 136 can conclude that object B is stopping at a stop sign or at a red traffic light signal. Accordingly, environment monitoring and prediction component 136 can predict where object B is likely to be within the next 1 or 3 seconds. Environment monitoring and prediction component 136 can perform periodic checks of the accuracy of its predictions and modify the predictions based on new data obtained from the sensing system 120.

The data generated by the perception system 132, the GPS data processing module 134, and environment monitoring and prediction component 136 can be used by an autonomous driving system, such as AV control system (AVCS) 140. The AVCS 140 can include one or more algorithms that control how AV 100 is to behave in various driving situations and driving environments. For example, the AVCS 140 can include a navigation system for determining a global driving route to a destination point. The AVCS 140 can also include a driving path selection system for selecting a particular path through the immediate driving environment, which can include selecting a traffic lane, negotiating traffic congestion, choosing a place to make a U-turn, selecting a trajectory for a parking maneuver, and so on. The AVCS 140 can also include an obstacle avoidance system for safe avoidance of various obstructions (rocks, stalled vehicles, a jaywalking pedestrian, and so on) within the driving environment of the AV. The obstacle avoidance system can be configured to evaluate the size, shape, and trajectories of the obstacles (if the obstacles are moving) and select an optimal driving strategy (e.g., braking, steering, accelerating, etc.) for avoiding the obstacles.

Algorithms and modules of AVCS 140 can generate instructions for various systems and components of the vehicle, such as the powertrain, brakes, and steering 150, vehicle electronics 160, signaling 170, and other systems and components not explicitly shown in FIG. 1. The powertrain, brakes, and steering 150 can include an engine (internal combustion engine, electric engine, and so on), transmission, differentials, axles, wheels, steering mechanism, and other systems. The vehicle electronics 160 can include an on-board computer, engine management, ignition, communication systems, carputers, telematics, in-car entertainment systems, and other systems and components. The signaling 170 can include high and low headlights, stopping lights, turning and backing lights, horns and alarms, inside lighting system, dashboard notification system, passenger notification system, radio and wireless network transmission systems, and so on. Some of the instructions output by the AVCS 140 can be delivered directly to the powertrain, brakes, and steering 150 (or signaling 170) whereas other instructions output by the AVCS 140 are first delivered to the vehicle electronics 160, which generates commands to the powertrain, brakes, and steering 150 and/or signaling 170.

In one example, the AVCS 140 can determine that an obstacle identified by the data processing system 130 is to be avoided by decelerating the vehicle until a safe speed is reached, followed by steering the vehicle around the obstacle. The AVCS 140 can output instructions to the powertrain, brakes, and steering 150 (directly or via the vehicle electronics 160) to 1) reduce, by modifying the throttle settings, a flow of fuel to the engine to decrease the engine rpm, 2) downshift, via an automatic transmission, the drivetrain into a lower gear, 3) engage a brake unit to reduce (while acting in concert with the engine and the transmission) the vehicle's speed until a safe speed is reached, and 4) perform, using a power steering mechanism, a steering maneuver until the obstacle is safely bypassed. Subsequently, the AVCS 140 can output instructions to the powertrain, brakes, and steering 150 to resume the previous speed settings of the vehicle.

FIG. 2A is a block diagram illustrating an example implementation of an optical sensing system 200 (e.g., as part of sensing system 120) capable of time multiplexing of lidar sensing signals, in accordance with some implementations of the present disclosure. Sensing system 200 can be a part of lidar sensor 122 that includes SM 124. Depicted in FIG. 2A is a light source 202 configured to produce one or more beams of light. "Beams" should be understood herein as referring to any signals of electromagnetic radiation, such as beams, wave packets, pulses, sequences of pulses, or other types of signals. Solid arrows in FIG. 2A (and other figures) indicate optical signal propagation and dashed arrows depict propagation of electrical (e.g., RF or other analog) signals or electronic (e.g., digital) signals. Light source 202 can be a broadband laser, a narrow-band laser, a light-emitting diode, a Gunn diode, and the like. Light source 202 can be a semiconductor laser, a gas laser, an Nd:YAG laser, or any other type of laser. Light source 202 can be a continuous wave laser, a single-pulse laser, a repetitively pulsed laser, a mode-locked laser, and the like.

In some implementations, light output by light source 202 can be conditioned (pre-processed) by one or more components or elements of a beam preparation stage 210 of the optical sensing system 200 to ensure a narrow-band spectrum, target linewidth, coherence, polarization (e.g., circular or linear), and other optical properties that enable coherent (e.g., Doppler) measurements described below. Beam preparation can be performed using filters (e.g., narrow-band filters), resonators (e.g., resonator cavities, crystal resonators, etc.), polarizers, feedback loops, lenses, mirrors, diffraction optical elements, and other optical devices. For example, if light source 202 is a broadband light source, the output light can be filtered to produce a narrowband beam. In some implementations, in which light source 202 produces light that has a desired linewidth and coherence, the light can still be additionally filtered, focused, collimated, diffracted, amplified, polarized, etc., to produce one or more beams of a desired spatial profile, spectrum, duration, frequency, polarization, repetition rate, and so on. In some implementations, light source 202 can produce (alone or in combination with beam preparation stage 210) narrow-linewidth light with a linewidth below 100 kHz.

After the light beam is configured by beam preparation stage 210, the light beam of frequency F0 can undergo spatial separation at a beam splitter 212, which produces a local oscillator (LO) copy 234 of the light beam. The LO copy 234 can be used as a reference signal to which a signal reflected from a target object can be compared. The beam splitter 212 can be a prism-based beam splitter, a partially-reflecting mirror, a polarizing beam splitter, a beam sampler, a fiber optical coupler (optical fiber adaptor), or any similar beam splitting element (or a combination of two or more beam-splitting elements). The light beam can be delivered to the beam splitter 212 (as well as between any other optical components depicted in FIG. 2A or other figures) over air or over any suitable light carriers, such as optical fibers or waveguides.

An optical modulator 230 can impart optical modulation to a second light beam outputted by the beam splitter 212. "Optical modulation" is to be understood herein as referring to any form of angle modulation, such as phase modulation (e.g., any sequence of phase changes Δϕ(t) as a function of time t that are added to the phase of the beam), frequency modulation (e.g., any sequence of frequency changes Δf(t) as a function of time t), or any other type of modulation (including a combination of a phase and a frequency modulation) that affects the phase of the wave. Optical modulation is also to be understood herein to include, where applicable, amplitude modulation ΔA(t) as a function of time t. Amplitude modulation can be applied to the beam in combination with angle modulation or separately, without angle modulation.

In some implementations, optical modulator 230 can impart angle modulation to the second light beam using one or more RF circuits, such as RF modulator 222, which can include one or more RF local oscillators, one or more mixers, amplifiers, filters, and the like. Even though, for brevity and conciseness, modulation is referred to herein as being performed with RF signals, it should be understood that other frequencies can also be used for angle modulation, including but not limited to Terahertz frequencies, microwave frequencies, and so on. RF modulator 222 can impart optical modulation in accordance with a programmed modulation scheme, e.g., encoded in a sequence of control signals provided by a time multiplexing and phase/frequency encoding module 220 (herein also referred to, for simplicity, as encoding module). The control signals can be in an analog format or a digital format, in which case RF modulator 222 can further include a digital-to-analog converter (DAC).

FIG. 2B is a schematic illustration of a phase encoding imparted to a sensing light beam transmitted by the optical sensing system 200 of FIG. 2A, in accordance with some implementations of the present disclosure. As depicted in FIG. 2B, the phase encoding (phase modulation) can be periodic with time period T1+T2. In some implementations, no modulation is imparted over a first portion (of duration T1) of the period. The first portion of the period can be used for detection of the Doppler frequency shift fD of the reflected signal. Over a second portion (of duration T2) of the period, any suitable sequence of phase shifts Δϕ(tj) can be imparted to the second light beam, where tj indicates the time when the respective (e.g., j-th) phase shift Δϕ(tj) is applied. In some implementations, the phase shifts Δϕ(tj) include a discrete set of phase shifts applied for a fixed duration Δt=tj+1−tj, with M=T2/Δt phase shifts applied over the duration of the second portion of the period. In some implementations, the phase shifts applied can be based on maximum-length sequences, Gold codes, Hadamard codes, Kasami codes, Barker codes, or any similar codes. In some implementations, the phase shifts can be selected in such a way as to make the correlation function (i is the imaginary unit number)

$$K(\theta) = \frac{1}{M} \sum_{j=1}^{M} e^{i\Delta\phi(t_j)}\, e^{-i\Delta\phi(t_j - \theta)}$$

a sharply peaked function of the time offset θ, having a maximum (peak) at θ=0. The second portion T2 of the period can be used (after an additional phase resulting from the Doppler shift fD has been subtracted) to determine the time delay τ=2L/c of the light beam. The time delay can be determined by identifying a time offset θ that maximizes the correlation function of the phase shifts detected in the received reflected beam and phase shifts imparted to the transmitted beam.
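For instance, a short sketch (with a random binary code standing in as an assumed substitute for the maximum-length or Gold codes named above) shows how such a correlation function peaks sharply at θ = 0:

```python
import numpy as np

# Sketch of the discrete autocorrelation K(theta) of a binary phase code.
# A random code stands in for the maximum-length/Gold/Kasami codes above.
rng = np.random.default_rng(3)
M = 512
dphi = np.pi * rng.integers(0, 2, M)        # phase shifts: 0 or pi

def K(theta: int) -> complex:
    # K(theta) = (1/M) sum_j exp(i*dphi(t_j)) * conj(exp(i*dphi(t_j - theta)))
    return np.mean(np.exp(1j * dphi) * np.conj(np.exp(1j * np.roll(dphi, theta))))

print(abs(K(0)))                               # 1.0: perfect peak at theta = 0
print(max(abs(K(th)) for th in range(1, M)))   # small sidelobes (~1/sqrt(M))
```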

FIG. 2C is a schematic illustration of a frequency encoding imparted to a sensing light beam transmitted by the optical sensing system 200 of FIG. 2A, in accordance with some implementations of the present disclosure. As depicted in FIG. 2C, similar to FIG. 2B, the second light signal can be unmodulated for the first portion (duration T1) of the time period T1+T2 while the second portion (duration T2) of the period is modulated with a set of frequency shifts Δf(tj). The first portion is sometimes referred to herein as a pilot tone. In some implementations, the frequency shifts Δf(tj) can be selected (e.g., by encoding module 220) and applied similarly to how the phase shifts Δϕ(tj) are imparted. Likewise, the autocorrelation function of the frequency shifts can be used to identify the time of travel of the modulated light beam to the target (and back) and the Doppler shift fD experienced by the reflected light beam.

Referring back to FIG. 2A, encoding module 220 can implement a time multiplexing scheme, e.g., identify the time period of modulation T, duration of various portions of the period, T1, T2, . . . , and so on. Encoding module 220 can further generate (e.g., using a linear feedback shift register or any other suitable signal generator) a code sequence of phase shifts Δϕ(tj) and/or frequency shifts Δf(tj). In some implementations, encoding module 220 can also generate a series of amplitude modulation signals, ΔA(tj), which can be imparted to light beam(s) alone or in combination with the phase and/or frequency shifts. The data that includes the time multiplexing scheme and the code sequence can be provided to RF modulator 222 that can convert the provided data to RF electrical signals and apply the RF electrical signals to optical modulator 230 that modulates the second light beam.

In some implementations, optical modulator 230 can include an acousto-optic modulator (AOM), an electro-optic modulator (EOM), a Lithium Niobate modulator, a heat-driven modulator, a Mach-Zehnder modulator, and the like, or any combination thereof. In some implementations, optical modulator 230 can include a quadrature amplitude modulator (QAM) or an in-phase/quadrature modulator (IQM). Optical modulator 230 can include multiple AOMs, EOMs, IQMs, one or more beam splitters, phase shifters, combiners, and the like. For example, optical modulator 230 can split an incoming light beam into two beams, modify a phase of one of the split beams (e.g., by a 90-degree phase shift), and pass each of the two split beams through a separate optical modulator to apply angle modulation to each of the two beams using a target encoding scheme. The two beams can then be combined into a single beam. In some implementations, angle modulation can add phase/frequency shifts that are continuous functions of time. In some implementations, added phase/frequency shifts can be discrete and can take on a number of values, e.g., N discrete values across the phase interval 2π (or across a frequency band of a predefined width). Optical modulator 230 can add a predetermined time sequence of the phase/frequency shifts to the light beam. In some implementations, a modulated RF signal can cause optical modulator 230 to impart to the light beam a sequence of frequency up-chirps interspersed with down-chirps. In some implementations, phase/frequency modulation can have a duration between a microsecond and tens of microseconds and can be repeated with a repetition rate ranging from one or several kilohertz to hundreds of kilohertz.
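As a small illustration of the last scheme mentioned (a sketch only; the sample rate, chirp bandwidth, and durations are assumed), alternating up- and down-chirps can be generated at baseband as a quadratic phase ramp whose slope changes sign:

```python
import numpy as np

# Sketch of a sequence of frequency up-chirps interspersed with down-chirps,
# generated as a complex baseband signal. All parameters are assumed.
fs = 100e6                      # sample rate, Hz
T_chirp = 5e-6                  # duration of one chirp, s
B = 5e6                         # frequency excursion over one chirp, Hz
n = int(T_chirp * fs)
t = np.arange(n) / fs
k = B / T_chirp                 # chirp slope, Hz/s

def chirp(up: bool) -> np.ndarray:
    s = k if up else -k
    return np.exp(1j * np.pi * s * t**2)    # instantaneous frequency s*t

signal = np.concatenate([chirp(up=(i % 2 == 0)) for i in range(4)])
```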

The modulated light beam can be amplified by amplifier 250 before being transmitted through an optical circulator 254 and an optical interface 260 towards one or more objects 265 in the driving environment 110. Optical interface 260 can include one or more optical elements, e.g., apertures, lenses, mirrors, collimators, polarizers, waveguides, optical switches, optical phased arrays, and the like, or any such combination of optical elements. Optical interface 260 can include a transmission (TX) interface and a separate receiving (RX) interface. In some implementations, some of the optical elements (e.g., lenses, mirrors, collimators, optical fibers, waveguides, optical switches, optical phased arrays, beam splitters, and the like) can be shared by the TX interface and the RX interface. As shown in FIG. 2A, in a combined TX/RX optical interface 260, the transmitted beam and the received reflected beam follow the same (at least partially) optical path. The transmitted and received beams can be separated at an optical circulator 254, which can be a Faraday effect-based device, a birefringent crystal-based device, or any other suitable device. The optical circulator 254 can direct the received beam towards an optical hybrid stage 270 and a coherent detection stage 280. In some implementations, e.g., when various optical components are integrated on photonic circuits, a beam splitter (such as a 50-50 beam splitter) may be used in place of optical circulator 254.

The coherent detection stage 280 can include one or more coherent light analyzers, such as balanced photodetectors, that detect phase information carried by the received beam. A balanced photodetector can have photodiodes connected in series and can generate ac electrical signals that are proportional to a difference of intensities of the input optical modes (which can also be pre-amplified). A balanced photodetector can include photodiodes that are Si-based, InGaAs-based, Ge-based, Si-on-Ge-based, and the like (e.g., avalanche photodiodes, etc.). In some implementations, balanced photodetectors can be manufactured on a single chip, e.g., using complementary metal-oxide-semiconductor (CMOS) structures, silicon photomultiplier (SiPM) devices, or similar systems. Balanced photodetector(s) can also receive LO copy 234 of the transmitted light beam. In the implementation depicted in FIG. 2A, the LO copy 234 is unmodulated, but it should be understood that in some implementations consistent with the present disclosure, LO copy 234 can be modulated. For example, optical modulator 230 can be positioned between beam preparation stage 210 and beam splitter 212.

Prior to being provided to the coherent detection stage 280, the received beam and the LO copy 234 of the transmitted beam can be processed by the optical hybrid stage 270. In some implementations, optical hybrid stage 270 can be a 180-degree hybrid stage capable of detecting the absolute value of a phase difference of the input beams. In some implementations, optical hybrid stage 270 can be a 90-degree hybrid stage capable of detecting both the absolute value and a sign of the phase difference of the input beams. For example, in the latter case, optical hybrid stage 270 can be designed to split each of the input beams into multiple copies (e.g., four copies, as depicted) and phase-delay some of the copies, e.g., of LO 234, whose electric field is denoted with $E_{LO}$. Optical hybrid stage 270 can then apply controlled phase shifts (e.g., 90°, 180°, 270°) to some of the copies and mix the phase-delayed copies of the LO 234 with other input beams, e.g., copies of the received beam, whose electric field is denoted with $E_{RX}$. As a result, the optical hybrid stage 270 can obtain the in-phase symmetric and anti-symmetric combinations $(E_{RX}+E_{LO})/2$ and $(E_{RX}-E_{LO})/2$ of the input beams, and the quadrature 90-degree-shifted combinations $(E_{RX}+iE_{LO})/2$ and $(E_{RX}-iE_{LO})/2$ of the input beams ($i$ being the imaginary unit number). Each of the mixed signals can then be received by respective photodiodes connected in series. An in-phase electric current $I$ can be produced by a first pair of the photodiodes and a quadrature current $Q$ can be produced by a second pair of photodiodes. Each of the currents can be further processed by one or more operational amplifiers, intermediate frequency amplifiers, and the like. The in-phase current $I$ and quadrature current $Q$ can then be mixed into a complex photocurrent whose ac part

J=|(ERX+ELO)/2|²−|(ERX−ELO)/2|²+i(|(ERX+iELO)/2|²−|(ERX−iELO)/2|²)=ERXE*LO

is representative of the phase difference between the LO beam and the received beam. Similarly, a 180-degree optical hybrid can produce only the in-phase photocurrent, whose ac part

J=|(ERX+ELO)/2|²−|(ERX−ELO)/2|²=Re(ERXE*LO)

is sensitive to the absolute value of the phase difference between the LO beam and the received beam but not to the sign of this phase difference.
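
By way of illustration only (this numerical sketch is not part of the original disclosure; the field amplitudes and frequencies in it are arbitrary placeholders), the relation between the four hybrid outputs and the complex photocurrent J=ERXE*LO can be checked as follows:

```python
import numpy as np

# Placeholder fields; any complex amplitudes and frequencies work here.
t = np.linspace(0.0, 1e-6, 1000)
E_rx = 0.8 * np.exp(2j * np.pi * 1.0e6 * t + 0.3j)   # received field
E_lo = 1.0 * np.exp(2j * np.pi * 0.9e6 * t)          # LO copy

# The four mixed outputs of a 90-degree optical hybrid.
p1 = np.abs((E_rx + E_lo) / 2) ** 2        # in-phase, symmetric
p2 = np.abs((E_rx - E_lo) / 2) ** 2        # in-phase, antisymmetric
p3 = np.abs((E_rx + 1j * E_lo) / 2) ** 2   # quadrature, symmetric
p4 = np.abs((E_rx - 1j * E_lo) / 2) ** 2   # quadrature, antisymmetric

I = p1 - p2          # current of the first balanced photodiode pair
Q = p3 - p4          # current of the second balanced pair
J = I + 1j * Q       # complex photocurrent

assert np.allclose(J, E_rx * np.conj(E_lo))   # J = E_RX * conj(E_LO)
```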

The photocurrent J can be digitized by analog-to-digital circuitry (ADC) 284 to produce a digitized electrical signal that can then be provided to digital signal processing (DSP) 290. The digitized electric signal is representative of a beating pattern between the LO copy 234 and the received signal. More specifically, the signal received by the optical system 200 at time t can be transmitted to the target at time t−τ, where τ=2L/c is the time of light travel to the target located at distance L and back (the delay time). If Δϕ(t) is the time-dependent phase encoding (which in the case of frequency encoding is represented by the integral of the frequency modulation over time, Δϕ(t)=2π∫0tΔf(t′)dt′) that is being imparted to the transmitted beam of frequency F0, the phase ϕT of the transmitted beam at the time of transmission is


ϕT=2πF0×(t−τ)+Δϕ(t−τ).

The beam acquires an additional phase 2πfD×(t−τ/2)+ϕR upon reflection (at time t−τ/2) from the target, when the beam's frequency is changed by Doppler shift fD; the phase ϕR is an additional phase change that can be experienced by the beam upon interaction with the reflecting surface. For example, for a reflection from a thick uniform medium, this phase change can be ϕR=π, but in a more general case it can depend on a specific structure, properties, quality, and morphology of the reflecting surface. Accordingly, the total phase of the received reflected beam is


ΦRX=2π(F0+fD)×(t−τ)+Δϕ(t−τ)+πfDτ+ϕR.

On the other hand, the phase of LO copy 234 at time of detection t is


ΦLO=2πF0t.

Correspondingly, the difference of the phases of the two beams is ΦRX−ΦLO=2πfDt+Δϕ(t−τ)+πfDτ+ϕR. (The last two terms, πfDτ+ϕR, represent a constant phase increment and will be ignored in the subsequent description.) The electrical signal having the phase difference ΦRX−ΦLO is generated by the coherent detection stage 280, amplified, filtered, digitized (by ADC 284), and provided to DSP 290. DSP 290 can include spectral analyzers, such as Fast Fourier Transform (FFT) analyzers, and other circuits configured to process digital signals 282, including central processing units (CPUs), graphic processing units (GPUs), field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), etc., and memory devices. In some implementations, the processing and memory circuits can be implemented as part of a microcontroller.

The digitized signal output by ADC 284 enables DSP 290 to determine the Doppler shift and the velocity of the object(s) 265. In conventional lidar devices, the encoding Δϕ(t−τ) present in the phase difference ΦRX−ΦLO is masked by the Doppler-shift contribution 2πfDt. In various implementations of the present disclosure, this Doppler-shift contribution can be efficiently identified since the encoding is absent for a portion of the encoding period (the pilot tone), e.g., for a duration T1 of the period T1+T2, as illustrated in FIG. 2B and FIG. 2C. DSP 290 can collect the phase difference ΦRX−ΦLO data (as a function of time) over an integration time, which can be of the order of microseconds to tens (or hundreds, in some implementations) of microseconds (although in some applications the integration times may extend into milliseconds). In some implementations, the integration time can exceed the encoding period, e.g., can be a multiple of the encoding period.

Using the collected data for the phase difference ΦRX−ΦLO, DSP 290 can identify the unencoded portion of the encoding period (e.g., as a portion where the phase difference ΦRX−ΦLO is constant), determine the Doppler-shift contribution 2πfDt, and extract fD (e.g., from a slope of the phase difference as a function of time). DSP 290 can then use the encoded portion of the encoding period and subtract the determined Doppler-shift contribution from the phase difference, ΦRX−ΦLO−2πfDt, to unmask the encoding, Δϕ(t−τ). DSP 290 can then compute a correlation between ΦRX−ΦLO−2πfDt and the delayed phase encoding Δϕ(t−θ) for various time delays θ. The phase encoding Δϕ(t) of the transmitted signal can be provided to DSP 290 in real time by the encoding module 220 (as depicted schematically by the corresponding dashed arrow). In some implementations, instead of the difference ΦRX−ΦLO−2πfDt, complex combinations such as exp[i(ΦRX−2πfDt)]±i exp[iΦLO] (or similar combinations) can be provided to DSP 290, which performs Fourier processing to extract ΦRX−ΦLO−2πfDt. The computed correlation function K(θ) can have a maximum at the actual delay time, θ=τ. Having determined the Doppler frequency shift fD and delay time τ, DSP 290 can compute the velocity of the target, V=cfD/(2F0), and the distance to the target, L=cτ/2. As a result, both the distance to the target and the velocity of the target are determined from the detected phase difference, ΦRX−ΦLO, with only one set of (at most) M correlators being computed, where M is the number of different phase (or frequency) shifts imparted to the transmitted beam over one period of encoding.
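
The following Python sketch (illustrative only; the sample interval, Doppler shift, delay, and phase code are hypothetical placeholders, not values from this disclosure) walks through the two steps just described: fitting the Doppler slope on the pilot portion and locating the delay by correlation:

```python
import numpy as np

rng = np.random.default_rng(0)

dt = 1e-9                         # sample interval (placeholder)
n_pilot, n_code = 500, 500
f_d = 3.0e6                       # true Doppler shift, Hz
delay = 120                       # true delay, in samples (tau = 120 ns)

# Phase code: zero during the pilot tone, random shifts afterwards.
code = np.concatenate([np.zeros(n_pilot),
                       rng.choice([0, np.pi / 2, np.pi, 3 * np.pi / 2], n_code)])
t = np.arange(n_pilot + n_code) * dt
delayed_code = np.concatenate([np.zeros(delay), code])[:len(code)]

# Detected phase difference: Doppler ramp plus the delayed encoding.
phase = 2 * np.pi * f_d * t + delayed_code

# 1) Doppler from the slope of the unencoded (pilot) portion.
f_d_est = np.polyfit(t[:n_pilot], phase[:n_pilot], 1)[0] / (2 * np.pi)

# 2) Remove the Doppler ramp and correlate with shifted copies of the code.
residual = np.exp(1j * (phase - 2 * np.pi * f_d_est * t))
corrs = [abs(np.sum(residual * np.exp(-1j * np.roll(code, k))))
         for k in range(300)]
tau_est = np.argmax(corrs) * dt

print(f"f_D ~ {f_d_est / 1e6:.2f} MHz, tau ~ {tau_est / 1e-9:.0f} ns")
```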

Multiple modifications of the optical sensing system 200 may be implemented. For example, in some systems the optical hybrid stage 270 can be a 180-degree optical hybrid. As a result, coherent detection stage 280 can generate in-phase electrical signal I (but not quadrature electrical signal Q). The in-phase electrical signal alone can determine the magnitude of the Doppler shift |fD|, but can be agnostic about the sign of fD as Doppler-shifted beams with frequencies F0+fD and F0−fD lead to the generation of the same in-phase signals. In particular, the in-phase electrical signal can be an even function of the Doppler shift, e.g., I∝cos(2πfDt). A 180-degree optical hybrid can nonetheless suffice provided that the symmetry between the positive and negative Doppler shifts is eliminated in some way. This can be achieved, for example, by imparting a frequency offset to the transmitted beam (or LO copy 234), e.g., as described below in relation to a frequency multiplexing implementation of FIG. 3A. For example, while the beam transmitted to the target has frequency F0, the frequency of LO copy 234 can be shifted by an offset frequency foff to the value F0+foff. Correspondingly, positive F0+fD and negative F0−fD Doppler shifts of the received reflected beam (corresponding to a target moving towards or away from the lidar receiver, respectively) cause beatings with intermediate frequencies foff−fD and foff+fD, which have different absolute values and are, therefore, detectable with only an in-phase electrical signal generated by a single balanced photodetector (e.g., a single pair of photodiodes connected in series). The offset frequency foff can be applied to the transmitted light beam (or, the LO copy) by an optical modulator 230 or an additional optical modulator not shown in FIG. 2A.
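
A minimal numerical sketch of the sign-disambiguation argument (all frequencies are placeholder values; this is not the specification's circuitry) shows that, with an LO offset foff, positive and negative Doppler shifts of equal magnitude produce different beat frequencies even in a single in-phase channel:

```python
import numpy as np

f_off = 10e6                      # assumed LO offset frequency, Hz
fs = 1e9
t = np.arange(0, 1e-4, 1 / fs)

def beat_freq(f_d):
    """Dominant beat frequency seen by an in-phase-only (cosine) detector."""
    s = np.cos(2 * np.pi * (f_off - f_d) * t)
    spec = np.abs(np.fft.rfft(s))
    return np.fft.rfftfreq(len(t), 1 / fs)[np.argmax(spec)]

print(beat_freq(+3e6) / 1e6)      # ~7 MHz (approaching target)
print(beat_freq(-3e6) / 1e6)      # ~13 MHz (receding target)
```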

FIG. 3A is a block diagram illustrating an example implementation of an optical sensing system 300 capable of frequency multiplexing of lidar sensing signals, in accordance with some implementations of the present disclosure. Optical sensing system 300 can be a part of lidar sensor 122 that includes SM 124. Various devices and modules depicted in FIG. 3A (as well as other figures) that are indicated with numerals having the same last two digits as the corresponding devices and modules in FIG. 2A and/or the same (or similar) names can have similar functionality and can be implemented in any way described in conjunction with FIG. 2A.

In some implementations, a light beam of frequency F0 output by light source 302 and pre-processed by a beam preparation stage 310 can be processed by beam splitters 312 and 314 to produce three light beams. A first light beam can be directed to an optical modulator A 330 for frequency shifting. A second light beam can be directed to an optical modulator B 331 for both frequency shifting and optical modulation. A third beam can be an LO copy 334 of the beam output by light source 302 and beam preparation stage 310, to be used for coherent optical detection. Each of the optical modulators A 330 and B 331 can include one or more AOMs, EOMs, IQMs, or a combination thereof. The first light beam can be imparted a first frequency shift F1−F0 by optical modulator A 330 while the second light beam can be imparted a second frequency shift F2−F0 by optical modulator B 331. Additionally, optical modulator B 331 (or a separate optical modulator not explicitly shown) can impart a phase, frequency, or amplitude encoding Δϕ(t), Δf(t), or ΔA(t) to the second light beam. The phase (or frequency) encoding can be generated by encoding module 320, e.g., as a sequence of analog or digital signals that can be converted into analog signals by an RF modulator 322, which can include one or more RF local oscillators, one or more mixers, amplifiers, filters, and the like. The first light beam of frequency F1 (also sometimes referred to herein as a pilot tone) and the second light beam of frequency F2 can be combined into a single beam by an optical combiner 340.

FIG. 3B is a schematic illustration of a phase encoding imparted to a sensing light beam transmitted by the optical sensing system 300 of FIG. 3A, in accordance with some implementations of the present disclosure. As depicted in FIG. 3B, the combined beam produced by optical combiner 340 can include the first light beam of frequency F1, which can be unmodulated, and the second light beam of frequency F2, which can be modulated with any suitable sequence of phase shifts Δϕ(tj), e.g., a discrete set of phase shifts applied for a duration Δt=tj+1−tj, with M phase shifts applied over a period of the encoding. In some implementations, the applied phase shifts can be characterized by a correlation function K(θ)=M−1Σj=1M exp[iΔϕ(tj)] exp[−iΔϕ(tj−θ)] that is a sharply peaked function of the time offset θ. The correlation function of the phase shifts can be used to identify the time of flight of the modulated optical beam to the target. In some instances, the received signal can have a time-dependent noise component δN(t) present (e.g., exp[iΔϕ(tj−θ)]+δN(tj) instead of exp[iΔϕ(tj−θ)]), which adds a noise floor to the cross-correlation. The time of flight to the target can be successfully measured as long as the signal-to-noise ratio is sufficiently high.
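
As a concrete, purely illustrative sketch of such a sharply peaked correlation function (a random placeholder phase code is used, and the complex-conjugate convention for the second factor of K(θ) is assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
M = 256
dphi = rng.uniform(0, 2 * np.pi, M)   # placeholder random phase code

# K(theta) = (1/M) * sum_j exp[i*dphi(t_j)] * exp[-i*dphi(t_j - theta)],
# evaluated at discrete offsets theta = k*dt via circular shifts.
def K(k):
    return abs(np.mean(np.exp(1j * dphi) * np.exp(-1j * np.roll(dphi, k))))

values = [K(k) for k in range(M)]
print(values[0])           # 1.0 at zero offset
print(max(values[1:]))     # ~1/sqrt(M) noise floor away from the peak
```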

FIG. 3C is a schematic illustration of a frequency encoding imparted to a sensing light beam transmitted by the optical sensing system 300 of FIG. 3A, in accordance with some implementations of the present disclosure. As depicted in FIG. 3C, similarly to FIG. 3B, the second light beam can be modulated with a sequence of frequency shifts Δf(tj) whose correlation properties are similar to the correlation properties of the phase shifts Δϕ(tj) described in conjunction with FIG. 3B. The correlation function of the frequency shifts can be used to identify the time of flight of the modulated optical beam to the target in the same way as the phase shifts are used.

Referring back to FIG. 3A, encoding module 320 can define a frequency multiplexing scheme, e.g., the frequency shifts F1−F0 and F2−F0 and a code sentence of phase Δϕ(tj) and/or frequency Δf(tj) modulation (or amplitude modulation ΔA(tj)). The data that includes the frequency multiplexing scheme and the code sentence can be provided to one or more RF modulators 322 that can convert the provided data to RF electrical signals and apply the RF electrical signals to optical modulator A 330 and/or optical modulator B 331 that modulate the first light beam and the second light beam, respectively. In some implementations, optical modulator B 331 imparts both the frequency shift F2−F0 and the phase Δϕ(tj) (or frequency Δf(tj), or amplitude ΔA(tj)) modulation using a single device (e.g., AOM, EOM, IQM, etc.) and a combined RF electrical signal applied thereto. The combined RF electrical signal can include a first part, which is configured to impart the static frequency shift (e.g., F2−F0), and a second part, which is configured to impart a variable frequency or phase modulation (e.g., Δf(tj)). In some implementations, optical modulator B 331 uses one device to impart the frequency shift F2−F0 and another device to impart the phase Δϕ(tj) (or frequency Δf(tj)) encoding. In some implementations, one of the frequency shifts F1−F0 or F2−F0 is not imparted and the frequency of the first light beam (or second light beam) remains the same frequency F0 as output by the beam preparation stage 310.

The first light beam and the second light beam can then be combined by optical combiner 340. The combined beam can be amplified by amplifier 350 before being transmitted through an optical circulator 354 and a TX/RX optical interface 360 towards one or more objects 365 in the driving environment 110. A beam reflected from object(s) 365 can be received through the same TX/RX optical interface 360. The transmitted and received beams can be separated at the optical circulator 354, which can direct the received reflected (RX) beam towards a coherent detection stage 380.

The coherent detection stage 380 can include a balanced photodetector that detects the phase difference between the LO copy 334 and the RX beam. The LO copy 334 can have frequency F0 (and be unmodulated). The RX beam can be Doppler-shifted and can include light with frequency F1+fD (which can be unmodulated) and light with frequency F2+fD, which can be modulated with phase Δϕ(t), frequency Δf(t), and/or amplitude ΔA(t) modulation, e.g., as previously imparted by optical modulator B 331. The RX beam is time-delayed by the time of flight τ to and from the target.

The LO copy 334 can have an electric field (ELO) that has amplitude ALO and frequency F0,


ELO=ALO exp [2πiF0t],

and the RX beam can have an electric field (ERX) that is a sum of two light beams having Doppler-shifted frequencies and phases:


ERX=A1 exp [2πi(F1+fD)×(t−τ)]+A2 exp [2πi(F2+fD)×(t−τ)+iΔϕ(t−τ)].

The amplitudes A1 and A2 of the two parts of the RX beam can, in general, be different from each other, although in some implementations the difference A1−A2 can be small (compared with A1 or A2) by virtue of the preparation of the phase-coherent transmitted beam. The equalization of the amplitudes can be achieved, e.g., by using optical amplifiers (not shown in FIG. 3A) prior to directing the first light beam and the second light beam through optical combiner 340. Both parts of the RX beam also experience a constant phase change (e.g., πfDτ, collected on the way back from the target), which will be ignored henceforth.

Prior to being detected by coherent detection stage 380, the LO copy 334 and the RX beam can be inputted into a 180-degree optical hybrid (not shown in FIG. 3A) to obtain the symmetric, (ERX+ELO)/2, and antisymmetric, (ERX−ELO)/2, combinations. This can be achieved, e.g., using beam splitters/combiners and a mirror to add phase π to one of the beams (e.g., RX) to obtain the antisymmetric combination. In some implementations, a 90-degree optical hybrid can be used to obtain the additional 90-degree phase-shifted combinations, (ERX+iELO)/2 and (ERX−iELO)/2.

Each of the obtained combinations can be inputted into a respective photodiode of the coherent detection stage 380. For example, the symmetric combination can be inputted into a first of two photodiodes connected in series and the antisymmetric combination can be inputted into the second photodiode. As a result, the net electric current generated by the photodiodes is J=|(ERX+ELO)/2|²−|(ERX−ELO)/2|²=Re(ERXE*LO). The electrical current J output by the coherent detection stage 380 is thus a sum of two contributions, J=Jlow+Jhigh, where (e.g., if F0 is closer to F1 than to F2),


Jlow=A1ALO cos [2π(F1−F0+fD)t]


Jhigh=A2ALO cos [2π(F2−F0+fD)t+Δϕ(t−τ)],

and it is assumed for brevity that the amplitudes A1, A2, and ALO are real and the constant phases in the two signals are omitted.

A low-pass filter 382 and a high-pass filter 383 can process this electrical signal J to separate it into the two contributions, Jlow and Jhigh. For example, both the low-pass filter 382 and the high-pass filter 383 can have the respective cut-off frequencies above F1−F0 and below F2−F0. The signal Jlow can be digitized by ADC 384 and the signal Jhigh can be digitized by ADC 386. The digitized signals can be provided to DSP 390. DSP 390 can perform spectral analysis of the digitized Jlow and Jhigh signals and determine the Doppler shift fD and the delay time τ, from which the velocity of the target, V=cfD/(2F0), and the distance to the target, L=cτ/2, can be determined.

More specifically, the Doppler shift fD can be determined from Jlow. The presence of the beating term 2π(F1−F0)t in the phase of Jlow makes it possible to disambiguate positive Doppler shifts from negative Doppler shifts. The determined Doppler shift fD can then be used in conjunction with the digitized signal Jhigh to extract the time-delayed phase encoding Δϕ(t−τ) by determining the location θ of the maximum of the correlation function of the phase encoding extracted from signal Jhigh and a delayed phase encoding Δϕ(t−θ), obtained from encoding module 320, for various time delays θ. The location of the maximum is then identified as the actual delay time τ. Having determined the Doppler shift fD and the delay time τ, DSP 390 can obtain the distance to the target, L=cτ/2, and the velocity of the target, V=cfD/(2F0).
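
An illustrative sketch of the Jlow processing path (the offsets and sampling rate are placeholders, not the specification's DSP) shows how the pilot beat term moves the spectral peak away from zero so that the sign of fD is recoverable:

```python
import numpy as np

F1_F0 = 20e6                      # assumed pilot offset F1 - F0, Hz
f_d = -4e6                        # true Doppler shift (receding target)
fs = 1e9
t = np.arange(0, 50e-6, 1 / fs)

j_low = np.cos(2 * np.pi * (F1_F0 + f_d) * t)

# The beat term 2*pi*(F1 - F0)*t moves the spectral peak away from zero,
# so positive and negative f_D land on opposite sides of F1 - F0.
spec = np.abs(np.fft.rfft(j_low))
f_peak = np.fft.rfftfreq(len(t), 1 / fs)[np.argmax(spec)]
print(f"estimated f_D = {(f_peak - F1_F0) / 1e6:.1f} MHz")  # ~ -4.0 MHz
```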

Multiple modifications of the optical sensing system 300 can be implemented. For example, in some systems signal Jhigh can be mixed with signal Jlow (e.g., using an RF mixer) and then filtered using a low-pass filter (to exclude frequency F1+F2−2F0) to obtain a signal


Jmix=Jhigh·Jlow→cos [2π(F2−F1)t+Δϕ(t−τ)],

from which the Doppler shift has been excluded. The digitized Jmix signal can be used to obtain the time of flight τ, as described above. The Doppler shift can then be determined from the digitized copy of Jlow as described above.
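
A numerical sketch of this mixing variant (illustrative only; the offsets, the toy phase code, and the crude FFT-mask stand-in for the low-pass filter are placeholders) demonstrates the Doppler cancellation:

```python
import numpy as np

fs, f_d = 1e9, 5e6                       # sampling rate and Doppler (placeholders)
F1, F2 = 20e6, 120e6                     # assumed offsets F1 - F0 and F2 - F0
t = np.arange(0, 20e-6, 1 / fs)
dphi = 0.5 * (np.sin(2 * np.pi * 1e6 * t) > 0)   # toy phase code (small shifts)

j_low = np.cos(2 * np.pi * (F1 + f_d) * t)
j_high = np.cos(2 * np.pi * (F2 + f_d) * t + dphi)

# The product contains a difference term at F2 - F1 (Doppler cancelled)
# and a sum term near F1 + F2 + 2*f_d; a crude FFT mask removes the latter.
spec = np.fft.rfft(j_high * j_low)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
spec[freqs > 1.2 * (F2 - F1)] = 0.0
j_mix = np.fft.irfft(spec, len(t))

f_peak = freqs[np.argmax(np.abs(np.fft.rfft(j_mix)))]
print(f"peak at {f_peak / 1e6:.0f} MHz = F2 - F1 (Doppler cancelled)")
```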

In some instances, the RX light beam can include contributions from multiple targets. For example, object A and object B can be near the optical path of the transmitted beam, so that the reflected beam can be generated by both object A and object B and returned to the optical sensing system 300 along the same optical path. In addition to such “skirting” of multiple objects, on some occasions the transmitted beam can pass through some of the objects. For example, a part of the transmitted beam can reflect from a windshield of a first vehicle, while another part of the beam can pass through the windshield but reflect back from the rear window of the first vehicle. Yet another part of the beam can pass through the rear window of the first vehicle and reflect from a second vehicle (or some other object, e.g., a pedestrian, a road sign, a building, etc.).

On such occasions, DSP 390 can identify multiple Doppler shifts (e.g., multiple beat frequencies) fD(1), fD(2) . . . using the unmodulated portion of the RX beam. For each of the identified Doppler shifts, DSP 390 can perform correlation analysis and determine the corresponding time delay τ(1), τ(2) . . . that maximizes the correlation function of the observed (in the RX beam) modulation and the time-shifted transmitted beam modulation. Each pair (fD(1), τ(1)), (fD(2), τ(2)) . . . then determines the velocity of one of the reflecting objects and the distance to the respective object.

In some implementations, optical modulator A 330 (rather than optical modulator B 331) imparts phase or frequency encoding. In some implementations, optical modulator A 330 imparts frequency modulation while optical modulator B 331 imparts phase modulation (or vice versa). Further possible variations of the optical systems capable of implementing frequency multiplexing are illustrated in FIG. 4A-C.

FIG. 3D is a block diagram illustrating an example implementation of an optical sensing system 301 that uses a frequency comb and frequency multiplexing for concurrent sensing of multiple objects, in accordance with some implementations of the present disclosure. The optical sensing system 301 can include a light source, e.g., a pump laser 303, that generates pump light of frequency F0. The pump light can be used to excite resonance modes (e.g., whispering-gallery modes) in a resonator 311 that produces a frequency comb of equally spaced frequencies F0+nF, where n is an integer and F is a comb spacing determined by a resonant mode frequency of resonator 311, by the inverse time of light travel around resonator 311, and so on. The comb spacing can range from hundreds of megahertz to hundreds of gigahertz or more. As a result, the output of resonator 311 can be a train of pulses having the carrier frequency F0 and the repetition period 1/F. The Fourier transform of such a train of pulses includes a set of sharp peaks at frequencies F0+nF. Pump laser 303 can be a Ti:sapphire solid-state laser, an Er:fiber laser, a Cr:LiSAF laser, and the like. Resonator 311 can be a microresonator made of silicon nitride, aluminum nitride, quartz, Hydex, and the like.

Each of the comb peaks (or “teeth”) can be modulated simultaneously, as described above in conjunction with FIG. 3A. More specifically, optical modulator A 330 can impart a first offset frequency f1 to a first set of comb peaks and optical modulator B 331 can impart both a second offset frequency f2 and a phase (frequency and/or amplitude) modulation Δϕ(t) to a second set (copy) of comb peaks. In some implementations, one of the sets of comb peaks can be unshifted (e.g., no offset is applied to the first set of beams, f1=0). The two sets of the comb peaks can then be combined by optical combiner 340. The combined beam can be amplified by amplifier 350 and transmitted towards multiple objects 365 using a dispersive optical element (DOE) 361 (as part of a TX/RX optical interface), which can be a prism, a diffraction grating, a dispersive crystal, or any other dispersive element configured to direct light of different frequencies along different optical paths. In some implementations, the number of different transmitted beams can be tens or even hundreds or more.

Multiple beams reflected from objects 365 can be received through DOE 361 and directed (e.g., by optical circulator 354) to coherent detection stage 380. Prior to coherent detection stage 380, the reflected beams (and the LO 334) can be demultiplexed by optical demultiplexer 381. Optical demultiplexer 381 can be or include one or more arrayed waveguide gratings (AWG), echelle gratings, Mach-Zehnder interferometer (MZI) lattice filters, or the like. Coherent detection stage 380 can include dedicated coherent detectors for each pair of demultiplexed beams. The coherent photodetectors can produce electrical signals representative of the phase difference of each pair of input optical beams and provide the electrical signals for digital processing to DSP 390, which can perform separate digital processing (e.g., FFT and correlation analysis) to determine the distances to various objects 365 and the velocities of these objects, as described in more detail above in conjunction with FIG. 3A.

FIG. 4A is a block diagram illustrating an example implementation of an optical sensing system 400 with frequency multiplexing in which one of the sensing signals is unmodulated and not shifted in frequency, in accordance with some implementations of the present disclosure. Various devices and modules depicted in FIG. 4A that are indicated with the same numerals as in FIG. 2A or FIG. 3A can have similar functionality and can be implemented in any way described in conjunction with FIG. 2A or FIG. 3A. In the optical sensing system depicted in FIG. 4A, the light source 302 and beam preparation stage 310 output a beam of frequency F1 (rather than F0). The first light beam that is split off by beam splitter 312 is not shifted from the initial frequency F1 and remains unmodulated. The second light beam, split off by beam splitter 314, is processed by optical modulator A 330 and optical modulator B 331. More specifically, optical modulator A 330 shifts the frequency of the second beam to F2 and optical modulator B 331 imparts a phase encoding Δϕ(t) (or frequency encoding Δf(t), or amplitude encoding ΔA(t)) to the second beam. The two beams are subsequently combined into a single beam by optical combiner 340 and transmitted to object(s) 365 in the driving environment via TX/RX optical interface 360.

Optical circulator 354 can direct the RX beam to 90-degree optical hybrid stage 270. A 90-degree hybrid enables detection of the sign of the Doppler shift fD since the LO copy 334 is not frequency-shifted relative to the carrier frequency of the first light beam (both beams having frequency F1). More specifically, the LO copy 334 can have the electric field (with a complex amplitude ALO),


ELO=ALO exp [2πiF1t],

and the RX beam can have the electric field that is a sum of two light beams having Doppler-shifted frequencies and phases (and complex amplitudes A1 and A2):


ERX=A1 exp [2πi(F1+fD)×(t−τ)]+A2 exp [2πi(F2+fD)×(t−τ)+iΔϕ(t−τ)].

The 90-degree optical hybrid stage 270 can generate the electrical signal (omitting the constant phases in the two parts of ERX),


J=ERXE*LO=A1A*LO exp [2πifDt]+A2A*LO exp [2πi(F2−F1+fD)t+iΔϕ(t−τ)],

which is sensitive to both the real part and the imaginary part of ERXE*LO and, therefore, carries information about the sign of fD as well as about its magnitude.

The electrical signal J can be digitized by ADC 384 and processed by DSP 390. In the implementation depicted in FIG. 4A, the electrical signal J is not filtered prior to ADC 384 and the separation of the two terms in J is performed by FFT analyzers of DSP 390. The time of flight τ and the Doppler shift fD can then be determined using techniques that are similar to the techniques described above in conjunction with FIG. 2A and FIG. 3A.

FIG. 4B is a block diagram illustrating an example implementation of an optical sensing system 403 with frequency multiplexing and efficient detection of targets in the presence of internal reflections and/or close-up returns, in accordance with some implementations of the present disclosure. Internal reflections (close-up returns, back reflections, etc.) refer to reflections that occur inside the optical detection system, such as reflections from components of the TX/RX optical interface 360, leakage through optical circulator 354, and the like. Internal reflections also refer to various low-range artifacts, such as dust particles and air disturbances existing near the lidar, lidar transmission noise, and so on. In particular, a received reflected (by object(s) 365) beam may be a combination, ETar(t)+EInt(t)+EN(t), of the beam reflected from the target object, ETar(t), a spurious light EInt(t) that is caused by internal reflections, and a noise (e.g., additive noise) EN(t). In some instances, the spurious light EInt(t) can be significantly stronger than the reflected beam of interest, ETar(t). The implementations depicted in FIG. 4B facilitate evaluation of the strength of the internal reflection light EInt(t) and subtraction of this spurious light from the total signal detected by the lidar. As depicted in FIG. 4B, additional local oscillator copies of the light beams can be maintained on the lidar device (e.g., using beam splitters 412, 414, 416, and 418). More specifically, similar to the implementation shown in FIG. 4A, a received reflected beam can be processed by a first section 380-1 of coherent detection stage 380 together with LO copy 334 of the light beam generated by light source 302 (and further processed by a beam preparation stage, not shown in FIG. 4B). Correspondingly, first section 380-1 generates (e.g., using one or more photodetectors) an electrical signal (e.g., a current or voltage signal) STar(t)+SInt(t) that is representative of a combination of ETar(t) and EInt(t), where only the part ETar(t) carries information about the distance to the reflecting object (e.g., object 365) and the velocity of the reflecting object. Second section 380-2 of coherent detection stage 380 can receive another LO copy 336 of the light beam generated by light source 302. Second section 380-2 can further receive LO copy 344 of the beam output towards the TX/RX optical interface 360. Second section 380-2 can then generate an electrical signal (e.g., a current or voltage signal) SLO(t) representative of the LO copy 344 and, therefore, of the strength of the transmitted beam. Since the strength of the internal reflection beam (characterized by SInt(t)) is proportional to the strength of the transmitted beam (as both are driven by the same light source 302), the electrical signal SLO(t) also provides information about the electrical signal SInt(t) representative of the internal reflection, SInt(t)=αSLO(t−τ′), with some coefficient α that is subject to empirical evaluation (e.g., experimentation). The time delay τ′ may arise in the course of beam propagation through various components of the optical system, e.g., optical modulators 330 and 331, optical combiner 340, amplifier 350, beam splitter 418, and so on. Correspondingly, by detecting ELO(t), second section 380-2 can generate an electrical signal that is representative of the strength of the internal reflections, αSLO(t−τ′).
Each of the two sections of coherent detection stage 380 can output the respective generated electrical signals to a corresponding ADC (e.g., ADC 384 or ADC 386). The outputted digitized signals can be processed by DSP 390, which determines the difference STar(t)+SInt(t)−αSLO(t−τ′) and further determines the values of the scaling factor α and the time delay τ′ (e.g., by analyzing correlations between STar(t)+SInt(t) and SLO(t)) that cancel the spurious signal component, SInt(t)−αSLO(t−τ′)=0. The remaining value of the difference then represents the electrical signal STar(t) representative of the beam reflected from the target object. The electrical signal STar(t) can then be used to obtain the velocity of object 365 and the distance to object 365, as described above.
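
A minimal sketch of this cancellation (illustrative only; the signals, scaling factor, and delay are synthetic placeholders) finds τ′ by correlation and α by a least-squares fit:

```python
import numpy as np

rng = np.random.default_rng(2)

n = 4000
s_lo = rng.standard_normal(n)                    # S_LO(t), LO-copy signal
true_alpha, true_delay = 0.7, 35                 # unknowns to recover
s_int = true_alpha * np.roll(s_lo, true_delay)   # internal reflection
s_tar = 0.1 * rng.standard_normal(n)             # weak target return (placeholder)
measured = s_tar + s_int                         # output of section 380-1

# Correlate against delayed LO copies to find tau', then fit alpha.
corr = [np.dot(measured, np.roll(s_lo, k)) for k in range(100)]
delay = int(np.argmax(np.abs(corr)))
ref = np.roll(s_lo, delay)
alpha = np.dot(measured, ref) / np.dot(ref, ref)

s_tar_est = measured - alpha * ref               # spurious part cancelled
print(delay, round(alpha, 3))                    # ~35, ~0.7
```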

The parameters of amplifier 350 (e.g., amplification factor) and beam splitters 412, 414, 416, and 418 can be determined from empirical testing. For example, in one implementation, beam splitters 414 and 416 can be 50/50 beam splitters, while beam splitter 412 can have a 90/10 splitting ratio (with 90% of the beam directed towards beam splitter 414). Beam splitter 418 can have a 99/1 splitting ratio (with 99% of the beam directed towards amplifier 350). Numerous other combinations of splitting ratios can be used instead.

FIG. 4C is a block diagram illustrating an example implementation of an optical sensing system 404 that uses optical locking to enable frequency multiplexing, in accordance with some implementations of the present disclosure. FIG. 4C depicts multiple sources of light, e.g., a first light source 401 and a second light source 402 configured to produce separate beams. Each of the first light source 401 and second light source 402 can include a semiconductor laser, a gas laser, an Nd:YAG laser, or any other type of laser. Each of first light source 401 and second light source 402 can be a continuous wave laser, a single-pulse laser, a repetitively pulsed laser, a mode-locked laser, and the like. The beams output by first light source 401 and second light source 402 can be pre-processed by respective beam preparation stages 409 and 410 to ensure narrow-band spectrum, target linewidth, coherence, polarization, and the like.

Second light source 402 can be an adjustable-frequency laser that is a part of an optical feedback loop (OFL) 405. OFL 405 can be used to lock a frequency of the beam output by second light source 402 to a predetermined offset frequency f relative to the frequency of the first light source 401. OFL 405 can include a coherent detection stage 422, an RF local oscillator (RF LO) 423, an RF mixer 424, and a feedback electronics stage 426, as well as various other devices, such as one or more beam splitters, combiners, filters, amplifiers, and the like. In some implementations, first light source 401 can output a first beam of light that has (fixed) frequency F0. Second light source 402 can be configured to output a second beam of light with a target frequency F0+f that can be offset relative to F0. Because it can be difficult to achieve the target frequency F0+f using static laser settings, second light source 402 can be set up to output light with frequency F0+f′ that can be close (so that |f−f′|«f) to the target frequency but not exactly equal to the target frequency. The target frequency F0+f can be achieved via OFL 405 by fine-tuning the frequency offset from f′ to f and ensuring phase coherence of the outputs of second light source 402 and first light source 401.

In some implementations, a beam splitter 412 can direct a copy of the first beam to optical combiner 420 that also receives a copy of the second beam from a beam splitter 414. Optical combiner 420 can include an optical hybrid (e.g., a 180-degree hybrid or a 90-degree hybrid) that produces one or more beams representing a sum of the first beam and the second beam, with one of the beams phase-shifted by 0 degrees, 90 degrees, 180 degrees, −90 degrees, and the like. The produced beams can be input into a coherent detection stage 422, which can include one or more photodiodes or phototransistors, e.g., arranged in a balanced photodetection setup that enables determining a phase difference between the first beam and the second beam. Prior to being inputted into coherent detection stage 422, any one (or both) of the input signals can be additionally processed (e.g., amplified) to have the same (or similar) amplitudes.

Coherent detection stage 422 can detect a difference between frequencies and phases of the input beams, e.g., between frequency F0 of the first beam and frequency F0+f′ of the second beam. Coherent detection stage 422 can output an electrical signal (e.g., an RF electrical signal) having a beat pattern representative of the offset frequency f′ and the relative phase difference between the first beam and the second beam. The electrical signal representative of the beat pattern can be provided to RF mixer 424. A second input into RF mixer 424 can be a signal from RF LO 423 (e.g., a synthesizer) that has the target offset frequency f. RF mixer 424 can produce a first RF signal of frequency f−f′ and a second RF signal of frequency f+f′. A low-pass filter (not shown in FIG. 4C) can filter out the second RF signal and provide the first RF signal representative of the frequency difference f−f′ (and the relative phase between the first beam and the second beam) to feedback electronics stage 426. Feedback electronics stage 426 can operate in a frequency range that includes a low-frequency (e.g., dc and close to dc) domain but also extends above at least the linewidth of the second light source 402. In some implementations, the bandwidth at which feedback electronics stage 426 operates can be significantly higher than the linewidth of second light source 402, to improve line narrowing (or to prevent line broadening) during loop locking operations. For example, the bandwidth can be 1-10 MHz or even more (e.g., for a second light source 402 linewidth of 50-100 kHz). In some implementations, the bandwidth can be up to 50 MHz. Increasing the bandwidth can be cost-optimized against the desired accuracy of the sensing system, with higher bandwidths acceptable in higher-accuracy sensing systems and lower bandwidths used in more economical devices that have a lower target accuracy.

Feedback electronics stage 426 can determine the frequency f−f′ of the input signal and can modify settings of second light source 402 to minimize |f−f′|. For example, feedback electronics stage 426 can determine, by adjusting settings of second light source 402 and detecting a corresponding change in the frequency of the output of RF mixer 424, whether increasing or decreasing the frequency of second light source 402 reduces the frequency mismatch |f−f′|. Feedback electronics stage 426 can then change the settings of second light source 402, e.g., move frequency f′ in the direction that decreases the frequency mismatch |f−f′|. This procedure can be repeated iteratively (e.g., continuously or quasi-continuously) until the mismatch |f−f′| is minimized and/or brought within an acceptable (e.g., target) accuracy.
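
A toy sketch of this iterative locking step (the proportional update rule, gain, and frequency values are illustrative stand-ins, not the disclosed feedback electronics):

```python
# Assumed target offset (from RF LO 423), initial laser offset, and loop gain.
f_target = 100.0e6     # target offset frequency f, Hz
f_current = 97.3e6     # initial second-laser offset f', Hz
gain = 0.5             # loop gain of the feedback electronics

for _ in range(20):
    error = f_target - f_current     # mismatch f - f' reported by the mixer
    f_current += gain * error        # nudge the laser frequency setting

print(abs(f_target - f_current))     # mismatch driven toward zero (~2.6 Hz)
```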

Similarly to how the frequency difference is minimized, RF mixer 424, RF LO 423, and feedback electronics stage 426 can be used to correct for the phase difference between the first beam output by first light source 401 and the second beam output by second light source 402. In some implementations, one or more filters (not shown in FIG. 4C) can filter out high-frequency phase fluctuations while selecting, for processing by feedback electronics stage 426, those fluctuations whose frequency is of the order of (or higher, up to a certain predefined range, than) the linewidth of second light source 402. For example, the linewidth can be below 50-100 kHz whereas the filter(s) bandwidth can be of the order of 1 MHz.

OFL 405 can include additional elements that are not explicitly depicted in FIG. 4C. For example, OFL 405 can include one or more electronic amplifiers, which can amplify outputs of at least some of coherent detection stage 422, RF mixer 424, filter(s), and so on. In some implementations, feedback electronics stage 426 can include an ADC with some components of feedback electronics stage 426 implemented as digital processing components. Feedback electronics stage 426 can include circuitry capable of adjusting various settings of second light source 402, such as parameters of optical elements (mirrors, diffraction gratings), including grating periods, angles, refractive indices, lengths of optical paths, relative orientations of optical elements, and the like. Feedback electronics stage 426 can be capable of tuning the amount of current injected into elements of second light source 402 to control temperature, charge carrier density, and other parameters responsible for control of the frequency and phase of light output by second light source 402.

With the synchronization of first light source 401 and second light source 402 enabled by OFL 405, second light source 402 operates in a mode that is frequency-offset and phase-locked relative to first light source 401. Correspondingly, a second copy of the first beam (output by beam splitter 412) can be used as an unmodulated part (pilot tone) of a frequency-multiplexed light beam that is transmitted to a target. A second copy of the second beam (output by beam splitter 414) can be used to carry frequency (and/or phase) encoding transmitted to a target, e.g., one or more objects 465 in the driving environment 110. The second copy of the second beam can be modulated, by optical modulator A 330, with a frequency and/or phase encoding, as described above in conjunction with FIG. 2A, FIG. 4A, and FIG. 4B. Optical combiner 340 then combines the two beams and delivers the combined beam to a TX optical interface 460. Transmitted beam 462 interacts with object(s) 465 and generates a reflected beam 466 that is received via RX optical interface 468. For illustration, FIG. 4C depicts an implementation in which TX optical interface 460 is separate from RX optical interface 468, but it should be understood that TX optical interface 460 and RX optical interface 468 can share any number of optical elements, e.g., apertures, lenses, mirrors, collimators, polarizers, waveguides, and the like. Subsequent processing of the received reflected beam can be performed similarly to the processing discussed in conjunction with FIG. 2A, FIG. 4A, and FIG. 4B.

In some implementations, more than two lasers can be used in a way that is similar to the setup of FIG. 4C. For example, an N-laser system can be used with one laser deployed as an LO laser and N−1 lasers deployed as signal lasers, each of the signal lasers having a different offset from the LO laser frequency and being optically locked to the LO laser (or one of the other N−1 signal lasers) as described above.

Optical sensing systems 400, 403, and/or 404 can perform detection of velocities and distances to multiple objects in a manner that is similar to how such a detection is performed by optical sensing system 300 of FIG. 3A.

FIG. 5A is a schematic illustration of a frequency encoding imparted to a sensing light beam together with a sequence of frequency chirps, for efficient disambiguation of returns from multiple objects, in accordance with some implementations of the present disclosure. In conventional FMCW lidars, simultaneous determination of a target's velocity V (via Doppler shift fD) and a distance to the target (via time of flight τ) is often performed using a periodic sequence of linear frequency modulations, commonly referred to as up-chirps and down-chirps, each of duration T/2,

Δf(t)=β×{t, 0<t≤T/2; T−t, T/2<t≤T},

repeated for each period of time T. The slope of the chirps β determines the bandwidth of the frequency modulation (βT/2) and is set in view of the target accuracy of distance and velocity detection (with large slopes and bandwidths required for higher target accuracy). The sequence of chirps in the RX signal can be shifted both by the time delay (time of flight) τ along the time axis and by the Doppler frequency fD along the frequency axis. The beat frequency f(t)=FRX(t)−FLO(t) representative of the difference between the frequency FRX(t) of the RX beam and the frequency FLO(t) of the LO copy can then be detected (in the analog or digital domain). The beat frequency on the up-chirp side fup can be different from the beat frequency on the down-chirp side fdown. The Doppler shift can then be determined from the difference of the two detected beat frequencies, fD=(fup−fdown)/2, and the delay time can be determined from the sum of the two beat frequencies, τ=(fup+fdown)/(2β).
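
A short worked example of these relations (placeholder numbers; the sign convention fup=βτ+fD, fdown=βτ−fD is assumed here, and conventions vary between treatments):

```python
# Placeholder chirp slope, Doppler shift, and round-trip delay.
beta = 1e12          # chirp slope, Hz/s
f_D = 2.0e6          # Doppler shift, Hz
tau = 5.0e-6         # round-trip delay, s

f_up = beta * tau + f_D
f_down = beta * tau - f_D

assert abs((f_up - f_down) / 2 - f_D) < 1.0             # f_D = (f_up - f_down)/2
assert abs((f_up + f_down) / (2 * beta) - tau) < 1e-12  # tau = (f_up + f_down)/(2*beta)
```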

This conventional way of using a chirp-up/chirp-down sequence has substantial drawbacks. If the received reflected beam is generated by two (or more) targets located along (or near) the same direction, the lidar device can identify four (or more) beat frequencies. This presents a significant ambiguity, as there can be three different associations (pairings) of the four beat frequencies. A similar situation arises when the direction of the transmitted beam sweeps across multiple targets during the signal integration period. Reduction of the integration period, on the other hand, causes the signal-to-noise ratio (SNR) to diminish, with an ensuing drop in the accuracy of lidar detections.

FIG. 5A illustrates a combination of an up-chirp sequence and a frequency encoding that can be used for more efficient distance-to-velocity disambiguation. FIG. 5A shows a periodic sequence (with period T) of up-chirps (where j is an integer)


Δf(t)=β×(t−jT), jT<t≤(j+1)T,

additionally modulated with a set of frequency shifts Δf(tj). FIG. 5B is a schematic illustration of a phase encoding Δϕ(tj) imparted to a sensing light beam together with a sequence of frequency chirps, for efficient disambiguation of returns from multiple objects, in accordance with some implementations of the present disclosure. Although, for simplicity, a chirp-up sequence is shown in FIG. 5A and FIG. 5B, in some implementations various other combinations of a frequency, phase, or amplitude encoding with a chirp-down sequence, a chirp-up/chirp-down sequence, or any other type of a chirp sequence can be used, including a non-linear chirp sequence.

Both the frequency and the phase encoding can be described on the same footing in terms of a time-dependent phase shift Δϕ(t), where in the case of the frequency encoding, Δϕ(t)=2π∫0tΔf(t′)dt′. For example, within a single period of the chirp, Δϕ(t)=2πβ∫0tt′dt′=πβt². In some implementations, both the chirp sequence and the phase (or frequency) encoding are imparted to the transmitted beam whereas only the chirp sequence is applied to the LO copy that remains on the lidar device. In such instances, the difference of the applied (at time t−τ) and detected (at time of detection t) encodings Δϕ(t−τ)−Δϕ(t) can be analyzed in the digital domain, where the time delay τ is determined. In some implementations, the chirp sequence and the phase (or frequency) encoding are imparted to both the transmitted beam and the LO copy. In such instances, the difference of the encodings Δϕ(t−τ)−Δϕ(t) can be obtained in the analog domain, digitized, and then analyzed in the digital domain. In some implementations, the transmitted beam is also frequency-shifted relative to the LO beam.
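
As an illustrative check of the frequency-to-phase relation above (the chirp slope and sampling values are placeholders; a trapezoidal cumulative sum stands in for the integral):

```python
import numpy as np

dt = 1e-9                         # sample interval (placeholder)
beta = 1e12                       # chirp slope (placeholder)
t = np.arange(0, 5e-6, dt)
df = beta * t                     # single up-chirp, df(t) = beta*t

# Trapezoidal cumulative integral: dphi(t) = 2*pi * int_0^t df(t') dt'
dphi = 2 * np.pi * dt * np.concatenate(([0.0], np.cumsum((df[1:] + df[:-1]) / 2)))
print(np.allclose(dphi, np.pi * beta * t**2))   # matches pi*beta*t^2
```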

Assuming a chirped LO copy beam, within an overlapping portion of LO and RX beams, the electric field of the LO beam can be,


ELO=ALO exp [2πiF0t+iπβt²],

and the electric field of the RX beam can be


ERX=ARX exp [2πi(F1+fD)×(t−τ)+iπβ×(t−τ)²+iΔϕ(t−τ)].

In implementations where a 90-degree optical hybrid combines the LO beam and the RX beam, the electrical signal generated by the optical hybrid can be (omitting a constant phase),


J=ERXE*LO=ARXA*LO exp [2πi(F1−F0)t+iπ(fD−2βτ)t+iΔϕ(t−τ)],

where F1−F0 is a frequency offset between the transmitted beam and the LO copy. In some implementations where the frequency offset is non-zero, a 180-degree optical hybrid can be used instead of the 90-degree hybrid. The electrical signal J can be digitized (e.g., by ADC) and digitally processed to determine the value fD−2βτ together with the time delay τ, e.g., based on identification of a location of the maximum of the correlation function of Δϕ(t−τ) and Δϕ(t).

The techniques described in relation to FIG. 5A and FIG. 5B can be implemented using various optical sensing systems disclosed above in conjunction with FIG. 2A, FIG. 3A, and/or FIGS. 4A-C. For example, the optical sensing system 200 of FIG. 2A can deploy optical modulator 230 to impart frequency, phase, or amplitude encoding whereas an additional optical modulator (e.g., placed between beam preparation stage 210 and beam splitter 212) can impart the chirp sequence to the light beam before the light beam is split into LO copy 234 and the transmitted beam.

Numerous modifications and variations of the optical sensing systems described above in conjunction with FIGS. 2-5 are further within the scope of this disclosure. Any of the light beams depicted with solid lines can be additionally amplified. For example, whereas FIG. 3A depicts an amplifier 350 that amplifies the light beam prior to its transmission through the optical interface 360, additional amplifiers can amplify the following light beams: beams output by beam preparation stage 310, beams processed by optical modulator A 330 and/or optical modulator B 331, beams received through optical interface 360 (and directed to coherent detection stage 380 by optical circulator 354), and so on. In some implementations, amplifiers can be saturation amplifiers that are used to ensure a target composition of the light beams. For example, saturation amplifiers placed between optical modulator A 330 and optical combiner 340, on one hand, and between optical modulator B 331 and optical combiner 340, on the other hand, can be used to ensure that the light beams of frequencies F1 and F2 have the same (or similar) amplitudes in the combined light beam that is output towards object(s) 365. Such placement of the amplifiers can also reduce cross-talk between the light beams of frequencies F1 and F2 by ensuring that each light beam's amplitude is saturated prior to combining the beams. This may be advantageous compared with combining the beams before passing the combined beam through a saturated amplifier. In the latter case, the proportion of each of the beams in the amplified combined beam would be maintained (and thus any initial difference in the beam strengths would not be eliminated) whereas in the setup in which each of the beams is amplified before combining, equalization of the strengths of the beams can be achieved more efficiently.

In some implementations, amplifiers can be configured to produce gain that is time-dependent. The time-dependent gain can be synchronized with the direction of the transmitted beam. For example, when the transmitted beam is used to scan close objects (e.g., objects for which the angle of transmission is below the plane of horizon), the amplifiers can be configured to produce lower gain and, correspondingly, lower intensity of the transmitted beam. When the transmitted beam is used to scan more distant objects (e.g., objects for which the angle of transmission is near or above the plane of horizon), the amplifiers can be configured to produce a higher gain and, correspondingly, higher intensity of the transmitted beam. Similarly, the intensity of the transmitted beam can be varied depending on the azimuthal angle of scanning, e.g., configured to have a stronger intensity along the directions parallel to the direction of motion of the AV and weaker intensities along the perpendicular directions. In some implementations, the intensity of the transmitted beam can be configured to be a continuous function of the angles (both horizontal and vertical angles) of scanning.

In some implementations, any number of optical elements depicted in FIGS. 2-5 can be implemented as part of a photonic integrated circuit. For example, one or more of light sources, waveguides, beam splitters, optical combiners, resonators, amplifiers, and other devices, can be implemented on one or more photonic integrated circuits.

In some implementations, the pilot tone can be modulated via a sequence of high-bit-rate signals. For example, a sequence of 0-1-0-1-0-1 . . . bit values, each bit value having a duration of 10⁻⁸ sec, can be used for the pilot tone. The bit values can be output by encoding module 320 and converted into analog signals by RF modulator 322, e.g., as a sequence of rectangular signals applied for a certain duration of the pilot tone (in the instances of time multiplexing) or continuously (in the instances of frequency multiplexing). The applied modulation can cause the carrier frequency (e.g., F0) to develop multiple sidebands, e.g., F0±100 MHz, F0±200 MHz, etc. Some of the sidebands can be used (e.g., as frequency offsets) during detection of the RX signals by coherent detection stage 380 and DSP 390. For example, low-pass filter 382 and/or high-pass filter 383 can be configured to process signals generated by coherent detection stage 380 that are shifted relative to the frequency (e.g., F0) of the LO copy 334 by ±100 MHz.

In some implementations, due to the motion of the scanning (transmitted) beam, multiple reflecting surfaces of the same target object (e.g., object 365) can be scanned, resulting in a variation A(t) of the reflected beam amplitude and a corresponding variation of the electrical signal J(t) output by coherent detection stage 380. The spectral (frequency domain) representation of the electrical signal, J(ω)=∫0Tint eiωtJ(t)dt, can, therefore, identify the beat frequencies with an accuracy that is limited by the inverse time of the amplitude A(t) variation. To improve the accuracy of Doppler shift and delay time detection, and to improve the signal-to-noise ratio, in some implementations the signal integration time Tint can be split into shorter time intervals T1, T2 . . . and the spectral representation of the electrical signal can be computed separately for each split time interval, Jk(ω)=∫Tk−1Tk eiωtJ(t)dt. The power density can then be obtained as the sum of power densities for the various time intervals, P(ω)=Σk|Jk(ω)|², and the determination of the Doppler shift and time of flight can be performed based on the summed power density P(ω).
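
An illustrative sketch of this interval-splitting scheme (the signal is synthetic and all values are placeholders) sums per-interval power spectra before peak-picking:

```python
import numpy as np

rng = np.random.default_rng(3)

fs, f_beat = 1e8, 4.0e6                            # placeholder rates
t = np.arange(0, 200e-6, 1 / fs)
amp = 1.0 + 0.5 * np.sin(2 * np.pi * 3e4 * t)      # slow amplitude variation A(t)
j = amp * np.cos(2 * np.pi * f_beat * t) + 0.5 * rng.standard_normal(len(t))

# Split the integration time, compute a power spectrum per interval,
# and sum the per-interval power densities P(w) = sum_k |J_k(w)|^2.
n_split = 8
segments = np.split(j, n_split)
power = sum(np.abs(np.fft.rfft(seg)) ** 2 for seg in segments)

freqs = np.fft.rfftfreq(len(segments[0]), 1 / fs)
print(f"beat frequency ~ {freqs[np.argmax(power)] / 1e6:.1f} MHz")
```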

FIG. 6, FIG. 7, and FIG. 8 depict flow diagrams of example methods 600, 700, and 800 of using lidar sensing systems that deploy various time and frequency multiplexing techniques described above. Methods 600, 700, and 800 can be performed using systems and components described in relation to FIGS. 1-5, e.g., optical sensing system 200 of FIG. 2A, optical sensing system 300 of FIG. 3A, optical sensing system 400 of FIG. 4A, optical sensing system 403 of FIG. 4B, optical sensing system 404 of FIG. 4C, and/or various modifications or combinations of the aforementioned sensing systems. Methods 600, 700, and 800 can be performed as part of obtaining range and velocity data that characterizes a driving environment of an autonomous vehicle. Various operations of methods 600, 700, and 800 can be performed in a different order compared with the order shown in FIG. 6, FIG. 7, and FIG. 8. Some operations of methods 600, 700, and 800 can be performed concurrently with other operations. Some operations can be optional. Methods 600, 700, and 800 can be used to improve efficiency of velocity and distance detections by lidar devices, including speed and coverage of lidar detections (e.g., a number of objects that can be detected concurrently).

FIG. 6 depicts a flow diagram of an example method 600 of time multiplexing of lidar sensing signals, in accordance with some implementations of the present disclosure. Method 600 can include generating, at block 610, a first beam using a light source (e.g., light source 202 of FIG. 2A). At block 620, method 600 can continue with using a beam splitter (e.g., beam splitter 212) to produce an LO copy of the first beam (e.g., LO copy 234). At block 630, method 600 can include producing, using a first modulator (e.g., optical modulator 230) and based on the first beam, a second beam having a plurality of first portions interspersed with a plurality of second portions (e.g., as depicted in FIG. 2B or FIG. 2C). Each of the plurality of second portions (e.g., portions of duration T2) can be modulated with a first sequence of shifts. The first sequence of shifts can be a sequence of frequency shifts (e.g., as depicted in FIG. 2B) or a sequence of phase shifts (e.g., as depicted in FIG. 2C). The first and second pluralities of portions can be periodically repeated.

In some implementations, the first sequence of shifts can be characterized by a correlation function K(θ) that is a peaked function of a time delay θ. For example, the first sequence of shifts can include one or more of Gold codes, Barker codes, maximum-length sequences, or the like. In some implementations, each of the plurality of first portions (e.g., portions of duration T1) of the second beam can be unmodulated. In some implementations, a second modulator can be configured to impart a frequency offset to the second beam relative to the first beam. In some implementations, the second modulator and the first modulator can be manufactured as a single optical modulator that receives a control signal that is a combination (e.g., a sum) of: control signals configured to impart the first sequence of shifts and control signals that impart the frequency offset.

At block 640, method 600 can continue with an optical interface subsystem (e.g., subsystem that includes optical circulator 254, optical interface 260, and various other optical devices, such as lenses, polarizers, collimators, waveguides, etc.) transmitting the second beam towards an object (e.g., an object in the driving environment of the AV). The optical interface subsystem can further receive a third beam. The third beam can be caused by interaction of the second beam with the object. The third beam can include a plurality of third portions (e.g., portions of duration T1) interspersed with a plurality of fourth portions (e.g., portions of duration T2). The third portions can correspond to the reflected first portions and the fourth portions can correspond to the reflected second portions. Correspondingly, the plurality of fourth portions can be modulated with a second sequence of shifts that is time-delayed relative to the first sequence of shifts. For example, if the first sequence of shifts is Δϕ(ti), the second sequence of shifts Δϕ(ti−τ) can be time-delayed by τ.

The received third beam can be input into a coherent photodetector (e.g., the combination of optical hybrid stage 270 and coherent detection stage 280). The LO beam can also be input into the coherent photodetector. At block 650, method 600 can continue with generating one or more electrical signals representative of a phase difference between the third beam and the LO beam. At block 660, method 600 can include determining, using one or more circuits, a velocity of the object based on a Doppler frequency shift fD between the third beam and the second beam. The one or more circuits can include ADC 284 and DSP 290, as well as multiple other circuits (e.g., filters, mixers, etc.). The Doppler frequency shift fD can be identified using the plurality of first portions of the second beam and the plurality of third portions of the third beam. At block 670, method 600 can continue with determining a distance to the object L, based on: i) a time delay τ between the first sequence of shifts and the second sequence of shifts and ii) the identified Doppler frequency shift. More specifically, a signal processing stage (e.g., DSP 290) can use the determined Doppler frequency shift fD to account for the Doppler shift-induced beating between the plurality of second portions of the second beam and the plurality of fourth portions of the third beam. The signal processing stage can then determine the time delay τ as the value of θ that maximizes the correlation function K(θ), using the one or more electrical signals (representative of the delayed frequency or phase shifts) output by the coherent photodetector.

FIG. 7 depicts a flow diagram of an example method 700 of imparting a combination of frequency chirps together with a sequence of shifts, in accordance with some implementations of the present disclosure. At block 710, method 700 can include a light source generating a first beam. At block 720, a beam splitter can produce an LO copy of the first beam. At block 730, method 700 can continue with applying one or more modulators to the first beam to produce a second beam. The second beam can include a plurality of chirped portions (e.g., as depicted in FIG. 5A and FIG. 5B). Each of the plurality of chirped portions (of duration T) can include a monotonic modulation and a sequence of shifts. In some implementations, the monotonic modulation can include a linear frequency chirp (e.g., an up-chirp, a down-chirp, or an up (or down) portion of an up-chirp/down-chirp sequence). In some implementations, a non-linear monotonic frequency chirp modulation can be used. The sequence of shifts can include a sequence of frequency shifts (e.g., as depicted in FIG. 5A), a sequence of phase shifts (e.g., as depicted in FIG. 5B), or any combination thereof.

At block 740, method 700 can continue with an optical interface subsystem transmitting the second beam towards an object. The optical interface subsystem can further receive a third beam. The third beam can be caused by interaction of the second beam with the object. The third beam can include the plurality of chirped portions that are time-delayed (e.g., by the time of flight τ=2L/c, where L is the distance to the object). The third beam and the LO beam can be input into a coherent photodetector, which can generate one or more electrical signals representative of a phase difference between the third beam and the LO beam. At block 750, method 700 can include determining, using one or more circuits (e.g., a signal processing stage) and based on the phase difference of the third beam and the LO beam, a velocity of the object and a distance to the object. In particular, the signal processing stage can determine the velocity of the object and the distance to the object using the one or more electrical signals, e.g., as described above in conjunction with blocks 660 and 670 of method 600.
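The following is a minimal sketch of what one chirped portion of the transmit waveform can look like at complex baseband. It is a hypothetical Python/NumPy illustration; the sample rate, portion duration, chirp bandwidth, chip length, and the use of a random 0/π code are all assumptions made for concreteness rather than values taken from this disclosure.

```python
# Hypothetical sketch: one chirped portion combining a linear up-chirp
# with a pseudorandom sequence of 0/pi phase shifts (method 700, block 730).
import numpy as np

fs = 1.0e9           # assumed sample rate for the baseband model, Hz
T = 10e-6            # assumed duration of one chirped portion, s
bandwidth = 100e6    # assumed chirp bandwidth, Hz
n = int(fs * T)      # 10,000 samples per portion
t = np.arange(n) / fs

# Linear up-chirp: instantaneous frequency ramps from 0 to `bandwidth`
# over T; the phase is the time integral of the instantaneous frequency.
chirp_phase = np.pi * (bandwidth / T) * t**2

# Sequence of phase shifts overlaid on the chirp, one chip per `chip`
# samples (a stand-in for a Barker, Gold, or maximum-length code).
chip = 100
rng = np.random.default_rng(1)
code_phase = np.repeat(rng.choice([0.0, np.pi], size=n // chip), chip)

# Complex-baseband model of one chirped portion of the second beam.
second_beam = np.exp(1j * (chirp_phase + code_phase))
```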

FIG. 8 depicts a flow diagram of an example method 800 of frequency multiplexing of lidar sensing signals, in accordance with some implementations of the present disclosure. At block 810, method 800 can include using a light source subsystem to produce a first beam having a first frequency (e.g., F1) and a second beam having a second frequency (e.g., F2). The light source subsystem can include one or more light sources, e.g., light source 302 in FIG. 3A, pump laser 303 in FIG. 3D, first/second light sources 401/402 in FIG. 4C, and the like. The light source subsystem can further include a beam preparation stage (e.g., beam preparation stage 310 of FIG. 3A), one or more beam splitters, resonators (e.g., resonator 311 of FIG. 3D), and so on. For example, the light source subsystem can include a light source (e.g., light source 302 of FIG. 3A) configured to generate a common beam (e.g., of frequency F0) and a beam splitter 312 configured to split the common beam into the first beam (provided to optical modulator B 331) and the second beam (directed to beam splitter 314). Method 800 can include shifting the frequency of at least one of the first beam or the second beam from the frequency of the common beam (e.g., F1≠F0 or F2≠F0). In some implementations, the first frequency is shifted from a frequency of the LO beam (e.g., the frequency F0 of the common beam) by a first frequency offset (e.g., F1−F0) and the second frequency is shifted from the frequency of the LO beam by a second frequency offset (e.g., F2−F0). Different optical modulators can impart the first frequency offset and the second frequency offset.
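At complex baseband, the effect of the two frequency offsets can be sketched as follows (a hypothetical Python/NumPy illustration; the offset values are assumptions): each modulator multiplies its copy of the common beam by a complex exponential at the respective offset frequency, so that F1=F0+offset_1 and F2=F0+offset_2.

```python
# Hypothetical sketch: imparting two different frequency offsets to beams
# split from a common beam (complex-baseband model, F0 mapped to 0 Hz).
import numpy as np

fs = 1.0e9                 # assumed sample rate for the baseband model, Hz
t = np.arange(4096) / fs
common_beam = np.ones(4096, dtype=complex)   # common beam at F0 (baseband 0)

offset_1 = 80e6            # assumed first frequency offset, F1 - F0, Hz
offset_2 = 120e6           # assumed second frequency offset, F2 - F0, Hz

first_beam = common_beam * np.exp(2j * np.pi * offset_1 * t)    # at F1
second_beam = common_beam * np.exp(2j * np.pi * offset_2 * t)   # at F2
```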

In some implementations, the light source subsystem can include a first light source (e.g., first light source 401) configured to output the first beam having a first frequency (e.g., F0) and a second light source (e.g., second light source 402) configured to output the second beam having a second frequency (e.g., F0+f′). The light source subsystem can further include an optical feedback loop (e.g., OFL 405) configured to lock one of the first frequency or the second frequency to another one of the second frequency or the first frequency (e.g., to lock frequency F0+f′ to frequency F0). As used herein, “locking” should be understood as dynamically causing one of the frequencies to maintain a target relationship to another frequency, including maintaining a target frequency offset (e.g., f′=f) between the two frequencies.

The top callout portion of FIG. 8 illustrates example operations of the OFL. More specifically, at block 812, a coherent photodetector (e.g., coherent detection stage 422 of FIG. 4C) can receive a copy of the first beam and a copy of the second beam. At block 814, the coherent photodetector can generate an electrical signal representative of a phase difference between the copy of the first beam and the copy of the second beam (e.g., a signal having frequency f′). At block 816, one or more OFL circuits (e.g., RF LO 423, RF mixer 424, and feedback electronics stage 426) can adjust, in view of the electrical signal, at least one of the first frequency or the second frequency. For example, as depicted in FIG. 4C, feedback electronics stage 426 can output a control signal of frequency f′−f configured to adjust the frequency of second light source 402 from F0+f′ to F0+f.
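The locking behavior can be sketched as a simple discrete-time feedback loop (a hypothetical Python illustration; the loop gain, iteration count, and frequency values are assumptions, and a physical OFL would act on the laser's tuning input rather than on a stored number): the measured beat frequency f′ is compared with the target offset f, and the second source is retuned until the error vanishes.

```python
# Hypothetical sketch: the optical feedback loop driving the measured
# offset f' between the two sources toward the target offset f.
f_target = 100e6       # desired offset f between the two sources, Hz
f_beat = 112e6         # initially measured offset f', Hz
gain = 0.5             # assumed loop gain of the feedback electronics

for _ in range(20):
    error = f_beat - f_target      # mixer output frequency, f' - f
    f_beat -= gain * error         # retune the second light source
print(f_beat)                      # converges to f_target (f' locked to f)
```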

In some implementations, as depicted in FIG. 3D, the light source subsystem can be configured to generate a frequency comb. The frequency comb can include a plurality of comb teeth (e.g., teeth having frequencies F0+nF). In such implementations, the first beam and the second beam can be associated with a first comb tooth (e.g., the m-th tooth, where m is an integer) of the plurality of comb teeth. At least one of the first frequency (F0+mF+f1) or the second frequency (F0+mF+f2) can be obtained by shifting the frequency of the first comb tooth (F0+mF) by a respective offset frequency.

At block 820, method 800 can continue with a modulator (e.g., optical modulator B 331 in FIG. 3A) imparting a modulation to the second beam. In some implementations, the modulation imparted to the second beam can include a sequence of shifts characterized by a correlation function K(θ) that is a peaked function of a time delay θ. The sequence of shifts can include at least one of a sequence of frequency shifts Δf(ti), a sequence of phase shifts Δϕ(ti), or a sequence of amplitude shifts ΔA(ti). In some implementations, the sequence of shifts is based on at least one of a maximum-length sequence, a Gold code, or a Barker code.

At block 840, method 800 can continue with an optical interface subsystem (e.g., a subsystem that includes optical circulator 254, optical interface 260, and various other optical devices, such as lenses, polarizers, collimators, waveguides, etc.) outputting, towards a first object (e.g., an object in the driving environment of the AV), the first beam and the second beam along the same (or a similar) optical path. The optical interface subsystem can further receive: i) a third beam caused by interaction of the first beam with the first object and ii) a fourth beam caused by interaction of the second beam with the first object.

At block 850, method 800 can continue with one or more circuits determining, based on a first phase information carried by the third beam, a velocity of the first object. For example, the one or more circuits can compare the first phase information with a phase information carried by a local oscillator (LO) beam. In some implementations, the LO beam can be a copy of one of the first beam or the second beam. In some implementations, the LO beam can be frequency-shifted relative to the first beam by a first frequency offset and frequency-shifted relative to the second beam by a second frequency offset.

The middle callout portion of FIG. 8 illustrates example operations of block 850. More specifically, at block 852, a coherent photodetector can receive a combined beam that includes the third beam and the fourth beam and can further receive the LO beam. At block 854, method 800 can include generating a first electrical signal representative of a phase difference of the combined beam and the LO beam. The generated first electrical signal can be provided to the one or more circuits. The one or more circuits can include one or more filters, mixers, and a signal processing stage, which can include one or more ADCs and a DSP. At block 856, method 800 can continue with a first filter (e.g., a low-pass filter) generating, based on the first electrical signal, a second electrical signal representative of a phase difference of the third beam and the LO beam. Similarly, a second filter (e.g., a high-pass filter) can generate, based on the first electrical signal, a third electrical signal representative of a phase difference of the fourth beam and the LO beam. At block 858, the signal processing stage can determine, based on the second electrical signal, the velocity of the first object.

At block 860, method 800 can continue with determining, based on a second phase information carried by the third beam and the first phase information, a distance to the first object. As depicted by the bottom callout portion of FIG. 8, operations of block 860 can include determining, at block 862, based on the second electrical signal and the third electrical signal, the distance to the first object.
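A minimal sketch of the channel separation of blocks 852-862 is given below. It is a hypothetical Python/NumPy illustration under simplifying assumptions (noiseless complex-baseband beats, a frequency-domain split standing in for the low-pass/high-pass filter pair, and illustrative frequencies and wavelength), not the disclosed circuit. The low channel carries the third-beam beat, whose frequency gives the Doppler shift and hence the radial velocity v=fDλ/2; the high channel carries the fourth-beam beat, which would then be processed for range, e.g., by the Doppler-compensated code correlation sketched earlier.

```python
# Hypothetical sketch of blocks 852-862: separating the third-beam and
# fourth-beam beats in frequency and reading the Doppler shift off the
# low channel.
import numpy as np

fs = 1.024e6           # assumed sample rate, Hz
n = 4096
t = np.arange(n) / fs
f_d = 10e3             # Doppler beat of the third beam, Hz
f_offset = 200e3       # frequency offset separating the fourth-beam beat, Hz
wavelength = 1.55e-6   # assumed operating wavelength, m

# First electrical signal: combined third-beam and fourth-beam beats.
combined = (np.exp(2j * np.pi * f_d * t)
            + np.exp(2j * np.pi * (f_offset + f_d) * t))

# Frequency-domain split standing in for the low-pass/high-pass filters.
spectrum = np.fft.fft(combined)
freqs = np.fft.fftfreq(n, d=1 / fs)
low = np.where(np.abs(freqs) < f_offset / 2, spectrum, 0)    # second signal
high = np.where(np.abs(freqs) >= f_offset / 2, spectrum, 0)  # third signal

# Block 858: velocity from the low channel, v = f_D * lambda / 2.
f_d_est = freqs[np.argmax(np.abs(low))]
velocity = f_d_est * wavelength / 2
print(f_d_est, velocity)   # ~10 kHz and the corresponding radial speed
```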

In some implementations in which a frequency comb is deployed by the lidar sensing system, method 800 can further include determining a velocity of a second object and a distance to the second object using one or more beams generated based on a second comb tooth of the plurality of comb teeth. This can be performed similarly to how the velocity of the first object and the distance to the first object are determined, e.g., by repeating blocks 810-862 multiple times (e.g., once for each tooth of the frequency comb).

Some portions of the detailed descriptions above are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.

It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise, as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “identifying,” “determining,” “storing,” “adjusting,” “causing,” “returning,” “comparing,” “creating,” “stopping,” “loading,” “copying,” “throwing,” “replacing,” “performing,” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.

Examples of the present disclosure also relate to an apparatus for performing the methods described herein. This apparatus can be specially constructed for the required purposes, or it can be a general purpose computer system selectively programmed by a computer program stored in the computer system. Such a computer program can be stored in a computer readable storage medium, such as, but not limited to, any type of disk including optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic disk storage media, optical storage media, flash memory devices, any other type of machine-accessible storage media, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.

The methods and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems can be used with programs in accordance with the teachings herein, or it can prove convenient to construct a more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear as set forth in the description below. In addition, the scope of the present disclosure is not limited to any particular programming language. It will be appreciated that a variety of programming languages can be used to implement the teachings of the present disclosure.

It is to be understood that the above description is intended to be illustrative, and not restrictive. Many other implementation examples will be apparent to those of skill in the art upon reading and understanding the above description. Although the present disclosure describes specific examples, it will be recognized that the systems and methods of the present disclosure are not limited to the examples described herein, but can be practiced with modifications within the scope of the appended claims. Accordingly, the specification and drawings are to be regarded in an illustrative sense rather than a restrictive sense. The scope of the present disclosure should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.

Claims

1. A system comprising:

a light source subsystem configured to produce a first beam having a first frequency and a second beam having a second frequency;
a modulator configured to impart a modulation to the second beam;
an optical interface subsystem configured to: receive a third beam caused by interaction of the first beam with a first object, and receive a fourth beam caused by interaction of the second beam with the first object; and
one or more circuits configured to: determine, based on a first phase information carried by the third beam, a velocity of the first object; and determine, based on a second phase information carried by the third beam and the first phase information, a distance to the first object.

2. The system of claim 1, wherein the modulation imparted to the second beam comprises a sequence of shifts characterized by a correlation function that is a peaked function of a time delay, wherein the sequence of shifts comprises at least one of a sequence of frequency shifts or a sequence of phase shifts.

3. The system of claim 2, wherein the sequence of shifts is based on at least one of a maximum-length sequence, a Gold code, or a Barker code.

4. The system of claim 1, wherein the optical interface subsystem is further configured to output, towards the first object, the first beam and the second beam along a same optical path.

5. The system of claim 1, wherein to determine the velocity of the first object, the one or more circuits compare the first phase information with a phase information carried by a local oscillator (LO) beam, wherein the first frequency is shifted from a frequency of the LO beam by a first frequency offset.

6. The system of claim 5, wherein the second frequency is shifted from the frequency of the LO beam by a second frequency offset.

7. The system of claim 1, wherein the light source subsystem comprises:

a light source configured to generate a common beam, wherein the first beam and the second beam are obtained from the common beam, and wherein at least one of the first beam or the second beam is shifted in frequency from the common beam.

8. The system of claim 1, wherein the light source subsystem comprises:

a first light source configured to output the first beam having a first frequency;
a second light source configured to output the second beam having a second frequency; and
an optical feedback loop configured to lock one of the first frequency or the second frequency to another one of the second frequency or the first frequency.

9. The system of claim 8, wherein the optical feedback loop (OFL) comprises:

a coherent photodetector configured to: receive a copy of the first beam and a copy of the second beam; and generate an electrical signal representative of a phase difference between the copy of the first beam and the copy of the second beam; and
one or more OFL circuits configured to adjust, in view of the electrical signal, at least one of the first frequency or the second frequency.

10. The system of claim 1, further comprising:

a coherent photodetector configured to: receive a combined beam comprising the third beam and the fourth beam; receive a local oscillator (LO) beam; generate a first electrical signal representative of a phase difference of the combined beam and the LO beam;
wherein the one or more circuits are further configured to receive the first electrical signal.

11. The system of claim 10, wherein the one or more circuits comprise:

a first filter to generate, based on the first electrical signal, a second electrical signal representative of a phase difference of the third beam and the LO beam;
a second filter to generate, based on the first electrical signal, a third electrical signal representative of a phase difference of the fourth beam and the LO beam; and
a signal processing stage configured to determine, based on the second electrical signal, the velocity of the first object; and determine, based on the second electrical signal and the third electrical signal, the distance to the first object.

12. The system of claim 1, wherein the light source subsystem is configured to generate a frequency comb comprising a plurality of comb teeth, and wherein the first beam and the second beam are associated with a first comb tooth of the plurality of comb teeth and at least one of the first frequency or the second frequency is obtained by shifting a frequency of the first comb tooth.

13. The system of claim 12, further configured to determine a velocity of a second object and a distance to the second object using one or more beams generated based on a second comb tooth of the plurality of comb teeth.

14. A system comprising:

a light source configured to generate a first beam;
a first modulator configured to produce, based on the first beam, a second beam comprising a plurality of first portions interspersed with a plurality of second portions, wherein each of the plurality of second portions is modulated with a first sequence of shifts, the first sequence of shifts comprising at least one of a sequence of frequency shifts or a sequence of phase shifts;
an optical interface subsystem configured to: receive a third beam caused by interaction of the second beam with an object, the third beam comprising a plurality of third portions interspersed with a plurality of fourth portions, wherein each of the plurality of fourth portions is modulated with a second sequence of shifts that is time-delayed relative to the first sequence of shifts; and
one or more circuits configured to: determine a velocity of the object based on a Doppler frequency shift between the third beam and the second beam, identified using the plurality of first portions and the plurality of third portions; and determine a distance to the object based on: a time delay between the first sequence of shifts and the second sequence of shifts, and the identified Doppler frequency shift.

15. The system of claim 14, wherein the first sequence of shifts is characterized by a correlation function that is a peaked function of a time delay.

16. The system of claim 14, wherein each of the plurality of first portions of the second beam is unmodulated.

17. The system of claim 14, further comprising:

a beam splitter configured to produce a local oscillator (LO) copy of the first beam;
a second modulator configured to impart a frequency offset to the second beam relative to the first beam;
a coherent photodetector configured to: input the third beam and the LO beam; and generate one or more electrical signals representative of a phase difference between the third beam and the LO beam; and
a signal processing stage configured to determine the Doppler frequency shift and the time delay using the one or more electrical signals.

18. A system comprising:

a light source configured to generate a first beam;
one or more modulators configured to produce, using the first beam, a second beam comprising a plurality of chirped portions, wherein each of the plurality of chirped portions comprises a monotonic modulation and a sequence of shifts, wherein the sequence of shifts comprises at least one of a sequence of frequency shifts or a sequence of phase shifts;
an optical interface subsystem configured to: receive a third beam caused by interaction of the second beam with an object, the third beam comprising the plurality of chirped portions that are time-delayed; and
one or more circuits configured to determine, based on a phase difference of the third beam and a local oscillator (LO) beam, a velocity of the object and a distance to the object.

19. The system of claim 18, wherein the sequence of shifts is characterized by a correlation function that is a peaked function of a time delay.

20. The system of claim 18, further comprising:

a beam splitter configured to produce a local oscillator (LO) copy of the first beam; and
a coherent photodetector configured to: input the third beam and the LO beam; and generate one or more electrical signals representative of a phase difference between the third beam and the LO beam; and
a signal processing stage configured to determine the velocity of the object and the distance to the object using the one or more electrical signals.
Patent History
Publication number: 20220187458
Type: Application
Filed: Dec 13, 2021
Publication Date: Jun 16, 2022
Inventors: Alexander Piggott (Mountain View, CA), Bryce Remesch (Hollis, NH), Michael R. Matthews (Portola Valley, CA), David Sobel (Los Altos, CA), Imam Uz Zaman (San Jose, CA)
Application Number: 17/549,124
Classifications
International Classification: G01S 17/34 (20060101); G01S 17/48 (20060101); G01S 17/931 (20060101); B60W 30/08 (20060101);