High-speed imaging using periodic optically modulated detection
Methods and systems for imaging a target. In some examples, a system includes an optical modulator configured for applying, at each time of an exposure window, a respective optical modulation pattern to a received image of the target to output a modulated image. The system includes a camera configured for capturing a single image frame for the exposure window by receiving, at each time, the modulated image of the target. The system includes a demodulator implemented on a computer system and configured for demodulating the single image frame based on the optical modulation patterns to recover a number of recovered image frames, each depicting the target at a respective recovered time within the exposure window.
This application claims the benefit of U.S. Provisional Patent Application Ser. No. 62/620,223, filed Jan. 22, 2019, the disclosure of which is incorporated herein by reference in its entirety.
STATEMENT OF GOVERNMENT INTEREST
This invention was made with government support under Contract No. PHY-1418848 awarded by the National Science Foundation. The government has certain rights in the invention.
TECHNICAL FIELD
This specification relates generally to high-speed imaging using optical modulation, for example, gigahertz or higher rate imaging and measurements.
BACKGROUND
The development of high-speed imaging and measurement systems has been a critical part of optical diagnostics and spectroscopy for decades. The structured illumination (SI) method was developed to bypass the diffraction limits for optical microscopy. The use of SI to obtain super-resolution information simply by illuminating a sample with structured light can be explained in the context of the ‘spatial frequency’ of an image. Since an optical image collects only low spatial frequency information, high-frequency information is lost after light passes through an objective lens, which sets the classical Abbe diffraction limit. In a typical SI setup, incident light is spatially modulated using a Ronchi ruling, digital micromirror device (DMD), or gratings before it reaches its target. By applying these bar code-like illumination patterns in different orientations and processing all acquired images using computer algorithms, a high spatial resolution image can be obtained. For combustion diagnostics, if one is doing a fluorescence experiment with two wavelengths, one incident laser light can be modulated with a Ronchi ruling and the other can be modulated with a Ronchi ruling at a different angle or spatial frequency. When the image from such an experiment is collected, it will contain information corresponding to each light source, which can be separated using Fourier domain analysis even though the light was collected during one exposure. This type of setup has been used for various flame and spray measurements.
SUMMARY
This specification describes methods and systems for imaging a target. In some examples, a system includes an optical modulator configured for applying, at each time of an exposure window, a respective optical modulation pattern to a received image of the target to output a modulated image. The system includes a camera configured for capturing a single image frame for the exposure window by receiving, at each time, the modulated image of the target. The system includes a demodulator implemented on a computer system and configured for demodulating the single image frame based on the optical modulation patterns to recover a number of recovered image frames, each depicting the target at a respective recovered time within the exposure window.
In some examples, a system includes one or more beam splitters configured for separating a received image of the target into a number of optical paths each having a different optical length, each path including a respective optical modulator configured to output a respective modulated image of the target. The system includes one or more beam combiners configured for combining the modulated images and a camera configured for capturing a single image frame for an exposure window by receiving, at each time, an output from the beam combiners of an optical combination of the modulated images of the target. The system includes a demodulator implemented on a computer system and configured for demodulating the single image frame to recover a number of recovered image frames, each depicting the target at a respective recovered time within the exposure window.
In some examples, a system includes an optical modulator configured for applying, to each field of view of a number of fields of view of the target, a respective optical modulation pattern to a received image of the target to output a modulated image. The system includes a camera configured for capturing a single image frame for the plurality of fields of view by receiving the modulated images of the target. The system includes a demodulator implemented on a computer system and configured for recovering, from the single image frame, a plurality of recovered image frames each depicting the target from a respective field of view.
The computer systems described in this specification may be implemented in hardware, software, firmware, or any combination thereof. In some examples, the computer systems may be implemented using a computer readable medium having stored thereon computer executable instructions that when executed by the processor of a computer control the computer to perform steps. Examples of suitable computer readable media include non-transitory computer readable media, such as disk memory devices, chip memory devices, programmable logic devices, and application specific integrated circuits. In addition, a computer readable medium that implements the subject matter described herein may be located on a single device or computing platform or may be distributed across multiple devices or computing platforms.
Thus, it is an object of the presently disclosed subject matter to provide methods and systems for imaging a target.
An object of the presently disclosed subject matter having been stated hereinabove, and which is achieved in whole or in part by the methods and systems disclosed herein, other objects will become evident as the description proceeds when taken in connection with the accompanying Figures as best described herein below.
This specification describes methods and systems for high-speed imaging of a target using optical modulation. The methods and systems are described below with reference to three studies that illustrate representative, non-limiting examples of the methods and systems. Section I describes high-speed flame chemiluminescence imaging using time-multiplex structured detection. Section II describes single-shot nanosecond-resolution multiframe passive imaging by multiplexed structured image capture. Section III describes multiplexed structured image capture for single exposure field of view increase.
Section I—High-Speed Flame Chemiluminescence Imaging Using Time-Multiplexed Structured Detection
1. Introduction
The development of high-speed imaging and measurement systems has been a critical part of optical diagnostics and spectroscopy for decades. With high-repetition-rate lasers at kilohertz (kHz) [1] and megahertz (MHz) [2] rates now increasingly common, there is a need for affordable high-speed imaging methods. Furthermore, high-speed imaging is especially desirable for diagnostics of combustion phenomena, as shown by recent work on 2D Coherent Anti-Stokes Raman Spectroscopy (CARS) [3, 4], 2D spontaneous Raman scattering [5], and Planar Laser-Induced Fluorescence (PLIF) [1, 6] at kilohertz (kHz) or higher rates. Tremendous insights about turbulence, combustion chemistry, and their interactions have been gained through high-speed diagnostics.
The structured illumination (SI) method was developed to bypass the diffraction limits for optical microscopy [7-9]. The use of SI to obtain super-resolution information simply by illuminating a sample with structured light can be explained in the context of the ‘spatial frequency’ of an image. Since an optical image collects only low spatial frequency information, high-frequency information is lost after light passes through an objective lens, which sets the classical Abbe diffraction limit [10]. In a typical SI setup, incident light is spatially modulated using a Ronchi ruling, digital micromirror device (DMD), or gratings before it reaches its target. By applying these bar code-like illumination patterns in different orientations and processing all acquired images using computer algorithms, a high spatial resolution image can be obtained. For combustion diagnostics, if one is doing a fluorescence experiment with two wavelengths, one incident laser light can be modulated with a Ronchi ruling and the other can be modulated with a Ronchi ruling at a different angle or spatial frequency. When the image from such an experiment is collected, it will contain information corresponding to each light source, which can be separated using Fourier domain analysis even though the light was collected during one exposure. This type of setup has been used for various flame and spray measurements [11, 12].
In this work, high-speed imaging of flame chemiluminescence from a single snapshot is demonstrated using the Time-Multiplexed Structured Detection (TMSD) imaging method. TMSD is similar in concept to structured illumination (SI), but operates only on the detection side of the setup, without modification of the illumination patterns. Modulation patterns are applied just before imaging, so that images can be separated using an analysis similar to that used for structured illumination. Furthermore, by using a DMD to apply the spatial modulations, sub-exposure images corresponding to the rate of DMD operation can be extracted, revealing high-speed information [13]. This allows any camera to effectively image at the DMD rate of pattern generation.
Additionally, because TMSD is a detection-side method, it can be used with nonlinear optical spectroscopic techniques. It is generally difficult to apply SI to nonlinear spectroscopic techniques such as Coherent Anti-Stokes Raman Scattering (CARS) [3], Resonance-Enhanced Multiphoton Ionization (REMPI) [14], and Laser-Induced Breakdown Spectroscopy (LIBS) [15, 16], because the variation of the illumination intensity yields nonlinear optical responses that may not be easy to interpret.
2. Theory of Time-Multiplexed Structured Detection
Since structured illumination (SI) and structured detection (SD) are similar, the mathematical formalisms are quite similar as well. In both techniques, there are two key steps: modulation and demodulation, as in radio and microwave detection [17]. First, the modulation is achieved by either SI [7] or SD. The image I(x,y) detected by a camera is normally the convolution of the object distribution S(x,y) and the point spread function of the optical system H(x,y). For SD, the time-multiplexed image is given by:
I(x,y) = [S(x,y) × M(x,y)] * H(x,y)   (1)
where M(x,y) is the modulation pattern, “×” denotes the multiplication operator, and “*” denotes the convolution operator. SI modulates the illumination patterns on the target, while SD modulates the image with patterns. Mathematically, SI and SD are equivalent within the bracket, i.e., [S(x,y)×M(x,y)]. For this work, a periodic square wave is used to model the DMD modulation patterns. For simplicity, consider the case of only an x-dependence for the modulation pattern (i.e., a pattern of vertical lines). As shown in
where 2T1 is the width of the total transmission and T is the period of the modulation pattern. For a rectangular function, 4T1=T. By using the convolution theorem, the observed image can be represented in the spatial frequency domain:
Ĩ(kx,ky) = [S̃(kx,ky) * M̃(kx,ky)] × H̃(kx,ky)   (3)
where kx and ky denote spatial frequency in x and y directions. The Fourier transformation of the modulation function is [18]:
Thus, a periodic square-wave modulation results in a sinc-function envelope sampled at frequencies nk0 = 2πn/T with n = 1, 2, . . . , as shown in
Therefore, combining Eq. 3 and Eq. 4 gives the Fourier transform of an image modulated by a pattern described by Eq. 2 as:
Assuming perfect imaging, H̃(kx,ky) can be taken as unity and Eq. 5 reduces to:
Therefore, the Fourier domain of the modulated image comprises a primary component at kx = 0 and higher-order components at harmonics of k0. A key point is that a copy of the image information is shifted to the location of each of the harmonics in the Fourier domain. Note, however, that the harmonics decrease in magnitude as the order increases, due to the decay of the sinc function. Since the first harmonics are of comparable magnitude to the zeroth harmonic, these will be the focus of the analysis. Furthermore, the background in the Fourier domain suppresses the higher harmonics anyway.
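The harmonic structure described above can be checked numerically. The sketch below (illustrative sizes and a random test object, not data from this work) modulates an image with a 50% duty-cycle square wave and confirms that strong copies of the image spectrum appear at the odd harmonics of the pattern frequency while the even harmonics vanish:

```python
import numpy as np

# Illustrative check of the square-wave harmonic structure (assumed values).
N, T = 256, 16                            # image size and pattern period (pixels)
x = np.arange(N)
mask = ((x % T) < T // 2).astype(float)   # 50% duty-cycle square wave along x
M = np.tile(mask, (N, 1))                 # vertical-stripe modulation pattern

rng = np.random.default_rng(0)
S = rng.random((N, N))                    # stand-in for the object S(x,y)
I = S * M                                 # modulated image, taking H(x,y) as unity

spec = np.abs(np.fft.fft2(I))
row = spec.sum(axis=0)                    # collapse the spectrum onto the kx axis
k0 = N // T                               # first harmonic, in FFT-bin units

# The first (odd) harmonic carries a strong copy of the image spectrum,
# while the second (even) harmonic of a 50% square wave is absent.
```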
Second, demodulation and reconstruction can be conducted by selectively localizing high-frequency information to bypass the diffraction limit, or low-frequency information to time-multiplex a single exposure [13, 20, 21]. Here the spatially modulated images are demodulated, or recovered, by homodyne mixing with the modulation patterns and bandpass filtering around the first harmonics of the resultant homodyne-mixed images. However, the image will still have the modulation pattern applied to it and will be lower resolution due to the filtering process. In order to recover an image without modulation from one of the offsets, it is necessary to first shift it back to the center of the Fourier domain. This can be done using the methodology for structured illumination [20]. Mathematically, it is equivalent to homodyne mixing with two reference patterns with a 90° phase shift in the spatial domain [14].
Consider a pattern with only x-direction modulation (i.e., vertical stripes). Reference sine and cosine functions are first multiplied by the complete modulated image:
Rs(x,y) = I(x,y) sin(k0x + ϕx)   (7)
Rc(x,y) = I(x,y) cos(k0x + ϕx)   (8)
The modulated light, S(x,y)×M(x,y), can be expressed as a sum of various frequency components:
If H(x,y)=1 (i.e. ideal imaging), then Eq. 7 and Eq. 8 become:
Carrying out the multiplication and applying appropriate trigonometric relations yields:
A low-pass filter can be used on these two equations so that only the low-frequency terms from Eq. (12) and Eq. (13) remain. Then the recovered, unmodulated image is given by:
While an image can be recovered in both SI and SD experiments, there are effects on the image quality. First, because filtering in the Fourier domain is equivalent to convolution of the image with the inverse-Fourier transform of the filtering function, there is naturally a loss in spatial resolution [22]. This is also apparent when one considers that filtering essentially determines a cutoff spatial frequency used in the recovery, and thus determines a resolution limit in the resulting image [13]. A second effect is the approximation of the square wave modulation pattern with a sinusoidal pattern during the recovery. This effect causes extra but weak artifacts to appear in the Fourier domain, as seen in
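The recovery recipe of Eqs. (7)-(8) followed by low-pass filtering can be sketched as follows. This hypothetical example assumes a sinusoidal modulation pattern with zero phase and ideal imaging (H = 1), whereas the work above uses square-wave DMD patterns; the Gaussian test object, pattern period, and filter cutoff are all illustrative choices:

```python
import numpy as np

# Minimal homodyne-demodulation sketch (sinusoidal mask assumed, H = 1).
N, T = 256, 16
k0 = 2 * np.pi / T                              # pattern frequency (rad/pixel)

y, x = np.mgrid[0:N, 0:N].astype(float)
S = np.exp(-((x - N / 2) ** 2 + (y - N / 2) ** 2) / (2 * 40.0 ** 2))
M = 0.5 * (1.0 + np.cos(k0 * x))                # sinusoidal stand-in for the mask
I = S * M                                       # modulated image

def lowpass(img, cutoff):
    # Ideal circular low-pass filter applied in the spatial-frequency domain.
    F = np.fft.fft2(img)
    fx = np.fft.fftfreq(img.shape[1])
    fy = np.fft.fftfreq(img.shape[0])
    FX, FY = np.meshgrid(fx, fy)
    F[np.hypot(FX, FY) > cutoff] = 0.0
    return np.real(np.fft.ifft2(F))

# Homodyne mixing with sine and cosine references, then low-pass filtering;
# combining the two quadratures removes the (unknown) pattern phase.
Rs = lowpass(I * np.sin(k0 * x), cutoff=0.03)
Rc = lowpass(I * np.cos(k0 * x), cutoff=0.03)
recovered = 4.0 * np.hypot(Rs, Rc)              # ~S(x,y), up to filtering loss
```

Because the cutoff (0.03 cycles/pixel) sits well below the pattern frequency (1/16 cycles/pixel), the modulated cross-terms are rejected and only the smooth object survives, illustrating the resolution/cutoff trade-off discussed above.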
The DMD pattern generation relies on images being preloaded onto the device, which holds only two 24-bit images. After creating twelve binary images of the desired patterns, a single 24-bit image was constructed using the DMD software, such that each binary image was stored in a bit of the 24-bit image. Individual patterns could then be selected by choosing the corresponding bit within the software. For this work, patterns were set for an exposure time of 1 ms to demonstrate kHz imaging.
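The bit-plane packing described above can be sketched with simple bitwise operations; the random binary patterns and image size here are placeholders, and real DMD control software handles this packing internally:

```python
import numpy as np

# Sketch: pack twelve binary patterns into the bit planes of one 24-bit image.
H, W = 64, 64
rng = np.random.default_rng(1)
patterns = rng.integers(0, 2, size=(12, H, W), dtype=np.uint32)  # binary masks

# Pack: pattern i occupies bit i of the composite image.
composite = np.zeros((H, W), dtype=np.uint32)
for i, p in enumerate(patterns):
    composite |= p << i

# Unpack: select an individual pattern by extracting its bit plane.
def bit_plane(img, i):
    return (img >> i) & 1

# Round trip: every stored pattern is recovered exactly.
assert all(np.array_equal(bit_plane(composite, i), patterns[i]) for i in range(12))
```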
4. Results and Discussion
First, TMSD is demonstrated by imaging a stationary USAF target to illustrate the impact of low-pass filtering on the final image reconstruction. As previously mentioned, the individual images can be recovered from the time-multiplexed image by demultiplexing, which involves shifting the first-order harmonic corresponding to a particular patterned image to the center and applying a low-pass filter. This filtering has a significant impact on the quality of the recovered image.
Thus, to apply SD for high-speed measurements, it is best to determine the desired spatial resolution and then choose the number of modulation patterns. Once the number of patterns is known, the hexagonal patterning can be applied to maximize frequency domain usage. Mathematically, the spatial resolution of time-multiplexing images is reduced and controlled by the low-pass filter, i.e., the size of the various circles in
Hexagon-oriented patterns are chosen to maximize the frequency modulation and minimize the interference from higher harmonics. There are three key steps to properly design a hexagon-oriented pattern. First, in a typical setup, the camera takes multiple images with different modulation frequencies, ranging from 1 pixel per cycle up to 20 pixels per cycle. This step determines the maximum spatial resolution of the system, i.e., the red circle in
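As a rough illustration of the hexagonal layout (assumed geometry, not the exact design procedure above), three pattern orientations spaced 60° apart place six first-harmonic copies evenly on a circle of radius k0 around the zeroth order:

```python
import numpy as np

# Illustrative first-harmonic layout for three pattern orientations 60 deg
# apart; each pattern contributes a +/- pair, giving six hexagonally arranged
# copies around the zeroth order (a honeycomb packing of the filter circles).
k0 = 0.1                                   # pattern frequency, cycles per pixel
angles = np.deg2rad([0.0, 60.0, 120.0])    # pattern orientations
centers = np.array([s * k0 * np.array([np.cos(a), np.sin(a)])
                    for a in angles for s in (+1.0, -1.0)])

radii = np.hypot(centers[:, 0], centers[:, 1])   # all lie on the k0 circle
side = np.linalg.norm(centers[0] - centers[2])   # adjacent centers, 60 deg apart
```

For a hexagon, the side length equals the circumradius, so neighboring harmonic centers sit exactly k0 apart, which is what lets equal-sized filter circles tile the frequency domain densely.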
To further understand the choice of filter size on the recovered image and to compare the recovered image to an actual image, a still image of a card with printed letters was imaged, as shown in
For both
In summary, time-multiplexed structured detection (TMSD) is demonstrated for high-speed flame chemiluminescence measurements from a single snapshot. TMSD shears the time lapse into spatial frequency shifts, which allows multiple high-speed images to be frequency-upshifted into distinct spatial frequency regions relative to the original image. A cumulative exposure captured in a single snapshot image thus contains distinct time evolution. Each distinct image is demultiplexed by homodyne mixing with the modulation frequency. TMSD is an optical-frequency-domain analog to carrier frequency modulation in radio and microwave detection.
Based on the current study, the TMSD has the following representative key properties:
- (1) The fundamental principle is to shear the time lapse into the spatial frequency domain. Thus, a cumulative exposure can be captured in a single image, i.e., the multiple exposures are shifted to various locations in the spatial frequency domain, shown in FIG. 1. Demultiplexing in the post-processing is achieved by homodyne mixing and low-pass filtering, shown in FIG. 4.
- (2) The maximum multiplexing, i.e., the maximum number of frames at maximum spatial resolution, is obtained by fully occupying the frequency domain with hexagonal frequency shifts. This corresponds to modulating the illumination pattern with varying angles and cycle periods to fill the whole frequency domain up to the diffraction-limit circle in FIG. 2.
- (3) The spatial resolution of the time-multiplexed images is reduced and controlled by the low-pass filter, i.e., the size of the red circle in FIG. 4(b). It is inherently determined by the uncertainty principle of the Fourier transformation, Δfx·Δx ≥ 1. It is critical to design proper multiplexing patterns to obtain proper spatial resolutions in fluid measurements, which will be filtered and processed to obtain turbulence statistics such as strain rate and Reynolds shear stress for model development and validation.
- (4) Since most of the information in the aerodynamic images is located in the low spatial frequency regime, as shown in FIG. 4(b), the low-pass filter only slightly reduces the spatial resolution. The features in the time-multiplexed images can be controlled by choosing the modulation patterns and the cutoff frequency of the low-pass filtering.
Furthermore, using a DMD to generate the modulation patterns at a higher rate (commercially available up to 40 kHz), one can achieve high-speed imaging with essentially any camera in theory. Our application of the SD technique to imaging flame chemiluminescence shows that this technique has potential for applications of high-speed combustion imaging and spectroscopy if used in combination with an intensified camera.
REFERENCES
- 1. C. D. Carter, S. Hammack, and T. Lee, “High-speed planar laser-induced fluorescence of the CH radical using the C²Σ⁺−X²Π (0,0) band,” Applied Physics B 116, 515-519 (2014).
- 2. P. P. Wu and R. B. Miles, “High-energy pulse-burst laser system for megahertz-rate flow visualization,” Optics Letters 25, 1639-1641 (2000).
- 3. J. D. Miller, M. N. Slipchenko, J. G. Mance, S. Roy, and J. R. Gord, “1-kHz two-dimensional coherent anti-Stokes Raman scattering (2D-CARS) for gas-phase thermometry,” Opt. Express 24, 24971-24979 (2016).
- 4. A. Bohlin and C. J. Kliewer, “Communication: Two-dimensional gas-phase coherent anti-Stokes Raman spectroscopy (2D-CARS): Simultaneous planar imaging and multiplex spectroscopy in a single laser shot,” The Journal of Chemical Physics 138, 221101 (2013).
- 5. N. Jiang, P. S. Hsu, J. G. Mance, Y. Wu, M. Gragston, Z. Zhang, J. D. Miller, J. R. Gord, and S. Roy, “High-speed 2D Raman imaging at elevated pressures,” Optics Letters 42, 3678-3681 (2017).
- 6. Z. Wang, P. Stamatoglou, Z. Li, M. Alden, and M. Richter, “Ultra-high-speed PLIF imaging for simultaneous visualization of multiple species in turbulent flames,” Opt. Express 25, 30214-30228 (2017).
- 7. M. G. L. Gustafsson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” Journal of Microscopy 198, 82-87 (2000).
- 8. L. Schermelleh, R. Heintzmann, and H. Leonhardt, “A guide to super-resolution fluorescence microscopy,” The Journal of Cell Biology 190, 165-175 (2010).
- 9. R. Heintzmann and C. G. Cremer, “Laterally modulated excitation microscopy: improvement of resolution by using a diffraction grating,” in BiOS Europe '98, (SPIE, 1999), 12.
- 10. J. W. Goodman, Introduction to Fourier optics (Roberts and Company Publishers, 2005).
- 11. E. Kristensson, Z. Li, E. Berrocal, M. Richter, and M. Alden, “Instantaneous 3D imaging of flame species using coded laser illumination,” Proceedings of the Combustion Institute 36, 4585-4591 (2017).
- 12. E. Kristensson, E. Berrocal, and M. Alden, “Two-pulse structured illumination imaging,” Opt. Lett. 39, 2584-2587 (2014).
- 13. S. R. Khan, M. Feldman, and B. K. Gunturk, “Extracting sub-exposure images from a single capture through Fourier-based optical modulation,” Signal Processing: Image Communication 60, 107-115 (2018).
- 14. Z. Zhang, M. N. Shneider, and R. B. Miles, “Coherent microwave Rayleigh scattering from resonance-enhanced multiphoton ionization in argon,” Physical Review Letters 98 (2007).
- 15. Y. Wu, J. C. Sawyer, L. Su, and Z. Zhang, “Quantitative measurement of electron number in nanosecond and picosecond laser-induced air breakdown,” Journal of Applied Physics 119, 173303 (2016).
- 16. P. S. Hsu, M. Gragston, Y. Wu, Z. Zhang, A. K. Patnaik, J. Kiefer, S. Roy, and J. R. Gord, “Sensitivity, stability, and precision of quantitative Ns-LIBS-based fuel-air-ratio measurements for methane-air flames at 1-11 bar,” Applied Optics 55, 8042-8048 (2016).
- 17. Z. Zhang, M. N. Shneider, and R. B. Miles, “Microwave diagnostics of laser-induced avalanche ionization in air,” Journal of Applied Physics 100, 6 (2006).
- 18. A. V. Oppenheim, A. S. Willsky, and S. H. Nawab, Signals & Systems, Prentice-Hall signal processing series (Prentice-Hall International, 1997).
- 19. T. C. Hales, “The Honeycomb Conjecture,” Discrete & Computational Geometry 25, 1-22 (2001).
- 20. K. Dorozynska and E. Kristensson, “Implementation of a multiplexed structured illumination method to achieve snapshot multispectral imaging,” Optics Express 25, 17211-17226 (2017).
- 21. G. Bub, M. Tecza, M. Helmes, P. Lee, and P. Kohl, “Temporal pixel multiplexing for simultaneous high-speed, high-resolution imaging,” Nature methods 7, 209-211 (2010).
- 22. E. Hecht, Optics (Addison-Wesley, 2002).
- 23. K. Lee, K. Kim, G. Kim, S. Shin, and Y. Park, “Time-multiplexed structured illumination using a DMD for optical diffraction tomography,” Opt. Lett. 42, 999-1002 (2017).
- 24. J. O'Connor, V. Acharya, and T. Lieuwen, “Transverse combustion instabilities: Acoustic, fluid mechanic, and flame processes,” Progress in Energy and Combustion Science 49, 1-39 (2015).
- 25. K. Mohri, S. Görs, J. Schöler, A. Rittler, T. Dreier, C. Schulz, and A. Kempf, “Instantaneous 3D imaging of highly turbulent flames using computed tomography of chemiluminescence,” Applied Optics 56, 7385-7395 (2017).
Section II—Single-Shot Nanosecond-Resolution Multiframe Passive Imaging by Multiplexed Structured Image Capture
1. Introduction
Passive imaging at nanosecond or shorter exposure times has many scientific and engineering applications, including laser-material interactions, femtosecond filaments, high harmonic generation, ultrafast chemistry, and air lasing [1]. Current complementary metal-oxide semiconductor (CMOS) and charge-coupled device (CCD) imaging devices cannot reach this speed due to limited on-chip storage capacity and electronic readout speeds, although in theory silicon can reach sub-nanosecond speeds [2, 3]. Various optical gating and pump-probe approaches, such as ultrashort pulse interference [4] and the Kerr electro-optic effect for ballistic imaging [5, 6], can capture only a single image. Temporal scanning, i.e., repetitive measurements with a varied delay between the pump and probe or between the laser pulse and the camera gate, can be used [7], but it is limited to repetitive events and, therefore, provides only statistical measurements. Recent demonstrations of passive imaging methods utilize compressed sensing to recover ultrafast images from a streak camera or temporal pixel multiplexing [8], which is different from the current approach [9, 10]. Others have utilized spatial modulation of the light source for boosting imaging speeds and storing multiple images in a single frame [11]. Modulation of light just prior to collection for boosting imaging speeds has also been demonstrated [12, 13]. A recent review [14] explores in detail a variety of novel ultrafast single-shot imaging techniques.
In the present work, a detailed study of single-shot, passive imaging with temporal resolution at the nanosecond level is presented using a unique high-speed computational imaging method, namely MUltiplexed Structured Image Capture (MUSIC). As a passive imaging method, MUSIC encodes the temporal evolution of the scene in a single snapshot into spatial frequency shifts, thus producing multiplexed images. These multiplexed images contain image information from several points in time, which can be separated computationally. Furthermore, MUSIC can essentially bypass the speed limits of the electronic and/or mechanical shutters in high-speed cameras, since the encoding is done optically. In the current form of MUSIC, modulation patterns are applied just before imaging. Sub-exposure 2D images corresponding to optical flight times of nanoseconds are demultiplexed and extracted, revealing the nanosecond evolution of the 2D scene in the post-processing. In theory, the MUSIC method might extend single-shot passive imaging to femtosecond timescales, which would provide an alternative approach for ultrafast passive imaging and unprecedented insights into ultrafast physics and chemistry. Theoretical limitations on the maximum number of multiplexed patterns are discussed as combinations of diffraction limits and spatial resolutions of the recovered images.
Laser-induced ionization and plasma is used as an imaging target. It has been widely used for various applications ranging from basic scientific laboratories to space exploration, including femtosecond filaments [15], high harmonic generation [16], air lasing [17], Laser-Induced Breakdown Spectroscopy (LIBS) [18-20], and laser-induced ignition [21]. Laser plasma generation begins with the creation of seed electrons from multiphoton ionization and/or Keldysh tunneling ionization processes near the laser focal position by the front end of the high-intensity pulse, with the latter mechanism only becoming applicable at very high intensities. For the ns-pulse used in this work, non-resonant multiphoton ionization is the seed electron generation mode. The seed electrons absorb a large percentage of the remaining laser pulse and are accelerated by the laser's electromagnetic field via the inverse-bremsstrahlung effect. With a sufficient optical field applied, the electrons are accelerated to energies sufficient for electron impact ionization upon collisions with the neutral gas atoms and molecules. The newly liberated electrons are then accelerated by the field, leading to an electron avalanche ionization process during the nanosecond laser pulse [22, 23]. The development of the laser plasma is a very dynamic process, and plasma parameters can change drastically within a nanosecond, considering that electron collisions are on the order of tens of picoseconds. Therefore, the LIBS plasma is used as a target for high-speed imaging.
2. Experiment Setup
2.1 Three-Channel Multiplexed Structured Image Capture (MUSIC) System
A variable zoom camera lens was used to relay the plasma emissions. The first beam splitter (30% transmission, 70% reflection) was used to separate the image into path one and the remaining light, which was further split by the second beam splitter (50% transmission, 50% reflection) into paths two and three. The plasma image was projected onto three Ronchi rulings (10 grooves per millimeter), one along each optical path. The modulated images were merged by a beam combiner cube before being collected by a gated intensified CCD camera (Princeton Instruments, PI-MAX 4). The optical paths along the three channels are 10 cm, 40 cm, and 70 cm, respectively, which corresponds to 1 ns of delay for path two and 2 ns of delay for path three relative to path one. The number of channels can be expanded as needed for additional simultaneous images. The MUSIC system, as an ultrafast imaging system, can be used for single-shot nanosecond-resolution or faster imaging and measurements of laser-induced plasma generation and for visualization and tracking of fast objects.
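The path-length-to-delay conversion implied above is a direct time-of-flight calculation, sketched below:

```python
# Time-of-flight check of the delays quoted above: with path lengths of
# 10 cm, 40 cm, and 70 cm, the extra 30 cm and 60 cm of travel correspond to
# roughly 1 ns and 2 ns of delay relative to path one (c = vacuum light speed).
C = 299_792_458.0                 # speed of light, m/s
paths_m = [0.10, 0.40, 0.70]      # optical path lengths of the three channels
delays_ns = [(L - paths_m[0]) / C * 1e9 for L in paths_m]
```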
2.2 Coherent Microwave Scattering System
A 10-dBm tunable microwave source (HP 8350B sweep oscillator, set at ~10 GHz) was split into two channels [22, 24]. One of the channels was used to illuminate the plasma by employing a microwave horn (WR75, 15-dB gain). The backscattering is monitored through a homodyne transceiver detection system. The scattering from the plasma is collected by the same microwave horn. The signal passes through a microwave circulator and is amplified 30 dB by a preamplifier at ~10 GHz. After the frequency is down-converted by mixing with the second channel, two additional amplifiers with bandwidth in the range 2.5 kHz-500 MHz amplify the signal by 60 dB. Considering the geometry of the microwave dipole radiation, the polarization of the microwave is chosen to be along the propagation direction of the laser beam, maximizing the scattering signal. The coherent microwave scattering system can be used to monitor the generation and evolution of electrons in the laser-induced plasma region with a temporal resolution of ~3 ns.
2.3 Imaging Target
532 nm laser radiation from an Nd:YAG laser (Continuum Surelite) operating with a nominal 8 ns pulse width at a 10 Hz repetition rate was focused with a 50 mm plano-convex lens into a 20 μm spot, yielding a peak intensity of ~10¹² W/cm². Coherent microwave scattering and MUSIC were used to simultaneously characterize the laser-induced ionization in air, as shown by the experiment setup in
A variable zoom camera lens was used to relay the plasma emissions into the three-channel MUSIC apparatus, as shown in
It should be noted that the current configuration uses beamsplitters and optical delays to obtain the temporal resolution among the multiplexed images. The advantage is the simplicity of the experimental setup: optical delays can be on the order of picoseconds or femtoseconds for higher temporal resolution. However, adding more channels reduces the optical efficiency.
3. Results

3.1 Imaging Model

The multiplexed image intensity, I_CAM, collected by the camera in multiple channels is
I_CAM(r, t) = Σ_{n=1}^{3} W(t) I_n(r, t − Δt_Dn) M_n(r) ε_n (1)
where Δt_Dn = t_Dn − t_D1 is the time delay of light that has traveled path n relative to path one, and I_n is the image intensity traveling along path n.
Here, I_n(r, t) is the image intensity along path n, M_n(r) is the spatial modulation mask for path n, and ε_n is the optical efficiency of path n. Imaging with a gated camera can be modeled as windowing in the time domain, integrating (i.e., summing) the image intensity over the window, and sampling in the spatial domain, with the spatial sampling determined by the pixel layout and size. The windowing function is a square pulse centered at time t_0 with width T_G, W(t) = 1 for |t − t_0| ≤ T_G/2 and 0 otherwise,
and t_0 = T_GD + T_G/2, with T_GD denoting the gate delay time.
Each term in the sum in Eq. (1) represents image information that has traveled a different path. Since the camera gate is finite in time, information delayed by traveling different paths is sliced and shortened by the gate, effectively giving the delayed information shorter gate times, i.e., time-of-flight acquisition. The spatial modulation from applying a unique mask M_n(r) to each path, which is a multiplication of the images with the masks in the spatial domain, is equivalent to a convolution of the images and masks in the spatial frequency domain. The Ronchi rulings used in this work to apply the modulation patterns are modeled as periodic square waves with spatial frequency k0.
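The gate-slicing effect described above can be sketched numerically; the gate timing and emission onset below are illustrative assumptions, with only the 1 ns and 2 ns path delays taken from Section 2.1:

```python
import numpy as np

# Sketch of the gate-slicing ("time of flight") effect: a signal
# delayed on a longer path overlaps the camera gate for less time,
# so delayed channels effectively see a shorter gate.
t = np.linspace(0.0, 20e-9, 2001)          # 20 ns record, 10 ps steps
T_GD, T_G = 5e-9, 6e-9                     # gate delay and width (hypothetical)
W = ((t >= T_GD) & (t < T_GD + T_G)).astype(float)   # square gate W(t)

results = []
for dt in (0.0, 1e-9, 2e-9):               # path delays from Section 2.1
    signal = (t - dt >= 5e-9).astype(float)   # emission turning on at 5 ns
    overlap = np.sum(W * signal) * (t[1] - t[0])
    results.append(overlap)
    print(f"delay {dt * 1e9:.0f} ns -> effective gate {overlap * 1e9:.1f} ns")
```

The integrated overlap shrinks by exactly the path delay, so each channel samples a different slice of the event within the single camera gate.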
In the spatial frequency domain, the multiplexed image can be written as
Here “⊗” denotes the convolution operator, and the right-hand side comes from the Fourier transform of the Ronchi ruling pattern, with kxn′=kx cos θn+ky sin θn. Rotation of the Ronchi ruling by angle θn rotates the Fourier transform of the mask, a comb of sinc-weighted harmonics, by the same angle.
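The equivalence between multiplication by the mask in space and shifting in the frequency domain can be illustrated with a short numerical sketch; a single cosine stands in for the fundamental harmonic of the square-wave Ronchi ruling, and the test image and modulation frequency are arbitrary:

```python
import numpy as np

# Spatial modulation as frequency shifting: multiplying an image by a
# stripe pattern at spatial frequency k0 places half-amplitude copies
# of the image spectrum at +/- k0.
N = 256
y, x = np.mgrid[0:N, 0:N]
image = np.exp(-((x - N / 2) ** 2 + (y - N / 2) ** 2) / (2 * 15 ** 2))

k0 = 32                                            # cycles per frame
mask = 0.5 * (1 + np.cos(2 * np.pi * k0 * x / N))  # vertical stripes

F_mod = np.fft.fftshift(np.fft.fft2(image * mask))
c = N // 2
ratio = np.abs(F_mod[c, c + k0]) / np.abs(F_mod[c, c])
print(ratio)   # shifted copy carries ~half the central amplitude
```

The copies at ±k0 carry the full image spectrum, which is what later allows each channel to be separated by filtering around its own harmonic.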
Images through each path contribute a unique shifted spatial frequency throughout the gate time TG due to kxn′ being different, which is shown in
The fundamental principle of MUSIC in this application is to encode the time lapse into the spatial frequency domain, thereby allowing a single, cumulative exposure to be captured that contains multiple individual images (the maximum number of images that can be stored is discussed later in this work). The encoded images are then recovered through selective filtering in the frequency domain. The spatial resolution of the multiplexed images is reduced and depends on the bandwidth of the low-pass filter used during recovery. It is inherently determined by the uncertainty principle of the Fourier transform, i.e., Δfr·Δr≥1. Furthermore, it should be noted that the spatial frequencies of most images lie within low-frequency ranges, as discussed in compressed sensing techniques [25]. A loss of <5% of the high spatial frequency components is generally used as the criterion for recovery.
Computational image recovery on the phantoms of multiplexed plasma emissions was conducted. A two-step recovery was used: first, the corresponding harmonic was shifted back to the center of the spatial frequency domain to remove the modulation pattern from the multiplexed image; then, a low-pass filter was applied around the center region to recover the individual image, similar to the methodology used for the structured illumination technique [12]. The size of the filter determines the resolution, since any spatial information outside the filter is lost; however, the filter must also be small enough to prevent interference from other harmonics in the Fourier domain.
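A minimal sketch of this two-step recovery, applied to a synthetic two-channel multiplexed exposure (the scenes, stripe frequency, and filter width are illustrative assumptions, and a pure cosine stands in for the ruling pattern):

```python
import numpy as np

def recover(multiplexed, k0x, k0y, sigma):
    """Two-step recovery sketch: (1) homodyne-shift one harmonic back
    to the center of the spatial frequency domain, (2) Gaussian
    low-pass filter around the center to isolate that image."""
    N = multiplexed.shape[0]
    y, x = np.mgrid[0:N, 0:N]
    shifted = multiplexed * np.exp(-2j * np.pi * (k0x * x + k0y * y) / N)
    F = np.fft.fftshift(np.fft.fft2(shifted))
    c = N // 2
    lowpass = np.exp(-((x - c) ** 2 + (y - c) ** 2) / (2 * sigma ** 2))
    return np.abs(np.fft.ifft2(np.fft.ifftshift(F * lowpass)))

# Forward model: two scenes, each multiplied by stripes at a different
# orientation, summed into one "exposure".
N = 256
y, x = np.mgrid[0:N, 0:N]
scene1 = np.exp(-((x - 96) ** 2 + (y - 128) ** 2) / (2 * 12 ** 2))
scene2 = np.exp(-((x - 160) ** 2 + (y - 128) ** 2) / (2 * 12 ** 2))
k0 = 48
multiplexed = (scene1 * 0.5 * (1 + np.cos(2 * np.pi * k0 * x / N))
               + scene2 * 0.5 * (1 + np.cos(2 * np.pi * k0 * y / N)))

r1 = recover(multiplexed, k0, 0, sigma=20)   # vertical-stripe channel
r2 = recover(multiplexed, 0, k0, sigma=20)   # horizontal-stripe channel
```

Each recovered frame peaks at its own scene's location even though both scenes shared one exposure; shrinking `sigma` suppresses cross-talk from the other harmonics at the cost of spatial resolution, exactly the trade-off described above.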
Experimental demonstration of single-shot nanosecond-resolution imaging of laser-induced plasma was conducted, as shown in
To gain insight into the physics of laser-induced ionization and the time scales associated with its evolution, the MUSIC measurements were compared with coherent microwave scattering and with numerical simulations solving the Boltzmann kinetic equation for the electron energy distribution function (EEDF). Emissions from laser-induced plasmas are initially a broadband continuum arising from inverse bremsstrahlung and free-free transitions. The emissions become distinct atomic emission lines after the plasma cools down at 20-30 ns [22]. Coherent microwave scattering tracks the total electron number in the plasma, which is proportional to the total plasma emission in the avalanche phase of plasma generation. It should be noted that our emphasis here is a qualitative comparison of microwave scattering, MUSIC, and plasma modeling to confirm the temporal evolution of the plasma.
The plasma kinetic model is based on a non-stationary kinetic equation under the Lorentz approximation and includes the effects of collisional electron heating by the laser field, generation of new electrons in the process of optical field ionization (OFI) from the ground and electronically excited molecular states, elastic scattering of electrons on N2 and O2 molecules in air, inelastic electron-impact excitation of the A^3Σu, B^3Πg, a′^1Σu, a^1Πg, and C^3Πu electronic states of molecular nitrogen, vibrational excitation of N2 and O2 molecules, and electron-impact ionization from the ground and excited electronic states. The calculated EEDF provides reaction rates for the coupled set of balance equations for the densities of electrons and of neutral, ionic, and electronically excited molecular and atomic species. The OFI source of electrons is described using the Perelomov-Popov-Terent'ev (PPT) strong-field ionization model [26] in the form suggested in [27], with the photoelectron energy distribution function derived in [28]. The calculations start 3 ns before the maximum of the laser pulse, when the OFI-generated electron density reaches 10^10 cm^-3. The plasma density predicted by the simulations reaches ≈6×10^17 cm^-3, in very good agreement with the value ≈7.5×10^17 cm^-3 retrieved from the coherent microwave scattering measurements [23].
There is a theoretical limit for multiplexing a single exposure, which is determined by the desired spatial resolution and the diffraction limits of the imaging system. The maximum multiplexing is determined by first considering the maximum resolution desired for each recovered image, which has a corresponding spatial frequency, k_filter. This k_filter is used as the filter radius in the Fourier domain during image recovery. Furthermore, note that the only space available in the Fourier domain to shift image information into is the annulus defined between k_I, the spatial frequency cutoff of the common fundamental harmonic, and k_diff, the spatial frequency corresponding to the diffraction limit. Then an upper bound on the number of patterns that can be used in the multiplexing, N_u, is the ratio of the area of the annulus, A_Annulus, in the Fourier domain to the area of the filter, A_Filter.
A_Filter = π k_filter^2 (5)
In the spatial frequency domain (i.e., Fourier domain), the usable region for multiplexing can be expressed as
A_Annulus = π[k_diff^2 − k_I^2] (6)
Therefore, an upper bound on the number of patterns that can be used is the ratio of the area of the annulus to the area of the filter (which represents how many non-overlapping circles can be fit in the annulus):
The factor of two is due to the symmetry of the Fourier domain; specifically, the copying of information to both positive and negative frequency locations for a given modulation. Furthermore, this is an upper bound, since it includes the gaps between the filling circles in the calculation. Hence,

N_u = A_Annulus/(2 A_Filter) = (k_diff^2 − k_I^2)/(2 k_filter^2)
The resolution of the recovered image is determined by the size of the filter, due to the exclusion of high-frequency image information outside of the filter region. Since k_filter^2 ∝ D^−2, where D is the smallest resolvable distance, the limit on the resolution of the recovered image is

D = α √[N_u/(k_diff^2 − k_I^2)]
where α is a constant. Hence, D will increase if the number of modulation patterns is increased, representing lower image quality for the recovered images. Conversely, in order to decrease D, one must use fewer patterns and larger filter radii.
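As a numerical illustration of the bound N_u = A_Annulus/(2 A_Filter) implied by Eqs. (5) and (6) (the spatial-frequency values below are hypothetical, chosen only to exercise the formulas):

```python
import math

def max_patterns(k_diff, k_I, k_filter):
    """Upper bound N_u on multiplexed patterns: annulus area over twice
    the filter area (the factor of two accounts for the +/- frequency
    copies of each modulation)."""
    a_annulus = math.pi * (k_diff ** 2 - k_I ** 2)
    a_filter = math.pi * k_filter ** 2
    return a_annulus / (2 * a_filter)

# Hypothetical spatial frequencies (arbitrary units): diffraction
# limit, fundamental-harmonic cutoff, and candidate filter radii.
k_diff, k_I = 100.0, 20.0
for k_filter in (10.0, 20.0, 30.0):
    print(k_filter, max_patterns(k_diff, k_I, k_filter))
```

Doubling the filter radius (better per-frame resolution) cuts the pattern budget by a factor of four, which is the trade-off between frame count and resolution expressed above.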
5. Summary and Conclusions

A single-shot, nanosecond-resolution, multiframe passive imaging method, Multiplexed Structured Image Capture (MUSIC), was demonstrated to characterize avalanche ionization of laser-induced plasma in air. The MUSIC technique uses beamsplitters and optical delay lines to capture the time evolution of a scene. On each beampath the image is uniquely coded by a Ronchi ruling, producing a distinct spatial frequency shift of the image in the spatial Fourier domain. The multiplexed images from the individual beampaths are captured by a time-gated camera with a gate of a few nanoseconds. The final image, containing the time evolution of the scene from each path, is demultiplexed in post-processing to recover nanosecond-resolution images. The technique was used to monitor the temporal evolution of the avalanche ionization process in the laser-induced plasma in air. Comparisons with coherent microwave scattering measurements and plasma modeling show good agreement.
The MUSIC technique as demonstrated here, is a passive imaging technique, which has the following representative characteristics:
- 1. The fundamental principle is to encode the time lapse into the spatial frequency domain using different spatial modulation patterns prior to arriving at the camera. Thus, a cumulative exposure can be captured in a single image, with the multiple exposures shifted to various locations in the spatial frequency domain. Demultiplexing in post-processing is achieved by homodyne mixing with the modulation pattern and low-pass filtering.
- 2. The spatial resolution of the time-multiplexed images is reduced and controlled by the low-pass filter. It is inherently determined by the uncertainty principle of the Fourier transform, Δfr·Δr≥1.
- 3. The maximum multiplexing, i.e., the maximum number of frames at maximum spatial resolution, is obtained by fully occupying the frequency domain. This corresponds to modulating the images with varying angles and cycle periods to fill the whole frequency domain up to the diffraction-limit circle.
Overall, the ability to overlay multiple frames into a single image can be very beneficial in various applications where only a single camera is available (e.g. optical access restrictions). The MUSIC passive imaging technique can be useful for high temporal resolution applications in physics, chemistry and engineering.
REFERENCES
- 1. H. Mikami, L. Gao, and K. Goda, “Ultrafast optical imaging technology: principles and applications of emerging methods,” in Nanophotonics (2016), p. 497.
- 2. T. G. Etoh, V. T. S. Dao, T. Yamada, and E. Charbon, “Toward One Giga Frames per Second—Evolution of in Situ Storage Image Sensors,” Sensors (Basel, Switzerland) 13, 4640-4658 (2013).
- 3. T. G. Etoh, A. Q. Nguyen, Y. Kamakura, K. Shimonomura, T. Y. Le, and N. Mori, “The Theoretical Highest Frame Rate of Silicon Image Sensors,” Sensors (Basel, Switzerland) 17, 483 (2017).
- 4. G. P. Wakeham, and K. A. Nelson, “Dual-echelon single-shot femtosecond spectroscopy,” Opt. Lett. 25, 505-507 (2000).
- 5. M. Linne, M. Paciaroni, T. Hall, and T. Parker, “Ballistic imaging of the near field in a diesel spray,” Experiments in Fluids 40, 836-846 (2006).
- 6. S. P. Duran, J. M. Porter, and T. E. Parker, “Ballistic imaging of diesel sprays using a picosecond laser: characterization and demonstration,” Appl. Opt. 54, 1743-1750 (2015).
- 7. A. Velten, D. Wu, B. Masia, A. Jarabo, C. Barsi, C. Joshi, E. Lawson, M. Bawendi, D. Gutierrez, and R. Raskar, “Imaging the propagation of light through scenes at picosecond resolution,” Commun. ACM 59, 79-86 (2016).
- 8. G. Bub, M. Tecza, M. Helmes, P. Lee, and P. Kohl, “Temporal pixel multiplexing for simultaneous high-speed, high-resolution imaging,” Nature Methods 7, 209 (2010).
- 9. J. Liang, L. Zhu, and L. V. Wang, “Single-shot real-time femtosecond imaging of temporal focusing,” Light: Science & Applications 7, 42 (2018).
- 10. L. Gao, J. Liang, C. Li, and L. V. Wang, “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature 516, 74 (2014).
- 11. A. Ehn, J. Bood, Z. Li, E. Berrocal, M. Aldén, and E. Kristensson, “FRAME: femtosecond videography for atomic and molecular dynamics,” Light: Science & Applications 6, e17045 (2017).
- 12. M. Gragston, C. D. Smith, and Z. Zhang, “High-speed flame chemiluminescence imaging using time-multiplexed structured detection,” Appl. Opt. 57, 2923-2929 (2018).
- 13. S. R. Khan, M. Feldman, and B. K. Gunturk, “Extracting sub-exposure images from a single capture through Fourier-based optical modulation,” Signal Processing: Image Communication 60, 107-115 (2018).
- 14. J. Liang, and L. V. Wang, “Single-shot ultrafast optical imaging,” Optica 5, 1113-1127 (2018).
- 15. A. Braun, G. Korn, X. Liu, D. Du, J. Squier, and G. Mourou, “Self-channeling of high-peak-power femtosecond laser pulses in air,” Optics letters 20, 73-75 (1995).
- 16. B. Dromey, M. Zepf, A. Gopal, K. Lancaster, M. Wei, K. Krushelnick, M. Tatarakis, N. Vakakis, S. Moustaizis, and R. Kodama, “High harmonic generation in the relativistic limit,” Nature physics 2, 456 (2006).
- 17. A. Dogariu, J. B. Michael, M. O. Scully, and R. B. Miles, “High-gain backward lasing in air,” Science 331, 442-445 (2011).
- 18. M. B. Shattan, D. J. Miller, M. T. Cook, A. C. Stowe, J. D. Auxier, C. Parigger, and H. L. Hall, “Detection of uranyl fluoride and sand surface contamination on metal substrates by hand-held laser-induced breakdown spectroscopy,” Appl. Opt. 56, 9868-9875 (2017).
- 19. B. Sallé, J. L. Lacour, P. Mauchien, P. Fichet, S. Maurice, and G. Manhés, “Comparative study of different methodologies for quantitative rock analysis by Laser-Induced Breakdown Spectroscopy in a simulated Martian atmosphere,” Spectrochimica Acta Part B: Atomic Spectroscopy 61, 301-313 (2006).
- 20. P. S. Hsu, M. Gragston, Y. Wu, Z. Zhang, A. K. Patnaik, J. Kiefer, S. Roy, and J. R. Gord, “Sensitivity, stability, and precision of quantitative Ns-LIBS-based fuel-air-ratio measurements for methane-air flames at 1-11 bar,” Appl. Opt. 55, 8042-8048 (2016).
- 21. P. S. Hsu, S. Roy, Z. Zhang, J. Sawyer, M. N. Slipchenko, J. G. Mance, and J. R. Gord, “High-repetition-rate laser ignition of fuel-air mixtures,” Opt. Lett. 41, 1570-1573 (2016).
- 22. Y. Wu, J. C. Sawyer, L. Su, and Z. Zhang, “Quantitative measurement of electron number in nanosecond and picosecond laser-induced air breakdown,” Journal of Applied Physics 119, 173303 (2016).
- 23. Z. Zhang, M. N. Shneider, and R. B. Miles, “Microwave diagnostics of laser-induced avalanche ionization in air,” Journal of Applied Physics 100, 074912 (2006).
- 24. Z. Zhang, M. N. Shneider, and R. B. Miles, “Coherent microwave rayleigh scattering from resonance-enhanced multiphoton ionization in argon,” Physical Review Letters 98,—(2007).
- 25. D. L. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory 52, 1289-1306 (2006).
- 26. A. Perelomov, and V. Popov, “Ionization of atoms in an alternating electric field,” Sov. Phys. JETP 23, 924-934 (1966).
- 27. J. Schwarz, P. Rambo, M. Kimmel, and B. Atherton, “Measurement of nonlinear refractive index and ionization rates in air using a wavefront sensor,” Opt. Express 20, 8791-8803 (2012).
- 28. V. D. Mur, S. V. Popruzhenko, and V. S. Popov, “Energy and momentum spectra of photoelectrons under conditions of ionization by strong laser radiation (The case of elliptic polarization),” Journal of Experimental and Theoretical Physics 92, 777-788 (2001).
Section III—Multiplexed Structured Image Capture for Single Exposure Field of View Increase
In this work, Multiplexed Structured Image Capture (MUSIC) is introduced as an approach to increase the field of view during a single exposure. MUSIC works by applying a unique spatial modulation pattern to the light collected from different parts of the scene. This work demonstrates two unique setups for collecting light from different parts of the scene: a single-lens configuration and a dual-lens configuration. Post-processing of the modulated images allows the two scenes to be easily separated using Fourier analysis of the captured image. We demonstrate MUSIC for still-scene, schlieren, and flame chemiluminescence imaging to increase the field of view. Though we demonstrate only two-scene imaging, more scenes can be added by using extra patterns and extending the optical setup.
1. Introduction

Imaging remains one of the most important and reliable means of measurement in aerospace and combustion science. Schlieren imaging, developed in 1864, is still used today for visualization of fluid flow in even the most modern wind-tunnel experiments [1-3]. PLIF [4], 2D Raman scattering [5, 6], and chemiluminescence [4, 7] imaging techniques are essential for combustion diagnostics, providing information about local species concentration, geometry, and more. Modern tomographic measurements of combustion processes are an example of the high-quality information that can be gained from imaging [8, 9]. Often, however, the field of view is reduced when imaging with high-speed cameras, because the on-camera region of interest must be restricted to account for limitations on electronic read-out times. Furthermore, multiple cameras are sometimes needed for imaging larger objects. Thus, more spatial information from a single measurement would be beneficial for capturing dynamics on a larger scale.
Light modulation techniques have shown great promise in imaging [10]. Techniques such as CUP [11, 12], structured illumination [13-15], FRAME [16], and MUSIC [17] have all been demonstrated for ultrafast imaging or for resolution enhancement. However, the image multiplexing involved in the multiplexed structured image capture (MUSIC) technique can be used not only to store transient information from the same scene, but also to store multiple scenes at a given point in time. Image multiplexing exploits the fact that most of the important information in an image occupies the center of the Fourier domain (i.e., is low-frequency information) [18]. Hence, the Fourier domain of an image is usually negligible away from the origin due to the low magnitude of high-frequency image components, meaning the image generally has a sparse Fourier domain. This sparseness is one of the key ideas behind compressed sensing techniques for imaging.
A mathematical description of MUSIC for field of view (FOV) extension begins by considering image information exiting the optical systems in
Here, In is the image information defined as:
Mn is the modulation applied by the Ronchi rulings, and εn is the optical efficiency along path n. Also, ΔtDn is the time delay relative to path one caused by the extra travel distance. Assuming the camera gate is much longer than the time delay ΔtDn, the approximation ΔtDn≈0 can be made. This holds true for MHz- and kHz-rate imaging systems, given that our system has ΔtD2~1 ns. Therefore, both arms of the imaging system contribute information from nearly the same moment in time. Note that the modulation masks consist of periodic stripes due to the structure of the Ronchi rulings. Also, note that vertical stripes can be modeled as periodic square waves in the x-direction [19]:
where T is the period of the modulation and T1=T/4. Combining Eq. (1) and Eq. (3) and taking the spatial Fourier transform yields:
Here, k0 is the fundamental spatial frequency of the modulation mask, and kxn is the x-direction spatial frequency variable for path n. Each path must have a unique modulation, which is best accomplished by rotation of the spatial modulation pattern. Since a coordinate rotation in the spatial domain corresponds to a rotation in the frequency domain, Eq. (4) becomes:
In Eq. (5), kxn′=kx cos θn+ky sin θn, and θn is the rotation angle of the modulation pattern. Eq. (5) shows that the delta functions create copies of the original image's Fourier transform and shift them to the various harmonics, each with reduced amplitude. However, since these copies contain the original image's information shifted to a unique frequency location corresponding to the modulation pattern, the overlapped images can be separated. Recovery of the images is done using an algorithm detailed in previous work, originally developed for structured illumination measurements [13, 19]. The algorithm multiplies the composite multiplexed image by sinusoidal functions to shift the information from one of the sinc-function offsets to the center of the Fourier domain, and then a low-pass filter is used to isolate the Fourier information corresponding to one of the multiplexed images, which is then inverse-transformed to recover the image. We use a Gaussian filter for the filtering step, similar to our previous work with MUSIC [17, 19].
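The square-wave model of the ruling and its sinc-weighted odd harmonics can be checked numerically; the frame size and stripe period below are arbitrary choices:

```python
import numpy as np

# A 50% duty-cycle stripe pattern (square-wave model of a Ronchi
# ruling) carries spectral power only at DC and the odd harmonics of
# its fundamental, with amplitudes falling off as 1/k.
N, period = 1024, 128
x = np.arange(N)
square = ((x % period) < period // 2).astype(float)   # 8 cycles/frame

F = np.abs(np.fft.fft(square)) / N
k0 = N // period                                      # fundamental bin
print(F[0], F[k0], F[2 * k0], F[3 * k0])
```

The fundamental dominates (amplitude ~1/π versus ~1/(3π) for the third harmonic and essentially zero for even harmonics), which is why the recovery can treat each ruling as contributing one strong pair of shifted copies.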
To show the implications of Eq. (5) in practice, a computer-based example of image multiplexing was done, as shown in
The multiplexing apparatus was constructed in two ways, as shown in
Results and Discussion
3.1 USAF Resolution Card

Experimental demonstration of the Multiplexed Structured Image Capture (MUSIC) technique for FOV extension was first carried out on a USAF 1951 optical test pattern to provide a standard for determining the quality of recovered images and to aid in alignment of the recovered images from the two channels.
To establish proper truth images for comparison, Ronchi rulings were removed and imaging was done one channel at a time with one channel physically blocked.
MUSIC imaging of methane flame chemiluminescence is shown in
In summary, the MUSIC imaging technique has been successfully applied for extending the field of view during a single exposure. The technique was demonstrated for flame chemiluminescence and schlieren imaging, which are very popular imaging techniques in the aerospace and combustion optical diagnostics community. Note that although we applied the technique to look at scenes near each other, it can be applied for simultaneous imaging of two unrelated or vastly separated scenes, which could be useful for imaging of large scale models in wind-tunnels with a single camera. Furthermore, since high-speed cameras generally require restriction of the camera region of interest, MUSIC could allow researchers to capture more data to help offset this restriction. Two optical setups for FOV extension using MUSIC have been demonstrated, a single lens approach suitable for applications with optical access restrictions, and a more efficient dual-lens approach when no such restrictions exist. Though we only demonstrate two-scene imaging, additional scenes can be encoded into a single image using more patterns and optics. However, restrictions on reconstruction resolution limit the number of usable patterns, as discussed in literature [17, 19].
REFERENCES
- 1. G. S. Settles, Schlieren and Shadowgraph Techniques: Visualizing Phenomena in Transparent Media (Springer Berlin Heidelberg, 2012).
- 2. D. Baccarella, Q. Liu, A. Passaro, T. Lee, and H. Do, “Development and testing of the ACT-1 experimental facility for hypersonic combustion research,” Measurement Science and Technology 27, 045902 (2016).
- 3. S. J. Laurence, A. Wagner, and K. Hannemann, “Experimental study of second-mode instability growth and breakdown in a hypersonic boundary layer using high-speed schlieren visualization,” Journal of Fluid Mechanics 797, 471-503 (2016).
- 4. J. D. Miller, S. J. Peltier, M. N. Slipchenko, J. G. Mance, T. M. Ombrello, J. R. Gord, and C. D. Carter, “Investigation of transient ignition processes in a model scramjet pilot cavity using simultaneous 100 kHz formaldehyde planar laser-induced fluorescence and CH* chemiluminescence imaging,” Proceedings of the Combustion Institute 36, 2865-2872 (2017).
- 5. N. Jiang, P. S. Hsu, J. G. Mance, Y. Wu, M. Gragston, Z. Zhang, J. D. Miller, J. R. Gord, and S. Roy, “High-speed 2D Raman imaging at elevated pressures,” Opt. Lett. 42, 3678-3681 (2017).
- 6. J. D. Miller, M. N. Slipchenko, J. G. Mance, S. Roy, and J. R. Gord, “1-kHz two-dimensional coherent anti-Stokes Raman scattering (2D-CARS) for gas-phase thermometry,” Opt. Express 24, 24971-24979 (2016).
- 7. B. A. Rankin, D. R. Richardson, A. W. Caswell, A. G. Naples, J. L. Hoke, and F. R. Schauer, “Chemiluminescence imaging of an optically accessible non-premixed rotating detonation engine,” Combustion and Flame 176, 12-22 (2017).
- 8. B. R. Halls, D. J. Thul, D. Michaelis, S. Roy, T. R. Meyer, and J. R. Gord, “Single-shot, volumetrically illuminated, three-dimensional, tomographic laser-induced-fluorescence imaging in a gaseous free jet,” Opt. Express 24, 10040-10049 (2016).
- 9. T. R. Meyer, B. R. Halls, N. Jiang, M. N. Slipchenko, S. Roy, and J. R. Gord, “High-speed, three-dimensional tomographic laser-induced incandescence imaging of soot volume fraction in turbulent flames,” Opt. Express 24, 29547-29555 (2016).
- 10. J. Liang, and L. V. Wang, “Single-shot ultrafast optical imaging,” Optica 5, 1113-1127 (2018).
- 11. L. Gao, J. Liang, C. Li, and L. V. Wang, “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature 516, 74 (2014).
- 12. J. Liang, L. Gao, P. Hai, C. Li, and L. V. Wang, “Encrypted Three-dimensional Dynamic Imaging using Snapshot Time-of-flight Compressed Ultrafast Photography,” Scientific Reports 5, 15504 (2015).
- 13. K. Dorozynska, and E. Kristensson, “Implementation of a multiplexed structured illumination method to achieve snapshot multispectral imaging,” Opt. Express 25, 17211-17226 (2017).
- 14. E. Kristensson, and E. Berrocal, “Recent development of methods based on structured illumination for combustion studies,” in Imaging and Applied Optics 2016 (Optical Society of America, Heidelberg, 2016), p. LT4F.1.
- 15. M. Saxena, G. Eluru, and S. S. Gorthi, “Structured illumination microscopy,” Adv. Opt. Photon. 7, 241-275 (2015).
- 16. A. Ehn, J. Bood, Z. Li, E. Berrocal, M. Aldén, and E. Kristensson, “FRAME: femtosecond videography for atomic and molecular dynamics,” Light: Science & Applications 6, e17045 (2017).
- 17. M. Gragston, C. D. Smith, D. Kartashov, M. N. Shneider, and Z. Zhang, “Single-Shot Nanosecond-Resolution Multiframe Passive Imaging by Multiplexed Structured Image Capture,” Opt. Express 26, 28441-28452 (2018).
- 18. D. L. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory 52, 1289-1306 (2006).
- 19. M. Gragston, C. D. Smith, and Z. Zhang, “High-speed flame chemiluminescence imaging using time-multiplexed structured detection,” Appl. Opt. 57, 2923-2929 (2018).
The system 100 includes a camera 120 configured for capturing image frames that include multiple images modulated into a single image frame. The system 100 includes a computer system 102 including at least one processor 104 and memory 106 storing executable instructions for the processor 104.
The computer system 102 includes an image importer 108 configured for communicating with the camera 120 and for receiving images from the camera 120. The computer system 102 includes a demodulator 110, implemented on the processor 104, configured for demodulating images to recover, from each single image frame, a number of recovered image frames each depicting the target. The demodulator 110 can be configured to recover multiple recovered images each depicting the target at different times within an exposure window. The demodulator 110 can be configured to recover multiple recovered images each depicting the target from different fields of view at a same time. In general, the demodulator 110 is configured to perform selective filtering in the frequency domain to isolate the recovered images using data specifying modulation patterns of the optical system 130.
The system 100 can be configured for periodic optically modulated detection. The optical modulation is “periodic” in that different optical modulation patterns are applied at different pre-determined times (or on different optical paths, and therefore at different times) and are not applied randomly. For example, different optical modulation patterns can be applied at evenly spaced intervals across an exposure window of a camera, or at other intervals that can be programmed into the computer system 102 for demodulation.
The method 200 includes multiplying the image by a sinusoidal phase matched pattern (202). The method 200 includes performing a Fourier transform on the resulting image (204). The method 200 includes applying a low pass filter to isolate one of the recovered image frames (206). The method 200 includes performing an inverse Fourier transform on the resulting image (208), which results in the recovered image. The method 200 includes determining whether there are more images to recover (210), and if so, repeating the method 200 (return to 202).
The method includes executing a loop 306 for each image to recover until all of the recovered images are recovered from the composite image. Executing the loop 306 includes selecting one of the shifted copies to recover (308) and applying a low-pass filter centered around the corresponding shifted information (310). Executing the loop includes performing an inverse Fourier transform (312) and taking the absolute value of the resulting image (314), which results in one image being recovered from the composite image. To further illustrate the method, consider the following discussion.
Consider Eq. 4 from Section II above,
The information on the right side of the operator can be rewritten with the coordinate system rotated to match the left-hand side:
Since the limited pixel density and optical resolution of the system limit the number of higher-order terms in the inner sum that actually appear in experimental data, the above can be written as:
where only the fundamental and first-order terms contribute significant information, and the coefficients “a” are constants derived from the coefficients of the inner sum. Now, for the new reconstruction process, we simply apply the low-pass filter centered around the still-shifted information. Since we apply this to only one pattern at a time, the effect on the previous equation is (choosing the information for n=1, for example):
The convolution with this delta function perfectly samples the image's Fourier transform but shifts it:
Taking the inverse Fourier transformation with respect to the spatial coordinates gives:
So taking the absolute value will eliminate the phase factor:
Therefore, the image is recovered with no modulation left over, and it corresponds to the correct portion of the camera gate. There is, however, a reduction in amplitude.
In brief, executing the method includes performing steps to:
- 1. Take the Fourier transform of the modulated image.
- 2. Select one of the shifted copies and apply a filter centered around that copy. Everything outside of the filter is set to zero.
- 3. Inverse Fourier transform the result.
- 4. Take the absolute value. The image is now recovered.
- 5. Repeat for each of the encoded images.
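The steps above can be sketched as follows; the hard circular filter, frequencies, and test scene are illustrative assumptions:

```python
import numpy as np

def recover_all(multiplexed, offsets, radius):
    """Recover each encoded image by filtering directly around its
    shifted copy in the Fourier domain, inverse transforming, and
    taking the absolute value (steps 1-5 above)."""
    N = multiplexed.shape[0]
    y, x = np.mgrid[0:N, 0:N]
    F = np.fft.fftshift(np.fft.fft2(multiplexed))       # step 1
    c = N // 2
    recovered = []
    for kx, ky in offsets:                              # step 5: repeat
        # Step 2: keep only a disk around the chosen shifted copy;
        # everything outside the filter is set to zero.
        mask = ((x - (c + kx)) ** 2 + (y - (c + ky)) ** 2) <= radius ** 2
        img = np.fft.ifft2(np.fft.ifftshift(F * mask))  # step 3
        recovered.append(np.abs(img))                   # step 4: abs
    return recovered

# Synthetic example: one scene modulated by vertical stripes.
N = 256
y, x = np.mgrid[0:N, 0:N]
scene = np.exp(-((x - 100) ** 2 + (y - 140) ** 2) / (2 * 10 ** 2))
k0 = 40
modulated = scene * 0.5 * (1 + np.cos(2 * np.pi * k0 * x / N))

rec = recover_all(modulated, [(k0, 0)], radius=15)[0]
```

The absolute value in step 4 removes the residual modulation phase factor, so `rec` is proportional to the original scene with the stripes gone, at reduced amplitude, as the derivation above describes.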
Accordingly, while the methods and systems have been described in reference to specific embodiments, features, and illustrative embodiments, it will be appreciated that the utility of the subject matter is not thus limited, but rather extends to and encompasses numerous other variations, modifications and alternative embodiments, as will suggest themselves to those of ordinary skill in the field of the present subject matter, based on the disclosure herein.
Various combinations and sub-combinations of the structures and features described herein are contemplated and will be apparent to a skilled person having knowledge of this disclosure. Any of the various features and elements as disclosed herein may be combined with one or more other disclosed features and elements unless indicated to the contrary herein. Correspondingly, the subject matter as hereinafter claimed is intended to be broadly construed and interpreted, as including all such variations, modifications and alternative embodiments, within its scope and including equivalents of the claims.
Claims
1. A system for imaging a target, the system comprising:
- an optical modulator configured for applying, at each time of a plurality of times of an exposure window, a respective optical modulation pattern to a received image of the target to output a modulated image;
- a camera configured for capturing a single image frame for the exposure window by receiving, at each time of the plurality of times of the exposure window, the modulated image of the target; and
- a demodulator implemented on a computer system comprising at least one processor, wherein the demodulator is configured for demodulating the single image frame based on the optical modulation patterns to recover a plurality of recovered image frames each depicting the target at a respective recovered time within the exposure window;
- wherein demodulating the single image frame comprises, for each recovered image frame, multiplying the single image frame by a sinusoidal phase matched pattern, applying a Fourier transform, applying a low pass filter, and applying an inverse Fourier transform.
2. The system of claim 1, wherein multiplying the single image frame by a sinusoidal phase matched pattern comprises shifting a first-order harmonic for the recovered image frame to a center of a spatial frequency domain for the Fourier transform.
3. The system of claim 1, wherein the optical modulator comprises a digital micromirror device comprising an array of micromirrors and a controller storing digital specifications of the optical modulation patterns.
4. The system of claim 1, wherein the optical modulator comprises a plurality of optical gratings.
5. The system of claim 1, comprising one or more lenses configured to focus light emitted by the target onto the optical modulator.
6. A system for imaging a target, the system comprising:
- an optical modulator configured for applying, at each time of a plurality of times of an exposure window, a respective optical modulation pattern to a received image of the target to output a modulated image;
- a camera configured for capturing a single image frame for the exposure window by receiving, at each time of the plurality of times of the exposure window, the modulated image of the target; and
- a demodulator implemented on a computer system comprising at least one processor, wherein the demodulator is configured for demodulating the single image frame based on the optical modulation patterns to recover a plurality of recovered image frames each depicting the target at a respective recovered time within the exposure window;
- wherein the optical modulator is configured to apply the optical modulation patterns to upshift, in a spatial frequency domain, the received images to different patterned hexagons in the spatial frequency domain.
7. A method for imaging a target, the method comprising:
- applying, at each time of a plurality of times of an exposure window, a respective optical modulation pattern, using an optical modulator, to a received image of the target to output a modulated image;
- capturing, at a camera, a single image frame for the exposure window by receiving, at each time of the plurality of times of the exposure window, the modulated image of the target; and
- demodulating, at a computer system comprising at least one processor, the single image frame based on the optical modulation patterns to recover a plurality of recovered image frames each depicting the target at a respective recovered time within the exposure window;
- wherein demodulating the single image frame comprises, for each recovered image frame, multiplying the single image frame by a sinusoidal phase matched pattern, applying a Fourier transform, applying a low pass filter, and applying an inverse Fourier transform.
8. The method of claim 7, wherein multiplying the single image frame by a sinusoidal phase matched pattern comprises shifting a first-order harmonic for the recovered image frame to a center of a spatial frequency domain for the Fourier transform.
9. The method of claim 7, wherein the optical modulator comprises a digital micromirror device comprising an array of micromirrors and a controller storing digital specifications of the optical modulation patterns.
10. The method of claim 7, wherein the optical modulator comprises a plurality of optical gratings.
11. The method of claim 7, comprising focusing light emitted by the target onto the optical modulator using one or more lenses.
12. A method for imaging a target, the method comprising:
- applying, at each time of a plurality of times of an exposure window, a respective optical modulation pattern, using an optical modulator, to a received image of the target to output a modulated image;
- capturing, at a camera, a single image frame for the exposure window by receiving, at each time of the plurality of times of the exposure window, the modulated image of the target; and
- demodulating, at a computer system comprising at least one processor, the single image frame based on the optical modulation patterns to recover a plurality of recovered image frames each depicting the target at a respective recovered time within the exposure window;
- wherein the optical modulator is configured to apply the optical modulation patterns to upshift, in a spatial frequency domain, the received images to different patterned hexagons in the spatial frequency domain.
13. A system for imaging a target, the system comprising:
- one or more beam splitters configured for separating a received image of the target into a plurality of optical paths each having a different optical length, each path including a respective optical modulator configured to output a respective modulated image of the target;
- one or more beam combiners configured for combining the modulated images;
- a camera configured for capturing a single image frame for an exposure window by receiving, at each time of a plurality of times of the exposure window, an output from the one or more beam combiners of an optical combination of the modulated images of the target; and
- a demodulator implemented on a computer system comprising at least one processor, wherein the demodulator is configured for demodulating the single image frame to recover a plurality of recovered image frames each depicting the target at a respective recovered time within the exposure window;
- wherein demodulating the single image frame comprises performing selective filtering in the frequency domain to isolate the recovered images using data specifying modulation patterns of the optical modulators.
14. The system of claim 13, wherein each optical modulator comprises a Ronchi ruling having a unique optical rotation.
15. A system for imaging a target, the system comprising:
- an optical modulator configured for applying, to each field of view of a plurality of fields of view of the target, a respective optical modulation pattern to a received image of the target to output a modulated image;
- a camera configured for capturing a single image frame for the plurality of fields of view by receiving the modulated images of the target; and
- a demodulator implemented on a computer system comprising at least one processor, wherein the demodulator is configured for recovering, from the single image frame, a plurality of recovered image frames each depicting the target from a respective field of view;
- wherein the demodulator is configured for performing selective filtering in the frequency domain to isolate the recovered images using data specifying modulation patterns, and wherein the demodulator is configured for combining the recovered image frames to create a single full scene image frame.
16. The system of claim 15, wherein the optical modulator comprises a digital micromirror device or a plurality of optical gratings, and wherein the system comprises one or more lenses configured for focusing light emitted by the target onto the optical modulator.
U.S. Patent Documents
4331877 | May 25, 1982 | Barrett et al.
6239909 | May 29, 2001 | Hayashi
20030020922 | January 30, 2003 | Crowley et al.
20080219535 | September 11, 2008 | Mistretta et al.
20120098951 | April 26, 2012 | Borovytsky
20130136318 | May 30, 2013 | Hassebrook et al.
20170214861 | July 27, 2017 | Rachlin
Foreign Patent Documents
WO-2012/069810 | May 2012 | WO
- International Search Report and Written Opinion corresponding to International Patent Application No. PCT/US2019/014473 dated Apr. 30, 2019.
- International Preliminary Report on Patentability corresponding to International Patent Application No. PCT/US2019/014473 dated Aug. 6, 2020.
- Bohlin et al., “Communication: Two-dimensional gas-phase coherent anti-Stokes Raman spectroscopy (2D-CARS): Simultaneous planar imaging and multiplex spectroscopy in a single laser shot,” The Journal of Chemical Physics 138, 221101 (2013).
- Bub et al., “Temporal pixel multiplexing for simultaneous high-speed, high-resolution imaging,” Nature Methods 7, 209 (2010).
- Dogariu et al., “High-gain backward lasing in air,” Science 331, 442-445 (2011).
- Donoho, “Compressed sensing,” IEEE Transactions on Information Theory 52, 1289-1306 (2006).
- Dorozynska et al., “Implementation of a multiplexed structured illumination method to achieve snapshot multispectral imaging,” Optics Express 25, 17211-17226 (2017).
- Dromey et al., “High harmonic generation in the relativistic limit,” Nature physics 2, 456 (2006).
- Ehn et al., “Frame: femtosecond videography for atomic and molecular dynamics,” Light: Science & Applications 6, e17045 (2017).
- Etoh et al., “Toward One Giga Frames per Second—Evolution of in Situ Storage Image Sensors,” Sensors (Basel, Switzerland) 13, 4640-4658 (2013).
- Etoh et al., “The Theoretical Highest Frame Rate of Silicon Image Sensors,” Sensors (Basel, Switzerland) 17, 483 (2017).
- Gao et al., “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature 516, 74 (2014).
- Gragston et al., “High-speed flame chemiluminescence imaging using time-multiplexed structured detection,” Applied Optics 57, 2923-2929 (2018).
- Gragston et al., “Single-Shot Nanosecond-Resolution Multiframe Passive Imaging by Multiplexed Structured Image Capture,” Optics Express 26, 28441-28452 (2018).
- Gustafsson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” Journal of Microscopy 198, 82-87 (2000).
- Hales, “The Honeycomb Conjecture,” Discrete & Computational Geometry 25, 1-22 (2001).
- Halls et al., “Single-shot, volumetrically illuminated, three-dimensional, tomographic laser-induced-fluorescence imaging in a gaseous free jet,” Opt. Express 24, 10040-10049 (2016).
- Heintzmann et al., “Laterally modulated excitation microscopy: improvement of resolution by using a diffraction grating,” in BiOS Europe '98, (SPIE, 1999), 12.
- Hsu et al., “Sensitivity, stability, and precision of quantitative Ns-LIBS-based fuel-air-ratio measurements for methane-air flames at 1-11 bar,” Applied Optics 55, 8042-8048 (2016).
- Kristensson et al., “Two-pulse structured illumination imaging,” Opt. Lett. 39, 2584-2587 (2014).
- Kristensson et al., “Recent development of methods based on structured illumination for combustion studies,” in Imaging and Applied Optics 2016 (Optical Society of America, Heidelberg, 2016), p. LT4F.1.
- Kristensson et al., “Instantaneous 3D imaging of flame species using coded laser illumination,” Proceedings of the Combustion Institute 36, 4585-4591 (2017).
- Liang et al., “Encrypted Three-dimensional Dynamic Imaging using Snapshot Time-of-flight Compressed Ultrafast Photography,” Scientific Reports 5, 15504 (2015).
- Liang et al., “Single-shot real-time femtosecond imaging of temporal focusing,” Light: Science & Applications 7, 42 (2018).
- Liang et al., “Single-shot ultrafast optical imaging,” Optica 5, 1113-1127 (2018).
- Mikami et al., “Ultrafast optical imaging technology: principles and applications of emerging methods,” Nanophotonics 5, 497-509 (2016).
- Miller et al., “1-kHz two-dimensional coherent anti-Stokes Raman scattering (2D-CARS) for gas-phase thermometry,” Opt. Express 24, 24971-24979 (2016).
- Perelomov et al., “Ionization of atoms in an alternating electric field,” Sov. Phys. JETP 23, 924-934 (1966).
- Schermelleh et al., “A guide to super-resolution fluorescence microscopy,” The Journal of Cell Biology 190, 165-175 (2010).
- Schwarz et al., “Measurement of nonlinear refractive index and ionization rates in air using a wavefront sensor,” Opt. Express 20, 8791-8803 (2012).
- Wang et al., “Ultra-high-speed PLIF imaging for simultaneous visualization of multiple species in turbulent flames,” Opt. Express 25, 30214-30228 (2017).
- Wu et al., “High-energy pulse-burst laser system for megahertz-rate flow visualization,” Optics Letters 25, 1639-1641 (2000).
- Wu et al., “Quantitative measurement of electron number in nanosecond and picosecond laser-induced air breakdown,” Journal of Applied Physics 119, 173303 (2016).
- Zhang et al., “Microwave diagnostics of laser-induced avalanche ionization in air,” Journal of Applied Physics 100, 6 (2006).
- Baccarella et al., “Development and testing of the ACT-1 experimental facility for hypersonic combustion research,” Measurement Science and Technology 27, 045902 (2016).
- Braun et al., “Self-channeling of high-peak-power femtosecond laser pulses in air,” Optics letters 20, 73-75 (1995).
- Carter et al., “High-speed planar laser-induced fluorescence of the CH radical using the C2Σ+-X2Π(0, 0) band,” Applied Physics B 116, 515-519 (2014).
- Duran et al., “Ballistic imaging of diesel sprays using a picosecond laser: characterization and demonstration,” Appl. Opt. 54, 1743-1750 (2015).
- Goodman, J.W., Introduction to Fourier Optics (Roberts and Company Publishers, 2005).
- Hecht, E., Optics (Addison-Wesley, 2002).
- Hsu et al., “High-repetition-rate laser ignition of fuel—air mixtures,” Opt. Lett. 41, 1570-1573 (2016).
- Jiang et al., “High-speed 2D Raman imaging at elevated pressures,” Optics Letters 42, 3678-3681 (2017).
- Khan et al., “Extracting sub-exposure images from a single capture through Fourier-based optical modulation,” Signal Processing: Image Communication 60, 107-115 (2018).
- Laurence et al., “Experimental study of second-mode instability growth and breakdown in a hypersonic boundary layer using high-speed schlieren visualization,” Journal of Fluid Mechanics 797, 471-503 (2016).
- Lee et al., “Time-multiplexed structured illumination using a DMD for optical diffraction tomography,” Opt. Lett. 42, 999-1002 (2017).
- Linne et al., “Ballistic imaging of the near field in a diesel spray,” Experiments in Fluids 40, 836-846 (2006).
- Meyer et al., “High-speed, three-dimensional tomographic laser-induced incandescence imaging of soot volume fraction in turbulent flames,” Opt. Express 24, 29547-29555 (2016).
- Miles et al., “Microwave Scattering from Laser Ionized Molecules: A New Approach to Nonintrusive Diagnostics,” AIAA Journal, Aerospace Letters, 2007.
- Mohri et al., “Instantaneous 3D imaging of highly turbulent flames using computed tomography of chemiluminescence,” Applied Optics 56, 7385-7395 (2017).
- Mur et al., “Energy and momentum spectra of photoelectrons under conditions of ionization by strong laser radiation (The case of elliptic polarization),” Journal of Experimental and Theoretical Physics 92, 777-788 (2001).
- O'Connor et al., “Transverse combustion instabilities: Acoustic, fluid mechanic, and flame processes,” Progress in Energy and Combustion Science 49, 1-39 (2015).
- Oppenheim, A.V., Signals & Systems, Prentice-Hall signal processing series (Prentice-Hall International, 1997).
- Rankin et al., “Chemiluminescence imaging of an optically accessible non-premixed rotating detonation engine,” Combustion and Flame 176, 12-22 (2017).
- Sallé et al., “Comparative study of different methodologies for quantitative rock analysis by Laser-Induced Breakdown Spectroscopy in a simulated Martian atmosphere,” Spectrochimica Acta Part B: Atomic Spectroscopy 61, 301-313 (2006).
- Saxena et al., “Structured illumination microscopy,” Adv. Opt. Photon. 7, 241-275 (2015).
- Shattan et al., “Detection of uranyl fluoride and sand surface contamination on metal substrates by hand-held laser-induced breakdown spectroscopy,” Appl. Opt. 56, 9868-9875 (2017).
- Settles, G.S., Schlieren and Shadowgraph Techniques: Visualizing Phenomena in Transparent Media (Springer Berlin Heidelberg, 2012).
- Velten et al., “Imaging the propagation of light through scenes at picosecond resolution,” Commun. ACM 59, 79-86 (2016).
- Wakeham et al., “Dual-echelon single-shot femtosecond spectroscopy,” Opt. Lett. 25, 505-507 (2000).
- Zhang et al., “Coherent microwave rayleigh scattering from resonance-enhanced multiphoton ionization in argon,” Physical Review Letters 98 (2007).
Type: Grant
Filed: Jan 22, 2019
Date of Patent: Apr 16, 2024
Patent Publication Number: 20210136319
Assignee: University of Tennessee Research Foundation (Knoxville, TN)
Inventors: Zhili Zhang (Knoxville, TN), Mark Terrell Gragston (Knoxville, TN), Cary Dean Smith (Knoxville, TN)
Primary Examiner: Thanh Luu
Application Number: 16/757,200
International Classification: G02B 27/10 (20060101); G02B 26/08 (20060101); G03B 39/00 (20210101); G06T 3/40 (20060101); G06T 3/4084 (20240101);