Patents by Inventor Qionghai Dai

Qionghai Dai has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11954870
    Abstract: Provided are a three-dimensional reconstruction method, apparatus, and system for a dynamic scene, as well as a server and a medium. The method includes: acquiring multiple continuous depth image sequences of the dynamic scene, where the sequences are captured by an array of drones equipped with depth cameras; fusing the sequences to establish a three-dimensional reconstruction model of the dynamic scene; calculating target observation points for the array of drones according to the three-dimensional reconstruction model and the drones' current poses; and instructing the drones to move to the target observation points to capture further sequences, and updating the three-dimensional reconstruction model according to the multiple continuous depth image sequences captured at the target observation points.
    Type: Grant
    Filed: April 23, 2019
    Date of Patent: April 9, 2024
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Lu Fang, Mengqi Ji, Yebin Liu, Lan Xu, Wei Cheng, Qionghai Dai
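    As an illustrative sketch only (not the patented implementation), the capture-fuse-plan loop described in the abstract above can be outlined as follows; the voxel sets, candidate drone poses, and coverage score are made-up stand-ins for the patent's depth fusion and pose optimization.

    ```python
    # Hypothetical sketch of the capture -> fuse -> plan loop from the abstract.
    # Sets of voxel ids stand in for depth-image fusion; a greedy coverage score
    # stands in for computing target observation points from the current model.

    def fuse(model_voxels, new_voxels):
        """Fuse a newly captured depth sequence (as voxel ids) into the model."""
        return model_voxels | new_voxels

    def next_observation_point(model_voxels, candidates):
        """Pick the candidate viewpoint that would add the most unseen voxels."""
        return max(candidates, key=lambda view: len(candidates[view] - model_voxels))

    # Candidate drone poses mapped to the voxels they would observe (toy data).
    candidates = {
        "pose_a": {1, 2, 3},
        "pose_b": {3, 4, 5, 6},
        "pose_c": {1, 6},
    }
    model = fuse(set(), {1, 2})                         # initial capture
    target = next_observation_point(model, candidates)  # plan next viewpoint
    model = fuse(model, candidates[target])             # capture there and update
    ```
    
    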
  • Patent number: 11782285
    Abstract: The present disclosure provides a material identification method and device based on laser speckle and modal fusion, an electronic device, and a non-transitory computer-readable storage medium. The method includes: performing data acquisition on an object using a structured light camera to obtain a color modal image, a depth modal image, and an infrared modal image; preprocessing the three modal images; and inputting the preprocessed images into a preset deep neural network for training, to learn a material characteristic from the speckle structure and the coupling relation between the color and depth modalities, to generate a material classification model for classifying materials, and to generate a material prediction result for the object at test time using the material classification model.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: October 10, 2023
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Lu Fang, Mengqi Ji, Shi Mao, Qionghai Dai
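    A minimal sketch of the modal-fusion input step described above, assuming a simple channel-wise concatenation of the three modalities; the normalization choice and image sizes are hypothetical, not taken from the patent.

    ```python
    import numpy as np

    # Hypothetical preprocessing: stack the three modalities channel-wise so a
    # single network input carries color (3 ch), depth (1 ch), and infrared (1 ch).
    def fuse_modalities(color, depth, infrared):
        # Normalize each modality to [0, 1] before concatenation (toy choice).
        norm = lambda x: (x - x.min()) / (x.max() - x.min() + 1e-8)
        return np.concatenate([norm(color), norm(depth), norm(infrared)], axis=-1)

    h, w = 64, 64
    color = np.random.rand(h, w, 3)      # color modal image
    depth = np.random.rand(h, w, 1)      # depth modal image
    infrared = np.random.rand(h, w, 1)   # infrared (speckle) modal image

    x = fuse_modalities(color, depth, infrared)  # network input, shape (64, 64, 5)
    ```
    
    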
  • Patent number: 11719921
    Abstract: The present disclosure provides a rapid three-dimensional imaging system based on a multi-angle 4Pi microscope. The system includes: an illumination module, configured to obtain parallel light whose size covers the projection surface of a spatial light modulator; a wavefront modulation module, configured to place an LCOS device on a Fourier plane of the illumination end; a two-dimensional scanning module, configured to steer the light beam for two-dimensional scanning of the object plane; an illumination interference module, configured to generate 4Pi point spread functions (PSFs) through illumination interference to irradiate a fluorescent sample; an imaging module, configured to acquire interference images of two fluorescent signals; and a controller, configured to control the wavefront modulation module to adjust the polarization direction of the light so as to generate 4Pi PSFs with different inclination angles.
    Type: Grant
    Filed: May 7, 2020
    Date of Patent: August 8, 2023
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Qionghai Dai, You Zhou, Jiamin Wu, Guoxun Zhang
  • Patent number: 11715186
    Abstract: The present disclosure provides a multi-image-based image enhancement method and device, an electronic device and a non-transitory computer readable storage medium. The method includes: aligning a low-resolution target image and a reference image in the image domain; performing an alignment in the feature domain; and synthesizing features corresponding to the low-resolution target image and features corresponding to the reference image to generate a final output.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: August 1, 2023
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Lu Fang, Yinheng Zhu, Qionghai Dai
  • Publication number: 20230214961
    Abstract: A direct structured illumination microscopy (dSIM) reconstruction method is provided. First, a time domain modulation signal is extracted through a wavelet. Then, an incoherent signal is converted into a coherent signal. Next, an accumulation amount at each pixel is calculated. Finally, a super-resolution image is generated by using a correlation between signals at different spatial positions. An autocorrelation algorithm of dSIM is insensitive to an error of a reconstruction parameter. dSIM bypasses a complex frequency domain operation in structured illumination microscopy (SIM) image reconstruction, and prevents an artifact caused by the parameter error in the frequency domain operation. The dSIM algorithm has high adaptability and can be used in laboratory SIM, nonlinear SIM imaging systems, or commercial systems.
    Type: Application
    Filed: August 20, 2020
    Publication date: July 6, 2023
    Inventors: Peng Xi, Shan Jiang, Hui Qiao, Qionghai Dai
  • Publication number: 20230117456
    Abstract: The present disclosure relates to optical logic element technologies, and more particularly, to an optical logic element for photoelectric digital logic operation and a logic operation method thereof. The element includes a driver member configured to drive a photoelectric integrated member, generate digital modulation information that can be recognized by the photoelectric integrated member, and read an electrical signal outputted by the photoelectric integrated member; and the photoelectric integrated member, configured to carry, by using a coherent optical signal, the digital modulation information inputted by the driver member, perform a digital logic operation on the coherent optical signal in a predetermined optical diffraction neural network to obtain an operation result, generate the electrical signal from the operation result based on a digital logic mapping relationship, and output the operation result after the electrical signal is read by the driver member.
    Type: Application
    Filed: October 12, 2022
    Publication date: April 20, 2023
    Inventors: Qionghai DAI, Jiyuan ZHENG, Chenchen DENG, Jiamin WU
  • Patent number: 11600060
    Abstract: The present disclosure discloses a nonlinear all-optical deep-learning system and method with multistage space-frequency domain modulation. The system includes an optical input module, configured to convert input information to optical information, a multistage space-frequency domain modulation module, configured to perform multistage space-frequency domain modulation on the optical information generated by the optical input module so as to generate modulated optical information, and an information acquisition module, configured to transform the modulated optical information onto a Fourier plane or an image plane, and to acquire the transformed optical information so as to generate processed optical information.
    Type: Grant
    Filed: June 4, 2020
    Date of Patent: March 7, 2023
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Qionghai Dai, Tao Yan, Jiamin Wu, Xing Lin
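    The multistage space-frequency domain modulation described in the abstract above can be illustrated numerically: modulate the field in the spatial domain, transform to the Fourier plane (a lens), modulate there, and transform back. This is a hedged toy model; the random phase masks stand in for learned modulation patterns, and the field size is arbitrary.

    ```python
    import numpy as np

    # Toy model of one space-frequency modulation stage. Phase-only masks are
    # unitary, so total optical energy is conserved through the stage.
    rng = np.random.default_rng(1)
    n = 32
    field = rng.standard_normal((n, n)) + 0j            # input optical field

    space_mask = np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n)))  # stand-in spatial mask
    freq_mask = np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n)))   # stand-in Fourier mask

    def modulation_stage(field, space_mask, freq_mask):
        field = field * space_mask          # spatial-domain phase modulation
        spectrum = np.fft.fft2(field)       # lens: propagate to Fourier plane
        spectrum = spectrum * freq_mask     # frequency-domain phase modulation
        return np.fft.ifft2(spectrum)       # lens: back to the image plane

    out = modulation_stage(field, space_mask, freq_mask)
    ```

    Cascading several such stages, each with its own masks, gives the "multistage" structure; a camera on the final Fourier or image plane then plays the role of the information acquisition module.
    
    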
  • Patent number: 11514667
    Abstract: A method and an apparatus for camera-free light field video processing with an all-optical neural network are disclosed. The method includes: mapping, by a digital micro-mirror device (DMD) and an optical fiber coupler, the two-dimensional (2D) spatial optical signal of the light field video into a one-dimensional (1D) input optical signal; realizing a multiply-accumulate computing model in an all-optical recurrent neural network structure, and processing the 1D input signal to obtain a processed signal; and receiving the processed signal and outputting an electronic signal by a photodetector, or receiving the processed signal by a relay optical fiber for relay transmission. The method and system realize light field video processing without the use of a camera, and the whole system is all-optical, giving it advantages in computing speed and energy efficiency.
    Type: Grant
    Filed: March 25, 2022
    Date of Patent: November 29, 2022
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Lu Fang, Tiankuang Zhou, Siyuan Gu, Xiaoyun Yuan, Qionghai Dai
  • Patent number: 11514607
    Abstract: The disclosure provides 3D reconstruction methods and devices. The method includes: obtaining data captured by the camera and data captured by the inertial measurement unit; obtaining a pose of the camera based on the data; obtaining an adjustment value of the pose of the camera and an adjustment value of bias of the inertial measurement unit; updating the pose of the camera based on the adjustment value of the pose of the camera; determining whether the adjustment value of bias of the inertial measurement unit is less than a preset value; in response to the adjustment value of bias of the inertial measurement unit being greater than or equal to the preset value, determining that a current loop for 3-dimensional reconstruction is an error loop; removing the error loop; and constructing a 3-dimensional model for surroundings of the camera based on the updated pose of the camera and remaining loops.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: November 29, 2022
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Lu Fang, Dawei Zhong, Qionghai Dai
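    A minimal sketch of the error-loop test described in the abstract above: a loop closure is kept only if the IMU-bias adjustment it induces stays below a preset threshold. The data layout and threshold value are hypothetical simplifications.

    ```python
    # Hypothetical sketch: each candidate loop closure carries the norm of the
    # IMU-bias adjustment it would induce; loops at or above the preset value
    # are classified as error loops and removed before reconstruction.

    def filter_loops(loops, bias_threshold):
        """loops: list of (loop_id, bias_adjustment_norm). Returns (kept, removed)."""
        kept, removed = [], []
        for loop_id, bias_adj in loops:
            if bias_adj < bias_threshold:
                kept.append(loop_id)      # consistent loop: use it for the 3D model
            else:
                removed.append(loop_id)   # error loop: discard it
        return kept, removed

    loops = [("loop0", 0.01), ("loop1", 0.50), ("loop2", 0.02)]
    kept, removed = filter_loops(loops, bias_threshold=0.1)
    ```
    
    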
  • Patent number: 11461911
    Abstract: A depth information calculation method and device based on a light-field-binocular system. The method includes obtaining a far-distance disparity map based on binocular information of calibrated input images, setting respective first confidences for pixels in the disparity map, and obtaining a first target confidence; detecting that the first confidence of a pixel is smaller than a preset value and responsively determining a new disparity value based on light field information of the input images, determining an updated depth value based on the new disparity value, and obtaining a second target confidence of the pixel; and combining the far-distance disparity map and a disparity map formed by the new disparity values into an index map, combining the first confidence and the first target confidence into a confidence map, and optimizing the index and confidence maps to obtain a final disparity map, which is converted to a final depth map.
    Type: Grant
    Filed: September 28, 2020
    Date of Patent: October 4, 2022
    Assignee: Tsinghua University
    Inventors: Lu Fang, Dingjian Jin, Anke Zhang, Qionghai Dai
  • Patent number: 11450017
    Abstract: A method for intelligent light field depth classification based on optoelectronic computing includes capturing and identifying binocular images of a scene within a depth range through a pair of binocular cameras; mapping each depth value in the depth range to a disparity value between the binocular images, to obtain a disparity range of the scene within the depth range; labeling training data based on the disparity range to obtain a pre-trained diffraction neural network model; loading a respective weight for each layer of a network obtained after training into a corresponding optical element based on the pre-trained diffraction neural network model; and after the respective weight for each layer of the network is loaded, performing forward propagation inference on new input data of the scene, and outputting a depth classification result corresponding to each pixel in the binocular images of the scene.
    Type: Grant
    Filed: May 12, 2022
    Date of Patent: September 20, 2022
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Lu Fang, Zhihao Xu, Xiaoyun Yuan, Tiankuang Zhou, Qionghai Dai
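    The depth-to-disparity mapping in the abstract above is the standard stereo relation, disparity = focal length x baseline / depth. A short sketch, with made-up camera parameters for illustration:

    ```python
    # Standard pinhole-stereo relation: disparity (pixels) = f * b / depth.
    # focal_px and baseline_m below are illustrative values, not from the patent.

    def depth_to_disparity(depth_m, focal_px=700.0, baseline_m=0.12):
        return focal_px * baseline_m / depth_m

    def disparity_range(depth_min, depth_max, focal_px=700.0, baseline_m=0.12):
        # Nearer objects have larger disparity, so the bounds swap.
        return (depth_to_disparity(depth_max, focal_px, baseline_m),
                depth_to_disparity(depth_min, focal_px, baseline_m))

    d_lo, d_hi = disparity_range(1.0, 4.0)   # scene between 1 m and 4 m
    ```

    Labeling training disparities within [d_lo, d_hi] then bounds the classes the diffractive network has to distinguish.
    
    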
  • Patent number: 11425292
    Abstract: A method and an apparatus for camera-free light field imaging with optoelectronic intelligent computing are provided. The method includes: obtaining an optical computing result by an optical computing module in response to receiving a light signal of an object to be imaged, in which the optical computing result includes light field imaging of the object to be imaged; computing by an electronic computing module the optical computing result to obtain an electronic computing result; and in response to determining based on the electronic computing result that cascading is required, forming a cascade structure by taking the electronic computing result at a previous level as an input of the optical computing module at a current level, and in response to determining that cascading is not required, outputting a final result.
    Type: Grant
    Filed: February 23, 2022
    Date of Patent: August 23, 2022
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Lu Fang, Yong Wang, Xiaoyun Yuan, Tiankuang Zhou, Qionghai Dai
  • Publication number: 20220164634
    Abstract: An optical diffractive processing unit includes input nodes, output nodes, and neurons. The neurons are connected to the input nodes through optical diffraction. The weights of the neurons' connection strengths are determined by diffractive modulation. Each optoelectronic neuron is configured to perform an optical-field summation of its weighted inputs and generate a unit output by applying a complex activation to the optical field, which occurs naturally in photoelectric conversion. Each neuron is a programmable device.
    Type: Application
    Filed: November 2, 2021
    Publication date: May 26, 2022
    Inventors: Qionghai DAI, Tiankuang ZHOU, Xing LIN, Jiamin WU
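    A toy numerical model of one such layer, assuming complex input fields, complex diffractive weights, and an intensity (|field|^2) read-out as the activation from photoelectric conversion; the shapes and random values are illustrative only.

    ```python
    import numpy as np

    # Toy model of one diffractive layer: optical-field summation of weighted
    # inputs (complex matrix-vector product), then a nonnegative intensity
    # read-out standing in for the photodetector's photoelectric conversion.
    rng = np.random.default_rng(0)

    def diffractive_layer(field_in, weights):
        """field_in: complex input fields; weights: complex diffractive modulation."""
        field_out = weights @ field_in      # coherent summation of weighted inputs
        return np.abs(field_out) ** 2       # intensity detection (complex activation)

    field_in = rng.standard_normal(8) + 1j * rng.standard_normal(8)
    weights = rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))
    out = diffractive_layer(field_in, weights)   # 4 nonnegative neuron outputs
    ```
    
    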
  • Patent number: 11262568
    Abstract: A microscopic imaging system and a microscopic imaging method. The system includes: an illumination module configured to generate a laser illumination, an LCOS device located in a Fourier plane of the laser illumination and configured to modulate a phase of the laser illumination, a 4-F system configured to adjust a size of a light beam of the laser illumination, an excitation lens group configured to generate a point illumination focused in a sample plane, a detecting lens group configured to capture an image of a PSF of the point illumination, a camera sensor, and a controller configured to synchronously control a change in a phase pattern of the LCOS device and an image capture of the camera sensor.
    Type: Grant
    Filed: March 19, 2019
    Date of Patent: March 1, 2022
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Qionghai Dai, You Zhou, Jiamin Wu, Guoxun Zhang
  • Patent number: 11169367
    Abstract: Provided are a 3D microscopic imaging method and a 3D microscopic imaging system. The method includes: acquiring a first PSF of a 3D sample from an object plane to a plane of a main camera sensor and a second PSF of the 3D sample from the object plane to a plane of a secondary camera sensor, and generating a first forward projection matrix corresponding to the first PSF and a second forward projection matrix corresponding to the second PSF; acquiring a light field image captured by the main camera sensor and a high resolution image captured by the secondary camera sensor; generating a reconstruction result of the 3D sample by reconstructing the light field image, the first forward projection matrix, the high resolution image and the second forward projection matrix according to a preset algorithm.
    Type: Grant
    Filed: March 19, 2019
    Date of Patent: November 9, 2021
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Qionghai Dai, Zhi Lu, Jiamin Wu
  • Patent number: 11131841
    Abstract: A scanning light field microscopic imaging system includes: a microscope configured to magnify a sample and image it onto a first image plane of the microscope; a relay lens configured to magnify or minify the first image plane; a 2D scanning galvo configured to rotate the angle of the light path in the frequency-domain plane; a microlens array configured to modulate a beam with a preset angle to a target spatial position at the back focal plane of the microlens array and to modulate the first image plane to obtain a modulated image; an image sensor configured to record the modulated image; and a reconstruction module configured to reconstruct a 3D structure of the sample based on the modulated image acquired from the image sensor.
    Type: Grant
    Filed: March 19, 2019
    Date of Patent: September 28, 2021
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Qionghai Dai, Zhi Lu, Jiamin Wu
  • Publication number: 20210118162
    Abstract: A depth information calculation method and device based on a light-field-binocular system. The method includes obtaining a far-distance disparity map based on binocular information of calibrated input images, setting respective first confidences for pixels in the disparity map, and obtaining a first target confidence; detecting that the first confidence of a pixel is smaller than a preset value and responsively determining a new disparity value based on light field information of the input images, determining an updated depth value based on the new disparity value, and obtaining a second target confidence of the pixel; and combining the far-distance disparity map and a disparity map formed by the new disparity values into an index map, combining the first confidence and the first target confidence into a confidence map, and optimizing the index and confidence maps to obtain a final disparity map, which is converted to a final depth map.
    Type: Application
    Filed: September 28, 2020
    Publication date: April 22, 2021
    Inventors: Lu FANG, Dingjian Jin, Anke Zhang, Qionghai Dai
  • Publication number: 20210118123
    Abstract: The present disclosure provides a material identification method and device based on laser speckle and modal fusion, an electronic device, and a non-transitory computer-readable storage medium. The method includes: performing data acquisition on an object using a structured light camera to obtain a color modal image, a depth modal image, and an infrared modal image; preprocessing the three modal images; and inputting the preprocessed images into a preset deep neural network for training, to learn a material characteristic from the speckle structure and the coupling relation between the color and depth modalities, to generate a material classification model for classifying materials, and to generate a material prediction result for the object at test time using the material classification model.
    Type: Application
    Filed: September 29, 2020
    Publication date: April 22, 2021
    Inventors: Lu Fang, Mengqi Ji, Shi Mao, Qionghai Dai
  • Publication number: 20210110576
    Abstract: The disclosure provides 3D reconstruction methods and devices. The method includes: obtaining data captured by the camera and data captured by the inertial measurement unit; obtaining a pose of the camera based on the data; obtaining an adjustment value of the pose of the camera and an adjustment value of bias of the inertial measurement unit; updating the pose of the camera based on the adjustment value of the pose of the camera; determining whether the adjustment value of bias of the inertial measurement unit is less than a preset value; in response to the adjustment value of bias of the inertial measurement unit being greater than or equal to the preset value, determining that a current loop for 3-dimensional reconstruction is an error loop; removing the error loop; and constructing a 3-dimensional model for surroundings of the camera based on the updated pose of the camera and remaining loops.
    Type: Application
    Filed: September 29, 2020
    Publication date: April 15, 2021
    Inventors: Lu Fang, Dawei Zhong, Qionghai Dai
  • Publication number: 20210110599
    Abstract: Provided are a depth camera based three-dimensional reconstruction method and apparatus, a device, and a storage medium. The method includes: acquiring at least two frames of images obtained by capturing a target scenario with a depth camera; determining, according to the at least two frames of images, the relative camera poses of the depth camera when capturing the target scenario; determining at least one feature voxel from each frame of image by adopting at least two levels of nested screening, where each level of screening adopts a respective voxel partitioning rule; fusing and calculating the at least one feature voxel of each frame of image according to the respective relative camera pose of each frame to obtain a grid voxel model of the target scenario; and generating an isosurface of the grid voxel model to obtain a three-dimensional reconstruction model of the target scenario.
    Type: Application
    Filed: April 28, 2019
    Publication date: April 15, 2021
    Applicant: Tsinghua University
    Inventors: Lu Fang, Mengqi Ji, Lei Han, Zhuo Su, Qionghai Dai