Patents by Inventor Yair Rivenson

Yair Rivenson has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220253685
    Abstract: A broadband diffractive optical neural network simultaneously processes a continuum of wavelengths generated by a temporally-incoherent broadband source to all-optically perform a specific task learned using deep learning. The optical neural network design was verified by designing, fabricating and testing seven different multi-layer, diffractive optical systems that transform the optical wavefront generated by a broadband THz pulse to realize (1) a series of tunable, single-passband as well as dual-passband spectral filters, and (2) spatially-controlled wavelength de-multiplexing. Merging the native or engineered dispersion of various material systems with a deep learning-based design, broadband diffractive optical neural networks help engineer light-matter interaction in 3D, diverging from intuitive and analytical design methods to create task-specific optical components that can all-optically perform deterministic tasks or statistical inference for optical machine learning.
    Type: Application
    Filed: September 9, 2020
    Publication date: August 11, 2022
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yi Luo, Deniz Mengu, Yair Rivenson
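
As a rough illustration of the physics the abstract above relies on, the sketch below shows how a single learned thickness map imparts a different phase delay to each wavelength, which is what lets a multi-layer diffractive stack be optimized over a continuum of wavelengths. The dispersion model, dimensions, and wavelengths are illustrative assumptions, not values from the patent.

```python
import numpy as np

# Toy THz-band wavelengths and a linear dispersion model (both assumptions)
wavelengths = np.linspace(300e-6, 600e-6, 4)        # meters
n_of_lambda = 1.72 - 50.0 * wavelengths             # refractive index vs. wavelength

thickness = np.random.uniform(0, 1e-3, (64, 64))    # learnable height map of one layer (m)

def layer_phase(h, lam, n):
    """Phase delay a dielectric height map imparts at one wavelength."""
    return 2 * np.pi * (n - 1.0) * h / lam

for lam, n in zip(wavelengths, n_of_lambda):
    phi = layer_phase(thickness, lam, n)            # same layer, wavelength-dependent phase
    field = np.exp(1j * phi)                        # modulation seen by an incident plane wave
    print(f"lambda = {lam * 1e6:.0f} um, mean phase = {phi.mean():.2f} rad")
```
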
  • Patent number: 11392830
    Abstract: An all-optical Diffractive Deep Neural Network (D2NN) architecture learns to implement various functions or tasks after deep learning-based design of the passive diffractive or reflective substrate layers that work collectively to perform the desired function or task. This architecture was successfully confirmed experimentally by creating 3D-printed D2NNs that learned to implement handwritten digit classification and lens function in the terahertz spectrum. This all-optical deep learning framework can perform, at the speed of light, various complex functions and tasks that computer-based neural networks can implement, and will find applications in all-optical image analysis, feature detection and object classification, also enabling new camera designs and optical components that can learn to perform unique tasks using D2NNs. In alternative embodiments, the all-optical D2NN is used as a front-end in conjunction with a trained, digital neural network back-end.
    Type: Grant
    Filed: April 12, 2019
    Date of Patent: July 19, 2022
    Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Xing Lin, Deniz Mengu, Yi Luo
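
The forward model behind a D2NN amounts to free-space propagation between learned phase masks. The sketch below is a minimal, untrained version of that forward pass using the angular spectrum method; the grid spacing, wavelength, and layer separation are illustrative assumptions, and the patented designs are additionally trained so that the output intensity pattern encodes the answer.

```python
import numpy as np

def angular_spectrum(field, dx, wavelength, z):
    """Propagate a complex field a distance z via the angular spectrum method."""
    fx = np.fft.fftfreq(field.shape[0], d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.exp(1j * 2 * np.pi * z * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0] = 0.0                                # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

rng = np.random.default_rng(0)
layers = [np.exp(1j * rng.uniform(0, 2 * np.pi, (128, 128))) for _ in range(5)]

field = np.ones((128, 128), dtype=complex)          # plane wave; an input image would modulate this
for phase_mask in layers:                           # propagate to each layer, then modulate
    field = angular_spectrum(field, dx=0.4e-3, wavelength=0.75e-3, z=3e-3)
    field *= phase_mask
intensity = np.abs(field)**2                        # detector plane; class = brightest detector region
```
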
  • Publication number: 20220206434
    Abstract: A method for performing color image reconstruction of a single super-resolved holographic sample image includes obtaining a plurality of sub-pixel shifted lower resolution hologram images of the sample using an image sensor by simultaneous illumination at multiple color channels. Super-resolved hologram intensity images for each color channel are digitally generated based on the lower resolution hologram images. The super-resolved hologram intensity images for each color channel are back-propagated to an object plane with image processing software to generate real and imaginary input images of the sample for each color channel. A trained deep neural network, executed by the image processing software using one or more processors of a computing device, is configured to receive the real input image and the imaginary input image of the sample for each color channel and generate a color output image of the sample.
    Type: Application
    Filed: April 21, 2020
    Publication date: June 30, 2022
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Tairan Liu, Yibo Zhang, Zhensong Wei
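
The "sub-pixel shifted lower resolution hologram" step in the abstract above is a form of pixel super-resolution. The sketch below shows the simplest shift-and-add variant on synthetic data with idealized half-pixel shifts; it illustrates that one step only and stands apart from the back-propagation and trained-network stages of the method.

```python
import numpy as np

def shift_and_add(frames, shifts, f):
    """Interleave sub-pixel-shifted low-res frames onto an f-times finer grid."""
    h, w = frames[0].shape
    hi = np.zeros((h * f, w * f))
    hits = np.zeros((h * f, w * f))
    for img, (dy, dx) in zip(frames, shifts):
        ys = int(round(dy * f)) % f                 # sub-pixel shift -> fine-grid offset
        xs = int(round(dx * f)) % f
        hi[ys::f, xs::f] += img
        hits[ys::f, xs::f] += 1
    hits[hits == 0] = 1                             # leave unsampled fine pixels at zero
    return hi / hits

# Four quarter-sampled frames with half-pixel shifts recover a 2x denser image
truth = np.random.default_rng(1).random((64, 64))
frames = [truth[0::2, 0::2], truth[0::2, 1::2], truth[1::2, 0::2], truth[1::2, 1::2]]
shifts = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
print(np.allclose(shift_and_add(frames, shifts, f=2), truth))   # True in this idealized case
```
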
  • Publication number: 20220121940
    Abstract: A deep learning-based spectral analysis device and method are disclosed that employ a spectral encoder chip containing a plurality of nanohole array tiles, each with a unique geometry and, thus, a unique optical transmission spectrum. Illumination impinges upon the encoder chip and a CMOS image sensor captures the transmitted light, without any lenses, gratings, or other optical components. A spectral reconstruction neural network uses the transmitted intensities from the image to faithfully reconstruct the input spectrum. In one embodiment that used a spectral encoder chip with 252 nanohole array tiles, the network was trained on 50,352 spectra randomly generated by a supercontinuum laser and blindly tested on 14,648 unseen spectra. The system identified 96.86% of spectral peaks, with a peak localization error of 0.19 nm, peak height error of 7.60%, and peak bandwidth error of 0.18 nm.
    Type: Application
    Filed: October 17, 2021
    Publication date: April 21, 2022
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Calvin Brown, Artem Goncharov, Zachary Ballard, Yair Rivenson
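
The measurement model in the abstract above is linear: each nanohole tile reports the inner product of its transmission spectrum with the input spectrum. The sketch below simulates that encoding and inverts it with a plain least-squares solve as a stand-in for the patent's trained reconstruction network; the transmission matrix and test spectrum are synthetic assumptions, and recovery quality here is limited in ways the trained network is meant to overcome.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tiles, n_bins = 252, 512                    # tile count from the abstract; bin count assumed
T = rng.random((n_tiles, n_bins))             # stand-in transmission spectra of the tiles

# A smooth test spectrum with two peaks
lam = np.linspace(450, 850, n_bins)
spectrum = np.exp(-((lam - 550) / 8) ** 2) + 0.6 * np.exp(-((lam - 700) / 12) ** 2)

intensity = T @ spectrum                      # lens-free sensor reading: one value per tile
recon, *_ = np.linalg.lstsq(T, intensity, rcond=None)   # linear stand-in for the trained network
print(f"relative error: {np.linalg.norm(recon - spectrum) / np.linalg.norm(spectrum):.3f}")
```
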
  • Publication number: 20220122313
    Abstract: A deep learning-based volumetric image inference system and method are disclosed that use 2D images sparsely captured by a standard wide-field fluorescence microscope at arbitrary axial positions within the sample volume. Through a recurrent convolutional neural network, referred to herein as Recurrent-MZ, 2D fluorescence information from a few axial planes within the sample is explicitly incorporated to digitally reconstruct the sample volume over an extended depth-of-field. Using experiments on C. elegans and nanobead samples, Recurrent-MZ is demonstrated to increase the depth-of-field of a 63×/1.4 NA objective lens by approximately 50-fold, also providing a 30-fold reduction in the number of axial scans required to image the same sample volume. The generalization of this recurrent network for 3D imaging is further demonstrated by showing its resilience to varying imaging conditions.
    Type: Application
    Filed: October 19, 2021
    Publication date: April 21, 2022
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Luzhe Huang
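
A toy version of the Recurrent-MZ idea is sketched below: a shared convolutional encoder folds a variable number of 2D planes, each tagged with its axial position, into a recurrent hidden state, and a decoder renders an image at a queried depth. Every layer size, and the bare-bones conv-RNN cell itself, is an illustrative assumption rather than the patented architecture.

```python
import torch
import torch.nn as nn

class RecurrentZ(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.ch = ch
        self.encode = nn.Sequential(nn.Conv2d(2, ch, 3, padding=1), nn.ReLU())
        self.update = nn.Conv2d(2 * ch, ch, 3, padding=1)     # simple conv-RNN cell
        self.decode = nn.Conv2d(ch + 1, 1, 3, padding=1)

    def forward(self, planes, zs, z_query):
        # planes: (B, N, 1, H, W) input images; zs: (B, N) their axial positions
        B, N, _, H, W = planes.shape
        h = torch.zeros(B, self.ch, H, W)
        for i in range(N):                                    # fold planes in, one at a time
            z_map = zs[:, i].view(B, 1, 1, 1).expand(B, 1, H, W)
            x = self.encode(torch.cat([planes[:, i], z_map], dim=1))
            h = torch.tanh(self.update(torch.cat([h, x], dim=1)))
        q = z_query.view(B, 1, 1, 1).expand(B, 1, H, W)       # depth being queried
        return self.decode(torch.cat([h, q], dim=1))

net = RecurrentZ()
out = net(torch.rand(2, 3, 1, 32, 32), torch.rand(2, 3), torch.rand(2))
print(out.shape)                                              # torch.Size([2, 1, 32, 32])
```
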
  • Publication number: 20220114711
    Abstract: A microscopy method includes a trained deep neural network that is executed by software using one or more processors of a computing device, the trained deep neural network trained with a training set of images comprising co-registered pairs of high-resolution microscopy images or image patches of a sample and their corresponding low-resolution microscopy images or image patches of the same sample. A microscopy input image of a sample to be imaged is input to the trained deep neural network which rapidly outputs an output image of the sample, the output image having improved one or more of spatial resolution, depth-of-field, signal-to-noise ratio, and/or image contrast.
    Type: Application
    Filed: November 19, 2021
    Publication date: April 14, 2022
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Harun Gunaydin, Kevin de Haan
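
The training setup the abstract above describes reduces to supervised image-to-image regression on registered pairs. The sketch below shows that loop with a toy CNN, random tensors standing in for the co-registered low-/high-resolution patches, and an L1 loss; all of these are assumptions, not the patented network or loss.

```python
import torch
import torch.nn as nn

model = nn.Sequential(                               # toy enhancement network (assumption)
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
l1 = nn.L1Loss()

for step in range(100):
    low = torch.rand(4, 1, 64, 64)                   # stand-in registered low-res patch batch
    high = torch.rand(4, 1, 64, 64)                  # stand-in matching high-res targets
    loss = l1(model(low), high)                      # penalize pixel-wise disagreement
    opt.zero_grad(); loss.backward(); opt.step()
```
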
  • Publication number: 20220058776
    Abstract: A fluorescence microscopy method includes a trained deep neural network. At least one 2D fluorescence microscopy image of a sample is input to the trained deep neural network, wherein the input image(s) is appended with a digital propagation matrix (DPM) that represents, pixel-by-pixel, an axial distance of a user-defined or automatically generated surface within the sample from the plane of the input image. The trained deep neural network outputs fluorescence image(s) of the sample digitally propagated, or refocused, to the user-defined or automatically generated surface. The method and system cross-connect different imaging modalities, permitting 3D propagation of wide-field fluorescence image(s) to match confocal microscopy images at different sample planes. The method may be used to output a time sequence of images (e.g., time-lapse video) of a 2D or 3D surface within a sample.
    Type: Application
    Filed: December 23, 2019
    Publication date: February 24, 2022
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Yichen Wu
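
The distinctive input encoding here is the DPM appended to the fluorescence image. A minimal sketch of constructing that two-channel input for a tilted target surface follows; the field size, axial range, and tilt are assumed for illustration.

```python
import numpy as np

H = W = 256
fluo = np.random.rand(H, W).astype(np.float32)       # captured 2D fluorescence image (stand-in)

# DPM for a tilted target surface: the axial offset varies linearly across the field
z_row = np.linspace(-5.0, 5.0, W, dtype=np.float32)  # microns; range and tilt are assumptions
dpm = np.tile(z_row, (H, 1))                         # per-pixel distance to the target surface

net_input = np.stack([fluo, dpm])                    # (2, H, W): image channel + DPM channel
# a trained network maps net_input -> the image refocused onto that surface
```
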
  • Publication number: 20220012850
    Abstract: A trained deep neural network transforms an image of a sample obtained with a holographic microscope into an image that substantially resembles a microscopy image obtained with a microscope having a different imaging modality. Examples of different imaging modalities include bright-field, fluorescence, and dark-field. For bright-field applications, deep learning brings bright-field microscopy contrast to holographic images of a sample, bridging the volumetric imaging capability of holography with the speckle-free and artifact-free image contrast of bright-field microscopy.
    Type: Application
    Filed: November 14, 2019
    Publication date: January 13, 2022
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Yichen Wu
  • Patent number: 11222415
    Abstract: A microscopy method includes a trained deep neural network that is executed by software using one or more processors of a computing device, the trained deep neural network trained with a training set of images comprising co-registered pairs of high-resolution microscopy images or image patches of a sample and their corresponding low-resolution microscopy images or image patches of the same sample. A microscopy input image of a sample to be imaged is input to the trained deep neural network which rapidly outputs an output image of the sample, the output image having improved one or more of spatial resolution, depth-of-field, signal-to-noise ratio, and/or image contrast.
    Type: Grant
    Filed: April 26, 2019
    Date of Patent: January 11, 2022
    Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Harun Gunaydin, Kevin de Haan
  • Publication number: 20210264214
    Abstract: A deep learning-based digital staining method and system are disclosed that provide a label-free approach to create virtually-stained microscopic images from quantitative phase images (QPI) of label-free samples. The method bypasses the standard histochemical staining process, saving time and cost. This method is based on deep learning, and uses a convolutional neural network trained using a generative adversarial network model to transform QPI of an unlabeled sample into an image that is equivalent to the brightfield image of the chemically stained version of the same sample. This label-free digital staining method eliminates cumbersome and costly histochemical staining procedures, and would significantly simplify tissue preparation in pathology and histology fields.
    Type: Application
    Filed: March 29, 2019
    Publication date: August 26, 2021
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Zhensong Wei
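
The abstract above names the trainable machinery concretely: a convolutional generator fitted with a generative adversarial scheme against registered stained-tissue targets. The sketch below shows one pix2pix-style training step with toy networks and random tensors in place of registered QPI/stained-tissue patches; nothing here reproduces the patented architecture or losses.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 3, 3, padding=1))            # phase image -> RGB "stained" image
D = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                  nn.Conv2d(32, 1, 3, stride=2, padding=1))  # patch discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

phase = torch.rand(4, 1, 64, 64)     # stand-in QPI patches
stained = torch.rand(4, 3, 64, 64)   # stand-in registered brightfield images of stained tissue

# Discriminator step: separate real stained patches from generator output
fake = G(phase)
real_pred, fake_pred = D(stained), D(fake.detach())
d_loss = bce(real_pred, torch.ones_like(real_pred)) + \
         bce(fake_pred, torch.zeros_like(fake_pred))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator while matching the target pixel-wise
pred = D(fake)
g_loss = bce(pred, torch.ones_like(pred)) + 100.0 * l1(fake, stained)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```
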
  • Publication number: 20210142170
    Abstract: An all-optical Diffractive Deep Neural Network (D2NN) architecture learns to implement various functions or tasks after deep learning-based design of the passive diffractive or reflective substrate layers that work collectively to perform the desired function or task. This architecture was successfully confirmed experimentally by creating 3D-printed D2NNs that learned to implement handwritten digit classification and lens function in the terahertz spectrum. This all-optical deep learning framework can perform, at the speed of light, various complex functions and tasks that computer-based neural networks can implement, and will find applications in all-optical image analysis, feature detection and object classification, also enabling new camera designs and optical components that can learn to perform unique tasks using D2NNs. In alternative embodiments, the all-optical D2NN is used as a front-end in conjunction with a trained, digital neural network back-end.
    Type: Application
    Filed: April 12, 2019
    Publication date: May 13, 2021
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Xing Lin, Deniz Mengu, Yi Luo
  • Publication number: 20210043331
    Abstract: A deep learning-based digital staining method and system are disclosed that enable the creation of digitally/virtually-stained microscopic images from label-free or stain-free samples based on autofluorescence images acquired using a fluorescence microscope. The system and method have particular applicability for the creation of digitally/virtually-stained whole slide images (WSIs) of unlabeled/unstained tissue samples that are analyzed by a histopathologist. The method bypasses the standard histochemical staining process, saving time and cost. This method is based on deep learning, and uses, in one embodiment, a convolutional neural network trained using a generative adversarial network model to transform fluorescence images of an unlabeled sample into an image that is equivalent to the brightfield image of the chemically stained version of the same sample.
    Type: Application
    Filed: March 29, 2019
    Publication date: February 11, 2021
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Zhensong Wei
  • Publication number: 20190333199
    Abstract: A microscopy method includes a trained deep neural network that is executed by software using one or more processors of a computing device, the trained deep neural network trained with a training set of images comprising co-registered pairs of high-resolution microscopy images or image patches of a sample and their corresponding low-resolution microscopy images or image patches of the same sample. A microscopy input image of a sample to be imaged is input to the trained deep neural network which rapidly outputs an output image of the sample, the output image having improved one or more of spatial resolution, depth-of-field, signal-to-noise ratio, and/or image contrast.
    Type: Application
    Filed: April 26, 2019
    Publication date: October 31, 2019
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Harun Gunaydin, Kevin de Haan
  • Publication number: 20190294108
    Abstract: A method of performing phase retrieval and holographic image reconstruction of an imaged sample includes obtaining a single hologram intensity image of the sample using an imaging device. The single hologram intensity image is back-propagated to generate a real input image and an imaginary input image of the sample with image processing software, wherein the real input image and the imaginary input image contain twin-image and/or interference-related artifacts. A trained deep neural network is provided that is executed by the image processing software using one or more processors and configured to receive the real input image and the imaginary input image of the sample and generate an output real image and an output imaginary image in which the twin-image and/or interference-related artifacts are substantially suppressed or eliminated.
    Type: Application
    Filed: March 20, 2019
    Publication date: September 26, 2019
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Yichen Wu, Yibo Zhang, Harun Gunaydin
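
The back-propagation step described above can be sketched directly: treat the square root of the measured hologram intensity as the field amplitude and numerically propagate it back to the object plane, which yields the real and imaginary input images, twin-image artifact included, that the trained network is meant to clean up. The Fresnel transfer-function form, pixel pitch, wavelength, and distance below are illustrative assumptions.

```python
import numpy as np

def fresnel_propagate(field, dx, wavelength, z):
    """Propagate a complex field by z using the Fresnel transfer function."""
    fx = np.fft.fftfreq(field.shape[0], d=dx)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

hologram = np.random.rand(512, 512)                  # measured intensity image (stand-in)
amplitude = np.sqrt(hologram)                        # the phase is unknown at this point
obj = fresnel_propagate(amplitude, dx=1.12e-6, wavelength=530e-9, z=-300e-6)

real_in, imag_in = obj.real, obj.imag                # the two input channels to the network
```
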