Patents by Inventor Yair Rivenson

Yair Rivenson has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240135544
    Abstract: A deep learning-based digital staining method and system are disclosed that enable the creation of digitally/virtually-stained microscopic images from label-free or stain-free samples based on autofluorescence images acquired using a fluorescence microscope. The system and method have particular applicability for the creation of digitally/virtually-stained whole slide images (WSIs) of unlabeled/unstained tissue samples that are analyzed by a histopathologist. The methods bypass the standard histochemical staining process, saving time and cost. This method is based on deep learning, and uses, in one embodiment, a convolutional neural network trained using a generative adversarial network model to transform fluorescence images of an unlabeled sample into an image that is equivalent to the brightfield image of the chemically stained version of the same sample.
    Type: Application
    Filed: December 18, 2023
    Publication date: April 25, 2024
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Zhensong Wei
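
As a reading aid, here is a minimal sketch of the GAN-based virtual staining training loop the abstract above describes, written in PyTorch. The network shapes, loss weighting, and optimizer settings are illustrative assumptions, not details taken from the patent.

```python
# Minimal sketch of adversarial virtual-staining training (illustrative only).
import torch
import torch.nn as nn

# Generator: autofluorescence (1 channel) -> virtual brightfield stain (3 channels).
generator = nn.Sequential(
    nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
)
# Discriminator: judges whether a stained image is real (histochemical) or generated.
discriminator = nn.Sequential(
    nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, stride=2, padding=1),
)

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(autofluor, chem_stained, l1_weight=100.0):
    """One adversarial step: fake = G(autofluorescence), target = chemical stain."""
    fake = generator(autofluor)

    # Discriminator: push real images toward 1, generated images toward 0.
    d_real = discriminator(chem_stained)
    d_fake = discriminator(fake.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator: fool the discriminator while staying close to the true stain (L1).
    d_fake = discriminator(fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + l1_weight * l1(fake, chem_stained)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```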
  • Patent number: 11946854
    Abstract: A fluorescence microscopy method includes a trained deep neural network. At least one 2D fluorescence microscopy image of a sample is input to the trained deep neural network, wherein the input image(s) are appended with a digital propagation matrix (DPM) that represents, pixel-by-pixel, the axial distance of a user-defined or automatically generated surface within the sample from the plane of the input image. The trained deep neural network outputs fluorescence image(s) of the sample that are digitally propagated or refocused to the user-defined or automatically generated surface. The method and system cross-connect different imaging modalities, permitting 3D propagation of wide-field fluorescence image(s) to match confocal microscopy images at different sample planes. The method may be used to output a time sequence of images (e.g., time-lapse video) of a 2D or 3D surface within a sample.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: April 2, 2024
    Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Yichen Wu
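
The digital propagation matrix (DPM) described above reduces to appending a per-pixel axial-distance channel to the input image. A minimal sketch, assuming NumPy and micron units; the tilted target surface and array shapes are illustrative:

```python
import numpy as np

def make_network_input(fluor_img: np.ndarray, target_surface_um: np.ndarray) -> np.ndarray:
    """Stack the fluorescence image with its DPM as a 2-channel array.

    fluor_img:         (H, W) wide-field fluorescence image at the capture plane.
    target_surface_um: (H, W) axial distance (microns) from the capture plane to the
                       user-defined or automatically generated surface, per pixel.
    """
    assert fluor_img.shape == target_surface_um.shape
    return np.stack([fluor_img, target_surface_um], axis=0)  # (2, H, W)

# Example: refocus a 512x512 image onto a tilted plane ranging from -5 to +5 um.
img = np.random.rand(512, 512).astype(np.float32)
tilt = np.linspace(-5.0, 5.0, 512, dtype=np.float32)
dpm = np.broadcast_to(tilt[:, None], (512, 512)).copy()   # same tilt in every row
net_input = make_network_input(img, dpm)                  # fed to the trained network
```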
  • Patent number: 11915360
    Abstract: A deep learning-based volumetric image inference system and method are disclosed that use 2D images that are sparsely captured by a standard wide-field fluorescence microscope at arbitrary axial positions within the sample volume. Through a recurrent convolutional neural network (referred to herein as Recurrent-MZ), 2D fluorescence information from a few axial planes within the sample is explicitly incorporated to digitally reconstruct the sample volume over an extended depth-of-field. Using experiments on C. elegans and nanobead samples, Recurrent-MZ is demonstrated to increase the depth-of-field of a 63×/1.4 NA objective lens by approximately 50-fold, also providing a 30-fold reduction in the number of axial scans required to image the same sample volume. The generalization of this recurrent network for 3D imaging is further demonstrated by showing its resilience to varying imaging conditions.
    Type: Grant
    Filed: October 19, 2021
    Date of Patent: February 27, 2024
    Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Luzhe Huang
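
A minimal sketch of the recurrent fusion idea behind Recurrent-MZ: sparse 2D planes are folded, one at a time, into a convolutional hidden state that is then decoded into a volume. Written in PyTorch; all layer sizes are illustrative stand-ins, not the patented architecture.

```python
import torch
import torch.nn as nn

class RecurrentFusion(nn.Module):
    """Stand-in for the recurrent fusion idea; not the actual Recurrent-MZ network."""
    def __init__(self, feat=32, depth_out=16):
        super().__init__()
        self.feat = feat
        # Each step sees the current plane, its z-position map, and the hidden state.
        self.update = nn.Sequential(
            nn.Conv2d(2 + feat, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        # Decode the fused state into an extended-depth output stack.
        self.decode = nn.Conv2d(feat, depth_out, 3, padding=1)

    def forward(self, planes, z_positions):
        """planes: (N, 1, H, W) sparse 2D scans; z_positions: their axial coords (um)."""
        _, _, h, w = planes.shape
        state = torch.zeros(1, self.feat, h, w)
        for img, z in zip(planes, z_positions):
            z_map = torch.full((1, 1, h, w), float(z))   # broadcast z as a channel
            state = self.update(torch.cat([img[None], z_map, state], dim=1))
        return self.decode(state)                        # (1, depth_out, H, W)

model = RecurrentFusion()
scans = torch.rand(3, 1, 64, 64)                         # three sparse axial planes
volume = model(scans, z_positions=[-2.0, 0.0, 3.5])      # digitally reconstructed stack
```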
  • Patent number: 11893739
    Abstract: A deep learning-based digital staining method and system are disclosed that enable the creation of digitally/virtually-stained microscopic images from label-free or stain-free samples based on autofluorescence images acquired using a fluorescence microscope. The system and method have particular applicability for the creation of digitally/virtually-stained whole slide images (WSIs) of unlabeled/unstained tissue samples that are analyzed by a histopathologist. The methods bypass the standard histochemical staining process, saving time and cost. This method is based on deep learning, and uses, in one embodiment, a convolutional neural network trained using a generative adversarial network model to transform fluorescence images of an unlabeled sample into an image that is equivalent to the brightfield image of the chemically stained version of the same sample.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: February 6, 2024
    Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Zhensong Wei
  • Publication number: 20230401436
    Abstract: A method of forming an optical neural network for processing an input object image or optical signal that is invariant to object transformations includes training a software-based neural network model to perform one or more specific optical functions for a multi-layer optical network having physical features located in each of the layers of the optical neural network. The training includes feeding in different input object images or optical signals that have random transformations or shifts, computing at least one optical output of optical transmission and/or reflection through the optical neural network using an optical wave propagation model, and iteratively adjusting transmission/reflection coefficients for each layer until optimized transmission/reflection coefficients are obtained.
    Type: Application
    Filed: October 22, 2021
    Publication date: December 14, 2023
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Deniz Mengu, Yair Rivenson
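
The transformation-invariant training described above amounts to presenting each object with a fresh random shift on every pass through the optical wave propagation model. A minimal sketch of that augmentation step, assuming NumPy; the shift range is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_shift(img: np.ndarray, max_shift: int = 8) -> np.ndarray:
    """Randomly translate the input object, wrapping at the edges."""
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    return np.roll(img, shift=(dy, dx), axis=(0, 1))

def training_epoch_inputs(objects: list) -> list:
    """Each epoch presents a freshly transformed version of every object."""
    return [random_shift(obj) for obj in objects]

digit = np.zeros((28, 28))
digit[10:18, 12:16] = 1.0                            # a toy input object
shifted = random_shift(digit)                        # a new shifted variant each call
```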
  • Publication number: 20230401447
    Abstract: An all-optical Diffractive Deep Neural Network (D2NN) architecture learns to implement various functions or tasks after deep learning-based design of the passive diffractive or reflective substrate layers that work collectively to perform the desired function or task. This architecture was successfully confirmed experimentally by creating 3D-printed D2NNs that learned to implement handwritten digit classification and lens function at the terahertz spectrum. This all-optical deep learning framework can perform, at the speed of light, various complex functions and tasks that computer-based neural networks can implement, and will find applications in all-optical image analysis, feature detection and object classification, also enabling new camera designs and optical components that can learn to perform unique tasks using D2NNs. In alternative embodiments, the all-optical D2NN is used as a front-end in conjunction with a trained, digital neural network back-end.
    Type: Application
    Filed: May 12, 2023
    Publication date: December 14, 2023
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Xing Lin, Deniz Mengu, Yi Luo
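
A compact sketch of a D2NN forward pass as the abstract describes it: angular-spectrum propagation between passive, phase-only layers, with intensity detection at the output plane. The wavelength, pixel pitch, and layer spacing below are illustrative terahertz-scale values, not the patent's parameters.

```python
import numpy as np

def angular_spectrum(field, wavelength, pitch, distance):
    """Free-space propagation of a complex field over `distance` (angular spectrum)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    fxx, fyy = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength ** 2 - fxx ** 2 - fyy ** 2
    kernel = np.exp(1j * 2 * np.pi * distance * np.sqrt(np.maximum(arg, 0.0)))
    return np.fft.ifft2(np.fft.fft2(field) * kernel * (arg > 0))  # drop evanescent waves

def d2nn_forward(input_field, phase_layers, wavelength=0.75e-3, pitch=0.4e-3, gap=3e-3):
    """Pass a field through passive phase-only layers; the detector reads intensity."""
    field = input_field
    for phase in phase_layers:
        field = angular_spectrum(field, wavelength, pitch, gap)
        field = field * np.exp(1j * phase)               # learned passive modulation
    field = angular_spectrum(field, wavelength, pitch, gap)
    return np.abs(field) ** 2                            # output-plane intensity

layers = [np.random.uniform(0, 2 * np.pi, (128, 128)) for _ in range(5)]
output = d2nn_forward(np.ones((128, 128), dtype=complex), layers)
```

In a trained device, the per-layer phase maps would come from deep learning-based design rather than the random values used here.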
  • Publication number: 20230394716
    Abstract: A method of generating virtually-stained image annotations. The method includes providing a neural network executed by image processing software. The image processing software runs on a processor of a computing device. The method includes training the neural network with a plurality of chemically stained patterns of endogenous signals to identify virtual staining patterns. The method includes producing and annotating an image of a biological sample for a biomarker. The biological sample includes endogenous signals. The method includes identifying, using the trained neural network, the virtual staining patterns in the image of the biological sample. Lastly, the method includes overlaying the virtual staining patterns in the image of the biological sample with annotations using spatial matching to produce virtually-stained image annotations.
    Type: Application
    Filed: May 18, 2023
    Publication date: December 7, 2023
    Applicant: PICTOR LABS, INC.
    Inventors: Kevin de HAAN, Yair Rivenson, Francesco Colonnese, Tairan Liu, Raymond Kozikowski
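
The final overlay step described above can be sketched as a spatially matched alpha blend of the identified staining pattern onto the sample image. The color and blending weight here are illustrative assumptions:

```python
import numpy as np

def overlay_annotations(image_rgb, stain_mask, color=(0.0, 1.0, 0.0), alpha=0.4):
    """Blend a biomarker staining mask onto the sample image.

    image_rgb:  (H, W, 3) float image in [0, 1].
    stain_mask: (H, W) boolean mask where the network found the staining pattern.
    """
    out = image_rgb.copy()
    out[stain_mask] = (1 - alpha) * out[stain_mask] + alpha * np.asarray(color)
    return out

img = np.random.rand(256, 256, 3)
mask = np.zeros((256, 256), dtype=bool)
mask[100:150, 80:200] = True                  # pretend the network flagged this region
annotated = overlay_annotations(img, mask)    # virtually-stained image annotation
```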
  • Publication number: 20230251189
    Abstract: A diffractive network is disclosed that utilizes, in some embodiments, diffractive elements to shape an arbitrary broadband pulse into a desired optical waveform, forming a compact and passive pulse engineering system. The diffractive network was experimentally shown to generate various different pulses by designing passive diffractive layers that collectively engineer the temporal waveform of an input terahertz pulse. The results constitute the first demonstration of direct pulse shaping in the terahertz spectrum, where the amplitude and phase of the input wavelengths are independently controlled through a passive diffractive device, without the need for an external pump. Furthermore, a modular physical transfer learning approach is presented to illustrate pulse-width tunability by replacing part of an existing diffractive network with newly trained diffractive layers. This learning-based diffractive pulse engineering framework can find broad applications.
    Type: Application
    Filed: June 28, 2021
    Publication date: August 10, 2023
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Deniz Mengu, Yair Rivenson, Muhammed Veli
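
In frequency-domain terms, the pulse shaping described above applies an independently learned amplitude and phase to each input wavelength. A toy NumPy sketch; the time window, pulse width, and transfer function are illustrative, not the patented design:

```python
import numpy as np

# 40 ps time window sampled at 4096 points.
t = np.linspace(-20e-12, 20e-12, 4096)
dt = t[1] - t[0]
pulse_in = np.exp(-(t / 0.5e-12) ** 2)            # broadband input THz pulse (~0.5 ps)

spectrum = np.fft.fft(pulse_in)
freqs = np.fft.fftfreq(t.size, d=dt)

# Per-frequency complex transfer function that the passive layers are trained to
# realize; this toy choice chirps (stretches) the pulse with a quadratic phase.
transfer = np.exp(1j * 2 * np.pi * 1e-24 * freqs ** 2)
pulse_out = np.fft.ifft(spectrum * transfer)      # complex field; np.abs() = envelope
```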
  • Patent number: 11694082
    Abstract: An all-optical Diffractive Deep Neural Network (D2NN) architecture learns to implement various functions or tasks after deep learning-based design of the passive diffractive or reflective substrate layers that work collectively to perform the desired function or task. This architecture was successfully confirmed experimentally by creating 3D-printed D2NNs that learned to implement handwritten digit classification and lens function at the terahertz spectrum. This all-optical deep learning framework can perform, at the speed of light, various complex functions and tasks that computer-based neural networks can implement, and will find applications in all-optical image analysis, feature detection and object classification, also enabling new camera designs and optical components that can learn to perform unique tasks using D2NNs. In alternative embodiments, the all-optical D2NN is used as a front-end in conjunction with a trained, digital neural network back-end.
    Type: Grant
    Filed: June 17, 2022
    Date of Patent: July 4, 2023
    Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Xing Lin, Deniz Mengu, Yi Luo
  • Publication number: 20230153600
    Abstract: A machine vision task, machine learning task, and/or classification of objects is performed using a diffractive optical neural network device. Light from objects passes through or reflects off the diffractive optical neural network device formed by multiple substrate layers. The diffractive optical neural network device defines a trained function, created by optical diffraction and/or reflection through/off the substrate layers, between an input optical signal from the object light illuminated at a plurality or a continuum of wavelengths and an output optical signal corresponding to one or more unique wavelengths or sets of wavelengths assigned to represent distinct data classes or object types/classes. Output light is captured with detector(s) that generate a signal or data comprising the assigned wavelengths or sets of wavelengths, which are used to perform the task or classification.
    Type: Application
    Filed: May 4, 2021
    Publication date: May 18, 2023
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Jingxi Li, Deniz Mengu, Yair Rivenson
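
The wavelength-coded readout above reduces to finding which class-assigned spectral component carries the most detected power. A minimal sketch; the class-to-wavelength assignment below is a made-up example:

```python
import numpy as np

CLASS_WAVELENGTHS_NM = {"cat": 450.0, "dog": 550.0, "car": 650.0}  # assumed assignment

def classify(detected_spectrum: np.ndarray, wavelengths_nm: np.ndarray) -> str:
    """Pick the class whose assigned wavelength carries the most detected power."""
    powers = {
        label: detected_spectrum[np.argmin(np.abs(wavelengths_nm - wl))]
        for label, wl in CLASS_WAVELENGTHS_NM.items()
    }
    return max(powers, key=powers.get)

wl = np.linspace(400, 700, 301)
spectrum = np.exp(-((wl - 550.0) / 10.0) ** 2)    # diffractive output peaking at 550 nm
print(classify(spectrum, wl))                      # -> "dog"
```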
  • Publication number: 20230085827
    Abstract: A deep learning-based offline autofocusing method and system, termed Deep-R, are disclosed herein: a trained neural network rapidly and blindly autofocuses a single-shot microscopy image of a sample or specimen that is acquired at an arbitrary out-of-focus plane. The efficacy of Deep-R is illustrated using various tissue sections that were imaged using fluorescence and brightfield microscopy modalities, demonstrating single-snapshot autofocusing under different scenarios, such as a uniform axial defocus as well as a sample tilt within the field-of-view. Deep-R is significantly faster when compared with standard online algorithmic autofocusing methods. This deep learning-based blind autofocusing framework opens up new opportunities for rapid microscopic imaging of large sample areas, also reducing the photon dose on the sample.
    Type: Application
    Filed: March 18, 2021
    Publication date: March 23, 2023
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Yilin Luo, Luzhe Huang
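
Operationally, the single-snapshot autofocusing above is one forward pass of a trained network over one defocused capture, with no focus stack or stage feedback. A PyTorch sketch with a stand-in CNN (not the actual Deep-R architecture):

```python
import torch
import torch.nn as nn

refocus_net = nn.Sequential(                      # stand-in for the trained Deep-R model
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)

@torch.no_grad()
def autofocus(single_shot: torch.Tensor) -> torch.Tensor:
    """single_shot: (1, 1, H, W) image captured at an arbitrary defocus plane."""
    return refocus_net(single_shot)

blurred = torch.rand(1, 1, 256, 256)
in_focus = autofocus(blurred)                      # one forward pass, no axial scan
```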
  • Publication number: 20230060037
    Abstract: A system for the detection and classification of live microorganisms in a sample includes a light source and an incubator holding one or more sample-containing growth plates. A translation stage moves the image sensor and/or the growth plate(s) along one or more dimensions to capture time-lapse holographic images of microorganisms. Image processing software executed by a computing device captures time-lapse holographic images of the microorganisms or clusters of microorganisms on the one or more growth plates. The image processing software is configured to detect candidate microorganism colonies in reconstructed, time-lapse holographic images based on differential image analysis. The image processing software includes one or more trained deep neural networks that process the time-lapse image(s) of candidate microorganism colonies to detect true microorganism colonies and/or output a species associated with each true microorganism colony.
    Type: Application
    Filed: January 27, 2021
    Publication date: February 23, 2023
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Hatice Ceylan Koydemir, Yunzhe Qiu
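
A minimal sketch of the differential-image candidate detection stage: growing colonies show up as localized changes between consecutive reconstructed frames. The threshold and minimum-area values are illustrative, and SciPy's connected-component labeling stands in for whatever segmentation the patent specifies:

```python
import numpy as np
from scipy import ndimage

def find_candidate_colonies(frame_t0, frame_t1, threshold=0.1, min_area=20):
    """Return (y, x) centers of regions that changed between two reconstructed frames."""
    diff = np.abs(frame_t1 - frame_t0)               # differential image
    labels, n = ndimage.label(diff > threshold)      # connected changed regions
    centers = []
    for i in range(1, n + 1):
        region = labels == i
        if region.sum() >= min_area:                 # ignore sensor-noise specks
            centers.append(ndimage.center_of_mass(region))
    return centers                                   # candidates for the CNN stage

t0 = np.zeros((128, 128))
t1 = t0.copy()
t1[40:50, 60:72] = 0.5                               # a colony appears and grows
print(find_candidate_colonies(t0, t1))               # -> [(44.5, 65.5)]
```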
  • Publication number: 20230030424
    Abstract: A deep learning-based digital/virtual staining method and system enables the creation of digitally/virtually-stained microscopic images from label-free or stain-free samples. In one embodiment, the method generates digitally/virtually-stained microscope images of label-free or unstained samples from fluorescence lifetime imaging (FLIM) image(s) of the sample(s) acquired using a fluorescence microscope. In another embodiment, a digital/virtual autofocusing method is provided that uses machine learning to generate a microscope image with improved focus using a trained, deep neural network. In another embodiment, a trained deep neural network generates digitally/virtually stained microscopic images, having multiple different stains, of a label-free or unstained sample obtained with a microscope. The multiple stains in the output image or sub-regions thereof are substantially equivalent to the corresponding microscopic images or image sub-regions of the same sample that has been histochemically stained.
    Type: Application
    Filed: December 22, 2020
    Publication date: February 2, 2023
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Yilin Luo, Kevin de Haan, Yijie Zhang, Bijie Bai
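
One way to realize the multi-stain embodiment mentioned above is to condition a single network with a stain-selector channel appended to the label-free input. A sketch under that assumption; the encoding and the stain list are illustrative, not taken from the patent:

```python
import torch

STAINS = ["H&E", "Masson's trichrome", "Jones silver"]   # illustrative stain list

def conditioned_input(label_free: torch.Tensor, stain: str) -> torch.Tensor:
    """label_free: (1, 1, H, W); returns an input with one constant channel per stain."""
    _, _, h, w = label_free.shape
    onehot = torch.zeros(1, len(STAINS), h, w)
    onehot[0, STAINS.index(stain)] = 1.0           # flag the requested stain everywhere
    return torch.cat([label_free, onehot], dim=1)  # (1, 1 + len(STAINS), H, W)

img = torch.rand(1, 1, 256, 256)
net_input = conditioned_input(img, "Masson's trichrome")  # fed to one trained network
```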
  • Patent number: 11514325
    Abstract: A method of performing phase retrieval and holographic image reconstruction of an imaged sample includes obtaining a single hologram intensity image of the sample using an imaging device. The single hologram intensity image is back-propagated to generate a real input image and an imaginary input image of the sample with image processing software, wherein the real input image and the imaginary input image contain twin-image and/or interference-related artifacts. A trained deep neural network is provided that is executed by the image processing software using one or more processors and configured to receive the real input image and the imaginary input image of the sample and generate an output real image and an output imaginary image in which the twin-image and/or interference-related artifacts are substantially suppressed or eliminated.
    Type: Grant
    Filed: March 20, 2019
    Date of Patent: November 29, 2022
    Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Yichen Wu, Yibo Zhang, Harun Gunaydin
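
A sketch of the preprocessing the abstract describes: the single hologram is numerically back-propagated to the object plane, and the resulting real and imaginary channels (still carrying twin-image artifacts) are handed to the trained network. The wavelength, pixel pitch, and sample distance are plausible lensfree-imaging values, not the patent's:

```python
import numpy as np

def back_propagate(hologram, wavelength, pitch, z):
    """Angular-spectrum propagation of the measured hologram back to the object plane."""
    n = hologram.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    fxx, fyy = np.meshgrid(fx, fx)
    arg = np.maximum(1.0 / wavelength ** 2 - fxx ** 2 - fyy ** 2, 0.0)
    kernel = np.exp(-1j * 2 * np.pi * z * np.sqrt(arg))
    return np.fft.ifft2(np.fft.fft2(hologram.astype(complex)) * kernel)

holo = np.random.rand(512, 512)                    # single measured hologram intensity
field = back_propagate(holo, wavelength=530e-9, pitch=1.12e-6, z=300e-6)
net_input = np.stack([field.real, field.imag])     # 2 channels for the trained network
```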
  • Publication number: 20220366253
    Abstract: An all-optical Diffractive Deep Neural Network (D2NN) architecture learns to implement various functions or tasks after deep learning-based design of the passive diffractive or reflective substrate layers that work collectively to perform the desired function or task. This architecture was successfully confirmed experimentally by creating 3D-printed D2NNs that learned to implement handwritten digit classification and lens function at the terahertz spectrum. This all-optical deep learning framework can perform, at the speed of light, various complex functions and tasks that computer-based neural networks can implement, and will find applications in all-optical image analysis, feature detection and object classification, also enabling new camera designs and optical components that can learn to perform unique tasks using D2NNs. In alternative embodiments, the all-optical D2NN is used as a front-end in conjunction with a trained, digital neural network back-end.
    Type: Application
    Filed: June 17, 2022
    Publication date: November 17, 2022
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Xing Lin, Deniz Mengu, Yi Luo
  • Publication number: 20220327371
    Abstract: A diffractive optical neural network device includes a plurality of diffractive substrate layers arranged in an optical path. The substrate layers are formed with physical features across surfaces thereof that collectively define a trained mapping function between an optical input and an optical output. A plurality of groups of optical sensors are configured to sense and detect the optical output, wherein each group of optical sensors has at least one optical sensor configured to capture a positive signal from the optical output and at least one optical sensor configured to capture a negative signal from the optical output. Circuitry and/or computer software receives signals or data from the optical sensors and identifies the group of optical sensors in which a normalized differential signal, calculated from the positive and negative optical sensors within each group, is the largest or smallest among all the groups.
    Type: Application
    Filed: June 5, 2020
    Publication date: October 13, 2022
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Jingxi Li, Deniz Mengu, Yi Luo
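
The group readout described above is straightforward to express: each class's score is the normalized difference of its positive and negative detector signals. A minimal sketch with made-up sensor readings:

```python
import numpy as np

def classify_differential(pos: np.ndarray, neg: np.ndarray) -> int:
    """pos/neg: per-group optical sensor readings; returns the winning group index."""
    normalized = (pos - neg) / (pos + neg)         # normalized differential signal
    return int(np.argmax(normalized))

pos = np.array([0.62, 0.35, 0.51])                 # positive detectors, one per class
neg = np.array([0.30, 0.33, 0.49])                 # paired negative detectors
print(classify_differential(pos, neg))             # -> 0
```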
  • Publication number: 20220253685
    Abstract: A broadband diffractive optical neural network simultaneously processes a continuum of wavelengths generated by a temporally-incoherent broadband source to all-optically perform a specific task learned through deep learning. The optical neural network design was verified by designing, fabricating, and testing seven different multi-layer, diffractive optical systems that transform the optical wavefront generated by a broadband THz pulse to realize (1) a series of tunable, single-passband as well as dual-passband spectral filters, and (2) spatially-controlled wavelength de-multiplexing. Merging the native or engineered dispersion of various material systems with a deep learning-based design, broadband diffractive optical neural networks help engineer light-matter interaction in 3D, diverging from intuitive and analytical design methods to create task-specific optical components that can all-optically perform deterministic tasks or statistical inference for optical machine learning.
    Type: Application
    Filed: September 9, 2020
    Publication date: August 11, 2022
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yi Luo, Deniz Mengu, Yair Rivenson
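
Simulating the broadband operation described above means propagating the same passive layers at many wavelengths of the incoherent source and summing detector intensities across wavelengths. A simplified sketch; the thickness-to-phase model ignores material dispersion, and all parameter values are illustrative:

```python
import numpy as np

def propagate(field, wavelength, pitch=0.3e-3, z=2e-3):
    """Simple angular-spectrum free-space step (parameters are illustrative)."""
    fx = np.fft.fftfreq(field.shape[0], d=pitch)
    fxx, fyy = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength ** 2 - fxx ** 2 - fyy ** 2
    kernel = np.exp(1j * 2 * np.pi * z * np.sqrt(np.maximum(arg, 0.0))) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * kernel)

def broadband_intensity(input_field, thickness_maps, wavelengths, delta_n=0.6):
    """Incoherent sum of output intensities across the source's wavelength continuum."""
    total = np.zeros(input_field.shape)
    for wl in wavelengths:
        field = input_field.astype(complex)
        for t_map in thickness_maps:
            field = propagate(field, wl)
            field = field * np.exp(1j * 2 * np.pi * delta_n * t_map / wl)
        total += np.abs(propagate(field, wl)) ** 2   # detector adds intensities
    return total

layers = [np.random.uniform(0, 1e-3, (64, 64)) for _ in range(3)]  # thicknesses (m)
wls = np.linspace(0.6e-3, 1.0e-3, 5)                               # THz-band wavelengths
output = broadband_intensity(np.ones((64, 64)), layers, wls)
```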
  • Patent number: 11392830
    Abstract: An all-optical Diffractive Deep Neural Network (D2NN) architecture learns to implement various functions or tasks after deep learning-based design of the passive diffractive or reflective substrate layers that work collectively to perform the desired function or task. This architecture was successfully confirmed experimentally by creating 3D-printed D2NNs that learned to implement handwritten digit classification and lens function at the terahertz spectrum. This all-optical deep learning framework can perform, at the speed of light, various complex functions and tasks that computer-based neural networks can implement, and will find applications in all-optical image analysis, feature detection and object classification, also enabling new camera designs and optical components that can learn to perform unique tasks using D2NNs. In alternative embodiments, the all-optical D2NN is used as a front-end in conjunction with a trained, digital neural network back-end.
    Type: Grant
    Filed: April 12, 2019
    Date of Patent: July 19, 2022
    Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Xing Lin, Deniz Mengu, Yi Luo
  • Publication number: 20220206434
    Abstract: A method for performing color image reconstruction of a single super-resolved holographic sample image includes obtaining a plurality of sub-pixel shifted lower-resolution hologram images of the sample using an image sensor by simultaneous illumination at multiple color channels. Super-resolved hologram intensity images for each color channel are digitally generated based on the lower-resolution hologram images. The super-resolved hologram intensity images for each color channel are back-propagated to an object plane with image processing software to generate real and imaginary input images of the sample for each color channel. A trained deep neural network is provided that is executed by image processing software using one or more processors of a computing device and is configured to receive the real input image and the imaginary input image of the sample for each color channel and generate a color output image of the sample.
    Type: Application
    Filed: April 21, 2020
    Publication date: June 30, 2022
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Tairan Liu, Yibo Zhang, Zhensong Wei
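
A sketch of the pixel super-resolution step: sub-pixel-shifted low-resolution holograms are placed onto a finer grid to synthesize one higher-resolution hologram per color channel. The shift-and-place scheme below is a simplified illustration, not the patented algorithm:

```python
import numpy as np

def super_resolve(lr_holograms, shifts, factor=4):
    """lr_holograms: list of (H, W) frames; shifts: sub-pixel (dy, dx) in LR pixels."""
    h, w = lr_holograms[0].shape
    acc = np.zeros((h * factor, w * factor))
    count = np.zeros_like(acc)
    for frame, (dy, dx) in zip(lr_holograms, shifts):
        yy = (np.arange(h) * factor + round(dy * factor)) % (h * factor)
        xx = (np.arange(w) * factor + round(dx * factor)) % (w * factor)
        acc[np.ix_(yy, xx)] += frame                 # place frame at its shifted spot
        count[np.ix_(yy, xx)] += 1
    return acc / np.maximum(count, 1)                # average; unvisited pixels stay 0

frames = [np.random.rand(64, 64) for _ in range(16)]
shifts = [(i / 4.0, j / 4.0) for i in range(4) for j in range(4)]  # 4x4 sub-pixel grid
hr = super_resolve(frames, shifts)                   # (256, 256) super-resolved hologram
```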
  • Publication number: 20220121940
    Abstract: A deep learning-based spectral analysis device and method are disclosed that employ a spectral encoder chip containing a plurality of nanohole array tiles, each with a unique geometry and, thus, a unique optical transmission spectrum. Illumination impinges upon the encoder chip and a CMOS image sensor captures the transmitted light, without any lenses, gratings, or other optical components. A spectral reconstruction neural network uses the transmitted intensities from the image to faithfully reconstruct the input spectrum. In one embodiment that used a spectral encoder chip with 252 nanohole array tiles, the network was trained on 50,352 spectra randomly generated by a supercontinuum laser and blindly tested on 14,648 unseen spectra. The system identified 96.86% of spectral peaks, with a peak localization error of 0.19 nm, a peak height error of 7.60%, and a peak bandwidth error of 0.18 nm.
    Type: Application
    Filed: October 17, 2021
    Publication date: April 21, 2022
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Calvin Brown, Artem Goncharov, Zachary Ballard, Yair Rivenson
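
The reconstruction stage described above maps the per-tile transmitted intensities directly to a spectrum with a trained network. A PyTorch sketch with a stand-in MLP; only the 252-tile count comes from the abstract, and the layer widths and output grid are illustrative assumptions:

```python
import torch
import torch.nn as nn

N_TILES = 252            # nanohole array tiles on the encoder chip (from the abstract)
N_SPECTRAL_BINS = 1024   # assumed wavelength sampling of the reconstructed spectrum

reconstruction_net = nn.Sequential(    # stand-in for the trained reconstruction network
    nn.Linear(N_TILES, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, N_SPECTRAL_BINS),
)

tile_intensities = torch.rand(1, N_TILES)        # lensless CMOS readout, one per tile
spectrum = reconstruction_net(tile_intensities)  # reconstructed input spectrum
```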