Patents by Inventor Aydogan Ozcan

Aydogan Ozcan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240135544
    Abstract: A deep learning-based digital staining method and system are disclosed that enable the creation of digitally/virtually-stained microscopic images from label-free or stain-free samples based on autofluorescence images acquired using a fluorescence microscope. The system and method have particular applicability for the creation of digitally/virtually-stained whole slide images (WSIs) of unlabeled/unstained tissue samples that are analyzed by a histopathologist. The methods bypass the standard histochemical staining process, saving time and cost. The method is based on deep learning and, in one embodiment, uses a convolutional neural network trained using a generative adversarial network model to transform fluorescence images of an unlabeled sample into an image that is equivalent to the brightfield image of the chemically stained version of the same sample.
    Type: Application
    Filed: December 18, 2023
    Publication date: April 25, 2024
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Zhensong Wei
  • Patent number: 11946854
    Abstract: A fluorescence microscopy method uses a trained deep neural network. At least one 2D fluorescence microscopy image of a sample is input to the trained deep neural network, wherein the input image(s) is appended with a digital propagation matrix (DPM) that represents, pixel-by-pixel, an axial distance of a user-defined or automatically generated surface within the sample from a plane of the input image. The trained deep neural network outputs fluorescence output image(s) of the sample that are digitally propagated or refocused to the user-defined or automatically generated surface. The method and system cross-connect different imaging modalities, permitting 3D propagation of wide-field fluorescence image(s) to match confocal microscopy images at different sample planes. The method may be used to output a time sequence of images (e.g., time-lapse video) of a 2D or 3D surface within a sample.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: April 2, 2024
    Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Yichen Wu
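The core input-conditioning idea in the abstract above, appending a per-pixel map of axial refocusing distances to the fluorescence image, can be sketched as follows. This is an illustrative construction only, not the patented implementation; the function names and the two-channel layout are hypothetical.

```python
# Illustrative sketch: building the two-channel network input described
# above -- a fluorescence image plus a digital propagation matrix (DPM)
# giving, pixel by pixel, the axial distance to the target surface.
# Names and layout are hypothetical, not taken from the patent.

def make_dpm(height, width, surface_fn):
    """Build a DPM: surface_fn(row, col) returns the axial distance
    (e.g., in microns) from the input focal plane to the output surface."""
    return [[float(surface_fn(r, c)) for c in range(width)]
            for r in range(height)]

def stack_input(image, dpm):
    """Append the DPM to the image as a second channel: shape (2, H, W)."""
    assert len(image) == len(dpm) and len(image[0]) == len(dpm[0])
    return [image, dpm]

# A uniform refocus of +3 um is a constant DPM; a tilted target surface
# varies linearly across the field of view.
img = [[0.0] * 4 for _ in range(4)]
uniform = make_dpm(4, 4, lambda r, c: 3.0)
tilted = make_dpm(4, 4, lambda r, c: 0.5 * c)
x = stack_input(img, tilted)
```

Because the DPM is just an extra input channel, the same trained network can refocus to flat, tilted, or curved surfaces without retraining.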
  • Patent number: 11915360
    Abstract: A deep learning-based volumetric image inference system and method are disclosed that use 2D images sparsely captured by a standard wide-field fluorescence microscope at arbitrary axial positions within the sample volume. Through a recurrent convolutional neural network, referred to herein as Recurrent-MZ, 2D fluorescence information from a few axial planes within the sample is explicitly incorporated to digitally reconstruct the sample volume over an extended depth-of-field. Using experiments on C. elegans and nanobead samples, Recurrent-MZ is demonstrated to increase the depth-of-field of a 63×/1.4 NA objective lens by approximately 50-fold, also providing a 30-fold reduction in the number of axial scans required to image the same sample volume. The generalization of this recurrent network for 3D imaging is further demonstrated by showing its resilience to varying imaging conditions.
    Type: Grant
    Filed: October 19, 2021
    Date of Patent: February 27, 2024
    Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Luzhe Huang
  • Patent number: 11893739
    Abstract: A deep learning-based digital staining method and system are disclosed that enable the creation of digitally/virtually-stained microscopic images from label-free or stain-free samples based on autofluorescence images acquired using a fluorescence microscope. The system and method have particular applicability for the creation of digitally/virtually-stained whole slide images (WSIs) of unlabeled/unstained tissue samples that are analyzed by a histopathologist. The methods bypass the standard histochemical staining process, saving time and cost. The method is based on deep learning and, in one embodiment, uses a convolutional neural network trained using a generative adversarial network model to transform fluorescence images of an unlabeled sample into an image that is equivalent to the brightfield image of the chemically stained version of the same sample.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: February 6, 2024
    Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Zhensong Wei
  • Patent number: 11893779
    Abstract: Systems and methods for detecting motile objects (e.g., parasites) in a fluid sample by utilizing the locomotion of the parasites as a specific biomarker and endogenous contrast mechanism. The imaging platform includes one or more substantially optically transparent sample holders. The imaging platform has a moveable scanning head containing light sources and corresponding image sensor(s) associated with the light source(s). The light source(s) are directed at a respective sample holder containing a sample, and the respective image sensor(s) are positioned below a respective sample holder to capture time-varying holographic speckle patterns of the sample contained in the sample holder. A computing device is configured to receive the time-varying holographic speckle pattern image sequences obtained by the image sensor(s). The computing device generates a 3D contrast map of motile objects within the sample and uses deep learning-based classifier software to identify the motile objects.
    Type: Grant
    Filed: October 18, 2019
    Date of Patent: February 6, 2024
    Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yibo Zhang, Hatice Ceylan Koydemir
  • Publication number: 20230401436
    Abstract: A method of forming an optical neural network for processing an input object image or optical signal that is invariant to object transformations includes training a software-based neural network model to perform one or more specific optical functions for a multi-layer optical network having physical features located in each of the layers of the optical neural network. The training includes feeding different input object images or optical signals that have random transformations or shifts, computing at least one optical output of optical transmission and/or reflection through the optical neural network using an optical wave propagation model, and iteratively adjusting the transmission/reflection coefficients of each layer until optimized coefficients are obtained.
    Type: Application
    Filed: October 22, 2021
    Publication date: December 14, 2023
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Deniz Mengu, Yair Rivenson
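The "random transformations or shifts" step above is essentially data augmentation applied before each training pass, so the optimized diffractive layers become insensitive to object position. A minimal sketch of that augmentation step, with hypothetical function names and toy data, might look like:

```python
# Illustrative sketch of the augmentation step described above: each
# training example is fed with a random lateral shift so the optimized
# layers become invariant to object position. Names are hypothetical.
import random

def random_shift(image, max_shift, rng):
    """Return a copy of `image` (list of rows) shifted by a random number
    of pixels in each axis, zero-padded at the edges."""
    h, w = len(image), len(image[0])
    dr = rng.randint(-max_shift, max_shift)
    dc = rng.randint(-max_shift, max_shift)
    out = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w:
                out[rr][cc] = image[r][c]
    return out

rng = random.Random(0)
img = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]  # toy object at the center
shifted = random_shift(img, 1, rng)
```

In the patented approach the shifted input would then pass through the wave-propagation model before the layer coefficients are updated; the shift itself is the only part sketched here.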
  • Publication number: 20230401447
    Abstract: An all-optical Diffractive Deep Neural Network (D2NN) architecture learns to implement various functions or tasks after deep learning-based design of the passive diffractive or reflective substrate layers that work collectively to perform the desired function or task. This architecture was successfully confirmed experimentally by creating 3D-printed D2NNs that learned to implement handwritten classifications and lens function at the terahertz spectrum. This all-optical deep learning framework can perform, at the speed of light, various complex functions and tasks that computer-based neural networks can implement, and will find applications in all-optical image analysis, feature detection and object classification, also enabling new camera designs and optical components that can learn to perform unique tasks using D2NNs. In alternative embodiments, the all-optical D2NN is used as a front-end in conjunction with a trained, digital neural network back-end.
    Type: Application
    Filed: May 12, 2023
    Publication date: December 14, 2023
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Xing Lin, Deniz Mengu, Yi Luo
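A D2NN forward pass alternates two operations: each passive layer applies a learned per-pixel phase, and the field then diffracts to the next layer. The sketch below illustrates this with a crude direct summation of spherical wavelets on a tiny grid; it is a stand-in for the patent's wave-propagation model, and all parameters are toy values.

```python
# Illustrative sketch of one forward pass through a D2NN-style stack.
# The direct summation below approximates free-space diffraction
# (Rayleigh-Sommerfeld style, unnormalized); not the patented model.
import cmath
import math

def propagate(field, pitch, dist, wavelength):
    """Diffract a 2D complex field by `dist` via direct summation of
    spherical wavelets from every input pixel to every output pixel."""
    h, w = len(field), len(field[0])
    k = 2 * math.pi / wavelength
    out = [[0j] * w for _ in range(h)]
    for ro in range(h):
        for co in range(w):
            acc = 0j
            for ri in range(h):
                for ci in range(w):
                    dx = (co - ci) * pitch
                    dy = (ro - ri) * pitch
                    r = math.sqrt(dx * dx + dy * dy + dist * dist)
                    acc += field[ri][ci] * cmath.exp(1j * k * r) / r
            out[ro][co] = acc
    return out

def apply_layer(field, phase):
    """A passive diffractive layer: per-pixel learned phase modulation."""
    return [[f * cmath.exp(1j * p) for f, p in zip(fr, pr)]
            for fr, pr in zip(field, phase)]

# Toy 4x4 plane wave through two phase layers (in a real D2NN the phase
# maps come from deep-learning-based training, then get 3D-printed).
field = [[1 + 0j] * 4 for _ in range(4)]
layers = [[[0.1 * (r + c) for c in range(4)] for r in range(4)]
          for _ in range(2)]
for ph in layers:
    field = propagate(apply_layer(field, ph),
                      pitch=0.5, dist=3.0, wavelength=0.75)
intensity = [[abs(v) ** 2 for v in row] for row in field]
```

The detector plane sees only `intensity`; classification is read out from where the optical power lands, which is what makes the inference passive and at the speed of light.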
  • Publication number: 20230251189
    Abstract: A diffractive network is disclosed that utilizes, in some embodiments, diffractive elements to shape an arbitrary broadband pulse into a desired optical waveform, forming a compact and passive pulse engineering system. The diffractive network was experimentally shown to generate various different pulses by designing passive diffractive layers that collectively engineer the temporal waveform of an input terahertz pulse. The results constitute the first demonstration of direct pulse shaping in the terahertz spectrum, where the amplitude and phase of the input wavelengths are independently controlled through a passive diffractive device, without the need for an external pump. Furthermore, a modular physical transfer learning approach is presented that illustrates pulse-width tunability by replacing part of an existing diffractive network with newly trained diffractive layers. This learning-based diffractive pulse engineering framework can find broad applications.
    Type: Application
    Filed: June 28, 2021
    Publication date: August 10, 2023
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Deniz Mengu, Yair Rivenson, Muhammed Veli
  • Patent number: 11697833
    Abstract: A method of performing antimicrobial susceptibility testing (AST) on a sample uses a reader device that mounts on a mobile phone having a camera. A microtiter plate containing wells preloaded with the bacteria-containing sample, growth medium, and drugs of differing concentrations is loaded into the reader device. The wells are illuminated using an array of illumination sources contained in the reader device. Images of the wells are acquired with the camera of the mobile phone. In one embodiment, the images are transmitted to a separate computing device for processing to classify each well as turbid or not turbid and to generate MIC values and a susceptibility characterization for each drug in the panel based on the turbidity classifications of the array of wells. The MIC values and the susceptibility characterizations for each drug are transmitted or returned to the mobile phone for display thereon.
    Type: Grant
    Filed: November 29, 2017
    Date of Patent: July 11, 2023
    Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Omai Garner, Dino Di Carlo, Steve Wei Feng
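The MIC read-out step described above reduces to a simple rule once each well has a turbidity label: the MIC is the lowest drug concentration at which growth is inhibited and stays inhibited at all higher concentrations. A sketch of that rule, with hypothetical names and toy values:

```python
# Illustrative sketch (names hypothetical): deriving a MIC from the
# per-well turbidity classification described above. Wells for one drug
# are ordered from lowest to highest concentration.

def mic_from_turbidity(concentrations, turbid):
    """concentrations: ascending drug concentrations; turbid: matching
    booleans (True = visible growth). Returns the MIC, or None if growth
    persists even at the highest concentration tested."""
    assert len(concentrations) == len(turbid)
    for i, conc in enumerate(concentrations):
        if not any(turbid[i:]):  # inhibited here and at every higher dose
            return conc
    return None

concs = [0.25, 0.5, 1.0, 2.0, 4.0]  # toy two-fold dilution series
mic = mic_from_turbidity(concs, [True, True, False, False, False])
```

Requiring all higher-concentration wells to be non-turbid makes the read-out robust to a single "skipped" well being misclassified as growth.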
  • Patent number: 11694082
    Abstract: An all-optical Diffractive Deep Neural Network (D2NN) architecture learns to implement various functions or tasks after deep learning-based design of the passive diffractive or reflective substrate layers that work collectively to perform the desired function or task. This architecture was successfully confirmed experimentally by creating 3D-printed D2NNs that learned to implement handwritten classifications and lens function at the terahertz spectrum. This all-optical deep learning framework can perform, at the speed of light, various complex functions and tasks that computer-based neural networks can implement, and will find applications in all-optical image analysis, feature detection and object classification, also enabling new camera designs and optical components that can learn to perform unique tasks using D2NNs. In alternative embodiments, the all-optical D2NN is used as a front-end in conjunction with a trained, digital neural network back-end.
    Type: Grant
    Filed: June 17, 2022
    Date of Patent: July 4, 2023
    Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Xing Lin, Deniz Mengu, Yi Luo
  • Publication number: 20230153600
    Abstract: A machine vision task, machine learning task, and/or classification of objects is performed using a diffractive optical neural network device. Light from objects passes through or reflects off the diffractive optical neural network device formed by multiple substrate layers. The diffractive optical neural network device defines a trained function between an input optical signal from the object light illuminated at a plurality or a continuum of wavelengths and an output optical signal corresponding to one or more unique wavelengths or sets of wavelengths assigned to represent distinct data classes or object types/classes created by optical diffraction and/or reflection through/off the substrate layers. Output light is captured with detector(s) that generate a signal or data that comprise the one or more unique wavelengths or sets of wavelengths assigned to represent distinct data classes or object types or object classes which are used to perform the task or classification.
    Type: Application
    Filed: May 4, 2021
    Publication date: May 18, 2023
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Jingxi Li, Deniz Mengu, Yair Rivenson
  • Publication number: 20230085827
    Abstract: A deep learning-based offline autofocusing method and system, termed Deep-R, is disclosed herein: a trained neural network rapidly and blindly autofocuses a single-shot microscopy image of a sample or specimen acquired at an arbitrary out-of-focus plane. The efficacy of Deep-R is illustrated using various tissue sections that were imaged using fluorescence and brightfield microscopy modalities, demonstrating single-snapshot autofocusing under different scenarios, such as a uniform axial defocus as well as a sample tilt within the field-of-view. Deep-R is significantly faster than standard online algorithmic autofocusing methods. This deep learning-based blind autofocusing framework opens up new opportunities for rapid microscopic imaging of large sample areas, also reducing the photon dose on the sample.
    Type: Application
    Filed: March 18, 2021
    Publication date: March 23, 2023
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Yilin Luo, Luzhe Huang
  • Publication number: 20230060037
    Abstract: A system for the detection and classification of live microorganisms in a sample includes a light source and an incubator holding one or more sample-containing growth plates. A translation stage moves the image sensor and/or the growth plate(s) along one or more dimensions so that time-lapse holographic images of the microorganisms or clusters of microorganisms on the one or more growth plates can be captured. Image processing software executed by a computing device is configured to detect candidate microorganism colonies in the reconstructed, time-lapse holographic images based on differential image analysis. The image processing software includes one or more trained deep neural networks that process the time-lapse image(s) of candidate microorganism colonies to detect true microorganism colonies and/or output a species associated with each true microorganism colony.
    Type: Application
    Filed: January 27, 2021
    Publication date: February 23, 2023
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Hatice Ceylan Koydemir, Yunzhe Qiu
  • Publication number: 20230030424
    Abstract: A deep learning-based digital/virtual staining method and system enables the creation of digitally/virtually-stained microscopic images from label-free or stain-free samples. In one embodiment, the method generates digitally/virtually-stained microscope images of label-free or unstained samples from fluorescence lifetime imaging (FLIM) image(s) of the sample(s) acquired with a fluorescence microscope. In another embodiment, a digital/virtual autofocusing method is provided that uses machine learning to generate a microscope image with improved focus using a trained, deep neural network. In another embodiment, a trained deep neural network generates digitally/virtually-stained microscopic images, with multiple different stains, of a label-free or unstained sample obtained with a microscope. The multiple stains in the output image or sub-regions thereof are substantially equivalent to the corresponding microscopic images or image sub-regions of the same sample that has been histochemically stained.
    Type: Application
    Filed: December 22, 2020
    Publication date: February 2, 2023
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Yilin Luo, Kevin de Haan, Yijie Zhang, Bijie Bai
  • Publication number: 20230024787
    Abstract: An all-optical hologram reconstruction system and method is disclosed that can instantly retrieve the image of an unknown object from its in-line hologram and eliminate twin-image artifacts without using a digital processor or a computer. Multiple transmissive diffractive layers are trained using deep learning so that the diffracted light from an arbitrary input hologram is processed all-optically to reconstruct the image of an unknown object at the speed of light propagation and without the need for any external power. This passive diffractive optical network successfully generalizes to reconstruct in-line holograms of unknown, new objects and exhibits improved diffraction efficiency as well as an extended depth-of-field at the hologram recording distance. The system and method can find numerous applications in coherent imaging and holographic display-related applications owing to its major advantages in terms of image reconstruction speed and computer-free operation.
    Type: Application
    Filed: July 14, 2022
    Publication date: January 26, 2023
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Md Sadman Sakib Rahman
  • Patent number: 11514325
    Abstract: A method of performing phase retrieval and holographic image reconstruction of an imaged sample includes obtaining a single hologram intensity image of the sample using an imaging device. The single hologram intensity image is back-propagated to generate a real input image and an imaginary input image of the sample with image processing software, wherein the real input image and the imaginary input image contain twin-image and/or interference-related artifacts. A trained deep neural network is provided that is executed by the image processing software using one or more processors and configured to receive the real input image and the imaginary input image of the sample and generate an output real image and an output imaginary image in which the twin-image and/or interference-related artifacts are substantially suppressed or eliminated.
    Type: Grant
    Filed: March 20, 2019
    Date of Patent: November 29, 2022
    Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Yichen Wu, Yibo Zhang, Harun Gunaydin
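The back-propagation step described above is conventional angular-spectrum wave propagation: the recorded hologram is propagated back to the object plane, producing a complex field whose real and imaginary parts (still carrying twin-image artifacts) become the network's two input channels. The sketch below shows the idea in 1D for brevity; it is an illustrative stand-in with toy parameters, not the patented pipeline.

```python
# Illustrative 1D sketch of the hologram back-propagation step: the
# recorded intensity is propagated back by distance z with the
# angular-spectrum method, yielding real/imaginary input channels for
# the trained network. Parameters are toy values, not from the patent.
import cmath
import math

def dft(x, sign):
    """Naive discrete Fourier transform (sign=-1 forward, +1 inverse)."""
    n = len(x)
    return [sum(x[m] * cmath.exp(sign * 2j * math.pi * k * m / n)
                for m in range(n)) for k in range(n)]

def back_propagate(hologram, dx, z, wavelength):
    """Angular-spectrum propagation of a real hologram by distance -z."""
    n = len(hologram)
    spectrum = dft([complex(v) for v in hologram], sign=-1)
    out = []
    for k in range(n):
        fx = (k if k <= n // 2 else k - n) / (n * dx)  # spatial frequency
        kz = 2 * math.pi / wavelength * cmath.sqrt(
            1.0 - (wavelength * fx) ** 2)
        out.append(spectrum[k] * cmath.exp(-1j * kz * z))
    return [v / n for v in dft(out, sign=+1)]

holo = [1.0, 1.2, 0.8, 1.1, 0.9, 1.0, 1.3, 0.7]  # toy hologram line
field = back_propagate(holo, dx=2.0, z=200.0, wavelength=0.5)
real_ch = [v.real for v in field]   # network input channel 1
imag_ch = [v.imag for v in field]   # network input channel 2
```

Because the hologram records only intensity, this back-propagated field is contaminated by the twin image; suppressing that contamination in `real_ch`/`imag_ch` is exactly the network's job in the patented method.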
  • Publication number: 20220366253
    Abstract: An all-optical Diffractive Deep Neural Network (D2NN) architecture learns to implement various functions or tasks after deep learning-based design of the passive diffractive or reflective substrate layers that work collectively to perform the desired function or task. This architecture was successfully confirmed experimentally by creating 3D-printed D2NNs that learned to implement handwritten classifications and lens function at the terahertz spectrum. This all-optical deep learning framework can perform, at the speed of light, various complex functions and tasks that computer-based neural networks can implement, and will find applications in all-optical image analysis, feature detection and object classification, also enabling new camera designs and optical components that can learn to perform unique tasks using D2NNs. In alternative embodiments, the all-optical D2NN is used as a front-end in conjunction with a trained, digital neural network back-end.
    Type: Application
    Filed: June 17, 2022
    Publication date: November 17, 2022
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Xing Lin, Deniz Mengu, Yi Luo
  • Patent number: 11501544
    Abstract: An imaging flow cytometer device includes a housing holding a multi-color illumination source configured for pulsed or continuous wave operation. A microfluidic channel is disposed in the housing and is fluidically coupled to a source of fluid containing objects that flow through the microfluidic channel. A color image sensor is disposed adjacent to the microfluidic channel and receives light from the illumination source that passes through the microfluidic channel. The image sensor captures image frames containing raw hologram images of the moving objects passing through the microfluidic channel. The image frames are subject to image processing to reconstruct phase and/or intensity images of the moving objects for each color. The reconstructed phase and/or intensity images are then input to a trained deep neural network that outputs a phase recovered image of the moving objects. The trained deep neural network may also be trained to classify object types.
    Type: Grant
    Filed: June 4, 2019
    Date of Patent: November 15, 2022
    Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Zoltan Gorocs
  • Publication number: 20220327371
    Abstract: A diffractive optical neural network device includes a plurality of diffractive substrate layers arranged in an optical path. The substrate layers are formed with physical features across surfaces thereof that collectively define a trained mapping function between an optical input and an optical output. A plurality of groups of optical sensors are configured to sense and detect the optical output, wherein each group of optical sensors has at least one optical sensor configured to capture a positive signal from the optical output and at least one optical sensor configured to capture a negative signal from the optical output. Circuitry and/or computer software receives signals or data from the optical sensors and identifies the group of optical sensors in which a normalized differential signal, calculated from the positive and negative optical sensors within each group, is the largest or smallest among all the groups.
    Type: Application
    Filed: June 5, 2020
    Publication date: October 13, 2022
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Jingxi Li, Deniz Mengu, Yi Luo
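The differential read-out above can be sketched in a few lines: each class is represented by a positive/negative detector pair, and the decision is the group with the largest normalized differential signal (P − N) / (P + N). The sensor values and function names below are hypothetical.

```python
# Illustrative sketch of the differential read-out described above.
# groups: one (positive, negative) detector-intensity pair per class.

def classify(groups):
    """Return (winning class index, its normalized differential signal),
    where the signal for each group is (P - N) / (P + N)."""
    best_idx, best_sig = None, None
    for idx, (pos, neg) in enumerate(groups):
        sig = (pos - neg) / (pos + neg)  # normalized differential signal
        if best_sig is None or sig > best_sig:
            best_idx, best_sig = idx, sig
    return best_idx, best_sig

# Toy readings for a 3-class device; class 1 wins with signal 0.8.
readings = [(0.30, 0.25), (0.90, 0.10), (0.20, 0.60)]
label, score = classify(readings)
```

Normalizing by P + N makes the decision insensitive to overall illumination power, since scaling both detectors in a pair by the same factor leaves the signal unchanged.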
  • Patent number: 11460395
    Abstract: A portable colorimetric assay system includes an opto-mechanical reader configured to be detachably mounted to a mobile phone having a camera or other camera-containing portable electronic device. The opto-mechanical reader includes one or more light sources configured to illuminate a test sample holder and control sample holder disposed in the opto-mechanical reader along an optical path aligned with a camera of the mobile phone or other camera-containing portable electronic device. One or more serum separation membranes are disposed in the opto-mechanical reader and define a sample receiving pad configured to receive a blood sample. A moveable serum collection membrane is also disposed in the reader and is configured to contact the sample receiving pad in a first position and is moveable to a second position where the serum collection membrane is disposed inside the test sample holder.
    Type: Grant
    Filed: June 12, 2020
    Date of Patent: October 4, 2022
    Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Aniruddha Ray, Hyouarm Joung, Derek Tseng, Isidro B. Salusky