Patents by Inventor Aydogan Ozcan

Aydogan Ozcan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210374381
    Abstract: Systems and methods for detecting motile objects (e.g., parasites) in a fluid sample by utilizing the locomotion of the parasites as a specific biomarker and endogenous contrast mechanism. The imaging platform includes one or more substantially optically transparent sample holders. The imaging platform has a moveable scanning head containing light sources and corresponding image sensor(s) associated with the light source(s). The light source(s) are directed at a respective sample holder containing a sample, and the respective image sensor(s) are positioned below the sample holder to capture time-varying holographic speckle patterns of the sample contained therein. A computing device is configured to receive the time-varying holographic speckle pattern image sequences obtained by the image sensor(s). The computing device generates a 3D contrast map of motile objects within the sample and uses deep learning-based classifier software to identify the motile objects.
    Type: Application
    Filed: October 18, 2019
    Publication date: December 2, 2021
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yibo Zhang, Hatice Ceylan Koydemir
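The locomotion-as-contrast idea in the abstract above can be illustrated with a toy calculation (this is not the patented pipeline): treating per-pixel temporal variation of a hologram stack as a motility map. The array shapes, normalization, and noise levels below are illustrative assumptions.

```python
import numpy as np

def motility_contrast_map(frames):
    """Per-pixel temporal contrast of a time-lapse hologram stack.

    frames: array of shape (T, H, W). Motile objects produce
    time-varying speckle, so pixels above them show a high
    temporal standard deviation relative to their mean.
    """
    frames = np.asarray(frames, dtype=np.float64)
    mean = frames.mean(axis=0)
    std = frames.std(axis=0)
    return std / (mean + 1e-12)  # normalized temporal contrast

# Static background vs. one flickering "motile" pixel region.
rng = np.random.default_rng(0)
stack = np.ones((50, 8, 8)) * 100.0
stack[:, 4, 4] += rng.normal(0, 20, size=50)  # motile signal
cmap = motility_contrast_map(stack)
print(cmap[4, 4] > cmap[0, 0])  # True: motion stands out
```

A 3D map, as in the patent, would apply the same idea to holograms back-propagated to a stack of depths rather than to raw frames.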
  • Publication number: 20210285864
    Abstract: A lens-free microscope system for automatically analyzing yeast cell viability in a stained sample includes a portable, lens-free microscopy device that includes a housing containing a light source coupled to an optical fiber, the optical fiber spaced several centimeters away from an image sensor disposed at one end of the housing, wherein the stained sample is disposed on the image sensor or on a sample holder adjacent to the image sensor. Hologram images are transferred to a computing device having image processing software contained therein, the image processing software identifying yeast cell candidates of interest from back-propagated images of the stained sample, whereby a plurality of spatial features are extracted from the yeast cell candidates of interest and subjected to a trained machine learning model to classify the yeast cell candidates of interest as live or dead.
    Type: Application
    Filed: September 22, 2017
    Publication date: September 16, 2021
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Alborz Feizi, Alon Greenbaum
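As a hedged illustration of the classify-from-spatial-features step described above, the sketch below extracts two toy features and uses a minimal nearest-centroid model. The patent does not specify its feature set or model in this abstract, so every name and value here is a stand-in.

```python
import numpy as np

def extract_features(patch):
    """Toy spatial features for a back-propagated cell candidate patch:
    mean intensity and intensity standard deviation (illustrative only)."""
    p = np.asarray(patch, dtype=np.float64)
    return np.array([p.mean(), p.std()])

class NearestCentroid:
    """Minimal stand-in for the trained model: classify a candidate
    as 'live' or 'dead' by distance to per-class feature centroids."""
    def fit(self, X, y):
        self.classes_ = sorted(set(y))
        self.centroids_ = {
            c: X[[i for i, t in enumerate(y) if t == c]].mean(axis=0)
            for c in self.classes_
        }
        return self
    def predict(self, x):
        return min(self.classes_,
                   key=lambda c: np.linalg.norm(x - self.centroids_[c]))

# In this toy training set, stained dead cells image darker.
live = [np.full((5, 5), 200.0) for _ in range(3)]
dead = [np.full((5, 5), 50.0) for _ in range(3)]
X = np.array([extract_features(p) for p in live + dead])
y = ["live"] * 3 + ["dead"] * 3
model = NearestCentroid().fit(X, y)
print(model.predict(extract_features(np.full((5, 5), 180.0))))  # live
```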
  • Publication number: 20210264214
    Abstract: A deep learning-based digital staining method and system are disclosed that provide a label-free approach to create virtually-stained microscopic images from quantitative phase images (QPI) of label-free samples. The method bypasses the standard histochemical staining process, saving time and cost. It is based on deep learning and uses a convolutional neural network trained using a generative adversarial network model to transform QPI images of an unlabeled sample into an image that is equivalent to the brightfield image of the chemically stained version of the same sample. This label-free digital staining method eliminates cumbersome and costly histochemical staining procedures and would significantly simplify tissue preparation in the pathology and histology fields.
    Type: Application
    Filed: March 29, 2019
    Publication date: August 26, 2021
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Zhensong Wei
  • Publication number: 20210209337
    Abstract: An imaging flow cytometer device includes a housing holding a multi-color illumination source configured for pulsed or continuous wave operation. A microfluidic channel is disposed in the housing and is fluidically coupled to a source of fluid containing objects that flow through the microfluidic channel. A color image sensor is disposed adjacent to the microfluidic channel and receives light from the illumination source that passes through the microfluidic channel. The image sensor captures image frames containing raw hologram images of the moving objects passing through the microfluidic channel. The image frames are subject to image processing to reconstruct phase and/or intensity images of the moving objects for each color. The reconstructed phase and/or intensity images are then input to a trained deep neural network that outputs a phase recovered image of the moving objects. The trained deep neural network may also be trained to classify object types.
    Type: Application
    Filed: June 4, 2019
    Publication date: July 8, 2021
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Zoltan Gorocs
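Several entries in this listing, including the one above, rely on reconstructing a captured hologram at the object plane. The standard building block for this is angular spectrum free-space propagation; the sketch below is a generic textbook implementation, not code from any of these patents, and the wavelength and pixel-pitch values are hypothetical.

```python
import numpy as np

def angular_spectrum_propagate(field, z, wavelength, dx):
    """Propagate a complex optical field by distance z (meters) using
    the angular spectrum method; dx is the sensor pixel pitch (meters)."""
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=dx)
    fy = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)  # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Sanity check: propagating forward then backward recovers the field
# (hypothetical 532 nm illumination, 1.12 um pixel pitch, 1 mm gap).
rng = np.random.default_rng(1)
f0 = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
f1 = angular_spectrum_propagate(f0, 1e-3, 532e-9, 1.12e-6)
f2 = angular_spectrum_propagate(f1, -1e-3, 532e-9, 1.12e-6)
print(np.allclose(f2, f0, atol=1e-6))  # True
```

In a cytometry setting, back-propagating the raw hologram of a moving object to its depth yields the "reconstructed phase and/or intensity images" the abstract feeds to the trained network.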
  • Patent number: 11054357
    Abstract: A lens-free microscope for monitoring air quality includes a housing that contains a vacuum pump configured to draw air into an impaction nozzle. The impaction nozzle has an output located adjacent to an optically transparent substrate for collecting particles. One or more illumination sources are disposed in the housing and are configured to illuminate the collected particles on the optically transparent substrate. An image sensor is located adjacent to the optically transparent substrate, wherein the image sensor collects particle diffraction patterns or holographic images cast upon the image sensor. At least one processor is disposed in the housing and controls the vacuum pump and the one or more illumination sources. Image files are transferred to a separate computing device for image processing using machine learning to identify particles and perform data analysis to output particle images, particle size, particle density, and/or particle type data.
    Type: Grant
    Filed: March 9, 2018
    Date of Patent: July 6, 2021
    Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yichen Wu, Steve Wei Feng
  • Publication number: 20210181673
    Abstract: A method for lens-free imaging of a sample or objects within the sample uses multi-height iterative phase retrieval and rotational field transformations to perform wide-FOV imaging of pathology samples with image quality clinically comparable to that of a benchtop lens-based microscope. The solution of the transport-of-intensity equation (TIE) is used as an initial guess in the phase recovery process to speed up image recovery. The holographically reconstructed image can be digitally focused at any depth within the object FOV (after image capture) without the need for any focus adjustment, and is also digitally corrected for artifacts arising from uncontrolled tilting and height variations between the sample and sensor planes. In an alternative embodiment, a synthetic aperture approach is used with multi-angle iterative phase retrieval to perform wide-FOV imaging of pathology samples and increase the effective numerical aperture of the image.
    Type: Application
    Filed: November 19, 2020
    Publication date: June 17, 2021
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Alon Greenbaum, Yibo Zhang, Alborz Feizi, Wei Luo
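The multi-height iterative phase retrieval named in this abstract can be sketched generically as a Gerchberg-Saxton-style loop over measurement planes: propagate, enforce the measured amplitude, repeat. The code below is an illustrative skeleton, not the patented algorithm; the TIE-based initial guess is represented by an optional `initial_phase` argument (zeros by default), and all dimensions and optical parameters are toy values.

```python
import numpy as np

def propagate(field, z, wavelength, dx):
    """Angular spectrum free-space propagation (compact helper)."""
    n, m = field.shape
    FX, FY = np.meshgrid(np.fft.fftfreq(m, dx), np.fft.fftfreq(n, dx))
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.where(arg > 0,
                 np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0))), 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

def multi_height_phase_retrieval(amplitudes, heights, wavelength, dx,
                                 n_iter=20, initial_phase=None):
    """Cycle through measured amplitude planes, enforcing each
    measurement while keeping the evolving phase estimate.
    `initial_phase` is where a TIE solution would be injected to
    speed convergence (zeros used here as a stand-in)."""
    phase = np.zeros_like(amplitudes[0]) if initial_phase is None else initial_phase
    field = amplitudes[0] * np.exp(1j * phase)
    for _ in range(n_iter):
        for k in range(1, len(heights)):
            field = propagate(field, heights[k] - heights[k - 1], wavelength, dx)
            field = amplitudes[k] * np.exp(1j * np.angle(field))
        field = propagate(field, heights[0] - heights[-1], wavelength, dx)
        field = amplitudes[0] * np.exp(1j * np.angle(field))
    return field  # complex field estimate at the first plane

# Synthetic demo: a smooth phase object measured at three heights.
wl, dx = 532e-9, 1.12e-6
x = np.linspace(-1, 1, 64)
obj = np.exp(1j * 0.5 * np.exp(-(x[None] ** 2 + x[:, None] ** 2)))
heights = [0.0, 100e-6, 200e-6]
amps = [np.abs(propagate(obj, h, wl, dx)) for h in heights]
rec = multi_height_phase_retrieval(amps, heights, wl, dx)
```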
  • Publication number: 20210142170
    Abstract: An all-optical Diffractive Deep Neural Network (D2NN) architecture learns to implement various functions or tasks after deep learning-based design of the passive diffractive or reflective substrate layers that work collectively to perform the desired function or task. This architecture was experimentally confirmed by creating 3D-printed D2NNs that learned to implement handwritten digit classification and a lens function at terahertz wavelengths. This all-optical deep learning framework can perform, at the speed of light, various complex functions and tasks that computer-based neural networks can implement, and will find applications in all-optical image analysis, feature detection, and object classification, also enabling new camera designs and optical components that can learn to perform unique tasks using D2NNs. In alternative embodiments, the all-optical D2NN is used as a front-end in conjunction with a trained, digital neural network back-end.
    Type: Application
    Filed: April 12, 2019
    Publication date: May 13, 2021
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Xing Lin, Deniz Mengu, Yi Luo
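The D2NN forward pass described above is, physically, alternating phase modulation and free-space diffraction, read out as detector intensity. The numpy sketch below models only that forward physics (the deep-learning design of the masks is omitted); layer count, spacing, detector layout, and the THz-scale parameters are all hypothetical.

```python
import numpy as np

def propagate(field, z, wavelength, dx):
    """Angular spectrum free-space propagation (compact helper)."""
    n, m = field.shape
    FX, FY = np.meshgrid(np.fft.fftfreq(m, dx), np.fft.fftfreq(n, dx))
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.where(arg > 0,
                 np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0))), 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

def d2nn_forward(input_field, phase_masks, z, wavelength, dx):
    """Forward model of a passive diffractive network: at each layer
    the field is multiplied by a (learned) phase mask, then free-space
    propagated to the next layer; the detector records intensity."""
    field = input_field
    for mask in phase_masks:
        field = field * np.exp(1j * mask)  # passive phase-only layer
        field = propagate(field, z, wavelength, dx)
    return np.abs(field) ** 2              # detector-plane intensity

def classify(intensity, detector_regions):
    """Pick the class whose detector region collects the most energy."""
    energies = [intensity[r].sum() for r in detector_regions]
    return int(np.argmax(energies))

# Hypothetical THz-scale demo: five random phase layers, plane-wave input.
wl, dx, z = 0.75e-3, 0.4e-3, 30e-3
rng = np.random.default_rng(2)
masks = [rng.uniform(0, 2 * np.pi, (64, 64)) for _ in range(5)]
intensity = d2nn_forward(np.ones((64, 64), dtype=complex), masks, z, wl, dx)
regions = [(slice(8 * i, 8 * i + 8), slice(28, 36)) for i in range(8)]
label = classify(intensity, regions)
```

Training would differentiate this forward model with respect to the masks; the 3D-printed layers then implement the learned phases passively.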
  • Publication number: 20210043331
    Abstract: A deep learning-based digital staining method and system are disclosed that enable the creation of digitally/virtually-stained microscopic images from label- or stain-free samples based on autofluorescence images acquired using a fluorescence microscope. The system and method have particular applicability for the creation of digitally/virtually-stained whole slide images (WSIs) of unlabeled/unstained tissue samples that are analyzed by a histopathologist. The methods bypass the standard histochemical staining process, saving time and cost. This method is based on deep learning and uses, in one embodiment, a convolutional neural network trained using a generative adversarial network model to transform fluorescence images of an unlabeled sample into an image that is equivalent to the brightfield image of the chemically stained version of the same sample.
    Type: Application
    Filed: March 29, 2019
    Publication date: February 11, 2021
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Zhensong Wei
  • Patent number: 10871745
    Abstract: A method for lens-free imaging of a sample or objects within the sample uses multi-height iterative phase retrieval and rotational field transformations to perform wide-FOV imaging of pathology samples with image quality clinically comparable to that of a benchtop lens-based microscope. The solution of the transport-of-intensity equation (TIE) is used as an initial guess in the phase recovery process to speed up image recovery. The holographically reconstructed image can be digitally focused at any depth within the object FOV (after image capture) without the need for any focus adjustment, and is also digitally corrected for artifacts arising from uncontrolled tilting and height variations between the sample and sensor planes. In an alternative embodiment, a synthetic aperture approach is used with multi-angle iterative phase retrieval to perform wide-FOV imaging of pathology samples and increase the effective numerical aperture of the image.
    Type: Grant
    Filed: July 31, 2015
    Date of Patent: December 22, 2020
    Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Alon Greenbaum, Yibo Zhang, Alborz Feizi, Wei Luo
  • Publication number: 20200393793
    Abstract: A method of generating a color image of a sample includes obtaining a plurality of low resolution holographic images of the sample using a color image sensor, the sample illuminated simultaneously by light from three or more distinct colors, wherein the illuminated sample casts sample holograms on the image sensor and wherein the plurality of low resolution holographic images are obtained by relative x, y, and z directional shifts between sample holograms and the image sensor. Pixel super-resolved holograms of the sample are generated at each of the three or more distinct colors. De-multiplexed holograms are generated from the pixel super-resolved holograms. Phase information is retrieved from the de-multiplexed holograms using a phase retrieval algorithm to obtain complex holograms. The complex hologram for the three or more distinct colors is digitally combined and back-propagated to a sample plane to generate the color image.
    Type: Application
    Filed: August 28, 2020
    Publication date: December 17, 2020
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yichen Wu, Yibo Zhang, Wei Luo
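The pixel super-resolution step referenced in this abstract combines sub-pixel-shifted low-resolution holograms into a finer grid. Below is a deliberately naive shift-and-add illustration with known shifts; real pipelines estimate the shifts from the data and solve a regularized inverse problem, so treat this as a sketch of the idea only.

```python
import numpy as np

def shift_and_add_sr(lowres_frames, shifts, factor):
    """Naive pixel super-resolution: place each low-res frame on a
    `factor`x finer grid at its (known) sub-pixel shift, given in
    low-res pixel units, and average overlapping contributions."""
    h, w = lowres_frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for frame, (sy, sx) in zip(lowres_frames, shifts):
        oy = int(round(sy * factor)) % factor
        ox = int(round(sx * factor)) % factor
        acc[oy::factor, ox::factor] += frame
        cnt[oy::factor, ox::factor] += 1
    cnt[cnt == 0] = 1  # avoid division by zero on unfilled cells
    return acc / cnt

# Exact-recovery demo: 2x grid, four frames at half-pixel shifts.
rng = np.random.default_rng(3)
hi = rng.random((8, 8))
shifts = [(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)]
frames = [hi[int(sy * 2)::2, int(sx * 2)::2] for sy, sx in shifts]
sr = shift_and_add_sr(frames, shifts, factor=2)
print(np.allclose(sr, hi))  # True
```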
  • Publication number: 20200393359
    Abstract: A portable colorimetric assay system includes an opto-mechanical reader configured to be detachably mounted to a mobile phone having a camera or other camera-containing portable electronic device. The opto-mechanical reader includes one or more light sources configured to illuminate a test sample holder and a control sample holder disposed in the opto-mechanical reader along an optical path aligned with the camera of the mobile phone or other camera-containing portable electronic device. One or more serum separation membranes are disposed in the opto-mechanical reader and define a sample receiving pad configured to receive a blood sample. A moveable serum collection membrane is also disposed in the reader and is configured to contact the sample receiving pad in a first position and is moveable to a second position where the serum collection membrane is disposed inside the test sample holder.
    Type: Application
    Filed: June 12, 2020
    Publication date: December 17, 2020
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Aniruddha Ray, Hyouarm Joung, Derek Tseng, Isidro B. Salusky
  • Patent number: 10838192
    Abstract: Methods and systems for generating a high-color-fidelity and high-resolution color image of a sample are disclosed that fuse or merge a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope, using a wavelet transform-based colorization method. A holographic microscope is used to obtain holographic images, which are used to computationally reconstruct a high-resolution mono-color holographic image of the sample. A lens-based microscope is used to obtain low-resolution color images. A discrete wavelet transform (DWT) is used to generate a final image that merges the low-resolution components from the lens-based color image and the high-resolution components from the high-resolution mono-color holographic image.
    Type: Grant
    Filed: May 9, 2017
    Date of Patent: November 17, 2020
    Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yibo Zhang, Yichen Wu
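The wavelet-based merge described above can be illustrated with a one-level Haar transform: take the low-frequency band from the lens-based color image and the high-frequency detail from the mono-color holographic image. This is a minimal sketch (single level, Haar wavelet, one channel), not the patented DWT colorization method.

```python
import numpy as np

def haar2(img):
    """One-level 2-D Haar transform: returns (LL, (LH, HL, HH))."""
    a = (img[0::2] + img[1::2]) / 2          # row-pair average (low)
    d = (img[0::2] - img[1::2]) / 2          # row-pair difference (high)
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, (LH, HL, HH)

def ihaar2(LL, bands):
    """Exact inverse of haar2."""
    LH, HL, HH = bands
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2], out[1::2] = a + d, a - d
    return out

def wavelet_colorize(holo_channel, color_channel):
    """Merge: low-frequency band from the lens-based color image,
    high-frequency detail from the mono-color holographic image."""
    _, detail = haar2(holo_channel)
    LL_color, _ = haar2(color_channel)
    return ihaar2(LL_color, detail)

# Synthetic single-channel demo (apply per RGB channel in practice).
rng = np.random.default_rng(4)
holo = rng.random((8, 8))   # high-resolution mono hologram channel
color = rng.random((8, 8))  # low-resolution color channel (upsampled)
merged = wavelet_colorize(holo, color)
```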
  • Publication number: 20200340901
    Abstract: A label-free bio-aerosol sensing platform and method uses a field-portable and cost-effective device based on holographic microscopy and deep learning, which screens bio-aerosols at a high throughput level. Two different deep neural networks are utilized to rapidly reconstruct the amplitude and phase images of the captured bio-aerosols and to output particle information for each bio-aerosol that is imaged, including a classification of the type or species of the particle and its size, shape, thickness, or spatial feature(s). The platform was validated by label-free sensing of common bio-aerosol types (e.g., Bermuda grass pollen, oak tree pollen, ragweed pollen, Aspergillus spores, and Alternaria spores) and achieved >94% classification accuracy. The label-free bio-aerosol platform, with its mobility and cost-effectiveness, will find several applications in indoor and outdoor air quality monitoring.
    Type: Application
    Filed: April 24, 2020
    Publication date: October 29, 2020
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yichen Wu
  • Patent number: 10795315
    Abstract: A method of generating a color image of a sample includes obtaining a plurality of low resolution holographic images of the sample using a color image sensor, the sample illuminated simultaneously by light from three or more distinct colors, wherein the illuminated sample casts sample holograms on the image sensor and wherein the plurality of low resolution holographic images are obtained by relative x, y, and z directional shifts between sample holograms and the image sensor. Pixel super-resolved holograms of the sample are generated at each of the three or more distinct colors. De-multiplexed holograms are generated from the pixel super-resolved holograms. Phase information is retrieved from the de-multiplexed holograms using a phase retrieval algorithm to obtain complex holograms. The complex hologram for the three or more distinct colors is digitally combined and back-propagated to a sample plane to generate the color image.
    Type: Grant
    Filed: May 10, 2017
    Date of Patent: October 6, 2020
    Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yichen Wu, Yibo Zhang, Wei Luo
  • Publication number: 20200310100
    Abstract: Methods and systems for generating a high-color-fidelity and high-resolution color image of a sample are disclosed that fuse or merge a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope, using a wavelet transform-based colorization method. A holographic microscope is used to obtain holographic images, which are used to computationally reconstruct a high-resolution mono-color holographic image of the sample. A lens-based microscope is used to obtain low-resolution color images. A discrete wavelet transform (DWT) is used to generate a final image that merges the low-resolution components from the lens-based color image and the high-resolution components from the high-resolution mono-color holographic image.
    Type: Application
    Filed: May 9, 2017
    Publication date: October 1, 2020
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yibo Zhang, Yichen Wu
  • Patent number: 10663466
    Abstract: A portable rapid diagnostic test reader system includes a mobile phone having a camera and one or more processors, and a modular housing configured to mount to the mobile phone. The modular housing includes a receptacle configured to receive a sample tray holding a rapid diagnostic test. At least one illumination source is disposed in the modular housing and located on one side of the rapid diagnostic test. An optical demagnifier is disposed in the modular housing, interposed between the rapid diagnostic test and the mobile phone camera.
    Type: Grant
    Filed: October 31, 2014
    Date of Patent: May 26, 2020
    Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Onur Mudanyali, Stoyan Dimitrov, Uzair Sikora, Swati Padmanabhan, Isa Navruz
  • Publication number: 20200103328
    Abstract: A lens-free microscope for monitoring air quality includes a housing that contains a vacuum pump configured to draw air into an impaction nozzle. The impaction nozzle has an output located adjacent to an optically transparent substrate for collecting particles. One or more illumination sources are disposed in the housing and are configured to illuminate the collected particles on the optically transparent substrate. An image sensor is located adjacent to the optically transparent substrate, wherein the image sensor collects particle diffraction patterns or holographic images cast upon the image sensor. At least one processor is disposed in the housing and controls the vacuum pump and the one or more illumination sources. Image files are transferred to a separate computing device for image processing using machine learning to identify particles and perform data analysis to output particle images, particle size, particle density, and/or particle type data.
    Type: Application
    Filed: March 9, 2018
    Publication date: April 2, 2020
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yichen Wu, Steve Wei Feng
  • Publication number: 20190346369
    Abstract: A multi-well plate reader device includes an opto-mechanical attachment configured to attach/detach to a portable electronic device having a camera. The reader includes a plurality of excitation illumination sources and a slot that receives a well plate. Excitation and emission filters are incorporated into the housing. Optical fibers located in the attachment transmit fluorescent light emitted from the wells of the well plate through an optional lens and into the camera. The optical fibers have input ends adjacent to the wells and output ends formed in a header; in one embodiment, multiple optical fibers are positioned within a cross-sectional area projection defined by the wells, and the output ends of the optical fibers are mounted in the header. The pattern of the optical fibers is mapped to individual wells in a calibration operation and stored in a fiber map.
    Type: Application
    Filed: January 17, 2018
    Publication date: November 14, 2019
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Omai Garner, Dino Di Carlo, Qingshan Wei, Derek Tseng, Janay Kong
  • Publication number: 20190333199
    Abstract: A microscopy method includes a trained deep neural network that is executed by software using one or more processors of a computing device, the trained deep neural network trained with a training set of images comprising co-registered pairs of high-resolution microscopy images or image patches of a sample and their corresponding low-resolution microscopy images or image patches of the same sample. A microscopy input image of a sample to be imaged is input to the trained deep neural network which rapidly outputs an output image of the sample, the output image having improved one or more of spatial resolution, depth-of-field, signal-to-noise ratio, and/or image contrast.
    Type: Application
    Filed: April 26, 2019
    Publication date: October 31, 2019
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Harun Gunaydin, Kevin de Haan
  • Publication number: 20190316172
    Abstract: A method of performing antimicrobial susceptibility testing (AST) on a sample uses a reader device that mounts on a mobile phone having a camera. A microtiter plate containing wells preloaded with the bacteria-containing sample, growth medium, and drugs of differing concentrations is loaded into the reader device. The wells are illuminated using an array of illumination sources contained in the reader device. Images of the wells are acquired with the camera of the mobile phone. In one embodiment, the images are transmitted to a separate computing device for processing to classify each well as turbid or not turbid and to generate minimum inhibitory concentration (MIC) values and a susceptibility characterization for each drug in the panel based on the turbidity classification of the array of wells. The MIC values and susceptibility characterizations for each drug are transmitted or returned to the mobile phone for display thereon.
    Type: Application
    Filed: November 29, 2017
    Publication date: October 17, 2019
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Omai Garner, Dino Di Carlo, Steve Wei Feng
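The MIC determination described in the last abstract reduces, in the simplest case, to finding the lowest drug concentration whose well shows no growth. The sketch below makes that logic concrete for a clean two-fold dilution series; it ignores skipped wells and quality-control rules that a real AST reader would apply, so it is an illustration, not the patented method.

```python
def mic_from_turbidity(concentrations, turbid):
    """Minimum inhibitory concentration from per-well turbidity calls:
    the lowest drug concentration at which no growth (non-turbid) is
    observed, scanning the dilution series in ascending order.
    Returns None if growth occurs at every tested concentration."""
    for c, t in sorted(zip(concentrations, turbid)):
        if not t:  # first non-turbid well going up the gradient
            return c
    return None

# Two-fold dilution series; growth (turbid wells) up to 2 ug/mL.
concs = [0.25, 0.5, 1, 2, 4, 8, 16]
turbid = [True, True, True, True, False, False, False]
print(mic_from_turbidity(concs, turbid))  # 4
```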