Patents by Inventor Aydogan Ozcan
Aydogan Ozcan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250138296
Abstract: Quantitative phase imaging (QPI) is a label-free computational imaging technique that provides optical path length information of objects. Here, a diffractive QPI network architecture is disclosed that can synthesize the quantitative phase image of an object by converting the input phase information of a scene or object(s) into intensity variations at the output plane. A diffractive QPI network is a specialized all-optical device designed to perform a quantitative phase-to-intensity transformation through passive diffractive/reflective surfaces that are spatially engineered using deep learning and image data. Forming a compact, all-optical network that axially extends only ~200-300λ (λ = illumination wavelength), this framework replaces traditional QPI systems and related digital computational burdens with a set of passive substrate layers.
Type: Application
Filed: January 13, 2023
Publication date: May 1, 2025
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Deniz Mengu
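A minimal sketch of how such a stack of learnable diffractive layers can be simulated and trained in software, assuming phase-only layers, angular spectrum free-space propagation, and illustrative grid, wavelength, and spacing values; it is not the patented design.

    # Sketch only: layer count, grid size, wavelength, pixel size, and spacing are assumptions.
    import torch
    import torch.nn as nn

    def angular_spectrum_propagate(field, wavelength, pixel_size, distance):
        """Propagate a complex optical field by `distance` using the angular spectrum method."""
        n = field.shape[-1]
        fx = torch.fft.fftfreq(n, d=pixel_size)
        FX, FY = torch.meshgrid(fx, fx, indexing="ij")
        arg = 1.0 / wavelength**2 - FX**2 - FY**2
        kz = 2 * torch.pi * torch.sqrt(torch.clamp(arg, min=0.0))
        H = torch.exp(1j * kz * distance)              # transfer function; evanescent components dropped
        return torch.fft.ifft2(torch.fft.fft2(field) * H)

    class DiffractiveQPISketch(nn.Module):
        def __init__(self, n_layers=5, grid=200, wavelength=0.75e-3, pixel_size=0.4e-3, spacing=3e-3):
            super().__init__()
            # Each layer is a trainable phase-only mask (the spatially engineered surface).
            self.phases = nn.ParameterList([nn.Parameter(torch.zeros(grid, grid)) for _ in range(n_layers)])
            self.wavelength, self.pixel_size, self.spacing = wavelength, pixel_size, spacing

        def forward(self, input_phase):                # input_phase: (grid, grid) object phase map
            field = torch.exp(1j * input_phase)        # phase-only object, unit amplitude
            for phase in self.phases:
                field = angular_spectrum_propagate(field, self.wavelength, self.pixel_size, self.spacing)
                field = field * torch.exp(1j * phase)  # modulation by one diffractive layer
            field = angular_spectrum_propagate(field, self.wavelength, self.pixel_size, self.spacing)
            return field.abs() ** 2                    # output-plane intensity, trained to encode the phase image

The phase masks can then be optimized with a standard optimizer against target intensity patterns that encode the ground-truth phase images.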
-
Patent number: 12270068
Abstract: A system for the detection and classification of live microorganisms in a sample includes a light source and an incubator holding one or more sample-containing growth plates. A translation stage moves an image sensor and/or the growth plate(s) along one or more dimensions to capture time-lapse holographic images of microorganisms. Image processing software executed by a computing device captures time-lapse holographic images of the microorganisms or clusters of microorganisms on the one or more growth plates. The image processing software is configured to detect candidate microorganism colonies in reconstructed, time-lapse holographic images based on differential image analysis. The image processing software includes one or more trained deep neural networks that process the time-lapse image(s) of candidate microorganism colonies to detect true microorganism colonies and/or output a species associated with each true microorganism colony.
Type: Grant
Filed: January 27, 2021
Date of Patent: April 8, 2025
Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Hatice Ceylan Koydemir, Yunzhe Qiu
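One way the differential image analysis step could look in practice, as a hedged sketch that assumes reconstructed amplitude frames and illustrative threshold and size parameters rather than the patented pipeline:

    # Sketch only: the two-frame difference, threshold, and minimum area are assumptions.
    import numpy as np
    from scipy import ndimage

    def candidate_colonies(frame_prev, frame_curr, diff_threshold=0.05, min_area=20):
        """Return centroids of regions that changed between two reconstructed time-lapse frames."""
        diff = np.abs(frame_curr.astype(np.float32) - frame_prev.astype(np.float32))
        mask = diff > diff_threshold                   # pixels with significant temporal change (growth)
        labels, n = ndimage.label(mask)                # connected components = candidate colonies
        centroids = []
        for region in range(1, n + 1):
            component = labels == region
            if component.sum() >= min_area:            # discard small, noise-like detections
                centroids.append(ndimage.center_of_mass(component))
        return centroids

The resulting candidate locations would then be cropped from the time-lapse stack and passed to the trained neural network classifier described in the abstract.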
-
Publication number: 20250046069
Abstract: A deep learning-based virtual HER2 IHC staining method uses a conditional generative adversarial network that is trained to rapidly transform autofluorescence microscopic images of unlabeled/label-free breast tissue sections into bright-field equivalent microscopic images, matching the standard HER2 IHC staining that is chemically performed on the same tissue sections. The efficacy of this staining framework was demonstrated by quantitative analysis of blindly graded HER2 scores of virtually stained and immunohistochemically stained HER2 whole slide images (WSIs). A second quantitative blinded study revealed that the virtually stained HER2 images exhibit staining quality comparable to their immunohistochemically stained counterparts in terms of nuclear detail, membrane clearness, and absence of staining artifacts.
Type: Application
Filed: November 30, 2022
Publication date: February 6, 2025
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Bijie Bai, Hongda Wang
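A hedged sketch of a pix2pix-style conditional GAN training step of the kind the abstract describes, where the generator maps an autofluorescence image to a bright-field-equivalent stained image and the discriminator judges (input, output) pairs; the network definitions, loss weights, and data handling are assumptions:

    # Sketch only: `generator` and `discriminator` are assumed user-defined networks;
    # the L1 weight and loss mix are illustrative, not the patented training recipe.
    import torch
    import torch.nn as nn

    def train_step(generator, discriminator, g_opt, d_opt, autofluor, stained, l1_weight=100.0):
        bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

        # Discriminator update: real (input, target) pairs vs. generated pairs.
        fake = generator(autofluor).detach()
        d_real = discriminator(torch.cat([autofluor, stained], dim=1))
        d_fake = discriminator(torch.cat([autofluor, fake], dim=1))
        d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # Generator update: fool the discriminator while staying close to the chemically stained target.
        fake = generator(autofluor)
        d_fake = discriminator(torch.cat([autofluor, fake], dim=1))
        g_loss = bce(d_fake, torch.ones_like(d_fake)) + l1_weight * l1(fake, stained)
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()
        return d_loss.item(), g_loss.item()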
-
Patent number: 12190478
Abstract: A microscopy method includes a trained deep neural network that is executed by software using one or more processors of a computing device, the trained deep neural network trained with a training set of images comprising co-registered pairs of high-resolution microscopy images or image patches of a sample and their corresponding low-resolution microscopy images or image patches of the same sample. A microscopy input image of a sample to be imaged is input to the trained deep neural network, which rapidly outputs an output image of the sample, the output image having improved spatial resolution, depth of field, signal-to-noise ratio, and/or image contrast.
Type: Grant
Filed: November 19, 2021
Date of Patent: January 7, 2025
Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Harun Gunaydin, Kevin de Haan
-
Publication number: 20240354907
Abstract: A deep learning framework, termed Fourier Imager Network (FIN), is disclosed that can perform end-to-end phase recovery and image reconstruction from raw holograms of new types of samples, exhibiting success in external generalization. The FIN architecture is based on spatial Fourier transform modules within the deep neural network that process the spatial frequencies of its inputs using learnable filters and a global receptive field. FIN exhibits superior generalization to new types of samples, while also being much faster in its image inference speed, completing the hologram reconstruction task in ~0.04 s per 1 mm² of the sample area. Beyond holographic microscopy and quantitative phase imaging applications, FIN and the underlying neural network architecture may open up various new opportunities to design broadly generalizable deep learning models in computational imaging and machine vision fields.
Type: Application
Filed: April 16, 2024
Publication date: October 24, 2024
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Hanlong Chen, Luzhe Huang
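A hedged sketch of the kind of learnable spectral-filtering block the abstract attributes to FIN: features are taken to the spatial-frequency domain, multiplied by a trainable filter (giving a global receptive field), and transformed back; the exact FIN block structure is not reproduced here, and shapes are assumptions:

    # Sketch only: per-channel complex filters and the rfft2/irfft2 pairing are assumptions.
    import torch
    import torch.nn as nn

    class SpectralFilterBlock(nn.Module):
        def __init__(self, channels, height, width):
            super().__init__()
            # One learnable complex-valued filter per channel, stored as real/imaginary parts.
            self.filter_real = nn.Parameter(torch.ones(channels, height, width // 2 + 1))
            self.filter_imag = nn.Parameter(torch.zeros(channels, height, width // 2 + 1))

        def forward(self, x):                          # x: (batch, channels, height, width), real-valued
            spec = torch.fft.rfft2(x)                  # to the spatial-frequency domain
            spec = spec * torch.complex(self.filter_real, self.filter_imag)
            return torch.fft.irfft2(spec, s=x.shape[-2:])  # back to the spatial domain

Such blocks can be interleaved with ordinary convolutions so that the network mixes global frequency-domain processing with local spatial features.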
-
Patent number: 12106552
Abstract: A deep learning-based digital staining method and system are disclosed that provide a label-free approach to create virtually-stained microscopic images from quantitative phase images (QPI) of label-free samples. The methods bypass the standard histochemical staining process, saving time and cost. This method is based on deep learning, and uses a convolutional neural network trained using a generative adversarial network model to transform QPI images of an unlabeled sample into an image that is equivalent to the brightfield image of the chemically stained version of the same sample. This label-free digital staining method eliminates cumbersome and costly histochemical staining procedures, and would significantly simplify tissue preparation in pathology and histology fields.
Type: Grant
Filed: March 29, 2019
Date of Patent: October 1, 2024
Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Zhensong Wei
-
Publication number: 20240310782
Abstract: Digital holography is one of the most widely used label-free microscopy techniques in biomedical imaging. Recovery of the missing phase information of a hologram is an important step in holographic image reconstruction. A convolutional recurrent neural network (RNN)-based phase recovery approach is employed that uses multiple holograms, captured at different sample-to-sensor distances, to rapidly reconstruct the phase and amplitude information of a sample, while also performing autofocusing through the same trained neural network. The success of this deep learning-enabled holography method is demonstrated by imaging microscopic features of human tissue samples and Papanicolaou (Pap) smears. These results constitute the first demonstration of the use of recurrent neural networks for holographic imaging and phase recovery, and compared with existing methods, the presented approach improves the reconstructed image quality, while also increasing the depth-of-field and inference speed.
Type: Application
Filed: February 9, 2022
Publication date: September 19, 2024
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Luzhe Huang, Tairan Liu
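A hedged sketch of a convolutional recurrent cell that fuses holograms recorded at several sample-to-sensor distances, with the final hidden state decoded into phase and amplitude; this is an assumed architecture for illustration, not the network disclosed in the application:

    # Sketch only: `decoder` is an assumed user-defined head; channel sizes are illustrative.
    import torch
    import torch.nn as nn

    class ConvGRUCell(nn.Module):
        def __init__(self, in_ch, hidden_ch, k=3):
            super().__init__()
            pad = k // 2
            self.gates = nn.Conv2d(in_ch + hidden_ch, 2 * hidden_ch, k, padding=pad)  # update/reset gates
            self.cand = nn.Conv2d(in_ch + hidden_ch, hidden_ch, k, padding=pad)       # candidate state

        def forward(self, x, h):
            z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
            h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
            return (1 - z) * h + z * h_tilde

    def recover(holograms, cell, decoder, hidden_ch=32):
        """holograms: (batch, n_distances, 1, H, W); decoder maps the hidden state to phase/amplitude."""
        b, n, _, height, width = holograms.shape
        h = holograms.new_zeros(b, hidden_ch, height, width)
        for i in range(n):                             # sequential fusion over acquisition distances
            h = cell(holograms[:, i], h)
        return decoder(h)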
-
Patent number: 12086717
Abstract: An all-optical Diffractive Deep Neural Network (D2NN) architecture learns to implement various functions or tasks after deep learning-based design of the passive diffractive or reflective substrate layers that work collectively to perform the desired function or task. This architecture was successfully confirmed experimentally by creating 3D-printed D2NNs that learned to implement handwritten digit classification and lens function at terahertz wavelengths. This all-optical deep learning framework can perform, at the speed of light, various complex functions and tasks that computer-based neural networks can implement, and will find applications in all-optical image analysis, feature detection and object classification, also enabling new camera designs and optical components that can learn to perform unique tasks using D2NNs. In alternative embodiments, the all-optical D2NN is used as a front-end in conjunction with a trained, digital neural network back-end.
Type: Grant
Filed: May 12, 2023
Date of Patent: September 10, 2024
Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Xing Lin, Deniz Mengu, Yi Luo
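As an illustration of how an all-optical classifier's result can be read out, a hedged sketch in which the output-plane intensity falling on pre-assigned detector regions is summed and the brightest region gives the predicted class; the region geometry and count are assumptions:

    # Sketch only: the detector-region layout is hypothetical, not the patented arrangement.
    import numpy as np

    def classify_from_output_plane(intensity, detector_regions):
        """intensity: 2D output-plane intensity; detector_regions: list of (row_slice, col_slice), one per class."""
        scores = np.array([intensity[rs, cs].sum() for rs, cs in detector_regions])
        return int(np.argmax(scores)), scores / scores.sum()   # predicted class, normalized scores

    # Example: ten 20x20 detector regions tiled across a 200x200 output plane (hypothetical layout).
    regions = [(slice(90, 110), slice(20 * i, 20 * i + 20)) for i in range(10)]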
-
Publication number: 20240290473
Abstract: A deep learning-based system and method is provided that uses a convolutional neural network to rapidly transform in vivo reflectance confocal microscopy (RCM) images of unstained skin into virtually-stained hematoxylin and eosin-like images with microscopic resolution, enabling visualization of the epidermis, dermal-epidermal junction, and superficial dermis layers. The network is trained using ex vivo RCM images of excised unstained tissue and microscopic images of the same tissue labeled with acetic acid nuclear contrast staining as the ground truth. The trained neural network can be used to rapidly perform virtual histology of in vivo, label-free RCM images of normal skin structure, basal cell carcinoma, and melanocytic nevi with pigmented melanocytes, demonstrating histological features similar to traditional histology of the same excised tissue. The system and method enable more rapid diagnosis of malignant skin neoplasms and reduce invasive skin biopsies.
Type: Application
Filed: June 29, 2022
Publication date: August 29, 2024
Applicants: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA, UNITED STATES GOVERNMENT AS REPRESENTED BY THE DEPARTMENT OF VETERANS AFFAIRS
Inventors: Aydogan Ozcan, Jingxi Li, Yair Rivenson, Xiaoran Zhang, Philip O. Scumpia, Jason Garfinkel, Gennady Rubinstein
-
Publication number: 20240288701
Abstract: A computer-free system and method is disclosed that uses an all-optical image reconstruction method to see through random diffusers at the speed of light. Using deep learning, a set of transmissive layers is trained to all-optically reconstruct images of arbitrary objects that are distorted by random phase diffusers. After the training stage, the resulting diffractive layers are fabricated and form a diffractive optical network that is physically positioned between the unknown object and the image plane to all-optically reconstruct the object pattern through an unknown, new phase diffuser. Unlike digital methods, all-optical diffractive reconstructions do not require power except for the illumination light. This diffractive solution to see through diffusive and/or scattering media can be extended to other wavelengths, and can fuel various applications in biomedical imaging, astronomy, atmospheric sciences, oceanography, security, robotics, and autonomous vehicles, among many others.
Type: Application
Filed: June 29, 2022
Publication date: August 29, 2024
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yi Luo, Ege Cetintas, Yair Rivenson
-
Publication number: 20240255527
Abstract: A quantitative particle agglutination assay device is disclosed that combines portable lens-free microscopy and deep learning for rapidly measuring the concentration of a target analyte. As one example of a target analyte, the assay device was used to test for high-sensitivity C-reactive protein (hs-CRP) using human serum samples. A dual-channel capillary lateral flow device is designed to host the agglutination reaction using a small volume of serum. A portable lens-free microscope records time-lapsed inline holograms of the lateral flow device, monitoring the agglutination process over several minutes. These captured holograms are processed, and at each frame the number and area of the particle clusters are automatically extracted and fed into shallow neural networks to predict the CRP concentration. The system can be used to successfully differentiate very high CRP concentrations (e.g., >10-500 µg/mL) from the hs-CRP range.
Type: Application
Filed: May 24, 2022
Publication date: August 1, 2024
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Hyou-Arm Joung, Yi Luo
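A hedged sketch of the measurement idea: per-frame particle-cluster statistics extracted from segmented holograms are fed to a small fully connected network that regresses the analyte concentration; the feature choice, network size, and frame count are assumptions rather than the disclosed configuration:

    # Sketch only: features and network dimensions are illustrative.
    import numpy as np
    from scipy import ndimage
    import torch
    import torch.nn as nn

    def cluster_features(binary_frame):
        """Number of particle clusters and their total area in one segmented hologram frame."""
        labels, n_clusters = ndimage.label(binary_frame)
        total_area = int(np.count_nonzero(labels))
        return [float(n_clusters), float(total_area)]

    class ConcentrationNet(nn.Module):
        def __init__(self, n_frames=10, feats_per_frame=2, hidden=32):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(n_frames * feats_per_frame, hidden), nn.ReLU(),
                nn.Linear(hidden, 1))                  # single output: predicted concentration

        def forward(self, time_series_features):       # (batch, n_frames * feats_per_frame)
            return self.mlp(time_series_features)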
-
Patent number: 12038370
Abstract: A computational cytometer operates using magnetically modulated lensless speckle imaging, which introduces oscillatory motion to magnetic bead-conjugated rare cells of interest through a periodic magnetic force and uses lensless time-resolved holographic speckle imaging to rapidly detect the target cells in three dimensions (3D). Detection specificity is further enhanced through a deep learning-based classifier that is based on a densely connected pseudo-3D convolutional neural network (P3D CNN), which automatically detects rare cells of interest based on their spatio-temporal features under a controlled magnetic force. This compact, cost-effective and high-throughput computational cytometer can be used for rare cell detection and quantification in bodily fluids for a variety of biomedical applications.
Type: Grant
Filed: July 2, 2020
Date of Patent: July 16, 2024
Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Aniruddha Ray, Yibo Zhang, Dino Di Carlo
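A hedged sketch of a pseudo-3D (P3D) convolutional block of the type named in the abstract: a full 3D convolution over (time, height, width) is factorized into a spatial 2D convolution followed by a temporal 1D convolution, keeping a spatio-temporal receptive field with fewer parameters; channel sizes and the surrounding network are assumptions:

    # Sketch only: this factorized block illustrates the P3D idea, not the patented network.
    import torch
    import torch.nn as nn

    class P3DBlock(nn.Module):
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.spatial = nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 3), padding=(0, 1, 1))
            self.temporal = nn.Conv3d(out_ch, out_ch, kernel_size=(3, 1, 1), padding=(1, 0, 0))
            self.act = nn.ReLU()

        def forward(self, x):                          # x: (batch, channels, time, height, width)
            return self.act(self.temporal(self.act(self.spatial(x))))

A stack of such blocks followed by pooling and a small classification head could then score each candidate cell's motion signature under the periodic magnetic force.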
-
Patent number: 12020165
Abstract: A trained deep neural network transforms an image of a sample obtained with a holographic microscope into an image that substantially resembles a microscopy image obtained with a microscope having a different microscopy image modality. Examples of different imaging modalities include bright-field, fluorescence, and dark-field. For bright-field applications, deep learning brings bright-field microscopy contrast to holographic images of a sample, bridging the volumetric imaging capability of holography with the speckle-free and artifact-free image contrast of bright-field microscopy.
Type: Grant
Filed: November 14, 2019
Date of Patent: June 25, 2024
Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Yichen Wu
-
Patent number: 12013395
Abstract: A multiplexed vertical flow serodiagnostic testing device for diseases such as Lyme disease includes one or more multi-piece cassettes that include vertical stacks of functionalized porous layers therein. A bottom piece of the cassette includes a sensing membrane with a plurality of spatially multiplexed immunoreaction spots or locations. Top pieces are used to deliver sample and/or buffer solutions along with antibody-conjugated nanoparticles for binding with the immunoreaction spots or locations. A colorimetric signal is generated by the nanoparticles captured on the sensing membrane containing disease-specific antigens. The sensing membrane is imaged by a cost-effective portable reader device. The images captured by the reader device are subject to image processing and analysis to generate a positive (+) or negative (−) indication for the sample. A concentration of one or more biomarkers may also be generated.
Type: Grant
Filed: October 18, 2019
Date of Patent: June 18, 2024
Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Hyou-Arm Joung, Zachary S. Ballard, Omai Garner, Dino Di Carlo, Aydogan Ozcan
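A hedged sketch of the reader-side analysis: the membrane image is sampled at the known multiplexed spot locations, each spot's mean colorimetric signal is compared against a background region, and a simple rule yields a positive/negative call; the spot coordinates, radius, and threshold here are illustrative assumptions:

    # Sketch only: a real device would use calibration and trained models rather than a fixed threshold.
    import numpy as np

    def spot_signal(image, center, radius):
        """Mean intensity inside one circular immunoreaction spot (grayscale membrane image)."""
        yy, xx = np.ogrid[:image.shape[0], :image.shape[1]]
        mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
        return float(image[mask].mean())

    def classify_sample(image, spot_centers, background_center, radius=10, threshold=0.15):
        background = spot_signal(image, background_center, radius)
        # Darker spots (more captured nanoparticles) give larger relative signals.
        signals = [(background - spot_signal(image, c, radius)) / background for c in spot_centers]
        return ("positive" if max(signals) > threshold else "negative"), signals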
-
Publication number: 20240135544
Abstract: A deep learning-based digital staining method and system are disclosed that enable the creation of digitally/virtually-stained microscopic images from label-free or stain-free samples based on autofluorescence images acquired using a fluorescence microscope. The system and method have particular applicability for the creation of digitally/virtually-stained whole slide images (WSIs) of unlabeled/unstained tissue samples that are analyzed by a histopathologist. The methods bypass the standard histochemical staining process, saving time and cost. This method is based on deep learning, and uses, in one embodiment, a convolutional neural network trained using a generative adversarial network model to transform fluorescence images of an unlabeled sample into an image that is equivalent to the brightfield image of the chemically stained version of the same sample.
Type: Application
Filed: December 18, 2023
Publication date: April 25, 2024
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Zhensong Wei
-
Patent number: 11946854
Abstract: A fluorescence microscopy method includes a trained deep neural network. At least one 2D fluorescence microscopy image of a sample is input to the trained deep neural network, wherein the input image(s) is appended with a digital propagation matrix (DPM) that represents, pixel-by-pixel, an axial distance of a user-defined or automatically generated surface within the sample from a plane of the input image. The trained deep neural network outputs fluorescence output image(s) of the sample that are digitally propagated or refocused to the user-defined or automatically generated surface. The method and system cross-connect different imaging modalities, permitting 3D propagation of wide-field fluorescence image(s) to match confocal microscopy images at different sample planes. The method may be used to output a time sequence of images (e.g., time-lapse video) of a 2D or 3D surface within a sample.
Type: Grant
Filed: December 23, 2019
Date of Patent: April 2, 2024
Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Yichen Wu
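A hedged sketch of the digital propagation matrix (DPM) idea: a per-pixel map of target axial distances is appended to the input fluorescence image as an extra channel before it is fed to the refocusing network; the tilted-plane example and channel layout below are illustrative assumptions:

    # Sketch only: the surface function and units are hypothetical.
    import numpy as np

    def make_dpm(height, width, surface_fn):
        """Build a DPM whose (i, j) entry is the axial distance of the target surface at that pixel."""
        rows, cols = np.meshgrid(np.arange(height), np.arange(width), indexing="ij")
        return surface_fn(rows, cols).astype(np.float32)

    def append_dpm(fluorescence_image, dpm):
        """Stack the 2D fluorescence image and its DPM into a two-channel network input."""
        return np.stack([fluorescence_image.astype(np.float32), dpm], axis=0)

    # Example: refocus onto a plane tilted along one axis, ramping from 0 to 10 microns.
    dpm = make_dpm(512, 512, lambda r, c: 10.0 * c / 511.0)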
-
Patent number: 11915360
Abstract: A deep learning-based volumetric image inference system and method are disclosed that use 2D images that are sparsely captured by a standard wide-field fluorescence microscope at arbitrary axial positions within the sample volume. Through a recurrent convolutional neural network (referred to herein as Recurrent-MZ), 2D fluorescence information from a few axial planes within the sample is explicitly incorporated to digitally reconstruct the sample volume over an extended depth-of-field. Using experiments on C. elegans and nanobead samples, Recurrent-MZ is demonstrated to increase the depth-of-field of a 63×/1.4 NA objective lens by approximately 50-fold, also providing a 30-fold reduction in the number of axial scans required to image the same sample volume. The generalization of this recurrent network for 3D imaging is further demonstrated by showing its resilience to varying imaging conditions, including e.g.
Type: Grant
Filed: October 19, 2021
Date of Patent: February 27, 2024
Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Luzhe Huang
-
Patent number: 11893779
Abstract: Systems and methods for detecting motile objects (e.g., parasites) in a fluid sample by utilizing the locomotion of the parasites as a specific biomarker and endogenous contrast mechanism. The imaging platform includes one or more substantially optically transparent sample holders. The imaging platform has a moveable scanning head containing light sources and corresponding image sensor(s) associated with the light source(s). The light source(s) are directed at a respective sample holder containing a sample, and the respective image sensor(s) are positioned below a respective sample holder to capture time-varying holographic speckle patterns of the sample contained in the sample holder. A computing device is configured to receive the time-varying holographic speckle pattern image sequences obtained by the image sensor(s). The computing device generates a 3D contrast map of motile objects within the sample and uses deep learning-based classifier software to identify the motile objects.
Type: Grant
Filed: October 18, 2019
Date of Patent: February 6, 2024
Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yibo Zhang, Hatice Ceylan Koydemir
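A hedged sketch of the endogenous-motion contrast idea: for a stack of holographically reconstructed frames at one depth, the pixel-wise temporal fluctuation (here a normalized standard deviation over time) highlights motile objects against the static background, and repeating this per reconstruction depth builds a 3D contrast map; the specific contrast statistic is an assumption:

    # Sketch only: a simple temporal-fluctuation statistic stands in for the patented contrast pipeline.
    import numpy as np

    def motility_contrast_map(reconstructed_stack):
        """reconstructed_stack: (n_frames, height, width) amplitude images at one reconstruction depth."""
        stack = reconstructed_stack.astype(np.float32)
        return stack.std(axis=0) / (stack.mean(axis=0) + 1e-6)   # normalized temporal fluctuation

    def contrast_volume(stacks_per_depth):
        """stacks_per_depth: iterable of (n_frames, H, W) stacks, one per reconstruction depth."""
        return np.stack([motility_contrast_map(s) for s in stacks_per_depth], axis=0)

Candidate regions with high contrast would then be passed to the deep learning-based classifier to separate motile parasites from drifting debris.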
-
Patent number: 11893739
Abstract: A deep learning-based digital staining method and system are disclosed that enable the creation of digitally/virtually-stained microscopic images from label-free or stain-free samples based on autofluorescence images acquired using a fluorescence microscope. The system and method have particular applicability for the creation of digitally/virtually-stained whole slide images (WSIs) of unlabeled/unstained tissue samples that are analyzed by a histopathologist. The methods bypass the standard histochemical staining process, saving time and cost. This method is based on deep learning, and uses, in one embodiment, a convolutional neural network trained using a generative adversarial network model to transform fluorescence images of an unlabeled sample into an image that is equivalent to the brightfield image of the chemically stained version of the same sample.
Type: Grant
Filed: March 29, 2019
Date of Patent: February 6, 2024
Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Zhensong Wei
-
Publication number: 20230401436
Abstract: A method of forming an optical neural network for processing an input object image or optical signal that is invariant to object transformations includes training a software-based neural network model to perform one or more specific optical functions for a multi-layer optical network having physical features located in each of the layers of the optical neural network. The training includes feeding different input object images or optical signals that have random transformations or shifts, computing at least one optical output of optical transmission and/or reflection through the optical neural network using an optical wave propagation model, and iteratively adjusting transmission/reflection coefficients for each layer until optimized transmission/reflection coefficients are obtained.
Type: Application
Filed: October 22, 2021
Publication date: December 14, 2023
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Deniz Mengu, Yair Rivenson
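A hedged sketch of the training idea: on each iteration the input object is given a random lateral shift before being propagated through the simulated optical layers, so that the learned transmission coefficients become insensitive to object position; the model interface, shift range, and loss are assumptions:

    # Sketch only: `optical_model` is an assumed differentiable wave-propagation simulation
    # whose parameters are the layer transmission/reflection coefficients.
    import torch

    def random_shift(field, max_shift=16):
        """Circularly shift a (batch, H, W) input field by a random number of pixels."""
        dy = int(torch.randint(-max_shift, max_shift + 1, (1,)))
        dx = int(torch.randint(-max_shift, max_shift + 1, (1,)))
        return torch.roll(field, shifts=(dy, dx), dims=(-2, -1))

    def training_step(optical_model, optimizer, loss_fn, input_field, target):
        shifted = random_shift(input_field)            # random transformation of the object
        output_intensity = optical_model(shifted)      # optical wave propagation through the layers
        loss = loss_fn(output_intensity, target)       # the target is defined to be shift-invariant
        optimizer.zero_grad(); loss.backward(); optimizer.step()
        return loss.item()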