Patents by Inventor Aydogan Ozcan
Aydogan Ozcan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20220299525
Abstract: A system for detecting the presence of and/or quantifying the amount or concentration of one or more analytes in a sample includes a flow assay cartridge having a multiplexed sensing membrane that has immunoreaction or biological reaction spots of varying conditions spatially arranged across the surface of the membrane, defining an optimized spot map. A reader device is provided that uses a camera to image the multiplexed sensing membrane. Image processing software obtains normalized pixel intensity values of the plurality of immunoreaction or biological reaction spots, which are used as inputs to one or more trained neural networks configured to generate one or more outputs that: (i) quantify the amount or concentration of the one or more analytes in the sample; and/or (ii) indicate the presence of the one or more analytes in the sample; and/or (iii) determine a diagnostic decision or classification of the sample.
Type: Application
Filed: May 22, 2020
Publication date: September 22, 2022
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Hyou-Arm Joung, Zachary S. Ballard, Omai Garner, Dino Di Carlo, Artem Goncharov
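The abstract does not specify the normalization scheme; as an illustrative sketch (assuming each spot's mean pixel intensity is divided by a per-spot background estimate, and with the hypothetical helper `normalized_spot_intensities`), the feature vector fed to the trained neural networks could be computed as:

```python
import numpy as np

def normalized_spot_intensities(image, spot_map, backgrounds):
    """Mean pixel intensity of each reaction spot divided by a per-spot
    background estimate (the normalization scheme is an assumption)."""
    features = []
    for (r0, r1, c0, c1), bg in zip(spot_map, backgrounds):
        features.append(image[r0:r1, c0:c1].mean() / bg)
    return np.array(features)

# toy 8x8 "membrane image" with one bright 2x2 reaction spot
img = np.ones((8, 8))
img[2:4, 2:4] = 3.0
x = normalized_spot_intensities(img, [(2, 4, 2, 4), (5, 7, 5, 7)], [1.0, 1.0])
# x is the per-spot input vector for the trained neural network(s)
```

Values near 1.0 indicate no signal above background; the second (blank) spot here yields exactly that.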
-
Patent number: 11450121
Abstract: An optical readout method for detecting a precipitate (e.g., a precipitate generated from the LAMP reaction) contained within a droplet includes generating a plurality of droplets, at least some of which have a precipitate contained therein. The droplets are imaged using a brightfield imaging device. The image is subject to image processing using image processing software executed on a computing device. Image processing isolates individual droplets in the image and performs feature detection within the isolated droplets. Keypoints and information related thereto are extracted from the detected features within the isolated droplets. The keypoints are subject to a clustering operation to generate a plurality of visual "words." The word frequency obtained for each droplet is input into a trained machine learning droplet classifier, which classifies each droplet as positive or negative for the precipitate.
Type: Grant
Filed: June 26, 2018
Date of Patent: September 20, 2022
Assignee: The Regents of the University of California
Inventors: Dino Di Carlo, Aydogan Ozcan, Omai B. Garner, Hector E. Munoz, Carson Riche
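The bag-of-visual-words step described above (keypoints clustered into "words," then a per-droplet word-frequency vector) can be sketched as follows; the nearest-centroid assignment stands in for whatever clustering the patent actually uses, and `word_histogram` is a hypothetical helper:

```python
import numpy as np

def word_histogram(descriptors, centroids):
    """Assign each keypoint descriptor to its nearest centroid (a visual
    "word") and return the normalized word-frequency vector for one droplet."""
    # pairwise squared distances, shape (n_descriptors, n_words)
    dists = ((descriptors[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(centroids)).astype(float)
    return hist / hist.sum()

# two cluster centers ("words") and four keypoint descriptors from one droplet
centroids = np.array([[0.0, 0.0], [10.0, 10.0]])
descriptors = np.array([[0.1, 0.2], [9.8, 10.1], [10.2, 9.9], [0.0, 0.1]])
h = word_histogram(descriptors, centroids)
```

The resulting frequency vector `h` is what would be fed to the trained droplet classifier (the abstract does not name a specific classifier).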
-
Publication number: 20220268754
Abstract: A system for detecting the presence of E. coli and total coliform in a water sample includes a sample holder that holds smaller, divided volumes of the sample and a testing reagent. A plurality of light sources are disposed above the sample holder. The divided sample volumes are illuminated with first and second light sources emitting light at different wavelengths. A bundle of optical fibers has an input end located adjacent to the divided sample volumes and is configured to receive light passing through the sample volumes. Light output from the bundle of optical fibers is captured with a camera. Image processing software is provided and is configured to calculate the light intensity in first and second wavelength channels at different times and output a positive/negative indication for E. coli and total coliform for the water sample.
Type: Application
Filed: July 29, 2020
Publication date: August 25, 2022
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Kevin de Haan, Hatice Ceylan Koydemir, Sabiha Tok, Derek Tseng
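A minimal sketch of the two-channel decision logic, assuming a simple ratio-change threshold (the actual decision rule and threshold value are not given in the abstract):

```python
def classify_well(ch1, ch2, threshold=0.2):
    """Return True (positive) if the ratio of the two wavelength channels
    changes by more than `threshold` between the first and last time point.
    The rule and the threshold value are illustrative assumptions."""
    r_start = ch1[0] / ch2[0]
    r_end = ch1[-1] / ch2[-1]
    return abs(r_end - r_start) / r_start > threshold

pos = classify_well([1.0, 0.9, 0.5], [1.0, 1.0, 1.0])    # ratio drops 1.0 -> 0.5
neg = classify_well([1.0, 0.99, 0.98], [1.0, 1.0, 1.0])  # ratio nearly constant
```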
-
Patent number: 11422503
Abstract: A method for lens-free imaging of a sample or objects within the sample uses multi-height iterative phase retrieval and rotational field transformations to perform wide FOV imaging of pathology samples with clinically comparable image quality to a benchtop lens-based microscope. The solution of the transport-of-intensity (TIE) equation is used as an initial guess in the phase recovery process to speed the image recovery process. The holographically reconstructed image can be digitally focused at any depth within the object FOV (after image capture) without the need for any focus adjustment, and is also digitally corrected for artifacts arising from uncontrolled tilting and height variations between the sample and sensor planes. In an alternative embodiment, a synthetic aperture approach is used with multi-angle iterative phase retrieval to perform wide FOV imaging of pathology samples and increase the effective numerical aperture of the image.
Type: Grant
Filed: November 19, 2020
Date of Patent: August 23, 2022
Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Alon Grinbaum, Yibo Zhang, Alborz Feizi, Wei Luo
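The multi-height amplitude-constraint loop can be sketched with the angular spectrum propagation method as below; the TIE-based initial guess and the tilt/height corrections described above are omitted, and all parameter values are illustrative:

```python
import numpy as np

def angular_spectrum(field, dz, wavelength, dx):
    """Propagate a complex field by distance dz via the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(2j * np.pi / wavelength * dz * np.sqrt(np.maximum(arg, 0.0)))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def multi_height_phase_retrieval(amplitudes, heights, wavelength, dx, n_iter=10):
    """Cycle through the measurement planes, keeping the computed phase but
    replacing the amplitude with the measured one at each plane."""
    field = amplitudes[0].astype(complex)  # simple initial guess (no TIE here)
    for _ in range(n_iter):
        for amp, z_from, z_to in zip(amplitudes, heights, np.roll(heights, -1)):
            field = amp * np.exp(1j * np.angle(field))  # amplitude constraint
            field = angular_spectrum(field, z_to - z_from, wavelength, dx)
    return field

# two synthetic 32x32 amplitude measurements 100 um apart (illustrative values)
amps = [np.ones((32, 32)), np.ones((32, 32))]
field = multi_height_phase_retrieval(amps, [0.0, 100e-6], 0.5e-6, 1e-6, n_iter=5)
```

The returned complex field can then be back-propagated to any depth within the object FOV, which is what enables post-capture digital focusing.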
-
Publication number: 20220260481
Abstract: A computational cytometer operates using magnetically modulated lensless speckle imaging, which introduces oscillatory motion to magnetic bead-conjugated rare cells of interest through a periodic magnetic force and uses lensless time-resolved holographic speckle imaging to rapidly detect the target cells in three dimensions (3D). Detection specificity is further enhanced through a deep learning-based classifier that is based on a densely connected pseudo-3D convolutional neural network (P3D CNN), which automatically detects rare cells of interest based on their spatio-temporal features under a controlled magnetic force. This compact, cost-effective and high-throughput computational cytometer can be used for rare cell detection and quantification in bodily fluids for a variety of biomedical applications.
Type: Application
Filed: July 2, 2020
Publication date: August 18, 2022
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Aniruddha Ray, Yibo Zhang, Dino Di Carlo
-
Publication number: 20220253685
Abstract: A broadband diffractive optical neural network simultaneously processes a continuum of wavelengths generated by a temporally-incoherent broadband source to all-optically perform a specific task learned using network learning. The optical neural network design was verified by designing, fabricating and testing seven different multi-layer, diffractive optical systems that transform the optical wavefront generated by a broadband THz pulse to realize (1) a series of tunable, single passband as well as dual passband spectral filters, and (2) spatially-controlled wavelength de-multiplexing. Merging the native or engineered dispersion of various material systems with a deep learning-based design, broadband diffractive optical neural networks help engineer light-matter interaction in 3D, diverging from intuitive and analytical design methods to create task-specific optical components that can all-optically perform deterministic tasks or statistical inference for optical machine learning.
Type: Application
Filed: September 9, 2020
Publication date: August 11, 2022
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yi Luo, Deniz Mengu, Yair Rivenson
-
Patent number: 11397405
Abstract: A method of generating a color image of a sample includes obtaining a plurality of low resolution holographic images of the sample using a color image sensor, the sample illuminated simultaneously by light from three or more distinct colors, wherein the illuminated sample casts sample holograms on the image sensor and wherein the plurality of low resolution holographic images are obtained by relative x, y, and z directional shifts between sample holograms and the image sensor. Pixel super-resolved holograms of the sample are generated at each of the three or more distinct colors. De-multiplexed holograms are generated from the pixel super-resolved holograms. Phase information is retrieved from the de-multiplexed holograms using a phase retrieval algorithm to obtain complex holograms. The complex hologram for the three or more distinct colors is digitally combined and back-propagated to a sample plane to generate the color image.
Type: Grant
Filed: August 28, 2020
Date of Patent: July 26, 2022
Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yichen Wu, Yibo Zhang, Wei Luo
-
Patent number: 11392830
Abstract: An all-optical Diffractive Deep Neural Network (D2NN) architecture learns to implement various functions or tasks after deep learning-based design of the passive diffractive or reflective substrate layers that work collectively to perform the desired function or task. This architecture was successfully confirmed experimentally by creating 3D-printed D2NNs that learned to implement handwritten classifications and lens function at the terahertz spectrum. This all-optical deep learning framework can perform, at the speed of light, various complex functions and tasks that computer-based neural networks can implement, and will find applications in all-optical image analysis, feature detection and object classification, also enabling new camera designs and optical components that can learn to perform unique tasks using D2NNs. In alternative embodiments, the all-optical D2NN is used as a front-end in conjunction with a trained, digital neural network back-end.
Type: Grant
Filed: April 12, 2019
Date of Patent: July 19, 2022
Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Xing Lin, Deniz Mengu, Yi Luo
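A forward pass through a D2NN-style stack (phase-only layers with free-space propagation between them) can be sketched as follows; a unitary single-FFT transform stands in for the actual free-space diffraction model, and the layer phases here are random rather than learned:

```python
import numpy as np

def d2nn_forward(field, phase_layers, propagate):
    """Pass a complex optical field through phase-only diffractive layers,
    propagating between consecutive layers; returns detector-plane intensity."""
    for phi in phase_layers:
        field = propagate(field)          # free-space propagation to the layer
        field = field * np.exp(1j * phi)  # phase modulation by the layer
    return np.abs(propagate(field)) ** 2  # intensity recorded at the detector

n = 16
rng = np.random.default_rng(0)
layers = [rng.uniform(0.0, 2.0 * np.pi, size=(n, n)) for _ in range(3)]
prop = lambda f: np.fft.fft2(f, norm="ortho")  # unitary stand-in propagator
out = d2nn_forward(np.ones((n, n), dtype=complex), layers, prop)
```

Because both the phase layers and the unitary propagator conserve energy, the total detected power equals the input power; training would adjust `layers` so that power concentrates in task-specific detector regions.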
-
Publication number: 20220206434
Abstract: A method for performing color image reconstruction of a single super-resolved holographic sample image includes obtaining a plurality of sub-pixel shifted lower resolution hologram images of the sample using an image sensor by simultaneous illumination at multiple color channels. Super-resolved hologram intensity images for each color channel are digitally generated based on the lower resolution hologram images. The super-resolved hologram intensity images for each color channel are back propagated to an object plane with image processing software to generate real and imaginary input images of the sample for each color channel. A trained deep neural network, executed by image processing software using one or more processors of a computing device, is configured to receive the real and imaginary input images of the sample for each color channel and generate a color output image of the sample.
Type: Application
Filed: April 21, 2020
Publication date: June 30, 2022
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Tairan Liu, Yibo Zhang, Zhensong Wei
-
Patent number: 11347000
Abstract: A micro-plate reader for use with a portable electronic device having a camera includes an opto-mechanical attachment configured to attach/detach to the portable electronic device and includes an array of illumination sources. A slot in the opto-mechanical attachment is dimensioned to receive an optically transparent plate containing an array of wells. Optical fibers are located in the opto-mechanical attachment and transmit light from each well to a reduced-size header, wherein the fiber array in the header has a cross-sectional area that is ≤10× the cross-sectional area of the wells in the plate. A lens located in the opto-mechanical attachment transmits light from the header fibers to the camera. Software executed on the portable electronic device or other computer is used to process the images to generate qualitative clinical determinations and/or quantitative index values for the separate wells.
Type: Grant
Filed: June 17, 2016
Date of Patent: May 31, 2022
Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Brandon Berg, Bingen Cortazar, Derek Tseng, Steve Wei Feng, Haydar Ozkan, Omai Garner, Dino Di Carlo
-
Patent number: 11320362
Abstract: A lens-free microscope system for automatically analyzing yeast cell viability in a stained sample includes a portable, lens-free microscopy device that includes a housing containing a light source coupled to an optical fiber, the optical fiber spaced several centimeters from an image sensor disposed at one end of the housing, wherein the stained sample is disposed on the image sensor or a sample holder adjacent to the image sensor. Hologram images are transferred to a computing device having image processing software contained therein, the image processing software identifying yeast cell candidates of interest from back-propagated images of the stained sample, whereby a plurality of spatial features are extracted from the yeast cell candidates of interest and subject to a trained machine learning model to classify the yeast cell candidates of interest as live or dead.
Type: Grant
Filed: September 22, 2017
Date of Patent: May 3, 2022
Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Alborz Feizi, Alon Greenbaum
-
Publication number: 20220121940
Abstract: A deep learning-based spectral analysis device and method are disclosed that employs a spectral encoder chip containing a plurality of nanohole array tiles, each with a unique geometry and, thus, a unique optical transmission spectrum. Illumination impinges upon the encoder chip and a CMOS image sensor captures the transmitted light, without any lenses, gratings, or other optical components. A spectral reconstruction neural network uses the transmitted intensities from the image to faithfully reconstruct the input spectrum. In one embodiment that used a spectral encoder chip with 252 nanohole array tiles, the network was trained on 50,352 spectra randomly generated by a supercontinuum laser and blindly tested on 14,648 unseen spectra. The system identified 96.86% of spectral peaks, with a peak localization error of 0.19 nm, peak height error of 7.60%, and peak bandwidth error of 0.18 nm.
Type: Application
Filed: October 17, 2021
Publication date: April 21, 2022
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Calvin Brown, Artem Goncharov, Zachary Ballard, Yair Rivenson
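The reconstruction task amounts to inverting the linear mapping from the input spectrum to the 252 tile intensities; in the sketch below a least-squares solve stands in for the trained neural network, with a synthetic transmission matrix and an assumed number of spectral bins:

```python
import numpy as np

rng = np.random.default_rng(1)
n_tiles, n_bins = 252, 100  # 252 tiles per the abstract; bin count assumed
T = rng.uniform(size=(n_tiles, n_bins))  # synthetic per-tile transmission spectra

spectrum = np.zeros(n_bins)
spectrum[40] = 1.0                 # a single narrow peak in bin 40
measured = T @ spectrum            # lens-free intensities read off the CMOS sensor

# least-squares inversion as a stand-in for the reconstruction network
recovered, *_ = np.linalg.lstsq(T, measured, rcond=None)
```

With more tiles than spectral bins the system is overdetermined and the noise-free inversion is exact; the trained network's advantage is robustness to noise and to spectra outside this idealized linear model.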
-
Publication number: 20220122313
Abstract: A deep learning-based volumetric image inference system and method are disclosed that uses 2D images that are sparsely captured by a standard wide-field fluorescence microscope at arbitrary axial positions within the sample volume. Through a recurrent convolutional neural network (RNN) (referred to herein as Recurrent-MZ), 2D fluorescence information from a few axial planes within the sample is explicitly incorporated to digitally reconstruct the sample volume over an extended depth-of-field. Using experiments on C. elegans and nanobead samples, Recurrent-MZ is demonstrated to increase the depth-of-field of a 63×/1.4 NA objective lens by approximately 50-fold, also providing a 30-fold reduction in the number of axial scans required to image the same sample volume. The generalization of this recurrent network for 3D imaging is further demonstrated by showing its resilience to varying imaging conditions, including e.g.
Type: Application
Filed: October 19, 2021
Publication date: April 21, 2022
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Luzhe Huang
-
Publication number: 20220114711
Abstract: A microscopy method includes a trained deep neural network that is executed by software using one or more processors of a computing device, the trained deep neural network trained with a training set of images comprising co-registered pairs of high-resolution microscopy images or image patches of a sample and their corresponding low-resolution microscopy images or image patches of the same sample. A microscopy input image of a sample to be imaged is input to the trained deep neural network which rapidly outputs an output image of the sample, the output image having improved one or more of spatial resolution, depth-of-field, signal-to-noise ratio, and/or image contrast.
Type: Application
Filed: November 19, 2021
Publication date: April 14, 2022
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Harun Gunaydin, Kevin de Haan
-
Publication number: 20220113671
Abstract: A UV holographic imaging device offers a low-cost, portable and robust technique to image and distinguish protein crystals from salt crystals, without the need for any expensive and bulky optical components. This "on-chip" device uses a UV LED and a consumer-grade CMOS image sensor that is de-capped and interfaced to a processor or microcontroller. The information from the crystal samples, which are placed very close to the sensor active area, is captured in the form of in-line holograms and extracted through digital back-propagation. In these holographic amplitude and/or phase reconstructions, protein crystals appear significantly darker compared to the background due to the strong UV absorption, unlike salt crystals, enabling one to clearly distinguish protein and salt crystals. The on-chip UV holographic microscope serves as a low-cost, sensitive, and robust alternative to conventional lens-based UV microscopes used in protein crystallography.
Type: Application
Filed: December 3, 2019
Publication date: April 14, 2022
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Aniruddha Ray, Mustafa Daloglu
-
Patent number: 11262286
Abstract: A label-free bio-aerosol sensing platform and method uses a field-portable and cost-effective device based on holographic microscopy and deep learning, which screens bio-aerosols at a high throughput level. Two different deep neural networks are utilized to rapidly reconstruct the amplitude and phase images of the captured bio-aerosols, and to output particle information for each bio-aerosol that is imaged. This includes a classification of the type or species of the particle, particle size, particle shape, particle thickness, or spatial feature(s) of the particle. The platform was validated using the label-free sensing of common bio-aerosol types, e.g., Bermuda grass pollen, oak tree pollen, ragweed pollen, Aspergillus spore, and Alternaria spore, and achieved >94% classification accuracy. The label-free bio-aerosol platform, with its mobility and cost-effectiveness, will find several applications in indoor and outdoor air quality monitoring.
Type: Grant
Filed: April 24, 2020
Date of Patent: March 1, 2022
Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yichen Wu
-
Publication number: 20220058776
Abstract: A fluorescence microscopy method includes a trained deep neural network. At least one 2D fluorescence microscopy image of a sample is input to the trained deep neural network, wherein the input image(s) is appended with a digital propagation matrix (DPM) that represents, pixel-by-pixel, an axial distance of a user-defined or automatically generated surface within the sample from a plane of the input image. The trained deep neural network outputs fluorescence output image(s) of the sample that is digitally propagated or refocused to the user-defined or automatically generated surface. The method and system cross-connects different imaging modalities, permitting 3D propagation of wide-field fluorescence image(s) to match confocal microscopy images at different sample planes. The method may be used to output a time sequence of images (e.g., time-lapse video) of a 2D or 3D surface within a sample.
Type: Application
Filed: December 23, 2019
Publication date: February 24, 2022
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Yichen Wu
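Forming the network input by appending the DPM as an extra channel can be sketched as below; the shapes, units, and the `append_dpm` helper are illustrative:

```python
import numpy as np

def append_dpm(image, surface_z):
    """Append the digital propagation matrix (per-pixel target axial distance,
    arbitrary units) as a second channel of the network input."""
    dpm = np.broadcast_to(np.asarray(surface_z, dtype=float), image.shape)
    return np.stack([image, dpm], axis=-1)

img = np.random.rand(64, 64)  # one 2D wide-field fluorescence image
# a tilted target surface: axial distance varies linearly across columns
tilted = np.repeat(np.linspace(-5.0, 5.0, 64)[None, :], 64, axis=0)
x_tilted = append_dpm(img, tilted)  # refocus onto the tilted surface
x_flat = append_dpm(img, 3.0)       # refocus onto a flat plane at z = 3
```

Because the DPM is just another input channel, the same trained network can refocus one captured image onto arbitrary (even non-planar) surfaces by swapping the DPM.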
-
Publication number: 20220012850
Abstract: A trained deep neural network transforms an image of a sample obtained with a holographic microscope to an image that substantially resembles a microscopy image obtained with a microscope having a different microscopy image modality. Examples of different imaging modalities include bright-field, fluorescence, and dark-field. For bright-field applications, deep learning brings bright-field microscopy contrast to holographic images of a sample, bridging the volumetric imaging capability of holography with the speckle-free and artifact-free image contrast of bright-field microscopy.
Type: Application
Filed: November 14, 2019
Publication date: January 13, 2022
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Yichen Wu
-
Patent number: 11222415
Abstract: A microscopy method includes a trained deep neural network that is executed by software using one or more processors of a computing device, the trained deep neural network trained with a training set of images comprising co-registered pairs of high-resolution microscopy images or image patches of a sample and their corresponding low-resolution microscopy images or image patches of the same sample. A microscopy input image of a sample to be imaged is input to the trained deep neural network which rapidly outputs an output image of the sample, the output image having improved one or more of spatial resolution, depth-of-field, signal-to-noise ratio, and/or image contrast.
Type: Grant
Filed: April 26, 2019
Date of Patent: January 11, 2022
Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Harun Gunaydin, Kevin de Haan
-
Publication number: 20210382052
Abstract: A multiplexed vertical flow serodiagnostic testing device for diseases such as Lyme disease includes one or more multi-piece cassettes that include vertical stacks of functionalized porous layers therein. A bottom piece of the cassette includes a sensing membrane with a plurality of spatially multiplexed immunoreaction spots or locations. Top pieces are used to deliver sample and/or buffer solutions along with antibody-conjugated nanoparticles for binding with the immunoreaction spots or locations. A colorimetric signal is generated by the nanoparticles captured on the sensing membrane containing disease-specific antigens. The sensing membrane is imaged by a cost-effective portable reader device. The images captured by the reader device are subject to image processing and analysis to generate a positive (+) or negative (−) indication for the sample. A concentration of one or more biomarkers may also be generated.
Type: Application
Filed: October 18, 2019
Publication date: December 9, 2021
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Hyou-Arm Joung, Zachary S. Ballard, Omai Garner, Dino Di Carlo, Aydogan Ozcan