Patents by Inventor Hongda Wang
Hongda Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12270068
Abstract: A system for the detection and classification of live microorganisms in a sample includes a light source and an incubator holding one or more sample-containing growth plates. A translation stage moves the image sensor and/or the growth plate(s) along one or more dimensions to capture time-lapse holographic images of microorganisms. Image processing software executed by a computing device captures time-lapse holographic images of the microorganisms or clusters of microorganisms on the one or more growth plates. The image processing software is configured to detect candidate microorganism colonies in reconstructed, time-lapse holographic images based on differential image analysis. The image processing software includes one or more trained deep neural networks that process the time-lapse image(s) of candidate microorganism colonies to detect true microorganism colonies and/or output a species associated with each true microorganism colony.
Type: Grant
Filed: January 27, 2021
Date of Patent: April 8, 2025
Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Hatice Ceylan Koydemir, Yunzhe Qiu
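The differential image analysis the abstract describes can be illustrated with a minimal sketch. This is a hypothetical simplification with names of our own choosing; the patented system operates on reconstructed holograms and passes candidates to trained deep neural networks for classification.

```python
def candidate_colonies(frame_t0, frame_t1, threshold):
    """Flag pixel coordinates whose intensity changed by more than
    `threshold` between two time-lapse frames (2D lists of numbers)."""
    candidates = []
    for y, (row0, row1) in enumerate(zip(frame_t0, frame_t1)):
        for x, (p0, p1) in enumerate(zip(row0, row1)):
            if abs(p1 - p0) > threshold:
                candidates.append((x, y))
    return candidates

# A growing colony brightens one pixel between consecutive frames.
frame_a = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
frame_b = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
print(candidate_colonies(frame_a, frame_b, threshold=5))  # [(1, 1)]
```

Comparing frames rather than thresholding a single image is what distinguishes live, growing colonies from static debris on the plate.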
-
Patent number: 12236612
Abstract: A computer-implemented method for tracking multiple targets includes identifying a primary target from a plurality of targets based on a plurality of images obtained from an imaging device carried by an aerial vehicle via a carrier, and determining a target group including one or more targets from the plurality of targets, where the primary target is always in the target group. Determining the target group includes determining one or more remaining targets in the target group based on a spatial relationship or a relative distance between the primary target and each target of the plurality of targets other than the primary target. The method further includes controlling at least one of the aerial vehicle or the carrier to track the target group as a whole.
Type: Grant
Filed: July 17, 2023
Date of Patent: February 25, 2025
Assignee: SZ DJI TECHNOLOGY CO., LTD.
Inventors: Jie Qian, Hongda Wang, Qifeng Wu
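The relative-distance grouping step can be sketched as follows. This is an illustrative simplification; the function name, position representation, and threshold are our assumptions, not taken from the claims.

```python
import math

def target_group(primary, others, max_dist):
    """Return the target group: the primary target plus every other
    target whose distance from it is within `max_dist`.
    Targets are (x, y) positions in a common frame."""
    return [primary] + [t for t in others if math.dist(primary, t) <= max_dist]

primary = (0.0, 0.0)
others = [(1.0, 1.0), (10.0, 0.0), (0.0, 2.0)]
print(target_group(primary, others, max_dist=3.0))
# [(0.0, 0.0), (1.0, 1.0), (0.0, 2.0)]
```

Because the group is rebuilt around the primary target each time, the primary target is always a member, matching the invariant stated in the abstract.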
-
Publication number: 20250046069
Abstract: A deep learning-based virtual HER2 IHC staining method uses a conditional generative adversarial network that is trained to rapidly transform autofluorescence microscopic images of unlabeled/label-free breast tissue sections into bright-field equivalent microscopic images, matching the standard HER2 IHC staining that is chemically performed on the same tissue sections. The efficacy of this staining framework was demonstrated by quantitative analysis of blindly graded HER2 scores of virtually stained and immunohistochemically stained HER2 whole slide images (WSIs). A second quantitative blinded study revealed that the virtually stained HER2 images exhibit a comparable staining quality in the level of nuclear detail, membrane clearness, and absence of staining artifacts with respect to their immunohistochemically stained counterparts.
Type: Application
Filed: November 30, 2022
Publication date: February 6, 2025
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Bijie Bai, Hongda Wang
-
Publication number: 20250040368
Abstract: Disclosed are a display panel and a display apparatus. The display panel includes a display region, and the display region includes a light transmitting display region and a conventional display region located on at least one side of the light transmitting display region; the conventional display region includes a first region, a second region, and a third region. At least one circuit unit of the first region in the conventional display region is connected with a light emitting device in the light transmitting display region, and at least one circuit unit of the third region in the conventional display region includes a data connection line. A second high-voltage power supply line includes a first sub-high-voltage power supply line and a second sub-high-voltage power supply line connected with each other, and the second sub-high-voltage power supply line is located on a side of the first sub-high-voltage power supply line away from the base substrate.
Type: Application
Filed: August 22, 2022
Publication date: January 30, 2025
Inventors: Hongda CUI, Qiwei WANG
-
Patent number: 12190478
Abstract: A microscopy method includes a trained deep neural network that is executed by software using one or more processors of a computing device, the trained deep neural network trained with a training set of images comprising co-registered pairs of high-resolution microscopy images or image patches of a sample and their corresponding low-resolution microscopy images or image patches of the same sample. A microscopy input image of a sample to be imaged is input to the trained deep neural network, which rapidly outputs an output image of the sample, the output image having improved one or more of spatial resolution, depth-of-field, signal-to-noise ratio, and/or image contrast.
Type: Grant
Filed: November 19, 2021
Date of Patent: January 7, 2025
Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Harun Gunaydin, Kevin de Haan
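The training signal for a network of this kind reduces to penalizing the difference between the network's output for a low-resolution input and its co-registered high-resolution counterpart. A minimal sketch of such a pixel-wise loss (the function name and the specific L1 formulation are our illustrative choices, not the patented training objective):

```python
def pixelwise_l1(output, target):
    """Mean absolute error between a network output and the
    co-registered high-resolution ground-truth image (2D lists)."""
    total, count = 0.0, 0
    for row_out, row_tgt in zip(output, target):
        for o, t in zip(row_out, row_tgt):
            total += abs(o - t)
            count += 1
    return total / count

# Training drives this loss toward zero over many co-registered pairs.
print(pixelwise_l1([[0.0, 0.5]], [[0.0, 1.0]]))  # 0.25
```

Accurate co-registration of the image pairs matters here: any misalignment between the low- and high-resolution images would be penalized as if it were a reconstruction error.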
-
Publication number: 20240135544
Abstract: A deep learning-based digital staining method and system are disclosed that enable the creation of digitally/virtually-stained microscopic images from label-free or stain-free samples based on autofluorescence images acquired using a fluorescence microscope. The system and method have particular applicability for the creation of digitally/virtually-stained whole slide images (WSIs) of unlabeled/unstained tissue samples that are analyzed by a histopathologist. The methods bypass the standard histochemical staining process, saving time and cost. This method is based on deep learning, and uses, in one embodiment, a convolutional neural network trained using a generative adversarial network model to transform fluorescence images of an unlabeled sample into an image that is equivalent to the brightfield image of the chemically stained version of the same sample.
Type: Application
Filed: December 18, 2023
Publication date: April 25, 2024
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Zhensong Wei
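In the spirit of the generative adversarial training the abstract describes, the generator's objective typically combines an adversarial term with a pixel-wise fidelity term against the chemically stained target. The sketch below is a pix2pix-style illustration with names and weights of our own choosing, not the patented loss.

```python
import math

def generator_loss(disc_score_fake, fake, real, l1_weight=100.0):
    """Non-saturating adversarial term plus L1 fidelity to the
    chemically stained ground truth (images flattened to lists).
    `disc_score_fake` is the discriminator's probability (0..1]
    that the generated image is real."""
    adversarial = -math.log(disc_score_fake)  # large when D is not fooled
    fidelity = sum(abs(f - r) for f, r in zip(fake, real)) / len(real)
    return adversarial + l1_weight * fidelity

# An output that both fools the discriminator and matches the target
# pixel-for-pixel yields the minimum loss.
print(generator_loss(1.0, [0.5, 0.5], [0.5, 0.5]))  # 0.0
```

The fidelity term keeps the virtual stain anchored to the real histochemical appearance, while the adversarial term pushes the output distribution toward realistic brightfield images.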
-
Patent number: 11893739
Abstract: A deep learning-based digital staining method and system are disclosed that enable the creation of digitally/virtually-stained microscopic images from label-free or stain-free samples based on autofluorescence images acquired using a fluorescence microscope. The system and method have particular applicability for the creation of digitally/virtually-stained whole slide images (WSIs) of unlabeled/unstained tissue samples that are analyzed by a histopathologist. The methods bypass the standard histochemical staining process, saving time and cost. This method is based on deep learning, and uses, in one embodiment, a convolutional neural network trained using a generative adversarial network model to transform fluorescence images of an unlabeled sample into an image that is equivalent to the brightfield image of the chemically stained version of the same sample.
Type: Grant
Filed: March 29, 2019
Date of Patent: February 6, 2024
Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Zhensong Wei
-
Publication number: 20230360230
Abstract: A computer-implemented method for tracking multiple targets includes identifying a primary target from a plurality of targets based on a plurality of images obtained from an imaging device carried by an aerial vehicle via a carrier, and determining a target group including one or more targets from the plurality of targets, where the primary target is always in the target group. Determining the target group includes determining one or more remaining targets in the target group based on a spatial relationship or a relative distance between the primary target and each target of the plurality of targets other than the primary target. The method further includes controlling at least one of the aerial vehicle or the carrier to track the target group as a whole.
Type: Application
Filed: July 17, 2023
Publication date: November 9, 2023
Inventors: Jie QIAN, Hongda WANG, Qifeng WU
-
Patent number: 11704812
Abstract: A computer-implemented method for tracking multiple targets includes identifying a plurality of targets based on a plurality of images obtained from an imaging device carried by an unmanned aerial vehicle (UAV) via a carrier, determining a target group comprising one or more targets from the plurality of targets, and controlling at least one of the UAV or the carrier to track the target group.
Type: Grant
Filed: December 19, 2019
Date of Patent: July 18, 2023
Assignee: SZ DJI TECHNOLOGY CO., LTD.
Inventors: Jie Qian, Hongda Wang, Qifeng Wu
-
Publication number: 20230060037
Abstract: A system for the detection and classification of live microorganisms in a sample includes a light source and an incubator holding one or more sample-containing growth plates. A translation stage moves the image sensor and/or the growth plate(s) along one or more dimensions to capture time-lapse holographic images of microorganisms. Image processing software executed by a computing device captures time-lapse holographic images of the microorganisms or clusters of microorganisms on the one or more growth plates. The image processing software is configured to detect candidate microorganism colonies in reconstructed, time-lapse holographic images based on differential image analysis. The image processing software includes one or more trained deep neural networks that process the time-lapse image(s) of candidate microorganism colonies to detect true microorganism colonies and/or output a species associated with each true microorganism colony.
Type: Application
Filed: January 27, 2021
Publication date: February 23, 2023
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Hatice Ceylan Koydemir, Yunzhe Qiu
-
Publication number: 20230030424
Abstract: A deep learning-based digital/virtual staining method and system enables the creation of digitally/virtually-stained microscopic images from label-free or stain-free samples. In one embodiment, the method generates digitally/virtually-stained microscope images of label-free or unstained samples using fluorescence lifetime (FLIM) image(s) of the sample(s) acquired with a fluorescence microscope. In another embodiment, a digital/virtual autofocusing method is provided that uses machine learning to generate a microscope image with improved focus using a trained, deep neural network. In another embodiment, a trained deep neural network generates digitally/virtually stained microscopic images of a label-free or unstained sample obtained with a microscope having multiple different stains. The multiple stains in the output image or sub-regions thereof are substantially equivalent to the corresponding microscopic images or image sub-regions of the same sample that has been histochemically stained.
Type: Application
Filed: December 22, 2020
Publication date: February 2, 2023
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Yilin Luo, Kevin de Haan, Yijie Zhang, Bijie Bai
-
Publication number: 20220114711
Abstract: A microscopy method includes a trained deep neural network that is executed by software using one or more processors of a computing device, the trained deep neural network trained with a training set of images comprising co-registered pairs of high-resolution microscopy images or image patches of a sample and their corresponding low-resolution microscopy images or image patches of the same sample. A microscopy input image of a sample to be imaged is input to the trained deep neural network, which rapidly outputs an output image of the sample, the output image having improved one or more of spatial resolution, depth-of-field, signal-to-noise ratio, and/or image contrast.
Type: Application
Filed: November 19, 2021
Publication date: April 14, 2022
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Harun Gunaydin, Kevin de Haan
-
Patent number: 11222415
Abstract: A microscopy method includes a trained deep neural network that is executed by software using one or more processors of a computing device, the trained deep neural network trained with a training set of images comprising co-registered pairs of high-resolution microscopy images or image patches of a sample and their corresponding low-resolution microscopy images or image patches of the same sample. A microscopy input image of a sample to be imaged is input to the trained deep neural network, which rapidly outputs an output image of the sample, the output image having improved one or more of spatial resolution, depth-of-field, signal-to-noise ratio, and/or image contrast.
Type: Grant
Filed: April 26, 2019
Date of Patent: January 11, 2022
Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Harun Gunaydin, Kevin de Haan
-
Publication number: 20210043331
Abstract: A deep learning-based digital staining method and system are disclosed that enable the creation of digitally/virtually-stained microscopic images from label-free or stain-free samples based on autofluorescence images acquired using a fluorescence microscope. The system and method have particular applicability for the creation of digitally/virtually-stained whole slide images (WSIs) of unlabeled/unstained tissue samples that are analyzed by a histopathologist. The methods bypass the standard histochemical staining process, saving time and cost. This method is based on deep learning, and uses, in one embodiment, a convolutional neural network trained using a generative adversarial network model to transform fluorescence images of an unlabeled sample into an image that is equivalent to the brightfield image of the chemically stained version of the same sample.
Type: Application
Filed: March 29, 2019
Publication date: February 11, 2021
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Zhensong Wei
-
Publication number: 20210011490
Abstract: A flight control method includes determining a distance of a target relative to an aircraft based on a depth map acquired by an imaging device carried by the aircraft, determining an orientation of the target relative to the aircraft, and controlling flight of the aircraft based on the distance and the orientation.
Type: Application
Filed: July 21, 2020
Publication date: January 14, 2021
Inventors: Jie QIAN, Qifeng WU, Hongda WANG
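The two quantities this method derives can be sketched from a depth-map region of interest and the target's pixel position. This is an illustrative simplification with our own names and arithmetic; the publication does not specify these computations.

```python
import math
import statistics

def target_distance(depth_roi):
    """Median depth over the target's region of interest (2D list of
    depths in meters); the median is robust to a few bad depth pixels."""
    return statistics.median(v for row in depth_roi for v in row)

def target_bearing(target_px, image_width, horizontal_fov_deg):
    """Horizontal angle of the target relative to the camera axis,
    assuming a simple linear pixel-to-angle mapping."""
    offset = target_px - image_width / 2
    return offset / image_width * horizontal_fov_deg

roi = [[4.9, 5.0], [5.1, 50.0]]        # one outlier depth reading
print(target_distance(roi))            # 5.05
print(target_bearing(480, 640, 80.0))  # 20.0
```

A flight controller could then close the loop on these two values, e.g. yawing to drive the bearing toward zero while holding the distance at a setpoint.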
-
Publication number: 20200126239
Abstract: A computer-implemented method for tracking multiple targets includes identifying a plurality of targets based on a plurality of images obtained from an imaging device carried by an unmanned aerial vehicle (UAV) via a carrier, determining a target group comprising one or more targets from the plurality of targets, and controlling at least one of the UAV or the carrier to track the target group.
Type: Application
Filed: December 19, 2019
Publication date: April 23, 2020
Inventors: Jie QIAN, Hongda WANG, Qifeng WU
-
Publication number: 20190333199
Abstract: A microscopy method includes a trained deep neural network that is executed by software using one or more processors of a computing device, the trained deep neural network trained with a training set of images comprising co-registered pairs of high-resolution microscopy images or image patches of a sample and their corresponding low-resolution microscopy images or image patches of the same sample. A microscopy input image of a sample to be imaged is input to the trained deep neural network, which rapidly outputs an output image of the sample, the output image having improved one or more of spatial resolution, depth-of-field, signal-to-noise ratio, and/or image contrast.
Type: Application
Filed: April 26, 2019
Publication date: October 31, 2019
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Harun Gunaydin, Kevin de Haan
-
Patent number: 7745206
Abstract: An atomic force microscope and a method for detecting interactions between a probe and two or more sensed agents on a scanned surface, and for determining the relative location of the two or more sensed agents, are provided. The microscope has a scanning probe with a tip that is sensitive to two or more sensed agents on said scanned surface; two or more sensing agents tethered to the tip of the probe; and a device for recording the displacement of said probe tip as a function of time, topographic images, and the spatial location of interactions between said probe and the two or more sensed agents on said surface.
Type: Grant
Filed: January 29, 2008
Date of Patent: June 29, 2010
Assignee: Arizona State University
Inventors: Hongda Wang, Stuart Lindsay
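As a rough illustration of how the recorded probe-tip displacement trace could reveal interaction events, the sketch below flags dips in a deflection signal. This is our own simplification with hypothetical names and units; the instrument additionally correlates such events with topographic images to locate the sensed agents on the surface.

```python
def interaction_events(deflection, baseline, dip_threshold):
    """Indices in a displacement-vs-time trace where the probe
    deflection dips below `baseline` by more than `dip_threshold`,
    suggesting a binding interaction with a sensed agent."""
    return [i for i, d in enumerate(deflection)
            if baseline - d > dip_threshold]

trace = [1.0, 1.0, 0.2, 1.0, 0.1, 1.0]  # two adhesion dips
print(interaction_events(trace, baseline=1.0, dip_threshold=0.5))  # [2, 4]
```

Mapping each flagged sample index back to the tip's scan position at that instant would give the spatial location of the interaction on the surface.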
-
Publication number: 20080209989
Abstract: An atomic force microscope and a method for detecting interactions between a probe and two or more sensed agents on a scanned surface, and for determining the relative location of the two or more sensed agents, are provided. The microscope has a scanning probe with a tip that is sensitive to two or more sensed agents on said scanned surface; two or more sensing agents tethered to the tip of the probe; and a device for recording the displacement of said probe tip as a function of time, topographic images, and the spatial location of interactions between said probe and the two or more sensed agents on said surface.
Type: Application
Filed: January 29, 2008
Publication date: September 4, 2008
Inventors: Hongda Wang, Stuart Lindsay