Patents by Inventor Hongda Wang

Hongda Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12270068
    Abstract: A system for the detection and classification of live microorganisms in a sample includes a light source, an image sensor, and an incubator holding one or more sample-containing growth plates. A translation stage moves the image sensor and/or the growth plate(s) along one or more dimensions to capture time-lapse holographic images of microorganisms. Image processing software executed by a computing device captures time-lapse holographic images of the microorganisms or clusters of microorganisms on the one or more growth plates. The image processing software is configured to detect candidate microorganism colonies in reconstructed, time-lapse holographic images based on differential image analysis. The image processing software includes one or more trained deep neural networks that process the time-lapse image(s) of candidate microorganism colonies to detect true microorganism colonies and/or output a species associated with each true microorganism colony.
    Type: Grant
    Filed: January 27, 2021
    Date of Patent: April 8, 2025
    Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Hatice Ceylan Koydemir, Yunzhe Qiu
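The differential image analysis step named in the abstract above — flagging pixels that change between consecutive time-lapse frames and grouping them into candidate colony regions — can be illustrated with a minimal sketch. This is a toy illustration, not the patented implementation; the function name, threshold values, and 4-connectivity choice are all assumptions:

```python
from collections import deque

def candidate_colonies(frame_t0, frame_t1, threshold=10, min_pixels=2):
    """Differential image analysis: pixels whose intensity changes between
    two time-lapse frames are grouped into candidate colony regions."""
    h, w = len(frame_t0), len(frame_t0[0])
    mask = [[abs(frame_t1[y][x] - frame_t0[y][x]) > threshold for x in range(w)]
            for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    candidates = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # BFS over 4-connected changed pixels
                comp, queue = [], deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    comp.append((cy, cx))
                    for ny, nx in ((cy+1, cx), (cy-1, cx), (cy, cx+1), (cy, cx-1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(comp) >= min_pixels:  # reject single-pixel noise
                    yc = sum(p[0] for p in comp) / len(comp)
                    xc = sum(p[1] for p in comp) / len(comp)
                    candidates.append({"centroid": (yc, xc), "pixels": len(comp)})
    return candidates
```

In the patented system, such candidate regions would then be passed to the trained deep neural networks for true-colony detection and species classification.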
  • Patent number: 12236612
    Abstract: A computer-implemented method for tracking multiple targets includes identifying a primary target from a plurality of targets based on a plurality of images obtained from an imaging device carried by an aerial vehicle via a carrier, and determining a target group including one or more targets from the plurality of targets, where the primary target is always in the target group. Determining the target group includes determining one or more remaining targets in the target group based on a spatial relationship or a relative distance between the primary target and each target of the plurality of targets other than the primary target. The method further includes controlling at least one of the aerial vehicle or the carrier to track the target group as a whole.
    Type: Grant
    Filed: July 17, 2023
    Date of Patent: February 25, 2025
    Assignee: SZ DJI TECHNOLOGY CO., LTD.
    Inventors: Jie Qian, Hongda Wang, Qifeng Wu
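The group-selection rule above — keep the primary target plus any other target within some relative distance of it — can be sketched as a small helper. A minimal illustration, not DJI's implementation; the function name and the fixed-distance criterion are assumptions (the patent also allows other spatial relationships):

```python
import math

def select_target_group(primary, targets, max_distance):
    """Keep the primary target plus every other target whose distance to
    the primary is within max_distance; the primary is always a member."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    group = [primary]
    group += [t for t in targets if t != primary and dist(t, primary) <= max_distance]
    return group
```

The vehicle and/or gimbal carrier would then be commanded to keep the bounding region of the returned group in frame.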
  • Publication number: 20250046069
    Abstract: A deep learning-based virtual HER2 IHC staining method uses a conditional generative adversarial network that is trained to rapidly transform autofluorescence microscopic images of unlabeled/label-free breast tissue sections into bright-field equivalent microscopic images, matching the standard HER2 IHC staining that is chemically performed on the same tissue sections. The efficacy of this staining framework was demonstrated by quantitative analysis of blindly graded HER2 scores of virtually stained and immunohistochemically stained HER2 whole slide images (WSIs). A second quantitative blinded study revealed that the virtually stained HER2 images exhibit staining quality comparable to their immunohistochemically stained counterparts in terms of nuclear detail, membrane clearness, and absence of staining artifacts.
    Type: Application
    Filed: November 30, 2022
    Publication date: February 6, 2025
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Bijie Bai, Hongda Wang
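Conditional GANs for image-to-image translation, as described above, are commonly trained with a generator objective that combines an adversarial term with a weighted pixel-wise L1 term pulling the virtual stain toward the chemically stained target. A minimal scalar sketch of that loss composition (hypothetical names; the weight `lam=100.0` is a common pix2pix-style default, not a value taken from the patent):

```python
import math

def l1_pixel_loss(generated, target):
    """Mean absolute pixel difference between generated and target images."""
    n = len(generated)
    return sum(abs(g - t) for g, t in zip(generated, target)) / n

def generator_loss(disc_score_on_fake, generated, target, lam=100.0):
    """Non-saturating adversarial term plus weighted L1 pixel term."""
    adv = -math.log(max(disc_score_on_fake, 1e-12))  # fool the discriminator
    return adv + lam * l1_pixel_loss(generated, target)
```

The L1 term keeps the generated stain structurally faithful to the ground-truth IHC image, while the adversarial term pushes it toward realistic bright-field texture.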
  • Publication number: 20250040368
    Abstract: Disclosed are a display panel and a display apparatus. The display panel includes a display region; the display region includes a light-transmitting display region and a conventional display region located on at least one side of the light-transmitting display region, and the conventional display region includes a first region, a second region, and a third region. At least one circuit unit of the first region in the conventional display region is connected with a light-emitting device in the light-transmitting display region, and at least one circuit unit of the third region in the conventional display region includes a data connection line. A second high-voltage power supply line includes a first sub-high-voltage power supply line and a second sub-high-voltage power supply line connected with each other; the second sub-high-voltage power supply line is located on a side of the first sub-high-voltage power supply line away from the base substrate.
    Type: Application
    Filed: August 22, 2022
    Publication date: January 30, 2025
    Inventors: Hongda CUI, Qiwei WANG
  • Patent number: 12190478
    Abstract: A microscopy method includes a trained deep neural network that is executed by software using one or more processors of a computing device, the trained deep neural network trained with a training set of images comprising co-registered pairs of high-resolution microscopy images or image patches of a sample and their corresponding low-resolution microscopy images or image patches of the same sample. A microscopy input image of a sample to be imaged is input to the trained deep neural network, which rapidly outputs an output image of the sample having improved spatial resolution, depth-of-field, signal-to-noise ratio, and/or image contrast.
    Type: Grant
    Filed: November 19, 2021
    Date of Patent: January 7, 2025
    Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Harun Gunaydin, Kevin de Haan
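The training-set construction described above relies on co-registered low-resolution/high-resolution image pairs being tiled into matched patches. A minimal sketch of that pairing step, assuming the two images have already been registered onto the same pixel grid (the function name and parameters are hypothetical, not from the patent):

```python
def extract_patch_pairs(low_res, high_res, patch, stride):
    """Tile two co-registered, same-size images into aligned
    (low-res patch, high-res patch) training pairs."""
    h, w = len(low_res), len(low_res[0])
    pairs = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            lr = [row[x:x + patch] for row in low_res[y:y + patch]]
            hr = [row[x:x + patch] for row in high_res[y:y + patch]]
            pairs.append((lr, hr))
    return pairs
```

Each pair then serves as one (input, target) example for training the image-enhancement network.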
  • Publication number: 20240135544
    Abstract: A deep learning-based digital staining method and system are disclosed that enable the creation of digitally/virtually-stained microscopic images from label-free or stain-free samples based on autofluorescence images acquired using a fluorescence microscope. The system and method have particular applicability for the creation of digitally/virtually-stained whole slide images (WSIs) of unlabeled/unstained tissue samples that are analyzed by a histopathologist. The methods bypass the standard histochemical staining process, saving time and cost. The method is based on deep learning and uses, in one embodiment, a convolutional neural network trained using a generative adversarial network model to transform fluorescence images of an unlabeled sample into an image that is equivalent to the brightfield image of the chemically stained version of the same sample.
    Type: Application
    Filed: December 18, 2023
    Publication date: April 25, 2024
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Zhensong Wei
  • Patent number: 11893739
    Abstract: A deep learning-based digital staining method and system are disclosed that enable the creation of digitally/virtually-stained microscopic images from label-free or stain-free samples based on autofluorescence images acquired using a fluorescence microscope. The system and method have particular applicability for the creation of digitally/virtually-stained whole slide images (WSIs) of unlabeled/unstained tissue samples that are analyzed by a histopathologist. The methods bypass the standard histochemical staining process, saving time and cost. The method is based on deep learning and uses, in one embodiment, a convolutional neural network trained using a generative adversarial network model to transform fluorescence images of an unlabeled sample into an image that is equivalent to the brightfield image of the chemically stained version of the same sample.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: February 6, 2024
    Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Zhensong Wei
  • Publication number: 20230360230
    Abstract: A computer-implemented method for tracking multiple targets includes identifying a primary target from a plurality of targets based on a plurality of images obtained from an imaging device carried by an aerial vehicle via a carrier, and determining a target group including one or more targets from the plurality of targets, where the primary target is always in the target group. Determining the target group includes determining one or more remaining targets in the target group based on a spatial relationship or a relative distance between the primary target and each target of the plurality of targets other than the primary target. The method further includes controlling at least one of the aerial vehicle or the carrier to track the target group as a whole.
    Type: Application
    Filed: July 17, 2023
    Publication date: November 9, 2023
    Inventors: Jie QIAN, Hongda WANG, Qifeng WU
  • Patent number: 11704812
    Abstract: A computer-implemented method for tracking multiple targets includes identifying a plurality of targets based on a plurality of images obtained from an imaging device carried by an unmanned aerial vehicle (UAV) via a carrier, determining a target group comprising one or more targets from the plurality of targets, and controlling at least one of the UAV or the carrier to track the target group.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: July 18, 2023
    Assignee: SZ DJI TECHNOLOGY CO., LTD.
    Inventors: Jie Qian, Hongda Wang, Qifeng Wu
  • Publication number: 20230060037
    Abstract: A system for the detection and classification of live microorganisms in a sample includes a light source, an image sensor, and an incubator holding one or more sample-containing growth plates. A translation stage moves the image sensor and/or the growth plate(s) along one or more dimensions to capture time-lapse holographic images of microorganisms. Image processing software executed by a computing device captures time-lapse holographic images of the microorganisms or clusters of microorganisms on the one or more growth plates. The image processing software is configured to detect candidate microorganism colonies in reconstructed, time-lapse holographic images based on differential image analysis. The image processing software includes one or more trained deep neural networks that process the time-lapse image(s) of candidate microorganism colonies to detect true microorganism colonies and/or output a species associated with each true microorganism colony.
    Type: Application
    Filed: January 27, 2021
    Publication date: February 23, 2023
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Hatice Ceylan Koydemir, Yunzhe Qiu
  • Publication number: 20230030424
    Abstract: A deep learning-based digital/virtual staining method and system enable the creation of digitally/virtually-stained microscopic images from label-free or stain-free samples. In one embodiment, the method generates digitally/virtually-stained microscope images of label-free or unstained samples using fluorescence lifetime (FLIM) image(s) of the sample(s) acquired with a fluorescence microscope. In another embodiment, a digital/virtual autofocusing method is provided that uses machine learning to generate a microscope image with improved focus using a trained, deep neural network. In another embodiment, a trained deep neural network generates digitally/virtually-stained microscopic images, having multiple different stains, of a label-free or unstained sample obtained with a microscope. The multiple stains in the output image or sub-regions thereof are substantially equivalent to the corresponding microscopic images or image sub-regions of the same sample that has been histochemically stained.
    Type: Application
    Filed: December 22, 2020
    Publication date: February 2, 2023
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Yilin Luo, Kevin de Haan, Yijie Zhang, Bijie Bai
  • Publication number: 20220114711
    Abstract: A microscopy method includes a trained deep neural network that is executed by software using one or more processors of a computing device, the trained deep neural network trained with a training set of images comprising co-registered pairs of high-resolution microscopy images or image patches of a sample and their corresponding low-resolution microscopy images or image patches of the same sample. A microscopy input image of a sample to be imaged is input to the trained deep neural network, which rapidly outputs an output image of the sample having improved spatial resolution, depth-of-field, signal-to-noise ratio, and/or image contrast.
    Type: Application
    Filed: November 19, 2021
    Publication date: April 14, 2022
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Harun Gunaydin, Kevin de Haan
  • Patent number: 11222415
    Abstract: A microscopy method includes a trained deep neural network that is executed by software using one or more processors of a computing device, the trained deep neural network trained with a training set of images comprising co-registered pairs of high-resolution microscopy images or image patches of a sample and their corresponding low-resolution microscopy images or image patches of the same sample. A microscopy input image of a sample to be imaged is input to the trained deep neural network, which rapidly outputs an output image of the sample having improved spatial resolution, depth-of-field, signal-to-noise ratio, and/or image contrast.
    Type: Grant
    Filed: April 26, 2019
    Date of Patent: January 11, 2022
    Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Harun Gunaydin, Kevin de Haan
  • Publication number: 20210043331
    Abstract: A deep learning-based digital staining method and system are disclosed that enable the creation of digitally/virtually-stained microscopic images from label-free or stain-free samples based on autofluorescence images acquired using a fluorescence microscope. The system and method have particular applicability for the creation of digitally/virtually-stained whole slide images (WSIs) of unlabeled/unstained tissue samples that are analyzed by a histopathologist. The methods bypass the standard histochemical staining process, saving time and cost. The method is based on deep learning and uses, in one embodiment, a convolutional neural network trained using a generative adversarial network model to transform fluorescence images of an unlabeled sample into an image that is equivalent to the brightfield image of the chemically stained version of the same sample.
    Type: Application
    Filed: March 29, 2019
    Publication date: February 11, 2021
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Zhensong Wei
  • Publication number: 20210011490
    Abstract: A flight control method includes determining a distance of a target relative to an aircraft based on a depth map acquired by an imaging device carried by the aircraft, determining an orientation of the target relative to the aircraft, and controlling flight of the aircraft based on the distance and the orientation.
    Type: Application
    Filed: July 21, 2020
    Publication date: January 14, 2021
    Inventors: Jie QIAN, Qifeng WU, Hongda WANG
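The two quantities this flight-control abstract is built on — the target's distance from a depth map and its orientation relative to the aircraft — can each be sketched in a few lines. An illustrative toy, not DJI's implementation; the median-depth estimate, pinhole-projection bearing, and all names/parameters are assumptions:

```python
def target_distance(depth_map, bbox):
    """Median depth inside the target's bounding box (x0, y0, x1, y1),
    a simple robust estimate of target range."""
    x0, y0, x1, y1 = bbox
    values = sorted(depth_map[y][x] for y in range(y0, y1) for x in range(x0, x1))
    return values[len(values) // 2]

def target_bearing(bbox, image_width, horizontal_fov_deg):
    """Horizontal angle of the box centre relative to the optical axis,
    assuming a simple linear (pinhole-like) projection."""
    cx = (bbox[0] + bbox[2]) / 2.0
    offset = cx / image_width - 0.5          # -0.5 .. +0.5 across the frame
    return offset * horizontal_fov_deg
```

A flight controller would then use the (distance, bearing) pair to command yaw and forward velocity so as to hold the target at the desired range and heading.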
  • Publication number: 20200126239
    Abstract: A computer-implemented method for tracking multiple targets includes identifying a plurality of targets based on a plurality of images obtained from an imaging device carried by an unmanned aerial vehicle (UAV) via a carrier, determining a target group comprising one or more targets from the plurality of targets, and controlling at least one of the UAV or the carrier to track the target group.
    Type: Application
    Filed: December 19, 2019
    Publication date: April 23, 2020
    Inventors: Jie QIAN, Hongda WANG, Qifeng WU
  • Publication number: 20190333199
    Abstract: A microscopy method includes a trained deep neural network that is executed by software using one or more processors of a computing device, the trained deep neural network trained with a training set of images comprising co-registered pairs of high-resolution microscopy images or image patches of a sample and their corresponding low-resolution microscopy images or image patches of the same sample. A microscopy input image of a sample to be imaged is input to the trained deep neural network, which rapidly outputs an output image of the sample having improved spatial resolution, depth-of-field, signal-to-noise ratio, and/or image contrast.
    Type: Application
    Filed: April 26, 2019
    Publication date: October 31, 2019
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Harun Gunaydin, Kevin de Haan
  • Patent number: 7745206
    Abstract: An atomic force microscope and a method for detecting interactions between a probe and two or more sensed agents on a scanned surface and determining the relative locations of the two or more sensed agents are provided. The microscope has a scanning probe with a tip that is sensitive to two or more sensed agents on said scanned surface; two or more sensing agents tethered to the tip of the probe; and a device for recording the displacement of said probe tip as a function of time, topographic images, and the spatial location of interactions between said probe and the two or more sensed agents on said surface.
    Type: Grant
    Filed: January 29, 2008
    Date of Patent: June 29, 2010
    Assignee: Arizona State University
    Inventors: Hongda Wang, Stuart Lindsay
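The recorded probe-tip displacement trace described in this abstract is typically searched for adhesion events, where specific binding between a tethered sensing agent and a surface-bound sensed agent pulls the deflection below its baseline. A minimal sketch of that event detection (an illustrative toy, not the patented method; the function name, baseline, and threshold are assumptions):

```python
def detect_adhesion_events(deflection, baseline=0.0, threshold=1.0):
    """Return (start, end) index spans where the probe deflection drops
    below baseline - threshold, i.e. candidate specific-binding events."""
    events, start = [], None
    for i, d in enumerate(deflection):
        if d < baseline - threshold:
            if start is None:          # event begins
                start = i
        elif start is not None:        # event ends
            events.append((start, i))
            start = None
    if start is not None:              # event runs to end of trace
        events.append((start, len(deflection)))
    return events
```

Mapping such events back to the scan coordinates at which they occurred is what lets the instrument localize the two or more sensed agents on the surface.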
  • Publication number: 20080209989
    Abstract: An atomic force microscope and a method for detecting interactions between a probe and two or more sensed agents on a scanned surface and determining the relative locations of the two or more sensed agents are provided. The microscope has a scanning probe with a tip that is sensitive to two or more sensed agents on said scanned surface; two or more sensing agents tethered to the tip of the probe; and a device for recording the displacement of said probe tip as a function of time, topographic images, and the spatial location of interactions between said probe and the two or more sensed agents on said surface.
    Type: Application
    Filed: January 29, 2008
    Publication date: September 4, 2008
    Inventors: Hongda Wang, Stuart Lindsay