Patents by Inventor Hongda Wang

Hongda Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240134114
    Abstract: A dispersion-compensation microstructure fiber uses pure silica glass as the background material. It includes a core, first-type defects, second-type defects, and a cladding. The air holes in the fiber cross section are arranged on an equilateral-triangle lattice with uniform hole-to-hole spacing. The core is formed by omitting one air hole. The first-type defects are formed by the six air holes located at the vertices of the hexagonal third air-hole layer surrounding the core, together with the background material around them. The second-type defects are formed by the air holes in the first air-hole layer surrounding each first-type defect, together with the background material around them. The second-type defects act as the porous structure surrounding the first-type defects and the fundamental defect modes, and can also combine with the first-type defects to act as the core for the second-order defect modes. (An illustrative geometry sketch follows this entry.)
    Type: Application
    Filed: December 22, 2023
    Publication date: April 25, 2024
    Applicant: YANSHAN UNIVERSITY
    Inventors: Wei WANG, Chang ZHAO, Xiaochen KANG, Hongda YANG, Wenchao LI, Zheng LI, Lin SHI
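
This entry describes a geometric arrangement rather than an algorithm, but a short sketch can make the lattice layout concrete. The Python snippet below (not from the patent) generates an equilateral-triangle lattice of air-hole sites, marks the omitted centre hole as the core, and flags the six ring-3 vertex sites that would host the first-type defects; the pitch value and ring count are arbitrary placeholders.

```python
import math

PITCH = 2.0  # hole-to-hole spacing (Lambda), placeholder value in micrometres

def hex_lattice(rings):
    """Return (x, y, ring) for every site of a triangular lattice out to `rings` rings."""
    sites = []
    for q in range(-rings, rings + 1):
        for r in range(-rings, rings + 1):
            ring = max(abs(q), abs(r), abs(-q - r))
            if ring > rings:
                continue
            x = PITCH * (q + r / 2.0)
            y = PITCH * (math.sqrt(3) / 2.0) * r
            sites.append((x, y, ring))
    return sites

def is_corner(x, y):
    """A ring-3 vertex lies exactly 3*PITCH from the centre; edge sites lie closer."""
    return math.isclose(math.hypot(x, y), 3 * PITCH, rel_tol=1e-9)

def classify(sites):
    """Label each site: core (omitted hole), first-type-defect vertex, or plain air hole."""
    labelled = []
    for x, y, ring in sites:
        if ring == 0:
            kind = "core (air hole omitted)"
        elif ring == 3 and is_corner(x, y):
            kind = "first-type defect (ring-3 vertex)"
        else:
            kind = "air hole"
        labelled.append((x, y, kind))
    return labelled

if __name__ == "__main__":
    sites = classify(hex_lattice(rings=5))
    for kind in ("core (air hole omitted)", "first-type defect (ring-3 vertex)", "air hole"):
        print(kind, sum(1 for *_, k in sites if k == kind))
```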
  • Publication number: 20240135544
    Abstract: A deep learning-based digital staining method and system are disclosed that enable the creation of digitally/virtually-stained microscopic images from label-free or stain-free samples based on autofluorescence images acquired using a fluorescence microscope. The system and method have particular applicability for the creation of digitally/virtually-stained whole slide images (WSIs) of unlabeled/unstained tissue samples that are analyzed by a histopathologist. The methods bypass the standard histochemical staining process, saving time and cost. The method is based on deep learning and uses, in one embodiment, a convolutional neural network trained using a generative adversarial network model to transform fluorescence images of an unlabeled sample into an image that is equivalent to the brightfield image of the chemically stained version of the same sample. (An illustrative training sketch follows this entry.)
    Type: Application
    Filed: December 18, 2023
    Publication date: April 25, 2024
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Zhensong Wei
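
The abstract above describes a convolutional generator trained with a generative adversarial scheme. As a rough illustration only, the PyTorch sketch below trains a toy conditional GAN that maps a single-channel "autofluorescence" patch to a three-channel "stained" patch; the layer sizes, loss weights, and random stand-in data are placeholders, not the patented architecture.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),  # 3-channel "stained" output
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )
    def forward(self, fluor, rgb):
        return self.net(torch.cat([fluor, rgb], dim=1))  # condition on the input image

gen, disc = Generator(), Discriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

fluor = torch.rand(4, 1, 64, 64)   # stand-in autofluorescence patches
target = torch.rand(4, 3, 64, 64)  # stand-in chemically stained (brightfield) patches

for step in range(2):  # a couple of demo steps only
    # Discriminator: real pairs -> 1, generated pairs -> 0
    fake = gen(fluor).detach()
    d_loss = bce(disc(fluor, target), torch.ones(4, 1)) + \
             bce(disc(fluor, fake), torch.zeros(4, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: fool the discriminator while staying close to the target stain
    fake = gen(fluor)
    g_loss = bce(disc(fluor, fake), torch.ones(4, 1)) + 100.0 * l1(fake, target)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```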
  • Publication number: 20240078976
    Abstract: Disclosed is a pixel circuit arranged in a display substrate, which supports a first driving mode and a second driving mode. Content displayed on the display substrate comprises multiple display frames. In both the first driving mode and the second driving mode, the display frames comprise refresh frames. A signal of a second scanning line is the same as that of a third scanning line. The time during which the signal of the second scanning line is an active level signal comprises a first refresh time period, a second refresh time period and a third refresh time period, which occur sequentially at intervals. During the second refresh time period, a signal of a first scanning line is an inactive level signal. The voltage of a signal at a reset voltage end is a positive voltage, and the voltage of a signal at a first initial voltage end is a negative voltage. (A toy encoding of these timing constraints follows this entry.)
    Type: Application
    Filed: July 29, 2022
    Publication date: March 7, 2024
    Inventors: Tianyi CHENG, Haigang QING, Hongda CUI, Sifei AI, Guowei ZHAO, Yang YU, Li WANG, Baoyun WU
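
The signal relationships stated in this abstract can be encoded as a small data structure for clarity. The Python sketch below is a toy model only: the period names, voltage values, and the assumption that the first scanning line is active outside the second refresh period are illustrative placeholders, not the disclosed circuit.

```python
from dataclasses import dataclass

@dataclass
class RefreshPeriod:
    name: str
    scan1_active: bool     # first scanning line
    scan2_active: bool     # second scanning line (the third line carries the same signal)
    reset_voltage: float   # volts at the reset voltage end
    init_voltage: float    # volts at the first initial voltage end

refresh_frame = [
    RefreshPeriod("first refresh period",  True,  True, +3.0, -2.0),   # scan1 state assumed
    RefreshPeriod("second refresh period", False, True, +3.0, -2.0),   # scan1 held inactive
    RefreshPeriod("third refresh period",  True,  True, +3.0, -2.0),   # scan1 state assumed
]

def check(frame):
    """Verify the constraints listed in the abstract for one refresh frame."""
    assert all(p.scan2_active for p in frame), "scan lines 2/3 active through all three periods"
    assert not frame[1].scan1_active, "scan line 1 inactive during the second refresh period"
    assert all(p.reset_voltage > 0 and p.init_voltage < 0 for p in frame)
    print("refresh-frame timing constraints satisfied")

check(refresh_frame)
```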
  • Publication number: 20240073028
    Abstract: The present disclosure provides an anti-counterfeiting verification method, a hardware apparatus, a system, an electronic device and a storage medium, which aim to improve anti-counterfeiting effectiveness for electronic products. The method includes: generating to-be-verified information of a first device in response to a triggered verification event; and outputting the to-be-verified information to indicate that a second device should send it to a verifying terminal, the verifying terminal being configured to verify the authenticity of the first device according to the to-be-verified information and to feed back a verification result to the first device and/or the second device for display. The step of generating the to-be-verified information of the first device includes obtaining a device identifier of the first device and a private key pre-stored in the first device. (An illustrative sign-and-verify sketch follows this entry.)
    Type: Application
    Filed: August 30, 2022
    Publication date: February 29, 2024
    Applicant: BOE TECHNOLOGY GROUP CO., LTD.
    Inventors: Hongjun Du, Tao Li, Xingxing Zhao, Huailiang Wang, Hongda Yu
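
The verification flow outlined in the abstract (a device identifier plus a pre-stored private key, checked by a verifying terminal) resembles a standard sign-and-verify exchange. The snippet below shows one such exchange using Ed25519 from the `cryptography` package; the message layout, nonce, and key handling are assumptions for illustration, not the patented scheme.

```python
import os
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Provisioning (factory): a key pair is created; the private key is stored in the device
# and the public key is registered with the verifying terminal.
device_private_key = ed25519.Ed25519PrivateKey.generate()
device_public_key = device_private_key.public_key()

# First device: build the to-be-verified information when a verification event is triggered.
device_id = b"DEVICE-0001"
nonce = os.urandom(16)                          # freshness, so responses cannot be replayed
signature = device_private_key.sign(device_id + nonce)
to_be_verified = (device_id, nonce, signature)  # handed to the second device, e.g. as a QR code

# Verifying terminal: check authenticity against the registered public key.
def verify(device_id, nonce, signature, public_key):
    try:
        public_key.verify(signature, device_id + nonce)
        return "genuine"
    except InvalidSignature:
        return "counterfeit"

print(verify(*to_be_verified, device_public_key))
```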
  • Patent number: 11893739
    Abstract: A deep learning-based digital staining method and system are disclosed that enable the creation of digitally/virtually-stained microscopic images from label-free or stain-free samples based on autofluorescence images acquired using a fluorescence microscope. The system and method have particular applicability for the creation of digitally/virtually-stained whole slide images (WSIs) of unlabeled/unstained tissue samples that are analyzed by a histopathologist. The methods bypass the standard histochemical staining process, saving time and cost. The method is based on deep learning and uses, in one embodiment, a convolutional neural network trained using a generative adversarial network model to transform fluorescence images of an unlabeled sample into an image that is equivalent to the brightfield image of the chemically stained version of the same sample.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: February 6, 2024
    Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Zhensong Wei
  • Publication number: 20230360230
    Abstract: A computer-implemented method for tracking multiple targets includes identifying a primary target from a plurality of targets based on a plurality of images obtained from an imaging device carried by an aerial vehicle via a carrier, and determining a target group including one or more targets from the plurality of targets, where the primary target is always in the target group. Determining the target group includes determining one or more remaining targets in the target group based on a spatial relationship or a relative distance between the primary target and each target of the plurality of targets other than the primary target. The method further includes controlling at least one of the aerial vehicle or the carrier to track the target group as a whole. (An illustrative grouping sketch follows this entry.)
    Type: Application
    Filed: July 17, 2023
    Publication date: November 9, 2023
    Inventors: Jie QIAN, Hongda WANG, Qifeng WU
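
As a rough illustration of grouping targets around a primary target by relative distance, the sketch below keeps every detection within a threshold of the primary and derives one bounding box for a tracker to follow. The threshold, the detection format, and the final "group box" step are placeholders, not DJI's implementation.

```python
import math

def group_targets(primary, others, max_distance):
    """Keep the primary target plus every other target within max_distance of it."""
    group = [primary]
    for t in others:
        if math.hypot(t["x"] - primary["x"], t["y"] - primary["y"]) <= max_distance:
            group.append(t)
    return group

def group_bounding_box(group):
    """Axis-aligned box (x_min, y_min, x_max, y_max) enclosing every target box."""
    return (min(t["x"] - t["w"] / 2 for t in group),
            min(t["y"] - t["h"] / 2 for t in group),
            max(t["x"] + t["w"] / 2 for t in group),
            max(t["y"] + t["h"] / 2 for t in group))

# Toy detections in image coordinates: centre (x, y) and box size (w, h) in pixels.
primary = {"x": 320, "y": 240, "w": 60, "h": 120}
others = [{"x": 360, "y": 250, "w": 50, "h": 110},   # close -> joins the group
          {"x": 620, "y": 100, "w": 40, "h": 90}]    # far   -> left out

group = group_targets(primary, others, max_distance=100)
print(len(group), "targets in group, box =", group_bounding_box(group))
```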
  • Patent number: 11704812
    Abstract: A computer-implemented method for tracking multiple targets includes identifying a plurality of targets based on a plurality of images obtained from an imaging device carried by an unmanned aerial vehicle (UAV) via a carrier, determining a target group comprising one or more targets from the plurality of targets, and controlling at least one of the UAV or the carrier to track the target group.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: July 18, 2023
    Assignee: SZ DJI TECHNOLOGY CO., LTD.
    Inventors: Jie Qian, Hongda Wang, Qifeng Wu
  • Publication number: 20230060037
    Abstract: A system for the detection and classification of live microorganisms in a sample includes a light source, an image sensor, and an incubator holding one or more sample-containing growth plates. A translation stage moves the image sensor and/or the growth plate(s) along one or more dimensions to capture time-lapse holographic images of microorganisms. Image processing software executed by a computing device captures time-lapse holographic images of the microorganisms or clusters of microorganisms on the one or more growth plates. The image processing software is configured to detect candidate microorganism colonies in reconstructed, time-lapse holographic images based on differential image analysis. The image processing software includes one or more trained deep neural networks that process the time-lapse image(s) of candidate microorganism colonies to detect true microorganism colonies and/or output a species associated with each true microorganism colony. (A sketch of the differential-image step follows this entry.)
    Type: Application
    Filed: January 27, 2021
    Publication date: February 23, 2023
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Hatice Ceylan Koydemir, Yunzhe Qiu
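
The differential-image step mentioned in the abstract can be illustrated in a few lines of NumPy/SciPy: subtract consecutive reconstructed frames, threshold the change, and report connected growing regions as candidate colonies. The synthetic frames and thresholds below are placeholders, and the deep-network classification stage is not shown.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# Synthetic time-lapse stack: background noise plus one "colony" that grows over time.
frames = rng.normal(0.0, 0.01, size=(3, 128, 128))
yy, xx = np.mgrid[:128, :128]
for t, radius in enumerate((2, 4, 6)):
    frames[t] += ((yy - 64) ** 2 + (xx - 64) ** 2 < radius ** 2) * 0.5

def candidate_colonies(frames, change_threshold=0.2, min_pixels=5):
    """Detect regions whose intensity grows between consecutive frames."""
    candidates = []
    for t in range(1, len(frames)):
        diff = frames[t] - frames[t - 1]      # differential image
        mask = diff > change_threshold        # pixels that brightened
        labels, n = ndimage.label(mask)       # connected growing regions
        for region in range(1, n + 1):
            ys, xs = np.nonzero(labels == region)
            if ys.size >= min_pixels:         # ignore isolated noisy pixels
                candidates.append({"frame": t,
                                   "centroid": (float(ys.mean()), float(xs.mean())),
                                   "area_px": int(ys.size)})
    return candidates

print(candidate_colonies(frames))
```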
  • Publication number: 20230030424
    Abstract: A deep learning-based digital/virtual staining method and system enable the creation of digitally/virtually-stained microscopic images from label-free or stain-free samples. In one embodiment, the method generates digitally/virtually-stained microscope images of label-free or unstained samples using fluorescence lifetime (FLIM) image(s) of the sample(s) acquired with a fluorescence microscope. In another embodiment, a digital/virtual autofocusing method is provided that uses machine learning to generate a microscope image with improved focus using a trained, deep neural network. In another embodiment, a trained deep neural network generates digitally/virtually-stained microscopic images of a label-free or unstained sample obtained with a microscope, the output having multiple different stains. The multiple stains in the output image or sub-regions thereof are substantially equivalent to the corresponding microscopic images or image sub-regions of the same sample that has been histochemically stained.
    Type: Application
    Filed: December 22, 2020
    Publication date: February 2, 2023
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Yilin Luo, Kevin de Haan, Yijie Zhang, Bijie Bai
  • Publication number: 20220114711
    Abstract: A microscopy method includes a trained deep neural network that is executed by software using one or more processors of a computing device, the trained deep neural network trained with a training set of images comprising co-registered pairs of high-resolution microscopy images or image patches of a sample and their corresponding low-resolution microscopy images or image patches of the same sample. A microscopy input image of a sample to be imaged is input to the trained deep neural network, which rapidly outputs an output image of the sample, the output image being improved in one or more of spatial resolution, depth of field, signal-to-noise ratio, or image contrast. (An illustrative training/inference sketch follows this entry.)
    Type: Application
    Filed: November 19, 2021
    Publication date: April 14, 2022
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Harun Gunaydin, Kevin de Haan
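
The abstract describes supervised training on co-registered low-/high-resolution image pairs followed by single-pass inference. The PyTorch sketch below mirrors that workflow with a toy network and random stand-in patches; none of the layers, losses, or data reflect the patented model.

```python
import torch
import torch.nn as nn

model = nn.Sequential(                       # image-to-image CNN, same spatial size in and out
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# Co-registered pairs: each low-resolution patch is aligned with its high-resolution counterpart.
low_res = torch.rand(8, 1, 64, 64)
high_res = torch.rand(8, 1, 64, 64)

for epoch in range(2):                       # demo epochs only
    prediction = model(low_res)
    loss = loss_fn(prediction, high_res)
    optimiser.zero_grad(); loss.backward(); optimiser.step()

with torch.no_grad():                        # inference on a new low-resolution input
    improved = model(torch.rand(1, 1, 64, 64))
print(improved.shape)
```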
  • Patent number: 11222415
    Abstract: A microscopy method includes a trained deep neural network that is executed by software using one or more processors of a computing device, the trained deep neural network trained with a training set of images comprising co-registered pairs of high-resolution microscopy images or image patches of a sample and their corresponding low-resolution microscopy images or image patches of the same sample. A microscopy input image of a sample to be imaged is input to the trained deep neural network, which rapidly outputs an output image of the sample, the output image being improved in one or more of spatial resolution, depth of field, signal-to-noise ratio, or image contrast.
    Type: Grant
    Filed: April 26, 2019
    Date of Patent: January 11, 2022
    Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Harun Gunaydin, Kevin de Haan
  • Publication number: 20210043331
    Abstract: A deep learning-based digital staining method and system are disclosed that enable the creation of digitally/virtually-stained microscopic images from label-free or stain-free samples based on autofluorescence images acquired using a fluorescence microscope. The system and method have particular applicability for the creation of digitally/virtually-stained whole slide images (WSIs) of unlabeled/unstained tissue samples that are analyzed by a histopathologist. The methods bypass the standard histochemical staining process, saving time and cost. The method is based on deep learning and uses, in one embodiment, a convolutional neural network trained using a generative adversarial network model to transform fluorescence images of an unlabeled sample into an image that is equivalent to the brightfield image of the chemically stained version of the same sample.
    Type: Application
    Filed: March 29, 2019
    Publication date: February 11, 2021
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Zhensong Wei
  • Publication number: 20210011490
    Abstract: A flight control method includes determining a distance of a target relative to an aircraft based on a depth map acquired by an imaging device carried by the aircraft, determining an orientation of the target relative to the aircraft, and controlling flight of the aircraft based on the distance and the orientation. (An illustrative estimation sketch follows this entry.)
    Type: Application
    Filed: July 21, 2020
    Publication date: January 14, 2021
    Inventors: Jie QIAN, Qifeng WU, Hongda WANG
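
One simple way to realise the two estimates named in the abstract is to take the median depth inside the target's bounding box for distance and convert the box's pixel offset into a bearing using the camera's horizontal field of view. The NumPy sketch below does exactly that with an assumed camera geometry, assumed gains, and a synthetic depth map; it is not the disclosed controller.

```python
import numpy as np

IMAGE_WIDTH, HFOV_DEG = 640, 80.0            # assumed camera geometry
FOLLOW_DISTANCE_M = 5.0                      # desired standoff distance

depth_map = np.full((480, 640), 20.0)        # synthetic scene 20 m away
depth_map[200:280, 380:440] = 8.0            # target region 8 m away

def target_distance(depth_map, box):
    """Median depth inside the target's bounding box (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    return float(np.median(depth_map[y0:y1, x0:x1]))

def target_bearing_deg(box):
    """Horizontal angle of the box centre relative to the optical axis (small-angle approx.)."""
    x0, _, x1, _ = box
    offset_px = (x0 + x1) / 2.0 - IMAGE_WIDTH / 2.0
    return offset_px / IMAGE_WIDTH * HFOV_DEG

box = (380, 200, 440, 280)
distance = target_distance(depth_map, box)
bearing = target_bearing_deg(box)

# Toy proportional command: close the distance error and turn toward the target.
forward_speed = 0.5 * (distance - FOLLOW_DISTANCE_M)     # m/s
yaw_rate = 0.8 * bearing                                  # deg/s
print(f"distance={distance:.1f} m bearing={bearing:.1f} deg "
      f"-> forward={forward_speed:.2f} m/s yaw={yaw_rate:.1f} deg/s")
```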
  • Publication number: 20200126239
    Abstract: A computer-implemented method for tracking multiple targets includes identifying a plurality of targets based on a plurality of images obtained from an imaging device carried by an unmanned aerial vehicle (UAV) via a carrier, determining a target group comprising one or more targets from the plurality of targets, and controlling at least one of the UAV or the carrier to track the target group.
    Type: Application
    Filed: December 19, 2019
    Publication date: April 23, 2020
    Inventors: Jie QIAN, Hongda WANG, Qifeng WU
  • Publication number: 20190333199
    Abstract: A microscopy method includes a trained deep neural network that is executed by software using one or more processors of a computing device, the trained deep neural network trained with a training set of images comprising co-registered pairs of high-resolution microscopy images or image patches of a sample and their corresponding low-resolution microscopy images or image patches of the same sample. A microscopy input image of a sample to be imaged is input to the trained deep neural network, which rapidly outputs an output image of the sample, the output image being improved in one or more of spatial resolution, depth of field, signal-to-noise ratio, or image contrast.
    Type: Application
    Filed: April 26, 2019
    Publication date: October 31, 2019
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Harun Gunaydin, Kevin de Haan
  • Patent number: 7745206
    Abstract: An atomic force microscope and a method for detecting interactions between a probe and two or more sensed agents on a scanned surface, and for determining the relative location of the two or more sensed agents, are provided. The microscope has a scanning probe with a tip that is sensitive to two or more sensed agents on said scanned surface; two or more sensing agents tethered to the tip of the probe; and a device for recording the displacement of said probe tip as a function of time, topographic images, and the spatial location of interactions between said probe and the two or more sensed agents on said surface. (An illustrative event-detection sketch follows this entry.)
    Type: Grant
    Filed: January 29, 2008
    Date of Patent: June 29, 2010
    Assignee: Arizona State University
    Inventors: Hongda Wang, Stuart Lindsay
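
Recognition imaging of this kind records the tip displacement as a function of time and notes where along the scan binding events occur. The NumPy sketch below illustrates only the event-spotting idea: it flags dips in a synthetic displacement trace and maps them back to positions along a scan line; the trace, threshold, and scan length are placeholders, not the patented instrument.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, scan_length_nm = 2000, 500.0

# Synthetic displacement trace: flat noisy baseline with two adhesion dips (binding events).
displacement = rng.normal(0.0, 0.05, n_samples)
displacement[400:420] -= 1.0      # event near 100 nm
displacement[1500:1520] -= 0.8    # event near 375 nm

def detect_events(trace, threshold=-0.5):
    """Return (start, end) sample indices of contiguous runs below the threshold."""
    below = trace < threshold
    edges = np.flatnonzero(np.diff(below.astype(int)))
    starts = edges[::2] + 1
    ends = edges[1::2] + 1
    return list(zip(starts, ends))

for start, end in detect_events(displacement):
    position_nm = (start + end) / 2 / n_samples * scan_length_nm
    print(f"interaction event at ~{position_nm:.0f} nm along the scan")
```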
  • Publication number: 20080209989
    Abstract: An atomic force microscope and a method for detecting interactions between a probe and two or more sensed agents on a scanned surface, and for determining the relative location of the two or more sensed agents, are provided. The microscope has a scanning probe with a tip that is sensitive to two or more sensed agents on said scanned surface; two or more sensing agents tethered to the tip of the probe; and a device for recording the displacement of said probe tip as a function of time, topographic images, and the spatial location of interactions between said probe and the two or more sensed agents on said surface.
    Type: Application
    Filed: January 29, 2008
    Publication date: September 4, 2008
    Inventors: Hongda Wang, Stuart Lindsay