Patents by Inventor Zhengpeng Wu

Zhengpeng Wu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240312604
    Abstract: Embodiments of this application disclose a tumor cell content evaluation method: a digital pathology slide image is obtained and an effective pathological region is determined within it; a tumor cell region within the effective pathological region is identified using a deep learning-based pathology image classifier; and the tumor cell content of the slide image is determined from the tumor cell region according to a preset evaluation rule. In this way, the tumor cell content of the digital pathology slide image is evaluated automatically, improving the accuracy and objectivity of the evaluation. A tumor cell content evaluation system, a computer device, and a storage medium are also provided. (A hedged sketch of this pipeline appears after this listing.)
    Type: Application
    Filed: December 17, 2020
    Publication date: September 19, 2024
    Applicant: GUANGZHOU KINGMED CENTER FOR CLINICAL LABORATORY
    Inventors: Shuanlong CHE, Pifu LUO, Yinghua LI, Si LIU, Weisong QIU, Tao WU, Zhengpeng WU, Haihui LV
  • Patent number: 11106193
    Abstract: A neural network-based error compensation method for 3D printing includes: compensating an input model with a deformation network or inverse deformation network constructed and trained according to a 3D printing deformation function or inverse deformation function, and performing the 3D printing on the compensated model. Training samples for the deformation network and inverse deformation network include to-be-printed model samples and printed model samples. The deformation network constructed according to the 3D printing deformation function is denoted the first network; during its training, the to-be-printed model samples are used as real input models and the printed model samples as real output models. The inverse deformation network constructed according to the 3D printing inverse deformation function is denoted the second network. (A hedged sketch of training the second network appears after this listing.)
    Type: Grant
    Filed: September 16, 2019
    Date of Patent: August 31, 2021
    Assignees: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES, BEIJING TEN DIMENSIONS TECHNOLOGY CO., LTD.
    Inventors: Zhen Shen, Gang Xiong, Yuqing Li, Hang Gao, Yi Xie, Meihua Zhao, Chao Guo, Xiuqin Shang, Xisong Dong, Zhengpeng Wu, Li Wan, Feiyue Wang
  • Publication number: 20210247737
    Abstract: A neural network-based error compensation method for 3D printing includes: compensating an input model with a deformation network or inverse deformation network constructed and trained according to a 3D printing deformation function or inverse deformation function, and performing the 3D printing on the compensated model. Training samples for the deformation network and inverse deformation network include to-be-printed model samples and printed model samples. The deformation network constructed according to the 3D printing deformation function is denoted the first network; during its training, the to-be-printed model samples are used as real input models and the printed model samples as real output models. The inverse deformation network constructed according to the 3D printing inverse deformation function is denoted the second network.
    Type: Application
    Filed: September 16, 2019
    Publication date: August 12, 2021
    Applicants: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES, BEIJING TEN DIMENSIONS TECHNOLOGY CO., LTD.
    Inventors: Zhen SHEN, Gang XIONG, Yuqing LI, Hang GAO, Yi XIE, Meihua ZHAO, Chao GUO, Xiuqin SHANG, Xisong DONG, Zhengpeng WU, Li WAN, Feiyue WANG
  • Publication number: 20160320479
    Abstract: The invention relates to a method for extracting ground attribute data from interferometric synthetic aperture radar (InSAR) data, and aims to provide a method for extracting ground attribute data from persistent scatterer (PS) data. The method comprises the following steps: ground geographic data are imported; the ground boundary is contracted inward by Tx in the east-west direction and by Ty in the north-south direction, defining a new ground boundary; each PS point is tested against the new boundary, points falling inside it are extracted, and points outside it are removed, yielding a first data set; each PS point in the first data set is then tested on whether its image gray value lies between Vmin and Vmax, points within the range are extracted, and the rest are removed. Extracting ground attribute data in this way yields a PS point accuracy of 95.1 percent. (A hedged sketch of this filtering appears after this listing.)
    Type: Application
    Filed: February 24, 2016
    Publication date: November 3, 2016
    Inventors: Huashan Ma, Junwei Liu, Ke Hu, Jie Wang, Tie Sun, Kui Yang, Chu Chen, Zhengpeng Wu, Yongqing Hu
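
The first entry above (publication 20240312604) describes a three-stage pipeline: locate the effective pathological region, classify tumor tissue with a deep learning model, and apply a preset evaluation rule. The sketch below is one plausible reading of that flow in Python; `effective_region_mask` and `classify_tile` are hypothetical stand-ins (the patent does not disclose its segmentation method or classifier), and the tile-area ratio is only one possible "preset evaluation rule", not the rule actually claimed.

```python
import numpy as np

def effective_region_mask(slide: np.ndarray, background_threshold: float = 0.9) -> np.ndarray:
    """Hypothetical stand-in: treat non-background (non-white) pixels as the
    effective pathological region. The patent does not specify this step."""
    gray = slide.mean(axis=-1)  # slide: H x W x 3, values in [0, 1]
    return gray < background_threshold

def classify_tile(tile: np.ndarray) -> bool:
    """Hypothetical stand-in for the deep learning-based pathology image
    classifier; a crude intensity rule marks 'tumor' tiles here."""
    return tile.mean() < 0.4

def tumor_cell_content(slide: np.ndarray, tile: int = 64) -> float:
    """Assumed area-ratio evaluation rule: tumor tile count divided by
    effective-region tile count."""
    mask = effective_region_mask(slide)
    h, w = mask.shape
    tumor_tiles = effective_tiles = 0
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            if mask[y:y + tile, x:x + tile].mean() < 0.5:
                continue  # tile is mostly background: outside effective region
            effective_tiles += 1
            if classify_tile(slide[y:y + tile, x:x + tile]):
                tumor_tiles += 1
    return tumor_tiles / effective_tiles if effective_tiles else 0.0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_slide = rng.random((512, 512, 3))  # synthetic stand-in for a slide image
    print(f"estimated tumor cell content: {tumor_cell_content(fake_slide):.2%}")
```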
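Patent 11106193 and publication 20210247737 describe training an inverse deformation network (the "second network") with printed model samples as real inputs and to-be-printed model samples as real outputs. The sketch below shows what such training could look like, assuming PyTorch and a per-point displacement MLP; the architecture, the names `InverseDeformationNet` and `train_second_network`, and the synthetic 2-percent-shrink data are all assumptions, not details from the patents.

```python
import torch
from torch import nn

class InverseDeformationNet(nn.Module):
    """Assumed architecture: a per-point MLP that predicts a displacement for
    each 3D coordinate. The patents do not specify a network structure."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # Predict a displacement and add it to the input coordinates.
        return points + self.net(points)

def train_second_network(printed: torch.Tensor, designed: torch.Tensor,
                         epochs: int = 200) -> InverseDeformationNet:
    """Per the abstract, the second network uses printed model samples as real
    inputs and to-be-printed model samples as real outputs, so it learns the
    inverse deformation (printed shape -> design shape)."""
    net = InverseDeformationNet()
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(net(printed), designed)
        loss.backward()
        opt.step()
    return net

if __name__ == "__main__":
    # Synthetic stand-in data: pretend printing shrinks parts by 2 percent.
    designed = torch.rand(1024, 3)
    printed = 0.98 * designed
    net = train_second_network(printed, designed)
    # Compensation: feed a new design through the inverse network; printing
    # the result, which the forward deformation then shrinks, should
    # reproduce the intended design.
    with torch.no_grad():
        compensated = net(torch.rand(8, 3))
    print(compensated.shape)  # torch.Size([8, 3])
```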
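Publication 20160320479 describes a two-stage filter over PS points: a geometric test against an inwardly contracted ground boundary, then a gray-value test against the interval [Vmin, Vmax]. Below is a minimal sketch of that filter; an axis-aligned bounding box stands in for the real ground boundary, whose representation the abstract does not specify.

```python
import numpy as np

def extract_ps_points(points, gray, boundary, tx, ty, vmin, vmax):
    """Filter PS points as described in the abstract. `points` is an (N, 2)
    array of east/north coordinates, `gray` an (N,) array of image gray
    values, and `boundary` a ((x_min, y_min), (x_max, y_max)) bounding box
    standing in for the ground boundary."""
    (x_min, y_min), (x_max, y_max) = boundary
    # Step 1: contract the boundary inward by tx (east-west) and ty
    # (north-south) to form the new ground boundary.
    x_min, x_max = x_min + tx, x_max - tx
    y_min, y_max = y_min + ty, y_max - ty
    # Step 2: first extraction -- keep PS points inside the new boundary.
    inside = ((points[:, 0] >= x_min) & (points[:, 0] <= x_max) &
              (points[:, 1] >= y_min) & (points[:, 1] <= y_max))
    # Step 3: second extraction -- keep points whose gray value lies
    # between vmin and vmax.
    in_range = (gray >= vmin) & (gray <= vmax)
    return points[inside & in_range]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pts = rng.uniform(0, 100, size=(1000, 2))  # synthetic PS coordinates
    grays = rng.integers(0, 256, size=1000)    # synthetic gray values
    kept = extract_ps_points(pts, grays, ((0, 0), (100, 100)),
                             tx=5, ty=5, vmin=50, vmax=200)
    print(f"kept {len(kept)} of {len(pts)} PS points")
```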