Patents by Inventor Qichun CAO

Qichun CAO has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230409885
    Abstract: A hardware environment-based data operation method, apparatus and device, and a storage medium. The method includes: determining data to be operated and target hardware, wherein the target hardware is the hardware resource that currently needs to perform convolution computation on the data to be operated; determining the maximum number of channels in which the target hardware executes parallel computation, and taking the data layout corresponding to that maximum number of channels as the optimal data layout; and converting the data layout of the data to be operated into the optimal data layout, then performing the convolution computation on the data to be operated by using the target hardware once the conversion is completed.
    Type: Application
    Filed: July 29, 2021
    Publication date: December 21, 2023
    Inventors: Qichun CAO, Gang DONG, Lingyan LIANG, Wenfeng YIN, Jian ZHAO
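The layout conversion described above can be sketched in NumPy: a plain NCHW tensor is rearranged into a channel-blocked layout whose inner block size matches the hardware's maximum number of parallel channels. The block size of 4 and the zero-padding strategy here are illustrative assumptions, not details from the application.

```python
import numpy as np

def to_blocked_layout(x, c_block):
    """Rearrange an NCHW tensor into a blocked (N, C/c, H, W, c) layout,
    where c_block is the hardware's maximum number of parallel channels."""
    n, c, h, w = x.shape
    pad = (-c) % c_block                    # zero-pad channels to a multiple of c_block
    x = np.pad(x, ((0, 0), (0, pad), (0, 0), (0, 0)))
    c_outer = (c + pad) // c_block
    # split C into (C_outer, c_block), then move the block to the innermost axis
    return x.reshape(n, c_outer, c_block, h, w).transpose(0, 1, 3, 4, 2)

x = np.arange(2 * 6 * 4 * 4, dtype=np.float32).reshape(2, 6, 4, 4)
y = to_blocked_layout(x, 4)                 # 6 channels padded to 8, blocked by 4
```

With the block as the innermost (fastest-varying) axis, a hardware unit that computes 4 channels in parallel reads its operands contiguously.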
  • Publication number: 20230401834
    Abstract: An image processing method, apparatus and device, and a readable storage medium are disclosed, including: obtaining a target image; inputting the target image into a quantized target deep neural network model for classification/detection to obtain an output result; and processing the target image according to a policy corresponding to the output result. A process of performing quantization to obtain the target deep neural network model includes: obtaining a pre-trained floating point type deep neural network model; extracting weight features of a deep neural network model; determining a quantization policy using the weight features; and quantizing the deep neural network model according to the quantization policy to obtain the target deep neural network model.
    Type: Application
    Filed: July 29, 2021
    Publication date: December 14, 2023
    Applicant: Inspur (Beijing) Electronic Information Industry Co., Ltd.
    Inventors: Lingyan LIANG, Gang DONG, Yaqian ZHAO, Qichun CAO, Wenfeng YIN
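The quantization-policy step above (deriving a policy from weight features) can be illustrated with a simple heuristic: compare per-output-channel dynamic ranges and choose per-channel scales when they diverge. The threshold, the feature (per-channel absolute maximum), and the two policy names are assumptions for illustration, not the patented criterion.

```python
import numpy as np

def choose_quant_policy(weights, spread_ratio=5.0):
    """Illustrative heuristic (not the patented criterion): if per-output-channel
    dynamic ranges differ widely, per-channel scales preserve more precision."""
    per_ch = np.abs(weights).reshape(weights.shape[0], -1).max(axis=1)
    ratio = per_ch.max() / max(per_ch.min(), 1e-12)
    return "per_channel" if ratio > spread_ratio else "per_tensor"

rng = np.random.default_rng(0)
w_uniform = rng.uniform(-1.0, 1.0, (16, 3, 3, 3))   # stand-in for pre-trained conv weights
w_skewed = w_uniform.copy()
w_skewed[0] *= 100.0                                # one channel dominates the range
```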
  • Publication number: 20230360358
    Abstract: Provided are a method, apparatus and device for extracting image features, and a storage medium. The method includes: obtaining parameters to be quantized of a network layer in a neural network model (S101); determining whether values of the parameters to be quantized are all positive numbers (S102); when the values of the parameters to be quantized are all positive numbers, executing, based on an asymmetric linear quantization logic, a quantization operation on the parameters to be quantized (S103); when the values of the parameters to be quantized are not all positive numbers, executing, based on a symmetric linear quantization logic, a quantization operation on the parameters to be quantized (S104); and extracting features of an input image by using the neural network model for which the quantization operation has been executed (S105).
    Type: Application
    Filed: September 28, 2021
    Publication date: November 9, 2023
    Inventors: Jian ZHAO, Gang DONG, Hongzhi SHI, Qichun CAO, Xingchen CUI
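The sign-based branching in steps S102–S104 can be sketched directly: all-positive parameters take the asymmetric path (using the full [min, max] range), while mixed-sign parameters take the symmetric path with the zero point fixed at 0. Bit width and rounding choices here are illustrative.

```python
import numpy as np

def quantize(params, n_bits=8):
    """Branch on the sign of the values (sketch of S102-S104): all-positive
    parameters use asymmetric linear quantization over [min, max]; otherwise
    symmetric quantization over [-max|x|, max|x|] with zero point 0."""
    x = np.asarray(params, dtype=np.float64)
    if np.all(x > 0):                                    # asymmetric path (S103)
        lo = x.min()
        scale = (x.max() - lo) / (2 ** n_bits - 1)
        return np.round((x - lo) / scale), scale, lo
    scale = np.abs(x).max() / (2 ** (n_bits - 1) - 1)    # symmetric path (S104)
    return np.round(x / scale), scale, 0.0

q, scale, offset = quantize(np.array([0.5, 1.0, 2.0]))   # all positive -> asymmetric
```

The asymmetric path spends all 2^n levels on the occupied range, which is why it pays off for all-positive tensors such as post-ReLU activations.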
  • Publication number: 20230297846
    Abstract: A neural network compression method, apparatus and device, and a storage medium are provided. The method includes: performing forward inference on target data by using a target parameter sharing network to obtain an output feature map of the last convolutional module; extracting a channel related feature from the output feature map; inputting the extracted channel related feature and a target constraint condition into a target meta-generative network; and predicting an optimal network architecture under the target constraint condition by using the target meta-generative network to obtain a compressed neural network model. With this technical solution, the computation load of the performance evaluation process of a neural architecture search may be reduced, and the speed of searching for a high-performance neural network architecture may be increased.
    Type: Application
    Filed: January 25, 2021
    Publication date: September 21, 2023
    Inventors: Wenfeng YIN, Gang DONG, Yaqian ZHAO, Qichun CAO, Lingyan LIANG, Haiwei LIU, Hongbin YANG
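One plausible reading of the "channel related feature" above is a per-channel global average pooling of the last module's output feature map, concatenated with the constraint (e.g., a target FLOPs ratio) before being fed to the meta-generative network. Both the pooling choice and the constraint encoding are assumptions for illustration, not the application's definitions.

```python
import numpy as np

def channel_features(feature_map):
    """Global average pooling over the batch and spatial axes of an (N, C, H, W)
    output feature map -- one plausible 'channel related feature'."""
    return feature_map.mean(axis=(0, 2, 3))              # shape (C,)

fmap = np.ones((8, 64, 7, 7), dtype=np.float32)          # stand-in for the last module's output
feat = channel_features(fmap)
target_flops = np.array([0.5], dtype=np.float32)         # hypothetical constraint encoding
meta_input = np.concatenate([feat, target_flops])        # input to the meta-generative network
```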
  • Patent number: 11748970
    Abstract: A hardware environment-based data quantization method includes: parsing a model file under a current deep learning framework to obtain intermediate computational graph data and weight data that are independent of a hardware environment; performing calculation on image data in an input data set through a process indicated by an intermediate computational graph to obtain feature map data; separately performing uniform quantization on the weight data and the feature map data of each layer according to a preset linear quantization method, and calculating a weight quantization factor and a feature map quantization factor; combining the weight quantization factor and the feature map quantization factor to obtain a quantization parameter that makes hardware use shift instead of division; and finally, writing the quantization parameter and the quantized weight data to a bin file according to a hardware requirement so as to generate quantized file data.
    Type: Grant
    Filed: November 16, 2020
    Date of Patent: September 5, 2023
    Assignee: INSPUR SUZHOU INTELLIGENT TECHNOLOGY CO., LTD.
    Inventors: Qichun Cao, Yaqian Zhao, Gang Dong, Lingyan Liang, Wenfeng Yin
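The "shift instead of division" idea can be sketched as follows: the combined real rescale factor is approximated as a fixed-point multiplier plus a right shift, so integer hardware never divides. The 15-bit multiplier width and the example scale value are illustrative assumptions.

```python
import math

def to_mult_shift(scale, mult_bits=15):
    """Approximate a real rescale factor as mult / 2**shift so integer hardware
    can compute (acc * mult) >> shift instead of dividing."""
    m, e = math.frexp(scale)            # scale = m * 2**e with 0.5 <= m < 1
    mult = round(m * (1 << mult_bits))
    shift = mult_bits - e
    return mult, shift

def requantize(acc, mult, shift):
    """Integer-only rescale with round-to-nearest: (acc * mult) >> shift."""
    return (acc * mult + (1 << (shift - 1))) >> shift

mult, shift = to_mult_shift(0.000915)   # e.g. a combined weight x feature-map factor
```

Because mult and shift are fixed at quantization time, the runtime cost per value is one integer multiply, one add, and one shift.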
  • Publication number: 20230252757
    Abstract: Disclosed is an image processing method. In the invention, a weighted calculation is carried out on a first quantization threshold obtained with a saturated mapping method and a second quantization threshold obtained with an unsaturated mapping method; that is, the two quantization thresholds are fused. The resulting optimal quantization threshold is suitable for most activation output layers, so effective information of the activation output layers can be retained more effectively; the optimal quantization threshold is used for subsequent image processing work, and the inference and computation precision of a quantized deep neural network on a low-bit-width hardware platform is improved. An image processing apparatus and device are also disclosed, which have the same beneficial effects as the image processing method.
    Type: Application
    Filed: September 29, 2019
    Publication date: August 10, 2023
    Inventors: Lingyan Liang, Gang Dong, Yaqian Zhao, Qichun Cao, Haiwei Liu, Hongbin Yang
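The threshold fusion described above can be sketched as a weighted sum of two candidates: a saturating threshold that clips outliers (a high percentile stands in here for, e.g., a calibration-based search) and the non-saturating absolute maximum. The weight alpha and the percentile are illustrative assumptions, not values from the application.

```python
import numpy as np

def fused_threshold(activations, alpha=0.5, percentile=99.99):
    """Weighted fusion of two candidate quantization thresholds (sketch): a
    saturating one that clips outliers and the non-saturating absolute maximum.
    alpha is a hypothetical fusion weight."""
    t_sat = np.percentile(np.abs(activations), percentile)   # saturated mapping stand-in
    t_unsat = np.abs(activations).max()                      # unsaturated mapping
    return alpha * t_sat + (1.0 - alpha) * t_unsat

rng = np.random.default_rng(0)
acts = np.concatenate([rng.normal(0.0, 1.0, 10000), [40.0]])  # one outlier activation
t = fused_threshold(acts)
```

The fused value lands between the two candidates, trading a little clipping of the outlier against finer resolution for the bulk of the activations.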
  • Publication number: 20230055313
    Abstract: A hardware environment-based data quantization method includes: parsing a model file under a current deep learning framework to obtain intermediate computational graph data and weight data that are independent of a hardware environment; performing calculation on image data in an input data set through a process indicated by an intermediate computational graph to obtain feature map data; separately performing uniform quantization on the weight data and the feature map data of each layer according to a preset linear quantization method, and calculating a weight quantization factor and a feature map quantization factor; combining the weight quantization factor and the feature map quantization factor to obtain a quantization parameter that makes hardware use shift instead of division; and finally, writing the quantization parameter and the quantized weight data to a bin file according to a hardware requirement so as to generate quantized file data.
    Type: Application
    Filed: November 16, 2020
    Publication date: February 23, 2023
    Inventors: Qichun CAO, Yaqian ZHAO, Gang DONG, Lingyan LIANG, Wenfeng YIN