Patents by Inventor Zidong Du

Zidong Du has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210103818
    Abstract: The present disclosure provides a neural network computing method, system and device therefor to be applied in the technical field of computers. The computing method comprises the following steps: A. dividing a neural network into a plurality of subnetworks having consistent internal data characteristics; B. computing each of the subnetworks to obtain a first computation result for each subnetwork; and C. computing a total computation result of the neural network on the basis of the first computation result of each subnetwork. By means of the method, the present disclosure improves the computing efficiency of the neural network.
    Type: Application
    Filed: August 9, 2016
    Publication date: April 8, 2021
    Inventors: Zidong DU, Qi GUO, Tianshi CHEN, Yunji CHEN
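As an illustration only, the three-step method (A. divide, B. compute each subnetwork, C. combine) described in the abstract above can be sketched as follows. The partitioning rule, the use of simple callables as "layers", and chaining as the combination rule are assumptions for the sketch, not the patented implementation.

```python
# Sketch of the three-step subnetwork computing method (steps A-C).
# The partitioning and combination rules here are illustrative
# assumptions; the patent does not fix them to a concrete formula.

def divide_into_subnetworks(layers, group_size):
    """Step A: partition layers into groups with consistent characteristics."""
    return [layers[i:i + group_size] for i in range(0, len(layers), group_size)]

def compute_subnetwork(subnet, x):
    """Step B: run one subnetwork (here, a chain of simple functions)."""
    for layer in subnet:
        x = layer(x)
    return x

def compute_network(layers, x, group_size=2):
    """Step C: combine the first computation results by chaining them."""
    for subnet in divide_into_subnetworks(layers, group_size):
        x = compute_subnetwork(subnet, x)
    return x

# Example: four "layers", each a simple arithmetic function.
layers = [lambda v: v + 1, lambda v: v * 2, lambda v: v - 3, lambda v: v * v]
print(compute_network(layers, 5, group_size=2))  # ((5+1)*2 - 3)**2 = 81
```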
  • Patent number: 10971221
    Abstract: Aspects of a storage device with fault tolerance capability for neural networks are described herein. The aspects may include a first storage unit of a storage device. The first storage unit is configured to store one or more first bits of data and the data includes floating point type data and fixed point type data. The first bits include one or more sign bits of the floating point type data and the fixed point type data. The aspects may further include a second storage unit of the storage device. The second storage unit may be configured to store one or more second bits of the data. In some examples, the first storage unit may include an ECC memory and the second storage unit may include a non-ECC memory. The ECC memory may include an ECC check Dynamic Random Access Memory and an ECC check Static Random Access Memory.
    Type: Grant
    Filed: April 30, 2020
    Date of Patent: April 6, 2021
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Shaoli Liu, Xuda Zhou, Zidong Du, Daofu Liu
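The bit-split idea in the abstract above (protect only the most significant bits, such as the sign bit, with ECC storage) can be sketched as below. The 8-bit fixed-point width, the single protected bit, and the function names are assumptions for illustration; the patent covers other widths and data types.

```python
# Sketch of the fault-tolerant split: store the high-order bits (e.g.
# the sign bit) in an ECC-protected unit and the remaining bits in a
# plain, non-ECC unit. An 8-bit fixed-point value and one protected
# bit are illustrative assumptions.

def split_bits(value, protected_bits=1, width=8):
    """Return (high bits for the ECC unit, low bits for the non-ECC unit)."""
    low_width = width - protected_bits
    high = (value >> low_width) & ((1 << protected_bits) - 1)
    low = value & ((1 << low_width) - 1)
    return high, low

def join_bits(high, low, protected_bits=1, width=8):
    """Reassemble the original value from the two storage units."""
    return (high << (width - protected_bits)) | low

v = 0b1011_0101          # sign bit set
hi, lo = split_bits(v)   # hi -> ECC memory, lo -> non-ECC memory
assert join_bits(hi, lo) == v

# A bit flip in the unprotected low bits only perturbs the value
# slightly, while the ECC unit would correct a flip in the sign bit.
corrupted = join_bits(hi, lo ^ 0b1)
print(bin(corrupted))
```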
  • Publication number: 20210035628
    Abstract: Aspects of a storage device with fault tolerance capability for neural networks are described herein. The aspects may include a first storage unit of a storage device. The first storage unit is configured to store one or more first bits of data and the data includes floating point type data and fixed point type data. The first bits include one or more sign bits of the floating point type data and the fixed point type data. The aspects may further include a second storage unit of the storage device. The second storage unit may be configured to store one or more second bits of the data. In some examples, the first storage unit may include an ECC memory and the second storage unit may include a non-ECC memory. The ECC memory may include an ECC check Dynamic Random Access Memory and an ECC check Static Random Access Memory.
    Type: Application
    Filed: April 30, 2020
    Publication date: February 4, 2021
    Inventors: Shaoli LIU, Xuda ZHOU, Zidong DU, Daofu LIU
  • Publication number: 20200387800
    Abstract: Disclosed are a scheduling method and a related apparatus. A computing apparatus in a server can be chosen to implement a computation request, thereby improving the running efficiency of the server.
    Type: Application
    Filed: August 2, 2018
    Publication date: December 10, 2020
    Inventors: Zidong DU, Luyang JIN
  • Publication number: 20200387400
    Abstract: An allocation system for machine learning, comprising a terminal server and a cloud server. The terminal server is used for: acquiring demand information; generating a control instruction according to the demand information, wherein the control instruction comprises a terminal control instruction and a cloud control instruction; parsing the terminal control instruction to obtain a terminal control signal; and calculating a terminal workload of a machine learning algorithm of each stage according to the terminal control signal to obtain a terminal computation result. The cloud server is used for parsing the cloud control instruction to obtain a cloud control signal, and calculating a cloud workload of the machine learning algorithm of each stage according to the cloud control signal to obtain a cloud computation result. The terminal computation result and the cloud computation result together compose an output result.
    Type: Application
    Filed: August 19, 2020
    Publication date: December 10, 2020
    Inventors: Xiaofu MENG, Yongzhe SUN, Zidong DU
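The terminal/cloud allocation flow in the abstract above can be sketched as follows. The even 50/50 split, the representation of a "workload" as a list of numbers to sum, and all function names are assumptions for the sketch; the patent leaves the allocation policy and the machine learning algorithm abstract.

```python
# Sketch of the allocation system: demand information is turned into a
# control instruction with a terminal part and a cloud part; each server
# parses its part, computes its share of the workload, and the two
# partial results together compose the output. The split ratio and the
# "sum a list" workload are illustrative assumptions.

def generate_control_instruction(demand, terminal_fraction=0.5):
    """Terminal server: derive terminal and cloud control instructions."""
    split = int(len(demand) * terminal_fraction)
    return {"terminal": demand[:split], "cloud": demand[split:]}

def terminal_server(instruction):
    """Parse the terminal control instruction and compute its workload."""
    return sum(instruction["terminal"])

def cloud_server(instruction):
    """Parse the cloud control instruction and compute its workload."""
    return sum(instruction["cloud"])

demand = [1, 2, 3, 4, 5, 6]
instr = generate_control_instruction(demand)
output = (terminal_server(instr), cloud_server(instr))  # composed result
print(output)  # (6, 15)
```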
  • Patent number: 10755772
    Abstract: Aspects of a storage device with fault tolerance capability for neural networks are described herein. The aspects may include a first storage unit of a storage device. The first storage unit is configured to store one or more first bits of data and the data includes floating point type data and fixed point type data. The first bits include one or more sign bits of the floating point type data and the fixed point type data. The aspects may further include a second storage unit of the storage device. The second storage unit may be configured to store one or more second bits of the data. In some examples, the first storage unit may include an ECC memory and the second storage unit may include a non-ECC memory. The ECC memory may include an ECC check Dynamic Random Access Memory and an ECC check Static Random Access Memory.
    Type: Grant
    Filed: August 1, 2019
    Date of Patent: August 25, 2020
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Shaoli Liu, Xuda Zhou, Zidong Du, Daofu Liu
  • Publication number: 20200265300
    Abstract: The application provides an operation method and device. An operation is realized by looking up quantized data, which simplifies the structure and reduces the energy consumption of computation; meanwhile, a plurality of operations can be realized.
    Type: Application
    Filed: March 26, 2020
    Publication date: August 20, 2020
    Inventors: Shaoli LIU, Xuda ZHOU, Zidong DU, Daofu LIU
  • Publication number: 20200250539
    Abstract: The application provides a processing method and device. Weights and input neurons are quantized respectively, and a weight dictionary, a weight codebook, a neuron dictionary, and a neuron codebook are determined. A computational codebook is determined according to the weight codebook and the neuron codebook, so the two types of quantized data are combined, which facilitates data processing.
    Type: Application
    Filed: July 13, 2018
    Publication date: August 6, 2020
    Inventors: Shaoli LIU, Xuda ZHOU, Zidong DU, Daofu LIU
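The two-codebook scheme in the abstract above can be sketched as below: quantize weights and neurons against small codebooks (the dictionaries hold the index assignments), precompute a computational codebook of all pairwise results, and then replace multiplications with table lookups. The codebook sizes, the nearest-neighbor quantizer, and the use of plain multiplication are assumptions for the sketch.

```python
import numpy as np

# Sketch of the weight/neuron codebook scheme. Entry [i, j] of the
# computational codebook holds weight_codebook[i] * neuron_codebook[j],
# so each multiply becomes a lookup. Codebook contents and the use of
# multiplication as the operation are illustrative assumptions.

weight_codebook = np.array([-0.5, 0.0, 0.5, 1.0])
neuron_codebook = np.array([0.0, 1.0, 2.0])

computational_codebook = np.outer(weight_codebook, neuron_codebook)

def quantize(values, codebook):
    """Map each value to its nearest codebook index (the 'dictionary')."""
    return np.abs(values[:, None] - codebook[None, :]).argmin(axis=1)

weights = np.array([0.45, -0.4, 0.9])
neurons = np.array([1.1, 0.2, 1.9])

w_idx = quantize(weights, weight_codebook)   # indices into weight codebook
n_idx = quantize(neurons, neuron_codebook)   # indices into neuron codebook

# Dot product realized purely by lookups into the computational codebook.
result = computational_codebook[w_idx, n_idx].sum()
print(result)  # 0.5*1.0 + (-0.5)*0.0 + 1.0*2.0 = 2.5
```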
  • Patent number: 10657439
    Abstract: The application provides an operation method and device. An operation is realized by looking up quantized data, which simplifies the structure and reduces the energy consumption of computation; meanwhile, a plurality of operations can be realized.
    Type: Grant
    Filed: August 1, 2019
    Date of Patent: May 19, 2020
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Shaoli Liu, Xuda Zhou, Zidong Du, Daofu Liu
  • Publication number: 20200150971
    Abstract: The disclosure provides a data processing device and method. The data processing device may include: a task configuration information storage unit and a task queue configuration unit. The task configuration information storage unit is configured to store configuration information of tasks. The task queue configuration unit is configured to configure a task queue according to the configuration information stored in the task configuration information storage unit. According to the disclosure, a task queue may be configured according to the configuration information.
    Type: Application
    Filed: November 28, 2019
    Publication date: May 14, 2020
    Inventors: Shaoli LIU, Shengyuan ZHOU, Zidong DU
  • Publication number: 20200134460
    Abstract: The present disclosure provides a processing device including: a coarse-grained pruning unit configured to perform coarse-grained pruning on a weight of a neural network to obtain a pruned weight, an operation unit configured to train the neural network according to the pruned weight. The coarse-grained pruning unit is specifically configured to select M weights from the weights of the neural network through a sliding window, and when the M weights meet a preset condition, all or part of the M weights may be set to 0. The processing device can reduce the memory access while reducing the amount of computation, thereby obtaining an acceleration ratio and reducing energy consumption.
    Type: Application
    Filed: November 28, 2019
    Publication date: April 30, 2020
    Inventors: Zidong Du, Xuda Zhou, Zai Wang, Tianshi Chen
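The pruning step in the abstract above (slide a window over the weights; when the M weights in it meet a preset condition, set them to 0) can be sketched as follows. The abstract leaves the condition abstract; using the group's L1 norm against a threshold, with non-overlapping windows, is an assumption for the sketch.

```python
import numpy as np

# Sketch of coarse-grained pruning: slide a window of M weights over the
# weight vector and zero the whole group when it meets a preset condition.
# The L1-norm threshold and non-overlapping stride are illustrative
# choices; the patent leaves the condition and stride abstract.

def coarse_grained_prune(weights, M=4, threshold=0.5):
    pruned = weights.copy()
    for start in range(0, len(pruned) - M + 1, M):  # non-overlapping windows
        window = pruned[start:start + M]            # view into `pruned`
        if np.abs(window).sum() < threshold:        # preset condition
            window[:] = 0.0                         # set all M weights to 0
    return pruned

w = np.array([0.30, -0.40, 0.05, 0.20,   # group 1: L1 norm 0.95, kept
              0.05, -0.10, 0.02, 0.08])  # group 2: L1 norm 0.25, pruned
print(coarse_grained_prune(w))  # second group of four becomes zeros
```

Zeroing whole groups (rather than individual weights) is what makes the sparsity "coarse-grained": the hardware can skip an entire window's worth of memory accesses and computation at once.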
  • Publication number: 20200125938
    Abstract: A computing device comprises a computing module with one or more computing units, and a control module with a computing control unit for controlling shutdown of the computing units of the computing module according to a determining condition. A computing method is also provided. The computing device and method offer low power consumption and high flexibility, and can be combined with software upgrades, thereby further increasing the computing speed, reducing the amount of computation, and reducing the computing power consumption of an accelerator.
    Type: Application
    Filed: November 28, 2019
    Publication date: April 23, 2020
    Inventors: Zidong DU, Shengyuan ZHOU, Shaoli LIU, Tianshi CHEN
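The control idea in the abstract above can be sketched as below: the control unit shuts down computing units whose work would be wasted under a determining condition. Treating "an operand is zero" as that condition, and modeling a shut-down unit as returning 0, are assumptions for the sketch.

```python
# Sketch of condition-driven shutdown of computing units. The
# determining condition used here ("an operand is zero, so the product
# is known without computing") and the power model are illustrative
# assumptions; the patent leaves the condition abstract.

class ComputingUnit:
    def __init__(self):
        self.active = True

    def compute(self, a, b):
        return a * b if self.active else 0.0  # shut-down unit does no work

def control(units, operands):
    """Shut down units whose operand pair contains a zero, then compute."""
    results = []
    for unit, (a, b) in zip(units, operands):
        unit.active = a != 0 and b != 0   # determining condition
        results.append(unit.compute(a, b))
    return results

units = [ComputingUnit() for _ in range(4)]
operands = [(2, 3), (0, 5), (4, 0), (1, 7)]
print(control(units, operands))  # units 1 and 2 are shut down
```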
  • Publication number: 20200110988
    Abstract: A computing device comprises a computing module with one or more computing units, and a control module with a computing control unit for controlling shutdown of the computing units of the computing module according to a determining condition. A computing method is also provided. The computing device and method offer low power consumption and high flexibility, and can be combined with software upgrades, thereby further increasing the computing speed, reducing the amount of computation, and reducing the computing power consumption of an accelerator.
    Type: Application
    Filed: November 28, 2019
    Publication date: April 9, 2020
    Inventors: Zai WANG, Shengyuan ZHOU, Zidong DU, Tianshi CHEN
  • Publication number: 20200110609
    Abstract: A computing device comprises a computing module with one or more computing units, and a control module with a computing control unit for controlling shutdown of the computing units of the computing module according to a determining condition. A computing method is also provided. The computing device and method offer low power consumption and high flexibility, and can be combined with software upgrades, thereby further increasing the computing speed, reducing the amount of computation, and reducing the computing power consumption of an accelerator.
    Type: Application
    Filed: November 28, 2019
    Publication date: April 9, 2020
    Inventors: Tianshi CHEN, Xuda ZHOU, Shaoli LIU, Zidong DU
  • Publication number: 20200104693
    Abstract: The present disclosure provides a processing device including: a coarse-grained pruning unit configured to perform coarse-grained pruning on a weight of a neural network to obtain a pruned weight, an operation unit configured to train the neural network according to the pruned weight. The coarse-grained pruning unit is specifically configured to select M weights from the weights of the neural network through a sliding window, and when the M weights meet a preset condition, all or part of the M weights may be set to 0. The processing device can reduce the memory access while reducing the amount of computation, thereby obtaining an acceleration ratio and reducing energy consumption.
    Type: Application
    Filed: November 28, 2019
    Publication date: April 2, 2020
    Inventors: Zidong Du, Xuda Zhou, Shaoli Liu, Tianshi Chen
  • Publication number: 20200104207
    Abstract: The disclosure provides a data processing device and method. The data processing device may include: a task configuration information storage unit and a task queue configuration unit. The task configuration information storage unit is configured to store configuration information of tasks. The task queue configuration unit is configured to configure a task queue according to the configuration information stored in the task configuration information storage unit. According to the disclosure, a task queue may be configured according to the configuration information.
    Type: Application
    Filed: November 28, 2019
    Publication date: April 2, 2020
    Inventors: Zai WANG, Xuda ZHOU, Zidong DU, Tianshi CHEN
  • Publication number: 20200097831
    Abstract: The present disclosure provides a processing device including: a coarse-grained pruning unit configured to perform coarse-grained pruning on a weight of a neural network to obtain a pruned weight, an operation unit configured to train the neural network according to the pruned weight. The coarse-grained pruning unit is specifically configured to select M weights from the weights of the neural network through a sliding window, and when the M weights meet a preset condition, all or part of the M weights may be set to 0. The processing device can reduce the memory access while reducing the amount of computation, thereby obtaining an acceleration ratio and reducing energy consumption.
    Type: Application
    Filed: November 28, 2019
    Publication date: March 26, 2020
    Inventors: Zai Wang, Xuda Zhou, Zidong Du, Tianshi Chen
  • Publication number: 20200097827
    Abstract: The present disclosure provides a processing device including: a coarse-grained pruning unit configured to perform coarse-grained pruning on a weight of a neural network to obtain a pruned weight, an operation unit configured to train the neural network according to the pruned weight. The coarse-grained pruning unit is specifically configured to select M weights from the weights of the neural network through a sliding window, and when the M weights meet a preset condition, all or part of the M weights may be set to 0. The processing device can reduce the memory access while reducing the amount of computation, thereby obtaining an acceleration ratio and reducing energy consumption.
    Type: Application
    Filed: November 28, 2019
    Publication date: March 26, 2020
    Inventors: Zai Wang, Xuda Zhou, Zidong Du, Tianshi Chen
  • Publication number: 20200097828
    Abstract: The present disclosure provides a processing device including: a coarse-grained pruning unit configured to perform coarse-grained pruning on a weight of a neural network to obtain a pruned weight, an operation unit configured to train the neural network according to the pruned weight. The coarse-grained pruning unit is specifically configured to select M weights from the weights of the neural network through a sliding window, and when the M weights meet a preset condition, all or part of the M weights may be set to 0. The processing device can reduce the memory access while reducing the amount of computation, thereby obtaining an acceleration ratio and reducing energy consumption.
    Type: Application
    Filed: November 28, 2019
    Publication date: March 26, 2020
    Inventors: Zidong Du, Xuda Zhou, Zai Wang, Tianshi Chen
  • Publication number: 20200097792
    Abstract: The present disclosure relates to a processing device including a memory configured to store data to be computed; a computational circuit configured to compute the data to be computed, which includes performing acceleration computations on the data to be computed by using an adder circuit and a multiplier circuit; and a control circuit configured to control the memory and the computational circuit, which includes performing acceleration computations according to the data to be computed. The present disclosure may have high flexibility, good configurability, fast computational speed, low power consumption, and other features.
    Type: Application
    Filed: November 27, 2019
    Publication date: March 26, 2020
    Inventors: Tianshi Chen, Shengyuan Zhou, Zidong Du, Qi Guo