Patents by Inventor Zidong Du
Zidong Du has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12124940
Abstract: The application provides an operation method and device. Quantized data is looked up to realize an operation, which simplifies the structure and reduces computation energy consumption while realizing a plurality of operations.
Type: Grant
Filed: March 26, 2020
Date of Patent: October 22, 2024
Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
Inventors: Shaoli Liu, Xuda Zhou, Zidong Du, Daofu Liu
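As a rough illustration of the lookup idea in this abstract, the sketch below (in Python, with assumed codebook sizes and function names) replaces elementwise multiplication of quantized operands with indexing into a precomputed table:

```python
import numpy as np

# Hypothetical sketch: precompute products of all pairs of codebook values so
# that a multiplication between quantized operands becomes a table lookup.
def build_lookup_table(codebook_a, codebook_b):
    """Table of products indexed by (code_a, code_b)."""
    return np.outer(codebook_a, codebook_b)

def lookup_multiply(table, codes_a, codes_b):
    """Replace elementwise multiplication with indexing into the table."""
    return table[codes_a, codes_b]

# Usage: 4-bit quantization gives 16 codebook entries per operand (assumed sizes).
codebook_a = np.linspace(-1.0, 1.0, 16)
codebook_b = np.linspace(-2.0, 2.0, 16)
table = build_lookup_table(codebook_a, codebook_b)

codes_a = np.random.randint(0, 16, size=8)   # quantized activations (indices)
codes_b = np.random.randint(0, 16, size=8)   # quantized weights (indices)
print(lookup_multiply(table, codes_a, codes_b))
```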
-
Patent number: 11907844
Abstract: The present disclosure provides a processing device including: a coarse-grained pruning unit configured to perform coarse-grained pruning on a weight of a neural network to obtain a pruned weight, and an operation unit configured to train the neural network according to the pruned weight. The coarse-grained pruning unit is specifically configured to select M weights from the weights of the neural network through a sliding window, and when the M weights meet a preset condition, all or part of the M weights may be set to 0. The processing device can reduce memory access while reducing the amount of computation, thereby obtaining an acceleration ratio and reducing energy consumption.
Type: Grant
Filed: November 28, 2019
Date of Patent: February 20, 2024
Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
Inventors: Zidong Du, Xuda Zhou, Shaoli Liu, Tianshi Chen
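The sliding-window pruning described here can be sketched as follows; the window size, the group-norm threshold, and the condition itself are illustrative assumptions, not the condition claimed by the patent:

```python
import numpy as np

# Hypothetical sketch of coarse-grained pruning: slide a window over the weight
# matrix and zero an entire group of M weights when the group meets a preset
# condition (assumed here to be "group L2 norm below a threshold").
def coarse_grained_prune(weights, window=(2, 2), threshold=0.5):
    pruned = weights.copy()
    wh, ww = window
    for i in range(0, weights.shape[0] - wh + 1, wh):
        for j in range(0, weights.shape[1] - ww + 1, ww):
            group = pruned[i:i + wh, j:j + ww]        # the M weights in the window
            if np.linalg.norm(group) < threshold:     # preset condition (assumed)
                pruned[i:i + wh, j:j + ww] = 0.0       # set all M weights to 0
    return pruned

weights = np.random.randn(4, 4) * 0.3
print(coarse_grained_prune(weights))
```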
-
Patent number: 11727276
Abstract: The present disclosure provides a processing device including: a coarse-grained pruning unit configured to perform coarse-grained pruning on a weight of a neural network to obtain a pruned weight, and an operation unit configured to train the neural network according to the pruned weight. The coarse-grained pruning unit is specifically configured to select M weights from the weights of the neural network through a sliding window, and when the M weights meet a preset condition, all or part of the M weights may be set to 0. The processing device can reduce memory access while reducing the amount of computation, thereby obtaining an acceleration ratio and reducing energy consumption.
Type: Grant
Filed: November 28, 2019
Date of Patent: August 15, 2023
Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
Inventors: Zai Wang, Xuda Zhou, Zidong Du, Tianshi Chen
-
Patent number: 11663491
Abstract: An allocation system for machine learning, comprising a terminal server and a cloud server. The terminal server is used for: acquiring demand information; generating a control instruction according to the demand information, wherein the control instruction comprises a terminal control instruction and a cloud control instruction; parsing the terminal control instruction to obtain a terminal control signal; and calculating a terminal workload of the machine learning algorithm at each stage according to the terminal control signal to obtain a terminal computation result. The cloud server is used for parsing the cloud control instruction to obtain a cloud control signal, and calculating a cloud workload of the machine learning algorithm at each stage according to the cloud control signal to obtain a cloud computation result. The terminal computation result and the cloud computation result together compose an output result.
Type: Grant
Filed: August 19, 2020
Date of Patent: May 30, 2023
Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Xiaofu Meng, Yongzhe Sun, Zidong Du
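A minimal sketch of the terminal/cloud split, assuming a per-stage ratio derived from the demand information and stand-in stage functions (none of these names or the 50/50 split come from the patent):

```python
import numpy as np

# Hypothetical sketch: split one stage of a machine-learning pipeline between a
# terminal worker and a cloud worker according to a control signal derived from
# demand information, then compose the two partial results into the output.
def run_stage(data, ratio):
    """Process `ratio` of the rows as the terminal workload, the rest as the cloud workload."""
    split = int(len(data) * ratio)
    terminal_part = data[:split] * 2.0   # stand-in for the terminal computation result
    cloud_part = data[split:] * 2.0      # stand-in for the cloud computation result
    return np.concatenate([terminal_part, cloud_part])   # compose the output result

demand = {"latency_sensitive": True}
ratio = 0.5 if demand["latency_sensitive"] else 0.2      # control signal from demand info (assumed rule)
data = np.arange(10, dtype=float)
print(run_stage(data, ratio))
```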
-
Patent number: 11593658
Abstract: The application provides a processing method and device. Weights and input neurons are quantized respectively, and a weight dictionary, a weight codebook, a neuron dictionary, and a neuron codebook are determined. A computational codebook is then determined according to the weight codebook and the neuron codebook, so that the two types of quantized data are combined, which facilitates data processing.
Type: Grant
Filed: July 13, 2018
Date of Patent: February 28, 2023
Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
Inventors: Shaoli Liu, Xuda Zhou, Zidong Du, Daofu Liu
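A rough sketch of this two-sided quantization, assuming uniform codebooks (the abstract does not fix the clustering method): the dictionaries map values to codebook indices, and the computational codebook precomputes every product of a weight centroid with a neuron centroid.

```python
import numpy as np

# Hypothetical sketch: quantize weights and neurons separately, then precompute a
# computational codebook of centroid products so inference becomes a table lookup.
def quantize(values, n_codes):
    codebook = np.linspace(values.min(), values.max(), n_codes)                     # codebook
    dictionary = np.abs(values[:, None] - codebook[None, :]).argmin(axis=1)         # dictionary (indices)
    return dictionary, codebook

weights = np.random.randn(16)
neurons = np.random.rand(16)
w_dict, w_codebook = quantize(weights, 8)
n_dict, n_codebook = quantize(neurons, 8)

computational_codebook = np.outer(w_codebook, n_codebook)   # all centroid products
products = computational_codebook[w_dict, n_dict]            # lookup instead of multiply
print(products[:4])
```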
-
Patent number: 11580367
Abstract: The present disclosure provides a neural network processing system that comprises a multi-core processing module, composed of a plurality of core processing modules, for executing vector multiplication and addition operations in a neural network operation; an on-chip storage medium; an on-chip address index module; and an ALU module for executing non-linear operations that the multi-core processing module cannot complete, according to input data acquired from the multi-core processing module or the on-chip storage medium. The plurality of core processing modules either share an on-chip storage medium and an ALU module, or each have an independent on-chip storage medium and ALU module. The present disclosure improves the operating speed of the neural network processing system, making it more performant and efficient.
Type: Grant
Filed: August 9, 2016
Date of Patent: February 14, 2023
Assignee: Institute of Computing Technology, Chinese Academy of Sciences
Inventors: Zidong Du, Qi Guo, Tianshi Chen, Yunji Chen
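A minimal software analogy of the multi-core organization, assuming four cores and ReLU as the non-linear operation handled by the ALU module (both are assumptions for illustration):

```python
import numpy as np

# Hypothetical sketch: each "core" handles a slice of the vector multiply-and-add
# work, and a shared "ALU" step applies the non-linear operation the cores do not perform.
NUM_CORES = 4

def core_multiply_add(weight_slice, inputs):
    return weight_slice @ inputs                     # vector multiplication and addition

def alu_nonlinear(x):
    return np.maximum(x, 0.0)                         # non-linear op (assumed ReLU)

weights = np.random.randn(8, 16)
inputs = np.random.randn(16)
slices = np.array_split(weights, NUM_CORES, axis=0)   # one output slice per core
partial = [core_multiply_add(s, inputs) for s in slices]
output = alu_nonlinear(np.concatenate(partial))
print(output.shape)
```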
-
Patent number: 11568269
Abstract: Disclosed are a scheduling method and a related apparatus. A computing apparatus in a server can be chosen to implement a computation request, thereby improving the running efficiency of the server.
Type: Grant
Filed: August 2, 2018
Date of Patent: January 31, 2023
Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Zidong Du, Luyang Jin
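A toy sketch of the scheduling idea, assuming "least current load" as the selection rule (the abstract does not state the actual rule):

```python
# Hypothetical sketch: among the computing apparatuses in a server, choose one to
# handle an incoming computation request; the selection rule below is assumed.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    load: float  # fraction of capacity currently in use

def schedule(devices, request_cost):
    chosen = min(devices, key=lambda d: d.load)   # choose the least-loaded apparatus
    chosen.load += request_cost
    return chosen.name

devices = [Device("npu0", 0.7), Device("npu1", 0.2), Device("gpu0", 0.5)]
print(schedule(devices, 0.1))   # -> npu1
```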
-
Patent number: 11551067
Abstract: The present disclosure provides a neural network processor and a neural network computation method that deploy a memory and a cache to perform a neural network computation. The memory may be configured to store data and instructions of the neural network computation, and the cache may be connected to the memory via a memory bus. Thereby, the actual compute capability of the hardware may be fully utilized, cost and power consumption overhead may be reduced, the parallelism of the network may be fully exploited, and the efficiency of the neural network computation may be improved.
Type: Grant
Filed: July 23, 2019
Date of Patent: January 10, 2023
Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
Inventors: Tianshi Chen, Xiaobin Chen, Tian Zhi, Zidong Du
-
Patent number: 11544526
Abstract: A computing device, comprising: a computing module comprising one or more computing units; and a control module comprising a computing control unit, used for controlling shutdown of the computing units of the computing module according to a determining condition. Also provided is a computing method. The computing device and method have the advantages of low power consumption and high flexibility, can be combined with software upgrades, and can thereby further increase the computing speed, reduce the amount of computation, and reduce the computing power consumption of an accelerator.
Type: Grant
Filed: November 28, 2019
Date of Patent: January 3, 2023
Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
Inventors: Zidong Du, Shaoli Liu, Tianshi Chen
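A software analogy of the shutdown control, assuming "operand equals zero" as the determining condition; in hardware the skipped units would be power-gated rather than merely skipped:

```python
import numpy as np

# Hypothetical sketch: a control unit shuts down individual computing units when a
# determining condition holds, so they contribute neither work nor (in hardware) power.
def gated_multiply(activations, weights):
    result = np.zeros_like(activations)
    for i, (a, w) in enumerate(zip(activations, weights)):
        if a == 0.0:         # determining condition (assumed): unit i can be shut down
            continue         # skip the computation entirely
        result[i] = a * w     # unit i stays powered and computes
    return result

acts = np.array([0.0, 1.5, 0.0, 2.0])
wts = np.array([0.3, 0.4, 0.5, 0.6])
print(gated_multiply(acts, wts))
```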
-
Patent number: 11544542
Abstract: A computing device, comprising: a computing module comprising one or more computing units; and a control module comprising a computing control unit, used for controlling shutdown of the computing units of the computing module according to a determining condition. Also provided is a computing method. The computing device and method have the advantages of low power consumption and high flexibility, can be combined with software upgrades, and can thereby further increase the computing speed, reduce the amount of computation, and reduce the computing power consumption of an accelerator.
Type: Grant
Filed: November 28, 2019
Date of Patent: January 3, 2023
Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
Inventors: Zai Wang, Shengyuan Zhou, Zidong Du, Tianshi Chen
-
Patent number: 11544543
Abstract: A computing device, comprising: a computing module comprising one or more computing units; and a control module comprising a computing control unit, used for controlling shutdown of the computing units of the computing module according to a determining condition. Also provided is a computing method. The computing device and method have the advantages of low power consumption and high flexibility, can be combined with software upgrades, and can thereby further increase the computing speed, reduce the amount of computation, and reduce the computing power consumption of an accelerator.
Type: Grant
Filed: November 28, 2019
Date of Patent: January 3, 2023
Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
Inventors: Zidong Du, Shengyuan Zhou, Shaoli Liu, Tianshi Chen
-
Patent number: 11537858
Abstract: A computing device, comprising: a computing module comprising one or more computing units; and a control module comprising a computing control unit, used for controlling shutdown of the computing units of the computing module according to a determining condition. Also provided is a computing method. The computing device and method have the advantages of low power consumption and high flexibility, can be combined with software upgrades, and can thereby further increase the computing speed, reduce the amount of computation, and reduce the computing power consumption of an accelerator.
Type: Grant
Filed: November 28, 2019
Date of Patent: December 27, 2022
Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
Inventors: Tianshi Chen, Xuda Zhou, Shaoli Liu, Zidong Du
-
Patent number: 11531541
Abstract: The present disclosure relates to a processing device including: a memory configured to store data to be computed; a computational circuit configured to compute the data to be computed, which includes performing acceleration computations on the data to be computed by using an adder circuit and a multiplier circuit; and a control circuit configured to control the memory and the computational circuit, which includes performing acceleration computations according to the data to be computed. The present disclosure may offer high flexibility, good configurability, fast computational speed, low power consumption, and other advantages.
Type: Grant
Filed: November 27, 2019
Date of Patent: December 20, 2022
Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
Inventors: Tianshi Chen, Shengyuan Zhou, Zidong Du, Qi Guo
-
Patent number: 11531540
Abstract: A processing device with dynamically configurable operation bit width, comprising: a memory for storing data, the data comprising data to be operated on, intermediate operation results, final operation results, and data to be buffered in a neural network; a data width adjustment circuit for adjusting the width of the data to be operated on, the intermediate operation results, the final operation results, and/or the data to be buffered; an operation circuit for operating on the data, including performing operations on data of different bit widths by using an adder circuit and a multiplier; and a control circuit for controlling the memory, the data width adjustment circuit, and the operation circuit. The device of the present disclosure can have the advantages of strong flexibility, high configurability, fast operation speed, and low power consumption.
Type: Grant
Filed: April 17, 2018
Date of Patent: December 20, 2022
Assignee: CAMBRICON (XI'AN) SEMICONDUCTOR CO., LTD.
Inventors: Tianshi Chen, Jie Wei, Tian Zhi, Zai Wang, Shaoli Liu, Yuzhe Luo, Qi Guo, Wei Li, Shengyuan Zhou, Zidong Du
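A rough sketch of what adjustable operation bit width means in practice, assuming symmetric fixed-point re-quantization as the width-adjustment step (the patent's circuit is not specified here):

```python
import numpy as np

# Hypothetical sketch: a "data width adjustment" step re-quantizes operands to the
# requested number of bits before the adder/multiplier operate on them.
def adjust_width(x, bits):
    scale = (2 ** (bits - 1) - 1) / np.max(np.abs(x))
    return np.round(x * scale) / scale            # value as representable at `bits` width

def configurable_multiply(a, b, bits):
    return adjust_width(a, bits) * adjust_width(b, bits)

a = np.random.randn(4)
b = np.random.randn(4)
print(configurable_multiply(a, b, bits=16))   # wide, accurate
print(configurable_multiply(a, b, bits=4))    # narrow, cheaper in hardware
```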
-
Patent number: 11507350
Abstract: The present disclosure relates to a fused vector multiplier for computing an inner product between vectors, where the vectors to be computed are a multiplier vector A = {A_N ... A_2 A_1 A_0} and a multiplicand vector B = {B_N ... B_2 B_1 B_0}, and A and B have the same dimension, N+1.
Type: Grant
Filed: November 27, 2019
Date of Patent: November 22, 2022
Assignee: CAMBRICON (XI'AN) SEMICONDUCTOR CO., LTD.
Inventors: Tianshi Chen, Shengyuan Zhou, Zidong Du, Qi Guo
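A minimal functional reference for what the fused vector multiplier computes, i.e. the inner product of A and B accumulated in a single pass; the Python below models the result, not the hardware structure:

```python
import numpy as np

# Hypothetical sketch: accumulate partial products in one pass instead of producing
# N+1 separate products and summing them afterwards.
def fused_inner_product(a, b):
    assert a.shape == b.shape            # both vectors have dimension N+1
    acc = 0
    for ai, bi in zip(a, b):
        acc += int(ai) * int(bi)         # multiply-accumulate per element
    return acc

a = np.array([1, 2, 3, 4])
b = np.array([5, 6, 7, 8])
print(fused_inner_product(a, b))   # 70
```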
-
Publication number: 20220335299
Abstract: The present disclosure provides a processing device including: a coarse-grained pruning unit configured to perform coarse-grained pruning on a weight of a neural network to obtain a pruned weight, and an operation unit configured to train the neural network according to the pruned weight. The coarse-grained pruning unit is specifically configured to select M weights from the weights of the neural network through a sliding window, and when the M weights meet a preset condition, all or part of the M weights may be set to 0. The processing device can reduce memory access while reducing the amount of computation, thereby obtaining an acceleration ratio and reducing energy consumption.
Type: Application
Filed: November 28, 2019
Publication date: October 20, 2022
Inventors: Zai Wang, Xuda Zhou, Zidong Du, Tianshi Chen
-
Patent number: 11307866
Abstract: The disclosure provides a data processing device and method. The data processing device may include: a task configuration information storage unit and a task queue configuration unit. The task configuration information storage unit is configured to store configuration information of tasks. The task queue configuration unit is configured to configure a task queue according to the configuration information stored in the task configuration information storage unit. According to the disclosure, a task queue may be configured according to the configuration information.
Type: Grant
Filed: November 28, 2019
Date of Patent: April 19, 2022
Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
Inventors: Shaoli Liu, Shengyuan Zhou, Zidong Du
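A small sketch of configuring a task queue from stored configuration information, with assumed config fields and an assumed priority ordering:

```python
# Hypothetical sketch: stored configuration information (a list of dicts with assumed
# fields) is used to build an ordered task queue.
from collections import deque

task_configs = [
    {"name": "load_weights", "priority": 1},
    {"name": "run_layer_0",  "priority": 2},
    {"name": "run_layer_1",  "priority": 3},
]

def configure_task_queue(configs):
    ordered = sorted(configs, key=lambda c: c["priority"])   # order from config info
    return deque(c["name"] for c in ordered)

queue = configure_task_queue(task_configs)
while queue:
    print("dispatch:", queue.popleft())
```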
-
Patent number: 11263520
Abstract: Aspects of reusing neural network instructions are described herein. The aspects may include a computing device configured to calculate a hash value of a neural network layer based on the layer information thereof. A determination unit may be configured to determine whether the hash value exists in a hash table. If the hash value is included in the hash table, one or more neural network instructions that correspond to the hash value may be reused.
Type: Grant
Filed: May 29, 2019
Date of Patent: March 1, 2022
Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
Inventors: Yunji Chen, Yixuan Ren, Zidong Du, Tianshi Chen
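The hash-based reuse can be sketched directly; the layer fields and the code-generation stand-in below are assumptions, not the patent's instruction format:

```python
import hashlib
import json

# Hypothetical sketch: hash a layer's descriptive information and reuse previously
# generated instructions when the same hash is seen again.
instruction_cache = {}   # hash table: layer hash -> generated instructions

def layer_hash(layer_info):
    canonical = json.dumps(layer_info, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def get_instructions(layer_info):
    key = layer_hash(layer_info)
    if key in instruction_cache:                               # hash found: reuse
        return instruction_cache[key]
    instructions = [f"conv {layer_info['out_channels']}ch"]    # stand-in for codegen
    instruction_cache[key] = instructions
    return instructions

layer = {"type": "conv", "kernel": 3, "out_channels": 64}
print(get_instructions(layer))   # generated
print(get_instructions(layer))   # reused from the hash table
```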
-
Patent number: 11086634
Abstract: The disclosure provides a data processing device and method. The data processing device may include: a task configuration information storage unit and a task queue configuration unit. The task configuration information storage unit is configured to store configuration information of tasks. The task queue configuration unit is configured to configure a task queue according to the configuration information stored in the task configuration information storage unit. According to the disclosure, a task queue may be configured according to the configuration information.
Type: Grant
Filed: November 28, 2019
Date of Patent: August 10, 2021
Assignee: Shanghai Cambricon Information Technology Co., Ltd.
Inventors: Zai Wang, Xuda Zhou, Zidong Du, Tianshi Chen
-
VIDEO RETRIEVAL METHOD, AND METHOD AND APPARATUS FOR GENERATING VIDEO RETRIEVAL MAPPING RELATIONSHIP
Publication number: 20210142069
Abstract: The present disclosure relates to a video retrieval method, system and device for generating a video retrieval mapping relationship, and a storage medium. The video retrieval method comprises: acquiring a retrieval instruction, wherein the retrieval instruction carries retrieval information for retrieving a target frame picture; and obtaining the target frame picture according to the retrieval information and a preset mapping relationship.
Type: Application
Filed: May 17, 2019
Publication date: May 13, 2021
Inventors: Tianshi CHEN, Zhou FANG, Shengyuan ZHOU, Qun LIU, Zidong DU
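A toy sketch of the retrieval flow, assuming a keyword-to-frame-index table as the preset mapping relationship (the abstract does not specify how the mapping is built):

```python
# Hypothetical sketch: a preset mapping relationship maps the retrieval information
# carried by the instruction to a target frame picture.
preset_mapping = {
    "goal": 1234,        # retrieval information -> frame index (assumed mapping)
    "interview": 5678,
}

def retrieve_target_frame(retrieval_instruction, frames):
    info = retrieval_instruction["retrieval_information"]
    frame_index = preset_mapping[info]          # apply the preset mapping relationship
    return frames[frame_index]

frames = {1234: "frame_1234.png", 5678: "frame_5678.png"}
print(retrieve_target_frame({"retrieval_information": "goal"}, frames))
```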