Patents by Inventor Daofu LIU
Daofu LIU has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230169144
Abstract: The present disclosure relates to an operation method, a processor, and related products that improve operation efficiency during matrix multiplication. The products include a storage component, an interface apparatus, a control component, and an artificial intelligence chip. The artificial intelligence chip is connected to the storage component, the control component, and the interface apparatus, respectively. The storage component stores data. The interface apparatus implements data transfer between the artificial intelligence chip and an external device. The control component monitors a state of the artificial intelligence chip.
Type: Application
Filed: February 8, 2021
Publication date: June 1, 2023
Applicant: Cambricon (Xi'an) Semiconductor Co., Ltd.
Inventors: Shaoli LIU, Deyuan HE, Daofu LIU
-
Publication number: 20230091541
Abstract: The present disclosure relates to a data quantization processing method and apparatus, an electronic device, and a storage medium. The apparatus includes a control unit having an instruction caching unit, an instruction processing unit, and a storage queue unit. The instruction caching unit is configured to store a calculation instruction associated with an artificial neural network operation, the instruction processing unit is configured to parse the calculation instruction to obtain a plurality of operation instructions, and the storage queue unit is configured to store an instruction queue. The instruction queue includes a plurality of operation instructions or calculation instructions to be executed in the order of the queue. The above-mentioned method improves the operation precision of related products during a neural network model operation.
Type: Application
Filed: February 22, 2021
Publication date: March 23, 2023
Applicant: Cambricon Technologies Corporation Limited
Inventors: Xin YU, Daofu LIU, Shiyi ZHOU
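As an illustration only, the cache/parse/queue flow described in the abstract above can be modeled in a few lines of Python; the instruction format and all names below are assumptions for the sketch, not taken from the publication:

```python
from collections import deque

# Hypothetical model: a cached calculation instruction is parsed into
# several operation instructions, which are queued for in-order execution.

def parse_calculation_instruction(instr):
    # Assumed format "OP operand1 operand2 ..." -> one operation per operand.
    op, *operands = instr.split()
    return [f"{op}_{operand}" for operand in operands]

instruction_cache = ["QUANTIZE weights neurons"]   # instruction caching unit
storage_queue = deque()                            # storage queue unit
for cached in instruction_cache:                   # instruction processing unit
    storage_queue.extend(parse_calculation_instruction(cached))

# Operation instructions leave the queue in the order they were stored.
executed = [storage_queue.popleft() for _ in range(len(storage_queue))]
```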
-
Patent number: 11593658
Abstract: The application provides a processing method and device. Weights and input neurons are quantized respectively, and a weight dictionary, a weight codebook, a neuron dictionary, and a neuron codebook are determined. A computational codebook is determined according to the weight codebook and the neuron codebook. Meanwhile, according to the application, the computational codebook is determined according to two types of quantized data, and the two types of quantized data are combined, which facilitates data processing.
Type: Grant
Filed: July 13, 2018
Date of Patent: February 28, 2023
Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
Inventors: Shaoli Liu, Xuda Zhou, Zidong Du, Daofu Liu
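The codebook scheme in this abstract can be sketched in software. The uniform-centroid quantizer below stands in for whatever clustering the patent actually uses, and all names are hypothetical; the point is that once both operands are quantized, a multiply reduces to a table lookup:

```python
import numpy as np

def build_codebook(values, n_codes):
    # Toy quantizer: evenly spaced centroids over the value range.
    # The code array plays the role of the dictionary (value -> code),
    # the centroid array plays the role of the codebook (code -> value).
    centroids = np.linspace(values.min(), values.max(), n_codes)
    codes = np.abs(values[:, None] - centroids[None, :]).argmin(axis=1)
    return codes, centroids

# Quantize weights and input neurons separately, then precompute a
# "computational codebook" holding every (weight code, neuron code) product.
weights = np.array([0.11, 0.52, 0.48, 0.95])
neurons = np.array([0.2, 0.8, 0.79, 0.21])
w_codes, w_book = build_codebook(weights, 2)
n_codes_, n_book = build_codebook(neurons, 2)
comp_book = w_book[:, None] * n_book[None, :]   # computational codebook

# A multiply is now a lookup instead of an arithmetic operation.
products = comp_book[w_codes, n_codes_]
```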
-
Patent number: 11501158
Abstract: Aspects for vector operations in neural networks are described herein. The aspects may include a controller unit configured to receive an instruction to generate a random vector that includes one or more elements. The instruction may include a predetermined distribution, a count of the elements, and an address of the random vector. The aspects may further include a computation module configured to generate the one or more elements, wherein the one or more elements are subject to the predetermined distribution.
Type: Grant
Filed: October 25, 2018
Date of Patent: November 15, 2022
Assignee: CAMBRICON (XI'AN) SEMICONDUCTOR CO., LTD.
Inventors: Daofu Liu, Xiao Zhang, Shaoli Liu, Tianshi Chen, Yunji Chen
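A minimal software sketch of the instruction described above, assuming (hypothetically) that it carries a distribution name, an element count, and a destination address; the dictionary-based "memory" and field names are illustration only:

```python
import numpy as np

memory = {}  # stand-in for the device's addressable storage

def exec_random_vector(instr, rng):
    # Controller unit decodes the instruction fields; the computation
    # module fills the destination with samples from the requested
    # predetermined distribution.
    dist, count, addr = instr["distribution"], instr["count"], instr["address"]
    if dist == "uniform":
        memory[addr] = rng.uniform(0.0, 1.0, size=count)
    elif dist == "normal":
        memory[addr] = rng.normal(0.0, 1.0, size=count)
    else:
        raise ValueError(f"unsupported distribution: {dist}")

rng = np.random.default_rng(0)
exec_random_vector({"distribution": "uniform", "count": 8, "address": 0x100}, rng)
vec = memory[0x100]
```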
-
Publication number: 20220253280
Abstract: The present disclosure provides a computing device for processing a multi-bit width value, an integrated circuit board card, a method, and a computer-readable storage medium. The computing device is included in a combined processing apparatus, and the combined processing apparatus further includes a general interconnection interface and other processing devices. The computing device interacts with the other processing devices to jointly complete a computing operation specified by a user. The combined processing apparatus further includes a storage device connected to the apparatus and the other processing devices and configured to store data of the apparatus and the other processing devices. The solution of the present disclosure can split the multi-bit width value so that the processing capability of the processor is not influenced by the bit width.
Type: Application
Filed: December 21, 2021
Publication date: August 11, 2022
Inventors: Shaoli LIU, Shiyi ZHOU, Daofu LIU
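The splitting idea in the last sentence can be illustrated with a limb decomposition: a value wider than the processor's native width is broken into fixed-width pieces and reassembled afterward. The 16-bit limb width and little-endian limb order below are assumptions for the sketch, not details from the publication:

```python
# Hypothetical sketch: split a multi-bit-width value into limbs no wider
# than the native datapath, so wide operands do not limit processing.

def split_value(value, total_bits, limb_bits=16):
    mask = (1 << limb_bits) - 1
    limbs = []
    for _ in range((total_bits + limb_bits - 1) // limb_bits):
        limbs.append(value & mask)   # least-significant limb first
        value >>= limb_bits
    return limbs

def join_limbs(limbs, limb_bits=16):
    value = 0
    for limb in reversed(limbs):
        value = (value << limb_bits) | limb
    return value

x = 0x123456789ABC
limbs = split_value(x, 48)          # three 16-bit limbs
assert join_limbs(limbs) == x       # lossless round trip
```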
-
Publication number: 20220188071
Abstract: The present disclosure relates to a computing device for processing a multi-bit width value, an integrated circuit board card, a method, and a computer-readable storage medium. The computing device may be included in a combined processing apparatus, and the combined processing apparatus may further include a general interconnection interface and another processing device. The computing device interacts with the other processing device to jointly complete a computing operation specified by a user. The combined processing apparatus may further include a storage device connected to the apparatus and the other processing device and configured to store data of the apparatus and the other processing device. The solution of the present disclosure can split the multi-bit width value so that the processing capability of the processor is not influenced by the bit width.
Type: Application
Filed: December 20, 2021
Publication date: June 16, 2022
Inventors: Shaoli LIU, Daofu LIU, Shiyi ZHOU
-
Patent number: 10971221
Abstract: Aspects for a storage device with fault tolerance capability for neural networks are described herein. The aspects may include a first storage unit of a storage device. The first storage unit is configured to store one or more first bits of data, and the data includes floating-point type data and fixed-point type data. The first bits include one or more sign bits of the floating-point type data and the fixed-point type data. The aspects may further include a second storage unit of the storage device. The second storage unit may be configured to store one or more second bits of the data. In some examples, the first storage unit may include an ECC memory and the second storage unit may include a non-ECC memory. The ECC memory may include an ECC check Dynamic Random Access Memory and an ECC check Static Random Access Memory.
Type: Grant
Filed: April 30, 2020
Date of Patent: April 6, 2021
Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
Inventors: Shaoli Liu, Xuda Zhou, Zidong Du, Daofu Liu
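The bit partitioning described above can be sketched for an IEEE-754 single-precision value: the high-impact bits go to the protected (ECC) store and the rest to the cheaper non-ECC store. Protecting the exponent bits alongside the sign bit is an assumption of this sketch, as is the software representation; the patent's actual bit selection may differ:

```python
import struct

def split_float_bits(x):
    # View the float32 as raw bits (big-endian for a fixed layout).
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    protected = bits >> 23                 # sign + exponent -> ECC memory
    unprotected = bits & ((1 << 23) - 1)   # mantissa -> non-ECC memory
    return protected, unprotected

def join_float_bits(protected, unprotected):
    bits = (protected << 23) | unprotected
    return struct.unpack(">f", struct.pack(">I", bits))[0]

p, u = split_float_bits(-1.5)
assert join_float_bits(p, u) == -1.5   # lossless recombination
```

A flipped mantissa bit only perturbs the value slightly, while a flipped sign or exponent bit changes it drastically, which is why the first bits are the ones worth the cost of ECC protection.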
-
Publication number: 20210035628
Abstract: Aspects for a storage device with fault tolerance capability for neural networks are described herein. The aspects may include a first storage unit of a storage device. The first storage unit is configured to store one or more first bits of data, and the data includes floating-point type data and fixed-point type data. The first bits include one or more sign bits of the floating-point type data and the fixed-point type data. The aspects may further include a second storage unit of the storage device. The second storage unit may be configured to store one or more second bits of the data. In some examples, the first storage unit may include an ECC memory and the second storage unit may include a non-ECC memory. The ECC memory may include an ECC check Dynamic Random Access Memory and an ECC check Static Random Access Memory.
Type: Application
Filed: April 30, 2020
Publication date: February 4, 2021
Inventors: Shaoli LIU, Xuda ZHOU, Zidong DU, Daofu LIU
-
Patent number: 10761991
Abstract: Aspects for vector circular shifting in neural networks are described herein. The aspects may include a direct memory access unit configured to receive a vector that includes multiple elements. The multiple elements are stored in a one-dimensional data structure. The direct memory access unit may store the vector in a vector caching unit. The aspects may further include an instruction caching unit configured to receive a vector shifting instruction that includes a step length for shifting the elements in the vector. Further still, the aspects may include a computation module configured to shift the elements of the vector toward one direction by the step length.
Type: Grant
Filed: October 26, 2018
Date of Patent: September 1, 2020
Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Daofu Liu, Xiao Zhang, Shaoli Liu, Tianshi Chen, Yunji Chen
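Functionally, the circular shift described above wraps displaced elements around to the other end of the one-dimensional vector. A one-line sketch (the shift direction chosen here is an assumption; the patent's instruction covers "one direction"):

```python
# Hypothetical model of the vector circular-shift instruction: shift every
# element toward the front by `step`, wrapping the displaced prefix to the end.

def circular_shift(vec, step):
    n = len(vec)
    step %= n          # a step length beyond the vector length wraps around
    return vec[step:] + vec[:step]

assert circular_shift([1, 2, 3, 4, 5], 2) == [3, 4, 5, 1, 2]
```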
-
Patent number: 10755772
Abstract: Aspects for a storage device with fault tolerance capability for neural networks are described herein. The aspects may include a first storage unit of a storage device. The first storage unit is configured to store one or more first bits of data, and the data includes floating-point type data and fixed-point type data. The first bits include one or more sign bits of the floating-point type data and the fixed-point type data. The aspects may further include a second storage unit of the storage device. The second storage unit may be configured to store one or more second bits of the data. In some examples, the first storage unit may include an ECC memory and the second storage unit may include a non-ECC memory. The ECC memory may include an ECC check Dynamic Random Access Memory and an ECC check Static Random Access Memory.
Type: Grant
Filed: August 1, 2019
Date of Patent: August 25, 2020
Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
Inventors: Shaoli Liu, Xuda Zhou, Zidong Du, Daofu Liu
-
Publication number: 20200265300
Abstract: The application provides an operation method and device. Quantized data is looked up to realize an operation, which simplifies the structure and reduces the computation energy consumption of the data; meanwhile, a plurality of operations are realized.
Type: Application
Filed: March 26, 2020
Publication date: August 20, 2020
Inventors: Shaoli LIU, Xuda ZHOU, Zidong DU, Daofu LIU
-
Publication number: 20200250539
Abstract: The application provides a processing method and device. Weights and input neurons are quantized respectively, and a weight dictionary, a weight codebook, a neuron dictionary, and a neuron codebook are determined. A computational codebook is determined according to the weight codebook and the neuron codebook. Meanwhile, according to the application, the computational codebook is determined according to two types of quantized data, and the two types of quantized data are combined, which facilitates data processing.
Type: Application
Filed: July 13, 2018
Publication date: August 6, 2020
Inventors: Shaoli LIU, Xuda ZHOU, Zidong DU, Daofu LIU
-
Patent number: 10657439
Abstract: The application provides an operation method and device. Quantized data is looked up to realize an operation, which simplifies the structure and reduces the computation energy consumption of the data; meanwhile, a plurality of operations are realized.
Type: Grant
Filed: August 1, 2019
Date of Patent: May 19, 2020
Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
Inventors: Shaoli Liu, Xuda Zhou, Zidong Du, Daofu Liu
-
Publication number: 20200034698
Abstract: The present application provides an operation device and related products. The operation device is configured to execute operations of a network model, wherein the network model includes a neural network model and/or a non-neural network model. The operation device comprises an operation unit, a controller unit, and a storage unit, wherein the storage unit includes a data input unit, a storage medium, and a scalar data storage unit. The technical solution provided by this application has the advantages of fast calculation speed and energy saving.
Type: Application
Filed: April 17, 2018
Publication date: January 30, 2020
Applicant: Shanghai Cambricon Information Technology Co., Ltd.
Inventors: Tianshi CHEN, Yimin ZHUANG, Daofu LIU, Xiaobin CHEN, Zai WANG, Shaoli LIU
-
Publication number: 20190385058
Abstract: The present application provides an operation device and related products. The operation device is configured to execute operations of a network model, wherein the network model includes a neural network model and/or a non-neural network model. The operation device comprises an operation unit, a controller unit, and a storage unit, wherein the storage unit includes a data input unit, a storage medium, and a scalar data storage unit. The technical solution provided by this application has the advantages of fast computation speed and energy saving.
Type: Application
Filed: August 12, 2019
Publication date: December 19, 2019
Inventors: Tianshi CHEN, Yimin ZHUANG, Daofu LIU, Xiaobing CHEN, Zai WANG, Shaoli LIU
-
Publication number: 20190370642
Abstract: The application provides an operation method and device. Quantized data is looked up to realize an operation, which simplifies the structure and reduces the computation energy consumption of the data; meanwhile, a plurality of operations are realized.
Type: Application
Filed: August 1, 2019
Publication date: December 5, 2019
Inventors: Shaoli LIU, Xuda ZHOU, Zidong DU, Daofu LIU
-
Publication number: 20190129858
Abstract: Aspects for vector circular shifting in neural networks are described herein. The aspects may include a direct memory access unit configured to receive a vector that includes multiple elements. The multiple elements are stored in a one-dimensional data structure. The direct memory access unit may store the vector in a vector caching unit. The aspects may further include an instruction caching unit configured to receive a vector shifting instruction that includes a step length for shifting the elements in the vector. Further still, the aspects may include a computation module configured to shift the elements of the vector toward one direction by the step length.
Type: Application
Filed: October 26, 2018
Publication date: May 2, 2019
Inventors: Daofu Liu, Xiao Zhang, Shaoli Liu, Tianshi Chen, Yunji Chen
-
Publication number: 20190065952
Abstract: Aspects for vector operations in neural networks are described herein. The aspects may include a controller unit configured to receive an instruction to generate a random vector that includes one or more elements. The instruction may include a predetermined distribution, a count of the elements, and an address of the random vector. The aspects may further include a computation module configured to generate the one or more elements, wherein the one or more elements are subject to the predetermined distribution.
Type: Application
Filed: October 25, 2018
Publication date: February 28, 2019
Inventors: Daofu Liu, Xiao Zhang, Shaoli Liu, Tianshi Chen, Yunji Chen
-
Publication number: 20180321944
Abstract: The present disclosure relates to a data ranking apparatus that comprises: a register group for storing the K temporarily ranked maximum or minimum data items in a data ranking process, where the register group comprises a plurality of registers connected in parallel and two adjacent registers unidirectionally transmit data from a low level to a high level; a comparator group, which comprises a plurality of comparators connected to the registers on a one-to-one basis, compares the magnitudes of a plurality of pieces of input data, and outputs the larger or smaller values to the corresponding registers; and a control circuit generating a plurality of flag bits applied to the registers, wherein the flag bits are used to judge whether the registers receive data transmitted from the corresponding comparators or lower-level registers, and whether the registers transmit data to higher-level registers.
Type: Application
Filed: June 17, 2016
Publication date: November 8, 2018
Inventors: Daofu LIU, Shengyuan ZHOU, Yunji CHEN
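A software model of the ranking behavior described above: a chain of K "registers" keeps the K largest values seen so far, with each new input compared against the chain and smaller values shifted toward the low end. The sequential insertion logic below is an assumption standing in for the parallel comparator/flag-bit hardware:

```python
# Hypothetical model of the data ranking apparatus as a top-K chain.

def top_k_chain(stream, k):
    registers = []  # highest value first; at most k entries
    for x in stream:
        # Mimic the per-register comparator: find where x outranks the chain.
        pos = len(registers)
        for i, r in enumerate(registers):
            if x > r:
                pos = i
                break
        registers.insert(pos, x)   # lower values shift toward the low level
        del registers[k:]          # data pushed past the lowest register drops out
    return registers

assert top_k_chain([5, 1, 9, 3, 7, 8], 3) == [9, 8, 7]
```

In the hardware, the comparisons happen in parallel across the register group, so each input is ranked in a single pass rather than by this loop.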