Patents by Inventor Daofu LIU

Daofu LIU has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230169144
    Abstract: The present disclosure relates to an operation method, a processor, and related products that improve operation efficiency during matrix multiplication. The products include a storage component, an interface apparatus, a control component, and an artificial intelligence chip. The artificial intelligence chip is connected to the storage component, the control component, and the interface apparatus, respectively. The storage component stores data. The interface apparatus implements data transfer between the artificial intelligence chip and an external device. The control component monitors a state of the artificial intelligence chip.
    Type: Application
    Filed: February 8, 2021
    Publication date: June 1, 2023
    Applicant: Cambricon (Xi'an) Semiconductor Co., Ltd.
    Inventors: Shaoli LIU, Deyuan HE, Daofu LIU
  • Publication number: 20230091541
    Abstract: The present disclosure relates to a data quantization processing method and apparatus, an electronic device, and a storage medium. The apparatus includes a control unit having an instruction caching unit, an instruction processing unit, and a storage queue unit. The instruction caching unit is configured to store a calculation instruction associated with an artificial neural network operation, the instruction processing unit is configured to parse the calculation instruction to obtain a plurality of operation instructions, and the storage queue unit is configured to store an instruction queue. The instruction queue includes a plurality of operation instructions or calculation instructions to be executed in an order of the queue. The above-mentioned method improves the operation precision of related products during a neural network model operation.
    Type: Application
    Filed: February 22, 2021
    Publication date: March 23, 2023
    Applicant: Cambricon Technologies Corporation Limited
    Inventors: Xin YU, Daofu LIU, Shiyi ZHOU
  • Patent number: 11593658
    Abstract: The application provides a processing method and device. Weights and input neurons are quantized respectively, and a weight dictionary, a weight codebook, a neuron dictionary, and a neuron codebook are determined. A computational codebook is determined according to the weight codebook and the neuron codebook. The computational codebook is thus determined from the two types of quantized data, and combining them facilitates data processing. (An illustrative sketch of this codebook lookup appears after this listing.)
    Type: Grant
    Filed: July 13, 2018
    Date of Patent: February 28, 2023
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
    Inventors: Shaoli Liu, Xuda Zhou, Zidong Du, Daofu Liu
  • Patent number: 11501158
    Abstract: Aspects for vector operations in a neural network are described herein. The aspects may include a controller unit configured to receive an instruction to generate a random vector that includes one or more elements. The instruction may include a predetermined distribution, a count of the elements, and an address of the random vector. The aspects may further include a computation module configured to generate the one or more elements, wherein the one or more elements are subject to the predetermined distribution. (An illustrative sketch of such an instruction appears after this listing.)
    Type: Grant
    Filed: October 25, 2018
    Date of Patent: November 15, 2022
    Assignee: CAMBRICON (XI'AN) SEMICONDUCTOR CO., LTD.
    Inventors: Daofu Liu, Xiao Zhang, Shaoli Liu, Tianshi Chen, Yunji Chen
  • Publication number: 20220253280
    Abstract: The present disclosure provides a computing device for processing a multi-bit width value, an integrated circuit board card, a method, and a computer readable storage medium. The computing device is included in a combined processing apparatus, and the combined processing apparatus further includes a general interconnection interface and other processing devices. The computing device interacts with the other processing devices to jointly complete a computing operation specified by a user. The combined processing apparatus further includes a storage device connected to the apparatus and the other processing devices and configured to store data of the apparatus and the other processing devices. The solution of the present disclosure can split the multi-bit width value so that the processing capability of the processor is not influenced by the bit width. (An illustrative sketch of bit-width splitting appears after this listing.)
    Type: Application
    Filed: December 21, 2021
    Publication date: August 11, 2022
    Inventors: Shaoli LIU, Shiyi ZHOU, Daofu LIU
  • Publication number: 20220188071
    Abstract: The present disclosure relates to a computing device for processing a multi-bit width value, an integrated circuit board card, a method, and a computer readable storage medium. The computing device may be included in a combined processing apparatus, and the combined processing apparatus may further include a general interconnection interface and another processing device. The computing device interacts with the other processing device to jointly complete a computing operation specified by a user. The combined processing apparatus may further include a storage device connected to the apparatus and the other processing device and configured to store data of the apparatus and the other processing device. The solution of the present disclosure can split the multi-bit width value so that the processing capability of the processor is not influenced by the bit width.
    Type: Application
    Filed: December 20, 2021
    Publication date: June 16, 2022
    Inventors: Shaoli LIU, Daofu LIU, Shiyi ZHOU
  • Patent number: 10971221
    Abstract: Aspects for a storage device with fault tolerance capability for neural networks are described herein. The aspects may include a first storage unit of a storage device. The first storage unit is configured to store one or more first bits of data, and the data includes floating point type data and fixed point type data. The first bits include one or more sign bits of the floating point type data and the fixed point type data. The aspects may further include a second storage unit of the storage device. The second storage unit may be configured to store one or more second bits of the data. In some examples, the first storage unit may include an ECC memory and the second storage unit may include a non-ECC memory. The ECC memory may include an ECC check Dynamic Random Access Memory and an ECC check Static Random Access Memory. (An illustrative sketch of this sign-bit split appears after this listing.)
    Type: Grant
    Filed: April 30, 2020
    Date of Patent: April 6, 2021
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Shaoli Liu, Xuda Zhou, Zidong Du, Daofu Liu
  • Publication number: 20210035628
    Abstract: Aspects for a storage device with fault tolerance capability for neural networks are described herein. The aspects may include a first storage unit of a storage device. The first storage unit is configured to store one or more first bits of data, and the data includes floating point type data and fixed point type data. The first bits include one or more sign bits of the floating point type data and the fixed point type data. The aspects may further include a second storage unit of the storage device. The second storage unit may be configured to store one or more second bits of the data. In some examples, the first storage unit may include an ECC memory and the second storage unit may include a non-ECC memory. The ECC memory may include an ECC check Dynamic Random Access Memory and an ECC check Static Random Access Memory.
    Type: Application
    Filed: April 30, 2020
    Publication date: February 4, 2021
    Inventors: Shaoli LIU, Xuda ZHOU, Zidong DU, Daofu LIU
  • Patent number: 10761991
    Abstract: Aspects for vector circular shifting in a neural network are described herein. The aspects may include a direct memory access unit configured to receive a vector that includes multiple elements. The multiple elements are stored in a one-dimensional data structure. The direct memory access unit may store the vector in a vector caching unit. The aspects may further include an instruction caching unit configured to receive a vector shifting instruction that includes a step length for shifting the elements in the vector. Further still, the aspects may include a computation module configured to shift the elements of the vector toward one direction by the step length. (An illustrative sketch of this circular shift appears after this listing.)
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: September 1, 2020
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Daofu Liu, Xiao Zhang, Shaoli Liu, Tianshi Chen, Yunji Chen
  • Patent number: 10755772
    Abstract: Aspects for a storage device with fault tolerance capability for neural networks are described herein. The aspects may include a first storage unit of a storage device. The first storage unit is configured to store one or more first bits of data, and the data includes floating point type data and fixed point type data. The first bits include one or more sign bits of the floating point type data and the fixed point type data. The aspects may further include a second storage unit of the storage device. The second storage unit may be configured to store one or more second bits of the data. In some examples, the first storage unit may include an ECC memory and the second storage unit may include a non-ECC memory. The ECC memory may include an ECC check Dynamic Random Access Memory and an ECC check Static Random Access Memory.
    Type: Grant
    Filed: August 1, 2019
    Date of Patent: August 25, 2020
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
    Inventors: Shaoli Liu, Xuda Zhou, Zidong Du, Daofu Liu
  • Publication number: 20200265300
    Abstract: The application provides an operation method and device. Quantized data is looked up to realize an operation, which simplifies the structure and reduces the computation energy consumption of the data; meanwhile, a plurality of operations are realized.
    Type: Application
    Filed: March 26, 2020
    Publication date: August 20, 2020
    Inventors: Shaoli LIU, Xuda ZHOU, Zidong DU, Daofu LIU
  • Publication number: 20200250539
    Abstract: The application provides a processing method and device. Weights and input neurons are quantized respectively, and a weight dictionary, a weight codebook, a neuron dictionary, and a neuron codebook are determined. A computational codebook is determined according to the weight codebook and the neuron codebook. The computational codebook is thus determined from the two types of quantized data, and combining them facilitates data processing.
    Type: Application
    Filed: July 13, 2018
    Publication date: August 6, 2020
    Inventors: Shaoli LIU, Xuda ZHOU, Zidong DU, Daofu LIU
  • Patent number: 10657439
    Abstract: The application provides an operation method and device. Quantized data is looked up to realize an operation, which simplifies the structure and reduces the computation energy consumption of the data; meanwhile, a plurality of operations are realized.
    Type: Grant
    Filed: August 1, 2019
    Date of Patent: May 19, 2020
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
    Inventors: Shaoli Liu, Xuda Zhou, Zidong Du, Daofu Liu
  • Publication number: 20200034698
    Abstract: The present application provides an operation device and related products. The operation device is configured to execute operations of a network model, wherein the network model includes a neural network model and/or a non-neural network model; the operation device comprises an operation unit, a controller unit, and a storage unit, wherein the storage unit includes a data input unit, a storage medium, and a scalar data storage unit. The technical solution provided by this application has the advantages of a fast calculation speed and energy savings.
    Type: Application
    Filed: April 17, 2018
    Publication date: January 30, 2020
    Applicant: Shanghai Cambricon Information Technology Co., Ltd.
    Inventors: Tianshi CHEN, Yimin ZHUANG, Daofu LIU, Xiaobin CHEN, Zai WANG, Shaoli LIU
  • Publication number: 20190385058
    Abstract: The present application provides an operation device and related products. The operation device is configured to execute operations of a network model, wherein the network model includes a neural network model and/or a non-neural network model; the operation device comprises an operation unit, a controller unit, and a storage unit, wherein the storage unit includes a data input unit, a storage medium, and a scalar data storage unit. The technical solution provided by this application has the advantages of a fast computation speed and energy savings.
    Type: Application
    Filed: August 12, 2019
    Publication date: December 19, 2019
    Inventors: Tianshi CHEN, Yimin ZHUANG, Daofu LIU, Xiaobing CHEN, Zai WANG, Shaoli LIU
  • Publication number: 20190370642
    Abstract: The application provides an operation method and device. Quantized data is looked up to realize an operation, which simplifies the structure and reduces the computation energy consumption of the data; meanwhile, a plurality of operations are realized.
    Type: Application
    Filed: August 1, 2019
    Publication date: December 5, 2019
    Inventors: Shaoli LIU, Xuda ZHOU, Zidong DU, Daofu LIU
  • Publication number: 20190129858
    Abstract: Aspects for vector circular shifting in a neural network are described herein. The aspects may include a direct memory access unit configured to receive a vector that includes multiple elements. The multiple elements are stored in a one-dimensional data structure. The direct memory access unit may store the vector in a vector caching unit. The aspects may further include an instruction caching unit configured to receive a vector shifting instruction that includes a step length for shifting the elements in the vector. Further still, the aspects may include a computation module configured to shift the elements of the vector toward one direction by the step length.
    Type: Application
    Filed: October 26, 2018
    Publication date: May 2, 2019
    Inventors: Daofu Liu, Xiao Zhang, Shaoli Liu, Tianshi Chen, Yunji Chen
  • Publication number: 20190065952
    Abstract: Aspects for vector operations in a neural network are described herein. The aspects may include a controller unit configured to receive an instruction to generate a random vector that includes one or more elements. The instruction may include a predetermined distribution, a count of the elements, and an address of the random vector. The aspects may further include a computation module configured to generate the one or more elements, wherein the one or more elements are subject to the predetermined distribution.
    Type: Application
    Filed: October 25, 2018
    Publication date: February 28, 2019
    Inventors: Daofu Liu, Xiao Zhang, Shaoli Liu, Tianshi Chen, Yunji Chen
  • Publication number: 20180321944
    Abstract: The present disclosure relates to a data ranking apparatus that comprises: a register group for storing K pieces of temporarily ranked maximum or minimum data in a data ranking process, wherein the register group comprises a plurality of registers connected in parallel, and two adjacent registers unidirectionally transmit data from a low level to a high level; a comparator group, which comprises a plurality of comparators connected to the registers on a one-to-one basis, compares the size relationship among a plurality of pieces of input data, and outputs the data of larger or smaller value to the corresponding registers; and a control circuit generating a plurality of flag bits applied to the registers, wherein the flag bits are used to determine whether the registers receive data transmitted from corresponding comparators or lower-level registers, and whether the registers transmit data to higher-level registers. (An illustrative sketch of this register chain appears after this listing.)
    Type: Application
    Filed: June 17, 2016
    Publication date: November 8, 2018
    Inventors: Daofu LIU, Shengyuan ZHOU, Yunji CHEN
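
Illustrative sketches for selected inventions above

The codebook-based quantization of patent 11593658 (and application 20200250539) quantizes weights and neurons separately and then serves their combination from a precomputed computational codebook. The sketch below is a minimal, assumed reconstruction, not the patented implementation: the k-means-style codebook construction, the codebook size of 8, and the interpretation of the computational codebook as an element-wise product table are illustrative choices only.

    import numpy as np

    def build_codebook(values, num_codes, seed=0):
        # Quantize a 1-D array with a tiny k-means; return (codebook, index per value).
        # The index array plays the role of a "dictionary"; the centroids are the "codebook".
        rng = np.random.default_rng(seed)
        codebook = rng.choice(values, size=num_codes, replace=False).astype(np.float64)
        for _ in range(20):
            idx = np.argmin(np.abs(values[:, None] - codebook[None, :]), axis=1)
            for c in range(num_codes):
                if np.any(idx == c):
                    codebook[c] = values[idx == c].mean()
        idx = np.argmin(np.abs(values[:, None] - codebook[None, :]), axis=1)
        return codebook, idx

    weights = np.random.randn(64)
    neurons = np.random.randn(64)
    w_codebook, w_idx = build_codebook(weights, num_codes=8)
    n_codebook, n_idx = build_codebook(neurons, num_codes=8)

    # "Computational codebook": every codeword-pair product is computed once,
    # so a run-time multiply becomes a single table lookup.
    comp_codebook = np.outer(w_codebook, n_codebook)        # shape (8, 8)
    looked_up = comp_codebook[w_idx, n_idx]                 # products via lookup
    print("max lookup error:", np.max(np.abs(looked_up - weights * neurons)))

Because every weight-codeword/neuron-codeword pair is precomputed, the per-element multiply is replaced by an index-pair lookup, which is the sense in which quantized data "is looked up to realize an operation" in the related publications 20200265300 and 20190370642.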
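
Patent 11501158 (and application 20190065952) describes an instruction that carries a predetermined distribution, an element count, and a destination address for a random vector. A minimal software model is sketched below; the instruction encoding, the flat memory layout, and the set of supported distributions are assumptions made for illustration.

    import numpy as np

    memory = np.zeros(1024, dtype=np.float64)   # toy flat memory
    rng = np.random.default_rng(42)

    def exec_random_vector(distribution, count, address):
        # Fill memory[address : address + count] with samples from the named distribution.
        if distribution == "uniform":
            samples = rng.uniform(0.0, 1.0, size=count)
        elif distribution == "gaussian":
            samples = rng.normal(0.0, 1.0, size=count)
        else:
            raise ValueError("unsupported distribution: " + distribution)
        memory[address:address + count] = samples

    exec_random_vector("gaussian", count=16, address=128)
    print(memory[128:144])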
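
Applications 20220253280 and 20220188071 split a multi-bit-width value so that the processor's fixed datapath width does not limit the operand width. One standard way to illustrate the idea, assumed here and not taken from the filings, is to split each 16-bit operand into 8-bit halves, form the narrow partial products, and recombine them.

    def mul16_via_8bit(a, b):
        # Multiply two 16-bit unsigned values using only 8-bit x 8-bit partial products.
        a_lo, a_hi = a & 0xFF, (a >> 8) & 0xFF
        b_lo, b_hi = b & 0xFF, (b >> 8) & 0xFF
        return (a_lo * b_lo
                + ((a_lo * b_hi + a_hi * b_lo) << 8)
                + ((a_hi * b_hi) << 16))

    a, b = 0xBEEF, 0x1234
    print(mul16_via_8bit(a, b) == a * b)   # True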
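
Patents 10971221 and 10755772 (and application 20210035628) keep the most damage-sensitive bits, such as sign bits, in ECC-protected memory and the remaining bits in ordinary memory. The partition below, the sign bit of an IEEE-754 float32 in one buffer and the other 31 bits in another, is one plausible split chosen only for illustration.

    import numpy as np

    values = np.array([3.5, -0.25, 1.0e-3, -42.0], dtype=np.float32)
    bits = values.view(np.uint32)

    # "ECC memory" holds the sign bits; "non-ECC memory" holds exponent and mantissa.
    ecc_store = (bits >> 31).astype(np.uint8)
    non_ecc_store = bits & np.uint32(0x7FFFFFFF)

    # Reassembling the two stores recovers the original values exactly.
    restored = ((ecc_store.astype(np.uint32) << np.uint32(31)) | non_ecc_store).view(np.float32)
    print(np.array_equal(values, restored))   # True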
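
The vector circular shift of patent 10761991 (and application 20190129858) rotates a one-dimensional vector by a step length given in the shift instruction. The slice-based rotation below captures only the arithmetic; the DMA unit, caching units, and instruction decoding of the filing are omitted.

    import numpy as np

    def circular_shift(vector, step):
        # Rotate a 1-D vector toward lower indices by `step` positions (circularly).
        step %= len(vector)
        return np.concatenate([vector[step:], vector[:step]])

    v = np.arange(8)                  # [0 1 2 3 4 5 6 7]
    print(circular_shift(v, 3))       # [3 4 5 6 7 0 1 2]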
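
Application 20180321944 keeps the K largest (or smallest) values seen so far in a chain of registers, each fed by a comparator, with flag bits steering where new data lands. The software emulation below keeps only the behavior, retaining the K largest values from a stream; the per-register flag-bit control and the parallel comparator wiring are abstracted away and are not taken from the filing.

    def top_k_register_chain(stream, k):
        # Emulate a K-register chain that retains the K largest values seen so far.
        # `registers` is kept sorted low-to-high, mirroring the low-to-high levels.
        registers = []
        for value in stream:
            if len(registers) < k:
                registers.append(value)
                registers.sort()
            elif value > registers[0]:
                registers[0] = value      # evict the current minimum at the lowest level
                registers.sort()          # restore the low-to-high ordering
        return registers

    data = [7, 3, 9, 1, 12, 5, 12, 8, 0, 6]
    print(top_k_register_chain(data, k=3))   # [9, 12, 12]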