Patents by Inventor Bingrui WANG

Bingrui WANG has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240111536
    Abstract: The present disclosure provides a data processing apparatus and related products. The products include a control module including an instruction caching unit, an instruction processing unit, and a storage queue unit. The instruction caching unit is configured to store computation instructions associated with an artificial neural network operation; the instruction processing unit is configured to parse the computation instructions to obtain a plurality of operation instructions; and the storage queue unit is configured to store an instruction queue, where the instruction queue includes a plurality of operation instructions or computation instructions to be executed in the sequence of the queue. By adopting the above-mentioned method, the present disclosure can improve the operation efficiency of related products when performing operations of a neural network model.
    Type: Application
    Filed: December 7, 2023
    Publication date: April 4, 2024
    Applicant: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Shaoli Liu, Bingrui Wang, Xiaoyong Zhou, Yimin Zhuang, Huiying Lan, Jun Liang, Hongbo Zeng
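
The control-module architecture in the entry above (an instruction caching unit, an instruction processing unit, and a storage queue unit holding an instruction queue) can be pictured with a small software model. The sketch below is purely illustrative and not the patented hardware; the class and method names and the semicolon-separated instruction format are assumptions.

```python
from collections import deque

class ControlModule:
    """Illustrative model of the control module in publication 20240111536:
    a cache of computation instructions, a parser that expands them into
    operation instructions, and a queue that preserves execution order."""

    def __init__(self):
        self.instruction_cache = []        # instruction caching unit
        self.instruction_queue = deque()   # storage queue unit

    def cache_instruction(self, computation_instruction):
        # Instruction caching unit: store computation instructions
        # associated with an artificial neural network operation.
        self.instruction_cache.append(computation_instruction)

    def parse(self, computation_instruction):
        # Instruction processing unit: split one computation instruction
        # into several operation instructions (hypothetical "op1; op2" format).
        return [op.strip() for op in computation_instruction.split(";") if op.strip()]

    def enqueue(self, computation_instruction):
        # Storage queue unit: queue the parsed operation instructions so
        # they are executed in the sequence of the queue.
        for op in self.parse(computation_instruction):
            self.instruction_queue.append(op)

    def next_operation(self):
        # Operations are issued strictly in queue order.
        return self.instruction_queue.popleft() if self.instruction_queue else None


ctrl = ControlModule()
ctrl.cache_instruction("load weights; matmul; activate")
ctrl.enqueue("load weights; matmul; activate")
print(ctrl.next_operation())  # "load weights"
```
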
  • Patent number: 11900241
    Abstract: Provided are an integrated circuit chip apparatus and a related product, the integrated circuit chip apparatus being used for executing a multiplication operation, a convolution operation or a training operation of a neural network. The present technical solution has the advantages of a small amount of calculation and low power consumption.
    Type: Grant
    Filed: March 7, 2022
    Date of Patent: February 13, 2024
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Shaoli Liu, Xinkai Song, Bingrui Wang, Yao Zhang, Shuai Hu
  • Patent number: 11900242
    Abstract: Provided are an integrated circuit chip apparatus and a related product, the integrated circuit chip apparatus being used for executing a multiplication operation, a convolution operation or a training operation of a neural network. The present technical solution has the advantages of a small amount of calculation and low power consumption.
    Type: Grant
    Filed: March 7, 2022
    Date of Patent: February 13, 2024
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Shaoli Liu, Xinkai Song, Bingrui Wang, Yao Zhang, Shuai Hu
  • Patent number: 11886880
    Abstract: The present disclosure provides a data processing apparatus and related products. The products include a control module including an instruction caching unit, an instruction processing unit, and a storage queue unit. The instruction caching unit is configured to store computation instructions associated with an artificial neural network operation; the instruction processing unit is configured to parse the computation instructions to obtain a plurality of operation instructions; and the storage queue unit is configured to store an instruction queue, where the instruction queue includes a plurality of operation instructions or computation instructions to be executed in the sequence of the queue. By adopting the above-mentioned method, the present disclosure can improve the operation efficiency of related products when performing operations of a neural network model.
    Type: Grant
    Filed: June 24, 2022
    Date of Patent: January 30, 2024
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Shaoli Liu, Bingrui Wang, Xiaoyong Zhou, Yimin Zhuang, Huiying Lan, Jun Liang, Hongbo Zeng
  • Publication number: 20240028334
    Abstract: A data processing method includes obtaining content of a descriptor when an operand of a first processing instruction includes the descriptor, where the descriptor is configured to indicate a shape of tensor data and a data address of the tensor data, and executing the first processing instruction according to the content of the descriptor: the data address of the tensor data corresponding to the operand of the first processing instruction is determined in a data storage space according to the content of the descriptor, and data processing corresponding to the first processing instruction is executed according to that data address.
    Type: Application
    Filed: September 28, 2023
    Publication date: January 25, 2024
    Inventors: Shaoli Liu, Bingrui Wang, Jun Liang
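
The descriptor described in the entry above ties the shape of tensor data to its data address in a data storage space. The following is a minimal sketch of that idea; `TensorDescriptor`, the row-major layout, and the 4-byte element size are assumptions made for illustration, not details taken from the application.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class TensorDescriptor:
    """Hypothetical descriptor: records the shape of tensor data and the
    base address of that data in a flat data storage space."""
    shape: Tuple[int, ...]
    base_address: int
    element_size: int = 4  # bytes per element (assumed)

    def address_of(self, indices: Tuple[int, ...]) -> int:
        # Resolve the data address of one element from the descriptor
        # content (row-major layout assumed for illustration).
        offset = 0
        for dim, idx in zip(self.shape, indices):
            offset = offset * dim + idx
        return self.base_address + offset * self.element_size


def execute_instruction(operand_descriptor: TensorDescriptor, indices):
    # "Executing the first processing instruction according to the content
    # of the descriptor": first determine the data address, then perform
    # the data processing at that address (printed here as a stand-in).
    addr = operand_descriptor.address_of(tuple(indices))
    print(f"processing tensor element at address {hex(addr)}")


desc = TensorDescriptor(shape=(2, 3, 4), base_address=0x1000)
execute_instruction(desc, (1, 2, 3))  # element (1, 2, 3) of a 2x3x4 tensor
```
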
  • Publication number: 20240004650
    Abstract: The present disclosure provides a data processing method and an apparatus and a related product. The products include a control module including an instruction caching unit, an instruction processing unit, and a storage queue unit. The instruction caching unit is configured to store computation instructions associated with an artificial neural network operation; the instruction processing unit is configured to parse the computation instructions to obtain a plurality of operation instructions; and the storage queue unit is configured to store an instruction queue, where the instruction queue includes a plurality of operation instructions or computation instructions to be executed in the sequence of the queue. By adopting the above-mentioned method, the present disclosure can improve the operation efficiency of related products when performing operations of a neural network model.
    Type: Application
    Filed: September 18, 2023
    Publication date: January 4, 2024
    Applicant: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Shaoli Liu, Bingrui Wang, Jun Liang
  • Patent number: 11836491
    Abstract: The present disclosure provides a data processing method and an apparatus and a related product. The products include a control module including an instruction caching unit, an instruction processing unit, and a storage queue unit. The instruction caching unit is configured to store computation instructions associated with an artificial neural network operation; the instruction processing unit is configured to parse the computation instructions to obtain a plurality of operation instructions; and the storage queue unit is configured to store an instruction queue, where the instruction queue includes a plurality of operation instructions or computation instructions to be executed in the sequence of the queue. By adopting the above-mentioned method, the present disclosure can improve the operation efficiency of related products when performing operations of a neural network model.
    Type: Grant
    Filed: April 27, 2021
    Date of Patent: December 5, 2023
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Shaoli Liu, Bingrui Wang, Jun Liang
  • Patent number: 11836497
    Abstract: Provided is an operation module, which includes a memory, a register unit, a dependency relationship processing unit, an operation unit, and a control unit. The memory is configured to store a vector, the register unit is configured to store an extension instruction, and the control unit is configured to acquire and parse the extension instruction, so as to obtain a first operation instruction and a second operation instruction. An execution sequence of the first operation instruction and the second operation instruction can be determined, and an input vector of the first operation instruction can be read from the memory. The operation unit is configured to convert an expression mode of the input data index of the first operation instruction and to screen data, and to execute the first and second operation instructions according to the execution sequence, so as to obtain a result of the extension instruction.
    Type: Grant
    Filed: July 23, 2018
    Date of Patent: December 5, 2023
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
    Inventors: Bingrui Wang, Shengyuan Zhou, Yao Zhang
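
The operation module in the entry above parses an extension instruction into a first and a second operation instruction, determines their execution sequence from a dependency relationship, and then executes them in that order. The sketch below models that flow under assumed conventions (a "|" separator between the two operation instructions and a "->" destination register); the abstract does not specify the actual instruction encoding.

```python
def parse_extension_instruction(extension_instruction):
    # Hypothetical encoding: an extension instruction bundles a first and a
    # second operation instruction, separated by "|".
    first, second = (part.strip() for part in extension_instruction.split("|"))
    return first, second


def depends_on(second, first):
    # Dependency relationship check (simplified): the second instruction
    # depends on the first if it reads the register the first one writes.
    written = first.split("->")[-1].strip()
    return written in second


def execute_extension_instruction(extension_instruction):
    first, second = parse_extension_instruction(extension_instruction)
    # Determine the execution sequence from the dependency relationship,
    # then execute in that order to obtain the result.
    order = [first, second] if depends_on(second, first) else [second, first]
    for op in order:
        print("executing:", op)


# The second instruction reads r1, which the first one writes,
# so the first instruction must run before the second.
execute_extension_instruction("load v0 -> r1 | add r1, r2 -> r3")
```
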
  • Patent number: 11803735
    Abstract: The present disclosure discloses a neural network processing module, in which a mapping unit is configured to receive an input neuron and a weight, and then process the input neuron and/or the weight to obtain a processed input neuron and a processed weight; and an operation unit is configured to perform an artificial neural network operation on the processed input neuron and the processed weight. Examples of the present disclosure may reduce additional overhead of the device, reduce the amount of memory access, and improve the efficiency of the neural network operation.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: October 31, 2023
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Yao Zhang, Shaoli Liu, Bingrui Wang, Xiaofu Meng
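
The abstract above does not say how the mapping unit processes the input neurons and weights; a common reading of such designs is that zero-valued data is screened out so the operation unit touches less data. The sketch below illustrates only that assumed interpretation; all function names are hypothetical.

```python
def mapping_unit(input_neurons, weights):
    # Hypothetical interpretation of the mapping unit: drop positions where
    # the weight is zero, so the operation unit only receives useful data.
    # (The patent abstract does not specify the exact processing.)
    kept = [(n, w) for n, w in zip(input_neurons, weights) if w != 0]
    processed_neurons = [n for n, _ in kept]
    processed_weights = [w for _, w in kept]
    return processed_neurons, processed_weights


def operation_unit(processed_neurons, processed_weights):
    # Artificial neural network operation on the processed data:
    # a plain dot product here, standing in for the real operation.
    return sum(n * w for n, w in zip(processed_neurons, processed_weights))


neurons = [0.5, 1.0, -2.0, 3.0]
weights = [0.0, 2.0, 0.0, 1.0]
n, w = mapping_unit(neurons, weights)
print(operation_unit(n, w))  # 1.0*2.0 + 3.0*1.0 = 5.0
```
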
  • Patent number: 11775311
    Abstract: A convolution operation method and a processing device for performing the same are provided. The method is performed by a processing device. The processing device includes a main processing circuit and a plurality of basic processing circuits. The basic processing circuits are configured to perform convolution operation in parallel. The technical solutions disclosed by the present disclosure can provide short operation time and low energy consumption.
    Type: Grant
    Filed: October 24, 2019
    Date of Patent: October 3, 2023
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Shaoli Liu, Tianshi Chen, Bingrui Wang, Yao Zhang
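
The entry above describes a main processing circuit that distributes a convolution across multiple basic processing circuits working in parallel. The sketch below imitates that split in software, with a thread pool standing in for the basic processing circuits; the row-wise partitioning and all function names are assumptions, not details from the patent.

```python
from concurrent.futures import ThreadPoolExecutor

def conv_rows(image, kernel, rows):
    # One "basic processing circuit": computes the convolution for a
    # slice of output rows (valid padding, stride 1).
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in rows:
        row = []
        for j in range(len(image[0]) - kw + 1):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append((i, row))
    return out


def parallel_convolution(image, kernel, num_circuits=2):
    # "Main processing circuit": splits the output rows among the basic
    # processing circuits and gathers the partial results.
    out_rows = len(image) - len(kernel) + 1
    chunks = [range(k, out_rows, num_circuits) for k in range(num_circuits)]
    with ThreadPoolExecutor(max_workers=num_circuits) as pool:
        parts = pool.map(lambda r: conv_rows(image, kernel, r), chunks)
    result = dict(pair for part in parts for pair in part)
    return [result[i] for i in range(out_rows)]


image = [[1, 2, 3, 0], [0, 1, 2, 3], [3, 0, 1, 2], [2, 3, 0, 1]]
kernel = [[1, 0], [0, 1]]
print(parallel_convolution(image, kernel))
```
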
  • Patent number: 11748604
    Abstract: An integrated circuit chip device and related products are provided. The integrated circuit chip device is used for performing a multiplication operation, a convolution operation or a training operation of a neural network. The device has the advantages of small calculation amount and low power consumption.
    Type: Grant
    Filed: December 27, 2020
    Date of Patent: September 5, 2023
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Shaoli Liu, Xinkai Song, Bingrui Wang, Yao Zhang, Shuai Hu
  • Patent number: 11748603
    Abstract: An integrated circuit chip device and related products are provided. The integrated circuit chip device is used for performing a multiplication operation, a convolution operation or a training operation of a neural network. The device has the advantages of small calculation amount and low power consumption.
    Type: Grant
    Filed: December 27, 2020
    Date of Patent: September 5, 2023
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Shaoli Liu, Xinkai Song, Bingrui Wang, Yao Zhang, Shuai Hu
  • Patent number: 11748602
    Abstract: An integrated circuit chip device and related products are provided. The integrated circuit chip device is used for performing a multiplication operation, a convolution operation or a training operation of a neural network. The device has the advantages of small calculation amount and low power consumption.
    Type: Grant
    Filed: December 27, 2020
    Date of Patent: September 5, 2023
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Shaoli Liu, Xinkai Song, Bingrui Wang, Yao Zhang, Shuai Hu
  • Patent number: 11748605
    Abstract: An integrated circuit chip device and related products are provided. The integrated circuit chip device is used for performing a multiplication operation, a convolution operation or a training operation of a neural network. The device has the advantages of small calculation amount and low power consumption.
    Type: Grant
    Filed: December 27, 2020
    Date of Patent: September 5, 2023
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Shaoli Liu, Xinkai Song, Bingrui Wang, Yao Zhang, Shuai Hu
  • Patent number: 11748601
    Abstract: An integrated circuit chip device and related products are provided. The integrated circuit chip device is used for performing a multiplication operation, a convolution operation or a training operation of a neural network. The device has the advantages of small calculation amount and low power consumption.
    Type: Grant
    Filed: December 27, 2020
    Date of Patent: September 5, 2023
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Shaoli Liu, Xinkai Song, Bingrui Wang, Yao Zhang, Shuai Hu
  • Patent number: 11740898
    Abstract: The present disclosure provides a computation device. The computation device is configured to perform a machine learning computation, and includes a storage unit, a controller unit, an operation unit, and a conversion unit. The storage unit is configured to obtain input data and a computation instruction. The controller unit is configured to extract and parse the computation instruction from the storage unit to obtain one or more operation instructions, and to send the one or more operation instructions and the input data to the operation unit. The operation unit is configured to perform operations on the input data according to the one or more operation instructions to obtain a computation result of the computation instruction. In the examples of the present disclosure, the input data involved in machine learning computations is represented by fixed-point data, thereby improving the processing speed and efficiency of training operations.
    Type: Grant
    Filed: December 16, 2019
    Date of Patent: August 29, 2023
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
    Inventors: Yao Zhang, Bingrui Wang
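
The computation device in the entry above represents the input data of machine learning computations as fixed-point data via a conversion unit. The sketch below shows one conventional fixed-point scheme (scaling by a power of two); the actual format, bit width, and rounding used by the device are not specified in the abstract and are assumed here.

```python
def to_fixed_point(values, fractional_bits=8):
    # Conversion unit (illustrative): represent floating-point input data
    # as integers scaled by 2**fractional_bits.
    scale = 1 << fractional_bits
    return [int(round(v * scale)) for v in values]


def from_fixed_point(values, fractional_bits=8):
    # Convert fixed-point results back to floating point for inspection.
    scale = 1 << fractional_bits
    return [v / scale for v in values]


def fixed_point_dot(a_fx, b_fx, fractional_bits=8):
    # Operation unit (illustrative): integer multiply-accumulate on the
    # fixed-point operands, then rescale once at the end.
    acc = sum(x * y for x, y in zip(a_fx, b_fx))
    return acc >> fractional_bits


a = [0.5, -1.25, 2.0]
b = [4.0, 0.5, 1.5]
a_fx, b_fx = to_fixed_point(a), to_fixed_point(b)
print(from_fixed_point([fixed_point_dot(a_fx, b_fx)]))  # ~[4.375]
```
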
  • Patent number: 11741351
    Abstract: An integrated circuit chip device and related products are provided. The integrated circuit chip device is used for performing a multiplication operation, a convolution operation or a training operation of a neural network. The device has the advantages of small calculation amount and low power consumption.
    Type: Grant
    Filed: December 27, 2020
    Date of Patent: August 29, 2023
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Shaoli Liu, Xinkai Song, Bingrui Wang, Yao Zhang, Shuai Hu
  • Patent number: 11734548
    Abstract: The present disclosure provides an integrated circuit chip device and a related product. The integrated circuit chip device includes a primary processing circuit and a plurality of basic processing circuits. The primary processing circuit or at least one of the plurality of basic processing circuits includes a compression mapping circuit configured to perform compression on the data of a neural network operation. The technical solution provided by the present disclosure has the advantages of a small amount of computation and low power consumption.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: August 22, 2023
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Shaoli Liu, Xinkai Song, Bingrui Wang, Yao Zhang, Shuai Hu
  • Patent number: 11720357
    Abstract: The present disclosure provides a computation device. The computation device is configured to perform a machine learning computation, and includes a storage unit, a controller unit, an operation unit, and a conversion unit. The storage unit is configured to obtain input data and a computation instruction. The controller unit is configured to extract and parse the computation instruction from the storage unit to obtain one or more operation instructions, and to send the one or more operation instructions and the input data to the operation unit. The operation unit is configured to perform operations on the input data according to the one or more operation instructions to obtain a computation result of the computation instruction. In the examples of the present disclosure, the input data involved in machine learning computations is represented by fixed-point data, thereby improving the processing speed and efficiency of training operations.
    Type: Grant
    Filed: December 16, 2019
    Date of Patent: August 8, 2023
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
    Inventors: Yao Zhang, Bingrui Wang
  • Patent number: 11709672
    Abstract: The present disclosure provides a computation device. The computation device is configured to perform a machine learning computation, and includes a storage unit, a controller unit, an operation unit, and a conversion unit. The storage unit is configured to obtain input data and a computation instruction. The controller unit is configured to extract and parse the computation instruction from the storage unit to obtain one or more operation instructions, and to send the one or more operation instructions and the input data to the operation unit. The operation unit is configured to perform operations on the input data according to the one or more operation instructions to obtain a computation result of the computation instruction. In the examples of the present disclosure, the input data involved in machine learning computations is represented by fixed-point data, thereby improving the processing speed and efficiency of training operations.
    Type: Grant
    Filed: December 16, 2019
    Date of Patent: July 25, 2023
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
    Inventors: Yao Zhang, Bingrui Wang