Patents Assigned to SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
  • Patent number: 11977968
    Abstract: The present application provides an operation device and related products. The operation device is configured to execute operations of a network model, wherein the network model includes a neural network model and/or a non-neural network model; the operation device comprises an operation unit, a controller unit and a storage unit, wherein the storage unit includes a data input unit, a storage medium and a scalar data storage unit. The technical solution provided by this application has the advantages of a fast calculation speed and low energy consumption.
    Type: Grant
    Filed: April 17, 2018
    Date of Patent: May 7, 2024
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Tianshi Chen, Yimin Zhuang, Daofu Liu, Xiaobin Chen, Zai Wang, Shaoli Liu
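
The entry above describes the operation device as a composition of an operation unit, a controller unit, and a storage unit (the latter holding a data input unit, a storage medium, and a scalar data storage unit). The following is a minimal structural sketch of that decomposition only; all class names, fields, and the example instruction format are illustrative assumptions, not the patent's own terminology.

```python
# Illustrative sketch of the unit decomposition described in patent 11977968.
# Names and behaviour are assumptions for exposition only.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class StorageUnit:
    """Holds named vector data plus a separate scalar store, as in the abstract."""
    data_input: List[float] = field(default_factory=list)            # data input unit
    storage_medium: Dict[str, List[float]] = field(default_factory=dict)
    scalar_storage: Dict[str, float] = field(default_factory=dict)   # scalar data storage unit

    def load(self, name: str, values: List[float]) -> None:
        self.storage_medium[name] = list(values)


@dataclass
class OperationUnit:
    """Executes an elementwise operation on operands fetched from storage."""
    def execute(self, op: str, a: List[float], b: List[float]) -> List[float]:
        if op == "add":
            return [x + y for x, y in zip(a, b)]
        if op == "mul":
            return [x * y for x, y in zip(a, b)]
        raise ValueError(f"unsupported op: {op}")


@dataclass
class ControllerUnit:
    """Decodes an instruction and drives the storage and operation units."""
    storage: StorageUnit
    alu: OperationUnit

    def run(self, instruction: Dict[str, str]) -> List[float]:
        a = self.storage.storage_medium[instruction["src1"]]
        b = self.storage.storage_medium[instruction["src2"]]
        return self.alu.execute(instruction["op"], a, b)


if __name__ == "__main__":
    storage = StorageUnit()
    storage.load("x", [1.0, 2.0, 3.0])
    storage.load("y", [4.0, 5.0, 6.0])
    controller = ControllerUnit(storage, OperationUnit())
    print(controller.run({"op": "add", "src1": "x", "src2": "y"}))  # [5.0, 7.0, 9.0]
```
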
  • Patent number: 11971836
    Abstract: The present application relates to a network-on-chip data processing method. The method is applied to a network-on-chip processing system, the network-on-chip processing system is used for executing machine learning calculation, and the network-on-chip processing system comprises a storage device and a calculation device. The method comprises: accessing the storage device in the network-on-chip processing system by means of a first calculation device in the network-on-chip processing system and obtaining first operation data; performing an operation on the first operation data by means of the first calculation device to obtain a first operation result; and sending the first operation result to a second calculation device in the network-on-chip processing system. According to the method, operation overhead can be reduced and data read/write efficiency can be improved.
    Type: Grant
    Filed: December 29, 2021
    Date of Patent: April 30, 2024
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Yao Zhang, Shaoli Liu, Jun Liang, Yu Chen
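
The network-on-chip abstracts in this listing (this entry and several related grants below) all describe the same three-step flow: a first calculation device reads operation data from the shared storage device, computes a first operation result, and forwards that result to a second calculation device. The sketch below models that data path with plain Python objects; the class names and the queue-based "send" are assumptions made only to make the flow concrete.

```python
# Hypothetical model of the three-step flow in the network-on-chip abstracts:
# read from shared storage -> compute locally -> forward the result to a peer device.
from queue import Queue
from typing import Dict, List


class StorageDevice:
    """Shared on-chip storage holding named operation data."""
    def __init__(self) -> None:
        self._data: Dict[str, List[float]] = {}

    def write(self, key: str, values: List[float]) -> None:
        self._data[key] = list(values)

    def read(self, key: str) -> List[float]:
        return self._data[key]


class CalculationDevice:
    """One compute node on the network-on-chip, with an inbox for peer results."""
    def __init__(self, name: str, storage: StorageDevice) -> None:
        self.name = name
        self.storage = storage
        self.inbox: Queue = Queue()

    def compute_from_storage(self, key: str) -> List[float]:
        # Step 1: access the storage device and obtain the first operation data.
        data = self.storage.read(key)
        # Step 2: perform an operation (here, an assumed elementwise square).
        return [x * x for x in data]

    def send(self, result: List[float], peer: "CalculationDevice") -> None:
        # Step 3: forward the first operation result to the second calculation device.
        peer.inbox.put((self.name, result))


if __name__ == "__main__":
    storage = StorageDevice()
    storage.write("op_data", [1.0, 2.0, 3.0])
    dev1 = CalculationDevice("dev1", storage)
    dev2 = CalculationDevice("dev2", storage)
    dev1.send(dev1.compute_from_storage("op_data"), dev2)
    print(dev2.inbox.get())  # ('dev1', [1.0, 4.0, 9.0])
```
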
  • Patent number: 11922132
    Abstract: Disclosed are an information processing method and a terminal device. The method comprises: acquiring first information, wherein the first information is information to be processed by a terminal device; calling an operation instruction in a calculation apparatus to calculate the first information so as to obtain second information; and outputting the second information. By means of the examples in the present disclosure, a calculation apparatus of a terminal device can be used to call an operation instruction to process first information, so as to output second information matching a target desired by the user, thereby improving information processing efficiency. The present technical solution has the advantages of a fast computation speed and high efficiency.
    Type: Grant
    Filed: December 11, 2020
    Date of Patent: March 5, 2024
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Tianshi Chen, Shaoli Liu, Zai Wang, Shuai Hu
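
The method in the entry above reduces to three steps: acquire first information, call an operation instruction on a calculation apparatus, and output the second information. A toy dispatcher is sketched below; the instruction table and the example operations are invented for illustration, since the abstract does not name any specific instructions.

```python
# Toy sketch of "acquire -> call operation instruction -> output" from patent 11922132.
# The instruction names below are invented for illustration only.
from typing import Callable, Dict, List

OPERATIONS: Dict[str, Callable[[List[float]], List[float]]] = {
    "scale2x": lambda xs: [2 * x for x in xs],
    "normalize": lambda xs: [x / max(xs) for x in xs] if xs and max(xs) else xs,
}


def process(first_information: List[float], instruction: str) -> List[float]:
    """Call the named operation instruction on the first information and return the result."""
    second_information = OPERATIONS[instruction](first_information)
    return second_information


if __name__ == "__main__":
    print(process([1.0, 2.0, 4.0], "normalize"))  # [0.25, 0.5, 1.0]
```
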
  • Patent number: 11907844
    Abstract: The present disclosure provides a processing device including: a coarse-grained pruning unit configured to perform coarse-grained pruning on a weight of a neural network to obtain a pruned weight, and an operation unit configured to train the neural network according to the pruned weight. The coarse-grained pruning unit is specifically configured to select M weights from the weights of the neural network through a sliding window, and when the M weights meet a preset condition, all or part of the M weights may be set to 0. The processing device can reduce memory accesses while reducing the amount of computation, thereby achieving a speedup and reducing energy consumption.
    Type: Grant
    Filed: November 28, 2019
    Date of Patent: February 20, 2024
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
    Inventors: Zidong Du, Xuda Zhou, Shaoli Liu, Tianshi Chen
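
The pruning step above can be pictured as sliding a window over the weights, selecting M of them at a time, and zeroing the whole group when a preset condition holds. A minimal sketch follows; the group L1-norm threshold is an assumed stand-in for the patent's unspecified preset condition, and the non-overlapping window stride is likewise an assumption.

```python
# Minimal sketch of coarse-grained (group) pruning with a sliding window, in the
# spirit of patent 11907844. The L1-norm threshold is an assumed preset condition.
import numpy as np


def coarse_grained_prune(weights: np.ndarray, window: int, threshold: float) -> np.ndarray:
    """Zero out each group of `window` weights whose L1 norm falls below `threshold`,
    returning a coarsely pruned copy of `weights`."""
    pruned = weights.copy().reshape(-1)
    for start in range(0, pruned.size - window + 1, window):
        group = pruned[start:start + window]          # the M weights in the window
        if np.abs(group).sum() < threshold:           # the assumed preset condition
            pruned[start:start + window] = 0.0        # set all M weights to 0
    return pruned.reshape(weights.shape)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=(4, 8)).astype(np.float32)
    w_pruned = coarse_grained_prune(w, window=4, threshold=0.2)
    print("zeroed weights:", int((w_pruned == 0).sum()), "of", w.size)
```
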
  • Publication number: 20240028666
    Abstract: The present disclosure discloses a method for performing a matrix multiplication on an on-chip system and related products. The on-chip system is included in a computing processing apparatus of a combined processing apparatus. The computing processing apparatus includes one or a plurality of integrated circuit apparatuses. The combined processing apparatus further includes an interface apparatus and other processing apparatus. The computing processing apparatus interacts with the other processing apparatus to jointly complete a computing operation specified by a user. The combined processing apparatus further includes a storage apparatus. The storage apparatus is connected to the computing processing apparatus and the other processing apparatus, respectively, and is configured to store data of the computing processing apparatus and the other processing apparatus.
    Type: Application
    Filed: September 29, 2023
    Publication date: January 25, 2024
    Applicant: Shanghai Cambricon Information Technology Co., Ltd
    Inventors: Zheng SUN, Ming LI, Yehao YU, Zhize CHEN, Guang JIANG, Xin YU
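
The abstract above describes the surrounding combined processing apparatus rather than the multiplication method itself, so the sketch below shows only a generic blocked matrix multiplication of the kind such an on-chip system targets; the tile size and blocking scheme are assumptions and not the published method.

```python
# Generic blocked (tiled) matrix multiplication, shown only as the kind of computation
# publication 20240028666 targets; the tiling scheme itself is an assumption.
import numpy as np


def blocked_matmul(a: np.ndarray, b: np.ndarray, tile: int = 32) -> np.ndarray:
    m, k = a.shape
    k2, n = b.shape
    assert k == k2, "inner dimensions must match"
    c = np.zeros((m, n), dtype=a.dtype)
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            for p in range(0, k, tile):
                # Each tile-sized block would be what fits in on-chip storage on real hardware.
                c[i:i + tile, j:j + tile] += a[i:i + tile, p:p + tile] @ b[p:p + tile, j:j + tile]
    return c


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x, y = rng.standard_normal((64, 48)), rng.standard_normal((48, 80))
    print(np.allclose(blocked_matmul(x, y), x @ y))  # True
```
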
  • Patent number: 11880328
    Abstract: The present application relates to a network-on-chip data processing method. The method is applied to a network-on-chip processing system, the network-on-chip processing system is used for executing machine learning calculation, and the network-on-chip processing system comprises a storage device and a calculation device. The method comprises: accessing the storage device in the network-on-chip processing system by means of a first calculation device in the network-on-chip processing system, and obtaining first operation data; performing an operation on the first operation data by means of the first calculation device to obtain a first operation result; and sending the first operation result to a second calculation device in the network-on-chip processing system. According to the method, operation overhead can be reduced and data read/write efficiency can be improved.
    Type: Grant
    Filed: December 29, 2021
    Date of Patent: January 23, 2024
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Yao Zhang, Shaoli Liu, Jun Liang, Yu Chen
  • Patent number: 11880329
    Abstract: The present application relates to a network-on-chip data processing method. The method is applied to a network-on-chip processing system, the network-on-chip processing system is used for executing machine learning calculation, and the network-on-chip processing system comprises a storage device and a calculation device. The method comprises: accessing the storage device in the network-on-chip processing system by means of a first calculation device in the network-on-chip processing system and obtaining first operation data; performing an operation on the first operation data by means of the first calculation device to obtain a first operation result; and sending the first operation result to a second calculation device in the network-on-chip processing system. According to the method, operation overhead can be reduced and data read/write efficiency can be improved.
    Type: Grant
    Filed: December 29, 2021
    Date of Patent: January 23, 2024
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Shaoli Liu, Zhen Li, Yao Zhang
  • Patent number: 11880330
    Abstract: The present application relates to a network-on-chip data processing method. The method is applied to a network-on-chip processing system, the network-on-chip processing system is used for executing machine learning calculation, and the network-on-chip processing system comprises a storage device and a calculation device. The method comprises: accessing the storage device in the network-on-chip processing system by means of a first calculation device in the network-on-chip processing system and obtaining first operation data; performing an operation on the first operation data by means of the first calculation device to obtain a first operation result; and sending the first operation result to a second calculation device in the network-on-chip processing system. According to the method, operation overhead can be reduced and data read/write efficiency can be improved.
    Type: Grant
    Filed: December 29, 2021
    Date of Patent: January 23, 2024
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Shaoli Liu, Zhen Li, Yao Zhang
  • Patent number: 11868299
    Abstract: The present application relates to a network-on-chip data processing method. The method is applied to a network-on-chip processing system, the network-on-chip processing system is used for executing machine learning calculation, and the network-on-chip processing system comprises a storage device and a calculation device. The method comprises: accessing the storage device in the network-on-chip processing system by means of a first calculation device in the network-on-chip processing system and obtaining first operation data; performing an operation on the first operation data by means of the first calculation device to obtain a first operation result; and sending the first operation result to a second calculation device in the network-on-chip processing system. According to the method, operation overhead can be reduced and data read/write efficiency can be improved.
    Type: Grant
    Filed: December 29, 2021
    Date of Patent: January 9, 2024
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Shaoli Liu, Zhen Li, Yao Zhang
  • Patent number: 11841816
    Abstract: The present application relates to a network-on-chip data processing method. The method is applied to a network-on-chip processing system, the network-on-chip processing system is used for executing machine learning calculation, and the network-on-chip processing system comprises a storage device and a calculation device. The method comprises: accessing the storage device in the network-on-chip processing system by means of a first calculation device in the network-on-chip processing system and obtaining first operation data; performing an operation on the first operation data by means of the first calculation device to obtain a first operation result; and sending the first operation result to a second calculation device in the network-on-chip processing system. According to the method, operation overhead can be reduced and data read/write efficiency can be improved.
    Type: Grant
    Filed: December 29, 2021
    Date of Patent: December 12, 2023
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Yao Zhang, Shaoli Liu, Jun Liang, Yu Chen
  • Patent number: 11836497
    Abstract: Provided is an operation module, which includes a memory, a register unit, a dependency relationship processing unit, an operation unit, and a control unit. The memory is configured to store a vector, the register unit is configured to store an extension instruction, and the control unit is configured to acquire and parse the extension instruction so as to obtain a first operation instruction and a second operation instruction. An execution sequence of the first operation instruction and the second operation instruction can be determined, and an input vector of the first operation instruction can be read from the memory. The operation unit is configured to convert an expression mode of the input data index of the first operation instruction and to screen data, and to execute the first and second operation instructions according to the execution sequence, so as to obtain a result of the extension instruction.
    Type: Grant
    Filed: July 23, 2018
    Date of Patent: December 5, 2023
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
    Inventors: Bingrui Wang, Shengyuan Zhou, Yao Zhang
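
The module above parses one extension instruction into a first and a second operation instruction, determines their execution sequence (the job of the dependency relationship processing unit), and then executes them. The sketch below mimics that parse-order-execute pipeline; the textual instruction encoding and the simple read-after-write dependency rule are assumptions made only for illustration.

```python
# Hypothetical parse -> order -> execute pipeline in the spirit of patent 11836497.
# The instruction format ("op dst srcs; op dst srcs") and the dependency rule are invented.
from typing import Dict, List, Tuple


def parse_extension_instruction(ext: str) -> Tuple[Dict, Dict]:
    """Split an extension instruction into a first and a second operation instruction."""
    first_text, second_text = ext.split(";")

    def decode(text: str) -> Dict:
        op, dst, *srcs = text.split()
        return {"op": op, "dst": dst, "srcs": srcs}

    return decode(first_text), decode(second_text)


def execution_sequence(first: Dict, second: Dict) -> List[Dict]:
    """Dependency check: if the second instruction reads the first one's destination,
    the order is forced; otherwise the two are independent and may be reordered."""
    if first["dst"] in second["srcs"]:
        return [first, second]   # true dependency, keep the original order
    return [second, first]       # independent; either order gives the same result


def execute(instructions: List[Dict], memory: Dict[str, List[float]]) -> None:
    for ins in instructions:
        a, b = (memory[s] for s in ins["srcs"])
        if ins["op"] == "vadd":
            memory[ins["dst"]] = [x + y for x, y in zip(a, b)]
        elif ins["op"] == "vmul":
            memory[ins["dst"]] = [x * y for x, y in zip(a, b)]


if __name__ == "__main__":
    mem = {"v0": [1.0, 2.0], "v1": [3.0, 4.0]}
    first, second = parse_extension_instruction("vadd v2 v0 v1; vmul v3 v2 v1")
    execute(execution_sequence(first, second), mem)
    print(mem["v3"])  # [12.0, 24.0]
```
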
  • Publication number: 20230367722
    Abstract: The present disclosure discloses a data processing apparatus, a data processing method, and related products. The data processing apparatus is used as a computing apparatus and is included in a combined processing apparatus. The combined processing apparatus further includes an interface apparatus and other processing apparatus. The computing apparatus interacts with other processing apparatus to jointly complete a computing operation specified by a user. The combined processing apparatus further includes a storage apparatus. The storage apparatus is respectively connected to the computing apparatus and other processing apparatus and is used to store data of the computing apparatus and other processing apparatus. The solution of the present disclosure takes full advantage of parallelism among different storage units to improve utilization of each functional component.
    Type: Application
    Filed: July 25, 2023
    Publication date: November 16, 2023
    Applicant: Shanghai Cambricon Information Technology Co., Ltd
    Inventors: Siyuan HE, Runsen YANG, Chunyuan WANG, Liutao ZHENG, Yashuai LV, Xuegang LIANG
  • Patent number: 11809360
    Abstract: The present application relates to a network-on-chip data processing method. The method is applied to a network-on-chip processing system, the network-on-chip processing system is used for executing machine learning calculation, and the network-on-chip processing system comprises a storage device and a calculation device. The method comprises: accessing the storage device in the network-on-chip processing system by means of a first calculation device in the network-on-chip processing system, and obtaining first operation data; performing an operation on the first operation data by means of the first calculation device to obtain a first operation result; and sending the first operation result to a second calculation device in the network-on-chip processing system. According to the method, operation overhead can be reduced and data read/write efficiency can be improved.
    Type: Grant
    Filed: December 29, 2021
    Date of Patent: November 7, 2023
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Shaoli Liu, Zhen Li, Yao Zhang
  • Patent number: 11797467
    Abstract: The present application relates to a network-on-chip data processing method. The method is applied to a network-on-chip processing system, the network-on-chip processing system is used for executing machine learning calculation, and the network-on-chip processing system comprises a storage device and a calculation device. The method comprises: accessing the storage device in the network-on-chip processing system by means of a first calculation device in the network-on-chip processing system and obtaining first operation data; performing an operation on the first operation data by means of the first calculation device to obtain a first operation result; and sending the first operation result to a second calculation device in the network-on-chip processing system. According to the method, operation overhead can be reduced and data read/write efficiency can be improved.
    Type: Grant
    Filed: December 29, 2021
    Date of Patent: October 24, 2023
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Shaoli Liu, Zhen Li, Yao Zhang
  • Patent number: 11789847
    Abstract: The present application discloses an on-chip code breakpoint debugging method, an on-chip processor, and a chip breakpoint debugging system. The on-chip processor starts and executes an on-chip code, and an output function is set at a breakpoint position of the on-chip code. The on-chip processor obtains output information output by the output function, and stores the output information into an off-chip memory. In one embodiment, according to the output information, output by the output function and stored in the off-chip memory, the on-chip processor can obtain execution conditions of the breakpoints of the on-chip code in real time, achieve the purpose of debugging multiple breakpoints in the on-chip code concurrently, and improve debugging efficiency.
    Type: Grant
    Filed: December 30, 2020
    Date of Patent: October 17, 2023
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
    Inventors: Zhenyu Su, Dingfei Zhang, Xiaoyong Zhou
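
The debugging scheme above plants an output function at each breakpoint position in the on-chip code and has it write its output information to off-chip memory, which the host then reads to follow execution in real time. The sketch below emulates that with an in-process list standing in for off-chip memory; the function and variable names are assumptions, not the patent's API.

```python
# Emulation of the breakpoint scheme in patent 11789847: an output function at each
# breakpoint appends its output information to a store standing in for off-chip memory,
# which a host-side reader inspects. Names are illustrative assumptions.
import time
from typing import Any, Dict, List

OFF_CHIP_MEMORY: List[Dict[str, Any]] = []   # stand-in for the off-chip buffer


def breakpoint_output(tag: str, **state: Any) -> None:
    """The 'output function' placed at a breakpoint position in the on-chip code."""
    OFF_CHIP_MEMORY.append({"tag": tag, "time": time.time(), "state": dict(state)})


def on_chip_code(values: List[float]) -> float:
    total = 0.0
    for i, v in enumerate(values):
        total += v
        if i % 2 == 0:                        # breakpoint positions chosen for the sketch
            breakpoint_output("partial_sum", index=i, total=total)
    breakpoint_output("done", total=total)
    return total


if __name__ == "__main__":
    on_chip_code([1.0, 2.0, 3.0, 4.0])
    # The host reads the execution conditions of each breakpoint from "off-chip memory".
    for record in OFF_CHIP_MEMORY:
        print(record["tag"], record["state"])
```
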
  • Patent number: 11775832
    Abstract: Aspects of data modification for neural networks are described herein. The aspects may include a data modifier configured to receive input data and weight values of a neural network. The data modifier may include an input data modifier configured to modify the received input data and a weight modifier configured to modify the received weight values. The aspects may further include a computing unit configured to calculate one or more groups of output data based on the modified input data and the modified weight values.
    Type: Grant
    Filed: June 18, 2019
    Date of Patent: October 3, 2023
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Shaoli Liu, Yifan Hao, Yunji Chen, Qi Guo, Tianshi Chen
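
The apparatus above pairs an input-data modifier with a weight modifier and feeds both modified streams into a computing unit. A sketch of that three-stage pipeline follows; the particular modifications used here (zero-mean shifting of inputs, magnitude clipping of weights) are assumptions, since the abstract does not specify them.

```python
# Sketch of the modify-inputs / modify-weights / compute pipeline of patent 11775832.
# The concrete modifications below are assumptions chosen only to make the sketch runnable.
import numpy as np


def input_data_modifier(x: np.ndarray) -> np.ndarray:
    """Assumed modification: shift each input row to zero mean."""
    return x - x.mean(axis=1, keepdims=True)


def weight_modifier(w: np.ndarray, clip: float = 1.0) -> np.ndarray:
    """Assumed modification: clip weight magnitudes to a fixed range."""
    return np.clip(w, -clip, clip)


def computing_unit(x_mod: np.ndarray, w_mod: np.ndarray) -> np.ndarray:
    """Compute the groups of output data from the modified inputs and modified weights."""
    return x_mod @ w_mod


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    inputs = rng.standard_normal((4, 8))
    weights = rng.standard_normal((8, 3)) * 2.0
    outputs = computing_unit(input_data_modifier(inputs), weight_modifier(weights))
    print(outputs.shape)  # (4, 3)
```
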
  • Patent number: 11762631
    Abstract: Disclosed are an information processing method and a terminal device. The method comprises: acquiring first information, wherein the first information is information to be processed by a terminal device; calling an operation instruction in a calculation apparatus to calculate the first information so as to obtain second information; and outputting the second information. By means of the examples in the present disclosure, a calculation apparatus of a terminal device can be used to call an operation instruction to process first information, so as to output second information matching a target desired by the user, thereby improving information processing efficiency. The present technical solution has the advantages of a fast computation speed and high efficiency.
    Type: Grant
    Filed: December 11, 2020
    Date of Patent: September 19, 2023
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Tianshi Chen, Shaoli Liu, Zai Wang, Shuai Hu
  • Patent number: 11740898
    Abstract: The present disclosure provides a computation device. The computation device is configured to perform a machine learning computation, and includes an operation unit, a controller unit, a conversion unit, and a storage unit. The storage unit is configured to obtain input data and a computation instruction. The controller unit is configured to extract and parse the computation instruction from the storage unit to obtain one or more operation instructions, and to send the one or more operation instructions and the input data to the operation unit. The operation unit is configured to perform operations on the input data according to the one or more operation instructions to obtain a computation result of the computation instruction. In the examples of the present disclosure, the input data involved in machine learning computations is represented by fixed-point data, thereby improving the processing speed and efficiency of training operations.
    Type: Grant
    Filed: December 16, 2019
    Date of Patent: August 29, 2023
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
    Inventors: Yao Zhang, Bingrui Wang
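
The key point in the entry above is that the input data for the machine learning computation is represented as fixed-point numbers, which is what speeds up the training operations. The sketch below shows a common symmetric fixed-point conversion (scale by a power of two and round); the 8-bit width and fraction-bit count are assumptions, not the patent's format.

```python
# Simple symmetric fixed-point representation of the kind alluded to in patent 11740898.
# The 8-bit total width and 5 fraction bits are assumptions made for this sketch.
import numpy as np


def to_fixed_point(x: np.ndarray, frac_bits: int = 5, total_bits: int = 8) -> np.ndarray:
    """Quantize float data to signed fixed-point integers with `frac_bits` fraction bits."""
    scale = 2 ** frac_bits
    lo, hi = -(2 ** (total_bits - 1)), 2 ** (total_bits - 1) - 1
    return np.clip(np.round(x * scale), lo, hi).astype(np.int32)


def from_fixed_point(q: np.ndarray, frac_bits: int = 5) -> np.ndarray:
    """Convert fixed-point integers back to floating point."""
    return q.astype(np.float64) / (2 ** frac_bits)


if __name__ == "__main__":
    data = np.array([0.1, -1.7, 2.0, 3.9])
    q = to_fixed_point(data)
    print(q)                    # [  3 -54  64 125]
    print(from_fixed_point(q))  # [ 0.09375 -1.6875   2.       3.90625]
```
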
  • Patent number: 11734002
    Abstract: The present disclosure provides a counting device and counting method. The device includes a storage unit, a counting unit, and a register unit. The storage unit may be connected to the counting unit for storing input data to be counted and for storing, after counting, the number of elements in the input data that satisfy a given condition; the register unit may be configured to store the address where the input data to be counted is stored in the storage unit. The counting unit may be connected to the register unit and may be configured to acquire a counting instruction, read the storage address of the input data to be counted from the register unit according to the counting instruction, acquire the corresponding input data to be counted from the storage unit, perform statistical counting on the number of elements in the input data that satisfy the given condition, and obtain a counting result.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: August 22, 2023
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
    Inventors: Tianshi Chen, Jie Wei, Tian Zhi, Zai Wang
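
The counting flow above (fetch the storage address from the register unit, read the input data, count the elements that satisfy the given condition, return the count) is easy to mirror in software. The register and memory dictionaries and the example predicate below are assumptions made only for illustration.

```python
# Software mirror of the counting flow in patent 11734002: resolve an address via a
# register, read the input data, count elements satisfying a condition. Names are assumed.
from typing import Callable, Dict, List

MEMORY: Dict[int, List[int]] = {0x100: [3, 0, 7, 0, 5, 0, 9]}   # stand-in storage unit
REGISTERS: Dict[str, int] = {"r0": 0x100}                       # stand-in register unit


def counting_instruction(register: str, condition: Callable[[int], bool]) -> int:
    """Read the storage address from the register, fetch the input data, and
    return how many elements satisfy the given condition."""
    address = REGISTERS[register]
    data = MEMORY[address]
    return sum(1 for element in data if condition(element))


if __name__ == "__main__":
    nonzero_count = counting_instruction("r0", condition=lambda v: v != 0)
    print(nonzero_count)  # 4
```
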
  • Patent number: 11726754
    Abstract: Disclosed are a general machine learning model generation method and apparatus, and a computer device and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to a data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). By means of the method, the compiled result of the corresponding general model can be executed directly when an algorithm is run, which avoids repetitive compilation, thus greatly improving the efficiency of machine learning algorithm implementation and shortening the time from compilation to obtaining execution results.
    Type: Grant
    Filed: June 26, 2022
    Date of Patent: August 15, 2023
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Weijian Du, Linyang Wu, Xunyu Chen
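
The four numbered steps above (classify task parameters into task instructions and model parameters, aggregate them by data type into stack and heap data, integrate the two into a general model) map naturally onto a small packaging routine. The sketch below is one possible reading only; the classification rule (strings as instructions, numeric data as parameters), the stack/heap split, and the serialized layout are all assumptions.

```python
# One possible reading of steps S1201-S1204 from patent 11726754: classify task
# parameters, split by data type into stack/heap data, and bundle a "general model".
# The classification rule and the output layout are assumptions for this sketch.
import json
from typing import Any, Dict, Tuple


def classify_task_parameters(task_params: Dict[str, Any]) -> Tuple[Dict, Dict]:
    """S1202: split parameters into task instructions (strings here) and
    model parameters (numeric data here)."""
    instructions = {k: v for k, v in task_params.items() if isinstance(v, str)}
    model_params = {k: v for k, v in task_params.items() if not isinstance(v, str)}
    return instructions, model_params


def aggregate(instructions: Dict, model_params: Dict) -> Tuple[Dict, Dict]:
    """S1203: aggregate by data type -- small fixed-size values as stack data,
    bulk parameter arrays as heap data (an assumed split)."""
    stack_data = {"instructions": instructions,
                  "scalars": {k: v for k, v in model_params.items() if isinstance(v, (int, float))}}
    heap_data = {k: v for k, v in model_params.items() if isinstance(v, list)}
    return stack_data, heap_data


def integrate(stack_data: Dict, heap_data: Dict) -> bytes:
    """S1204: integrate the stack and heap data into one reusable 'general model' blob."""
    return json.dumps({"stack": stack_data, "heap": heap_data}).encode("utf-8")


if __name__ == "__main__":
    task = {"op_sequence": "conv->relu->pool", "learning_rate": 0.01,
            "conv_weights": [0.2, -0.1, 0.4]}                          # S1201: task parameters
    model_blob = integrate(*aggregate(*classify_task_parameters(task)))  # S1202-S1204
    print(len(model_blob), "bytes")
```
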