Patents Assigned to SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
  • Patent number: 11379199
    Abstract: Disclosed are a general-purpose machine learning model generation method and apparatus, a computer device, and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201), performing classification processing on the task parameters to obtain task instructions and model parameters (S1202), aggregating the task instructions and the model parameters according to data type to obtain stack data and heap data (S1203), and integrating the stack data and the heap data to obtain a general-purpose machine learning model (S1204). By means of the method, the compiled result of the corresponding general-purpose model can be executed directly while an algorithm is running, which avoids repetitive compilation, greatly improves the efficiency of machine learning algorithm implementation, and shortens the time from compilation to obtaining execution results. (See the illustrative sketch after this entry.)
    Type: Grant
    Filed: December 22, 2020
    Date of Patent: July 5, 2022
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Weijian Du, Linyang Wu, Xunyu Chen
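The four steps S1201 through S1204 above map naturally onto a small pipeline. The Python sketch below is only one reading of the abstract, not the patented implementation: the GeneralModel container, the key-prefix classification rule, and the stack/heap split by value type are assumptions made for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class GeneralModel:
    """Illustrative container for the integrated general-purpose model."""
    stack_data: dict = field(default_factory=dict)  # per-run, mutable data
    heap_data: dict = field(default_factory=dict)   # shared, persistent data

def acquire_task_parameters(task):
    # S1201: acquire the task parameters of a machine learning task.
    return task["parameters"]

def classify(parameters):
    # S1202: classify task parameters into task instructions and model parameters
    # (assumed rule: keys prefixed with "op_" are instructions).
    instructions = {k: v for k, v in parameters.items() if k.startswith("op_")}
    model_params = {k: v for k, v in parameters.items() if not k.startswith("op_")}
    return instructions, model_params

def aggregate(instructions, model_params):
    # S1203: aggregate by data type into stack data and heap data
    # (assumed rule: instructions and scalars on the stack, bulk data on the heap).
    stack, heap = dict(instructions), {}
    for name, value in model_params.items():
        (heap if isinstance(value, (list, bytes)) else stack)[name] = value
    return stack, heap

def integrate(stack, heap):
    # S1204: integrate stack and heap data into a reusable model object that
    # can be executed again without recompiling the algorithm.
    return GeneralModel(stack_data=stack, heap_data=heap)

task = {"parameters": {"op_conv": "conv2d", "weights": [0.1, 0.2], "lr": 0.01}}
model = integrate(*aggregate(*classify(acquire_task_parameters(task))))
print(model)
```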
  • Patent number: 11360811
    Abstract: Computer systems, data processing methods, and computer-readable media are provided to run original networks. An exemplary computer system includes first and second processors, a memory storing offline models and corresponding input data of a plurality of original networks, and a runtime system configured to run on the first processor. The runtime system, when run on the first processor, causes the first processor to implement a plurality of virtual devices comprising a data processing device configured to obtain an offline model and corresponding input data of an original network from the memory, an equipment management device configured to control turning the second processor on or off, and a task execution device configured to control the second processor to run the offline model of the original network. (See the illustrative sketch after this entry.)
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: June 14, 2022
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
    Inventors: Linyang Wu, Qi Guo, Xunyu Chen, Kangyu Wang
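To make the virtual-device structure above concrete, here is a minimal Python sketch. It assumes the memory is a plain dictionary and that turning the second processor on or off can be simulated with a flag and an ordinary function call; every class name is invented for the example.

```python
class DataProcessingDevice:
    """Obtains an offline model and its input data from memory (here, a dict)."""
    def __init__(self, memory):
        self.memory = memory

    def load(self, network_name):
        entry = self.memory[network_name]
        return entry["offline_model"], entry["input_data"]

class EquipmentManagementDevice:
    """Controls turning the second processor on or off (simulated with a flag)."""
    def __init__(self):
        self.powered_on = False

    def power(self, on):
        self.powered_on = on

class TaskExecutionDevice:
    """Controls the second processor to run the offline model (simulated call)."""
    def run(self, offline_model, input_data):
        # A real system would dispatch to the accelerator driver here.
        return f"ran {offline_model} on {input_data}"

class RuntimeSystem:
    """Wires the three virtual devices together, as the abstract describes."""
    def __init__(self, memory):
        self.data_dev = DataProcessingDevice(memory)
        self.equip_dev = EquipmentManagementDevice()
        self.exec_dev = TaskExecutionDevice()

    def run_network(self, name):
        model, inputs = self.data_dev.load(name)
        self.equip_dev.power(True)
        try:
            return self.exec_dev.run(model, inputs)
        finally:
            self.equip_dev.power(False)

memory = {"lenet": {"offline_model": "lenet_offline.bin", "input_data": [1, 2, 3]}}
print(RuntimeSystem(memory).run_network("lenet"))
```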
  • Patent number: 11334330
    Abstract: Disclosed are a general machine learning model generation method and apparatus, a computer device, and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). By means of the method, the compiled result of the corresponding general model can be executed directly while an algorithm is running, which avoids repetitive compilation, greatly improves the efficiency of machine learning algorithm implementation, and shortens the time from compilation to obtaining execution results.
    Type: Grant
    Filed: December 22, 2020
    Date of Patent: May 17, 2022
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Weijian Du, Linyang Wu, Xunyu Chen
  • Patent number: 11334329
    Abstract: Disclosed are a general machine learning model generation method and apparatus, a computer device, and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). By means of the method, the compiled result of the corresponding general model can be executed directly while an algorithm is running, which avoids repetitive compilation, greatly improves the efficiency of machine learning algorithm implementation, and shortens the time from compilation to obtaining execution results.
    Type: Grant
    Filed: May 7, 2019
    Date of Patent: May 17, 2022
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Weijian Du, Linyang Wu, Xunyu Chen
  • Publication number: 20220121908
    Abstract: Embodiments of the present disclosure relate to a method and an apparatus for processing data, and related products. The embodiments of the present disclosure relate to a board card including a storage component, an interface apparatus, a control component, and an artificial intelligence chip, where the artificial intelligence chip is connected to the storage component, the control component, and the interface apparatus, respectively. The storage component is used to store data; the interface apparatus is used to realize data transmission between the artificial intelligence chip and an external device; and the control component is used to monitor a state of the artificial intelligence chip. The board card may be used to perform artificial intelligence computations.
    Type: Application
    Filed: December 29, 2021
    Publication date: April 21, 2022
    Applicant: Shanghai Cambricon Information Technology Co., Ltd
    Inventors: Yao ZHANG, Guang JIANG, Xishan ZHANG, Shiyi ZHOU, Di HUANG, Chang LIU, Jiaming GUO
  • Patent number: 11308398
    Abstract: Aspects of data modification for neural networks are described herein. The aspects may include a connection value generator configured to receive one or more groups of input data and one or more weight values and generate one or more connection values based on the one or more weight values. The aspects may further include a pruning module configured to modify the one or more groups of input data and the one or more weight values based on the connection values. Further still, the aspects may include a computing unit configured to update the one or more weight values and/or calculate one or more input gradients. (See the illustrative sketch after this entry.)
    Type: Grant
    Filed: June 27, 2019
    Date of Patent: April 19, 2022
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Yunji Chen, Xinkai Song, Shaoli Liu, Tianshi Chen
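The pruning flow above can be illustrated with a short sketch. The assumption here is that a connection value is simply the weight magnitude and that pruning zeroes out connections below a fixed threshold; the patent does not commit to either rule.

```python
import numpy as np

def connection_values(weights):
    # Generate one connection value per weight; here, just the absolute magnitude.
    return np.abs(weights)

def prune(inputs, weights, threshold=0.05):
    # Modify the input data and weight values: zero out every connection whose
    # connection value falls below the threshold.
    mask = connection_values(weights) >= threshold
    return inputs * mask, weights * mask

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))         # one group of input data
w = rng.normal(scale=0.1, size=8)   # the corresponding weight values
x_pruned, w_pruned = prune(x, w)
print("kept connections:", int((w_pruned != 0).sum()), "of", w.size)
```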
  • Patent number: 11307836
    Abstract: Disclosed are a general machine learning model generation method and apparatus, a computer device, and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). By means of the method, the compiled result of the corresponding general model can be executed directly while an algorithm is running, which avoids repetitive compilation, greatly improves the efficiency of machine learning algorithm implementation, and shortens the time from compilation to obtaining execution results.
    Type: Grant
    Filed: December 22, 2020
    Date of Patent: April 19, 2022
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Weijian Du, Linyang Wu, Xunyu Chen
  • Patent number: 11307865
    Abstract: The disclosure provides a data processing device and method. The data processing device may include: a task configuration information storage unit and a task queue configuration unit. The task configuration information storage unit is configured to store configuration information of tasks. The task queue configuration unit is configured to configure a task queue according to the configuration information stored in the task configuration information storage unit. According to the disclosure, a task queue may be configured according to the configuration information. (See the illustrative sketch after this entry.)
    Type: Grant
    Filed: November 28, 2019
    Date of Patent: April 19, 2022
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Tianshi Chen, Haoyuan He, Shuai Hu
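Several entries on this page share this abstract, so a single sketch is given here. It assumes each stored configuration record carries a name and a priority, and that configuring the task queue means ordering tasks by that priority; both details are illustrative, not taken from the patent.

```python
class TaskConfigStore:
    """Stores configuration information of tasks (the 'task configuration
    information storage unit' in the abstract)."""
    def __init__(self):
        self._configs = []

    def add(self, name, priority=0, device="core0"):
        self._configs.append({"name": name, "priority": priority, "device": device})

    def all(self):
        return list(self._configs)

def configure_task_queue(store):
    """Builds an execution queue from the stored configuration information
    (the role of the 'task queue configuration unit'): highest priority first."""
    return sorted(store.all(), key=lambda cfg: -cfg["priority"])

store = TaskConfigStore()
store.add("pool_forward", priority=1)
store.add("conv_forward", priority=2)
print([task["name"] for task in configure_task_queue(store)])
# -> ['conv_forward', 'pool_forward']
```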
  • Patent number: 11307864
    Abstract: The disclosure provides a data processing device and method. The data processing device may include: a task configuration information storage unit and a task queue configuration unit. The task configuration information storage unit is configured to store configuration information of tasks. The task queue configuration unit is configured to configure a task queue according to the configuration information stored in the task configuration information storage unit. According to the disclosure, a task queue may be configured according to the configuration information.
    Type: Grant
    Filed: November 28, 2019
    Date of Patent: April 19, 2022
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Tianshi Chen, Lei Zhang, Shaoli Liu
  • Patent number: 11307866
    Abstract: The disclosure provides a data processing device and method. The data processing device may include: a task configuration information storage unit and a task queue configuration unit. The task configuration information storage unit is configured to store configuration information of tasks. The task queue configuration unit is configured to configure a task queue according to the configuration information stored in the task configuration information storage unit. According to the disclosure, a task queue may be configured according to the configuration information.
    Type: Grant
    Filed: November 28, 2019
    Date of Patent: April 19, 2022
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Shaoli Liu, Shengyuan Zhou, Zidong Du
  • Publication number: 20220108150
    Abstract: Embodiments of the present disclosure relate to a method and an apparatus for processing data, and related products. The embodiments of the present disclosure provide a board card including a storage component, an interface device, a control component, and an artificial intelligence chip. The artificial intelligence chip is connected to the storage component, the control component, and the interface device, respectively; the storage component is configured to store data; the interface device is configured to implement data transfer between the artificial intelligence chip and external equipment; and the control component is configured to monitor a state of the artificial intelligence chip. The board card is configured to perform artificial intelligence operations.
    Type: Application
    Filed: December 17, 2021
    Publication date: April 7, 2022
    Applicant: Shanghai Cambricon Information Technology Co., Ltd
    Inventors: Yao ZHANG, Guang JIANG, Xishan ZHANG, Shiyi ZHOU, Di HUANG, Chang LIU, Jiaming GUO
  • Publication number: 20220092386
    Abstract: The present disclosure provides a neural network model splitting method and related products. The scheme provided by the present disclosure splits an operator into a plurality of smaller-scale sub-operators, so that a compute library under a single-core architecture can be called directly, which helps to avoid the extra work caused by re-implementation. (See the illustrative sketch after this entry.)
    Type: Application
    Filed: April 13, 2020
    Publication date: March 24, 2022
    Applicant: Shanghai Cambricon Information Technology Co., Ltd
    Inventors: Yusong ZHOU, Xiao ZHANG, Linyang WU, Yehao YU, Yunlong XU
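One way to picture the splitting scheme above is to break a matrix multiplication into row-wise sub-operators and hand each one to an existing single-core kernel. The splitting axis and the stand-in kernel below are assumptions made for the example, not the method's actual splitting policy.

```python
import numpy as np

def single_core_matmul(a, b):
    # Stand-in for a kernel taken from an existing single-core compute library.
    return a @ b

def split_matmul(a, b, num_splits):
    """Split one matmul operator into `num_splits` row-wise sub-operators,
    run each with the single-core kernel, and stitch the results back together."""
    sub_results = [single_core_matmul(chunk, b)
                   for chunk in np.array_split(a, num_splits, axis=0)]
    return np.concatenate(sub_results, axis=0)

a = np.random.rand(8, 4)
b = np.random.rand(4, 3)
assert np.allclose(split_matmul(a, b, num_splits=4), a @ b)
```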
  • Patent number: 11263520
    Abstract: Aspects of reusing neural network instructions are described herein. The aspects may include a computing device configured to calculate a hash value of a neural network layer based on the layer information thereof. A determination unit may be configured to determine whether the hash value exists in a hash table. If the hash value is included in the hash table, one or more neural network instructions that correspond to the hash value may be reused. (See the illustrative sketch after this entry.)
    Type: Grant
    Filed: May 29, 2019
    Date of Patent: March 1, 2022
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Yunji Chen, Yixuan Ren, Zidong Du, Tianshi Chen
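The caching idea in this abstract fits in a few lines: hash the layer information, look the hash up in a table, and reuse the previously generated instructions on a hit. The SHA-256 hash, the JSON serialization, and the shape of the "instructions" are all assumptions made for illustration.

```python
import hashlib
import json

instruction_cache = {}  # hash value -> previously generated instructions

def layer_hash(layer_info):
    # Calculate a deterministic hash of the layer information (type, shapes, attributes).
    blob = json.dumps(layer_info, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def compile_layer(layer_info):
    # Stand-in for the expensive instruction-generation step.
    return [f"OP {layer_info['type']} {layer_info['shape']}"]

def instructions_for(layer_info):
    key = layer_hash(layer_info)
    if key in instruction_cache:        # hash value already in the hash table:
        return instruction_cache[key]   # reuse the corresponding instructions
    instructions = compile_layer(layer_info)
    instruction_cache[key] = instructions
    return instructions

conv = {"type": "conv2d", "shape": [1, 3, 224, 224], "kernel": 3}
assert instructions_for(conv) is instructions_for(conv)  # second call hits the cache
```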
  • Patent number: 11221877
    Abstract: The present disclosure provides a task parallel processing method, a device, a system, a storage medium, and computer equipment, which are capable of distributing and regulating tasks to be executed according to a task directed acyclic graph, and may thereby realize task parallelism on a multi-core processor and improve the efficiency of data processing. (See the illustrative sketch after this entry.)
    Type: Grant
    Filed: September 18, 2019
    Date of Patent: January 11, 2022
    Assignee: Shanghai Cambricon Information Technology Co., Ltd
    Inventors: Linyang Wu, Xiaofu Meng
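A minimal sketch of scheduling from a task directed acyclic graph is shown below: tasks whose prerequisites have all finished are run together in parallel waves. The dictionary-based graph format and the thread-pool executor are assumptions for the example, not the patented scheduler.

```python
from concurrent.futures import ThreadPoolExecutor

def run_task_dag(tasks, deps, workers=4):
    """tasks: name -> callable; deps: name -> set of prerequisite task names.
    Repeatedly runs every task whose prerequisites have finished, so
    independent tasks in the DAG execute in parallel."""
    done = set()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while len(done) < len(tasks):
            ready = [n for n in tasks if n not in done and deps[n] <= done]
            if not ready:
                raise ValueError("cycle detected in the task graph")
            list(pool.map(lambda name: tasks[name](), ready))  # one parallel wave
            done.update(ready)

tasks = {"load": lambda: print("load"),
         "conv": lambda: print("conv"),
         "pool": lambda: print("pool"),
         "fc":   lambda: print("fc")}
deps = {"load": set(), "conv": {"load"}, "pool": {"load"}, "fc": {"conv", "pool"}}
run_task_dag(tasks, deps)  # 'conv' and 'pool' may run concurrently
```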
  • Patent number: 11169803
    Abstract: The present disclosure provides a computation device. The computation device is configured to perform a machine learning computation, and includes a storage unit, a controller unit, an operation unit, and a conversion unit. The storage unit is configured to obtain input data and a computation instruction. The controller unit is configured to extract and parse the computation instruction from the storage unit to obtain one or more operation instructions, and to send the one or more operation instructions and the input data to the operation unit. The operation unit is configured to perform operations on the input data according to the one or more operation instructions to obtain a computation result of the computation instruction. In the examples of the present disclosure, the input data involved in machine learning computations is represented by fixed-point data, thereby improving the processing speed and efficiency of training operations. (See the illustrative sketch after this entry.)
    Type: Grant
    Filed: December 16, 2019
    Date of Patent: November 9, 2021
    Assignee: Shanghai Cambricon Information Technology Co., Ltd.
    Inventors: Yao Zhang, Bingrui Wang
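The fixed-point representation mentioned above can be imitated with a simple symmetric quantizer. The 16-bit width, the Q-format scaling, and the rounding rule below are assumptions chosen for the example rather than the device's actual conversion unit.

```python
import numpy as np

def to_fixed_point(x, frac_bits=8, total_bits=16):
    """Convert floating-point data to signed fixed-point integers with
    `frac_bits` fractional bits (Q-format style)."""
    scale = 1 << frac_bits
    qmin, qmax = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    return np.clip(np.round(x * scale), qmin, qmax).astype(np.int16), scale

def from_fixed_point(q, scale):
    # Recover an approximation of the original floating-point data.
    return q.astype(np.float32) / scale

x = np.random.randn(4).astype(np.float32)
q, scale = to_fixed_point(x)
print("max quantization error:", np.abs(from_fixed_point(q, scale) - x).max())
```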
  • Patent number: 11113104
    Abstract: Computer systems, data processing methods, and computer-readable media are provided to run original networks. An exemplary computer system includes first and second processors and first and second memories. The first memory stores offline models and corresponding input data of a plurality of original networks, and a runtime system configured to run on the first processor. The second memory stores an operating system configured to run on the first processor or the second processor. When the runtime system runs on the first processor, the runtime system obtains an offline model and corresponding input data of an original network from the first memory and controls the second processor to run the offline model of the original network. The offline model of the original network includes model parameters, instructions, and interface data of respective computation nodes of the original network.
    Type: Grant
    Filed: December 5, 2019
    Date of Patent: September 7, 2021
    Assignee: Shanghai Cambricon Information Technology Co., Ltd
    Inventors: Linyang Wu, Qi Guo, Xunyu Chen, Kangyu Wang
  • Patent number: 11113103
    Abstract: Systems and methods for scheduling an instruction list for parallel processing tasks are provided. An exemplary method includes obtaining an instruction set in the instruction list to be scheduled and determining data dependencies among instructions in the instruction set by performing a data dependency analysis on the instruction set. The method also includes obtaining, based on the data dependencies, selection nodes for performing instruction selections during the scheduling of the instruction list. The method further includes determining, based on a preset rule, an order of instructions in a scheduled instruction list according to a corresponding order of the selection nodes. (See the illustrative sketch after this entry.)
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: September 7, 2021
    Assignee: Shanghai Cambricon Information Technology Co., Ltd
    Inventors: Yongwei Zhao, Xiaofu Meng
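The steps named in this abstract, deriving data dependencies and then selecting ready instructions under a preset rule, can be sketched as follows. The (name, reads, writes) instruction format and the lowest-original-index tie-break are assumptions made for illustration.

```python
def data_dependencies(instrs):
    """instrs: list of (name, reads, writes) tuples. An instruction depends on
    the most recent earlier instruction that wrote any value it touches."""
    deps = {i: set() for i in range(len(instrs))}
    last_writer = {}
    for i, (_, reads, writes) in enumerate(instrs):
        for var in reads | writes:
            if var in last_writer:
                deps[i].add(last_writer[var])
        for var in writes:
            last_writer[var] = i
    return deps

def schedule(instrs):
    """List scheduling: repeatedly select an instruction whose dependencies are
    already scheduled, preferring the lowest original index (the preset rule)."""
    deps, scheduled, order = data_dependencies(instrs), set(), []
    while len(order) < len(instrs):
        ready = [i for i in range(len(instrs))
                 if i not in scheduled and deps[i] <= scheduled]
        pick = min(ready)
        order.append(instrs[pick][0])
        scheduled.add(pick)
    return order

instrs = [("load_a", set(), {"a"}), ("load_b", set(), {"b"}),
          ("mul", {"a", "b"}, {"c"}), ("store", {"c"}, set())]
print(schedule(instrs))  # ['load_a', 'load_b', 'mul', 'store']
```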
  • Patent number: 11106598
    Abstract: The present disclosure provides a computation device. The computation device is configured to perform a machine learning computation, and includes a storage unit, a controller unit, an operation unit, and a conversion unit. The storage unit is configured to obtain input data and a computation instruction. The controller unit is configured to extract and parse the computation instruction from the storage unit to obtain one or more operation instructions, and to send the one or more operation instructions and the input data to the operation unit. The operation unit is configured to perform operations on the input data according to the one or more operation instructions to obtain a computation result of the computation instruction. In the examples of the present disclosure, the input data involved in machine learning computations is represented by fixed-point data, thereby improving the processing speed and efficiency of training operations.
    Type: Grant
    Filed: December 16, 2019
    Date of Patent: August 31, 2021
    Assignee: Shanghai Cambricon Information Technology Co., Ltd.
    Inventors: Yao Zhang, Bingrui Wang
  • Patent number: 11086634
    Abstract: The disclosure provides a data processing device and method. The data processing device may include: a task configuration information storage unit and a task queue configuration unit. The task configuration information storage unit is configured to store configuration information of tasks. The task queue configuration unit is configured to configure a task queue according to the configuration information stored in the task configuration information storage unit. According to the disclosure, a task queue may be configured according to the configuration information.
    Type: Grant
    Filed: November 28, 2019
    Date of Patent: August 10, 2021
    Assignee: Shanghai Cambricon Information Technology Co., Ltd.
    Inventors: Zai Wang, Xuda Zhou, Zidong Du, Tianshi Chen
  • Patent number: 11049002
    Abstract: The present disclosure provides a computation device including a computation module for executing a neural network computation, and a power conversion module connected to the computation module for converting input data and/or output data of the neural network computation into power data. The present disclosure further provides a computation method. The computation device and method of the present disclosure may reduce the cost of storage resources and computing resources, and may increase the computation speed. (See the illustrative sketch after this entry.)
    Type: Grant
    Filed: July 24, 2019
    Date of Patent: June 29, 2021
    Assignee: Shanghai Cambricon Information Technology Co., Ltd
    Inventors: Lei Zhang, Shuai Hu, Shaoli Liu, Tianshi Chen
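"Power data" in this context typically means values constrained to signed powers of two, which lets multiplications be replaced by shifts. The sketch below snaps floating-point values to the nearest power of two and back; the exact encoding is an assumption made for the example, not the module's actual format.

```python
import numpy as np

def to_power_data(x):
    """Encode each value as sign * 2**exponent, keeping only the sign and the
    integer exponent (an assumed 'power data' form)."""
    sign = np.sign(x).astype(np.int8)
    exponent = np.round(np.log2(np.abs(x) + 1e-12)).astype(np.int8)
    return sign, exponent

def from_power_data(sign, exponent):
    # Decode back to floating point; every value is now an exact power of two.
    return sign * np.exp2(exponent.astype(np.float32))

x = np.array([0.24, -1.7, 3.2], dtype=np.float32)
sign, exp = to_power_data(x)
print(from_power_data(sign, exp))  # values snapped to the nearest power of two
```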