Patents by Inventor Xunyu Chen

Xunyu Chen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230274158
    Abstract: The present disclosure relates to an apparatus and a method for performing neural network computing, a board card, and a readable storage medium. The computing apparatus of the present disclosure is included in an integrated circuit apparatus. The integrated circuit apparatus includes a general interconnection interface and other processing apparatus. The computing apparatus interacts with other processing apparatus to jointly complete a computing operation specified by a user. The integrated circuit apparatus further includes a storage apparatus. The storage apparatus is connected to the computing apparatus and other processing apparatus, respectively. The storage apparatus is used for data storage of the computing apparatus and other processing apparatus.
    Type: Application
    Filed: September 23, 2021
    Publication date: August 31, 2023
    Inventors: Huiying LAN, Ruitao WANG, Haizhao LUO, Bo CAO, Xunyu CHEN
  • Publication number: 20230259746
    Abstract: The present disclosure relates to an apparatus and a method for forward fusing a neural network, a board card, and a readable storage medium. The computing apparatus of the present disclosure is included in an integrated circuit apparatus. The integrated circuit apparatus includes a general interconnection interface and other processing apparatus. The computing apparatus interacts with other processing apparatus to jointly complete a computing operation specified by a user. The integrated circuit apparatus further includes a storage apparatus. The storage apparatus is connected to the computing apparatus and other processing apparatus, respectively. The storage apparatus is used for data storage of the computing apparatus and other processing apparatus.
    Type: Application
    Filed: September 24, 2021
    Publication date: August 17, 2023
    Inventors: Huiying LAN, Ruitao WANG, Haizhao LUO, Bo CAO, Xunyu CHEN
  • Patent number: 11727244
    Abstract: Aspects for Long Short-Term Memory (LSTM) blocks in a recurrent neural network (RNN) are described herein. As an example, the aspects may include one or more slave computation modules, an interconnection unit, and a master computation module collectively configured to calculate an activated input gate value, an activated forget gate value, a current cell status of the current computation period, an activated output gate value, and a forward pass result.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: August 15, 2023
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Qi Guo, Xunyu Chen, Yunji Chen, Tianshi Chen
  • Patent number: 11726754
    Abstract: Disclosed are a general machine learning model generation method and apparatus, and a computer device and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to a data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). By means of the method, compiled results of a corresponding general model in the running of an algorithm can be directly executed, which avoids repetitive compilation, thus greatly improving the efficiency of machine learning algorithm implementation and shortening the time from compilation to obtaining execution results.
    Type: Grant
    Filed: June 26, 2022
    Date of Patent: August 15, 2023
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Weijian Du, Linyang Wu, Xunyu Chen
  • Publication number: 20230214322
    Abstract: The present disclosure relates to a method, a device and a computation apparatus for allocating a space address to data in a memory, where the computation apparatus is included in a combined processing apparatus, which includes a general interconnection interface and other processing apparatuses. The computation apparatus interacts with other processing apparatuses to jointly complete computations specified by the user. The combined processing apparatus also includes a storage apparatus. The storage apparatus is respectively connected to the computation apparatus and the other processing apparatuses, and is used for storing data of the computation apparatus and other processing apparatuses. The technical solutions of the present disclosure improve utilization of storage space of the memory.
    Type: Application
    Filed: May 12, 2021
    Publication date: July 6, 2023
    Inventors: Xiaofu MENG, Tian ZHI, Zhenxing ZHANG, Xunyu CHEN
  • Publication number: 20230196069
    Abstract: A neural network processing method, comprising the following steps: obtaining a model dataset and model structure parameters of an original network (S100); obtaining an operational attribute of each compute node in the original network; operating the original network according to the model dataset and the model structure parameters of the original network and the operational attribute of each compute node, to obtain an instruction corresponding to each compute node in the original network (S200); and if the operational attribute of the current compute node is a first operational attribute, storing a network weight and the instruction corresponding to the current compute node into a first non-volatile memory, so as to obtain a first offline model corresponding to the original network (S300). Further provided are a computer system and a storage medium.
    Type: Application
    Filed: December 17, 2018
    Publication date: June 22, 2023
    Inventors: Xunyu Chen, Qi Guo, Jie Wei, Linyang Wu
  • Patent number: 11531860
    Abstract: Aspects for Long Short-Term Memory (LSTM) blocks in a recurrent neural network (RNN) are described herein. As an example, the aspects may include one or more slave computation modules, an interconnection unit, and a master computation module collectively configured to calculate an activated input gate value, an activated forget gate value, a current cell status of the current computation period, an activated output gate value, and a forward pass result.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: December 20, 2022
    Assignee: CAMBRICON (XI'AN) SEMICONDUCTOR CO., LTD.
    Inventors: Qi Guo, Xunyu Chen, Yunji Chen, Tianshi Chen
  • Publication number: 20220326919
    Abstract: Disclosed are a general machine learning model generation method and apparatus, and a computer device and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to a data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). By means of the method, compiled results of a corresponding general model in the running of an algorithm can be directly executed, which avoids repetitive compilation, thus greatly improving the efficiency of machine learning algorithm implementation and shortening the time from compilation to obtaining execution results.
    Type: Application
    Filed: June 26, 2022
    Publication date: October 13, 2022
    Inventors: Weijian DU, Linyang WU, Xunyu CHEN
  • Patent number: 11403080
    Abstract: Disclosed are a general machine learning model generation method and apparatus, and a computer device and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to a data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). By means of the method, compiled results of a corresponding general model in the running of an algorithm can be directly executed, which avoids repetitive compilation, thus greatly improving the efficiency of machine learning algorithm implementation and shortening the time from compilation to obtaining execution results.
    Type: Grant
    Filed: December 22, 2020
    Date of Patent: August 2, 2022
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Weijian Du, Linyang Wu, Xunyu Chen
  • Patent number: 11379199
    Abstract: Disclosed are a general-purpose machine learning model generation method and apparatus, and a computer device and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201), performing classification processing on the task parameters to obtain task instructions and model parameters (S1202), aggregating the task instructions and the model parameters according to a data type to obtain stack data and heap data (S1203), and integrating the stack data and the heap data to obtain a general-purpose machine learning model (S1204). By means of the method, compiled results of a corresponding general-purpose model in the running of an algorithm can be directly executed, which avoids repetitive compilation, thus greatly improving the efficiency of machine learning algorithm implementation and shortening the time from compilation to obtaining execution results.
    Type: Grant
    Filed: December 22, 2020
    Date of Patent: July 5, 2022
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Weijian Du, Linyang Wu, Xunyu Chen
  • Patent number: 11360811
    Abstract: Computer systems, data processing methods, and computer-readable media are provided to run original networks. An exemplary computer system includes first and second processors, a memory storing offline models and corresponding input data of a plurality of original networks, and a runtime system configured to run on the first processor. The runtime system, when run on the first processor, causes the first processor to implement a plurality of virtual devices comprising a data processing device configured to obtain an offline model and corresponding input data of an original network from the memory, an equipment management device configured to control turning on or off of the second processor, and a task execution device configured to control the second processor to run the offline model of the original network.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: June 14, 2022
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Linyang Wu, Qi Guo, Xunyu Chen, Kangyu Wang
  • Patent number: 11334329
    Abstract: Disclosed are a general machine learning model generation method and apparatus, and a computer device and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to a data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). By means of the method, compiled results of a corresponding general model in the running of an algorithm can be directly executed, which avoids repetitive compilation, thus greatly improving the efficiency of machine learning algorithm implementation and shortening the time from compilation to obtaining execution results.
    Type: Grant
    Filed: May 7, 2019
    Date of Patent: May 17, 2022
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Weijian Du, Linyang Wu, Xunyu Chen
  • Patent number: 11334330
    Abstract: Disclosed are a general machine learning model generation method and apparatus, and a computer device and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to a data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). By means of the method, compiled results of a corresponding general model in the running of an algorithm can be directly executed, which avoids repetitive compilation, thus greatly improving the efficiency of machine learning algorithm implementation and shortening the time from compilation to obtaining execution results.
    Type: Grant
    Filed: December 22, 2020
    Date of Patent: May 17, 2022
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Weijian Du, Linyang Wu, Xunyu Chen
  • Patent number: 11307836
    Abstract: Disclosed are a general machine learning model generation method and apparatus, and a computer device and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to a data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). By means of the method, compiled results of a corresponding general model in the running of an algorithm can be directly executed, which avoids repetitive compilation, thus greatly improving the efficiency of machine learning algorithm implementation and shortening the time from compilation to obtaining execution results.
    Type: Grant
    Filed: December 22, 2020
    Date of Patent: April 19, 2022
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Weijian Du, Linyang Wu, Xunyu Chen
  • Patent number: 11113104
    Abstract: Computer systems, data processing methods, and computer-readable media are provided to run original networks. An exemplary computer system includes first and second processors and first and second memories. The first memory stores offline models and corresponding input data of a plurality of original networks, and a runtime system configured to run on the first processor. The second memory stores an operating system configured to run on the first processor or the second processor. When the runtime system runs on the first processor, the runtime system obtains an offline model and corresponding input data of an original network from the first memory and controls the second processor to run the offline model of the original network. The offline model of the original network includes model parameters, instructions, and interface data of respective computation nodes of the original network.
    Type: Grant
    Filed: December 5, 2019
    Date of Patent: September 7, 2021
    Assignee: Shanghai Cambricon Information Technology Co., Ltd
    Inventors: Linyang Wu, Qi Guo, Xunyu Chen, Kangyu Wang
  • Patent number: 11036480
    Abstract: Disclosed are a general machine learning model generation method and apparatus, and a computer device and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to a data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). By means of the method, compiled results of a corresponding general model in the running of an algorithm can be directly executed, which avoids repetitive compilation, thus greatly improving the efficiency of machine learning algorithm implementation and shortening the time from compilation to obtaining execution results.
    Type: Grant
    Filed: December 22, 2020
    Date of Patent: June 15, 2021
    Assignee: Shanghai Cambricon Information Technology Co., Ltd.
    Inventors: Weijian Du, Linyang Wu, Xunyu Chen
  • Publication number: 20210109727
    Abstract: Disclosed are a general machine learning model generation method and apparatus, and a computer device and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to a data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). By means of the method, compiled results of a corresponding general model in the running of an algorithm can be directly executed, which avoids repetitive compilation, thus greatly improving the efficiency of machine learning algorithm implementation and shortening the time from compilation to obtaining execution results.
    Type: Application
    Filed: December 22, 2020
    Publication date: April 15, 2021
    Inventors: Weijian DU, Linyang WU, Xunyu CHEN
  • Publication number: 20210109729
    Abstract: Disclosed are a general machine learning model generation method and apparatus, and a computer device and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to a data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). By means of the method, compiled results of a corresponding general model in the running of an algorithm can be directly executed, which avoids repetitive compilation, thus greatly improving the efficiency of machine learning algorithm implementation and shortening the time from compilation to obtaining execution results.
    Type: Application
    Filed: December 22, 2020
    Publication date: April 15, 2021
    Inventors: Weijian DU, Linyang WU, Xunyu CHEN
  • Publication number: 20210109725
    Abstract: Disclosed are a general-purpose machine learning model generation method and apparatus, and a computer device and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201), performing classification processing on the task parameters to obtain task instructions and model parameters (S1202), aggregating the task instructions and the model parameters according to a data type to obtain stack data and heap data (S1203), and integrating the stack data and the heap data to obtain a general-purpose machine learning model (S1204). By means of the method, compiled results of a corresponding general-purpose model in the running of an algorithm can be directly executed, which avoids repetitive compilation, thus greatly improving the efficiency of machine learning algorithm implementation and shortening the time from compilation to obtaining execution results.
    Type: Application
    Filed: December 22, 2020
    Publication date: April 15, 2021
    Inventors: Weijian DU, Linyang WU, Xunyu CHEN
  • Publication number: 20210109726
    Abstract: Disclosed are a general machine learning model generation method and apparatus, and a computer device and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to a data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). By means of the method, compiled results of a corresponding general model in the running of an algorithm can be directly executed, which avoids repetitive compilation, thus greatly improving the efficiency of machine learning algorithm implementation and shortening the time from compilation to obtaining execution results.
    Type: Application
    Filed: December 22, 2020
    Publication date: April 15, 2021
    Inventors: Weijian DU, Linyang WU, Xunyu CHEN
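
The sketches below illustrate, in schematic Python, a few of the techniques described in the abstracts above. They are readings of the public abstracts only, not reproductions of the claimed implementations.

The LSTM entries (patent numbers 11727244 and 11531860) describe computing an activated input gate value, an activated forget gate value, a current cell status for the computation period, an activated output gate value, and a forward pass result. A minimal single-module sketch of that per-step computation is below; the weight layout, the sigmoid/tanh activation choices, and the helper names are assumptions for illustration, since the patents describe the work split across slave computation modules, an interconnection unit, and a master computation module.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM time step producing the values named in the abstract.

    W, U, b are dicts keyed by gate name ('i', 'f', 'o', 'g'); this is a
    common textbook formulation, assumed here purely for illustration.
    """
    # Activated input gate value
    i_t = sigmoid(W['i'] @ x_t + U['i'] @ h_prev + b['i'])
    # Activated forget gate value
    f_t = sigmoid(W['f'] @ x_t + U['f'] @ h_prev + b['f'])
    # Candidate cell state
    g_t = np.tanh(W['g'] @ x_t + U['g'] @ h_prev + b['g'])
    # Current cell status of the current computation period
    c_t = f_t * c_prev + i_t * g_t
    # Activated output gate value
    o_t = sigmoid(W['o'] @ x_t + U['o'] @ h_prev + b['o'])
    # Forward pass result (hidden state)
    h_t = o_t * np.tanh(c_t)
    return h_t, c_t

# Example usage with a 3-dimensional input and 4-dimensional hidden state.
rng = np.random.default_rng(0)
W = {g: rng.standard_normal((4, 3)) for g in "ifog"}
U = {g: rng.standard_normal((4, 4)) for g in "ifog"}
b = {g: np.zeros(4) for g in "ifog"}
h, c = lstm_forward_step(rng.standard_normal(3), np.zeros(4), np.zeros(4), W, U, b)
```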
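Several entries above (patents 11403080, 11379199, 11334329, 11334330, 11307836, 11036480 and the related publications) share an abstract describing the same four-step flow: acquire task parameters (S1201), classify them into task instructions and model parameters (S1202), aggregate by data type into stack data and heap data (S1203), and integrate the two into a general machine learning model (S1204). The sketch below mirrors that flow only at a schematic level; the class names, the classification rules, and the container layout are hypothetical and not drawn from the patents themselves.

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class GeneralModel:
    # Hypothetical container for the integrated result of step S1204.
    stack_data: Dict[str, Any] = field(default_factory=dict)  # per-run, mutable data
    heap_data: Dict[str, Any] = field(default_factory=dict)   # shareable, constant data

def classify(task_params: Dict[str, Any]):
    """Step S1202: split task parameters into task instructions and model parameters.
    The rule used here (keys prefixed with 'op_' are instructions) is an assumption."""
    instructions = {k: v for k, v in task_params.items() if k.startswith("op_")}
    model_params = {k: v for k, v in task_params.items() if not k.startswith("op_")}
    return instructions, model_params

def aggregate(instructions, model_params):
    """Step S1203: aggregate by data type into stack data and heap data.
    Here instructions and hyper-parameters go to the stack partition and
    weight-like entries to the heap partition, purely for illustration."""
    stack = {**instructions,
             **{k: v for k, v in model_params.items() if not k.endswith("_weights")}}
    heap = {k: v for k, v in model_params.items() if k.endswith("_weights")}
    return stack, heap

def generate_general_model(task_params: Dict[str, Any]) -> GeneralModel:
    """Steps S1201-S1204 chained: acquire, classify, aggregate, integrate."""
    instructions, model_params = classify(task_params)       # S1202
    stack, heap = aggregate(instructions, model_params)      # S1203
    return GeneralModel(stack_data=stack, heap_data=heap)    # S1204

# Example usage with made-up task parameters (S1201).
model = generate_general_model({
    "op_conv": {"kernel": 3, "stride": 1},
    "conv1_weights": [[0.1, 0.2], [0.3, 0.4]],
    "learning_rate": 0.01,
})
```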
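Publication 20230196069 describes obtaining a model dataset, model structure parameters, and an operational attribute for each compute node (S100), operating the original network to obtain an instruction per node (S200), and, for nodes with a first operational attribute, storing the network weight and instruction into a first non-volatile memory to obtain a first offline model (S300). The following sketch is schematic only; the attribute values, the placeholder "compilation", and the dict standing in for the first non-volatile memory are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ComputeNode:
    name: str
    weight: List[float]
    operational_attribute: str  # e.g. "first" or "second" (assumed labels)

def build_first_offline_model(model_dataset: Dict[str, List[float]],
                              model_structure: List[str],
                              attributes: Dict[str, str],
                              first_nonvolatile_memory: Dict[str, dict]):
    """Schematic version of steps S100-S300 from publication 20230196069."""
    # S100: obtain the model dataset, model structure parameters, and the
    # operational attribute of each compute node.
    nodes = [ComputeNode(name, model_dataset[name], attributes[name])
             for name in model_structure]

    # S200: "operate" the original network to obtain an instruction per node.
    # Compilation is faked here with a string tag.
    instructions = {node.name: f"instr::{node.name}" for node in nodes}

    # S300: for nodes with the first operational attribute, store the network
    # weight and instruction into the first non-volatile memory, yielding the
    # first offline model.
    for node in nodes:
        if node.operational_attribute == "first":
            first_nonvolatile_memory[node.name] = {
                "weight": node.weight,
                "instruction": instructions[node.name],
            }
    return first_nonvolatile_memory

# Example usage with a two-node network.
offline = build_first_offline_model(
    model_dataset={"conv1": [0.1, 0.2], "fc1": [0.3]},
    model_structure=["conv1", "fc1"],
    attributes={"conv1": "first", "fc1": "second"},
    first_nonvolatile_memory={},
)
```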
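Patents 11360811 and 11113104 describe a runtime system on a first processor that implements virtual devices: a data processing device that obtains an offline model and its input data from memory, an equipment management device that turns the second processor on or off, and a task execution device that controls the second processor to run the offline model. The sketch below only mirrors that composition; the class and method names, and the dict used as the memory, are hypothetical stand-ins rather than the patented interfaces.

```python
class DataProcessingDevice:
    """Virtual device that fetches an offline model and its input data from memory."""
    def __init__(self, memory):
        self.memory = memory

    def load(self, network_name):
        entry = self.memory[network_name]
        return entry["offline_model"], entry["input_data"]

class EquipmentManagementDevice:
    """Virtual device that turns the second processor (e.g. an accelerator) on or off."""
    def __init__(self):
        self.powered_on = False

    def power_on(self):
        self.powered_on = True

    def power_off(self):
        self.powered_on = False

class TaskExecutionDevice:
    """Virtual device that drives the second processor to run the offline model.
    The run() call below simply echoes its inputs; a real system would dispatch
    the model's instructions and weights to the accelerator."""
    def run(self, offline_model, input_data):
        return {"model": offline_model, "inputs": input_data, "status": "executed"}

class RuntimeSystem:
    """Sketch of a runtime system composing the three virtual devices named
    in the abstracts of patents 11360811 and 11113104."""
    def __init__(self, memory):
        self.data_device = DataProcessingDevice(memory)
        self.equipment_device = EquipmentManagementDevice()
        self.task_device = TaskExecutionDevice()

    def run_original_network(self, network_name):
        offline_model, input_data = self.data_device.load(network_name)
        self.equipment_device.power_on()
        try:
            return self.task_device.run(offline_model, input_data)
        finally:
            self.equipment_device.power_off()

# Example usage with a made-up offline model record.
memory = {"demo_net": {"offline_model": {"params": [], "instructions": []},
                       "input_data": [1.0, 2.0, 3.0]}}
result = RuntimeSystem(memory).run_original_network("demo_net")
```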