Patents by Inventor Xunyu Chen
Xunyu Chen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Publication number: 20230274158
  Abstract: The present disclosure relates to an apparatus and a method for performing neural network computing, a board card, and a readable storage medium. The computing apparatus of the present disclosure is included in an integrated circuit apparatus. The integrated circuit apparatus includes a general interconnection interface and other processing apparatus. The computing apparatus interacts with other processing apparatus to jointly complete a computing operation specified by a user. The integrated circuit apparatus further includes a storage apparatus. The storage apparatus is connected to the computing apparatus and other processing apparatus, respectively. The storage apparatus is used for data storage of the computing apparatus and other processing apparatus.
  Type: Application
  Filed: September 23, 2021
  Publication date: August 31, 2023
  Inventors: Huiying LAN, Ruitao WANG, Haizhao LUO, Bo CAO, Xunyu CHEN
- Publication number: 20230259746
  Abstract: The present disclosure relates to an apparatus and a method for forward fusing a neural network, a board card, and a readable storage medium. The computing apparatus of the present disclosure is included in an integrated circuit apparatus. The integrated circuit apparatus includes a general interconnection interface and other processing apparatus. The computing apparatus interacts with other processing apparatus to jointly complete a computing operation specified by a user. The integrated circuit apparatus further includes a storage apparatus. The storage apparatus is connected to the computing apparatus and other processing apparatus, respectively. The storage apparatus is used for data storage of the computing apparatus and other processing apparatus.
  Type: Application
  Filed: September 24, 2021
  Publication date: August 17, 2023
  Inventors: Huiying LAN, Ruitao WANG, Haizhao LUO, Bo CAO, Xunyu CHEN
- Patent number: 11727244
  Abstract: Aspects for Long Short-Term Memory (LSTM) blocks in a recurrent neural network (RNN) are described herein. As an example, the aspects may include one or more slave computation modules, an interconnection unit, and a master computation module collectively configured to calculate an activated input gate value, an activated forget gate value, a current cell status of the current computation period, an activated output gate value, and a forward pass result.
  Type: Grant
  Filed: October 29, 2018
  Date of Patent: August 15, 2023
  Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
  Inventors: Qi Guo, Xunyu Chen, Yunji Chen, Tianshi Chen
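The quantities named in this abstract (activated input gate value, activated forget gate value, current cell status, activated output gate value, forward pass result) correspond to the standard LSTM forward step. As a hedged illustration only (the patent describes dedicated slave and master computation modules, not this code), a single-unit forward step can be sketched in plain Python:

```python
import math

def lstm_step(x, h_prev, c_prev, p):
    """One forward step of a single-unit LSTM cell.

    p maps each gate name ("input", "forget", "cell", "output")
    to its (w_x, w_h, bias) parameters; all names here are
    illustrative, not taken from the patent.
    """
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    gate = lambda name, act: act(
        p[name][0] * x + p[name][1] * h_prev + p[name][2]
    )

    i = gate("input", sigmoid)    # activated input gate value
    f = gate("forget", sigmoid)   # activated forget gate value
    g = gate("cell", math.tanh)   # candidate cell update
    o = gate("output", sigmoid)   # activated output gate value
    c = f * c_prev + i * g        # current cell status
    h = o * math.tanh(c)          # forward pass result
    return h, c
```

The claimed hardware computes the same quantities in parallel across slave modules, with the master module combining them; the scalar sketch above only fixes the dataflow.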
- Patent number: 11726754
  Abstract: Disclosed are a general machine learning model generation method and apparatus, and a computer device and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to a data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). By means of the method, compiled results of a corresponding general model in the running of an algorithm can be directly executed, which avoids repetitive compilation, thus greatly improving the efficiency of machine learning algorithm implementation and shortening the time from compilation to obtaining execution results.
  Type: Grant
  Filed: June 26, 2022
  Date of Patent: August 15, 2023
  Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
  Inventors: Weijian Du, Linyang Wu, Xunyu Chen
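The four steps in this abstract (S1201-S1204) read as a pipeline: classify task parameters into task instructions and model parameters, aggregate them by data type into stack data and heap data, and integrate both into one general model. A minimal sketch follows, with the function name, the dict layout, and the `op_` classification rule all being illustrative assumptions rather than the patented implementation:

```python
def generate_general_model(task_params):
    """Hypothetical sketch of the four abstract steps (S1201-S1204)."""
    # S1201: acquire task parameters (passed in by the caller).
    # S1202: classify into task instructions vs. model parameters
    # (here, keys prefixed "op_" are treated as instructions).
    instructions = {k: v for k, v in task_params.items() if k.startswith("op_")}
    model_params = {k: v for k, v in task_params.items() if not k.startswith("op_")}

    # S1203: aggregate by data type -- per-run mutable state as stack
    # data, shareable constants (e.g. weights) as heap data.
    stack_data = {"instructions": instructions, "scratch": {}}
    heap_data = {"weights": model_params}

    # S1204: integrate stack and heap data into one general model object.
    return {"stack": stack_data, "heap": heap_data}
```

The point of the integration step is that the compiled artifact can be loaded and executed directly on later runs, which is what lets the method avoid repetitive compilation.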
- Publication number: 20230214322
  Abstract: The present disclosure relates to a method, a device and a computation apparatus for allocating a space address to data in a memory, where the computation apparatus is included in a combined processing apparatus, which includes a general interconnection interface and other processing apparatuses. The computation apparatus interacts with other processing apparatuses to jointly complete computations specified by the user. The combined processing apparatus also includes a storage apparatus. The storage apparatus is respectively connected to the computation apparatus and the other processing apparatuses, and is used for storing data of the computation apparatus and other processing apparatuses. The technical solutions of the present disclosure improve utilization of storage space of the memory.
  Type: Application
  Filed: May 12, 2021
  Publication date: July 6, 2023
  Inventors: Xiaofu MENG, Tian ZHI, Zhenxing ZHANG, Xunyu CHEN
- Publication number: 20230196069
  Abstract: A neural network processing method, comprising the following steps: obtaining a model dataset and model structure parameters of an original network (S100); obtaining an operational attribute of each compute node in the original network; operating the original network according to the model dataset and the model structure parameters of the original network and the operational attribute of each compute node, to obtain an instruction corresponding to each compute node in the original network (S200); and if the operational attribute of the current compute node is a first operational attribute, storing a network weight and the instruction corresponding to the current compute node into a first non-volatile memory, so as to obtain a first offline model corresponding to the original network (S300). Further provided are a computer system and a storage medium.
  Type: Application
  Filed: December 17, 2018
  Publication date: June 22, 2023
  Inventors: Xunyu Chen, Qi Guo, Jie Wei, Linyang Wu
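Step S300 of this method persists a node's network weight and instruction into a first non-volatile memory only when the node carries the first operational attribute, and the accumulated entries form the first offline model. A hypothetical sketch of that filtering step, using a plain dict in place of non-volatile memory and invented field names:

```python
def build_first_offline_model(nodes, first_nvm):
    """Hypothetical sketch of step S300: persist the weight and
    instruction of each compute node whose operational attribute is
    the first attribute. `nodes`, the "attr"/"weight"/"instruction"
    keys, and the dict-backed NVM are illustrative assumptions."""
    for node in nodes:
        if node["attr"] == "first":
            first_nvm[node["name"]] = {
                "weight": node["weight"],
                "instruction": node["instruction"],
            }
    return first_nvm  # the accumulated entries are the first offline model
```

Nodes with other operational attributes are simply skipped here; the claims describe where such nodes' data goes, which this sketch does not model.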
- Patent number: 11531860
  Abstract: Aspects for Long Short-Term Memory (LSTM) blocks in a recurrent neural network (RNN) are described herein. As an example, the aspects may include one or more slave computation modules, an interconnection unit, and a master computation module collectively configured to calculate an activated input gate value, an activated forget gate value, a current cell status of the current computation period, an activated output gate value, and a forward pass result.
  Type: Grant
  Filed: October 29, 2018
  Date of Patent: December 20, 2022
  Assignee: CAMBRICON (XI'AN) SEMICONDUCTOR CO., LTD.
  Inventors: Qi Guo, Xunyu Chen, Yunji Chen, Tianshi Chen
- Publication number: 20220326919
  Abstract: Disclosed are a general machine learning model generation method and apparatus, and a computer device and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to a data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). By means of the method, compiled results of a corresponding general model in the running of an algorithm can be directly executed, which avoids repetitive compilation, thus greatly improving the efficiency of machine learning algorithm implementation and shortening the time from compilation to obtaining execution results.
  Type: Application
  Filed: June 26, 2022
  Publication date: October 13, 2022
  Inventors: Weijian DU, Linyang WU, Xunyu CHEN
- Patent number: 11403080
  Abstract: Disclosed are a general machine learning model generation method and apparatus, and a computer device and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to a data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). By means of the method, compiled results of a corresponding general model in the running of an algorithm can be directly executed, which avoids repetitive compilation, thus greatly improving the efficiency of machine learning algorithm implementation and shortening the time from compilation to obtaining execution results.
  Type: Grant
  Filed: December 22, 2020
  Date of Patent: August 2, 2022
  Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
  Inventors: Weijian Du, Linyang Wu, Xunyu Chen
- Patent number: 11379199
  Abstract: Disclosed are a general-purpose machine learning model generation method and apparatus, and a computer device and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201), performing classification processing on the task parameters to obtain task instructions and model parameters (S1202), aggregating the task instructions and the model parameters according to a data type to obtain stack data and heap data (S1203), and integrating the stack data and the heap data to obtain a general-purpose machine learning model (S1204). By means of the method, compiled results of a corresponding general-purpose model in the running of an algorithm can be directly executed, which avoids repetitive compilation, thus greatly improving the efficiency of machine learning algorithm implementation and shortening the time from compilation to obtaining execution results.
  Type: Grant
  Filed: December 22, 2020
  Date of Patent: July 5, 2022
  Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
  Inventors: Weijian Du, Linyang Wu, Xunyu Chen
- Patent number: 11360811
  Abstract: Computer systems, data processing methods, and computer-readable media are provided to run original networks. An exemplary computer system includes first and second processors, a memory storing offline models and corresponding input data of a plurality of original networks, and a runtime system configured to run on the first processor. The runtime system, when running on the first processor, causes the first processor to implement a plurality of virtual devices, comprising a data processing device configured to obtain an offline model and corresponding input data of an original network from the memory, an equipment management device configured to control turning the second processor on or off, and a task execution device configured to control the second processor to run the offline model of the original network.
  Type: Grant
  Filed: December 3, 2019
  Date of Patent: June 14, 2022
  Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
  Inventors: Linyang Wu, Qi Guo, Xunyu Chen, Kangyu Wang
- Patent number: 11334329
  Abstract: Disclosed are a general machine learning model generation method and apparatus, and a computer device and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to a data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). By means of the method, compiled results of a corresponding general model in the running of an algorithm can be directly executed, which avoids repetitive compilation, thus greatly improving the efficiency of machine learning algorithm implementation and shortening the time from compilation to obtaining execution results.
  Type: Grant
  Filed: May 7, 2019
  Date of Patent: May 17, 2022
  Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
  Inventors: Weijian Du, Linyang Wu, Xunyu Chen
- Patent number: 11334330
  Abstract: Disclosed are a general machine learning model generation method and apparatus, and a computer device and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to a data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). By means of the method, compiled results of a corresponding general model in the running of an algorithm can be directly executed, which avoids repetitive compilation, thus greatly improving the efficiency of machine learning algorithm implementation and shortening the time from compilation to obtaining execution results.
  Type: Grant
  Filed: December 22, 2020
  Date of Patent: May 17, 2022
  Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
  Inventors: Weijian Du, Linyang Wu, Xunyu Chen
- Patent number: 11307836
  Abstract: Disclosed are a general machine learning model generation method and apparatus, and a computer device and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to a data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). By means of the method, compiled results of a corresponding general model in the running of an algorithm can be directly executed, which avoids repetitive compilation, thus greatly improving the efficiency of machine learning algorithm implementation and shortening the time from compilation to obtaining execution results.
  Type: Grant
  Filed: December 22, 2020
  Date of Patent: April 19, 2022
  Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
  Inventors: Weijian Du, Linyang Wu, Xunyu Chen
- Patent number: 11113104
  Abstract: Computer systems, data processing methods, and computer-readable media are provided to run original networks. An exemplary computer system includes first and second processors and first and second memories. The first memory stores offline models and corresponding input data of a plurality of original networks, and a runtime system configured to run on the first processor. The second memory stores an operating system configured to run on the first processor or the second processor. When the runtime system runs on the first processor, the runtime system obtains an offline model and corresponding input data of an original network from the first memory and controls the second processor to run the offline model of the original network. The offline model of the original network includes model parameters, instructions, and interface data of respective computation nodes of the original network.
  Type: Grant
  Filed: December 5, 2019
  Date of Patent: September 7, 2021
  Assignee: Shanghai Cambricon Information Technology Co., Ltd
  Inventors: Linyang Wu, Qi Guo, Xunyu Chen, Kangyu Wang
- Patent number: 11036480
  Abstract: Disclosed are a general machine learning model generation method and apparatus, and a computer device and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to a data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). By means of the method, compiled results of a corresponding general model in the running of an algorithm can be directly executed, which avoids repetitive compilation, thus greatly improving the efficiency of machine learning algorithm implementation and shortening the time from compilation to obtaining execution results.
  Type: Grant
  Filed: December 22, 2020
  Date of Patent: June 15, 2021
  Assignee: Shanghai Cambricon Information Technology Co., Ltd.
  Inventors: Weijian Du, Linyang Wu, Xunyu Chen
- Publication number: 20210109727
  Abstract: Disclosed are a general machine learning model generation method and apparatus, and a computer device and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to a data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). By means of the method, compiled results of a corresponding general model in the running of an algorithm can be directly executed, which avoids repetitive compilation, thus greatly improving the efficiency of machine learning algorithm implementation and shortening the time from compilation to obtaining execution results.
  Type: Application
  Filed: December 22, 2020
  Publication date: April 15, 2021
  Inventors: Weijian DU, Linyang WU, Xunyu CHEN
- Publication number: 20210109729
  Abstract: Disclosed are a general machine learning model generation method and apparatus, and a computer device and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to a data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). By means of the method, compiled results of a corresponding general model in the running of an algorithm can be directly executed, which avoids repetitive compilation, thus greatly improving the efficiency of machine learning algorithm implementation and shortening the time from compilation to obtaining execution results.
  Type: Application
  Filed: December 22, 2020
  Publication date: April 15, 2021
  Inventors: Weijian DU, Linyang WU, Xunyu CHEN
- Publication number: 20210109725
  Abstract: Disclosed are a general-purpose machine learning model generation method and apparatus, and a computer device and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201), performing classification processing on the task parameters to obtain task instructions and model parameters (S1202), aggregating the task instructions and the model parameters according to a data type to obtain stack data and heap data (S1203), and integrating the stack data and the heap data to obtain a general-purpose machine learning model (S1204). By means of the method, compiled results of a corresponding general-purpose model in the running of an algorithm can be directly executed, which avoids repetitive compilation, thus greatly improving the efficiency of machine learning algorithm implementation and shortening the time from compilation to obtaining execution results.
  Type: Application
  Filed: December 22, 2020
  Publication date: April 15, 2021
  Inventors: Weijian DU, Linyang WU, Xunyu CHEN
- Publication number: 20210109726
  Abstract: Disclosed are a general machine learning model generation method and apparatus, and a computer device and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to a data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). By means of the method, compiled results of a corresponding general model in the running of an algorithm can be directly executed, which avoids repetitive compilation, thus greatly improving the efficiency of machine learning algorithm implementation and shortening the time from compilation to obtaining execution results.
  Type: Application
  Filed: December 22, 2020
  Publication date: April 15, 2021
  Inventors: Weijian DU, Linyang WU, Xunyu CHEN