Patents by Inventor Xunyu Chen

Xunyu Chen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210109728
    Abstract: Disclosed are a general machine learning model generation method and apparatus, a computer device, and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). With this method, the compiled result corresponding to the general model can be executed directly while an algorithm runs, avoiding repeated compilation, which greatly improves the efficiency of implementing machine learning algorithms and shortens the time from compilation to obtaining execution results.
    Type: Application
    Filed: December 22, 2020
    Publication date: April 15, 2021
    Inventors: Weijian DU, Linyang WU, Xunyu CHEN
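The four steps S1201–S1204 in the abstract can be sketched as a small pipeline. This is an illustrative sketch only; all names, the `op_` prefix convention, and the dict-based data layout are assumptions, since the patent publishes no source code:

```python
# Hypothetical sketch of the four-step model-generation flow (S1201-S1204).
from dataclasses import dataclass


@dataclass
class GeneralModel:
    stack_data: dict  # instruction-like data, grouped for direct execution
    heap_data: dict   # model parameters


def acquire_task_parameters(task):
    # S1201: acquire the task parameters of a machine learning task.
    return dict(task)


def classify(params):
    # S1202: classify task parameters into task instructions and model
    # parameters (here, keys prefixed "op_" are treated as instructions).
    instructions = {k: v for k, v in params.items() if k.startswith("op_")}
    model_params = {k: v for k, v in params.items() if not k.startswith("op_")}
    return instructions, model_params


def aggregate(instructions, model_params):
    # S1203: aggregate by data type into stack data and heap data.
    return dict(instructions), dict(model_params)


def generate_general_model(task):
    # S1204: integrate stack and heap data into a general model that can be
    # executed directly on later runs, without recompiling each time.
    params = acquire_task_parameters(task)
    instrs, mparams = classify(params)
    stack, heap = aggregate(instrs, mparams)
    return GeneralModel(stack_data=stack, heap_data=heap)


model = generate_general_model({"op_conv": "3x3", "weights": [0.1, 0.2]})
```

The point of the split is that the integrated result is reusable: once the stack/heap bundle exists, subsequent runs skip the compile step entirely.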
  • Publication number: 20210089285
    Abstract: Disclosed are a general machine learning model generation method and apparatus, a computer device, and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). With this method, the compiled result corresponding to the general model can be executed directly while an algorithm runs, avoiding repeated compilation, which greatly improves the efficiency of implementing machine learning algorithms and shortens the time from compilation to obtaining execution results.
    Type: Application
    Filed: May 7, 2019
    Publication date: March 25, 2021
    Inventors: Weijian DU, Linyang WU, Xunyu CHEN
  • Publication number: 20200265299
    Abstract: The disclosure relates to a data processing method, a device, and related products. The related product includes a motherboard comprising a CPU and a board card. The board card comprises multiple artificial intelligence processors, and the memories corresponding to the artificial intelligence processors are multi-channel. After receiving a computation instruction sent by the general-purpose CPU through a target parallel thread, a target artificial intelligence processor accesses, according to the instruction, the physical memory corresponding to that thread's memory channel. The target artificial intelligence processor is any of the multiple artificial intelligence processors, and the target parallel thread is any of multiple parallel threads started by the CPU. At least two of the parallel threads correspond to different memory channels.
    Type: Application
    Filed: December 13, 2019
    Publication date: August 20, 2020
    Inventors: Xunyu CHEN, Xiaofu MENG
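The key claim above is the binding of each parallel thread to its own memory channel, so accesses from different threads need not contend on one channel. A minimal sketch, assuming a round-robin thread-to-channel mapping (the patent does not specify the mapping policy, and all names here are illustrative):

```python
# Illustrative sketch of per-thread memory-channel binding: the CPU starts
# several parallel threads, each bound to a channel, and at least two threads
# end up on different channels.
import threading

NUM_CHANNELS = 4
results = {}
lock = threading.Lock()


def channel_for(thread_idx):
    # Assumed round-robin binding of parallel threads to memory channels.
    return thread_idx % NUM_CHANNELS


def run_instruction(thread_idx, instruction):
    # Stand-in for the AI processor accessing the physical memory behind the
    # thread's channel; here we only record which channel each thread used.
    ch = channel_for(thread_idx)
    with lock:
        results[thread_idx] = (ch, instruction)


threads = [
    threading.Thread(target=run_instruction, args=(i, f"compute-{i}"))
    for i in range(8)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With eight threads over four channels, threads 0 and 1 land on different channels, satisfying the "at least two threads on different channels" condition.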
  • Publication number: 20200104722
    Abstract: Computer systems, data processing methods, and computer-readable media are provided to run original networks. An exemplary computer system includes first and second processors and first and second memories. The first memory stores offline models and corresponding input data of a plurality of original networks, and a runtime system configured to run on the first processor. The second memory stores an operating system configured to run on the first processor or the second processor. When running on the first processor, the runtime system obtains an offline model and the corresponding input data of an original network from the first memory and controls the second processor to run the offline model. The offline model of the original network includes model parameters, instructions, and interface data of the respective computation nodes of the original network.
    Type: Application
    Filed: December 5, 2019
    Publication date: April 2, 2020
    Applicant: Shanghai Cambricon Information Technology Co., Ltd
    Inventors: Linyang WU, Qi GUO, Xunyu CHEN, Kangyu WANG
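The runtime flow this abstract describes — load an offline model plus its input data from the first memory, then dispatch it to the second processor — can be sketched as below. All class and method names are assumptions for illustration, and the second processor is mocked as a trivial evaluator:

```python
# Hedged sketch of the described runtime: a runtime on the first
# (general-purpose) processor fetches an offline model and input data from
# memory and has the second (accelerator) processor run it.
class OfflineModel:
    def __init__(self, params, instructions, interface_data):
        self.params = params                  # model parameters
        self.instructions = instructions      # precompiled instructions
        self.interface_data = interface_data  # I/O layout per computation node


class SecondProcessor:
    def run(self, model, inputs):
        # Stand-in for executing the precompiled instructions on hardware.
        return [x * model.params["scale"] for x in inputs]


class Runtime:
    def __init__(self, storage, accel):
        self.storage = storage  # first memory: offline models + input data
        self.accel = accel      # second processor

    def run_network(self, name):
        model, inputs = self.storage[name]
        return self.accel.run(model, inputs)


storage = {
    "net0": (OfflineModel({"scale": 2}, ["mul"], {"in": "f32"}), [1, 2, 3])
}
rt = Runtime(storage, SecondProcessor())
out = rt.run_network("net0")  # → [2, 4, 6]
```

The design point is that the offline model is self-describing (parameters, instructions, and per-node interface data), so the runtime never needs the original network definition to execute it.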
  • Publication number: 20200104162
    Abstract: Computer systems, data processing methods, and computer-readable media are provided to run original networks. An exemplary computer system includes first and second processors, a memory storing offline models and corresponding input data of a plurality of original networks, and a runtime system configured to run on the first processor. The runtime system, when run on the first processor, causes the first processor to implement a plurality of virtual devices comprising a data processing device configured to obtain an offline model and corresponding input data of an original network from the memory, an equipment management device configured to control turning the second processor on or off, and a task execution device configured to control the second processor to run the offline model of the original network.
    Type: Application
    Filed: December 3, 2019
    Publication date: April 2, 2020
    Applicant: Shanghai Cambricon Information Technology Co., Ltd
    Inventors: Linyang WU, Qi GUO, Xunyu CHEN, Kangyu WANG
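This entry factors the runtime into three virtual devices. A hypothetical sketch, with each device as a plain class (all names and behaviors are assumptions; the offline model is mocked as a callable):

```python
# Illustrative decomposition into the three virtual devices named in the
# abstract: data processing, equipment management, and task execution.
class EquipmentManagementDevice:
    def __init__(self):
        self.on = False

    def power(self, on):
        # Controls turning the second processor on or off.
        self.on = on


class DataProcessingDevice:
    def __init__(self, memory):
        self.memory = memory

    def fetch(self, name):
        # Obtains an offline model and its input data from memory.
        return self.memory[name]


class TaskExecutionDevice:
    def run(self, equipment, model, inputs):
        # Controls the second processor to run the offline model.
        if not equipment.on:
            raise RuntimeError("second processor is off")
        return [model(x) for x in inputs]


memory = {"net": (lambda x: x + 1, [1, 2, 3])}
eq = EquipmentManagementDevice()
dp = DataProcessingDevice(memory)
te = TaskExecutionDevice()

eq.power(True)
model, inputs = dp.fetch("net")
out = te.run(eq, model, inputs)  # → [2, 3, 4]
```

Separating power control from task dispatch lets the runtime keep the accelerator off until a network is actually ready to run.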
  • Publication number: 20190087710
    Abstract: Aspects for Long Short-Term Memory (LSTM) blocks in a recurrent neural network (RNN) are described herein. As an example, the aspects may include one or more slave computation modules, an interconnection unit, and a master computation module collectively configured to calculate an activated input gate value, an activated forget gate value, a current cell status of the current computation period, an activated output gate value, and a forward pass result.
    Type: Application
    Filed: October 29, 2018
    Publication date: March 21, 2019
    Inventors: Qi Guo, Xunyu Chen, Yunji Chen, Tianshi Chen
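The quantities this abstract names — activated input gate, activated forget gate, current cell status, activated output gate, and forward pass result — match the standard LSTM forward equations. A minimal NumPy sketch of those computations (the patent describes hardware modules, not this code; weight shapes and the use of the standard equations are assumptions):

```python
# Standard LSTM cell step producing the five quantities named in the abstract.
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def lstm_step(x, h_prev, c_prev, W, U, b):
    # W, U, b hold input weights, recurrent weights, and biases for the
    # input (i), forget (f), candidate (g), and output (o) transforms.
    i = sigmoid(W["i"] @ x + U["i"] @ h_prev + b["i"])  # activated input gate
    f = sigmoid(W["f"] @ x + U["f"] @ h_prev + b["f"])  # activated forget gate
    g = np.tanh(W["g"] @ x + U["g"] @ h_prev + b["g"])  # candidate cell update
    c = f * c_prev + i * g                              # current cell status
    o = sigmoid(W["o"] @ x + U["o"] @ h_prev + b["o"])  # activated output gate
    h = o * np.tanh(c)                                  # forward pass result
    return h, c


n = 3
rng = np.random.default_rng(0)
W = {k: rng.standard_normal((n, n)) for k in "ifgo"}
U = {k: rng.standard_normal((n, n)) for k in "ifgo"}
b = {k: np.zeros(n) for k in "ifgo"}
h, c = lstm_step(np.ones(n), np.zeros(n), np.zeros(n), W, U, b)
```

In the described architecture, the per-gate matrix-vector products are what the slave computation modules can evaluate in parallel, with the interconnection unit combining partial results for the master computation module.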
  • Publication number: 20190087709
    Abstract: Aspects for Long Short-Term Memory (LSTM) blocks in a recurrent neural network (RNN) are described herein. As an example, the aspects may include one or more slave computation modules, an interconnection unit, and a master computation module collectively configured to calculate an activated input gate value, an activated forget gate value, a current cell status of the current computation period, an activated output gate value, and a forward pass result.
    Type: Application
    Filed: October 29, 2018
    Publication date: March 21, 2019
    Inventors: Qi Guo, Xunyu Chen, Yunji Chen, Tianshi Chen