Patents by Inventor Yunji Chen

Yunji Chen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10462476
    Abstract: Aspects of data compression/decompression for neural networks are described herein. The aspects may include a model data converter configured to convert neural network content values into pseudo video data. The neural network content values may refer to weight values and bias values of the neural network. The pseudo video data may include one or more pseudo frames. The aspects may further include a compression module configured to encode the pseudo video data into one or more neural network data packages.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: October 29, 2019
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Tianshi Chen, Yuzhe Luo, Qi Guo, Shaoli Liu, Yunji Chen
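As a rough illustration of the weight-to-pseudo-video conversion this abstract describes, the sketch below quantizes floating-point weights to 8-bit values and packs them into fixed-size 2D "pseudo frames". The function name, the 8-bit quantization, and the 4×4 frame size are assumptions for illustration; the patent does not specify a codec or layout.

```python
def weights_to_pseudo_frames(weights, frame_w=4, frame_h=4):
    """Quantize float weights to 8-bit values and pack them into
    fixed-size 2D 'pseudo frames' that a video encoder could consume."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 if hi > lo else 1.0
    pixels = [round((w - lo) / scale) for w in weights]
    per_frame = frame_w * frame_h
    pixels += [0] * (-len(pixels) % per_frame)   # pad to whole frames
    frames = []
    for i in range(0, len(pixels), per_frame):
        flat = pixels[i:i + per_frame]
        frames.append([flat[r * frame_w:(r + 1) * frame_w]
                       for r in range(frame_h)])
    return frames, lo, scale

frames, lo, scale = weights_to_pseudo_frames([0.1 * i for i in range(20)])
```

Decompression would invert the mapping with the stored `lo` and `scale`.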
  • Publication number: 20190327479
    Abstract: Aspects of data compression/decompression for neural networks are described herein. The aspects may include a model data converter configured to convert neural network content values into pseudo video data. The neural network content values may refer to weight values and bias values of the neural network. The pseudo video data may include one or more pseudo frames. The aspects may further include a compression module configured to encode the pseudo video data into one or more neural network data packages.
    Type: Application
    Filed: June 28, 2019
    Publication date: October 24, 2019
    Inventors: Tianshi CHEN, Yuzhe LUO, Qi GUO, Shaoli LIU, Yunji CHEN
  • Publication number: 20190325298
    Abstract: Aspects of processing data for Long Short-Term Memory (LSTM) neural networks are described herein. The aspects may include one or more data buffer units configured to store previous output data at a previous timepoint, input data at a current timepoint, one or more weight values, and one or more bias values. The aspects may further include multiple data processing units configured to parallelly calculate a portion of an output value at the current timepoint based on the previous output data at the previous timepoint, the input data at the current timepoint, the one or more weight values, and the one or more bias values.
    Type: Application
    Filed: July 1, 2019
    Publication date: October 24, 2019
    Inventors: Yunji CHEN, Xiaobing CHEN, Shaoli LIU, Tianshi CHEN
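The partitioning this abstract describes, where each processing unit computes only a slice of the gate output from the previous output and current input, can be sketched as follows. The gate shape, weights, and slicing are illustrative assumptions, not taken from the patent.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_gate_slice(h_prev, x_t, W, b, lo, hi):
    """One 'data processing unit': compute elements [lo, hi) of a gate
    from the previous output h_prev and the current input x_t. Each row
    of W spans the concatenated [h_prev, x_t] vector."""
    concat = h_prev + x_t
    return [sigmoid(sum(w * v for w, v in zip(W[i], concat)) + b[i])
            for i in range(lo, hi)]

# Two units each compute half of a 4-element gate (conceptually in parallel).
h_prev, x_t = [0.0, 0.0], [1.0, 1.0]
W, b = [[0.5] * 4] * 4, [0.0] * 4
out = (lstm_gate_slice(h_prev, x_t, W, b, 0, 2)
       + lstm_gate_slice(h_prev, x_t, W, b, 2, 4))
```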
  • Publication number: 20190318246
    Abstract: Aspects of data modification for neural networks are described herein. The aspects may include a connection value generator configured to receive one or more groups of input data and one or more weight values and generate one or more connection values based on the one or more weight values. The aspects may further include a pruning module configured to modify the one or more groups of input data and the one or more weight values based on the connection values. Further still, the aspects may include a computing unit configured to update the one or more weight values and/or calculate one or more input gradients.
    Type: Application
    Filed: June 27, 2019
    Publication date: October 17, 2019
    Inventors: Yunji CHEN, Xinkai SONG, Shaoli LIU, Tianshi CHEN
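A minimal sketch of the connection-value/pruning pipeline in this abstract: connection values are derived from the weights, then used to drop input/weight pairs. The magnitude-threshold rule is an assumption; the patent only says connection values are generated from the weight values.

```python
def generate_connection_values(weights, threshold=0.1):
    """Connection value generator: mark a connection live (1) when the
    weight magnitude exceeds a threshold, else pruned (0)."""
    return [1 if abs(w) > threshold else 0 for w in weights]

def prune(inputs, weights, conn):
    """Pruning module: keep only input/weight pairs whose connection is live."""
    kept = [(x, w) for x, w, c in zip(inputs, weights, conn) if c]
    return [x for x, _ in kept], [w for _, w in kept]

conn = generate_connection_values([0.5, 0.01, -0.3, 0.02])
xs, ws = prune([1.0, 2.0, 3.0, 4.0], [0.5, 0.01, -0.3, 0.02], conn)
```

The surviving pairs would then feed the computing unit's weight update or gradient calculation.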
  • Publication number: 20190311266
    Abstract: Aspects of data modification for neural networks are described herein. The aspects may include a data modifier configured to receive input data and weight values of a neural network. The data modifier may include an input data modifier configured to modify the received input data and a weight modifier configured to modify the received weight values. The aspects may further include a computing unit configured to calculate one or more groups of output data based on the modified input data and the modified weight values.
    Type: Application
    Filed: June 18, 2019
    Publication date: October 10, 2019
    Inventors: Shaoli LIU, Yifan HAO, Yunji CHEN, Qi GUO, Tianshi CHEN
  • Publication number: 20190311252
    Abstract: Aspects of a neural network operation device are described herein. The aspects may include a matrix element storage module configured to receive a first matrix that includes one or more first values, each of the first values being represented in a sequence that includes one or more bits. The matrix element storage module may be further configured to respectively store the one or more bits in one or more storage spaces in accordance with positions of the bits in the sequence. The aspects may further include a numeric operation module configured to calculate an intermediate result for each storage space based on one or more second values in a second matrix and an accumulation module configured to sum the intermediate results to generate an output value.
    Type: Application
    Filed: June 13, 2019
    Publication date: October 10, 2019
    Inventors: Tianshi CHEN, Yimin ZHUANG, Qi GUO, Shaoli LIU, Yunji CHEN
  • Publication number: 20190311251
    Abstract: Aspects of reusing neural network instructions are described herein. The aspects may include a computing device configured to calculate a hash value of a neural network layer based on the layer information thereof. A determination unit may be configured to determine whether the hash value exists in a hash table. If the hash value is included in the hash table, one or more neural network instructions that correspond to the hash value may be reused.
    Type: Application
    Filed: May 29, 2019
    Publication date: October 10, 2019
    Inventors: Yunji CHEN, Yixuan REN, Zidong DU, Tianshi CHEN
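The hash-and-reuse flow in this abstract can be sketched with an ordinary dictionary as the hash table. The layer fields, the SHA-256 choice, and the placeholder instruction list are all assumptions for illustration.

```python
import hashlib

generated = 0  # counts how many times instructions were actually generated

def layer_hash(layer_info):
    """Hash a layer's configuration (type, shapes, etc.) to a stable key."""
    return hashlib.sha256(repr(sorted(layer_info.items())).encode()).hexdigest()

def get_instructions(layer_info, hash_table):
    """Reuse cached instructions when an identical layer was seen before;
    otherwise generate them (simulated) and record them in the hash table."""
    global generated
    h = layer_hash(layer_info)
    if h not in hash_table:
        generated += 1                                   # expensive path
        hash_table[h] = ["LOAD", "COMPUTE", "STORE"]     # placeholder list
    return hash_table[h]

table = {}
conv = {"type": "conv", "in": 64, "out": 64, "kernel": 3}
i1 = get_instructions(conv, table)
i2 = get_instructions(dict(conv), table)  # identical layer: reused, not rebuilt
```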
  • Publication number: 20190311242
    Abstract: Aspects of a neural network convolution device are described herein. The aspects may include a matrix transformer and a matrix multiplication module. The matrix transformer may be configured to receive an input data matrix and a weight matrix, transform the input data matrix into a transformed input data matrix based on a first transformation matrix, and transform the weight matrix into a transformed weight matrix based on a second transformation matrix. The matrix multiplication module may be configured to multiply one or more input data elements in the transformed input data matrix with one or more weight elements in the transformed weight matrix to generate an intermediate output matrix. The matrix transformer may be further configured to transform the intermediate output matrix into an output matrix based on an inverse transformation matrix.
    Type: Application
    Filed: June 13, 2019
    Publication date: October 10, 2019
    Inventors: Tianshi CHEN, Yimin ZHUANG, Qi GUO, Shaoli LIU, Yunji CHEN
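The transform/multiply/inverse-transform pipeline in this abstract matches the structure of Winograd convolution. A 1-D F(2,3) sketch, using the standard B<sup>T</sup>, G, and A<sup>T</sup> matrices as stand-ins for the patent's first, second, and inverse transformation matrices (an interpretation, not confirmed by the filing):

```python
def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# Winograd F(2,3) transform matrices.
BT = [[1, 0, -1, 0], [0, 1, 1, 0], [0, -1, 1, 0], [0, 1, 0, -1]]
G  = [[1, 0, 0], [0.5, 0.5, 0.5], [0.5, -0.5, 0.5], [0, 0, 1]]
AT = [[1, 1, 1, 0], [0, 1, -1, -1]]

def winograd_f23(d, g):
    """Convolve a 4-tap input with a 3-tap filter producing two outputs,
    via transform -> elementwise multiply -> inverse transform."""
    U = matvec(G, g)                    # transformed weights
    V = matvec(BT, d)                   # transformed input
    M = [u * v for u, v in zip(U, V)]   # intermediate output matrix (here, vector)
    return matvec(AT, M)                # inverse transform

y = winograd_f23([1.0, 2.0, 3.0, 4.0], [1.0, 1.0, 1.0])
```

The payoff is fewer multiplications than direct convolution, at the cost of the extra additions in the transforms.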
  • Publication number: 20190311264
    Abstract: Aspects of activation function computation for neural networks are described herein. The aspects may include a search module configured to receive an input value. The search module may be further configured to identify a data range based on the received input value and an index associated with the data range. Meanwhile, a count value may be set to one. Further, the search module may be configured to identify a slope value and an intercept value that correspond to the input value. A computation module included in the aspects may be configured to calculate an output value based on the slope value, the intercept value and the input value. In at least some examples, the process may be repeated to increase the accuracy of the result until the count of the repetition reaches the identified index.
    Type: Application
    Filed: June 19, 2019
    Publication date: October 10, 2019
    Inventors: Tianshi CHEN, Yifan HAO, Shaoli LIU, Yunji CHEN, Zhen LI
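A minimal sketch of the search-then-compute scheme in this abstract: a table maps data ranges to (slope, intercept) pairs, and the output is a linear evaluation within the matched range. The breakpoints and coefficients below loosely approximate a sigmoid and are purely illustrative; the iterative refinement up to the identified index is omitted.

```python
# Each entry: (range_lo, range_hi, slope, intercept). Illustrative values.
SEGMENTS = [
    (-4.0, -2.0, 0.05, 0.25),
    (-2.0,  2.0, 0.20, 0.50),
    ( 2.0,  4.0, 0.05, 0.75),
]

def activation(x):
    """Search module: find the segment whose range contains x; the
    computation module then evaluates slope * x + intercept."""
    for lo, hi, slope, intercept in SEGMENTS:
        if lo <= x < hi:
            return slope * x + intercept
    return 0.0 if x < 0 else 1.0   # saturate outside the table
```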
  • Publication number: 20190294971
    Abstract: An apparatus for executing backpropagation of an artificial neural network comprises an instruction caching unit, a controller unit, a direct memory access unit, an interconnection unit, a master computation module, and multiple slave computation modules. For each layer in a multilayer neural network, weighted summation may be performed on input gradient vectors to calculate an output gradient vector of this layer. The output gradient vector may be multiplied by a derivative value of a next-layer activation function on which forward operation is performed, so that a next-layer input gradient vector can be obtained. The input gradient vector may be multiplied by an input neuron counterpoint in forward operation to obtain the gradient of a weight value of this layer, and the weight value of this layer can be updated according to the gradient of the obtained weight value of this layer.
    Type: Application
    Filed: June 14, 2019
    Publication date: September 26, 2019
    Inventors: Shaoli LIU, Qi GUO, Yunji CHEN, Tianshi CHEN
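The per-layer backward pass this abstract walks through can be sketched directly: weighted summation of the incoming gradient, elementwise product with the activation derivative, and a weight update from gradient × input. Shapes and the learning rate are illustrative assumptions.

```python
def backprop_layer(grad_in, W, inputs, act_deriv, lr=0.1):
    """One layer of backpropagation: (1) weighted summation of grad_in
    gives this layer's output gradient; (2) multiplying by the activation
    derivative gives the next input gradient; (3) grad x input gives the
    weight gradient, applied to W in place."""
    grad_out = [sum(W[i][j] * grad_in[i] for i in range(len(grad_in)))
                for j in range(len(W[0]))]                        # step 1
    next_grad = [g * d for g, d in zip(grad_out, act_deriv)]      # step 2
    for i in range(len(W)):                                       # step 3
        for j in range(len(W[0])):
            W[i][j] -= lr * grad_in[i] * inputs[j]
    return next_grad

W = [[1.0, 2.0]]
next_grad = backprop_layer([1.0], W, [0.5, 0.5], [1.0, 1.0])
```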
  • Publication number: 20190294951
    Abstract: Aspects for executing forward propagation of an artificial neural network are described herein. As an example, the aspects may include a plurality of computation modules connected via an interconnection unit; and a controller unit configured to decode an instruction into one or more groups of micro-instructions, wherein the plurality of computation modules are configured to perform respective groups of the micro-instructions.
    Type: Application
    Filed: June 14, 2019
    Publication date: September 26, 2019
    Inventors: Shaoli LIU, Qi GUO, Yunji CHEN, Tianshi CHEN
  • Patent number: 10416964
    Abstract: The present disclosure discloses an adder device, a data accumulation method and a data processing device. The adder device comprises: a first adder module provided with an adder tree unit, composed of a multi-stage adder array, and a first control unit, wherein the adder tree unit accumulates data by means of step-by-step accumulation based on a control signal of the first control unit; a second adder module comprising a two-input addition/subtraction operation unit and a second control unit, and used for performing an addition or subtraction operation on input data; a shift operation module for performing a left shift operation on output data of the first adder module; an AND operation module for performing an AND operation on output data of the shift operation module and output data of the second adder module; and a controller module.
    Type: Grant
    Filed: June 17, 2016
    Date of Patent: September 17, 2019
    Assignee: Institute of Computing Technology, Chinese Academy of Sciences
    Inventors: Zhen Li, Shaoli Liu, Shijin Zhang, Tao Luo, Cheng Qian, Yunji Chen, Tianshi Chen
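The step-by-step accumulation of the adder tree unit can be modeled as reducing adjacent pairs stage by stage, as the multi-stage adder array would in hardware. A software sketch (the pairing and zero-padding of odd stages are assumptions):

```python
def adder_tree_sum(values):
    """Accumulate a list level by level: each stage adds adjacent pairs
    until a single value remains, mirroring a multi-stage adder array."""
    level = list(values)
    while len(level) > 1:
        if len(level) % 2:               # odd count: pad with a zero operand
            level.append(0)
        level = [level[i] + level[i + 1] for i in range(0, len(level), 2)]
    return level[0]

total = adder_tree_sum([1, 2, 3, 4, 5])
```

The shift and AND modules the abstract mentions would post-process this sum; they are omitted here.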
  • Patent number: 10410112
    Abstract: Aspects for executing forward propagation of an artificial neural network are described herein. As an example, the aspects may include a plurality of computation modules connected via an interconnection unit; and a controller unit configured to decode an instruction into one or more groups of micro-instructions, wherein the plurality of computation modules are configured to perform respective groups of the micro-instructions.
    Type: Grant
    Filed: July 19, 2018
    Date of Patent: September 10, 2019
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Shaoli Liu, Qi Guo, Yunji Chen, Tianshi Chen
  • Patent number: 10402725
    Abstract: A compression coding apparatus for an artificial neural network, including a memory interface unit, an instruction cache, a controller unit, and a computing unit, wherein the computing unit is configured to perform corresponding operations on data from the memory interface unit according to instructions from the controller unit. The computing unit mainly performs a three-step operation: step one is to multiply the input neuron by the weight data; step two is to perform adder tree computing and add the weighted output neurons obtained in step one level by level via the adder tree, or add a bias to the output neuron to get a biased output neuron; step three is to perform an activation function operation to get the final output neuron. The present disclosure also provides a method for compression coding of a multi-layer neural network.
    Type: Grant
    Filed: July 20, 2018
    Date of Patent: September 3, 2019
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Tianshi Chen, Shaoli Liu, Qi Guo, Yunji Chen
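The computing unit's three steps map directly onto a single-neuron forward pass. A sketch, with sigmoid chosen as the activation purely for illustration (the patent does not name one):

```python
import math

def neuron_forward(inputs, weights, bias):
    """The abstract's three steps for one output neuron: (1) multiply inputs
    by weights, (2) reduce the products and add the bias (the adder-tree
    step), (3) apply an activation function."""
    products = [x * w for x, w in zip(inputs, weights)]   # step one
    s = sum(products) + bias                              # step two
    return 1.0 / (1.0 + math.exp(-s))                     # step three

y = neuron_forward([1.0, 2.0], [0.5, 0.25], 0.0)
```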
  • Patent number: 10379816
    Abstract: The present disclosure provides a data accumulation device and method, and a digital signal processing device. The device comprises: an accumulation tree module for accumulating input data in the form of a binary tree structure and outputting accumulated result data; a register module including a plurality of groups of registers and used for registering intermediate data generated by the accumulation tree module during an accumulation process and the accumulated result data; and a control circuit for generating a data gating signal to control the accumulation tree module to filter the input data not required to be accumulated, and generating a flag signal to perform the following control: selecting a result obtained after adding one or more of intermediate data stored in the register to the accumulated result as output data, or directly selecting the accumulated result as output data. Thus, a plurality of groups of input data can be rapidly accumulated to a group of sums in a clock cycle.
    Type: Grant
    Filed: June 17, 2016
    Date of Patent: August 13, 2019
    Assignee: Institute of Computing Technology, Chinese Academy of Sciences
    Inventors: Zhen Li, Shaoli Liu, Shijin Zhang, Tao Luo, Cheng Qian, Yunji Chen, Tianshi Chen
  • Publication number: 20190235871
    Abstract: Aspects for processing data segments in neural networks are described herein. The aspects may include a computation module capable of performing operations between two vectors with a limited count of elements. When a data I/O module receives neural network data represented in a form of vectors that includes elements more than the limited count, a data adjustment module may be configured to divide the received vectors into shorter segments such that the computation module may be configured to process the segments sequentially to generate results of the operations.
    Type: Application
    Filed: February 5, 2019
    Publication date: August 1, 2019
    Inventors: Yunji CHEN, Shaoli LIU, Tianshi CHEN
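The data adjustment module's segmenting behavior can be sketched as chunking two long vectors to the computation module's element limit and applying the operation segment by segment. The limit of 4 and the function names are assumptions.

```python
def segmented_op(a, b, op, limit=4):
    """Data adjustment module: split two long vectors into segments no
    longer than the computation module's element limit, apply op to each
    segment sequentially, and concatenate the results."""
    out = []
    for i in range(0, len(a), limit):
        out.extend(op(a[i:i + limit], b[i:i + limit]))
    return out

add = lambda xs, ys: [x + y for x, y in zip(xs, ys)]
r = segmented_op(list(range(10)), list(range(10)), add, limit=4)
```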
  • Publication number: 20190227946
    Abstract: Aspects of managing Translation Lookaside Buffer (TLB) units are described herein. The aspects may include a memory management unit (MMU) that includes one or more TLB units and a control unit. The control unit may be configured to identify one from the one or more TLB units based on a stream identification (ID) included in a received virtual address and, further, to identify a frame number in the identified TLB unit. A physical address may be generated by the control unit based on the frame number and an offset included in the virtual address.
    Type: Application
    Filed: February 26, 2019
    Publication date: July 25, 2019
    Inventors: Tianshi CHEN, Qi GUO, Yunji CHEN
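The stream-ID-selected translation this abstract describes can be sketched as: pick the TLB keyed by the stream ID, look up the frame number for the virtual page, and splice in the page offset. The 4 KiB page size and dictionary-as-TLB are assumptions; the patent fixes neither.

```python
PAGE_BITS = 12  # assume 4 KiB pages

def translate(virtual_addr, stream_id, tlbs):
    """Control unit: select the TLB by stream ID, look up the frame number
    for the virtual page, and combine it with the page offset."""
    tlb = tlbs[stream_id]
    page = virtual_addr >> PAGE_BITS
    offset = virtual_addr & ((1 << PAGE_BITS) - 1)
    frame = tlb[page]                 # raises KeyError on a TLB miss
    return (frame << PAGE_BITS) | offset

tlbs = {0: {0x1: 0x80}, 1: {0x1: 0x90}}
pa = translate(0x1ABC, stream_id=1, tlbs=tlbs)
```

Keeping one TLB per stream lets translations for different streams coexist without evicting one another.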
  • Publication number: 20190171932
    Abstract: Aspects for performing neural network operations are described herein. The aspects may include a first neural network processing module configured to process at least a portion of neural network data and an on-chip interconnection module communicatively connected to the first neural network processing module and one or more second neural network processing modules. The on-chip interconnection module may include a first layer interconnection module configured to communicate with an external storage device and one or more second layer interconnection modules respectively configured to communicate with the first neural network processing module and the one or more second neural network processing modules. Further, the first neural network processing module may include a neural network processor configured to perform one or more operations on the portion of the neural network data and a high-speed storage device configured to store results of the one or more operations.
    Type: Application
    Filed: February 5, 2019
    Publication date: June 6, 2019
    Inventors: Yunji CHEN, Shaoli LIU, Dong HAN, Tianshi CHEN
  • Publication number: 20190171454
    Abstract: Aspects for vector operations in neural network are described herein. The aspects may include a vector caching unit configured to store a first vector and a second vector, wherein the first vector includes one or more first elements and the second vector includes one or more second elements. The aspects may further include a computation module that includes one or more bitwise processors and a combiner. The bitwise processors may be configured to perform bitwise operations between each of the first elements and a corresponding one of the second elements to generate one or more operation results. The combiner may be configured to combine the one or more operation results into an output vector.
    Type: Application
    Filed: January 17, 2019
    Publication date: June 6, 2019
    Inventors: Tao Luo, Tian Zhi, Shaoli Liu, Tianshi Chen, Yunji Chen
  • Publication number: 20190163477
    Abstract: Aspects for vector comparison in neural network are described herein. The aspects may include a direct memory access unit configured to receive a first vector and a second vector from a storage device. The first vector may include one or more first elements and the second vector may include one or more second elements. The aspects may further include a computation module that includes one or more comparers respectively configured to generate a comparison result by comparing one of the one or more first elements to a corresponding one of the one or more second elements in accordance with an instruction.
    Type: Application
    Filed: January 14, 2019
    Publication date: May 30, 2019
    Inventors: Dong Han, Xiao Zhang, Shaoli Liu, Tianshi Chen, Yunji Chen