Patents by Inventor Yunji Chen

Yunji Chen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11295196
    Abstract: Aspects for neural network operations with fixed-point number of short bit length are described herein. The aspects may include a fixed-point number converter configured to convert one or more first floating-point numbers to one or more first fixed-point numbers in accordance with at least one format. Further, the aspects may include a neural network processor configured to process the first fixed-point numbers to generate one or more process results.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: April 5, 2022
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Yunji Chen, Shaoli Liu, Qi Guo, Tianshi Chen
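    The conversion described in this abstract can be illustrated with a short software sketch. The snippet below is not Cambricon's implementation; it is a minimal Python analogue assuming a signed fixed-point format parameterized by a total bit width and a fractional-bit count, with saturation on overflow.

        def float_to_fixed(values, total_bits=16, frac_bits=8):
            """Convert floats to signed fixed-point integers (illustrative only)."""
            scale = 1 << frac_bits
            lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
            fixed = []
            for v in values:
                q = round(v * scale)               # quantize to the nearest step
                fixed.append(max(lo, min(hi, q)))  # saturate to the representable range
            return fixed

        def fixed_to_float(values, frac_bits=8):
            """Recover approximate floats from the fixed-point representation."""
            return [v / (1 << frac_bits) for v in values]

        weights = [0.7311, -1.25, 3.999]
        q = float_to_fixed(weights)        # [187, -320, 1024]
        print(fixed_to_float(q))           # approximately the original values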
  • Patent number: 11263530
    Abstract: Aspects for maxout layer operations in neural network are described herein. The aspects may include a load/store unit configured to retrieve input data from a storage module. The input data may be formatted as a three-dimensional vector that includes one or more feature values stored in a feature dimension of the three-dimensional vector. The aspects may further include a pruning unit configured to divide the one or more feature values into one or more feature groups based on one or more data ranges and select a maximum feature value from each of the one or more feature groups. Further still, the pruning unit may be configured to delete, in each of the one or more feature groups, feature values other than the maximum feature value and update the input data with the one or more maximum feature values.
    Type: Grant
    Filed: October 18, 2018
    Date of Patent: March 1, 2022
    Assignee: Cambricon Technologies Corporation Limited
    Inventors: Dong Han, Qi Guo, Tianshi Chen, Yunji Chen
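    As a rough illustration of the group-and-select-max step above, the following NumPy sketch assumes the feature values are split into equal, contiguous groups along the feature dimension; the actual grouping in the patent is defined by data ranges.

        import numpy as np

        def maxout(features, group_size):
            """Maxout over contiguous groups along the last (feature) dimension."""
            h, w, c = features.shape                      # three-dimensional input
            assert c % group_size == 0, "feature dim must divide into equal groups"
            grouped = features.reshape(h, w, c // group_size, group_size)
            return grouped.max(axis=-1)                   # keep only each group's maximum

        x = np.random.randn(4, 4, 8)
        print(maxout(x, group_size=2).shape)              # (4, 4, 4)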
  • Patent number: 11263520
    Abstract: Aspects of reusing neural network instructions are described herein. The aspects may include a computing device configured to calculate a hash value of a neural network layer based on the layer information thereof. A determination unit may be configured to determine whether the hash value exists in a hash table. If the hash value is included in the hash table, one or more neural network instructions that correspond to the hash value may be reused.
    Type: Grant
    Filed: May 29, 2019
    Date of Patent: March 1, 2022
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Yunji Chen, Yixuan Ren, Zidong Du, Tianshi Chen
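    The reuse scheme above amounts to a hash-keyed instruction cache. The sketch below is a hypothetical software analogue: the layer fields, the compile_layer stand-in, and the SHA-256 choice are illustrative assumptions, not details from the patent.

        import hashlib

        # Hypothetical cache mapping a layer's hash to previously generated instructions.
        instruction_cache = {}

        def layer_hash(layer_info: dict) -> str:
            """Hash a layer's descriptive fields (names and values are illustrative)."""
            canonical = repr(sorted(layer_info.items()))
            return hashlib.sha256(canonical.encode()).hexdigest()

        def compile_layer(layer_info: dict):
            # Stand-in for real instruction generation.
            return [f"OP_{layer_info['type'].upper()}"]

        def get_instructions(layer_info: dict):
            """Reuse cached instructions when the same layer configuration recurs."""
            key = layer_hash(layer_info)
            if key in instruction_cache:                  # hash found: reuse
                return instruction_cache[key]
            instructions = compile_layer(layer_info)      # otherwise generate and cache
            instruction_cache[key] = instructions
            return instructions

        conv = {"type": "conv", "in_channels": 64, "out_channels": 128, "kernel": 3}
        assert get_instructions(conv) is get_instructions(conv)   # second call hits the cache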
  • Patent number: 11157593
    Abstract: Aspects for vector combination in neural network are described herein. The aspects may include a direct memory access unit configured to receive a first vector, a second vector, and a controller vector. The first vector, the second vector, and the controller vector may each include one or more elements indexed in accordance with a same one-dimensional data structure. The aspects may further include a computation module configured to select one of the one or more control values in the controller vector, determine that the selected control value satisfies a predetermined condition, and select one of the one or more first elements that corresponds to the selected control value in the one-dimensional data structure as an output element based on a determination that the selected control value satisfies the predetermined condition.
    Type: Grant
    Filed: October 25, 2018
    Date of Patent: October 26, 2021
    Assignee: Cambricon Technologies Corporation Limited
    Inventors: Zhen Li, Xiao Zhang, Shaoli Liu, Tianshi Chen, Yunji Chen
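    The selection logic above resembles an element-wise select. A minimal NumPy sketch follows, assuming (as an illustration only) that the predetermined condition is simply "the control value is nonzero".

        import numpy as np

        def combine(first, second, control):
            """Take the element from `first` where the control value is nonzero,
            otherwise from `second`."""
            first, second, control = map(np.asarray, (first, second, control))
            return np.where(control != 0, first, second)

        print(combine([1, 2, 3, 4], [10, 20, 30, 40], [1, 0, 0, 1]))   # [ 1 20 30  4]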
  • Patent number: 11126429
    Abstract: Aspects for vector operations in neural network are described herein. The aspects may include a vector caching unit configured to store a first vector and a second vector, wherein the first vector includes one or more first elements and the second vector includes one or more second elements. The aspects may further include a computation module that includes one or more bitwise processors and a combiner. The bitwise processors may be configured to perform bitwise operations between each of the first elements and a corresponding one of the second elements to generate one or more operation results. The combiner may be configured to combine the one or more operation results into an output vector.
    Type: Grant
    Filed: January 17, 2019
    Date of Patent: September 21, 2021
    Assignee: Cambricon Technologies Corporation Limited
    Inventors: Tao Luo, Tian Zhi, Shaoli Liu, Tianshi Chen, Yunji Chen
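    A short NumPy sketch of the element-wise bitwise operation and combine step described above; NumPy stands in for the hardware bitwise processors and the combiner.

        import numpy as np

        def bitwise_vector_op(a, b, op="and"):
            """Apply a bitwise operation between corresponding elements and return
            the per-element results combined into one output vector."""
            a, b = np.asarray(a, dtype=np.uint8), np.asarray(b, dtype=np.uint8)
            ops = {"and": np.bitwise_and, "or": np.bitwise_or, "xor": np.bitwise_xor}
            return ops[op](a, b)

        print(bitwise_vector_op([0b1100, 0b1010], [0b1010, 0b0110], op="xor"))  # [ 6 12]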
  • Patent number: 11120331
    Abstract: Aspects for performing neural network operations are described herein. The aspects may include a first neural network processing module configured to process at least a portion of neural network data and an on-chip interconnection module communicatively connected to the first neural network processing module and one or more second neural network processing modules. The on-chip interconnection module may include a first layer interconnection module configured to communicate with an external storage device and one or more second layer interconnection modules respectively configured to communicate with the first neural network processing module and the one or more second neural network processing modules. Further, the first neural network processing module may include a neural network processor configured to perform one or more operations on the portion of the neural network data and a high-speed storage device configured to store results of the one or more operations.
    Type: Grant
    Filed: February 5, 2019
    Date of Patent: September 14, 2021
    Assignee: Cambricon Technologies Corporation Limited
    Inventors: Yunji Chen, Shaoli Liu, Dong Han, Tianshi Chen
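    As a loose software analogue of the processing-module arrangement above, the sketch below splits input data across several module objects, each with its own local storage, and gathers their results; the ReLU stand-in operation and the four-module split are assumptions made for illustration.

        import numpy as np

        class ProcessingModule:
            """One neural-network processing module with its own high-speed storage."""
            def __init__(self, name):
                self.name = name
                self.local_storage = {}                   # stands in for on-module storage

            def process(self, portion):
                result = np.maximum(portion, 0.0)         # stand-in operation (ReLU)
                self.local_storage["result"] = result
                return result

        def run_on_chip(data, num_modules=4):
            """Split the data across modules; the gather step plays the role of
            the interconnection module that collects per-module results."""
            modules = [ProcessingModule(f"pm{i}") for i in range(num_modules)]
            portions = np.array_split(np.asarray(data), num_modules)
            return np.concatenate([m.process(p) for m, p in zip(modules, portions)])

        print(run_on_chip(np.linspace(-2, 2, 8)))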
  • Patent number: 11100192
    Abstract: Aspects for vector operations in neural network are described herein. The aspects may include a vector caching unit configured to store a first vector and a second vector, wherein the first vector includes one or more first elements and the second vector includes one or more second elements. The aspects may further include one or more adders and a combiner. The one or more adders may be configured to respectively add each of the first elements to a corresponding one of the second elements to generate one or more addition results. The combiner may be configured to combine the one or more addition results into an output vector.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: August 24, 2021
    Assignee: Cambricon Technologies Corporation Limited
    Inventors: Jinhua Tao, Tian Zhi, Shaoli Liu, Tianshi Chen, Yunji Chen
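    The add-then-combine step above is, in software terms, an element-wise vector addition. A minimal sketch, assuming plain Python lists stand in for the cached vectors:

        def vector_add(first, second):
            """Per-element adds model the parallel adders; the comprehension plays the combiner."""
            assert len(first) == len(second), "vectors must have the same length"
            return [a + b for a, b in zip(first, second)]

        print(vector_add([1.0, 2.0, 3.0], [0.5, 0.5, 0.5]))  # [1.5, 2.5, 3.5]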
  • Patent number: 11080049
    Abstract: Aspects for matrix multiplication in neural network are described herein. The aspects may include a controller unit configured to receive a matrix-multiply-matrix (MM) instruction that includes a first starting address of a first matrix, a first size of the first matrix, a second starting address of a second matrix, and a second size of the second matrix; multiple computation modules configured to respectively multiply, in response to the MM instruction, row vectors of the first matrix with column vectors of the second matrix to generate one or more result elements; and an interconnection unit configured to combine the result elements to generate one or more row vectors of a result matrix.
    Type: Grant
    Filed: October 17, 2019
    Date of Patent: August 3, 2021
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Xiao Zhang, Shaoli Liu, Tianshi Chen, Yunji Chen
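    The row-by-column decomposition described above can be sketched as follows; the nested loops model the per-module dot products, though the real hardware performs them in parallel.

        import numpy as np

        def matrix_multiply(first, second):
            """Each result element is the dot product of one row of `first`
            with one column of `second`."""
            first, second = np.asarray(first), np.asarray(second)
            rows, inner = first.shape
            inner2, cols = second.shape
            assert inner == inner2, "inner dimensions must match"
            result = np.empty((rows, cols))
            for i in range(rows):                       # one "computation module" per row
                for j in range(cols):
                    result[i, j] = first[i, :] @ second[:, j]
            return result

        a, b = np.arange(6).reshape(2, 3), np.arange(12).reshape(3, 4)
        assert np.array_equal(matrix_multiply(a, b), a @ b)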
  • Publication number: 20210132904
    Abstract: Aspects for neural network operations with floating-point number of short bit length are described herein. The aspects may include a neural network processor configured to process one or more floating-point numbers to generate one or more process results. Further, the aspects may include a floating-point number converter configured to convert the one or more process results in accordance with at least one format of shortened floating-point numbers. The floating-point number converter may include a pruning processor configured to adjust a length of a mantissa field of the process results and an exponent modifier configured to adjust a length of an exponent field of the process results in accordance with the at least one format.
    Type: Application
    Filed: January 12, 2021
    Publication date: May 6, 2021
    Inventors: Tianshi CHEN, Shaoli LIU, Qi GUO, Yunji CHEN
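    A software analogue of the mantissa/exponent shortening described above is sketched below. The format parameters (10 mantissa bits, 5 exponent bits) and the saturation behavior are assumptions for illustration, not the claimed hardware converter.

        import struct

        def shorten_float(x, mantissa_bits=10, exponent_bits=5):
            """Truncate the float32 mantissa to `mantissa_bits` and saturate values
            whose exponent exceeds the reduced exponent range."""
            bits = struct.unpack("<I", struct.pack("<f", x))[0]
            sign = bits >> 31
            exponent = ((bits >> 23) & 0xFF) - 127              # unbias float32 exponent
            mantissa = bits & 0x7FFFFF

            max_exp = (1 << (exponent_bits - 1)) - 1            # reduced exponent range
            if exponent > max_exp:                              # overflow: saturate
                return float("inf") if sign == 0 else float("-inf")

            mantissa &= ~((1 << (23 - mantissa_bits)) - 1)      # drop low mantissa bits
            new_bits = (sign << 31) | ((exponent + 127) << 23) | mantissa
            return struct.unpack("<f", struct.pack("<I", new_bits))[0]

        print(shorten_float(3.14159265))    # close to pi, with reduced mantissa precision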
  • Patent number: 10997276
    Abstract: Aspects for vector operations in neural network are described herein. The aspects may include a vector caching unit configured to store a first vector and a second vector, wherein the first vector includes one or more first elements and the second vector includes one or more second elements. The aspects may further include one or more adders and a combiner. The one or more adders may be configured to respectively add each of the first elements to a corresponding one of the second elements to generate one or more addition results. The combiner may be configured to combine the one or more addition results into an output vector.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: May 4, 2021
    Assignee: Cambricon Technologies Corporation Limited
    Inventors: Jinhua Tao, Tian Zhi, Shaoli Liu, Tianshi Chen, Yunji Chen
  • Publication number: 20210103818
    Abstract: The present disclosure provides a neural network computing method, system, and device for use in the technical field of computers. The computing method comprises the following steps: A. dividing a neural network into a plurality of subnetworks having consistent internal data characteristics; B. computing each of the subnetworks to obtain a first computation result for each subnetwork; and C. computing a total computation result of the neural network on the basis of the first computation result of each subnetwork. By means of the method, the present disclosure improves the computing efficiency of the neural network.
    Type: Application
    Filed: August 9, 2016
    Publication date: April 8, 2021
    Inventors: Zidong DU, Qi GUO, Tianshi CHEN, Yunji CHEN
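    A minimal sketch of steps A-C for a purely sequential network, assuming subnetworks are formed from contiguous groups of layers; the patent's grouping criterion (consistent internal data characteristics) is abstracted away here.

        import numpy as np

        def split_into_subnetworks(layers, group_size):
            """Step A: group consecutive layers into subnetworks (contiguous grouping is an assumption)."""
            return [layers[i:i + group_size] for i in range(0, len(layers), group_size)]

        def run_subnetwork(subnetwork, x):
            """Step B: compute one subnetwork's result."""
            for layer in subnetwork:
                x = layer(x)
            return x

        def run_network(layers, x, group_size=2):
            """Step C: chain the per-subnetwork results into the total result."""
            for subnetwork in split_into_subnetworks(layers, group_size):
                x = run_subnetwork(subnetwork, x)
            return x

        layers = [lambda x: x @ np.random.randn(8, 8) for _ in range(4)] + [np.tanh]
        print(run_network(layers, np.ones(8)).shape)   # (8,)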
  • Publication number: 20210075639
    Abstract: The present invention provides a fractal tree structure-based data transmit device and method, a control device, and an intelligent chip. The device comprises: a central node that serves as the communication data center of a network-on-chip and is used for broadcasting or multicasting communication data to a plurality of leaf nodes; the plurality of leaf nodes, which serve as communication data nodes of the network-on-chip and transmit the communication data to a central leaf node; and forwarder modules for connecting the central node with the plurality of leaf nodes and forwarding the communication data. The central node, the forwarder modules, and the plurality of leaf nodes are connected in the fractal tree network structure; the central node is directly connected to M forwarder modules and/or leaf nodes, and any forwarder module is directly connected to M next-level forwarder modules and/or leaf nodes.
    Type: Application
    Filed: November 20, 2020
    Publication date: March 11, 2021
    Inventors: Jinhua Tao, Tao Luo, Shaoli Liu, Shijin Zhang, Yunji Chen
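    The broadcast path above can be sketched in software as a recursive M-ary fan-out from the central node to the leaves; the serial traversal below is an illustration only, since the hardware forwarders operate in parallel.

        def broadcast(data, num_leaves, fanout):
            """Hand the data down through fan-out levels until every leaf has a copy."""
            received = []

            def forward(level_nodes):
                if len(level_nodes) <= fanout:            # reached the leaves
                    received.extend(level_nodes)
                    return
                # Split the remaining nodes among `fanout` next-level forwarders.
                chunk = -(-len(level_nodes) // fanout)    # ceiling division
                for i in range(0, len(level_nodes), chunk):
                    forward(level_nodes[i:i + chunk])

            forward(list(range(num_leaves)))
            return {leaf: data for leaf in received}

        copies = broadcast("weights", num_leaves=16, fanout=4)
        print(len(copies))   # 16: every leaf node received the broadcast data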
  • Patent number: 10936284
    Abstract: Aspects for neural network operations with floating-point number of short bit length are described herein. The aspects may include a neural network processor configured to process one or more floating-point numbers to generate one or more process results. Further, the aspects may include a floating-point number converter configured to convert the one or more process results in accordance with at least one format of shortened floating-point numbers. The floating-point number converter may include a pruning processor configured to adjust a length of a mantissa field of the process results and an exponent modifier configured to adjust a length of an exponent field of the process results in accordance with the at least one format.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: March 2, 2021
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Tianshi Chen, Shaoli Liu, Qi Guo, Yunji Chen
  • Patent number: 10904034
    Abstract: One example of a device comprises: a central node that serves as the communication data center of a network-on-chip; a plurality of leaf nodes that serve as communication data nodes of the network-on-chip and transmit the communication data to a central leaf node; and forwarder modules for connecting the central node with the plurality of leaf nodes and forwarding the communication data, wherein the plurality of leaf nodes are divided into N groups, each group having the same number of leaf nodes, the central node is individually in communication connection with each group of leaf nodes by means of the forwarder modules, a communication structure constituted by each group of leaf nodes has self-similarity, and the plurality of leaf nodes are in communication connection with the central node in a complete multi-way tree approach by means of the forwarder modules of multiple levels.
    Type: Grant
    Filed: June 17, 2016
    Date of Patent: January 26, 2021
    Assignee: Institute of Computing Technology, Chinese Academy of Sciences
    Inventors: Jinhua Tao, Tao Luo, Shaoli Liu, Shijin Zhang, Yunji Chen
  • Patent number: 10891353
    Abstract: Aspects for matrix addition in neural network are described herein. The aspects may include a controller unit configured to receive a matrix-addition instruction. The aspects may further include a computation module configured to receive a first matrix and a second matrix. The first matrix may include one or more first elements and the second matrix includes one or more second elements. The one or more first elements and the one or more second elements may be arranged in accordance with a two-dimensional data structure. The computation module may be further configured to respectively add each of the first elements to each of the second elements based on a correspondence in the two-dimensional data structure to generate one or more third elements for a third matrix.
    Type: Grant
    Filed: January 17, 2019
    Date of Patent: January 12, 2021
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Xiao Zhang, Shaoli Liu, Tianshi Chen, Yunji Chen
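    The element-wise correspondence described above is ordinary matrix addition; a minimal NumPy sketch:

        import numpy as np

        def matrix_add(first, second):
            """Add corresponding elements of two matrices laid out in the same
            two-dimensional structure, producing the third matrix."""
            first, second = np.asarray(first), np.asarray(second)
            assert first.shape == second.shape, "matrices must share the same shape"
            return first + second                         # element-wise correspondence

        a = np.arange(6).reshape(2, 3)
        b = np.ones((2, 3), dtype=int)
        print(matrix_add(a, b))    # [[1 2 3], [4 5 6]]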
  • Patent number: 10866924
    Abstract: An example device comprises a central node for receiving vector data returned by leaf nodes, a plurality of leaf nodes for calculating and shifting the vector data, and forwarder modules comprising a local cache structure and a data processing component, wherein the plurality of leaf nodes are divided into N groups, each group having the same number of leaf nodes; the central node is individually in communication connection with each group of leaf nodes by means of the forwarder modules; a communication structure constituted by each group of leaf nodes has self-similarity; the plurality of leaf nodes are in communication connection with the central node in a complete M-way tree approach by means of the forwarder modules of multiple levels; each of the leaf nodes comprises a setting bit.
    Type: Grant
    Filed: June 17, 2016
    Date of Patent: December 15, 2020
    Assignee: Institute of Computing Technology, Chinese Academy of Sciences
    Inventors: Dong Han, Tao Luo, Shaoli Liu, Shijin Zhang, Yunji Chen
  • Patent number: 10860316
    Abstract: Aspects for generating a dot product for two vectors in neural network are described herein. The aspects may include a controller unit configured to receive a vector load instruction that includes a first address of a first vector and a length of the first vector. The aspects may further include a direct memory access unit configured to retrieve the first vector from a storage device based on the first address of the first vector. Further still, the aspects may include a caching unit configured to store the first vector.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: December 8, 2020
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Tian Zhi, Qi Guo, Shaoli Liu, Tianshi Chen, Yunji Chen
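    A small sketch of the load-then-reduce flow above, where a flat NumPy array stands in for the storage device and the instruction fields (address, length) are passed as plain arguments; these simplifications are assumptions made for illustration.

        import numpy as np

        # A flat array stands in for the storage device addressed by the load instruction.
        MEMORY = np.arange(32, dtype=np.float32)

        def vector_load(address, length):
            """Fetch `length` elements starting at `address` (DMA retrieval, simplified)."""
            return MEMORY[address:address + length].copy()

        def dot_product(addr_a, addr_b, length):
            """Load both operands, then reduce them to a single dot-product value."""
            a = vector_load(addr_a, length)    # cached first vector
            b = vector_load(addr_b, length)    # cached second vector
            return float(a @ b)

        print(dot_product(addr_a=0, addr_b=8, length=4))   # 0*8 + 1*9 + 2*10 + 3*11 = 62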
  • Patent number: 10860917
    Abstract: Aspects for executing forward propagation of artificial neural network are described here. As an example, the aspects may include a plurality of computation modules connected via an interconnection unit; and a controller unit configured to decode an instruction into one or more groups of micro-instructions, wherein the plurality of computation modules are configured to perform respective groups of the micro-instructions.
    Type: Grant
    Filed: June 14, 2019
    Date of Patent: December 8, 2020
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Shaoli Liu, Qi Guo, Yunji Chen, Tianshi Chen
  • Patent number: 10860050
    Abstract: A nonlinear function operation device and method are provided. The device may include a table looking-up module and a linear fitting module. The table looking-up module may be configured to acquire a first address of a slope value k and a second address of an intercept value b based on a floating-point number. The linear fitting module may be configured to obtain a linear function expressed as y=k×x+b based on the slope value k and the intercept value b, and substitute the floating-point number into the linear function to calculate a function value of the linear function, wherein the calculated function value is determined as the function value of a nonlinear function corresponding to the floating-point number.
    Type: Grant
    Filed: October 18, 2018
    Date of Patent: December 8, 2020
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Huiying Lan, Qi Guo, Yunji Chen, Tianshi Chen, Shangying Li, Zhen Li
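    The table-lookup-plus-linear-fitting scheme above is a piecewise-linear approximation. The sketch below builds a slope/intercept table over equal-width segments and evaluates y = k*x + b; the segmentation scheme, table size, and the sigmoid example are assumptions.

        import math

        def build_table(func, lo, hi, segments):
            """Precompute slope k and intercept b for each segment of [lo, hi]."""
            table = []
            step = (hi - lo) / segments
            for i in range(segments):
                x0, x1 = lo + i * step, lo + (i + 1) * step
                k = (func(x1) - func(x0)) / (x1 - x0)     # slope over the segment
                b = func(x0) - k * x0                     # intercept through (x0, f(x0))
                table.append((k, b))
            return table

        def piecewise_approx(x, table, lo, hi):
            """Table lookup followed by linear fitting: y = k*x + b."""
            segments = len(table)
            idx = min(int((x - lo) / (hi - lo) * segments), segments - 1)
            k, b = table[idx]
            return k * x + b

        sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))
        table = build_table(sigmoid, lo=-8.0, hi=8.0, segments=64)
        print(piecewise_approx(1.0, table, -8.0, 8.0), sigmoid(1.0))   # close agreement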
  • Patent number: 10860681
    Abstract: Aspects for matrix addition in neural network are described herein. The aspects may include a controller unit configured to receive a matrix-add-scalar instruction that includes an address of the first matrix and a scalar value. The aspects may further include a computation module configured to receive the first matrix from a storage device based on the address of the first matrix. The first matrix may include one or more first elements. The one or more first elements are arranged in accordance with a two-dimensional data structure. The computation module may be further configured to respectively add the scalar value to each of the one or more first elements of the first matrix in accordance with the matrix-add-scalar instruction to generate one or more second elements for a second matrix.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: December 8, 2020
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Xiao Zhang, Shaoli Liu, Tianshi Chen, Yunji Chen
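    A one-function sketch of the matrix-add-scalar operation above, using NumPy broadcasting in place of the per-element hardware adds:

        import numpy as np

        def matrix_add_scalar(matrix, scalar):
            """Broadcast the scalar across every element of the first matrix
            to produce the second matrix."""
            return np.asarray(matrix) + scalar            # NumPy broadcasting does the per-element add

        print(matrix_add_scalar([[1, 2], [3, 4]], 10))    # [[11 12], [13 14]]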