Patents by Inventor Yunji Chen

Yunji Chen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11797269
    Abstract: Aspects for neural network operations with floating-point number of short bit length are described herein. The aspects may include a neural network processor configured to process one or more floating-point numbers to generate one or more process results. Further, the aspects may include a floating-point number converter configured to convert the one or more process results in accordance with at least one format of shortened floating-point numbers. The floating-point number converter may include a pruning processor configured to adjust a length of a mantissa field of the process results and an exponent modifier configured to adjust a length of an exponent field of the process results in accordance with the at least one format.
    Type: Grant
    Filed: January 12, 2021
    Date of Patent: October 24, 2023
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Tianshi Chen, Shaoli Liu, Qi Guo, Yunji Chen
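The conversion described in this abstract (pruning the mantissa field and adjusting the exponent field) can be illustrated with a minimal sketch. The function name, field widths, and clamping rule below are assumptions for illustration, not the patented implementation:

```python
import math

def to_short_float(x, exp_bits=5, man_bits=10):
    """Quantize x to a shortened floating-point format with the given
    exponent and mantissa field widths. Hypothetical sketch of the
    pruning-processor / exponent-modifier steps; not the patented design."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    # Decompose into a mantissa in [0.5, 1) and an exponent.
    man, exp = math.frexp(abs(x))
    # Prune the mantissa field: keep only man_bits bits of precision.
    man = round(man * (1 << man_bits)) / (1 << man_bits)
    # Adjust the exponent field: clamp to an assumed representable range.
    lo, hi = -(1 << (exp_bits - 1)) + 2, (1 << (exp_bits - 1))
    exp = max(lo, min(hi, exp))
    return sign * math.ldexp(man, exp)
```

With 10 mantissa bits, values survive with roughly half-precision accuracy; narrower fields trade accuracy for storage and bandwidth.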
  • Patent number: 11775832
    Abstract: Aspects of data modification for neural networks are described herein. The aspects may include a data modifier configured to receive input data and weight values of a neural network. The data modifier may include an input data modifier configured to modify the received input data and a weight modifier configured to modify the received weight values. The aspects may further include a computing unit configured to calculate one or more groups of output data based on the modified input data and the modified weight values.
    Type: Grant
    Filed: June 18, 2019
    Date of Patent: October 3, 2023
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Shaoli Liu, Yifan Hao, Yunji Chen, Qi Guo, Tianshi Chen
  • Patent number: 11734383
    Abstract: A computing device and related products are provided. The computing device is configured to perform machine learning calculations. The computing device includes an operation unit, a controller unit, and a storage unit. The storage unit includes a data input/output (I/O) unit, a register, and a cache. The technical solution provided by the present disclosure has the advantages of fast calculation speed and energy saving.
    Type: Grant
    Filed: July 29, 2020
    Date of Patent: August 22, 2023
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Tianshi Chen, Xiao Zhang, Shaoli Liu, Yunji Chen
  • Patent number: 11727244
    Abstract: Aspects for Long Short-Term Memory (LSTM) blocks in a recurrent neural network (RNN) are described herein. As an example, the aspects may include one or more slave computation modules, an interconnection unit, and a master computation module collectively configured to calculate an activated input gate value, an activated forget gate value, a current cell status of the current computation period, an activated output gate value, and a forward pass result.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: August 15, 2023
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Qi Guo, Xunyu Chen, Yunji Chen, Tianshi Chen
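The gate sequence this abstract lists (activated input gate, forget gate, cell status, output gate, forward pass result) is the standard LSTM step. A scalar sketch, where `W` is a hypothetical dict of per-gate `(input weight, recurrent weight, bias)` triples rather than anything specified by the patent:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One LSTM computation period over scalar state, mirroring the
    gate sequence in the abstract. Illustrative only."""
    def gate(name, act):
        wx, wh, b = W[name]
        return act(wx * x + wh * h_prev + b)
    i = gate("input", sigmoid)      # activated input gate value
    f = gate("forget", sigmoid)     # activated forget gate value
    g = gate("cell", math.tanh)     # candidate cell update
    c = f * c_prev + i * g          # current cell status of this period
    o = gate("output", sigmoid)     # activated output gate value
    h = o * math.tanh(c)            # forward pass result
    return h, c
```

In the described hardware the per-gate matrix products would be split across the slave computation modules and merged through the interconnection unit; here everything runs in one function for clarity.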
  • Patent number: 11720783
    Abstract: Aspects of a neural network operation device are described herein. The aspects may include a matrix element storage module configured to receive a first matrix that includes one or more first values, each of the first values being represented in a sequence that includes one or more bits. The matrix element storage module may be further configured to respectively store the one or more bits in one or more storage spaces in accordance with positions of the bits in the sequence. The aspects may further include a numeric operation module configured to calculate an intermediate result for each storage space based on one or more second values in a second matrix and an accumulation module configured to sum the intermediate results to generate an output value.
    Type: Grant
    Filed: October 21, 2019
    Date of Patent: August 8, 2023
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Tianshi Chen, Yimin Zhuang, Qi Guo, Shaoli Liu, Yunji Chen
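The scheme in this abstract (storing each first-matrix value's bits by position, computing an intermediate result per bit position from the second values, then accumulating) amounts to bit-plane multiplication: multiplies are replaced by conditional adds plus a shift. A minimal sketch for unsigned integers, with names and bit width chosen for illustration:

```python
def bitwise_matvec(first_row, second_col, bit_width=8):
    """Dot product of non-negative integers computed by decomposing the
    first operands into bit planes, echoing the matrix element storage,
    numeric operation, and accumulation modules. Hypothetical sketch."""
    total = 0
    for bit in range(bit_width):                    # one "storage space" per bit position
        partial = sum(v for a, v in zip(first_row, second_col)
                      if (a >> bit) & 1)            # intermediate result: adds only
        total += partial * (1 << bit)               # accumulate with the bit's weight
    return total
```

Because each bit plane needs only additions, the numeric operation module can avoid multiplier hardware entirely.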
  • Patent number: 11616662
    Abstract: The present invention provides a fractal tree structure-based data transmission device and method, a control device, and an intelligent chip. The device comprises: a central node that serves as the communication data center of a network-on-chip and broadcasts or multicasts communication data to a plurality of leaf nodes; the plurality of leaf nodes, which serve as communication data nodes of the network-on-chip and transmit communication data to the central node; and forwarder modules that connect the central node with the plurality of leaf nodes and forward the communication data. The central node, the forwarder modules, and the plurality of leaf nodes are connected in a fractal tree network structure: the central node is directly connected to M forwarder modules and/or leaf nodes, and any forwarder module is directly connected to M next-level forwarder modules and/or leaf nodes.
    Type: Grant
    Filed: November 20, 2020
    Date of Patent: March 28, 2023
    Assignee: Institute of Computing Technology, Chinese Academy of Sciences
    Inventors: Jinhua Tao, Tao Luo, Shaoli Liu, Shijin Zhang, Yunji Chen
  • Patent number: 11580367
    Abstract: The present disclosure provides a neural network processing system that comprises: a multi-core processing module, composed of a plurality of core processing modules, for executing vector multiplication and addition operations in a neural network operation; an on-chip storage medium; an on-chip address index module; and an ALU module for executing non-linear operations not completable by the multi-core processing module, according to input data acquired from the multi-core processing module or the on-chip storage medium. The plurality of core processing modules either share an on-chip storage medium and an ALU module, or each have an independent on-chip storage medium and ALU module. The present disclosure improves the operating speed of the neural network processing system, making its performance higher and more efficient.
    Type: Grant
    Filed: August 9, 2016
    Date of Patent: February 14, 2023
    Assignee: Institute of Computing Technology, Chinese Academy of Sciences
    Inventors: Zidong Du, Qi Guo, Tianshi Chen, Yunji Chen
  • Patent number: 11574195
    Abstract: Aspects of data modification for neural networks are described herein. The aspects may include a connection value generator configured to receive one or more groups of input data and one or more weight values and generate one or more connection values based on the one or more weight values. The aspects may further include a pruning module configured to modify the one or more groups of input data and the one or more weight values based on the connection values. Further still, the aspects may include a computing unit configured to update the one or more weight values and/or calculate one or more input gradients.
    Type: Grant
    Filed: August 15, 2019
    Date of Patent: February 7, 2023
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
    Inventors: Yunji Chen, Xinkai Song, Shaoli Liu, Tianshi Chen
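The pipeline this abstract describes (derive connection values from the weight values, then prune input/weight pairs accordingly) can be sketched as magnitude-threshold pruning. The threshold and the 0/1 connection-value encoding are assumptions for illustration:

```python
def prune(inputs, weights, threshold=0.1):
    """Sketch of connection-value pruning: compute a connection value
    per weight from its magnitude and zero out input/weight pairs whose
    connection value is 0. Hypothetical threshold rule."""
    connection = [1 if abs(w) >= threshold else 0 for w in weights]
    pruned_in = [x * c for x, c in zip(inputs, connection)]
    pruned_w = [w * c for w, c in zip(weights, connection)]
    return pruned_in, pruned_w
```

After pruning, the computing unit described in the abstract would update the surviving weights and compute input gradients only for retained connections.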
  • Patent number: 11568258
    Abstract: Aspects of data modification for neural networks are described herein. The aspects may include a connection value generator configured to receive one or more groups of input data and one or more weight values and generate one or more connection values based on the one or more weight values. The aspects may further include a pruning module configured to modify the one or more groups of input data and the one or more weight values based on the connection values. Further still, the aspects may include a computing unit configured to update the one or more weight values and/or calculate one or more input gradients.
    Type: Grant
    Filed: August 15, 2019
    Date of Patent: January 31, 2023
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
    Inventors: Yunji Chen, Xinkai Song, Shaoli Liu, Tianshi Chen
  • Patent number: 11531860
    Abstract: Aspects for Long Short-Term Memory (LSTM) blocks in a recurrent neural network (RNN) are described herein. As an example, the aspects may include one or more slave computation modules, an interconnection unit, and a master computation module collectively configured to calculate an activated input gate value, an activated forget gate value, a current cell status of the current computation period, an activated output gate value, and a forward pass result.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: December 20, 2022
    Assignee: CAMBRICON (XI'AN) SEMICONDUCTOR CO., LTD.
    Inventors: Qi Guo, Xunyu Chen, Yunji Chen, Tianshi Chen
  • Patent number: 11513972
    Abstract: Aspects of managing Translation Lookaside Buffer (TLB) units are described herein. The aspects may include a memory management unit (MMU) that includes one or more TLB units and a control unit. The control unit may be configured to identify one from the one or more TLB units based on a stream identification (ID) included in a received virtual address and, further, to identify a frame number in the identified TLB unit. A physical address may be generated by the control unit based on the frame number and an offset included in the virtual address.
    Type: Grant
    Filed: August 12, 2019
    Date of Patent: November 29, 2022
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Tianshi Chen, Qi Guo, Yunji Chen
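The translation path in this abstract (select a TLB unit by stream ID, find the frame number, combine it with the page offset) can be sketched as follows. The page size, selection rule, and dict-based TLB are assumptions; a real MMU would also handle TLB misses:

```python
PAGE_BITS = 12  # assumed 4 KiB pages

def translate(tlbs, virtual_address, stream_id):
    """Sketch of the described MMU path: identify a TLB unit from the
    stream ID, look up the frame number for the virtual page, then form
    the physical address from frame number and offset. `tlbs` is a
    hypothetical list of {page_number: frame_number} dicts."""
    tlb = tlbs[stream_id % len(tlbs)]        # identify TLB unit by stream ID
    page = virtual_address >> PAGE_BITS      # virtual page number
    offset = virtual_address & ((1 << PAGE_BITS) - 1)
    frame = tlb[page]                        # hit assumed; a miss would walk page tables
    return (frame << PAGE_BITS) | offset
```

Partitioning TLB entries by stream ID keeps translations from different data streams from evicting each other's entries.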
  • Patent number: 11507640
    Abstract: Aspects for vector operations in a neural network are described herein. The aspects may include a vector caching unit configured to store a first vector and a second vector, wherein the first vector includes one or more first elements and the second vector includes one or more second elements. The aspects may further include one or more adders and a combiner. The one or more adders may be configured to respectively add each of the first elements to a corresponding one of the second elements to generate one or more addition results. The combiner may be configured to combine the one or more addition results into an output vector.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: November 22, 2022
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Jinhua Tao, Tian Zhi, Shaoli Liu, Tianshi Chen, Yunji Chen
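The adder/combiner arrangement in the abstract reduces to elementwise vector addition; a minimal sketch (names assumed):

```python
def vector_add(first, second):
    """Sketch of the adder/combiner pipeline: each adder sums one pair
    of elements; the combiner gathers the results into the output vector."""
    results = [a + b for a, b in zip(first, second)]   # one adder per element pair
    return results                                     # combined output vector
```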
  • Patent number: 11501158
    Abstract: Aspects for vector operations in neural network are described herein. The aspects may include a controller unit configured to receive an instruction to generate a random vector that includes one or more elements. The instruction may include a predetermined distribution, a count of the elements, and an address of the random vector. The aspects may further include a computation module configured to generate the one or more elements, wherein the one or more elements are subject to the predetermined distribution.
    Type: Grant
    Filed: October 25, 2018
    Date of Patent: November 15, 2022
    Assignee: CAMBRICON (XI'AN) SEMICONDUCTOR CO., LTD.
    Inventors: Daofu Liu, Xiao Zhang, Shaoli Liu, Tianshi Chen, Yunji Chen
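The instruction this abstract describes carries a predetermined distribution, an element count, and a destination address; executing it fills a vector with draws from that distribution. A sketch using Python's standard `random` module, with hypothetical distribution names and parameters:

```python
import random

def generate_random_vector(count, distribution="uniform", **params):
    """Sketch of executing the described instruction: produce `count`
    elements subject to a predetermined distribution. Distribution
    names and parameter keys here are illustrative assumptions."""
    if distribution == "uniform":
        lo, hi = params.get("low", 0.0), params.get("high", 1.0)
        return [random.uniform(lo, hi) for _ in range(count)]
    if distribution == "normal":
        mu, sigma = params.get("mean", 0.0), params.get("std", 1.0)
        return [random.gauss(mu, sigma) for _ in range(count)]
    raise ValueError(f"unsupported distribution: {distribution}")
```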
  • Patent number: 11488000
    Abstract: The present disclosure provides an operation apparatus and method for an acceleration chip for accelerating a deep neural network algorithm. The apparatus comprises: a vector addition processor module and a vector function value arithmetic unit and a vector multiplier-adder module wherein the three modules execute a programmable instruction, and interact with each other to calculate values of neurons and a network output result of a neural network, and a variation amount of a synaptic weight representing the interaction strength of the neurons on an input layer to the neurons on an output layer; and the three modules are all provided with an intermediate value storage region and perform read and write operations on a primary memory.
    Type: Grant
    Filed: June 17, 2016
    Date of Patent: November 1, 2022
    Assignee: Institute of Computing Technology, Chinese Academy of Sciences
    Inventors: Zhen Li, Shaoli Liu, Shijin Zhang, Tao Luo, Cheng Qian, Yunji Chen, Tianshi Chen
  • Publication number: 20220308831
    Abstract: Aspects for neural network operations with fixed-point number of short bit length are described herein. The aspects may include a fixed-point number converter configured to convert one or more first floating-point numbers to one or more first fixed-point numbers in accordance with at least one format. Further, the aspects may include a neural network processor configured to process the first fixed-point numbers to generate one or more process results.
    Type: Application
    Filed: March 1, 2022
    Publication date: September 29, 2022
    Inventors: Yunji CHEN, Shaoli LIU, Qi GUO, Tianshi CHEN
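The conversion this application describes (floating-point numbers to short-bit-length fixed-point numbers in accordance with a format) can be sketched as scale, round, and saturate. The Q8.8-style `(int_bits, frac_bits)` split is an assumed example format, not one from the application:

```python
def float_to_fixed(values, int_bits=8, frac_bits=8):
    """Sketch of the fixed-point converter: scale each floating-point
    number by 2**frac_bits, round to an integer, and saturate to the
    format's signed range. Illustrative format choice."""
    scale = 1 << frac_bits
    lo = -(1 << (int_bits + frac_bits - 1))
    hi = (1 << (int_bits + frac_bits - 1)) - 1
    return [max(lo, min(hi, round(v * scale))) for v in values]
```

The neural network processor would then operate on these integers directly, recovering real values by dividing by `2**frac_bits` where needed.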
  • Patent number: 11436301
    Abstract: Aspects for vector operations in a neural network are described herein. The aspects may include a vector caching unit configured to store a first vector and a second vector, wherein the first vector includes one or more first elements and the second vector includes one or more second elements. The aspects may further include one or more adders and a combiner. The one or more adders may be configured to respectively add each of the first elements to a corresponding one of the second elements to generate one or more addition results. The combiner may be configured to combine the one or more addition results into an output vector.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: September 6, 2022
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Jinhua Tao, Tian Zhi, Shaoli Liu, Tianshi Chen, Yunji Chen
  • Patent number: 11409524
    Abstract: Aspects for vector operations in a neural network are described herein. The aspects may include a vector caching unit configured to store a vector, wherein the vector includes one or more elements. The aspects may further include a computation module that includes one or more comparators configured to compare the one or more elements to generate an output result that satisfies a predetermined condition included in an instruction.
    Type: Grant
    Filed: October 25, 2018
    Date of Patent: August 9, 2022
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Tian Zhi, Shaoli Liu, Qi Guo, Tianshi Chen, Yunji Chen
  • Patent number: 11373084
    Abstract: Aspects for forward propagation in fully connected layers of a convolutional artificial neural network are described herein. The aspects may include multiple slave computation modules configured to parallelly calculate multiple groups of slave output values based on an input vector received via the interconnection unit. Further, the aspects may include a master computation module connected to the multiple slave computation modules via an interconnection unit, wherein the master computation module is configured to generate an output vector based on the intermediate result vector.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: June 28, 2022
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Shaoli Liu, Huiying Lan, Qi Guo, Yunji Chen, Tianshi Chen
  • Patent number: 11341211
    Abstract: Aspects for vector operations in a neural network are described herein. The aspects may include a vector caching unit configured to store a first vector and a second vector, wherein the first vector includes one or more first elements and the second vector includes one or more second elements. The aspects may further include one or more adders and a combiner. The one or more adders may be configured to respectively add each of the first elements to a corresponding one of the second elements to generate one or more addition results. The combiner may be configured to combine the one or more addition results into an output vector.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: May 24, 2022
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Jinhua Tao, Tian Zhi, Shaoli Liu, Tianshi Chen, Yunji Chen
  • Patent number: 11308398
    Abstract: Aspects of data modification for neural networks are described herein. The aspects may include a connection value generator configured to receive one or more groups of input data and one or more weight values and generate one or more connection values based on the one or more weight values. The aspects may further include a pruning module configured to modify the one or more groups of input data and the one or more weight values based on the connection values. Further still, the aspects may include a computing unit configured to update the one or more weight values and/or calculate one or more input gradients.
    Type: Grant
    Filed: June 27, 2019
    Date of Patent: April 19, 2022
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Yunji Chen, Xinkai Song, Shaoli Liu, Tianshi Chen