Patents by Inventor Yunji Chen

Yunji Chen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190147015
    Abstract: Aspects for matrix addition in a neural network are described herein. The aspects may include a controller unit configured to receive a matrix-addition instruction. The aspects may further include a computation module configured to receive a first matrix and a second matrix. The first matrix may include one or more first elements and the second matrix may include one or more second elements. The one or more first elements and the one or more second elements may be arranged in accordance with a two-dimensional data structure. The computation module may be further configured to respectively add each of the first elements to each of the second elements based on a correspondence in the two-dimensional data structure to generate one or more third elements for a third matrix.
    Type: Application
    Filed: January 17, 2019
    Publication date: May 16, 2019
    Inventors: Xiao Zhang, Shaoli Liu, Tianshi Chen, Yunji Chen
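The correspondence-based addition the abstract describes can be sketched in software. This is a minimal illustration, not the patented hardware design; the function name and list-of-rows representation are assumptions:

```python
def matrix_add(a, b):
    """Add two matrices (lists of equal-length rows) element by element,
    matching elements by their position in the two-dimensional structure."""
    if len(a) != len(b) or any(len(ra) != len(rb) for ra, rb in zip(a, b)):
        raise ValueError("matrices must have the same shape")
    # each third element is the sum of the corresponding first and second elements
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]
```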
  • Publication number: 20190146793
    Abstract: Aspects for applying a transcendental function to a vector in a neural network are described herein. The aspects may include a controller unit configured to receive a transcendental function instruction that includes an address of a vector and an operation code that identifies a transcendental function. The aspects may further include a CORDIC processor configured to receive the vector that includes one or more elements based on the address of the vector in response to the transcendental function instruction. The CORDIC processor may be further configured to apply the transcendental function to each element of the vector to generate an output vector.
    Type: Application
    Filed: January 14, 2019
    Publication date: May 16, 2019
    Inventors: Dong Han, Xiao Zhang, Tianshi Chen, Yunji Chen
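The CORDIC technique named in the abstract evaluates transcendental functions using only shifts, adds, and a small table of arctangents. A software sketch of rotation-mode CORDIC computing sine and cosine; the iteration count and function names are assumptions, not the patented design:

```python
import math

def cordic_cos_sin(angle, iterations=32):
    # Rotation-mode CORDIC: rotate (1, 0) toward `angle` by micro-rotations
    # of atan(2**-i); converges for |angle| up to about 1.74 radians.
    x, y, z = 1.0, 0.0, angle
    gain = 1.0
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)
        gain /= math.sqrt(1.0 + 2.0 ** (-2 * i))  # accumulate the CORDIC gain
    return x * gain, y * gain  # approximately (cos(angle), sin(angle))

def apply_to_vector(vec):
    # as in the abstract: apply the function to each element of the vector
    return [cordic_cos_sin(z) for z in vec]
```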
  • Publication number: 20190138922
    Abstract: Aspects for forward propagation of a multilayer neural network (MNN) in a neural network processor are described herein. As an example, the aspects may include a computation module that includes a master computation module and one or more slave computation modules. The master computation module may be configured to receive one or more groups of MNN data. The one or more groups of MNN data may include input data and one or more weight values, wherein at least a portion of the input data and the weight values are stored as discrete values. The one or more slave computation modules may be configured to calculate one or more groups of slave output values based on a data type of each of the one or more groups of MNN data.
    Type: Application
    Filed: April 15, 2016
    Publication date: May 9, 2019
    Inventors: Shaoli Liu, Yong Yu, Yunji Chen, Tianshi Chen
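The master/slave split in the abstract can be mimicked in software: each "slave" computes output values for its own share of the neurons, and the "master" gathers the partial results and applies the activation. A hypothetical sketch; the sigmoid activation, the even row split, and all names are assumptions:

```python
import math

def slave_compute(inputs, weight_rows):
    # one slave: weighted sums for its assigned output neurons
    return [sum(w * x for w, x in zip(row, inputs)) for row in weight_rows]

def master_forward(inputs, weights, num_slaves=2):
    # split the weight rows across slaves, gather, then activate (sigmoid here)
    chunk = (len(weights) + num_slaves - 1) // num_slaves
    partials = []
    for s in range(num_slaves):
        partials.extend(slave_compute(inputs, weights[s * chunk:(s + 1) * chunk]))
    return [1.0 / (1.0 + math.exp(-v)) for v in partials]
```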
  • Publication number: 20190138570
    Abstract: The present invention discloses an apparatus and a method for performing a variety of transcendental function operations. The apparatus comprises a pre-processing unit group, a core unit and a post-processing unit group, wherein the pre-processing unit group is configured to transform an externally input independent variable a into x, y coordinates, an angle z, and other information k, and determine an operation mode to be used by the core unit; the core unit is configured to perform trigonometric or hyperbolic transformation on the x, y coordinates and the angle z, obtain transformed x′, y′ coordinates and angle z′, and output them to the post-processing unit group; and the post-processing unit group is configured to transform the x′, y′ coordinates and the angle z′ input by the core unit according to the other information k and a function f input by the pre-processing unit group to obtain an output result c.
    Type: Application
    Filed: April 29, 2016
    Publication date: May 9, 2019
    Inventors: Shijin Zhang, Shangying Li, Tianshi Chen, Yunji Chen
  • Publication number: 20190129858
    Abstract: Aspects for vector circular shifting in a neural network are described herein. The aspects may include a direct memory access unit configured to receive a vector that includes multiple elements. The multiple elements are stored in a one-dimensional data structure. The direct memory access unit may store the vector in a vector caching unit. The aspects may further include an instruction caching unit configured to receive a vector shifting instruction that includes a step length for shifting the elements in the vector. Further still, the aspects may include a computation module configured to shift the elements of the vector toward one direction by the step length.
    Type: Application
    Filed: October 26, 2018
    Publication date: May 2, 2019
    Inventors: Daofu Liu, Xiao Zhang, Shaoli Liu, Tianshi Chen, Yunji Chen
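The circular shift the abstract describes rotates a vector's elements by a step length, wrapping elements that fall off one end back around to the other. A minimal software sketch; the shift direction and function name are assumptions:

```python
def circular_shift(vec, step):
    """Rotate the elements of vec toward the front by `step` positions,
    wrapping shifted-out elements around to the back."""
    if not vec:
        return vec
    step %= len(vec)  # a step longer than the vector wraps around
    return vec[step:] + vec[:step]
```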
  • Publication number: 20190130274
    Abstract: Aspects for backpropagation of a multilayer neural network (MNN) in a neural network processor are described herein. The aspects may include a computation module configured to receive one or more groups of MNN data. The computation module may further include a master computation module configured to calculate an input gradient vector based on a first output gradient vector from an adjacent layer and based on a data type of each of the one or more groups of MNN data. Further still, the computation module may include one or more slave computation modules configured to calculate, in parallel, portions of a second output vector based on the input gradient vector calculated by the master computation module and based on the data type of each of the one or more groups of MNN data.
    Type: Application
    Filed: April 15, 2016
    Publication date: May 2, 2019
    Inventors: Qi Guo, Yong Yu, Tianshi Chen, Yunji Chen
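The core step in turning a layer's output gradient into an input gradient during backpropagation is a multiplication with the transposed weight matrix. A hypothetical software sketch of that step, not the patented hardware; the names and list-of-rows weight layout are assumptions:

```python
def input_gradient(out_grad, weights):
    """Compute input_grad[j] = sum_i out_grad[i] * weights[i][j],
    i.e. the output gradient propagated back through the weights."""
    n_in = len(weights[0])
    return [sum(out_grad[i] * weights[i][j] for i in range(len(weights)))
            for j in range(n_in)]
```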
  • Publication number: 20190122094
    Abstract: Aspects for neural network operations with fixed-point numbers of short bit length are described herein. The aspects may include a fixed-point number converter configured to convert one or more first floating-point numbers to one or more first fixed-point numbers in accordance with at least one format. Further, the aspects may include a neural network processor configured to process the first fixed-point numbers to generate one or more process results.
    Type: Application
    Filed: October 29, 2018
    Publication date: April 25, 2019
    Inventors: Yunji Chen, Shaoli Liu, Qi Guo, Tianshi Chen
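A short-bit-length fixed-point format stores a value as a small integer with an implied binary scale. A hypothetical sketch of the float-to-fixed conversion idea, with saturation at the representable range; the 8-bit width and 4 fractional bits are assumptions, not the format from the patent:

```python
def to_fixed(x, total_bits=8, frac_bits=4):
    # scale by 2**frac_bits, round, and saturate to the signed integer range
    scaled = round(x * (1 << frac_bits))
    lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    return max(lo, min(hi, scaled))

def from_fixed(q, frac_bits=4):
    # recover the represented real value
    return q / (1 << frac_bits)
```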
  • Publication number: 20190095206
    Abstract: Aspects for vector operations in a neural network are described herein. The aspects may include a vector caching unit configured to store a first vector and a second vector, wherein the first vector includes one or more first elements and the second vector includes one or more second elements. The aspects may further include one or more adders and a combiner. The one or more adders may be configured to respectively add each of the first elements to a corresponding one of the second elements to generate one or more addition results. The combiner may be configured to combine the one or more addition results into an output vector.
    Type: Application
    Filed: October 26, 2018
    Publication date: March 28, 2019
    Inventors: Jinhua Tao, Tian Zhi, Shaoli Liu, Tianshi Chen, Yunji Chen
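The adders-plus-combiner structure in the abstract amounts to elementwise vector addition: one adder per element pair, with the results gathered into an output vector. A minimal software sketch; the function name is an assumption:

```python
def vector_add(a, b):
    """Elementwise addition: each 'adder' handles one element pair; the
    'combiner' step gathers the addition results into the output vector."""
    if len(a) != len(b):
        raise ValueError("vectors must be the same length")
    return [x + y for x, y in zip(a, b)]
```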
  • Publication number: 20190095401
    Abstract: Aspects for vector operations in a neural network are described herein. The aspects may include a vector caching unit configured to store a first vector and a second vector, wherein the first vector includes one or more first elements and the second vector includes one or more second elements. The aspects may further include one or more adders and a combiner. The one or more adders may be configured to respectively add each of the first elements to a corresponding one of the second elements to generate one or more addition results. The combiner may be configured to combine the one or more addition results into an output vector.
    Type: Application
    Filed: October 26, 2018
    Publication date: March 28, 2019
    Inventors: Jinhua Tao, Tian Zhi, Shaoli Liu, Tianshi Chen, Yunji Chen
  • Publication number: 20190095207
    Abstract: Aspects for vector operations in a neural network are described herein. The aspects may include a vector caching unit configured to store a first vector and a second vector, wherein the first vector includes one or more first elements and the second vector includes one or more second elements. The aspects may further include one or more adders and a combiner. The one or more adders may be configured to respectively add each of the first elements to a corresponding one of the second elements to generate one or more addition results. The combiner may be configured to combine the one or more addition results into an output vector.
    Type: Application
    Filed: October 26, 2018
    Publication date: March 28, 2019
    Inventors: Jinhua Tao, Tian Zhi, Shaoli Liu, Tianshi Chen, Yunji Chen
  • Publication number: 20190087716
    Abstract: The present disclosure provides a neural network processing system comprising a multi-core processing module, composed of a plurality of core processing modules, for executing vector multiplication and addition operations in a neural network operation; an on-chip storage medium; an on-chip address index module; and an ALU module for executing non-linear operations that cannot be completed by the multi-core processing module, according to input data acquired from the multi-core processing module or the on-chip storage medium. The plurality of core processing modules either share an on-chip storage medium and an ALU module, or each have an independent on-chip storage medium and an ALU module. The present disclosure improves the operating speed of the neural network processing system, making its performance higher and more efficient.
    Type: Application
    Filed: August 9, 2016
    Publication date: March 21, 2019
    Inventors: Zidong Du, Qi Guo, Tianshi Chen, Yunji Chen
  • Publication number: 20190087710
    Abstract: Aspects for Long Short-Term Memory (LSTM) blocks in a recurrent neural network (RNN) are described herein. As an example, the aspects may include one or more slave computation modules, an interconnection unit, and a master computation module collectively configured to calculate an activated input gate value, an activated forget gate value, a current cell status of the current computation period, an activated output gate value, and a forward pass result.
    Type: Application
    Filed: October 29, 2018
    Publication date: March 21, 2019
    Inventors: Qi Guo, Xunyu Chen, Yunji Chen, Tianshi Chen
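The gate values and cell status listed in the abstract follow the standard LSTM cell equations. A hypothetical scalar sketch of one computation period, not the patented master/slave hardware; the weight layout and all names are assumptions:

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def lstm_step(x, h_prev, c_prev, w):
    """One LSTM computation period. w maps gate name -> (w_x, w_h, bias)."""
    i = sigmoid(w['i'][0] * x + w['i'][1] * h_prev + w['i'][2])    # activated input gate
    f = sigmoid(w['f'][0] * x + w['f'][1] * h_prev + w['f'][2])    # activated forget gate
    g = math.tanh(w['g'][0] * x + w['g'][1] * h_prev + w['g'][2])  # candidate cell value
    o = sigmoid(w['o'][0] * x + w['o'][1] * h_prev + w['o'][2])    # activated output gate
    c = f * c_prev + i * g     # current cell status
    h = o * math.tanh(c)       # forward pass result
    return h, c
```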
  • Publication number: 20190087709
    Abstract: Aspects for Long Short-Term Memory (LSTM) blocks in a recurrent neural network (RNN) are described herein. As an example, the aspects may include one or more slave computation modules, an interconnection unit, and a master computation module collectively configured to calculate an activated input gate value, an activated forget gate value, a current cell status of the current computation period, an activated output gate value, and a forward pass result.
    Type: Application
    Filed: October 29, 2018
    Publication date: March 21, 2019
    Inventors: Qi Guo, Xunyu Chen, Yunji Chen, Tianshi Chen
  • Publication number: 20190079727
    Abstract: Aspects for neural network operations with floating-point numbers of short bit length are described herein. The aspects may include a neural network processor configured to process one or more floating-point numbers to generate one or more process results. Further, the aspects may include a floating-point number converter configured to convert the one or more process results in accordance with at least one format of shortened floating-point numbers. The floating-point number converter may include a pruning processor configured to adjust a length of a mantissa field of the process results and an exponent modifier configured to adjust a length of an exponent field of the process results in accordance with the at least one format.
    Type: Application
    Filed: October 29, 2018
    Publication date: March 14, 2019
    Inventors: Tianshi Chen, Shaoli Liu, Qi Guo, Yunji Chen
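The mantissa-pruning idea can be approximated in software by rounding a value so that its mantissa keeps only a fixed number of bits. A hypothetical sketch using Python's `math.frexp`/`math.ldexp`; the 10-bit mantissa width is an assumption, not the format from the patent:

```python
import math

def shorten_float(x, mantissa_bits=10):
    """Round x so its mantissa keeps only `mantissa_bits` bits, mimicking
    conversion to a shortened floating-point format."""
    if x == 0.0 or not math.isfinite(x):
        return x
    m, e = math.frexp(x)  # x = m * 2**e with 0.5 <= |m| < 1
    m = round(m * (1 << mantissa_bits)) / (1 << mantissa_bits)  # prune mantissa
    return math.ldexp(m, e)
```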
  • Publication number: 20190080241
    Abstract: Aspects for backpropagation of a multilayer neural network (MNN) in a neural network processor are described herein. The aspects may include a computation module configured to receive one or more groups of MNN data. The computation module may further include a master computation module configured to calculate an input gradient vector based on a first output gradient vector from an adjacent layer and based on a data type of each of the one or more groups of MNN data. Further still, the computation module may include one or more slave computation modules configured to calculate, in parallel, portions of a second output vector based on the input gradient vector calculated by the master computation module and based on the data type of each of the one or more groups of MNN data.
    Type: Application
    Filed: November 6, 2018
    Publication date: March 14, 2019
    Inventors: Qi Guo, Yong Yu, Tianshi Chen, Yunji Chen
  • Publication number: 20190079766
    Abstract: Aspects for vector operations in a neural network are described herein. The aspects may include a vector caching unit configured to store a first vector and a second vector, wherein the first vector includes one or more first elements and the second vector includes one or more second elements. The aspects may further include one or more adders and a combiner. The one or more adders may be configured to respectively add each of the first elements to a corresponding one of the second elements to generate one or more addition results. The combiner may be configured to combine the one or more addition results into an output vector.
    Type: Application
    Filed: October 26, 2018
    Publication date: March 14, 2019
    Inventors: Jinhua Tao, Tian Zhi, Shaoli Liu, Tianshi Chen, Yunji Chen
  • Publication number: 20190079765
    Abstract: Aspects for vector operations in a neural network are described herein. The aspects may include a vector caching unit configured to store a first vector and a second vector, wherein the first vector includes one or more first elements and the second vector includes one or more second elements. The aspects may further include one or more adders and a combiner. The one or more adders may be configured to respectively add each of the first elements to a corresponding one of the second elements to generate one or more addition results. The combiner may be configured to combine the one or more addition results into an output vector.
    Type: Application
    Filed: October 26, 2018
    Publication date: March 14, 2019
    Inventors: Jinhua Tao, Tian Zhi, Shaoli Liu, Tianshi Chen, Yunji Chen
  • Publication number: 20190073220
    Abstract: The present disclosure provides a data read-write scheduler and a reservation station for vector operations. The data read-write scheduler provides a read instruction cache module and a write instruction cache module, detects conflicting instructions based on the two modules, and suspends their execution. Once the conflict is resolved, the suspended instructions are re-executed, thereby resolving read-after-write and write-after-read conflicts between instructions and guaranteeing that correct data are provided to the vector operations component. The subject disclosure therefore has significant value for promotion and application.
    Type: Application
    Filed: November 7, 2018
    Publication date: March 7, 2019
    Inventors: Dong Han, Shaoli Liu, Yunji Chen, Tianshi Chen
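The conflict check the abstract describes can be sketched as a predicate over pending reads and writes: an instruction that reads a register with a pending write (read-after-write) or writes a register with a pending read (write-after-read) must be suspended and retried. A hypothetical software sketch; the instruction representation is an assumption:

```python
def has_conflict(instr, pending_reads, pending_writes):
    """instr: (opcode, reads, writes) with reads/writes as sets of register
    names; pending_reads/pending_writes come from the two cache modules."""
    _, reads, writes = instr
    raw = bool(reads & pending_writes)   # read-after-write conflict
    war = bool(writes & pending_reads)   # write-after-read conflict
    return raw or war
```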
  • Publication number: 20190073583
    Abstract: Aspects for forward propagation of a convolutional artificial neural network are described herein. The aspects may include a direct memory access unit configured to receive input data from a storage device and a master computation module configured to select one or more portions of the input data based on a predetermined convolution window. Further, the aspects may include one or more slave computation modules respectively configured to convolve a convolution kernel with one of the one or more portions of the input data to generate a slave output value. Further still, the aspects may include an interconnection unit configured to combine the one or more slave output values into one or more intermediate result vectors, wherein the master computation module is further configured to merge the one or more intermediate result vectors into a merged intermediate vector.
    Type: Application
    Filed: October 29, 2018
    Publication date: March 7, 2019
    Inventors: Tianshi Chen, Dong Han, Yunji Chen, Shaoli Liu, Qi Guo
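The windowed convolution in the abstract can be illustrated in one dimension: a window slides over the input, and each window position yields one output value from the kernel. A minimal hypothetical sketch (the patent concerns a hardware design; the 1-D "valid" convolution here is a simplification):

```python
def conv1d_valid(inputs, kernel):
    """Slide the kernel over the input; each position corresponds to one
    'slave' computing the kernel's dot product with its window."""
    k = len(kernel)
    return [sum(kernel[j] * inputs[i + j] for j in range(k))
            for i in range(len(inputs) - k + 1)]
```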
  • Publication number: 20190073219
    Abstract: Aspects for vector operations in a neural network are described herein. The aspects may include a vector caching unit configured to store a first vector and a second vector, wherein the first vector includes one or more first elements and the second vector includes one or more second elements. The aspects may further include a computation module that includes one or more bitwise processors and a combiner. The bitwise processors may be configured to perform bitwise operations between each of the first elements and a corresponding one of the second elements to generate one or more operation results. The combiner may be configured to combine the one or more operation results into an output vector.
    Type: Application
    Filed: October 26, 2018
    Publication date: March 7, 2019
    Inventors: Tao Luo, Tian Zhi, Shaoli Liu, Tianshi Chen, Yunji Chen
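The bitwise-processor-plus-combiner structure amounts to applying a bitwise operation elementwise across two vectors. A minimal software sketch; the set of supported operations and the function name are assumptions:

```python
def vector_bitwise(a, b, op):
    """Apply a bitwise operation elementwise: each 'bitwise processor'
    handles one element pair; the results form the output vector."""
    ops = {'and': lambda x, y: x & y,
           'or':  lambda x, y: x | y,
           'xor': lambda x, y: x ^ y}
    return [ops[op](x, y) for x, y in zip(a, b)]
```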