Patents by Inventor Yunji Chen

Yunji Chen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
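Short illustrative code sketches for several of the operations described in these abstracts follow the listing; they are editorial additions and are not part of the patent records.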

  • Publication number: 20190073339
    Abstract: Aspects for vector operations in a neural network are described herein. The aspects may include a vector caching unit configured to store a first vector and a second vector, wherein the first vector includes one or more first elements and the second vector includes one or more second elements. The aspects may further include one or more adders and a combiner. The one or more adders may be configured to respectively add each of the first elements to a corresponding one of the second elements to generate one or more addition results. The combiner may be configured to combine the one or more addition results into an output vector.
    Type: Application
    Filed: October 26, 2018
    Publication date: March 7, 2019
    Inventors: Jinhua Tao, Tian Zhi, Shaoli Liu, Tianshi Chen, Yunji Chen
  • Publication number: 20190073221
    Abstract: The present disclosure provides a data read-write scheduler and a reservation station for vector operations. The data read-write scheduler suspends instruction execution by providing a read-instruction cache module and a write-instruction cache module and detecting conflicting instructions based on the two modules. Once the timing condition is satisfied, the suspended instructions are re-executed, thereby resolving the read-after-write and write-after-read conflicts between instructions and guaranteeing that correct data are provided to the vector operations component. The disclosure therefore has significant value for promotion and application.
    Type: Application
    Filed: November 7, 2018
    Publication date: March 7, 2019
    Inventors: Dong Han, Shaoli Liu, Yunji Chen, Tianshi Chen
  • Publication number: 20190073584
    Abstract: Aspects for forward propagation of a multilayer neural network (MNN) in a neural network processor are described herein. As an example, the aspects may include a computation module that includes a master computation module and one or more slave computation modules. The master computation module may be configured to receive one or more groups of MNN data. The one or more groups of MNN data may include input data and one or more weight values, wherein at least a portion of the input data and the weight values are stored as discrete values. The one or more slave computation modules may be configured to calculate one or more groups of slave output values based on a data type of each of the one or more groups of MNN data.
    Type: Application
    Filed: November 6, 2018
    Publication date: March 7, 2019
    Inventors: Shaoli Liu, Yong Yu, Yunji Chen, Tianshi Chen
  • Patent number: 10223115
    Abstract: The present disclosure provides a data read-write scheduler and a reservation station for vector operations. The data read-write scheduler suspends instruction execution by providing a read-instruction cache module and a write-instruction cache module and detecting conflicting instructions based on the two modules. Once the timing condition is satisfied, the suspended instructions are re-executed, thereby resolving the read-after-write and write-after-read conflicts between instructions and guaranteeing that correct data are provided to the vector operations component. The disclosure therefore has significant value for promotion and application.
    Type: Grant
    Filed: July 19, 2018
    Date of Patent: March 5, 2019
    Assignee: Cambricon Technologies Corporation Limited
    Inventors: Dong Han, Shaoli Liu, Yunji Chen, Tianshi Chen
  • Publication number: 20190065437
    Abstract: Aspects for matrix multiplication in a neural network are described herein. The aspects may include a controller unit configured to receive a matrix-addition instruction. The aspects may further include a computation module configured to receive a first matrix and a second matrix. The first matrix may include one or more first elements and the second matrix may include one or more second elements. The one or more first elements and the one or more second elements may be arranged in accordance with a two-dimensional data structure. The computation module may be further configured to respectively add each of the first elements to each of the second elements based on a correspondence in the two-dimensional data structure to generate one or more third elements for a third matrix.
    Type: Application
    Filed: October 26, 2018
    Publication date: February 28, 2019
    Inventors: Xiao Zhang, Shaoli Liu, Tianshi Chen, Yunji Chen
  • Publication number: 20190065959
    Abstract: Aspects for backpropagation of a convolutional neural network are described herein. The aspects may include a direct memory access unit configured to receive input data from a storage device and a master computation module configured to select one or more portions of the input data based on a predetermined convolution window. Further, the aspects may include one or more slave computation modules respectively configured to convolute one of the one or more portions of the input data with one of one or more previously calculated first data gradients to generate a kernel gradient, wherein the master computation module is further configured to update a prestored convolution kernel based on the kernel gradient.
    Type: Application
    Filed: October 29, 2018
    Publication date: February 28, 2019
    Inventors: Yunji Chen, Tian Zhi, Shaoli Liu, Qi Guo, Tianshi Chen
  • Publication number: 20190065193
    Abstract: Aspects for vector operations in a neural network are described herein. The aspects may include a vector caching unit configured to store a first vector and a second vector, wherein the first vector includes one or more first elements and the second vector includes one or more second elements. The aspects may further include one or more adders and a combiner. The one or more adders may be configured to respectively add each of the first elements to a corresponding one of the second elements to generate one or more addition results. The combiner may be configured to combine the one or more addition results into an output vector.
    Type: Application
    Filed: October 26, 2018
    Publication date: February 28, 2019
    Inventors: Jinhua Tao, Tian Zhi, Shaoli Liu, Tianshi Chen, Yunji Chen
  • Publication number: 20190065934
    Abstract: Aspects for forward propagation in fully connected layers of a convolutional artificial neural network are described herein. The aspects may include multiple slave computation modules configured to calculate, in parallel, multiple groups of slave output values based on an input vector received via an interconnection unit. Further, the aspects may include a master computation module connected to the multiple slave computation modules via the interconnection unit, wherein the master computation module is configured to generate an output vector based on an intermediate result vector.
    Type: Application
    Filed: October 29, 2018
    Publication date: February 28, 2019
    Inventors: Shaoli Liu, Huiying Lan, Qi Guo, Yunji Chen, Tianshi Chen
  • Publication number: 20190065958
    Abstract: Aspects for backpropagation in a fully connected layer of a convolutional neural network are described herein. The aspects may include a direct memory access unit configured to receive input data and one or more first data gradients from a storage device. The aspects may further include a master computation module configured to transmit the input data and the one or more first data gradients to one or more slave computation modules. The slave computation modules are respectively configured to multiply one of the one or more first data gradients with the input data to generate a default weight gradient vector.
    Type: Application
    Filed: October 29, 2018
    Publication date: February 28, 2019
    Inventors: Qi Guo, Shijin Zhang, Yunji Chen, Tianshi Chen
  • Publication number: 20190065189
    Abstract: Aspects for vector comparison in a neural network are described herein. The aspects may include a direct memory access unit configured to receive a first vector and a second vector from a storage device. The first vector may include one or more first elements and the second vector may include one or more second elements. The aspects may further include a computation module that includes one or more comparers respectively configured to generate a comparison result by comparing one of the one or more first elements to a corresponding one of the one or more second elements in accordance with an instruction.
    Type: Application
    Filed: October 25, 2018
    Publication date: February 28, 2019
    Inventors: Dong Han, Xiao Zhang, Shaoli Liu, Tianshi Chen, Yunji Chen
  • Publication number: 20190065952
    Abstract: Aspects for vector operations in a neural network are described herein. The aspects may include a controller unit configured to receive an instruction to generate a random vector that includes one or more elements. The instruction may include a predetermined distribution, a count of the elements, and an address of the random vector. The aspects may further include a computation module configured to generate the one or more elements, wherein the one or more elements are subject to the predetermined distribution.
    Type: Application
    Filed: October 25, 2018
    Publication date: February 28, 2019
    Inventors: Daofu Liu, Xiao Zhang, Shaoli Liu, Tianshi Chen, Yunji Chen
  • Publication number: 20190065192
    Abstract: Aspects for vector operations in a neural network are described herein. The aspects may include a vector caching unit configured to store a first vector and a second vector, wherein the first vector includes one or more first elements and the second vector includes one or more second elements. The aspects may further include one or more adders and a combiner. The one or more adders may be configured to respectively add each of the first elements to a corresponding one of the second elements to generate one or more addition results. The combiner may be configured to combine the one or more addition results into an output vector.
    Type: Application
    Filed: October 26, 2018
    Publication date: February 28, 2019
    Inventors: Jinhua Tao, Tian Zhi, Shaoli Liu, Tianshi Chen, Yunji Chen
  • Publication number: 20190065953
    Abstract: Aspects for self-learning operations of an artificial neural network are described herein. The aspects may include a master computation module configured to transmit an input vector via an interconnection unit and one or more slave computation modules connected to the master computation module via the interconnection unit. Each of the one or more slave computation modules may be configured to respectively store a column weight vector of a weight matrix and multiply the input vector with the column weight vector to generate a first multiplication result. The interconnection unit may be configured to combine the one or more first multiplication results into a first multiplication vector and transmit the first multiplication vector to the master computation module.
    Type: Application
    Filed: October 29, 2018
    Publication date: February 28, 2019
    Inventors: Zhen Li, Qi Guo, Yunji Chen, Tianshi Chen
  • Publication number: 20190065938
    Abstract: Aspects for pooling operations in a multilayer neural network (MNN) in an MNN acceleration processor are described herein. The aspects may include a direct memory access unit configured to receive multiple input values from a storage device. The aspects may further include a pooling processor configured to select a portion of the input values based on a pooling kernel that includes a data range, and generate a pooling result based on the selected portion of the input values.
    Type: Application
    Filed: October 29, 2018
    Publication date: February 28, 2019
    Inventors: Shaoli Liu, Jin Song, Yunji Chen, Tianshi Chen
  • Publication number: 20190065184
    Abstract: Aspects for generating a dot product for two vectors in a neural network are described herein. The aspects may include a controller unit configured to receive a vector load instruction that includes a first address of a first vector and a length of the first vector. The aspects may further include a direct memory access unit configured to retrieve the first vector from a storage device based on the first address of the first vector. Further still, the aspects may include a caching unit configured to store the first vector.
    Type: Application
    Filed: October 26, 2018
    Publication date: February 28, 2019
    Inventors: Tian Zhi, Qi Guo, Shaoli Liu, Tianshi Chen, Yunji Chen
  • Publication number: 20190065436
    Abstract: Aspects for matrix multiplication in a neural network are described herein. The aspects may include a controller unit configured to receive a matrix-addition instruction. The aspects may further include a computation module configured to receive a first matrix and a second matrix. The first matrix may include one or more first elements and the second matrix may include one or more second elements. The one or more first elements and the one or more second elements may be arranged in accordance with a two-dimensional data structure. The computation module may be further configured to respectively add each of the first elements to each of the second elements based on a correspondence in the two-dimensional data structure to generate one or more third elements for a third matrix.
    Type: Application
    Filed: October 26, 2018
    Publication date: February 28, 2019
    Inventors: Xiao Zhang, Shaoli Liu, Tianshi Chen, Yunji Chen
  • Publication number: 20190065191
    Abstract: Aspects for generating a dot product for two vectors in a neural network are described herein. The aspects may include a controller unit configured to receive a transcendental function instruction that includes an address of a vector and an operation code that identifies a transcendental function. The aspects may further include a CORDIC processor configured to receive the vector that includes one or more elements based on the address of the vector in response to the transcendental function instruction. The CORDIC processor may be further configured to apply the transcendental function to each element of the vector to generate an output vector.
    Type: Application
    Filed: October 25, 2018
    Publication date: February 28, 2019
    Inventors: Dong Han, Xiao Zhang, Tianshi Chen, Yunji Chen
  • Publication number: 20190065194
    Abstract: Aspects for vector operations in a neural network are described herein. The aspects may include a vector caching unit configured to store a first vector and a second vector, wherein the first vector includes one or more first elements and the second vector includes one or more second elements. The aspects may further include one or more adders and a combiner. The one or more adders may be configured to respectively add each of the first elements to a corresponding one of the second elements to generate one or more addition results. The combiner may be configured to combine the one or more addition results into an output vector.
    Type: Application
    Filed: October 26, 2018
    Publication date: February 28, 2019
    Inventors: Jinhua Tao, Tian Zhi, Shaoli Liu, Tianshi Chen, Yunji Chen
  • Publication number: 20190065435
    Abstract: Aspects for vector combination in a neural network are described herein. The aspects may include a direct memory access unit configured to receive a first vector, a second vector, and a controller vector. The first vector, the second vector, and the controller vector may each include one or more elements indexed in accordance with a same one-dimensional data structure. The aspects may further include a computation module configured to select one of the one or more control values of the controller vector, determine that the selected control value satisfies a predetermined condition, and select one of the one or more first elements that corresponds to the selected control value in the one-dimensional data structure as an output element based on a determination that the selected control value satisfies the predetermined condition.
    Type: Application
    Filed: October 25, 2018
    Publication date: February 28, 2019
    Inventors: Zhen Li, Xiao Zhang, Shaoli Liu, Tianshi Chen, Yunji Chen
  • Publication number: 20190065190
    Abstract: Aspects for matrix multiplication in a neural network are described herein. The aspects may include a master computation module configured to receive a first matrix and transmit a row vector of the first matrix. In addition, the aspects may include one or more slave computation modules respectively configured to store a column vector of a second matrix, receive the row vector of the first matrix, and multiply the row vector of the first matrix with the stored column vector of the second matrix to generate a result element. Further, the aspects may include an interconnection unit configured to combine the one or more result elements generated respectively by the one or more slave computation modules to generate a row vector of a result matrix and transmit the row vector of the result matrix to the master computation module.
    Type: Application
    Filed: October 25, 2018
    Publication date: February 28, 2019
    Inventors: Xiao Zhang, Shaoli Liu, Tianshi Chen, Yunji Chen
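
Illustrative code sketches

The element-wise vector addition described in publications 20190073339, 20190065192, 20190065193, and 20190065194 (a bank of adders sums corresponding elements of two vectors, and a combiner assembles the results into an output vector) can be pictured with the minimal Python sketch below. The function name and list-based data layout are illustrative assumptions, not taken from the patents.

    def add_vectors(first_vector, second_vector):
        """Add two vectors element by element and combine the results.

        Each pairwise addition stands in for one hardware adder; building
        the result list stands in for the combiner that assembles the
        addition results into the output vector.
        """
        if len(first_vector) != len(second_vector):
            raise ValueError("vectors must have the same number of elements")
        addition_results = [a + b for a, b in zip(first_vector, second_vector)]
        return addition_results  # the combined output vector

    print(add_vectors([1.0, 2.0, 3.0], [10.0, 20.0, 30.0]))  # [11.0, 22.0, 33.0]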
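
The read-write scheduling idea in publication 20190073221 and patent 10223115 (suspend an instruction whose operands conflict with a still-pending read or write, then replay it once the conflict clears) can be approximated in software roughly as follows; the queue-based structure and all names are assumptions made for illustration only.

    from collections import namedtuple

    Instruction = namedtuple("Instruction", ["name", "reads", "writes"])

    class ReadWriteScheduler:
        """Toy scheduler: defer instructions that conflict with pending ones."""

        def __init__(self):
            self.pending = []   # instructions issued but not yet retired
            self.deferred = []  # instructions suspended because of a conflict

        def _conflicts(self, instr):
            for p in self.pending:
                if set(instr.reads) & set(p.writes):   # read-after-write
                    return True
                if set(instr.writes) & set(p.reads):   # write-after-read
                    return True
            return False

        def issue(self, instr):
            if self._conflicts(instr):
                self.deferred.append(instr)  # suspend; re-execute later
                return False
            self.pending.append(instr)
            return True

        def retire(self, instr):
            """Mark an instruction done, then retry any deferred instructions."""
            self.pending.remove(instr)
            still_deferred, self.deferred = self.deferred, []
            for d in still_deferred:
                self.issue(d)

    sched = ReadWriteScheduler()
    write_v0 = Instruction("WRITE_V0", reads=[], writes=["v0"])
    read_v0 = Instruction("READ_V0", reads=["v0"], writes=[])
    sched.issue(write_v0)   # issued immediately
    sched.issue(read_v0)    # deferred: read-after-write conflict on v0
    sched.retire(write_v0)  # read_v0 is re-issued once the conflict clears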
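
Publications 20190065437 and 20190065436 describe adding two matrices element by element according to their shared two-dimensional layout; a plain Python rendering, with nested lists standing in for the two-dimensional data structure, might look like this.

    def add_matrices(first_matrix, second_matrix):
        """Add matrices positionally: third[i][j] = first[i][j] + second[i][j]."""
        rows, cols = len(first_matrix), len(first_matrix[0])
        if rows != len(second_matrix) or cols != len(second_matrix[0]):
            raise ValueError("matrices must have the same dimensions")
        return [[first_matrix[i][j] + second_matrix[i][j] for j in range(cols)]
                for i in range(rows)]

    print(add_matrices([[1, 2], [3, 4]], [[10, 20], [30, 40]]))  # [[11, 22], [33, 44]]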
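
The kernel-gradient step in publication 20190065959 (convolve windows of the input with previously calculated output gradients, then use the result to update the pre-stored kernel) corresponds, for a simple single-channel 2-D case, to the sketch below. The stride of 1, the valid-window layout, the cross-correlation convention common in deep-learning code, and the plain gradient-step update are assumptions made to keep the example small.

    def conv_kernel_gradient(inputs, output_grads):
        """Gradient of a valid, stride-1 2-D convolution w.r.t. its kernel."""
        out_h, out_w = len(output_grads), len(output_grads[0])
        k_h = len(inputs) - out_h + 1
        k_w = len(inputs[0]) - out_w + 1
        grad = [[0.0] * k_w for _ in range(k_h)]
        for u in range(k_h):
            for v in range(k_w):
                for i in range(out_h):
                    for j in range(out_w):
                        grad[u][v] += inputs[i + u][j + v] * output_grads[i][j]
        return grad

    def update_kernel(kernel, grad, learning_rate=0.01):
        """Update the pre-stored convolution kernel in place using the kernel gradient."""
        for u in range(len(kernel)):
            for v in range(len(kernel[0])):
                kernel[u][v] -= learning_rate * grad[u][v]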
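
Publication 20190065190 (and, in a similar spirit, 20190065953 and 20190065934) splits a matrix multiplication across slave modules that each hold one column of the second matrix and multiply it with a broadcast row of the first; the interconnection unit then gathers the per-column results into a row of the output. The sequential Python stand-in below mirrors that data flow without modeling the hardware itself.

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def master_slave_matmul(first_matrix, second_matrix):
        """Each 'slave' owns one column of the second matrix; the master
        broadcasts rows of the first matrix and collects one result element
        per slave, which the interconnect combines into a result row."""
        columns = list(zip(*second_matrix))    # column j is held by slave j
        result = []
        for row in first_matrix:               # master broadcasts each row
            result_row = [dot(row, col) for col in columns]  # one element per slave
            result.append(result_row)           # interconnect assembles the row
        return result

    print(master_slave_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
    # [[19, 22], [43, 50]]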
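
In publication 20190065958 each slave module multiplies one output-side data gradient with the input data to form a weight-gradient vector; in ordinary terms this is an outer product of the gradient vector and the input vector, as in the sketch below (all names are illustrative).

    def fully_connected_weight_gradients(data_gradients, input_data):
        """Row i is the weight-gradient vector produced for gradient i:
        grad_w[i][j] = data_gradients[i] * input_data[j] (an outer product)."""
        return [[g * x for x in input_data] for g in data_gradients]

    print(fully_connected_weight_gradients([0.5, -1.0], [2.0, 3.0, 4.0]))
    # [[1.0, 1.5, 2.0], [-2.0, -3.0, -4.0]]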
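
The element-wise vector comparison of publication 20190065189 (each comparer compares one element of the first vector with the corresponding element of the second, under an instruction-selected predicate) can be sketched as follows; the mapping of operation codes to Python operators is an assumption for illustration.

    import operator

    # Hypothetical mapping from an instruction's operation code to a predicate.
    COMPARISONS = {
        "GE": operator.ge, "LE": operator.le, "GT": operator.gt,
        "LT": operator.lt, "EQ": operator.eq, "NE": operator.ne,
    }

    def compare_vectors(first_vector, second_vector, op_code):
        """Apply one predicate element-wise, one 'comparer' per element pair."""
        predicate = COMPARISONS[op_code]
        return [predicate(a, b) for a, b in zip(first_vector, second_vector)]

    print(compare_vectors([1, 5, 3], [2, 5, 1], "GE"))  # [False, True, True]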
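
Publication 20190065952 describes an instruction carrying a distribution, an element count, and a destination address, from which a random vector is generated; a software analogue with a small, assumed set of supported distributions could look like this.

    import random

    def generate_random_vector(distribution, count, **params):
        """Generate `count` elements drawn from the requested distribution."""
        if distribution == "uniform":
            low, high = params.get("low", 0.0), params.get("high", 1.0)
            return [random.uniform(low, high) for _ in range(count)]
        if distribution == "gaussian":
            mu, sigma = params.get("mu", 0.0), params.get("sigma", 1.0)
            return [random.gauss(mu, sigma) for _ in range(count)]
        raise ValueError("unsupported distribution: %s" % distribution)

    memory = {}                                               # toy address space
    memory[0x1000] = generate_random_vector("uniform", 8)    # write to the target address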
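
The pooling processor of publication 20190065938 selects the input values that fall inside a pooling kernel's data range and reduces them to a single pooling result; a max/average sketch over a one-dimensional index range (an assumed simplification of the kernel's data range) is shown below.

    def pool(input_values, start, end, mode="max"):
        """Reduce the input values whose indices lie in [start, end)."""
        window = input_values[start:end]   # the portion selected by the kernel
        if mode == "max":
            return max(window)
        if mode == "average":
            return sum(window) / len(window)
        raise ValueError("unsupported pooling mode: %s" % mode)

    print(pool([3, 1, 4, 1, 5, 9], 2, 5, "max"))      # 5
    print(pool([3, 1, 4, 1, 5, 9], 2, 5, "average"))  # 3.333...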
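
Publication 20190065184 centers on a vector-load instruction (an address plus a length) used while computing a dot product; a toy version with a dictionary standing in for the storage device follows.

    def load_vector(storage, address, length):
        """Fetch `length` consecutive elements starting at `address`."""
        return [storage[address + i] for i in range(length)]

    def dot_product(first_vector, second_vector):
        return sum(a * b for a, b in zip(first_vector, second_vector))

    storage = {100 + i: float(i) for i in range(4)}   # elements 0.0 .. 3.0
    v1 = load_vector(storage, 100, 4)
    print(dot_product(v1, [1.0, 1.0, 1.0, 1.0]))      # 6.0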
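
Publication 20190065191 applies a transcendental function to every element of a vector using a CORDIC processor. The sketch below implements a textbook rotation-mode CORDIC for sine (valid for angles roughly within [-pi/2, pi/2]) and maps it over a vector, purely to illustrate the per-element structure; the patent itself does not specify which transcendental function or CORDIC variant is used.

    import math

    def cordic_sin(angle, iterations=32):
        """Rotation-mode CORDIC: returns an approximation of sin(angle)."""
        # Pre-computed gain so the final (x, y) lands on the unit circle.
        k = 1.0
        for i in range(iterations):
            k *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
        x, y, z = k, 0.0, angle
        for i in range(iterations):
            d = 1.0 if z >= 0 else -1.0
            x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
            z -= d * math.atan(2.0 ** -i)
        return y   # x approximates cos(angle), y approximates sin(angle)

    def apply_transcendental(vector, func=cordic_sin):
        """Apply the transcendental function element by element."""
        return [func(v) for v in vector]

    print(apply_transcendental([0.0, math.pi / 6, math.pi / 4]))  # ~[0.0, 0.5, 0.707]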
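
The vector-combination operation of publication 20190065435 picks, position by position, an element from the first vector when the control value meets a condition; the abstract leaves the alternative implicit, so the sketch below assumes the second vector's element is taken otherwise, and uses a nonzero-control condition as the predetermined condition.

    def combine_vectors(first_vector, second_vector, controller_vector):
        """Positional merge: take the first vector's element where the control
        value satisfies the condition (here: is nonzero), else the second's."""
        output = []
        for a, b, c in zip(first_vector, second_vector, controller_vector):
            output.append(a if c != 0 else b)
        return output

    print(combine_vectors([1, 2, 3], [7, 8, 9], [1, 0, 1]))  # [1, 8, 3]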