Patents by Inventor Yunji Chen

Yunji Chen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10592801
    Abstract: Aspects for forward propagation of a convolutional artificial neural network are described herein. The aspects may include a direct memory access unit configured to receive input data from a storage device and a master computation module configured to select one or more portions of the input data based on a predetermined convolution window. Further, the aspects may include one or more slave computation modules respectively configured to convolve a convolution kernel with one of the one or more portions of the input data to generate a slave output value. Further still, the aspects may include an interconnection unit configured to combine the one or more slave output values into one or more intermediate result vectors, wherein the master computation module is further configured to merge the one or more intermediate result vectors into a merged intermediate vector.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: March 17, 2020
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Tianshi Chen, Dong Han, Yunji Chen, Shaoli Liu, Qi Guo
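    Illustrative sketch: a minimal Python approximation of the dataflow in this entry. The function names (select_windows, slave_convolve, conv_forward) and the 2-D data layout are assumptions for illustration, not terms or structures taken from the patent.
      # Behavioral sketch only: models the master/slave convolution dataflow in plain Python.
      def select_windows(input_data, window, stride=1):
          """Master step: select portions of a 2-D input under a sliding convolution window."""
          h, w = len(input_data), len(input_data[0])
          kh, kw = window
          portions = []
          for i in range(0, h - kh + 1, stride):
              for j in range(0, w - kw + 1, stride):
                  portions.append([row[j:j + kw] for row in input_data[i:i + kh]])
          return portions

      def slave_convolve(portion, kernel):
          """Slave step: multiply-accumulate of one selected portion with one kernel."""
          return sum(p * k for prow, krow in zip(portion, kernel)
                     for p, k in zip(prow, krow))

      def conv_forward(input_data, kernels, window, stride=1):
          """Interconnection/master step: gather slave outputs into intermediate result vectors."""
          portions = select_windows(input_data, window, stride)
          return [[slave_convolve(p, k) for p in portions] for k in kernels]

      if __name__ == "__main__":
          x = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
          k = [[1, 0], [0, -1]]                       # one 2x2 kernel
          print(conv_forward(x, [k], (2, 2)))         # [[-4, -4, -4, -4]]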
  • Patent number: 10592241
    Abstract: Aspects for matrix multiplication in neural networks are described herein. The aspects may include a master computation module configured to receive a first matrix and transmit a row vector of the first matrix. In addition, the aspects may include one or more slave computation modules respectively configured to store a column vector of a second matrix, receive the row vector of the first matrix, and multiply the row vector of the first matrix with the stored column vector of the second matrix to generate a result element. Further, the aspects may include an interconnection unit configured to combine the one or more result elements generated respectively by the one or more slave computation modules to generate a row vector of a result matrix and transmit the row vector of the result matrix to the master computation module.
    Type: Grant
    Filed: October 25, 2018
    Date of Patent: March 17, 2020
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Xiao Zhang, Shaoli Liu, Tianshi Chen, Yunji Chen
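    Illustrative sketch: the row-by-column dataflow above can be modeled in a few lines of Python; matmul_master_slave is an invented placeholder name, and the "slaves" are simulated sequentially.
      def matmul_master_slave(a, b):
          """Multiply a (m x k) by b (k x n): each 'slave' holds one column of b and
          multiplies it with the row vector broadcast by the 'master'."""
          columns = list(zip(*b))                     # slave j stores column j of the second matrix
          result = []
          for row in a:                               # master transmits one row vector at a time
              # each slave yields one result element; the interconnection unit
              # combines them into a row vector of the result matrix
              result.append([sum(x * y for x, y in zip(row, col)) for col in columns])
          return result

      if __name__ == "__main__":
          print(matmul_master_slave([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # [[19, 22], [43, 50]]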
  • Publication number: 20200084065
    Abstract: One example of a device comprises: a central node that serves as a communication data center of a network-on-chip; a plurality of leaf nodes that serve as communication data nodes of the network-on-chip and transmit the communication data to the central node; and forwarder modules for connecting the central node with the plurality of leaf nodes and forwarding the communication data, wherein the plurality of leaf nodes are divided into N groups, each group having the same number of leaf nodes, the central node is individually in communication connection with each group of leaf nodes by means of the forwarder modules, a communication structure constituted by each group of leaf nodes has self-similarity, and the plurality of leaf nodes are in communication connection with the central node in a complete multi-way tree approach by means of the forwarder modules of multiple levels.
    Type: Application
    Filed: June 17, 2016
    Publication date: March 12, 2020
    Inventors: Jinhua TAO, Tao LUO, Shaoli LIU, Shijin ZHANG, Yunji CHEN
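    Illustrative sketch: this publication describes a topology rather than an algorithm, but the routing implied by a complete multi-way tree can be shown briefly. The sketch below, with the invented name path_to_central and a level-order node numbering, lists the forwarder hops from a leaf up to the central node; it is an assumption-laden illustration, not the patented interconnect.
      def path_to_central(leaf_id, fanout):
          """Chain of node ids from a leaf up to the central node (id 0) in a complete
          fanout-way tree numbered level by level from the root."""
          path = [leaf_id]
          node = leaf_id
          while node != 0:
              node = (node - 1) // fanout             # parent in a complete k-ary tree
              path.append(node)
          return path

      if __name__ == "__main__":
          # 16 leaves under a 4-way tree: central node 0, forwarders 1..4, leaves 5..20
          print(path_to_central(leaf_id=11, fanout=4))   # [11, 2, 0]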
  • Patent number: 10585973
    Abstract: Aspects for vector operations in neural networks are described herein. The aspects may include a vector caching unit configured to store a first vector and a second vector, wherein the first vector includes one or more first elements and the second vector includes one or more second elements. The aspects may further include one or more adders and a combiner. The one or more adders may be configured to respectively add each of the first elements to a corresponding one of the second elements to generate one or more addition results. The combiner may be configured to combine the one or more addition results into an output vector.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: March 10, 2020
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Jinhua Tao, Tian Zhi, Shaoli Liu, Tianshi Chen, Yunji Chen
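    Illustrative sketch: the element-wise addition described above, reduced to plain Python; vector_add is an invented name, and each zipped pair stands in for one hardware adder.
      def vector_add(first, second):
          """Each 'adder' sums one pair of elements; the 'combiner' collects the results."""
          if len(first) != len(second):
              raise ValueError("vectors must have the same length")
          return [a + b for a, b in zip(first, second)]

      if __name__ == "__main__":
          print(vector_add([1.0, 2.0, 3.0], [0.5, 0.5, 0.5]))   # [1.5, 2.5, 3.5]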
  • Patent number: 10574260
    Abstract: Aspects for converting floating-point numbers in a processor are described herein. As an example, the aspects may include receiving, by a floating-point number converter, an exponent bit length, a base value, and one or more first floating-point numbers of a first bit length. Further, the aspects may include calculating, by the floating-point number converter, one or more second floating-point numbers of a second bit length based on the exponent bit length and the base value, the one or more second floating-point numbers respectively corresponding to the one or more first floating-point numbers.
    Type: Grant
    Filed: May 9, 2018
    Date of Patent: February 25, 2020
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Zhen Li, Shaoli Liu, Tianshi Chen, Yunji Chen
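    Illustrative sketch: the abstract does not spell out the conversion rule, so the Python below simply re-quantizes a value into a smaller hypothetical format defined by an exponent bit length, a base, and an assumed mantissa budget; convert_float and its defaults are illustrative assumptions only.
      import math

      def convert_float(x, exp_bits, base=2, mantissa_bits=10):
          """Round x to a value representable with exp_bits exponent bits (in the given
          base) and mantissa_bits fractional mantissa bits."""
          if x == 0.0:
              return 0.0
          sign = -1.0 if x < 0 else 1.0
          e = math.floor(math.log(abs(x), base))      # unbiased exponent
          e_max = (1 << (exp_bits - 1)) - 1           # symmetric exponent range
          e = max(-e_max, min(e_max, e))              # clamp to the representable range
          m = abs(x) / (base ** e)                    # mantissa in [1, base)
          m = round(m * (1 << mantissa_bits)) / (1 << mantissa_bits)
          return sign * m * (base ** e)

      if __name__ == "__main__":
          print(convert_float(3.14159265, exp_bits=5, mantissa_bits=8))   # ~3.140625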
  • Publication number: 20200050927
    Abstract: Aspects of a neural network operation device are described herein. The aspects may include a matrix element storage module configured to receive a first matrix that includes one or more first values, each of the first values being represented in a sequence that includes one or more bits. The matrix element storage module may be further configured to respectively store the one or more bits in one or more storage spaces in accordance with positions of the bits in the sequence. The aspects may further include a numeric operation module configured to calculate an intermediate result for each storage space based on one or more second values in a second matrix and an accumulation module configured to sum the intermediate results to generate an output value.
    Type: Application
    Filed: October 21, 2019
    Publication date: February 13, 2020
    Inventors: Tianshi CHEN, Yimin ZHUANG, Qi GUO, Shaoli LIU, Yunji CHEN
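    Illustrative sketch: the bit-position decomposition described above amounts to a bit-serial multiply-accumulate. The Python below (bit_serial_dot, an invented name) assumes non-negative integer first-matrix values and checks the result against an ordinary dot product.
      def bit_serial_dot(first_values, second_values, bit_width=8):
          intermediates = []
          for k in range(bit_width):                  # one "storage space" per bit position
              plane = sum(b for a, b in zip(first_values, second_values) if (a >> k) & 1)
              intermediates.append(plane << k)        # weight the plane by its bit position
          return sum(intermediates)                   # accumulation step

      if __name__ == "__main__":
          a, b = [3, 5, 2], [10, 20, 30]
          print(bit_serial_dot(a, b), sum(x * y for x, y in zip(a, b)))   # 190 190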
  • Publication number: 20200050453
    Abstract: Aspects for matrix multiplication in neural networks are described herein. The aspects may include a controller unit configured to receive a matrix-multiply-matrix (MM) instruction that includes a first starting address of a first matrix, a first size of the first matrix, a second starting address of a second matrix, and a second size of the second matrix; multiple computation modules configured to respectively multiply, in response to the MM instruction, row vectors of the first matrix with column vectors of the second matrix to generate one or more result elements; and an interconnection unit configured to combine the result elements to generate one or more row vectors of a result matrix.
    Type: Application
    Filed: October 17, 2019
    Publication date: February 13, 2020
    Inventors: Xiao Zhang, Shaoli Liu, Tianshi Chen, Yunji Chen
  • Patent number: 10534841
    Abstract: Aspects for submatrix operations in neural networks are described herein. The aspects may include a controller unit configured to receive a submatrix instruction. The submatrix instruction may include a starting address of a submatrix of a matrix, a width of the submatrix, a height of the submatrix, and a stride that indicates a position of the submatrix relative to the matrix. The aspects may further include a computation module configured to select one or more values from the matrix as elements of the submatrix in accordance with the starting address of the matrix, the starting address of the submatrix, the width of the submatrix, the height of the submatrix, and the stride.
    Type: Grant
    Filed: October 22, 2018
    Date of Patent: January 14, 2020
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Shaoli Liu, Xiao Zhang, Yunji Chen, Tianshi Chen
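    Illustrative sketch: selecting a submatrix out of a row-major flat buffer using a starting address, width, height, and stride. The patent does not define the exact stride semantics; here it is assumed to be the element distance between the starts of consecutive submatrix rows, and extract_submatrix is an invented name.
      def extract_submatrix(flat, start, width, height, stride):
          return [flat[start + r * stride : start + r * stride + width]
                  for r in range(height)]

      if __name__ == "__main__":
          flat = list(range(16))                      # a 4x4 matrix stored row-major
          # 2x2 submatrix whose top-left element sits at flat index 5, row stride 4
          print(extract_submatrix(flat, start=5, width=2, height=2, stride=4))   # [[5, 6], [9, 10]]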
  • Patent number: 10521228
    Abstract: The present disclosure provides a data read-write scheduler and a reservation station for vector operations. The data read-write scheduler suspends instruction execution by providing a read instruction cache module and a write instruction cache module and by detecting conflicting instructions based on the two modules. Once the timing condition is satisfied, the suspended instructions are re-executed, thereby resolving the read-after-write and write-after-read conflicts between instructions and guaranteeing that correct data are provided to a vector operations component. The present disclosure therefore has considerable value for promotion and application.
    Type: Grant
    Filed: November 7, 2018
    Date of Patent: December 31, 2019
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Dong Han, Shaoli Liu, Yunji Chen, Tianshi Chen
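    Illustrative sketch: the hazard check implied above, in plain Python. A new instruction is held back if it would read a register a buffered instruction still has to write (read-after-write) or write a register a buffered instruction still has to read (write-after-read); has_conflict and the register-set representation are invented for illustration.
      def has_conflict(new_reads, new_writes, pending):
          """pending is a list of (reads, writes) sets for instructions already buffered."""
          for reads, writes in pending:
              if new_reads & writes:                  # read-after-write conflict
                  return True
              if new_writes & reads:                  # write-after-read conflict
                  return True
          return False

      if __name__ == "__main__":
          pending = [({"v1"}, {"v2"})]                # buffered instruction: reads v1, writes v2
          print(has_conflict({"v2"}, {"v3"}, pending))   # True  (read-after-write on v2)
          print(has_conflict({"v0"}, {"v1"}, pending))   # True  (write-after-read on v1)
          print(has_conflict({"v0"}, {"v3"}, pending))   # False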
  • Publication number: 20190394477
    Abstract: Aspects of data compression/decompression for neural networks are described herein. The aspects may include a model data converter configured to convert neural network content values into pseudo video data. The neural network content values may refer to weight values and bias values of the neural network. The pseudo video data may include one or more pseudo frames. The aspects may further include a compression module configured to encode the pseudo video data into one or more neural network data packages.
    Type: Application
    Filed: September 5, 2019
    Publication date: December 26, 2019
    Inventors: Tianshi CHEN, Yuzhe LUO, Qi GUO, Shaoli LIU, Yunji CHEN
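    Illustrative sketch: a very rough take on the "pseudo video" idea above. Weight and bias values are quantized to bytes and tiled into fixed-size 2-D frames that a video encoder could then compress; the quantization scheme and frame geometry are assumptions, to_pseudo_frames is an invented name, and no actual video codec is invoked.
      def to_pseudo_frames(values, frame_h, frame_w):
          lo, hi = min(values), max(values)
          scale = (hi - lo) or 1.0
          quantized = [round(255 * (v - lo) / scale) for v in values]   # map to 0..255
          per_frame = frame_h * frame_w
          quantized += [0] * (-len(quantized) % per_frame)              # pad the last frame
          frames = []
          for f in range(0, len(quantized), per_frame):
              chunk = quantized[f:f + per_frame]
              frames.append([chunk[r * frame_w:(r + 1) * frame_w] for r in range(frame_h)])
          return frames, (lo, scale)                                    # metadata needed to decode

      if __name__ == "__main__":
          frames, meta = to_pseudo_frames([0.1, -0.2, 0.05, 0.3, -0.15, 0.0], frame_h=2, frame_w=2)
          print(len(frames), frames[0])               # 2 [[153, 0], [128, 255]]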
  • Patent number: 10509998
    Abstract: Aspects of a neural network operation device are described herein. The aspects may include a matrix element storage module configured to receive a first matrix that includes one or more first values, each of the first values being represented in a sequence that includes one or more bits. The matrix element storage module may be further configured to respectively store the one or more bits in one or more storage spaces in accordance with positions of the bits in the sequence. The aspects may further include a numeric operation module configured to calculate an intermediate result for each storage space based on one or more second values in a second matrix and an accumulation module configured to sum the intermediate results to generate an output value.
    Type: Grant
    Filed: June 13, 2019
    Date of Patent: December 17, 2019
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Tianshi Chen, Yimin Zhuang, Qi Guo, Shaoli Liu, Yunji Chen
  • Publication number: 20190370664
    Abstract: Aspects of data modification for neural networks are described herein. The aspects may include a connection value generator configured to receive one or more groups of input data and one or more weight values and generate one or more connection values based on the one or more weight values. The aspects may further include a pruning module configured to modify the one or more groups of input data and the one or more weight values based on the connection values. Further still, the aspects may include a computing unit configured to update the one or more weight values and/or calculate one or more input gradients.
    Type: Application
    Filed: August 15, 2019
    Publication date: December 5, 2019
    Inventors: Yunji CHEN, Xinkai SONG, Shaoli LIU, Tianshi CHEN
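    Illustrative sketch: connection-value pruning as described above. Deriving the connection value from the weight magnitude against a threshold is an assumption, and prune is an invented name; weights and their matching input groups are dropped wherever the connection value is zero.
      def prune(input_groups, weights, threshold):
          connections = [1 if abs(w) >= threshold else 0 for w in weights]
          kept_inputs = [x for x, c in zip(input_groups, connections) if c]
          kept_weights = [w for w, c in zip(weights, connections) if c]
          return kept_inputs, kept_weights, connections

      if __name__ == "__main__":
          groups = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
          weights = [0.8, 0.01, -0.5]
          print(prune(groups, weights, threshold=0.1))
          # ([[1.0, 2.0], [5.0, 6.0]], [0.8, -0.5], [1, 0, 1])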
  • Publication number: 20190370663
    Abstract: Aspects of data modification for neural networks are described herein. The aspects may include a connection value generator configured to receive one or more groups of input data and one or more weight values and generate one or more connection values based on the one or more weight values. The aspects may further include a pruning module configured to modify the one or more groups of input data and the one or more weight values based on the connection values. Further still, the aspects may include a computing unit configured to update the one or more weight values and/or calculate one or more input gradients.
    Type: Application
    Filed: August 15, 2019
    Publication date: December 5, 2019
    Inventors: Yunji CHEN, Xinkai SONG, Shaoli LIU, Tianshi CHEN
  • Patent number: 10496404
    Abstract: The present disclosure provides a data read-write scheduler and a reservation station for vector operations. The data read-write scheduler suspends instruction execution by providing a read instruction cache module and a write instruction cache module and by detecting conflicting instructions based on the two modules. Once the timing condition is satisfied, the suspended instructions are re-executed, thereby resolving the read-after-write and write-after-read conflicts between instructions and guaranteeing that correct data are provided to a vector operations component. The present disclosure therefore has considerable value for promotion and application.
    Type: Grant
    Filed: November 7, 2018
    Date of Patent: December 3, 2019
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Dong Han, Shaoli Liu, Yunji Chen, Tianshi Chen
  • Patent number: 10496597
    Abstract: The present invention relates to the field of data storage and discloses an on-chip data partitioning read-write method. The method comprises: a data partitioning step for storing on-chip data in different areas, the data being stored in an on-chip storage medium and an off-chip storage medium respectively, based on a data partitioning strategy; a pre-operation step for processing the on-chip address index of the on-chip storage data in advance when implementing data splicing; and a data splicing step for splicing the on-chip storage data and the off-chip input data to obtain a representation of the original data, based on a data splicing strategy. A corresponding on-chip data partitioning read-write system and device are also provided. Reads and writes of repeated data can thus be realized efficiently, reducing memory access bandwidth requirements while providing good flexibility and reducing on-chip storage overhead.
    Type: Grant
    Filed: August 9, 2016
    Date of Patent: December 3, 2019
    Assignee: INSTITUTE OF COMPUTING TECHNOLOGY, CHINESE ACADEMY OF SCIENCES
    Inventors: Tianshi Chen, Zidong Du, Qi Guo, Yunji Chen
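    Illustrative sketch: the partition/splice flow described above. The partitioning strategy used here (even indices on-chip, odd indices off-chip) is a stand-in assumption; the point is only that a splicing step can reassemble the original representation from the two storage media given a consistent strategy.
      def partition(data):
          on_chip = {i: v for i, v in enumerate(data) if i % 2 == 0}
          off_chip = {i: v for i, v in enumerate(data) if i % 2 == 1}
          return on_chip, off_chip

      def splice(on_chip, off_chip, length):
          merged = dict(on_chip)
          merged.update(off_chip)
          return [merged[i] for i in range(length)]   # original representation restored

      if __name__ == "__main__":
          data = [10, 11, 12, 13, 14]
          on_chip, off_chip = partition(data)
          print(splice(on_chip, off_chip, len(data)) == data)   # True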
  • Publication number: 20190361816
    Abstract: Aspects of managing Translation Lookaside Buffer (TLB) units are described herein. The aspects may include a memory management unit (MMU) that includes one or more TLB units and a control unit. The control unit may be configured to identify one from the one or more TLB units based on a stream identification (ID) included in a received virtual address and, further, to identify a frame number in the identified TLB unit. A physical address may be generated by the control unit based on the frame number and an offset included in the virtual address.
    Type: Application
    Filed: August 12, 2019
    Publication date: November 28, 2019
    Inventors: Tianshi CHEN, Qi GUO, Yunji CHEN
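    Illustrative sketch: the lookup described above with an invented virtual-address layout: the high bits carry a stream ID that selects one of several TLB tables, the middle bits are the virtual page number, and the low bits are the page offset. The field widths and the translate function are assumptions, not the patented design.
      PAGE_BITS = 12                                  # 4 KiB pages (assumed)
      VPN_BITS = 16                                   # virtual page number width (assumed)

      def translate(virtual_address, tlbs):
          offset = virtual_address & ((1 << PAGE_BITS) - 1)
          vpn = (virtual_address >> PAGE_BITS) & ((1 << VPN_BITS) - 1)
          stream_id = virtual_address >> (PAGE_BITS + VPN_BITS)
          frame = tlbs[stream_id][vpn]                # pick the TLB unit, then the frame number
          return (frame << PAGE_BITS) | offset        # physical address

      if __name__ == "__main__":
          tlbs = {0: {0x1: 0x80}, 1: {0x2: 0x90}}     # stream id -> {virtual page: frame}
          va = (1 << (PAGE_BITS + VPN_BITS)) | (0x2 << PAGE_BITS) | 0x034
          print(hex(translate(va, tlbs)))             # 0x90034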
  • Patent number: 10489113
    Abstract: The present disclosure provides a quick operation device for a nonlinear function, and a method therefor. The device comprises: a domain conversion part for converting an input independent variable into a corresponding value in a table lookup range; a table lookup part for looking up the slope and the intercept of the corresponding piecewise linear fit based on the input independent variable or on the independent variable processed by the domain conversion part; and a linear fitting part for obtaining a final result by linear fitting based on the slope and the intercept obtained by the table lookup part. The present disclosure solves the problems of slow operation speed, large device area, and high power consumption caused by traditional methods.
    Type: Grant
    Filed: June 17, 2016
    Date of Patent: November 26, 2019
    Assignee: Institute of Computing Technology, Chinese Academy of Sciences
    Inventors: Shijin Zhang, Tao Luo, Shaoli Liu, Yunji Chen
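    Illustrative sketch: the table-lookup evaluation described above, shown for one nonlinear function. The input domain is cut into equal segments, each segment stores a slope and an intercept, and the result is slope * x + intercept; the sigmoid target, segment count, and domain below are assumptions, and the table is built on the fly purely for illustration.
      import math

      SEGMENTS, LO, HI = 64, -8.0, 8.0
      STEP = (HI - LO) / SEGMENTS

      def _sigmoid(x):
          return 1.0 / (1.0 + math.exp(-x))

      # slope/intercept per segment (chord between the segment endpoints)
      TABLE = []
      for i in range(SEGMENTS):
          x0, x1 = LO + i * STEP, LO + (i + 1) * STEP
          k = (_sigmoid(x1) - _sigmoid(x0)) / (x1 - x0)
          TABLE.append((k, _sigmoid(x0) - k * x0))

      def sigmoid_pwl(x):
          x = min(max(x, LO), HI - 1e-9)              # domain conversion: clamp into the lookup range
          k, b = TABLE[int((x - LO) / STEP)]          # table lookup
          return k * x + b                            # linear fitting

      if __name__ == "__main__":
          print(sigmoid_pwl(0.7), _sigmoid(0.7))      # the two values agree closely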
  • Patent number: 10474586
    Abstract: Aspects of managing Translation Lookaside Buffer (TLB) units are described herein. The aspects may include a memory management unit (MMU) that includes one or more TLB units and a control unit. The control unit may be configured to identify one from the one or more TLB units based on a stream identification (ID) included in a received virtual address and, further, to identify a frame number in the identified TLB unit. A physical address may be generated by the control unit based on the frame number and an offset included in the virtual address.
    Type: Grant
    Filed: February 26, 2019
    Date of Patent: November 12, 2019
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Tianshi Chen, Qi Guo, Yunji Chen
  • Publication number: 20190332945
    Abstract: A compression coding apparatus for an artificial neural network, including a memory interface unit, an instruction cache, a controller unit and a computing unit, wherein the computing unit is configured to perform corresponding operations on data from the memory interface unit according to instructions from the controller unit. The computing unit mainly performs a three-step operation: step one multiplies the input neurons by the weight data; step two performs adder tree computation, adding the weighted output neurons obtained in step one level by level via the adder tree, or adding a bias to the output neurons to obtain biased output neurons; step three performs an activation function operation to obtain the final output neurons. The present disclosure also provides a method for compression coding of a multi-layer neural network.
    Type: Application
    Filed: July 10, 2019
    Publication date: October 31, 2019
    Inventors: Tianshi CHEN, Shaoli LIU, Qi GUO, Yunji CHEN
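    Illustrative sketch: the three-step computation listed above for one output neuron, in plain Python: multiply the inputs by the weights, reduce the products with an adder tree (pairwise, level by level), add a bias, and apply an activation function. The names adder_tree and neuron, and the tanh activation, are illustrative assumptions.
      import math

      def adder_tree(values):
          """Sum a list by pairwise, level-by-level addition; an odd leftover is carried up."""
          while len(values) > 1:
              it = iter(values)
              values = [a + b for a, b in zip(it, it)] + ([values[-1]] if len(values) % 2 else [])
          return values[0] if values else 0.0

      def neuron(inputs, weights, bias, activation=math.tanh):
          products = [x * w for x, w in zip(inputs, weights)]   # step one: multiply
          total = adder_tree(products) + bias                   # step two: adder tree, then bias
          return activation(total)                              # step three: activation

      if __name__ == "__main__":
          print(neuron([1.0, 2.0, 3.0], [0.5, -0.25, 0.1], bias=0.05))   # tanh(0.35) ~ 0.336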
  • Patent number: 10462476
    Abstract: Aspects of data compression/decompression for neural networks are described herein. The aspects may include a model data converter configured to convert neural network content values into pseudo video data. The neural network content values may refer to weight values and bias values of the neural network. The pseudo video data may include one or more pseudo frames. The aspects may further include a compression module configured to encode the pseudo video data into one or more neural network data packages.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: October 29, 2019
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Tianshi Chen, Yuzhe Luo, Qi Guo, Shaoli Liu, Yunji Chen