Patents by Inventor Junlong KANG

Junlong KANG has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10698657
    Abstract: The present invention relates to recurrent neural networks. In particular, it relates to implementing and accelerating a recurrent neural network on an embedded FPGA. Specifically, it proposes an overall processing method comprising matrix decoding, matrix-vector multiplication, vector accumulation, and an activation function. In another aspect, the present invention proposes an overall hardware design to implement and accelerate the above process. (A simplified software sketch of this processing flow is given after this listing.)
    Type: Grant
    Filed: December 26, 2016
    Date of Patent: June 30, 2020
    Assignee: XILINX, INC.
    Inventors: Junlong Kang, Song Han, Yi Shan
  • Patent number: 10691996
    Abstract: A hardware accelerator for compressed Long Short Term Memory (LSTM) is disclosed. The accelerator comprises a sparse matrix-vector multiplication module for performing multiplication operations between all sparse matrices in the LSTM and vectors to sequentially obtain a plurality of sparse matrix-vector multiplication results. An addition tree module is also included for accumulating a plurality of said sparse matrix-vector multiplication results to obtain an accumulated result. A non-linear operation module then passes the accumulated result through an activation function to generate a non-linear operation result. The accelerator adopts a pipeline design to overlap data transfer and computation for the compressed LSTM. (A simplified software sketch of these modules is given after this listing.)
    Type: Grant
    Filed: December 15, 2016
    Date of Patent: June 23, 2020
    Assignee: BEIJING DEEPHI INTELLIGENT TECHNOLOGY CO., LTD.
    Inventors: Song Han, Dongliang Xie, Junlong Kang, Yubin Li
  • Publication number: 20180174036
    Abstract: A hardware accelerator for compressed Long Short Term Memory (LSTM) is disclosed. The accelerator comprises a sparse matrix-vector multiplication module for performing multiplication operations between all sparse matrices in the LSTM and vectors to sequentially obtain a plurality of sparse matrix-vector multiplication results. An addition tree module is also included for accumulating a plurality of said sparse matrix-vector multiplication results to obtain an accumulated result. A non-linear operation module then passes the accumulated result through an activation function to generate a non-linear operation result. The accelerator adopts a pipeline design to overlap data transfer and computation for the compressed LSTM.
    Type: Application
    Filed: December 15, 2016
    Publication date: June 21, 2018
    Inventors: Song HAN, Dongliang XIE, Junlong KANG, Yubin LI
  • Publication number: 20180046897
    Abstract: The present invention relates to recurrent neural networks. In particular, it relates to implementing and accelerating a recurrent neural network on an embedded FPGA. Specifically, it proposes an overall processing method comprising matrix decoding, matrix-vector multiplication, vector accumulation, and an activation function. In another aspect, the present invention proposes an overall hardware design to implement and accelerate the above process.
    Type: Application
    Filed: December 26, 2016
    Publication date: February 15, 2018
    Inventors: Junlong KANG, Song HAN, Yi SHAN
  • Publication number: 20180046895
    Abstract: The present invention proposes a highly parallel solution for implementing an ANN by sharing both the weight matrices of the ANN and the input activation vectors. It significantly reduces memory access operations and on-chip buffer requirements. In addition, the present invention considers how to achieve load balance among a plurality of on-chip processing units operated in parallel, as well as a balance between the I/O bandwidth and the calculation capabilities of the processing units. (A simplified load-balancing sketch is given after this listing.)
    Type: Application
    Filed: August 22, 2016
    Publication date: February 15, 2018
    Inventors: Dongliang XIE, Junlong KANG, Song HAN
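
Patent 10698657 (and its publication 20180046897) describes a processing flow of matrix decoding, matrix-vector multiplication, vector accumulation, and an activation function. The following is a minimal software sketch of that flow, not the patented FPGA implementation; the CSR weight encoding, the function names, and the tanh activation are assumptions made purely for illustration.

    import numpy as np

    def decode_matrix(values, col_indices, row_ptr, num_cols):
        """Matrix decoding stage: expand a CSR-compressed weight matrix
        into dense form. (Assumed encoding; the abstract does not fix a
        specific compression format.)"""
        num_rows = len(row_ptr) - 1
        dense = np.zeros((num_rows, num_cols))
        for r in range(num_rows):
            for idx in range(row_ptr[r], row_ptr[r + 1]):
                dense[r, col_indices[idx]] = values[idx]
        return dense

    def rnn_step(W_x, W_h, bias, x_t, h_prev):
        """Matrix-vector multiplication, vector accumulation, then an
        activation function (tanh assumed here)."""
        acc = W_x @ x_t + W_h @ h_prev + bias
        return np.tanh(acc)

    # Usage example with small random data.
    rng = np.random.default_rng(0)
    W_x = rng.standard_normal((4, 3))
    W_h = rng.standard_normal((4, 4))
    bias = np.zeros(4)
    h = np.zeros(4)
    for x_t in rng.standard_normal((5, 3)):   # five input time steps
        h = rnn_step(W_x, W_h, bias, x_t, h)
    print(h)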
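Patent 10691996 (and its publication 20180174036) describes three cooperating modules: sparse matrix-vector multiplication, an addition tree, and a non-linear operation unit. Below is a minimal sketch of what each module computes; the CSR layout, the pairwise reduction, and the sigmoid activation are illustrative assumptions, and the pipelined overlap of data transfer and computation is not modelled here.

    import numpy as np

    def spmv_csr(values, col_indices, row_ptr, vec):
        """Sparse matrix-vector multiplication module: multiply one
        CSR-compressed LSTM weight matrix with a vector."""
        out = np.zeros(len(row_ptr) - 1)
        for r in range(len(out)):
            for idx in range(row_ptr[r], row_ptr[r + 1]):
                out[r] += values[idx] * vec[col_indices[idx]]
        return out

    def addition_tree(partials):
        """Addition tree module: accumulate the partial SpMV results by
        pairwise (tree-shaped) reduction."""
        vals = list(partials)
        while len(vals) > 1:
            nxt = [vals[i] + vals[i + 1] for i in range(0, len(vals) - 1, 2)]
            if len(vals) % 2:
                nxt.append(vals[-1])
            vals = nxt
        return vals[0]

    def nonlinear(accumulated):
        """Non-linear operation module: pass the accumulated result
        through an activation function (sigmoid assumed)."""
        return 1.0 / (1.0 + np.exp(-accumulated))

    # Usage example: one 2x3 sparse matrix applied to two vectors,
    # partial results accumulated, then activated.
    values = [2.0, -1.0, 3.0]
    col_indices = [0, 2, 1]
    row_ptr = [0, 2, 3]
    x = np.array([1.0, 0.5, -0.25])
    partial_a = spmv_csr(values, col_indices, row_ptr, x)
    partial_b = spmv_csr(values, col_indices, row_ptr, 2 * x)
    print(nonlinear(addition_tree([partial_a, partial_b])))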
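Publication 20180046895 emphasizes load balance across on-chip processing units operated in parallel. The function below is a minimal sketch of one common way to approximate that goal in software: a greedy assignment of matrix rows to processing elements by non-zero count. The row-based partitioning and the greedy heuristic are assumptions for illustration, not the method claimed in the application.

    def balance_rows(nnz_per_row, num_pes):
        """Assign matrix rows to processing elements (PEs) so that each
        PE handles roughly the same number of non-zero weights."""
        loads = [0] * num_pes
        assignment = [[] for _ in range(num_pes)]
        # Place the heaviest rows first, each on the currently lightest PE.
        for row in sorted(range(len(nnz_per_row)), key=lambda r: -nnz_per_row[r]):
            pe = loads.index(min(loads))
            assignment[pe].append(row)
            loads[pe] += nnz_per_row[row]
        return assignment, loads

    # Usage example: six rows with uneven sparsity spread over three PEs.
    assignment, loads = balance_rows([10, 3, 7, 1, 8, 2], num_pes=3)
    print(assignment)   # per-PE row assignments
    print(loads)        # per-PE non-zero counts, roughly equal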