Patents by Inventor Dongliang XIE

Dongliang XIE has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240361406
    Abstract: A radio-frequency power amplification module for a magnetic resonance system, and an imaging method, are provided. The module includes a power synthesizer, a main amplifier, an auxiliary amplifier, and a controller, with the output ends of the main amplifier and the auxiliary amplifier both connected to the power synthesizer. The controller is configured to output a control signal according to a required radio-frequency transmission parameter, or a scan parameter corresponding to that transmission parameter, so as to adjust a control parameter of the auxiliary amplifier. The radio-frequency power amplification module achieves high efficiency. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: April 23, 2024
    Publication date: October 31, 2024
    Inventors: Xin Xie, Dongliang Yang, Yu Liu, Yanfang Cai, Ning Zhang, Chaoya Zhao, Alen Wang
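    Illustrative note: the controller above adjusts a control parameter of the auxiliary amplifier based on the requested transmit parameter. A minimal Python sketch of one plausible realization follows, interpolating an auxiliary gate-bias voltage from a calibration table; the table values, names, and the choice of gate bias as the control parameter are all hypothetical, not taken from the patent.

        import bisect

        # (requested peak power in W, auxiliary gate bias in V) -- made-up calibration points
        BIAS_TABLE = [(100.0, 1.2), (500.0, 1.8), (1000.0, 2.4), (2000.0, 3.1)]

        def auxiliary_bias(requested_power_w: float) -> float:
            """Interpolate an auxiliary-amplifier bias for the requested transmit power."""
            powers = [p for p, _ in BIAS_TABLE]
            i = bisect.bisect_left(powers, requested_power_w)
            if i == 0:
                return BIAS_TABLE[0][1]
            if i == len(BIAS_TABLE):
                return BIAS_TABLE[-1][1]
            (p0, b0), (p1, b1) = BIAS_TABLE[i - 1], BIAS_TABLE[i]
            t = (requested_power_w - p0) / (p1 - p0)
            return b0 + t * (b1 - b0)

        print(auxiliary_bias(750.0))  # bias for a hypothetical 750 W transmit request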
  • Patent number: 12044761
    Abstract: An embodiment of the present application provides a magnetic resonance imaging system, a transmission apparatus, and a transmission method. The method comprises: generating and outputting a pulse signal by a transmission controller; amplifying the pulse signal by a radio-frequency amplifier; transmitting, by a signal processor, the signal amplified by the radio-frequency amplifier to a transmit coil of the magnetic resonance imaging system; and generating frequency offset lookup information for bandwidth compensation of the entire transmission apparatus according to bandwidth data of the transmission controller, the radio-frequency amplifier, and the signal processor, wherein the frequency offset lookup information is used to control the output power generated by the transmission controller. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: August 29, 2022
    Date of Patent: July 23, 2024
    Assignee: GE Precision Healthcare LLC
    Inventors: Yu Liu, Tingting Song, Kai Wang, Haoyang Xing, Jianye Ning, Xin Xie, Dongliang Yang, Chunlai Xiao
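    Illustrative note: the bandwidth compensation above combines per-stage bandwidth data into frequency offset lookup information. The Python sketch below shows the general idea under hypothetical per-stage gain tables: the chain's gain droop at each frequency offset is inverted into a power correction factor. All numbers and names are illustrative, not from the patent.

        # frequency offset (kHz) -> linear gain of each stage (made-up data)
        OFFSETS_KHZ = [-200, -100, 0, 100, 200]
        CONTROLLER_GAIN = [0.97, 0.99, 1.00, 0.99, 0.97]
        AMPLIFIER_GAIN = [0.92, 0.97, 1.00, 0.96, 0.90]
        PROCESSOR_GAIN = [0.99, 1.00, 1.00, 1.00, 0.98]

        def build_offset_lookup():
            """Combine per-stage gains and invert the chain droop into corrections."""
            lookup = {}
            for i, f in enumerate(OFFSETS_KHZ):
                chain_gain = CONTROLLER_GAIN[i] * AMPLIFIER_GAIN[i] * PROCESSOR_GAIN[i]
                lookup[f] = 1.0 / chain_gain  # drive harder where the chain droops
            return lookup

        for f, corr in build_offset_lookup().items():
            print(f"{f:+5d} kHz: scale requested power by {corr:.3f}")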
  • Patent number: 10810484
    Abstract: The present technical disclosure relates to artificial neural networks, e.g., the gated recurrent unit (GRU). In particular, it relates to how to implement a hardware accelerator for a compressed GRU on an embedded FPGA. Specifically, it proposes an overall processing design covering matrix decoding, matrix-vector multiplication, vector accumulation, and the activation function. In another aspect, it proposes an overall hardware design to implement and accelerate the above process. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: December 27, 2016
    Date of Patent: October 20, 2020
    Assignee: XILINX, INC.
    Inventors: Dongliang Xie, Song Han, Yi Shan
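    Illustrative note: the processing chain named in the abstract (matrix decoding, matrix-vector multiplication, vector accumulation, activation) can be modeled in software. The sketch below decodes a CSR-style compressed matrix, multiplies it against a vector, and applies the GRU activations; the CSR layout and all names are assumptions for illustration, since the patent targets an FPGA pipeline rather than Python.

        import math

        def spmv(values, col_idx, row_ptr, x):
            """Sparse matrix-vector multiply over a CSR-encoded matrix."""
            y = [0.0] * (len(row_ptr) - 1)
            for row in range(len(y)):
                for k in range(row_ptr[row], row_ptr[row + 1]):
                    y[row] += values[k] * x[col_idx[k]]
            return y

        def sigmoid(v):
            return 1.0 / (1.0 + math.exp(-v))

        def gru_step(W_z, W_r, W_h, x, h):
            """One GRU step; each W_* is a (values, col_idx, row_ptr) CSR triple
            acting on the concatenated [x, h] vector."""
            xh = x + h
            z = [sigmoid(v) for v in spmv(*W_z, xh)]           # update gate
            r = [sigmoid(v) for v in spmv(*W_r, xh)]           # reset gate
            xrh = x + [ri * hi for ri, hi in zip(r, h)]
            h_tilde = [math.tanh(v) for v in spmv(*W_h, xrh)]  # candidate state
            return [(1 - zi) * hi + zi * hti
                    for zi, hi, hti in zip(z, h, h_tilde)]

        W = ([0.5, 0.1, 0.2], [1, 0, 3], [0, 1, 3])  # 2x4 CSR matrix (toy weights)
        print(gru_step(W, W, W, x=[1.0, -1.0], h=[0.0, 0.0]))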
  • Patent number: 10691996
    Abstract: A hardware accelerator for a compressed Long Short Term Memory (LSTM) network is disclosed. The accelerator comprises a sparse matrix-vector multiplication module that performs multiplication between each sparse matrix in the LSTM and a vector, sequentially obtaining a plurality of sparse matrix-vector multiplication results. An addition tree module accumulates these results to obtain an accumulated result, and a non-linear operation module passes the accumulated result through an activation function to generate the non-linear operation result. The accelerator adopts a pipeline design that overlaps data transfer and computation for the compressed LSTM. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: December 15, 2016
    Date of Patent: June 23, 2020
    Assignee: BEIJING DEEPHI INTELLIGENT TECHNOLOGY CO., LTD.
    Inventors: Song Han, Dongliang Xie, Junlong Kang, Yubin Li
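    Illustrative note: two pieces of the abstract lend themselves to a small behavioral model: the addition tree that accumulates partial results pairwise, and the pipeline that overlaps data transfer with computation. The Python sketch below is a software stand-in only; the queue-based pipeline and all names are assumptions, not the patent's hardware design.

        from queue import Queue
        from threading import Thread

        def addition_tree(partials):
            """Reduce partial-result vectors pairwise, as a hardware adder tree would."""
            level = list(partials)
            while len(level) > 1:
                nxt = [[a + b for a, b in zip(level[i], level[i + 1])]
                       for i in range(0, len(level) - 1, 2)]
                if len(level) % 2:        # odd vector passes through unchanged
                    nxt.append(level[-1])
                level = nxt
            return level[0]

        def pipelined(blocks, fetch, compute):
            """Overlap fetching (data transfer of) block i+1 with computing block i."""
            q = Queue(maxsize=1)
            def producer():
                for b in blocks:
                    q.put(fetch(b))
                q.put(None)
            Thread(target=producer, daemon=True).start()
            results = []
            while (item := q.get()) is not None:
                results.append(compute(item))
            return results

        print(addition_tree([[1, 2], [3, 4], [5, 6]]))                           # -> [9, 12]
        print(pipelined([1, 2, 3], fetch=lambda b: b, compute=lambda b: b * b))  # -> [1, 4, 9]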
  • Publication number: 20180174036
    Abstract: A hardware accelerator for a compressed Long Short Term Memory (LSTM) network is disclosed. The accelerator comprises a sparse matrix-vector multiplication module that performs multiplication between each sparse matrix in the LSTM and a vector, sequentially obtaining a plurality of sparse matrix-vector multiplication results. An addition tree module accumulates these results to obtain an accumulated result, and a non-linear operation module passes the accumulated result through an activation function to generate the non-linear operation result. The accelerator adopts a pipeline design that overlaps data transfer and computation for the compressed LSTM.
    Type: Application
    Filed: December 15, 2016
    Publication date: June 21, 2018
    Inventors: Song HAN, Dongliang XIE, Junlong KANG, Yubin LI
  • Publication number: 20180157969
    Abstract: An apparatus implementing an accelerator for a sparse convolutional neural network is provided. The apparatus comprises a convolution and pooling unit, a fully connected unit, and a control unit. Convolution parameter information, input data, and intermediate calculation data are read based on control information, along with weight matrix position information for the fully connected layer. A convolution and pooling operation is then performed on the input data for a first number of iterations in accordance with the convolution parameter information, after which a fully connected calculation is performed for a second number of iterations in accordance with the weight matrix position information of the fully connected layer. Each item of input data is divided into a plurality of sub-blocks, and the convolution and pooling unit and the fully connected unit operate on the sub-blocks in parallel. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: December 5, 2017
    Publication date: June 7, 2018
    Inventors: Dongliang XIE, Yu ZHANG, Yi SHAN
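    Illustrative note: the dataflow above splits each input into sub-blocks that are convolved and pooled in parallel, then applies a fully connected layer using only the positions of non-zero weights. The toy Python sketch below mirrors that flow in one dimension; the sizes, the 1-D stand-in for convolution, and all names are assumptions for illustration.

        from concurrent.futures import ThreadPoolExecutor

        def conv_pool(block, kernel):
            """Valid 1-D convolution followed by max pooling of width 2 (toy stand-in)."""
            conv = [sum(kernel[j] * block[i + j] for j in range(len(kernel)))
                    for i in range(len(block) - len(kernel) + 1)]
            return [max(conv[i:i + 2]) for i in range(0, len(conv) - 1, 2)]

        def sparse_fc(features, weight_positions):
            """Fully connected layer driven by (out_idx, in_idx, value) triples,
            i.e., the position information of the FC layer's non-zero weights."""
            out = [0.0] * (1 + max(o for o, _, _ in weight_positions))
            for o, i, w in weight_positions:
                out[o] += w * features[i]
            return out

        blocks = [[1, 2, 3, 4, 5], [5, 4, 3, 2, 1]]       # two input sub-blocks
        kernel = [0.5, -0.5]
        with ThreadPoolExecutor() as pool:                # sub-blocks in parallel
            feats = [f for part in pool.map(lambda b: conv_pool(b, kernel), blocks)
                     for f in part]
        print(sparse_fc(feats, [(0, 0, 1.0), (1, 2, -2.0)]))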
  • Publication number: 20180046901
    Abstract: The present technical disclosure relates to artificial neural networks, e.g., the gated recurrent unit (GRU). In particular, it relates to how to implement a hardware accelerator for a compressed GRU on an embedded FPGA. Specifically, it proposes an overall processing design covering matrix decoding, matrix-vector multiplication, vector accumulation, and the activation function. In another aspect, it proposes an overall hardware design to implement and accelerate the above process.
    Type: Application
    Filed: December 27, 2016
    Publication date: February 15, 2018
    Inventors: Dongliang XIE, Song HAN, Yi SHAN
  • Publication number: 20180046895
    Abstract: The present invention proposes a highly parallel solution for implementing an ANN by sharing both the ANN's weight matrix and the input activation vectors. This significantly reduces memory access operations and the required on-chip buffering. In addition, the present invention considers how to achieve load balance among a plurality of on-chip processing units operating in parallel, as well as a balance between I/O bandwidth and the calculation capabilities of the processing units. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: August 22, 2016
    Publication date: February 15, 2018
    Inventors: Dongliang XIE, Junlong KANG, Song HAN
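    Illustrative note: the abstract's two ideas, reusing one weight matrix across a batch of input activation vectors and balancing work across processing units, are sketched below in Python. The greedy assignment balances non-zero weights per unit; the heuristic and all names are assumptions for illustration, not the patent's method.

        import heapq

        def balance_rows(rows, num_pes):
            """Greedily assign matrix rows to PEs, heaviest (most non-zeros) first."""
            heap = [(0, pe, []) for pe in range(num_pes)]   # (nnz so far, pe id, rows)
            heapq.heapify(heap)
            order = sorted(range(len(rows)),
                           key=lambda r: -sum(1 for w in rows[r] if w != 0.0))
            for r in order:
                nnz, pe, assigned = heapq.heappop(heap)
                assigned.append(r)
                nnz += sum(1 for w in rows[r] if w != 0.0)
                heapq.heappush(heap, (nnz, pe, assigned))
            return {pe: assigned for _, pe, assigned in heap}

        def run_batch(rows, batch, assignment):
            """Each PE applies its rows to every vector in the batch (weights shared)."""
            out = [[0.0] * len(rows) for _ in batch]
            for pe, row_ids in assignment.items():
                for r in row_ids:
                    for v, x in enumerate(batch):
                        out[v][r] = sum(w * xi for w, xi in zip(rows[r], x))
            return out

        W = [[0, 1, 0], [2, 0, 3], [0, 0, 4], [5, 6, 7]]    # toy weight matrix
        plan = balance_rows(W, num_pes=2)
        print(plan)                                         # rows split across 2 PEs
        print(run_batch(W, [[1, 1, 1], [2, 0, 1]], plan))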