Patents by Inventor Shaoxia FANG

Shaoxia FANG has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240061903
    Abstract: Circuits and methods for determining a maximum bias for computing softmax on a tensor include a processor circuit configured to transform, in parallel, the elements of each group of a plurality of groups of elements of a tensor X into respective power-of-two elements. The power-of-two element obtained from element x_t of the tensor is p_t, where p_t = x_t * log2(e), and p_t has an integer part and a fraction part. A first comparison circuit is configured to determine respective group-level biases for the groups: the group-level bias of group_m is d_m, the integer part of the maximum of the power-of-two elements of group_m. A second comparison circuit is configured to determine the greatest of the group-level biases to be the tensor-level bias, d_max. (A sketch of this two-stage search follows this entry.)
    Type: Application
    Filed: August 22, 2022
    Publication date: February 22, 2024
    Applicant: Xilinx, Inc.
    Inventors: Wenzong Yang, Wang Xi, Yadong Li, Junbin Wang, Shaoxia Fang
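
The abstract above describes a two-stage parallel maximum search. Below is a minimal NumPy sketch of the idea; the group size, the flattening of the tensor, and the use of floor for the "integer part" are illustrative assumptions, not details from the patent.

    import numpy as np

    LOG2E = np.log2(np.e)  # the constant written log2(e) in the abstract

    def tensor_level_bias(x: np.ndarray, group_size: int = 8) -> int:
        # Transform every element to its power-of-two form: p_t = x_t * log2(e).
        p = x.flatten() * LOG2E
        # First comparison stage: one bias per group, d_m = integer part of the
        # group's maximum p_t (floor is an assumption for "integer part").
        groups = np.array_split(p, max(1, p.size // group_size))
        d = [int(np.floor(g.max())) for g in groups]
        # Second comparison stage: the greatest group-level bias is d_max.
        return max(d)

    def softmax_base2(x: np.ndarray) -> np.ndarray:
        # Since e**x_t == 2**p_t, softmax can be computed entirely in base 2,
        # with d_max subtracted for numerical stability.
        d_max = tensor_level_bias(x)
        q = np.exp2(x * LOG2E - d_max)
        return q / q.sum()

Because d_max is the integer part of the global maximum of the p_t values, every exponent p_t - d_max is less than 1, so each 2**(p_t - d_max) stays below 2; that bound is the usual reason for extracting a bias before exponentiation.
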
  • Patent number: 11093225
    Abstract: A high-parallelism computing system and an instruction scheduling method for it are disclosed. The computing system comprises: an instruction reading and distribution module that reads a plurality of types of instructions in a specific order and distributes the acquired instructions to corresponding function modules according to their types; an internal buffer that holds the data and instructions used in computation; and a plurality of function modules, each of which sequentially executes the instructions of its own type distributed by the instruction reading and distribution module and reads its data from the internal buffer. The specific order is obtained by topologically sorting the instructions according to a directed acyclic graph formed from the instruction types and their dependency relationships. (A sketch of this topological-order dispatch follows this entry.)
    Type: Grant
    Filed: June 27, 2019
    Date of Patent: August 17, 2021
    Assignee: Xilinx, Inc.
    Inventors: Qian Yu, Lingzhi Sui, Shaoxia Fang, Junbin Wang, Yi Shan
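
The scheduling idea in the entry above (and in publication 20200004514 below) can be sketched with the standard library's graphlib; the instruction names, types, and dependencies here are invented for illustration.

    from collections import defaultdict
    from graphlib import TopologicalSorter  # standard library, Python 3.9+

    # Hypothetical instruction stream: name -> (type, set of prerequisite names).
    instructions = {
        "ld0": ("LOAD",    set()),
        "ld1": ("LOAD",    set()),
        "cv0": ("COMPUTE", {"ld0"}),
        "cv1": ("COMPUTE", {"ld1", "cv0"}),
        "sv0": ("SAVE",    {"cv1"}),
    }

    # The "specific order": a topological sort of the dependency DAG, so no
    # instruction is read before the instructions it depends on.
    deps = {name: prereqs for name, (_, prereqs) in instructions.items()}
    order = list(TopologicalSorter(deps).static_order())

    # The reading and distribution module hands each instruction to the queue
    # of its function module; each module then drains its own queue in order.
    queues = defaultdict(list)
    for name in order:
        queues[instructions[name][0]].append(name)

    for module, queue in queues.items():
        print(module, queue)  # e.g. LOAD ['ld0', 'ld1'] ... SAVE ['sv0']

Splitting one topologically sorted stream into per-type queues is what lets the function modules run concurrently while still honoring cross-type dependencies.
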
  • Patent number: 10902315
    Abstract: The present disclosure relates to a processor for implementing artificial neural networks, for example, convolutional neural networks. The processor includes a memory controller group, an on-chip bus and a processor core, wherein the processor core further includes a register map, an instruction module, a data transferring controller, a data writing scheduling unit, a buffer module, a convolution operation unit and a hybrid computation unit. The processor of the present disclosure may be used for implementing various neural networks with increased computation efficiency. (A toy model of the two computation units follows this entry.)
    Type: Grant
    Filed: May 22, 2017
    Date of Patent: January 26, 2021
    Assignee: Xilinx, Inc.
    Inventors: Shaoxia Fang, Lingzhi Sui, Qian Yu, Junbin Wang, Yi Shan
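
As a rough illustration of how work might split between the two computation units named in this entry (the same abstract appears under publication 20180307976 below), here is a toy software model; the operations given to the hybrid unit, ReLU and 2x2 max pooling, are assumptions, not details from the patent.

    import numpy as np

    class ConvolutionOperationUnit:
        # Stand-in for the dedicated convolution unit: a naive single-channel
        # 2-D convolution (stride 1, no padding).
        def run(self, fmap: np.ndarray, kernel: np.ndarray) -> np.ndarray:
            kh, kw = kernel.shape
            oh, ow = fmap.shape[0] - kh + 1, fmap.shape[1] - kw + 1
            out = np.empty((oh, ow))
            for i in range(oh):
                for j in range(ow):
                    out[i, j] = (fmap[i:i + kh, j:j + kw] * kernel).sum()
            return out

    class HybridComputationUnit:
        # Stand-in for the unit handling the non-convolution layers.
        def relu(self, x: np.ndarray) -> np.ndarray:
            return np.maximum(x, 0.0)

        def maxpool2x2(self, x: np.ndarray) -> np.ndarray:
            h, w = (x.shape[0] // 2) * 2, (x.shape[1] // 2) * 2
            return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

    class ProcessorCore:
        # The buffer module is modeled as a plain variable holding the
        # intermediate feature map between the two units.
        def __init__(self):
            self.conv = ConvolutionOperationUnit()
            self.hybrid = HybridComputationUnit()

        def conv_layer(self, fmap: np.ndarray, kernel: np.ndarray) -> np.ndarray:
            buffered = self.conv.run(fmap, kernel)
            return self.hybrid.maxpool2x2(self.hybrid.relu(buffered))

    core = ProcessorCore()
    print(core.conv_layer(np.random.rand(6, 6), np.ones((3, 3))).shape)  # (2, 2)
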
  • Patent number: 10824939
    Abstract: The present disclosure relates to a processor for implementing artificial neural networks, for example, convolutional neural networks. The processor includes a memory controller group, an on-chip bus and a processor core, wherein the processor core further includes a register map, an instruction module, a data transferring controller, a data writing scheduling unit, a buffer pool, a data reading scheduling unit and a computation module. The processor of the present disclosure may be used for implementing various neural networks with increased computation efficiency. (A toy model of the buffer-pool scheduling follows this entry.)
    Type: Grant
    Filed: May 22, 2017
    Date of Patent: November 3, 2020
    Assignee: Xilinx, Inc.
    Inventors: Shaoxia Fang, Lingzhi Sui, Qian Yu, Junbin Wang, Yi Shan
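
The distinguishing pieces in this entry are the buffer pool and the separate write- and read-scheduling units (the abstract recurs under publication 20180307973 below). A toy queue-based model is sketched here; the slot-recycling scheme is an assumption about how such a pool could behave, not a claim about the patented design.

    from collections import deque

    class BufferPool:
        # Free slots are claimed by the writer; filled slots are drained by
        # the reader and then recycled, letting transfer and compute overlap.
        def __init__(self, num_slots: int = 2):
            self.free = deque(range(num_slots))
            self.filled = deque()

    class DataWritingSchedulingUnit:
        def __init__(self, pool: BufferPool):
            self.pool = pool

        def write(self, data) -> bool:
            if not self.pool.free:           # back-pressure: pool is full
                return False
            slot = self.pool.free.popleft()
            self.pool.filled.append((slot, data))
            return True

    class DataReadingSchedulingUnit:
        def __init__(self, pool: BufferPool):
            self.pool = pool

        def read(self):
            if not self.pool.filled:         # nothing ready for computation
                return None
            slot, data = self.pool.filled.popleft()
            self.pool.free.append(slot)      # recycle the slot for the writer
            return data

    pool = BufferPool(num_slots=2)
    writer = DataWritingSchedulingUnit(pool)
    reader = DataReadingSchedulingUnit(pool)
    writer.write("tile0"); writer.write("tile1")
    print(reader.read(), reader.read())      # tile0 tile1
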
  • Publication number: 20200004514
    Abstract: A high-parallelism computing system and an instruction scheduling method for it are disclosed. The computing system comprises: an instruction reading and distribution module that reads a plurality of types of instructions in a specific order and distributes the acquired instructions to corresponding function modules according to their types; an internal buffer that holds the data and instructions used in computation; and a plurality of function modules, each of which sequentially executes the instructions of its own type distributed by the instruction reading and distribution module and reads its data from the internal buffer. The specific order is obtained by topologically sorting the instructions according to a directed acyclic graph formed from the instruction types and their dependency relationships. (See the sketch after patent 11093225 above.)
    Type: Application
    Filed: June 27, 2019
    Publication date: January 2, 2020
    Inventors: Qian YU, Lingzhi SUI, Shaoxia FANG, Junbin WANG, Yi SHAN
  • Patent number: 10282659
    Abstract: The present disclosure relates to a processor for implementing artificial neural networks, for example, convolutional neural networks. The processor includes a memory controller group, an on-chip bus and a processor core, wherein the processor core further includes a register map, a first instruction unit, a second instruction unit, an instruction distributing unit, a data transferring controller, a buffer module and a computation module. The processor of the present disclosure may be used for implementing various neural networks with increased computation efficiency. (A minimal sketch of the instruction distribution follows this entry.)
    Type: Grant
    Filed: May 22, 2017
    Date of Patent: May 7, 2019
    Assignee: BEIJING DEEPHI INTELLIGENT TECHNOLOGY CO., LTD.
    Inventors: Shaoxia Fang, Lingzhi Sui, Qian Yu, Junbin Wang, Yi Shan
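
What sets this entry (and publication 20180307974 below) apart from the two processor patents above is the pair of instruction units fed by an instruction distributing unit. A minimal sketch follows; routing data-movement opcodes to the first unit and compute opcodes to the second is purely an assumed policy, not taken from the patent.

    class InstructionDistributingUnit:
        # Routes each fetched instruction to one of the two instruction
        # units; the opcode-based split below is hypothetical.
        FIRST_UNIT_OPS = {"LOAD", "SAVE"}

        def __init__(self, first_unit: list, second_unit: list):
            self.first, self.second = first_unit, second_unit

        def distribute(self, instr: tuple) -> None:
            opcode = instr[0]
            target = self.first if opcode in self.FIRST_UNIT_OPS else self.second
            target.append(instr)

    first_unit, second_unit = [], []
    distributor = InstructionDistributingUnit(first_unit, second_unit)
    for instr in [("LOAD", "fmap"), ("CONV", "layer1"), ("SAVE", "fmap")]:
        distributor.distribute(instr)
    print(len(first_unit), len(second_unit))  # 2 1
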
  • Publication number: 20180307974
    Abstract: The present disclosure relates to a processor for implementing artificial neural networks, for example, convolutional neural networks. The processor includes a memory controller group, an on-chip bus and a processor core, wherein the processor core further includes a register map, a first instruction unit, a second instruction unit, an instruction distributing unit, a data transferring controller, a buffer module and a computation module. The processor of the present disclosure may be used for implementing various neural networks with increased computation efficiency.
    Type: Application
    Filed: May 22, 2017
    Publication date: October 25, 2018
    Inventors: Shaoxia FANG, Lingzhi SUI, Qian YU, Junbin WANG, Yi SHAN
  • Publication number: 20180307973
    Abstract: The present disclosure relates to a processor for implementing artificial neural networks, for example, convolutional neural networks. The processor includes a memory controller group, an on-chip bus and a processor core, wherein the processor core further includes a register map, an instruction module, a data transferring controller, a data writing scheduling unit, a buffer pool, a data reading scheduling unit and a computation module. The processor of the present disclosure may be used for implementing various neural networks with increased computation efficiency.
    Type: Application
    Filed: May 22, 2017
    Publication date: October 25, 2018
    Inventors: Shaoxia FANG, Lingzhi SUI, Qian YU, Junbin WANG, Yi SHAN
  • Publication number: 20180307976
    Abstract: The present disclosure relates to a processor for implementing artificial neural networks, for example, convolutional neural networks. The processor includes a memory controller group, an on-chip bus and a processor core, wherein the processor core further includes a register map, an instruction module, a data transferring controller, a data writing scheduling unit, a buffer module, a convolution operation unit and a hybrid computation unit. The processor of the present disclosure may be used for implementing various neural networks with increased computation efficiency.
    Type: Application
    Filed: May 22, 2017
    Publication date: October 25, 2018
    Inventors: Shaoxia FANG, Lingzhi SUI, Qian YU, Junbin WANG, Yi SHAN