FPGA Co-Processor For Accelerated Computation
A co-processor module for accelerating computational performance includes a Field Programmable Gate Array (“FPGA”) and a Programmable Logic Device (“PLD”) coupled to the FPGA and configured to control start-up configuration of the FPGA. A non-volatile memory is coupled to the PLD and configured to store a start-up bitstream for the start-up configuration of the FPGA. A mechanical and electrical interface is configured to be plugged into a microprocessor socket of a motherboard for direct communication with at least one microprocessor capable of being coupled to the motherboard. After completion of a start-up cycle, the FPGA is configured for direct communication with the at least one microprocessor via a microprocessor bus to which the microprocessor socket is coupled.
This application is a continuation of U.S. patent application Ser. No. 11/829,801, filed Jul. 27, 2007, which claims the benefit of U.S. provisional patent application No. 60/820,730, entitled “FPGA Co-Processor for Accelerated Computation,” filed Jul. 28, 2006, each of the disclosures of which is herein incorporated by reference in its entirety for all purposes.
FIELD
One or more embodiments generally relate to accelerators and, more particularly, to a co-processor module including a Field Programmable Gate Array (“FPGA”).
BACKGROUND
Co-processors have often been used to accelerate computational performance. For example, early microprocessors could not include floating-point computation circuitry due to chip area limitations. Because performing floating-point computations in software is extremely slow, this circuitry was often placed in a second chip that was activated whenever a floating-point computation was required. As chip technology improved, the microprocessor and the floating-point co-processor were combined into a single chip.
A similar situation occurs today with specialized computational algorithms. Standard microprocessors do not include circuitry for performing these algorithms because they are often specific to only a few users. By using an FPGA as a co-processor, an algorithm can be designed and programmed into hardware to build a circuit that is unique to each application, resulting in a significant acceleration of the desired computation.
SUMMARY
One or more embodiments generally relate to accelerators and, more particularly, to a co-processor module including a Field Programmable Gate Array (“FPGA”).
A co-processor module for accelerating computational performance includes a Field Programmable Gate Array (“FPGA”) and a Programmable Logic Device (“PLD”) coupled to the FPGA and configured to control start-up configuration of the FPGA. A non-volatile memory is coupled to the PLD and configured to store a start-up bitstream for the start-up configuration of the FPGA. A mechanical and electrical interface is configured to be plugged into a microprocessor socket of a motherboard for direct communication with at least one microprocessor capable of being coupled to the motherboard. After completion of a start-up cycle, the FPGA is configured for direct communication with the at least one microprocessor via a microprocessor bus to which the microprocessor socket is coupled.
BRIEF DESCRIPTION OF THE DRAWING(S)
Accompanying drawing(s) show exemplary embodiment(s) in accordance with one or more embodiments; however, the accompanying drawing(s) should not be taken to limit the invention to the embodiment(s) shown, but are for explanation and understanding only.
DETAILED DESCRIPTION
In the following description, numerous specific details are set forth to provide a more thorough description of the specific embodiments of the invention. It should be apparent, however, to one skilled in the art, that the invention may be practiced without all the specific details given below. In other instances, well-known features have not been described in detail so as not to obscure the invention. For ease of illustration, the same number labels are used in different diagrams to refer to the same items; however, in alternative embodiments the items may be different. Furthermore, although particular integrated circuit parts are described herein for purposes of clarity by way of example, it should be understood that the scope of the description is not limited to these particular numerical examples as other integrated circuit parts may be used.
A multi-processor system consists of several processing chips connected to each other by high-speed busses. By replacing one or more of these processor chips with application-specific co-processors, it is often possible to obtain a significant acceleration in computational speed. Each co-processor sits in the motherboard socket designed for a standard processor and makes use of motherboard resources.
According to one embodiment, the co-processor FPGA is located on a module which plugs into a standard microprocessor socket. Motherboards are commonly available which have multiple microprocessor sockets, allowing one or more standard microprocessors to co-exist with one or more co-processor modules. Thus, no changes to the motherboard or other system hardware are required, making it easy to build co-processor systems. The co-processor has access to motherboard resources including large amounts of memory. These resources need not be duplicated on the co-processor module, reducing the cost, size and power requirements for the co-processor. The co-processor is connected to the main processor by one or more high-speed low-latency busses. Many algorithms require frequent communication between the main microprocessor and the co-processor, making this interface a factor in achieving high performance.
According to another embodiment, to accelerate computational algorithms, a co-processor module is included which plugs into a standard microprocessor socket on a motherboard and communicates with the microprocessor by one or more high-speed, low-latency busses. The co-processor has access to motherboard resources through the microprocessor socket. The co-processor includes an FPGA which is reconfigurable and may be loaded with a new configuration pattern suitable for a different algorithm under control of the microprocessor. The configuration pattern is developed using a set of software tools. The co-processor module capabilities may be extended by adding additional piggyback cards.
Another embodiment is an accelerator module, including an FPGA and a Programmable Logic Device (“PLD”) coupled to the FPGA and configured to control start-up configuration of the FPGA. A non-volatile memory is coupled to the PLD and configured to store a start-up bitstream for the start-up configuration of the FPGA. A mechanical and electrical interface is configured for being plugged into a microprocessor socket of a motherboard for direct communication with at least one microprocessor capable of being coupled to the motherboard. After completion of a start-up cycle, the FPGA is configured for direct communication with the at least one microprocessor via a microprocessor bus to which the microprocessor socket is coupled.
Another embodiment generally is an accelerator system, comprising a first motherboard having accelerator modules and a second motherboard having at least one microprocessor. Each of the accelerator modules includes an FPGA and a Programmable Logic Device (“PLD”) coupled to the FPGA and configured to control start-up configuration of the FPGA. A non-volatile memory is coupled to the PLD and configured to store a start-up bitstream for the start-up configuration of the FPGA. A mechanical and electrical interface is configured for being plugged into a microprocessor socket of the first motherboard for direct communication as between the accelerator modules. The microprocessor socket is coupled to a microprocessor bus for the direct communication between the accelerator modules.
Yet another embodiment generally is a method for co-processing. An accelerator module is coupled to a microprocessor bus, the accelerator module including a Field Programmable Gate Array (“FPGA”). A microprocessor bus interface bitstream is loaded into the FPGA to program programmable logic thereof. Data is transferred to first memory of the accelerator module via a microprocessor bus using a microprocessor bus interface instantiated in the FPGA responsive to the microprocessor bus interface bitstream. A default configuration bitstream stored in the first memory is instantiated in the FPGA to configure the FPGA to have the microprocessor bus interface with sufficient functionality to be recognized by a microprocessor coupled to the microprocessor bus.
Still yet another embodiment generally is another method for co-processing. An accelerator module, which includes a Field Programmable Gate Array (“FPGA”) and first memory, is coupled to a microprocessor bus. The first memory has a default configuration bitstream stored therein. The default configuration bitstream is loaded into the FPGA to program programmable logic thereof. The default configuration bitstream includes a microprocessor bus interface. The FPGA is configured with the default configuration bitstream with sufficient functionality to be recognized by a microprocessor coupled to the microprocessor bus.
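By way of illustration only, the ordering of operations in the two methods above may be sketched in software. The AcceleratorModule class and its method names below are hypothetical stand-ins for hardware behavior; the embodiments define no software API.

```python
# Illustrative sketch of the two co-processing start-up methods.
# AcceleratorModule and its methods are hypothetical stand-ins for
# hardware behavior; the embodiments define no software API.

class AcceleratorModule:
    def __init__(self, first_memory: bytes = b""):
        self.first_memory = first_memory   # models the module's first memory
        self.active_bitstream = None       # models the FPGA's programmed logic

    def load_bitstream(self, bitstream: bytes):
        # models programming the FPGA's programmable logic
        self.active_bitstream = bitstream

    def write_first_memory(self, data: bytes):
        # models a transfer over the microprocessor bus into first memory
        self.first_memory = data

def method_one(module, bus_if_bitstream, default_bitstream):
    # load a microprocessor bus interface bitstream into the FPGA
    module.load_bitstream(bus_if_bitstream)
    # transfer the default configuration bitstream into first memory,
    # using the bus interface just instantiated
    module.write_first_memory(default_bitstream)
    # instantiate the default configuration, whose bus interface has
    # sufficient functionality to be recognized by the microprocessor
    module.load_bitstream(module.first_memory)

def method_two(module):
    # the default configuration bitstream already resides in first
    # memory; load it directly so the microprocessor can recognize it
    module.load_bitstream(module.first_memory)
```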
It is also possible to build high performance computing systems with multiple motherboards interconnected by high speed busses. In such a system, some of the motherboards may contain only co-processor modules while other motherboards contain only processor chips or a mixture of processor chips and co-processor modules. In such a multi-board system, there must be at least one processor chip in order to communicate with one or more co-processor modules.
FPGA 201 connects to hypertransport bus 210 for communication through the microprocessor socket. FPGA 201 also connects to SRAM 202 and PLD 203 via bus 214. PLD 203 additionally connects to flash memory 204 via bus 213 and to FPGA 201 via programming signals 212.
The physical size of module 200 is limited because of the need to fit into socket 102 without interfering with other components which may exist on motherboard 10. At the same time, it is desirable to be able to expand the functionality of module 200 to support various applications. Expanded functionality may include, for example, additional memory or additional hypertransport interfaces.
Once the default configuration pattern (bitstream) is loaded into FPGA 201, module 200 becomes visible over the hypertransport bus to a main processor 101 in the system. At 501, the main processor transfers a new configuration pattern over hypertransport bus 210 for writing to FPGA 201 of module 200. This new configuration pattern typically contains a user logic function 306 and may also contain new definitions for support functions 300. At 502, FPGA 201 of module 200 saves the new configuration pattern into either SRAM or DRAM using the memory interfaces 302 or 303. If full reconfiguration of FPGA 201 is planned, the configuration pattern must be saved into SRAM. DRAM cannot be used for full reconfiguration because the configuration data would be lost when DRAM interface 302 ceases to operate during the configuration process. SRAM may be controlled using PLD 203 instead of SRAM interface 303 in FPGA 201, so the configuration data is retained while FPGA 201 is reprogrammed. Operations 501 and 502 may be performed concurrently, since the amount of data required to configure FPGA 201 may be very large. At 503, main processor 101 uses the hypertransport bus to send FPGA 201 of module 200 the address of the configuration pattern in SRAM or DRAM, along with a command to reprogram itself. At 506, a decision is then made whether to perform full or partial reconfiguration.
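A host-side view of operations 501 through 503 might look like the following sketch. The device node, window offset, register offsets, and command encoding are assumptions made for illustration; none of them appear in the embodiments.

```python
# Illustrative host-side sketch of operations 501-503. The device node,
# offsets, and command value are assumptions, not part of the embodiments.
import mmap
import os
import struct

SRAM_WINDOW  = 0x10000   # hypothetical window onto module SRAM
REG_CFG_ADDR = 0x08      # hypothetical register: address of staged pattern
REG_COMMAND  = 0x10      # hypothetical register: command to the FPGA
CMD_RECONFIG = 0x1       # hypothetical command: "reprogram yourself"

def reconfigure(bitstream: bytes) -> None:
    fd = os.open("/dev/drc0", os.O_RDWR | os.O_SYNC)   # hypothetical node
    regs = mmap.mmap(fd, 0x100000)
    try:
        # 501/502: stream the new configuration pattern over the
        # hypertransport bus into module SRAM (full reconfiguration must
        # stage in SRAM, since DRAM interface 302 ceases to operate while
        # the FPGA is reprogrammed)
        regs[SRAM_WINDOW:SRAM_WINDOW + len(bitstream)] = bitstream
        # 503: send the address of the staged pattern and the command
        struct.pack_into("<Q", regs, REG_CFG_ADDR, SRAM_WINDOW)
        struct.pack_into("<I", regs, REG_COMMAND, CMD_RECONFIG)
    finally:
        regs.close()
        os.close(fd)
```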
During partial reconfiguration, support functions 300 remain active, and only enough data must be transferred over hypertransport bus 210 to configure user logic 306. This allows partial reconfiguration to be much faster than full reconfiguration, making it the preferred alternative in most situations. Data for partial reconfiguration may be saved in either DRAM or SRAM. When module 200 is used to accelerate computational algorithms, frequent reconfiguration is often necessary, and reconfiguration time becomes a limiting factor in determining the amount of acceleration that may be obtained. Partial reconfiguration at 505 involves FPGA 201 loading the reconfiguration data: an internal memory interface of FPGA 201 is used to read the bitstream and pass it to user logic 306. After loading is complete, the new logic functions specified by the new configuration become active and may be used.
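The partial-reconfiguration loading at 505 can likewise be sketched. On Xilinx parts of this era, an internal configuration access port (“ICAP”) plays the role of the internal memory interface described above; the write_config_word callable below is a hypothetical stand-in for it.

```python
# Illustrative sketch of the partial-reconfiguration loading at 505.
# write_config_word is a hypothetical stand-in for one write to an
# internal configuration port (e.g., ICAP on Xilinx devices).

WORD = 4   # configuration data is typically consumed as 32-bit words

def load_partial(staged: bytes, write_config_word) -> None:
    """Feed a staged partial bitstream, word by word, to the FPGA's
    internal configuration interface. Only the user-logic region is
    rewritten; support functions 300 remain active throughout."""
    for off in range(0, len(staged), WORD):
        write_config_word(staged[off:off + WORD])
```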
If full reconfiguration is desired at 504, the entire FPGA is reprogrammed: PLD 203 reads the configuration pattern from SRAM and loads it into FPGA 201 via programming signals 212, replacing both support functions 300 and user logic 306.
A new configuration pattern is developed using a set of software design tools. At 601, the user design for user logic 306 is created, typically as a hardware description language (“HDL”) description of the desired algorithm.
At 602, constraints are generated for the user design. These include both physical and timing constraints. Physical constraints are necessary to ensure that user logic 306 connects correctly and does not conflict with support functions 300. Timing constraints determine the operating speed of user logic 306 and prevent other potential timing problems such as race conditions.
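For a Xilinx tool flow of this era, such constraints would typically be captured in a user constraints file (“UCF”). The fragment below is purely illustrative: the net, instance, and region names and the clock period are hypothetical, and in practice would be dictated by the fixed floorplan of support functions 300.

```
# Hypothetical UCF fragment: a timing constraint on the user-logic clock
NET "user_clk" TNM_NET = "user_clk";
TIMESPEC "TS_user_clk" = PERIOD "user_clk" 5 ns;

# Hypothetical physical constraints confining user logic 306 to a region
# that does not overlap the fixed placement of support functions 300
INST "user_logic_inst" AREA_GROUP = "AG_USER";
AREA_GROUP "AG_USER" RANGE = SLICE_X0Y0:SLICE_X63Y127;
```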
At 603, user logic 306 is synthesized. Synthesis converts the design from an HDL description to a netlist of FPGA primitives. The Xilinx tool XST may be used.
At 604, the user logic 306 is combined with the pre-designed support functions 300. The support functions 300, as well as wrapper interface 305 associated therewith, have a pre-assigned fixed placement so they may be combined with arbitrary user logic without affecting operation of support functions 300. Sections of the support functions 300 are very sensitive to timing, and correct operation cannot be guaranteed without fixing their placement.
At 605, the design for instantiation in user logic 306 is placed and routed. Placement and routing are performed by the appropriate FPGA software tools, which are available from the FPGA vendor. The constraints generated at 602 guide place and route at 605, as well as synthesis at 603, to ensure that the desired speed and functionality are achieved.
At 606, a full or partial configuration pattern (or bitstream) for the FPGA is generated. This may be performed by a tool supplied by the FPGA vendor. The bitstream is then ready for download into co-processor FPGA 201.
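Operations 603 through 606 map onto the Xilinx ISE command-line flow contemporary with XST (ngdbuild, map, and par for implementation; bitgen for bitstream generation). The sketch below strings those tools together; the part number, file names, and options shown are illustrative assumptions only.

```python
# Illustrative driver for operations 603-606 using the Xilinx ISE
# command-line tools. Part number, file names, and options are
# assumptions for illustration only.
import subprocess

PART = "xc4vlx60-ff1148-10"   # hypothetical target device

def run(cmd):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# 603: synthesize user logic 306 from HDL to a netlist of FPGA primitives
run(["xst", "-ifn", "user_logic.xst"])   # .xst script lists the HDL sources

# 604: merge the user netlist with the pre-placed support functions 300,
# applying the constraints generated at 602
run(["ngdbuild", "-p", PART, "-uc", "module.ucf",
     "user_logic.ngc", "design.ngd"])

# 605: place and route under the physical and timing constraints
run(["map", "-p", PART, "-o", "design_map.ncd", "design.ngd", "design.pcf"])
run(["par", "-w", "design_map.ncd", "design_routed.ncd", "design.pcf"])

# 606: generate the configuration pattern (bitstream) for download
run(["bitgen", "-w", "design_routed.ncd", "design.bit", "design.pcf"])
```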
While the foregoing describes exemplary embodiment(s) in accordance with one or more embodiments, other and further embodiment(s) in accordance with the one or more embodiments may be devised without departing from the scope thereof, which is determined by the claim(s) that follow and equivalents thereof. Claim(s) listing steps do not imply any order of the steps. Trademarks are the property of their respective owners.
Claims
1. An accelerator module, comprising:
- a Field Programmable Gate Array (“FPGA”);
- a Programmable Logic Device (“PLD”) coupled to the FPGA and configured to control start-up configuration of the FPGA;
- a non-volatile memory coupled to the PLD and configured to store a start-up bitstream for the start-up configuration of the FPGA; and
- a mechanical and electrical interface for being plugged into a microprocessor socket of a motherboard for direct communication with at least one microprocessor capable of being coupled to the motherboard;
- the FPGA after completion of a start-up cycle being configured for direct communication with the at least one microprocessor via a microprocessor bus to which the microprocessor socket is coupled.
2. The accelerator module according to claim 1, wherein the microprocessor bus is a point-to-point bus.
3. The accelerator module according to claim 2, wherein the FPGA after completion of the start-up cycle is configured for direct communication with resources associated with the motherboard in addition to the at least one microprocessor, wherein the resources are directly accessible by the FPGA via the point-to-point bus, the point-to-point bus being a Hypertransport bus.
4. The accelerator module according to claim 3, wherein the FPGA after completion of the start-up cycle is further configured for direct communication via a dedicated bus with dynamic random access memory forming a portion of the resources associated with the motherboard.
5. The accelerator module according to claim 2, wherein the FPGA after completion of the start-up cycle is further configured for direct communication with resources associated with the motherboard in addition to the at least one microprocessor, wherein the resources include random access memory which is directly accessible by the FPGA via a dedicated memory bus.
6. The accelerator module according to claim 5, wherein the random access memory is Dynamic Random Access Memory (“DRAM”).
7. The accelerator module according to claim 1, wherein the FPGA after completion of the start-up cycle is configured for direct communication with system memory coupled to the motherboard which is associated with the microprocessor point-to-point bus to which the microprocessor socket is coupled.
8. The accelerator module according to claim 1, further comprising Static Random Access Memory (“SRAM”) coupled to the FPGA and configured for storing configuration information for configuring at least a user programmable logic portion of the FPGA.
9-32. (canceled)
33. An accelerator system, comprising:
- a first motherboard having accelerator modules;
- a second motherboard having at least one microprocessor;
- each of the accelerator modules including: a Field Programmable Gate Array (“FPGA”); a Programmable Logic Device (“PLD”) coupled to the FPGA and configured to control start-up configuration of the FPGA; a non-volatile memory coupled to the PLD and configured to store a start-up bitstream for the start-up configuration of the FPGA; and a mechanical and electrical interface configured for being plugged into a microprocessor socket of the first motherboard for direct communication as between the accelerator modules; the microprocessor socket being coupled to a microprocessor bus for the direct communication between the accelerator modules.
34. The accelerator system according to claim 33, wherein the microprocessor bus is a point-to-point bus.
35. The accelerator system according to claim 34, wherein the microprocessor bus is a Hypertransport bus.
Type: Application
Filed: Nov 23, 2010
Publication Date: May 26, 2011
Applicant: DRC Computer Corporation (Sunnyvale, CA)
Inventor: Steven Casselman (Santa Clara, CA)
Application Number: 12/952,959
International Classification: G06F 13/40 (20060101); G06F 12/00 (20060101); G06F 13/16 (20060101);