Methods and apparatus for efficient synchronous MIMD operations with iVLIW PE-to-PE communication
A SIMD machine employing a plurality of parallel processing elements (PEs) in which communication hazards are eliminated in an efficient manner. An indirect Very Long Instruction Word instruction memory (VIM) is employed along with execute and delimiter instructions. A masking mechanism may be employed to control which PEs have their VIMs loaded. Further, a receive model of operation is preferably employed. In one aspect, each PE operates to control a switch that selects from which PE it receives. The present invention addresses a better machine organization for execution of parallel algorithms that reduces hardware cost and complexity while maintaining the best characteristics of both SIMD and MIMD machines and minimizing communication latency. This invention brings a level of MIMD computational autonomy to SIMD indirect Very Long Instruction Word (iVLIW) processing elements while maintaining the single thread of control used in the SIMD machine organization. Consequently, the term Synchronous-MIMD (SMIMD) is used to describe the present approach.
The present application is a continuation of Ser. No. 09/187,539, filed on Nov. 6, 1998, now U.S. Pat. No. 6,151,668.
The present invention claims the benefit of U.S. Provisional Application Ser. No. 60/064,619 entitled “Methods and Apparatus for Efficient Synchronous MIMD VLIW Communication” and filed Nov. 7, 1997.
FIELD OF THE INVENTION

For any Single Instruction Multiple Data stream (SIMD) machine with a given number of parallel processing elements, there will exist algorithms which cannot make efficient use of the available parallel processing elements, or in other words, the available computing resources. Multiple Instruction Multiple Data stream (MIMD) class machines execute some of these algorithms with more efficiency but require additional hardware to support a separate instruction stream on each processor and lose performance due to communication latency with loosely coupled program implementations. The present invention addresses a better machine organization for execution of these algorithms that reduces hardware cost and complexity while maintaining the best characteristics of both SIMD and MIMD machines and minimizing communication latency. The present invention provides a level of MIMD computational autonomy to SIMD indirect Very Long Instruction Word (iVLIW) processing elements while maintaining the single thread of control used in the SIMD machine organization. Consequently, the term Synchronous-MIMD (SMIMD) is used to describe the invention.
BACKGROUND OF THE INVENTION

There are two primary parallel programming models, the SIMD and the MIMD models. In the SIMD model, there is a single program thread which controls multiple processing elements (PEs) in a synchronous lock-step mode. Each PE executes the same instruction but on different data. This is in contrast to the MIMD model, where multiple program threads of control exist and any inter-processor operations must contend with the latency that occurs when communicating between the multiple processors due to requirements to synchronize the independent program threads prior to communicating. The problem with SIMD is that not all algorithms can make efficient use of the available parallelism existing in the processor. The amount of parallelism inherent in different algorithms varies, leading to difficulties in efficiently implementing a wide variety of algorithms on SIMD machines. The problem with MIMD machines is the latency of communications between multiple processors, leading to difficulties in efficiently synchronizing processors to cooperate on the processing of an algorithm. Typically, MIMD machines also incur a greater cost of implementation as compared to SIMD machines, since each MIMD PE must have its own instruction sequencing mechanism, which can amount to a significant amount of hardware. MIMD machines also have an inherently greater complexity of programming control required to manage the independent parallel processing elements. Consequently, levels of programming complexity and communication latency occur in a variety of contexts when parallel processing elements are employed. It would be highly advantageous to efficiently address such problems as discussed in greater detail below.
SUMMARY OF THE INVENTION

The present invention is preferably used in conjunction with the ManArray architecture, various aspects of which are described in greater detail in U.S. patent application Ser. No. 08/885,310 filed Jun. 30, 1997, now U.S. Pat. No. 6,023,753, U.S. Ser. No. 08/949,122 filed Oct. 10, 1997, now U.S. Pat. No. 6,167,502, U.S. Ser. No. 09/169,255 filed Oct. 9, 1998, now U.S. Pat. No. 6,343,356, U.S. Ser. No. 09/169,256 filed Oct. 9, 1998, now U.S. Pat. No. 6,167,501, and U.S. Ser. No. 09/169,072 filed Oct. 9, 1998, now U.S. Pat. No. 6,219,776, Provisional Application Ser. No. 60/067,511 entitled “Method and Apparatus for Dynamically Modifying Instructions in a Very Long Instruction Word Processor” filed Dec. 4, 1997, Provisional Application Ser. No. 60/068,021 entitled “Methods and Apparatus for Scalable Instruction Set Architecture” filed Dec. 18, 1997, Provisional Application Ser. No. 60/071,248 entitled “Methods and Apparatus to Dynamically Expand the Instruction Pipeline of a Very Long Instruction Word Processor” filed Jan. 12, 1998, Provisional Application Ser. No. 60/072,915 entitled “Methods and Apparatus to Support Conditional Execution in a VLIW-Based Array Processor with Subword Execution” filed Jan. 28, 1998, Provisional Application Ser. No. 60/077,766 entitled “Register File Indexing Methods and Apparatus for Providing Indirect Control of Register in a VLIW Processor” filed Mar. 12, 1998, Provisional Application Ser. No. 60/092,130 entitled “Methods and Apparatus for Instruction Addressing in Indirect VLIW Processors” filed on Jul. 9, 1998, Provisional Application Ser. No. 60/103,712 entitled “Efficient Complex Multiplexing and Fast Fourier Transform (FFT) Implementation on the ManArray” filed on Oct. 9, 1998, and Provisional Application Ser. No. 60/106,867 entitled “Methods and Apparatus for Improved Motion Estimation for Video Encoding” filed on Nov. 3, 1998, respectively, all of which are assigned to the assignee of the present invention and incorporated herein in their entirety.
A ManArray processor suitable for use in conjunction with ManArray indirect Very Long Instruction Words (iVLIWs) in accordance with the present invention may be implemented as an array processor that has a Sequence Processor (SP) acting as an array controller for a scalable array of Processing Elements (PEs) to provide an indirect Very Long Instruction Word architecture. Indirect Very Long Instruction Words (iVLIWs) in accordance with the present invention may be composed in an iVLIW Instruction Memory (VIM) by the SIMD array controller, the Sequence Processor or SP. Preferably, VIM exists in each Processing Element or PE and contains a plurality of iVLIWs. After an iVLIW is composed in VIM, another SP instruction, designated XV for “execute iVLIW” in the preferred embodiment, concurrently executes the iVLIW at an identical VIM address in all PEs. If all PE VIMs contain the same instructions, SIMD operation occurs. A one-to-one mapping exists between the XV instruction and the single identical iVLIW that exists in each PE.
To increase the efficiency of certain algorithms running on the ManArray, it is possible to operate indirectly on VLIW instructions stored in a VLIW memory with the indirect execution initiated by an execute VLIW (XV) instruction and with different VLIW instructions stored in the multiple PEs at the same VLIW memory address. When the SP instruction causes this set of iVLIWs to execute concurrently across all PEs, Synchronous MIMD or SMIMD operation occurs. A one-to-many mapping exists between the XV instruction and the multiple different iVLIWs that exist in each PE. No specialized synchronization mechanism is necessary since the multiple different iVLIW executions are instigated synchronously by the single controlling point SP with the issuance of the XV instruction. Due to the use of a Receive Model to govern communication between PEs and a ManArray network, the communication latency characteristic common to MIMD operations is avoided as discussed further below. Additionally, since there is only one synchronous locus of execution, additional MIMD hardware for separate program flow in each PE is not required. In this way, the machine is organized to support SMIMD operations at a reduced hardware cost while minimizing communication latency.
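For illustration only, the following Python sketch models the one-to-many mapping just described: a single XV issued by the SP indexes every PE's VIM at the same address, and SIMD or SMIMD behavior falls out of whether the PEs hold identical or different iVLIWs there. The class and method names (PE, SP, xv, execute_vim_entry) are illustrative assumptions, not the hardware of the figures.

```python
# Minimal sketch (not the patented hardware): one XV issued by the single
# controlling SP indexes every PE's VIM at the same address. Because each
# VIM may hold a different iVLIW at that address, SIMD or SMIMD behavior
# is determined purely by what was loaded.

class PE:
    def __init__(self, pe_id, vim_size=32):
        self.pe_id = pe_id
        self.vim = [None] * vim_size      # one iVLIW (list of simplex ops) per entry

    def execute_vim_entry(self, address):
        ivliw = self.vim[address] or []
        # every simplex instruction in the iVLIW issues in the same cycle
        return [f"PE{self.pe_id}: {op}" for op in ivliw]

class SP:
    """Single-threaded array controller: one instruction stream for all PEs."""
    def __init__(self, pes):
        self.pes = pes

    def xv(self, address):
        # one XV -> concurrent execution of the iVLIW at 'address' in every PE
        return [pe.execute_vim_entry(address) for pe in self.pes]

pes = [PE(i) for i in range(4)]
pes[0].vim[5] = ["load", "add"]           # different contents at the same address
pes[1].vim[5] = ["load", "mpy"]           # -> Synchronous MIMD when XV(5) issues
sp = SP(pes)
print(sp.xv(5))
```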
A ManArray indirect VLIW or iVLIW is preferably loaded under program control, although the alternatives of direct memory access (DMA) loading of the iVLIWs and implementing a section of VIM address space with ROM containing fixed iVLIWs are not precluded. To maintain a certain level of dynamic program flexibility, a portion of VIM, if not all of the VIM, will typically be of the random access type of memory. To load the random access type of VIM, a delimiter instruction, LV for Load iVLIW, specifies that a certain number of instructions that follow the delimiter are to be loaded into the VIM rather than executed. For SIMD operation, each PE gets the same instructions for each VIM address. To set up for SMIMD operation it is necessary to load different instructions at the same VIM address in each PE.
In the presently preferred embodiment, this is achieved by a masking mechanism that functions such that the loading of VIM only occurs on PEs that are masked ON. PEs that are masked OFF do not execute the delimiter instruction and therefore do not load the specified set of instructions that follow the delimiter into the VIM. Alternatively, different instructions could be loaded in parallel from the PE local memory or the VIM could be the target of a DMA transfer. Another alternative for loading different instructions into the same VIM address is through the use of a second LV instruction, LV2, which has a second 32-bit control word that follows the LV instruction. The first and second control words rearrange the bits between them so that a PE label can be added. This second LV2 approach does not require the PEs to be masked and may provide some advantages in different system implementations. By selectively loading different instructions into the same VIM address on different PEs, the ManArray is set up for the SMIMD operation.
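A minimal sketch of the masked-load idea described above, assuming each PE's VIM is modeled as a dictionary and the mask is a simple per-PE boolean; the LV and LV2 encodings themselves are not reproduced.

```python
# Masked loading: PEs that are masked OFF ignore the LV delimiter and the
# instructions that follow it, so the same VIM address can end up holding
# different iVLIWs on different PEs (the setup for SMIMD operation).

def lv(vims, address, instructions, mask):
    """Load 'instructions' into vims[pe][address] only on PEs whose mask bit is ON."""
    for pe_id, enabled in enumerate(mask):
        if enabled:
            vims[pe_id][address] = list(instructions)

vims = [dict() for _ in range(4)]          # one VIM per PE in a 2x2 array
lv(vims, 29, ["li.p.w", "fmpy.pm.1fw"], mask=[0, 1, 0, 0])   # load PE1 only
lv(vims, 29, ["li.p.w", "fadd.pa.1fw"], mask=[0, 0, 1, 0])   # load PE2 only
print(vims[1][29], vims[2][29])            # same address, different contents
```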
One problem encountered when implementing SMIMD operation is in dealing with inter-processing element communication. In SIMD mode, all PEs in the array are executing the same instruction. Typically, these SIMD PE-to-PE communication instructions are thought of as using a Send Model. That is to say, the SIMD Send Model communication instructions indicate in which direction, or to which target PE, each PE should send its data. When a communication instruction such as SEND-WEST is encountered, each PE sends data to the PE topologically defined as being its western neighbor. The Send Model specifies both sender and receiver PEs. In the SEND-WEST example, each PE sends its data to its West PE and receives data from its East PE. In SIMD mode, this is not a problem.
In SMIMD mode of operation, using a Send Model, it is possible for multiple processing elements to all attempt to send data to the same neighbor. This attempt presents a hazardous situation because processing elements such as those in the ManArray may be defined as having only one receive port, capable of receiving from only one other processing element at a time. When each processing element is defined as having one receive port, such an attempted operation cannot complete successfully and results in a communication hazard.
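The hazard can be illustrated with a small, purely hypothetical routing check: under a Send Model each PE names a target, and two senders naming the same target cannot both be satisfied by a single receive port.

```python
# Illustrative only: detect the Send Model collision described above, under
# the assumption that each PE has exactly one receive port.

def send_model_route(targets):
    """targets[i] is the PE that PE i sends to; returns the port assignments,
       or reports the collision that makes the cycle a communication hazard."""
    ports = {}
    for sender, receiver in enumerate(targets):
        if receiver in ports:
            return f"hazard: PE{sender} and PE{ports[receiver]} both target PE{receiver}"
        ports[receiver] = sender
    return ports

print(send_model_route([1, 2, 3, 0]))   # a clean ring: no contention
print(send_model_route([2, 2, 3, 0]))   # PE0 and PE1 both send to PE2 -> hazard
```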
To avoid the communication hazard described above, a Receive Model is used for the communication between PEs. Using the Receive Model, each processing element controls a switch that selects from which processing element it receives. It is impossible for communication hazards to occur because it is impossible for any two processing elements to contend for the same receive port. By definition, each PE controls its own receive port and makes data available without target PE specification. For any meaningful communication to occur between processing elements using the Receive Model, the PEs must be programmed to cooperate in the receiving of the data that is made available. Using Synchronous MIMD (SMIMD), this is guaranteed to occur if the cooperating instructions all exist at the same iVLIW location. Without SMIMD, a complex mechanism would be necessary to synchronize communications and use the Receive Model.
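A sketch of the Receive Model under the assumption that each PE publishes one value and owns one receive-port selector; because every PE reads through its own selector, no two PEs can ever contend for the same receive port.

```python
# Receive Model sketch: each PE makes its data available without naming a
# target, and each PE's own switch selects the single source it reads from.

def receive_model_exchange(outputs, selectors):
    """outputs[i]  : the value PE i makes available on the network
       selectors[i]: the source PE that PE i has switched its receive port to
       Returns the value each PE receives in the same cycle."""
    return [outputs[selectors[i]] for i in range(len(outputs))]

outputs = ["d0", "d1", "d2", "d3"]
# e.g. every PE chooses to receive from its eastern neighbour in a ring;
# meaningful communication requires the PEs to be programmed to cooperate,
# which SMIMD guarantees when the cooperating instructions share a VIM address.
received = receive_model_exchange(outputs, selectors=[1, 2, 3, 0])
print(received)            # ['d1', 'd2', 'd3', 'd0']
```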
A more complete understanding of the present invention, as well as further features and advantages of the invention will be apparent from the following Detailed Description and the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS

FIGS. 4F1 and 4F2 illustrate slot storage for three Synchronous MIMD iVLIWs in a 2×2 ManArray configuration;
One set of presently preferred indirect Very Long Instruction Word (iVLIW) control instructions for use in conjunction with the present invention is described in detail below.
The SP 102 and each PE 104 in the ManArray architecture as adapted for use in accordance with the present invention contains a quantity of iVLIW memory (VIM) 106 as shown in FIG. 1. Each VIM 106 contains storage space to hold multiple VLIW instruction Addresses 103, and each Address is capable of storing up to eight simplex instructions. Presently preferred implementations allow each iVLIW instruction to contain up to five simplex instructions: one associated with each of the Store Unit (SU) 108, Load Unit (LU) 110, Arithmetic Logic Unit (ALU) 112, Multiply-Accumulate Unit (MAU) 114, and Data-Select Unit (DSU) 116. For example, an iVLIW instruction at VIM address “i” 105 contains the five instructions SLAMD.
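The VIM entry organization described above can be pictured with the following illustrative data structure; the field names and the example mnemonics are assumptions made for readability, not the hardware layout of the figures.

```python
# One VIM entry per address, with one slot per execution unit (S, L, A, M, D);
# any slot may be left empty or disabled.

from dataclasses import dataclass
from typing import Optional

@dataclass
class VimEntry:
    store: Optional[str] = None    # SU slot
    load: Optional[str] = None     # LU slot
    alu: Optional[str] = None      # ALU slot
    mau: Optional[str] = None      # MAU slot
    dsu: Optional[str] = None      # DSU slot

# an "SLAMD" iVLIW with all five slots populated (mnemonics are illustrative)
entry_i = VimEntry("si.p.w", "li.p.w", "fadd.pa.1fw", "fmpy.pm.1fw", "pexchg.pd.w")
vim = [VimEntry() for _ in range(32)]      # a PE's VIM: one entry per address
vim[5] = entry_i
print(vim[5].alu)
```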
iVLIW instructions can be loaded into an array of PE VIMs collectively, or, by using special instructions to mask a PE or PEs, each PE VIM can be loaded individually. The iVLIW instructions in VIM are accessed for execution through the Execute VLIW (XV) instruction, which, when executed as a single instruction, causes the simultaneous execution of the simplex instructions located at the VIM memory address. An XV instruction can cause the simultaneous execution of:
- 1. all of the simplex instructions located in an individual SP's or PE's VIM address, or
- 2. all instructions located in all PEs at the same relative VIM address, or
- 3. all instructions located at a subset or group of all PEs at the same relative VIM address.
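The three execution scopes listed above can be sketched as follows, assuming each VIM is a dictionary keyed by the relative VIM address; the encoding of the XV instruction itself is not shown.

```python
# Sketch of XV scoping: a single SP/PE VIM, all PEs, or a group of PEs, all
# indexed at the same relative VIM address in one step.

def xv(vims, address, targets=None):
    """targets=None        -> all PEs (case 2)
       targets={i}         -> a single SP or PE VIM (case 1)
       targets={i, j, ...} -> a subset or group of PEs (case 3)"""
    targets = range(len(vims)) if targets is None else targets
    return {pe: vims[pe].get(address, []) for pe in targets}

vims = [{7: ["su", "lu", "alu"]}, {7: ["lu", "mau"]}, {7: ["dsu"]}, {}]
print(xv(vims, 7))                  # all PEs at relative address 7
print(xv(vims, 7, targets={0, 2}))  # a group of PEs
```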
Only two control instructions are necessary to load/modify iVLIW memories, and to execute iVLIW instructions. They are:
- 1. Load/Modify VLIW Memory Address (LV) illustrated in FIG. 4A, and
- 2. Execute VLIW (XV) illustrated in FIG. 4B.
The LV instruction 400 shown in
Any combination of individual instruction slots may be disabled via the disable slot parameter ‘D={SLAMD}’, where S=Store Unit (SU), L=Load Unit (LU), A=Arithmetic Logic Unit (ALU), M=Multiply-Accumulate Unit (MAU) and D=Data Select Unit (DSU). A blank ‘D=’ parameter does not disable any slots. Specified slots are disabled prior to any instructions being loaded.
The number of instructions to load is specified utilizing an InstrCnt parameter. For the present implementation, valid values are 0-5. The next InstrCnt instructions following LV are loaded into the specified VIM. The Unit Affecting Flags (UAF) parameter ‘F=[AMD]’ selects which arithmetic instruction slot (A=ALU, M=MAU, D=DSU) is allowed to set condition flags for the specified VIM when it is executed. A blank ‘F=’ selects the ALU instruction slot. During processing of the LV instruction, no arithmetic flags are affected and the number of cycles is one plus the number of instructions loaded.
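A hedged sketch of the LV parameters just described (disable-slot set, an instruction count of 0-5, UAF selection, and the one-plus-count cycle cost); the field layout is an assumption of this sketch, not the instruction format of the figures.

```python
# Illustrative descriptor for an LV operation; the real 32-bit encoding and
# the Vb base-register mechanism are not reproduced here.

from dataclasses import dataclass, field

@dataclass
class LoadVLIW:
    vim_offset: int                                    # added to the Vb base register
    instr_count: int                                   # 0..5 instructions follow the LV
    disable_slots: set = field(default_factory=set)    # subset of {"S","L","A","M","D"}
    uaf: str = "A"                                      # slot allowed to set flags; ALU by default

    def cycles(self):
        # processing the LV takes one cycle plus one per loaded instruction
        return 1 + self.instr_count

lv = LoadVLIW(vim_offset=29, instr_count=3, disable_slots={"S"}, uaf="M")
print(lv.cycles())                                      # -> 4
```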
The XV instruction 425 shown in
Any combination of individual instruction slots may be executed via the execute slot parameter ‘E={SLAMD}’, where S=Store Unit (SU), L=Load Unit (LU), A=Arithmetic Logic Unit (ALU), M=Multiply-Accumulate Unit (MAU), D=Data Select Unit (DSU). A blank ‘E=’ parameter does not execute any slots. The Unit Affecting Flags (UAF) parameter ‘F={AMDN}’ overrides the UAF specified for the VLIW when it was loaded via the LV instruction. The override selects which arithmetic instruction slot (A=ALU, M=MAU, D=DSU) or none (N=NONE) is allowed to set condition flags for this execution of the VLIW. The override does not affect the UAF setting specified by the LV instruction. A blank ‘F=’ selects the UAF specified when the VLIW was loaded.
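The slot-enable and UAF-override behavior just described can be sketched as below, assuming an iVLIW is represented as a mapping from slot letter to simplex instruction; note that the stored UAF is not modified by the per-execution override.

```python
# Illustrative XV semantics: issue only the enabled slots and decide, for this
# execution only, which slot (or none) is allowed to set condition flags.

def execute_xv(ivliw, stored_uaf, enable="SLAMD", uaf_override=None):
    issued = {s: op for s, op in ivliw.items() if s in enable}
    uaf = stored_uaf if uaf_override is None else uaf_override   # stored UAF untouched
    flags_from = None if uaf == "N" else issued.get(uaf)         # F=N affects no flags
    return issued, flags_from

ivliw = {"S": "si.p.w", "L": "li.p.w", "A": "fadd.pa.1fw", "M": "fmpy.pm.1fw"}
print(execute_xv(ivliw, stored_uaf="A", enable="LAM", uaf_override="M"))
```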
Condition flags are set by the individual simplex instruction in the slot specified by the setting of the ‘F=’ parameter from the original LV instruction or as overridden by an ‘F=[AMD]’ parameter in the XV instruction. Condition flags are not affected when ‘F=N’. Operation occurs in one cycle. Pipeline considerations must be taken into account based upon the individual simplex instructions in each of the slots that are executed. Descriptions of individual fields in these iVLIW instructions are shown in
The ADD instruction 450 shown in
Individual, Group, and “Synchronous MIMD” PE iVLIW Operations
The LV and XV instructions may be used to load, modify, disable, or execute iVLIW instructions in individual PEs or PE groups defined by the programmer. To do this, individual PEs are enabled or disabled by an instruction which modifies a Control Register located in each PE. To load and operate an individual PE or a group of PEs, the control registers are modified to enable the desired PE or PEs and to disable all others. Normal iVLIW instructions will then operate only on PEs that are enabled.
Referring to
Upon receipt of an XV instruction in IR1 510, the VIM address 511 is calculated by use of the specified Vb register 502 added by adder 504 to the offset value included in the XV instruction via path 503. The resulting VIM Address 507 is passed through multiplexer 508 to address the VIM. The iVLIW at the specified address is read out of the VIM 516 and passes through the multiplexers 530, 532, 534, 536, and 538, to the IR2 registers 514. As an alternative to minimize the read VIM access timing critical path, the output of VIM 516 can be latched into a register whose output is passed through a multiplexer prior to the decode stage logic.
For execution of the XV instruction, the IR2MUX1 control signal 533 in conjunction with the pre-decode XVc1 control signal 517 causes all the IR2 multiplexers, 530, 532, 534, 536, and 538, to select the VIM output paths, 541, 543, 545, 547, and 549. At this point, the five individual decode and execution stages of the pipeline, 540, 542, 544, 546, and 548, are completed in synchrony, providing the iVLIW parallel execution performance. To allow a single 32-bit instruction to execute by itself in the PE or SP, the bypass VIM path 535 is shown. For example, when a simplex ADD instruction is received into IR1 510 for parallel array execution, the pre-decode function 512 generates the IR2MUX1 533 control signal, which in conjunction with the instruction type pre-decode signal, 523 in the case of an ADD, and lack of an XV 517 or LV 515 active control signal, causes the ALU multiplexer 534 to select the bypass path 535.
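The address generation described above, the selected base register Vb plus the offset carried in the XV or LV instruction, reduces to a small calculation; the register names, VIM size, and range check below are assumptions of this sketch.

```python
# Illustrative VIM address generation: base register plus instruction offset
# (the adder 504 in the text's terms), bounded by the VIM size.

def vim_address(v_regs, vb_select, offset, vim_size=32):
    addr = v_regs[vb_select] + offset
    if not 0 <= addr < vim_size:
        raise ValueError("VIM address out of range")
    return addr

v_regs = {"V0": 0, "V1": 16}                  # two base registers, illustrative
print(vim_address(v_regs, "V1", offset=5))    # -> 21
```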
Since a ManArray can be configured with a varying number of PEs,
It is noted that in the previous discussion, covered by
To allow a single 32-bit instruction to execute by itself in the iVLIW PE or iVLIW SP, the bypass VIM path 835 is shown in FIG. 8A. For example, when a simplex ADD instruction is received into IR1 810 for parallel array execution, the pre-decode function 812 generates the IR2MUX2 833 control signal, which in conjunction with the instruction type pre-decode signal, 823 in the case of an ADD, and lack of an XV 817 or LV 815 active control signal, causes the ALU multiplexer 834 to select the bypass path 835. Since, as described herein, the bypass operation is to occur during a full stage of the pipeline, it is possible to replace the group bits and the unit field bits in the bypassed instructions as they enter the IR2 latch stage. This is indicated in
It is noted that alternative formats for VIM iVLIW storage are possible and may be preferable depending upon technology and design considerations. For example,
In a processor consisting of an SP controller 102 as in
In attempting to extend the Send Model into the SMIMD mode, other problems may occur. One such problem is that in SMIMD mode it is possible for multiple processing elements to all attempt to send data to a single PE, since each PE can receive a different inter-PE communication instruction. The two attributes of the SIMD Send Model break down immediately, namely having a common inter-PE instruction and specifying both source and target, or, in other words, both sender and receiver. It is a communication hazard to have more than one PE target the same PE in a SIMD model with single cycle communication. This communication hazard is shown in
This arrangement is shown for a 2×2 array processor 1100 in
For example, VIM entry number 29 in PE2 495 is loaded with the four instructions li.p.w R3, A1+, A7, fmpy.pm.1fw R5, R2, R31, fadd.pa.1fw R9, R7, R5, and pexchg.pd.w R8, R0, 2×2_PE3. These instructions are those found in the next to last row of FIG. 4F. That same VIM entry (29) contains different instructions in PEs 0, 1, and 3, as can be seen from the rows corresponding to these PEs at VIM entry 29, for PE0 491, PE1 493, and PE3 497.
The following example 1-1 shows the sequence of instructions which load the PE VIM memories as defined in FIG. 4F. Note that PE Masking is used in order to load different instructions into different PE VIMs at the same address.
EXAMPLE 1-1 Loading Synchronous MIMD iVLIWs into PE VIMs
The following example 1-2 shows the sequence of instructions which execute the PE VIM entries as loaded by the example 1-1 code in FIG. 4F. Note that no PE Masking is necessary. The specified VIM entry is executed in each of the PEs, PE0, PE1, PE2, and PE3.
Description of Exemplary Algorithms Being Performed
The iVLIWs defined in
In order to avoid redundant calculations or idle PEs, the iVLIWs operate on three variable vectors at a time. Due to the distribution of the vector components over the PEs, it is not feasible to use PE0 to compute a 4th vector dot product. PE0 is advantageously employed instead to take care of some setup for a future algorithm stage. This can be seen in the iVLIW load slots, as vector 1 is loaded in iVLIW 27 (component-wise across the PEs, as described above), vector 2 is loaded in iVLIW 28, and vector 3 is loaded in iVLIW 29 (li.p.w R*, A1+, A7). PE1 computes the x component of the dot product for each of the three vectors. PE2 computes the y component, and PE3 computes the z component (fmpy.pm.1fw R*, R*, R31). At this point, communication among the PEs must occur in order to get the y and z components of the vector 1 dot product to PE1, the x and z components of the vector 2 dot product to PE2, and the x and y components of the vector 3 dot product to PE3. This communication occurs in the DSU via the pexchg instruction. In this way, each PE is summing (fadd.pa.1fw R9, R7, R* and fadd.pa.1fw R10, R9, R8) the components of a unique dot product result simultaneously. These results are then stored (si.p.w R10, +A2, A6) into PE memories. Note that each PE will compute and store every third result. The final set of results is then accessed in round-robin fashion from PEs 1, 2, and 3.
Additionally, each PE performs a comparison (fcmpLE.pa.1fw R10, R0) of its dot product result with zero (held in PE register R0), and conditionally stores a zero (t.sii.p.w R0, A2+, 0) in place of the computed dot product if that dot product was negative. In other words, it is determined whether the comparison R10 less than R0 is true. This implementation of a dot product with removal of negative values is used, for example, in lighting calculations for 3D graphics applications.
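A behavioral sketch, with made-up data, of the algorithm described above: PE1, PE2, and PE3 each multiply their component of all three vectors by a per-component factor (assumed here to stand in for the value held in R31), the partial products are exchanged receive-model style, each PE sums one complete dot product, and negative results are clamped to zero as in the lighting example. The data values and the single-factor assumption are illustrative only.

```python
# PE1 holds x components, PE2 y components, PE3 z components of three vectors.
vectors = [(1.0, 2.0, 3.0), (-4.0, -5.0, -6.0), (7.0, 8.0, 9.0)]
factor = (0.5, -1.0, 2.0)                 # per-component multiplier (stand-in for R31)

# each PE multiplies its own component of every vector (the fmpy step)
partial = {pe: [vectors[v][pe - 1] * factor[pe - 1] for v in range(3)]
           for pe in (1, 2, 3)}           # pe 1 -> x, pe 2 -> y, pe 3 -> z

# pexchg-style exchange, then each PE sums one complete dot product (the fadd steps):
# PE1 finishes vector 1, PE2 vector 2, PE3 vector 3
results = {pe: sum(partial[src][pe - 1] for src in (1, 2, 3)) for pe in (1, 2, 3)}

# fcmpLE / conditional store: replace negative dot products with zero
clamped = {pe: max(0.0, val) for pe, val in results.items()}
print(results, clamped)
```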
While the present invention has been disclosed in the context of presently preferred methods and apparatus for carrying out the invention, various alternative implementations and variations will be readily apparent to those of ordinary skill in the art. By way of example, the present invention does not preclude the ability to load an instruction into VIM and also execute the instruction. This capability was deemed an unnecessary complication for the presently preferred programming model among other considerations such as instruction formats and hardware complexity. Consequently, the Load iVLIW delimiter approach was chosen.
Claims
1. An indirect very long instruction word (VLIW) processing system comprising:
- a first processing element (PE) having a VLIW instruction memory (VIM) for storing function instructions in slots within a VIM memory location;
- a first register for storing a control instruction and a function instruction, the function instruction having a plurality of definition bits defining both an instruction type and an execution unit type of the function instruction;
- a predecoder for decoding the plurality of definition bits; and
- a load mechanism for loading the function instruction in one of said slots in VIM based upon both said decoding and a control instruction defining a load operation.
2. The system of claim 1 wherein the predecoder is for decoding an execute VLIW control instruction containing an address offset and a pointer to a base address register for indirectly executing VLIWs.
3. The system of claim 1 wherein the predecoder is for decoding said control instruction defining a load operation containing an address offset and pointer to a base address register for loading the function instruction.
4. The system of claim 1 wherein the definition bits are removed from the function instruction before the function instruction is stored in VIM.
5. The system of claim 1 wherein the definition bits are removed from the function instruction and at least one simplex control bit is added to the function instruction before the function instruction is stored in VIM.
6. The system of claim 5 wherein the at least one simplex control bit includes an enable/disable bit.
7. The system of claim 5 wherein the at least one simplex control bit includes an operation code extension bit.
8. The system of claim 5 wherein the at least one simplex control bit includes a register file extension bit.
9. The system of claim 5 wherein the at least one simplex control bit includes a conditional execution extension bit.
10. The system of claim 9 further comprising a plurality of execution units, and first and second banks of registers, and the register file extension bit is utilized to determine whether the plurality of execution units read from or write to the first bank of registers or the second bank of registers.
11. The system of claim 1 further comprising a second register for storing the function instruction; a bypass path for connecting an output of the first register to an input of the second register; and a selection mechanism for selecting a bypass operation in which the function instruction is passed from the first register to the second register without being loaded into VIM.
12. The system of claim 1 further comprising at least one additional PE connected through a network interface connection to the first PE, and each PE has an associated cluster switch connected to a receive port such that each PE controls a portion of the cluster switch.
13. The system of claim 12 wherein the associated cluster switch comprises at least one multiplexer per PE interconnected to provide independent paths between the PEs in a cluster of PEs.
14. The system of claim 1 further comprising a sequence processor (SP) connected to the first PE and providing both said control instruction and said function instruction to the first PE, the control instruction containing an address offset and a pointer to a base address register for loading the function instruction.
15. The system of claim 14 further comprising at least one additional PE connected to the SP and said control instruction is provided synchronously to both the first PE and said at least one additional PE.
16. The system of claim 15 wherein a plurality of PEs are connected to the SP and the plurality of PEs is organized into first and second groups of one or more PEs.
17. The system of claim 16 wherein the first group of PEs indirectly operate on a VLIW instruction ata first VIM address during a cycle of operation and the second group of PEs indirectly operate on a different VLIW instruction at the same first VIM address during the cycle of operation.
18. The system of claim 16 wherein the plurality of PEs operate following a receive model of communication control in which each PE has a receive port and controls whether data is received at the receive port.
19. The system of claim 18 wherein each PE has an output port for making data available to the cluster switch.
20. The system of claim 18 wherein each PE has an input multiplexer connected to the receive port and controls communication by controlling said input multiplexer.
21. The system of claim 18 wherein the plurality of PEs are programmed to cooperate by storing a cooperating instruction so that one PE has a receive instruction specifying the path that the other PE is making data available on in the same location in VIM for each of said plurality of PEs.
22. The system of claim 16 further comprising a masking mechanism for masking individual PEs ON or OFF.
23. The system of claim 22 in which VIMs for PEs masked ON are loaded and VIMs for PEs masked OFF are not loaded during a load VLIW operation.
24. The system of claim 16 wherein different PEs execute different VLIWs at the same VIM address during the same cycle.
25. The system of claim 1 wherein the VIM comprises slots for storing function instructions of the following type: store unit instructions; load unit instructions; arithmetic logic unit instructions; multiply-accumulate unit instructions; or data select unit instructions.
26. A processing system comprising:
- a plurality of processing elements (PEs) communicatively connected to each other, each of said PEs including a very long instruction word (VLIW) memory (VIM) for storing VLIWs to be executed by each PE; and
- a sequence processor (SP) operable for concurrently initiating indirect execution of a VLIW stored at a first address in the VIM of each PE, in response to the SP issuing an indirect instruction to initiate concurrent execution by each PE, each PE of said plurality of PEs concurrently executing the VLIW stored at the first address in the VIM associated with each PE, and
- at least one of said plurality of PEs concurrently executing a VLIW at the first address of its VIM which defines a different operation from a VLIW concurrently executed by another PE of said plurality of PEs.
27. The processing system of claim 26 wherein the SP is further operable for concurrently initiating the execution of instructions stored in a VLIW at a second address in the VIM of each PE wherein each PE concurrently executes an instruction stored in a VLIW at the second address in the instruction memory associated with each PE, and
- the plurality of PEs execute instructions which define the same operation.
28. The processing system of claim 27 wherein the plurality of PEs include a first PE and a second PE and the SP is further operable for:
- concurrently initiating the execution of instructions stored in a VLIW at a third address in the VIMs such that the first PE executes a first instruction stored in a VLIW at the third address in the VIM associated with the first PE, and the second PE executes a second instruction stored in a VLIW at the third address in the VIM associated with the second PE.
29. The processing system of claim 28 wherein:
- the first instruction and the second instruction define different operations.
30. The processing system of claim 28 wherein:
- the first instruction and the second instruction define the same operation.
31. The processing system of claim 26 wherein the SP is further operable for executing an instruction stored in a VLIW at the first address in the VIM of one of said plurality of PEs.
32. The processing system of claim 26 wherein each PE includes a base address register, and wherein the first address in each PE is determined utilizing the base address register and an offset value contained in an indirect instruction issued by the SP.
33. The processing system of claim 26 wherein the instruction to be executed by the PEs comprises at least one very long instruction word (VLIW).
34. The processing system of claim 26 wherein the indirect instruction to initiate concurrent execution by each PE is an execute VLIW instruction.
35. The processing system of claim 34 wherein the execute VLIW instruction is operable to enable each of at least two instructions comprising a VLIW for execution.
36. The processing system of claim 35 wherein:
- each PE includes a base address register; and
- each PE determines the first address utilizing the base address register associated with each PE and an offset value contained in the execute VLIW instruction.
37. The processing system of claim 26 wherein:
- each PE is operable to receive data from other PEs; and
- each PE is operable to control from which PE data is received.
38. A processing system comprising:
- a first processing element (PE) including a first instruction memory for storing a first very long instruction word (VLIW) to be executed by said first PE; and
- a second processing element (PE) including a second instruction memory for storing a second VLIW to be executed by said second PE, said second VLIW and said first VLIW defining different operations;
- wherein the first VLIW and the second VLIW are both stored at the same address location in each memory;
- wherein the first PE and the second PE are operable for simultaneously executing the first VLIW and the second VLIW, respectively, in response to each PE receiving an execute very long instruction word (VLIW) instruction.
39. The processing system of claim 38 further comprising a sequencing processor (SP) which initiates the concurrent execution of the first instruction and the second instruction by issuing the VLIW instruction.
40. The processing system of claim 38 wherein
- each PE includes a base address register; and
- each PE determines the first address utilizing both the base address register associated with each PE and an offset value contained in the execute VLIW instruction.
41. The processing system of claim 38 wherein:
- the first and second instructions comprise very long instruction word (VLIW) instructions; and
- each VLIW instruction comprises a plurality of simplex instructions.
42. The processing system of claim 41 wherein:
- each PE comprises a plurality of execution units; and
- each simplex instruction is adapted for being executed by at least one of the execution units.
43. The processing system of claim 38 wherein each PE further comprises:
- an instruction register for storing the execute VLIW instruction; and
- a predecoder for decoding if the instruction stored in the instruction register is an execute VLIW instruction.
44. The processing system of claim 43 wherein the predecoder of the first PE generates a first signal which is used to initiate the load of the first instruction into the first PE, and wherein the predecoder of the second PE generates a second signal which is used to initiate the load of the second instruction into the second PE.
45. A processing method for a processing system comprising a first processing element (PE) including a first very long instruction word memory (VIM), the first PE communicatively connected to a second PE including a second VIM, the method comprising:
- loading a first function instruction in the first VIM at a first address;
- loading a second function instruction in the second VIM at the first address;
- receiving an execute VLIW instruction; and
- concurrently executing the first function instruction by the first PE and the second function instruction by the second PE, in response to the received execute VLIW instruction;
- wherein the first function instruction stored in the first VIM at the first address and the second function instruction stored in the second VIM at the first address define different operations.
46. The method of claim 45 wherein the first PE includes a base address register and the method further comprising, before the step of loading the first function instruction:
- receiving a load VLIW instruction which contains an address offset;
- predecoding the load VLIW instruction; and
- determining the first address utilizing the address offset and the base address register.
47. The method of claim 45 wherein the first address of the first VIM includes a plurality of slots and wherein the step of loading the first function instruction further comprises:
- receiving the first function instruction; and
- predecoding the first function instruction to determine into which slot the first instruction is to be loaded.
48. The method of claim 47 wherein the step of predecoding the first function instruction further comprises:
- determining if any of said plurality of slots are to be disabled; and
- if any of said plurality of slots are to be disabled, loading a disable bit in a storage bit for each slot which is to be disabled.
49. The method of claim 47 wherein the first function instruction includes at least one group bit defining an instruction type and at least one unit field bit defining an execution unit type, and the step of predecoding the first function instruction utilizes both the instruction type and the execution unit type to determine into which slot the first function instruction should be loaded.
50. The method of claim 49 further comprising:
- removing the at least one group bit and the at least one unit field bit from the first function instruction before the first function instruction is loaded into the first VIM; and
- adding at least one replacement bit to the first function instruction.
51. The method of claim 45 wherein the first PE includes a base address register, wherein the execute VLIW instruction includes an address offset, and wherein the step of receiving the execute VLIW instruction further comprises:
- predecoding the execute VLIW instruction; and
- determining the first address utilizing the address offset and the base address register.
52. The processing method of claim 45 wherein the step of loading a first function instruction in the first VIM at a first address further comprises:
- masking the first PE to be enabled; and
- masking the second PE to be disabled.
53. The processing method of claim 45 wherein the step of loading a second function instruction in the second VIM at the first address further comprises:
- masking the first PE to be disabled; and
- masking the second PE to be enabled.
- 4,979,096 | December 18, 1990 | Ueda
- 5,680,597 | October 21, 1997 | Kumar et al.
- 5,930,508 | July 27, 1999 | Faraboschi
- 5,951,674 | September 14, 1999 | Moreno
- 5,963,745 | October 5, 1999 | Collins et al.
- 5,968,160 | October 19, 1999 | Saito
- 6,094,715 | July 25, 2000 | Wilkinson
- 6,122,722 | September 19, 2000 | Slavenburg
- Pechanek, G.G., et al., “M.F.A.S.T.: A Single Chip Highly Parallel Image Processing Architecture,” IEEE, vol. 3, pp. 69-72 (Oct. 23, 1995).
- Supplementary European Search Report for European Patent Application No. 98957630.1 dated Jun. 30, 2005.
Type: Grant
Filed: Jun 21, 2004
Date of Patent: Sep 14, 2010
Assignee: Altera Corp. (San Jose, CA)
Inventors: Gerald George Pechanek (Cary, NC), Thomas L. Drabenstott (Cary, NC), Juan Guillermo Revilla (Austin, TX), David Strube (Raleigh, NC), Grayson Morris (Eindhoven)
Primary Examiner: Eric Coleman
Attorney: Priest & Goldstein, PLLC
Application Number: 10/872,995
International Classification: G06F 15/16 (20060101);