FAULTY CORE RECOVERY MECHANISMS FOR A THREE-DIMENSIONAL NETWORK ON A PROCESSOR ARRAY
Embodiments of the invention relate to faulty recovery mechanisms for a three-dimensional (3-D) network on a processor array. One embodiment comprises a multidimensional switch network for a processor array. The switch network comprises multiple switches for routing packets between multiple core circuits of the processor array. The switches are organized into multiple planes. The switch network further comprises a redundant plane including multiple redundant switches. Multiple data paths interconnect the switches. The redundant plane is used to facilitate full operation of the processor array in the event of one or more component failures.
This invention was made with Government support under HR0011-09-C-0002 awarded by Defense Advanced Research Projects Agency (DARPA). The Government has certain rights in this invention.
BACKGROUND
Embodiments of the invention relate to redundant routing systems, and in particular, faulty recovery mechanisms for a three-dimensional (3-D) network on a processor array.
A processor array contains and manages multiple processing elements. There are different types of processing elements, such as microprocessors, microcontrollers, digital signal processors, graphics processors, reconfigurable processors, fixed function units, hardware accelerators, neurosynaptic neural core circuits, etc. A processor array may include different types of processing elements. The processing elements may be arranged in a one-dimensional array, a two-dimensional array, a three-dimensional array, or a ring or torus topology. The processing elements are interconnected by a routing system including buses and switches. Packets are communicated between processing elements using the routing system.
BRIEF SUMMARY
Embodiments of the invention relate to faulty recovery mechanisms for a three-dimensional (3-D) network on a processor array. One embodiment comprises a multidimensional switch network for a processor array. The switch network comprises multiple switches for routing packets between multiple core circuits of the processor array. The switches are organized into multiple planes. The switch network further comprises a redundant plane including multiple redundant switches. Multiple data paths interconnect the switches. The redundant plane is used to facilitate full operation of the processor array in the event of one or more component failures.
Another embodiment comprises routing packets between multiple core circuits of a processor array via multiple switches. The switches are organized into multiple planes. The switches are interconnected via multiple data paths. The data paths include at least one redundant data path for bypassing at least one component failure of the processor array.
These and other features, aspects and advantages of the present invention will become understood with reference to the following description, appended claims and accompanying figures.
DETAILED DESCRIPTION
Embodiments of the invention relate to faulty recovery mechanisms for a three-dimensional (3-D) network on a processor array. One embodiment comprises a multidimensional switch network for a processor array. The switch network comprises multiple switches for routing packets between multiple core circuits of the processor array. The switches are organized into multiple planes. The switch network further comprises a redundant plane including multiple redundant switches. Multiple data paths interconnect the switches. The redundant plane is used to facilitate full operation of the processor array in the event of one or more component failures.
Another embodiment comprises routing packets between multiple core circuits of a processor array via multiple switches. The switches are organized into multiple planes. The switches are interconnected via multiple data paths. The data paths include at least one redundant data path for bypassing at least one component failure of the processor array.
The core circuits 10 may be organized into a one-dimensional (1-D) array, a two-dimensional (2-D) array, a three-dimensional (3-D) array, or a ring or torus topology. In one embodiment, the core circuits 10 are arranged into a two-dimensional array including multiple rows 40 and multiple columns 45 (
The array 50 further comprises a routing system 15 for routing packets between the core circuits 10. The routing system 15 includes multiple switches (i.e., routers) 20 and multiple data paths (i.e., buses) 30. Each switch 20 corresponds to one or more core circuits 10.
For example, as shown in
Each switch 20 is interconnected with a corresponding core circuit 10 via at least one data path 30. Each switch 20 is further interconnected with at least one adjacent neighboring switch 20 via at least one data path 30. For example, as shown in
Each core circuit 10 utilizes a corresponding switch 20 to pass along packets including information in the eastbound, westbound, northbound, or southbound direction. For example, a packet generated by core circuit C00 and targeting core circuit C33 may traverse switches S00, S01, S02, and S03 in the eastbound direction, and switches S13, S23, and S33 in the southbound direction to reach core circuit C33.
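As an illustration of this dimension-ordered (east/west, then north/south) traversal, the following Python sketch computes the sequence of switches a packet might visit; the function name and (row, column) coordinate convention are assumptions made for the example, not part of the embodiments.

```python
def xy_route(src, dst):
    """Illustrative dimension-order route: travel east/west along the source row
    first, then north/south along the destination column. Coordinates are
    (row, col) and switch labels follow the S<row><col> convention used above.
    This is a sketch of one plausible scheme, not the embodiments' router logic."""
    (sr, sc), (dr, dc) = src, dst
    path = [f"S{sr}{sc}"]                       # start at the source core's switch
    col_step = 1 if dc >= sc else -1
    for c in range(sc + col_step, dc + col_step, col_step):
        path.append(f"S{sr}{c}")                # eastbound/westbound hops
    row_step = 1 if dr >= sr else -1
    for r in range(sr + row_step, dr + row_step, row_step):
        path.append(f"S{r}{dc}")                # southbound/northbound hops
    return path

# Example from the text: a packet from core circuit C00 targeting core circuit C33.
print(xy_route((0, 0), (3, 3)))   # ['S00', 'S01', 'S02', 'S03', 'S13', 'S23', 'S33']
```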
Relative to a switch 20, each data path 30 is either an incoming router channel 30F or an outgoing router channel 30B. The switch 20 receives packets from a neighboring component via an incoming router channel 30F. The switch 20 sends packets to a neighboring component via an outgoing router channel 30B. Each incoming router channel 30F has a reciprocal outgoing router channel 30B. An incoming router channel 30F may have a buffer 30Q for maintaining incoming packets. In one embodiment, the incoming packets are maintained in a buffer 30Q in a First In, First Out (FIFO) fashion.
In one embodiment, the switch 20 exchanges packets with neighboring components via multiple sets of router channels, wherein each set of router channels has at least one incoming router channel 30F and at least one reciprocal outgoing router channel 30B. A first set 25L of router channels (“Local router channels”) interconnects the switch 20 with a corresponding core circuit 10. The switch 20 receives packets generated by the corresponding core circuit 10 via an incoming router channel 30F of the set 25L, and sends packets targeting the corresponding core circuit 10 via an outgoing router channel 30B of the set 25L.
A second set 25N of router channels (“North router channels”) interconnects the switch 20 with an adjacent neighboring switch 20 to the north of the switch 20 (“north neighboring switch”). The switch 20 receives packets from the north neighboring switch 20 via an incoming router channel 30F of the set 25N, and sends packets to the north neighboring switch 20 via an outgoing router channel 30B of the set 25N.
A third set 25S of router channels (“South router channels”) interconnects the switch 20 with an adjacent neighboring switch 20 to the south of the switch 20 (“south neighboring switch”). The switch 20 receives packets from the south neighboring switch 20 via an incoming router channel 30F of the set 25S, and sends packets to the south neighboring switch 20 via an outgoing router channel 30B of the set 25S.
A fourth set 25E of router channels (“East router channels”) interconnects the switch 20 with an adjacent neighboring switch 20 to the east of the switch 20 (“east neighboring switch”). The switch 20 receives packets from the east neighboring switch 20 via an incoming router channel 30F of the set 25E, and sends packets to the east neighboring switch 20 via an outgoing router channel 30B of the set 25E.
A fifth set 25W of router channels (“West router channels”) interconnects the switch 20 with an adjacent neighboring switch 20 to the west of the switch 20 (“west neighboring switch”). The switch 20 receives packets from the west neighboring switch 20 via an incoming router channel 30F of the set 25W, and sends packets to the west neighboring switch 20 via an outgoing router channel 30B of the set 25W.
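For illustration only, the five sets of router channels of a switch 20 can be modeled as a small data structure; the class and field names below are assumptions made for this sketch and do not appear in the embodiments.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class RouterChannelSet:
    """One set of router channels: an incoming channel with a FIFO buffer
    (modeling buffer 30Q) and a reciprocal outgoing channel."""
    incoming_fifo: deque = field(default_factory=deque)
    outgoing: deque = field(default_factory=deque)

@dataclass
class Switch:
    """Behavioral sketch of a switch 20 with its five sets of router channels
    (Local 25L, North 25N, South 25S, East 25E, West 25W)."""
    local: RouterChannelSet = field(default_factory=RouterChannelSet)
    north: RouterChannelSet = field(default_factory=RouterChannelSet)
    south: RouterChannelSet = field(default_factory=RouterChannelSet)
    east: RouterChannelSet = field(default_factory=RouterChannelSet)
    west: RouterChannelSet = field(default_factory=RouterChannelSet)

    def receive(self, side: str, packet) -> None:
        getattr(self, side).incoming_fifo.append(packet)   # enqueue in FIFO order

    def send(self, side: str, packet) -> None:
        getattr(self, side).outgoing.append(packet)

s00 = Switch()
s00.receive("west", {"dst": "C03"})   # packet arriving from the west neighboring switch
```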
For example, referring back to
In this specification, a column 45 including at least one failed core circuit 10 is generally referred to as a failed column. As shown in
Embodiments of the invention provide a redundant routing system for a processor array. The different redundancy granularities disclosed herein include the ability to bypass a single failed core circuit 10, a block of one or more failed core circuits 10, a row of one or more failed core circuits 10, a column of one or more failed core circuits 10, or a plane of one or more failed core circuits 10.
The redundant routing system 100 further comprises additional data paths 30 (
Redundant data paths 30R are present throughout the array 50. For ease of illustration, only enabled redundant data paths 30R (i.e., redundant data paths 30R that are enabled/selected for routing around a component failure) are shown in
A switch 20 exchanges packets with adjacent neighboring switches 20 via normal data paths 30N. A switch 20 may also exchange packets with non-neighboring switches 20 via redundant data paths 30R. As shown in
The redundant routing system 100 further comprises additional core circuits 10, such as core circuits 10 having physical labels 0R, 1R, 2R, and 3R. These additional core circuits 10 are redundant core circuits 10R. The redundant routing system 100 further comprises additional switches 20, such as switches S0R, S1R, S2R, and S3R. These additional switches 20 are redundant switches 20R. Redundant switches S0R, S1R, S2R, and S3R correspond to redundant core circuits 0R, 1R, 2R, and 3R, respectively.
In one embodiment, the redundant core circuits 10R are organized into at least one redundant column 45R. A redundant column 45R may be disposed anywhere in the array 50. Each redundant column 45R is used to recover a failed column 45. The redundant routing system 100 recovers one failed column 45 per redundant column 45R.
In one embodiment, the maximum number of failed core circuits 10 that a redundant column 45R can recover is equal to M, where M is the number of rows 40 (
As shown in
Even though col1 is bypassed entirely, the redundant routing system 100 enables the array 50 to logically operate as a complete M×N array. Specifically, colR provides a redundant column 45R that makes the array 50 a complete M×N array. In one example, the columns 45 with physical labels col0, col2, col3, and colR are logically mapped as columns 45 with logical labels col0, col1, col2, and col3, respectively.
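The logical-to-physical column mapping described above can be sketched as follows; the helper function, its arguments, and the placement of the redundant column at the end of the scan are illustrative assumptions consistent with the example.

```python
def remap_columns(physical_columns, failed_columns, redundant_columns):
    """Map logical column labels onto the surviving physical columns.
    Each failed column is dropped and a redundant column takes its place,
    so the array still presents a complete set of logical columns.
    Illustrative sketch only."""
    if len(failed_columns) > len(redundant_columns):
        raise ValueError("not enough redundant columns to recover all failures")
    survivors = [c for c in physical_columns if c not in failed_columns]
    survivors += redundant_columns[: len(failed_columns)]
    return {f"col{i}": phys for i, phys in enumerate(survivors)}

# Example from the text: col1 fails and colR stands in for it.
print(remap_columns(["col0", "col1", "col2", "col3"], {"col1"}, ["colR"]))
# {'col0': 'col0', 'col1': 'col2', 'col2': 'col3', 'col3': 'colR'}
```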
Multiple static multiplexers 26 are used to select which switches 20 a given switch 20 exchanges packets with. Specifically, each static multiplexer 26 corresponds to only one set of router channels (e.g., Local router channels 25L, North router channels 25N, South router channels 25S, East router channels 25E, or West router channels 25W). Each static multiplexer 26 is used to select the type of data path 30 that a corresponding set of router channels should receive packets from/send packets to.
In one embodiment, a static multiplexer 26 is used to select either normal data paths 30N (that interconnect the switch 20 to an adjacent neighboring switch 20) or redundant data paths 30R (that interconnect the switch 20 with a non-neighboring switch 20). Relative to a switch 20, each normal data path 30N is either an incoming normal data path 30NF or an outgoing normal data path 30NB, and each redundant data path 30R is either an incoming redundant data path 30RF or an outgoing redundant data path 30RB.
As shown in
Also shown in
A controller 60 is used to select the data paths 30. Specifically, a controller 60 provides a configuration bit to each static multiplexer 26. The configuration bit indicates whether redundancy mode for the array 50 is enabled or disabled. Each static multiplexer 26 selects the type of data path 30 based on the configuration bit received. For example, when the redundancy mode is enabled, redundant data paths 30R are selected. When the redundancy mode is disabled, normal data paths 30N are selected instead.
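A behavioral sketch of this selection is shown below; the function is a hypothetical model of a static multiplexer 26 driven by a single configuration bit, not a description of the pass-gate hardware.

```python
def select_data_path(config_bit: int, normal_path, redundant_path):
    """Behavioral sketch of a static multiplexer 26: a single configuration bit
    from the controller chooses between the normal data path 30N and the
    redundant data path 30R for one set of router channels. Illustrative only."""
    return redundant_path if config_bit else normal_path

# Redundancy mode disabled -> normal path; enabled -> redundant path.
assert select_data_path(0, "30N", "30R") == "30N"
assert select_data_path(1, "30N", "30R") == "30R"
```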
The controller 60 maintains a control register file. In one embodiment, one controller 60 is used for the entire array 50. In another embodiment, each switch 20 or each core circuit 10 has its own controller 60.
In one embodiment, the controller 60 sends a control packet including a configuration bit in-band to each static multiplexer 26. In another embodiment, the controller 60 sends a configuration bit out-of-band (e.g., via a separate communication channel, such as a scan chain or a dedicated bus) to each static multiplexer 26.
Component failures are detected by presenting test vectors. There may be a test vector for each core circuit 10, a test vector for each switch 20, and a test vector for each data path 30. For each test vector, the output generated based on said test vector is compared with expected output. A core circuit 10, a switch 20, or a data path 30 is identified as a component failure if the output generated based on its test vector does not equal the expected output. The controller 60 sets configuration bits that result in the bypass of the detected component failures.
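A minimal sketch of this test-vector comparison is given below, assuming a hypothetical `apply_test_vector` callback that exercises a component and returns its observed output; the component names are illustrative only.

```python
def detect_failures(components, apply_test_vector):
    """Sketch of test-vector-based detection: each component (core circuit,
    switch, or data path) is exercised with its test vector and the observed
    output is compared against the expected output. The `apply_test_vector`
    callback and the component names are hypothetical."""
    failures = []
    for name, (test_vector, expected_output) in components.items():
        if apply_test_vector(name, test_vector) != expected_output:
            failures.append(name)          # flag this component as a failure
    return failures

# Hypothetical example: switch S11 returns a wrong response to its test vector.
observed = {"C11": "ok", "S11": "stuck", "S12": "ok"}
vectors = {"C11": ("tv0", "ok"), "S11": ("tv1", "ok"), "S12": ("tv2", "ok")}
print(detect_failures(vectors, lambda name, vector: observed[name]))   # ['S11']
```

In keeping with the paragraph above, the controller 60 would then set configuration bits that bypass each reported failure.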
Each data path 30 may include one signal wire or multiple signal wires (i.e., a bus of wires). A logic pass-gate may be used in the switching of a single signal wire. In one example implementation, each static multiplexer 26 is implemented using two logic pass-gates (i.e., four transistors 27) per signal wire of a data path 30. Other types of logic can also be used to implement the multiplexers 26.
More than one configuration bit is required for a multiplexer 26 that is configured to select from more than two data paths 30 (see, for example,
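Under the assumption that each selectable data path is one multiplexer input, the number of configuration bits grows as the base-2 logarithm of the number of inputs; the helper below illustrates the calculation.

```python
import math

def config_bits_needed(num_selectable_paths: int) -> int:
    """Minimum number of configuration bits for a static multiplexer selecting
    among `num_selectable_paths` data paths (illustrative calculation)."""
    return math.ceil(math.log2(num_selectable_paths))

print(config_bits_needed(2))   # 1 bit: normal vs. redundant data path
print(config_bits_needed(4))   # 2 bits: normal, redundant, and two diagonal data paths
```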
The redundant routing system 150 further comprises redundant core circuits 10R, such as core circuits 0R, 1R, 2R, and 3R. The redundant routing system 150 further comprises redundant switches 20R, such as switches S0R, S1R, S2R, and S3R. Redundant switches S0R, S1R, S2R, and S3R correspond to redundant core circuits 0R, 1R, 2R, and 3R, respectively. In one embodiment, the redundant core circuits 10R are organized into at least one redundant column 45R.
The redundant routing system 150 further comprises additional data paths 30 (
Redundant data paths 30R and diagonal data paths 30D are present throughout the array 50. For ease of illustration, only enabled redundant data paths 30R and enabled diagonal data paths 30D (i.e., diagonal data paths 30D that are enabled/selected for routing around a component failure) are shown in
Each switch 20 exchanges packets with adjacent neighboring switches 20 via normal data paths 30N. Some switches 20 may also exchange packets with non-neighboring switches 20 via redundant data paths 30R. Some switches 20 may also exchange packets with diagonally adjacent switches 20 via diagonal data paths 30D. As shown in
The redundant routing system 150 recovers M failed core circuits 10 per redundant column 45R, wherein M is the number of rows 40 (
As shown in
One redundant core circuit 10R of colR, such as redundant core circuit 1R, is used to recover one failed core circuit C11. The remaining core circuits 10R of colR may be used to recover up to three additional failed core circuits 10 as long as the failed core circuits 10 are in different rows 40.
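The constraint described above, at most one recovered failure per row for a single redundant column 45R, can be expressed as a simple check; the helper and its (row, column) coordinate convention are illustrative assumptions.

```python
def recoverable_by_one_redundant_column(failed_cores, num_rows):
    """Check the constraint described above for a single redundant column 45R:
    at most one failed core circuit per row, and no more than M failures in
    total, where M is the number of rows. `failed_cores` is a set of
    (row, col) coordinates; this helper is an illustrative assumption."""
    rows_hit = [row for row, _col in failed_cores]
    return len(failed_cores) <= num_rows and len(rows_hit) == len(set(rows_hit))

# Example from the text: C11 has failed; further failures are recoverable
# as long as each lies in a different row.
print(recoverable_by_one_redundant_column({(1, 1)}, num_rows=4))          # True
print(recoverable_by_one_redundant_column({(1, 1), (1, 3)}, num_rows=4))  # False
```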
As shown in
As shown in
Also shown in
In another embodiment, multiple sets of redundant router channels (i.e., spare router channels) are used instead of static multiplexers 26. As shown in
The controller 60 (
The redundant routing system 200 further comprises redundant switches 20R, such as switches S0R, S1R, S2R, and S3R. In one embodiment, the redundant switches 20R are organized into at least one redundant router column 45RR. Redundant router columns 45RR are positioned at an end of the array 50.
The redundant routing system 200 further comprises additional data paths 30 (
As shown in
Even though col1 includes at least one component failure, the redundant routing system 200 allows the array 50 to logically operate as a fully functional M×N array of switches 20. The redundant routing system 200 uses less area than the redundant routing system 150 of
The redundant routing system 250 operates using a block-based approach. Components of the array 50 are organized into multiple blocks 270. The redundant routing system 250 further comprises redundant core circuits 10R, such as core circuits 0R, 1R, 2R, and 3R. The redundant routing system 250 further comprises redundant switches 20R, such as switches S0R, S1R, S2R, and S3R. Redundant switches S0R, S1R, S2R, and S3R correspond to redundant core circuits 0R, 1R, 2R, and 3R, respectively. In one embodiment, the redundant core circuits 10R are organized into at least one redundant column 45R.
The redundant routing system 250 further comprises additional data paths 30 (
The redundant routing system 250 operates using a block-based approach. Components of the array 50 are organized into multiple blocks 270. The redundant routing system 250 recovers one failed core circuit 10 per block 270, per redundant column 45R. For each block 270 including a failed core circuit 10, redundant data paths 30R within said block 270 are used to bypass components of a column 45 within said block 270, wherein the column 45 includes the failed core circuit 10, and wherein the bypassed components are recovered using components of a redundant column 45R within said block 270. Packets are propagated between blocks 270 using diagonal data paths 30D.
As shown in
Components of col1 within Block 0 (i.e., core circuits C01 and C11, and switches S01 and S11) are entirely bypassed using redundant data paths 30R. Components of redundant column 45R within Block 0 (i.e., redundant core circuits 0R and 1R, and redundant switches S0R and S1R) are used to recover the bypassed components. Diagonal data paths 30D at the edges of Block 0 are used to propagate packets to components of the Block 1.
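A sketch of this block-based planning is shown below; the block partition (two rows per block) and the data layout are assumptions chosen to match the example, not the embodiments' actual control logic.

```python
def plan_block_recovery(blocks, failed_cores):
    """Sketch of the block-based scheme: for each block containing a failed core
    circuit, bypass that core's column within the block and recover it using the
    block's portion of the redundant column; diagonal data paths propagate packets
    between blocks. The data layout here is an illustrative assumption."""
    plan = {}
    for block_id, (rows, cols, redundant_col) in blocks.items():
        hits = [(r, c) for (r, c) in failed_cores if r in rows and c in cols]
        if len(hits) > 1:
            raise ValueError(f"{block_id}: one failed core circuit per block, per redundant column")
        if hits:
            _, failed_col = hits[0]
            plan[block_id] = {"bypass_col": failed_col, "recover_with": redundant_col}
    return plan

# Example from the text: C11 fails inside Block 0 (assumed to span rows 0-1).
blocks = {"Block 0": ({0, 1}, {0, 1, 2, 3}, "colR"),
          "Block 1": ({2, 3}, {0, 1, 2, 3}, "colR")}
print(plan_block_recovery(blocks, {(1, 1)}))
# {'Block 0': {'bypass_col': 1, 'recover_with': 'colR'}}
```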
The redundant routing system 325 further comprises additional data paths 30 (
The redundant routing system 325 further comprises additional core circuits 10, such as core circuits R0, R1, R2, and R3. These additional core circuits 10 are redundant core circuits 10R. The redundant routing system 325 further comprises additional switches 20, such as switches SR0, SR1, SR2, and SR3. These additional switches 20 are redundant switches 20R. Redundant switches SR0, SR1, SR2, and SR3 correspond to redundant core circuits R0, R1, R2, and R3, respectively.
In one embodiment, the redundant core circuits 10R are organized into at least one redundant row 40R. A redundant row 40R may be disposed anywhere in the array 50. In this specification, a row 40 including at least one failed core circuit 10 is generally referred to as a failed row. Each redundant row 40R is used to recover a failed row 40. The redundant routing system 325 recovers one failed row 40 per redundant row 40R.
In one embodiment, the maximum number of failed core circuits 10 that a redundant row 40R may recover is equal to N, wherein N is the number of columns 45 (
As shown in
Even though row1 is bypassed entirely, the redundant routing system 325 enables the array 50 to logically operate as an M×N array. Specifically, rowR provides a redundant row 40R that enables the full operation of the array 50. In one example, the rows 40 with physical labels row0, row2, row3, and rowR are logically mapped as rows 40 with logical labels row0, row1, row2, and row3, respectively.
As shown in
Also shown in
The redundant routing system 350 further comprises redundant core circuits 10R, such as core circuits R0, R1, R2, and R3. The redundant routing system 350 further comprises redundant switches 20R, such as switches SR0, SR1, SR2, and SR3. Redundant switches SR0, SR1, SR2, and SR3 correspond to redundant core circuits R0, R1, R2, and R3, respectively. In one embodiment, the redundant core circuits 10R are organized into at least one redundant row 40R.
The redundant routing system 350 further comprises additional data paths 30 (
As shown in
The redundant routing system 350 recovers N failed core circuits 10 per redundant row 40R, wherein N is the number of columns 45 (
As shown in
One redundant core circuit 10R of rowR, such as redundant core circuit R1, is used to recover failed core circuit C11. The remaining core circuits 10R of rowR may be used to recover up to three additional failed core circuits 10 as long as the failed core circuits 10 are in different columns 45.
As shown in
As shown in
Also shown in
The redundant routing system 400 further comprises multiple redundant core circuits 10R, such as redundant core circuits 0R, 1R, 2R, 3R, R0, R1, R2, R3, and RR. The redundant routing system 400 further comprises multiple redundant switches 20R, such as switches S0R, S1R, S2R, S3R, SR0, SR1, SR2, SR3, and SRR. Redundant switches S0R, S1R, S2R, S3R, SR0, SR1, SR2, SR3, and SRR correspond to redundant core circuits 0R, 1R, 2R, 3R, R0, R1, R2, R3, and RR, respectively.
In one embodiment, the redundant core circuits 10R are organized into at least one redundant column 45R and at least one redundant row 40R. Each redundant column 45R is used to bypass a failed column 45. Each redundant row 40R is used to bypass a failed row 40. The redundant routing system 400 recovers one failed column 45 per redundant column 45R, and one failed row 40 per redundant row 40R.
In one embodiment, the maximum number of failed core circuits 10 that a failed column 45 may have is equal to M, wherein M is the number of rows 40 (
The redundant routing system 400 further comprises additional data paths 30 (
As shown in
As col2 is bypassed entirely, colR is used to recover col2.
Further, each switch S00, S01, S03, and S0R of row0 exchanges packets with non-neighboring switch S20, S21, S23, and S2R of row2 instead of adjacent neighboring switch S10, S11, S13, and S1R of row1, respectively. Similarly, each switch S20, S21, S23, and S2R of row2 exchanges packets with non-neighboring switch S00, S01, S03, and S0R of row0 instead of adjacent neighboring switch S10, S11, S13, and S1R of row1, respectively. As such, switches S10, S11, S13, and S1R of row1 are not used to propagate packets. Switch S12 is also not used to propagate packets.
As row1 is bypassed entirely, rowR is used to recover row1.
A first static multiplexer 26 is used to select one of the following sets of data paths 30 that North router channels 25N should receive packets from/send packets to: a set of normal data paths 30N that interconnect the switch 20 to a north neighboring switch 20, or a set of redundant data paths 30R that interconnect the switch 20 to a north non-neighboring switch 20. For example, referring back to
A second static multiplexer 26 is used to select one of the following sets of data paths 30 that South router channels 25S should receive packets from/send packets to: a set of normal data paths 30N that interconnect the switch 20 to a south neighboring switch 20, or a set of redundant data paths 30R that interconnect the switch 20 to a south non-neighboring switch 20. For example, referring back to
A third static multiplexer 26 is used to select one of the following sets of data paths 30 that East router channels 25E should receive packets from/send packets to: a set of normal data paths 30N that interconnect the switch 20 to an east neighboring switch 20, or a set of redundant data paths 30R that interconnect the switch 20 to an east non-neighboring switch 20. For example, referring back to
A fourth static multiplexer 26 is used to select one of the following sets of data paths 30 that West router channels 25W should receive packets from/send packets to: a set of normal data paths 30N that interconnect the switch 20 to a west neighboring switch 20, or a set of redundant data paths 30R that interconnect the switch 20 to a west non-neighboring switch 20. For example, referring back to
The redundant routing system 450 further comprises multiple redundant core circuits 10R, such as redundant core circuits 0R, 1R, 2R, 3R, R0, R1, R2, R3, and RR. The redundant routing system 450 further comprises multiple redundant switches 20R, such as switches S0R, S1R, S2R, S3R, SR0, SR1, SR2, SR3, and SRR. Redundant switches S0R, S1R, S2R, S3R, SR0, SR1, SR2, SR3, and SRR correspond to redundant core circuits 0R, 1R, 2R, 3R, R0, R1, R2, R3, and RR, respectively. In one embodiment, the redundant core circuits 10R are organized into at least one redundant column 45R and at least one redundant row 40R.
The redundant routing system 450 recovers N failed core circuits 10 per redundant row 40R, and M failed core circuits 10 per redundant column 45R, wherein M is the number of rows 40 (
The redundant routing system 450 further comprises additional data paths 30 (
As shown in
To facilitate full operation of the array 50, core circuits C11, C13, and C32 and corresponding switches S11, S13, and S32, respectively, are bypassed using redundant data paths 30R and diagonal data paths 30D. For example, to shift packets around the failed core circuit C11, switches S01 and S21 exchange packets via at least one redundant data path 30R, switches S10 and S21 exchange packets via at least one diagonal data path 30D, and switches S12 and S21 exchange packets via at least one diagonal data path 30D. As such, switch S11 is not used to propagate packets.
As shown in
A first static multiplexer 26 is used to select one of the following sets of data paths 30 that North router channels 25N should receive packets from/send packets to: a set of normal data paths 30N that interconnect the switch 20 to a north neighboring switch 20, a set of redundant data paths 30R that interconnect the switch 20 to a north non-neighboring switch 20, a set of diagonal data paths 30D that interconnect the switch 20 to a north-east diagonally adjacent switch 20, and a different set of diagonal data paths 30D that interconnect the switch 20 to a north-west diagonally adjacent switch 20.
A second static multiplexer 26 is used to select one of the following sets of data paths 30 that South router channels 25S should receive packets from/send packets to: a set of normal data paths 30N that interconnect the switch 20 to a south neighboring switch 20, a set of redundant data paths 30R that interconnect the switch 20 to a south non-neighboring switch 20, a set of diagonal data paths 30D that interconnect the switch 20 to a south-east diagonally adjacent switch 20, and a different set of diagonal data paths 30D that interconnect the switch 20 to a south-west diagonally adjacent switch 20.
A third static multiplexer 26 is used to select one of the following sets of data paths 30 that East router channels 25E should receive packets from/send packets to: a set of normal data paths 30N that interconnect the switch 20 to an east neighboring switch 20, a set of redundant data paths 30R that interconnect the switch 20 to an east non-neighboring switch 20, a set of diagonal data paths 30D that interconnect the switch 20 to a north-east diagonally adjacent switch 20, and a different set of diagonal data paths 30D that interconnect the switch 20 to a south-east diagonally adjacent switch 20.
A fourth static multiplexer 26 is used to select one of the following sets of data paths 30 that West router channels 25W should receive packets from/send packets to: a set of normal data paths 30N that interconnect the switch 20 to a west neighboring switch 20, a set of redundant data paths 30R that interconnect the switch 20 to a west non-neighboring switch 20, a set of diagonal data paths 30D that interconnect the switch 20 to a north-west diagonally adjacent switch 20, and a different set of diagonal data paths 30D that interconnect the switch 20 to a south-west diagonally adjacent switch 20. For example, referring back to
The redundant routing system 750 further comprises additional data paths 30 (
Each redundant data path 30R interconnects non-neighboring switches 20 in different rows 40 (
As shown in
The number of component failures the redundant routing system 750 can bypass is up to one-half the size of the array 50. The redundant routing system 750 does not utilize redundant core circuits 10R or redundant routers 20R. As such, the number of core circuits 10 that the array 50 logically represents is directly reduced by the number of bypassed core circuits 10.
In one embodiment, each switch 20 of the redundant routing system 750 is implemented in the same manner as each switch 20 of the redundant routing system 400 (
As stated above, multiple core circuits 10 may be organized into a three-dimensional (3-D) processor array.
A routing system 500 for a 3-D array 625 comprises multiple 3-D switches 520 and multiple data paths 30. The routing system 500 is a multidimensional switch network. Each 3-D switch 520 corresponds to a core circuit 10 of the array 625. As described in detail later herein, each 3-D switch 520 is interconnected with a corresponding core circuit 10 via at least one data path 30. Each 3-D switch 520 is further interconnected with at least one adjacent neighboring 3-D switch 520 via at least one data path 30.
The processor array has multiple X-Y planes 540 (e.g., Tier 0, Tier 1, and Tier 2), multiple Y-Z planes 545, and multiple X-Z planes 546. As shown in
For ease of illustration, only the corresponding core circuit 10 for switch S220 is shown (i.e., C220).
As described in detail later herein, Z routing interconnects the X-Y planes 540, and X-Y routing interconnects the switches 520 within an X-Y plane 540.
In one embodiment, the 3-D switch 520 exchanges packets with neighboring components via multiple sets of router channels, wherein each set of router channels has an incoming router channel 30F and a reciprocal router channel 30B. As shown in
A second set 25X1 and a third set 25X2 of router channels (“X router channels”) interconnect the 3-D switch 520 with an adjacent neighboring 3-D switch 520 in a first X direction with increasing X coordinates (“X+ direction”), and a different adjacent neighboring 3-D switch 520 in a second X direction with decreasing X coordinates (“X− direction”), respectively.
A fourth set 25Y1 and a fifth set 25Y2 of router channels (“Y router channels”) interconnect the 3-D switch 520 with an adjacent neighboring 3-D switch 520 in a first Y direction with increasing Y coordinates (“Y+ direction”), and a different adjacent neighboring 3-D switch 520 in a second Y direction with decreasing Y coordinates (“Y− direction”), respectively.
A sixth set 25Z1 and a seventh set 25Z2 of router channels (“Z router channels”) interconnect the 3-D switch 520 with an adjacent neighboring 3-D switch 520 in a first Z direction with increasing Z coordinates (“Z+ direction”), and a different adjacent neighboring 3-D switch 520 in a second Z direction with decreasing Z coordinates (“Z− direction”), respectively.
For example, referring back to
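For illustration, a 3-D switch 520 and its seven sets of router channels can be modeled as follows; the class, method, and direction names are assumptions made for the sketch.

```python
from collections import deque

class ThreeDSwitch:
    """Sketch of a 3-D switch 520 with its seven sets of router channels:
    Local (25L), X+ (25X1), X- (25X2), Y+ (25Y1), Y- (25Y2), Z+ (25Z1), and
    Z- (25Z2). Each set holds an incoming FIFO and a reciprocal outgoing
    channel; names are illustrative, not taken from the embodiments."""
    DIRECTIONS = ("local", "x+", "x-", "y+", "y-", "z+", "z-")

    def __init__(self, label: str):
        self.label = label
        self.channels = {d: {"incoming": deque(), "outgoing": deque()}
                         for d in self.DIRECTIONS}

    def receive(self, direction: str, packet) -> None:
        self.channels[direction]["incoming"].append(packet)

    def send(self, direction: str, packet) -> None:
        self.channels[direction]["outgoing"].append(packet)

switch = ThreeDSwitch("S220")
switch.receive("z+", {"dst": "C110"})   # packet arriving from the Z+ neighbor
```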
The redundant routing system 550 further comprises additional 3-D switches 520, such as 3-D switches R00, R01, R02, R10, R11, R12, R20, R21, and R22. These additional 3-D switches 520 are redundant 3-D switches 520R. In one embodiment, the redundant 3-D switches 520R are organized into at least one redundant plane 545R. A redundant plane 545R may be an X-Y plane 540, a Y-Z plane 545, or an X-Z plane 546. For example, the redundant plane 545R shown in
The redundant routing system 550 further comprises additional data paths 30 (
As shown in
Each 3-D switch 520 exchanges packets with adjacent neighboring 3-D switches 520 via normal data paths 30N. Some 3-D switches 520 may also exchange packets with non-neighboring 3-D switches 520 via redundant data paths 30R. For example, as shown in
As shown in
As the third Y-Z plane 545 including failed 3-D switch S211 is bypassed entirely, a redundant plane 545R is used to recover the bypassed third Y-Z plane 545. Even though only 3-D switch S211 failed, each redundant 3-D switch 520R of the redundant plane 545R serves as a backup for a 3-D switch 520 of the bypassed third Y-Z plane 545. For example, each redundant 3-D switch R00, R01, R02, R10, R11, R12, R20, R21, and R22 of the redundant plane 545R is used to recover 3-D switch S200, S201, S202, S210, S211, S212, S220, S221, and S222 of the third Y-Z plane 545, respectively.
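The one-to-one recovery mapping between the bypassed Y-Z plane and the redundant plane 545R can be sketched as a simple dictionary; the helper below follows the labeling of the example and is an illustrative assumption.

```python
def map_bypassed_plane(bypassed_x: int, dims=(3, 3)):
    """Sketch of the plane-level recovery mapping: every 3-D switch S<x><y><z>
    in the bypassed Y-Z plane (fixed X coordinate) is backed by the redundant
    switch R<y><z> in the redundant plane 545R. Labels follow the example in
    the text; the function itself is an illustrative assumption."""
    ys, zs = range(dims[0]), range(dims[1])
    return {f"S{bypassed_x}{y}{z}": f"R{y}{z}" for y in ys for z in zs}

# Example from the text: the third Y-Z plane (X = 2) containing failed switch S211.
print(map_bypassed_plane(2))
# {'S200': 'R00', 'S201': 'R01', ..., 'S221': 'R21', 'S222': 'R22'}
```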
The redundant routing system 600 further comprises additional 3-D switches 520, such as 3-D switches R00, R01, R02, R10, R11, R12, R20, R21, and R22. These additional 3-D switches 520 are redundant 3-D switches 520R. In one embodiment, the redundant 3-D switches 520R are organized into at least one redundant plane 545R.
The redundant routing system 600 further comprises additional data paths 30 (
As shown in
As shown in
The redundant routing system 800 further comprises additional data paths 30 (
Each redundant data path 30R interconnects non-neighboring switches 520 in different X-Y planes 540 (
As shown in
The number of component failures the redundant routing system 800 can bypass is up to one-half the size of the array 625. The redundant routing system 800 does not utilize redundant core circuits 10R or redundant routers 20R. As such, the number of core circuits 10 that the array 625 logically represents is directly reduced by the number of bypassed core circuits 10.
The redundant routing system 800 further comprises additional data paths 30 (
As shown in
As shown in
The array 650 further comprises a routing system 665 for routing packets between the core circuits 10. The routing system 665 includes multiple switches 20 and multiple data paths 30. As shown in
In one embodiment, for each switch 20, the set 25L of Local router channels (
The array 700 further comprises a routing system 715 for routing packets between the core circuits 10. The routing system 715 includes multiple switches 20 and multiple data paths 30. As shown in
As stated above, each switch 20 in
Switch S00 in
Also shown in
The computer system can include a display interface 306 that forwards graphics, text, and other data from the communication infrastructure 304 (or from a frame buffer not shown) for display on a display unit 308. The computer system also includes a main memory 310, preferably random access memory (RAM), and may also include a secondary memory 312. The secondary memory 312 may include, for example, a hard disk drive 314 and/or a removable storage drive 316, representing, for example, a floppy disk drive, a magnetic tape drive, or an optical disk drive. The removable storage drive 316 reads from and/or writes to a removable storage unit 318 in a manner well known to those having ordinary skill in the art. Removable storage unit 318 represents, for example, a floppy disk, a compact disc, a magnetic tape, or an optical disk, etc. which is read by and written to by removable storage drive 316. As will be appreciated, the removable storage unit 318 includes a computer readable medium having stored therein computer software and/or data.
In alternative embodiments, the secondary memory 312 may include other similar means for allowing computer programs or other instructions to be loaded into the computer system. Such means may include, for example, a removable storage unit 320 and an interface 322. Examples of such means may include a program package and package interface (such as that found in video game devices), a removable memory chip (such as an EPROM, or PROM) and associated socket, and other removable storage units 320 and interfaces 322, which allow software and data to be transferred from the removable storage unit 320 to the computer system.
The computer system may also include a communication interface 324. Communication interface 324 allows software and data to be transferred between the computer system and external devices. Examples of communication interface 324 may include a modem, a network interface (such as an Ethernet card), a communication port, or a PCMCIA slot and card, etc. Software and data transferred via communication interface 324 are in the form of signals which may be, for example, electronic, electromagnetic, optical, or other signals capable of being received by communication interface 324. These signals are provided to communication interface 324 via a communication path (i.e., channel) 326. This communication path 326 carries signals and may be implemented using wire or cable, fiber optics, a phone line, a cellular phone link, an RF link, and/or other communication channels.
In this document, the terms “computer program medium,” “computer usable medium,” and “computer readable medium” are used to generally refer to media such as main memory 310 and secondary memory 312, removable storage drive 316, and a hard disk installed in hard disk drive 314.
Computer programs (also called computer control logic) are stored in main memory 310 and/or secondary memory 312. Computer programs may also be received via communication interface 324. Such computer programs, when run, enable the computer system to perform the features of the present invention as discussed herein. In particular, the computer programs, when run, enable the processor 302 to perform the features of the computer system. Accordingly, such computer programs represent controllers of the computer system.
From the above description, it can be seen that the present invention provides a system, computer program product, non-transitory computer-useable storage medium, and method for implementing the embodiments of the invention. The non-transitory computer-useable storage medium has a computer-readable program, wherein the program upon being processed on a computer causes the computer to implement the steps of the present invention according to the embodiments described herein. References in the claims to an element in the singular are not intended to mean “one and only one” unless explicitly so stated, but rather “one or more.” All structural and functional equivalents to the elements of the above-described exemplary embodiment that are currently known or later come to be known to those of ordinary skill in the art are intended to be encompassed by the present claims. No claim element herein is to be construed under the provisions of 35 U.S.C. section 112, sixth paragraph, unless the element is expressly recited using the phrase “means for” or “step for.”
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The embodiment was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.
Claims
1. A redundant routing system for a processor array, comprising:
- multiple switches for routing packets between multiple core circuits of the processor array, wherein the multiple switches are organized into multiple blocks, each switch corresponds to at least one core circuit of the multiple core circuits, and each block includes at least one redundant switch for recovering a failed switch and at least one redundant core circuit for recovering a failed core circuit; and
- multiple data paths interconnecting the multiple switches, wherein the multiple data paths include at least one redundant data path interconnecting a pair of non-adjacent switches in a same block to bypass a component failure in the same block, and the multiple data paths further include at least one diagonal data path interconnecting a pair of diagonally adjacent switches in different blocks to propagate packets between the different blocks;
- wherein each bypassed component failure is recovered using at least one of a redundant switch and a redundant core circuit included in at least one of the multiple blocks.
2. The redundant routing system of claim 1, wherein:
- a component is one of the following: a core circuit of the processor array, or a switch of the multiple switches; and
- a component failure is one of the following: a failed core circuit of the processor array, a failed switch of the multiple switches, or a failed data path of the multiple data paths.
3. The redundant routing system of claim 2, wherein:
- the at least one redundant switch and the at least one redundant core circuit are organized into at least one redundant column; and
- the redundant routing system recovers one failed core circuit per block, per redundant column.
4. The redundant routing system of claim 3, wherein:
- components of each block are organized into multiple columns; and
- for a block including a column having a component failure, at least one redundant data path is used to bypass components of the column within the block, and the bypassed components are recovered using the at least one redundant column.
5. The redundant routing system of claim 2, wherein:
- the multiple blocks include a first block and a second block;
- components of the first block include a first switch;
- components of the second block include a second switch;
- the first switch is diagonally adjacent to the second switch; and
- the at least one diagonal data path interconnects the first switch in the first block with the second switch in the second block to propagate packets between the components of the first block and the components of the second block.
6. The redundant routing system of claim 1, wherein:
- each core circuit comprises a processing element for executing and generating data.
7. The redundant routing system of claim 1, further comprising:
- a controller for selecting one or more of the multiple data paths, wherein the one or more selected data paths are used to bypass at least one component failure of the processor array.
8. The redundant routing system of claim 1, wherein:
- the multiple data paths further include at least one normal data path interconnecting a pair of adjacent switches.
9. The redundant routing system of claim 8, wherein:
- each switch is configured to exchange packets with either an adjacent switch, a diagonally adjacent switch or a non-adjacent switch based on one or more configuration bits provided by the controller.
10. A method comprising:
- routing packets between multiple core circuits of a processor array via multiple switches, wherein the multiple switches are organized into multiple blocks, and each switch corresponds to at least one core circuit of the multiple core circuits, and each block includes at least one redundant switch for recovering a failed switch and at least one redundant core circuit for recovering a failed core circuit; and
- interconnecting the multiple switches via multiple data paths, wherein the multiple data paths include at least one redundant data path interconnecting a pair of non-adjacent switches in a same block to bypass a component failure in the same block, and the multiple data paths further include at least one diagonal data path interconnecting a pair of diagonally adjacent switches in different blocks to propagate packets between the different blocks;
- wherein each bypassed component failure is recovered using at least one of a redundant switch and a redundant core circuit included in at least one of the multiple blocks.
11. The method of claim 10, wherein:
- a component is one of the following: a core circuit of the processor array, or a switch of the multiple switches; and
- a component failure is one of the following: a failed core circuit of the processor array, a failed switch of the multiple switches, or a failed data path of the multiple data paths.
12. The method of claim 11, wherein:
- the at least one redundant switch and the at least one redundant core circuit are organized into at least one redundant column; and
- the redundant routing system recovers one failed core circuit per block, per redundant column.
13. The method of claim 12, wherein:
- components of each block are organized into multiple columns; and
- for a block including a column having a component failure, at least one redundant data path is used to bypass components of the column within the block, and the bypassed components are recovered using the at least one redundant column.
14. The method of claim 11, wherein:
- the multiple blocks include a first block and a second block;
- components of the first block include a first switch;
- components of the second block include a second switch;
- the first switch is diagonally adjacent to the second switch; and
- the at least one diagonal data path interconnects the first switch in the first block with the second switch in the second block to propagate packets between the components of the first block and the components of the second block.
15. The method of claim 10, wherein:
- each core circuit comprises a processing element for executing and generating data.
16. The method of claim 10, further comprising:
- selecting, via a controller, one or more of the multiple data paths, wherein the one or more selected data paths are used to bypass at least one component failure of the processor array.
17. The method of claim 10, wherein:
- the multiple data paths further include at least one normal data path interconnecting a pair of adjacent switches.
18. The method of claim 17, wherein:
- each switch is configured to exchange packets with either an adjacent switch, a diagonally adjacent switch or a non-adjacent switch based on one or more configuration bits provided by the controller.
19. A computer program product comprising a computer-readable hardware storage medium having program code embodied therewith, the program code being executable by a computer to implement a method comprising:
- routing packets between multiple core circuits of a processor array via multiple switches, wherein the multiple switches are organized into multiple blocks, and each switch corresponds to at least one core circuit of the multiple core circuits, and each block includes at least one redundant switch for recovering a failed switch and at least one redundant core circuit for recovering a failed core circuit; and
- interconnecting the multiple switches via multiple data paths, wherein the multiple data paths include at least one redundant data path interconnecting a pair of non-adjacent switches in a same block to bypass a component failure in the same block, and the multiple data paths further include at least one diagonal data path interconnecting a pair of diagonally adjacent switches in different blocks to propagate packets between the different blocks;
- wherein each bypassed component failure is recovered using at least one of a redundant switch and a redundant core circuit included in at least one of the multiple blocks.
20. The computer program product of claim 19, wherein:
- a component is one of the following: a core circuit of the processor array, or a switch of the multiple switches; and
- a component failure is one of the following: a failed core circuit of the processor array, a failed switch of the multiple switches, or a failed data path of the multiple data paths.
Type: Application
Filed: Aug 6, 2015
Publication Date: Jun 2, 2016
Inventors: Rodrigo Alvarez-Icaza Rivera (San Jose, CA), John V. Arthur (Mountain View, CA), John E. Barth, JR. (Williston, VT), Andrew S. Cassidy (San Jose, CA), Subramanian Iyer (Mount Kisco, NY), Paul A. Merolla (Palo Alto, CA), Dharmendra S. Modha (San Jose, CA)
Application Number: 14/819,742