SPLIT TRAFFIC ROUTING IN A PROCESSOR

A multi-chip module configuration includes two processors, each having two nodes, each node including multiple cores or compute units. Each node is connected to the other nodes by links that are either high bandwidth or low bandwidth. Routing of traffic between the nodes is controlled at each node according to a routing table and/or a control register that optimize bandwidth usage and mitigate traffic congestion.

Description
FIELD OF INVENTION

This application is related to traffic routing within a processor.

BACKGROUND

In a processor composed of multiple processing units, each having several cores or compute units, links of varying bandwidth between the cores and memory caches carry traffic. Traffic congestion on any of these links degrades the performance of the processor. Diverting traffic to alleviate congestion may add hops to the path to the destination, increasing the latency of an individual transfer.

SUMMARY OF EMBODIMENTS

A multi-chip module configuration includes two processors, each having two nodes, each node including multiple cores or compute units. Each node is connected to the other nodes by links that are either high bandwidth or low bandwidth. Routing of traffic between the nodes is controlled at each node according to a routing table and/or a control register that optimize bandwidth usage and mitigate traffic congestion.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is an example functional block diagram of a processor node, including several computing units, a routing table and a crossbar unit that interfaces with links to other nodes; and

FIGS. 2-4 are example functional block diagrams of a processor configuration having traffic flow across various links between processor nodes.

DETAILED DESCRIPTION OF THE EMBODIMENTS

In this application, a processor may include a plurality of nodes, with each node having a plurality of computing units. A multi-chip module is configured to include at least two such processors, with links connecting each node to the other nodes and to memory caches.

FIG. 1 is an example functional block diagram of a processor 110. The processor 110 may be any one of a variety of processors, such as a Central Processing Unit (CPU) or a Graphics Processing Unit (GPU). For instance, it may be an x86 processor that implements the x86 64-bit instruction set architecture and is used in desktops, laptops, servers, and superscalar computers, or it may be an Advanced RISC (Reduced Instruction Set Computer) Machine (ARM) processor that is used in mobile phones or digital media players. Other embodiments of the processor are contemplated, such as Digital Signal Processors (DSPs), which are particularly useful in the processing and implementation of algorithms related to digital signals such as voice data and communication signals, and microcontrollers, which are useful in consumer applications such as printers and copy machines.

As shown, processor 110 includes computing units 105, 106 and 107, which are connected to a system request queue (SRQ) 113 used as a command queue for the computing units 105, 106, 107. A crossbar (Xbar) switch 112 interfaces between links L1, L2, L3 and L4 and the SRQ 113. A routing table 111 and a control register 114 are each configured to control the crossbar switch 112 and the traffic routing over the links L1, L2, L3 and L4. While four links L1, L2, L3 and L4 are depicted in FIG. 1, this is by way of example, and more or fewer links, including links of various throughput capacities, may be implemented in the processor node 110 configuration.
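As a rough illustration of the arrangement just described, the following C sketch models a node whose crossbar consults a per-destination routing table, subject to override by a control register; all names and the table layout are assumptions made for illustration, not the actual hardware interface.

    /* Minimal sketch of a node's routing state, assuming a simple
     * table indexed by destination node. The crossbar forwards each
     * packet on the link that the routing table assigns to the
     * packet's destination, unless the control register overrides
     * that choice for selected traffic (illustrated later). */
    #include <stdint.h>

    #define NUM_NODES 4

    struct node_router {
        uint8_t  dest_link[NUM_NODES]; /* routing table 111: node -> link */
        uint32_t control_reg;          /* control register 114 */
    };

    /* Default lookup against the routing table. */
    static inline uint8_t route_lookup(const struct node_router *r,
                                       uint8_t dest_node)
    {
        return r->dest_link[dest_node];
    }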

FIG. 2 shows an example functional block diagram of a multi-processor configuration 200, where two-node processors 201 and 202 are connected by links 253, 254, 255 and 256. Processor 201 includes processor nodes 110 and 120 connected by link 251. Memory cache 210 is connected to processor node 110 by a memory channel 211, and memory cache 220 is connected to processor node 120 by memory channel 221. Processor 202 includes processor nodes 130 and 140, connected by link 252. Memory channel 231 connects memory cache 230 to processor node 130, and memory channel 241 connects memory cache 240 to processor node 140. Links 257 and 258 are available to connect I/O devices 205, 206, such as network cards and graphics drivers, to the processors 201 and 202. In this example configuration, each of cross links 255 and 256 is a low bandwidth connection (e.g., an 8-bit connection, or a half-link), while links 251, 252, 253 and 254 are high bandwidth connections (e.g., 16-bit connections, or full-links). Alternatively, any of links 251, 252, 253 and 254 may each include multiple connections (e.g., one full link and one half link). In this example, the routing table 111 provides a direct path for all node-to-node transfers. For example, if processor node 110 needs to send a request 261 to processor node 140, the cross link 255 is used as the direct path. With this form of routing selection, the latency of a single request is low, and, statistically, all links carry an equal distribution of traffic. Therefore, the upper bandwidth limit for the traffic rate of the multi-processor configuration 200 is set by the lower bandwidth links 255 and 256.
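For concreteness, a C sketch of what node 110's direct-path routing table might contain in configuration 200 follows; the text fixes only the 110-to-140 case (cross link 255), so the other link assignments are illustrative assumptions.

    /* Direct-path routing table at node 110 in configuration 200
     * (assumed assignments except for the 110-to-140 entry). Every
     * destination gets a one-hop link, so single-request latency is
     * low, but with traffic spread evenly the half links saturate
     * first and cap the aggregate rate. */
    #include <stdint.h>

    enum node_id { NODE_110, NODE_120, NODE_130, NODE_140, NUM_NODES };
    enum link_id { LINK_251, LINK_253, LINK_255 }; /* links at node 110 */

    static const uint8_t direct_route[NUM_NODES] = {
        [NODE_120] = LINK_251, /* full link within processor 201 */
        [NODE_130] = LINK_253, /* full link to processor 202 (assumed) */
        [NODE_140] = LINK_255, /* half link: the diagonal cross link */
    };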

FIG. 3 shows an example functional block diagram of a multi-processor configuration 300, which resembles the configuration 200 shown in FIG. 2. In this example, routing table 111 provides an alternative routing scheme that keeps traffic on the high bandwidth links 251, 252, 253 and 254. For example, if processor node 110 has a request to send to processor node 140, the routing is configured as a two-hop request 361, 362 along links 251 and 254. Accordingly, the latency for this single request is approximately double the latency of the single-hop request 261. However, the upper bandwidth limit for request traffic according to configuration 300 is higher, being set by the minimum bandwidth of the high bandwidth links 251, 252, 253, 254. An optional alternative for this configuration 300 is for the routing table 111 to divert request traffic onto the high bandwidth links 251, 252, 253, and 254, while sending response traffic on the low bandwidth links 255 and 256, since response traffic volume is significantly lower than request traffic volume. This keeps the upper bandwidth limit for the multi-processor configuration 300 based on the minimum bandwidth of the high bandwidth links 251, 252, 253, and 254, since most of the traffic is diverted there.
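The optional request/response split might be expressed as follows; this C sketch reuses the illustrative node and link names above, and the per-class link choices are assumptions consistent with the description.

    /* Configuration 300 routing at node 110: requests to node 140
     * take the first hop of the two-hop full-link path (node 120
     * forwards on link 254 per its own table), while the lighter
     * response traffic is sent over the direct half link. */
    #include <stdint.h>

    enum { LINK_251, LINK_253, LINK_255 };
    enum { NODE_110, NODE_120, NODE_130, NODE_140 };
    typedef enum { MSG_REQUEST, MSG_RESPONSE } msg_class;

    static uint8_t route_cfg300(msg_class cls, uint8_t dest_node)
    {
        if (dest_node == NODE_140)
            return (cls == MSG_REQUEST) ? LINK_251  /* hop 1 of 2 */
                                        : LINK_255; /* direct half link */
        /* other destinations already have a one-hop full link */
        return (dest_node == NODE_120) ? LINK_251 : LINK_253;
    }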

FIG. 4 shows an example functional block diagram of a multi-processor configuration 400 for a split traffic routing scheme. The physical configuration resembles that of configurations 200 and 300. However, the control register 114 is configured to control traffic based on whether the traffic is related to victim requests (e.g., write-backs of cache lines evicted from a node's cache) and their associated responses, or to non-victim requests and responses. According to this routing scheme, only victim requests and associated responses follow the high bandwidth links 251, 252, 253 and 254. Since victim traffic is generally not sensitive to latency, a two-hop transmission routing scheme for this traffic does not impede processor performance. This routing scheme is also favorable because victim traffic volume is generally higher than non-victim traffic volume and is therefore better served by the higher bandwidth links 251, 252, 253, 254. Moreover, evicted victims are not required to be ordered, making them better suited than non-victim requests for the longer routing paths.
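A corresponding C sketch of the split decision at node 110 follows, again with illustrative names; in hardware this choice is made via the control register 114, as described next.

    /* Configuration 400 at node 110: latency-tolerant, high-volume
     * victim traffic bound for node 140 takes the ganged two-hop
     * full-link path; non-victim traffic keeps the direct half link. */
    #include <stdint.h>

    enum { LINK_251, LINK_253, LINK_255 };
    enum { NODE_110, NODE_120, NODE_130, NODE_140 };
    typedef enum { TRAFFIC_VICTIM, TRAFFIC_NON_VICTIM } traffic_kind;

    static uint8_t route_cfg400(traffic_kind kind, uint8_t dest_node)
    {
        if (dest_node == NODE_140)
            return (kind == TRAFFIC_VICTIM) ? LINK_251  /* two-hop path */
                                            : LINK_255; /* unganged hop */
        return (dest_node == NODE_120) ? LINK_251 : LINK_253;
    }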

In order to enable the victim requests and responses to be routed along the high bandwidth links according to the split routing scheme, a special mode bit cHTVicDistMode is set in the control register 114 (e.g., a coherent link traffic distribution register). For example, a compute unit 105, 106, 107 may set the mode bit cHTVicDistMode to 1 when link pair traffic distribution is enabled, such as for processor node pair 110 and 140. Alternatively, the mode bit cHTVicDistMode may be set to 1 to indicate that the split traffic scheme is enabled without pair traffic distribution having been enabled.

In addition, the following settings may be made by the compute unit 105, 106, 107 to the control register 114 to enable and define parameters for the split routing scheme. A distribution node identification value in element DistNode[5:0] is set for each of the processor nodes involved with the distribution (e.g., for this 6-bit element with a value range of 0 to 63, a value of 0 may be assigned to processor node 110, and a value of 3 may be assigned to processor node 140). A destination link element DstLnk[7:0] is specified for a single link. For example, for this 8-bit element, bit 0 may be assigned to link 251, bit 1 to link 253, and bit 2 to link 255, so that setting the destination link to link 251 is achieved by setting bit 0 to 1.

Using this enablement setting scheme for processor node 110 by way of example, when a victim packet is detected heading toward the distribution node identified by DistNode, such as processor node 140, the victim packet is routed to the destination link specified by DstLnk (high bandwidth link 251) instead of the destination link defined in the routing table 111 (low bandwidth link 255).

Additional refinement of the split traffic routing scheme can be achieved by providing indicators as to whether the split routing scheme should handle a victim request, a victim response, or both. To indicate that a victim request is enabled for the split routing scheme, a coherent request distribution enable bit cHTReqDistEn is set to 1. To control only the associated victim response, or to control the victim response in addition to the victim request, a coherent response distribution enable bit cHTRspDistEn is set to 1.
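The register fields described above might be encoded as follows. The patent names the fields but not their bit positions, so the offsets in this C sketch are assumptions made purely for illustration.

    /* Hypothetical encoding of the coherent link traffic distribution
     * register 114 (field offsets are assumed, not from the patent). */
    #include <stdint.h>

    #define CHT_VIC_DIST_MODE (1u << 0) /* cHTVicDistMode: enable split */
    #define CHT_REQ_DIST_EN   (1u << 1) /* cHTReqDistEn: victim requests */
    #define CHT_RSP_DIST_EN   (1u << 2) /* cHTRspDistEn: victim responses */
    #define CHT_DIST_NODE(id)   (((uint32_t)(id) & 0x3fu) << 3)   /* DistNode[5:0] */
    #define CHT_DST_LNK(bits)   (((uint32_t)(bits) & 0xffu) << 9) /* DstLnk[7:0] */

    /* Example from the text: at node 110, divert victim requests and
     * responses headed for node 140 (DistNode value 3) onto link 251
     * (DstLnk bit 0). */
    static const uint32_t control_reg_114 =
        CHT_VIC_DIST_MODE | CHT_REQ_DIST_EN | CHT_RSP_DIST_EN |
        CHT_DIST_NODE(3) | CHT_DST_LNK(1u << 0);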

In a variation of the above-described embodiment, the routing table 111 may be configured with the parameters of the split traffic routing scheme, such that the split traffic routing is executed directly according to the routing indicated in the routing table 111 rather than by the control register 114.

The victim distribution mode for a processor node in the configuration illustrated in FIG. 4 (i.e., split traffic routing) is enabled under specific conditions, including, by way of example, only if the following are true: (1) a victim distribution processor node is enabled for the processor; and (2) the victim distribution processor node connects to another processor node, a destination processor node, both directly, with only one unganged link hop on a low bandwidth link, and indirectly, through two ganged link hops on high bandwidth links. For example, the method described above with respect to FIG. 4 pertains to distribution processor node 110 and destination processor node 140, which satisfy these specific conditions.
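These conditions reduce to a simple predicate; in this C sketch the field names are illustrative, standing in for whatever per-path state the hardware actually keeps.

    /* Victim distribution is permitted only when distribution is
     * enabled on the node and the destination is reachable both
     * directly (one unganged low bandwidth hop) and indirectly
     * (two ganged high bandwidth hops). */
    struct path_info {
        int dist_enabled;         /* condition (1) */
        int direct_unganged_low;  /* condition (2), direct path */
        int indirect_ganged_high; /* condition (2), indirect path */
    };

    static int victim_dist_allowed(const struct path_info *p)
    {
        return p->dist_enabled && p->direct_unganged_low &&
               p->indirect_ganged_high;
    }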

Table 1 shows an example utilization table comparing link utilization for the above configurations 200 and 400, with read:write ratios that are a function of the workload. As shown, when routing is evenly distributed across high bandwidth links and low bandwidth links (i.e., configuration 200), the high bandwidth link utilization is 50%, which corresponds to the 2:1 link size ratio. Using the split routing scheme of configuration 400, the high bandwidth and low bandwidth links can be utilized more evenly.

TABLE 1

                  Read:Write   High:Low Bandwidth   Low bandwidth      High bandwidth
    Configuration ratio        Link Size Ratio      link utilization   link utilization
    200           2:1          2:1                  100%                50%
    400           2:1          2:1                   98%               100%
    400           3:1          2:1                   92%               100%
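The 50% figure in the first row can be checked with a simple model (an assumption for illustration, not the patent's derivation): if traffic is split evenly per link, the half link saturates at 100% while the full link carries the same flow over twice the width.

    /* Worked check of Table 1, row 1: equal flow per link and a 2:1
     * high:low link width ratio give the full link half the
     * utilization of the saturated half link. */
    #include <stdio.h>

    int main(void)
    {
        const double size_ratio = 2.0; /* high:low link width ratio */
        const double low_util   = 1.0; /* half link saturated: 100% */
        const double high_util  = low_util / size_ratio;
        printf("high bandwidth link utilization: %.0f%%\n",
               high_util * 100.0);
        return 0;
    }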

Although features and elements are described above in particular combinations, each feature or element can be used alone without the other features and elements or in various combinations with or without other features and elements. The apparatus described herein may be manufactured by using a computer program, software, or firmware incorporated in a computer-readable storage medium for execution by a general purpose computer or a processor. Examples of computer-readable storage media include a read only memory (ROM), a random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks and digital versatile disks (DVDs).

Embodiments of the present invention may be represented as instructions and data stored in a computer-readable storage medium. For example, aspects of the present invention may be implemented using Verilog, which is a hardware description language (HDL). When processed, Verilog data instructions may generate other intermediary data (e.g., netlists, GDS data, or the like) that may be used to perform a manufacturing process implemented in a semiconductor fabrication facility. The manufacturing process may be adapted to manufacture semiconductor devices (e.g., processors) that embody various aspects of the present invention.

Suitable processors include, by way of example, a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Array (FPGA) circuits, any other type of integrated circuit (IC), and/or a state machine. Such processors may be manufactured by configuring a manufacturing process using the results of processed hardware description language (HDL) instructions (such instructions capable of being stored on a computer-readable medium). The results of such processing may be maskworks that are then used in a semiconductor manufacturing process to manufacture a processor which implements aspects of the present invention.

Claims

1. A method comprising:

monitoring victim traffic and non-victim traffic between nodes of a processor;
selecting a routing scheme for the victim traffic that utilizes high bandwidth links between the nodes and a routing scheme for the non-victim traffic that utilizes low bandwidth links between the nodes; and
setting a control register to enable the routing scheme.

2. The method as in claim 1, wherein setting the control register includes setting a routing mode bit when distribution is enabled for a particular pair of processor nodes.

3. The method as in claim 2, wherein setting the control register includes:

setting a distribution node identification bit for each of the processor nodes involved with the distribution; and
setting a destination link element.

4. The method as in claim 1, wherein setting the control register includes setting a coherent request distribution enable bit to indicate that the routing scheme is enabled to handle victim requests.

5. The method as in claim 1, wherein setting the control register includes setting a coherent response distribution enable bit to indicate that the routing scheme is enabled to handle victim responses.

6. The method as in claim 1, wherein the victim traffic on the high bandwidth links includes a ganged two-hop request and the non-victim traffic on the low bandwidth links includes an unganged one-hop request.

7. The method as in claim 1, further comprising executing the routing scheme in the processor, where the processor includes at least three nodes, a first processor node connected to a second processor node by a low bandwidth link, a third processor node connected to the first processor node by a first high bandwidth link and connected to the second processor node by a second high bandwidth link;

wherein victim traffic is routed from the first node to the second node along the first and second high bandwidth links, and non-victim traffic is routed from the first node to the second node along the low bandwidth link.

8. A processor, comprising:

a first processor node connected to a second processor node by a low bandwidth link;
a third processor node connected to the first processor node by a first high bandwidth link and connected to the second processor node by a second high bandwidth link;
wherein each of the processor nodes comprises: a plurality of compute units connected to a crossbar switch, the crossbar switch configured to control traffic sent from the compute units to a designated link; and the compute units configured to set a control register having a defined routing scheme that determines the designated link, such that when executing the routing scheme, the crossbar switch is controlled to send victim traffic on the first and second high bandwidth links and to send non-victim traffic on the low bandwidth link.

9. The processor as in claim 8, wherein at least one of the plurality of compute units sets a routing mode bit in the control register when distribution is enabled for a particular pair of processor nodes.

10. The processor as in claim 9, wherein at least one of the plurality of compute units sets a distribution node identification bit in the control register for each of the processor nodes involved with the distribution and sets a destination link element.

11. The processor as in claim 8, wherein at least one of the plurality of compute units sets a coherent request distribution enable bit in the control register to indicate that the routing is enabled to handle victim requests.

12. The processor as in claim 8, wherein at least one of the plurality of compute units sets a coherent response distribution enable bit in the control register to indicate that the routing is enabled to handle victim responses.

13. The processor as in claim 8, wherein the victim traffic on the high bandwidth links includes a ganged two-hop request and the non-victim traffic on the low bandwidth links includes an unganged one-hop request.

14. A computer-readable storage medium storing a set of instructions for execution by one or more processors to perform a split routing scheme, the set of instructions comprising:

monitoring victim traffic and non-victim traffic between nodes of a processor;
selecting a routing scheme for the victim traffic that utilizes high bandwidth links between the nodes and a routing scheme for the non-victim traffic that utilizes low bandwidth links between the nodes.

15. The medium as in claim 14, wherein the victim traffic on the high bandwidth links includes a ganged two-hop request and the non-victim traffic on the low bandwidth links includes an unganged one-hop request.

16. The medium as in claim 14, the set of instructions further comprising:

enabling a distribution node and a destination link for the routing scheme.

17. The medium as in claim 14, the set of instructions further comprising:

enabling the routing scheme to handle victim requests.

18. The medium as in claim 14, the set of instructions further comprising:

enabling the routing scheme to handle victim responses.
Patent History
Publication number: 20120155273
Type: Application
Filed: Dec 15, 2010
Publication Date: Jun 21, 2012
Applicant: ADVANCED MICRO DEVICES, INC. (Sunnyvale, CA)
Inventors: William A. Hughes (San Jose, CA), Chenping Yang (Fremont, CA), Michael K. Fertig (Sunnyvale, CA), Kevin M. Lepak (Austin, TX)
Application Number: 12/968,857
Classifications
Current U.S. Class: Including Signaling Between Network Elements (370/236)
International Classification: H04L 12/26 (20060101);