Node identification for distributed shared memory system
An example embodiment of the present invention provides processes relating to a connection/communication protocol and a memory-addressing scheme for a distributed shared memory system. In the example embodiment, a logical node identifier comprises bits in the physical memory addresses used by the distributed shared memory system. Processes in the embodiment include logical node identifiers in packets which conform to the protocol and which are stored in a connection control block in local memory. By matching the logical node identifiers in a packet against the logical node identifiers in the connection control block, the processes ensure reliable delivery of packet data. Further, in the example embodiment, the logical node identifiers are used to create a virtual server consisting of multiple nodes in the distributed shared memory system.
The present application is for the broadening reissue of U.S. Pat. No. 7,715,400, entitled “NODE IDENTIFICATION FOR DISTRIBUTED SHARED MEMORY SYSTEM,” which issued May 11, 2010 from U.S. patent application Ser. No. 11/740,432, which was filed Apr. 26, 2007.
The present application is related to the following commonly-owned U.S. utility patent application, the disclosure of which is incorporated herein by reference in its entirety for all purposes: U.S. patent application Ser. No. 11/668,275, entitled “Fast Invalidation for Cache Coherency in Distributed Shared Memory System,” filed on Jan. 29, 2007.
TECHNICAL FIELD

The present disclosure relates to an identification process for the nodes in a distributed shared memory system.
BACKGROUND

A distributed shared memory (DSM) is a multiprocessor system in which the processors in the system are connected by a scalable interconnect, such as an InfiniBand switched fabric communications link, instead of a bus. DSM systems present a single memory image to the user, but the memory is physically distributed at the hardware level. Typically, each processor has access to a large shared global memory in addition to a limited local memory, which might be used as a component of the large shared global memory and also as a cache for the large shared global memory. Naturally, each processor will access the limited local memory associated with the processor much faster than the large shared global memory associated with other processors. This discrepancy in access time is called non-uniform memory access (NUMA).
A major technical challenge in DSM systems is ensuring that each processor's memory cache is consistent with each other processor's memory cache. Such consistency is called cache coherence. To maintain cache coherence in larger distributed systems, additional hardware logic (e.g., a chipset) or software is used to implement a coherence protocol, typically directory-based, chosen in accordance with a data consistency model, such as strict consistency. DSM systems that maintain cache coherence are called cache-coherent NUMA (ccNUMA) systems.
Typically, if additional hardware logic is used, a node in the system will comprise a chip that includes the hardware logic and one or more processors and will be connected to the other nodes by the scalable interconnect. For purposes of initial connection and later communication between nodes, the system might employ node identifiers, e.g., serial, random, or centrally-assigned numbers, which in turn might be used as part of an address for physical memory residing on the node.
SUMMARY

In particular embodiments, the present invention provides methods, apparatuses, and systems directed to node identification in a DSM system. In one particular embodiment, the present invention provides node-identification processes for use with a connection/communication protocol and a memory-addressing scheme in a DSM system.
The following example embodiments are described and illustrated in conjunction with apparatuses, methods, and systems which are meant to be examples and illustrative, not limiting in scope.
A. ccNUMA DSM System with DSM-Management Chips

A DSM system has been developed that provides cache-coherent non-uniform memory access (ccNUMA) through the use of a DSM-management chip. In a particular embodiment, a DSM system may comprise a distributed computer network of up to 16 nodes, connected by a switched fabric, where each node includes two or more Opteron CPUs and one DSM-management chip. In another embodiment, this DSM system comprises up to 256 nodes connected by the switched fabric.
The DSM system allows the creation of a multi-node virtual server, which is a virtual machine consisting of multiple CPUs belonging to two or more nodes. In some embodiments, the nodes use a connection/communication protocol to communicate with each other and with virtual I/O servers in the DSM system. Enforcement of the connection/communication protocol is also handled by the DSM-management chip. Consequently, in particular embodiments, virtual I/O servers include a DSM-management chip, though they do not contribute any physical memory to the DSM system and do not make use of the chip's functionality directly related to cache coherence. For a further description of a virtual I/O server, see U.S. patent application Ser. No. 11/624,542, entitled “Virtualized Access to I/O Subsystems”, and U.S. patent application Ser. No. 11/624,573, entitled “Virtual Input/Output Server”, both filed on Jan. 18, 2007, which are incorporated herein by reference for all purposes. As explained below, the connection/communication protocol uses an identifier called a logical node identifier (LNID) to identify source and destination nodes for packets that travel over the switched fabric.
In some embodiments, the CMM behaves like both a processor cache on a cache-coherent (e.g., ccHT) bus and a memory controller on a cache-coherent (e.g., ccHT) bus, depending on the scenario. In particular, when a processor on a node performs an access to a home (or local) memory address, the home (or local) memory will generate a probe request that is used to snoop the caches of all the processors on the node. The CMM will use this probe to determine if it has exported the block of memory containing that address to another node and may generate DSM probes (over the fabric) to respond appropriately to the initial probe. In this scenario, the CMM behaves like a processor cache on the cache-coherent bus.
When a processor on a node performs an access to a remote memory, the processor will direct this access to the CMM. The CMM will examine the request and satisfy it from the local cache, if possible, and, in the process, generate any appropriate probes. If the request cannot be satisfied from the local cache, the CMM will send a DSM request to the remote memory's home node to (a) fetch the block of memory that contains the requested data or (b) request a state upgrade. In this case, the CMM will wait for the DSM response before it responds back to the processor. In this scenario, the CMM behaves like a memory controller on the ccHT bus.
The RDM manages the flow of packets across the DSM-management chip's two fabric interface ports. The RDM has two major clients, the CMM and the DMA Manager (DMM), which initiate packets to be transmitted and consume received packets. The RDM ensures reliable end-to-end delivery of packets using a connection/communication protocol called the Reliable Delivery Protocol (RDP). On the fabric side, the RDM interfaces to the selected link/MAC (XGM for Ethernet, IBL for InfiniBand) for each of the two fabric ports. In particular embodiments, the fabric might connect nodes to other nodes. In other embodiments, the fabric might also connect nodes to virtual I/O servers. In particular embodiments, the processes using LNIDs described below might be executed by the RDM.
The XGM provides a 10G Ethernet MAC function, which includes framing, inter-frame gap handling, padding for minimum frame size, Ethernet FCS (CRC) generation and checking, and flow control using PAUSE frames. The XGM supports two link speeds: single data rate XAUI (10 Gbps) and double data rate XAUI (20 Gbps). In particular embodiments, the DSM-management chip has two instances of the XGM, one for each fabric port. Each XGM instance interfaces to the RDM, on one side, and to the associated PCS, on the other side.
The IBL provides a standard 4-lane IB link layer function, which includes link initialization, link state machine, CRC generation and checking, and flow control. The IBL block supports two link speeds, single data rate (8 Gbps) and double data rate (16 Gbps), with automatic speed negotiation. In particular embodiments, the DSM-management chip has two instances of the IBL, one for each fabric port. Each IBL instance interfaces to the RDM, on one side, and to the associated Physical Coding Sub-layer (PCS), on the other side.
The PCS, along with an associated quad-serdes, provides physical layer functionality for a 4-lane InfiniBand SDR/DDR interface, or a 10G/20G Ethernet XAUI/10GBase-CX4 interface. In particular embodiments, the DSM-management chip has two instances of the PCS, one for each fabric port. Each PCS instance interfaces to the associated IBL and XGM.
The DDR2 SDRAM Controller (SDC) attaches to one or two external 240-pin DDR2 SDRAM DIMMs, which are external to the DSM-management chip.
In some embodiments, the DSM-management chip might comprise an application specific integrated circuit (ASIC), whereas in other embodiments the chip might comprise a field-programmable gate array (FPGA). Indeed, the logic encoded in the chip could be implemented in software for DSM systems whose requirements might allow for longer latencies with respect to cache coherence, DMA, interrupts, etc.
C. RDP Packets and Their Headers

The Reliable Delivery Protocol allows RDP and non-RDP packets to co-exist on the same fabric. When RDP runs over the Ethernet MAC layer, RDP and non-RDP packets are distinguished from each other by the presence of the VLAN header and the value of the Length/Type field following it. For an RDP packet: (a) the VLAN header is present, i.e., the first Length/Type field (following the last SA byte) has a value of 0x0081; and (b) the second Length/Type field (following the VLAN header) has a value less than 1536 (frame length). An Ethernet frame that does not satisfy both of the above conditions is a non-RDP packet.
When RDP runs over the InfiniBand link layer, RDP and non-RDP packets are distinguished by the values of the LNH field in the IB Local Route Header and the OpCode field in the IB Base Transport Header. For an RDP packet: (a) LNH=0x2 (IBA Local); and (b) OpCode bits [7:6]=0x3 (Manufacturer Specific OpCode). An InfiniBand packet that does not satisfy both of the above conditions is a non-RDP packet.
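The two discrimination rules above can be captured in a short sketch. The helper names, the byte offsets (a VLAN-tagged Ethernet frame with the second Length/Type field at offset 16, and pre-extracted LNH/OpCode fields for InfiniBand), and the byte-order reading of the TPID as the wire bytes 0x81 0x00 are illustrative assumptions, not part of the protocol text:

```python
def is_rdp_ethernet(frame: bytes) -> bool:
    """Classify an Ethernet frame per the two RDP conditions (sketch).

    Assumes a raw frame: 6-byte DA, 6-byte SA, the first Length/Type
    field at offset 12 and, when a VLAN header is present, the second
    Length/Type field at offset 16.
    """
    # Condition (a): VLAN header present -- TPID bytes 0x81 0x00 on the wire.
    if frame[12:14] != b"\x81\x00":
        return False
    # Condition (b): second Length/Type is a frame length, i.e. below 1536.
    return int.from_bytes(frame[16:18], "big") < 1536


def is_rdp_infiniband(lnh: int, opcode: int) -> bool:
    """Classify an IB packet given the LNH field (from the Local Route
    Header) and the OpCode field (from the Base Transport Header)."""
    # (a) LNH == 0x2 (IBA Local); (b) OpCode bits [7:6] == 0x3
    # (manufacturer-specific opcode range).
    return lnh == 0x2 and (opcode >> 6) == 0x3
```

A frame or packet failing either of its two conditions falls through to the non-RDP path, so legacy traffic shares the fabric untouched.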
In particular embodiments, the DSM system uses a software data structure called the connection control block (CCB), stored in local memory such as the node's local main memory.
For an RDP connection between a pair of nodes, the node at each end uses an LNID to refer to the node at the other end. Within a multi-node virtual server (VS), every node is assigned a unique LNID, possibly by some management entity for the DSM system. For example, within a three-node VS, the LNID values might be 0, 1, and 2, or 1, 3, and 4, i.e., they need not be sequentially incrementing from 0. In addition, every server (multi-node virtual server or standalone server) assigns a unique LNID to each node that communicates with it. For example, a standalone server node that communicates with the virtual server described above might be assigned an LNID value of 16 by the VS. If that same node communicates with another server, it may be assigned the same LNID or a different LNID by that server. Therefore, LNID assignments are unique from the standpoint of a given server, but they are not unique across servers.
An example of LNID assignments is described below.
Table 7.2 shows the SrcLNID and DstLNID values used in the headers of RDP packets exchanged between different node pairs. For example, VS nodes A0 and A1 both belong to virtual server A, so a packet from A0 to A1 will have a SrcLNID value of 0 (the LNID assigned to A0 by VS A) and a DstLNID value of 1 (the LNID assigned to A1 by VS A). As another example, a packet from A1 to I/O server D will have a SrcLNID value of 2 (the LNID assigned to A1 by I/O server D) and a DstLNID value of 16 (the LNID assigned by VS A to I/O server D).
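The per-server nature of LNID assignment can be mirrored in a small lookup sketch. The node names, the table layout, and the `SERVER_OF` mapping are hypothetical illustrations of the example above (VS A's nodes A0 and A1, and I/O server D); only the LNID values come from the text:

```python
# LNIDs assigned *by* each server to the nodes that communicate with it.
ASSIGNED = {
    "VS_A": {"A0": 0, "A1": 1, "D": 16},  # VS A's own nodes, plus I/O server D
    "D": {"A1": 2},                        # I/O server D's assignment for A1
}

# The server whose LNID space each node's identity lives in.
SERVER_OF = {"A0": "VS_A", "A1": "VS_A", "D": "D"}


def rdp_lnids(sender: str, receiver: str) -> tuple:
    """Return (SrcLNID, DstLNID) for a packet from sender to receiver.

    SrcLNID is the receiving server's name for the sender; DstLNID is
    the sending server's name for the receiver -- so each end can match
    the packet against its own connection control block.
    """
    src_lnid = ASSIGNED[SERVER_OF[receiver]][sender]
    dst_lnid = ASSIGNED[SERVER_OF[sender]][receiver]
    return src_lnid, dst_lnid
```

Running the two examples from the text, `rdp_lnids("A0", "A1")` yields (0, 1) and `rdp_lnids("A1", "D")` yields (2, 16).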
As indicated earlier, the DSM system also uses LNIDs in its memory-addressing scheme. In particular embodiments, the physical memory address width is 40 bits (e.g., in DSM systems that use the present generation of Opteron CPUs), though it will be appreciated that there are numerous other suitable widths.
In particular embodiments of the DSM system, the physical address space for a virtual server is arranged so that the local node's memory always starts at address 0 (zero). One reason for using this arrangement is compatibility with legacy system software, in particular embodiments. Specifically, with local memory starting at address 0, system software (e.g., boot code) accesses local memory the same way that it does on a standard server. Another reason for using this arrangement is that it simplifies the address lookup in the CMM. For a memory read/write request from a local processor, an address in the lower 1/16th or 1/256th segment of the 40-bit address space is always local and all other addresses map to memory in other nodes.
To see how the arrangement works, consider the example of a virtual server consisting of three nodes: 0, 1, and 2. In a 16-node DSM system, the total addressable memory space for this virtual server would be 1 terabyte (2^40 bytes), and each node would be allocated a segment which is 1/16 of that space (64 GB or 2^36 bytes). From a global view, the first 64 GB segment of the physical address space starting at address 0 would be allocated to node 0 (i.e., the node whose LNID equals 0), the next 64 GB segment to node 1, and the following segment to node 2. The remaining 13 segments would be unused since LNIDs 3-15 are not used.
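Under this layout, the home node of a global address is simply its top bits. A minimal sketch for the 16-node case (4 LNID bits above a 36-bit, 64 GB segment offset); the function names are illustrative:

```python
ADDR_BITS = 40   # physical address width in the example embodiment
NODE_BITS = 4    # 16-node system: the top 4 bits carry the LNID
SEG_BITS = ADDR_BITS - NODE_BITS   # 36 -> 64 GB (2**36 bytes) per segment


def home_lnid(addr: int) -> int:
    """LNID of the node whose segment contains this global address."""
    return addr >> SEG_BITS


def segment_base(lnid: int) -> int:
    """First global address of the segment allocated to this LNID."""
    return lnid << SEG_BITS
```

For a 256-node system the same sketch holds with `NODE_BITS = 8` and 4 GB segments.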
It will be appreciated that in order to accomplish this arrangement, the locations of the local segment and the node 0 segment are swapped in the address map. And since MY_LNID, as defined above, is the LNID assigned to the local node, this is equivalent to swapping MY_LNID with LNID 0 in the address map. However, such a swapping would create confusion in the DSM system if it were applied to memory traffic leaving the node over the switched fabric. Therefore, the node's CMM reverses the swapping for traffic leaving the node.
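The swap is a self-inverse rewrite of the address's LNID bits, which is why the CMM can apply the same operation to reverse it for outbound traffic. A sketch under the 16-node layout described above; the function name and parameters are illustrative:

```python
def swap_local_lnid(addr: int, my_lnid: int, seg_bits: int = 36) -> int:
    """Swap MY_LNID with LNID 0 in the top bits of a physical address.

    Locally, the node's own memory then starts at address 0; applying
    the same function again restores the global view, as the CMM does
    for memory traffic leaving the node over the switched fabric.
    """
    lnid = addr >> seg_bits
    offset = addr & ((1 << seg_bits) - 1)
    if lnid == my_lnid:
        lnid = 0           # the local segment appears at address 0
    elif lnid == 0:
        lnid = my_lnid     # node 0's segment moves to the local slot
    return (lnid << seg_bits) | offset
```

Addresses homed on any other node pass through unchanged, so only the local and node-0 segments trade places in the map.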
Particular embodiments of the above-described processes might be comprised of instructions that are stored on storage media. The instructions might be retrieved and executed by a processing system. The instructions are operational when executed by the processing system to direct the processing system to operate in accord with the present invention. Some examples of instructions are software, program code, firmware, and microcode. Some examples of storage media are memory devices, tape, disks, integrated circuits, and servers. The term “processing system” refers to a single processing device or a group of inter-operational processing devices. Some examples of processing devices are integrated circuits and logic circuitry. Those skilled in the art are familiar with instructions, storage media, and processing systems.
Those skilled in the art will appreciate variations of the above-described embodiments that fall within the scope of the invention. In this regard, it will be appreciated that there are many other possible orderings of the steps in the processes described above and many other possible modularizations of those orderings. Also, it will be appreciated that the above processes relating to memory-addressing will work with physical memory addresses that exceed 40 bits in width and DSM systems that have more than 256 nodes. Further, it will be appreciated that the DSM system will work with nodes whose CPUs are not Opterons having a ccHT bus. As a result, the invention is not limited to the specific examples and illustrations discussed above, but only by the following claims and their equivalents.
Claims
1. A method, comprising:
- receiving, at a distributed memory logic circuit of a first node, data for a packet destined to a distributed memory logic circuit of a second node, wherein the first and second nodes are connected by a network switch fabric and are parts of a distributed shared memory system, and wherein the data for the packet includes a physical memory address in which one or more bits in the physical memory address comprise a destination logical node identifier for the second node;
- using the destination logical node identifier as an index into a connection control block to locate an entry for a connection between the first and second nodes, resulting in a located entry of the connection control block, wherein the connection control block is stored in a local memory on the first node;
- building the packet in a format of a connection and communication protocol using the data, the destination logical node identifier, and a logical node identifier for the first node, wherein the logical node identifier for the first node is included in the located entry of the connection control block;
- adding, to the packet, a header that includes a switch fabric address for the second node, wherein the switch fabric address is identified in the located entry of the connection control block; and
- transmitting the packet on a link to the switch fabric.
2. A method as in claim 1, wherein the distributed shared memory system is a cache coherent non-uniform memory access system.
3. A method as in claim 1, wherein a distributed memory logic circuit in the first node sets the destination logical node identifier to zero if the destination logical node identifier in the physical memory address equals the logical node identifier for the first node.
4. A method, comprising:
- receiving, at a distributed memory logic circuit of a first node, a packet from a distributed memory logic circuit of a second node, wherein the packet includes a source logical node identifier and wherein the first and second nodes are connected by a network switch fabric and are parts of a distributed shared memory system;
- determining whether a destination switch fabric address included in the packet matches a switch fabric address for the first node;
- using the source logical node identifier as an index into a connection control block to locate an entry for a connection between the first and second nodes, resulting in a located entry of the connection control block, wherein the connection control block is stored in a local memory on the first node;
- determining whether a destination logical node identifier included in the packet matches a logical node identifier for the first node, wherein the logical node identifier for the first node is identified in the located entry of the connection control block; and
- accepting data in the packet for further processing by the first node.
5. The method of claim 4, wherein the packet is discarded if the destination switch fabric address included in the packet does not match the switch fabric address for the first node.
6. The method of claim 4, wherein the packet is discarded if the destination logical node identifier does not match the logical node identifier for the first node identified in the located entry of the connection control block.
7. The method of claim 4, wherein the distributed shared memory system is a cache coherent non-uniform memory access system.
8. A distributed memory logic circuit encoded with executable logic, the logic when executed operable to:
- receive, at the distributed memory logic circuit of a first node, data for a packet destined to a distributed memory logic circuit of a second node, wherein the first and second nodes are connected by a network switch fabric and are parts of a distributed shared memory system, and wherein the data for the packet includes a physical memory address in which one or more bits in the physical memory address comprise a destination logical node identifier for the second node;
- use the destination logical node identifier as an index into a connection control block to locate an entry for a connection between the first and second nodes, resulting in a located entry of the connection control block, wherein the connection control block is stored in a local memory on the first node;
- build the packet in a format of a connection and communication protocol using the data, the destination logical node identifier, and a logical node identifier for the first node, wherein the logical node identifier for the first node is included in the located entry of the connection control block;
- add, to the packet, a header that includes a switch fabric address for the second node, wherein the switch fabric address is identified in the located entry of the connection control block; and
- transmit the packet on a link to the switch fabric.
9. The distributed memory logic circuit of claim 8, wherein the distributed shared memory system is a cache coherent non-uniform memory access system.
10. The distributed memory logic circuit of claim 8, wherein the logic is further operable to set the destination logical node identifier to zero if the destination logical node identifier in the physical memory address equals the logical node identifier for the first node.
11. A distributed memory logic circuit encoded with executable logic, the logic when executed operable to:
- receive, at the distributed memory logic circuit of a first node, a packet from a distributed memory logic circuit of a second node, wherein the packet includes a source logical node identifier and wherein the first and second nodes are connected by a network switch fabric and are parts of a distributed shared memory system;
- determine whether a destination switch fabric address included in the packet matches a switch fabric address for the first node;
- use the source logical node identifier as an index into a connection control block to locate an entry for a connection between the first and second nodes, resulting in a located entry of the connection control block, wherein the connection control block is stored in a local memory on the first node;
- determine whether a destination logical node identifier included in the packet matches a logical node identifier for the first node, wherein the logical node identifier for the first node is identified in the located entry of the connection control block; and
- accept data in the packet for further processing by the first node.
12. The distributed memory logic circuit of claim 11, wherein the packet is discarded if the destination switch fabric address included in the packet does not match the switch fabric address for the first node.
13. The distributed memory logic circuit of claim 11, wherein the packet is discarded if the destination logical node identifier does not match the logical node identifier for the first node identified in the located entry of the connection control block.
14. The distributed memory logic circuit of claim 11, wherein the distributed shared memory system is a cache coherent non-uniform memory access system.
15. A distributed shared memory system comprising:
- a network switch fabric;
- two or more nodes in a distributed shared memory system connected by the network switch fabric, each of the two or more nodes comprising:
- one or more processors;
- local memory; and
- a distributed shared memory logic circuit,
- wherein the distributed memory logic circuit is encoded with executable logic that,
- when executed, is operable to: receive, at the distributed memory logic circuit of a local node, data for a packet destined to a distributed memory logic circuit of a remote node of the two or more nodes in the distributed shared memory system, wherein the data for the packet includes a physical memory address in which one or more bits in the physical memory address comprise a destination logical node identifier for the remote node, use the destination logical node identifier as an index into a connection control block to locate an entry for a connection between the local node and the remote node, resulting in a located entry of the connection control block, wherein the connection control block is stored in local memory on the local node,
- build the packet in a format of a connection and communication protocol using the data, the destination logical node identifier, and a logical node identifier for the local node, wherein the logical node identifier for the local node is included in the located entry of the connection control block,
- add, to the packet, a header that includes a switch fabric address for the remote node, wherein the switch fabric address is identified in the located entry of the connection control block,
- transmit the packet on a link to the network switch fabric, receive, at the distributed memory logic circuit of the local node, a second packet from a distributed memory logic circuit of the remote node or another remote node of the two or more nodes in the distributed shared memory system, wherein the second packet includes a source logical node identifier,
- determine whether a destination switch fabric address included in the second packet matches a switch fabric address for the local node,
- use the source logical node identifier as an index into the connection control block to locate an entry for a connection between the local and remote node, resulting in a second located entry of the connection control block, determine whether a destination logical node identifier included in the second packet matches the logical node identifier for the local node, wherein the logical node identifier for the local node is identified in the second located entry of the connection control block, and
- accept data in the second packet for further processing by the local node.
16. A method comprising:
- receiving, at a first node in a distributed shared memory system, a message from a second node in the distributed shared memory system, the distributed shared memory system comprising a plurality of interconnected nodes each having a unique logical node identifier, wherein the message indicates a memory operation related to a local memory of the first node and identifies a memory address;
- if a first plurality of contiguous bits of the memory address equal a logical node identifier of the first node, changing the first plurality of contiguous bits to a predetermined value;
- if the first plurality of contiguous bits of the memory address equal the predetermined value, changing the first plurality of contiguous bits to the logical node identifier of the first node; and
- forwarding the message to a processor of the first node for processing.
17. The method of claim 16, wherein the predetermined value is zero.
18. The method of claim 16, wherein each node of the plurality of interconnected nodes internally accesses a respective local memory having memory addresses with a first plurality of contiguous bits set to the predetermined value.
19. The method of claim 16, wherein a given node of the plurality of interconnected nodes accesses a local memory of another node of the plurality of interconnected nodes that has a logical node identifier equal to the predetermined value using the given node's own respective logical node identifier for the another node.
20. The method of claim 16, wherein the memory operation is one of a read command, a write command, or a probe.
21. A method comprising:
- receiving, at a first node in a distributed shared memory system, a message from a processor of the first node identifying a memory operation related to a local memory of a second node in the distributed shared memory system, the distributed shared memory system comprising a plurality of nodes each having a unique logical node identifier, the plurality of nodes being interconnected by a switch fabric, wherein the message identifies a memory address;
- if a first plurality of contiguous bits of the memory address equal a logical node identifier of the first node, changing the first plurality of contiguous bits to a predetermined value;
- if the first plurality of contiguous bits of the memory address equal the predetermined value, changing the first plurality of contiguous bits to the logical node identifier of the first node; and
- forwarding the message to the second node for processing.
22. A distributed shared memory system, comprising:
- a network switch fabric; and
- a plurality of nodes interconnected by the network switch fabric, each given node of the plurality of nodes comprising: a logical node identifier of a plurality of contiguous bits; a local memory; a distributed shared memory management chip operative to share the local memory of the given node with others of the plurality of nodes in the distributed shared memory system to create a shared memory accessible using binary addresses comprising a plurality of bits, wherein a set of contiguous most-significant bits of the binary addresses collectively represent a logical node identifier of a node of the plurality of nodes; and one or more processors each operative to access the local memory of the given node, the local memory accessed using binary addresses having the set of contiguous most-significant bits collectively set to a predetermined value, wherein the distributed shared memory management chip is further operative to map the predetermined value to the logical node identifier of the given node in memory management traffic transmitted between the plurality of nodes that include one or more binary addresses of the shared memory.
23. The distributed shared memory system of claim 22, wherein the distributed shared memory management chip of each node of the plurality of nodes is further operative to:
- if the set of contiguous most-significant bits of a given binary address equal the logical node identifier of the given node, change the set of contiguous most-significant bits of the given binary address to the predetermined value; and
- if the set of contiguous most-significant bits of the given binary address equal the predetermined value, change the set of contiguous most-significant bits of the given binary address to the logical node identifier of the given node.
5774731 | June 30, 1998 | Higuchi et al. |
6160814 | December 12, 2000 | Ren et al. |
6757790 | June 29, 2004 | Chalmer et al. |
6877030 | April 5, 2005 | Deneroff |
6922766 | July 26, 2005 | Scott |
20010037435 | November 1, 2001 | Van Doren |
20030076831 | April 24, 2003 | Van Doren et al. |
20040030763 | February 12, 2004 | Manter et al. |
20040148472 | July 29, 2004 | Barroso et al. |
Type: Grant
Filed: May 10, 2012
Date of Patent: Nov 26, 2013
Assignee: Intellectual Ventures Holding 80 LLC (Las Vegas, NV)
Inventors: Shahe Hagop Krakirian (Palo Alto, CA), Isam Akkawi (Sunnyvale, CA)
Primary Examiner: Duc C Ho
Application Number: 13/468,751
International Classification: H04L 12/28 (20060101);