HYPERSCALE SERVER ARCHITECTURE
In a switch-fabric-based infrastructure, a flexible, scalable server is obtained by the physical disaggregation of converged resources into pools of a plurality of operationally independent resource element types, such as storage, computing and networking. A plurality of computing facilities can be created, either dynamically or statically, by resource element managers composing instances of resources from such pools, expressed across a single disaggregated logical resource plane.
The present invention relates to a scalable server architecture and a method for implementing a scalable server.
BACKGROUND OF THE INVENTION
Traditionally, a compute node has a processor (or CPU) with a defined processing capability, a local memory allocated to the compute node, and its I/O interfaces.
Such a compute node forms an independently capable server whose compute, memory and networking resources are generally sufficient to manage the most complex tasks.
The CPU is the master of the node, with sole ownership of the attached memory and of the I/O operations that provide its interface to the external world. The processor's I/O is its link to the other system resources, such as persistent storage (HDD/SSD) and networking (Ethernet NIC).
This architecture became popular when the desktop PC became a commodity, and it has not fundamentally changed since, even as it was adopted into the server market. However, to scale to the computing demands of today's applications, servers must scale beyond the resources that can be supplied by a single compute node.
Existing solutions to these requirements can be summarized as follows.
- More processing performance: use a higher-performance CPU, together with multicore and NUMA processing, to create a larger, more capable processing node.
- Increase the number of CPU memory channels to local memory, so as to increase both the total available memory and the bandwidth to memory.
- Increase the number and speed of the I/O interfaces to support faster/larger storage and network interfaces.
- Replicate multiple independent compute nodes and use clustering software (and, more recently, hyper-converged software above a virtualization layer) to manage the multiple compute nodes as a single datacentre/cluster.
There are various restrictions on the future applicability of these approaches, and fundamental physical constraints mean they are reaching the end of their usefulness.
- Faster processors have reached the economic limits of fabrication technology, bringing an end to the benefits of silicon scaling and a consequent power/density limit on further increases in CPU performance. Likewise, memory capability is limited by physical silicon size (constrained by fabrication and thermal issues), by the number of pins available to connect to memory, and by the physical distance at which memory can be placed from the processor element.
- To scale beyond a single multicore device, NUMA processing enables a small number of compute nodes to share a common view of the memory and I/O of a multi-socket server. However, maintaining this illusion of unity requires significant complexity, and the returns from such scaling become negligible beyond 4 to 6 compute nodes.
- Virtualization software balances a system's computing resources, which must be averaged across multiple applications, at the cost of lost performance and of complex management to get right. This is why many cloud providers can show only 10% system utilization on their servers: the balance of resources is not appropriate.
Several solutions have been developed to try to overcome the above limitations of present approaches.
US 2011/0271014 presents a system and a method for scaling memory capacity by identifying a memory page that is accessible via a common physical address, providing a virtual machine with direct access to an I/O device, and managing memory using memory disaggregation. In this solution the process is controlled by a single processor, which manages the mapping of physical addresses.
US 2016/0216982 presents a forward fabric platform system for scaling I/O resources, comprising a plurality of nodes, an interconnect backplane coupled between the plurality of nodes and owned by the CPU, and a Forward Fabric Manager (FFM). The fabric computing system has an embedded software-defined network whose front end is managed by a security manager physically located in a node. In this solution everything is controlled by, and dependent on, the host CPU.
US 2012/0017037 presents a distributed storage system for scaling available storage, comprising a plurality of compute nodes executing one or more application processes capable of accessing a persistent shared memory implemented by solid-state devices physically maintained on the nodes, the application processes communicating with a shared data fabric (SDF) to access that memory. In this solution each persistent memory is controlled by a controller on the CPU internal to the node.
US 2014/0122560 presents a flexible scalable server comprising a plurality of tiled compute nodes, each node comprising a plurality of cores formed of a processor and switching circuitry. The switching circuitry couples the processor to a network among the cores, and the cores implement networking functions within the compute node. In this solution inter-node routing is done in software on the compute node, so the processing of inter-node routing is performed by CPUs in the node.
All of the above solutions share the same limitation: they need a CPU in the node that in some way manages access to the resource elements.
Since the processing element/CPU is the master of the node, interactions between different nodes, and with the resources of a node, must be controlled and managed by the CPU, creating inefficiencies due to the software processing of I/O transactions and limiting the capability of any given storage or networking resource. For example, no existing software on a system can manage the full bandwidth of a 100 Gb/s Ethernet connection.
In addition, there is no flexibility in the system architecture beyond what the CPU enables. For example, if a given processing load needs twice as much I/O networking bandwidth for a given compute level, this can only be addressed by a completely different system designed with twice the networking bandwidth interfacing with the processing element. This I/O bottleneck is well understood, and affects, for example, GPU accelerators as well as high-speed network interfaces, which today must connect to an external host CPU through its PCIe I/O interface, or within a SoC to the internal host CPU and its comparable I/O interface.
SUMMARY OF THE INVENTION
It is an object of the present invention to propose a scalable server architecture able to overcome the above-discussed limits of existing solutions.
According to a first aspect of the present invention, the above objects, and others, are attained by a compute node comprising a plurality of physical resource elements defined across a physically converged substrate, and a switch fabric used to couple the physical resource elements to each other by using a processor-aware addressing scheme to physically disaggregate the various types of resource elements so that they form pools of a plurality of operationally independent resource element types expressed within a single plane of disaggregated logical resources. The switch fabric is also bridged to an external I/O resource through an instance of a resource element type.
According to another aspect of the present invention, the above objects, and others, are attained by a scalable server comprising plural compute nodes, each compute node comprising a plurality of physical resource elements defined across a physically converged substrate, and a switch fabric used to couple the physical resource elements to each other by using a processor-aware addressing scheme to physically disaggregate resource elements so that they form pools of a plurality of operationally independent resource element types, wherein the switch fabric is also used to couple said compute nodes to each other, extending the physically converged substrates into a global physically converged substrate, and wherein said pools of a plurality of operationally independent resource elements of each compute node are expressed together within a single plane of disaggregated logical resources. The switch fabric is also bridged to an external I/O resource through an instance of a resource element type.
According to another aspect of the present invention, the above objects, and others, are attained by a method of implementing a scalable server machine comprising one or more physically converged substrates, a plurality of physical resource elements being defined across each physically converged substrate, and fabric switches connecting the physical resource elements across the physically converged substrates using processor-native addressing, wherein the method comprises: physically disaggregating the physical resource elements; expressing the disaggregated physical resource elements as pools of a plurality of operationally independent logical resource element types within a single plane of disaggregated logical resources; and abstracting a computing facility from said pools of logical resource element types by collecting instances of logical resource elements from said pools.
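By way of illustration only, the three steps of this method can be modelled in software. The following sketch is hypothetical — the names ResourceElement, LogicalResourcePlane and compose_facility are assumptions, not terms from this disclosure — and simply mirrors the claimed flow: physical elements are disaggregated into typed pools on a single logical plane, and a computing facility is abstracted by collecting instances from those pools.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical model of the claimed method; all names are illustrative only.

@dataclass
class ResourceElement:
    kind: str          # e.g. "processing", "memory", "storage", "network"
    node: int          # compute node the element is physically defined on
    capacity: int      # abstract capability units

class LogicalResourcePlane:
    """Single plane of disaggregated logical resources."""
    def __init__(self):
        self.pools = defaultdict(list)   # one pool per resource element type

    def disaggregate(self, elements):
        # Steps 1 and 2: physically disaggregate and express as typed pools.
        for el in elements:
            self.pools[el.kind].append(el)

    def compose_facility(self, requirements):
        # Step 3: abstract a computing facility by collecting instances
        # of logical resource elements from the pools.
        facility = []
        for kind, count in requirements.items():
            pool = self.pools[kind]
            if len(pool) < count:
                raise RuntimeError(f"pool of {kind} elements exhausted")
            facility.extend(pool.pop() for _ in range(count))
        return facility

# Elements from two compute nodes are expressed on one logical plane.
plane = LogicalResourcePlane()
plane.disaggregate([
    ResourceElement("processing", node=0, capacity=4),
    ResourceElement("memory",     node=0, capacity=64),
    ResourceElement("storage",    node=1, capacity=512),
    ResourceElement("network",    node=1, capacity=100),
])

# A facility may draw elements from any node, independently of locality.
facility = plane.compose_facility({"processing": 1, "memory": 1, "storage": 1})
print([f"{el.kind}@node{el.node}" for el in facility])
```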
The method defined above can adopt and use the most capable processor devices, along with their physical memory interface capability, to implement the processing element. This element requires only the CPU functionality and its memory interface, plus at least one link to the global resource switch fabric. This permits a system according to the invention to use the best processors, in a system that does not need costly and market-limiting integration of the other system resources.
In addition, since each element of the system can be selected and integrated in different configurations, the solution can address any market with a high return on investment.
Furthermore, since resources are locally attached, the highest performance and lowest cost can be achieved through integration and resource locality. However, since each compute node also exposes its share (i.e. everything it can share) to the global resource pool, all resource elements can arbitrate remote access, thus creating disaggregated pools of each resource element type.
Finally, since each element can instantiate its physical interfaces anywhere in the global resource substrate, the capability of any element can be accessed as if that resource were physically attached within the other resource elements. For example, the physical storage of a remotely defined resource element can be exposed directly alongside the storage of a resource element local to a node, and the I/O buffers of a device resource element can be placed directly within the memory of any resource element. Such physicalization removes the physical limitations of attaching a resource to any single processing element, such as pin count, distance, thermal constraints and fabrication.
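As a hedged sketch of what instantiating a physical interface "anywhere in the global resource substrate" could look like, the model below allocates each element a window of a global physical address space and aliases part of a remote element's window into a local one, so that the remote resource appears locally attached. The window size, class names and address layout are all illustrative assumptions, not details from this disclosure.

```python
# Hypothetical global address map: each resource element owns a window of
# the global physical address space; exposing a remote resource means
# aliasing part of its window into another element's local range.

WINDOW = 1 << 30    # illustrative 1 GiB window per element

class GlobalSubstrate:
    def __init__(self):
        self.next_base = 0
        self.aliases = {}          # local address -> (owner, remote offset)

    def attach(self, name):
        # Give a physical element its window in the global address space.
        base = self.next_base
        self.next_base += WINDOW
        return name, base

    def expose(self, local_base, offset, owner, remote_offset):
        # Make `owner`'s buffer appear at local_base+offset, as if the
        # remote resource were physically attached to the local element.
        self.aliases[local_base + offset] = (owner, remote_offset)

    def resolve(self, addr):
        return self.aliases.get(addr, ("local", addr))

substrate = GlobalSubstrate()
cpu_name, cpu_base = substrate.attach("processing-0")
ssd_name, ssd_base = substrate.attach("storage-remote")

# Remote storage is exposed directly inside the processor's window.
substrate.expose(cpu_base, 0x1000, ssd_name, 0x0)
print(substrate.resolve(cpu_base + 0x1000))   # -> ('storage-remote', 0)
```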
For a better comprehension of its advantages and features, an embodiment of the invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings.
With reference to the figures, a compute node 10 comprises a plurality of physical resource elements 21, 22, 23, 24, 25, 26 defined across a physically converged substrate 20 and coupled to each other by a switch fabric, which may be composed of one or more fabric switches aggregated or distributed in one or more of the physical resource elements.
Alternatively, a switch fabric composed of one or more fabric switches is defined independently within the compute node 10 and interfaces to one or more of the physical resource elements 21, 22, 23, 24, 25, 26. Even with this physical arrangement, the switch fabric is adapted to couple the plurality of physical resource elements 21, 22, 23, 24, 25, 26 of the compute node 10 to each other by using a processor-native or processor-aware addressing scheme.
Advantageously, the compute node 10 is designed to provide convergence of processing, memory, storage and networking system resources with a physically balanced ratio of capabilities.
In other embodiments of the invention, further or different physical resource elements can be defined across the physically converged substrate 20, such as accelerators and any other resource element type used within a computing facility, acting either as a master or as a slave to another physical resource element.
In any case, a plurality of resource elements selected from among processing, storage, network, accelerator, memory and other such element types are defined across a physically converged substrate 20 according to the invention.
According to the present invention, a compute node 10 is the disaggregation of the physical resource elements defined across the physically converged substrate 20, the result being a plurality of operationally independent resource element types, each composed of a pool of resource elements.
With further reference to the figures, a scalable server comprises a plurality of compute nodes 10 whose switch fabrics are connected through an additional fabric switch 50.
The additional fabric switch 50 operates as a server-mounted network resource element to further extend the switch fabrics of the compute nodes 10 into a single common switch fabric, and hence the physically converged substrates 20 into a global physically converged substrate, and to bridge it to an external network.
In the embodiment considered here, each compute node 10 is connected to the additional fabric switch 50 through its network resource element 24.
Alternatively, the additional fabric switch 50, i.e. the server-mounted network resource element, is not present, and the network resource elements 24 of the compute nodes are connected directly to an external network. In this case, the work of the additional fabric switch 50 is performed directly by the fabric switches 30 of the compute nodes 10.
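The two arrangements just described — with the server-mounted fabric switch 50, or with the node fabrics bridging directly to the external network — can be sketched as a simple topology, as below. The builder function and its link naming are assumptions made for illustration, not structures defined by this disclosure.

```python
# Hypothetical sketch of the two bridging arrangements: node switch
# fabrics 30 joined either through a server-mounted fabric switch 50
# or directly to the external network.

def build_fabric(nodes, server_switch=True):
    links = []
    for n in range(nodes):
        # Each compute node couples its resource elements via fabric 30.
        links.append((f"fabric30/node{n}", f"net-element24/node{n}"))
        if server_switch:
            # Network elements 24 uplink into the additional switch 50...
            links.append((f"net-element24/node{n}", "fabric-switch50"))
        else:
            # ...or bridge directly, the node fabrics doing the work of 50.
            links.append((f"net-element24/node{n}", "external-network"))
    if server_switch:
        links.append(("fabric-switch50", "external-network"))
    return links

for link in build_fabric(nodes=2, server_switch=True):
    print(" <-> ".join(link))
```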
In any case, thanks to the networking resource elements, a single common switch fabric is created which extends the switch fabrics of the compute nodes 10. With reference to the figures, the pools of operationally independent resource elements of all the compute nodes 10 are then expressed together within a single plane of disaggregated logical resources 71.
Obviously, where the physical resource elements defined across any physically converged substrate 20 of a scalable server according to the invention include other types of resources, such as accelerators, the disaggregated logical resource plane 71 also contains those resource element types.
With reference to the figures, one or more computing facilities 82 can be abstracted from the plane of disaggregated logical resources 71 by collecting instances of logical resource elements from the pools of logical resource element types 72, 73, 74, 75.
More specifically, a computing facility 82 can be created, dynamically or statically, by a) physicalization of resource elements through a common physical address space, b) virtualization of resource elements over any form of abstracted communication, or c) any combination thereof.
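Purely as an illustration of these three creation modes, the sketch below models physicalization as mapping at a common physical address and virtualization as access over an abstracted channel, with a single facility free to combine both. The class names PhysicalizedLink and VirtualizedLink are hypothetical.

```python
# Hypothetical model of the three ways a computing facility 82 can be
# created: (a) physicalization, (b) virtualization, (c) a combination.

class PhysicalizedLink:
    """Access through a common physical address space (mode a)."""
    def __init__(self, element, base_addr):
        self.element, self.base_addr = element, base_addr
    def describe(self):
        return f"{self.element} mapped at {self.base_addr:#x}"

class VirtualizedLink:
    """Access over a form of abstracted communication (mode b)."""
    def __init__(self, element, channel):
        self.element, self.channel = element, channel
    def describe(self):
        return f"{self.element} reached over {self.channel}"

# Mode (c): a single facility freely combines both kinds of link.
facility_82 = [
    PhysicalizedLink("memory-element-23", base_addr=0x4000_0000),
    VirtualizedLink("storage-element-25", channel="fabric-queue-pair"),
]
for link in facility_82:
    print(link.describe())
```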
In the logical view of the method of the invention, each logical resource element type (72, 73, 74, 75) becomes a logical pool of resources, built internally using traditional processor SoC addressing schemes, within a global pool of resources. No single resource element is the master of the computing facility 82; as such, networking can serve storage without processor element involvement. It also means the capabilities of each compute node 10 can be independently defined and instantiated without the traditional cost of building a new SoC with different I/O resource capabilities.
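A minimal sketch of the point that networking can serve storage without processor element involvement is given below: a network resource element forwards a read request straight to a storage element over the fabric, with no processing element on the data path. The queue-based interface is an assumption of the sketch, not an interface defined here.

```python
from collections import deque

# Hypothetical peer-to-peer path: a network resource element forwards a
# read request straight to a storage element over the switch fabric; no
# processing element is the master of the exchange.

class StorageElement:
    def __init__(self):
        self.blocks = {7: b"hello from block 7"}
    def read(self, block):
        return self.blocks.get(block, b"")

class NetworkElement:
    def __init__(self, storage):
        self.storage = storage           # coupled directly via the fabric
        self.rx = deque()                # incoming requests from the wire
    def serve(self):
        while self.rx:
            block = self.rx.popleft()
            payload = self.storage.read(block)   # no CPU touches this path
            print(f"sent block {block}: {payload!r}")

net = NetworkElement(StorageElement())
net.rx.append(7)
net.serve()
```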
In the physical view of the method of the invention, each computing facility 82 is created by the convergence of processing, memory, storage and networking system resources, using a physically balanced ratio of the capabilities required of the compute node 10. A single computing facility 82 can therefore include any number of processing elements 21, storage elements 25, 26 or network resource elements 24. Each physical resource element (21, 22, 23, 24, 25, 26) cannot exist independently, but only when connected with one or more of the other physical resource element types; the resource elements are arranged using a processor-aware addressing scheme of physical disaggregation and must therefore interconnect to become a meaningful system.
There is no fixed CPU allocation with memory dedicated to a single processing unit; instead, a pool of memories distributed across the compute elements can be used by the different processing units, and the processing units can be connected together to adapt the processing capability to the requirements of the specific tasks. Likewise, the global resource pool addressing scheme allows physical I/O resources placed anywhere in the system to be attached to a processing element as if the resource were physically attached to the local address bus of the processor.
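The pooled-memory idea can be hedged into a sketch as follows: memory regions contributed by different compute nodes form one allocator from which any processing unit may draw, irrespective of locality. The first-fit allocator and its names are illustrative assumptions only.

```python
# Hypothetical pooled-memory allocator: memory elements from all compute
# nodes contribute regions to one pool, which any processing unit can
# draw from as if the memory were on its local address bus.

class MemoryPool:
    def __init__(self):
        self.regions = []                      # (node, free GiB) free list

    def contribute(self, node, size_gib):
        self.regions.append((node, size_gib))

    def allocate(self, size_gib):
        # First-fit over the distributed pool, irrespective of locality.
        for i, (node, free) in enumerate(self.regions):
            if free >= size_gib:
                self.regions[i] = (node, free - size_gib)
                return node                    # where the memory physically is
        raise MemoryError("global pool exhausted")

pool = MemoryPool()
pool.contribute(node=0, size_gib=64)
pool.contribute(node=1, size_gib=64)

# A processing unit on node 0 transparently borrows memory from node 1.
print("48 GiB on node", pool.allocate(48))   # satisfied locally from node 0
print("32 GiB on node", pool.allocate(32))   # spills to node 1, transparently
```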
All logical resources are therefore considered at the same level of importance in the system.
Additionally, it is not necessary to go through the CPU to 'speak' with the memory or the other resources physically associated with a computing facility 82: access is possible directly through the global resource address, without management by any other element of the system (assuming the appropriate security and access privileges).
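The parenthetical about security and access privileges suggests an access check that gates direct global-address access without mediation by any other element. The sketch below assumes a simple capability table; nothing in this disclosure specifies such a mechanism, so the table and function names are purely illustrative.

```python
# Hypothetical access check gating direct access through the global
# resource address, with no mediation by a CPU or any other element.

ACCESS_TABLE = {
    # (requesting facility, resource) -> set of permitted operations
    ("facility-82", "memory-element-23"): {"read", "write"},
    ("facility-82", "storage-element-25"): {"read"},
}

def direct_access(facility, resource, op):
    allowed = ACCESS_TABLE.get((facility, resource), set())
    if op not in allowed:
        raise PermissionError(f"{facility} may not {op} {resource}")
    return f"{op} on {resource} performed directly over the fabric"

print(direct_access("facility-82", "memory-element-23", "write"))
print(direct_access("facility-82", "storage-element-25", "read"))
```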
It is understood that what is described above is a purely non-limiting example; possible variants of detail that may be necessary for technical and/or functional reasons are therefore considered to fall within the protective scope defined by the claims below.
Claims
1. A compute node comprising a plurality of physical resource elements defined across a physically converged substrate, and a switch fabric configured to couple the physical resource elements to each other by using a processor-native or processor-aware addressing scheme to physically disaggregate various types of resource elements so that they form pools of a plurality of operationally independent resource element types expressed within a single plane of disaggregated logical resources, the switch fabric also being bridged to an external physical network through networking resource elements.
2. The compute node according to claim 1 characterized in that said switch fabric is composed of one or more fabric switches aggregated or distributed in one or more of said physical resource elements.
3. The compute node according to claim 1 characterized in that said switch fabric is defined independently and interfaces to one or more of said physical resource elements.
4. The compute node according to claim 1 characterized in that said physical resource elements defined across said physically converged substrate comprise at least a processing element, a storage element, a memory element or a network resource element.
5. A scalable server comprising a plurality of compute nodes, each compute node comprising a plurality of physical resource elements defined across a physically converged substrate, and a switch fabric configured to couple the physical resource elements to each other by using a processor-native or processor-aware addressing scheme to physically disaggregate various types of resource elements so that they form pools of a plurality of operationally independent resource element types, the switch fabric of each compute node also being bridged to an external or embedded physical network through networking resource elements, a single common switch fabric being created from said switch fabrics, adapted to couple said compute nodes to each other, extending the physically converged substrates into a global physically converged substrate, wherein said pools of a plurality of operationally independent resource element types of each compute node are expressed together within a single plane of disaggregated logical resources.
6. The scalable server according to claim 5 characterized in that said switch fabric is composed of one or more fabric switches aggregated or distributed in one or more of said physical resource elements.
7. The scalable server according to claim 5 characterized in that said switch fabric is composed of one or more independent fabric switches connecting one or more of said physical resource elements.
8. The scalable server according to claim 5 characterized in that said networking resource elements comprise network resource elements for the compute nodes and an additional fabric switch that operates as a server mounted network resource element to further extend the switch fabrics of the compute nodes into a single common switch fabric.
9. The scalable server according to claim 8 characterized in that a plurality of compute nodes are connected to a server mounted network resource element which creates a bridge between the switch fabrics exposed by each network resource element for the compute node to create a single common switch fabric between all compute nodes, and further creating a bridge to an external physical network.
10. The scalable server according to claim 5 characterized in that said networking resource elements comprise network resource elements for the compute nodes connected directly to an external network, said fabric switches of the compute nodes being adapted to create a bridge between the switch fabrics exposed by each network resource element for the compute node to create a single common switch fabric between all compute nodes, and further to create a bridge to an external physical network.
11. A method of implementing a scalable server comprising one or more physically converged substrates, a plurality of physical resource elements being defined across each physically converged substrate, and fabric switches connecting the physical resource elements to each other across the physically converged substrates using a processor-native addressing scheme, wherein the method comprises:
- physically disaggregating the physical resource elements;
- expressing the disaggregated physical resource elements as pools of a plurality of operationally independent logical resource element types within a single plane of disaggregated logical resources; and
- abstracting a computing facility from said pools of logical resource element types by selecting instances of logical resource elements from said pools of logical resource element types.
12. The method of implementing a scalable server according to claim 11 characterized in that said physical resource elements defined across said physically converged substrate, comprise at least a processing element, a storage element, a memory element and a network resource element.
13. The method of implementing a scalable server according to claim 11 characterized in that each disaggregated physical resource element in a pool of disaggregated physical resource elements of an operationally independent logical resource element type in the disaggregated logical resource plane operates independently of any other disaggregated physical resource element, so that a plurality of them can be encapsulated, one or more instances for each logical resource element type, by a disaggregated resource element manager to create said computing facility.
14. The method of implementing a scalable server according to claim 11 characterized in that said computing facility is created, dynamically or statically, by physicalization of resource elements through a common physical address space.
15. The method of implementing a scalable server according to claim 11 characterized in that said computing facility is created, dynamically or statically, by virtualization of resource elements over any form of abstracted communication.
16. The method of implementing a scalable server according to claim 11 characterized in that said computing facility is created, dynamically or statically, by a combination of a physicalization of resource elements through a common physical address space and a virtualization of resource elements over any form of abstracted communication.
Type: Application
Filed: Jan 27, 2017
Publication Date: Feb 20, 2020
Applicant: Kaleao Limited (Cambridge, Cambridgeshire)
Inventors: John GOODACRE (Cambridge), Giampietro TECCHIOLLI (Cambridge)
Application Number: 16/340,073