TECHNOLOGIES FOR MANAGING ALLOCATION OF ACCELERATOR RESOURCES
Technologies for dynamically managing the allocation of accelerator resources include an orchestrator server. The orchestrator server is to assign a workload to a managed node for execution, determine a predicted demand for one or more accelerator resources to accelerate the execution of one or more jobs within the workload, provision, prior to the predicted demand, one or more accelerator resources to accelerate the one or more jobs, and allocate the one or more provisioned accelerator resources to the managed node to accelerate the execution of the one or more jobs. Other embodiments are also described and claimed.
The present application claims the benefit of U.S. Provisional Patent Application No. 62/365,969, filed Jul. 22, 2016, U.S. Provisional Patent Application No. 62/376,859, filed Aug. 18, 2016, and U.S. Provisional Patent Application No. 62/427,268, filed Nov. 29, 2016.
BACKGROUND

In a typical cloud-based computing environment (e.g., a data center), multiple compute nodes may execute workloads (e.g., processes, applications, services, etc.) on behalf of customers. One or more of the workloads may include sets of functions (e.g., jobs) that could be accelerated using accelerator resources such as field programmable gate arrays (FPGAs), dedicated graphics processors, or other specialized devices for accelerating specific types of jobs. In typical data centers, all or a subset of the compute nodes may be physically equipped (e.g., on the same board as the central processing unit) with one or more accelerator resources. However, in such data centers, the accelerator resources may go unused or may be used only a subset of the time that the workloads are being executed, as many workloads assigned to the compute nodes may not include jobs that are amenable to acceleration. Furthermore, even in data centers in which each compute node is assembled from resources distributed across the data center when a workload is assigned to the compute node, information regarding whether the assigned workload may benefit from acceleration may be unavailable. As such, the compute node may be assembled without the accelerator resources that could be beneficial to the execution of the workload, or may be assembled with one or more accelerator resources that are underutilized (e.g., idle more than a threshold amount of time) during the execution of the workload. Consequently, the allocation of accelerator resources in typical data centers is problematic and can often result in inefficient use of resources and, as a result, unnecessary costs for the operator of the data center.
The concepts described herein are illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. Where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements.
While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described herein in detail. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.
References in the specification to “one embodiment,” “an embodiment,” “an illustrative embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Additionally, it should be appreciated that items included in a list in the form of “at least one A, B, and C” can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). Similarly, items listed in the form of “at least one of A, B, or C” can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).
The disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on a transitory or non-transitory machine-readable (e.g., computer-readable) storage medium, which may be read and executed by one or more processors. A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device).
In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. However, it should be appreciated that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or method feature in a particular figure is not meant to imply that such feature is required in all embodiments and, in some embodiments, may not be included or may be combined with other features.
The illustrative data center 100 differs from typical data centers in many ways. For example, in the illustrative embodiment, the circuit boards (“sleds”) on which components such as CPUs, memory, and other components are placed are designed for increased thermal performance. In particular, in the illustrative embodiment, the sleds are shallower than typical boards. In other words, the sleds are shorter from the front to the back, where cooling fans are located. This decreases the length of the path that air must travel across the components on the board. Further, the components on the sled are spaced further apart than in typical circuit boards, and the components are arranged to reduce or eliminate shadowing (i.e., one component in the air flow path of another component). In the illustrative embodiment, processing components such as the processors are located on a top side of a sled while near memory, such as DIMMs, is located on a bottom side of the sled. As a result of the enhanced airflow provided by this design, the components may operate at higher frequencies and power levels than in typical systems, thereby increasing performance. Furthermore, the sleds are configured to blindly mate with power and data communication cables in each rack 102A, 102B, 102C, 102D, enhancing their ability to be quickly removed, upgraded, reinstalled, and/or replaced. Similarly, individual components located on the sleds, such as processors, accelerators, memory, and data storage drives, are configured to be easily upgraded due to their increased spacing from each other. In the illustrative embodiment, the components additionally include hardware attestation features to prove their authenticity.
Furthermore, in the illustrative embodiment, the data center 100 utilizes a single network architecture (“fabric”) that supports multiple other network architectures including Ethernet and Omni-Path. The sleds, in the illustrative embodiment, are coupled to switches via optical fibers, which provide higher bandwidth and lower latency than typical twisted pair cabling (e.g., Category 5, Category 5e, Category 6, etc.). Due to the high bandwidth, low latency interconnections and network architecture, the data center 100 may, in use, pool resources, such as memory, accelerators (e.g., graphics accelerators, FPGAs, ASICs, etc.), and data storage drives that are physically disaggregated, and provide them to compute resources (e.g., processors) on an as-needed basis, enabling the compute resources to access the pooled resources as if they were local. The illustrative data center 100 additionally receives usage information for the various resources, predicts resource usage for different types of workloads based on past resource usage, and dynamically reallocates the resources based on this information.
The racks 102A, 102B, 102C, 102D of the data center 100 may include physical design features that facilitate the automation of a variety of types of maintenance tasks. For example, data center 100 may be implemented using racks that are designed to be robotically-accessed, and to accept and house robotically-manipulatable resource sleds. Furthermore, in the illustrative embodiment, the racks 102A, 102B, 102C, 102D include integrated power sources that receive a greater voltage than is typical for power sources. The increased voltage enables the power sources to provide additional power to the components on each sled, enabling the components to operate at higher than typical frequencies.
In various embodiments, dual-mode optical switches may be capable of receiving both Ethernet protocol communications carrying Internet Protocol (IP) packets and communications according to a second, high-performance computing (HPC) link-layer protocol (e.g., Intel's Omni-Path Architecture, Infiniband) via optical signaling media of an optical fabric. As reflected in
MPCMs 916-1 to 916-7 may be configured to provide inserted sleds with access to power sourced by respective power modules 920-1 to 920-7, each of which may draw power from an external power source 921. In various embodiments, external power source 921 may deliver alternating current (AC) power to rack 902, and power modules 920-1 to 920-7 may be configured to convert such AC power to direct current (DC) power to be sourced to inserted sleds. In some embodiments, for example, power modules 920-1 to 920-7 may be configured to convert 277-volt AC power into 12-volt DC power for provision to inserted sleds via respective MPCMs 916-1 to 916-7. The embodiments are not limited to this example.
MPCMs 916-1 to 916-7 may also be arranged to provide inserted sleds with optical signaling connectivity to a dual-mode optical switching infrastructure 914, which may be the same as—or similar to—dual-mode optical switching infrastructure 514 of
Sled 1004 may also include dual-mode optical network interface circuitry 1026. Dual-mode optical network interface circuitry 1026 may generally comprise circuitry that is capable of communicating over optical signaling media according to each of multiple link-layer protocols supported by dual-mode optical switching infrastructure 914 of
Coupling MPCM 1016 with a counterpart MPCM of a sled space in a given rack may cause optical connector 1016A to couple with an optical connector comprised in the counterpart MPCM. This may generally establish optical connectivity between optical cabling of the sled and dual-mode optical network interface circuitry 1026, via each of a set of optical channels 1025. Dual-mode optical network interface circuitry 1026 may communicate with the physical resources 1005 of sled 1004 via electrical signaling media 1028. In addition to the dimensions of the sleds and arrangement of components on the sleds to provide improved cooling and enable operation at a relatively higher thermal envelope (e.g., 250 W), as described above with reference to
As shown in
In another example, in various embodiments, one or more pooled storage sleds 1132 may be included among the physical infrastructure 1100A of data center 1100, each of which may comprise a pool of storage resources that is globally accessible to other sleds via optical fabric 1112 and dual-mode optical switching infrastructure 1114. In some embodiments, such pooled storage sleds 1132 may comprise pools of solid-state storage devices such as solid-state drives (SSDs). In various embodiments, one or more high-performance processing sleds 1134 may be included among the physical infrastructure 1100A of data center 1100. In some embodiments, high-performance processing sleds 1134 may comprise pools of high-performance processors, as well as cooling features that enhance air cooling to yield a higher thermal envelope of up to 250 W or more. In various embodiments, any given high-performance processing sled 1134 may feature an expansion connector 1117 that can accept a far memory expansion sled, such that the far memory that is locally available to that high-performance processing sled 1134 is disaggregated from the processors and near memory comprised on that sled. In some embodiments, such a high-performance processing sled 1134 may be configured with far memory using an expansion sled that comprises low-latency SSD storage. The optical infrastructure allows for compute resources on one sled to utilize remote accelerator/FPGA, memory, and/or SSD resources that are disaggregated on a sled located on the same rack or any other rack in the data center. The remote resources can be located one switch jump or two switch jumps away in the spine-leaf network architecture described above with reference to
In various embodiments, one or more layers of abstraction may be applied to the physical resources of physical infrastructure 1100A in order to define a virtual infrastructure, such as a software-defined infrastructure 1100B. In some embodiments, virtual computing resources 1136 of software-defined infrastructure 1100B may be allocated to support the provision of cloud services 1140. In various embodiments, particular sets of virtual computing resources 1136 may be grouped for provision to cloud services 1140 in the form of SDI services 1138. Examples of cloud services 1140 may include—without limitation—software as a service (SaaS) services 1142, platform as a service (PaaS) services 1144, and infrastructure as a service (IaaS) services 1146.
In some embodiments, management of software-defined infrastructure 1100B may be conducted using a virtual infrastructure management framework 1150B. In various embodiments, virtual infrastructure management framework 1150B may be designed to implement workload fingerprinting techniques and/or machine-learning techniques in conjunction with managing allocation of virtual computing resources 1136 and/or SDI services 1138 to cloud services 1140. In some embodiments, virtual infrastructure management framework 1150B may use/consult telemetry data in conjunction with performing such resource allocation. In various embodiments, an application/service management framework 1150C may be implemented in order to provide QoS management capabilities for cloud services 1140. The embodiments are not limited in this context.
As shown in
As discussed in more detail herein, the orchestrator server 1240, in operation, is configured to assign workloads to managed nodes 1260, receive telemetry data indicative of performance and conditions from the managed nodes 1260 as the workloads are performed, identify jobs within the workloads to be accelerated with one or more accelerator resources 205-2, provision (e.g., configure) the accelerator resources 205-2 to accelerate the identified jobs, and allocate the provisioned accelerator resources 205-2 to the managed nodes 1260 to accelerate the identified jobs. In the illustrative embodiment, the accelerator resources 205-2 include field programmable gate arrays (FPGAs) and the orchestrator server 1240 provisions the FPGAs by sending bitstreams indicative of desired configurations of the FPGAs to accelerate particular jobs. The orchestrator server 1240, in the illustrative embodiment, determines when the demand for acceleration for a particular job is likely to occur, based on evaluating the telemetry data and identifying patterns in the execution of the jobs, and sends the bitstreams to the FPGAs ahead of time, to provision the FPGAs in time to accelerate the jobs when the acceleration demand occurs. Additionally, the orchestrator server 1240 may receive resource allocation objective data indicative of one or more objectives to be achieved during the execution of the workloads. In the illustrative embodiment, the objectives pertain to power consumption, life expectancy, heat production, and/or performance of the resources allocated to the managed nodes 1260. As the workloads are executed, the orchestrator server 1240 may selectively allocate or deallocate the accelerator resources 205-2 to achieve the resource allocation objectives. In the illustrative embodiment, the achievement of an objective may be measured as, equal to, or otherwise defined as the degree to which a measured value from one or more managed nodes 1260 satisfies a target value associated with the objective. For example, in the illustrative embodiment, increasing the achievement may be performed by decreasing the error (e.g., difference) between the measured value (e.g., a time taken to complete a workload or an operation in a workload) and the target value (e.g., a target time to complete the workload or operation in the workload). Conversely, decreasing the achievement may be performed by increasing the error (e.g., difference) between the measured value and the target value.
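By way of illustration, the following minimal Python sketch shows one way such an achievement score could be computed as a function of the error between a measured value and a target value. The function name and the normalization scheme are assumptions made for illustration, not part of the embodiments described herein.

```python
def achievement(measured: float, target: float) -> float:
    """Return a score in [0, 1]; 1.0 means the measured value meets the target.

    Hypothetical sketch: achievement is defined here as one minus the
    normalized error (difference) between the measured and target values.
    """
    if target == 0:
        return 1.0 if measured == 0 else 0.0
    error = abs(measured - target) / abs(target)  # normalized error
    return max(0.0, 1.0 - error)

# Example: a workload completed in 120 ms against a 100 ms target -> 0.8.
score = achievement(measured=120.0, target=100.0)
```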
Referring now to
The CPU 1302 may be embodied as any type of processor capable of performing the functions described herein. The CPU 1302 may be embodied as a single or multi-core processor(s), a microcontroller, or other processor or processing/controlling circuit. In some embodiments, the CPU 1302 may be embodied as, include, or be coupled to a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), reconfigurable hardware or hardware circuitry, or other specialized hardware to facilitate performance of the functions described herein. Similarly, the main memory 1304 may be embodied as any type of volatile (e.g., dynamic random access memory (DRAM), etc.) or non-volatile memory or data storage capable of performing the functions described herein. In some embodiments, all or a portion of the main memory 1304 may be integrated into the CPU 1302. In operation, the main memory 1304 may store various software and data used during operation such as telemetry data, resource allocation objective data, workload labels, workload classifications, job data, resource allocation data, operating systems, applications, programs, libraries, and drivers.
The I/O subsystem 1306 may be embodied as circuitry and/or components to facilitate input/output operations with the CPU 1302, the main memory 1304, and other components of the orchestrator server 1240. For example, the I/O subsystem 1306 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, integrated sensor hubs, firmware devices, communication links (e.g., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.), and/or other components and subsystems to facilitate the input/output operations. In some embodiments, the I/O subsystem 1306 may form a portion of a system-on-a-chip (SoC) and be incorporated, along with one or more of the CPU 1302, the main memory 1304, and other components of the orchestrator server 1240, on a single integrated circuit chip.
The communication circuitry 1308 may be embodied as any communication circuit, device, or collection thereof, capable of enabling communications over the network 1230 between the orchestrator server 1240 and another compute device (e.g., the client device 1220, and/or the managed nodes 1260). The communication circuitry 1308 may be configured to use any one or more communication technologies (e.g., wired or wireless communications) and associated protocols (e.g., Ethernet, Bluetooth®, Wi-Fi®, WiMAX, etc.) to effect such communication.
The illustrative communication circuitry 1308 includes a network interface controller (NIC) 1310, which may also be referred to as a host fabric interface (HFI). The NIC 1310 may be embodied as one or more add-in-boards, daughtercards, network interface cards, controller chips, chipsets, or other devices that may be used by the orchestrator server 1240 to connect with another compute device (e.g., the client device 1220 and/or the managed nodes 1260). In some embodiments, the NIC 1310 may be embodied as part of a system-on-a-chip (SoC) that includes one or more processors, or included on a multichip package that also contains one or more processors. In some embodiments, the NIC 1310 may include a local processor (not shown) and/or a local memory (not shown) that are both local to the NIC 1310. In such embodiments, the local processor of the NIC 1310 may be capable of performing one or more of the functions of the CPU 1302 described herein. Additionally or alternatively, in such embodiments, the local memory of the NIC 1310 may be integrated into one or more components of the orchestrator server 1240 at the board level, socket level, chip level, and/or other levels.
The one or more illustrative data storage devices 1312 may be embodied as any type of device configured for short-term or long-term storage of data such as, for example, memory devices and circuits, memory cards, hard disk drives, solid-state drives, or other data storage devices. Each data storage device 1312 may include a system partition that stores data and firmware code for the data storage device 1312. Each data storage device 1312 may also include an operating system partition that stores data files and executables for an operating system.
Additionally or alternatively, the orchestrator server 1240 may include one or more peripheral devices 1314. Such peripheral devices 1314 may include any type of peripheral device commonly found in a compute device such as a display, speakers, a mouse, a keyboard, and/or other input/output devices, interface devices, and/or other peripheral devices.
The client device 1220 and the managed nodes 1260 may have components similar to those described in
As described above, the client device 1220, the orchestrator server 1240, and the managed nodes 1260 are illustratively in communication via the network 1230, which may be embodied as any type of wired or wireless communication network, including global networks (e.g., the Internet), local area networks (LANs) or wide area networks (WANs), cellular networks (e.g., Global System for Mobile Communications (GSM), 3G, Long Term Evolution (LTE), Worldwide Interoperability for Microwave Access (WiMAX), etc.), digital subscriber line (DSL) networks, cable networks (e.g., coaxial networks, fiber networks, etc.), or any combination thereof.
Referring now to
Additionally, the illustrative environment 1400 includes workload classifications 1408, which may be embodied as any data indicative of the general resource utilization tendencies of each workload (e.g., processor intensive, memory intensive, network bandwidth intensive, etc.). Further, the illustrative environment 1400 includes job data 1410 indicative of jobs (e.g., sets of functions) within each workload that may be accelerated. In the illustrative embodiment, the job data 1410 is embodied as a queue of jobs to be processed, an indication of the types of functions within the job (e.g., compression, encryption, matrix operations, etc.), information about the format and size of input data used by the job (e.g., number of bytes, whether the input data is formatted as a matrix or otherwise, an encoding scheme for the input data, etc.), a globally unique identifier (GUID) associated with each job, counters indicative of how many times a particular job has been in the queue within a predefined time frame for each workload and across all workloads executed in the data center 1100, the average amount of time each job resides in the queue, and/or other characteristics of the jobs. Additionally, the illustrative environment 1400 includes resource allocation data 1412 indicative of the resources, including accelerator resources 205-2, within the data center 1100 that have been allocated to each managed node 1260 at any given time.
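The following Python sketch illustrates, with hypothetical field names, one way the job data 1410 described above could be represented, including an exponentially averaged queue residency time of the kind referenced in Examples 14 and 28 below. The field set and the smoothing factor are assumptions for illustration.

```python
from dataclasses import dataclass, field
import uuid

@dataclass
class JobData:
    # Illustrative representation of the job data 1410 described above.
    guid: str = field(default_factory=lambda: str(uuid.uuid4()))
    function_types: tuple = ()           # e.g., ("compression", "encryption")
    input_format: str = ""               # e.g., matrix layout or encoding scheme
    input_size_bytes: int = 0
    local_count: int = 0                 # times queued within this workload
    global_count: int = 0                # times queued across all workloads
    avg_queue_residency_s: float = 0.0   # running average time spent in the queue

    def record_residency(self, residency_s: float, alpha: float = 0.2) -> None:
        # Exponential averaging step over observed queue residency times.
        self.avg_queue_residency_s = (
            alpha * residency_s + (1.0 - alpha) * self.avg_queue_residency_s
        )
```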
In the illustrative environment 1400, the network communicator 1420, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to facilitate inbound and outbound network communications (e.g., network traffic, network packets, network flows, etc.) to and from the orchestrator server 1240, respectively. To do so, the network communicator 1420 is configured to receive and process data packets from one system or computing device (e.g., the client device 1220) and to prepare and send data packets to another computing device or system (e.g., the managed nodes 1260). Accordingly, in some embodiments, at least a portion of the functionality of the network communicator 1420 may be performed by the communication circuitry 1308, and, in the illustrative embodiment, by the NIC 1310.
The telemetry monitor 1430, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to collect the telemetry data 1402 from the managed nodes 1260 as the managed nodes 1260 execute the workloads assigned to them. The telemetry monitor 1430 may actively poll each of the managed nodes 1260 for updated telemetry data 1402 on an ongoing basis or may passively receive telemetry data 1402 from the managed nodes 1260, such as by listening on a particular network port for updated telemetry data 1402. The telemetry monitor 1430 may further parse and categorize the telemetry data 1402, such as by separating the telemetry data 1402 into an individual file or data set for each managed node 1260.
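As a rough sketch of the passive mode of operation described above, the following Python fragment listens on a network port and separates incoming telemetry into a per-node data set. The JSON payload format and the node_id field are assumptions made for illustration; an active-polling variant would instead query each managed node in turn.

```python
import json
import socket

def listen_for_telemetry(port: int, store: dict) -> None:
    # Passive mode: listen on a particular network port for updated
    # telemetry and file each record under the reporting node's identifier.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("", port))
        while True:
            payload, _addr = sock.recvfrom(65536)
            record = json.loads(payload)  # assumed JSON-encoded datagram
            store.setdefault(record["node_id"], []).append(record)
```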
The resource manager 1440, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof, is configured to assign workloads to managed nodes, identify jobs within the workloads to accelerate, predict when acceleration demand will occur within the workloads, provision (e.g., configure) accelerator resources 205-2 in advance of the predicted acceleration demand, and adjust the allocation of accelerator resources 205-2 to and from the managed nodes 1260 on an ongoing basis to improve the efficiency of workload execution and/or satisfy other resource allocation objectives (e.g., from the resource allocation objective data 1404).
To do so, the resource manager 1440 includes a workload labeler 1442, a workload classifier 1444, a workload behavior predictor 1446, an acceleration manager 1448, and a multi-objective analyzer 1450. The workload labeler 1442, in the illustrative embodiment, is configured to assign a workload label 1406 to each workload presently performed or scheduled to be performed by the managed nodes 1260. The workload labeler 1442 may generate the workload label 1406 as a function of an executable name of the workload, a hash of all or a portion of the code of the workload, or based on any other method to uniquely identify each workload. The workload classifier 1444, in the illustrative embodiment, is configured to categorize each labeled workload based on the average resource utilization of each workload (e.g., generally utilizes 65% of processor capacity, generally utilizes 40% of memory capacity, etc.).
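For example, the workload labeler 1442 might compute a label as a function of the executable name and a hash of the workload's code, as in the following minimal Python sketch. The function name and digest length are illustrative assumptions rather than a definitive implementation.

```python
import hashlib

def workload_label(executable_name: str, code: bytes) -> str:
    # Combine the executable name with a hash of the workload's code so
    # that identical workloads receive identical, unique labels.
    digest = hashlib.sha256(code).hexdigest()[:16]
    return f"{executable_name}-{digest}"

label = workload_label("transcode", b"\x7fELF...")  # e.g., "transcode-<16 hex chars>"
```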
The workload behavior predictor 1446, in the illustrative embodiment, is configured to analyze the telemetry data 1402 to identify different phases of resource utilization within the telemetry data 1402 for each workload. Each resource utilization phase may be embodied as a period of time in which the resource utilization of one or more resources allocated to a managed node 1260 satisfies a predefined threshold. For example, a utilization of at least 85% of the allocated processor capacity may be indicative of a high processor utilization phase, and a utilization of at least 85% of the allocated memory capacity may be indicative of a high memory utilization phase. In the illustrative embodiment, the workload behavior predictor 1446 is further to identify patterns in the resource utilization phases of the workloads (e.g., a high processor utilization phase, followed by a high memory utilization phase, followed by a phase of low resource utilization, which is then followed by the high processor utilization phase again). The workload behavior predictor 1446 may be configured to utilize the identifications of the resource utilization phase patterns, determine a present resource utilization phase of a given workload, predict the next resource utilization phase based on the patterns, and determine an amount of remaining time until the workload transitions to the next resource utilization phase.
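The following Python sketch illustrates phase classification against the 85% thresholds given above, together with a simple pattern-based prediction of the next phase. The phase names and the lookup strategy (returning whatever followed the most recent prior occurrence of the current phase) are assumptions for illustration.

```python
def classify_phase(cpu_util: float, mem_util: float) -> str:
    # Thresholds per the example above: >= 85% of allocated capacity.
    if cpu_util >= 0.85:
        return "high-cpu"
    if mem_util >= 0.85:
        return "high-memory"
    return "low"

def predict_next_phase(history: list[str]) -> str | None:
    """Predict the next phase by finding the most recent prior occurrence
    of the current phase and returning the phase that followed it."""
    current = history[-1]
    for i in range(len(history) - 2, -1, -1):
        if history[i] == current:
            return history[i + 1]
    return None  # no repeated pattern observed yet

# Example: the repeating pattern described above.
phases = ["high-cpu", "high-memory", "low", "high-cpu"]
print(predict_next_phase(phases))  # "high-memory"
```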
The acceleration manager 1448, in the illustrative embodiment, is configured to generate the job data 1410 from the telemetry data 1402, identify jobs within the workloads to be accelerated based on their types, their residency time in the job queue, how often the jobs are executed, and other factors, coordinate the selection and provisioning of accelerator resources 205-2, such as FPGAs, available within the data center 1100, and manage the timing of the allocation and/or deallocation of the accelerator resources 205-2 to coincide with the predicted times when the jobs to be accelerated are likely to be initiated (e.g., called) by the workloads.
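A minimal sketch of such a selection, reusing the hypothetical JobData fields introduced above and illustrative threshold values, might look as follows:

```python
def jobs_to_accelerate(jobs, count_threshold: int = 100,
                       residency_threshold_s: float = 1.0):
    # Flag jobs whose execution frequency (per workload or data-center-wide)
    # or average queue residency suggests acceleration would pay off.
    # Threshold values here are assumptions, not from the source.
    return [
        job for job in jobs
        if job.local_count >= count_threshold
        or job.global_count >= count_threshold
        or job.avg_queue_residency_s >= residency_threshold_s
    ]
```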
The multi-objective analyzer 1450, in the illustrative embodiment, is configured to determine whether an efficiency objective and/or other objectives indicated in the resource allocation objective data 1404 are being met during the execution of the workloads, and to determine adjustments to the allocation of resources among the managed nodes 1260 to enable the one or more objectives to be satisfied. As such, with regard to the allocation of accelerator resources 205-2, the multi-objective analyzer 1450 coordinates with the acceleration manager 1448 to determine which accelerator resources 205-2 to allocate to which managed nodes 1260 and at what time. In the illustrative embodiment, the multi-objective analyzer 1450 may include a model of the data center 1100 that simulates the expected effects, including power consumption, heat generation, changes to compute capacity, and other factors, in response to various adjustments to the allocations of resources among the managed nodes 1260 and/or the settings of components (e.g., increasing or decreasing clock speeds, enabling or disabling support for extended instruction sets, etc.) within the resources. To do so, in the illustrative embodiment, the multi-objective analyzer 1450 includes a resource allocator 1452 and a resource settings adjuster 1454. The resource allocator 1452, in the illustrative embodiment, is configured to issue instructions to the managed nodes 1260 to allocate or deallocate resources as determined by the multi-objective analyzer 1450 and the acceleration manager 1448, and to update the resource allocation data 1412 to indicate the present state of allocation of the resources among the managed nodes 1260. Similarly, the resource settings adjuster 1454, in the illustrative embodiment, is configured to issue instructions to one or more of the managed nodes 1260 to adjust settings of resources allocated to the managed nodes 1260, such as by adjusting a firmware setting to increase or decrease a clock speed of a processor, increasing or decreasing power utilization settings, and/or other settings that affect the operation of the resources.
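For illustration, a weighted score across multiple objectives could be computed as in the following sketch, which reuses the hypothetical achievement() function given earlier. The simulate() reference in the comment stands in for the data center model described above and is not a real API.

```python
def allocation_score(predicted: dict, targets: dict, weights: dict) -> float:
    # Weighted achievement across objectives (power consumption, heat
    # generation, performance, ...); names and weighting are illustrative.
    return sum(
        weights[name] * achievement(predicted[name], targets[name])
        for name in targets
    )

# With a hypothetical simulate() standing in for the data center model,
# the analyzer could pick the highest-scoring candidate allocation, e.g.:
# best = max(candidates, key=lambda c: allocation_score(simulate(c), targets, weights))
```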
It should be appreciated that each of the workload labeler 1442, the workload classifier 1444, the workload behavior predictor 1446, the acceleration manager 1448, the multi-objective analyzer 1450, the resource allocator 1452, and the resource settings adjuster 1454 may be separately embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof. For example, the workload labeler 1442 may be embodied as a hardware component, while the workload classifier 1444, the workload behavior predictor 1446, the acceleration manager 1448, the multi-objective analyzer 1450, the resource allocator 1452, and the resource settings adjuster 1454 are embodied as virtualized hardware components or as some other combination of hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof.
Referring now to
In block 1514, in the illustrative embodiment, the orchestrator server 1240 allocates resources to the managed nodes 1260. Initially, the orchestrator server 1240 has not received any telemetry data 1402 to inform a decision as to which resources to allocate to the various managed nodes 1260. As such, as indicated in block 1516, the orchestrator server 1240 may initially allocate no accelerator resources 205-2 to any of the managed nodes 1260. Alternatively, as indicated in block 1518, the orchestrator server 1240 may assign accelerator resources 205-2 among the managed nodes 1260 according to a default scheme (e.g., dividing the accelerator resources 205-2 among the managed nodes 1260 evenly, allocating a predefined number of accelerator resources 205-2 to each managed node 1260 as the managed nodes 1260 are defined until no more accelerator resources 205-2 are available, etc.). In doing so, the orchestrator server 1240 may defer allocating any FPGAs to the managed nodes 1260 until after the workloads have been assigned and the FPGAs have been provisioned (e.g., configured) to perform one or more jobs to be accelerated, as described in more detail herein. In block 1520, the orchestrator server 1240 assigns workloads to the managed nodes 1260 for execution and, as indicated in block 1522, begins receiving the telemetry data 1402 as the workloads are executed by the managed nodes 1260. Subsequently, the method 1500 advances to block 1524 of
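As an illustration of the even-division default scheme mentioned above, the following Python sketch deals available accelerator resources out to managed nodes round-robin until none remain; the names are hypothetical.

```python
import itertools

def default_allocation(accelerators: list, nodes: list) -> dict:
    # Round-robin default scheme: deal accelerators out to nodes one at a
    # time until no more accelerator resources are available.
    allocation = {node: [] for node in nodes}
    for node, acc in zip(itertools.cycle(nodes), accelerators):
        allocation[node].append(acc)
    return allocation

print(default_allocation(["fpga0", "fpga1", "fpga2"], ["nodeA", "nodeB"]))
# {'nodeA': ['fpga0', 'fpga2'], 'nodeB': ['fpga1']}
```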
Referring now to
Still referring to
Referring now to
In block 1564, the orchestrator server 1240 provides (e.g., sends) a bitstream indicative of a desired configuration of each FPGA to each FPGA to be provisioned. The bitstream may include a portion specific to the architecture of the particular FPGA (e.g., to initialize the FPGA for configuration) and another portion indicative of the desired configuration of the gates within the FPGA to perform the corresponding job to be accelerated. In providing the bitstreams, in the illustrative embodiment and as indicated in block 1566, the orchestrator server 1240 provides the bitstreams in advance of the predicted time (e.g., the time predicted in block 1546 of
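The timing relationship described above can be expressed as a one-line calculation, sketched below with illustrative names: provisioning must begin no later than the predicted demand time minus the configuration time period.

```python
def provisioning_start_deadline(predicted_demand_time_s: float,
                                configuration_period_s: float) -> float:
    # Begin streaming the bitstream early enough that FPGA configuration
    # completes before the predicted acceleration demand occurs.
    return predicted_demand_time_s - configuration_period_s

# Example: demand predicted at t = 500 s and a 30 s configuration period
# mean provisioning should begin no later than t = 470 s.
deadline = provisioning_start_deadline(500.0, 30.0)  # 470.0
```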
Afterwards, the method 1500 advances to block 1568 in which the orchestrator server 1240 allocates the accelerator resources 205-2 to the managed nodes 1260 to accelerate execution of the workloads (e.g., the workload jobs that were identified for acceleration in block 1526 of
Illustrative examples of the technologies disclosed herein are provided below. An embodiment of the technologies may include any one or more, and any combination of, the examples described below.
Example 1 includes an orchestrator server to dynamically manage the allocation of accelerator resources, the orchestrator server comprising one or more processors; one or more memory devices having stored therein a plurality of instructions that, when executed by the one or more processors, cause the orchestrator server to assign a workload to a managed node for execution; determine a predicted demand for one or more accelerator resources to accelerate the execution of one or more jobs within the workload; provision, prior to the predicted demand, one or more accelerator resources to accelerate the one or more jobs; and allocate the one or more provisioned accelerator resources to the managed node to accelerate the execution of the one or more jobs.
Example 2 includes the subject matter of Example 1, and wherein to determine the predicted demand comprises to determine a demand for one or more field programmable gate arrays (FPGAs).
Example 3 includes the subject matter of any of Examples 1 and 2, and wherein to provision the one or more accelerator resources comprises to provide, to the one or more FPGAs, a bit stream indicative of a configuration of each FPGA to accelerate execution of the one or more jobs.
Example 4 includes the subject matter of any of Examples 1-3, and wherein to determine the predicted demand comprises to determine the number of accelerator resources to allocate to satisfy the predicted demand.
Example 5 includes the subject matter of any of Examples 1-4, and wherein to provision the one or more accelerator resources comprises to provision one or more accelerator resources located on one or more sleds that are different than a sled on which the workload is presently executed.
Example 6 includes the subject matter of any of Examples 1-5, and wherein the plurality of instructions, when executed, further cause the orchestrator server to determine a configuration time period to provision each of the one or more accelerator resources; and determine a predicted time of the predicted demand; and wherein to provision the one or more accelerator resources comprises to begin configuration of the one or more accelerator resources for accelerated execution of the one or more jobs at a time that is earlier than the predicted time by at least the configuration time period.
Example 7 includes the subject matter of any of Examples 1-6, and wherein the plurality of instructions, when executed, further cause the orchestrator server to identify one or more jobs within the workload to be accelerated with one or more field programmable gate arrays (FPGAs); and associate each identified job with a globally unique identifier indicative of one or more of a specific interface of the job or a definition of the job.
Example 8 includes the subject matter of any of Examples 1-7, and wherein to associate each identified job with a globally unique identifier comprises to associate each identified job with a globally unique identifier indicative of one or more of a size of an input or a format of an input to the job.
Example 9 includes the subject matter of any of Examples 1-8, and wherein the managed node is one of a plurality of managed nodes and the workload is one of a plurality of workloads executed by the managed nodes and the plurality of instructions, when executed, further cause the orchestrator server to determine, for each workload, a local count indicative of a number of times a job is executed in each workload; determine a global count indicative of a number of times a job is executed by all of the managed nodes; determine whether one or more of the local count or the global count satisfies a threshold count value; and identify, in response to a determination that one or more of the local count or the global count satisfies the threshold count value, the associated job as a job to be accelerated.
Example 10 includes the subject matter of any of Examples 1-9, and wherein the plurality of instructions, when executed, further cause the orchestrator server to identify, from a plurality of accelerator resources, the one or more accelerator resources to accelerate the one or more jobs.
Example 11 includes the subject matter of any of Examples 1-10, and wherein to identify the one or more accelerator resources comprises to determine whether one or more of the accelerator resources is already configured to perform one or more of the jobs; and select, in response to a determination that one or more of the accelerator resources is already configured to perform one or more of the jobs, the one or more already-configured accelerator resources for acceleration of the one or more jobs.
Example 12 includes the subject matter of any of Examples 1-11, and wherein to identify the one or more accelerator resources comprises to select the one or more accelerator resources as a function of one or more of a target heat generation, a target power usage, or a target economic cost of utilization of the one or more accelerator resources.
Example 13 includes the subject matter of any of Examples 1-12, and wherein the managed node is one of a plurality of managed nodes and the workload is one of a plurality of workloads executed by the managed nodes, and wherein to determine the demand comprises to establish a job queue indicative of all jobs for all of the workloads to be performed; determine an average time period in which each job resides in the job queue; and determine the demand for each job as a function of the average time period for each job.
Example 14 includes the subject matter of any of Examples 1-13, and wherein to determine the demand for each job further comprises to apply an exponential averaging algorithm to the time period in which each job resides in the job queue.
Example 15 includes a method for dynamically managing the allocation of accelerator resources, the method comprising assigning, by an orchestrator server, a workload to a managed node for execution; determining, by the orchestrator server, a predicted demand for one or more accelerator resources to accelerate the execution of one or more jobs within the workload; provisioning, by the orchestrator server and prior to the predicted demand, one or more accelerator resources to accelerate the one or more jobs; and allocating, by the orchestrator server, the one or more provisioned accelerator resources to the managed node to accelerate the execution of the one or more jobs.
Example 16 includes the subject matter of Example 15, and wherein determining the predicted demand comprises determining a demand for one or more field programmable gate arrays (FPGAs).
Example 17 includes the subject matter of any of Examples 15 and 16, and wherein provisioning the one or more accelerator resources comprises providing, to the one or more FPGAs, a bit stream indicative of a configuration of each FPGA to accelerate execution of the one or more jobs.
Example 18 includes the subject matter of any of Examples 15-17, and wherein determining the predicted demand comprises determining the number of accelerator resources to allocate to satisfy the predicted demand.
Example 19 includes the subject matter of any of Examples 15-18, and wherein provisioning the one or more accelerator resources comprises provisioning one or more accelerator resources located on one or more sleds that are different than a sled on which the workload is presently executed.
Example 20 includes the subject matter of any of Examples 15-19, and further including determining, by the orchestrator server, a configuration time period to provision each of the one or more accelerator resources; and determining, by the orchestrator server, a predicted time of the predicted demand; and wherein provisioning the one or more accelerator resources comprises beginning configuration of the one or more accelerator resources for accelerated execution of the one or more jobs at a time that is earlier than the predicted time by at least the configuration time period.
Example 21 includes the subject matter of any of Examples 15-20, and further including identifying, by the orchestrator server, one or more jobs within the workload to be accelerated with one or more field programmable gate arrays (FPGAs); and associating, by the orchestrator server, each identified job with a globally unique identifier indicative of one or more of a specific interface of the job or a definition of the job.
Example 22 includes the subject matter of any of Examples 15-21, and wherein associating each identified job with a globally unique identifier comprises associating each identified job with a globally unique identifier indicative of one or more of a size of an input or a format of an input to the job.
Example 23 includes the subject matter of any of Examples 15-22, and wherein the managed node is one of a plurality of managed nodes and the workload is one of a plurality of workloads executed by the managed nodes, the method further comprising determining, by the orchestrator server and for each workload, a local count indicative of a number of times a job is executed in each workload; determining, by the orchestrator server, a global count indicative of a number of times a job is executed by all of the managed nodes; determining, by the orchestrator server, whether one or more of the local count or the global count satisfies a threshold count value; and identifying, by the orchestrator server and in response to a determination that one or more of the local count or the global count satisfies the threshold count value, the associated job as a job to be accelerated.
Example 24 includes the subject matter of any of Examples 15-23, and further including identifying, by the orchestrator server and from a plurality of accelerator resources, the one or more accelerator resources to accelerate the one or more jobs.
Example 25 includes the subject matter of any of Examples 15-24, and wherein identifying the one or more accelerator resources comprises determining whether one or more of the accelerator resources is already configured to perform one or more of the jobs, the method further comprising selecting, by the orchestrator server in response to a determination that one or more of the accelerator resources is already configured to perform one or more of the jobs, the one or more already-configured accelerator resources for acceleration of the one or more jobs.
Example 26 includes the subject matter of any of Examples 15-25, and wherein identifying the one or more accelerator resources comprises selecting the one or more accelerator resources as a function of one or more of a target heat generation, a target power usage, or a target economic cost of utilization of the one or more accelerator resources.
Example 27 includes the subject matter of any of Examples 15-26, and wherein the managed node is one of a plurality of managed nodes and the workload is one of a plurality of workloads executed by the managed nodes, and wherein determining the demand comprises establishing a job queue indicative of all jobs for all of the workloads to be performed; determining an average time period in which each job resides in the job queue; and determining the demand for each job as a function of the average time period for each job.
Example 28 includes the subject matter of any of Examples 15-27, and wherein determining the demand for each job further comprises applying an exponential averaging algorithm to the time period in which each job resides in the job queue.
Example 29 includes an orchestrator server comprising means for performing the method of any of Examples 15-28.
Example 30 includes an orchestrator server to dynamically manage the allocation of accelerator resources, the orchestrator server comprising one or more processors; one or more memory devices having stored therein a plurality of instructions that, when executed by the one or more processors, cause the orchestrator server to perform the method of any of Examples 15-28.
Example 31 includes one or more machine-readable storage media comprising a plurality of instructions stored thereon that, in response to being executed, cause an orchestrator server to perform the method of any of Examples 15-28.
Example 32 includes an orchestrator server to dynamically manage the allocation of accelerator resources, the orchestrator server comprising resource manager circuitry to assign a workload to a managed node for execution, determine a predicted demand for one or more accelerator resources to accelerate the execution of one or more jobs within the workload, provision, prior to the predicted demand, one or more accelerator resources to accelerate the one or more jobs, and allocate the one or more provisioned accelerator resources to the managed node to accelerate the execution of the one or more jobs.
Example 33 includes the subject matter of Example 32, and wherein to determine the predicted demand comprises to determine a demand for one or more field programmable gate arrays (FPGAs).
Example 34 includes the subject matter of any of Examples 32 and 33, and wherein to provision the one or more accelerator resources comprises to provide, to the one or more FPGAs, a bit stream indicative of a configuration of each FPGA to accelerate execution of the one or more jobs.
Example 35 includes the subject matter of any of Examples 32-34, and wherein to determine the predicted demand comprises to determine the number of accelerator resources to allocate to satisfy the predicted demand.
Example 36 includes the subject matter of any of Examples 32-35, and wherein to provision the one or more accelerator resources comprises to provision one or more accelerator resources located on one or more sleds that are different than a sled on which the workload is presently executed.
Example 37 includes the subject matter of any of Examples 32-36, and wherein the resource manager circuitry is further to determine a configuration time period to provision each of the one or more accelerator resources; and determine a predicted time of the predicted demand; and wherein to provision the one or more accelerator resources comprises to begin configuration of the one or more accelerator resources for accelerated execution of the one or more jobs at a time that is earlier than the predicted time by at least the configuration time period.
Example 38 includes the subject matter of any of Examples 32-37, and wherein the resource manager circuitry is further to identify one or more jobs within the workload to be accelerated with one or more field programmable gate arrays (FPGAs); and associate each identified job with a globally unique identifier indicative of one or more of a specific interface of the job or a definition of the job.
Example 39 includes the subject matter of any of Examples 32-38, and wherein to associate each identified job with a globally unique identifier comprises to associate each identified job with a globally unique identifier indicative of one or more of a size of an input or a format of an input to the job.
Example 40 includes the subject matter of any of Examples 32-39, and wherein the managed node is one of a plurality of managed nodes and the workload is one of a plurality of workloads executed by the managed nodes and the resource manager circuitry is further to determine, for each workload, a local count indicative of a number of times a job is executed in each workload; determine a global count indicative of a number of times a job is executed by all of the managed nodes; determine whether one or more of the local count or the global count satisfies a threshold count value; and identify, in response to a determination that one or more of the local count or the global count satisfies the threshold count value, the associated job as a job to be accelerated.
Example 41 includes the subject matter of any of Examples 32-40, and wherein the resource manager circuitry is further to identify, from a plurality of accelerator resources, the one or more accelerator resources to accelerate the one or more jobs.
Example 42 includes the subject matter of any of Examples 32-41, and wherein to identify the one or more accelerator resources comprises to determine whether one or more of the accelerator resources is already configured to perform one or more of the jobs; and select, in response to a determination that one or more of the accelerator resources is already configured to perform one or more of the jobs, the one or more already-configured accelerator resources for acceleration of the one or more jobs.
Example 43 includes the subject matter of any of Examples 32-42, and wherein to identify the one or more accelerator resources comprises to select the one or more accelerator resources as a function of one or more of a target heat generation, a target power usage, or a target economic cost of utilization of the one or more accelerator resources.
Example 44 includes the subject matter of any of Examples 32-43, and wherein the managed node is one of a plurality of managed nodes and the workload is one of a plurality of workloads executed by the managed nodes, and wherein to determine the demand comprises to establish a job queue indicative of all jobs for all of the workloads to be performed; determine an average time period in which each job resides in the job queue; and determine the demand for each job as a function of the average time period for each job.
Example 45 includes the subject matter of any of Examples 32-44, and wherein to determine the demand for each job further comprises to apply an exponential averaging algorithm to the time period in which each job resides in the job queue.
Example 46 includes an orchestrator server to dynamically manage the allocation of accelerator resources, the orchestrator server comprising circuitry for assigning a workload to a managed node for execution; means for determining a predicted demand for one or more accelerator resources to accelerate the execution of one or more jobs within the workload; circuitry for provisioning, by the orchestrator server and prior to the predicted demand, one or more accelerator resources to accelerate the one or more jobs; and circuitry for allocating the one or more provisioned accelerator resources to the managed node to accelerate the execution of the one or more jobs.
Example 47 includes the subject matter of Example 46, and wherein the means for determining the predicted demand comprises means for determining a demand for one or more field programmable gate arrays (FPGAs).
Example 48 includes the subject matter of any of Examples 46 and 47, and wherein the circuitry for provisioning the one or more accelerator resources comprises circuitry for providing, to the one or more FPGAs, a bit stream indicative of a configuration of each FPGA to accelerate execution of the one or more jobs.
Example 49 includes the subject matter of any of Examples 46-48, and wherein the means for determining the predicted demand comprises means for determining the number of accelerator resources to allocate to satisfy the predicted demand.
Example 50 includes the subject matter of any of Examples 46-49, and wherein the circuitry for provisioning the one or more accelerator resources comprises circuitry for provisioning one or more accelerator resources located on one or more sleds that are different than a sled on which the workload is presently executed.
Example 51 includes the subject matter of any of Examples 46-50, and further including circuitry for determining a configuration time period to provision each of the one or more accelerator resources; and means for determining a predicted time of the predicted demand; and wherein the circuitry for provisioning the one or more accelerator resources comprises circuitry for beginning configuration of the one or more accelerator resources for accelerated execution of the one or more jobs at a time that is earlier than the predicted time by at least the configuration time period.
Example 52 includes the subject matter of any of Examples 46-51, and further including means for identifying one or more jobs within the workload to be accelerated with one or more field programmable gate arrays (FPGAs); and circuitry for associating each identified job with a globally unique identifier indicative of one or more of a specific interface of the job or a definition of the job.
Example 53 includes the subject matter of any of Examples 46-52, and wherein the circuitry for associating each identified job with a globally unique identifier comprises circuitry for associating each identified job with a globally unique identifier indicative of one or more of a size of an input or a format of an input to the job.
Example 54 includes the subject matter of any of Examples 46-53, and wherein the managed node is one of a plurality of managed nodes and the workload is one of a plurality of workloads executed by the managed nodes, the orchestrator server further comprising circuitry for determining, for each workload, a local count indicative of a number of times a job is executed in each workload; circuitry for determining a global count indicative of a number of times a job is executed by all of the managed nodes; circuitry for determining whether one or more of the local count or the global count satisfies a threshold count value; and circuitry for identifying, in response to a determination that one or more of the local count or the global count satisfies the threshold count value, the associated job as a job to be accelerated.
Example 55 includes the subject matter of any of Examples 46-54, and further including circuitry for identifying, from a plurality of accelerator resources, the one or more accelerator resources to accelerate the one or more jobs.
Example 56 includes the subject matter of any of Examples 46-55, and wherein the circuitry for identifying the one or more accelerator resources comprises circuitry for determining whether one or more of the accelerator resources is already configured to perform one or more of the jobs, the orchestrator server further comprising circuitry for selecting, in response to a determination that one or more of the accelerator resources is already configured to perform one or more of the jobs, the one or more already-configured accelerator resources for acceleration of the one or more jobs.
Example 57 includes the subject matter of any of Examples 46-56, and wherein the circuitry for identifying the one or more accelerator resources comprises circuitry for selecting the one or more accelerator resources as a function of one or more of a target heat generation, a target power usage, or a target economic cost of utilization of the one or more accelerator resources.
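Examples 55-57 together suggest a two-stage selection: prefer accelerators already configured for the job, and otherwise rank candidates against heat, power, and cost targets. The weights, field names, and scoring function in this sketch are hypothetical illustrations of one such policy.

```python
# Illustrative sketch: pick an accelerator per Examples 55-57.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Accelerator:
    res_id: str
    configured_for: Optional[str]  # job the resource is already set up for
    heat_w: float   # expected heat generation
    power_w: float  # expected power usage
    cost: float     # economic cost of utilization

def select(accelerators: List[Accelerator], job: str) -> Accelerator:
    # Stage 1: reuse an already-configured resource when one exists.
    ready = [a for a in accelerators if a.configured_for == job]
    pool = ready or accelerators  # assumes the candidate list is nonempty
    # Stage 2: lower weighted score = closer to the heat/power/cost targets.
    return min(pool, key=lambda a: 0.4 * a.heat_w + 0.4 * a.power_w + 0.2 * a.cost)
```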
Example 58 includes the subject matter of any of Examples 46-57, and wherein the managed node is one of a plurality of managed nodes and the workload is one of a plurality of workloads executed by the managed nodes, and wherein the means for determining the predicted demand comprises circuitry for establishing a job queue indicative of all jobs for all of the workloads to be performed; circuitry for determining an average time period in which each job resides in the job queue; and circuitry for determining the demand for each job as a function of the average time period for each job.
Example 59 includes the subject matter of any of Examples 46-58, and wherein the circuitry for determining the demand for each job further comprises circuitry for applying an exponential averaging algorithm to the time period in which each job resides in the job queue.
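The exponential averaging of Examples 58-59 can be pictured as a standard exponential moving average over observed queue-residence times, where longer smoothed waits indicate higher demand for that job. The smoothing factor below is a hypothetical choice.

```python
# Illustrative sketch: EMA of per-job queue-residence times (Examples 58-59).
ALPHA = 0.2  # hypothetical weight on the newest observation

avg_wait = {}  # job -> smoothed queue-residence time in seconds

def observe_wait(job: str, wait_seconds: float) -> None:
    prev = avg_wait.get(job, wait_seconds)
    # EMA: new = alpha * latest + (1 - alpha) * previous
    avg_wait[job] = ALPHA * wait_seconds + (1 - ALPHA) * prev

observe_wait("compress", 10.0)
observe_wait("compress", 20.0)
print(avg_wait["compress"])  # 12.0 -- longer waits imply higher demand
```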
Claims
1. An orchestrator server to dynamically manage the allocation of accelerator resources, the orchestrator server comprising:
- one or more processors;
- one or more memory devices having stored therein a plurality of instructions that, when executed by the one or more processors, cause the orchestrator server to:
- assign a workload to a managed node for execution;
- determine a predicted demand for one or more accelerator resources to accelerate the execution of one or more jobs within the workload;
- provision, prior to the predicted demand, one or more accelerator resources to accelerate the one or more jobs; and
- allocate the one or more provisioned accelerator resources to the managed node to accelerate the execution of the one or more jobs.
2. The orchestrator server of claim 1, wherein to determine the predicted demand comprises to determine a demand for one or more field programmable gate arrays (FPGAs).
3. The orchestrator server of claim 2, wherein to provision the one or more accelerator resources comprises to provide, to the one or more FPGAs, a bit stream indicative of a configuration of each FPGA to accelerate execution of the one or more jobs.
4. The orchestrator server of claim 1, wherein to determine the predicted demand comprises to determine the number of accelerator resources to allocate to satisfy the predicted demand.
5. The orchestrator server of claim 1, wherein to provision the one or more accelerator resources comprises to provision one or more accelerator resources located on one or more sleds that are different than a sled on which the workload is presently executed.
6. The orchestrator server of claim 1, wherein the plurality of instructions, when executed, further cause the orchestrator server to:
- determine a configuration time period to provision each of the one or more accelerator resources; and
- determine a predicted time of the predicted demand;
- wherein to provision the one or more accelerator resources comprises to begin configuration of the one or more accelerator resources for accelerated execution of the one or more jobs at a time that is earlier than the predicted time by at least the configuration time period.
7. The orchestrator server of claim 1, wherein the plurality of instructions, when executed, further cause the orchestrator server to:
- identify one or more jobs within the workload to be accelerated with one or more field programmable gate arrays (FPGAs); and
- associate each identified job with a globally unique identifier indicative of one or more of a specific interface of the job or a definition of the job.
8. The orchestrator server of claim 7, wherein to associate each identified job with a globally unique identifier comprises to associate each identified job with a globally unique identifier indicative of one or more of a size of an input or a format of an input to the job.
9. The orchestrator server of claim 1, wherein the managed node is one of a plurality of managed nodes and the workload is one of a plurality of workloads executed by the managed nodes and the plurality of instructions, when executed, further cause the orchestrator server to:
- determine, for each workload, a local count indicative of a number of times a job is executed in each workload;
- determine a global count indicative of a number of times a job is executed by all of the managed nodes;
- determine whether one or more of the local count or the global count satisfies a threshold count value; and
- identify, in response to a determination that one or more of the local count or the global count satisfies the threshold count value, the associated job as a job to be accelerated.
10. The orchestrator server of claim 9, wherein the plurality of instructions, when executed, further cause the orchestrator server to identify, from a plurality of accelerator resources, the one or more accelerator resources to accelerate the one or more jobs.
11. The orchestrator server of claim 10, wherein to identify the one or more accelerator resources comprises to determine whether one or more of the accelerator resources is already configured to perform one or more of the jobs; and
- select, in response to a determination that one or more of the accelerator resources is already configured to perform one or more of the jobs, the one or more already-configured accelerator resources for acceleration of the one or more jobs.
12. The orchestrator server of claim 10, wherein to identify the one or more accelerator resources comprises to select the one or more accelerator resources as a function of one or more of a target heat generation, a target power usage, or a target economic cost of utilization of the one or more accelerator resources.
13. The orchestrator server of claim 1, wherein the managed node is one of a plurality of managed nodes and the workload is one of a plurality of workloads executed by the managed nodes, and wherein to determine the predicted demand comprises to:
- establish a job queue indicative of all jobs for all of the workloads to be performed;
- determine an average time period in which each job resides in the job queue; and
- determine the demand for each job as a function of the average time period for each job.
14. The orchestrator server of claim 13, wherein to determine the demand for each job further comprises to apply an exponential averaging algorithm to the time period in which each job resides in the job queue.
15. One or more machine-readable storage media comprising a plurality of instructions stored thereon that, in response to being executed, cause an orchestrator server to:
- assign a workload to a managed node for execution;
- determine a predicted demand for one or more accelerator resources to accelerate the execution of one or more jobs within the workload;
- provision, prior to the predicted demand, one or more accelerator resources to accelerate the one or more jobs; and
- allocate the one or more provisioned accelerator resources to the managed node to accelerate the execution of the one or more jobs.
16. The one or more machine-readable storage media of claim 15, wherein to determine the predicted demand comprises to determine a demand for one or more field programmable gate arrays (FPGAs).
17. The one or more machine-readable storage media of claim 16, wherein to provision the one or more accelerator resources comprises to provide, to the one or more FPGAs, a bit stream indicative of a configuration of each FPGA to accelerate execution of the one or more jobs.
18. The one or more machine-readable storage media of claim 15, wherein to determine the predicted demand comprises to determine the number of accelerator resources to allocate to satisfy the predicted demand.
19. The one or more machine-readable storage media of claim 15, wherein to provision the one or more accelerator resources comprises to provision one or more accelerator resources located on one or more sleds that are different than a sled on which the workload is presently executed.
20. The one or more machine-readable storage media of claim 15, wherein the plurality of instructions, when executed, further cause the orchestrator server to:
- determine a configuration time period to provision each of the one or more accelerator resources; and
- determine a predicted time of the predicted demand;
- wherein to provision the one or more accelerator resources comprises to begin configuration of the one or more accelerator resources for accelerated execution of the one or more jobs at a time that is earlier than the predicted time by at least the configuration time period.
21. The one or more machine-readable storage media of claim 15, wherein the plurality of instructions, when executed, further cause the orchestrator server to:
- identify one or more jobs within the workload to be accelerated with one or more field programmable gate arrays (FPGAs); and
- associate each identified job with a globally unique identifier indicative of one or more of a specific interface of the job or a definition of the job.
22. The one or more machine-readable storage media of claim 21, wherein to associate each identified job with a globally unique identifier comprises to associate each identified job with a globally unique identifier indicative of one or more of a size of an input or a format of an input to the job.
23. The one or more machine-readable storage media of claim 15, wherein the managed node is one of a plurality of managed nodes and the workload is one of a plurality of workloads executed by the managed nodes and the plurality of instructions, when executed, further cause the orchestrator server to:
- determine, for each workload, a local count indicative of a number of times a job is executed in each workload;
- determine a global count indicative of a number of times a job is executed by all of the managed nodes;
- determine whether one or more of the local count or the global count satisfies a threshold count value; and
- identify, in response to a determination that one or more of the local count or the global count satisfies the threshold count value, the associated job as a job to be accelerated.
24. The one or more machine-readable storage media of claim 23, wherein the plurality of instructions, when executed, further cause the orchestrator server to identify, from a plurality of accelerator resources, the one or more accelerator resources to accelerate the one or more jobs.
25. An orchestrator server to dynamically manage the allocation of accelerator resources, the orchestrator server comprising:
- circuitry for assigning a workload to a managed node for execution;
- means for determining a predicted demand for one or more accelerator resources to accelerate the execution of one or more jobs within the workload;
- circuitry for provisioning, by the orchestrator server and prior to the predicted demand, one or more accelerator resources to accelerate the one or more jobs; and
- circuitry for allocating the one or more provisioned accelerator resources to the managed node to accelerate the execution of the one or more jobs.
26. A method for dynamically managing the allocation of accelerator resources, the method comprising:
- assigning, by an orchestrator server, a workload to a managed node for execution;
- determining, by the orchestrator server, a predicted demand for one or more accelerator resources to accelerate the execution of one or more jobs within the workload;
- provisioning, by the orchestrator server and prior to the predicted demand, one or more accelerator resources to accelerate the one or more jobs; and
- allocating, by the orchestrator server, the one or more provisioned accelerator resources to the managed node to accelerate the execution of the one or more jobs.
27. The method of claim 26, wherein determining the predicted demand comprises determining a demand for one or more field programmable gate arrays (FPGAs).
28. The method of claim 27, wherein provisioning the one or more accelerator resources comprises providing, to the one or more FPGAs, a bit stream indicative of a configuration of each FPGA to accelerate execution of the one or more jobs.
Type: Application
Filed: Jan 17, 2017
Publication Date: Jan 25, 2018
Inventors: Susanne M. Balle (Hudson, NH), Rahul Khanna (Portland, OR)
Application Number: 15/407,329