TECHNOLOGIES FOR OFFLOADING I/O INTENSIVE OPERATIONS TO A DATA STORAGE SLED
Technologies for offloading I/O intensive workload phases to a data storage sled include a compute sled. The compute sled is to execute a workload that includes multiple phases. Each phase is indicative of a different resource utilization over a time period. Additionally, the compute sled is to identify an I/O intensive phase of the workload, wherein the amount of data to be communicated through a network path between the compute sled and the data storage sled to execute the I/O intensive phase satisfies a predefined threshold. The compute sled is also to migrate the workload to the data storage sled to execute the I/O intensive phase locally on the data storage sled. Other embodiments are also described and claimed.
The present application claims the benefit of U.S. Provisional Patent Application No. 62/427,268, filed Nov. 29, 2016 and Indian Provisional Patent Application No. 201741030632, filed Aug. 30, 2017.
BACKGROUND
Typically, in systems in which data is accessed by a compute device from remote data storage (e.g., data stored at a location remote from the compute device within a data center), the network can become congested when the amount of data requested is relatively large. As such, other compute devices may be unable to perform operations that also require the communication of relatively large amounts of data through the network in a timely manner (e.g., in accordance with a latency or throughput target specified in a service level agreement with a customer). In other words, the network may become a bottleneck for the execution of workloads in the data center and the compute resources (e.g., processors) of the compute devices may be wasted as those resources sit idle waiting for requested data to arrive. To remedy such situations, an operator of the data center may spend monetary resources to install a higher throughput network. However, in many instances, the capacity of the higher throughput network may go largely unused, as the times when multiple workloads are concurrently in I/O intensive phases (e.g., periods of high network utilization to access remote data storage) may occur only a small percentage of the total time that the data center is in use.
The concepts described herein are illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. Where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements.
While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described herein in detail. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.
References in the specification to “one embodiment,” “an embodiment,” “an illustrative embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Additionally, it should be appreciated that items included in a list in the form of “at least one A, B, and C” can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). Similarly, items listed in the form of “at least one of A, B, or C” can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).
The disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on a transitory or non-transitory machine-readable (e.g., computer-readable) storage medium, which may be read and executed by one or more processors. A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device).
In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. However, it should be appreciated that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or method feature in a particular figure is not meant to imply that such feature is required in all embodiments and, in some embodiments, may not be included or may be combined with other features.
The illustrative data center 100 differs from typical data centers in many ways. For example, in the illustrative embodiment, the circuit boards (“sleds”) on which components such as CPUs, memory, and other components are placed are designed for increased thermal performance. In particular, in the illustrative embodiment, the sleds are shallower than typical boards. In other words, the sleds are shorter from the front to the back, where cooling fans are located. This decreases the length of the path that air must travel across the components on the board. Further, the components on the sled are spaced further apart than in typical circuit boards, and the components are arranged to reduce or eliminate shadowing (i.e., one component in the air flow path of another component). In the illustrative embodiment, processing components such as the processors are located on a top side of a sled while near memory, such as DIMMs, are located on a bottom side of the sled. As a result of the enhanced airflow provided by this design, the components may operate at higher frequencies and power levels than in typical systems, thereby increasing performance. Furthermore, the sleds are configured to blindly mate with power and data communication cables in each rack 102A, 102B, 102C, 102D, enhancing their ability to be quickly removed, upgraded, reinstalled, and/or replaced. Similarly, individual components located on the sleds, such as processors, accelerators, memory, and data storage drives, are configured to be easily upgraded due to their increased spacing from each other. In the illustrative embodiment, the components additionally include hardware attestation features to prove their authenticity.
Furthermore, in the illustrative embodiment, the data center 100 utilizes a single network architecture (“fabric”) that supports multiple other network architectures including Ethernet and Omni-Path. The sleds, in the illustrative embodiment, are coupled to switches via optical fibers, which provide higher bandwidth and lower latency than typical twisted pair cabling (e.g., Category 5, Category 5e, Category 6, etc.). Due to the high bandwidth, low latency interconnections and network architecture, the data center 100 may, in use, pool resources, such as memory, accelerators (e.g., graphics accelerators, FPGAs, ASICs, etc.), and data storage drives that are physically disaggregated, and provide them to compute resources (e.g., processors) on an as needed basis, enabling the compute resources to access the pooled resources as if they were local. The illustrative data center 100 additionally receives utilization information for the various resources, predicts resource utilization for different types of workloads based on past resource utilization, and dynamically reallocates the resources based on this information.
The racks 102A, 102B, 102C, 102D of the data center 100 may include physical design features that facilitate the automation of a variety of types of maintenance tasks. For example, data center 100 may be implemented using racks that are designed to be robotically-accessed, and to accept and house robotically-manipulatable resource sleds. Furthermore, in the illustrative embodiment, the racks 102A, 102B, 102C, 102D include integrated power sources that receive a greater voltage than is typical for power sources. The increased voltage enables the power sources to provide additional power to the components on each sled, enabling the components to operate at higher than typical frequencies.
In various embodiments, dual-mode optical switches may be capable of receiving both Ethernet protocol communications carrying Internet Protocol (IP) packets and communications according to a second, high-performance computing (HPC) link-layer protocol (e.g., Intel's Omni-Path Architecture, InfiniBand™) via optical signaling media of an optical fabric. As reflected in
MPCMs 916-1 to 916-7 may be configured to provide inserted sleds with access to power sourced by respective power modules 920-1 to 920-7, each of which may draw power from an external power source 921. In various embodiments, the external power source 921 may deliver alternating current (AC) power to rack 902, and power modules 920-1 to 920-7 may be configured to convert such AC power to direct current (DC) power to be sourced to inserted sleds. In some embodiments, for example, power modules 920-1 to 920-7 may be configured to convert 277-volt AC power into 12-volt DC power for provision to inserted sleds via respective MPCMs 916-1 to 916-7. The embodiments are not limited to this example.
MPCMs 916-1 to 916-7 may also be arranged to provide inserted sleds with optical signaling connectivity to a dual-mode optical switching infrastructure 914, which may be the same as—or similar to—dual-mode optical switching infrastructure 514 of
Sled 1004 may also include dual-mode optical network interface circuitry 1026. Dual-mode optical network interface circuitry 1026 may generally comprise circuitry that is capable of communicating over optical signaling media according to each of multiple link-layer protocols supported by dual-mode optical switching infrastructure 914 of
Coupling MPCM 1016 with a counterpart MPCM of a sled space in a given rack may cause optical connector 1016A to couple with an optical connector comprised in the counterpart MPCM. This may generally establish optical connectivity between optical cabling of the sled and dual-mode optical network interface circuitry 1026, via each of a set of optical channels 1025. Dual-mode optical network interface circuitry 1026 may communicate with the physical resources 1005 of sled 1004 via electrical signaling media 1028. In addition to the dimensions of the sleds and arrangement of components on the sleds to provide improved cooling and enable operation at a relatively higher thermal envelope (e.g., 250 W), as described above with reference to
As shown in
In another example, in various embodiments, one or more pooled storage sleds 1132 may be included among the physical infrastructure 1100A of data center 1100, each of which may comprise a pool of storage resources that is globally accessible to other sleds via optical fabric 1112 and dual-mode optical switching infrastructure 1114. In some embodiments, such pooled storage sleds 1132 may comprise pools of solid-state storage devices such as solid-state drives (SSDs). In various embodiments, one or more high-performance processing sleds 1134 may be included among the physical infrastructure 1100A of data center 1100. In some embodiments, high-performance processing sleds 1134 may comprise pools of high-performance processors, as well as cooling features that enhance air cooling to yield a higher thermal envelope of up to 250 W or more. In various embodiments, any given high-performance processing sled 1134 may feature an expansion connector 1117 that can accept a far memory expansion sled, such that the far memory that is locally available to that high-performance processing sled 1134 is disaggregated from the processors and near memory comprised on that sled. In some embodiments, such a high-performance processing sled 1134 may be configured with far memory using an expansion sled that comprises low-latency SSD storage. The optical infrastructure allows for compute resources on one sled to utilize remote accelerator/FPGA, memory, and/or SSD resources that are disaggregated on a sled located on the same rack or any other rack in the data center. The remote resources can be located one switch jump or two switch jumps away in the spine-leaf network architecture described above with reference to
In various embodiments, one or more layers of abstraction may be applied to the physical resources of physical infrastructure 1100A in order to define a virtual infrastructure, such as a software-defined infrastructure 1100B. In some embodiments, virtual computing resources 1136 of software-defined infrastructure 1100B may be allocated to support the provision of cloud services 1140. In various embodiments, particular sets of virtual computing resources 1136 may be grouped for provision to cloud services 1140 in the form of software defined infrastructure (SDI) services 1138. Examples of cloud services 1140 may include—without limitation—software as a service (SaaS) services 1142, platform as a service (PaaS) services 1144, and infrastructure as a service (IaaS) services 1146.
In some embodiments, management of software-defined infrastructure 1100B may be conducted using a virtual infrastructure management framework 1150B. In various embodiments, virtual infrastructure management framework 1150B may be designed to implement workload fingerprinting techniques and/or machine-learning techniques in conjunction with managing allocation of virtual computing resources 1136 and/or SDI services 1138 to cloud services 1140. In some embodiments, virtual infrastructure management framework 1150B may use/consult telemetry data in conjunction with performing such resource allocation. In various embodiments, an application/service management framework 1150C may be implemented in order to provide QoS management capabilities for cloud services 1140. The embodiments are not limited in this context.
Referring now to
In operation, the system 1210 may utilize one or more migration logic units 1250 in a compute sled 1230, 1232 and/or an I/O accelerator unit 1260 in a data storage sled 1240 to perform migration of a workload from a compute sled 1230, 1232 to the data storage sled 1240 when the workload enters an I/O intensive phase, indicative of a period of execution of the workload in which the amount of data to be sent through the network between the compute sled 1230 and the data storage sled 1240 satisfies a predefined threshold amount (e.g., 8 gigabytes per second) and the congestion level of the network path between the compute sled 1230 and the data storage sled 1240 satisfies a predefined level of congestion (e.g., a predefined latency, a predefined utilization of the total throughput of the network). In the illustrative embodiment, the predefined level of congestion is a level of congestion in which, if the I/O intensive phase of the workload was executed on the compute sled 1230 and the data used by the I/O intensive phase was sent through the network 1212 between the compute sled 1230 and the data storage sled 1240, the speed of execution of the workload would be slowed. As a result, the workload may not produce a result in a time period specified in a service level agreement (SLA) with a customer. By migrating the workload to the data storage sled 1240 for execution, the I/O intensive phase may be executed faster, as the data utilized by the I/O intensive phase is local to the sled where the workload is executed. In the illustrative embodiment, the data storage sled 1240 may map a memory range of the main memory of the compute sled 1230 to the data storage sled 1240, such that data (e.g., a relatively small set of output data, compared to a relatively large amount of input data read from a data storage device local to the data storage sled 1240) may be read from and written to the main memory of the compute sled 1230 during execution of the I/O intensive phase on the data storage sled 1240.
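As a rough illustration of the migration decision just described, the following Python sketch (which is not part of any claimed embodiment) evaluates whether an upcoming phase both exceeds an assumed data-rate threshold and faces an assumed level of network congestion; the threshold values, the latency-based congestion measure, and the function name are assumptions made for clarity only.

# Illustrative sketch only: decide whether to migrate a workload's I/O intensive
# phase to the data storage sled. The 8 GB/s threshold and the latency-based
# congestion measure are assumed values, not figures from the specification.
IO_THRESHOLD_BYTES_PER_SEC = 8 * 1024**3     # predefined threshold (e.g., 8 gigabytes per second)
CONGESTION_LATENCY_THRESHOLD_MS = 2.0        # assumed predefined level of congestion

def should_migrate(predicted_io_bytes_per_sec: float, path_latency_ms: float) -> bool:
    """Return True when the upcoming phase is I/O intensive and the network path
    between the compute sled and the data storage sled is congested enough that
    it would slow execution of the workload."""
    io_intensive = predicted_io_bytes_per_sec >= IO_THRESHOLD_BYTES_PER_SEC
    congested = path_latency_ms >= CONGESTION_LATENCY_THRESHOLD_MS
    return io_intensive and congested

# Example: a phase expected to move 10 GB/s over a path with 3 ms latency would be
# migrated; the same phase over an uncongested path would stay on the compute sled.
print(should_migrate(10 * 1024**3, 3.0))   # True
print(should_migrate(10 * 1024**3, 0.5))   # False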
Referring now to
As shown in
The compute engine 1302 may be embodied as any type of device or collection of devices capable of performing various compute functions described below. In some embodiments, the compute engine 1302 may be embodied as a single device such as an integrated circuit, an embedded system, a field-programmable gate array (FPGA), a system-on-a-chip (SOC), or other integrated system or device. Additionally, in some embodiments, the compute engine 1302 includes or is embodied as a processor 1304 and a memory 1306. The processor 1304 may be embodied as any type of processor capable of performing the functions described herein. For example, the processor 1304 may be embodied as a single or multi-core processor(s), a microcontroller, or other processor or processing/controlling circuit. In some embodiments, the processor 1304 may be embodied as, include, or be coupled to an FPGA, an application specific integrated circuit (ASIC), reconfigurable hardware or hardware circuitry, or other specialized hardware to facilitate performance of the functions described herein. The processor 1304 may include a migration logic unit 1250 briefly mentioned with reference to
The main memory 1306 may be embodied as any type of volatile (e.g., dynamic random access memory (DRAM), etc.) or non-volatile memory or data storage capable of performing the functions described herein. Volatile memory may be a storage medium that requires power to maintain the state of data stored by the medium. Non-limiting examples of volatile memory may include various types of random access memory (RAM), such as dynamic random access memory (DRAM) or static random access memory (SRAM). One particular type of DRAM that may be used in a memory module is synchronous dynamic random access memory (SDRAM). In particular embodiments, DRAM of a memory component may comply with a standard promulgated by JEDEC, such as JESD79F for DDR SDRAM, JESD79-2F for DDR2 SDRAM, JESD79-3F for DDR3 SDRAM, JESD79-4A for DDR4 SDRAM, JESD209 for Low Power DDR (LPDDR), JESD209-2 for LPDDR2, JESD209-3 for LPDDR3, and JESD209-4 for LPDDR4 (these standards are available at www.jedec.org). Such standards (and similar standards) may be referred to as DDR-based standards and communication interfaces of the storage devices that implement such standards may be referred to as DDR-based interfaces.
In one embodiment, the memory device is a block addressable memory device, such as those based on NAND or NOR technologies. A memory device may also include future generation nonvolatile devices, such as a three dimensional crosspoint memory device (e.g., Intel 3D XPoint™ memory), or other byte addressable write-in-place nonvolatile memory devices. In one embodiment, the memory device may be or may include memory devices that use chalcogenide glass, multi-threshold level NAND flash memory, NOR flash memory, single or multi-level Phase Change Memory (PCM), a resistive memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), anti-ferroelectric memory, magnetoresistive random access memory (MRAM) memory that incorporates memristor technology, resistive memory including the metal oxide base, the oxygen vacancy base and the conductive bridge Random Access Memory (CB-RAM), or spin transfer torque (STT)-MRAM, a spintronic magnetic junction memory based device, a magnetic tunneling junction (MTJ) based device, a DW (Domain Wall) and SOT (Spin Orbit Transfer) based device, a thyristor based memory device, or a combination of any of the above, or other memory. The memory device may refer to the die itself and/or to a packaged memory product.
In some embodiments, 3D crosspoint memory (e.g., Intel 3D XPoint™ memory) may comprise a transistor-less stackable cross point architecture in which memory cells sit at the intersection of word lines and bit lines and are individually addressable and in which bit storage is based on a change in bulk resistance. In some embodiments, all or a portion of the main memory 1306 may be integrated into the processor 1304. In operation, the main memory 1306 may store various software and data used during operation such as workload data, phase data, network congestion data, migration data, applications, programs, libraries, and drivers.
The compute engine 1302 is communicatively coupled to other components of the compute sled 1230 via the I/O subsystem 1308, which may be embodied as circuitry and/or components to facilitate input/output operations with the compute engine 1302 (e.g., with the processor 1304 and/or the main memory 1306) and other components of the compute sled 1230. For example, the I/O subsystem 1308 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, integrated sensor hubs, firmware devices, communication links (e.g., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.), and/or other components and subsystems to facilitate the input/output operations. In some embodiments, the I/O subsystem 1308 may form a portion of a system-on-a-chip (SoC) and be incorporated, along with one or more of the processor 1304, the main memory 1306, and other components of the compute sled 1230, into the compute engine 1302.
The communication circuitry 1310 may be embodied as any communication circuit, device, or collection thereof, capable of enabling communications over the network 1212 between the compute sled 1230 and another compute device (e.g., the data storage sled 1240, the orchestrator server 1216, etc.). The communication circuitry 1310 may be configured to use any one or more communication technology (e.g., wired or wireless communications) and associated protocols (e.g., Ethernet, Bluetooth®, Wi-Fi®, WiMAX, etc.) to effect such communication.
The illustrative communication circuitry 1310 includes a network interface controller (NIC) 1312, which may also be referred to as a host fabric interface (HFI). The NIC 1312 may be embodied as one or more add-in-boards, daughter cards, network interface cards, controller chips, chipsets, or other devices that may be used by the compute sled 1230 to connect with another compute device (e.g., the data storage sled 1240, the orchestrator server 1216, etc.). In some embodiments, the NIC 1312 may be embodied as part of a system-on-a-chip (SoC) that includes one or more processors, or included on a multichip package that also contains one or more processors. In some embodiments, the NIC 1312 may include a local processor (not shown) and/or a local memory (not shown) that are both local to the NIC 1312. In such embodiments, the local processor of the NIC 1312 may be capable of performing one or more of the functions of the compute engine 1302 described herein. Additionally or alternatively, in such embodiments, the local memory of the NIC 1312 may be integrated into one or more components of the compute sled 1230 at the board level, socket level, chip level, and/or other levels. In some embodiments, the migration logic unit 1250 may be included in the NIC 1312.
The one or more illustrative data storage devices 1314 may be embodied as any type of devices configured for short-term or long-term storage of data such as, for example, memory devices and circuits, memory cards, hard disk drives, solid-state drives, or other data storage devices. Each data storage device 1314 may include a system partition that stores data and firmware code for the data storage device 1314. Each data storage device 1314 may also include an operating system partition that stores data files and executables for an operating system.
Additionally or alternatively, the compute sled 1230 may include one or more peripheral devices 1316. Such peripheral devices 1316 may include any type of peripheral device commonly found in a compute device such as a display, speakers, a mouse, a keyboard, and/or other input/output devices, interface devices, and/or other peripheral devices.
Referring now to
As shown in
The compute engine 1402 may be embodied as any type of device or collection of devices capable of performing various compute functions described below. In some embodiments, the compute engine 1402 may be embodied as a single device such as an integrated circuit, an embedded system, a field-programmable gate array (FPGA), a system-on-a-chip (SOC), or other integrated system or device. Additionally, in some embodiments, the compute engine 1402 includes or is embodied as a processor 1404 and a memory 1406. The processor 1404 may be embodied as any type of processor capable of performing the functions described herein. For example, the processor 1404 may be embodied as a single or multi-core processor(s), a microcontroller, or other processor or processing/controlling circuit. In some embodiments, the processor 1404 may be embodied as, include, or be coupled to an FPGA, an application specific integrated circuit (ASIC), reconfigurable hardware or hardware circuitry, or other specialized hardware to facilitate performance of the functions described herein. The processor 1404 may include an I/O accelerator unit 1260, which may be embodied as a specialized device, such as a co-processor, an FPGA, or an ASIC, for executing the I/O intensive phase of one or more workloads, using data in the local data storage device(s) 1414. In some embodiments, the I/O accelerator unit 1260 may map a memory region of the compute sled 1230 as local memory for the corresponding workload. As such, during execution of the workload, the I/O accelerator unit 1260 may cause data to be read from and/or written to the main memory 1306 of the compute sled 1230 (e.g., by interfacing with the migration logic unit 1250 of the compute sled 1230) as if the memory 1306 were local and without modifying the executable code of the workload. Additionally, the I/O accelerator unit 1260 may reformat data from the compute sled 1230 to a format usable by the I/O accelerator unit 1260 (e.g., converting a file to a block or vice versa, changing a byte ordering of data, etc.) to execute the I/O intensive phase of the workload.
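The reformatting performed by the I/O accelerator unit 1260 can be pictured with a small Python sketch such as the one below; the 4096-byte block size, the 32-bit word width, and the helper names are assumptions, and the code is only a simplified stand-in for whatever conversion an actual accelerator would apply.

import struct

BLOCK_SIZE = 4096  # assumed block size for the file-to-block conversion

def swap_byte_order_u32(buf: bytes) -> bytes:
    """Reinterpret a buffer as 32-bit words, converting little-endian data from
    the compute sled into big-endian words (or vice versa)."""
    word_count = len(buf) // 4
    words = struct.unpack('<%dI' % word_count, buf[:word_count * 4])
    return struct.pack('>%dI' % word_count, *words)

def file_to_blocks(file_image: bytes, block_size: int = BLOCK_SIZE):
    """Split a file image into fixed-size blocks, zero-padding the last block,
    so that a block-oriented accelerator can consume it."""
    blocks = []
    for off in range(0, len(file_image), block_size):
        block = file_image[off:off + block_size]
        blocks.append(block.ljust(block_size, b'\x00'))
    return blocks

data = bytes(range(16))
print(swap_byte_order_u32(data).hex())      # byte order of each 32-bit word reversed
print(len(file_to_blocks(bytes(10000))))    # 3 blocks of 4096 bytes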
The main memory 1406 may be embodied as any type of volatile (e.g., dynamic random access memory (DRAM), etc.) or non-volatile memory or data storage capable of performing the functions described herein. In operation, the main memory 1406 may store various software and data used during operation such as workload data, phase data, network congestion data, migration data, applications, programs, libraries, and drivers.
The compute engine 1402 is communicatively coupled to other components of the data storage sled 1240 via the I/O subsystem 1408, which may be embodied as circuitry and/or components to facilitate input/output operations with the compute engine 1402 (e.g., with the processor 1404 and/or the main memory 1406) and other components of the data storage sled 1240. For example, the I/O subsystem 1408 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, integrated sensor hubs, firmware devices, communication links (e.g., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.), and/or other components and subsystems to facilitate the input/output operations. In some embodiments, the I/O subsystem 1408 may form a portion of a system-on-a-chip (SoC) and be incorporated, along with one or more of the processor 1404, the main memory 1406, and other components of the data storage sled 1240, into the compute engine 1402.
The communication circuitry 1410 may be embodied as any communication circuit, device, or collection thereof, capable of enabling communications over the network 1212 between the data storage sled 1240 and another compute device (e.g., the compute sleds 1230, 1232, the orchestrator server 1216, etc.). The communication circuitry 1410 may be configured to use any one or more communication technology (e.g., wired or wireless communications) and associated protocols (e.g., Ethernet, Bluetooth®, Wi-Fi®, WiMAX, etc.) to effect such communication.
The illustrative communication circuitry 1410 includes a network interface controller (NIC) 1412, which may also be referred to as a host fabric interface (HFI). The NIC 1412 may be embodied as one or more add-in-boards, daughter cards, network interface cards, controller chips, chipsets, or other devices that may be used by the data storage sled 1240 to connect with another compute device (e.g., the compute sleds 1230, 1232, the orchestrator server 1216, etc.). In some embodiments, the NIC 1412 may be embodied as part of a system-on-a-chip (SoC) that includes one or more processors, or included on a multichip package that also contains one or more processors. In some embodiments, the NIC 1412 may include a local processor (not shown) and/or a local memory (not shown) that are both local to the NIC 1412. In such embodiments, the local processor of the NIC 1412 may be capable of performing one or more of the functions of the compute engine 1402 described herein. Additionally or alternatively, in such embodiments, the local memory of the NIC 1412 may be integrated into one or more components of the data storage sled 1240 at the board level, socket level, chip level, and/or other levels. In some embodiments, the I/O accelerator unit 1260 may be included in the NIC 1412.
The one or more illustrative data storage devices 1414 may be embodied as any type of devices configured for short-term or long-term storage of data such as, for example, memory devices and circuits, memory cards, hard disk drives, solid-state drives, or other data storage devices. Each data storage device 1414 may include a system partition that stores data and firmware code for the data storage device 1414. Each data storage device 1414 may also include an operating system partition that stores data files and executables for an operating system.
Additionally or alternatively, the data storage sled 1240 may include one or more peripheral devices 1416. Such peripheral devices 1416 may include any type of peripheral device commonly found in a compute device such as a display, speakers, a mouse, a keyboard, and/or other input/output devices, interface devices, and/or other peripheral devices.
The client device 1214, the orchestrator server 1216, and the compute sled 1232 may have components similar to those described in
As described above, the network switch 1220, the orchestrator server 1216, and the sleds 1230, 1232, 1240 are illustratively in communication via the network 1212, which may be embodied as any type of wired or wireless communication network, including global networks (e.g., the Internet), local area networks (LANs) or wide area networks (WANs), cellular networks (e.g., Global System for Mobile Communications (GSM), 3G, Long Term Evolution (LTE), Worldwide Interoperability for Microwave Access (WiMAX), etc.), digital subscriber line (DSL) networks, cable networks (e.g., coaxial networks, fiber networks, etc.), or any combination thereof.
Referring now to
In the illustrative environment 1500, the network communicator 1520, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to facilitate inbound and outbound network communications (e.g., network traffic, network packets, network flows, etc.) to and from the compute sled 1230, respectively. To do so, the network communicator 1520 is configured to receive and process data packets from one system or computing device (e.g., the orchestrator server 1216) and to prepare and send data packets to another computing device or system (e.g., the data storage sled 1240). Accordingly, in some embodiments, at least a portion of the functionality of the network communicator 1520 may be performed by the communication circuitry 1310, and, in the illustrative embodiment, by the NIC 1312.
The migration manager 1530, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof, is configured to manage the migration of a workload to the data storage sled 1240 for execution if the workload is entering an I/O intensive phase and the network 1212 would be a bottleneck (e.g., transferring the data through the network between the data storage sled 1240 and the compute sled 1230 would slow the execution of the workload). To do so, in the illustrative embodiment, the migration manager 1530 includes a workload executor 1532, an I/O intensity determiner 1534, a network congestion determiner 1536, and a workload phase migrator 1538. The workload executor 1532, in the illustrative embodiment, is configured to execute the workload using data stored in the data storage device(s) 1414 of the data storage sled 1240. As the compute sled 1230 executes the workload, the workload may transition through multiple phases of resource utilization, as described above. The I/O intensity determiner 1534, in the illustrative embodiment, is configured to determine whether the amount of data to be accessed from the data storage device(s) 1414 of the data storage sled 1240 to execute a phase satisfies a threshold amount (e.g., a predefined number of gigabytes per second, etc.). In the illustrative embodiment, the I/O intensity determiner 1534 may monitor the resource utilization of the workload over time to identify the different phases, identify patterns in the phases, and/or identify metadata associated with sections of the executable code of the workload that demarcate different phases, to predict whether the workload will transition into an I/O intensive phase within a predefined time period. The network congestion determiner 1536, in the illustrative embodiment, is configured to determine the level of network congestion, such as by sending a test message to the data storage sled 1240 to determine a latency in receiving a response from the data storage sled 1240, identifying a fullness of a transmit buffer of the NIC 1312 of the compute sled 1230 (e.g., a fuller buffer may indicate more congestion), and/or by querying the orchestrator server 1216 for the network congestion data 1506. The workload phase migrator 1538, in the illustrative embodiment, is configured to determine whether to migrate the workload to the data storage sled 1240 as a function of whether the workload is predicted to enter an I/O intensive phase within a predefined time period (e.g., 10 milliseconds) and further as a function of the network congestion data 1506 (e.g., whether the network 1212 is congested to the point that the network 1212 would be a bottleneck to the execution of the workload in the I/O intensive phase). Further, in the illustrative embodiment, the workload phase migrator 1538 may facilitate migration of the workload to the data storage sled 1240 by providing memory map data that is usable by the data storage sled 1240 to map a region of the main memory 1306 of the compute sled 1230 as local memory to be used by the workload when the workload is executed on the data storage sled 1240. Additionally, the workload phase migrator 1538 may reformat data in the main memory 1306 to a different format that is usable by the data storage sled 1240 (e.g., by the I/O accelerator unit 1260 of the data storage sled 1240), as described herein.
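A minimal sketch of how the network congestion determiner 1536 might combine the signals mentioned above (the latency of a test message, the fullness of the NIC transmit buffer, and the network congestion data 1506 reported by the orchestrator server 1216) is shown below; the weighting, the numeric thresholds, and the function names are hypothetical and not taken from the specification.

def estimate_congestion(rtt_ms: float,
                        tx_buffer_fill: float,
                        orchestrator_utilization: float) -> float:
    """Combine three observed signals into a single congestion score in [0, 1]:
    - rtt_ms: round-trip latency of a test message sent to the data storage sled
    - tx_buffer_fill: fraction of the NIC transmit buffer that is occupied
    - orchestrator_utilization: fraction of total network throughput in use, as
      reported by the orchestrator server (network congestion data)."""
    rtt_score = min(rtt_ms / 10.0, 1.0)   # assume 10 ms round-trip means fully congested
    return max(rtt_score, tx_buffer_fill, orchestrator_utilization)

PREDEFINED_CONGESTION_LEVEL = 0.7  # assumed level at which the network becomes a bottleneck

def network_is_bottleneck(rtt_ms: float, tx_buffer_fill: float,
                          orchestrator_utilization: float) -> bool:
    return estimate_congestion(rtt_ms, tx_buffer_fill,
                               orchestrator_utilization) >= PREDEFINED_CONGESTION_LEVEL

print(network_is_bottleneck(8.0, 0.9, 0.4))   # True: the transmit buffer is nearly full
print(network_is_bottleneck(1.0, 0.2, 0.3))   # False: the path is lightly loaded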
It should be appreciated that each of the workload executor 1532, the I/O intensity determiner 1534, the network congestion determiner 1536, and the workload phase migrator 1538 may be separately embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof. For example, the workload executor 1532 may be embodied as a hardware component, while the I/O intensity determiner 1534, the network congestion determiner 1536, and the workload phase migrator 1538 are embodied as virtualized hardware components or as some other combination of hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof.
Referring now to
In the illustrative environment 1600, the network communicator 1620, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to facilitate inbound and outbound network communications (e.g., network traffic, network packets, network flows, etc.) to and from the data storage sled 1240, respectively. To do so, the network communicator 1620 is configured to receive and process data packets from one system or computing device (e.g., the orchestrator server 1216) and to prepare and send data packets to another computing device or system (e.g., the compute sled 1230). Accordingly, in some embodiments, at least a portion of the functionality of the network communicator 1620 may be performed by the communication circuitry 1410, and, in the illustrative embodiment, by the NIC 1412.
The migration manager 1630, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof, is configured to facilitate the migration of a workload to the data storage sled 1240 and to migrate a workload back to the corresponding compute sled 1230, 1232 after the I/O intensive phase of the workload has completed or if the network congestion satisfies a predefined threshold (e.g., the network would no longer be a bottleneck to the execution of the I/O intensive phase of the workload). To do so, in the illustrative embodiment, the migration manager 1630 includes a phase accelerator 1632, a quality of service (QoS) manager 1634, a network congestion determiner 1636, and a workload phase migrator 1638. The phase accelerator 1632, in the illustrative embodiment, is configured to execute a workload that is in an I/O intensive phase (e.g., with the I/O accelerator unit 1260). The QoS manager 1634, in the illustrative embodiment, is configured to apply a quality of service (QoS) policy to throttle the usage of resources by the workloads executed on the data storage sled 1240 so that no workload dominates the usage of data storage sled resources to the detriment of other workloads (e.g., causing a workload to no longer satisfy a QoS target specified in a service level agreement (SLA)). The network congestion determiner 1636 is similar to the network congestion determiner 1536 described with reference to the environment 1500. Additionally, in the illustrative embodiment, the workload phase migrator 1638 is configured to facilitate the migration of the workload to the data storage sled 1240, such as by establishing a memory map that enables the workload to access the main memory 1306 of the compute sled 1230 as local memory. The workload phase migrator 1638 is also configured to determine when to migrate the workload back to the compute sled 1230, 1232. In the illustrative embodiment, the workload phase migrator 1638 is configured to migrate the workload back to the compute sled 1230, 1232 when the I/O intensive phase has been completed (e.g., the executable code of the I/O intensive phase has been completely executed, the amount of memory bandwidth utilized by the workload has fallen below a predefined amount, etc.) and/or when the network congestion has decreased to a level that the network 1212 would no longer be a bottleneck to the execution of the workload on the compute sled 1230, 1232.
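The migrate-back decision made by the workload phase migrator 1638 can likewise be summarized as a small predicate; in the Python sketch below, the memory-bandwidth floor and the congestion level at which the network is considered clear are assumed values rather than figures from the specification.

PHASE_MEMORY_BW_FLOOR = 1 * 1024**3   # assumed: below 1 GB/s the I/O intensive phase is considered over
UNCONGESTED_LEVEL = 0.3               # assumed: below this score the network is no longer a bottleneck

def should_migrate_back(phase_code_completed: bool,
                        memory_bw_bytes_per_sec: float,
                        congestion_score: float) -> bool:
    """Return True when the I/O intensive phase has ended (its executable code
    finished or its memory bandwidth usage dropped below the floor) or when the
    network has cleared enough for the compute sled to resume the workload."""
    phase_ended = phase_code_completed or memory_bw_bytes_per_sec < PHASE_MEMORY_BW_FLOOR
    network_clear = congestion_score <= UNCONGESTED_LEVEL
    return phase_ended or network_clear

print(should_migrate_back(True, 5 * 1024**3, 0.9))    # True: code for the phase has finished
print(should_migrate_back(False, 6 * 1024**3, 0.8))   # False: still I/O intensive and still congested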
It should be appreciated that each of the phase accelerator 1632, the QoS manager 1634, the network congestion determiner 1636, and the workload phase migrator 1638 may be separately embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof. For example, the phase accelerator 1632 may be embodied as a hardware component, while the QoS manager 1634, the network congestion determiner 1636, and the workload phase migrator 1638 are embodied as virtualized hardware components or as some other combination of hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof.
Referring now to
In block 1710, the compute sled 1230 executes the workload, including accessing (e.g., reading from and/or writing to) the data storage sled 1240 and a region of the main memory 1306 of the compute sled 1230. Additionally, in block 1712, the compute sled 1230 identifies I/O intensive phases of the workload. In doing so, the compute sled 1230 may identify phases in which the amount of data sent through the network (e.g., between the data storage sled 1240 and the compute sled 1230) satisfies a predefined threshold, such as a predefined number of gigabytes per second, as indicated in block 1714. As indicated in block 1716, in identifying the I/O intensive phases, the compute sled 1230 identifies I/O intensive phases as a function of workload metadata indicative of the I/O intensive phases. For example, the metadata may be included with the executable code of the workload and may identify the sections of the executable code that mark the beginning and end of each phase. Further, the metadata may indicate the types and amounts of resources utilized by each phase. As indicated in block 1718, the compute sled 1230 identifies I/O intensive phases using pattern recognition. In doing so, and as indicated in block 1720, the compute sled 1230 may determine historical I/O usage associated with different periods of execution of the workload and identify changes in the I/O usage as changes in the phases of the workload. Further, as indicated in block 1722, the compute sled 1230 may identify patterns of phases (e.g., phase A, followed by phase B, then phase A, then phase C, then phase A, etc.).
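One simple way to realize the pattern recognition of blocks 1718 and 1720 is to segment a sampled history of I/O rates wherever the rate changes markedly and then label each segment as I/O intensive or not; the Python sketch below is illustrative only, and the sampling scheme, change factor, and intensity threshold are assumptions.

IO_INTENSIVE_BYTES_PER_SEC = 8 * 1024**3    # assumed predefined threshold
CHANGE_FACTOR = 2.0                         # assumed: a 2x jump or drop in I/O rate starts a new phase

def segment_phases(io_samples):
    """Split a list of sampled I/O rates (bytes per second, one per interval) into
    phases, starting a new phase whenever the rate changes by CHANGE_FACTOR.
    Returns a list of (start_index, end_index, is_io_intensive) tuples."""
    phases, start = [], 0
    for i in range(1, len(io_samples) + 1):
        boundary = (i == len(io_samples) or
                    io_samples[i] > io_samples[i - 1] * CHANGE_FACTOR or
                    io_samples[i] * CHANGE_FACTOR < io_samples[i - 1])
        if boundary:
            avg = sum(io_samples[start:i]) / (i - start)
            phases.append((start, i - 1, avg >= IO_INTENSIVE_BYTES_PER_SEC))
            start = i
    return phases

gib = 1024**3
history = [1 * gib, 1 * gib, 9 * gib, 10 * gib, 1 * gib]
print(segment_phases(history))
# [(0, 1, False), (2, 3, True), (4, 4, False)]: the middle segment is an I/O intensive phase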
In block 1724, the compute sled 1230 determines whether an I/O intensive phase is likely to occur within a predefined time period. In doing so, the compute sled 1230, in the illustrative embodiment, determines a likelihood of an I/O intensive phase occurring within the predefined time period as a function of the identified pattern of phases and the present time, as indicated in block 1726. For example, if the compute sled 1230 has determined that phase B is I/O intensive, that phase B typically (e.g., 80% of the time) follows phase A, and that phase A has been executing for 90% of its typical phase residency (i.e., time period of execution) of 100 milliseconds, then the compute sled 1230 may determine that the I/O intensive phase (e.g., phase B) is likely to occur within the next 10 milliseconds. In block 1728, the compute sled 1230 determines the subsequent course of action as a function of whether there is an upcoming I/O intensive phase in the workload (e.g., whether the likelihood of an I/O intensive phase occurring within the next 10 milliseconds is greater than a predefined threshold, such as 50%). If the compute sled 1230 determines that there is not an upcoming I/O intensive phase, the method 1700 loops back to block 1702 in which the compute sled 1230 determines whether to continue to enable offloading of I/O intensive phases. Otherwise, the method 1700 advances to block 1730 of
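The worked example above (an I/O intensive phase B that follows phase A roughly 80% of the time, where phase A has run for 90 milliseconds of a typical 100 millisecond residency) can be expressed as a short prediction routine; in the Python sketch below the 10 millisecond window and 50% likelihood threshold mirror the example, while the function and parameter names are hypothetical.

PREDEFINED_WINDOW_MS = 10.0       # look-ahead window from block 1724
LIKELIHOOD_THRESHOLD = 0.5        # assumed threshold from block 1728 (e.g., 50%)

def io_phase_imminent(current_phase_elapsed_ms: float,
                      typical_residency_ms: float,
                      prob_next_phase_is_io_intensive: float) -> bool:
    """Predict whether an I/O intensive phase will begin within the look-ahead
    window, given how long the current phase has run, how long it typically
    lasts, and how often it is followed by an I/O intensive phase."""
    remaining_ms = max(typical_residency_ms - current_phase_elapsed_ms, 0.0)
    if remaining_ms > PREDEFINED_WINDOW_MS:
        return False
    return prob_next_phase_is_io_intensive >= LIKELIHOOD_THRESHOLD

# Phase A has run 90 ms of a typical 100 ms residency and is followed by the
# I/O intensive phase B about 80% of the time, so migration is prepared.
print(io_phase_imminent(90.0, 100.0, 0.8))   # True
print(io_phase_imminent(40.0, 100.0, 0.8))   # False: phase B is not expected yet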
In response to a determination that the network is not congested, the method 1700 loops back to block 1702 of
Referring now to
In block 1926, the data storage sled 1240 determines whether the I/O intensive phase has ended. For example, and as indicated in block 1928, the data storage sled determines whether executable code associated with the I/O intensive phase has been completely executed (e.g., the executable code sent by the compute sled 1230 in block 1746 of
Referring now to
Illustrative examples of the technologies disclosed herein are provided below. An embodiment of the technologies may include any one or more, and any combination of, the examples described below.
Example 1 includes a compute sled comprising a compute engine to execute a workload that includes multiple phases, wherein each phase is indicative of a different resource utilization over a time period; identify an I/O intensive phase of the workload wherein an amount of data to be communicated through a network path between the compute sled and a data storage sled to execute the I/O intensive phase satisfies a predefined threshold; and migrate the workload to the data storage sled to execute the I/O intensive phase locally on the data storage sled.
Example 2 includes the subject matter of Example 1, and wherein the compute engine is further to send memory map data to the data storage sled, wherein the memory map data is usable by the data storage sled to access main memory of the compute sled as local memory as the I/O intensive phase is executed on the data storage sled.
Example 3 includes the subject matter of any of Examples 1 and 2, and wherein the compute engine is further to determine whether the I/O intensive phase will occur within a predefined time period; and wherein to migrate comprises to migrate, in response to a determination that the I/O intensive phase will occur within the predefined time period, the workload to the data storage sled.
Example 4 includes the subject matter of any of Examples 1-3, and wherein the compute engine is further to identify a pattern of phases over time as the workload is executed; and wherein to determine whether the I/O intensive phase will occur within a predefined time period comprises to determine a likelihood, as a function of a present time and the identified pattern of phases, that the I/O intensive phase will occur within the predefined time period; determine whether the likelihood satisfies a predefined threshold likelihood; and determine, in response to a determination that the likelihood satisfies the predefined threshold likelihood, that the I/O intensive phase will occur within the predefined time period.
Example 5 includes the subject matter of any of Examples 1-4, and wherein the compute engine is further to determine whether the network path between the compute sled and the data storage sled satisfies a predefined level of congestion; and wherein to migrate further comprises to migrate, in response to a determination that the network path satisfies the predefined level of congestion, the workload to the data storage sled.
Example 6 includes the subject matter of any of Examples 1-5, and wherein to determine whether the network path between the compute sled and the data storage sled satisfies a predefined level of congestion comprises to determine whether access of data on the data storage sled through the network path would reduce the execution speed of the I/O intensive phase.
Example 7 includes the subject matter of any of Examples 1-6, and wherein to identify the I/O intensive phase comprises to identify the I/O intensive phase as a function of workload metadata that identifies executable code associated with the I/O intensive phase.
Example 8 includes the subject matter of any of Examples 1-7, and wherein to identify the I/O intensive phase comprises to identify the I/O intensive phase with pattern recognition.
Example 9 includes the subject matter of any of Examples 1-8, and wherein to identify the I/O intensive phase with pattern recognition comprises to determine historical I/O usage associated with different periods of execution of the workload.
Example 10 includes the subject matter of any of Examples 1-9, and wherein to migrate the workload to the data storage sled comprises to send a request to the data storage sled to execute the I/O intensive phase of the workload.
Example 11 includes the subject matter of any of Examples 1-10, and wherein to send the request comprises to send executable code associated with the I/O intensive phase to the data storage sled.
Example 12 includes the subject matter of any of Examples 1-11, and wherein to send the request comprises to send input data from a main memory of the compute sled to the data storage sled for use in execution of the I/O intensive phase.
Example 13 includes the subject matter of any of Examples 1-12, and wherein the compute sled is further to reformat the input data to a format usable by an I/O accelerator unit of the data storage sled.
Example 14 includes a method comprising executing, by a compute sled, a workload that includes multiple phases, wherein each phase is indicative of a different resource utilization over a time period; identifying, by the compute sled, an I/O intensive phase of the workload wherein an amount of data to be communicated through a network path between the compute sled and a data storage sled to execute the I/O intensive phase satisfies a predefined threshold; and migrating, by the compute sled, the workload to the data storage sled to execute the I/O intensive phase locally on the data storage sled.
Example 15 includes the subject matter of Example 14, and further including sending, by the compute sled, memory map data to the data storage sled, wherein the memory map data is usable by the data storage sled to access main memory of the compute sled as local memory as the I/O intensive phase is executed on the data storage sled.
Example 16 includes the subject matter of any of Examples 14 and 15, and further including determining, by the compute sled, whether the I/O intensive phase will occur within a predefined time period; and wherein migrating comprises migrating, in response to a determination that the I/O intensive phase will occur within the predefined time period, the workload to the data storage sled.
Example 17 includes the subject matter of any of Examples 14-16, and further including identifying, by the compute sled, a pattern of phases over time as the workload is executed; and wherein determining whether the I/O intensive phase will occur within a predefined time period comprises determining a likelihood, as a function of a present time and the identified pattern of phases, that the I/O intensive phase will occur within the predefined time period; determining whether the likelihood satisfies a predefined threshold likelihood; and determining, in response to a determination that the likelihood satisfies the predefined threshold likelihood, that the I/O intensive phase will occur within the predefined time period.
Example 18 includes the subject matter of any of Examples 14-17, and further including determining, by the compute sled, whether the network path between the compute sled and the data storage sled satisfies a predefined level of congestion; and wherein migrating further comprises migrating, in response to a determination that the network path satisfies the predefined level of congestion, the workload to the data storage sled.
Example 19 includes the subject matter of any of Examples 14-18, and wherein determining whether the network path between the compute sled and the data storage sled satisfies a predefined level of congestion comprises determining whether access of data on the data storage sled through the network path would reduce the execution speed of the I/O intensive phase.
Example 20 includes the subject matter of any of Examples 14-19, and wherein identifying the I/O intensive phase comprises identifying the I/O intensive phase as a function of workload metadata that identifies executable code associated with the I/O intensive phase.
Example 21 includes the subject matter of any of Examples 14-20, and wherein identifying the I/O intensive phase comprises identifying the I/O intensive phase with pattern recognition.
Example 22 includes the subject matter of any of Examples 14-21, and wherein identifying the I/O intensive phase with pattern recognition comprises determining historical I/O usage associated with different periods of execution of the workload.
Example 23 includes the subject matter of any of Examples 14-22, and wherein migrating the workload to the data storage sled comprises sending a request to the data storage sled to execute the I/O intensive phase of the workload.
Example 24 includes the subject matter of any of Examples 14-23, and wherein sending the request comprises sending executable code associated with the I/O intensive phase to the data storage sled.
Example 25 includes the subject matter of any of Examples 14-24, and wherein sending the request comprises sending input data from main memory of the compute sled to the data storage sled for use in execution of the I/O intensive phase.
Example 26 includes the subject matter of any of Examples 14-25, and further including reformatting, by the compute sled, the input data to a format usable by an I/O accelerator unit of the data storage sled.
Example 27 includes one or more machine-readable storage media comprising a plurality of instructions stored thereon that, in response to being executed, cause a compute sled to perform the method of any of Examples 14-26.
Example 28 includes a network device comprising one or more processors; one or more memory devices having stored therein a plurality of instructions that, when executed by the one or more processors, cause the network device to perform the method of any of Examples 14-26.
Example 29 includes a compute sled comprising means for performing the method of any of Examples 14-26.
Example 30 includes a compute sled comprising means for executing a workload that includes multiple phases, wherein each phase is indicative of a different resource utilization over a time period; means for identifying an I/O intensive phase of the workload wherein an amount of data to be communicated through a network path between the compute sled and a data storage sled to execute the I/O intensive phase satisfies a predefined threshold; and means for migrating the workload to the data storage sled to execute the I/O intensive phase locally on the data storage sled.
Example 31 includes the subject matter of Example 30, and further including means for sending memory map data to the data storage sled, wherein the memory map data is usable by the data storage sled to access main memory of the compute sled as local memory as the I/O intensive phase is executed on the data storage sled.
Example 32 includes the subject matter of any of Examples 30 and 31, and further including means for determining whether the I/O intensive phase will occur within a predefined time period; and wherein the means for migrating comprises means for migrating, in response to a determination that the I/O intensive phase will occur within the predefined time period, the workload to the data storage sled.
Example 33 includes the subject matter of any of Examples 30-32, and further including means for identifying a pattern of phases over time as the workload is executed; and wherein the means for determining whether the I/O intensive phase will occur within a predefined time period comprises means for determining a likelihood, as a function of a present time and the identified pattern of phases, that the I/O intensive phase will occur within the predefined time period; means for determining whether the likelihood satisfies a predefined threshold likelihood; and means for determining, in response to a determination that the likelihood satisfies the predefined threshold likelihood, that the I/O intensive phase will occur within the predefined time period.
Example 34 includes the subject matter of any of Examples 30-33, and further including means for determining whether the network path between the compute sled and the data storage sled satisfies a predefined level of congestion; and wherein the means for migrating further comprises means for migrating, in response to a determination that the network path satisfies the predefined level of congestion, the workload to the data storage sled.
Example 35 includes the subject matter of any of Examples 30-34, and wherein the means for determining whether the network path between the compute sled and the data storage sled satisfies a predefined level of congestion comprises means for determining whether access of data on the data storage sled through the network path would reduce the execution speed of the I/O intensive phase.
Example 36 includes the subject matter of any of Examples 30-35, and wherein the means for identifying the I/O intensive phase comprises means for identifying the I/O intensive phase as a function of workload metadata that identifies executable code associated with the I/O intensive phase.
Example 37 includes the subject matter of any of Examples 30-36, and wherein the means for identifying the I/O intensive phase comprises means for identifying the I/O intensive phase with pattern recognition.
Example 38 includes the subject matter of any of Examples 30-37, and wherein the means for identifying the I/O intensive phase with pattern recognition comprises means for determining historical I/O usage associated with different periods of execution of the workload.
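As a non-limiting illustration of the pattern recognition of Examples 37 and 38, the following sketch flags execution periods whose historical I/O usage satisfies the predefined threshold; the sampling granularity and names are assumptions.

```python
# Illustrative sketch: classify monitored execution periods of a workload as I/O
# intensive when the bytes historically accessed in that period meet the threshold.
def find_io_intensive_periods(io_bytes_per_period: list[int],
                              threshold_bytes: int) -> list[int]:
    return [i for i, b in enumerate(io_bytes_per_period) if b >= threshold_bytes]

# Usage: with a 1 GiB threshold, periods 2 and 3 are identified as I/O intensive.
io_periods = find_io_intensive_periods(
    [10_000, 50_000, 2 << 30, 3 << 30, 40_000], threshold_bytes=1 << 30)
```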
Example 39 includes the subject matter of any of Examples 30-38, and wherein the means for migrating the workload to the data storage sled comprises means for sending a request to the data storage sled to execute the I/O intensive phase of the workload.
Example 40 includes the subject matter of any of Examples 30-39, and wherein the means for sending the request comprises means for sending executable code associated with the I/O intensive phase to the data storage sled.
Example 41 includes the subject matter of any of Examples 30-40, and wherein the means for sending the request comprises means for sending input data from main memory of the compute sled to the data storage sled for use in execution of the I/O intensive phase.
Example 42 includes the subject matter of any of Examples 30-41, and further including means for reformatting the input data to a format usable by an I/O accelerator unit of the data storage sled.
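A minimal, non-limiting sketch of the migration request of Examples 39 through 42 is shown below: the compute sled bundles the executable code for the I/O intensive phase with input data copied from its main memory, optionally reformatted for the I/O accelerator unit. The message layout and all names are assumptions for illustration.

```python
# Illustrative sketch only: one possible shape for the request sent to the data
# storage sled. The reformatting step is a placeholder for whatever layout the
# sled's I/O accelerator unit expects.
from dataclasses import dataclass

@dataclass
class MigrationRequest:
    workload_id: str
    phase_code: bytes        # executable code associated with the I/O intensive phase
    input_data: bytes        # input data taken from compute-sled main memory
    accelerator_format: str  # hypothetical label for the accelerator's input layout

def build_migration_request(workload_id: str, phase_code: bytes,
                            raw_input: bytes) -> MigrationRequest:
    reformatted = raw_input  # placeholder: reformat for the I/O accelerator unit here
    return MigrationRequest(workload_id, phase_code, reformatted,
                            accelerator_format="columnar")
```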
Example 43 includes a data storage sled comprising a compute engine to execute an I/O intensive phase of a workload, wherein the I/O intensive phase is indicative of a period of execution in which an amount of data to be accessed from a data storage device of the data storage sled satisfies a predefined threshold; determine whether the I/O intensive phase has ended; and migrate, in response to a determination that the I/O intensive phase has ended, the execution of the workload to a compute sled.
Example 44 includes the subject matter of Example 43, and wherein the compute engine is further to determine whether a network path to the compute sled satisfies a predefined level of congestion; and migrate, in response to a determination that the network path does not satisfy the predefined level of congestion, execution of the workload to the compute sled.
Example 45 includes the subject matter of any of Examples 43 and 44, and wherein the compute engine is further to map a memory region to a main memory of the compute sled; and access data in the main memory of the compute sled as the I/O intensive phase is executed on the data storage sled.
Example 46 includes the subject matter of any of Examples 43-45, and wherein the compute engine is further to receive executable code associated with the I/O intensive phase from the compute sled; and wherein to execute the I/O intensive phase comprises to execute the received executable code.
Example 47 includes the subject matter of any of Examples 43-46, and wherein the compute engine is further to receive an input set of data from the compute sled; and reformat the input set of data to a format that is usable by an I/O accelerator unit of the data storage sled.
Example 48 includes the subject matter of any of Examples 43-47, and wherein the compute engine is further to execute multiple I/O intensive phases of different workloads concurrently; and apply a quality of service management policy to the execution of the workloads to maintain a target quality of service as the I/O intensive phases are executed.
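As one non-limiting example of a quality of service management policy for Example 48, the data storage sled could share its I/O bandwidth among concurrently executing I/O intensive phases in proportion to per-workload weights; the weights and names below are assumptions.

```python
# Illustrative sketch: proportional sharing of the data storage sled's I/O bandwidth
# among the workloads whose I/O intensive phases are executing concurrently.
def allocate_io_bandwidth(total_bps: float, weights: dict[str, float]) -> dict[str, float]:
    total_weight = sum(weights.values()) or 1.0
    return {wl: total_bps * w / total_weight for wl, w in weights.items()}

# Usage: a latency-sensitive workload receives three times the share of a batch job.
shares = allocate_io_bandwidth(12e9, {"latency-sensitive": 3.0, "batch": 1.0})
```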
Example 49 includes the subject matter of any of Examples 43-48, and wherein the compute engine is further to send output data from execution of the I/O intensive phase to the compute sled.
Example 50 includes the subject matter of any of Examples 43-49, and wherein the compute engine is to receive a first set of input data from the compute sled and access a second set of input data from a data storage device of the data storage sled, wherein the first and second sets of input data are usable to execute the I/O intensive phase and the second data set is larger than the first data set.
Example 51 includes the subject matter of any of Examples 43-50, and wherein to determine whether the I/O intensive phase has ended comprises to determine whether executable code associated with the I/O intensive phase has been completely executed.
Example 52 includes the subject matter of any of Examples 43-51, and wherein to execute the I/O intensive phase comprises to execute the I/O intensive phase with an I/O accelerator unit of the data storage sled.
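The following illustrative sketch ties together the data-storage-sled behavior of Examples 43, 44, 49, 50, and 51: execute the received phase against both input sets, detect completion, return the output, and migrate the workload back when the network path is not congested. The callables passed in are hypothetical placeholders, not part of the claimed subject matter.

```python
# Illustrative sketch only; run_phase, path_is_congested, and migrate_back stand in
# for sled-specific mechanisms (e.g., an I/O accelerator unit, path telemetry, and
# the return migration of Example 43).
def execute_offloaded_phase(phase_code, received_input, stored_input,
                            run_phase, path_is_congested, migrate_back):
    # The smaller first input set arrived with the migration request; the larger
    # second set is read from the sled's own data storage device (Example 50).
    output = run_phase(phase_code, received_input, stored_input)
    # The phase has ended once its executable code has been completely executed
    # (Example 51); output data is returned to the compute sled (Example 49), and the
    # workload migrates back if the path does not satisfy the congestion level
    # (Example 44).
    if not path_is_congested():
        migrate_back(output)
    return output
```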
Example 53 includes a method comprising executing, by a data storage sled, an I/O intensive phase of a workload, wherein the I/O intensive phase is indicative of a period of execution in which an amount of data to be accessed from a data storage device of the data storage sled satisfies a predefined threshold; determining, by the data storage sled, whether the I/O intensive phase has ended; and migrating, by the data storage sled and in response to a determination that the I/O intensive phase has ended, the execution of the workload to a compute sled.
Example 54 includes the subject matter of Example 53, and further including determining, by the data storage sled, whether a network path to the compute sled satisfies a predefined level of congestion; and migrating, by the data storage sled and in response to a determination that the network path does not satisfy the predefined level of congestion, execution of the workload to the compute sled.
Example 55 includes the subject matter of any of Examples 53 and 54, and further including mapping, by the data storage sled, a memory region to a main memory of the compute sled; and accessing data in the main memory of the compute sled as the I/O intensive phase is executed on the data storage sled.
Example 56 includes the subject matter of any of Examples 53-55, and further including receiving, by the data storage sled, executable code associated with the I/O intensive phase from the compute sled; and wherein executing the I/O intensive phase comprises executing the received executable code.
Example 57 includes the subject matter of any of Examples 53-56, and further including receiving, by the data storage sled, an input set of data from the compute sled; and reformatting, by the data storage sled, the input set of data to a format that is usable by an I/O accelerator unit of the data storage sled.
Example 58 includes the subject matter of any of Examples 53-57, and further including executing, by the data storage sled, multiple I/O intensive phases of different workloads concurrently; and applying, by the data storage sled, a quality of service management policy to the execution of the workloads to maintain a target quality of service as the I/O intensive phases are executed.
Example 59 includes the subject matter of any of Examples 53-58, and further including sending, by the data storage sled, output data from execution of the I/O intensive phase to the compute sled.
Example 60 includes the subject matter of any of Examples 53-59, and further including receiving, by the data storage sled, a first set of input data from the compute sled; and accessing, by the data storage sled, a second set of input data from a data storage device of the data storage sled, wherein the first and second sets of input data are usable to execute the I/O intensive phase and the second data set is larger than the first data set.
Example 61 includes the subject matter of any of Examples 53-60, and wherein determining whether the I/O intensive phase has ended comprises determining whether executable code associated with the I/O intensive phase has been completely executed.
Example 62 includes the subject matter of any of Examples 53-61, and wherein executing the I/O intensive phase comprises executing the I/O intensive phase with an I/O accelerator unit of the data storage sled.
Example 63 includes one or more machine-readable storage media comprising a plurality of instructions stored thereon that, in response to being executed, cause a data storage sled to perform the method of any of Examples 53-62.
Example 64 includes a data storage sled comprising means for performing the method of any of Examples 53-62.
Example 65 includes a data storage sled comprising means for executing an I/O intensive phase of a workload, wherein the I/O intensive phase is indicative of a period of execution in which an amount of data to be accessed from a data storage device of the data storage sled satisfies a predefined threshold; means for determining whether the I/O intensive phase has ended; and means for migrating, in response to a determination that the I/O intensive phase has ended, the execution of the workload to a compute sled.
Example 66 includes the subject matter of Example 65, and further including means for determining whether a network path to the compute sled satisfies a predefined level of congestion; and means for migrating, in response to a determination that the network path does not satisfy the predefined level of congestion, execution of the workload to the compute sled.
Example 67 includes the subject matter of any of Examples 65 and 66, and further including means for mapping a memory region to a main memory of the compute sled; and means for accessing data in the main memory of the compute sled as the I/O intensive phase is executed on the data storage sled.
Example 68 includes the subject matter of any of Examples 65-67, and further including means for receiving executable code associated with the I/O intensive phase from the compute sled; and wherein the means for executing the I/O intensive phase comprises means for executing the received executable code.
Example 69 includes the subject matter of any of Examples 65-68, and further including means for receiving an input set of data from the compute sled; and means for reformatting the input set of data to a format that is usable by an I/O accelerator unit of the data storage sled.
Example 70 includes the subject matter of any of Examples 65-69, and further including means for executing multiple I/O intensive phases of different workloads concurrently; and means for applying a quality of service management policy to the execution of the workloads to maintain a target quality of service as the I/O intensive phases are executed.
Example 71 includes the subject matter of any of Examples 65-70, and further including means for sending output data from execution of the I/O intensive phase to the compute sled.
Example 72 includes the subject matter of any of Examples 65-71, and further including means for receiving a first set of input data from the compute sled; and means for accessing a second set of input data from a data storage device of the data storage sled, wherein the first and second sets of input data are usable to execute the I/O intensive phase and the second data set is larger than the first data set.
Example 73 includes the subject matter of any of Examples 65-72, and wherein the means for determining whether the I/O intensive phase has ended comprises means for determining whether executable code associated with the I/O intensive phase has been completely executed.
Example 74 includes the subject matter of any of Examples 65-73, and wherein the means for executing the I/O intensive phase comprises means for executing the I/O intensive phase with an I/O accelerator unit of the data storage sled.
Claims
1. A compute sled comprising:
- a compute engine to:
- execute a workload that includes multiple phases, wherein each phase is indicative of a different resource utilization over a time period;
- identify an I/O intensive phase of the workload, wherein an amount of data to be communicated through a network path between the compute sled and a data storage sled to execute the I/O intensive phase satisfies a predefined threshold; and
- migrate the workload to the data storage sled to execute the I/O intensive phase locally on the data storage sled.
2. The compute sled of claim 1, wherein the compute engine is further to:
- send memory map data to the data storage sled, wherein the memory map data is usable by the data storage sled to access main memory of the compute sled as local memory as the I/O intensive phase is executed on the data storage sled.
3. The compute sled of claim 1, wherein the compute engine is further to determine whether the I/O intensive phase will occur within a predefined time period; and
- wherein to migrate comprises to migrate, in response to a determination that the I/O intensive phase will occur within the predefined time period, the workload to the data storage sled.
4. The compute sled of claim 3, wherein the compute engine is further to identify a pattern of phases over time as the workload is executed; and
- wherein to determine whether the I/O intensive phase will occur within a predefined time period comprises to:
- determine a likelihood, as a function of a present time and the identified pattern of phases, that the I/O intensive phase will occur within the predefined time period;
- determine whether the likelihood satisfies a predefined threshold likelihood; and
- determine, in response to a determination that the likelihood satisfies the predefined threshold likelihood, that the I/O intensive phase will occur within the predefined time period.
5. The compute sled of claim 3, wherein the compute engine is further to:
- determine whether the network path between the compute sled and the data storage sled satisfies a predefined level of congestion; and
- wherein to migrate further comprises to migrate, in response to a determination that the network path satisfies the predefined level of congestion, the workload to the data storage sled.
6. The compute sled of claim 5, wherein to determine whether the network path between the compute sled and the data storage sled satisfies a predefined level of congestion comprises to determine whether access of data on the data storage sled through the network path would reduce the execution speed of the I/O intensive phase.
7. The compute sled of claim 1, wherein to identify the I/O intensive phase comprises to identify the I/O intensive phase as a function of workload metadata that identifies executable code associated with the I/O intensive phase.
8. The compute sled of claim 1, wherein to identify the I/O intensive phase comprises to identify the I/O intensive phase with pattern recognition.
9. The compute sled of claim 8, wherein to identify the I/O intensive phase with pattern recognition comprises to determine historical I/O usage associated with different periods of execution of the workload.
10. The compute sled of claim 1, wherein to migrate the workload to the data storage sled comprises to send a request to the data storage sled to execute the I/O intensive phase of the workload.
11. The compute sled of claim 10, wherein to send the request comprises to send executable code associated with the I/O intensive phase to the data storage sled.
12. The compute sled of claim 11, wherein to send the request comprises to send input data from a main memory of the compute sled to the data storage sled for use in execution of the I/O intensive phase.
13. One or more machine-readable storage media comprising a plurality of instructions stored thereon that, when executed by a compute sled, cause the compute sled to:
- execute a workload that includes multiple phases, wherein each phase is indicative of a different resource utilization over a time period;
- identify an I/O intensive phase of the workload, wherein an amount of data to be communicated through a network path between the compute sled and a data storage sled to execute the I/O intensive phase satisfies a predefined threshold; and
- migrate the workload to the data storage sled to execute the I/O intensive phase locally on the data storage sled.
14. The one or more machine-readable storage media of claim 13, wherein the plurality of instructions, when executed, further cause the compute sled to:
- send memory map data to the data storage sled, wherein the memory map data is usable by the data storage sled to access main memory of the compute sled as local memory as the I/O intensive phase is executed on the data storage sled.
15. The one or more machine-readable storage media of claim 13, wherein the plurality of instructions, when executed, further cause the compute sled to determine whether the I/O intensive phase will occur within a predefined time period; and
- wherein to migrate comprises to migrate, in response to a determination that the I/O intensive phase will occur within the predefined time period, the workload to the data storage sled.
16. The one or more machine-readable storage media of claim 15, wherein the plurality of instructions, when executed, further cause the compute sled to identify a pattern of phases over time as the workload is executed; and
- wherein to determine whether the I/O intensive phase will occur within a predefined time period comprises to:
- determine a likelihood, as a function of a present time and the identified pattern of phases, that the I/O intensive phase will occur within the predefined time period;
- determine whether the likelihood satisfies a predefined threshold likelihood; and
- determine, in response to a determination that the likelihood satisfies the predefined threshold likelihood, that the I/O intensive phase will occur within the predefined time period.
17. The one or more machine-readable storage media of claim 15, wherein the plurality of instructions, when executed, further cause the compute sled to:
- determine whether the network path between the compute sled and the data storage sled satisfies a predefined level of congestion; and
- wherein to migrate further comprises to migrate, in response to a determination that the network path satisfies the predefined level of congestion, the workload to the data storage sled.
18. The one or more machine-readable storage media of claim 17, wherein to determine whether the network path between the compute sled and the data storage sled satisfies a predefined level of congestion comprises to determine whether access of data on the data storage sled through the network path would reduce the execution speed of the I/O intensive phase.
19. A method comprising:
- executing, by a compute sled, a workload that includes multiple phases, wherein each phase is indicative of a different resource utilization over a time period;
- identifying, by the compute sled, an I/O intensive phase of the workload, wherein an amount of data to be communicated through a network path between the compute sled and a data storage sled to execute the I/O intensive phase satisfies a predefined threshold; and
- migrating, by the compute sled, the workload to the data storage sled to execute the I/O intensive phase locally on the data storage sled.
20. The method of claim 19, further comprising:
- sending, by the compute sled, memory map data to the data storage sled, wherein the memory map data is usable by the data storage sled to access main memory of the compute sled as local memory as the I/O intensive phase is executed on the data storage sled.
Type: Application
Filed: Sep 29, 2017
Publication Date: May 31, 2018
Inventors: Francesc Guim Bernat (Barcelona), Karthik Kumar (Chandler, AZ), Mark A. Schmisseur (Phoenix, AZ), Thomas Willhalm (Sandhausen)
Application Number: 15/720,236