TECHNOLOGIES FOR DISTRIBUTING DATA TO IMPROVE DATA THROUGHPUT RATES
Technologies for managing distributed data to improve data throughput rates include a managed node to distribute a dataset over multiple data storage devices coupled to a network. Each data storage device has a peak data throughput rate. The managed node is further to request a corresponding portion of the dataset from each data storage device, receive the requested portions of the dataset at a combined data throughput rate that is greater than the peak data throughput rate of any of the data storage devices, and combine the received portions of the dataset to reconstruct the dataset. Other embodiments are also described and claimed.
The present application claims the benefit of U.S. Provisional Patent Application No. 62/365,969, filed Jul. 22, 2016, U.S. Provisional Patent Application No. 62/376,859, filed Aug. 18, 2016, and U.S. Provisional Patent Application No. 62/427,268, filed Nov. 29, 2016.
BACKGROUND

In a typical cloud-based computing environment (e.g., a data center), data may be written to and retrieved from data storage devices as workloads (e.g., applications, processes, services, etc.) are executed on behalf of customers. The data storage devices typically have a peak data throughput rate at which they can write and/or retrieve data. As such, in a system in which the peak data throughput rate of a data storage device is less than the data throughput rate of the data communication bus that couples the data storage device to a compute device requesting access to the data, the peak data throughput rate of the data storage device becomes a bottleneck and may reduce the performance of any workloads executed by the compute device. To address such bottlenecks, administrators of data centers may purchase more expensive data storage devices that provide greater data throughput rates. As a result, the cost of the data center increases.
The concepts described herein are illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. Where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements.
While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described herein in detail. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.
References in the specification to “one embodiment,” “an embodiment,” “an illustrative embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Additionally, it should be appreciated that items included in a list in the form of “at least one A, B, and C” can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). Similarly, items listed in the form of “at least one of A, B, or C” can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).
The disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on a transitory or non-transitory machine-readable (e.g., computer-readable) storage medium, which may be read and executed by one or more processors. A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device).
In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. However, it should be appreciated that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or method feature in a particular figure is not meant to imply that such feature is required in all embodiments and, in some embodiments, may not be included or may be combined with other features.
The illustrative data center 100 differs from typical data centers in many ways. For example, in the illustrative embodiment, the circuit boards (“sleds”) on which components such as CPUs, memory, and other components are placed are designed for increased thermal performance. In particular, in the illustrative embodiment, the sleds are shallower than typical boards. In other words, the sleds are shorter from the front to the back, where cooling fans are located. This decreases the length of the path that air must travel across the components on the board. Further, the components on the sled are spaced further apart than in typical circuit boards, and the components are arranged to reduce or eliminate shadowing (i.e., one component in the air flow path of another component). In the illustrative embodiment, processing components such as the processors are located on a top side of a sled while near memory, such as dual in-line memory modules (DIMMs), is located on a bottom side of the sled. As a result of the enhanced airflow provided by this design, the components may operate at higher frequencies and power levels than in typical systems, thereby increasing performance. Furthermore, the sleds are configured to blindly mate with power and data communication cables in each rack 102A, 102B, 102C, 102D, enhancing their ability to be quickly removed, upgraded, reinstalled, and/or replaced. Similarly, individual components located on the sleds, such as processors, accelerators, memory, and data storage drives, are configured to be easily upgraded due to their increased spacing from each other. In the illustrative embodiment, the components additionally include hardware attestation features to prove their authenticity.
Furthermore, in the illustrative embodiment, the data center 100 utilizes a single network architecture (“fabric”) that supports multiple other network architectures including Ethernet and Omni-Path. The sleds, in the illustrative embodiment, are coupled to switches via optical fibers, which provide higher bandwidth and lower latency than typical twisted pair cabling (e.g., Category 5, Category 5e, Category 6, etc.). Due to the high bandwidth, low latency interconnections and network architecture, the data center 100 may, in use, pool resources, such as memory, accelerators (e.g., graphics accelerators, FPGAs, application specific integrated circuits (ASICs), etc.), and data storage drives that are physically disaggregated, and provide them to compute resources (e.g., processors) on an as needed basis, enabling the compute resources to access the pooled resources as if they were local. The illustrative data center 100 additionally receives usage information for the various resources, predicts resource usage for different types of workloads based on past resource usage, and dynamically reallocates the resources based on this information.
The racks 102A, 102B, 102C, 102D of the data center 100 may include physical design features that facilitate the automation of a variety of types of maintenance tasks. For example, data center 100 may be implemented using racks that are designed to be robotically-accessed, and to accept and house robotically-manipulatable resource sleds. Furthermore, in the illustrative embodiment, the racks 102A, 102B, 102C, 102D include integrated power sources that receive a greater voltage than is typical for power sources. The increased voltage enables the power sources to provide additional power to the components on each sled, enabling the components to operate at higher than typical frequencies.
In various embodiments, dual-mode optical switches may be capable of receiving both Ethernet protocol communications carrying Internet Protocol (IP) packets and communications according to a second, high-performance computing (HPC) link-layer protocol (e.g., Intel's Omni-Path Architecture, Infiniband) via optical signaling media of an optical fabric. As reflected in
MPCMs 916-1 to 916-7 may be configured to provide inserted sleds with access to power sourced by respective power modules 920-1 to 920-7, each of which may draw power from an external power source 921. In various embodiments, external power source 921 may deliver alternating current (AC) power to rack 902, and power modules 920-1 to 920-7 may be configured to convert such AC power to direct current (DC) power to be sourced to inserted sleds. In some embodiments, for example, power modules 920-1 to 920-7 may be configured to convert 277-volt AC power into 12-volt DC power for provision to inserted sleds via respective MPCMs 916-1 to 916-7. The embodiments are not limited to this example.
MPCMs 916-1 to 916-7 may also be arranged to provide inserted sleds with optical signaling connectivity to a dual-mode optical switching infrastructure 914, which may be the same as—or similar to—dual-mode optical switching infrastructure 514 of
Sled 1004 may also include dual-mode optical network interface circuitry 1026. Dual-mode optical network interface circuitry 1026 may generally comprise circuitry that is capable of communicating over optical signaling media according to each of multiple link-layer protocols supported by dual-mode optical switching infrastructure 914 of
Coupling MPCM 1016 with a counterpart MPCM of a sled space in a given rack may cause optical connector 1016A to couple with an optical connector comprised in the counterpart MPCM. This may generally establish optical connectivity between optical cabling of the sled and dual-mode optical network interface circuitry 1026, via each of a set of optical channels 1025. Dual-mode optical network interface circuitry 1026 may communicate with the physical resources 1005 of sled 1004 via electrical signaling media 1028. In addition to the dimensions of the sleds and arrangement of components on the sleds to provide improved cooling and enable operation at a relatively higher thermal envelope (e.g., 250 W), as described above with reference to
As shown in
In another example, in various embodiments, one or more pooled storage sleds 1132 may be included among the physical infrastructure 1100A of data center 1100, each of which may comprise a pool of storage resources that is globally accessible to other sleds via optical fabric 1112 and dual-mode optical switching infrastructure 1114. In some embodiments, such pooled storage sleds 1132 may comprise pools of solid-state storage devices such as solid-state drives (SSDs). In various embodiments, one or more high-performance processing sleds 1134 may be included among the physical infrastructure 1100A of data center 1100. In some embodiments, high-performance processing sleds 1134 may comprise pools of high-performance processors, as well as cooling features that enhance air cooling to yield a higher thermal envelope of up to 250 W or more. In various embodiments, any given high-performance processing sled 1134 may feature an expansion connector 1117 that can accept a far memory expansion sled, such that the far memory that is locally available to that high-performance processing sled 1134 is disaggregated from the processors and near memory comprised on that sled. In some embodiments, such a high-performance processing sled 1134 may be configured with far memory using an expansion sled that comprises low-latency SSD storage. The optical infrastructure allows for compute resources on one sled to utilize remote accelerator/FPGA, memory, and/or SSD resources that are disaggregated on a sled located on the same rack or any other rack in the data center. The remote resources can be located one switch jump or two switch jumps away in the spine-leaf network architecture described above with reference to
In various embodiments, one or more layers of abstraction may be applied to the physical resources of physical infrastructure 1100A in order to define a virtual infrastructure, such as a software-defined infrastructure 1100B. In some embodiments, virtual computing resources 1136 of software-defined infrastructure 1100B may be allocated to support the provision of cloud services 1140. In various embodiments, particular sets of virtual computing resources 1136 may be grouped for provision to cloud services 1140 in the form of SDI services 1138. Examples of cloud services 1140 may include—without limitation—software as a service (SaaS) services 1142, platform as a service (PaaS) services 1144, and infrastructure as a service (IaaS) services 1146.
In some embodiments, management of software-defined infrastructure 1100B may be conducted using a virtual infrastructure management framework 1150B. In various embodiments, virtual infrastructure management framework 1150B may be designed to implement workload fingerprinting techniques and/or machine-learning techniques in conjunction with managing allocation of virtual computing resources 1136 and/or SDI services 1138 to cloud services 1140. In some embodiments, virtual infrastructure management framework 1150B may use/consult telemetry data in conjunction with performing such resource allocation. In various embodiments, an application/service management framework 1150C may be implemented in order to provide quality of service (QoS) management capabilities for cloud services 1140. The embodiments are not limited in this context.
As shown in
As discussed in more detail herein, the managed nodes 1260 may write data to and read data from multiple data storage devices (e.g., physical storage resources 205-1 located in one or more of the managed nodes 1260). In doing so, the managed nodes 1260 may partition a dataset to be written into multiple portions and write each portion to a different data storage device (e.g., different SSDs). Each data storage device may have a data throughput rate that is less than the throughput rate of the communication bus (e.g., the optical fabric 412 described with reference to
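As a purely illustrative, hypothetical example (the figures below are not taken from the disclosure): if each of four SSDs can sustain a peak of roughly 3 GB/s while the fabric linking them to the requesting compute device sustains 12 GB/s or more, then storing one quarter of a dataset on each SSD and reading all four portions in parallel can deliver the dataset at roughly 12 GB/s, well above the 3 GB/s peak of any single drive.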
Referring now to
The CPU 1302 may be embodied as any type of processor capable of performing the functions described herein. The CPU 1302 may be embodied as a single or multi-core processor(s), a microcontroller, or other processor or processing/controlling circuit. In some embodiments, the CPU 1302 may be embodied as, include, or be coupled to a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), reconfigurable hardware or hardware circuitry, or other specialized hardware to facilitate performance of the functions described herein. As discussed above, the managed node 1260 may include resources distributed across multiple sleds and in such embodiments, the CPU 1302 may include portions thereof located on the same sled or different sled. Similarly, the main memory 1304 may be embodied as any type of volatile (e.g., dynamic random access memory (DRAM), etc.) or non-volatile memory or data storage capable of performing the functions described herein. In some embodiments, all or a portion of the main memory 1304 may be integrated into the CPU 1302. In operation, the main memory 1304 may store various software and data used during operation, such as portions of datasets, a map of the locations (e.g., data storage devices 1312 in various managed nodes 1260 and keys associated with the portions) where portions of datasets are stored, operating systems, applications, programs, libraries, and drivers. As discussed above, the managed node 1260 may include resources distributed across multiple sleds and in such embodiments, the main memory 1304 may include portions thereof located on the same sled or different sled.
The I/O subsystem 1306 may be embodied as circuitry and/or components to facilitate input/output operations with the CPU 1302, the main memory 1304, and other components of the managed node 1260. For example, the I/O subsystem 1306 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, integrated sensor hubs, firmware devices, communication links (e.g., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.), and/or other components and subsystems to facilitate the input/output operations. In some embodiments, the I/O subsystem 1306 may form a portion of a system-on-a-chip (SoC) and be incorporated, along with one or more of the CPU 1302, the main memory 1304, and other components of the managed node 1260, on a single integrated circuit chip.
The communication circuitry 1308 may be embodied as any communication circuit, device, or collection thereof, capable of enabling communications over the network 1230 between the managed node 1260 and another compute device (e.g., the orchestrator server 1240 and/or one or more other managed nodes 1260). The communication circuitry 1308 may be configured to use any one or more communication technologies (e.g., wired or wireless communications) and associated protocols (e.g., Ethernet, Bluetooth®, Wi-Fi®, WiMAX, etc.) to effect such communication.
The illustrative communication circuitry 1308 includes a network interface controller (NIC) 1310, which may also be referred to as a host fabric interface (HFI). The NIC 1310 may be embodied as one or more add-in-boards, daughtercards, network interface cards, controller chips, chipsets, or other devices that may be used by the managed node 1260 to connect with another compute device (e.g., the orchestrator server 1240 and/or one or more other managed nodes 1260). In some embodiments, the NIC 1310 may be embodied as part of a system-on-a-chip (SoC) that includes one or more processors, or included on a multichip package that also contains one or more processors. In some embodiments, the NIC 1310 may include a local processor (not shown) and/or a local memory (not shown) that are both local to the NIC 1310. In such embodiments, the local processor of the NIC 1310 may be capable of performing one or more of the functions of the CPU 1302 described herein. Additionally or alternatively, in such embodiments, the local memory of the NIC 1310 may be integrated into one or more components of the managed node 1260 at the board level, socket level, chip level, and/or other levels. As discussed above, the managed node 1260 may include resources distributed across multiple sleds and in such embodiments, the communication circuitry 1308 may include portions thereof located on the same sled or different sled.
The one or more illustrative data storage devices 1312 may be embodied as any type of device configured for short-term or long-term storage of data such as, for example, solid-state drives (SSDs), hard disk drives, memory cards, and/or other memory devices and circuits. Each data storage device 1312 may include a system partition that stores data and firmware code for the data storage device 1312. Each data storage device 1312 may also include an operating system partition that stores data files and executables for an operating system. In the illustrative embodiment, each data storage device 1312 includes non-volatile memory. Non-volatile memory may be embodied as any type of data storage capable of storing data in a persistent manner (even if power is interrupted to the non-volatile memory). For example, in the illustrative embodiment, the non-volatile memory is embodied as Flash memory (e.g., NAND memory). In other embodiments, the non-volatile memory may be embodied as any combination of memory devices that use chalcogenide phase change material (e.g., chalcogenide glass) or other types of byte-addressable, write-in-place non-volatile memory, ferroelectric transistor random-access memory (FeTRAM), nanowire-based non-volatile memory, phase change memory (PCM), memory that incorporates memristor technology, magnetoresistive random-access memory (MRAM), or Spin Transfer Torque (STT)-MRAM.
Additionally, the managed node 1260 may include a display 1314. The display 1314 may be embodied as, or otherwise use, any suitable display technology including, for example, a liquid crystal display (LCD), a light emitting diode (LED) display, a cathode ray tube (CRT) display, a plasma display, and/or other display usable in a compute device. The display 1314 may include a touchscreen sensor that uses any suitable touchscreen input technology to detect the user's tactile selection of information displayed on the display including, but not limited to, resistive touchscreen sensors, capacitive touchscreen sensors, surface acoustic wave (SAW) touchscreen sensors, infrared touchscreen sensors, optical imaging touchscreen sensors, acoustic touchscreen sensors, and/or other type of touchscreen sensors.
Additionally or alternatively, the managed node 1260 may include one or more peripheral devices 1316. Such peripheral devices 1316 may include any type of peripheral device commonly found in a compute device such as speakers, a mouse, a keyboard, and/or other input/output devices, interface devices, and/or other peripheral devices.
The client device 1220 and the orchestrator server 1240 may have components similar to those described in
As described above, the client device 1220, the orchestrator server 1240 and the managed nodes 1260 are illustratively in communication via the network 1230, which may be embodied as any type of wired or wireless communication network, including global networks (e.g., the Internet), local area networks (LANs) or wide area networks (WANs), cellular networks (e.g., Global System for Mobile Communications (GSM), 3G, Long Term Evolution (LTE), Worldwide Interoperability for Microwave Access (WiMAX), etc.), digital subscriber line (DSL) networks, cable networks (e.g., coaxial networks, fiber networks, etc.), or any combination thereof.
Referring now to
In the illustrative environment 1400, the network communicator 1420, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to facilitate inbound and outbound network communications (e.g., network traffic, network packets, network flows, etc.) to and from the managed node 1260, respectively. To do so, the network communicator 1420 is configured to receive and process data packets from one system or computing device (e.g., the orchestrator server 1240, a managed node 1260, etc.) and to prepare and send data packets to another computing device or system (e.g., another managed node 1260). Accordingly, in some embodiments, at least a portion of the functionality of the network communicator 1420 may be performed by the communication circuitry 1308, and, in the illustrative embodiment, by the NIC 1310.
The distributed data manager 1430, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to manage data access (e.g., writing data and/or reading data) to and from data storage devices 1312 local to the managed node 1260 or available in one or more other managed nodes 1260 to obtain a higher data throughput rate than would be available if the data was written to and read from a single data storage device 1312. To do so, in the illustrative embodiment, the distributed data manager 1430 includes a map manager 1432, a local data servicer 1434, and a remote data servicer 1436. The map manager 1432, in the illustrative embodiment, is configured to track where portions 1404 of datasets are stored among the data storage devices 1312 of the set of managed nodes 1260, partition datasets used by workloads executed by the present managed node 1260 into the portions 1404, including redundant portions for error correction schemes, associate unique keys (e.g., generated by the map manager 1432 based on a hash of the portion 1404 combined with an address such as a media access control address of the managed node 1260 to store the portion 1404 and a unique address (e.g., media access control address) of the present managed node 1260, and/or based on any other suitable method for uniquely identifying the portion 1404) with the portions 1404, track the availability of the data storage devices 1312 and the associated managed nodes 1260 to determine where to write and read dataset portions 1404, and recombine read dataset portions 1404 into the original datasets.
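For illustration only, the following minimal sketch shows one way such a key might be generated, assuming a SHA-256 hash of the portion contents concatenated with the media access control addresses of the target managed node and of the present managed node; the function and parameter names are hypothetical and are not part of the disclosed embodiments:

```python
import hashlib

def make_portion_key(portion: bytes, target_node_mac: str, present_node_mac: str) -> str:
    # Hash the portion contents, then append the unique address of the managed
    # node that is to store the portion and the unique address of the present
    # managed node, yielding a key that uniquely identifies the portion 1404.
    digest = hashlib.sha256(portion).hexdigest()
    return f"{digest}:{target_node_mac}:{present_node_mac}"
```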
The local data servicer 1434, in the illustrative embodiment, is configured to write dataset portions 1404 in association with assigned keys (e.g., determined by the map manager 1432) to one or more data storage devices 1312 local to the managed node 1260, read requested dataset portions 1404 (e.g., dataset portions 1404 identified by their corresponding keys) from the local data storage devices 1312, and apply any error correction algorithms in the processes of writing or reading the dataset portions 1404. The remote data servicer 1436, in the illustrative embodiment, is configured to issue requests to other managed nodes 1260 (e.g., managed nodes 1260 determined by the map manager 1432) to write dataset portions 1404 in association with keys provided by the map manager 1432 and issue requests to read dataset portions 1404 from the other managed nodes 1260 using keys provided by the map manager 1432.
It should be appreciated that each of the map manager 1432, the local data servicer 1434, and the remote data servicer 1436 may be separately embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof and/or may be embodied as distributed services across multiple managed nodes 1260. For example, the map manager 1432 may be embodied as a hardware component, while the local data servicer 1434 and the remote data servicer 1436 are embodied as virtualized hardware components or as some other combination of hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof.
Referring now to
In distributing the dataset, in the illustrative embodiment, the managed node 1260 partitions the dataset into multiple portions 1404 (e.g., subsets), as indicated in block 1510. For example, the managed node 1260 may divide the size of the dataset by a number of portions 1404 to be written, such that each portion 1404 is of equal size. In other embodiments, the managed node 1260 may partition the dataset into unequally sized portions 1404. As indicated in block 1512, the managed node 1260 may generate redundant portions 1404 using an error correction scheme. The redundant portions 1404 may be copies of other portions 1404 or may be complementary portions 1404 suitable for use in reconstructing a dataset when one or more of the portions 1404 cannot be recovered (e.g., the result of an XOR operation on one or more of the other portions 1404).
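For illustration only, the following sketch shows one way the partitioning of block 1510 and the redundant-portion generation of block 1512 might be performed, assuming equal-sized portions and a single XOR parity portion; the names and the padding behavior are illustrative assumptions rather than features of the disclosure:

```python
def partition_dataset(dataset: bytes, num_portions: int) -> list:
    # Divide the dataset size by the number of portions so that each portion
    # is (approximately) equal in size; the final portion absorbs any remainder.
    size = -(-len(dataset) // num_portions)  # ceiling division
    return [dataset[i * size:(i + 1) * size] for i in range(num_portions)]

def xor_parity(portions: list) -> bytes:
    # Redundant portion computed as the XOR of the other portions; together
    # with the surviving portions (zero-padded to a common length), it allows
    # any one lost portion to be reconstructed.
    width = max(len(p) for p in portions)
    parity = bytearray(width)
    for p in portions:
        for i, b in enumerate(p):
            parity[i] ^= b
    return bytes(parity)
```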
In block 1514, in the illustrative embodiment, the managed node 1260 determines an assignment of the portions 1404 to data storage devices 1312 in the present managed node 1260 and/or other managed nodes 1260. The managed node 1260, in the illustrative embodiment, may determine to distribute the portions 1404 across multiple managed nodes 1260. By doing so, if any one managed node 1260 becomes unavailable, a relatively large percentage of the portions 1404 may still be obtained from the other managed nodes 1260. Further, as indicated in block 1516, the managed node 1260 may assign the redundant portions 1404 to managed nodes 1260 that are different from the managed nodes 1260 that are to store the original portions 1404 (e.g., the portions 1404 that the redundant portions 1404 would be used to recreate), so that both the original and redundant version of a portion 1404 do not become lost if the corresponding managed node 1260 becomes inoperative. In block 1518, in the illustrative embodiment, the managed node 1260 associates a key with each portion 1404. As described above, the key uniquely identifies each portion 1404 and may be generated by executing a hash function on the portion 1404 and combining the hash with target location information such as by appending a unique address (e.g., media access control address) of the managed node 1260 to store the data and a unique address of the present managed node 1260, or based on any other method for uniquely identifying the portion 1404. In block 1520, in the illustrative embodiment, the managed node 1260 may generate and store a map of the portions 1404, the corresponding keys, and the data storage devices 1312 that are to store the portions 1404 (e.g., a dataset map 1402). When a portion 1404 is to be stored on a remote managed node 1260, the present managed node 1260 may not have information regarding the specific data storage devices 1312 present in the remote managed node 1260. Accordingly, in such embodiments, the present managed node 1260 stores an identifier (e.g., the media access control address or other unique identifier) of the remote managed node 1260 where the one or more portions 1404 are to be stored, rather than identifiers of specific data storage devices 1312 within the remote managed node 1260.
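For illustration only, the dataset map 1402 described in block 1520 might be represented as follows, recording for each portion its key and either the local data storage device that holds it or, for portions sent to a remote managed node, only that node's unique identifier; all names are hypothetical:

```python
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class PortionLocation:
    key: str
    # For portions stored locally: identifier of the specific data storage device.
    local_device_id: Optional[str] = None
    # For portions stored remotely: unique identifier (e.g., media access
    # control address) of the remote managed node; the specific device within
    # that node is not known to the present managed node.
    remote_node_id: Optional[str] = None
    redundant: bool = False

@dataclass
class DatasetMap:
    dataset_id: str
    portions: List[PortionLocation] = field(default_factory=list)
```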
In block 1522, the managed node 1260 writes the portions 1404 to the multiple data storage devices 1312, such as based on the determination of the assignment of the portions 1404 from block 1514. In doing so, the managed node 1260 may write one or more portions 1404 to data storage devices 1312 local to the present managed node 1260, as indicated in block 1524. In doing so, in the illustrative embodiment, the managed node 1260 stores the corresponding portions 1404 in one or more of the local data storage devices 1312 with their corresponding keys (e.g., in a table of the keys and corresponding logical block addresses where the portions 1404 are written). Additionally, as indicated in block 1526, the managed node 1260 may write one or more portions 1404 to remote data storage devices 1312 of other managed nodes 1260, such as by issuing requests to those managed nodes 1260 with the portions 1404 to write and the keys to be associated with the portions 1404. As indicated in block 1528, by concurrently writing the various portions 1404 to different data storage devices 1312, the managed node 1260, in effect, writes the dataset at a combined rate that is greater than the peak data throughput rate of any one of the data storage devices 1312. Subsequently, the method 1500 advances to block 1530 of
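For illustration only, the concurrent writes of blocks 1522 through 1528 might be sketched as below, using a thread pool so that local writes and remote write requests proceed in parallel and the effective write rate can exceed the peak rate of any single data storage device 1312; write_local and request_remote_write are hypothetical callables, and the location records are assumed to resemble the PortionLocation sketch above:

```python
from concurrent.futures import ThreadPoolExecutor

def write_portions(portions, locations, write_local, request_remote_write):
    # Issue every write concurrently; each location carries the portion's key
    # plus either a local device identifier or a remote managed node identifier.
    with ThreadPoolExecutor(max_workers=max(len(portions), 1)) as pool:
        futures = []
        for portion, loc in zip(portions, locations):
            if loc.remote_node_id is None:
                futures.append(pool.submit(write_local, loc.local_device_id, loc.key, portion))
            else:
                futures.append(pool.submit(request_remote_write, loc.remote_node_id, loc.key, portion))
        for future in futures:
            future.result()  # surface any write errors
```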
Referring now to
As indicated in block 1548, in reading the portions 1404, the managed node 1260 may identify one or more inoperative data storage devices 1312 (e.g., local data storage devices 1312 that are inoperative and/or one or more managed nodes 1260 that have become unresponsive or have reported an inoperative status for one or more data storage devices 1312 local to them) and read the corresponding redundant portions 1404 from other data storage devices 1312. As indicated in block 1550, in reading the portions 1404, the managed node 1260 effectively reads the dataset requested by the workload at a combined rate that is greater than the peak data throughput rate of any one of the data storage devices 1312 on which a portion 1404 of the dataset is stored. After reading the portions 1404 of the dataset, the managed node 1260, in block 1552, combines the read portions 1404 to reconstruct the dataset requested by the workload. In doing so, the managed node 1260 may apply an error correction scheme (e.g., a low density parity check, a Reed-Solomon scheme, etc.) to correct any data corruption present in the read portions 1404. Afterwards, the method 1500 loops back to block 1502 of
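For illustration only, the read path described above (blocks 1548 through 1552) might be sketched as follows, issuing every portion request concurrently, falling back to a redundant copy when a device or node is inoperative, and recombining the portions in order; read_portion and read_redundant are hypothetical callables, and any error correction scheme is omitted for brevity:

```python
from concurrent.futures import ThreadPoolExecutor

def read_dataset(locations, read_portion, read_redundant):
    # Request all portions concurrently so the combined read rate can exceed
    # the peak data throughput rate of any single data storage device.
    def fetch(loc):
        try:
            return read_portion(loc)
        except OSError:
            # The device (or its managed node) is inoperative; read the
            # redundant copy stored on a different device or node instead.
            return read_redundant(loc)

    with ThreadPoolExecutor(max_workers=max(len(locations), 1)) as pool:
        portions = list(pool.map(fetch, locations))
    # Recombine the portions, in order, to reconstruct the requested dataset.
    return b"".join(portions)
```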
Illustrative examples of the technologies disclosed herein are provided below. An embodiment of the technologies may include any one or more, and any combination of, the examples described below.
Example 1 includes a managed node to manage distributed data, the managed node comprising a distributed data manager to distribute a dataset over multiple data storage devices coupled to a network, wherein each data storage device has a peak data throughput rate; and a network communicator to request a corresponding portion of the dataset from each data storage device and receive the requested portions of the dataset at a combined data throughput rate that is greater than the peak data throughput rate of any one of the data storage devices; wherein the distributed data manager is further to combine the received portions of the dataset to reconstruct the dataset.
Example 2 includes the subject matter of Example 1, and wherein to request the corresponding portion of the dataset from each data storage device comprises to receive a request from a workload for the dataset; determine, in response to the request from the workload, the corresponding data storage device on which each portion is stored; and request the corresponding portion after determining the corresponding data storage devices.
Example 3 includes the subject matter of any of Examples 1 and 2, and wherein to distribute the dataset over multiple data storage devices comprises to distribute the dataset in response to a request from a workload to store the dataset.
Example 4 includes the subject matter of any of Examples 1-3, and wherein to distribute the dataset comprises to write the portions on data storage devices that are physically located on different managed nodes.
Example 5 includes the subject matter of any of Examples 1-4, and wherein to distribute the dataset comprises to write the portions on solid state drives.
Example 6 includes the subject matter of any of Examples 1-5, and wherein the distributed data manager is further to associate each portion with a key and wherein to request the corresponding portion comprises to request the portion stored in association with each key.
Example 7 includes the subject matter of any of Examples 1-6, and wherein the distributed data manager is further to store a map indicative of locations of the portions of the dataset among the data storage devices.
Example 8 includes the subject matter of any of Examples 1-7, and wherein to request the corresponding portion comprises to access the map to determine the data storage device on which each corresponding portion is stored.
Example 9 includes the subject matter of any of Examples 1-8, and wherein to distribute the dataset comprises to write at least one redundant portion of the data set to at least one of the data storage devices.
Example 10 includes the subject matter of any of Examples 1-9, and wherein to request a corresponding portion comprises to determine whether a data storage device on which one of the portions is stored is inoperative; determine, in response to a determination that the data storage device is inoperative, an alternative data storage device on which a redundant version of the portion is stored; and request the redundant version of the portion from the alternative data storage device.
Example 11 includes the subject matter of any of Examples 1-10, and wherein to combine the received portions comprises to apply an error correction scheme to the received portions to correct corrupted data.
Example 12 includes the subject matter of any of Examples 1-11, and wherein to distribute the dataset over multiple data storage devices comprises to apply an error correction scheme to generate one or more redundant versions of one or more of the portions; and write the redundant versions to data storage devices in managed nodes that are separate from original versions of the corresponding portions.
Example 13 includes the subject matter of any of Examples 1-12, and wherein to distribute the dataset comprises to write the portions of the dataset to the data storage devices at a data throughput rate that is greater than the peak data throughput rate of any of the data storage devices.
Example 14 includes a method for managing distributed data, the method comprising distributing, by a managed node, a dataset over multiple data storage devices coupled to a network, wherein each data storage device has a peak data throughput rate; requesting, by the managed node, a corresponding portion of the dataset from each data storage device; receiving, by the managed node, the requested portions of the dataset at a combined data throughput rate that is greater than the peak data throughput rate of any one of the data storage devices; and combining, by the managed node, the received portions of the dataset to reconstruct the dataset.
Example 15 includes the subject matter of Example 14, and wherein requesting the corresponding portion of the dataset from each data storage device comprises receiving a request from a workload for the dataset; determining, in response to the request from the workload, the corresponding data storage device on which each portion is stored; and requesting the corresponding portion after determining the corresponding data storage devices.
Example 16 includes the subject matter of any of Examples 14 and 15, and wherein distributing the dataset over multiple data storage devices comprises distributing the dataset in response to a request from a workload to store the dataset.
Example 17 includes the subject matter of any of Examples 14-16, and wherein distributing the dataset comprises writing the portions on data storage devices that are physically located on different managed nodes.
Example 18 includes the subject matter of any of Examples 14-17, and wherein distributing the dataset comprises writing the portions on solid state drives.
Example 19 includes the subject matter of any of Examples 14-18, and further including associating, by the managed node, each portion with a key and wherein requesting the corresponding portion comprises requesting the portion stored in association with each key.
Example 20 includes the subject matter of any of Examples 14-19, and further including storing, by the managed node, a map indicative of locations of the portions of the dataset among the data storage devices.
Example 21 includes the subject matter of any of Examples 14-20, and wherein requesting the corresponding portion comprises accessing the map to determine the data storage device on which each corresponding portion is stored.
Example 22 includes the subject matter of any of Examples 14-21, and wherein distributing the dataset comprises writing at least one redundant portion of the data set to at least one of the data storage devices.
Example 23 includes the subject matter of any of Examples 14-22, and wherein requesting a corresponding portion comprises determining whether a data storage device on which one of the portions is stored is inoperative; determining, in response to a determination that the data storage device is inoperative, an alternative data storage device on which a redundant version of the portion is stored; and requesting the redundant version of the portion from the alternative data storage device.
Example 24 includes the subject matter of any of Examples 14-23, and wherein combining the received portions comprises applying an error correction scheme to the received portions to correct corrupted data.
Example 25 includes the subject matter of any of Examples 14-24, and wherein distributing the dataset over multiple data storage devices comprises applying an error correction scheme to generate one or more redundant versions of one or more of the portions; and writing the redundant versions to data storage devices in managed nodes that are separate from original versions of the corresponding portions.
Example 26 includes the subject matter of any of Examples 14-25, and wherein distributing the dataset comprises writing the portions of the dataset to the data storage devices at a data throughput rate that is greater than the peak data throughput rate of any of the data storage devices.
Example 27 includes one or more computer-readable storage media comprising a plurality of instructions that, when executed by a managed node, cause the managed node to perform the method of any of Examples 14-26.
Example 28 includes a managed node comprising means for distributing a dataset over multiple data storage devices coupled to a network, wherein each data storage device has a peak data throughput rate; means for requesting a corresponding portion of the dataset from each data storage device; means for receiving the requested portions of the dataset at a combined data throughput rate that is greater than the peak data throughput rate of any one of the data storage devices; and means for combining the received portions of the dataset to reconstruct the dataset.
Example 29 includes the subject matter of Example 28, and wherein the means for requesting the corresponding portion of the dataset from each data storage device comprises means for receiving a request from a workload for the dataset; means for determining, in response to the request from the workload, the corresponding data storage device on which each portion is stored; and means for requesting the corresponding portion after determining the corresponding data storage devices.
Example 30 includes the subject matter of any of Examples 28 and 29, and wherein the means for distributing the dataset over multiple data storage devices comprises means for distributing the dataset in response to a request from a workload to store the dataset.
Example 31 includes the subject matter of any of Examples 28-30, and wherein the means for distributing the dataset comprises means for writing the portions on data storage devices that are physically located on different managed nodes.
Example 32 includes the subject matter of any of Examples 28-31, and wherein the means for distributing the dataset comprises means for writing the portions on solid state drives.
Example 33 includes the subject matter of any of Examples 28-32, and further including means for associating each portion with a key and wherein the means for requesting the corresponding portion comprises means for requesting the portion stored in association with each key.
Example 34 includes the subject matter of any of Examples 28-33, and further including means for storing a map indicative of locations of the portions of the dataset among the data storage devices.
Example 35 includes the subject matter of any of Examples 28-34, and wherein the means for requesting the corresponding portion comprises means for accessing the map to determine the data storage device on which each corresponding portion is stored.
Example 36 includes the subject matter of any of Examples 28-35, and wherein the means for distributing the dataset comprises means for writing at least one redundant portion of the data set to at least one of the data storage devices.
Example 37 includes the subject matter of any of Examples 28-36, and wherein the means for requesting a corresponding portion comprises means for determining whether a data storage device on which one of the portions is stored is inoperative; means for determining, in response to a determination that the data storage device is inoperative, an alternative data storage device on which a redundant version of the portion is stored; and means for requesting the redundant version of the portion from the alternative data storage device.
Example 38 includes the subject matter of any of Examples 28-37, and wherein the means for combining the received portions comprises means for applying an error correction scheme to the received portions to correct corrupted data.
Example 39 includes the subject matter of any of Examples 28-38, and wherein the means for distributing the dataset over multiple data storage devices comprises means for applying an error correction scheme to generate one or more redundant versions of one or more of the portions; and means for writing the redundant versions to data storage devices in managed nodes that are separate from original versions of the corresponding portions.
Example 40 includes the subject matter of any of Examples 28-39, and wherein the means for distributing the dataset comprises means for writing the portions of the dataset to the data storage devices at a data throughput rate that is greater than the peak data throughput rate of any of the data storage devices.
Claims
1. A managed node to manage distributed data, the managed node comprising:
- a distributed data manager to distribute a dataset over multiple data storage devices coupled to a network, wherein each data storage device has a peak data throughput rate; and
- a network communicator to request a corresponding portion of the dataset from each data storage device and receive the requested portions of the dataset at a combined data throughput rate that is greater than the peak data throughput rate of any one of the data storage devices;
- wherein the distributed data manager is further to combine the received portions of the dataset to reconstruct the dataset.
2. The managed node of claim 1, wherein to request the corresponding portion of the dataset from each data storage device comprises to:
- receive a request from a workload for the dataset;
- determine, in response to the request from the workload, the corresponding data storage device on which each portion is stored; and
- request the corresponding portion after determining the corresponding data storage devices.
3. The managed node of claim 1, wherein to distribute the dataset over multiple data storage devices comprises to distribute the dataset in response to a request from a workload to store the dataset.
4. The managed node of claim 1, wherein to distribute the dataset comprises to write the portions on data storage devices that are physically located on different managed nodes.
5. The managed node of claim 1, wherein to distribute the dataset comprises to write the portions on solid state drives.
6. The managed node of claim 1, wherein the distributed data manager is further to associate each portion with a key and wherein to request the corresponding portion comprises to request the portion stored in association with each key.
7. The managed node of claim 1, wherein the distributed data manager is further to store a map indicative of locations of the portions of the dataset among the data storage devices.
8. The managed node of claim 7, wherein to request the corresponding portion comprises to access the map to determine the data storage device on which each corresponding portion is stored.
9. The managed node of claim 1, wherein to distribute the dataset comprises to write at least one redundant portion of the data set to at least one of the data storage devices.
10. The managed node of claim 1, wherein to request a corresponding portion comprises to:
- determine whether a data storage device on which one of the portions is stored is inoperative;
- determine, in response to a determination that the data storage device is inoperative, an alternative data storage device on which a redundant version of the portion is stored; and
- request the redundant version of the portion from the alternative data storage device.
11. The managed node of claim 1, wherein to combine the received portions comprises to apply an error correction scheme to the received portions to correct corrupted data.
12. One or more computer-readable storage media comprising a plurality of instructions that, when executed by a managed node, cause the managed node to:
- distribute a dataset over multiple data storage devices coupled to a network, wherein each data storage device has a peak data throughput rate;
- request a corresponding portion of the dataset from each data storage device;
- receive the requested portions of the dataset at a combined data throughput rate that is greater than the peak data throughput rate of any one of the data storage devices; and
- combine the received portions of the dataset to reconstruct the dataset.
13. The one or more computer-readable storage media of claim 12, wherein to request the corresponding portion of the dataset from each data storage device comprises to:
- receive a request from a workload for the dataset;
- determine, in response to the request from the workload, the corresponding data storage device on which each portion is stored; and
- request the corresponding portion after determining the corresponding data storage devices.
14. The one or more computer-readable storage media of claim 12, wherein to distribute the dataset over multiple data storage devices comprises to distribute the dataset in response to a request from a workload to store the dataset.
15. The one or more computer-readable storage media of claim 12, wherein to distribute the dataset comprises to write the portions on data storage devices that are physically located on different managed nodes.
16. The one or more computer-readable storage media of claim 12, wherein to distribute the dataset comprises to write the portions on solid state drives.
17. The one or more computer-readable storage media of claim 12, wherein the plurality of instructions, when executed, cause the managed node to associate each portion with a key and wherein to request the corresponding portion comprises to request the portion stored in association with each key.
18. The one or more computer-readable storage media of claim 12, wherein the plurality of instructions, when executed, cause the managed node to store a map indicative of locations of the portions of the dataset among the data storage devices.
19. The one or more computer-readable storage media of claim 18, wherein to request the corresponding portion comprises to access the map to determine the data storage device on which each corresponding portion is stored.
20. The one or more computer-readable storage media of claim 12, wherein to distribute the dataset comprises to write at least one redundant portion of the data set to at least one of the data storage devices.
21. The one or more computer-readable storage media of claim 12, wherein to request a corresponding portion comprises to:
- determine whether a data storage device on which one of the portions is stored is inoperative;
- determine, in response to a determination that the data storage device is inoperative, an alternative data storage device on which a redundant version of the portion is stored; and
- request the redundant version of the portion from the alternative data storage device.
22. A method for managing distributed data, the method comprising:
- distributing, by a managed node, a dataset over multiple data storage devices coupled to a network, wherein each data storage device has a peak data throughput rate;
- requesting, by the managed node, a corresponding portion of the dataset from each data storage device;
- receiving, by the managed node, the requested portions of the dataset at a combined data throughput rate that is greater than the peak data throughput rate of any one of the data storage devices; and
- combining, by the managed node, the received portions of the dataset to reconstruct the dataset.
23. The method of claim 22, wherein requesting the corresponding portion of the dataset from each data storage device comprises:
- receiving a request from a workload for the dataset;
- determining, in response to the request from the workload, the corresponding data storage device on which each portion is stored; and
- requesting the corresponding portion after determining the corresponding data storage devices.
24. The method of claim 22, wherein distributing the dataset over multiple data storage devices comprises distributing the dataset in response to a request from a workload to store the dataset.
25. The method of claim 22, wherein distributing the dataset comprises writing the portions on data storage devices that are physically located on different managed nodes.
Type: Application
Filed: Dec 30, 2016
Publication Date: Jan 25, 2018
Inventor: Steven C. Miller (Livermore, CA)
Application Number: 15/395,572