METHOD FOR DATA CENTER STORAGE EVALUATION FRAMEWORK SIMULATION
A method for simulating a data center and a non-transitory computer-readable storage medium having recorded thereon a computer program for executing the method are provided. The method includes generating, by a first application, a simulation program of a data center using a hardware configuration file and a functional description file; and executing, by a simulator, a simulation on the simulation program by obtaining, by the simulator, at least one record from a second application and producing at least one job corresponding to the at least one record, entering the at least one job in a job queue, and executing a flow, by a third application, using a job selected from the job queue.
This application is a Continuation of U.S. application Ser. No. 15/896,590, which was filed in the U.S. Patent and Trademark Office (USPTO) on Feb. 14, 2018, and claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Application No. 62/598,788, which was filed in the USPTO on Dec. 14, 2017, the entire content of each of which is incorporated herein by reference.
FIELD
The present disclosure relates generally to a method and an apparatus for simulation, and more particularly, to a method and an apparatus for data center storage evaluation framework (DCEF) simulation.
BACKGROUND
Organizing a data center with tens of physical host machines running hundreds of virtual machines is costly and time-consuming. This cost is even more pronounced when a company upgrades an existing data center to benefit from installing new devices, such as new processors, emerging memory technologies, or high-end storage devices, or from employing new management algorithms, such as resource allocation algorithms. However, upgrading an entire system is not only expensive but may also disrupt ongoing tasks in the data center and incur even more expense, and the results may not be worth the effort.
Thus, there is a need for an apparatus and a method of evaluating and estimating performance changes (e.g., an improvement or a degradation) brought on by hardware and software changes to a data center storage system, without physically changing the hardware and software.
SUMMARY
According to one embodiment, a method of simulating a data center is provided. The method includes generating, by a first application, a simulation program of a data center using a hardware configuration file and a functional description file; and executing, by a simulator, a simulation on the simulation program by obtaining, by the simulator, at least one record from a second application and producing at least one job corresponding to the at least one record, entering the at least one job in a job queue, and executing a flow, by a third application, using a job selected from the job queue.
According to one embodiment, a non-transitory computer-readable recording medium is provided having recorded thereon a computer program for executing a method of simulating a data center, the method including generating, by a first application, a simulation program of a data center using a hardware configuration file and a functional description file; and executing, by a simulator, a simulation on the simulation program by obtaining, by the simulator, at least one record from a second application and producing at least one job corresponding to the at least one record, entering the at least one job in a job queue, and executing a flow, by a third application, using a job selected from the job queue.
The above and other aspects, features, and advantages of certain embodiments of the present disclosure will be more apparent from the following detailed description, taken in conjunction with the accompanying drawings.
Hereinafter, embodiments of the disclosure are described in detail with reference to the accompanying drawings. It should be noted that the same elements will be designated by the same reference numerals although they are shown in different drawings. In the following description, specific details such as detailed configurations and components are merely provided to assist with the overall understanding of the embodiments of the present disclosure. Therefore, it should be apparent to those skilled in the art that various changes and modifications of the embodiments described herein may be made without departing from the scope of the present disclosure. In addition, descriptions of well-known functions and constructions are omitted for clarity and conciseness. The terms described below are terms defined in consideration of the functions in the present disclosure, and may be different according to users, intentions of the users, or customs. Therefore, the definitions of the terms should be determined based on the contents throughout this specification.
The present disclosure may have various modifications and various embodiments, among which embodiments are described below in detail with reference to the accompanying drawings. However, it should be understood that the present disclosure is not limited to the embodiments, but includes all modifications, equivalents, and alternatives within the scope of the present disclosure.
Although the terms including an ordinal number such as first, second, etc. may be used for describing various elements, the structural elements are not restricted by the terms. The terms are only used to distinguish one element from another element. For example, without departing from the scope of the present disclosure, a first structural element may be referred to as a second structural element. Similarly, the second structural element may also be referred to as the first structural element. As used herein, the term “and/or” includes any and all combinations of one or more associated items.
The terms used herein are merely used to describe various embodiments of the present disclosure but are not intended to limit the present disclosure. Singular forms are intended to include plural forms unless the context clearly indicates otherwise. In the present disclosure, it should be understood that the terms “include” or “have” indicate existence of a feature, a number, a step, an operation, a structural element, parts, or a combination thereof, and do not exclude the existence or probability of the addition of one or more other features, numerals, steps, operations, structural elements, parts, or combinations thereof.
Unless defined differently, all terms used herein have the same meanings as those understood by a person skilled in the art to which the present disclosure belongs. Terms such as those defined in a generally used dictionary are to be interpreted to have the same meanings as the contextual meanings in the relevant field of art, and are not to be interpreted to have ideal or excessively formal meanings unless clearly defined in the present disclosure.
The present disclosure concerns evaluating and estimating performance changes (e.g., an improvement or a degradation) brought on by hardware and software changes to a data center storage system without physically changing hardware and software. Thus, a data center administrator may easily address storage resource management and optimization issues in a data center that includes heterogeneous storage media (e.g., a solid state drive (SSD) and a hard disk drive (HDD)) and complex network topologies.
According to one embodiment, a company may evaluate a new or proposed installation of a data center through flow-based simulation on a single machine, without having to expend funds to physically install the data center.
According to one embodiment, the present disclosure is module-based, highly encapsulated, pluggable, and scalable.
According to one embodiment, the present disclosure uses automatic programming to generate a simulator program based on user hardware configuration description files.
According to one embodiment, the present disclosure may conduct multiple types of performance evaluations, where a generated simulator reads workload samples (e.g., workload trace metadata), simulates performance using a flow-based methodology, and outputs numerous performance metrics, such as input/output (I/O) performance, energy consumption, total cost of ownership (TCO), a reliability audit, and availability.
According to one embodiment, the present disclosure includes a decision-making assistant, where decisions may be made by a system administrator or automatically by the system for load balancing (including virtual machine disk (VMDK) migration) and for reorganizing a topology of a data center to improve performance, based on performance evaluation results of different hardware configurations under a certain workload pattern.
A cycle-driven simulator models a system cycle by cycle, which makes it detailed but slow. An event-driven simulator is usually used to simulate high-level computer systems, where every possible event in a system is described and precisely handled. An event-driven simulator is much faster than a cycle-driven simulator. The cycle-driven simulator and the event-driven simulator can each simulate concurrency, but their rigid implementations do not allow users to easily simulate arbitrary computer systems. As a result, the cycle-driven simulator and the event-driven simulator, by themselves, each lack flexibility, which may require a long development time. On the other hand, while a functional simulator may only require a short development time, the functional simulator, by itself, does not provide any information about physical characteristics, since the functional simulator is merely used to verify functionality.
Many timing simulators use event-driven and backward-looking models, in which a memory element has a history instead of a single value. Flow-based simulation leverages the key insight that a complex algorithm may be better understood when it is serialized. Thus, in contrast with event-driven methods, in which the invocation time of an event is difficult to determine (e.g., it depends on many factors), in a flow-based technique invocation times are predictable because of serial execution.
The goal of flow-based simulation is to let architects measure performance and energy consumption, and perform functional verification, by simulating only the functionality. In a flow-based simulator, the content of every memory element under simulation is bonded to time. Conceptually, time is a global floating point number which may increase or decrease. Since the state of a system is also time-bonded, it is possible to roll back a sequence of events after a call chain, which is referred to as a flashback.
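For illustration only, a time-bonded memory element of the kind described above might be sketched in C++ as a value stored together with the global simulation time at each assignment, so that a flashback simply discards entries newer than a given time; the class and member names below are hypothetical and are not taken from the disclosed simulator.

#include <cstdio>
#include <utility>
#include <vector>

// Global simulation time: conceptually a floating point number that may
// increase or decrease.
static double g_now = 0.0;

// A memory element whose content is bonded to time: every write is stored
// together with the time at which it occurred.
template <typename T>
class TimeBonded {
public:
    explicit TimeBonded(T init) { history_.push_back({0.0, init}); }

    void set(T value) { history_.push_back({g_now, value}); }

    // Latest value written at or before the current simulation time.
    T get() const {
        T v = history_.front().second;
        for (const auto& h : history_)
            if (h.first <= g_now) v = h.second;
        return v;
    }

    // Flashback: discard everything that happened after time t.
    void flashback(double t) {
        while (history_.size() > 1 && history_.back().first > t)
            history_.pop_back();
    }

private:
    std::vector<std::pair<double, T>> history_;  // (time, value) pairs
};

int main() {
    TimeBonded<int> reg(0);
    g_now = 1.0; reg.set(10);
    g_now = 2.0; reg.set(20);
    reg.flashback(1.0);                                     // roll back the write made at t = 2.0
    std::printf("value after flashback: %d\n", reg.get());  // prints 10
    return 0;
}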
In the present disclosure, a flow-based simulation methodology is employed. That is, a flow is defined as a sequence of successively occurring events that is triggered by invoking a routine and ends when the program counter returns from that routine. A routine is a piece of code that describes the functionality of an element, referred to as a block, in a system. Besides its task functions, each block has a latency/power component which is used for timing/energy evaluation. A block may receive a job and process the job. In addition, a block may optionally produce other jobs and send the produced jobs to a job queue, similar to an event-driven simulation, but at a higher level.
Flow-based techniques rely on a call stack to resemble buffering in hardware, so that a programmer need not use a job queue very frequently.
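For illustration only, the following minimal C++ sketch (with hypothetical block names not taken from the disclosure) shows how a block with a latency component might process a job, advance a global simulation time, and optionally push a follow-on job into a job queue, while a flow is simply the serialized call chain triggered by invoking the first block's routine.

#include <cstdio>
#include <queue>
#include <string>

struct Job { std::string payload; };

static double g_time = 0.0;             // global simulation time
static std::queue<Job> g_job_queue;     // jobs waiting to start a new flow

// A block: a routine describing one element's functionality, plus a latency
// component used for the timing evaluation.
struct CacheBlock {
    double latency = 0.5;
    void process(const Job& job) {
        g_time += latency;              // timing component of the block
        std::printf("cache handles %s at t=%.1f\n", job.payload.c_str(), g_time);
        // A block may optionally produce another job and send it to the job
        // queue (here only for jobs that are not already write-backs).
        if (job.payload.find("writeback") == std::string::npos)
            g_job_queue.push(Job{job.payload + "-writeback"});
    }
};

struct DiskBlock {
    double latency = 5.0;
    void process(const Job& job) {
        g_time += latency;
        std::printf("disk handles %s at t=%.1f\n", job.payload.c_str(), g_time);
    }
};

int main() {
    CacheBlock cache;
    DiskBlock disk;
    g_job_queue.push(Job{"read"});
    // Each job taken from the queue starts a flow: a serialized call chain
    // through the blocks that ends when the first routine returns.
    while (!g_job_queue.empty()) {
        Job job = g_job_queue.front();
        g_job_queue.pop();
        cache.process(job);             // the flow begins here
        disk.process(job);              // buffering is resembled by the call stack
    }
    std::printf("total simulated time: %.1f\n", g_time);
    return 0;
}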
Referring to FIG. 2, each of the data centers 201, 203, 205, and 207 may have multiple hosts (e.g., physical host machines or servers), as described in more detail below with reference to FIG. 3.
Referring to FIG. 3, inside each host machine 301, 303, 305, and 307, different hardware devices may be included, such as a central processing unit (CPU), memory (a dual in-line memory module (DIMM)/nonvolatile DIMM (NVDIMM)), an HDD, an SSD, a network interface controller (NIC), and interfaces (e.g., an address/data (ADDR/DATA) bus, a peripheral component interconnect (PCI) bus, a PCI express (PCI-e) bus, a serial advanced technology attachment (SATA) bus, and an intelligent drive electronics (IDE) bus).
In an embodiment, a non-transitory computer-readable recording medium having recorded thereon a computer program for executing the method of simulating a data center illustrated in the accompanying drawings may be provided.
Referring to FIG. 7, at 701, a trace record is obtained from a trace file, and at least one job corresponding to the trace record is produced and entered in a job queue.
At 702, it is determined if the job queue is empty. If the job queue is not empty, the method proceeds to 703. If the job queue is empty, the method proceeds to 711.
At 703, a job is selected from the job queue and provided to a simulator (e.g., a flow-based simulator) to run a simulation on the job. For example, trace records from a trace file are provided one by one to the flow-based simulator, where the simulator produces corresponding jobs and provides each job to a job distribution application of a simulator (e.g., the simulator 505 in FIG. 5).
At 705, the job distribution application starts a flow for the job. That is, for each job in the job queue that is provided to the job distribution application, the job distribution application attempts to start a flow.
At 707, a flow started by the job distribution application is executed from its beginning. That is, once a flow is started, execution returns to the beginning of the flow and proceeds from there.
At 709, it is determined if a new trace is required. If a new trace is not required, the method returns to 702. If a new trace is required, the method proceeds to 711.
At 711, it is determined if the end of the trace file is reached. If the end of the trace file is not reached then the method returns to 701. If the end of the trace file is reached then the method proceeds to 713.
At 713, an evaluation report is provided (e.g., output, printed). That is, when all of the jobs in the trace file are finished, an evaluation report is provided.
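For illustration only, the control flow described above might be expressed as the following minimal C++ sketch; the trace format, the job structure, and the helper names are hypothetical placeholders rather than the disclosed implementation.

#include <cstddef>
#include <cstdio>
#include <queue>
#include <string>
#include <vector>

struct Job { std::string op; };

// A stand-in for flow execution: a real simulator would walk the generated
// device models, while this sketch only accounts for simulated time.
static double g_time = 0.0;
static void execute_flow(const Job& job) { g_time += (job.op == "write" ? 2.0 : 1.0); }

int main() {
    // A stand-in for a trace file: one record per I/O request.
    std::vector<std::string> trace = {"read", "write", "read"};
    std::size_t next_record = 0;
    std::queue<Job> job_queue;

    for (;;) {
        if (job_queue.empty()) {                        // 702/709: a new trace record is required
            if (next_record == trace.size()) break;     // 711: end of the trace file reached
            job_queue.push(Job{trace[next_record++]});  // 701: produce a job from the record
            continue;
        }
        Job job = job_queue.front();                    // 703: select a job from the job queue
        job_queue.pop();
        execute_flow(job);                              // 705/707: start the flow and execute it
    }
    // 713: provide an evaluation report.
    std::printf("evaluation report: %zu records, simulated time %.1f\n",
                trace.size(), g_time);
    return 0;
}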
Referring to FIG. 8, the method 800 executes the simulator generated by the DCEF.
Referring to FIG. 9, the job distribution application 807 receives a job from the job queue application 805 and decodes the job in a job decoder 901 to obtain the job's flow identifier (ID) and metadata. The flow ID is then sent to a matcher application 905, which uses the flow ID to look up corresponding device object descriptions in the device pool application 809. The device pool application 809 contains pointers to all object descriptions, including both hardware and software components, and maintains information on all of the components of a system in a database.
From the matched device object descriptions, a device model is built, and a runner for the built device initializes and operates the device model. Additionally, one or more new jobs may be dispatched to the job queue if necessary. The job queue application 805 may never become empty if a hardware device includes an endless loop to keep it active.
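For illustration only, a minimal C++ sketch of this dispatch path (with hypothetical names) might decode a job into a flow ID and metadata, match the ID against a device pool, and hand the matched device model to a runner:

#include <cstdio>
#include <functional>
#include <map>
#include <queue>
#include <string>

struct Job { int flow_id; std::string metadata; };

// The device pool is simplified here to one routine per flow ID; a real pool
// would hold pointers to descriptions of all hardware and software components.
using DeviceModel = std::function<void(const std::string&)>;

int main() {
    std::map<int, DeviceModel> device_pool = {
        {1, [](const std::string& m) { std::printf("SSD model runs: %s\n", m.c_str()); }},
        {2, [](const std::string& m) { std::printf("NIC model runs: %s\n", m.c_str()); }},
    };

    std::queue<Job> job_queue;
    job_queue.push({1, "read 4KB"});
    job_queue.push({2, "send packet"});

    while (!job_queue.empty()) {
        Job job = job_queue.front();              // job received from the job queue
        job_queue.pop();
        int flow_id = job.flow_id;                // job decoder: flow ID plus metadata
        auto it = device_pool.find(flow_id);      // matcher: look up the device description
        if (it == device_pool.end()) continue;    // unknown flow ID: nothing to run
        it->second(job.metadata);                 // runner: initialize and operate the model
    }
    return 0;
}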
A hardware description and programming language (HDPL) for the present disclosure may be defined as follows:
In the HDPL, hardware description languages and programming languages are merged. With the HDPL, a user may describe a hardware device using a programming language. An example HDPL may be an extension of C++ for module description. The syntax is described above. Such an HDPL enables a user to define a new hardware module with a pipeline inside C++ code. Other programming language implementations are specifically envisioned.
An HDPL syntax template is as follows:
Template code for describing a generic module is described above. The description may be used for both hardware and software modules. A flow-based simulation method and design are used in a transpiler, which takes code in HDPL and generates equivalent C++ code to implement a flow-based simulation of a described system. A user need only specify the functionality and the latency/power component of each block (which may be an estimation). The overall timing measurement is conducted by the transpiler. A flow-based method is used to make the transformation from HDPL to C++ easier and to make cycle-accurate simulation faster. However, the present disclosure is not limited to using only a flow-based method; event-driven and cycle-driven methods may also be used. Only a functional simulation may be considered in a case where a user is interested only in investigating functionality in a faster way.
A module description may include multiple scope specifiers as follows:
A scope specifier “parameters” is used to parametrize the module. A parameter may be used anywhere throughout a module description. A parameter is similar to a template class in C++, but is more specialized in HDPL. For example, a parameter may alter the functionality of a module or alter a dimension of a member array.
A scope specifier “specifications” is used to reconfigure a base-module, where the base module's parameters may be modified using this scope specifier of an inheritor module.
A scope specifier “channels” is used to implement a channel as a C++ class consisting of a set of access functions and a buffer. A module may have three types of channels to provide various connectivities: input, output, and inout.
A scope specifier “input” is used to specify a connection that can only be connected to an output channel of another module. An input channel may be listened to, read, and flushed within the module description scope.
A scope specifier “output” is an output port that may be connected to another module's input channel. The output port may be connected to several input channels.
A scope specifier “inout” is a full-duplex channel which may be read and written from both sides. An inout channel may be connected only to another module's input, output, or inout channel.
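For illustration only, a channel of the kind described above might be sketched in C++ as a small class combining a buffer with a set of access functions; the names are hypothetical, and connecting an output channel to an input channel is modeled simply as sharing the same object.

#include <cstdio>
#include <deque>
#include <optional>

// A channel: a buffer plus access functions. An output of one module is
// connected to the input of another by sharing the same Channel object.
template <typename T>
class Channel {
public:
    void write(const T& value) { buffer_.push_back(value); }  // output side
    bool listen() const { return !buffer_.empty(); }          // input side: is data available?
    std::optional<T> read() {                                 // input side: consume one item
        if (buffer_.empty()) return std::nullopt;
        T v = buffer_.front();
        buffer_.pop_front();
        return v;
    }
    void flush() { buffer_.clear(); }                         // input side: discard buffered data
private:
    std::deque<T> buffer_;
};

int main() {
    Channel<int> wire;        // a producer's output connected to a consumer's input
    wire.write(42);           // the producer module writes
    if (wire.listen())        // the consumer module listens, then reads
        std::printf("received %d\n", *wire.read());
    return 0;
}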
A scope specifier “architecture” contains architectural declarations of a module, which may include multiple instances of objects, storage, pipeline, submodule, and variables.
A scope specifier “storage” describes a logical structure of a storage space of a device, which may form an access tree such as hdd.sector.page.line. The storage structure is similar to a regular structure in C, but the storage structure implements a complicated class which stores a history of modifications inside a hash table with a tiering or cutoff algorithm.
A scope specifier “pipeline” is a main description of a module's functionality. A module performs its tasks in one or more described pipelines. A pipeline includes a name for referring to the pipeline, a dimension to enable a super-pipeline, and some possible inputs. To describe a pipeline, its stages should be listed inside the body of the pipeline. Each stage is a function call which is described below with reference to an “implementation” scope of a module. Moreover, a pipeline may be conditional, which indicates that it is possible to perform a pipeline stage only when a particular condition exists.
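For illustration only, such a pipeline might be sketched in C++ as a named, ordered list of stage functions with an optional enabling condition; the names are hypothetical and the sketch ignores timing.

#include <cstdio>
#include <functional>
#include <string>
#include <vector>

// A pipeline: a name plus an ordered list of stages, where each stage is a
// function call; the predicate makes the pipeline conditional.
struct Pipeline {
    std::string name;
    std::function<bool()> condition;
    std::vector<std::function<void()>> stages;

    void run() {
        if (condition && !condition()) return;   // conditional pipeline: skip when disabled
        for (auto& stage : stages) stage();      // perform the stages in order
    }
};

int main() {
    bool cache_hit = false;
    Pipeline read_path{
        "read_path",
        [&] { return !cache_hit; },              // only reach the disk on a cache miss
        {
            [] { std::printf("stage 1: decode request\n"); },
            [] { std::printf("stage 2: access disk\n"); },
            [] { std::printf("stage 3: return data\n"); },
        }};
    read_path.run();
    return 0;
}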
A scope specifier “submodule” is similar to defining a new module, but a submodule is restricted to being used only inside the submodule's parent module.
A scope specifier “variables” is similar to a C++ class structure. It is possible to instantiate variables, arrays, structures (or structs), classes, and modules inside a module. A variable may be declared with a “hist” specifier, which makes it time-bonded by assigning a history of modifications to the variable. Such variables may be accessed by using getter and setter methods. Note that hist may also be used inside a storage structure.
A scope specifier “topology” is used to describe a connectivity of submodules.
A scope specifier “fields” describes address fields. A field may be used later in a format of addr@field which returns a particular bit-range of the address.
A scope specifier “implementation” describes all function bodies. In addition, a pipeline stage may be used in an architecture scope. A function is described in the form of a block, which includes a name, a possible input vector, a possible output type, physical characteristics such as latency and power, and a function body. The body is C++ code with some additional keywords for timing simulation: fire, wait, serial, and concurrent.
The keyword “fire” puts a job in a simulator job queue for starting a pipeline.
The keyword “wait” is used to call another block and make the caller busy.
The keyword “serial” defines a code block which makes everything inside it run serially, i.e., sequentially.
The keyword “concurrent” defines a code block which makes everything inside it run concurrently.
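For illustration only, the four timing keywords might map onto generated C++ roughly as follows; the helper names are hypothetical and only latency (not power) is accounted for.

#include <algorithm>
#include <cstdio>
#include <queue>
#include <string>

struct Job { std::string pipeline; };

static double g_time = 0.0;            // global simulation time
static std::queue<Job> g_job_queue;    // simulator job queue

// "fire": put a job in the simulator job queue so that a pipeline starts later.
static void fire(const std::string& pipeline) { g_job_queue.push(Job{pipeline}); }

// "wait": call another block and keep the caller busy for the callee's latency.
static double wait_for(double callee_latency) { g_time += callee_latency; return callee_latency; }

int main() {
    // "serial": the latencies of the enclosed calls add up.
    double serial_cost = 0.0;
    serial_cost += wait_for(1.0);      // stage A
    serial_cost += wait_for(2.0);      // stage B

    // "concurrent": the enclosed members run in parallel, so simulated time
    // advances by the longest member latency rather than by the sum.
    double member_a = 1.5, member_b = 3.0;
    g_time += std::max(member_a, member_b);

    fire("writeback");                 // schedule another pipeline via the job queue
    std::printf("serial cost %.1f, simulated time %.1f, queued jobs %zu\n",
                serial_cost, g_time, g_job_queue.size());
    return 0;
}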
An application of the present disclosure varies based on need. Overall, the simulator generates a complete report in terms of performance, energy, and reliability.
The term “performance” refers to an estimate of the performance of the whole system and a report of accurate results of running a given workload of a data center. Performance may also include throughput, bandwidth, and total execution time.
The term “throughput” refers to the number of input/output (I/O) operations per second (IOPS) reported for each part of a system based on a user's requirement.
The term “bandwidth” refers to a report of the exact bandwidth of devices, including storage and networking devices, after a simulation is completed.
The term “total execution time” refers to an estimate of the total execution time for each workload, which may be helpful for estimating costs.
The term “energy” refers to a complete report, provided at the completion of a simulation, of the energy consumption of each physical device, as well as an estimate for the whole data center.
The term “reliability” indicates that the simulator may receive a reliability model for each device and estimate each device's lifetime, failure rate, drop rate, mean time to failure, recovery, repair, mean time between failures, and mean downtime.
Based on the output reports, it is possible to calculate a total cost of ownership, availability, and several other measurements and estimations.
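For example, under the common steady-state model, availability may be estimated as MTBF/(MTBF + MTTR), so a device with a mean time between failures of 1,000,000 hours and a mean time to repair of 10 hours would have an estimated availability of roughly 99.999 percent; the particular model is an assumption used here for illustration rather than one fixed by the present disclosure.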
Although certain embodiments of the present disclosure have been described in the detailed description of the present disclosure, the present disclosure may be modified in various forms without departing from the scope of the present disclosure. Thus, the scope of the present disclosure shall not be determined merely based on the described embodiments, but rather determined based on the accompanying claims and equivalents thereto.
Claims
1. A method of simulating a data center, the method comprising:
- generating, by a first application, a simulation program of a data center using a hardware configuration file and a functional description file; and
- executing, by a simulator, a simulation on the simulation program by: obtaining, by the simulator, at least one record from a second application and producing at least one job corresponding to the at least one record, entering the at least one job in a job queue, and executing a flow, by a third application, using a job selected from the job queue.
2. The method of claim 1, further comprising generating, by the first application, models of devices associated with the data center.
3. The method of claim 2, wherein the devices associated with the data center include at least one of a central processing unit (CPU), memory including a dual in-line memory module (DIMM)/nonvolatile DIMM (NVDIMM), a hard disk drive (HDD), a solid state drive (SSD), a network interface controller (NIC), a network switch, an operating system, a workload application, a file system, a driver, a device with flows and blocks, and interfaces including an address/data (ADDR/DATA) bus, a peripheral component interconnect (PCI) bus, a PCI express (PCI-e) bus, a serial advanced technology attachment (SATA) bus, and an intelligent drive electronics (IDE) bus.
4. The method of claim 1, further comprising outputting, by the simulator, a performance metric associated with a single device or the data center.
5. The method of claim 4, wherein the performance metric includes at least one of input/output (I/O) performance, energy consumption, total cost of ownership (TCO), reliability, and availability associated with the single device or the data center.
6. The method of claim 5, wherein I/O performance includes throughput, bandwidth, and total execution time.
7. The method of claim 4, further comprising performing load balancing and topology reorganization to improve performance of the data center based on the performance metric.
8. The method of claim 1, further comprising storing the at least one record in a file application in the simulator.
9. The method of claim 8, further comprising:
- decoding the job to obtain a flow identifier (ID) and metadata by the third application;
- initializing and operating the data center by the third application; and
- looking up corresponding device object descriptions by the third application.
10. The method of claim 1, wherein the data center includes at least one host machine, wherein a host machine is one of a virtual machine (VM) and a hypervisor.
11. The method of claim 1, wherein the first application includes a data center storage evaluation framework (DCEF) application.
12. The method of claim 1, wherein the third application includes a job distribution application.
13. The method of claim 1, wherein the second application includes a trace file application.
14. A non-transitory computer-readable recording medium having recorded thereon a computer program for executing a method of simulating a data center, the method comprising:
- generating, by a first application, a simulation program of a data center using a hardware configuration file and a functional description file; and
- executing, by a simulator, a simulation on the simulation program by:
- obtaining, by the simulator, at least one record from a second application and producing at least one job corresponding to the at least one record;
- entering the at least one job in a job queue; and
- executing a flow, by a third application, using a job selected from the job queue.
15. The non-transitory computer-readable recording medium of claim 14, the computer program further comprising generating, by the first application, models of devices associated with the data center.
16. The non-transitory computer-readable recording medium of claim 15, wherein the devices associated with the data center include at least one of a central processing unit (CPU), memory including a dual in-line memory module (DIMM)/nonvolatile DIMM (NVDIMM), a hard disk drive (HDD), a solid state drive (SSD), a network interface controller (NIC), a network switch, an operating system, a workload application, a file system, a driver, a device with flows and blocks, and interfaces including an address/data (ADDR/DATA) bus, a peripheral component interconnect (PCI) bus, a PCI express (PCI-e) bus, a serial advanced technology attachment (SATA) bus, and an intelligent drive electronics (IDE) bus.
17. The non-transitory computer-readable recording medium of claim 14, the computer program further comprising outputting, by the simulator, a performance metric associated with a single device or the data center.
18. The non-transitory computer-readable recording medium of claim 17, wherein the performance metric includes at least one of input/output (I/O) performance, energy consumption, total cost of ownership (TCO), reliability, and availability associated with the single device or the data center.
19. The non-transitory computer-readable recording medium of claim 18, wherein I/O performance includes throughput, bandwidth, and total execution time.
20. The non-transitory computer-readable recording medium of claim 17, further comprising performing load balancing and topology reorganization to improve performance of the data center based on the performance metric.
Type: Application
Filed: Apr 28, 2021
Publication Date: Aug 12, 2021
Applicant:
Inventors: Morteza HOSEINZADEH (La Jolla, CA), Zhengyu YANG (Malden, MA), Terence Ping WONG (San Diego, CA), David EVANS (San Marcos, CA)
Application Number: 17/243,109