METHOD AND SYSTEM FOR GPU VIRTUALIZATION BASED ON CONTAINER

A GPU virtualization method based on a container comprises the steps of: transmitting, when the container is created, a configuration file including GPU resource constraint information and an API profile to the container, by a node controller; and implementing a virtual GPU, when the container is executed, by intercepting a library call and changing an argument related to a GPU resource amount by a library controller provided in the container, and by intercepting a system call and changing argument and return values by a system call controller.

Description
TECHNICAL FIELD

The present invention relates to a method and a system for GPU virtualization based on a container, and more particularly, to a method and a system for GPU virtualization based on a container in which a library controller and a system call controller in the container implement the GPU virtualization by changing argument values and the like related to GPU resources.

BACKGROUND ART

Recently, virtualization techniques have been widely used to improve the efficiency, security, and compatibility of large-scale computing for multiple users. A representative example is the virtual machine, which is applied in various fields such as applications, servers, storage, and networks. However, although the virtual machine offers the highest level of compatibility and isolation, since it virtualizes all physical hardware components from CPUs to disks, networks, and even I/O devices, it has the disadvantage of large additional consumption (overhead) of computing resources.

Meanwhile, containers have emerged as a virtualization technique that overcomes this disadvantage of virtual machines by using operating-system-level isolation rather than hardware virtualization. A container shares the operating system kernel of the host as its kernel-level execution environment, while using a completely isolated file system and virtualized namespaces of kernel resource elements as its user-level execution environment. The content of the isolated file system is configured by combining, in one package, an application and all of the dependencies, libraries, other binaries, configuration files, and the like needed to run the application. The kernel resource elements divided into virtualized namespaces and provided to the container include process IDs, network sockets, user accounts, shared memory for inter-process communication (IPC), and the like. Since all other hardware accesses are processed in the same manner as in a non-containerized environment, the performance of the host hardware can be used in full without overhead. Here, the operating system provides an option for limiting the maximum amount of hardware resources available to each container.
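
On Linux, such per-container limits are commonly enforced through control groups; the following minimal sketch assumes a cgroup v2 hierarchy and an already-created group named demo-container, both of which are illustrative assumptions rather than details taken from this disclosure.

    /* Minimal sketch: capping CPU and memory for a container's cgroup
     * (cgroup v2). The group path and limit values are illustrative;
     * writing these files requires appropriate privileges. */
    #include <stdio.h>
    #include <stdlib.h>

    static void write_limit(const char *path, const char *value)
    {
        FILE *f = fopen(path, "w");
        if (!f) { perror(path); exit(EXIT_FAILURE); }
        fputs(value, f);
        fclose(f);
    }

    int main(void)
    {
        /* Allow 50 ms of CPU time per 100 ms period (half a core). */
        write_limit("/sys/fs/cgroup/demo-container/cpu.max", "50000 100000");
        /* Cap the container at 512 MiB of memory. */
        write_limit("/sys/fs/cgroup/demo-container/memory.max", "536870912");
        return 0;
    }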

Recently, as deep learning techniques have developed and demands for large-scale computing have increased, techniques for optimally sharing and managing computing resources are required. To improve performance, accelerated processing hardware optimized for the characteristics of deep learning operations has appeared, and the GPU is one example. However, the container-based virtualization techniques provided by existing operating systems support sharing and limiting resources only for the CPU, memory, disk, and file system of each container, and no technique is provided for simultaneously sharing accelerated processing hardware such as the GPU among several containers. Accordingly, it is difficult to share and manage GPUs efficiently.

DISCLOSURE OF INVENTION

Technical Problem

Therefore, the present invention has been made in view of the above problems, and it is an object of the present invention to provide a method and a system for GPU virtualization based on a container, which can dynamically allocate and share GPU resources through operating-system-level virtualization using containers, rather than physical virtualization.

Technical Solution

A GPU virtualization method based on a container according to an embodiment of the present invention includes the steps of: transmitting, when the container is created, a configuration file including GPU resource constraint information and an API profile to the container, by a node controller; and implementing a virtual GPU, when the container is executed, by intercepting a library call and changing an argument related to a GPU resource amount by a library controller provided in the container, and by intercepting a system call and changing argument and return values by a system call controller.

A GPU virtualization system based on a container according to an embodiment of the present invention includes: an operating system including a node controller for transferring a configuration file including resource constraint information and a system call/API profile to the container; and the container including a library controller which, when a library function call event of a user program is received, determines whether the event is an API call related to inquiry and allocation of GPU resources, changes at least one among an argument, a structure field and a return value related to GPU resource amounts, and calls the original library function, and a system call controller which, when a system call event of the user program is received, determines whether the event is a system call to be permitted, blocked or changed according to a predefined API profile, and changes argument and return values before and after the call of the original system call according to rules of the API profile.

Advantageous Effects

According to the present invention, a GPU computing system in which a single GPU is allocated to a single container, multiple GPUs are allocated to a single container, a single GPU is shared by multiple containers, or multiple GPUs are shared by multiple containers can be implemented by extending the container virtualization technique.

In addition, as the GPU computing system is implemented using containers, system resources can be used more efficiently than with a virtual machine, applications can be moved easily so that updates are simple, and scaling is straightforward.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a view showing the software structure of a GPU virtualization system based on a container according to an embodiment of the present invention.

FIG. 2 is a flowchart illustrating a GPU virtualization method based on a container according to an embodiment of the present invention.

FIG. 3 is a flowchart illustrating an operation method of a node controller according to an embodiment of the present invention.

FIG. 4 is a flowchart illustrating an operation method of a library controller according to an embodiment of the present invention.

FIG. 5 is a flowchart illustrating an operation method of a system call controller according to an embodiment of the present invention.

BEST MODE FOR CARRYING OUT THE INVENTION

Specific structural or functional descriptions of the embodiments according to the concept of the present invention disclosed in this specification are provided only to explain those embodiments, and the embodiments according to the concept of the present invention may be implemented in a variety of forms and are not limited to the embodiments described in this specification.

Since the embodiments according to the concept of the present invention may be changed in diverse ways and take various forms, specific embodiments are shown in the figures and described in detail in this specification. However, this is not intended to limit the embodiments according to the concept of the present invention to the specific disclosed forms, and the embodiments include all changes, equivalents and substitutions included in the spirit and scope of the present invention.

The terms used in this specification are used to describe particular embodiments only and are not intended to limit the present invention. A singular expression includes the plural unless the context clearly indicates otherwise. In this specification, terms such as “include” or “have” specify the presence of the features, integers, steps, operations, components, parts or combinations thereof stated in this specification, but do not preclude the presence or addition of one or more other features, integers, steps, operations, components, parts or combinations thereof.

Hereinafter, the embodiments of the present invention will be described in detail with reference to the figures attached to this specification.

FIG. 1 is a view showing the software structure of a GPU virtualization system based on a container according to an embodiment of the present invention.

Referring to FIG. 1, the software structure of a GPU virtualization system 100 includes a physical GPU 110, an operating system 120, and a plurality of containers 130.

The operating system 120 includes a node controller 121, a container engine 123, and an operating system kernel 125. The operating system 120 communicates with the physical GPU 110 through a GPU driver 127 installed in the operating system kernel 125.

The node controller 121 may transfer a configuration file including resource constraint information and a system call/API profile to the container 130 and store it in the container. The node controller 121 may confirm GPU resource availability and initialize its resource information. The GPU resources may be GPU processing units and GPU memory, but are not limited thereto. The node controller 121 may report the confirmed GPU resource availability to a manager and may receive a job assigned by the manager. The node controller 121 may then update the GPU resource availability information, subtracting the requested amount from the available resources. When a container is created, the node controller 121 may transfer the configuration file including the resource constraint information to the container, and when the end of container execution is sensed, the node controller 121 may reclaim the requested amount of resources and update its resource availability information. The node controller 121 may execute a user's code execution request in the container.

The container engine 123 creates and distributes the containers 130 and allocates GPU resources so that each container 130 may execute its application program. The container engine 123 may also execute and terminate containers.

The container 130 is a space containing an image that combines the various programs, source code, and libraries needed to run a user program. Execution of a user program is actually accomplished in the operating system 120. That is, the operating system 120 may access each container 130 through the container engine 123 and execute the corresponding user program.

The container 130 includes a user program 131, a GPU library 133, a GPU runtime 135, a library controller 137, and a system call controller 139.

The user program 131 operates to execute, in the container, a code execution request that the node controller receives from the user.

The GPU library 133 may include the libraries needed for a deep learning framework to operate; for example, at least one of deep learning frameworks such as TensorFlow, Caffe, PyTorch, CNTK and Chainer may operate.

A parallel computing platform executed on the GPU, such as CUDA, OpenCL or ROCm, may be installed and used in the GPU runtime 135. CUDA is GPU middleware widely utilized in the machine learning field and may operate in the GPU runtime. OpenCL is a cross-platform parallel computing framework utilized in the machine learning field and in high-performance computing (HPC).
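
For illustration only (this snippet is not part of the disclosure), the following shows the kind of GPU resource inquiry a user program issues through the CUDA runtime API; inside a virtualized container, the values it prints would reflect the per-container quota rather than the physical GPU. It assumes an installed CUDA toolkit and is compiled with nvcc or linked against -lcudart.

    /* Query free and total GPU memory through the CUDA runtime API.
     * This is exactly the kind of call the library controller can
     * intercept to present a virtual GPU. */
    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        size_t free_b, total_b;
        if (cudaMemGetInfo(&free_b, &total_b) != cudaSuccess) {
            fprintf(stderr, "no CUDA device available\n");
            return 1;
        }
        printf("GPU memory: %zu free / %zu total bytes\n", free_b, total_b);
        return 0;
    }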

When a library function call event of a user program is received, the library controller 137 may determine whether the event is an API call related to inquiry and allocation of GPU resources, change at least one among an argument, a structure field and a return value related to GPU resource amounts, and call the original library function. If the event is not an API call related to inquiry and allocation of GPU resources, the library controller 137 may call the original library function without changing any argument and return the return value as is.
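
As an illustrative sketch only (the disclosure does not prescribe an implementation), this kind of interception can be realized on Linux with LD_PRELOAD-style symbol interposition. The example below interposes on the CUDA driver API function cuMemGetInfo and clamps the reported memory to a quota; reading the quota from an environment variable named VGPU_MEM_QUOTA_BYTES is an assumption of this sketch, whereas the disclosure loads constraints from the configuration file, and actual CUDA symbol names and versioned aliases may differ.

    /* LD_PRELOAD-style library controller sketch: intercept
     * cuMemGetInfo() and clamp the reported free/total GPU memory
     * to a per-container quota. */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdlib.h>
    #include <stddef.h>

    typedef int CUresult;  /* 0 corresponds to CUDA_SUCCESS */
    typedef CUresult (*cu_mem_get_info_t)(size_t *, size_t *);

    CUresult cuMemGetInfo(size_t *free_mem, size_t *total_mem)
    {
        /* Look up and call the original, un-intercepted function. */
        cu_mem_get_info_t real =
            (cu_mem_get_info_t)dlsym(RTLD_NEXT, "cuMemGetInfo");
        CUresult rc = real(free_mem, total_mem);
        if (rc != 0)
            return rc;

        /* Clamp the return values to the container's quota. */
        const char *quota_env = getenv("VGPU_MEM_QUOTA_BYTES");
        if (quota_env) {
            size_t quota = (size_t)strtoull(quota_env, NULL, 10);
            if (*total_mem > quota) *total_mem = quota;
            if (*free_mem > quota)  *free_mem = quota;
        }
        return rc;
    }

Built with gcc -shared -fPIC -o libvgpu.so vgpu.c -ldl and activated with LD_PRELOAD=./libvgpu.so, such a shim changes only what the application observes; the original library still performs the real work.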

When a system call event of a user program is received, the system call controller 139 determines whether the event is a system call to be permitted, blocked or changed according to a predefined API profile, and may change argument and return values before and after the call of the original system call according to rules of the API profile. If the event is not such a system call, the system call controller 139 may call the original system call without changing any argument and return the return value as is.
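
One well-known Linux mechanism for this kind of system call interception is ptrace; the simplified sketch below is an assumption of this description, not the disclosure's stated mechanism. It stops a traced child at each system call boundary and marks where a profile rule could rewrite arguments or return values; the x86-64 register layout is assumed and signal stops are ignored for brevity.

    /* ptrace-based system call controller sketch: run a child under
     * tracing, stop at every system call entry and exit, and inspect
     * ioctl() as an illustrative rule. A real profile would match the
     * GPU device descriptor and specific request codes. */
    #include <stdio.h>
    #include <sys/ptrace.h>
    #include <sys/user.h>
    #include <sys/wait.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        if (argc < 2) { fprintf(stderr, "usage: %s program\n", argv[0]); return 1; }
        pid_t child = fork();
        if (child == 0) {
            ptrace(PTRACE_TRACEME, 0, NULL, NULL);
            execvp(argv[1], &argv[1]);   /* the program to virtualize */
            return 1;
        }
        int status, entering = 1;
        waitpid(child, &status, 0);      /* initial stop after exec */
        while (1) {
            ptrace(PTRACE_SYSCALL, child, NULL, NULL);
            waitpid(child, &status, 0);
            if (WIFEXITED(status))
                break;
            struct user_regs_struct regs;
            ptrace(PTRACE_GETREGS, child, NULL, &regs);
            if (regs.orig_rax == SYS_ioctl && !entering) {
                /* Syscall exit: a profile rule could rewrite the return
                 * value here, e.g. set regs.rax = -1 (i.e. -EPERM) to
                 * block the call, then apply it with PTRACE_SETREGS. */
                fprintf(stderr, "ioctl returned %lld\n", (long long)regs.rax);
            }
            entering = !entering;
        }
        return 0;
    }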

That is, a virtual GPU can be implemented as the library controller 137 in the container intercepts library calls and changes arguments related to the GPU resource amounts, and the system call controller 139 intercepts system calls and changes argument and return values.

FIG. 2 is a flowchart illustrating a GPU virtualization method based on a container according to an embodiment of the present invention.

Referring to FIG. 2, when a container is created (step S201), the node controller 121 transmits a configuration file including GPU resource constraint information and a system call/API profile to the container (step S203). The library controller and the system call controller in the container may receive and store the configuration file including the resource constraint information, as sketched below.
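
The disclosure does not specify the configuration file format. Purely as an assumption, the sketch below reads a GPU constraint from a key=value text file at the hypothetical path /etc/vgpu/config, which is one way the library controller and the system call controller could load the stored constraints.

    /* Read one constraint from a hypothetical key=value config file:
     *   gpu_mem_bytes=4294967296
     *   gpu_units=1
     */
    #include <stdio.h>
    #include <string.h>
    #include <stdlib.h>

    static long long read_constraint(const char *key)
    {
        FILE *f = fopen("/etc/vgpu/config", "r");
        if (!f)
            return -1;                     /* no config stored */
        char line[256];
        long long value = -1;
        size_t klen = strlen(key);
        while (fgets(line, sizeof line, f)) {
            if (strncmp(line, key, klen) == 0 && line[klen] == '=')
                value = strtoll(line + klen + 1, NULL, 10);
        }
        fclose(f);
        return value;
    }

    int main(void)
    {
        printf("gpu_mem_bytes = %lld\n", read_constraint("gpu_mem_bytes"));
        return 0;
    }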

When the container is executed, the virtual GPU is implemented as the library controller 137 provided in the container intercepts library calls and changes arguments related to the GPU resource amounts, and the system call controller 139 intercepts system calls and changes the argument and return values (step S205). At this point, the library controller 137 may change structure fields and return values, as well as the arguments related to the GPU resource amounts, and then call the original library function.

FIG. 3 is a flowchart illustrating an operation method of a node controller according to an embodiment of the present invention.

Referring to FIG. 3, the node controller first confirms GPU resource availability (step S301). Then, the node controller initializes resource information (step S303).

The process described below may be performed repeatedly in a server execution loop (step S305). The node controller reports the confirmed GPU resource availability to the manager (step S307). The node controller receives a job assigned by the manager (a job specification) (step S309). The node controller 121 updates the resource availability information (step S311), subtracting the requested amount of resources at this point. Then, a container is created (step S313), and the configuration file including the resource constraint information, which will be read by the library controller and the system call controller, is transmitted to the container and stored in the container (step S315). Then, the container is executed (step S317), and when the end of container execution is sensed, the resource availability information of the node controller is updated (step S319). At this point, the node controller may reclaim the requested amount of resources.
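
The loop of FIG. 3 can be summarized in code. The following compilable sketch uses illustrative stub functions and values (none of these names come from the disclosure) and annotates each call with the corresponding step number.

    /* Node controller server loop sketch (FIG. 3); all helpers are
     * illustrative stubs. */
    #include <stdio.h>

    typedef struct { long long gpu_units, gpu_mem_bytes; } gpu_avail_t;
    typedef struct { long long gpu_units, gpu_mem_bytes; } job_t;

    static gpu_avail_t confirm_availability(void) { gpu_avail_t a = {4, 1LL << 34}; return a; }
    static void report_to_manager(const gpu_avail_t *a) { (void)a; }
    static job_t receive_job(void) { job_t j = {1, 1LL << 32}; return j; }
    static void create_and_configure_container(const job_t *j) { (void)j; }
    static void run_container_until_exit(void) { }

    int main(void)
    {
        gpu_avail_t avail = confirm_availability();   /* S301, S303 */
        for (int iter = 0; iter < 1; iter++) {        /* server loop (S305); one pass shown */
            report_to_manager(&avail);                /* S307 */
            job_t job = receive_job();                /* S309 */
            avail.gpu_units     -= job.gpu_units;     /* S311: subtract requested amount */
            avail.gpu_mem_bytes -= job.gpu_mem_bytes;
            create_and_configure_container(&job);     /* S313, S315: config file stored */
            run_container_until_exit();               /* S317 */
            avail.gpu_units     += job.gpu_units;     /* S319: reclaim on container exit */
            avail.gpu_mem_bytes += job.gpu_mem_bytes;
        }
        printf("available GPU units: %lld\n", avail.gpu_units);
        return 0;
    }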

FIG. 4 is a flowchart illustrating an operation method of a library controller according to an embodiment of the present invention.

Referring to FIG. 4, the library controller receives a library function call event of a user program (step S401). Then, the library controller determines whether the event is an API call related to inquiry and allocation of GPU resources (step S403).

If the event is an API call related to inquiry and allocation of GPU resources as a result of the determination, the library controller changes at least one among an argument, a structure field and a return value related to GPU resource amounts (step S405). The changes may be made on the basis of an embedded API profile and the configuration file of the container.

Then, after at least one among an argument, a structure field and a return value is changed, the library controller calls the original library function (step S407).

If the event is not an API call related to inquiry and allocation of GPU resources as a result of the determination, the library controller calls the original library function without changing any argument and returns the return value as is (step S409).

FIG. 5 is a flowchart illustrating an operation method of a system call controller according to an embodiment of the present invention.

Referring to FIG. 5, the system call controller receives a system call event of a user program (step S501). The system call controller determines whether the event is a system call that needs a change according to the predefined API profile (step S503). At this point, the system call controller may also determine whether the call needs to be permitted or blocked, as well as changed. If the event is a system call that needs to be permitted, blocked or changed as a result of the determination, the system call controller changes the argument and return values before and after the call of the original system call according to the rules of the API profile (step S505).

If the system call does not need to be permitted, blocked or changed as a result of the determination, the system call controller calls the original system call without changing any argument and returns the return value as is (step S507).

While the present invention has been described with reference to the embodiments shown in the figures, these are only examples, and those skilled in the art will understand that various modifications and other equivalent embodiments are possible therefrom. Therefore, the true scope of the present invention should be defined by the technical spirit of the appended claims.

Claims

1. A GPU virtualization method based on a container, the method comprising the steps of:

transmitting, when the container is created, a configuration file including GPU resource constraint information and an API profile to the container, by a node controller; and
implementing a virtual GPU, when the container is executed, by intercepting a library call and changing an argument related to a GPU resource amount by a library controller provided in the container, and by intercepting a system call and changing argument and return values by a system call controller.

2. The method according to claim 1, wherein the node controller confirms GPU resource availability, initializes resource information of the node controller, reports the resource availability to a manager, receives a job assigned by the manager, and updates resource availability information of the node controller by subtracting the requested amount of resources.

3. The method according to claim 2, wherein the node controller stores the configuration file including resource constraint information in the container when the container is created, and reclaims the requested amount of resources and updates the resource availability information of the node controller when an end of executing the container is sensed.

4. The method according to claim 1, wherein when a library function call event of a user program is received, the library controller determines whether the event is an API call related to inquiry and allocation of GPU resources, changes at least one among an argument, a structure field and a return value related to GPU resource amounts, and calls an original library function.

5. The method according to claim 1, wherein when a system call event of a user program is received, the system call controller determines whether the event is a system call of at least one among permission, block and change according to a predefined API profile, and changes argument and return values before and after the call of the original system call according to rules of the API profile.

6. A GPU virtualization system based on a container, the system comprising:

an operating system including a node controller for transferring a configuration file including resource constraint information and a system call/API profile to the container; and
the container including a library controller for determining, when a library function call event of a user program is received, whether the event is an API call related to inquiry and allocation of GPU resources, changing at least one among an argument, a structure field and a return value related to GPU resource amounts, and calling an original library function, and a system call controller for determining, when a system call event of the user program is received, whether the event is a system call of at least one among permission, block and change according to a predefined API profile, and changing argument and return values before and after a call of the original system call according to rules of the API profile.
Patent History
Publication number: 20200210241
Type: Application
Filed: Mar 27, 2019
Publication Date: Jul 2, 2020
Inventors: Joon Gi KIM (Seoul), Jeong Kyu SHIN (Seoul), Jong Hyun PARK (Seoul)
Application Number: 16/366,303
Classifications
International Classification: G06F 9/50 (20060101); G06F 9/54 (20060101); G06T 1/20 (20060101); G06F 9/455 (20060101);