Patents by Inventor Jae Hoon AN
Jae Hoon AN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12369286
Abstract: There is provided an adaptive temperature control method based on log analysis of a chassis manager in an edge server. The adaptive temperature control method of the edge server system according to an embodiment includes: collecting, by a chassis manager module of the edge server system, work logs of a computing module and a storage module; predicting a future workload from the collected work logs; predicting a future internal temperature of the edge server system, based on the predicted workload and a future temperature; and controlling, by the chassis manager module, the edge server system, based on the predicted future internal temperature. Accordingly, the configuration modules of an edge server system may be managed and controlled in a rugged environment, and the temperature of the edge server system may be adaptively controlled by transferring or additionally generating work on the edge server.
Type: Grant
Filed: November 14, 2022
Date of Patent: July 22, 2025
Assignee: Korea Electronics Technology Institute
Inventors: Jae Hoon An, Young Hwan Kim
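The control loop this abstract describes — predict the workload from logs, predict the internal temperature, then act — can be sketched as follows. This is a minimal illustration, not code from the patent; the function names, the moving-average predictor, and the 70 °C threshold are all assumptions.

```python
# Hypothetical sketch of the adaptive temperature control loop: all names,
# models, and thresholds are illustrative, not taken from the patent.

def predict_workload(work_logs, window=3):
    """Predict the next workload as a moving average of recent log entries."""
    recent = work_logs[-window:]
    return sum(recent) / len(recent)

def predict_internal_temp(workload, ambient_temp, heat_coeff=0.08):
    """Estimate future internal temperature from workload and ambient temperature."""
    return ambient_temp + heat_coeff * workload

def control_action(predicted_temp, threshold=70.0):
    """Transfer work away from the edge server if the prediction exceeds a threshold."""
    return "transfer_work" if predicted_temp > threshold else "normal"

logs = [40, 55, 60, 80, 90]                 # CPU utilization (%) from the work logs
load = predict_workload(logs)               # average of the last three entries
temp = predict_internal_temp(load, ambient_temp=65.0)
action = control_action(temp)
```

With the sample logs above, the rising recent workload pushes the predicted internal temperature over the threshold, so the chassis manager would transfer work away before the server actually overheats.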
-
Patent number: 12339863
Abstract: There is provided a method for scheduling offloading snippets based on a large amount of DBMS task computation. A DB scheduling method according to an embodiment of the disclosure includes: determining, by a DBMS, whether to offload a part of the query computations upon receiving a query execution request from a client; generating, by the DBMS, an offloading code for offloading a part of the query computations, based on the received query, when offloading is determined; selecting one of a plurality of storages in which a DB is established; and delivering the offloading code. Accordingly, snippets that are generated simultaneously are scheduled across CSDs, so that resources are utilized evenly, query execution time is reduced, and the reliability of data processing is enhanced.
Type: Grant
Filed: November 14, 2022
Date of Patent: June 24, 2025
Assignee: Korea Electronics Technology Institute
Inventors: Jae Hoon An, Young Hwan Kim
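The "schedule simultaneously generated snippets so that resources are utilized evenly" idea can be illustrated with a least-loaded assignment. This is a hedged sketch under assumed names; the patent does not specify this particular balancing rule.

```python
# Illustrative scheduler: deliver each offloading snippet to the storage (CSD)
# with the fewest pending snippets, so work spreads evenly. Names are assumptions.

def schedule_offloading(snippets, storages):
    """Assign each snippet to the currently least-loaded storage."""
    pending = {s: 0 for s in storages}      # pending snippet count per CSD
    assignment = {}
    for snippet in snippets:
        target = min(pending, key=pending.get)
        assignment[snippet] = target
        pending[target] += 1
    return assignment

plan = schedule_offloading(["s1", "s2", "s3", "s4"], ["csd-a", "csd-b"])
```

Four snippets across two CSDs end up two per device, which is the equal-utilization property the abstract claims.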
-
Publication number: 20250155960
Abstract: There is provided a method for applying a learning-model-based power saving model in an intelligent BMC. According to an embodiment, a BMC includes: a prediction module configured to predict future computing resource usage and a future CPU temperature from monitoring data on computing resources; a power capping module configured to control power capping based on the predicted future computing resource usage; and a fan control module configured to control a cooling fan based on the predicted future CPU temperature. Accordingly, the BMC controls power capping and cooling fans effectively and efficiently based on prediction by interworking with the on-device AI, thereby reducing the power consumption of a data center infrastructure.
Type: Application
Filed: September 26, 2024
Publication date: May 15, 2025
Applicant: Korea Electronics Technology Institute
Inventors: Jae Hoon AN, Young Hwan KIM
-
Publication number: 20250156421
Abstract: There is provided a dynamic data block caching automation application method for high-speed data access based on a computational storage. A query execution method according to an embodiment includes the steps of: synchronizing, by a DBMS, an ECC, which is a cache of the DBMS, and an ICC, which is a cache of a computational storage in which a DB is established; generating an offloading execution code that defines the operation information necessary for query computation offloading, based on a query requested by a client; and processing the offloading execution code by using the synchronized ECC and ICC. Accordingly, the load on the CSD, which itself serves to reduce the load of the DBMS, is lowered by reducing snippet offloading and snippet processing, and high-speed query processing is enabled by disk-I/O-optimized data access.
Type: Application
Filed: September 30, 2024
Publication date: May 15, 2025
Applicant: Korea Electronics Technology Institute
Inventors: Jae Hoon AN, Young Hwan KIM
-
Patent number: 12292861
Abstract: There is provided a query execution method in a DB system in which a plurality of CSDs are used as storage. According to an embodiment, a query execution method includes: generating snippets for offloading a part of the query computations for a query received from a client to CSDs; scheduling the generated snippets for the CSDs; collecting the results of offloading; and merging the collected results. Accordingly, by dividing query computations, offloading them, and processing them in parallel, while the DBMS processes query computations that are inappropriate for offloading, a query request from a client can be executed effectively and rapidly.
Type: Grant
Filed: November 6, 2023
Date of Patent: May 6, 2025
Assignee: Korea Electronics Technology Institute
Inventors: Jae Hoon An, Young Hwan Kim
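The split / offload / collect / merge flow can be sketched with a thread pool standing in for parallel CSD execution. This is a minimal illustration under assumed names, not the patented implementation; the offloaded computation here is just a filter.

```python
# Sketch of the divide-offload-merge query flow; a ThreadPoolExecutor stands
# in for the CSDs executing snippets in parallel. All names are assumptions.
from concurrent.futures import ThreadPoolExecutor

def generate_snippets(rows, num_csds):
    """Split the offloadable part of a query (here: a filter) into per-CSD snippets."""
    return [rows[i::num_csds] for i in range(num_csds)]

def offload_to_csd(snippet):
    """Each CSD filters its rows near the data (the offloaded computation)."""
    return [r for r in snippet if r % 2 == 0]

def execute_query(rows, num_csds=3):
    snippets = generate_snippets(rows, num_csds)
    with ThreadPoolExecutor(max_workers=num_csds) as pool:
        results = list(pool.map(offload_to_csd, snippets))   # collect results
    # Merge the collected offloading results back in the DBMS.
    return sorted(r for part in results for r in part)

out = execute_query(list(range(10)))
```

Computations unsuited to offloading (joins, aggregation over the merged set, and so on) would run in the DBMS after the merge step.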
-
Patent number: 12099884
Abstract: There are provided a cloud management method and a cloud management apparatus for rapidly scheduling the arrangement of service resources, considering the equal distribution of resources in a large-scale container environment of a distributed collaboration type. The cloud management method according to an embodiment includes: receiving, by a cloud management apparatus, a resource allocation request for a specific service; monitoring, by the cloud management apparatus, the current status of available resources of a plurality of clusters, and selecting the clusters that can be allocated the requested resource; calculating, by the cloud management apparatus, a suitability score for each of the selected clusters; and selecting, by the cloud management apparatus, the cluster that is most suitable to the requested resource for executing the requested service from among the selected clusters, based on the respective suitability scores.
Type: Grant
Filed: September 7, 2021
Date of Patent: September 24, 2024
Assignee: Korea Electronics Technology Institute
Inventors: Jae Hoon An, Young Hwan Kim
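The filter-then-score-then-select steps can be sketched as below. The scoring formula (remaining free fraction after allocation, taking the tighter of CPU and memory) is an illustrative assumption; the patent does not disclose its exact formula here.

```python
# Hedged sketch of cluster selection: filter clusters that can satisfy the
# request, score each, pick the best. The scoring rule is an assumption.

def suitability_score(cluster, req_cpu, req_mem):
    """Score = fraction of capacity still free after allocation (tighter resource wins)."""
    free_cpu = cluster["cpu_free"] - req_cpu
    free_mem = cluster["mem_free"] - req_mem
    if free_cpu < 0 or free_mem < 0:
        return None   # cannot be allocated the requested resource
    return min(free_cpu / cluster["cpu_total"], free_mem / cluster["mem_total"])

def select_cluster(clusters, req_cpu, req_mem):
    scored = {name: suitability_score(c, req_cpu, req_mem) for name, c in clusters.items()}
    candidates = {n: s for n, s in scored.items() if s is not None}
    return max(candidates, key=candidates.get)

clusters = {
    "c1": {"cpu_free": 4, "cpu_total": 16, "mem_free": 8, "mem_total": 32},
    "c2": {"cpu_free": 10, "cpu_total": 16, "mem_free": 20, "mem_total": 32},
}
best = select_cluster(clusters, req_cpu=2, req_mem=4)
```

Preferring the cluster with the most headroom left over is one way to realize the "equal distribution of resources" goal the abstract states.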
-
Patent number: 12088665
Abstract: There is provided a method of data replication between management modules in a rugged environment. According to an embodiment of the present disclosure, an edge server management module replication method includes: a step of collecting, by a first management module, environment information of an edge server; a step of managing, by the first management module, the edge server, based on the collected environment information; a first storage step of storing, by the first management module, management data related to the edge server in a repository of the first management module; and a second storage step of storing, by a second management module, the management data stored at the first storage step in a repository of the second management module.
Type: Grant
Filed: October 8, 2021
Date of Patent: September 10, 2024
Assignee: Korea Electronics Technology Institute
Inventors: Jae Hoon An, Young Hwan Kim
-
Patent number: 12028269
Abstract: There are provided a method and an apparatus for cloud management, which select optimal resources based on graphics processing unit (GPU) resource analysis in a large-scale container platform environment. According to an embodiment, the GPU bottleneck phenomenon occurring in applications of a large-scale container environment may be reduced by processing partitioned allocation of GPU resources, rather than the existing 1:1 allocation, through real-time GPU data analysis (application of a threshold) and synthetic analysis of GPU performance-degrading factors.
Type: Grant
Filed: November 9, 2022
Date of Patent: July 2, 2024
Assignee: Korea Electronics Technology Institute
Inventors: Jae Hoon An, Young Hwan Kim
-
Publication number: 20240160487
Abstract: There is provided a cloud management method and apparatus for scheduling available GPU resources in a large-scale container platform environment. Accordingly, a list of available GPUs may be built from a GPU resource metric collected in a large-scale container operating environment, and an allocable GPU may be selected from the GPU list according to a request, so that GPU resources can be allocated flexibly in response to a user's GPU resource request (resource allocation reflecting the requested resources, rather than 1:1 allocation).
Type: Application
Filed: November 10, 2023
Publication date: May 16, 2024
Applicant: Korea Electronics Technology Institute
Inventors: Jae Hoon AN, Young Hwan KIM, Ju Hyun KIL
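The build-a-list-then-allocate-a-fraction idea can be sketched as follows. The metric field names and the most-free-memory selection rule are illustrative assumptions, not from the publication.

```python
# Sketch of flexible (partitioned) GPU allocation: filter collected metrics
# down to allocable devices, then grant only the requested share rather than
# a whole GPU. Field names and the selection rule are assumptions.

def available_gpus(gpu_metrics, requested_mem):
    """Build the list of GPUs with enough free memory for the request."""
    return [g for g in gpu_metrics if g["mem_total"] - g["mem_used"] >= requested_mem]

def allocate(gpu_metrics, requested_mem):
    candidates = available_gpus(gpu_metrics, requested_mem)
    if not candidates:
        return None
    # Pick the GPU with the most free memory and carve out only what was asked.
    best = max(candidates, key=lambda g: g["mem_total"] - g["mem_used"])
    best["mem_used"] += requested_mem
    return best["id"]

gpus = [
    {"id": "gpu0", "mem_total": 16, "mem_used": 14},   # only 2 GB free
    {"id": "gpu1", "mem_total": 16, "mem_used": 4},    # 12 GB free
]
granted = allocate(gpus, requested_mem=6)
```

Because only 6 GB of gpu1 is reserved, the remaining capacity stays allocable to other requests, which is the contrast with 1:1 whole-device allocation.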
-
Publication number: 20240160465
Abstract: There are provided a method and an apparatus for managing a hybrid cloud to perform consistent resource management for all resources in a heterogeneous cluster environment comprised of an on-premise cloud and a plurality of public clouds. Accordingly, the method and apparatus for hybrid cloud management provide an integration support function between different cluster orchestrations in such a heterogeneous cluster environment, support consistent resource management for all resources, and provide optimal workload deployment; free, optimal reconfiguration; migration and restoration; and integrated scaling of all resources.
Type: Application
Filed: November 10, 2023
Publication date: May 16, 2024
Applicant: Korea Electronics Technology Institute
Inventors: Jae Hoon AN, Young Hwan KIM
-
Publication number: 20240160610
Abstract: There is provided a query execution method in a DB system in which a plurality of CSDs are used as storage. According to an embodiment, a query execution method includes: generating snippets for offloading a part of the query computations for a query received from a client to CSDs; scheduling the generated snippets for the CSDs; collecting the results of offloading; and merging the collected results. Accordingly, by dividing query computations, offloading them, and processing them in parallel, while the DBMS processes query computations that are inappropriate for offloading, a query request from a client can be executed effectively and rapidly.
Type: Application
Filed: November 6, 2023
Publication date: May 16, 2024
Applicant: Korea Electronics Technology Institute
Inventors: Jae Hoon AN, Young Hwan KIM
-
Publication number: 20240160612
Abstract: There is provided a method for dividing query computations and scheduling them for CSDs in a DB system in which a plurality of CSDs are used as storage. A scheduling method according to an embodiment includes: selecting one of a plurality of scheduling policies; selecting the CSD to which the snippets included in a group are delivered, according to the selected scheduling policy; and delivering the snippets to the selected CSD. The scheduling policies are policies for selecting the CSDs to which snippets are delivered, based on different criteria. Accordingly, CSDs may be selected randomly according to a user setting or the query execution environment, or an optimal CSD may be selected according to the CSD status or the content of an offload snippet, so that query execution speed can be enhanced.
Type: Application
Filed: November 7, 2023
Publication date: May 16, 2024
Applicant: Korea Electronics Technology Institute
Inventors: Jae Hoon AN, Young Hwan KIM, Ri A CHOI
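The policy-driven selection step can be sketched with two of the criteria the abstract mentions: random selection, and selection by CSD status. The policy names and the "pending count" status field are illustrative assumptions.

```python
# Sketch of policy-based CSD selection; policy names and the status field
# are assumptions, not from the publication.
import random

def select_csd(csds, policy):
    """Pick the CSD to which a snippet group is delivered, per the chosen policy."""
    if policy == "random":
        return random.choice(csds)                         # user-setting-driven choice
    if policy == "least_pending":                          # CSD-status-based choice
        return min(csds, key=lambda c: c["pending"])
    raise ValueError(f"unknown policy: {policy}")

csds = [{"id": "csd0", "pending": 5}, {"id": "csd1", "pending": 1}]
chosen = select_csd(csds, "least_pending")
```

A real scheduler could add further policies (e.g. keyed on the offload snippet's content) behind the same interface.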
-
Publication number: 20240160261
Abstract: There is provided a smart power management method for reducing power consumption based on an intelligent BMC. A cooling fan control method by a BMC according to an embodiment includes: collecting monitoring data regarding computing modules; calculating the current CPU power from the collected monitoring data; predicting a future CPU temperature from the collected monitoring data; setting a rotation speed of a cooling fan based on the calculated current CPU power and the predicted future CPU temperature; and controlling the cooling fan at the set rotation speed. Accordingly, the BMC controls the cooling fan effectively and efficiently by interworking with on-device AI, thereby reducing power consumption in a server.
Type: Application
Filed: November 8, 2023
Publication date: May 16, 2024
Applicant: Korea Electronics Technology Institute
Inventors: Jae Hoon AN, Young Hwan KIM
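The fan-control rule described here — set the speed from current power plus a predicted future temperature — can be sketched as follows. The linear-trend predictor stands in for the on-device AI model, and the thresholds and RPM values are placeholders, not from the publication.

```python
# Minimal sketch of predictive fan control; the predictor and all thresholds
# are illustrative assumptions.

def predict_future_temp(temps, step=1.0):
    """Stand-in for the on-device AI model: extrapolate the recent trend."""
    return temps[-1] + (temps[-1] - temps[-2]) * step

def fan_speed_rpm(current_power_w, predicted_temp_c):
    """Map current CPU power and predicted CPU temperature to a fan speed."""
    if predicted_temp_c > 80 or current_power_w > 200:
        return 9000            # maximum cooling
    if predicted_temp_c > 65:
        return 6000            # pre-emptive ramp-up before the temperature arrives
    return 3000                # quiet, low-power baseline

temps = [60.0, 64.0]           # recent CPU temperatures from monitoring data
rpm = fan_speed_rpm(current_power_w=150, predicted_temp_c=predict_future_temp(temps))
```

Ramping the fan on the *predicted* temperature rather than the current one is what lets the BMC avoid both overshoot and sustained maximum-speed operation.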
-
Publication number: 20240160963
Abstract: There is provided an intelligent BMC for predicting a fault by interworking with on-device AI. A fault prediction method of a BMC according to an embodiment includes: collecting monitoring data regarding computing modules installed on a main board; calculating a FOFL from the collected monitoring data; and constructing an AI model related to the calculated FOFL and predicting a FOFL from the monitoring data. Accordingly, faults occurring in various patterns may be predicted from monitoring data by interworking with on-device AI.
Type: Application
Filed: November 6, 2023
Publication date: May 16, 2024
Applicant: Korea Electronics Technology Institute
Inventors: Jae Hoon AN, Young Hwan KIM, Han Gyeol KIM
-
Publication number: 20240155815
Abstract: There is provided a method for applying a BMC analytical local fan control model in a rugged environment. A server chassis cooling fan control method according to an embodiment controls the rotation speeds of cooling fans on a zone basis while identifying and managing the temperature distribution of an edge server chassis on a zone basis through BMC data analysis. Accordingly, damage that may be caused by the increased temperature of an edge server in a rugged environment may be minimized, and power consumption for cooling the edge server may also be reduced.
Type: Application
Filed: November 6, 2023
Publication date: May 9, 2024
Applicant: Korea Electronics Technology Institute
Inventors: Jae Hoon AN, Young Hwan KIM, Ki Cheol PARK
-
Publication number: 20230155958
Abstract: There are provided a method and an apparatus for cloud management, which select optimal resources based on graphics processing unit (GPU) resource analysis in a large-scale container platform environment. According to an embodiment, the GPU bottleneck phenomenon occurring in applications of a large-scale container environment may be reduced by processing partitioned allocation of GPU resources, rather than the existing 1:1 allocation, through real-time GPU data analysis (application of a threshold) and synthetic analysis of GPU performance-degrading factors.
Type: Application
Filed: November 9, 2022
Publication date: May 18, 2023
Applicant: Korea Electronics Technology Institute
Inventors: Jae Hoon AN, Young Hwan KIM
-
Publication number: 20230153317
Abstract: There is provided a method for scheduling offloading snippets based on a large amount of DBMS task computation. A DB scheduling method according to an embodiment of the disclosure includes: determining, by a DBMS, whether to offload a part of the query computations upon receiving a query execution request from a client; generating, by the DBMS, an offloading code for offloading a part of the query computations, based on the received query, when offloading is determined; selecting one of a plurality of storages in which a DB is established; and delivering the offloading code. Accordingly, snippets that are generated simultaneously are scheduled across CSDs, so that resources are utilized evenly, query execution time is reduced, and the reliability of data processing is enhanced.
Type: Application
Filed: November 14, 2022
Publication date: May 18, 2023
Applicant: Korea Electronics Technology Institute
Inventors: Jae Hoon AN, Young Hwan KIM
-
Publication number: 20230156976
Abstract: There is provided an adaptive temperature control method based on log analysis of a chassis manager in an edge server. The adaptive temperature control method of the edge server system according to an embodiment includes: collecting, by a chassis manager module of the edge server system, work logs of a computing module and a storage module; predicting a future workload from the collected work logs; predicting a future internal temperature of the edge server system, based on the predicted workload and a future temperature; and controlling, by the chassis manager module, the edge server system, based on the predicted future internal temperature. Accordingly, the configuration modules of an edge server system may be managed and controlled in a rugged environment, and the temperature of the edge server system may be adaptively controlled by transferring or additionally generating work on the edge server.
Type: Application
Filed: November 14, 2022
Publication date: May 18, 2023
Applicant: Korea Electronics Technology Institute
Inventors: Jae Hoon AN, Young Hwan KIM
-
Publication number: 20230153170
Abstract: There are provided a method and an apparatus for hybrid cloud management, which configure, reconfigure, and manage service resources in order to rapidly deploy a service in a hybrid cloud environment. According to embodiments of the disclosure, when there is a request for resources of a service operating in an existing cloud environment (Kubernetes), the problems of simply expanding replicas may be solved, and rapid processing (deployment) may be performed in response to continuous resource requests. In addition, available space for using resources may be guaranteed by applying HPA (increasing the number of resource replicas), VPA (increasing allocated resources), or migration (transferring resources), rather than simply expanding the number of replicas.
Type: Application
Filed: November 9, 2022
Publication date: May 18, 2023
Applicant: Korea Electronics Technology Institute
Inventors: Jae Hoon AN, Young Hwan KIM
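The three strategies the abstract names (HPA, VPA, migration) imply a decision among them. The sketch below is one plausible decision rule under illustrative thresholds; the publication does not disclose its actual criteria.

```python
# Hedged sketch of choosing among HPA, VPA, and migration instead of always
# adding replicas. All thresholds are illustrative assumptions.

def choose_strategy(node_free_cpu, requested_cpu, pod_cpu_limit, pod_cpu_used):
    """Pick a resource-expansion strategy for a pending resource request."""
    if pod_cpu_used < pod_cpu_limit * 0.5:
        return "HPA"        # pods are lightly loaded: add replicas
    if node_free_cpu >= requested_cpu:
        return "VPA"        # node has headroom: raise the pod's allocation
    return "migration"      # no local room: move the workload to another node

s1 = choose_strategy(node_free_cpu=4, requested_cpu=2, pod_cpu_limit=2.0, pod_cpu_used=0.5)
s2 = choose_strategy(node_free_cpu=1, requested_cpu=2, pod_cpu_limit=2.0, pod_cpu_used=1.8)
```

Keeping migration as the fallback is what guarantees "available space for using resources" even when neither scaling direction fits on the current node.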
-
Publication number: 20230153306
Abstract: There is provided an offloading data interfacing method between a DBMS storage engine and a computational storage device. In a query offloading method according to an embodiment of the disclosure, a DBMS may generate an offloading code for offloading a part of the query computations, based on a received query, when a query execution request is received from a client, and may deliver the offloading code to a storage in which a DB is established in a CSD. Accordingly, in a DB system using a CSD, a snippet for offloading a part of the query computations may be defined, and the DBMS and the storage are interfaced by using the offloading snippet, providing a guideline for executing a query through interworking between the CSD-based storage and the DBMS.
Type: Application
Filed: November 14, 2022
Publication date: May 18, 2023
Applicant: Korea Electronics Technology Institute
Inventors: Jae Hoon AN, Young Hwan KIM