Patents by Inventor Sung-In Jung

Sung-In Jung has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230297442
    Abstract: Provided is a method of configuring a cluster, which is a method of assigning graphics processing unit (GPU) servers in a cloud in which a plurality of machine learning (ML) services are executed using an apparatus for configuring a cluster. The apparatus for configuring a cluster is configured to measure the power consumption characteristics of each of the GPU servers constituting the cloud for each of a plurality of different models processing the plurality of ML services and assign at least one GPU server to each of the plurality of models using power consumption characteristics of each of the GPU servers for each of the plurality of models to configure a GPU cluster for each of the plurality of models.
    Type: Application
    Filed: March 10, 2023
    Publication date: September 21, 2023
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Eui Seong SEO, Jong Seok KIM, Jun Yeol YU, Sung In JUNG
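The entry above describes assigning GPU servers to ML models based on per-model power measurements. Below is a minimal, hypothetical Python sketch of that idea: it greedily gives each model the unassigned servers on which it measured the lowest power. The function name, data layout, and greedy policy are illustrative assumptions, not the patented method.

```python
# Hypothetical sketch of power-aware GPU cluster assignment; not the patented method.
from collections import defaultdict

def assign_clusters(power_profile, servers_per_model=1):
    """power_profile[model][server] = measured watts when the server runs that model.

    Greedily gives each model the servers on which it consumes the least power,
    without assigning the same server to two models.
    """
    clusters = defaultdict(list)
    taken = set()
    for model, readings in power_profile.items():
        # Sort candidate servers by measured power consumption for this model.
        for server, watts in sorted(readings.items(), key=lambda kv: kv[1]):
            if server not in taken:
                clusters[model].append(server)
                taken.add(server)
            if len(clusters[model]) >= servers_per_model:
                break
    return dict(clusters)

if __name__ == "__main__":
    profile = {
        "resnet50":  {"gpu-a": 210.0, "gpu-b": 180.0, "gpu-c": 240.0},
        "bert-base": {"gpu-a": 150.0, "gpu-b": 170.0, "gpu-c": 160.0},
    }
    print(assign_clusters(profile))  # {'resnet50': ['gpu-b'], 'bert-base': ['gpu-a']}
```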
  • Publication number: 20210382752
    Abstract: Disclosed herein are an apparatus and method for accelerating file I/O offload for a unikernel. The method, performed by the apparatus and server for accelerating file I/O offload for the unikernel, includes: executing, by the apparatus, an application in the unikernel and calling, by the thread of the application, a file I/O function; generating, by the unikernel, a file I/O offload request using the file I/O function; transmitting, by the unikernel, the file I/O offload request to Linux of the server; receiving, by Linux, the file I/O offload request from the thread of the unikernel and processing, by Linux, the file I/O offload request; transmitting, by Linux, a file I/O offload result for the file I/O offload request to the unikernel; and delivering the file I/O offload result to the thread of the application.
    Type: Application
    Filed: June 7, 2021
    Publication date: December 9, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Yeon-Jeong JEONG, Jin-Mee KIM, Young-Joo WOO, Yong-Seob LEE, Seung-Hyub JEON, Sung-In JUNG, Seung-Jun CHA
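The unikernel patent above offloads file I/O to a Linux host over a request/response channel. The sketch below models that flow in plain user-space Python, with queues standing in for the unikernel-to-Linux channel; `linux_proxy` and `unikernel_write` are hypothetical names, and the whole thing is only an analogue of the described mechanism.

```python
# Hypothetical user-space analogue of file I/O offload: the "unikernel" side packs a
# file I/O request, the "Linux proxy" side performs the real I/O and returns the result.
import queue
import threading

requests = queue.Queue()   # stands in for the unikernel -> Linux channel
responses = queue.Queue()  # stands in for the Linux -> unikernel channel

def linux_proxy():
    """Drains offload requests, performs the file I/O locally, returns results."""
    while True:
        req = requests.get()
        if req is None:
            break
        op, path, data = req
        if op == "write":
            with open(path, "w") as f:
                responses.put(("ok", f.write(data)))
        elif op == "read":
            with open(path) as f:
                responses.put(("ok", f.read()))

def unikernel_write(path, data):
    """Called in place of a direct write(); offloads the I/O and waits for the result."""
    requests.put(("write", path, data))
    return responses.get()

if __name__ == "__main__":
    threading.Thread(target=linux_proxy, daemon=True).start()
    print(unikernel_write("/tmp/offload_demo.txt", "hello"))  # ('ok', 5)
    requests.put(None)
```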
  • Patent number: 11042394
    Abstract: Disclosed is an apparatus and method of processing input and output in a multi-kernel system. A method of processing input and output in a multi-kernel system according to the present disclosure includes: setting a shared memory between a first kernel on a main processor and a lightweight kernel on a parallel processor; setting a data transmission and reception channel between the first kernel on the main processor and the lightweight kernel on the parallel processor using the shared memory; providing, on the basis of the data transmission and reception channel, an input/output task that occurs in the lightweight kernel to the first kernel on the main processor; processing, by the first kernel on the main processor, an operation corresponding to the input/output task; and providing a result of the processing to the lightweight kernel.
    Type: Grant
    Filed: October 12, 2018
    Date of Patent: June 22, 2021
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Seung Jun Cha, Jin Mee Kim, Seung Hyub Jeon, Sung In Jung, Yeon Jeong Jeong
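The multi-kernel I/O patent above sets up a shared-memory channel between a main-processor kernel and a lightweight kernel. The sketch below is a single-process Python model of such a channel: a fixed-size ring buffer stands in for the shared memory region, and send/recv stand in for the two kernels exchanging an I/O task and its result. It illustrates the channel concept only, not the patented implementation.

```python
# Hypothetical single-process model of a shared-memory channel: a fixed-size ring
# buffer stands in for the region shared by the lightweight kernel (producer of I/O
# requests) and the main-processor kernel (consumer).
class RingChannel:
    def __init__(self, slots=8):
        self.buf = [None] * slots   # models the shared memory region
        self.head = 0               # next slot the consumer reads
        self.tail = 0               # next slot the producer writes
        self.slots = slots

    def send(self, msg):
        if (self.tail + 1) % self.slots == self.head:
            raise BufferError("channel full")
        self.buf[self.tail] = msg
        self.tail = (self.tail + 1) % self.slots

    def recv(self):
        if self.head == self.tail:
            return None  # channel empty
        msg, self.buf[self.head] = self.buf[self.head], None
        self.head = (self.head + 1) % self.slots
        return msg

if __name__ == "__main__":
    chan = RingChannel()
    chan.send({"op": "read", "fd": 3, "len": 4096})   # lightweight kernel enqueues an I/O task
    task = chan.recv()                                # main-processor kernel dequeues it
    chan.send({"op": "result", "status": 0})          # ...and posts the processing result back
    print(task, chan.recv())
```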
  • Publication number: 20210182103
    Abstract: A method and an apparatus for processing to support scalability in a many-core environment are provided. The processing apparatus includes: a counter unit configured to include a global reference counter, at least one category reference counter configured to access the global reference counter, and at least one local reference counter configured to access the category reference counter; and a processor connected to the counter unit and configured to increase or decrease each reference counter. The at least one category reference counter has a hierarchical structure including at least one layer.
    Type: Application
    Filed: November 24, 2020
    Publication date: June 17, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Seung Hyub JEON, Jin Mee KIM, Sung-In JUNG, Yeon Jeong JEONG, Seung-Jun CHA
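The hierarchical reference counter above exists to keep many cores from contending on a single shared counter. A minimal Python sketch of that layering follows: local counters batch their updates into a category counter, which batches into the global counter. The class name, batch threshold, and fold-upward policy are assumptions for illustration only.

```python
# Hypothetical sketch of a hierarchical reference counter: "local" counters absorb
# most increments/decrements and only fold their delta into the parent ("category")
# counter when it exceeds a batch threshold, which in turn folds into the single
# global counter. This reduces contention on the shared counter.
class HierCounter:
    def __init__(self, parent=None, batch=4):
        self.parent = parent
        self.batch = batch
        self.delta = 0      # updates not yet pushed to the parent
        self.value = 0      # only meaningful at the root (global) counter

    def add(self, n):
        if self.parent is None:
            self.value += n                     # global counter: apply directly
        else:
            self.delta += n
            if abs(self.delta) >= self.batch:   # fold the batched delta upward
                self.parent.add(self.delta)
                self.delta = 0

if __name__ == "__main__":
    global_ref = HierCounter()
    category = HierCounter(parent=global_ref)
    local = HierCounter(parent=category, batch=2)
    for _ in range(8):
        local.add(1)
    print(global_ref.value)  # 8: all local increments eventually reach the global counter
```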
  • Publication number: 20200151118
    Abstract: Disclosed herein is an operating method of an apparatus for offloading file I/O of a unikernel based on Remote Direct Memory Access (RDMA). The operating method may include calling, by the application of the unikernel, a file I/O kernel function; generating, by the file I/O kernel function, file I/O information; transmitting, by the unikernel, the file I/O information to a Linux; configuring, by the Linux, a file I/O function using the file I/O information and calling the same; and transmitting, by the file I/O function, a file I/O request, corresponding to the file I/O information, to a file server.
    Type: Application
    Filed: September 25, 2019
    Publication date: May 14, 2020
    Inventors: Yeon-Jeong JEONG, Jin-Mee KIM, Ramneek, Seung-Hyub JEON, Sung-In JUNG, Seung-Jun CHA
  • Patent number: 10296379
    Abstract: Scheduling threads in a system with many cores includes generating a thread map where a connection relationship between a plurality of threads is represented by a frequency of inter-process communication (IPC) between threads, generating a core map where a connection relationship between a plurality of cores is represented by a hop between cores, and respectively allocating the plurality of threads to the plurality of cores defined by the core map, based on a thread allocation policy defining a mapping rule between the thread map and the core map.
    Type: Grant
    Filed: March 17, 2017
    Date of Patent: May 21, 2019
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Kang Ho Kim, Kwang Won Koh, Jin Mee Kim, Jeong Hwan Lee, Seung Hyub Jeon, Sung In Jung, Yeon Jeong Jeong, Seung Jun Cha
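The scheduling patent above maps a thread map (IPC frequency between threads) onto a core map (hop distance between cores). The sketch below shows one naive way such a mapping could work: place the most communicative thread pairs on the closest free core pairs. The greedy policy and data layout are hypothetical, not the patented allocation policy.

```python
# Hypothetical greedy sketch of IPC-aware thread placement: thread pairs with the
# highest IPC frequency are mapped to core pairs with the smallest hop distance.
def place_threads(thread_map, core_map):
    """thread_map[(t1, t2)] = IPC frequency; core_map[(c1, c2)] = hop count."""
    pairs = sorted(thread_map.items(), key=lambda kv: kv[1], reverse=True)   # chattiest first
    core_pairs = sorted(core_map.items(), key=lambda kv: kv[1])              # closest first
    placement, used_threads, used_cores = {}, set(), set()
    for (t1, t2), _freq in pairs:
        if {t1, t2} & used_threads:
            continue
        for (c1, c2), _hops in core_pairs:
            if not ({c1, c2} & used_cores):
                placement[t1], placement[t2] = c1, c2
                used_threads |= {t1, t2}
                used_cores |= {c1, c2}
                break
    return placement

if __name__ == "__main__":
    threads = {("A", "B"): 120, ("C", "D"): 30}
    cores = {(0, 1): 1, (0, 2): 2, (1, 3): 2, (2, 3): 1, (0, 3): 3, (1, 2): 3}
    print(place_threads(threads, cores))  # A and B share the closest core pair
```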
  • Publication number: 20190114193
    Abstract: Disclosed is an apparatus and method of processing input and output in a multi-kernel system. A method of processing input and output in a multi-kernel system according to the present disclosure includes: setting a shared memory between a first kernel on a main processor and a lightweight kernel on a parallel processor; setting a data transmission and reception channel between the first kernel on the main processor and the lightweight kernel on the parallel processor using the shared memory; providing, on the basis of the data transmission and reception channel, an input/output task that occurs in the lightweight kernel to the first kernel on the main processor; processing, by the first kernel on the main processor, an operation corresponding to the input/output task; and providing a result of the processing to the lightweight kernel.
    Type: Application
    Filed: October 12, 2018
    Publication date: April 18, 2019
    Inventors: Seung Jun CHA, Jin Mee KIM, Seung Hyub JEON, Sung In JUNG, Yeon Jeong JEONG
  • Publication number: 20180315515
    Abstract: A nuclear fuel simulation method is provided that includes: step (a) for receiving an input of data on the order in which nuclear fuels are moved; step (b) for extracting, from the data, information on nuclear fuels, the coordinates of locations from which nuclear fuels are unloaded, and the coordinates of locations into which nuclear fuels are loaded; and step (c) for simulating the information extracted in step (b) according to a flowchart of the data. The present invention has an advantage in that it is possible to accurately and quickly verify all fuel movement works requiring the unloading and loading of nuclear fuels by receiving an input of a huge amount of data on the order in which nuclear fuels are moved and systematically verify an error that may occur during a simulation according to a flowchart, which enables the workload of about three man-days, required per cycle for each reactor, to be done in three man-hours, thereby achieving a significant reduction in working time.
    Type: Application
    Filed: August 1, 2016
    Publication date: November 1, 2018
    Applicant: KOREA HYDRO & NUCLEAR POWER CO., LTD.
    Inventors: Sung-In Jung, Chan-Yan Park, Taek-Yoon Lee, Byeong-Kil Ko
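The simulation method above replays a fuel-movement order and flags errors. The Python sketch below captures that verification idea: apply each (fuel, source, destination) move to a storage map and report moves whose source does not hold the expected fuel or whose destination is occupied. Coordinates, record layout, and error rules are illustrative assumptions, not the patented procedure.

```python
# Hypothetical sketch of move verification: replay the fuel-movement order against a
# storage map and flag moves whose source is empty or whose destination is occupied.
def simulate_moves(initial_map, moves):
    """initial_map: {coord: fuel_id}; moves: list of (fuel_id, src, dst)."""
    storage = dict(initial_map)
    errors = []
    for step, (fuel, src, dst) in enumerate(moves, start=1):
        if storage.get(src) != fuel:
            errors.append(f"step {step}: {fuel} is not at {src}")
        elif dst in storage:
            errors.append(f"step {step}: destination {dst} already occupied")
        else:
            del storage[src]
            storage[dst] = fuel
    return storage, errors

if __name__ == "__main__":
    start = {"A1": "F-001", "A2": "F-002"}
    plan = [("F-001", "A1", "B1"), ("F-002", "A2", "B1")]  # second move collides
    final, errs = simulate_moves(start, plan)
    print(final)  # {'A2': 'F-002', 'B1': 'F-001'}
    print(errs)   # ['step 2: destination B1 already occupied']
```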
  • Publication number: 20180277271
    Abstract: A method for managing the movements of nuclear fuels is provided that includes: (a) loading a storage status map in which storage racks of an SFPR where spent fuels are stored and an NFS where new fuels are stored are mapped; (b) assigning the storage locations of nuclear fuels and the colors thereof in the storage status map; (c) receiving a pattern type input of tasks for designating the order in which nuclear fuels are moved and the locations to which the nuclear fuels are moved, and generating a movement flowchart; and (d) updating the storage status map according to the level at which the movement flowchart has progressed. The present invention has an advantage in that it is possible to quickly create a nuclear fuel movement flowchart by automating all fuel movement works requiring the unloading and loading of nuclear fuels, which enables the workload of about 30 man-days, required per cycle for each reactor, to be done in three man-hours, thereby achieving a significant reduction in working time.
    Type: Application
    Filed: July 27, 2016
    Publication date: September 27, 2018
    Applicant: KOREA HYDRO & NUCLEAR POWER CO., LTD.
    Inventors: Sung-In Jung, Chan-Yan Park, Taek-Yoon Lee, Byeong-Kil Ko
  • Publication number: 20170329642
    Abstract: Provided is a many-core system including a resource unit including a resource needed for execution of an operating system and a resource needed for execution of a lightweight kernel, a program constructing unit configured to convert an input program into an application program and to load the application program into the resource unit, a run-time management unit configured to manage a running environment for executing the application program, and a self-organization management unit configured to monitor the application program and the resources in the resource unit, to dynamically adjust the running environment to prevent a risk factor from occurring during the execution of the application program, and to cure a risk factor that has occurred.
    Type: Application
    Filed: August 11, 2016
    Publication date: November 16, 2017
    Inventors: Jin Mee KIM, Kwang Won KOH, Kang Ho KIM, Jeong Hwan LEE, Seung Hyub JEON, Sung In JUNG, Yeon Jeong JEONG, Seung Jun CHA
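The self-organization management unit above monitors resources and adjusts the running environment before or after a risk occurs. The sketch below is a hypothetical, much-simplified version of that decision step: compare monitored usage against limits and emit either a preventive adjustment or a curing action. Threshold values and action names are assumptions.

```python
# Hypothetical sketch of a self-organization loop: monitor resource usage, adjust the
# running environment before a limit is breached, and "cure" when a risk has occurred.
def self_organize(metrics, limits):
    """metrics/limits: {resource_name: value}. Returns a list of (action, resource)."""
    actions = []
    for resource, used in metrics.items():
        limit = limits[resource]
        if used >= limit:
            actions.append(("cure", resource))        # risk occurred: rebalance/reclaim now
        elif used >= 0.8 * limit:
            actions.append(("preempt", resource))     # approaching the limit: adjust early
    return actions or [("ok", None)]

if __name__ == "__main__":
    print(self_organize({"cores": 60, "memory_gb": 250}, {"cores": 64, "memory_gb": 256}))
    # [('preempt', 'cores'), ('preempt', 'memory_gb')]
```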
  • Publication number: 20170269966
    Abstract: Provided is a method of scheduling threads in a many-core system. The method includes generating a thread map where a connection relationship between a plurality of threads is represented by a frequency of inter-process communication (IPC) between threads, generating a core map where a connection relationship between a plurality of cores is represented by a hop between cores, and respectively allocating the plurality of threads to the plurality of cores defined by the core map, based on a thread allocation policy defining a mapping rule between the thread map and the core map.
    Type: Application
    Filed: March 17, 2017
    Publication date: September 21, 2017
    Inventors: Kang Ho KIM, Kwang Won KOH, Jin Mee KIM, Jeong Hwan LEE, Seung Hyub JEON, Sung In JUNG, Yeon Jeong JEONG, Seung Jun CHA
  • Publication number: 20150195128
    Abstract: Disclosed herein is an apparatus for supporting automation of configuration management of a virtual machine applicable to a multi-cloud environment. In accordance with an embodiment, the apparatus includes an interface unit for receiving configuration management information or information of a virtual machine to which configuration management is to be applied. A configuration management verification unit verifies the received configuration management information, and combines the virtual machine information with configuration management information corresponding to the virtual machine. A configuration management distribution unit distributes the configuration management information combined with the virtual machine information to a cloud in which the virtual machine is created.
    Type: Application
    Filed: January 7, 2015
    Publication date: July 9, 2015
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Nam-Woo KIM, Sung-In JUNG, Jin-Mee KIM, Dong-Jae KANG
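The configuration-management patent above verifies configuration information, combines it with virtual machine information, and distributes the result to the cloud in which the VM was created. A small hypothetical sketch of that pipeline follows; the required fields, record layout, and distributor callables are illustrative only.

```python
# Hypothetical sketch of the flow: verify the configuration management information,
# combine it with the target VM's information, and hand the combined record to the
# cloud where that VM was created.
REQUIRED_KEYS = {"package", "version"}

def verify(config):
    missing = REQUIRED_KEYS - config.keys()
    if missing:
        raise ValueError(f"configuration is missing fields: {sorted(missing)}")
    return config

def combine_and_distribute(config, vm_info, distributors):
    record = {**verify(config), "vm_id": vm_info["vm_id"], "cloud": vm_info["cloud"]}
    distributors[vm_info["cloud"]](record)   # push to the cloud hosting this VM
    return record

if __name__ == "__main__":
    clouds = {"cloud-a": lambda rec: print("distributing to cloud-a:", rec)}
    combine_and_distribute({"package": "nginx", "version": "1.24"},
                           {"vm_id": "vm-42", "cloud": "cloud-a"}, clouds)
```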
  • Publication number: 20150134485
    Abstract: Disclosed herein is a cloud service broker apparatus and method for providing a cloud service using the broker apparatus, which provide an optimal cloud service for a cloud service user through negotiations between the cloud service user and a cloud service provider. The cloud service broker apparatus includes an operation information management unit for examining a demand entered by a cloud service user. A broker intermediary unit detects a cloud service based on the demand for which verification has been completed by the operation information management unit, and sends a request for positioning of the cloud service for the cloud service user to a cloud service provider that provides the detected cloud service. A life cycle management unit monitors a cloud service positioned in and used by the cloud service user at a request of the broker intermediary unit.
    Type: Application
    Filed: November 4, 2014
    Publication date: May 14, 2015
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jin-Mee KIM, Sung-In JUNG, Dong-Jae KANG, Nam-Woo KIM
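The broker above matches a user's demand against provider offerings before requesting positioning. The sketch below shows only the matching step, under one plausible reading: filter offerings that satisfy the demand and pick the cheapest. Field names and the price-based selection are assumptions, not the patented negotiation logic.

```python
# Hypothetical sketch of broker matching: validate the user's demand fields, then pick
# the cheapest provider offering that satisfies it.
def broker_match(demand, offerings):
    """demand: {'cpus': n, 'memory_gb': m, 'max_price': p}; offerings: list of dicts."""
    candidates = [o for o in offerings
                  if o["cpus"] >= demand["cpus"]
                  and o["memory_gb"] >= demand["memory_gb"]
                  and o["price"] <= demand["max_price"]]
    if not candidates:
        return None  # no provider can position this service
    return min(candidates, key=lambda o: o["price"])

if __name__ == "__main__":
    offers = [
        {"provider": "P1", "cpus": 4, "memory_gb": 16, "price": 0.20},
        {"provider": "P2", "cpus": 8, "memory_gb": 32, "price": 0.35},
    ]
    print(broker_match({"cpus": 4, "memory_gb": 16, "max_price": 0.30}, offers))  # P1
```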
  • Patent number: 8943294
    Abstract: Disclosed is a software architecture supporting a large-capacity collective memory layer in a multi-node system by using a remote direct memory access technique and a software virtualization technique and a computing system performing computing processing by using the architecture. In particular, provided is a software architecture including: a memory region managing module collectively managing a predetermined memory region of a node, a memory service providing module providing a large-capacity collective memory service to a virtual address space in a user process, and a memory sharing support module supporting sharing of the large-capacity collective memory of the multi-node system.
    Type: Grant
    Filed: December 8, 2011
    Date of Patent: January 27, 2015
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Gyu Il Cha, Young Ho Kim, Eun Ji Lim, Dong Jae Kang, Sung In Jung
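The collective-memory patent above (also published as 20120159115 below) pools per-node memory regions into one large layer. The Python sketch below models only the bookkeeping side of that idea: a manager tracks how much each node contributes and which node backs each allocation. In the real architecture this is backed by RDMA and software virtualization, not Python dictionaries.

```python
# Hypothetical sketch of collective-memory bookkeeping: each node contributes a memory
# region, and a manager exposes them as one pool while remembering which node backs
# each allocation.
class CollectiveMemory:
    def __init__(self, node_sizes):
        # node_sizes: {node_name: bytes contributed to the collective layer}
        self.free = dict(node_sizes)
        self.allocations = {}
        self.next_handle = 0

    def alloc(self, size):
        for node, avail in self.free.items():
            if avail >= size:
                self.free[node] -= size
                handle = self.next_handle
                self.next_handle += 1
                self.allocations[handle] = (node, size)
                return handle
        raise MemoryError("collective memory exhausted")

    def where(self, handle):
        return self.allocations[handle][0]   # which node actually backs this chunk

if __name__ == "__main__":
    cvm = CollectiveMemory({"node0": 1 << 30, "node1": 1 << 30})
    h = cvm.alloc(1 << 29)
    print(h, cvm.where(h))  # 0 node0
```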
  • Patent number: 8850158
    Abstract: Disclosed is an apparatus for processing a remote page fault included in an optional local node within a cluster system configuring a large integration memory (CVM) by integrating individual memories of a plurality of nodes. The apparatus includes a memory including a CVM-map, a node memory information table, a virtual memory area, and a CVM page table, and a main controller mapping the large integration memory to an address space of a process when a user process requests memory allocation.
    Type: Grant
    Filed: December 15, 2011
    Date of Patent: September 30, 2014
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Eun Ji Lim, Gyu Il Cha, Young Ho Kim, Dong Jae Kang, Sung In Jung
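The remote-page-fault patent above (also published as 20120159116 below) maps the integrated memory into a process address space and resolves faults by fetching pages from the owning node. The sketch below is a toy model of that fault path; the CVM-map lookup, the fetch call, and the "page table" dictionary are stand-ins, not the patented data structures.

```python
# Hypothetical sketch of remote page fault handling: on first access the page is not
# mapped locally, so the handler "fetches" it from the node recorded in the CVM map
# and installs it in the local page table.
class RemotePager:
    def __init__(self, cvm_map):
        self.cvm_map = cvm_map        # virtual page -> owning node
        self.local_pages = {}         # virtual page -> page contents (the "mapping")
        self.faults = 0

    def fetch_from_node(self, node, vpage):
        return f"contents of page {vpage} fetched from {node}"  # stands in for RDMA

    def access(self, vpage):
        if vpage not in self.local_pages:            # page fault
            self.faults += 1
            node = self.cvm_map[vpage]
            self.local_pages[vpage] = self.fetch_from_node(node, vpage)
        return self.local_pages[vpage]

if __name__ == "__main__":
    pager = RemotePager({0x1000: "node3"})
    pager.access(0x1000)   # faults, fetches from node3
    pager.access(0x1000)   # already mapped, no fault
    print(pager.faults)    # 1
```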
  • Patent number: 8418143
    Abstract: Provided are a software reliability test method using selective fault activation, a test area restriction method, a workload generation method and a computing apparatus for testing software reliability using the same. The software reliability test method registers a test target module. The software reliability test method injects a fault into a fault injection target function when a caller of the fault injection target function is included in the registered module, in a case of calling the fault injection target function.
    Type: Grant
    Filed: October 2, 2009
    Date of Patent: April 9, 2013
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Gyu Il Cha, Young Ho Kim, Sung In Jung
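Selective fault activation, as described above, injects a fault only when the caller of the target function belongs to a registered test-target module. The Python sketch below imitates that gate with a wrapper that inspects the caller's module; the module registry and the injected exception are hypothetical choices for illustration.

```python
# Hypothetical sketch of selective fault activation: a wrapper injects a fault into the
# target function only when its caller comes from a registered test-target module.
import inspect

REGISTERED_MODULES = {"__main__"}   # modules whose calls should trigger injection

def inject_fault(func, error=IOError("injected fault")):
    def wrapper(*args, **kwargs):
        caller = inspect.stack()[1]
        module = inspect.getmodule(caller.frame)
        name = module.__name__ if module else ""
        if name in REGISTERED_MODULES:
            raise error                      # fault activated: caller is a test target
        return func(*args, **kwargs)         # fault suppressed for everyone else
    return wrapper

@inject_fault
def read_config(path):
    return f"read {path}"

if __name__ == "__main__":
    try:
        read_config("/etc/app.conf")
    except IOError as e:
        print("caught:", e)   # caught: injected fault
```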
  • Patent number: 8244919
    Abstract: A data transfer apparatus, system and method using the same are provided. A data transfer system according to an exemplary embodiment includes a user process space, a kernel space and a hardware space. A plurality of user processes are executed in the user process space. The kernel space includes a kernel thread. The hardware space performs an input/output according to the input/output request of each user process. When input data based on the input request are received in the hardware space, the data transfer system checks whether the user process requesting the input is in an execution state, and allows the kernel thread to copy the input data from the kernel space to the user process space when the user process is in the execution state.
    Type: Grant
    Filed: July 24, 2009
    Date of Patent: August 14, 2012
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Kang Ho Kim, Eun Ji Lim, Soo Young Kim, Sung In Jung
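The data-transfer patent above copies arriving input into the requesting process only when that process is in an execution state. The sketch below reduces that to a single decision function: copy immediately if the process is runnable, otherwise defer the data. Process states and the pending queue are illustrative simplifications, not the patented mechanism.

```python
# Hypothetical sketch of the delivery decision: when input data arrives, copy it into
# the requesting process's buffer immediately only if that process is currently
# runnable; otherwise leave it queued until the process runs again.
RUNNING, SLEEPING = "running", "sleeping"

def deliver_input(proc, data, pending):
    """proc: {'state': ..., 'buffer': list}; pending: queue of undelivered data."""
    if proc["state"] == RUNNING:
        proc["buffer"].append(data)          # "kernel thread" copies straight to user space
        return "copied"
    pending.append(data)                     # defer until the process is scheduled again
    return "deferred"

if __name__ == "__main__":
    proc = {"state": SLEEPING, "buffer": []}
    pending = []
    print(deliver_input(proc, b"packet-1", pending))  # deferred
    proc["state"] = RUNNING
    print(deliver_input(proc, b"packet-2", pending))  # copied
    print(proc["buffer"], pending)
```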
  • Publication number: 20120159116
    Abstract: Disclosed is an apparatus for processing a remote page fault included in an optional local node within a cluster system configuring a large integration memory (CVM) by integrating individual memories of a plurality of nodes. The apparatus includes a memory including a CVM-map, a node memory information table, a virtual memory area, and a CVM page table, and a main controller mapping the large integration memory to an address space of a process when a user process requests memory allocation.
    Type: Application
    Filed: December 15, 2011
    Publication date: June 21, 2012
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Eun Ji LIM, Gyu Il Cha, Young Ho Kim, Dong Jae Kang, Sung In Jung
  • Publication number: 20120159115
    Abstract: Disclosed is a software architecture supporting a large-capacity collective memory layer in a multi-node system by using a remote direct memory access technique and a software virtualization technique and a computing system performing computing processing by using the architecture. In particular, provided is a software architecture including: a memory region managing module collectively managing a predetermined memory region of a node, a memory service providing module providing a large-capacity collective memory service to a virtual address space in a user process, and a memory sharing support module supporting sharing of the large-capacity collective memory of the multi-node system.
    Type: Application
    Filed: December 8, 2011
    Publication date: June 21, 2012
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Gyu Il CHA, Young Ho KIM, Eun Ji LIM, Dong Jae KANG, Sung In JUNG
  • Publication number: 20120151175
    Abstract: Disclosed are a memory apparatus for a collective volume memory and a method for managing metadata thereof. The memory apparatus for a collective volume memory includes a CVM (Collective Volume Memory) command tool configured to provide a command tool for CVM operation and translate a command input by a user to control the CVM operation; and a CVM engine configured to perform at least one of CVM configuration and initialization, and CVM allocation and access according to data transmitted from the CVM command tool.
    Type: Application
    Filed: December 2, 2011
    Publication date: June 14, 2012
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Young Ho KIM, Eun Ji Lim, Gyu Il Cha, Dong Jae Kang, Sung In Jung