Patents by Inventor Jin Mee Kim

Jin Mee Kim has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210382752
Abstract: Disclosed herein are an apparatus and method for accelerating file I/O offload for a unikernel. The method, performed by the apparatus and server for accelerating file I/O offload for the unikernel, includes: executing, by the apparatus, an application in the unikernel and calling, by the thread of the application, a file I/O function; generating, by the unikernel, a file I/O offload request using the file I/O function; transmitting, by the unikernel, the file I/O offload request to Linux of the server; receiving, by Linux, the file I/O offload request from the thread of the unikernel and processing, by Linux, the file I/O offload request; transmitting, by Linux, a file I/O offload result for the file I/O offload request to the unikernel; and delivering the file I/O offload result to the thread of the application.
    Type: Application
    Filed: June 7, 2021
    Publication date: December 9, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Yeon-Jeong JEONG, Jin-Mee KIM, Young-Joo WOO, Yong-Seob LEE, Seung-Hyub JEON, Sung-In JUNG, Seung-Jun CHA
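The offload flow in the abstract above can be modeled in miniature: a unikernel-side call is packaged as a request, serviced on the Linux side, and the result is delivered back to the calling thread. This is an illustrative sketch only; all names (`IOOffloadRequest`, `linux_side_worker`, `unikernel_file_io`) are assumptions, not identifiers from the patent.

```python
from dataclasses import dataclass
from queue import Queue
from threading import Thread

# Hypothetical message carrying one offloaded file I/O call.
@dataclass
class IOOffloadRequest:
    op: str          # e.g. "read" or "write"
    path: str
    payload: bytes
    reply: Queue     # per-request channel back to the calling thread

def linux_side_worker(channel: Queue, files: dict):
    """Models the Linux side of the server: service offloaded
    file I/O requests and send each result back to the unikernel."""
    while True:
        req = channel.get()
        if req is None:          # shutdown sentinel
            break
        if req.op == "write":
            files[req.path] = files.get(req.path, b"") + req.payload
            req.reply.put(len(req.payload))
        elif req.op == "read":
            req.reply.put(files.get(req.path, b""))

def unikernel_file_io(channel: Queue, op: str, path: str, payload: bytes = b""):
    """Models the unikernel side: package the file I/O call as an
    offload request, forward it, and block until the result arrives."""
    reply = Queue()
    channel.put(IOOffloadRequest(op, path, payload, reply))
    return reply.get()           # result delivered to the calling thread

channel, files = Queue(), {}
worker = Thread(target=linux_side_worker, args=(channel, files))
worker.start()
unikernel_file_io(channel, "write", "/tmp/log", b"hello")
data = unikernel_file_io(channel, "read", "/tmp/log")
channel.put(None)
worker.join()
```

The per-request reply queue stands in for whatever completion mechanism the real system uses to wake the blocked application thread.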
  • Patent number: 11042394
    Abstract: Disclosed is an apparatus and method of processing input and output in a multi-kernel system. A method of processing input and output in a multi-kernel system according to the present disclosure includes: setting a shared memory between a first kernel on a main processor and a lightweight kernel on a parallel processor; setting a data transmission and reception channel between the first kernel on the main processor and the lightweight kernel on the parallel processor using the shared memory; providing, on the basis of the data transmission and reception channel, an input/output task that occurs in the lightweight kernel to the first kernel on the main processor; processing, by the first kernel on the main processor, an operation corresponding to the input/output task; and providing a result of the processing to the lightweight kernel.
    Type: Grant
    Filed: October 12, 2018
    Date of Patent: June 22, 2021
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Seung Jun Cha, Jin Mee Kim, Seung Hyub Jeon, Sung In Jung, Yeon Jeong Jeong
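The shared-memory channel described above can be sketched as a single-process toy: the lightweight kernel posts an I/O task descriptor into a shared region, and the first kernel polls it, performs the operation, and posts the result back. All names here (`SHARED`, `lightweight_kernel_request`, `first_kernel_poll`) are assumptions for illustration.

```python
# Stand-in for the shared memory set up between the two kernels.
SHARED = {"task": None, "result": None}

def lightweight_kernel_request(task):
    """Lightweight kernel side: post an I/O task over the channel."""
    SHARED["task"] = task

def first_kernel_poll(backing_store):
    """First-kernel side: pick up a pending task, process the
    corresponding operation, and provide the result back."""
    task = SHARED["task"]
    if task is None:
        return
    op, key, value = task
    if op == "write":
        backing_store[key] = value
        SHARED["result"] = ("ok", len(value))
    elif op == "read":
        SHARED["result"] = ("ok", backing_store.get(key))
    SHARED["task"] = None        # mark the channel slot free again

store = {}
lightweight_kernel_request(("write", "buf0", b"data"))
first_kernel_poll(store)
status, n = SHARED["result"]
```

A real implementation would use an actual shared-memory ring with synchronization; the dictionary merely shows the request/response handshake.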
  • Publication number: 20210182103
    Abstract: A method and an apparatus for processing to support scalability in a many-core environment are provided. The processing apparatus includes: a counter unit configured to include a global reference counter, at least one category reference counter configured to access the global reference counter, and at least one local reference counter configured to access the category reference counter; and a processor connected to the counter unit and configured to increase or decrease each reference counter. The at least one category reference counter has a hierarchical structure including at least one layer.
    Type: Application
    Filed: November 24, 2020
    Publication date: June 17, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Seung Hyub JEON, Jin Mee KIM, Sung-In JUNG, Yeon Jeong JEONG, Seung-Jun CHA
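One way to read the hierarchical counter structure above: per-core local counters absorb most updates and only occasionally propagate a batch through a category counter to the global counter, reducing contention on the shared value. The class names and the batching threshold below are assumptions, not details from the patent.

```python
class GlobalCounter:
    def __init__(self):
        self.value = 0

class CategoryCounter:
    """Middle layer: receives batched deltas from local counters
    and forwards them to the global counter it is attached to."""
    def __init__(self, parent: GlobalCounter):
        self.parent = parent
        self.value = 0

    def add(self, delta):
        self.value += delta
        self.parent.value += delta

class LocalCounter:
    """Per-core counter: increments stay local until they reach a
    batch threshold, so most updates never touch the shared layers."""
    def __init__(self, parent: CategoryCounter, batch=8):
        self.parent, self.batch, self.pending = parent, batch, 0

    def incr(self):
        self.pending += 1
        if self.pending >= self.batch:
            self.parent.add(self.pending)
            self.pending = 0

    def flush(self):
        if self.pending:
            self.parent.add(self.pending)
            self.pending = 0

g = GlobalCounter()
cat = CategoryCounter(g)
local_counters = [LocalCounter(cat) for _ in range(4)]
for lc in local_counters:
    for _ in range(10):
        lc.incr()
for lc in local_counters:
    lc.flush()
```

After four local counters each record ten increments, the batched totals agree with the global value, while the global counter was touched far fewer than forty times.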
  • Patent number: 10673935
    Abstract: A cloud service broker apparatus and method thereof are provided. The cloud service broker apparatus includes a controller configured to provide a brokerage service between a plurality of cloud service providers and a cloud service user by dividing a cloud service requested by the cloud service user into a plurality of cloud service segments and distributing each of the cloud service segments to each of the clouds.
    Type: Grant
    Filed: December 18, 2015
    Date of Patent: June 2, 2020
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Seok Ho Son, Dong Jae Kang, Jin Mee Kim, Hyun Joo Bae, Ji Hyun Lee
  • Publication number: 20200151118
Abstract: Disclosed herein is an operating method of an apparatus for offloading file I/O of a unikernel based on Remote Direct Memory Access (RDMA). The operating method may include calling, by the application of the unikernel, a file I/O kernel function; generating, by the file I/O kernel function, file I/O information; transmitting, by the unikernel, the file I/O information to Linux; configuring, by Linux, a file I/O function using the file I/O information and calling the same; and transmitting, by the file I/O function, a file I/O request, corresponding to the file I/O information, to a file server.
    Type: Application
    Filed: September 25, 2019
    Publication date: May 14, 2020
    Inventors: Yeon-Jeong JEONG, Jin-Mee KIM, Ramneek, Seung-Hyub JEON, Sung-In JUNG, Seung-Jun CHA
  • Patent number: 10296379
    Abstract: Scheduling threads in a system with many cores includes generating a thread map where a connection relationship between a plurality of threads is represented by a frequency of inter-process communication (IPC) between threads, generating a core map where a connection relationship between a plurality of cores is represented by a hop between cores, and respectively allocating the plurality of threads to the plurality of cores defined by the core map, based on a thread allocation policy defining a mapping rule between the thread map and the core map.
    Type: Grant
    Filed: March 17, 2017
    Date of Patent: May 21, 2019
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Kang Ho Kim, Kwang Won Koh, Jin Mee Kim, Jeong Hwan Lee, Seung Hyub Jeon, Sung In Jung, Yeon Jeong Jeong, Seung Jun Cha
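The thread-map/core-map scheduling above lends itself to a small greedy sketch: place the thread pair with the highest IPC frequency on the closest pair of free cores, then fill in the rest. The input maps and the greedy policy below are illustrative assumptions; the patent's thread allocation policy may differ.

```python
from itertools import combinations

# Hypothetical thread map (IPC frequency per thread pair) and
# core map (hop distance per core pair).
thread_map = {("t0", "t1"): 90, ("t0", "t2"): 10, ("t1", "t2"): 5}
core_map = {("c0", "c1"): 1, ("c0", "c2"): 2, ("c1", "c2"): 2}

def hops(core_map, a, b):
    return core_map.get((a, b), core_map.get((b, a), 0))

def allocate(thread_map, core_map, cores):
    """Greedy sketch of a thread allocation policy: visit thread
    pairs from highest to lowest IPC frequency and keep chatty
    pairs on cores that are few hops apart."""
    placement = {}
    for ta, tb in sorted(thread_map, key=thread_map.get, reverse=True):
        free = [c for c in cores if c not in placement.values()]
        if ta not in placement and tb not in placement and len(free) >= 2:
            # Closest pair of free cores for this chatty thread pair.
            ca, cb = min(combinations(free, 2),
                         key=lambda p: hops(core_map, *p))
            placement[ta], placement[tb] = ca, cb
        elif ta in placement and tb not in placement and free:
            placement[tb] = min(free,
                                key=lambda c: hops(core_map, placement[ta], c))
        elif tb in placement and ta not in placement and free:
            placement[ta] = min(free,
                                key=lambda c: hops(core_map, placement[tb], c))
    return placement

placement = allocate(thread_map, core_map, ["c0", "c1", "c2"])
```

Here the high-IPC pair (t0, t1) lands on the one-hop pair (c0, c1), while t2 takes the remaining core.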
  • Publication number: 20190114193
    Abstract: Disclosed is an apparatus and method of processing input and output in a multi-kernel system. A method of processing input and output in a multi-kernel system according to the present disclosure includes: setting a shared memory between a first kernel on a main processor and a lightweight kernel on a parallel processor; setting a data transmission and reception channel between the first kernel on the main processor and the lightweight kernel on the parallel processor using the shared memory; providing, on the basis of the data transmission and reception channel, an input/output task that occurs in the lightweight kernel to the first kernel on the main processor; processing, by the first kernel on the main processor, an operation corresponding to the input/output task; and providing a result of the processing to the lightweight kernel.
    Type: Application
    Filed: October 12, 2018
    Publication date: April 18, 2019
    Inventors: Seung Jun CHA, Jin Mee KIM, Seung Hyub JEON, Sung In JUNG, Yeon Jeong JEONG
  • Publication number: 20170329642
Abstract: Provided is a many-core system including a resource unit including a resource needed for execution of an operating system and a resource needed for execution of a lightweight kernel, a program constructing unit configured to convert an input program into an application program and to load the application program into the resource unit, a run-time management unit configured to manage a running environment for executing the application program, and a self-organization management unit configured to monitor the application program and the resources in the resource unit, to dynamically adjust the running environment to prevent a risk factor from occurring during the execution of the application program, and to cure a risk factor that has occurred.
    Type: Application
    Filed: August 11, 2016
    Publication date: November 16, 2017
    Inventors: Jin Mee KIM, Kwang Won KOH, Kang Ho KIM, Jeong Hwan LEE, Seung Hyub JEON, Sung In JUNG, Yeon Jeong JEONG, Seung Jun CHA
  • Publication number: 20170269966
Abstract: Provided is a method of scheduling threads in a many-core system. The method includes generating a thread map where a connection relationship between a plurality of threads is represented by a frequency of inter-process communication (IPC) between threads, generating a core map where a connection relationship between a plurality of cores is represented by a hop between cores, and respectively allocating the plurality of threads to the plurality of cores defined by the core map, based on a thread allocation policy defining a mapping rule between the thread map and the core map.
    Type: Application
    Filed: March 17, 2017
    Publication date: September 21, 2017
    Inventors: Kang Ho KIM, Kwang Won KOH, Jin Mee KIM, Jeong Hwan LEE, Seung Hyub JEON, Sung In JUNG, Yeon Jeong JEONG, Seung Jun CHA
  • Publication number: 20170041384
    Abstract: A cloud service broker apparatus and method thereof are provided. The cloud service broker apparatus includes a controller configured to provide a brokerage service between a plurality of cloud service providers and a cloud service user by dividing a cloud service requested by the cloud service user into a plurality of cloud service segments and distributing each of the cloud service segments to each of the clouds.
    Type: Application
    Filed: December 18, 2015
    Publication date: February 9, 2017
    Inventors: Seok Ho SON, Dong Jae KANG, Jin Mee KIM, Hyun Joo BAE, Ji Hyun LEE
  • Publication number: 20160364792
    Abstract: Disclosed herein are a cloud service brokerage method and apparatus using a service image store. The cloud service brokerage apparatus includes a reception unit for receiving service requirements from each of users, a service image recommendation unit for recommending one or more candidate service images that satisfy the requirements among multiple service images stored in a service image store, a cloud server recommendation unit for recommending one or more candidate cloud servers that satisfy the requirements among multiple cloud servers, a registration unit for registering an optimal service image selected by the user from among the one or more candidate service images in an optimal cloud server selected by the user from among the one or more candidate cloud servers, and a transmission unit for transmitting results of a service implemented by executing the optimal service image on the optimal cloud server to the user.
    Type: Application
    Filed: January 29, 2016
    Publication date: December 15, 2016
    Inventors: Dong-Jae KANG, Won-Young KIM, Jin-Mee KIM, Hyun-Joo BAE, Seok-Ho SON, Ji-Hyun LEE
  • Publication number: 20150195128
    Abstract: Disclosed herein is an apparatus for supporting automation of configuration management of a virtual machine applicable to a multi-cloud environment. In accordance with an embodiment, the apparatus includes an interface unit for receiving configuration management information or information of a virtual machine to which configuration management is to be applied. A configuration management verification unit verifies the received configuration management information, and combines the virtual machine information with configuration management information corresponding to the virtual machine. A configuration management distribution unit distributes the configuration management information combined with the virtual machine information to a cloud in which the virtual machine is created.
    Type: Application
    Filed: January 7, 2015
    Publication date: July 9, 2015
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Nam-Woo KIM, Sung-In JUNG, Jin-Mee KIM, Dong-Jae KANG
  • Publication number: 20150134485
    Abstract: Disclosed herein is a cloud service broker apparatus and method for providing a cloud service using the broker apparatus, which provide an optimal cloud service for a cloud service user through negotiations between the cloud service user and a cloud service provider. The cloud service broker apparatus includes an operation information management unit for examining a demand entered by a cloud service user. A broker intermediary unit detects a cloud service based on the demand for which verification has been completed by the operation information management unit, and sends a request for positioning of the cloud service for the cloud service user to a cloud service provider that provides the detected cloud service. A life cycle management unit monitors a cloud service positioned in and used by the cloud service user at a request of the broker intermediary unit.
    Type: Application
    Filed: November 4, 2014
    Publication date: May 14, 2015
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jin-Mee KIM, Sung-In JUNG, Dong-Jae KANG, Nam-Woo KIM
  • Patent number: 8949847
    Abstract: Disclosed herein are a resource manager node and a resource management method. The resource manager node includes a resource management unit, a resource policy management unit, a shared resource capability management unit, a shared resource status monitoring unit, and a shared resource allocation unit. The resource management unit performs an operation necessary for resource allocation when a resource allocation request is received. The resource policy management unit determines a resource allocation policy based on the characteristic of the task, and generates resource allocation information. The shared resource capability management unit manages the topology of nodes, information about the capabilities of resources, and resource association information. The shared resource status monitoring unit monitors and manages information about the status of each node and the use of allocated resources. The shared resource allocation unit sends a resource allocation request to at least one of the plurality of nodes.
    Type: Grant
    Filed: August 13, 2012
    Date of Patent: February 3, 2015
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Young-Ho Kim, Gyu-Il Cha, Shin-Young Ahn, Eun-Ji Lim, Jin-Mee Kim, Seung-Jo Bae
  • Patent number: 8799895
Abstract: A computing system for virtualization-based resource management includes a plurality of physical machines, a plurality of virtual machines and a management virtual machine. The virtual machines are configured by virtualizing each of the plurality of physical machines. The management virtual machine is located at any one of the plurality of physical machines. The management virtual machine monitors amounts of network resources utilized by the plurality of physical machines and time costs of the plurality of virtual machines, and performs a resource reallocation and a resource reclamation.
    Type: Grant
    Filed: August 18, 2009
    Date of Patent: August 5, 2014
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Kwang Won Koh, Jin Mee Kim, Young Woo Jung, Young Choon Woo
  • Publication number: 20130198755
    Abstract: Disclosed herein are a resource manager node and a resource management method. The resource manager node includes a resource management unit, a resource policy management unit, a shared resource capability management unit, a shared resource status monitoring unit, and a shared resource allocation unit. The resource management unit performs an operation necessary for resource allocation when a resource allocation request is received. The resource policy management unit determines a resource allocation policy based on the characteristic of the task, and generates resource allocation information. The shared resource capability management unit manages the topology of nodes, information about the capabilities of resources, and resource association information. The shared resource status monitoring unit monitors and manages information about the status of each node and the use of allocated resources. The shared resource allocation unit sends a resource allocation request to at least one of the plurality of nodes.
    Type: Application
    Filed: August 13, 2012
    Publication date: August 1, 2013
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Young-Ho KIM, Gyu-Il CHA, Shin-Young AHN, Eun-Ji LIM, Jin-Mee KIM, Seung-Jo BAE
  • Publication number: 20120166926
Abstract: Provided is a hyperlink display method based on a visit history accumulation, which is capable of providing search convenience between web documents by providing a self-updating hyperlink through accumulation of a visit history. The hyperlink display method includes displaying a plurality of hyperlinks included in a web document when the web document is selected, recording a weight value of a selected hyperlink when any one of the plurality of hyperlinks is selected, and displaying the weight value of the selected hyperlink on the web document.
    Type: Application
    Filed: October 28, 2011
    Publication date: June 28, 2012
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Ji Yong KIM, Hyeon Jin Kim, Jin Mee Kim, Dong Hwan Son, Sung Ho Im, Shin Young Ahn, Kyoung Park, Seung Jo Bae, Wan Choi
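The weight-accumulation step in the abstract above is easy to model: each selection of a hyperlink bumps a per-(document, link) weight, which is then shown next to the link. The class and method names here are assumptions for illustration only.

```python
from collections import Counter

class VisitHistory:
    """Sketch of a visit-history accumulator: record a weight each
    time a hyperlink on a web document is selected, so the document
    can display how often each link has been followed."""
    def __init__(self):
        self.weights = Counter()

    def record_click(self, document, link):
        self.weights[(document, link)] += 1

    def display_weights(self, document, links):
        # Weight value to show next to each hyperlink on the document.
        return {link: self.weights[(document, link)] for link in links}

history = VisitHistory()
for _ in range(3):
    history.record_click("doc.html", "news.html")
history.record_click("doc.html", "about.html")
shown = history.display_weights("doc.html", ["news.html", "about.html"])
```

A browser extension or page renderer could use the returned map to annotate each anchor with its accumulated weight.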
  • Publication number: 20110247050
    Abstract: Provided is a method of pairing terminals with each other, and a terminal for the method. The method includes sensing a physical motion of a terminal caused by a user and outputting a sensing value, comparing a reception value received from an external terminal with the sensing value, and establishing a communication path with the external terminal according to the comparison result.
    Type: Application
    Filed: March 31, 2011
    Publication date: October 6, 2011
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Donghwan Son, Jin-Mee Kim, Shin-Young Ahn, Sung-Won Yi, Jong-Sung Kim, Ji-Yong Kim, Hyeon-Jin Kim, Sung-Ho Im, Seung-Jo Bae, Kyoung Park, Wan Choi
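The comparison step above (sensing value vs. value received from the external terminal) reduces to a tolerance check before establishing the communication path. The tolerance parameter and function name below are assumptions; the patent does not specify the comparison rule.

```python
def should_pair(sensed: float, received: float, tolerance: float = 0.1) -> bool:
    """Establish a communication path only when the locally sensed
    motion value matches the value received from the external
    terminal to within an assumed tolerance."""
    return abs(sensed - received) <= tolerance

# Two terminals shaken together produce similar sensing values.
paired = should_pair(0.92, 0.95)
# Unrelated motion readings fail the comparison.
not_paired = should_pair(0.92, 0.40)
```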
  • Patent number: 8032780
    Abstract: Provided are a virtualization based high availability cluster system and a method for managing failures in a virtualization based high availability cluster system. The high availability cluster system includes a plurality of virtual nodes, and a plurality of physical nodes each including a message generator for generating a message denoting that the virtual nodes are in a normal state and transmitting the generated message to virtual nodes in a same physical node. One of the virtual nodes not included in a first physical node among the plurality of the physical nodes takes over resources related to a service if a failure is generated in one of virtual nodes included in the first physical node.
    Type: Grant
    Filed: November 26, 2007
    Date of Patent: October 4, 2011
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Kwang-Won Koh, Seungjo Bae, Jin Mee Kim, Young-Woo Jung, Young Choon Woo, Myung-Joon Kim
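The failover rule in the abstract above can be sketched as: when a virtual node in one physical node fails, a healthy virtual node in a different physical node takes over its resources. The data layout and function name below are illustrative assumptions, not the patent's design.

```python
# Toy cluster state: physical nodes mapping virtual nodes to a
# heartbeat-derived status.
cluster = {
    "phys0": {"vm0": "normal", "vm1": "normal"},
    "phys1": {"vm2": "normal"},
}

def take_over(cluster, failed_phys, failed_vm):
    """Mark the failed virtual node and pick a healthy virtual node
    in a *different* physical node to take over its resources."""
    cluster[failed_phys][failed_vm] = "failed"
    for phys, vms in cluster.items():
        if phys == failed_phys:
            continue                     # must not be in the same physical node
        for vm, state in vms.items():
            if state == "normal":
                return vm
    return None                          # no healthy candidate elsewhere

successor = take_over(cluster, "phys0", "vm0")
```

Restricting the successor to another physical node is what makes the scheme survive whole-machine failures, not just virtual-node crashes.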
  • Patent number: 7873609
    Abstract: Provided is a contents distribution management system and method for supporting a plurality of global servers that provide the contents and managing the contents by applying different policies based on the global server and service type of the contents. The inventive system comprises a plurality of global servers for supplying contents to a plurality of local servers, each global server having a large capacity contents library, and the local servers for managing the contents provided from the global servers based on global servers and service types using contents tables, and providing a contents service in response to a contents streaming service demand from a last terminal using a local contents cache.
    Type: Grant
    Filed: December 20, 2005
    Date of Patent: January 18, 2011
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Chang-Soo Kim, Seung-Jo Bae, Jin-Mee Kim, Yu-Hyeon Bak, Sang-Min Woo, Seung-Hyub Jeon, Won-Jae Lee, Hag-Young Kim