Patents by Inventor Su Min Jang

Su Min Jang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240143448
    Abstract: Disclosed herein are a distributed cloud system, a data processing method of the distributed cloud system, and a storage medium. The data processing method of the distributed cloud system includes running an application of an edge computing system requested by a user device, generating a snapshot image of the application, and storing the generated snapshot image and transmitting the stored image during migration.
    Type: Application
    Filed: October 25, 2023
    Publication date: May 2, 2024
    Inventors: Dae-Won KIM, Sun-Wook KIM, Su-Min JANG, Jae-Geun CHA, Hyun-Hwa CHOI
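    The snapshot-and-migrate flow in publication 20240143448 can be pictured with a minimal sketch. All names here (EdgeApplication, the file name, the destination host) are hypothetical, and pickle stands in for whatever snapshot format the system actually uses.
    ```python
    import pickle
    import socket

    class EdgeApplication:
        """Stand-in for an application running on an edge computing system."""
        def __init__(self, name):
            self.name = name
            self.state = {"counter": 0}

        def run_step(self):
            self.state["counter"] += 1

    def generate_snapshot(app: EdgeApplication) -> bytes:
        # Serialize the application's in-memory state into a snapshot image.
        return pickle.dumps({"name": app.name, "state": app.state})

    def store_snapshot(image: bytes, path: str) -> None:
        with open(path, "wb") as f:
            f.write(image)

    def transmit_snapshot(path: str, host: str, port: int) -> None:
        # On migration, stream the stored snapshot image to the target node.
        with open(path, "rb") as f, socket.create_connection((host, port)) as sock:
            sock.sendall(f.read())

    if __name__ == "__main__":
        app = EdgeApplication("sensor-analytics")
        app.run_step()
        store_snapshot(generate_snapshot(app), "sensor-analytics.snap")
    ```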
  • Publication number: 20240121956
    Abstract: A semiconductor device may include: first insulating pillars arranged in a first direction; second insulating pillars arranged alternately with the first insulating pillars and having a first width in the first direction and a second width in a second direction intersecting the first direction, the first width being greater than the second width; first memory cells located between the second insulating pillars and stacked along a first sidewall of each of the first insulating pillars; and second memory cells located between the second insulating pillars and stacked along a second sidewall of each of the first insulating pillars.
    Type: Application
    Filed: March 31, 2023
    Publication date: April 11, 2024
    Inventors: Rho Gyu KWAK, In Su PARK, Jung Shik JANG, Seok Min CHOI, Won Geun CHOI
  • Publication number: 20240109858
    Abstract: The present invention relates to a compound capable of lowering the flammability of a non-aqueous electrolyte when included in the non-aqueous electrolyte and of improving the life properties of a battery by forming an electrode-electrolyte interface which is stable at high temperatures and low in resistance, and relates to a compound represented by Formula I described herein, and to a non-aqueous electrolyte solution and a lithium secondary battery both including the compound; n, m, Ak, and X are described herein.
    Type: Application
    Filed: March 23, 2022
    Publication date: April 4, 2024
    Applicants: LG Chem, Ltd., LG Energy Solution, Ltd.
    Inventors: Jung Keun Kim, Su Jeong Kim, Mi Sook Lee, Won Kyun Lee, Duk Hun Jang, Jeong Ae Yoon, Kyoung Hoon Kim, Chul Haeng Lee, Mi Yeon Oh, Kil Sun Lee, Jung Min Lee, Esder Kang, Chan Woo Noh, Chul Eun Yeom
  • Publication number: 20240073298
    Abstract: Disclosed herein are an intelligent scheduling apparatus and method. The intelligent scheduling apparatus includes one or more processors, and an execution memory for storing at least one program that is executed by the one or more processors, wherein the at least one program is configured to, in a hybrid cloud environment including a cloud, an edge system, and a near-edge system, configure schedulers for scheduling tasks of the cloud, the edge system, and the near-edge system, store data, requested by a client, in a work queue by controlling the schedulers based on a scheduler policy, process the tasks based on data stored in the work queue, collect history data resulting from processing of the tasks depending on the scheduler policy, and train the scheduler policy based on the history data.
    Type: Application
    Filed: October 24, 2023
    Publication date: February 29, 2024
    Inventor: Su-Min JANG
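    The scheduling loop in publication 20240073298 stores client requests in a work queue under a scheduler policy, processes them, collects history data, and retrains the policy. A rough sketch follows; the weighted-site policy and the latency numbers are invented stand-ins for the patented policy and its training step.
    ```python
    import random
    from collections import deque

    SITES = ["cloud", "edge", "near-edge"]

    class SchedulerPolicy:
        def __init__(self):
            self.weights = {s: 1.0 for s in SITES}

        def pick_site(self) -> str:
            # Weighted random choice over the cloud/edge/near-edge sites.
            total = sum(self.weights.values())
            r, acc = random.uniform(0, total), 0.0
            for site, weight in self.weights.items():
                acc += weight
                if r <= acc:
                    return site
            return SITES[-1]

        def train(self, history):
            # Stand-in for training: penalize sites with high observed latency.
            for site, latency in history:
                self.weights[site] = max(0.1, self.weights[site] - 0.01 * latency)

    policy = SchedulerPolicy()
    work_queue = deque(f"task-{i}" for i in range(10))  # data requested by a client
    history = []

    while work_queue:
        task = work_queue.popleft()
        site = policy.pick_site()
        latency = {"cloud": 3.0, "edge": 1.0, "near-edge": 2.0}[site]  # simulated result
        history.append((site, latency))

    policy.train(history)
    print(policy.weights)
    ```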
  • Patent number: 11916998
    Abstract: Disclosed herein is a multi-cloud edge system. The multi-cloud edge system includes a core cloud, a multi-cluster-based first edge node system, and a multi-cluster-based near edge node system, wherein the multi-cluster-based first edge node system includes multiple worker nodes, and a master node including a scheduler.
    Type: Grant
    Filed: November 11, 2022
    Date of Patent: February 27, 2024
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Dae-Won Kim, Su-Min Jang, Jae-Geun Cha, Hyun-Hwa Choi, Sun-Wook Kim
  • Publication number: 20230412671
    Abstract: Disclosed herein are a distributed cloud system, a data processing method of a distributed cloud system, and a storage medium. The data processing method of a distributed cloud system includes receiving a request of a user for an edge cloud and controlling a distributed cloud system, wherein the distributed cloud system comprises a core cloud including a large-scale resource, the edge cloud, and a local cloud including a middle-scale resource between the core cloud and the edge cloud; processing tasks corresponding to the user request through a scheduler of the core cloud, distributing the tasks based on a queue, and aggregating results of the processed tasks; and providing the processed data in response to the request of the user, wherein the distributed cloud system provides a management function in case of failure in the distributed cloud system.
    Type: Application
    Filed: June 13, 2023
    Publication date: December 21, 2023
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Dae-Won KIM, Sun-Wook KIM, Su-Min JANG, Jae-Geun CHA, Hyun-Hwa CHOI
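    A simplified sketch of the flow in publication 20230412671: split a user request into tasks, distribute them through a queue across core/local/edge resources, and aggregate the partial results into one response. The thread pool and the squaring function below are placeholders, not the patented mechanism.
    ```python
    from concurrent.futures import ThreadPoolExecutor
    from queue import Queue

    def process(task: int) -> int:
        return task * task  # placeholder for work done on a core, local, or edge node

    def handle_user_request(data):
        work_queue = Queue()
        for task in data:
            work_queue.put(task)

        with ThreadPoolExecutor(max_workers=3) as pool:  # three "sites"
            futures = []
            while not work_queue.empty():
                futures.append(pool.submit(process, work_queue.get()))
            results = [f.result() for f in futures]

        return sum(results)  # aggregate the processed results for the user

    print(handle_user_request(range(5)))
    ```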
  • Patent number: 11838384
    Abstract: Disclosed herein are an intelligent scheduling apparatus and method. The intelligent scheduling apparatus includes one or more processors, and an execution memory for storing at least one program that is executed by the one or more processors, wherein the at least one program is configured to, in a hybrid cloud environment including a cloud, an edge system, and a near-edge system, configure schedulers for scheduling tasks of the cloud, the edge system, and the near-edge system, store data, requested by a client, in a work queue by controlling the schedulers based on a scheduler policy, process the tasks based on data stored in the work queue, collect history data resulting from processing of the tasks depending on the scheduler policy, and train the scheduler policy based on the history data.
    Type: Grant
    Filed: April 28, 2021
    Date of Patent: December 5, 2023
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventor: Su-Min Jang
  • Publication number: 20230155959
    Abstract: Disclosed herein is a method for resource allocation in an edge-computing environment. The method includes receiving a request for an intelligent edge service, selecting a worker server to execute the service based on an input/output congestion level, allocating resources based on topology information of the worker server, and configuring a virtual environment based on the allocated resources.
    Type: Application
    Filed: November 9, 2022
    Publication date: May 18, 2023
    Inventors: Hyun-Hwa CHOI, Dae-Won KIM, Sun-Wook KIM, Su-Min JANG, Jae-Geun CHA
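    The selection and allocation steps in publication 20230155959 can be illustrated as follows. The worker data, congestion values, and NUMA-style topology fields are assumed purely for the example.
    ```python
    workers = [
        {"name": "worker-1", "io_congestion": 0.7,
         "topology": {"numa0": {"free_cpus": 2}, "numa1": {"free_cpus": 6}}},
        {"name": "worker-2", "io_congestion": 0.2,
         "topology": {"numa0": {"free_cpus": 4}, "numa1": {"free_cpus": 1}}},
    ]

    def select_worker(candidates):
        # Pick the worker server with the lowest input/output congestion level.
        return min(candidates, key=lambda w: w["io_congestion"])

    def allocate_resources(worker, cpus_needed):
        # Allocate CPUs from the node with the most free capacity in the
        # worker's topology information.
        node, info = max(worker["topology"].items(),
                         key=lambda kv: kv[1]["free_cpus"])
        if info["free_cpus"] < cpus_needed:
            raise RuntimeError("not enough free CPUs on any node")
        info["free_cpus"] -= cpus_needed
        return {"worker": worker["name"], "node": node, "cpus": cpus_needed}

    chosen = select_worker(workers)
    allocation = allocate_resources(chosen, cpus_needed=2)
    print(allocation)  # the virtual environment would be configured from this allocation
    ```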
  • Publication number: 20230156074
    Abstract: Disclosed herein is a multi-cloud edge system. The multi-cloud edge system includes a core cloud, a multi-cluster-based first edge node system, and a multi-cluster-based near edge node system, wherein the multi-cluster-based first edge node system includes multiple worker nodes, and a master node including a scheduler.
    Type: Application
    Filed: November 11, 2022
    Publication date: May 18, 2023
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Dae-Won KIM, Su-Min JANG, Jae-Geun CHA, Hyun-Hwa CHOI, Sun-Wook KIM
  • Publication number: 20230091587
    Abstract: Disclosed herein are a Docker image creation apparatus and method. The Docker image creation apparatus may include one or more processors, and execution memory for storing at least one program that is executed by the one or more processors, wherein the at least one program is configured to create a Docker image based on a previous Docker file and to execute a push command to store the created Docker image in a registry, wherein the Docker file is created from a partial Docker file of the previous Docker file, and wherein the Docker image is created from the verified partial Docker file after the partial Docker file is verified.
    Type: Application
    Filed: June 9, 2022
    Publication date: March 23, 2023
    Inventor: Su-Min JANG
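    A loose sketch of the build-verify-push sequence in publication 20230091587, using the standard docker build and docker push commands. The instruction check below is only a placeholder; the patent's actual verification of the partial Docker file is not shown here.
    ```python
    import subprocess
    import tempfile
    from pathlib import Path

    KNOWN_INSTRUCTIONS = {"FROM", "RUN", "COPY", "ADD", "ENV", "WORKDIR",
                          "EXPOSE", "CMD", "ENTRYPOINT", "LABEL", "ARG", "USER"}

    def verify_partial_dockerfile(text: str) -> bool:
        # Placeholder verification: every non-comment line must start with a
        # known Dockerfile instruction.
        for line in text.splitlines():
            line = line.strip()
            if line and not line.startswith("#"):
                if line.split()[0].upper() not in KNOWN_INSTRUCTIONS:
                    return False
        return True

    def build_and_push(dockerfile_text: str, tag: str) -> None:
        if not verify_partial_dockerfile(dockerfile_text):
            raise ValueError("partial Docker file failed verification")
        with tempfile.TemporaryDirectory() as ctx:
            (Path(ctx) / "Dockerfile").write_text(dockerfile_text)
            subprocess.run(["docker", "build", "-t", tag, ctx], check=True)
            subprocess.run(["docker", "push", tag], check=True)

    # Example (hypothetical registry and tag):
    # build_and_push('FROM alpine:3.19\nCMD ["echo", "hello"]',
    #                "registry.example.com/demo:latest")
    ```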
  • Publication number: 20220405236
    Abstract: Disclosed herein are an apparatus and method for managing in-memory container storage. The apparatus includes one or more processors, executable memory for storing at least one program executed by the one or more processors, and a container file system for storing a container, which provides application virtualization. Here, the container file system includes a merged access layer, a container layer, and an image layer, and the at least one program provides an application with link information of files in the container layer and the image layer, thereby allowing the application to access the files.
    Type: Application
    Filed: June 17, 2022
    Publication date: December 22, 2022
    Inventors: Dae-Won KIM, Sun-Wook KIM, Su-Min JANG, Jae-Geun CHA, Hyun-Hwa CHOI
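    The merged access layer described in publication 20220405236 essentially resolves a file name to whichever layer actually holds the file and returns that link to the application. A toy sketch, with invented directory names standing in for the container and image layers:
    ```python
    import tempfile
    from pathlib import Path

    root = Path(tempfile.mkdtemp())
    CONTAINER_LAYER = root / "container-layer"  # writable, per-container files
    IMAGE_LAYER = root / "image-layer"          # read-only files from the image
    for layer in (CONTAINER_LAYER, IMAGE_LAYER):
        layer.mkdir()

    def resolve(filename: str) -> Path:
        """Return the path (link information) of a file, preferring the
        container layer and falling back to the image layer."""
        for layer in (CONTAINER_LAYER, IMAGE_LAYER):
            candidate = layer / filename
            if candidate.exists():
                return candidate
        raise FileNotFoundError(filename)

    (IMAGE_LAYER / "config.txt").write_text("from image layer")
    print(resolve("config.txt").read_text())    # served from the image layer
    (CONTAINER_LAYER / "config.txt").write_text("overridden in container layer")
    print(resolve("config.txt").read_text())    # container layer now wins
    ```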
  • Publication number: 20220405021
    Abstract: Disclosed herein are an apparatus and method for managing memory-based integrated storage. The apparatus includes one or more processors and executable memory for storing at least one program executed by the one or more processors. The at least one program converts data operation tasks in response to a request for access to memory-based integrated storage from a user; a single virtual disk of a virtual storage pool of the memory-based integrated storage converts a disk access command into a command for connecting to a storage backend depending on the data operation tasks; and conversion of the data operation tasks into the command includes target identification indicating which local storage of the memory-based integrated storage is to be used.
    Type: Application
    Filed: June 17, 2022
    Publication date: December 22, 2022
    Inventors: Dae-Won KIM, Sun-Wook KIM, Su-Min JANG, Jae-Geun CHA, Hyun-Hwa CHOI
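    The command conversion and target identification in publication 20220405021 can be sketched schematically. Backend names, capacities, and the size-based placement rule are assumptions made for the example only.
    ```python
    BACKENDS = {
        "memory": {"type": "in-memory", "capacity_gb": 64},
        "nvme":   {"type": "local-nvme", "capacity_gb": 512},
    }

    def identify_target(size_gb: int) -> str:
        # Target identification: decide which local storage should serve the task.
        return "memory" if size_gb <= 4 else "nvme"

    def convert_operation(op: str, block: int, size_gb: int) -> dict:
        # Convert a generic disk access into a command for the chosen backend.
        target = identify_target(size_gb)
        return {
            "backend": target,
            "backend_type": BACKENDS[target]["type"],
            "command": f"{op.upper()} block={block}",
        }

    print(convert_operation("read", block=128, size_gb=2))
    print(convert_operation("write", block=4096, size_gb=32))
    ```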
  • Publication number: 20220286499
    Abstract: Disclosed herein are an apparatus and method for autoscaling a service shared in a cloud. The apparatus may include memory in which at least one program is recorded and a processor for executing the program, and the program may perform autoscaling by which at least one second service for performing the same function as a first service is additionally generated or deleted depending on a load that is incurred when multiple clients call the first service in the cloud. The at least one second service may be set to one of two or more execution types having different response times depending on a response time required by each of the multiple clients.
    Type: Application
    Filed: March 4, 2022
    Publication date: September 8, 2022
    Inventors: Hyun-Hwa CHOI, Dae-Won KIM, Sun-Wook KIM, Su-Min JANG, Jae-Geun CHA
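    A back-of-the-envelope sketch of the autoscaling decision in publication 20220286499. The per-replica capacity, thresholds, and the two execution-type labels are invented for illustration.
    ```python
    import math

    CALLS_PER_REPLICA = 100  # assumed capacity of one service instance

    def desired_replicas(calls_per_second: int) -> int:
        # Add or delete second services so capacity tracks the client load.
        return max(1, math.ceil(calls_per_second / CALLS_PER_REPLICA))

    def execution_type(required_response_ms: int) -> str:
        # Two execution types with different response times, per the abstract.
        return "low-latency" if required_response_ms <= 50 else "standard"

    current = 2
    for load in (150, 420, 80):
        target = desired_replicas(load)
        action = ("scale up" if target > current
                  else "scale down" if target < current else "hold")
        print(f"load={load} calls/s -> {target} replicas ({action})")
        current = target

    print(execution_type(required_response_ms=20))
    ```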
  • Publication number: 20220006879
    Abstract: Disclosed herein are an intelligent scheduling apparatus and method. The intelligent scheduling apparatus includes one or more processors, and an execution memory for storing at least one program that is executed by the one or more processors, wherein the at least one program is configured to, in a hybrid cloud environment including a cloud, an edge system, and a near-edge system, configure schedulers for scheduling tasks of the cloud, the edge system, and the near-edge system, store data, requested by a client, in a work queue by controlling the schedulers based on a scheduler policy, process the tasks based on data stored in the work queue, collect history data resulting from processing of the tasks depending on the scheduler policy, and train the scheduler policy based on the history data.
    Type: Application
    Filed: April 28, 2021
    Publication date: January 6, 2022
    Inventor: Su-Min JANG
  • Patent number: 11175960
    Abstract: A method and apparatus are disclosed which relate generally to worker-scheduling technology in a serverless cloud-computing environment, and more particularly, to technology that allocates workers for executing functions on a micro-function platform which provides a function-level micro-service. The method and apparatus process the worker allocation task in a distributed manner using a two-step pre-allocation scheme before a worker allocation request occurs, and pre-allocate the workers required for a service using a function request period and a function execution time, thus minimizing the scheduling costs incurred by worker allocation requests.
    Type: Grant
    Filed: September 26, 2019
    Date of Patent: November 16, 2021
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Young-Ho Kim, Chei-Yol Kim, Jin-Ho On, Su-Min Jang, Gyu-Il Cha
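    The pre-allocation idea in patent 11175960 uses a function's request period and execution time to decide how many workers to keep warm. A simple concurrency estimate shows the arithmetic; it is not the patented algorithm itself.
    ```python
    import math

    def workers_to_preallocate(request_period_s: float, execution_time_s: float) -> int:
        # Average concurrent executions is roughly arrival rate times execution time.
        arrival_rate = 1.0 / request_period_s
        return max(1, math.ceil(arrival_rate * execution_time_s))

    # A function called every 0.2 s that runs for 1.5 s needs about 8 warm workers.
    print(workers_to_preallocate(request_period_s=0.2, execution_time_s=1.5))
    ```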
  • Patent number: 10983835
    Abstract: Disclosed herein are an apparatus and method for setting the allocation rate of a parallel-computing accelerator. The method includes monitoring the utilization rate of the parallel-computing accelerator by an application and setting a start point, at which measurement of utilization data to be used for setting the allocation rate of the parallel-computing accelerator for the application is started, using the result of monitoring the utilization rate; setting an end point, at which the measurement of the utilization data is finished, based on the monitoring result; and setting the allocation rate of the parallel-computing accelerator using the utilization data measured during a time period from the start point to the end point.
    Type: Grant
    Filed: October 21, 2019
    Date of Patent: April 20, 2021
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Chei-Yol Kim, Young-Ho Kim, Jin-Ho On, Su-Min Jang, Gyu-Il Cha
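    The measurement window in patent 10983835 opens when the application starts using the accelerator and closes when usage stops; the allocation rate is then derived from the samples inside that window. A minimal numeric sketch with invented utilization samples and threshold:
    ```python
    samples = [0, 0, 5, 60, 85, 90, 70, 40, 3, 0]  # utilization %, one sample per second
    ACTIVE_THRESHOLD = 10

    # Start point: first sample where the application actually uses the accelerator.
    start = next(i for i, u in enumerate(samples) if u >= ACTIVE_THRESHOLD)
    # End point: last sample where the accelerator is still in use.
    end = max(i for i, u in enumerate(samples) if u >= ACTIVE_THRESHOLD)

    window = samples[start:end + 1]
    allocation_rate = sum(window) / len(window) / 100.0

    print(f"measurement window: samples {start}..{end}")
    print(f"suggested allocation rate: {allocation_rate:.0%}")
    ```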
  • Patent number: 10977007
    Abstract: Disclosed herein are an apparatus and method for executing a function. The apparatus includes one or more processors and executable memory for storing at least one program executed by the one or more processors, and the at least one program is configured to determine whether it is possible to reengineer a user function source using interface description language (IDL) code, to generate a reengineered function source by reengineering the user function source, and to execute the reengineered function source.
    Type: Grant
    Filed: September 19, 2019
    Date of Patent: April 13, 2021
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jin-Ho On, Young-Ho Kim, Chei-Yol Kim, Su-Min Jang, Gyu-Il Cha
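    A highly simplified sketch of the reengineering idea in patent 10977007. The "IDL" here is just a Python dict, and the generated wrapper only adds argument checks; the real IDL code and reengineering rules are not described in the abstract.
    ```python
    USER_SOURCE = "def add(a, b):\n    return a + b\n"
    IDL = {"function": "add", "params": {"a": int, "b": int}}  # toy interface description

    def can_reengineer(source: str, idl: dict) -> bool:
        # Decide whether the user function source matches the interface description.
        return f"def {idl['function']}(" in source

    def reengineer(source: str, idl: dict) -> str:
        # Generate a reengineered source that validates arguments per the IDL.
        checks = "\n".join(
            f"    assert isinstance({name}, {t.__name__}), '{name} must be {t.__name__}'"
            for name, t in idl["params"].items()
        )
        return source + (
            f"\ndef {idl['function']}_reengineered(a, b):\n{checks}\n"
            f"    return {idl['function']}(a, b)\n"
        )

    if can_reengineer(USER_SOURCE, IDL):
        namespace = {}
        exec(reengineer(USER_SOURCE, IDL), namespace)  # execute the reengineered source
        print(namespace["add_reengineered"](2, 3))
    ```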
  • Patent number: 10789085
    Abstract: Disclosed are a method, apparatus, and system for selectively providing a virtual machine through actual measurement of efficiency of power usage. When a user terminal requests to provide a virtual machine, candidate virtual machines are activated on multiple virtual machine servers. Input data provided by the user terminal are provided to each of the multiple candidate virtual machines through replication and network virtualization, and identical candidate virtual machines are run on the multiple virtual machine servers through replication and network virtualization. When the candidate virtual machines are run, one of the candidate virtual machines is finally selected as the virtual machine to be provided to the user terminal based on efficiency of power usage.
    Type: Grant
    Filed: August 16, 2017
    Date of Patent: September 29, 2020
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventor: Su-Min Jang
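    The final selection step in patent 10789085 picks whichever candidate virtual machine processed the replicated input most efficiently in terms of power. A small sketch with made-up measurements:
    ```python
    candidates = [
        {"vm": "vm-on-server-a", "work_done": 950, "watts": 210},
        {"vm": "vm-on-server-b", "work_done": 940, "watts": 160},
        {"vm": "vm-on-server-c", "work_done": 955, "watts": 230},
    ]

    def efficiency(candidate):
        # Work completed per watt consumed while running the replicated input data.
        return candidate["work_done"] / candidate["watts"]

    selected = max(candidates, key=efficiency)
    print(f"provide {selected['vm']} (efficiency={efficiency(selected):.2f} work/W)")
    ```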
  • Publication number: 20200183746
    Abstract: Disclosed herein are an apparatus and method for setting the allocation rate of a parallel-computing accelerator. The method includes monitoring the utilization rate of the parallel-computing accelerator by an application and setting a start point, at which measurement of utilization data to be used for setting the allocation rate of the parallel-computing accelerator for the application is started, using the result of monitoring the utilization rate; setting an end point, at which the measurement of the utilization data is finished, based on the monitoring result; and setting the allocation rate of the parallel-computing accelerator using the utilization data measured during a time period from the start point to the end point.
    Type: Application
    Filed: October 21, 2019
    Publication date: June 11, 2020
    Inventors: Chei-Yol KIM, Young-Ho KIM, Jin-Ho ON, Su-Min JANG, Gyu-Il CHA
  • Publication number: 20200183744
    Abstract: Disclosed herein are a worker-scheduling method in a cloud-computing system and an apparatus for the same. The worker-scheduling method includes performing a first load-distribution operation of pre-creating template workers so as to process worker execution preparation loads in a distributed manner before a worker allocation request for function execution occurs, predicting a number of workers to be pre-allocated in consideration of variation in a worker allocation request period for each function, and performing a second load-distribution operation of pre-allocating ready workers by performing worker upscaling on as many template workers as the number of workers to be pre-allocated.
    Type: Application
    Filed: September 26, 2019
    Publication date: June 11, 2020
    Inventors: Young-Ho KIM, Chei-Yol KIM, Jin-Ho ON, Su-Min JANG, Gyu-Il CHA