Patents by Inventor Chei-Yol Kim

Chei-Yol Kim has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240127094
    Abstract: Disclosed herein are a logical qubit execution apparatus and method. The logical qubit execution apparatus may be configured to execute, by a logical execution layer, a quantum circuit including requested logical qubits using a lattice surgery operation, generate, by the logical execution layer, measurement results of the logical qubits by combining measurement results of logical Pauli frames, generate, by a physical execution layer, a physical qubit circuit by converting a logical qubit operation corresponding to the measurement results of the logical qubits into a physical qubit operation, and measure, by the physical execution layer, results of an operation on physical Pauli frames by executing the physical qubit circuit.
    Type: Application
    Filed: June 30, 2023
    Publication date: April 18, 2024
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jin-Ho ON, Chei-Yol KIM, Soo-Cheol OH, Sang-Min LEE, Gyu-Il CHA
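    The abstract above describes combining measurement results of logical Pauli frames into measurement results of logical qubits. The sketch below is one plausible, simplified reading of that step, assuming frame contributions combine by parity (XOR); the class and function names are illustrative and not taken from the patent.

    ```python
    # Hypothetical sketch: fold recorded logical-Pauli-frame outcomes into a raw
    # logical measurement by parity. The XOR-combination rule is an assumption.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class LogicalPauliFrame:
        """Tracks deferred Pauli corrections recorded for one logical qubit."""
        flips: List[int] = field(default_factory=list)  # 0/1 frame measurement outcomes

        def parity(self) -> int:
            p = 0
            for f in self.flips:
                p ^= f
            return p

    def combine_logical_measurement(raw_outcome: int, frame: LogicalPauliFrame) -> int:
        """Combine the accumulated frame parity with the raw logical outcome."""
        return raw_outcome ^ frame.parity()

    # Usage: a raw logical measurement of 1 with two recorded frame flips (1, 1)
    frame = LogicalPauliFrame(flips=[1, 1])
    print(combine_logical_measurement(1, frame))  # -> 1 (the two flips cancel)
    ```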
  • Patent number: 11762733
    Abstract: Disclosed is a quantum computing system including a first quantum chip including first physical qubits, a second quantum chip including second physical qubits, and a management device. The management device includes a physical qubit layer that manages physical qubit mapping including information about physical channels between the first and second physical qubits, an abstraction qubit layer that manages abstraction qubit mapping including information about abstraction qubits and abstraction channels between the abstraction qubits based on the physical qubit mapping, a logical qubit layer that divides the abstraction qubits into logical qubits and manages logical qubit mapping including information about logical channels between the logical qubits, based on the abstraction qubit mapping, and an application qubit layer that allocates at least one logical qubit corresponding to a qubit request received from a quantum application program based on the logical qubit mapping.
    Type: Grant
    Filed: September 10, 2021
    Date of Patent: September 19, 2023
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jin Ho On, Chei-Yol Kim, SooCheol Oh, Gyuil Cha, Hee-Bum Jung
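    As a rough illustration of the four mapping layers named in the abstract above (physical, abstraction, logical, application), the sketch below keeps each layer as a plain dictionary and answers an allocation request from the top layer. All identifiers and the two-chip topology are assumptions made for the example.

    ```python
    # Illustrative sketch (not the patented implementation) of layered qubit mappings.
    physical_layer = {
        # physical qubit -> physical channels to other physical qubits (two chips)
        ("chip1", 0): [("chip1", 1), ("chip2", 0)],
        ("chip1", 1): [("chip1", 0)],
        ("chip2", 0): [("chip1", 0)],
    }

    abstraction_layer = {  # abstraction qubit -> backing physical qubits
        "a0": [("chip1", 0), ("chip1", 1)],
        "a1": [("chip2", 0)],
    }

    logical_layer = {  # logical qubit -> abstraction qubits it is divided over
        "L0": ["a0"],
        "L1": ["a1"],
    }

    def allocate_logical_qubits(n_requested: int) -> list:
        """Application layer: hand out logical qubits whose lower-layer mappings exist."""
        usable = [lq for lq, aqs in logical_layer.items()
                  if all(pq in physical_layer for aq in aqs for pq in abstraction_layer[aq])]
        if n_requested > len(usable):
            raise RuntimeError("not enough logical qubits")
        return usable[:n_requested]

    print(allocate_logical_qubits(1))  # -> ['L0']
    ```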
  • Publication number: 20230129967
    Abstract: A quantum computing system according to an embodiment of the present disclosure includes a logical qubit quantum compiler configured to receive a specific quantum code and to output a quantum kernel based on a quantum basic operation command, a logical qubit quantum kernel executor configured to generate a plurality of physical qubit quantum commands based on the quantum kernel, and a physical qubit quantum system configured to receive the physical qubit quantum commands and to perform a physical quantum operation.
    Type: Application
    Filed: August 31, 2022
    Publication date: April 27, 2023
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Soo Cheol OH, Chei Yol KIM, Jin Ho ON, Sang Min LEE, Gyu Il CHA
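    The abstract above describes a three-stage flow: compile quantum code into a kernel of basic operations, expand the kernel into physical-qubit commands, and execute those commands on the physical system. The sketch below mimics that flow; the command format, function names, and the one-to-three expansion are illustrative assumptions.

    ```python
    # Toy pipeline sketch: compiler -> kernel executor -> physical qubit system.
    def compile_to_kernel(quantum_code: str) -> list:
        """Logical-qubit compiler: quantum code -> kernel of basic operations."""
        return [op.strip() for op in quantum_code.split(";") if op.strip()]

    def kernel_to_physical_commands(kernel: list) -> list:
        """Kernel executor: each basic operation becomes several physical commands."""
        commands = []
        for op in kernel:
            commands.extend([f"{op}#prepare", f"{op}#apply", f"{op}#measure_syndrome"])
        return commands

    def run_physical_system(commands: list) -> None:
        """Physical-qubit system: perform each physical quantum operation (stubbed)."""
        for cmd in commands:
            print("executing", cmd)

    run_physical_system(kernel_to_physical_commands(compile_to_kernel("H q0; CNOT q0 q1")))
    ```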
  • Publication number: 20220164496
    Abstract: Disclosed is an operating method of a surface code-based quantum simulation device including physical qubit storage, which includes storing initialized entanglement states of logical qubits corresponding to different distances, receiving a surface code-based initialization request corresponding to a specific distance, and storing an initialized entanglement state of a logical qubit corresponding to the specific distance from among the initialized entanglement states of the logical qubits corresponding to the different distances in the physical qubit storage.
    Type: Application
    Filed: August 24, 2021
    Publication date: May 26, 2022
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: SooCheol OH, Chei-Yol KIM, Jin Ho ON, GYUIL CHA, Hee-Bum JUNG
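    A minimal sketch of the idea in the abstract above: initialized logical-qubit entanglement states are prepared per code distance ahead of time and served when an initialization request for a specific distance arrives. The class name and the opaque string used to stand in for a quantum state are placeholders.

    ```python
    # Hypothetical distance-keyed pool of pre-initialized logical-qubit states.
    class PhysicalQubitStorage:
        def __init__(self, distances):
            # Pre-store an initialized entanglement state for each supported distance.
            self._states = {d: f"<initialized logical state, distance {d}>" for d in distances}

        def initialize(self, distance: int) -> str:
            """Handle a surface-code initialization request for a specific distance."""
            try:
                return self._states[distance]
            except KeyError:
                raise ValueError(f"no pre-initialized state for distance {distance}")

    storage = PhysicalQubitStorage(distances=[3, 5, 7])
    print(storage.initialize(5))
    ```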
  • Publication number: 20220164253
    Abstract: Disclosed is a quantum computing system including a first quantum chip including first physical qubits, a second quantum chip including second physical qubits, and a management device. The management device includes a physical qubit layer that manages physical qubit mapping including information about physical channels between the first and second physical qubits, an abstraction qubit layer that manages abstraction qubit mapping including information about abstraction qubits and abstraction channels between the abstraction qubits based on the physical qubit mapping, a logical qubit layer that divides the abstraction qubits into logical qubits and manages logical qubit mapping including information about logical channels between the logical qubits, based on the abstraction qubit mapping, and an application qubit layer that allocates at least one logical qubit corresponding to a qubit request received from a quantum application program based on the logical qubit mapping.
    Type: Application
    Filed: September 10, 2021
    Publication date: May 26, 2022
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Jin Ho ON, Chei-Yol KIM, SooCheol OH, GYUIL CHA, Hee-Bum JUNG
  • Patent number: 11175960
    Abstract: A method and apparatus are disclosed which relate generally to worker-scheduling technology in a serverless cloud-computing environment, and more particularly, to technology that allocates workers for executing functions on a micro-function platform which provides a function-level micro-service. The method and apparatus process the worker allocation task in a distributed manner using a two-step pre-allocation scheme before a worker allocation request occurs, and pre-allocate the workers required for a service using the function request period and function execution time, thus minimizing the scheduling cost incurred by worker allocation requests.
    Type: Grant
    Filed: September 26, 2019
    Date of Patent: November 16, 2021
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Young-Ho Kim, Chei-Yol Kim, Jin-Ho On, Su-Min Jang, Gyu-Il Cha
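    The abstract above says workers are pre-allocated from the function request period and the function execution time. One natural (assumed) reading is that if requests arrive every `period` seconds and each execution lasts `exec_time` seconds, about `exec_time / period` executions overlap, so roughly that many workers should be kept ready. The formula and safety margin below are assumptions, not taken from the patent.

    ```python
    # Hedged estimate of the number of workers to pre-allocate for one function.
    import math

    def workers_to_preallocate(request_period_s: float, exec_time_s: float,
                               safety_margin: float = 1.2) -> int:
        concurrent = exec_time_s / max(request_period_s, 1e-9)  # overlapping executions
        return max(1, math.ceil(concurrent * safety_margin))

    print(workers_to_preallocate(request_period_s=0.5, exec_time_s=2.0))  # -> 5
    ```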
  • Patent number: 10983835
    Abstract: Disclosed herein are an apparatus and method for setting the allocation rate of a parallel-computing accelerator. The method includes monitoring the utilization rate of the parallel-computing accelerator by an application and setting a start point, at which measurement of utilization data to be used for setting the allocation rate of the parallel-computing accelerator for the application is started, using the result of monitoring the utilization rate; setting an end point, at which the measurement of the utilization data is finished, based on the monitoring result; and setting the allocation rate of the parallel-computing accelerator using the utilization data measured during a time period from the start point to the end point.
    Type: Grant
    Filed: October 21, 2019
    Date of Patent: April 20, 2021
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Chei-Yol Kim, Young-Ho Kim, Jin-Ho On, Su-Min Jang, Gyu-Il Cha
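    As a rough sketch of the measurement window described in the abstract above: the start point is set when the monitored utilization first rises above a threshold, the end point when it falls back below it, and the allocation rate is derived from the samples in between. The threshold, the averaging rule, and the headroom factor are assumptions for illustration.

    ```python
    # Illustrative allocation-rate setting from a utilization measurement window.
    def allocation_rate(samples, start_threshold=5.0, headroom=1.1):
        start = end = None
        for i, u in enumerate(samples):
            if start is None and u >= start_threshold:
                start = i                      # measurement start point
            elif start is not None and u < start_threshold:
                end = i                        # measurement end point
                break
        if start is None:
            return 0.0
        window = samples[start:end if end is not None else len(samples)]
        return min(100.0, (sum(window) / len(window)) * headroom)

    print(allocation_rate([0, 1, 40, 55, 60, 2, 0]))  # -> ~56.8 (% of the accelerator)
    ```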
  • Patent number: 10977007
    Abstract: Disclosed herein are an apparatus and method for executing a function. The apparatus includes one or more processors and executable memory for storing at least one program executed by the one or more processors, and the at least one program is configured to determine whether it is possible to reengineer a user function source using interface description language (IDL) code, to generate a reengineered function source by reengineering the user function source, and to execute the reengineered function source.
    Type: Grant
    Filed: September 19, 2019
    Date of Patent: April 13, 2021
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jin-Ho On, Young-Ho Kim, Chei-Yol Kim, Su-Min Jang, Gyu-Il Cha
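    A toy sketch of the flow named in the abstract above: decide whether a user function source can be reengineered from IDL information, produce the reengineered source, and execute it. The "# idl:" marker, the wrapper line, and the handler name are purely illustrative inventions, not details from the patent.

    ```python
    # Hypothetical decide -> reengineer -> execute flow for a user function source.
    def can_reengineer(user_source: str) -> bool:
        return "# idl:" in user_source          # assumed marker carrying the IDL description

    def reengineer(user_source: str) -> str:
        # Append an (assumed) entry point derived from the IDL line.
        return user_source + "\n\nresult = handler({'name': 'world'})\n"

    user_source = (
        "# idl: handler(event: dict) -> str\n"
        "def handler(event):\n"
        "    return 'hello ' + event['name']\n"
    )

    if can_reengineer(user_source):
        namespace = {}
        exec(reengineer(user_source), namespace)  # execute the reengineered source
        print(namespace["result"])                # -> hello world
    ```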
  • Patent number: 10749709
    Abstract: Disclosed herein is a distributed file system using a torus network. The distributed file system includes multiple servers. The location of a master server may be determined to shorten the latency of data input/output. The location of the master server may be determined such that the distance between the master server and a node farthest away from the master server, among nodes, is minimized. When the location of the master server is determined, the characteristics of the torus network and the features of a propagation transmission scheme may be taken into consideration.
    Type: Grant
    Filed: July 28, 2017
    Date of Patent: August 18, 2020
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Chei-Yol Kim, Dong-Oh Kim, Young-Kyun Kim, Young-Chul Kim, Hong-Yeon Kim
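    The placement rule in the abstract above can be sketched directly: on a torus, the hop count between two nodes is the wrap-around (circular) Manhattan distance, and the master server is placed at the node whose farthest node is as close as possible. The 3-D grid shape and the exhaustive search below are illustrative choices, not the patented procedure.

    ```python
    # Minimal minimax placement sketch for a master server on a torus network.
    from itertools import product

    def torus_distance(a, b, dims):
        # Wrap-around Manhattan distance between coordinates a and b on the torus.
        return sum(min(abs(x - y), d - abs(x - y)) for x, y, d in zip(a, b, dims))

    def choose_master(dims):
        nodes = list(product(*(range(d) for d in dims)))
        # Pick the node minimizing the distance to its farthest node.
        return min(nodes, key=lambda m: max(torus_distance(m, n, dims) for n in nodes))

    # On a fully symmetric torus every node is equally good, so the first is returned.
    print(choose_master((4, 4, 2)))  # -> (0, 0, 0)
    ```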
  • Publication number: 20200183744
    Abstract: Disclosed herein are a worker-scheduling method in a cloud-computing system and an apparatus for the same. The worker-scheduling method includes performing a first load-distribution operation of pre-creating template workers so as to process worker execution preparation loads in a distributed manner before a worker allocation request for function execution occurs, predicting the number of workers to be pre-allocated in consideration of variation in the worker allocation request period for each function, and performing a second load-distribution operation of pre-allocating ready workers by performing worker upscaling on as many template workers as the number of workers to be pre-allocated.
    Type: Application
    Filed: September 26, 2019
    Publication date: June 11, 2020
    Inventors: Young-Ho KIM, Chei-Yol KIM, Jin-Ho ON, Su-Min JANG, Gyu-Il CHA
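    A hedged sketch of the two load-distribution steps in the abstract above: template workers are created ahead of time, a pre-allocation count is predicted from how the request period is changing, and that many templates are upscaled into ready workers. The prediction rule (scale the ready pool by the shrinkage of the request period) is an assumption for illustration.

    ```python
    # Illustrative two-step pre-allocation: pre-created templates, then upscaling.
    import math

    class TemplateWorker:
        def upscale(self):
            return "ready-worker"

    def predict_preallocation(current_ready: int, prev_period_s: float, cur_period_s: float) -> int:
        # If requests now arrive more often (period shrank), grow the ready pool proportionally.
        return max(0, math.ceil(current_ready * (prev_period_s / cur_period_s)) - current_ready)

    templates = [TemplateWorker() for _ in range(16)]          # step 1: pre-created templates
    need = predict_preallocation(current_ready=4, prev_period_s=1.0, cur_period_s=0.5)
    ready = [templates.pop().upscale() for _ in range(min(need, len(templates)))]  # step 2
    print(len(ready))  # -> 4 additional ready workers
    ```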
  • Publication number: 20200183657
    Abstract: Disclosed herein are an apparatus and method for executing a function. The apparatus includes one or more processors and executable memory for storing at least one program executed by the one or more processors, and the at least one program is configured to determine whether it is possible to reengineer a user function source using interface description language (IDL) code, to generate a reengineered function source by reengineering the user function source, and to execute the reengineered function source.
    Type: Application
    Filed: September 19, 2019
    Publication date: June 11, 2020
    Inventors: Jin-Ho ON, Young-Ho KIM, Chei-Yol KIM, Su-Min JANG, Gyu-Il CHA
  • Publication number: 20200183746
    Abstract: Disclosed herein are an apparatus and method for setting the allocation rate of a parallel-computing accelerator. The method includes monitoring the utilization rate of the parallel-computing accelerator by an application and setting a start point, at which measurement of utilization data to be used for setting the allocation rate of the parallel-computing accelerator for the application is started, using the result of monitoring the utilization rate; setting an end point, at which the measurement of the utilization data is finished, based on the monitoring result; and setting the allocation rate of the parallel-computing accelerator using the utilization data measured during a time period from the start point to the end point.
    Type: Application
    Filed: October 21, 2019
    Publication date: June 11, 2020
    Inventors: Chei-Yol KIM, Young-Ho KIM, Jin-Ho ON, Su-Min JANG, Gyu-Il CHA
  • Publication number: 20180212795
    Abstract: Disclosed herein is a distributed file system using a torus network. The distributed file system includes multiple servers. The location of a master server may be determined to shorten the latency of data input/output. The location of the master server may be determined such that the distance between the master server and a node farthest away from the master server, among nodes, is minimized. When the location of the master server is determined, the characteristics of the torus network and the features of a propagation transmission scheme may be taken into consideration.
    Type: Application
    Filed: July 28, 2017
    Publication date: July 26, 2018
    Inventors: Chei-Yol KIM, Dong-Oh KIM, Young-Kyun KIM, Young-Chul KIM, Hong-Yeon KIM
  • Patent number: 9965353
    Abstract: A distributed file system, based on a torus network, includes a center node and one or more storage nodes. The center node encodes data when the data is received from a client. The one or more storage nodes receive data blocks or parity blocks from the center node and store the data blocks or parity blocks.
    Type: Grant
    Filed: May 25, 2016
    Date of Patent: May 8, 2018
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Chei Yol Kim, Dong Oh Kim, Young Kyun Kim, Hong Yeon Kim
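    A toy sketch of the center node's role described above: split incoming data into fixed-size data blocks, derive a parity block, and hand each block to a storage node. Single XOR parity and round-robin placement are assumptions; the abstract does not fix the encoding.

    ```python
    # Toy center-node encoding and block distribution for a torus-based DFS.
    from functools import reduce

    def encode(data: bytes, block_size: int = 4):
        blocks = [data[i:i + block_size].ljust(block_size, b"\0")
                  for i in range(0, len(data), block_size)]
        parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)
        return blocks, parity

    def distribute(blocks, parity, storage_nodes):
        placement = {}
        for i, block in enumerate(blocks + [parity]):          # round-robin placement
            placement[storage_nodes[i % len(storage_nodes)]] = block
        return placement

    blocks, parity = encode(b"hello torus!")
    print(distribute(blocks, parity, ["node-a", "node-b", "node-c", "node-d"]))
    ```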
  • Patent number: 9876874
    Abstract: Disclosed herein is a network selecting apparatus including: a determining unit that, when a data request signal is received from a client, checks the operation state of a cache server in which replicated data corresponding to the data request signal are stored and determines whether the cache server is operating normally; and a selecting unit that selects a first network, over which the replicated data are transmitted to input and output devices, when the cache server is operating normally, and selects a second network, over which original data corresponding to the data request signal are transmitted to the input and output devices from a storage server in which the original data are stored, when the cache server is not operating normally.
    Type: Grant
    Filed: April 21, 2014
    Date of Patent: January 23, 2018
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Chei-Yol Kim, Sang-Min Lee, Jeong-Sook Park, Young-Chang Kim, Soo-Young Kim
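    The selection rule above reduces to a simple branch: if the cache server holding the replica is operating normally, serve over the first network from the cache; otherwise fall back to the second network and the storage server holding the original data. The health flag and return values in the sketch are illustrative.

    ```python
    # Minimal sketch of cache-health-based network selection.
    def select_route(cache_server_alive: bool):
        if cache_server_alive:
            return ("network-1", "cache-server", "replicated data")
        return ("network-2", "storage-server", "original data")

    print(select_route(cache_server_alive=True))   # -> ('network-1', 'cache-server', ...)
    print(select_route(cache_server_alive=False))  # -> ('network-2', 'storage-server', ...)
    ```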
  • Patent number: 9740604
    Abstract: Provided herein is a method for allocating storage space using a buddy allocator, the method including receiving, by a buddy allocator, a block allocation request from a space allocation requestor, selecting, by the buddy allocator, a first buddy in response to the block allocation request, wherein the first buddy is one of a plurality of buddies, checking, by the buddy allocator, whether blocks of the first buddy include a first spare storage space to which storage space corresponding to the block allocation request is allocated, allocating, by the buddy allocator, the storage space to the blocks of the first buddy when it is checked that the blocks of the first buddy include the first spare storage space, and deallocating, by the buddy allocator, excess storage space of the allocated storage space when the size of the allocated storage space is greater than the storage space corresponding to the block allocation request, wherein the excess storage space does not correspond to the block allocation request.
    Type: Grant
    Filed: March 18, 2016
    Date of Patent: August 22, 2017
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventor: Chei Yol Kim
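    A hedged sketch of the flow in the abstract above: pick a buddy, check it for spare space, allocate a power-of-two run of blocks, then give back the excess beyond what the request needed. Block size, the free-space accounting, and the first-fit buddy selection are assumptions for illustration.

    ```python
    # Simplified buddy allocation with deallocation of excess blocks.
    class Buddy:
        def __init__(self, total_blocks: int):
            self.free_blocks = total_blocks

        def has_spare(self, blocks: int) -> bool:
            return self.free_blocks >= blocks

        def allocate(self, blocks: int) -> int:
            # Buddy systems hand out power-of-two sized runs of blocks.
            granted = 1
            while granted < blocks:
                granted *= 2
            self.free_blocks -= granted
            return granted

        def deallocate(self, blocks: int) -> None:
            self.free_blocks += blocks

    def allocate_space(buddies, requested_blocks: int) -> int:
        for buddy in buddies:                      # select the first suitable buddy
            if buddy.has_spare(requested_blocks):
                granted = buddy.allocate(requested_blocks)
                excess = granted - requested_blocks
                if excess:                         # return the part the request does not need
                    buddy.deallocate(excess)
                return requested_blocks
        raise MemoryError("no buddy has enough spare space")

    print(allocate_space([Buddy(total_blocks=64)], requested_blocks=5))  # -> 5
    ```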
  • Publication number: 20170212802
    Abstract: A distributed file system based on a torus network includes a center node configured to encode data when the data is received from a client and one or more storage nodes configured to receive data blocks or parity blocks generated by the encoding from the center node and store the data blocks or parity blocks.
    Type: Application
    Filed: May 25, 2016
    Publication date: July 27, 2017
    Inventors: Chei Yol KIM, Dong Oh KIM, Young Kyun KIM, Hong Yeon KIM
  • Patent number: 9680954
    Abstract: There are provided a system and method for providing a virtual desktop service using a cache server. A system for providing a virtual desktop service according to the invention includes a host server configured to provide a virtual desktop service to a client terminal using a virtual machine, a distributed file system configured to store data for the virtual machine, and a cache server that is provided for each host server group having at least one host server, and performs a read process or a write process of data using physically separate caches when the read process or write process of the data is requested from the virtual machine in the host server.
    Type: Grant
    Filed: February 28, 2014
    Date of Patent: June 13, 2017
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jeong-Sook Park, Soo-Young Kim, Chei-Yol Kim, Young-Chang Kim, Sang-Min Lee, Hong-Yeon Kim, Young-Kyun Kim
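    As a rough illustration of the cache server's split-path handling described above, the sketch below serves virtual-machine reads and writes through separate caches, with read misses and write flushes going to the distributed file system. The class, method names, and the dictionary-backed file system are invented for the example.

    ```python
    # Illustrative cache server with physically separate read and write caches.
    class CacheServer:
        def __init__(self, backend):
            self.read_cache, self.write_cache, self.backend = {}, {}, backend

        def read(self, path):
            if path not in self.read_cache:                 # read miss -> fetch from the DFS
                self.read_cache[path] = self.backend.get(path, b"")
            return self.read_cache[path]

        def write(self, path, data):
            self.write_cache[path] = data                   # buffered in the write cache

        def flush(self):
            self.backend.update(self.write_cache)           # persist to the distributed FS
            self.write_cache.clear()

    dfs = {"/vm1/disk.img": b"base image"}
    cache = CacheServer(dfs)
    cache.write("/vm1/disk.img", b"updated image")
    cache.flush()
    print(cache.read("/vm1/disk.img"))  # -> b'updated image'
    ```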
  • Publication number: 20160275009
    Abstract: Provided herein is a method for allocating storage space using a buddy allocator, the method including receiving, by a buddy allocator, a block allocation request from a space allocation requestor, selecting, by the buddy allocator, a first buddy in response to the block allocation request, wherein the first buddy is one of a plurality of buddies, checking, by the buddy allocator, whether blocks of the first buddy include a first spare storage space to which storage space corresponding to the block allocation request is allocated, allocating, by the buddy allocator, the storage space to the blocks of the first buddy when it is checked that the blocks of the first buddy include the first spare storage space, and deallocating, by the buddy allocator, excess storage space of the allocated storage space when the size of the allocated storage space is greater than the storage space corresponding to the block allocation request, wherein the excess storage space does not correspond to the block allocation request.
    Type: Application
    Filed: March 18, 2016
    Publication date: September 22, 2016
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventor: Chei Yol KIM
  • Patent number: 9158562
    Abstract: Disclosed herein are a method and apparatus for supporting virtualization. In the method, conversion of source code of a loadable module is initiated. A virtualization-sensitive instruction is searched for during the conversion of the source code. If a virtualization-sensitive instruction is found, a virtualization-sensitive instruction table is generated based on the found instruction. The virtualization-sensitive instruction is substituted with an instruction recognizable in a privileged mode, based on the generated virtualization-sensitive instruction table. The loadable module is then loaded and executed in a kernel. Accordingly, the present invention supports virtualization while minimizing the overhead that occurs in full virtualization and guaranteeing the high performance provided by para-virtualization without modifying the source.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: October 13, 2015
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Seung-Hyub Jeon, Kwang-Won Koh, Kang-Ho Kim, Chei-Yol Kim, Chang-Won Ahn
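    A toy sketch of the conversion step described in the abstract above: scan a loadable module's instructions for virtualization-sensitive ones, record them in a table, and substitute each with a replacement recognizable in privileged mode before loading. The instruction mnemonics shown (popf, sgdt are well-known virtualization-sensitive x86 instructions) and the replacement map are placeholders, not the patent's actual substitutions.

    ```python
    # Hypothetical scan-and-substitute pass over a module's instruction stream.
    SENSITIVE = {"popf": "vmm_popf_hypercall", "sgdt": "vmm_sgdt_hypercall"}

    def convert_module(instructions):
        table = []                                   # virtualization-sensitive instruction table
        converted = []
        for addr, insn in enumerate(instructions):
            if insn in SENSITIVE:
                table.append((addr, insn))
                converted.append(SENSITIVE[insn])    # substitute with a recognizable instruction
            else:
                converted.append(insn)
        return converted, table

    code, table = convert_module(["mov", "popf", "add", "sgdt"])
    print(code)   # -> ['mov', 'vmm_popf_hypercall', 'add', 'vmm_sgdt_hypercall']
    print(table)  # -> [(1, 'popf'), (3, 'sgdt')]
    ```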