Patents by Inventor Jin Wuk Seok

Jin Wuk Seok has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240069875
    Abstract: Disclosed herein are a neural network model deployment method and apparatus for providing a deep learning service. The neural network model deployment method may include providing a specification wizard to a user, searching for and training a neural network based on a user requirement specification that is input through the specification wizard, generating a neural network template code based on the user requirement specification and the trained neural network, converting the trained neural network into a deployment neural network that is usable in a target device based on the user requirement specification, and deploying the deployment neural network to the target device.
    Type: Application
    Filed: June 14, 2023
    Publication date: February 29, 2024
    Inventors: Jae-Bok PARK, Chang-Sik CHO, Kyung-Hee LEE, Ji-Young KWAK, Seon-Tae KIM, Hong-Soog KIM, Jin-Wuk SEOK, Hyun-Woo CHO
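    A high-level sketch of the deployment flow this abstract describes is shown below, with stub functions standing in for the search/training, template-generation, conversion, and deployment stages; all names, fields, and return values are illustrative assumptions, not taken from the patent.

```python
# Illustrative sketch of a specification-driven deployment pipeline.
# Every name and field here is an assumption, not the patented design.
from dataclasses import dataclass

@dataclass
class RequirementSpec:            # collected through the specification wizard
    task: str
    target_device: str
    max_latency_ms: int

def search_and_train(spec):       # stand-in for neural architecture search + training
    return {"arch": f"model-for-{spec.task}", "weights": "trained"}

def generate_template_code(spec, model):   # inference wrapper generated from the spec
    return f"# runs {model['arch']} within {spec.max_latency_ms} ms on {spec.target_device}"

def convert_for_device(model, device):     # e.g. export/quantize for the target runtime
    return {**model, "format": f"deployable-{device}"}

def deploy(spec: RequirementSpec):
    model = search_and_train(spec)
    template = generate_template_code(spec, model)
    deployable = convert_for_device(model, spec.target_device)
    return deployable, template             # would be pushed to the target device

print(deploy(RequirementSpec("image_classification", "arm64-edge", 50)))
```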
  • Publication number: 20230316091
    Abstract: Disclosed herein are a federated learning method and apparatus. The federated learning method includes receiving a feature vector extracted from a client side and label data corresponding to the feature vector, outputting a feature vector with phase information preserved therein by applying the feature vector as input to a Self-Organizing Feature Map (SOFM), and training a neural network model by applying both the feature vector with the phase information preserved therein and the label data as input to the neural network model.
    Type: Application
    Filed: February 9, 2023
    Publication date: October 5, 2023
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jin-Wuk SEOK, Ji-Young KWAK, Seon-Tae KIM, Hong-Soog KIM, Jae-Bok PARK, Kyung-Hee LEE, Chang-Sik CHO, Hyun-Woo CHO
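    The sketch below illustrates the core idea in this abstract with a minimal numpy Self-Organizing Feature Map: client feature vectors are mapped through the SOFM so that topological ("phase") structure is preserved, and the mapped features plus labels would then train a downstream model. The grid size, decay schedules, and toy data are assumptions.

```python
# Minimal SOM/SOFM feature mapping sketch (illustrative parameters throughout).
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 8))           # client-side feature vectors
labels = (features[:, 0] > 0).astype(int)      # label data for each vector

grid_h, grid_w, dim = 6, 6, features.shape[1]
weights = rng.normal(size=(grid_h * grid_w, dim))   # SOM codebook
coords = np.array([(i, j) for i in range(grid_h) for j in range(grid_w)], float)

for t in range(500):                            # classic online SOM updates
    x = features[rng.integers(len(features))]
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # best-matching unit
    lr = 0.5 * np.exp(-t / 250)                          # learning-rate decay
    sigma = 3.0 * np.exp(-t / 250)                       # neighbourhood decay
    dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
    h = np.exp(-dist2 / (2 * sigma ** 2))                # neighbourhood function
    weights += lr * h[:, None] * (x - weights)

# Topology-preserving representation: similarity of each vector to every SOM node.
mapped = np.exp(-((features[:, None, :] - weights[None]) ** 2).sum(-1))
print(mapped.shape, labels.shape)   # (200, 36) features to train a downstream model
```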
  • Publication number: 20230297833
    Abstract: Disclosed herein are a method and apparatus for compressing learning parameters for training of a deep-learning model and transmitting the compressed parameters in a distributed processing environment. Multiple electronic devices in the distributed processing system perform training of a neural network. By performing training, parameters are updated. The electronic device may share the updated parameter thereof with additional electronic devices. In order to efficiently share the parameter, the residual of the parameter is provided to the additional electronic devices. When the residual of the parameter is provided, the additional electronic devices update the parameter using the residual of the parameter.
    Type: Application
    Filed: April 24, 2023
    Publication date: September 21, 2023
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Seung-Hyun CHO, Youn-Hee KIM, Jin-Wuk SEOK, Joo-Young LEE, Woong LIM, Jong-Ho KIM, Dae-Yeol LEE, Se-Yoon JEONG, Hui-Yong KIM, Jin-Soo CHOI, Je-Won KANG
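    The residual-based parameter sharing described in this family of filings can be illustrated with the minimal sketch below: a device transmits only the residual between its updated parameters and the last shared values, optionally quantized, and a peer applies that residual to its own copy. The quantization step is an illustrative assumption.

```python
# Residual-based parameter sharing sketch (quantization step is an assumption).
import numpy as np

def make_residual(updated, last_shared, step=0.01):
    residual = updated - last_shared
    return np.round(residual / step) * step      # coarse quantization for cheap transmission

# device A trains locally and prepares a residual
shared = np.zeros(4)                             # parameters both devices agree on
local_a = shared + np.array([0.034, -0.012, 0.250, 0.0])   # after a local training step
res = make_residual(local_a, shared)

# device B reconstructs A's updated parameters from the residual alone
local_b = shared + res
print(res, local_b)
```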
  • Patent number: 11663476
    Abstract: Disclosed herein are a method and apparatus for compressing learning parameters for training of a deep-learning model and transmitting the compressed parameters in a distributed processing environment. Multiple electronic devices in the distributed processing system perform training of a neural network. By performing training, parameters are updated. The electronic device may share the updated parameter thereof with additional electronic devices. In order to efficiently share the parameter, the residual of the parameter is provided to the additional electronic devices. When the residual of the parameter is provided, the additional electronic devices update the parameter using the residual of the parameter.
    Type: Grant
    Filed: December 13, 2018
    Date of Patent: May 30, 2023
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Seung-Hyun Cho, Youn-Hee Kim, Jin-Wuk Seok, Joo-Young Lee, Woong Lim, Jong-Ho Kim, Dae-Yeol Lee, Se-Yoon Jeong, Hui-Yong Kim, Jin-Soo Choi, Je-Won Kang
  • Patent number: 11477468
    Abstract: A method and apparatus for image compression using a latent variable are provided. The multiple components of the latent variable may be sorted in order of importance. Through sorting, when the feature information of only some of the multiple components is used, the quality of a reconstructed image may be improved. In order to generate a latent variable, the components of which are sorted in order of importance, learning may be performed in various manners. Also, less important information may be eliminated from the latent variable, and processing, such as quantization, may be applied to the latent variable. Through elimination and processing, the amount of data for the latent variable may be reduced.
    Type: Grant
    Filed: October 30, 2018
    Date of Patent: October 18, 2022
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Joo-Young Lee, Seung-Hyun Cho, Youn-Hee Kim, Jin-Wuk Seok, Woong Lim, Jong-Ho Kim, Dae-Yeol Lee, Se-Yoon Jeong, Hui-Yong Kim, Jin-Soo Choi
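    A minimal sketch of the latent-variable handling described above: components are ordered by an importance score (variance over a batch is used here purely as a proxy), the least important components are eliminated, and the survivors are quantized to reduce the amount of data. All sizes and thresholds are assumptions.

```python
# Sort latent components by importance, drop the tail, quantize the rest.
import numpy as np

rng = np.random.default_rng(1)
latents = rng.normal(scale=[3.0, 0.2, 1.5, 0.05], size=(64, 4))  # batch of latent vectors

importance = latents.var(axis=0)                  # importance score per component (proxy)
order = np.argsort(-importance)                   # most important first
sorted_latents = latents[:, order]

kept = sorted_latents[:, :2]                      # eliminate the less important components
quantized = np.round(kept * 8) / 8                # uniform quantization of the survivors
print(order, quantized.shape)
```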
  • Publication number: 20220300803
    Abstract: Disclosed herein are a method for performing a dilated convolution operation using an atypical kernel pattern and a dilated convolutional neural network system using the same. The method for performing a dilated convolution operation includes learning a weight matrix for a kernel of dilated convolution through deep learning, generating an atypical kernel pattern based on the learned weight matrix, and performing a dilated convolution operation on input data by applying the atypical kernel pattern to a kernel of a dilated convolutional neural network.
    Type: Application
    Filed: June 8, 2021
    Publication date: September 22, 2022
    Inventors: Hyun-Woo CHO, Jeong-Si KIM, Hong-Soog KIM, Jin-Wuk SEOK, Seung-Tae HONG
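    The sketch below illustrates the atypical-kernel idea in this abstract: keep only the strongest taps of a learned weight matrix to form an irregular kernel pattern, then convolve with that sparse pattern over a dilated receptive field. The thresholding rule, dilation, and sizes are illustrative assumptions.

```python
# Dilated convolution with an irregular (atypical) kernel pattern derived
# from a learned weight matrix; thresholding rule is an assumption.
import numpy as np

rng = np.random.default_rng(2)
learned = rng.normal(size=(5, 5))                 # stand-in for a learned kernel
mask = np.abs(learned) >= np.quantile(np.abs(learned), 0.75)  # keep ~strongest 25% of taps
pattern = learned * mask                          # atypical (non-rectangular) kernel

def dilated_conv2d(x, kernel, dilation=2):
    kh, kw = kernel.shape
    eh, ew = dilation * (kh - 1) + 1, dilation * (kw - 1) + 1   # effective footprint
    out = np.zeros((x.shape[0] - eh + 1, x.shape[1] - ew + 1))
    for i in range(kh):
        for j in range(kw):
            if kernel[i, j] == 0.0:
                continue                           # skip taps removed from the pattern
            out += kernel[i, j] * x[i * dilation:i * dilation + out.shape[0],
                                    j * dilation:j * dilation + out.shape[1]]
    return out

image = rng.normal(size=(32, 32))
print(dilated_conv2d(image, pattern).shape)        # (24, 24) for dilation 2
```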
  • Patent number: 11412225
    Abstract: Disclosed herein is a context-adaptive entropy model for end-to-end optimized image compression. The entropy model exploits two types of contexts. The two types of contexts are a bit-consuming context and a bit-free context, and these contexts are classified depending on whether the corresponding context requires the allocation of additional bits. Based on these contexts, the entropy model may more accurately estimate the distribution of each latent representation using a more generalized form of entropy model, thus improving compression performance.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: August 9, 2022
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Joo-Young Lee, Seung-Hyun Cho, Seung-Kwon Beack, Hyunsuk Ko, Youn-Hee Kim, Jong-Ho Kim, Jin-Wuk Seok, Woong Lim, Se-Yoon Jeong, Hui-Yong Kim, Jin-Soo Choi
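    A minimal sketch in the spirit of this abstract is given below: a bit-free context comes from already-decoded neighbouring latents, a bit-consuming context from transmitted side information, and together they parameterize a Gaussian used to estimate each latent's bit cost. The linear predictor and the side-information channel are assumptions, not the patented model.

```python
# Toy two-context entropy model: estimate bits per quantized latent.
import math
import numpy as np

def gaussian_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

rng = np.random.default_rng(3)
latents = np.round(rng.normal(scale=2.0, size=64))     # quantized latent sequence

side_scale = float(latents.std())      # bit-consuming context (transmitted side information)
total_bits = 0.0
for i, y in enumerate(latents):
    causal = latents[max(0, i - 3):i]                  # bit-free context: decoded neighbours
    mu = float(causal.mean()) if len(causal) else 0.0  # simple predictor from the free context
    sigma = max(side_scale, 1e-3)
    # probability mass of the quantized value under N(mu, sigma), bin width 1
    p = gaussian_cdf(y + 0.5, mu, sigma) - gaussian_cdf(y - 0.5, mu, sigma)
    total_bits += -math.log2(max(p, 1e-12))
print(f"estimated rate: {total_bits / len(latents):.2f} bits per latent")
```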
  • Patent number: 11205257
    Abstract: Disclosed herein are a method and apparatus for measuring video quality based on a perceptually sensitive region. The quality of video may be measured based on a perceptually sensitive region and a change in the perceptually sensitive region. The perceptually sensitive region includes a spatial perceptually sensitive region, a temporal perceptually sensitive region, and a spatio-temporal perceptually sensitive region. Perceptual weights are applied to a detected perceptually sensitive region and a change in the detected perceptually sensitive region. Distortion is calculated based on the perceptually sensitive region and the change in the perceptually sensitive region, and a result of quality measurement for a video is generated based on the calculated distortion.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: December 21, 2021
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Se-Yoon Jeong, Dae-Yeol Lee, Seung-Hyun Cho, Hyunsuk Ko, Youn-Hee Kim, Jong-Ho Kim, Jin-Wuk Seok, Joo-Young Lee, Woong Lim, Hui-Yong Kim, Jin-Soo Choi
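    The sketch below illustrates perceptually weighted distortion in the spirit of this abstract: a spatially sensitive region is detected (high-gradient pixels serve as an illustrative detector), given a larger perceptual weight, and the weighted error between reference and distorted frames is accumulated. The detector and weight values are assumptions.

```python
# Perceptually weighted distortion sketch (detector and weights are assumptions).
import numpy as np

rng = np.random.default_rng(4)
ref = rng.random((64, 64))
dist = ref + rng.normal(scale=0.05, size=ref.shape)   # distorted version of the frame

gy, gx = np.gradient(ref)
grad_mag = np.hypot(gx, gy)
sensitive = grad_mag > np.quantile(grad_mag, 0.8)     # spatial perceptually sensitive region
weights = np.where(sensitive, 4.0, 1.0)               # perceptual weights (assumed values)

weighted_mse = np.sum(weights * (ref - dist) ** 2) / np.sum(weights)
print(f"perceptually weighted MSE: {weighted_mse:.5f}")
```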
  • Publication number: 20210365838
    Abstract: Disclosed herein are an apparatus and method for machine learning based on monotonically increasing quantization resolution. The method, in which a quantization coefficient is defined as a monotonically increasing function of time, includes initially setting the monotonically increasing function of time, performing machine learning based on a quantized learning equation using the quantization coefficient defined by the monotonically increasing function of time, determining whether the quantization coefficient satisfies a predetermined condition after increasing the time, newly setting the monotonically increasing function of time when the quantization coefficient satisfies the predetermined condition, and updating the quantization coefficient using the newly set monotonically increasing function of time.
    Type: Application
    Filed: May 20, 2021
    Publication date: November 25, 2021
    Inventors: Jin-Wuk SEOK, Jeong-Si KIM
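    A minimal sketch of training with a monotonically increasing quantization resolution is shown below: parameters are snapped to a grid whose fineness Q(t) grows with time, and the schedule is reset once Q crosses an assumed threshold. The loss, schedule, and threshold are illustrative, not taken from the patent.

```python
# Quantized learning with a monotonically increasing quantization coefficient.
import numpy as np

def quantize(w, q):
    return np.round(w * q) / q            # resolution 1/q, finer as q grows

w = np.array([2.0, -3.0])                 # parameters; minimize f(w) = ||w||^2
q, lr = 4.0, 0.1
for t in range(1, 201):
    grad = 2 * w                          # gradient of the quadratic loss
    w = quantize(w - lr * grad, q)        # quantized learning step
    q = 4.0 * (1 + 0.05 * t)              # monotonically increasing coefficient Q(t)
    if q > 64:                            # condition met: switch to a new schedule
        q = 64.0                          # (a constant cap is an assumed simplification)
print(w, q)
```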
  • Patent number: 11166014
    Abstract: Disclosed herein are a method and apparatus for video decoding and a method and apparatus for video encoding. A prediction block for a target block is generated by predicting the target block using a prediction network, and a reconstructed block for the target block is generated based on the prediction block and a reconstructed residual block. The prediction network includes an intra-prediction network and an inter-prediction network and uses a spatial reference block and/or a temporal reference block when it performs prediction. For learning in the prediction network, a loss function is defined, and learning in the prediction network is performed based on the loss function.
    Type: Grant
    Filed: December 13, 2018
    Date of Patent: November 2, 2021
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Seung-Hyun Cho, Joo-Young Lee, Youn-Hee Kim, Jin-Wuk Seok, Woong Lim, Jong-Ho Kim, Dae-Yeol Lee, Se-Yoon Jeong, Hui-Yong Kim, Jin-Soo Choi
  • Patent number: 11019355
    Abstract: An inter-prediction method and apparatus uses a reference frame generated based on deep learning. In the inter-prediction method and apparatus, a reference frame is selected, and a virtual reference frame is generated based on the selected reference frame. A reference picture list is configured to include the generated virtual reference frame, and inter prediction for a target block is performed based on the virtual reference frame. The virtual reference frame may be generated based on a deep-learning network architecture, and may be generated based on video interpolation and/or video extrapolation that use the selected reference frame.
    Type: Grant
    Filed: April 3, 2019
    Date of Patent: May 25, 2021
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Seung-Hyun Cho, Je-Won Kang, Na-Young Kim, Jung-Kyung Lee, Joo-Young Lee, Hyunsuk Ko, Youn-Hee Kim, Jong-Ho Kim, Jin-Wuk Seok, Dae-Yeol Lee, Woong Lim, Se-Yoon Jeong, Hui-Yong Kim, Jin-Soo Choi
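    The virtual-reference-frame idea can be sketched as below: a frame is synthesized between (interpolation) or beyond (extrapolation) the selected references and appended to the reference picture list used for inter prediction. A plain pixel-wise blend stands in for the deep-learning synthesis network.

```python
# Virtual reference frame sketch; blends stand in for the learned synthesis.
import numpy as np

rng = np.random.default_rng(5)
ref0 = rng.random((16, 16))                       # reference frame at time t-1
ref1 = rng.random((16, 16))                       # reference frame at time t+1

virtual_interp = 0.5 * (ref0 + ref1)              # video-interpolation stand-in
virtual_extrap = ref1 + (ref1 - ref0)             # video-extrapolation stand-in

reference_list = [ref0, ref1, virtual_interp]     # list now includes a virtual frame
target_block = ref1[4:8, 4:8]
errors = [np.mean((r[4:8, 4:8] - target_block) ** 2) for r in reference_list]
print("best reference index:", int(np.argmin(errors)))
```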
  • Publication number: 20210133626
    Abstract: Disclosed herein are an apparatus and method for optimizing a quantized machine-learning algorithm. The apparatus includes one or more processors and executable memory for storing at least one program executed by the one or more processors. The at least one program sets the learning rate of the quantized machine-learning algorithm using at least one of the Armijo rule and golden-section search methods, calculates a quantized orthogonal compensation search vector from the search direction vector of the quantized machine-learning algorithm, compensates for the search performance of the quantized machine-learning algorithm using the quantized orthogonal compensation search vector, and calculates an optimized quantized machine-learning algorithm using the learning rate and the quantized machine-learning algorithm, the search performance of which is compensated for.
    Type: Application
    Filed: June 5, 2020
    Publication date: May 6, 2021
    Inventor: Jin-Wuk SEOK
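    The sketch below shows one way the learning rate of a quantized learning step can be set with the Armijo rule, as named in this abstract (the golden-section alternative and the orthogonal compensation vector are omitted). The objective, quantization grid, and Armijo constants are illustrative assumptions.

```python
# Armijo backtracking line search for a quantized descent step (toy objective).
import numpy as np

def f(w):
    return float(np.sum(w ** 2))                       # toy objective

def grad(w):
    return 2 * w

def quantize(v, step=0.125):
    return np.round(v / step) * step                   # quantized search direction / weights

def armijo_lr(w, d, lr0=1.0, beta=0.5, c=1e-4):
    """Backtrack lr until f(w + lr*d) <= f(w) + c*lr*<grad f(w), d>."""
    lr, fw, g = lr0, f(w), grad(w)
    while f(w + lr * d) > fw + c * lr * float(g @ d):
        lr *= beta
    return lr

w = np.array([3.0, -2.0])
for _ in range(20):
    d = quantize(-grad(w))                             # quantized descent direction
    lr = armijo_lr(w, d)                               # Armijo-rule learning rate
    w = quantize(w + lr * d)
print(w)
```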
  • Publication number: 20210136416
    Abstract: Disclosed herein are a video decoding method and apparatus and a video encoding method and apparatus. A transformed block is generated by performing a first transformation that uses a prediction block for a target block. A reconstructed block for the target block is generated by performing a second transformation that uses the transformed block. The prediction block may be a block present in a reference image, or a reconstructed block present in a target image. The first transformation and the second transformation may be respectively performed by neural networks. Since each transformation is automatically performed by the corresponding neural network, information required for a transformation may be excluded from a bitstream.
    Type: Application
    Filed: November 27, 2018
    Publication date: May 6, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Youn-Hee KIM, Hui-Yong KIM, Seung-Hyun CHO, Jin-Wuk SEOK, Joo-Young LEE, Woong LIM, Jong-Ho KIM, Dae-Yeol LEE, Se-Yoon JEONG, Jin-Soo CHOI
  • Publication number: 20210084290
    Abstract: Disclosed herein are a method and apparatus for video decoding and a method and apparatus for video encoding. A prediction block for a target block is generated by predicting the target block using a prediction network, and a reconstructed block for the target block is generated based on the prediction block and a reconstructed residual block. The prediction network includes an intra-prediction network and an inter-prediction network and uses a spatial reference block and/or a temporal reference block when it performs prediction. For learning in the prediction network, a loss function is defined, and learning in the prediction network is performed based on the loss function.
    Type: Application
    Filed: December 13, 2018
    Publication date: March 18, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Seung-Hyun CHO, Joo-Young LEE, Youn-Hee KIM, Jin-Wuk SEOK, Woong LIM, Jong-Ho KIM, Dae-Yeol LEE, Se-Yoon JEONG, Hui-Yong KIM, Jin-Soo CHOI
  • Publication number: 20200394514
    Abstract: Disclosed herein are a method and apparatus for compressing learning parameters for training of a deep-learning model and transmitting the compressed parameters in a distributed processing environment. Multiple electronic devices in the distributed processing system perform training of a neural network. By performing training, parameters are updated. The electronic device may share the updated parameter thereof with additional electronic devices. In order to efficiently share the parameter, the residual of the parameter is provided to the additional electronic devices. When the residual of the parameter is provided, the additional electronic devices update the parameter using the residual of the parameter.
    Type: Application
    Filed: December 13, 2018
    Publication date: December 17, 2020
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Seung-Hyun CHO, Youn-Hee KIM, Jin-Wuk SEOK, Joo-Young LEE, Woong LIM, Jong-Ho KIM, Dae-Yeol LEE, Se-Yoon JEONG, Hui-Yong KIM, Jin-Soo CHOI, Je-Won KANG
  • Patent number: 10841577
    Abstract: Disclosed herein are a video decoding method and apparatus and a video encoding method and apparatus. A virtual frame is generated by a video generation network including a generation encoder and a generation decoder. The virtual frame is used as a reference frame in inter prediction for a target block. Further, a video generation network for inter prediction may be selected from among multiple video generation networks, and inter prediction that uses the selected video generation network may be performed.
    Type: Grant
    Filed: February 7, 2019
    Date of Patent: November 17, 2020
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Seung-Hyun Cho, Youn-Hee Kim, Jin-Wuk Seok, Joo-Young Lee, Woong Lim, Jong-Ho Kim, Dae-Yeol Lee, Se-Yoon Jeong, Hui-Yong Kim, Jin-Soo Choi, Je-Won Kang, Na-Young Kim
  • Publication number: 20200351509
    Abstract: A method and apparatus for image compression using a latent variable are provided. The multiple components of the latent variable may be sorted in order of importance. Through sorting, when the feature information of only some of the multiple components is used, the quality of a reconstructed image may be improved. In order to generate a latent variable, the components of which are sorted in order of importance, learning may be performed in various manners. Also, less important information may be eliminated from the latent variable, and processing, such as quantization, may be applied to the latent variable. Through elimination and processing, the amount of data for the latent variable may be reduced.
    Type: Application
    Filed: October 30, 2018
    Publication date: November 5, 2020
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Joo-Young LEE, Seung-Hyun CHO, Youn-Hee KIM, Jin-Wuk SEOK, Woong LIM, Jong-Ho KIM, Dae-Yeol LEE, Se-Yoon JEONG, Hui-Yong KIM, Jin-Soo CHOI
  • Patent number: 10827173
    Abstract: Disclosed herein are a video decoding method and apparatus and a video encoding method and apparatus. In quantization and dequantization, multiple quantization methods and multiple dequantization methods may be used. The multiple quantization methods include a variable-rate step quantization method and a fixed-rate step quantization method. The variable-rate step quantization method may be a quantization method in which an increment in a quantization step depending on an increase in a value of a quantization parameter by 1 is not fixed. The fixed-rate step quantization method may be a quantization method in which the increment in the quantization step depending on the increase of the value of the quantization parameter by 1 is fixed.
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: November 3, 2020
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Woong Lim, Seung-Hyun Cho, Joo-Young Lee, Youn-Hee Kim, Jin-Wuk Seok, Jong-Ho Kim, Dae-Yeol Lee, Se-Yoon Jeong, Hui-Yong Kim, Jin-Soo Choi
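    The two quantization-step behaviours named in this abstract can be contrasted as in the sketch below: a variable-rate step, where the increment per +1 of the quantization parameter is not fixed (an HEVC-style exponential mapping is used purely as an illustration), and a fixed-rate step, where each +1 adds the same constant increment. Both formulas are assumptions, not the patented mappings.

```python
# Contrast of variable-rate vs fixed-rate quantization-step mappings.
def variable_rate_step(qp):
    return 2 ** ((qp - 4) / 6)          # per-QP increment grows with qp (HEVC-style example)

def fixed_rate_step(qp, base=0.5, inc=0.25):
    return base + inc * qp              # same increment for every +1 of qp

def quantize(value, step):
    return round(value / step) * step

for qp in (10, 11, 12):
    print(qp, round(variable_rate_step(qp), 3), fixed_rate_step(qp),
          quantize(7.3, variable_rate_step(qp)))
```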
  • Publication number: 20200175668
    Abstract: Disclosed herein are a method and apparatus for measuring video quality based on a perceptually sensitive region. The quality of video may be measured based on a perceptually sensitive region and a change in the perceptually sensitive region. The perceptually sensitive region includes a spatial perceptually sensitive region, a temporal perceptually sensitive region, and a spatio-temporal perceptually sensitive region. Perceptual weights are applied to a detected perceptually sensitive region and a change in the detected perceptually sensitive region. Distortion is calculated based on the perceptually sensitive region and the change in the perceptually sensitive region, and a result of quality measurement for a video is generated based on the calculated distortion.
    Type: Application
    Filed: November 27, 2019
    Publication date: June 4, 2020
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Se-Yoon JEONG, Dae-Yeol LEE, Seung-Hyun CHO, Hyunsuk KO, Youn-Hee KIM, Jong-Ho KIM, Jin-Wuk SEOK, Joo-Young LEE, Woong LIM, Hui-Yong KIM, Jin-Soo CHOI
  • Publication number: 20200169726
    Abstract: Disclosed herein are a method and apparatus for deriving motion prediction information and performing encoding and/or decoding on a video using the derived motion prediction information. Each of an encoding apparatus and a decoding apparatus generates a list for inter prediction of a target block. In the generation of the list, whether motion information of a candidate block is to be added to a list is determined based on information about the target block and the motion information. When the motion information passes a motion prediction boundary check, the motion information is added to the list. By means of the motion prediction boundary check, available motion information for prediction of the target block is selectively added to the list.
    Type: Application
    Filed: April 7, 2017
    Publication date: May 28, 2020
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Youn-Hee KIM, Jin-Wuk SEOK, Myung-Seok KI, Sung-Chang LIM, Hui-Yong KIM, Jin-Soo CHOI
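    The list construction in this abstract can be sketched as below: candidate motion information is added to the prediction list only if it passes a boundary check, i.e. the block it references stays inside an allowed region, and duplicates are skipped. The region, block geometry, and candidate values are illustrative assumptions.

```python
# Motion-prediction candidate list built with a boundary check (toy geometry).
from typing import List, Tuple

MV = Tuple[int, int]                                   # motion vector (dx, dy)

def passes_boundary_check(block_xy: MV, size: int, mv: MV,
                          bounds: Tuple[int, int]) -> bool:
    x, y = block_xy[0] + mv[0], block_xy[1] + mv[1]
    return 0 <= x and 0 <= y and x + size <= bounds[0] and y + size <= bounds[1]

def build_candidate_list(block_xy: MV, size: int,
                         candidates: List[MV], bounds: Tuple[int, int]) -> List[MV]:
    kept = []
    for mv in candidates:
        if passes_boundary_check(block_xy, size, mv, bounds) and mv not in kept:
            kept.append(mv)                            # only available motion info is listed
    return kept

print(build_candidate_list((112, 56), 16, [(-8, 0), (24, 0), (0, -64)], (128, 72)))
```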