Patents by Inventor Woo Taek LIM

Woo Taek LIM has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220157326
    Abstract: A method of generating a residual signal performed by an encoder includes identifying an input signal including an audio sample, generating a first residual signal from the input signal using linear predictive coding (LPC), generating a second residual signal containing less information than the first residual signal by transforming the first residual signal, transforming the second residual signal into a frequency domain, and generating a third residual signal containing less information than the second residual signal from the transformed second residual signal using frequency-domain prediction (FDP) coding.
    Type: Application
    Filed: October 21, 2021
    Publication date: May 19, 2022
    Inventors: Seung Kwon BEACK, Jongmo SUNG, Tae Jin LEE, Woo-taek LIM, Inseon JANG
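The abstract above describes a three-stage residual pipeline: a time-domain LPC residual, a transform, then frequency-domain prediction. The Python sketch below is a minimal illustration under assumed details; the LPC order, the DCT standing in for the transform, and the first-order predictor across frequency bins are illustrative stand-ins, not the patented design.

```python
# Hypothetical sketch of the residual-generation pipeline described in the
# abstract: time-domain LPC residual, a transform, then a simple
# frequency-domain prediction step. Parameter choices are illustrative.
import numpy as np
from scipy.signal import lfilter
from scipy.linalg import solve_toeplitz
from scipy.fft import dct

def lpc_coefficients(x, order=16):
    """Estimate LPC coefficients from the autocorrelation of x (Yule-Walker)."""
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
    return a  # predictor a[k] such that x[n] ~= sum_k a[k] * x[n-1-k]

def encode_block(x, lpc_order=16):
    a = lpc_coefficients(x, lpc_order)
    # First residual: prediction error of the time-domain LPC filter.
    res1 = lfilter(np.concatenate(([1.0], -a)), [1.0], x)
    # Second residual: here simply a transformed version of the first
    # (a DCT stands in for whatever transform the patent actually uses).
    res2 = dct(res1, norm="ortho")
    # Third residual: first-order prediction across frequency bins, a
    # stand-in for the frequency-domain prediction (FDP) coding step.
    res3 = np.empty_like(res2)
    res3[0] = res2[0]
    res3[1:] = res2[1:] - res2[:-1]
    return a, res3

rng = np.random.default_rng(0)
block = rng.standard_normal(1024)
coeffs, residual = encode_block(block)
print(coeffs.shape, residual.shape)
```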
  • Publication number: 20220020385
    Abstract: An audio signal encoding method performed by an encoder includes identifying a time-domain audio signal in units of blocks, quantizing a linear prediction coefficient extracted, using frequency-domain linear predictive coding (LPC), from a combined block in which a current original block of the audio signal and a chronologically adjacent previous original block are combined, generating a temporal envelope by dequantizing the quantized linear prediction coefficient, extracting a residual signal from the combined block based on the temporal envelope, quantizing the residual signal by one of time-domain quantization and frequency-domain quantization, and transforming the quantized residual signal and the quantized linear prediction coefficient into a bitstream.
    Type: Application
    Filed: July 15, 2021
    Publication date: January 20, 2022
    Inventors: Seung Kwon Beack, Jongmo Sung, Mi Suk Lee, Tae Jin Lee, Woo-taek Lim, Inseon Jang, Jin Soo Choi
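As a rough illustration of the block combination and envelope-based residual extraction summarized above, the sketch below combines a previous and a current block, derives a stand-in temporal envelope (a Hilbert magnitude rather than the dequantized frequency-domain LPC envelope of the patent), and quantizes the flattened residual with simple uniform quantizers.

```python
# A minimal, hypothetical sketch of block combination and envelope-based
# residual extraction. The envelope estimate and the quantizers are stand-ins.
import numpy as np
from scipy.signal import hilbert

def quantize_uniform(x, n_bits):
    """Uniform scalar quantizer: integer indices plus the step size needed to dequantize."""
    step = (np.max(np.abs(x)) + 1e-12) / (2 ** (n_bits - 1))
    return np.round(x / step).astype(np.int32), step

def encode_combined_block(prev_block, curr_block, env_bits=6, res_bits=8):
    combined = np.concatenate([prev_block, curr_block])   # previous + current block
    # Stand-in temporal envelope; the patent derives it from dequantized
    # frequency-domain LPC coefficients instead.
    envelope = np.abs(hilbert(combined)) + 1e-6
    env_idx, env_step = quantize_uniform(envelope, env_bits)
    envelope_hat = env_idx * env_step + 1e-6               # "dequantized" envelope
    residual = combined / envelope_hat                      # flatten the temporal envelope
    res_idx, res_step = quantize_uniform(residual, res_bits)
    return env_idx, env_step, res_idx, res_step

rng = np.random.default_rng(1)
prev, curr = rng.standard_normal(512), rng.standard_normal(512)
env_idx, env_step, res_idx, res_step = encode_combined_block(prev, curr)
print(env_idx.shape, res_idx.shape)
```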
  • Publication number: 20220005487
    Abstract: An audio signal encoding and decoding method using a neural network model, a method of training the neural network model, and an encoder and decoder performing the methods are disclosed. The encoding method includes computing the first feature information of an input signal using a recurrent encoding model, computing an output signal from the first feature information using a recurrent decoding model, calculating a residual signal by subtracting the output signal from the input signal, computing the second feature information of the residual signal using a nonrecurrent encoding model, and converting the first feature information and the second feature information to a bitstream.
    Type: Application
    Filed: July 6, 2021
    Publication date: January 6, 2022
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Jongmo SUNG, Seung Kwon BEACK, Mi Suk LEE, Tae Jin LEE, Woo-taek LIM, Inseon JANG
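A hedged PyTorch sketch of the two-stage idea in the preceding abstract: a recurrent model encodes and reconstructs the input, and a nonrecurrent model encodes the residual the recurrent path leaves behind. Layer types and sizes are arbitrary choices; quantization and bitstream packing are omitted.

```python
# Illustrative two-stage scheme: recurrent coding of the waveform plus
# nonrecurrent coding of the leftover residual. Sizes are arbitrary.
import torch
import torch.nn as nn

class RecurrentCoder(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.enc = nn.GRU(input_size=1, hidden_size=dim, batch_first=True)
        self.dec = nn.GRU(input_size=dim, hidden_size=dim, batch_first=True)
        self.out = nn.Linear(dim, 1)

    def forward(self, x):                      # x: (batch, time, 1)
        feat1, _ = self.enc(x)                 # first feature information
        y, _ = self.dec(feat1)
        return feat1, self.out(y)              # reconstruction of x

class NonRecurrentEncoder(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, dim, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(dim, dim, kernel_size=5, padding=2),
        )

    def forward(self, r):                      # r: (batch, time, 1)
        return self.net(r.transpose(1, 2)).transpose(1, 2)

recurrent, nonrecurrent = RecurrentCoder(), NonRecurrentEncoder()
x = torch.randn(2, 256, 1)
feat1, x_hat = recurrent(x)
residual = x - x_hat                           # what the recurrent path misses
feat2 = nonrecurrent(residual)                 # second feature information
print(feat1.shape, feat2.shape)                # both would be quantized into a bitstream
```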
  • Publication number: 20220005488
    Abstract: The encoding method includes computing the first feature information of an input signal using a recurrent encoding model, quantizing the first feature information and producing the first feature bitstream, computing the first output signal from the quantized first feature information using a recurrent decoding model, computing the second feature information of the input signal using a nonrecurrent encoding model, quantizing the second feature information and producing the second feature bitstream, computing the second output signal from the quantized second feature information using a nonrecurrent decoding model, determining an encoding mode based on the input signal, the first and second output signals, and the first and second feature bitstreams, and outputting an overall bitstream by multiplexing an encoding mode bit and one of the first feature bitstream and the second feature bitstream depending on the encoding mode.
    Type: Application
    Filed: July 6, 2021
    Publication date: January 6, 2022
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Jongmo SUNG, Seung Kwon BEACK, Mi Suk LEE, Tae Jin LEE, Woo-taek LIM, Inseon JANG
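The mode decision in the abstract above can be illustrated with two stand-in codecs: encode the block both ways, reconstruct with each, and keep the bitstream of the mode with the better rate-distortion trade-off together with one mode bit. The toy uniform quantizers and the weighting constant below are purely illustrative, not the patent's recurrent and nonrecurrent models.

```python
# Illustrative encoding-mode decision between two candidate codecs.
import numpy as np

def toy_codec(x, n_bits):
    """Uniform quantizer standing in for a learned codec: returns indices and reconstruction."""
    step = (np.max(np.abs(x)) + 1e-12) / (2 ** (n_bits - 1))
    idx = np.round(x / step).astype(np.int32)
    return idx, idx * step

def encode_with_mode_decision(x, lam=1e-4):
    idx_a, rec_a = toy_codec(x, n_bits=4)      # stand-in for the recurrent path
    idx_b, rec_b = toy_codec(x, n_bits=6)      # stand-in for the nonrecurrent path
    # Weigh reconstruction error against bitstream size (illustrative weighting).
    cost_a = np.mean((x - rec_a) ** 2) + lam * 4 * x.size
    cost_b = np.mean((x - rec_b) ** 2) + lam * 6 * x.size
    mode = 0 if cost_a <= cost_b else 1        # one encoding-mode bit
    payload = idx_a if mode == 0 else idx_b
    return mode, payload                       # multiplexed into the overall bitstream

rng = np.random.default_rng(2)
mode, payload = encode_with_mode_decision(rng.standard_normal(1024))
print("selected mode:", mode, "payload symbols:", payload.shape)
```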
  • Publication number: 20210398547
    Abstract: An audio signal encoding method performed by an encoder includes identifying a time-domain audio signal in units of blocks, generating a combined block by combining i) a current original block of the audio signal and ii) a previous original block chronologically adjacent to the current original block, extracting a first residual signal of the frequency domain from the combined block using time-domain linear predictive coding, overlapping chronologically adjacent first residual signals among the first residual signals converted into the time domain, and quantizing a second residual signal of the time domain extracted from the overlapped first residual signal by converting the second residual signal into the frequency domain using frequency-domain linear predictive coding.
    Type: Application
    Filed: May 26, 2021
    Publication date: December 23, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Seung Kwon BEACK, Jongmo SUNG, Mi Suk LEE, Tae Jin LEE, Woo-taek LIM, Inseon JANG
  • Patent number: 11205442
    Abstract: Provided is a sound event recognition method that may improve sound event recognition performance using a correlation between different sound signal feature parameters based on a neural network; in detail, the method may extract a sound signal feature parameter from a sound signal including a sound event, and recognize the sound event included in the sound signal by applying a convolutional neural network (CNN) trained using the sound signal feature parameter.
    Type: Grant
    Filed: September 5, 2019
    Date of Patent: December 21, 2021
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Young Ho Jeong, Sang Won Suh, Tae Jin Lee, Woo-taek Lim, Hui Yong Kim
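The following PyTorch sketch shows the general shape of such a CNN-based recognizer: an assumed 2D sound-signal feature map (for example a log-mel spectrogram) is passed through convolutional layers and mapped to per-clip sound-event scores. The architecture and feature dimensions are illustrative, not the patented network.

```python
# Minimal CNN sound-event classifier over an assumed 2D feature map.
import torch
import torch.nn as nn

class SoundEventCNN(nn.Module):
    def __init__(self, n_events=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_events),
        )

    def forward(self, x):                       # x: (batch, 1, mel_bands, frames)
        return self.classifier(self.features(x))

model = SoundEventCNN(n_events=10)
feature_map = torch.randn(4, 1, 64, 128)        # e.g. 64 mel bands x 128 frames
logits = model(feature_map)                     # per-clip sound-event scores
print(logits.shape)                             # torch.Size([4, 10])
```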
  • Publication number: 20210390967
    Abstract: Disclosed is a method of encoding and decoding an audio signal using linear predictive coding (LPC) and an encoder and a decoder that perform the method. The method of encoding an audio signal to be performed by the encoder includes identifying a time-domain audio signal block-wise, quantizing a linear prediction coefficient obtained from a block of the audio signal through the LPC, generating an envelope based on the quantized linear prediction coefficient, extracting a residual signal based on the envelope and a result of converting the block into a frequency domain, grouping the residual signal by each sub-band and determining a scale factor for quantizing the grouped residual signal, quantizing the residual signal using the scale factor, and converting the quantized residual signal and the quantized linear prediction coefficient into a bitstream and transmitting the bitstream to a decoder.
    Type: Application
    Filed: April 28, 2021
    Publication date: December 16, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Seung Kwon Beack, Jongmo SUNG, Mi Suk LEE, Tae Jin LEE, Woo-taek LIM, Inseon JANG, Jin Soo CHOI
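The sub-band scale-factor step in the abstract above can be sketched as follows: the frequency-domain residual is split into bands, a scale factor is chosen per band, and the samples of each band are quantized relative to it. The band layout and bit depth are assumptions, not the patented choices.

```python
# Illustrative per-sub-band scale-factor quantization of a residual spectrum.
import numpy as np

def quantize_residual_by_subband(residual_freq, n_bands=8, n_bits=6):
    bands = np.array_split(residual_freq, n_bands)
    scale_factors, indices = [], []
    for band in bands:
        sf = np.max(np.abs(band)) + 1e-12              # per-band scale factor
        q = np.round(band / sf * (2 ** (n_bits - 1) - 1)).astype(np.int32)
        scale_factors.append(sf)
        indices.append(q)
    return np.array(scale_factors), indices

rng = np.random.default_rng(3)
residual_spectrum = rng.standard_normal(1024)           # stand-in residual spectrum
sfs, idx = quantize_residual_by_subband(residual_spectrum)
print(sfs.shape, [b.shape for b in idx][:3])
```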
  • Publication number: 20210366497
    Abstract: Methods of encoding and decoding a speech signal using a neural network model that recognizes sound sources, and encoding and decoding apparatuses for performing the methods are provided. A method of encoding a speech signal includes identifying an input signal for a plurality of sound sources; generating a latent signal by encoding the input signal; obtaining a plurality of sound source signals by separating the latent signal for each of the plurality of sound sources; determining a number of bits used for quantization of each of the plurality of sound source signals according to a type of each of the plurality of sound sources; quantizing each of the plurality of sound source signals based on the determined number of bits; and generating a bitstream by combining the plurality of quantized sound source signals.
    Type: Application
    Filed: May 20, 2021
    Publication date: November 25, 2021
    Applicants: Electronics and Telecommunications Research Institute, The Trustees of Indiana University
    Inventors: Woo-taek LIM, Seung Kwon BEACK, Jongmo SUNG, Mi Suk LEE, Tae Jin LEE, Inseon JANG, Minje KIM, Haici YANG
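A speculative numpy sketch of the per-source bit allocation described above: each (already separated) sound-source latent is quantized with a bit depth looked up from its source type. The type-to-bits table and the uniform quantizer are illustrative stand-ins for the patent's allocation and quantization.

```python
# Illustrative per-source-type bit allocation and quantization of latent signals.
import numpy as np

BITS_PER_SOURCE_TYPE = {"speech": 8, "music": 6, "noise": 4}   # hypothetical table

def quantize_latent(latent, n_bits):
    step = (np.max(np.abs(latent)) + 1e-12) / (2 ** (n_bits - 1))
    return np.round(latent / step).astype(np.int32), step

def encode_sources(source_latents):
    """source_latents: dict mapping source type -> separated latent signal."""
    bitstream = {}
    for src_type, latent in source_latents.items():
        n_bits = BITS_PER_SOURCE_TYPE.get(src_type, 4)
        bitstream[src_type] = quantize_latent(latent, n_bits)
    return bitstream

rng = np.random.default_rng(4)
latents = {t: rng.standard_normal(128) for t in ("speech", "music", "noise")}
encoded = encode_sources(latents)
print({k: v[0].shape for k, v in encoded.items()})
```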
  • Patent number: 11133015
    Abstract: A method of predicting a channel parameter of an original signal from a downmix signal is disclosed. The method may include generating an input feature map to be used to predict a channel parameter of the original signal based on a downmix signal of the original signal, determining an output feature map including a predicted parameter to be used to predict the channel parameter by applying the input feature map to a neural network, generating a label map including information associated with the channel parameter of the original signal, and predicting the channel parameter of the original signal by comparing the output feature map and the label map.
    Type: Grant
    Filed: November 5, 2018
    Date of Patent: September 28, 2021
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Seung Kwon Beack, Woo-taek Lim, Jongmo Sung, Mi Suk Lee, Tae Jin Lee, Hui Yong Kim
  • Publication number: 20210174815
    Abstract: Disclosed are a quantization method for a latent vector and a computing device for performing the quantization method. The quantization method includes performing information shaping on the latent vector resulting from reduction in a dimension of an input signal using a target neural network; clamping a residual signal of the latent vector derived based on the information shaping; performing rescaling on the clamped residual signal; and performing quantization on the rescaled residual signal.
    Type: Application
    Filed: December 4, 2020
    Publication date: June 10, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Seung Kwon BEACK, Jooyoung LEE, Jongmo SUNG, Mi Suk LEE, Tae Jin LEE, Woo-taek LIM, Seunghyun CHO, Jin Soo CHOI
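Because the abstract names the steps (shaping, clamping, rescaling, quantization) without specifying them, the sketch below uses stand-ins: a coarse rounding prediction for the information shaping, and an assumed clamp range and bit depth.

```python
# Heavily hedged sketch of the shaping -> clamping -> rescaling -> quantization chain.
import numpy as np

def quantize_latent_vector(z, clamp_value=1.0, n_bits=5):
    shaped = np.round(z)                 # stand-in for the information-shaping step
    residual = z - shaped                # residual signal of the latent vector
    clamped = np.clip(residual, -clamp_value, clamp_value)
    rescaled = clamped / clamp_value     # map the clamped residual into [-1, 1]
    levels = 2 ** n_bits - 1
    indices = np.round((rescaled + 1.0) / 2.0 * levels).astype(np.int32)
    return shaped, indices

rng = np.random.default_rng(5)
latent = rng.standard_normal(64) * 2.0
shaped, idx = quantize_latent_vector(latent)
print(shaped[:4], idx[:4])
```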
  • Publication number: 20210174252
    Abstract: Disclosed is an apparatus and method for augmenting training data using a notch filter. The method may include obtaining original data, and obtaining training data having a modified frequency component from the original data by filtering the original data using a filter configured to remove a component of a predetermined frequency band.
    Type: Application
    Filed: July 13, 2020
    Publication date: June 10, 2021
    Applicants: Electronics and Telecommunications Research Institute, Kyungpook National University Industry-Academic Cooperation Foundation
    Inventors: Young Ho JEONG, Soo Young PARK, Sang Won SUH, Woo-taek LIM, Minhan KIM, Seokjin LEE
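The augmentation itself is straightforward to illustrate with scipy: design a notch filter at some frequency and filter the original data to obtain a spectrally modified training example. The notch frequency and quality factor below are arbitrary choices.

```python
# Illustrative notch-filter data augmentation of an audio signal.
import numpy as np
from scipy.signal import iirnotch, filtfilt

def augment_with_notch(x, sample_rate, notch_hz=1000.0, q=30.0):
    b, a = iirnotch(w0=notch_hz, Q=q, fs=sample_rate)   # design the notch filter
    return filtfilt(b, a, x)                             # zero-phase filtering

rng = np.random.default_rng(6)
original = rng.standard_normal(16000)                    # 1 s of noise at 16 kHz
augmented = augment_with_notch(original, sample_rate=16000)
print(original.shape, augmented.shape)
```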
  • Publication number: 20210166706
    Abstract: Disclosed is an apparatus and method for encoding/decoding an audio signal using information of a previous frame. An audio signal encoding method includes: generating a current latent vector by reducing the dimension of a current frame of an audio signal; generating a concatenation vector by concatenating a previous latent vector, generated by reducing the dimension of a previous frame of the audio signal, with the current latent vector; and encoding and quantizing the concatenation vector.
    Type: Application
    Filed: November 27, 2020
    Publication date: June 3, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Woo-taek LIM, Seung Kwon BEACK, Jongmo SUNG, Mi Suk LEE, Tae Jin LEE
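A brief PyTorch sketch of the concatenation described above: the latent of the previous frame is concatenated with the latent of the current frame before encoding and quantization. The linear projection standing in for the dimension-reducing encoder is hypothetical.

```python
# Illustrative concatenation of previous- and current-frame latent vectors.
import torch
import torch.nn as nn

frame_len, latent_dim = 512, 64
to_latent = nn.Linear(frame_len, latent_dim)       # stand-in dimension reduction

prev_frame = torch.randn(1, frame_len)
curr_frame = torch.randn(1, frame_len)

prev_latent = to_latent(prev_frame)                # latent of the previous frame
curr_latent = to_latent(curr_frame)                # latent of the current frame
concat = torch.cat([prev_latent, curr_latent], dim=-1)
print(concat.shape)                                # (1, 2 * latent_dim), then encoded/quantized
```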
  • Publication number: 20210166701
    Abstract: An audio signal encoding/decoding device and method using a filter bank are disclosed. The audio signal encoding method includes generating a plurality of first audio signals by performing filtering on an input audio signal using an analysis filter bank, generating a plurality of second audio signals by performing downsampling on the first audio signals, and outputting a bitstream by encoding and quantizing the second audio signals.
    Type: Application
    Filed: November 25, 2020
    Publication date: June 3, 2021
    Inventors: Woo-taek LIM, Seung Kwon BEACK, Jongmo SUNG, Mi Suk LEE, Tae Jin LEE
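As a rough illustration of the analysis stage above, the scipy sketch below splits the input into band signals with a bank of Butterworth filters and then downsamples each band. The filter design and decimation factor are illustrative, and aliasing handling (normally resolved by the synthesis bank) is omitted.

```python
# Illustrative analysis filter bank followed by downsampling of each band.
import numpy as np
from scipy.signal import butter, lfilter

def analysis_filter_bank(x, sample_rate, n_bands=4):
    edges = np.linspace(0, sample_rate / 2, n_bands + 1)
    band_signals = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        if lo == 0:
            b, a = butter(4, hi, btype="lowpass", fs=sample_rate)
        else:
            b, a = butter(4, [lo, min(hi, sample_rate / 2 * 0.999)],
                          btype="bandpass", fs=sample_rate)
        band_signals.append(lfilter(b, a, x))
    return band_signals

rng = np.random.default_rng(7)
signal = rng.standard_normal(8000)
bands = analysis_filter_bank(signal, sample_rate=16000)
downsampled = [band[::len(bands)] for band in bands]   # decimate (aliasing handling omitted)
print([b.shape for b in downsampled])
```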
  • Publication number: 20210074306
    Abstract: Provided are an audio encoding method, an audio decoding method, an audio encoding apparatus, and an audio decoding apparatus using dynamic model parameters. The audio encoding method using dynamic model parameters may use dynamic model parameters corresponding to each of the levels of the encoding network when reducing the dimension of an audio signal in the encoding network. In addition, the audio decoding method using dynamic model parameters may use a dynamic model parameter corresponding to each of the levels of the decoding network when extending the dimension of an audio signal in the decoding network.
    Type: Application
    Filed: September 10, 2020
    Publication date: March 11, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Jongmo SUNG, Seung Kwon BEACK, Mi Suk LEE, Tae Jin LEE, Woo-taek LIM, Jin Soo CHOI
  • Publication number: 20200312350
    Abstract: A sound event detection method includes receiving a sound signal, determining and outputting whether a sound event is present in the sound signal by applying a trained neural network to the received sound signal, and performing post-processing of the output to reduce errors in the determination, wherein the neural network is trained to stop early at an optimal epoch based on a different threshold for each of at least one sound event present in a pre-processed sound signal. That is, the sound event detection method may detect an optimal epoch at which to stop training by applying different characteristics for the respective sound events, and may improve sound event detection performance based on the optimal epoch.
    Type: Application
    Filed: September 5, 2019
    Publication date: October 1, 2020
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Woo-taek LIM, Sang Won SUH, Young Ho JEONG
  • Publication number: 20200302949
    Abstract: Provided is a sound event recognition method that may improve sound event recognition performance using a correlation between different sound signal feature parameters based on a neural network; in detail, the method may extract a sound signal feature parameter from a sound signal including a sound event, and recognize the sound event included in the sound signal by applying a convolutional neural network (CNN) trained using the sound signal feature parameter.
    Type: Application
    Filed: September 5, 2019
    Publication date: September 24, 2020
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Young Ho JEONG, Sang Won SUH, Tae Jin LEE, Woo-taek LIM, Hui Yong KIM
  • Publication number: 20200302917
    Abstract: A data augmentation method includes extracting one or more basis vectors and coefficient vectors corresponding to sound source data classified in advance into a target class by applying non-negative matrix factorization (NMF) to the sound source data, generating a new basis vector using the extracted basis vectors, and generating new sound source data using the generated new basis vector and the extracted coefficient vectors.
    Type: Application
    Filed: October 18, 2019
    Publication date: September 24, 2020
    Applicants: Electronics and Telecommunications Research Institute, GANGNEUNG-WONJU NATIONAL UNIVERSITY INDUSTRY ACADEMY COOPERATION GROUP
    Inventors: Young Ho JEONG, Sang Won SUH, Woo-taek LIM, SUNG WOOK PARK, HYEON GI MOON, YOUNG CHEOL PARK, SHIN HYUK JEON
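The NMF step is easy to sketch with scikit-learn: factor non-negative class data into basis and coefficient vectors, form a new basis, and recombine. How the patent actually forms the new basis is not stated in the abstract; the random convex combination of learned bases used below is only a guess.

```python
# Illustrative NMF-based augmentation of class data (e.g. magnitude spectrograms).
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(8)
spectrogram = np.abs(rng.standard_normal((64, 100)))       # stand-in class data

nmf = NMF(n_components=8, init="random", random_state=0, max_iter=500)
W = nmf.fit_transform(spectrogram)                          # basis vectors (64 x 8)
H = nmf.components_                                         # coefficient vectors (8 x 100)

# Form new bases as random convex combinations of the learned bases (a guess).
weights = rng.dirichlet(np.ones(W.shape[1]), size=W.shape[1])   # 8 x 8 mixing weights
W_new = W @ weights.T

augmented = W_new @ H                                       # new sound-source-like data
print(augmented.shape)
```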
  • Publication number: 20200211576
    Abstract: A loss function determining method and a loss function determining system for an audio signal are disclosed. Provided is a method of determining a loss function that can be defined when a neural network is used to reconstruct an audio signal.
    Type: Application
    Filed: December 27, 2019
    Publication date: July 2, 2020
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Seung Kwon BEACK, Woo-taek LIM, Tae Jin LEE
  • Patent number: 10552711
    Abstract: Disclosed is an apparatus and method for extracting a sound source from a multi-channel audio signal. A sound source extracting method includes transforming a multi-channel audio signal into two-dimensional (2D) data, extracting a plurality of feature maps by inputting the 2D data into a convolutional neural network (CNN) including at least one layer, and extracting a sound source from the multi-channel audio signal using the feature maps.
    Type: Grant
    Filed: November 29, 2018
    Date of Patent: February 4, 2020
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Woo-taek Lim, Seung Kwon Beack
  • Patent number: 10540988
    Abstract: Disclosed is a sound event detecting method including receiving an audio signal, transforming the audio signal into a two-dimensional (2D) signal, extracting a feature map by training a convolutional neural network (CNN) using the 2D signal, pooling the feature map based on a frequency, and determining whether a sound event occurs with respect to each of at least one time interval based on a result of the pooling.
    Type: Grant
    Filed: November 20, 2018
    Date of Patent: January 21, 2020
    Assignee: Electronics and Telecommunications Research Institute
    Inventor: Woo-taek Lim
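The frequency-axis pooling described in the last abstract above can be sketched in PyTorch as follows: convolutional features of a 2D time-frequency input are averaged over the frequency dimension only, so a frame-wise classifier can output per-time-interval event scores. All layer sizes are illustrative, not the patented design.

```python
# Illustrative CNN detector with pooling over the frequency axis only.
import torch
import torch.nn as nn

class FrequencyPooledDetector(nn.Module):
    def __init__(self, n_events=10):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.frame_classifier = nn.Linear(32, n_events)

    def forward(self, x):                      # x: (batch, 1, freq, time)
        h = self.conv(x)                       # (batch, 32, freq, time)
        h = h.mean(dim=2)                      # pool the feature map over frequency
        h = h.transpose(1, 2)                  # (batch, time, 32)
        return self.frame_classifier(h)        # per-time-interval event scores

model = FrequencyPooledDetector()
spectrogram = torch.randn(2, 1, 64, 200)       # 64 frequency bins x 200 frames
scores = model(spectrogram)
print(scores.shape)                            # torch.Size([2, 200, 10])
```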