Patents by Inventor Woo Taek LIM

Woo Taek LIM has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210174252
    Abstract: Disclosed is an apparatus and method for augmenting training data using a notch filter. The method may include obtaining original data, and obtaining training data having a modified frequency component from the original data by filtering the original data using a filter configured to remove a component of a predetermined frequency band.
    Type: Application
    Filed: July 13, 2020
    Publication date: June 10, 2021
    Applicants: Electronics and Telecommunications Research Institute, Kyungpook National University Industry-Academic Cooperation Foundation
    Inventors: Young Ho JEONG, Soo Young PARK, Sang Won SUH, Woo-taek LIM, Minhan KIM, Seokjin LEE
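The augmentation idea in the abstract above (filtering a predetermined frequency band out of the original data to obtain modified training data) can be sketched as follows. This is a minimal illustration, assuming a simple FFT zero-mask as the band-removal filter; the patent's actual notch filter design is not given in the listing.

```python
import numpy as np

def augment_with_band_stop(signal, sample_rate, band_hz):
    """Return a training variant of `signal` with one frequency band removed.

    Sketch of the abstract's idea: filter the original data so a
    predetermined band is suppressed, yielding augmented training data.
    The 'notch' here is a simple FFT mask, an assumed stand-in for the
    patent's filter.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    lo, hi = band_hz
    spectrum[(freqs >= lo) & (freqs <= hi)] = 0.0  # remove the band
    return np.fft.irfft(spectrum, n=len(signal))

# Usage: removing a 950-1050 Hz band from a pure 1 kHz tone leaves near-silence.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 1000 * t)
augmented = augment_with_band_stop(tone, sr, (950.0, 1050.0))
```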
  • Publication number: 20210174815
    Abstract: Disclosed are a quantizing method for a latent vector and a computing device for performing the quantizing method. A quantizing method of a latent vector includes performing information shaping on the latent vector resulting from reduction in a dimension of an input signal using a target neural network; clamping a residual signal of the latent vector derived based on the information shaping; performing rescaling on the clamped residual signal; and performing quantization on the rescaled residual signal.
    Type: Application
    Filed: December 4, 2020
    Publication date: June 10, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Seung Kwon BEACK, Jooyoung LEE, Jongmo SUNG, Mi Suk LEE, Tae Jin LEE, Woo-taek LIM, Seunghyun CHO, Jin Soo CHOI
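The clamp-rescale-quantize pipeline described in the abstract above can be sketched in a few lines. This is a hedged illustration only: the residual is taken against an information-shaping prediction, and a uniform scalar quantizer is assumed; the patent's actual shaping and quantizer are not detailed in the listing.

```python
import numpy as np

def quantize_latent(latent, predicted, clamp_val=1.0, n_bits=8):
    """Sketch of the abstract's pipeline: the residual between the latent
    vector and an information-shaping prediction is clamped, rescaled to
    [0, 1], then uniformly quantized. The names and the uniform quantizer
    are assumptions, not the patent's exact design."""
    residual = latent - predicted                        # residual after shaping
    residual = np.clip(residual, -clamp_val, clamp_val)  # clamping
    scaled = (residual + clamp_val) / (2 * clamp_val)    # rescale to [0, 1]
    levels = 2 ** n_bits - 1
    return np.round(scaled * levels).astype(np.int64)    # uniform quantization

codes = quantize_latent(np.array([0.5, -2.0, 2.0]), np.zeros(3))
```

Clamping before rescaling bounds the quantizer's input range, so a fixed bit budget is not wasted on rare outliers in the residual.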
  • Publication number: 20210166701
    Abstract: An audio signal encoding/decoding device and method using a filter bank is disclosed. The audio signal encoding method includes generating a plurality of first audio signals by performing filtering on an input audio signal using an analysis filter bank, generating a plurality of second audio signals by performing downsampling on the first audio signals, and outputting a bitstream by encoding and quantizing the second audio signals.
    Type: Application
    Filed: November 25, 2020
    Publication date: June 3, 2021
    Inventors: Woo-taek LIM, Seung Kwon BEACK, Jongmo SUNG, Mi Suk LEE, Tae Jin LEE
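The encoder front end in the abstract above (analysis filter bank, then per-band downsampling) can be sketched as below. The FIR filters used here are crude placeholders; the patent does not specify the filter design in this listing.

```python
import numpy as np

def analysis_and_downsample(signal, filters):
    """Sketch of the abstract's encoder front end: an analysis filter bank
    splits the input into first sub-band signals, each of which is then
    downsampled by the number of bands to form the second signals."""
    n_bands = len(filters)
    subbands = [np.convolve(signal, h, mode="same") for h in filters]  # filter bank
    return [band[::n_bands] for band in subbands]                      # downsampling

# Usage: a crude 2-band split (lowpass = moving average, highpass = difference).
x = np.random.default_rng(0).standard_normal(1000)
bands = analysis_and_downsample(x, [np.ones(4) / 4, np.array([1.0, -1.0])])
```

Downsampling each band by the band count keeps the total sample count roughly constant, which is what makes the sub-band representation practical to encode and quantize.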
  • Publication number: 20210166706
    Abstract: Disclosed is an apparatus and method for encoding/decoding an audio signal using information of a previous frame. An audio signal encoding method includes: generating a current latent vector by reducing dimension of a current frame of an audio signal; generating a concatenation vector by concatenating a previous latent vector generated by reducing dimension of a previous frame of the audio signal with the current latent vector; and encoding and quantizing the concatenation vector.
    Type: Application
    Filed: November 27, 2020
    Publication date: June 3, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Woo-taek LIM, Seung Kwon BEACK, Jongmo SUNG, Mi Suk LEE, Tae Jin LEE
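The per-frame flow in the abstract above (reduce the current frame to a latent vector, concatenate with the previous frame's latent vector, then encode and quantize) can be sketched as follows. The `encoder` callable is a stand-in for the dimension-reducing network, which the listing does not describe.

```python
import numpy as np

def encode_frame(frame, prev_latent, encoder):
    """Sketch of the abstract's flow: reduce the current frame to a latent
    vector, concatenate it with the previous frame's latent vector, and
    pass the concatenation on to encoding/quantization."""
    current_latent = encoder(frame)                         # dimension reduction
    concat = np.concatenate([prev_latent, current_latent])  # previous-frame info
    return concat, current_latent                           # keep latent for next frame

# Usage with a toy "encoder" that averages groups of 4 samples.
toy_encoder = lambda f: f.reshape(-1, 4).mean(axis=1)
_, latent0 = encode_frame(np.ones(16), np.zeros(4), toy_encoder)
concat, _ = encode_frame(np.zeros(16), latent0, toy_encoder)
```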
  • Publication number: 20210074306
    Abstract: Provided are an audio encoding method, an audio decoding method, an audio encoding apparatus, and an audio decoding apparatus using dynamic model parameters. The audio encoding method may use dynamic model parameters corresponding to each of the levels of the encoding network when reducing the dimension of an audio signal in the encoding network. Likewise, the audio decoding method may use dynamic model parameters corresponding to each of the levels of the decoding network when extending the dimension of an audio signal in the decoding network.
    Type: Application
    Filed: September 10, 2020
    Publication date: March 11, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Jongmo SUNG, Seung Kwon BEACK, Mi Suk LEE, Tae Jin LEE, Woo-taek LIM, Jin Soo CHOI
  • Publication number: 20200312350
    Abstract: A sound event detection method includes receiving a sound signal and determining and outputting whether a sound event is present in the sound signal by applying a trained neural network to the received sound signal, and performing post-processing of the output to reduce an error in the determination, wherein the neural network is trained to early stop at an optimal epoch based on a different threshold for each of at least one sound event present in a pre-processed sound signal. That is, the sound event detection method may detect an optimal epoch at which to stop training by applying a different threshold to each sound event, and may improve sound event detection performance based on that optimal epoch.
    Type: Application
    Filed: September 5, 2019
    Publication date: October 1, 2020
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Woo-taek LIM, Sang Won SUH, Young Ho JEONG
  • Publication number: 20200302949
    Abstract: Provided is a sound event recognition method that may improve sound event recognition performance using a correlation between different sound signal feature parameters based on a neural network. In detail, the method may extract a sound signal feature parameter from a sound signal including a sound event, and recognize the sound event included in the sound signal by applying a convolutional neural network (CNN) trained using the sound signal feature parameter.
    Type: Application
    Filed: September 5, 2019
    Publication date: September 24, 2020
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Young Ho JEONG, Sang Won SUH, Tae Jin LEE, Woo-taek LIM, Hui Yong KIM
  • Publication number: 20200302917
    Abstract: A data augmentation method includes extracting one or more basis vectors and coefficient vectors corresponding to sound source data classified in advance into a target class by applying non-negative matrix factorization (NMF) to the sound source data, generating a new basis vector using the extracted basis vectors, and generating new sound source data using the generated new basis vector and the extracted coefficient vectors.
    Type: Application
    Filed: October 18, 2019
    Publication date: September 24, 2020
    Applicants: Electronics and Telecommunications Research Institute, GANGNEUNG-WONJU NATIONAL UNIVERSITY INDUSTRY ACADEMY COOPERATION GROUP
    Inventors: Young Ho JEONG, Sang Won SUH, Woo-taek LIM, SUNG WOOK PARK, HYEON GI MOON, YOUNG CHEOL PARK, SHIN HYUK JEON
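The generation step of the NMF-based augmentation above (forming a new basis vector from the extracted basis vectors, then synthesizing new data with the extracted coefficients) can be sketched as below. The convex combination is only an assumed way of combining bases; the listing does not say how the new basis vector is generated.

```python
import numpy as np

def augment_from_nmf(W_a, W_b, H, alpha=0.5):
    """Sketch of the abstract's augmentation step: given basis matrices
    W_a, W_b and a coefficient matrix H already extracted by NMF from
    same-class sound sources, form a new basis by convex combination and
    synthesize new sound source data as W_new @ H. The combination rule
    is an assumption."""
    W_new = alpha * W_a + (1 - alpha) * W_b  # new basis vector(s)
    return W_new @ H                         # new sound source data

new_data = augment_from_nmf(
    np.array([[2.0], [0.0]]),   # basis from source A
    np.array([[0.0], [2.0]]),   # basis from source B
    np.array([[1.0, 2.0]]),     # shared coefficient (activation) matrix
)
```

Because all factors in NMF are non-negative, any convex combination of same-class bases stays non-negative, so the synthesized data remains a valid NMF reconstruction for that class.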
  • Publication number: 20200211576
    Abstract: A loss function determining method and a loss function determining system for an audio signal are disclosed. Provided is a method of determining a loss function that can be defined when a neural network is used to reconstruct an audio signal.
    Type: Application
    Filed: December 27, 2019
    Publication date: July 2, 2020
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Seung Kwon BEACK, Woo-taek LIM, Tae Jin LEE
  • Patent number: 10552711
    Abstract: Disclosed is an apparatus and method for extracting a sound source from a multi-channel audio signal. A sound source extracting method includes transforming a multi-channel audio signal into two-dimensional (2D) data, extracting a plurality of feature maps by inputting the 2D data into a convolutional neural network (CNN) including at least one layer, and extracting a sound source from the multi-channel audio signal using the feature maps.
    Type: Grant
    Filed: November 29, 2018
    Date of Patent: February 4, 2020
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Woo-taek Lim, Seung Kwon Beack
  • Patent number: 10540988
    Abstract: Disclosed is a sound event detecting method including receiving an audio signal, transforming the audio signal into a two-dimensional (2D) signal, extracting a feature map by training a convolutional neural network (CNN) using the 2D signal, pooling the feature map based on a frequency, and determining whether a sound event occurs with respect to each of at least one time interval based on a result of the pooling.
    Type: Grant
    Filed: November 20, 2018
    Date of Patent: January 21, 2020
    Assignee: Electronics and Telecommunications Research Institute
    Inventor: Woo-taek Lim
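The frequency pooling in the abstract above (pooling a CNN feature map based on frequency, then deciding event presence per time interval) can be sketched as follows. Max pooling over the whole frequency axis is an assumption here; the listing only says the pooling is "based on a frequency".

```python
import numpy as np

def pool_over_frequency(feature_map):
    """Sketch of the abstract's pooling step: a feature map shaped
    (time, frequency) is max-pooled across the frequency axis, leaving one
    score per time step. Max pooling is an assumed choice."""
    return feature_map.max(axis=1)  # collapse the frequency axis

def detect_events(feature_map, threshold=0.5):
    # One presence decision per time interval, based on the pooled score.
    pooled = pool_over_frequency(feature_map)
    return pooled > threshold

decisions = detect_events(np.array([[0.1, 0.9], [0.2, 0.3]]))
```

Pooling away frequency makes the decision invariant to where in the spectrum the event's energy lies, while keeping the time axis for interval-level detection.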
  • Publication number: 20190287550
    Abstract: Disclosed is a sound event detecting method including receiving an audio signal, transforming the audio signal into a two-dimensional (2D) signal, extracting a feature map by training a convolutional neural network (CNN) using the 2D signal, pooling the feature map based on a frequency, and determining whether a sound event occurs with respect to each of at least one time interval based on a result of the pooling.
    Type: Application
    Filed: November 20, 2018
    Publication date: September 19, 2019
    Applicant: Electronics and Telecommunications Research Institute
    Inventor: Woo-taek LIM
  • Publication number: 20190180142
    Abstract: Disclosed is an apparatus and method for extracting a sound source from a multi-channel audio signal. A sound source extracting method includes transforming a multi-channel audio signal into two-dimensional (2D) data, extracting a plurality of feature maps by inputting the 2D data into a convolutional neural network (CNN) including at least one layer, and extracting a sound source from the multi-channel audio signal using the feature maps.
    Type: Application
    Filed: November 29, 2018
    Publication date: June 13, 2019
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Woo-taek LIM, Seung Kwon BEACK
  • Publication number: 20190180763
    Abstract: A method of predicting a channel parameter of an original signal from a downmix signal is disclosed. The method may include generating an input feature map to be used to predict a channel parameter of the original signal based on a downmix signal of an original signal, determining an output feature map including a predicted parameter to be used to predict the channel parameter by applying the input feature map to a neural network, generating a label map including information associated with the channel parameter of the original signal, and predicting the channel parameter of the original signal by comparing the output feature map and the label map.
    Type: Application
    Filed: November 5, 2018
    Publication date: June 13, 2019
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Seung Kwon BEACK, Woo-taek LIM, Jongmo SUNG, Mi Suk LEE, Tae Jin LEE, Hui Yong KIM
  • Patent number: 10271137
    Abstract: A method of detecting a sound event includes receiving sound signals using one or more directional microphones, extracting a time interval of each of the sound signals, extracting time information and an azimuth of a sound event included in the sound signals during the extracted time interval, mixing the sound signals received from the directional microphones using the extracted time interval, and determining a direction of the sound event generated at a specific time from a mixed sound signal obtained through the mixing using the extracted time information and azimuth of the sound event.
    Type: Grant
    Filed: June 26, 2018
    Date of Patent: April 23, 2019
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Young Ho Jeong, Sang Won Suh, Jae-hyoun Yoo, Tae Jin Lee, Woo-taek Lim, Hui Yong Kim
  • Patent number: 9934420
    Abstract: A fingerprint information processing method and apparatus in which the method includes: obtaining a fingerprint image; calculating an average value of shading values of pixels in a specific region based on a pixel with respect to each pixel of the fingerprint image, performing a first processing of calculating a sum of average values of shading values of pixels included in an expanded region while gradually expanding the specific region, and generating a first processing image for the fingerprint image using a first processing-performed value for each pixel; and forming a window including a predetermined region in the first processing image, and selecting feature points among pixels in a window region while moving the window.
    Type: Grant
    Filed: October 6, 2016
    Date of Patent: April 3, 2018
    Assignees: CRUCIALTEC CO., LTD., CANVASBIO CO., LTD.
    Inventors: Baek Bum Pyun, Woo Taek Lim, Eun Kyung Ma
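The "first processing" in the abstract above (for each pixel, average the shading values in a region, then sum those averages while the region is gradually expanded) can be sketched as below. Square windows and the radius schedule are assumptions; the patent does not fix the region shape in this listing.

```python
import numpy as np

def expanding_region_sum(image, radii=(1, 2, 3)):
    """Sketch of the abstract's first processing: for each pixel, compute
    the mean shading value over a region centered on it, and sum the means
    as the region is gradually expanded. Square (2r+1)x(2r+1) regions are
    an assumed choice."""
    out = np.zeros(image.shape, dtype=float)
    for r in radii:
        padded = np.pad(image.astype(float), r, mode="edge")
        # Mean over a (2r+1)x(2r+1) window centered at each pixel.
        windows = np.lib.stride_tricks.sliding_window_view(
            padded, (2 * r + 1, 2 * r + 1))
        out += windows.mean(axis=(2, 3))
    return out

first_pass = expanding_region_sum(np.ones((5, 5)), radii=(1, 2))
```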
  • Patent number: 9858465
    Abstract: According to one embodiment, provided is a method by which an electronic device comprising a minimum fingerprint sensing area processes fingerprint information, comprising the steps of: acquiring a fingerprint image from the fingerprint sensing area; calculating a shade change value, defined by a shade difference value from a neighboring pixel, for each pixel of the fingerprint image; selecting points, as feature point candidates, of which the shade change value is a threshold value or more; applying artificial distortion for noise filtering to an area including the feature point candidates and neighboring pixels thereof; and selecting, as final feature points, candidates of which the shade change value after the artificial distortion is within a threshold range from among the feature point candidates.
    Type: Grant
    Filed: June 11, 2014
    Date of Patent: January 2, 2018
    Assignees: CRUCIALTEC CO., LTD., CANVASBIO CO., LTD.
    Inventors: Baek Bum Pyun, Woo Taek Lim, Sung Chan Park, Jae Han Kim
  • Publication number: 20170103253
    Abstract: A fingerprint information processing method and apparatus in which the method includes: obtaining a fingerprint image; calculating an average value of shading values of pixels in a specific region based on a pixel with respect to each pixel of the fingerprint image, performing a first processing of calculating a sum of average values of shading values of pixels included in an expanded region while gradually expanding the specific region, and generating a first processing image for the fingerprint image using a first processing-performed value for each pixel; and forming a window including a predetermined region in the first processing image, and selecting feature points among pixels in a window region while moving the window.
    Type: Application
    Filed: October 6, 2016
    Publication date: April 13, 2017
    Inventors: Baek Bum PYUN, Woo Taek LIM, Eun Kyung MA
  • Publication number: 20160350580
    Abstract: According to one embodiment, provided is a method by which an electronic device comprising a minimum fingerprint sensing area processes fingerprint information, comprising the steps of: acquiring a fingerprint image from the fingerprint sensing area; calculating a shade change value, defined by a shade difference value from a neighboring pixel, for each pixel of the fingerprint image; selecting points, as feature point candidates, of which the shade change value is a threshold value or more; applying artificial distortion for noise filtering to an area including the feature point candidates and neighboring pixels thereof; and selecting, as final feature points, candidates of which the shade change value after the artificial distortion is within a threshold range from among the feature point candidates.
    Type: Application
    Filed: June 11, 2014
    Publication date: December 1, 2016
    Inventors: Baek Bum PYUN, Woo Taek LIM, Sung Chan PARK, Jae Han KIM
  • Patent number: 9336796
    Abstract: Provided is an apparatus for detecting a speech/non-speech section. The apparatus includes an acquisition unit which obtains inter-channel relation information of a stereo audio signal, a separation unit which separates each element of the stereo audio signal into a center channel element and a surround element on the basis of the inter-channel relation information, a calculation unit which calculates an energy ratio value between a center channel signal composed of center channel elements and a surround channel signal composed of surround elements, for each frame, and an energy ratio value between the stereo audio signal and a mono signal generated on the basis of the stereo audio signal, and a judgment unit which determines a speech section and a non-speech section from the stereo audio signal by comparing the energy ratio values.
    Type: Grant
    Filed: February 5, 2014
    Date of Patent: May 10, 2016
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: In Seon Jang, Woo Taek Lim
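One of the per-frame energy ratios in the abstract above (center channel energy versus surround energy) can be sketched as below. The mid/side split used here is an assumed stand-in for the patent's separation based on inter-channel relation information, and the threshold is illustrative.

```python
import numpy as np

def center_surround_energy_ratio(left, right):
    """Sketch of one ratio from the abstract: approximate the center
    channel as the in-phase part of a stereo frame and the surround as
    the out-of-phase part, then compare their energies. The mid/side
    split is an assumed simplification."""
    center = 0.5 * (left + right)    # in-phase (center) element
    surround = 0.5 * (left - right)  # out-of-phase (surround) element
    eps = 1e-12                      # avoid division by zero
    return float(np.sum(center**2) / (np.sum(surround**2) + eps))

def is_speech_frame(left, right, threshold=4.0):
    # Speech is typically mixed to the center, so a high ratio suggests speech.
    return center_surround_energy_ratio(left, right) > threshold

frame = np.array([1.0, -0.5, 0.25])
```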