Patents by Inventor Joon-Hyuk Chang

Joon-Hyuk Chang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10854218
Abstract: A multichannel microphone-based reverberation time estimation method and device which use a deep neural network (DNN) are disclosed. A multichannel microphone-based reverberation time estimation method using a DNN, according to one embodiment, comprises the steps of: receiving a voice signal through a multichannel microphone; deriving a feature vector including spatial information by using the input voice signal; and estimating the degree of reverberation by applying the feature vector to the DNN.
    Type: Grant
    Filed: December 15, 2017
    Date of Patent: December 1, 2020
    Assignee: Industry-University Cooperation Foundation Hanyang University
    Inventors: Joon-Hyuk Chang, Myung In Lee
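The pipeline in the abstract above (multichannel input, spatial feature vector, DNN regression of the reverberation degree) can be sketched as below. The feature choice (per-channel energy plus inter-channel correlation) and the tiny network with random weights are illustrative assumptions, not details taken from the patent.

```python
import math
import random

def spatial_features(channels):
    """Per-channel energy plus inter-channel correlation (assumed features)."""
    feats = [sum(x * x for x in ch) / len(ch) for ch in channels]  # energies
    ref = channels[0]
    for ch in channels[1:]:
        num = sum(a * b for a, b in zip(ref, ch))
        den = math.sqrt(sum(a * a for a in ref) * sum(b * b for b in ch)) or 1.0
        feats.append(num / den)  # normalized cross-correlation carries spatial info
    return feats

def mlp_forward(x, w_hidden, w_out):
    """One ReLU hidden layer, scalar output: the reverberation-time estimate."""
    hidden = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in w_hidden]
    return sum(w * h for w, h in zip(w_out, hidden))

random.seed(0)
# two synthetic microphone channels with a small phase offset between them
channels = [[math.sin(0.1 * n) for n in range(64)],
            [math.sin(0.1 * n + 0.2) for n in range(64)]]
x = spatial_features(channels)
w1 = [[random.uniform(-0.5, 0.5) for _ in x] for _ in range(4)]
w2 = [random.uniform(-0.5, 0.5) for _ in range(4)]
rt_estimate = mlp_forward(x, w1, w2)
print(rt_estimate)
```

A trained network would replace the random weights; the sketch only shows the feature-to-estimate data flow.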
  • Publication number: 20200193291
Abstract: A noise data artificial intelligence learning method for identifying the source of problematic noise may include a noise data pre-conditioning method comprising: selecting a unit frame for the problematic noise among noises sampled over time; dividing the unit frame into N segments; analyzing the frequency characteristics of each of the N segments and extracting a frequency component of each segment by applying a log mel filter; and outputting a feature parameter as one representative frame by averaging the information on the N segments, wherein artificial intelligence learning on the feature parameters extracted over time by the noise data pre-conditioning method applies a bidirectional RNN.
    Type: Application
    Filed: November 18, 2019
    Publication date: June 18, 2020
Applicants: Hyundai Motor Company, Kia Motors Corporation, IUCF-HYU (Industry-University Cooperation Foundation Hanyang University)
    Inventors: Dong-Chul Lee, In-Soo Jung, Joon-Hyuk Chang, Kyoung-Jin Noh
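The pre-conditioning steps listed above (unit frame, N segments, per-segment frequency analysis, averaging into one representative frame) can be sketched as follows. A plain DFT and equal-width log-energy bands stand in for the log mel filter bank, and the segment counts are assumptions for illustration.

```python
import math

def dft_power(segment):
    """Power spectrum of one segment via a direct DFT (first half of bins)."""
    N = len(segment)
    power = []
    for k in range(N // 2):
        re = sum(x * math.cos(-2 * math.pi * k * n / N) for n, x in enumerate(segment))
        im = sum(x * math.sin(-2 * math.pi * k * n / N) for n, x in enumerate(segment))
        power.append(re * re + im * im)
    return power

def log_bands(power, n_bands):
    """Crude stand-in for a log mel filter bank: log energy in equal bands."""
    size = max(1, len(power) // n_bands)
    return [math.log(sum(power[i * size:(i + 1) * size]) + 1e-10)
            for i in range(n_bands)]

def representative_frame(unit_frame, n_segments, n_bands):
    """Split the unit frame into N segments and average their features."""
    seg_len = len(unit_frame) // n_segments
    per_seg = [log_bands(dft_power(unit_frame[i * seg_len:(i + 1) * seg_len]), n_bands)
               for i in range(n_segments)]
    return [sum(col) / n_segments for col in zip(*per_seg)]

frame = [math.sin(0.3 * n) for n in range(64)]   # toy "unit frame" of sampled noise
feat = representative_frame(frame, n_segments=4, n_bands=4)
print(feat)
```

The representative frames produced over time would then feed the bidirectional RNN mentioned in the abstract (the RNN itself is omitted here).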
  • Publication number: 20200105287
Abstract: Disclosed is a deep neural network-based method and apparatus for combined noise and echo removal. The deep neural network-based method for combined noise and echo removal according to one embodiment of the present invention may comprise the steps of: extracting a feature vector from an audio signal that includes noise and echo; and acquiring a final audio signal from which both noise and echo have been removed, by using a combined noise and echo removal gain estimated by means of the feature vector and a deep neural network (DNN).
    Type: Application
    Filed: April 2, 2018
    Publication date: April 2, 2020
    Applicant: Industry-University Cooperation Foundation Hanyang University
    Inventors: Joon-Hyuk CHANG, Hyeji SEO
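The key idea above is that one DNN-estimated gain suppresses noise and echo jointly rather than cascading two separate suppressors. A minimal sketch, with an assumed per-band feature vector and a placeholder network with random weights:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def estimate_gain(features, w_hidden, w_out):
    """Tiny DNN mapping the feature vector to one combined suppression gain."""
    hidden = [math.tanh(sum(w * f for w, f in zip(row, features))) for row in w_hidden]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)))  # gain in (0, 1)

random.seed(1)
band_energies = [4.0, 2.5, 0.8, 0.3]   # assumed features from the degraded signal
w_hidden = [[random.uniform(-1, 1) for _ in band_energies] for _ in range(3)]
w_out = [random.uniform(-1, 1) for _ in range(3)]

gain = estimate_gain(band_energies, w_hidden, w_out)
enhanced = [e * gain for e in band_energies]   # apply the combined gain
print(gain, enhanced)
```

In the patented method the network would be trained so that the single gain removes both degradations; here the weights are random and only the data flow is shown.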
  • Publication number: 20200082843
Abstract: A multichannel microphone-based reverberation time estimation method and device which use a deep neural network (DNN) are disclosed. A multichannel microphone-based reverberation time estimation method using a DNN, according to one embodiment, comprises the steps of: receiving a voice signal through a multichannel microphone; deriving a feature vector including spatial information by using the input voice signal; and estimating the degree of reverberation by applying the feature vector to the DNN.
    Type: Application
    Filed: December 15, 2017
    Publication date: March 12, 2020
    Applicant: Industry-University Cooperation Foundation Hanyang University
    Inventors: Joon-Hyuk CHANG, Myung In LEE
  • Patent number: 10431240
    Abstract: Provided is a speech enhancement method and a system therefor. The speech enhancement method includes receiving at least one speech signal; generating a first speech signal by performing a primary speech enhancement on the at least one speech signal; selecting a noise removing gain corresponding to the first speech signal from pre-learned noise removing gain information; and generating a second speech signal by performing a secondary speech enhancement on the first speech signal based on the selected noise removing gain.
    Type: Grant
    Filed: September 23, 2015
    Date of Patent: October 1, 2019
Assignees: Samsung Electronics Co., Ltd., Industry-University Cooperation Foundation Hanyang University
    Inventors: Seung-yeol Lee, Joon-hyuk Chang, Byeong-seob Ko, Song-kyu Park, Tae-jun Park
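The two-stage flow described above can be sketched as: a primary enhancement (here, simple spectral subtraction over band energies), a lookup of a pre-learned noise-removing gain keyed on the primary output, and a secondary enhancement that applies that gain. The table values and the quantization rule are invented for illustration.

```python
def primary_enhance(bands, noise_floor):
    """First-stage enhancement: subtract an estimated noise floor."""
    return [max(b - noise_floor, 0.0) for b in bands]

# pre-learned table: quantized band energy -> noise-removing gain (assumed values)
GAIN_TABLE = {0: 0.2, 1: 0.5, 2: 0.8, 3: 0.95}

def select_gain(band_energy):
    """Select the stored gain corresponding to the first-stage output."""
    return GAIN_TABLE[min(int(band_energy), 3)]

def secondary_enhance(bands):
    """Second-stage enhancement using the selected gains."""
    return [b * select_gain(b) for b in bands]

noisy = [0.6, 1.4, 2.7, 3.9]
first = primary_enhance(noisy, noise_floor=0.5)
second = secondary_enhance(first)
print(first, second)
```

The point of the design is that the second stage does not re-estimate noise; it reuses gains learned offline and indexed by the first stage's result.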
  • Publication number: 20190295553
    Abstract: Disclosed herein are an apparatus and method for recognizing a voice speaker. The apparatus for recognizing a voice speaker includes a voice feature extraction unit configured to extract a feature vector from a voice signal inputted through a microphone; and a speaker recognition unit configured to calculate a speaker recognition score by selecting a reverberant environment from multiple reverberant environment learning data sets based on the feature vector extracted by the voice feature extraction unit and to recognize a speaker by assigning a weight depending on the selected reverberant environment to the speaker recognition score.
    Type: Application
    Filed: March 20, 2019
    Publication date: September 26, 2019
    Inventors: Yu Jin JUNG, Ki Hee PARK, Chang Won LEE, Doh Hyun KIM, Tae Kyung KIM, Tae Yoon SON, Joon Hyuk CHANG, Joon Young YANG
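The scoring flow in the abstract above (select a reverberant environment from learned data sets, then weight the speaker score accordingly) can be sketched as below. The environment models, distance measure, and weight values are illustrative assumptions.

```python
def nearest_environment(feature, env_models):
    """Pick the reverberant environment whose model best matches the feature."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(env_models, key=lambda name: sq_dist(feature, env_models[name]))

ENV_MODELS = {"low_reverb": [0.1, 0.2], "high_reverb": [0.8, 0.9]}  # assumed
ENV_WEIGHTS = {"low_reverb": 1.0, "high_reverb": 0.7}               # assumed

def weighted_score(feature, raw_score):
    """Weight the speaker-recognition score by the selected environment."""
    env = nearest_environment(feature, ENV_MODELS)
    return env, raw_score * ENV_WEIGHTS[env]

env, score = weighted_score([0.75, 0.85], raw_score=2.0)
print(env, score)
```

Down-weighting scores from highly reverberant conditions reflects that such scores are less reliable; the actual weights in the patent would be learned from the reverberant-environment data sets.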
  • Publication number: 20190051310
Abstract: Disclosed are a packet loss concealment method and apparatus using a generative adversarial network (GAN). A method for packet loss concealment in voice communication may include training a classification model based on a GAN with respect to a voice signal including a plurality of frames, training a generative model having an adversarial relation with the classification model based on the GAN, estimating lost packet information based on the trained generative model with respect to the voice signal encoded by a codec, and restoring a lost packet based on the estimated packet information.
    Type: Application
    Filed: August 9, 2018
    Publication date: February 14, 2019
    Inventors: Joon-Hyuk Chang, Bong-Ki Lee
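Only the concealment step at decode time is sketched here: when a frame is lost, a trained generative model predicts it from neighboring frames and the prediction is patched in. The stand-in "generator" below is a simple neighbor average; in the patent the generator would be trained adversarially against a classification model, and that training is omitted entirely.

```python
def generator_stub(prev_frame, next_frame):
    """Placeholder for the trained generative model: average the neighbors."""
    return [(p + n) / 2.0 for p, n in zip(prev_frame, next_frame)]

def conceal(frames):
    """Replace lost frames (marked None) with the generator's estimate."""
    out = []
    for i, frame in enumerate(frames):
        if frame is None:
            out.append(generator_stub(frames[i - 1], frames[i + 1]))
        else:
            out.append(frame)
    return out

stream = [[1.0, 1.0], None, [3.0, 5.0]]   # middle packet lost in transit
restored = conceal(stream)
print(restored)
```

This toy assumes the lost frame has valid neighbors on both sides; a real GAN-based concealer would handle boundary losses and produce far more plausible frames than interpolation.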
  • Publication number: 20170365275
    Abstract: Provided is a speech enhancement method and a system therefor. The speech enhancement method includes receiving at least one speech signal; generating a first speech signal by performing a primary speech enhancement on the at least one speech signal; selecting a noise removing gain corresponding to the first speech signal from pre-learned noise removing gain information; and generating a second speech signal by performing a secondary speech enhancement on the first speech signal based on the selected noise removing gain.
    Type: Application
    Filed: September 23, 2015
    Publication date: December 21, 2017
    Inventors: Seung-yeol LEE, Joon-hyuk CHANG, Byeong-seob KO, Song-kyu PARK, Tae-jun PARK
  • Patent number: 9847093
    Abstract: An apparatus for processing a speech signal is provided.
    Type: Grant
    Filed: June 14, 2016
    Date of Patent: December 19, 2017
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Kang-eun Lee, Joon-hyuk Chang, Byeong-yong Jeon, Hyeon-seong Kim, Tae-jun Park, Kwang-sub Song, Tae-hyun Yoon, Seong-hyeon Choe, Hyun-chul Choi
  • Patent number: 9729602
    Abstract: Technologies are generally described for a method for measuring a quality of an audio signal in a mobile device. In some examples, the mobile device includes a receiving unit configured to receive an audio signal transmitted from another device; an audio quality measuring unit configured to measure a quality of the received audio signal; and a transmission unit configured to transmit the measured quality of the audio signal to the another device.
    Type: Grant
    Filed: February 9, 2016
    Date of Patent: August 8, 2017
    Assignee: INHA-INDUSTRY PARTNERSHIP INSTITUTE
    Inventor: Joon-Hyuk Chang
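The feedback loop described above (receive audio, measure its quality on the receiving device, transmit the measurement back) can be sketched as below. The metric here, an SNR against a locally available reference, and the dictionary message format are assumptions for illustration.

```python
import math

def measure_quality(received, reference):
    """SNR in dB of the received audio against a reference (assumed metric)."""
    noise = [r - s for r, s in zip(received, reference)]
    sig_pow = sum(s * s for s in reference) or 1e-12
    noise_pow = sum(n * n for n in noise) or 1e-12
    return 10.0 * math.log10(sig_pow / noise_pow)

def report(metric, send):
    """Stand-in for the transmission unit: send the measurement back."""
    send({"quality_db": metric})

reference = [math.sin(0.2 * n) for n in range(32)]
received = [x + 0.01 for x in reference]   # mildly degraded copy
sent = []
report(measure_quality(received, reference), sent.append)
print(sent)
```

Measuring at the receiver matters because only that device observes the audio after channel impairments; the sender learns the delivered quality from the returned report.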
  • Patent number: 9536539
    Abstract: A nonlinear acoustic echo signal suppression system and method using a Volterra filter is disclosed. The nonlinear acoustic echo signal suppression system includes an acoustic echo signal estimator configured to estimate a nonlinear acoustic echo signal by using a Volterra filter in a frequency filter, and a near-end talker speech signal generator configured to generate a near-end talker speech signal, in which the nonlinear acoustic echo signal is suppressed, by using a gain function based on a statistical model.
    Type: Grant
    Filed: June 30, 2015
    Date of Patent: January 3, 2017
    Assignee: INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY
    Inventors: Joon Hyuk Chang, Ji Hwan Park
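A second-order Volterra filter, the core of the abstract above, models the echo path with a linear kernel plus a quadratic kernel that captures loudspeaker nonlinearity. The time-domain sketch below uses assumed kernel values, and a simple subtraction stands in for the statistical-model-based gain function; the patent operates in the frequency domain.

```python
def volterra_echo(x, h1, h2):
    """y[n] = sum_i h1[i] x[n-i] + sum_{i,j} h2[i][j] x[n-i] x[n-j]."""
    M = len(h1)
    y = []
    for n in range(len(x)):
        lin = sum(h1[i] * x[n - i] for i in range(M) if n - i >= 0)
        quad = sum(h2[i][j] * x[n - i] * x[n - j]
                   for i in range(M) for j in range(M)
                   if n - i >= 0 and n - j >= 0)
        y.append(lin + quad)
    return y

h1 = [0.5, 0.2]                     # assumed linear echo kernel
h2 = [[0.1, 0.0], [0.0, 0.05]]      # assumed quadratic (nonlinear) kernel
far_end = [1.0, -0.5, 0.25]

echo_est = volterra_echo(far_end, h1, h2)
mic = [e + 0.3 for e in echo_est]   # toy microphone: echo + constant near-end talk
near_est = [m - e for m, e in zip(mic, echo_est)]
print(echo_est, near_est)
```

The quadratic term is what lets the model track distortion a purely linear echo canceller misses.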
  • Publication number: 20160372135
    Abstract: An apparatus for processing a speech signal is provided.
    Type: Application
    Filed: June 14, 2016
    Publication date: December 22, 2016
    Inventors: Kang-eun LEE, Joon-hyuk CHANG, Byeong-yong JEON, Hyeon-seong KIM, Tae-jun PARK, Kwang-sub SONG, Tae-hyun YOON, Seong-hyeon CHOE, Hyun-chul CHOI
  • Publication number: 20160156692
    Abstract: Technologies are generally described for a method for measuring a quality of an audio signal in a mobile device. In some examples, the mobile device includes a receiving unit configured to receive an audio signal transmitted from another device; an audio quality measuring unit configured to measure a quality of the received audio signal; and a transmission unit configured to transmit the measured quality of the audio signal to the another device.
    Type: Application
    Filed: February 9, 2016
    Publication date: June 2, 2016
    Applicant: INHA-INDUSTRY PARTNERSHIP INSTITUTE
    Inventor: Joon-Hyuk CHANG
  • Patent number: 9300694
    Abstract: Technologies are generally described for a method for measuring a quality of an audio signal in a mobile device. In some examples, the mobile device includes a receiving unit configured to receive an audio signal transmitted from another device; an audio quality measuring unit configured to measure a quality of the received audio signal; and a transmission unit configured to transmit the measured quality of the audio signal to the another device.
    Type: Grant
    Filed: January 11, 2011
    Date of Patent: March 29, 2016
Assignee: INHA-Industry Partnership Institute
    Inventor: Joon-Hyuk Chang
  • Publication number: 20160005419
    Abstract: A nonlinear acoustic echo signal suppression system and method using a Volterra filter is disclosed. The nonlinear acoustic echo signal suppression system includes an acoustic echo signal estimator configured to estimate a nonlinear acoustic echo signal by using a Volterra filter in a frequency filter, and a near-end talker speech signal generator configured to generate a near-end talker speech signal, in which the nonlinear acoustic echo signal is suppressed, by using a gain function based on a statistical model.
    Type: Application
    Filed: June 30, 2015
    Publication date: January 7, 2016
    Inventors: Joon Hyuk CHANG, Ji Hwan PARK
  • Publication number: 20120177207
    Abstract: Technologies are generally described for a method for measuring a quality of an audio signal in a mobile device. In some examples, the mobile device includes a receiving unit configured to receive an audio signal transmitted from another device; an audio quality measuring unit configured to measure a quality of the received audio signal; and a transmission unit configured to transmit the measured quality of the audio signal to the another device.
    Type: Application
    Filed: January 11, 2011
    Publication date: July 12, 2012
    Applicant: INHA-INDUSTRY PARTNERSHIP INSTITUTE
    Inventor: Joon-Hyuk Chang
  • Patent number: 8180638
Abstract: Disclosed herein is a method for emotion recognition based on a minimum classification error. In the method, a speaker's neutral emotion is extracted using a Gaussian mixture model (GMM), and the other emotions are classified using a GMM to which a discriminative weight, chosen to minimize the loss function of the classification error for the emotion-recognition feature vector, is applied. The discriminative weight, evaluated using the GMM based on the minimum classification error, is applied to the feature vectors of emotions that are difficult to classify, thereby enhancing the performance of emotion recognition.
    Type: Grant
    Filed: February 23, 2010
    Date of Patent: May 15, 2012
    Assignee: Korea Institute of Science and Technology
    Inventors: Hyoung Gon Kim, Ig Jae Kim, Joon-Hyuk Chang, Kye Hwan Lee, Chang Seok Bae
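The decision rule above can be sketched as: score each emotion with a Gaussian model and add a discriminative (log-)weight that boosts emotions the classifier finds hard to separate, then take the argmax. The one-dimensional single-component models and the weight values below are invented; the patent uses full GMMs with weights learned under a minimum-classification-error criterion.

```python
import math

def gauss_loglik(x, mean, var):
    """Log-likelihood of x under a 1-D Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

MODELS = {"neutral": (0.0, 1.0), "anger": (2.0, 1.0), "sadness": (2.4, 1.0)}
# weight > 1 boosts an emotion that is otherwise hard to separate (assumed)
WEIGHTS = {"neutral": 1.0, "anger": 1.0, "sadness": 1.2}

def classify(x):
    """Weighted-GMM decision: argmax of log-likelihood plus log-weight."""
    scores = {emo: gauss_loglik(x, m, v) + math.log(WEIGHTS[emo])
              for emo, (m, v) in MODELS.items()}
    return max(scores, key=scores.get)

print(classify(0.1), classify(2.2))
```

At x = 2.2 the feature is equidistant from the "anger" and "sadness" means, so the discriminative weight alone breaks the tie; that is the mechanism the abstract describes for confusable emotions.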
  • Publication number: 20100217595
Abstract: Disclosed herein is a method for emotion recognition based on a minimum classification error. In the method, a speaker's neutral emotion is extracted using a Gaussian mixture model (GMM), and the other emotions are classified using a GMM to which a discriminative weight, chosen to minimize the loss function of the classification error for the emotion-recognition feature vector, is applied. The discriminative weight, evaluated using the GMM based on the minimum classification error, is applied to the feature vectors of emotions that are difficult to classify, thereby enhancing the performance of emotion recognition.
    Type: Application
    Filed: February 23, 2010
    Publication date: August 26, 2010
    Applicants: KOREA INSTITUTE OF SCIENCE AND TECHNOLOGY, Electronics and Telecommunications Research Institute
    Inventors: Hyoung Gon KIM, Ig Jae KIM, Joon-Hyuk CHANG, Kye Hwan LEE, Chang Seok BAE
  • Publication number: 20040122667
    Abstract: Disclosed is a voice activity detector using a complex Laplacian statistic module, the voice activity detector including: a fast Fourier transformer for performing a fast Fourier transform on input speech to analyze speech signals of a time domain in a frequency domain; a noise power estimator for estimating a power of noise signals from noisy speech of the frequency domain output from the fast Fourier transformer; and a likelihood ratio test (LRT) calculator for calculating a decision rule of voice activity detection (VAD) from the estimated power of noise signals from the noise power estimator and a complex Laplacian probabilistic statistical model.
    Type: Application
    Filed: October 30, 2003
    Publication date: June 24, 2004
    Inventors: Mi-Suk Lee, Dae-Hwan Hwang, Joon-Hyuk Chang, Nam-Soo Kim
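The likelihood-ratio-test decision described above can be sketched as: form a per-bin log-likelihood ratio between "speech present" and "noise only" hypotheses under a Laplacian density, average across bins, and compare to a threshold. A real-valued Laplacian on DFT coefficients stands in for the complex Laplacian statistical model of the abstract, and the scale parameters and threshold are assumptions.

```python
import math

def laplace_loglik(x, scale):
    """Log-density of a zero-mean Laplacian with the given scale."""
    return -math.log(2 * scale) - abs(x) / scale

def vad_decision(dft_bins, noise_scale, speech_scale, threshold=0.0):
    """Average LLR over bins; declare speech if it exceeds the threshold."""
    llr = sum(laplace_loglik(x, speech_scale) - laplace_loglik(x, noise_scale)
              for x in dft_bins) / len(dft_bins)
    return llr > threshold, llr

speech_like = [3.0, -2.5, 4.0, -3.5]     # large-magnitude bins: speech frame
noise_like = [0.2, -0.1, 0.15, -0.25]    # small-magnitude bins: noise frame
print(vad_decision(speech_like, noise_scale=0.5, speech_scale=2.0))
print(vad_decision(noise_like, noise_scale=0.5, speech_scale=2.0))
```

In the patented detector the noise scale would come from the noise power estimator running on noise-only frames rather than being fixed in advance.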