Patents by Inventor Joon-Hyuk Chang

Joon-Hyuk Chang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220287571
    Abstract: An apparatus for estimating bio-information is disclosed. The apparatus may include: a pulse wave sensor configured to measure a pulse wave signal from an object; a force sensor configured to obtain a force signal by measuring an external force exerted onto the force sensor; and a processor configured to obtain a first input value, a second input value, and a third input value based on the pulse wave signal and the force signal, to extract a feature vector by inputting the first input value, the second input value, and the third input value into a first neural network model, and to obtain the bio-information by inputting the feature vector into a second neural network model.
    Type: Application
    Filed: July 7, 2021
    Publication date: September 15, 2022
    Applicants: SAMSUNG ELECTRONICS CO., LTD., IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
    Inventors: Sang Kon BAE, Joon-Hyuk CHANG, Jin Woo CHOI, Youn Ho KIM, Jehyun KYUNG, Joon-Young YANG, Inmo YEON, Jeong-Hwan CHOI
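The two-stage pipeline described in this abstract (three input values → first neural network → feature vector → second neural network → bio-information estimate) can be sketched as below. The layer sizes, random weights, and input values are illustrative assumptions, not the patent's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, b1, w2, b2):
    # One hidden layer with ReLU activation, linear output.
    h = np.maximum(0.0, x @ w1 + b1)
    return h @ w2 + b2

# Three scalar input values derived from the pulse-wave and force signals
# (hypothetical stand-ins for the patent's first/second/third input values).
inputs = np.array([0.8, 1.2, 0.5])

# First model: 3 inputs -> 16-dim feature vector (random weights for illustration).
w1a, b1a = rng.normal(size=(3, 32)), np.zeros(32)
w2a, b2a = rng.normal(size=(32, 16)), np.zeros(16)
feature_vec = mlp(inputs, w1a, b1a, w2a, b2a)

# Second model: feature vector -> single bio-information estimate.
w1b, b1b = rng.normal(size=(16, 32)), np.zeros(32)
w2b, b2b = rng.normal(size=(32, 1)), np.zeros(1)
bio_estimate = mlp(feature_vec, w1b, b1b, w2b, b2b)
```

The point of the cascade is that the first network compresses the heterogeneous inputs into one feature vector before the second network maps it to the target quantity.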
  • Publication number: 20220270414
    Abstract: A device for AI-based vehicle diagnosis using CAN data may include an engine; a vibration sensor mounted in an engine compartment in which the engine is mounted and configured for detecting a vibration signal; and a controller area network (CAN) communicating one or more of an environmental condition, a vehicle status, an engine status, and an engine control parameter, wherein data preprocessing from the vibration sensor and the CAN is performed to determine features in which correlation between vibration data (dB) exceeding a threshold value of irregular vibrations being generated by the engine and the CAN data is equal to or greater than 90%.
    Type: Application
    Filed: July 15, 2021
    Publication date: August 25, 2022
    Applicants: Hyundai Motor Company, Kia Corporation, IUCF-HYU (Industry-University Cooperation Foundation Hanyang University)
    Inventors: Dong-Chul LEE, In-Soo JUNG, Dong-Yeoup JEON, Joon-Hyuk CHANG
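The ≥90% correlation criterion between vibration data and CAN data described above can be illustrated with a simple Pearson-correlation filter; the channel names and signal values below are hypothetical, not the patent's actual CAN parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Hypothetical vibration level (dB) during irregular-vibration events.
vibration_db = rng.normal(60.0, 5.0, n)

# Hypothetical CAN channels: engine_rpm tracks the vibration closely, others do not.
can_data = {
    "engine_rpm": vibration_db * 30 + rng.normal(0, 10, n),
    "coolant_temp": rng.normal(90, 2, n),
    "throttle_pos": rng.normal(20, 5, n),
}

# Keep CAN channels whose |Pearson correlation| with the vibration is >= 0.9.
selected = [
    name for name, series in can_data.items()
    if abs(np.corrcoef(vibration_db, series)[0, 1]) >= 0.9
]
```

Only the strongly correlated channel survives the filter, which is the feature-selection step the abstract describes before AI-based diagnosis.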
  • Publication number: 20220230627
    Abstract: Disclosed are a method and an apparatus for detecting a voice end point by using acoustic and language modeling information to achieve robust voice recognition. A voice end point detection method according to an embodiment may comprise the steps of: inputting an acoustic feature vector sequence extracted from a microphone input signal into an acoustic embedding extraction unit, a phonemic embedding extraction unit, and a decoder embedding extraction unit, which are based on a recurrent neural network (RNN); combining, by the three extraction units, the acoustic embedding, phonemic embedding, and decoder embedding to form a feature vector; and inputting the combined feature vector into a deep neural network (DNN)-based classifier to detect a voice end point.
    Type: Application
    Filed: June 9, 2020
    Publication date: July 21, 2022
    Applicant: IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
    Inventors: Joon-Hyuk CHANG, Inyoung HWANG
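The combination step in the abstract (acoustic, phonemic, and decoder embeddings concatenated into one feature vector, then scored by a DNN classifier) can be sketched as follows; the embedding sizes, random weights, and sigmoid output head are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-frame embeddings from three RNN-based extraction units.
acoustic_emb = rng.normal(size=32)
phonemic_emb = rng.normal(size=32)
decoder_emb = rng.normal(size=32)

# Combine the three embeddings into a single feature vector by concatenation.
combined = np.concatenate([acoustic_emb, phonemic_emb, decoder_emb])

# DNN-based binary classifier: probability that this frame is a voice end point.
w1, b1 = rng.normal(size=(96, 16)), np.zeros(16)
w2, b2 = rng.normal(size=(16, 1)), np.zeros(1)
h = np.maximum(0.0, combined @ w1 + b1)
logit = float(h @ w2 + b2)
p_endpoint = 1.0 / (1.0 + np.exp(-logit))
```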
  • Publication number: 20220208198
    Abstract: Presented are a combined learning method and device using a transformed loss function and feature enhancement based on a deep neural network for speaker recognition that is robust in a noisy environment. A combined learning method using a transformed loss function and feature enhancement based on a deep neural network, according to one embodiment, can comprise the steps of: learning a feature enhancement model based on a deep neural network; learning a speaker feature vector extraction model based on the deep neural network; connecting an output layer of the feature enhancement model with an input layer of the speaker feature vector extraction model; and treating the connected feature enhancement model and speaker feature vector extraction model as one model and performing combined learning for additional training.
    Type: Application
    Filed: March 30, 2020
    Publication date: June 30, 2022
    Applicant: IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
    Inventors: Joon-Hyuk CHANG, Joonyoung YANG
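The connection-and-joint-training idea above (enhancement output layer feeding the speaker model's input layer, then one loss updating both) can be sketched with a tiny linear/tanh stack; the layer sizes, weights, and squared-error loss are illustrative assumptions, not the patent's transformed loss function.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical feature-enhancement model: noisy 20-dim feature -> enhanced feature.
W_enh = rng.normal(scale=0.1, size=(20, 20))
# Hypothetical speaker model: enhanced feature -> 8-dim speaker feature vector.
W_spk = rng.normal(scale=0.1, size=(20, 8))

def combined_model(noisy_feat):
    # Enhancement output feeds the speaker model's input, so the two
    # networks behave as one model during combined learning.
    return np.tanh(noisy_feat @ W_enh) @ W_spk

noisy = rng.normal(size=20)
target = rng.normal(size=8)

# One joint gradient step: a single loss updates BOTH parameter sets.
lr = 0.01
h = np.tanh(noisy @ W_enh)
pred = h @ W_spk
err = pred - target                                   # d(loss)/d(pred) for 0.5*||err||^2
loss_before = 0.5 * np.sum(err ** 2)
grad_spk = np.outer(h, err)
grad_enh = np.outer(noisy, (err @ W_spk.T) * (1 - h ** 2))
W_spk -= lr * grad_spk
W_enh -= lr * grad_enh
loss_after = 0.5 * np.sum((combined_model(noisy) - target) ** 2)
```

A single step of the joint update already lowers the loss on this sample, which is the mechanism the "additional training" stage relies on.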
  • Publication number: 20220199095
    Abstract: Presented are a combined learning method and device using a transformed loss function and feature enhancement based on a deep neural network for speaker recognition that is robust to a noisy environment.
    Type: Application
    Filed: March 30, 2020
    Publication date: June 23, 2022
    Applicant: INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY
    Inventors: Joon-Hyuk CHANG, Joonyoung YANG
  • Publication number: 20220108681
    Abstract: Proposed are a deep neural network-based non-autoregressive voice synthesizing method and a system therefor. A deep neural network-based non-autoregressive voice synthesizing system according to an embodiment may comprise: a voice feature vector column synthesizing unit which constitutes a non-recursive deep neural network based on multiple decoders, and gradually produces a voice feature vector column through the multiple decoders from a template including temporal information of a voice; and a voice reconstituting unit which transforms the voice feature vector column into voice data, wherein the voice feature vector column synthesizing unit produces a template input, and produces a voice feature vector column by adding, to the template input, sentence data refined through an attention mechanism.
    Type: Application
    Filed: June 26, 2020
    Publication date: April 7, 2022
    Applicant: IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
    Inventors: Joon-Hyuk CHANG, Moa LEE
  • Patent number: 11238877
    Abstract: Proposed are a generative adversarial network-based speech bandwidth extender and extension method. A generative adversarial network-based speech bandwidth extension method, according to an embodiment, comprises the steps of: extracting feature vectors from a narrowband (NB) signal and a wideband (WB) signal of a speech; estimating the feature vector of the wideband signal from the feature vector of the narrowband signal; and learning a deep neural network classification model for discriminating the estimated feature vector of the wideband signal from the actually extracted feature vector of the wideband signal and the actually extracted feature vector of the narrowband signal.
    Type: Grant
    Filed: May 17, 2018
    Date of Patent: February 1, 2022
    Assignee: IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
    Inventors: Joon-Hyuk Chang, Kyoungjin Noh
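The generator/discriminator split described in this abstract (estimate wideband features from narrowband features, then train a classifier to tell estimated from real wideband features) can be sketched as a forward pass; all layer sizes and weights below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def relu(x):
    return np.maximum(0.0, x)

# Hypothetical feature vectors (e.g. log-spectral features) of one speech frame.
nb_feat = rng.normal(size=16)   # narrowband (NB) features
wb_feat = rng.normal(size=32)   # actual wideband (WB) features

# Generator: estimate wideband features from narrowband features.
Wg1 = rng.normal(scale=0.3, size=(16, 64))
Wg2 = rng.normal(scale=0.3, size=(64, 32))
wb_est = relu(nb_feat @ Wg1) @ Wg2

# Discriminator: classify a wideband feature vector as real (≈1) or estimated (≈0).
Wd1 = rng.normal(scale=0.3, size=(32, 64))
Wd2 = rng.normal(scale=0.3, size=(64, 1))

def discriminate(feat):
    return 1.0 / (1.0 + np.exp(-float(relu(feat @ Wd1) @ Wd2)))

score_real = discriminate(wb_feat)
score_fake = discriminate(wb_est)
# Adversarial training would push score_real toward 1 and score_fake toward 0,
# while updating the generator to fool the discriminator.
```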
  • Publication number: 20220015651
    Abstract: An apparatus and a method for estimating blood pressure are provided. The apparatus for estimating blood pressure includes: a sensor configured to measure a pulse wave signal from an object; and a processor configured to obtain a mean arterial pressure (MAP) based on the pulse wave signal, to classify a phase of the obtained MAP according to at least one classification criterion, and to obtain a systolic blood pressure (SBP) by using an estimation model corresponding to the classified phase of the MAP among estimation models corresponding to respective phases of the MAP.
    Type: Application
    Filed: November 13, 2020
    Publication date: January 20, 2022
    Applicants: SAMSUNG ELECTRONICS CO., LTD., IUCF-HYU (Industry-University Cooperation Foundation Hanyang University)
    Inventors: Sang Kon Bae, Joon-Hyuk Chang, Chang Mok Choi, Youn Ho Kim, Jin Woo Choi, Jehyun Kyung, Tae-Jun Park, Joon-Young Yang, Inmo Yeon
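The phase-switching idea in the abstract (classify the MAP into a phase, then estimate SBP with the model for that phase) can be sketched as below; the phase boundaries and the per-phase linear coefficients are illustrative assumptions, not the patent's learned estimation models.

```python
import numpy as np

# Hypothetical phase boundaries (mmHg) for classifying mean arterial pressure,
# and per-phase linear SBP estimators: SBP ≈ a*MAP + b (coefficients illustrative).
PHASES = [(0, 80), (80, 100), (100, 1000)]           # low / normal / high MAP
MODELS = [(1.30, 10.0), (1.35, 12.0), (1.40, 15.0)]  # (a, b) per phase

def estimate_sbp(pulse_wave):
    # Stand-in for MAP derived from the pulse wave (here: mean of the waveform).
    map_value = float(np.mean(pulse_wave))
    for (lo, hi), (a, b) in zip(PHASES, MODELS):
        if lo <= map_value < hi:
            return a * map_value + b
    raise ValueError("MAP outside modeled range")

sbp = estimate_sbp(np.full(100, 95.0))  # MAP = 95 falls in the 'normal' phase
```

The estimator switches models rather than fitting one global MAP→SBP mapping, which is the classification criterion the abstract describes.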
  • Patent number: 11176950
    Abstract: Disclosed herein are an apparatus and method for recognizing a voice speaker. The apparatus for recognizing a voice speaker includes a voice feature extraction unit configured to extract a feature vector from a voice signal inputted through a microphone; and a speaker recognition unit configured to calculate a speaker recognition score by selecting a reverberant environment from multiple reverberant environment learning data sets based on the feature vector extracted by the voice feature extraction unit and to recognize a speaker by assigning a weight depending on the selected reverberant environment to the speaker recognition score.
    Type: Grant
    Filed: March 20, 2019
    Date of Patent: November 16, 2021
    Assignee: Hyundai Mobis Co., Ltd.
    Inventors: Yu Jin Jung, Ki Hee Park, Chang Won Lee, Doh Hyun Kim, Tae Kyung Kim, Tae Yoon Son, Joon Hyuk Chang, Joon Young Yang
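The selection-and-weighting step above (pick the closest reverberant environment from the learning data sets, then weight the speaker recognition score accordingly) can be sketched as follows; the environment prototypes, weights, and raw score are hypothetical.

```python
import numpy as np

# Hypothetical reverberant-environment prototypes (mean feature per training set)
# and a per-environment weight applied to the recognition score.
env_prototypes = {"small_room": np.array([0.1, 0.2]),
                  "large_hall": np.array([0.9, 0.8])}
env_weights = {"small_room": 1.0, "large_hall": 0.7}

feature = np.array([0.85, 0.75])  # feature vector extracted from the microphone

# Select the closest reverberant environment, then weight the raw score.
env = min(env_prototypes, key=lambda e: np.linalg.norm(feature - env_prototypes[e]))
raw_score = 0.9          # stand-in for the speaker recognition score
weighted_score = env_weights[env] * raw_score
```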
  • Publication number: 20210166705
    Abstract: Proposed are a generative adversarial network-based speech bandwidth extender and extension method. A generative adversarial network-based speech bandwidth extension method, according to an embodiment, comprises the steps of: extracting feature vectors from a narrowband (NB) signal and a wideband (WB) signal of a speech; estimating the feature vector of the wideband signal from the feature vector of the narrowband signal; and learning a deep neural network classification model for discriminating the estimated feature vector of the wideband signal from the actually extracted feature vector of the wideband signal and the actually extracted feature vector of the narrowband signal.
    Type: Application
    Filed: May 17, 2018
    Publication date: June 3, 2021
    Applicant: INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY
    Inventors: Joon-Hyuk CHANG, Kyoungjin NOH
  • Patent number: 11017791
    Abstract: Disclosed are a deep neural network-based method and apparatus for combined noise and echo removal. The deep neural network-based method for combined noise and echo removal according to one embodiment of the present invention may comprise the steps of: extracting a feature vector from an audio signal that includes noise and echo; and acquiring a final audio signal from which both noise and echo have been removed, by using a combined noise and echo removal gain estimated by means of the feature vector and a deep neural network (DNN).
    Type: Grant
    Filed: April 2, 2018
    Date of Patent: May 25, 2021
    Assignee: Industry-University Cooperation Foundation Hanyang University
    Inventors: Joon-Hyuk Chang, Hyeji Seo
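The single combined gain described in this abstract (one DNN-estimated gain per frequency bin that suppresses noise and echo together) can be sketched as a masking operation; the feature choice, layer sizes, and random weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

n_bins = 64
# Hypothetical magnitude spectrum of one frame containing speech, noise, and echo.
mixture_mag = np.abs(rng.normal(size=n_bins)) + 0.1

# Feature vector for the DNN (here: log-magnitude spectrum).
feature = np.log(mixture_mag)

# Stand-in DNN estimating a combined noise-and-echo removal gain per bin.
W1 = rng.normal(scale=0.2, size=(n_bins, 32))
W2 = rng.normal(scale=0.2, size=(32, n_bins))
gain = 1.0 / (1.0 + np.exp(-(np.maximum(0.0, feature @ W1) @ W2)))  # in (0, 1)

# Apply the combined gain once to suppress noise and echo together.
enhanced_mag = gain * mixture_mag
```

Estimating one joint gain avoids cascading separate noise-suppression and echo-cancellation stages, which is the point of the combined approach.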
  • Patent number: 10861466
    Abstract: Disclosed are a packet loss concealment method and apparatus using a generative adversarial network. A method for packet loss concealment in voice communication may include training a classification model based on a generative adversarial network (GAN) with respect to a voice signal including a plurality of frames, training a generative model having a contention relation with the classification model based on the GAN, estimating lost packet information based on the trained generative model with respect to the voice signal encoded by a codec, and restoring a lost packet based on the estimated packet information.
    Type: Grant
    Filed: August 9, 2018
    Date of Patent: December 8, 2020
    Assignee: INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY
    Inventors: Joon-Hyuk Chang, Bong-Ki Lee
  • Patent number: 10854218
    Abstract: A multichannel microphone-based reverberation time estimation method and device which use a deep neural network (DNN) are disclosed. A multichannel microphone-based reverberation time estimation method using a DNN, according to one embodiment, comprises the steps of: receiving a voice signal through a multichannel microphone; deriving a feature vector including spatial information by using the inputted voice signal; and estimating the degree of reverberation by applying the feature vector to the DNN.
    Type: Grant
    Filed: December 15, 2017
    Date of Patent: December 1, 2020
    Assignee: Industry-University Cooperation Foundation Hanyang University
    Inventors: Joon-Hyuk Chang, Myung In Lee
  • Publication number: 20200193291
    Abstract: A noise data artificial intelligence learning method for identifying the source of problematic noise may include a noise data pre-conditioning method for identifying the source of problematic noise including: selecting a unit frame for the problematic noise among noises sampled with time; dividing the unit frame into N segments; analyzing frequency characteristic for each segment of the N segments and extracting a frequency component of each segment by applying Log Mel Filter; and outputting a feature parameter as one representative frame by averaging information on the N segments, wherein an artificial intelligence learning by the feature parameter extracted according to a change in time by the noise data pre-conditioning method applies Bidirectional RNN.
    Type: Application
    Filed: November 18, 2019
    Publication date: June 18, 2020
    Applicants: Hyundai Motor Company, Kia Motors Corporation, IUCF-HYU (Industry-University Cooperation Foundation Hanyang University)
    Inventors: Dong-Chul Lee, In-Soo Jung, Joon-Hyuk Chang, Kyoung-Jin Noh
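The pre-conditioning steps listed in this abstract (unit frame → N segments → per-segment frequency analysis → average into one representative frame) can be sketched as below; a uniform log filterbank stands in for the patent's Log Mel Filter (a true Mel filterbank would use mel-spaced bands), and the frame and segment sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
sr = 16000
noise = rng.normal(size=sr)          # 1 s of hypothetical sampled noise

# 1) Select a unit frame of the sampled noise and 2) divide it into N segments.
N = 8
frame = noise[:8192]
segments = frame.reshape(N, -1)      # N segments of 1024 samples each

# 3) Per-segment frequency analysis with a log filterbank (Mel-like stand-in).
def log_filterbank(segment, n_bands=20):
    spectrum = np.abs(np.fft.rfft(segment)) ** 2
    bands = np.array_split(spectrum, n_bands)
    return np.log(np.array([b.sum() for b in bands]) + 1e-10)

segment_feats = np.array([log_filterbank(s) for s in segments])  # (N, n_bands)

# 4) Average the N segments into one representative feature parameter.
representative = segment_feats.mean(axis=0)
```

Each unit frame thus yields one representative vector; a sequence of such vectors over time is what the abstract feeds to the bidirectional RNN.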
  • Publication number: 20200105287
    Abstract: Disclosed are a deep neural network-based method and apparatus for combined noise and echo removal. The deep neural network-based method for combined noise and echo removal according to one embodiment of the present invention may comprise the steps of: extracting a feature vector from an audio signal that includes noise and echo; and acquiring a final audio signal from which both noise and echo have been removed, by using a combined noise and echo removal gain estimated by means of the feature vector and a deep neural network (DNN).
    Type: Application
    Filed: April 2, 2018
    Publication date: April 2, 2020
    Applicant: Industry-University Cooperation Foundation Hanyang University
    Inventors: Joon-Hyuk CHANG, Hyeji SEO
  • Publication number: 20200082843
    Abstract: A multichannel microphone-based reverberation time estimation method and device which use a deep neural network (DNN) are disclosed. A multichannel microphone-based reverberation time estimation method using a DNN, according to one embodiment, comprises the steps of: receiving a voice signal through a multichannel microphone; deriving a feature vector including spatial information by using the inputted voice signal; and estimating the degree of reverberation by applying the feature vector to the DNN.
    Type: Application
    Filed: December 15, 2017
    Publication date: March 12, 2020
    Applicant: Industry-University Cooperation Foundation Hanyang University
    Inventors: Joon-Hyuk CHANG, Myung In LEE
  • Patent number: 10431240
    Abstract: Provided is a speech enhancement method and a system therefor. The speech enhancement method includes receiving at least one speech signal; generating a first speech signal by performing a primary speech enhancement on the at least one speech signal; selecting a noise removing gain corresponding to the first speech signal from pre-learned noise removing gain information; and generating a second speech signal by performing a secondary speech enhancement on the first speech signal based on the selected noise removing gain.
    Type: Grant
    Filed: September 23, 2015
    Date of Patent: October 1, 2019
    Assignees: Samsung Electronics Co., Ltd, Industry-University Cooperation Foundation Hanyang University
    Inventors: Seung-yeol Lee, Joon-hyuk Chang, Byeong-seob Ko, Song-kyu Park, Tae-jun Park
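The two-stage scheme in this abstract (primary enhancement, then a noise-removing gain selected from pre-learned gain information, then secondary enhancement) can be sketched as follows; the spectral-floor first stage, the SNR-indexed gain table, and all constants are illustrative assumptions, not the patent's learned gains.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical pre-learned gain table: estimated SNR (dB) -> spectral gain.
snr_bins = np.array([-10.0, 0.0, 10.0, 20.0])
gain_table = np.array([0.1, 0.4, 0.7, 0.95])

noisy = rng.normal(size=256) + np.sin(np.linspace(0, 20 * np.pi, 256))

# Primary enhancement: a simple spectral floor stands in for the first stage.
spec = np.fft.rfft(noisy)
mag = np.abs(spec)
first_mag = np.maximum(mag - mag.mean() * 0.5, 0.05 * mag)

# Select the gain matching the first-stage signal from the pre-learned table.
est_snr_db = 10 * np.log10(first_mag.sum() / (mag - first_mag).sum())
gain = gain_table[np.argmin(np.abs(snr_bins - est_snr_db))]

# Secondary enhancement: apply the selected gain to the first-stage spectrum.
second = np.fft.irfft(gain * first_mag * np.exp(1j * np.angle(spec)), n=256)
```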
  • Publication number: 20190295553
    Abstract: Disclosed herein are an apparatus and method for recognizing a voice speaker. The apparatus for recognizing a voice speaker includes a voice feature extraction unit configured to extract a feature vector from a voice signal inputted through a microphone; and a speaker recognition unit configured to calculate a speaker recognition score by selecting a reverberant environment from multiple reverberant environment learning data sets based on the feature vector extracted by the voice feature extraction unit and to recognize a speaker by assigning a weight depending on the selected reverberant environment to the speaker recognition score.
    Type: Application
    Filed: March 20, 2019
    Publication date: September 26, 2019
    Inventors: Yu Jin JUNG, Ki Hee PARK, Chang Won LEE, Doh Hyun KIM, Tae Kyung KIM, Tae Yoon SON, Joon Hyuk CHANG, Joon Young YANG
  • Publication number: 20190051310
    Abstract: Disclosed are a packet loss concealment method and apparatus using a generative adversarial network. A method for packet loss concealment in voice communication may include training a classification model based on a generative adversarial network (GAN) with respect to a voice signal including a plurality of frames, training a generative model having a contention relation with the classification model based on the GAN, estimating lost packet information based on the trained generative model with respect to the voice signal encoded by a codec, and restoring a lost packet based on the estimated packet information.
    Type: Application
    Filed: August 9, 2018
    Publication date: February 14, 2019
    Inventors: Joon-Hyuk Chang, Bong-Ki Lee
  • Publication number: 20170365275
    Abstract: Provided is a speech enhancement method and a system therefor. The speech enhancement method includes receiving at least one speech signal; generating a first speech signal by performing a primary speech enhancement on the at least one speech signal; selecting a noise removing gain corresponding to the first speech signal from pre-learned noise removing gain information; and generating a second speech signal by performing a secondary speech enhancement on the first speech signal based on the selected noise removing gain.
    Type: Application
    Filed: September 23, 2015
    Publication date: December 21, 2017
    Inventors: Seung-yeol LEE, Joon-hyuk CHANG, Byeong-seob KO, Song-kyu PARK, Tae-jun PARK