Patents by Inventor Joon-Hyuk Chang

Joon-Hyuk Chang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12207907
    Abstract: An apparatus and a method for estimating blood pressure are provided. The apparatus for estimating blood pressure includes: a sensor configured to measure a pulse wave signal from an object; and a processor configured to obtain a mean arterial pressure (MAP) based on the pulse wave signal, classify a phase of the obtained MAP according to at least one classification criterion, and obtain a systolic blood pressure (SBP) by using the estimation model corresponding to the classified phase of the MAP among estimation models corresponding to the respective phases of the MAP.
    Type: Grant
    Filed: November 13, 2020
    Date of Patent: January 28, 2025
    Assignees: SAMSUNG ELECTRONICS CO., LTD., IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
    Inventors: Sang Kon Bae, Joon-Hyuk Chang, Chang Mok Choi, Youn Ho Kim, Jin Woo Choi, Jehyun Kyung, Tae-Jun Park, Joon-Young Yang, Inmo Yeon
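The phase-dependent estimation described in this abstract can be sketched as follows. The thresholds, the linear model forms, and all coefficients below are illustrative assumptions, not taken from the patent:

```python
# Hypothetical sketch: classify the MAP into a phase, then apply the
# estimation model registered for that phase to obtain the SBP.

def classify_map_phase(map_mmhg, thresholds=(90.0, 110.0)):
    """Assign the mean arterial pressure to one of three illustrative phases."""
    if map_mmhg < thresholds[0]:
        return "low"
    if map_mmhg < thresholds[1]:
        return "mid"
    return "high"

# One placeholder linear estimator per phase (coefficients invented here).
PHASE_MODELS = {
    "low":  lambda m: 1.30 * m + 15.0,
    "mid":  lambda m: 1.40 * m + 12.0,
    "high": lambda m: 1.50 * m + 10.0,
}

def estimate_sbp(map_mmhg):
    phase = classify_map_phase(map_mmhg)
    return PHASE_MODELS[phase](map_mmhg)

print(round(estimate_sbp(100.0), 1))  # mid phase: 1.40*100 + 12 = 152.0
```

The point of the design is that each MAP regime gets its own estimator rather than one global model covering the full blood-pressure range.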
  • Patent number: 12067989
    Abstract: Presented are a combined learning method and device using a transformed loss function and feature enhancement based on a deep neural network for speaker recognition that is robust in a noisy environment. A combined learning method using a transformed loss function and feature enhancement based on a deep neural network, according to one embodiment, can comprise the steps of: learning a feature enhancement model based on a deep neural network; learning a speaker feature vector extraction model based on the deep neural network; connecting an output layer of the feature enhancement model with an input layer of the speaker feature vector extraction model; and considering the connected feature enhancement model and speaker feature vector extraction model as one model and performing combined learning for additional learning.
    Type: Grant
    Filed: March 30, 2020
    Date of Patent: August 20, 2024
    Assignee: IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
    Inventors: Joon-Hyuk Chang, Joonyoung Yang
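The model-connection step in this abstract can be illustrated with a toy forward pass: the enhancement network's output feeds the speaker network's input, and one combined loss covers both stages. The layer shapes, random weights, and the specific loss terms are placeholders, not the patented architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
W_enh = rng.standard_normal((16, 16)) * 0.1   # feature-enhancement "network"
W_spk = rng.standard_normal((16, 8)) * 0.1    # speaker feature-vector "network"

def enhance(noisy):            # feature enhancement stage
    return np.tanh(noisy @ W_enh)

def speaker_embed(features):   # speaker feature-vector extraction stage
    return np.tanh(features @ W_spk)

noisy = rng.standard_normal(16)
clean = rng.standard_normal(16)
target_emb = rng.standard_normal(8)

# Chained models: enhancement output becomes the speaker model's input.
enhanced = enhance(noisy)
emb = speaker_embed(enhanced)

# Combined loss over both stages, driving joint (additional) learning.
loss = np.mean((enhanced - clean) ** 2) + np.mean((emb - target_emb) ** 2)
print(loss > 0)
```

Training both stages against this single loss is what makes the enhancement front end specialize for the downstream speaker task rather than for generic denoising.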
  • Patent number: 12033613
    Abstract: Proposed are a deep neural network-based non-autoregressive voice synthesizing method and a system therefor. A deep neural network-based non-autoregressive voice synthesizing system according to an embodiment may comprise: a voice feature vector column synthesizing unit which constitutes a non-recursive deep neural network based on multiple decoders, and gradually produces a voice feature vector column through the multiple decoders from a template including temporal information of a voice; and a voice reconstituting unit which transforms the voice feature vector column into voice data, wherein the voice feature vector column synthesizing unit produces a template input, and produces a voice feature vector column by adding, to the template input, sentence data refined through an attention mechanism.
    Type: Grant
    Filed: June 26, 2020
    Date of Patent: July 9, 2024
    Assignee: IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
    Inventors: Joon-Hyuk Chang, Moa Lee
  • Publication number: 20240169973
    Abstract: An exemplary embodiment of the present disclosure is a speech synthesis method, based on a multi-speaker training dataset, of a speech synthesis apparatus, including: pre-training a speech synthesis model using a previously stored neural network with the single-speaker training dataset having the most spoken sentences among the training datasets of a plurality of speakers; fine-tuning the pre-trained speech synthesis model with the training datasets of the plurality of speakers; and applying a target speech dataset to the fine-tuned speech synthesis model to convert it into a mel spectrogram.
    Type: Application
    Filed: November 19, 2021
    Publication date: May 23, 2024
    Applicant: Industry-University Cooperation Foundation Hanyang University
    Inventors: Joon Hyuk CHANG, Jae Uk LEE
  • Publication number: 20240153486
    Abstract: The present disclosure provides an operating method of a speech synthesis system, which includes: inputting a first text and a first speech for the first text, and a second text and a second speech for the second text; generating a speech synthesis model trained by applying the first and second texts and the first and second speeches to curriculum learning; and outputting a target synthesis speech corresponding to a target text based on the speech synthesis model when the target text is input for speech output. The generating of the speech synthesis model includes generating a concatenation text in which the first and second texts are concatenated and a concatenation speech in which the first and second speeches are concatenated, and adding the concatenation text and the concatenation speech to the speech synthesis model when, in learning the concatenation text and the concatenation speech, the error rate is smaller than a set reference rate.
    Type: Application
    Filed: December 2, 2021
    Publication date: May 9, 2024
    Applicant: INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY
    Inventors: Joon Hyuk CHANG, Sung Woong HWANG
  • Patent number: 11972751
    Abstract: Disclosed are a method and an apparatus for detecting a voice end point by using acoustic and language modeling information to achieve robust voice recognition. A voice end point detection method according to an embodiment may comprise the steps of: inputting an acoustic feature vector sequence extracted from a microphone input signal into an acoustic embedding extraction unit, a phonemic embedding extraction unit, and a decoder embedding extraction unit, which are based on a recurrent neural network (RNN); combining acoustic embedding, phonemic embedding, and decoder embedding to configure a feature vector by the acoustic embedding extraction unit, the phonemic embedding extraction unit, and the decoder embedding extraction unit; and inputting the combined feature vector into a deep neural network (DNN)-based classifier to detect a voice end point.
    Type: Grant
    Filed: June 29, 2020
    Date of Patent: April 30, 2024
    Assignee: IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
    Inventors: Joon-Hyuk Chang, Inyoung Hwang
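The combination step in this abstract, concatenating three embedding streams into one feature vector for a DNN classifier, can be sketched as below. All dimensions and weights are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
acoustic_emb = rng.standard_normal(32)   # from the acoustic embedding unit
phonemic_emb = rng.standard_normal(16)   # from the phonemic embedding unit
decoder_emb = rng.standard_normal(16)    # from the decoder embedding unit

# Combine the three embeddings into a single feature vector.
feature = np.concatenate([acoustic_emb, phonemic_emb, decoder_emb])  # (64,)

# Placeholder DNN classifier: one ReLU hidden layer, sigmoid endpoint score.
W1, b1 = rng.standard_normal((64, 8)) * 0.1, np.zeros(8)
W2, b2 = rng.standard_normal(8) * 0.1, 0.0
hidden = np.maximum(0.0, feature @ W1 + b1)
p_endpoint = 1.0 / (1.0 + np.exp(-(hidden @ W2 + b2)))  # in [0, 1]

print(feature.shape, 0.0 <= p_endpoint <= 1.0)
```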
  • Publication number: 20240135954
    Abstract: A voice signal estimation apparatus using an attention mechanism according to an embodiment may comprise a microphone encoder that receives a microphone input signal including an echo signal, and a user's voice signal, converts the microphone input signal into first input information, and outputs the converted first input information, a far-end signal encoder that receives a far-end signal, converts the far-end signal into second input information, and outputs the converted second input information, an attention unit outputting weight information by applying an attention mechanism to the first input information and the second input information, a pre-learned first artificial neural network with third input information, which is the sum information of the weight information and the second input information, as input information, and with first output information including mask information for estimating the voice signal from the second input information as output information, and a voice signal estimator outputting …
    Type: Application
    Filed: January 21, 2022
    Publication date: April 25, 2024
    Applicant: Industry-University Cooperation Foundation Hanyang University
    Inventors: Joon Hyuk CHANG, Song Kyu PARK
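The attention step in this abstract, producing weight information from the two encoder outputs and summing it with the far-end features, can be illustrated with a plain dot-product attention. The shapes and the choice of scaled dot-product attention are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(2)
first = rng.standard_normal((4, 8))    # microphone-encoder output (T x D)
second = rng.standard_normal((4, 8))   # far-end-encoder output (T x D)

# Scaled dot-product attention of the first input over the second,
# row-normalized with a softmax.
scores = first @ second.T / np.sqrt(8)
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
weight_info = weights @ second

# "Sum information": weight information plus the far-end features,
# forming the third input fed to the first neural network.
third = weight_info + second
print(third.shape)
```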
  • Publication number: 20240129410
    Abstract: An integrated noise and echo signal removal device using a parallel deep neural network according to an embodiment comprises a microphone encoder that receives a microphone input signal including an echo signal, and a speaker's voice signal, converts the microphone input signal into first input information, and outputs the converted first input information, a far-end signal encoder that receives a far-end signal, converts the far-end signal into second input information, and outputs the converted second input information, a pre-learned second artificial neural network having third input information, which is the sum of the first input information and the second input information, as input information, and having an estimated echo signal obtained by estimating the echo signal from the second input information as output information, a pre-learned third artificial neural network having the third input information as input information and having an estimated noise signal obtained by estimating the noise …
    Type: Application
    Filed: January 21, 2022
    Publication date: April 18, 2024
    Applicant: Industry-University Cooperation Foundation Hanyang University
    Inventors: Joon Hyuk CHANG, Song Kyu PARK
  • Publication number: 20240105199
    Abstract: A multi-channel based noise and echo signal integrated cancellation device using a deep neural network according to an embodiment comprises a plurality of microphone encoders that receive a plurality of microphone input signals including an echo signal, and a speaker's voice signal, convert the plurality of microphone input signals into a plurality of conversion information, and output the plurality of conversion information, a channel conversion unit that compresses the plurality of conversion information into first input information having the size of a single channel and outputs the converted first input information, a far-end signal encoder that receives a far-end signal, converts the far-end signal into second input information, and outputs the converted second input information, an attention unit outputting weight information by applying an attention mechanism to the first input information and the second input information, a pre-learned first artificial neural network taking …
    Type: Application
    Filed: January 21, 2022
    Publication date: March 28, 2024
    Applicant: Industry-University Cooperation Foundation Hanyang University
    Inventors: Joon Hyuk CHANG, Song Kyu PARK
  • Patent number: 11908447
    Abstract: According to an aspect, a method for synthesizing multi-speaker speech using an artificial neural network comprises generating and storing a speech learning model for a plurality of users by subjecting a synthetic artificial neural network of a speech synthesis model to learning, based on speech data of the plurality of users, generating speaker vectors for a new user who has not been learned and the plurality of users who have already been learned by using a speaker recognition model, determining the speaker vector having the most similar relationship with the speaker vector of the new user according to preset criteria out of the speaker vectors of the plurality of users who have already been learned, and generating and learning a speaker embedding of the new user by subjecting the synthetic artificial neural network of the speech synthesis model to learning, by using a value of a speaker embedding of a user for the determined speaker vector as an initial value and based on speaker data of the new user.
    Type: Grant
    Filed: August 4, 2021
    Date of Patent: February 20, 2024
    Assignee: IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
    Inventors: Joon Hyuk Chang, Jae Uk Lee
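The selection step in this abstract, finding the already-learned speaker vector most similar to the new speaker's vector, can be sketched with cosine similarity as the (assumed, illustrative) similarity criterion:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two speaker vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy speaker vectors for users who have already been learned.
known = {
    "spk_a": np.array([1.0, 0.0, 0.0]),
    "spk_b": np.array([0.0, 1.0, 0.0]),
    "spk_c": np.array([0.7, 0.7, 0.0]),
}
new_speaker = np.array([0.6, 0.8, 0.0])  # vector of the unseen user

# Pick the most similar learned speaker; that speaker's embedding would then
# initialize the new speaker's embedding before fine-tuning.
best = max(known, key=lambda k: cosine(known[k], new_speaker))
print(best)  # spk_c is closest in angle to the new speaker
```

Starting from the nearest speaker's embedding gives the fine-tuning step a much better initial value than a random embedding when the new user has little data.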
  • Patent number: 11854497
    Abstract: The present invention relates to a display apparatus that allows a compensated data voltage to be supplied to each pixel by compensating for the data voltage so as to prevent burn-in from occurring in a display panel, a method for compensating a data signal thereof, and a method for generating a deep learning-based compensation model. To implement same, the present invention provides the display apparatus comprising a timing controller having mounted therein the compensation model generated by learning, in a deep learning method, the temperature, time, average brightness, and data voltage for each pixel. Accordingly, the present invention has an effect of preventing burn-in from occurring in each pixel by supplying each pixel with the compensated data voltage generated via the compensation model.
    Type: Grant
    Filed: June 21, 2021
    Date of Patent: December 26, 2023
    Assignees: LG Display Co., Ltd., Industry-University Cooperation Foundation Hanyang University
    Inventors: Joon-Hyuk Chang, Kwanghwan Ji, Kwan-Ho Park, Kiseok Chang, Junghoon Seo, Kipyo Hong, Hyojung Park, Seunghyuck Lee
  • Patent number: 11854554
    Abstract: Presented are a combined learning method and device using a transformed loss function and feature enhancement based on a deep neural network for speaker recognition that is robust to a noisy environment.
    Type: Grant
    Filed: March 30, 2020
    Date of Patent: December 26, 2023
    Assignee: IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
    Inventors: Joon-Hyuk Chang, Joonyoung Yang
  • Publication number: 20230386457
    Abstract: Disclosed is a transformer-based voice recognition technology using an improved voice as a conditioning feature. A voice recognition method performed by a voice recognition system may include inputting, to a voice recognition model, clean voice data estimated by a voice improvement model and voice data including noise and performing voice recognition based on the estimated clean voice data and the voice data including the noise by using the voice recognition model. The voice recognition model may be trained to perform the voice recognition robust against noise through a combination of a voice feature of the voice data including the noise and a voice feature of the estimated clean voice data by using the estimated clean voice data as a conditioning feature.
    Type: Application
    Filed: August 9, 2022
    Publication date: November 30, 2023
    Applicant: IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
    Inventors: Joon-Hyuk CHANG, Da-Hee YANG
  • Publication number: 20230359885
    Abstract: Disclosed are a system and method for automating the design of a sound source separation deep learning model. A method of automating a design of a sound source separation deep learning model, which is performed by a design automation system, may include automatically searching for a combination of hyper parameters of a separation model constructed in a sound source separation deep learning model by using a neural architecture search (NAS) algorithm and reconstructing the sound source separation deep learning model based on the retrieved combination of the hyper parameters of the separation model.
    Type: Application
    Filed: August 10, 2022
    Publication date: November 9, 2023
    Applicant: IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
    Inventors: Joon-Hyuk CHANG, Joo-Hyun LEE
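The search-and-rebuild idea in this abstract can be illustrated with a toy hyper-parameter search. The search space, the scoring function, and the use of random search (the patent describes a NAS algorithm, not random search) are all invented for illustration:

```python
import random

random.seed(0)

# Toy search space over separation-model hyper parameters.
SPACE = {"layers": [2, 4, 6], "kernel": [3, 5, 7], "channels": [64, 128]}

def score(cfg):
    """Stand-in for the validation score of the rebuilt separation model."""
    return -abs(cfg["layers"] - 4) - abs(cfg["kernel"] - 5) + cfg["channels"] / 128

# Sample candidate configurations and keep the best-scoring one; the model
# would then be reconstructed from this retrieved combination.
best = max(
    ({k: random.choice(v) for k, v in SPACE.items()} for _ in range(20)),
    key=score,
)
print(best)
```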
  • Publication number: 20230351174
    Abstract: A method of automatically creating an artificial intelligence (AI) diagnostic model for diagnosing an abnormal state of a vehicle includes: acquiring noise and vibration data measured by a sensor of the vehicle as input data, processing the input data, searching and selecting an architecture of the AI diagnostic model based on the processed input data, and providing the AI diagnostic model to diagnose the abnormal state of the vehicle, where an efficient neural architecture search (ENAS) is applied to update the AI diagnostic model and a parameter configuring the AI diagnostic model, the ENAS sharing the parameter with the updated AI diagnostic model.
    Type: Application
    Filed: September 21, 2022
    Publication date: November 2, 2023
    Inventors: Dong-Chul Lee, In-Soo Jung, Joo-Hyun Lee, Joon-Hyuk Chang, Kyoung-Jin Noh
  • Publication number: 20230329566
    Abstract: An apparatus for estimating blood pressure includes: a photoplethysmogram (PPG) sensor configured to measure a PPG signal from an object; a force sensor configured to measure a force signal acting between the object and the PPG sensor; and a processor configured to (i) divide a predetermined blood pressure range into a plurality of classes, (ii) input the measured PPG signal and the measured force signal into a blood pressure estimation model to obtain the probability values for each of the classes, and (iii) estimate blood pressure based on the obtained probability values for the respective classes.
    Type: Application
    Filed: October 18, 2022
    Publication date: October 19, 2023
    Applicants: SAMSUNG ELECTRONICS CO., LTD., IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
    Inventors: Sang Kon BAE, Joon-Hyuk CHANG, Youn Ho KIM, Jin Woo CHOI, Jehyun KYUNG, Joon-Young YANG, Ye-Rin JEOUNG, Jeong-Hwan CHOI
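Step (iii) of this abstract, turning per-class probabilities into a blood-pressure estimate, can be sketched as below. The range, the class width, and the probability-weighted-mean readout are assumptions for illustration, not claimed verbatim by the abstract:

```python
import numpy as np

# (i) Divide an assumed range of 80-180 mmHg into 10 classes of 10 mmHg each.
edges = np.linspace(80.0, 180.0, 11)
centers = (edges[:-1] + edges[1:]) / 2.0        # 85, 95, ..., 175

# (ii) Placeholder model output: a probability per class.
probs = np.zeros(10)
probs[3:6] = [0.2, 0.5, 0.3]

# (iii) One natural estimate: probability-weighted mean of the class centers.
estimate = float(probs @ centers)
print(round(estimate, 1))  # 0.2*115 + 0.5*125 + 0.3*135 = 126.0
```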
  • Patent number: 11790929
    Abstract: According to an aspect, a WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network includes a signal reception unit for receiving as input a first speech signal through a single channel microphone, a signal generation unit for generating a second speech signal by applying a virtual acoustic channel expansion algorithm based on a deep neural network to the first speech signal and a dereverberation unit for removing reverberation of the first speech signal and generating a dereverberated signal from which the reverberation has been removed by applying a dual-channel weighted prediction error (WPE) algorithm based on a deep neural network to the first speech signal and the second speech signal.
    Type: Grant
    Filed: August 4, 2021
    Date of Patent: October 17, 2023
    Assignee: IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
    Inventors: Joon Hyuk Chang, Joon Young Yang
  • Publication number: 20230290336
    Abstract: Proposed are a speech recognition system and method for automatically calibrating a data label. A speech recognition method for automatically calibrating a data label according to an embodiment may comprise the steps of: performing confidence-based filtering to find the location of occurrence of a wrong label in time-series speech data, in which a correct label and the wrong label are temporally mixed, by using a transformer-based speech recognition model; and after performing filtering, replacing a label at a decoder time step, which has been determined to be a wrong label by the location of occurrence of the wrong label, so as to improve the performance of the transformer-based speech recognition model, wherein the step of performing confidence-based filtering to find the location of occurrence of the wrong label in the time-series speech data comprises finding and calibrating the wrong label using the confidence obtained by using a transition probability between labels at every decoder time step.
    Type: Application
    Filed: July 19, 2021
    Publication date: September 14, 2023
    Applicant: IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
    Inventors: Joon-Hyuk CHANG, Jaehong LEE
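The replacement step in this abstract can be sketched as follows: at each decoder time step, a label whose confidence falls below a threshold is treated as a wrong label and replaced. The flat threshold and the model hypotheses below are placeholders; the patent derives confidence from transition probabilities between labels:

```python
def calibrate_labels(labels, confidences, hypotheses, threshold=0.5):
    """Replace low-confidence labels with the model hypothesis at that step."""
    out = []
    for lab, conf, hyp in zip(labels, confidences, hypotheses):
        out.append(hyp if conf < threshold else lab)
    return out

labels      = ["an", "nal", "og", "clock"]   # time-series labels, one wrong
confidences = [0.9,  0.2,   0.8, 0.95]       # per-step confidence
hypotheses  = ["an", "a",   "og", "clock"]   # model's own best hypotheses

print(calibrate_labels(labels, confidences, hypotheses))
# → ['an', 'a', 'og', 'clock']
```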
  • Publication number: 20230256982
    Abstract: A vehicle judder diagnostic method using artificial intelligence applied to a mobile-based GDS according to the present disclosure is characterized in that the mobile-based GDS samples a plurality of sensor signals from a sensor mounted in a vehicle during operation in a judder evaluation mode to quickly and separately diagnose whether the judder phenomenon of the vehicle is a geometric judder or a friction judder, by mounting a deep neural network (DNN) model, developed by a trial-and-error DNN process using the plurality of sensor signals of a test vehicle mounted with a double clutch transmission (DCT), as a judder-determination artificial intelligence model in the mobile-based GDS.
    Type: Application
    Filed: November 14, 2022
    Publication date: August 17, 2023
    Inventors: Jae-Min JIN, In-Soo JUNG, Joon-Hyuk CHANG
  • Publication number: 20230252946
    Abstract: The present invention relates to a display apparatus that allows a compensated data voltage to be supplied to each pixel by compensating for the data voltage so as to prevent burn-in from occurring in a display panel, a method for compensating a data signal thereof, and a method for generating a deep learning-based compensation model. To implement same, the present invention provides the display apparatus comprising a timing controller having mounted therein the compensation model generated by learning, in a deep learning method, the temperature, time, average brightness, and data voltage for each pixel. Accordingly, the present invention has an effect of preventing burn-in from occurring in each pixel by supplying each pixel with the compensated data voltage generated via the compensation model.
    Type: Application
    Filed: June 21, 2021
    Publication date: August 10, 2023
    Inventors: Joon-Hyuk CHANG, Kwanghwan JI, Kwan-Ho PARK, Kiseok CHANG, Junghoon SEO, Kipyo HONG, Hyojung PARK, Seunghyuck LEE