Patents by Inventor Hyuk Chang

Hyuk Chang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220270414
    Abstract: A device for AI-based vehicle diagnosis using CAN data may include an engine; a vibration sensor mounted in an engine compartment in which the engine is mounted and configured to detect a vibration signal; and a controller area network (CAN) communicating one or more of an environmental condition, a vehicle status, an engine status, and an engine control parameter, wherein data from the vibration sensor and the CAN is preprocessed to determine features for which the correlation between the CAN data and the vibration data (dB) exceeding a threshold value for irregular vibrations generated by the engine is equal to or greater than 90%.
    Type: Application
    Filed: July 15, 2021
    Publication date: August 25, 2022
    Applicants: Hyundai Motor Company, Kia Corporation, IUCF-HYU (Industry-University Cooperation Foundation Hanyang University)
    Inventors: Dong-Chul LEE, In-Soo JUNG, Dong-Yeoup JEON, Joon-Hyuk CHANG
  • Publication number: 20220230627
    Abstract: Disclosed are a method and an apparatus for detecting a voice end point by using acoustic and language modeling information to achieve robust voice recognition. A voice end point detection method according to an embodiment may comprise the steps of: inputting an acoustic feature vector sequence extracted from a microphone input signal into an acoustic embedding extraction unit, a phonemic embedding extraction unit, and a decoder embedding extraction unit, which are based on a recurrent neural network (RNN); combining, by the acoustic embedding extraction unit, the phonemic embedding extraction unit, and the decoder embedding extraction unit, the acoustic embedding, phonemic embedding, and decoder embedding to form a feature vector; and inputting the combined feature vector into a deep neural network (DNN)-based classifier to detect a voice end point.
    Type: Application
    Filed: June 9, 2020
    Publication date: July 21, 2022
    Applicant: IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
    Inventors: Joon-Hyuk CHANG, Inyoung HWANG
  • Publication number: 20220208198
    Abstract: Presented are a combined learning method and device using a transformed loss function and feature enhancement based on a deep neural network for speaker recognition that is robust in a noisy environment. A combined learning method using a transformed loss function and feature enhancement based on a deep neural network, according to one embodiment, can comprise the steps of: learning a feature enhancement model based on a deep neural network; learning a speaker feature vector extraction model based on the deep neural network; connecting an output layer of the feature enhancement model with an input layer of the speaker feature vector extraction model; and considering the connected feature enhancement model and speaker feature vector extraction model as one model and performing combined learning for additional learning.
    Type: Application
    Filed: March 30, 2020
    Publication date: June 30, 2022
    Applicant: IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
    Inventors: Joon-Hyuk CHANG, Joonyoung YANG
  • Publication number: 20220199095
    Abstract: Presented are a combined learning method and device using a transformed loss function and feature enhancement based on a deep neural network for speaker recognition that is robust to a noisy environment.
    Type: Application
    Filed: March 30, 2020
    Publication date: June 23, 2022
    Applicant: INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY
    Inventors: Joon-Hyuk CHANG, Joonyoung YANG
  • Publication number: 20220165011
    Abstract: A method for rearranging image cuts of cartoon content is performed by a computing device and includes the steps of loading first content in which a plurality of image cuts are arrayed two-dimensionally; extracting a plurality of cut areas, in which the plurality of image cuts from the first content are positioned, respectively; determining the arrayed order of the plurality of image cuts; and generating second content by rearranging the plurality of cut areas according to the arrayed order.
    Type: Application
    Filed: February 7, 2022
    Publication date: May 26, 2022
    Inventors: Jae Hyuk CHANG, Chan Kyu PARK, Sung Kil LEE, Soon Hyeon KWON, So Young PARK
  • Publication number: 20220108681
    Abstract: Proposed are a deep neural network-based non-autoregressive voice synthesizing method and a system therefor. A deep neural network-based non-autoregressive voice synthesizing system according to an embodiment may comprise: a voice feature vector sequence synthesizing unit which constitutes a non-recursive deep neural network based on multiple decoders, and gradually produces a voice feature vector sequence through the multiple decoders from a template including temporal information of a voice; and a voice reconstituting unit which transforms the voice feature vector sequence into voice data, wherein the voice feature vector sequence synthesizing unit produces a template input, and produces a voice feature vector sequence by adding, to the template input, sentence data refined through an attention mechanism.
    Type: Application
    Filed: June 26, 2020
    Publication date: April 7, 2022
    Applicant: IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
    Inventors: Joon-Hyuk CHANG, Moa LEE
  • Publication number: 20220092106
    Abstract: A deep learning-based coloring system includes a memory network configured to provide a color feature in response to a specific query and a coloring network configured to perform coloring, based on the color feature generated by the memory network. The memory network includes: a query generation unit configured to generate a query; a neighbor calculation unit configured to calculate k-nearest neighbors, based on similarities between the query and key memory values; a color feature determination unit configured to generate color features for indicating color information stored in the key memory; a threshold triplet loss calculation unit configured to calculate a threshold triplet loss, based on a comparison between a threshold and a distance between the color features; and a memory update unit configured to update a memory, based on whether a distance between a top value and a value of a newly input query is within the threshold.
    Type: Application
    Filed: October 1, 2021
    Publication date: March 24, 2022
    Inventors: Jae Hyuk CHANG, Jae Gul CHOO, Seung Joo YOO, Sung Hyo CHUNG, Ga Young LEE, Hyo Jin BAHNG
  • Patent number: 11238877
    Abstract: Proposed are a generative adversarial network-based speech bandwidth extender and extension method. A generative adversarial network-based speech bandwidth extension method, according to an embodiment, comprises the steps of: extracting feature vectors from a narrowband (NB) signal and a wideband (WB) signal of a speech; estimating the feature vector of the wideband signal from the feature vector of the narrowband signal; and learning a deep neural network classification model for discriminating the estimated feature vector of the wideband signal from the actually extracted feature vector of the wideband signal and the actually extracted feature vector of the narrowband signal.
    Type: Grant
    Filed: May 17, 2018
    Date of Patent: February 1, 2022
    Assignee: IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
    Inventors: Joon-Hyuk Chang, Kyoungjin Noh
  • Publication number: 20220015651
    Abstract: An apparatus and a method for estimating blood pressure are provided. The apparatus for estimating blood pressure includes: a sensor configured to measure a pulse wave signal from an object; and a processor configured to obtain a mean arterial pressure (MAP) based on the pulse wave signal, to classify a phase of the obtained MAP according to at least one classification criterion, and to obtain a systolic blood pressure (SBP) by using an estimation model corresponding to the classified phase of the MAP among estimation models corresponding to respective phases of the MAP.
    Type: Application
    Filed: November 13, 2020
    Publication date: January 20, 2022
    Applicants: SAMSUNG ELECTRONICS CO., LTD., IUCF-HYU (Industry-University Cooperation Foundation Hanyang University)
    Inventors: Sang Kon Bae, Joon-Hyuk Chang, Chang Mok Choi, Youn Ho Kim, Jin Woo Choi, Jehyun Kyung, Tae-Jun Park, Joon-Young Yang, Inmo Yeon
  • Patent number: 11176950
    Abstract: Disclosed herein are an apparatus and method for recognizing a voice speaker. The apparatus for recognizing a voice speaker includes a voice feature extraction unit configured to extract a feature vector from a voice signal inputted through a microphone; and a speaker recognition unit configured to calculate a speaker recognition score by selecting a reverberant environment from multiple reverberant environment learning data sets based on the feature vector extracted by the voice feature extraction unit and to recognize a speaker by assigning a weight depending on the selected reverberant environment to the speaker recognition score.
    Type: Grant
    Filed: March 20, 2019
    Date of Patent: November 16, 2021
    Assignee: Hyundai Mobis Co., Ltd.
    Inventors: Yu Jin Jung, Ki Hee Park, Chang Won Lee, Doh Hyun Kim, Tae Kyung Kim, Tae Yoon Son, Joon Hyuk Chang, Joon Young Yang
  • Patent number: 11165984
    Abstract: A camera system with a complementary pixlet structure and a method of operating the same are provided. The camera system includes an image sensor that includes at least one 2×2 pixel block including a first pixel, a second pixel, and two third pixels. The two third pixels are disposed at positions diagonal to each other in the 2×2 pixel block and include deflected small pixlets, which are deflected in opposite directions to be symmetrical to each other with respect to each pixel center, and large pixlets adjacent to the deflected small pixlets, respectively, and each pixlet includes a photodiode converting an optical signal to an electrical signal. The camera system also includes a depth calculator that receives images acquired from the deflected small pixlets of the two third pixels and calculates a depth between the image sensor and an object using a parallax between the images.
    Type: Grant
    Filed: November 4, 2020
    Date of Patent: November 2, 2021
    Assignee: Dexelion Inc.
    Inventors: Chong Min Kyung, Hyun Sang Park, Seung Hyuk Chang, Jong Ho Park, Sang Jin Lee
  • Publication number: 20210281790
    Abstract: A camera system with a complementary pixlet structure and a method of operating the same are provided. The camera system includes an image sensor that includes at least one 2×2 pixel block including a first pixel, a second pixel, and two third pixels. The two third pixels are disposed at positions diagonal to each other in the 2×2 pixel block and include deflected small pixlets, which are deflected in opposite directions to be symmetrical to each other with respect to each pixel center, and large pixlets adjacent to the deflected small pixlets, respectively, and each pixlet includes a photodiode converting an optical signal to an electrical signal. The camera system also includes a depth calculator that receives images acquired from the deflected small pixlets of the two third pixels and calculates a depth between the image sensor and an object using a parallax between the images.
    Type: Application
    Filed: November 4, 2020
    Publication date: September 9, 2021
    Inventors: Chong Min KYUNG, Hyun Sang PARK, Seung Hyuk CHANG, Jong Ho PARK, Sang Jin LEE
  • Publication number: 20210258522
    Abstract: A camera system with a complementary pixlet structure and a method of operating the same are provided. According to an embodiment, the camera system includes an image sensor that includes two pixels, each of the two pixels including a deflected small pixlet deflected in one direction based on a pixel center and a large pixlet disposed adjacent to the deflected small pixlet, each pixlet including a photodiode converting an optical signal to an electric signal, and the deflected small pixlets of the two pixels being arranged to be symmetrical to each other with respect to each of the pixel centers within each of the two pixels, respectively, and a depth calculator that receives images acquired from the deflected small pixlets of the two pixels and calculates a depth between the image sensor and an object using a parallax between the images.
    Type: Application
    Filed: November 4, 2020
    Publication date: August 19, 2021
    Inventors: Chong Min KYUNG, Seung Hyuk CHANG, Hyun Sang PARK, Jong Ho PARK, Sang Jin LEE
  • Publication number: 20210166705
    Abstract: Proposed are a generative adversarial network-based speech bandwidth extender and extension method. A generative adversarial network-based speech bandwidth extension method, according to an embodiment, comprises the steps of: extracting feature vectors from a narrowband (NB) signal and a wideband (WB) signal of a speech; estimating the feature vector of the wideband signal from the feature vector of the narrowband signal; and learning a deep neural network classification model for discriminating the estimated feature vector of the wideband signal from the actually extracted feature vector of the wideband signal and the actually extracted feature vector of the narrowband signal.
    Type: Application
    Filed: May 17, 2018
    Publication date: June 3, 2021
    Applicant: INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY
    Inventors: Joon-Hyuk CHANG, Kyoungjin NOH
  • Patent number: 11017791
    Abstract: Disclosed are a deep neural network-based method and apparatus for combined noise and echo removal. The deep neural network-based method for combined noise and echo removal according to one embodiment of the present invention may comprise the steps of: extracting a feature vector from an audio signal that includes noise and echo; and acquiring a final audio signal from which both noise and echo have been removed, by using a combined noise and echo removal gain estimated by means of the feature vector and a deep neural network (DNN).
    Type: Grant
    Filed: April 2, 2018
    Date of Patent: May 25, 2021
    Assignee: Industry-University Cooperation Foundation Hanyang University
    Inventors: Joon-Hyuk Chang, Hyeji Seo
  • Patent number: 10916576
    Abstract: A camera system includes a single lens and an image sensor including a reference pixel array including a plurality of W (white) pixels in a two-dimensional arrangement and a single microlens formed on the plurality of W pixels to be shared, and at least one color pixel array including two W pixels and two color pixels in a two-dimensional arrangement, and a single microlens disposed on the two W pixels and the two color pixels to be shared. Light shielding layers formed with Offset Pixel Apertures (OPAs) are disposed on the plurality of W pixels included in the reference pixel array and the two W pixels included in the at least one color pixel array, respectively, and the OPAs are formed on the light shielding layers in the reference pixel array and the at least one color pixel array, respectively, to maximize a spaced distance between the OPAs.
    Type: Grant
    Filed: May 3, 2019
    Date of Patent: February 9, 2021
    Assignee: CENTER FOR INTEGRATED SMART SENSORS FOUNDATION
    Inventors: Chong Min Kyung, Seung Hyuk Chang, Won Seok Choi
  • Patent number: 10893255
    Abstract: A camera system is provided to increase a baseline. The camera system includes a single lens, and an image sensor that includes at least one pixel array, each of the at least one pixel array including a plurality of pixels in a two-dimensional arrangement and a single microlens disposed on the plurality of pixels to be shared. Light shielding layers formed with Offset Pixel Apertures (OPAs) are disposed on at least two pixels of the plurality of pixels, and the OPAs are formed on the light shielding layers to maximize a spaced distance between the OPAs.
    Type: Grant
    Filed: May 14, 2019
    Date of Patent: January 12, 2021
    Assignee: Center For Integrated Smart Sensors Foundation
    Inventors: Chong Min Kyung, Seung Hyuk Chang
  • Patent number: 10861466
    Abstract: Disclosed are a packet loss concealment method and apparatus using a generative adversarial network. A method for packet loss concealment in voice communication may include training a classification model based on a generative adversarial network (GAN) with respect to a voice signal including a plurality of frames, training a generative model having a contention relation with the classification model based on the GAN, estimating lost packet information based on the trained generative model with respect to the voice signal encoded by a codec, and restoring a lost packet based on the estimated packet information.
    Type: Grant
    Filed: August 9, 2018
    Date of Patent: December 8, 2020
    Assignee: INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY
    Inventors: Joon-Hyuk Chang, Bong-Ki Lee
  • Patent number: 10854218
    Abstract: A multichannel microphone-based reverberation time estimation method and device which use a deep neural network (DNN) are disclosed. A multichannel microphone-based reverberation time estimation method using a DNN, according to one embodiment, comprises the steps of: receiving a voice signal through a multichannel microphone; deriving a feature vector including spatial information by using the inputted voice signal; and estimating the degree of reverberation by applying the feature vector to the DNN.
    Type: Grant
    Filed: December 15, 2017
    Date of Patent: December 1, 2020
    Assignee: Industry-University Cooperation Foundation Hanyang University
    Inventors: Joon-Hyuk Chang, Myung In Lee
  • Publication number: 20200193291
    Abstract: A noise data artificial intelligence learning method for identifying the source of problematic noise may include a noise data pre-conditioning method for identifying the source of problematic noise, including: selecting a unit frame for the problematic noise among noises sampled over time; dividing the unit frame into N segments; analyzing the frequency characteristics of each of the N segments and extracting a frequency component of each segment by applying a Log Mel Filter; and outputting a feature parameter as one representative frame by averaging information on the N segments, wherein artificial intelligence learning with the feature parameter extracted according to a change in time by the noise data pre-conditioning method applies a bidirectional RNN.
    Type: Application
    Filed: November 18, 2019
    Publication date: June 18, 2020
    Applicants: Hyundai Motor Company, Kia Motors Corporation, IUCF-HYU (Industry-University Cooperation Foundation Hanyang University)
    Inventors: Dong-Chul Lee, In-Soo Jung, Joon-Hyuk Chang, Kyoung-Jin Noh
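Several of the speech-processing entries above (e.g., publication number 20220230627) describe the same high-level pattern: extract several embeddings, concatenate them into a single per-frame feature vector, and feed the result to a DNN-based classifier. The following is a minimal illustrative sketch of that generic pattern only, not the patented method; the layer sizes, random weights, and the 0.5 decision threshold are all hypothetical.

```python
import numpy as np

def combine_embeddings(acoustic, phonemic, decoder):
    """Concatenate per-frame embeddings into one feature vector per frame."""
    return np.concatenate([acoustic, phonemic, decoder], axis=-1)

def dnn_classifier(features, w1, b1, w2, b2):
    """Tiny feed-forward classifier: one ReLU hidden layer, sigmoid output."""
    hidden = np.maximum(0.0, features @ w1 + b1)   # ReLU activation
    logits = hidden @ w2 + b2
    return 1.0 / (1.0 + np.exp(-logits))           # per-frame probability

# Toy example: 5 frames, embedding sizes 8 / 4 / 4 (all hypothetical).
rng = np.random.default_rng(0)
feats = combine_embeddings(rng.normal(size=(5, 8)),
                           rng.normal(size=(5, 4)),
                           rng.normal(size=(5, 4)))
probs = dnn_classifier(feats,
                       rng.normal(size=(16, 32)), np.zeros(32),
                       rng.normal(size=(32, 1)), np.zeros(1))
# A frame is flagged (e.g., as a voice end point) where the probability
# exceeds a threshold; 0.5 here is an arbitrary placeholder.
endpoints = probs[:, 0] > 0.5
```

In practice the embeddings would come from trained RNN extractors and the classifier weights from supervised training; the sketch shows only how the combined feature vector flows into the classifier.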