Patents by Inventor Byung Ok Kang

Byung Ok Kang has filed for patents to protect the following inventions. This listing includes both patent applications that are still pending and patents that have been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240105166
    Abstract: Provided is a self-supervised learning method based on permutation invariant cross entropy, performed by an electronic device, which includes: defining a cross entropy loss function for pre-training of an end-to-end speech recognition model; configuring non-transcription speech corpus data composed only of speech as input data of the cross entropy loss function; setting all permutations of classes included in the non-transcription speech corpus data as an output target and calculating cross entropy losses for each class; and determining a minimum cross entropy loss among the calculated cross entropy losses for each class as a final loss.
    Type: Application
    Filed: July 11, 2023
    Publication date: March 28, 2024
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Hoon CHUNG, Byung Ok KANG, Yoonhyung KIM
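
The loss described in publication 20240105166 can be read as a PIT-style objective: score every assignment of an unordered class set and keep the smallest cross entropy. A minimal sketch under that reading, with toy shapes and random log posteriors rather than anything taken from the application:

```python
# Permutation invariant cross entropy, toy version (shapes and data are assumptions).
import itertools
import numpy as np

def permutation_invariant_ce(log_probs, class_set):
    """log_probs: (num_segments, num_classes) log posteriors.
    class_set: unordered collection of class indices, one per segment."""
    losses = []
    for perm in itertools.permutations(class_set):
        # Cross entropy of this particular segment-to-class assignment.
        ce = -np.mean([log_probs[i, c] for i, c in enumerate(perm)])
        losses.append(ce)
    # The minimum over all candidate assignments is taken as the final loss.
    return min(losses)

# Toy usage: 3 speech segments, 3 candidate classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 3))
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
print(permutation_invariant_ce(log_probs, [0, 1, 2]))
```

Enumerating permutations grows factorially, so a practical system would limit the per-utterance class set or use an assignment solver; the sketch only illustrates the min-over-assignments idea.
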
  • Publication number: 20240092141
    Abstract: An air conditioning device for a vehicle includes: a housing having an inside divided into an inflow space, a heat exchange space, and an outflow space arranged in a straight line, and having a plurality of discharge ports, which communicate with an interior, at the inflow space; a blowing unit disposed at the inflow space of the housing and configured to blow air; a heat exchange unit disposed at the heat exchange space of the housing and configured to adjust a temperature of conditioned air by exchanging heat with air; and an opening-closing door disposed at the outflow space of the housing and configured to open and close the plurality of discharge ports such that conditioned air at an adjusted temperature selectively flows to the plurality of discharge ports. The air conditioning device adjusts the temperature of conditioned air for respective modes and reduces the flow resistance of air.
    Type: Application
    Filed: March 8, 2023
    Publication date: March 21, 2024
    Applicants: HYUNDAI MOTOR COMPANY, KIA CORPORATION, DOOWON CLIMATE CONTROL CO., LTD.
    Inventors: Kwang Ok Han, Young Tae Song, Yong Chul Kim, Gee Young Shin, Su Yeon Kang, Jae Sik Choi, Dae Hee Lee, Byeong Moo Jang, Ung Hwi Kim, Jae Won Cha, Won Jun Joung, Byung Guk An
  • Publication number: 20240083811
    Abstract: A glass article includes a first surface, a second surface opposed to the first surface, a first compressive region extending from the first surface to a first compression depth, a second compressive region extending from the second surface to a second compression depth and a tensile region between the first compression depth and the second compression depth. A stress profile of the first compressive region includes a first segment located between the first surface and a first transition point and a second segment located between the first transition point and the first compression depth. A depth from the first surface to the first transition point ranges from 6.1 μm to 8.1 μm. A compressive stress at the first transition point ranges from 207 MPa to 254 MPa. A stress-depth ratio of the first transition point ranges from 28 MPa/μm to 35 MPa/μm.
    Type: Application
    Filed: November 20, 2023
    Publication date: March 14, 2024
    Inventors: Gyu In SHIM, Seung KIM, Byung Hoon KANG, Young Ok PARK, Su Jin SUNG
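
The three numeric ranges in publication 20240083811 can be cross-checked with simple arithmetic, assuming the stress-depth ratio of the transition point means its compressive stress divided by its depth (an assumption; the abstract does not define the ratio):

```python
# Arithmetic check of the quoted ranges (the ratio definition is an assumption).
def stress_depth_ratio(stress_mpa, depth_um):
    return stress_mpa / depth_um

# Corner cases of the claimed ranges: depth 6.1-8.1 um, stress 207-254 MPa.
for depth in (6.1, 8.1):
    for stress in (207, 254):
        r = stress_depth_ratio(stress, depth)
        print(f"depth={depth} um, stress={stress} MPa -> {r:.1f} MPa/um, "
              f"within 28-35: {28 <= r <= 35}")
```

Under that reading, some corner combinations fall outside 28-35 MPa/μm, so the ratio range acts as an additional constraint rather than a consequence of the other two ranges.
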
  • Patent number: 11912613
    Abstract: A glass article includes lithium aluminosilicate and includes a first surface, a second surface opposed to the first surface, a first compressive region extending from the first surface to a first compression depth, a second compressive region extending from the second surface to a second compression depth, and a tensile region disposed between the first compression depth and the second compression depth, where a stress profile of the first compressive region has a first local minimum point at which the stress profile is convex downward and a first local maximum point at which the stress profile is convex upward, where a depth of the first local maximum point is greater than a depth of the first local minimum point, and where a stress of the first local maximum point is greater than a compressive stress of the first local minimum point.
    Type: Grant
    Filed: July 10, 2020
    Date of Patent: February 27, 2024
    Assignee: SAMSUNG DISPLAY CO., LTD.
    Inventors: Su Jin Sung, Byung Hoon Kang, Seung Kim, Young Ok Park, Gyu In Shim
  • Patent number: 11912603
    Abstract: A glass article includes a first surface; a second surface opposed to the first surface; a side surface connecting the first surface to the second surface; a first surface compressive region extending from the first surface to a first depth; a second surface compressive region extending from the second surface to a second depth; and a side compressive region extending from the side surface to a third depth, where the first surface and the side surface are non-tin surfaces, the second surface is a tin surface, and a maximum compressive stress of the second surface compressive region is greater than a maximum compressive stress of the first surface compressive region.
    Type: Grant
    Filed: January 8, 2021
    Date of Patent: February 27, 2024
    Assignee: SAMSUNG DISPLAY CO., LTD.
    Inventors: Su Jin Sung, Byung Hoon Kang, Seung Kim, Young Ok Park, Gyu In Shim
  • Publication number: 20230134942
    Abstract: Disclosed herein are an apparatus and method for self-supervised training of an end-to-end speech recognition model. The apparatus includes memory in which at least one program is recorded and a processor for executing the program. The program trains an end-to-end speech recognition model, including an encoder and a decoder, using untranscribed speech data. The program may add predetermined noise to the input signal of the end-to-end speech recognition model, and may calculate loss by reflecting a predetermined constraint based on the output of the encoder of the end-to-end speech recognition model.
    Type: Application
    Filed: October 7, 2022
    Publication date: May 4, 2023
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Hoon CHUNG, Byung-Ok KANG, Jeom-Ja KANG, Yun-Kyung LEE, Hyung-Bae JEON
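
Publication 20230134942 adds noise to the input of the end-to-end model and computes the loss with a constraint based on the encoder output. A rough numpy sketch of an objective with that shape, using random linear stand-ins for the encoder and decoder and an assumed consistency-style constraint (the application does not specify this particular constraint or any of these values):

```python
# Toy denoising-style objective with a constraint on the encoder output (all
# shapes, the noise level, and the constraint weight are assumptions).
import numpy as np

rng = np.random.default_rng(1)
W_enc = rng.normal(scale=0.1, size=(40, 16))   # stand-in "encoder"
W_dec = rng.normal(scale=0.1, size=(16, 40))   # stand-in "decoder"

def encode(x):
    return np.tanh(x @ W_enc)

def decode(h):
    return h @ W_dec

x = rng.normal(size=(8, 40))                    # a batch of untranscribed feature frames
x_noisy = x + rng.normal(scale=0.1, size=x.shape)  # predetermined noise added to the input

h_clean, h_noisy = encode(x), encode(x_noisy)
recon = decode(h_noisy)

reconstruction_loss = np.mean((recon - x) ** 2)
encoder_constraint = np.mean((h_noisy - h_clean) ** 2)  # constraint from the encoder output
loss = reconstruction_loss + 0.5 * encoder_constraint
print(loss)
```
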
  • Publication number: 20230009771
    Abstract: Disclosed herein is a method for data augmentation, which includes pretraining latent variables using first data corresponding to target speech and second data corresponding to general speech, training data augmentation parameters by receiving the first data and the second data as input, and augmenting target data using the first data and the second data through the pretrained latent variables and the trained parameters.
    Type: Application
    Filed: July 1, 2022
    Publication date: January 12, 2023
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Byung-Ok KANG, Jeon-Gue PARK, Hyung-Bae JEON
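
Publication 20230009771 augments target data through latent variables learned from both target and general speech. A heavily assumption-laden sketch in which random linear maps stand in for the pretrained components and the augmentation is a simple latent-space mix:

```python
# Latent-space data augmentation, toy version (maps, sizes, and mixing weight are assumptions).
import numpy as np

rng = np.random.default_rng(2)
W_enc = rng.normal(scale=0.1, size=(20, 8))   # stand-in for the pretrained encoder
W_dec = rng.normal(scale=0.1, size=(8, 20))   # stand-in for the generator/decoder

target = rng.normal(size=(4, 20))     # first data: target speech features
general = rng.normal(size=(4, 20))    # second data: general speech features

z_target = target @ W_enc             # latent variables for the target data
z_general = general @ W_enc           # latent variables for the general data

alpha = 0.7                           # mixing weight, an arbitrary choice
z_mixed = alpha * z_target + (1 - alpha) * z_general
augmented = z_mixed @ W_dec           # augmented target-like data
print(augmented.shape)
```
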
  • Patent number: 11423238
    Abstract: Provided are a sentence embedding method and apparatus based on subword embedding and skip-thoughts. To integrate skip-thought sentence embedding learning with a subword embedding technique, a skip-thought sentence embedding learning method based on subword embedding is provided, together with a multitask learning methodology that learns subword embedding and skip-thought sentence embedding simultaneously, so that intra-sentence contextual information can be applied to subword embedding during subword embedding learning. This makes it possible to apply a sentence embedding approach, in a bag-of-words form, to agglutinative languages such as Korean.
    Type: Grant
    Filed: November 1, 2019
    Date of Patent: August 23, 2022
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Eui Sok Chung, Hyun Woo Kim, Hwa Jeon Song, Ho Young Jung, Byung Ok Kang, Jeon Gue Park, Yoo Rhee Oh, Yun Keun Lee
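
Patent 11423238 builds sentence embeddings from subword embeddings so that bag-of-words style sentence vectors also work for agglutinative languages. The sketch below shows only the bag-of-subwords part (hashed character n-grams averaged into a sentence vector); the skip-thought and multitask training objectives are not modeled, and the hashing scheme and sizes are assumptions:

```python
# Bag-of-subwords sentence vector, toy version (hashing, n-gram range, and sizes assumed).
import numpy as np

DIM, BUCKETS = 16, 1000
table = np.random.default_rng(3).normal(scale=0.1, size=(BUCKETS, DIM))  # subword embedding table

def subword_ngrams(word, n_min=2, n_max=4):
    # Character n-grams of the word with boundary markers.
    padded = f"<{word}>"
    return [padded[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(padded) - n + 1)]

def sentence_embedding(sentence):
    grams = [g for w in sentence.split() for g in subword_ngrams(w)]
    idx = [hash(g) % BUCKETS for g in grams]      # hash each n-gram into the table
    return table[idx].mean(axis=0)                # bag-of-subwords sentence vector

print(sentence_embedding("sentence embedding for agglutinative languages").shape)
```
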
  • Patent number: 10705155
    Abstract: Power management apparatuses are provided. A power management apparatus includes a secondary power device that includes at least one capacitor. The power management apparatus includes a charging circuit that includes a direct current (DC)-DC converter and that is configured to supply power to the secondary power device. Moreover, the power management apparatus includes a measuring circuit that is configured to measure a switching profile of the DC-DC converter, and to determine a state of the secondary power device by comparing at least one time period of the switching profile with a reference time. Related memory systems and methods of operation are also provided.
    Type: Grant
    Filed: July 18, 2018
    Date of Patent: July 7, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Su-yong An, Byung-ok Kang, Woo-sung Lee, Jae-woong Choi, Young-sang Cho
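
Patent 10705155 judges the state of the secondary power device by comparing a time period of the DC-DC converter's switching profile with a reference time. A toy version of that comparison, with invented edge timestamps, reference value, and decision rule:

```python
# Switching-profile comparison, toy version (timestamps, reference, and rule are invented).
import numpy as np

def capacitor_state(edge_times_us, reference_period_us):
    periods = np.diff(edge_times_us)      # time periods between switching edges
    mean_period = periods.mean()
    # Toy decision rule: a period shorter than the reference is read here as a
    # degraded (lower-capacitance) device; the real criterion is not specified
    # in the abstract.
    return "degraded" if mean_period < reference_period_us else "healthy"

edges = np.array([0.0, 10.2, 20.1, 30.3, 40.2])   # microseconds, toy measurements
print(capacitor_state(edges, reference_period_us=12.0))
```
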
  • Publication number: 20200175119
    Abstract: Provided are a sentence embedding method and apparatus based on subword embedding and skip-thoughts. To integrate skip-thought sentence embedding learning with a subword embedding technique, a skip-thought sentence embedding learning method based on subword embedding is provided, together with a multitask learning methodology that learns subword embedding and skip-thought sentence embedding simultaneously, so that intra-sentence contextual information can be applied to subword embedding during subword embedding learning. This makes it possible to apply a sentence embedding approach, in a bag-of-words form, to agglutinative languages such as Korean.
    Type: Application
    Filed: November 1, 2019
    Publication date: June 4, 2020
    Inventors: Eui Sok CHUNG, Hyun Woo KIM, Hwa Jeon SONG, Ho Young JUNG, Byung Ok KANG, Jeon Gue PARK, Yoo Rhee OH, Yun Keun LEE
  • Patent number: 10402494
    Abstract: Provided is a method of automatically expanding input text. The method includes receiving input text composed of a plurality of documents, extracting a sentence pair that is present in different documents among the plurality of documents, setting the extracted sentence pair as an input of an encoder of a sequence-to-sequence model, setting an output of the encoder as an output of a decoder of the sequence-to-sequence model and generating a sentence corresponding to the input, and generating expanded text based on the generated sentence.
    Type: Grant
    Filed: February 22, 2017
    Date of Patent: September 3, 2019
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Eui Sok Chung, Byung Ok Kang, Ki Young Park, Jeon Gue Park, Hwa Jeon Song, Sung Joo Lee, Yun Keun Lee, Hyung Bae Jeon
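
Patent 10402494 starts by extracting sentence pairs whose members come from different documents, which then feed the sequence-to-sequence encoder. A small sketch of that pairing step over a toy corpus (the all-pairs policy is an assumption, and the seq2seq generation itself is not shown):

```python
# Cross-document sentence pairing, toy version (corpus and pairing policy assumed).
import itertools

documents = [
    ["the model expands text", "training data is limited"],
    ["data augmentation helps recognition", "sentences are paired"],
]

def cross_document_pairs(docs):
    """Pair every sentence with every sentence from a different document."""
    pairs = []
    for doc_a, doc_b in itertools.combinations(docs, 2):
        pairs.extend(itertools.product(doc_a, doc_b))
    return pairs

for src, tgt in cross_document_pairs(documents):
    print(src, "->", tgt)
```
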
  • Publication number: 20190162797
    Abstract: Power management apparatuses are provided. A power management apparatus includes a secondary power device that includes at least one capacitor. The power management apparatus includes a charging circuit that includes a direct current (DC)-DC converter and that is configured to supply power to the secondary power device. Moreover, the power management apparatus includes a measuring circuit that is configured to measure a switching profile of the DC-DC converter, and to determine a state of the secondary power device by comparing at least one time period of the switching profile with a reference time. Related memory systems and methods of operation are also provided.
    Type: Application
    Filed: July 18, 2018
    Publication date: May 30, 2019
    Inventors: Su-yong An, Byung-ok Kang, Woo-sung Lee, Jae-woong Choi, Young-sang Cho
  • Publication number: 20180157640
    Abstract: Provided is a method of automatically expanding input text. The method includes receiving input text composed of a plurality of documents, extracting a sentence pair that is present in different documents among the plurality of documents, setting the extracted sentence pair as an input of an encoder of a sequence-to-sequence model, setting an output of the encoder as an output of a decoder of the sequence-to-sequence model and generating a sentence corresponding to the input, and generating expanded text based on the generated sentence.
    Type: Application
    Filed: February 22, 2017
    Publication date: June 7, 2018
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Eui Sok CHUNG, Byung Ok KANG, Ki Young PARK, Jeon Gue PARK, Hwa Jeon SONG, Sung Joo LEE, Yun Keun LEE, Hyung Bae JEON
  • Patent number: 9959862
    Abstract: A speech recognition apparatus based on a deep-neural-network (DNN) sound model includes a memory and a processor. As the processor executes a program stored in the memory, the processor generates sound-model state sets corresponding to a plurality of pieces of set training speech data included in multi-set training speech data, generates a multi-set state cluster from the sound-model state sets, and sets the multi-set training speech data as an input node and the multi-set state cluster as output nodes so as to learn a DNN structured parameter.
    Type: Grant
    Filed: June 20, 2016
    Date of Patent: May 1, 2018
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Byung Ok Kang, Jeon Gue Park, Hwa Jeon Song, Yun Keun Lee, Eui Sok Chung
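
Patent 9959862 pools the sound-model states generated from several training sets and clusters them into one multi-set state inventory that serves as the DNN's output nodes. A toy sketch of that pooling-and-clustering step, with random state vectors and a simple nearest-centroid assignment standing in for the actual clustering:

```python
# Multi-set state cluster construction, toy version (state vectors and clustering assumed).
import numpy as np

rng = np.random.default_rng(4)
set_a_states = rng.normal(loc=0.0, size=(5, 3))   # sound-model states from training set A
set_b_states = rng.normal(loc=2.0, size=(5, 3))   # sound-model states from training set B
pooled = np.vstack([set_a_states, set_b_states])  # states from the multi-set training data

# Toy clustering: pick k seed states and assign every state to its nearest seed.
k = 4
centroids = pooled[rng.choice(len(pooled), size=k, replace=False)]
assignments = np.argmin(
    np.linalg.norm(pooled[:, None, :] - centroids[None, :, :], axis=-1), axis=1)
print("multi-set state cluster ids (DNN output nodes):", assignments)
```
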
  • Publication number: 20180059761
    Abstract: Embodiments include a method of managing power and performance of an electronic device, the method comprising: providing a plurality of capacitors configured to be electrically connected to a power rail of the electronic device to supply auxiliary power to the electronic device when an interruption occurs in the input power supplied to the electronic device; monitoring states of the capacitors; and controlling operations of the electronic device based on the results of the monitoring.
    Type: Application
    Filed: April 6, 2017
    Publication date: March 1, 2018
    Inventors: Su-Yong AN, Byung-Ok KANG, Chung-Hyun RYU
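
Publication 20180059761 monitors the capacitors and controls device operations based on the result. A hypothetical sketch of such a gating policy; the health metric, thresholds, and operation names are all invented for illustration:

```python
# Operation gating based on monitored capacitor state (all values hypothetical).
def allowed_operations(capacitor_health):
    """capacitor_health: fraction of nominal capacitance still available (0..1)."""
    if capacitor_health >= 0.8:
        return {"normal_write", "cached_write", "background_gc"}
    if capacitor_health >= 0.5:
        # Reduced auxiliary power: keep writes but skip deferrable work.
        return {"normal_write"}
    # Too little backup energy to guarantee a safe flush on power loss.
    return {"read_only"}

print(allowed_operations(0.6))
```
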
  • Publication number: 20180047389
    Abstract: Provided are an apparatus and method for recognizing speech using an attention-based content-dependent (CD) acoustic model. The apparatus includes a predictive deep neural network (DNN) configured to receive input data from an input layer and output predictive values to a buffer of a first output layer, and a context DNN configured to receive a context window from the first output layer and output a final result value.
    Type: Application
    Filed: January 12, 2017
    Publication date: February 15, 2018
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Hwa Jeon SONG, Byung Ok KANG, Jeon Gue PARK, Yun Keun LEE, Hyung Bae JEON, Ho Young JUNG
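
Publication 20180047389 describes a predictive DNN that writes values into a buffer and a context DNN that consumes a context window taken from that buffer. A toy numpy pipeline with that two-stage shape; layer sizes, window length, and weights are illustrative assumptions:

```python
# Predictive DNN -> buffer -> context DNN, toy version (sizes and weights assumed).
import numpy as np

rng = np.random.default_rng(5)
W_pred = rng.normal(scale=0.1, size=(13, 8))     # predictive DNN (single toy layer)
W_ctx = rng.normal(scale=0.1, size=(5 * 8, 4))   # context DNN over a 5-frame window

frames = rng.normal(size=(20, 13))               # acoustic feature frames (input layer)
buffer = np.tanh(frames @ W_pred)                # predictive values stored in a buffer

window = 5
outputs = []
for t in range(len(buffer) - window + 1):
    context = buffer[t:t + window].reshape(-1)   # context window read from the buffer
    outputs.append(context @ W_ctx)              # final result values
print(np.array(outputs).shape)
```
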
  • Patent number: 9805716
    Abstract: Provided is an apparatus for large vocabulary continuous speech recognition (LVCSR) based on a context-dependent deep neural network hidden Markov model (CD-DNN-HMM) algorithm. The apparatus may include an extractor configured to extract acoustic model-state level information corresponding to an input speech signal from a training data model set using at least one of a first feature vector based on a gammatone filterbank signal analysis algorithm and a second feature vector based on a bottleneck algorithm, and a speech recognizer configured to provide a result of recognizing the input speech signal based on the extracted acoustic model-state level information.
    Type: Grant
    Filed: February 12, 2016
    Date of Patent: October 31, 2017
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Sung Joo Lee, Byung Ok Kang, Jeon Gue Park, Yun Keun Lee, Hoon Chung
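
Patent 9805716 extracts acoustic-model-state-level information using a gammatone-filterbank-based feature vector and/or a bottleneck-based feature vector. The sketch below only illustrates combining a filterbank-style vector with a bottleneck-style vector before the acoustic model; the filterbank weights are a generic placeholder, not a gammatone implementation:

```python
# Combining filterbank-style and bottleneck-style features (placeholder filterbank).
import numpy as np

rng = np.random.default_rng(6)
frame = rng.normal(size=400)                        # one 25 ms frame at 16 kHz (toy data)

spectrum = np.abs(np.fft.rfft(frame)) ** 2          # power spectrum of the frame
fbank = rng.random(size=(24, spectrum.size))        # placeholder filterbank weights
feature_fbank = np.log(fbank @ spectrum + 1e-8)     # 24 log filterbank energies

W_bottleneck = rng.normal(scale=0.1, size=(24, 8))  # narrow "bottleneck" layer stand-in
feature_bottleneck = np.tanh(feature_fbank @ W_bottleneck)

acoustic_input = np.concatenate([feature_fbank, feature_bottleneck])
print(acoustic_input.shape)                         # combined feature vector, (32,)
```
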
  • Publication number: 20170206894
    Abstract: A speech recognition apparatus based on a deep-neural-network (DNN) sound model includes a memory and a processor. As the processor executes a program stored in the memory, the processor generates sound-model state sets corresponding to a plurality of pieces of set training speech data included in multi-set training speech data, generates a multi-set state cluster from the sound-model state sets, and sets the multi-set training speech data as an input node and the multi-set state cluster as output nodes so as to learn a DNN structured parameter.
    Type: Application
    Filed: June 20, 2016
    Publication date: July 20, 2017
    Inventors: Byung Ok KANG, Jeon Gue PARK, Hwa Jeon SONG, Yun Keun LEE, Eui Sok CHUNG
  • Publication number: 20160240190
    Abstract: Provided is an apparatus for large vocabulary continuous speech recognition (LVCSR) based on a context-dependent deep neural network hidden Markov model (CD-DNN-HMM) algorithm. The apparatus may include an extractor configured to extract acoustic model-state level information corresponding to an input speech signal from a training data model set using at least one of a first feature vector based on a gammatone filterbank signal analysis algorithm and a second feature vector based on a bottleneck algorithm, and a speech recognizer configured to provide a result of recognizing the input speech signal based on the extracted acoustic model-state level information.
    Type: Application
    Filed: February 12, 2016
    Publication date: August 18, 2016
    Inventors: Sung Joo LEE, Byung Ok KANG, Jeon Gue PARK, Yun Keun LEE, Hoon CHUNG
  • Publication number: 20150012274
    Abstract: An apparatus for extracting features for speech recognition in accordance with the present invention includes: a frame forming portion configured to separate input speech signals in frame units having a prescribed size; a static feature extracting portion configured to extract a static feature vector for each frame of the speech signals; a dynamic feature extracting portion configured to extract a dynamic feature vector representing a temporal variance of the extracted static feature vector by use of a basis function or a basis vector; and a feature vector combining portion configured to combine the extracted static feature vector with the extracted dynamic feature vector to configure a feature vector stream.
    Type: Application
    Filed: May 15, 2014
    Publication date: January 8, 2015
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Sung-Joo LEE, Byung-Ok KANG, Hoon CHUNG, Ho-Young JUNG, Hwa-Jeon SONG, Yoo-Rhee OH, Yun-Keun LEE
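
Publication 20150012274 combines a static feature vector per frame with a dynamic vector that captures its temporal variation through a basis function or basis vector. A sketch using the conventional linear-regression (delta) basis as that basis function, with toy sizes and data:

```python
# Static + basis-function dynamic features combined into one stream (toy sizes/data).
import numpy as np

rng = np.random.default_rng(7)
static = rng.normal(size=(50, 13))                  # static feature vectors, one per frame

# Linear basis over a +/-2 frame window (standard delta regression weights).
basis = np.array([-2, -1, 0, 1, 2], dtype=float)
basis /= np.sum(basis ** 2)

padded = np.pad(static, ((2, 2), (0, 0)), mode="edge")
dynamic = np.stack([basis @ padded[t:t + 5] for t in range(len(static))])

stream = np.concatenate([static, dynamic], axis=1)  # combined feature vector stream
print(stream.shape)                                 # (50, 26)
```
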