Patents by Inventor Byung Ok Kang
Byung Ok Kang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240105166
Abstract: Provided is a self-supervised learning method based on permutation invariant cross entropy. A self-supervised learning method based on permutation invariant cross entropy performed by an electronic device includes: defining a cross entropy loss function for pre-training of an end-to-end speech recognition model; configuring non-transcription speech corpus data composed only of speech as input data of the cross entropy loss function; setting all permutations of classes included in the non-transcription speech corpus data as an output target and calculating cross entropy losses for each class; and determining a minimum cross entropy loss among the calculated cross entropy losses for each class as a final loss.
Type: Application
Filed: July 11, 2023
Publication date: March 28, 2024
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Hoon CHUNG, Byung Ok KANG, Yoonhyung KIM
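The minimum-over-permutations loss described in this abstract can be sketched in a few lines of Python. This is an illustrative toy only (the function names, the 2-class example, and the plain-list probability format are ours, not from the patent), not the patented implementation:

```python
import itertools
import math

def cross_entropy(pred_probs, target_ids):
    # Mean negative log-likelihood of the target ids under the predictions.
    return -sum(math.log(p[t]) for p, t in zip(pred_probs, target_ids)) / len(target_ids)

def permutation_invariant_ce(pred_probs, cluster_ids, num_classes):
    """Evaluate the cross-entropy loss under every mapping (permutation)
    from self-derived cluster ids to model output classes and keep the
    minimum, so training is invariant to how clusters were numbered."""
    best = float("inf")
    for perm in itertools.permutations(range(num_classes)):
        remapped = [perm[c] for c in cluster_ids]
        best = min(best, cross_entropy(pred_probs, remapped))
    return best

# Three frames, two classes: the swapped labeling fits better, so it wins.
pred = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3]]
loss = permutation_invariant_ce(pred, [1, 0, 1], 2)  # ≈ 0.228
```

Note that the permutation count grows factorially with the number of classes, so a real system would restrict or approximate the search.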
-
Publication number: 20240092141
Abstract: An air conditioning device for a vehicle includes: a housing having an inside divided into an inflow space, a heat exchange space, and an outflow space, which are straightly arranged, and having a plurality of discharge ports, which communicate with an interior, at the inflow space; a blowing unit disposed at the inflow space of the housing and configured to blow air; a heat exchange unit disposed at the heat exchange space of the housing and configured to adjust a temperature of conditioned air by exchanging heat with air; and an opening-closing door disposed at the outflow space of the housing and configured to open and close the plurality of discharge ports such that conditioned air at an adjusted temperature selectively flows to the plurality of discharge ports. The air conditioning device adjusts the temperature of conditioned air for respective modes and reduces the flow resistance of air.
Type: Application
Filed: March 8, 2023
Publication date: March 21, 2024
Applicants: HYUNDAI MOTOR COMPANY, KIA CORPORATION, DOOWON CLIMATE CONTROL CO., LTD.
Inventors: Kwang Ok Han, Young Tae Song, Yong Chul Kim, Gee Young Shin, Su Yeon Kang, Jae Sik Choi, Dae Hee Lee, Byeong Moo Jang, Ung Hwi Kim, Jae Won Cha, Won Jun Joung, Byung Guk An
-
Publication number: 20240083811
Abstract: A glass article includes a first surface, a second surface opposed to the first surface, a first compressive region extending from the first surface to a first compression depth, a second compressive region extending from the second surface to a second compression depth and a tensile region between the first compression depth and the second compression depth. A stress profile of the first compressive region includes a first segment located between the first surface and a first transition point and a second segment located between the first transition point and the first compression depth. A depth from the first surface to the first transition point ranges from 6.1 μm to 8.1 μm. A compressive stress at the first transition point ranges from 207 MPa to 254 MPa. A stress-depth ratio of the first transition point ranges from 28 MPa/μm to 35 MPa/μm.
Type: Application
Filed: November 20, 2023
Publication date: March 14, 2024
Inventors: Gyu In SHIM, Seung KIM, Byung Hoon KANG, Young Ok PARK, Su Jin SUNG
-
Patent number: 11912613
Abstract: A glass article includes lithium aluminosilicate and includes a first surface, a second surface opposed to the first surface, a first compressive region extending from the first surface to a first compression depth, a second compressive region extending from the second surface to a second compression depth, and a tensile region disposed between the first compression depth and the second compression depth, where a stress profile of the first compressive region has a first local minimum point at which the stress profile is convex downward and a first local maximum point at which the stress profile is convex upward, where a depth of the first local maximum point is greater than a depth of the first local minimum point, and where a stress of the first local maximum point is greater than a compressive stress of the first local minimum point.
Type: Grant
Filed: July 10, 2020
Date of Patent: February 27, 2024
Assignee: SAMSUNG DISPLAY CO., LTD.
Inventors: Su Jin Sung, Byung Hoon Kang, Seung Kim, Young Ok Park, Gyu In Shim
-
Patent number: 11912603
Abstract: A glass article includes a first surface; a second surface opposed to the first surface; a side surface connecting the first surface to the second surface; a first surface compressive region extending from the first surface to a first depth; a second surface compressive region extending from the second surface to a second depth; and a side compressive region extending from the side surface to a third depth, where the first surface and the side surface are non-tin surfaces, the second surface is a tin surface, and a maximum compressive stress of the second surface compressive region is greater than a maximum compressive stress of the first surface compressive region.
Type: Grant
Filed: January 8, 2021
Date of Patent: February 27, 2024
Assignee: SAMSUNG DISPLAY CO., LTD.
Inventors: Su Jin Sung, Byung Hoon Kang, Seung Kim, Young Ok Park, Gyu In Shim
-
Publication number: 20230134942
Abstract: Disclosed herein are an apparatus and method for self-supervised training of an end-to-end speech recognition model. The apparatus includes memory in which at least one program is recorded and a processor for executing the program. The program trains an end-to-end speech recognition model, including an encoder and a decoder, using untranscribed speech data. The program may add predetermined noise to the input signal of the end-to-end speech recognition model, and may calculate loss by reflecting a predetermined constraint based on the output of the encoder of the end-to-end speech recognition model.
Type: Application
Filed: October 7, 2022
Publication date: May 4, 2023
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Hoon CHUNG, Byung-Ok KANG, Jeom-Ja KANG, Yun-Kyung LEE, Hyung-Bae JEON
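As a rough illustration of the training idea above (a noisy view of the input plus a constraint on the encoder output), here is a toy sketch. The linear "encoder", the Gaussian noise, and the L2 consistency constraint are all our assumptions; the abstract does not specify the constraint's form:

```python
import random

def encoder(x, w=0.5):
    # Toy linear "encoder": a hypothetical stand-in for the model's encoder stack.
    return [w * v for v in x]

def add_noise(x, scale=0.1, rng=None):
    # Add predetermined (here: seeded Gaussian) noise to the input signal.
    rng = rng or random.Random(0)
    return [v + rng.gauss(0.0, scale) for v in x]

def self_supervised_loss(x, lam=1.0):
    """Consistency-style loss on untranscribed input: the encoder output for
    a noisy view is constrained to stay close to the output for the clean
    view. (The exact constraint is unspecified; an L2 penalty is assumed.)"""
    h_clean = encoder(x)
    h_noisy = encoder(add_noise(x))
    constraint = sum((a - b) ** 2 for a, b in zip(h_clean, h_noisy))
    return lam * constraint
```

Because no transcript appears anywhere in the loss, this kind of objective can be computed on speech-only corpora.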
-
Publication number: 20230009771
Abstract: Disclosed herein is a method for data augmentation, which includes pretraining latent variables using first data corresponding to target speech and second data corresponding to general speech, training data augmentation parameters by receiving the first data and the second data as input, and augmenting target data using the first data and the second data through the pretrained latent variables and the trained parameters.
Type: Application
Filed: July 1, 2022
Publication date: January 12, 2023
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Byung-Ok KANG, Jeon-Gue PARK, Hyung-Bae JEON
-
Patent number: 11423238
Abstract: Provided are sentence embedding method and apparatus based on subword embedding and skip-thoughts. To integrate skip-thought sentence embedding learning methodology with a subword embedding technique, a skip-thought sentence embedding learning method based on subword embedding and methodology for simultaneously learning subword embedding learning and skip-thought sentence embedding learning, that is, multitask learning methodology, are provided as methodology for applying intra-sentence contextual information to subword embedding in the case of subword embedding learning. This makes it possible to apply a sentence embedding approach to agglutinative languages such as Korean in a bag-of-words form. Also, skip-thought sentence embedding learning methodology is integrated with a subword embedding technique such that intra-sentence contextual information can be used in the case of subword embedding learning.
Type: Grant
Filed: November 1, 2019
Date of Patent: August 23, 2022
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Eui Sok Chung, Hyun Woo Kim, Hwa Jeon Song, Ho Young Jung, Byung Ok Kang, Jeon Gue Park, Yoo Rhee Oh, Yun Keun Lee
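The subword side of this approach can be illustrated with a fastText-style character n-gram decomposition and a bag-of-subwords sentence vector. This sketch is our own simplification (the lookup table and dimensions are hypothetical), not the patented multitask skip-thought procedure:

```python
def char_ngrams(word, n_min=2, n_max=3):
    """Decompose a word into character n-grams with boundary markers,
    as in fastText-style subword embedding."""
    w = f"<{word}>"
    return [w[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

def sentence_vector(sentence, ngram_vecs, dim=4):
    """Bag-of-subwords sentence embedding: sum the vectors of every
    character n-gram of every word. `ngram_vecs` is a hypothetical lookup
    table; unseen n-grams contribute a zero vector."""
    vec = [0.0] * dim
    for word in sentence.split():
        for g in char_ngrams(word):
            for i, v in enumerate(ngram_vecs.get(g, [0.0] * dim)):
                vec[i] += v
    return vec
```

Subword decomposition is what makes the bag-of-words form workable for agglutinative languages such as Korean, since morphological variants of a word share most of their character n-grams.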
-
Patent number: 10705155
Abstract: Power management apparatuses are provided. A power management apparatus includes a secondary power device that includes at least one capacitor. The power management apparatus includes a charging circuit that includes a direct current (DC)-DC converter and that is configured to supply power to the secondary power device. Moreover, the power management apparatus includes a measuring circuit that is configured to measure a switching profile of the DC-DC converter, and to determine a state of the secondary power device by comparing at least one time period of the switching profile with a reference time. Related memory systems and methods of operation are also provided.
Type: Grant
Filed: July 18, 2018
Date of Patent: July 7, 2020
Assignee: Samsung Electronics Co., Ltd.
Inventors: Su-yong An, Byung-ok Kang, Woo-sung Lee, Jae-woong Choi, Young-sang Cho
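A toy version of the measuring circuit's decision rule might look as follows. The direction of the comparison (a degraded, lower-capacitance device charges faster, so a shorter-than-reference charging time flags degradation) and the function names are our illustrative assumptions, not details from the patent:

```python
def classify_capacitor_state(switch_periods, reference_s):
    """Judge the secondary-power capacitor from the DC-DC converter's
    switching profile: sum the measured switching periods that make up a
    charge cycle and compare the total against a reference time. A lower
    capacitance charges faster, so a shorter time suggests degradation
    (assumed threshold direction, for illustration only)."""
    charge_time = sum(switch_periods)
    return "degraded" if charge_time < reference_s else "normal"
```

The appeal of this scheme is that it needs no extra sensing hardware: the converter's own switching behavior doubles as the measurement signal.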
-
Publication number: 20200175119
Abstract: Provided are sentence embedding method and apparatus based on subword embedding and skip-thoughts. To integrate skip-thought sentence embedding learning methodology with a subword embedding technique, a skip-thought sentence embedding learning method based on subword embedding and methodology for simultaneously learning subword embedding learning and skip-thought sentence embedding learning, that is, multitask learning methodology, are provided as methodology for applying intra-sentence contextual information to subword embedding in the case of subword embedding learning. This makes it possible to apply a sentence embedding approach to agglutinative languages such as Korean in a bag-of-words form. Also, skip-thought sentence embedding learning methodology is integrated with a subword embedding technique such that intra-sentence contextual information can be used in the case of subword embedding learning.
Type: Application
Filed: November 1, 2019
Publication date: June 4, 2020
Inventors: Eui Sok CHUNG, Hyun Woo KIM, Hwa Jeon SONG, Ho Young JUNG, Byung Ok KANG, Jeon Gue PARK, Yoo Rhee OH, Yun Keun LEE
-
Patent number: 10402494
Abstract: Provided is a method of automatically expanding input text. The method includes receiving input text composed of a plurality of documents, extracting a sentence pair that is present in different documents among the plurality of documents, setting the extracted sentence pair as an input of an encoder of a sequence-to-sequence model, setting an output of the encoder as an output of a decoder of the sequence-to-sequence model and generating a sentence corresponding to the input, and generating expanded text based on the generated sentence.
Type: Grant
Filed: February 22, 2017
Date of Patent: September 3, 2019
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Eui Sok Chung, Byung Ok Kang, Ki Young Park, Jeon Gue Park, Hwa Jeon Song, Sung Joo Lee, Yun Keun Lee, Hyung Bae Jeon
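The cross-document sentence-pair extraction step can be sketched directly: pair every sentence of one document with every sentence of each other document, so no pair comes from a single document. The seq2seq model itself is omitted; the function name is ours:

```python
import itertools

def cross_document_pairs(documents):
    """Extract sentence pairs whose members come from *different*
    documents, to be fed to the encoder of a sequence-to-sequence model.
    Each document is given as a list of sentences."""
    pairs = []
    for doc_a, doc_b in itertools.combinations(documents, 2):
        pairs.extend((a, b) for a in doc_a for b in doc_b)
    return pairs
```

Each extracted pair then becomes an encoder input, and the model's generated sentences are appended to the corpus as expanded text.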
-
Publication number: 20190162797
Abstract: Power management apparatuses are provided. A power management apparatus includes a secondary power device that includes at least one capacitor. The power management apparatus includes a charging circuit that includes a direct current (DC)-DC converter and that is configured to supply power to the secondary power device. Moreover, the power management apparatus includes a measuring circuit that is configured to measure a switching profile of the DC-DC converter, and to determine a state of the secondary power device by comparing at least one time period of the switching profile with a reference time. Related memory systems and methods of operation are also provided.
Type: Application
Filed: July 18, 2018
Publication date: May 30, 2019
Inventors: Su-yong An, Byung-ok Kang, Woo-sung Lee, Jae-woong Choi, Young-sang Cho
-
Publication number: 20180157640
Abstract: Provided is a method of automatically expanding input text. The method includes receiving input text composed of a plurality of documents, extracting a sentence pair that is present in different documents among the plurality of documents, setting the extracted sentence pair as an input of an encoder of a sequence-to-sequence model, setting an output of the encoder as an output of a decoder of the sequence-to-sequence model and generating a sentence corresponding to the input, and generating expanded text based on the generated sentence.
Type: Application
Filed: February 22, 2017
Publication date: June 7, 2018
Applicant: Electronics and Telecommunications Research Institute
Inventors: Eui Sok CHUNG, Byung Ok Kang, Ki Young Park, Jeon Gue Park, Hwa Jeon Song, Sung Joo Lee, Yun Keun Lee, Hyung Bae Jeon
-
Patent number: 9959862
Abstract: A speech recognition apparatus based on a deep-neural-network (DNN) sound model includes a memory and a processor. As the processor executes a program stored in the memory, the processor generates sound-model state sets corresponding to a plurality of pieces of set training speech data included in multi-set training speech data, generates a multi-set state cluster from the sound-model state sets, and sets the multi-set training speech data as an input node and the multi-set state cluster as output nodes so as to learn a DNN structured parameter.
Type: Grant
Filed: June 20, 2016
Date of Patent: May 1, 2018
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Byung Ok Kang, Jeon Gue Park, Hwa Jeon Song, Yun Keun Lee, Eui Sok Chung
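The construction of a shared multi-set state cluster for the DNN output layer can be sketched as follows. Merging states by identical label is a deliberate simplification (a real system would cluster acoustically similar states across sets); the function name is ours:

```python
def build_multiset_state_cluster(state_sets):
    """Merge per-set acoustic-model state inventories into one multi-set
    state cluster that serves as the DNN's shared output layer. States with
    the same label are merged across sets (an assumed simplification), and
    each surviving state gets a stable output-node index."""
    cluster = sorted({state for states in state_sets for state in states})
    return {state: idx for idx, state in enumerate(cluster)}
```

With this shared output inventory, a single DNN can be trained on all of the training sets at once instead of one model per set.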
-
Publication number: 20180059761
Abstract: Embodiments include a method of managing power and performance of an electronic device, the method comprising: providing a plurality of capacitors configured to be electrically connected to a power rail of the electronic device to supply auxiliary power to the electronic device when an interruption occurs in the input power supplied to the electronic device; monitoring states of the capacitors; and controlling operations of the electronic device based on the results of the monitoring.
Type: Application
Filed: April 6, 2017
Publication date: March 1, 2018
Inventors: Su-Yong AN, Byung-Ok KANG, Chung-Hyun RYU
-
Publication number: 20180047389
Abstract: Provided are an apparatus and method for recognizing speech using an attention-based content-dependent (CD) acoustic model. The apparatus includes a predictive deep neural network (DNN) configured to receive input data from an input layer and output predictive values to a buffer of a first output layer, and a context DNN configured to receive a context window from the first output layer and output a final result value.
Type: Application
Filed: January 12, 2017
Publication date: February 15, 2018
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Hwa Jeon SONG, Byung Ok KANG, Jeon Gue PARK, Yun Keun LEE, Hyung Bae JEON, Ho Young JUNG
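The two-stage pipeline described here (a predictive DNN filling a buffer, a context DNN reading a sliding context window from it) can be sketched with toy stand-ins for both networks. The per-frame mean and the window average are hypothetical placeholders for the actual DNNs:

```python
from collections import deque

def predictive_dnn(frame):
    # Toy stand-in: the real predictive DNN maps a frame to predictive values.
    return sum(frame) / len(frame)

def recognize(frames, window=3):
    """First-stage predictive values are written into a bounded buffer; the
    second-stage context DNN (here just an average, an assumption) reads a
    sliding context window from the buffer and emits the final values."""
    buffer = deque(maxlen=window)
    outputs = []
    for frame in frames:
        buffer.append(predictive_dnn(frame))
        if len(buffer) == window:
            outputs.append(sum(buffer) / window)
    return outputs
```

Note the structural point the sketch preserves: the second network never sees raw frames, only a window of the first network's buffered outputs.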
-
Patent number: 9805716
Abstract: Provided is an apparatus for large vocabulary continuous speech recognition (LVCSR) based on a context-dependent deep neural network hidden Markov model (CD-DNN-HMM) algorithm. The apparatus may include an extractor configured to extract acoustic model-state level information corresponding to an input speech signal from a training data model set using at least one of a first feature vector based on a gammatone filterbank signal analysis algorithm and a second feature vector based on a bottleneck algorithm, and a speech recognizer configured to provide a result of recognizing the input speech signal based on the extracted acoustic model-state level information.
Type: Grant
Filed: February 12, 2016
Date of Patent: October 31, 2017
Assignee: Electronics and Telecommunications Research Institute
Inventors: Sung Joo Lee, Byung Ok Kang, Jeon Gue Park, Yun Keun Lee, Hoon Chung
-
Publication number: 20170206894
Abstract: A speech recognition apparatus based on a deep-neural-network (DNN) sound model includes a memory and a processor. As the processor executes a program stored in the memory, the processor generates sound-model state sets corresponding to a plurality of pieces of set training speech data included in multi-set training speech data, generates a multi-set state cluster from the sound-model state sets, and sets the multi-set training speech data as an input node and the multi-set state cluster as output nodes so as to learn a DNN structured parameter.
Type: Application
Filed: June 20, 2016
Publication date: July 20, 2017
Inventors: Byung Ok KANG, Jeon Gue PARK, Hwa Jeon SONG, Yun Keun LEE, Eui Sok CHUNG
-
Publication number: 20160240190
Abstract: Provided is an apparatus for large vocabulary continuous speech recognition (LVCSR) based on a context-dependent deep neural network hidden Markov model (CD-DNN-HMM) algorithm. The apparatus may include an extractor configured to extract acoustic model-state level information corresponding to an input speech signal from a training data model set using at least one of a first feature vector based on a gammatone filterbank signal analysis algorithm and a second feature vector based on a bottleneck algorithm, and a speech recognizer configured to provide a result of recognizing the input speech signal based on the extracted acoustic model-state level information.
Type: Application
Filed: February 12, 2016
Publication date: August 18, 2016
Inventors: Sung Joo LEE, Byung Ok KANG, Jeon Gue PARK, Yun Keun LEE, Hoon CHUNG
-
Publication number: 20150012274
Abstract: An apparatus for extracting features for speech recognition in accordance with the present invention includes: a frame forming portion configured to separate input speech signals in frame units having a prescribed size; a static feature extracting portion configured to extract a static feature vector for each frame of the speech signals; a dynamic feature extracting portion configured to extract a dynamic feature vector representing a temporal variance of the extracted static feature vector by use of a basis function or a basis vector; and a feature vector combining portion configured to combine the extracted static feature vector with the extracted dynamic feature vector to configure a feature vector stream.
Type: Application
Filed: May 15, 2014
Publication date: January 8, 2015
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Sung-Joo LEE, Byung-Ok Kang, Hoon Chung, Ho-Young Jung, Hwa-Jeon Song, Yoo-Rhee Oh, Yun-Keun Lee
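The dynamic-feature step maps naturally to the standard delta-coefficient formula, which is exactly a regression of the static trajectory onto a linear basis over a temporal window. A per-coefficient sketch of that standard formula (scalar static features for brevity; we are not claiming this is the patent's exact basis function):

```python
def delta_coefficients(static, window=2):
    """Dynamic features as the regression slope of each static coefficient
    over a temporal window of +/- `window` frames, i.e. a projection onto a
    linear basis (the standard delta-feature formula, with edge frames
    clamped)."""
    denom = 2 * sum(k * k for k in range(1, window + 1))
    n = len(static)
    deltas = []
    for t in range(n):
        num = sum(k * (static[min(t + k, n - 1)] - static[max(t - k, 0)])
                  for k in range(1, window + 1))
        deltas.append(num / denom)
    return deltas

def combine(static, dynamic):
    # Feature vector stream: static and dynamic parts paired per frame.
    return list(zip(static, dynamic))
```

On a linearly rising trajectory the interior deltas recover the slope exactly, which is the sanity check usually applied to this formula.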