Patents by Inventor Kyu-Jeong Han

Kyu-Jeong Han has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11862146
    Abstract: Audio signals of speech may be processed using an acoustic model. An acoustic model may be implemented with multiple streams of processing where different streams perform processing using different dilation rates. For example, a first stream may process features of the audio signal with one or more convolutional neural network layers having a first dilation rate, and a second stream may process features of the audio signal with one or more convolutional neural network layers having a second dilation rate. Each stream may compute a stream vector, and the stream vectors may be combined into a vector of speech unit scores, where the vector of speech unit scores provides information about the acoustic content of the audio signal. The vector of speech unit scores may be used for any appropriate speech application, such as automatic speech recognition. (A sketch of the multi-stream computation follows this entry.)
    Type: Grant
    Filed: July 2, 2020
    Date of Patent: January 2, 2024
    Assignee: ASAPP, INC.
    Inventors: Kyu Jeong Han, Tao Ma, Daniel Povey
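    The multi-stream design described in this abstract lends itself to a compact sketch. Below is a minimal NumPy illustration, not the patented implementation: the kernel size, the dilation rates (1 and 4), the ReLU nonlinearity, mean pooling into stream vectors, and the 42 speech-unit classes are all assumptions made for the example.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def dilated_conv1d(x, weights, dilation):
        """Causal dilated 1-D convolution over the time axis.

        x:       (T, C_in) feature frames
        weights: (K, C_in, C_out) filter taps
        Returns (T, C_out) activations after a ReLU.
        """
        T = x.shape[0]
        K, _, c_out = weights.shape
        pad = (K - 1) * dilation
        xp = np.vstack([np.zeros((pad, x.shape[1])), x])  # left-pad for causality
        out = np.zeros((T, c_out))
        for t in range(T):
            for k in range(K):
                out[t] += xp[t + pad - k * dilation] @ weights[k]
        return np.maximum(out, 0.0)

    features = rng.standard_normal((100, 40))  # e.g., 100 frames of 40-dim filterbanks

    # Two streams processing the same features with different dilation rates.
    w1 = 0.1 * rng.standard_normal((3, 40, 64))
    w2 = 0.1 * rng.standard_normal((3, 40, 64))
    stream1 = dilated_conv1d(features, w1, dilation=1).mean(axis=0)  # stream vector
    stream2 = dilated_conv1d(features, w2, dilation=4).mean(axis=0)  # stream vector

    # Combine the stream vectors into a vector of speech unit scores.
    w_out = 0.1 * rng.standard_normal((128, 42))  # 42 hypothetical speech units
    scores = np.concatenate([stream1, stream2]) @ w_out
    probs = np.exp(scores - scores.max()); probs /= probs.sum()  # softmax
    print("most likely speech unit:", probs.argmax())
    ```

    The smaller dilation rate captures fine-grained local context while the larger rate spans a wider window at the same cost, which is the motivation for running the streams in parallel.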
  • Publication number: 20230237990
    Abstract: A speech processing model may be trained using pseudo tokens. Training a speech processing model with pseudo tokens may allow for training with a smaller amount of labeled training data and, accordingly, at lower cost. A set of pseudo tokens may be determined by computing feature vectors from unlabeled training data, clustering the feature vectors, and performing token compression using the clustered feature vectors. A first speech processing model may be trained using unlabeled training data by determining sequences of pseudo tokens corresponding to the unlabeled training data. A second speech processing model may be initialized using the first speech processing model and then trained using labeled training data. The second speech processing model may then be deployed to a speech processing application. (A sketch of the pseudo-token pipeline follows this entry.)
    Type: Application
    Filed: July 7, 2022
    Publication date: July 27, 2023
    Inventors: Felix Wu, Kwangyoun Kim, Ryan Thomas McDonald, Kilian Quirin Weinberger, Kyu Jeong Han, Yoav Artzi
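    The pseudo-token step can be sketched directly from this abstract: compute feature vectors from unlabeled audio, cluster them, and compress the resulting cluster sequence into tokens. In the sketch below the random frames stand in for encoder features, and collapsing runs of repeated cluster ids is only one plausible reading of "token compression."

    ```python
    import numpy as np
    from itertools import groupby
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)

    # Stand-in for feature vectors computed from unlabeled training audio
    # (in practice these would come from a pretrained feature encoder).
    frames = rng.standard_normal((500, 39))

    # Cluster the feature vectors; each cluster id acts as a pseudo token.
    kmeans = KMeans(n_clusters=50, n_init=10, random_state=0).fit(frames)
    frame_tokens = kmeans.predict(frames)

    # One plausible form of token compression: collapse runs of repeated
    # cluster ids into a single pseudo token.
    pseudo_tokens = [int(tok) for tok, _ in groupby(frame_tokens)]
    print(len(frame_tokens), "frames ->", len(pseudo_tokens), "pseudo tokens")
    ```

    Sequences of these pseudo tokens would then serve as training targets for the first speech processing model before the supervised second stage.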
  • Patent number: 11521639
    Abstract: The present disclosure describes a system, method, and computer program for predicting sentiment labels for audio speech utterances using an audio speech sentiment classifier pretrained with pseudo sentiment labels. A speech sentiment classifier for audio speech (“a speech sentiment classifier”) is pretrained in an unsupervised manner by leveraging a pseudo labeler previously trained to predict sentiments for text. Specifically, a text-trained pseudo labeler is used to autogenerate pseudo sentiment labels for the audio speech utterances using transcriptions of the utterances, and the speech sentiment classifier is trained to predict the pseudo sentiment labels given corresponding embeddings of the audio speech utterances. The speech sentiment classifier is then fine-tuned using a sentiment-annotated dataset of audio speech utterances, which may be significantly smaller than the unannotated dataset used in the unsupervised pretraining phase. (A sketch of the two training phases follows this entry.)
    Type: Grant
    Filed: May 28, 2021
    Date of Patent: December 6, 2022
    Assignee: ASAPP, INC.
    Inventors: Suwon Shon, Pablo Brusco, Jing Pan, Kyu Jeong Han
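    The two training phases can be shown with a deliberately tiny classifier. The logistic-regression stand-in, the embedding sizes, and the toy pseudo labeler below are invented for illustration; the point is only that fine-tuning continues from the pretrained weights.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def train_logreg(x, y, w, lr=0.1, epochs=200):
        """Logistic regression by gradient descent, starting from the given
        weights w (so fine-tuning continues from the pretrained model)."""
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-(x @ w)))
            w -= lr * x.T @ (p - y) / len(y)
        return w

    # Stand-ins: utterance embeddings, plus pseudo sentiment labels that a
    # text-trained labeler would autogenerate from the transcripts.
    embeddings = rng.standard_normal((1000, 64))
    pseudo_labels = (embeddings[:, :3].sum(axis=1) > 0).astype(float)

    # Phase 1: unsupervised pretraining against the pseudo labels.
    w = train_logreg(embeddings, pseudo_labels, w=np.zeros(64))

    # Phase 2: fine-tune on a far smaller human-annotated dataset.
    small_x = rng.standard_normal((50, 64))
    small_y = rng.integers(0, 2, size=50).astype(float)
    w = train_logreg(small_x, small_y, w, lr=0.01, epochs=50)
    ```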
  • Publication number: 20220383858
    Abstract: For any application that processes speech, improving the quality of the feature vectors may improve the quality of the speech application. The quality of feature vectors may be improved by modifying a neural network architecture for computing feature vectors to allocate computational resources where they are more effective for learning and computing the feature vectors. Contextual feature vectors may be computed from feature vectors by using a parameterized downsampling operation that decreases a vector sequence rate, processing the downsampled vectors with a neural network, and using a parameterized upsampling operation that increases a vector sequence rate. For example, parameterized downsampling may decrease a vector sequence rate by a factor of two, the neural network may require fewer computational resources since it operates at the lower vector sequence rate, and parameterized upsampling may then increase the vector sequence rate by a factor of two. (A sketch of this flow follows this entry.)
    Type: Application
    Filed: October 4, 2021
    Publication date: December 1, 2022
    Inventors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Jeong Han, Kilian Quirin Weinberger, Yoav Artzi
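    Below is a minimal sketch of the rate-halving pipeline, assuming linear projections for the parameterized downsampling and upsampling and a single dense layer as the low-rate "network"; a real system would use learned convolutions and deeper stacks.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def downsample(x, w):
        """Parameterized downsampling: a learned mix of adjacent frame pairs,
        halving the vector sequence rate."""
        T = (len(x) // 2) * 2
        pairs = x[:T].reshape(T // 2, -1)   # concatenate adjacent frames
        return pairs @ w                    # (T/2, d)

    def upsample(y, w):
        """Parameterized upsampling: each low-rate vector emits two frames,
        restoring the original vector sequence rate."""
        return (y @ w).reshape(len(y) * 2, -1)

    d = 32
    x = rng.standard_normal((100, d))       # feature vectors at the full rate
    w_down = 0.1 * rng.standard_normal((2 * d, d))
    w_mid = 0.1 * rng.standard_normal((d, d))
    w_up = 0.1 * rng.standard_normal((d, 2 * d))

    low = downsample(x, w_down)             # 50 vectors instead of 100
    low = np.maximum(low @ w_mid, 0.0)      # "network" runs at the cheap, low rate
    contextual = upsample(low, w_up)        # back to 100 contextual feature vectors
    print(x.shape, "->", low.shape, "->", contextual.shape)
    ```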
  • Publication number: 20220319501
    Abstract: The amount of future context used in a speech processing application allows for tradeoffs between performance and the delay in providing results to users. Existing speech processing applications may be trained with a specified future context size and perform poorly when used in production with a different future context size. A speech processing application trained using a stochastic future context allows a trained neural network to be used in production with different amounts of future context. During an update step in training, a future-context size may be sampled from a probability distribution and used to mask a neural network, and an output of the masked neural network may be computed. The output may then be used to compute a loss value and update parameters of the neural network. The trained neural network may then be used in production with different amounts of future context to provide greater flexibility for production speech processing applications. (A sketch of one update step follows this entry.)
    Type: Application
    Filed: November 18, 2021
    Publication date: October 6, 2022
    Inventors: Kwangyoun Kim, Felix Wu, Prashant Sridhar, Kyu Jeong Han
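    One update step of this stochastic-future-context training can be sketched as follows. The averaging "network", the candidate context sizes, and the uniform sampling distribution are assumptions, and the parameter update itself is omitted for brevity.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def masked_output(x, future):
        """Toy "network": each frame averages itself, all past frames, and at
        most `future` future frames (the sampled future-context size)."""
        out = np.empty_like(x)
        for t in range(len(x)):
            out[t] = x[: min(t + future + 1, len(x))].mean(axis=0)
        return out

    x = rng.standard_normal((20, 8))        # one utterance of feature frames
    target = rng.standard_normal((20, 8))   # stand-in training target

    for step in range(5):
        # Sample a future-context size, mask the network, and score the output.
        future = int(rng.choice([0, 2, 4, 8]))
        loss = ((masked_output(x, future) - target) ** 2).mean()
        print(f"step {step}: future context = {future} frames, loss = {loss:.3f}")
    # In production the same network can then run with any of these
    # future-context sizes, trading result latency against accuracy.
    ```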
  • Patent number: 11138970
    Abstract: The present disclosure relates to a system, method, and computer program for creating a complete transcription of an audio recording from separately transcribed redacted and unredacted words. The system receives an original audio recording and redacts a plurality of words from the original audio recording to obtain a modified audio recording. The modified audio recording is outputted to a first transcription service. Audio clips of the redacted words from the original audio recording are extracted using word-level timestamps for the redacted words. The extracted audio clips are outputted to a second transcription service. The system receives a transcription of the modified audio recording from the first transcription service and transcriptions of the extracted audio clips from the second transcription service. (A sketch of the redact-and-split step follows this entry.)
    Type: Grant
    Filed: December 6, 2019
    Date of Patent: October 5, 2021
    Assignee: ASAPP, Inc.
    Inventors: Kyu Jeong Han, Madison Chandler Riley, Tao Ma
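    Once word-level timestamps for the redacted words are available, the redact-and-split step is simple audio bookkeeping. Below is a sketch with invented timestamps; the two transcription services and the final merge are represented only by comments.

    ```python
    import numpy as np

    SAMPLE_RATE = 16_000

    def redact_and_split(audio, redacted_spans):
        """Silence the redacted word spans in the original recording and
        return the modified audio plus clips of the redacted words.

        redacted_spans: list of (start_sec, end_sec) word-level timestamps.
        """
        modified = audio.copy()
        clips = []
        for start, end in redacted_spans:
            a, b = int(start * SAMPLE_RATE), int(end * SAMPLE_RATE)
            clips.append(audio[a:b].copy())  # sent to the second service
            modified[a:b] = 0.0              # redacted copy for the first service
        return modified, clips

    # Toy 3-second recording with two redacted words (timestamps invented).
    audio = np.random.default_rng(0).standard_normal(3 * SAMPLE_RATE)
    modified, clips = redact_and_split(audio, [(0.50, 0.80), (1.90, 2.35)])
    print(len(modified), "samples kept,", [len(c) for c in clips], "clip samples")
    # The modified recording and the clips would then go to two separate
    # transcription services, whose outputs are merged by timestamp.
    ```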
  • Publication number: 20210005182
    Abstract: Audio signals of speech may be processed using an acoustic model. An acoustic model may be implemented with multiple streams of processing where different streams perform processing using different dilation rates. For example, a first stream may process features of the audio signal with one or more convolutional neural network layers having a first dilation rate, and a second stream may process features of the audio signal with one or more convolutional neural network layers having a second dilation rate. Each stream may compute a stream vector, and the stream vectors may be combined into a vector of speech unit scores, where the vector of speech unit scores provides information about the acoustic content of the audio signal. The vector of speech unit scores may be used for any appropriate speech application, such as automatic speech recognition.
    Type: Application
    Filed: July 2, 2020
    Publication date: January 7, 2021
    Inventors: Kyu Jeong Han, Tao Ma, Daniel Povey
  • Patent number: 10753929
    Abstract: Provided is a portable urine analysis device including: a main housing including a tray with a tray driving unit, into which a strip is loaded and by which the strip is introduced and withdrawn, wherein a fully analyzed strip is dropped via the tray, and a urine analysis module that analyzes urine on the strip to generate urine analysis information; a sub-housing coupled under the main housing and including a slidable accommodation box with a space that temporarily stores the dropped strip; and a support supporting the sub-housing. With this device, urine is analyzed using the strip as a medium, portability is enhanced by a compact, slim structure, and a fully analyzed strip is disposed of hygienically.
    Type: Grant
    Filed: March 11, 2019
    Date of Patent: August 25, 2020
    Assignee: PROTEC LIFE & HEALTH Co., Ltd.
    Inventors: Sung Hwan Choi, In Soo Jeon, Kyu Jeong Han
  • Patent number: 10474964
    Abstract: A machine learning model is trained by defining a scenario including models of vehicles and a typical driving environment. A model of a subject vehicle is added to the scenario and sensor locations are defined on the subject vehicle. A perception of the scenario by sensors at the sensor locations is simulated. The scenario further includes a model of a lane-splitting vehicle. The location of the lane-splitting vehicle and the simulated outputs of the sensors perceiving the scenario are input to a machine learning algorithm that trains a model to detect the location of a lane-splitting vehicle based on the sensor outputs. A vehicle controller then incorporates the machine learning model and estimates the presence and/or location of a lane-splitting vehicle based on actual sensor outputs input to the machine learning model. (A sketch of the simulate-then-train loop follows this entry.)
    Type: Grant
    Filed: January 26, 2016
    Date of Patent: November 12, 2019
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Ashley Elizabeth Micks, Jinesh J Jain, Harpreetsingh Banvait, Kyu Jeong Han
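    The simulate-then-train loop can be sketched with stand-ins: the toy simulate_sensors function and the linear model below are invented placeholders for a full driving simulator and the actual machine learning algorithm.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)

    def simulate_sensors(lane_splitter_xy):
        """Stand-in for simulated perception of the scenario: a noisy sensor
        feature vector influenced by the lane-splitter's true position."""
        reading = 0.1 * rng.standard_normal(16)
        reading[:2] += lane_splitter_xy     # toy coupling to the true location
        return reading

    # Build labeled pairs from simulated scenarios.
    locations = rng.uniform(-5, 5, size=(500, 2))   # lane-splitter positions
    sensor_outputs = np.array([simulate_sensors(xy) for xy in locations])

    # Train a model mapping sensor outputs -> lane-splitter location.
    model = LinearRegression().fit(sensor_outputs, locations)

    # The vehicle controller would run this model on *actual* sensor outputs.
    estimate = model.predict(simulate_sensors(np.array([2.0, -1.0]))[None])
    print("estimated lane-splitter location:", estimate.round(2))
    ```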
  • Publication number: 20190204315
    Abstract: Provided is a portable urine analysis device including: a main housing including a tray with a tray driving unit, into which a strip is loaded and by which the strip is introduced and withdrawn, wherein a fully analyzed strip is dropped via the tray, and a urine analysis module that analyzes urine on the strip to generate urine analysis information; a sub-housing coupled under the main housing and including a slidable accommodation box with a space that temporarily stores the dropped strip; and a support supporting the sub-housing. With this device, urine is analyzed using the strip as a medium, portability is enhanced by a compact, slim structure, and a fully analyzed strip is disposed of hygienically.
    Type: Application
    Filed: March 11, 2019
    Publication date: July 4, 2019
    Applicant: PROTEC LIFE & HEALTH Co., Ltd.
    Inventors: Sung Hwan Choi, In Soo Jeon, Kyu Jeong Han
  • Patent number: 10182359
    Abstract: A method and apparatus for testing signal reception sensitivity of a wireless access point are provided. The apparatus includes a receiver which is configured to receive a test start request signal, a transmission power level change request signal and a test packet retransmission request signal from a neighboring access point apparatus; a test signal generator which is configured to generate a test packet signal to be transmitted to the neighboring access point apparatus; a transmission power level adjuster which is configured to adjust a transmission power level of the test packet signal; a transmitter which is configured to transmit the test packet signal to the neighboring access point apparatus with a power having a level adjusted by the transmission power level adjuster; and a test executor which is configured to execute a test program having a predetermined sequence for testing a signal reception sensitivity of the neighboring access point apparatus. (A sketch of one possible test sequence follows this entry.)
    Type: Grant
    Filed: October 17, 2011
    Date of Patent: January 15, 2019
    Assignee: KT Corporation
    Inventors: Jae Ho Chung, Wi Sang Rho, Yung Ha Ji, Kyu Jeong Han
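    One plausible reading of the test sequence is a power sweep: the transmission power level adjuster steps the test packets down until the neighboring access point stops decoding them, which brackets its reception sensitivity. The path-loss and sensitivity figures below are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def neighbor_receives(tx_power_dbm, path_loss_db=70.0, sensitivity_dbm=-82.0):
        """Stand-in for the neighboring access point's receiver: a test packet
        is decoded when its received power clears the (unknown) sensitivity."""
        fading = rng.normal(0.0, 1.0)
        return tx_power_dbm - path_loss_db + fading >= sensitivity_dbm

    # Sweep the transmission power level downward, sending test packets at
    # each level, until packets stop getting through.
    for tx_power in range(20, -10, -2):     # dBm steps set by the power adjuster
        ok = sum(neighbor_receives(tx_power) for _ in range(20)) / 20
        print(f"tx {tx_power:>3} dBm: {ok:.0%} of test packets received")
        if ok < 0.5:
            print("estimated reception sensitivity reached near this level")
            break
    ```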
  • Patent number: 10096263
    Abstract: Methods, devices, and systems pertaining to in-vehicle tutorials are described. A method may involve receiving a request for an in-vehicle tutorial of an operational feature of a vehicle from a user and simulating expected driving behavior corresponding to the operational feature in the vehicle. The method may further include monitoring operational behavior of the user, comparing the operational behavior with the expected driving behavior, and providing feedback to the user based on the comparison. (A sketch of the comparison step follows this entry.)
    Type: Grant
    Filed: September 2, 2015
    Date of Patent: October 9, 2018
    Inventors: Jinesh J Jain, Daniel Levine, Kyu Jeong Han, Gintaras Vincent Puskorius
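    The monitor-compare-feedback loop can be sketched in a few lines; the signals, target values, and tolerance below are all invented for illustration.

    ```python
    # Expected driving behavior for one operational feature, as produced by
    # the in-vehicle simulation step (values invented).
    expected = {"brake_pressure": 0.6, "following_gap_s": 2.0}

    def feedback(observed, expected, tolerance=0.25):
        """Compare monitored operational behavior with the expected driving
        behavior and produce per-signal feedback for the user."""
        notes = []
        for signal, target in expected.items():
            err = abs(observed[signal] - target) / target
            verdict = "ok" if err <= tolerance else "needs work"
            notes.append(f"{signal}: {verdict} "
                         f"(observed {observed[signal]}, expected ~{target})")
        return notes

    observed = {"brake_pressure": 0.35, "following_gap_s": 1.9}
    for note in feedback(observed, expected):
        print(note)
    ```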
  • Patent number: 10055675
    Abstract: A machine learning model is trained by defining a scenario including models of vehicles and a typical driving environment. A model of a subject vehicle is added to the scenario and sensor locations are defined on the subject vehicle. A perception of the scenario by sensors at the sensor locations is simulated. The scenario further includes a model of a parked vehicle with its engine running. The location of the parked vehicle and the simulated outputs of the sensors perceiving the scenario are input to a machine learning algorithm that trains a model to detect the location of the parked vehicle based on the sensor outputs. A vehicle controller then incorporates the machine learning model and estimates the presence and/or location of a parked vehicle with its engine running based on actual sensor outputs input to the machine learning model.
    Type: Grant
    Filed: June 15, 2016
    Date of Patent: August 21, 2018
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Ashley Elizabeth Micks, Jinesh J Jain, Kyu Jeong Han, Harpreetsingh Banvait
  • Patent number: 9996080
    Abstract: A controller for an autonomous vehicle receives audio signals from one or more microphones. The audio signals are input to a machine learning model that classifies the source of the audio features. For example, features may be classified as originating from a vehicle. A direction to a source of the audio features is determined based on relative delays of the audio features in signals from multiple microphones. Where audio features are classified with an above-threshold confidence as originating from a vehicle, collision avoidance is performed with respect to the direction to the source of the audio features. The direction to the source of the audio features may be correlated with vehicle images and/or map data to increase a confidence score that the source of the audio features is a parked vehicle with its engine running. Collision avoidance may then be performed with potential paths of the parked vehicle. (A sketch of the delay-based direction estimate follows this entry.)
    Type: Grant
    Filed: February 26, 2016
    Date of Patent: June 12, 2018
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Harpreetsingh Banvait, Jinesh J Jain, Kyu Jeong Han
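    The direction-from-delay step is classic time-difference-of-arrival estimation. The sketch below recovers the relative delay by cross-correlating two simulated microphone signals; the abstract does not name the patent's exact method, so the mic spacing, sample rate, and correlation approach are assumptions.

    ```python
    import numpy as np

    SAMPLE_RATE = 16_000
    MIC_SPACING_M = 0.2
    SPEED_OF_SOUND = 343.0

    rng = np.random.default_rng(0)

    # Simulate a vehicle sound reaching microphone B five samples after A.
    source = rng.standard_normal(4000)
    mic_a = source + 0.05 * rng.standard_normal(4000)
    mic_b = np.roll(source, 5) + 0.05 * rng.standard_normal(4000)

    # Relative delay via cross-correlation of the two microphone signals.
    corr = np.correlate(mic_b, mic_a, mode="full")
    delay = corr.argmax() - (len(mic_a) - 1)    # samples; positive = B lags A

    # Convert the delay into a bearing toward the sound source.
    tdoa = delay / SAMPLE_RATE
    sin_angle = np.clip(tdoa * SPEED_OF_SOUND / MIC_SPACING_M, -1.0, 1.0)
    print(f"delay {delay} samples -> bearing {np.degrees(np.arcsin(sin_angle)):.1f} deg")
    ```

    Collision avoidance would then be performed with respect to this bearing once the classifier's confidence that the source is a vehicle crosses the threshold.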
  • Publication number: 20180032902
    Abstract: Training tuples including text and a question and answer corresponding to the text are input to a machine learning algorithm, such as a deep neural network. A Q&A model is obtained that outputs questions and answers given an input text. The training tuples may be obtained from standardized tests such that the text is a question prompt and the questions and answers are based on the prompt. Raw text is input to the Q&A model to obtain second training tuples including a question and an answer. An NLU model is trained according to the second training tuples. The NLU model may then be installed on a consumer device, which then uses the model to respond appropriately to conversational queries. (A sketch of the two-stage data flow follows this entry.)
    Type: Application
    Filed: July 27, 2016
    Publication date: February 1, 2018
    Inventors: Lakshmi Krishnan, Kyu Jeong Han, Francois Charette, Gintaras Vincent Puskorius
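    The two-stage data flow can be sketched with stand-in trainers: train_qa_model and train_nlu_model below are invented placeholders for the deep neural networks described in the abstract, kept trivial so the tuple plumbing stays visible.

    ```python
    # Stage 1 training tuples from a standardized test: (text, question, answer).
    seed_tuples = [
        ("The Nile flows north through Egypt.",
         "Which direction does the Nile flow?", "north"),
    ]

    def train_qa_model(tuples):
        """Stand-in for fitting the Q&A model; returns a callable that maps
        raw text to generated (question, answer) pairs."""
        def qa_model(text):
            subject = text.split()[0]
            return [(f"What is said about {subject}?", text)]
        return qa_model

    def train_nlu_model(tuples):
        """Stand-in for fitting the NLU model on the generated tuples;
        a lookup-style responder keeps the example self-contained."""
        table = {question.lower(): answer for question, answer in tuples}
        return lambda query: table.get(query.lower(), "I don't know yet.")

    qa_model = train_qa_model(seed_tuples)

    # Run the Q&A model over raw text to autogenerate second training tuples.
    raw_texts = ["Tire pressure should be checked monthly.",
                 "Coolant level is visible on the reservoir."]
    second_tuples = [pair for text in raw_texts for pair in qa_model(text)]

    nlu = train_nlu_model(second_tuples)
    print(nlu("What is said about Tire?"))   # consumer-device style query
    ```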
  • Patent number: 9873428
    Abstract: A controller for an autonomous vehicle receives audio signals from one or more microphones. The outputs of the microphones are pre-processed to enhance audio features that originated from vehicles. The outputs may also be processed to remove noise. The audio features are input to a machine learning model that classifies the source of the audio features. For example, features may be classified as originating from a vehicle. A direction to a source of the audio features is determined based on relative delays of the audio features in signals from multiple microphones. Where audio features are classified with an above-threshold confidence as originating from a vehicle, collision avoidance is performed with respect to the direction to the source of the audio features.
    Type: Grant
    Filed: October 27, 2015
    Date of Patent: January 23, 2018
    Assignee: Ford Global Technologies, LLC
    Inventors: Harpreetsingh Banvait, Kyu Jeong Han, Jinesh J Jain
  • Publication number: 20170364776
    Abstract: A machine learning model is trained by defining a scenario including models of vehicles and a typical driving environment. A model of a subject vehicle is added to the scenario and sensor locations are defined on the subject vehicle. A perception of the scenario by sensors at the sensor locations is simulated. The scenario further includes a model of a parked vehicle with its engine running. The location of the parked vehicle and the simulated outputs of the sensors perceiving the scenario are input to a machine learning algorithm that trains a model to detect the location of the parked vehicle based on the sensor outputs. A vehicle controller then incorporates the machine learning model and estimates the presence and/or location of a parked vehicle with its engine running based on actual sensor outputs input to the machine learning model.
    Type: Application
    Filed: June 15, 2016
    Publication date: December 21, 2017
    Inventors: Ashley Elizabeth Micks, Jinesh J. Jain, Kyu Jeong Han, Harpreetsingh Banvait
  • Publication number: 20170248955
    Abstract: A controller for an autonomous vehicle receives audio signals from one or more microphones. The audio signals are input to a machine learning model that classifies the source of the audio features. For example, features may be classified as originating from a vehicle. A direction to a source of the audio features is determined based on relative delays of the audio features in signals from multiple microphones. Where audio features are classified with an above-threshold confidence as originating from a vehicle, collision avoidance is performed with respect to the direction to the source of the audio features. The direction to the source of the audio features may be correlated with vehicle images and/or map data to increase a confidence score that the source of the audio features is a parked vehicle with its engine running. Collision avoidance may then be performed with potential paths of the parked vehicle.
    Type: Application
    Filed: February 26, 2016
    Publication date: August 31, 2017
    Inventors: Harpreetsingh Banvait, Jinesh J. Jain, Kyu Jeong Han
  • Publication number: 20170213149
    Abstract: A machine learning model is trained by defining a scenario including models of vehicles and a typical driving environment. A model of a subject vehicle is added to the scenario and sensor locations are defined on the subject vehicle. A perception of the scenario by sensors at the sensor locations is simulated. The scenario further includes a model of a lane-splitting vehicle. The location of the lane-splitting vehicle and the simulated outputs of the sensors perceiving the scenario are input to a machine learning algorithm that trains a model to detect the location of a lane-splitting vehicle based on the sensor outputs. A vehicle controller then incorporates the machine learning model and estimates the presence and/or location of a lane-splitting vehicle based on actual sensor outputs input to the machine learning model.
    Type: Application
    Filed: January 26, 2016
    Publication date: July 27, 2017
    Inventors: Ashley Elizabeth Micks, Jinesh J. Jain, Harpreetsingh Banvait, Kyu Jeong Han
  • Patent number: 9707911
    Abstract: The present invention extends to methods, systems, and computer program products for identifying a person as a driver of a vehicle. Aspects include using in-vehicle sensors to increase the accuracy of driver identification initially determined using other mechanisms. As such, at least two different types of sensory devices can be utilized to gather data in an effort to identify the driver. The data gathered from the first sensor (e.g., at a key fob) is processed to identify the driver based on learned characteristic patterns, such as the driver's gait, before the driver enters the vehicle. The data gathered from the second sensor (e.g., in an in-vehicle face or voice recognition system) is processed to confirm the driver. The confirmation is based on biometric data and learned characteristic patterns. Data from the second sensor is provided back to the first sensor to confirm the driver identity. (A sketch of the two-stage confirmation follows this entry.)
    Type: Grant
    Filed: March 21, 2016
    Date of Patent: July 18, 2017
    Assignee: Ford Global Technologies, LLC
    Inventors: Scott Vincent Myers, Shane Elwart, Walter Joseph Talamonti, Jonathan Thomas Mullen, Zachary David Nelson, Tory Smith, Bibhrajit Halder, Kyu Jeong Han
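    The two-sensor confirmation flow reduces to a small amount of control logic. The functions, scores, and threshold below are invented; a real system would back them with learned gait, face, and voice models.

    ```python
    def identify_by_gait(fob_motion_score):
        """Stand-in for the first sensor: a tentative driver identity from
        learned gait patterns, formed before the driver enters the vehicle."""
        return "alice", fob_motion_score

    def confirm_in_vehicle(candidate, biometric_score, threshold=0.8):
        """Stand-in for the second sensor (in-vehicle face or voice
        recognition): confirm or reject the tentative identity."""
        return biometric_score >= threshold

    candidate, gait_conf = identify_by_gait(fob_motion_score=0.72)
    if confirm_in_vehicle(candidate, biometric_score=0.91):
        # Confirmation is fed back to the first sensor so its gait model
        # for this driver can be reinforced over time.
        print(f"driver {candidate} confirmed (gait confidence {gait_conf:.2f})")
    ```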