Patents by Inventor Suwon SHON

Suwon SHON has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11664015
Abstract: A method for searching, from among a plurality of contents, for content having the same voice as that of a target speaker includes: extracting a feature vector corresponding to the voice of the target speaker; repeatedly selecting, a predetermined number of times, a subset of speakers from a training dataset; generating a linear discriminant analysis (LDA) transformation matrix from each of the selected subsets of speakers; projecting the extracted speaker feature vector onto each selected subset of speakers using the corresponding generated LDA transformation matrix; assigning, to each projection region of the extracted speaker feature vector, a value corresponding to the nearest speaker class within the corresponding subset of speakers; generating a hash value for the extracted feature vector based on the assigned values; and searching among the contents for content whose hash value is similar to the generated hash value.
    Type: Grant
    Filed: May 13, 2021
    Date of Patent: May 30, 2023
    Assignee: NEOSAPIENCE, INC.
    Inventors: Suwon Shon, Younggun Lee, Taesu Kim
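The hashing pipeline described in the abstract can be sketched roughly as follows. This is an illustrative simplification, not the claimed method: it replaces the per-subset LDA projection and region assignment with a nearest-centroid assignment within each randomly selected speaker subset, and all function names and parameters (`speaker_hash`, `num_rounds`, `subset_size`) are hypothetical.

```python
import numpy as np

def speaker_hash(query, train_vecs, train_labels,
                 num_rounds=8, subset_size=4, seed=0):
    """Hash a speaker embedding by repeated random-subset assignment.

    Simplified sketch of the patented scheme: each round samples a
    random subset of training speakers and records which of them the
    query embedding is closest to (standing in for the LDA projection
    and nearest-speaker-class region assignment). The sequence of
    assigned speaker IDs forms the hash code.
    """
    rng = np.random.default_rng(seed)
    speakers = np.unique(train_labels)
    # Per-speaker centroid embeddings from the training dataset.
    centroids = {s: train_vecs[train_labels == s].mean(axis=0)
                 for s in speakers}
    code = []
    for _ in range(num_rounds):
        subset = rng.choice(speakers, size=subset_size, replace=False)
        dists = [np.linalg.norm(query - centroids[s]) for s in subset]
        code.append(int(subset[int(np.argmin(dists))]))
    return tuple(code)

def hamming_similarity(h1, h2):
    """Fraction of matching positions between two hash codes."""
    return sum(a == b for a, b in zip(h1, h2)) / len(h1)
```

Content search then reduces to comparing hash codes: two recordings of the same speaker should land in the same region in most rounds, giving high `hamming_similarity`, while different speakers should not.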
  • Patent number: 11521639
Abstract: The present disclosure describes a system, method, and computer program for predicting sentiment labels for audio speech utterances using an audio speech sentiment classifier pretrained with pseudo sentiment labels. A speech sentiment classifier for audio speech ("a speech sentiment classifier") is pretrained in an unsupervised manner by leveraging a pseudo labeler previously trained to predict sentiments for text. Specifically, a text-trained pseudo labeler is used to autogenerate pseudo sentiment labels for the audio speech utterances from transcriptions of the utterances, and the speech sentiment classifier is trained to predict the pseudo sentiment labels given corresponding embeddings of the audio speech utterances. The speech sentiment classifier is then fine-tuned using a sentiment-annotated dataset of audio speech utterances, which may be significantly smaller than the unannotated dataset used in the unsupervised pretraining phase.
    Type: Grant
    Filed: May 28, 2021
    Date of Patent: December 6, 2022
    Assignee: ASAPP, INC.
    Inventors: Suwon Shon, Pablo Brusco, Jing Pan, Kyu Jeong Han
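The two-phase recipe in the abstract (unsupervised pretraining on pseudo labels produced by a text model, then supervised fine-tuning on a small annotated set) might be sketched as follows. The keyword-based `pseudo_labeler` and the tiny softmax classifier are illustrative stand-ins under stated assumptions, not the patented models, which operate on learned speech embeddings.

```python
import numpy as np

def pseudo_labeler(transcript):
    """Stand-in for a text-trained sentiment model (hypothetical
    keyword rule): maps a transcript to a label 0/1/2 = neg/neu/pos."""
    positive = {"great", "good", "love", "thanks"}
    negative = {"bad", "terrible", "hate", "angry"}
    words = set(transcript.lower().split())
    if words & positive:
        return 2
    if words & negative:
        return 0
    return 1

class SpeechSentimentClassifier:
    """Tiny softmax classifier over fixed speech embeddings (sketch)."""

    def __init__(self, dim, classes=3, lr=0.5):
        self.W = np.zeros((dim, classes))
        self.lr = lr

    def _probs(self, X):
        z = X @ self.W
        z -= z.max(axis=1, keepdims=True)  # numerical stability
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    def fit(self, X, y, epochs=200):
        # Gradient descent on softmax cross-entropy.
        Y = np.eye(self.W.shape[1])[y]
        for _ in range(epochs):
            grad = X.T @ (self._probs(X) - Y) / len(X)
            self.W -= self.lr * grad
        return self

    def predict(self, X):
        return self._probs(X).argmax(axis=1)
```

Usage follows the two phases: pretrain on `(embedding, pseudo_labeler(transcript))` pairs from a large unannotated corpus, then call `fit` again on the small human-annotated set to fine-tune the same weights.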
  • Publication number: 20210280173
Abstract: A method for searching, from among a plurality of contents, for content having the same voice as that of a target speaker includes: extracting a feature vector corresponding to the voice of the target speaker; repeatedly selecting, a predetermined number of times, a subset of speakers from a training dataset; generating a linear discriminant analysis (LDA) transformation matrix from each of the selected subsets of speakers; projecting the extracted speaker feature vector onto each selected subset of speakers using the corresponding generated LDA transformation matrix; assigning, to each projection region of the extracted speaker feature vector, a value corresponding to the nearest speaker class within the corresponding subset of speakers; generating a hash value for the extracted feature vector based on the assigned values; and searching among the contents for content whose hash value is similar to the generated hash value.
    Type: Application
    Filed: May 13, 2021
    Publication date: September 9, 2021
    Applicant: NEOSAPIENCE, INC.
    Inventors: Suwon SHON, Younggun LEE, Taesu KIM
  • Patent number: 10410638
Abstract: A method of converting a feature vector includes extracting a feature sequence from an audio signal including an utterance of a user; extracting a feature vector from the feature sequence; acquiring a conversion matrix for reducing a dimension of the feature vector, based on a probability value computed from different covariance values; and converting the feature vector by using the conversion matrix.
    Type: Grant
    Filed: February 27, 2015
    Date of Patent: September 10, 2019
    Assignees: SAMSUNG ELECTRONICS CO., LTD., Korea University Research and Business Foundation
    Inventors: Hanseok Ko, Sung-soo Kim, Jinsang Rho, Suwon Shon, Jae-won Lee
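One conventional way to realize a dimension-reducing "conversion matrix" built from different covariance values is standard linear discriminant analysis, which contrasts within-class and between-class scatter. The sketch below is an assumption about the general technique, not the exact claimed computation; the names `lda_conversion_matrix` and `convert` are illustrative.

```python
import numpy as np

def lda_conversion_matrix(X, y, out_dim):
    """Build a dimension-reducing conversion matrix from within-class
    and between-class covariances (standard LDA sketch)."""
    classes = np.unique(y)
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean)[:, None]
        Sb += len(Xc) * diff @ diff.T
    # Leading eigenvectors of Sw^{-1} Sb are the most discriminative
    # directions; keep the top out_dim of them as the conversion matrix.
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(vals.real)[::-1]
    return vecs.real[:, order[:out_dim]]

def convert(x, W):
    """Convert (project) a feature vector with the conversion matrix."""
    return x @ W
```

Applied to speaker features, this maps a high-dimensional vector into a low-dimensional space where same-class vectors stay close and different classes are pushed apart.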
  • Publication number: 20180033439
Abstract: A method of converting a feature vector includes extracting a feature sequence from an audio signal including an utterance of a user; extracting a feature vector from the feature sequence; acquiring a conversion matrix for reducing a dimension of the feature vector, based on a probability value computed from different covariance values; and converting the feature vector by using the conversion matrix.
    Type: Application
    Filed: February 27, 2015
    Publication date: February 1, 2018
    Applicants: SAMSUNG ELECTRONICS CO., LTD., Korea University Research and Business Foundation
    Inventors: Hanseok KO, Sung-soo KIM, Jinsang RHO, Suwon SHON, Jae-won LEE