Patents by Inventor Takehiko Mizuguchi

Takehiko Mizuguchi has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11355138
    Abstract: A method is provided. Intermediate audio features are generated from respective segments of an input acoustic time series for a same scene. Using a nearest neighbor search, respective segments of the input acoustic time series are classified based on the intermediate audio features to generate a final intermediate feature as a classification for the input acoustic time series. Each respective segment corresponds to a respective different acoustic window. The generating step includes learning the intermediate audio features from Multi-Frequency Cepstral Component (MFCC) features extracted from the input acoustic time series, dividing the same scene into the different windows having varying MFCC features, and feeding the MFCC features of each window into respective LSTM units such that a hidden state of each respective LSTM unit is passed through an attention layer to identify feature correlations between hidden states at different time steps corresponding to different ones of the different windows.
    Type: Grant
    Filed: August 19, 2020
    Date of Patent: June 7, 2022
    Inventors: Cristian Lumezanu, Yuncong Chen, Dongjin Song, Takehiko Mizuguchi, Haifeng Chen, Bo Dong
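The abstract above describes feeding per-window MFCC features through LSTM units and passing the hidden states through an attention layer that captures correlations between windows. A minimal NumPy sketch of that attention step, under assumptions not stated in the patent (the LSTM encoder is elided and stands in as precomputed hidden states; shapes, dot-product scoring, and mean pooling are all illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 5 acoustic windows, each summarized by a
# 16-dimensional hidden state (one per LSTM unit; the LSTMs
# themselves are elided in this sketch).
num_windows, hidden_dim = 5, 16
hidden_states = rng.standard_normal((num_windows, hidden_dim))

def attention_pool(states: np.ndarray) -> np.ndarray:
    """Score each window's hidden state against the others and return
    an attention-weighted summary (an 'intermediate audio feature')."""
    # Dot-product scores between every pair of hidden states.
    scores = states @ states.T                      # (W, W)
    # Row-wise softmax over the key dimension (numerically stable).
    scores = scores - scores.max(axis=1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)   # rows sum to 1
    # Each window's output mixes in the states of correlated windows.
    attended = weights @ states                     # (W, H)
    # Pool across windows into a single feature vector.
    return attended.mean(axis=0)                    # (H,)

feature = attention_pool(hidden_states)
print(feature.shape)  # (16,)
```

The attention weights here let hidden states at different time steps reinforce one another, which is the correlation-identification role the abstract assigns to the attention layer.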
  • Publication number: 20210065735
Abstract: A method is provided. Intermediate audio features are generated from an input acoustic sequence. Using a nearest neighbor search, segments of the input acoustic sequence are classified based on the intermediate audio features to generate a final intermediate feature as a classification for the input acoustic sequence. Each segment corresponds to a respective different acoustic window. The generating step includes learning the intermediate audio features from Multi-Frequency Cepstral Component (MFCC) features extracted from the input acoustic sequence. The generating step includes dividing the acoustic scene into the different acoustic windows having varying MFCC features.
    Type: Application
    Filed: August 19, 2020
    Publication date: March 4, 2021
    Inventors: Cristian Lumezanu, Yuncong Chen, Dongjin Song, Takehiko Mizuguchi, Haifeng Chen, Bo Dong
  • Publication number: 20210065734
    Abstract: A method is provided. Intermediate audio features are generated from respective segments of an input acoustic time series for a same scene. Using a nearest neighbor search, respective segments of the input acoustic time series are classified based on the intermediate audio features to generate a final intermediate feature as a classification for the input acoustic time series. Each respective segment corresponds to a respective different acoustic window. The generating step includes learning the intermediate audio features from Multi-Frequency Cepstral Component (MFCC) features extracted from the input acoustic time series, dividing the same scene into the different windows having varying MFCC features, and feeding the MFCC features of each window into respective LSTM units such that a hidden state of each respective LSTM unit is passed through an attention layer to identify feature correlations between hidden states at different time steps corresponding to different ones of the different windows.
    Type: Application
    Filed: August 19, 2020
    Publication date: March 4, 2021
    Inventors: Cristian Lumezanu, Yuncong Chen, Dongjin Song, Takehiko Mizuguchi, Haifeng Chen, Bo Dong
  • Patent number: 10930301
Abstract: A method is provided. Intermediate audio features are generated from an input acoustic sequence. Using a nearest neighbor search, segments of the input acoustic sequence are classified based on the intermediate audio features to generate a final intermediate feature as a classification for the input acoustic sequence. Each segment corresponds to a respective different acoustic window. The generating step includes learning the intermediate audio features from Multi-Frequency Cepstral Component (MFCC) features extracted from the input acoustic sequence. The generating step includes dividing the acoustic scene into the different acoustic windows having varying MFCC features.
    Type: Grant
    Filed: August 19, 2020
    Date of Patent: February 23, 2021
    Inventors: Cristian Lumezanu, Yuncong Chen, Dongjin Song, Takehiko Mizuguchi, Haifeng Chen, Bo Dong
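Each abstract in this family classifies segments with a nearest neighbor search over the learned intermediate features. A minimal NumPy sketch of that step, with the gallery size, feature dimension, class count, and Euclidean metric all chosen as assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical gallery of labeled intermediate features (e.g. from
# previously classified acoustic scenes) plus a query feature.
gallery = rng.standard_normal((20, 16))
labels = rng.integers(0, 3, size=20)  # 3 hypothetical scene classes

def nearest_neighbor_label(query, gallery, labels):
    """Classify a feature by the label of its closest gallery entry
    (Euclidean distance), i.e. a 1-nearest-neighbor search."""
    dists = np.linalg.norm(gallery - query, axis=1)
    return labels[int(np.argmin(dists))]

# A query that is a slightly perturbed copy of gallery entry 7
# should recover that entry's label.
query = gallery[7] + 0.01 * rng.standard_normal(16)
pred = nearest_neighbor_label(query, gallery, labels)
print(pred == labels[7])  # True
```

In practice a tree- or graph-based index would replace the brute-force distance scan for large galleries, but the classification rule is the same.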