Patents by Inventor Kenichi Kumatani

Kenichi Kumatani has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11935525
    Abstract: Systems and methods for utilizing microphone array information for acoustic modeling are disclosed. Audio data may be received from a device having a microphone array configuration. Microphone configuration data may also be received that indicates the configuration of the microphone array. The microphone configuration data may be utilized as an input vector to an acoustic model, along with the audio data, to generate phoneme data. Additionally, the microphone configuration data may be utilized to train and/or generate acoustic models, to select an acoustic model with which to perform speech recognition, and/or to improve trigger sound detection. A brief illustrative code sketch follows this entry.
    Type: Grant
    Filed: June 8, 2020
    Date of Patent: March 19, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Shiva Kumar Sundaram, Minhua Wu, Anirudh Raju, Spyridon Matsoukas, Arindam Mandal, Kenichi Kumatani
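The central idea of the abstract above — conditioning an acoustic model on a description of the microphone array — can be sketched briefly. This is a minimal illustration, not the patented implementation: the feature size, the one-hot geometry encoding, the two-layer network, and all weights are assumptions invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 40-dim log-mel frame, 4 known array geometries, 40 phonemes.
N_MEL, N_GEOMETRIES, N_PHONEMES, HIDDEN = 40, 4, 40, 128

# Random weights stand in for a trained acoustic model.
W1 = rng.standard_normal((N_MEL + N_GEOMETRIES, HIDDEN)) * 0.1
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((HIDDEN, N_PHONEMES)) * 0.1
b2 = np.zeros(N_PHONEMES)

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def phoneme_posteriors(audio_frame, geometry_id):
    """Condition the acoustic model on the microphone-array configuration
    by appending a one-hot geometry vector to the audio feature frame."""
    config = np.eye(N_GEOMETRIES)[geometry_id]         # microphone configuration data
    x = np.concatenate([audio_frame, config])          # joint input vector
    h = np.tanh(x @ W1 + b1)
    return softmax(h @ W2 + b2)                        # per-phoneme probabilities

frame = rng.standard_normal(N_MEL)                     # one fake log-mel frame
print(phoneme_posteriors(frame, geometry_id=2).shape)  # (40,)
```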
  • Publication number: 20230368782
    Abstract: Systems and methods are provided for training a machine learning model to learn speech representations. Labeled speech data, or both labeled and unlabeled data sets, are applied to a feature extractor of a machine learning model to generate latent speech representations. The latent speech representations are applied to a quantizer to generate quantized latent speech representations and to a transformer context network to generate contextual representations. Each contextual representation is aligned with a phoneme label to generate phonetically aware contextual representations. Quantized latent representations are likewise aligned with phoneme labels to generate phonetically aware latent speech representations. A brief illustrative code sketch follows this entry.
    Type: Application
    Filed: July 3, 2023
    Publication date: November 16, 2023
    Inventors: Yao Qian, Yu Wu, Kenichi Kumatani, Shujie Liu, Furu Wei, Nanshan Zeng, Xuedong David Huang, Chengyi Wang
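A minimal sketch of the pipeline this abstract describes: latent representations are quantized against a codebook, passed through a context network, and both the quantized and contextual representations are scored against frame-level phoneme labels. The transformer is replaced here by a single linear map to keep the sketch self-contained, and all dimensions, weights, and the frame-level alignment are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
T, D, V, N_PHONEMES = 50, 64, 320, 40   # frames, feature dim, codebook size, phoneme set

latent = rng.standard_normal((T, D))            # z_t from the feature extractor
codebook = rng.standard_normal((V, D))          # quantizer codebook

# Quantize each latent frame to its nearest codebook entry.
dists = ((latent[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
quantized = codebook[dists.argmin(1)]           # q_t

# A stand-in "context network": a fixed linear map instead of a transformer.
Wc = rng.standard_normal((D, D)) * 0.1
context = np.tanh(latent @ Wc)                  # c_t

# Align each frame with a phoneme label and score both representation types
# against those labels, making them "phonetically aware".
labels = rng.integers(0, N_PHONEMES, size=T)    # frame-level phoneme alignment
Wp = rng.standard_normal((D, N_PHONEMES)) * 0.1

def phoneme_ce(reps, labels):
    """Cross-entropy between representations and their aligned phoneme labels."""
    logits = reps @ Wp
    logp = logits - np.log(np.exp(logits).sum(-1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

loss = phoneme_ce(context, labels) + phoneme_ce(quantized, labels)
print(f"phonetic alignment loss: {loss:.3f}")
```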
  • Patent number: 11735171
    Abstract: Systems and methods are provided for training a machine learning model to learn speech representations. Labeled speech data, or both labeled and unlabeled data sets, are applied to a feature extractor of a machine learning model to generate latent speech representations. The latent speech representations are applied to a quantizer to generate quantized latent speech representations and to a transformer context network to generate contextual representations. Each contextual representation is aligned with a phoneme label to generate phonetically aware contextual representations. Quantized latent representations are likewise aligned with phoneme labels to generate phonetically aware latent speech representations.
    Type: Grant
    Filed: May 14, 2021
    Date of Patent: August 22, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Yao Qian, Yu Wu, Kenichi Kumatani, Shujie Liu, Furu Wei, Nanshan Zeng, Xuedong David Huang, Chengyi Wang
  • Patent number: 11574628
    Abstract: Techniques for speech processing using a deep neural network (DNN) based acoustic model front-end are described. A new modeling approach directly models multi-channel audio data received from a microphone array using a first model (e.g., multi-geometry/multi-channel DNN) that is trained using a plurality of microphone array geometries. Thus, the first model may receive a variable number of microphone channels, generate multiple outputs using multiple microphone array geometries, and select the best output as a first feature vector that may be used similarly to beamformed features generated by an acoustic beamformer. A second model (e.g., feature extraction DNN) processes the first feature vector and transforms it to a second feature vector having a lower dimensional representation. A third model (e.g., classification DNN) processes the second feature vector to perform acoustic unit classification and generate text data. The DNN front-end enables improved performance despite a reduction in microphones. A brief illustrative code sketch follows this entry.
    Type: Grant
    Filed: March 28, 2019
    Date of Patent: February 7, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Kenichi Kumatani, Minhua Wu, Shiva Sundaram, Nikko Strom, Bjorn Hoffmeister
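The distinctive step in the abstract above is generating one front-end output per trained microphone-array geometry and selecting the best one. A rough sketch, with random weights standing in for trained spatial filters and mean output energy standing in for whatever selection criterion the trained system would actually use:

```python
import numpy as np

rng = np.random.default_rng(2)
T, C, F = 20, 2, 64        # frames, mic channels, frequency bins (all illustrative)

# Multi-channel input: complex STFT per channel.
stft = rng.standard_normal((T, C, F)) + 1j * rng.standard_normal((T, C, F))

# One set of spatial-filter weights per trained array geometry.
N_GEOMETRIES = 4
geometry_weights = rng.standard_normal((N_GEOMETRIES, C, F)) \
                 + 1j * rng.standard_normal((N_GEOMETRIES, C, F))

def front_end(stft, w):
    """Apply one geometry's spatial filter (beamformer-like channel combination)
    and return log-power features, analogous to beamformed features."""
    beamformed = (stft * w[None]).sum(1)            # combine channels
    return np.log(np.abs(beamformed) ** 2 + 1e-8)   # (T, F) feature vectors

# Generate one output per geometry and keep the best-scoring one. The "score"
# here is mean output energy, purely as a placeholder for a learned criterion.
outputs = [front_end(stft, w) for w in geometry_weights]
scores = [o.mean() for o in outputs]
best = outputs[int(np.argmax(scores))]
print("selected feature shape:", best.shape)
```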
  • Publication number: 20220366898
    Abstract: Systems and methods are provided for training a machine learning model to learn speech representations. Labeled speech data, or both labeled and unlabeled data sets, are applied to a feature extractor of a machine learning model to generate latent speech representations. The latent speech representations are applied to a quantizer to generate quantized latent speech representations and to a transformer context network to generate contextual representations. Each contextual representation is aligned with a phoneme label to generate phonetically aware contextual representations. Quantized latent representations are likewise aligned with phoneme labels to generate phonetically aware latent speech representations.
    Type: Application
    Filed: May 14, 2021
    Publication date: November 17, 2022
    Inventors: Yao Qian, Yu Wu, Kenichi Kumatani, Shujie Liu, Furu Wei, Nanshan Zeng, Xuedong David Huang, Chengyi Wang
  • Patent number: 11495215
    Abstract: Techniques for speech processing using a deep neural network (DNN) based acoustic model front-end are described. A new modeling approach directly models multi-channel audio data received from a microphone array using a first model (e.g., multi-geometry/multi-channel DNN) that includes a frequency aligned network (FAN) architecture. Thus, the first model may perform spatial filtering to generate a first feature vector by processing individual frequency bins separately, such that multiple frequency bins are not combined. The first feature vector may be used similarly to beamformed features generated by an acoustic beamformer. A second model (e.g., feature extraction DNN) processes the first feature vector and transforms it to a second feature vector having a lower dimensional representation. A third model (e.g., classification DNN) processes the second feature vector to perform acoustic unit classification and generate text data. The DNN front-end enables improved performance despite a reduction in microphones. A brief illustrative code sketch follows this entry.
    Type: Grant
    Filed: December 11, 2019
    Date of Patent: November 8, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Minhua Wu, Shiva Sundaram, Tae Jin Park, Kenichi Kumatani
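The frequency aligned network (FAN) property described above — spatial filtering that processes each frequency bin separately, never mixing bins — reduces to applying an independent per-bin filter across channels. A toy sketch with invented sizes and random weights in place of trained ones:

```python
import numpy as np

rng = np.random.default_rng(3)
T, C, F = 20, 2, 64   # frames, channels, frequency bins (illustrative sizes)

stft = rng.standard_normal((T, C, F)) + 1j * rng.standard_normal((T, C, F))

# Frequency-aligned weights: an independent filter per frequency bin, so
# bins are processed separately and never combined across frequency.
fan_weights = rng.standard_normal((C, F)) + 1j * rng.standard_normal((C, F))

# Spatial filtering within each bin: sum over channels only; the frequency
# axis is untouched, which is the FAN property the abstract describes.
filtered = (stft * fan_weights[None]).sum(axis=1)     # (T, F)
features = np.log(np.abs(filtered) ** 2 + 1e-8)       # first feature vector
print(features.shape)
```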
  • Patent number: 11475881
    Abstract: Techniques for speech processing using a deep neural network (DNN) based acoustic model front-end are described. A new modeling approach directly models multi-channel audio data received from a microphone array using a first model (e.g., multi-channel DNN) that takes in raw signals and produces a first feature vector that may be used similarly to beamformed features generated by an acoustic beamformer. A second model (e.g., feature extraction DNN) processes the first feature vector and transforms it to a second feature vector having a lower dimensional representation. A third model (e.g., classification DNN) processes the second feature vector to perform acoustic unit classification and generate text data. These three models may be jointly optimized for speech processing (as opposed to individually optimized for signal enhancement), enabling improved performance despite a reduction in microphones and a reduction in bandwidth consumption during real-time processing. A brief illustrative code sketch follows this entry.
    Type: Grant
    Filed: July 17, 2020
    Date of Patent: October 18, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Arindam Mandal, Kenichi Kumatani, Nikko Strom, Minhua Wu, Shiva Sundaram, Bjorn Hoffmeister, Jeremie Lecomte
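The three-model chain in the abstract above (multi-channel DNN → feature-extraction DNN → classification DNN) can be shown schematically. The sketch below uses single tanh layers with random weights as stand-ins for each trained DNN, and all dimensions are invented; in the described system the three stages would be optimized jointly rather than initialized at random.

```python
import numpy as np

rng = np.random.default_rng(4)

def mlp(d_in, d_out):
    """A one-layer stand-in for each DNN stage (real weights would be trained jointly)."""
    W = rng.standard_normal((d_in, d_out)) * 0.1
    return lambda x: np.tanh(x @ W)

RAW, FEAT, LOW, UNITS = 256, 128, 40, 120   # illustrative dimensions

multi_channel_dnn = mlp(RAW, FEAT)   # raw multi-channel signals -> beamformer-like features
feature_dnn = mlp(FEAT, LOW)         # transform to a lower-dimensional representation
classifier_dnn = mlp(LOW, UNITS)     # acoustic-unit classification

x = rng.standard_normal(RAW)         # stacked raw samples from all microphones
logits = classifier_dnn(feature_dnn(multi_channel_dnn(x)))
print(logits.shape)                  # (120,) — scores over acoustic units
```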
  • Publication number: 20220036178
    Abstract: The disclosure herein describes training a global model based on a plurality of data sets. The global model is applied to each data set of the plurality of data sets and a plurality of gradients is generated based on that application. At least one gradient quality metric is determined for each gradient of the plurality of gradients. Based on the determined gradient quality metrics of the plurality of gradients, a plurality of weight factors is calculated. The plurality of gradients is transformed into a plurality of weighted gradients based on the calculated plurality of weight factors and a global gradient is generated based on the plurality of weighted gradients. The global model is updated based on the global gradient, wherein the updated global model, when applied to a data set, performs a task based on the data set and provides model output based on performing the task. A brief illustrative code sketch follows this entry.
    Type: Application
    Filed: July 31, 2020
    Publication date: February 3, 2022
    Inventors: Dimitrios B. Dimitriadis, Kenichi Kumatani, Robert Peter Gmyr, Masaki Itagaki, Yashesh Gaur, Nanshan Zeng, Xuedong Huang
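A compact sketch of the weighting scheme this abstract describes: per-data-set gradients are scored by a quality metric, the metrics become normalized weight factors, and the weighted sum forms the global gradient used to update the model. The metric used below (inverse gradient norm) and the learning rate are illustrative assumptions; the abstract leaves the metric open.

```python
import numpy as np

rng = np.random.default_rng(5)
D, N_CLIENTS = 10, 3

model = rng.standard_normal(D)                     # global model parameters

# One gradient per data set, as if the global model were applied to each.
gradients = [rng.standard_normal(D) for _ in range(N_CLIENTS)]

# A gradient quality metric per gradient. Inverse gradient norm (smaller,
# steadier gradients weigh more) is purely an illustrative choice.
quality = np.array([1.0 / (np.linalg.norm(g) + 1e-8) for g in gradients])

# Turn quality metrics into normalized weight factors.
weights = quality / quality.sum()

# Weighted gradients -> single global gradient -> global model update.
global_gradient = sum(w * g for w, g in zip(weights, gradients))
model -= 0.01 * global_gradient                    # gradient step, lr assumed
print(weights, np.linalg.norm(global_gradient))
```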
  • Patent number: 10847137
    Abstract: An approach to speech recognition, and in particular trigger word detection, replaces fixed feature extraction from waveform samples with a neural network (NN). For example, rather than computing Log Frequency Band Energies (LFBEs), a convolutional neural network is used. In some implementations, this NN waveform processing is combined with a trained secondary classifier that makes use of phonetic segmentation of a possible trigger word occurrence. A brief illustrative code sketch follows this entry.
    Type: Grant
    Filed: December 12, 2017
    Date of Patent: November 24, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Arindam Mandal, Nikko Strom, Kenichi Kumatani, Sankaran Panchapagesan
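Replacing hand-crafted Log Frequency Band Energies with a learned front-end amounts to running a strided 1-D convolution directly over waveform samples and log-compressing the result. The filter count, kernel length, and stride below are plausible guesses (roughly 25 ms windows with a 10 ms hop at 16 kHz), not values taken from the patent:

```python
import numpy as np

rng = np.random.default_rng(6)

# 1-D convolution over raw waveform samples, standing in for the learned
# front-end that replaces hand-crafted Log Frequency Band Energies (LFBEs).
N_FILTERS, KERNEL, STRIDE = 8, 400, 160   # ~25 ms kernel, 10 ms hop at 16 kHz
filters = rng.standard_normal((N_FILTERS, KERNEL)) * 0.01

def conv_features(waveform):
    """Strided convolution plus log compression: LFBE-like, but learned."""
    frames = []
    for start in range(0, len(waveform) - KERNEL + 1, STRIDE):
        window = waveform[start:start + KERNEL]
        frames.append(filters @ window)     # one learned "energy" per filter
    return np.log(np.maximum(np.square(frames), 1e-8))

waveform = rng.standard_normal(16000)       # one second of fake 16 kHz audio
feats = conv_features(waveform)
print(feats.shape)                          # (frames, 8)
```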
  • Publication number: 20200349928
    Abstract: Techniques for speech processing using a deep neural network (DNN) based acoustic model front-end are described. A new modeling approach directly models multi-channel audio data received from a microphone array using a first model (e.g., multi-channel DNN) that takes in raw signals and produces a first feature vector that may be used similarly to beamformed features generated by an acoustic beamformer. A second model (e.g., feature extraction DNN) processes the first feature vector and transforms it to a second feature vector having a lower dimensional representation. A third model (e.g., classification DNN) processes the second feature vector to perform acoustic unit classification and generate text data. These three models may be jointly optimized for speech processing (as opposed to individually optimized for signal enhancement), enabling improved performance despite a reduction in microphones and a reduction in bandwidth consumption during real-time processing.
    Type: Application
    Filed: July 17, 2020
    Publication date: November 5, 2020
    Inventors: Arindam Mandal, Kenichi Kumatani, Nikko Strom, Minhua Wu, Shiva Sundaram, Bjorn Hoffmeister, Jeremie Lecomte
  • Patent number: 10726830
    Abstract: Techniques for speech processing using a deep neural network (DNN) based acoustic model front-end are described. A new modeling approach directly models multi-channel audio data received from a microphone array using a first model (e.g., multi-channel DNN) that takes in raw signals and produces a first feature vector that may be used similarly to beamformed features generated by an acoustic beamformer. A second model (e.g., feature extraction DNN) processes the first feature vector and transforms it to a second feature vector having a lower dimensional representation. A third model (e.g., classification DNN) processes the second feature vector to perform acoustic unit classification and generate text data. These three models may be jointly optimized for speech processing (as opposed to individually optimized for signal enhancement), enabling improved performance despite a reduction in microphones and a reduction in bandwidth consumption during real-time processing.
    Type: Grant
    Filed: September 27, 2018
    Date of Patent: July 28, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Arindam Mandal, Kenichi Kumatani, Nikko Strom, Minhua Wu, Shiva Sundaram, Bjorn Hoffmeister, Jeremie Lecomte
  • Patent number: 10679621
    Abstract: Systems and methods for utilizing microphone array information for acoustic modeling are disclosed. Audio data may be received from a device having a microphone array configuration. Microphone configuration data may also be received that indicates the configuration of the microphone array. The microphone configuration data may be utilized as an input vector to an acoustic model, along with the audio data, to generate phoneme data. Additionally, the microphone configuration data may be utilized to train and/or generate acoustic models, to select an acoustic model with which to perform speech recognition, and/or to improve trigger sound detection.
    Type: Grant
    Filed: March 21, 2018
    Date of Patent: June 9, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Shiva Kumar Sundaram, Minhua Wu, Anirudh Raju, Spyridon Matsoukas, Arindam Mandal, Kenichi Kumatani
  • Patent number: 9817881
    Abstract: A method, apparatus, and tangible computer readable medium for processing a Hidden Markov Model (HMM) structure are disclosed herein. For example, the method includes receiving Hidden Markov Model (HMM) information from an external system. The method also includes processing back pointer data and first HMM state scores for one or more NULL states in the HMM information. Second HMM state scores are processed for one or more non-NULL states in the HMM information based on at least one predecessor state. Further, the method includes transferring the second HMM state scores to the external system. A brief illustrative code sketch follows this entry.
    Type: Grant
    Filed: October 16, 2013
    Date of Patent: November 14, 2017
    Assignee: Cypress Semiconductor Corporation
    Inventors: Ojas A. Bapat, Richard M. Fastow, Jens Olson, Kenichi Kumatani
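The processing order this abstract describes — back pointers and scores for NULL (non-emitting) states first, then scores for non-NULL states from their predecessors — can be illustrated with a toy three-state fragment. Transition and observation scores are invented, and a real decoder would also handle chained NULL states and pruning; this is only a sketch of the two-phase update.

```python
import numpy as np

NEG_INF = -1e30
NULL_STATES, EMITTING_STATES = [0], [1, 2]

# Toy HMM fragment: state 0 is a non-emitting NULL entry state; states 1-2
# emit observations. Log-domain transition scores, all values illustrative.
trans = np.array([
    [NEG_INF,    -0.1,  NEG_INF],   # from NULL:    enter state 1
    [NEG_INF,    -0.7,  -0.7   ],   # from state 1: self-loop or advance
    [NEG_INF,  NEG_INF, -0.3   ],   # from state 2: self-loop
])

def frame_update(prev, obs):
    """One frame, in the order the abstract gives: back pointers and scores
    for NULL states first (no acoustic score), then scores for non-NULL
    states from their best predecessor, adding the observation score."""
    n = len(prev)
    scores = np.full(n, NEG_INF)
    backptr = np.zeros(n, dtype=int)
    for j in NULL_STATES:                      # phase 1: NULL states
        cand = prev + trans[:, j]
        backptr[j] = int(np.argmax(cand))
        scores[j] = cand[backptr[j]]
    merged = np.maximum(prev, scores)          # fresh NULL scores usable below
    for j in EMITTING_STATES:                  # phase 2: non-NULL states
        cand = merged + trans[:, j]
        backptr[j] = int(np.argmax(cand))
        scores[j] = cand[backptr[j]] + obs[j]  # add acoustic observation score
    return scores, backptr

scores = np.array([0.0, NEG_INF, NEG_INF])     # start in the NULL state
for obs in (np.array([0.0, -1.2, -2.0]), np.array([0.0, -0.8, -0.4])):
    scores, backptr = frame_update(scores, obs)
print(scores, backptr)
```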
  • Patent number: 9373321
    Abstract: A method, system and tangible computer readable medium for generating one or more wake-up words are provided. For example, the method can include receiving a text representation of the one or more wake-up words. A strength of the text representation of the one or more wake-up words can be determined based on one or more static measures. The method can also include receiving an audio representation of the one or more wake-up words. A strength of the audio representation of the one or more wake-up words can be determined based on one or more dynamic measures. Feedback on the one or more wake-up words is provided (e.g., to an end user) based on the strengths of the text and audio representations. A brief illustrative code sketch follows this entry.
    Type: Grant
    Filed: December 2, 2013
    Date of Patent: June 21, 2016
    Assignee: Cypress Semiconductor Corporation
    Inventors: Ojas Ashok Bapat, Kenichi Kumatani
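A sketch of the feedback loop this abstract describes, with deliberately simple stand-ins: static measures on the text (length, character diversity) and dynamic measures on a spoken example (duration, energy variability) are combined into strength scores that drive the user-facing feedback. The actual measures and thresholds are not specified by the abstract; everything below is an illustrative assumption.

```python
import numpy as np

def static_strength(text):
    """Static measures on the text form: length and character diversity
    stand in for the unspecified measures."""
    length_score = min(sum(len(w) for w in text.split()) / 10.0, 1.0)
    diversity = len(set(text.replace(" ", ""))) / max(len(text), 1)
    return 0.5 * length_score + 0.5 * diversity

def dynamic_strength(audio, sample_rate=16000):
    """Dynamic measures on a spoken example: duration and energy variability
    as illustrative stand-ins for confusability-style measures."""
    duration_score = min(len(audio) / sample_rate, 1.0)   # >= 1 s is "long enough"
    energy = audio ** 2
    variability = min(float(energy.std() / (energy.mean() + 1e-8)), 2.0) / 2.0
    return 0.5 * duration_score + 0.5 * variability

def feedback(text, audio):
    """Combine both strengths into end-user feedback on the wake-up word."""
    s, d = static_strength(text), dynamic_strength(audio)
    verdict = ("strong" if min(s, d) > 0.5
               else "weak - consider a longer, more distinctive phrase")
    return f"text={s:.2f} audio={d:.2f}: {verdict}"

rng = np.random.default_rng(7)
print(feedback("hello computer", rng.standard_normal(24000)))  # 1.5 s fake audio
```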
  • Publication number: 20150154953
    Abstract: A method, system and tangible computer readable medium for generating one or more wake-up words are provided. For example, the method can include receiving a text representation of the one or more wake-up words. A strength of the text representation of the one or more wake-up words can be determined based on one or more static measures. The method can also include receiving an audio representation of the one or more wake-up words. A strength of the audio representation of the one or more wake-up words can be determined based on one or more dynamic measures. Feedback on the one or more wake-up words is provided (e.g., to an end user) based on the strengths of the text and audio representations.
    Type: Application
    Filed: December 2, 2013
    Publication date: June 4, 2015
    Applicant: Spansion LLC
    Inventors: Ojas A. Bapat, Kenichi Kumatani
  • Publication number: 20150106405
    Abstract: A method, apparatus, and tangible computer readable medium for processing a Hidden Markov Model (HMM) structure are disclosed herein. For example, the method includes receiving Hidden Markov Model (HMM) information from an external system. The method also includes processing back pointer data and first HMM state scores for one or more NULL states in the HMM information. Second HMM state scores are processed for one or more non-NULL states in the HMM information based on at least one predecessor state. Further, the method includes transferring the second HMM state scores to the external system.
    Type: Application
    Filed: October 16, 2013
    Publication date: April 16, 2015
    Applicant: Spansion LLC
    Inventors: Ojas Bapat, Richard Fastow, Jens Olson, Kenichi Kumatani