Patents by Inventor Kenichi Kumatani
Kenichi Kumatani has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11935525
Abstract: Systems and methods for utilizing microphone array information for acoustic modeling are disclosed. Audio data may be received from a device having a microphone array configuration. Microphone configuration data may also be received that indicates the configuration of the microphone array. The microphone configuration data may be utilized as an input vector to an acoustic model, along with the audio data, to generate phoneme data. Additionally, the microphone configuration data may be utilized to train and/or generate acoustic models, select an acoustic model to perform speech recognition with, and/or to improve trigger sound detection.
Type: Grant
Filed: June 8, 2020
Date of Patent: March 19, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Shiva Kumar Sundaram, Minhua Wu, Anirudh Raju, Spyridon Matsoukas, Arindam Mandal, Kenichi Kumatani
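The core idea above, using the microphone configuration as an extra input vector alongside the audio features, can be sketched as follows. This is a minimal illustration, not the patented implementation: the single-layer softmax classifier, the layer sizes, and the contents of the config vector (mic count, spacing, array-shape flag) are all assumptions.

```python
# Sketch: append a microphone-array configuration vector to every audio
# frame before classifying frames into phonemes.
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def acoustic_model(frames, mic_config, w, b):
    """Concatenate the mic-config vector onto each frame, then classify."""
    cfg = np.tile(mic_config, (frames.shape[0], 1))   # (T, C)
    x = np.concatenate([frames, cfg], axis=1)         # (T, F + C)
    return softmax(x @ w + b)                         # (T, n_phonemes)

T, F, C, P = 5, 40, 3, 10                 # frames, feature dim, config dim, phonemes
frames = rng.normal(size=(T, F))
mic_config = np.array([6.0, 0.035, 1.0])  # e.g. #mics, spacing (m), circular flag
w = rng.normal(size=(F + C, P)) * 0.1
b = np.zeros(P)

posteriors = acoustic_model(frames, mic_config, w, b)
print(posteriors.shape)                   # one posterior row per frame
```

Because the configuration enters as an ordinary input vector, the same trained model can serve devices with different microphone arrays.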
-
Publication number: 20230368782
Abstract: Systems and methods are provided for training a machine learning model to learn speech representations. Labeled speech data, or both labeled and unlabeled data sets, are applied to a feature extractor of a machine learning model to generate latent speech representations. The latent speech representations are applied to a quantizer to generate quantized latent speech representations and to a transformer context network to generate contextual representations. Each contextual representation included in the contextual representations is aligned with a phoneme label to generate phonetically aware contextual representations. Quantized latent representations are aligned with phoneme labels to generate phonetically aware latent speech representations.
Type: Application
Filed: July 3, 2023
Publication date: November 16, 2023
Inventors: Yao QIAN, Yu WU, Kenichi KUMATANI, Shujie LIU, Furu WEI, Nanshan ZENG, Xuedong David HUANG, Chengyi WANG
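Two of the operations named in this abstract, quantizing latent representations against a codebook and aligning each representation with a frame-level phoneme label, can be illustrated in a few lines. This is a toy sketch: the nearest-neighbour quantizer, the codebook size, and the example phoneme sequence are assumptions, not the patented training procedure.

```python
# Sketch: quantize latent speech representations and pair ("align") each
# one with its frame-level phoneme label.
import numpy as np

rng = np.random.default_rng(1)

def quantize(latents, codebook):
    """Replace each latent vector with its nearest codebook entry."""
    d = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (T, K)
    idx = d.argmin(axis=1)
    return codebook[idx], idx

T, D, K = 6, 8, 4
latents = rng.normal(size=(T, D))            # output of the feature extractor
codebook = rng.normal(size=(K, D))
phonemes = ["s", "s", "ih", "ih", "t", "t"]  # frame-level phoneme labels

quantized, idx = quantize(latents, codebook)
# "Phonetically aware" pairing: each quantized code with its phoneme label
aligned = list(zip(phonemes, idx.tolist()))
print(aligned)
```

In the described system the same pairing is applied to the transformer's contextual representations, which is what makes the learned representations phonetically aware.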
-
Patent number: 11735171
Abstract: Systems and methods are provided for training a machine learning model to learn speech representations. Labeled speech data, or both labeled and unlabeled data sets, are applied to a feature extractor of a machine learning model to generate latent speech representations. The latent speech representations are applied to a quantizer to generate quantized latent speech representations and to a transformer context network to generate contextual representations. Each contextual representation included in the contextual representations is aligned with a phoneme label to generate phonetically aware contextual representations. Quantized latent representations are aligned with phoneme labels to generate phonetically aware latent speech representations.
Type: Grant
Filed: May 14, 2021
Date of Patent: August 22, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Yao Qian, Yu Wu, Kenichi Kumatani, Shujie Liu, Furu Wei, Nanshan Zeng, Xuedong David Huang, Chengyi Wang
-
Patent number: 11574628
Abstract: Techniques for speech processing using a deep neural network (DNN) based acoustic model front-end are described. A new modeling approach directly models multi-channel audio data received from a microphone array using a first model (e.g., multi-geometry/multi-channel DNN) that is trained using a plurality of microphone array geometries. Thus, the first model may receive a variable number of microphone channels, generate multiple outputs using multiple microphone array geometries, and select the best output as a first feature vector that may be used similarly to beamformed features generated by an acoustic beamformer. A second model (e.g., feature extraction DNN) processes the first feature vector and transforms it to a second feature vector having a lower dimensional representation. A third model (e.g., classification DNN) processes the second feature vector to perform acoustic unit classification and generate text data. The DNN front-end enables improved performance despite a reduction in microphones.
Type: Grant
Filed: March 28, 2019
Date of Patent: February 7, 2023
Assignee: Amazon Technologies, Inc.
Inventors: Kenichi Kumatani, Minhua Wu, Shiva Sundaram, Nikko Strom, Bjorn Hoffmeister
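The three-model chain this abstract describes (multi-channel front-end, dimension-reducing feature extractor, acoustic-unit classifier) can be sketched as three small matrix stages. All layer sizes, the ReLU activations, and the random weights are illustrative assumptions; the point is only the shape of the pipeline.

```python
# Sketch: three-stage DNN acoustic front-end.
import numpy as np

rng = np.random.default_rng(2)
relu = lambda x: np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

n_ch, n_fft, d1, d2, n_units = 4, 64, 128, 32, 20
multi_channel = rng.normal(size=(n_ch * n_fft, d1)) * 0.05   # model 1
feature_dnn = rng.normal(size=(d1, d2)) * 0.05               # model 2
classifier = rng.normal(size=(d2, n_units)) * 0.05           # model 3

audio = rng.normal(size=(n_ch * n_fft,))   # one stacked multi-channel frame
f1 = relu(audio @ multi_channel)           # beamformer-like feature vector
f2 = relu(f1 @ feature_dnn)                # lower-dimensional (32 < 128)
units = softmax(f2 @ classifier)           # acoustic-unit posteriors
print(f1.shape, f2.shape, units.shape)
```

Because the first stage is itself a trained network rather than a signal-processing beamformer, all three stages can share one training objective.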
-
Publication number: 20220366898
Abstract: Systems and methods are provided for training a machine learning model to learn speech representations. Labeled speech data, or both labeled and unlabeled data sets, are applied to a feature extractor of a machine learning model to generate latent speech representations. The latent speech representations are applied to a quantizer to generate quantized latent speech representations and to a transformer context network to generate contextual representations. Each contextual representation included in the contextual representations is aligned with a phoneme label to generate phonetically aware contextual representations. Quantized latent representations are aligned with phoneme labels to generate phonetically aware latent speech representations.
Type: Application
Filed: May 14, 2021
Publication date: November 17, 2022
Inventors: Yao QIAN, Yu WU, Kenichi KUMATANI, Shujie LIU, Furu WEI, Nanshan ZENG, Xuedong David HUANG, Chengyi WANG
-
Patent number: 11495215
Abstract: Techniques for speech processing using a deep neural network (DNN) based acoustic model front-end are described. A new modeling approach directly models multi-channel audio data received from a microphone array using a first model (e.g., multi-geometry/multi-channel DNN) that includes a frequency aligned network (FAN) architecture. Thus, the first model may perform spatial filtering to generate a first feature vector by processing individual frequency bins separately, such that multiple frequency bins are not combined. The first feature vector may be used similarly to beamformed features generated by an acoustic beamformer. A second model (e.g., feature extraction DNN) processes the first feature vector and transforms it to a second feature vector having a lower dimensional representation. A third model (e.g., classification DNN) processes the second feature vector to perform acoustic unit classification and generate text data. The DNN front-end enables improved performance despite a reduction in microphones.
Type: Grant
Filed: December 11, 2019
Date of Patent: November 8, 2022
Assignee: Amazon Technologies, Inc.
Inventors: Minhua Wu, Shiva Sundaram, Tae Jin Park, Kenichi Kumatani
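The frequency-aligned property, spatial filtering that combines channels within each frequency bin but never mixes bins, can be shown with one complex weight vector per bin. The complex weights, channel count, and bin count here are illustrative assumptions.

```python
# Sketch: per-frequency-bin spatial filtering (frequency-aligned network idea).
import numpy as np

rng = np.random.default_rng(3)

def fan_filter(spec, weights):
    """spec: (n_ch, n_bins) complex STFT frame; weights: (n_ch, n_bins).
    Bin f is filtered only by weights[:, f] -- there are no cross-bin terms."""
    return (weights.conj() * spec).sum(axis=0)   # (n_bins,)

n_ch, n_bins = 4, 8
spec = rng.normal(size=(n_ch, n_bins)) + 1j * rng.normal(size=(n_ch, n_bins))
weights = rng.normal(size=(n_ch, n_bins)) + 1j * rng.normal(size=(n_ch, n_bins))

out = fan_filter(spec, weights)
print(out.shape)   # one filtered value per frequency bin
```

This mirrors how a classical frequency-domain beamformer also applies an independent filter per bin, which is why the output can stand in for beamformed features.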
-
Patent number: 11475881
Abstract: Techniques for speech processing using a deep neural network (DNN) based acoustic model front-end are described. A new modeling approach directly models multi-channel audio data received from a microphone array using a first model (e.g., multi-channel DNN) that takes in raw signals and produces a first feature vector that may be used similarly to beamformed features generated by an acoustic beamformer. A second model (e.g., feature extraction DNN) processes the first feature vector and transforms it to a second feature vector having a lower dimensional representation. A third model (e.g., classification DNN) processes the second feature vector to perform acoustic unit classification and generate text data. These three models may be jointly optimized for speech processing (as opposed to individually optimized for signal enhancement), enabling improved performance despite a reduction in microphones and a reduction in bandwidth consumption during real-time processing.
Type: Grant
Filed: July 17, 2020
Date of Patent: October 18, 2022
Assignee: Amazon Technologies, Inc.
Inventors: Arindam Mandal, Kenichi Kumatani, Nikko Strom, Minhua Wu, Shiva Sundaram, Bjorn Hoffmeister, Jeremie Lecomte
-
Publication number: 20220036178
Abstract: The disclosure herein describes training a global model based on a plurality of data sets. The global model is applied to each data set of the plurality of data sets and a plurality of gradients is generated based on that application. At least one gradient quality metric is determined for each gradient of the plurality of gradients. Based on the determined gradient quality metrics of the plurality of gradients, a plurality of weight factors is calculated. The plurality of gradients is transformed into a plurality of weighted gradients based on the calculated plurality of weight factors and a global gradient is generated based on the plurality of weighted gradients. The global model is updated based on the global gradient, wherein the updated global model, when applied to a data set, performs a task based on the data set and provides model output based on performing the task.
Type: Application
Filed: July 31, 2020
Publication date: February 3, 2022
Inventors: Dimitrios B. DIMITRIADIS, Kenichi KUMATANI, Robert Peter GMYR, Masaki ITAGAKI, Yashesh GAUR, Nanshan ZENG, Xuedong HUANG
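The gradient-weighting scheme described above can be sketched directly: score each per-dataset gradient with a quality metric, turn the scores into weight factors, and sum the weighted gradients into one global gradient. The inverse-norm quality metric and normalized weighting used here are assumptions chosen for illustration; the publication leaves the metric open.

```python
# Sketch: quality-weighted aggregation of per-dataset gradients.
import numpy as np

def aggregate(gradients):
    norms = np.array([np.linalg.norm(g) for g in gradients])
    quality = 1.0 / (1.0 + norms)        # one quality metric per gradient
    weights = quality / quality.sum()    # weight factors sum to 1
    global_grad = sum(w * g for w, g in zip(weights, gradients))
    return global_grad, weights

grads = [np.array([1.0, 2.0]),    # moderate gradient
         np.array([10.0, 0.0]),   # large-norm (possibly noisy) gradient
         np.array([0.5, 0.5])]    # small, well-behaved gradient
global_grad, weights = aggregate(grads)
print(weights.round(3), global_grad.round(3))
```

With this metric the large-norm gradient is down-weighted, so a single noisy data set cannot dominate the global model update.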
-
Patent number: 10847137
Abstract: An approach to speech recognition, and in particular trigger word detection, replaces fixed feature extraction from waveform samples with a neural network (NN). For example, rather than computing Log Frequency Band Energies (LFBEs), a convolutional neural network is used. In some implementations, this NN waveform processing is combined with a trained secondary classification that makes use of phonetic segmentation of a possible trigger word occurrence.
Type: Grant
Filed: December 12, 2017
Date of Patent: November 24, 2020
Assignee: Amazon Technologies, Inc.
Inventors: Arindam Mandal, Nikko Strom, Kenichi Kumatani, Sankaran Panchapagesan
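Replacing fixed LFBE extraction with a learned front-end amounts to convolving the raw waveform with trainable filters instead of a fixed filter bank. The filter length (25 ms), stride (10 ms hop), filter count, and log-energy compression below are conventional choices assumed for illustration, not taken from the patent.

```python
# Sketch: learned 1-D convolutional features over raw waveform samples,
# standing in for fixed log filter-bank energies (LFBEs).
import numpy as np

rng = np.random.default_rng(4)

def conv1d_features(wave, filters, stride):
    """Apply each learned filter with the given stride; log-compress energy."""
    flen = filters.shape[1]
    frames = np.stack([wave[i:i + flen]
                       for i in range(0, len(wave) - flen + 1, stride)])
    energy = (frames @ filters.T) ** 2      # (n_frames, n_filters)
    return np.log(energy + 1e-6)

sr = 16000
wave = rng.normal(size=sr // 10)            # 100 ms of audio at 16 kHz
filters = rng.normal(size=(16, 400)) * 0.05 # 16 learnable filters of 25 ms
feats = conv1d_features(wave, filters, stride=160)  # 10 ms hop
print(feats.shape)
```

Because the filters are parameters rather than a fixed mel filter bank, they can be trained jointly with the downstream trigger-word classifier.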
-
Publication number: 20200349928
Abstract: Techniques for speech processing using a deep neural network (DNN) based acoustic model front-end are described. A new modeling approach directly models multi-channel audio data received from a microphone array using a first model (e.g., multi-channel DNN) that takes in raw signals and produces a first feature vector that may be used similarly to beamformed features generated by an acoustic beamformer. A second model (e.g., feature extraction DNN) processes the first feature vector and transforms it to a second feature vector having a lower dimensional representation. A third model (e.g., classification DNN) processes the second feature vector to perform acoustic unit classification and generate text data. These three models may be jointly optimized for speech processing (as opposed to individually optimized for signal enhancement), enabling improved performance despite a reduction in microphones and a reduction in bandwidth consumption during real-time processing.
Type: Application
Filed: July 17, 2020
Publication date: November 5, 2020
Inventors: Arindam Mandal, Kenichi Kumatani, Nikko Strom, Minhua Wu, Shiva Sundaram, Bjorn Hoffmeister, Jeremie Lecomte
-
Patent number: 10726830
Abstract: Techniques for speech processing using a deep neural network (DNN) based acoustic model front-end are described. A new modeling approach directly models multi-channel audio data received from a microphone array using a first model (e.g., multi-channel DNN) that takes in raw signals and produces a first feature vector that may be used similarly to beamformed features generated by an acoustic beamformer. A second model (e.g., feature extraction DNN) processes the first feature vector and transforms it to a second feature vector having a lower dimensional representation. A third model (e.g., classification DNN) processes the second feature vector to perform acoustic unit classification and generate text data. These three models may be jointly optimized for speech processing (as opposed to individually optimized for signal enhancement), enabling improved performance despite a reduction in microphones and a reduction in bandwidth consumption during real-time processing.
Type: Grant
Filed: September 27, 2018
Date of Patent: July 28, 2020
Assignee: Amazon Technologies, Inc.
Inventors: Arindam Mandal, Kenichi Kumatani, Nikko Strom, Minhua Wu, Shiva Sundaram, Bjorn Hoffmeister, Jeremie Lecomte
-
Patent number: 10679621
Abstract: Systems and methods for utilizing microphone array information for acoustic modeling are disclosed. Audio data may be received from a device having a microphone array configuration. Microphone configuration data may also be received that indicates the configuration of the microphone array. The microphone configuration data may be utilized as an input vector to an acoustic model, along with the audio data, to generate phoneme data. Additionally, the microphone configuration data may be utilized to train and/or generate acoustic models, select an acoustic model to perform speech recognition with, and/or to improve trigger sound detection.
Type: Grant
Filed: March 21, 2018
Date of Patent: June 9, 2020
Assignee: Amazon Technologies, Inc.
Inventors: Shiva Kumar Sundaram, Minhua Wu, Anirudh Raju, Spyridon Matsoukas, Arindam Mandal, Kenichi Kumatani
-
Patent number: 9817881
Abstract: A method, apparatus, and tangible computer readable medium for processing a Hidden Markov Model (HMM) structure are disclosed herein. For example, the method includes receiving Hidden Markov Model (HMM) information from an external system. The method also includes processing back pointer data and first HMM state scores for one or more NULL states in the HMM information. Second HMM state scores are processed for one or more non-NULL states in the HMM information based on at least one predecessor state. Further, the method includes transferring the second HMM state scores to the external system.
Type: Grant
Filed: October 16, 2013
Date of Patent: November 14, 2017
Assignee: Cypress Semiconductor Corporation
Inventors: Ojas A. Bapat, Richard M. Fastow, Jens Olson, Kenichi Kumatani
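The two-pass ordering the abstract names, score NULL (non-emitting) states first, then emitting states, can be sketched as a Viterbi-style update. The tiny topology, the log-score values, and the max-over-predecessors rule are assumptions for illustration only; the patent covers a hardware/software division of this work, not a particular topology.

```python
# Sketch: NULL states get scores and back pointers first (no observation
# score), then non-NULL states add this frame's observation score.
NEG_INF = float("-inf")

def update_hmm(states, preds, obs_scores, prev):
    """states: list of (name, is_null); preds: predecessor names per state."""
    score = dict(prev)
    backptr = {}
    # Pass 1: NULL states -- propagate the best predecessor score only.
    for name, is_null in states:
        if is_null:
            best = max(preds[name], key=lambda p: score.get(p, NEG_INF))
            score[name] = score.get(best, NEG_INF)
            backptr[name] = best
    # Pass 2: non-NULL states -- best predecessor score (frozen after
    # pass 1) plus this frame's observation score.
    frozen = dict(score)
    for name, is_null in states:
        if not is_null:
            best = max(preds[name], key=lambda p: frozen.get(p, NEG_INF))
            score[name] = frozen.get(best, NEG_INF) + obs_scores[name]
            backptr[name] = best
    return score, backptr

states = [("entry", True), ("a", False), ("b", False)]
preds = {"entry": ["start"], "a": ["entry", "a"], "b": ["a", "b"]}
prev = {"start": 0.0, "a": -5.0, "b": -9.0}
obs = {"a": -1.0, "b": -2.0}
score, backptr = update_hmm(states, preds, obs, prev)
print(score, backptr)
```

Handling NULL states first lets their scores flow into emitting states within the same frame, which is why the abstract processes them in that order.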
-
Patent number: 9373321
Abstract: A method, system and tangible computer readable medium for generating one or more wake-up words are provided. For example, the method can include receiving a text representation of the one or more wake-up words. A strength of the text representation of the one or more wake-up words can be determined based on one or more static measures. The method can also include receiving an audio representation of the one or more wake-up words. A strength of the audio representation of the one or more wake-up words can be determined based on one or more dynamic measures. Feedback on the one or more wake-up words is provided (e.g., to an end user) based on the strengths of the text and audio representations.
Type: Grant
Filed: December 2, 2013
Date of Patent: June 21, 2016
Assignee: Cypress Semiconductor Corporation
Inventors: Ojas Ashok Bapat, Kenichi Kumatani
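A "static measure" on the text representation can be as simple as scoring the word's length and character diversity. The particular measures, thresholds, and scoring rule below are assumptions invented for illustration; the patent does not specify them.

```python
# Sketch: static measures on a wake-up word's text representation,
# combined into a strength score with user-facing feedback.
def text_strength(word):
    length_ok = len(word) >= 6                 # longer words: fewer false wakes
    diversity = len(set(word.lower())) / max(len(word), 1)
    score = (1.0 if length_ok else 0.5) * diversity
    feedback = "strong" if score >= 0.5 else "weak"
    return score, feedback

for w in ["open sesame", "aaa"]:
    print(w, text_strength(w))
```

In the described system this text-based check would be complemented by dynamic measures on a recorded audio sample of the same word before final feedback is given.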
-
Publication number: 20150154953
Abstract: A method, system and tangible computer readable medium for generating one or more wake-up words are provided. For example, the method can include receiving a text representation of the one or more wake-up words. A strength of the text representation of the one or more wake-up words can be determined based on one or more static measures. The method can also include receiving an audio representation of the one or more wake-up words. A strength of the audio representation of the one or more wake-up words can be determined based on one or more dynamic measures. Feedback on the one or more wake-up words is provided (e.g., to an end user) based on the strengths of the text and audio representations.
Type: Application
Filed: December 2, 2013
Publication date: June 4, 2015
Applicant: Spansion LLC
Inventors: Ojas A. Bapat, Kenichi Kumatani
-
Publication number: 20150106405
Abstract: A method, apparatus, and tangible computer readable medium for processing a Hidden Markov Model (HMM) structure are disclosed herein. For example, the method includes receiving Hidden Markov Model (HMM) information from an external system. The method also includes processing back pointer data and first HMM state scores for one or more NULL states in the HMM information. Second HMM state scores are processed for one or more non-NULL states in the HMM information based on at least one predecessor state. Further, the method includes transferring the second HMM state scores to the external system.
Type: Application
Filed: October 16, 2013
Publication date: April 16, 2015
Applicant: Spansion LLC
Inventors: Ojas BAPAT, Richard Fastow, Jens Olson, Kenichi Kumatani