Normalizing Patents (Class 704/234)
  • Patent number: 10446156
    Abstract: Systems and methods of diarization using linguistic labeling include receiving a set of diarized textual transcripts. At least one heuristic is automatically applied to the diarized textual transcripts to select transcripts likely to be associated with an identified group of speakers. The selected transcripts are analyzed to create at least one linguistic model. The linguistic model is applied to transcripted audio data to label a portion of the transcripted audio data as having been spoken by the identified group of speakers. Still further embodiments of diarization using linguistic labeling may serve to label agent speech and customer speech in a recorded and transcripted customer service interaction.
    Type: Grant
    Filed: October 25, 2018
    Date of Patent: October 15, 2019
    Assignee: Verint Systems Ltd.
    Inventors: Omer Ziv, Ran Achituv, Ido Shapira, Jeremie Dreyfuss
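As a rough illustration of the selection step in this abstract, here is a toy Python heuristic that flags transcripts containing stock service phrases as likely agent speech. The phrase list and sample calls are invented for the example, not taken from the patent:

```python
# Hypothetical heuristic: a transcript containing a stock service phrase
# is probably the agent side of the call. Phrases are illustrative only.
AGENT_PHRASES = ("how may i help", "thank you for calling", "is there anything else")

def likely_agent(transcript: str) -> bool:
    text = transcript.lower()
    return any(phrase in text for phrase in AGENT_PHRASES)

calls = [
    "Thank you for calling, how may I help you today?",
    "Yeah hi, my internet has been down since Tuesday.",
    "Is there anything else I can do for you?",
]
# Transcripts selected for building the linguistic model.
selected = [t for t in calls if likely_agent(t)]
print(len(selected))
```

In the patent's flow, the selected transcripts would then feed the linguistic-model training step.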
  • Patent number: 10438592
    Abstract: Systems and methods of diarization of audio files use an acoustic voiceprint model. A plurality of audio files are analyzed to arrive at an acoustic voiceprint model associated with an identified speaker. Metadata associated with an audio file is used to select an acoustic voiceprint model. The selected acoustic voiceprint model is applied in a diarization to identify audio data of the identified speaker.
    Type: Grant
    Filed: October 25, 2018
    Date of Patent: October 8, 2019
    Assignee: Verint Systems Ltd.
    Inventors: Omer Ziv, Ran Achituv, Ido Shapira, Jeremie Dreyfuss
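A toy sketch of the metadata-driven model selection this abstract describes. The store keyed by agent ID and the pitch-threshold match rule are illustrative stand-ins, not the patent's actual voiceprint model:

```python
# Hypothetical voiceprint store keyed by call metadata (agent ID).
voiceprint_store = {
    "agent_042": {"mean_pitch": 118.0},
    "agent_101": {"mean_pitch": 205.0},
}

def select_model(metadata: dict) -> dict:
    # Metadata associated with the audio file picks the voiceprint model.
    return voiceprint_store[metadata["agent_id"]]

def matches(model: dict, observed_pitch: float, tol: float = 25.0) -> bool:
    # Toy diarization decision: label the segment as the known speaker
    # if its pitch is close to the voiceprint's mean.
    return abs(observed_pitch - model["mean_pitch"]) <= tol

model = select_model({"agent_id": "agent_042", "call_id": "c9"})
print(matches(model, 110.0), matches(model, 210.0))
```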
  • Patent number: 10418030
    Abstract: An acoustic model training device includes: a processor to execute a program; and a memory to store the program which, when executed by the processor, performs processes of: generating, based on feature vectors obtained by analyzing utterance data items of a plurality of speakers, a training data item of each speaker by subtracting, for each speaker, a mean vector of all the feature vectors of the speaker from each of the feature vectors of the speaker; generating a training data item of all the speakers by subtracting a mean vector of all the feature vectors of all the speakers from each of the feature vectors of all the speakers; and training an acoustic model using the training data item of each speaker and the training data item of all the speakers.
    Type: Grant
    Filed: May 20, 2016
    Date of Patent: September 17, 2019
    Assignee: MITSUBISHI ELECTRIC CORPORATION
    Inventor: Toshiyuki Hanazawa
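The two normalizations in this abstract, per-speaker mean subtraction and global mean subtraction, can be sketched in a few lines of Python over toy 2-D feature vectors:

```python
def mean_vec(vectors):
    # Component-wise mean of a list of equal-length vectors.
    n = len(vectors)
    return [sum(v[d] for v in vectors) / n for d in range(len(vectors[0]))]

def subtract(vectors, mu):
    return [[x - m for x, m in zip(v, mu)] for v in vectors]

features = {
    "spk_a": [[1.0, 2.0], [3.0, 4.0]],
    "spk_b": [[5.0, 6.0], [7.0, 8.0]],
}

# Training data item of each speaker: remove that speaker's own mean.
per_speaker = {s: subtract(v, mean_vec(v)) for s, v in features.items()}

# Training data item of all speakers: remove the global mean.
all_vecs = [v for vs in features.values() for v in vs]
global_norm = subtract(all_vecs, mean_vec(all_vecs))

print(per_speaker["spk_a"])
```

Per the abstract, both normalized sets would then be used together to train the acoustic model.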
  • Patent number: 10210864
    Abstract: Methods and computing systems for enabling a voice command for communication between related devices are described. A training voice command of a user is processed to generate a voice command signature including a content characteristic and a sound characteristic. When the user wishes to transfer an on-going packet data session from a current device to a related device, the user inputs the same voice command. The voice command will be analyzed with the voice command signature to determine a correspondence before being executed.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: February 19, 2019
    Assignee: T-Mobile USA, Inc.
    Inventors: Yasmin Karimli, Gunjan Nimbavikar
  • Patent number: 10134401
    Abstract: Systems and methods of diarization using linguistic labeling include receiving a set of diarized textual transcripts. At least one heuristic is automatically applied to the diarized textual transcripts to select transcripts likely to be associated with an identified group of speakers. The selected transcripts are analyzed to create at least one linguistic model. The linguistic model is applied to transcripted audio data to label a portion of the transcripted audio data as having been spoken by the identified group of speakers. Still further embodiments of diarization using linguistic labeling may serve to label agent speech and customer speech in a recorded and transcripted customer service interaction.
    Type: Grant
    Filed: November 20, 2013
    Date of Patent: November 20, 2018
    Assignee: Verint Systems Ltd.
    Inventors: Omer Ziv, Ran Achituv, Ido Shapira, Jeremie Dreyfuss
  • Patent number: 10134400
    Abstract: Systems and methods of diarization of audio files use an acoustic voiceprint model. A plurality of audio files are analyzed to arrive at an acoustic voiceprint model associated with an identified speaker. Metadata associated with an audio file is used to select an acoustic voiceprint model. The selected acoustic voiceprint model is applied in a diarization to identify audio data of the identified speaker.
    Type: Grant
    Filed: November 20, 2013
    Date of Patent: November 20, 2018
    Assignee: Verint Systems Ltd.
    Inventors: Omer Ziv, Ran Achituv, Ido Shapira, Jeremie Dreyfuss
  • Patent number: 10044533
    Abstract: A method (200) of bias cancellation for a radio channel sequence includes: receiving (201) a radio signal, the radio signal comprising a radio channel sequence coded by a first signature, the first signature belonging to a set of orthogonal signatures; decoding (202) the radio channel sequence based on the first signature to generate a decoded radio channel sequence; decoding (203) the radio channel sequence based on a second signature, wherein the second signature is orthogonal to the signatures of the set of orthogonal signatures, to generate a bias of the radio channel sequence; and canceling (204) the bias of the radio channel sequence from the decoded radio channel sequence.
    Type: Grant
    Filed: January 12, 2016
    Date of Patent: August 7, 2018
    Assignee: Intel IP Corporation
    Inventors: Thomas Esch, Edgar Bolinth, Markus Jordan, Tobias Scholand, Michael Speth
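A heavily simplified sketch of the bias-cancellation idea above, using length-4 Walsh codes. Modelling the bias as a constant term that appears in every despread output is an illustrative assumption, not the patent's channel model:

```python
# Length-4 Walsh codes: any two distinct rows are orthogonal.
W = [
    [1, 1, 1, 1],
    [1, -1, 1, -1],
    [1, 1, -1, -1],
    [1, -1, -1, 1],
]

def despread(received, signature):
    return sum(r * s for r, s in zip(received, signature)) / len(signature)

B = 0.5  # assumed common bias picked up by every decode

def decode(received, signature):
    return despread(received, signature) + B

received = [3.0 * s for s in W[1]]   # value 3.0 spread with signature 1
wanted = decode(received, W[1])      # data plus bias
bias_est = decode(received, W[3])    # unused orthogonal signature: bias only
print(wanted - bias_est)             # bias cancelled from the decode
```

Decoding with a signature that carries no data isolates the bias, mirroring the patent's second decoding step.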
  • Patent number: 9886948
    Abstract: Features are disclosed for improving the robustness of a neural network by using multiple (e.g., two or more) feature streams, combining data from the feature streams, and comparing the combined data to data from a subset of the feature streams (e.g., comparing values from the combined feature stream to values from one of the component feature streams of the combined feature stream). The neural network can include a component or layer that selects the data with the highest value, which can suppress or exclude some or all corrupted data from the combined feature stream. Subsequent layers of the neural network can restrict connections from the combined feature stream to a component feature stream to reduce the possibility that a corrupted combined feature stream will corrupt the component feature stream.
    Type: Grant
    Filed: January 5, 2015
    Date of Patent: February 6, 2018
    Assignee: Amazon Technologies, Inc.
    Inventors: Sri Venkata Surya Siva Rama Krishna Garimella, Bjorn Hoffmeister
  • Patent number: 9876985
    Abstract: A system, apparatus, and computer program product for monitoring a subject person's environment while the person is isolated from the environment. The system can use a microphone and/or a digital camera or imager to detect and capture sounds, voices, objects, symbols, and faces in the subject person's environment, for example. The captured items can be analyzed, identified, and provided in an events log. The subject person can later review the events log to understand what happened while isolated. In various instances, the subject person can select an event from the log and review the underlying detected sounds, voices, objects, symbols, and faces.
    Type: Grant
    Filed: September 3, 2014
    Date of Patent: January 23, 2018
    Assignee: HARMAN INTERNATIONAL INDUSTRIES, INCORPORATED
    Inventors: Davide Di Censo, Stefan Marti
  • Patent number: 9418679
    Abstract: A method for processing a received set of speech data, wherein the received set of speech data comprises an utterance, is provided. The method executes a process to generate a plurality of confidence scores, wherein each of the plurality of confidence scores is associated with one of a plurality of candidate utterances; determines a plurality of difference values, each of the plurality of difference values comprising a difference between two of the plurality of confidence scores; and compares the plurality of difference values to determine at least one disparity.
    Type: Grant
    Filed: August 12, 2014
    Date of Patent: August 16, 2016
    Assignee: HONEYWELL INTERNATIONAL INC.
    Inventor: Erik T. Nelson
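A toy version of the disparity check this abstract describes: pairwise differences between candidate confidence scores, where a small gap between the top two candidates suggests the recognizer is ambiguous. The 0.1 threshold is an invented example value:

```python
def top_two_disparity(scores):
    # Difference between the two highest confidence scores.
    ranked = sorted(scores, reverse=True)
    return ranked[0] - ranked[1]

def is_ambiguous(scores, threshold=0.1):
    # A small top-two gap means no candidate utterance clearly wins.
    return top_two_disparity(scores) < threshold

print(is_ambiguous([0.92, 0.30, 0.11]))  # clear winner
print(is_ambiguous([0.55, 0.52, 0.40]))  # near tie
```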
  • Patent number: 9251783
    Abstract: In syllable or vowel or phone boundary detection during speech, an auditory spectrum may be determined for an input window of sound and one or more multi-scale features may be extracted from the auditory spectrum. Each multi-scale feature can be extracted using a separate two-dimensional spectro-temporal receptive filter. One or more feature maps corresponding to the one or more multi-scale features can be generated and an auditory gist vector can be extracted from each of the one or more feature maps. A cumulative gist vector may be obtained through augmentation of each auditory gist vector extracted from the one or more feature maps. One or more syllable or vowel or phone boundaries in the input window of sound can be detected by mapping the cumulative gist vector to one or more syllable or vowel or phone boundary characteristics using a machine learning algorithm.
    Type: Grant
    Filed: June 17, 2014
    Date of Patent: February 2, 2016
    Assignee: Sony Computer Entertainment Inc.
    Inventors: Ozlem Kalinli-Akbacak, Ruxin Chen
  • Patent number: 9137611
    Abstract: In response to a signal failing to exceed an estimated level of noise by more than a predetermined amount for more than a predetermined continuous duration, the estimated level of noise is adjusted according to a first time constant in response to the signal rising and a second time constant in response to the signal falling, so that the estimated level of noise falls more quickly than it rises. In response to the signal exceeding the estimated level of noise by more than the predetermined amount for more than the predetermined continuous duration, a speed of adjusting the estimated level of noise is accelerated.
    Type: Grant
    Filed: August 24, 2012
    Date of Patent: September 15, 2015
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Takahiro Unno, Nitish Krishna Murthy
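The asymmetric tracking described in this abstract can be sketched with two smoothing coefficients, one for rising input and one for falling input, so the noise floor falls faster than it rises. The coefficient values are invented for illustration, not the patent's:

```python
RISE = 0.01   # slow upward adaptation: speech bursts barely move the floor
FALL = 0.2    # fast downward adaptation: the estimate drops quickly

def update_noise(estimate, level):
    coeff = RISE if level > estimate else FALL
    return estimate + coeff * (level - estimate)

noise = 10.0
for level in [50.0, 50.0, 8.0, 8.0]:   # brief speech burst, then quiet
    noise = update_noise(noise, level)
print(round(noise, 3))  # close to the floor; the burst barely moved it
```

The patent additionally accelerates adaptation once the signal stays above the estimate long enough; that second mode is omitted here.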
  • Patent number: 8996368
    Abstract: A feature transform for speech recognition is described. An input speech utterance is processed to produce a sequence of representative speech vectors. A time-synchronous speech recognition pass is performed using a decoding search to determine a recognition output corresponding to the speech input. The decoding search includes, for each speech vector after some first threshold number of speech vectors, estimating a feature transform based on the preceding speech vectors in the utterance and partial decoding results of the decoding search. The current speech vector is then adjusted based on the current feature transform, and the adjusted speech vector is used in a current frame of the decoding search.
    Type: Grant
    Filed: February 22, 2010
    Date of Patent: March 31, 2015
    Assignee: Nuance Communications, Inc.
    Inventor: Daniel Willett
  • Patent number: 8996373
    Abstract: A state detection device includes: a first model generation unit to generate a first specific speaker model obtained by modeling speech features of a specific speaker in an undepressed state; a second model generation unit to generate a second specific speaker model obtained by modeling speech features of the specific speaker in a depressed state; a likelihood calculation unit to calculate a first likelihood as a likelihood of the first specific speaker model with respect to input voice, and a second likelihood as a likelihood of the second specific speaker model with respect to the input voice; and a state determination unit to determine a state of the speaker of the input voice using the first likelihood and the second likelihood.
    Type: Grant
    Filed: October 5, 2011
    Date of Patent: March 31, 2015
    Assignee: Fujitsu Limited
    Inventors: Shoji Hayakawa, Naoshi Matsuo
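A toy two-model comparison in the spirit of this abstract, using single Gaussians over pitch as stand-ins for the patent's speaker models. The means and variance are invented example values:

```python
import math

def log_likelihood(x, mean, var):
    # Log density of a univariate Gaussian.
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def detect_state(pitch):
    # First model: the speaker's normal (undepressed) state.
    normal = log_likelihood(pitch, mean=180.0, var=400.0)
    # Second model: the same speaker in a depressed state.
    depressed = log_likelihood(pitch, mean=140.0, var=400.0)
    return "depressed" if depressed > normal else "normal"

print(detect_state(150.0), detect_state(190.0))
```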
  • Publication number: 20150088498
    Abstract: A method and system for training an automatic speech recognition system are provided. The method includes separating training data into speaker specific segments, and for each speaker specific segment, performing the following acts: generating spectral data, selecting a first warping factor and warping the spectral data, and comparing the warped spectral data with a speech model. The method also includes iteratively performing the steps of selecting another warping factor and generating another warped spectral data, comparing the other warped spectral data with the speech model, and if the other warping factor produces a closer match to the speech model, saving the other warping factor as the best warping factor for the speaker specific segment. The system includes modules configured to control a processor in the system to perform the steps of the method.
    Type: Application
    Filed: November 26, 2014
    Publication date: March 26, 2015
    Inventors: Vincent GOFFIN, Andrej LJOLJE, Murat Saraclar
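A minimal sketch of the warping-factor search this abstract describes: try a grid of factors, warp the speaker's spectral data, and keep the factor whose warped data best matches the speech model. The linear index warping and squared-error match score are simplifications, not the patent's method:

```python
def warp(spectrum, factor):
    # Toy warping: sample the spectrum at scaled positions.
    n = len(spectrum)
    return [spectrum[min(int(i * factor), n - 1)] for i in range(n)]

def match_score(warped, model):
    # Higher is better: negative squared error against the speech model.
    return -sum((a - b) ** 2 for a, b in zip(warped, model))

def best_warping_factor(spectrum, model, factors):
    best, best_score = None, float("-inf")
    for f in factors:
        score = match_score(warp(spectrum, f), model)
        if score > best_score:   # save the best factor so far
            best, best_score = f, score
    return best

model = [0.0, 1.0, 2.0, 3.0]
spectrum = [0.0, 2.0, 3.0, 3.0]   # speaker's spectrum looks "compressed"
print(best_warping_factor(spectrum, model, [0.8, 0.9, 1.0, 1.1, 1.2]))
```

In the patented method this search runs per speaker-specific segment, saving the winning factor for that segment.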
  • Patent number: 8990080
    Abstract: Techniques to normalize names for name-based speech recognition grammars are described. Some embodiments are particularly directed to techniques to normalize names for name-based speech recognition grammars more efficiently by caching, and on a per-culture basis. A technique may comprise receiving a name for normalization, during name processing for a name-based speech grammar generating process. A normalization cache may be examined to determine if the name is already in the cache in a normalized form. When the name is not already in the cache, the name may be normalized and added to the cache. When the name is in the cache, the normalization result may be retrieved and passed to the next processing step. Other embodiments are described and claimed.
    Type: Grant
    Filed: January 27, 2012
    Date of Patent: March 24, 2015
    Assignee: Microsoft Corporation
    Inventors: Mini Varkey, Bernardo Sana, Victor Boctor, Diego Carlomagno
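The caching pattern in this abstract is essentially memoization keyed per culture and name. A sketch, where the normalization rule itself is just a placeholder for the expensive step:

```python
cache = {}
normalize_calls = 0

def normalize(name: str, culture: str) -> str:
    # Placeholder for the expensive normalization step.
    global normalize_calls
    normalize_calls += 1
    return name.strip().title()

def normalized(name: str, culture: str) -> str:
    key = (culture, name)           # per-culture cache key
    if key not in cache:
        cache[key] = normalize(name, culture)
    return cache[key]

for _ in range(3):
    result = normalized("  mcDONALD, anne ", "en-US")
print(result, normalize_calls)  # normalized once, then served from cache
```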
  • Patent number: 8990086
    Abstract: A recognition confidence measurement method, medium and system are provided which can more accurately determine whether an input speech signal is in-vocabulary by extracting an optimum number of candidates that match a phone string extracted from the input speech signal and estimating a lexical distance between the extracted candidates. A recognition confidence measurement method includes: extracting a phoneme string from a feature vector of an input speech signal; extracting candidates by matching the extracted phoneme string against phoneme strings of vocabularies registered in a predetermined dictionary; estimating a lexical distance between the extracted candidates; and determining whether the input speech signal is in-vocabulary, based on the lexical distance.
    Type: Grant
    Filed: July 31, 2006
    Date of Patent: March 24, 2015
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sang-Bae Jeong, Nam Hoon Kim, Ick Sang Han, In Jeong Choi, Gil Jin Jang, Jae-Hoon Jeong
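One plausible reading of the lexical-distance step is an edit distance over phoneme strings; nearby top candidates hint that the input may be out-of-vocabulary. A sketch using plain Levenshtein distance, which is an assumption here, since the patent may define the distance differently:

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

candidates = ["k ae t", "k ae p", "d ao g"]
# Distance from the top candidate to each rival candidate.
dists = [levenshtein(candidates[0], c) for c in candidates[1:]]
print(dists)
```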
  • Patent number: 8949128
    Abstract: Techniques for providing speech output for speech-enabled applications. A synthesis system receives from a speech-enabled application a text input including a text transcription of a desired speech output. The synthesis system selects one or more audio recordings corresponding to one or more portions of the text input. In one aspect, the synthesis system selects from audio recordings provided by a developer of the speech-enabled application. In another aspect, the synthesis system selects an audio recording of a speaker speaking a plurality of words. The synthesis system forms a speech output including the one or more selected audio recordings and provides the speech output for the speech-enabled application.
    Type: Grant
    Filed: February 12, 2010
    Date of Patent: February 3, 2015
    Assignee: Nuance Communications, Inc.
    Inventors: Darren C. Meyer, Corinne Bos-Plachez, Martine Marguerite Staessen
  • Patent number: 8942975
    Abstract: Techniques are described herein that suppress noise in a Mel-filtered spectral domain. For example, a window may be applied to a representation of a speech signal in a time domain. The windowed representation in the time domain may be converted to a subsequent representation of the speech signal in the Mel-filtered spectral domain. A noise suppression operation may be performed with respect to the subsequent representation to provide noise-suppressed Mel coefficients.
    Type: Grant
    Filed: March 22, 2011
    Date of Patent: January 27, 2015
    Assignee: Broadcom Corporation
    Inventor: Jonas Borgstrom
  • Patent number: 8930188
    Abstract: An error concealment method and apparatus for an audio signal and a decoding method and apparatus for an audio signal using the error concealment method and apparatus. The error concealment method includes selecting one of an error concealment in a frequency domain and an error concealment in a time domain as an error concealment scheme for a current frame based on a predetermined criterion when an error occurs in the current frame, selecting one of a repetition scheme and an interpolation scheme in the frequency domain as the error concealment scheme for the current frame based on a predetermined criterion when the error concealment in the frequency domain is selected, and concealing the error of the current frame using the selected scheme.
    Type: Grant
    Filed: July 2, 2013
    Date of Patent: January 6, 2015
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Eun-mi Oh, Ki-hyun Choo, Ho-sang Sung, Chang-yong Son, Jung-hoe Kim, Kang eun Lee
  • Patent number: 8914291
    Abstract: Techniques for generating synthetic speech with contrastive stress. In one aspect, a speech-enabled application generates a text input including a text transcription of a desired speech output, and inputs the text input to a speech synthesis system. The synthesis system generates an audio speech output corresponding to at least a portion of the text input, with at least one portion carrying contrastive stress, and provides the audio speech output for the speech-enabled application. In another aspect, a speech-enabled application inputs a plurality of text strings, each corresponding to a portion of a desired speech output, to a software module for rendering contrastive stress. The software module identifies a plurality of audio recordings that render at least one portion of at least one of the text strings as speech carrying contrastive stress. The speech-enabled application generates an audio speech output corresponding to the desired speech output using the audio recordings.
    Type: Grant
    Filed: September 24, 2013
    Date of Patent: December 16, 2014
    Assignee: Nuance Communications, Inc.
    Inventors: Darren C. Meyer, Stephen R. Springer
  • Patent number: 8909527
    Abstract: A method and system for training an automatic speech recognition system are provided. The method includes separating training data into speaker specific segments, and for each speaker specific segment, performing the following acts: generating spectral data, selecting a first warping factor and warping the spectral data, and comparing the warped spectral data with a speech model. The method also includes iteratively performing the steps of selecting another warping factor and generating another warped spectral data, comparing the other warped spectral data with the speech model, and if the other warping factor produces a closer match to the speech model, saving the other warping factor as the best warping factor for the speaker specific segment. The system includes modules configured to control a processor in the system to perform the steps of the method.
    Type: Grant
    Filed: June 24, 2009
    Date of Patent: December 9, 2014
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Vincent Goffin, Andrej Ljolje, Murat Saraclar
  • Patent number: 8838455
    Abstract: A system and method for facilitating user interaction with a voice application. A VoiceXML browser runs locally on a mobile device. Supporting components, such as a Resource Manager, a Call Data Manager, and an MRCP Gateway Client, support operation of the VoiceXML browser. The Resource Manager serves either files stored locally on the mobile device or files accessible via a network connection using the wireless or mobile broadband capabilities of the mobile device. The Call Data Manager communicates call-specific data back to the application's system of origin or another configured target system. The MRCP Gateway Client provides the VoiceXML browser with access to media resources via an MRCP Gateway.
    Type: Grant
    Filed: June 13, 2008
    Date of Patent: September 16, 2014
    Assignee: West Corporation
    Inventor: Chad Daniel Fox
  • Patent number: 8825486
    Abstract: Techniques for generating synthetic speech with contrastive stress. In one aspect, a speech-enabled application generates a text input including a text transcription of a desired speech output, and inputs the text input to a speech synthesis system. The synthesis system generates an audio speech output corresponding to at least a portion of the text input, with at least one portion carrying contrastive stress, and provides the audio speech output for the speech-enabled application. In another aspect, a speech-enabled application inputs a plurality of text strings, each corresponding to a portion of a desired speech output, to a software module for rendering contrastive stress. The software module identifies a plurality of audio recordings that render at least one portion of at least one of the text strings as speech carrying contrastive stress. The speech-enabled application generates an audio speech output corresponding to the desired speech output using the audio recordings.
    Type: Grant
    Filed: January 22, 2014
    Date of Patent: September 2, 2014
    Assignee: Nuance Communications, Inc.
    Inventors: Darren C. Meyer, Stephen R. Springer
  • Publication number: 20140222423
    Abstract: Most speaker recognition systems use i-vectors, which are compact representations of speaker voice characteristics. Typical i-vector extraction procedures are complex in terms of computation and memory usage. According to an embodiment, a method and corresponding apparatus for speaker identification comprise determining a representation for each component of a variability operator, representing statistical inter- and intra-speaker variability of voice features with respect to a background statistical model, in terms of an orthogonal operator common to all components of the variability operator and having a first dimension larger than a second dimension of the components of the variability operator; computing statistical voice characteristics of a particular speaker using the determined representations; and employing the statistical voice characteristics of the particular speaker in performing speaker recognition.
    Type: Application
    Filed: February 7, 2013
    Publication date: August 7, 2014
    Applicant: NUANCE COMMUNICATIONS, INC.
    Inventors: Sandro Cumani, Pietro Laface
  • Patent number: 8798992
    Abstract: A signal processing apparatus, system and software product for audio modification/substitution of background noise generated during an event, including, but not limited to, substituting or partially substituting a noise signal from one or more microphones with a pre-recorded noise, and/or selecting one or more noise signals from a plurality of microphones for further processing in real-time or near real-time broadcasting.
    Type: Grant
    Filed: May 18, 2011
    Date of Patent: August 5, 2014
    Assignee: Disney Enterprises, Inc.
    Inventors: Michael Gay, Jed Drake, Anthony Bailey
  • Patent number: 8798985
    Abstract: A method for interpreting a dialogue between two terminals includes establishing a communication channel between interpretation terminals of two parties in response to an interpretation request; specifying a language of an initiating party and a language of the other party in each of the interpretation terminals of the two parties by exchanging information about the language of the initiating party used in the interpretation terminal of the initiating party and the language of the other party used in the interpretation terminal of the other party via the communication channel; recognizing speech uttered from the interpretation terminal of the initiating party; translating the speech recognized by the interpretation terminal of the initiating party into the language of the other party; and transmitting a sentence translated into the language of the other party to the interpretation terminal of the other party.
    Type: Grant
    Filed: June 2, 2011
    Date of Patent: August 5, 2014
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Seung Yun, Sanghun Kim
  • Patent number: 8798994
    Abstract: The present invention discloses a solution for conserving computing resources when implementing transformation based adaptation techniques. The disclosed solution limits the amount of speech data used by real-time adaptation algorithms to compute a transformation, which results in substantial computational savings. Appreciably, application of a transform is a relatively low memory and computationally cheap process compared to memory and resource requirements for computing the transform to be applied.
    Type: Grant
    Filed: February 6, 2008
    Date of Patent: August 5, 2014
    Assignee: International Business Machines Corporation
    Inventors: John W. Eckhart, Michael Florio, Radek Hampl, Pavel Krbec, Jonathan Palgon
  • Publication number: 20140207448
    Abstract: A speech recognition system adaptively estimates a warping factor used to reduce speaker variability. The warping factor is estimated using a small window (e.g. 100 ms) of speech. The warping factor is adaptively adjusted as more speech is obtained until the warping factor converges or a pre-defined maximum number of adaptations is reached. The speaker may be placed into a group selected from two or more groups based on characteristics that are associated with the speaker's window of speech. Different step sizes may be used within the different groups when estimating the warping factor. VTLN is applied to the speech input using the estimated warping factor. A linear transformation, including a bias term, may also be computed to assist in normalizing the speech along with the application of the VTLN.
    Type: Application
    Filed: January 23, 2013
    Publication date: July 24, 2014
    Applicant: Microsoft Corporation
    Inventors: Shizhen Wang, Yifan Gong, Fileno Alleva
  • Patent number: 8781825
    Abstract: Embodiments of the present invention improve methods of performing speech recognition. In one embodiment, the present invention includes a method comprising receiving a spoken utterance, processing the spoken utterance in a speech recognizer to generate a recognition result, determining consistencies of one or more parameters of component sounds of the spoken utterance, wherein the parameters are selected from the group consisting of duration, energy, and pitch, and wherein each component sound of the spoken utterance has a corresponding value of said parameter, and validating the recognition result based on the consistency of at least one of said parameters.
    Type: Grant
    Filed: August 24, 2011
    Date of Patent: July 15, 2014
    Assignee: Sensory, Incorporated
    Inventors: Jonathan Shaw, Pieter Vermeulen, Stephen Sutton, Robert Savoie
  • Patent number: 8768695
    Abstract: A computer-implemented arrangement is described for performing cepstral mean normalization (CMN) in automatic speech recognition. A current CMN function is stored in a computer memory as a previous CMN function. The current CMN function is updated based on a current audio input to produce an updated CMN function. The updated CMN function is used to process the current audio input to produce a processed audio input. Automatic speech recognition of the processed audio input is performed to determine representative text. If the audio input is not recognized as representative text, the updated CMN function is replaced with the previous CMN function.
    Type: Grant
    Filed: June 13, 2012
    Date of Patent: July 1, 2014
    Assignee: Nuance Communications, Inc.
    Inventors: Yun Tang, Venkatesh Nagesha
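The store/update/rollback flow in this abstract can be sketched with a running-mean CMN and a stand-in recognizer. The smoothing constant and the failure rule are invented for the example:

```python
def update_cmn(mean, frames, alpha=0.9):
    # Exponentially smoothed cepstral mean over the utterance's frames.
    for frame in frames:
        mean = alpha * mean + (1 - alpha) * frame
    return mean

def recognize(processed_frames):
    # Stand-in recognizer: pretend recognition fails on wildly shifted input.
    return all(abs(f) < 5.0 for f in processed_frames)

cmn_mean = 1.0
frames = [1.2, 0.9, 1.1]

previous = cmn_mean                       # store current CMN as previous
cmn_mean = update_cmn(cmn_mean, frames)   # update from the current audio
processed = [f - cmn_mean for f in frames]
if not recognize(processed):
    cmn_mean = previous                   # roll back on recognition failure
print(round(cmn_mean, 4))
```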
  • Patent number: 8768711
    Abstract: A method of voice-enabling an application for command and control and content navigation can include the application dynamically generating a markup language fragment specifying a command and control and content navigation grammar for the application, instantiating an interpreter from a voice library, and providing the markup language fragment to the interpreter. The method also can include the interpreter processing a speech input using the command and control and content navigation grammar specified by the markup language fragment and providing an event to the application indicating an instruction representative of the speech input.
    Type: Grant
    Filed: June 17, 2004
    Date of Patent: July 1, 2014
    Assignee: Nuance Communications, Inc.
    Inventors: Soonthorn Ativanichayaphong, Charles W. Cross, Jr., Brien H. Muschett
  • Patent number: 8762143
    Abstract: Disclosed are systems, methods, and computer readable media for identifying an acoustic environment of a caller. The method embodiment comprises analyzing acoustic features of a received audio signal from a caller, receiving meta-data information based on a previously recorded time and speed of the caller, classifying a background environment of the caller based on the analyzed acoustic features and the meta-data, selecting an acoustic model matched to the classified background environment from a plurality of acoustic models, and performing speech recognition on the received audio signal using the selected acoustic model.
    Type: Grant
    Filed: May 29, 2007
    Date of Patent: June 24, 2014
    Assignee: AT&T Intellectual Property II, L.P.
    Inventor: Mazin Gilbert
  • Patent number: 8744845
    Abstract: A noise estimation method for a noisy speech signal according to an embodiment of the present invention includes the steps of approximating a transformation spectrum by transforming an input noisy speech signal to a frequency domain, calculating a smoothed magnitude spectrum having a decreased difference in a magnitude of the transformation spectrum between neighboring frames, calculating a search spectrum to represent an estimated noise component of the smoothed magnitude spectrum, and estimating a noise spectrum by using a recursive average method using an adaptive forgetting factor defined by using the search spectrum. According to an embodiment of the present invention, the amount of calculation for noise estimation is small, and large-capacity memory is not required. Accordingly, the present invention can be easily implemented in hardware or software. Further, the accuracy of noise estimation can be increased because an adaptive procedure can be performed on each frequency sub-band.
    Type: Grant
    Filed: March 31, 2009
    Date of Patent: June 3, 2014
    Assignee: Transono Inc.
    Inventors: Sung Il Jung, Dong Gyung Ha
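A sketch of recursive averaging with an adaptive forgetting factor, the core recursion named in this abstract. The energy-ratio speech cue and the constants are invented for illustration; the patent derives its factor from the search spectrum instead:

```python
def adaptive_forgetting(frame_power, noise_est, base=0.95):
    # Crude speech-presence cue: how far frame power exceeds the estimate.
    ratio = frame_power / (noise_est + 1e-9)
    speech_prob = min(1.0, max(0.0, (ratio - 1.0) / 9.0))
    # Forgetting factor approaches 1.0 (no update) when speech dominates.
    return base + (1.0 - base) * speech_prob

def update_noise(noise_est, frame_power):
    alpha = adaptive_forgetting(frame_power, noise_est)
    return alpha * noise_est + (1.0 - alpha) * frame_power

noise = 1.0
for power in [1.1, 0.9, 12.0, 1.0]:   # third frame is a speech burst
    noise = update_noise(noise, power)
print(round(noise, 3))  # estimate stays near the true noise floor
```

Per the abstract, a real implementation would run this recursion independently on each frequency sub-band.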
  • Patent number: 8711015
    Abstract: The invention relates to compressing sparse data sets that contain sequences of data values and position information therefor. The position information may be in the form of position indices defining active positions of the data values in a sparse vector of length N. The position information is encoded into the data values by adjusting one or more of the data values within a pre-defined tolerance range, so that a pre-defined mapping function of the data values and their positions is close to a target value. In one embodiment, the mapping function is defined using a sub-set of N filler values whose elements are used to fill empty positions in the input sparse data vector. At the decoder, the correct data positions are identified by searching through possible sub-sets of filler values.
    Type: Grant
    Filed: August 24, 2011
    Date of Patent: April 29, 2014
    Assignee: Her Majesty the Queen in Right of Canada as represented by the Minister of Industry, through the Communications Research Centre Canada
    Inventors: Frederic Mustiere, Hossein Najaf-Zadeh, Ramin Pishehvar, Hassan Lahdili, Louis Thibault, Martin Bouchard
  • Patent number: 8682671
    Abstract: Techniques for generating synthetic speech with contrastive stress. In one aspect, a speech-enabled application generates a text input including a text transcription of a desired speech output, and inputs the text input to a speech synthesis system. The synthesis system generates an audio speech output corresponding to at least a portion of the text input, with at least one portion carrying contrastive stress, and provides the audio speech output for the speech-enabled application. In another aspect, a speech-enabled application inputs a plurality of text strings, each corresponding to a portion of a desired speech output, to a software module for rendering contrastive stress. The software module identifies a plurality of audio recordings that render at least one portion of at least one of the text strings as speech carrying contrastive stress. The speech-enabled application generates an audio speech output corresponding to the desired speech output using the audio recordings.
    Type: Grant
    Filed: April 17, 2013
    Date of Patent: March 25, 2014
    Assignee: Nuance Communications, Inc.
    Inventors: Darren C. Meyer, Stephen R. Springer
  • Patent number: 8660849
    Abstract: Methods, systems, and computer readable storage medium related to operating an intelligent digital assistant are disclosed. A user request is received, the user request including at least a speech input received from a user. The user request including the speech input is processed to obtain a representation of user intent for identifying items of a selection domain based on at least one selection criterion. A prompt is provided to the user, the prompt presenting two or more properties relevant to items of the selection domain and requesting the user to specify relative importance between the two or more properties. A listing of search results is provided to the user, where the listing of search results has been obtained based on the at least one selection criterion and the relative importance provided by the user.
    Type: Grant
    Filed: December 21, 2012
    Date of Patent: February 25, 2014
    Assignee: Apple Inc.
    Inventors: Thomas Robert Gruber, Adam John Cheyer, Didier Rene Guzzoni, Christopher Dean Brigham, Harry Joseph Saddler
  • Patent number: 8655659
    Abstract: A personalized text-to-speech synthesizing device includes: a personalized speech feature library creator, configured to recognize personalized speech features of a specific speaker by comparing a random speech fragment of the specific speaker with preset keywords, thereby to create a personalized speech feature library associated with the specific speaker, and store the personalized speech feature library in association with the specific speaker; and a text-to-speech synthesizer, configured to perform a speech synthesis of a text message from the specific speaker, based on the personalized speech feature library associated with the specific speaker and created by the personalized speech feature library creator, thereby to generate and output a speech fragment having pronunciation characteristics of the specific speaker.
    Type: Grant
    Filed: August 12, 2010
    Date of Patent: February 18, 2014
    Assignees: Sony Corporation, Sony Mobile Communications AB
    Inventors: Qingfang Wang, Shouchun He
  • Patent number: 8639508
    Abstract: A method of automatic speech recognition includes receiving an utterance from a user via a microphone that converts the utterance into a speech signal, pre-processing the speech signal using a processor to extract acoustic data from the received speech signal, and identifying at least one user-specific characteristic in response to the extracted acoustic data. The method also includes determining a user-specific confidence threshold responsive to the at least one user-specific characteristic, and using the user-specific confidence threshold to recognize the utterance received from the user and/or to assess confusability of the utterance with stored vocabulary.
    Type: Grant
    Filed: February 14, 2011
    Date of Patent: January 28, 2014
    Assignee: General Motors LLC
    Inventors: Xufang Zhao, Gaurav Talwar
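    The user-specific threshold logic described above reduces to a small lookup-and-compare step. The trait names and threshold values below are made-up placeholders, not values from the patent:

```python
# illustrative per-trait confidence thresholds (placeholder values)
THRESHOLDS = {"native": 0.45, "accented": 0.35, "child": 0.30}

def recognize(hypothesis, confidence, user_trait):
    """Accept a recognition hypothesis only if its confidence clears
    the threshold chosen for the identified user characteristic."""
    threshold = THRESHOLDS.get(user_trait, 0.40)  # default for unknown traits
    return hypothesis if confidence >= threshold else None
```

    The same comparison can equally be used the other way around, to flag an utterance as confusable with stored vocabulary when its confidence falls below the user-specific threshold.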
  • Patent number: 8600741
    Abstract: A system and method for tuning a speech recognition engine to an individual microphone using a database containing acoustical models for a plurality of microphones. Microphone performance characteristics are obtained from a microphone at a speech recognition engine, the database is searched for an acoustical model that matches the characteristics, and the speech recognition engine is then modified based on the matching acoustical model.
    Type: Grant
    Filed: August 20, 2008
    Date of Patent: December 3, 2013
    Assignee: General Motors LLC
    Inventors: Gaurav Talwar, Rathinavelu Chengalvarayan, Jesse T. Gratke, Subhash B. Gullapalli, Dana B. Fecher
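    The database search described above is essentially a nearest-neighbor lookup over stored acoustical models. The dictionary layout and the Euclidean matching criterion are illustrative assumptions:

```python
def tune_engine(mic_characteristics, model_db):
    """Return the stored acoustical model whose recorded microphone
    characteristics best match the measured ones (squared Euclidean
    distance used here as an illustrative matching criterion)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model_db, key=lambda m: dist(m["characteristics"],
                                            mic_characteristics))
```

    The matched model would then be applied to modify the speech recognition engine's front end for that microphone.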
  • Patent number: 8600744
    Abstract: Disclosed are systems, methods, and computer readable media for performing speech recognition. The method embodiment comprises (1) selecting a codebook from a plurality of codebooks with a minimal acoustic distance to a received speech sample, the plurality of codebooks generated by a process of (a) computing a vocal tract length for each of a plurality of speakers, (b) for each of the plurality of speakers, clustering speech vectors, and (c) creating a codebook for each speaker, the codebook containing entries for the respective speaker's vocal tract length, speech vectors, and an optional vector weight for each speech vector, (2) applying the respective vocal tract length associated with the selected codebook to normalize the received speech sample for use in speech recognition, and (3) recognizing the received speech sample based on the respective vocal tract length associated with the selected codebook.
    Type: Grant
    Filed: April 13, 2012
    Date of Patent: December 3, 2013
    Assignee: AT&T Intellectual Property II, L.P.
    Inventor: Mazin Gilbert
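    Steps (1) and (2) of the method above can be sketched as a codebook lookup followed by a frequency warp. The codebook layout, the distance measure, the reference vocal tract length, and the linear warping form are all illustrative assumptions:

```python
import numpy as np

def select_codebook(sample_vec, codebooks):
    """Step (1): pick the codebook with minimal acoustic distance to the
    received sample, then return its stored vocal tract length.
    Each codebook is a dict with 'vtl' and 'vectors' keys (assumed layout)."""
    best = min(codebooks,
               key=lambda cb: min(np.linalg.norm(sample_vec - v)
                                  for v in cb["vectors"]))
    return best["vtl"]

def warp_frequencies(freqs, vtl, reference_vtl=17.0):
    """Step (2): normalize via a linear frequency warp. Shorter vocal
    tracts shift formants up, so scale toward a reference length
    (linear VTLN assumed; reference_vtl in cm is a placeholder)."""
    return np.asarray(freqs) * (vtl / reference_vtl)
```

    Recognition (step 3) would then run on the warped features, so all speakers are scored against models trained at the reference vocal tract length.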
  • Patent number: 8577678
    Abstract: A speech recognition system according to the present invention includes a sound source separating section which separates mixed speeches from multiple sound sources from one another; a mask generating section which generates a soft mask which can take continuous values between 0 and 1 for each frequency spectral component of a separated speech signal using distributions of speech signal and noise against separation reliability of the separated speech signal; and a speech recognizing section which recognizes speeches separated by the sound source separating section using soft masks generated by the mask generating section.
    Type: Grant
    Filed: March 10, 2011
    Date of Patent: November 5, 2013
    Assignee: Honda Motor Co., Ltd.
    Inventors: Kazuhiro Nakadai, Toru Takahashi, Hiroshi Okuno
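    The soft mask described above can be sketched as the posterior of a "speech" Gaussian versus a "noise" Gaussian evaluated at each bin's separation reliability. The distribution parameters are illustrative, not taken from the patent:

```python
import numpy as np

def soft_mask(reliability, speech_mean=0.8, noise_mean=0.2, sigma=0.15):
    """Continuous mask in (0, 1) per frequency component, computed from
    separation reliability via two Gaussians (parameters are placeholders)."""
    def gauss(x, mu):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2)
    p_speech = gauss(reliability, speech_mean)
    p_noise = gauss(reliability, noise_mean)
    # posterior of "speech" given the reliability observation
    return p_speech / np.maximum(p_speech + p_noise, 1e-300)
```

    Unlike a hard 0/1 mask, intermediate reliabilities yield intermediate mask values, which is the property the abstract emphasizes.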
  • Patent number: 8571870
    Abstract: Techniques for generating synthetic speech with contrastive stress. In one aspect, a speech-enabled application generates a text input including a text transcription of a desired speech output, and inputs the text input to a speech synthesis system. The synthesis system generates an audio speech output corresponding to at least a portion of the text input, with at least one portion carrying contrastive stress, and provides the audio speech output for the speech-enabled application. In another aspect, a speech-enabled application inputs a plurality of text strings, each corresponding to a portion of a desired speech output, to a software module for rendering contrastive stress. The software module identifies a plurality of audio recordings that render at least one portion of at least one of the text strings as speech carrying contrastive stress. The speech-enabled application generates an audio speech output corresponding to the desired speech output using the audio recordings.
    Type: Grant
    Filed: August 9, 2010
    Date of Patent: October 29, 2013
    Assignee: Nuance Communications, Inc.
    Inventors: Darren C. Meyer, Stephen R. Springer
  • Patent number: 8560316
    Abstract: The present invention relates to a system and method of making a verification decision within a speaker recognition system. A speech sample is gathered from a speaker over a period of time, and a verification score is then produced for the sample over that period. Once the verification score is determined, a confidence measure is produced based on frame score observations from the sample over the period, calculated using the standard Gaussian distribution. If the confidence measure indicates with a set level of confidence that the verification score is below the verification threshold, the speaker is rejected and the gathering process is terminated.
    Type: Grant
    Filed: December 19, 2007
    Date of Patent: October 15, 2013
    Inventors: Robert Vogt, Michael Mason, Sridaran Subramanian
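    The early-termination idea above can be sketched with a normal-approximation confidence interval over the frame scores gathered so far. Treating frame scores as i.i.d. and using a fixed 95% z-value are simplifying assumptions for illustration:

```python
import math

def early_decision(frame_scores, threshold):
    """Build a ~95% confidence interval for the mean verification score
    from per-frame observations; reject early (and stop gathering) only
    when the whole interval lies below the verification threshold."""
    n = len(frame_scores)
    mean = sum(frame_scores) / n
    var = sum((s - mean) ** 2 for s in frame_scores) / max(n - 1, 1)
    half_width = 1.96 * math.sqrt(var / n)   # z for 95% two-sided
    if mean + half_width < threshold:
        return "reject"      # confidently below threshold
    if mean - half_width > threshold:
        return "accept"      # confidently above threshold
    return "continue"        # keep gathering speech
```

    The benefit is that a clear impostor can be rejected after a short sample instead of waiting for the full gathering period.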
  • Patent number: 8560324
    Abstract: A mobile terminal including an input unit configured to receive an input to activate a voice recognition function on the mobile terminal, a memory configured to store information related to operations performed on the mobile terminal, and a controller configured to activate the voice recognition function upon receiving the input to activate the voice recognition function, to determine a meaning of an input voice instruction based on at least one prior operation performed on the mobile terminal and a language included in the voice instruction, and to provide operations related to the determined meaning of the input voice instruction based on the at least one prior operation performed on the mobile terminal and the language included in the voice instruction and based on a probability that the determined meaning of the input voice instruction matches the information related to the operations of the mobile terminal.
    Type: Grant
    Filed: January 31, 2012
    Date of Patent: October 15, 2013
    Assignee: LG Electronics Inc.
    Inventors: Jong-Ho Shin, Jae-Do Kwak, Jong-Keun Youn
  • Patent number: 8554566
    Abstract: Techniques for training and applying prosody models for speech synthesis are provided. A speech recognition engine processes audible speech to produce text annotated with prosody information. A prosody model is trained with this annotated text. After initial training, the model is applied during speech synthesis to generate speech with non-standard prosody from input text. Multiple prosody models can be used to represent different prosody styles.
    Type: Grant
    Filed: November 29, 2012
    Date of Patent: October 8, 2013
    Assignee: Morphism LLC
    Inventor: James H. Stephens, Jr.
  • Patent number: 8538752
    Abstract: The invention comprises a method and apparatus for predicting word accuracy. Specifically, the method comprises obtaining an utterance in speech data where the utterance comprises an actual word string, processing the utterance for generating an interpretation of the actual word string, processing the utterance to identify at least one utterance frame, and predicting a word accuracy associated with the interpretation according to at least one stationary signal-to-noise ratio and at least one non-stationary signal-to-noise ratio, wherein the at least one stationary signal-to-noise ratio and the at least one non-stationary signal-to-noise ratio are determined according to a frame energy associated with each of the at least one utterance frame.
    Type: Grant
    Filed: May 7, 2012
    Date of Patent: September 17, 2013
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Mazin Gilbert, Hong Kook Kim
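    A toy version of the prediction step above: derive a stationary SNR from the mean frame energy and a non-stationary SNR from per-frame energies, then map them to a predicted accuracy. The logistic form and all regression weights are made-up placeholders, not values from the patent:

```python
import math

def predict_word_accuracy(frame_energies, noise_energy):
    """Predict word accuracy from stationary and non-stationary SNRs
    computed over the utterance frames (weights are illustrative)."""
    # stationary SNR: mean frame energy against the noise floor, in dB
    mean_energy = sum(frame_energies) / len(frame_energies)
    snr_stat = 10 * math.log10(mean_energy / noise_energy)
    # non-stationary SNR: per-frame SNRs, then averaged
    per_frame = [10 * math.log10(e / noise_energy) for e in frame_energies]
    snr_nonstat = sum(per_frame) / len(per_frame)
    score = 0.15 * snr_stat + 0.10 * snr_nonstat - 1.0  # placeholder weights
    return 1.0 / (1.0 + math.exp(-score))               # accuracy in (0, 1)
```

    In practice the weights would be fit on held-out recognition results rather than chosen by hand.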
  • Patent number: 8538751
    Abstract: A speech recognition system and a speech recognizing method for high-accuracy speech recognition in environments with ego noise are provided. A speech recognition system according to the present invention includes a sound source separating and speech enhancing section; an ego noise predicting section; a missing feature mask generating section for generating missing feature masks using outputs of the sound source separating and speech enhancing section and the ego noise predicting section; an acoustic feature extracting section for extracting an acoustic feature of each sound source using the output for each sound source of the sound source separating and speech enhancing section; and a speech recognizing section for performing speech recognition using outputs of the acoustic feature extracting section and the missing feature masks.
    Type: Grant
    Filed: June 10, 2011
    Date of Patent: September 17, 2013
    Assignee: Honda Motor Co., Ltd.
    Inventors: Kazuhiro Nakadai, Gokhan Ince
  • Patent number: 8521536
    Abstract: A Mobile Voice Self Service (MVSS) mobile device and method thereof. A VoiceXML browser that is implemented directly on the MVSS mobile device may request a VoiceXML application from a VoiceXML application server and process it. A call data manager may also be implemented on the MVSS mobile device and may provide call data that, in conjunction with data from the VoiceXML application server, may authorize access to advanced Media Resource Control Protocol (MRCP) services, such as Automatic Speech Recognition (ASR) or Text-To-Speech (TTS). A media resource gateway may then provide the advanced MRCP services to the VoiceXML application processed by the VoiceXML application browser. Hotkey navigations and bookmarked application points to VoiceXML applications may be created and applied through application analysis and state tracking. Therein, VoiceXML document transitions and user input are stored to maintain application state changes until the user requests creation of an application bookmark.
    Type: Grant
    Filed: October 22, 2012
    Date of Patent: August 27, 2013
    Assignee: West Corporation
    Inventor: Chad Daniel Fox
  • Patent number: 8515747
    Abstract: Transmitted data that include audio data and a transmitted spectral sharpness parameter representing a spectral harmonic/noise sharpness of a plurality of subbands are received. A measured spectral sharpness parameter is estimated from the received audio data. The transmitted spectral sharpness parameter is compared with the measured spectral sharpness parameter. A main sharpness control parameter is formed for each of the decoded subbands. The main sharpness control parameter for each of the decoded subbands is analyzed. Ones of the decoded subbands are sharpened if the corresponding main sharpness control indicates that a corresponding subband is not sharp enough, wherein sharpened subbands are formed. Likewise, ones of the decoded subbands are flattened if the corresponding main sharpness control indicates that a corresponding subband is not flat enough, wherein flattened subbands are formed.
    Type: Grant
    Filed: September 4, 2009
    Date of Patent: August 20, 2013
    Assignee: Huawei Technologies Co., Ltd.
    Inventor: Yang Gao
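    The per-subband decision logic above can be sketched as a comparison of transmitted against measured sharpness, with a dead zone in between. The tolerance value and the simple difference used as the control parameter are illustrative assumptions:

```python
def sharpness_controls(transmitted, measured, tolerance=0.1):
    """For each decoded subband, compare the transmitted sharpness
    parameter with the sharpness measured from the decoded audio and
    decide whether the subband should be sharpened, flattened, or left
    unchanged (tolerance is a placeholder value)."""
    actions = []
    for tx, mx in zip(transmitted, measured):
        control = tx - mx              # main sharpness control parameter
        if control > tolerance:
            actions.append("sharpen")  # decoded subband not sharp enough
        elif control < -tolerance:
            actions.append("flatten")  # decoded subband not flat enough
        else:
            actions.append("keep")
    return actions
```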