Patents Assigned to Lessac Technologies, Inc.
  • Patent number: 10453442
    Abstract: A computer-implemented method for automatically analyzing, predicting, and/or modifying acoustic units of prosodic human speech utterances for use in speech synthesis or speech recognition. Possible steps include: initiating analysis of acoustic wave data representing the human speech utterances, via the phase state of the acoustic wave data; using one or more phase state defined acoustic wave metrics as common elements for analyzing, and optionally modifying, pitch, amplitude, duration, and other measurable acoustic parameters of the acoustic wave data, at predetermined time intervals; analyzing acoustic wave data representing a selected acoustic unit to determine the phase state of the acoustic unit; and analyzing the acoustic wave data representing the selected acoustic unit to determine at least one acoustic parameter of the acoustic unit with reference to the determined phase state of the selected acoustic unit. Also included are systems for implementing the described and related methods.
    Type: Grant
    Filed: September 26, 2016
    Date of Patent: October 22, 2019
    Assignee: LESSAC TECHNOLOGIES, INC.
    Inventors: Nishant Chandra, Reiner Wilhelms-Tricarico, Rattima Nitisaroj, Brian Mottershead, Gary A. Marple, John B. Reichenbach
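
As a rough illustration of the kind of phase-state analysis described in the abstract above, the sketch below derives instantaneous phase and per-frame frequency/amplitude measurements from the analytic signal at fixed time intervals. The Hilbert-transform interpretation of "phase state", the frame size, and all function names are assumptions for illustration, not details taken from the patent.

```python
# Illustrative sketch only: one plausible reading of "phase-state" analysis,
# using the analytic signal (Hilbert transform) to obtain instantaneous phase,
# then measuring pitch-like and amplitude parameters at predetermined intervals.
import numpy as np
from scipy.signal import hilbert

def phase_state_metrics(wave: np.ndarray, sr: int, frame_ms: float = 10.0):
    analytic = hilbert(wave)                       # complex analytic signal
    phase = np.unwrap(np.angle(analytic))          # instantaneous phase (rad)
    envelope = np.abs(analytic)                    # instantaneous amplitude
    inst_freq = np.diff(phase) * sr / (2 * np.pi)  # instantaneous frequency (Hz)

    hop = int(sr * frame_ms / 1000)                # predetermined time interval
    frames = []
    for start in range(0, len(inst_freq) - hop, hop):
        sl = slice(start, start + hop)
        frames.append({
            "time_s": start / sr,
            "phase_rad": float(phase[start] % (2 * np.pi)),
            "mean_freq_hz": float(np.mean(inst_freq[sl])),
            "mean_amplitude": float(np.mean(envelope[sl])),
        })
    return frames
```
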
  • Patent number: 10453479
    Abstract: A system-effected method for synthesizing speech, or recognizing speech including a sequence of expressive speech utterances. The method can be computer-implemented and can include system-generating a speech signal embodying the sequence of expressive speech utterances. Other possible steps include: system-marking the speech signal with a pitch marker indicating a pitch change at or near a first zero amplitude crossing point of the speech signal following a glottal closure point, at a minimum, at a maximum or at another location; system-marking the speech signal with at least one further pitch marker; system-aligning a sequence of prosodically marked text with the pitch-marked speech signal according to the pitch markers; and system-outputting the aligned text or the aligned speech signal, respectively. Computerized systems and stored programs for implementing method embodiments of the invention are also disclosed.
    Type: Grant
    Filed: September 21, 2012
    Date of Patent: October 22, 2019
    Assignee: LESSAC TECHNOLOGIES, INC.
    Inventors: Reiner Wilhelms-Tricarico, Brian Mottershead, Rattima Nitisaroj, Michael Baumgartner, John B. Reichenbach, Gary A. Marple
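
The sketch below illustrates the zero-crossing case of the pitch-marking step in the abstract above: a marker is placed at the first zero-amplitude crossing after each glottal closure instant, and prosodically marked tokens are then mapped onto the markers. Glottal closure detection is assumed to come from an external estimator, and the naive even-spread alignment is only a placeholder, not the alignment procedure claimed in the patent.

```python
# Hedged sketch: pitch markers at the first zero-amplitude crossing after each
# glottal closure instant (GCI); GCIs are assumed to be supplied externally.
import numpy as np

def pitch_markers_after_gci(signal: np.ndarray, gci_samples: list[int]) -> list[int]:
    markers = []
    for gci in gci_samples:
        seg = signal[gci:]
        # indices where the sign changes, i.e. zero-amplitude crossings
        crossings = np.where(np.diff(np.signbit(seg)))[0]
        if crossings.size:
            markers.append(gci + int(crossings[0]) + 1)
    return markers

def align_tokens_to_markers(tokens: list[str], markers: list[int]) -> list[tuple[str, int]]:
    # Placeholder alignment: spread prosodically marked tokens evenly across markers.
    if not tokens or not markers:
        return []
    step = max(1, len(markers) // len(tokens))
    return [(tok, markers[min(i * step, len(markers) - 1)]) for i, tok in enumerate(tokens)]
```
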
  • Publication number: 20130262096
    Abstract: A system-effected method for synthesizing speech, or recognizing speech including a sequence of expressive speech utterances. The method can be computer-implemented and can include system-generating a speech signal embodying the sequence of expressive speech utterances. Other possible steps include: system-marking the speech signal with a pitch marker indicating a pitch change at or near a first zero amplitude crossing point of the speech signal following a glottal closure point, at a minimum, at a maximum or at another location; system-marking the speech signal with at least one further pitch marker; system-aligning a sequence of prosodically marked text with the pitch-marked speech signal according to the pitch markers; and system-outputting the aligned text or the aligned speech signal, respectively. Computerized systems and stored programs for implementing method embodiments of the invention are also disclosed.
    Type: Application
    Filed: September 21, 2012
    Publication date: October 3, 2013
    Applicant: LESSAC TECHNOLOGIES, INC.
    Inventors: Reiner Wilhelms-Tricarico, Brian Mottershead, Rattima Nitisaroj, Michael Baumgartner, John B. Reichenbach, Gary A. Marple
  • Publication number: 20130226569
    Abstract: A computer-implemented method for automatically analyzing, predicting, and/or modifying acoustic units of prosodic human speech utterances for use in speech synthesis or speech recognition. Possible steps include: initiating analysis of acoustic wave data representing the human speech utterances, via the phase state of the acoustic wave data; using one or more phase state defined acoustic wave metrics as common elements for analyzing, and optionally modifying, pitch, amplitude, duration, and other measurable acoustic parameters of the acoustic wave data, at predetermined time intervals; analyzing acoustic wave data representing a selected acoustic unit to determine the phase state of the acoustic unit; and analyzing the acoustic wave data representing the selected acoustic unit to determine at least one acoustic parameter of the acoustic unit with reference to the determined phase state of the selected acoustic unit. Also included are systems for implementing the described and related methods.
    Type: Application
    Filed: March 18, 2013
    Publication date: August 29, 2013
    Applicant: Lessac Technologies, Inc.
    Inventors: Nishant Chandra, Reiner Wilhelms-Tricarico, Rattima Nitisaroj, Brian Mottershead, Gary A. Marple, John B. Reichenbach
  • Patent number: 8401849
    Abstract: A computer-implemented method for automatically analyzing, predicting, and/or modifying acoustic units of prosodic human speech utterances for use in speech synthesis or speech recognition. Possible steps include: initiating analysis of acoustic wave data representing the human speech utterances, via the phase state of the acoustic wave data; using one or more phase state defined acoustic wave metrics as common elements for analyzing, and optionally modifying, pitch, amplitude, duration, and other measurable acoustic parameters of the acoustic wave data, at predetermined time intervals; analyzing acoustic wave data representing a selected acoustic unit to determine the phase state of the acoustic unit; and analyzing the acoustic wave data representing the selected acoustic unit to determine at least one acoustic parameter of the acoustic unit with reference to the determined phase state of the selected acoustic unit. Also included are systems for implementing the described and related methods.
    Type: Grant
    Filed: December 16, 2009
    Date of Patent: March 19, 2013
    Assignee: Lessac Technologies, Inc.
    Inventors: Nishant Chandra, Reiner Wilhelms-Tricarico, Rattima Nitisaroj, Brian Mottershead, Gary A. Marple, John B. Reichenbach
  • Patent number: 8219398
    Abstract: Disclosed are novel embodiments of a speech synthesizer and speech synthesis method for generating human-like speech wherein a speech signal can be generated by concatenation from phonemes stored in a phoneme database. Wavelet transforms and interpolation between frames can be employed to effect smooth morphological fusion of adjacent phonemes in the output signal. The phonemes may have one prosody or set of prosody characteristics and one or more alternative prosodies may be created by applying prosody modification parameters to the phonemes from a differential prosody database. Preferred embodiments can provide fast, resource-efficient speech synthesis with an appealing musical or rhythmic output in a desired prosody style such as reportorial or human interest.
    Type: Grant
    Filed: March 28, 2006
    Date of Patent: July 10, 2012
    Assignee: Lessac Technologies, Inc.
    Inventors: Gary Marple, Nishant Chandra
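
The abstract above describes concatenative synthesis with wavelet-based morphological fusion of adjacent phonemes and differential prosody modification. The sketch below substitutes a plain linear cross-fade for the wavelet fusion step and a per-phoneme gain factor for the prosody modification parameters; it is a simplified stand-in, not the patented method, and the phoneme_db mapping is assumed.

```python
# Simplified concatenation sketch: cross-fade adjacent phoneme waveforms and
# apply a per-phoneme gain as a stand-in for "differential prosody" parameters.
import numpy as np

def synthesize(phoneme_symbols, phoneme_db, prosody_gain=None, fade_samples=64):
    prosody_gain = prosody_gain or {}
    out = np.zeros(0, dtype=np.float32)
    for sym in phoneme_symbols:
        unit = phoneme_db[sym].astype(np.float32) * prosody_gain.get(sym, 1.0)
        if out.size >= fade_samples and unit.size >= fade_samples:
            # linear cross-fade in place of wavelet-domain morphological fusion
            ramp = np.linspace(0.0, 1.0, fade_samples, dtype=np.float32)
            out[-fade_samples:] = out[-fade_samples:] * (1 - ramp) + unit[:fade_samples] * ramp
            unit = unit[fade_samples:]
        out = np.concatenate([out, unit])
    return out
```
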
  • Patent number: 8175879
    Abstract: The inventive system can automatically annotate the relationship of text and acoustic units for the purposes of: (a) predicting how the text is to be pronounced as expressively synthesized speech, and (b) improving the proportion of expressively uttered speech that is correctly identified as text representing the speaker's message. The system can automatically annotate text corpora with the relationships between speech uttered in a particular speaking style and the acoustic units, in terms of the context and content of the text relative to the utterances. The inventive system can use kinesthetically defined expressive speech production phonetics that are recognizable and controllable according to kinesensic feedback principles. In speech synthesis embodiments of the invention, the text annotations can specify how the text is to be expressively pronounced as synthesized speech.
    Type: Grant
    Filed: August 8, 2008
    Date of Patent: May 8, 2012
    Assignee: Lessac Technologies, Inc.
    Inventors: Rattima Nitisaroj, Gary Marple, Nishant Chandra
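
To make the annotation idea in the abstract above concrete, here is a minimal data-structure sketch pairing each text token with acoustic units and contextual features that could drive expressive pronunciation. The field names and example labels are invented for illustration and are not the patent's annotation scheme.

```python
# Hypothetical annotation record linking text tokens to acoustic units and context.
from dataclasses import dataclass, field

@dataclass
class TokenAnnotation:
    token: str                                            # the written word
    acoustic_units: list = field(default_factory=list)    # e.g. phone labels
    speaking_style: str = "reportorial"                   # corpus-level style
    context: dict = field(default_factory=dict)           # position, emphasis, ...

example = TokenAnnotation(
    token="hello",
    acoustic_units=["HH", "AH", "L", "OW"],
    context={"sentence_position": "initial", "emphasis": True},
)
```
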
  • Patent number: 7877259
    Abstract: A method of, and system for, acoustically coding text for use in the synthesis of speech from the text. The method includes marking the text to be spoken with one or more graphic symbols to indicate to a speaker a desired prosody to impart to the spoken text. The markups can include grapheme-phoneme pairs, in each of which a visible prosodic-indicating grapheme is employed with written text and a corresponding digital phoneme is functional in the digital domain. The invention is useful in the generation of appealing, humanized machine speech for a wide range of applications, including voice mail systems, electronically enabled appliances, automobiles, computers, robotic assistants, games and the like, and in spoken books and magazines, drama and other entertainment.
    Type: Grant
    Filed: March 7, 2005
    Date of Patent: January 25, 2011
    Assignee: Lessac Technologies, Inc.
    Inventors: Gary Marple, Sue Ann Park, H. Donald Wilson, Mary Louise B. Wilson, legal representative, Nancy Krebs, Diane Gaary, Barry Kur
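
The sketch below illustrates the grapheme-phoneme pairing idea from the abstract above: a visible prosodic marker attached to a word maps to a machine-readable prosody directive. The marker characters and the directive fields are invented examples, not the notation defined in the patent.

```python
# Hypothetical prosodic graphemes and a parser for marked-up text.
PROSODIC_GRAPHEMES = {
    "^": {"pitch": "+2st"},     # raise pitch two semitones
    "_": {"pitch": "-2st"},     # lower pitch
    "~": {"duration": "x1.3"},  # lengthen the word
}

def parse_marked_text(marked_text: str):
    out = []
    for word in marked_text.split():
        directives = {}
        while word and word[0] in PROSODIC_GRAPHEMES:
            directives.update(PROSODIC_GRAPHEMES[word[0]])
            word = word[1:]
        out.append((word, directives))
    return out

# parse_marked_text("^Welcome to the ~show") ->
# [('Welcome', {'pitch': '+2st'}), ('to', {}), ('the', {}), ('show', {'duration': 'x1.3'})]
```
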
  • Publication number: 20080195391
    Abstract: Disclosed are novel embodiments of a speech synthesizer and speech synthesis method for generating human-like speech wherein a speech signal can be generated by concatenation from phonemes stored in a phoneme database. Wavelet transforms and interpolation between frames can be employed to effect smooth morphological fusion of adjacent phonemes in the output signal. The phonemes may have one prosody or set of prosody characteristics and one or more alternative prosodies may be created by applying prosody modification parameters to the phonemes from a differential prosody database. Preferred embodiments can provide fast, resource-efficient speech synthesis with an appealing musical or rhythmic output in a desired prosody style such as reportorial or human interest.
    Type: Application
    Filed: March 28, 2006
    Publication date: August 14, 2008
    Applicant: LESSAC TECHNOLOGIES, INC.
    Inventors: Gary Marple, Nishant Chandra
  • Patent number: 7280964
    Abstract: In accordance with the present invention, a speech recognition method is disclosed. It uses a microphone to receive audible sounds input by a user into a first computing device having a program with a database consisting of (i) digital representations of known audible sounds and associated alphanumeric representations of the known audible sounds and (ii) digital representations of known audible sounds corresponding to mispronunciations resulting from known classes of mispronounced words and phrases. The method is performed by receiving the audible sounds in the form of the electrical output of the microphone. A particular audible sound to be recognized is converted into a digital representation of the audible sound. The digital representation of the particular audible sound is then compared to the digital representations of the known audible sounds to determine which of those known audible sounds is most likely to be the particular audible sound being compared to the sounds in the database.
    Type: Grant
    Filed: December 31, 2002
    Date of Patent: October 9, 2007
    Assignee: Lessac Technologies, Inc.
    Inventors: H. Donald Wilson, Anthony H. Handal, Gary Marple, Michael Lessac
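
The comparison step in the abstract above can be pictured as a nearest-neighbor match of an input sound's digital representation against stored representations of known sounds, including entries derived from known mispronunciation classes. The feature vectors and Euclidean distance in the sketch below are stand-ins; the abstract does not specify the representation or the distance measure.

```python
# Sketch only: nearest-neighbor matching over an assumed feature-vector database.
import numpy as np

def recognize(input_features: np.ndarray, database: dict[str, np.ndarray]) -> str:
    # database maps an alphanumeric label (word/phrase, possibly a
    # mispronunciation-class variant) to a stored feature vector;
    # assumes a non-empty database
    best_label, best_dist = None, float("inf")
    for label, ref in database.items():
        dist = float(np.linalg.norm(input_features - ref))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label
```
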
  • Patent number: 6963841
    Abstract: In accordance with the present invention, a speech training system is disclosed. It uses a microphone to receive audible sounds input by a user into a first computing device having a program with a database consisting of (i) digital representations of known audible sounds and associated alphanumeric representations of the known audible sounds, and (ii) digital representations of known audible sounds corresponding to mispronunciations resulting from known classes of mispronounced words and phrases. The method is performed by receiving the audible sounds in the form of the electrical output of the microphone. A particular audible sound to be recognized is converted into a digital representation of the audible sound. The digital representation of the particular audible sound is then compared to the digital representations of the known audible sounds to determine which of those known audible sounds is most likely to be the particular audible sound being compared to the sounds in the database.
    Type: Grant
    Filed: January 9, 2003
    Date of Patent: November 8, 2005
    Assignee: Lessac Technology, Inc.
    Inventors: Anthony H. Handal, Gary Marple, H. Donald Wilson, Michael Lessac
  • Patent number: 6865533
    Abstract: A preferred embodiment of the method for converting text to speech using a computing device having a memory is disclosed. The inventive method comprises examining a text to be spoken to an audience for a specific communications purpose, followed by marking up the text according to a phonetic markup system such as the Lessac System pronunciation rule notations. A set of rules is provided to control a text-to-speech generator based on speech principles, such as Lessac principles. Such rules are of the type normally implemented on prior art text-to-speech engines, and control the operation of the software and the characteristics of the speech generated by a computer using the software. A computer is used to speak the marked-up text expressively.
    Type: Grant
    Filed: December 31, 2002
    Date of Patent: March 8, 2005
    Assignee: Lessac Technology Inc.
    Inventors: Edwin R. Addison, H. Donald Wilson, Gary Marple, Anthony H. Handal, Nancy Krebs
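
As a rough pipeline sketch of the two stages the abstract above names, the code below (1) marks up text with pronunciation-rule notations and (2) hands the marked-up text to a rule-driven speech generator. The regex rules, bracketed notation, and the speak() stub are placeholders, not the Lessac System notation or any particular engine's API.

```python
# Hypothetical markup rules and a stub speech-generator call.
import re

MARKUP_RULES = [
    (re.compile(r"\bthe\b(?= [aeiouAEIOU])"), "the[iy]"),   # "the" before a vowel
    (re.compile(r"!"), "![rising-energy]"),                  # exclamation marking
]

def mark_up(text: str) -> str:
    for pattern, replacement in MARKUP_RULES:
        text = pattern.sub(replacement, text)
    return text

def speak(marked_text: str) -> None:
    # Placeholder for a rule-driven text-to-speech engine call.
    print("SYNTHESIZE:", marked_text)

speak(mark_up("Read the article aloud!"))
```
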
  • Patent number: 6847931
    Abstract: A preferred embodiment of the method for converting text to speech using a computing device having a memory is disclosed. Text, being made up of a plurality of words, is received into the memory of the computing device. A plurality of phonemes are derived from the text. Each of the phonemes is associated with a prosody record based on a database of prosody records associated with a plurality of words. A first set of artificial intelligence rules is applied to determine context information associated with the text. The context-influenced prosody changes for each of the phonemes are determined. Then a second set of rules, based on Lessac theory, is applied to determine Lessac-derived prosody changes for each of the phonemes. The prosody record for each of the phonemes is amended in response to the context-influenced prosody changes and the Lessac-derived prosody changes. Then sound information associated with the phonemes is read from the memory.
    Type: Grant
    Filed: January 29, 2002
    Date of Patent: January 25, 2005
    Assignee: Lessac Technology, Inc.
    Inventors: Edwin R. Addison, H. Donald Wilson, Gary Marple, Anthony H. Handal, Nancy Krebs
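
A minimal sketch of the per-phoneme flow in the abstract above: look up a base prosody record, apply a first (context) rule set and then a second (Lessac-style) rule set as amendments, and use the amended record together with sound data fetched from memory. All rule contents, field names, and example values here are invented for illustration.

```python
# Hypothetical per-phoneme prosody-record pipeline with two rule passes.
def synthesize_word(phonemes, prosody_db, context_rules, lessac_rules, sound_db):
    output = []
    for ph in phonemes:
        record = dict(prosody_db.get(ph, {"pitch": 1.0, "duration": 1.0}))
        for rule in context_rules:        # first pass: context-influenced changes
            record.update(rule(ph, record))
        for rule in lessac_rules:         # second pass: Lessac-derived changes
            record.update(rule(ph, record))
        output.append((sound_db[ph], record))
    return output

# Example with trivial placeholder rules and data:
prosody_db = {"AH": {"pitch": 1.0, "duration": 1.0}}
sound_db = {"AH": b"...pcm bytes..."}
raise_question = lambda ph, rec: {"pitch": rec["pitch"] * 1.1}
print(synthesize_word(["AH"], prosody_db, [raise_question], [], sound_db))
```
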
  • Publication number: 20030229497
    Abstract: In accordance with the present invention, a speech recognition method is disclosed. It uses a microphone to receive audible sounds input by a user into a first computing device having a program with a database consisting of (i) digital representations of known audible sounds and associated alphanumeric representations of the known audible sounds and (ii) digital representations of known audible sounds corresponding to mispronunciations resulting from known classes of mispronounced words and phrases. The method is performed by receiving the audible sounds in the form of the electrical output of the microphone. A particular audible sound to be recognized is converted into a digital representation of the audible sound. The digital representation of the particular audible sound is then compared to the digital representations of the known audible sounds to determine which of those known audible sounds is most likely to be the particular audible sound being compared to the sounds in the database.
    Type: Application
    Filed: December 31, 2002
    Publication date: December 11, 2003
    Applicant: LESSAC TECHNOLOGY INC.
    Inventors: H. Donald Wilson, Anthony H. Handal, Gary Marple, Michael Lessac