Speech Patents (Class 434/185)
  • Patent number: 12038973
    Abstract: A relation visualizing apparatus according to an embodiment includes a memory and a processor configured to: receive, as input, dialog content data representing the content of a dialog that two or more persons have had about two or more predetermined topic items; generate a graph of nodes and edges by using the topic items as the nodes and using utterer IDs, which identify persons who have spoken about two different topic items, as the edges connecting the nodes corresponding to those two topic items; and display the generated graph.
    Type: Grant
    Filed: January 17, 2020
    Date of Patent: July 16, 2024
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Yoko Ishii, Momoko Nakatani, Ai Nakane, Yumiko Matsuura
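    The node/edge construction that abstract describes can be sketched in a few lines; this is a minimal illustration using plain dictionaries, and the dialog data, topic names, and utterer IDs below are invented, not taken from the patent.

    ```python
    # Sketch of the graph construction: topics become nodes, and an utterer
    # who spoke about two different topics becomes an edge (labeled with the
    # utterer ID) connecting those two topic nodes.

    def build_topic_graph(utterances):
        """utterances: list of (utterer_id, topic) pairs."""
        nodes = {topic for _, topic in utterances}
        by_utterer = {}  # topics each utterer touched
        for uid, topic in utterances:
            by_utterer.setdefault(uid, set()).add(topic)
        edges = []
        for uid, topics in by_utterer.items():
            topics = sorted(topics)
            for i in range(len(topics)):
                for j in range(i + 1, len(topics)):
                    edges.append((topics[i], topics[j], uid))
        return nodes, edges

    dialog = [("u1", "budget"), ("u1", "schedule"), ("u2", "schedule")]
    nodes, edges = build_topic_graph(dialog)
    # u1 spoke about both topics, so one edge labeled "u1" connects them;
    # u2 touched only one topic and contributes no edge
    ```

    A graph library such as NetworkX would serve the same purpose; the dictionary form just keeps the sketch dependency-free.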
  • Patent number: 11817005
    Abstract: Approaches presented herein enable delivery of real-time internet of things (IoT) feedback to optimize a public speaking performance. More specifically, a set of data representing a speaking performance of a user is captured and analyzed to generate a speaking performance profile of the user. This profile is compared to a reference speaking performance profile and, based on the comparison, a set of performance improvement strategies for the user is generated. A performance improvement strategy is selected from the set of performance improvement strategies based on an identification of an availability of a set of IoT devices for delivery of at least one of the strategies. Instructions are then communicated, responsive to the captured speaking performance associated with the user, to an available IoT device to deliver the selected performance improvement strategy to the user through an output user interface of the available IoT device during the speaking performance.
    Type: Grant
    Filed: October 31, 2018
    Date of Patent: November 14, 2023
    Assignee: International Business Machines Corporation
    Inventor: Roxana Monge Nunez
  • Patent number: 11688106
    Abstract: Audio of a user speaking is gathered while spatial data of the user's face is gathered. Positions of elements of the face are identified, where the relative positions of those elements cause a plurality of qualities of the user's voice. A subset of the positions is identified as having caused a detected first quality of the user's voice during the period of time. Alternate positions of one or more elements are identified that are determined to cause the user's voice to have a second quality rather than the first quality. A graphical representation of the face depicting one or more adjustments from the subset of positions to the alternate positions is provided to the user.
    Type: Grant
    Filed: March 29, 2021
    Date of Patent: June 27, 2023
    Assignee: International Business Machines Corporation
    Inventors: Trudy L. Hewitt, Christian Compton, Merrilee Freeburg Tomlinson, Christina Lynn Wetli, Jeremy R. Fox
  • Patent number: 11523245
    Abstract: Some implementations may involve receiving, via an interface system, personnel location data indicating a location of at least one person and receiving, from an orientation system, headset orientation data corresponding with the orientation of a headset. First environmental element location data, indicating a location of at least a first environmental element, may be determined. Based at least in part on the headset orientation data, the personnel location data and the first environmental element location data, headset coordinate locations of at least one person and at least the first environmental element in a headset coordinate system corresponding with the orientation of the headset may be determined. An apparatus may be caused to provide spatialization indications of the headset coordinate locations. Providing the spatialization indications may involve controlling a speaker system to provide environmental element sonification corresponding with at least the first environmental element location data.
    Type: Grant
    Filed: February 10, 2021
    Date of Patent: December 6, 2022
    Assignee: Dolby Laboratories Licensing Corporation
    Inventor: Poppy Anne Carrie Crum
  • Patent number: 11438722
    Abstract: Systems, apparatuses and methods may provide a way to render augmented reality (AR) and/or virtual reality (VR) sensory enhancements using ray tracing. More particularly, systems, apparatuses and methods may provide a way to normalize environment information captured by multiple capture devices, and to calculate, for an observer, the vector paths of sound sources or sensed events. The systems, apparatuses and methods may detect and/or manage one or more capture devices and assign one or more of the capture devices based on one or more conditions to provide the observer an immersive VR/AR experience.
    Type: Grant
    Filed: December 28, 2020
    Date of Patent: September 6, 2022
    Assignee: Intel Corporation
    Inventors: Joydeep Ray, Travis T. Schluessler, Prasoonkumar Surti, John H. Feit, Nikos Kaburlasos, Jacek Kwiatkowski, Abhishek R. Appu, James M. Holland, Jeffery S. Boles, Jonathan Kennedy, Louis Feng, Atsuo Kuwahara, Barnan Das, Narayan Biswal, Stanley J. Baran, Gokcen Cilingir, Nilesh V. Shah, Archie Sharma, Mayuresh M. Varerkar
  • Patent number: 11404070
    Abstract: A method for phoneme identification. The method includes receiving an audio signal from a speaker, performing initial processing comprising filtering the audio signal to remove audio features, the initial processing resulting in a modified audio signal, transmitting the modified audio signal to a phoneme identification method and a phoneme replacement method to further process the modified audio signal, and transmitting the modified audio signal to a speaker. Also, a system for identifying and processing audio signals. The system includes at least one speaker, at least one microphone, and at least one processor, wherein the processor processes audio signals received using a method for phoneme replacement.
    Type: Grant
    Filed: February 14, 2020
    Date of Patent: August 2, 2022
    Assignee: DEKA PRODUCTS LIMITED PARTNERSHIP
    Inventors: Dean Kamen, Derek G. Kane
  • Patent number: 11386918
    Abstract: This disclosure generally relates to a system and method for assessing reading quality during a reading session. In one embodiment, a system is disclosed that analyzes speech corresponding to a reading session for the duration of the session, the consistency of the reading sessions, the speed of the speech, the engagement level of the parent, and the environment in which the session takes place. In another embodiment, a method is disclosed for calculating an objective score for a reading session, communicating the score to a parent, and providing suggestions and challenges for improving future reading sessions.
    Type: Grant
    Filed: February 27, 2019
    Date of Patent: July 12, 2022
    Assignee: The University of Chicago
    Inventors: Jon Boggiano, Jonathan Simon, Alexandra Yorke, Rodrigo Rallo, Chris Boggiano, Nicola Boyd, Phil Balliet
  • Patent number: 11252518
    Abstract: A media system and a method of using the media system to accommodate hearing loss of a user, are described. The method includes selecting a personal level-and-frequency dependent audio filter that corresponds to a hearing loss profile of the user. The personal level-and-frequency dependent audio filter can be one of several level-and-frequency-dependent audio filters having respective average gain levels and respective gain contours. An accommodative audio output signal can be generated by applying the personal level-and-frequency dependent audio filter to an audio input signal to enhance the audio input signal based on an input level and an input frequency of the audio input signal. The audio output signal can be played by an audio output device to deliver speech or music that the user perceives clearly, despite the hearing loss of the user. Other aspects are also described and claimed.
    Type: Grant
    Filed: May 11, 2020
    Date of Patent: February 15, 2022
    Assignee: APPLE INC.
    Inventors: John Woodruff, Yacine Azmi, Ian M. Fisch, Jing Xia
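    The hearing-loss accommodation abstract above hinges on a gain that depends on both input level and input frequency. The toy table below illustrates that idea only; the band edges, level threshold, and gain values are invented for this sketch and are not from the patent.

    ```python
    # Toy level-and-frequency dependent gain: quiet sounds and high
    # frequencies get more boost, mirroring a typical high-frequency
    # hearing-loss profile. All numbers here are illustrative.

    GAIN_DB = {
        # band upper edge in Hz -> gain (dB) for quiet vs. loud input
        2000: {"quiet": 10, "loud": 2},
        8000: {"quiet": 25, "loud": 8},
    }

    def gain_for(freq_hz, level_db):
        """Look up a gain by frequency band and input level."""
        for edge in sorted(GAIN_DB):
            if freq_hz <= edge:
                table = GAIN_DB[edge]
                return table["quiet"] if level_db < 50 else table["loud"]
        return 0  # outside the modeled bands: no boost

    # Quiet high-frequency speech receives the largest boost.
    boost = gain_for(4000, 40)
    ```

    A real accommodative filter would apply smoothly interpolated gain contours per band rather than a hard lookup, but the level-and-frequency dependence is the same idea.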
  • Patent number: 11222633
    Abstract: The present invention improves a sense of participation in a topic and enables a dialogue to continue for a long time. A dialogue system 12 includes at least an input part 1 that receives a user's utterance and a presentation part 5 that presents an utterance. In an utterance receiving step, the input part 1 receives an utterance performed by the user. In a first presentation step, the presentation part 5 presents an utterance determined based on scenarios stored in advance. In a second presentation step, the presentation part 5 presents the utterance determined based on the user's utterance contents. A dialogue control part 8 performs control to execute a dialogue at least including a first dialogue flow which is a dialogue including the utterance receiving step and the first presentation step at least one time respectively based on a predetermined scenario, and a second dialogue flow which is a dialogue including the utterance receiving step and the second presentation step at least one time respectively.
    Type: Grant
    Filed: May 19, 2017
    Date of Patent: January 11, 2022
    Assignees: NIPPON TELEGRAPH AND TELEPHONE CORPORATION, OSAKA UNIVERSITY
    Inventors: Hiroaki Sugiyama, Toyomi Meguro, Junji Yamato, Yuichiro Yoshikawa, Hiroshi Ishiguro
  • Patent number: 11210968
    Abstract: A computer system interacts with a user having a behavioral state. An activity performed by an entity with a behavioral state is determined. A virtual character corresponding to the entity and performing the determined activity is generated and displayed. A mental state of the entity responsive to the virtual character is detected. In response to detection of a positive mental state, one or more natural language terms corresponding to the activity performed by the virtual character are provided to the entity. Embodiments of the present invention further include a method and program product for interacting with a user having a behavioral state in substantially the same manner described above.
    Type: Grant
    Filed: September 18, 2018
    Date of Patent: December 28, 2021
    Assignee: International Business Machines Corporation
    Inventors: Lawrence A. Clevenger, Stefania Axo, Leigh Anne H. Clevenger, Krishna R. Tunga, Mahmoud Amin, Bryan Gury, Christopher J. Penny, Mark C. Wallen, Zhenxing Bi, Yang Liu
  • Patent number: 11170663
    Abstract: One or more implementations allow for systems, methods, and devices for teaching and/or assessing one or more spoken language skills through analysis of one or more pronunciation characteristics of one or more individual language components of teaching string audio sample data that corresponds to the user's speaking of the teaching string.
    Type: Grant
    Filed: September 19, 2017
    Date of Patent: November 9, 2021
    Assignee: SpeechAce LLC
    Inventors: Chun Ho Cheung, Ahmed El-Shimi, Abhishek Gupta
  • Patent number: 11158210
    Abstract: A method, computer system, and computer program product for a cognitive, real-time feedback speaking coach are provided. The embodiment may include capturing a plurality of text from a prepared document. The embodiment may also include capturing a plurality of user voice data and a plurality of user movement data. The embodiment may further include calculating a speaker rating based on the plurality of received user voice data, the plurality of received user movement data, and the plurality of captured text. The embodiment may also include identifying one or more points of improvement based on the calculated speaker rating. The embodiment may further include alerting a user of the one or more identified points of improvement.
    Type: Grant
    Filed: November 8, 2017
    Date of Patent: October 26, 2021
    Assignee: International Business Machines Corporation
    Inventors: Catherine H. Crawford, Eleni Pratsini, Ramya Raghavendra, Aisha Walcott
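    The speaking-coach abstract combines voice data, movement data, and captured text into a single speaker rating. One plausible reading of that step is a weighted score over normalized features; the feature names and weights below are invented for illustration.

    ```python
    # Illustrative speaker-rating step: combine voice-, movement-, and
    # text-derived features (each scored in [0, 1]) into one rating.
    # Feature names and weights are assumptions, not from the patent.

    WEIGHTS = {"pace_ok": 0.4, "gestures_ok": 0.3, "on_script": 0.3}

    def speaker_rating(features):
        """Weighted average of feature scores, scaled to [0, 100]."""
        total = sum(WEIGHTS[k] * features[k] for k in WEIGHTS)
        return round(100 * total)

    rating = speaker_rating({"pace_ok": 1.0, "gestures_ok": 0.5, "on_script": 1.0})
    # a low component (here, gestures) pulls the rating down and marks
    # that feature as a point of improvement to alert the user about
    ```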
  • Patent number: 11017693
    Abstract: A method for enhancing speech performance includes communicating, via an input/output (I/O) device, speech data of a patient with speech problems, segmenting the speech data, generating one or more feature vectors based on at least the segmented speech data, determining whether the one or more feature vectors match with one or more recognition objects pre-trained using clinical data of one or more other patients, determining a speech disorder based on a matched result between the one or more feature vectors and the one or more recognition objects, and communicating, via the I/O device, one or more ameliorative actions for mitigating the determined speech disorder.
    Type: Grant
    Filed: November 29, 2017
    Date of Patent: May 25, 2021
    Assignee: International Business Machines Corporation
    Inventors: Michael S. Gordon, Roxana Monge Nunez, Clifford A. Pickover, Maja Vukovic
  • Patent number: 10997970
    Abstract: A hearing aid system presents a hearing impaired user with customized enhanced intelligibility sound in a preferred language. The system includes a model trained with a set of source speech data representing sampling from a speech population relevant to the user. The model is also trained with a set of corresponding alternative articulation of source data, pre-defined or algorithmically constructed during an interactive session with the user. The model creates a set of selected target speech training data from the set of alternative articulation data that is preferred by the user as being satisfactorily intelligible and clear. The system includes a machine learning model, trained to shift incoming source speech data to a preferred variant of the target data that the hearing aid system presents to the user.
    Type: Grant
    Filed: July 27, 2020
    Date of Patent: May 4, 2021
    Inventor: Abbas Rafii
  • Patent number: 10976999
    Abstract: Disclosed herein is a mixed reality application that uses a multi-channel audio input to identify the character and origin of a given sound, then presents a visual representation of the given sound on a near eye display. The visual representation includes a vector to the source of the sound, along with graphical elements that describe various attributes of the given sound, including magnitude, directionality, source, and threat level. Where the source of the given sound is moving, the visual representation shifts to illustrate the movement.
    Type: Grant
    Filed: June 17, 2019
    Date of Patent: April 13, 2021
    Assignee: Chosen Realities, LLC
    Inventor: Eric Browy
  • Patent number: 10950239
    Abstract: Recognizing a user's speech is a computationally demanding task. If a user calls a destination server, little may be known about the user or the user's speech profile. The user's source system (device and/or server) may have an extensive profile of the user. As provided herein, a source device may provide translated text and/or speech attributes to a destination server. As a benefit, the recognition algorithm may be well tuned to the user and provide the recognized content to the destination. Additionally, the destination may provide domain attributes to allow the source recognition engine to better recognize the spoken content.
    Type: Grant
    Filed: October 22, 2015
    Date of Patent: March 16, 2021
    Assignee: Avaya Inc.
    Inventors: Lin Lin, Ping Lin
  • Patent number: 10939204
    Abstract: One embodiment of the present application sets forth a computer-implemented method that includes receiving, from a first microphone, a first input acoustic signal, generating a first audio spectrum from at least the first input acoustic signal, wherein the first audio spectrum includes a set of time-frequency bins, and selecting a first time-frequency bin from the set based on a first local space-domain distance (LSDD) computed for the first time-frequency bin.
    Type: Grant
    Filed: April 16, 2020
    Date of Patent: March 2, 2021
    Assignee: FACEBOOK TECHNOLOGIES, LLC
    Inventors: Vladimir Tourbabin, Ravish Mehra
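    The time-frequency-bin selection described in this abstract (and in the two related Facebook Technologies patents further down the list) can be caricatured as: each bin carries a local space-domain distance (LSDD) per candidate direction, and bins whose best direction fits sharply are preferred. The sketch below assumes that a smaller distance means a better direction fit; the bin IDs, angles, and distance values are invented, and the real LSDD is computed from microphone-array steering vectors.

    ```python
    # Minimal sketch of LSDD-based bin selection: pick the time-frequency
    # bin whose best candidate direction has the smallest distance.
    # All numbers are illustrative, not from the patent.

    def select_bin(lsdd):
        """lsdd: {bin_id: {direction_deg: distance}}.
        Return the bin with the smallest best-direction distance."""
        def best_distance(bin_id):
            return min(lsdd[bin_id].values())
        return min(lsdd, key=best_distance)

    bins = {
        "t0_f1": {0: 0.9, 90: 0.8},   # diffuse: no clear direction
        "t0_f2": {0: 0.1, 90: 0.7},   # strong fit toward 0 degrees
    }
    chosen = select_bin(bins)
    ```

    Estimating the arrival direction would then use the selected bin's best direction (here, 0 degrees), as in the weighted-LSDD variant of patent 10715909.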
  • Patent number: 10930274
    Abstract: In an approach to analyzing a sound file, one or more computer processors identify a word from the words in the sound file, determine a dialect of spoken language for the word, and determine a different language in which to display the word. The one or more computer processors retrieve one or more phonological rules based on the determined spoken language of the word and the determined display language, create a pronunciation map based on the retrieved phonological rules, generate a set of pronunciation hints based on the pronunciation map, and display the set of pronunciation hints.
    Type: Grant
    Filed: November 30, 2018
    Date of Patent: February 23, 2021
    Assignee: International Business Machines Corporation
    Inventors: Michael Donati, Nadiya Kochura, Scott L. Sachs, Fang Lu
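    The pronunciation-map step above can be illustrated as per-phoneme rewriting: phonological rules map phonemes of the spoken language onto the closest phonemes available in the display language. The rule table and example word below are invented for this sketch.

    ```python
    # Illustrative pronunciation map: rules rewrite source-language phonemes
    # into display-language approximations; unmapped phonemes pass through.
    # The rules and phoneme labels are invented examples.

    RULES_EN_TO_ES = {"TH": "T", "Z": "S"}  # e.g. Spanish lacks English "th"

    def pronunciation_hint(phonemes, rules):
        """Apply per-phoneme phonological rules to produce a display hint."""
        return [rules.get(p, p) for p in phonemes]

    # "think" ~ TH-IH-NG-K, hinted for a Spanish-language display
    hint = pronunciation_hint(["TH", "IH", "NG", "K"], RULES_EN_TO_ES)
    ```

    Real phonological rules are context-sensitive (a phoneme's mapping can depend on its neighbors), so a production map would match rule patterns rather than single symbols.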
  • Patent number: 10878825
    Abstract: The present disclosure provides methods, systems, devices and computer program products for authenticating a user based on a comparison of audio signals to a stored voice model for an authorised user. In one aspect, a method comprises: obtaining a first audio signal that comprises a representation of a bone-conducted signal, wherein the bone-conducted signal is conducted via at least part of the user's skeleton; obtaining a second audio signal that comprises a representation of an air-conducted signal; and, responsive to a determination that the first audio signal comprises a voice signal, enabling updates to the stored voice model for the authorised user based on the second audio signal.
    Type: Grant
    Filed: March 21, 2018
    Date of Patent: December 29, 2020
    Assignee: Cirrus Logic, Inc.
    Inventor: John Paul Lesso
  • Patent number: 10845954
    Abstract: A user such as a vision-impaired person watching an audio video device (AVD) such as a TV may be given the option to define, in his user profile, whether he prefers options (such as channel listings on an electronic program guide (EPG)) to be presented in a two-dimensional matrix format or a one-dimensional list format.
    Type: Grant
    Filed: July 11, 2017
    Date of Patent: November 24, 2020
    Assignee: Sony Corporation
    Inventors: Peter Shintani, Brant Candelore, Mahyar Nejat
  • Patent number: 10735402
    Abstract: Systems and methods for automated data packet selection and delivery are disclosed herein. The system can include a memory containing a data packet database including data packets for delivery to a user; and a user profile database including information identifying at least one user. The system can include a user device and one or several servers. The one or several servers can: receive a request for delivery of a set of data packets to a user via the user device; identify potential data packets for delivery to the user via the user device; determine a probability of the user providing a desired response to each of the potential data packets; weight the data packets according to weighting data and the determined probability; select a set of data packets from the potential data packets; and provide the set of data packets to the user via the user device.
    Type: Grant
    Filed: August 3, 2017
    Date of Patent: August 4, 2020
    Assignee: PEARSON EDUCATION, INC.
    Inventors: Jacob Anderson, Daniel Ensign
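    The server-side selection step in the abstract above (score each candidate data packet by the probability of a desired response, weight it, and pick a set) can be sketched directly; the packet IDs, probabilities, and weights below are illustrative.

    ```python
    # Sketch of weighted data-packet selection: rank candidates by
    # probability-of-desired-response times an external weight, take top-k.
    # The example pool is invented.

    def select_packets(candidates, k):
        """candidates: list of (packet_id, probability, weight)."""
        scored = sorted(candidates, key=lambda c: c[1] * c[2], reverse=True)
        return [pid for pid, _, _ in scored[:k]]

    pool = [("p1", 0.9, 1.0), ("p2", 0.5, 2.5), ("p3", 0.4, 1.0)]
    chosen = select_packets(pool, 2)  # p2 scores 1.25, p1 scores 0.9
    ```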
  • Patent number: 10715909
    Abstract: One embodiment of the present application sets forth a computer-implemented method that includes receiving, from a first microphone, a first input acoustic signal, generating a first audio spectrum from at least the first input acoustic signal, where the first audio spectrum includes a set of time-frequency bins, for each time-frequency bin included in the set of time-frequency bins, computing a weighted local space-domain distance (LSDD) spectrum value based on a portion of the first audio spectrum that is included in the time-frequency bin, generating a combined spectrum value based on a set of the weighted LSDD spectrum values computed for the set of time-frequency bins, and determining a first estimated direction of the first input acoustic signal based on the combined spectrum value.
    Type: Grant
    Filed: December 10, 2018
    Date of Patent: July 14, 2020
    Assignee: FACEBOOK TECHNOLOGIES, LLC
    Inventors: Vladimir Tourbabin, Ravish Mehra
  • Patent number: 10659875
    Abstract: One embodiment of the present application sets forth a computer-implemented method that includes receiving, from a first microphone, a first input acoustic signal, generating a first audio spectrum from at least the first input acoustic signal, wherein the first audio spectrum includes a set of time-frequency bins, and selecting a first time-frequency bin from the set based on a first local space-domain distance (LSDD) computed for the first time-frequency bin.
    Type: Grant
    Filed: April 6, 2018
    Date of Patent: May 19, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Vladimir Tourbabin, Ravish Mehra
  • Patent number: 10573297
    Abstract: Systems and methods of script identification in audio data. The audio data is segmented into a plurality of utterances. A script model representative of a script text is obtained. The plurality of utterances are decoded with the script model. A determination is made whether the script text occurred in the audio data.
    Type: Grant
    Filed: July 22, 2016
    Date of Patent: February 25, 2020
    Assignee: Verint Systems Ltd.
    Inventors: Jeffrey Michael Iannone, Ron Wein, Omer Ziv
  • Patent number: 10556087
    Abstract: A method of providing repetitive motion therapy comprising providing access to audio content; selecting audio content for delivery to a patient; performing an analysis on the selected audio content, the analysis identifying audio features of the selected audio content, and extracting rhythmic and structural features of the selected audio content; performing an entrainment suitability analysis on the selected audio content; generating entrainment assistance cue(s) to the selected audio content, the assistance cue(s) including a sound added to the audio content; applying the assistance cues to the audio content simultaneously with playing the selected audio content; evaluating a therapeutic effect on the patient, wherein the selected audio content continues to play when a therapeutic threshold is detected, and a second audio content is selected for delivery to the patient when a therapeutic threshold is not detected.
    Type: Grant
    Filed: July 24, 2018
    Date of Patent: February 11, 2020
    Assignee: MEDRHYTHMS, INC.
    Inventors: Owen McCarthy, Brian Harris, Alex Kalpaxis, David Guerette
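    The entrainment-cue step in the abstract above aligns added cue sounds with the rhythmic structure extracted from the selected audio. A minimal sketch, assuming beat times have already been extracted and that cues are placed at beats rescaled to a target movement tempo (the times and scale factor are invented):

    ```python
    # Sketch of assistance-cue placement: scale extracted beat times toward
    # the patient's target cadence and emit a cue at each scaled time.
    # Beat times and the tempo scale are illustrative.

    def cue_times(beat_times, tempo_scale=1.0):
        """Return cue onset times (seconds), rescaled to the target tempo."""
        return [round(t * tempo_scale, 3) for t in beat_times]

    beats = [0.5, 1.0, 1.5, 2.0]          # extracted from the audio
    cues = cue_times(beats, tempo_scale=0.9)  # slightly slower target cadence
    ```

    In the patented system these cues would be mixed into the playing audio, and the therapeutic effect on the patient's movement decides whether playback continues.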
  • Patent number: 10542929
    Abstract: Techniques, methods, systems, devices, and computer readable media are disclosed for identifying a feature in a user's oral cavity, tracking the feature as it changes, and determining a condition based on the tracking of the feature. The user can use the device's interface for activities such as biofeedback, controlling functions of the device and other devices, and receiving information from the device or external devices.
    Type: Grant
    Filed: February 23, 2017
    Date of Patent: January 28, 2020
    Inventor: Dustin Ryan Kimmel
  • Patent number: 10477334
    Abstract: A method and apparatus for evaluating the performance of an audio device by inputting into the device an audio signal having a waveform in which a plurality of waves with different frequency components are superimposed, comparing the waveform before the input with the waveform after the input, and finding the degree of conformity between them. The audio device is characterized in that, with sound field correction as a precondition, low-pitch ranges can be reproduced by using numerous small-diameter speakers, a single one of which is insufficient to reproduce low-pitch ranges despite a good group delay characteristic, and outstanding waveform reproducibility can be achieved by covering the periphery of the speakers with sound-absorbent material so as to remove noise emitted by surfaces other than the front surface of the cone paper.
    Type: Grant
    Filed: January 7, 2016
    Date of Patent: November 12, 2019
    Inventor: Setuo Aniya
  • Patent number: 10468019
    Abstract: A method and system for automatic speech recognition using selection of speech models based on input characteristics are disclosed herein. The method includes obtaining speech data from a speaker via a microphone or an audio upload. The system and method select the best speech recognition model to automatically decode the input speech and continuously update the models in a database based on the user's speech abilities.
    Type: Grant
    Filed: October 27, 2017
    Date of Patent: November 5, 2019
    Assignee: Kadho, Inc.
    Inventors: Dhonam Pemba, Kaveh Azartash
  • Patent number: 10468017
    Abstract: Methods and systems are provided for a speech system of a vehicle. In particular, a method is taught for associating a speech utterance with a voice command in response to a failed voice control attempt followed by a successful voice control attempt.
    Type: Grant
    Filed: December 14, 2017
    Date of Patent: November 5, 2019
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Ron M Hecht, Yael Shmueli Friedland, Ariel Telpaz, Omer Tsimhoni, Peggy Wang
  • Patent number: 10431116
    Abstract: Techniques for leveraging the capabilities of wearable mobile technology to collect data and to provide real-time feedback to an orator about his/her performance and/or audience interaction are provided. In one aspect, a method for providing real-time feedback to a speaker making a presentation to an audience includes the steps of: collecting real-time data from the speaker during the presentation, wherein the data is collected via a mobile device worn by the speaker; analyzing the real-time data collected from the speaker to determine whether corrective action is needed to improve performance; and generating a real-time alert to the speaker suggesting the corrective action if the real-time data indicates that corrective action is needed to improve performance, otherwise continuing to collect data from the speaker in real-time. Real-time data may also be collected from members of the audience and/or from other speakers (if present) via wearable mobile devices.
    Type: Grant
    Filed: December 10, 2015
    Date of Patent: October 1, 2019
    Assignee: International Business Machines Corporation
    Inventors: Benjamin D. Briggs, Lawrence A. Clevenger, Leigh Anne H. Clevenger, Jonathan H. Connell, II, Nalini K. Ratha, Michael Rizzolo
  • Patent number: 10390137
    Abstract: An example non-transitory computer-readable medium includes instructions. When executed by a processor, the instructions cause the processor to remove nondominant frequencies from a low frequency portion of an audio signal. The instructions also cause the processor to apply non-linear processing to a remainder of the low frequency portion to generate a plurality of harmonics. The instructions cause the processor to insert the plurality of harmonics into an audio output corresponding to a high frequency portion of the audio signal. The audio output is to be provided to an audio output device.
    Type: Grant
    Filed: November 4, 2016
    Date of Patent: August 20, 2019
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventor: Sunil Bharitkar
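    The harmonics step above is the classic "missing fundamental" bass-enhancement technique: a nonlinearity applied to the low band generates energy at integer multiples of the fundamental, and those harmonics are mixed into the high-frequency output so small speakers can suggest bass they cannot reproduce. The sketch below uses half-wave rectification as the nonlinearity; the sample rate and frequencies are illustrative, and the patent does not specify this particular nonlinearity.

    ```python
    import math

    # Sketch of harmonic generation: half-wave rectifying a low-frequency
    # tone creates components at 2x, 3x, ... the fundamental, which can be
    # inserted into the high-band output. Numbers are illustrative.

    RATE = 8000  # samples per second

    def tone(freq, n):
        """n samples of a unit sine at freq Hz."""
        return [math.sin(2 * math.pi * freq * i / RATE) for i in range(n)]

    def half_wave(x):
        """Nonlinearity: keep positive half-cycles only."""
        return [s if s > 0 else 0.0 for s in x]

    def power_at(x, freq):
        """Signal power at one frequency (single-bin DFT)."""
        re = sum(s * math.cos(2 * math.pi * freq * i / RATE) for i, s in enumerate(x))
        im = sum(s * math.sin(2 * math.pi * freq * i / RATE) for i, s in enumerate(x))
        return re * re + im * im

    low = tone(100, RATE)   # 1 s of a 100 Hz fundamental
    harm = half_wave(low)   # now carries 200 Hz, 300 Hz, ... harmonics
    ```

    The ear reconstructs the 100 Hz pitch from the 200/300/400 Hz pattern even when the output device cannot radiate 100 Hz at all, which is why the patent removes nondominant low frequencies first and then inserts only the generated harmonics into the high-band output.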
  • Patent number: 10255913
    Abstract: A system and method of processing disfluent speech at an automatic speech recognition (ASR) system includes: receiving speech from a speaker via a microphone; determining the received speech includes disfluent speech; accessing a disfluent speech grammar or acoustic model in response to the determination; and processing the received speech using the disfluent speech grammar.
    Type: Grant
    Filed: February 17, 2016
    Date of Patent: April 9, 2019
    Assignee: GM Global Technology Operations LLC
    Inventors: Xufang Zhao, Gaurav Talwar
  • Patent number: 10111013
    Abstract: Methods and devices are provided for processing sound signals, localizing sound signals corresponding to one or more sound sources, and rendering, on a wearable display device, an acoustic visualization corresponding to localized sound sources. A wearable visualization device may include two or more microphones for detecting sounds from one or more sound sources, and display devices for displaying the acoustic visualizations, optionally in a stereographic manner. A sound source may be located by processing the sound signals recorded by the microphones to localize sound signals corresponding to a given sound source, and processing the localized sound signals to identify the location of the sound source. The acoustic visualization may be a frequency-domain visualization, and may involve a mapping of frequency to color. The acoustic visualization devices and methods provided herein may assist in training the human brain to comprehend sound visualization signals as the sound signal itself.
    Type: Grant
    Filed: January 24, 2014
    Date of Patent: October 23, 2018
    Assignee: SENSE INTELLIGENT
    Inventor: Hai Hu
  • Patent number: 10102770
    Abstract: A method of demonstrating bacteria removal from the tongue, the method comprising: providing a simulated tongue substrate comprising a plurality of projections wherein the plurality of projections are arranged to simulate the surface of a human tongue; applying a film to the surface of the simulated tongue substrate; exposing the simulated tongue substrate to a liquid; agitating the simulated tongue substrate and the liquid to at least partially remove the film, wherein a liquid-film mixture is formed.
    Type: Grant
    Filed: June 20, 2016
    Date of Patent: October 16, 2018
    Assignee: The Procter & Gamble Company
    Inventors: Jason William Newlon, Linda M. Bayuk, Melba Highley, Carrita Anne Hightower, Melissa Patterson, Debra Kay Williams
  • Patent number: 10102771
    Abstract: A method and a device for learning a language and a computer readable recording medium are provided. The method includes the following steps. An input voice from a voice receiver is transformed into an input sentence according to a grammar rule. Whether the input sentence is the same as a learning sentence displayed on a display is determined. If the input sentence is different from the learning sentence, ancillary information containing at least one error word in the input sentence that differs from the learning sentence is generated.
    Type: Grant
    Filed: February 13, 2014
    Date of Patent: October 16, 2018
    Assignee: Wistron Corporation
    Inventor: Hsi-chun Hsiao
  • Patent number: 10056077
    Abstract: Speech recorded by an audio capture facility of a music facility is processed by a speech recognition facility to generate results that are provided to the music facility. When information related to a music application running on the music facility are provided to the speech recognition facility, the results generated are based at least in part on the application related information. The speech recognition facility uses an unstructured language model for generating results. The user of the music facility may optionally be allowed to edit the results being provided to the music facility. The speech recognition facility may also adapt speech recognition based on usage of the results.
    Type: Grant
    Filed: August 1, 2008
    Date of Patent: August 21, 2018
    Assignee: Nuance Communications, Inc.
    Inventors: Joseph P. Cerra, John N. Nguyen, Michael S. Phillips, Han Shu
  • Patent number: 10013971
    Abstract: Methods, systems, and apparatus for determining candidate user profiles as being associated with a shared device, and identifying, from the candidate user profiles, candidate pronunciation attributes associated with at least one of the candidate user profiles determined to be associated with the shared device. The methods, systems, and apparatus are also for receiving, at the shared device, a spoken utterance; determining a received pronunciation attribute based on received audio data corresponding to the spoken utterance; comparing the received pronunciation attribute to at least one of the candidate pronunciation attributes; and selecting a particular pronunciation attribute from the candidate pronunciation attributes based on a result of the comparison of the received pronunciation attribute to at least one of the candidate pronunciation attributes.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: July 3, 2018
    Assignee: Google LLC
    Inventors: Justin Lewis, Lisa Takehana
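The selection step in this abstract, picking whichever candidate pronunciation attribute best matches the one heard in the utterance, can be sketched with a simple string-similarity comparison. This is an illustrative assumption, not the patented method: the attribute representation, the use of `difflib`, and all names are hypothetical.

```python
import difflib

def select_pronunciation_attribute(received: str, candidates: dict[str, str]) -> str:
    """Hypothetical sketch: given the pronunciation attribute derived from
    the spoken utterance and candidate attributes keyed by user profile,
    return the closest-matching candidate attribute."""
    # Score each candidate by similarity to the received pronunciation
    # and keep the profile with the highest score.
    best_profile = max(
        candidates,
        key=lambda p: difflib.SequenceMatcher(None, received, candidates[p]).ratio(),
    )
    return candidates[best_profile]
```

A real system would compare phonetic feature vectors rather than strings, but the ranking-and-selection shape is the same.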
  • Patent number: 9974473
    Abstract: The present disclosure relates to devices, systems, and methods for assessing and altering swallowing, speech, and breathing function. In particular, the present disclosure relates to devices and systems to assess and improve speech, breathing, and swallowing function in subjects in need thereof.
    Type: Grant
    Filed: October 21, 2016
    Date of Patent: May 22, 2018
    Assignee: SWALLOW SOLUTIONS, LLC
    Inventors: Tye Gribb, JoAnne Robbins, Jackie Hind, John Peterman
  • Patent number: 9911352
    Abstract: Systems, methods, and other embodiments associated with producing an immersive training content module (ITCM) are described. One example system includes a capture logic to acquire information from which the ITCM may be produced. An ITCM may include a set of nodes, a set of measures, a logic to control transitions between nodes during a training session, and a logic to establish values for measures during the training session. Therefore, the example system may also include an assessment definition logic to define a set of measures to be included in the ITCM and an interaction logic to define a set of interactions to be included in the ITCM. The ITCM may be written to a computer-readable medium.
    Type: Grant
    Filed: December 21, 2007
    Date of Patent: March 6, 2018
    Assignee: Case Western Reserve University
    Inventors: Stacy L Williams, Marc Buchner
  • Patent number: 9786199
    Abstract: Speech data from the operation of a speech recognition application is recorded over the course of one or more language learning sessions. The operation of the speech recognition application during each language learning session corresponds to a user speaking, and the speech recognition application generating text data. The text data may be a recognition of what the user spoke. The speech data may comprise the text data and confidence values that indicate the accuracy of the recognition. The speech data from each language learning session may be analyzed to determine an overall performance level of the user.
    Type: Grant
    Filed: August 28, 2012
    Date of Patent: October 10, 2017
    Assignee: Vocefy, Inc.
    Inventors: Luc Julia, Jerome Dubreuil, Jehen Bing
  • Patent number: 9786195
    Abstract: A system for evaluating reading fluency by monitoring the underlining of text as it is read on a tablet or other computing device. The text or passage is presented on the screen of the tablet computing device with a touchscreen, such as, but not limited to, an iPad. The reader uses a stylus, finger, or other device to underline each word as it is read, and may go back and re-underline any words to regress within the passage. Alternatively, a mouse can be used to indicate words as they are read. Computer software tracks the reader's underlining, providing detailed information about reading rate, pauses, regressions, and other word and word combination features.
    Type: Grant
    Filed: September 1, 2013
    Date of Patent: October 10, 2017
    Inventor: Max M. Louwerse
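The metrics this abstract lists (reading rate, pauses, regressions) can be derived from a stream of underline events. The sketch below assumes each event is a `(word_index, timestamp_seconds)` pair and uses an invented pause threshold; none of this is specified by the patent.

```python
def reading_metrics(events):
    """Hypothetical sketch: derive words-per-minute, pause count, and
    regression count from (word_index, timestamp_seconds) underline events."""
    PAUSE_THRESHOLD = 1.0  # seconds between underlined words (assumption)
    pauses = 0
    regressions = 0
    for (prev_idx, prev_t), (idx, t) in zip(events, events[1:]):
        if idx < prev_idx:
            # The reader jumped back to an earlier word: a regression.
            regressions += 1
        elif t - prev_t > PAUSE_THRESHOLD:
            pauses += 1
    elapsed_min = (events[-1][1] - events[0][1]) / 60.0
    unique_words = len({i for i, _ in events})
    wpm = unique_words / elapsed_min if elapsed_min else 0.0
    return {"wpm": wpm, "pauses": pauses, "regressions": regressions}
```

Re-underlined words are counted once for the rate but still register as regressions, which matches the abstract's distinction between rate and regression tracking.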
  • Patent number: 9773497
    Abstract: Disclosed herein are systems, computer-implemented methods, and tangible computer-readable media for handling missing speech data. The computer-implemented method includes receiving speech with a missing segment, generating a plurality of hypotheses for the missing segment, identifying a best hypothesis for the missing segment, and recognizing the received speech by inserting the identified best hypothesis for the missing segment. In another method embodiment, the final step is replaced with synthesizing the received speech by inserting the identified best hypothesis for the missing segment. In one aspect, the method further includes identifying a duration for the missing segment and generating the plurality of hypotheses of the identified duration for the missing segment. The step of identifying the best hypothesis for the missing segment can be based on speech context, a pronouncing lexicon, and/or a language model. Each hypothesis can have an identical acoustic score.
    Type: Grant
    Filed: March 2, 2016
    Date of Patent: September 26, 2017
    Assignee: Nuance Communications, Inc.
    Inventors: Andrej Ljolje, Alistair D. Conkie
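Because the abstract notes that every hypothesis for the missing segment can carry an identical acoustic score, selection reduces to ranking hypotheses by how well they fit the surrounding speech context. A minimal sketch under that assumption, with a caller-supplied bigram scorer standing in for the language model (all names are hypothetical):

```python
def best_hypothesis(left_context, right_context, hypotheses, bigram_score):
    """Hypothetical sketch: rank candidate fillers for a missing speech
    segment purely by language-model fit with the surrounding words,
    since the acoustic scores are assumed identical."""
    def context_score(word):
        # Fit with the word before the gap plus fit with the word after it.
        return bigram_score(left_context[-1], word) + bigram_score(word, right_context[0])
    return max(hypotheses, key=context_score)
```

The chosen hypothesis would then be inserted into the recognized (or synthesized) output in place of the missing segment.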
  • Patent number: 9656060
    Abstract: A mouthpiece for providing non-invasive neuromodulation to a patient, the mouthpiece including an elongated housing having an anterior region and a posterior region, the elongated housing having a non-planar exterior top surface and internal structural members disposed within the housing, the internal structural members elastically responding to biting forces generated by the patient, a spacer attached to the top surface of the housing for limiting contact between a patient's upper teeth and the exterior top surface of the elongated housing, and a printed circuit board mounted to a bottom portion of the elongated housing, the printed circuit board having a plurality of electrodes for delivering subcutaneous local electrical stimulation to the patient's tongue.
    Type: Grant
    Filed: December 3, 2014
    Date of Patent: May 23, 2017
    Assignee: NEUROHABILITATION CORPORATION
    Inventors: Justin Fisk, Mark Guarraia, Aidan Petrie, Joseph M. Gordon, Faith David-Hegerich, Shane Siwinski, Adam Muratori, Jeffrey M. Wallace
  • Patent number: 9620114
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving audio data; determining that an initial portion of the audio data corresponds to an initial portion of a hotword; in response to determining that the initial portion of the audio data corresponds to the initial portion of the hotword, selecting, from among a set of one or more actions that are performed when the entire hotword is detected, a subset of the one or more actions; and causing one or more actions of the subset to be performed.
    Type: Grant
    Filed: October 21, 2016
    Date of Patent: April 11, 2017
    Assignee: Google Inc.
    Inventor: Matthew Sharifi
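The control flow in this abstract (detect a hotword prefix, then run only a subset of the full-hotword actions) can be sketched as below. The action names, the hotword string, and the choice of subset are all illustrative assumptions.

```python
# Assumed action names for illustration only.
FULL_HOTWORD_ACTIONS = ["wake_screen", "open_microphone", "start_query"]

def actions_for_partial_hotword(audio_prefix: str, hotword: str = "ok google"):
    """Hypothetical sketch: return the actions to perform given the
    portion of the hotword heard so far."""
    if audio_prefix == hotword:
        # Entire hotword detected: perform the full action set.
        return FULL_HOTWORD_ACTIONS
    if audio_prefix and hotword.startswith(audio_prefix):
        # Only an initial portion matched: perform a cheap,
        # reversible subset (e.g. waking the screen early).
        return FULL_HOTWORD_ACTIONS[:1]
    return []
```

Running the preparatory subset early hides latency if the rest of the hotword arrives, and is easy to undo if it does not.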
  • Patent number: 9502020
    Abstract: An adaptive noise canceling (ANC) circuit adaptively generates an anti-noise signal that is injected into the speaker or other transducer output to cause cancellation of ambient audio sounds. At least one microphone provides an error signal indicative of the noise cancellation at the transducer, and the adaptive filter is adapted to minimize the error signal. In order to prevent improper adaptation or instabilities in one or both of the adaptive filters, spikes are detected in the error signal by comparing the error signal or its rate of change to a threshold. Therefore, if the magnitude of the coefficient error is greater than a threshold value for an update, the update is skipped. Alternatively, the step size of the updates may be reduced. Similar criteria can be applied to a filter modeling the secondary path, based on detection applied to both the source audio and the error signal.
    Type: Grant
    Filed: March 14, 2014
    Date of Patent: November 22, 2016
    Assignee: CIRRUS LOGIC, INC.
    Inventors: Ali Abdollahzadeh Milani, Jeffrey Alderson, Gautham Devendra Kamath, Yang Lu
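The guarded update this abstract describes, skipping the adaptive-filter coefficient update when an error spike exceeds a threshold, can be sketched as a standard LMS step with a spike gate in front. The step size and threshold values here are illustrative, not from the patent.

```python
def lms_update(weights, x, error, step=0.01, spike_threshold=1.0):
    """Hypothetical sketch of a guarded LMS coefficient update: skip the
    update entirely when the error spike exceeds a threshold, so a
    transient does not destabilise the adaptive filter."""
    if abs(error) > spike_threshold:
        # Spike detected: leave the coefficients unchanged this iteration.
        return weights
    # Standard LMS update of each coefficient against the input vector x.
    return [w + step * error * xi for w, xi in zip(weights, x)]
```

The abstract's alternative, reducing the step size instead of skipping, would replace the early return with a call using a smaller `step`.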
  • Patent number: 9480812
    Abstract: The autonomic nervous system (ANS) of a subject is continuously monitored to obtain its sympathetic nervous system (SNS) and parasympathetic nervous system (PSNS) components while an external stimulus whose frequency sweeps across a frequency band is applied to the subject. The stimulus may be a vibration, flickering light, or sound. The subject is determined to have entered into a state of homeostasis when the SNS component equals the PSNS component. The value of frequency that corresponds to the state of homeostasis is selected as the fundamental frequency of the subject for use in subsequent treatment protocols.
    Type: Grant
    Filed: January 12, 2015
    Date of Patent: November 1, 2016
    Inventor: Jeffrey D. Thompson
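The sweep-and-select procedure in this abstract amounts to finding the stimulus frequency at which the SNS and PSNS components are closest to equal. A minimal sketch, with `measure` standing in for the monitoring hardware (a hypothetical callable returning an `(sns, psns)` pair per frequency):

```python
def fundamental_frequency(freqs, measure):
    """Hypothetical sketch: sweep the stimulus across the candidate
    frequencies and pick the one where the SNS and PSNS components
    are closest to equal (the homeostasis point)."""
    return min(freqs, key=lambda f: abs(measure(f)[0] - measure(f)[1]))
```

In practice the equality condition would be tested against a tolerance on noisy physiological measurements; the minimisation above is the discrete analogue.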
  • Patent number: 9474483
    Abstract: The present disclosure relates to devices, systems, and methods for assessing and altering swallowing, speech, and breathing function. In particular, the present disclosure relates to devices and systems to assess and improve speech, breathing, and swallowing function in subjects in need thereof.
    Type: Grant
    Filed: August 12, 2014
    Date of Patent: October 25, 2016
    Inventors: Tye Gribb, JoAnne Robbins, Jackie Hind, John Peterman
  • Patent number: 9462993
    Abstract: The invention relates to a reference object (3) and a method for checking a measuring system (1), wherein a plurality of three-dimensional recordings (4, 8) of a reference object are recorded from different recording directions (5) by means of the measuring system (1). The reference object (3) has a closed shape, wherein each of the three-dimensional recordings (4) is registered with at least the preceding recording (4). In the case of a faulty calibration and/or in the case of a faulty registration, the individual recordings (4, 8) are deformed compared to the actual shape of the reference object (3), so that the deformation continues when assembling the individual three-dimensional recordings (4) to form an overall recording (54) and the generated overall recording (54) deviates in its dimensions from the dimensions of the reference object (3) as a result thereof.
    Type: Grant
    Filed: January 28, 2013
    Date of Patent: October 11, 2016
    Assignee: Sirona Dental Systems GmbH
    Inventors: Björn Popilka, Volker Wedler, Anders Adamson, Frank Thiel
  • Patent number: 9443537
    Abstract: A voice processing device includes a processor; and a memory which stores a plurality of instructions, which when executed by the processor, causing the processor to execute: acquiring an input voice; detecting a sound period included in the input voice and a silent period adjacent to a back end of the sound period; calculating a number of words included in the sound period; and controlling a length of the silent period according to the number of words.
    Type: Grant
    Filed: May 5, 2014
    Date of Patent: September 13, 2016
    Assignee: FUJITSU LIMITED
    Inventors: Chisato Shioda, Taro Togawa, Takeshi Otani
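The core idea of this abstract, controlling the length of the silent period after a sound period according to how many words it contained, can be sketched in one function. The base duration, per-word increment, and cap are invented for illustration; the patent does not specify them.

```python
def adjust_silence_ms(word_count, base_ms=200, per_word_ms=50, max_ms=1000):
    """Hypothetical sketch: lengthen the silent period in proportion to
    the number of words in the preceding sound period, so listeners get
    more processing time after denser utterances, up to a cap."""
    return min(base_ms + per_word_ms * word_count, max_ms)
```

The voice processing device would then stretch or shrink the detected silent period to this target length before playback.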
  • Patent number: 9304787
    Abstract: Described is a technique for establishing an interaction language for a user interface without having to communicate with the user in a default language, which the user may or may not understand. The technique may prompt the user for multiple responses in order to determine a specific language. The responses may include speech input or the selection of particular regions on a map. In some implementations, the language may be precise to a particular dialect or variant preferred or spoken by the user. Accordingly, this approach provides an accurate and efficient method of providing a high degree of specificity for language selection without overwhelming the user with an unmanageable list of languages.
    Type: Grant
    Filed: December 31, 2013
    Date of Patent: April 5, 2016
    Assignee: Google Inc.
    Inventors: Jeffrey David Oldham, Mark Edward Davis, Chinglan Ho, Cibu Chalissery Johny, Markus Scherer, Jungshik Shin, Erik Menno van der Poel, Neha Chachra
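The narrowing process this abstract describes, using each user response to filter the candidate languages rather than presenting one unmanageable list, can be sketched as iterative filtering. Representing each response as a predicate over language tags is an assumption made for illustration.

```python
def narrow_languages(candidates, responses):
    """Hypothetical sketch: iteratively filter the candidate language set
    using each user response (e.g. a map-region tap or a spoken sample),
    stopping once a single language or dialect remains."""
    for predicate in responses:
        remaining = [lang for lang in candidates if predicate(lang)]
        if remaining:
            # Only narrow when the response is consistent with at least
            # one candidate; otherwise ignore it and keep the prior set.
            candidates = remaining
        if len(candidates) == 1:
            break
    return candidates
```

Successive responses can narrow all the way to a dialect-level tag (e.g. from Portuguese in general to Brazilian Portuguese), matching the abstract's claim of dialect-level precision.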