Handicap Aid Patents (Class 704/271)
  • Patent number: 10313502
    Abstract: A method for automatically delaying playback of a message at a captioning device may include obtaining, at the captioning device, a request for playback of the message. The method may also include, in response to the request, automatically delaying the playback of the message at the captioning device in order to allow the captioning system to receive the audio of the message from the beginning of the playback of the audio of the message.
    Type: Grant
    Filed: February 12, 2018
    Date of Patent: June 4, 2019
    Assignee: Sorenson IP Holdings, LLC
    Inventors: Michael Stimpson, Brian Chevrier, Jennifer Mitchell
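    Sketch: a minimal, hypothetical Python illustration of the delayed-playback flow described in patent 10313502 above; the class name, method names, and fixed delay are assumptions for illustration, not taken from the patent.
    ```python
    import time

    class CaptioningDevice:
        """Sketch of a captioning device that delays message playback so the
        captioning system can receive the audio from the very beginning."""

        def __init__(self, captioning_ready_delay_s=2.0):
            # Hypothetical fixed delay standing in for however long it takes
            # the captioning system to start receiving the audio stream.
            self.captioning_ready_delay_s = captioning_ready_delay_s

        def request_playback(self, message_audio):
            # The user requests playback of a stored message.
            print("Playback requested; delaying so captions cover the whole message.")
            # Automatically delay playback before any audio is rendered.
            time.sleep(self.captioning_ready_delay_s)
            # Play the message; the captioning system hears it from the start.
            self.play(message_audio)

        def play(self, message_audio):
            print(f"Playing {len(message_audio)} audio samples with live captions.")

    if __name__ == "__main__":
        CaptioningDevice().request_playback(message_audio=[0.0] * 16000)
    ```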
  • Patent number: 10238333
    Abstract: In one embodiment, a computer program product includes a computer readable storage medium having program instructions embodied therewith. The embodied program instructions are executable by a processing circuit to cause the processing circuit to receive collected data from one or more data collection devices. The collected data is aggregated over a period of time lasting at least one month, and the collected data includes audio data of a user of the one or more data collection devices. The embodied program instructions also cause the processing circuit to store the audio data to a computer readable storage medium. Moreover, the embodied program instructions cause the processing circuit to analyze the audio data for indications of hearing loss in the user over the period of time.
    Type: Grant
    Filed: August 12, 2016
    Date of Patent: March 26, 2019
    Assignee: International Business Machines Corporation
    Inventors: Inseok Hwang, Su Liu, Eric J. Rozner, Chin Ngai Sze
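    Sketch: a minimal Python illustration of the month-long aggregation and analysis step described in patent 10238333 above; the use of daily listening levels and the 5 dB drift threshold are assumptions for illustration only.
    ```python
    from statistics import mean

    def hearing_loss_indication(daily_volume_db, min_days=30, drift_threshold_db=5.0):
        """Flag a possible hearing-loss indication when the user's preferred
        listening level drifts upward over at least one month of collected data."""
        if len(daily_volume_db) < min_days:
            return False  # not enough aggregated data yet
        first_week = mean(daily_volume_db[:7])   # average level at the start
        last_week = mean(daily_volume_db[-7:])   # average level at the end
        return (last_week - first_week) >= drift_threshold_db

    if __name__ == "__main__":
        # Simulated month of data in which the user slowly turns the volume up.
        volumes = [60.0 + day * 0.2 for day in range(35)]
        print("Possible hearing-loss indication:", hearing_loss_indication(volumes))
    ```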
  • Patent number: 10187738
    Abstract: Methods and arrangements for filtering audio in a noisy environment involve receiving audio input at a user's location using a plurality of audio input devices in proximity to the user. The audio is then separated into sources in response to a user selection. After the selection is made, the amplitudes of the audio sources are adjusted based on the selection. Other variants and embodiments are broadly contemplated herein.
    Type: Grant
    Filed: April 29, 2015
    Date of Patent: January 22, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jitendra Ajmera, Nitendra Rajput, Saurabh Srivastava, Shubham Toshniwal
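    Sketch: a minimal Python illustration of the selection-driven amplitude adjustment described in patent 10187738 above; source separation itself is out of scope here, and the gain values are illustrative assumptions.
    ```python
    import numpy as np

    def adjust_sources(separated_sources, selected_index, boost=2.0, attenuate=0.25):
        """Boost the audio source the user selected and attenuate the others,
        then remix. The inputs are per-source signals of equal length."""
        mixed = np.zeros_like(separated_sources[0])
        for i, source in enumerate(separated_sources):
            gain = boost if i == selected_index else attenuate
            mixed += gain * source
        return mixed

    if __name__ == "__main__":
        t = np.linspace(0.0, 1.0, 8000)
        voice = np.sin(2 * np.pi * 220 * t)       # stand-in for the selected talker
        babble = 0.5 * np.random.randn(t.size)    # stand-in for background noise
        output = adjust_sources([voice, babble], selected_index=0)
        print("Output RMS:", float(np.sqrt(np.mean(output ** 2))))
    ```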
  • Patent number: 10095034
    Abstract: Systems and methods for eyewear devices with integrated heads-up displays are provided. In one embodiment, an eyewear device provides an integrated heads-up display in a display area that is elongate and extends laterally across a user's field of view. A display mechanism forming part of the eyewear device can be configured to display visual information in the form of text messages, with no more than a single laterally extending line of text characters being displayable at any particular time. The display mechanism can comprise a partially reflective element carried by an eyeglass lens to reflect towards the user computer-generated imagery projected on to it, the display mechanism further including a cooperating projector assembly housed by a frame of the eyewear device in an overhead configuration relative to the partially reflective element.
    Type: Grant
    Filed: July 23, 2015
    Date of Patent: October 9, 2018
    Assignee: Snap Inc.
    Inventors: Jonathan M Rodriguez, II, Kimberly A. Phifer
  • Patent number: 10049109
    Abstract: Techniques include outputting to a developer an offer to opt-in to a translation feature that enables human translators to translate their web page to a target language. In response to receiving a first request to opt-in to the translation feature, the server: generates and stores a web page copy, obtains from the human translators translations of at least a portion of the web page from its source language to the target language, modifies the web page copy based on the obtained translations to obtain a translated web page that is a translated version of the web page, detects a second request for the web page from a computing device associated with the target language, and in response to detecting the second request outputs, to the computing device, the translated web page with additional content relevant to the computing device or a user associated with the computing device.
    Type: Grant
    Filed: August 4, 2017
    Date of Patent: August 14, 2018
    Assignee: Google LLC
    Inventors: Jonathan Wald, Aaron Baeten Brown
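    Sketch: a minimal Python illustration of the opt-in translation flow described in patent 10049109 above; the storage layout, the translator callable, and the extra-content handling are simplified assumptions, not Google's implementation.
    ```python
    class TranslationFeature:
        """Sketch of a server-side opt-in translation feature: keep a copy of
        the page, obtain a human translation, and serve the translated copy to
        devices associated with the target language."""

        def __init__(self):
            self.page_copies = {}       # url -> source-language copy
            self.translated_pages = {}  # (url, lang) -> translated markup

        def opt_in(self, url, source_html, target_lang, human_translate):
            # First request: store a copy and have human translators translate it.
            self.page_copies[url] = source_html
            self.translated_pages[(url, target_lang)] = human_translate(source_html, target_lang)

        def serve(self, url, request_lang, extra_content=""):
            # Later request: devices associated with the target language get the
            # translated page plus additional content relevant to them.
            page = self.translated_pages.get((url, request_lang))
            return (page + extra_content) if page else self.page_copies.get(url)

    if __name__ == "__main__":
        feature = TranslationFeature()
        feature.opt_in("example.com/index", "<p>Hello</p>", "fr",
                       human_translate=lambda html, lang: "<p>Bonjour</p>")
        print(feature.serve("example.com/index", "fr", extra_content="<!-- local offers -->"))
    ```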
  • Patent number: 10009603
    Abstract: A 2D and/or 3D video processing device comprising a camera and a display captures images of a viewer as the viewer observes displayed 2D and/or 3D video content in a viewport. Face and/or eye tracking of viewer images is utilized to generate a different viewport. Current and different viewports may comprise 2D and/or 3D video content from a single source or from different sources. The sources of 2D and/or 3D content may be scrolled, zoomed and/or navigated through for generating the different viewport. Content for the different viewport may be processed. Images of a viewer's positions, angles and/or movements of face, facial expression, eyes and/or physical gestures are captured by the camera and interpreted by face and/or eye tracking. The different viewport may be generated for navigating through 3D content and/or for rotating a 3D object. The 2D and/or 3D video processing device communicates via wire, wireless and/or optical interfaces.
    Type: Grant
    Filed: June 23, 2014
    Date of Patent: June 26, 2018
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventors: Marcus Kellerman, Xuemin Chen, Samir Hulyalkar, Ilya Klebanov
  • Patent number: 9996730
    Abstract: Vision-assist systems including user eye tracking cameras are disclosed. A vision-assist system includes a processor, a memory module communicatively coupled to the processor, a user eye tracking camera communicatively coupled to the processor, an environment camera communicatively coupled to the processor, a feedback device communicatively coupled to the processor, and machine readable instructions stored in the memory module that, when executed by the processor, cause the vision-assist system to receive environment image data from the environment camera, determine a location of an individual speaking to a user based on the environment image data, receive user eye tracking image data from the user eye tracking camera, determine a pose of the user's eyes based on the user eye tracking image data, and provide feedback to the user with the feedback device based on the location of the individual speaking to the user and the pose of the user's eyes.
    Type: Grant
    Filed: March 18, 2016
    Date of Patent: June 12, 2018
    Assignee: TOYOTA MOTOR ENGINEERING & MANUFACTURING NORTH AMERICA, INC.
    Inventor: Christopher P. Lee
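    Sketch: a minimal Python illustration of combining the speaker's location with the user's eye pose to produce feedback, as described in patent 9996730 above; the bearing representation, tolerance, and feedback phrasing are assumptions for illustration.
    ```python
    def feedback_for_speaker(speaker_bearing_deg, gaze_bearing_deg, tolerance_deg=15.0):
        """Compare the bearing of the person speaking (from environment-camera
        data) with the user's gaze bearing (from eye-tracking data) and return
        a feedback message for the vision-assist user."""
        # Wrap the angular error into the range [-180, 180).
        error = (speaker_bearing_deg - gaze_bearing_deg + 180.0) % 360.0 - 180.0
        if abs(error) <= tolerance_deg:
            return "You are facing the person speaking to you."
        direction = "right" if error > 0 else "left"
        return f"The speaker is about {abs(error):.0f} degrees to your {direction}."

    if __name__ == "__main__":
        print(feedback_for_speaker(speaker_bearing_deg=40.0, gaze_bearing_deg=5.0))
    ```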
  • Patent number: 9928084
    Abstract: An electronic device is provided with an activation unit for activating an application wherein character input to a character input area is possible, a memory for storing a predetermined character string corresponding to the application, an operation unit for inputting a first character string and a second character string, a display unit for displaying the first character string and the second character string input from the operation unit, a determination unit for determining whether or not the second character string matches the predetermined character string, and a control unit which, when the determination unit determines that the second character string matches the predetermined character string, functions to control an activation unit to activate an application corresponding to the predetermined character string and functions to input the first character string in the character input area of the application.
    Type: Grant
    Filed: June 20, 2012
    Date of Patent: March 27, 2018
    Assignee: KYOCERA CORPORATION
    Inventors: Natsuhito Honda, Saya Shigeta
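    Sketch: a minimal Python illustration of the keyword-triggered activation described in patent 9928084 above; the keyword table and return values are hypothetical.
    ```python
    class KeywordLauncher:
        """When the second input string matches a stored keyword, activate the
        corresponding application and place the first string in its input area."""

        def __init__(self, keyword_to_app):
            self.keyword_to_app = keyword_to_app  # e.g. {"@mail": "MailApp"}

        def handle_input(self, first_string, second_string):
            app = self.keyword_to_app.get(second_string)
            if app is None:
                return f"No application registered for '{second_string}'."
            # Activate the application and pre-fill its character input area.
            return f"Activated {app} with input text: '{first_string}'"

    if __name__ == "__main__":
        launcher = KeywordLauncher({"@mail": "MailApp", "@memo": "MemoApp"})
        print(launcher.handle_input("meet at 10 tomorrow", "@mail"))
    ```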
  • Patent number: 9917939
    Abstract: A method for automatically delaying playback of a voice message at a captioning device may include receiving, at a captioning device, a request from a user to play back a voice message that is stored on the captioning device. The method may also include, in response to the request from the user, automatically delaying the playback of the voice message, at the captioning device, until an establishment of a connection between the captioning device and a remote call assistant, in order to allow the remote call assistant to caption on the captioning device, in real-time, the voice message from the very beginning of the playback of the voice message.
    Type: Grant
    Filed: March 1, 2017
    Date of Patent: March 13, 2018
    Assignee: SORENSON IP HOLDINGS, LLC
    Inventors: Michael Stimpson, Brian Chevrier, Jennifer Mitchell
  • Patent number: 9826929
    Abstract: A device for stuttering alleviation is disclosed, comprising a speech sensor configured to output signals indicative of speech and a processing unit configured to detect stuttering, log stuttering, and/or produce a stimulation indication based on stimulation rules. Said device may further comprise a remote server and a server user interface configured to allow the speech therapist access to the processing unit. Further provided is a method for accelerating the learning procedure for obtaining permanently fluent speech, comprising receiving and analyzing speech parameters, determining whether a negative reinforcement is required, and executing the negative reinforcement.
    Type: Grant
    Filed: January 17, 2013
    Date of Patent: November 28, 2017
    Assignee: NINISPEECH LTD.
    Inventors: Shirley Steinberg-Shapira, Yair Shapira
  • Patent number: 9754075
    Abstract: In an aspect, a method of monitoring one or more symptoms of a person includes repeating, over a period of time, the steps of: selecting, by the person, one or more symbolic representations corresponding to one or more symptoms from a predefined set of symbolic representations presented to the person; and electronically recording data regarding the one or more symbolic representations selected by the person such that the data is electronically accessible later for generating a history of the symptoms of the person over the period of time. The one or more symbolic representations corresponding to one or more symptoms are selected using an electronic device having a component for displaying the predefined set of symbolic representations that is coupled to a user input for receiving the selection of the one or more symbolic representations by the person.
    Type: Grant
    Filed: January 26, 2015
    Date of Patent: September 5, 2017
    Assignee: RESCON LTD
    Inventors: Thomas Andrew Dawson, Robert W. Twitchell, Jr.
  • Patent number: 9753703
    Abstract: Disclosed are database systems, methods, and computer program products for generating identifiers for user interface elements of a web page of a web application. In some implementations, a server of a database system analyzes a copy of source code for a first web page. The first web page may comprise user interface elements capable of being generated from the source code. The server identifies one or more of the user interface elements of the first web page as not having a unique identifier or as having a dynamically generated identifier. The server generates, for each identified user interface element, a further unique identifier to be associated with the respective identified user interface element. The server generates edited source code comprising one or more further unique identifiers for the identified one or more user interface elements. The server stores the edited source code in a database of the database system.
    Type: Grant
    Filed: January 27, 2015
    Date of Patent: September 5, 2017
    Assignee: salesforce.com, inc.
    Inventor: Daniel Everett Jemiolo
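    Sketch: a minimal Python illustration of generating identifiers for user interface elements that lack one, as described in patent 9753703 above; the regex, the element set, and the id prefix are simplifications, not the salesforce.com implementation.
    ```python
    import re
    from itertools import count

    def add_missing_ids(source_html, prefix="auto-id"):
        """Scan source markup for a few interactive elements that lack an id
        attribute and return edited source with generated unique identifiers."""
        counter = count(1)

        def ensure_id(match):
            tag = match.group(0)
            if " id=" in tag:
                return tag  # already has an identifier; leave unchanged
            return tag[:-1] + f' id="{prefix}-{next(counter)}">'

        return re.sub(r"<(button|input|a)\b[^>]*>", ensure_id, source_html)

    if __name__ == "__main__":
        page = '<button>Save</button><a href="/help" id="help-link">Help</a>'
        print(add_missing_ids(page))
        # -> <button id="auto-id-1">Save</button><a href="/help" id="help-link">Help</a>
    ```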
  • Patent number: 9589107
    Abstract: Methods and systems are described for monitoring patient speech to determine compliance of the patient with a prescribed regimen for treating a brain-related disorder. Patient speech is detected with an audio sensor at the patient location, and speech data is transmitted to a monitoring location. Patient speech is processed at the patient location and/or monitoring location to identify speech parameters and/or patterns that indicate whether the patient has complied with the prescribed treatment regimen. Patient identity may be determined through biometric identification or other authentication techniques. The system may provide a report to an interested party, for example a medical care provider, based on whether (and/or the extent to which) the patient has complied with the prescribed treatment regimen. The monitoring system may transmit a report to a wireless device such as a pager or cell phone, generate an alarm or notification, and/or store information for later use.
    Type: Grant
    Filed: November 17, 2014
    Date of Patent: March 7, 2017
    Assignee: Elwha LLC
    Inventors: Jeffrey A. Bowers, Paul Duesterhoft, Daniel Hawkins, Roderick A. Hyde, Edward K. Y. Jung, Jordin T. Kare, Eric C. Leuthardt, Nathan P. Myhrvold, Michael A. Smith, Elizabeth A. Sweeney, Clarence T. Tegreene, Lowell L. Wood, Jr.
  • Patent number: 9585616
    Abstract: Methods and systems are described for monitoring patient speech to determine compliance of the patient with a prescribed regimen for treating a brain-related disorder. Patient speech is detected with an audio sensor at the patient location, and speech data is transmitted to a monitoring location. The audio sensor and other components at the patient location may be incorporated into, or associated with, a cell phone, computing system, or stand-alone microprocessor-based device, for example. Patient speech is processed at the patient location and/or monitoring location to identify speech parameters and/or patterns that indicate whether the patient has complied with the prescribed treatment regimen. Patient identity may be determined through biometric identification or other authentication techniques. The system may provide a report to an interested party, for example a medical care provider, based on whether (and/or the extent to which) the patient has complied with the prescribed treatment regimen.
    Type: Grant
    Filed: November 17, 2014
    Date of Patent: March 7, 2017
    Assignee: Elwha LLC
    Inventors: Jeffrey A. Bowers, Paul Duesterhoft, Daniel Hawkins, Roderick A. Hyde, Edward K. Y. Jung, Jordin T. Kare, Eric C. Leuthardt, Nathan P. Myhrvold, Michael A. Smith, Elizabeth A. Sweeney, Clarence T. Tegreene, Lowell L. Wood, Jr.
  • Patent number: 9525830
    Abstract: A method to generate a contact list may include receiving an identifier of a first communication device at a captioning system. The first communication device may be configured to provide first audio data to a second communication device. The second communication device may be configured to receive first text data of the first audio data from the captioning system. The method may further include receiving and storing contact data from each of multiple communication devices at the captioning system. The method may further include selecting the contact data from the multiple communication devices that include the identifier of the first communication device as selected contact data and generating a contact list based on the selected contact data. The method may also include sending the contact list to the first communication device to provide the contact list as contacts for presentation on an electronic display of the first communication device.
    Type: Grant
    Filed: June 17, 2016
    Date of Patent: December 20, 2016
    Assignee: CaptionCall LLC
    Inventors: Shane Roylance, Kenneth Boehme, Pat Nola, Merle Lamar Walker, III
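    Sketch: one possible reading of the contact-list generation in patent 9525830 above, in minimal Python; the reverse-lookup interpretation of the selection step and all field names are assumptions for illustration.
    ```python
    def build_contact_list(first_device_id, uploaded_contact_data):
        """Keep the uploaded contact records that contain the first device's
        identifier and turn their owners into a contact list for that device."""
        contact_list = []
        for record in uploaded_contact_data:
            if any(c["number"] == first_device_id for c in record["contacts"]):
                contact_list.append({"name": record["owner_name"],
                                     "number": record["owner_number"]})
        return contact_list  # sent to the first device for on-screen presentation

    if __name__ == "__main__":
        uploads = [
            {"owner_name": "Alice", "owner_number": "555-0101",
             "contacts": [{"name": "Grandma", "number": "555-0100"}]},
            {"owner_name": "Bob", "owner_number": "555-0102",
             "contacts": [{"name": "Carol", "number": "555-0177"}]},
        ]
        print(build_contact_list("555-0100", uploads))
    ```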
  • Patent number: 9430466
    Abstract: Techniques include outputting to a developer an offer to opt-in to a translation feature that enables human translators to translate their web page to a target language. In response to receiving a first request to opt-in to the translation feature, the server: generates and stores a web page copy, obtains from the human translators translations of at least a portion of the web page from its source language to the target language, modifies the web page copy based on the obtained translations to obtain a translated web page that is a translated version of the web page, detects a second request for the web page from a computing device associated with the target language, and in response to detecting the second request outputs, to the computing device, the translated web page with additional content relevant to the computing device or a user associated with the computing device.
    Type: Grant
    Filed: August 26, 2015
    Date of Patent: August 30, 2016
    Assignee: Google Inc.
    Inventors: Jonathan Wald, Aaron Baeten Brown
  • Patent number: 9424843
    Abstract: Methods and devices are described for allowing users to use portable computer devices such as smart phones to share microphone signals and/or closed captioning text generated by speech recognition processing of the microphone signals. Under user direction, the portable devices exchange messages to form a signal sharing group to facilitate their conversation.
    Type: Grant
    Filed: September 24, 2013
    Date of Patent: August 23, 2016
    Assignee: Starkey Laboratories, Inc.
    Inventor: Karrie LaRae Recker
  • Patent number: 9374536
    Abstract: Apparatuses and methods are disclosed for providing captioning of a video communication session for a conversation between at least two users, in which media data is communicated between at least two communication devices during a video communication session involving a video captioning service. The video captioning service provides text captions for the far-end audio of the video communication session, in which the second communication device is associated with a hearing-capable user who is not authorized to receive text captions from the video captioning service during the video communication session.
    Type: Grant
    Filed: November 12, 2015
    Date of Patent: June 21, 2016
    Assignee: CaptionCall, LLC
    Inventors: Pat Nola, Shane A. Roylance, Merle L. Walker, III
  • Patent number: 9299358
    Abstract: A method for voice modification during a telephone call comprising: receiving a source audio signal associated with at least one participant, wherein the source audio signal comprises a voice of the at least one participant; detecting a source dialect of the at least one participant; selecting a target dialect based on at least a characteristic of a target participant; creating a modulated audio signal based on the source audio signal, the source dialect, and the target dialect; and transmitting the modulated audio signal to the target participant.
    Type: Grant
    Filed: August 7, 2013
    Date of Patent: March 29, 2016
    Assignee: Vonage America Inc.
    Inventor: Tzahi Efrati
  • Patent number: 9292764
    Abstract: A method for providing object information for a scene in a wearable computer is disclosed. In this method, an image of the scene is captured. Further, the method includes determining a current location of the wearable computer and a view direction of an image sensor of the wearable computer and extracting at least one feature from the image indicative of at least one object. Based on the current location, the view direction, and the at least one feature, information on the at least one object is determined. Then, the determined information is output.
    Type: Grant
    Filed: September 17, 2013
    Date of Patent: March 22, 2016
    Assignee: QUALCOMM Incorporated
    Inventors: Sungrack Yun, Kyu Woong Hwang, Jun-Cheol Cho, Taesu Kim, Minho Jin, Yongwoo Cho, Kang Kim
  • Patent number: 9280914
    Abstract: The present invention discloses a vision-aided hearing assisting device, which includes a display device, a microphone and a processing unit. The processing unit includes a receiving module, a message generating module and a display driving module. The processing unit is electrically connected to the display device and the microphone. The receiving module receives a surrounding sound signal, which is generated by the microphone. The message generating module analyzes the surrounding sound signal according to a present-scenario mode to generate a related message related with the surrounding sound signal. The display driving module drives the display device to display the related message.
    Type: Grant
    Filed: April 10, 2014
    Date of Patent: March 8, 2016
    Assignee: National Central University
    Inventors: Jia-Ching Wang, Chang-Hong Lin, Chih-Hao Shih
  • Patent number: 9218119
    Abstract: A computer device with a sensor subsystem for detecting off-surface objects carries out continued processing of the position and shape of objects detected in the vicinity of the device, associates these positions and shapes with predetermined gesture states, determines whether the object is transitioning between gesture states, and provides feedback based on the determined transition between the gesture states.
    Type: Grant
    Filed: March 25, 2011
    Date of Patent: December 22, 2015
    Assignee: BlackBerry Limited
    Inventors: Dan Gärdenfors, Karl-Anders Johansson, James Haliburton
  • Patent number: 9111545
    Abstract: The present invention relates to a hand-held communication aid and method that assist deaf-dumb and visually impaired individuals to communicate with each other and with normal individuals. The method enables deaf-dumb and visually impaired individuals to communicate with each other and with normal individuals over remote communication means without any hardware improvisation. The method provides both face-to-face and remote communication aid for deaf-dumb and visually impaired individuals. The method requires no modification of the hand-held communication device used by the normal individual.
    Type: Grant
    Filed: May 18, 2011
    Date of Patent: August 18, 2015
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Charudatta Vitthal Jadhav, Bhushan Jagyasi
  • Patent number: 9057826
    Abstract: An optical apparatus includes an optical combiner, an image lens, and an external scene lens. The optical combiner has an eye-ward side and an external scene side and includes a partially reflective diffraction grating that is at least partially reflective to image light incident through the eye-ward side and at least partially transmissive to external scene light incident through the external scene side. A first mount is positioned to hold the image lens in an optical path of the image light to apply a first corrective prescription to the image light. A second mount is positioned to hold an external scene lens over the external scene side of the optical combiner to apply a second corrective prescription to the external scene light. The optical combiner combines the image light with the scene light to form a combined image that is corrected according to the first and second corrective prescriptions.
    Type: Grant
    Filed: January 31, 2013
    Date of Patent: June 16, 2015
    Assignee: Google Inc.
    Inventors: Anurag Gupta, Greg E. Priest-Dorman, Bernard C. Kress
  • Patent number: 9043204
    Abstract: Some embodiments of the inventive subject matter include a method for detecting speech loss and supplying appropriate recollection data to the user. Such embodiments include detecting a speech stream from a user, converting the speech stream to text, storing the text, detecting an interruption to the speech stream, wherein the interruption to the speech stream indicates speech loss by the user, searching a catalog using the text as a search parameter to find relevant catalog data, and presenting the relevant catalog data to remind the user about the speech stream.
    Type: Grant
    Filed: September 12, 2012
    Date of Patent: May 26, 2015
    Assignee: International Business Machines Corporation
    Inventor: Scott H. Berens
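    Sketch: a minimal Python illustration of the recollection step described in patent 9043204 above; using plain keyword overlap as the catalog search is an assumption chosen only to keep the example small.
    ```python
    def recall_on_speech_loss(spoken_words, catalog, context_window=5):
        """After an interruption (speech loss) is detected, use the most recent
        recognized words as search terms against a catalog and return the
        matching entries to remind the user what they were saying."""
        recent = {w.lower() for w in spoken_words[-context_window:]}
        scored = []
        for entry in catalog:
            overlap = len(recent & {w.lower() for w in entry.split()})
            if overlap:
                scored.append((overlap, entry))
        return [entry for _, entry in sorted(scored, reverse=True)]

    if __name__ == "__main__":
        transcript = "we still need to book the conference room for the".split()
        notes = ["Conference room B is booked for Friday",
                 "Dentist appointment next Tuesday"]
        print(recall_on_speech_loss(transcript, notes))
    ```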
  • Patent number: 9026237
    Abstract: A system for generating audio impressions of data for a visually-impaired user. The system receives data that is displayable by a chart. The data comprises a plurality of values. The system generates an audio impression of the received data. The audio impression includes a first portion and a second portion. The first portion is based upon at least a first value of the received data. The second portion is based upon at least a second value of the received data. An audible difference between the first portion and the second portion reflects the magnitude of a difference between the first value and the second value.
    Type: Grant
    Filed: September 21, 2012
    Date of Patent: May 5, 2015
    Assignee: Oracle International Corporation
    Inventor: Lory D. Molesky
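    Sketch: a minimal Python illustration of mapping chart values to tones so that audible differences track the magnitude of value differences, as described in patent 9026237 above; the pitch mapping is an illustrative choice, not the patented one.
    ```python
    def audio_impression(values, base_freq=220.0, semitones_per_unit=2.0):
        """Map each chart value to a tone frequency; larger differences between
        two values yield larger pitch intervals between their tones."""
        lowest = min(values)
        tones = []
        for v in values:
            semitones = (v - lowest) * semitones_per_unit
            tones.append(round(base_freq * 2 ** (semitones / 12.0), 1))
        return tones  # frequencies in Hz, e.g. handed to a synthesizer for playback

    if __name__ == "__main__":
        quarterly_sales = [10, 12, 19, 11]
        print(audio_impression(quarterly_sales))
    ```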
  • Publication number: 20150119635
    Abstract: A system, including a first prosthetic device configured to evoke a hearing percept based on a first ambient sound and a second non-invasive device configured to stimulate skin based on a second ambient sound generated by a voice.
    Type: Application
    Filed: October 25, 2013
    Publication date: April 30, 2015
    Inventors: Johan Gustafsson, Martin Hillbratt, Kristian Asnes, Marcus Andersson
  • Patent number: 8996387
    Abstract: For clearing transaction data selected for a processing, there is generated in a portable data carrier (1) a transaction acoustic signal (003; 103; 203) (S007; S107; S207) upon whose acoustic reproduction by an end device (10) at least transaction data selected for the processing are reproduced superimposed acoustically with a melody specific to a user of the data carrier (1) (S009; S109; S209). The generated transaction acoustic signal (003; 103; 203) is electronically transferred to an end device (10) (S108; S208), which processes the selected transaction data (S011; S121; S216) only when the user of the data carrier (1) confirms vis-à-vis the end device (10) an at least partial match both of the acoustically reproduced melody with the user-specific melody and of the acoustically reproduced transaction data with the selected transaction data (S010; S110, S116; S210).
    Type: Grant
    Filed: September 8, 2009
    Date of Patent: March 31, 2015
    Assignee: Giesecke & Devrient GmbH
    Inventors: Thomas Stocker, Michael Baldischweiler
  • Patent number: 8977550
    Abstract: Part units of speech information are arranged in a predetermined order to generate a sentence unit of a speech information set. To each of a plurality of speech part units of the speech information, an attribute of “interrupt possible after reproduction” with which reproduction of priority interrupt information can be started after the speech part unit of the speech information is reproduced or another attribute of “interrupt impossible after reproduction” with which reproduction of the priority interrupt information cannot be started even after the speech part unit of the speech information is reproduced is set. When the priority interrupt information having a high priority rank than the speech information set being currently reproduced is inputted, if the attribute of the speech information being reproduced at the point in time is “interrupt impossible after reproduction,” then the priority interrupt information is reproduced after the speech information is reproduced.
    Type: Grant
    Filed: May 6, 2011
    Date of Patent: March 10, 2015
    Assignee: Honda Motor Co., Ltd.
    Inventor: Tokujiro Kizaki
  • Patent number: 8965772
    Abstract: Methods, systems, and products are disclosed for displaying speech command input state information in a multimodal browser, including displaying an icon representing a speech command type and displaying an icon representing the input state of the speech command. In typical embodiments, the icon representing a speech command type and the icon representing the input state of the speech command are attributes of a single icon. Typical embodiments include accepting from a user a speech command of the speech command type, changing the input state of the speech command, and displaying another icon representing the changed input state of the speech command. Typical embodiments also include displaying the text of the speech command in association with the icon representing the speech command type.
    Type: Grant
    Filed: March 20, 2014
    Date of Patent: February 24, 2015
    Assignee: Nuance Communications, Inc.
    Inventors: Charles W. Cross, Jr., Michael C. Hollinger, Igor R. Jablokov, Benjamin D. Lewis, Hilary A. Pike, Daniel M. Smith, David W. Wintermute, Michael A. Zaitzeff
  • Patent number: 8954334
    Abstract: A voice-activated pulser can trigger an oscilloscope or a meter, upon a simple voice command, thereby enabling hands-free signal measurements. The pulser can also be used to control the circuit under test, activating it or changing parameters, all under voice control. The pulser includes numerous switch-selectable output modes that allow users to generate complex, tightly-controlled diagnostic sequences, all activated upon a voice command and hands-free. The invention includes a fast, robust command-interpretation protocol that completely eliminates the expense and complexity of word recognition. Visual indicators display the device status and various operating modes, and also confirm each output pulse. The device receives voice commands directly through an internal microphone, or through a detachable headset, and confirms each command with an acoustical signal in the headset.
    Type: Grant
    Filed: October 15, 2011
    Date of Patent: February 10, 2015
    Assignee: Zanavox
    Inventor: David Edward Newman
  • Patent number: 8949128
    Abstract: Techniques for providing speech output for speech-enabled applications. A synthesis system receives from a speech-enabled application a text input including a text transcription of a desired speech output. The synthesis system selects one or more audio recordings corresponding to one or more portions of the text input. In one aspect, the synthesis system selects from audio recordings provided by a developer of the speech-enabled application. In another aspect, the synthesis system selects an audio recording of a speaker speaking a plurality of words. The synthesis system forms a speech output including the one or more selected audio recordings and provides the speech output for the speech-enabled application.
    Type: Grant
    Filed: February 12, 2010
    Date of Patent: February 3, 2015
    Assignee: Nuance Communications, Inc.
    Inventors: Darren C. Meyer, Corinne Bos-Plachez, Martine Marguerite Staessen
  • Patent number: 8949123
    Abstract: The voice conversion method of a display apparatus includes: in response to the receipt of a first video frame, detecting one or more entities from the first video frame; in response to the selection of one of the detected entities, storing the selected entity; in response to the selection of one of a plurality of previously-stored voice samples, storing the selected voice sample in connection with the selected entity; and in response to the receipt of a second video frame including the selected entity, changing a voice of the selected entity based on the selected voice sample and outputting the changed voice.
    Type: Grant
    Filed: April 11, 2012
    Date of Patent: February 3, 2015
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Aditi Garg, Kasthuri Jayachand Yadlapalli
  • Patent number: 8949129
    Abstract: A method and apparatus are provided for processing a set of communicated signals associated with a set of muscles, such as the muscles near the larynx of the person, or any other muscles the person uses to achieve a desired response. The method includes the steps of attaching a single integrated sensor, for example, near the throat of the person proximate to the larynx and detecting an electrical signal through the sensor. The method further includes the steps of extracting features from the detected electrical signal and continuously transforming them into speech sounds without the need for further modulation. The method also includes comparing the extracted features to a set of prototype features and selecting the prototype feature of the set that provides the smallest relative difference.
    Type: Grant
    Filed: August 12, 2013
    Date of Patent: February 3, 2015
    Assignee: Ambient Corporation
    Inventors: Michael Callahan, Thomas Coleman
  • Patent number: 8938382
    Abstract: An item of information (212) is transmitted to a distal computer (220), translated to a different sense modality and/or language (222), and, in substantially real time, the translation (222) is transmitted back to the location (211) from which the item was sent. The device sending the item is preferably a wireless device, and more preferably a cellular or other telephone (210). The device receiving the translation is also preferably a wireless device, and more preferably a cellular or other telephone, and may advantageously be the same device as the sending device. The item of information (212) preferably comprises a sentence of human speech having at least ten words, and the translation is a written expression of the sentence. All of the steps of transmitting the item of information, executing the program code, and transmitting the translated information preferably occur in less than 60 seconds of elapsed time.
    Type: Grant
    Filed: March 21, 2012
    Date of Patent: January 20, 2015
    Assignee: Ulloa Research Limited Liability Company
    Inventor: Robert D. Fish
  • Patent number: 8938394
    Abstract: A computing device includes at least one processor and at least one module, operable by the at least one processor, to determine a context of the computing device, the context including an indication of at least one of an application executing at the computing device and a location of the computing device and determine, based at least in part on the context, one or more contextual audio triggers usable to initiate interaction with the computing device, each of the one or more contextual audio triggers being associated with a respective operation of the computing device. The at least one module is further operable to receive audio data, and responsive to determining that a portion of the audio data corresponds to a particular contextual audio trigger from the one or more contextual audio triggers, perform the respective operation associated with the particular contextual audio trigger.
    Type: Grant
    Filed: January 9, 2014
    Date of Patent: January 20, 2015
    Assignee: Google Inc.
    Inventors: Alexander Faaborg, Daniel Marc Gatan Shiplacoff
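    Sketch: a minimal Python illustration of context-dependent audio triggers as described in patent 8938394 above; the context keys, trigger phrases, and operations are hypothetical.
    ```python
    CONTEXT_TRIGGERS = {
        # (foreground app, location) -> {spoken trigger: operation}
        ("navigation", "in_car"): {"next turn": "repeat_next_instruction"},
        ("music",      "home"):   {"play": "resume_playback", "skip": "next_track"},
    }

    def handle_audio(context, transcript):
        """Look up the triggers that apply to the current context and, if the
        received audio contains one of them, perform the associated operation."""
        triggers = CONTEXT_TRIGGERS.get(context, {})
        for phrase, operation in triggers.items():
            if phrase in transcript.lower():
                return f"Performing operation: {operation}"
        return "No contextual trigger matched."

    if __name__ == "__main__":
        print(handle_audio(("music", "home"), "Hey, skip this song"))
    ```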
  • Patent number: 8924218
    Abstract: An automated personal assistance system employing artificial intelligence technology that includes speech recognition and synthesis, situational awareness, pattern and behavioral recognition, and the ability to learn from the environment. Embodiments of the system include environmental and occupant sensors and environmental actuators interfaced to an assistance controller having the artificial intelligence technology incorporated therein to control the environment of the system. An embodiment of the invention is implemented as a vehicle that reacts to voice commands for movement and operation of the vehicle and detects objects, obstructions, and distances. This invention provides the ability to monitor the safety of operation and to modify dangerous maneuvers, as well as to learn locations in the environment and to automatically find its way to them. The system may also incorporate communication capability to convey patterns of environmental and occupant parameters to a monitoring center.
    Type: Grant
    Filed: November 29, 2011
    Date of Patent: December 30, 2014
    Inventors: Greg L. Corpier, Katie J. Boyer
  • Patent number: 8920174
    Abstract: An electro-tactile display includes an electrode substrate provided with a plurality of stimulation electrodes, a conductive gel layer positioned between the stimulation electrodes and the skin of a wearer, a switching circuit section electrically connected to the stimulation electrodes, a stimulation pattern generating section electrically connected to the switching circuit, and means for alleviating a sensation experienced by the wearer as a result of the stimulation electrodes. In one aspect, the means for alleviating a sensation is configured from the conductive gel layer. The conductive gel layer has a resistance value equivalent to that of the horny layer of the skin. In another aspect, the means for alleviating a sensation is configured from the stimulation determination means and the threshold value adjustment means.
    Type: Grant
    Filed: December 7, 2006
    Date of Patent: December 30, 2014
    Assignees: The University of Tokyo, Eye Plus Plus, Inc.
    Inventors: Susumu Tachi, Hiroyuki Kajimoto, Yonezo Kanno
  • Publication number: 20140379352
    Abstract: Exemplary embodiments include an assistive device to facilitate social interactions in autistic individuals by identifying emotions using a voice-detecting machine learning algorithm that extracts emotion content from an audio sample input and outputs the emotional content to a user through a device. This device may be a portable, concealable, real-time and automatic device that may receive and process an audio input. The audio input may be analyzed using a machine learning algorithm. The device may output the closest emotional match to the autistic user. The output may be tactile in nature such as a vibration pattern that is different for different identified emotions.
    Type: Application
    Filed: June 16, 2014
    Publication date: December 25, 2014
    Inventors: Suhas Gondi, Andrea Shao-Yin Li, Maxinder S. Kanwal, Corwin de Boor, Muthuraman Chidambaram, Anand Prasanna, Jae Young Chang, Benjamin L. Hsu
  • Patent number: 8918197
    Abstract: As the possible variations of the “Hearing Thresholds”, “Hearing Loudness bandwidths” and “Voice Intonation” characteristics of people are finite, it is proposed to build a database of these characteristics, where the data elements fully describe the hearing and talking characteristics of anyone, while many people share the same characteristics. Thus any voice communication between two parties may be optimized by correcting the intensities of the call in the spectral domain, differently for each party and each ear. The optimizations are automatic given the “codes” of the parties and have minimal latency. The system may be implemented either centrally in the world-wide web or at the edges, in cellular phones, landline phones, VoIP, VoIM and in the audio parts of entertainment devices.
    Type: Grant
    Filed: June 13, 2012
    Date of Patent: December 23, 2014
    Inventor: Avraham Suhami
  • Patent number: 8917822
    Abstract: A device and method for providing captioned services to an assisted user using a captioned device linkable via a first communication link to a hearing user's device, where the method includes the steps of, at a relay, receiving a request for captioning service from the captioned device on a second communication link and, in response to the request, setting up the captioning service at the relay, including receiving hearing user voice signals from the captioned device, providing the voice signals to a call assistant to transcribe into text, and transmitting the text back to the captioned device to display, wherein the step of receiving a request may be prior to establishment of the first communication link and wherein the step of receiving a request may be subsequent to establishment of the first communication link.
    Type: Grant
    Filed: June 9, 2014
    Date of Patent: December 23, 2014
    Assignee: Ultratec, Inc.
    Inventors: Robert M Engelke, Kevin R Colwell
  • Patent number: 8914291
    Abstract: Techniques for generating synthetic speech with contrastive stress. In one aspect, a speech-enabled application generates a text input including a text transcription of a desired speech output, and inputs the text input to a speech synthesis system. The synthesis system generates an audio speech output corresponding to at least a portion of the text input, with at least one portion carrying contrastive stress, and provides the audio speech output for the speech-enabled application. In another aspect, a speech-enabled application inputs a plurality of text strings, each corresponding to a portion of a desired speech output, to a software module for rendering contrastive stress. The software module identifies a plurality of audio recordings that render at least one portion of at least one of the text strings as speech carrying contrastive stress. The speech-enabled application generates an audio speech output corresponding to the desired speech output using the audio recordings.
    Type: Grant
    Filed: September 24, 2013
    Date of Patent: December 16, 2014
    Assignee: Nuance Communications, Inc.
    Inventors: Darren C. Meyer, Stephen R. Springer
  • Patent number: 8909523
    Abstract: A method determines a bias reduced noise and interference estimation in a binaural microphone configuration with a right and a left microphone signal at a time-frame with a target speaker active. The method includes a determination of the auto power spectral density estimate of the common noise formed of noise and interference components of the right and left microphone signals and a modification of the auto power spectral density estimate of the common noise by using an estimate of the magnitude squared coherence of the noise and interference components contained in the right and left microphone signals determined at a time frame without a target speaker active. An acoustic signal processing system and a hearing aid implement the method for determining the bias reduced noise and interference estimation. The noise reduction performance of speech enhancement algorithms is improved by the invention. Further, distortions of the target speech signal and residual noise and interference components are reduced.
    Type: Grant
    Filed: June 7, 2011
    Date of Patent: December 9, 2014
    Assignee: Siemens Medical Instruments Pte. Ltd.
    Inventors: Walter Kellermann, Klaus Reindl, Yuanhang Zheng
  • Patent number: 8908838
    Abstract: A system and method for providing captioned services comprising a relay and an assisted user's captioned device including a processor programmed to perform the steps of establishing a first communication link between the captioned device and a hearing person's device, receiving voice signals from the hearing person via the first communication link, receiving an indication that an activator has been activated to invoke a captioning service and, in response, transmitting the hearing user's voice signals received at the captioned device to a relay via a second communication link, receiving text back corresponding to the hearing user's voice signals from the relay, and displaying the text, wherein the assisted user can invoke the captioning service either prior to or after the first communication link is established.
    Type: Grant
    Filed: June 9, 2014
    Date of Patent: December 9, 2014
    Assignee: Ultratec, Inc.
    Inventors: Robert M Engelke, Kevin R Colwell
  • Patent number: 8909538
    Abstract: Improved methods of presenting speech prompts to a user as part of an automated system that employs speech recognition or other voice input are described. The invention improves the user interface by providing in combination with at least one user prompt seeking a voice response, an enhanced user keyword prompt intended to facilitate the user selecting a keyword to speak in response to the user prompt. The enhanced keyword prompts may be the same words as those a user can speak as a reply to the user prompt but presented using a different audio presentation method, e.g., speech rate, audio level, or speaker voice, than used for the user prompt. In some cases, the user keyword prompts are different words from the expected user response keywords, or portions of words, e.g., truncated versions of keywords.
    Type: Grant
    Filed: November 11, 2013
    Date of Patent: December 9, 2014
    Assignee: Verizon Patent and Licensing Inc.
    Inventor: James Mark Kondziela
  • Publication number: 20140358551
    Abstract: A speech aid system includes a tube for mounting at a tracheostomy of a user, a voice parameter acquiring device mounted to the tube and generating a voice parameter signal according to airflow applied within the tube resulting from attempt by the user to speak, a processor generating an audio signal corresponding to the voice parameter signal, and a sound generator for mounting in an oral cavity of the user. The sound generator produces a substitute glottal sound corresponding to the audio signal.
    Type: Application
    Filed: June 3, 2014
    Publication date: December 4, 2014
    Inventors: Ching-Feng LIU, Hsiao-Han CHEN
  • Patent number: 8892232
    Abstract: The invention describes the proprietary activities, services and devices provided to a networked community of hearing-impaired people that help improve wired, wireless and direct voice communications.
    Type: Grant
    Filed: November 20, 2012
    Date of Patent: November 18, 2014
    Inventor: Avraham Suhami
  • Patent number: 8888494
    Abstract: One or more embodiments present a script to a user in an interactive script environment. A digital representation of a manuscript is analyzed. This digital representation includes a set of roles and a set of information associated with each role in the set of roles. An active role in the set of roles that is associated with a given user is identified based on the analyzing. At least a portion of the manuscript is presented to the given user via a user interface. The portion includes at least a subset of information in the set of information. Information within the set of information that is associated with the active role is presented in a visually different manner than information within the set of information that is associated with a non-active role, which is a role that is associated with a user other than the given user.
    Type: Grant
    Filed: June 27, 2011
    Date of Patent: November 18, 2014
    Inventor: Randall Lee Threewits
  • Patent number: 8868426
    Abstract: The amount of speech output to a blind or low-vision user using a screen reader application is automatically adjusted based on how the user navigates to a control in a graphical user interface. Navigation by mouse presumes the user has greater knowledge of the identity of the control than navigation by tab keystroke, which is more indicative of a user searching for a control. In addition, accelerator keystrokes indicate a higher level of specificity in setting focus on a control, and thus less verbosity is required to sufficiently inform the screen reader user.
    Type: Grant
    Filed: August 23, 2012
    Date of Patent: October 21, 2014
    Assignee: Freedom Scientific, Inc.
    Inventors: Garald Lee Voorhees, Glen Gordon, Eric Damery
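    Sketch: a minimal Python illustration of verbosity keyed to the navigation method, as described in patent 8868426 above; the three verbosity levels and the announcement phrasing are assumptions for illustration.
    ```python
    VERBOSITY_BY_NAVIGATION = {
        # How the user reached the control -> how much the screen reader says.
        "accelerator": "name",                  # user clearly knew the target
        "mouse":       "name and role",         # user likely knows the target
        "tab":         "name, role, and hint",  # user is probably searching
    }

    def announce(control_name, control_role, hint, navigation_method):
        """Return the speech output for a control, with less verbosity when the
        navigation method suggests the user already knows what the control is."""
        level = VERBOSITY_BY_NAVIGATION.get(navigation_method, "name, role, and hint")
        if level == "name":
            return control_name
        if level == "name and role":
            return f"{control_name}, {control_role}"
        return f"{control_name}, {control_role}. {hint}"

    if __name__ == "__main__":
        print(announce("Save", "button", "Saves the current document.", "tab"))
    ```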
  • Patent number: 8868373
    Abstract: Disclosed are virtual reality systems, in particular immersive virtual reality systems, their parts, construction and use. The systems and/or parts thereof may be used by adults or children, and may be adapted to support, often within a single device, a large range of users of different sizes and medical conditions. Users with physical disabilities have difficulties using existing immersive technologies, such as those using accessories like head-mounted displays and data gloves. Such users are provided with immersive virtual reality outputs that allow them to see virtual representations of their body parts which appear in a correct spatial position relative to the users' viewpoint.
    Type: Grant
    Filed: August 19, 2009
    Date of Patent: October 21, 2014
    Assignee: Universitat Zurich Prorektorat MNW
    Inventors: Kynan Eng, Pawel Pyk, Edith Chevrier, Lisa Holper, Daniel Kiper