Handicap Aid Patents (Class 704/271)
  • Patent number: 7881939
    Abstract: A system for monitoring conditions associated with an individual in a region includes at least one speech input transducer and speech processing software coupled thereto. Results of the speech processing can initiate communications with a displaced communications device such as a telephone or a computer to provide a source of feedback.
    Type: Grant
    Filed: May 31, 2005
    Date of Patent: February 1, 2011
    Assignee: Honeywell International Inc.
    Inventor: Lee D. Tice
  • Publication number: 20110004468
    Abstract: A hearing aid for improving diminished hearing caused by reduced temporal resolution includes: a speech input unit (201) which receives a speech signal from outside; a speech analysis unit (202) which detects a sound segment and a segment acoustically regarded as soundless from the speech signal received by the speech input unit and detects a consonant segment and a vowel segment within the detected sound segment; and a signal processing unit (204) which temporally increments the consonant segment detected by the speech analysis unit (202) and temporally decrements at least one of the vowel segment and the segment acoustically regarded as soundless detected by the speech analysis unit (202).
    Type: Application
    Filed: January 28, 2010
    Publication date: January 6, 2011
    Inventors: Kazue Fusakawa, Gempo Ito
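    The segment-wise temporal expansion/compression this abstract describes can be sketched as plain per-segment resampling. This is only a toy illustration: the segment labels and the stretch/squeeze factors are assumptions, and a real hearing aid would use a pitch-preserving method (e.g. PSOLA) rather than naive interpolation, which also shifts pitch.

    ```python
    def resample(segment, factor):
        """Linearly interpolate a segment to `factor` times its length."""
        n_out = max(1, round(len(segment) * factor))
        if len(segment) == 1:
            return list(segment) * n_out
        out = []
        for i in range(n_out):
            pos = i * (len(segment) - 1) / (n_out - 1) if n_out > 1 else 0.0
            lo = int(pos)
            hi = min(lo + 1, len(segment) - 1)
            frac = pos - lo
            out.append(segment[lo] * (1 - frac) + segment[hi] * frac)
        return out

    def rebalance(samples, segments, stretch=1.5, squeeze=0.75):
        """Stretch consonant segments and compress the rest.

        `segments` is a list of (start, end, kind) tuples with kind in
        {'consonant', 'vowel', 'silence'} -- assumed to come from an
        upstream speech analysis stage like the patent's unit (202).
        """
        out = []
        for start, end, kind in segments:
            factor = stretch if kind == "consonant" else squeeze
            out.extend(resample(samples[start:end], factor))
        return out
    ```

    With a 20-sample consonant and an 80-sample vowel, the output is 30 + 60 = 90 samples: consonants gain time at the expense of vowels, keeping overall duration roughly constant.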
  • Patent number: 7844461
    Abstract: Provided are an information processing apparatus and method so adapted that, if a plurality of speech output units having a speech synthesizing function are present, a conversion is made to speech having mutually different feature quantities so that a user can readily be informed of which unit is providing the user with information such as alert information. Speech data that is output from another speech output unit is input from a communication unit (8) and stored in a RAM (7). A central processing unit (1) extracts a feature quantity relating to the input speech data. Further, the central processing unit (1) utilizes a speech synthesis dictionary (51) that has been stored in a storage device (5) and generates speech data having a feature quantity different from the extracted feature quantity. The generated speech data is output from a speech output unit (4).
    Type: Grant
    Filed: June 2, 2003
    Date of Patent: November 30, 2010
    Assignee: Canon Kabushiki Kaisha
    Inventor: Masayuki Yamada
  • Patent number: 7822613
    Abstract: In a vehicle-mounted control apparatus, a control unit 2 displays guidance on operating the vehicle-mounted control apparatus by voice input on a display device 6, and has the user carry out training so that the user can master the techniques for operating the vehicle-mounted control apparatus (in step ST4). At this time, using the voice which the user inputs in order to master these techniques, the voice recognition unit 5 learns the features of the user's voice in the background and computes recognition parameters. Thereby, the user can learn how to operate the vehicle-mounted control apparatus and can also register the features of his or her voice in the vehicle-mounted control apparatus.
    Type: Grant
    Filed: October 7, 2003
    Date of Patent: October 26, 2010
    Assignee: Mitsubishi Denki Kabushiki Kaisha
    Inventors: Tsutomu Matsubara, Masato Hirai, Emiko Kido, Fumitaka Sato
  • Patent number: 7809576
    Abstract: An interactive, light-activated voice recorder unit for a book, wherein the unit automatically initiates a playback mode when the front cover of the book is opened and light enters the unit.
    Type: Grant
    Filed: July 16, 2007
    Date of Patent: October 5, 2010
    Inventors: Lucient G. Lallouz, Sharon J. Fixman-Lallouz
  • Patent number: 7792676
    Abstract: Embodiments of the present invention comprise a system, method, and apparatus that provide a relatively real-time or near real-time interpretation or translation that may be utilized, preferably for a relatively short duration of time, on a network. A preferred embodiment of the present invention provides online, real-time, short-duration interpreting services in a network-based format. In preferred embodiments, the interpreting system comprises at least one provider computer, such as a server, wherein the provider computer is capable of communicating with user computers via a network. In one preferred embodiment, the provider computer provides a series of web pages that allow access to the interpreting system, including, but not limited to, a request-for-service page, wherein a user can access the system and input a request for interpreting services. Interpreting services are then provided to a user and a third party desiring to communicate with the user via the network.
    Type: Grant
    Filed: October 25, 2001
    Date of Patent: September 7, 2010
    Inventors: Robert Glenn Klinefelter, Gregory A. Piccionelli
  • Publication number: 20100222098
    Abstract: A mobile wireless communications device includes a housing and a transceiver carried by the housing for transmitting and receiving radio frequency (RF) signals carrying communications data of speech. A processor is coupled to the transceiver for processing the communications data as speech that is transmitted and received to and from the transceiver. A keyboard and display are carried by the housing and connected to the processor. A speech-to-text and text-to-speech module converts communications data as speech received from the transceiver into text that is displayed on the display, and converts text typed by a user on the keyboard into communications data as speech to be transmitted from the transceiver as an RF signal.
    Type: Application
    Filed: February 27, 2009
    Publication date: September 2, 2010
    Applicant: Research In Motion Limited
    Inventor: Neeraj GARG
  • Patent number: 7778834
    Abstract: The present disclosure presents a useful metric for assessing the relative difficulty which non-native speakers face in pronouncing a given utterance and a method and systems for using such a metric in the evaluation and assessment of the utterances of non-native speakers. In an embodiment, the metric may be based on both known sources of difficulty for language learners and a corpus-based measure of cross-language sound differences. The method may be applied to speakers of a first language producing utterances in any non-native second language.
    Type: Grant
    Filed: August 11, 2008
    Date of Patent: August 17, 2010
    Assignee: Educational Testing Service
    Inventors: Derrick Higgins, Klaus Zechner, Yoko Futagi, Rene Lawless
  • Publication number: 20100198582
    Abstract: Nothing exists like this Verbal Command Laptop Computer and Software worldwide for transferring electronic data interchange (EDI) information. It can be used by the elderly when they need to scan something: just set the item on a scanner and say, “scan, please.” The Verbal Command Laptop Computer and Software can be used to store names and addresses, and also as a fax machine: just say, “fax, please” or “email, please.” When the user wishes to use email, say, “check email, please” or “send email, please.” When wanting to use the Internet, say, “Internet, please,” or for the search engine say, “search engine.” The Verbal Command Laptop Computer and Software can handle all phases of a standard computer and also has its own search engine. One can also use verbal commands hands-free with the Cordless Microphone or Verbal Head Set.
    Type: Application
    Filed: February 2, 2009
    Publication date: August 5, 2010
    Inventor: Gregory Walker Johnson
  • Publication number: 20100174533
    Abstract: Techniques are described for automatically measuring fluency of a patient's speech based on prosodic characteristics thereof. The prosodic characteristics may include statistics regarding silent pauses, filled pauses, repetitions, or fundamental frequency of the patient's speech. The statistics may include a count, average number of occurrences, duration, average duration, frequency of occurrence, standard deviation, or other statistics. In one embodiment, a method includes receiving an audio sample that includes speech of a patient, analyzing the audio sample to identify prosodic characteristics of the speech of the patient, and automatically measuring fluency of the speech of the patient based on the prosodic characteristics. These techniques may present several advantages, such as objectively measuring fluency of a patient's speech without requiring a manual transcription or other manual intervention in the analysis process.
    Type: Application
    Filed: January 5, 2010
    Publication date: July 8, 2010
    Applicant: Regents of the University of Minnesota
    Inventor: Serguei V.S. Pakhomov
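    The silent-pause statistics named in the preceding abstract (count, average duration, standard deviation) can be approximated from per-frame energies with a simple run-length scan. The energy threshold and the minimum pause length below are illustrative assumptions, not values from the patent, which also covers filled pauses and fundamental-frequency statistics that this sketch omits.

    ```python
    import statistics

    def pause_statistics(frame_energies, threshold=0.01, min_pause_frames=3):
        """Find runs of low-energy frames (silent pauses) and summarize them.

        Durations are reported in frames; multiply by the frame hop to get
        seconds. Runs shorter than `min_pause_frames` are ignored as
        ordinary articulation gaps rather than disfluent pauses.
        """
        pauses, run = [], 0
        for e in frame_energies:
            if e < threshold:
                run += 1
            else:
                if run >= min_pause_frames:
                    pauses.append(run)
                run = 0
        if run >= min_pause_frames:
            pauses.append(run)
        return {
            "count": len(pauses),
            "mean": statistics.mean(pauses) if pauses else 0.0,
            "stdev": statistics.stdev(pauses) if len(pauses) > 1 else 0.0,
        }
    ```

    Feeding in a frame-energy track with two low-energy runs of 4 and 6 frames yields a count of 2 and a mean pause length of 5 frames.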
  • Patent number: 7746986
    Abstract: Systems and methods for displaying visual content to a user corresponding to sound captured at a user terminal are disclosed. After receiving over a network from a user terminal a request to convert sound into a visual content representing the sound, wherein the sound comprises one or more words, a translation server may retrieve text corresponding to the one or more words from a database. The translation server may then convert the text into one or more content phrases, wherein the content phrases represent the meaning of the one or more words, and convert each of the one or more content phrases into a new language. Finally, the translation server may send visual content to the user terminal representing the new language.
    Type: Grant
    Filed: June 15, 2006
    Date of Patent: June 29, 2010
    Assignee: Verizon Data Services LLC
    Inventors: Vittorio G. Bucchieri, Albert L. Schmidt, Jr.
  • Patent number: 7729907
    Abstract: In preparation for a full-fledged aged society, measures to prevent senility are required. Senility is prevented by extracting signals of prescribed bands from a speech signal using a first bandpass filter section having a plurality of bandpass filters, extracting the envelopes of each frequency band signal using an envelope extraction section having envelope extractors, applying a noise source signal to a second bandpass filter section having a plurality of bandpass filters and extracting noise signals corresponding to the prescribed bands, multiplying the outputs from the first bandpass filter section and the second bandpass filter section in a multiplication section, summing up the outputs from the multiplication section in an addition section to produce a Noise-Vocoded Speech Sound signal, and presenting the Noise-Vocoded Speech Sound signal for listening.
    Type: Grant
    Filed: February 21, 2005
    Date of Patent: June 1, 2010
    Assignee: Rion Co., Ltd.
    Inventor: Hiroshi Rikimaru
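    The noise-vocoding pipeline above (band split, envelope extraction, noise modulation, summation) can be illustrated with a toy two-band version. A moving average stands in for the low-pass branch of a real filter bank and its residual for the high band; the window sizes are arbitrary assumptions, and an actual implementation would use proper bandpass filters per the patent.

    ```python
    import random

    def moving_average(x, w):
        """Causal moving average with window w (a crude low-pass filter)."""
        out, acc = [], 0.0
        for i, v in enumerate(x):
            acc += v
            if i >= w:
                acc -= x[i - w]
            out.append(acc / min(i + 1, w))
        return out

    def noise_vocode(speech, env_window=32, seed=0):
        """Two-band toy noise vocoder.

        Split the signal into a smoothed low band and its residual high
        band, extract each band's amplitude envelope, use the envelope to
        modulate white noise, and sum the modulated bands.
        """
        rng = random.Random(seed)
        low = moving_average(speech, env_window)
        high = [s - l for s, l in zip(speech, low)]
        out = [0.0] * len(speech)
        for band in (low, high):
            env = moving_average([abs(v) for v in band], env_window)
            for i in range(len(out)):
                out[i] += env[i] * rng.uniform(-1.0, 1.0)
        return out
    ```

    Silence in produces silence out (zero envelopes gate the noise completely), while any voiced input yields envelope-shaped noise: intelligible rhythm and loudness contours without the original spectral detail.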
  • Publication number: 20100109918
    Abstract: A device for use by a deafblind person is disclosed. The device comprises a first key for manually inputting a series of words in the form of a code, a second key for manually inputting an action to be performed by the device, a third key for manually inputting a user preference, and a fourth key for manually inputting communication instructions. The device further has an internal processor programmed to carry out communication functions and search and guide functions. The device has various safety and security functions for pedestrians or persons in transit. In a preferred embodiment, the device comprises an electronic cane known as an eCane. Also disclosed is a system for allowing a deafblind person to enjoy television programs.
    Type: Application
    Filed: November 4, 2009
    Publication date: May 6, 2010
    Inventor: Raanan Liebermann
  • Publication number: 20100100388
    Abstract: A speech aid for persons with hypokinetic dysarthria, a speech disorder associated with Parkinson's disease. The speech aid alters the pitch at which the user hears his or her voice and/or provides multitalker babble noise to the speaker's ears. The speech aid induces increased speech motor activity and improves the intelligibility of the user's speech. The speech aid may be used with a variety of microphones, headphones, in one or both ears, with a voice amplifier, or connected to telephones.
    Type: Application
    Filed: October 12, 2009
    Publication date: April 22, 2010
    Inventor: Thomas David Kehoe
  • Patent number: 7702506
    Abstract: An object of the present invention is to provide a conversation support apparatus and a conversation support method that allow users to effectively and smoothly converse with each other. According to the present invention, since a first display section 22 and a second display section 32 can be placed at different angles, a first user can watch the second display section 32 while a second user watches the first display section 22, so they can smoothly converse with each other. Since the first display section 22 and the second display section 32 are disposed in this manner, the second user and the first user can converse with each other face-to-face.
    Type: Grant
    Filed: May 12, 2004
    Date of Patent: April 20, 2010
    Inventor: Takashi Yoshimine
  • Publication number: 20100063794
    Abstract: A sign language recognition apparatus and method is provided for translating hand gestures into speech or written text. The apparatus includes a number of sensors on the hand, arm and shoulder to measure dynamic and static gestures. The sensors are connected to a microprocessor to search a library of gestures and generate output signals that can then be used to produce a synthesized voice or written text. The apparatus includes sensors such as accelerometers on the fingers and thumb and two accelerometers on the back of the hand to detect motion and orientation of the hand. Sensors are also provided on the back of the hand or wrist to detect forearm rotation, an angle sensor to detect flexing of the elbow, two sensors on the upper arm to detect arm elevation and rotation, and a sensor on the upper arm to detect arm twist. The sensors transmit the data to the microprocessor to determine the shape, position and orientation of the hand relative to the body of the user.
    Type: Application
    Filed: July 21, 2009
    Publication date: March 11, 2010
    Inventor: Jose L. HERNANDEZ-REBOLLAR
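    The library-search step in the preceding abstract, matching a sensor reading against stored gesture templates, reduces to a nearest-neighbor lookup. The sketch below assumes fixed-length feature vectors and a squared-Euclidean distance; neither the feature layout nor the distance measure is specified in the patent.

    ```python
    def classify_gesture(library, reading):
        """Nearest-neighbor lookup of a sensor-reading vector against a
        library of labeled gesture templates.

        `library` maps a gesture label (e.g. a word to synthesize) to a
        template vector of the same length as `reading`.
        """
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))

        return min(library, key=lambda label: dist(library[label], reading))
    ```

    A real recognizer would also model the dynamic (time-varying) gestures the abstract mentions, e.g. with sequence models, rather than a single static template per sign.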
  • Publication number: 20100063822
    Abstract: A communication system that is specifically designed for the needs of speech impaired individuals, particularly aphasia victims, makes use of a speech generating mobile terminal communication device (SGMTD) (12) that is designed to be hand held and operated by a speech disabled individual. The SGMTD includes a database of audio files that are accessed to generate full sentences in response to single word or short phrase entries selected from a plurality of menus by the disabled user. A second, companion mobile terminal device (COMTD) (14) enables a caregiver to communicate with the speech disabled individual's SGMTD to assist the individual in communicating with the caregiver by causing the SGMTD to switch to a particular menu or list from which the caregiver wants the disabled individual to make a selection. The SGMTD also includes software that enables the device to communicate with other SGMTDs via wireless communications and thereby simulate a verbal conversation between speech impaired individuals.
    Type: Application
    Filed: April 21, 2008
    Publication date: March 11, 2010
    Inventors: Daniel C. O'Brien, Edward T. Buchholz
  • Patent number: 7676372
    Abstract: A speech transformation apparatus comprises a microphone 21 for detecting speech and generating a speech signal; a signal processor 22 for performing a speech recognition process using the speech signal; a speech information generator for transforming the recognition result responsive to the physical state of the user, the operating conditions, and/or the purpose for using the apparatus; and a display unit 26 and loudspeaker 25 for generating a control signal for outputting a raw recognition result and/or a transformed recognition result. In a speech transformation apparatus thus constituted, speech enunciated by a spoken-language-impaired individual can be transformed and presented to the user, and sounds from outside sources can also be transformed and presented to the user.
    Type: Grant
    Filed: February 16, 2000
    Date of Patent: March 9, 2010
    Assignee: Yugen Kaisha GM&M
    Inventor: Toshihiko Oba
  • Patent number: 7676368
    Abstract: The present invention is intended to perform text-to-speech conversion by replacing URLs and electronic mail addresses included in the text data of electronic mail with registered predetermined words. A mail watcher application control section executes the processing for converting electronic mail received by a MAPI mailer into speech data. The mail watcher application control section outputs URLs and electronic mail addresses included in the text data of electronic mail supplied from the MAPI mailer to a URL and mail address filter to replace them with registered predetermined names. Of the entered texts, the URL and mail address filter compares the URL or mail address included in the entered text with those registered in the URL and mail address table. If the URL or mail address of the entered text is found to match, the URL and mail address filter replaces it with the registered name and outputs it to the mail watcher application control section.
    Type: Grant
    Filed: July 2, 2002
    Date of Patent: March 9, 2010
    Assignee: Sony Corporation
    Inventors: Utaha Shizuka, Satoshi Fujimura, Yasuhiko Kato
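    The URL/address substitution this patent describes can be sketched with two regular expressions and a lookup table of registered names. The registry contents and the generic fallback phrases below are illustrative assumptions, and the patterns are deliberately simple; the point is that registered items get friendly names while unregistered ones are replaced by a generic phrase so the synthesizer never spells them out character by character.

    ```python
    import re

    # Hypothetical registry mapping known URLs/addresses to speakable names.
    REGISTRY = {
        "http://example.com/news": "the news page",
        "alice@example.com": "Alice",
    }

    URL_RE = re.compile(r"https?://\S+")
    MAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

    def speakable(text):
        """Replace registered URLs/addresses with their names, and
        unregistered ones with a generic phrase, before text-to-speech."""
        def sub(match, generic):
            return REGISTRY.get(match.group(0), generic)

        text = URL_RE.sub(lambda m: sub(m, "a web address"), text)
        text = MAIL_RE.sub(lambda m: sub(m, "an e-mail address"), text)
        return text
    ```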
  • Patent number: 7664636
    Abstract: The invention provides a system and method for indexing and organizing voice mail messages by the speaker of the message. One or more speaker models are created from the voice mail messages received. As additional messages are left, each of the new messages is compared with existing speaker models to determine the identity of the caller of each new message. The voice mail messages are organized within a user's mailbox by caller. Unknown callers may be identified and tagged by the user and then used to create new speaker models and/or update existing speaker models.
    Type: Grant
    Filed: April 17, 2000
    Date of Patent: February 16, 2010
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Julia Hirschberg, Sarangarajan Parthasarathy, Aaron Edward Rosenberg, Stephen Whittaker
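    The caller-matching step above, comparing a new message against existing speaker models and spawning a model for unknown callers, can be sketched as nearest-centroid assignment over feature vectors. The distance threshold and the single-centroid representation are simplifying assumptions; a real system would use statistical speaker models (e.g. Gaussian mixtures over acoustic features) rather than one vector per caller.

    ```python
    def distance(a, b):
        """Euclidean distance between two feature vectors."""
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def assign_speaker(models, message_features, threshold=1.0):
        """Match a message's feature vector against stored speaker models.

        Returns the best-matching speaker name; if no model is within
        `threshold`, registers a new 'unknown' model so the mailbox can
        still group that caller's future messages together.
        """
        best, best_d = None, float("inf")
        for name, centroid in models.items():
            d = distance(centroid, message_features)
            if d < best_d:
                best, best_d = name, d
        if best is not None and best_d <= threshold:
            return best
        name = f"unknown-{len(models)}"
        models[name] = list(message_features)
        return name
    ```

    Once the user tags an unknown caller, its model can simply be renamed, matching the abstract's tag-then-update workflow.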
  • Patent number: 7653543
    Abstract: The present invention is directed toward a method, device, and system for providing a high quality communication session. The system provides a way of determining speech characteristics of participants in the communication session and adjusting, if necessary, signals from a speaker to a listener such that the listener can more intelligibly understand what the speaker is saying.
    Type: Grant
    Filed: March 24, 2006
    Date of Patent: January 26, 2010
    Assignee: Avaya Inc.
    Inventors: Colin Blair, Jonathan R. Yee-Hang Choy, Andrew W. Lang, David Preshan Thambiratnam, Paul Roller Michaelis
  • Patent number: 7643997
    Abstract: The present invention relates to a handheld analysis instrument for assaying a medically significant sample. The instrument comprises a measuring device for measuring the concentration of an analyte in the sample and an output device for outputting measurement results. The output device has both an acoustic signal output device for outputting the measurement results through nonverbal acoustic signals and a wireless interface for communicating with an external speech output unit.
    Type: Grant
    Filed: June 5, 2006
    Date of Patent: January 5, 2010
    Assignee: Roche Diagnostics Operations, Inc.
    Inventors: Hans Kintzig, Jean Thilges
  • Patent number: 7636663
    Abstract: An on-vehicle acoustic control system determines which one of sounds of an audio device and a navigation device should be generated with priority, when both devices are requested to generate respective sounds. The control system further detects a user's physical condition based on an interaction with the user, a picture of the user and biometric information of the user. The control system generates sound in the order of determined priority, and varies the manner of sound generation based on the user's physical condition.
    Type: Grant
    Filed: September 20, 2005
    Date of Patent: December 22, 2009
    Assignee: Denso Corporation
    Inventors: Ichiro Yoshida, Kazunao Yamada
  • Publication number: 20090287490
    Abstract: Embodiments of the invention may be used to enhance the presentation of a virtual environment for certain users, e.g., a visually impaired user. Because users may visit, and revisit, locations within the virtual environment, the state of elements in the virtual environment may change. Accordingly, audible descriptions of an object, person or environment, may be adjusted to prevent redundant or unnecessary descriptions. For example, when the user encounters a given element a second time, rather than describe each characteristic of the element, only changes to the characteristics of the element are described.
    Type: Application
    Filed: May 14, 2008
    Publication date: November 19, 2009
    Inventors: Brian John Cragun, Zachary Adam Garbow, Christopher A. Peterson
  • Patent number: 7613613
    Abstract: A method and system for presenting lip-synchronized speech corresponding to the text received in real time is provided. A lip synchronization system provides an image of a character that is to be portrayed as speaking text received in real time. The lip synchronization system receives a sequence of text corresponding to the speech of the character. It may modify the received text in various ways before synchronizing the lips. It may generate phonemes for the modified text that are adapted to certain idioms. The lip synchronization system then generates the lip-synchronized images based on the phonemes generated from the modified texts and based on the identified expressions.
    Type: Grant
    Filed: December 10, 2004
    Date of Patent: November 3, 2009
    Assignee: Microsoft Corporation
    Inventors: Timothy V. Fields, Brandon Cotton
  • Publication number: 20090264789
    Abstract: A set of therapy parameter values is selected based on a patient state, where the patient state comprises a speech state or a mixed patient state including the speech state and at least one of a movement state or a sleep state. In this way, therapy delivery is tailored to the patient state, which may include one or more patient symptoms specific to the patient state. In some examples, a medical device determines whether the patient is in the speech state or a mixed patient state including the speech state based on a signal generated by a voice activity sensor. The voice activity sensor detects the use of the patient's voice, and may include a microphone, a vibration detector or an accelerometer.
    Type: Application
    Filed: April 28, 2009
    Publication date: October 22, 2009
    Inventors: Gregory F. Molnar, Richard T. Stone, Xuan Wei
  • Publication number: 20090259689
    Abstract: A way of delivering recipe preparation instructions to disabled individuals is provided using an interactive cooking preparation device. The device retrieves an instruction delivery preference that corresponds to a user with a disability, such as a hearing or sight disability. The user then selects a recipe from a list of recipes. Preparation steps that correspond to the selected recipe are retrieved from a data store, such as a database. The retrieved preparation steps are provided to the user using the interactive cooking preparation device, which provides the preparation steps in an alternative delivery mode based on the user's delivery preference.
    Type: Application
    Filed: April 15, 2008
    Publication date: October 15, 2009
    Applicant: International Business Machines Corporation
    Inventors: Lydia Mai Do, Travis M. Grigsby, Pamela Ann Nesbitt, Lisa Anne Seacat
  • Publication number: 20090259473
    Abstract: Methods and apparatus to present a video program to a visually impaired person are disclosed. An example method comprises receiving a video stream and an associated audio stream of a video program, detecting a portion of the video program that is not readily consumable by a visually impaired person, obtaining text associated with the portion of the video program, converting the text to a second audio stream, and combining the second audio stream with the associated audio stream.
    Type: Application
    Filed: April 14, 2008
    Publication date: October 15, 2009
    Inventors: Hisao M. Chang, Horst Schroeter
  • Publication number: 20090210231
    Abstract: Stuttering treatment methods and apparatus which utilize removable oral-based appliances having actuators which are attached, adhered, or otherwise embedded into or upon a dental or oral appliance are described. Such oral appliances may receive the user's voice and process the voice to introduce a time delay and/or a frequency shift. The altered audio feedback signal is then transmitted back to the user through a tooth, teeth, or other bone via a vibrating actuator element. The actuator element may utilize electromagnetic or piezoelectric actuator mechanisms and may be positioned directly along the dentition or along an oral appliance housing in various configurations.
    Type: Application
    Filed: February 15, 2008
    Publication date: August 20, 2009
    Inventors: John SPIRIDIGLIOZZI, Amir ABOLFATHI
  • Patent number: 7565295
    Abstract: A sign language recognition apparatus and method is provided for translating hand gestures into speech or written text. The apparatus includes a number of sensors on the hand, arm and shoulder to measure dynamic and static gestures. The sensors are connected to a microprocessor to search a library of gestures and generate output signals that can then be used to produce a synthesized voice or written text. The apparatus includes sensors such as accelerometers on the fingers and thumb and two accelerometers on the back of the hand to detect motion and orientation of the hand. Sensors are also provided on the back of the hand or wrist to detect forearm rotation, an angle sensor to detect flexing of the elbow, two sensors on the upper arm to detect arm elevation and rotation, and a sensor on the upper arm to detect arm twist. The sensors transmit the data to the microprocessor to determine the shape, position and orientation of the hand relative to the body of the user.
    Type: Grant
    Filed: August 27, 2004
    Date of Patent: July 21, 2009
    Assignee: The George Washington University
    Inventor: Jose L. Hernandez-Rebollar
  • Patent number: 7555521
    Abstract: The present invention informs a recipient of a holding non-voice call and allows the recipient to participate in real time text communication. A voice/text server is capable of communicating with a text device across a network or the Internet. The text device transmits the non-voice call to the voice/text server where the non-voice call is stored in a memory. The voice/text server generates a placeholder call and transmits the placeholder call to a voice telephone. The recipient receives the placeholder call by use of a voice telephone. A local text device generates a text link that is transmitted to the voice/text server. The voice/text server matches the text link to the holding text call. Text communication may then proceed between the users in any number of formats.
    Type: Grant
    Filed: April 10, 2003
    Date of Patent: June 30, 2009
    Assignee: NXI Communications, Inc.
    Inventors: Thomas J. McLaughlin, Jeff F. Knighton, Alan S. Call
  • Publication number: 20090138270
    Abstract: The provision of speech therapy to a learner (76) entails receiving a speech signal (156) from the learner (76) at a computing system (24). The speech signal (156) corresponds to an utterance (116) made by the learner (76). A set of parameters (166) is ascertained from the speech signal (156). The parameters (166) represent a contact pattern (52) between a tongue and palate of the learner (76) during the utterance (116). For each parameter in the set of parameters (166), a deviation measure (188) is calculated relative to a corresponding parameter from a set of normative parameters (138) characterizing an ideal pronunciation of the utterance (116). An accuracy score (56) for the utterance (116), relative to its ideal pronunciation, is generated from the deviation measure (188). The accuracy score (56) is provided to the learner (76) to visualize accuracy of the utterance (116) relative to its ideal pronunciation.
    Type: Application
    Filed: November 26, 2007
    Publication date: May 28, 2009
    Inventors: Samuel G. Fletcher, Dah-Jye Lee, Jared Darrell Turpin
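    The per-parameter deviation measure and the accuracy score it feeds can be sketched as a tolerance-normalized mean deviation mapped onto a 0-100 scale. The linear mapping and the tolerance vector are assumptions for illustration, not the patent's actual formula.

    ```python
    def accuracy_score(params, normative, tolerances):
        """Score an utterance against normative parameters.

        Each parameter's absolute deviation from the norm is divided by
        its tolerance, so parameters on different scales contribute
        comparably. A score of 100 means every parameter matches the
        norm; the score falls linearly with the mean normalized
        deviation and is floored at 0.
        """
        deviations = [
            abs(p - n) / t for p, n, t in zip(params, normative, tolerances)
        ]
        mean_dev = sum(deviations) / len(deviations)
        return max(0.0, 100.0 * (1.0 - mean_dev))
    ```

    For example, one parameter off by exactly its tolerance and one matching the norm gives a mean deviation of 0.5 and a score of 50.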
  • Publication number: 20090119109
    Abstract: The invention describes a computer-based system that asks (101) a patient to pronounce a word displayed on a monitor, automatically assesses (104, 105) the speech quality, and uses suitable means to feed back (106) any improvement or deterioration of speech quality.
    Type: Application
    Filed: May 11, 2007
    Publication date: May 7, 2009
    Applicant: KONINKLIJKE PHILIPS ELECTRONICS N.V.
    Inventors: Richard Willmann, Gerd Lanfermann, Dieter Geller
  • Publication number: 20090112601
    Abstract: A human interface device for assisting the verbally challenged to record custom messages and play back the custom and pre-recorded messages through a sequence of simple finger movements. A data glove containing Hall effect and bend resistor sensors is worn by the user and connected to the Voice Module. The data glove is designed to capture and translate the sequence of finger movements into actions, then transmits the actions to the Voice Module. When a pause is sensed in the actions, the Voice Module links these actions to the custom or pre-recorded messages. These messages are then played on the Voice Module, allowing people in close proximity to hear them. A Remote Voice Monitor may also be wirelessly connected to the Voice Module to allow remote monitoring.
    Type: Application
    Filed: October 25, 2007
    Publication date: April 30, 2009
    Inventor: Larry Don Fullmer
  • Publication number: 20090099848
    Abstract: The present invention is an innovative system and method for the passive diagnosis of dementias. The disclosed invention enables early diagnosis of, and assessment of the efficacy of medications for, neural disorders which are characterized by progressive linguistic decline and circadian speech-rhythm disturbances. Clinical and psychometric indicators of dementias are automatically identified by longitudinal statistical measurements that track the nature of language change and/or changes in patient audio features using mathematical methods. According to embodiments of the present invention, the disclosed system and method include multi-layer processing units wherein initial processing of the recorded audio data is performed in a local unit. Processed and required raw data are also transferred to a central unit which performs in-depth analysis of the audio data.
    Type: Application
    Filed: October 16, 2007
    Publication date: April 16, 2009
    Inventors: Moshe Lerner, Ofer Bahar
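    The longitudinal statistical measurement underlying this approach can be illustrated with a least-squares trend over repeated sessions of some linguistic measure (vocabulary richness per recording is an assumed example; the patent does not specify the measure or the decline criterion). A sustained negative slope across sessions is the kind of signal such a system would flag.

    ```python
    def linear_trend(times, values):
        """Least-squares slope of a longitudinal measure.

        `times` are session timestamps (any consistent unit) and
        `values` the measure at each session, e.g. a type-token ratio.
        A persistently negative slope suggests linguistic decline.
        """
        n = len(times)
        mt = sum(times) / n
        mv = sum(values) / n
        num = sum((t - mt) * (v - mv) for t, v in zip(times, values))
        den = sum((t - mt) ** 2 for t in times)
        return num / den
    ```

    Four sessions with values 4, 3, 2, 1 at times 0-3 give a slope of exactly -1.0 per time unit.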
  • Publication number: 20090100150
    Abstract: The present invention provides an assistive technology screen reader in a distributed network computer system. The screen reader, on a server computer system, receives display information output from one or more applications. The screen reader converts the text and symbolic content of the display information into a performant format for transmission across a network. The screen reader, on a client computer system, receives the performant format. The received performant format is converted to a device type file, by the screen reader. The screen reader then presents the device type file to a device driver, for output to a speaker, braille reader, or the like.
    Type: Application
    Filed: June 14, 2002
    Publication date: April 16, 2009
    Inventor: David Yee
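The server-to-client flow in the abstract above could be sketched as a compact serialize-compress-expand round trip. The patent does not define its "performant format"; JSON plus `zlib` compression here is purely an assumed stand-in, as are the role/text field names:

```python
import json
import zlib

def encode_display(items):
    """Server side: serialize (role, text) pairs captured from application
    display output into a compressed payload for network transmission."""
    payload = json.dumps([{"role": r, "text": t} for r, t in items])
    return zlib.compress(payload.encode("utf-8"))

def decode_to_device(blob, device="speech"):
    """Client side: expand the payload into lines a device driver can
    consume — spoken role + text, or text only for a braille reader."""
    items = json.loads(zlib.decompress(blob).decode("utf-8"))
    if device == "speech":
        return [f'{it["role"]}: {it["text"]}' for it in items]
    return [it["text"] for it in items]
```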
  • Patent number: 7519537
    Abstract: An interface system including a manipulandum adapted to be moveable according to a manual gesture imparted by the user; a sensor adapted to detect a characteristic of the manual gesture imparted to the manipulandum and to generate a sensor signal representing the detected characteristic of the manual gesture; a microphone adapted to detect a characteristic of an utterance spoken by the user and to generate an audio signal representing the detected characteristic of the spoken utterance; and a control system adapted to receive the generated sensor and audio signals and to transmit a command signal to an electronic device via a communication link, the command signal being based on the generated sensor and audio signals and the time synchronization between them.
    Type: Grant
    Filed: October 7, 2005
    Date of Patent: April 14, 2009
    Assignee: Outland Research, LLC
    Inventor: Louis B. Rosenberg
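The time synchronization between gesture and utterance described in the abstract above can be illustrated by pairing timestamped events that fall within a window of each other. The window size, event shapes, and pairing rule are assumptions made for the sketch:

```python
def fuse(gestures, utterances, window=0.5):
    """Pair each spoken command with the gesture closest in time.

    gestures:   list of (timestamp, gesture_name)
    utterances: list of (timestamp, word)
    Returns combined commands such as ("point", "on") when a gesture and
    an utterance occur within `window` seconds of each other.
    """
    commands = []
    for ut, word in utterances:
        best = min(gestures, key=lambda g: abs(g[0] - ut), default=None)
        if best is not None and abs(best[0] - ut) <= window:
            commands.append((best[1], word))
    return commands
```

A command signal is emitted only when both modalities agree in time, which is the essence of the claimed sensor/audio fusion.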
  • Patent number: 7509255
    Abstract: An apparatus for processing a speech signal includes a receiver, a speech signal decoder, a speech rate conversion information detector, and a speech rate converting processor. The receiver receives, through a transmission line, a multiplexed signal of control and program information that includes speech packets. The decoder decodes the speech signal from the packets in the received signal. The detector detects speech rate conversion execution information in the received signal. The processor subjects the decoded speech signal to a speech rate conversion process if the speech rate conversion execution information indicates that the speech signal has not been subjected to the speech rate conversion process on the transmitting end, and leaves the decoded speech signal unconverted if the information indicates that the conversion has already been performed on the transmitting end.
    Type: Grant
    Filed: September 28, 2004
    Date of Patent: March 24, 2009
    Assignee: Victor Company of Japan, Limited
    Inventors: Hiroyuki Takeishi, Yutaka Ichinoi
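The flag-guarded conversion in the abstract above reduces to a simple check before processing. The naive sample-repetition used below is only a placeholder for a real time-scale modifier; the flag name and rate value are likewise assumptions:

```python
def maybe_convert(decoded_samples, already_converted, rate=0.8):
    """Apply rate conversion only if the transmitting end has not done so.

    `already_converted` mirrors the speech-rate-conversion execution
    information carried in the received stream; `rate` < 1 slows speech.
    Sample repetition stands in for a proper time-scale modification.
    """
    if already_converted:
        return decoded_samples  # avoid double conversion
    out = []
    acc = 0.0
    for s in decoded_samples:
        acc += 1.0 / rate
        while acc >= 1.0:
            out.append(s)
            acc -= 1.0
    return out
```

The point of the flag is exactly what the guard shows: speech already slowed at the transmitter must pass through untouched, or it would be slowed twice.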
  • Publication number: 20090076825
    Abstract: A portable assistive listening system for enhancing sound for hearing impaired individuals includes a fully functional hearing aid and a separate handheld digital signal processing (DSP) device. The focus of the present invention is directed to the handheld DSP device and a unique method of processing incoming audio signals. The DSP device includes a programmable digital signal processor, a UWB transceiver for communicating with the hearing aid and/or other wireless audio sources, an LCD display, and a user input device (keypad). The handheld device is user programmable to apply different sound enhancement algorithms for enhancing sound signals received from the hearing aid and/or other audio source. The handheld device is capable of receiving audio signals from multiple sources, and gives the user control over selection of incoming sound sources and selective enhancement of sound.
    Type: Application
    Filed: September 13, 2007
    Publication date: March 19, 2009
    Applicant: BIONICA CORPORATION
    Inventors: KIPP BRADFORD, RALPH A. BECKMAN, JOHN F. MURPHY, III
  • Publication number: 20090055192
    Abstract: A device for use by a deafblind person is disclosed. The device comprises a first key for manually inputting a series of words in the form of a code, a second key for manually inputting an action to be performed by the device, a third key for manually inputting a user preference, and a fourth key for manually inputting communication instructions. The device further has an internal processor programmed to carry out communication functions and search and guide functions. The device has various safety and security functions for pedestrians or persons in transit. In a preferred embodiment, the device comprises an electronic cane known as an eCane. Also disclosed is a system for allowing a deafblind person to enjoy television programs.
    Type: Application
    Filed: November 3, 2008
    Publication date: February 26, 2009
    Inventor: Raanan Liebermann
  • Patent number: 7483834
    Abstract: The invention includes an apparatus and method of providing information using an information appliance coupled to a network. The method includes storing text files in a database at a remote location and converting, at the remote location, the text files into speech files. A requested portion of the speech files is downloaded to the information appliance and presented through an audio speaker. The speech files may include audio of electronic program guide (EPG) information, weather information, news information or other information. The method also includes converting the text files into speech files at the remote location using an English text-to-speech (TTS) synthesizer, a Spanish TTS synthesizer, or another language synthesizer. A voice personality may be selected to announce the speech files.
    Type: Grant
    Filed: November 30, 2001
    Date of Patent: January 27, 2009
    Assignee: Panasonic Corporation
    Inventors: Saiprasad V. Naimpally, Vasanth Shreesha
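The store-convert-download flow in the abstract above can be pictured as a small server object that dispatches to a per-language synthesizer and caches results. The class, its method names, and the callable-per-language design are all invented for the sketch:

```python
class SpeechFileServer:
    """Sketch of the remote-conversion flow: store text files, convert on
    request with a synthesizer chosen by language, and cache speech files
    so repeated downloads do not re-synthesize."""

    def __init__(self, synthesizers):
        # e.g. {"en": english_tts, "es": spanish_tts}; each is a callable
        # taking text and returning a speech file (represented as any object)
        self.synthesizers = synthesizers
        self.texts = {}
        self.cache = {}

    def store(self, name, text):
        self.texts[name] = text

    def fetch_speech(self, name, language):
        key = (name, language)
        if key not in self.cache:
            tts = self.synthesizers[language]
            self.cache[key] = tts(self.texts[name])
        return self.cache[key]
```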
  • Publication number: 20090024183
    Abstract: Methods and devices (620, 1220, 1410) to deliver a tactile speech analog to a person's skin (404, 604, 1082, 1440), providing a silent, invisible, hands-free, eyes-free, and ears-free way to receive and directly comprehend electronic communications (1600b). Embodiments include an alternative to hearing aids that will enable people with hearing loss to better understand speech. A device (1410), worn like a watch or bracelet, supplements a person's remaining hearing to help identify and disambiguate those sounds he or she cannot hear properly. Embodiments for hearing aids (620) and hearing prosthetics (1220) are also described.
    Type: Application
    Filed: August 3, 2006
    Publication date: January 22, 2009
    Inventor: Mark I. Fitchmun
  • Patent number: 7480865
    Abstract: An auxiliary operation interface of a digital recording/reproducing apparatus includes a targeting item, a switching button set and an audio prompt generator. The targeting item is optionally triggered to have the digital recording/reproducing apparatus execute a selected function. The audio prompt generator is enabled to generate an audio prompt when the targeting item is triggered. The audio prompt generator is optionally enabled or disabled by an operation of the switching button set.
    Type: Grant
    Filed: October 20, 2005
    Date of Patent: January 20, 2009
    Assignee: Lite-On It Corp.
    Inventor: Chia-Hsiang Lin
  • Patent number: 7480616
    Abstract: Information relating to an amount of muscle activity is extracted from a myo-electrical signal by activity amount information extraction means, and information recognition is performed by activity amount information recognition means using the information relating to the amount of muscle activity of a speaker. Because there is a prescribed correspondence relationship between the amount of muscle activity of a speaker and the phoneme being uttered, the content of an utterance can be recognized with a high recognition rate by information recognition using information relating to the amount of muscle activity.
    Type: Grant
    Filed: February 27, 2003
    Date of Patent: January 20, 2009
    Assignee: NTT DoCoMo, Inc.
    Inventors: Hiroyuki Manabe, Akira Hiraiwa, Toshiaki Sugimura
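The correspondence between muscle-activity amount and phoneme described in the abstract above could be caricatured as a measured activity level looked up in a range table. Real systems learn this mapping from data; the thresholds and phoneme labels below are illustrative only:

```python
def muscle_activity(emg_window):
    """Amount of muscle activity: mean absolute value of the rectified
    myo-electrical signal over a short window."""
    return sum(abs(v) for v in emg_window) / len(emg_window)

# Hypothetical activity-range -> phoneme table (illustrative values only).
PHONEME_TABLE = [
    ((0.0, 0.2), "silence"),
    ((0.2, 0.6), "a"),
    ((0.6, 1.1), "i"),
]

def recognize(emg_window):
    """Map the measured activity amount to a phoneme via the table."""
    amount = muscle_activity(emg_window)
    for (lo, hi), phoneme in PHONEME_TABLE:
        if lo <= amount < hi:
            return phoneme
    return "?"
```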
  • Publication number: 20080300885
    Abstract: A speech communication system for patients having difficulty in speaking or writing comprises a display screen, a controller, a host having a storage unit for storing specific software and connected with the display screen, and a speaker connected with the host. A plurality of choices is presented on the display screen in a nine-square form or an English keyboard form for patients to select according to their needs. The controller is used by patients having difficulty in speaking or writing to move a cursor on the display screen to select any choice they need. The speaker outputs speech sounds of words or simple sentences in different languages corresponding to the choices patients select via the controller, thus making it possible for patients to communicate with others.
    Type: Application
    Filed: October 11, 2007
    Publication date: December 4, 2008
    Inventors: Chung-Hung Shih, Ching-An Liaw
  • Patent number: 7446669
    Abstract: A device for use by a deafblind person is disclosed. The device comprises a first key for manually inputting a series of words in the form of a code, a second key for manually inputting an action to be performed by the device, a third key for manually inputting a user preference, and a fourth key for manually inputting communication instructions. The device further has an internal processor programmed to carry out communication functions and search and guide functions. The device has various safety and security functions for pedestrians or persons in transit. In a preferred embodiment, the device comprises an electronic cane known as an eCane. Also disclosed is a system for allowing a deafblind person to enjoy television programs.
    Type: Grant
    Filed: July 2, 2003
    Date of Patent: November 4, 2008
    Inventor: Raanan Liebermann
  • Patent number: 7433818
    Abstract: A subscriber terminal is provided for speech-to-text translation. Speech packets are received at a broadband telephony interface and stored in a buffer. The speech packets are processed and textual representations thereof are displayed as words on a display device. Speech processing is activated and deactivated in response to a command from a subscriber.
    Type: Grant
    Filed: February 1, 2006
    Date of Patent: October 7, 2008
    Assignee: AT&T Corp.
    Inventors: Charles David Caldwell, John Bruce Harlow, Robert J. Sayko, Norman Shaye
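The buffer-then-display behavior in the abstract above, gated by a subscriber command, could be sketched like this. The class shape and the injected `transcribe` callable are hypothetical stand-ins for the terminal's real speech processing:

```python
class SpeechToTextTerminal:
    """Sketch of the subscriber terminal: speech packets are buffered at
    the broadband telephony interface, and textual representations are
    produced only while processing is activated by a subscriber command."""

    def __init__(self, transcribe):
        self.transcribe = transcribe  # stand-in for a real recognizer
        self.buffer = []
        self.active = False

    def receive(self, packet):
        self.buffer.append(packet)

    def command(self, activate):
        """Subscriber command toggling speech processing on or off."""
        self.active = activate

    def display_words(self):
        """Drain the buffer into displayed words, but only when active."""
        if not self.active:
            return []
        words = [self.transcribe(p) for p in self.buffer]
        self.buffer.clear()
        return words
```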
  • Publication number: 20080215332
    Abstract: Cochlear implant performance is improved by extracting pitch information and encoding such pitch information into the processor of a cochlear implant. One embodiment of the invention is to explicitly extract the pitch and deliver it to the cochlear implant by co-varying the stimulation site and rate. Another embodiment of the invention is to implicitly encode the pitch information via a code book that serves as the carrier of stimulation in the cochlear implant.
    Type: Application
    Filed: July 20, 2007
    Publication date: September 4, 2008
    Inventors: Fan-Gang Zeng, Hongbin Chen
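The explicit pitch extraction mentioned in the abstract above is commonly done by autocorrelation; the sketch below is one textbook variant, not the patent's algorithm, and the search range limits are assumed values:

```python
def autocorr_pitch(samples, sample_rate, fmin=60.0, fmax=400.0):
    """Estimate pitch (Hz) by picking the lag with the highest
    autocorrelation inside the plausible voice-pitch range."""
    lo = int(sample_rate / fmax)  # shortest lag = highest pitch
    hi = int(sample_rate / fmin)  # longest lag = lowest pitch
    best_lag, best_score = 0, 0.0
    for lag in range(lo, min(hi, len(samples) - 1)):
        score = sum(samples[i] * samples[i + lag]
                    for i in range(len(samples) - lag))
        if score > best_score:
            best_score, best_lag = score, lag
    return sample_rate / best_lag if best_lag else 0.0
```

The extracted pitch would then drive stimulation parameters (site and rate in one embodiment) rather than be rendered as audio.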
  • Patent number: 7421392
    Abstract: The present invention provides a diagnostic device that presents, to a patient, a Noise-Vocoded Speech Sound signal obtained by dividing at least one portion of a sound signal into a single or a plurality of frequency band signals and subjecting the frequency band signals to noise, and that analyzes the content of the response recognized by the patient together with the presented stimulus to diagnose a disease of the patient based on the analysis results, so that diagnosis, including determining the disease of the patient and estimating a damaged site, can be performed.
    Type: Grant
    Filed: December 9, 2003
    Date of Patent: September 2, 2008
    Assignee: RION Co., Ltd.
    Inventor: Hiroshi Rikimaru
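Noise vocoding, the stimulus-generation step named in the abstract above, keeps a signal's amplitude envelope while replacing its fine structure with noise. A one-band, frame-based caricature (the real device works per frequency band, and the frame size here is an assumption):

```python
import random

def noise_vocode(samples, frame=64, seed=0):
    """One-band sketch of noise vocoding: measure the amplitude envelope
    frame by frame, then re-impose it on white noise, discarding the
    fine spectral structure of the original speech."""
    rng = random.Random(seed)  # seeded so the output is reproducible
    out = []
    for start in range(0, len(samples), frame):
        chunk = samples[start:start + frame]
        env = sum(abs(s) for s in chunk) / len(chunk)  # mean-abs envelope
        out.extend(env * rng.uniform(-1.0, 1.0) for _ in chunk)
    return out
```

Listeners with intact temporal processing can still recognize speech from the preserved envelope, which is what makes the stimulus diagnostically useful.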
  • Patent number: RE41002
    Abstract: An electronic communications system for the deaf includes a video apparatus for observing and digitizing the facial, body and hand and finger signing motions of a deaf person, an electronic translator for translating the digitized signing motions into words and phrases, and an electronic output for the words and phrases. The video apparatus desirably includes both a video camera and a video display which will display signing motions provided by translating spoken words of a hearing person into digitized images. The system may function as a translator by outputting the translated words and phrases as synthetic speech at the deaf person's location for another person at that location, and that person's speech may be picked up, translated, and displayed as signing motions on a display in the video apparatus.
    Type: Grant
    Filed: June 23, 2000
    Date of Patent: November 24, 2009
    Inventor: Raanan Liebermann