Handicap Aid Patents (Class 704/271)
-
Patent number: 8082149
Abstract: A method for myoelectric-based processing of speech. The method includes capturing a myoelectric signal from a user using at least one electrode, wherein the electrode converts an ionic current generated by muscle contraction into an electric current. The method also includes amplifying the electric current, filtering the amplified electric current, and converting the filtered electric current into a digital signal. The method further includes transmitting the myoelectric signal to a digital device, transforming the digital signal into a written representation using an automatic speech recognition method, and generating an audible output from the written representation using a speech synthesis method.
Type: Grant
Filed: October 26, 2007
Date of Patent: December 20, 2011
Assignee: Biosensic, LLC
Inventors: Tanja Schultz, Alexander Waibel
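The abstract above describes a front-end chain of amplification, filtering, and digitization before recognition and synthesis. The following is a minimal sketch of that signal-conditioning step only, using scipy; the gain, band edges, sample rate, and 12-bit range are illustrative assumptions, not values from the patent, and the ASR/TTS stages are left as placeholders.

```python
# Sketch of the EMG front end: amplify, band-pass filter, and digitize.
# All numeric values below are assumptions for demonstration.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 2000             # assumed sampling rate of the EMG acquisition (Hz)
GAIN = 1000.0         # assumed amplifier gain
BAND = (20.0, 450.0)  # assumed EMG band of interest (Hz)

def condition_emg(raw_signal: np.ndarray) -> np.ndarray:
    """Amplify, band-pass filter, and quantize an EMG trace to 12 bits."""
    amplified = raw_signal * GAIN
    b, a = butter(4, [BAND[0] / (FS / 2), BAND[1] / (FS / 2)], btype="band")
    filtered = filtfilt(b, a, amplified)
    scale = np.max(np.abs(filtered)) or 1.0
    digital = np.round(filtered / scale * 2047).astype(np.int16)  # 12-bit ADC step
    return digital

if __name__ == "__main__":
    t = np.arange(0, 1.0, 1 / FS)
    fake_emg = 1e-3 * np.random.randn(t.size)   # stand-in for electrode output
    samples = condition_emg(fake_emg)
    # The digitized samples would then feed an ASR model and, from its text
    # output, a speech synthesizer -- both omitted here.
    print(samples[:10])
```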
-
Patent number: 8065154
Abstract: The present invention broadly comprises a computer-based method for aiding aphasics having gross and fine motor impairments in efficiently communicating, comprising the steps of storing alphanumeric characters in a database, calculating statistics of the alphanumeric characters based on frequency used and most recent used, and predicting a response according to the statistics of the alphanumeric characters, wherein the steps of storing, calculating, and predicting are performed by a general purpose computer specially programmed to perform the steps of storing, calculating, and predicting.
Type: Grant
Filed: July 29, 2005
Date of Patent: November 22, 2011
Assignee: The Research Foundation of State University of New York
Inventors: Kris Schindler, Michael Buckley
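As a rough illustration of the store/calculate/predict steps above, the sketch below ranks stored entries by use frequency and recency. The particular scoring rule (count plus a weighted recency term) is my own assumption; the patent does not specify it.

```python
# Hedged sketch of frequency/recency-based response prediction.
import time
from collections import defaultdict

class ResponsePredictor:
    def __init__(self, recency_weight: float = 0.5):
        self.counts = defaultdict(int)   # how often each entry was used
        self.last_used = {}              # when each entry was last used
        self.recency_weight = recency_weight

    def store(self, entry: str) -> None:
        """Record one use of an alphanumeric entry."""
        self.counts[entry] += 1
        self.last_used[entry] = time.time()

    def predict(self, top_n: int = 3) -> list:
        """Return the top-N entries ranked by frequency and recency."""
        now = time.time()
        def score(entry):
            recency = 1.0 / (1.0 + now - self.last_used[entry])
            return self.counts[entry] + self.recency_weight * recency
        return sorted(self.counts, key=score, reverse=True)[:top_n]

predictor = ResponsePredictor()
for word in ["water", "yes", "water", "help", "water", "yes"]:
    predictor.store(word)
print(predictor.predict())   # e.g. ['water', 'yes', 'help']
```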
-
Publication number: 20110257977
Abstract: A system and method for presenting and editing associations between graphics and audio useful for assisting people who have difficulty in speaking. This may include presenting graphics and audio information together with topic information, thus allowing a user to create outlines of speech needed for different events and situations. When a user selects a topic, images appear, and by selecting an image a device is instructed to play the associated audio file. Images and audio may be outlined into useful constructs to provide for differing situations, and multiple users may edit those outlines. Moreover, the constructs may be published to devices and other systems. In accordance with certain embodiments, images and audio files may be uploaded or purchased from vendors. Collaboration provides the speech impaired with the ability to develop a progressively improving speech vocabulary and to share that vocabulary with others needing assistance.
Type: Application
Filed: June 29, 2011
Publication date: October 20, 2011
Applicant: ASSISTYX LLC
Inventors: Leonard A. Greenberg, Philip G. Bookman
-
Patent number: 8036895
Abstract: A handheld device includes an image input device capable of acquiring images, circuitry to send a representation of the image to a remote computing system that performs at least one processing function related to processing the image, and circuitry to receive from the remote computing system data based on processing the image by the remote system.
Type: Grant
Filed: April 1, 2005
Date of Patent: October 11, 2011
Assignee: K-NFB Reading Technology, Inc.
Inventors: Raymond C. Kurzweil, Paul Albrecht, Lucy Gibson
-
Patent number: 8024006
Abstract: The present invention provides a mobile communication terminal having a broadcast reception function, capable of preventing the occurrence of a situation in which sound comes out due to the activation of a television function even though the phone's silent mode is set. When the phone's silent mode is set in a television watching state, a CPU 1 determines that it is a television silent mode. Similarly, when an ON operation is performed on a television function activation key in a state where the phone's silent mode is set, the CPU 1 determines that it is the television silent mode. When the CPU 1 determines that it is the television silent mode, the CPU 1 mutes television sound.
Type: Grant
Filed: November 27, 2006
Date of Patent: September 20, 2011
Assignee: Kyocera Corporation
Inventors: Tetsutaka Yabuta, Masaki Kanbe
-
Patent number: 8019276
Abstract: An audio transmission method and system. The method includes detecting, by a computing system, a wireless device belonging to a user. The computing system enables a connection between the wireless device and the computing system. The computing system receives from the wireless device a request for receiving an audio broadcast. The computing system transmits to the wireless device a language list comprising different languages for the audio broadcast. The computing system receives from the wireless device a selection for a first language from the language list. The computing system transmits a message indicating the selection to the wireless device. The computing system requests the audio broadcast. The computing system receives the audio broadcast. The computing system transmits the audio broadcast comprising the first language to the wireless device.
Type: Grant
Filed: June 2, 2008
Date of Patent: September 13, 2011
Assignee: International Business Machines Corporation
Inventor: Christopher Phillips
-
Patent number: 8015009
Abstract: A computer system comprising hardware and software elements; the hardware elements including a processor, a display means and a speaker; the software elements comprising a speech synthesizer, a database platform and a software application comprising a methodology of inputting and tabulating visual elements and verbal elements into the database, links for linking the visual elements and verbal elements, and operations for manipulating the database and for enunciating the verbal elements as the corresponding visual elements are displayed on the display means.
Type: Grant
Filed: May 3, 2006
Date of Patent: September 6, 2011
Inventors: Joel Jay Harband, Uziel Yosef Harband
-
Patent number: 8010366
Abstract: A hearing application suite includes enhancement and training for listening and hearing of prerecorded speech, extemporaneous voice communication, and non-speech sound. Enhancement includes modification of audio according to audiometric data representing subjective hearing abilities of the user, display of textual captions contemporaneously with the display of the audiovisual content, user-initiated repeating of a most recently played portion of the audiovisual content, user-controlled adjustment of the rate of playback of the audiovisual content, user-controlled dynamic range compression/expansion, and user-controlled noise reduction. Training includes testing the user's ability to discern speech and/or various other qualities of audio with varying degrees of quality.
Type: Grant
Filed: March 20, 2007
Date of Patent: August 30, 2011
Assignee: NeuroTone, Inc.
Inventors: Gerald W. Kearby, Earl I. Levine, A. Robert Modeste
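One of the enhancements named above is shaping audio according to the user's audiometric data. The sketch below shows one simple way such per-band gain shaping could look; the band edges, the half-the-loss gain rule, and the sample rate are assumptions for demonstration and are not the patented method.

```python
# Illustrative per-band gain shaping driven by an audiogram (assumed values).
import numpy as np
from scipy.signal import butter, sosfilt

FS = 16000
BANDS = [(125, 500), (500, 2000), (2000, 6000)]   # assumed band edges (Hz)
LOSS_DB = [10.0, 25.0, 40.0]                      # assumed hearing loss per band

def enhance(audio: np.ndarray) -> np.ndarray:
    out = np.zeros_like(audio, dtype=float)
    for (lo, hi), loss in zip(BANDS, LOSS_DB):
        sos = butter(4, [lo / (FS / 2), hi / (FS / 2)], btype="band", output="sos")
        gain = 10 ** ((loss / 2) / 20)            # boost each band by half its loss
        out += gain * sosfilt(sos, audio)
    return out

tone = np.sin(2 * np.pi * 3000 * np.arange(0, 0.1, 1 / FS))
boosted = enhance(tone)
print(float(np.max(np.abs(boosted))))   # the 3 kHz band gets the largest boost
```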
-
Patent number: 7983920
Abstract: A method and system for adapting a computing device in response to changes in an environment surrounding the computing device, or in response to the user's stated preferences. The computing device includes one or more sensors that sense the environment. A changed characteristic of the environment is detected. A determination is made as to one or more settings to change in response to the changed characteristic. Then one or more of the settings are changed to cause the computing device to interact with the user in a different mode. A mode may include which inputs, outputs, and/or processes are used to communicate with the user. A mode may also include how an application formats output or receives input.
Type: Grant
Filed: November 18, 2003
Date of Patent: July 19, 2011
Assignee: Microsoft Corporation
Inventor: Robert E. Sinclair, II
-
Patent number: 7974845
Abstract: Stuttering treatment methods and apparatus which utilize removable oral-based appliances having actuators which are attached, adhered, or otherwise embedded into or upon a dental or oral appliance are described. Such oral appliances may receive the user's voice and process the voice to introduce a time delay and/or a frequency shift. The altered audio feedback signal is then transmitted back to the user through a tooth, teeth, or other bone via a vibrating actuator element. The actuator element may utilize electromagnetic or piezoelectric actuator mechanisms and may be positioned directly along the dentition or along an oral appliance housing in various configurations.
Type: Grant
Filed: February 15, 2008
Date of Patent: July 5, 2011
Assignee: Sonitus Medical, Inc.
Inventors: John Spiridigliozzi, Amir A. Abolfathi
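The time-delay part of the altered auditory feedback described above amounts to a short audio delay line. Below is a minimal ring-buffer sketch of that delay step only; the delay length, sample rate, and block size are assumptions, and the optional frequency shift is omitted.

```python
# Minimal delayed auditory feedback (DAF) sketch: voice is fed back after
# a short delay. Values are assumptions, not taken from the patent.
import numpy as np

class DelayedFeedback:
    def __init__(self, sample_rate: int = 8000, delay_ms: int = 60):
        self.delay_samples = int(sample_rate * delay_ms / 1000)
        self.buffer = np.zeros(self.delay_samples)

    def process_block(self, block: np.ndarray) -> np.ndarray:
        """Return audio delayed by delay_ms; the input is stored for later output."""
        combined = np.concatenate([self.buffer, block])
        out = combined[:block.size]              # oldest samples come out first
        self.buffer = combined[block.size:]      # keep the rest for the next call
        return out

daf = DelayedFeedback(sample_rate=8000, delay_ms=60)
mic_block = np.random.randn(160)                 # 20 ms of captured voice
to_actuator = daf.process_block(mic_block)       # delayed signal for the actuator
print(to_actuator.shape)
```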
-
Patent number: 7958201
Abstract: A computer-implemented system and method for encouraging frequent and purposeful electronic communications from caregivers to individuals with impaired memory to, inter alia, alleviate feelings of isolation and improve memory. The system includes a Web-based application through which caregivers send text, image, voice and other forms of data for receipt by the sufferer on a PDA having a simple user interface. The Web application records the dates and nature (i.e., text, audio, photograph, video) of communications sent by each caregiver, processes the data in order to display it in a variety of meaningful ways to all caregivers, thus creating a peer-pressure environment to encourage more frequent communication.
Type: Grant
Filed: August 13, 2009
Date of Patent: June 7, 2011
Inventor: Ted Lindsay
-
Patent number: 7949520
Abstract: An enhancement system extracts pitch from a processed speech signal. The system estimates the pitch of voiced speech by deriving filter coefficients of an adaptive filter and using the obtained filter coefficients to derive pitch. The pitch estimation may be enhanced by using various techniques to condition the input speech signal, such as spectral modification of the background noise and the speech signal, and/or reduction of the tonal noise from the speech signal.
Type: Grant
Filed: December 9, 2005
Date of Patent: May 24, 2011
Assignee: QNX Software Systems Co.
Inventors: Rajeev Nongpiur, Phillip A. Hetherington
-
Publication number: 20110096232
Abstract: The transmitting apparatus includes an encoder creating an encoded content signal by encoding the content; a generator generating sign language word identification information corresponding to chronologically ordered sign language words appearing in a speech in the content; a creating unit creating control information containing the generated chronologically ordered sign language word identification information; a storage unit storing sign language word images for displaying a sign language video corresponding to the sign language words, by grouping the sign language word images into a plurality of modules according to a frequency of appearance of the sign language words in the speech in the content; a multiplexer creating a data stream by combining the encoded content signal with the control information and by repeatedly replicating the plurality of modules at a frequency corresponding to the frequency of appearance; and a transmitter transmitting the created data stream.
Type: Application
Filed: October 15, 2010
Publication date: April 28, 2011
Inventors: Yoshiharu DEWA, Ichiro Hamada
-
Patent number: 7930212
Abstract: An electronic talking menu system for the visually impaired includes a battery-powered portable electronic audio output device having large back-lighted buttons corresponding to menu items. Each button corresponds to contents of the restaurant's menu, such as appetizers, drinks, seafood, desserts, etc. Pressing a particular button activates a pre-recorded description of the menu item, or menu items within the selected category. The electronic menu system thus enables a visually impaired person to review and select desired menu items using audio feedback. Delivery of pre-recorded content is accomplished via either a logically-managed service wherein formatted sound files are uploaded to a memory card from a personal computer via the Internet, and/or a courier-based service wherein formatted memory cards are delivered to restaurants via a third-party parcel delivery service with a round-robin mailer to exchange memory cards.
Type: Grant
Filed: March 31, 2008
Date of Patent: April 19, 2011
Inventors: Susan Perry, Richard Herbst
-
Patent number: 7925511
Abstract: There is provided a system and method for secure voice identification in a medical device. More specifically, in one embodiment, there is provided a method comprising receiving an audio signal, identifying one or more frequency components of the received audio signal, determining a permission level associated with the one or more frequency components, determining a medical device command associated with the one or more frequency components, wherein the medical device command has a permission level, and executing the medical device command if the permission level of the medical device command is at or below the permission level associated with the one or more frequency components.
Type: Grant
Filed: September 29, 2006
Date of Patent: April 12, 2011
Assignee: Nellcor Puritan Bennett LLC
Inventors: Li Li, Clark R. Baker, Jr.
-
Patent number: 7925492
Abstract: A method for emulating human cognition in electronic form is disclosed. Information is received in the form of a textual or voice input in a natural language. This is parsed into pre-determined phrases based on a stored set of language rules for the natural language. The parsed phrases are then examined to determine whether they define aspects of an environment; if so, adaptive weighting factors are created for the natural language, the created weighting factors operable to create a weighted decision based upon the natural language. It is then determined whether the parsed phrases constitute a query and, if so, the weighted factors are used to make a decision on the query.
Type: Grant
Filed: June 5, 2007
Date of Patent: April 12, 2011
Assignee: Neuric Technologies, L.L.C.
Inventor: Thomas A. Visel
-
Patent number: 7901211
Abstract: A computer system provides a series of visual flash stimuli to a user and then requires that the user process the visual stimuli to produce a verbalization that corresponds to the visual stimuli and/or a fine motor activity that corresponds to the visual stimuli. The visual flash stimuli are presented to a user via a display device and include letters, words and phrases. The fine motor activity includes inputting letters or words via an input device, such as typing on a keyboard. The system includes eye movement activities, letter flash activities and word flash activities. The content or visual stimuli provided during these activities, as well as the progression through these activities, can be determined in part by the diagnosis of the individual user. The system can be used to treat a variety of mental disabilities.
Type: Grant
Filed: March 26, 2008
Date of Patent: March 8, 2011
Inventor: Shirley M. Pennebaker
-
Publication number: 20110040559
Abstract: Disclosed herein are systems, computer-implemented methods, and tangible computer-readable storage media for captioning a media presentation. The method includes receiving automatic speech recognition (ASR) output from a media presentation and a transcription of the media presentation. The method includes selecting, via a processor, a pair of anchor words in the media presentation based on the ASR output and transcription, and generating captions by aligning the transcription with the ASR output between the selected pair of anchor words. The transcription can be human-generated. Selecting pairs of anchor words can be based on a similarity threshold between the ASR output and the transcription. In one variation, commonly used words on a stop list are ineligible as anchor words. The method includes outputting the media presentation with the generated captions. The presentation can be a recording of a live event.
Type: Application
Filed: August 17, 2009
Publication date: February 17, 2011
Applicant: AT&T Intellectual Property I, L.P.
Inventors: Yeon-Jun KIM, David C. Gibbon, Horst Schroeter
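To make the anchor-word idea above concrete, here is a hedged sketch: words that occur exactly once in both the ASR output and the human transcript (and are not on a stop list) act as anchors, and the word spans between consecutive anchors are paired for finer alignment. The uniqueness test, the tiny stop list, and the assumption that anchors stay in order are simplifications of whatever similarity threshold the method actually uses.

```python
# Hedged sketch of anchor-word alignment between ASR output and a transcript.
STOP_WORDS = {"the", "a", "and", "of", "to", "in"}

def find_anchors(asr_words, ref_words):
    """Return (asr_index, ref_index) pairs for usable anchor words, in order."""
    def unique(words):
        return {w for w in set(words) if words.count(w) == 1 and w not in STOP_WORDS}
    candidates = unique(asr_words) & unique(ref_words)
    return sorted((asr_words.index(w), ref_words.index(w)) for w in candidates)

def align_segments(asr_words, ref_words):
    """Pair up the word spans that fall between consecutive anchors."""
    anchors = [(-1, -1)] + find_anchors(asr_words, ref_words) + \
              [(len(asr_words), len(ref_words))]
    segments = []
    for (a0, r0), (a1, r1) in zip(anchors, anchors[1:]):
        segments.append((asr_words[a0 + 1:a1], ref_words[r0 + 1:r1]))
    return segments

asr = "hello evryone welcome to the show".split()
ref = "hello everyone welcome to the show tonight".split()
print(align_segments(asr, ref))   # the misrecognized word is isolated between anchors
```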
-
Publication number: 20110040565
Abstract: A method and a system for voice communication, especially for a user who has voice or speaking problems, are disclosed. The method requires a communication sheet and a digital voice signal processing device. The communication sheet comprises a plurality of communication units and a plurality of function units for a user to click with the digital voice signal processing device. The plurality of function units comprise a whole-sentence unit, and the method comprises a method for performing a function of emitting the sound of a whole sentence, which comprises the following steps: receiving sounds of words selected by the user; searching a voice file according to each of the sounds of words; receiving a command generated by the user's clicking the whole-sentence unit; and playing the voice files in order.
Type: Application
Filed: February 19, 2010
Publication date: February 17, 2011
Inventors: Chih-Kang Yang, Shu-Hua Guo, Kuo-Ping Yang, Ho-Hsin Liao, Chun-Kai Wang, Sin-Chen Lin, Kun-Yi Hua, Ming-Hsiang Cheng, Chih-Long Chang
-
Patent number: 7881939
Abstract: A system for monitoring conditions associated with an individual in a region includes at least one speech input transducer and speech processing software coupled thereto. Results of the speech processing can initiate communications with a displaced communications device, such as a telephone or a computer, to provide a source of feedback.
Type: Grant
Filed: May 31, 2005
Date of Patent: February 1, 2011
Assignee: Honeywell International Inc.
Inventor: Lee D. Tice
-
Publication number: 20110004468
Abstract: A hearing aid for improving diminished hearing caused by reduced temporal resolution includes: a speech input unit (201) which receives a speech signal from outside; a speech analysis unit (202) which detects a sound segment and a segment acoustically regarded as soundless from the speech signal received by the speech input unit and detects a consonant segment and a vowel segment within the detected sound segment; and a signal processing unit (204) which temporally increments the consonant segment detected by the speech analysis unit (202) and temporally decrements at least one of the vowel segment and the segment acoustically regarded as soundless detected by the speech analysis unit (202).
Type: Application
Filed: January 28, 2010
Publication date: January 6, 2011
Inventors: Kazue Fusakawa, Gempo Ito
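The core operation above is lengthening consonant segments while shortening vowel and silent segments. The sketch below shows a deliberately naive version using plain resampling (which also shifts pitch; a real implementation would use a pitch-preserving method such as WSOLA). The stretch factors and the assumption that segment labels are already available are illustrative only.

```python
# Naive duration adjustment: stretch consonants, compress vowels/silence.
import numpy as np

def stretch(segment: np.ndarray, factor: float) -> np.ndarray:
    """Resample a segment so its duration is multiplied by `factor`."""
    n_out = max(1, int(round(segment.size * factor)))
    x_old = np.linspace(0.0, 1.0, segment.size)
    x_new = np.linspace(0.0, 1.0, n_out)
    return np.interp(x_new, x_old, segment)

def adjust_durations(audio, segments, consonant_gain=1.3, other_gain=0.85):
    """segments: list of (start, end, label) with label in {'C', 'V', 'S'}."""
    out = []
    for start, end, label in segments:
        factor = consonant_gain if label == "C" else other_gain
        out.append(stretch(audio[start:end], factor))
    return np.concatenate(out)

fs = 16000
audio = np.random.randn(fs)                       # 1 s of stand-in speech
segments = [(0, 4000, "S"), (4000, 6000, "C"), (6000, 16000, "V")]
print(adjust_durations(audio, segments).size)     # consonant span is now longer
```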
-
Patent number: 7844461
Abstract: Provided are an information processing apparatus and method so adapted that, if a plurality of speech output units having a speech synthesizing function are present, a conversion is made to speech having mutually different feature quantities so that a user can readily be informed of which unit is providing the user with information such as alert information. Speech data that is output from another speech output unit is input from a communication unit (8) and stored in a RAM (7). A central processing unit (1) extracts a feature quantity relating to the input speech data. Further, the central processing unit (1) utilizes a speech synthesis dictionary (51) that has been stored in a storage device (5) and generates speech data having a feature quantity different from the extracted feature quantity. The generated speech data is output from a speech output unit (4).
Type: Grant
Filed: June 2, 2003
Date of Patent: November 30, 2010
Assignee: Canon Kabushiki Kaisha
Inventor: Masayuki Yamada
-
Patent number: 7822613
Abstract: In a vehicle-mounted control apparatus, a control unit 2 displays guidance on operating the vehicle-mounted control apparatus by voice input on a display device 6, and has the user undergo training so that the user can master the techniques for operating the vehicle-mounted control apparatus (in step ST4). At this time, using the voice which the user inputs while trying to master the techniques for operating the vehicle-mounted control apparatus, the voice recognition unit 5 learns the features of the user's voice in the background and computes recognition parameters. Thereby, the user can learn how to operate the vehicle-mounted control apparatus and can also register the features of the user's voice in the vehicle-mounted control apparatus.
Type: Grant
Filed: October 7, 2003
Date of Patent: October 26, 2010
Assignee: Mitsubishi Denki Kabushiki Kaisha
Inventors: Tsutomu Matsubara, Masato Hirai, Emiko Kido, Fumitaka Sato
-
Patent number: 7809576
Abstract: A unit providing an interactive, light-activated voice recorder for a book, wherein the interactive voice recorder unit automatically initiates a playback mode when the front cover of the book is opened and light enters the unit.
Type: Grant
Filed: July 16, 2007
Date of Patent: October 5, 2010
Inventors: Lucient G. Lallouz, Sharon J. Fixman-Lallouz
-
Patent number: 7792676
Abstract: Embodiments of the present invention comprise a system, method, and apparatus that provide for the utilization of a relatively real-time or near real-time interpretation or translation that may be utilized, preferably for a relatively short duration of time, on a network. A preferred embodiment of the present invention provides online, real-time, short-duration interpreting services in a network-based format. In preferred embodiments, the interpreting system comprises at least one provider computer, such as a server, wherein the provider computer is capable of communicating with user computers via a network. In one preferred embodiment, the provider computer provides a series of web pages that allow access to the interpreting system, including, but not limited to, a request-for-service page, wherein a user can access the system and input a request for interpreting services. Interpreting services are then provided to a user and a third party desiring to communicate with the user via the network.
Type: Grant
Filed: October 25, 2001
Date of Patent: September 7, 2010
Inventors: Robert Glenn Klinefelter, Gregory A. Piccionelli
-
Publication number: 20100222098
Abstract: A mobile wireless communications device includes a housing and a transceiver carried by the housing for transmitting and receiving radio frequency (RF) signals carrying communications data of speech. A processor is coupled to the transceiver for processing the communications data as speech that is transmitted to and received from the transceiver. A keyboard and display are carried by the housing and connected to the processor. A speech-to-text and text-to-speech module converts communications data as speech received from the transceiver to text that is displayed on the display, and converts text that is typed by a user on the keyboard into communications data as speech to be transmitted from the transceiver as an RF signal.
Type: Application
Filed: February 27, 2009
Publication date: September 2, 2010
Applicant: Research In Motion Limited
Inventor: Neeraj GARG
-
Patent number: 7778834
Abstract: The present disclosure presents a useful metric for assessing the relative difficulty which non-native speakers face in pronouncing a given utterance, and a method and systems for using such a metric in the evaluation and assessment of the utterances of non-native speakers. In an embodiment, the metric may be based on both known sources of difficulty for language learners and a corpus-based measure of cross-language sound differences. The method may be applied to speakers who primarily speak a first language speaking utterances in any non-native second language.
Type: Grant
Filed: August 11, 2008
Date of Patent: August 17, 2010
Assignee: Educational Testing Service
Inventors: Derrick Higgins, Klaus Zechner, Yoko Futagi, Rene Lawless
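In the spirit of the abstract above, a difficulty metric can combine rule-based flags for known hard sounds with a corpus-derived cross-language distance per sound. The sketch below is purely illustrative: the phone labels, distance values, and equal weighting are invented placeholders, not the patented metric.

```python
# Hedged sketch of a per-utterance pronunciation-difficulty score.
KNOWN_HARD_PHONES = {"th", "r"}           # assumed hard sounds for one L1 group
CROSS_LANG_DISTANCE = {"th": 0.9, "r": 0.7, "s": 0.1, "i": 0.05, "ng": 0.4}

def difficulty(utterance_phones, rule_weight=0.5, corpus_weight=0.5):
    """Average per-phone difficulty for an utterance given as a phone list."""
    scores = []
    for phone in utterance_phones:
        rule_score = 1.0 if phone in KNOWN_HARD_PHONES else 0.0
        corpus_score = CROSS_LANG_DISTANCE.get(phone, 0.2)   # default distance
        scores.append(rule_weight * rule_score + corpus_weight * corpus_score)
    return sum(scores) / len(scores) if scores else 0.0

print(difficulty(["th", "i", "s"]))   # "this" -> relatively hard for this L1
print(difficulty(["s", "i"]))         # "see"  -> easier
```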
-
Publication number: 20100198582
Abstract: Nothing exists like this Verbal Command Laptop Computer and Software worldwide to transfer electronic information data of EDI. It can be used for the elderly when they need to scan something. Just set the item on a scanner and say, "scan, please." The Verbal Command Laptop Computer and Software can be used to store names and addresses. It can also be used as a fax machine. Just say, "fax, please" or "email, please." When the user wishes to use email, say, "check email please" or "send email please." When wanting to use the Internet, say, "Internet please" or, for a search engine, say, "search engine." The Verbal Command Laptop Computer and Software can handle all phases of a standard computer. The Verbal Command Laptop Computer and Software also has its own search engine. One can also use verbal command hands-free with the Cordless Microphone or Verbal Head Set.
Type: Application
Filed: February 2, 2009
Publication date: August 5, 2010
Inventor: Gregory Walker Johnson
-
Publication number: 20100174533
Abstract: Techniques are described for automatically measuring fluency of a patient's speech based on prosodic characteristics thereof. The prosodic characteristics may include statistics regarding silent pauses, filled pauses, repetitions, or fundamental frequency of the patient's speech. The statistics may include a count, average number of occurrences, duration, average duration, frequency of occurrence, standard deviation, or other statistics. In one embodiment, a method includes receiving an audio sample that includes speech of a patient, analyzing the audio sample to identify prosodic characteristics of the speech of the patient, and automatically measuring fluency of the speech of the patient based on the prosodic characteristics. These techniques may present several advantages, such as objectively measuring fluency of a patient's speech without requiring a manual transcription or other manual intervention in the analysis process.
Type: Application
Filed: January 5, 2010
Publication date: July 8, 2010
Applicant: Regents of the University of Minnesota
Inventor: Serguei V.S. Pakhomov
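One of the statistics named above is silent-pause behavior. The sketch below computes pause count and duration from an audio sample by simple energy thresholding; the frame size, energy threshold, and minimum pause length are assumptions, and filled pauses, repetitions, and fundamental frequency are not handled here.

```python
# Minimal silent-pause statistics via frame-energy thresholding.
import numpy as np

def pause_statistics(audio, fs=16000, frame_ms=20, threshold=0.01, min_pause_ms=200):
    frame = int(fs * frame_ms / 1000)
    n_frames = audio.size // frame
    energies = np.array([np.mean(audio[i * frame:(i + 1) * frame] ** 2)
                         for i in range(n_frames)])
    silent = energies < threshold
    # Collect runs of consecutive silent frames as candidate pauses.
    pauses, run = [], 0
    for s in silent:
        if s:
            run += 1
        else:
            if run:
                pauses.append(run)
            run = 0
    if run:
        pauses.append(run)
    min_frames = min_pause_ms / frame_ms
    durations = [r * frame_ms / 1000.0 for r in pauses if r >= min_frames]
    return {"pause_count": len(durations),
            "mean_pause_s": float(np.mean(durations)) if durations else 0.0,
            "total_pause_s": float(sum(durations))}

fs = 16000
speech = np.concatenate([np.random.randn(fs), np.zeros(fs // 2), np.random.randn(fs)])
print(pause_statistics(speech, fs))   # reports one pause of roughly 0.5 s
```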
-
Patent number: 7746986
Abstract: Systems and methods for displaying visual content to a user corresponding to sound captured at a user terminal are disclosed. After receiving over a network from a user terminal a request to convert sound into a visual content representing the sound, wherein the sound comprises one or more words, a translation server may retrieve text corresponding to the one or more words from a database. The translation server may then convert the text into one or more content phrases, wherein the content phrases represent the meaning of the one or more words, and convert each of the one or more content phrases into a new language. Finally, the translation server may send visual content to the user terminal representing the new language.
Type: Grant
Filed: June 15, 2006
Date of Patent: June 29, 2010
Assignee: Verizon Data Services LLC
Inventors: Vittorio G. Bucchieri, Albert L. Schmidt, Jr.
-
Patent number: 7729907
Abstract: Preparing for the full-fledged aged society, measures to prevent senility are required. Senility is prevented by extracting signals of prescribed bands from a speech signal using a first bandpass filter section having a plurality of bandpass filters, extracting the envelopes of each frequency band signal using an envelope extraction section having envelope extractors, applying a noise source signal to a second bandpass filter section having a plurality of bandpass filters and extracting noise signals corresponding to the prescribed bands, multiplying the outputs from the first bandpass filter section and the second bandpass filter section in a multiplication section, summing up the outputs from the multiplication section in an addition section to produce a Noise-Vocoded Speech Sound signal, and presenting the Noise-Vocoded Speech Sound signal for listening.
Type: Grant
Filed: February 21, 2005
Date of Patent: June 1, 2010
Assignees: Rion Co., Ltd.
Inventor: Hiroshi Rikimaru
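The processing chain above is a classic noise vocoder: band-pass the speech, extract each band's envelope, use the envelopes to modulate band-limited noise, and sum the bands. The sketch below follows that chain with scipy; the band edges, filter order, and use of the Hilbert envelope are assumed details, not taken from the patent.

```python
# Sketch of a noise-vocoded speech chain (assumed parameter values).
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

FS = 16000
BAND_EDGES = [100, 600, 1500, 4000]    # three bands (assumed)

def bandpass(x, lo, hi):
    sos = butter(4, [lo / (FS / 2), hi / (FS / 2)], btype="band", output="sos")
    return sosfilt(sos, x)

def noise_vocode(speech: np.ndarray) -> np.ndarray:
    noise = np.random.randn(speech.size)
    out = np.zeros_like(speech, dtype=float)
    for lo, hi in zip(BAND_EDGES, BAND_EDGES[1:]):
        band_speech = bandpass(speech, lo, hi)
        envelope = np.abs(hilbert(band_speech))        # band envelope
        band_noise = bandpass(noise, lo, hi)
        out += envelope * band_noise                   # envelope-modulated noise
    return out / (np.max(np.abs(out)) or 1.0)

speech = np.sin(2 * np.pi * 300 * np.arange(0, 0.5, 1 / FS))  # stand-in speech
print(noise_vocode(speech)[:5])
```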
-
Publication number: 20100109918
Abstract: A device for use by a deafblind person is disclosed. The device comprises a first key for manually inputting a series of words in the form of a code, a second key for manually inputting an action to be performed by the device, a third key for manually inputting a user preference, and a fourth key for manually inputting communication instructions. The device further has an internal processor programmed to carry out communication functions and search and guide functions. The device has various safety and security functions for pedestrians or persons in transit. In a preferred embodiment, the device comprises an electronic cane known as an eCane. Also disclosed is a system for allowing a deafblind person to enjoy television programs.
Type: Application
Filed: November 4, 2009
Publication date: May 6, 2010
Inventor: Raanan Liebermann
-
Publication number: 20100100388
Abstract: A speech aid for persons with hypokinetic dysarthria, a speech disorder associated with Parkinson's disease. The speech aid alters the pitch at which the user hears his or her voice and/or provides multitalker babble noise to the speaker's ears. The speech aid induces increased speech motor activity and improves the intelligibility of the user's speech. The speech aid may be used with a variety of microphones, headphones, in one or both ears, with a voice amplifier, or connected to telephones.
Type: Application
Filed: October 12, 2009
Publication date: April 22, 2010
Inventor: Thomas David Kehoe
-
Patent number: 7702506
Abstract: An object of the present invention is to provide a conversation support apparatus and a conversation support method that allow users to converse with each other effectively and smoothly. According to the present invention, since a first display section 22 and a second display section 32 can be placed at different angles, a first user watching the second display section 32 and a second user watching the first display section 22 can converse with each other smoothly. Because the first display section 22 and the second display section 32 are so disposed, the second user and the first user can, for example, converse with each other face-to-face.
Type: Grant
Filed: May 12, 2004
Date of Patent: April 20, 2010
Inventor: Takashi Yoshimine
-
Publication number: 20100063822
Abstract: A communication system that is specifically designed for the needs of speech-impaired individuals, particularly aphasia victims, makes use of a speech generating mobile terminal communication device (SGMTD) (12) that is designed to be hand held and operated by a speech-disabled individual. The SGMTD includes a database of audio files that are accessed to generate full sentences in response to single-word or short-phrase entries selected from a plurality of menus by the disabled user. A second, companion mobile terminal device (COMTD) (14) enables a caregiver to communicate with the speech-disabled individual's SGMTD to assist the individual in communicating with the caregiver by causing the SGMTD to switch to a particular menu or list from which the caregiver wants the disabled individual to make a selection. The SGMTD also includes software that enables the device to communicate with other SGMTDs via wireless communications and thereby simulate a verbal conversation between speech-impaired individuals.
Type: Application
Filed: April 21, 2008
Publication date: March 11, 2010
Inventors: Daniel C. O'Brien, Edward T. Buchholz
-
Publication number: 20100063794
Abstract: A sign language recognition apparatus and method is provided for translating hand gestures into speech or written text. The apparatus includes a number of sensors on the hand, arm and shoulder to measure dynamic and static gestures. The sensors are connected to a microprocessor to search a library of gestures and generate output signals that can then be used to produce a synthesized voice or written text. The apparatus includes sensors such as accelerometers on the fingers and thumb and two accelerometers on the back of the hand to detect motion and orientation of the hand. Sensors are also provided on the back of the hand or wrist to detect forearm rotation, an angle sensor to detect flexing of the elbow, two sensors on the upper arm to detect arm elevation and rotation, and a sensor on the upper arm to detect arm twist. The sensors transmit the data to the microprocessor to determine the shape, position and orientation of the hand relative to the body of the user.
Type: Application
Filed: July 21, 2009
Publication date: March 11, 2010
Inventor: Jose L. HERNANDEZ-REBOLLAR
-
Patent number: 7676372
Abstract: A speech transformation apparatus comprises a microphone 21 for detecting speech and generating a speech signal; a signal processor 22 for performing a speech recognition process using the speech signal; a speech information generator for transforming the recognition result responsive to the physical state of the user, the operating conditions, and/or the purpose for using the apparatus; and a display unit 26 and loudspeaker 25 for generating a control signal for outputting a raw recognition result and/or a transformed recognition result. In a speech transformation apparatus thus constituted, speech enunciated by a spoken-language-impaired individual can be transformed and presented to the user, and sounds from outside sources can also be transformed and presented to the user.
Type: Grant
Filed: February 16, 2000
Date of Patent: March 9, 2010
Assignee: Yugen Kaisha GM&M
Inventor: Toshihiko Oba
-
Patent number: 7676368
Abstract: The present invention is intended to perform text-to-speech conversion by replacing URLs and electronic mail addresses included in the text data of electronic mail with registered predetermined words. A mail watcher application control section executes the processing for converting electronic mail received by a MAPI mailer into speech data. The mail watcher application control section outputs URLs and electronic mail addresses included in the text data of electronic mail supplied from the MAPI mailer to a URL and mail address filter to replace them with registered predetermined names. For the entered texts, the URL and mail address filter compares the URL or mail address included in the entered text with those registered in the URL and mail address table. If the URL or mail address of the entered text is found to match, the URL and mail address filter replaces it with the registered name and outputs it to the mail watcher application control section.
Type: Grant
Filed: July 2, 2002
Date of Patent: March 9, 2010
Assignee: Sony Corporation
Inventors: Utaha Shizuka, Satoshi Fujimura, Yasuhiko Kato
-
Patent number: 7664636
Abstract: The invention provides a system and method for indexing and organizing voice mail messages by the speaker of the message. One or more speaker models are created from voice mail messages received. As additional messages are left, each of the new messages is compared with existing speaker models to determine the identity of the caller of each of the new messages. The voice mail messages are organized within a user's mailbox by caller. Unknown callers may be identified and tagged by the user and then used to create new speaker models and/or update existing speaker models.
Type: Grant
Filed: April 17, 2000
Date of Patent: February 16, 2010
Assignee: AT&T Intellectual Property II, L.P.
Inventors: Julia Hirschberg, Sarangarajan Parthasarathy, Aaron Edward Rosenberg, Stephen Whittaker
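A hedged sketch of the comparison step above: score a new message's acoustic features against per-caller models and file it under the best match, or mark it unknown. Using scikit-learn Gaussian mixtures over precomputed feature frames, and the score threshold, are my assumptions; feature extraction and model updating are omitted.

```python
# Route a voice mail message to the best-matching speaker model.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_speaker_model(feature_frames: np.ndarray) -> GaussianMixture:
    """feature_frames: (n_frames, n_features) taken from one caller's messages."""
    model = GaussianMixture(n_components=4, covariance_type="diag", random_state=0)
    model.fit(feature_frames)
    return model

def route_message(features, models, threshold=-50.0):
    """Return the best-matching caller, or 'unknown' if no model scores well."""
    scores = {name: model.score(features) for name, model in models.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > threshold else "unknown"

rng = np.random.default_rng(0)
models = {
    "alice": train_speaker_model(rng.normal(0.0, 1.0, (200, 13))),
    "bob": train_speaker_model(rng.normal(3.0, 1.0, (200, 13))),
}
new_message = rng.normal(3.0, 1.0, (50, 13))       # features resembling "bob"
print(route_message(new_message, models))
```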
-
Patent number: 7653543
Abstract: The present invention is directed toward a method, device, and system for providing a high-quality communication session. The system provides a way of determining speech characteristics of participants in the communication session and adjusting, if necessary, signals from a speaker to a listener such that the listener can more intelligibly understand what the speaker is saying.
Type: Grant
Filed: March 24, 2006
Date of Patent: January 26, 2010
Assignee: Avaya Inc.
Inventors: Colin Blair, Jonathan R. Yee-Hang Choy, Andrew W. Lang, David Preshan Thambiratnam, Paul Roller Michaelis
-
Patent number: 7643997
Abstract: The present invention relates to a handheld analysis instrument for assaying a medically significant sample. The instrument comprises a measuring device for measuring the concentration of an analyte in the sample and an output device for outputting measurement results. The output device has both an acoustic signal output device for outputting the measurement results through nonverbal acoustic signals and a wireless interface for communicating with an external speech output unit.
Type: Grant
Filed: June 5, 2006
Date of Patent: January 5, 2010
Assignee: Roche Diagnostics Operations, Inc.
Inventors: Hans Kintzig, Jean Thilges
-
Patent number: 7636663
Abstract: An on-vehicle acoustic control system determines which one of the sounds of an audio device and a navigation device should be generated with priority when both devices are requested to generate respective sounds. The control system further detects a user's physical condition based on an interaction with the user, a picture of the user and biometric information of the user. The control system generates sound in the order of determined priority, and varies the manner of sound generation based on the user's physical condition.
Type: Grant
Filed: September 20, 2005
Date of Patent: December 22, 2009
Assignee: Denso Corporation
Inventors: Ichiro Yoshida, Kazunao Yamada
-
Publication number: 20090287490
Abstract: Embodiments of the invention may be used to enhance the presentation of a virtual environment for certain users, e.g., a visually impaired user. Because users may visit, and revisit, locations within the virtual environment, the state of elements in the virtual environment may change. Accordingly, audible descriptions of an object, person or environment may be adjusted to prevent redundant or unnecessary descriptions. For example, when the user encounters a given element a second time, rather than describe each characteristic of the element, only changes to the characteristics of the element are described.
Type: Application
Filed: May 14, 2008
Publication date: November 19, 2009
Inventors: Brian John Cragun, Zachary Adam Garbow, Christopher A. Peterson
-
Patent number: 7613613
Abstract: A method and system for presenting lip-synchronized speech corresponding to the text received in real time is provided. A lip synchronization system provides an image of a character that is to be portrayed as speaking text received in real time. The lip synchronization system receives a sequence of text corresponding to the speech of the character. It may modify the received text in various ways before synchronizing the lips. It may generate phonemes for the modified text that are adapted to certain idioms. The lip synchronization system then generates the lip-synchronized images based on the phonemes generated from the modified texts and based on the identified expressions.
Type: Grant
Filed: December 10, 2004
Date of Patent: November 3, 2009
Assignee: Microsoft Corporation
Inventors: Timothy V. Fields, Brandon Cotton
-
Publication number: 20090264789
Abstract: A set of therapy parameter values is selected based on a patient state, where the patient state comprises a speech state or a mixed patient state including the speech state and at least one of a movement state or a sleep state. In this way, therapy delivery is tailored to the patient state, which may include one or more patient symptoms specific to the patient state. In some examples, a medical device determines whether the patient is in the speech state or a mixed patient state including the speech state based on a signal generated by a voice activity sensor. The voice activity sensor detects the use of the patient's voice, and may include a microphone, a vibration detector or an accelerometer.
Type: Application
Filed: April 28, 2009
Publication date: October 22, 2009
Inventors: Gregory F. Molnar, Richard T. Stone, Xuan Wei
-
Publication number: 20090259473
Abstract: Methods and apparatus to present a video program to a visually impaired person are disclosed. An example method comprises receiving a video stream and an associated audio stream of a video program, detecting a portion of the video program that is not readily consumable by a visually impaired person, obtaining text associated with the portion of the video program, converting the text to a second audio stream, and combining the second audio stream with the associated audio stream.
Type: Application
Filed: April 14, 2008
Publication date: October 15, 2009
Inventors: Hisao M. Chang, Horst Schroeter
-
Publication number: 20090259689
Abstract: A way of delivering recipe preparation instructions to disabled individuals is provided using an interactive cooking preparation device. The device retrieves an instruction delivery preference that corresponds to a user with a disability, such as a hearing or sight disability. The user then selects a recipe from a list of recipes. Preparation steps that correspond to the selected recipe are retrieved from a data store, such as a database. The retrieved preparation steps are provided to the user using the interactive cooking preparation device, which provides the preparation steps in an alternative delivery mode based on the user's delivery preference.
Type: Application
Filed: April 15, 2008
Publication date: October 15, 2009
Applicant: International Business Machines Corporation
Inventors: Lydia Mai Do, Travis M. Grigsby, Pamela Ann Nesbitt, Lisa Anne Seacat
-
Publication number: 20090210231
Abstract: Stuttering treatment methods and apparatus which utilize removable oral-based appliances having actuators which are attached, adhered, or otherwise embedded into or upon a dental or oral appliance are described. Such oral appliances may receive the user's voice and process the voice to introduce a time delay and/or a frequency shift. The altered audio feedback signal is then transmitted back to the user through a tooth, teeth, or other bone via a vibrating actuator element. The actuator element may utilize electromagnetic or piezoelectric actuator mechanisms and may be positioned directly along the dentition or along an oral appliance housing in various configurations.
Type: Application
Filed: February 15, 2008
Publication date: August 20, 2009
Inventors: John SPIRIDIGLIOZZI, Amir ABOLFATHI
-
Patent number: 7565295
Abstract: A sign language recognition apparatus and method is provided for translating hand gestures into speech or written text. The apparatus includes a number of sensors on the hand, arm and shoulder to measure dynamic and static gestures. The sensors are connected to a microprocessor to search a library of gestures and generate output signals that can then be used to produce a synthesized voice or written text. The apparatus includes sensors such as accelerometers on the fingers and thumb and two accelerometers on the back of the hand to detect motion and orientation of the hand. Sensors are also provided on the back of the hand or wrist to detect forearm rotation, an angle sensor to detect flexing of the elbow, two sensors on the upper arm to detect arm elevation and rotation, and a sensor on the upper arm to detect arm twist. The sensors transmit the data to the microprocessor to determine the shape, position and orientation of the hand relative to the body of the user.
Type: Grant
Filed: August 27, 2004
Date of Patent: July 21, 2009
Assignee: The George Washington University
Inventor: Jose L. Hernandez-Rebollar
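The "search a library of gestures" step above can be pictured as matching a vector of sensor readings against stored gesture templates. The sketch below uses a simple nearest-neighbor match; the sensor layout, the example template values, and the distance threshold are invented for illustration and are not the patented recognition method.

```python
# Nearest-neighbor lookup of a sensor reading in a small gesture library.
import numpy as np

# Each entry: gesture word -> representative 8-value sensor vector, e.g.
# five finger-accelerometer features, elbow angle, arm elevation, arm twist.
GESTURE_LIBRARY = {
    "hello":     np.array([0.9, 0.8, 0.7, 0.6, 0.5, 30.0, 70.0, 10.0]),
    "thank you": np.array([0.2, 0.3, 0.2, 0.1, 0.9, 90.0, 20.0, 5.0]),
    "yes":       np.array([0.1, 0.1, 0.1, 0.1, 0.1, 45.0, 10.0, 0.0]),
}

def recognize(sensor_vector: np.ndarray, max_distance: float = 25.0) -> str:
    """Return the library word nearest to the reading, or '?' if none is close."""
    best_word, best_dist = "?", float("inf")
    for word, template in GESTURE_LIBRARY.items():
        dist = float(np.linalg.norm(sensor_vector - template))
        if dist < best_dist:
            best_word, best_dist = word, dist
    return best_word if best_dist <= max_distance else "?"

reading = np.array([0.85, 0.75, 0.7, 0.6, 0.55, 32.0, 68.0, 11.0])
print(recognize(reading))      # -> 'hello'; the text would then drive speech synthesis
```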
-
Patent number: RE41002
Abstract: An electronic communications system for the deaf includes a video apparatus for observing and digitizing the facial, body and hand and finger signing motions of a deaf person, an electronic translator for translating the digitized signing motions into words and phrases, and an electronic output for the words and phrases. The video apparatus desirably includes both a video camera and a video display which will display signing motions provided by translating spoken words of a hearing person into digitized images. The system may function as a translator by outputting the translated words and phrases as synthetic speech at the deaf person's location for another person at that location, and that person's speech may be picked up, translated, and displayed as signing motions on a display in the video apparatus.
Type: Grant
Filed: June 23, 2000
Date of Patent: November 24, 2009
Inventor: Raanan Liebermann