Language Recognition (epo) Patents (Class 704/E15.003)
  • Publication number: 20100036653
    Abstract: The present invention relates to a method and apparatus of translating a language using voice recognition. The present invention provides a method of translating a language using voice recognition, comprising: receiving a voice input comprising a first language; acquiring at least one recognition candidate corresponding to the voice input by performing voice recognition on the voice input; providing a user interface for selecting at least one of the acquired at least one recognition candidate; and outputting a second language corresponding to the selected at least one recognition candidate, wherein the type of the user interface is determined according to the number of the acquired at least one recognition candidate, and an apparatus of translating a language using voice recognition for implementing the above method.
    Type: Application
    Filed: March 30, 2009
    Publication date: February 11, 2010
    Inventors: Yu Jin KIM, Won Ho SHIN
  • Publication number: 20100020960
    Abstract: An enhanced services system for a telecommunications network includes operator equipment accessible by an operator, and a routing system for routing a call from a caller to the operator equipment. The caller may access e-mail creation and service and/or interpreter services. A method of sending an electronic message includes routing a call from a caller to operator equipment using a routing system, inputting information provided by the caller into the operator equipment to create an electronic message, and sending the electronic message to at least one recipient. A method of translating a telephone conversation includes providing a routing system to connect to operator equipment a call from a caller, routing the call to the operator equipment using the routing system to enable communications between at least the caller and a bilingual operator, and translating at least a portion of the conversation between the caller and a third party.
    Type: Application
    Filed: October 2, 2009
    Publication date: January 28, 2010
    Inventors: Joseph Allen PENCE, Joseph L. Durkee, Roundell L. Harris, JR., Larry S. Wechter
  • Publication number: 20100010814
    Abstract: A method for enhancing a media file to enable speech-recognition of spoken navigation commands can be provided. The method can include receiving a plurality of textual items based on subject matter of the media file and generating a grammar for each textual item, thereby generating a plurality of grammars for use by a speech recognition engine. The method can further include associating a time stamp with each grammar, wherein a time stamp indicates a location in the media file of a textual item corresponding with a grammar. The method can further include associating the plurality of grammars with the media file, such that speech recognized by the speech recognition engine is associated with a corresponding location in the media file.
    Type: Application
    Filed: July 28, 2008
    Publication date: January 14, 2010
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Paritosh D. Patel
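The grammar-to-timestamp association described in this abstract can be sketched as follows. This is a minimal illustration, not the patent's implementation: the string itself stands in for a compiled speech-recognition grammar, and the index and function names are assumptions of the sketch.

```python
def build_navigation_index(textual_items):
    """textual_items: list of (phrase, seconds-into-media) pairs.

    Each textual item from the media file becomes a recognizable phrase
    tied to the playback position where it occurs.
    """
    index = {}
    for phrase, timestamp in textual_items:
        # In a real system the phrase would be compiled into a grammar
        # for the speech recognition engine; the lowercased string
        # stands in for that grammar here.
        index[phrase.lower()] = timestamp
    return index

def on_recognized(index, utterance):
    """Map recognized speech back to a location in the media file."""
    return index.get(utterance.lower())

index = build_navigation_index([("introduction", 0.0), ("q and a", 1805.0)])
print(on_recognized(index, "Q and A"))  # 1805.0
```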
  • Publication number: 20090326918
    Abstract: Language detection techniques are described. In an implementation, a method comprises determining which human writing system is associated with text characters in a string based on values representing the text characters. When the values are associated with more than one human language, the string is compared with a targeted dictionary to identify a corresponding human language associated with the string.
    Type: Application
    Filed: June 26, 2008
    Publication date: December 31, 2009
    Applicant: Microsoft Corporation
    Inventors: Dimiter Georgiev, Shenghua Ye, Gerardo Villarreal Guzman, Kieran Snyder, Ryan M. Cavalcante, Tarek M. Sayed, Yaniv Feinberg, Yung-Shin Lin
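The two-stage detection in this abstract can be sketched as follows: map code-point values to a writing system first, then disambiguate scripts shared by several languages with a targeted word list. The ranges and dictionaries here are illustrative assumptions, not the patent's actual data.

```python
# Illustrative Unicode block ranges (value, not exhaustive).
SCRIPT_RANGES = [
    (0x0041, 0x024F, "Latin"),
    (0x0400, 0x04FF, "Cyrillic"),
    (0x4E00, 0x9FFF, "Han"),
]

# Tiny "targeted dictionaries" for scripts shared by several languages.
TARGETED_DICTS = {
    "Latin": {"the": "English", "und": "German", "les": "French"},
}

def detect_script(text):
    """Return the writing system of the first scripted character."""
    for ch in text:
        for lo, hi, name in SCRIPT_RANGES:
            if lo <= ord(ch) <= hi:
                return name
    return "Unknown"

def detect_language(text):
    script = detect_script(text)
    # If the script maps to more than one language, consult the
    # targeted dictionary; otherwise fall back to the script name.
    for word in text.lower().split():
        lang = TARGETED_DICTS.get(script, {}).get(word)
        if lang:
            return lang
    return script

print(detect_language("und so weiter"))  # German
```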
  • Publication number: 20090265159
    Abstract: The present invention can recognize both English and Chinese at the same time. The most important technique is that the features of all English words (without samples) are entirely extracted from the features of Chinese syllables. The invention normalizes the signal waveforms of variable lengths for English words (Chinese syllables) such that the same words (syllables) have the same features at the same time positions. Hence the Bayesian classifier can recognize both fast and slow utterances of sentences. The invention can improve the features such that the speech recognition of unknown English (Chinese) speech is guaranteed to be correct. Furthermore, since the invention can create the features of English words from the features of Chinese syllables, it can also create the features of other languages from the features of Chinese syllables, and hence it can recognize other languages as well, such as German, French, Japanese, Korean, and Russian.
    Type: Application
    Filed: October 10, 2008
    Publication date: October 22, 2009
    Inventors: Tze-Fen LI, Tai-Jan Lee Li, Shih-Tzung Li, Shih-Hon Li, Li-Chuan Liao
  • Publication number: 20090265171
    Abstract: Systems, methods, and apparatuses, including computer program products, for segmenting words using scaled probabilities. In one implementation, a method is provided. The method includes receiving a probability of an n-gram identifying a word, determining a number of atomic units in the corresponding n-gram, identifying a scaling weight depending on the number of atomic units in the n-gram, and applying the scaling weight to the probability of the n-gram identifying a word to determine a scaled probability of the n-gram identifying a word.
    Type: Application
    Filed: April 16, 2008
    Publication date: October 22, 2009
    Applicant: GOOGLE INC.
    Inventor: Mark Davis
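The scaling step in this abstract can be sketched as follows. The abstract does not say how the weight is applied, so two details here are assumptions of the sketch: atomic units are counted as non-space characters, and the weight is applied as an exponent in log space so the result remains a valid probability.

```python
import math

def scaled_probability(ngram, prob, base_weight=1.2):
    """Scale an n-gram's word probability by its number of atomic units.

    units == 1 leaves the probability unchanged; longer n-grams are
    scaled more strongly (an assumed log-space exponent form).
    """
    units = len(ngram.replace(" ", ""))   # atomic units in the n-gram
    weight = base_weight ** (units - 1)   # identity for one atomic unit
    # Apply the weight in log space: prob ** weight stays in (0, 1].
    return math.exp(weight * math.log(prob))
```

With `base_weight > 1` and `prob < 1`, longer n-grams receive smaller scaled probabilities, which is one plausible way a segmenter could counteract length bias.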
  • Publication number: 20090254334
    Abstract: A translation method for properly recognizing and automatically translating a sentence containing an emphasized word that includes two or more successive identical characters. First, words in a source text to be translated are looked up in a dictionary (step S201) to determine whether the text includes an unregistered word (step S203). Then, it is determined whether the unregistered word contains successive identical characters (step S205). If it does, the number of characters is reduced (step S207) and it is determined whether the modified word thus obtained is contained in the dictionary (step S209). If the modified word is contained in the dictionary, the unregistered word is identified with the modified word (step S215), the part of speech and the attribute of the modified word are determined (step S217), and the unregistered word is replaced with the modified word to produce the translation.
    Type: Application
    Filed: March 25, 2009
    Publication date: October 8, 2009
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Tomohiro Miyahira, Yoshiroh Kamiyama, Hiromi Hatori
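The reduction loop in steps S205-S215 can be sketched as follows. The tiny dictionary and the order of attempted run lengths are assumptions of the sketch.

```python
import re

# Stand-in dictionary: word -> (part of speech, attributes).
DICTIONARY = {"cool": ("adjective", {}), "so": ("adverb", {})}

def normalize_emphasis(word, dictionary=DICTIONARY):
    """Collapse runs of repeated characters until a dictionary hit.

    Progressively shortens runs of identical characters
    ("coooool" -> "cool") and looks the candidate up; returns the
    first matching modified word, else None.
    """
    if word in dictionary:
        return word
    # Reduce each run of >= 2 identical characters, trying run
    # lengths 2 and then 1.
    for run_len in (2, 1):
        candidate = re.sub(r"(.)\1+", lambda m: m.group(1) * run_len, word)
        if candidate in dictionary:
            return candidate
    return None

print(normalize_emphasis("cooooool"))  # cool
```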
  • Publication number: 20090254335
    Abstract: Examples of methods are provided for generating a multilingual codebook. According to an example method, a main language codebook and at least one additional codebook corresponding to a language different from the main language are provided. A multilingual codebook is generated from the main language codebook and the at least one additional codebook by adding a sub-set of code vectors of the at least one additional codebook to the main codebook based on distances between the code vectors of the at least one additional codebook to code vectors of the main language codebook. Systems and methods for speech recognition using the multilingual codebook and applications that use speech recognition based on the multilingual codebook are also provided.
    Type: Application
    Filed: April 1, 2009
    Publication date: October 8, 2009
    Applicant: Harman Becker Automotive Systems GmbH
    Inventors: Raymond Brückner, Martin Raab, Rainer Gruhn
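The distance-based merging in this abstract can be sketched as follows. Euclidean distance and the fixed threshold are assumptions of the sketch; the abstract only says the added sub-set is chosen "based on distances" between code vectors.

```python
import numpy as np

def merge_codebooks(main_cb, extra_cb, threshold=1.0):
    """Build a multilingual codebook from a main-language codebook.

    A code vector from the additional language's codebook is added only
    when its distance to every main-codebook vector exceeds `threshold`,
    i.e. when it covers acoustic space the main language does not.
    """
    kept = []
    for v in extra_cb:
        dists = np.linalg.norm(main_cb - v, axis=1)
        if dists.min() > threshold:
            kept.append(v)
    if kept:
        return np.vstack([main_cb, np.array(kept)])
    return main_cb
```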
  • Publication number: 20090247296
    Abstract: A gaming apparatus of the present invention comprises a microphone, a speaker, a display, and a controller programmed to conduct the processing of: (A) conducting a conversation with a player by recognizing a voice inputted from the microphone, in addition to outputting a voice relating to the conversation from the speaker by executing a conversation program; (B) measuring an elapsed time since the voice relating to the conversation is outputted from the speaker; (C) determining whether or not a response to the voice relating to the conversation is inputted from the microphone; and (D) conducting the conversation with the player by recognizing the voice inputted from the microphone, in addition to displaying to the display an image of sign language relating to the conversation, instead of conducting the processing (A), when the elapsed time exceeds a predetermined time without the response.
    Type: Application
    Filed: October 3, 2008
    Publication date: October 1, 2009
    Applicant: ARUZE GAMING AMERICA, INC.
    Inventor: Kazuo OKADA
  • Publication number: 20090248605
    Abstract: The present invention provides a technique for building natural language parsers by implementing a country and/or jurisdiction specific set of training data that is automatically converted during a build phase to a respective predictive model, i.e., an automated country specific natural language parser. The predictive model can be used without the training data to quantify any input address. This model may be included as part of a larger Geographic Information System (GIS) data-set or as a stand alone quantifier. The build phase may also be run on demand and the resultant predictive model kept in temporary storage for immediate use.
    Type: Application
    Filed: September 29, 2008
    Publication date: October 1, 2009
    Inventors: David John MITCHELL, Arthur Newth Morris, IV, Ralph James Mason
  • Publication number: 20090216535
    Abstract: A computerized method for speech recognition in a computer system. Reference word segments are stored in memory; when concatenated, the reference word segments form spoken words in a language. Each of the reference word segments is a combination of at least two phonemes, including a vowel sound in the language. A temporal speech signal is input and digitized to produce a digitized temporal speech signal. The digitized temporal speech signal is transformed piecewise into the frequency domain to produce a time- and frequency-dependent transform function. The energy spectral density of the temporal speech signal is proportional to the absolute value squared of the transform function. The energy spectral density is cut into input time segments, each of which includes at least two phonemes, including at least one vowel sound of the temporal speech signal.
    Type: Application
    Filed: February 22, 2008
    Publication date: August 27, 2009
    Inventors: Avraham Entlis, Adam Simone, Rabin Cohen-Tov, Izhak Meller, Roman Budovnich, Shlomi Bognim
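The piecewise frequency-domain transform and |transform|² energy spectral density described above can be sketched as a short-time Fourier analysis. Frame length, hop size, and the Hann window are assumptions of the sketch.

```python
import numpy as np

def energy_spectral_density(signal, frame_len=256, hop=128):
    """Transform the signal piecewise into the frequency domain.

    Each frame is windowed and Fourier-transformed; the energy spectral
    density is |F(t, f)|**2, the squared magnitude of the transform.
    Returns an array of shape (num_frames, frame_len // 2 + 1).
    """
    window = np.hanning(frame_len)
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * window
        spectrum = np.fft.rfft(frame)
        frames.append(np.abs(spectrum) ** 2)  # |transform| squared
    return np.array(frames)
```

Segmenting this time-frequency array along the frame axis would then yield the "input time segments" the abstract goes on to describe.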
  • Publication number: 20090209341
    Abstract: A gaming apparatus of the present invention comprises: a microphone; a speaker; a display; a memory storing text data for each language type; and a controller. The controller is programmed to conduct the processing of: (A) recognizing a language type from a sound inputted from the microphone by executing a language recognition program; (B) conducting a conversation with a player by recognizing a voice inputted from the microphone, in addition to outputting a voice from the speaker by executing a conversation program corresponding to the language recognized in the processing (A); and (C) displaying to the display a text based on text data corresponding to the language type recognized in the processing (A) according to progress of a game, the text data being read from the memory.
    Type: Application
    Filed: January 21, 2009
    Publication date: August 20, 2009
    Applicant: ARUZE GAMING AMERICA, INC.
    Inventor: Kazuo OKADA
  • Publication number: 20090204392
    Abstract: A simple means for expanding a speech recognition dictionary between communication terminals is provided. A speech recognition dictionary update support device (100) is provided with a speech recognition processing unit (102) which performs speech recognition on content of communication between the communication terminals (200) and also detects words included in a speech recognition dictionary that is a source of dictionary data from a result of the speech recognition, and a permitted word transmission unit (104) which transmits dictionary data corresponding to the detected words to a communication terminal (200) that is a destination of dictionary data. The communication terminals (200) are provided with an addition confirmation unit (202) which confirms with a user whether or not the received dictionary data is to be registered, and performs addition registration to a personal recognition dictionary (201) only in cases in which a registration operation is performed.
    Type: Application
    Filed: July 11, 2007
    Publication date: August 13, 2009
    Inventor: Shinya Ishikawa
  • Publication number: 20090171664
    Abstract: Systems and methods for receiving natural language queries and/or commands and executing the queries and/or commands. The systems and methods overcome the deficiencies of prior-art speech query and response systems through the application of a complete speech-based information query, retrieval, presentation, and command environment. This environment makes significant use of context, prior information, domain knowledge, and user-specific profile data to achieve a natural environment for one or more users making queries or commands in multiple domains. Through this integrated approach, a complete speech-based natural language query and response environment can be created. The systems and methods create, store, and use extensive personal profile information for each user, thereby improving the reliability of determining the context and presenting the expected results for a particular question or command.
    Type: Application
    Filed: February 4, 2009
    Publication date: July 2, 2009
    Inventors: Robert A. Kennewick, David Locke, Michael R. Kennewick, SR., Michael R. Kennewick, JR., Richard Kennewick, Tom Freeman
  • Publication number: 20090150156
    Abstract: A conversational, natural language voice user interface may provide an integrated voice navigation services environment. The voice user interface may enable a user to make natural language requests relating to various navigation services, and further, may interact with the user in a cooperative, conversational dialogue to resolve the requests. Through dynamic awareness of context, available sources of information, domain knowledge, user behavior and preferences, and external systems and devices, among other things, the voice user interface may provide an integrated environment in which the user can speak conversationally, using natural language, to issue queries, commands, or other requests relating to the navigation services provided in the environment.
    Type: Application
    Filed: December 11, 2007
    Publication date: June 11, 2009
    Inventors: Michael R. Kennewick, Catherine Cheung, Larry Baldwin, Ari Salomon, Michael Tjalve, Sheetal Guttigoli, Lynn Armstrong, Philippe Di Cristo, Bernie Zimmerman, Sam Menaker
  • Publication number: 20090144050
    Abstract: A method and system for automatic speech recognition are disclosed. The method comprises receiving speech from a user, the speech including at least one speech error; increasing the probabilities of words closely related to the at least one speech error; and processing the received speech using the increased probabilities. A corpus of data containing common words that are mis-stated is used to identify and increase the probabilities of related words. The method applies to at least the automatic speech recognition module and the spoken language understanding module.
    Type: Application
    Filed: February 5, 2009
    Publication date: June 4, 2009
    Applicant: AT&T Corp.
    Inventors: Steven H. Lewis, Kenneth H. Rosen
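The probability-boosting step in this abstract can be sketched as follows. The hand-built confusion table and the multiplicative boost are assumptions of the sketch; the patent derives related words from a corpus of commonly mis-stated words.

```python
# Stand-in for the corpus-derived table of mis-stated word pairs:
# heard word -> words the speaker may have intended.
CONFUSION_TABLE = {
    "flaunt": ["flout"],
    "affect": ["effect"],
}

def boost_related(lm_probs, heard_word, boost=2.0):
    """Return a renormalized copy of lm_probs with related words boosted.

    Words closely related to the (possibly erroneous) heard word get a
    multiplicative boost before the distribution is renormalized.
    """
    probs = dict(lm_probs)
    for related in CONFUSION_TABLE.get(heard_word, []):
        if related in probs:
            probs[related] *= boost
    total = sum(probs.values())
    return {w: p / total for w, p in probs.items()}
```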
  • Publication number: 20090132230
    Abstract: Illustrative embodiments provide a computer-implemented method and apparatus, in the form of a data processing system, and a computer program product for optimizing a natural language translation. In one illustrative embodiment, the computer-implemented method comprises receiving a request from a requester, wherein the request comprises source language data and an indication of a source language and a destination language, and determining whether a translation between the source language and the destination language is needed. A mapping between the source language and the destination language is identified that includes a set of hops. Responsive to a determination that the translation is needed, the method translates the source language data into destination language data associated with each successive hop in the set of hops in the mapping and returns the destination language data to the requester at a destination hop.
    Type: Application
    Filed: November 15, 2007
    Publication date: May 21, 2009
    Inventors: Dimitri Kanevsky, Bhuvana Ramabhadran, Mahesh Viswanathan
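The hop-wise translation in this abstract can be sketched as chaining per-hop translators: when no direct source-to-destination translator exists, the data is routed through intermediate languages, translating at each hop. The translator table and its single-word mappings are stand-ins assumed for illustration.

```python
# Stand-in per-hop translators keyed by (source, destination).
TRANSLATORS = {
    ("de", "en"): lambda s: {"hallo": "hello"}.get(s, s),
    ("en", "fr"): lambda s: {"hello": "bonjour"}.get(s, s),
}

def translate_via_hops(text, hops):
    """Translate text along a mapping such as ["de", "en", "fr"].

    Each successive pair of languages in the mapping is one hop; the
    output of one hop feeds the next, and the final hop's output is
    returned to the requester.
    """
    for src, dst in zip(hops, hops[1:]):
        text = TRANSLATORS[(src, dst)](text)  # translate one hop
    return text

print(translate_via_hops("hallo", ["de", "en", "fr"]))  # bonjour
```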
  • Publication number: 20090099850
    Abstract: Methods, apparatus, products are disclosed for displaying speech for a user of a surface computer, the surface computer comprising a surface, the surface computer capable of receiving multi-touch input through the surface and rendering display output on the surface, that include: registering, by the surface computer, a plurality of users with the surface computer; allocating, by the surface computer to each registered user, a portion of the surface for interaction between that registered user and the surface computer; detecting, by the surface computer, a speech utterance from one of the plurality of users; determining, by the surface computer using a speech engine, speech text in dependence upon the speech utterance; creating, by the surface computer, display text in dependence upon the speech text; and rendering, by the surface computer, the display text on at least one of the allocated portions of the surface.
    Type: Application
    Filed: October 10, 2007
    Publication date: April 16, 2009
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Lydia M. Do, Pamela A. Nesbitt, Lisa A. Seacat
  • Publication number: 20090089044
    Abstract: Linguistic analysis is used to identify queries that use different natural language formations to request similar information. Common intent categories are identified for the queries requesting similar information. Intent responses can then be provided that are associated with the identified intent categories. An intent management tool can be used for identifying new intent categories, identifying obsolete intent categories, or refining existing intent categories.
    Type: Application
    Filed: August 14, 2006
    Publication date: April 2, 2009
    Applicant: INQUIRA, INC.
    Inventors: Edwin Riley Cooper, Michael Peter Dukes, Gann Alexander Bierner, Filippo Ferdinando Paulo Beghelli
  • Publication number: 20090070102
    Abstract: A speech recognition method includes a model selection step, which selects a recognition model and translation dictionary information based on characteristic information of input speech; a speech recognition step, which translates the input speech into text data based on the selected recognition model; and a translation step, which translates the text data based on the selected translation dictionary information.
    Type: Application
    Filed: March 12, 2008
    Publication date: March 12, 2009
    Inventor: SHUHEI MAEGAWA
  • Publication number: 20090055184
    Abstract: A method of creating an application-generic class-based SLM includes, for each of a plurality of speech applications, parsing a corpus of utterance transcriptions to produce a first output set, in which expressions identified in the corpus are replaced with corresponding grammar tags from a grammar that is specific to the application. The method further includes, for each of the plurality of speech applications, replacing each of the grammar tags in the first output set with a class identifier of an application-generic class, to produce a second output set. The method further includes processing the resulting second output sets with a statistical language model (SLM) trainer to generate an application-generic class-based SLM.
    Type: Application
    Filed: August 24, 2007
    Publication date: February 26, 2009
    Applicant: Nuance Communications, Inc.
    Inventor: Matthieu Hebert
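The two replacement passes in this abstract can be sketched as follows: application-specific grammar tags first replace matched expressions in the transcriptions, then collapse to generic class identifiers, so one class-based SLM can be trained across applications. The tag names, class mapping, and the pre-matched expression list are assumptions of the sketch.

```python
# Application-specific grammar tag -> application-generic class id.
APP_TAG_TO_CLASS = {
    "city_grammar": "CITY",
    "date_grammar": "DATE",
}

def genericize(utterance, grammar_matches):
    """Replace grammar-matched expressions with generic class ids.

    grammar_matches: list of (expression, grammar_tag) pairs already
    identified by the application-specific grammar (pass 1); each is
    mapped to its generic class id (pass 2).
    """
    out = utterance
    for expression, tag in grammar_matches:
        out = out.replace(expression, APP_TAG_TO_CLASS[tag])
    return out

corpus = ["fly to boston on monday"]
matches = [("boston", "city_grammar"), ("monday", "date_grammar")]
training_set = [genericize(u, matches) for u in corpus]
print(training_set[0])  # fly to CITY on DATE
```

The resulting `training_set` is what an SLM trainer would then consume to produce the application-generic class-based model.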
  • Publication number: 20090055179
    Abstract: Provided are a method and apparatus for providing a mobile voice web service in a mobile terminal. The method includes analyzing a user's web history from the user's web search logs, generating a voice access list based on the analysis results, and performing voice recognition by dynamically generating a voice recognition syntax according to the generated voice access list. By limiting the syntax required for voice recognition to one suited to the user's web context, efficient voice recognition that can be performed in the terminal rather than on a server can be implemented.
    Type: Application
    Filed: January 15, 2008
    Publication date: February 26, 2009
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jeong-mi CHO, Ji-yeun Kim, Yoon-kyung Song, Byung-kwan Kwak, Nam-hoon Kim, Ick-sang Han
  • Publication number: 20090018842
    Abstract: Techniques are described to create a context for use in automated speech recognition. In an implementation, a determination is made as to which data received by a position-determining device is selectable to initiate one or more functions of the position-determining device, wherein at least one of the functions relates to position-determining functionality. A dynamic context is generated to include one or more phrases taken from the data based on the determination. An audio input is translated by the position-determining device using one or more said phrases from the dynamic context.
    Type: Application
    Filed: December 19, 2007
    Publication date: January 15, 2009
    Applicant: GARMIN LTD.
    Inventors: Jacob W. Caire, Pascal M. Lutz, Kenneth A. Bolton
  • Publication number: 20090018829
    Abstract: Described is a speech recognition dialog management system that allows more open-ended conversations between virtual agents and people than are possible using just agent-directed dialogs. The system uses both novel dialog context switching and learning algorithms based on spoken interactions with people. The context switching is performed through processing multiple dialog goals in a last-in-first-out (LIFO) pattern. The recognition accuracy for these new flexible conversations is improved through automated learning from processing errors and addition of new grammars.
    Type: Application
    Filed: June 8, 2005
    Publication date: January 15, 2009
    Applicant: METAPHOR SOLUTIONS, INC.
    Inventor: Michael Kuperstein
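The last-in-first-out context switching in this abstract can be sketched as a goal stack: a new goal raised mid-dialog is pushed on top, and the conversation resumes the interrupted goal once the newer one completes. The class and goal names are illustrative assumptions.

```python
class DialogManager:
    """Minimal LIFO dialog-goal manager."""

    def __init__(self):
        self.goals = []  # stack of open dialog goals

    def push_goal(self, goal):
        """A digression opens a new goal on top of the stack."""
        self.goals.append(goal)

    def current_goal(self):
        return self.goals[-1] if self.goals else None

    def complete_current(self):
        """Finish the newest goal and resume the one beneath it."""
        self.goals.pop()
        return self.current_goal()

dm = DialogManager()
dm.push_goal("book_flight")
dm.push_goal("clarify_date")        # user digresses; context switches
assert dm.current_goal() == "clarify_date"
assert dm.complete_current() == "book_flight"  # LIFO resume
```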
  • Publication number: 20080312907
    Abstract: Techniques for the design and use of a perception modeling language for communicating according to the perspective of at least two communicators. The disclosed method and system provide for forming a model including a predetermined number of states and a plurality of related transitions. The disclosed subject matter represents each of said predetermined number of states according to a plurality of perspectives, said perspectives including a plurality of states and a set of related transitions, and forms a perspective language by deriving a plurality of functions associating said plurality of perspectives for representing at least one actually observable system. Furthermore, the perspective modeling language derives a set of modeling perspectives for modeling said at least one actually observable system.
    Type: Application
    Filed: April 7, 2008
    Publication date: December 18, 2008
    Applicant: PERCEPTION LABS
    Inventor: Jonathan McCoy
  • Publication number: 20080270135
    Abstract: A method (and system) of handling out-of-grammar utterances includes building a statistical language model for a dialog state by generating sentences and semantic interpretations for the sentences using a finite state grammar, building a statistical action classifier, receiving user input, carrying out recognition with the finite state grammar, carrying out recognition with the statistical language model, using the statistical action classifier to find semantic interpretations, comparing an output from the finite state grammar and an output from the statistical language model, deciding which of the two outputs to keep as the final recognition output, selecting the final recognition output, and outputting the final recognition result, wherein the statistical action classifier, the finite state grammar, and the statistical language model are used in conjunction to carry out speech recognition and interpretation.
    Type: Application
    Filed: April 30, 2007
    Publication date: October 30, 2008
    Applicant: International Business Machines Corporation
    Inventors: Vaibhava Goel, Ramesh Gopinath, Ea-Ee Jan, Karthik Visweswariah
  • Publication number: 20080270119
    Abstract: A new system is hereby provided that generates automatic summaries of groups of multiple documents using multiple variations of each sentence from a selected group of representative sentences from the documents, and then selects from among the multiple variations when assembling the automatic summary. The system may generate alternative strings of text, select from among the alternative strings of text, and provide a summary of the group of documents using the strings of text selected from among the alternatives. The alternative strings of text may be generated based on each of a plurality of sentences from the group of documents. Selecting from among the alternative strings of text may be based on one or more criteria indicating that the strings of text are representative of the content of the group of documents.
    Type: Application
    Filed: April 30, 2007
    Publication date: October 30, 2008
    Applicant: Microsoft Corporation
    Inventor: Hisami Suzuki
  • Publication number: 20080270134
    Abstract: A hybrid-captioning system for editing captions for spoken utterances within video includes an editor-type caption-editing subsystem, a line-based caption-editing subsystem, and a mechanism. The editor-type subsystem is that in which captions are edited for spoken utterances within the video on a groups-of-lines basis, without respect to particular lines of the captions and without respect to temporal positioning of the captions in relation to the spoken utterances. The line-based subsystem is that in which captions are edited for spoken utterances within the video on a line-by-line basis, with respect to particular lines of the captions and with respect to temporal positioning of the captions in relation to the spoken utterances. For each section of spoken utterances within the video, the mechanism selects the editor-type or the line-based subsystem to provide captions for that section in accordance with predetermined criteria.
    Type: Application
    Filed: July 13, 2008
    Publication date: October 30, 2008
    Inventors: Kohtaroh Miyamoto, Noriko Negishi, Kenichi Arakawa
  • Publication number: 20080270118
    Abstract: Architecture for correcting incorrect recognition results in an Asian language speech recognition system. A spelling mode can be launched in response to receiving speech input, the spelling mode for correcting incorrect spelling of the recognition results or generating new words. Correction can be obtained using speech and/or manual selection and entry. The architecture facilitates correction in a single pass, rather than multiples times as in conventional systems. Words corrected using the spelling mode are corrected as a unit and treated as a word. The spelling mode applies to languages of at least the Asian continent, such as Simplified Chinese, Traditional Chinese, and/or other Asian languages such as Japanese.
    Type: Application
    Filed: April 26, 2007
    Publication date: October 30, 2008
    Applicant: Microsoft Corporation
    Inventors: Shiun-Zu Kuo, Kevin E. Feige, Yifan Gong, Taro Miwa, Arun Chitrapu
  • Publication number: 20080255846
    Abstract: The disclosed and claimed concept relates generally to handheld electronic devices and, more particularly, to a method of providing language objects by identifying an occupation of a user of a handheld electronic device and a handheld electronic device incorporating the same. A method and apparatus of providing language objects by identifying an occupation of a user of a handheld electronic device includes the following steps: identifying the occupation of the user of the handheld electronic device from a number of occupations; detecting a text input; and displaying at least a portion of at least a first language object that is associated with the identified occupation and that corresponds to the text input.
    Type: Application
    Filed: April 13, 2007
    Publication date: October 16, 2008
    Inventor: Vadim Fux
  • Publication number: 20080249773
    Abstract: A method and system for automatically generating a scoring model for scoring a speech sample are disclosed. One or more training speech samples are received in response to a prompt. One or more speech features are determined for each of the training speech samples. A scoring model is then generated based on the speech features. At least one of the training speech samples may be a high entropy speech sample. An evaluation speech sample is received and a score is assigned to the evaluation speech sample using the scoring model. The evaluation speech sample may be a high entropy speech sample.
    Type: Application
    Filed: June 16, 2008
    Publication date: October 9, 2008
    Inventors: Isaac Bejar, Klaus Zechner
  • Publication number: 20080243513
    Abstract: An apparatus for controlling the output format of information is provided. The apparatus includes a communications unit configured to receive information intended for at least one recipient. The apparatus also includes a selection unit, which is configured to automatically detect, based on the at least one recipient, an externally-specified indication of a preferred form of output selected from amongst a plurality of available forms of output. The selection unit causes the information to be outputted in the preferred form of output. A method and a computer program product are also provided for controlling the output format of information.
    Type: Application
    Filed: March 30, 2007
    Publication date: October 2, 2008
    Applicant: Verizon Laboratories Inc.
    Inventors: Vittorio Bucchieri, Albert L. Schmidt
  • Publication number: 20080201145
    Abstract: Methods are disclosed for automatic accent labeling without manually labeled data. The methods are designed to exploit accent distribution between function and content words.
    Type: Application
    Filed: February 20, 2007
    Publication date: August 21, 2008
    Applicant: Microsoft Corporation
    Inventors: YiNing Chen, Frank Kao-ping Soong, Min Chu
  • Publication number: 20080183461
    Abstract: A method for negotiating a common language on a voice over Internet Protocol (VoIP) network, the method comprising: allowing a plurality of users to connect to the VoIP network, each of the plurality of users having at least one of a plurality of VoIP compatible transmitting/receiving devices; configuring each of the plurality of VoIP compatible transmitting/receiving devices with a list of a plurality of languages, each of the plurality of languages having a priority level associated therewith; allowing automatic selection of the common language between two or more of the plurality of users on a joint VoIP call by performing a language handshake; computing a maximum selection score via a language handshake algorithm provided by the language handshake; maximizing a sum of priority levels; minimizing a sum of priority differences; and selecting the common language that provides a largest sum of the priority levels and a lowest sum of the priority differences.
    Type: Application
    Filed: April 3, 2008
    Publication date: July 31, 2008
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Irwin Boutboul
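The handshake scoring the abstract describes (maximize the sum of the two parties' priority levels, minimize the sum of priority differences) can be sketched for the two-party case as follows; the function name and tie-breaking order are illustrative assumptions:

```python
# Sketch of a two-party language handshake: among languages common to both
# priority lists, prefer the highest combined priority; break ties by the
# smallest difference between the two parties' priorities.
def negotiate_language(prio_a, prio_b):
    """prio_a/prio_b map language -> priority level (higher = preferred)."""
    common = set(prio_a) & set(prio_b)
    if not common:
        return None
    # Maximize the priority sum; among equal sums, minimize the difference.
    return max(common,
               key=lambda lang: (prio_a[lang] + prio_b[lang],
                                 -abs(prio_a[lang] - prio_b[lang])))
```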
  • Publication number: 20080169899
    Abstract: A programmable controller for activating an appliance controlled by an activation signal is voice-programmable and voice-activated. If a user verbally indicates the appliance is activated by a rolling code activation signal, the controller transmits a sequence of different rolling code activation signals until the user verbally indicates a successful rolling code transmission. The controller stores data representing the successful rolling code transmission. If the user verbally indicates the appliance is activated by a fixed code activation signal, the controller uses a fixed code word to transmit each of a sequence of different fixed code activation signals until the user verbally indicates a successful fixed code transmission. The controller then stores data representing the fixed code word and a fixed code scheme used to generate the successful fixed code transmission. In response to the user verbally identifying an activation input, the controller transmits an activation signal based on stored data.
    Type: Application
    Filed: January 12, 2007
    Publication date: July 17, 2008
    Applicant: Lear Corporation
    Inventors: Jason G. Bauman, Jody K. Harwood, Sumithra Krishnan, Kenan R. Rudnick
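The learning loop the abstract describes (transmit a sequence of candidate signals until the user verbally confirms success, then store the successful one) reduces to a trial-and-confirm loop. In this sketch the transmit and confirmation callbacks are illustrative stand-ins for the hardware transmitter and voice interface:

```python
# Sketch of the trial-transmission loop: cycle through candidate activation
# signals until the user confirms one, then return it for storage.
def learn_activation_signal(candidates, transmit, user_confirms):
    """Try each candidate signal; return the first one the user confirms."""
    for signal in candidates:
        transmit(signal)
        if user_confirms():      # e.g. the user says "yes, the door moved"
            return signal        # caller stores this as the learned signal
    return None
```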
  • Publication number: 20080167856
    Abstract: A transliteration mechanism is provided that allows a user to view a text in one Indian language, to highlight a word or phrase, and to easily transliterate the selected word or phrase into a target language or script. The mechanism may be an application, an applet, or a plug-in to another application, such as a Web browser. The target language and/or script may be stored in a user profile. Preferably, the source language may be any known Indian language in any known script.
    Type: Application
    Filed: March 20, 2008
    Publication date: July 10, 2008
    Inventors: Janani Janakiraman, David Bruce Kumhyr
  • Publication number: 20080162146
    Abstract: A method and device are provided for classifying at least two languages in an automatic dialogue system, which processes digitized speech input. At least one speech recognition method and at least one language identification method are applied to the digitized speech input, and the language of the speech input is identified by logical evaluation of the methods' results.
    Type: Application
    Filed: December 3, 2007
    Publication date: July 3, 2008
    Applicant: Deutsche Telekom AG
    Inventors: Martin Eckert, Roman Englert, Wiebke Johannsen, Fred Runge, Markus Van Ballegooy
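One plausible reading of the "logical evaluation" step is a weighted combination of the language-identification score with each language-specific recognizer's confidence; this sketch assumes that interpretation, and the weight and names are illustrative:

```python
# Sketch: combine a language-identification score with each language-specific
# recognizer's confidence, and pick the language with the best combined
# evidence.
def classify_language(lid_scores, asr_confidences, w_lid=0.5):
    """lid_scores and asr_confidences map language -> score in [0, 1]."""
    langs = set(lid_scores) & set(asr_confidences)
    return max(langs,
               key=lambda l: w_lid * lid_scores[l]
                             + (1 - w_lid) * asr_confidences[l])
```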
  • Publication number: 20080147381
    Abstract: A computer-implemented method is disclosed for improving the accuracy of a directory assistance system. The method includes constructing a prefix tree based on a collection of alphabetically organized words. The prefix tree is utilized as a basis for generating splitting rules for a compound word included in an index associated with the directory assistance system. A language model check and a pronunciation check are conducted in order to determine which of the generated splitting rules are most likely correct. The compound word is split into word components based on the most likely correct rule or rules. The word components are incorporated into a data set associated with the directory assistance system, such as into a recognition grammar and/or the index.
    Type: Application
    Filed: December 13, 2006
    Publication date: June 19, 2008
    Applicant: Microsoft Corporation
    Inventors: Dong Yu, Alejandro Acero, Yun-Cheng Ju
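The split-generation step can be sketched as below; here a plain set of lexicon words stands in for the strings accepted by the prefix tree, and a real system would then score each candidate split with the language-model and pronunciation checks the abstract describes. Names are illustrative:

```python
# Sketch of compound splitting: enumerate every way to segment a compound
# word into known lexicon words, by recursively peeling off known prefixes.
def split_compound(word, lexicon):
    """Return all ways to split `word` into words from `lexicon`."""
    if not word:
        return [[]]
    splits = []
    for i in range(1, len(word) + 1):
        prefix = word[:i]
        if prefix in lexicon:
            for rest in split_compound(word[i:], lexicon):
                splits.append([prefix] + rest)
    return splits
```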
  • Publication number: 20080140417
    Abstract: When registering speech onto an object, an information processing apparatus selects identification information from an identification information database and stores information including the object, speech for registration, and the selected identification information in a registration database. When a user performs a speech call, the information processing apparatus outputs identification information that is included in the information called by the user.
    Type: Application
    Filed: October 12, 2007
    Publication date: June 12, 2008
    Applicant: CANON KABUSHIKI KAISHA
    Inventor: Kenichiro Nakagawa
  • Publication number: 20080114583
    Abstract: Translating named entities from a source language to a target language. In general, in one implementation, the technique includes: generating potential translations of a named entity from a source language to a target language using a pronunciation-based and spelling-based transliteration model, searching a monolingual resource in the target language for information relating to usage frequency, and providing output including at least one of the potential translations based on the usage frequency information.
    Type: Application
    Filed: June 7, 2007
    Publication date: May 15, 2008
    Inventors: Yaser Al-Onaizan, Kevin Knight
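The final ranking step the abstract describes (order the transliteration candidates by how often each appears in a monolingual target-language resource) can be sketched as follows; the function name and data are illustrative:

```python
from collections import Counter

# Sketch: given candidate transliterations from pronunciation- and
# spelling-based models, rank them by usage frequency in a monolingual
# target-language corpus.
def rank_by_frequency(candidates, corpus_tokens):
    """Order candidate translations by usage frequency in the corpus."""
    freq = Counter(corpus_tokens)
    return sorted(candidates, key=lambda c: freq[c], reverse=True)
```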
  • Publication number: 20080077548
    Abstract: A computer implemented method and systems used to create and interpret a set of formal glossaries which refer one to the other and are intended to define precisely the terminology of a field of endeavor. Such glossaries are known as intelligent, in the sense that they allow machines to make deductions, with interaction of human actors. Once a word is defined in an intelligent glossary, all the logical consequences of the use of that word in a formal and well-formed sentence are computable. The process includes a question and answer mechanism, which applies the definitions contained in the intelligent glossaries to a given formal sentence. The methods may be applied in the development of knowledge management methods and tools that are based on semantics; for example: modeling of essential knowledge in the field based on the relevant semantics; and computer-aided human-reasoning.
    Type: Application
    Filed: September 7, 2007
    Publication date: March 27, 2008
    Inventor: Philippe Michelin
  • Publication number: 20080059187
    Abstract: Methods of retrieving documents using a language model are disclosed. A method may include preparing a language model of a plurality of documents, receiving a query, processing the query using the language model, and using the processed query to retrieve documents responding to the query via the search engine. The methods may be implemented in software and/or hardware on computing devices, including personal computers, telephones, servers, and others.
    Type: Application
    Filed: August 30, 2007
    Publication date: March 6, 2008
    Inventors: Herbert L. Roitblat, Brian Golbere
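A standard form of language-model retrieval, consistent with the steps in the abstract, is query-likelihood ranking: score each document by the probability its unigram language model assigns to the query. This sketch uses add-one smoothing as an illustrative assumption; the patent does not specify the smoothing scheme:

```python
import math
from collections import Counter

def score(query_terms, doc_terms, vocab_size):
    # Log-probability of the query under the document's smoothed unigram model.
    counts = Counter(doc_terms)
    total = len(doc_terms)
    return sum(math.log((counts[t] + 1) / (total + vocab_size))
               for t in query_terms)

def retrieve(query, docs):
    """docs: mapping doc_id -> list of tokens. Returns ranked doc ids."""
    q = query.split()
    vocab = {t for terms in docs.values() for t in terms} | set(q)
    return sorted(docs, key=lambda d: score(q, docs[d], len(vocab)),
                  reverse=True)
```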
  • Publication number: 20080052078
    Abstract: An intelligent query system for processing voice-based queries is disclosed, which uses a combination of both statistical and semantic based processing to identify the question posed by the user by understanding the meaning of the user's utterance. Based on identifying the meaning of the utterance, the system selects a single answer that best matches the user's query. The answer that is paired to this single question is then retrieved and presented to the user. The system, as implemented, accepts environmental variables selected by the user and is scalable to provide answers to a variety and quantity of user-initiated queries.
    Type: Application
    Filed: October 31, 2007
    Publication date: February 28, 2008
    Inventor: Ian Bennett
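The question-to-answer pairing the abstract describes can be sketched with a crude stand-in for the combined statistical/semantic match: score the user's utterance against each stored question by word overlap and return the answer paired with the best match. The similarity measure and all data here are illustrative assumptions:

```python
# Sketch: match an utterance against stored question/answer pairs by
# Jaccard word overlap and return the answer of the best-matching question.
def best_answer(utterance, qa_pairs):
    """qa_pairs: list of (question, answer) tuples."""
    u = set(utterance.lower().split())
    def overlap(q):
        qs = set(q.lower().split())
        return len(u & qs) / len(u | qs)   # Jaccard similarity
    question, answer = max(qa_pairs, key=lambda qa: overlap(qa[0]))
    return answer
```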