Having Particular Input/Output Device Patents (Class 704/3)
  • Publication number: 20150051898
    Abstract: A data processing apparatus receives data indicating a movement of a client device by a first user. The apparatus determines that the movement of the client device is a delimiter motion for switching between a first mode, in which the client device is configured to (i) provide a first interface for a first user speaking in a first language and (ii) perform speech recognition of the first language, and a second mode, in which the client device is configured to (i) provide a second interface for a second user speaking in a second language and (ii) perform speech recognition of the second language, the second interface being different from the first interface. Based on determining that the movement is a delimiter motion, the apparatus switches between the first mode and the second mode without the second user physically interacting with the client device.
    Type: Application
    Filed: November 12, 2013
    Publication date: February 19, 2015
    Applicant: Google Inc.
    Inventors: Alexander J. Cuthbert, Joshua J. Estelle, Macduff Richard Hughes, Sunny Goyal, Minqi Sebastian Jiang
  • Patent number: 8959011
    Abstract: The preferred embodiments provide an automated machine translation from one language to another. The source language may contain expressions or words that are not readily handled by the translation system. Such problematic words or word combinations may, for example, include the words not found in the dictionary of the translation system, as well as text fragments corresponding to structures with low ratings. To improve translation quality, such potentially erroneous words or questionable word combinations are identified by the translation system and displayed to a user by distinctive display styles in the display of a document in the source language and in its translation to a target language. A user is provided with a capability to correct erroneous or questionable words so as to improve the quality of translation.
    Type: Grant
    Filed: January 26, 2012
    Date of Patent: February 17, 2015
    Assignee: ABBYY InfoPoisk LLC
    Inventors: Konstantin Anisimovich, Tatiana Danielyan, Vladimir Selegey, Konstantin Zuev
  • Patent number: 8959013
    Abstract: A method, including presenting, by a computer system executing a non-tactile three dimensional user interface, a virtual keyboard on a display, the virtual keyboard including multiple virtual keys, and capturing a sequence of depth maps over time of a body part of a human subject. On the display, a cursor is presented at positions indicated by the body part in the captured sequence of depth maps, and one of the multiple virtual keys is selected in response to an interruption of a motion of the presented cursor in proximity to the one of the multiple virtual keys.
    Type: Grant
    Filed: September 25, 2011
    Date of Patent: February 17, 2015
    Assignee: Apple Inc.
    Inventors: Micha Galor, Ofir Or, Shai Litvak, Erez Sali
  • Publication number: 20150046148
    Abstract: A method for controlling a mobile terminal is provided. The method includes receiving content data including video data; determining whether first caption data including a first language caption is included in the content data; determining, if the first caption data is included in the content data, whether a high-difficulty word is included in the first language caption; generating explanation data corresponding to the high-difficulty word if the high-difficulty word is included in the first language caption; and converting the first caption data into second caption data by adding the explanation data to the first caption data.
    Type: Application
    Filed: August 6, 2014
    Publication date: February 12, 2015
    Inventors: Young-Joon Oh, Jung-Hoon Park
  • Patent number: 8954314
    Abstract: Disclosed is subject matter that provides a technique and a device that may include an accelerometer, a display device, an input device and a processor. The input device may receive textual information in a first language. The processor may be configured to generate a plurality of probable translation alternatives for a translation result. Each probable translation alternative may be a translation of the textual information into a second language. The processor may present a first of the plurality of probable translation alternatives on the display device in an alternate translation result dialog screen. Based on an accelerometer signal, the processor may determine whether the device is being shaken. In response to a determination the device is being shaken, the processor may present a second of the plurality of probable translation alternatives on the display device in place of the first of the plurality of probable translation alternatives.
    Type: Grant
    Filed: September 24, 2012
    Date of Patent: February 10, 2015
    Assignee: Google Inc.
    Inventor: Piotr Powalowski
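The shake-to-alternate behavior described in the abstract above can be sketched as a simple threshold on accelerometer magnitude that cycles through the probable translation alternatives. The class name, threshold value, and axis convention here are illustrative assumptions, not details from the patent.

```python
import math

SHAKE_THRESHOLD = 2.5  # acceleration magnitude treated as a shake (assumed value)

class AlternativeTranslations:
    """Cycles through probable translation alternatives when the device is shaken."""

    def __init__(self, alternatives):
        self.alternatives = alternatives
        self.index = 0

    def current(self):
        """The translation alternative presently shown on the display."""
        return self.alternatives[self.index]

    def on_accelerometer(self, ax, ay, az):
        """Advance to the next alternative if the reading looks like a shake."""
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if magnitude > SHAKE_THRESHOLD:
            self.index = (self.index + 1) % len(self.alternatives)
        return self.current()
```

A gentle reading leaves the displayed alternative unchanged; only a reading above the threshold replaces it with the next candidate.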
  • Patent number: 8954335
    Abstract: Appropriate processing results or appropriate apparatuses can be selected with a control device that selects the most probable speech recognition result by using speech recognition scores received with speech recognition results from two or more speech recognition apparatuses; sends the selected speech recognition result to two or more translation apparatuses respectively; selects the most probable translation result by using translation scores received with translation results from the two or more translation apparatuses; sends the selected translation result to two or more speech synthesis apparatuses respectively; receives a speech synthesis processing result including a speech synthesis result and a speech synthesis score from each of the two or more speech synthesis apparatuses; selects the most probable speech synthesis result by using the scores; and sends the selected speech synthesis result to a second terminal apparatus.
    Type: Grant
    Filed: March 3, 2010
    Date of Patent: February 10, 2015
    Assignee: National Institute of Information and Communications Technology
    Inventors: Satoshi Nakamura, Eiichiro Sumita, Yutaka Ashikari, Noriyuki Kimura, Chiori Hori
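The control flow in the abstract above — select the top-scoring result at each of the recognition, translation, and synthesis stages — can be sketched as a cascade of arg-max selections. The service-callable interface (each service returns a dict with `score` and `result`) is an assumption made for illustration.

```python
def select_best(results):
    """Pick the highest-scoring result among several services' outputs."""
    return max(results, key=lambda r: r["score"])["result"]

def cascade(utterance, recognizers, translators, synthesizers):
    """Run every candidate service at each stage, keeping only the most
    probable output before passing it to the next stage."""
    text = select_best([svc(utterance) for svc in recognizers])
    translation = select_best([svc(text) for svc in translators])
    return select_best([svc(translation) for svc in synthesizers])
```

In the patent's terms, only the selected speech synthesis result would be sent on to the second terminal apparatus.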
  • Publication number: 20150039288
    Abstract: A portable electronic translator (1) forming a headset. The translator (1) comprises at least a sound pickup device (5) having at least one mouth microphone (8) and at least one dialog microphone (9). The pickup device (5) is coupled to electronic means (11) so as to determine the current stage of the conversation and to adapt its functions automatically as a function of that stage.
    Type: Application
    Filed: August 9, 2011
    Publication date: February 5, 2015
    Inventor: Joel Pedre
  • Patent number: 8949112
    Abstract: One embodiment of the present invention is an XML application module that processes an XML character stream, which module includes an XML interface module, a parallel bit stream module, a lexical item stream module, a parser and a parsed data receiver. The XML interface module applies the XML character stream as input to the parallel bit stream module and the parser; the parallel bit stream module forms parallel bit streams and applies them as input to the lexical item stream module; the lexical stream module forms lexical item streams and applies them as input to the parser; the parser forms a stream of parsed XML data and applies it as input to the parsed data receiver; and the parsed data receiver processes the stream of parsed XML data. The parsed data receiver may be, for example, a communication module of a portable communication device.
    Type: Grant
    Filed: February 6, 2013
    Date of Patent: February 3, 2015
    Assignee: International Characters, Inc.
    Inventor: Robert D. Cameron
  • Patent number: 8949111
    Abstract: A method includes accessing text that includes a plurality of words, tagging each of the plurality of words with one of a plurality of parts of speech (POS) tags, and creating a plurality of tokens, each token comprising one of the plurality of words and its associated POS tag. The method further includes clustering one or more of the created tokens into a chunk of tokens, the one or more tokens clustered into the chunk of tokens based on the POS tags of the one or more tokens, and forming a phrase based on the chunk of tokens, the phrase comprising the words of the one or more tokens clustered into the chunk of tokens.
    Type: Grant
    Filed: December 14, 2011
    Date of Patent: February 3, 2015
    Assignee: Brainspace Corporation
    Inventor: Paul A. Jakubik
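The tag-then-chunk pipeline in the abstract above can be sketched in a few lines. The tiny tag lexicon and the chunking rule (group consecutive adjectives and nouns, closing a phrase at each noun) are assumptions chosen to keep the sketch self-contained; a real system would use a trained tagger.

```python
# Toy lexicon mapping words to parts of speech (illustrative only).
TAG_LEXICON = {
    "the": "DET", "quick": "ADJ", "brown": "ADJ", "fox": "NOUN",
    "jumps": "VERB", "over": "ADP", "lazy": "ADJ", "dog": "NOUN",
}

def tokenize_and_tag(text):
    """Create (word, POS-tag) tokens for each word in the text."""
    return [(w, TAG_LEXICON.get(w.lower(), "X")) for w in text.split()]

def chunk_noun_phrases(tokens):
    """Cluster runs of ADJ/NOUN tokens into chunks; a NOUN closes the chunk,
    and each closed chunk becomes a phrase."""
    phrases, current = [], []
    for word, tag in tokens:
        if tag in ("ADJ", "NOUN"):
            current.append(word)
            if tag == "NOUN":
                phrases.append(" ".join(current))
                current = []
        else:
            current = []  # any other tag breaks the chunk
    return phrases
```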
  • Patent number: 8949130
    Abstract: In embodiments of the present invention improved capabilities are described for a user interacting with a mobile communication facility, where speech presented by the user is recorded using a mobile communication facility resident capture facility. The recorded speech may be recognized using an external speech recognition facility to produce an external output and a resident speech recognition facility to produce an internal output, where at least one of the external output and the internal output may be selected based on a criteria.
    Type: Grant
    Filed: October 21, 2009
    Date of Patent: February 3, 2015
    Assignee: Vlingo Corporation
    Inventor: Michael S. Phillips
  • Publication number: 20150032440
    Abstract: An eReader displays contents of an eBook and, in response to a user request, obtains a translation of text in the eBook and displays the translated text. Optionally, the eReader uses text-to-speech technology to read the translated text to the user.
    Type: Application
    Filed: March 4, 2013
    Publication date: January 29, 2015
    Inventor: Mark Charles Hale
  • Patent number: 8938382
    Abstract: An item of information (212) is transmitted to a distal computer (220), translated to a different sense modality and/or language (222) in substantially real time, and the translation (222) is transmitted back to the location (211) from which the item was sent. The device sending the item is preferably a wireless device, and more preferably a cellular or other telephone (210). The device receiving the translation is also preferably a wireless device, more preferably a cellular or other telephone, and may advantageously be the same device as the sending device. The item of information (212) preferably comprises a sentence of human speech having at least ten words, and the translation is a written expression of the sentence. All of the steps of transmitting the item of information, executing the program code, and transmitting the translated information preferably occur in less than 60 seconds of elapsed time.
    Type: Grant
    Filed: March 21, 2012
    Date of Patent: January 20, 2015
    Assignee: Ulloa Research Limited Liability Company
    Inventor: Robert D. Fish
  • Patent number: 8935147
    Abstract: A method of handling different languages in an object, such as a business object includes receiving a language selection indication within a business object instance interface. Data within the business object is displayed for at least one data field in the selected language. Edited data for at least one data field within the business object is received, and other language versions of the edited data in the at least one data field may be modified.
    Type: Grant
    Filed: December 31, 2007
    Date of Patent: January 13, 2015
    Assignee: SAP SE
    Inventors: Andre Stern, Christoph Kernke, Heinz Willumeit, Udo Arend
  • Patent number: 8935149
    Abstract: A method for a patternized record of a bilingual sentence-pair, for recording a source sentence and a corresponding target sentence onto a record medium in a mapping manner, comprising: recording the source sentence into a first part of the record medium, and recording the target sentence into a second part of the record medium; and recording at least one patternized unit in at least one of the first part and the second part. In the patternized unit, the content of a unit in the target sentence and information about the corresponding unit in the source sentence are recorded in a predetermined format. The patternized unit comprises a source portion, a target portion, a POS portion, an attribute portion, a portion for the serial number of a unit, or any combination thereof, wherein each portion in the patternized unit can be identified automatically by a computer. A translation method and a translation system based on the bilingual patternized sentence-pair are also disclosed.
    Type: Grant
    Filed: May 10, 2010
    Date of Patent: January 13, 2015
    Inventor: Longbu Zhang
  • Patent number: 8930177
    Abstract: Methods of adding data identifiers and speech/voice recognition functionality are disclosed. A telnet client runs one or more scripts that add data identifiers to data fields in a telnet session. The input data is inserted in the corresponding fields based on the data identifiers. Scripts run only on the telnet client, without modifications to the server applications. Further disclosed are methods for providing speech recognition and voice functionality to telnet clients. Portions of input data are converted to voice and played to the user. A user may also provide input to certain fields of the telnet session by using his voice. Scripts running on the telnet client convert the user's voice into text, which is inserted into the corresponding fields.
    Type: Grant
    Filed: May 9, 2012
    Date of Patent: January 6, 2015
    Assignee: Crimson Corporation
    Inventors: Lamar John Van Wagenen, Brant David Thomsen, Scott Allen Caddes
  • Patent number: 8918318
    Abstract: Speech recognition is enabled even for a speaker who uses the speech recognition system by employing an extended recognition dictionary suited to that speaker, without requiring any previous learning that uses an utterance label corresponding to the speaker's speech.
    Type: Grant
    Filed: January 15, 2008
    Date of Patent: December 23, 2014
    Assignee: NEC Corporation
    Inventor: Yoshifumi Onishi
  • Patent number: 8914276
    Abstract: A caption translation system is described herein that provides a way to reach a greater world-wide audience when displaying video content by providing dynamically translated captions based on the language the user has selected for their browser. The system provides machine-translated captions to accompany the video content by determining the language the user has selected for their browser or a manual language selection of the user. The system uses the language value to invoke an automated translation application-programming interface that returns translated caption text in the selected language. The system can use one or more well-known caption formats to store the translated captions, so that video playing applications that know how to consume captions can automatically display the translated captions. The video playing application plays back the video file and displays captions in the user's language.
    Type: Grant
    Filed: June 8, 2011
    Date of Patent: December 16, 2014
    Assignee: Microsoft Corporation
    Inventor: Erik Reitan
  • Patent number: 8914292
    Abstract: In embodiments of the present invention improved capabilities are described for a user interacting with a mobile communication facility, where speech presented by the user is recorded using a mobile communication facility resident capture facility. The recorded speech may be recognized using an external speech recognition facility to produce an external output and a resident speech recognition facility to produce an internal output, where at least one of the external output and the internal output may be selected based on a criteria.
    Type: Grant
    Filed: October 21, 2009
    Date of Patent: December 16, 2014
    Assignee: Vlingo Corporation
    Inventor: Michael S. Phillips
  • Patent number: 8914278
    Abstract: A computer-assisted language correction system including spelling correction functionality, misused word correction functionality, grammar correction functionality and vocabulary enhancement functionality utilizing contextual feature-sequence functionality employing an internet corpus.
    Type: Grant
    Filed: July 31, 2008
    Date of Patent: December 16, 2014
    Assignee: Ginger Software, Inc.
    Inventors: Yael Karov Zangvil, Avner Zangvil
  • Publication number: 20140365204
    Abstract: A computer-implemented technique includes receiving a first input from a user at a user device, the first input including a first word of a first alphabet-based language that is a transliteration of a non-alphabet-based language, the non-alphabet-based language being one of a logogram-based language and a syllabogram-based language. The technique then compares the first word to pluralities of potential translated words from one or more datastores associated with a second alphabet-based language and the logogram-based or syllabogram-based language. The technique may then generate a probability score for each of the pluralities of potential translated words, the probability score indicating the likelihood of an appropriate translation. The technique may then provide the user with some or all of the pluralities of potential translated words, and the user may select an appropriate translated word to obtain a selected word, which may then be displayed via a display of the user device.
    Type: Application
    Filed: August 22, 2014
    Publication date: December 11, 2014
    Applicant: GOOGLE INC.
    Inventors: Xiangye Xiao, Fan Yang, Hanping Feng, Shijun Tian, Yuanbo Zhang
  • Patent number: 8909516
    Abstract: Computing functionality converts an input linguistic item into a normalized linguistic item, representing a normalized counterpart of the input linguistic item. In one environment, the input linguistic item corresponds to a complaint by a person receiving medical care, and the normalized linguistic item corresponds to a definitive and error-free version of that complaint. In operation, the computing functionality uses plural reference resources to expand the input linguistic item, creating an expanded linguistic item. The computing functionality then forms a graph based on candidate tokens that appear in the expanded linguistic item, and then finds a shortest path through the graph; that path corresponds to the normalized linguistic item. The computing functionality may use a statistical language model to assign weights to edges in the graph, and to determine whether the normalized linguistic item incorporates two or more component linguistic items.
    Type: Grant
    Filed: December 7, 2011
    Date of Patent: December 9, 2014
    Assignee: Microsoft Corporation
    Inventors: Julie A. Medero, Daniel S. Morris, Lucretia H. Vanderwende, Michael Gamon
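The graph step in the abstract above — candidate tokens as nodes, language-model weights on edges, shortest path as the normalized item — can be sketched as dynamic programming over a token lattice. The toy bigram probabilities and the smoothing constant are invented for illustration.

```python
import math

# Toy bigram "language model": P(next token | previous token), invented values.
BIGRAM_P = {
    ("<s>", "shortness"): 0.6, ("<s>", "short"): 0.4,
    ("shortness", "of"): 0.9, ("short", "of"): 0.1,
    ("of", "breath"): 0.95, ("of", "breadth"): 0.05,
}

def normalize(lattice):
    """Find the lowest-cost path through a lattice of candidate tokens.

    `lattice` is a list of positions, each a list of candidate tokens.
    Edge weight is -log P(next | prev) under the bigram model, so the
    shortest path is the most probable normalized token sequence."""
    best = {"<s>": (0.0, [])}  # best[token] = (cost, path) ending at token
    for candidates in lattice:
        new_best = {}
        for token in candidates:
            options = []
            for prev, (cost, path) in best.items():
                p = BIGRAM_P.get((prev, token), 1e-6)  # smoothing for unseen bigrams
                options.append((cost - math.log(p), path + [token]))
            new_best[token] = min(options)
        best = new_best
    return min(best.values())[1]
```

With the toy model, the lattice for a garbled complaint resolves to the well-formed "shortness of breath" rather than the lower-probability alternatives.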
  • Publication number: 20140358519
    Abstract: A method for rewriting source text includes receiving source text including a source text string in a first natural language. The source text string is translated with a machine translation system to generate a first target text string in a second natural language. A translation confidence for the source text string is computed, based on the first target text string. At least one alternative text string is generated, where possible, in the first natural language by automatically rewriting the source string. Each alternative string is translated to generate a second target text string in the second natural language. A translation confidence is computed for the alternative text string based on the second target string. Based on the computed translation confidences, one of the alternative text strings may be selected as a candidate replacement for the source text string and may be proposed to a user on a graphical user interface.
    Type: Application
    Filed: June 3, 2013
    Publication date: December 4, 2014
    Inventors: Shachar Mirkin, Sriram Venkatapathy, Marc Dymetman
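The selection loop in the abstract above can be sketched independently of any particular MT system: translate the source and each rewrite, score each translation's confidence, and keep the best. The `translate` and `confidence` callables are stand-ins for a real machine translation system and its confidence estimator.

```python
def pick_best_rewrite(source, alternatives, translate, confidence):
    """Return the source string or the alternative whose translation
    earns the highest confidence, together with that confidence."""
    best_text, best_conf = source, confidence(source, translate(source))
    for alt in alternatives:
        conf = confidence(alt, translate(alt))
        if conf > best_conf:
            best_text, best_conf = alt, conf
    return best_text, best_conf
```

In the patented workflow the winning alternative would only be *proposed* to the user on a graphical interface, not silently substituted.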
  • Publication number: 20140358518
    Abstract: The present invention is a server-based protocol for improving translation performance in cases where a large number of documents are generated in a source-language context but their controversies are adjudicated in a different language context. The protocol is intended to improve terminology consistency, offset the effects of contextual shift on perceived facts in translations, and improve task-tracking order. If the protocol is used by well-trained and motivated document reviewers in a collaborative and harmonious environment, it can reduce unnecessary translations, improve translation accuracy and legibility, minimize the need for amendments, control translation costs, and help the client significantly improve its litigation position.
    Type: Application
    Filed: June 2, 2013
    Publication date: December 4, 2014
    Inventors: Jianqing Wu, Ping Zha
  • Patent number: 8903712
    Abstract: A system and method for providing an easy-to-use interface for verifying semantic tags in a steering application in order to generate a natural language grammar. The method includes obtaining user responses to open-ended steering questions, automatically grouping the user responses into groups based on their semantic meaning, and automatically assigning preliminary semantic tags to each of the groups. The user interface enables the user to validate the content of the groups to ensure that all responses within a group have the same semantic meaning and to add or edit semantic tags associated with the groups. The system and method may be applied to interactive voice response (IVR) systems, as well as customer service systems that can communicate with a user via a text or written interface.
    Type: Grant
    Filed: September 27, 2011
    Date of Patent: December 2, 2014
    Assignee: Nuance Communications, Inc.
    Inventors: Real Tremblay, Jerome Tremblay, Amy E. Ulug, Jean-Francois Fortier, Francois Berleur, Jeffrey N. Marcus, David Andrew Mauro
  • Patent number: 8903707
    Abstract: A method, an apparatus and an article of manufacture for determining a dropped pronoun from a source language. The method includes collecting parallel sentences from a source and a target language, creating at least one word alignment between the parallel sentences in the source and the target language, mapping at least one pronoun from the target language sentence onto the source language sentence, computing at least one feature from the mapping, wherein the at least one feature is extracted from both the source language and the at least one pronoun projected from the target language, and using the at least one feature to train a classifier to predict position and spelling of at least one pronoun in the target language when the at least one pronoun is dropped in the source language.
    Type: Grant
    Filed: January 12, 2012
    Date of Patent: December 2, 2014
    Assignee: International Business Machines Corporation
    Inventors: Bing Zhao, Imed Zitouni, Xiaoqiang Luo, Vittorio Castelli
  • Publication number: 20140337008
    Abstract: An image processing apparatus (3) includes: a translation section (32) carrying out a translation process of a language contained in image data; and a formatting process section (34) generating an image file, in accordance with the image data and a result of the translation process. The formatting process section (34) embeds, in the image file, a command for causing a computer to switch between a first display state in which the language and the translated word are displayed together and a second display state in which the language is displayed without the translated word in a case where a user gives, with respect to the image file, a switching instruction to switch between the first display state and the second display state.
    Type: Application
    Filed: January 15, 2013
    Publication date: November 13, 2014
    Inventors: Atsuhisa Morimoto, Yohsuke Konishi, Hitoshi Hirohata, Akihito Yoshida
  • Publication number: 20140337007
    Abstract: A hybrid speech translation system whereby a wireless-enabled client computing device can, in an offline mode, translate input speech utterances from one language to another locally, and, in an online mode when there is wireless network connectivity, have a remote computer perform the translation and transmit it back to the client computing device via the wireless network for audible output by the client computing device. The user of the client computing device can transition between modes, or the transition can be automatic based on user preferences or settings. The back-end speech translation server system can adapt the various recognition and translation models used by the client computing device in the offline mode based on analysis of user data over time, thereby configuring the client computing device with scaled-down, yet more efficient and faster, models than the back-end speech translation server system uses, while still being adapted for the user's domain.
    Type: Application
    Filed: June 12, 2013
    Publication date: November 13, 2014
    Applicant: Facebook, Inc.
    Inventors: Naomi Aoki Waibel, Alexander Waibel, Christian Fuegen, Kay Rottmann
  • Patent number: 8886515
    Abstract: Systems and methods for enhancing machine translation post edit review processes are provided herein. According to some embodiments, methods for displaying confidence estimations for machine translated segments of a source document may include executing instructions stored in memory, the instructions being executed by a processor to calculate a confidence estimation for a machine translated segment of a source document, compare the confidence estimation for the machine translated segment to one or more benchmark values, associate the machine translated segment with a color based upon the confidence estimation for the machine translated segment relative to the one or more benchmark values, and provide the machine translated segment having the color in a graphical format, to a client device.
    Type: Grant
    Filed: October 19, 2011
    Date of Patent: November 11, 2014
    Assignee: Language Weaver, Inc.
    Inventor: Gert Van Assche
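The benchmark comparison in the abstract above amounts to mapping a confidence estimate onto a color band. The thresholds and the three-color scheme below are assumed values for illustration; the patent leaves the benchmarks configurable.

```python
# (minimum confidence, color) pairs, checked from highest to lowest.
BENCHMARKS = [(0.85, "green"), (0.60, "yellow")]

def color_for_confidence(confidence):
    """Return the review color for a machine-translated segment."""
    for threshold, color in BENCHMARKS:
        if confidence >= threshold:
            return color
    return "red"  # below all benchmarks: likely needs full post-editing
```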
  • Publication number: 20140330551
    Abstract: Sentence internationalization methods and systems are disclosed.
    Type: Application
    Filed: May 6, 2013
    Publication date: November 6, 2014
    Inventors: Ling Bao, Hugo Johan van Heuven, Jiangbo Miao, Li Tan, David Mercurio, Maximilian Machedon
  • Patent number: 8879701
    Abstract: Embodiments provide caller information in multiple languages to multiple receiving communication devices. In one embodiment, the method includes receiving a call request from a sending endpoint communication device connected to a network. Caller identification information associated with the call request is obtained, where the caller identification information includes one or more identifications associated with the sending endpoint communication device. Each identification specifies the same content in a different language. The call request and at least one of the identifications are transmitted over the network to be received by receiving endpoint communication devices connected to the network. Each receiving endpoint communication device can output at least one of the received identifications specified in a language designated for use by that receiving endpoint communication device.
    Type: Grant
    Filed: April 15, 2012
    Date of Patent: November 4, 2014
    Assignee: Avaya Inc.
    Inventors: Mohan Vinayak Phadnis, Sreekanth Subrahmanya Nemani
  • Patent number: 8880389
    Abstract: A method, computer program product and system are disclosed for determining the semantic density of textualized digital media (a measure of how much information is conveyed in a sentence or clause relative to its length). The more semantically dense text is, the more information it conveys in a given space. Users input a topic, a timeline, and one or more target web media sources for analysis. Text in the target media sources is deconstructed to determine density, and a density rating assigned to the web media source. Over time, users can track trends in the density of text media relative to a given topic, and determine how much information is being conveyed in connection with the topic, such as a political campaign. Line graphs, pie charts, and other time-elapsed output graphic representations of the semantic density are generated and rendered for the user.
    Type: Grant
    Filed: December 9, 2011
    Date of Patent: November 4, 2014
    Inventor: Igor Iofinov
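A crude proxy for the density measure described above is the fraction of content words in a sentence. Both the stopword list and the metric itself are illustrative assumptions, not the patented deconstruction method.

```python
STOPWORDS = {"the", "a", "an", "is", "of", "to", "and", "in", "it", "that"}

def semantic_density(sentence):
    """Fraction of words in the sentence that carry content
    (i.e., are not stopwords); higher means denser."""
    words = [w.strip(".,!?").lower() for w in sentence.split()]
    if not words:
        return 0.0
    content = [w for w in words if w not in STOPWORDS]
    return len(content) / len(words)
```

Tracking this ratio over a timeline of documents on a topic would give the kind of time-elapsed trend graphs the abstract mentions.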
  • Publication number: 20140324412
    Abstract: A translation device includes input and output units and identifies, for each of them, a communication type utilized in the relevant input and output unit. When any one of the input and output units has obtained a message, the communication type identified for this input and output unit is detected, and the message is translated from one having the detected communication type into one having another identified communication type. The translated message is output to the input and output unit associated with the other identified communication type.
    Type: Application
    Filed: October 12, 2012
    Publication date: October 30, 2014
    Applicant: NEC CASIO MOBILE COMMUNICATIONS, LTD.
    Inventor: Shinichi Itamoto
  • Patent number: 8874426
    Abstract: A method of translating a computer generated log output message from a first language to a second language, including receiving a log output containing a plurality of messages in a first language and matching words and phrases in the log output messages to pre-established codes in a matched message index. Ambiguous matches are resolved by removing codes matched to ones of the words and phrases that have overlap with words and phrases matched to different codes. The codes in the matched message index are translated into a second language different than the first language to a corresponding second log output message in the second language and then the second log output message is output in the second language.
    Type: Grant
    Filed: June 30, 2009
    Date of Patent: October 28, 2014
    Assignee: International Business Machines Corporation
    Inventor: Hugh P. Williams
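The matching-and-disambiguation step in the abstract above can be sketched as greedy longest-phrase-first matching, where a shorter phrase overlapping an already-matched span is discarded. The message index below is an invented example; a real index would be pre-established per product.

```python
# Invented example index mapping message phrases to translation codes.
MESSAGE_INDEX = {"disk full": "E100", "disk": "E101", "full backup failed": "E200"}

def match_codes(message):
    """Return codes for index phrases found in the message. Phrases are
    tried longest-first; a phrase overlapping an already-matched span is
    dropped, resolving ambiguous matches in favor of the longer phrase."""
    matches = []
    for phrase in sorted(MESSAGE_INDEX, key=len, reverse=True):
        start = message.find(phrase)
        if start == -1:
            continue
        span = (start, start + len(phrase))
        if all(span[1] <= s or span[0] >= e for s, e in (m[0] for m in matches)):
            matches.append((span, MESSAGE_INDEX[phrase]))
    return [code for _, code in sorted(matches)]
```

The resulting codes, not the raw words, are what would then be rendered into the second language.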
  • Publication number: 20140316763
    Abstract: A computer implemented method for performing sign language translation based on movements of a user is provided. A capture device detects motions defining gestures and detected gestures are matched to signs. Successive signs are detected and compared to a grammar library to determine whether the signs assigned to gestures make sense relative to each other and to a grammar context. Each sign may be compared to previous and successive signs to determine whether the signs make sense relative to each other. The signs may further be compared to user demographic information and a contextual database to verify the accuracy of the translation. An output of the match between the movements and the sign is provided.
    Type: Application
    Filed: April 24, 2014
    Publication date: October 23, 2014
    Applicant: Microsoft Corporation
    Inventor: John Tardif
  • Patent number: 8868419
    Abstract: A text content summary is created from speech content. A focus more signal is issued by a user while receiving the speech content. The focus more signal is associated with a time window, and the time window is associated with a part of the speech content. It is determined whether to use the part of the speech content associated with the time window to generate a text content summary based on a number of the focus more signals that are associated with the time window. The user may express relative significance to different portions of speech content, so as to generate a personal text content summary.
    Type: Grant
    Filed: August 22, 2011
    Date of Patent: October 21, 2014
    Assignee: Nuance Communications, Inc.
    Inventors: Bao Hua Cao, Le He, Xing Jin, Qing Bo Wang, Xin Zhou
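The windowed counting of "focus more" signals can be sketched as below; the window boundaries, threshold, and function names are assumptions for illustration:

```python
def select_segments(signal_times, windows, threshold=2):
    """Given timestamps of 'focus more' signals and (start, end) time
    windows over the speech content, return the windows that received at
    least `threshold` signals; those segments would feed the text summary.
    (Sketch; names and threshold are assumed.)"""
    selected = []
    for start, end in windows:
        count = sum(1 for t in signal_times if start <= t < end)
        if count >= threshold:
            selected.append((start, end))
    return selected

windows = [(0, 60), (60, 120), (120, 180)]
picked = select_segments([5, 10, 70, 130, 140, 150], windows, threshold=2)
```

Here the first and third windows collect enough signals to be summarized, while the middle one is skipped.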
  • Publication number: 20140309982
    Abstract: Methods and systems for an improved navigation environment are provided. The navigation system can route users to preferred locations based on user profile data and past experience with the present driver and other drivers. The system provides more cost-effective and time-sensitive routing by incorporating other information about destinations. Further, the navigation system provides enhanced guidance in foreign or unfamiliar locations by incorporating experience from other drivers and other data.
    Type: Application
    Filed: April 15, 2014
    Publication date: October 16, 2014
    Applicant: Flextronics AP, LLC
    Inventor: Christopher P. Ricci
  • Patent number: 8862478
    Abstract: In conventional network-type speech translation systems, devices and models for recognizing or synthesizing speech cannot be changed in accordance with speakers' attributes, so accuracy is reduced or inappropriate output occurs in each of the speech recognition, translation, and speech synthesis processes. In a network-type speech translation system, the accuracy of each of these processes is improved and appropriate output is produced by, based on speaker attributes, appropriately changing the server or model used for speech recognition, the server or model used for translation, or the server or model used for speech synthesis.
    Type: Grant
    Filed: March 3, 2010
    Date of Patent: October 14, 2014
    Assignee: National Institute of Information and Communications Technology
    Inventors: Satoshi Nakamura, Eiichiro Sumita, Yutaka Ashikari, Noriyuki Kimura, Chiori Hori
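The attribute-based selection of servers and models can be sketched as a registry lookup; the attribute names, registry keys, and resource identifiers below are invented for illustration:

```python
def pick_resources(attributes, registry, default):
    """Choose speech-recognition (asr), translation (mt), and synthesis
    (tts) resources based on speaker attributes such as dialect or age
    band, falling back to generic resources. Purely illustrative; the
    keys and attribute names are assumptions."""
    key = (attributes.get("dialect"), attributes.get("age_band"))
    return registry.get(key, default)

registry = {
    ("kansai", "adult"): {"asr": "asr-kansai", "mt": "mt-generic", "tts": "tts-adult"},
    ("standard", "child"): {"asr": "asr-child", "mt": "mt-generic", "tts": "tts-child"},
}
default = {"asr": "asr-generic", "mt": "mt-generic", "tts": "tts-generic"}
chosen = pick_resources({"dialect": "kansai", "age_band": "adult"}, registry, default)
```

A speaker with no recognized attributes simply gets the default resource set.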
  • Patent number: 8862462
    Abstract: A vehicle communication system is provided and may include at least one communication device that audibly communicates information within the vehicle. A controller may receive a character string from an external device and may determine if the character string represents an emoticon. The controller may translate the character string into a face description if the character string represents an emoticon and may audibly communicate the face description via the at least one communication device.
    Type: Grant
    Filed: December 9, 2011
    Date of Patent: October 14, 2014
    Assignee: Chrysler Group LLC
    Inventor: Stephen L. Hyde
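The emoticon-to-face-description translation can be sketched as a table lookup over character strings; the mapping below is an illustrative assumption, not the patent's table:

```python
def describe_emoticon(text, table=None):
    """Replace emoticon character strings in a message with spoken face
    descriptions, so that text-to-speech can read them aloud. The default
    table is invented for illustration."""
    table = table or {":-)": "smiling face", ":-(": "sad face", ";-)": "winking face"}
    for emoticon, description in table.items():
        text = text.replace(emoticon, description)
    return text

spoken = describe_emoticon("See you soon :-)")
```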
  • Publication number: 20140297257
    Abstract: Disclosed herein is a motion sensor-based portable automatic interpretation apparatus and control method thereof, which can precisely detect the start time and the end time of utterance of a user in a portable automatic interpretation system, thus improving the quality of the automatic interpretation system. The motion sensor-based portable automatic interpretation apparatus includes a motion sensing unit for sensing a motion of the portable automatic interpretation apparatus. An utterance start time detection unit detects an utterance start time based on an output signal of the motion sensing unit. An utterance end time detection unit detects an utterance end time based on an output signal of the motion sensing unit after the utterance start time has been detected.
    Type: Application
    Filed: October 29, 2013
    Publication date: October 2, 2014
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jong-Hun SHIN, Young-Kil KIM, Chang-Hyun KIM, Young-Ae SEO, Seong-Il YANG, Jin-Xia HUANG, Seung-Hoon NA, Oh-Woog KWON, Ki-Young LEE, Yoon-Hyung ROH, Sung-Kwon CHOI, Sang-Keun JUNG, Yun JIN, Eun-Jin PARK, Sang-Kyu PARK
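The motion-based detection of utterance start and end times can be sketched as threshold crossings on a motion signal; the thresholds and the raise/lower semantics are assumptions for illustration:

```python
def detect_utterance(motion_samples, raise_threshold=0.8, lower_threshold=0.2):
    """Detect utterance start/end indices from motion-sensor samples
    (e.g. tilt magnitude per frame): start when the device is raised past
    one threshold, end when it is lowered below another. Thresholds and
    semantics are illustrative assumptions."""
    start = end = None
    for i, m in enumerate(motion_samples):
        if start is None and m >= raise_threshold:
            start = i
        elif start is not None and m <= lower_threshold:
            end = i
            break
    return start, end

start, end = detect_utterance([0.0, 0.1, 0.9, 0.95, 0.9, 0.1, 0.0])
```

Gating speech capture on these indices avoids recording before the user has raised the device or after they have lowered it.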
  • Patent number: 8843360
    Abstract: Disclosed are various embodiments for client-side internationalization of network pages. A network page and code that localizes the network page are obtained from a server. The code that localizes the network page is executed in a client and determines a locale associated with the client. One or more internationalized elements are identified in the network page. The internationalized elements are replaced with corresponding localized translations. The network page is rendered for display in the client after the network page has been localized.
    Type: Grant
    Filed: March 4, 2011
    Date of Patent: September 23, 2014
    Assignee: Amazon Technologies, Inc.
    Inventors: Simon K. Johnston, Margaux Eng, James K. Keiger, Gideon Shavit
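The replacement of internationalized elements with localized translations can be sketched as below; the `{{msg:KEY}}` placeholder syntax and the catalog layout are assumptions for illustration (the patent's code runs client-side, but the substitution logic is the same):

```python
def localize_page(page, locale, translations):
    """Replace internationalized placeholders of the form {{msg:KEY}} in a
    network page with the locale's translation. Placeholder syntax is an
    illustrative assumption."""
    catalog = translations.get(locale, {})
    out = page
    for key, text in catalog.items():
        out = out.replace("{{msg:%s}}" % key, text)
    return out

translations = {"de": {"greeting": "Hallo", "cart": "Warenkorb"}}
html = "<h1>{{msg:greeting}}</h1><a>{{msg:cart}}</a>"
localized = localize_page(html, "de", translations)
```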
  • Publication number: 20140278347
    Abstract: A computer-implemented technique includes receiving, at a server, a source code for a computer application executable by a computing device. The server extracts one or more translatable messages in a source language from the source code. The server inserts a hidden unique identifier for each of the one or more translatable messages to obtain a first modified source code, wherein each specific hidden unique identifier is operable to identify a corresponding specific translatable message during execution of the first modified source code by the computing device. The server transmits the first modified source code to the computing device. The server receives one or more translated messages from the computing device, the one or more translated messages being in a target language and having been input at the computing device in response to execution of the first modified source code. The server then outputs the one or more translated messages.
    Type: Application
    Filed: March 15, 2013
    Publication date: September 18, 2014
    Inventors: Yongping Luo, Chenjun Wu
  • Publication number: 20140278346
    Abstract: A device is configured to receive a translation query that requests a translation of terms from a source language to a target language; determine translation features associated with the translation query; assign a feature value to each of the translation features to form feature values; apply a feature weight to each of the feature values to generate a final value; and determine whether to provide a dialog translation user interface or a non-dialog translation user interface based on whether the final value satisfies a threshold. The dialog translation user interface may facilitate translation of a conversation, the non-dialog translation user interface may provide translation search results, and the non-dialog translation user interface may be different than the dialog translation user interface. The device also configured to provide the dialog translation user interface for display when the final value satisfies the threshold.
    Type: Application
    Filed: March 15, 2013
    Publication date: September 18, 2014
    Applicant: Google Inc.
    Inventors: Asaf ZOMET, Gal CHECHIK, Michael SHYNAR
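The feature-weighted thresholding that selects between the dialog and non-dialog interfaces can be sketched as below; the feature names, weights, and threshold are invented for illustration:

```python
def choose_interface(features, weights, threshold=0.5):
    """Score a translation query: each feature value is multiplied by its
    weight and summed into a final value; if the final value satisfies
    the threshold, the dialog translation UI is shown, otherwise the
    non-dialog UI. Features and weights are illustrative assumptions."""
    final = sum(weights.get(name, 0.0) * value for name, value in features.items())
    return "dialog" if final >= threshold else "non-dialog"

weights = {"is_question": 0.4, "second_person": 0.3, "short_query": 0.2}
ui = choose_interface({"is_question": 1, "second_person": 1}, weights)
```

A query that looks conversational (a question addressed to someone) clears the threshold; a bare lookup does not.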
  • Patent number: 8838434
    Abstract: Techniques disclosed herein include systems and methods for creating a bootstrap call router for other languages by using selected N-best translations. Techniques include using N-best translations from a machine translation system so as to increase the possibility that desired keywords in a target language are covered in the machine translation output. A 1-best translation is added to a new text corpus. This is followed by selecting a subset that provides a varied set of translations for a given source transcribed utterance for better translation coverage. Additional translations are added to the new text corpus based on a measure of possible translations having words not yet seen for the selected transcribed utterances, and also based on possible translations having words that are not yet associated with many semantic tags in the new text corpus. Candidate translations can be selected from a window of N-best translations calculated based on machine translation accuracy.
    Type: Grant
    Filed: July 29, 2011
    Date of Patent: September 16, 2014
    Assignee: Nuance Communications, Inc.
    Inventor: Ding Liu
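The subset-selection step, favoring translations that contribute words not yet seen in the new text corpus, can be sketched as a greedy novelty search. Greedy word-novelty scoring is an illustrative stand-in for the patent's selection criteria:

```python
def select_translations(nbest, corpus_vocab, k=2):
    """From an N-best translation list, greedily pick up to k candidates
    that each add the most words not yet seen in the new text corpus,
    widening keyword coverage. (Sketch; the real criteria also weigh
    semantic tags and translation accuracy.)"""
    seen = set(corpus_vocab)
    chosen = []
    remaining = list(nbest)
    for _ in range(k):
        best = max(remaining, key=lambda s: len(set(s.split()) - seen), default=None)
        if best is None or not (set(best.split()) - seen):
            break
        chosen.append(best)
        seen |= set(best.split())
        remaining.remove(best)
    return chosen

nbest = ["check my balance", "show my account balance", "balance please"]
picked = select_translations(nbest, corpus_vocab={"please"}, k=2)
```

The candidate with the most unseen words is taken first; later picks are scored against the growing vocabulary, so near-duplicates contribute little and are passed over.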
  • Patent number: 8831945
    Abstract: A text in a corpus including a set of world wide web (web) pages is analyzed. At least one word appropriate for a document type set according to a voice recognition target is extracted based on the analysis result. A word set is generated from the extracted words. A retrieval engine is caused to perform a retrieval process on the Internet using the generated word set as a retrieval query, and a link to a web page is acquired from the retrieval result. A language model for voice recognition is generated from the acquired web page.
    Type: Grant
    Filed: October 12, 2011
    Date of Patent: September 9, 2014
    Assignee: NEC Informatec Systems, Ltd.
    Inventors: Kazuhiro Arai, Tadashi Emori
  • Patent number: 8831929
    Abstract: Methods, systems, and apparatus, including computer program products, in which an input method editor receives composition inputs and determines language context values based on the composition inputs. Candidate selections based on the language context values and the composition inputs are identified.
    Type: Grant
    Filed: September 23, 2013
    Date of Patent: September 9, 2014
    Assignee: Google Inc.
    Inventor: Feng Hong
  • Publication number: 20140249798
    Abstract: A translation system is connected with at least two translation service providers through the internet and includes an image capture unit, a data transmission unit, a comparing unit, and a display unit. A translation method is achieved by the foregoing translation system and includes the steps of: capturing original data of original documents as image data by means of the image capture unit; transmitting the image data to an image recognition unit and utilizing the image recognition unit to convert the image data into text data; transmitting the text data to the translation service providers for translation; utilizing the comparing unit to compare the translation results and determine the best translation result according to the number of identical words occurring across the compared translation results; and displaying the best translation result on the display unit. Thus, the user can review the translation results quickly and easily.
    Type: Application
    Filed: March 4, 2013
    Publication date: September 4, 2014
    Applicant: Foxlink Image Technology Co., Ltd.
    Inventors: Chun Yuan Sun, Chi Wen Chen
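The comparison step, choosing the provider result that shares the most words with the other results, can be sketched as below; the word-level overlap count is a simple illustrative reading of the abstract's criterion:

```python
def best_translation(results):
    """Pick the best of several providers' translations by counting, for
    each result, how many of its word occurrences also appear in the
    other results; the highest-overlap result wins. Returns the index of
    that result. (Illustrative sketch of the comparison.)"""
    def overlap(i):
        words = results[i].lower().split()
        others = [w for j, r in enumerate(results) if j != i
                  for w in r.lower().split()]
        return sum(1 for w in words if w in others)
    return max(range(len(results)), key=overlap)

results = ["the cat sat on the mat",
           "the cat sat upon the mat",
           "a feline rested on a rug"]
best = best_translation(results)
```

The result most corroborated by the other providers is selected, on the assumption that agreement across independent engines correlates with quality.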
  • Patent number: 8825470
    Abstract: A system and method of providing a response with different language options for a data communication protocol, such as Session Initiation Protocol, are disclosed. For example, data communication is controlled between at least two endpoints. A response code indicative of a condition of the data communication is transmitted to one of the at least two endpoints. The response code is associated with a reason phrase operable to be displayed at the one of the at least two endpoints in a language selected from an option of a plurality of languages.
    Type: Grant
    Filed: September 27, 2007
    Date of Patent: September 2, 2014
    Assignee: Siemens Enterprise Communications Inc.
    Inventors: Mallikarjuna Samayamantry Rao, Dennis Kucmerowski
  • Patent number: 8825474
    Abstract: In one example, a device includes at least one processor and at least one module operable by the at least one processor to output, for display, a graphical user interface including a graphical keyboard and one or more text suggestion regions, and select, based at least in part on an indication of gesture input, at least one key of the graphical keyboard. The at least one module is further operable by the at least one processor to determine a plurality of candidate character strings, determine past interaction data that comprises a representation of a past user input corresponding to at least one candidate character string while the at least one candidate character string was previously displayed in at least one of the one or more text suggestion regions, and output the at least one candidate character string for display in one of the one or more text suggestion regions.
    Type: Grant
    Filed: June 4, 2013
    Date of Patent: September 2, 2014
    Assignee: Google Inc.
    Inventors: Shumin Zhai, Philip Quinn
  • Patent number: 8825468
    Abstract: An apparatus includes a monocular display with a wireless communications interface, user input device, transmitter, and controller, and may provide a video link to and control and management of a host device and other devices, such as a cell phone, computer, laptop, or media player. The apparatus may receive speech and digitize it. The apparatus may compare the digitized speech in a first language to a table of digitized speech in a second language to provide translation or, alternatively, may compare the digitized speech to a table of control commands. The control commands allow user interaction with the apparatus or other remote devices in a visual and audio manner. The control signals control a “recognized persona” or avatar stored in a memory to provide simulated human attributes to the apparatus, network or third party communication device. The avatar may be changed or upgraded according to user choice.
    Type: Grant
    Filed: July 31, 2008
    Date of Patent: September 2, 2014
    Assignee: Kopin Corporation
    Inventors: Jeffrey J. Jacobsen, Stephen A. Pombo
  • Patent number: 8818791
    Abstract: A computer-implemented technique includes receiving a first input from a user at a user device, the first input including a first word of a first alphabet-based language that is a transliteration of a non-alphabet-based language, the latter being either a logogram-based or a syllabogram-based language. The technique then compares the first word to pluralities of potential translated words from one or more datastores associated with a second alphabet-based language and the logogram-based or syllabogram-based language. The technique may then generate a probability score for each of the potential translated words, the probability score indicating the likelihood of an appropriate translation. The technique may then provide the user with some or all of the potential translated words, and the user may select an appropriate translated word; the selected word may then be displayed via a display of the user device.
    Type: Grant
    Filed: April 30, 2012
    Date of Patent: August 26, 2014
    Assignee: Google Inc.
    Inventors: Xiangye Xiao, Fan Yang, Hanping Feng, Shijun Tian, Yuanbo Zhang
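The probability scoring of candidate words for a transliterated input can be sketched as a noisy-channel-style product of a word prior and a transliteration likelihood. All probabilities, words, and model shapes below are invented for illustration; the patent's datastores are richer:

```python
def rank_candidates(input_translit, candidates, translit_model):
    """Rank candidate words for a transliterated input by the product of
    a word prior and a transliteration likelihood (a common noisy-channel
    scoring). Returns (word, score) pairs, best first. Illustrative
    assumptions throughout."""
    scored = sorted(
        ((word, prior * translit_model.get((input_translit, word), 0.0))
         for word, prior in candidates.items()),
        key=lambda kv: kv[1], reverse=True)
    return scored

# Hypothetical candidates for the pinyin input "ma" with assumed priors.
priors = {"妈": 0.5, "马": 0.3, "吗": 0.2}
model = {("ma", "妈"): 0.9, ("ma", "马"): 0.9, ("ma", "吗"): 0.9}
ranked = rank_candidates("ma", priors, model)
```

The ranked list would then be shown to the user, who selects the intended word for display.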