Translation Patents (Class 704/277)
  • Patent number: 10367861
    Abstract: In one embodiment, a computer-program product embodied in a non-transitory computer-readable medium is provided that is programmed to manage a digital audio conference including a plurality of conference units, each conference unit including a microphone. The computer-program product includes instructions to receive first information corresponding to a layout of a venue that facilitates an audio conference for users of the plurality of conference units. The computer-program product further includes instructions to store second information corresponding to an arrangement of a plurality of seats in the venue and to associate a first conference unit of the plurality of conference units with a first seat of the plurality of seats.
    Type: Grant
    Filed: July 11, 2014
    Date of Patent: July 30, 2019
    Assignee: Harman International Industries, Inc.
    Inventors: Rudresha T. Shetty, Raghunandan Ghagarvale
  • Patent number: 10339958
    Abstract: Detecting and monitoring legacy devices (such as appliances in a home) using audio sensing is disclosed. Methods and systems are provided for transforming audio data captured by the sensor to afford privacy when speech is overheard by the sensor. Because these transformations may negatively impact the ability to detect/monitor devices, an effective transformation is determined based on both privacy and detectability concerns.
    Type: Grant
    Filed: September 9, 2016
    Date of Patent: July 2, 2019
    Assignee: ARRIS Enterprises LLC
    Inventors: Anthony J. Braskich, Venugopal Vasudevan
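    As a rough illustration of the privacy/detectability trade-off described in the abstract above, the sketch below (not from the patent; the candidate transforms, scores, and threshold are invented for illustration) picks the transformation with the best device-detection score among those that obscure speech strongly enough:

        # Sketch: pick the audio transformation with the best device-detection score
        # among the transforms that still obscure overheard speech strongly enough.
        # Candidate transforms and their scores are illustrative assumptions.

        candidates = [
            # (name, privacy_score 0..1, detectability_score 0..1)
            ("raw_audio",         0.05, 0.98),
            ("low_pass_500hz",    0.60, 0.90),
            ("spectral_envelope", 0.85, 0.80),
            ("random_noise_only", 1.00, 0.10),
        ]

        PRIVACY_FLOOR = 0.8  # minimum required level of speech obscuring

        eligible = [c for c in candidates if c[1] >= PRIVACY_FLOOR]
        best = max(eligible, key=lambda c: c[2])
        print(best[0])  # -> spectral_envelope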
  • Patent number: 10235359
    Abstract: Inferring a natural language grammar is based on providing natural language understanding (NLU) data with concept annotations according to an application ontology characterizing a relationship structure between application-related concepts for a given NLU application. An application grammar is then inferred from the concept annotations and the application ontology.
    Type: Grant
    Filed: July 15, 2013
    Date of Patent: March 19, 2019
    Assignee: Nuance Communications, Inc.
    Inventors: Réal Tremblay, Jerome Tremblay, Stephen Douglas Peters, Serge Robillard
  • Patent number: 10140887
    Abstract: Techniques described herein relate to generating braille output and/or visual display output based on received mathematical expression input. Data corresponding to one or more mathematical expressions may be received via expression input devices or visual display devices, and may be converted to braille output characters for display on refreshable braille devices. Additionally, mathematical expression input data may be received via refreshable braille display devices and converted to output characters for display on visual display devices. In some embodiments, mathematical expression input data may be converted first to content markup, and then converted from the content markup to presentation markup and/or braille output characters.
    Type: Grant
    Filed: September 17, 2015
    Date of Patent: November 27, 2018
    Assignee: PEARSON EDUCATION, INC.
    Inventor: Samuel Sean Dooley
  • Patent number: 10126908
    Abstract: An improved solution for portlets is provided. In an embodiment of the invention, a method of automatically configuring a portlet includes: receiving a portlet; searching content of the portlet for a contextual aspect; and automatically applying attribute information to a portlet window object based on a discovered contextual aspect.
    Type: Grant
    Filed: August 10, 2015
    Date of Patent: November 13, 2018
    Assignee: International Business Machines Corporation
    Inventors: Al Chakra, Adam R. Cook, Ryan E. Smith
  • Patent number: 10121468
    Abstract: Disclosed herein are systems, methods, and computer-readable storage media for a speech recognition application for directory assistance that is based on a user's spoken search query. The spoken search query is received by a portable device, and the portable device then determines its present location. Upon determining the location of the portable device, that information is incorporated into a local language model that is used to process the search query. Finally, the portable device outputs the results of the search query based on the local language model.
    Type: Grant
    Filed: June 15, 2016
    Date of Patent: November 6, 2018
    Assignee: NUANCE COMMUNICATIONS, INC.
    Inventors: Enrico Bocchieri, Diamantino Antonio Caseiro
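    A minimal sketch of the location-biased language model idea described in the abstract above, assuming a toy setup in which a general unigram model is interpolated with a location-specific one (the vocabularies, counts, and interpolation weight are illustrative, not from the patent):

        # Sketch: bias a unigram language model toward terms tied to the device's
        # present location. Vocabularies, counts, and the weight are illustrative.

        def build_unigram(counts):
            total = sum(counts.values())
            return {w: c / total for w, c in counts.items()}

        def interpolate(general, local, weight=0.3):
            """P(w) = (1 - weight) * P_general(w) + weight * P_local(w)."""
            vocab = set(general) | set(local)
            return {w: (1 - weight) * general.get(w, 0.0) + weight * local.get(w, 0.0)
                    for w in vocab}

        general_lm = build_unigram({"pizza": 50, "pharmacy": 30, "ferry": 5})
        local_lm = build_unigram({"ferry": 40, "pier": 25, "pizza": 10})  # waterfront location

        lm = interpolate(general_lm, local_lm)
        print(sorted(lm.items(), key=lambda kv: -kv[1]))  # "ferry" gains probability mass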
  • Patent number: 10067934
    Abstract: A system, and a method for operating the same, includes a language processing module that generates a search request text signal and determines identified data from the search request text signal. A search module generates search results in response to the search request text signal. A dialog manager classifies the search request text signal into a response classification associated with a plurality of templates, selects a first template from the plurality of templates in response to the response classification, and corrects the search results in response to the identified data and the template to form a corrected response signal. A device receives and displays the corrected response signal.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: September 4, 2018
    Assignee: The DIRECTV Group, Inc.
    Inventors: Huy Q. Tran, Vlad Zarney, Kapil Chaudhry, Douglas T. Kuriki, Todd T. Tran, David K. Homan, An T. Lam, Michael E. Yan, Ashley B. Tarnow
  • Patent number: 10063701
    Abstract: A request to execute an interaction site associated with a custom grammars file is received from a user device and by a communications system. An interaction flow document to execute the interaction site is accessed by the communications system. The custom grammars file is accessed by the communications system, the custom grammars file being configured to enable the communications system to identify executable commands corresponding to utterances spoken by users of user devices. An utterance spoken by a user of the user device is received from the user device and by the communications system. The utterance is stored by the communications system. The custom grammars file is updated by a grammar generation system to include a representation of the stored utterance for processing utterances in subsequent communications with users.
    Type: Grant
    Filed: May 29, 2014
    Date of Patent: August 28, 2018
    Assignee: GENESYS TELECOMMUNICATIONS LABORATORIES, INC.
    Inventors: Praphul Kumar, Aaron Wellman
  • Patent number: 10049656
    Abstract: Features are disclosed for generating predictive personal natural language processing models based on user-specific profile information. The predictive personal models can provide broader coverage of the various terms, named entities, and/or intents of an utterance by the user than a personal model, while providing better accuracy than a general model. Profile information may be obtained from various data sources. Predictions regarding the content or subject of future user utterances may be made from the profile information. Predictive personal models may be generated based on the predictions. Future user utterances may be processed using the predictive personal models.
    Type: Grant
    Filed: September 20, 2013
    Date of Patent: August 14, 2018
    Assignee: Amazon Technologies, Inc.
    Inventors: William Folwell Barton, Rohit Prasad, Stephen Frederick Potter, Nikko Strom, Yuzo Watanabe, Madan Mohan Rao Jampani, Ariya Rastrow, Arushan Rajasekaram
  • Patent number: 10044854
    Abstract: Embodiments of the present invention are directed to methods for providing captioned telephone service. One method includes initiating a first captioned telephone service call. During the first captioned telephone service call, a first set of captions is created using a human captioner. Simultaneously with creating the first set of captions using the human captioner, a second set of captions is created using an automated speech recognition captioner. The first set of captions and the second set of captions are compared using a scoring algorithm. In response to the score of the second set of captions being within a predetermined range of scores, the call is continued using only the automated speech recognition captioner. In response to the score of the second set of captions being outside of the predetermined range of scores, the call is continued using the human captioner.
    Type: Grant
    Filed: April 17, 2017
    Date of Patent: August 7, 2018
    Assignee: ClearCaptions, LLC
    Inventors: Robert Lee Rae, Blaine Michael Reeve
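    A minimal sketch of the comparison-and-switch step described in the abstract above, using a simple word-overlap score as a stand-in for the patent's unspecified scoring algorithm; the threshold is an illustrative assumption:

        # Sketch: score ASR captions against human captions and decide whether the
        # call can continue with the ASR captioner alone. The overlap metric and
        # threshold are illustrative stand-ins for the patent's scoring algorithm.

        def overlap_score(human_caption, asr_caption):
            human = human_caption.lower().split()
            asr = asr_caption.lower().split()
            if not human:
                return 0.0
            return sum(1 for w in human if w in asr) / len(human)

        def choose_captioner(human_caption, asr_caption, threshold=0.9):
            score = overlap_score(human_caption, asr_caption)
            return "asr_only" if score >= threshold else "human"

        print(choose_captioner("please call me back after lunch",
                               "please call me back after lunch"))  # -> asr_only
        print(choose_captioner("please call me back after lunch",
                               "please fall me back lunch"))        # -> human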
  • Patent number: 9959270
    Abstract: A method is disclosed for determining the prosody of a tag question in human speech and preserving that prosody as the speech is translated into a different language.
    Type: Grant
    Filed: August 1, 2016
    Date of Patent: May 1, 2018
    Assignee: SPEECH MORPHING SYSTEMS, INC.
    Inventors: Fathy Yassa, Caroline Henton, Meir Friedlander
  • Patent number: 9904670
    Abstract: An apparatus and method for determining whether the meaning of a word included in an electronic message needs to be presented to a user, according to a dynamic determination whether the user currently knows the meaning of the word. In a client, a communication control unit receives a message sent between users, a morphological analysis unit extracts a word from the message, and a history acquisition unit acquires history information on viewing, usage, or the like of the word. A display determination unit determines whether the meaning of the word needs to be displayed, according to the acquired history information, the language level of a user stored in a user level storage unit, and the difficulty level of the word stored in a dictionary storage unit. An input/output control unit performs control such that the meaning of the word is presented to the user according to the determination result.
    Type: Grant
    Filed: August 29, 2013
    Date of Patent: February 27, 2018
    Assignee: International Business Machines Corporation
    Inventors: Ryoju Kamada, Ryo Kamimura, Shingo Kato, Takayuki Sato
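    A minimal sketch of the display decision described in the abstract above, assuming hypothetical numeric scales for the user's language level, word difficulty, and viewing history:

        # Sketch: decide whether a word's meaning should be shown, based on how
        # often the user has already seen the word, the user's language level, and
        # the word's difficulty level. Scales and thresholds are illustrative.

        def should_show_meaning(word, view_history, user_level, word_difficulty,
                                familiarity_cutoff=3):
            if view_history.get(word, 0) >= familiarity_cutoff:
                return False                          # seen often enough already
            return word_difficulty.get(word, 3) > user_level

        history = {"ubiquitous": 5, "ephemeral": 1}               # word -> times seen/used
        difficulty = {"ubiquitous": 4, "ephemeral": 4, "cat": 1}  # 1 easy .. 5 hard

        print(should_show_meaning("ubiquitous", history, 2, difficulty))  # False (familiar)
        print(should_show_meaning("ephemeral", history, 2, difficulty))   # True
        print(should_show_meaning("cat", history, 2, difficulty))         # False (easy word)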
  • Patent number: 9761224
    Abstract: An evaluation information posting device determines a rest state of a vehicle on the basis of rest information, and determines a facility at which the vehicle has stopped off by using position information showing a rest position of the vehicle, map information including facility information about facilities located in the area surrounding the position shown by this position information, and a keyword about a facility at the rest position of the vehicle. By using both stop-off facility information about the facility resulting from that determination and a keyword about an evaluation provided for this facility, the device generates evaluation information about the stop-off facility and posts this evaluation information to an evaluation information managing server.
    Type: Grant
    Filed: April 25, 2013
    Date of Patent: September 12, 2017
    Assignee: Mitsubishi Electric Corporation
    Inventors: Takuji Morimoto, Kiyoshi Matsutani, Shinji Akatsu, Atsushi Matsumoto, Yasutaka Konishi
  • Patent number: 9760566
    Abstract: An augmented conversational understanding agent may be provided. Upon receiving, by an agent, at least one natural language phrase from a user, a context associated with the at least one natural language phrase may be identified. The natural language phrase may be associated, for example, with a conversation between the user and a second user. An agent action associated with the identified context may be performed according to the at least one natural language phrase and a result associated with performing the action may be displayed, wherein the agent action comprises providing a list of movies, a list of night clubs, a search for restaurants, or other action suggestions.
    Type: Grant
    Filed: March 31, 2011
    Date of Patent: September 12, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Larry Paul Heck, Madhusudan Chinthakunta, David Mitby, Lisa Stifelman
  • Patent number: 9741340
    Abstract: Disclosed herein are systems, computer-implemented methods, and computer-readable media for enhancing speech recognition accuracy. The method includes dividing a system dialog turn into segments based on timing of probable user responses, generating a weighted grammar for each segment, exclusively activating the weighted grammar generated for a current segment of the dialog turn during the current segment of the dialog turn, and recognizing user speech received during the current segment using the activated weighted grammar generated for the current segment. The method can further include assigning probability to the weighted grammar based on historical user responses and activating each weighted grammar is based on the assigned probability. Weighted grammars can be generated based on a user profile. A weighted grammar can be generated for two or more segments.
    Type: Grant
    Filed: November 7, 2014
    Date of Patent: August 22, 2017
    Assignee: Nuance Communications, Inc.
    Inventor: Michael Czahor
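    A minimal sketch of exclusively activating the weighted grammar for the current dialog-turn segment, as described in the abstract above; the grammars, weights, and toy recognizer are illustrative assumptions:

        # Sketch: one weighted grammar per dialog-turn segment; only the grammar
        # for the current segment is active while recognizing the user's speech.
        # Grammars, weights, and the toy recognizer are illustrative assumptions.

        segment_grammars = {
            # segment index -> {phrase: weight derived from historical responses}
            0: {"yes": 0.6, "no": 0.3, "repeat that": 0.1},
            1: {"checking": 0.5, "savings": 0.4, "operator": 0.1},
        }

        def recognize(segment, candidate_phrases):
            active = segment_grammars[segment]     # exclusively activate this grammar
            scored = [(active.get(p, 0.0), p) for p in candidate_phrases]
            return max(scored)[1]                  # highest-weighted in-grammar phrase

        # The ASR front end produced two competing hypotheses; the active grammar
        # for segment 1 resolves the ambiguity.
        print(recognize(1, ["checking", "chickens"]))  # -> checking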
  • Patent number: 9727607
    Abstract: Various embodiments include systems and methods for generating query rewrite records which may be used to generate standardized query rewrites for a search engine. Such records may identify rewrite triggers as well as constraints and other metadata flags which may be associated with certain rewrites in query rewrite identification (QRIL) records. In certain embodiments, such records may be analyzed with other QRIL records or rewrite information to prevent rewrite conflicts and to generate standardized rewrites. This information may then be used by a search engine to generate responses to user queries.
    Type: Grant
    Filed: November 19, 2014
    Date of Patent: August 8, 2017
    Assignee: eBay Inc.
    Inventors: Prathyusha Senthil Kumar, Praveen Arasada, Ravi Chandra Jammalamadaka
  • Patent number: 9720909
    Abstract: The disclosed subject matter provides a system, computer readable storage medium, and a method providing an audio and textual transcript of a communication. A conferencing service may receive audio or audiovisual signals from a plurality of different devices that receive voice communications from participants in a communication, such as a chat or teleconference. The audio signals represent voice (speech) communications input into the respective devices by the participants. A translation services server may receive the audio signals over a separate communication channel for translation into a second language. As managed by the translation services server, the audio signals may be converted into textual data. The textual data may be translated into text of different languages based on the language preferences of the end user devices in the teleconference. The translated text may be further translated into audio signals.
    Type: Grant
    Filed: August 17, 2015
    Date of Patent: August 1, 2017
    Assignee: GOOGLE INC.
    Inventors: Trausti Kristjansson, John Huang, Yu-Kuan Lin, Hung-ying Tyan, Jakob David Uszkoreit, Joshua James Estelle, Chung-yi Wang, Kirill Buryak, Yusuke Konishi
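    A minimal sketch of the pipeline described in the abstract above (speech in, transcription, per-participant translation), with stub functions standing in for the external transcription and translation services:

        # Sketch: route a participant's speech through transcription, then translate
        # the text into every other participant's preferred language. transcribe()
        # and translate() are stubs for the external services named in the abstract.

        def transcribe(audio_bytes, language):
            return "hello everyone"                # stub speech recognition result

        def translate(text, source, target):
            if source == target:
                return text
            return "[" + target + "] " + text      # stub machine translation

        participants = {"alice": "en", "bruno": "pt", "chie": "ja"}

        def fan_out(speaker, audio_bytes):
            source = participants[speaker]
            text = transcribe(audio_bytes, source)
            return {user: translate(text, source, lang)
                    for user, lang in participants.items() if user != speaker}

        print(fan_out("alice", b"..."))
        # {'bruno': '[pt] hello everyone', 'chie': '[ja] hello everyone'}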
  • Patent number: 9558102
    Abstract: A method for testing the display of bi-directional language script prior to translation in an application under test can include using unidirectional glyphs with shaping indicators to simulate right-to-left characters. The using step can include reversing an ordering of a first set of unidirectional text characters in an input string and mapping the unidirectional text characters to right-to-left code points in a bi-directional language code page to produce a pseudo-translated string. Multiple unidirectional language glyphs can be loaded, where each corresponds to a same one of the right-to-left character code points as had been used to produce the pseudo-translation. The pseudo-translation and the glyphs can be combined to simulate right-to-left character rendering in the application under test such that the resultant output is visually similar to the input string. Finally, the glyphs can include character shaping indicia such that a resultant output allows for the detection of shaping errors.
    Type: Grant
    Filed: July 29, 2015
    Date of Patent: January 31, 2017
    Assignee: International Business Machines Corporation
    Inventors: Dale M. Schultz, Roy Hudson
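    A minimal sketch of the pseudo-translation step described in the abstract above: the Latin input string is reversed and each letter is mapped onto a right-to-left code point (arbitrarily, into the Hebrew block) so that layout problems surface before real translation; the specific mapping is an illustrative assumption:

        # Sketch: produce a pseudo-translated string for bidi layout testing by
        # reversing the input and mapping each letter onto a right-to-left code
        # point. The Hebrew-block mapping is arbitrary; any RTL code page would do.

        RTL_BASE = 0x05D0  # aleph; 0x05D0-0x05EA are right-to-left Hebrew letters

        def pseudo_translate(text):
            out = []
            for ch in reversed(text):              # reverse the character ordering
                if ch.isascii() and ch.isalpha():
                    out.append(chr(RTL_BASE + (ord(ch.lower()) - ord("a")) % 27))
                else:
                    out.append(ch)                 # keep spaces and punctuation
            return "".join(out)

        print(pseudo_translate("Save file"))  # same length as the input, rendered RTL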
  • Patent number: 9542942
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for designating certain voice commands as hotwords. The methods, systems, and apparatus include actions of receiving a hotword followed by a voice command. Additional actions include determining that the voice command satisfies one or more predetermined criteria associated with designating the voice command as a hotword, where a voice command that is designated as a hotword is treated as a voice input regardless of whether the voice command is preceded by another hotword. Further actions include, in response to determining that the voice command satisfies one or more predetermined criteria associated with designating the voice command as a hotword, designating the voice command as a hotword.
    Type: Grant
    Filed: January 21, 2016
    Date of Patent: January 10, 2017
    Assignee: Google Inc.
    Inventor: Matthew Sharifi
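    A minimal sketch of designating a voice command as a hotword once it meets predetermined criteria; the particular criteria used here (frequency of use and a length limit) are illustrative assumptions, not the patent's:

        # Sketch: once a short voice command has been used often enough, designate
        # it as a hotword so it no longer needs to be preceded by another hotword.
        # The frequency and length criteria are illustrative assumptions.

        hotwords = {"ok device"}
        command_counts = {}

        def handle_command(command, min_uses=5, max_words=3):
            command_counts[command] = command_counts.get(command, 0) + 1
            meets_criteria = (command_counts[command] >= min_uses
                              and len(command.split()) <= max_words)
            if meets_criteria:
                hotwords.add(command)              # designate the command as a hotword

        for _ in range(5):
            handle_command("turn on the lights")   # four words: never designated
            handle_command("lights on")            # designated after its fifth use

        print(sorted(hotwords))  # ['lights on', 'ok device']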
  • Patent number: 9514128
    Abstract: A system and method to facilitate translation of communications between entities over a network are described. Multiple predetermined language constructs are communicated to a first entity as a first transmission over the network. Responsive to selection by the first entity of a language construct from the predetermined language constructs, a translated language construct corresponding to the selected language construct is identified. Finally, the translated language construct is communicated to a second entity as a second transmission over the network.
    Type: Grant
    Filed: January 27, 2014
    Date of Patent: December 6, 2016
    Assignee: eBay Inc.
    Inventor: Steve Grove
  • Patent number: 9509521
    Abstract: Techniques are disclosed for providing an enhanced contextual chat feature in online environments. The contextual chat feature may be used to present users with a list of expressions that may be sent to other users within an online environment (or to users in other online environments). The list of messages may be derived from a linguistic profile which itself may change as the use of language in an online environment (or by a particular user group) evolves, over time. In cases where a user sends a contextual chat message to another user in the same online environment, messages may be sent without being altered. However, when a user selects a contextual chat message from the list to send to a user in another online environment, the message may be translated based on a linguistic profile associated with users in the second environment.
    Type: Grant
    Filed: August 30, 2010
    Date of Patent: November 29, 2016
    Assignee: Disney Enterprises, Inc.
    Inventors: Cyrus J. Hoomani, Vita Markman
  • Patent number: 9484017
    Abstract: A first speech processing device includes a first speech input unit and a first speech output unit. A second speech processing device includes a second speech input unit and a second speech output unit. In a server therebetween, a speech of a first language sent from the first speech input unit is recognized. The speech recognition result is translated into a second language. The translation result is back translated into the first language. A first speech synthesis signal of the back translation result is sent to the first speech output unit. A second speech synthesis signal of the translation result is sent to the second speech output unit. Duration of the second speech synthesis signal or the first speech synthesis signal is measured. The first speech synthesis signal and the second speech synthesis signal are outputted by synchronizing a start time and an end time thereof, based on the duration.
    Type: Grant
    Filed: September 12, 2014
    Date of Patent: November 1, 2016
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Kazuo Sumita, Akinori Kawamura, Satoshi Kamatani
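    A minimal sketch of the synchronization step described in the abstract above: given the measured durations of the two synthesized signals, compute per-signal playback-rate factors so that both start and end together (the durations are illustrative):

        # Sketch: synchronize the two synthesized signals by time-scaling the
        # shorter one so both start and end together. Durations (in seconds) are
        # purely illustrative.

        def sync_factors(duration_translation, duration_back_translation):
            """Return per-signal rate factors (> 1.0 means stretch the signal)."""
            target = max(duration_translation, duration_back_translation)
            return target / duration_translation, target / duration_back_translation

        translation_s = 3.2        # second-language synthesis for the listener
        back_translation_s = 2.4   # first-language back-translation for the speaker

        f_translation, f_back = sync_factors(translation_s, back_translation_s)
        print(f_translation, round(f_back, 3))  # 1.0 1.333 -> stretch the back-translation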
  • Patent number: 9456175
    Abstract: The present disclosure provides a caption searching method including: obtaining characteristic information of a video file to be played, and searching for a caption for the video file in a caption database according to the characteristic information, so as to generate a search result; performing, according to the search result, a voice textualization process on the video file; and updating the caption database according to a textualized caption generated by the voice textualization process, and using an updated caption in the caption database as a caption of the video file to be played. The present disclosure further provides an electronic device and storage medium for the caption searching. In this manner, the caption search is based on recognition of audio information from the video file, which increases the hit rate and reduces the error rate of caption matching.
    Type: Grant
    Filed: April 17, 2015
    Date of Patent: September 27, 2016
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventor: Gang Liu
  • Patent number: 9424597
    Abstract: In an example embodiment, text is received at an ecommerce service from a first user, the text in a first language and pertaining to a first listing on the ecommerce service. Contextual information about the first listing may be retrieved. The text may be translated to a second language. Then, a plurality of text objects, in the second language, similar to the translated text may be located in a database, each of the text objects corresponding to a listing. Then, the plurality of text objects similar to the translated text may be ranked based on a comparison of the contextual information about the first listing and contextual information stored in the database for the listings corresponding to the plurality of text objects similar to the translated text. At least one of the ranked plurality of text objects may then be translated to the first language.
    Type: Grant
    Filed: November 13, 2013
    Date of Patent: August 23, 2016
    Assignee: eBay Inc.
    Inventor: Yan Chelly
  • Patent number: 9405284
    Abstract: A converter component can efficiently manage conversion of data associated with a control system from one engineering unit (EU) type to another EU type, and/or conversion of the data from one language to another language, based at least in part on the user. The converter component can identify a user, or can receive a conversion selection(s) from the user, and can automatically select a specified subset of EU conversions and/or language conversions to employ in relation to the user, convert the data associated with the control system in accordance with the subset, and present the converted data to the user via the interface. The converter component can present a pre-populated table of EU conversions associated with the subset, and can allow a user to add or modify an EU conversion.
    Type: Grant
    Filed: January 27, 2016
    Date of Patent: August 2, 2016
    Assignee: ROCKWELL AUTOMATION TECHNOLOGIES, INC.
    Inventor: Keith M. Hogan
  • Patent number: 9373326
    Abstract: Disclosed herein are systems, methods, and computer-readable storage media for a speech recognition application for directory assistance that is based on a user's spoken search query. The spoken search query is received by a portable device, and the portable device then determines its present location. Upon determining the location of the portable device, that information is incorporated into a local language model that is used to process the search query. Finally, the portable device outputs the results of the search query based on the local language model.
    Type: Grant
    Filed: November 14, 2014
    Date of Patent: June 21, 2016
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Enrico Bocchieri, Diamantino Antonio Caseiro
  • Patent number: 9348818
    Abstract: Systems and methods of various embodiments may enable or refine translation of text between a first language and a second language. In particular, systems and methods may enable or refine a text translation by soliciting and/or receiving feedback for: translation of a first word or phrase from a first language to a second language; transformation of the first word or phrase (in the first language) to a second word or phrase in the first language; or transformation of the first word or phrase (in the first language) to a second word or phrase in the second language. The systems and methods of various embodiments may incentivize user feedback for failed translations in order to encourage user feedback, improve the quality of user feedback received, and to permit development of translation corpora that can evolve with time.
    Type: Grant
    Filed: March 20, 2014
    Date of Patent: May 24, 2016
    Assignee: Machine Zone, Inc.
    Inventors: Gabriel Leydon, Francois Orsini, Nikhil Bojja
  • Patent number: 9298703
    Abstract: Systems and methods of various embodiments may enable or refine translation of text between a first language and a second language. In particular, systems and methods may enable or refine a text translation by soliciting and/or receiving feedback for: translation of a first word or phrase from a first language to a second language; transformation of the first word or phrase (in the first language) to a second word or phrase in the first language; or transformation of the first word or phrase (in the first language) to a second word or phrase in the second language. The systems and methods of various embodiments may incentivize user feedback for failed translations in order to encourage user feedback, improve the quality of user feedback received, and to permit development of translation corpora that can evolve with time.
    Type: Grant
    Filed: June 3, 2013
    Date of Patent: March 29, 2016
    Assignee: Machine Zone, Inc.
    Inventors: Gabriel Leydon, Francois Orsini, Nikhil Bojja
  • Patent number: 9269182
    Abstract: A method for identifying entry points of a hierarchical structure having a plurality of nodes includes the operations of selecting a node of the hierarchical structure and testing it for identification as an entry point. The node is identified as an entry point, and the selection, testing, and identification operations are repeated for at least one additional node of the hierarchical structure to identify at least a second node as a respective second entry point for the hierarchical structure.
    Type: Grant
    Filed: September 5, 2008
    Date of Patent: February 23, 2016
    Assignee: NVIDIA Corporation
    Inventors: Timo Aila, Samuli Laine
  • Patent number: 9262411
    Abstract: The present disclosure relates generally to the field of socially derived translation profiles used to enhance the translation quality of social content produced by machine translation. In various embodiments, methodologies may be provided that automatically use socially derived translation profiles to enhance the translation quality of social content produced by machine translation.
    Type: Grant
    Filed: July 10, 2013
    Date of Patent: February 16, 2016
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Elizabeth V. Woodward, Shunguo Yan
  • Patent number: 9256396
    Abstract: Various embodiments provide techniques for implementing speech recognition for context switching. In at least some embodiments, the techniques can enable a user to switch between different contexts and/or user interfaces of an application via speech commands. In at least some embodiments, a context menu is provided that lists available contexts for an application that may be navigated to via speech commands. In implementations, the contexts presented in the context menu include a subset of a larger set of contexts that are filtered based on a variety of context filtering criteria. A user can speak one of the contexts presented in the context menu to cause a navigation to a user interface associated with that context.
    Type: Grant
    Filed: October 10, 2011
    Date of Patent: February 9, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Matthew J. Monson, William P. Giese, Daniel J. Greenawalt
  • Patent number: 9122655
    Abstract: A method for testing the display of bi-directional language script prior to translation in an application under test can include using unidirectional glyphs with shaping indicators to simulate right-to-left characters. The using step can include reversing an ordering of a first set of unidirectional text characters in an input string and mapping the unidirectional text characters to right-to-left code points in a bi-directional language code page to produce a pseudo-translated string. Multiple unidirectional language glyphs can be loaded, where each corresponds to a same one of the right-to-left character code points as had been used to produce the pseudo-translation. The pseudo-translation and the glyphs can be combined to simulate right-to-left character rendering in the application under test such that the resultant output is visually similar to the input string. Finally, the glyphs can include character shaping indicia such that a resultant output allows for the detection of shaping errors.
    Type: Grant
    Filed: November 15, 2004
    Date of Patent: September 1, 2015
    Assignee: International Business Machines Corporation
    Inventors: Dale M. Schultz, Roy Hudson
  • Patent number: 9086735
    Abstract: Implementations of the present disclosure provide an input method editor (IME) extension framework for extending the functionality of IMEs. In some implementations, a user input into a user interface of an IME is received and is provided to a script engine. A script is selected from a plurality of scripts electronically stored in a script repository. The user input is processed through the script using the script engine to generate one or more candidates, and the one or more candidates are provided to an IME engine. In some implementations, a script file is received, the script file being executable by an IME system to generate one or more candidates based on a user input into the IME system. The script file is electronically stored in a central registry, the central registry including a plurality of scripts, and the plurality of scripts are published for download to and installation on a user device, the user device including the IME system.
    Type: Grant
    Filed: April 12, 2010
    Date of Patent: July 21, 2015
    Assignee: Google Inc.
    Inventors: Yong-Gang Wang, Liangyi Ou, Yinfei Zhang
  • Patent number: 9075520
    Abstract: An apparatus for displaying an image in a portable terminal includes a camera to photograph the image, a touch screen to display the image and to allow selecting an object area of the displayed image, a memory to store the image, a controller to detect at least one object area within the image when displaying the image from the camera or the memory and to recognize object information of the detected object area to be converted into a voice, and an audio processing unit to output the voice.
    Type: Grant
    Filed: October 24, 2012
    Date of Patent: July 7, 2015
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hyunmi Park, Sanghyuk Koh
  • Patent number: 9053096
    Abstract: Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to automatically translate utterances from a first to a second language, based on speaker-related information determined from speaker utterances and/or other sources of information. In one embodiment, the AEFS receives data that represents an utterance of a speaker in a first language, the utterance obtained by a hearing device of the user, such as a hearing aid, smart phone, media player/device, or the like. The AEFS then determines speaker-related information associated with the identified speaker, such as by determining demographic information (e.g., gender, language, country/region of origin) and/or identifying information (e.g., name or title) of the speaker. The AEFS translates the utterance in the first language into a message in a second language, based on the determined speaker-related information.
    Type: Grant
    Filed: December 29, 2011
    Date of Patent: June 9, 2015
    Assignee: Elwha LLC
    Inventors: Richard T. Lord, Robert W. Lord, Nathan P. Myhrvold, Clarence T. Tegreene, Roderick A. Hyde, Lowell L. Wood, Jr., Muriel Y. Ishikawa, Victoria Y. H. Wood, Charles Whitmer, Paramvir Bahl, Douglas C. Burger, Ranveer Chandra, William H. Gates, III, Paul Holman, Jordin T. Kare, Craig J. Mundie, Tim Paek, Desney S. Tan, Lin Zhong, Matthew G. Dyor
  • Patent number: 9047454
    Abstract: A biological information authentication device is provided with a biological information memory means, a user group information confirmation means, a biological information registering means, and an authentication unit. The user group information is information representing a trust relationship among a plurality of users; the biological information memory unit associates each piece of biological information extracted from the plurality of users with the user group information and stores them. The user group information confirmation unit receives a determination as to whether or not a trust relationship exists among the plurality of users from whom the biological information is extracted and confirms the relationship between the users. The biological information registering unit matches the user group information and stores each piece of biological information extracted from each user in the biological information memory means.
    Type: Grant
    Filed: July 4, 2011
    Date of Patent: June 2, 2015
    Assignee: BLD Oriental Co., Ltd.
    Inventor: Yasushi Ochi
  • Patent number: 9043213
    Abstract: A speech recognition method including the steps of receiving a speech input from a known speaker as a sequence of observations and determining the likelihood of a sequence of words arising from the sequence of observations using an acoustic model. The acoustic model has a plurality of model parameters describing probability distributions which relate a word or part thereof to an observation, and has been trained using first training data and adapted to said speaker using second training data. The speech recognition method also determines the likelihood of a sequence of observations occurring in a given language using a language model, combines the likelihoods determined by the acoustic model and the language model, and outputs a sequence of words identified from said speech input signal. The acoustic model is context based for the speaker, the context-based information being contained in the model using a plurality of decision trees, and the structure of the decision trees is based on the second training data.
    Type: Grant
    Filed: January 26, 2011
    Date of Patent: May 26, 2015
    Assignee: Kabushiki Kaisha Toshiba
    Inventor: Byung Ha Chun
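    The abstract describes combining acoustic-model and language-model likelihoods to select an output word sequence. A minimal sketch of that combination in the log domain, with an illustrative language-model scale factor and made-up scores:

        # Sketch: combine acoustic and language model scores in the log domain and
        # pick the best word sequence: score(W) = log P(O|W) + lm_weight * log P(W).
        # All numbers and the weight are illustrative.

        hypotheses = [
            # (word sequence, acoustic log-likelihood, language-model log-probability)
            ("recognize speech",   -120.0, -4.0),
            ("wreck a nice beach", -118.0, -9.5),
        ]

        LM_WEIGHT = 2.0  # language-model scale factor; the value is arbitrary here

        def combined_score(hypothesis):
            words, acoustic, lm = hypothesis
            return acoustic + LM_WEIGHT * lm

        best = max(hypotheses, key=combined_score)
        print(best[0])  # -> recognize speech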
  • Patent number: 9043212
    Abstract: A messaging response system is disclosed wherein a service providing system provides services to users via messaging communications. In accordance with an exemplary embodiment of the present invention, multiple respondents servicing users through messaging communications may appear to simultaneously use a common “screen name” identifier.
    Type: Grant
    Filed: April 2, 2003
    Date of Patent: May 26, 2015
    Assignee: VERIZON PATENT AND LICENSING INC.
    Inventors: Richard G. Moore, Gregory L. Mumford, Duraisamy Gunasekar
  • Patent number: 9031849
    Abstract: A system for providing a multi-language conference is provided. The system includes conference terminals and a multipoint control unit. The conference terminals are adapted to process speech from a conference site, transmit the processed speech to the multipoint control unit, and process and output audio data received from the multipoint control unit. At least one of the conference terminals is an interpreting terminal adapted to interpret the speech of the conference according to the audio data transmitted from the multipoint control unit, process the interpreted audio data, and output the processed audio data. The multipoint control unit is adapted to perform a sound mixing process on the audio data from the conference terminals in different sound channels according to language types, and then send the mixed audio data to the conference terminals.
    Type: Grant
    Filed: March 27, 2009
    Date of Patent: May 12, 2015
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Zhihui Liu, Zhonghui Yue
  • Patent number: 9031840
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving (i) audio data that encodes a spoken natural language query, and (ii) environmental audio data, obtaining a transcription of the spoken natural language query, determining a particular content type associated with one or more keywords in the transcription, providing at least a portion of the environmental audio data to a content recognition engine, and identifying a content item that has been output by the content recognition engine, and that matches the particular content type.
    Type: Grant
    Filed: December 27, 2013
    Date of Patent: May 12, 2015
    Assignee: Google Inc.
    Inventors: Matthew Sharifi, Gheorghe Postelnicu
  • Patent number: 9031827
    Abstract: The present invention relates to a new method and system for use of a multi-protocol conference bridge, and more specifically a new multi-language conference bridge system and method of use where different cues, such as an attenuated voice of the original non-interpreted speaker, are used to improve the flow of information over the system.
    Type: Grant
    Filed: November 30, 2012
    Date of Patent: May 12, 2015
    Assignee: Zip DX LLC
    Inventors: David Paul Frankel, Barry Slaughter Olsen
  • Patent number: 9020818
    Abstract: Implementations of systems, method and devices described herein enable enhancing the intelligibility of a target voice signal included in a noisy audible signal received by a hearing aid device or the like. In particular, in some implementations, systems, methods and devices are operable to generate a machine readable formant based codebook. In some implementations, the method includes determining whether or not a candidate codebook tuple includes a sufficient amount of new information to warrant either adding the candidate codebook tuple to the codebook or using at least a portion of the candidate codebook tuple to update an existing codebook tuple. Additionally and/or alternatively, in some implementations systems, methods and devices are operable to reconstruct a target voice signal by detecting formants in an audible signal, using the detected formants to select codebook tuples, and using the formant information in the selected codebook tuples to reconstruct the target voice signal.
    Type: Grant
    Filed: August 20, 2012
    Date of Patent: April 28, 2015
    Assignee: Malaspina Labs (Barbados) Inc.
    Inventors: Pierre Zakarauskas, Alexander Escott, Clarence S. H. Chu, Shawn E. Stevenson
  • Patent number: 9020803
    Abstract: A method, system, and computer program product for creating confidence-rated transcription and translation are provided in the illustrative embodiments. An input is provided in a first form to a set of transcription applications. A set of transcriptions is received. A first and a second set of confidence ratings are assigned to a first and a second transcription, respectively. The confidence-rated first transcription and the confidence-rated second transcription are combined and provided to a set of translation applications. A set of translations is received. A third and a fourth set of confidence ratings are assigned to a first and a second translation, respectively. The confidence-rated first and second translations are combined and presented.
    Type: Grant
    Filed: September 20, 2012
    Date of Patent: April 28, 2015
    Assignee: International Business Machines Corporation
    Inventors: William S. Carter, Brian J. Cragun
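    A minimal sketch of combining two confidence-rated transcriptions word by word before handing the result to translation; the per-word confidences and the pick-the-higher-confidence rule are illustrative assumptions:

        # Sketch: merge two transcriptions of the same input by keeping, at each
        # word position, the word with the higher confidence rating. Confidences
        # are illustrative; real systems would also align sequences of unequal length.

        def combine(transcription_a, transcription_b):
            merged = []
            for (word_a, conf_a), (word_b, conf_b) in zip(transcription_a, transcription_b):
                merged.append(word_a if conf_a >= conf_b else word_b)
            return " ".join(merged)

        t1 = [("please", 0.95), ("send", 0.90), ("the", 0.85), ("flies", 0.40)]
        t2 = [("please", 0.90), ("lend", 0.30), ("the", 0.80), ("files", 0.92)]

        print(combine(t1, t2))  # -> please send the files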
  • Patent number: 9015044
    Abstract: Implementations of systems, method and devices described herein enable enhancing the intelligibility of a target voice signal included in a noisy audible signal received by a hearing aid device or the like. In particular, in some implementations, systems, methods and devices are operable to generate a machine readable formant based codebook. In some implementations, the method includes determining whether or not a candidate codebook tuple includes a sufficient amount of new information to warrant either adding the candidate codebook tuple to the codebook or using at least a portion of the candidate codebook tuple to update an existing codebook tuple. Additionally and/or alternatively, in some implementations systems, methods and devices are operable to reconstruct a target voice signal by detecting formants in an audible signal, using the detected formants to select codebook tuples, and using the formant information in the selected codebook tuples to reconstruct the target voice signal.
    Type: Grant
    Filed: August 20, 2012
    Date of Patent: April 21, 2015
    Assignee: Malaspina Labs (Barbados) Inc.
    Inventors: Pierre Zakarauskas, Alexander Escott, Clarence S. H. Chu, Shawn E. Stevenson
  • Patent number: 9015030
    Abstract: A method for intercepting an application prompt before it reaches the user interface, wherein the application prompt has been transmitted from the computer application and is intended to reach the user interface. The method also includes translating the intercepted application prompt from a source language to a target user language and, in response to translating the intercepted application prompt, transmitting the translated application prompt to the user interface. The method also includes intercepting, in response to the application prompt, user input from the user interface, wherein the user input is intended to reach the computer application. The method also includes translating the user input from the target language to the source language and, in response to translating the user input, transmitting the translated user input to the computer application.
    Type: Grant
    Filed: April 15, 2012
    Date of Patent: April 21, 2015
    Assignee: International Business Machines Corporation
    Inventors: Graham Hunter, Ian McCloy
  • Patent number: 9009024
    Abstract: There is provided a computer-implemented method of performing sentiment analysis. An exemplary method comprises identifying one or more sentences in a microblog. The microblog comprises an entity. The method further includes identifying one or more opinion words in the sentences based on an opinion lexicon. Additionally, the method includes determining, for each of the sentences, an opinion value for the entity. The opinion value is determined based on an opinion value for each of the opinion words in the opinion lexicon.
    Type: Grant
    Filed: October 24, 2011
    Date of Patent: April 14, 2015
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Lei Zhang, Riddhiman Ghosh, Mohamed E. Dekhil
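    A minimal sketch of computing a per-sentence opinion value for an entity from an opinion lexicon, as described in the abstract above; the lexicon, its scores, and the averaging rule are illustrative assumptions:

        # Sketch: score each sentence of a microblog that mentions the entity by
        # averaging the lexicon values of the opinion words it contains.
        # The lexicon and the averaging rule are illustrative assumptions.

        OPINION_LEXICON = {"great": 1.0, "love": 0.8, "slow": -0.6, "terrible": -1.0}

        def sentence_opinion(sentence, entity):
            words = sentence.lower().rstrip(".!?").split()
            if entity.lower() not in words:
                return None                        # sentence is not about the entity
            values = [OPINION_LEXICON[w] for w in words if w in OPINION_LEXICON]
            return sum(values) / len(values) if values else 0.0

        microblog = ["I love my new phone.",
                     "The phone camera is great but the battery is slow."]
        for sentence in microblog:
            print(sentence_opinion(sentence, "phone"))  # 0.8, then 0.2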
  • Patent number: 9002696
    Abstract: A method, computer system, and computer program product for translating information. The computer system receives the information for a translation. The computer system identifies portions of the information based on a set of rules for security for the information in response to receiving the information. The computer system sends the portions of the information to a plurality of translation systems. In response to receiving translation results from the plurality of translation systems for respective portions of the information, the computer system combines the translation results for the respective portions to form a consolidated translation of the information.
    Type: Grant
    Filed: November 30, 2010
    Date of Patent: April 7, 2015
    Assignee: International Business Machines Corporation
    Inventors: Carl J. Kraenzel, David M. Lubensky, Baiju Dhirajlal Mandalia, Cheng Wu
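    A minimal sketch of the split-translate-combine flow described in the abstract above: portions are routed to different translation systems according to a toy security rule and then recombined in the original order (the rule, the tagging, and the stub translators are illustrative assumptions):

        # Sketch: split a document into portions based on a security rule, send each
        # portion to a different translation system, and recombine the results in
        # the original order. The rule and the translator stubs are illustrative.

        import re

        def is_sensitive(portion):
            # toy rule: anything containing a long digit run is treated as sensitive
            return bool(re.search(r"\b\d{6,}\b", portion))

        def internal_translate(text):   # trusted on-premises system (stub)
            return "<int>" + text + "</int>"

        def external_translate(text):   # public cloud system (stub)
            return "<ext>" + text + "</ext>"

        def translate_document(document):
            portions = document.split(". ")
            translated = [internal_translate(p) if is_sensitive(p) else external_translate(p)
                          for p in portions]
            return ". ".join(translated)   # consolidated translation, original order

        print(translate_document("Welcome to the portal. Your account 12345678 is active"))
        # <ext>Welcome to the portal</ext>. <int>Your account 12345678 is active</int>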
  • Patent number: 8996369
    Abstract: System, method and program product for transcribing an audio file included in or referenced by a web page. A language of text in the web page is determined. Then, voice recognition software for that language is selected and used to transcribe the audio file. If the language of the text is not the language of the audio file, then a related language is determined. Then, voice recognition software for the related language is selected and used to transcribe the audio file. The related language can be related geographically, by a common root, as another dialect of the same language, or as another language commonly spoken in the same country as the language of the text. Another system, method and program product is disclosed for transcribing an audio file included in or referenced by a web page. A domain extension or full domain of the web page and an official language of that domain extension or full domain are determined.
    Type: Grant
    Filed: August 30, 2007
    Date of Patent: March 31, 2015
    Assignee: Nuance Communications, Inc.
    Inventor: Joey Stanford
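    A minimal sketch of the language-selection fallback chain described in the abstract above: use the page-text language if a recognizer exists for it, otherwise a related language, otherwise the official language implied by the domain extension; all tables and the stub default are illustrative assumptions:

        # Sketch: pick voice recognition software for an embedded audio file from the
        # web page's text language, falling back to a related language and then to
        # the official language of the page's domain extension. Tables are illustrative.

        RELATED_LANGUAGE = {"ca": "es", "gl": "pt", "af": "nl"}   # e.g. Catalan -> Spanish
        TLD_LANGUAGE = {".fr": "fr", ".de": "de", ".br": "pt"}
        AVAILABLE_RECOGNIZERS = {"en", "es", "fr", "de", "pt", "nl"}

        def pick_recognizer(page_language, page_url):
            if page_language in AVAILABLE_RECOGNIZERS:
                return page_language
            related = RELATED_LANGUAGE.get(page_language)
            if related in AVAILABLE_RECOGNIZERS:
                return related
            for tld, lang in TLD_LANGUAGE.items():
                if page_url.endswith(tld) and lang in AVAILABLE_RECOGNIZERS:
                    return lang
            return "en"                            # last-resort default

        print(pick_recognizer("ca", "https://example.cat"))  # -> es (related language)
        print(pick_recognizer("xx", "https://example.fr"))   # -> fr (domain extension)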
  • Patent number: 8983850
    Abstract: A method and system provides a graphical user interface for instant messaging on any of a plurality of instant messaging networks. The interface provides a roster of contacts in each instant messaging network. Instant messages entered through the interface are machine translated into a preferred language for each intended recipient contact. The translated message is sent over the respective instant messaging networks of the intended recipient contacts. Response messages are translated into the source language of the user of the graphical user interface.
    Type: Grant
    Filed: July 20, 2012
    Date of Patent: March 17, 2015
    Assignee: Ortsbo Inc.
    Inventors: Mark Charles Hale, Leemon Baird
  • Patent number: 8983825
    Abstract: A collaborative language translation system, computer readable storage medium, and method are disclosed that allocate work between automated and manual language translation services. A manual language translator creates a unique database including the manual translator's language capabilities, accuracy skill level, scope of translation project desired, and translation turnaround time. A client likewise creates a unique information set that includes original language, desired language, scope of translated material, client desired translation formats, client desired translation timing, and client desired translation accuracy. Also included in the system are an automated language translation database, instructions for allocating a flow of the unique information set as between the unique database and the automated language translation database based upon the client's unique information set, and instructions to perform the selected language translation for the client.
    Type: Grant
    Filed: November 14, 2011
    Date of Patent: March 17, 2015
    Inventors: Amadou Sarr, Bonita Louise Griffin Kaake, Michael Esposito