Patents by Inventor Takako Aikawa

Takako Aikawa has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9176952
    Abstract: A computerized system for performing statistical machine translation with a phrasal decoder is provided. The system may include a phrasal decoder trained, prior to run-time, on a monolingual parallel corpus that pairs the machine translation output of the source language documents of a bilingual parallel corpus with the corresponding human translations of those documents, so that the decoder learns mappings between the machine translation output and the target human translation output. The system may further include a statistical machine translation engine configured to receive a translation input and to produce a raw machine translation output at run-time. The phrasal decoder may be configured to process the raw machine translation output and to produce a corrected translation output, based on the learned mappings, for display on a display associated with the system. (A minimal sketch of this post-editing idea follows this entry.)
    Type: Grant
    Filed: September 25, 2008
    Date of Patent: November 3, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Takako Aikawa, Achim Ruopp
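The post-editing idea above lends itself to a short sketch. The following Python is not the patented method, only a toy under stated assumptions: the alignment between raw MT output and human translations is approximated by a naive set-difference co-occurrence count, and all function names (learn_mappings, post_edit) are illustrative.

```python
# Toy sketch of post-editing raw MT output with phrase mappings learned
# from a "monolingual parallel corpus": raw MT output paired with human
# translations of the same source. The alignment heuristic is naive.
from collections import Counter, defaultdict

def learn_mappings(mt_sentences, human_sentences, max_n=3):
    """Count how often an n-gram unique to the raw MT output co-occurs
    with an n-gram unique to the corresponding human translation."""
    counts = defaultdict(Counter)
    for mt, human in zip(mt_sentences, human_sentences):
        mt_toks, h_toks = mt.split(), human.split()
        for n in range(1, max_n + 1):
            mt_grams = {" ".join(mt_toks[i:i + n]) for i in range(len(mt_toks) - n + 1)}
            h_grams = {" ".join(h_toks[i:i + n]) for i in range(len(h_toks) - n + 1)}
            # A phrase the human kept needs no correction; a phrase that
            # disappeared is paired with phrases that newly appeared.
            for g in mt_grams - h_grams:
                for h in h_grams - mt_grams:
                    counts[g][h] += 1
    # Keep the most frequent correction for each MT phrase.
    return {g: c.most_common(1)[0][0] for g, c in counts.items() if c}

def post_edit(raw_mt, mappings):
    """Greedily replace learned phrases, longest phrases first."""
    out = raw_mt
    for src, tgt in sorted(mappings.items(), key=lambda kv: -len(kv[0])):
        out = out.replace(src, tgt)
    return out
```

A real system would learn these mappings with statistical phrase alignment, as in phrase-based SMT training, rather than this set-difference heuristic.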
  • Patent number: 8548791
    Abstract: A method of determining the consistency of training data for a machine translation system is disclosed. The method includes receiving a signal indicative of a source language corpus and a target language corpus. A textual string is extracted from the source language corpus. The textual string is aligned with the target language corpus to identify a translation for the textual string from the target language corpus. A consistency index is calculated based on a relationship between the textual string from the source language corpus and the translation. An indication of the consistency index is stored on a tangible medium. (A sketch of one possible consistency measure follows this entry.)
    Type: Grant
    Filed: August 29, 2007
    Date of Patent: October 1, 2013
    Assignee: Microsoft Corporation
    Inventors: Masaki Itagaki, Takako Aikawa, Xiaodong He
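As a rough illustration of what a "consistency index" might compute, here is a minimal Python sketch. It assumes consistency means the fraction of a source term's occurrences that use its single most frequent aligned translation; the alignment step itself is taken as given, and all names are hypothetical rather than taken from the patent.

```python
# Minimal consistency-index sketch: for each source term, measure how
# dominant its most common aligned translation is across the corpus.
from collections import Counter, defaultdict

def consistency_index(term_pairs):
    """term_pairs: iterable of (source_term, aligned_target_translation),
    as produced by some upstream aligner (stubbed here).
    Returns {source_term: fraction of occurrences using the dominant
    translation}; 1.0 means perfectly consistent training data."""
    translations = defaultdict(Counter)
    for src, tgt in term_pairs:
        translations[src][tgt] += 1
    return {
        src: counts.most_common(1)[0][1] / sum(counts.values())
        for src, counts in translations.items()
    }

# Example: "cancel" translated two different ways in the training data.
pairs = [("cancel", "abbrechen"), ("cancel", "abbrechen"), ("cancel", "stornieren")]
print(consistency_index(pairs))  # -> {'cancel': 0.6666666666666666}
```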
  • Publication number: 20110252316
    Abstract: A system described herein includes an acquirer component that acquires an electronic document comprising text in a first language, based at least in part upon a physical object bearing the text contacting or becoming proximate to the interactive display of the surface computing device. The system also includes a language selector component that receives an indication of a second language from a user of the surface computing device and selects the second language. A translator component translates the text in the electronic document from the first language to the second language, and a formatter component formats the electronic document for display to the user on the interactive display of the surface computing device, wherein the electronic document comprises the text in the second language. (A minimal sketch of this component pipeline follows this entry.)
    Type: Application
    Filed: April 12, 2010
    Publication date: October 13, 2011
    Applicant: Microsoft Corporation
    Inventors: Michel Pahud, Takako Aikawa, Andrew D. Wilson, Hrvoje Benko, Sauleh Eetemadi, Anand M. Chakravarty
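A minimal sketch of the described component pipeline (acquirer, language selector, translator, formatter) follows. It is not the patented implementation: capture and translation are stubbed, and every class and function name is illustrative.

```python
# Sketch of the surface-computing translation pipeline: acquire a document
# when a physical object contacts the display, translate it to the user's
# selected language, and format it for display. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    language: str

def acquire(contact_event):
    """Acquirer component: real capture/OCR would happen here; stubbed."""
    return Document(text=contact_event["captured_text"], language="en")

def translate(doc, target_language):
    """Translator component: stand-in for a real MT engine."""
    return Document(text=f"[{target_language}] {doc.text}", language=target_language)

def format_for_display(doc):
    """Formatter component: lay the translated text out for the surface."""
    return {"body": doc.text, "lang": doc.language}

event = {"captured_text": "Hello, world"}   # from the contact sensor
selected = "ja"                             # from the language selector
print(format_for_display(translate(acquire(event), selected)))
```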
  • Publication number: 20100082324
    Abstract: A system described herein includes a receiver component that receives an output translation from a machine translation system, wherein the output translation is in a target language and is based at least in part upon an input to the machine translation system in a source language. The input to the machine translation system includes a first term in the source language, and the output translation includes a second term in the target language that corresponds to the first term. The system additionally includes a replacer component, in communication with the receiver component, that accesses a dictionary of term correspondences. The dictionary includes an indication that the first term in the source language is desirably translated to a third term in the target language, and the replacer component is configured to automatically replace the second term with the third term to modify the output translation. (A minimal sketch follows this entry.)
    Type: Application
    Filed: September 30, 2008
    Publication date: April 1, 2010
    Applicant: Microsoft Corporation
    Inventors: Masaki Itagaki, Takako Aikawa
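The replacer component can be illustrated in a few lines of Python. This is a sketch under assumptions, not the patented method: the dictionary maps a source term to the engine's rendering and the preferred rendering, and replacement is a plain string substitution rather than an alignment-aware edit.

```python
# Sketch of dictionary-driven term replacement in MT output: if the source
# contains a known term and the engine produced its usual rendering,
# swap in the preferred rendering. Names and data are illustrative.
def apply_term_dictionary(source_text, mt_output, term_dict):
    """term_dict maps a source term to (engine_term, preferred_term)."""
    out = mt_output
    for src_term, (engine_term, preferred_term) in term_dict.items():
        if src_term in source_text and engine_term in out:
            out = out.replace(engine_term, preferred_term)
    return out

# Hypothetical entry: prefer "ordinateur portable" over the engine's "cahier".
term_dict = {"notebook": ("cahier", "ordinateur portable")}
print(apply_term_dictionary("Buy a notebook", "Achetez un cahier", term_dict))
# -> "Achetez un ordinateur portable"
```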
  • Publication number: 20100076746
    Abstract: A computerized system for performing statistical machine translation with a phrasal decoder is provided. The system may include a phrasal decoder trained, prior to run-time, on a monolingual parallel corpus that pairs the machine translation output of the source language documents of a bilingual parallel corpus with the corresponding human translations of those documents, so that the decoder learns mappings between the machine translation output and the target human translation output. The system may further include a statistical machine translation engine configured to receive a translation input and to produce a raw machine translation output at run-time. The phrasal decoder may be configured to process the raw machine translation output and to produce a corrected translation output, based on the learned mappings, for display on a display associated with the system.
    Type: Application
    Filed: September 25, 2008
    Publication date: March 25, 2010
    Applicant: Microsoft Corporation
    Inventors: Takako Aikawa, Achim Ruopp
  • Patent number: 7512537
    Abstract: The subject invention provides a unique system and method that facilitates integrating natural language input and graphics in a cooperative manner. In particular, as natural language input is entered by a user, an illustrated or animated scene can be generated to correspond to that input. The natural language input can be in sentence form. Upon detection of an end-of-sentence indicator, the input can be processed using NLP techniques, and the images or templates representing at least one of the actor, action, background, and/or object specified in the input can be selected and rendered. Thus, the user can almost immediately see an illustration of his or her input. The input can be typed, written, or spoken; for spoken input, speech recognition can be employed to convert the speech to text. New graphics can be created as well to allow customization and expansion of the invention according to the user's preferences. (A minimal sketch of the text-to-scene flow follows this entry.)
    Type: Grant
    Filed: March 22, 2005
    Date of Patent: March 31, 2009
    Assignee: Microsoft Corporation
    Inventors: Michel Pahud, Takako Aikawa, Lee A. Schwartz
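A toy sketch of the text-to-scene flow follows: wait for an end-of-sentence indicator, extract the actor, action, and background with stand-in keyword spotting instead of real NLP, and select matching image templates. All template names and functions are hypothetical, not taken from the patent.

```python
# Sketch of text-to-scene: on an end-of-sentence indicator, map words to
# image templates for actor/action/background. A production system would
# use a real parser and renderer; this is keyword spotting.
TEMPLATES = {
    "actor": {"dog": "dog.png", "girl": "girl.png"},
    "action": {"runs": "run_pose.png", "jumps": "jump_pose.png"},
    "background": {"park": "park_bg.png", "beach": "beach_bg.png"},
}

def parse_sentence(sentence):
    """Toy keyword-spotting stand-in for NLP parsing."""
    tokens = sentence.lower().rstrip(".!?").split()
    scene = {}
    for role, options in TEMPLATES.items():
        for tok in tokens:
            if tok in options:
                scene[role] = options[tok]
    return scene

def on_input(buffer):
    """Build a scene each time an end-of-sentence indicator arrives."""
    if buffer.endswith((".", "!", "?")):
        return parse_sentence(buffer)  # a renderer would draw these layers
    return None

print(on_input("The dog runs in the park."))
# -> {'actor': 'dog.png', 'action': 'run_pose.png', 'background': 'park_bg.png'}
```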
  • Publication number: 20090063126
    Abstract: A method of determining the consistency of training data for a machine translation system is disclosed. The method includes receiving a signal indicative of a source language corpus and a target language corpus. A textual string is extracted from the source language corpus. The textual string is aligned with the target language corpus to identify a translation for the textual string from the target language corpus. A consistency index is calculated based on a relationship between the textual string from the source language corpus and the translation. An indication of the consistency index is stored on a tangible medium.
    Type: Application
    Filed: August 29, 2007
    Publication date: March 5, 2009
    Applicant: Microsoft Corporation
    Inventors: Masaki Itagaki, Takako Aikawa, Xiaodong He
  • Publication number: 20060217979
    Abstract: The subject invention provides a unique system and method that facilitates integrating natural language input and graphics in a cooperative manner. In particular, as natural language input is entered by a user, an illustrated or animated scene can be generated to correspond to that input. The natural language input can be in sentence form. Upon detection of an end-of-sentence indicator, the input can be processed using NLP techniques, and the images or templates representing at least one of the actor, action, background, and/or object specified in the input can be selected and rendered. Thus, the user can almost immediately see an illustration of his or her input. The input can be typed, written, or spoken; for spoken input, speech recognition can be employed to convert the speech to text. New graphics can be created as well to allow customization and expansion of the invention according to the user's preferences.
    Type: Application
    Filed: March 22, 2005
    Publication date: September 28, 2006
    Applicant: Microsoft Corporation
    Inventors: Michel Pahud, Takako Aikawa, Lee Schwartz