Patents by Inventor Chieko Asakawa

Chieko Asakawa has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9792834
    Abstract: A computer has a display device, a speaker device and an input device, and is capable of identifying problems concerning accessibility in web content displayed on the display device to a visually impaired user of the computer. The web content includes a plurality of structured objects. The computer also has text-to-speech capability such that the web content displayed on the display device is audibly read to the user. The user provides a specification operation input when he or she is uncomfortable with the audible reading. A reporter software module executing on the computer determines which one of the structured objects is causing the discomfort.
    Type: Grant
    Filed: March 6, 2009
    Date of Patent: October 17, 2017
    Assignee: International Business Machines Corporation
    Inventors: Chieko Asakawa, Shinya Kawanaka, Daisuke Sato, Hironobu Takagi
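    Sketch: a minimal Python illustration of the reporter idea in the abstract above, assuming a time-indexed log of which object is currently being read aloud; the ReadingReporter class and its method names are hypothetical, not taken from the patent.
    ```python
    import bisect
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class ReadingReporter:
        # Parallel lists: the time (seconds) each reading event started, and the id
        # of the structured object that text-to-speech began reading at that time.
        _times: List[float] = field(default_factory=list)
        _objects: List[str] = field(default_factory=list)

        def on_object_read(self, timestamp: float, object_id: str) -> None:
            """Record that text-to-speech started reading `object_id`."""
            self._times.append(timestamp)
            self._objects.append(object_id)

        def on_discomfort(self, timestamp: float) -> Optional[str]:
            """Return the object being read when the user signalled discomfort."""
            idx = bisect.bisect_right(self._times, timestamp) - 1
            return self._objects[idx] if idx >= 0 else None

    # Usage: the object being read at t = 5.1 s is blamed for a report at t = 6.0 s.
    reporter = ReadingReporter()
    reporter.on_object_read(0.0, "heading-1")
    reporter.on_object_read(5.1, "nav-menu")
    print(reporter.on_discomfort(6.0))  # -> nav-menu
    ```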
  • Patent number: 9349302
    Abstract: A device, computer program and method for outputting linguistic information. The voice output device, for example, includes an output information acquisition unit acquiring linguistic information and attribute information. Attribute information includes an attribute added to each linguistic element included in the linguistic information. A tactile pattern storage unit stores a predetermined tactile pattern corresponding to each linguistic element. A tactile pattern acquisition unit acquires the tactile pattern from the tactile pattern storage unit. A voice output unit reads aloud the linguistic elements and a tactile pattern output unit outputs, in parallel with reading aloud each linguistic element, the tactile pattern corresponding to the attribute added to the linguistic element, thereby allowing a user to sense the tactile pattern by the sense of touch.
    Type: Grant
    Filed: June 30, 2013
    Date of Patent: May 24, 2016
    Assignee: International Business Machines Corporation
    Inventors: Chieko Asakawa, Tohru Ifukube, Shuichi Ino, Hironobu Takagi
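    Sketch: a toy Python version of the parallel voice/tactile output described above; the attribute-to-pattern table, the (text, attribute) element format, and the print placeholders for the speech and tactile devices are all assumptions.
    ```python
    # attribute -> vibration pattern, expressed as (on_ms, off_ms) pulses
    TACTILE_PATTERNS = {
        "emphasis": [(200, 50), (200, 50)],
        "hyperlink": [(80, 40)] * 3,
        "plain": [],
    }

    def speak(text):
        print(f"[speech] {text}")          # placeholder for a TTS engine call

    def vibrate(pattern):
        if pattern:
            print(f"[tactile] {pattern}")  # placeholder for a tactile actuator

    def read_with_tactile(elements):
        """elements: iterable of (text, attribute) pairs."""
        for text, attribute in elements:
            pattern = TACTILE_PATTERNS.get(attribute, [])
            vibrate(pattern)   # emit the tactile pattern alongside the speech
            speak(text)

    read_with_tactile([("Click", "plain"), ("here", "hyperlink"), ("now!", "emphasis")])
    ```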
  • Patent number: 9262386
    Abstract: Provides the ability to analyze the readability of an image to be displayed on a screen as a web page, etc., and to appropriately modify the image. It includes a rendering section for generating an image by rendering an HTML document; an image processing section for performing image processing on the image generated by the rendering section to simulate and evaluate how the image is viewed under a certain visual characteristic; and a result presentation section for locating an HTML element that needs to be modified in the HTML document to be processed, based on the evaluation result of the image processing section, and for presenting the HTML element to a web page creator. The result presentation section also retrieves a modification method for the HTML element that needs to be modified from a symptom model storage section. A document modification processing section actually modifies the HTML document.
    Type: Grant
    Filed: February 28, 2013
    Date of Patent: February 16, 2016
    Assignee: International Business Machines Corporation
    Inventors: Chieko Asakawa, Kentaro Fukuda, Junji Maeda, Hidemasa Muta, Hironobu Takagi
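    Sketch: a much-simplified Python stand-in for the evaluation step described above, checking element colours against a contrast threshold rather than processing a rendered page image; the element structure and the 4.5 threshold are assumptions, not the patent's symptom model.
    ```python
    def relative_luminance(rgb):
        def channel(c):
            c = c / 255.0
            return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
        r, g, b = (channel(c) for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    def contrast_ratio(fg, bg):
        l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
        return (l1 + 0.05) / (l2 + 0.05)

    def find_elements_to_modify(elements, threshold=4.5):
        """elements: list of dicts with 'selector', 'fg', 'bg' (RGB tuples)."""
        return [e["selector"] for e in elements
                if contrast_ratio(e["fg"], e["bg"]) < threshold]

    page = [
        {"selector": "h1",     "fg": (0, 0, 0),       "bg": (255, 255, 255)},
        {"selector": "p.note", "fg": (180, 180, 180), "bg": (255, 255, 255)},
    ]
    print(find_elements_to_modify(page))   # -> ['p.note']
    ```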
  • Patent number: 8589067
    Abstract: A method and apparatus for mapping a moving direction by using sounds for people with visual impairments, in which a moving direction can be expressed multi-dimensionally by using sound information and can effectively assist people with visual impairments and the like in navigation tasks. A moving direction of a person carrying a device configured to output sounds is mapped by using a combination of sounds outputted from the device. A plurality of different sound information pieces are stored in association with three or more predetermined directions, respectively, a current position of the moving target device is identified, and then a moving direction of the device is identified. A sound obtained by combining two sounds in a ratio according to the identified moving direction is outputted on the basis of sound information pieces associated respectively with two adjacent directions sandwiching the identified moving direction among the predetermined directions.
    Type: Grant
    Filed: November 23, 2011
    Date of Patent: November 19, 2013
    Assignee: International Business Machines Corporation
    Inventors: Chieko Asakawa, Susumu Harada
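    Sketch: a small Python rendering of the mixing rule in the abstract above, blending the sounds of the two predetermined directions that sandwich the current heading; the four compass directions and linear angular weighting are illustrative assumptions.
    ```python
    DIRECTION_SOUNDS = {0.0: "sound_north", 90.0: "sound_east",
                        180.0: "sound_south", 270.0: "sound_west"}

    def mix_for_heading(heading_deg):
        """Return [(sound, weight), (sound, weight)] for a heading in degrees."""
        heading = heading_deg % 360.0
        angles = sorted(DIRECTION_SOUNDS)
        # Find the adjacent pair (lower, upper) sandwiching the heading,
        # wrapping around from the last direction back to the first.
        for i, lower in enumerate(angles):
            upper = angles[(i + 1) % len(angles)]
            span = (upper - lower) % 360.0 or 360.0
            offset = (heading - lower) % 360.0
            if offset <= span:
                w_upper = offset / span   # closer to `upper` -> more of its sound
                return [(DIRECTION_SOUNDS[lower], 1.0 - w_upper),
                        (DIRECTION_SOUNDS[upper], w_upper)]

    # Heading 30 degrees: two thirds 'sound_north', one third 'sound_east'.
    print(mix_for_heading(30.0))
    ```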
  • Publication number: 20130298027
    Abstract: A device, computer program and method for outputting linguistic information. The voice output device, for example, includes an output information acquisition unit acquiring linguistic information and attribute information. Attribute information includes an attribute added to each linguistic element included in the linguistic information. A tactile pattern storage unit stores a predetermined tactile pattern corresponding to each linguistic element. A tactile pattern acquisition unit acquires the tactile pattern from the tactile pattern storage unit. A voice output unit reads aloud the linguistic elements and a tactile pattern output unit outputs, in parallel with reading aloud each linguistic element, the tactile pattern corresponding to the attribute added to the linguistic element, thereby allowing a user to sense the tactile pattern by the sense of touch.
    Type: Application
    Filed: June 30, 2013
    Publication date: November 7, 2013
    Inventors: Chieko Asakawa, Tohru Ifukube, Shuichi Ino, Hironobu Takagi
  • Patent number: 8494860
    Abstract: A device, computer program and method for outputting linguistic information. The voice output device, for example, includes an output information acquisition unit acquiring linguistic information and attribute information. Attribute information includes an attribute added to each linguistic element included in the linguistic information. A tactile pattern storage unit stores a predetermined tactile pattern corresponding to each linguistic element. A tactile pattern acquisition unit acquires the tactile pattern from the tactile pattern storage unit. A voice output unit reads aloud the linguistic elements and a tactile pattern output unit outputs, in parallel with reading aloud each linguistic element, the tactile pattern corresponding to the attribute added to the linguistic element, thereby allowing a user to sense the tactile pattern by the sense of touch.
    Type: Grant
    Filed: June 11, 2008
    Date of Patent: July 23, 2013
    Assignee: International Business Machines Corporation
    Inventors: Chieko Asakawa, Tohru Ifukube, Shuichi Ino, Hironobu Takagi
  • Patent number: 8438470
    Abstract: Provides the ability to analyze the readability of an image to be displayed on a screen as a web page, etc., and to appropriately modify the image. It includes a rendering section for generating an image by rendering an HTML document; an image processing section for performing image processing on the image generated by the rendering section to simulate and evaluate how the image is viewed under a certain visual characteristic; and a result presentation section for locating an HTML element that needs to be modified in the HTML document to be processed, based on the evaluation result of the image processing section, and for presenting the HTML element to a web page creator. The result presentation section also retrieves a modification method for the HTML element that needs to be modified from a symptom model storage section. A document modification processing section actually modifies the HTML document.
    Type: Grant
    Filed: August 16, 2007
    Date of Patent: May 7, 2013
    Assignee: International Business Machines Corporation
    Inventors: Chieko Asakawa, Kentaro Fukuda, Junji Maeda, Hidemasa Muta, Hironobu Takagi
  • Patent number: 8244541
    Abstract: The present invention relates to creating a web page and voice browsing of the web page, and more particularly, it improves accessibility for the voice browsing of the web page through a synthetic voice, efficiently with high reliability. A content creation system 20 of the present invention is used for creating a content which may be viewed through the synthetic voice, the system including: a database 22 for storing a structured document; and an information process section 24 for creating a speech node series 18 from the structured document, and calculating a reaching time from starting voice synthesis of the speech node series 18 until each node is outputted as the synthetic voice. The information process section 24 includes a support process section 36 to determine a graphic display corresponding to the reaching time, and to visually display the reaching time to a predetermined node by the voice synthesis on a screen of a display section 26.
    Type: Grant
    Filed: July 10, 2008
    Date of Patent: August 14, 2012
    Assignee: Nuance Communications, Inc.
    Inventors: Hironobu Takagi, Chieko Asakawa
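    Sketch: a rough Python approximation of the reaching-time calculation described above, using an assumed fixed speaking rate in place of a real speech synthesizer's timing.
    ```python
    CHARS_PER_SECOND = 15.0   # assumed speaking rate; a TTS engine would supply real timing

    def reaching_times(speech_nodes):
        """speech_nodes: list of (node_id, text). Returns {node_id: seconds until reached}."""
        times, elapsed = {}, 0.0
        for node_id, text in speech_nodes:
            times[node_id] = elapsed                 # time from start of synthesis to this node
            elapsed += len(text) / CHARS_PER_SECOND  # add this node's own speaking time
        return times

    nodes = [("header", "Site navigation menu with twelve links ..."),
             ("main",   "Today's top story: ..."),
             ("footer", "Contact and copyright information.")]
    for node_id, t in reaching_times(nodes).items():
        print(f"{node_id}: reached after {t:.1f} s")
    ```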
  • Publication number: 20120136569
    Abstract: A method and apparatus for mapping a moving direction by using sounds for people with visual impairments, in which a moving direction can be expressed multi-dimensionally by using sound information and can effectively assist people with visual impairments and the like in navigation tasks. A moving direction of a person carrying a device configured to output sounds is mapped by using a combination of sounds outputted from the device. A plurality of different sound information pieces are stored in association with three or more predetermined directions, respectively, a current position of the moving target device is identified, and then a moving direction of the device is identified. A sound obtained by combining two sounds in a ratio according to the identified moving direction is outputted on the basis of sound information pieces associated respectively with two adjacent directions sandwiching the identified moving direction among the predetermined directions.
    Type: Application
    Filed: November 23, 2011
    Publication date: May 31, 2012
    Applicant: International Business Machines Corporation
    Inventors: Chieko Asakawa, Susumu Harada
  • Patent number: 8132099
    Abstract: A method and apparatus for obtaining accessibility information in content of a rich internet application. The method for obtaining accessibility information includes executing an object of the content displayed on a display screen, estimating a role of the object using reference model information prepared beforehand concerning a plurality of objects, and outputting the estimated role of the object as the accessibility information. Therefore, even when the object does not have accessibility information, the accessibility information can be obtained based on the output during the execution and the internal state. In this way, it is possible for a visually impaired user to operate rich internet applications that are created by Dynamic HTML and Flash™.
    Type: Grant
    Filed: September 26, 2008
    Date of Patent: March 6, 2012
    Assignee: International Business Machines Corporation
    Inventors: Chieko Asakawa, Hisashi Miyashita, Daisuke Sato, Hironobu Takagi
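    Sketch: a hypothetical Python take on the role-estimation step described above, matching features observed while executing an object against a small reference model of known roles; the feature names and reference entries are invented for illustration.
    ```python
    REFERENCE_MODELS = {
        "button":   {"clickable", "fires_action"},
        "checkbox": {"clickable", "toggles_state"},
        "slider":   {"draggable", "has_numeric_value"},
    }

    def estimate_role(observed_features):
        """Return the role whose reference feature set best overlaps the observation."""
        def score(role):
            model = REFERENCE_MODELS[role]
            return len(model & observed_features) / len(model)
        best = max(REFERENCE_MODELS, key=score)
        return best if score(best) > 0 else None

    # An object that was clickable and toggled its internal state when executed.
    print(estimate_role({"clickable", "toggles_state"}))   # -> checkbox
    ```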
  • Patent number: 7913191
    Abstract: A user interface through which multiple application programs can be operated in common. An information processing apparatus provides a common input/output interface to multiple application programs. The apparatus includes a section which converts an application-specific document generated by each of the plurality of application programs and represented in a data structure specific to the application program to a common document represented in a common data structure; a section which presents the common document to a user; a section which inputs an operation performed by the user on the common document; an interface adapter which converts an object contained in the common document to an object used in the output section; a section which modifies the common document in accordance with an operation by the user; and a section which reflects modifications to the common document in the application-specific document.
    Type: Grant
    Filed: March 5, 2007
    Date of Patent: March 22, 2011
    Assignee: International Business Machines Corporation
    Inventors: Chieko Asakawa, Tatsuya Ishihara, Takashi Itoh, Hironobu Takagi
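    Sketch: a minimal Python adapter pattern in the spirit of the abstract above, converting an application-specific document to a common representation and reflecting the user's edits back; the CommonDocument shape and the toy comma-separated "application" are assumptions.
    ```python
    from abc import ABC, abstractmethod

    class CommonDocument:
        def __init__(self, items):
            self.items = list(items)     # a flat list of text items, for brevity

    class ApplicationAdapter(ABC):
        @abstractmethod
        def to_common(self) -> CommonDocument: ...
        @abstractmethod
        def apply_changes(self, doc: CommonDocument) -> None: ...

    class CsvLikeAdapter(ApplicationAdapter):
        """Toy 'application' whose native document is a comma-separated string."""
        def __init__(self, native_text: str):
            self.native_text = native_text
        def to_common(self) -> CommonDocument:
            return CommonDocument(self.native_text.split(","))
        def apply_changes(self, doc: CommonDocument) -> None:
            self.native_text = ",".join(doc.items)   # reflect edits back

    adapter = CsvLikeAdapter("alpha,beta,gamma")
    doc = adapter.to_common()
    doc.items[1] = "BETA"            # user edits the common representation
    adapter.apply_changes(doc)
    print(adapter.native_text)       # -> alpha,BETA,gamma
    ```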
  • Patent number: 7890883
    Abstract: Digest screen display content deciding means selects display elements belonging to respective regions of a document based on display priorities of the display elements, which are obtained by digest screen display priority information creating means, and decides selected display elements as display content of a digest screen under a condition where a total display area does not exceed a required display area. A merging relationship among the regions is set based on layout information for the regions, created by digest screen region layout information creating means. Display content deciding means decides the display content of a detail screen based on the merging relationship among the regions, and creates a digest of the detail screen based on control information created by control information creating means. Moreover, digest screen display content changing means changes the display content of the digest screen in response to an operation of a user.
    Type: Grant
    Filed: July 16, 2009
    Date of Patent: February 15, 2011
    Assignee: International Business Machines Corporation
    Inventors: Chieko Asakawa, Kentaro Fukuda, Junji Maeda, Hironobu Takagi
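    Sketch: a simplified Python version of the digest-screen selection rule described above, greedily taking elements in priority order while the total display area stays within budget; the element fields and sample regions are illustrative.
    ```python
    def choose_digest_elements(elements, max_area):
        """elements: list of dicts with 'id', 'priority' (higher first) and 'area'."""
        chosen, used = [], 0
        for element in sorted(elements, key=lambda e: e["priority"], reverse=True):
            if used + element["area"] <= max_area:
                chosen.append(element["id"])
                used += element["area"]
        return chosen

    regions = [
        {"id": "headline", "priority": 10, "area": 40},
        {"id": "photo",    "priority": 7,  "area": 80},
        {"id": "summary",  "priority": 9,  "area": 30},
        {"id": "ads",      "priority": 1,  "area": 20},
    ]
    print(choose_digest_elements(regions, max_area=100))  # -> ['headline', 'summary', 'ads']
    ```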
  • Patent number: 7877260
    Abstract: The present invention relates to creating a web page and voice browsing of the web page, and more particularly, it improves accessibility for the voice browsing of the web page through a synthetic voice, efficiently with high reliability. A content creation system 20 of the present invention is used for creating a content which may be viewed through the synthetic voice, the system including: a database 22 for storing a structured document; and an information process section 24 for creating a speech node series 18 from the structured document, and calculating a reaching time from starting voice synthesis of the speech node series 18 until each node is outputted as the synthetic voice. The information process section 24 includes a support process section 36 to determine a graphic display corresponding to the reaching time, and to visually display the reaching time to a predetermined node by the voice synthesis on a screen of a display section 26.
    Type: Grant
    Filed: October 20, 2005
    Date of Patent: January 25, 2011
    Assignee: Nuance Communications, Inc.
    Inventors: Hironobu Takagi, Chieko Asakawa
  • Publication number: 20100179983
    Abstract: Digest screen display content deciding means selects display elements belonging to respective regions of a document based on display priorities of the display elements, which are obtained by digest screen display priority information creating means, and decides selected display elements as display content of a digest screen under a condition where a total display area does not exceed a required display area. A merging relationship among the regions is set based on layout information for the regions, created by digest screen region layout information creating means. Display content deciding means decides the display content of a detail screen based on the merging relationship among the regions, and creates a digest of the detail screen based on control information created by control information creating means. Moreover, digest screen display content changing means changes the display content of the digest screen in response to an operation of a user.
    Type: Application
    Filed: July 16, 2009
    Publication date: July 15, 2010
    Inventors: Chieko Asakawa, Kentaro Fukuda, Junji Maeda, Hironobu Takagi
  • Patent number: 7698627
    Abstract: A device, a control method, and a program to increase the accuracy of voice read-out and text mining by automatically structuring a presentation file.
    Type: Grant
    Filed: August 7, 2006
    Date of Patent: April 13, 2010
    Assignee: International Business Machines Corporation
    Inventors: Chieko Asakawa, Tatsuya Ishihara, Takashi Itoh, Hironobu Takagi
  • Patent number: 7600185
    Abstract: Digest screen display content deciding means selects display elements belonging to respective regions of a document based on display priorities of the display elements, which are obtained by digest screen display priority information creating means, and decides selected display elements as display content of a digest screen under a condition where a total display area does not exceed a required display area. A merging relationship among the regions is set based on layout information for the regions, created by digest screen region layout information creating means. Display content deciding means decides the display content of a detail screen based on the merging relationship among the regions, and creates a digest of the detail screen based on control information created by control information creating means. Moreover, digest screen display content changing means changes the display content of the digest screen in response to an operation of a user.
    Type: Grant
    Filed: March 24, 2004
    Date of Patent: October 6, 2009
    Assignee: International Business Machines Corporation
    Inventors: Chieko Asakawa, Kentaro Fukuda, Junji Maeda, Hironobu Takagi
  • Publication number: 20090228573
    Abstract: A computer having a display device, a speaker device and an input device, for identifying problems concerning accessibility in content displayed on the computer. The computer includes: display means for displaying content containing a plurality of structured objects; conversion means for converting the content into reading information based on predetermined rules; reading means for reading the converted reading information through the speaker device; history means for obtaining a history of operations of the input device by a user, the history including an operation by the user in response to a reading corresponding to a specific object; and identification means for identifying the specific object based on the reading information and the operation history. The method includes: displaying content containing structured objects; converting the content into reading information; reading the converted information; obtaining a history of operations; and identifying a specific object.
    Type: Application
    Filed: March 6, 2009
    Publication date: September 10, 2009
    Inventors: Chieko Asakawa, Shinya Kawanaka, Daisuke Sato, Hironobu Takagi
  • Publication number: 20090100328
    Abstract: A method for obtaining accessibility information in a content of a rich internet application and a computer program and an accessibility information device. The method for obtaining accessibility information includes the steps of executing an object of the content displayed on a display screen, estimating a role of the object using reference model information prepared beforehand concerning a plurality of objects, and outputting the estimated role of the object as the accessibility information.
    Type: Application
    Filed: September 26, 2008
    Publication date: April 16, 2009
    Inventors: Chieko Asakawa, Hisashi Miyashita, Daisuke Sato, Hironobu Takagi
  • Patent number: 7502995
    Abstract: Target subtree setting means sets a target subtree relating to a content portion. Occurrence mode detecting means collates a target subtree relating to a content with a tree relating to each of past structured/hierarchical contents and detects an occurrence mode of each node of the target subtree. Statistical information generating means generates statistical information concerning an occurrence frequency of the occurrence mode of each node in the target subtree. Classifying means classifies each node of the target subtree based on the statistical information and a result of detecting the occurrence mode. Matching pattern generating means generates the matching pattern for the target content portion based on the classification. The structured/hierarchical contents are identified by use of the matching pattern.
    Type: Grant
    Filed: October 23, 2003
    Date of Patent: March 10, 2009
    Assignee: International Business Machines Corporation
    Inventors: Hironobu Takagi, Chieko Asakawa
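    Sketch: a coarse, speculative Python reading of the abstract above, classifying target-subtree nodes as required or optional in a matching pattern according to how often they occur in past versions of the content; the flat node-path representation and the 0.8 frequency threshold are assumptions.
    ```python
    def build_matching_pattern(target_nodes, past_documents, threshold=0.8):
        """target_nodes: list of node paths; past_documents: list of sets of node paths."""
        pattern = {"required": [], "optional": []}
        for node in target_nodes:
            frequency = sum(node in doc for doc in past_documents) / len(past_documents)
            bucket = "required" if frequency >= threshold else "optional"
            pattern[bucket].append(node)
        return pattern

    past = [{"html/body/h1", "html/body/ul"},
            {"html/body/h1", "html/body/ul", "html/body/div.ad"},
            {"html/body/h1", "html/body/ul"}]
    print(build_matching_pattern(["html/body/h1", "html/body/ul", "html/body/div.ad"], past))
    # -> {'required': ['html/body/h1', 'html/body/ul'], 'optional': ['html/body/div.ad']}
    ```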
  • Publication number: 20080276163
    Abstract: The present invention relates to creating a web page and voice browsing of the web page, and more particularly, it improves accessibility for the voice browsing of the web page through a synthetic voice, efficiently with high reliability. A content creation system 20 of the present invention is used for creating a content which may be viewed through the synthetic voice, the system including: a database 22 for storing a structured document; and an information process section 24 for creating a speech node series 18 from the structured document, and calculating a reaching time from starting voice synthesis of the speech node series 18 until each node is outputted as the synthetic voice. The information process section 24 includes a support process section 36 to determine a graphic display corresponding to the reaching time, and to visually display the reaching time to a predetermined node by the voice synthesis on a screen of a display section 26.
    Type: Application
    Filed: July 10, 2008
    Publication date: November 6, 2008
    Inventors: Hironobu Takagi, Chieko Asakawa