Patents by Inventor Taira Ashikawa

Taira Ashikawa has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9460718
    Abstract: According to an embodiment, a text generator includes a recognizer, a selector, and a generation unit. The recognizer is configured to recognize an acquired sound and obtain recognized character strings in recognition units and confidence levels of the recognized character strings. The selector is configured to select at least one of the recognized character strings used for a transcribed sentence on the basis of at least one of a parameter about transcription accuracy and a parameter about a workload needed for transcription. The generation unit is configured to generate the transcribed sentence using the selected recognized character strings.
    Type: Grant
    Filed: March 12, 2014
    Date of Patent: October 4, 2016
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Taira Ashikawa, Osamu Nishiyama, Tomoo Ikeda, Koji Ueno, Kouta Nakata
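    Sketch: a minimal Python illustration of the selection step this abstract describes. The names, the thresholding rule, and the placeholder marker are hypothetical, not taken from the patent; the idea is only that low-confidence recognition units are handed to the transcriber rather than kept verbatim.
```python
from dataclasses import dataclass

@dataclass
class Candidate:
    text: str          # recognized character string for one recognition unit
    confidence: float  # confidence level in [0, 1]

def build_transcript(candidates: list[Candidate], accuracy_weight: float = 0.5) -> str:
    """Keep high-confidence units verbatim; mark low-confidence units for manual correction.

    accuracy_weight is a stand-in for the accuracy/workload parameters: a higher value
    keeps fewer automatic results and leaves more correction work to the user.
    """
    threshold = accuracy_weight  # hypothetical mapping, for illustration only
    parts = []
    for c in candidates:
        parts.append(c.text if c.confidence >= threshold else "(___)")
    return " ".join(parts)

if __name__ == "__main__":
    units = [Candidate("the meeting", 0.93), Candidate("starts at", 0.88), Candidate("noon", 0.41)]
    print(build_transcript(units, accuracy_weight=0.6))
    # -> "the meeting starts at (___)"
```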
  • Patent number: 9196253
    Abstract: According to an embodiment, an information processing apparatus includes a dividing unit, an assigning unit, and a generating unit. The dividing unit is configured to divide speech data into pieces of utterance data. The assigning unit is configured to assign speaker identification information to each piece of utterance data based on an acoustic feature of each piece of utterance data. The generating unit is configured to generate a candidate list that indicates candidate speaker names so as to enable a user to determine a speaker name to be given to the piece of utterance data identified by instruction information, based on operation history information in which at least pieces of utterance identification information, pieces of speaker identification information, and speaker names given by the user to the respective pieces of utterance data are associated with one another.
    Type: Grant
    Filed: August 6, 2013
    Date of Patent: November 24, 2015
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Osamu Nishiyama, Taira Ashikawa, Tomoo Ikeda, Kouji Ueno, Kouta Nakata
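    Sketch: one way to picture the candidate-list generation from operation history, assuming a simple record format; the field names and the frequency ranking are illustrative, not the patented method.
```python
from collections import Counter

def candidate_speaker_names(history: list[dict], speaker_id: str, top_n: int = 3) -> list[str]:
    """Rank the speaker names the user has previously given to utterances carrying
    the same speaker identification label, most frequent first.

    history: records like {"utterance_id": ..., "speaker_id": ..., "speaker_name": ...}
    """
    counts = Counter(
        rec["speaker_name"] for rec in history if rec["speaker_id"] == speaker_id
    )
    return [name for name, _ in counts.most_common(top_n)]

if __name__ == "__main__":
    history = [
        {"utterance_id": 1, "speaker_id": "S1", "speaker_name": "Tanaka"},
        {"utterance_id": 2, "speaker_id": "S1", "speaker_name": "Tanaka"},
        {"utterance_id": 3, "speaker_id": "S2", "speaker_name": "Sato"},
    ]
    print(candidate_speaker_names(history, "S1"))  # ['Tanaka']
```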
  • Publication number: 20150170649
    Abstract: According to an embodiment, a memory controller stores, in a memory, character strings in voice text obtained through voice recognition on voice data, a node index, a recognition score, and a voice index. A detector detects a reproduction section of the voice data. An obtainer obtains the reading of a phrase in a text written down from the reproduced voice data, and obtains the insertion position of character strings. A searcher searches for a character string including the reading. A determiner determines whether to perform display based on the recognition score corresponding to the retrieved character string. A history updater stores, in a memory, candidate history data indicating the retrieved character string, the recognition score, and the character insertion position. A threshold updater decides on a display threshold value using the recognition score of the candidate history data and/or the recognition score of the character string selected by a selector.
    Type: Application
    Filed: December 8, 2014
    Publication date: June 18, 2015
    Inventors: Taira Ashikawa, Kouji Ueno
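    Sketch: a rough illustration of the search-and-threshold behavior in the abstract, assuming a pre-built voice index with readings and recognition scores; the threshold-update rule shown is only one plausible choice, not the claimed one.
```python
def suggest(voice_index: list[dict], reading: str, display_threshold: float):
    """Return the best-scoring recognized string whose reading starts with the typed
    reading, but only if its recognition score clears the display threshold."""
    matches = [e for e in voice_index if e["reading"].startswith(reading)]
    if not matches:
        return None
    best = max(matches, key=lambda e: e["score"])
    return best["text"] if best["score"] >= display_threshold else None

def update_threshold(accepted_scores: list[float], rejected_scores: list[float]) -> float:
    """One plausible rule: place the threshold between the scores of candidates the
    user accepted and those the user ignored (purely illustrative)."""
    lo = max(rejected_scores, default=0.0)
    hi = min(accepted_scores, default=1.0)
    return (lo + hi) / 2

if __name__ == "__main__":
    index = [{"reading": "kaigi", "text": "会議", "score": 0.82},
             {"reading": "kaisha", "text": "会社", "score": 0.55}]
    print(suggest(index, "kai", display_threshold=0.6))  # 会議
```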
  • Publication number: 20150025877
    Abstract: According to an embodiment, a character input device includes a first obtainer, a determiner, a first generator, and an outputter. The first obtainer receives an input of characters from a user and obtains an input character string. The determiner infers, from the input character string, word notations intended by the user and relations of connection between the word notations, and determines routes each of which represents a relation of connection having a high likelihood of serving as a notation candidate intended by the user. The first generator extracts, from a group of word notations included in the routes, the word notations to be output and generates layout information used in outputting the extracted word notations as the notation candidates. The outputter outputs the layout information.
    Type: Application
    Filed: July 17, 2014
    Publication date: January 22, 2015
    Inventors: Kouji Ueno, Tomoo Ikeda, Taira Ashikawa, Kouta Nakata
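    Sketch: a toy word-lattice search that ranks candidate routes by word and connection scores, in the spirit of the route determination described above; the lattice, the scores, and the scoring rule are invented for illustration.
```python
import itertools

# Hypothetical toy lattice: each position offers alternative word notations with scores,
# and a connection bonus rewards likely adjacent pairs.
SEGMENTS = [
    [("きょう", 0.4), ("今日", 0.6)],
    [("は", 0.9), ("歯", 0.1)],
    [("晴れ", 0.7), ("腫れ", 0.3)],
]
CONNECTION_BONUS = {("今日", "は"): 0.2, ("は", "晴れ"): 0.2}

def best_routes(segments, top_n=3):
    """Enumerate candidate routes through the lattice and rank them by the sum of word
    scores plus connection scores (a stand-in for the likelihood in the abstract)."""
    routes = []
    for combo in itertools.product(*segments):
        words = [w for w, _ in combo]
        score = sum(s for _, s in combo)
        score += sum(CONNECTION_BONUS.get(pair, 0.0) for pair in zip(words, words[1:]))
        routes.append((score, "".join(words)))
    routes.sort(reverse=True)
    return routes[:top_n]

if __name__ == "__main__":
    for score, candidate in best_routes(SEGMENTS):
        print(f"{candidate}  (score {score:.2f})")
```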
  • Publication number: 20140372117
    Abstract: According to an embodiment, a transcription support device includes a first voice acquisition unit, a second voice acquisition unit, a recognizer, a text acquisition unit, an information acquisition unit, a determination unit, and a controller. The first voice acquisition unit acquires a first voice to be transcribed. The second voice acquisition unit acquires a second voice uttered by a user. The recognizer recognizes the second voice to generate a first text. The text acquisition unit acquires a second text obtained by correcting the first text by the user. The information acquisition unit acquires reproduction information representing a reproduction section of the first voice. The determination unit determines a reproduction speed of the first voice on the basis of the first voice, the second voice, the second text, and the reproduction information. The controller reproduces the first voice at the determined reproduction speed.
    Type: Application
    Filed: March 5, 2014
    Publication date: December 18, 2014
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: Kouta Nakata, Taira Ashikawa, Tomoo Ikeda, Kouji Ueno
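    Sketch: an illustrative heuristic for choosing the reproduction speed from the re-spoken audio, the corrected text, and the reproduction section; the specific thresholds and scaling factors are assumptions, not the claimed determination method.
```python
import difflib

def playback_speed(section_sec: float, respeak_sec: float,
                   recognized: str, corrected: str,
                   base_speed: float = 1.0) -> float:
    """Pick a playback speed for the original audio: slow down when the user's
    re-speaking takes much longer than the section, or when many corrections were
    needed (illustrative heuristic only)."""
    lag = respeak_sec / section_sec if section_sec > 0 else 1.0
    similarity = difflib.SequenceMatcher(None, recognized, corrected).ratio()
    speed = base_speed
    if lag > 1.5 or similarity < 0.8:
        speed *= 0.75   # the passage was hard to transcribe: replay it slower
    elif lag < 1.0 and similarity > 0.95:
        speed *= 1.25   # the passage was easy: replay it faster
    return round(speed, 2)

if __name__ == "__main__":
    print(playback_speed(4.0, 7.0, "meeting at noom", "meeting at noon"))  # 0.75
```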
  • Publication number: 20140303974
    Abstract: According to an embodiment, a text generator includes a recognizer, a selector, and a generation unit. The recognizer is configured to recognize an acquired sound and obtain recognized character strings in recognition units and confidence levels of the recognized character strings. The selector is configured to select at least one of the recognized character strings used for a transcribed sentence on the basis of at least one of a parameter about transcription accuracy and a parameter about a workload needed for transcription. The generation unit is configured to generate the transcribed sentence using the selected recognized character strings.
    Type: Application
    Filed: March 12, 2014
    Publication date: October 9, 2014
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: Taira Ashikawa, Osamu Nishiyama, Tomoo Ikeda, Koji Ueno, Kouta Nakata
  • Publication number: 20140207454
    Abstract: According to an embodiment, a text reproduction device includes a setting unit, an acquiring unit, an estimating unit, and a modifying unit. The setting unit is configured to set a pause position delimiting text in response to input data that is input by the user during reproduction of speech data. The acquiring unit is configured to acquire a reproduction position of the speech data being reproduced when the pause position is set. The estimating unit is configured to estimate a more accurate position corresponding to the pause position by matching the text around the pause position with the speech data around the reproduction position. The modifying unit is configured to modify the reproduction position to the estimated more accurate position in the speech data, and set the pause position so that reproduction of the speech data can be started from the modified reproduction position when the pause position is designated by the user.
    Type: Application
    Filed: January 17, 2014
    Publication date: July 24, 2014
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: Kouta Nakata, Taira Ashikawa, Tomoo Ikeda, Kouji Ueno, Osamu Nishiyama
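    Sketch: a simplified alignment step that snaps a roughly marked pause position to a nearby recognized word boundary, assuming word-level timestamps are available; the matching criterion is illustrative only.
```python
import difflib

def refine_pause_position(text_before_pause: str,
                          word_timings: list[tuple[str, float]],
                          rough_position_sec: float,
                          window_sec: float = 3.0) -> float:
    """Snap a roughly marked pause position to the end time of the recognized word near
    the rough reproduction position that best matches the tail of the typed text."""
    words = text_before_pause.split()
    tail = words[-1] if words else ""
    nearby = [(w, t) for w, t in word_timings if abs(t - rough_position_sec) <= window_sec]
    if not nearby or not tail:
        return rough_position_sec
    best = max(nearby, key=lambda wt: difflib.SequenceMatcher(None, wt[0], tail).ratio())
    return best[1]

if __name__ == "__main__":
    timings = [("the", 1.1), ("meeting", 1.6), ("starts", 2.1), ("now", 2.4)]
    print(refine_pause_position("the meeting starts", timings, rough_position_sec=2.6))  # 2.1
```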
  • Patent number: 8675260
    Abstract: According to one embodiment, the image processing apparatus includes a printing control unit, an image reading unit, an extracting unit, a difference image extracting unit, and a determination unit. The printing control unit controls printing of a plurality of pages on one sheet of paper according to print setting information which indicates a printing form, and printing of a code indicating the print setting information on the paper. The image reading unit reads the paper. The extracting unit extracts the code from the read image. The difference image extracting unit extracts a difference image between the printed image and the read image.
    Type: Grant
    Filed: March 14, 2012
    Date of Patent: March 18, 2014
    Assignee: Toshiba Tec Kabushiki Kaisha
    Inventors: Shigeo Uchida, Taira Ashikawa, Satoshi Oyama, Katsuhito Mochizuki
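    Sketch: the print-code-scan-diff round trip in miniature, with print settings serialized as a plain string standing in for the printed code and images represented as small pixel grids; all of this is illustrative, not the device's implementation.
```python
import json

def encode_settings(settings: dict) -> str:
    """Serialize print settings (e.g. a 2-in-1 layout) into a code string; a real
    device would render this on the sheet as a barcode or similar mark."""
    return json.dumps(settings, sort_keys=True)

def decode_settings(code: str) -> dict:
    return json.loads(code)

def difference_image(printed, scanned):
    """Pixel-wise difference mask between the originally printed image and the scanned
    image: 1 where the scan deviates (e.g. handwritten annotations), 0 elsewhere."""
    return [[1 if p != s else 0 for p, s in zip(prow, srow)]
            for prow, srow in zip(printed, scanned)]

if __name__ == "__main__":
    code = encode_settings({"pages_per_sheet": 2, "duplex": False})
    printed = [[0, 0, 0], [0, 1, 0]]
    scanned = [[0, 1, 0], [0, 1, 0]]                  # one extra mark added by hand
    print(decode_settings(code)["pages_per_sheet"])   # 2
    print(difference_image(printed, scanned))         # [[0, 1, 0], [0, 0, 0]]
```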
  • Publication number: 20140046666
    Abstract: According to an embodiment, an information processing apparatus includes a dividing unit, an assigning unit, and a generating unit. The dividing unit is configured to divide speech data into pieces of utterance data. The assigning unit is configured to assign speaker identification information to each piece of utterance data based on an acoustic feature of each piece of utterance data. The generating unit is configured to generate a candidate list that indicates candidate speaker names so as to enable a user to determine a speaker name to be given to the piece of utterance data identified by instruction information, based on operation history information in which at least pieces of utterance identification information, pieces of speaker identification information, and speaker names given by the user to the respective pieces of utterance data are associated with one another.
    Type: Application
    Filed: August 6, 2013
    Publication date: February 13, 2014
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: Osamu Nishiyama, Taira Ashikawa, Tomoo Ikeda, Kouji Ueno, Kouta Nakata
  • Publication number: 20120236368
    Abstract: According to one embodiment, the image processing apparatus includes a printing control unit, an image reading unit, an extracting unit, a difference image extracting unit, and a determination unit. The printing control unit controls printing of a plurality of pages on one sheet of paper according to print setting information which indicates a printing form, and printing of a code indicating the print setting information on the paper. The image reading unit reads the paper. The extracting unit extracts the code from the read image. The difference image extracting unit extracts a difference image between the printed image and the read image.
    Type: Application
    Filed: March 14, 2012
    Publication date: September 20, 2012
    Applicant: Toshiba Tec Kabushiki Kaisha
    Inventors: Shigeo Uchida, Taira Ashikawa, Satoshi Oyama, Katsuhito Mochizuki
  • Publication number: 20110228351
    Abstract: According to one embodiment, an image processing apparatus includes a scanner, a display, an accept unit, an edit unit, and a storage. The scanner is configured to scan a plurality of originals and read images thereof. The display is configured to display a preview image each time an image of one original is read by the scanner. The accept unit is configured to accept an edit process designated by an operator with respect to the preview image displayed by the display. The edit unit is configured to execute the edit process, which is accepted by the accept unit, on the image read by the scanner. The storage is configured to store the image edited by the edit unit.
    Type: Application
    Filed: February 23, 2011
    Publication date: September 22, 2011
    Applicant: Toshiba Tec Kabushiki Kaisha
    Inventors: Shigeo Uchida, Taira Ashikawa, Satoshi Oyama, Katsuhito Mochizuki
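    Sketch: the per-original preview/edit/store loop, with a single stand-in edit operation; the operation set and the prompt mechanism are assumptions for illustration.
```python
def rotate_180(image):
    """A stand-in edit operation: rotate the scanned page by 180 degrees."""
    return [row[::-1] for row in image[::-1]]

EDITS = {"rotate": rotate_180, "none": lambda image: image}

def scan_preview_edit(pages, choose_edit):
    """For each scanned original, show a preview, ask which edit to apply, apply it,
    and store the result (illustrative of the per-page flow in the abstract)."""
    stored = []
    for i, page in enumerate(pages, start=1):
        print(f"Preview of original {i}: {page}")
        operation = choose_edit(i)            # e.g. prompt the operator
        stored.append(EDITS[operation](page))
    return stored

if __name__ == "__main__":
    scans = [[[1, 0], [0, 0]], [[0, 0], [0, 1]]]
    result = scan_preview_edit(scans, choose_edit=lambda i: "rotate" if i == 1 else "none")
    print(result)  # [[[0, 0], [0, 1]], [[0, 0], [0, 1]]]
```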
  • Publication number: 20110228322
    Abstract: According to one embodiment, a map information processing apparatus includes a generation unit, a transmitter, a receiver, an extraction unit, a determination unit, and a registration unit. The generation unit is configured to generate a print image for printing a map designated by a user. The transmitter is configured to transmit the print image to a print device. The receiver is configured to receive an image transmitted from the print device. The extraction unit is configured to extract a differential image by extracting a difference between the image and the print image. The determination unit is configured to determine registration information which is a target of registration, based on the differential image and the map corresponding to the print image. The registration unit is configured to register the registration information in an information registration device.
    Type: Application
    Filed: February 23, 2011
    Publication date: September 22, 2011
    Applicant: Toshiba Tec Kabushiki Kaisha
    Inventors: Taira Ashikawa, Shigeo Uchida, Satoshi Oyama, Katsuhito Mochizuki
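    Sketch: the difference-extraction and registration-determination steps on toy data, mapping each hand-drawn mark to the nearest named place on the printed map; the nearest-place rule is an assumption, not the patented determination.
```python
def extract_difference(print_image, scanned_image):
    """Coordinates where the scanned sheet differs from the printed map,
    i.e. marks the user drew on the printout."""
    return [(r, c)
            for r, row in enumerate(print_image)
            for c, (p, s) in enumerate(zip(row, scanned_image[r]))
            if p != s]

def determine_registration(diff_points, map_places):
    """Map each hand-drawn mark to the nearest named place on the printed map
    (a simple stand-in for the determination unit in the abstract)."""
    def nearest(point):
        return min(map_places, key=lambda place: (place["row"] - point[0]) ** 2
                                                 + (place["col"] - point[1]) ** 2)
    return [nearest(p)["name"] for p in diff_points]

if __name__ == "__main__":
    printed = [[0, 0, 0], [0, 0, 0]]
    scanned = [[0, 1, 0], [0, 0, 0]]                      # user circled something
    places = [{"name": "Station", "row": 0, "col": 2},
              {"name": "Park", "row": 1, "col": 0}]
    marks = extract_difference(printed, scanned)          # [(0, 1)]
    print(determine_registration(marks, places))          # ['Station']
```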