Patents by Inventor Kenta Cho

Kenta Cho has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20180067920
    Abstract: According to an embodiment, a dictionary updating apparatus includes a candidate extraction unit, a selection control unit, and a word registration unit. Based on a recognition result text obtained by a voice recognition engine performing voice recognition processing using a word dictionary, and on a correction result text obtained by correcting at least a part of the recognition result text, the candidate extraction unit extracts candidates of words to be additionally registered in the word dictionary. The selection control unit generates a selection screen on which the extracted candidates are selectably displayed together with information indicating, at least, the influence on the voice recognition processing when the candidates are additionally registered in the word dictionary, and accepts an operation of selecting the candidates displayed on the selection screen. The word registration unit additionally registers the candidates selected on the selection screen in the word dictionary.
    Type: Application
    Filed: August 29, 2017
    Publication date: March 8, 2018
    Applicants: Kabushiki Kaisha Toshiba, Toshiba Digital Solutions Corporation
    Inventors: Kenta CHO, Kazuyuki Goto, Yasunari Miyabe, Masahisa Shinozaki, Keisuke Sakanushi, Guowei Zu, Kaoru Hirano
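    The candidate extraction step described in the abstract above can be sketched minimally: words that appear in the corrected text but in neither the recognition result nor the current word dictionary are natural registration candidates. This is an illustrative simplification, not the patented method; the function name and the whitespace tokenization are assumptions.

    ```python
    def extract_candidates(recognized, corrected, dictionary):
        """Return words present in the corrected text but absent from both the
        recognition result and the word dictionary: candidates for additional
        registration in the dictionary."""
        return sorted(set(corrected.split()) - set(recognized.split()) - set(dictionary))
    ```

    For example, if the engine output "meet at tosh iba hq" and the user corrected it to "meet at toshiba hq", the only new out-of-dictionary word is "toshiba".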
  • Publication number: 20170365258
    Abstract: According to an embodiment, an utterance presentation device includes an utterance recording unit, a voice recognition unit, an association degree calculation unit, and a UI control unit. The utterance recording unit is configured to record vocal utterances. The voice recognition unit is configured to recognize the recorded utterances by voice recognition. The association degree calculation unit is configured to calculate degrees of association of the recognized utterances with a character string specified from among character strings displayed in a second display region of a user interface (UI) screen having a first display region and the second display region. The UI control unit is configured to display voice recognition results of utterances selected based on the degrees of association in the first display region of the UI screen.
    Type: Application
    Filed: September 1, 2017
    Publication date: December 21, 2017
    Inventors: Kenta Cho, Toshiyuki Kano
  • Publication number: 20170277679
    Abstract: An information processing device includes an extracting unit, a first calculating unit, and a second calculating unit. From a sentence included in a set of sentences, the extracting unit extracts compound words, each made of a plurality of words, and first words other than the words constituting the compound words. The first calculating unit calculates, based on the occurrence frequencies of the first words and the occurrence frequencies of the compound words, first degrees of importance indicating the degrees of importance of the first words and the degrees of importance of the compound words. With respect to first sentences included in the set of sentences, the second calculating unit calculates second degrees of importance, which indicate the degrees of importance of the first sentences, based on the first degrees of importance of the first words and the first degrees of importance of the compound words.
    Type: Application
    Filed: March 13, 2017
    Publication date: September 28, 2017
    Inventors: Yasunari Miyabe, Kazuyuki Goto, Kenta Cho, Masahisa Shinozaki, Keisuke Sakanushi, Guowei Zu, Kaoru Hirano
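    The two-stage scoring in the abstract above admits a minimal sketch: term importance from occurrence frequency, then sentence importance as the sum over the sentence's terms. Assumptions not in the abstract: compounds are supplied as known token pairs, and raw frequency stands in for whatever importance measure the patent actually uses.

    ```python
    from collections import Counter
    from itertools import chain

    def extract_terms(sentence, compounds):
        """Split a tokenized sentence into compound words (given as token
        tuples) and the remaining single "first" words."""
        terms, i = [], 0
        while i < len(sentence):
            if tuple(sentence[i:i + 2]) in compounds:
                terms.append(" ".join(sentence[i:i + 2]))
                i += 2
            else:
                terms.append(sentence[i])
                i += 1
        return terms

    def score_sentences(sentences, compounds):
        # First degrees of importance: occurrence frequencies of first words
        # and compound words across the whole set of sentences.
        per_sentence = [extract_terms(s, compounds) for s in sentences]
        freq = Counter(chain.from_iterable(per_sentence))
        # Second degrees of importance: one score per sentence, summing the
        # first degrees of importance of the terms it contains.
        return [sum(freq[t] for t in terms) for terms in per_sentence]
    ```

    Treating "machine learning" as one compound keeps its component words from inflating the scores of unrelated sentences that mention only "machine" or "learning".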
  • Publication number: 20170277672
    Abstract: An information processing device according to an embodiment includes a keyword extracting unit, a tag generating unit and a UI control unit. The keyword extracting unit extracts a keyword from time-series texts within a time range set by a user. The tag generating unit generates a tag corresponding to the time period from the first appearing time until the last appearing time of a same keyword appearing plural times within a duration set according to the time range. The UI control unit creates a UI screen including a first display area in which a time axis corresponding to the time range is displayed and a second display area in which the tag is displayed in correspondence with its time period on the time axis; when a tag is selected, the UI control unit resets the time range to the selected tag's time period and updates the UI screen.
    Type: Application
    Filed: March 9, 2017
    Publication date: September 28, 2017
    Inventors: Kenta Cho, Yasunari Miyabe, Kazuyuki Goto, Masahisa Shinozaki, Keisuke Sakanushi
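    The tag-generation rule above (a tag spans from a repeated keyword's first appearance to its last) can be sketched in a few lines. This is an illustrative reduction under assumed inputs: timestamped keyword occurrences and a hypothetical `min_count` parameter for "appearing plural times".

    ```python
    def generate_tags(occurrences, min_count=2):
        """occurrences: iterable of (time, keyword) pairs inside the user-set
        time range. Returns {keyword: (first_time, last_time)} for keywords
        appearing at least min_count times."""
        spans, counts = {}, {}
        for t, kw in occurrences:
            counts[kw] = counts.get(kw, 0) + 1
            first, last = spans.get(kw, (t, t))
            spans[kw] = (min(first, t), max(last, t))
        return {kw: span for kw, span in spans.items() if counts[kw] >= min_count}
    ```

    A keyword mentioned once yields no tag; a keyword recurring across the range yields one tag whose span can then be used to reset the displayed time range.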
  • Publication number: 20160275967
    Abstract: According to one embodiment, a presentation support apparatus includes a switcher, an acquirer, a recognizer and a controller. The switcher switches a first content to a second content in accordance with an instruction of a first user, the first content and the second content being presented to the first user. The acquirer acquires a speech related to the first content from the first user as a first audio signal. The recognizer performs speech recognition on the first audio signal to obtain a speech recognition result. When the first content is switched to the second content, the controller continues outputting the first content to a second user during a first period after the speech recognition result is presented to the second user.
    Type: Application
    Filed: March 9, 2016
    Publication date: September 22, 2016
    Inventors: Kazuo Sumita, Satoshi Kamatani, Kazuhiko Abe, Kenta Cho
  • Patent number: 9338049
    Abstract: According to an embodiment, a server device includes a first registering unit, a second registering unit, and a providing unit. The first registering unit is configured to acquire first action information regarding an action performed by a first user from a client device, and register the first action information in an action information storage unit in which pieces of action information of users are stored in association with labels of the respective pieces of action information. The second registering unit is configured to specify at least one piece of action information similar to the first action information among the pieces of action information, and register the label associated with the specified piece of action information in the action information storage unit in association with the first action information. The providing unit is configured to provide the label associated with the first action information to the client device.
    Type: Grant
    Filed: December 28, 2012
    Date of Patent: May 10, 2016
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Shinichi Nagano, Masayuki Okamoto, Kouji Ueno, Kenta Sasaki, Kenta Cho
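    The labeling scheme in patent 9338049 above (reuse the label of the most similar stored action) can be sketched as nearest-neighbor label propagation. The cosine measure, list-based store, and `threshold` parameter are illustrative assumptions, not details from the patent.

    ```python
    def cosine(a, b):
        """Cosine similarity between two feature vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    def label_action(action, store, threshold=0.8):
        """store: list of [feature_vector, label] pairs (the action information
        storage unit). Register the new action; if a stored action is similar
        enough, associate its label with the new action and return it."""
        best = max(store, key=lambda entry: cosine(action, entry[0]), default=None)
        label = best[1] if best and cosine(action, best[0]) >= threshold else None
        store.append([action, label])
        return label
    ```

    The returned label is what the server would provide back to the client device; an unmatched action is stored unlabeled.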
  • Publication number: 20160078020
    Abstract: According to one embodiment, a speech translation apparatus includes a recognizer, a detector, a convertor and a translator. The recognizer recognizes a speech in a first language to generate a recognition result. The detector detects translation segments suitable for machine translation from the recognition result to generate translation-segmented character strings that are obtained by dividing the recognition result based on the detected translation segments. The convertor converts the translation-segmented character strings into converted character strings which are expressions suitable for the machine translation. The translator translates the converted character strings into a second language which is different from the first language to generate translated character strings.
    Type: Application
    Filed: September 8, 2015
    Publication date: March 17, 2016
    Applicants: KABUSHIKI KAISHA TOSHIBA, TOSHIBA SOLUTIONS CORPORATION
    Inventors: Kazuo SUMITA, Satoshi KAMATANI, Kazuhiko ABE, Kenta CHO
  • Patent number: 9195735
    Abstract: According to one embodiment, an information extracting method includes: collecting a text in which a keyword of interest appears, the keyword of interest, and a time of creation of the text; extracting a keyword, other than the keyword of interest, included in the text, together with the time of creation; extracting, as a local hot word, a keyword whose time score, obtained on the basis of the keyword's appearance frequency in a time interval, exceeds a first threshold value and whose local score, obtained from the keyword's appearance frequency in a predetermined local area, exceeds a second threshold value, and also extracting the time interval of the extracted keyword and the corresponding keyword of interest; and storing the extracted local hot word, the time interval, and the keyword of interest.
    Type: Grant
    Filed: December 23, 2013
    Date of Patent: November 24, 2015
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Kenta Sasaki, Shinichi Nagano, Koji Ueno, Kenta Cho
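    The double-threshold test in the abstract above reduces to a simple filter once time scores and local scores are computed. How those scores are derived is the substance of the patent; the sketch below assumes they are already available and only shows the thresholding, with illustrative names throughout.

    ```python
    def local_hot_words(keyword_stats, time_threshold, local_threshold):
        """keyword_stats: keyword -> (time_score, local_score, time_interval).
        A keyword qualifies as a local hot word when its time score exceeds the
        first threshold and its local score exceeds the second."""
        return {kw: interval
                for kw, (ts, ls, interval) in keyword_stats.items()
                if ts > time_threshold and ls > local_threshold}
    ```

    A keyword that spikes in time but not in any local area (or vice versa) is filtered out, which is the point of requiring both scores.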
  • Publication number: 20150199567
    Abstract: According to one embodiment, a document classification assisting apparatus includes an input unit, an extracting unit, an amount calculator, a setting unit, a calculator, and a storage. The input unit inputs documents including stroke information. The extracting unit extracts, from the stroke information, at least one of figure, annotation and text information. The amount calculator calculates, from the information extracted, feature amounts that enable comparison in similarity between the documents. The setting unit sets clusters including representative vectors that indicate features of the clusters and each include the feature amounts, and detects to which one of the clusters each of the documents belongs. The calculator calculates, as a classification rule, at least one of the feature amounts included in the representative vectors and characterizing the representative vectors. The storage stores the classification rule.
    Type: Application
    Filed: March 25, 2015
    Publication date: July 16, 2015
    Inventors: Kosei Fume, Masaru Suzuki, Kenta Cho, Masayuki Okamoto
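    The cluster-detection step in the abstract above (find which cluster's representative vector a document belongs to) can be sketched as nearest-representative assignment. Squared Euclidean distance is an assumption; the patent does not specify the metric.

    ```python
    def assign_cluster(doc_features, representatives):
        """Return the index of the cluster whose representative vector is
        closest (squared Euclidean distance) to the document's feature
        amounts."""
        def sq_dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(range(len(representatives)),
                   key=lambda i: sq_dist(doc_features, representatives[i]))
    ```

    The feature amounts characterizing each representative vector relative to the others would then be reported as the cluster's classification rule.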
  • Publication number: 20150199582
    Abstract: According to one embodiment, a character recognition apparatus includes a first generation unit, an estimation unit, a second generation unit and a search unit. The first generation unit generates a user dictionary in which a preferred character is registered. The estimation unit estimates a first separation between characters based on one or more of a layout of a target text and marking information. The second generation unit generates a lattice structure by estimating character segments expressed by strokes based on the first separation. If the lattice structure includes a path corresponding to the preferred character, the search unit searches the lattice structure for such a path to obtain a character recognition result.
    Type: Application
    Filed: March 25, 2015
    Publication date: July 16, 2015
    Inventors: Masayuki Okamoto, Kenta Cho, Kosei Fume
  • Publication number: 20150179173
    Abstract: According to an embodiment, a communication support apparatus converts conversation between users into text data by using a dictionary and causes a terminal device to display the text data. The apparatus includes an event detection unit, a word extraction unit, and a word selection unit. The event detection unit analyzes a sentence obtained by converting a voice of an utterance of a conference participant into text data to detect an event indicating a failure of communication through conversation. The word extraction unit extracts words from the sentence in which the event is detected by the event detection unit. The word selection unit selects, from among the words extracted by the word extraction unit, a word causing a failure of the communication based on a value of a communication failure index calculated from the event detected in the sentence including the words extracted therefrom.
    Type: Application
    Filed: August 13, 2014
    Publication date: June 25, 2015
    Inventors: Kenta Cho, Toshiyuki Kano
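    The word-selection step in the abstract above can be sketched by accumulating a failure index per word over the sentences in which failure events were detected. The event representation and weights are illustrative assumptions; the patent's actual index calculation may differ.

    ```python
    def select_failure_word(extracted_words, failure_events):
        """failure_events: list of (sentence_words, failure_weight) for
        sentences where a communication-failure event was detected. A word's
        failure index sums the weights of failure events in sentences that
        contain it; the word with the highest index is selected."""
        index = {}
        for words, weight in failure_events:
            for w in set(words):
                index[w] = index.get(w, 0.0) + weight
        scores = {w: index.get(w, 0.0) for w in extracted_words}
        return max(scores, key=scores.get) if scores else None
    ```

    The selected word is the likeliest culprit for the misunderstanding, e.g. a term one participant keeps asking about.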
  • Publication number: 20140304118
    Abstract: According to an embodiment, a product comparison apparatus for comparing a plurality of products includes an operation acceptance unit, a comparison table generator, and a display unit. The operation acceptance unit is configured to accept a user operation for selecting a plurality of annotated documents which include specification information items related to the products and annotations appended by a user. The comparison table generator is configured to generate a comparison table comprising the specification information items related to the products, in accordance with the user operation. The display unit is configured to display the comparison table.
    Type: Application
    Filed: March 3, 2014
    Publication date: October 9, 2014
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Kenta Cho, Masayuki Okamoto, Kosei Fume, Masaru Suzuki
  • Publication number: 20140289247
    Abstract: According to an embodiment, an annotation search apparatus includes a feature extractor and an annotation search unit. The feature extractor is configured to extract an annotation feature from an input document and an annotation appended by a user to the input document. The annotation search unit is configured to search annotation information items to retrieve at least one of the annotation information items according to an intended purpose of the user, one of the annotation information items corresponding to the input document and including the annotation feature.
    Type: Application
    Filed: March 3, 2014
    Publication date: September 25, 2014
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Masayuki Okamoto, Masaru Suzuki, Kosei Fume, Kenta Cho
  • Publication number: 20140289632
    Abstract: According to an embodiment, a picture drawing support apparatus includes following components. The feature extractor extracts a feature amount from a picture drawn by a user. The speech recognition unit performs speech recognition on speech input by the user. The keyword extractor extracts at least one keyword from a result of the speech recognition. The image search unit retrieves one or more images corresponding to the at least one keyword from a plurality of images prepared in advance. The image selector selects an image which matches the picture, from the one or more images based on the feature amount. The image deformation unit deforms the image based on the feature amount to generate an output image. The presentation unit presents the output image.
    Type: Application
    Filed: March 4, 2014
    Publication date: September 25, 2014
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Masaru Suzuki, Masayuki Okamoto, Kenta Cho, Kosei Fume
  • Publication number: 20140289238
    Abstract: According to one embodiment, a document creation support apparatus includes a determination unit, a search unit and a presentation unit. The determination unit determines a document type that is a type of a document containing a target character string, based on feature values including a first character recognition result and a first position information item. The search unit searches, if a search condition for searching for relevant character strings is satisfied, one or more databases for the relevant character strings to obtain the relevant character strings in order of decreasing score based on priorities, each of the priorities being set for each of the one or more databases according to the document type. The presentation unit presents the relevant character strings in order of decreasing score.
    Type: Application
    Filed: February 21, 2014
    Publication date: September 25, 2014
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Kosei Fume, Masaru Suzuki, Masayuki Okamoto, Kenta Cho
  • Patent number: 8788621
    Abstract: An action acquiring unit acquires action information corresponding to operation information from a first storage unit for a first user, and stores the acquired action information in a third storage unit. A receiving unit receives, via a network, action information of a second user from an external device. A situation acquiring unit acquires, from a second storage unit, a communication situation corresponding to the received action information. A writing unit additionally writes the action information of the first user indicated by the communication situation to the third storage unit.
    Type: Grant
    Filed: March 10, 2008
    Date of Patent: July 22, 2014
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Masayuki Okamoto, Naoki Iketani, Hideo Umeki, Sogo Tsuboi, Kenta Cho, Keisuke Nishimura
  • Publication number: 20140188883
    Abstract: According to one embodiment, an information extracting method includes: collecting a text in which a keyword of interest appears, the keyword of interest, and a time of creation of the text; extracting a keyword, other than the keyword of interest, included in the text, together with the time of creation; extracting, as a local hot word, a keyword whose time score, obtained on the basis of the keyword's appearance frequency in a time interval, exceeds a first threshold value and whose local score, obtained from the keyword's appearance frequency in a predetermined local area, exceeds a second threshold value, and also extracting the time interval of the extracted keyword and the corresponding keyword of interest; and storing the extracted local hot word, the time interval, and the keyword of interest.
    Type: Application
    Filed: December 23, 2013
    Publication date: July 3, 2014
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: Kenta SASAKI, Shinichi NAGANO, Koji UENO, Kenta CHO
  • Patent number: 8600918
    Abstract: According to one embodiment, an action history search device receives an inquiry from a user and outputs an inquiry time and a target of inquiry, decides a range for searching action history information representing the history of the user's action together with a time of the user's action using the target of inquiry, and calculates an elapsed time from the time of the user's action within the range to the inquiry time. The device judges, using the elapsed time and a narrowing-down model, a probability for each response candidate to the inquiry based on the history of the action within the range. The narrowing-down model is used for judging, according to the elapsed time, the probability that a response candidate to the inquiry obtained from the history of the action is the user's desired response. The device outputs the response candidate according to the probability.
    Type: Grant
    Filed: September 15, 2011
    Date of Patent: December 3, 2013
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Hisao Setoguchi, Yuzo Okamoto, Kenta Cho, Takahiro Kawamura
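    The elapsed-time narrowing in patent 8600918 above can be sketched with a stand-in narrowing-down model: an exponential decay of candidate probability with elapsed time. The decay form, `half_life` parameter, and function names are all illustrative assumptions; the patent's actual model is learned/judged differently.

    ```python
    import math

    def rank_candidates(candidates, inquiry_time, half_life=3600.0):
        """candidates: list of (response, action_time). Weight each candidate
        by an exponential decay of its elapsed time (a stand-in for the
        narrowing-down model), normalize to probabilities, and rank."""
        weights = [(resp, math.exp(-math.log(2) * (inquiry_time - t) / half_life))
                   for resp, t in candidates]
        total = sum(w for _, w in weights) or 1.0
        return sorted(((resp, w / total) for resp, w in weights),
                      key=lambda rw: rw[1], reverse=True)
    ```

    With a one-hour half-life, a response candidate drawn from an action an hour before the inquiry carries half the weight of one drawn from an action at the inquiry time.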
  • Publication number: 20130268736
    Abstract: According to one embodiment, a sensor data recording apparatus includes the following elements. The temporary storage unit temporarily stores the sensor data acquired from sensors. The data selector selects sensor data stored in the temporary storage unit for each sensor. The sensor data storage unit stores the sensor data selected for each sensor. The recording method controller controls at least one of a recording method of storing the sensor data in the temporary storage unit and a recording method of storing the sensor data in the sensor data storage unit, based on the recording status, which is statistical information about storing of the sensor data in the sensor data storage unit.
    Type: Application
    Filed: April 4, 2013
    Publication date: October 10, 2013
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Masayuki OKAMOTO, Takao MARUKAME, Kouji UENO, Takahiro KURITA, Atsuhiro KINOSHITA, Kenta CHO
  • Publication number: 20130262674
    Abstract: According to an embodiment, a server device includes a first registering unit, a second registering unit, and a providing unit. The first registering unit is configured to acquire first action information regarding an action performed by a first user from a client device, and register the first action information in an action information storage unit in which pieces of action information of users are stored in association with labels of the respective pieces of action information. The second registering unit is configured to specify at least one piece of action information similar to the first action information among the pieces of action information, and register the label associated with the specified piece of action information in the action information storage unit in association with the first action information. The providing unit is configured to provide the label associated with the first action information to the client device.
    Type: Application
    Filed: December 28, 2012
    Publication date: October 3, 2013
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Shinichi Nagano, Masayuki Okamoto, Kouji Ueno, Kenta Sasaki, Kenta Cho