Patents by Inventor Kouji Ueno
Kouji Ueno has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20160036629
Abstract: A control device includes: a controller configured to control an information processor including a plurality of processing devices, wherein the controller, based on a program, performs operations to: receive a plurality of processes to be executed by the plurality of processing devices; and select, based on processing information pertaining to the plurality of processes, a second process capable of being executed by a second processing device among the plurality of processing devices during an adjustment time in accordance with an execution of a first process to be executed by a first processing device among the plurality of processing devices, the first process being different from the second process.
Type: Application
Filed: June 23, 2015
Publication date: February 4, 2016
Inventors: Daisuke Miyazaki, Kouji Ueno, Mamoru Arisumi, Syoutarou Takanaka
-
Patent number: 9196253
Abstract: According to an embodiment, an information processing apparatus includes a dividing unit, an assigning unit, and a generating unit. The dividing unit is configured to divide speech data into pieces of utterance data. The assigning unit is configured to assign speaker identification information to each piece of utterance data based on an acoustic feature of that piece of utterance data. The generating unit is configured to generate a candidate list that indicates candidate speaker names so as to enable a user to determine a speaker name to be given to the piece of utterance data identified by instruction information, based on operation history information in which at least pieces of utterance identification information, pieces of the speaker identification information, and speaker names given by the user to the respective pieces of utterance data are associated with one another.
Type: Grant
Filed: August 6, 2013
Date of Patent: November 24, 2015
Assignee: Kabushiki Kaisha Toshiba
Inventors: Osamu Nishiyama, Taira Ashikawa, Tomoo Ikeda, Kouji Ueno, Kouta Nakata
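The candidate-list mechanism above can be illustrated with a minimal sketch. All names, the tuple layout of the history records, and the ranking rule (names previously given to utterances with the same acoustic speaker ID first, other known names as fallbacks) are illustrative assumptions, not the patented implementation.

```python
from collections import Counter

def candidate_speaker_names(speaker_id, history):
    """Rank candidate speaker names for an utterance's speaker ID.

    `history` is a list of (utterance_id, speaker_id, speaker_name)
    tuples recording names the user has already assigned. Names most
    often given to utterances with the same acoustic speaker ID come
    first; remaining known names follow as fallbacks.
    """
    same_id = Counter(name for _, sid, name in history
                      if sid == speaker_id)
    others = Counter(name for _, sid, name in history
                     if sid != speaker_id)
    ranked = [name for name, _ in same_id.most_common()]
    ranked += [name for name, _ in others.most_common()
               if name not in same_id]
    return ranked

history = [
    ("u1", "S1", "Tanaka"),
    ("u2", "S1", "Tanaka"),
    ("u3", "S2", "Suzuki"),
]
print(candidate_speaker_names("S1", history))  # ['Tanaka', 'Suzuki']
```

Presenting same-ID names first keeps the most likely assignment one click away while still letting the user pick any previously used name.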
-
Publication number: 20150178361
Abstract: According to an embodiment, an information processing apparatus includes an obtaining unit, an extracting unit, a first determining unit, and an output unit. The obtaining unit is configured to obtain a specific spot. The extracting unit is configured to extract a first set of partial information from relationship information that indicates geographical relations of inclusion among a plurality of geographical areas. The first set of partial information indicates relations of inclusion among areas that include the specific spot. The first determining unit is configured to determine, as a candidate area group to be output, at least one of the relations of inclusion among areas included in the first set of partial information. The output unit is configured to output the candidate area group.
Type: Application
Filed: February 6, 2015
Publication date: June 25, 2015
Inventors: Kouji UENO, Shinichi NAGANO
-
Publication number: 20150170649
Abstract: According to an embodiment, a memory controller stores, in a memory, character strings in voice text obtained through voice recognition on voice data, a node index, a recognition score, and a voice index. A detector detects a reproduction section of the voice data. An obtainer obtains the reading of a phrase in a text written down from the reproduced voice data, and obtains the insertion position of character strings. A searcher searches for a character string including the reading. A determiner determines whether to perform display based on the recognition score corresponding to the retrieved character string. A history updater stores, in a memory, candidate history data indicating the retrieved character string, the recognition score, and the character insertion position. A threshold updater decides on a display threshold value using the recognition score of the candidate history data and/or the recognition score of the character string selected by a selector.
Type: Application
Filed: December 8, 2014
Publication date: June 18, 2015
Inventors: Taira Ashikawa, Kouji Ueno
-
Publication number: 20150025877
Abstract: According to an embodiment, a character input device includes a first obtainer, a determiner, a first generator, and an outputter. The first obtainer receives an input of characters from a user and obtains an input character string. The determiner infers, from the input character string, word notations intended by the user and relations of connection between the word notations, and determines routes each of which represents a relation of connection having a high likelihood of serving as a notation candidate intended by the user. The first generator extracts, from the group of word notations included in the routes, the word notations to be output, and generates layout information used in outputting the extracted word notations as the notation candidates. The outputter outputs the layout information.
Type: Application
Filed: July 17, 2014
Publication date: January 22, 2015
Inventors: Kouji UENO, Tomoo IKEDA, Taira ASHIKAWA, Kouta NAKATA
-
Publication number: 20140372117
Abstract: According to an embodiment, a transcription support device includes a first voice acquisition unit, a second voice acquisition unit, a recognizer, a text acquisition unit, an information acquisition unit, a determination unit, and a controller. The first voice acquisition unit acquires a first voice to be transcribed. The second voice acquisition unit acquires a second voice uttered by a user. The recognizer recognizes the second voice to generate a first text. The text acquisition unit acquires a second text obtained by correcting the first text by the user. The information acquisition unit acquires reproduction information representing a reproduction section of the first voice. The determination unit determines a reproduction speed of the first voice on the basis of the first voice, the second voice, the second text, and the reproduction information. The controller reproduces the first voice at the determined reproduction speed.
Type: Application
Filed: March 5, 2014
Publication date: December 18, 2014
Applicant: Kabushiki Kaisha Toshiba
Inventors: Kouta NAKATA, Taira ASHIKAWA, Tomoo IKEDA, Kouji UENO
-
Publication number: 20140207454
Abstract: According to an embodiment, a text reproduction device includes a setting unit, an acquiring unit, an estimating unit, and a modifying unit. The setting unit is configured to set a pause position delimiting text in response to input data that is input by the user during reproduction of speech data. The acquiring unit is configured to acquire a reproduction position of the speech data being reproduced when the pause position is set. The estimating unit is configured to estimate a more accurate position corresponding to the pause position by matching the text around the pause position with the speech data around the reproduction position. The modifying unit is configured to modify the reproduction position to the estimated more accurate position in the speech data, and set the pause position so that reproduction of the speech data can be started from the modified reproduction position when the pause position is designated by the user.
Type: Application
Filed: January 17, 2014
Publication date: July 24, 2014
Applicant: KABUSHIKI KAISHA TOSHIBA
Inventors: Kouta Nakata, Taira Ashikawa, Tomoo Ikeda, Kouji Ueno, Osamu Nishiyama
-
Patent number: 8676578
Abstract: According to one embodiment, a meeting support apparatus includes a storage unit, a determination unit, and a generation unit. The storage unit is configured to store storage information for each of a plurality of words, the storage information indicating a word, pronunciation information on the word, and a pronunciation recognition frequency. The determination unit is configured to generate emphasis determination information including an emphasis level that represents whether a first word should be highlighted and represents a degree of highlighting determined in accordance with a pronunciation recognition frequency of a second word when the first word is highlighted, based on whether the storage information includes a second set corresponding to a first set and based on the pronunciation recognition frequency of the second word when the second set is included. The generation unit is configured to generate an emphasis character string based on the emphasis determination information when the first word is highlighted.
Type: Grant
Filed: March 25, 2011
Date of Patent: March 18, 2014
Assignee: Kabushiki Kaisha Toshiba
Inventors: Tomoo Ikeda, Nobuhiro Shimogori, Kouji Ueno
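The frequency-driven emphasis idea can be sketched as follows. The level cutoffs, the `<em level="n">` markup, and the rule that rarely recognized words get stronger highlighting are all illustrative assumptions layered on the abstract's general mechanism, not the patented logic.

```python
def emphasis_level(word, freq_table, max_level=3):
    """Decide how strongly to highlight a word, assuming that
    `freq_table` maps a word to how often its pronunciation has
    been recognized in past meetings. Unseen words get the
    strongest emphasis; common words get none."""
    freq = freq_table.get(word, 0)
    if freq == 0:
        return max_level   # never recognized: strongest highlight
    if freq < 3:
        return 2           # rare: moderate highlight
    if freq < 10:
        return 1           # somewhat familiar: light highlight
    return 0               # common: no highlight

def emphasize(text, freq_table):
    """Wrap words needing emphasis in <em level="n"> markers."""
    out = []
    for w in text.split():
        lvl = emphasis_level(w, freq_table)
        out.append(f'<em level="{lvl}">{w}</em>' if lvl else w)
    return " ".join(out)

print(emphasize("kaizen review", {"review": 12}))
# prints '<em level="3">kaizen</em> review'
```

A renderer can then map the level attribute to font weight or color so unfamiliar terms stand out for meeting participants.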
-
Publication number: 20140046666
Abstract: According to an embodiment, an information processing apparatus includes a dividing unit, an assigning unit, and a generating unit. The dividing unit is configured to divide speech data into pieces of utterance data. The assigning unit is configured to assign speaker identification information to each piece of utterance data based on an acoustic feature of that piece of utterance data. The generating unit is configured to generate a candidate list that indicates candidate speaker names so as to enable a user to determine a speaker name to be given to the piece of utterance data identified by instruction information, based on operation history information in which at least pieces of utterance identification information, pieces of the speaker identification information, and speaker names given by the user to the respective pieces of utterance data are associated with one another.
Type: Application
Filed: August 6, 2013
Publication date: February 13, 2014
Applicant: KABUSHIKI KAISHA TOSHIBA
Inventors: Osamu Nishiyama, Taira Ashikawa, Tomoo Ikeda, Kouji Ueno, Kouta Nakata
-
Publication number: 20130268736
Abstract: According to one embodiment, a sensor data recording apparatus includes the following elements. The temporary storage unit temporarily stores sensor data acquired from sensors. The data selector selects sensor data stored in the temporary storage unit for each sensor. The sensor data storage unit stores the sensor data selected for each sensor. The recording method controller controls at least one of a recording method of storing the sensor data in the temporary storage unit and a recording method of storing the sensor data in the sensor data storage unit, based on the recording status, which is statistical information about storing of the sensor data in the sensor data storage unit.
Type: Application
Filed: April 4, 2013
Publication date: October 10, 2013
Applicant: KABUSHIKI KAISHA TOSHIBA
Inventors: Masayuki OKAMOTO, Takao MARUKAME, Kouji UENO, Takahiro KURITA, Atsuhiro KINOSHITA, Kenta CHO
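The two-stage recording scheme (temporary buffer, then selective permanent storage) can be sketched minimally. The fixed keep-every-Nth selection rule stands in for the abstract's adaptive recording-method control, which this sketch does not model; class and field names are assumptions.

```python
from collections import deque

class SensorRecorder:
    """Sketch: readings enter a bounded temporary buffer; a simple
    selector keeps every Nth sample for permanent storage."""

    def __init__(self, buffer_size=100, keep_every=10):
        self.buffer = deque(maxlen=buffer_size)  # temporary storage unit
        self.storage = []                        # sensor data storage unit
        self.keep_every = keep_every
        self._count = 0

    def record(self, sample):
        self.buffer.append(sample)               # always buffered
        self._count += 1
        if self._count % self.keep_every == 0:
            self.storage.append(sample)          # selected for keeping

rec = SensorRecorder(keep_every=3)
for t in range(10):
    rec.record(("temp_sensor", t, 20.0 + t))
print(len(rec.storage))  # prints 3: every third of 10 samples kept
```

In the patented apparatus the decimation rate (or the buffering policy itself) would be adjusted from statistics about what has already been stored, rather than fixed up front.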
-
Patent number: 8553855
Abstract: A conference support apparatus for supporting a conference held between at least two terminals includes: a delay unit configured to delay first voice data obtained by one of the terminals, in accordance with a delay caused by information extraction processing performed on the first voice data; and a delay information video generation unit configured to generate a delay information video obtained by visualizing information about the delay of the first voice data that is delayed by the delay unit.
Type: Grant
Filed: September 14, 2011
Date of Patent: October 8, 2013
Assignee: Kabushiki Kaisha Toshiba
Inventors: Kouji Ueno, Nobuhiro Shimogori, Tomoo Ikeda
-
Publication number: 20130080174
Abstract: In an embodiment, a retrieving device includes a text input unit, a first extracting unit, a retrieving unit, a second extracting unit, an acquiring unit, and a selecting unit. The text input unit inputs a text including unknown word information representing a phrase that a user was unable to transcribe. The first extracting unit extracts related words representing a phrase related to the unknown word information from among the phrases other than the unknown word information included in the text. The retrieving unit retrieves a related document representing a document including the related words. The second extracting unit extracts candidate words representing candidates for the unknown word information from a plurality of phrases included in the related document. The acquiring unit acquires reading information representing the estimated pronunciation of the unknown word information. The selecting unit selects at least one candidate word whose pronunciation is similar to the reading information.
Type: Application
Filed: June 20, 2012
Publication date: March 28, 2013
Applicant: KABUSHIKI KAISHA TOSHIBA
Inventors: Osamu Nishiyama, Nobuhiro Shimogori, Tomoo Ikeda, Kouji Ueno, Hirokazu Suzuki, Manabu Nagao
-
Publication number: 20130080163
Abstract: According to an embodiment, an information processing apparatus includes a storage unit, a detector, an acquisition unit, and a search unit. The storage unit is configured to store voice indices, each of which associates a character string included in voice text data obtained from a voice recognition process with voice positional information indicating a temporal position in the voice data corresponding to the character string. The acquisition unit acquires reading information, which is at least a part of a character string representing the reading of a phrase to be transcribed from the voice data played back. The search unit specifies, as search targets, the character strings whose associated voice positional information is included in the section indicated by the played-back section information among the character strings included in the voice indices, and retrieves a character string including the reading represented by the reading information from among the specified character strings.
Type: Application
Filed: June 26, 2012
Publication date: March 28, 2013
Applicant: KABUSHIKI KAISHA TOSHIBA
Inventors: Nobuhiro Shimogori, Tomoo Ikeda, Kouji Ueno, Osamu Nishiyama, Hirokazu Suzuki, Manabu Nagao
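The restricted search can be sketched in a few lines. The flat (surface, reading, position) index layout and the prefix match on romanized readings are illustrative assumptions; a real voice index and reading comparison would be richer.

```python
def search_in_played_section(voice_index, played_until, reading):
    """Return index entries usable as transcription suggestions:
    only entries whose temporal position falls inside the already
    played section are candidates, and of those, only entries whose
    reading starts with the partial reading the user has typed."""
    hits = []
    for surface, read, position in voice_index:
        if position <= played_until and read.startswith(reading):
            hits.append(surface)
    return hits

index = [
    ("東京", "toukyou", 1.2),
    ("都内", "tonai", 3.5),
    ("特許", "tokkyo", 9.0),
]
print(search_in_played_section(index, played_until=5.0, reading="to"))
# prints ['東京', '都内'] (特許 lies past the played section)
```

Limiting candidates to the played-back section keeps suggestions relevant to what the transcriber has actually just heard.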
-
Publication number: 20130030806
Abstract: In an embodiment, a transcription support system includes a first storage, a playback unit, a second storage, a text creating unit, an estimating unit, and a setting unit. The first storage stores voice data; the playback unit plays back the voice data; and the second storage stores voice indices, each of which associates a character string obtained from a voice recognition process with voice positional information indicative of a temporal position in the voice data corresponding to the character string. The text creating unit creates text; the estimating unit estimates already-transcribed voice positional information based on the voice indices; and the setting unit sets a playback starting position, which indicates the position at which playback is started in the voice data, based on the already-transcribed voice positional information.
Type: Application
Filed: March 15, 2012
Publication date: January 31, 2013
Applicant: KABUSHIKI KAISHA TOSHIBA
Inventors: Kouji Ueno, Nobuhiro Shimogori, Tomoo Ikeda, Osamu Nisiyama, Hirokazu Suzuki, Manabu Nagao
-
Publication number: 20130030805
Abstract: According to one embodiment, a transcription support system supports transcription work to convert voice data into text. The system includes a first storage unit configured to store the voice data; a playback unit configured to play back the voice data; a second storage unit configured to store voice indices, each of which associates a character string obtained from a voice recognition process with voice positional information indicative of a temporal position in the voice data corresponding to the character string; a text creating unit that creates the text in response to an operation input of a user; and an estimation unit configured to estimate already-transcribed voice positional information indicative of the position at which the creation of the text is completed in the voice data, based on the voice indices.
Type: Application
Filed: March 15, 2012
Publication date: January 31, 2013
Applicant: KABUSHIKI KAISHA TOSHIBA
Inventors: Hirokazu Suzuki, Nobuhiro Shimogori, Tomoo Ikeda, Kouji Ueno, Osamu Nishiyama, Manabu Nagao
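The estimation step can be sketched minimally. The (surface, start, end) index layout and the "take the furthest matched end time" rule are illustrative assumptions; a real system would align index entries against the transcript in order rather than by simple substring containment.

```python
def estimate_transcribed_position(voice_index, transcript):
    """Estimate how far into the audio the transcript reaches:
    find voice-index entries whose character string appears in the
    text typed so far, and return the latest such end time. That
    time is a natural playback starting position for the next pass."""
    position = 0.0
    for surface, start, end in voice_index:
        if surface and surface in transcript:
            position = max(position, end)
    return position

index = [("hello", 0.0, 0.6), ("world", 0.7, 1.2), ("again", 1.3, 1.9)]
print(estimate_transcribed_position(index, "hello world"))  # prints 1.2
```

Resuming playback from the estimated position spares the transcriber from rewinding manually after every typing burst.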
-
Publication number: 20120154514
Abstract: A conference support apparatus for supporting a conference held between at least two terminals includes: a delay unit configured to delay first voice data obtained by one of the terminals, in accordance with a delay caused by information extraction processing performed on the first voice data; and a delay information video generation unit configured to generate a delay information video obtained by visualizing information about the delay of the first voice data that is delayed by the delay unit.
Type: Application
Filed: September 14, 2011
Publication date: June 21, 2012
Applicant: KABUSHIKI KAISHA TOSHIBA
Inventors: Kouji Ueno, Nobuhiro Shimogori, Tomoo Ikeda
-
Publication number: 20120078629
Abstract: According to one embodiment, a meeting support apparatus includes a storage unit, a determination unit, and a generation unit. The storage unit is configured to store storage information for each of a plurality of words, the storage information indicating a word, pronunciation information on the word, and a pronunciation recognition frequency. The determination unit is configured to generate emphasis determination information including an emphasis level that represents whether a first word should be highlighted and represents a degree of highlighting determined in accordance with a pronunciation recognition frequency of a second word when the first word is highlighted, based on whether the storage information includes a second set corresponding to a first set and based on the pronunciation recognition frequency of the second word when the second set is included. The generation unit is configured to generate an emphasis character string based on the emphasis determination information when the first word is highlighted.
Type: Application
Filed: March 25, 2011
Publication date: March 29, 2012
Inventors: Tomoo Ikeda, Nobuhiro Shimogori, Kouji Ueno
-
Patent number: 7953592
Abstract: A semantic analysis apparatus includes: a data obtaining unit that obtains data in which an item name and an item value belonging to the item name are represented in a predetermined data format; an item value extracting unit that extracts the item value from the data based on the data format; a concept storing unit that stores a concept, which is a semantic notion to be attached to the item name, and an instance, which is specific data of the concept, in association with each other; a concept specifying unit that specifies the concept stored in the concept storing unit and associated with the instance that at least partially matches a character string of the extracted item value, as the concept for the item name; and an associating unit that associates the concept with the item name.
Type: Grant
Filed: February 27, 2006
Date of Patent: May 31, 2011
Assignee: Kabushiki Kaisha Toshiba
Inventors: Takahiro Kawamura, Kouji Ueno, Shinichi Nagano, Tetsuo Hasegawa, Akihiko Ohsuga
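The instance-matching step can be sketched briefly. The dictionary layout, the substring-in-either-direction test for "at least partially matches", and all concept and instance names are illustrative assumptions.

```python
def assign_concepts(items, concept_instances):
    """Attach a concept (semantic notion) to each item name whose
    extracted value at least partially matches one of that concept's
    known instances. `concept_instances` maps a concept name to a
    list of instance strings; `items` maps item names to values."""
    result = {}
    for item_name, item_value in items.items():
        for concept, instances in concept_instances.items():
            if any(inst in item_value or item_value in inst
                   for inst in instances):
                result[item_name] = concept
                break  # first matching concept wins in this sketch
    return result

concepts = {
    "City": ["Tokyo", "Osaka", "Kyoto"],
    "Company": ["Toshiba", "Sanyo Denki"],
}
items = {"hq": "Tokyo, Japan", "maker": "Toshiba"}
print(assign_concepts(items, concepts))
# prints {'hq': 'City', 'maker': 'Company'}
```

Matching on instance data rather than on the item name itself lets the apparatus label columns whose headers are opaque or inconsistent across data sources.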
-
Publication number: 20090222257
Abstract: A translation direction specifying unit specifies a first language and a second language. A speech recognizing unit recognizes a speech signal of the first language and outputs a first language character string. A first translating unit translates the first language character string into a second language character string that will be displayed on a display device. A keyword extracting unit extracts a keyword for a document retrieval from the first language character string or the second language character string, with which a document retrieving unit performs a document retrieval. A second translating unit translates a retrieved document into the other language, and the translation will be displayed on the display device.
Type: Application
Filed: February 18, 2009
Publication date: September 3, 2009
Inventors: Kazuo SUMITA, Tetsuro CHINO, Satoshi KAMATANI, Kouji UENO
-
Patent number: D637276
Type: Grant
Filed: April 30, 2008
Date of Patent: May 3, 2011
Assignee: Sanyo Denki Co., Ltd.
Inventors: Yoshinori Miyabara, Kouji Ueno, Tomoaki Ikeda, Masashi Miyazawa, Haruhisa Maruyama