Patents by Inventor Kazuo Sumita

Kazuo Sumita has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20150081274
    Abstract: A first speech input device captures a speech of a first language. A first speech output device outputs another speech of the first language. A second speech input device captures a speech of a second language. A second speech output device outputs another speech of the second language. In a speech recognition/translation server, a first speech recognition device receives a first utterance speech of the first language from the first speech input device and recognizes the first utterance speech. A first machine translation device consecutively translates the recognition result from the first language into the second language without waiting for completion of the first utterance speech. A first speech synthesis device generates a second speech of the translation result. A first output adjustment device outputs the first utterance speech and the second speech to the second speech output device, adjusting the volume of the first utterance speech to be smaller than the volume of the second speech.
    Type: Application
    Filed: September 16, 2014
    Publication date: March 19, 2015
    Inventors: Akinori Kawamura, Kazuo Sumita, Satoshi Kamatani
  • Publication number: 20150081270
    Abstract: According to one embodiment, a speech of a first language is recognized using a speech recognition dictionary for recognizing the first language and a second language, and a source sentence of the first language is generated. The source sentence is translated into the second language, and a translation sentence of the second language is generated. An unknown word included in the translation sentence is detected; the unknown word is not stored in the speech recognition dictionary. A first pronunciation candidate of the unknown word is estimated from a representation of the unknown word. A second pronunciation candidate of the unknown word is estimated from a pronunciation of an original word, included in the source sentence, that corresponds to the unknown word. The unknown word, the first pronunciation candidate, and the second pronunciation candidate are registered into the speech recognition dictionary in correspondence with one another.
    Type: Application
    Filed: September 12, 2014
    Publication date: March 19, 2015
    Inventors: Satoshi Kamatani, Kazuo Sumita, Akinori Kawamura
  • Publication number: 20150081271
    Abstract: A first speech processing device includes a first speech input unit and a first speech output unit. A second speech processing device includes a second speech input unit and a second speech output unit. In a server therebetween, a speech of a first language sent from the first speech input unit is recognized. The speech recognition result is translated into a second language. The translation result is back translated into the first language. A first speech synthesis signal of the back translation result is sent to the first speech output unit. A second speech synthesis signal of the translation result is sent to the second speech output unit. Duration of the second speech synthesis signal or the first speech synthesis signal is measured. The first speech synthesis signal and the second speech synthesis signal are outputted by synchronizing a start time and an end time thereof, based on the duration.
    Type: Application
    Filed: September 12, 2014
    Publication date: March 19, 2015
    Inventors: Kazuo Sumita, Akinori Kawamura, Satoshi Kamatani
  • Publication number: 20150006441
    Abstract: According to one embodiment, a service control apparatus includes an acquisition unit, an estimator, and a generator. The acquisition unit acquires a user request. Intention knowledge items associate user requests with the user intentions behind them. The estimator estimates a user intention corresponding to the user request with reference to the intention knowledge items. Service control knowledge items define methods of generating service control conditions for operating the service; the methods correspond to the user intentions. The generator generates one of the service control conditions corresponding to the user request and the user intention, with reference to the service control knowledge items.
    Type: Application
    Filed: September 19, 2014
    Publication date: January 1, 2015
    Inventors: Masaru SUZUKI, Kazuo SUMITA, Hiroko FUJII, Hiromi WAKAKI, Michiaki ARIGA
  • Patent number: 8738371
    Abstract: A response storage unit stores a response, a watching degree relative to a display unit, and an output form of the response to a speaker and the display unit. An extracting unit extracts a request from a speech recognition result. A response determining unit determines a response based on the extracted request. A direction detector detects a viewing direction based on sensing information received from a transmitter mounted on a user. A watching-degree determining unit determines a watching degree based on the viewing direction. An output controller obtains an output form corresponding to the response and the determined watching degree from the response storage unit, and outputs the response to the speaker and the display unit according to the obtained output form.
    Type: Grant
    Filed: September 13, 2007
    Date of Patent: May 27, 2014
    Assignee: Kabushiki Kaisha Toshiba
    Inventor: Kazuo Sumita
  • Patent number: 8639518
    Abstract: According to an embodiment, an information retrieving apparatus includes a housing; an input-output unit to perform dialogue processing with a user; a first detecting unit to detect a means of transfer, which indicates the present means of transfer for the user; a second detecting unit to detect a holding status, which indicates whether the user is holding the housing; a third detecting unit to detect a talking posture, which indicates whether the housing is held near the face of the user; a selecting unit to select, from among a plurality of interaction modes that establish the dialogue processing, an interaction mode according to a combination of the means of transfer, the holding status, and the talking posture; a dialogue manager to control the dialogue processing according to the selected interaction mode; and an information retrieval unit to retrieve information using a keyword that is input during the dialogue processing.
    Type: Grant
    Filed: July 2, 2012
    Date of Patent: January 28, 2014
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Hiromi Wakaki, Kazuo Sumita, Hiroko Fujii, Masaru Suzuki, Michiaki Ariga
  • Patent number: 8635070
    Abstract: According to one embodiment, a speech translation apparatus includes a receiving unit, a first recognition unit, a second recognition unit, a first generation unit, a translation unit, a second generation unit, and a synthesis unit. The receiving unit is configured to receive a speech in a first language and convert it to a speech signal. The first recognition unit is configured to perform speech recognition and generate a transcription. The second recognition unit is configured to recognize which emotion type is included in the speech and generate emotion identification information including the recognized emotion type(s). The first generation unit is configured to generate a filtered sentence. The translation unit is configured to generate a translation of the filtered sentence from the first language into a second language. The second generation unit is configured to generate an insertion sentence. The synthesis unit is configured to convert the filtered and insertion sentences into speech signals.
    Type: Grant
    Filed: March 25, 2011
    Date of Patent: January 21, 2014
    Assignee: Kabushiki Kaisha Toshiba
    Inventor: Kazuo Sumita
  • Publication number: 20140006007
    Abstract: According to one embodiment, a speech translation apparatus includes a speech recognition unit, a translation unit, a search unit and a selection unit. The speech recognition unit successively performs speech recognition to obtain a first language word string. The translation unit translates the first language word string into a second language word string. The search unit searches for at least one similar example and acquires the similar example and a translation example. The selection unit selects, in accordance with a user instruction, at least one of the first language word string associated with the similar example and the second language word string associated with the translation example, as a selected word string.
    Type: Application
    Filed: April 9, 2013
    Publication date: January 2, 2014
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Kazuo SUMITA, Hirokazu SUZUKI, Kentaro FURIHATA, Satoshi KAMATANI, Tetsuro CHINO, Hisayoshi NAGAE, Michiaki ARIGA, Takashi MASUKO
  • Patent number: 8583417
    Abstract: According to an embodiment, a translation unit translates a first sentence in a first language into a sentence in a second language using parallel translations. A next utterance table includes first identification information distinguishing between a sentence in the first language and a sentence in the second language included in the parallel translation and includes second identification information identifying a sentence previously selected as the next utterance of the sentence indicated by the first identification information. An acquiring unit acquires next utterance candidates, which are sentences indicated by the second identification information associated with the first identification information of the first sentence. If the selected next utterance candidate is in the first language, the translation unit translates the selected next utterance candidate into the second language.
    Type: Grant
    Filed: March 7, 2012
    Date of Patent: November 12, 2013
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Kazuo Sumita, Michiaki Ariga, Tetsuro Chino
  • Publication number: 20130253932
    Abstract: A conversation supporting device of an embodiment of the present disclosure has an information storage unit, a recognition resource constructing unit, and a voice recognition unit. The information storage unit stores information disclosed by a speaker. The recognition resource constructing unit uses the disclosed information to construct a recognition resource including a voice model and a language model for recognizing voice data. The voice recognition unit uses the recognition resource to recognize the voice data.
    Type: Application
    Filed: February 25, 2013
    Publication date: September 26, 2013
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: Masahide ARIU, Kazuo Sumita, Akinori Kawamura
  • Publication number: 20130253924
    Abstract: According to one embodiment, a speech conversation support apparatus includes a division unit, an analysis unit, a detection unit, an estimation unit and an output unit. The division unit divides a speech data item including a word item and a sound item into a plurality of divided speech data items. The analysis unit obtains an analysis result. The detection unit detects, for each divided speech data item, at least one clue expression indicating either an instruction by a user or a state of the user. The estimation unit estimates, if the clue expression is detected, a playback data item from at least one divided speech data item corresponding to speech uttered before the clue expression is detected. The output unit outputs the playback data item.
    Type: Application
    Filed: December 27, 2012
    Publication date: September 26, 2013
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Yumi Ichimura, Kazuo Sumita
  • Publication number: 20130218553
    Abstract: According to an embodiment, an information notification supporting device includes an analyzer configured to analyze an input voice so as to identify voice information indicating information related to speech; a storage unit configured to store therein a history of the voice information; an output controller configured to determine, using the history of the voice information, whether a user is able to listen to a message of which the user should be notified; and an output unit configured to output the message when it is determined that the user is in a state in which the user is able to listen to the message.
    Type: Application
    Filed: December 28, 2012
    Publication date: August 22, 2013
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Hiroko Fujii, Masaru Suzuki, Kazuo Sumita, Masahide Ariu
  • Publication number: 20130006616
    Abstract: According to an embodiment, an information retrieving apparatus includes a housing; an input-output unit to perform dialogue processing with a user; a first detecting unit to detect a means of transfer, which indicates the present means of transfer for the user; a second detecting unit to detect a holding status, which indicates whether the user is holding the housing; a third detecting unit to detect a talking posture, which indicates whether the housing is held near the face of the user; a selecting unit to select, from among a plurality of interaction modes that establish the dialogue processing, an interaction mode according to a combination of the means of transfer, the holding status, and the talking posture; a dialogue manager to control the dialogue processing according to the selected interaction mode; and an information retrieval unit to retrieve information using a keyword that is input during the dialogue processing.
    Type: Application
    Filed: July 2, 2012
    Publication date: January 3, 2013
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Hiromi Wakaki, Kazuo Sumita, Hiroko Fujii, Masaru Suzuki, Michiaki Ariga
  • Patent number: 8346537
    Abstract: An input apparatus which presents examples suitable to users, including an example storage module to store a plurality of example expressions; an edit storage module to store edits when a user edits the plurality of example expressions; a presentation example determining module to determine one or more presentation examples to be presented to the user from the plurality of example expressions stored in the example storage module; an edit adapting module to edit the presentation examples determined by the presentation example determining module based on the edits stored in the edit storage module; a display control module to present the presentation examples determined by the presentation example determining module to the user; and an entry accepting module to accept one of the presentation examples as a selected example when one of the presentation examples is selected by the user, or to receive edits from the user to one of the presentation examples and accept the edited presentation example as the selected example.
    Type: Grant
    Filed: September 28, 2006
    Date of Patent: January 1, 2013
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Tetsuro Chino, Satoshi Kamatani, Kazuo Sumita
  • Publication number: 20120296647
    Abstract: In an embodiment, an information processing apparatus includes: a converting unit; a selecting unit; a dividing unit; a generating unit; and a display processing unit. The converting unit recognizes a voice input from a user and converts it into a character string. The selecting unit selects characters from the character string according to a designation by the user. The dividing unit converts the selected characters into phonetic characters and divides the phonetic characters into phonetic characters of sound units. The generating unit extracts similar character candidates corresponding to each of the divided phonetic characters of the sound units, from a similar character dictionary that stores, as the similar character candidates, a plurality of phonetic characters of sound units that are similar in sound, in association with each other, and generates correction character candidates for the selected characters. The display processing unit makes a display unit display the generated correction character candidates so as to be selectable by the user.
    Type: Application
    Filed: May 23, 2012
    Publication date: November 22, 2012
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Yuka Kobayashi, Tetsuro Chino, Kazuo Sumita, Hisayoshi Nagae, Satoshi Kamatani
  • Publication number: 20120253782
    Abstract: According to one embodiment, a foreign language service assisting apparatus is provided with first and second acquisition units, a translation unit, a presentation unit and an accepting unit. The first acquisition unit acquires first information on a first article. The second acquisition unit acquires second information on second articles associated with the first article, and subsequent speech candidates expected to be spoken, based on the first information. The translation unit translates the first information, the second information, and the candidates. The presentation unit presents the translation result. The accepting unit accepts a selection associated with the first or second articles or with a candidate.
    Type: Application
    Filed: March 16, 2012
    Publication date: October 4, 2012
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Michiaki ARIGA, Kazuo SUMITA, Masaru SUZUKI, Hiroko FUJII, Hiromi WAKAKI
  • Publication number: 20120221323
    Abstract: According to an embodiment, a translation unit translates a first sentence in a first language into a sentence in a second language using parallel translations. A next utterance table includes first identification information distinguishing between a sentence in the first language and a sentence in the second language included in the parallel translation and includes second identification information identifying a sentence previously selected as the next utterance of the sentence indicated by the first identification information. An acquiring unit acquires next utterance candidates, which are sentences indicated by the second identification information associated with the first identification information of the first sentence. If the selected next utterance candidate is in the first language, the translation unit translates the selected next utterance candidate into the second language.
    Type: Application
    Filed: March 7, 2012
    Publication date: August 30, 2012
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Kazuo Sumita, Michiaki Ariga, Tetsuro Chino
  • Patent number: 8204735
    Abstract: A machine translating apparatus includes an input unit that inputs a source language sentence, a syntax analyzing unit that performs a syntactic analysis on the source language sentence and generates syntax information, an extracting unit that extracts from the syntax information the first partial information that includes the first partial structure including all the nodes under the most significant nodes that are the nodes of the syntax information and the corresponding morphemes, and also extracts the second partial information including the second subtree representing a difference between two items of the first partial information and the corresponding morphemes, a translating unit that translates the morphemes of all the items of the partial information with multiple translation systems, and a most-plausible structure selecting unit that selects a combination for which the average of the translation scores takes the maximum value.
    Type: Grant
    Filed: January 27, 2009
    Date of Patent: June 19, 2012
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Satoshi Kamatani, Tetsuro Chino, Kazuo Sumita
  • Patent number: 8185372
    Abstract: An apparatus includes a first search unit that searches a storage unit for a first example of a first language based on a sentence in the first language; a second search unit that searches for a second example of a second language corresponding to the first example, the second example containing the same meaning as the first example; a determining unit that determines whether a plurality of the second examples exist; a first acquisition unit that acquires the first example corresponding to each of the second examples from the storage unit; a second acquisition unit that acquires the second example corresponding to the first example acquired from the storage unit; and a choice generating unit that generates, as a choice of the first example to be output, the acquired first example associated with the fewest acquired second examples.
    Type: Grant
    Filed: September 13, 2006
    Date of Patent: May 22, 2012
    Assignee: Kabushiki Kaisha Toshiba
    Inventor: Kazuo Sumita
  • Publication number: 20120078607
    Abstract: According to one embodiment, a speech translation apparatus includes a receiving unit, a first recognition unit, a second recognition unit, a first generation unit, a translation unit, a second generation unit, and a synthesis unit. The receiving unit is configured to receive a speech in a first language and convert it to a speech signal. The first recognition unit is configured to perform speech recognition and generate a transcription. The second recognition unit is configured to recognize which emotion type is included in the speech and generate emotion identification information including the recognized emotion type(s). The first generation unit is configured to generate a filtered sentence. The translation unit is configured to generate a translation of the filtered sentence from the first language into a second language. The second generation unit is configured to generate an insertion sentence. The synthesis unit is configured to convert the filtered and insertion sentences into speech signals.
    Type: Application
    Filed: March 25, 2011
    Publication date: March 29, 2012
    Inventor: Kazuo Sumita
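
The output-adjustment idea in publication 20150081274 (keeping the original utterance audible but quieter than the synthesized translation) can be sketched as a simple mixing step. This is a minimal illustration, not the claimed implementation; the function name, the fixed ducking gain, and the sample-list signal representation are all assumptions.

```python
def mix_with_ducking(original, translation, duck_gain=0.2):
    """Mix two mono sample sequences, attenuating the original utterance
    (by duck_gain) so it stays quieter than the translated speech."""
    n = max(len(original), len(translation))
    out = [0.0] * n
    for i, sample in enumerate(original):
        out[i] += duck_gain * sample          # ducked original utterance
    for i, sample in enumerate(translation):
        out[i] += sample                      # full-volume translation
    return out

# Toy signals: the translated speech dominates where both overlap.
orig = [1.0, 1.0, 1.0, 1.0]
trans = [0.5, 0.5]
mixed = mix_with_ducking(orig, trans)
```

In a real system the gain would be applied in the audio pipeline on streaming buffers; the list arithmetic here only shows the relative-volume relationship the abstract describes.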
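
The dictionary-update step in publication 20150081270 registers an unknown translated word under two pronunciation candidates: one estimated from its spelling and one carried over from the source-language word it corresponds to. The sketch below assumes a grapheme-to-phoneme callable and a source-side pronunciation table; all names are hypothetical and the per-character "phonemes" are stand-ins.

```python
def register_unknown_word(recognition_dict, unknown_word, source_word,
                          g2p, source_pronunciations):
    """Register an unknown word with two pronunciation candidates:
    one from its written representation, one from the source word."""
    candidate_from_spelling = g2p(unknown_word)                 # from representation
    candidate_from_source = source_pronunciations[source_word]  # from source side
    recognition_dict[unknown_word] = [candidate_from_spelling,
                                      candidate_from_source]
    return recognition_dict

# Toy grapheme-to-phoneme stand-in and a source-side pronunciation table.
toy_g2p = lambda word: "-".join(word)
src_pron = {"東京": "toukyou"}
d = register_unknown_word({}, "Tokyo", "東京", toy_g2p, src_pron)
```

Keeping both candidates lets the recognizer accept the word whether the speaker pronounces it natively or carries over the source-language pronunciation.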
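
The synchronization step in publication 20150081271 aligns the start and end times of the two synthesized outputs based on their measured durations. A minimal stand-in, assuming trailing-silence padding as the alignment strategy (the abstract does not specify one), could look like this:

```python
def synchronize(signal_a, signal_b, pad_value=0.0):
    """Pad the shorter of two synthesized sample sequences with trailing
    silence so both signals start and end at the same time."""
    duration = max(len(signal_a), len(signal_b))  # measured in samples
    pad = lambda s: list(s) + [pad_value] * (duration - len(s))
    return pad(signal_a), pad(signal_b)

# The one-sample signal is padded out to match the three-sample one.
a, b = synchronize([0.1, 0.2, 0.3], [0.4])
```

A production system might instead time-stretch the shorter signal; padding is just the simplest way to make both outputs share a start and an end time.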