Patents by Inventor Norihide Umeyama
Norihide Umeyama has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Publication number: 20180315423
  Abstract: A system comprises an apparatus having a first voice I/O device, and a voice interface apparatus having a second voice I/O device and connected to the apparatus by an audio connection via short-range wireless communication. The apparatus includes a voice I/O unit that performs voice input and output by using the first voice I/O device or the second voice I/O device; an interaction unit that performs voice interaction with a user; and a process unit that performs a process other than the voice interaction by using the voice I/O unit. The voice I/O unit switches the device used for voice input and output to the first voice I/O device when the process unit enters a first state in which voice input and output is required while the voice interaction with the user is being performed by using the second voice I/O device.
  Type: Application
  Filed: April 19, 2018
  Publication date: November 1, 2018
  Applicant: TOYOTA JIDOSHA KABUSHIKI KAISHA
  Inventors: Satoshi MIZUMA, Atsushi IKENO, Hiroshi YAMAGUCHI, Yuta YAMAMOTO, Toshifumi NISHIJIMA, Satoru SASAKI, Hiromi TONEGAWA, Norihide UMEYAMA
- Publication number: 20180311816
  Abstract: A voice interactive robot interacting with a user by voice includes a main body; a movable part capable of moving relative to the main body; a following control unit that moves the movable part so that the movable part follows the user; a temporary origin setting unit that sets a temporary origin of the movable part in response to movement of the movable part by the following control unit; an acquisition unit that acquires an operation instruction issued in relation to the movable part; and an operation execution unit that moves the movable part in accordance with the operation instruction, using the temporary origin as a reference.
  Type: Application
  Filed: April 20, 2018
  Publication date: November 1, 2018
  Applicant: TOYOTA JIDOSHA KABUSHIKI KAISHA
  Inventors: Hayato SAKAMOTO, Atsushi IKENO, Masaaki HANADA, Takehiko ISHIGURO, Masayuki TANIYAMA, Toshifumi NISHIJIMA, Hiromi TONEGAWA, Norihide UMEYAMA, Satoru SASAKI
- Publication number: 20180308478
  Abstract: A voice interaction system includes: a speaker; a microphone having a microphone gain that is set at a low level while a sound is output from the speaker; a voice recognition unit that implements voice recognition processing on input sound data input from the microphone; a sound output unit that generates output sound data and outputs the generated output sound data through the speaker; and a non-audible sound output unit that, when a plurality of sounds are output with a time interval no greater than a threshold therebetween, outputs a non-audible sound through the speaker at least in the interim between output of the plurality of sounds.
  Type: Application
  Filed: April 16, 2018
  Publication date: October 25, 2018
  Applicant: TOYOTA JIDOSHA KABUSHIKI KAISHA
  Inventors: Atsushi IKENO, Satoshi MIZUMA, Hayato SAKAMOTO, Hiroto KONNO, Toshifumi NISHIJIMA, Hiromi TONEGAWA, Norihide UMEYAMA, Satoru SASAKI
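The scheduling idea in this abstract (filling short gaps between output sounds with an inaudible sound so the microphone gain stays lowered) can be sketched as follows. This is a minimal illustration, not the patented implementation; the threshold value, the tuple format, and the `non_audible_tone` placeholder are all assumptions of the sketch.

```python
# Hypothetical threshold: the longest silent gap (seconds) allowed between two
# output sounds before the microphone gain would otherwise be restored.
GAP_THRESHOLD = 0.5
INAUDIBLE_TONE = "non_audible_tone"  # placeholder for e.g. an ultrasonic filler

def schedule_outputs(sounds):
    """Given (name, start, end) tuples sorted by start time, insert a
    non-audible filler into any gap no greater than GAP_THRESHOLD so the
    microphone gain stays at its low level across the whole sequence."""
    playlist = []
    for i, (name, start, end) in enumerate(sounds):
        playlist.append((name, start, end))
        if i + 1 < len(sounds):
            next_start = sounds[i + 1][1]
            gap = next_start - end
            if 0 < gap <= GAP_THRESHOLD:
                # Bridge the short gap; longer gaps are left silent.
                playlist.append((INAUDIBLE_TONE, end, next_start))
    return playlist
```

Gaps longer than the threshold are deliberately left unfilled, matching the abstract's "time interval no greater than a threshold" condition.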
- Publication number: 20180090133
  Abstract: A keyword generation apparatus comprises: a vocabulary acquisition unit that acquires a keyword uttered by a first user; a first positional information acquisition unit that acquires first positional information including information representing the location at which the first user uttered the keyword; a storage unit that stores the first positional information and the keyword in association with each other; a second positional information acquisition unit that acquires second positional information including information representing a current position of a second user; and an extraction unit that extracts from the storage unit, based on the second positional information, a keyword unique to the locality in which the second user is positioned.
  Type: Application
  Filed: September 15, 2017
  Publication date: March 29, 2018
  Applicant: TOYOTA JIDOSHA KABUSHIKI KAISHA
  Inventors: Takuma MINEMURA, Sei KATO, Junichi ITO, Youhei WAKISAKA, Atsushi IKENO, Toshifumi NISHIJIMA, Fuminori KATAOKA, Hiromi TONEGAWA, Norihide UMEYAMA
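The store-and-extract flow the abstract describes (keywords saved with the place of utterance, then filtered by a second user's current position) can be sketched like this. The class name, the 50 km locality radius, and the planar distance approximation are assumptions of the sketch, not details from the patent.

```python
import math

def _distance_km(p, q):
    # Rough planar approximation of great-circle distance; adequate for the
    # short ranges this sketch cares about. Points are (lat, lon) in degrees.
    return math.hypot(p[0] - q[0],
                      (p[1] - q[1]) * math.cos(math.radians(p[0]))) * 111.0

class KeywordStore:
    """Stores keywords together with where they were uttered, and extracts
    the keywords recorded near a second user's current position."""

    def __init__(self, locality_radius_km=50.0):
        self.radius = locality_radius_km
        self.entries = []  # list of (keyword, (lat, lon)) pairs

    def record(self, keyword, position):
        self.entries.append((keyword, position))

    def local_keywords(self, position):
        return sorted({kw for kw, pos in self.entries
                       if _distance_km(pos, position) <= self.radius})
```

For example, a keyword recorded in Osaka would be extracted for a second user standing in Osaka but not for one in Tokyo.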
- Publication number: 20180090132
  Abstract: A voice dialogue system includes a dialogue scenario storage storing a plurality of dialogue scenarios and a dialogue text generator generating a dialogue text for responding to a user utterance based on a result of voice recognition. Each dialogue scenario is a single set of three contents: a content of a first system utterance, a content of an expected user utterance, and a content of a second system utterance for responding to the expected user utterance. The dialogue text generator determines whether or not the user utterance is an expected response and, when it is, generates the second system utterance defined in the dialogue scenario as the dialogue text for responding to the user utterance.
  Type: Application
  Filed: September 14, 2017
  Publication date: March 29, 2018
  Applicant: TOYOTA JIDOSHA KABUSHIKI KAISHA
  Inventors: Atsushi IKENO, Muneaki SHIMADA, Kota HATANAKA, Toshifumi NISHIJIMA, Fuminori KATAOKA, Hiromi TONEGAWA, Norihide UMEYAMA
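The three-part scenario structure in this abstract maps naturally onto a small data type plus a matching rule. A minimal sketch, assuming exact string matching and a caller-supplied fallback generator (both assumptions of the sketch, not of the patent):

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    # One scenario is a single set of three contents: the first system
    # utterance, the expected user reply, and the scripted second utterance.
    first_system: str
    expected_user: str
    second_system: str

def respond(scenario, recognized_user_utterance, fallback):
    """Return the scripted second system utterance when the recognized user
    utterance matches the expectation; otherwise defer to `fallback`, a
    hypothetical ordinary text-generation function."""
    if recognized_user_utterance == scenario.expected_user:
        return scenario.second_system
    return fallback(recognized_user_utterance)
```

A real system would presumably match on intent rather than exact text, but the triple structure is the same.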
- Publication number: 20180090144
  Abstract: A voice dialogue system includes: a voice input unit which acquires a user utterance; an intention understanding unit which interprets an intention of utterance of a voice acquired by the voice input unit; a dialogue text creator which creates a text of a system utterance; and a voice output unit which outputs the system utterance as voice data. When creating a text of a system utterance, the dialogue text creator creates the text by inserting a tag at a position in the system utterance, and the intention understanding unit interprets the utterance intention of the user in accordance with whether the timing at which the user utterance is made is before or after the voice output unit has output the system utterance up to the position corresponding to the tag.
  Type: Application
  Filed: September 14, 2017
  Publication date: March 29, 2018
  Applicant: TOYOTA JIDOSHA KABUSHIKI KAISHA
  Inventors: Atsushi IKENO, Yusuke JINGUJI, Toshifumi NISHIJIMA, Fuminori KATAOKA, Hiromi TONEGAWA, Norihide UMEYAMA
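The before/after-the-tag decision can be sketched with a toy timing model. The constant speech rate, the `<tag>` marker syntax, and the function names are all assumptions made for illustration; the patent does not specify how the tag's output time is determined.

```python
TAG = "<tag>"

def interpret_timing(system_text, speech_started_at, chars_per_second,
                     user_utterance_at):
    """Estimate when the tagged position in the system utterance was spoken,
    assuming a constant speech rate, then report whether the user's utterance
    came before or after that moment."""
    tag_index = system_text.index(TAG)           # characters spoken before the tag
    tag_time = speech_started_at + tag_index / chars_per_second
    return "before_tag" if user_utterance_at < tag_time else "after_tag"
```

An interjection before the tag might then be treated as answering the first half of the system utterance, and one after it as answering the whole.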
- Publication number: 20180090145
  Abstract: A voice interaction apparatus includes: a voice recognizer configured to recognize content of a speech of a user; an extractor configured to extract profile information based on a result of the voice recognition, and to specify which user the profile information is associated with; a storage configured to store the extracted profile information in association with the user; an exchanger configured to exchange profile information with another voice interaction apparatus; and a generator configured to generate a speech sentence to speak to the user based on the profile information of the user.
  Type: Application
  Filed: September 14, 2017
  Publication date: March 29, 2018
  Applicant: TOYOTA JIDOSHA KABUSHIKI KAISHA
  Inventors: Satoshi KUME, Atsushi IKENO, Toshihiko WATANABE, Muneaki SHIMADA, Hayato SAKAMOTO, Toshifumi NISHIJIMA, Fuminori KATAOKA, Hiromi TONEGAWA, Norihide UMEYAMA
- Publication number: 20180068659
  Abstract: A voice recognition device comprises: a voice acquisition unit that acquires voice given by a user; a voice recognition unit that recognizes the acquired voice to acquire a voice recognition result; a category classification unit that classifies a speech content of the user into a category, based on the voice recognition result; an information acquisition unit that acquires a category dictionary including words corresponding to the classified category; and a correction unit that corrects the voice recognition result, based on the category dictionary.
  Type: Application
  Filed: August 31, 2017
  Publication date: March 8, 2018
  Applicant: TOYOTA JIDOSHA KABUSHIKI KAISHA
  Inventors: Atsushi IKENO, Muneaki SHIMADA, Kota HATANAKA, Toshifumi NISHIJIMA, Fuminori KATAOKA, Hiromi TONEGAWA, Norihide UMEYAMA
- Publication number: 20180033432
  Abstract: A voice interactive device that interacts with a user by voice comprises: a voice input unit that acquires and recognizes voice uttered by a user; a degree-of-intimacy calculating unit that calculates a degree of intimacy with the user; a response generating unit that generates a response to the recognized voice based on the degree of intimacy; and a voice output unit that outputs the response by voice. The degree-of-intimacy calculating unit calculates the degree of intimacy with the user as a sum of a first intimacy value, calculated based on the content of an utterance made by the user, and a second intimacy value, calculated based on the number of previous interactions with the user.
  Type: Application
  Filed: July 25, 2017
  Publication date: February 1, 2018
  Applicant: TOYOTA JIDOSHA KABUSHIKI KAISHA
  Inventors: Atsushi IKENO, Muneaki SHIMADA, Kota HATANAKA, Toshifumi NISHIJIMA, Fuminori KATAOKA, Hiromi TONEGAWA, Norihide UMEYAMA
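The two-part sum in this abstract can be sketched directly. The scoring table, the category labels, and the saturation cap are hypothetical; the abstract only specifies that the degree of intimacy is the sum of an utterance-content value and an interaction-count value.

```python
# Hypothetical mapping from utterance-content categories to a first
# intimacy value; a real system would derive this from recognition results.
CONTENT_SCORES = {"greeting": 1, "personal": 3, "other": 0}

def first_intimacy(utterance_category):
    return CONTENT_SCORES.get(utterance_category, 0)

def second_intimacy(previous_interactions, cap=10):
    # Grows with the number of previous interactions, saturating at `cap`
    # so long-time users don't dominate the score (an assumption here).
    return min(previous_interactions, cap)

def degree_of_intimacy(utterance_category, previous_interactions):
    # The patent's stated rule: the degree of intimacy is the sum of the two.
    return (first_intimacy(utterance_category)
            + second_intimacy(previous_interactions))
```

The response generator would then condition its output (formal vs. casual phrasing, say) on this combined score.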
- Publication number: 20170345425
  Abstract: A voice dialog device comprises: a sight line detection unit configured to detect a sight line of a user; a voice processing unit configured to obtain voice pronounced by the user and a result of recognizing the voice; a dialog determination unit configured to determine whether or not the voice dialog device has a dialog with the user; and an answer generation unit configured to generate an answer based on the result of recognizing the voice. The dialog determination unit determines whether or not the user has started the dialog based on both the sight line of the user and the obtained voice.
  Type: Application
  Filed: May 18, 2017
  Publication date: November 30, 2017
  Applicant: TOYOTA JIDOSHA KABUSHIKI KAISHA
  Inventors: Atsushi IKENO, Muneaki SHIMADA, Kota HATANAKA, Toshifumi NISHIJIMA, Fuminori KATAOKA, Hiromi TONEGAWA, Norihide UMEYAMA
- Publication number: 20170345424
  Abstract: A voice dialog device comprises: a voice processing unit configured to obtain a voice pronounced by a user and a result of recognizing the voice; a plurality of estimation units configured to estimate the emotion of the user by different methods; and a response unit configured to create a response sentence based on the results of estimating the emotion of the user, and provide the response sentence to the user. When a discrepancy exists between the results of estimating the emotion of the user by the plurality of estimation units, the response unit makes an inquiry to the user and determines which estimation result is to be adopted based on the content of the obtained response.
  Type: Application
  Filed: May 18, 2017
  Publication date: November 30, 2017
  Applicant: TOYOTA JIDOSHA KABUSHIKI KAISHA
  Inventors: Atsushi IKENO, Muneaki SHIMADA, Kota HATANAKA, Toshifumi NISHIJIMA, Fuminori KATAOKA, Hiromi TONEGAWA, Norihide UMEYAMA
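The discrepancy-resolution logic in this abstract (agree → use the shared result; disagree → ask the user which is right) can be sketched as follows. The dict-of-labels representation and the `ask_user` callback are assumptions of the sketch.

```python
def estimate_emotion(estimates, ask_user):
    """`estimates` maps estimator name -> emotion label (e.g. from a
    voice-prosody estimator and a text-content estimator). When all
    estimators agree, return the shared result; on a discrepancy, query
    the user via `ask_user` and adopt whichever estimate the reply
    confirms. Returns None if the reply confirms neither."""
    labels = set(estimates.values())
    if len(labels) == 1:
        return labels.pop()
    # e.g. ask_user might render: "You seem upset -- is that right?"
    confirmed = ask_user(sorted(labels))
    return confirmed if confirmed in labels else None
```

The inquiry step is what distinguishes this design from simple majority voting among estimators.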
- Patent number: D798596
  Type: Grant
  Filed: March 29, 2016
  Date of Patent: October 3, 2017
  Assignee: Toyota Jidosha Kabushiki Kaisha
  Inventors: Fuminori Kataoka, Norihide Umeyama, Toshifumi Nishijima, Hiromi Tonegawa, Shigeyuki Susaki, Hiroshi Okamoto, Nobuaki Kuwata, Keizou Tanaka