Patents by Inventor Rika Horinaka
Rika Horinaka has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 7987091
Abstract: A robot can make a dialog customized for the user by first storing various pieces of information appendant to an object as values of the corresponding items of the object. A topic that is related to the topic used in the immediately preceding conversation is then selected. Then, an acquisition conversation for acquiring the value of the item of the selected topic or a utilization conversation for utilizing the value of the item of the topic that is already stored is generated as the next conversation. The value acquired by the acquisition conversation is stored as the value of the corresponding item.
Type: Grant
Filed: December 2, 2003
Date of Patent: July 26, 2011
Assignee: Sony Corporation
Inventors: Kazumi Aoyama, Yukiko Yoshiike, Shinya Ohtani, Rika Horinaka, Hideki Shimomura
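The acquisition/utilization loop the abstract describes can be sketched roughly as follows. This is a minimal illustration under assumed semantics, not the patented implementation; all names (`MemoryDialog`, `next_conversation`, the item list) are hypothetical.

```python
# Hypothetical sketch: store item values about an object, pick a topic
# related to the previous one, and generate either an acquisition or a
# utilization conversation depending on whether the value is known.

class MemoryDialog:
    def __init__(self):
        # Items appendant to an object, stored as item -> value (None = unknown).
        self.memory = {"name": None, "birthday": None, "favorite food": None}
        self.last_topic = None

    def select_topic(self):
        # Prefer a topic related to the immediately preceding one; as a
        # stand-in, step to the next item in a fixed cyclic order.
        items = list(self.memory)
        if self.last_topic in items:
            return items[(items.index(self.last_topic) + 1) % len(items)]
        return items[0]

    def next_conversation(self):
        topic = self.select_topic()
        self.last_topic = topic
        if self.memory[topic] is None:
            # Acquisition conversation: ask for the unknown value.
            return ("acquire", f"What is your {topic}?")
        # Utilization conversation: reuse the stored value.
        return ("utilize", f"Your {topic} is {self.memory[topic]}, right?")

    def store_answer(self, topic, value):
        # Store the value obtained by the acquisition conversation.
        self.memory[topic] = value
```

After the robot acquires a value (e.g. `store_answer("name", "Taro")`), a later turn on the same topic becomes a utilization conversation that reuses it.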
-
Patent number: 7813835
Abstract: A behavior control system for a robot apparatus that operates autonomously. The behavior control system includes a plurality of behavior description sections for describing motions of the robot and an external environment recognition section. The system also includes an internal state management section for managing an internal state of the robot in response to the recognized external environment and/or a result of execution of a behavior, and for managing emotions. A behavior evaluation section evaluates execution of behaviors in response to the external environment and/or the internal state.
Type: Grant
Filed: March 17, 2003
Date of Patent: October 12, 2010
Assignee: Sony Corporation
Inventors: Masahiro Fujita, Tsuyoshi Takagi, Rika Horinaka, Shinya Ohtani
-
Patent number: 7251606
Abstract: Sentences corresponding to internal statuses of a robot device or the like are created and uttered, thereby expressing the internal statuses. The robot device or the like comprises means for recognizing an external status, and means for generating an emotion based on the internal status, whereby a change in the emotion is reflected upon a dialogue. The internal status is not associated with a sentence, but exists independently of the system and is always varied depending on various external inputs and internal changes of the system. Accordingly, even when the same question is put to the robot device or the like, the contents of a reply are changed depending on the internal status at that time, and the manner of providing a reply also differs depending on the internal status.
Type: Grant
Filed: March 26, 2002
Date of Patent: July 31, 2007
Assignee: Sony Corporation
Inventors: Rika Horinaka, Masahiro Fujita, Atsushi Okubo, Kenta Kawamoto, Gabriel Costa, Masaki Fukuchi, Osamu Hanagata, Kotaro Sabe
-
Patent number: 7222076
Abstract: The present invention relates to a voice output apparatus capable of stopping voice output and producing a reaction in response to a particular stimulus, so that the voice is output in a natural manner. A rule-based synthesizer 24 produces a synthesized voice and outputs it. For example, when a synthesized voice “Where is an exit?” was produced and outputting of the synthesized voice data has proceeded until “Where is an e” has been output, if a user taps a robot, then a reaction generator 30 determines, by referring to a reaction database 31, that a reaction voice “Ouch!” should be output in response to being tapped. The reaction generator 30 then controls an output controller 27 so as to stop outputting the synthesized voice “Where is an exit?” and output the reaction voice “Ouch!”.
Type: Grant
Filed: March 22, 2002
Date of Patent: May 22, 2007
Assignee: Sony Corporation
Inventors: Erika Kobayashi, Makoto Akabane, Tomoaki Nitta, Hideki Kishi, Rika Horinaka, Masashi Takeda
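The interruption behavior in the abstract's example can be sketched as follows. This is a toy illustration, not the claimed apparatus; the reaction table and function names are assumed for the example, and character-by-character output stands in for streaming synthesized voice data.

```python
# Hypothetical sketch: output synthesized voice incrementally; when a
# stimulus arrives, stop the synthesized voice and output the reaction
# voice looked up in a reaction database.

REACTION_DB = {"tap": "Ouch!"}  # stimulus -> reaction voice (illustrative)

def speak(text, stimulus_at=None, stimulus=None):
    """Return (voice output so far, reaction voice or None).

    `stimulus_at` is the character position at which the stimulus
    interrupts output; characters stand in for voice-data chunks.
    """
    out = []
    for i, ch in enumerate(text):
        if stimulus_at is not None and i == stimulus_at:
            # Stop outputting the synthesized voice, switch to the reaction.
            return "".join(out), REACTION_DB.get(stimulus)
        out.append(ch)
    return "".join(out), None
```

With the abstract's example, a tap arriving after “Where is an e” has been output truncates the utterance and yields the reaction voice “Ouch!”.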
-
Patent number: 7216082
Abstract: A robot system includes a speech recognition unit for converting speech information into text information, and a database retrieval unit for extracting a keyword included in the text information from a database. By designating a plurality of basic actions on a speech basis, and storing an action record, a combined action formed by combining the plurality of basic actions in time-series order can be named as a new action only in voice-based interaction. A user can designate complicated continuous actions by using only one word, and can easily have a conversation with the robot.
Type: Grant
Filed: March 26, 2002
Date of Patent: May 8, 2007
Assignee: Sony Corporation
Inventors: Atsushi Okubo, Gabriel Costa, Kenta Kawamoto, Rika Horinaka, Masaki Fukuchi, Masahiro Fujita
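The action-naming idea above can be sketched in a few lines. This is a simplified stand-in under assumed semantics (the class and method names are hypothetical, and speech recognition is omitted): basic actions are executed and recorded in time-series order, and a recent run of the record can be bound to a new name.

```python
# Hypothetical sketch: record executed basic actions, then name a recent
# time-series run of them as a single new combined action.

class ActionVocabulary:
    def __init__(self, basic_actions):
        # Each known action maps to its basic-action sequence.
        self.actions = {name: [name] for name in basic_actions}
        self.record = []  # time-series record of executed basic actions

    def execute(self, name):
        # Expand a named action into basic actions and record them.
        seq = self.actions[name]
        self.record.extend(seq)
        return seq

    def name_recent(self, new_name, count):
        # Bind the last `count` recorded basic actions to one new name,
        # so the user can invoke the whole sequence with a single word.
        self.actions[new_name] = self.record[-count:]
```

After executing, say, “walk” then “wave” and naming that pair “greet”, the single word “greet” replays both basic actions in order.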
-
Publication number: 20060047362
Abstract: No robot that can make a dialog customized for the user is known to date. According to the invention, various pieces of information appendant to an object are stored as values of the corresponding items of the object and a topic that is related to the topic used in the immediately preceding conversation is selected. Then, an acquisition conversation for acquiring the value of the item of the selected topic or a utilization conversation for utilizing the value of the item of the topic that is already stored is generated as the next conversation. The value acquired by the acquisition conversation is stored as the value of the corresponding item.
Type: Application
Filed: December 2, 2003
Publication date: March 2, 2006
Inventors: Kazumi Aoyama, Yukiko Yoshiike, Shinya Ohtani, Rika Horinaka, Hideki Shimomura
-
Patent number: 6862497
Abstract: There is proposed a method that may be universally used for controlling a man-machine interface unit. A learning sample is used in order at least to derive and/or initialize a target action (t) to be carried out and to lead the user from an optional current status (ec) to an optional desired target status (et) as the final status (ef). This learning sample (l) is formed by a data triple made up by an initial status (ei) before an optional action (a) carried out by the user, a final status (ef) after the action taken place, and the action taken place (a).
Type: Grant
Filed: June 3, 2002
Date of Patent: March 1, 2005
Assignees: Sony Corporation, Sony International (Europe) GmbH
Inventors: Thomas Kemp, Ralf Kompe, Raquel Tato, Masahiro Fujita, Katsuki Minamino, Kenta Kawamoto, Rika Horinaka
-
Publication number: 20040243281
Abstract: A situated behavior layer is formed from a tree structure of schemas, and a parent schema calls a Monitor function of a child schema using an external stimulus and an internal state as arguments whereas the child schema returns an AL value as a return value. The child schema calls a Monitor function of its child schema in order to calculate an AL value of the child schema itself. AL values from sub trees are returned to a root schema, and evaluation of behaviors and execution of a behavior are performed concurrently. Further, emotions are divided into a plurality of layers depending upon the significance of presence thereof, and it is determined which one of a plurality of such determined motions should be selectively performed depending upon an external environment and an internal state at the time.
Type: Application
Filed: May 21, 2004
Publication date: December 2, 2004
Inventors: Masahiro Fujita, Tsuyoshi Takagi, Rika Horinaka, Shinya Ohtani
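The recursive Monitor call over the schema tree can be sketched as below. This is a minimal reading of the abstract, not the actual situated behavior layer; the `Schema` class, the leaf evaluation functions, and the choice of "highest AL wins" at the root are assumptions for illustration.

```python
# Hypothetical sketch: a parent schema calls each child's Monitor function
# with the external stimulus and internal state; each child returns an
# activation-level (AL) value, and AL values propagate up to the root.

class Schema:
    def __init__(self, name, evaluate=None, children=()):
        self.name = name
        self.evaluate = evaluate      # leaf: (stimulus, state) -> AL value
        self.children = list(children)

    def monitor(self, stimulus, state):
        """Return (AL value, schema) for the best behavior in this subtree."""
        if not self.children:
            return self.evaluate(stimulus, state), self
        # Recurse: each child computes its own AL via its Monitor function,
        # and the highest AL from the sub trees is returned upward.
        results = [c.monitor(stimulus, state) for c in self.children]
        return max(results, key=lambda r: r[0])
```

At the root, the schema whose subtree reported the highest AL value would be selected for execution, given the current external stimulus and internal state.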
-
Patent number: 6718232
Abstract: A robot apparatus causes the emotion in a feeling part (130) to be changed based on the information acquired by a perception part (120) to manifest the behavior of information acquisition as autonomous behavior. The robot apparatus includes a behavior control part for causing the robot apparatus to manifest a language acquisition behavior and a meaning acquisition part. The robot apparatus also includes a control part for performing the behavior control of pointing its object of learning. The robot apparatus causes changes in internal states, which are ascribable to the object, to be stored in a memory part in association with the object.
Type: Grant
Filed: September 24, 2002
Date of Patent: April 6, 2004
Assignee: Sony Corporation
Inventors: Masahiro Fujita, Tsuyoshi Takagi, Rika Horinaka, Jun Yokono, Gabriel Costa, Hideki Shimomura, Katsuki Minamino
-
Publication number: 20040054519
Abstract: The present invention relates to a language processing apparatus capable of generating an effective synthesized sound by performing language processing taking into account an onomatopoeia or a mimetic word. An effective synthesized voice is produced from a given text such that the synthesized voice includes a “sound” representing the meaning of an onomatopoeia or a mimetic word included in the given text. An onomatopoeic/mimetic word analyzer 21 extracts the onomatopoeia or the mimetic word from the text, and an onomatopoeic/mimetic word processing unit 27 produces acoustic data of a sound effect corresponding to the extracted onomatopoeia or mimetic word. A voice mixer 26 superimposes the acoustic data produced by the onomatopoeic/mimetic word processing unit 27 on the whole or a part of the synthesized voice data, corresponding to the text, produced by a rule-based synthesizer 24. The present invention may be applied to a robot having a voice synthesizer.
Type: Application
Filed: May 23, 2003
Publication date: March 18, 2004
Inventors: Erika Kobayashi, Makoto Akabane, Tomoaki Nitta, Hideki Kishi, Rika Horinaka, Masashi Takeda
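The analyzer/processing-unit pipeline above can be sketched in miniature. This is only an illustration of the idea (the effect table, word list, and function name are invented for the example); actual onomatopoeia extraction and audio mixing are far more involved.

```python
# Hypothetical sketch: extract onomatopoeic words from a text and pair
# each with the acoustic data (here, a file name) of a sound effect to be
# superimposed on the corresponding part of the synthesized voice.

ONOMATOPOEIA_EFFECTS = {
    "meow": "cat_cry.wav",   # illustrative entries only
    "bang": "impact.wav",
}

def synthesize_with_effects(text):
    """Return (text to synthesize, [(word, effect)]) so a mixer could
    overlay each effect on the stretch of voice data for that word."""
    effects = []
    for raw in text.lower().split():
        word = raw.strip(",.!?;")  # drop trailing punctuation
        if word in ONOMATOPOEIA_EFFECTS:
            effects.append((word, ONOMATOPOEIA_EFFECTS[word]))
    return text, effects
```

A downstream mixer, as in the abstract, would then superimpose each listed effect on the whole or the matching part of the rule-synthesized voice.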
-
Publication number: 20040039483
Abstract: There is proposed a method that may be universally used for controlling a man-machine interface unit. A learning sample is used in order at least to derive and/or initialize a target action (t) to be carried out and to lead the user from an optional current status (ec) to an optional desired target status (et) as the final status (ef). This learning sample (l) is formed by a data triple made up by an initial status (ei) before an optional action (a) carried out by the user, a final status (ef) after the action taken place, and the action taken place (a).
Type: Application
Filed: June 16, 2003
Publication date: February 26, 2004
Inventors: Thomas Kemp, Ralf Kompe, Raquel Tato, Masahiro Fujita, Katsuki Minamino, Kenta Kawamoto, Rika Horinaka
-
Publication number: 20030187653
Abstract: A robot system includes a speech recognition unit for converting speech information into text information, and a database retrieval unit for extracting a keyword included in the text information from a database. By designating a plurality of basic actions on a speech basis, and storing an action record, a combined action formed by combining the plurality of basic actions in time-series order can be named as a new action only in voice-based interaction. A user can designate complicated continuous actions by using only one word, and can easily have a conversation with the robot.
Type: Application
Filed: June 2, 2003
Publication date: October 2, 2003
Inventors: Atsushi Okubo, Gabriel Costa, Kenta Kawamoto, Rika Horinaka, Masaki Fukuchi, Masahiro Fujita
-
Publication number: 20030182122
Abstract: Sentences corresponding to internal statuses of a robot device or the like are created and uttered, thereby expressing the internal statuses. The robot device or the like comprises means for recognizing an external status, and means for generating an emotion based on the internal status, whereby a change in the emotion is reflected upon a dialogue. The internal status is not associated with a sentence, but exists independently of the system and is always varied depending on various external inputs and internal changes of the system. Accordingly, even when the same question is put to the robot device or the like, the contents of a reply are changed depending on the internal status at that time, and the manner of providing a reply also differs depending on the internal status.
Type: Application
Filed: May 19, 2003
Publication date: September 25, 2003
Inventors: Rika Horinaka, Masahiro Fujita, Atsushi Okubo, Kenta Kawamoto, Gabriel Costa, Masaki Fukuchi, Osamu Hanagata, Kotaro Sabe
-
Publication number: 20030171850
Abstract: The present invention relates to a voice output apparatus capable of stopping voice output and producing a reaction in response to a particular stimulus, so that the voice is output in a natural manner. A rule-based synthesizer 24 produces a synthesized voice and outputs it. For example, when a synthesized voice “Where is an exit?” was produced and outputting of the synthesized voice data has proceeded until “Where is an e” has been output, if a user taps a robot, then a reaction generator 30 determines, by referring to a reaction database 31, that a reaction voice “Ouch!” should be output in response to being tapped. The reaction generator 30 then controls an output controller 27 so as to stop outputting the synthesized voice “Where is an exit?” and output the reaction voice “Ouch!”.
Type: Application
Filed: May 9, 2003
Publication date: September 11, 2003
Inventors: Erika Kobayashi, Makoto Akabane, Tomoaki Nitta, Hideki Kishi, Rika Horinaka, Masashi Takeda
-
Publication number: 20030060930
Abstract: A robot apparatus causes the emotion in a feeling part (130) to be changed based on the information acquired by a perception part (120) to manifest the behavior of information acquisition as autonomous behavior. The robot apparatus includes a behavior control part for causing the robot apparatus to manifest a language acquisition behavior and a meaning acquisition part. The robot apparatus also includes a control part for performing the behavior control of pointing its object of learning. The robot apparatus causes changes in internal states, which are ascribable to the object, to be stored in a memory part in association with the object.
Type: Application
Filed: September 24, 2002
Publication date: March 27, 2003
Inventors: Masahiro Fujita, Tsuyoshi Takagi, Rika Horinaka, Jun Yokono, Gabriel Costa, Hideki Shimomura, Katsuki Minamino