Patents by Inventor Kenta Kawamoto

Kenta Kawamoto has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20070185825
    Abstract: A learning system is provided, which includes network storage means for storing a network including a plurality of nodes, each of which holds a dynamics; and learning means for updating the dynamics of the network in a self-organizing manner on the basis of measured time-series data.
    Type: Application
    Filed: January 30, 2007
    Publication date: August 9, 2007
    Inventors: Masato Ito, Katsuki Minamino, Yukiko Yoshiike, Hirotaka Suzuki, Kenta Kawamoto
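The self-organizing update described in this abstract can be loosely sketched as follows. The AR(1) node model, winner selection, and learning rate are all assumptions made for illustration; the patent itself does not specify them.

```python
# A loose sketch of the abstract above, in the style of a self-organizing map:
# each node holds a simple dynamics (here, a scalar AR(1) model x[t+1] = coeff * x[t]),
# the node that best predicts the measured time series wins, and its dynamics
# is pulled toward the data. Everything here is illustrative, not the patent.

class DynamicsNode:
    def __init__(self, coeff):
        self.coeff = coeff              # the dynamics held by this node

    def error(self, series):
        """Prediction error of this node's dynamics on the measured series."""
        return sum((series[t + 1] - self.coeff * series[t]) ** 2
                   for t in range(len(series) - 1))

def update_network(nodes, series, rate=0.5):
    """Winner-take-most update of the node dynamics from measured data."""
    winner = min(nodes, key=lambda n: n.error(series))
    # least-squares target for the AR(1) coefficient on this series
    num = sum(series[t + 1] * series[t] for t in range(len(series) - 1))
    den = sum(series[t] ** 2 for t in range(len(series) - 1))
    winner.coeff += rate * (num / den - winner.coeff)
    return winner

nodes = [DynamicsNode(0.1), DynamicsNode(0.9)]
series = [1.0, 0.8, 0.64, 0.512]        # generated by coefficient 0.8
w = update_network(nodes, series)       # the 0.9 node wins and moves toward 0.8
print(round(w.coeff, 2))   # → 0.85
```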
  • Patent number: 7251606
    Abstract: Sentences corresponding to internal statuses of a robot device or the like are created and uttered, thereby expressing the internal statuses. The robot device or the like comprises means for recognizing an external status and means for generating an emotion based on the internal status, whereby a change in the emotion is reflected in a dialogue. The internal status is not associated with a sentence; it exists independently of the system and varies constantly depending on various external inputs and internal changes of the system. Accordingly, even when the same question is put to the robot device or the like, the contents of a reply change depending on the internal status at that time, and the manner of providing a reply also differs depending on the internal status.
    Type: Grant
    Filed: March 26, 2002
    Date of Patent: July 31, 2007
    Assignee: Sony Corporation
    Inventors: Rika Horinaka, Masahiro Fujita, Atsushi Okubo, Kenta Kawamoto, Gabriel Costa, Masaki Fukuchi, Osamu Hanagata, Kotaro Sabe
  • Patent number: 7228201
    Abstract: A robot device (1) has a central processing process (CPU) having a plurality of objects and adapted for carrying out control processing on the basis of inter-object communication carried out between the objects, the central processing process controlling accesses by the plurality of objects to a shared memory shared by the plurality of objects and thus carrying out inter-object communication. Specifically, the central processing process generates pointers P11, P12, P13, P21, P22 in accordance with accesses by the objects to predetermined areas M1, M2 on a shared memory M, then measures the pointers by the corresponding number-of-reference measuring objects RO1, RO2, and controls the accesses in accordance with the number of pointers measured, thereby carrying out inter-object communication. This enables easy realization of smooth inter-process communication.
    Type: Grant
    Filed: August 16, 2001
    Date of Patent: June 5, 2007
    Assignee: Sony Corporation
    Inventors: Kohtaro Sabe, Kenta Kawamoto
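The reference-counting scheme in this abstract can be sketched as follows. This is an illustrative sketch, not the patented implementation; all class and method names (SharedRegion, acquire, release, and so on) are assumptions.

```python
# Illustrative sketch of the abstract above: objects obtain pointers to
# areas on a shared memory, the number of outstanding pointers is counted,
# and an area is reclaimed only when its count reaches zero.

class SharedRegion:
    """An area on the shared memory that several objects may reference."""
    def __init__(self, data):
        self.data = data
        self.ref_count = 0          # number of outstanding pointers

class SharedMemory:
    def __init__(self):
        self._regions = {}

    def write(self, key, data):
        self._regions[key] = SharedRegion(data)

    def acquire(self, key):
        """An object obtains a pointer to an area; the count goes up."""
        region = self._regions[key]
        region.ref_count += 1
        return region

    def release(self, key):
        """When the last pointer is released, the area can be reclaimed."""
        region = self._regions[key]
        region.ref_count -= 1
        if region.ref_count == 0:
            del self._regions[key]

mem = SharedMemory()
mem.write("M1", "sensor frame")
mem.acquire("M1")                # object 1 references area M1
mem.acquire("M1")                # object 2 references area M1
mem.release("M1")
mem.release("M1")                # count reaches zero; M1 is reclaimed
print("M1" in mem._regions)      # → False
```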
  • Publication number: 20070122012
    Abstract: A robot apparatus includes a face tracking module (M2) for tracking a face in an image photographed by a CCD camera, a face detecting module (M1) for detecting face data of the face in the photographed image based on the face tracking information from the face tracking module (M2), and a face identification module (M3) for identifying a specified face based on the face data detected by the face detecting module (M1).
    Type: Application
    Filed: January 22, 2007
    Publication date: May 31, 2007
    Inventors: Atsushi Okubo, Kohtaro Sabe, Kenta Kawamoto, Masaki Fukuchi
  • Patent number: 7216082
    Abstract: A robot system includes a speech recognition unit for converting speech information into text information, and a database retrieval unit for extracting a keyword included in the text information from a database. By designating a plurality of basic actions by speech and storing an action record, a combined action formed by combining the plurality of basic actions in time-series order can be named as a new action solely through voice-based interaction. A user can designate complicated continuous actions with a single word, and can easily have a conversation with the robot.
    Type: Grant
    Filed: March 26, 2002
    Date of Patent: May 8, 2007
    Assignee: Sony Corporation
    Inventors: Atsushi Okubo, Gabriel Costa, Kenta Kawamoto, Rika Horinaka, Masaki Fukuchi, Masahiro Fujita
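The action-naming idea in this abstract can be sketched as follows. The data structures and names (ActionVocabulary, name_recent, and the action words) are hypothetical, chosen only to illustrate the mechanism of binding a recorded action sequence to a new word.

```python
# Hypothetical sketch of the abstract above: basic actions designated one at
# a time are stored in an action record, and the recorded time-series can
# then be bound to a single new word that replays the whole sequence.

class ActionVocabulary:
    def __init__(self, basic_actions):
        self.actions = {name: [name] for name in basic_actions}
        self.record = []            # time-series record of executed actions

    def perform(self, word):
        for step in self.actions[word]:
            self.record.append(step)

    def name_recent(self, new_word, length):
        """Bind the last `length` performed actions to a new word."""
        self.actions[new_word] = list(self.record[-length:])

robot = ActionVocabulary(["walk", "turn", "sit"])
robot.perform("walk")
robot.perform("turn")
robot.name_recent("patrol", 2)   # "patrol" now means walk then turn
robot.perform("patrol")
print(robot.record)              # → ['walk', 'turn', 'walk', 'turn']
```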
  • Patent number: 7200249
    Abstract: A robot apparatus includes a face tracking module (M2) for tracking a face in an image photographed by a CCD camera, a face detecting module (M1) for detecting face data of the face in the photographed image based on the face tracking information from the face tracking module (M2), and a face identification module (M3) for identifying a specified face based on the face data detected by the face detecting module (M1).
    Type: Grant
    Filed: November 19, 2001
    Date of Patent: April 3, 2007
    Assignee: Sony Corporation
    Inventors: Atsushi Okubo, Kohtaro Sabe, Kenta Kawamoto, Masaki Fukuchi
  • Patent number: 7088853
    Abstract: A plural number of letters or characters inferred from the results of letter/character recognition of an image photographed by a CCD camera (20), a plural number of kana readings inferred from the letters or characters, and the pronunciations corresponding to the kana readings are generated in a pronunciation information generating unit (150). The plural readings obtained are matched against the pronunciation from the user acquired by a microphone (23) to specify one kana reading and one pronunciation (reading) from among the plural generated candidates.
    Type: Grant
    Filed: December 31, 2002
    Date of Patent: August 8, 2006
    Assignee: Sony Corporation
    Inventors: Atsuo Hiroe, Katsuki Minamino, Kenta Kawamoto, Kohtaro Sabe, Takeshi Ohashi
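The matching step in this abstract can be sketched as follows. The edit-distance matcher is a stand-in for real acoustic matching, and the candidate readings are invented for the example; neither is taken from the patent.

```python
# Hypothetical sketch of the abstract above: several candidate kana readings
# are generated for the recognized characters, and the candidate whose
# pronunciation is closest to what the microphone heard is selected.

def edit_distance(a, b):
    """Levenshtein distance between two strings (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def select_reading(candidates, heard):
    """Pick the candidate reading best matching the user's pronunciation."""
    return min(candidates, key=lambda r: edit_distance(r, heard))

# illustrative candidate readings for one recognized character string
candidates = ["yamada", "sanden", "yamata"]
print(select_reading(candidates, "yamada"))   # → yamada
```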
  • Publication number: 20050280809
    Abstract: An object detecting device 1 comprises a scaling section 3 for generating scaled images by scaling down a gradation image input from an image output section 2, a scanning section 4 for sequentially scanning the scaled images and cutting out window images from them, and a discriminator 5 for judging whether each window image is an object or not. The discriminator 5 includes a plurality of weak discriminators that are learnt as a group by boosting and an adder for making a weighted majority decision from the outputs of the weak discriminators. Each weak discriminator outputs an estimate of the likelihood that a window image is an object, using the difference between the luminance values of two pixels. The discriminator 5 suspends the computation of estimates for a window image that is judged to be a non-object, using a threshold value that is learnt in advance.
    Type: Application
    Filed: November 22, 2004
    Publication date: December 22, 2005
    Inventors: Kenichi Hidai, Kohtaro Sabe, Kenta Kawamoto
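The early-rejection scheme in this abstract can be sketched as follows. The pixel positions, thresholds, and weights below are made up for the example; in the patent they would be learnt by boosting.

```python
# Illustrative sketch of the abstract above: each weak discriminator
# compares the luminance of two pixels, the weighted majority accumulates
# as a running score, and the score is checked against a learnt abort
# threshold so that non-object windows are rejected early.

def classify_window(window, weak_discriminators, reject_thresholds):
    """window: 2-D list of luminances; returns (is_object, steps_evaluated)."""
    score = 0.0
    for t, ((p1, p2), theta, weight) in enumerate(weak_discriminators):
        diff = window[p1[0]][p1[1]] - window[p2[0]][p2[1]]
        estimate = 1.0 if diff > theta else -1.0
        score += weight * estimate             # weighted majority in progress
        if score < reject_thresholds[t]:       # learnt abort threshold
            return False, t + 1                # suspend computation early
    return score > 0.0, len(weak_discriminators)

# two weak discriminators: ((pixel1, pixel2), threshold, weight)
weak = [(((0, 0), (1, 1)), 50, 0.6), (((0, 1), (1, 0)), 0, 0.4)]
rejects = [-0.5, 0.0]
bright = [[200, 10], [10, 10]]   # object-like window
flat = [[10, 10], [10, 10]]      # non-object window
print(classify_window(bright, weak, rejects))  # → (True, 2)
print(classify_window(flat, weak, rejects))    # → (False, 1), rejected early
```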
  • Publication number: 20050222709
    Abstract: The present invention is applied to, for example, a legged mobile robot. It makes it possible to calculate, using kinematics, the movement amount between the portion of the robot apparatus that had been in contact with the floor up to now and the next portion of the robot apparatus in contact with the floor, and to switch the transformation to a coordinate system serving as an observation reference as a result of the switching between the floor contact portions.
    Type: Application
    Filed: March 24, 2004
    Publication date: October 6, 2005
    Inventors: Kohtaro Sabe, Takeshi Ohashi, Kenta Kawamoto
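The contact-switching odometry in this abstract can be roughly sketched as follows. Two-dimensional poses keep the example small; the real apparatus would use full 3-D homogeneous transforms, and the step values here are invented.

```python
# Rough sketch of the abstract above: when support switches from one contact
# portion to the next, the pose of the new contact point is obtained by
# chaining the relative transform (computed by kinematics) onto the old one,
# and the observation reference frame is switched to the new contact point.
import math

def compose(pose, step):
    """Chain a relative transform (dx, dy, dtheta) onto a world pose."""
    x, y, th = pose
    dx, dy, dth = step
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

# world pose (x, y, heading) of the current support contact portion
support = (0.0, 0.0, 0.0)
# each entry: relative transform from the old to the next contact portion
for step in [(0.3, 0.1, 0.0), (0.3, -0.1, 0.0)]:
    support = compose(support, step)      # switch the reference frame
print(tuple(round(v, 2) for v in support))   # → (0.6, 0.0, 0.0)
```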
  • Publication number: 20050102246
    Abstract: A facial expression recognition system and a learning method for the system are provided. The system uses a face detection apparatus that realizes efficient learning and high-speed detection processing based on ensemble learning when detecting an area representing a detection target, is robust against shifts of the face position included in images, and is capable of highly accurate expression recognition. When learning the data to be used by the face detection apparatus by Adaboost, the following processing is repeated to sequentially generate weak hypotheses and thus acquire a final hypothesis: high-performance weak hypotheses are selected from all weak hypotheses, new weak hypotheses are generated from these on the basis of statistical characteristics, and the one weak hypothesis having the highest discrimination performance is selected from among them.
    Type: Application
    Filed: June 17, 2004
    Publication date: May 12, 2005
    Inventors: Javier Movellan, Marian Bartlett, Gwendolen Littlewort, John Hershey, Ian Fasel, Eric Carlson, Josh Susskind, Kohtaro Sabe, Kenta Kawamoto, Kenichi Hidai
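One Adaboost round of the kind this abstract builds on can be sketched as follows. The generation of new candidates from statistical characteristics is simplified away; only the selection of the best weak hypothesis and the sample re-weighting are shown, and all names are illustrative.

```python
# Sketch of a single Adaboost round in the spirit of the abstract above:
# from a pool of candidate weak hypotheses, select the one with the lowest
# weighted error, then re-weight the samples so that misclassified samples
# matter more in the next round.
import math

def adaboost_round(samples, labels, weights, candidates):
    """candidates: list of functions mapping a sample to +1/-1."""
    best, best_err = None, float("inf")
    for h in candidates:
        err = sum(w for x, y, w in zip(samples, labels, weights) if h(x) != y)
        if err < best_err:
            best, best_err = h, err
    alpha = 0.5 * math.log((1 - best_err) / max(best_err, 1e-10))
    # samples the chosen hypothesis got wrong gain weight (y * h(x) = -1)
    new_w = [w * math.exp(-alpha * y * best(x))
             for x, y, w in zip(samples, labels, weights)]
    total = sum(new_w)
    return best, alpha, [w / total for w in new_w]

samples = [-2.0, -1.0, 1.0, 2.0]
labels = [-1, -1, 1, 1]
stumps = [lambda x: 1 if x > 0 else -1,
          lambda x: 1 if x > 1.5 else -1]
h, alpha, w = adaboost_round(samples, labels, [0.25] * 4, stumps)
print(h is stumps[0], round(sum(w), 6))   # → True 1.0
```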
  • Patent number: 6862497
    Abstract: There is proposed a method that may be universally used for controlling a man-machine interface unit. A learning sample is used in order at least to derive and/or initialize a target action (t) to be carried out and to lead the user from an optional current status (ec) to an optional desired target status (et) as the final status (ef). This learning sample (l) is formed by a data triple made up of an initial status (ei) before an optional action (a) carried out by the user, a final status (ef) after the action taken place, and the action taken place (a).
    Type: Grant
    Filed: June 3, 2002
    Date of Patent: March 1, 2005
    Assignees: Sony Corporation, Sony International (Europe) GmbH
    Inventors: Thomas Kemp, Ralf Kompe, Raquel Tato, Masahiro Fujita, Katsuki Minamino, Kenta Kawamoto, Rika Horinaka
  • Publication number: 20050036649
    Abstract: A robot includes a face extracting section for extracting features of a face included in an image captured by a CCD camera, and a face recognition section for recognizing the face based on the result of face extraction by the face extracting section. The face extracting section is implemented by Gabor filters that filter images using a plurality of filters that have orientation selectivity and that are associated with different frequency components. The face recognition section is implemented by a support vector machine that maps the result of face extraction to a non-linear space and obtains a hyperplane separating face from non-face in that space. The robot can thus recognize the face of a user within a predetermined time under a dynamically changing environment.
    Type: Application
    Filed: August 21, 2002
    Publication date: February 17, 2005
    Inventors: Jun Yokono, Kohtaro Sabe, Kenta Kawamoto
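The filter bank described in this abstract can be sketched minimally as follows. The kernel size, orientations, and frequencies are assumptions for the example; the SVM back end is omitted, since any margin classifier could consume the filter responses.

```python
# A minimal sketch of the front end in the abstract above: a bank of Gabor
# kernels (Gaussian-windowed cosine gratings) with orientation selectivity
# at several frequency components.
import math

def gabor_kernel(size, orientation, frequency, sigma=2.0):
    """Real part of a Gabor filter as a 2-D list of coefficients."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # rotate coordinates to the filter's orientation
            xr = x * math.cos(orientation) + y * math.sin(orientation)
            envelope = math.exp(-(x * x + y * y) / (2 * sigma * sigma))
            row.append(envelope * math.cos(2 * math.pi * frequency * xr))
        kernel.append(row)
    return kernel

def filter_bank(size=9, orientations=4, frequencies=(0.2, 0.4)):
    """One kernel per (orientation, frequency) pair: 4 x 2 = 8 kernels."""
    return [gabor_kernel(size, o * math.pi / orientations, f)
            for o in range(orientations) for f in frequencies]

bank = filter_bank()
print(len(bank))   # → 8
```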
  • Patent number: 6850818
    Abstract: A robot apparatus integrates individual recognition results received asynchronously and then passes the integrated information to a behavior module. Thus, handling of information in the behavior module is facilitated. Since information regarding recognized observation results is held as a memory, even if observation results are temporarily missing, it appears to an upper module that items are constantly there in perception. Accordingly, robustness against recognizer errors and sensor noise is improved, so that a stable system that is not dependent on the timing of notifications by recognizers is implemented. Thus, the robot apparatus integrates a plurality of recognition results from the external environment and handles the integrated information as meaningful symbol information, allowing sophisticated behavior control.
    Type: Grant
    Filed: October 22, 2002
    Date of Patent: February 1, 2005
    Assignee: Sony Corporation
    Inventors: Kohtaro Sabe, Kenta Kawamoto
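The memory-backed integration in this abstract can be sketched as follows. All class and field names (ShortTermMemory, retention, and so on) are illustrative assumptions, as is the one-second retention time.

```python
# Hedged sketch of the abstract above: asynchronous recognition results are
# stored with a timestamp, and the upper behavior module sees the last known
# items even when a recognizer's notification is temporarily missing.

class ShortTermMemory:
    def __init__(self, retention=1.0):
        self.retention = retention          # seconds an item is kept alive
        self.items = {}                     # symbol -> (observation, time)

    def notify(self, symbol, observation, now):
        """Called asynchronously by each recognizer."""
        self.items[symbol] = (observation, now)

    def perceive(self, now):
        """The behavior module's view: items persist across missed frames."""
        return {s: obs for s, (obs, t) in self.items.items()
                if now - t <= self.retention}

stm = ShortTermMemory(retention=1.0)
stm.notify("face", {"id": "user1"}, now=0.0)
stm.notify("ball", {"color": "pink"}, now=0.8)
# at t=0.9 the face recognizer has sent nothing new, but the item persists
print(sorted(stm.perceive(now=0.9)))   # → ['ball', 'face']
print(sorted(stm.perceive(now=1.5)))   # → ['ball'] (face has expired)
```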
  • Publication number: 20040117063
    Abstract: A robot apparatus integrates individual recognition results received asynchronously and then passes the integrated information to a behavior module. Thus, handling of information in the behavior module is facilitated. Since information regarding recognized observation results is held as a memory, even if observation results are temporarily missing, it appears to an upper module that items are constantly there in perception. Accordingly, robustness against recognizer errors and sensor noise is improved, so that a stable system that is not dependent on the timing of notifications by recognizers is implemented. Thus, the robot apparatus integrates a plurality of recognition results from the external environment and handles the integrated information as meaningful symbol information, allowing sophisticated behavior control.
    Type: Application
    Filed: June 20, 2003
    Publication date: June 17, 2004
    Inventors: Kohtaro Sabe, Kenta Kawamoto
  • Publication number: 20040039483
    Abstract: There is proposed a method that may be universally used for controlling a man-machine interface unit. A learning sample is used in order at least to derive and/or initialize a target action (t) to be carried out and to lead the user from an optional current status (ec) to an optional desired target status (et) as the final status (ef). This learning sample (l) is formed by a data triple made up of an initial status (ei) before an optional action (a) carried out by the user, a final status (ef) after the action taken place, and the action taken place (a).
    Type: Application
    Filed: June 16, 2003
    Publication date: February 26, 2004
    Inventors: Thomas Kemp, Ralf Kompe, Raquel Tato, Masahiro Fujita, Katsuki Minamino, Kenta Kawamoto, Rika Horinaka
  • Publication number: 20040013295
    Abstract: An obstacle recognition apparatus is provided which can recognize an obstacle by accurately extracting a floor surface. It includes a distance image generator (222) to produce a distance image using a disparity image and homogeneous transform matrix, a plane detector (223) to detect plane parameters on the basis of the distance image from the distance image generator (222), a coordinate transformer (224) to transform the homogeneous transform matrix into a coordinate of a ground-contact plane of a robot apparatus (1), and a floor surface detector (225) to detect a floor surface using the plane parameters from the plane detector (223) and result of coordinate transformation from the coordinate transformer (224) and supply the plane parameters to an obstacle recognition block (226).
    Type: Application
    Filed: March 13, 2003
    Publication date: January 22, 2004
    Inventors: Kohtaro Sabe, Kenta Kawamoto, Takeshi Ohashi, Masaki Fukuchi, Atsushi Okubo, Steffen Gutmann
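The floor-versus-obstacle decision this pipeline feeds can be sketched as follows. This is not the patented pipeline: the plane parameters are given rather than detected, and the distance threshold is an assumption for the example.

```python
# Illustrative sketch of the final step of the abstract above: given floor
# plane parameters (a, b, c, d) with a*x + b*y + c*z + d = 0 in the robot's
# ground-contact coordinate frame, points from the distance image are
# labelled floor or obstacle by their distance to the plane.

def classify_points(points, plane, tolerance=0.02):
    """points: list of (x, y, z); plane: (a, b, c, d) with a unit normal."""
    a, b, c, d = plane
    floor, obstacles = [], []
    for x, y, z in points:
        distance = abs(a * x + b * y + c * z + d)
        (floor if distance <= tolerance else obstacles).append((x, y, z))
    return floor, obstacles

# the floor is z = 0 in the ground-contact coordinate frame
floor, obstacles = classify_points(
    [(0.5, 0.2, 0.0), (0.6, 0.1, 0.01), (0.4, 0.3, 0.15)],
    plane=(0.0, 0.0, 1.0, 0.0))
print(len(floor), len(obstacles))   # → 2 1
```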
  • Publication number: 20030187653
    Abstract: A robot system includes a speech recognition unit for converting speech information into text information, and a database retrieval unit for extracting a keyword included in the text information from a database. By designating a plurality of basic actions by speech and storing an action record, a combined action formed by combining the plurality of basic actions in time-series order can be named as a new action solely through voice-based interaction. A user can designate complicated continuous actions with a single word, and can easily have a conversation with the robot.
    Type: Application
    Filed: June 2, 2003
    Publication date: October 2, 2003
    Inventors: Atsushi Okubo, Gabriel Costa, Kenta Kawamoto, Rika Horinaka, Masaki Fukuchi, Masahiro Fujita
  • Publication number: 20030182122
    Abstract: Sentences corresponding to internal statuses of a robot device or the like are created and uttered, thereby expressing the internal statuses. The robot device or the like comprises means for recognizing an external status and means for generating an emotion based on the internal status, whereby a change in the emotion is reflected in a dialogue. The internal status is not associated with a sentence; it exists independently of the system and varies constantly depending on various external inputs and internal changes of the system. Accordingly, even when the same question is put to the robot device or the like, the contents of a reply change depending on the internal status at that time, and the manner of providing a reply also differs depending on the internal status.
    Type: Application
    Filed: May 19, 2003
    Publication date: September 25, 2003
    Inventors: Rika Horinaka, Masahiro Fujita, Atsushi Okubo, Kenta Kawamoto, Gabriel Costa, Masaki Fukuchi, Osamu Hanagata, Kotaro Sabe
  • Publication number: 20030152261
    Abstract: A plural number of letters or characters inferred from the results of letter/character recognition of an image photographed by a CCD camera (20), a plural number of kana readings inferred from the letters or characters, and the pronunciations corresponding to the kana readings are generated in a pronunciation information generating unit (150). The plural readings obtained are matched against the pronunciation from the user acquired by a microphone (23) to specify one kana reading and one pronunciation (reading) from among the plural generated candidates.
    Type: Application
    Filed: December 31, 2002
    Publication date: August 14, 2003
    Inventors: Atsuo Hiroe, Katsuki Minamino, Kenta Kawamoto, Kohtaro Sabe, Takeshi Ohashi
  • Publication number: 20030095514
    Abstract: A gateway object (48) for transmitting and receiving data to and from an object of a robot apparatus (1) is allocated to a radio LAN PC card (41) of the robot apparatus (1), and a gateway object (52) for transmitting and receiving data to and from an object on a personal computer (32) is allocated to a network adapter (31) of a remote system (30). When the radio LAN PC card (41) and the network adapter (31) are connected with each other by radio or wired connection, inter-object communication is carried out between the gateway object (48) of the radio LAN PC card (41) and the gateway object (52) of the network adapter (31), thereby carrying out inter-object communication between the object of the robot apparatus (1) and the object of the personal computer (32). Thus, preparation of a program is facilitated.
    Type: Application
    Filed: September 24, 2002
    Publication date: May 22, 2003
    Inventors: Kohtaro Sabe, Kenta Kawamoto, Gabriel Costa