Patents by Inventor Jun Yokono

Jun Yokono has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20070242876
    Abstract: The present invention provides an image processing apparatus for recognizing a predetermined model, whose surface has a plurality of colors, from an input color image obtained by capturing an image of a color object whose surface likewise has a plurality of colors. The image processing apparatus includes a detecting unit configured to detect color areas in the input color image, each color area consisting of adjoining pixels of the same color; and a recognizing unit configured to determine whether the color areas detected by the detecting unit correspond to the same parts of the model as the color areas in a reference color image obtained by capturing an image of the model, and to determine, on the basis of that result, whether the color object in the input color image is the model.
    Type: Application
    Filed: April 5, 2007
    Publication date: October 18, 2007
    Inventors: Kohtaro Sabe, Jun Yokono
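    Note: below is a minimal, hypothetical Python sketch of the detect-then-match idea in this abstract; the connected-component detection and the size-based matching criterion are illustrative assumptions, not the patent's actual method.

```python
import numpy as np
from scipy import ndimage

def detect_color_areas(image, min_pixels=20):
    """Find connected areas of identically colored pixels.

    image: H x W array of integer color labels (already quantized).
    Returns a list of (color, pixel_count, centroid) tuples.
    """
    areas = []
    for color in np.unique(image):
        labeled, n = ndimage.label(image == color)  # connected components of one color
        for region in range(1, n + 1):
            ys, xs = np.nonzero(labeled == region)
            if len(ys) >= min_pixels:
                areas.append((int(color), len(ys), (ys.mean(), xs.mean())))
    return areas

def matches_model(input_areas, reference_areas, tolerance=0.3):
    """Crude check: every reference color area must have a similarly sized
    area of the same color in the input (hypothetical criterion)."""
    for color, size, _ in reference_areas:
        candidates = [s for c, s, _ in input_areas if c == color]
        if not candidates or min(abs(s - size) / size for s in candidates) > tolerance:
            return False
    return True
```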
  • Patent number: 7216112
    Abstract: A memory system, a memory method, and a robotic apparatus are provided that are robust against noise and excellent in memory capacity, volume of calculation, quantity of physical memory, and memory responsiveness. Using a competitive neural network having an input layer composed of a plurality of input neurons and a competitive layer composed of a plurality of competitive neurons, the system stores, in frame form, first information on a symbol as well as second information on the symbol supplied separately from a variety of inputs, in relation to the competitive neurons corresponding to that symbol, by strengthening the connections between the relevant input neurons and competitive neurons in response to the input patterns of the variety of inputs for each symbol.
    Type: Grant
    Filed: March 14, 2003
    Date of Patent: May 8, 2007
    Assignee: Sony Corporation
    Inventors: Shinya Ohtani, Jun Yokono
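    Note: the following is a toy sketch of competitive learning with winner-take-all connection strengthening, the general technique named in this abstract; the class design is an assumption, not the patented frame-based memory.

```python
import numpy as np

class CompetitiveMemory:
    """One competitive neuron per stored symbol; connections to the winner
    are strengthened toward each symbol's input pattern."""

    def __init__(self, n_inputs, n_competitive, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.random((n_competitive, n_inputs))  # input-to-competitive weights
        self.lr = lr

    def store(self, x):
        """Winner-take-all: strengthen the best-matching neuron's connections."""
        winner = int(np.argmax(self.w @ x))
        self.w[winner] += self.lr * (x - self.w[winner])
        return winner

    def recall(self, x):
        """Return the competitive neuron (symbol) that best matches x."""
        return int(np.argmax(self.w @ x))
```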
  • Publication number: 20070098255
    Abstract: An image processing system includes a learning device that generates, in advance, a recognizer for recognizing a recognition target, and a recognition device that uses the recognizer to recognize whether a recognition image includes the recognition target. The learning device includes means for generating model feature points, model feature quantities, learning feature points, learning feature quantities, and learning correlation feature quantities, as well as means for generating the recognizer.
    Type: Application
    Filed: October 31, 2006
    Publication date: May 3, 2007
    Inventor: Jun Yokono
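    Note: a hedged sketch of the pipeline this abstract outlines; normalized correlation between learning and model feature quantities, and a stump-boosting recognizer, are assumptions chosen for illustration.

```python
import numpy as np

def correlation_features(learning_feats, model_feats):
    """Normalized correlation of each learning feature vector (N x D)
    against each model feature vector (M x D); returns N x M."""
    a = learning_feats / np.linalg.norm(learning_feats, axis=1, keepdims=True)
    b = model_feats / np.linalg.norm(model_feats, axis=1, keepdims=True)
    return a @ b.T

def train_stump_booster(X, y, rounds=10):
    """Tiny AdaBoost over threshold stumps; X is N x M correlation
    features, y is +/-1 labels (target present or not)."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    stumps = []
    for _ in range(rounds):
        best = None
        for j in range(d):
            for t in np.percentile(X[:, j], [25, 50, 75]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] > t, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, sign)
        err, j, t, sign = best
        err = min(max(err, 1e-9), 1 - 1e-9)        # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = sign * np.where(X[:, j] > t, 1, -1)
        w *= np.exp(-alpha * y * pred)             # reweight examples
        w /= w.sum()
        stumps.append((alpha, j, t, sign))
    return stumps

def recognize(stumps, X):
    """Weighted vote of the learned stumps."""
    score = sum(a * s * np.where(X[:, j] > t, 1, -1) for a, j, t, s in stumps)
    return np.sign(score)
```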
  • Patent number: 6961705
    Abstract: Speech of a user is recognized by a speech recognizing unit. Based on the result of the speech recognition, a language processing unit, a dialog managing unit and a response generating unit cooperatively create a dialog sentence for exchanging a dialog with the user. Also based on the speech recognition result, the dialog managing unit collects user information regarding, e.g., the interests and tastes of the user. Therefore, user information regarding the interests and tastes of the user can be easily collected.
    Type: Grant
    Filed: January 19, 2001
    Date of Patent: November 1, 2005
    Assignee: Sony Corporation
    Inventors: Seiichi Aoyagi, Yasuharu Asano, Miyuki Tanaka, Jun Yokono, Toshio Oe
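    Note: a purely hypothetical sketch of collecting user interests from recognized utterances; the keyword list and counting scheme are inventions of this example, not the patent's dialog-management method.

```python
from collections import Counter

INTEREST_KEYWORDS = {"soccer", "movies", "cooking", "travel", "music"}  # made up

class DialogManager:
    def __init__(self):
        self.user_interests = Counter()   # accumulated user information

    def handle_utterance(self, recognized_text):
        """Collect interest keywords from a recognized utterance and
        return a (very naive) follow-up dialog sentence."""
        words = recognized_text.lower().split()
        for word in words:
            if word in INTEREST_KEYWORDS:
                self.user_interests[word] += 1
        return f"Tell me more about {words[-1]}." if words else "Pardon?"

dm = DialogManager()
dm.handle_utterance("I watched soccer and movies yesterday")
print(dm.user_interests)   # Counter({'soccer': 1, 'movies': 1})
```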
  • Patent number: 6865446
    Abstract: A robot apparatus is provided which includes body portions such as a head block (4) and leg blocks (3A to 3D), an actuator (25) to actuate the body portions, and a CPU (10) to supply a control signal to the actuator (25). In this apparatus, information about an external force applied to the apparatus, such as the position, magnitude and direction of the external force, is computed on the basis of changes in the control signal supplied from the CPU (10) to drive the actuator (25) and a signal returned to the CPU (10) as a response when the actuator (25) is driven. The external force information is supplied to the CPU (10) and used in selecting the behavior and emotion of the robot apparatus and its next behavior.
    Type: Grant
    Filed: February 21, 2002
    Date of Patent: March 8, 2005
    Assignee: Sony Corporation
    Inventors: Jun Yokono, Masahiro Fujita, Vincent Hugel
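    Note: a minimal sketch of estimating an external force from the gap between the commanded actuator signal and its measured response; the proportional-servo model and stiffness value are assumptions.

```python
def estimate_external_torque(commanded_angle, measured_angle, servo_stiffness=2.5):
    """Estimate external torque on one joint (angles in rad, stiffness in N*m/rad).

    With a proportional servo, a persistent tracking error implies an
    external torque of roughly stiffness * error opposing the motion.
    """
    tracking_error = commanded_angle - measured_angle
    return servo_stiffness * tracking_error

# Example: commanded 0.50 rad but the leg is held at 0.38 rad by a push,
# giving an estimated external torque of 2.5 * 0.12 = 0.30 N*m.
print(estimate_external_torque(0.50, 0.38))
```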
  • Publication number: 20050036649
    Abstract: A robot includes a face extracting section for extracting features of a face included in an image captured by a CCD camera, and a face recognition section for recognizing the face based on the result of face extraction by the face extracting section. The face extracting section is implemented by a bank of Gabor filters that have orientation selectivity and that are associated with different frequency components. The face recognition section is implemented by a support vector machine that maps the result of face extraction to a non-linear space and obtains a hyperplane that separates a face from a non-face in that space. This allows the robot to recognize the face of a user within a predetermined time under a dynamically changing environment.
    Type: Application
    Filed: August 21, 2002
    Publication date: February 17, 2005
    Inventors: Jun Yokono, Kohtaro Sabe, Kenta Kawamoto
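    Note: a sketch of the Gabor-filter-bank plus support-vector-machine pipeline the abstract describes, using OpenCV and scikit-learn; the filter parameters and pooling by mean response are illustrative guesses.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def gabor_bank(ksize=21, sigma=4.0, gamma=0.5,
               orientations=8, wavelengths=(5.0, 10.0, 20.0)):
    """Kernels with several orientations and frequency components."""
    return [cv2.getGaborKernel((ksize, ksize), sigma, theta, lam, gamma)
            for theta in np.linspace(0, np.pi, orientations, endpoint=False)
            for lam in wavelengths]

def gabor_features(gray_image, bank):
    """Pool each filter's response into one number -> a feature vector."""
    return np.array([cv2.filter2D(gray_image, cv2.CV_32F, k).mean()
                     for k in bank])

def train_face_recognizer(face_images, nonface_images):
    """Non-linear (RBF-kernel) SVM separating face from non-face."""
    bank = gabor_bank()
    X = np.array([gabor_features(img, bank)
                  for img in list(face_images) + list(nonface_images)])
    y = [1] * len(face_images) + [0] * len(nonface_images)
    return SVC(kernel="rbf").fit(X, y), bank
```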
  • Patent number: 6754560
    Abstract: A robot is proposed which has a speech recognition unit to detect information supplied simultaneously with, or just before or after, detection of a touch by a touch sensor; an associative memory/recall memory to store, in association with each other, the action made in response to the touch and the input information (a speech signal) detected by the speech recognition unit; and an action generator to control the robot to perform the action recalled by the associative memory/recall memory based on newly acquired input information (a speech signal). The robot also has a sensor data processor that allows the robot to act in response to touch detection by the touch sensor. Thus, the robot can learn actions in association with an input signal such as a speech signal.
    Type: Grant
    Filed: March 14, 2002
    Date of Patent: June 22, 2004
    Assignee: Sony Corporation
    Inventors: Masahiro Fujita, Tsuyoshi Takagi, Rika Hasegawa, Osamu Hanagata, Jun Yokono, Gabriel Costa, Hideki Shimomura
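    Note: a toy associative memory linking a speech word heard at touch time to the action performed; the dictionary-based store is an illustrative stand-in for the patent's associative memory/recall memory.

```python
class AssociativeActionMemory:
    """Associate the speech heard at touch time with the action performed,
    so the speech alone can later recall the action."""

    def __init__(self):
        self.assoc = {}   # speech word -> action name

    def learn(self, speech_word, action, touched):
        if touched:       # only learn when the touch sensor fired
            self.assoc[speech_word] = action

    def recall(self, speech_word):
        return self.assoc.get(speech_word)   # None if nothing was learned

memory = AssociativeActionMemory()
memory.learn("sit", action="sit_down", touched=True)
print(memory.recall("sit"))   # 'sit_down'
```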
  • Patent number: 6718232
    Abstract: A robot apparatus changes the emotion in a feeling part (130) based on information acquired by a perception part (120), and manifests information-acquisition behavior as autonomous behavior. The robot apparatus includes a behavior control part for causing the robot apparatus to manifest language acquisition behavior, and a meaning acquisition part. The robot apparatus also includes a control part for performing behavior control such as pointing at its object of learning. Changes in internal states that are ascribable to the object are stored in a memory part in association with the object.
    Type: Grant
    Filed: September 24, 2002
    Date of Patent: April 6, 2004
    Assignee: Sony Corporation
    Inventors: Masahiro Fujita, Tsuyoshi Takagi, Rika Horinaka, Jun Yokono, Gabriel Costa, Hideki Shimomura, Katsuki Minamino
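    Note: a hypothetical sketch of storing object-ascribable internal-state changes; the emotion-vector representation and accumulation rule are assumptions for illustration.

```python
import numpy as np

class ObjectStateMemory:
    """Accumulate changes of an internal (e.g. emotion) state vector that
    occur while attending to an object, keyed by object ID."""

    def __init__(self):
        self.state_changes = {}

    def observe(self, object_id, state_before, state_after):
        delta = np.asarray(state_after, float) - np.asarray(state_before, float)
        prev = self.state_changes.get(object_id, np.zeros_like(delta))
        self.state_changes[object_id] = prev + delta

    def feeling_about(self, object_id):
        return self.state_changes.get(object_id)   # None if never observed
```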
  • Patent number: 6697711
    Abstract: A robot apparatus (1) includes leg blocks (3A to 3D), a head block (4), etc. as a moving part (106), together with a motion controller (102), a learning unit (103), a prediction unit (104) and a drive unit (105). When the moving part (106), any of the blocks, is operated from outside, the learning unit (103) learns the time-series signal generated by the external operation. The motion controller (102) and drive unit (105) together control the moving part (106), based on a signal generated at the moving part (106) by an external force applied to the robot apparatus (1) and a signal already learned by the learning unit (103), to perform an action taught by the user. The prediction unit (104) predicts whether the moving part (106) will perform the taught action according to the initial signal generated at the moving part (106) by the applied external force. Thus, the robot apparatus (1) can learn an action taught by the user and recognize an external-force-caused signal in order to perform the taught action.
    Type: Grant
    Filed: October 18, 2002
    Date of Patent: February 24, 2004
    Assignee: Sony Corporation
    Inventors: Jun Yokono, Kohtaro Sabe, Gabriel Costa, Takeshi Ohashi
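    Note: a minimal sketch of learning a taught motion as a time series and predicting it from the initial samples of a new external push; prefix matching by Euclidean distance is an assumption, not the patent's method.

```python
import numpy as np

class TaughtActionPredictor:
    """Learn taught motions as joint-signal time series and predict the
    intended action from the first samples of a new external push."""

    def __init__(self, prefix_len=10, threshold=0.5):
        self.templates = {}           # action name -> full time series
        self.prefix_len = prefix_len
        self.threshold = threshold

    def learn(self, name, series):
        self.templates[name] = np.asarray(series, dtype=float)

    def predict(self, initial_signal):
        """Return the best-matching taught action, or None if no template
        opens similarly enough (templates assumed at least prefix_len long)."""
        prefix = np.asarray(initial_signal[:self.prefix_len], dtype=float)
        best_name, best_dist = None, np.inf
        for name, template in self.templates.items():
            dist = np.linalg.norm(template[:len(prefix)] - prefix) / len(prefix)
            if dist < best_dist:
                best_name, best_dist = name, dist
        return best_name if best_dist < self.threshold else None
```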
  • Publication number: 20040034449
    Abstract: A robot apparatus is provided which includes body portions such as a head block (4) and leg blocks (3A to 3D), an actuator (25) to actuate the body portions, and a CPU (10) to supply a control signal to the actuator (25). In this apparatus, information about an external force applied to the apparatus, such as the position, magnitude and direction of the external force, is computed on the basis of changes in the control signal supplied from the CPU (10) to drive the actuator (25) and a signal returned to the CPU (10) as a response when the actuator (25) is driven. The external force information is supplied to the CPU (10) and used in selecting the behavior and emotion of the robot apparatus and its next behavior.
    Type: Application
    Filed: June 10, 2003
    Publication date: February 19, 2004
    Inventors: Jun Yokono, Masahiro Fujita, Vincent Hugel
  • Publication number: 20030233170
    Abstract: A memory system, a memory method, and a robotic apparatus are provided that are robust against noise and excellent in memory capacity, volume of calculation, quantity of physical memory, and memory responsiveness. Using a competitive neural network having an input layer composed of a plurality of input neurons and a competitive layer composed of a plurality of competitive neurons, the system stores, in frame form, first information on a symbol as well as second information on the symbol supplied separately from a variety of inputs, in relation to the competitive neurons corresponding to that symbol, by strengthening the connections between the relevant input neurons and competitive neurons in response to the input patterns of the variety of inputs for each symbol.
    Type: Application
    Filed: March 14, 2003
    Publication date: December 18, 2003
    Inventors: Shinya Ohtani, Jun Yokono
  • Publication number: 20030144764
    Abstract: A robot apparatus (1) includes leg blocks (3A to 3D), a head block (4), etc. as a moving part (106), together with a motion controller (102), a learning unit (103), a prediction unit (104) and a drive unit (105). When the moving part (106), any of the blocks, is operated from outside, the learning unit (103) learns the time-series signal generated by the external operation. The motion controller (102) and drive unit (105) together control the moving part (106), based on a signal generated at the moving part (106) by an external force applied to the robot apparatus (1) and a signal already learned by the learning unit (103), to perform an action taught by the user. The prediction unit (104) predicts whether the moving part (106) will perform the taught action according to the initial signal generated at the moving part (106) by the applied external force. Thus, the robot apparatus (1) can learn an action taught by the user and recognize an external-force-caused signal in order to perform the taught action.
    Type: Application
    Filed: October 18, 2002
    Publication date: July 31, 2003
    Inventors: Jun Yokono, Kohtaro Sabe, Gabriel Costa, Takeshi Ohashi
  • Publication number: 20030060930
    Abstract: A robot apparatus changes the emotion in a feeling part (130) based on information acquired by a perception part (120), and manifests information-acquisition behavior as autonomous behavior. The robot apparatus includes a behavior control part for causing the robot apparatus to manifest language acquisition behavior, and a meaning acquisition part. The robot apparatus also includes a control part for performing behavior control such as pointing at its object of learning. Changes in internal states that are ascribable to the object are stored in a memory part in association with the object.
    Type: Application
    Filed: September 24, 2002
    Publication date: March 27, 2003
    Inventors: Masahiro Fujita, Tsuyoshi Takagi, Rika Horinaka, Jun Yokono, Gabriel Costa, Hideki Shimomura, Katsuki Minamino
  • Patent number: 6534943
    Abstract: A walking-type robot device and its learning method are disclosed, wherein the robot device is caused to walk in accordance with parameters that control the walking, the walking is evaluated, and the parameters are updated so that the evaluation improves. The walking-type robot device is provided with a controlling means for controlling the robot so as to cause it to walk in accordance with parameters which prescribe the driving phase of each leg during walking, an evaluating means for evaluating the velocity of the walking, and a parameter updating means for updating the parameters so that the evaluating means' evaluation of the walking is enhanced.
    Type: Grant
    Filed: October 24, 2000
    Date of Patent: March 18, 2003
    Assignee: Sony Corporation
    Inventors: Gregory Hornby, Seiichi Takamura, Masahiro Fujita, Takashi Yamamoto, Osamu Hanagata, Jun Yokono
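    Note: a sketch of the evaluate-and-update loop the abstract describes, here as simple hill climbing over per-leg phase parameters; the simulated speed function stands in for a real walking trial.

```python
import random

def measured_speed(phases):
    """Stand-in for a real walking trial: peaks at an arbitrary
    trot-like phase pattern purely for demonstration."""
    ideal = [0.0, 0.5, 0.5, 0.0]
    return 1.0 - sum((p - q) ** 2 for p, q in zip(phases, ideal))

def optimize_gait(rounds=200, step=0.05, seed=1):
    rng = random.Random(seed)
    phases = [rng.random() for _ in range(4)]     # one driving phase per leg
    best = measured_speed(phases)
    for _ in range(rounds):
        candidate = [(p + rng.uniform(-step, step)) % 1.0 for p in phases]
        score = measured_speed(candidate)
        if score > best:                          # keep only improvements
            phases, best = candidate, score
    return phases, best

print(optimize_gait())
```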
  • Publication number: 20020158599
    Abstract: A robot (1) is proposed which includes a speech recognition unit (101) to detect information supplied simultaneously with, or just before or after, detection of a touch by a touch sensor; an associative memory/recall memory (104) to store, in association with each other, the action made in response to the touch and the input information (a speech signal) detected by the speech recognition unit (101); and an action generator (105) to control the robot (1) to perform the action recalled by the associative memory/recall memory (104) based on newly acquired input information (a speech signal). The robot (1) also includes a sensor data processor (102) that allows the robot (1) to act in response to touch detection by the touch sensor. Thus, the robot (1) can learn actions in association with an input signal such as a speech signal.
    Type: Application
    Filed: March 14, 2002
    Publication date: October 31, 2002
    Inventors: Masahiro Fujita, Tsuyoshi Takagi, Rika Hasegawa, Osamu Hanagata, Jun Yokono, Gabriel Costa, Hideki Shimomura
  • Publication number: 20010041977
    Abstract: Voices of a user are recognized by a voice recognizing unit. Based on the result of the voice recognition, a language processing unit, a dialog managing unit and a response generating unit cooperatively create a dialog sentence for exchanging a dialog with the user. Also based on the voice recognition result, the dialog managing unit collects user information regarding, e.g., the interests and tastes of the user. Therefore, user information regarding, e.g., the interests and tastes of the user can be easily collected.
    Type: Application
    Filed: January 19, 2001
    Publication date: November 15, 2001
    Inventors: Seiichi Aoyagi, Yasuharu Asano, Miyuki Tanaka, Jun Yokono, Toshio Oe