Patents by Inventor Koji Akatsuka

Koji Akatsuka has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11246521
    Abstract: Provided is a method for evaluating muscle strength characteristics of a limb based on a muscle group model including a first pair of antagonistic one-joint muscles, a second pair of antagonistic one-joint muscles, and a pair of antagonistic two-joint muscles, where the limb has a first rod having a proximal end supported by a first joint and a second rod supported on a free end of the first rod through a second joint. The method includes: measuring a maximum output of a free end of the second rod in at least one predetermined direction; measuring orbiting outputs of the free end of the second rod in all directions in the plane; and creating a hexagonal maximum output distribution corresponding to a contribution amount of each muscle of the muscle group model based on the maximum output in the predetermined direction and the orbiting outputs.
    Type: Grant
    Filed: July 8, 2019
    Date of Patent: February 15, 2022
    Assignee: Honda Motor Co., Ltd.
    Inventors: Toru Takenaka, Yasushi Ikeuchi, Hiroshi Uematsu, Koji Akatsuka, Tomoyuki Shimono, Takahiro Fujishiro, Yu Goto
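The hexagonal maximum output distribution described above can be sketched numerically: each of the six muscles (two antagonistic one-joint pairs plus one antagonistic two-joint pair) contributes one vertex, and the maximum output in an arbitrary direction is read off the hexagon's edge. The regular 60-degree vertex layout and the ray–edge intersection below are illustrative assumptions, not the patented construction.

```python
import math

def hexagon_vertices(contributions, base_angle_deg=0.0):
    """Place six vertices, one per muscle of the three antagonistic pairs,
    at 60-degree intervals (an assumed regular layout); each vertex's
    radius is that muscle's contribution amount."""
    verts = []
    for i, r in enumerate(contributions):
        a = math.radians(base_angle_deg + 60.0 * i)
        verts.append((r * math.cos(a), r * math.sin(a)))
    return verts

def envelope_radius(theta, verts):
    """Maximum output in direction theta: intersect the ray from the
    origin with the hexagon edge spanning that direction."""
    d = (math.cos(theta), math.sin(theta))
    n = len(verts)
    for i in range(n):
        p1, p2 = verts[i], verts[(i + 1) % n]
        ex, ey = p2[0] - p1[0], p2[1] - p1[1]
        det = d[0] * (-ey) - d[1] * (-ex)   # solve t*d - s*e = p1 by Cramer's rule
        if abs(det) < 1e-12:
            continue
        t = (p1[0] * (-ey) - p1[1] * (-ex)) / det
        s = (d[0] * p1[1] - d[1] * p1[0]) / det
        if t >= 0 and -1e-9 <= s <= 1 + 1e-9:
            return t
    return 0.0
```

With equal contributions the envelope is a regular hexagon: the radius equals the contribution at each vertex angle and falls to the apothem midway between vertices.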
  • Publication number: 20210282688
    Abstract: A muscular strength characteristic evaluation method evaluates a muscular strength characteristic of a limb 3 including a first rod L1 having a base end supported by a first joint J1, and a second rod L2 supported by a free end of the first rod via a second joint J2. The method includes the following steps. In Step ST1, the free end of the second rod is moved at two or more different velocities va, vb, and vc in a predetermined direction, and the output at the free end of the second rod is measured for each velocity at a predetermined position O. In Step ST2, a function indicating the relationship between the output and the velocity in that direction is calculated based on the measured outputs and velocities. In Steps ST3 and ST4, the muscular strength characteristic is evaluated based on the function.
    Type: Application
    Filed: March 10, 2021
    Publication date: September 16, 2021
    Applicant: Honda Motor Co., Ltd.
    Inventors: Yasushi Ikeuchi, Toru Takenaka, Koji Akatsuka, Tomoyuki Shimono, Yu Goto, Mayu Miyake
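The fitting step can be illustrated with a least-squares line: outputs measured at several velocities determine a function whose intercept approximates the isometric maximum and whose slope gives the fall-off with speed. The linear form F(v) = f0 - k·v is an assumption for illustration; the patent only specifies "a function indicating a relationship between the output and the velocity."

```python
def fit_force_velocity(velocities, outputs):
    """Least-squares fit of an assumed linear output-velocity relation
    F(v) = f0 - k * v. Returns (f0, k): f0 approximates the isometric
    maximum output, k the rate at which output falls off with speed."""
    n = len(velocities)
    mv = sum(velocities) / n
    mf = sum(outputs) / n
    sxx = sum((v - mv) ** 2 for v in velocities)
    sxy = sum((v - mv) * (f - mf) for v, f in zip(velocities, outputs))
    slope = sxy / sxx
    return mf - slope * mv, -slope
```

For example, outputs of 95, 90, and 85 at velocities 0.1, 0.2, and 0.3 recover f0 = 100 and k = 50.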
  • Publication number: 20200060600
    Abstract: Provided is a method for evaluating muscle strength characteristics of a limb based on a muscle group model including a first pair of antagonistic one-joint muscles, a second pair of antagonistic one-joint muscles, and a pair of antagonistic two-joint muscles, where the limb has a first rod having a proximal end supported by a first joint and a second rod supported on a free end of the first rod through a second joint. The method includes: measuring a maximum output of a free end of the second rod in at least one predetermined direction; measuring orbiting outputs of the free end of the second rod in all directions in the plane; and creating a hexagonal maximum output distribution corresponding to a contribution amount of each muscle of the muscle group model based on the maximum output in the predetermined direction and the orbiting outputs.
    Type: Application
    Filed: July 8, 2019
    Publication date: February 27, 2020
    Applicant: Honda Motor Co., Ltd.
    Inventors: Toru Takenaka, Yasushi Ikeuchi, Hiroshi Uematsu, Koji Akatsuka
  • Patent number: 8886357
    Abstract: It is possible to perform robot motor learning in a quick and stable manner using a reinforcement learning apparatus including: a first-type environment parameter obtaining unit that obtains a value of one or more first-type environment parameters; a control parameter value calculation unit that calculates a value of one or more control parameters maximizing a reward by using the value of the one or more first-type environment parameters; a control parameter value output unit that outputs the value of the one or more control parameters to the control object; a second-type environment parameter obtaining unit that obtains a value of one or more second-type environment parameters; a virtual external force calculation unit that calculates the virtual external force by using the value of the one or more second-type environment parameters; and a virtual external force output unit that outputs the virtual external force to the control object.
    Type: Grant
    Filed: March 28, 2012
    Date of Patent: November 11, 2014
    Assignees: Advanced Telecommunications Research Institute International, Honda Motor Co., Ltd.
    Inventors: Norikazu Sugimoto, Yugo Ueda, Tadaaki Hasegawa, Soshi Iba, Koji Akatsuka
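The apparatus's two-path structure can be sketched as follows: a policy maps first-type environment parameters to control parameters (here learned by finite-difference hill climbing on the reward, an assumed stand-in for the patented method), while a separate map turns second-type environment parameters into a stabilizing virtual external force. Both outputs go to the control object.

```python
class VirtualForceRL:
    """Structural sketch of the reinforcement learning apparatus; the
    learning rule and the virtual-force law are illustrative assumptions."""

    def __init__(self, gain=0.0, vf_gain=1.0, eps=0.01, lr=0.1):
        self.gain, self.vf_gain, self.eps, self.lr = gain, vf_gain, eps, lr

    def control(self, first_env):
        # control parameter value calculation/output units
        return self.gain * first_env

    def virtual_force(self, second_env):
        # virtual external force calculation/output units:
        # oppose deviation, keeping exploration quick and stable
        return -self.vf_gain * second_env

    def improve(self, first_env, reward_fn):
        # finite-difference estimate of d(reward)/d(gain), then ascend
        up = reward_fn((self.gain + self.eps) * first_env)
        down = reward_fn((self.gain - self.eps) * first_env)
        self.gain += self.lr * (up - down) / (2 * self.eps)
```

With a quadratic reward peaking at output 1.0, repeated `improve` calls drive the gain to the maximizer.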
  • Patent number: 8392346
    Abstract: A reinforcement learning system (1) of the present invention utilizes the value of a first value gradient function (dV1/dt) in the learning performed by a second learning device (122), namely in evaluating a second reward (r2(t)). The first value gradient function (dV1/dt) is the temporal differential of a first value function (V1), which is defined according to a first reward (r1(t)) obtained from the environment and serves as the learning result given by a first learning device (121). The action policy that the robot (R) should take to execute a task is determined based on the second reward (r2(t)).
    Type: Grant
    Filed: November 2, 2009
    Date of Patent: March 5, 2013
    Assignees: Honda Motor Co., Ltd., Advanced Telecommunications Research Institute International
    Inventors: Yugo Ueda, Tadaaki Hasegawa, Soshi Iba, Koji Akatsuka, Norikazu Sugimoto
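In discrete time, evaluating the second reward with the temporal differential of the first value function resembles potential-based reward shaping: dV1/dt is approximated by a finite difference of V1 between successive states. The additive combination below is an assumption; the patent's exact combination rule may differ.

```python
def second_reward(r2_raw, v1, s, s_next, dt=1.0):
    """Evaluate the second reward r2 using the temporal differential of
    the first value function V1, approximated by a discrete difference
    (a potential-based-shaping style sketch)."""
    dv1_dt = (v1(s_next) - v1(s)) / dt
    return r2_raw + dv1_dt
```

Moving toward states the first learner values highly (dV1/dt > 0) raises the second reward, guiding the second learner with what the first has already learned.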
  • Publication number: 20120253514
    Abstract: It is possible to perform robot motor learning in a quick and stable manner using a reinforcement learning apparatus including: a first-type environment parameter obtaining unit that obtains a value of one or more first-type environment parameters; a control parameter value calculation unit that calculates a value of one or more control parameters maximizing a reward by using the value of the one or more first-type environment parameters; a control parameter value output unit that outputs the value of the one or more control parameters to the control object; a second-type environment parameter obtaining unit that obtains a value of one or more second-type environment parameters; a virtual external force calculation unit that calculates the virtual external force by using the value of the one or more second-type environment parameters; and a virtual external force output unit that outputs the virtual external force to the control object.
    Type: Application
    Filed: March 28, 2012
    Publication date: October 4, 2012
    Inventors: Norikazu Sugimoto, Yugo Ueda, Tadaaki Hasegawa, Soshi Iba, Koji Akatsuka
  • Publication number: 20100114807
    Abstract: A reinforcement learning system (1) of the present invention utilizes a value of a first value gradient function (dV1/dt) in the learning performed by a second learning device (122), namely in evaluating a second reward (r2(t)). The first value gradient function (dV1/dt) is a temporal differential of a first value function (V1) which is defined according to a first reward (r1(t)) obtained from an environment and is served as a learning result given by a first learning device (121). An action policy which should be taken by a robot (R) to execute a task is determined based on the second reward (r2(t)).
    Type: Application
    Filed: November 2, 2009
    Publication date: May 6, 2010
    Applicants: Honda Motor Co., Ltd., Advanced Telecommunications Research Institute International
    Inventors: Yugo Ueda, Tadaaki Hasegawa, Soshi Iba, Koji Akatsuka, Norikazu Sugimoto
  • Patent number: 7710243
    Abstract: Disclosed is a driver-assistance vehicle including one or more lighting members which are placed within peripheral vision of a driver and which are arranged on respective sides of the vehicle. Furthermore, the driver-assistance vehicle includes a vehicle behavior sensing unit for predicting or sensing a state of the vehicle, and a light controller for controlling the lighting members, based on the sensed state. With this driver-assistance vehicle, the driver can be assisted in distributing his attention.
    Type: Grant
    Filed: June 28, 2006
    Date of Patent: May 4, 2010
    Assignee: Honda Motor Co., Ltd.
    Inventors: Koji Akatsuka, Hiroshi Uematsu
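The sensing-to-lighting pipeline can be sketched as a simple mapping from the predicted or sensed vehicle state to brightness levels for the left and right peripheral lighting members. The state fields and the on/off rule here are hypothetical; the patent does not commit to concrete mappings.

```python
def light_levels(state):
    """Map a sensed/predicted vehicle state to (left, right) brightness
    for the peripheral lighting members (illustrative rule only).
    state: dict with optional 'turn' and 'closing_side' in
    {'left', 'right'} -- the side of an upcoming turn, or the side on
    which another vehicle is closing in."""
    left = right = 0.0
    if state.get("turn") == "left" or state.get("closing_side") == "left":
        left = 1.0   # draw the driver's peripheral attention leftward
    if state.get("turn") == "right" or state.get("closing_side") == "right":
        right = 1.0
    return left, right
```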
  • Patent number: 7295684
    Abstract: An object detection apparatus and method capable of detecting objects based on visual images captured by a self-moving unit. A sequential images output section outputs a train consisting of a first input image and a second input image that follows it. A local area image processor calculates local flows based on the first input image and the second input image. An inertia information acquiring section measures self-motion of the unit to calculate inertia information thereof. A global area image processor uses the inertia information to estimate global flow, which is the motion field of the entire view associated with the self-motion, and uses the global flow and the first input image to create a predictive image of the second input image. The global area image processor then calculates differential image data, which is the difference between the predictive image and the second input image.
    Type: Grant
    Filed: July 21, 2003
    Date of Patent: November 13, 2007
    Assignee: Honda Giken Kogyo Kabushiki Kaisha
    Inventors: Hiroshi Tsujino, Hiroshi Kondo, Shinichi Nagai, Koji Akatsuka, Atsushi Miura
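The predictive-image step can be sketched with a toy warp: shift the first image by the global flow, then subtract from the actual second image so that independently moving objects stand out in the residual. Reducing the global flow to a single integer (dy, dx) translation is a simplification of the full motion field.

```python
def predict_image(first, flow):
    """Warp the first input image by the global flow (here one integer
    (dy, dx) translation, a simplification of a full motion field) to
    predict the second image; out-of-view pixels keep their value."""
    h, w = len(first), len(first[0])
    dy, dx = flow
    pred = [row[:] for row in first]
    for y in range(h):
        for x in range(w):
            sy, sx = y - dy, x - dx
            if 0 <= sy < h and 0 <= sx < w:
                pred[y][x] = first[sy][sx]
    return pred

def differential_image(pred, second):
    """Residual between prediction and the actual second image; large
    values flag candidate independently moving objects."""
    return [[abs(p - s) for p, s in zip(pr, sr)]
            for pr, sr in zip(pred, second)]
```

If the scene moved exactly as the global flow predicts, the differential image is zero everywhere.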
  • Patent number: 7221797
    Abstract: An image recognizing apparatus and method is provided for accurately recognizing the behavior of a mobile unit from images of the external environment acquired while the mobile unit is moving. Behavior command output block 12 outputs behavior commands to cause the mobile unit 32 to move. Local feature extraction block 16 extracts features of local areas of the image from the image of the external environment acquired on the mobile unit 32 when the behavior command is output. Global feature extraction block 18 extracts a feature of the global area of the image using the features of the local areas. Learning block 20 calculates probability models for recognizing the behavior given to the mobile unit 32 based on the global feature of the image. After learning is finished, the behavior of the mobile unit 32 may be recognized rapidly and accurately by applying the probability models to an image of the external environment newly acquired in the mobile unit 32.
    Type: Grant
    Filed: April 26, 2002
    Date of Patent: May 22, 2007
    Assignee: Honda Giken Kogyo Kabushiki Kaisha
    Inventors: Takamasa Koshizen, Koji Akatsuka, Hiroshi Tsujino
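The learning block's probability models can be sketched with one Gaussian per behavior over a scalar global image feature; recognition then picks the maximum-likelihood behavior. A single scalar Gaussian per class is an assumed simplification of whatever models the patent actually uses.

```python
import math

class BehaviorRecognizer:
    """Sketch: fit one Gaussian model per behavior command over a scalar
    global image feature, then recognize by maximum likelihood."""

    def __init__(self):
        self.models = {}  # behavior -> (mean, variance)

    def learn(self, behavior, features):
        n = len(features)
        mean = sum(features) / n
        var = sum((f - mean) ** 2 for f in features) / n or 1e-6
        self.models[behavior] = (mean, var)

    def recognize(self, feature):
        def loglik(model):
            mean, var = model
            return (-0.5 * math.log(2 * math.pi * var)
                    - (feature - mean) ** 2 / (2 * var))
        return max(self.models, key=lambda b: loglik(self.models[b]))
```

After learning from labeled runs, a fresh feature is classified in a single pass over the stored models, matching the abstract's "rapid" recognition after training.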
  • Publication number: 20060290479
    Abstract: Disclosed is a driver-assistance vehicle including one or more lighting members which are placed within peripheral vision of a driver and which are arranged on respective sides of the vehicle. Furthermore, the driver-assistance vehicle includes a vehicle behavior sensing unit for predicting or sensing a state of the vehicle, and a light controller for controlling the lighting members, based on the sensed state. With this driver-assistance vehicle, the driver can be assisted in distributing his attention.
    Type: Application
    Filed: June 28, 2006
    Publication date: December 28, 2006
    Applicant: Honda Motor Co., Ltd.
    Inventors: Koji Akatsuka, Hiroshi Uematsu
  • Patent number: 7062071
    Abstract: An object detection apparatus is provided for detecting both stationary objects and moving objects accurately from an image captured from a moving mobile unit. The object detection apparatus of the present invention applies a Gabor filter to two or more input images captured by an imaging device such as a CCD camera mounted on a mobile unit, and calculates the optical flow of local areas in the input images. The object detection apparatus then removes the optical flow produced by the motion of the mobile unit by estimating the optical flow produced by the background of the input images. In other words, the object detection apparatus identifies the area where no object is present (“ground”) in the input images. By removing this “ground” part, the area where objects seem to be present (“figure”) is extracted from the input images. Finally, the object detection apparatus determines whether objects are present using the flow information of the extracted “figure” part.
    Type: Grant
    Filed: December 17, 2002
    Date of Patent: June 13, 2006
    Assignee: Honda Giken Kogyo Kabushiki Kaisha
    Inventors: Hiroshi Tsujino, Hiroshi Kondo, Atsushi Miura, Shinichi Nagai, Koji Akatsuka
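The ground-removal step can be sketched as a thresholded comparison: local areas whose flow stays close to the estimated background flow are "ground", and the remainder is the "figure" where an object may be present. A single constant background flow and an L1 threshold are simplifications of the patented estimation.

```python
def figure_mask(local_flows, background_flow, threshold=0.5):
    """Remove the optical flow attributable to the mobile unit's own
    motion: flag as 'figure' (True) the local areas whose flow differs
    from the estimated background flow by more than the threshold."""
    bx, by = background_flow
    return [abs(fx - bx) + abs(fy - by) > threshold
            for fx, fy in local_flows]
```

Object presence is then decided only from the flow in the surviving "figure" areas, as the abstract describes.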
  • Publication number: 20050248654
    Abstract: An object detection apparatus and method capable of detecting objects based on visual images captured by a self-moving unit. A sequential images output section makes a train of a first input image and a second input image sequential to the first input image and outputs said train. A local area image processor calculates local flows based on said first input image and said second input image. An inertia information acquiring section measures self-motion of the unit to calculate inertia information thereof. A global area image processor uses said inertia information to estimate global flow, which is a motion field of the entire view associated to the self-motion, using said global flow and said first input image and creates a predictive image of said second input image. The global area image processor then calculates differential image data, which is a difference between said predictive image and said second input image.
    Type: Application
    Filed: July 21, 2003
    Publication date: November 10, 2005
    Inventors: Hiroshi Tsujino, Hiroshi Kondo, Shinichi Nagai, Koji Akatsuka, Atsushi Miura
  • Publication number: 20050105771
    Abstract: The object detection apparatus according to the invention detects an object based on input images that are captured sequentially in time in a moving unit. The apparatus generates an action command to be sent to the moving unit, calculates flow information for each local area in the input image, and estimates an action of the moving unit based on the flow information. The apparatus calculates a difference between the estimated action and the action command and then determines a specific local area as a figure area when such difference in association with that specific local area exhibits an error larger than a predetermined value. The apparatus determines presence/absence of an object in the figure area.
    Type: Application
    Filed: August 30, 2004
    Publication date: May 19, 2005
    Inventors: Shinichi Nagai, Hiroshi Tsujino, Tetsuya Ido, Takamasa Koshizen, Koji Akatsuka, Hiroshi Kondo, Atsushi Miura
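The figure-area test above can be sketched by comparing, per local area, the action implied by that area's flow with the issued action command and flagging areas whose discrepancy exceeds the threshold. Collapsing each area's flow and the command to scalars is a hypothetical simplification.

```python
def detect_figure_areas(local_flows, command_flow, threshold):
    """Return the indices of local areas whose flow-estimated action
    deviates from the action command by more than the threshold; these
    are the candidate figure areas (scalar-flow simplification)."""
    return [i for i, f in enumerate(local_flows)
            if abs(f - command_flow) > threshold]
```

Presence or absence of an object is then judged only within the returned areas.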
  • Publication number: 20030152271
    Abstract: An object detection apparatus is provided for detecting both stationary objects and moving objects accurately from an image captured from a moving mobile unit.
    Type: Application
    Filed: December 17, 2002
    Publication date: August 14, 2003
    Inventors: Hiroshi Tsujino, Hiroshi Kondo, Atsushi Miura, Shinichi Nagai, Koji Akatsuka
  • Publication number: 20030007682
    Abstract: An image recognizing apparatus and method is provided for accurately recognizing the behavior of a mobile unit from an image of the external environment acquired while the mobile unit is moving.
    Type: Application
    Filed: April 26, 2002
    Publication date: January 9, 2003
    Inventors: Takamasa Koshizen, Koji Akatsuka, Hiroshi Tsujino