Patents by Inventor Norikazu Sugimoto

Norikazu Sugimoto has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220379894
    Abstract: A driving support device includes a prediction unit, a trajectory determination unit, and a necessity determination unit. The prediction unit is configured to predict the degree of increase of an inter-vehicle distance between other vehicles in response to a cut-in by the subject vehicle, and to determine whether the lane change is permissible based on that degree of increase. The necessity determination unit is configured to determine whether the necessity level of the lane change is within an acceptable range. When the necessity level is determined to be within the acceptable range, the prediction unit cancels the determination based on the degree of increase and instead determines whether the lane change is permissible based on a linear prediction of the behavior of the other vehicles.
    Type: Application
    Filed: August 11, 2022
    Publication date: December 1, 2022
    Inventors: AKIRA ITOH, NORIKAZU SUGIMOTO
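The decision flow described in this abstract can be sketched in a few lines. All names, thresholds, and the simple constant-velocity extrapolation below are illustrative assumptions, not details taken from the patent:

```python
def lane_change_permissible(gap_increase, necessity_level,
                            gap_increase_threshold=2.0,
                            acceptable_necessity=(0.7, 1.0),
                            positions=None, velocities=None,
                            horizon=3.0, min_gap=10.0):
    """Decide whether a lane change (cut-in) is permissible.

    gap_increase     -- predicted increase [m] of the inter-vehicle gap
                        in response to the subject vehicle's cut-in
    necessity_level  -- how necessary the lane change is (0..1)
    positions, velocities -- (front, rear) states of the other vehicles,
                        used only in the high-necessity fallback
    """
    lo, hi = acceptable_necessity
    if lo <= necessity_level <= hi:
        # Necessity is within the acceptable range: cancel the
        # gap-increase test and fall back to a linear prediction
        # of the other vehicles' behavior.
        front, rear = positions
        v_front, v_rear = velocities
        predicted_gap = (front + v_front * horizon) - (rear + v_rear * horizon)
        return predicted_gap >= min_gap
    # Normal case: permit the cut-in only if the other vehicles are
    # predicted to open the gap sufficiently.
    return gap_increase >= gap_increase_threshold
```

The two branches mirror the abstract's two determination modes: a reactive one based on how much the surrounding drivers yield, and a predictive one used when the lane change is sufficiently necessary.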
  • Patent number: 8886357
    Abstract: It is possible to perform robot motor learning in a quick and stable manner using a reinforcement learning apparatus including: a first-type environment parameter obtaining unit that obtains a value of one or more first-type environment parameters; a control parameter value calculation unit that calculates a value of one or more control parameters maximizing a reward by using the value of the one or more first-type environment parameters; a control parameter value output unit that outputs the value of the one or more control parameters to the control object; a second-type environment parameter obtaining unit that obtains a value of one or more second-type environment parameters; a virtual external force calculation unit that calculates a virtual external force by using the value of the one or more second-type environment parameters; and a virtual external force output unit that outputs the virtual external force to the control object.
    Type: Grant
    Filed: March 28, 2012
    Date of Patent: November 11, 2014
    Assignees: Advanced Telecommunications Research Institute International, Honda Motor Co., Ltd.
    Inventors: Norikazu Sugimoto, Yugo Ueda, Tadaaki Hasegawa, Soshi Iba, Koji Akatsuka
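The two-pathway structure in this abstract — control parameters computed from first-type environment parameters to maximize reward, plus a virtual external force computed from second-type parameters — might be sketched as follows. The linear policy, the reward-weighted update, and the restoring-force law are all assumptions made for illustration; the patent does not specify them:

```python
class ReinforcementLearner:
    """Two-pathway sketch: control parameters are learned to maximize
    reward, while a virtual external force stabilizes the control object."""

    def __init__(self, n_params, lr=0.1):
        self.weights = [0.0] * n_params  # control-parameter weights
        self.lr = lr                     # learning rate

    def control_parameters(self, first_type_env):
        # Control parameter values computed from the first-type
        # environment parameters (a linear policy is assumed here).
        return [w * x for w, x in zip(self.weights, first_type_env)]

    def update(self, first_type_env, reward):
        # Reward-weighted update: move each weight in the direction of
        # the observed environment parameter, scaled by the reward.
        self.weights = [w + self.lr * reward * x
                        for w, x in zip(self.weights, first_type_env)]

    @staticmethod
    def virtual_external_force(second_type_env, gain=0.5):
        # A stabilizing force computed from the second-type environment
        # parameters (here: pull each component back toward zero).
        return [-gain * x for x in second_type_env]
```

The separation matters: the learned pathway explores to maximize reward, while the fixed virtual-force pathway keeps the robot's state bounded, which is one plausible reading of "quick and stable" motor learning.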
  • Patent number: 8392346
    Abstract: A reinforcement learning system (1) of the present invention utilizes a value of a first value gradient function (dV1/dt) in the learning performed by a second learning device (122), namely in evaluating a second reward (r2(t)). The first value gradient function (dV1/dt) is the temporal differential of a first value function (V1), which is defined according to a first reward (r1(t)) obtained from an environment and serves as the learning result given by a first learning device (121). An action policy which should be taken by a robot (R) to execute a task is determined based on the second reward (r2(t)).
    Type: Grant
    Filed: November 2, 2009
    Date of Patent: March 5, 2013
    Assignees: Honda Motor Co., Ltd., Advanced Telecommunications Research Institute International
    Inventors: Yugo Ueda, Tadaaki Hasegawa, Soshi Iba, Koji Akatsuka, Norikazu Sugimoto
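The hierarchical-reward idea in this abstract — shaping the second learner's reward r2(t) with the time derivative dV1/dt of the first learner's value function V1 — admits a very small sketch. The finite-difference approximation of dV1/dt and the shaping gain are assumptions for illustration only:

```python
def second_reward(r2_base, v1_curr, v1_prev, dt, gain=1.0):
    """Evaluate the second reward r2(t), shaped by the first value
    gradient dV1/dt.

    r2_base          -- the second learner's base reward at time t
    v1_curr, v1_prev -- first value function V1 at t and t - dt
    dt               -- time step between the two samples
    """
    dv1_dt = (v1_curr - v1_prev) / dt   # finite-difference dV1/dt
    return r2_base + gain * dv1_dt
```

Intuitively, the second learner is rewarded not just for its own objective but for moving the system toward states the first learner already values, which couples the two learning devices.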
  • Publication number: 20120253514
    Abstract: It is possible to perform robot motor learning in a quick and stable manner using a reinforcement learning apparatus including: a first-type environment parameter obtaining unit that obtains a value of one or more first-type environment parameters; a control parameter value calculation unit that calculates a value of one or more control parameters maximizing a reward by using the value of the one or more first-type environment parameters; a control parameter value output unit that outputs the value of the one or more control parameters to the control object; a second-type environment parameter obtaining unit that obtains a value of one or more second-type environment parameters; a virtual external force calculation unit that calculates a virtual external force by using the value of the one or more second-type environment parameters; and a virtual external force output unit that outputs the virtual external force to the control object.
    Type: Application
    Filed: March 28, 2012
    Publication date: October 4, 2012
    Inventors: Norikazu Sugimoto, Yugo Ueda, Tadaaki Hasegawa, Soshi Iba, Koji Akatsuka
  • Publication number: 20100114807
    Abstract: A reinforcement learning system (1) of the present invention utilizes a value of a first value gradient function (dV1/dt) in the learning performed by a second learning device (122), namely in evaluating a second reward (r2(t)). The first value gradient function (dV1/dt) is the temporal differential of a first value function (V1), which is defined according to a first reward (r1(t)) obtained from an environment and serves as the learning result given by a first learning device (121). An action policy which should be taken by a robot (R) to execute a task is determined based on the second reward (r2(t)).
    Type: Application
    Filed: November 2, 2009
    Publication date: May 6, 2010
    Applicants: HONDA MOTOR CO., LTD., ADVANCED TELECOMMUNICATIONS RESEARCH INSTITUTE INTERNATIONAL
    Inventors: Yugo Ueda, Tadaaki Hasegawa, Soshi Iba, Koji Akatsuka, Norikazu Sugimoto