Patents by Inventor Yuutarou Takahashi

Yuutarou Takahashi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230364812
    Abstract: The objective of the present invention is to provide a robot system in which, if the robot's position becomes displaced, a camera or the like can be used to apply a three-dimensional correction so that work can still be performed easily. This robot system is provided with: a robot 2; a robot conveying device 3 on which the robot is mounted, for moving the robot to a predetermined work space; at least two target marks 4 installed in the work space; a target mark position acquiring unit 5 for obtaining a three-dimensional position by using a vision sensor provided on the robot 2 to perform stereoscopic measurement of the at least two target marks 4; a displacement amount acquiring unit 6 for obtaining, from the acquired three-dimensional position, the amount of displacement between the robot 2 and a desired relative position in the work space; and a robot control unit 7 for operating the robot 2 with a value obtained by adjusting a prescribed activation amount by the acquired displacement amount.
    Type: Application
    Filed: October 5, 2021
    Publication date: November 16, 2023
    Applicant: FANUC CORPORATION
    Inventors: Yuutarou TAKAHASHI, Fumikazu WARASHINA, Junichirou YOSHIDA
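    The correction described in this abstract can be illustrated with a small sketch: from the stereoscopically measured 3-D positions of two target marks, estimate the robot's displacement from the taught pose, then adjust a programmed motion target by that displacement. The function names and the planar (x, y, yaw) displacement model are illustrative assumptions, not the patent's actual implementation.

    ```python
    import math

    def displacement_from_marks(measured, nominal):
        """Estimate a planar displacement (dx, dy, dz, yaw) of the robot
        relative to the work space from two target-mark positions.

        measured / nominal: two (x, y, z) mark positions, as obtained by
        stereoscopic measurement vs. taught at the desired relative position.
        """
        # Translation: offset of the midpoint between the two marks.
        mid_m = [(measured[0][i] + measured[1][i]) / 2 for i in range(3)]
        mid_n = [(nominal[0][i] + nominal[1][i]) / 2 for i in range(3)]
        delta = [m - n for m, n in zip(mid_m, mid_n)]
        # Rotation about the vertical axis: angle between the mark baselines.
        yaw_m = math.atan2(measured[1][1] - measured[0][1],
                           measured[1][0] - measured[0][0])
        yaw_n = math.atan2(nominal[1][1] - nominal[0][1],
                           nominal[1][0] - nominal[0][0])
        return delta, yaw_m - yaw_n

    def corrected_target(programmed, delta, yaw):
        """Adjust a prescribed motion target (an XYZ point here) by the
        acquired displacement before commanding the robot."""
        c, s = math.cos(yaw), math.sin(yaw)
        x, y, z = programmed
        return [c * x - s * y + delta[0],
                s * x + c * y + delta[1],
                z + delta[2]]
    ```

    Because at least two marks are measured, both the translational offset and a rotation of the work space relative to the robot can be recovered, which a single mark could not provide.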
  • Patent number: 11253999
    Abstract: A machine learning device includes a state observation unit for observing, as state variables, an image of a workpiece captured by a vision sensor, and a movement amount of an arm end portion from an arbitrary position, the movement amount being calculated so as to bring the image close to a target image; a determination data retrieval unit for retrieving the target image as determination data; and a learning unit for learning the movement amount to move the arm end portion or the workpiece from the arbitrary position to a target position. The target position is a position in which the vision sensor and the workpiece have a predetermined relative positional relationship. The target image is an image of the workpiece captured by the vision sensor when the arm end portion or the workpiece is disposed in the target position.
    Type: Grant
    Filed: March 22, 2019
    Date of Patent: February 22, 2022
    Assignee: FANUC CORPORATION
    Inventors: Fumikazu Warashina, Yuutarou Takahashi
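    The learning unit described above maps an observed image toward a target image by predicting a movement amount for the arm end portion. A minimal stand-in for that idea, assuming a linear model from an image-feature error to the movement amount trained with LMS-style updates (the class name, feature representation, and update rule are all illustrative assumptions):

    ```python
    class MovementLearner:
        """Toy analogue of the patent's learning unit: learns a linear map
        from an image-feature error (target image vs. current image) to the
        arm-end movement amount that reduces that error."""

        def __init__(self, n_features, n_axes, lr=0.1):
            self.w = [[0.0] * n_features for _ in range(n_axes)]
            self.lr = lr

        def predict(self, feature_error):
            """Movement amount suggested for a given image-feature error."""
            return [sum(wi * f for wi, f in zip(row, feature_error))
                    for row in self.w]

        def update(self, feature_error, observed_movement):
            """One training step: nudge the map toward the movement that was
            actually needed to bring the image close to the target image."""
            pred = self.predict(feature_error)
            for axis, row in enumerate(self.w):
                err = observed_movement[axis] - pred[axis]
                for j, f in enumerate(feature_error):
                    row[j] += self.lr * err * f
    ```

    After training on (image, movement) observations, the learner can propose a movement from an arbitrary position toward the target position, where the vision sensor and workpiece have the predetermined relative relationship.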
  • Publication number: 20210019920
    Abstract: An image processing apparatus includes a graphical image generator that generates a first graphical image and a second graphical image, and a mode switching unit that switches between a first display mode and a second display mode so that a display displays the first graphical image or the second graphical image. Each graphical image includes a color wheel indicating a two-dimensional color space of hue and saturation and a lightness bar indicating a one-dimensional color space of lightness. In the first graphical image, each position in the color wheel and the lightness bar is displayed in the color indicated by that position. In the second graphical image, a distribution of the colors included in the color image in the color space of hue and saturation, or in the color space of lightness, is superposed on the color wheel and the lightness bar.
    Type: Application
    Filed: May 28, 2020
    Publication date: January 21, 2021
    Applicant: FANUC CORPORATION
    Inventors: Fumikazu WARASHINA, Yuutarou TAKAHASHI, Wanfeng FU
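    The second display mode superposes the distribution of an image's colors on the hue/saturation wheel. One plausible way to compute such a distribution is to bin each pixel's hue and saturation; the function name and bin counts below are illustrative assumptions:

    ```python
    import colorsys

    def hs_distribution(pixels, hue_bins=12, sat_bins=5):
        """Bin an image's pixels into the hue/saturation color space.

        pixels: iterable of (r, g, b) tuples with 0-255 channel values.
        Returns a hue_bins x sat_bins histogram; non-empty cells mark
        where on the color wheel the image's colors lie.
        """
        hist = [[0] * sat_bins for _ in range(hue_bins)]
        for r, g, b in pixels:
            h, s, _v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
            hi = min(int(h * hue_bins), hue_bins - 1)   # angular position
            si = min(int(s * sat_bins), sat_bins - 1)   # radial position
            hist[hi][si] += 1
        return hist
    ```

    A lightness distribution for the lightness bar could be computed the same way with a one-dimensional histogram over the value channel.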
  • Patent number: 10525598
    Abstract: A positioning system using a robot, capable of eliminating robot error factors such as thermal expansion and backlash, and of carrying out positioning with accuracy higher than the inherent positioning accuracy of the robot. The positioning system has a robot with a movable arm, visual feature portions provided to a robot hand, and vision sensors positioned at a fixed position outside the robot and configured to capture the feature portions. The hand is configured to grip an object on which the feature portions are formed, and the vision sensors are positioned and configured to capture the respective feature portions.
    Type: Grant
    Filed: March 23, 2017
    Date of Patent: January 7, 2020
    Assignee: FANUC CORPORATION
    Inventors: Yuutarou Takahashi, Shouta Takizawa, Fumikazu Warashina
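    The reason this arrangement can beat the robot's inherent accuracy is that the positioning loop closes on the external cameras' measurement of the hand's visual features rather than on the arm's encoders, so arm-side errors such as thermal drift and backlash drop out. A minimal sketch of such a feedback loop, with hypothetical `read_features` and `move_by` interfaces standing in for the vision sensors and robot controller:

    ```python
    def position_with_feedback(read_features, move_by, goal,
                               tol=0.01, max_iter=50):
        """Iterate camera-measured corrections until the hand's visual
        feature reaches `goal` within `tol`.

        read_features() -> measured (x, y) of the hand's visual feature
        move_by(dx, dy) -> commands an incremental robot motion
        (Both interfaces are illustrative assumptions.)
        """
        for _ in range(max_iter):
            x, y = read_features()
            ex, ey = goal[0] - x, goal[1] - y
            if max(abs(ex), abs(ey)) <= tol:
                return True          # in position, as seen by the cameras
            move_by(ex, ey)          # correction from the vision measurement
        return False
    ```

    Even if each commanded motion is executed imperfectly, the loop converges because every iteration re-measures the true feature position externally.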
  • Publication number: 20190299405
    Abstract: A machine learning device includes a state observation unit for observing, as state variables, an image of a workpiece captured by a vision sensor, and a movement amount of an arm end portion from an arbitrary position, the movement amount being calculated so as to bring the image close to a target image; a determination data retrieval unit for retrieving the target image as determination data; and a learning unit for learning the movement amount to move the arm end portion or the workpiece from the arbitrary position to a target position. The target position is a position in which the vision sensor and the workpiece have a predetermined relative positional relationship. The target image is an image of the workpiece captured by the vision sensor when the arm end portion or the workpiece is disposed in the target position.
    Type: Application
    Filed: March 22, 2019
    Publication date: October 3, 2019
    Inventors: Fumikazu WARASHINA, Yuutarou TAKAHASHI
  • Publication number: 20170274534
    Abstract: A positioning system using a robot, capable of eliminating robot error factors such as thermal expansion and backlash, and of carrying out positioning with accuracy higher than the inherent positioning accuracy of the robot. The positioning system has a robot with a movable arm, visual feature portions provided to a robot hand, and vision sensors positioned at a fixed position outside the robot and configured to capture the feature portions. The hand is configured to grip an object on which the feature portions are formed, and the vision sensors are positioned and configured to capture the respective feature portions.
    Type: Application
    Filed: March 23, 2017
    Publication date: September 28, 2017
    Applicant: FANUC CORPORATION
    Inventors: Yuutarou Takahashi, Shouta Takizawa, Fumikazu Warashina