Patents by Inventor Tetsuaki Kato

Tetsuaki Kato has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230173660
    Abstract: A method for teaching and controlling a robot to perform an operation based on human demonstration with images from a camera. The method includes a demonstration phase where a camera detects a human hand grasping and moving a workpiece to define a rough trajectory of the robotic movement of the workpiece. Line features or other geometric features on the workpiece collected during the demonstration phase are used in an image-based visual servoing (IBVS) approach which refines a final placement position of the workpiece, where the IBVS control takes over the workpiece placement during the final approach by the robot. Moving object detection is used for automatically localizing both object and hand position in 2D image space, and then identifying line features on the workpiece by removing line features belonging to the hand using hand keypoint detection.
    Type: Application
    Filed: December 6, 2021
    Publication date: June 8, 2023
    Inventors: Kaimeng Wang, Tetsuaki Kato
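    A minimal sketch of the IBVS refinement step this abstract describes, assuming point features with known depth rather than the patent's line features; the function names and the gain `lam` are illustrative, not from the patent.
    ```python
    import numpy as np

    def interaction_matrix(x, y, Z):
        """Classic 2x6 image Jacobian for a normalized image point (x, y) at
        depth Z, relating feature motion to the camera's 6-DOF velocity."""
        return np.array([
            [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
            [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
        ])

    def ibvs_velocity(features, targets, depths, lam=0.5, damping=1e-6):
        """Camera twist v = -lam * L^+ (s - s*) that pulls the observed
        features toward their desired placement during the final approach."""
        L = np.vstack([interaction_matrix(x, y, Z)
                       for (x, y), Z in zip(features, depths)])
        error = (np.asarray(features) - np.asarray(targets)).ravel()
        # Damped pseudo-inverse for robustness near singular configurations.
        L_pinv = np.linalg.pinv(L.T @ L + damping * np.eye(6)) @ L.T
        return -lam * L_pinv @ error
    ```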
  • Publication number: 20230173673
    Abstract: A method for tuning the force control parameters for a general robotic assembly operation. The method uses numerical optimization to evaluate different combinations of the parameters for a robot force controller in a simulation environment built from a real-world robotic setup. The method performs autonomous tuning for assembly tasks based on closed-loop force control simulation, where random samples from a distribution of force control parameter values are evaluated, and the optimization routine iteratively redefines the parameter distribution to find optimal values of the parameters. Each candidate set of parameters is evaluated using multiple simulations that include random part positioning uncertainties, and its performance is scored by the average of the simulation results, ensuring that the selected control parameters will perform well in most possible conditions.
    Type: Application
    Filed: December 6, 2021
    Publication date: June 8, 2023
    Inventors: Yu Zhao, Tetsuaki Kato
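    A minimal sketch of the sample-evaluate-refit optimization loop this abstract describes, written in the style of a cross-entropy method; `simulate_assembly` is a hypothetical stand-in for the closed-loop force control simulation, and all constants are illustrative.
    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_assembly(params, part_offset):
        """Hypothetical simulator: score (higher is better) for one assembly
        attempt with the given gains and a randomized part-position error."""
        stiffness, damping = params
        # Toy objective standing in for insertion success / completion time.
        return -((stiffness - 800.0) ** 2 / 1e5 + (damping - 40.0) ** 2 / 1e2
                 + np.linalg.norm(part_offset))

    def tune(n_iters=20, n_samples=32, n_rollouts=8, elite_frac=0.25):
        mean = np.array([500.0, 20.0])    # initial [stiffness, damping] guess
        std = np.array([300.0, 15.0])
        for _ in range(n_iters):
            samples = rng.normal(mean, std, size=(n_samples, 2))
            # Average each candidate over several randomized part positions so
            # the selected gains hold up under positioning uncertainty.
            scores = [np.mean([simulate_assembly(s, rng.normal(0.0, 1e-3, 3))
                               for _ in range(n_rollouts)]) for s in samples]
            elites = samples[np.argsort(scores)[-int(elite_frac * n_samples):]]
            mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
        return mean

    print(tune())  # converges near the toy optimum [800, 40]
    ```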
  • Publication number: 20230169324
    Abstract: A system and method for training a neural network. The method includes modelling a plurality of different sized objects to generate virtual images of the objects using computer graphics software and generating a placement virtual image by randomly and sequentially selecting the modelled objects and placing the selected modelled objects within a predetermined boundary in a predetermined pattern using the software. The method also includes rendering a virtual image of the placement virtual image based on predetermined data and information using the computer graphics software and generating an annotated virtual image by independently labeling the objects in the rendered virtual image using the software. The method repeats generating a placement virtual image, rendering a virtual image and generating an annotated virtual image for a plurality of randomly and sequentially selected modelled objects, and then trains the neural network using the plurality of rendered virtual images and the annotated virtual images.
    Type: Application
    Filed: November 30, 2021
    Publication date: June 1, 2023
    Inventors: Te Tang, Tetsuaki Kato
  • Publication number: 20230166406
    Abstract: A method and system for calculating a minimum distance from a robot to dynamic objects in a robot workspace. The method uses images from one or more three-dimensional cameras, where edges of objects are detected in each image, and the robot and the background are subtracted from the resultant image, leaving only object edge pixels. Depth values are then overlaid on the object edge pixels, and distance calculations are performed only between the edge pixels and control points on the robot arms. Two or more cameras may be used to resolve object occlusion, where each camera's minimum distance is computed independently and the maximum of the cameras' minimum distances is used as the actual result. The use of multiple cameras does not significantly increase computational load, and does not require calibration of the cameras with respect to each other.
    Type: Application
    Filed: November 29, 2021
    Publication date: June 1, 2023
    Inventors: Chiara Landi, Hsien-Chung Lin, Tetsuaki Kato
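    A minimal sketch of the per-camera minimum-distance computation and the max-of-minimums rule for resolving occlusion; shapes and names are illustrative, and each camera's edge pixels are assumed already back-projected to 3D using the overlaid depth values.
    ```python
    import numpy as np

    def min_distance_single_camera(edge_points_3d, robot_control_points):
        """Smallest distance between any obstacle edge point (N x 3) and any
        control point on the robot arms (M x 3)."""
        diffs = edge_points_3d[:, None, :] - robot_control_points[None, :, :]
        return float(np.sqrt((diffs ** 2).sum(axis=-1)).min())

    def min_distance_multi_camera(per_camera_edge_points, robot_control_points):
        """Occlusion can only make a single camera's estimate too small, so
        each camera's minimum is computed independently and the maximum of
        those minima is taken as the actual result."""
        return max(min_distance_single_camera(pts, robot_control_points)
                   for pts in per_camera_edge_points)
    ```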
  • Publication number: 20230169675
    Abstract: A system and method for identifying a box to be picked up by a robot from a stack of boxes. The method includes obtaining a 2D red-green-blue (RGB) color image of the boxes and a 2D depth map image of the boxes using a 3D camera. The method employs an image segmentation process that uses a simplified mask R-CNN executable by a central processing unit (CPU) to predict which pixels in the RGB image are associated with each box, where the pixels associated with each box are assigned a unique label, and the labeled pixels together define a mask for the box. The method then identifies a location for picking up the box using the segmentation image.
    Type: Application
    Filed: November 30, 2021
    Publication date: June 1, 2023
    Inventors: Te Tang, Tetsuaki Kato
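    A minimal sketch of choosing a pick location from the per-box masks produced by the segmentation step; the largest-mask heuristic and centroid pick point are illustrative assumptions, not the patent's selection rule.
    ```python
    import numpy as np

    def pick_location(label_image, depth_image):
        """Return (row, col, depth) at the centroid of the largest labeled
        mask; label 0 is treated as background."""
        labels = np.unique(label_image)
        labels = labels[labels != 0]
        best = max(labels, key=lambda l: int((label_image == l).sum()))
        rows, cols = np.nonzero(label_image == best)
        r, c = int(rows.mean()), int(cols.mean())
        return r, c, float(depth_image[r, c])
    ```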
  • Publication number: 20230158670
    Abstract: A method and system for dynamic collision avoidance motion planning for industrial robots. An obstacle avoidance motion optimization routine receives a planned path and obstacle detection data as inputs, and computes a commanded robot path which avoids any detected obstacles. Robot joint motions to follow the tool center point path are used by a robot controller to command robot motion. The planning and optimization calculations are performed in a feedback loop which is decoupled from the controller feedback loop which computes robot commands based on actual robot position. The two feedback loops perform planning, command and control calculations in real time, including responding to dynamic obstacles which may be present in the robot workspace. The optimization calculations include a safety function which efficiently incorporates both relative position and relative velocity of the obstacles with respect to the robot.
    Type: Application
    Filed: November 19, 2021
    Publication date: May 25, 2023
    Inventors: Hsien-Chung Lin, Chiara Talignani Landi, Chi-Keng Tsai, Tetsuaki Kato
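    A minimal sketch of a safety term combining obstacle distance with the relative approach velocity, in the spirit of the safety function described above; the functional form and constants are illustrative assumptions.
    ```python
    import numpy as np

    def safety_value(p_robot, p_obstacle, v_robot, v_obstacle,
                     d_safe=0.10, k_vel=0.5):
        """Positive when the configuration is safe; shrinks as the obstacle
        gets close and as robot and obstacle approach each other."""
        delta = p_obstacle - p_robot
        d = np.linalg.norm(delta)
        # Relative velocity component along the line to the obstacle;
        # positive means the gap is closing.
        closing_speed = float(np.dot(v_robot - v_obstacle, delta / (d + 1e-9)))
        return d - d_safe - k_vel * max(0.0, closing_speed)
    ```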
  • Patent number: 11644811
    Abstract: A method and system for adapting a CNC machine tool path from a nominal workpiece shape to an actual workpiece shape. The method includes defining a grid of feature points on a nominal workpiece shape, where the feature points encompass an area around the machine tool path but do not necessarily include points on the machine tool path. A probe is used to detect locations of the feature points on an actual workpiece. A space mapping function is computed as a transformation from the nominal feature points to the actual feature points, and the function is applied to the nominal tool path to compute a new tool path. The new tool path is used by the CNC machine to operate on the actual workpiece. The feature points are used to characterize the three-dimensional shape of the working surface of the actual workpiece, not just a curve or outline.
    Type: Grant
    Filed: October 30, 2019
    Date of Patent: May 9, 2023
    Assignee: FANUC CORPORATION
    Inventors: Te Tang, Tetsuaki Kato
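    A minimal sketch of the space-mapping step: fit a smooth transformation taking the nominal feature points to their probed actual locations, then apply it to the nominal tool path. The Gaussian RBF displacement field is an assumption; the abstract does not specify the mapping's functional form.
    ```python
    import numpy as np

    def fit_space_mapping(nominal_pts, actual_pts, sigma=25.0, reg=1e-8):
        """Fit a displacement field d(x) = sum_i w_i exp(-|x - x_i|^2 / 2s^2)
        so the nominal feature points (N x 3) map onto the probed points."""
        diff = nominal_pts[:, None, :] - nominal_pts[None, :, :]
        K = np.exp(-(diff ** 2).sum(-1) / (2.0 * sigma ** 2))
        W = np.linalg.solve(K + reg * np.eye(len(K)), actual_pts - nominal_pts)

        def apply(path_pts):
            d = path_pts[:, None, :] - nominal_pts[None, :, :]
            Kp = np.exp(-(d ** 2).sum(-1) / (2.0 * sigma ** 2))
            return path_pts + Kp @ W      # warped (actual-workpiece) tool path
        return apply

    # new_path = fit_space_mapping(nominal_feats, probed_feats)(nominal_path)
    ```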
  • Publication number: 20230123463
    Abstract: A method and system for robotic motion planning which perform dynamic velocity attenuation to avoid robot collision with static or dynamic objects. The technique maintains the planned robot tool path even when speed reduction is necessary, by providing feedback of a computed slowdown ratio to a tracking controller so that the path computation is always synchronized with current robot speed. The technique uses both robot-obstacle distance and relative velocity to determine when to apply velocity attenuation, and computes a joint speed limit vector based on a robot-obstacle distance, a maximum obstacle speed, and a computed stopping time as a function of the joint speed. Two different control structure implementations are disclosed, both of which provide feedback of the slowdown ratio to the motion planner as needed for faithful path following. A method of establishing velocity attenuation priority in multi-robot systems is also provided.
    Type: Application
    Filed: October 15, 2021
    Publication date: April 20, 2023
    Inventors: Hsien-Chung Lin, Tetsuaki Kato
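    A minimal sketch of a slowdown-ratio computation in the spirit of the abstract, combining robot-obstacle distance, a maximum obstacle speed, and a speed-dependent stopping time; the formula and constants are illustrative simplifications, and the ratio is assumed to be recomputed every cycle and fed back to the planner so the path stays synchronized with the attenuated speed.
    ```python
    import numpy as np

    def slowdown_ratio(q_dot_cmd, d_obstacle, v_obstacle_max,
                       stop_time_per_speed=0.05):
        """Scale factor in (0, 1] applied uniformly to the commanded joint
        speeds, so the planned path is preserved while speed is reduced."""
        speed = float(np.linalg.norm(q_dot_cmd))
        t_stop = stop_time_per_speed * speed   # stopping time grows with speed
        # Gap consumed while stopping: obstacle travel plus robot braking.
        d_needed = v_obstacle_max * t_stop + 0.5 * speed * t_stop
        if speed == 0.0 or d_needed <= d_obstacle:
            return 1.0
        # Conservative one-step attenuation; re-evaluated each control cycle,
        # so successive corrections converge on a safe speed.
        return max(1e-3, d_obstacle / d_needed)
    ```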
  • Publication number: 20230120598
    Abstract: A method for teaching a robot to perform an operation based on human demonstration using force and vision sensors. The method uses a vision sensor to detect the position and pose of the human's hand and, optionally, a workpiece during teaching of an operation such as pick, move and place. The force sensor, located either beneath the workpiece or on a tool, is used to detect force information. Data from the vision and force sensors, along with other optional inputs, are used to teach both motions and state change logic for the operation being taught. Several techniques are disclosed for determining state change logic, such as the transition from approaching to grasping. Techniques for improving motion programming to remove extraneous motions by the hand are also disclosed. Robot programming commands are then generated from the hand position and orientation data, along with the state transitions.
    Type: Application
    Filed: October 15, 2021
    Publication date: April 20, 2023
    Inventors: Kaimeng Wang, Tetsuaki Kato
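    A minimal sketch of one piece of state-change logic of the kind the abstract mentions, the transition from approaching to grasping, driven by the force and vision signals; thresholds and signal names are illustrative assumptions.
    ```python
    def next_state(state, contact_force_n, hand_to_part_mm):
        """One step of a demonstration state machine: force rising while the
        hand is near the part signals a grasp; force dropping afterward
        signals the part has been lifted and the move phase has begun."""
        if state == "approach" and contact_force_n > 2.0 and hand_to_part_mm < 10.0:
            return "grasp"
        if state == "grasp" and contact_force_n < 0.5:
            return "move"
        return state
    ```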
  • Patent number: 11554496
    Abstract: A system and method for extracting features from a 2D image of an object using a deep learning neural network and a vector field estimation process. The method includes extracting a plurality of possible feature points, generating a mask image that defines pixels in the 2D image where the object is located, and generating a vector field image for each extracted feature point that includes an arrow directed towards the extracted feature point. The method also includes generating a vector intersection image by identifying an intersection point where the arrows for every combination of two pixels in the 2D image intersect. The method assigns a score to each intersection point based on its distance from the two pixels whose arrows produced it, and generates a point voting image that identifies a feature location from the clustered points.
    Type: Grant
    Filed: April 3, 2020
    Date of Patent: January 17, 2023
    Assignee: FANUC CORPORATION
    Inventors: Te Tang, Tetsuaki Kato
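    A minimal sketch of pairwise vote generation from the vector field: each pair of object pixels votes at the intersection of its two predicted direction rays, and a robust center of the votes stands in for the patent's scored point voting image; the sampling and median clustering are illustrative.
    ```python
    import numpy as np

    def ray_intersection(p1, d1, p2, d2):
        """Intersection of rays p1 + t*d1 and p2 + u*d2, or None when the
        rays are near-parallel or the arrows point away from each other."""
        A = np.column_stack([d1, -d2])
        if abs(np.linalg.det(A)) < 1e-9:
            return None
        t, u = np.linalg.solve(A, p2 - p1)
        if t <= 0.0 or u <= 0.0:
            return None
        return p1 + t * d1

    def vote_feature_point(pixels, directions, n_pairs=2000, seed=1):
        """pixels: (N, 2) object-pixel coordinates; directions: (N, 2) unit
        vectors from the vector field image, each pointing at the feature."""
        rng = np.random.default_rng(seed)
        idx = rng.integers(0, len(pixels), size=(n_pairs, 2))
        votes = [ray_intersection(pixels[i], directions[i],
                                  pixels[j], directions[j])
                 for i, j in idx if i != j]
        votes = np.array([v for v in votes if v is not None])
        return np.median(votes, axis=0)   # robust center of the vote cluster
    ```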
  • Publication number: 20220383538
    Abstract: A system and method for identifying an object to be picked up by a robot. The method includes obtaining a 2D red-green-blue (RGB) color image and a 2D depth map image of the objects using a 3D camera, where pixels in the depth map image are assigned a value identifying the distance from the camera to the objects. The method generates a segmentation image of the objects using a deep learning convolutional neural network that performs an image segmentation process that extracts features from the RGB image, assigns a label to the pixels so that the pixels of each object in the segmentation image have the same label, and determines the orientation of each object in the segmentation image. The method then identifies a location for picking up the object using the segmentation image and the depth map image, and rotates the object to its detected orientation when it is picked up.
    Type: Application
    Filed: May 25, 2021
    Publication date: December 1, 2022
    Inventors: Te Tang, Tetsuaki Kato
  • Publication number: 20220379475
    Abstract: A system and method for identifying an object, such as a transparent object, to be picked up by a robot from a bin of objects. The method includes obtaining a 2D red-green-blue (RGB) color image and a 2D depth map image of the objects using a 3D camera, where pixels in the depth map image are assigned a value identifying the distance from the camera to the objects. The method generates a segmentation image of the objects using a deep learning mask R-CNN (convolutional neural network) that performs an image segmentation process that extracts features from the RGB image and assigns a label to the pixels so that objects in the segmentation image have the same label. The method then identifies a location for picking up the object using the segmentation image and the depth map image.
    Type: Application
    Filed: May 25, 2021
    Publication date: December 1, 2022
    Inventors: Te Tang, Tetsuaki Kato
  • Publication number: 20220373998
    Abstract: A method for determining a position of an object moving along a conveyor belt. The method includes measuring the position of the conveyor belt while the conveyor belt is moving using a motor encoder and providing a measured position signal of the position of the object based on the measured position of the conveyor belt. The method also includes determining that the conveyor belt has stopped, providing a CAD model of the object and generating a point cloud representation of the object using a 3D vision system. The method then matches the model and the point cloud to determine the position of the object, provides a model position signal of the position of the object based on the matched model and point cloud, and uses the model position signal to correct an error in the measured position signal that occurs as a result of the conveyor belt being stopped.
    Type: Application
    Filed: May 21, 2021
    Publication date: November 24, 2022
    Inventors: Chiara Talignani Landi, Hsien-Chung Lin, Tetsuaki Kato, Chi-Keng Tsai
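    A minimal one-dimensional sketch (along the belt axis) of fusing the encoder-based position signal with the model/point-cloud match performed when the belt stops; `matched_object_pos` stands in for the output of the CAD-model-to-point-cloud matching step, which is not shown.
    ```python
    class ConveyorTracker:
        """Tracks an object's position along the belt from the motor encoder,
        with a correction term updated whenever the belt stops."""

        def __init__(self, object_offset):
            self.offset = object_offset   # object position relative to belt datum
            self.correction = 0.0

        def encoder_position(self, belt_encoder_pos):
            """Measured object position while the belt is moving."""
            return belt_encoder_pos + self.offset + self.correction

        def on_belt_stopped(self, belt_encoder_pos, matched_object_pos):
            """The model/point-cloud match gives the true position; store the
            residual so the encoder signal is corrected from here on."""
            self.correction += matched_object_pos - self.encoder_position(
                belt_encoder_pos)
    ```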
  • Patent number: 11475589
    Abstract: A system and method for obtaining a 3D pose of an object using 2D images from a 2D camera and a learning-based neural network. The neural network extracts a plurality of features on the object from the 2D images and generates a heatmap for each of the extracted features that identifies the probability of the location of a feature point on the object by a color representation. The method provides a feature point image that includes the feature points from the heatmaps on the 2D images, and estimates the 3D pose of the object by comparing the feature point image and a 3D virtual CAD model of the object.
    Type: Grant
    Filed: April 3, 2020
    Date of Patent: October 18, 2022
    Assignee: FANUC CORPORATION
    Inventors: Te Tang, Tetsuaki Kato
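    A minimal sketch of turning per-feature heatmaps into a 3D pose: each heatmap's peak is taken as a 2D feature point and the pose is fitted against the corresponding 3D model points. OpenCV's solvePnP is an illustrative choice here; the abstract describes the CAD-model comparison more generally.
    ```python
    import numpy as np
    import cv2

    def pose_from_heatmaps(heatmaps, model_points_3d, camera_matrix):
        """heatmaps: (N, H, W), one map per feature; model_points_3d: (N, 3)
        matching feature locations on the CAD model."""
        image_points = np.array([
            np.unravel_index(np.argmax(h), h.shape)[::-1]  # (col, row) = (x, y)
            for h in heatmaps
        ], dtype=np.float64)
        ok, rvec, tvec = cv2.solvePnP(model_points_3d.astype(np.float64),
                                      image_points, camera_matrix, None)
        return (rvec, tvec) if ok else None
    ```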
  • Patent number: 11350078
    Abstract: A system and method for obtaining a 3D pose of an object using 2D images from multiple 2D cameras. The method includes positioning a first 2D camera so that it is directed towards the object along a first optical axis, obtaining 2D images of the object by the first 2D camera, and extracting feature points from the 2D images from the first 2D camera using a first feature extraction process. The method also includes positioning a second 2D camera so that it is directed towards the object along a second optical axis, obtaining 2D images of the object by the second 2D camera, and extracting feature points from the 2D images from the second 2D camera using a second feature extraction process. The method then estimates the 3D pose of the object using the extracted feature points from both the first and second feature extraction processes.
    Type: Grant
    Filed: April 3, 2020
    Date of Patent: May 31, 2022
    Assignee: FANUC CORPORATION
    Inventors: Te Tang, Tetsuaki Kato
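    A minimal sketch of combining the two cameras' extracted feature points by triangulation before the pose fit; OpenCV's triangulatePoints is an illustrative choice, and P1/P2 are assumed to be the calibrated cameras' 3x4 projection matrices.
    ```python
    import numpy as np
    import cv2

    def triangulate_features(P1, P2, pts_cam1, pts_cam2):
        """pts_cam1, pts_cam2: (N, 2) matched feature points from each camera.
        Returns (N, 3) feature locations in the shared world frame, against
        which the object's 3D pose can then be estimated."""
        X_h = cv2.triangulatePoints(P1, P2,
                                    pts_cam1.T.astype(np.float64),
                                    pts_cam2.T.astype(np.float64))
        return (X_h[:3] / X_h[3]).T       # de-homogenize 4xN -> N x 3
    ```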
  • Patent number: 11318611
    Abstract: A method for controlling a robot to perform a complex assembly task such as insertion of a component with multiple pins or pegs into a structure with multiple holes. The method uses an impedance controller including multiple reference centers with one set of gain factors. Only translational gain factors are used, one for a spring force and one for a damping force, with no rotational gains. The method computes spring-damping forces from reference center positions and velocities using the gain values, and measures contact force and torque with a sensor coupled between the robot arm and the component being manipulated. The computed spring-damping forces are then summed with the measured contact force and torque, to provide a resultant force and torque at the center of gravity of the component. A new component pose is then computed based on the resultant force and torque using impedance controller calculations.
    Type: Grant
    Filed: April 14, 2020
    Date of Patent: May 3, 2022
    Assignee: FANUC CORPORATION
    Inventors: Yu Zhao, Tetsuaki Kato
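    A minimal sketch of the resultant-wrench computation the abstract describes: translational spring-damper forces from each reference center are summed with the measured contact force and torque at the component's center of gravity. The gain values are illustrative, and the impedance integration that turns the resultant into a new pose is omitted.
    ```python
    import numpy as np

    def resultant_wrench(ref_pos, ref_vel, cur_pos, cur_vel, cog,
                         f_contact, tau_contact, k_spring=500.0, k_damp=50.0):
        """All arguments are 3-vector numpy arrays (lists of them for the
        reference centers). Only translational gains are used, one spring
        and one damper, with no rotational gains."""
        force = np.array(f_contact, dtype=float)
        torque = np.array(tau_contact, dtype=float)
        for rp, rv, cp, cv in zip(ref_pos, ref_vel, cur_pos, cur_vel):
            f = k_spring * (rp - cp) + k_damp * (rv - cv)
            force += f
            torque += np.cross(cp - cog, f)   # moment of this force about the CoG
        return force, torque
    ```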
  • Publication number: 20220080581
    Abstract: A method for dual arm robot teaching from dual hand detection in human demonstration. A camera image of the demonstrator's hands and workpieces is provided to a first neural network which determines the identity of the left and right hand from the image, and also provides cropped sub-images of the identified hands. The cropped sub-images are provided to a second neural network which detects the poses of both the left and right hand from the images. The dual hand pose data for an entire operation is converted to robot gripper pose data and used for teaching two robot arms to perform the operation on the workpieces, where each hand's motion is assigned to one robot arm. Edge detection from camera images may be used to refine robot motions in order to improve part localization for tasks requiring precision, such as inserting a part into an aperture.
    Type: Application
    Filed: October 15, 2021
    Publication date: March 17, 2022
    Inventors: Kaimeng Wang, Tetsuaki Kato
  • Publication number: 20220084238
    Abstract: A system and method for obtaining a 3D pose of objects, such as transparent objects, in a group of objects to allow a robot to pick up the objects. The method includes obtaining a 2D red-green-blue (RGB) color image of the objects using a camera, and generating a segmentation image of the RGB images by performing an image segmentation process using a deep learning convolutional neural network that extracts features from the RGB image and assigns a label to pixels in the segmentation image so that objects in the segmentation image have the same label. The method also includes separating the segmentation image into a plurality of cropped images where each cropped image includes one of the objects, estimating the 3D pose of each object in each cropped image, and combining the 3D poses into a single pose image.
    Type: Application
    Filed: September 11, 2020
    Publication date: March 17, 2022
    Inventors: Te Tang, Tetsuaki Kato
  • Publication number: 20220080580
    Abstract: A method for dual hand detection in robot teaching from human demonstration. A camera image of the demonstrator's hands and workpieces is provided to a first neural network which determines the identity of the left and right hand of the human demonstrator from the image, and also provides cropped sub-images of the identified hands. The first neural network is trained using images in which the left and right hands are pre-identified. The cropped sub-images are then provided to a second neural network which detects the pose of both the left and right hand from the images, where the sub-image for the left hand is horizontally flipped before and after the hand pose detection if the second neural network is trained with right-hand images. The hand pose data is converted to robot gripper pose data and used for teaching a robot to perform an operation through human demonstration.
    Type: Application
    Filed: September 11, 2020
    Publication date: March 17, 2022
    Inventors: Kaimeng Wang, Tetsuaki Kato
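    A minimal sketch of the left-hand mirroring described above: a pose network trained only on right-hand images is reused for the left hand by flipping the cropped sub-image horizontally before inference and un-flipping the predicted keypoints afterward; `right_hand_pose_net` is a hypothetical stand-in for the second network.
    ```python
    import numpy as np

    def detect_hand_pose(sub_image, is_left, right_hand_pose_net):
        """sub_image: (H, W, 3) crop of one hand; returns (K, 2) keypoints in
        (x, y) pixel coordinates of the original (un-flipped) crop."""
        img = sub_image[:, ::-1] if is_left else sub_image   # horizontal flip
        keypoints = np.array(right_hand_pose_net(img), dtype=float)
        if is_left:
            # Mirror the x coordinates back into the original image frame.
            keypoints[:, 0] = sub_image.shape[1] - 1 - keypoints[:, 0]
        return keypoints
    ```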
  • Publication number: 20220072712
    Abstract: A system and method for identifying a box to be picked up by a robot from a stack of boxes. The method includes obtaining a 2D red-green-blue (RGB) color image of the boxes and a 2D depth map image of the boxes using a 3D camera, where pixels in the depth map image are assigned a value identifying the distance from the camera to the boxes. The method generates a segmentation image of the boxes by performing an image segmentation process that extracts features from the RGB image and the depth map image, combines the extracted features in the images and assigns a label to the pixels in a features image so that the pixels of each box in the segmentation image have the same label. The method then identifies a location for picking up the box using the segmentation image.
    Type: Application
    Filed: September 9, 2020
    Publication date: March 10, 2022
    Inventors: Te Tang, Tetsuaki Kato