Patents by Inventor Yongxiang Fan

Yongxiang Fan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
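Illustrative code sketches for several of the listed techniques appear after the listing.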

  • Patent number: 12017371
    Abstract: A method for line matching during image-based visual servoing control of a robot performing a workpiece installation. The method uses a target image from human demonstration and a current image of a robotic execution phase. A plurality of lines are identified in the target and current images, and an initial pairing of target-current lines is defined based on distance and angle. An optimization computation determines image transposes which minimize a cost function formulated to include both direction and distance between target lines and current lines using 2D data in the camera image plane, and constraint equations which relate the lines in the image plane to the 3D workpiece pose. The rotational and translational transposes which minimize the cost function are used to update the line pair matching, and the best line pairs are used to compute a difference signal for controlling robot motion during visual servoing.
    Type: Grant
    Filed: March 15, 2022
    Date of Patent: June 25, 2024
    Assignee: FANUC CORPORATION
    Inventors: Kaimeng Wang, Yongxiang Fan
  • Patent number: 12017355
    Abstract: A method for modularizing high dimensional neural networks into neural networks of lower input dimensions. The method is suited to generating full-DOF robot grasping actions based on images of parts to be picked. In one example, a first network encodes grasp positional dimensions and a second network encodes rotational dimensions. The first network is trained to predict a position at which a grasp quality is maximized for any value of the grasp rotations. The second network is trained to identify the maximum grasp quality while searching only at the position from the first network. Thus, the two networks collectively identify an optimal grasp, while each network's searching space is reduced. Many grasp positions and rotations can be evaluated in a search quantity of the sum of the evaluated positions and rotations, rather than the product. Dimensions may be separated in any suitable fashion, including three neural networks in some applications.
    Type: Grant
    Filed: June 8, 2021
    Date of Patent: June 25, 2024
    Assignee: FANUC CORPORATION
    Inventor: Yongxiang Fan
  • Patent number: 12017356
    Abstract: A robotic grasp generation technique for part picking applications. Part and gripper geometry are provided as inputs, typically from CAD files. Gripper kinematics are also defined as an input. A set of candidate grasps is provided using any known preliminary grasp generation tool. A point model of the part and a model of the gripper contact surfaces with a clearance margin are used in an optimization computation applied to each of the candidate grasps, resulting in an adjusted grasp database. The adjusted grasps optimize grasp quality using a virtual gripper surface, which positions the actual gripper surface a small distance away from the part. A signed distance field calculation is then performed on each of the adjusted grasps, and those with any collision between the gripper and the part are discarded. The resulting grasp database includes high quality collision-free grasps for use in a robotic part pick-and-place operation.
    Type: Grant
    Filed: November 30, 2021
    Date of Patent: June 25, 2024
    Assignee: FANUC CORPORATION
    Inventors: Yongxiang Fan, Hsien-Chung Lin
  • Publication number: 20240198543
    Abstract: An automated technique for robot gripper fingertip design. A workpiece design and a bin shape are provided as inputs, along with a parameterized gripper design. The gripper parameters define the lengths of segments of the fingertips, and the bend angle between fingertip segments. A fingertip shape, defined by selecting parameter values, is used in a simulated picking of parts from many different randomly defined piles of workpieces in the bin. Grasps for the simulated bin picking are pre-defined and provided as input. A score is assigned to the particular fingertip shape based on the average number of leftovers from the many simulated bin picking operations. A new fingertip shape is then defined by selecting new values for the parameters, and the simulations are repeated to assign a score for the new fingertip shape. This process is repeated to suitably sample the parameter range, and a best-performing fingertip shape is identified.
    Type: Application
    Filed: December 16, 2022
    Publication date: June 20, 2024
    Inventors: Yongxiang Fan, Tetsuaki Kato
  • Publication number: 20240198524
    Abstract: A system and method for providing deep collision avoidance between objects in a robotic system. For a collision between a part and an object, the method decomposes the part into a union of part balls having a known radius and center location and decomposes the object into a union of object balls having a known radius and center location. The method obtains a Minkowski difference between each pair of the part balls and the object balls, converts each Minkowski difference into a Minkowski ball having a known center location and radius, and combines the Minkowski balls into a union of overlapping Minkowski balls. The method determines an outer boundary of the union of the overlapping Minkowski balls, extracts boundary points on the boundary as escape vectors, and maps each of the escape vectors into a collision-free part pose.
    Type: Application
    Filed: December 16, 2022
    Publication date: June 20, 2024
    Inventor: Yongxiang Fan
  • Publication number: 20240190002
    Abstract: An object interference checking technique using point sets which uses CAD models of objects and obstacles and converts the CAD models to 3D points. The 3D point locations are updated based on object motion. The 3D points are then converted to 3D grid space indices defining space occupied by any point on any object or obstacle. The 3D grid space indices are then converted to 1D indices and the 1D indices are stored as a set per object and per position. Swept volumes for an object are created by computing a union of the 1D index sets across multiple motion steps. Interference checking between objects is performed by computing an intersection of the 1D index sets for a given motion step or position. The 1D indices are converted back to 3D coordinates to define the 3D shapes of the swept volumes and interferences.
    Type: Application
    Filed: February 27, 2024
    Publication date: June 13, 2024
    Inventors: Hsien-Chung Lin, Yongxiang Fan, Tetsuaki Kato
  • Publication number: 20240123618
    Abstract: A robot interference checking motion planning technique using point sets. The technique uses CAD models of robot arms and obstacles and converts the CAD models to 3D point sets. The 3D point set coordinates are updated at each time step based on robot and obstacle motion. The 3D points are then converted to 3D grid space indices indicating space occupied by any point on any part. The 3D grid space indices are converted to 1D indices and the 1D indices are stored as sets per object and per time step. Interference checking is performed by computing an intersection of the 1D index sets for a given time step. Swept volumes are created by computing a union of the 1D index sets across multiple time steps. The 1D indices are converted back to 3D coordinates to define the 3D shapes of the swept volumes and the 3D locations of any interferences.
    Type: Application
    Filed: December 14, 2023
    Publication date: April 18, 2024
    Inventors: Hsien-Chung Lin, Yongxiang Fan, Tetsuaki Kato
  • Publication number: 20240109181
    Abstract: A technique for robotic grasp teaching by human demonstration. A human demonstrates a grasp on a workpiece, while a camera provides images of the demonstration which are analyzed to identify a hand pose relative to the workpiece. The hand pose is converted to a plane representing two fingers of a gripper. The hand plane is used to determine a grasp region on the workpiece which corresponds to the human demonstration. The grasp region and the hand pose are used in an optimization computation which is run repeatedly with randomization to generate multiple grasps approximating the demonstration, where each of the optimized grasps is a stable, high quality grasp with gripper-workpiece surface contact. A best one of the generated grasps is then selected and added to a grasp database. The human demonstration may be repeated on different locations of the workpiece to provide multiple different grasps in the database.
    Type: Application
    Filed: September 23, 2022
    Publication date: April 4, 2024
    Inventors: Kaimeng Wang, Yongxiang Fan
  • Patent number: 11919161
    Abstract: A robotic grasp generation technique for machine tending applications. Part and gripper geometry are provided as inputs, typically from CAD files. Gripper kinematics are also defined as an input. Preferred and prohibited grasp locations on the part may also be defined as inputs, to ensure that the computed grasp candidates enable the robot to load the part into a machining station such that the machining station can grasp a particular location on the part. An optimization solver is used to compute a quality grasp with stable surface contact between the part and the gripper, with no interference between the gripper and the part, and allowing for the preferred and prohibited grasp locations which were defined as inputs. All surfaces of the gripper fingers are considered for grasping and collision avoidance. A loop with random initialization is used to automatically compute many hundreds of diverse grasps for the part.
    Type: Grant
    Filed: October 15, 2021
    Date of Patent: March 5, 2024
    Assignee: FANUC CORPORATION
    Inventor: Yongxiang Fan
  • Patent number: 11878424
    Abstract: A robot interference checking motion planning technique using point sets. The technique uses CAD models of robot arms and obstacles and converts the CAD models to 3D point sets. The 3D point set coordinates are updated at each time step based on robot and obstacle motion. The 3D points are then converted to 3D grid space indices indicating space occupied by any point on any part. The 3D grid space indices are converted to 1D indices and the 1D indices are stored as sets per object and per time step. Interference checking is performed by computing an intersection of the 1D index sets for a given time step. Swept volumes are created by computing a union of the 1D index sets across multiple time steps. The 1D indices are converted back to 3D coordinates to define the 3D shapes of the swept volumes and the 3D locations of any interferences.
    Type: Grant
    Filed: December 6, 2021
    Date of Patent: January 23, 2024
    Assignee: FANUC CORPORATION
    Inventors: Hsien-Chung Lin, Yongxiang Fan, Tetsuaki Kato
  • Patent number: 11809521
    Abstract: A method for modularizing high dimensional neural networks into neural networks of lower input dimensions. The method is suited to generating full-DOF robot grasping actions based on images of parts to be picked. In one example, a first network encodes grasp positional dimensions and a second network encodes rotational dimensions. The first network is trained to predict a position at which a grasp quality is maximized for any value of the grasp rotations. The second network is trained to identify the maximum grasp quality while searching only at the position from the first network. Thus, the two networks collectively identify an optimal grasp, while each network's searching space is reduced. Many grasp positions and rotations can be evaluated in a search quantity of the sum of the evaluated positions and rotations, rather than the product. Dimensions may be separated in any suitable fashion, including three neural networks in some applications.
    Type: Grant
    Filed: June 8, 2021
    Date of Patent: November 7, 2023
    Assignee: FANUC CORPORATION
    Inventor: Yongxiang Fan
  • Publication number: 20230294291
    Abstract: A method for line matching during image-based visual servoing control of a robot performing a workpiece installation. The method uses a target image from human demonstration and a current image of a robotic execution phase. A plurality of lines are identified in the target and current images, and an initial pairing of target-current lines is defined based on distance and angle. An optimization computation determines image transposes which minimize a cost function formulated to include both direction and distance between target lines and current lines using 2D data in the camera image plane, and constraint equations which relate the lines in the image plane to the 3D workpiece pose. The rotational and translational transposes which minimize the cost function are used to update the line pair matching, and the best line pairs are used to compute a difference signal for controlling robot motion during visual servoing.
    Type: Application
    Filed: March 15, 2022
    Publication date: September 21, 2023
    Inventors: Kaimeng Wang, Yongxiang Fan
  • Publication number: 20230256602
    Abstract: A region-based robotic grasp generation technique for machine tending or bin picking applications. Part and gripper geometry are provided as inputs, typically from CAD files, along with gripper kinematics. A human user defines one or more target grasp regions on the part, using a graphical user interface displaying the part geometry. The target grasp regions are identified by the user based on the user's knowledge of how the part may be grasped to ensure that the part can be subsequently placed in a proper destination pose. For each of the target grasp regions, an optimization solver is used to compute a plurality of quality grasps with stable surface contact between the part and the gripper, and no part-gripper interference. The computed grasps for each target grasp region are placed in a grasp database which is used by a robot in actual bin picking operations.
    Type: Application
    Filed: February 17, 2022
    Publication date: August 17, 2023
    Inventors: Yongxiang Fan, Chi-Keng Tsai, Tetsuaki Kato
  • Publication number: 20230173674
    Abstract: A robot interference checking motion planning technique using point sets. The technique uses CAD models of robot arms and obstacles and converts the CAD models to 3D point sets. The 3D point set coordinates are updated at each time step based on robot and obstacle motion. The 3D points are then converted to 3D grid space indices indicating space occupied by any point on any part. The 3D grid space indices are converted to 1D indices and the 1D indices are stored as sets per object and per time step. Interference checking is performed by computing an intersection of the 1D index sets for a given time step. Swept volumes are created by computing a union of the 1D index sets across multiple time steps. The 1D indices are converted back to 3D coordinates to define the 3D shapes of the swept volumes and the 3D locations of any interferences.
    Type: Application
    Filed: December 6, 2021
    Publication date: June 8, 2023
    Inventors: Hsien-Chung Lin, Yongxiang Fan, Tetsuaki Kato
  • Publication number: 20230166398
    Abstract: A robotic grasp generation technique for part picking applications. Part and gripper geometry are provided as inputs, typically from CAD files. Gripper kinematics are also defined as an input. A set of candidate grasps is provided using any known preliminary grasp generation tool. A point model of the part and a model of the gripper contact surfaces with a clearance margin are used in an optimization computation applied to each of the candidate grasps, resulting in an adjusted grasp database. The adjusted grasps optimize grasp quality using a virtual gripper surface, which positions the actual gripper surface a small distance away from the part. A signed distance field calculation is then performed on each of the adjusted grasps, and those with any collision between the gripper and the part are discarded. The resulting grasp database includes high quality collision-free grasps for use in a robotic part pick-and-place operation.
    Type: Application
    Filed: November 30, 2021
    Publication date: June 1, 2023
    Inventors: Yongxiang Fan, Hsien-Chung Lin
  • Patent number: 11654564
    Abstract: A grasp generation technique for robotic pick-up of parts. A database of solid or surface models is provided for all objects and grippers which are to be evaluated. A gripper is selected and a random initialization is performed, where random objects and poses are selected from the object database. An iterative optimization computation is then performed, where many hundreds of grasps are computed for each part with surface contact between the part and the gripper, and sampling for grasp diversity and global optimization. Finally, a physical environment simulation is performed, where the grasps for each part are mapped to simulated piles of objects in a bin scenario. The grasp points and approach directions from the physical environment simulation are then used to train neural networks for grasp learning in real-world robotic operations, where the simulation results are correlated to camera depth image data to identify a high quality grasp.
    Type: Grant
    Filed: September 10, 2020
    Date of Patent: May 23, 2023
    Assignee: FANUC CORPORATION
    Inventor: Yongxiang Fan
  • Publication number: 20230124599
    Abstract: A robotic grasp generation technique for machine tending applications. Part and gripper geometry are provided as inputs, typically from CAD files. Gripper kinematics are also defined as an input. Preferred and prohibited grasp locations on the part may also be defined as inputs, to ensure that the computed grasp candidates enable the robot to load the part into a machining station such that the machining station can grasp a particular location on the part. An optimization solver is used to compute a quality grasp with stable surface contact between the part and the gripper, with no interference between the gripper and the part, and allowing for the preferred and prohibited grasp locations which were defined as inputs. All surfaces of the gripper fingers are considered for grasping and collision avoidance. A loop with random initialization is used to automatically compute many hundreds of diverse grasps for the part.
    Type: Application
    Filed: October 15, 2021
    Publication date: April 20, 2023
    Inventor: Yongxiang Fan
  • Publication number: 20220388162
    Abstract: A method for modularizing high dimensional neural networks into neural networks of lower input dimensions. The method is suited to generating full-DOF robot grasping actions based on images of parts to be picked. In one example, a first network encodes grasp positional dimensions and a second network encodes rotational dimensions. The first network is trained to predict a position at which a grasp quality is maximized for any value of the grasp rotations. The second network is trained to identify the maximum grasp quality while searching only at the position from the first network. Thus, the two networks collectively identify an optimal grasp, while each network's searching space is reduced. Many grasp positions and rotations can be evaluated in a search quantity of the sum of the evaluated positions and rotations, rather than the product. Dimensions may be separated in any suitable fashion, including three neural networks in some applications.
    Type: Application
    Filed: June 8, 2021
    Publication date: December 8, 2022
    Inventor: Yongxiang Fan
  • Publication number: 20220391638
    Abstract: A method for modularizing high dimensional neural networks into neural networks of lower input dimensions. The method is suited to generating full-DOF robot grasping actions based on images of parts to be picked. In one example, a first network encodes grasp positional dimensions and a second network encodes rotational dimensions. The first network is trained to predict a position at which a grasp quality is maximized for any value of the grasp rotations. The second network is trained to identify the maximum grasp quality while searching only at the position from the first network. Thus, the two networks collectively identify an optimal grasp, while each network's searching space is reduced. Many grasp positions and rotations can be evaluated in a search quantity of the sum of the evaluated positions and rotations, rather than the product. Dimensions may be separated in any suitable fashion, including three neural networks in some applications.
    Type: Application
    Filed: June 8, 2021
    Publication date: December 8, 2022
    Inventor: Yongxiang Fan
  • Publication number: 20220072707
    Abstract: A grasp generation technique for robotic pick-up of parts. A database of solid or surface models is provided for all objects and grippers which are to be evaluated. A gripper is selected and a random initialization is performed, where random objects and poses are selected from the object database. An iterative optimization computation is then performed, where many hundreds of grasps are computed for each part with surface contact between the part and the gripper, and sampling for grasp diversity and global optimization. Finally, a physical environment simulation is performed, where the grasps for each part are mapped to simulated piles of objects in a bin scenario. The grasp points and approach directions from the physical environment simulation are then used to train neural networks for grasp learning in real-world robotic operations, where the simulation results are correlated to camera depth image data to identify a high quality grasp.
    Type: Application
    Filed: September 10, 2020
    Publication date: March 10, 2022
    Inventor: Yongxiang Fan
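
The line-matching entries above (patent 12017371 and publication 20230294291) describe an initial pairing of target and current image lines based on distance and angle. The following is a minimal sketch of that pairing step, assuming 2D lines given as endpoint pairs; the angle weight is an illustrative value, and a standard assignment solver stands in for whatever pairing rule the method actually uses.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def line_features(lines):
    """lines: (N, 4) array of [x1, y1, x2, y2] endpoints. Returns midpoints and angles."""
    lines = np.asarray(lines, dtype=float)
    p1, p2 = lines[:, :2], lines[:, 2:]
    mid = 0.5 * (p1 + p2)
    ang = np.arctan2(p2[:, 1] - p1[:, 1], p2[:, 0] - p1[:, 0])
    return mid, ang

def initial_line_pairs(target_lines, current_lines, w_angle=50.0):
    """Pair target and current image lines by midpoint distance plus a weighted
    angular difference (lines are undirected, so angles wrap at pi)."""
    mt, at = line_features(target_lines)
    mc, ac = line_features(current_lines)
    dist = np.linalg.norm(mt[:, None, :] - mc[None, :, :], axis=2)   # pixels
    dang = np.abs(at[:, None] - ac[None, :]) % np.pi
    dang = np.minimum(dang, np.pi - dang)                            # radians
    rows, cols = linear_sum_assignment(dist + w_angle * dang)
    return list(zip(rows.tolist(), cols.tolist()))                   # (target, current) index pairs
```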
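
Patents 12017355 and 11809521 (and publications 20220388162 and 20220391638) describe splitting a high-dimensional grasp search across lower-dimensional networks. The sketch below illustrates the idea with one network scoring positions and a second scoring rotations only at the chosen position; the architectures, input encoding, and rotation discretization are illustrative assumptions, not the networks from the patents.

```python
import torch
import torch.nn as nn

class PositionNet(nn.Module):
    """Scores every image position with the grasp quality maximized over rotations."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, depth):                 # depth: (B, 1, H, W)
        return self.net(depth)                # (B, 1, H, W) quality map

class RotationNet(nn.Module):
    """Scores a discrete set of gripper rotations at one candidate position."""
    def __init__(self, num_rotations=36):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, num_rotations))

    def forward(self, position):              # position: (B, 3) = (u, v, depth)
        return self.net(position)             # (B, num_rotations) qualities

# Inference searches positions and rotations sequentially, so the cost scales
# with (#positions + #rotations) rather than their product.
pos_net, rot_net = PositionNet(), RotationNet()
depth = torch.rand(1, 1, 64, 64)              # dummy depth image
with torch.no_grad():
    qmap = pos_net(depth)[0, 0]                           # (64, 64) quality map
    v, u = divmod(int(qmap.argmax()), qmap.shape[1])      # best pixel (row, col)
    pos = torch.tensor([[float(u), float(v), float(depth[0, 0, v, u])]])
    best_rotation = int(rot_net(pos).argmax())            # best rotation index
```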
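
Patent 12017356 and publication 20230166398 discard adjusted grasps that collide with the part using a signed distance field calculation. The sketch below shows a simplified clearance check of that kind, assuming part and gripper sample points in a common frame; an unsigned Euclidean distance transform stands in for the signed field, and the voxel pitch and clearance value are illustrative.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

VOXEL = 0.005                                  # grid pitch in meters (assumed)

def clearance_field(part_points, padding=0.05):
    """Voxelize part points and return (distance-to-part field, grid origin)."""
    part_points = np.asarray(part_points, dtype=float)
    origin = part_points.min(axis=0) - padding
    extent = part_points.max(axis=0) + padding - origin
    dims = np.ceil(extent / VOXEL).astype(int) + 1
    occ = np.zeros(dims, dtype=bool)
    idx = np.floor((part_points - origin) / VOXEL).astype(int)
    occ[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    # Distance (in meters) from every free voxel to the nearest part voxel.
    dist = distance_transform_edt(~occ, sampling=VOXEL)
    return dist, origin

def grasp_collides(gripper_points, dist, origin, clearance=0.002):
    """Discard a grasp if any gripper sample point is closer than `clearance`."""
    idx = np.floor((np.asarray(gripper_points) - origin) / VOXEL).astype(int)
    idx = np.clip(idx, 0, np.array(dist.shape) - 1)
    return bool(np.any(dist[idx[:, 0], idx[:, 1], idx[:, 2]] < clearance))
```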
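
Publication 20240198543 scores parameterized fingertip shapes by the average number of parts left over across many simulated bin-picking runs. The skeleton below shows only the parameter-sampling loop; simulate_bin_picking is a hypothetical placeholder for the physics simulation, and the parameter ranges are assumed.

```python
import random

def simulate_bin_picking(seg1, seg2, bend_deg, num_piles=20):
    """Hypothetical placeholder: a real implementation would run `num_piles`
    simulated picking episodes on random part piles with this fingertip shape
    and return the average number of leftover parts per pile."""
    return random.uniform(0.0, 5.0)

best_shape, best_score = None, float("inf")
for _ in range(200):                          # sample the parameter range
    seg1 = random.uniform(0.02, 0.08)         # first segment length (m), assumed range
    seg2 = random.uniform(0.02, 0.08)         # second segment length (m), assumed range
    bend = random.uniform(0.0, 90.0)          # bend angle between segments (deg)
    score = simulate_bin_picking(seg1, seg2, bend)
    if score < best_score:                    # fewer average leftovers is better
        best_shape, best_score = (seg1, seg2, bend), score
```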
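
Publication 20240198524 builds escape vectors from the outer boundary of a union of Minkowski balls. The sketch below follows that construction, assuming the part and the obstacle have already been decomposed into bounding balls; the sampling density and tolerance are illustrative choices.

```python
import numpy as np

def fibonacci_sphere(n):
    """Roughly uniform unit directions for sampling a sphere surface."""
    k = np.arange(n) + 0.5
    phi = np.arccos(1.0 - 2.0 * k / n)
    theta = np.pi * (1.0 + 5 ** 0.5) * k
    return np.stack([np.sin(phi) * np.cos(theta),
                     np.sin(phi) * np.sin(theta),
                     np.cos(phi)], axis=1)

def escape_translations(part_balls, obstacle_balls, samples_per_ball=200):
    """part_balls / obstacle_balls: lists of (center as (3,) array, radius).
    A (part ball, obstacle ball) pair collides exactly when the part's
    translation lies inside a Minkowski ball centered at (obstacle center -
    part center) with the two radii summed.  Points on the outer boundary of
    the union of those balls are escape vectors: translating the part by one
    of them yields a just-touching, collision-free pose under this ball model."""
    pairs = [(q - p, rp + rq) for p, rp in part_balls for q, rq in obstacle_balls]
    centers = np.array([c for c, _ in pairs])
    radii = np.array([r for _, r in pairs])
    dirs = fibonacci_sphere(samples_per_ball)
    boundary = []
    for c, r in zip(centers, radii):
        pts = c + r * dirs                                       # surface of one ball
        d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        on_outer = np.all(d >= radii[None, :] - 1e-9, axis=1)    # not inside any other ball
        boundary.append(pts[on_outer])
    return np.vstack(boundary)
```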
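
Patent 11878424 and publications 20230173674, 20240123618, and 20240190002 keep object occupancy as sets of flattened 1D grid indices, intersecting the sets for interference checks and uniting them for swept volumes. The sketch below shows that bookkeeping, assuming objects are already sampled into 3D point arrays; the grid size, voxel pitch, and workspace origin are illustrative values.

```python
import numpy as np

GRID_DIMS = np.array([200, 200, 200])    # voxels per axis (assumed workspace size)
VOXEL = 0.01                             # voxel edge length in meters (assumed)
ORIGIN = np.array([-1.0, -1.0, -1.0])    # workspace corner (assumed)

def points_to_1d_indices(points):
    """Convert Nx3 points to a set of flattened 1D voxel indices."""
    ijk = np.floor((np.asarray(points) - ORIGIN) / VOXEL).astype(np.int64)
    ijk = np.clip(ijk, 0, GRID_DIMS - 1)
    flat = (ijk[:, 0] * GRID_DIMS[1] + ijk[:, 1]) * GRID_DIMS[2] + ijk[:, 2]
    return set(flat.tolist())

def indices_to_points(indices):
    """Convert 1D indices back to voxel-center coordinates (e.g. for display)."""
    flat = np.fromiter(indices, dtype=np.int64)
    k = flat % GRID_DIMS[2]
    j = (flat // GRID_DIMS[2]) % GRID_DIMS[1]
    i = flat // (GRID_DIMS[1] * GRID_DIMS[2])
    return ORIGIN + (np.stack([i, j, k], axis=1) + 0.5) * VOXEL

def interferes(obj_a, obj_b):
    """Interference at one time step: intersection of the two index sets."""
    return bool(obj_a & obj_b)

def swept_volume(per_step_indices):
    """Swept volume over a motion: union of the per-step index sets."""
    return set().union(*per_step_indices)
```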
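
Publication 20240109181 converts a demonstrated hand pose into a plane representing two gripper fingers and uses it to select a grasp region on the workpiece. The sketch below covers only those first steps, assuming 3D hand keypoints from a hand tracker; the band width is an illustrative value.

```python
import numpy as np

def fit_hand_plane(keypoints):
    """keypoints: (N, 3) hand landmark positions from the demonstration images.
    Returns (plane point, unit normal); the plane stands in for two gripper fingers."""
    keypoints = np.asarray(keypoints, dtype=float)
    centroid = keypoints.mean(axis=0)
    _, _, vt = np.linalg.svd(keypoints - centroid)
    return centroid, vt[-1]            # direction of least variance = plane normal

def grasp_region(part_points, plane_point, plane_normal, band=0.01):
    """Workpiece points within `band` meters of the demonstrated hand plane."""
    part_points = np.asarray(part_points, dtype=float)
    d = np.abs((part_points - plane_point) @ plane_normal)
    return part_points[d < band]
```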