Patents by Inventor Yongxiang Fan

Yongxiang Fan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240142175
    Abstract: Disclosed are an unblocking apparatus for a furnace discharging pipe and a method of using it. The unblocking apparatus includes a rail, a rail car that can move along the rail, an unblocking drive mechanism arranged on the rail car, a heat-unblocking component, a cold-unblocking component, and a material receiving component for receiving blocking material from the discharging pipe; the drive end of the unblocking drive mechanism is detachably connected to one end of the heat-unblocking component and of the cold-unblocking component, respectively. By connecting the unblocking drive mechanism to an unblocking rod for heat-unblocking and to a drilling rod for cold-unblocking, the apparatus handles different blockage conditions of the furnace discharging pipe with two unblocking modes, heat and cold, and the discharging pipe can be unblocked by remote operation.
    Type: Application
    Filed: October 23, 2023
    Publication date: May 2, 2024
    Applicants: China Nuclear Sichuan Environmental Protection Engineering Co., Ltd., China Building Materials Academy, China Nuclear Power Engineering Co., Ltd.
    Inventors: Weidong XU, Yu CHANG, Yongchang ZHU, Hong DUAN, Chunyu TIAN, Wei WU, Debo YANG, Qingbin ZHAO, Shuaizhen WU, Lin WANG, Zhu CUI, Heyi GUO, Maosong FAN, Yuancheng SUN, Jie MEI, Xiaoli AN, Yongxiang ZHAO, Qinda LIU
  • Patent number: 11965010
    Abstract: Disclosed are a method for preparing a porcine-derived interferon-delta 5 (pIFN-δ5) and an application of the pIFN-δ5, where the method for preparing pIFN-δ5 includes the following steps: step S1, obtaining a DNA fragment containing pIFN-δ5 gene through reverse transcription-polymerase chain reaction (RT-PCR) amplification by using the total RNA of pretreated porcine small intestinal epithelial cells IPEC-J2; step S2, inserting the DNA fragment containing pIFN-δ5 gene into an exogenous expression vector to construct a recombinant expression vector for expressing the pIFN-δ5 gene; and step S3, introducing the recombinant expression vector into a suitable host cell, and driving the host cell to express the pIFN-δ5 gene to obtain the pIFN-δ5. The recombinant pIFN-δ5 protein is used to prepare drugs or preparations for inhibiting infection of porcine epidemic diarrhea virus (PEDV), porcine transmissible gastroenteritis virus (TGEV), porcine delta coronavirus (PDCoV) and porcine rotavirus (PRoV).
    Type: Grant
    Filed: April 27, 2023
    Date of Patent: April 23, 2024
    Assignee: JIANGSU ACADEMY OF AGRICULTURAL SCIENCES
    Inventors: Baochao Fan, Bin Li, Shiying Song, Xuehan Zhang, Xuejiao Zhu, Jinzhu Zhou, Yongxiang Zhao, Jizong Li, Rongli Guo, Weilu Guo, Xue Zhang
  • Publication number: 20240123618
    Abstract: A robot interference checking motion planning technique using point sets. The technique uses CAD models of robot arms and obstacles and converts the CAD models to 3D point sets. The 3D point set coordinates are updated at each time step based on robot and obstacle motion. The 3D points are then converted to 3D grid space indices indicating space occupied by any point on any part. The 3D grid space indices are converted to 1D indices and the 1D indices are stored as sets per object and per time step. Interference checking is performed by computing an intersection of the 1D index sets for a given time step. Swept volumes are created by computing a union of the 1D index sets across multiple time steps. The 1D indices are converted back to 3D coordinates to define the 3D shapes of the swept volumes and the 3D locations of any interferences.
    Type: Application
    Filed: December 14, 2023
    Publication date: April 18, 2024
    Inventors: Hsien-Chung Lin, Yongxiang Fan, Tetsuaki Kato
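    The indexing scheme in the abstract above maps point-cloud coordinates to flattened voxel indices so that interference checks become set intersections and swept volumes become set unions. The sketch below illustrates that idea only; the grid bounds, cell size, and helper names are assumptions, not details from the publication.

    ```python
    # Minimal sketch of point-set interference checking via flattened voxel indices.
    # GRID_MIN, CELL, and DIMS are assumed values, not from the publication.
    import numpy as np

    GRID_MIN = np.array([-2.0, -2.0, 0.0])   # assumed workspace lower bound (meters)
    CELL = 0.01                              # assumed voxel size
    DIMS = np.array([400, 400, 300])         # grid cells per axis

    def points_to_1d_indices(points):
        """Convert an (N, 3) point set to a set of flattened voxel indices."""
        ijk = np.floor((points - GRID_MIN) / CELL).astype(int)
        ijk = np.clip(ijk, 0, DIMS - 1)
        flat = ijk[:, 0] * DIMS[1] * DIMS[2] + ijk[:, 1] * DIMS[2] + ijk[:, 2]
        return set(flat.tolist())

    def indices_to_points(flat_indices):
        """Map flattened indices back to voxel-center 3D coordinates."""
        flat = np.array(sorted(flat_indices))
        i = flat // (DIMS[1] * DIMS[2])
        j = (flat // DIMS[2]) % DIMS[1]
        k = flat % DIMS[2]
        return GRID_MIN + (np.stack([i, j, k], axis=1) + 0.5) * CELL

    def check_and_sweep(robot_points_per_step, obstacle_points_per_step):
        """robot/obstacle point sets per time step, already transformed by their motion."""
        swept = set()
        collisions = {}
        for t, (rp, op) in enumerate(zip(robot_points_per_step, obstacle_points_per_step)):
            r_idx = points_to_1d_indices(rp)
            o_idx = points_to_1d_indices(op)
            overlap = r_idx & o_idx          # interference check: set intersection
            if overlap:
                collisions[t] = indices_to_points(overlap)
            swept |= r_idx                   # swept volume: union across time steps
        return collisions, indices_to_points(swept)
    ```

    Representing occupancy as integer sets keeps both the per-step collision check and the multi-step swept volume to plain set operations, which is the property the abstract emphasizes.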
  • Publication number: 20240109181
    Abstract: A technique for robotic grasp teaching by human demonstration. A human demonstrates a grasp on a workpiece, while a camera provides images of the demonstration which are analyzed to identify a hand pose relative to the workpiece. The hand pose is converted to a plane representing two fingers of a gripper. The hand plane is used to determine a grasp region on the workpiece which corresponds to the human demonstration. The grasp region and the hand pose are used in an optimization computation which is run repeatedly with randomization to generate multiple grasps approximating the demonstration, where each of the optimized grasps is a stable, high quality grasp with gripper-workpiece surface contact. A best one of the generated grasps is then selected and added to a grasp database. The human demonstration may be repeated on different locations of the workpiece to provide multiple different grasps in the database.
    Type: Application
    Filed: September 23, 2022
    Publication date: April 4, 2024
    Inventors: Kaimeng Wang, Yongxiang Fan
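    One step in the abstract above, converting a demonstrated hand pose into a two-finger gripper plane and a grasp region on the workpiece, can be sketched as follows. The choice of hand keypoints and the distance thresholds are assumptions for illustration, not values from the publication.

    ```python
    # Rough sketch: fit a gripper plane from hand keypoints, then select workpiece
    # points near that plane as the demonstrated grasp region. Thresholds are assumed.
    import numpy as np

    def hand_plane(thumb_tip, index_tip, wrist):
        """Plane spanned by three hand keypoints (each shape (3,))."""
        center = (thumb_tip + index_tip) / 2.0
        normal = np.cross(index_tip - thumb_tip, wrist - center)
        return center, normal / np.linalg.norm(normal)

    def grasp_region(workpiece_points, center, normal, band=0.005, radius=0.05):
        """Workpiece points close to the gripper plane and near the hand center."""
        d_plane = np.abs((workpiece_points - center) @ normal)
        d_center = np.linalg.norm(workpiece_points - center, axis=1)
        return workpiece_points[(d_plane < band) & (d_center < radius)]
    ```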
  • Patent number: 11919161
    Abstract: A robotic grasp generation technique for machine tending applications. Part and gripper geometry are provided as inputs, typically from CAD files. Gripper kinematics are also defined as an input. Preferred and prohibited grasp locations on the part may also be defined as inputs, to ensure that the computed grasp candidates enable the robot to load the part into a machining station such that the machining station can grasp a particular location on the part. An optimization solver is used to compute a quality grasp with stable surface contact between the part and the gripper, with no interference between the gripper and the part, and allowing for the preferred and prohibited grasp locations which were defined as inputs. All surfaces of the gripper fingers are considered for grasping and collision avoidance. A loop with random initialization is used to automatically compute many hundreds of diverse grasps for the part.
    Type: Grant
    Filed: October 15, 2021
    Date of Patent: March 5, 2024
    Assignee: FANUC CORPORATION
    Inventor: Yongxiang Fan
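    The abstract above describes a loop with random initialization that repeatedly runs an optimization solver to accumulate many diverse grasps. The sketch below shows only that outer loop structure with a deliberately simplified stand-in cost (distance from the gripper center to a stand-in part point cloud); the patented contact-quality and interference model is not reproduced, and the rotation part of each pose is carried but unused by the stand-in cost.

    ```python
    # Schematic of "many random starts -> local optimization -> keep diverse grasps".
    # The part, cost, and acceptance threshold are placeholders, not the patented model.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.spatial import cKDTree

    part_points = np.random.default_rng(1).uniform(-0.05, 0.05, size=(2000, 3))  # stand-in part
    tree = cKDTree(part_points)

    def cost(pose):
        """Stand-in cost: distance from gripper center (pose[:3]) to the part surface."""
        dist, _ = tree.query(pose[:3])
        return dist

    rng = np.random.default_rng(0)
    grasps = []
    for _ in range(200):                     # loop with random initialization
        x0 = np.concatenate([rng.uniform(-0.1, 0.1, 3), rng.uniform(-np.pi, np.pi, 3)])
        res = minimize(cost, x0, method="Nelder-Mead")
        if res.fun < 0.005:                  # keep only poses that converge onto the part
            grasps.append(res.x)
    ```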
  • Patent number: 11878424
    Abstract: A robot interference checking motion planning technique using point sets. The technique uses CAD models of robot arms and obstacles and converts the CAD models to 3D point sets. The 3D point set coordinates are updated at each time step based on robot and obstacle motion. The 3D points are then converted to 3D grid space indices indicating space occupied by any point on any part. The 3D grid space indices are converted to 1D indices and the 1D indices are stored as sets per object and per time step. Interference checking is performed by computing an intersection of the 1D index sets for a given time step. Swept volumes are created by computing a union of the 1D index sets across multiple time steps. The 1D indices are converted back to 3D coordinates to define the 3D shapes of the swept volumes and the 3D locations of any interferences.
    Type: Grant
    Filed: December 6, 2021
    Date of Patent: January 23, 2024
    Assignee: FANUC CORPORATION
    Inventors: Hsien-Chung Lin, Yongxiang Fan, Tetsuaki Kato
  • Patent number: 11809521
    Abstract: A method for modularizing high dimensional neural networks into neural networks of lower input dimensions. The method is suited to generating full-DOF robot grasping actions based on images of parts to be picked. In one example, a first network encodes grasp positional dimensions and a second network encodes rotational dimensions. The first network is trained to predict a position at which a grasp quality is maximized for any value of the grasp rotations. The second network is trained to identify the maximum grasp quality while searching only at the position from the first network. Thus, the two networks collectively identify an optimal grasp, while each network's searching space is reduced. Many grasp positions and rotations can be evaluated in a search quantity of the sum of the evaluated positions and rotations, rather than the product. Dimensions may be separated in any suitable fashion, including three neural networks in some applications.
    Type: Grant
    Filed: June 8, 2021
    Date of Patent: November 7, 2023
    Assignee: FANUC CORPORATION
    Inventor: Yongxiang Fan
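    The decomposition in the abstract above replaces one joint search over positions and rotations with two smaller searches whose evaluation counts add rather than multiply. The sketch below shows only that search structure; the two "networks" are placeholder scoring functions, not trained models from the patent.

    ```python
    # Schematic of the two-stage search: position network first, rotation network
    # only at the best position. The scoring functions are placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    positions = [rng.uniform(-0.1, 0.1, 3) for _ in range(1000)]   # candidate grasp positions
    rotations = np.linspace(-np.pi, np.pi, 360)                    # candidate grasp angles

    def position_net(depth_image, p):
        """Stand-in for network 1: best-over-rotations grasp quality at position p."""
        return -np.linalg.norm(p)

    def rotation_net(depth_image, p, r):
        """Stand-in for network 2: grasp quality at position p with rotation r."""
        return -np.linalg.norm(p) + np.cos(r)

    depth_image = None  # a real system would pass the camera depth image here

    best_p = max(positions, key=lambda p: position_net(depth_image, p))      # 1000 evaluations
    best_r = max(rotations, key=lambda r: rotation_net(depth_image, best_p, r))  # 360 evaluations
    # Total: 1000 + 360 evaluations instead of 1000 * 360 for the joint search.
    ```

    The design choice the abstract highlights is exactly this additive search budget: each network keeps its own lower-dimensional input space while the pair still identifies a full-DOF grasp.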
  • Publication number: 20230294291
    Abstract: A method for line matching during image-based visual servoing control of a robot performing a workpiece installation. The method uses a target image from human demonstration and a current image of a robotic execution phase. A plurality of lines are identified in the target and current images, and an initial pairing of target-current lines is defined based on distance and angle. An optimization computation determines image transposes which minimize a cost function formulated to include both direction and distance between target lines and current lines using 2D data in the camera image plane, and constraint equations which relate the lines in the image plane to the 3D workpiece pose. The rotational and translational transposes which minimize the cost function are used to update the line pair matching, and the best line pairs are used to compute a difference signal for controlling robot motion during visual servoing.
    Type: Application
    Filed: March 15, 2022
    Publication date: September 21, 2023
    Inventors: Kaimeng Wang, Yongxiang Fan
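    The initial target-to-current line pairing mentioned in the abstract above can be illustrated with a simple cost combining midpoint distance and angle difference in the image plane. The greedy assignment and the angle weight below are assumptions for illustration, not the patented optimization.

    ```python
    # Sketch of the initial line pairing by distance and angle. Each line is a pair
    # of 2D endpoints in the image plane; the angle weight is an assumed value.
    import numpy as np

    def line_features(line):
        p1, p2 = np.asarray(line[0], float), np.asarray(line[1], float)
        midpoint = (p1 + p2) / 2.0
        d = p2 - p1
        angle = np.arctan2(d[1], d[0])       # line orientation in the image plane
        return midpoint, angle

    def initial_pairing(target_lines, current_lines, angle_weight=50.0):
        pairs, used = [], set()
        for i, t in enumerate(target_lines):
            tm, ta = line_features(t)
            best_j, best_cost = None, np.inf
            for j, c in enumerate(current_lines):
                if j in used:
                    continue
                cm, ca = line_features(c)
                d_angle = np.abs(np.arctan2(np.sin(ta - ca), np.cos(ta - ca)))
                cost = np.linalg.norm(tm - cm) + angle_weight * d_angle
                if cost < best_cost:
                    best_j, best_cost = j, cost
            if best_j is not None:
                used.add(best_j)
                pairs.append((i, best_j))
        return pairs
    ```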
  • Publication number: 20230256602
    Abstract: A region-based robotic grasp generation technique for machine tending or bin picking applications. Part and gripper geometry are provided as inputs, typically from CAD files, along with gripper kinematics. A human user defines one or more target grasp regions on the part, using a graphical user interface displaying the part geometry. The target grasp regions are identified by the user based on the user's knowledge of how the part may be grasped to ensure that the part can be subsequently placed in a proper destination pose. For each of the target grasp regions, an optimization solver is used to compute a plurality of quality grasps with stable surface contact between the part and the gripper, and no part-gripper interference. The computed grasps for each target grasp region are placed in a grasp database which is used by a robot in actual bin picking operations.
    Type: Application
    Filed: February 17, 2022
    Publication date: August 17, 2023
    Inventors: Yongxiang Fan, Chi-Keng Tsai, Tetsuaki Kato
  • Publication number: 20230173674
    Abstract: A robot interference checking motion planning technique using point sets. The technique uses CAD models of robot arms and obstacles and converts the CAD models to 3D point sets. The 3D point set coordinates are updated at each time step based on robot and obstacle motion. The 3D points are then converted to 3D grid space indices indicating space occupied by any point on any part. The 3D grid space indices are converted to 1D indices and the 1D indices are stored as sets per object and per time step. Interference checking is performed by computing an intersection of the 1D index sets for a given time step. Swept volumes are created by computing a union of the 1D index sets across multiple time steps. The 1D indices are converted back to 3D coordinates to define the 3D shapes of the swept volumes and the 3D locations of any interferences.
    Type: Application
    Filed: December 6, 2021
    Publication date: June 8, 2023
    Inventors: Hsien-Chung Lin, Yongxiang Fan, Tetsuaki Kato
  • Publication number: 20230166398
    Abstract: A robotic grasp generation technique for part picking applications. Part and gripper geometry are provided as inputs, typically from CAD files. Gripper kinematics are also defined as an input. A set of candidate grasps is provided using any known preliminary grasp generation tool. A point model of the part and a model of the gripper contact surfaces with a clearance margin are used in an optimization computation applied to each of the candidate grasps, resulting in an adjusted grasp database. The adjusted grasps optimize grasp quality using a virtual gripper surface, which positions the actual gripper surface a small distance away from the part. A signed distance field calculation is then performed on each of the adjusted grasps, and those with any collision between the gripper and the part are discarded. The resulting grasp database includes high quality collision-free grasps for use in a robotic part pick-and-place operation.
    Type: Application
    Filed: November 30, 2021
    Publication date: June 1, 2023
    Inventors: Yongxiang Fan, Hsien-Chung Lin
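    The final stage in the abstract above discards adjusted grasps that collide with the part by evaluating a signed distance field. The sketch below uses an exact signed distance function for a box-shaped part as a stand-in for an SDF computed from the part's CAD model; the clearance value and the gripper sample points are assumptions.

    ```python
    # Sketch of the signed-distance collision filter, with a box SDF standing in
    # for a general part SDF. Grasps with any gripper point inside the part are rejected.
    import numpy as np

    def box_sdf(points, half_extents):
        """Signed distance from points (N, 3) to an axis-aligned box at the origin."""
        q = np.abs(points) - half_extents
        outside = np.linalg.norm(np.maximum(q, 0.0), axis=1)
        inside = np.minimum(np.max(q, axis=1), 0.0)
        return outside + inside

    def grasp_is_collision_free(gripper_points, part_half_extents, clearance=0.0):
        """Keep a grasp only if every gripper surface point stays outside the part."""
        return bool(np.all(box_sdf(gripper_points, part_half_extents) > clearance))

    # Example: two finger sample points tested against a 40 x 60 x 20 mm part.
    finger_points = np.array([[0.03, 0.0, 0.0], [-0.03, 0.0, 0.0]])
    print(grasp_is_collision_free(finger_points, np.array([0.02, 0.03, 0.01])))
    ```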
  • Patent number: 11654564
    Abstract: A grasp generation technique for robotic pick-up of parts. A database of solid or surface models is provided for all objects and grippers which are to be evaluated. A gripper is selected and a random initialization is performed, where random objects and poses are selected from the object database. An iterative optimization computation is then performed, where many hundreds of grasps are computed for each part with surface contact between the part and the gripper, and sampling for grasp diversity and global optimization. Finally, a physical environment simulation is performed, where the grasps for each part are mapped to simulated piles of objects in a bin scenario. The grasp points and approach directions from the physical environment simulation are then used to train neural networks for grasp learning in real-world robotic operations, where the simulation results are correlated to camera depth image data to identify a high quality grasp.
    Type: Grant
    Filed: September 10, 2020
    Date of Patent: May 23, 2023
    Assignee: FANUC CORPORATION
    Inventor: Yongxiang Fan
  • Publication number: 20230124599
    Abstract: A robotic grasp generation technique for machine tending applications. Part and gripper geometry are provided as inputs, typically from CAD files. Gripper kinematics are also defined as an input. Preferred and prohibited grasp locations on the part may also be defined as inputs, to ensure that the computed grasp candidates enable the robot to load the part into a machining station such that the machining station can grasp a particular location on the part. An optimization solver is used to compute a quality grasp with stable surface contact between the part and the gripper, with no interference between the gripper and the part, and allowing for the preferred and prohibited grasp locations which were defined as inputs. All surfaces of the gripper fingers are considered for grasping and collision avoidance. A loop with random initialization is used to automatically compute many hundreds of diverse grasps for the part.
    Type: Application
    Filed: October 15, 2021
    Publication date: April 20, 2023
    Inventor: Yongxiang Fan
  • Publication number: 20220391638
    Abstract: A method for modularizing high dimensional neural networks into neural networks of lower input dimensions. The method is suited to generating full-DOF robot grasping actions based on images of parts to be picked. In one example, a first network encodes grasp positional dimensions and a second network encodes rotational dimensions. The first network is trained to predict a position at which a grasp quality is maximized for any value of the grasp rotations. The second network is trained to identify the maximum grasp quality while searching only at the position from the first network. Thus, the two networks collectively identify an optimal grasp, while each network's searching space is reduced. Many grasp positions and rotations can be evaluated in a search quantity of the sum of the evaluated positions and rotations, rather than the product. Dimensions may be separated in any suitable fashion, including three neural networks in some applications.
    Type: Application
    Filed: June 8, 2021
    Publication date: December 8, 2022
    Inventor: Yongxiang Fan
  • Publication number: 20220388162
    Abstract: A method for modularizing high dimensional neural networks into neural networks of lower input dimensions. The method is suited to generating full-DOF robot grasping actions based on images of parts to be picked. In one example, a first network encodes grasp positional dimensions and a second network encodes rotational dimensions. The first network is trained to predict a position at which a grasp quality is maximized for any value of the grasp rotations. The second network is trained to identify the maximum grasp quality while searching only at the position from the first network. Thus, the two networks collectively identify an optimal grasp, while each network's searching space is reduced. Many grasp positions and rotations can be evaluated in a search quantity of the sum of the evaluated positions and rotations, rather than the product. Dimensions may be separated in any suitable fashion, including three neural networks in some applications.
    Type: Application
    Filed: June 8, 2021
    Publication date: December 8, 2022
    Inventor: Yongxiang Fan
  • Publication number: 20220072707
    Abstract: A grasp generation technique for robotic pick-up of parts. A database of solid or surface models is provided for all objects and grippers which are to be evaluated. A gripper is selected and a random initialization is performed, where random objects and poses are selected from the object database. An iterative optimization computation is then performed, where many hundreds of grasps are computed for each part with surface contact between the part and the gripper, and sampling for grasp diversity and global optimization. Finally, a physical environment simulation is performed, where the grasps for each part are mapped to simulated piles of objects in a bin scenario. The grasp points and approach directions from the physical environment simulation are then used to train neural networks for grasp learning in real-world robotic operations, where the simulation results are correlated to camera depth image data to identify a high quality grasp.
    Type: Application
    Filed: September 10, 2020
    Publication date: March 10, 2022
    Inventor: Yongxiang Fan
  • Patent number: 10556347
    Abstract: A method to automate the teaching process by allowing the robot to compare a live image of the workspace along the desired path with a reference image of a similar workspace associated with a nominal path approximating the desired path. By comparing the reference image to the live image, the robot teaches itself and generates the desired path at the desired location, eliminating human involvement in the process and most of its shortcomings. The present invention also overcomes the problem of site displacement or distortion by monitoring progress along the path with sensors and modifying the path to conform to the desired path.
    Type: Grant
    Filed: November 1, 2017
    Date of Patent: February 11, 2020
    Assignee: Brachium, Inc.
    Inventors: Hadi Akeel, Yongxiang Fan
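    As an illustration of the idea in the abstract above (not the patented method), the sketch below estimates how the live view is displaced relative to the reference image using ORB feature matching and a homography, then maps the nominal path points into the live image. The feature detector, match count, and RANSAC threshold are assumptions.

    ```python
    # Hedged sketch: transfer a nominal path from a reference image to a live image
    # by estimating a homography from feature matches.
    import cv2
    import numpy as np

    def transfer_path(reference_img, live_img, nominal_path_px):
        """nominal_path_px: (N, 2) pixel coordinates of the nominal path in the reference image."""
        orb = cv2.ORB_create(1000)
        kp_ref, des_ref = orb.detectAndCompute(reference_img, None)
        kp_live, des_live = orb.detectAndCompute(live_img, None)

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des_ref, des_live), key=lambda m: m.distance)[:200]

        src = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_live[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

        path = np.float32(nominal_path_px).reshape(-1, 1, 2)
        return cv2.perspectiveTransform(path, H).reshape(-1, 2)
    ```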
  • Publication number: 20190047145
    Abstract: A method to automate the teaching process by allowing the robot to compare a live image of the workspace along the desired path with a reference image of a similar workspace associated with a nominal path approximating the desired path. By comparing the reference image to the live image, the robot teaches itself and generates the desired path at the desired location, eliminating human involvement in the process and most of its shortcomings. The present invention also overcomes the problem of site displacement or distortion by monitoring progress along the path with sensors and modifying the path to conform to the desired path.
    Type: Application
    Filed: November 1, 2017
    Publication date: February 14, 2019
    Applicant: Brachium Labs LLC
    Inventors: Hadi Akeel, Yongxiang Fan