Patents by Inventor Kaimeng Wang
Kaimeng Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12030187
Abstract: A robot system includes: a feature point position detection unit configured to detect, at a constant cycle, a position of a feature point of an obstacle that moves or deforms within a motion range of a robot; a movement path calculation unit configured to calculate a movement path of the robot before a motion of the robot; a mapping function derivation unit configured to derive a mapping function based on a position of the feature point that is detected at a time interval; and a path adjustment unit configured to dynamically adjust the movement path of the robot using the derived mapping function.
Type: Grant
Filed: July 5, 2019
Date of Patent: July 9, 2024
Assignee: FANUC CORPORATION
Inventors: Kaimeng Wang, Wenjie Chen, Tetsuaki Kato
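The abstract does not state the form of the derived mapping function; as a rough illustration of the idea, one can fit a map from the feature-point positions used at plan time to the newly detected positions and apply it to the planned waypoints. The affine form, the least-squares fit, and the numbers below are all assumptions for the sketch:

```python
import numpy as np

def derive_mapping(ref_pts, cur_pts):
    """Fit an affine map (A, b) so that A @ ref + b ~= cur, by least squares.
    The patent does not specify the mapping form; affine is an illustrative choice."""
    ref = np.asarray(ref_pts, dtype=float)
    cur = np.asarray(cur_pts, dtype=float)
    X = np.hstack([ref, np.ones((len(ref), 1))])   # homogeneous coordinates
    M, *_ = np.linalg.lstsq(X, cur, rcond=None)
    A, b = M[:-1].T, M[-1]
    return A, b

def adjust_path(waypoints, A, b):
    """Apply the derived mapping to every waypoint of the planned path."""
    W = np.asarray(waypoints, dtype=float)
    return W @ A.T + b

# Obstacle feature points at plan time vs. the latest detection cycle
ref = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
cur = [[0.1, 0.0], [1.1, 0.0], [0.1, 1.0]]   # obstacle shifted +0.1 in x
A, b = derive_mapping(ref, cur)
new_path = adjust_path([[0.5, 0.5]], A, b)
print(new_path)  # waypoint shifted with the obstacle: [[0.6, 0.5]]
```

Re-running the fit each detection cycle gives the dynamic adjustment described in the abstract.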
-
Patent number: 12017371
Abstract: A method for line matching during image-based visual servoing control of a robot performing a workpiece installation. The method uses a target image from human demonstration and a current image of a robotic execution phase. A plurality of lines are identified in the target and current images, and an initial pairing of target-current lines is defined based on distance and angle. An optimization computation determines image transposes which minimize a cost function formulated to include both direction and distance between target lines and current lines using 2D data in the camera image plane, and constraint equations which relate the lines in the image plane to the 3D workpiece pose. The rotational and translational transposes which minimize the cost function are used to update the line pair matching, and the best line pairs are used to compute a difference signal for controlling robot motion during visual servoing.
Type: Grant
Filed: March 15, 2022
Date of Patent: June 25, 2024
Assignee: FANUC CORPORATION
Inventors: Kaimeng Wang, Yongxiang Fan
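The initial distance-and-angle pairing step can be sketched in a few lines. The cost weights, the midpoint-plus-angle line representation, and the greedy matching below are illustrative assumptions, not the patent's actual formulation:

```python
import math

def line_cost(t, c):
    """Combined direction + distance cost between a target line t and a
    current line c, each given as (midpoint_xy, angle_rad). The 0.01
    weight trading radians against pixels is an assumed value."""
    (tx, ty), ta = t
    (cx, cy), ca = c
    d_ang = abs(math.atan2(math.sin(ta - ca), math.cos(ta - ca)))  # wrapped angle diff
    d_pos = math.hypot(tx - cx, ty - cy)
    return d_ang + 0.01 * d_pos

def pair_lines(target_lines, current_lines):
    """Greedy initial pairing of each target line to the current line
    with minimum combined angle/distance cost."""
    pairs = []
    for i, t in enumerate(target_lines):
        j = min(range(len(current_lines)),
                key=lambda j: line_cost(t, current_lines[j]))
        pairs.append((i, j))
    return pairs

target = [((100, 100), 0.0), ((100, 100), math.pi / 2)]
current = [((105, 98), 0.05), ((102, 103), math.pi / 2 - 0.03)]
print(pair_lines(target, current))  # → [(0, 0), (1, 1)]
```

In the patented method this pairing is then refined by the optimization over rotational and translational transposes; the sketch covers only the initialization.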
-
Publication number: 20240201677
Abstract: A method for teaching a robot to perform an operation including human demonstration using inverse reinforcement learning and a reinforcement learning reward function. A demonstrator performs an operation with contact force and workpiece motion data recorded. The demonstration data is used to train an encoder neural network which captures the human skill, defining a Gaussian distribution of probabilities for a set of states and actions. Encoder and decoder neural networks are then used in live robotic operations, where the decoder is used by a robot controller to compute actions based on force and motion state data from the robot. After each operation, the reward function is computed, with a Kullback-Leibler divergence term which rewards a small difference between human demonstration and robot operation probability curves, and a completion term which rewards a successful operation by the robot. The decoder is trained using reinforcement learning to maximize the reward function.
Type: Application
Filed: December 20, 2022
Publication date: June 20, 2024
Inventors: Kaimeng Wang, Yu Zhao
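The structure of that reward (a KL-divergence term plus a completion term) can be shown with a toy one-dimensional Gaussian in closed form. The 1-D simplification and the weight values are assumptions; the publication describes the same structure over the encoder-defined distributions:

```python
import math

def kl_gaussian(mu_p, var_p, mu_q, var_q):
    """KL(P || Q) between two 1-D Gaussians in closed form."""
    return (math.log(math.sqrt(var_q / var_p))
            + (var_p + (mu_p - mu_q) ** 2) / (2 * var_q) - 0.5)

def reward(human, robot, success, w_kl=1.0, w_done=10.0):
    """Reward with a KL term (small divergence from the demonstrated
    distribution scores well) plus a completion bonus. Weights w_kl and
    w_done are illustrative assumptions."""
    mu_h, var_h = human
    mu_r, var_r = robot
    return -w_kl * kl_gaussian(mu_h, var_h, mu_r, var_r) + (w_done if success else 0.0)

# Identical distributions and a successful run: the KL term vanishes,
# leaving only the completion bonus.
print(reward((0.0, 1.0), (0.0, 1.0), success=True))  # → 10.0
```

Maximizing this reward pushes the decoder both to match the demonstrated behavior distribution and to finish the task.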
-
Publication number: 20240109181
Abstract: A technique for robotic grasp teaching by human demonstration. A human demonstrates a grasp on a workpiece, while a camera provides images of the demonstration which are analyzed to identify a hand pose relative to the workpiece. The hand pose is converted to a plane representing two fingers of a gripper. The hand plane is used to determine a grasp region on the workpiece which corresponds to the human demonstration. The grasp region and the hand pose are used in an optimization computation which is run repeatedly with randomization to generate multiple grasps approximating the demonstration, where each of the optimized grasps is a stable, high quality grasp with gripper-workpiece surface contact. A best one of the generated grasps is then selected and added to a grasp database. The human demonstration may be repeated on different locations of the workpiece to provide multiple different grasps in the database.
Type: Application
Filed: September 23, 2022
Publication date: April 4, 2024
Inventors: Kaimeng Wang, Yongxiang Fan
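The "optimize repeatedly with randomization, then keep the best" pattern can be sketched with a stand-in scorer. The candidate generation, the score (distance to the demonstration only), and all numeric ranges are assumptions; the publication's real scorer also measures stability and gripper-workpiece surface contact:

```python
import random

def generate_grasps(demo_center, demo_angle, n=20, seed=0):
    """Randomize around the demonstrated hand pose to produce candidate
    grasps as (center, approach_angle) pairs. Ranges are illustrative."""
    rng = random.Random(seed)
    return [(demo_center + rng.uniform(-5, 5), demo_angle + rng.uniform(-0.3, 0.3))
            for _ in range(n)]

def grasp_score(grasp, demo_center, demo_angle):
    """Lower is better: stay close to the demonstration. A real scorer
    would additionally evaluate contact area and grasp stability."""
    c, a = grasp
    return abs(c - demo_center) + abs(a - demo_angle)

def best_grasp(demo_center, demo_angle):
    """Select the best of the randomized candidates for the database."""
    cands = generate_grasps(demo_center, demo_angle)
    return min(cands, key=lambda g: grasp_score(g, demo_center, demo_angle))

g = best_grasp(50.0, 0.0)
print(g)  # best candidate stays within the randomization range of the demo
```

Repeating the demonstration at other workpiece locations simply adds further best-of-batch grasps to the database.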
-
Patent number: 11813749
Abstract: A method for teaching a robot to perform an operation based on human demonstration with images from a camera. The method includes a teaching phase where a 2D or 3D camera detects a human hand grasping and moving a workpiece, and images of the hand and workpiece are analyzed to determine a robot gripper pose and positions which equate to the pose and positions of the hand and corresponding pose and positions of the workpiece. Robot programming commands are then generated from the computed gripper pose and position relative to the workpiece pose and position. In a replay phase, the camera identifies workpiece pose and position, and the programming commands cause the robot to move the gripper to pick, move and place the workpiece as demonstrated. A teleoperation mode is also disclosed, where camera images of a human hand are used to control movement of the robot in real time.
Type: Grant
Filed: April 8, 2020
Date of Patent: November 14, 2023
Assignee: FANUC CORPORATION
Inventors: Kaimeng Wang, Tetsuaki Kato
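The key idea of recording the gripper pose *relative to* the workpiece, then re-composing it at replay time, is standard rigid-transform algebra. A minimal 2-D sketch (the 3-D case works identically with 4x4 matrices; all poses below are made-up numbers):

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous 2-D pose matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

# Teaching phase: the grasp is stored relative to the workpiece pose,
# so it can be replayed wherever the workpiece is later found.
hand = se2(2.0, 1.0, 0.0)       # hand/gripper pose seen by the camera
work = se2(1.0, 1.0, 0.0)       # workpiece pose at teaching time
grip_rel = np.linalg.inv(work) @ hand

# Replay phase: the camera finds the workpiece somewhere new; composing
# gives the gripper target for the pick command.
work_new = se2(5.0, 3.0, 0.0)
grip_target = work_new @ grip_rel
print(grip_target[:2, 2])  # → [6. 3.]
```

The same relative-pose bookkeeping underlies the teleoperation mode, except the hand pose stream drives the gripper target continuously.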
-
Publication number: 20230294291
Abstract: A method for line matching during image-based visual servoing control of a robot performing a workpiece installation. The method uses a target image from human demonstration and a current image of a robotic execution phase. A plurality of lines are identified in the target and current images, and an initial pairing of target-current lines is defined based on distance and angle. An optimization computation determines image transposes which minimize a cost function formulated to include both direction and distance between target lines and current lines using 2D data in the camera image plane, and constraint equations which relate the lines in the image plane to the 3D workpiece pose. The rotational and translational transposes which minimize the cost function are used to update the line pair matching, and the best line pairs are used to compute a difference signal for controlling robot motion during visual servoing.
Type: Application
Filed: March 15, 2022
Publication date: September 21, 2023
Inventors: Kaimeng Wang, Yongxiang Fan
-
Patent number: 11712797
Abstract: A method for dual hand detection in robot teaching from human demonstration. A camera image of the demonstrator's hands and workpieces is provided to a first neural network which determines the identity of the left and right hand of the human demonstrator from the image, and also provides cropped sub-images of the identified hands. The first neural network is trained using images in which the left and right hands are pre-identified. The cropped sub-images are then provided to a second neural network which detects the pose of both the left and right hand from the images, where the sub-image for the left hand is horizontally flipped before and after the hand pose detection if the second neural network is trained with right hand images. The hand pose data is converted to robot gripper pose data and used for teaching a robot to perform an operation through human demonstration.
Type: Grant
Filed: September 11, 2020
Date of Patent: August 1, 2023
Assignee: FANUC CORPORATION
Inventors: Kaimeng Wang, Tetsuaki Kato
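The flip-before-and-after trick for reusing a right-hand-only pose network on left hands is compact enough to sketch. The stand-in detector below returns a fixed keypoint purely for demonstration; in the patent it is the second neural network:

```python
import numpy as np

def detect_right_hand_pose(img):
    """Stand-in for a pose network trained on right hands only: returns
    one fixed (x, y) keypoint so the flip logic can be demonstrated."""
    return np.array([[10.0, 4.0]])

def detect_hand_pose(img, is_left):
    """Flip a left-hand crop horizontally before detection, then mirror
    the detected keypoint x-coordinates back into the original frame."""
    w = img.shape[1]
    if is_left:
        img = img[:, ::-1]                 # horizontal flip of the crop
    kps = detect_right_hand_pose(img)
    if is_left:
        kps[:, 0] = (w - 1) - kps[:, 0]    # un-flip x back
    return kps

crop = np.zeros((32, 32))
print(detect_hand_pose(crop, is_left=True)[0])   # x mirrored: 31 - 10 = 21
print(detect_hand_pose(crop, is_left=False)[0])  # right hand passes through
```

Because the geometry of a flipped left hand matches a right hand, one trained network serves both, halving the training data requirement.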
-
Publication number: 20230173660
Abstract: A method for teaching and controlling a robot to perform an operation based on human demonstration with images from a camera. The method includes a demonstration phase where a camera detects a human hand grasping and moving a workpiece to define a rough trajectory of the robotic movement of the workpiece. Line features or other geometric features on the workpiece collected during the demonstration phase are used in an image-based visual servoing (IBVS) approach which refines a final placement position of the workpiece, where the IBVS control takes over the workpiece placement during the final approach by the robot. Moving object detection is used for automatically localizing both object and hand position in 2D image space, and then identifying line features on the workpiece by removing line features belonging to the hand using hand keypoint detection.
Type: Application
Filed: December 6, 2021
Publication date: June 8, 2023
Inventors: Kaimeng Wang, Tetsuaki Kato
-
Publication number: 20230120598
Abstract: A method for teaching a robot to perform an operation based on human demonstration using force and vision sensors. The method includes a vision sensor to detect position and pose of both the human's hand and optionally a workpiece during teaching of an operation such as pick, move and place. The force sensor, located either beneath the workpiece or on a tool, is used to detect force information. Data from the vision and force sensors, along with other optional inputs, are used to teach both motions and state change logic for the operation being taught. Several techniques are disclosed for determining state change logic, such as the transition from approaching to grasping. Techniques for improving motion programming to remove extraneous motions by the hand are also disclosed. Robot programming commands are then generated from the hand position and orientation data, along with the state transitions.
Type: Application
Filed: October 15, 2021
Publication date: April 20, 2023
Inventors: Kaimeng Wang, Tetsuaki Kato
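State change logic of the kind described (e.g. approaching-to-grasping) can be pictured as a small state machine driven by the fused sensor data. The states, thresholds, and trigger signals below are illustrative assumptions, not the publication's actual logic:

```python
def update_state(state, force_z, speed, f_contact=2.0, v_stop=0.01):
    """Toy state machine for pick-move-place teaching: transition from
    'approach' to 'grasp' when the contact force exceeds a threshold,
    and from 'move' to 'place' when the hand slows to a near stop.
    Threshold values f_contact (N) and v_stop (m/s) are assumed."""
    if state == "approach" and force_z > f_contact:
        return "grasp"
    if state == "move" and speed < v_stop:
        return "place"
    return state

print(update_state("approach", force_z=3.5, speed=0.2))   # contact felt → grasp
print(update_state("approach", force_z=0.5, speed=0.2))   # no contact yet → approach
print(update_state("move", force_z=0.0, speed=0.005))     # hand stopped → place
```

The detected transitions segment the demonstration, so each segment can be converted into motion commands plus the matching gripper open/close logic.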
-
Publication number: 20220080581
Abstract: A method for dual arm robot teaching from dual hand detection in human demonstration. A camera image of the demonstrator's hands and workpieces is provided to a first neural network which determines the identity of the left and right hand from the image, and also provides cropped sub-images of the identified hands. The cropped sub-images are provided to a second neural network which detects the poses of both the left and right hand from the images. The dual hand pose data for an entire operation is converted to robot gripper pose data and used for teaching two robot arms to perform the operation on the workpieces, where each hand's motion is assigned to one robot arm. Edge detection from camera images may be used to refine robot motions in order to improve part localization for tasks requiring precision, such as inserting a part into an aperture.
Type: Application
Filed: October 15, 2021
Publication date: March 17, 2022
Inventors: Kaimeng Wang, Tetsuaki Kato
-
Publication number: 20220080580
Abstract: A method for dual hand detection in robot teaching from human demonstration. A camera image of the demonstrator's hands and workpieces is provided to a first neural network which determines the identity of the left and right hand of the human demonstrator from the image, and also provides cropped sub-images of the identified hands. The first neural network is trained using images in which the left and right hands are pre-identified. The cropped sub-images are then provided to a second neural network which detects the pose of both the left and right hand from the images, where the sub-image for the left hand is horizontally flipped before and after the hand pose detection if the second neural network is trained with right hand images. The hand pose data is converted to robot gripper pose data and used for teaching a robot to perform an operation through human demonstration.
Type: Application
Filed: September 11, 2020
Publication date: March 17, 2022
Inventors: Kaimeng Wang, Tetsuaki Kato
-
Patent number: 11207788
Abstract: A hand control apparatus including an extracting unit extracting a grip pattern of an object having a shape closest to that of the object acquired by a shape acquiring unit from a storage unit storing and associating shapes of plural types of objects and grip patterns, a position and posture calculating unit calculating a gripping position and posture of the hand in accordance with the extracted grip pattern, a hand driving unit causing the hand to grip the object based on the calculated gripping position and posture, a determining unit determining if a gripped state of the object is appropriate based on information acquired by at least one of the shape acquiring unit, a force sensor and a tactile sensor, and a gripped state correcting unit correcting at least one of the gripping position and the posture when it is determined that the gripped state of the object is inappropriate.
Type: Grant
Filed: February 4, 2019
Date of Patent: December 28, 2021
Assignee: FANUC CORPORATION
Inventors: Wenjie Chen, Tetsuaki Kato, Kaimeng Wang
-
Publication number: 20210316449
Abstract: A method for teaching a robot to perform an operation based on human demonstration with images from a camera. The method includes a teaching phase where a 2D or 3D camera detects a human hand grasping and moving a workpiece, and images of the hand and workpiece are analyzed to determine a robot gripper pose and positions which equate to the pose and positions of the hand and corresponding pose and positions of the workpiece. Robot programming commands are then generated from the computed gripper pose and position relative to the workpiece pose and position. In a replay phase, the camera identifies workpiece pose and position, and the programming commands cause the robot to move the gripper to pick, move and place the workpiece as demonstrated. A teleoperation mode is also disclosed, where camera images of a human hand are used to control movement of the robot in real time.
Type: Application
Filed: April 8, 2020
Publication date: October 14, 2021
Inventors: Kaimeng Wang, Tetsuaki Kato
-
Patent number: 11130236
Abstract: A robot movement teaching apparatus including a movement path extraction unit configured to process time-varying images of a first workpiece and fingers or arms of a human working on the first workpiece, and thereby extract a movement path of the fingers or arms of the human; a mapping generation unit configured to generate a transform function for transformation from the first workpiece to a second workpiece worked on by a robot, based on feature points of the first workpiece and feature points of the second workpiece; and a movement path generation unit configured to generate a movement path of the robot based on the movement path of the fingers or arms of the human extracted by the movement path extraction unit and based on the transform function generated by the mapping generation unit.
Type: Grant
Filed: March 6, 2019
Date of Patent: September 28, 2021
Assignee: FANUC CORPORATION
Inventors: Wenjie Chen, Kaimeng Wang, Tetsuaki Kato
-
Patent number: 11000949
Abstract: A control device includes a learning control part in which a difference is calculated between a target position and an actual position of a portion detected based on a sensor, and an operation-speed change rate is increased or reduced several times within a maximum value of the operation-speed change rate set for increasing or reducing the operation speed of a robot mechanism unit and within allowance conditions of vibrations occurring at the portion to be controlled; meanwhile, learning is repeated to calculate an updated compensation amount based on the difference and a previous compensation amount previously calculated for suppressing vibrations at each operation-speed change rate, and a convergent compensation amount and a convergent operation-speed change rate are stored after convergence of the compensation amount and the operation-speed change rate.
Type: Grant
Filed: February 21, 2018
Date of Patent: May 11, 2021
Assignee: FANUC CORPORATION
Inventors: Shinichi Washizu, Hajime Suzuki, Kaimeng Wang
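The core "new compensation = previous compensation plus a function of the measured difference" loop is the classic iterative-learning-control update, which is easy to demonstrate on a toy repeatable error. The gain value, the static error model, and the iteration count are illustrative assumptions:

```python
def learning_iteration(prev_comp, error, gain=0.5):
    """One learning update: the new compensation amount is the previous
    compensation plus gain times the measured position error, per sample.
    The gain of 0.5 is an assumed value."""
    return [c + gain * e for c, e in zip(prev_comp, error)]

def converge(target, repeatable_error, iters=30):
    """Repeat the learning cycle until the compensated motion tracks the
    target. 'repeatable_error' stands in for the vibration error that
    recurs identically on each execution of the operation."""
    comp = [0.0] * len(target)
    for _ in range(iters):
        actual = [t + e + c for t, e, c in zip(target, repeatable_error, comp)]
        diff = [t - a for t, a in zip(target, actual)]
        comp = learning_iteration(comp, diff)
    return comp

comp = converge(target=[0.0, 0.0, 0.0], repeatable_error=[0.2, -0.1, 0.05])
print(comp)  # converged compensation approximately cancels the error
```

In the patented device this loop additionally adjusts the operation-speed change rate between repetitions, storing the compensation and speed change rate once both converge.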
-
Patent number: 10994422
Abstract: A robot system that performs desired processing on a processing target object using a processing tool. The robot system includes a robot having an arm tip that holds the processing tool, a position detector that detects a position of the arm tip, and a robot controller that controls an operation of the robot based on a position command and a position feedback detected by the position detector. The robot controller includes an adjustment operation creating unit that, during adjustment of operation parameters for controlling the operation of the robot, acquires an application and an operation area of the robot and automatically creates an adjustment operation corresponding to the acquired application and the operation area, and a parameter adjustment unit that automatically adjusts the operation parameters during execution of the adjustment operation created by the adjustment operation creating unit so that a performance required for the application is satisfied.
Type: Grant
Filed: November 27, 2018
Date of Patent: May 4, 2021
Assignee: FANUC CORPORATION
Inventors: Hajime Suzuki, Shuusuke Watanabe, Kaimeng Wang
-
Patent number: 10814485
Abstract: A device that can prevent a decrease in the efficiency of a manufacturing line. The device includes a shape acquisition section for acquiring a shape of a workpiece; a motion pattern acquisition section for acquiring basic motion patterns including a reference workpiece shape, a reference working position in the reference workpiece shape, and a type of an operation carried out on the reference working position; a similarity determination section for determining whether a shape of the workpiece is similar to the reference workpiece shape; a position determination section for, based on a shape of the workpiece and the reference workpiece shape, determining the working position on the workpiece that corresponds to the reference working position; and a motion-path generation section for, by changing the reference working position to the determined working position, generating a motion path.
Type: Grant
Filed: April 6, 2018
Date of Patent: October 27, 2020
Assignee: FANUC CORPORATION
Inventors: Kaimeng Wang, Wenjie Chen, Kouichirou Hayashi
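Mapping a reference working position onto a similar but differently sized workpiece can be illustrated with bounding-box normalization. This normalization is an assumed stand-in for the patent's shape-based correspondence, and all coordinates are made up:

```python
def map_working_position(ref_bbox, new_bbox, ref_pos):
    """Map a reference working position to the corresponding position on
    a similar (scaled/translated) workpiece by expressing it in each
    part's normalized bounding-box coordinates. Boxes are (x0, y0, x1, y1)."""
    (rx0, ry0, rx1, ry1), (nx0, ny0, nx1, ny1) = ref_bbox, new_bbox
    u = (ref_pos[0] - rx0) / (rx1 - rx0)   # normalized position on reference part
    v = (ref_pos[1] - ry0) / (ry1 - ry0)
    return (nx0 + u * (nx1 - nx0), ny0 + v * (ny1 - ny0))

# A reference working position at the center of a 10x10 part maps to
# the center of a similar 20x20 part.
print(map_working_position((0, 0, 10, 10), (0, 0, 20, 20), (5, 5)))  # → (10.0, 10.0)
```

Replacing the reference working position with the mapped one, as the abstract describes, then yields a motion path for the new workpiece without re-teaching.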
-
Patent number: 10737384
Abstract: A robot system includes a light source, an image capture device, a robot mechanism unit having a target site of position control where the light source is provided, and a robot controller that controls the position of the robot mechanism unit based on a position command, a position feedback, and a position compensation value. The robot controller includes a path acquisition unit that makes the image capture device capture an image of light from the light source continuously during the predetermined operation to acquire a path of the light source from the image capture device, a positional error estimation unit that estimates positional error of the path of the light source from the position command based on the acquired path of the light source and the position command, and a compensation value generation unit that generates the position compensation value based on the estimated positional error.
Type: Grant
Filed: August 13, 2018
Date of Patent: August 11, 2020
Assignee: FANUC CORPORATION
Inventors: Nobuaki Yamaoka, Hajime Suzuki, Kaimeng Wang
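The error-estimation and compensation-generation chain can be sketched per sample. The unity feedback gain and the example paths are assumptions; the patent does not specify how the compensation value is derived from the estimated error:

```python
def estimate_positional_error(command_path, light_path):
    """Per-sample error between the commanded path and the light-source
    path observed by the image capture device."""
    return [(cx - lx, cy - ly)
            for (cx, cy), (lx, ly) in zip(command_path, light_path)]

def generate_compensation(command_path, light_path, gain=1.0):
    """Position compensation values from the estimated error; a unity
    gain simply feeds the error back (the gain is an illustrative knob)."""
    return [(gain * ex, gain * ey)
            for ex, ey in estimate_positional_error(command_path, light_path)]

cmd = [(0, 0), (1, 0), (2, 0)]
seen = [(0, 0.1), (1, 0.15), (2, 0.1)]   # observed path sagged slightly in y
comp = generate_compensation(cmd, seen)
print(comp)  # negative-y compensation pulls the path back up
```

Adding these compensation values to the position command on the next run counteracts the measured deviation.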
-
Patent number: 10646995
Abstract: A control device repeats learning of: calculating an allowable condition for speed variations during a processing operation based on an allowable condition for processing error; setting an operating speed change rate used to increase or reduce an operating speed of a robot mechanism unit using a calculated allowable condition for speed variations; and, while increasing or reducing the operating speed change rate over a plurality of repetitions within a range not exceeding a maximum value of the operating speed change rate and within a range of an allowable condition for vibrations occurring in a control target, calculating a new correction amount based on an amount of difference between a position of the control target detected based on a sensor and a target position, and a previously-calculated correction amount.
Type: Grant
Filed: June 26, 2018
Date of Patent: May 12, 2020
Assignee: FANUC CORPORATION
Inventors: Shinichi Washizu, Hajime Suzuki, Kaimeng Wang
-
Patent number: 10618164
Abstract: A robot system is provided with a robot control device that includes an operation control unit and a learning control unit. The learning control unit performs a learning control in which a vibration correction amount for correcting a vibration generated at a control target portion of a robot is calculated, and the vibration correction amount is applied to the next operation command. The learning control unit includes a plurality of learning control parts for calculating the vibration correction amount and a selection unit that selects one of the plurality of learning control parts on the basis of operation information of the robot when the robot is operated by an operation program that is a target of the learning control.
Type: Grant
Filed: February 9, 2018
Date of Patent: April 14, 2020
Assignee: FANUC CORPORATION
Inventors: Kaimeng Wang, Satoshi Inagaki, Wenjie Chen
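Selecting among several pre-tuned learning control parts based on operation information can be pictured as a lookup keyed on operating conditions. The payload/override criteria, the key layout, and the fallback rule below are illustrative assumptions; the patent selects based on the operation program's information:

```python
def select_learning_part(payload_kg, speed_override, parts):
    """Select one of several pre-tuned learning control parts using the
    robot's operation information. 'parts' maps (max_payload, max_override)
    keys to the learning control part tuned for that regime."""
    for (max_payload, max_override), part in sorted(parts.items()):
        if payload_kg <= max_payload and speed_override <= max_override:
            return part
    return parts[max(parts)]  # fall back to the most conservative tuning

parts = {
    (5, 50):   "part_light_slow",
    (5, 100):  "part_light_fast",
    (20, 100): "part_heavy",
}
print(select_learning_part(3, 80, parts))   # light payload, fast motion
print(select_learning_part(50, 100, parts)) # out of range → fallback
```

Keeping one correction model per operating regime lets each learning control part converge on vibration behavior specific to that regime instead of averaging across all of them.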