Patents by Inventor Dongnian Li

Dongnian Li has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230230312
    Abstract: A digital twin modeling method for a robotic teleoperation assembly environment, including: capturing images of the teleoperation environment; identifying the part being assembled; querying the assembly order to obtain a list of assembled parts according to the part being assembled; generating a three-dimensional model of the current assembly from the list and calculating position and pose information of the current assembly in the image acquisition device coordinate system; loading a three-dimensional model of the robot and determining a coordinate transformation relationship between the robot coordinate system and the image acquisition device coordinate system; determining position and pose information of the robot in the image acquisition device coordinate system from the coordinate transformation relationship; and determining a relative positional relationship between the current assembly and the robot from the position and pose information of the current assembly and the robot in the image acquisition device coordinate system
    Type: Application
    Filed: August 23, 2022
    Publication date: July 20, 2023
    Inventors: Chengjun CHEN, Zhengxu ZHAO, Tianliang HU, Jianhua ZHANG, Yang GUO, Dongnian LI, Qinghai ZHANG, Yuanlin GUAN
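The core geometric step the abstract describes, relating the assembly and the robot through the image acquisition device's coordinate system, can be sketched with homogeneous transforms. The poses below are hypothetical placeholders, not values from the patent:

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical poses expressed in the image-acquisition (camera) coordinate system.
T_cam_assembly = make_pose(np.eye(3), np.array([0.4, 0.1, 0.8]))
T_cam_robot = make_pose(np.eye(3), np.array([0.0, -0.2, 0.9]))

# Relative pose of the current assembly in the robot's coordinate system:
# T_robot_assembly = inv(T_cam_robot) @ T_cam_assembly
T_robot_assembly = np.linalg.inv(T_cam_robot) @ T_cam_assembly
print(T_robot_assembly[:3, 3])  # relative translation, assembly w.r.t. robot
```

With identity rotations, the relative translation is simply the difference of the two camera-frame translations; the same composition handles arbitrary rotations.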
  • Patent number: 11504846
    Abstract: The present invention relates to a robot teaching system based on image segmentation and surface electromyography, and a robot teaching method thereof, comprising an RGB-D camera, a surface electromyography sensor, a robot, and a computer, wherein the RGB-D camera collects video information of robot teaching scenes and sends it to the computer; the surface electromyography sensor acquires surface electromyography signals and inertial acceleration signals of the robot teacher and sends them to the computer; the computer recognizes an articulated arm and a human joint, detects the contact position between the articulated arm and the human joint, further calculates the strength and direction of the force exerted at the contact position after the human joint contacts the articulated arm, and sends a signal controlling the contacted articulated arm to move according to that strength and direction of force, whereby robot teaching is done.
    Type: Grant
    Filed: January 20, 2021
    Date of Patent: November 22, 2022
    Assignee: QINGDAO UNIVERSITY OF TECHNOLOGY
    Inventors: Chengjun Chen, Yong Pan, Dongnian Li, Zhengxu Zhao, Jun Hong
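Two of the steps in this abstract, locating the contact between the segmented arm and the human joint, and mapping sEMG amplitude to a force magnitude, can be sketched as below. The thresholds, gain, and linear sEMG-to-force model are illustrative assumptions, not the patent's actual method:

```python
import numpy as np

def contact_point(arm_mask, joint_xy, radius=5):
    """Return the arm-mask pixel nearest the human joint if within `radius` px, else None.

    arm_mask: boolean HxW segmentation mask of the articulated arm.
    joint_xy: (row, col) of the detected human joint.
    """
    ys, xs = np.nonzero(arm_mask)
    if len(ys) == 0:
        return None
    d = np.hypot(ys - joint_xy[0], xs - joint_xy[1])
    i = int(np.argmin(d))
    return (int(ys[i]), int(xs[i])) if d[i] <= radius else None

def force_from_semg(semg_rms, gain=0.05, rest_level=0.1):
    """Map sEMG RMS amplitude to a force magnitude (hypothetical linear model)."""
    return max(0.0, gain * (semg_rms - rest_level))

# Toy example: one arm pixel near the joint counts as contact.
mask = np.zeros((10, 10), bool)
mask[4, 4] = True
print(contact_point(mask, (5, 5)))  # -> (4, 4)
```

In the full system, the direction of the commanded motion would come from the inertial acceleration signals; here only the contact test and magnitude mapping are shown.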
  • Patent number: 11440179
    Abstract: A system for robot teaching based on RGB-D images and a teach pendant, including an RGB-D camera, a host computer, a posture teach pendant, and an AR teaching system which includes an AR registration card, an AR module, a virtual robot model, a path planning unit, and a posture teaching unit. The RGB-D camera collects RGB images and depth images of a physical working environment in real time. In the path planning unit, path points of a robot end effector are selected, and the 3D coordinates of the path points in the base coordinate system of the virtual robot model are calculated; the posture teaching unit records the received posture data as the posture of the path point where the virtual robot model is located, so that the virtual robot model is driven to move according to the postures and positions of the path points, thereby completing the robot teaching.
    Type: Grant
    Filed: February 28, 2020
    Date of Patent: September 13, 2022
    Assignee: QINGDAO UNIVERSITY OF TECHNOLOGY
    Inventors: Chengjun Chen, Yong Pan, Dongnian Li, Jun Hong
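The path-planning step above amounts to back-projecting a selected pixel (with its depth value) into the camera frame and transforming it into the virtual robot's base frame via the AR registration. A minimal pinhole-camera sketch, with made-up intrinsics and an identity registration transform standing in for the real calibration:

```python
import numpy as np

def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth (meters) into the camera frame (homogeneous)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth, 1.0])

# Hypothetical camera intrinsics and AR-registration transform
# (camera frame -> virtual robot base frame).
fx = fy = 600.0
cx, cy = 320.0, 240.0
T_base_cam = np.eye(4)

# A path point selected at the image center, 0.75 m from the camera.
p_cam = pixel_to_camera(320, 240, 0.75, fx, fy, cx, cy)
p_base = T_base_cam @ p_cam  # 3D path point in the robot base coordinate system
print(p_base[:3])
```

In practice `T_base_cam` would be recovered from the AR registration card, and the depth would be read from the aligned depth image at the clicked pixel.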
  • Publication number: 20220161422
    Abstract: The present invention relates to a robot teaching system based on image segmentation and surface electromyography, and a robot teaching method thereof, comprising an RGB-D camera, a surface electromyography sensor, a robot, and a computer, wherein the RGB-D camera collects video information of robot teaching scenes and sends it to the computer; the surface electromyography sensor acquires surface electromyography signals and inertial acceleration signals of the robot teacher and sends them to the computer; the computer recognizes an articulated arm and a human joint, detects the contact position between the articulated arm and the human joint, further calculates the strength and direction of the force exerted at the contact position after the human joint contacts the articulated arm, and sends a signal controlling the contacted articulated arm to move according to that strength and direction of force, whereby robot teaching is done.
    Type: Application
    Filed: January 20, 2021
    Publication date: May 26, 2022
    Inventors: Chengjun Chen, Yong Pan, Dongnian Li, Zhengxu Zhao, Jun Hong
  • Patent number: 10964025
    Abstract: The present invention relates to an assembly monitoring method based on deep learning, comprising the steps of: creating a training set for a physical assembly body, the training set comprising a depth image set Di and a label image set Li of a 3D assembly body at multiple monitoring angles, wherein i represents an assembly step, the depth image set Di in the ith step corresponds to the label image set Li in the ith step, and in the label images of the label image set Li, different parts of the 3D assembly body are rendered in different colors; training a deep learning network model with the training set; and obtaining, by a depth camera, a physical assembly body depth image C in a physical assembly scene, inputting the physical assembly body depth image C into the deep learning network model, and outputting a pixel segmentation image of the physical assembly body, in which different parts are represented by pixel colors to identify all the parts of the physical assembly body.
    Type: Grant
    Filed: January 10, 2020
    Date of Patent: March 30, 2021
    Assignee: Qingdao University of Technology
    Inventors: Chengjun Chen, Chunlin Zhang, Dongnian Li, Jun Hong
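The color-coded label images described above map each rendered color to one part of the 3D assembly body. A small sketch of reading parts back out of such a label image; the color-to-part table is a hypothetical example, not the patent's actual coding:

```python
import numpy as np

# Hypothetical color coding: each part of the 3D assembly body is rendered
# with a distinct RGB color in the label images.
PART_COLORS = {
    (255, 0, 0): "base_plate",
    (0, 255, 0): "gear",
    (0, 0, 255): "shaft",
}

def parts_in_label_image(label_img):
    """Return the set of part names whose colors appear in an HxWx3 label image."""
    present = set()
    pixels = label_img.reshape(-1, 3)
    for color, name in PART_COLORS.items():
        if (pixels == np.array(color)).all(axis=1).any():
            present.add(name)
    return present

# Toy label image containing two of the three coded parts.
img = np.zeros((4, 4, 3), np.uint8)
img[0, 0] = (255, 0, 0)
img[1, 1] = (0, 0, 255)
print(sorted(parts_in_label_image(img)))  # -> ['base_plate', 'shaft']
```

The same lookup applied to the network's pixel segmentation output would yield the set of parts visible at the current assembly step, which is what makes step-by-step monitoring possible.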
  • Publication number: 20210023694
    Abstract: A system for robot teaching based on RGB-D images and a teach pendant, including an RGB-D camera, a host computer, a posture teach pendant, and an AR teaching system which includes an AR registration card, an AR module, a virtual robot model, a path planning unit, and a posture teaching unit. The RGB-D camera collects RGB images and depth images of a physical working environment in real time. In the path planning unit, path points of a robot end effector are selected, and the 3D coordinates of the path points in the base coordinate system of the virtual robot model are calculated; the posture teaching unit records the received posture data as the posture of the path point where the virtual robot model is located, so that the virtual robot model is driven to move according to the postures and positions of the path points, thereby completing the robot teaching.
    Type: Application
    Filed: February 28, 2020
    Publication date: January 28, 2021
    Inventors: Chengjun CHEN, Yong PAN, Dongnian LI, Jun HONG
  • Publication number: 20200273177
    Abstract: The present invention relates to an assembly monitoring method based on deep learning, comprising the steps of: creating a training set for a physical assembly body, the training set comprising a depth image set Di and a label image set Li of a 3D assembly body at multiple monitoring angles, wherein i represents an assembly step, the depth image set Di in the ith step corresponds to the label image set Li in the ith step, and in the label images of the label image set Li, different parts of the 3D assembly body are rendered in different colors; training a deep learning network model with the training set; and obtaining, by a depth camera, a physical assembly body depth image C in a physical assembly scene, inputting the physical assembly body depth image C into the deep learning network model, and outputting a pixel segmentation image of the physical assembly body, in which different parts are represented by pixel colors to identify all the parts of the physical assembly body.
    Type: Application
    Filed: January 10, 2020
    Publication date: August 27, 2020
    Inventors: Chengjun Chen, Chunlin Zhang, Dongnian Li, Jun Hong