Patents by Inventor Chengjun Chen
Chengjun Chen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250084652
Abstract: A building construction robot is provided. The robot includes a vehicle body with a feeding assembly arranged inside it, and a mounting frame fixedly connected to one end of the vehicle body near the discharge end of the feeding assembly. One end of a vibrating assembly is fixedly connected to one end of the top of the vehicle body, and the other end passes through the middle of the mounting frame. A leveling assembly is arranged at the end of the mounting frame away from the vehicle body, a measuring part is arranged at the top of the mounting frame, and a moving part is arranged at the bottom of the vehicle body.
Type: Application
Filed: May 17, 2023
Publication date: March 13, 2025
Applicant: QINGDAO UNIVERSITY OF TECHNOLOGY
Inventors: Yang Li, Chengjun Chen, Xuefeng Zhang, Guangzheng Wang, Yongqi Wang, Liping Liang, Jianze Liu, Fazhan Yang, Fu'e Ren
-
Patent number: 12243159
Abstract: A digital twin modeling method for a robotic assembly teleoperation environment, including: capturing images of the teleoperation environment; identifying the part being assembled; querying the assembly sequence to obtain a list of assembled parts according to the part being assembled; generating a three-dimensional model of the current assembly from the list and calculating the position and pose of the current assembly in the image acquisition device coordinate system; loading a three-dimensional model of the robot and determining the coordinate transformation between the robot coordinate system and the image acquisition device coordinate system; determining the position and pose of the robot in the image acquisition device coordinate system from that transformation; and determining the relative positional relationship between the current assembly and the robot from their respective positions and poses in the image acquisition device coordinate system…
Type: Grant
Filed: August 23, 2022
Date of Patent: March 4, 2025
Assignees: QINGDAO UNIVERSITY OF TECHNOLOGY, SHANDONG UNIVERSITY, HEBEI UNIVERSITY OF TECHNOLOGY
Inventors: Chengjun Chen, Zhengxu Zhao, Tianliang Hu, Jianhua Zhang, Yang Guo, Dongnian Li, Qinghai Zhang, Yuanlin Guan
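The final step of the abstract above composes two camera-frame poses into a relative pose. A minimal sketch of that composition, assuming 4x4 homogeneous transforms as the pose representation (the matrix form and all variable names are illustrative assumptions, not taken from the patent text):

```python
# Given the poses of the current assembly and of the robot, both expressed
# in the image acquisition device (camera) coordinate system, the pose of
# the assembly relative to the robot is inv(T_cam_robot) * T_cam_assembly.

def mat_mul(a, b):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def rigid_inverse(t):
    """Invert a rigid transform [R|p; 0 0 0 1]: R -> R^T, p -> -R^T p."""
    r = [[t[j][i] for j in range(3)] for i in range(3)]  # R transposed
    p = [-sum(r[i][j] * t[j][3] for j in range(3)) for i in range(3)]
    return [r[0] + [p[0]], r[1] + [p[1]], r[2] + [p[2]], [0.0, 0.0, 0.0, 1.0]]

def relative_pose(t_cam_robot, t_cam_assembly):
    """Pose of the assembly expressed in the robot coordinate system."""
    return mat_mul(rigid_inverse(t_cam_robot), t_cam_assembly)

# Example: robot frame translated by (1, 0, 0) in the camera frame and the
# assembly at (1, 2, 0), so the assembly sits at (0, 2, 0) in the robot frame.
t_robot = [[1.0, 0, 0, 1.0], [0, 1.0, 0, 0], [0, 0, 1.0, 0], [0, 0, 0, 1.0]]
t_asm = [[1.0, 0, 0, 1.0], [0, 1.0, 0, 2.0], [0, 0, 1.0, 0], [0, 0, 0, 1.0]]
rel = relative_pose(t_robot, t_asm)
```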
-
Publication number: 20230230312
Abstract: A digital twin modeling method for a robotic assembly teleoperation environment, including: capturing images of the teleoperation environment; identifying the part being assembled; querying the assembly sequence to obtain a list of assembled parts according to the part being assembled; generating a three-dimensional model of the current assembly from the list and calculating the position and pose of the current assembly in the image acquisition device coordinate system; loading a three-dimensional model of the robot and determining the coordinate transformation between the robot coordinate system and the image acquisition device coordinate system; determining the position and pose of the robot in the image acquisition device coordinate system from that transformation; and determining the relative positional relationship between the current assembly and the robot from their respective positions and poses in the image acquisition device coordinate system…
Type: Application
Filed: August 23, 2022
Publication date: July 20, 2023
Inventors: Chengjun CHEN, Zhengxu ZHAO, Tianliang HU, Jianhua ZHANG, Yang GUO, Dongnian LI, Qinghai ZHANG, Yuanlin GUAN
-
Patent number: 11504846
Abstract: The present invention relates to a robot teaching system based on image segmentation and surface electromyography, and a robot teaching method thereof, comprising an RGB-D camera, a surface electromyography sensor, a robot, and a computer. The RGB-D camera collects video of the robot teaching scene and sends it to the computer; the surface electromyography sensor acquires surface electromyography signals and inertial acceleration signals from the robot teacher and sends them to the computer; the computer recognizes an articulated arm and a human joint, detects the contact position between them, calculates the strength and direction of the force applied at that position after the human joint touches the articulated arm, and sends a signal controlling the contacted articulated arm to move according to that strength and direction, completing the robot teaching.
Type: Grant
Filed: January 20, 2021
Date of Patent: November 22, 2022
Assignee: QINGDAO UNIVERSITY OF TECHNOLOGY
Inventors: Chengjun Chen, Yong Pan, Dongnian Li, Zhengxu Zhao, Jun Hong
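The abstract above derives a force strength from surface electromyography signals. One common way to do this is to take an RMS envelope of the sEMG window and scale it; a minimal sketch under that assumption (the RMS envelope and the linear gain are illustrative choices, not specified by the patent):

```python
# Map a window of sEMG samples to an estimated contact-force magnitude.
import math

def semg_rms(window):
    """Root-mean-square amplitude of one sEMG sample window."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def estimated_force(window, gain=10.0):
    """Assumed linear mapping from sEMG activation level to force (N)."""
    return gain * semg_rms(window)

# A relaxed muscle produces a much weaker envelope than an active one.
quiet = [0.01, -0.02, 0.015, -0.01]
active = [0.4, -0.5, 0.45, -0.42]
```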
-
Patent number: 11440179
Abstract: A system for robot teaching based on RGB-D images and a teach pendant, including an RGB-D camera, a host computer, a posture teach pendant, and an AR teaching system comprising an AR registration card, an AR module, a virtual robot model, a path planning unit, and a posture teaching unit. The RGB-D camera collects RGB images and depth images of the physical working environment in real time. In the path planning unit, path points of the robot end effector are selected and the 3D coordinates of the path points in the base coordinate system of the virtual robot model are calculated; the posture teaching unit records the received posture data as the postures of the path point where the virtual robot model is located, so that the virtual robot model is driven to move according to the postures and positions of the path points, thereby completing the robot teaching.
Type: Grant
Filed: February 28, 2020
Date of Patent: September 13, 2022
Assignee: QINGDAO UNIVERSITY OF TECHNOLOGY
Inventors: Chengjun Chen, Yong Pan, Dongnian Li, Jun Hong
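Computing 3D coordinates for a path point selected in an RGB-D view is commonly done by pinhole back-projection of the pixel with its depth value. A minimal sketch of that step, assuming a standard pinhole camera model (the intrinsic values below are illustrative, not taken from the patent):

```python
# Back-project a selected pixel plus its depth reading into camera-frame XYZ.

def backproject(u, v, depth, fx, fy, cx, cy):
    """Pixel (u, v) with depth (same units as output) -> camera-frame (x, y, z)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Example: a pixel at the principal point maps onto the optical axis.
pt = backproject(320.0, 240.0, 1.5, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
```

A further rigid transform (from the AR registration) would then carry this camera-frame point into the virtual robot's base coordinate system.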
-
Publication number: 20220161422
Abstract: The present invention relates to a robot teaching system based on image segmentation and surface electromyography, and a robot teaching method thereof, comprising an RGB-D camera, a surface electromyography sensor, a robot, and a computer. The RGB-D camera collects video of the robot teaching scene and sends it to the computer; the surface electromyography sensor acquires surface electromyography signals and inertial acceleration signals from the robot teacher and sends them to the computer; the computer recognizes an articulated arm and a human joint, detects the contact position between them, calculates the strength and direction of the force applied at that position after the human joint touches the articulated arm, and sends a signal controlling the contacted articulated arm to move according to that strength and direction, completing the robot teaching.
Type: Application
Filed: January 20, 2021
Publication date: May 26, 2022
Inventors: Chengjun Chen, Yong Pan, Dongnian Li, Zhengxu Zhao, Jun Hong
-
Patent number: 10964025
Abstract: The present invention relates to an assembly monitoring method based on deep learning, comprising the steps of: creating a training set for a physical assembly body, the training set comprising a depth image set Di and a label image set Li of a 3D assembly body at multiple monitoring angles, where i denotes the assembly step, the depth image set Di of the ith step corresponds to the label image set Li of the ith step, and in the label images of Li different parts of the 3D assembly body are rendered in different colors; training a deep learning network model with the training set; and obtaining, with the depth camera, a physical assembly body depth image C in the physical assembly scene, inputting C into the deep learning network model, and outputting a pixel segmentation image of the physical assembly body, in which different parts are represented by pixel colors, thereby identifying all the parts of the physical assembly body.
Type: Grant
Filed: January 10, 2020
Date of Patent: March 30, 2021
Assignee: Qingdao University of Technology
Inventors: Chengjun Chen, Chunlin Zhang, Dongnian Li, Jun Hong
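In the label images described above, each part is rendered in a unique color, so a predicted segmentation image can be decoded back into part identities by color lookup. A minimal sketch of that decoding step (the palette and part names are illustrative assumptions; the patent does not list concrete colors):

```python
# Decode a color-coded segmentation image into the set of parts it contains.
# Each label color identifies one part of the assembly body.

PALETTE = {(255, 0, 0): "base_plate", (0, 255, 0): "gear", (0, 0, 255): "shaft"}

def parts_present(label_image):
    """Return the part names whose label colors occur in an image given
    as a nested list of RGB tuples; unknown colors are ignored."""
    found = set()
    for row in label_image:
        for px in row:
            if px in PALETTE:
                found.add(PALETTE[px])
    return found

# A 2x2 image containing the base plate, the gear, and background pixels.
img = [[(255, 0, 0), (0, 255, 0)], [(0, 255, 0), (0, 0, 0)]]
```

Comparing the decoded part set of step i against the expected part list for step i is one way such a segmentation output could support assembly monitoring.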
-
Publication number: 20210023694
Abstract: A system for robot teaching based on RGB-D images and a teach pendant, including an RGB-D camera, a host computer, a posture teach pendant, and an AR teaching system comprising an AR registration card, an AR module, a virtual robot model, a path planning unit, and a posture teaching unit. The RGB-D camera collects RGB images and depth images of the physical working environment in real time. In the path planning unit, path points of the robot end effector are selected and the 3D coordinates of the path points in the base coordinate system of the virtual robot model are calculated; the posture teaching unit records the received posture data as the postures of the path point where the virtual robot model is located, so that the virtual robot model is driven to move according to the postures and positions of the path points, thereby completing the robot teaching.
Type: Application
Filed: February 28, 2020
Publication date: January 28, 2021
Inventors: Chengjun CHEN, Yong PAN, Dongnian LI, Jun HONG
-
Publication number: 20200273177
Abstract: The present invention relates to an assembly monitoring method based on deep learning, comprising the steps of: creating a training set for a physical assembly body, the training set comprising a depth image set Di and a label image set Li of a 3D assembly body at multiple monitoring angles, where i denotes the assembly step, the depth image set Di of the ith step corresponds to the label image set Li of the ith step, and in the label images of Li different parts of the 3D assembly body are rendered in different colors; training a deep learning network model with the training set; and obtaining, with the depth camera, a physical assembly body depth image C in the physical assembly scene, inputting C into the deep learning network model, and outputting a pixel segmentation image of the physical assembly body, in which different parts are represented by pixel colors, thereby identifying all the parts of the physical assembly body.
Type: Application
Filed: January 10, 2020
Publication date: August 27, 2020
Inventors: Chengjun Chen, Chunlin Zhang, Dongnian Li, Jun Hong
-
Patent number: 6510410
Abstract: A method and apparatus for automatic recognition of tone languages, employing the steps of: converting the words of speech into an electrical signal; generating spectral features from the electrical signal; extracting pitch values from the electrical signal; combining the spectral features and pitch values into acoustic feature vectors; comparing the acoustic feature vectors with prototypes of phonemes in an acoustic prototype database that includes prototypes of toned vowels, to produce labels; and matching the labels to text using a decoder comprising a phonetic vocabulary and a language model database.
Type: Grant
Filed: July 28, 2000
Date of Patent: January 21, 2003
Assignee: International Business Machines Corporation
Inventors: Julian Chengjun Chen, Guo Kang Fu, Hai Ping Li, Li Qin Shen
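The claim chain above hinges on combining per-frame spectral features with pitch values into one acoustic feature vector per frame. A minimal sketch of that combination step (frame sizes, feature dimensions, and the Hz pitch track are illustrative assumptions, not details from the patent):

```python
# Append each frame's pitch value to its spectral feature vector, yielding
# the combined acoustic feature vectors the recognizer would consume.

def combine_features(spectral_frames, pitch_values):
    """One pitch value per spectral frame; returns the augmented vectors."""
    if len(spectral_frames) != len(pitch_values):
        raise ValueError("one pitch value is expected per spectral frame")
    return [frame + [pitch] for frame, pitch in zip(spectral_frames, pitch_values)]

# Two frames of 3 spectral coefficients each, plus a pitch track in Hz.
frames = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]
pitch = [180.0, 220.0]
vectors = combine_features(frames, pitch)
```

Including pitch in the feature vector is what lets the prototype database distinguish toned vowels that are spectrally similar but differ in tone contour.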