Patents by Inventor Hongxia RAO

Hongxia RAO has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11875534
    Abstract: The present invention provides a pose estimation method for an unmanned aerial vehicle based on point, line and plane feature fusion. The method comprises the following steps: S1, extracting an RGB image and a depth map by using an RGB-D camera; S2, constructing a plane parameter space (PPS) according to the depth map, and mapping the depth map from Cartesian space to the PPS; S3, constructing an inverse octree structure, comprising: reversely constructing cell nodes for the PPS according to an octree rule and fitting a Gaussian distribution to each cell node; S4, extracting a plane feature, comprising: extracting the plane feature with a feature extraction algorithm built on the inverse octree; S5, extracting a linear feature, comprising: extracting a linear feature from the RGB image by using the LSD (line segment detector) algorithm.
    Type: Grant
    Filed: December 29, 2022
    Date of Patent: January 16, 2024
    Assignee: GUANGDONG UNIVERSITY OF TECHNOLOGY
    Inventors: Renquan Lu, Jie Tao, Hui Peng, Jianhong Weng, Yong Xu, Hongxia Rao
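
    Patent 11875534 centres on mapping the RGB-D depth map into a plane parameter space (step S2) before plane features are extracted. The sketch below illustrates one plausible reading of that mapping in Python; the camera intrinsics, the finite-difference normal estimation and the (theta, phi, d) parameterization of the PPS are illustrative assumptions, not the patented algorithm.

```python
# Minimal sketch: back-project a depth map, estimate per-pixel normals, and map
# each point into a plane parameter space (PPS). All parameters are assumptions.
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Back-project a depth map (H, W), in metres, to a 3-D point map (H, W, 3)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.dstack([x, y, depth])

def estimate_normals(points):
    """Crude per-pixel normals from finite differences of neighbouring points."""
    dx = np.gradient(points, axis=1)
    dy = np.gradient(points, axis=0)
    n = np.cross(dx, dy)
    n /= np.linalg.norm(n, axis=2, keepdims=True) + 1e-9
    return n

def to_pps(points, normals):
    """Map each 3-D point to PPS coordinates (theta, phi, d): the normal
    direction in spherical angles plus the signed plane distance n . p."""
    theta = np.arccos(np.clip(normals[..., 2], -1.0, 1.0))   # polar angle of the normal
    phi = np.arctan2(normals[..., 1], normals[..., 0])       # azimuth of the normal
    d = np.sum(points * normals, axis=2)                     # plane distance
    return np.dstack([theta, phi, d])
```

    Cells of the resulting PPS could then each be fitted with a Gaussian distribution and merged bottom-up, which is roughly what the inverse octree of step S3 organises before the plane features of step S4 are read out.
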
  • Publication number: 20230281868
    Abstract: The present invention provides a pose estimation method for an unmanned aerial vehicle based on point, line and plane feature fusion. The method comprises the following steps: S1, extracting an RGB image and a depth map by using an RGB-D camera; S2, constructing a plane parameter space (PPS) according to the depth map, and mapping the depth map from Cartesian space to the PPS; S3, constructing an inverse octree structure, comprising: reversely constructing cell nodes for the PPS according to an octree rule and fitting a Gaussian distribution to each cell node; S4, extracting a plane feature, comprising: extracting the plane feature with a feature extraction algorithm built on the inverse octree; S5, extracting a linear feature, comprising: extracting a linear feature from the RGB image by using the LSD (line segment detector) algorithm.
    Type: Application
    Filed: December 29, 2022
    Publication date: September 7, 2023
    Inventors: Renquan LU, Jie TAO, Hui PENG, Jianhong WENG, Yong XU, Hongxia RAO
  • Patent number: 11670070
    Abstract: The present invention discloses a cloud-edge-end cooperative control method of a 5G networked UAV for security rescue, including: an image acquisition step: performing, by a single-chip microcomputer, attitude resolution on data acquired by a detection sensor, to obtain image data; a sparse landmark map building step: performing, by a control platform, front-end feature point matching, local map building and optimization, loop closure detection, and frame resolution on the image data, to generate a sparse landmark map; a three-dimensional dense map building step: generating, by an edge cloud, a three-dimensional dense map based on a key frame pose and key frame observation data of the sparse landmark map; a high-precision semantic map building step: obtaining a high-precision semantic map; and a UAV movement step: adjusting, by a driving mechanism, a pose of the UAV according to the three-dimensional dense map or the high-precision semantic map.
    Type: Grant
    Filed: December 28, 2021
    Date of Patent: June 6, 2023
    Assignee: GUANGDONG UNIVERSITY OF TECHNOLOGY
    Inventors: Renquan Lu, Hongxia Rao, Yong Xu, Yuru Guo, Jie Tao, Chang Liu, Ye Kuang
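
    Patent 11670070 splits the mapping pipeline across an onboard single-chip microcomputer, a control platform and an edge cloud. The sketch below only illustrates that division of labour as plain Python functions; every name, field and return value is a hypothetical placeholder, since the patent does not publish an API.

```python
# Minimal sketch of the cloud-edge-end division of labour described in the
# abstract. All names and data fields here are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class KeyFrame:
    pose: Tuple[float, float, float]                 # estimated key-frame pose
    observations: List[Dict] = field(default_factory=list)

@dataclass
class SparseLandmarkMap:
    keyframes: List[KeyFrame] = field(default_factory=list)

def acquire_image(sensor_data: Dict) -> Dict:
    """Onboard single-chip microcomputer: resolve attitude, package image data."""
    return {"image": sensor_data.get("image"), "attitude": sensor_data.get("imu")}

def build_sparse_map(image_data: Dict) -> SparseLandmarkMap:
    """Control platform: feature matching, local mapping, loop closure detection."""
    kf = KeyFrame(pose=(0.0, 0.0, 0.0), observations=[image_data])
    return SparseLandmarkMap(keyframes=[kf])

def build_dense_map(sparse_map: SparseLandmarkMap) -> Dict:
    """Edge cloud: fuse key-frame poses and observations into a dense 3-D map."""
    return {"dense_points": [kf.pose for kf in sparse_map.keyframes]}

def adjust_uav_pose(current_pose: Tuple, dense_map: Dict) -> Tuple:
    """Driving mechanism: adjust the UAV pose according to the latest map
    (no control law here; this only marks where the adjustment would happen)."""
    return current_pose

# End-to-end flow, following the order of the steps in the abstract.
image = acquire_image({"image": b"...", "imu": (0.0, 0.0, 0.0)})
sparse = build_sparse_map(image)
dense = build_dense_map(sparse)
new_pose = adjust_uav_pose((0.0, 0.0, 1.5), dense)
```
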
  • Patent number: 11634227
    Abstract: A landing tracking control method comprises two stages: a tracking model training stage and an unmanned aerial vehicle real-time tracking stage. The method performs feature extraction with a modified lightweight network, Snet, so that the feature extraction speed is increased to better meet the real-time requirement. Importance weights are allocated to channel information to differentiate and exploit effective features more purposefully, so that the tracking precision is improved. To improve the training effect of the network, the loss function of the RPN network is optimized: the regression precision of the target box is measured with CIOU, the classification loss is adjusted according to CIOU, and the relation between the regression network and the classification network is thereby strengthened.
    Type: Grant
    Filed: January 26, 2022
    Date of Patent: April 25, 2023
    Assignee: GUANGDONG UNIVERSITY OF TECHNOLOGY
    Inventors: Renquan Lu, Yong Xu, Hongxia Rao, Chang Liu, Hui Chen, Yongmin Luo, Hui Peng
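
    Patent 11634227 measures target-box regression with CIOU and reuses the same quantity to adjust the RPN classification loss. The sketch below implements the standard CIoU formula and one plausible way of coupling it to a classification term; the (x1, y1, x2, y2) box format and the exact re-weighting rule are assumptions, not the patented loss.

```python
# Minimal sketch: CIoU between two boxes and a CIoU-weighted classification loss.
import numpy as np

def ciou(box_pred, box_gt, eps=1e-9):
    """Complete IoU between two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_pred
    bx1, by1, bx2, by2 = box_gt
    # intersection over union
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / (union + eps)
    # centre distance, normalised by the diagonal of the enclosing box
    rho2 = ((ax1 + ax2 - bx1 - bx2) / 2) ** 2 + ((ay1 + ay2 - by1 - by2) / 2) ** 2
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c2 = cw ** 2 + ch ** 2 + eps
    # aspect-ratio consistency term
    v = (4 / np.pi ** 2) * (np.arctan((bx2 - bx1) / (by2 - by1 + eps))
                            - np.arctan((ax2 - ax1) / (ay2 - ay1 + eps))) ** 2
    alpha = v / (1 - iou + v + eps)
    return iou - rho2 / c2 - alpha * v

def ciou_weighted_cls_loss(p_pos, box_pred, box_gt, eps=1e-9):
    """Cross-entropy on the positive class, scaled by the CIoU of the predicted
    box, so poorly regressed anchors contribute less to the classification loss."""
    weight = max(ciou(box_pred, box_gt), 0.0)
    return -weight * np.log(p_pos + eps)
```

    Coupling the two terms through CIoU means an anchor whose predicted box barely overlaps the target contributes little to the classification objective, which is one way to read the "relation between the regression network and classification network" that the abstract describes.
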
  • Publication number: 20220332415
    Abstract: A landing tracking control method comprises two stages: a tracking model training stage and an unmanned aerial vehicle real-time tracking stage. The method performs feature extraction with a modified lightweight network, Snet, so that the feature extraction speed is increased to better meet the real-time requirement. Importance weights are allocated to channel information to differentiate and exploit effective features more purposefully, so that the tracking precision is improved. To improve the training effect of the network, the loss function of the RPN network is optimized: the regression precision of the target box is measured with CIOU, the classification loss is adjusted according to CIOU, and the relation between the regression network and the classification network is thereby strengthened.
    Type: Application
    Filed: January 26, 2022
    Publication date: October 20, 2022
    Inventors: Renquan LU, Yong XU, Hongxia RAO, Hui CHEN, Yongmin LUO
  • Patent number: 11450016
    Abstract: A nearshore real-time positioning and mapping method for an unmanned surface vehicle with multiple distance measuring sensors comprises: acquiring predicted pose data of the unmanned surface vehicle from an inertial measurement unit; acquiring radar point cloud data with a laser radar, projecting the radar point cloud data onto a depth map, and retaining ground points and break points on the depth map; dividing the depth map into six sub depth maps, obtaining a feature point set from the curvature of each laser point, and converting all feature point sets of the laser radar into the coordinate frame of the unmanned surface vehicle; obtaining a relative pose transformation matrix of the current unmanned surface vehicle from the radar point cloud data of two adjacent frames; accumulating multiple factors and optimizing the pose of the unmanned surface vehicle in the form of a factor graph; and constructing a three-dimensional point cloud map.
    Type: Grant
    Filed: January 26, 2022
    Date of Patent: September 20, 2022
    Assignee: GUANGDONG UNIVERSITY OF TECHNOLOGY
    Inventors: Renquan Lu, Yong Xu, Zijie Chen, Ming Lin, Hongxia Rao
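
    Patent 11450016 selects lidar feature points by the curvature of each laser point on the projected depth map. The sketch below shows a LOAM-style approximation of that step for a single scan line; the neighbourhood size and the edge/planar thresholds are illustrative assumptions, and the six-way depth-map split and factor-graph optimization are not shown.

```python
# Minimal sketch: curvature of each point in a lidar scan line and a split into
# edge (high-curvature) and planar (low-curvature) feature sets.
import numpy as np

def scan_curvature(ranges, k=5):
    """Curvature of each point in one scan line: squared difference between the
    point's range and the sum of its k neighbours on each side."""
    n = len(ranges)
    curv = np.full(n, np.nan)          # borders (first/last k points) stay unset
    for i in range(k, n - k):
        neighbours = np.concatenate([ranges[i - k:i], ranges[i + 1:i + 1 + k]])
        curv[i] = (neighbours.sum() - 2 * k * ranges[i]) ** 2
    return curv

def select_features(ranges, edge_thresh=1.0, planar_thresh=0.1, k=5):
    """Split one scan line into edge features and planar features; NaN borders
    fail both comparisons and are therefore never selected."""
    curv = scan_curvature(np.asarray(ranges, dtype=float), k)
    edges = np.flatnonzero(curv > edge_thresh)
    planes = np.flatnonzero(curv < planar_thresh)
    return edges, planes
```
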
  • Publication number: 20220207855
    Abstract: The present invention discloses a cloud-edge-end cooperative control method of a 5G networked UAV for security rescue, including: an image acquisition step: performing, by a single-chip microcomputer, attitude resolution on data acquired by a detection sensor, to obtain image data; a sparse landmark map building step: performing, by a control platform, front-end feature point matching, local map building and optimization, loop closure detection, and frame resolution on the image data, to generate a sparse landmark map; a three-dimensional dense map building step: generating, by an edge cloud, a three-dimensional dense map based on a key frame pose and key frame observation data of the sparse landmark map; a high-precision semantic map building step: obtaining a high-precision semantic map; and a UAV movement step: adjusting, by a driving mechanism, a pose of the UAV according to the three-dimensional dense map or the high-precision semantic map.
    Type: Application
    Filed: December 28, 2021
    Publication date: June 30, 2022
    Inventors: Renquan LU, Hongxia RAO, Yong XU, Yuru GUO, Jie TAO