Patents by Inventor Xingang Wu

Xingang Wu has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240120633
    Abstract: An ultra-wideband electromagnetic band gap (EBG) structure includes multiple EBG units in an array. Each EBG unit includes a power plane, a dielectric substrate, and a ground plane stacked from top to bottom. The power plane includes a patch, a coupled complementary split ring resonator (C-CSRR), and a plurality of semi-improved Z-bridge structures. Each edge of the patch is provided with a semi-improved Z-bridge structure, and the C-CSRR sits within the ring formed by these structures. Each semi-improved Z-bridge structure includes a first horizontal branch, a first vertical branch, a second horizontal branch, and a second vertical branch connected in sequence; the second vertical branch is connected to the patch. First horizontal branches of adjacent EBG units are connected to each other. A circuit board including the aforementioned EBG structure is also provided.
    Type: Application
    Filed: December 13, 2023
    Publication date: April 11, 2024
    Inventors: Xingang REN, Shengyang WEI, Yali ZHAO, Guoxing SUN, Jiayu RAO, Gang WANG, Kaikun NIU, Xianliang WU, Zhixiang HUANG, Yingsong LI, Yong PENG
  • Patent number: 11532151
    Abstract: A vision-LiDAR fusion method and system based on deep canonical correlation analysis are provided. The method comprises: synchronously collecting RGB images and point cloud data of a road surface; extracting features from the RGB images to obtain RGB features; performing coordinate system conversion and rasterization on the point cloud data in sequence, and then extracting features to obtain point cloud features; inputting the point cloud features and RGB features simultaneously into a pre-established, trained fusion model to output feature-enhanced fused point cloud features, wherein the fusion model fuses the RGB features into the point cloud features using correlation analysis combined with a deep neural network; and inputting the fused point cloud features into a pre-established object detection network to perform object detection. A similarity calculation matrix is used to fuse the two modal features.
    Type: Grant
    Filed: April 29, 2022
    Date of Patent: December 20, 2022
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Xinyu Zhang, Li Wang, Jun Li, Lijun Zhao, Zhiwei Li, Shiyan Zhang, Lei Yang, Xingang Wu, Hanwen Gao, Lei Zhu, Tianlei Zhang
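The abstract above centers on a similarity calculation matrix that fuses two modal features. The patent text gives no code, so the numpy sketch below illustrates only that fusion step under stated assumptions: cosine similarity plus a softmax over the RGB axis, with function names and feature shapes invented for illustration. The learned DCCA projections that would precede this step in the actual method are omitted.

```python
import numpy as np

def fuse_features(pc_feats, rgb_feats):
    """Fuse RGB features into point cloud features via a similarity matrix.

    pc_feats:  (N, D) point cloud features (e.g. per-voxel descriptors)
    rgb_feats: (M, D) RGB image features (e.g. per-region descriptors)
    Returns (N, D) feature-enhanced point cloud features.
    """
    # Pairwise cosine similarity between the two modalities: (N, M)
    pc_norm = pc_feats / np.linalg.norm(pc_feats, axis=1, keepdims=True)
    rgb_norm = rgb_feats / np.linalg.norm(rgb_feats, axis=1, keepdims=True)
    sim = pc_norm @ rgb_norm.T

    # Softmax over the RGB axis turns similarities into attention weights
    w = np.exp(sim - sim.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)

    # Each point feature is enhanced by the RGB features it correlates with
    return pc_feats + w @ rgb_feats
```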
  • Publication number: 20220366681
    Abstract: A vision-LiDAR fusion method and system based on deep canonical correlation analysis are provided. The method comprises: synchronously collecting RGB images and point cloud data of a road surface; extracting features from the RGB images to obtain RGB features; performing coordinate system conversion and rasterization on the point cloud data in sequence, and then extracting features to obtain point cloud features; inputting the point cloud features and RGB features simultaneously into a pre-established, trained fusion model to output feature-enhanced fused point cloud features, wherein the fusion model fuses the RGB features into the point cloud features using correlation analysis combined with a deep neural network; and inputting the fused point cloud features into a pre-established object detection network to perform object detection. A similarity calculation matrix is used to fuse the two modal features.
    Type: Application
    Filed: April 29, 2022
    Publication date: November 17, 2022
    Applicant: Tsinghua University
    Inventors: Xinyu ZHANG, Li WANG, Jun LI, Lijun ZHAO, Zhiwei LI, Shiyan ZHANG, Lei YANG, Xingang WU, Hanwen GAO, Lei ZHU, Tianlei ZHANG
  • Patent number: 11493937
    Abstract: A takeoff and landing control method of a multimodal air-ground amphibious vehicle includes: receiving dynamic parameters of the vehicle; processing the dynamic parameters with a coupled dynamic model of the vehicle to obtain dynamic control parameters, wherein the coupled dynamic model comprises a motion equation of the vehicle in a touchdown state, and that motion equation is determined by a two-degree-of-freedom suspension dynamic equation and a six-degree-of-freedom motion equation of the vehicle in the touchdown state; and controlling takeoff and landing of the vehicle according to the dynamic control parameters.
    Type: Grant
    Filed: January 17, 2022
    Date of Patent: November 8, 2022
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Xinyu Zhang, Jun Li, Qifan Tan, Jianxi Luo, Huaping Liu, Kangyao Huang, Xingang Wu
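The two-degree-of-freedom suspension dynamic equation referenced in the abstract is, in its standard textbook form, the quarter-car model. The sketch below integrates that standard model with one explicit Euler step; the parameter values, function name, and integration scheme are illustrative assumptions and are not taken from the patent.

```python
import numpy as np

def quarter_car_step(state, z_road, dt, ms=250.0, mu=40.0,
                     ks=16000.0, cs=1000.0, kt=160000.0):
    """One explicit-Euler step of a standard 2-DOF quarter-car model.

    state = (zs, vs, zu, vu): sprung/unsprung mass displacements and
    velocities. z_road is the road height input at this step. Parameters:
    ms/mu masses [kg], ks spring and cs damper [N/m, N*s/m], kt tire
    stiffness [N/m] -- all illustrative values.
    """
    zs, vs, zu, vu = state
    f_susp = ks * (zs - zu) + cs * (vs - vu)   # suspension force
    f_tire = kt * (zu - z_road)                # tire contact force
    a_s = -f_susp / ms                         # sprung mass acceleration
    a_u = (f_susp - f_tire) / mu               # unsprung mass acceleration
    return (zs + vs * dt, vs + a_s * dt,
            zu + vu * dt, vu + a_u * dt)
```

At equilibrium with a flat road the state is unchanged; a step in `z_road` excites the unsprung mass first, as expected of a touchdown event.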
  • Publication number: 20220229448
    Abstract: A takeoff and landing control method of a multimodal air-ground amphibious vehicle includes: receiving dynamic parameters of the vehicle; processing the dynamic parameters with a coupled dynamic model of the vehicle to obtain dynamic control parameters, wherein the coupled dynamic model comprises a motion equation of the vehicle in a touchdown state, and that motion equation is determined by a two-degree-of-freedom suspension dynamic equation and a six-degree-of-freedom motion equation of the vehicle in the touchdown state; and controlling takeoff and landing of the vehicle according to the dynamic control parameters.
    Type: Application
    Filed: January 17, 2022
    Publication date: July 21, 2022
    Applicant: Tsinghua University
    Inventors: Xinyu ZHANG, Jun LI, Qifan TAN, Jianxi LUO, Huaping LIU, Kangyao HUANG, Xingang WU
  • Patent number: 11380089
    Abstract: An all-weather target detection method based on vision and millimeter-wave fusion includes: simultaneously acquiring continuous image data and point cloud data from two types of sensors, a vehicle-mounted camera and a millimeter-wave radar; pre-processing the image data and point cloud data; fusing the pre-processed image data and point cloud data using a pre-established fusion model and outputting a fused feature map; and inputting the fused feature map into a YOLOv5 detection network for detection, with the target detection result output after non-maximum suppression. The method fully fuses millimeter-wave radar echo intensity and distance information with the vehicle-mounted camera images. It analyzes different features of the millimeter-wave radar point cloud and fuses them with the image information using different feature extraction structures and strategies, so that the strengths of the two sensor modalities complement each other.
    Type: Grant
    Filed: December 7, 2021
    Date of Patent: July 5, 2022
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Xinyu Zhang, Jun Li, Zhiwei Li, Huaping Liu, Xingang Wu
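The final step of the pipeline above, non-maximum suppression, is a standard algorithm rather than something specific to this patent; a minimal greedy numpy version is sketched below for reference (box format and threshold are the conventional choices, assumed here).

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression over axis-aligned boxes.

    boxes:  (N, 4) array of [x1, y1, x2, y2]
    scores: (N,) detection confidences
    Returns indices of the kept boxes, highest score first.
    """
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the top-scoring box with the remaining boxes
        x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0.0, x2 - x1) * np.maximum(0.0, y2 - y1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = ((boxes[order[1:], 2] - boxes[order[1:], 0]) *
                 (boxes[order[1:], 3] - boxes[order[1:], 1]))
        iou = inter / (area_i + areas - inter)
        # Drop boxes whose overlap with the kept box exceeds the threshold
        order = order[1:][iou <= iou_thresh]
    return keep
```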
  • Publication number: 20220207868
    Abstract: An all-weather target detection method based on vision and millimeter-wave fusion includes: simultaneously acquiring continuous image data and point cloud data from two types of sensors, a vehicle-mounted camera and a millimeter-wave radar; pre-processing the image data and point cloud data; fusing the pre-processed image data and point cloud data using a pre-established fusion model and outputting a fused feature map; and inputting the fused feature map into a YOLOv5 detection network for detection, with the target detection result output after non-maximum suppression. The method fully fuses millimeter-wave radar echo intensity and distance information with the vehicle-mounted camera images. It analyzes different features of the millimeter-wave radar point cloud and fuses them with the image information using different feature extraction structures and strategies, so that the strengths of the two sensor modalities complement each other.
    Type: Application
    Filed: December 7, 2021
    Publication date: June 30, 2022
    Applicant: Tsinghua University
    Inventors: Xinyu ZHANG, Jun LI, Zhiwei LI, Huaping LIU, Xingang WU
  • Patent number: 11120276
    Abstract: A deep multimodal cross-layer intersecting fusion method, a terminal device, and a storage medium are provided. The method includes: acquiring an RGB image and point cloud data containing lane lines, and pre-processing the RGB image and point cloud data; and inputting the pre-processed RGB image and point cloud data into a pre-constructed, trained semantic segmentation model, and outputting an image segmentation result. The semantic segmentation model implements cross-layer intersecting fusion of the RGB image and point cloud data: a feature of a given layer of one modality is fused with the features of all subsequent layers of the other modality, so that not only similar or proximate features but also dissimilar or non-proximate features are fused, achieving full and comprehensive fusion. All fusion connections are controlled by a learnable parameter.
    Type: Grant
    Filed: May 20, 2021
    Date of Patent: September 14, 2021
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Xinyu Zhang, Zhiwei Li, Huaping Liu, Xingang Wu
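The cross-layer intersecting connectivity described in this abstract, where layer i of one modality receives gated contributions from layers i and later of the other, can be sketched as below. This is an illustrative numpy skeleton under simplifying assumptions: feature maps at all layers are taken to have equal shapes (a real network would resize or project them first), the gates are plain floats standing in for learnable parameters, and additive fusion is assumed.

```python
import numpy as np

def cross_layer_fusion(rgb_layers, pc_layers, alphas):
    """Cross-layer intersecting fusion: layer i of one modality is fused
    with layers i..L-1 of the other, each connection gated by a weight.

    rgb_layers, pc_layers: lists of equally shaped feature maps (a
                           simplification of this sketch).
    alphas[i][k]: gate for the connection from pc layer i+k to rgb
                  layer i; in training these would be learned.
    """
    fused = []
    n = len(rgb_layers)
    for i in range(n):
        out = rgb_layers[i].copy()
        # Add the current and all subsequent layers of the other
        # modality, each scaled by its gate.
        for k, j in enumerate(range(i, n)):
            out += alphas[i][k] * pc_layers[j]
        fused.append(out)
    return fused
```

Because every connection carries its own gate, dissimilar (non-adjacent) layer pairs can still contribute when their gate is driven up during training, which is the property the abstract emphasizes.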