Patents by Inventor Qifan Tan

Qifan Tan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12361693
    Abstract: A method and an apparatus for collaborative, end-to-end, large-model-oriented self-driving trajectory decision-making are provided. The method includes: processing an RGB image from the ego vehicle, an RGB image from a surrounding vehicle, and an RGB image from the road side with a first feature extraction network to obtain a first image feature, a second image feature, and a third image feature, respectively; fusing the first, second, and third image features to obtain an image fusion feature; processing road-side point cloud data to obtain a road-side point cloud feature; processing the image fusion feature and the road-side point cloud feature to obtain a first BEV feature and a second BEV feature; fusing the first and second BEV features to obtain a fused BEV feature; fusing prompt information with the fused BEV feature to obtain text information; and processing the text information to obtain an ego-vehicle trajectory decision-making result. (An illustrative sketch of this kind of multi-source fusion pipeline appears after this listing.)
    Type: Grant
    Filed: November 22, 2024
    Date of Patent: July 15, 2025
    Assignee: Beijing University of Chemical Technology
    Inventors: Zhiwei Li, Zhiyu Zhang, Fengli Lu, Kunfeng Wang, Huaping Liu, Qifan Tan, Wei Zhang, Li Wang, Tianyu Shen, Hui Li, Xiaoming Xie
  • Patent number: 12354375
    Abstract: A multimodal perception decision-making method for autonomous driving based on a large language model includes: acquiring an RGB image and an infrared image of a target area at the current time; processing the RGB image with a target detection model to obtain a predicted bounding box and a corresponding target detection category; processing the infrared image, the predicted bounding box, and the corresponding target detection category with a segmentation model to obtain a target mask image; fusing the RGB image, the target mask image, and the infrared image with a fusion model to obtain a fused feature map; fusing first prompt information representing a user intent, second prompt information representing target detection category priorities, and the fused feature map with a large vision-language model to obtain textual information; and processing the textual information with a large natural language model to obtain a perception decision-making result. (An illustrative sketch of the fusion stage appears after this listing.)
    Type: Grant
    Filed: January 17, 2025
    Date of Patent: July 8, 2025
    Assignee: Beijing University of Chemical Technology
    Inventors: Zhiwei Li, Tingzhen Zhang, Haohan Wu, Weizheng Zhang, Weiye Xiao, Kunfeng Wang, Wei Zhang, Tianyu Shen, Li Wang, Qifan Tan
  • Patent number: 11493937
    Abstract: A takeoff and landing control method of a multimodal air-ground amphibious vehicle includes: receiving dynamic parameters of the multimodal air-ground amphibious vehicle; processing the dynamic parameters with a coupled dynamic model of the vehicle to obtain dynamic control parameters, wherein the coupled dynamic model comprises a motion equation of the vehicle in a touchdown state, and that motion equation is determined by a two-degree-of-freedom suspension dynamic equation and a six-degree-of-freedom motion equation of the vehicle in the touchdown state; and controlling takeoff and landing of the vehicle according to the dynamic control parameters. (An illustrative two-degree-of-freedom suspension sketch appears after this listing.)
    Type: Grant
    Filed: January 17, 2022
    Date of Patent: November 8, 2022
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Xinyu Zhang, Jun Li, Qifan Tan, Jianxi Luo, Huaping Liu, Kangyao Huang, Xingang Wu
  • Publication number: 20220229448
    Abstract: A takeoff and landing control method of a multimodal air-ground amphibious vehicle includes: receiving dynamic parameters of the multimodal air-ground amphibious vehicle; processing the dynamic parameters with a coupled dynamic model of the vehicle to obtain dynamic control parameters, wherein the coupled dynamic model comprises a motion equation of the vehicle in a touchdown state, and that motion equation is determined by a two-degree-of-freedom suspension dynamic equation and a six-degree-of-freedom motion equation of the vehicle in the touchdown state; and controlling takeoff and landing of the vehicle according to the dynamic control parameters.
    Type: Application
    Filed: January 17, 2022
    Publication date: July 21, 2022
    Applicant: Tsinghua University
    Inventors: Xinyu ZHANG, Jun LI, Qifan TAN, Jianxi LUO, Huaping LIU, Kangyao HUANG, Xingang WU
  • Patent number: 11222217
    Abstract: A lane line detection method using a fusion network based on an attention mechanism, and a terminal device, are provided. The method includes: synchronously acquiring natural images and point cloud data of a road surface; and inputting the natural images and the point cloud data into a pre-built and trained fusion network to output a lane line detection result. Time-series frames and an attention mechanism are added to the fusion network to fuse information from the point cloud data and the natural images. Specifically, continuous frames improve detection performance in difficult cases such as missing labels and occluded vehicles; low-dimensional features are concatenated with high-dimensional features via skip connections to recover image detail that is progressively lost as the network deepens; and a decoder module restores the image dimensions to produce the final result. (An illustrative sketch of this kind of fusion network appears after this listing.)
    Type: Grant
    Filed: July 8, 2021
    Date of Patent: January 11, 2022
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Xinyu Zhang, Jun Li, Zhiwei Li, Qifan Tan, Li Wang
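
The sketch below is a minimal, hedged illustration of the kind of multi-source fusion pipeline described in patent 12361693: three RGB views are encoded and fused, a road-side point cloud contributes a second branch, both branches are reduced to BEV-level features, and a prompt embedding conditions a final decision head. It is not the patented implementation; every module, layer size, and the pooled "BEV" representation are simplifications invented for this example.

```python
# Illustrative sketch only (not the patented implementation): a toy PyTorch
# pipeline showing multi-view image fusion, a road-side point-cloud branch,
# BEV-level fusion, and a prompt-conditioned decision head. All module names,
# sizes, and the final "trajectory decision" head are hypothetical.
import torch
import torch.nn as nn

class ImageEncoder(nn.Module):
    """Shared CNN backbone applied to each RGB view (ego, surrounding, road-side)."""
    def __init__(self, out_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((16, 16)),
        )
    def forward(self, x):
        return self.net(x)  # (B, out_dim, 16, 16)

class ToyCollaborativePipeline(nn.Module):
    def __init__(self, feat_dim=64, bev_dim=64, prompt_dim=32, num_decisions=5):
        super().__init__()
        self.img_enc = ImageEncoder(feat_dim)
        self.img_fuse = nn.Conv2d(3 * feat_dim, feat_dim, 1)        # fuse the three image features
        self.pc_enc = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),    # per-point MLP for road-side LiDAR
                                    nn.Linear(64, feat_dim))
        self.to_bev_img = nn.Conv2d(feat_dim, bev_dim, 1)           # "first BEV feature" (from images)
        self.to_bev_pc = nn.Linear(feat_dim, bev_dim)               # "second BEV feature" (from points)
        self.bev_fuse = nn.Linear(2 * bev_dim, bev_dim)
        self.decision_head = nn.Sequential(                         # prompt + fused BEV -> decision logits
            nn.Linear(bev_dim + prompt_dim, 128), nn.ReLU(),
            nn.Linear(128, num_decisions))

    def forward(self, ego_img, surround_img, roadside_img, roadside_points, prompt_emb):
        f1, f2, f3 = (self.img_enc(x) for x in (ego_img, surround_img, roadside_img))
        img_fused = self.img_fuse(torch.cat([f1, f2, f3], dim=1))       # image fusion feature
        bev1 = self.to_bev_img(img_fused).mean(dim=(2, 3))              # pooled image-side BEV feature
        pc_feat = self.pc_enc(roadside_points).max(dim=1).values        # road-side point-cloud feature
        bev2 = self.to_bev_pc(pc_feat)
        bev = self.bev_fuse(torch.cat([bev1, bev2], dim=-1))            # fused BEV feature
        return self.decision_head(torch.cat([bev, prompt_emb], dim=-1)) # decision logits

if __name__ == "__main__":
    model = ToyCollaborativePipeline()
    imgs = [torch.randn(2, 3, 128, 128) for _ in range(3)]
    points = torch.randn(2, 1024, 3)           # (B, N, xyz)
    prompt = torch.randn(2, 32)                # stand-in for an embedded text prompt
    print(model(*imgs, points, prompt).shape)  # torch.Size([2, 5])
```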
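A similarly hedged sketch of the fusion stage from patent 12354375, assuming the detection, segmentation, vision-language, and language-model stages are replaced by simple stand-ins: predicted boxes are rasterized into a target-mask channel, the RGB, mask, and infrared channels are fused by a small CNN, and two prompt embeddings (user intent, category priorities) condition a toy decision head. All names and dimensions are hypothetical.

```python
# Illustrative sketch only (not the patented method): a toy version of the
# fusion stage. An RGB image, a target-mask image derived from predicted
# bounding boxes, and an infrared image are stacked and fused by a small CNN,
# then combined with two prompt embeddings to produce a toy "decision" output.
import torch
import torch.nn as nn

def boxes_to_mask(boxes, height, width):
    """Rasterize predicted boxes (x1, y1, x2, y2) into a single-channel mask.
    Stand-in for the segmentation model's target-mask image."""
    mask = torch.zeros(1, height, width)
    for x1, y1, x2, y2 in boxes:
        mask[0, int(y1):int(y2), int(x1):int(x2)] = 1.0
    return mask

class ToyFusionDecision(nn.Module):
    def __init__(self, prompt_dim=32, num_decisions=4):
        super().__init__()
        # RGB (3) + mask (1) + infrared (1) = 5 input channels
        self.fuse = nn.Sequential(
            nn.Conv2d(5, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Stand-in for "VLM produces text, LLM decides": a plain MLP head
        self.head = nn.Sequential(
            nn.Linear(64 + 2 * prompt_dim, 128), nn.ReLU(),
            nn.Linear(128, num_decisions),
        )

    def forward(self, rgb, mask, infrared, intent_prompt, priority_prompt):
        fused = self.fuse(torch.cat([rgb, mask, infrared], dim=1))  # fused feature map -> vector
        ctx = torch.cat([fused, intent_prompt, priority_prompt], dim=-1)
        return self.head(ctx)

if __name__ == "__main__":
    H = W = 128
    rgb = torch.rand(1, 3, H, W)
    infrared = torch.rand(1, 1, H, W)
    mask = boxes_to_mask([(10, 10, 60, 60), (70, 30, 120, 90)], H, W).unsqueeze(0)
    intent, priority = torch.randn(1, 32), torch.randn(1, 32)
    print(ToyFusionDecision()(rgb, mask, infrared, intent, priority).shape)  # torch.Size([1, 4])
```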
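For the takeoff-and-landing patent 11493937 (and its published application 20220229448), the sketch below shows only a textbook two-degree-of-freedom quarter-car suspension model integrated by forward Euler, as a stand-in for the suspension part of the coupled dynamic model. The six-degree-of-freedom body motion equations and the touchdown coupling are omitted, and all parameter values are invented for the example.

```python
# Illustrative sketch only, not the patented coupled model: a standard
# two-degree-of-freedom suspension model (sprung mass z_s, unsprung mass z_u)
# integrated with a simple forward-Euler loop and a step road/landing input.
import numpy as np

# hypothetical parameters (SI units)
m_s, m_u = 300.0, 40.0          # sprung / unsprung masses (kg)
k_s, c_s = 20_000.0, 1_500.0    # suspension stiffness (N/m) and damping (N*s/m)
k_t = 180_000.0                 # tire stiffness (N/m)

def derivatives(state, z_road):
    """State = [z_s, z_s_dot, z_u, z_u_dot]; classic 2-DOF quarter-car ODEs."""
    z_s, v_s, z_u, v_u = state
    f_susp = k_s * (z_u - z_s) + c_s * (v_u - v_s)   # suspension force on the sprung mass
    f_tire = k_t * (z_road - z_u)                    # tire force on the unsprung mass
    a_s = f_susp / m_s
    a_u = (f_tire - f_susp) / m_u
    return np.array([v_s, a_s, v_u, a_u])

def simulate(t_end=2.0, dt=1e-4):
    state = np.zeros(4)
    history = []
    for step in range(int(t_end / dt)):
        t = step * dt
        z_road = 0.05 if t > 0.5 else 0.0            # 5 cm step input at t = 0.5 s (touchdown-like bump)
        state = state + dt * derivatives(state, z_road)
        history.append(state.copy())
    return np.array(history)

if __name__ == "__main__":
    traj = simulate()
    print("peak sprung-mass displacement: %.3f m" % traj[:, 0].max())
```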
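Finally, a toy version of the kind of camera/LiDAR fusion network described in patent 11222217: a projected point-cloud depth map is concatenated with the RGB image, a squeeze-and-excitation-style channel gate stands in for the attention mechanism, a skip connection carries low-level detail to the decoder, and a transposed convolution restores the input resolution. The temporal (continuous-frame) component is omitted and all layer sizes are hypothetical.

```python
# Illustrative sketch only (not the patented network): a toy encoder-decoder
# for lane-line segmentation fusing RGB with a projected point-cloud depth map.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style gate, used here as a stand-in attention fusion."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )
    def forward(self, x):
        w = self.gate(x).unsqueeze(-1).unsqueeze(-1)
        return x * w

class ToyLaneNet(nn.Module):
    def __init__(self):
        super().__init__()
        # inputs: RGB (3 ch) + point-cloud depth projection (1 ch) = 4 ch
        self.enc1 = nn.Sequential(nn.Conv2d(4, 32, 3, padding=1), nn.ReLU())             # full resolution
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())  # 1/2 resolution
        self.attn = ChannelAttention(64)
        self.dec = nn.Sequential(nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU())     # decoder: restore size
        self.head = nn.Conv2d(64, 1, 1)   # 32 (decoder) + 32 (skip) channels -> lane mask logits

    def forward(self, rgb, depth):
        x = torch.cat([rgb, depth], dim=1)
        low = self.enc1(x)                    # low-level detail kept for the skip connection
        high = self.attn(self.enc2(low))      # attention-reweighted high-level features
        up = self.dec(high)
        fused = torch.cat([up, low], dim=1)   # skip connection: stitch low- and high-level features
        return self.head(fused)               # per-pixel lane logits at input resolution

if __name__ == "__main__":
    rgb, depth = torch.rand(1, 3, 64, 128), torch.rand(1, 1, 64, 128)
    print(ToyLaneNet()(rgb, depth).shape)  # torch.Size([1, 1, 64, 128])
```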