Patents by Inventor Huaping Liu

Huaping Liu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11974731
    Abstract: The invention proposes an automatic throat swab sampling system arranged on a working platform. The system comprises a unit that automatically checks the identity information of the person to be tested and prompts for sample collection; a head positioning unit for the person to be tested; a navigation positioning unit that enters the person's oral cavity with the tip of the sample collection execution unit to collect a throat image and determine the depth between a depth sensor and the person's throat; a sample collection execution unit comprising a three-degree-of-freedom guide-rail device and a one-degree-of-freedom end-effector mechanism; an automatic throat swab loading and unloading unit; and a remote monitoring unit.
    Type: Grant
    Filed: May 20, 2021
    Date of Patent: May 7, 2024
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Fuchun Sun, Bin Fang, Huaping Liu
  • Patent number: 11976348
    Abstract: The present invention relates to a carbide tool cleaning and coating production line and method. The line includes a cleaning device comprising a support frame, beneath which a cleaning mechanism and a drying mechanism are sequentially disposed; the support frame is connected to a moving mechanism, which is connected to a lifting mechanism that can carry a tool fixture bracket configured to accommodate the tool fixtures. It further includes a coating device comprising a coating chamber in which a planar target mechanism and a turntable assembly are disposed; the turntable assembly can carry a plurality of tool fixtures that, driven by the turntable assembly, revolve around the axis of the coating chamber while simultaneously rotating around their own axes. A manipulator is disposed between the cleaning device and the coating device.
    Type: Grant
    Filed: June 7, 2021
    Date of Patent: May 7, 2024
    Assignees: QINGDAO UNIVERSITY OF TECHNOLOGY, NINGBO SANHAN ALLOY MATERIAL CO., LTD.
    Inventors: Yanbin Zhang, Liang Luo, Lizhi Tang, Changhe Li, Weixi Ji, Binhui Wan, Shuo Yin, Huajun Cao, Bingheng Lu, Xin Cui, Mingzheng Liu, Teng Gao, Jie Xu, Huiming Luo, Haizhou Xu, Min Yang, Huaping Hong, Xiaoming Wang, Yuying Yang, Haogang Li, Wuxing Ma, Shuai Chen
  • Patent number: 11951618
    Abstract: A multi-procedure integrated automatic production line for hard alloy blades under robot control is provided. The production line includes a rail-guided robot. A cutter passivation device and a blade cleaning and drying device are arranged on one side of the rail-guided robot. A blade-coating transfer table, a blade coating device, a blade boxing transfer table, a blade-tooling dismounting device and a blade boxing device are sequentially arranged on the other side of the rail-guided robot. The blade-tooling dismounting device is arranged on one side of the blade boxing transfer table. The production line further includes squirrel-cage toolings for carrying the blades. The squirrel-cage toolings that are loaded with the blades can run among the cutter passivation device, the blade cleaning and drying device, the blade-coating transfer table and the blade boxing transfer table. After being processed by the blade-tooling dismounting device, the blades are sent to the blade boxing device.
    Type: Grant
    Filed: August 17, 2021
    Date of Patent: April 9, 2024
    Assignees: Qingdao University of Technology, Ningbo Sanhan Alloy Material Co., Ltd.
    Inventors: Changhe Li, Teng Gao, Liang Luo, Lizhi Tang, Yanbin Zhang, Weixi Ji, Binhui Wan, Shuo Yin, Huajun Cao, Bingheng Lu, Xin Cui, Mingzheng Liu, Jie Xu, Huiming Luo, Haizhou Xu, Min Yang, Huaping Hong, Yuying Yang, Haogang Li, Wuxing Ma, Shuai Chen
  • Publication number: 20220414856
    Abstract: The present invention proposes a fabric defect detection method based on multi-modal deep learning. First, a tactile sensor is placed onto the fabric surface with different defects to collect the fabric texture images, a camera is used to collect the corresponding fabric external images, and a fabric external image and a fabric texture image constitute a set of fabric detection data; then, a feature extraction network and a multi-modal fusion network are connected to establish a classification model based on multi-modal deep learning, which uses the fabric texture image and fabric external image in each set of collected fabric detection data as input, and the fabric defect as output; said classification model is trained using the collected fabric detection data; finally, the trained classification model is used to detect the fabric defect. The present invention employs vision-touch complementary information, which can greatly improve the accuracy and robustness of detection.
    Type: Application
    Filed: August 26, 2020
    Publication date: December 29, 2022
    Inventors: Fuchun Sun, Bin Fang, Huaping Liu
  • Patent number: 11493937
    Abstract: A takeoff and landing control method of a multimodal air-ground amphibious vehicle includes: receiving dynamic parameters of the multimodal air-ground amphibious vehicle; processing the dynamic parameters by a coupled dynamic model of the multimodal air-ground amphibious vehicle to obtain dynamic control parameters of the multimodal air-ground amphibious vehicle, wherein the coupled dynamic model of the multimodal air-ground amphibious vehicle comprises a motion equation of the multimodal air-ground amphibious vehicle in a touchdown state; and the motion equation of the multimodal air-ground amphibious vehicle in a touchdown state is determined by a two-degree-of-freedom suspension dynamic equation and a six-degree-of-freedom motion equation of the multimodal air-ground amphibious vehicle in the touchdown state; and controlling takeoff and landing of the multimodal air-ground amphibious vehicle according to the dynamic control parameters of the multimodal air-ground amphibious vehicle.
    Type: Grant
    Filed: January 17, 2022
    Date of Patent: November 8, 2022
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Xinyu Zhang, Jun Li, Qifan Tan, Jianxi Luo, Huaping Liu, Kangyao Huang, Xingang Wu
  • Patent number: 11397242
    Abstract: A 3D object detection method based on multi-view feature fusion of 4D RaDAR and LiDAR point clouds includes simultaneously acquiring RaDAR point cloud data and LiDAR point cloud data; and inputting the RaDAR point cloud data and the LiDAR point cloud data into a pre-established and trained RaDAR and LiDAR fusion network and outputting a 3D object detection result, wherein the RaDAR and LiDAR fusion network is configured to learn interaction information of a LiDAR and a RaDAR from a bird's eye view and a perspective view, respectively, and concatenate the interaction information to achieve fusion of the RaDAR point cloud data and the LiDAR point cloud data. The method can combine the advantages of RaDAR and LiDAR while avoiding the disadvantages of the two modalities as much as possible, to obtain a better 3D object detection result.
    Type: Grant
    Filed: December 31, 2021
    Date of Patent: July 26, 2022
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Xinyu Zhang, Jun Li, Li Wang, Jianxi Luo, Huaping Liu, Yuchao Liu
  • Publication number: 20220229448
    Abstract: A takeoff and landing control method of a multimodal air-ground amphibious vehicle includes: receiving dynamic parameters of the multimodal air-ground amphibious vehicle; processing the dynamic parameters by a coupled dynamic model of the multimodal air-ground amphibious vehicle to obtain dynamic control parameters of the multimodal air-ground amphibious vehicle, wherein the coupled dynamic model of the multimodal air-ground amphibious vehicle comprises a motion equation of the multimodal air-ground amphibious vehicle in a touchdown state; and the motion equation of the multimodal air-ground amphibious vehicle in a touchdown state is determined by a two-degree-of-freedom suspension dynamic equation and a six-degree-of-freedom motion equation of the multimodal air-ground amphibious vehicle in the touchdown state; and controlling takeoff and landing of the multimodal air-ground amphibious vehicle according to the dynamic control parameters of the multimodal air-ground amphibious vehicle.
    Type: Application
    Filed: January 17, 2022
    Publication date: July 21, 2022
    Applicant: Tsinghua University
    Inventors: Xinyu ZHANG, Jun LI, Qifan TAN, Jianxi LUO, Huaping LIU, Kangyao HUANG, Xingang WU
  • Patent number: 11380089
    Abstract: An all-weather target detection method based on vision and millimeter wave fusion includes: simultaneously acquiring continuous image data and point cloud data using two types of sensors, a vehicle-mounted camera and a millimeter wave radar; pre-processing the image data and point cloud data; fusing the pre-processed image data and point cloud data by using a pre-established fusion model, and outputting a fused feature map; and inputting the fused feature map into a YOLOv5 detection network for detection, and outputting a target detection result by non-maximum suppression. The method fully fuses millimeter wave radar echo intensity and distance information with the vehicle-mounted camera images. It analyzes different features of the millimeter wave radar point cloud and fuses them with image information by using different feature extraction structures and methods, so that the advantages of the two types of sensor data complement each other.
    Type: Grant
    Filed: December 7, 2021
    Date of Patent: July 5, 2022
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Xinyu Zhang, Jun Li, Zhiwei Li, Huaping Liu, Xingang Wu
  • Publication number: 20220207868
    Abstract: An all-weather target detection method based on vision and millimeter wave fusion includes: simultaneously acquiring continuous image data and point cloud data using two types of sensors, a vehicle-mounted camera and a millimeter wave radar; pre-processing the image data and point cloud data; fusing the pre-processed image data and point cloud data by using a pre-established fusion model, and outputting a fused feature map; and inputting the fused feature map into a YOLOv5 detection network for detection, and outputting a target detection result by non-maximum suppression. The method fully fuses millimeter wave radar echo intensity and distance information with the vehicle-mounted camera images. It analyzes different features of the millimeter wave radar point cloud and fuses them with image information by using different feature extraction structures and methods, so that the advantages of the two types of sensor data complement each other.
    Type: Application
    Filed: December 7, 2021
    Publication date: June 30, 2022
    Applicant: Tsinghua University
    Inventors: Xinyu ZHANG, Jun LI, Zhiwei LI, Huaping LIU, Xingang WU
  • Patent number: 11249174
    Abstract: An automatic calibration method and system for the spatial positions of a laser radar and a camera sensor are provided. The method includes: adjusting the spatial position of the laser radar relative to the camera sensor to obtain a plurality of spatial position relationships between the laser radar and the camera sensor; for each spatial position relationship, taking the gray value of each projected laser radar point that conforms to line features as a score, and accumulating the scores of all laser radar points into a total score; traversing all the spatial position relationships to obtain a plurality of total scores; and selecting the spatial position relationship corresponding to the highest total score to serve as the calibrated position relationship between the laser radar and the camera sensor.
    Type: Grant
    Filed: June 2, 2021
    Date of Patent: February 15, 2022
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Xinyu Zhang, Huaping Liu
  • Publication number: 20220026547
    Abstract: An automatic calibration method and system for the spatial positions of a laser radar and a camera sensor are provided. The method includes: adjusting the spatial position of the laser radar relative to the camera sensor to obtain a plurality of spatial position relationships between the laser radar and the camera sensor; for each spatial position relationship, taking the gray value of each projected laser radar point that conforms to line features as a score, and accumulating the scores of all laser radar points into a total score; traversing all the spatial position relationships to obtain a plurality of total scores; and selecting the spatial position relationship corresponding to the highest total score to serve as the calibrated position relationship between the laser radar and the camera sensor.
    Type: Application
    Filed: June 2, 2021
    Publication date: January 27, 2022
    Applicant: Tsinghua University
    Inventors: Xinyu ZHANG, Huaping LIU
  • Publication number: 20210369252
    Abstract: The invention proposes an automatic throat swab sampling system arranged on a working platform. The system comprises a unit that automatically checks the identity information of the person to be tested and prompts for sample collection; a head positioning unit for the person to be tested; a navigation positioning unit that enters the person's oral cavity with the tip of the sample collection execution unit to collect a throat image and determine the depth between a depth sensor and the person's throat; a sample collection execution unit comprising a three-degree-of-freedom guide-rail device and a one-degree-of-freedom end-effector mechanism; an automatic throat swab loading and unloading unit; and a remote monitoring unit.
    Type: Application
    Filed: May 20, 2021
    Publication date: December 2, 2021
    Inventors: Fuchun Sun, Bin Fang, Huaping Liu
  • Publication number: 20210321875
    Abstract: The invention relates to a real-time monitoring and recognition system based on bracelet temperature measurement and positioning. The system comprises an ID card reader; multiple smart bracelets; multiple wireless node devices mounted at intervals in crowd-gathering locations to collect sensing information from all nearby smart bracelets; and a cloud data processing unit that acquires the temperature, position and motion-state information of each wearer and, through big-data analysis of that information, intelligently identifies the real-time body-temperature health state, motion state, regional activity state, accompanying aggregation state and the like of individuals and groups. Each smart bracelet comprises an in-wrist temperature measurement sensor, a motion sensor, a Bluetooth sensor, a display lamp, a battery, a battery charging port, a vibration alarm and an embedded chip processor, all arranged in a waterproof wristband. The system is particularly suitable for individual and group monitoring.
    Type: Application
    Filed: May 21, 2021
    Publication date: October 21, 2021
    Inventors: Bin Fang, Fuchun Sun, Huaping Liu
  • Patent number: 11120276
    Abstract: A deep multimodal cross-layer intersecting fusion method, a terminal device and a storage medium are provided. The method includes: acquiring an RGB image and point cloud data containing lane lines, and pre-processing the RGB image and point cloud data; and inputting the pre-processed RGB image and point cloud data into a pre-constructed and trained semantic segmentation model, and outputting an image segmentation result. The semantic segmentation model is configured to implement cross-layer intersecting fusion of the RGB image and point cloud data. In the new method, a feature of a current layer of a current modality is fused with features of all subsequent layers of another modality, such that not only can similar or proximate features be fused, but also dissimilar or non-proximate features can be fused, thereby achieving full and comprehensive fusion of features. All fusion connections are controlled by a learnable parameter.
    Type: Grant
    Filed: May 20, 2021
    Date of Patent: September 14, 2021
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Xinyu Zhang, Zhiwei Li, Huaping Liu, Xingang Wu
  • Patent number: 11099275
    Abstract: A LiDAR point cloud reflection intensity complementation method includes: acquiring a grayscale image and an original point cloud of a same road surface using a calibrated vehicle-mounted webcam and LiDAR; extracting edge information of the grayscale image using a preset edge extraction strategy, to obtain an edge image of the grayscale image; preprocessing the original point cloud to obtain an original point cloud reflection intensity projection image and an interpolated complementary point cloud reflection intensity projection image; and inputting the grayscale image, the edge image of the grayscale image, the original point cloud reflection intensity projection image and the interpolated complementary point cloud reflection intensity projection image into a pre-trained point cloud reflection intensity complementation model, and outputting a complementary point cloud reflection intensity projection image. A LiDAR point cloud reflection intensity complementation system is further provided.
    Type: Grant
    Filed: March 26, 2021
    Date of Patent: August 24, 2021
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Xinyu Zhang, Zhiwei Li, Huaping Liu, Yiqian Lu, Yuchao Liu
  • Patent number: 11002859
    Abstract: An intelligent vehicle positioning method based on feature point calibration is provided. The intelligent vehicle positioning method includes: determining whether an intelligent vehicle is located in a blind area or a non-blind area; when the intelligent vehicle is located in a GNSS non-blind area, combining GNSS signals, odometer data and inertial measurement unit data, acquiring a current pose of the intelligent vehicle through a Kalman filtering, scanning a surrounding environment of the intelligent vehicle by using a laser radar, extracting corner points and arc features and performing processing to obtain the corner points and circle centers as the feature points, calculating global coordinates and weights of the feature points and storing the global coordinates and the weights of the feature points in a current feature point list.
    Type: Grant
    Filed: November 5, 2020
    Date of Patent: May 11, 2021
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Xinyu Zhang, Jun Li, Shichun Guo, Huaping Liu, Mo Zhou, Wenju Gao
  • Patent number: 10929694
    Abstract: A lane detection method based on vision and lidar multi-level fusion includes: calibrating obtained point cloud data and an obtained video image; constructing a point cloud clustering model by fusing height information, reflection intensity information of the point cloud data, and RGB information of the video image, obtaining point clouds of a road based on the point cloud clustering model, and obtaining a lane surface as a first lane candidate region by performing least square fitting on the point clouds; obtaining four-channel road information by fusing the reflection intensity information of the point cloud data and the RGB information of the video image, inputting the four-channel road information into the semantic segmentation network 3D-LaneNet, and outputting an image of a second lane candidate region; and fusing the first lane candidate region and the second lane candidate region, and combining the two lane candidate regions into a final lane region.
    Type: Grant
    Filed: September 28, 2020
    Date of Patent: February 23, 2021
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Xinyu Zhang, Jun Li, Zhiwei Li, Huaping Liu, Zhenhong Zou
  • Publication number: 20160280547
    Abstract: The object of the present invention is to provide a method of accurately separating optically active CNTs with a single (n, m) structure, as well as optically active carbon nanotubes obtained by the method. A plurality of gel-filled columns are connected in series. An excess amount of carbon nanotube dispersion passes through them, so that carbon nanotubes with specific optical activities are adsorbed to each of the columns. The carbon nanotubes are then eluted by an eluent. In this manner, optically active carbon nanotubes with specific structures can be separated with high accuracy.
    Type: Application
    Filed: March 26, 2014
    Publication date: September 29, 2016
    Inventors: Huaping LIU, Takeshi TANAKA, Hiromichi KATAURA
  • Patent number: 9298673
    Abstract: The present disclosure provides an electronic device and an information processing method. The electronic device comprises: a processor which comprises N processing units to process data and perform data input and output; a data processing interface coupled to Q processing units among the N processing units, which compresses raw data received from the Q processing units to obtain compressed data; and a memory coupled to the data processing interface, which receives and stores the compressed data, where N≥1 and 1≤Q≤N.
    Type: Grant
    Filed: September 29, 2014
    Date of Patent: March 29, 2016
    Assignee: Lenovo (Beijing) Limited
    Inventors: Huaping Liu, Wei Xie
  • Publication number: 20150163518
    Abstract: The present disclosure provides an electronic device and an information processing method. The electronic device comprises: a processor which comprises N processing units to process data and perform data input and output; a data processing interface coupled to Q processing units among the N processing units, which compresses raw data received from the Q processing units to obtain compressed data; and a memory coupled to the data processing interface, which receives and stores the compressed data, where N≥1 and 1≤Q≤N.
    Type: Application
    Filed: September 29, 2014
    Publication date: June 11, 2015
    Inventors: Huaping Liu, Wei Xie
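
Several of the listed patents describe score-based sensor calibration; as one illustration, patent 11249174 scores each candidate LiDAR-to-camera pose by summing image gray values at the projected LiDAR points and keeps the highest-scoring pose. The following is a minimal 2-D sketch of that grid-search idea only; the function names, the 2-D projection and the synthetic edge image are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def project(points, angle, offset):
    """Rotate 2-D points by `angle` (radians), then translate by `offset`."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    return points @ R.T + offset

def score(edge_image, pixels):
    """Sum edge intensity over projected pixels that land inside the image."""
    h, w = edge_image.shape
    px = np.round(pixels).astype(int)
    inside = (px[:, 0] >= 0) & (px[:, 0] < w) & (px[:, 1] >= 0) & (px[:, 1] < h)
    return edge_image[px[inside, 1], px[inside, 0]].sum()

def calibrate(edge_image, points, candidate_angles, candidate_offsets):
    """Exhaustively search candidate poses; return (angle, offset, score) of the best."""
    best = (None, None, -np.inf)
    for a in candidate_angles:
        for t in candidate_offsets:
            s = score(edge_image, project(points, a, np.asarray(t)))
            if s > best[2]:
                best = (a, t, s)
    return best

# Toy usage: a bright vertical edge at x = 5; the correct offset (5, 0)
# moves the point column onto it and therefore wins the search.
edges = np.zeros((10, 10))
edges[:, 5] = 1.0
pts = np.array([[0.0, y] for y in range(10)])
angle, offset, total = calibrate(edges, pts, [0.0], [(0.0, 0.0), (5.0, 0.0)])
```

In the real 3-D setting the candidate poses would be full extrinsic transforms and the projection would go through the camera intrinsics, but the selection rule — maximize accumulated gray value over all candidates — is the same.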