Patents by Inventor Xin Fan

Xin Fan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220148213
    Abstract: The present invention discloses a method for fully automatically detecting chessboard corner points, and belongs to the field of image processing and computer vision. Fully automatic detection of chessboard corner points is achieved by placing one or more marks with distinctive colors or shapes on the chessboard to indicate an initial position, capturing an image and processing it accordingly, then expanding outwards using a homography matrix H computed from the initial pixel coordinates of a unit grid in the pixel coordinate system and manually specified world coordinates in the world coordinate system, and finally propagating across the entire chessboard region. An illustrative sketch of the homography-expansion step follows this entry.
    Type: Application
    Filed: March 5, 2020
    Publication date: May 12, 2022
    Inventors: Weiqiang KONG, Deyun LV, Wei ZHONG, Risheng LIU, Xin FAN, Zhongxuan LUO
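
    A minimal Python sketch of the homography-expansion step described above (not the patented method): it assumes the pixel coordinates of one unit grid's four corners are already known, e.g. from the colored marks, estimates H with OpenCV, and projects the integer world coordinates of neighbouring corners back into the image as initial corner predictions, which a refiner such as cv2.cornerSubPix could then snap onto the true corners. The 9 x 6 lattice size is an assumption for illustration.

        import numpy as np
        import cv2

        # Pixel corners of one unit grid (e.g. located via the colored marks)
        # and their manually assigned world coordinates in grid units.
        pixel_corners = np.array([[412.0, 300.0], [470.0, 298.0],
                                  [471.0, 356.0], [413.0, 358.0]], dtype=np.float32)
        world_corners = np.array([[0.0, 0.0], [1.0, 0.0],
                                  [1.0, 1.0], [0.0, 1.0]], dtype=np.float32)

        # Homography H mapping world (grid) coordinates to pixel coordinates.
        H, _ = cv2.findHomography(world_corners, pixel_corners)

        def predict_corner(i, j):
            """Predict the pixel location of the grid corner at world position (i, j)."""
            p = H @ np.array([i, j, 1.0])
            return p[:2] / p[2]

        # Expand outwards from the unit grid over the assumed 9 x 6 corner lattice;
        # each prediction would then be refined and the homography re-estimated as
        # the detection spreads across the whole chessboard region.
        predicted = np.array([[predict_corner(i, j) for j in range(6)] for i in range(9)])
        print(predicted.shape)  # (9, 6, 2)
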
  • Patent number: 11315273
    Abstract: The present invention discloses a disparity estimation method based on weakly supervised trusted cost propagation, which uses a deep learning method to refine the initial matching cost produced by a traditional method. By combining the two and exploiting their respective strengths, it resolves the false matches and the difficulty of matching untextured regions that afflict the traditional method, while the weakly supervised trusted cost propagation avoids the deep learning method's dependence on labeled data. An illustrative sketch of a traditional initial-cost computation follows this entry.
    Type: Grant
    Filed: March 5, 2020
    Date of Patent: April 26, 2022
    Assignees: DALIAN UNIVERSITY OF TECHNOLOGY, PENG CHENG LABORATORY
    Inventors: Wei Zhong, Hong Zhang, Haojie Li, Zhihui Wang, Risheng Liu, Xin Fan, Zhongxuan Luo, Shengquan Li
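
    The "initial cost obtained by the traditional method" can be as simple as a block-matching cost volume. The following Python sketch (NumPy only) builds a sum-of-absolute-differences cost volume for an assumed rectified grayscale pair and takes a winner-take-all disparity; the patented contribution, selecting trusted costs and propagating them with a weakly supervised network, is not reproduced here.

        import numpy as np

        def sad_cost_volume(left, right, max_disp=64, win=5):
            """SAD cost volume for a rectified grayscale pair (H x W float arrays)."""
            H, W = left.shape
            pad = win // 2
            cost = np.full((max_disp, H, W), np.inf, dtype=np.float32)
            for d in range(max_disp):
                diff = np.abs(left[:, d:] - right[:, :W - d])
                # Aggregate |I_L(x, y) - I_R(x - d, y)| over a win x win window.
                padded = np.pad(diff, pad, mode='edge')
                agg = np.zeros_like(diff)
                for dy in range(win):
                    for dx in range(win):
                        agg += padded[dy:dy + diff.shape[0], dx:dx + diff.shape[1]]
                cost[d, :, d:] = agg
            return cost

        def winner_take_all(cost):
            """Initial disparity map: per-pixel argmin over the disparity axis."""
            return np.argmin(cost, axis=0).astype(np.float32)

        # left, right = ...  # rectified grayscale images scaled to [0, 1]
        # disparity = winner_take_all(sad_cost_volume(left, right))
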
  • Patent number: 11315318
    Abstract: The present invention discloses a method for constructing a grid map using a binocular stereo camera. A high-performance computing platform is constructed using a binocular camera and a GPU, and a high-performance solving algorithm is constructed to obtain a high-quality grid map containing three-dimensional information. The system is easy to construct: the input data can be collected with the binocular stereo camera alone, and the program is simple and easy to implement. The grid height is calculated using spatial prior information and statistical knowledge, making the three-dimensional result more robust; an adaptive threshold for the grid cells is derived from spatial geometry and used to filter and screen the cells, improving the generalization ability and robustness of the algorithm. An illustrative sketch of the per-cell height statistic follows this entry.
    Type: Grant
    Filed: March 5, 2020
    Date of Patent: April 26, 2022
    Assignee: DALIAN UNIVERSITY OF TECHNOLOGY
    Inventors: Wei Zhong, Shenglun Chen, Haojie Li, Zhihui Wang, Risheng Liu, Xin Fan, Zhongxuan Luo
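
    A minimal Python sketch of the per-cell height statistic at the heart of such a grid map. It assumes a point cloud already triangulated from the stereo pair (camera coordinates: x right, y down, z forward) and uses a simple per-cell maximum rather than the patent's specific statistical model; the cell size, ranges and the adaptive-evidence rule in the closing comment are illustrative assumptions.

        import numpy as np

        def build_grid_map(points, cell=0.1, x_range=(-10.0, 10.0), z_range=(0.0, 20.0)):
            """Bin a stereo point cloud (N x 3) into a 2D grid and record a height per cell."""
            nx = int((x_range[1] - x_range[0]) / cell)
            nz = int((z_range[1] - z_range[0]) / cell)
            height = np.full((nz, nx), np.nan, dtype=np.float32)
            count = np.zeros((nz, nx), dtype=np.int32)

            ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
            iz = ((points[:, 2] - z_range[0]) / cell).astype(int)
            valid = (ix >= 0) & (ix < nx) & (iz >= 0) & (iz < nz)

            for cx, cz, y in zip(ix[valid], iz[valid], points[valid, 1]):
                h = -y  # y grows downward in camera coordinates, so height above the camera is -y
                count[cz, cx] += 1
                height[cz, cx] = h if np.isnan(height[cz, cx]) else max(height[cz, cx], h)
            return height, count

        # A cell with few points is unreliable; one simple adaptive rule is to require
        # more supporting points for nearby cells (which receive many stereo points)
        # than for distant ones, e.g. a minimum count that decays with the cell's depth z.
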
  • Patent number: 11302105
    Abstract: The present invention discloses a grid map obstacle detection method that fuses probability and height information, and belongs to the field of image processing and computer vision. A high-performance computing platform is constructed using a GPU, and a high-performance solving algorithm is constructed to obtain obstacle information in a map. The system is easy to construct, and the program is simple and easy to implement. Obstacle positions are obtained from a multi-layer grid map by fusing probability and height information, giving high robustness and high precision. An illustrative sketch of such a fusion rule follows this entry.
    Type: Grant
    Filed: March 5, 2020
    Date of Patent: April 12, 2022
    Assignee: DALIAN UNIVERSITY OF TECHNOLOGY
    Inventors: Wei Zhong, Shenglun Chen, Haojie Li, Zhihui Wang, Risheng Liu, Xin Fan, Zhongxuan Luo
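
    A toy Python version of fusing probability and height information into an obstacle mask, under assumed thresholds; the patent's multi-layer grid map and its exact fusion rule are not reproduced.

        import numpy as np

        def detect_obstacles(occupancy_prob, cell_height, p_thresh=0.65, h_min=0.25, h_max=2.5):
            """Mark a grid cell as an obstacle only when both cues agree: the occupancy
            probability is high AND the measured height sits in a band that excludes
            the ground plane and overhanging structures."""
            height_ok = (cell_height > h_min) & (cell_height < h_max)
            return (occupancy_prob > p_thresh) & height_ok

        # occupancy_prob: per-cell occupancy probabilities in [0, 1]
        # cell_height:    per-cell heights in metres (see the grid-map sketch above)
        # obstacles = detect_obstacles(occupancy_prob, cell_height)
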
  • Patent number: 11295168
    Abstract: The invention discloses a depth estimation and color correction method for monocular underwater images based on deep neural networks, which belongs to the field of image processing and computer vision. The framework consists of two parts: a style transfer subnetwork and a task subnetwork. The style transfer subnetwork, built on a generative adversarial network, transfers the appearance of underwater images onto land images to obtain abundant and effective synthetic labeled data. The task subnetwork couples the underwater depth estimation and color correction tasks in a stacked network structure and learns them collaboratively to improve the accuracy of both, while a domain adaptation strategy reduces the gap between synthetic and real underwater images, improving the network's ability to process real underwater images.
    Type: Grant
    Filed: December 4, 2020
    Date of Patent: April 5, 2022
    Assignee: Dalian University of Technology
    Inventors: Xinchen Ye, Rui Xu, Xin Fan
  • Patent number: 11293183
    Abstract: The joint has a prefabricated-reinforced-concrete column, a reinforced-concrete foundation, a column anchoring longitudinal bar, a grouting sleeve and a foundation anchoring steel bar. The foundation anchoring steel bar and the column anchoring longitudinal bar are connected by a seam filling material that fills the grouting sleeve. A splicing seam between the reinforced-concrete foundation and the prefabricated-reinforced-concrete column is filled with the seam filling material. The foundation anchoring steel bar includes a vertical portion and a horizontal portion. The vertical portion includes an upper-portion anchoring section protruding out of an upper surface of the reinforced-concrete foundation, a middle-portion non-adhesive section buried within the foundation, and a lower-portion anchoring section. The exterior of the middle-portion non-adhesive section is provided with an isolating sheath that isolates it from adhesion to the surrounding concrete.
    Type: Grant
    Filed: December 2, 2019
    Date of Patent: April 5, 2022
    Inventors: Haishan Guo, Hu Qi, Kang Liu, Liming Li, Xin Fan, Lida Tian, Jiao Geng, Dongyan Wang
  • Publication number: 20220092809
    Abstract: The present invention discloses a disparity estimation method based on weakly supervised trusted cost propagation, which uses a deep learning method to refine the initial matching cost produced by a traditional method. By combining the two and exploiting their respective strengths, it resolves the false matches and the difficulty of matching untextured regions that afflict the traditional method, while the weakly supervised trusted cost propagation avoids the deep learning method's dependence on labeled data.
    Type: Application
    Filed: March 5, 2020
    Publication date: March 24, 2022
    Inventors: Wei ZHONG, Hong ZHANG, Haojie LI, Zhihui WANG, Risheng LIU, Xin FAN, Zhongxuan LUO, Shengquan LI
  • Publication number: 20220046218
    Abstract: The present invention discloses a disparity image stitching and visualization method based on multiple pairs of binocular cameras. A calibration algorithm is used to solve for the positional relationship between the binocular cameras, and this prior information is used to compute a homography matrix between images; the cameras' internal and external parameters are used to transform the depth images between camera coordinate systems. Because the graph cut algorithm has a high time complexity that depends on the number of nodes in the graph, the present invention divides the images into layers and solves layer by layer, iterating the solutions. The homography matrix is then used to transform the depth images into a common image coordinate system, and a stitching seam is synthesized to achieve seamless panoramic depth image stitching; finally, the depth information of the disparity image is superimposed on a visible light image. An illustrative sketch of the final warp-and-overlay step follows this entry.
    Type: Application
    Filed: March 5, 2020
    Publication date: February 10, 2022
    Inventors: Xin FAN, Risheng LIU, Zhuoxiao LI, Wei ZHONG, Zhongxuan LUO
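
    A Python sketch of the final two stages only: warping the disparity image with the precomputed homography and superimposing it on the visible light image. The layered graph-cut seam search is omitted; the homography, image variables and blending weight are assumed inputs.

        import cv2
        import numpy as np

        def warp_and_overlay(disparity, visible, H, alpha=0.5):
            """Warp a single-channel disparity image into the visible camera's image plane
            using homography H, then overlay it on the 8-bit BGR image as a false-colour layer."""
            h, w = visible.shape[:2]
            warped = cv2.warpPerspective(disparity, H, (w, h))

            # Normalise the warped disparity to 8 bits and colour-map it for display.
            norm = cv2.normalize(warped, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
            colour = cv2.applyColorMap(norm, cv2.COLORMAP_JET)

            # Blend only where the warped disparity is valid (non-zero after warping).
            mask = warped > 0
            overlay = visible.copy()
            blended = cv2.addWeighted(visible, 1 - alpha, colour, alpha, 0)
            overlay[mask] = blended[mask]
            return overlay
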
  • Publication number: 20220044442
    Abstract: The present invention proposes a bi-level optimization-based infrared and visible light fusion method, which uses a pair of infrared and visible light cameras to acquire images and constructs a bi-level-paradigm infrared and visible light image fusion algorithm based on mathematical modeling. Binocular cameras and an NVIDIA TX2 are used to build a high-performance computing platform, and a high-performance solving algorithm is constructed to obtain a high-quality infrared and visible light fusion image. The system is easy to construct, the input data can be acquired with stereo binocular infrared and visible light cameras respectively, and the program is simple and easy to implement. According to the different imaging principles of infrared and visible light cameras, the fusion is modeled separately in an image domain and a gradient domain. An illustrative sketch of such an image-plus-gradient-domain model follows this entry.
    Type: Application
    Filed: March 5, 2020
    Publication date: February 10, 2022
    Inventors: Risheng LIU, Xin FAN, Jinyuan LIU, Wei ZHONG, Zhongxuan LUO
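
    The "image domain and gradient domain" modelling can be illustrated with a small single-level quadratic model (not the patent's bi-level optimization): keep the fused image close to the infrared intensities while matching the visible image's gradients, i.e. minimise ||F - I_ir||^2 + lambda * ||grad F - grad I_vis||^2. The Python sketch below minimises this by plain gradient descent; lambda, the step-size rule and the iteration count are assumptions.

        import numpy as np

        def dx(u):   # forward difference along columns
            d = np.zeros_like(u); d[:, :-1] = u[:, 1:] - u[:, :-1]; return d

        def dy(u):   # forward difference along rows
            d = np.zeros_like(u); d[:-1, :] = u[1:, :] - u[:-1, :]; return d

        def dxT(p):  # adjoint of dx
            u = np.zeros_like(p)
            u[:, 0] = -p[:, 0]; u[:, -1] = p[:, -2]; u[:, 1:-1] = p[:, :-2] - p[:, 1:-1]
            return u

        def dyT(p):  # adjoint of dy
            u = np.zeros_like(p)
            u[0, :] = -p[0, :]; u[-1, :] = p[-2, :]; u[1:-1, :] = p[:-2, :] - p[1:-1, :]
            return u

        def fuse(ir, vis, lam=4.0, iters=300):
            """Minimise ||F - ir||^2 + lam * ||grad F - grad vis||^2 by gradient descent.
            ir, vis: registered single-channel float images in [0, 1]."""
            gx, gy = dx(vis), dy(vis)
            step = 1.9 / (1.0 + 8.0 * lam)   # below 2 / Lipschitz constant of the gradient
            F = ir.copy()
            for _ in range(iters):
                grad = (F - ir) + lam * (dxT(dx(F) - gx) + dyT(dy(F) - gy))
                F -= step * grad
            return np.clip(F, 0.0, 1.0)
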
  • Publication number: 20220044356
    Abstract: The present invention discloses a calibration-based real-time stitching method for large-field-angle images, and belongs to the field of image processing and computer vision. First, a calibration algorithm is used to solve for the positional relationship between the cameras, and this prior information is used to compute a homography matrix between the images. An illustrative sketch of deriving a homography from calibration follows this entry.
    Type: Application
    Filed: March 5, 2020
    Publication date: February 10, 2022
    Inventors: Wei ZHONG, Yuankai XIANG, Haojie LI, Zhihui WANG, Risheng LIU, Xin FAN, Zhongxuan LUO
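
    When the relative pose between the cameras is known from calibration, and the cameras (approximately) share a projection centre or the scene is distant, the homography between the two images follows directly from the intrinsics and the relative rotation: H = K2 * R * K1^-1. A hedged Python sketch, with all matrices assumed to come from a prior calibration and the canvas layout chosen only for illustration:

        import numpy as np
        import cv2

        def homography_from_calibration(K1, K2, R):
            """Homography mapping image-1 pixels to image-2 pixels for a (nearly)
            rotation-only camera pair: H = K2 @ R @ inv(K1)."""
            return K2 @ R @ np.linalg.inv(K1)

        def stitch_pair(img1, img2, K1, K2, R):
            """Warp img1 into img2's frame on a canvas twice as wide, then paste img2 on top."""
            h, w = img2.shape[:2]
            H = homography_from_calibration(K1, K2, R)
            canvas = cv2.warpPerspective(img1, H, (2 * w, h))
            canvas[:h, :w] = np.where(img2 > 0, img2, canvas[:h, :w])
            return canvas
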
  • Publication number: 20220044474
    Abstract: The present invention discloses a method for constructing a grid map using a binocular stereo camera. A high-performance computing platform is constructed using a binocular camera and a GPU, and a high-performance solving algorithm is constructed to obtain a high-quality grid map containing three-dimensional information. The system is easy to construct: the input data can be collected with the binocular stereo camera alone, and the program is simple and easy to implement. The grid height is calculated using spatial prior information and statistical knowledge, making the three-dimensional result more robust; an adaptive threshold for the grid cells is derived from spatial geometry and used to filter and screen the cells, improving the generalization ability and robustness of the algorithm.
    Type: Application
    Filed: March 5, 2020
    Publication date: February 10, 2022
    Inventors: Wei ZHONG, Shenglun CHEN, Haojie LI, Zhihui WANG, Risheng LIU, Xin FAN, Zhongxuan LUO
  • Publication number: 20220044374
    Abstract: The present invention provides an infrared and visible light fusion method, and belongs to the field of image processing and computer vision. A pair of infrared and visible light binocular cameras is used to acquire images, a fusion image pyramid and a saliency-based vision enhancement algorithm are constructed, and the fusion is performed with a multi-scale transform. The binocular cameras and an NVIDIA TX2 are used to build a high-performance computing platform and to construct a high-performance solving algorithm that yields a high-quality infrared and visible light fusion image. According to the different imaging principles of infrared and visible light cameras, an image pyramid is constructed by designing a filtering template to obtain image information at different scales, image super-resolution and saliency enhancement are performed, and real-time performance is finally achieved through GPU acceleration. An illustrative sketch of pyramid-based fusion follows this entry.
    Type: Application
    Filed: March 5, 2020
    Publication date: February 10, 2022
    Inventors: Risheng LIU, Xin FAN, Jinyuan LIU, Wei ZHONG, Zhongxuan LUO
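
    A plain Laplacian-pyramid fusion (keep the stronger detail coefficients, average the coarsest level) captures the multi-scale-transform idea in Python; the patent's designed filtering template, super-resolution and saliency enhancement steps are not reproduced. Assumes registered single-channel float32 images of equal, pyramid-friendly size.

        import cv2
        import numpy as np

        def laplacian_pyramid(img, levels=4):
            gauss, pyr = img, []
            for _ in range(levels):
                down = cv2.pyrDown(gauss)
                up = cv2.pyrUp(down, dstsize=(gauss.shape[1], gauss.shape[0]))
                pyr.append(gauss - up)   # band-pass detail at this scale
                gauss = down
            pyr.append(gauss)            # coarsest (low-pass) residual
            return pyr

        def fuse_pyramids(pyr_ir, pyr_vis):
            fused = [np.where(np.abs(a) >= np.abs(b), a, b)       # keep the stronger detail
                     for a, b in zip(pyr_ir[:-1], pyr_vis[:-1])]
            fused.append(0.5 * (pyr_ir[-1] + pyr_vis[-1]))        # average the base level
            return fused

        def reconstruct(pyr):
            img = pyr[-1]
            for detail in reversed(pyr[:-1]):
                img = cv2.pyrUp(img, dstsize=(detail.shape[1], detail.shape[0])) + detail
            return img

        # ir, vis: registered float32 images in [0, 1]
        # fused = np.clip(reconstruct(fuse_pyramids(laplacian_pyramid(ir), laplacian_pyramid(vis))), 0, 1)
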
  • Publication number: 20220044375
    Abstract: The present invention proposes a saliency-map-enhancement-based infrared and visible light fusion method, an infrared and visible light fusion algorithm using filtering decomposition and saliency enhancement. Binocular cameras and an NVIDIA TX2 are used to build a high-performance computing platform and to construct a high-performance solving algorithm that yields a high-quality infrared and visible light fusion image. The system is easy to construct, the input data can be acquired with stereo binocular infrared and visible light cameras respectively, and the program is simple and easy to implement. According to the different imaging principles of infrared and visible light cameras, the input images are decomposed by filtering into a background layer and a detail layer; a saliency-map-enhancement-based fusion rule is designed for the background layer, and a pixel-contrast-based fusion rule is designed for the detail layer. An illustrative sketch of this decomposition and the two fusion rules follows this entry.
    Type: Application
    Filed: March 5, 2020
    Publication date: February 10, 2022
    Inventors: Risheng LIU, Xin FAN, Jinyuan LIU, Wei ZHONG, Zhongxuan LUO
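
    A Python sketch of the decomposition and the two fusion rules: a box filter splits each image into a background (base) layer and a detail layer; the base layers are blended with weights from a crude saliency map (here simply the distance from the image's mean intensity, a stand-in for the patent's saliency-map enhancement), and the detail layers are fused by taking the larger absolute value.

        import cv2
        import numpy as np

        def base_detail(img, ksize=31):
            """Split an image into a low-frequency background layer and a detail layer."""
            base = cv2.blur(img, (ksize, ksize))
            return base, img - base

        def simple_saliency(img):
            """Stand-in saliency: each pixel's distance from the global mean intensity."""
            s = np.abs(img - img.mean())
            return s / (s.max() + 1e-8)

        def fuse(ir, vis):
            """ir, vis: registered single-channel float32 images in [0, 1]."""
            b_ir, d_ir = base_detail(ir)
            b_vis, d_vis = base_detail(vis)

            # Saliency-weighted fusion of the background layers.
            w_ir, w_vis = simple_saliency(ir), simple_saliency(vis)
            w = w_ir / (w_ir + w_vis + 1e-8)
            base = w * b_ir + (1.0 - w) * b_vis

            # Pixel-contrast (max-absolute) fusion of the detail layers.
            detail = np.where(np.abs(d_ir) >= np.abs(d_vis), d_ir, d_vis)
            return np.clip(base + detail, 0.0, 1.0)
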
  • Publication number: 20220046220
    Abstract: The present invention discloses a multispectral stereo camera self-calibration algorithm based on track feature registration, and belongs to the field of image processing and computer vision. Optimal matching points are obtained by extracting and matching the motion tracks of moving objects, and the external parameters are corrected accordingly. Compared with ordinary methods, the present invention uses the tracks of moving objects as the features required for self-calibration; the advantage of using tracks is their good cross-modal robustness. In addition, matching the tracks directly removes the steps of extracting and matching feature points, making the operation simple and the results accurate.
    Type: Application
    Filed: March 5, 2020
    Publication date: February 10, 2022
    Inventors: Wei ZHONG, Haojie LI, Boqian LIU, Zhihui WANG, Risheng LIU, Zhongxuan LUO, Xin FAN
  • Patent number: 11241734
    Abstract: Provided is a double-station cleaning system comprising a cleaning machine. Workpieces are placed into feeding frames, the feeding frames are placed into trays, and a bracket assembly is pushed to drive the trays along a guide rail assembly. The feeding frame in one of the trays is conveyed to a feeding inlet of the cleaning machine, the feeding frame is pushed along two rows of nylon wheels and linear guide rails inside the cleaning machine, and the feeding frame and the workpieces in it are pushed into the cleaning machine to be cleaned. The two stations work alternately, saving work time and increasing work efficiency.
    Type: Grant
    Filed: October 25, 2019
    Date of Patent: February 8, 2022
    Assignee: CITIC DICASTAL CO., LTD
    Inventors: Hailong Liu, Haipeng Feng, Naili Li, Hongji Zhou, Jiansheng Wang, Zhen Ge, Zhi Chen, Fengbao Luo, Xin Fan, Yongyue Huang
  • Publication number: 20220036589
    Abstract: The present invention discloses a multispectral camera external parameter self-calibration algorithm based on edge features, and belongs to the field of image processing and computer vision. Because a visible light camera and an infrared camera belong to different modalities, directly extracting and matching feature points yields few satisfactory point pairs. To solve this problem, the method starts from edge features and finds the optimal corresponding position of the infrared image on the visible light image through edge extraction and matching. This reduces the search range and increases the number of satisfactory matched point pairs, allowing more effective joint self-calibration of the infrared and visible light cameras. The operation is simple and the results are accurate. An illustrative sketch of the edge-based registration step follows this entry.
    Type: Application
    Filed: March 5, 2020
    Publication date: February 3, 2022
    Inventors: Wei ZHONG, Boqian LIU, Haojie LI, Zhihui WANG, Risheng LIU, Xin FAN, Zhongxuan LUO
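
    The step of finding the optimal corresponding position of the infrared image on the visible light image can be approximated with Canny edges and normalized cross-correlation, as in the Python sketch below. The thresholds, the blur used to soften the edge maps and the assumption that the infrared view lies entirely inside the visible view are illustrative; correcting the external parameters from the recovered offset is not shown.

        import cv2
        import numpy as np

        def edge_register(infrared, visible):
            """Locate the infrared image inside the visible image by matching edge maps.
            Both inputs are 8-bit grayscale; the infrared image must be the smaller one."""
            edges_ir = cv2.Canny(infrared, 50, 150)
            edges_vis = cv2.Canny(visible, 50, 150)

            # Slightly blur the binary edge maps so near-miss edges still correlate.
            edges_ir = cv2.GaussianBlur(edges_ir.astype(np.float32), (5, 5), 0)
            edges_vis = cv2.GaussianBlur(edges_vis.astype(np.float32), (5, 5), 0)

            result = cv2.matchTemplate(edges_vis, edges_ir, cv2.TM_CCORR_NORMED)
            _, score, _, top_left = cv2.minMaxLoc(result)
            return top_left, score   # (x, y) offset of the best match and its correlation score
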
  • Patent number: 11238602
    Abstract: The present invention provides a method for estimating a high-quality depth map based on depth prediction and enhancement sub-networks, belonging to the technical field of image processing and computer vision. The method constructs a depth prediction sub-network to predict depth information from a color image and a depth enhancement sub-network that obtains a high-quality depth map by recovering the low-resolution depth map. The system is easy to construct, the well-trained end-to-end network produces the high-quality depth map directly from the corresponding color image, and the algorithm is easy to implement. The high-frequency component of the color image is used to help recover the depth boundary information lost to the down-sampling operators in the depth prediction sub-network, finally yielding high-quality, high-resolution depth maps, and a spatial pyramid pooling structure increases the accuracy of depth prediction for multi-scale objects in the scene. An illustrative sketch of a spatial pyramid pooling block follows this entry.
    Type: Grant
    Filed: January 7, 2019
    Date of Patent: February 1, 2022
    Assignee: Dalian University of Technology
    Inventors: Xinchen Ye, Wei Zhong, Haojie Li, Lin Lin, Xin Fan, Zhongxuan Luo
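
    The spatial pyramid pooling structure mentioned in the abstract pools the feature map at several grid sizes and concatenates the upsampled results with the input; a minimal PyTorch sketch follows, in which the channel counts and pooling grids are assumptions rather than the patented network's configuration.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class SpatialPyramidPooling(nn.Module):
            """Pool the input at several grid sizes, project each branch with a 1x1 conv,
            upsample back to the input resolution, and concatenate with the input."""
            def __init__(self, in_channels, branch_channels=32, grids=(1, 2, 4, 8)):
                super().__init__()
                self.branches = nn.ModuleList(
                    nn.Sequential(nn.AdaptiveAvgPool2d(g),
                                  nn.Conv2d(in_channels, branch_channels, kernel_size=1),
                                  nn.ReLU(inplace=True))
                    for g in grids)
                self.out_channels = in_channels + branch_channels * len(grids)

            def forward(self, x):
                h, w = x.shape[2:]
                feats = [x]
                for branch in self.branches:
                    feats.append(F.interpolate(branch(x), size=(h, w),
                                               mode='bilinear', align_corners=False))
                return torch.cat(feats, dim=1)

        # y = SpatialPyramidPooling(64)(torch.randn(1, 64, 60, 80))  # -> shape (1, 192, 60, 80)
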
  • Patent number: 11240477
    Abstract: The present invention provides a method and a device for image rectification, applied in the field of image processing. The method includes: receiving two images of a target object captured from different viewpoints; performing an epipolar rectification on the two images; further rectifying the two images based on their image contents; and splicing the content-rectified images. The method can rectify images captured from different viewpoints, ensuring pixel alignment and avoiding visual discomfort for the viewer. An illustrative sketch of the epipolar rectification step follows this entry.
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: February 1, 2022
    Assignee: ArcSoft Corporation Limited
    Inventors: Wang Miao, Li Yu, Xin Fan
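
    The epipolar rectification step corresponds closely to OpenCV's stereo rectification. The Python sketch below assumes the intrinsics, distortion coefficients and relative pose of the two viewpoints are already known, and it does not include the patent's subsequent content-based rectification and splicing.

        import cv2

        def epipolar_rectify(img1, img2, K1, D1, K2, D2, R, T):
            """Rectify two views so that corresponding points lie on the same image row.
            K*, D*: intrinsics and distortion; R, T: pose of camera 2 relative to camera 1."""
            size = (img1.shape[1], img1.shape[0])  # (width, height)
            R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)

            map1x, map1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
            map2x, map2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)

            rect1 = cv2.remap(img1, map1x, map1y, cv2.INTER_LINEAR)
            rect2 = cv2.remap(img2, map2x, map2y, cv2.INTER_LINEAR)
            return rect1, rect2
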
  • Publication number: 20220028043
    Abstract: A multispectral camera dynamic stereo calibration algorithm based on saliency features comprises the following steps. Step 1: conduct de-distortion and binocular rectification on the original images according to the internal parameters and original external parameters of an infrared camera and a visible light camera. Step 2: detect the saliency of the infrared image and the visible light image using a histogram contrast method. Step 3: extract feature points on the infrared image and the visible light image. Step 4: match the feature points extracted in the previous step. Step 5: judge the feature point coverage area. Step 6: correct the calibration result. The present invention addresses changes in the positional relationship between an infrared camera and a visible light camera caused by factors such as temperature, humidity and vibration. An illustrative sketch of the histogram-contrast saliency in Step 2 follows this entry.
    Type: Application
    Filed: March 5, 2020
    Publication date: January 27, 2022
    Inventors: Wei ZHONG, Haojie LI, Boqian LIU, Zhihui WANG, Risheng LIU, Zhongxuan LUO, Xin FAN
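
    Step 2's histogram-contrast saliency can be sketched in a few lines of Python: a pixel is salient in proportion to how far its intensity lies, on average, from the rest of the image's intensity distribution. This is a grayscale simplification chosen for illustration, not the exact method of the application.

        import numpy as np

        def histogram_contrast_saliency(gray, bins=64):
            """Grayscale histogram-contrast saliency. gray: 2D uint8 image; returns a map in [0, 1]."""
            hist, edges = np.histogram(gray, bins=bins, range=(0, 256))
            prob = hist / hist.sum()
            centers = 0.5 * (edges[:-1] + edges[1:])

            # Saliency of bin i = sum_j p(j) * |c_i - c_j|: rare, high-contrast intensities
            # score high, dominant background intensities score low.
            contrast = np.abs(centers[:, None] - centers[None, :])
            bin_saliency = (contrast * prob[None, :]).sum(axis=1)

            idx = np.clip((gray.astype(np.int32) * bins) // 256, 0, bins - 1)
            s = bin_saliency[idx]
            return (s - s.min()) / (s.max() - s.min() + 1e-8)
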
  • Patent number: 11210803
    Abstract: The present invention provides a method of dense 3D scene reconstruction based on a monocular camera and belongs to the technical field of image processing and computer vision. It builds a reconstruction strategy that fuses traditional geometry-based depth computation with convolutional neural network (CNN) based depth prediction, and formulates a depth reconstruction model solved by an efficient algorithm to obtain a high-quality dense depth map. The system is easy to construct because of its low requirement for hardware resources, and it achieves dense reconstruction using only ubiquitous monocular cameras. An illustrative sketch of fusing geometric and CNN depth follows this entry.
    Type: Grant
    Filed: January 7, 2019
    Date of Patent: December 28, 2021
    Assignee: DALIAN UNIVERSITY OF TECHNOLOGY
    Inventors: Xinchen Ye, Wei Zhong, Zhihui Wang, Haojie Li, Lin Lin, Xin Fan, Zhongxuan Luo
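
    The fusion idea, anchoring the dense but scale-uncertain CNN prediction to the sparse but metrically reliable geometry-based depths, can be illustrated by the simplest possible version in Python: fit a global scale and offset from the CNN depth to the geometric depth where the latter exists, then keep the geometric values at those pixels. The patent's regularized depth reconstruction model and its efficient solver are not reproduced.

        import numpy as np

        def fuse_depth(cnn_depth, geo_depth, geo_mask):
            """cnn_depth: dense H x W CNN prediction.
            geo_depth:  H x W geometry-based depth, valid only where geo_mask is True."""
            x = cnn_depth[geo_mask]
            y = geo_depth[geo_mask]

            # Least-squares fit y ~ a * x + b to align the CNN prediction with the
            # metric scale of the geometric measurements.
            A = np.stack([x, np.ones_like(x)], axis=1)
            (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)

            fused = a * cnn_depth + b
            fused[geo_mask] = geo_depth[geo_mask]   # trust the geometric depth where it exists
            return fused
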