Patents by Inventor Xinchen YE

Xinchen YE has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11551029
    Abstract: The invention discloses a deep network lung texture recognition method combined with multi-scale attention, which belongs to the field of image processing and computer vision. To accurately recognize the typical textures of diffuse lung disease in computed tomography (CT) images of the lung, a unique attention mechanism module and a multi-scale feature fusion module are designed to construct a deep convolutional neural network combining multi-scale features and attention, which achieves high-precision automatic recognition of the typical textures of diffuse lung diseases. In addition, the proposed network structure is clear, easy to construct, and easy to implement.
    Type: Grant
    Filed: December 4, 2020
    Date of Patent: January 10, 2023
    Assignee: DALIAN UNIVERSITY OF TECHNOLOGY
    Inventors: Rui Xu, Xinchen Ye, Haojie Li, Lin Lin
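The two modules named in the abstract can be illustrated with a minimal numpy sketch. This is a generic squeeze-and-excite style channel attention and a stride-based multi-scale fusion, not the patented modules; all function names and the choice of scales are assumptions for illustration.

```python
import numpy as np

def channel_attention(feat):
    """Generic channel attention sketch: global-average-pool each channel,
    softmax the pooled vector, and reweight the channels accordingly."""
    squeezed = feat.mean(axis=(1, 2))                       # (C,)
    weights = np.exp(squeezed) / np.exp(squeezed).sum()     # sums to 1
    return feat * weights[:, None, None]

def multi_scale_fusion(feat, scales=(1, 2, 4)):
    """Fuse the same feature map seen at several strides: downsample,
    upsample back by repetition, and average across the scales."""
    C, H, W = feat.shape
    fused = np.zeros_like(feat)
    for s in scales:
        pooled = feat[:, ::s, ::s]                          # crude stride-s pooling
        up = pooled.repeat(s, axis=1).repeat(s, axis=2)[:, :H, :W]
        fused += up
    return fused / len(scales)
```

In a full network these operations would be interleaved with learned convolutions; here they only show the data flow of attention reweighting followed by scale fusion.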
  • Patent number: 11501435
    Abstract: The invention discloses an unsupervised content-preserved domain adaptation method for multiple CT lung texture recognition, which belongs to the field of image processing and computer vision. The method takes a deep lung texture recognition network trained in advance on one type of CT data (the source domain) and adapts it to another type of CT image (the target domain). Using only unlabeled target-domain CT images, without manual labeling of typical lung textures, an adversarial learning mechanism and a specially designed content-consistency network module fine-tune the deep network so that it maintains high recognition performance on the target domain. The method not only saves development labor and time costs, but is also easy to implement and highly practical.
    Type: Grant
    Filed: December 4, 2020
    Date of Patent: November 15, 2022
    Assignee: DALIAN UNIVERSITY OF TECHNOLOGY
    Inventors: Rui Xu, Xinchen Ye, Haojie Li, Lin Lin
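The adversarial fine-tuning idea can be sketched with the two standard loss terms of domain-adversarial training. This is a minimal illustration with a hypothetical linear domain discriminator `w`; the patented method's actual discriminator and content-consistency module are not reproduced here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adversarial_losses(src_feat, tgt_feat, w):
    """Standard domain-adversarial loss pair (sketch). The discriminator w
    labels source features 1 and target features 0; the feature extractor
    is then fine-tuned to fool it, aligning the two domains."""
    p_src = sigmoid(src_feat @ w)
    p_tgt = sigmoid(tgt_feat @ w)
    # Binary cross-entropy minimized by the discriminator.
    d_loss = -np.mean(np.log(p_src + 1e-8)) - np.mean(np.log(1.0 - p_tgt + 1e-8))
    # "Fool the discriminator" objective minimized by the feature extractor.
    g_loss = -np.mean(np.log(p_tgt + 1e-8))
    return d_loss, g_loss
```

Alternating updates on these two losses is what lets the recognition network adapt to target-domain CT images without any target labels.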
  • Patent number: 11295168
    Abstract: The invention discloses a depth estimation and color correction method for monocular underwater images based on a deep neural network, which belongs to the field of image processing and computer vision. The framework consists of two parts: a style transfer subnetwork and a task subnetwork. The style transfer subnetwork, built on a generative adversarial network, transfers the apparent characteristics of underwater images onto land images to obtain abundant and effective synthetic labeled data. The task subnetwork combines the underwater depth estimation and color correction tasks in a stacked network structure and trains them collaboratively to improve their respective accuracies, while a domain adaptation strategy reduces the gap between synthetic and real underwater images, improving the network's ability to process real underwater images.
    Type: Grant
    Filed: December 4, 2020
    Date of Patent: April 5, 2022
    Assignee: Dalian University of Technology
    Inventors: Xinchen Ye, Rui Xu, Xin Fan
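The data-generation half of the framework can be sketched as follows. The fixed per-channel attenuation here is a crude stand-in for the learned GAN generator (real water absorption is depth-dependent and learned in the patent); the function names and coefficients are assumptions.

```python
import numpy as np

def style_transfer(land_rgb):
    """Stand-in for the GAN generator: restyle a land image as 'underwater'
    by attenuating red and slightly attenuating green (water absorbs long
    wavelengths first). The real subnetwork learns this mapping."""
    attenuation = np.array([0.4, 0.9, 1.0])     # assumed R, G, B factors
    return np.clip(land_rgb * attenuation, 0.0, 1.0)

def make_synthetic_pair(land_rgb, land_depth):
    """Land images come with depth labels; the transferred image inherits
    them, yielding (synthetic underwater image, depth label) training pairs
    for the downstream task subnetwork."""
    return style_transfer(land_rgb), land_depth
```

The point of the sketch is the pipeline: because style transfer preserves scene geometry, the land-image depth labels remain valid for the synthesized underwater images.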
  • Patent number: 11238602
    Abstract: The present invention provides a method for estimating a high-quality depth map based on depth prediction and depth enhancement sub-networks, belonging to the technical field of image processing and computer vision. The method constructs a depth prediction sub-network to predict depth information from a color image and uses a depth enhancement sub-network to obtain a high-quality depth map by recovering the low-resolution depth map. The system is easy to construct, and the well-trained end-to-end network obtains the high-quality depth map directly from the corresponding color image. The algorithm is easy to implement. It uses the high-frequency component of the color image to recover the depth boundary information lost to the down-sampling operators in the depth prediction sub-network, finally obtaining high-quality, high-resolution depth maps, and uses a spatial pyramid pooling structure to increase the accuracy of depth prediction for multi-scale objects in the scene.
    Type: Grant
    Filed: January 7, 2019
    Date of Patent: February 1, 2022
    Assignee: Dalian University of Technology
    Inventors: Xinchen Ye, Wei Zhong, Haojie Li, Lin Lin, Xin Fan, Zhongxuan Luo
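The high-frequency guidance idea in the abstract can be sketched in a few lines of numpy: extract the color image's high-frequency residual and add it to an upsampled low-resolution depth map to sharpen boundaries. The box blur, upsampling by repetition, and the blending weight `alpha` are all simplifying assumptions, not the patented enhancement sub-network.

```python
import numpy as np

def high_frequency(gray):
    """High-frequency residual of a grayscale image: the image minus a 3x3
    box blur (interior pixels only; borders are left with zero residual)."""
    H, W = gray.shape
    blur = np.copy(gray)
    blur[1:-1, 1:-1] = sum(gray[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx]
                           for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    return gray - blur

def enhance_depth(lr_depth, gray, scale=2, alpha=0.1):
    """Upsample a low-resolution depth map and add color-image high
    frequencies to restore boundaries lost to down-sampling (alpha is an
    assumed blending weight; the patent learns this fusion)."""
    up = lr_depth.repeat(scale, axis=0).repeat(scale, axis=1)
    return up + alpha * high_frequency(gray[:up.shape[0], :up.shape[1]])
```

Since depth discontinuities usually coincide with intensity edges, adding the color image's high-frequency component is a reasonable proxy for the learned guidance the sub-network provides.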
  • Patent number: 11210803
    Abstract: The present invention provides a method of dense 3D scene reconstruction based on a monocular camera and belongs to the technical field of image processing and computer vision. The method builds a reconstruction strategy that fuses traditional geometry-based depth computation with convolutional neural network (CNN) based depth prediction, and formulates a depth reconstruction model solved by an efficient algorithm to obtain a high-quality dense depth map. The system is easy to construct because of its low hardware requirements, and it achieves dense reconstruction using only ubiquitous monocular cameras.
    Type: Grant
    Filed: January 7, 2019
    Date of Patent: December 28, 2021
    Assignee: DALIAN UNIVERSITY OF TECHNOLOGY
    Inventors: Xinchen Ye, Wei Zhong, Zhihui Wang, Haojie Li, Lin Lin, Xin Fan, Zhongxuan Luo
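The fusion of geometric and CNN-predicted depth can be illustrated by a pixelwise confidence-weighted model. This closed-form sketch is a simplification under an assumed quadratic objective; the patented reconstruction model and its efficient solver are more elaborate.

```python
import numpy as np

def fuse_depth(geo_depth, geo_conf, cnn_depth, lam=0.5):
    """Pixelwise closed-form minimizer of
        conf * (d - geo)^2 + lam * (d - cnn)^2,
    i.e. trust triangulated geometric depth where its confidence is high,
    and fall back to the dense CNN prediction elsewhere (lam is an assumed
    regularization weight)."""
    return (geo_conf * geo_depth + lam * cnn_depth) / (geo_conf + lam)
```

Geometry-based depth is accurate but sparse (confidence is zero at untracked pixels), while CNN depth is dense but less precise; the weighted combination keeps the best of both, which is the core of the fused reconstruction strategy described above.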
  • Publication number: 20210390723
    Abstract: The present invention provides a monocular unsupervised depth estimation method based on a contextual attention mechanism, belonging to the technical field of image processing and computer vision. The method combines a hybrid geometric enhancement loss function with a contextual attention mechanism, and employs convolutional neural network based depth estimation, edge, and camera pose estimation sub-networks to obtain high-quality depth maps from monocular image sequences in an end-to-end manner. The system is easy to construct, the program framework is easy to implement, and the algorithm runs fast. Because depth is solved in an unsupervised manner, the method avoids the difficulty of obtaining ground-truth data that supervised methods face.
    Type: Application
    Filed: December 2, 2020
    Publication date: December 16, 2021
    Inventors: Xinchen YE, Rui XU, Xin FAN
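The unsupervised training signal behind such methods can be sketched with the two losses commonly used in place of ground-truth depth. These are the generic photometric and edge-aware smoothness terms, assumed for illustration; the patent's hybrid geometric enhancement loss is not reproduced here.

```python
import numpy as np

def photometric_loss(target, warped):
    """Mean L1 difference between the target frame and a view synthesized
    from an adjacent frame via the predicted depth and camera pose; this
    replaces ground-truth depth supervision."""
    return np.abs(target - warped).mean()

def edge_aware_smoothness(depth, image):
    """Penalize horizontal depth gradients, downweighted where the image
    itself has edges, so depth discontinuities can align with image edges."""
    d_grad = np.abs(np.diff(depth, axis=1))
    i_grad = np.abs(np.diff(image, axis=1))
    return (d_grad * np.exp(-i_grad)).mean()
```

If the predicted depth and pose are correct, the warped adjacent frame matches the target frame and the photometric loss vanishes, which is exactly why no labeled depth is needed.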
  • Publication number: 20210390338
    Abstract: The invention discloses a deep network lung texture recognition method combined with multi-scale attention, which belongs to the field of image processing and computer vision. To accurately recognize the typical textures of diffuse lung disease in computed tomography (CT) images of the lung, a unique attention mechanism module and a multi-scale feature fusion module are designed to construct a deep convolutional neural network combining multi-scale features and attention, which achieves high-precision automatic recognition of the typical textures of diffuse lung diseases. In addition, the proposed network structure is clear, easy to construct, and easy to implement.
    Type: Application
    Filed: December 4, 2020
    Publication date: December 16, 2021
    Inventors: Rui XU, Xinchen YE, Haojie LI, Lin LIN
  • Publication number: 20210390686
    Abstract: The invention discloses an unsupervised content-preserved domain adaptation method for multiple CT lung texture recognition, which belongs to the field of image processing and computer vision. The method takes a deep lung texture recognition network trained in advance on one type of CT data (the source domain) and adapts it to another type of CT image (the target domain). Using only unlabeled target-domain CT images, without manual labeling of typical lung textures, an adversarial learning mechanism and a specially designed content-consistency network module fine-tune the deep network so that it maintains high recognition performance on the target domain. The method not only saves development labor and time costs, but is also easy to implement and highly practical.
    Type: Application
    Filed: December 4, 2020
    Publication date: December 16, 2021
    Inventors: Rui XU, Xinchen YE, Haojie LI, Lin LIN
  • Publication number: 20210390339
    Abstract: The invention discloses a depth estimation and color correction method for monocular underwater images based on a deep neural network, which belongs to the field of image processing and computer vision. The framework consists of two parts: a style transfer subnetwork and a task subnetwork. The style transfer subnetwork, built on a generative adversarial network, transfers the apparent characteristics of underwater images onto land images to obtain abundant and effective synthetic labeled data. The task subnetwork combines the underwater depth estimation and color correction tasks in a stacked network structure and trains them collaboratively to improve their respective accuracies, while a domain adaptation strategy reduces the gap between synthetic and real underwater images, improving the network's ability to process real underwater images.
    Type: Application
    Filed: December 4, 2020
    Publication date: December 16, 2021
    Inventors: Xinchen YE, Rui XU, Xin FAN
  • Patent number: 11170502
    Abstract: Provided is a method based on a deep neural network to extract appearance and geometry features for pulmonary texture classification, which belongs to the technical fields of medical image processing and computer vision. Taking 217 pulmonary computed tomography (CT) images as original data, several groups of datasets are generated through a preprocessing procedure. Each group includes a CT image patch, a corresponding image patch containing geometry information, and a ground-truth label. A dual-branch residual network is constructed whose two branches separately take the CT image patches and the corresponding geometry image patches as input. Appearance and geometry information of pulmonary textures are learned by the dual-branch residual network and then fused to achieve high accuracy in pulmonary texture classification. Besides, the proposed network architecture is clear and easy to construct and implement.
    Type: Grant
    Filed: January 7, 2019
    Date of Patent: November 9, 2021
    Assignee: DALIAN UNIVERSITY OF TECHNOLOGY
    Inventors: Rui Xu, Xinchen Ye, Lin Lin, Haojie Li, Xin Fan, Zhongxuan Luo
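The dual-branch design can be illustrated with a toy numpy sketch: two residual branches process the appearance and geometry patches separately, and their features are fused by concatenation before classification. The single-block branches, concatenation fusion, and all array sizes are assumptions for illustration, not the patented architecture.

```python
import numpy as np

def residual_branch(x, w):
    """One hypothetical residual block: ReLU(x @ w) plus an identity skip
    (w must be square so the skip connection adds up)."""
    return np.maximum(x @ w, 0.0) + x

def dual_branch_classify(appearance, geometry, wa, wg, w_cls):
    """Run the appearance (CT patch) and geometry branches separately, then
    fuse by concatenation and score with a linear classifier head."""
    fused = np.concatenate([residual_branch(appearance, wa),
                            residual_branch(geometry, wg)])
    return fused @ w_cls
```

Keeping the two branches separate until fusion lets each specialize in its own modality, which is the stated reason the combined appearance-plus-geometry features classify pulmonary textures more accurately than either alone.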
  • Publication number: 20200273190
    Abstract: The present invention provides a method of dense 3D scene reconstruction based on a monocular camera and belongs to the technical field of image processing and computer vision. The method builds a reconstruction strategy that fuses traditional geometry-based depth computation with convolutional neural network (CNN) based depth prediction, and formulates a depth reconstruction model solved by an efficient algorithm to obtain a high-quality dense depth map. The system is easy to construct because of its low hardware requirements, and it achieves dense reconstruction using only ubiquitous monocular cameras.
    Type: Application
    Filed: January 7, 2019
    Publication date: August 27, 2020
    Inventors: Xinchen YE, Wei ZHONG, Zhihui WANG, Haojie LI, Lin LIN, Xin FAN, Zhongxuan LUO
  • Publication number: 20200265597
    Abstract: The present invention provides a method for estimating a high-quality depth map based on depth prediction and depth enhancement sub-networks, belonging to the technical field of image processing and computer vision. The method constructs a depth prediction sub-network to predict depth information from a color image and uses a depth enhancement sub-network to obtain a high-quality depth map by recovering the low-resolution depth map. The system is easy to construct, and the well-trained end-to-end network obtains the high-quality depth map directly from the corresponding color image. The algorithm is easy to implement. It uses the high-frequency component of the color image to recover the depth boundary information lost to the down-sampling operators in the depth prediction sub-network, finally obtaining high-quality, high-resolution depth maps, and uses a spatial pyramid pooling structure to increase the accuracy of depth prediction for multi-scale objects in the scene.
    Type: Application
    Filed: January 7, 2019
    Publication date: August 20, 2020
    Inventors: Xinchen YE, Wei ZHONG, Haojie LI, Lin LIN, Xin FAN, Zhongxuan LUO
  • Publication number: 20200258218
    Abstract: Provided is a method based on a deep neural network to extract appearance and geometry features for pulmonary texture classification, which belongs to the technical fields of medical image processing and computer vision. Taking 217 pulmonary computed tomography (CT) images as original data, several groups of datasets are generated through a preprocessing procedure. Each group includes a CT image patch, a corresponding image patch containing geometry information, and a ground-truth label. A dual-branch residual network is constructed whose two branches separately take the CT image patches and the corresponding geometry image patches as input. Appearance and geometry information of pulmonary textures are learned by the dual-branch residual network and then fused to achieve high accuracy in pulmonary texture classification. Besides, the proposed network architecture is clear and easy to construct and implement.
    Type: Application
    Filed: January 7, 2019
    Publication date: August 13, 2020
    Inventors: Rui XU, Xinchen YE, Lin LIN, Haojie LI, Xin FAN, Zhongxuan LUO