Patents by Inventor Weidan YAN

Weidan YAN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250124687
    Abstract: The present application discloses a face image restoration method, a system, a storage medium, and a device. The restoration model adopted in the method starts from the structured information of a face: it generates a structured face graph based on features of the face image to be restored, and restores the face image using the structured face graph and a decoder. This addresses the difficulty of capturing and learning structured information with traditional convolution operations, improves the face restoration metrics, and enriches the visual effect.
    Type: Application
    Filed: May 20, 2024
    Publication date: April 17, 2025
    Inventors: Dengyin ZHANG, Weidan YAN, Chao ZHOU, Junhao YING, Yingying FENG, Yi LIU
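As a rough illustration of the idea in the abstract above (encoder features, a predicted structured face graph, and a decoder that restores the face from both), here is a minimal PyTorch sketch. The module layout, channel sizes, and the use of sigmoid structure maps as the "structured face graph" are assumptions for illustration, not the patented architecture.
```python
import torch
import torch.nn as nn

class FaceGraphRestorer(nn.Module):
    """Toy restoration model: encoder -> structured face graph -> decoder."""

    def __init__(self, feat_ch=64, graph_ch=16):
        super().__init__()
        # Encoder: extracts features from the degraded face image.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Graph head: predicts a "structured face graph", modelled here as a
        # stack of structure heatmaps (a hypothetical simplification).
        self.graph_head = nn.Conv2d(feat_ch, graph_ch, 1)
        # Decoder: restores the face from image features + structure maps.
        self.decoder = nn.Sequential(
            nn.Conv2d(feat_ch + graph_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, 3, 3, padding=1),
        )

    def forward(self, degraded):
        feats = self.encoder(degraded)
        graph = torch.sigmoid(self.graph_head(feats))   # structure maps in [0, 1]
        restored = self.decoder(torch.cat([feats, graph], dim=1))
        return restored, graph


if __name__ == "__main__":
    model = FaceGraphRestorer()
    out, graph = model(torch.randn(1, 3, 128, 128))
    print(out.shape, graph.shape)  # [1, 3, 128, 128] and [1, 16, 128, 128]
```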
  • Publication number: 20250124550
    Abstract: A method for processing a low-resolution degraded image, and a system, a storage medium, and a device therefor, are provided. The present disclosure adopts a dual-branch processing model comprising an image restoration branch and an image super-resolution branch, while a fusion module fuses and jointly learns image features from the two domains, thereby alleviating the error accumulation and high computational cost caused by two-stage processing methods.
    Type: Application
    Filed: May 31, 2024
    Publication date: April 17, 2025
    Inventors: Dengyin ZHANG, Weidan YAN, Can CHEN, Junhao YING, Qunjian DU, Yi LIU
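A minimal PyTorch sketch of the dual-branch idea described above: an image restoration branch, an image super-resolution branch, and a fusion module that learns from both feature domains in a single pass. The layer choices, channel widths, and 2x sub-pixel upsampling are illustrative assumptions rather than the claimed design.
```python
import torch
import torch.nn as nn

class DualBranchSR(nn.Module):
    """Toy dual-branch model: restoration branch + super-resolution branch,
    combined by a shared fusion module (channel sizes are illustrative)."""

    def __init__(self, ch=32, scale=2):
        super().__init__()
        self.restore_branch = nn.Sequential(        # learns degradation removal
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
        self.sr_branch = nn.Sequential(             # learns detail/texture cues
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
        # Fusion module: jointly learns from both feature domains.
        self.fusion = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
        # Single reconstruction head with sub-pixel upsampling.
        self.upsample = nn.Sequential(
            nn.Conv2d(ch, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, lr_degraded):
        fused = self.fusion(torch.cat(
            [self.restore_branch(lr_degraded), self.sr_branch(lr_degraded)], dim=1))
        return self.upsample(fused)


if __name__ == "__main__":
    sr = DualBranchSR()(torch.randn(1, 3, 64, 64))
    print(sr.shape)  # torch.Size([1, 3, 128, 128])
```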
  • Publication number: 20240311963
    Abstract: The present application discloses a text image super-resolution method based on text assistance, including: obtaining a low-resolution text image to be reconstructed; inputting the low-resolution text image into a pre-trained text image super-resolution model; and determining a reconstructed text image based on an output of the model. The method of constructing and training the text image super-resolution model includes: obtaining a text image dataset; and training the pre-constructed text image super-resolution model on the dataset to obtain the trained model. Compared to ordinary super-resolution models, this text image super-resolution model fuses text sequence features with image texture features and fully exploits the text information in the low-resolution image, which helps improve the quality of the reconstructed text image.
    Type: Application
    Filed: November 6, 2023
    Publication date: September 19, 2024
    Inventors: Dengyin ZHANG, Junhao YING, Weidan YAN
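The sketch below illustrates, in PyTorch, one way text-sequence features could be fused with image texture features before upsampling, as the abstract describes. Feeding recognizer character IDs through an embedding and broadcasting a pooled text vector over the feature map is an assumption made for brevity, not the method claimed in the application.
```python
import torch
import torch.nn as nn

class TextAssistedSR(nn.Module):
    """Toy text-image SR model that fuses text-sequence features with
    image texture features before upsampling (all sizes illustrative)."""

    def __init__(self, vocab=100, txt_dim=32, ch=32, scale=2):
        super().__init__()
        self.img_encoder = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Text branch: embeds a (possibly noisy) character sequence, e.g. the
        # output of an off-the-shelf recognizer, and pools it to one vector.
        self.txt_embed = nn.Embedding(vocab, txt_dim)
        self.txt_proj = nn.Linear(txt_dim, ch)
        self.fuse = nn.Conv2d(2 * ch, ch, 1)          # fuse text + texture features
        self.upsample = nn.Sequential(
            nn.Conv2d(ch, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, lr_img, char_ids):
        img_feat = self.img_encoder(lr_img)                          # B x C x H x W
        txt_feat = self.txt_proj(self.txt_embed(char_ids).mean(1))   # B x C
        txt_map = txt_feat[:, :, None, None].expand_as(img_feat)     # broadcast spatially
        return self.upsample(self.fuse(torch.cat([img_feat, txt_map], dim=1)))


if __name__ == "__main__":
    model = TextAssistedSR()
    out = model(torch.randn(2, 3, 16, 64), torch.randint(0, 100, (2, 12)))
    print(out.shape)  # torch.Size([2, 3, 32, 128])
```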
  • Patent number: 12086989
    Abstract: A medical image segmentation method includes: 1) acquiring a medical image data set; 2) acquiring, from the medical image data set, an original image and a real segmentation image of a target region in the original image as a pair to serve as an input data set of a pre-built constant-scaling segmentation network, the input data set including a training set, a verification set, and a test set; 3) training the constant-scaling segmentation network by using the training set to obtain a trained segmentation network model, and verifying the constant-scaling segmentation network by using the verification set, the constant-scaling segmentation network including a feature extraction module and a resolution amplifying module; and 4) inputting the original image to be segmented into the segmentation network model for segmentation to obtain a real segmentation image.
    Type: Grant
    Filed: March 17, 2022
    Date of Patent: September 10, 2024
    Inventors: Dengyin Zhang, Weidan Yan, Rong Zhao, Hong Zhu, Shuo Yang, Qunjian Du, Junjie Sun
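A toy PyTorch rendering of the pipeline in steps 3 and 4 above: a feature extraction module that shrinks the input by a constant factor and a resolution amplifying module that scales it back for per-pixel prediction. The constant scale factor (2x per stage), depth, and channel counts are assumptions for illustration, not the claimed network.
```python
import torch
import torch.nn as nn

class ConstantScalingSegNet(nn.Module):
    """Toy segmentation network: a feature extraction module that downscales
    by a constant factor, followed by a resolution amplifying module that
    restores the input resolution and emits per-pixel class logits."""

    def __init__(self, in_ch=1, ch=32, num_classes=2):
        super().__init__()
        # Feature extraction: each stage halves the spatial resolution.
        self.extract = nn.Sequential(
            nn.Conv2d(in_ch, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 2 * ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Resolution amplifying: each stage doubles the spatial resolution.
        self.amplify = nn.Sequential(
            nn.ConvTranspose2d(2 * ch, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, num_classes, 1),   # per-pixel class logits
        )

    def forward(self, image):
        return self.amplify(self.extract(image))


if __name__ == "__main__":
    logits = ConstantScalingSegNet()(torch.randn(1, 1, 256, 256))
    print(logits.shape)  # torch.Size([1, 2, 256, 256])
```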
  • Publication number: 20230206603
    Abstract: The present disclosure discloses a high-precision point cloud completion method based on deep learning, and a device thereof, comprising the following steps: introducing the dynamic kernel convolution PAConv into a feature extraction module, learning weight coefficients according to the positional relationship between each point and its neighboring points, and adaptively constructing the convolution kernel in combination with the weight matrix. A spatial attention mechanism is added to a feature fusion module, which helps the decoder better learn the relationships among features and thus better represent the feature information. A discriminator module comprises global and local attention discriminator modules, which use multi-layer fully connected networks to determine whether the generated results conform to the real point cloud distribution globally and locally, respectively, so as to optimize the generated results.
    Type: Application
    Filed: January 9, 2023
    Publication date: June 29, 2023
    Applicant: NANJING UNIVERSITY OF POSTS AND TELECOMMUNICATIONS
    Inventors: Dengyin ZHANG, Yingying FENG, Li HUANG, Weidan YAN
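The following PyTorch sketch shows a simplified position-adaptive point convolution in the spirit of the PAConv component mentioned above: a small score network predicts mixing coefficients from each neighbour's relative position, and the kernel is assembled from a bank of weight matrices. The bank size, feature shapes, and max pooling are illustrative assumptions; the spatial attention fusion and the global/local discriminators are not shown.
```python
import torch
import torch.nn as nn

class PositionAdaptiveConv(nn.Module):
    """Simplified position-adaptive point convolution (PAConv-style sketch)."""

    def __init__(self, in_ch=3, out_ch=64, num_kernels=4):
        super().__init__()
        # Bank of weight matrices to be mixed per neighbour.
        self.weight_bank = nn.Parameter(torch.randn(num_kernels, in_ch, out_ch) * 0.02)
        self.score_net = nn.Sequential(              # relative xyz -> kernel scores
            nn.Linear(3, 16), nn.ReLU(inplace=True),
            nn.Linear(16, num_kernels), nn.Softmax(dim=-1),
        )

    def forward(self, feats, rel_pos):
        # feats:   B x N x K x C_in  (features of K neighbours per point)
        # rel_pos: B x N x K x 3     (neighbour position minus centre position)
        scores = self.score_net(rel_pos)                         # B x N x K x M
        # Assemble a per-neighbour kernel as a score-weighted sum of the bank.
        kernels = torch.einsum("bnkm,mio->bnkio", scores, self.weight_bank)
        out = torch.einsum("bnki,bnkio->bnko", feats, kernels)   # apply kernels
        return out.max(dim=2).values                             # pool over neighbours


if __name__ == "__main__":
    conv = PositionAdaptiveConv()
    y = conv(torch.randn(2, 128, 16, 3), torch.randn(2, 128, 16, 3))
    print(y.shape)  # torch.Size([2, 128, 64])
```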
  • Patent number: 11663705
    Abstract: The present disclosure discloses an image haze removal method and apparatus, and a device. The method includes: acquiring a hazy image to be processed; and obtaining a haze-free image corresponding to the hazy image by inputting the hazy image into a pre-trained haze removal model. The present disclosure uses residual dual attention fusion modules as the basic modules of the neural network, so that each feature map can capture pixel-level features while enhancing global dependencies, thus improving the image dehazing effect.
    Type: Grant
    Filed: November 15, 2022
    Date of Patent: May 30, 2023
    Assignee: NANJING UNIVERSITY OF POSTS AND TELECOMMUNICATIONS
    Inventors: Dengyin Zhang, Hong Zhu, Wensheng Han, Weidan Yan, Yingjie Kou
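A minimal PyTorch sketch of a residual block that combines channel attention and pixel attention, in the spirit of the "residual dual attention fusion" modules described in the abstract above. The exact attention design, fusion strategy, and channel sizes here are assumptions, not the patented module.
```python
import torch
import torch.nn as nn

class ResidualDualAttentionBlock(nn.Module):
    """Toy residual block with channel attention and pixel attention."""

    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
        # Channel attention: global pooling -> per-channel weights.
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // 8, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // 8, ch, 1), nn.Sigmoid(),
        )
        # Pixel attention: per-pixel weights capture spatially varying haze.
        self.pixel_att = nn.Sequential(
            nn.Conv2d(ch, ch // 8, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // 8, 1, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        feats = self.body(x)
        feats = feats * self.channel_att(feats)   # reweight channels
        feats = feats * self.pixel_att(feats)     # reweight pixels
        return x + feats                          # residual connection


if __name__ == "__main__":
    block = ResidualDualAttentionBlock()
    print(block(torch.randn(1, 64, 64, 64)).shape)  # torch.Size([1, 64, 64, 64])
```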
  • Publication number: 20230089280
    Abstract: The present disclosure discloses an image haze removal method and apparatus, and a device. The method includes: acquiring a hazy image to be processed; and obtaining a haze-free image corresponding to the hazy image by inputting the hazy image into a pre-trained haze removal model. The present disclosure uses residual dual attention fusion modules as the basic modules of the neural network, so that each feature map can capture pixel-level features while enhancing global dependencies, thus improving the image dehazing effect.
    Type: Application
    Filed: November 15, 2022
    Publication date: March 23, 2023
    Applicant: NANJING UNIVERSITY OF POSTS AND TELECOMMUNICATIONS
    Inventors: Dengyin ZHANG, Hong ZHU, Wensheng HAN, Weidan YAN, Yingjie KOU
  • Patent number: 11580646
    Abstract: A medical image segmentation method based on a U-Net, including: sending the real segmentation image and the original image to a generative adversarial network for data enhancement to generate a composite image with a label; then adding the composite image to the original data set to obtain an expanded data set, and sending the expanded data set to an improved multi-feature fusion segmentation network for training. A Dilated Convolution Module is added between the shallow and deep feature skip connections of the segmentation network to obtain receptive fields of different sizes, which enhances the fusion of detail information and deep semantics, improves adaptability to the size of the segmentation target, and improves medical image segmentation accuracy. The over-fitting problem that occurs when training the segmentation network is alleviated by using the data set expanded by the generative adversarial network.
    Type: Grant
    Filed: January 5, 2022
    Date of Patent: February 14, 2023
    Assignee: NANJING UNIVERSITY OF POSTS AND TELECOMMUNICATIONS
    Inventors: Dengyin Zhang, Rong Zhao, Weidan Yan
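To illustrate the Dilated Convolution Module placed between shallow and deep skip connections, here is a minimal PyTorch sketch that applies parallel convolutions with different dilation rates to a skip feature map and fuses them back to the original width. The rates (1, 2, 4) and concatenation-based fusion are assumptions; the GAN-based data augmentation is not shown.
```python
import torch
import torch.nn as nn

class DilatedSkipModule(nn.Module):
    """Toy dilated convolution module for a U-Net skip connection: parallel
    branches with different dilation rates yield receptive fields of
    different sizes, then a 1x1 conv fuses them back to the skip width."""

    def __init__(self, ch=64, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(ch, ch, 3, padding=r, dilation=r),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        self.fuse = nn.Conv2d(len(rates) * ch, ch, 1)   # back to skip width

    def forward(self, skip_feats):
        multi_scale = torch.cat([b(skip_feats) for b in self.branches], dim=1)
        return self.fuse(multi_scale)


if __name__ == "__main__":
    # Feature map taken from a shallow encoder stage of a U-Net.
    skip = torch.randn(1, 64, 128, 128)
    print(DilatedSkipModule()(skip).shape)  # torch.Size([1, 64, 128, 128])
```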
  • Publication number: 20220398737
    Abstract: A medical image segmentation method includes: 1) acquiring a medical image data set; 2) acquiring, from the medical image data set, an original image and a real segmentation image of a target region in the original image as a pair to serve as an input data set of a pre-built constant-scaling segmentation network, the input data set including a training set, a verification set, and a test set; 3) training the constant-scaling segmentation network by using the training set to obtain a trained segmentation network model, and verifying the constant-scaling segmentation network by using the verification set, the constant-scaling segmentation network including a feature extraction module and a resolution amplifying module; and 4) inputting the original image to be segmented into the segmentation network model for segmentation to obtain a real segmentation image.
    Type: Application
    Filed: March 17, 2022
    Publication date: December 15, 2022
    Inventors: Dengyin ZHANG, Weidan YAN, Rong ZHAO, Hong ZHU, Shuo YANG, Qunjian DU, Junjie SUN
  • Publication number: 20220309674
    Abstract: A medical image segmentation method based on a U-Net, including: sending the real segmentation image and the original image to a generative adversarial network for data enhancement to generate a composite image with a label; then adding the composite image to the original data set to obtain an expanded data set, and sending the expanded data set to an improved multi-feature fusion segmentation network for training. A Dilated Convolution Module is added between the shallow and deep feature skip connections of the segmentation network to obtain receptive fields of different sizes, which enhances the fusion of detail information and deep semantics, improves adaptability to the size of the segmentation target, and improves medical image segmentation accuracy. The over-fitting problem that occurs when training the segmentation network is alleviated by using the data set expanded by the generative adversarial network.
    Type: Application
    Filed: January 5, 2022
    Publication date: September 29, 2022
    Inventors: Dengyin ZHANG, Rong ZHAO, Weidan YAN
  • Publication number: 20220291956
    Abstract: A distributed container scheduling method includes: monitoring container creation events in a Kubernetes API-Server in real time, and validating the created container once a new container creation event is detected; updating a container scheduling queue with containers that pass the validation; when the container scheduling queue is empty, performing no operation until validated containers are added to the queue; when the container scheduling queue is not empty, reading the containers to be scheduled from the queue in sequence, and selecting, from a Kubernetes cluster, an optimal node for each container to be scheduled to generate a container scheduling two-tuple; and scheduling, based on the container scheduling two-tuple, the containers to be scheduled to the optimal node to complete the distributed container scheduling operation.
    Type: Application
    Filed: March 22, 2022
    Publication date: September 15, 2022
    Inventors: Dengyin ZHANG, Junjiang LI, Zijie LIU, Yi CHENG, Yingjie KOU, Hong ZHU, Weidan YAN
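The plain-Python sketch below mirrors the event flow in the abstract above: consume container creation events, validate them, queue the ones that pass, pick an optimal node, and emit a (container, node) scheduling two-tuple. The node inventory, validation rule, and "most free CPU that fits" scoring are placeholders invented for illustration; a real implementation would watch the Kubernetes API-Server and bind pods through it, which is not shown here.
```python
import queue

# Hypothetical node inventory; in a real deployment this would be read from
# the Kubernetes API server rather than hard-coded.
NODES = {
    "node-a": {"cpu_free": 4.0, "mem_free": 8.0},
    "node-b": {"cpu_free": 2.0, "mem_free": 16.0},
}

def validate(container):
    """Reject malformed creation events (placeholder check)."""
    return "name" in container and container.get("cpu", 0) > 0 and container.get("mem", 0) > 0

def select_optimal_node(container, nodes):
    """Pick the node with the most free CPU that still fits the request
    (a stand-in for whatever scoring the patented method actually uses)."""
    candidates = [(spec["cpu_free"], name) for name, spec in nodes.items()
                  if spec["cpu_free"] >= container["cpu"]
                  and spec["mem_free"] >= container["mem"]]
    return max(candidates)[1] if candidates else None

def scheduling_loop(events):
    """Consume container creation events, validate and queue them, and emit
    (container, node) two-tuples for binding."""
    pending = queue.Queue()
    for event in events:            # stand-in for watching the API server
        if validate(event):
            pending.put(event)      # only validated containers are queued
        while not pending.empty():  # when the queue is empty, do nothing
            container = pending.get()
            node = select_optimal_node(container, NODES)
            if node is not None:
                NODES[node]["cpu_free"] -= container["cpu"]   # reserve resources
                NODES[node]["mem_free"] -= container["mem"]
                yield (container["name"], node)               # the scheduling two-tuple

if __name__ == "__main__":
    events = [
        {"name": "web-1", "cpu": 1.0, "mem": 2.0},
        {"name": "db-1", "cpu": 3.0, "mem": 4.0},
        {"bad": True},   # fails validation, never queued
    ]
    for binding in scheduling_loop(events):
        print(binding)   # e.g. ('web-1', 'node-a')
```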