Patents by Inventor Xiaopeng Wei

Xiaopeng Wei has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220230324
    Abstract: A method for segmenting camouflaged object images based on distraction mining is disclosed. The disclosed network, PFNet, comprises in sequence a multi-layer feature extractor, a positioning module, and a focusing module. The multi-layer feature extractor uses a conventional feature extraction network to obtain contextual features at different levels; the positioning module first uses the RGB feature information to make an initial estimate of the camouflaged object's position in the image; the focusing module then mines distraction information from the RGB features and the preliminary position estimate, removes it, and determines the boundary of the camouflaged object step by step. The method introduces the concept of distraction information into camouflaged object segmentation and develops a new strategy for exploring and removing distraction information to aid segmentation of camouflaged object images. (A conceptual sketch follows this entry.)
    Type: Application
    Filed: June 2, 2021
    Publication date: July 21, 2022
    Inventors: Xin YANG, Haiyang MEI, Wen DONG, Xiaopeng WEI, Dengping FAN
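    A minimal Python/PyTorch sketch of the positioning-then-focusing idea described above; the module names, channel sizes, and layer choices here are illustrative assumptions, not the patented PFNet implementation:
      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      class Positioning(nn.Module):
          """Coarsely locate the camouflaged object from the deepest backbone feature."""
          def __init__(self, ch):
              super().__init__()
              self.head = nn.Conv2d(ch, 1, kernel_size=3, padding=1)

          def forward(self, feat):
              return self.head(feat)  # coarse position map (logits)

      class Focusing(nn.Module):
          """Refine one level: combine the coarse map with features to mine and
          suppress distraction regions, sharpening the object boundary."""
          def __init__(self, ch):
              super().__init__()
              self.refine = nn.Sequential(
                  nn.Conv2d(ch + 1, ch, 3, padding=1), nn.ReLU(inplace=True),
                  nn.Conv2d(ch, 1, 3, padding=1))

          def forward(self, feat, coarse_map):
              coarse = F.interpolate(coarse_map, size=feat.shape[2:],
                                     mode="bilinear", align_corners=False)
              return self.refine(torch.cat([feat, coarse], dim=1))

      # Usage idea: apply Positioning to the deepest feature of any multi-layer
      # feature extractor, then apply Focusing level by level on finer features.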
  • Publication number: 20220230322
    Abstract: The invention belongs to the field of scene segmentation in computer vision and provides a depth-aware method for mirror segmentation. The disclosed network, PDNet, comprises in sequence a multi-layer feature extractor, a positioning module, and a delineating module. The multi-layer feature extractor uses a conventional feature extraction network to obtain contextual features; the positioning module combines RGB feature information with depth feature information to make an initial estimate of the mirror's position in the image; the delineating module then uses the RGB feature information, combined with depth information, to adjust and determine the mirror boundary. This is the first method to use both an RGB image and a depth image for mirror segmentation. Further testing shows that, even for large mirrors in complex environments, PDNet's segmentation results remain excellent, and its results at mirror boundaries are also satisfactory. (A conceptual sketch follows this entry.)
    Type: Application
    Filed: June 2, 2021
    Publication date: July 21, 2022
    Inventors: Wen DONG, Xin YANG, Haiyang MEI, Xiaopeng WEI, Qiang ZHANG
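    A minimal Python/PyTorch sketch of RGB-D positioning and delineating as described above; the fusion scheme, module names, and channel sizes are illustrative assumptions rather than the patented PDNet design:
      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      class RGBDPositioning(nn.Module):
          """Fuse RGB and depth features to coarsely locate the mirror."""
          def __init__(self, ch):
              super().__init__()
              self.fuse = nn.Conv2d(2 * ch, ch, kernel_size=1)
              self.head = nn.Conv2d(ch, 1, kernel_size=3, padding=1)

          def forward(self, rgb_feat, depth_feat):
              fused = torch.relu(self.fuse(torch.cat([rgb_feat, depth_feat], dim=1)))
              return self.head(fused)  # coarse mirror map (logits)

      class Delineating(nn.Module):
          """Refine the mirror boundary at one level with depth-aware guidance."""
          def __init__(self, ch):
              super().__init__()
              self.refine = nn.Sequential(
                  nn.Conv2d(2 * ch + 1, ch, 3, padding=1), nn.ReLU(inplace=True),
                  nn.Conv2d(ch, 1, 3, padding=1))

          def forward(self, rgb_feat, depth_feat, prev_map):
              prev = F.interpolate(prev_map, size=rgb_feat.shape[2:],
                                   mode="bilinear", align_corners=False)
              return self.refine(torch.cat([rgb_feat, depth_feat, prev], dim=1))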
  • Publication number: 20220215662
    Abstract: The present invention belongs to the technical field of computer vision and provides a video semantic segmentation method based on active learning, comprising an image semantic segmentation module, a data selection module based on active learning, and a label propagation module. The image semantic segmentation module produces image segmentation results and extracts the high-level features required by the data selection module; the data selection module selects an information-rich data subset at the image level and selects pixel blocks to be labeled at the pixel level; and the label propagation module transfers results from the image task to the video task and quickly completes the video segmentation to obtain weakly supervised data. The present invention can rapidly generate weakly supervised data sets, reduce the cost of producing the data, and optimize the performance of a semantic segmentation network. (A conceptual sketch follows this entry.)
    Type: Application
    Filed: December 21, 2021
    Publication date: July 7, 2022
    Inventors: Xin YANG, Xiaopeng WEI, Yu QIAO, Qiang ZHANG, Baocai YIN, Haiyin PIAO, Zhenjun DU
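    A small Python sketch of the two-level selection idea (whole frames first, then pixel blocks) using prediction entropy as the informativeness measure; the entropy criterion, block size, and function names are assumptions for illustration only:
      import numpy as np

      def select_frames(entropy_maps, budget):
          """Pick the frames whose mean prediction entropy is highest (image level)."""
          scores = [float(np.mean(e)) for e in entropy_maps]
          return list(np.argsort(scores)[::-1][:budget])

      def select_pixel_blocks(entropy_map, block=32, budget=10):
          """Within one frame, pick the most uncertain block x block regions (pixel level)."""
          h, w = entropy_map.shape
          candidates = []
          for y in range(0, h - block + 1, block):
              for x in range(0, w - block + 1, block):
                  score = float(entropy_map[y:y + block, x:x + block].mean())
                  candidates.append((score, y, x))
          candidates.sort(reverse=True)
          return [(y, x) for _, y, x in candidates[:budget]]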
  • Publication number: 20220212339
    Abstract: The present invention belongs to the technical field of computer vision and provides an active data selection method for robot grasping. The core of the present invention is a data selection strategy module, which shares the feature extraction layers of the backbone network and fuses features from three receptive fields of different sizes. By reusing the feature extraction module, the present invention greatly reduces the number of parameters that need to be added. During training of the main grasp detection network, the data selection strategy module is trained synchronously, forming an end-to-end model. The present invention exploits the naturally existing split between labeled and unlabeled data and makes full use of both. Even when the amount of labeled data is small, the network can still be trained effectively. (A conceptual sketch follows this entry.)
    Type: Application
    Filed: December 29, 2021
    Publication date: July 7, 2022
    Inventors: Xin YANG, Boyan WEI, Baocai YIN, Qiang ZHANG, Xiaopeng WEI, Zhenjun DU
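    A Python/PyTorch sketch of a selection branch that reuses backbone features and fuses three receptive fields of different sizes into one informativeness score; the kernel sizes, pooling, and scoring head are illustrative assumptions, not the patented module:
      import torch
      import torch.nn as nn

      class SelectionHead(nn.Module):
          """Data-selection branch sharing the backbone's features: three branches with
          different receptive fields are pooled and fused into a per-sample score."""
          def __init__(self, ch):
              super().__init__()
              self.branches = nn.ModuleList([
                  nn.Conv2d(ch, ch // 4, kernel_size=k, padding=k // 2)
                  for k in (1, 3, 5)])
              self.score = nn.Linear(3 * (ch // 4), 1)

          def forward(self, backbone_feat):
              pooled = [b(backbone_feat).mean(dim=(2, 3)) for b in self.branches]
              return torch.sigmoid(self.score(torch.cat(pooled, dim=1)))  # informativeness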
  • Patent number: 11361534
    Abstract: The invention discloses a method for glass detection in real scenes, which belongs to the field of object detection. The present invention designs a combination method based on LCFI blocks to effectively integrate context features of different scales. Multiple LCFI combination blocks are embedded into the glass detection network GDNet to obtain large-scale context features at different levels, thereby realizing reliable and accurate glass detection in various scenarios. By fusing context features of different scales, GDNet can effectively predict the true extent of glass in different scenes, successfully detect glass regions of different sizes, and handle glass in a variety of scenes. GDNet adapts well to the wide range of glass-region sizes in the glass detection dataset and achieves the highest accuracy among methods for this type of object detection. (A conceptual sketch follows this entry.)
    Type: Grant
    Filed: March 13, 2020
    Date of Patent: June 14, 2022
    Assignee: DALIAN UNIVERSITY OF TECHNOLOGY
    Inventors: Xin Yang, Xiaopeng Wei, Qiang Zhang, Haiyang Mei, Yuanyuan Liu
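    A Python/PyTorch sketch of multi-scale context integration in the spirit of an LCFI block; dilated convolutions are used here as a simple stand-in, and the channel counts and dilation rates are assumptions rather than the patented block design:
      import torch
      import torch.nn as nn

      class LCFIBlock(nn.Module):
          """Gather context at several scales with parallel dilated convolutions,
          then fuse the branches back to the input channel width."""
          def __init__(self, ch, dilations=(1, 2, 4, 8)):
              super().__init__()
              self.branches = nn.ModuleList([
                  nn.Conv2d(ch, ch, kernel_size=3, padding=d, dilation=d)
                  for d in dilations])
              self.fuse = nn.Conv2d(len(dilations) * ch, ch, kernel_size=1)

          def forward(self, x):
              ctx = [torch.relu(b(x)) for b in self.branches]
              return torch.relu(self.fuse(torch.cat(ctx, dim=1)))  # multi-scale context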
  • Publication number: 20220148292
    Abstract: The invention discloses a method for glass detection in real scenes, which belongs to the field of object detection. The present invention designs a combination method based on LCFI blocks to effectively integrate context features of different scales. Multiple LCFI combination blocks are embedded into the glass detection network GDNet to obtain large-scale context features at different levels, thereby realizing reliable and accurate glass detection in various scenarios. By fusing context features of different scales, GDNet can effectively predict the true extent of glass in different scenes, successfully detect glass regions of different sizes, and handle glass in a variety of scenes. GDNet adapts well to the wide range of glass-region sizes in the glass detection dataset and achieves the highest accuracy among methods for this type of object detection.
    Type: Application
    Filed: March 13, 2020
    Publication date: May 12, 2022
    Inventors: Xin YANG, Xiaopeng WEI, Qiang ZHANG, Haiyang MEI, Yuanyuan LIU
  • Patent number: 11195044
    Abstract: The invention belongs to the field of computer vision technology and provides a fully automatic natural image matting method. For matting a single image, the method mainly consists of extracting high-level semantic features and low-level structural features, filtering pyramid features, extracting spatial structure information, and late optimization with a discriminator network. The invention can generate an accurate alpha matte without any auxiliary information, saving researchers the time needed to annotate auxiliary inputs and reducing interaction time for users. (A conceptual sketch follows this entry.)
    Type: Grant
    Filed: May 13, 2020
    Date of Patent: December 7, 2021
    Assignee: DALIAN UNIVERSITY OF TECHNOLOGY
    Inventors: Xin Yang, Xiaopeng Wei, Qiang Zhang, Yuhao Liu, Yu Qiao
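    A Python/PyTorch sketch of a two-branch matting network that fuses high-level semantic and low-level structural features into an alpha matte; the layer choices are illustrative assumptions, and the pyramid feature filtering and discriminator-based late optimization mentioned in the abstract are omitted:
      import torch
      import torch.nn as nn

      class MattingNet(nn.Module):
          """Predict an alpha matte from RGB alone: a deep, coarse semantic branch and
          a shallow, fine structural branch are fused into a single-channel matte."""
          def __init__(self):
              super().__init__()
              self.semantic = nn.Sequential(
                  nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                  nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True))
              self.structural = nn.Sequential(
                  nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True))
              self.head = nn.Conv2d(32 + 64, 1, 3, padding=1)

          def forward(self, img):
              sem = nn.functional.interpolate(self.semantic(img), size=img.shape[2:],
                                              mode="bilinear", align_corners=False)
              fused = torch.cat([self.structural(img), sem], dim=1)
              return torch.sigmoid(self.head(fused))  # alpha matte in [0, 1]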
  • Publication number: 20210216806
    Abstract: The invention belongs to the field of computer vision technology and provides a fully automatic natural image matting method. For matting a single image, the method mainly consists of extracting high-level semantic features and low-level structural features, filtering pyramid features, extracting spatial structure information, and late optimization with a discriminator network. The invention can generate an accurate alpha matte without any auxiliary information, saving researchers the time needed to annotate auxiliary inputs and reducing interaction time for users.
    Type: Application
    Filed: May 13, 2020
    Publication date: July 15, 2021
    Inventors: Xin YANG, Xiaopeng WEI, Qiang ZHANG, Yuhao LIU, Yu QIAO
  • Publication number: 20200287704
    Abstract: The invention relates to the field of DNA strand displacement and provides a color image encryption method based on a DNA strand displacement analog circuit. First, a reaction module with a light-emitting group and a quenching group is designed in the Visual DSD software; by exploiting the equivalence between a DNA strand displacement reaction module and an ideal reaction module, an analog circuit built from DNA strand displacement reaction modules can reproduce the dynamic characteristics of an ideal reaction network built from ideal reaction modules, and this idealized reaction network can describe the Rössler chaotic system. Second, the data generated by the DNA strand displacement analog circuit is extended and converted into a chaotic sequence matching the size of the plaintext image; finally, color image encryption is achieved through scrambling and diffusion operations. (A conceptual sketch follows this entry.)
    Type: Application
    Filed: May 22, 2020
    Publication date: September 10, 2020
    Inventors: Qiang Zhang, Chengye Zou, Changjun Zhou, Xiaopeng Wei
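    A Python/NumPy sketch of the scrambling-and-diffusion step; a logistic map stands in for the chaotic sequence, which the patent instead derives from a DNA strand displacement simulation of the Rössler system, and all parameter values are assumptions:
      import numpy as np

      def chaotic_sequence(n, x0=0.3141, mu=3.99):
          """Logistic-map stand-in for the chaotic key sequence."""
          seq = np.empty(n)
          x = x0
          for i in range(n):
              x = mu * x * (1.0 - x)
              seq[i] = x
          return seq

      def encrypt(img):
          """Scramble pixel positions, then diffuse pixel values with an XOR mask.
          img is expected to be an H x W x 3 uint8 color image."""
          flat = img.reshape(-1, img.shape[-1])
          key = chaotic_sequence(flat.shape[0])
          perm = np.argsort(key)                        # scrambling: chaotic permutation
          scrambled = flat[perm]
          mask = (key * 255).astype(np.uint8)[:, None]  # diffusion: chaotic byte mask
          cipher = (scrambled ^ mask).reshape(img.shape)
          return cipher, perm                           # perm (and the seed) are needed to decrypt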
  • Patent number: 8548154
    Abstract: The present invention discloses a ringtone uploading service method and system used in the color ring back tone (CRBT) service. In the present invention, when a ringtone system successfully uploads a ringtone to at least one but not all ringtone servers, it records the ringtone uploading information, sends a message reporting the successful upload to the uploading terminal, and re-uploads the ringtone to the ringtone servers to which the ringtone has not yet been successfully uploaded. The proposed ringtone uploading solution greatly improves the success rate of ringtone uploading. (A conceptual sketch follows this entry.)
    Type: Grant
    Filed: September 14, 2009
    Date of Patent: October 1, 2013
    Assignee: ZTE Corporation
    Inventors: Meixia Liu, Xiaopeng Wei
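    A plain-Python sketch of the re-upload logic described above; upload, notify_terminal, and record are hypothetical callables standing in for the ringtone system's actual interfaces:
      def upload_ringtone(ringtone, servers, upload, notify_terminal, record):
          """Upload to every server; if at least one succeeds, report success to the
          terminal, record the failures, and retry only the servers that failed."""
          failed = [s for s in servers if not upload(ringtone, s)]
          if len(failed) < len(servers):            # at least one server succeeded
              notify_terminal("upload successful")
              record(ringtone, failed)              # remember where re-upload is needed
              for s in list(failed):
                  if upload(ringtone, s):
                      failed.remove(s)
          return failed                             # servers still pending re-upload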
  • Publication number: 20110235795
    Abstract: The present invention discloses a ringtone uploading service method and system used in the color ring back tone (CRBT) service. In the present invention, when a ringtone system successfully uploads a ringtone to at least one but not all ringtone servers, it records the ringtone uploading information, sends a message reporting the successful upload to the uploading terminal, and re-uploads the ringtone to the ringtone servers to which the ringtone has not yet been successfully uploaded. The proposed ringtone uploading solution greatly improves the success rate of ringtone uploading.
    Type: Application
    Filed: September 14, 2009
    Publication date: September 29, 2011
    Applicant: ZTE Corporation
    Inventors: Meixia Liu, Xiaopeng Wei
  • Publication number: 20100082800
    Abstract: Multiple features of email traffic are extracted and analyzed. Feature vectors comprising these features are created, and cluster analysis is used to track spam generation even from dynamically changing or aliased IP addresses. (A conceptual sketch follows this entry.)
    Type: Application
    Filed: September 29, 2008
    Publication date: April 1, 2010
    Applicant: YAHOO! INC
    Inventors: Stanley WEI, Xiaopeng Wei, Vishwanath Tumkur RAMARAO, Dragomir YANKOV
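    A Python sketch of grouping senders by traffic-feature similarity; k-means from scikit-learn is used here as a stand-in for whatever cluster analysis the patent applies, and the feature set and cluster count are assumptions:
      import numpy as np
      from sklearn.cluster import KMeans

      def cluster_senders(feature_vectors, n_clusters=8):
          """Group senders by traffic-feature similarity so a spam campaign is tracked
          by its behavioral cluster rather than by a (possibly changing) IP address."""
          X = np.asarray(feature_vectors, dtype=float)
          labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
          return labels  # cluster label per sender/feature vector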