Patents by Inventor Haiyang MEI

Haiyang MEI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11816843
    Abstract: A method for segmenting camouflaged object images based on distraction mining is disclosed. PFNet consists, in sequence, of a multi-layer feature extractor, a positioning module, and a focusing module. The multi-layer feature extractor uses a conventional feature extraction network to obtain contextual features at different levels; the positioning module first uses the RGB feature information to make an initial estimate of the camouflaged object's position in the image; the focusing module mines and removes distraction information based on the image's RGB features and the preliminary position, and determines the boundary of the camouflaged object step by step. The method of the present invention introduces the concept of distraction information into camouflaged object segmentation and develops a new strategy for exploring and removing distraction information to aid segmentation of camouflaged object images.
    Type: Grant
    Filed: June 2, 2021
    Date of Patent: November 14, 2023
    Assignee: DALIAN UNIVERSITY OF TECHNOLOGY
    Inventors: Xin Yang, Haiyang Mei, Wen Dong, Xiaopeng Wei, Dengping Fan
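    A minimal PyTorch-style sketch of the positioning-then-focusing pipeline described in the abstract above. The module names, layer sizes, and the particular way distraction responses are separated and removed are illustrative assumptions, not the patented PFNet implementation.
    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PositioningModule(nn.Module):
        """Coarsely locates the camouflaged object from high-level RGB features."""
        def __init__(self, channels):
            super().__init__()
            self.conv = nn.Conv2d(channels, channels, 3, padding=1)
            self.predict = nn.Conv2d(channels, 1, 1)  # coarse position map (logits)

        def forward(self, feat):
            feat = F.relu(self.conv(feat))
            return feat, self.predict(feat)

    class FocusingModule(nn.Module):
        """Refines the object boundary by suppressing distraction responses."""
        def __init__(self, channels):
            super().__init__()
            self.fuse = nn.Conv2d(channels + 1, channels, 3, padding=1)
            self.predict = nn.Conv2d(channels, 1, 1)

        def forward(self, feat, coarse_map):
            coarse_map = F.interpolate(coarse_map, size=feat.shape[2:],
                                       mode="bilinear", align_corners=False)
            prob = torch.sigmoid(coarse_map)
            # Split features into likely-object and likely-background (distraction)
            # responses, then subtract the distraction part before refining.
            foreground = feat * prob
            distraction = feat * (1.0 - prob)
            fused = torch.cat([foreground - distraction, coarse_map], dim=1)
            refined = F.relu(self.fuse(fused))
            return refined, self.predict(refined)

    # Example: 64-channel backbone features at 1/16 resolution.
    feat = torch.randn(1, 64, 14, 14)
    pos_feat, coarse = PositioningModule(64)(feat)
    _, refined_map = FocusingModule(64)(pos_feat, coarse)
    ```
    In the multi-level setting the abstract describes, a focusing step like this would be applied once per backbone level, so the boundary is determined step by step.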
  • Patent number: 11756204
    Abstract: The invention belongs to the field of scene segmentation in computer vision and provides a depth-aware method for mirror segmentation. PDNet consists, in sequence, of a multi-layer feature extractor, a positioning module, and a delineating module. The multi-layer feature extractor uses a conventional feature extraction network to obtain contextual features; the positioning module combines RGB feature information with depth feature information to make an initial estimate of the mirror's position in the image; the delineating module uses the image's RGB features, combined with the depth information, to adjust and determine the boundary of the mirror. This is the first method to use both RGB and depth images for mirror segmentation. The present invention has also been tested further: even for large mirrors in complex environments, PDNet's segmentation results remain excellent, and the results at the mirror boundaries are also satisfactory.
    Type: Grant
    Filed: June 2, 2021
    Date of Patent: September 12, 2023
    Assignee: DALIAN UNIVERSITY OF TECHNOLOGY
    Inventors: Wen Dong, Xin Yang, Haiyang Mei, Xiaopeng Wei, Qiang Zhang
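    A minimal PyTorch-style sketch of the RGB-D positioning and delineating steps described in the abstract above. The fusion scheme, module names, and layer sizes are illustrative assumptions, not the patented PDNet implementation.
    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class RGBDPositioning(nn.Module):
        """Combines RGB and depth features to coarsely locate the mirror."""
        def __init__(self, channels):
            super().__init__()
            self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)
            self.predict = nn.Conv2d(channels, 1, 1)  # coarse mirror map (logits)

        def forward(self, rgb_feat, depth_feat):
            fused = F.relu(self.fuse(torch.cat([rgb_feat, depth_feat], dim=1)))
            return fused, self.predict(fused)

    class DelineatingModule(nn.Module):
        """Adjusts the coarse estimate near the mirror boundary using RGB and depth cues."""
        def __init__(self, channels):
            super().__init__()
            self.refine = nn.Conv2d(2 * channels + 1, channels, 3, padding=1)
            self.predict = nn.Conv2d(channels, 1, 1)

        def forward(self, rgb_feat, depth_feat, coarse_map):
            coarse_map = F.interpolate(coarse_map, size=rgb_feat.shape[2:],
                                       mode="bilinear", align_corners=False)
            x = torch.cat([rgb_feat, depth_feat, coarse_map], dim=1)
            return self.predict(F.relu(self.refine(x)))  # refined mirror logits

    # Example: RGB and depth features from two backbone branches.
    rgb_feat = torch.randn(1, 64, 28, 28)
    depth_feat = torch.randn(1, 64, 28, 28)
    fused, coarse = RGBDPositioning(64)(rgb_feat, depth_feat)
    mask_logits = DelineatingModule(64)(rgb_feat, depth_feat, coarse)
    ```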
  • Publication number: 20220230324
    Abstract: A method for segmenting camouflaged object images based on distraction mining is disclosed. PFNet consists, in sequence, of a multi-layer feature extractor, a positioning module, and a focusing module. The multi-layer feature extractor uses a conventional feature extraction network to obtain contextual features at different levels; the positioning module first uses the RGB feature information to make an initial estimate of the camouflaged object's position in the image; the focusing module mines and removes distraction information based on the image's RGB features and the preliminary position, and determines the boundary of the camouflaged object step by step. The method of the present invention introduces the concept of distraction information into camouflaged object segmentation and develops a new strategy for exploring and removing distraction information to aid segmentation of camouflaged object images.
    Type: Application
    Filed: June 2, 2021
    Publication date: July 21, 2022
    Inventors: Xin YANG, Haiyang MEI, Wen DONG, Xiaopeng WEI, Dengping FAN
  • Publication number: 20220230322
    Abstract: The invention belongs to the field of scene segmentation in computer vision and provides a depth-aware method for mirror segmentation. PDNet consists, in sequence, of a multi-layer feature extractor, a positioning module, and a delineating module. The multi-layer feature extractor uses a conventional feature extraction network to obtain contextual features; the positioning module combines RGB feature information with depth feature information to make an initial estimate of the mirror's position in the image; the delineating module uses the image's RGB features, combined with the depth information, to adjust and determine the boundary of the mirror. This is the first method to use both RGB and depth images for mirror segmentation. The present invention has also been tested further: even for large mirrors in complex environments, PDNet's segmentation results remain excellent, and the results at the mirror boundaries are also satisfactory.
    Type: Application
    Filed: June 2, 2021
    Publication date: July 21, 2022
    Inventors: Wen DONG, Xin YANG, Haiyang MEI, Xiaopeng WEI, Qiang ZHANG
  • Patent number: 11361534
    Abstract: The invention discloses a method for glass detection in real scenes, which belongs to the field of object detection. The present invention designs a combination method based on LCFI blocks to effectively integrate contextual features of different scales. Multiple LCFI combination blocks are embedded into the glass detection network GDNet to obtain large-scale contextual features at different levels, thereby realizing reliable and accurate glass detection in various scenarios. By fusing contextual features of different scales, the GDNet of the present invention can effectively predict the true glass regions in different scenes, successfully detect glass of different sizes, and effectively handle glass in various scenes. GDNet adapts well to the wide range of glass-region sizes in the glass detection dataset and achieves the highest accuracy among methods of the same type of object detection.
    Type: Grant
    Filed: March 13, 2020
    Date of Patent: June 14, 2022
    Assignee: DALIAN UNIVERSITY OF TECHNOLOGY
    Inventors: Xin Yang, Xiaopeng Wei, Qiang Zhang, Haiyang Mei, Yuanyuan Liu
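    A minimal PyTorch-style sketch of integrating contextual features over large receptive fields at several scales, in the spirit of the LCFI blocks named in the abstract above. The dilation rates, channel sizes, and fusion strategy are illustrative assumptions, not the patented GDNet implementation.
    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LargeFieldContextBlock(nn.Module):
        """Gathers context with parallel dilated convolutions and fuses the branches."""
        def __init__(self, channels, dilations=(1, 2, 4, 8)):
            super().__init__()
            # Each branch keeps the spatial size (padding == dilation for 3x3 kernels)
            # while enlarging the receptive field.
            self.branches = nn.ModuleList(
                [nn.Conv2d(channels, channels, 3, padding=d, dilation=d) for d in dilations]
            )
            self.fuse = nn.Conv2d(len(dilations) * channels, channels, 1)
            self.predict = nn.Conv2d(channels, 1, 1)  # glass map (logits)

        def forward(self, feat):
            ctx = [F.relu(branch(feat)) for branch in self.branches]
            fused = F.relu(self.fuse(torch.cat(ctx, dim=1)))
            return fused, self.predict(fused)

    # Example: embed one block per backbone level and average the upsampled predictions.
    feats = [torch.randn(1, 64, s, s) for s in (56, 28, 14)]
    blocks = [LargeFieldContextBlock(64) for _ in feats]
    maps = [F.interpolate(b(f)[1], size=(224, 224), mode="bilinear", align_corners=False)
            for b, f in zip(blocks, feats)]
    glass_logits = torch.stack(maps).mean(dim=0)
    ```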
  • Publication number: 20220148292
    Abstract: The invention discloses a method for glass detection in real scenes, which belongs to the field of object detection. The present invention designs a combination method based on LCFI blocks to effectively integrate contextual features of different scales. Multiple LCFI combination blocks are embedded into the glass detection network GDNet to obtain large-scale contextual features at different levels, thereby realizing reliable and accurate glass detection in various scenarios. By fusing contextual features of different scales, the GDNet of the present invention can effectively predict the true glass regions in different scenes, successfully detect glass of different sizes, and effectively handle glass in various scenes. GDNet adapts well to the wide range of glass-region sizes in the glass detection dataset and achieves the highest accuracy among methods of the same type of object detection.
    Type: Application
    Filed: March 13, 2020
    Publication date: May 12, 2022
    Inventors: Xin YANG, Xiaopeng WEI, Qiang ZHANG, Haiyang MEI, Yuanyuan LIU