Patents by Inventor Xiaopeng Wei

Xiaopeng Wei has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12228941
    Abstract: The present invention proposes an active scene mapping method based on constraint guidance and space optimization strategies, comprising a global planning stage and a local planning stage. In the global planning stage, the next exploration goal of a robot is calculated to guide the robot to explore a scene. After the next exploration goal is determined, the local planning stage generates specific actions according to that goal, the position of the robot, and the constructed occupancy map to drive the robot toward the goal, and observation data is collected to update the occupancy map. The present invention effectively avoids long-distance round trips during exploration, allowing the robot to balance information gain against movement cost and thereby improve active mapping efficiency.
    Type: Grant
    Filed: September 11, 2023
    Date of Patent: February 18, 2025
    Assignee: DALIAN UNIVERSITY OF TECHNOLOGY
    Inventors: Xin Yang, Xuefeng Yin, Baocai Yin, Xiaopeng Wei
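The goal-selection idea in the abstract above — trading off information gain against movement cost when picking the next exploration goal — can be sketched as follows. This is an illustrative toy, not the patented algorithm; the scoring function, the weights `alpha`/`beta`, and the candidate format are assumptions.

```python
# Hypothetical sketch: score candidate exploration goals so the robot
# balances expected information gain against movement cost, avoiding
# long-distance round trips. Weights are illustrative assumptions.
def score_goal(info_gain, distance, alpha=1.0, beta=0.5):
    """Higher score = better next exploration goal.

    info_gain: estimated unknown map cells observable from the goal.
    distance:  path length from the robot's current pose to the goal.
    """
    return alpha * info_gain - beta * distance

def pick_next_goal(candidates):
    """candidates: list of (goal, info_gain, distance) tuples."""
    return max(candidates, key=lambda c: score_goal(c[1], c[2]))[0]

# A nearby low-gain goal loses to a farther goal whose gain outweighs the trip.
goals = [((2, 3), 40.0, 5.0), ((8, 1), 55.0, 20.0), ((4, 7), 30.0, 2.0)]
print(pick_next_goal(goals))  # → (8, 1)
```

In a real planner the chosen goal would then be handed to the local planning stage, which converts it into concrete actions against the occupancy map.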
  • Publication number: 20240355140
    Abstract: The present invention belongs to the technical field of computer vision and proposes a lightweight real-time emotion analysis method incorporating eye tracking. In the method, time-synchronized gray frames and event frames are acquired through event-based cameras and input to a frame branch and an event branch, respectively; the frame branch extracts spatial features by convolution operations, and the event branch extracts temporal features through conv-SNN blocks; the frame branch provides a guide attention mechanism for the event branch; and the spatial features and the temporal features are integrated by fully connected layers. The final output is the average of the n fully connected layer outputs, which represents the final expression. The method can recognize the emotional expression of any stage in various complex lighting-change scenarios and, with limited accuracy loss, shortens the emotion recognition time to achieve "real-time" user emotion analysis.
    Type: Application
    Filed: June 20, 2022
    Publication date: October 24, 2024
    Inventors: Xin YANG, Xiaopeng WEI, Bo DONG, Haiwei ZHANG
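The fusion step the abstract describes — integrating spatial and temporal features through fully connected layers and averaging the n outputs — can be sketched minimally. Feature dimensions, the number of heads `n_heads`, and the number of emotion classes are illustrative assumptions, not values from the patent.

```python
import numpy as np

# Hedged sketch of only the fusion/averaging step: features from a frame
# branch (convolutions) and an event branch (conv-SNN blocks) are
# concatenated, passed through n fully connected heads, and the final
# expression logits are the average of the n head outputs.
rng = np.random.default_rng(1)

n_heads, n_classes, feat_dim = 4, 7, 32
spatial = rng.standard_normal(16)   # stand-in for frame-branch features
temporal = rng.standard_normal(16)  # stand-in for event-branch features

fused = np.concatenate([spatial, temporal])           # (32,)
heads = [rng.standard_normal((n_classes, feat_dim)) for _ in range(n_heads)]
outputs = np.stack([W @ fused for W in heads])        # (n_heads, n_classes)
logits = outputs.mean(axis=0)                         # average of n FC outputs
print(int(np.argmax(logits)))                         # predicted expression id
```

Averaging several heads rather than relying on one is a cheap way to stabilize the prediction, which fits the method's lightweight, real-time goal.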
  • Patent number: 12118096
    Abstract: The present disclosure discloses an image encryption method based on multi-scale compressed sensing and a Markov model. According to the difference in information carried by the low-frequency and high-frequency coefficients of an image, different sampling rates are set for the two, which effectively improves the reconstruction quality of the decrypted image. The decrypted image obtained by the present disclosure has higher quality than that generated by existing schemes, yielding a better visual effect and more complete original image information.
    Type: Grant
    Filed: September 20, 2022
    Date of Patent: October 15, 2024
    Assignee: DALIAN UNIVERSITY OF TECHNOLOGY
    Inventors: Qiang Zhang, Bin Wang, Pengfei Wang, Yuandi Shi, Rongrong Chen, Xiaopeng Wei
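The rate-allocation idea in the abstract above — sampling the information-rich low-frequency band more densely than the high-frequency bands — can be sketched with plain compressed-sensing measurements. This covers only the sampling step (not the encryption or the Markov model); the rates, the random Gaussian measurement matrices, and the subband stand-ins are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: different compressed-sensing sampling rates for
# low- vs high-frequency coefficients, as the abstract describes.
rng = np.random.default_rng(0)

def sample_band(band, rate):
    """CS measurement y = Phi @ x, keeping m = rate * n measurements."""
    x = band.flatten()
    m = max(1, int(rate * x.size))
    phi = rng.standard_normal((m, x.size)) / np.sqrt(m)  # Gaussian Phi
    return phi @ x

image = rng.random((8, 8))
low = image[:4, :4]   # stand-in for the low-frequency subband
high = image[:4, 4:]  # stand-in for one high-frequency subband

y_low = sample_band(low, rate=0.8)    # dense sampling: carries most detail
y_high = sample_band(high, rate=0.2)  # sparse sampling: less information
print(y_low.size, y_high.size)        # → 12 3
```

Reconstruction would then solve a sparse-recovery problem per band; giving the low-frequency band more measurements is what lets the decrypted image retain more of the original content.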
  • Publication number: 20240295879
    Abstract: The present invention proposes an active scene mapping method based on constraint guidance and space optimization strategies, comprising a global planning stage and a local planning stage. In the global planning stage, the next exploration goal of a robot is calculated to guide the robot to explore a scene. After the next exploration goal is determined, the local planning stage generates specific actions according to that goal, the position of the robot, and the constructed occupancy map to drive the robot toward the goal, and observation data is collected to update the occupancy map. The present invention effectively avoids long-distance round trips during exploration, allowing the robot to balance information gain against movement cost and thereby improve active mapping efficiency.
    Type: Application
    Filed: September 11, 2023
    Publication date: September 5, 2024
    Inventors: Xin YANG, Xuefeng YIN, Baocai YIN, Xiaopeng WEI
  • Publication number: 20240257304
    Abstract: The present disclosure relates to a tactile pattern Super Resolution (SR) reconstruction method and acquisition system, which belong to the field of tactile perception. First, a High Resolution (HR) tactile pattern sample is obtained by using a Low Resolution (LR) tactile sensor; then, a deep learning-based tactile SR model is trained by using a tactile SR data set; and finally, the tactile data of a contact surface to be measured is reconstructed as an SR tactile pattern by using the tactile SR model. The present disclosure uses an existing taxel-based LR tactile sensor and adopts a deep learning-based tactile SR reconstruction technology, which effectively restores the shape of the contact surface and improves the resolution of the tactile sensor while maintaining the sensor's light, flexible characteristics and ease of integration into devices such as robots.
    Type: Application
    Filed: June 15, 2022
    Publication date: August 1, 2024
    Inventors: Qian LIU, Bing WU, Qiang ZHANG, Xiaopeng WEI
  • Publication number: 20240137207
    Abstract: The present disclosure discloses a method for encrypting a visually secure image based on adaptive block compressed sensing and non-negative matrix decomposition. First, the Tetrolet transform is performed on the plain image; then the sparsity degree of the sparsity matrix is optimized and matrix scrambling is performed, such that the sparsity degree in each block region of the image matrix is equalized. According to the image information, the sampling number of each block region is calculated, the measurement matrix is constructed and optimized, and the image is compressed by using the optimized measurement matrix. The compressed image is then scrambled and diffused to complete the encryption process. Finally, the image information is embedded into the carrier image through non-negative matrix decomposition to obtain a visually secure ciphertext image. The decryption process is the inverse of the encryption process.
    Type: Application
    Filed: October 30, 2023
    Publication date: April 25, 2024
    Applicant: DALIAN UNIVERSITY
    Inventors: Qiang ZHANG, Bin WANG, YuanDi SHI, XiaoPeng WEI
  • Publication number: 20240054594
    Abstract: The present disclosure discloses a deep image watermarking method based on mixed frequency-domain channel attention, relating to the fields of artificial neural networks and digital image watermarking. The method includes: step 1: a watermark information processor generating a watermark information feature map; step 2: an encoder generating a watermarked image from a carrier image and the watermark information feature map; step 3: a noise layer taking the watermarked image as an input and generating a noise image through simulated differentiable noise; step 4: a decoder down-sampling the noise image to recover the watermark information; step 5: an adversarial discriminator classifying the carrier image and the watermarked image such that the encoder generates a high-quality watermarked image. The present disclosure combines an end-to-end deep watermarking model with frequency-domain channel attention to expand the application range of deep neural networks in the field of image watermarking.
    Type: Application
    Filed: August 22, 2023
    Publication date: February 15, 2024
    Applicant: DALIAN UNIVERSITY
    Inventors: Qiang ZHANG, Bin WANG, Jun TAN, Rongrong CHEN, Xiaopeng WEI
  • Publication number: 20240028036
    Abstract: The present invention provides a robot dynamic obstacle avoidance method based on a multimodal spiking neural network. For scenarios containing dynamic obstacles, the method fuses laser radar data with processed event camera data and combines them with the intrinsic learnable threshold of the spiking neural network to realize obstacle avoidance in a dynamic environment. It addresses obstacle-avoidance failures caused by the difficulty of perceiving dynamic obstacles in robot obstacle-avoidance tasks. The present invention helps the robot fully perceive both the static and dynamic information of the environment, uses the learnable threshold mechanism of the spiking neural network for efficient reinforcement learning training and decision making, and realizes autonomous navigation and obstacle avoidance in the dynamic environment. An event data enhancement model is combined to better adapt to the dynamic environment for obstacle avoidance.
    Type: Application
    Filed: September 27, 2023
    Publication date: January 25, 2024
    Inventors: Xin YANG, Xiaopeng WEI, Yang WANG, Qiang ZHANG
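The "learnable threshold" mechanism the abstract above relies on can be illustrated with a single leaky integrate-and-fire (LIF) neuron, the standard building block of spiking networks. This is a generic textbook sketch, not the patented network; the decay constant, the threshold value, and the input sequence are illustrative assumptions (in training, the threshold would be a learned parameter).

```python
# Minimal LIF-neuron sketch: leak, integrate, fire when the membrane
# potential crosses a threshold, then reset. In the patented method the
# threshold is learnable; here it is a fixed illustrative constant.
def lif_step(v, inp, threshold, decay=0.9):
    """One membrane update; returns (new potential, spike 0.0/1.0)."""
    v = decay * v + inp          # leak previous potential, add input current
    spike = 1.0 if v >= threshold else 0.0
    if spike:
        v = 0.0                  # hard reset after firing
    return v, spike

v, spikes = 0.0, []
for inp in [0.3, 0.4, 0.5, 0.1, 0.6]:
    v, s = lif_step(v, inp, threshold=1.0)
    spikes.append(s)
print(spikes)  # → [0.0, 0.0, 1.0, 0.0, 0.0]
```

Lowering the threshold makes the neuron fire more readily; making it learnable lets training tune each neuron's sensitivity, which is what the abstract credits for efficient reinforcement-learning training.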
  • Patent number: 11816843
    Abstract: A method for segmenting a camouflaged object image based on distraction mining is disclosed. PFNet successively includes a multi-layer feature extractor, a positioning module, and a focusing module. The multi-layer feature extractor uses a traditional feature extraction network to obtain different levels of contextual features; the positioning module first uses RGB feature information to initially determine the position of the camouflaged object in the image; the focusing module mines and removes distraction information based on the image RGB feature information and the preliminary position information, and finally determines the boundary of the camouflaged object step by step. The method introduces the concept of distraction information into camouflaged object segmentation and develops a new information exploration and distraction removal strategy to aid segmentation of the camouflaged object image.
    Type: Grant
    Filed: June 2, 2021
    Date of Patent: November 14, 2023
    Assignee: DALIAN UNIVERSITY OF TECHNOLOGY
    Inventors: Xin Yang, Haiyang Mei, Wen Dong, Xiaopeng Wei, Dengping Fan
  • Patent number: 11810359
    Abstract: The present invention belongs to the technical field of computer vision and provides a video semantic segmentation method based on active learning, comprising an image semantic segmentation module, a data selection module based on active learning, and a label propagation module. The image semantic segmentation module is responsible for segmenting images and extracting the high-level features required by the data selection module; the data selection module selects a data subset with rich information at the image level and selects pixel blocks to be labeled at the pixel level; and the label propagation module realizes migration from image to video tasks and quickly completes the segmentation of a video to obtain weakly-supervised data. The present invention can rapidly generate weakly-supervised data sets, reduce the cost of data production, and optimize the performance of a semantic segmentation network.
    Type: Grant
    Filed: December 21, 2021
    Date of Patent: November 7, 2023
    Assignee: DALIAN UNIVERSITY OF TECHNOLOGY
    Inventors: Xin Yang, Xiaopeng Wei, Yu Qiao, Qiang Zhang, Baocai Yin, Haiyin Piao, Zhenjun Du
  • Patent number: 11757616
    Abstract: The present invention discloses an image encryption method based on an improved class boosting scheme, which comprises the following steps: acquiring parameters of a hyperchaotic system according to plaintext image information; generating the weights required by class perceptron networks from the plaintext image information; bringing the parameters into the hyperchaotic system to obtain chaotic sequences, and shuffling the chaotic sequences by a shuffling algorithm; pre-processing the shuffled chaotic sequences to obtain the sequence required for encryption; and bringing the plaintext image and the sequence into the improved class boosting scheme to obtain a ciphertext image, wherein the improved class boosting scheme is realized based on the class perceptron networks. The method solves the problem that the update and prediction functions in the original boosting network are too simple and easy to predict, so as to obtain a ciphertext image with higher information entropy.
    Type: Grant
    Filed: July 31, 2022
    Date of Patent: September 12, 2023
    Assignee: DALIAN UNIVERSITY OF TECHNOLOGY
    Inventors: Qiang Zhang, Pengfei Wang, Bin Wang, Haixiao Li, Rongrong Chen, Xiaopeng Wei
  • Patent number: 11756204
    Abstract: The invention belongs to the field of scene segmentation in computer vision and provides a depth-aware method for mirror segmentation. PDNet successively includes a multi-layer feature extractor, a positioning module, and a delineating module. The multi-layer feature extractor uses a traditional feature extraction network to obtain contextual features; the positioning module combines RGB feature information with depth feature information to initially determine the position of the mirror in the image; and the delineating module adjusts and determines the boundary of the mirror based on the image RGB feature information combined with depth information. This is the first method to use both an RGB image and a depth image to achieve mirror segmentation. In further testing, PDNet segmentation results remain excellent even for large mirrors in complex environments, and the results at mirror boundaries are also satisfactory.
    Type: Grant
    Filed: June 2, 2021
    Date of Patent: September 12, 2023
    Assignee: DALIAN UNIVERSITY OF TECHNOLOGY
    Inventors: Wen Dong, Xin Yang, Haiyang Mei, Xiaopeng Wei, Qiang Zhang
  • Publication number: 20230236587
    Abstract: The present invention provides a data fusion and reconstruction method for fine chemical industry safety production based on a virtual knowledge graph. In view of the characteristics of fine chemical industry safety production data, such as a large amount of structured data, multi-source heterogeneous databases, and strong sequential logic, the present invention innovatively proposes a method of using a virtual knowledge graph to complete the fusion and reconstruction of a traditional database for the fine chemical industry. For the first time, the present invention fuses static structured knowledge in the field of fine chemical industry with a real-time dynamic database for chemical industry safety production, using the concept of ontologies to organize time-series data in the form of entities. In addition, the mapping rules of the existing OBDA system are improved based on a data set of the present invention.
    Type: Application
    Filed: November 22, 2022
    Publication date: July 27, 2023
    Inventors: Xin YANG, Xiaopeng WEI, Li ZHU, Xirong XU, Zitao YIN
  • Publication number: 20230169309
    Abstract: The present invention belongs to the technical field of knowledge graphs and provides a knowledge graph construction method for an ethylene oxide derivatives production process. According to data types and characteristics, the data sources of the ethylene oxide derivatives production process are sorted into three types: structural data, unstructured data, and other data. The ontology layer and data layer of the knowledge graph are constructed by combining top-down and bottom-up methods. A data-driven incremental ontology modeling method is proposed to ensure the expandability of the knowledge graph. For structural knowledge extraction, the safety of original data storage is ensured by means of a virtual knowledge graph, and a new mapping mechanism is proposed to realize data materialization. For unstructured knowledge extraction, an entity extraction task is realized on the basis of a BERT-BiLSTM-CRF named entity recognition model that integrates the pre-trained language model BERT.
    Type: Application
    Filed: November 22, 2022
    Publication date: June 1, 2023
    Inventors: Xin YANG, Xiaopeng WEI, Li ZHU, Xirong XU, Chenming DUAN
  • Publication number: 20230092700
    Abstract: The present invention discloses an image encryption method based on an improved class boosting scheme, which comprises the following steps: acquiring parameters of a hyperchaotic system according to plaintext image information; generating the weights required by class perceptron networks from the plaintext image information; bringing the parameters into the hyperchaotic system to obtain chaotic sequences, and shuffling the chaotic sequences by a shuffling algorithm; pre-processing the shuffled chaotic sequences to obtain the sequence required for encryption; and bringing the plaintext image and the sequence into the improved class boosting scheme to obtain a ciphertext image, wherein the improved class boosting scheme is realized based on the class perceptron networks. The method solves the problem that the update and prediction functions in the original boosting network are too simple and easy to predict, so as to obtain a ciphertext image with higher information entropy.
    Type: Application
    Filed: July 31, 2022
    Publication date: March 23, 2023
    Applicant: DALIAN UNIVERSITY OF TECHNOLOGY
    Inventors: Qiang ZHANG, Pengfei WANG, Bin WANG, Haixiao LI, Rongrong CHEN, Xiaopeng WEI
  • Publication number: 20230092977
    Abstract: The present disclosure discloses an image encryption method based on multi-scale compressed sensing and a Markov model. According to the difference in information carried by the low-frequency and high-frequency coefficients of an image, different sampling rates are set for the two, which effectively improves the reconstruction quality of the decrypted image. The decrypted image obtained by the present disclosure has higher quality than that generated by existing schemes, yielding a better visual effect and more complete original image information.
    Type: Application
    Filed: September 20, 2022
    Publication date: March 23, 2023
    Applicant: DALIAN UNIVERSITY OF TECHNOLOGY
    Inventors: Qiang ZHANG, Bin WANG, Pengfei WANG, Yuandi SHI, Rongrong CHEN, Xiaopeng WEI
  • Publication number: 20220230322
    Abstract: The invention belongs to the field of scene segmentation in computer vision and provides a depth-aware method for mirror segmentation. PDNet successively includes a multi-layer feature extractor, a positioning module, and a delineating module. The multi-layer feature extractor uses a traditional feature extraction network to obtain contextual features; the positioning module combines RGB feature information with depth feature information to initially determine the position of the mirror in the image; and the delineating module adjusts and determines the boundary of the mirror based on the image RGB feature information combined with depth information. This is the first method to use both an RGB image and a depth image to achieve mirror segmentation. In further testing, PDNet segmentation results remain excellent even for large mirrors in complex environments, and the results at mirror boundaries are also satisfactory.
    Type: Application
    Filed: June 2, 2021
    Publication date: July 21, 2022
    Inventors: Wen DONG, Xin YANG, Haiyang MEI, Xiaopeng WEI, Qiang ZHANG
  • Publication number: 20220230324
    Abstract: A method for segmenting a camouflaged object image based on distraction mining is disclosed. PFNet successively includes a multi-layer feature extractor, a positioning module, and a focusing module. The multi-layer feature extractor uses a traditional feature extraction network to obtain different levels of contextual features; the positioning module first uses RGB feature information to initially determine the position of the camouflaged object in the image; the focusing module mines and removes distraction information based on the image RGB feature information and the preliminary position information, and finally determines the boundary of the camouflaged object step by step. The method introduces the concept of distraction information into camouflaged object segmentation and develops a new information exploration and distraction removal strategy to aid segmentation of the camouflaged object image.
    Type: Application
    Filed: June 2, 2021
    Publication date: July 21, 2022
    Inventors: Xin YANG, Haiyang MEI, Wen DONG, Xiaopeng WEI, Dengping FAN
  • Publication number: 20220215662
    Abstract: The present invention belongs to the technical field of computer vision and provides a video semantic segmentation method based on active learning, comprising an image semantic segmentation module, a data selection module based on active learning, and a label propagation module. The image semantic segmentation module is responsible for segmenting images and extracting the high-level features required by the data selection module; the data selection module selects a data subset with rich information at the image level and selects pixel blocks to be labeled at the pixel level; and the label propagation module realizes migration from image to video tasks and quickly completes the segmentation of a video to obtain weakly-supervised data. The present invention can rapidly generate weakly-supervised data sets, reduce the cost of data production, and optimize the performance of a semantic segmentation network.
    Type: Application
    Filed: December 21, 2021
    Publication date: July 7, 2022
    Inventors: Xin YANG, Xiaopeng WEI, Yu QIAO, Qiang ZHANG, Baocai YIN, Haiyin PIAO, Zhenjun DU
  • Publication number: 20220212339
    Abstract: The present invention belongs to the technical field of computer vision and provides a data active selection method for robot grasping. The core of the present invention is a data selection strategy module, which shares the feature extraction layer of the backbone network and integrates the features of three receptive fields of different sizes. While making full use of the feature extraction module, the present invention greatly reduces the number of parameters that need to be added. During training of the main grasp detection network model, the data selection strategy module can be trained synchronously to form an end-to-end model. The present invention makes use of naturally existing labeled and unlabeled data, exploiting both fully. Even when the amount of labeled data is small, the network can still be trained adequately.
    Type: Application
    Filed: December 29, 2021
    Publication date: July 7, 2022
    Inventors: Xin YANG, Boyan WEI, Baocai YIN, Qiang ZHANG, Xiaopeng WEI, Zhenjun DU