Patents by Inventor Mengdi Fan

Mengdi Fan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220306472
    Abstract: The present disclosure relates to an orthophosphate thermal barrier coating material with a high coefficient of thermal expansion and a preparation method thereof. ReM3P3O12 series ceramics with a eulytite crystal structure are prepared by a high-temperature solid-phase reaction for the first time. The ReM3P3O12 ceramic belongs to the 4̄3m space group of the cubic crystal system, combining a high melting point and excellent high-temperature phase stability with a low thermal conductivity and a suitable coefficient of thermal expansion. It can effectively alleviate the stress caused by the mismatch between the coefficients of thermal expansion of the base material and the ceramic layer, so as to meet the thermal insulation and high-temperature oxidation and corrosion resistance requirements of hot-end parts in long-term service, and it has application prospects in the field of thermal barrier coatings.
    Type: Application
    Filed: March 23, 2022
    Publication date: September 29, 2022
    Applicant: Shandong University
    Inventors: Fapeng Yu, Guangda Wu, Mengdi Fan, Tingwei Chen, Xiufeng Cheng, Xian Zhao
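The entry above turns on matching the coating's coefficient of thermal expansion (CTE) to that of the substrate. As a rough illustration of why, the sketch below evaluates the standard biaxial thermal-mismatch stress formula, sigma = E * (alpha_sub - alpha_coat) * dT / (1 - nu); the material values are generic assumptions chosen for illustration, not data from the patent.

```python
# Illustrative estimate of thermal-mismatch stress between a substrate and a
# ceramic thermal barrier coating. The formula and all numeric values are
# generic textbook assumptions, not values taken from the patent.

def mismatch_stress(e_coat_gpa, nu_coat, alpha_sub, alpha_coat, delta_t):
    """Biaxial thermal-mismatch stress in MPa: E * (a_sub - a_coat) * dT / (1 - nu)."""
    return e_coat_gpa * 1e3 * (alpha_sub - alpha_coat) * delta_t / (1.0 - nu_coat)

# Hypothetical values: Ni-based superalloy substrate (~14e-6 /K) versus a coating
# with a poorly matched (8e-6 /K) and a better matched (12e-6 /K) CTE.
for alpha_coat in (8e-6, 12e-6):
    sigma = mismatch_stress(e_coat_gpa=50, nu_coat=0.2,
                            alpha_sub=14e-6, alpha_coat=alpha_coat,
                            delta_t=1000)
    print(f"coating CTE {alpha_coat:.0e} /K -> mismatch stress ~ {sigma:.0f} MPa")
```

A closer CTE match cuts the residual stress roughly in proportion to the CTE difference, which is the property the abstract highlights for the ReM3P3O12 ceramics.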
  • Patent number: 11397890
    Abstract: The present application discloses a cross-media retrieval method based on a deep semantic space, which includes a feature generation stage and a semantic space learning stage. In the feature generation stage, a CNN visual feature vector and an LSTM language description vector of an image are generated by simulating how a person perceives the image, and topic information in a text is explored with an LDA topic model to extract an LDA text topic vector. In the semantic space learning stage, a four-layer Multi-Sensory Fusion Deep Neural Network is trained on the training-set images and a three-layer text semantic network is trained on the training-set texts. Finally, test images and texts are mapped by the two networks into an isomorphic semantic space, realizing cross-media retrieval. The disclosed method can significantly improve the performance of cross-media retrieval.
    Type: Grant
    Filed: August 16, 2017
    Date of Patent: July 26, 2022
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Wenmin Wang, Mengdi Fan, Peilei Dong, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Hui Zhao, Wen Gao
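The abstract above describes two mapping networks trained into a common semantic space: a four-layer fusion network for the CNN and LSTM image vectors, and a three-layer network for the LDA text topic vectors. The sketch below is a minimal, untrained PyTorch rendering of that structure with cosine-similarity retrieval; the layer widths, feature dimensions and similarity measure are assumptions, not the patented configuration.

```python
# Minimal sketch of the two mapping networks: a four-layer fusion network for
# image features (CNN + LSTM vectors) and a three-layer network for LDA text
# topic vectors, both mapping into a shared semantic space. Sizes are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_CLASSES = 10  # assumed number of semantic categories in the shared space

class ImageFusionNet(nn.Module):
    """Four fully connected layers fusing CNN visual and LSTM description vectors."""
    def __init__(self, cnn_dim=4096, lstm_dim=512, out_dim=N_CLASSES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(cnn_dim + lstm_dim, 2048), nn.ReLU(),
            nn.Linear(2048, 512), nn.ReLU(),
            nn.Linear(512, 128), nn.ReLU(),
            nn.Linear(128, out_dim),
        )
    def forward(self, cnn_feat, lstm_feat):
        return self.net(torch.cat([cnn_feat, lstm_feat], dim=1))

class TextTopicNet(nn.Module):
    """Three fully connected layers mapping LDA topic vectors into the same space."""
    def __init__(self, topic_dim=100, out_dim=N_CLASSES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(topic_dim, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, out_dim),
        )
    def forward(self, topic_feat):
        return self.net(topic_feat)

# Toy retrieval: embed one query image and a batch of texts, rank by cosine similarity.
img_net, txt_net = ImageFusionNet(), TextTopicNet()
query = img_net(torch.randn(1, 4096), torch.randn(1, 512))
texts = txt_net(torch.randn(5, 100))
scores = F.cosine_similarity(query, texts)            # shape (5,)
print("ranked text indices:", scores.argsort(descending=True).tolist())
```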
  • Publication number: 20220228294
    Abstract: The M3RE(PO4)3 crystal has a non-centrosymmetric structure and belongs to the 4̄3m point group of the cubic crystal system. M denotes an alkaline earth metal, which can be Ba, Ca, or Sr, and RE denotes a rare earth element, which can be Y, La, Gd, or Yb. The growth method of the M3RE(PO4)3 crystal comprises the following steps: (1) polycrystalline material synthesis: MCO3, RE2O3, and a phosphorus compound are used as raw materials and blended in stoichiometric proportions, with the phosphorus compound then added in excess; the raw materials are sintered twice to obtain the M3RE(PO4)3 polycrystalline material; (2) polycrystalline material melting; (3) Czochralski crystal growth. The M3RE(PO4)3 crystal prepared by the invention is a high-quality single crystal.
    Type: Application
    Filed: June 10, 2020
    Publication date: July 21, 2022
    Inventors: Fapeng Yu, Guangda Wu, Mengdi Fan, Yanlu Li, Xiufeng Cheng, Xian Zhao
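Step (1) of the growth method above amounts to weighing the carbonate, rare-earth oxide and phosphorus source in stoichiometric ratio and then adding a phosphorus excess before the two sintering passes. The sketch below does that bookkeeping for a hypothetical Ba3Y(PO4)3 batch; the choice of NH4H2PO4 as the phosphorus compound, the 3% excess and the batch size are illustrative assumptions, not values from the patent.

```python
# Back-of-the-envelope batching for a hypothetical Ba3Y(PO4)3 polycrystalline charge:
# weigh carbonate, rare-earth oxide and a phosphorus compound stoichiometrically,
# then add a small phosphorus excess to offset volatilization during sintering.

MOLAR_MASS = {          # g/mol, standard values
    "BaCO3": 197.34,
    "Y2O3": 225.81,
    "NH4H2PO4": 115.03, # assumed phosphorus source; the abstract only says "phosphorus compound"
}

def batch_masses(moles_product=0.01, p_excess=0.03):
    """Masses (g) of raw materials for `moles_product` mol of Ba3Y(PO4)3."""
    moles = {
        "BaCO3": 3 * moles_product,                     # 3 Ba per formula unit
        "Y2O3": 0.5 * moles_product,                    # 1 Y per formula unit, 2 Y per oxide
        "NH4H2PO4": 3 * moles_product * (1 + p_excess), # 3 P per formula unit, plus excess
    }
    return {name: n * MOLAR_MASS[name] for name, n in moles.items()}

for name, grams in batch_masses().items():
    print(f"{name}: {grams:.3f} g")
```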
  • Publication number: 20210256365
    Abstract: The present application discloses a cross-media retrieval method based on a deep semantic space, which includes a feature generation stage and a semantic space learning stage. In the feature generation stage, a CNN visual feature vector and an LSTM language description vector of an image are generated by simulating how a person perceives the image, and topic information in a text is explored with an LDA topic model to extract an LDA text topic vector. In the semantic space learning stage, a four-layer Multi-Sensory Fusion Deep Neural Network is trained on the training-set images and a three-layer text semantic network is trained on the training-set texts. Finally, test images and texts are mapped by the two networks into an isomorphic semantic space, realizing cross-media retrieval. The disclosed method can significantly improve the performance of cross-media retrieval.
    Type: Application
    Filed: August 16, 2017
    Publication date: August 19, 2021
    Inventors: Wenmin Wang, Mengdi Fan, Peilei Dong, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Hui Zhao, Wen Gao
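This entry is the published application corresponding to the granted patent above; as a complement to the network sketch given there, the snippet below illustrates the other ingredient of the feature generation stage, extracting an LDA topic vector for each text. The toy corpus and the five-topic setting are assumptions for illustration.

```python
# Extract LDA topic vectors for a toy corpus; each vector is the kind of input
# the three-layer text semantic network would consume. Corpus and topic count
# are illustrative assumptions.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "a dog runs across the green park grass",
    "stock markets fell sharply after the report",
    "the team scored twice in the second half",
    "a cat sleeps on the warm kitchen floor",
]
counts = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=5, random_state=0).fit(counts)
topic_vectors = lda.transform(counts)      # (n_docs, 5) LDA topic vectors
print(topic_vectors[0].round(3))           # topic distribution for the first document
```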
  • Patent number: 11030444
    Abstract: Disclosed is a method for detecting pedestrians in an image by using a Gaussian penalty. Initial pedestrian boundary boxes are screened using a Gaussian penalty to improve pedestrian detection performance, especially for sheltered pedestrians in an image. The method includes acquiring a training data set, a test data set and pedestrian labels of a pedestrian detection image; training a detection model on the training data set with a pedestrian detection method and acquiring initial pedestrian boundary boxes together with their confidence degrees and coordinates; applying the Gaussian penalty to the confidence degrees of the pedestrian boundary boxes to obtain penalized confidence degrees; and obtaining final pedestrian boundary boxes by screening the pedestrian boundary boxes. Thus, repeated boundary boxes of a single pedestrian are removed while boundary boxes of sheltered pedestrians are retained, realizing the detection of the pedestrians in an image.
    Type: Grant
    Filed: November 24, 2017
    Date of Patent: June 8, 2021
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Wenmin Wang, Peilei Dong, Mengdi Fan, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Hui Zhao, Wen Gao
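The core of the abstract above is decaying, rather than deleting, the confidence of boxes that overlap a higher-scoring box, so that occluded (sheltered) pedestrians keep a detection. The sketch below implements a Gaussian confidence penalty in that spirit; the decay width sigma, the screening threshold and the toy boxes are assumptions, not the patented parameters.

```python
# Gaussian confidence penalty: instead of discarding every box that overlaps a
# higher-scoring box (hard NMS), decay its confidence with a Gaussian of the
# overlap, then screen out boxes whose confidence falls below a threshold.
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, each as [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(np.asarray(box)) + area(boxes) - inter)

def gaussian_penalty_nms(boxes, scores, sigma=0.5, score_thresh=0.3):
    """Iteratively keep the top box and decay the scores of boxes overlapping it."""
    boxes, scores = boxes.copy().astype(float), scores.copy().astype(float)
    keep, idx = [], list(range(len(scores)))
    while idx:
        best = max(idx, key=lambda i: scores[i])
        keep.append(best)
        idx.remove(best)
        if not idx:
            break
        overlaps = iou(boxes[best], boxes[idx])
        scores[idx] *= np.exp(-(overlaps ** 2) / sigma)     # Gaussian decay
        idx = [i for i in idx if scores[i] > score_thresh]  # screen low-confidence boxes
    return keep

# Two strongly overlapping boxes on one pedestrian plus a partially occluded neighbor.
boxes = np.array([[0, 0, 10, 20], [1, 0, 11, 20], [8, 0, 18, 20]], dtype=float)
scores = np.array([0.9, 0.85, 0.6])
print("kept box indices:", gaussian_penalty_nms(boxes, scores))
```

With these illustrative settings the duplicate box on the first pedestrian is decayed below the threshold and dropped, while the lightly overlapping box on the occluded neighbor keeps most of its confidence and survives.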
  • Patent number: 10719664
    Abstract: A cross-media search method uses a VGG convolutional neural network (VGG net) to extract image features. The 4096-dimensional feature of the seventh fully-connected layer (fc7) in the VGG net, after processing by a ReLU activation function, serves as the image feature. A Fisher Vector based on Word2vec is used to extract text features. Semantic matching is performed between the heterogeneous image and text features by means of logistic regression, and the correlation found between the two feature types, images and text, enables cross-media search. The feature extraction method can effectively represent the deep semantics of images and text, improve cross-media search accuracy, and thus greatly improve the cross-media search effect.
    Type: Grant
    Filed: December 1, 2016
    Date of Patent: July 21, 2020
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Wenmin Wang, Liang Han, Mengdi Fan, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Hui Zhao, Wen Gao
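The image side of the abstract above is the 4096-dimensional fc7 activation of a VGG network, taken after its ReLU, scored against semantic categories with logistic regression. The sketch below wires that up with torchvision and scikit-learn; the use of VGG-19, random weights (to avoid a download), and the toy batch and labels are assumptions made only to keep the snippet self-contained.

```python
# Extract fc7 features (after ReLU) from a VGG network and score them against
# semantic categories with logistic regression. Weights are left uninitialized
# (weights=None) purely so the snippet runs offline; use pretrained weights in practice.
import torch
import torchvision.models as models
from sklearn.linear_model import LogisticRegression

vgg = models.vgg19(weights=None).eval()
fc7 = torch.nn.Sequential(                 # conv stack + fc6 + fc7 (its ReLU included)
    vgg.features, vgg.avgpool, torch.nn.Flatten(),
    *list(vgg.classifier.children())[:5],  # Linear-ReLU-Dropout-Linear-ReLU
)

with torch.no_grad():
    images = torch.randn(8, 3, 224, 224)   # stand-in for a small image batch
    feats = fc7(images).numpy()            # (8, 4096) fc7 features after ReLU

labels = [0, 1, 0, 1, 2, 2, 0, 1]          # toy semantic category labels
clf = LogisticRegression(max_iter=200).fit(feats, labels)
probs = clf.predict_proba(feats[:1])       # semantic matching scores for one image
print("category probabilities:", probs.round(3))
```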
  • Publication number: 20200160048
    Abstract: Disclosed is a method for detecting pedestrians in an image by using a Gaussian penalty. Initial pedestrian boundary boxes are screened using a Gaussian penalty to improve pedestrian detection performance, especially for sheltered pedestrians in an image. The method includes acquiring a training data set, a test data set and pedestrian labels of a pedestrian detection image; training a detection model on the training data set with a pedestrian detection method and acquiring initial pedestrian boundary boxes together with their confidence degrees and coordinates; applying the Gaussian penalty to the confidence degrees of the pedestrian boundary boxes to obtain penalized confidence degrees; and obtaining final pedestrian boundary boxes by screening the pedestrian boundary boxes. Thus, repeated boundary boxes of a single pedestrian are removed while boundary boxes of sheltered pedestrians are retained, realizing the detection of the pedestrians in an image.
    Type: Application
    Filed: November 24, 2017
    Publication date: May 21, 2020
    Inventors: Wenmin Wang, Peilei Dong, Mengdi Fan, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Hui Zhao, Wen Gao
  • Publication number: 20190205393
    Abstract: A cross-media search method uses a VGG convolutional neural network (VGG net) to extract image features. The 4096-dimensional feature of the seventh fully-connected layer (fc7) in the VGG net, after processing by a ReLU activation function, serves as the image feature. A Fisher Vector based on Word2vec is used to extract text features. Semantic matching is performed between the heterogeneous image and text features by means of logistic regression, and the correlation found between the two feature types, images and text, enables cross-media search. The feature extraction method can effectively represent the deep semantics of images and text, improve cross-media search accuracy, and thus greatly improve the cross-media search effect.
    Type: Application
    Filed: December 1, 2016
    Publication date: July 4, 2019
    Applicant: Peking University Shenzhen Graduate School
    Inventors: Wenmin Wang, Liang Han, Mengdi Fan, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Hui Zhao, Wen Gao
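As a complement to the image-side sketch given for the granted patent above, the snippet below illustrates the text side named in this abstract: a first-order Fisher Vector computed over Word2vec-style word embeddings with a diagonal-covariance GMM. The random embeddings, the 4 mixture components and the mean-gradient-only encoding are simplifying assumptions for illustration.

```python
# Simplified Fisher Vector over word embeddings: fit a diagonal-covariance GMM
# to the word vectors, then encode a document by the posterior-weighted,
# whitened deviations from each component mean (first-order statistics only).
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector_means(word_vecs, gmm):
    """First-order Fisher Vector (gradient w.r.t. GMM means), L2-normalized."""
    T, _ = word_vecs.shape
    gamma = gmm.predict_proba(word_vecs)                # (T, K) posteriors
    mu, sigma = gmm.means_, np.sqrt(gmm.covariances_)   # diagonal covariances assumed
    w = gmm.weights_
    fv = []
    for k in range(gmm.n_components):
        diff = (word_vecs - mu[k]) / sigma[k]           # (T, D) whitened deviations
        fv.append((gamma[:, k, None] * diff).sum(0) / (T * np.sqrt(w[k])))
    fv = np.concatenate(fv)
    return fv / (np.linalg.norm(fv) + 1e-12)

# Toy "Word2vec" embeddings: random 50-d vectors standing in for a document's words.
rng = np.random.default_rng(0)
word_vecs = rng.normal(size=(30, 50))
gmm = GaussianMixture(n_components=4, covariance_type="diag", random_state=0).fit(word_vecs)
doc_feature = fisher_vector_means(word_vecs, gmm)       # (4 * 50,) text feature vector
print(doc_feature.shape)
```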