Patents by Inventor Mengdi Fan
Mengdi Fan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20220306472
Abstract: The present disclosure relates to an orthophosphate thermal barrier coating material with a high coefficient of thermal expansion and a preparation method thereof. ReM3P3O12 series ceramics with an eulytite crystal structure are prepared by a high-temperature solid-phase reaction for the first time. The ReM3P3O12 ceramic belongs to the 4̄3m space group of the cubic crystal system; it not only has a high melting point and excellent high-temperature phase stability, but also a low thermal conductivity and a suitable coefficient of thermal expansion. It can effectively alleviate the stress caused by the mismatch between the coefficients of thermal expansion of the base material and the ceramic layer, meeting the thermal-insulation and high-temperature oxidation and corrosion resistance requirements of hot-end parts in long-term service, and thus has application prospects in the field of thermal barrier coatings.
Type: Application
Filed: March 23, 2022
Publication date: September 29, 2022
Applicant: Shandong University
Inventors: Fapeng YU, Guangda Wu, Mengdi Fan, Tingwei Chen, Xiufeng Cheng, Xian Zhao
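The mismatch stress this abstract refers to can be roughly estimated with the standard elastic biaxial-stress relation for a thin coating, σ ≈ E_c(α_s − α_c)ΔT/(1 − ν_c). A minimal sketch with illustrative property values (the numbers below are assumptions for demonstration, not values from the patent):

```python
def mismatch_stress(E_c, nu_c, alpha_sub, alpha_coat, delta_T):
    """Elastic in-plane stress in a coating from CTE mismatch
    (biaxial stress, thin-coating limit)."""
    return E_c * (alpha_sub - alpha_coat) * delta_T / (1.0 - nu_c)

# Illustrative values: superalloy substrate vs. a phosphate ceramic coating
sigma = mismatch_stress(
    E_c=50e9,          # coating Young's modulus, Pa (assumed)
    nu_c=0.2,          # coating Poisson's ratio (assumed)
    alpha_sub=16e-6,   # substrate CTE, 1/K (typical superalloy)
    alpha_coat=9e-6,   # coating CTE, 1/K (assumed)
    delta_T=1000.0,    # temperature change, K
)
print(f"{sigma / 1e6:.0f} MPa")
```

Raising the coating's CTE toward that of the substrate shrinks (α_s − α_c), and with it the stored stress, which is the design rationale behind a high-CTE thermal barrier material.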
-
Patent number: 11397890
Abstract: The present application discloses a cross-media retrieval method based on a deep semantic space, which includes a feature generation stage and a semantic space learning stage. In the feature generation stage, a CNN visual feature vector and an LSTM language description vector of an image are generated by simulating a person's perception process for the image, and topic information about a text is explored using an LDA topic model to extract an LDA text topic vector. In the semantic space learning stage, a training-set image is trained to obtain a four-layer Multi-Sensory Fusion Deep Neural Network, and a training-set text is trained to obtain a three-layer text semantic network. Finally, a test image and a test text are respectively mapped into an isomorphic semantic space by the two networks to realize cross-media retrieval. The disclosed method can significantly improve cross-media retrieval performance.
Type: Grant
Filed: August 16, 2017
Date of Patent: July 26, 2022
Assignee: Peking University Shenzhen Graduate School
Inventors: Wenmin Wang, Mengdi Fan, Peilei Dong, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Hui Zhao, Wen Gao
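The two-network mapping described in this abstract can be sketched as a pair of small MLPs that project both modalities into one shared space. The layer widths, feature dimensions, and random weights below are illustrative stand-ins for the trained extractors and networks, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, layer_dims):
    """Forward pass through a ReLU MLP with randomly initialised
    weights (stand-ins for trained parameters)."""
    for i, (d_in, d_out) in enumerate(zip(layer_dims[:-1], layer_dims[1:])):
        w = rng.standard_normal((d_in, d_out)) * 0.1
        x = x @ w
        if i < len(layer_dims) - 2:      # ReLU on hidden layers only
            x = np.maximum(x, 0.0)
    return x

# Feature generation stage (stand-ins for the real extractors)
cnn_visual = rng.standard_normal(4096)   # CNN visual feature vector
lstm_caption = rng.standard_normal(512)  # LSTM language description vector
image_feat = np.concatenate([cnn_visual, lstm_caption])
lda_topics = rng.standard_normal(100)    # LDA text topic vector

# Semantic space learning stage: map both modalities into one space
sem_dim = 256
img_emb = mlp(image_feat, [image_feat.size, 1024, 512, 256, sem_dim])  # four layers
txt_emb = mlp(lda_topics, [lda_topics.size, 512, 256, sem_dim])        # three layers

# Retrieval: cosine similarity in the shared (isomorphic) space
cos = img_emb @ txt_emb / (np.linalg.norm(img_emb) * np.linalg.norm(txt_emb))
```

Because both embeddings live in the same space, retrieval reduces to ranking items of the other modality by this similarity score.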
-
Publication number: 20220228294
Abstract: The M3RE(PO4)3 crystal has a non-centrosymmetric structure and belongs to the 4̄3m point group of the cubic crystal system. M denotes an alkaline earth metal, which can be Ba, Ca, or Sr, and RE denotes a rare earth element, which can be Y, La, Gd, or Yb. The growth method of the M3RE(PO4)3 crystal comprises the following steps: (1) polycrystalline material synthesis: MCO3, RE2O3, and a phosphorus compound are used as raw materials and blended in stoichiometric proportions, with the phosphorus compound then added in excess; the raw materials are sintered twice to obtain the M3RE(PO4)3 polycrystalline material; (2) polycrystalline material melting; (3) Czochralski crystal growth. The M3RE(PO4)3 crystal prepared by the invention is a high-quality single crystal.
Type: Application
Filed: June 10, 2020
Publication date: July 21, 2022
Inventors: Fapeng YU, Guangda WU, Mengdi FAN, Yanlu LI, Xiufeng CHENG, Xian ZHAO
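As a back-of-envelope illustration of step (1), the raw-material masses for a batch follow from the formula M3RE(PO4)3 (3 M : 1 RE : 3 P per formula unit). The sketch below assumes Ba3Y(PO4)3 with NH4H2PO4 as the phosphorus compound and a 3% phosphorus excess; these choices and the excess fraction are illustrative assumptions, not values specified in the patent:

```python
# Molar masses in g/mol (from standard atomic weights)
M_BaCO3 = 197.34
M_Y2O3 = 225.81
M_NH4H2PO4 = 115.03

def batch_masses(n_formula_units, p_excess=0.03):
    """Raw-material masses (g) for n mol of Ba3Y(PO4)3.
    Per formula unit: 3 Ba (from 3 BaCO3), 1 Y (from 0.5 Y2O3),
    3 P (from 3 NH4H2PO4, weighed in with a small excess)."""
    return {
        "BaCO3": 3 * n_formula_units * M_BaCO3,
        "Y2O3": 0.5 * n_formula_units * M_Y2O3,
        "NH4H2PO4": 3 * n_formula_units * M_NH4H2PO4 * (1 + p_excess),
    }

masses = batch_masses(0.1)  # a 0.1 mol batch
```

The phosphorus excess compensates for phosphate volatilisation during the two sintering passes, which is why the abstract specifies adding the phosphorus compound beyond the stoichiometric amount.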
-
Publication number: 20210256365
Abstract: The present application discloses a cross-media retrieval method based on a deep semantic space, which includes a feature generation stage and a semantic space learning stage. In the feature generation stage, a CNN visual feature vector and an LSTM language description vector of an image are generated by simulating a person's perception process for the image, and topic information about a text is explored using an LDA topic model to extract an LDA text topic vector. In the semantic space learning stage, a training-set image is trained to obtain a four-layer Multi-Sensory Fusion Deep Neural Network, and a training-set text is trained to obtain a three-layer text semantic network. Finally, a test image and a test text are respectively mapped into an isomorphic semantic space by the two networks to realize cross-media retrieval. The disclosed method can significantly improve cross-media retrieval performance.
Type: Application
Filed: August 16, 2017
Publication date: August 19, 2021
Inventors: Wenmin Wang, Mengdi Fan, Peilei Dong, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Hui Zhao, Wen Gao
-
Patent number: 11030444
Abstract: Disclosed is a method for detecting pedestrians in an image using a Gaussian penalty. Initial pedestrian boundary boxes are screened with a Gaussian penalty to improve pedestrian detection performance, especially for sheltered pedestrians in an image. The method includes acquiring a training data set, a test data set, and pedestrian labels of a pedestrian detection image; training on the training data set with a pedestrian detection method to obtain a detection model, and acquiring initial pedestrian boundary boxes together with their confidence degrees and coordinates; applying a Gaussian penalty to the confidence degrees of the pedestrian boundary boxes to obtain penalized confidence degrees; and obtaining final pedestrian boundary boxes by screening the pedestrian boundary boxes. Thus, repeated boundary boxes of a single pedestrian are removed while boundary boxes of sheltered pedestrians are retained, realizing the detection of pedestrians in an image.
Type: Grant
Filed: November 24, 2017
Date of Patent: June 8, 2021
Assignee: Peking University Shenzhen Graduate School
Inventors: Wenmin Wang, Peilei Dong, Mengdi Fan, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Hui Zhao, Wen Gao
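The Gaussian penalty described here works like a soft variant of non-maximum suppression: instead of deleting every box that overlaps the current best detection, each remaining box's confidence is decayed by exp(−IoU²/σ), so a moderately overlapping box from a sheltered pedestrian can survive with a reduced score. A minimal sketch of that screening loop (the σ and threshold values are illustrative, not from the patent):

```python
import math

def iou(a, b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def gaussian_penalty_nms(boxes, scores, sigma=0.5, score_thresh=0.3):
    """Screen boxes by repeatedly keeping the highest-confidence one and
    decaying the confidence of the rest with a Gaussian penalty on IoU."""
    boxes, scores = list(boxes), list(scores)
    kept = []
    while boxes:
        i = max(range(len(scores)), key=scores.__getitem__)
        box, score = boxes.pop(i), scores.pop(i)
        if score < score_thresh:
            break
        kept.append((box, score))
        # Gaussian decay: heavy overlap -> strong penalty, light overlap -> mild one
        scores = [s * math.exp(-iou(box, b) ** 2 / sigma)
                  for b, s in zip(boxes, scores)]
    return kept
```

Two near-duplicate boxes on one pedestrian collapse to a single detection, while boxes with little overlap (e.g. a separate, partially sheltered pedestrian) keep enough confidence to pass the final threshold.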
-
Patent number: 10719664
Abstract: A cross-media search method uses a VGG convolutional neural network (VGG net) to extract image features. The 4096-dimensional feature of the seventh fully-connected layer (fc7) in the VGG net, after processing by a ReLU activation function, serves as the image feature. A Fisher Vector based on Word2vec is used to extract text features. Semantic matching is performed on the heterogeneous image and text features by means of logistic regression, and a correlation between the two heterogeneous feature types is found through this semantic matching, thus achieving cross-media search. The feature extraction method can effectively capture the deep semantics of image and text, improve cross-media search accuracy, and thus greatly improve the cross-media search effect.
Type: Grant
Filed: December 1, 2016
Date of Patent: July 21, 2020
Assignee: Peking University Shenzhen Graduate School
Inventors: Wenmin Wang, Liang Han, Mengdi Fan, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Hui Zhao, Wen Gao
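The semantic matching step can be sketched as one multinomial logistic-regression classifier per modality: each maps its features to a class-posterior vector, and those posteriors form a shared semantic space in which images and texts are directly comparable. The sketch below uses low-dimensional random stand-ins for the real fc7 and Fisher-Vector features; dimensions, class counts, and training settings are illustrative assumptions, not values from the patent:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_logreg(X, y, n_classes, lr=0.1, steps=500):
    """Multinomial logistic regression fitted by batch gradient descent."""
    W = np.zeros((X.shape[1], n_classes))
    Y = np.eye(n_classes)[y]                  # one-hot labels
    for _ in range(steps):
        W -= lr * X.T @ (softmax(X @ W) - Y) / len(X)
    return W

rng = np.random.default_rng(1)
n_classes, n_per = 3, 30
y = np.repeat(np.arange(n_classes), n_per)

# Low-dimensional stand-ins for the real features (4096-d fc7 image
# vectors and Word2vec-based Fisher Vectors for text)
img_means = 3.0 * rng.standard_normal((n_classes, 20))
txt_means = 3.0 * rng.standard_normal((n_classes, 15))
X_img = img_means[y] + rng.standard_normal((len(y), 20))
X_txt = txt_means[y] + rng.standard_normal((len(y), 15))

# One classifier per modality; class posteriors are the semantic space
P_img = softmax(X_img @ train_logreg(X_img, y, n_classes))
P_txt = softmax(X_txt @ train_logreg(X_txt, y, n_classes))

# Cross-media search: rank all texts against one image query by the
# similarity of their posterior (semantic) vectors
ranking = np.argsort(-(P_txt @ P_img[0]))
```

Because both modalities are expressed as distributions over the same classes, correlation between an image and a text reduces to comparing two probability vectors.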
-
Publication number: 20200160048
Abstract: Disclosed is a method for detecting pedestrians in an image using a Gaussian penalty. Initial pedestrian boundary boxes are screened with a Gaussian penalty to improve pedestrian detection performance, especially for sheltered pedestrians in an image. The method includes acquiring a training data set, a test data set, and pedestrian labels of a pedestrian detection image; training on the training data set with a pedestrian detection method to obtain a detection model, and acquiring initial pedestrian boundary boxes together with their confidence degrees and coordinates; applying a Gaussian penalty to the confidence degrees of the pedestrian boundary boxes to obtain penalized confidence degrees; and obtaining final pedestrian boundary boxes by screening the pedestrian boundary boxes. Thus, repeated boundary boxes of a single pedestrian are removed while boundary boxes of sheltered pedestrians are retained, realizing the detection of pedestrians in an image.
Type: Application
Filed: November 24, 2017
Publication date: May 21, 2020
Inventors: Wenmin Wang, Peilei Dong, Mengdi Fan, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Hui Zhao, Wen Gao
-
Publication number: 20190205393
Abstract: A cross-media search method uses a VGG convolutional neural network (VGG net) to extract image features. The 4096-dimensional feature of the seventh fully-connected layer (fc7) in the VGG net, after processing by a ReLU activation function, serves as the image feature. A Fisher Vector based on Word2vec is used to extract text features. Semantic matching is performed on the heterogeneous image and text features by means of logistic regression, and a correlation between the two heterogeneous feature types is found through this semantic matching, thus achieving cross-media search. The feature extraction method can effectively capture the deep semantics of image and text, improve cross-media search accuracy, and thus greatly improve the cross-media search effect.
Type: Application
Filed: December 1, 2016
Publication date: July 4, 2019
Applicant: Peking University Shenzhen Graduate School
Inventors: Wenmin Wang, Liang Han, Mengdi Fan, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Hui Zhao, Wen Gao