Patents by Inventor Zihan Cheng
Zihan Cheng has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12313732
Abstract: A contextual visual-based synthetic-aperture radar (SAR) target detection method and apparatus, and a storage medium, belonging to the field of target detection, are described. The method includes: obtaining an SAR image; and inputting the SAR image into a target detection model, and positioning and recognizing a target in the SAR image by using the target detection model, to obtain a detection result. In the present disclosure, a two-way multi-scale connection operation is enhanced through top-down and bottom-up attention, to guide learning of dynamic attention matrices and enhance feature interaction under different resolutions. The model can extract multi-scale target feature information with higher accuracy for bounding box regression and classification, and suppress interfering background information, thereby enhancing the visual expressiveness.
Type: Grant
Filed: May 6, 2022
Date of Patent: May 27, 2025
Assignee: Anhui University
Inventors: Jie Chen, Runfan Xia, Zhixiang Huang, Huiyao Wan, Xiaoping Liu, Zihan Cheng, Bocai Wu, Baidong Yao, Zheng Zhou, Jianming Lv, Yun Feng, Wentian Du, Jingqian Yu
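The following is a minimal PyTorch sketch of the general idea the abstract describes: fusing a pyramid of SAR feature maps with attention applied in both the top-down and bottom-up directions. The module name `BiAttentionFusion`, the channel count, and the gating scheme are illustrative assumptions, not the patented architecture.

```python
# Sketch of bidirectional attention-guided multi-scale fusion (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F


class BiAttentionFusion(nn.Module):
    """Fuse a feature pyramid with top-down and bottom-up attention gates."""

    def __init__(self, channels: int = 256):
        super().__init__()
        # 1x1 convolutions produce per-pixel attention maps for each direction.
        self.td_attn = nn.Conv2d(channels, 1, kernel_size=1)  # top-down gate
        self.bu_attn = nn.Conv2d(channels, 1, kernel_size=1)  # bottom-up gate
        self.smooth = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, feats):
        # feats: list of tensors [P3, P4, P5], highest resolution first.
        # Top-down pass: coarser features gate and enrich finer ones.
        td = [feats[-1]]
        for f in reversed(feats[:-1]):
            up = F.interpolate(td[0], size=f.shape[-2:], mode="nearest")
            gate = torch.sigmoid(self.td_attn(up))
            td.insert(0, f + gate * up)
        # Bottom-up pass: finer features gate and enrich coarser ones.
        out = [td[0]]
        for f in td[1:]:
            down = F.max_pool2d(out[-1], kernel_size=2)
            down = F.interpolate(down, size=f.shape[-2:], mode="nearest")
            gate = torch.sigmoid(self.bu_attn(down))
            out.append(self.smooth(f + gate * down))
        return out


if __name__ == "__main__":
    pyramid = [torch.randn(1, 256, s, s) for s in (64, 32, 16)]
    fused = BiAttentionFusion(256)(pyramid)
    print([f.shape for f in fused])
```

The fused maps would then feed a detection head for bounding-box regression and classification, as the abstract outlines.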
-
Patent number: 12087046
Abstract: The present disclosure provides a method for fine-grained detection of driver distraction based on unsupervised learning, belonging to the field of driving behavior analysis. The method includes: acquiring distracted driving image data; and inputting the acquired distracted driving image data into an unsupervised learning detection model, analyzing the distracted driving image data by using the unsupervised learning detection model, and determining a driver distraction state according to an analysis result. The unsupervised learning detection model includes a backbone network, projection heads, and a loss function; the backbone network is a RepMLP network structure incorporating a multilayer perceptron (MLP); the projection heads are each an MLP incorporating a residual structure; and the loss function is based on contrastive learning and a stop-gradient strategy.
Type: Grant
Filed: April 28, 2022
Date of Patent: September 10, 2024
Assignee: Anhui University
Inventors: Jie Chen, Bing Li, Zihan Cheng, Haitao Wang, Jingmin Xi, Yingjian Deng
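As a rough illustration of the loss design named in the abstract, the sketch below combines an MLP projection head with a residual connection and a symmetric negative-cosine loss with stop-gradient (in the style of self-supervised methods such as SimSiam). All names and dimensions are assumptions; this is not the patented model.

```python
# Sketch of a residual MLP projection head and a stop-gradient contrastive loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualProjectionHead(nn.Module):
    """MLP projection head with a residual (skip) connection."""

    def __init__(self, dim: int = 2048, hidden: int = 512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.BatchNorm1d(hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, dim),
        )

    def forward(self, x):
        return x + self.mlp(x)  # residual structure


def stop_gradient_loss(p1, z1, p2, z2):
    """Symmetric negative cosine similarity; the z branches are detached (stop-gradient)."""
    def d(p, z):
        return -F.cosine_similarity(p, z.detach(), dim=-1).mean()
    return 0.5 * d(p1, z2) + 0.5 * d(p2, z1)


if __name__ == "__main__":
    feats1 = torch.randn(8, 2048)  # backbone features of augmented view 1
    feats2 = torch.randn(8, 2048)  # backbone features of augmented view 2
    head = ResidualProjectionHead()
    predictor = ResidualProjectionHead(hidden=256)
    z1, z2 = head(feats1), head(feats2)
    p1, p2 = predictor(z1), predictor(z2)
    print(stop_gradient_loss(p1, z1, p2, z2).item())
```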
-
Patent number: 12056940
Abstract: The present disclosure provides a transformer-based driver distraction detection method and apparatus, belonging to the field of driving behavior analysis. The method includes: acquiring distracted driving image data; building a driver distraction detection model FPT; and inputting the acquired distracted driving image data into the driver distraction detection model FPT, analyzing the distracted driving image data by using the driver distraction detection model FPT, and determining a driver distraction state according to an analysis result. The present disclosure proposes a new network model, the driver distraction detection model FPT, based on Swin, Twins, and other models. Compared with a deep learning model, the FPT model compensates for the drawback that such a model can only extract local features; compared with the transformer model, the FPT model improves the classification accuracy and reduces the parameter quantity and calculation amount.
Type: Grant
Filed: May 10, 2022
Date of Patent: August 6, 2024
Assignee: Anhui University
Inventors: Jie Chen, Haitao Wang, Bing Li, Zihan Cheng, Jingmin Xi, Yingjian Deng
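For orientation only, the sketch below shows a generic transformer-based image classifier for distraction states: patch embedding, a standard transformer encoder, and a classification token. It is an illustrative stand-in and does not reproduce the patented FPT architecture built on Swin, Twins, and related models; every name and size here is an assumption.

```python
# Generic transformer image classifier (illustrative stand-in, not FPT).
import torch
import torch.nn as nn


class TinyTransformerClassifier(nn.Module):
    def __init__(self, num_classes: int = 10, img_size: int = 224,
                 patch: int = 16, dim: int = 192, depth: int = 4, heads: int = 3):
        super().__init__()
        n_patches = (img_size // patch) ** 2
        # Non-overlapping patch embedding via a strided convolution.
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, n_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        tokens = self.embed(x).flatten(2).transpose(1, 2)   # (B, N, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        tokens = torch.cat([cls, tokens], dim=1) + self.pos
        return self.head(self.encoder(tokens)[:, 0])        # classify the CLS token


if __name__ == "__main__":
    logits = TinyTransformerClassifier()(torch.randn(2, 3, 224, 224))
    print(logits.shape)  # torch.Size([2, 10])
```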
-
Patent number: 11818493
Abstract: The present disclosure provides a fire source detection method and device under the condition of a small sample size, and a storage medium, and belongs to the field of target detection and industrial deployment. The method includes the steps of acquiring fire source image data from an industrial site; constructing a fire source detection model; and inputting the fire source image data to the fire source detection model, and analyzing the fire source image data via the fire source detection model to obtain a detection result, where the detection result includes the specific location, precision and type of a fire source. By means of the method, the problems of insufficient sample capacity and difficulty in training under the condition of a small sample size are solved, and different enhancement methods are used to greatly increase the number and quality of samples and improve the models' resistance to over-fitting.
Type: Grant
Filed: May 5, 2022
Date of Patent: November 14, 2023
Assignee: Anhui University
Inventors: Jie Chen, Jianming Lv, Zihan Cheng, Zhixiang Huang, Haitao Wang, Bing Li, Huiyao Wan, Yun Feng
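The sketch below illustrates the kind of image augmentation pipeline such small-sample training typically relies on, using standard torchvision transforms. The specific transforms and parameters are assumptions for illustration, not the enhancement methods claimed in the patent; for detection with bounding boxes, a box-aware augmentation library would be needed instead of these image-only transforms.

```python
# Illustrative augmentation pipeline for enlarging a small fire-source image set.
import torch
from torchvision import transforms
from PIL import Image

# Photometric and geometric augmentations that multiply the effective sample count.
augment = transforms.Compose([
    transforms.RandomResizedCrop(640, scale=(0.6, 1.0)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
    transforms.RandomRotation(degrees=10),
    transforms.ToTensor(),
])

if __name__ == "__main__":
    # Stand-in for a real industrial-site photo; replace with Image.open(<path>).
    img = Image.new("RGB", (800, 600))
    batch = torch.stack([augment(img) for _ in range(8)])  # 8 augmented variants
    print(batch.shape)  # torch.Size([8, 3, 640, 640])
```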
-
Publication number: 20230186652
Abstract: The present disclosure provides a transformer-based driver distraction detection method and apparatus, belonging to the field of driving behavior analysis. The method includes: acquiring distracted driving image data; building a driver distraction detection model FPT; and inputting the acquired distracted driving image data into the driver distraction detection model FPT, analyzing the distracted driving image data by using the driver distraction detection model FPT, and determining a driver distraction state according to an analysis result. The present disclosure proposes a new network model, the driver distraction detection model FPT, based on Swin, Twins, and other models. Compared with a deep learning model, the FPT model compensates for the drawback that such a model can only extract local features; compared with the transformer model, the FPT model improves the classification accuracy and reduces the parameter quantity and calculation amount.
Type: Application
Filed: May 10, 2022
Publication date: June 15, 2023
Applicant: Anhui University
Inventors: Jie Chen, Haitao Wang, Bing Li, Zihan Cheng, Jingmin Xi, Yingjian Deng
-
Publication number: 20230186436
Abstract: The present disclosure provides a method for fine-grained detection of driver distraction based on unsupervised learning, belonging to the field of driving behavior analysis. The method includes: acquiring distracted driving image data; and inputting the acquired distracted driving image data into an unsupervised learning detection model, analyzing the distracted driving image data by using the unsupervised learning detection model, and determining a driver distraction state according to an analysis result. The unsupervised learning detection model includes a backbone network, projection heads, and a loss function; the backbone network is a RepMLP network structure incorporating a multilayer perceptron (MLP); the projection heads are each an MLP incorporating a residual structure; and the loss function is based on contrastive learning and a stop-gradient strategy.
Type: Application
Filed: April 28, 2022
Publication date: June 15, 2023
Applicant: Anhui University
Inventors: Jie Chen, Bing Li, Zihan Cheng, Haitao Wang, Jingmin Xi, Yingjian Deng
-
Publication number: 20230188671
Abstract: The present disclosure provides a fire source detection method and device under the condition of a small sample size, and a storage medium, and belongs to the field of target detection and industrial deployment. The method includes the steps of acquiring fire source image data from an industrial site; constructing a fire source detection model; and inputting the fire source image data to the fire source detection model, and analyzing the fire source image data via the fire source detection model to obtain a detection result, where the detection result includes the specific location, precision and type of a fire source. By means of the method, the problems of insufficient sample capacity and difficulty in training under the condition of a small sample size are solved, and different enhancement methods are used to greatly increase the number and quality of samples and improve the models' resistance to over-fitting.
Type: Application
Filed: May 5, 2022
Publication date: June 15, 2023
Applicant: Anhui University
Inventors: Jie Chen, Jianming Lv, Zihan Cheng, Zhixiang Huang, Haitao Wang, Bing Li, Huiyao Wan, Yun Feng
-
Publication number: 20230184927
Abstract: A contextual visual-based synthetic-aperture radar (SAR) target detection method and apparatus, and a storage medium, belonging to the field of target detection, are described. The method includes: obtaining an SAR image; and inputting the SAR image into a target detection model, and positioning and recognizing a target in the SAR image by using the target detection model, to obtain a detection result. In the present disclosure, a two-way multi-scale connection operation is enhanced through top-down and bottom-up attention, to guide learning of dynamic attention matrices and enhance feature interaction under different resolutions. The model can extract multi-scale target feature information with higher accuracy for bounding box regression and classification, and suppress interfering background information, thereby enhancing the visual expressiveness.
Type: Application
Filed: May 6, 2022
Publication date: June 15, 2023
Applicant: Anhui University
Inventors: Jie Chen, Runfan Xia, Zhixiang Huang, Huiyao Wan, Xiaoping Liu, Zihan Cheng, Bocai Wu, Baidong Yao, Zheng Zhou, Jianming Lv, Yun Feng, Wentian Du, Jingqian Yu