Patents by Inventor Sannyuya Liu

Sannyuya Liu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11967175
    Abstract: Provided are a facial expression recognition method and system combined with an attention mechanism. The method comprises: detecting faces comprised in each video frame in a video sequence, and extracting corresponding facial ROIs, so as to obtain facial pictures in each video frame; aligning the facial pictures in each video frame on the basis of location information of facial feature points of the facial pictures; inputting the aligned facial pictures into a residual neural network, and extracting spatial features of facial expressions corresponding to the facial pictures; inputting the spatial features of the facial expressions into a hybrid attention module to acquire fused features of the facial expressions; inputting the fused features of the facial expressions into a gated recurrent unit, and extracting temporal features of the facial expressions; and inputting the temporal features of the facial expressions into a fully connected layer, and classifying and recognizing the facial expressions.
    Type: Grant
    Filed: May 23, 2023
    Date of Patent: April 23, 2024
    Assignee: CENTRAL CHINA NORMAL UNIVERSITY
    Inventors: Sannyuya Liu, Zongkai Yang, Xiaoliang Zhu, Zhicheng Dai, Liang Zhao
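The recognition pipeline described in this abstract (spatial features per frame, attention-based fusion, then temporal modeling and classification) hinges on the fusion step. Below is a minimal Python sketch of attention-weighted fusion over per-frame feature vectors; the scoring rule (mean activation) and all names are illustrative assumptions, not the patent's hybrid attention module:

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_fuse(frame_features):
    """Fuse per-frame spatial feature vectors into one vector, weighting
    each frame by an attention score (here: its mean activation)."""
    scores = [sum(f) / len(f) for f in frame_features]
    weights = softmax(scores)
    dim = len(frame_features[0])
    fused = [sum(w * f[i] for w, f in zip(weights, frame_features))
             for i in range(dim)]
    return weights, fused

# three frames, each with a 4-dimensional spatial feature vector
frames = [[0.1, 0.2, 0.3, 0.4],
          [0.5, 0.5, 0.5, 0.5],
          [0.9, 0.8, 0.7, 0.6]]
weights, fused = attention_fuse(frames)
```

In the actual method the fused features would then feed a gated recurrent unit and a fully connected classifier; here the sketch stops at the fused vector.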
  • Publication number: 20240081705
    Abstract: The present disclosure provides a non-contact fatigue detection system and method based on rPPG. The system and method adopt multi-thread synchronous communication for real-time acquisition and processing of the rPPG signal, enabling fatigue status detection. In this setup, the first thread handles real-time rPPG data capture, storage and concatenation, while the second thread conducts real-time analysis and fatigue detection of the rPPG data. Through a combination of skin detection and LUV color space conversion, raw rPPG signal extraction is achieved, effectively eliminating interference from internal and external environmental facial noise. Subsequently, an adaptive multi-stage filtering process enhances the signal-to-noise ratio, and a multi-dimensional fusion CNN model ensures accurate detection of respiration and heart rate.
    Type: Application
    Filed: November 16, 2023
    Publication date: March 14, 2024
    Applicant: CENTRAL CHINA NORMAL UNIVERSITY
    Inventors: Liang ZHAO, Sannyuya LIU, Zongkai YANG, Xiaoliang ZHU, Jianwen SUN, Qing LI, Zhicheng DAI
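The two-thread arrangement described above (one thread capturing signal windows, a second analyzing them) is a standard producer-consumer pattern. A minimal sketch with Python's `threading` and `queue`, using a moving average as a stand-in for the patent's filtering and CNN stages (the window data and the analysis step are illustrative, not the claimed method):

```python
import threading
import queue

def capture_thread(sig_q, windows):
    # thread 1: simulate real-time rPPG capture; push signal windows
    for w in windows:
        sig_q.put(w)
    sig_q.put(None)  # sentinel: capture finished

def analysis_thread(sig_q, results):
    # thread 2: consume windows as they arrive and emit one smoothed
    # value per window (a crude stand-in for filtering + detection)
    while True:
        window = sig_q.get()
        if window is None:
            break
        results.append(sum(window) / len(window))

windows = [[1.0, 2.0, 3.0], [2.0, 2.0, 2.0], [0.0, 4.0, 2.0]]
q = queue.Queue()
out = []
t1 = threading.Thread(target=capture_thread, args=(q, windows))
t2 = threading.Thread(target=analysis_thread, args=(q, out))
t1.start(); t2.start()
t1.join(); t2.join()
```

The thread-safe queue gives the "synchronous communication" between capture and analysis; the `None` sentinel is one common way to signal shutdown.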
  • Publication number: 20240023884
    Abstract: Disclosed are a non-contact fatigue detection method and system. The method comprises: sending a millimeter-wave (mmWave) radar signal to a person being detected, receiving an echo signal reflected from the person, and determining a time-frequency domain feature, a non-linear feature and a time-series feature of a vital sign signal; acquiring a facial video image of the person, and performing facial detection and alignment on the basis of the facial video image, for extracting a time domain feature and a spatial domain feature of the person's face; fusing the determined features of the vital sign signal with the time domain feature and the spatial domain feature of the person's face, for obtaining a fused feature; and determining whether the person is in a fatigued state from the fused feature. By fusing the two detection techniques, the method effectively suppresses the interference of subjective and objective factors and improves the accuracy of fatigue detection.
    Type: Application
    Filed: June 23, 2021
    Publication date: January 25, 2024
    Applicant: CENTRAL CHINA NORMAL UNIVERSITY
    Inventors: Sannyuya LIU, Zongkai YANG, Liang ZHAO, Xiaoliang ZHU, Zhicheng DAI
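At its simplest, the fusion step described above concatenates the radar-derived vital-sign features with the facial spatio-temporal features and hands the result to a classifier. A toy sketch, where a weighted sum with a threshold stands in for the trained model (the feature values, weights and threshold are all hypothetical):

```python
def fuse_features(radar_feats, face_feats):
    """Concatenate radar-derived vital-sign features with facial
    spatio-temporal features into a single fused vector."""
    return list(radar_feats) + list(face_feats)

def is_fatigued(fused, weights, threshold=0.5):
    # toy linear scorer standing in for the trained fusion model
    score = sum(w * x for w, x in zip(weights, fused))
    return score > threshold

radar = [0.4, 0.3]   # e.g. time-frequency and non-linear summaries
face = [0.6, 0.2]    # e.g. time domain and spatial domain summaries
fused = fuse_features(radar, face)
fatigued = is_fatigued(fused, weights=[0.25, 0.25, 0.25, 0.25])
```

The point of the fusion is robustness: either modality alone is vulnerable to its own noise sources, while the combined vector lets the classifier weigh both.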
  • Publication number: 20240000345
    Abstract: Disclosed are a millimeter-wave (mmWave) radar-based non-contact identity recognition method and system. The method comprises: emitting an mmWave radar signal to a user to be recognized, and receiving an echo signal reflected from the user; performing clutter suppression and echo selection on the echo signal, and extracting a heartbeat signal; segmenting the heartbeat signal beat by beat, and determining its corresponding beat features; and comparing the beat features of the user with the beat feature sets of a standard user group. If the beat features of the user match one of the beat feature sets in the standard user group, the identity recognition is successful; otherwise, it is not. According to the method, the use of a heartbeat signal for identity recognition has high reliability, and the use of mmWave radar technology for non-contact identity recognition has high flexibility and accuracy.
    Type: Application
    Filed: April 22, 2021
    Publication date: January 4, 2024
    Applicant: CENTRAL CHINA NORMAL UNIVERSITY
    Inventors: Zongkai YANG, Sannyuya LIU, Liang ZHAO, Zhicheng DAI, Jianwen SUN, Qing LI
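The beat-by-beat segmentation and template matching described above can be sketched in a few lines. Here a crude threshold-based peak detector yields inter-beat intervals as the "beat features", and matching is a per-interval tolerance check against enrolled templates; the real method's features and matching rule are surely richer, so treat every name and number as an assumption:

```python
def segment_beats(signal, threshold=0.8):
    """Find local maxima above threshold (crude R-peak detection) and
    return the inter-beat intervals as beat features."""
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i] > threshold
             and signal[i] >= signal[i - 1]
             and signal[i] >= signal[i + 1]]
    return [b - a for a, b in zip(peaks, peaks[1:])]

def identify(user_feats, enrolled, tol=1):
    """Match the user's beat features against each enrolled template;
    succeed if every interval is within tol of a template's."""
    for name, template in enrolled.items():
        if len(template) == len(user_feats) and all(
                abs(u - t) <= tol for u, t in zip(user_feats, template)):
            return name
    return None

# synthetic heartbeat with peaks at samples 2, 7 and 12
signal = [0, 0, 1.0, 0, 0, 0, 0, 1.0, 0, 0, 0, 0, 1.0, 0]
beats = segment_beats(signal)
matched = identify(beats, {"alice": [5, 5], "bob": [3, 3]})
```

Returning `None` on no match corresponds to the abstract's "otherwise, not successful" branch.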
  • Publication number: 20230334862
    Abstract: The present invention discloses a construction method and system for a descriptive model of classroom teaching behavior events. The construction method includes the following steps: acquiring classroom teaching video data to be trained; dividing the classroom teaching video data to be trained into multiple events according to utterances of a teacher by using a voice activity detection technology; and performing multi-modal recognition on all events by using multiple artificial intelligence technologies to divide the events into sub-events in multiple dimensions, establishing an event descriptive model according to the sub-events, and describing various teaching behavior events of the teacher in a classroom. The present invention divides a classroom video according to voice, which can ensure the completeness of the teacher's non-verbal behavior in each event to the greatest extent.
    Type: Application
    Filed: September 7, 2021
    Publication date: October 19, 2023
    Applicant: CENTRAL CHINA NORMAL UNIVERSITY
    Inventors: Sannyuya LIU, Zengzhao CHEN, Zhicheng DAI, Shengming WANG, Xiuling HE, Baolin YI
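The event-division step above relies on voice activity detection: runs of speech become event boundaries. A minimal energy-threshold sketch of that idea (real VAD systems use far more robust features; the threshold and frame energies here are purely illustrative):

```python
def vad_segments(energies, threshold=0.5):
    """Split a per-frame energy sequence into utterance events: each
    event is a (start, end) index pair of consecutive frames whose
    energy exceeds the threshold."""
    events, start = [], None
    for i, e in enumerate(energies):
        if e > threshold and start is None:
            start = i                      # utterance begins
        elif e <= threshold and start is not None:
            events.append((start, i))      # utterance ends
            start = None
    if start is not None:
        events.append((start, len(energies)))
    return events

# two utterances separated by silence
energies = [0.1, 0.9, 0.8, 0.2, 0.1, 0.7, 0.6, 0.6, 0.0]
events = vad_segments(energies)
```

Each `(start, end)` pair would mark one teacher event, which downstream multi-modal recognizers would then subdivide into sub-events.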
  • Publication number: 20230298382
    Abstract: Provided are a facial expression recognition method and system combined with an attention mechanism. The method comprises: detecting faces comprised in each video frame in a video sequence, and extracting corresponding facial ROIs, so as to obtain facial pictures in each video frame; aligning the facial pictures in each video frame on the basis of location information of facial feature points of the facial pictures; inputting the aligned facial pictures into a residual neural network, and extracting spatial features of facial expressions corresponding to the facial pictures; inputting the spatial features of the facial expressions into a hybrid attention module to acquire fused features of the facial expressions; inputting the fused features of the facial expressions into a gated recurrent unit, and extracting temporal features of the facial expressions; and inputting the temporal features of the facial expressions into a fully connected layer, and classifying and recognizing the facial expressions.
    Type: Application
    Filed: May 23, 2023
    Publication date: September 21, 2023
    Applicant: CENTRAL CHINA NORMAL UNIVERSITY
    Inventors: Sannyuya LIU, Zongkai YANG, Xiaoliang ZHU, Zhicheng DAI, Liang ZHAO
  • Patent number: 11568012
    Abstract: The disclosure discloses a method for analyzing educational big data on the basis of maps. The method includes acquiring educational resource data and storing the educational resource data into databases according to certain data structures; constructing theme map layers for each analysis theme, classifying and indexing data according to the analysis themes, and superimposing the theme map layers onto base maps to form data maps; analyzing data of the theme map layers according to the analysis themes and acquiring theme analysis results; extracting the data of the multiple theme map layers in target regions, fusing the data and acquiring region analysis results; acquiring learning preferences of users; and combining the users' learning preferences with the content of user requests and searching the region analysis results in response to those requests. The disclosure further discloses a system for analyzing the educational big data on the basis of the maps.
    Type: Grant
    Filed: May 8, 2018
    Date of Patent: January 31, 2023
    Assignee: CENTRAL CHINA NORMAL UNIVERSITY
    Inventors: Zongkai Yang, Sannyuya Liu, Dongbo Zhou, Jianwen Sun, Jiangbo Shu, Hao Li
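The layer-construction and region-fusion steps above amount to indexing records by theme and region, then aggregating across layers for a target region. A small Python sketch of that data shape (the region names, themes, values and the mean aggregation are all hypothetical stand-ins for the patent's analysis):

```python
def build_layer(records, theme):
    """Index raw (region, theme, value) records into one theme layer:
    a mapping from region to that theme's values."""
    layer = {}
    for region, rec_theme, value in records:
        if rec_theme == theme:
            layer.setdefault(region, []).append(value)
    return layer

def region_summary(layers, region):
    """Fuse every theme layer's data for one target region, here by
    averaging the values recorded per theme."""
    return {theme: sum(vals) / len(vals)
            for theme, layer in layers.items()
            if (vals := layer.get(region))}

records = [("wuhan", "enrollment", 100), ("wuhan", "enrollment", 200),
           ("wuhan", "resources", 30), ("beijing", "enrollment", 50)]
layers = {t: build_layer(records, t) for t in ("enrollment", "resources")}
summary = region_summary(layers, "wuhan")
```

Superimposing these theme layers on a base map and filtering the summaries by user preference would complete the pipeline the abstract describes.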