Patents by Inventor Zongkai YANG

Zongkai YANG has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250246021
    Abstract: The present invention provides a facial expression recognition method and system based on multi-cue associative learning, belonging to the technical field of computer vision. The recognition method comprises: inputting a facial image to be recognized into a student model and/or a teacher model for facial expression recognition. A training method comprises: cropping a global facial sample image to obtain an upper half facial sample image and a lower half facial sample image; extracting cue features; acquiring adjacency matrices corresponding to the upper half facial sample image, the lower half facial sample image and the global facial sample image; fusing associated semantics by using a feature-level attention mechanism, so as to acquire the teacher model; supervising training of the teacher model by using a cross-entropy loss; and supervising training of the student model by using label distillation, KL divergence and a cross-entropy loss.
    Type: Application
    Filed: March 12, 2025
    Publication date: July 31, 2025
    Applicant: CENTRAL CHINA NORMAL UNIVERSITY
    Inventors: Jingying Chen, Ruyi XU, Zongkai YANG, Zhaoyang MA
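The student supervision described in the abstract above combines label distillation, KL divergence and a cross-entropy loss. Below is a minimal PyTorch sketch of such a combined loss; the temperature and weighting values are illustrative assumptions, not figures from the patent.

```python
# Minimal sketch (not the patented architecture): distilling a teacher's
# expression predictions into a student with KL divergence plus cross-entropy.
import torch
import torch.nn.functional as F

def student_distillation_loss(student_logits, teacher_logits, labels,
                              temperature=2.0, alpha=0.5):
    """Combine soft-label distillation (KL) with hard-label cross-entropy.

    `temperature` and `alpha` are illustrative hyperparameters, not values
    taken from the patent.
    """
    # Soft targets from the teacher, softened by the temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kl = F.kl_div(log_soft_student, soft_teacher,
                  reduction="batchmean") * temperature ** 2

    # Hard-label supervision on the ground-truth expression classes.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kl + (1.0 - alpha) * ce
```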
  • Patent number: 12310718
    Abstract: Disclosed are a millimeter-wave (mmWave) radar-based non-contact identity recognition method and system. The method comprises: emitting an mmWave radar signal to a user to be recognized, and receiving an echo signal reflected from the user; performing clutter suppression and echo selection on the echo signal, and extracting a heartbeat signal; segmenting the heartbeat signal beat by beat, and determining its corresponding beat features; and comparing the beat features of the user with the beat feature sets of a standard user group; if the beat features of the user match one of the beat feature sets in the standard user group, the identity recognition succeeds; otherwise, it fails. According to the method, the use of a heartbeat signal for identity recognition has high reliability, and the use of mmWave radar technology for non-contact identity recognition has high flexibility and accuracy.
    Type: Grant
    Filed: April 22, 2021
    Date of Patent: May 27, 2025
    Assignee: CENTRAL CHINA NORMAL UNIVERSITY
    Inventors: Zongkai Yang, Sannyuya Liu, Liang Zhao, Zhicheng Dai, Jianwen Sun, Qing Li
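The matching step in the abstract above compares the probe user's beat features against the beat-feature sets of a standard user group. The following is a rough sketch of that comparison under assumed feature shapes; the distance metric and threshold are placeholders, not values disclosed in the patent.

```python
# Illustrative sketch only: match per-beat features of an unknown user against
# enrolled beat-feature templates; fail if no template is close enough.
import numpy as np

def match_identity(user_beats, enrolled_templates, threshold=0.5):
    """Return the enrolled user ID whose template is closest to the probe
    beats, or None if no template is within `threshold` (hypothetical value).

    user_beats: array of shape (n_beats, n_features)
    enrolled_templates: dict mapping user_id -> array of shape (m_beats, n_features)
    """
    probe = user_beats.mean(axis=0)          # average beat-feature vector
    best_id, best_dist = None, np.inf
    for user_id, template in enrolled_templates.items():
        dist = np.linalg.norm(probe - template.mean(axis=0))
        if dist < best_dist:
            best_id, best_dist = user_id, dist
    # Recognition succeeds only when the closest template is within threshold.
    return best_id if best_dist < threshold else None
```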
  • Patent number: 12315392
    Abstract: Disclosed are a training method and system for autism language barriers based on an adaptive learning scaffold. The method includes the following steps: analyzing and assessing the state of the user before training to obtain an analysis result, and generating an initialized training path based on the analysis result; obtaining training question information, predicting a question-answering correct rate of the user based on user information and training question information, constructing a proximal development zone, and adding training questions that meet the accuracy requirements to the proximal development zone; and updating the initialized training path, classifying the training questions in the proximal development zone, and adding the classified training questions to the main training task or branch training task, so that the user performs learning and training according to the training task.
    Type: Grant
    Filed: June 13, 2023
    Date of Patent: May 27, 2025
    Assignee: CENTRAL CHINA NORMAL UNIVERSITY
    Inventors: Zongkai Yang, Lili Liu, Sannyuya Liu, Jingying Chen, Ceyu Deng, Jiaer Chen, Yutao Ling, Shulan Jin, Shengming Wang
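The abstract above constructs a proximal development zone from questions whose predicted correct rate meets an accuracy requirement, then classifies them into main and branch training tasks. The sketch below illustrates the idea under assumptions: a caller-supplied predictor, an illustrative accuracy band, and a split criterion that is not specified in the patent.

```python
# A minimal sketch, assuming a predictor that estimates the user's probability
# of answering each question correctly; the band limits are illustrative.
def build_proximal_development_zone(questions, predict_correct_rate,
                                    lower=0.4, upper=0.8):
    """Select questions that are neither too easy nor too hard.

    questions: iterable of question records
    predict_correct_rate: callable(question) -> float in [0, 1]
    """
    zone = []
    for q in questions:
        p = predict_correct_rate(q)
        if lower <= p <= upper:          # meets the accuracy requirement
            zone.append((q, p))
    # The split criterion below is an assumption; the patent only states that
    # zone questions are classified into a main task or a branch task.
    main_task = [q for q, p in zone if p < (lower + upper) / 2]
    branch_task = [q for q, p in zone if p >= (lower + upper) / 2]
    return main_task, branch_task
```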
  • Publication number: 20240355218
    Abstract: Disclosed are a training method and system for autism language barriers based on an adaptive learning scaffold. The method includes the following steps: analyzing and assessing the state of the user before training to obtain an analysis result, and generating an initialized training path based on the analysis result; obtaining training question information, predicting a question-answering correct rate of the user based on user information and training question information, constructing a proximal development zone, and adding training questions that meet the accuracy requirements to the proximal development zone; and updating the initialized training path, classifying the training questions in the proximal development zone, and adding the classified training questions to the main training task or branch training task, so that the user performs learning and training according to the training task.
    Type: Application
    Filed: June 13, 2023
    Publication date: October 24, 2024
    Applicant: CENTRAL CHINA NORMAL UNIVERSITY
    Inventors: Zongkai YANG, Lili Liu, Sannyuya LIU, Jingying Chen, Ceyu Deng, Jiaer Chen, Yutao Ling, Shulan Jin, Shengming WANG
  • Patent number: 12036021
    Abstract: The present disclosure provides a non-contact fatigue detection system and method based on rPPG. The system and method adopt multi-thread synchronous communication for real-time acquisition and processing of the rPPG signal, enabling fatigue status detection. In this setup, the first thread handles real-time rPPG data capture, storage and concatenation, while the second thread conducts real-time analysis and fatigue detection of rPPG data. Through a combination of skin detection and LUV color space conversion, rPPG raw signal extraction is achieved, effectively eliminating interference from internal and external environmental facial noise. Subsequently, an adaptive multi-stage filtering process enhances the signal-to-noise ratio, and a multi-dimensional fusion CNN model ensures accurate detection of respiration and heart rate.
    Type: Grant
    Filed: November 16, 2023
    Date of Patent: July 16, 2024
    Assignee: CENTRAL CHINA NORMAL UNIVERSITY
    Inventors: Liang Zhao, Sannyuya Liu, Zongkai Yang, Xiaoliang Zhu, Jianwen Sun, Qing Li, Zhicheng Dai
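The abstract above extracts a raw rPPG signal via skin detection and LUV color space conversion and then filters it to raise the signal-to-noise ratio. The snippet below sketches only those two stages, using OpenCV and SciPy; the band-pass cutoffs are assumed typical heart-rate limits, not values from the patent, and the skin masks are taken as given.

```python
# Rough sketch of two stages the abstract mentions (not the patented pipeline):
# average skin pixels in the LUV color space to form a raw rPPG trace, then
# band-pass filter it around plausible heart-rate frequencies.
import cv2
import numpy as np
from scipy.signal import butter, filtfilt

def raw_rppg_from_frames(frames, skin_masks):
    """frames: list of BGR images; skin_masks: matching boolean masks."""
    trace = []
    for frame, mask in zip(frames, skin_masks):
        luv = cv2.cvtColor(frame, cv2.COLOR_BGR2LUV)
        trace.append(luv[mask].mean(axis=0))   # mean L, U, V over skin pixels
    return np.asarray(trace)                   # shape (n_frames, 3)

def bandpass(signal, fps, low_hz=0.7, high_hz=3.0, order=3):
    """Keep roughly 42-180 bpm; cutoffs are illustrative, not from the patent."""
    nyq = fps / 2.0
    b, a = butter(order, [low_hz / nyq, high_hz / nyq], btype="band")
    return filtfilt(b, a, signal, axis=0)
```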
  • Patent number: 11967175
    Abstract: Provided are a facial expression recognition method and system combined with an attention mechanism. The method comprises: detecting faces contained in each video frame in a video sequence, and extracting corresponding facial ROIs, so as to obtain facial pictures in each video frame; aligning the facial pictures in each video frame on the basis of location information of facial feature points of the facial pictures; inputting the aligned facial pictures into a residual neural network, and extracting spatial features of facial expressions corresponding to the facial pictures; inputting the spatial features of the facial expressions into a hybrid attention module to acquire fused features of the facial expressions; inputting the fused features of the facial expressions into a gated recurrent unit, and extracting temporal features of the facial expressions; and inputting the temporal features of the facial expressions into a fully connected layer, and classifying and recognizing the facial expressions.
    Type: Grant
    Filed: May 23, 2023
    Date of Patent: April 23, 2024
    Assignee: CENTRAL CHINA NORMAL UNIVERSITY
    Inventors: Sannyuya Liu, Zongkai Yang, Xiaoliang Zhu, Zhicheng Dai, Liang Zhao
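The pipeline in the abstract above runs per-frame spatial features through a residual network, an attention step, a gated recurrent unit and a fully connected classifier. Below is a simplified PyTorch sketch of that flow; the backbone choice, layer sizes and the attention form are illustrative assumptions, not the patented hybrid attention module.

```python
# Simplified sketch of a residual-CNN + attention + GRU + FC pipeline.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ExpressionRecognizer(nn.Module):
    def __init__(self, num_classes=7, hidden=256):
        super().__init__()
        backbone = resnet18(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # 512-d features
        self.attn = nn.Linear(512, 1)          # frame-level attention weights
        self.gru = nn.GRU(512, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, clips):                  # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).flatten(1).view(b, t, 512)
        weights = torch.softmax(self.attn(feats), dim=1)   # (B, T, 1)
        feats = feats * weights                # re-weight frames by attention
        seq, _ = self.gru(feats)               # temporal features over the clip
        return self.fc(seq[:, -1])             # classify from the last step
```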
  • Publication number: 20240081705
    Abstract: The present disclosure provides a non-contact fatigue detection system and method based on rPPG. The system and method adopt multi-thread synchronous communication for real-time acquisition and processing of the rPPG signal, enabling fatigue status detection. In this setup, the first thread handles real-time rPPG data capture, storage and concatenation, while the second thread conducts real-time analysis and fatigue detection of rPPG data. Through a combination of skin detection and LUV color space conversion, rPPG raw signal extraction is achieved, effectively eliminating interference from internal and external environmental facial noise. Subsequently, an adaptive multi-stage filtering process enhances the signal-to-noise ratio, and a multi-dimensional fusion CNN model ensures accurate detection of respiration and heart rate.
    Type: Application
    Filed: November 16, 2023
    Publication date: March 14, 2024
    Applicant: CENTRAL CHINA NORMAL UNIVERSITY
    Inventors: Liang ZHAO, Sannyuya LIU, Zongkai YANG, Xiaoliang ZHU, Jianwen SUN, Qing LI, Zhicheng DAI
  • Patent number: 11908344
    Abstract: This application relates to teaching applications of virtual reality technology, and provides a three-dimensional (3D) integrated teaching field system based on a flipped platform and a method for operating the same. The system includes a device deployment module, a teaching resource matching module, an acquisition and processing module and an edge computing module. The method includes spatial division of the 3D comprehensive teaching field system in an offline classroom, device deployment, edge computing, holographic display, data acquisition, motion positioning, and construction of a teaching interactive environment.
    Type: Grant
    Filed: October 10, 2023
    Date of Patent: February 20, 2024
    Assignee: Central China Normal University
    Inventors: Zongkai Yang, Zheng Zhong, Di Wu, Xu Chen
  • Publication number: 20240038086
    Abstract: This application relates to teaching applications of virtual reality technology, and provides a three-dimensional (3D) integrated teaching field system based on a flipped platform and a method for operating the same. The system includes a device deployment module, a teaching resource matching module, an acquisition and processing module and an edge computing module. The method includes spatial division of the 3D comprehensive teaching field system in an offline classroom, device deployment, edge computing, holographic display, data acquisition, motion positioning, and construction of a teaching interactive environment.
    Type: Application
    Filed: October 10, 2023
    Publication date: February 1, 2024
    Inventors: Zongkai YANG, Zheng ZHONG, Di WU, Xu CHEN
  • Publication number: 20240023884
    Abstract: Disclosed are a non-contact fatigue detection method and system. The method comprises: sending a millimeter-wave (mmWave) radar signal to a person being detected, receiving an echo signal reflected from the person, and determining a time-frequency domain feature, a non-linear feature and a time-series feature of a vital sign signal; acquiring a facial video image of the person, and performing facial detection and alignment on the basis of the facial video image, for extracting a time domain feature and a spatial domain feature of the person's face; fusing the determined vital sign signal with the time domain feature and the spatial domain feature of the person's face, for obtaining a fused feature; and determining whether the person is in a fatigued state by the fused feature. By fusing the two detection techniques, the method effectively suppresses the interference of subjective and objective factors and improves the accuracy of fatigue detection.
    Type: Application
    Filed: June 23, 2021
    Publication date: January 25, 2024
    Applicant: CENTRAL CHINA NORMAL UNIVERSITY
    Inventors: Sannyuya LIU, Zongkai YANG, Liang ZHAO, Xiaoliang ZHU, Zhicheng DAI
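The abstract above fuses radar-derived vital-sign features with facial time-domain and spatial-domain features before deciding the fatigue state. The sketch below shows one simple fuse-and-classify arrangement; the feature dimensions and the classifier head are assumptions, not the patented design.

```python
# Minimal illustration of the fusion idea: concatenate radar vital-sign
# features with facial spatio-temporal features and classify the fused vector.
import torch
import torch.nn as nn

class FatigueFusionClassifier(nn.Module):
    def __init__(self, radar_dim=64, face_dim=128, hidden=64):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(radar_dim + face_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),              # fatigued vs. not fatigued
        )

    def forward(self, radar_features, face_features):
        fused = torch.cat([radar_features, face_features], dim=-1)
        return self.head(fused)
```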
  • Publication number: 20240000345
    Abstract: Disclosed are a millimeter-wave (mmWave) radar-based non-contact identity recognition method and system. The method comprises: emitting an mmWave radar signal to a user to be recognized, and receiving an echo signal reflected from the user; performing clutter suppression and echo selection on the echo signal, and extracting a heartbeat signal; segmenting the heartbeat signal beat by beat, and determining its corresponding beat features; and comparing the beat features of the user with the beat feature sets of a standard user group; if the beat features of the user match one of the beat feature sets in the standard user group, the identity recognition succeeds; otherwise, it fails. According to the method, the use of a heartbeat signal for identity recognition has high reliability, and the use of mmWave radar technology for non-contact identity recognition has high flexibility and accuracy.
    Type: Application
    Filed: April 22, 2021
    Publication date: January 4, 2024
    Applicant: CENTRAL CHINA NORMAL UNIVERSITY
    Inventors: Zongkai YANG, Sannyuya LIU, Liang ZHAO, Zhicheng DAI, Jianwen SUN, Qing LI
  • Publication number: 20230298382
    Abstract: Provided are a facial expression recognition method and system combined with an attention mechanism. The method comprises: detecting faces contained in each video frame in a video sequence, and extracting corresponding facial ROIs, so as to obtain facial pictures in each video frame; aligning the facial pictures in each video frame on the basis of location information of facial feature points of the facial pictures; inputting the aligned facial pictures into a residual neural network, and extracting spatial features of facial expressions corresponding to the facial pictures; inputting the spatial features of the facial expressions into a hybrid attention module to acquire fused features of the facial expressions; inputting the fused features of the facial expressions into a gated recurrent unit, and extracting temporal features of the facial expressions; and inputting the temporal features of the facial expressions into a fully connected layer, and classifying and recognizing the facial expressions.
    Type: Application
    Filed: May 23, 2023
    Publication date: September 21, 2023
    Applicant: CENTRAL CHINA NORMAL UNIVERSITY
    Inventors: Sannyuya LIU, Zongkai YANG, Xiaoliang ZHU, Zhicheng DAI, Liang ZHAO
  • Patent number: 11568012
    Abstract: The disclosure provides a method for analyzing educational big data on the basis of maps. The method includes acquiring educational resource data and storing the educational resource data into databases according to certain data structures; constructing theme map layers for each analysis theme, classifying and indexing data according to the analysis themes, and superimposing the theme map layers onto base maps to form data maps; analyzing data of the theme map layers according to the analysis themes and acquiring theme analysis results; extracting the data of the multiple theme map layers in target regions, fusing the data and acquiring region analysis results; acquiring learning preferences of users; and combining the learning preferences of the users with the content of user requests and searching the region analysis results in response to the user requests. The disclosure further discloses a system for analyzing the educational big data on the basis of the maps.
    Type: Grant
    Filed: May 8, 2018
    Date of Patent: January 31, 2023
    Assignee: CENTRAL CHINA NORMAL UNIVERSITY
    Inventors: Zongkai Yang, Sannyuya Liu, Dongbo Zhou, Jianwen Sun, Jiangbo Shu, Hao Li
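The abstract above superimposes theme map layers and extracts their data for target regions to produce region analysis results. As a small illustration only, with assumed data shapes (the patent does not specify them), region analysis can be pictured as gathering every theme layer's records for a given region:

```python
# Illustrative sketch, not the patented system: fuse per-theme records for one
# target region from a set of theme map layers.
from collections import defaultdict

def region_analysis(theme_layers, target_region):
    """theme_layers: dict mapping theme -> dict mapping region -> list of records."""
    fused = defaultdict(list)
    for theme, layer in theme_layers.items():
        fused[theme].extend(layer.get(target_region, []))
    return dict(fused)   # per-theme records for the target region
```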
  • Patent number: 11410570
    Abstract: A method for operating a comprehensive three-dimensional teaching field, including: collecting, by a sensor, depth data of a real teaching space, point cloud data of a teacher and voice data of the teacher; performing calculation and caching of an architecture for data storage, transmission and rendering of a virtual teaching space based on edge cloud; building a database model of the virtual teaching space by using an R-tree spatial index structure to realize distributed data storage; generating a virtual avatar model that updates in real time by positioning and tracking an action of a user; and displaying an image of the virtual teaching space on terminals of the teacher and a student through encoding, uploading, 5G rendering and decoding by using a 5G link.
    Type: Grant
    Filed: January 13, 2022
    Date of Patent: August 9, 2022
    Assignee: Central China Normal University
    Inventors: Zongkai Yang, Zheng Zhong, Di Wu, Xu Chen
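The abstract above builds the virtual teaching space's database model with an R-tree spatial index. As an illustration only, the third-party Python rtree package (an assumption; the patent names no library) can index 3D bounding boxes and answer overlap queries:

```python
# Sketch of indexing scene objects in a 3D space with an R-tree and querying
# the ones that overlap a view volume; object names and bounds are made up.
from rtree import index

props = index.Property()
props.dimension = 3                           # x, y, z bounds per object
space_index = index.Index(properties=props)

# Insert two hypothetical scene objects by their axis-aligned bounding boxes
# (min_x, min_y, min_z, max_x, max_y, max_z).
space_index.insert(1, (0.0, 0.0, 0.0, 1.0, 2.0, 1.0), obj="podium")
space_index.insert(2, (3.0, 0.0, 2.0, 4.0, 1.0, 3.0), obj="student_desk")

# Find everything overlapping a camera's view volume.
visible = [hit.object for hit in
           space_index.intersection((0.0, 0.0, 0.0, 5.0, 3.0, 5.0), objects=True)]
```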
  • Patent number: 11282404
    Abstract: A method for generating sense of reality of a virtual object in a teaching scene includes perception of a teaching space, generation of sense of reality of a virtual object and generation of real effect of dynamic interaction. The method is specifically performed through steps of collecting depth data of the teaching space; perceiving changes of a scene object in a field of view in real time; collecting a light intensity in the teaching scene to realize a virtual-real fused lighting effect; generating a shadow effect of the virtual object in real time by using ShadowMap; and guiding a teacher to use a multi-modal algorithm to complete a real-time interaction with the virtual object by setting interactive prompts of a sight target and a virtual hand.
    Type: Grant
    Filed: August 9, 2021
    Date of Patent: March 22, 2022
    Assignee: Central China Normal University
    Inventors: Zongkai Yang, Zheng Zhong, Di Wu, Ke Wu
  • Patent number: 11164289
    Abstract: A method for generating a high-precision and microscopic virtual learning resource includes acquisition of high-definition specimen images, generation of a 3D model of the surface of a specimen, and interactive display of a microscopic virtual learning resource.
    Type: Grant
    Filed: July 8, 2021
    Date of Patent: November 2, 2021
    Assignee: Central China Normal University
    Inventors: Zongkai Yang, Zheng Zhong, Di Wu
  • Patent number: 11151890
    Abstract: Provided is a 5G interactive distance dedicated teaching system based on holographic terminal and a working method thereof. The distance dedicated teaching system includes a data acquisition module, a data transmission module, a 5G cloud rendering module, a natural interaction module, a holographic display module and a teaching service module.
    Type: Grant
    Filed: April 7, 2021
    Date of Patent: October 19, 2021
    Assignee: Central South Normal University
    Inventors: Zongkai Yang, Zheng Zhong, Di Wu, Ke Wu
  • Publication number: 20210287455
    Abstract: A fusion method for movements of a teacher in a teaching scene includes normalization, motion perception and fusion of movements. According to interaction needs in an enhanced teaching scene, this application establishes movement information collection and a conversion of moving position and range to realize the normalization of movements.
    Type: Application
    Filed: May 20, 2021
    Publication date: September 16, 2021
    Inventors: Zongkai YANG, Zheng ZHONG, Di WU, Ke WU
  • Patent number: 11120640
    Abstract: A fusion method for movements of a teacher in a teaching scene includes normalization, motion perception and fusion of movements. According to interaction needs in an enhanced teaching scene, this application establishes movement information collection and a conversion of moving position and range to realize the normalization of movements.
    Type: Grant
    Filed: May 20, 2021
    Date of Patent: September 14, 2021
    Assignee: Central China Normal University
    Inventors: Zongkai Yang, Zheng Zhong, Di Wu, Ke Wu
  • Publication number: 20210225186
    Abstract: Provided is a 5G interactive distance dedicated teaching system based on holographic terminal and a working method thereof. The distance dedicated teaching system includes a data acquisition module, a data transmission module, a 5G cloud rendering module, a natural interaction module, a holographic display module and a teaching service module.
    Type: Application
    Filed: April 7, 2021
    Publication date: July 22, 2021
    Inventors: Zongkai YANG, Zheng ZHONG, Di WU, Ke WU