Patents by Inventor Yen-Chu Peng

Yen-Chu Peng has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11514715
    Abstract: A deepfake video detection system including an input data detection module of a video recognition unit for setting a target video; a data pre-processing unit for detecting eye features from the face in the target video; a feature extraction module for extracting the eye features and feeding them to a long-term recurrent convolutional neural network (LRCN); a learning module that performs sequence learning with a long short-term memory (LSTM) network; a state prediction module that predicts the output of each neuron and uses the LSTM model to output a quantized eye state; a state quantification module that compares stored data from a normal video with the quantified eye-state information of the target video; and an output data recognition module that outputs the recognition result.
    Type: Grant
    Filed: May 20, 2021
    Date of Patent: November 29, 2022
    Assignee: NATIONAL CHENG KUNG UNIVERSITY
    Inventors: Jung-Shian Li, I-Hsien Liu, Chuan-Kang Liu, Po-Yi Wu, Yen-Chu Peng
  • Publication number: 20220129664
    Abstract: A deepfake video detection system including an input data detection module of a video recognition unit for setting a target video; a data pre-processing unit for detecting eye features from the face in the target video; a feature extraction module for extracting the eye features and feeding them to a long-term recurrent convolutional neural network (LRCN); a learning module that performs sequence learning with a long short-term memory (LSTM) network; a state prediction module that predicts the output of each neuron and uses the LSTM model to output a quantized eye state; a state quantification module that compares stored data from a normal video with the quantified eye-state information of the target video; and an output data recognition module that outputs the recognition result.
    Type: Application
    Filed: May 20, 2021
    Publication date: April 28, 2022
    Inventors: Jung-Shian Li, I-Hsien Liu, Chuan-Kang Liu, Po-Yi Wu, Yen-Chu Peng
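The final stage described in both abstracts — quantizing per-frame eye states and comparing the target video's eye-state statistics against stored data from a normal video — can be sketched in miniature. This is a conceptual illustration only, not the patented implementation: the patent uses an LRCN/LSTM to predict per-frame eye states, whereas here a simple aperture threshold (a hypothetical stand-in) plays that role, and blink rate is used as the compared statistic.

```python
def quantize_eye_states(apertures, threshold=0.2):
    """Map per-frame eye-aperture scores to binary states (1 = open, 0 = closed).

    In the patented system this quantization would come from the LSTM's
    predicted outputs; a fixed threshold stands in for illustration.
    """
    return [1 if a > threshold else 0 for a in apertures]


def blink_count(states):
    """Count open-to-closed transitions (blinks) in a quantized state sequence."""
    return sum(1 for prev, cur in zip(states, states[1:]) if prev == 1 and cur == 0)


def blink_rate(apertures, fps=30.0):
    """Blinks per second for a sequence of per-frame aperture scores."""
    states = quantize_eye_states(apertures)
    return blink_count(states) * fps / max(len(states), 1)


def is_suspicious(target_apertures, normal_rate, fps=30.0, tolerance=0.5):
    """Flag the target video if its blink rate deviates from the normal
    baseline by more than the given relative tolerance — the comparison
    step the abstract assigns to the state quantification module."""
    rate = blink_rate(target_apertures, fps)
    return abs(rate - normal_rate) > tolerance * normal_rate
```

For example, a 300-frame clip whose eyes never close (a common artifact in early deepfakes) yields a blink rate of zero and is flagged against any nonzero normal baseline.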