Patents by Inventor Thi Hanh Vu

Thi Hanh Vu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250104413
    Abstract: The invention proposes a method of shoplifting detection from surveillance cameras using artificial intelligence technology. The method consists of the following steps: (1) data preprocessing to create short videos from the input video stream, which are fed into the deep learning model in step 2; (2) person feature extraction and shoplifting behavior probability calculation using a hybrid deep learning model that combines a three-dimensional convolutional neural network (3D-CNN) and a two-dimensional convolutional neural network (2D-CNN) to learn the spatio-temporal features and the positions of people in the keyframes of the videos produced in step 1; (3) post-processing and issuing a warning if shoplifting behavior occurs. A minimal code sketch of this hybrid 3D/2D architecture appears after this listing.
    Type: Application
    Filed: September 20, 2024
    Publication date: March 27, 2025
    Applicant: VIETTEL GROUP
    Inventors: Thi Hanh Vu, Thi Hanh Le
  • Publication number: 20240144724
    Abstract: This invention proposes a method of crowd abnormal behavior detection from video using artificial intelligence, comprising three steps: step 1: data preprocessing; step 2: feature extraction and abnormality prediction using a three-dimensional convolutional neural network (3D CNN); step 3: post-processing and synthesizing information to issue warnings. A minimal sketch of this three-step pipeline appears after this listing.
    Type: Application
    Filed: September 28, 2023
    Publication date: May 2, 2024
    Applicant: VIETTEL GROUP
    Inventors: Hong Phuc Vu, Thi Hanh Vu, Hong Dang Nguyen, Manh Quy Nguyen
  • Publication number: 20240144489
    Abstract: A method for multi-object tracking from video. The method includes the following steps: (1) capturing frames from the streaming source and preprocessing the data; (2) extracting video features with one of three options: a 3D-CNN backbone followed by a Transformer Encoder; a Video Transformer Encoder; or a 2D-CNN Encoder taking a stack of frames as input, followed by a Transformer Encoder; (3) multi-object tracking using a new end-to-end multi-task deep learning model named JDAT (Joint Detection Association Transformer), then post-processing and updating the tracking state with a Temporal Aggregation Module (TAM). The deep learning models in steps 2 and 3 are trained simultaneously end to end with a loss accumulated over multiple timesteps (Collective Average Loss, or CAL). The model can also be pretrained on a weakly labeled image dataset in a self-supervised manner first, then fine-tuned on supervised video datasets with full tracking labels. A minimal sketch of training with a loss accumulated over timesteps appears after this listing.
    Type: Application
    Filed: October 3, 2023
    Publication date: May 2, 2024
    Applicant: VIETTEL GROUP
    Inventors: Hong Dang Nguyen, Thi Hanh Vu, Manh Quy Nguyen
  • Publication number: 20240054814
    Abstract: The present invention provides a method of masked face recognition from images using artificial intelligence technology, comprising four steps: step 1: generating images of faces wearing masks; step 2: training a deep learning model for detecting masked faces; step 3: training a deep learning model for extracting features from masked faces; step 4: building a full masked face recognition pipeline from images using the trained models from steps 2 and 3 together with post-processing algorithms. The method aims to improve the accuracy of identity verification in contexts where wearing masks has become widespread and compulsory. A minimal sketch of the step-4 pipeline appears after this listing.
    Type: Application
    Filed: August 10, 2023
    Publication date: February 15, 2024
    Applicant: VIETTEL GROUP
    Inventors: Thi Hanh Vu, Van Muoi Pham, Manh Quy Nguyen
  • Publication number: 20230011635
    Abstract: The present invention provides a method of facial expression recognition comprising three steps: step 1: collecting facial expression data, which addresses the problem of scarce, disparate, and biased data that causes overfitting when training deep learning models; step 2: designing a new deep learning network that is able to focus on salient regions of the face to extract and learn the important features of facial expressions by integrating ensemble attention modules into a basic deep network architecture such as ResNet; step 3: training the ensemble attention deep learning model from step 2 on the dataset collected in step 1, using a combination of two loss functions, ArcFace and Softmax, to reduce overfitting. A minimal sketch of the combined ArcFace and Softmax loss appears after this listing.
    Type: Application
    Filed: June 30, 2022
    Publication date: January 12, 2023
    Applicant: VIETTEL GROUP
    Inventors: Thi Hanh Vu, Quang Nhat Vo, Manh Quy Nguyen, Ngoc Duong Hoang, Khac Duy Ngoc Nguyen
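
The abstract of publication 20250104413 describes a hybrid model in which a 3D-CNN learns spatio-temporal features of a short clip while a 2D-CNN handles the keyframe. The PyTorch module below is a minimal sketch of that general idea only; the class name HybridClipClassifier, the layer sizes, and the concatenation-based fusion are assumptions, since the abstract does not disclose them.

```python
# Minimal sketch of a hybrid 3D-CNN / 2D-CNN clip classifier (assumed architecture;
# the patent abstract does not specify layer sizes or the fusion scheme).
import torch
import torch.nn as nn

class HybridClipClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # 3D branch: spatio-temporal features from the whole short clip.
        self.branch3d = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                    # -> (B, 16, 1, 1, 1)
        )
        # 2D branch: spatial features (e.g. person positions) from the keyframe.
        self.branch2d = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                    # -> (B, 16, 1, 1)
        )
        self.head = nn.Linear(32, num_classes)          # shoplifting-probability head

    def forward(self, clip: torch.Tensor, keyframe: torch.Tensor) -> torch.Tensor:
        # clip: (B, 3, T, H, W); keyframe: (B, 3, H, W)
        f3d = self.branch3d(clip).flatten(1)
        f2d = self.branch2d(keyframe).flatten(1)
        return self.head(torch.cat([f3d, f2d], dim=1))  # class logits

model = HybridClipClassifier()
logits = model(torch.randn(1, 3, 16, 112, 112), torch.randn(1, 3, 112, 112))
probs = logits.softmax(dim=1)   # step 3 would threshold this to raise a warning
```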
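
For publication 20240144724, the sketch below mirrors the three-step shape of the described pipeline: preprocessing frames into a clip tensor, scoring the clip with a 3D CNN, and thresholding the score to issue a warning. The tiny backbone, the preprocess and detect_abnormal helpers, and the 0.5 threshold are all illustrative placeholders, not the configuration claimed in the patent.

```python
# Minimal sketch of the three-step pipeline (hypothetical model and threshold;
# the abstract does not disclose the actual 3D CNN configuration).
import torch
import torch.nn as nn

# Step 2: a tiny 3D CNN that scores a clip as normal/abnormal.
backbone = nn.Sequential(
    nn.Conv3d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
    nn.Linear(8, 1),
)

def preprocess(frames: list[torch.Tensor]) -> torch.Tensor:
    """Step 1: stack frames (each 3xHxW) into a clip tensor (1, 3, T, H, W)."""
    return torch.stack(frames, dim=1).unsqueeze(0)

def detect_abnormal(frames: list[torch.Tensor], threshold: float = 0.5) -> bool:
    """Steps 2-3: score the clip and flag a warning above the threshold."""
    clip = preprocess(frames)
    score = torch.sigmoid(backbone(clip)).item()
    return score > threshold   # True -> raise a crowd-abnormality warning

frames = [torch.randn(3, 112, 112) for _ in range(16)]
print(detect_abnormal(frames))
```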
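
For publication 20240144489, the most self-contained piece to illustrate is the Collective Average Loss idea: the per-timestep loss is accumulated over a clip, averaged, and back-propagated in one update. The loop below sketches that pattern with a placeholder model and a cross-entropy stand-in; train_on_clip and all tensor shapes are hypothetical and do not reproduce JDAT or the Temporal Aggregation Module.

```python
# Minimal sketch of training with a loss accumulated over multiple timesteps
# (the "Collective Average Loss" idea described in the abstract). The model,
# per-timestep loss, and data shapes are placeholders, not the patented JDAT.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()   # stand-in for the per-timestep tracking loss

def train_on_clip(features_per_t: torch.Tensor, targets_per_t: torch.Tensor) -> float:
    """features_per_t: (T, B, 128); targets_per_t: (T, B) class ids per timestep."""
    total = 0.0
    for t in range(features_per_t.shape[0]):
        logits = model(features_per_t[t])          # predictions at timestep t
        total = total + criterion(logits, targets_per_t[t])
    cal = total / features_per_t.shape[0]          # average the accumulated loss
    optimizer.zero_grad()
    cal.backward()                                 # one update for the whole clip
    optimizer.step()
    return cal.item()

loss = train_on_clip(torch.randn(8, 4, 128), torch.randint(0, 10, (8, 4)))
```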
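
For publication 20240054814, the sketch below shows how the step-4 pipeline could be assembled once the step-2 detector and step-3 embedder exist: detect the masked face, extract a normalized embedding, and match it against a gallery by cosine similarity. The detector and embedder here are trivial placeholders, and the recognize helper and 0.4 threshold are assumptions for illustration.

```python
# Minimal sketch of the step-4 recognition pipeline: detect a masked face, embed it,
# and match against a gallery by cosine similarity. The detector and embedder are
# placeholders standing in for the trained models from steps 2 and 3.
import torch
import torch.nn as nn
import torch.nn.functional as F

def detector(image: torch.Tensor) -> list[tuple[int, int, int, int]]:
    """Placeholder for the step-2 masked-face detector: returns one full-frame box."""
    return [(0, 0, image.shape[2], image.shape[1])]   # (x1, y1, x2, y2)

embedder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 128))  # placeholder

def recognize(image: torch.Tensor, gallery: dict[str, torch.Tensor], thr: float = 0.4) -> str:
    """image: (3, 112, 112). Returns the best-matching identity or 'unknown'."""
    x1, y1, x2, y2 = detector(image)[0]            # step 2: masked-face detection
    face = image[:, y1:y2, x1:x2].unsqueeze(0)
    emb = F.normalize(embedder(face), dim=1)       # step 3: feature extraction
    best_id, best_sim = "unknown", thr
    for identity, ref in gallery.items():          # post-processing: nearest gallery match
        sim = F.cosine_similarity(emb, ref).item()
        if sim > best_sim:
            best_id, best_sim = identity, sim
    return best_id

gallery = {"alice": F.normalize(torch.randn(1, 128), dim=1)}
print(recognize(torch.randn(3, 112, 112), gallery))
```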
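
For publication 20230011635, the sketch below combines an ArcFace-style additive angular margin loss with a plain softmax cross-entropy, as step 3 describes. The ArcFacePlusSoftmax class name and the scale s, margin m, and mixing weight alpha are illustrative choices; the abstract does not specify how the two losses are weighted.

```python
# Minimal sketch of combining an ArcFace-style margin loss with a plain softmax
# cross-entropy. Margin, scale, and weighting are illustrative values only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcFacePlusSoftmax(nn.Module):
    def __init__(self, feat_dim: int, num_classes: int, s: float = 30.0,
                 m: float = 0.5, alpha: float = 0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.s, self.m, self.alpha = s, m, alpha

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Cosine similarity between normalized features and class weights.
        cosine = F.linear(F.normalize(features), F.normalize(self.weight))
        # ArcFace branch: add angular margin m to the target-class angle.
        theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))
        margin_cos = torch.cos(theta + self.m)
        onehot = F.one_hot(labels, cosine.size(1)).float()
        arc_logits = self.s * (onehot * margin_cos + (1 - onehot) * cosine)
        arc_loss = F.cross_entropy(arc_logits, labels)
        # Plain softmax branch on the unmargined (scaled) cosine logits.
        soft_loss = F.cross_entropy(self.s * cosine, labels)
        return self.alpha * arc_loss + (1 - self.alpha) * soft_loss

criterion = ArcFacePlusSoftmax(feat_dim=512, num_classes=7)
loss = criterion(torch.randn(8, 512), torch.randint(0, 7, (8,)))
```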