Patents by Inventor Feiyue Huang

Feiyue Huang has filed for patents protecting the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO). Where an abstract describes a concrete technique, a brief illustrative code sketch, keyed by patent or publication number, follows the listing.

  • Patent number: 10607120
    Abstract: Disclosed are a training method and apparatus for a CNN model, belonging to the field of image recognition. The method comprises: performing convolution, max pooling, and horizontal pooling operations on training images to obtain second feature images; determining feature vectors from the second feature images; processing the feature vectors to obtain category probability vectors; calculating a category error from the category probability vectors and an initial category; adjusting the model parameters based on the category error; and iterating this adjustment, taking the parameters in hand when the number of iterations reaches a preset count as the parameters of the trained CNN model. After the convolution and max pooling operations at each convolution layer, a horizontal pooling operation is performed (see the sketch after this listing).
    Type: Grant
    Filed: March 30, 2018
    Date of Patent: March 31, 2020
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventors: Xiang Bai, Feiyue Huang, Xiaowei Guo, Cong Yao, Baoguang Shi
  • Patent number: 10607066
    Abstract: The present disclosure discloses a living body identification method, an information generation method, and a terminal, belonging to the field of biometric feature recognition. The method includes: providing lip-language prompt information that includes at least two target characters, the target characters being at least one of: characters of the same lip shape, characters of opposite lip shapes, or characters whose lip-shape similarity falls within a preset range; collecting at least two image frames; detecting, while the to-be-identified object reads the target characters, whether the object's lip changes across the frames meet a preset condition; and determining that the object is a living body if the preset condition is met (see the sketch after this listing).
    Type: Grant
    Filed: March 17, 2017
    Date of Patent: March 31, 2020
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventors: Jilin Li, Chengjie Wang, Feiyue Huang, Yongjian Wu, Hui Ni, Ruixin Zhang, Guofu Tan
  • Patent number: 10599913
    Abstract: A face model matrix training method, apparatus, and storage medium are provided. The method includes: obtaining a face image library comprising k groups of face images (k > 2, k being an integer), each group including at least one face image of at least one person; separately parsing each of the k groups and, from the parsing results, calculating a first matrix and a second matrix, the first matrix being the intra-group covariance matrix of the facial features of each group and the second matrix being the inter-group covariance matrix of the facial features of the k groups; and training the face model matrices from the first matrix and the second matrix (see the sketch after this listing).
    Type: Grant
    Filed: July 11, 2019
    Date of Patent: March 24, 2020
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventors: Shouhong Ding, Jilin Li, Chengjie Wang, Feiyue Huang, Yongjian Wu, Guofu Tan
  • Publication number: 20190372972
    Abstract: An identity verification method performed at a terminal includes: playing, in audio form and at a speed corresponding to the information, action guide information including mouth-shape guide information selected from a preset action guide information library, and collecting a corresponding set of action images within a preset time window; performing matching detection between the collected action images and the action guide information to obtain a living body detection result indicating whether a living body appears in the collected images; if the result indicates that a living body exists: collecting user identity information and verifying it to obtain a user identity information verification result; and determining the identity verification result from the user identity information verification result (see the sketch after this listing).
    Type: Application
    Filed: August 15, 2019
    Publication date: December 5, 2019
    Inventors: Feiyue Huang, Jilin Li, Guofu Tan, Xiaoli Jiang, Dan Wu, Junwu Chen, Jianguo Xie, Wei Guo, Yihui Liu, Jiandong Xie
  • Publication number: 20190332847
    Abstract: A face model matrix training method, apparatus, and storage medium are provided. The method includes: obtaining a face image library comprising k groups of face images (k > 2, k being an integer), each group including at least one face image of at least one person; separately parsing each of the k groups and, from the parsing results, calculating a first matrix and a second matrix, the first matrix being the intra-group covariance matrix of the facial features of each group and the second matrix being the inter-group covariance matrix of the facial features of the k groups; and training the face model matrices from the first matrix and the second matrix (see the sketch after this listing).
    Type: Application
    Filed: July 11, 2019
    Publication date: October 31, 2019
    Inventors: Shouhong Ding, Jilin Li, Chengjie Wang, Feiyue Huang, Yongjian Wu, Guofu Tan
  • Patent number: 10438077
    Abstract: A face liveness detection method includes: outputting a prompt to complete one or more specified actions in sequence within a specified time period; obtaining a face video; detecting a reference face image frame in the face video using a face detection method; locating a facial keypoint in the reference frame; tracking the facial keypoint in one or more subsequent frames; determining a state parameter of one of the specified actions from the facial keypoint using a continuity analysis method; and determining whether that action is completed according to the continuity of the state parameter (see the sketch after this listing).
    Type: Grant
    Filed: October 9, 2017
    Date of Patent: October 8, 2019
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventors: Chengjie Wang, Jilin Li, Feiyue Huang, Yongjian Wu
  • Patent number: 10432624
    Abstract: An identity verification method performed at a terminal includes: displaying, and/or playing in audio form, action guide information selected from a preset action guide information library, and collecting a corresponding set of action images within a preset time window; performing matching detection between the collected action images and the action guide information to obtain a living body detection result indicating whether a living body appears in the collected images; if the result indicates that a living body exists: collecting user identity information and verifying it to obtain a user identity information verification result; and determining the identity verification result from the user identity information verification result (see the sketch after this listing).
    Type: Grant
    Filed: June 23, 2017
    Date of Patent: October 1, 2019
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventors: Feiyue Huang, Jilin Li, Guofu Tan, Xiaoli Jiang, Dan Wu, Junwu Chen, Jianguo Xie, Wei Guo, Yihui Liu, Jiandong Xie
  • Publication number: 20190266385
    Abstract: Disclosed are an evaluation method and an evaluation device for a facial key point positioning result. In some embodiments, the evaluation method includes: acquiring a facial image and one or more positioning result coordinates of key points of the facial image; normalizing the positioning result coordinates against an average facial model to obtain a normalized facial image; and extracting a facial feature value of the normalized facial image and calculating an evaluation result from the facial feature value and a weight vector (see the sketch after this listing).
    Type: Application
    Filed: May 10, 2019
    Publication date: August 29, 2019
    Inventors: Chengjie Wang, Jilin Li, Feiyue Huang, Kekai Sheng, Weiming Dong
  • Patent number: 10395095
    Abstract: A face model matrix training method, apparatus, and storage medium are provided. The method includes: obtaining a face image library comprising k groups of face images (k > 2, k being an integer), each group including at least one face image of at least one person; separately parsing each of the k groups and, from the parsing results, calculating a first matrix and a second matrix, the first matrix being the intra-group covariance matrix of the facial features of each group and the second matrix being the inter-group covariance matrix of the facial features of the k groups; and training the face model matrices from the first matrix and the second matrix (see the sketch after this listing).
    Type: Grant
    Filed: September 13, 2017
    Date of Patent: August 27, 2019
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventors: Shouhong Ding, Jilin Li, Chengjie Wang, Feiyue Huang, Yongjian Wu, Guofu Tan
  • Publication number: 20190221202
    Abstract: A statistical parameter modeling method is performed by a server. After obtaining model training data comprising a text feature sequence and a corresponding original speech sample sequence, the server forms an original vector matrix by pairing each text feature sample point with the matching speech sample point, inputs the matrix into a statistical parameter model for training, and performs a non-linear mapping calculation on it in a hidden layer to output the corresponding predicted speech sample points. The server then obtains the model parameters by minimizing the difference between each predicted speech sample point and the corresponding original one, yielding the target statistical parameter model (see the sketch after this listing).
    Type: Application
    Filed: March 26, 2019
    Publication date: July 18, 2019
    Inventors: Wei Li, Hangyu Yan, Ke Li, Yongjian Wu, Feiyue Huang
  • Publication number: 20190205623
    Abstract: A facial tracking method is provided. The method includes: obtaining, from a video stream, the image that currently needs to be processed as the current image frame; and obtaining the coordinates of the facial key points in the previous image frame together with the confidence level of that frame. When the confidence level is above a preset threshold, the coordinates of the facial key points in the current frame are calculated from those in the previous frame, and multi-face recognition is performed on the current frame according to the calculated coordinates. The method also includes calculating a confidence level for the current frame's key point coordinates and moving on to the next frame until all image frames have been processed (see the sketch after this listing).
    Type: Application
    Filed: March 8, 2019
    Publication date: July 4, 2019
    Inventors: Chengjie Wang, Hui Ni, Yandan Zhao, Yabiao Wang, Shouhong Ding, Shaoxin Li, Ling Zhao, Jilin Li, Yongjian Wu, Feiyue Huang, Yicong Liang
  • Patent number: 10331940
    Abstract: Disclosed are an evaluation method and an evaluation device for a facial key point positioning result. In some embodiments, the evaluation method includes: acquiring a facial image and one or more positioning result coordinates of key points of the facial image; normalizing the positioning result coordinates against an average facial model to obtain a normalized facial image; and extracting a facial feature value of the normalized facial image and calculating an evaluation result from the facial feature value and a weight vector (see the sketch after this listing).
    Type: Grant
    Filed: August 7, 2017
    Date of Patent: June 25, 2019
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventors: Chengjie Wang, Jilin Li, Feiyue Huang, Kekai Sheng, Weiming Dong
  • Patent number: 10325181
    Abstract: An image classification method is provided. The method includes: inputting a to-be-classified image into a plurality of neural network models; obtaining the data output by multiple specified non-input layers of each model to generate a plurality of image features, one per model; inputting each image feature into a linear classifier trained on features from the corresponding model to determine whether an image belongs to a preset class; obtaining, from each model, a probability that the to-be-classified image contains an object image of the preset class; and deciding, from the obtained probabilities, whether the to-be-classified image contains the object image of the preset class (see the sketch after this listing).
    Type: Grant
    Filed: September 13, 2017
    Date of Patent: June 18, 2019
    Assignees: Tencent Technology (Shenzhen) Company Limited, Tsinghua University
    Inventors: Kun Xu, Xiaowei Guo, Feiyue Huang, Ruixin Zhang, Juhong Wang, Shimin Hu, Bin Liu
  • Publication number: 20190130920
    Abstract: A method and an apparatus for training a voiceprint recognition system are provided. The method includes: obtaining a voice training data set comprising voice segments of multiple users; determining the identity vectors of all the voice segments; identifying, among them, the identity vectors of voice segments belonging to the same user and placing each user's identity vectors into one user category; and taking an identity vector in a user category as a first identity vector. The method further includes normalizing the first identity vector with a normalization matrix, a first value being the sum of the similarity degrees between that first identity vector and the other identity vectors in its category; and training the normalization matrix, outputting its trained value once it maximizes the sum of the first values over all user categories (see the sketch after this listing).
    Type: Application
    Filed: December 24, 2018
    Publication date: May 2, 2019
    Inventors: Wei Li, Binghua Qian, Xingming Jin, Ke Li, Fuzhang Wu, Yongjian Wu, Feiyue Huang
  • Publication number: 20190115031
    Abstract: An identity vector generation method is provided. The method includes: obtaining to-be-processed speech data and extracting the corresponding acoustic features from it. A posterior probability that each acoustic feature belongs to each Gaussian distribution component of a speaker background model is calculated to obtain a statistic. The statistic is mapped into a statistic space to obtain a reference statistic, the statistic space being built from the statistics of speech samples exceeding a threshold speech duration. A corrected statistic is determined from the calculated statistic and the reference statistic, and an identity vector is generated from the corrected statistic (see the sketch after this listing).
    Type: Application
    Filed: December 7, 2018
    Publication date: April 18, 2019
    Applicant: Tencent Technology (Shenzhen) Company Limited
    Inventors: Wei Li, Binghua Qian, Xingming Jin, Ke Li, Fuzhang Wu, Yongjian Wu, Feiyue Huang
  • Publication number: 20180349590
    Abstract: A sign-in method and server based on facial recognition are provided. The method includes: receiving a face image of a sign-in user from a sign-in terminal; detecting, from that face image, whether a target registered user matching the sign-in user exists in a pre-stored registration set, the registration set including a face image of at least one registered user; and confirming the target registered user as signed in successfully if such a user exists in the registration set (see the sketch after this listing).
    Type: Application
    Filed: August 10, 2018
    Publication date: December 6, 2018
    Inventors: Feiyue Huang, Yongjian Wu, Guofu Tan, Jilin Li, Zhibo Chen, Xiaoqing Liang, Zhiwei Tao, Kejing Zhou, Ke Mei
  • Publication number: 20180286410
    Abstract: A voice data processing method and apparatus are provided. The method includes: obtaining an I-Vector for each voice sample and determining a target seed sample among the voice samples; calculating a first cosine distance between the I-Vector of the target seed sample and the I-Vector of each target remaining voice sample, a target remaining voice sample being any voice sample other than the seed; and filtering the voice samples according to the first cosine distance, retaining the target voice samples whose first cosine distance is greater than a first threshold (see the sketch after this listing).
    Type: Application
    Filed: March 3, 2017
    Publication date: October 4, 2018
    Applicant: Tencent Technology (Shenzhen) Company Limited
    Inventors: Xingming Jin, Wei Li, Fangmai Zheng, Fuzhang Wu, Bilei Zhu, Binghua Qian, Ke Li, Yongjian Wu, Feiyue Huang
  • Publication number: 20180277103
    Abstract: The embodiments of the present disclosure disclose a method for constructing a speech decoding network for digit speech recognition. The method comprises: acquiring training data obtained by recording digit speech, the training data comprising a plurality of speech segments, each containing a plurality of spoken digits; performing acoustic feature extraction on the training data to obtain a feature sequence for each speech segment; performing progressive training, starting from a mono-phoneme acoustic model, to obtain an acoustic model; and acquiring a language model and constructing the speech decoding network from the language model and the trained acoustic model (see the sketch after this listing).
    Type: Application
    Filed: May 30, 2018
    Publication date: September 27, 2018
    Inventors: Fuzhang Wu, Binghua Qian, Wei Li, Ke Li, Yongjian Wu, Feiyue Huang
  • Patent number: 10068128
    Abstract: The present disclosure pertains to the field of image processing technologies and discloses a face key point positioning method and a terminal. The method includes: obtaining a face image; recognizing a face frame in the face image; determining the positions of n key points of a target face in the face frame using the face frame and a first positioning algorithm; screening candidate faces to select a similar face whose corresponding key point positions match those of the n key points of the target face; and determining the positions of m key points of the selected similar face using a second positioning algorithm, m being a positive integer. This resolves the problem in the related technologies that the key point positions obtained by a terminal deviate considerably, improving positioning accuracy (see the sketch after this listing).
    Type: Grant
    Filed: February 21, 2017
    Date of Patent: September 4, 2018
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventors: Chengjie Wang, Jilin Li, Feiyue Huang, Yongjian Wu
  • Patent number: 10055879
    Abstract: A 3D human face reconstruction method and apparatus, and a server are provided. In some embodiments, the method includes: determining feature points on an acquired 2D human face image; determining posture parameters of the face from the feature points and adjusting the posture of a universal 3D human face model accordingly; determining the points on the universal 3D model that correspond to the feature points, and adjusting the corresponding points that are occluded to obtain a preliminary 3D human face model; and performing deformation adjustment on the preliminary model and texture mapping on the deformed model to obtain the final 3D human face (see the sketch after this listing).
    Type: Grant
    Filed: July 17, 2017
    Date of Patent: August 21, 2018
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventors: Chengjie Wang, Jilin Li, Feiyue Huang, Lei Zhang
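
Illustrative code sketches

The sketches below illustrate the techniques described in the abstracts above, keyed by patent or publication number. They are illustrative only: every function name, parameter, and threshold is an assumption introduced for the example, and each sketch shows one plausible realization rather than the claimed implementation.

For patent 10607120, a minimal sketch of the horizontal pooling step, with NumPy arrays standing in for the network's feature maps: each map produced by a convolution and max-pooling stage is max-pooled along its width, and the per-row maxima of all maps are concatenated into the feature vector passed on for classification. The names and toy shapes are not from the patent.

```python
import numpy as np

def horizontal_pool(feature_maps):
    """Max-pool each feature map along its width, one value per row.

    feature_maps: array of shape (channels, height, width), e.g. the
    output of a convolution + max-pooling stage.
    Returns an array of shape (channels, height).
    """
    return feature_maps.max(axis=2)

def to_feature_vector(feature_maps):
    # Concatenate the per-row maxima of all channels into one vector,
    # as the abstract does before the classification layers.
    return horizontal_pool(feature_maps).reshape(-1)

# Toy usage: 4 feature maps of size 8x16 -> a 32-dimensional vector.
maps = np.random.rand(4, 8, 16)
vec = to_feature_vector(maps)
assert vec.shape == (32,)
```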
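For patent 10607066, a sketch of one way the "lip changes meet a preset condition" check could look, assuming a landmark detector already provides named mouth points per frame. The landmark keys, the openness ratio, and the variation threshold are all assumptions; a real system would compare the observed sequence against the sequence expected for the prompted characters.

```python
import numpy as np

def mouth_openness(landmarks):
    # Ratio of vertical lip gap to mouth width; `landmarks` is a dict of
    # (x, y) points with hypothetical keys (any detector would do).
    top = np.array(landmarks["lip_top"])
    bottom = np.array(landmarks["lip_bottom"])
    left = np.array(landmarks["mouth_left"])
    right = np.array(landmarks["mouth_right"])
    return np.linalg.norm(top - bottom) / (np.linalg.norm(left - right) + 1e-6)

def looks_alive(frame_landmarks, min_variation=0.15):
    """Decide liveness from lip-shape change across at least two frames:
    the lips must actually have moved by a preset amount while the
    prompted characters were read."""
    ratios = [mouth_openness(lm) for lm in frame_landmarks]
    return (max(ratios) - min(ratios)) >= min_variation
```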
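Patent 10599913, application 20190332847, and patent 10395095 share one abstract. A sketch of the two matrices it names, using a common LDA-style construction; the patent's exact estimators may differ.

```python
import numpy as np

def face_model_matrices(groups):
    """groups: list of k arrays of shape (n_i, d), one per group of face
    images, each row a facial feature vector, with k > 2.
    Returns (S_intra, S_inter): the averaged within-group covariance and
    the covariance of the group means about the overall mean."""
    d = groups[0].shape[1]
    overall_mean = np.vstack(groups).mean(axis=0)
    S_intra = np.zeros((d, d))
    S_inter = np.zeros((d, d))
    for g in groups:
        mu = g.mean(axis=0)
        centered = g - mu
        S_intra += centered.T @ centered / len(g)   # within-group scatter
        diff = (mu - overall_mean)[:, None]
        S_inter += diff @ diff.T                    # between-group scatter
    return S_intra / len(groups), S_inter / len(groups)
```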
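For application 20190372972 and patent 10432624, a sketch of the overall verification flow. The capture_frame, detect_action, and verify_user callbacks are hypothetical stand-ins for the camera, the matching detector, and the identity back end.

```python
import random
import time

def verify_identity(guide_library, capture_frame, detect_action, verify_user,
                    window_s=5.0):
    """Prompt with action guide information, check liveness against the
    collected action images, then verify identity only if a living body
    was detected."""
    guide = random.choice(guide_library)           # pick action guide info
    print(f"Please perform: {guide['prompt']}")    # display and/or play audio
    frames, deadline = [], time.time() + window_s
    while time.time() < deadline:                  # preset time window
        frames.append(capture_frame())
    if not detect_action(frames, guide):           # matching detection
        return {"living_body": False, "verified": False}
    identity_ok = verify_user()                    # e.g. face or credential check
    return {"living_body": True, "verified": identity_ok}
```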
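For patent 10438077, a sketch of a continuity check over a per-frame action state parameter, for example mouth openness for an "open your mouth" prompt. The thresholds are illustrative, not from the patent.

```python
def action_completed(state_params, low=0.2, high=0.6, max_jump=0.15):
    """state_params: the tracked state parameter per frame. The action
    counts as completed only if the parameter moves from below `low` to
    above `high` without implausible frame-to-frame jumps, which a
    photo-swap attack would typically produce."""
    for prev, cur in zip(state_params, state_params[1:]):
        if abs(cur - prev) > max_jump:      # broken continuity: reject
            return False
    return min(state_params) < low and max(state_params) > high
```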
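For application 20190266385 and patent 10331940 (same abstract), a sketch of the two steps named there: normalizing the predicted key points against an average facial model, then scoring a feature of the normalized result with a weight vector. A similarity (Procrustes-style) alignment is assumed for the normalization.

```python
import numpy as np

def normalize_to_mean_shape(coords, mean_shape):
    """Align predicted key points onto the average facial model with a
    least-squares similarity transform. coords, mean_shape: (n, 2)."""
    c = coords - coords.mean(axis=0)
    m = mean_shape - mean_shape.mean(axis=0)
    u, s, vt = np.linalg.svd(c.T @ m)     # orthogonal Procrustes
    rot = u @ vt
    scale = s.sum() / (c ** 2).sum()
    return scale * c @ rot + mean_shape.mean(axis=0)

def evaluation_score(feature_vector, weight_vector):
    # Final step of the abstract: a facial feature extracted from the
    # normalized image, scored against a learned weight vector.
    return float(np.dot(feature_vector, weight_vector))
```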
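For application 20190221202, a PyTorch sketch of a sample-level statistical parameter model: a hidden layer applies a non-linear mapping to a vector pairing text features with recent speech samples, and the parameters are fitted by minimizing prediction error (the "smallest difference" principle). The architecture and sizes are assumptions.

```python
import torch

class SampleModel(torch.nn.Module):
    """Predicts the next speech sample from text features plus the
    `context` most recent samples."""
    def __init__(self, text_dim, context, hidden=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(text_dim + context, hidden),
            torch.nn.Tanh(),                    # the non-linear mapping
            torch.nn.Linear(hidden, 1))

    def forward(self, text_feats, past_samples):
        return self.net(torch.cat([text_feats, past_samples], dim=-1))

def train_step(model, opt, text_feats, past_samples, targets):
    # Fit by minimizing the difference between predicted and original samples.
    pred = model(text_feats, past_samples)
    loss = torch.nn.functional.mse_loss(pred, targets)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return float(loss)
```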
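For application 20190205623, a sketch of the confidence-gated tracking loop: reuse the previous frame's key points when their confidence clears a threshold, otherwise re-run full detection. The three callbacks are hypothetical detector and tracker components.

```python
def track_faces(frames, detect_keypoints, track_from_prior, confidence_of,
                threshold=0.9):
    """Process every frame, preferring cheap propagation from the previous
    frame's key points over full re-detection."""
    prev_points, prev_conf, results = None, 0.0, []
    for frame in frames:
        if prev_points is not None and prev_conf > threshold:
            points = track_from_prior(frame, prev_points)  # cheap update
        else:
            points = detect_keypoints(frame)               # full re-detect
        prev_conf = confidence_of(frame, points)           # score this frame
        prev_points = points
        results.append((points, prev_conf))
    return results
```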
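For patent 10325181, a sketch of the ensemble decision, assuming each model exposes a hypothetical features() hook returning its specified non-input-layer outputs, and that the per-model linear classifiers follow the scikit-learn predict_proba convention. Averaging is one simple fusion rule; the patent does not prescribe it.

```python
import numpy as np

def ensemble_probability(image, models, classifiers):
    """classifiers[i] is a linear classifier trained on features from
    models[i] to recognize the preset class."""
    probs = []
    for model, clf in zip(models, classifiers):
        feat = np.asarray(model.features(image)).reshape(1, -1)
        probs.append(clf.predict_proba(feat)[0, 1])   # P(preset class)
    return float(np.mean(probs))                      # simple average fusion

def contains_preset_class(image, models, classifiers, threshold=0.5):
    # Final decision from the per-model probabilities.
    return ensemble_probability(image, models, classifiers) >= threshold
```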
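For application 20190130920, a rough PyTorch sketch of training a normalization matrix by gradient ascent on the summed within-category similarities. Plain cosine similarity, the optimizer, and the absence of any constraint on A are all assumptions; a practical system would regularize to avoid degenerate solutions.

```python
import torch

def train_normalization_matrix(ivectors_by_user, dim, steps=200, lr=0.01):
    """ivectors_by_user: one (n_i, dim) tensor per user category. Learns a
    matrix A maximizing the summed pairwise cosine similarities of the
    normalized identity vectors within each category."""
    A = torch.eye(dim, requires_grad=True)
    opt = torch.optim.Adam([A], lr=lr)
    for _ in range(steps):
        total = torch.zeros(())
        for vecs in ivectors_by_user:
            z = torch.nn.functional.normalize(vecs @ A.T, dim=1)
            sim = z @ z.T                             # pairwise cosines
            total = total + sim.sum() - sim.trace()   # exclude self-pairs
        (-total).backward()        # ascend the summed similarity
        opt.step()
        opt.zero_grad()
    return A.detach()
```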
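For application 20190115031, a sketch of the statistics step, assuming the speaker background model is a fitted sklearn.mixture.GaussianMixture and that the correction is a simple convex blend with the reference statistic (the blend rule and alpha are assumptions).

```python
import numpy as np

def baum_welch_stats(features, ubm):
    """Reduce per-frame posteriors under the background model to the
    zeroth- and first-order statistics used for identity vectors."""
    post = ubm.predict_proba(features)   # (frames, components)
    n = post.sum(axis=0)                 # zeroth-order statistic
    f = post.T @ features                # first-order statistic
    return n, f

def corrected_stats(n, f, n_ref, f_ref, alpha=0.5):
    # Blend with the reference statistic obtained from the statistic space
    # built on long-duration samples.
    return alpha * n + (1 - alpha) * n_ref, alpha * f + (1 - alpha) * f_ref
```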
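For application 20180349590, a sketch of the matching step, assuming unit-normalized face embeddings from any encoder and an illustrative similarity threshold.

```python
import numpy as np

def sign_in(probe_embedding, registry, threshold=0.6):
    """registry: {user_id: embedding}. Returns the sign-in decision and,
    on success, the matched registered user."""
    best_id, best_sim = None, -1.0
    for user_id, emb in registry.items():
        sim = float(np.dot(probe_embedding, emb))
        if sim > best_sim:
            best_id, best_sim = user_id, sim
    if best_sim >= threshold:
        return {"signed_in": True, "user": best_id}
    return {"signed_in": False, "user": None}
```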
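For application 20180286410, a sketch of the seed-based filtering. Cosine similarity is used here so that "greater than a first threshold" keeps the samples closer to the seed.

```python
import numpy as np

def filter_by_seed(seed_ivec, ivecs, first_threshold=0.5):
    """Keep the remaining samples whose cosine similarity to the target
    seed sample's I-Vector exceeds the first threshold."""
    seed = seed_ivec / np.linalg.norm(seed_ivec)
    kept = []
    for v in ivecs:
        sim = float(np.dot(seed, v / np.linalg.norm(v)))
        if sim > first_threshold:
            kept.append(v)
    return kept
```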
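For application 20180277103, a control-flow sketch only: every component (extract_mfcc, train_acoustic, make_lm, compose) is a hypothetical stand-in, since a real acoustic-model trainer and decode-graph composer are far beyond a short example.

```python
def build_digit_decoder(recordings, extract_mfcc, train_acoustic, make_lm,
                        compose):
    """Per-segment feature extraction, progressive acoustic training that
    starts from a mono-phoneme model, then composition of the language
    model and acoustic model into the decoding network."""
    feats = [extract_mfcc(segment) for segment in recordings]
    acoustic = train_acoustic(feats, start_from="mono-phoneme")
    lm = make_lm(vocabulary=[str(d) for d in range(10)])  # digit vocabulary
    return compose(lm, acoustic)
```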
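For patent 10068128, a sketch of the second stage: among candidate faces with known dense annotations, retrieve the one whose sparse key points best match the target's n coarse points, and take its m dense points as the refined layout. Nearest-neighbour distance is an assumed screening rule.

```python
import numpy as np

def refine_keypoints(coarse_points, candidates):
    """coarse_points: (n, 2) array from the first positioning algorithm.
    candidates: list of (sparse (n, 2), dense (m, 2)) landmark pairs in
    the same normalized frame. Returns the best candidate's dense points."""
    def mismatch(candidate):
        sparse, _ = candidate
        return np.linalg.norm(np.asarray(sparse) - np.asarray(coarse_points))
    _, dense = min(candidates, key=mismatch)
    return dense
```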
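For patent 10055879, a sketch of the posture-parameter step using OpenCV's PnP solver, one reasonable realization of aligning a universal 3D face model with 2D feature points. The pinhole camera guess and zero lens distortion are assumptions.

```python
import numpy as np
import cv2

def face_pose(model_points_3d, image_points_2d, frame_size):
    """model_points_3d: (n, 3) points on the universal 3D face model that
    correspond to the (n, 2) detected 2D feature points. Returns the
    rotation and translation that pose the model to match the image."""
    h, w = frame_size
    camera = np.array([[w, 0, w / 2.0],    # focal length ~ frame width
                       [0, w, h / 2.0],
                       [0, 0, 1.0]])
    ok, rvec, tvec = cv2.solvePnP(np.asarray(model_points_3d, dtype=float),
                                  np.asarray(image_points_2d, dtype=float),
                                  camera, np.zeros(4))
    return ok, rvec, tvec   # rotation/translation = posture parameters
```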