Patents by Inventor Feiyue Huang

Feiyue Huang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220076002
    Abstract: An action recognition method includes: obtaining original feature submaps of each of a plurality of temporal frames on a plurality of convolutional channels by using a multi-channel convolutional layer; calculating, by using each of the temporal frames as a target temporal frame, motion information weights of the target temporal frame on the convolutional channels according to original feature submaps of the target temporal frame and original feature submaps of a next temporal frame, and obtaining motion information feature maps of the target temporal frame on the convolutional channels according to the motion information weights; performing temporal convolution on the motion information feature maps of the target temporal frame to obtain temporal motion feature maps of the target temporal frame; and recognizing an action type of a moving object in image data of the target temporal frame according to the temporal motion feature maps of the target temporal frame on the convolutional channels.
    Type: Application
    Filed: November 18, 2021
    Publication date: March 10, 2022
    Inventors: Donghao LUO, Yabiao WANG, Chenyang GUO, Boyuan DENG, Chengjie WANG, Jilin LI, Feiyue HUANG, Yongjian WU
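The motion-weighting and temporal-convolution steps in the abstract above can be sketched numerically. This is a minimal NumPy illustration: the sigmoid gating, the per-channel mean of the frame difference, the boundary handling for the last frame, and the 3-tap edge-padded temporal kernel are all illustrative assumptions, not details from the claims.

```python
import numpy as np

def motion_weights(curr, nxt):
    """Channel-wise motion weights from the feature-map difference between a
    target frame and the next frame (sigmoid gating is an assumption)."""
    diff = nxt - curr                                       # (C, H, W)
    energy = diff.reshape(diff.shape[0], -1).mean(axis=1)   # per-channel scalar
    return 1.0 / (1.0 + np.exp(-energy))                    # (C,)

def motion_feature_maps(frames):
    """frames: (T, C, H, W) original feature submaps.
    Returns motion-weighted feature maps for each target frame; the last
    frame is paired with itself as a boundary choice."""
    T = frames.shape[0]
    out = np.empty_like(frames)
    for t in range(T):
        w = motion_weights(frames[t], frames[min(t + 1, T - 1)])
        out[t] = frames[t] * w[:, None, None]
    return out

def temporal_conv(maps, kernel=(0.25, 0.5, 0.25)):
    """1-D temporal convolution over the frame axis (same length, edge-padded)."""
    padded = np.concatenate([maps[:1], maps, maps[-1:]], axis=0)
    k = np.asarray(kernel)
    return sum(k[i] * padded[i:i + maps.shape[0]] for i in range(len(k)))
```

In a real model the gating and the temporal kernel would be learned layers; the sketch only shows how the per-channel weights and the frame-axis convolution compose.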
  • Patent number: 11087476
    Abstract: A trajectory tracking method is provided for a computer device. The method includes performing motion tracking on head images in a plurality of video frames, to obtain motion trajectories corresponding to the head images; acquiring face images corresponding to the head images in the video frames, to obtain face image sets corresponding to the head images; determining from the face image sets corresponding to the head images, at least two face image sets having same face images; and combining motion trajectories corresponding to the at least two face image sets having same face images, to obtain a final motion trajectory of trajectory tracking.
    Type: Grant
    Filed: June 2, 2020
    Date of Patent: August 10, 2021
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Changwei He, Chengjie Wang, Jilin Li, Yabiao Wang, Yandan Zhao, Yanhao Ge, Hui Ni, Yichao Xiong, Zhenye Gan, Yongjian Wu, Feiyue Huang
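The trajectory-combining step above (merging head-motion trajectories whose face image sets share images) can be sketched with plain sets. The data layout and names are illustrative, not from the patent claims, and the single greedy pass shown here would need a union-find to handle long chains of overlaps.

```python
def merge_trajectories(tracks):
    """tracks: list of (face_ids, trajectory) pairs, where face_ids is the set
    of face images acquired along one head-motion trajectory.  Trajectories
    whose face-image sets intersect are treated as the same person and their
    trajectories are concatenated into one final motion trajectory."""
    merged = []
    for faces, traj in tracks:
        faces, traj = set(faces), list(traj)
        for group in merged:
            if group[0] & faces:        # shared face image -> same person
                group[0] |= faces
                group[1] += traj
                break
        else:
            merged.append([faces, traj])
    return [(f, t) for f, t in merged]
```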
  • Patent number: 10992666
    Abstract: An identity verification method performed at a terminal includes playing, in audio form, action guide information (including mouth-shape guide information) selected from a preset action guide information library, at a speed corresponding to the action guide information, and collecting a corresponding set of action images within a preset time window; performing matching detection on the collected set of action images and the action guide information, to obtain a living body detection result indicating whether a living body exists in the collected set of action images; when the living body detection result indicates that a living body exists in the collected set of action images: collecting user identity information and performing verification according to the collected user identity information, to obtain a user identity information verification result; and determining the identity verification result according to the user identity information verification result.
    Type: Grant
    Filed: August 15, 2019
    Date of Patent: April 27, 2021
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Feiyue Huang, Jilin Li, Guofu Tan, Xiaoli Jiang, Dan Wu, Junwu Chen, Jianguo Xie, Wei Guo, Yihui Liu, Jiandong Xie
  • Patent number: 10909989
    Abstract: An identity vector generation method is provided. The method includes obtaining to-be-processed speech data. Corresponding acoustic features are extracted from the to-be-processed speech data. A posterior probability that each of the acoustic features belongs to each Gaussian distribution component in a speaker background model is calculated to obtain a statistic. The statistic is mapped to a statistic space to obtain a reference statistic, the statistic space built according to a statistic corresponding to a speech sample exceeding a threshold speech duration. A corrected statistic is determined according to the calculated statistic and the reference statistic; and an identity vector is generated according to the corrected statistic.
    Type: Grant
    Filed: December 7, 2018
    Date of Patent: February 2, 2021
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Wei Li, Binghua Qian, Xingming Jin, Ke Li, Fuzhang Wu, Yongjian Wu, Feiyue Huang
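The "posterior probability … to obtain a statistic" step in the identity-vector abstract above corresponds to computing zeroth- and first-order statistics of acoustic features against a speaker background model. A minimal NumPy sketch, assuming a shared scalar variance for all Gaussian components (a real universal background model has per-component diagonal covariances):

```python
import numpy as np

def posterior_stats(features, means, weights, var):
    """features: (N, D) acoustic features; means: (K, D) component means;
    weights: (K,) mixture weights; var: shared scalar variance (assumption).
    Returns the zeroth-order stats N_k and first-order stats F_k from which
    an identity vector is subsequently derived."""
    # Squared distance of every feature to every component mean: (N, K)
    d2 = ((features[:, None, :] - means[None, :, :]) ** 2).sum(-1)
    log_post = np.log(weights)[None, :] - d2 / (2.0 * var)
    log_post -= log_post.max(axis=1, keepdims=True)   # numerical stability
    post = np.exp(log_post)
    post /= post.sum(axis=1, keepdims=True)           # responsibilities (N, K)
    N_k = post.sum(axis=0)                            # zeroth-order stats (K,)
    F_k = post.T @ features                           # first-order stats (K, D)
    return N_k, F_k
```

The abstract's correction step would then shrink these statistics toward reference statistics built from long-duration speech; that mapping is not specified in enough detail here to sketch.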
  • Patent number: 10854207
    Abstract: A method and an apparatus for training a voiceprint recognition system are provided. The method includes obtaining a voice training data set comprising voice segments of users; determining identity vectors of all the voice segments; identifying, among the determined identity vectors, identity vectors of voice segments belonging to a same user; placing the identified identity vectors of each such user into one user category; and determining an identity vector in the user category as a first identity vector. The method further includes normalizing the first identity vector by using a normalization matrix, a first value being a sum of similarity degrees between the first identity vector in the corresponding category and other identity vectors in the corresponding category; training the normalization matrix, and outputting a training value of the normalization matrix when the normalization matrix maximizes a sum of first values of all the user categories.
    Type: Grant
    Filed: December 24, 2018
    Date of Patent: December 1, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Wei Li, Binghua Qian, Xingming Jin, Ke Li, Fuzhang Wu, Yongjian Wu, Feiyue Huang
  • Publication number: 20200356767
    Abstract: This application discloses a human attribute recognition method performed at a computing device. The method includes: determining a human body region image in a surveillance image; inputting the human body region image into a multi-attribute convolutional neural network model, to obtain, for each of a plurality of human attributes in the human body region image, a probability that the human attribute corresponds to a respective predefined attribute value, the multi-attribute convolutional neural network model being obtained by performing multi-attribute recognition and training on a set of pre-obtained training images by using a multi-attribute convolutional neural network; determining, for each of the plurality of human attributes in the human body region image, the attribute value of the human attribute based on the corresponding probability; and displaying the attribute values of the plurality of human attributes next to the human body region image.
    Type: Application
    Filed: July 24, 2020
    Publication date: November 12, 2020
    Inventors: Siqian YANG, Jilin LI, Yongjian WU, Yichao YAN, Keke HE, Yanhao GE, Feiyue HUANG, Chengjie WANG
  • Patent number: 10817708
    Abstract: A facial tracking method is provided. The method includes: obtaining, from a video stream, an image that currently needs to be processed as a current image frame; and obtaining coordinates of facial key points in a previous image frame and a confidence level corresponding to the previous image frame. The method also includes calculating coordinates of facial key points in the current image frame according to the coordinates of the facial key points in the previous image frame when the confidence level is higher than a preset threshold; and performing multi-face recognition on the current image frame according to the coordinates of the facial key points in the current image frame. The method also includes calculating a confidence level of the coordinates of the facial key points in the current image frame, and returning to process a next frame until recognition on all image frames is completed.
    Type: Grant
    Filed: March 8, 2019
    Date of Patent: October 27, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Chengjie Wang, Hui Ni, Yandan Zhao, Yabiao Wang, Shouhong Ding, Shaoxin Li, Ling Zhao, Jilin Li, Yongjian Wu, Feiyue Huang, Yicong Liang
  • Publication number: 20200334830
    Abstract: The present disclosure describes a video image processing method and apparatus, a computer-readable medium and an electronic device, relating to the field of image processing technologies. The method includes determining, by a device, a target-object region in a current frame in a video. The device includes a memory storing instructions and a processor in communication with the memory. The method also includes determining, by the device, a target-object tracking image in a next frame and corresponding to the target-object region; and sequentially performing, by the device, a plurality of sets of convolution processing on the target-object tracking image to determine a target-object region in the next frame. A quantity of convolutions of a first set of convolution processing in the plurality of sets of convolution processing is less than a quantity of convolutions of any other set of convolution processing.
    Type: Application
    Filed: July 7, 2020
    Publication date: October 22, 2020
    Applicant: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Yabiao WANG, Yanhao GE, Zhenye GAN, Yuan HUANG, Changyou DENG, Yafeng ZHAO, Feiyue HUANG, Yongjian WU, Xiaoming HUANG, Xiaolong LIANG, Chengjie WANG, Jilin LI
  • Publication number: 20200294250
    Abstract: A trajectory tracking method is provided for a computer device. The method includes performing motion tracking on head images in a plurality of video frames, to obtain motion trajectories corresponding to the head images; acquiring face images corresponding to the head images in the video frames, to obtain face image sets corresponding to the head images; determining from the face image sets corresponding to the head images, at least two face image sets having same face images; and combining motion trajectories corresponding to the at least two face image sets having same face images, to obtain a final motion trajectory of trajectory tracking.
    Type: Application
    Filed: June 2, 2020
    Publication date: September 17, 2020
    Inventors: Changwei HE, Chengjie WANG, Jilin LI, Yabiao WANG, Yandan ZHAO, Yanhao GE, Hui NI, Yichao XIONG, Zhenye GAN, Yongjian WU, Feiyue HUANG
  • Patent number: 10713532
    Abstract: The present disclosure discloses an image recognition method and apparatus, and belongs to the field of computer technologies. The method includes: extracting a local binary pattern (LBP) feature vector of a target image; calculating a high-dimensional feature vector of the target image according to the LBP feature vector; obtaining a training matrix, the training matrix being a matrix obtained by training images in an image library by using a joint Bayesian algorithm; and recognizing the target image according to the high-dimensional feature vector of the target image and the training matrix. The image recognition method and apparatus according to the present disclosure may combine LBP algorithm with a joint Bayesian algorithm to perform recognition, thereby improving the accuracy of image recognition.
    Type: Grant
    Filed: March 19, 2018
    Date of Patent: July 14, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Shouhong Ding, Jilin Li, Chengjie Wang, Feiyue Huang, Yongjian Wu, Guofu Tan
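The local binary pattern (LBP) feature vector that the abstract above starts from can be sketched in a few lines. This is a plain-Python illustration of the standard 8-neighbour LBP; the clockwise bit ordering and the 256-bin histogram per region are common conventions, not details taken from the patent.

```python
def lbp_code(img, y, x):
    """8-neighbour local binary pattern code for interior pixel (y, x):
    each neighbour >= the centre contributes one bit."""
    c = img[y][x]
    nbrs = [img[y-1][x-1], img[y-1][x], img[y-1][x+1], img[y][x+1],
            img[y+1][x+1], img[y+1][x], img[y+1][x-1], img[y][x-1]]
    code = 0
    for bit, v in enumerate(nbrs):
        if v >= c:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin LBP histogram over all interior pixels; concatenating such
    histograms over image regions yields the LBP feature vector."""
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[lbp_code(img, y, x)] += 1
    return hist
```

The high-dimensional feature vector in the abstract would concatenate these region histograms (typically over multiple scales) before the joint Bayesian comparison.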
  • Patent number: 10706263
    Abstract: Disclosed are an evaluation method and an evaluation device for a facial key point positioning result. In some embodiments, the evaluation method includes: acquiring a facial image and one or more positioning result coordinates of a key point of the facial image; performing a normalization process on the positioning result coordinates and an average facial model to obtain a normalized facial image; and extracting a facial feature value of the normalized facial image and calculating an evaluation result based on the facial feature value and a weight vector.
    Type: Grant
    Filed: May 10, 2019
    Date of Patent: July 7, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Chengjie Wang, Jilin Li, Feiyue Huang, Kekai Sheng, Weiming Dong
  • Patent number: 10699699
    Abstract: The embodiments of the present disclosure disclose a method for constructing a speech decoding network in digital speech recognition. The method comprises acquiring training data obtained by digital speech recording, the training data comprising a plurality of speech segments, and each speech segment comprising a plurality of digital speeches; performing acoustic feature extraction on the training data to obtain a feature sequence corresponding to each speech segment; performing progressive training starting from a mono-phoneme acoustic model to obtain an acoustic model; and acquiring a language model and constructing a speech decoding network from the language model and the acoustic model obtained by training.
    Type: Grant
    Filed: May 30, 2018
    Date of Patent: June 30, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Fuzhang Wu, Binghua Qian, Wei Li, Ke Li, Yongjian Wu, Feiyue Huang
  • Patent number: 10692503
    Abstract: A voice data processing method and apparatus are provided. The method includes obtaining an I-Vector vector of each of voice samples, and determining a target seed sample in the voice samples. A first cosine distance is calculated between an I-Vector vector of the target seed sample and an I-Vector vector of a target remaining voice sample, where the target remaining voice sample is a voice sample other than the target seed sample in the voice samples. A target voice sample is filtered from the voice samples or the target remaining voice sample according to the first cosine distance, to obtain a target voice sample whose first cosine distance is greater than a first threshold.
    Type: Grant
    Filed: March 3, 2017
    Date of Patent: June 23, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xingming Jin, Wei Li, Fangmai Zheng, Fuzhang Wu, Bilei Zhu, Binghua Qian, Ke Li, Yongjian Wu, Feiyue Huang
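The seed-based filtering in the voice data processing abstract above reduces to a cosine comparison against the target seed sample. A minimal plain-Python sketch; reading the abstract's "cosine distance greater than a first threshold" as cosine similarity (higher = more similar) is an assumption, and the function names are illustrative.

```python
import math

def cosine_sim(u, v):
    """Cosine similarity between two I-Vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def filter_voice_samples(ivectors, seed_idx, threshold):
    """Return indices of the remaining voice samples (all samples other than
    the seed) whose similarity to the seed exceeds the threshold."""
    seed = ivectors[seed_idx]
    return [i for i, v in enumerate(ivectors)
            if i != seed_idx and cosine_sim(seed, v) > threshold]
```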
  • Patent number: 10664580
    Abstract: A sign-in method and server based on facial recognition are provided. The method includes: receiving a face image of a sign-in user from a sign-in terminal; detecting, according to the face image of the sign-in user, whether a target registration user matching the sign-in user exists in a pre-stored registration set, the registration set including a face image of at least one registration user; and confirming the target registration user as signed in successfully if the target registration user exists in the registration set.
    Type: Grant
    Filed: August 10, 2018
    Date of Patent: May 26, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Feiyue Huang, Yongjian Wu, Guofu Tan, Jilin Li, Zhibo Chen, Xiaoqing Liang, Zhiwei Tao, Kejing Zhou, Ke Mei
  • Patent number: 10664693
    Abstract: Aspects of the disclosure provide a method for adding a target contact to a user's friend list in a social network. A target image of a human body part of the target contact can be received from a user terminal. A target biological feature can be extracted from the target image. Whether the target biological feature matches a reference biological feature of a plurality of prestored reference biological features can be determined. A social account associated with the determined reference biological feature that matches the target biological feature may be determined, and added to the user's friend list.
    Type: Grant
    Filed: April 11, 2018
    Date of Patent: May 26, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Feiyue Huang, Jilin Li, Chengjie Wang
  • Patent number: 10650830
    Abstract: Processing circuitry of an information processing apparatus obtains a set of identity vectors that are calculated according to voice samples from speakers. The identity vectors are classified into speaker classes respectively corresponding to the speakers. The processing circuitry selects, from the identity vectors, first subsets of interclass neighboring identity vectors respectively corresponding to the identity vectors and second subsets of intraclass neighboring identity vectors respectively corresponding to the identity vectors. The processing circuitry determines an interclass difference based on the first subsets of interclass neighboring identity vectors and the corresponding identity vectors; and determines an intraclass difference based on the second subsets of intraclass neighboring identity vectors and the corresponding identity vectors.
    Type: Grant
    Filed: April 16, 2018
    Date of Patent: May 12, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Wei Li, Binghua Qian, Xingming Jin, Ke Li, Fuzhang Wu, Yongjian Wu, Feiyue Huang
  • Patent number: 10650259
    Abstract: The embodiment of the present invention provides a human face recognition method and recognition system. The method includes that: a human face recognition request is acquired, and a statement is randomly generated according to the human face recognition request; audio data and video data returned by a user in response to the statement are acquired; corresponding voice information is acquired according to the audio data; corresponding lip movement information is acquired according to the video data; and when the lip movement information and the voice information satisfy a preset rule, the human face recognition request is permitted. By performing goodness-of-fit matching between the lip movement information and the voice information in a video for dynamic human face recognition, attacks on human face recognition that use a photo of a real person may be effectively avoided, and higher security is achieved.
    Type: Grant
    Filed: July 7, 2017
    Date of Patent: May 12, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Chengjie Wang, Jilin Li, Hui Ni, Yongjian Wu, Feiyue Huang
  • Patent number: 10607120
    Abstract: Disclosed are a training method and apparatus for a CNN model, which belong to the field of image recognition. The method comprises: performing a convolution operation, maximal pooling operation and horizontal pooling operation on training images, respectively, to obtain second feature images; determining feature vectors according to the second feature images; processing the feature vectors to obtain category probability vectors; calculating a category error according to the category probability vectors and an initial category; adjusting model parameters based on the category error; continuing the parameter-adjustment process based on the adjusted model parameters; and using the model parameters obtained when the number of iterations reaches a preset limit as the model parameters of the trained CNN model. After the convolution operation and maximal pooling operation on the training images at each level of convolution layer, a horizontal pooling operation is performed.
    Type: Grant
    Filed: March 30, 2018
    Date of Patent: March 31, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xiang Bai, Feiyue Huang, Xiaowei Guo, Cong Yao, Baoguang Shi
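The distinctive step in the CNN-training abstract above is the horizontal pooling operation appended after each convolution + max-pooling stage. A minimal NumPy sketch of one common reading, pooling each feature-map row down to its maximum so horizontal position is discarded while coarse vertical layout survives; the exact pooling widths are not specified in the abstract, so full-row pooling is an assumption.

```python
import numpy as np

def horizontal_max_pool(fmap):
    """Max-pool every row of every channel: (C, H, W) -> (C, H).
    Each channel collapses to a column vector of per-row maxima."""
    return fmap.max(axis=2)

def feature_vector(fmap):
    """Concatenate the horizontally pooled rows of all channels into the
    flat feature vector fed to the classifier."""
    return horizontal_max_pool(fmap).reshape(-1)
```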
  • Patent number: 10607066
    Abstract: The present disclosure discloses a living body identification method, an information generation method, and a terminal, and belongs to the field of biometric feature recognition. The method includes: providing lip language prompt information, the lip language prompt information including at least two target characters, and the at least two target characters being at least one of: characters of a same lip shape, characters of opposite lip shapes, or characters whose lip shape similarity is in a preset range; collecting at least two frame pictures; detecting whether lip changes of a to-be-identified object in the at least two frame pictures meet a preset condition, when the to-be-identified object reads the at least two target characters; and determining that the to-be-identified object is a living body, if the preset condition is met.
    Type: Grant
    Filed: March 17, 2017
    Date of Patent: March 31, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Jilin Li, Chengjie Wang, Feiyue Huang, Yongjian Wu, Hui Ni, Ruixin Zhang, Guofu Tan
  • Patent number: 10599913
    Abstract: Face model matrix training method, apparatus, and storage medium are provided. The method includes: obtaining a face image library, the face image library including k groups of face images, and each group of face images including at least one face image of at least one person, k>2, and k being an integer; separately parsing each group of the k groups of face images, and calculating a first matrix and a second matrix according to parsing results, the first matrix being an intra-group covariance matrix of facial features of each group of face images, and the second matrix being an inter-group covariance matrix of facial features of the k groups of face images; and training face model matrices according to the first matrix and the second matrix.
    Type: Grant
    Filed: July 11, 2019
    Date of Patent: March 24, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Shouhong Ding, Jilin Li, Chengjie Wang, Feiyue Huang, Yongjian Wu, Guofu Tan
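The first and second matrices in the face-model abstract above are an intra-group and an inter-group covariance of facial features. A minimal NumPy sketch using the standard within-/between-group scatter definitions; the patent may normalize or regularize differently, so treat the formulas as an assumption.

```python
import numpy as np

def group_covariances(groups):
    """groups: list of (n_i, D) arrays of facial features, one array per
    group of face images.  Returns (intra-group covariance, inter-group
    covariance) -- the abstract's first and second matrices."""
    mu = np.concatenate(groups).mean(axis=0)        # global feature mean
    D = groups[0].shape[1]
    Sw = np.zeros((D, D))                           # within-group scatter
    Sb = np.zeros((D, D))                           # between-group scatter
    n_total = 0
    for g in groups:
        m = g.mean(axis=0)
        c = g - m
        Sw += c.T @ c                               # spread inside the group
        d = (m - mu)[:, None]
        Sb += len(g) * (d @ d.T)                    # group mean vs. global mean
        n_total += len(g)
    return Sw / n_total, Sb / n_total
```

The two matrices would then drive the face-model matrix training (e.g. a joint Bayesian-style decomposition of identity vs. within-identity variation), which the abstract does not detail further.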