Patents by Inventor Feiyue Huang
Feiyue Huang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11288309
Abstract: A melody information processing method is described. A piece of Musical Instrument Digital Interface (MIDI) data corresponding to a song is received, a song identifier of the song is obtained, first melody information is generated according to the MIDI data, and the first melody information is stored in association with the song identifier in a melody database. Moreover, a user unaccompanied-singing audio data set that is uploaded from a user terminal is received, second melody information corresponding to the song identifier is extracted according to the user unaccompanied-singing audio data set, and the second melody information is stored in association with the song identifier in the melody database.
Type: Grant
Filed: April 12, 2018
Date of Patent: March 29, 2022
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Bilei Zhu, Fangmai Zheng, Xingming Jin, Ke Li, Yongjian Wu, Feiyue Huang
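For illustration only, here is a minimal sketch of the storage step described in this abstract, assuming a simple SQLite table keyed by song identifier; the melody representation and the extraction of notes from MIDI or from user vocals are placeholders, not the patented method.

```python
import json
import sqlite3

def store_melody(conn, song_id, source, melody):
    """Store one melody record (note/time dicts) in association with a song identifier."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS melody (song_id TEXT, source TEXT, notes TEXT)"
    )
    conn.execute(
        "INSERT INTO melody (song_id, source, notes) VALUES (?, ?, ?)",
        (song_id, source, json.dumps(melody)),
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
# First melody information, e.g. parsed from the song's MIDI data (placeholder values).
store_melody(conn, "song-001", "midi", [{"pitch": 60, "onset": 0.0, "dur": 0.5}])
# Second melody information, e.g. extracted from user unaccompanied-singing audio.
store_melody(conn, "song-001", "user_vocals", [{"pitch": 62, "onset": 0.1, "dur": 0.4}])
print(conn.execute(
    "SELECT source, notes FROM melody WHERE song_id = ?", ("song-001",)
).fetchall())
```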
-
Patent number: 11289069
Abstract: A statistical parameter modeling method is performed by a server. After obtaining model training data, the model training data including a text feature sequence and a corresponding original speech sample sequence, the server inputs an original vector matrix formed by matching a text feature sample point in the text feature sample sequence with a speech sample point in the original speech sample sequence into a statistical parameter model for training and then performs non-linear mapping calculation on the original vector matrix in a hidden layer, to output a corresponding prediction speech sample point. The server then obtains a model parameter of the statistical parameter model according to the prediction speech sample point and a corresponding original speech sample point by using a smallest difference principle, to obtain a corresponding target statistical parameter model.
Type: Grant
Filed: March 26, 2019
Date of Patent: March 29, 2022
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Wei Li, Hangyu Yan, Ke Li, Yongjian Wu, Feiyue Huang
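A rough sketch of the "smallest difference" training idea, using a toy one-hidden-layer network in NumPy as a stand-in for the statistical parameter model; the feature dimensions, network shape, synthetic data, and plain gradient descent are assumptions for illustration, not the patented training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: 200 frames of text-feature vectors paired with original speech sample points.
X = rng.normal(size=(200, 8))          # original vector matrix (text features, placeholder)
y = np.sin(X @ rng.normal(size=8))     # original speech sample points (placeholder signal)

# One hidden layer performs the non-linear mapping; parameters are fitted by
# minimising the squared difference between predicted and original sample points.
W1, b1 = rng.normal(scale=0.1, size=(8, 16)), np.zeros(16)
W2, b2 = rng.normal(scale=0.1, size=(16, 1)), np.zeros(1)

for step in range(500):
    h = np.tanh(X @ W1 + b1)                   # non-linear mapping in the hidden layer
    pred = (h @ W2 + b2).ravel()               # prediction speech sample points
    err = pred - y
    # Backpropagate the squared-difference loss.
    gW2 = h.T @ err[:, None] / len(X)
    gb2 = np.array([err.mean()])
    dh = (err[:, None] @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W1 -= 0.1 * gW1; b1 -= 0.1 * gb1
    W2 -= 0.1 * gW2; b2 -= 0.1 * gb2

print("final mean squared difference:", float((err ** 2).mean()))
```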
-
Patent number: 11275932
Abstract: This application discloses a human attribute recognition method performed at a computing device. The method includes: determining a human body region image in a surveillance image; inputting the human body region image into a multi-attribute convolutional neural network model, to obtain, for each of a plurality of human attributes in the human body region image, a probability that the human attribute corresponds to a respective predefined attribute value, the multi-attribute convolutional neural network model being obtained by performing multi-attribute recognition and training on a set of pre-obtained training images by using a multi-attribute convolutional neural network; determining, for each of the plurality of human attributes in the human body region image, the attribute value of the human attribute based on the corresponding probability; and displaying the attribute values of the plurality of human attributes next to the human body region image.
Type: Grant
Filed: July 24, 2020
Date of Patent: March 15, 2022
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Siqian Yang, Jilin Li, Yongjian Wu, Yichao Yan, Keke He, Yanhao Ge, Feiyue Huang, Chengjie Wang
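As a sketch of the final decision step, assuming the multi-attribute network has already produced a probability per predefined attribute value (the attribute names, values, and probabilities below are made up), each attribute simply takes the value with the highest probability:

```python
import numpy as np

# Hypothetical predefined attribute values and the per-value probabilities that a
# multi-attribute network might output for one human body region image.
attributes = {
    "gender": (["male", "female"], np.array([0.23, 0.77])),
    "age":    (["child", "adult", "senior"], np.array([0.05, 0.82, 0.13])),
    "bag":    (["no_bag", "backpack", "handbag"], np.array([0.10, 0.65, 0.25])),
}

# For each attribute, the attribute value is the predefined value with the highest probability.
recognized = {
    name: values[int(np.argmax(probs))]
    for name, (values, probs) in attributes.items()
}
print(recognized)   # e.g. {'gender': 'female', 'age': 'adult', 'bag': 'backpack'}
```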
-
Publication number: 20220076002
Abstract: An action recognition method includes: obtaining original feature submaps of each of temporal frames on a plurality of convolutional channels by using a multi-channel convolutional layer; calculating, by using each of the temporal frames as a target temporal frame, motion information weights of the target temporal frame on the convolutional channels according to original feature submaps of the target temporal frame and original feature submaps of a next temporal frame, and obtaining motion information feature maps of the target temporal frame on the convolutional channels according to the motion information weights; performing temporal convolution on the motion information feature maps of the target temporal frame to obtain temporal motion feature maps of the target temporal frame; and recognizing an action type of a moving object in image data of the target temporal frame according to the temporal motion feature maps of the target temporal frame on the convolutional channels.
Type: Application
Filed: November 18, 2021
Publication date: March 10, 2022
Inventors: Donghao LUO, Yabiao WANG, Chenyang GUO, Boyuan DENG, Chengjie WANG, Jilin LI, Feiyue HUANG, Yongjian WU
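A simplified NumPy sketch of the motion-weighting idea: per-channel weights are derived from the difference between the target frame's feature submaps and the next frame's, then used to reweight the target frame's features. The sigmoid gating, mean pooling, and tensor sizes are illustrative assumptions, not the published formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)
C, H, W = 4, 8, 8                      # convolutional channels and spatial size (arbitrary)
feat_t  = rng.normal(size=(C, H, W))   # original feature submaps, target temporal frame
feat_t1 = rng.normal(size=(C, H, W))   # original feature submaps, next temporal frame

# Motion information weight per channel: gate the per-channel magnitude of the
# frame-to-frame difference (a simplification of the described scheme).
diff = feat_t1 - feat_t
weights = sigmoid(np.abs(diff).mean(axis=(1, 2)))        # shape (C,)

# Motion information feature maps: reweight the target frame's submaps channel-wise.
motion_feat = feat_t * weights[:, None, None]
print(weights.round(3), motion_feat.shape)
```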
-
Patent number: 11087476
Abstract: A trajectory tracking method is provided for a computer device. The method includes performing motion tracking on head images in a plurality of video frames, to obtain motion trajectories corresponding to the head images; acquiring face images corresponding to the head images in the video frames, to obtain face image sets corresponding to the head images; determining from the face image sets corresponding to the head images, at least two face image sets having same face images; and combining motion trajectories corresponding to the at least two face image sets having same face images, to obtain a final motion trajectory of trajectory tracking.
Type: Grant
Filed: June 2, 2020
Date of Patent: August 10, 2021
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Changwei He, Chengjie Wang, Jilin Li, Yabiao Wang, Yandan Zhao, Yanhao Ge, Hui Ni, Yichao Xiong, Zhenye Gan, Yongjian Wu, Feiyue Huang
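A toy sketch of the merging rule: head trajectories whose face image sets contain matching faces are combined. Face matching is reduced here to shared face identifiers, and the grouping is a single greedy pass rather than a full transitive merge, so it is only an illustration of the idea.

```python
# Face image sets per head trajectory; face identifiers stand in for matched face crops.
face_sets = {
    "head_A": {"face_1", "face_2"},
    "head_B": {"face_2", "face_3"},   # shares face_2 with head_A
    "head_C": {"face_4"},
}
trajectories = {
    "head_A": [(0, 0), (1, 1)],
    "head_B": [(2, 2), (3, 3)],
    "head_C": [(9, 9)],
}

merged = []
used = set()
for a in face_sets:
    if a in used:
        continue
    group, seen_faces = [a], set(face_sets[a])
    for b in face_sets:
        if b not in used and b != a and face_sets[b] & seen_faces:
            group.append(b)
            seen_faces |= face_sets[b]
    used.update(group)
    # Combine motion trajectories of head images that saw the same faces.
    merged.append(sum((trajectories[h] for h in group), []))

print(merged)   # [[(0, 0), (1, 1), (2, 2), (3, 3)], [(9, 9)]]
```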
-
Patent number: 10992666
Abstract: An identity verification method performed at a terminal includes playing in an audio form action guide information including mouth shape guide information selected from a preset action guide information library at a speed corresponding to the action guide information, and collecting a corresponding set of action images within a preset time window; performing matching detection on the collected set of action images and the action guide information, to obtain a living body detection result indicating whether a living body exists in the collected set of action images; according to the living body detection result that indicates that a living body exists in the collected set of action images: collecting user identity information and performing verification according to the collected user identity information, to obtain a user identity information verification result; and determining the identity verification result according to the user identity information verification result.
Type: Grant
Filed: August 15, 2019
Date of Patent: April 27, 2021
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Feiyue Huang, Jilin Li, Guofu Tan, Xiaoli Jiang, Dan Wu, Junwu Chen, Jianguo Xie, Wei Guo, Yihui Liu, Jiandong Xie
-
Patent number: 10909989
Abstract: An identity vector generation method is provided. The method includes obtaining to-be-processed speech data. Corresponding acoustic features are extracted from the to-be-processed speech data. A posterior probability that each of the acoustic features belongs to each Gaussian distribution component in a speaker background model is calculated to obtain a statistic. The statistic is mapped to a statistic space to obtain a reference statistic, the statistic space built according to a statistic corresponding to a speech sample exceeding a threshold speech duration. A corrected statistic is determined according to the calculated statistic and the reference statistic; and an identity vector is generated according to the corrected statistic.
Type: Grant
Filed: December 7, 2018
Date of Patent: February 2, 2021
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Wei Li, Binghua Qian, Xingming Jin, Ke Li, Fuzhang Wu, Yongjian Wu, Feiyue Huang
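A NumPy sketch of the statistics side, assuming a small diagonal-covariance speaker background model: posterior probabilities yield zeroth- and first-order statistics, which are then interpolated toward a reference statistic. The reference here is a simple stored average and the interpolation rule is invented for illustration; the patent's learned mapping into a statistic space is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy speaker background model: K diagonal Gaussians over D-dimensional acoustic features.
K, D = 3, 4
weights = np.full(K, 1.0 / K)
means = rng.normal(size=(K, D))
variances = np.ones((K, D))

feats = rng.normal(size=(50, D))       # acoustic features of the to-be-processed speech

# Posterior probability of each feature under each Gaussian distribution component.
log_prob = (-0.5 * (((feats[:, None, :] - means) ** 2) / variances
                    + np.log(2 * np.pi * variances)).sum(axis=2)
            + np.log(weights))
post = np.exp(log_prob - log_prob.max(axis=1, keepdims=True))
post /= post.sum(axis=1, keepdims=True)

# Zeroth- and first-order statistics ("the statistic").
n_k = post.sum(axis=0)                           # shape (K,)
f_k = post.T @ feats                             # shape (K, D)

# Reference statistic (here a stored long-duration average; a stand-in for the
# statistic space built from samples above the duration threshold).
ref_n_k = np.full(K, feats.shape[0] / K)
ref_f_k = means * ref_n_k[:, None]

# Corrected statistic: interpolate toward the reference when little data is available.
alpha = n_k / (n_k + 10.0)
corrected_f_k = alpha[:, None] * f_k + (1 - alpha)[:, None] * ref_f_k
print(corrected_f_k.shape)   # (3, 4); an identity vector would be derived from this
```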
-
Patent number: 10854207
Abstract: A method and an apparatus for training a voiceprint recognition system are provided. The method includes obtaining a voice training data set comprising voice segments of users; determining identity vectors of all the voice segments; identifying identity vectors of voice segments of a same user in the determined identity vectors; placing the recognized identity vectors of the same user in the users into one of user categories; and determining an identity vector in the user category as a first identity vector. The method further includes normalizing the first identity vector by using a normalization matrix, a first value being a sum of similarity degrees between the first identity vector in the corresponding category and other identity vectors in the corresponding category; training the normalization matrix, and outputting a training value of the normalization matrix when the normalization matrix maximizes a sum of first values of all the user categories.
Type: Grant
Filed: December 24, 2018
Date of Patent: December 1, 2020
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Wei Li, Binghua Qian, Xingming Jin, Ke Li, Fuzhang Wu, Yongjian Wu, Feiyue Huang
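A sketch of the stated objective: for each user category, sum the similarities between the normalised first identity vector and the other identity vectors in that category, and pick the normalisation matrix that maximises the total over all categories. Cosine similarity and the random-search "training" loop below are stand-ins for details the abstract does not specify.

```python
import numpy as np

def objective(norm_matrix, categories):
    """Sum over user categories of similarities between the normalised first identity
    vector and the other identity vectors in the same category."""
    total = 0.0
    for vecs in categories:
        first = norm_matrix @ vecs[0]
        for other in vecs[1:]:
            other_n = norm_matrix @ other
            total += first @ other_n / (np.linalg.norm(first) * np.linalg.norm(other_n) + 1e-9)
    return total

rng = np.random.default_rng(10)
# Identity vectors of voice segments grouped by user (3 users, 4 segments each, dim 6).
categories = [rng.normal(size=(4, 6)) + rng.normal(size=6) for _ in range(3)]

# Toy "training": random search over candidate normalisation matrices, keeping the
# one that maximises the objective (a proper optimiser would be used in practice).
best_m, best_val = np.eye(6), objective(np.eye(6), categories)
for _ in range(200):
    cand = best_m + rng.normal(scale=0.05, size=(6, 6))
    val = objective(cand, categories)
    if val > best_val:
        best_m, best_val = cand, val
print("objective improved to:", round(best_val, 3))
```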
-
Publication number: 20200356767
Abstract: This application discloses a human attribute recognition method performed at a computing device. The method includes: determining a human body region image in a surveillance image; inputting the human body region image into a multi-attribute convolutional neural network model, to obtain, for each of a plurality of human attributes in the human body region image, a probability that the human attribute corresponds to a respective predefined attribute value, the multi-attribute convolutional neural network model being obtained by performing multi-attribute recognition and training on a set of pre-obtained training images by using a multi-attribute convolutional neural network; determining, for each of the plurality of human attributes in the human body region image, the attribute value of the human attribute based on the corresponding probability; and displaying the attribute values of the plurality of human attributes next to the human body region image.
Type: Application
Filed: July 24, 2020
Publication date: November 12, 2020
Inventors: Siqian YANG, Jilin Li, Yongjian Wu, Yichao Yan, Keke He, Yanhao Ge, Feiyue Huang, Chengjie Wang
-
Patent number: 10817708
Abstract: A facial tracking method is provided. The method includes: obtaining, from a video stream, an image that currently needs to be processed as a current image frame; and obtaining coordinates of facial key points in a previous image frame and a confidence level corresponding to the previous image frame. The method also includes calculating coordinates of facial key points in the current image frame according to the coordinates of the facial key points in the previous image frame when the confidence level is higher than a preset threshold; and performing multi-face recognition on the current image frame according to the coordinates of the facial key points in the current image frame. The method also includes calculating a confidence level of the coordinates of the facial key points in the current image frame, and returning to process a next frame until recognition on all image frames is completed.
Type: Grant
Filed: March 8, 2019
Date of Patent: October 27, 2020
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Chengjie Wang, Hui Ni, Yandan Zhao, Yabiao Wang, Shouhong Ding, Shaoxin Li, Ling Zhao, Jilin Li, Yongjian Wu, Feiyue Huang, Yicong Liang
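A control-flow sketch of this tracking loop: reuse the previous frame's key points while the confidence level stays above the preset threshold, otherwise fall back to detection. The functions detect_keypoints, refine_keypoints, and score_confidence, and the threshold value, are hypothetical placeholders for the models involved.

```python
CONF_THRESHOLD = 0.8   # preset threshold from the abstract (value here is illustrative)

def track_faces(frames, detect_keypoints, refine_keypoints, score_confidence):
    """Track facial key points across frames, re-detecting only when confidence drops."""
    prev_points, prev_conf = None, 0.0
    results = []
    for frame in frames:
        if prev_points is not None and prev_conf > CONF_THRESHOLD:
            # Derive this frame's key points from the previous frame's coordinates.
            points = refine_keypoints(frame, prev_points)
        else:
            # Fall back to full detection for this frame.
            points = detect_keypoints(frame)
        conf = score_confidence(frame, points)
        results.append((points, conf))
        prev_points, prev_conf = points, conf
    return results

# Toy usage with stand-in functions.
out = track_faces(
    frames=[0, 1, 2],
    detect_keypoints=lambda f: [(10 + f, 20 + f)],
    refine_keypoints=lambda f, p: [(x + 1, y + 1) for x, y in p],
    score_confidence=lambda f, p: 0.9,
)
print(out)
```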
-
Publication number: 20200334830
Abstract: The present disclosure describes a video image processing method and apparatus, a computer-readable medium and an electronic device, relating to the field of image processing technologies. The method includes determining, by a device, a target-object region in a current frame in a video. The device includes a memory storing instructions and a processor in communication with the memory. The method also includes determining, by the device, a target-object tracking image in a next frame and corresponding to the target-object region; and sequentially performing, by the device, a plurality of sets of convolution processing on the target-object tracking image to determine a target-object region in the next frame. A quantity of convolutions of a first set of convolution processing in the plurality of sets of convolution processing is less than a quantity of convolutions of any other set of convolution processing.
Type: Application
Filed: July 7, 2020
Publication date: October 22, 2020
Applicant: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Yabiao WANG, Yanhao GE, Zhenye GAN, Yuan HUANG, Changyou DENG, Yafeng ZHAO, Feiyue HUANG, Yongjian WU, Xiaoming HUANG, Xiaolong LIANG, Chengjie WANG, Jilin LI
-
Publication number: 20200294250
Abstract: A trajectory tracking method is provided for a computer device. The method includes performing motion tracking on head images in a plurality of video frames, to obtain motion trajectories corresponding to the head images; acquiring face images corresponding to the head images in the video frames, to obtain face image sets corresponding to the head images; determining from the face image sets corresponding to the head images, at least two face image sets having same face images; and combining motion trajectories corresponding to the at least two face image sets having same face images, to obtain a final motion trajectory of trajectory tracking.
Type: Application
Filed: June 2, 2020
Publication date: September 17, 2020
Inventors: Changwei HE, Chengjie WANG, Jilin LI, Yabiao WANG, Yandan ZHAO, Yanhao GE, Hui NI, Yichao XIONG, Zhenye GAN, Yongjian WU, Feiyue HUANG
-
Patent number: 10713532
Abstract: The present disclosure discloses an image recognition method and apparatus, and belongs to the field of computer technologies. The method includes: extracting a local binary pattern (LBP) feature vector of a target image; calculating a high-dimensional feature vector of the target image according to the LBP feature vector; obtaining a training matrix, the training matrix being a matrix obtained by training images in an image library by using a joint Bayesian algorithm; and recognizing the target image according to the high-dimensional feature vector of the target image and the training matrix. The image recognition method and apparatus according to the present disclosure may combine the LBP algorithm with a joint Bayesian algorithm to perform recognition, thereby improving the accuracy of image recognition.
Type: Grant
Filed: March 19, 2018
Date of Patent: July 14, 2020
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Shouhong Ding, Jilin Li, Chengjie Wang, Feiyue Huang, Yongjian Wu, Guofu Tan
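A much-reduced sketch of this pipeline: a basic 8-neighbour LBP histogram stands in for the high-dimensional feature vector, and a joint Bayesian style score of the form x1' A x1 + x2' A x2 - 2 x1' G x2 is evaluated with stand-in matrices in place of the trained training matrix; none of the dimensions or matrices here are the patented ones.

```python
import numpy as np

def lbp_histogram(gray):
    """8-neighbour local binary pattern histogram of a grayscale image (basic variant)."""
    g = gray.astype(np.int32)
    center = g[1:-1, 1:-1]
    code = np.zeros_like(center)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        neighbour = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code += (neighbour >= center).astype(np.int32) << bit
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / hist.sum()

def joint_bayesian_score(x1, x2, A, G):
    """Log-likelihood-ratio style similarity used in joint Bayesian verification."""
    return float(x1 @ A @ x1 + x2 @ A @ x2 - 2.0 * x1 @ G @ x2)

rng = np.random.default_rng(3)
img1, img2 = rng.integers(0, 256, (32, 32)), rng.integers(0, 256, (32, 32))
h1, h2 = lbp_histogram(img1), lbp_histogram(img2)
A = np.eye(256) * 0.1                  # stand-ins for the trained matrices
G = np.eye(256) * 0.05
print(joint_bayesian_score(h1, h2, A, G))
```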
-
Patent number: 10706263
Abstract: Disclosed are an evaluation method and an evaluation device for a facial key point positioning result. In some embodiments, the evaluation method includes: acquiring a facial image and one or more positioning result coordinates of a key point of the facial image; performing a normalization process on the positioning result coordinates and an average facial model to obtain a normalized facial image; and extracting a facial feature value of the normalized facial image and calculating an evaluation result based on the facial feature value and a weight vector.
Type: Grant
Filed: May 10, 2019
Date of Patent: July 7, 2020
Assignee: Tencent Technology (Shenzhen) Company Limited
Inventors: Chengjie Wang, Jilin Li, Feiyue Huang, Kekai Sheng, Weiming Dong
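An illustrative sketch of the evaluation recipe: normalise a positioning result against an average facial model, extract a feature from the normalised result, and score it with a weight vector. Here the "facial feature" is reduced to the normalised coordinates themselves and the alignment is a simple translation-and-scale step; the patent's actual feature extraction and learned weights are not reproduced.

```python
import numpy as np

def evaluate_positioning(landmarks, mean_face, weights, extract_feature):
    """Score a facial key-point positioning result against an average facial model."""
    # Normalisation: remove translation and scale so the result is comparable
    # to the average facial model (a similarity alignment is one common choice).
    centred = landmarks - landmarks.mean(axis=0)
    scale = np.linalg.norm(mean_face - mean_face.mean(axis=0)) / (np.linalg.norm(centred) + 1e-9)
    normalised = centred * scale + mean_face.mean(axis=0)
    # Feature value of the normalised result, weighted to produce the evaluation score.
    feature = extract_feature(normalised)
    return float(feature @ weights)

rng = np.random.default_rng(4)
mean_face = rng.uniform(0, 100, size=(5, 2))                # average facial model (5 key points)
landmarks = mean_face + rng.normal(scale=2.0, size=(5, 2))  # a positioning result to evaluate
weights = rng.normal(size=10)                               # learned weight vector (stand-in)
score = evaluate_positioning(landmarks, mean_face, weights,
                             extract_feature=lambda pts: pts.ravel())
print(round(score, 3))
```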
-
Patent number: 10699699
Abstract: The embodiments of the present disclosure disclose a method for constructing a speech decoding network in digital speech recognition. The method comprises acquiring training data obtained by digital speech recording, the training data comprising a plurality of speech segments, and each speech segment comprising a plurality of digital speeches; performing acoustic feature extraction on the training data to obtain a feature sequence corresponding to each speech segment; performing progressive training starting from a mono-phoneme acoustic model to obtain an acoustic model; acquiring a language model, and constructing a speech decoding network from the language model and the acoustic model obtained by training.
Type: Grant
Filed: May 30, 2018
Date of Patent: June 30, 2020
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Fuzhang Wu, Binghua Qian, Wei Li, Ke Li, Yongjian Wu, Feiyue Huang
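For a sense of how an acoustic model and a language model combine at decode time, the sketch below runs a tiny Viterbi search over whole-digit acoustic scores and a digit bigram language model. The patented system builds the network from phoneme-level models obtained by the progressive training described above; the scores, uniform bigram, and whole-digit units here are simplified stand-ins.

```python
import numpy as np

rng = np.random.default_rng(9)
digits = list("0123456789")

# Stand-ins for the two components joined in the decoding network:
# acoustic scores (per speech segment, per digit) and a digit bigram language model.
acoustic = np.log(rng.dirichlet(np.ones(10), size=4))       # 4 segments, 10 digits
lm = np.log(np.full((10, 10), 0.1))                          # uniform bigram probabilities

# Simple Viterbi decode over the combined network.
best_prev = acoustic[0].copy()
back = []
for t in range(1, acoustic.shape[0]):
    scores = best_prev[:, None] + lm + acoustic[t][None, :]
    back.append(scores.argmax(axis=0))    # best previous digit for each current digit
    best_prev = scores.max(axis=0)

path = [int(best_prev.argmax())]
for bp in reversed(back):
    path.append(int(bp[path[-1]]))
print("decoded digits:", "".join(digits[i] for i in reversed(path)))
```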
-
Patent number: 10692503
Abstract: A voice data processing method and apparatus are provided. The method includes obtaining an I-Vector vector of each of voice samples, and determining a target seed sample in the voice samples. A first cosine distance is calculated between an I-Vector vector of the target seed sample and an I-Vector vector of a target remaining voice sample, where the target remaining voice sample is a voice sample other than the target seed sample in the voice samples. A target voice sample is filtered from the voice samples or the target remaining voice sample according to the first cosine distance, to obtain a target voice sample whose first cosine distance is greater than a first threshold.
Type: Grant
Filed: March 3, 2017
Date of Patent: June 23, 2020
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Xingming Jin, Wei Li, Fangmai Zheng, Fuzhang Wu, Bilei Zhu, Binghua Qian, Ke Li, Yongjian Wu, Feiyue Huang
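A minimal NumPy sketch of this filtering step: score each remaining sample's I-Vector against the seed sample's I-Vector with a cosine measure and keep those whose score exceeds the first threshold. The seed selection rule, the threshold value, and the random I-Vectors are placeholders.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(5)
ivectors = rng.normal(size=(6, 8))      # I-Vector of each voice sample
seed = ivectors[0]                      # target seed sample (selection rule not shown)
threshold = 0.0                         # first threshold; a real system would tune this

# Keep remaining samples whose cosine score against the seed exceeds the threshold.
kept = [i for i, v in enumerate(ivectors[1:], start=1) if cosine(seed, v) > threshold]
print("target voice samples:", kept)
```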
-
Patent number: 10664693
Abstract: Aspects of the disclosure provide a method for adding a target contact to a user's friend list in a social network. A target image of a human body part of the target contact can be received from a user terminal. A target biological feature can be extracted from the target image. Whether the target biological feature matches a reference biological feature of a plurality of prestored reference biological features can be determined. A social account associated with the determined reference biological feature that matches the target biological feature may be determined, and added to the user's friend list.
Type: Grant
Filed: April 11, 2018
Date of Patent: May 26, 2020
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Feiyue Huang, Jilin Li, Chengjie Wang
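A toy sketch of the matching step: compare the extracted target feature against prestored reference features, pick the best match above a threshold, and add the associated social account to the friend list. The feature vectors, cosine comparison, threshold, and account names are illustrative assumptions.

```python
import numpy as np

def match_account(target_feature, reference_features, accounts, threshold=0.8):
    """Return the social account whose stored biological feature best matches the target."""
    sims = reference_features @ target_feature / (
        np.linalg.norm(reference_features, axis=1) * np.linalg.norm(target_feature) + 1e-9
    )
    best = int(np.argmax(sims))
    return accounts[best] if sims[best] >= threshold else None

rng = np.random.default_rng(8)
reference_features = rng.normal(size=(4, 16))            # prestored reference biological features
accounts = ["alice", "bob", "carol", "dave"]              # associated social accounts
target = reference_features[2] + rng.normal(scale=0.05, size=16)   # feature from the received image

friend_list = ["erin"]
friend = match_account(target, reference_features, accounts)
if friend is not None:
    friend_list.append(friend)     # add the matched contact to the user's friend list
print(friend, friend_list)
```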
-
Patent number: 10664580
Abstract: A sign-in method and server based on facial recognition are provided. The method includes receiving a face image of a sign-in user from a sign-in terminal and detecting, according to the face image of the sign-in user, whether a target registration user matching the sign-in user exists in a pre-stored registration set. The registration set includes a face image of at least one registration user. Further, the target registration user is confirmed as signed in successfully if the target registration user exists in the registration set.
Type: Grant
Filed: August 10, 2018
Date of Patent: May 26, 2020
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Feiyue Huang, Yongjian Wu, Guofu Tan, Jilin Li, Zhibo Chen, Xiaoqing Liang, Zhiwei Tao, Kejing Zhou, Ke Mei
-
Patent number: 10650259
Abstract: The embodiments of the present invention provide a human face recognition method and recognition system. The method includes: a human face recognition request is acquired, and a statement is randomly generated according to the human face recognition request; audio data and video data returned by a user in response to the statement are acquired; corresponding voice information is acquired according to the audio data; corresponding lip movement information is acquired according to the video data; and when the lip movement information and the voice information satisfy a preset rule, the human face recognition request is permitted. By performing fit goodness matching between the lip movement information and voice information in a video for dynamic human face recognition, an attack on human face recognition using a real photo can be effectively prevented, and higher security is achieved.
Type: Grant
Filed: July 7, 2017
Date of Patent: May 12, 2020
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Chengjie Wang, Jilin Li, Hui Ni, Yongjian Wu, Feiyue Huang
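One simple way to picture the "fit goodness" check is a correlation between lip openness over time and voice energy over time, as sketched below. The preset rule in the patent is not specified here; the correlation measure, threshold, and synthetic signals are assumptions for illustration.

```python
import numpy as np

def fit_goodness(lip_openness, voice_energy):
    """Correlation between lip movement and voice activity over time (one simple measure)."""
    lip = (lip_openness - lip_openness.mean()) / (lip_openness.std() + 1e-9)
    voice = (voice_energy - voice_energy.mean()) / (voice_energy.std() + 1e-9)
    return float((lip * voice).mean())

rng = np.random.default_rng(6)
t = np.linspace(0, 2 * np.pi, 100)
voice_energy = np.abs(np.sin(3 * t))                              # from the audio track
lip_live = voice_energy + rng.normal(scale=0.1, size=t.size)      # a live speaker's lips move with speech
lip_photo = np.zeros_like(t)                                      # a still photo shows no lip movement

THRESHOLD = 0.5   # preset rule (illustrative value)
for name, lip in [("live", lip_live), ("photo", lip_photo)]:
    print(name, "passes" if fit_goodness(lip, voice_energy) > THRESHOLD else "rejected")
```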
-
Patent number: 10650830
Abstract: Processing circuitry of an information processing apparatus obtains a set of identity vectors that are calculated according to voice samples from speakers. The identity vectors are classified into speaker classes respectively corresponding to the speakers. The processing circuitry selects, from the identity vectors, first subsets of interclass neighboring identity vectors respectively corresponding to the identity vectors and second subsets of intraclass neighboring identity vectors respectively corresponding to the identity vectors. The processing circuitry determines an interclass difference based on the first subsets of interclass neighboring identity vectors and the corresponding identity vectors; and determines an intraclass difference based on the second subsets of intraclass neighboring identity vectors and the corresponding identity vectors.
Type: Grant
Filed: April 16, 2018
Date of Patent: May 12, 2020
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Wei Li, Binghua Qian, Xingming Jin, Ke Li, Fuzhang Wu, Yongjian Wu, Feiyue Huang
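A NumPy sketch of the neighbour-based quantities: for each identity vector, take its nearest neighbours within its own speaker class (intraclass subset) and its nearest neighbours from other classes (interclass subset), and accumulate squared differences. The neighbour count, Euclidean distance, and toy data are assumptions, and the discriminant projection derived from these differences is not shown.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy identity vectors for 3 speakers (classes), 5 vectors each, dimension 4.
labels = np.repeat(np.arange(3), 5)
ivecs = rng.normal(size=(15, 4)) + labels[:, None] * 3.0
K = 2   # number of neighbouring identity vectors per subset

intra_diff, inter_diff = 0.0, 0.0
for i, (x, y) in enumerate(zip(ivecs, labels)):
    dists = np.linalg.norm(ivecs - x, axis=1)
    dists[i] = np.inf                      # exclude the vector itself
    same, other = labels == y, labels != y
    # Second subset: intraclass neighbouring identity vectors of x.
    intra_idx = np.where(same)[0][np.argsort(dists[same])[:K]]
    # First subset: interclass neighbouring identity vectors of x.
    inter_idx = np.where(other)[0][np.argsort(dists[other])[:K]]
    intra_diff += ((ivecs[intra_idx] - x) ** 2).sum()
    inter_diff += ((ivecs[inter_idx] - x) ** 2).sum()

print("intraclass difference:", round(intra_diff, 2))
print("interclass difference:", round(inter_diff, 2))
```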