Patents by Inventor Jilin Li
Jilin Li has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10607066
Abstract: The present disclosure discloses a living body identification method, an information generation method, and a terminal, and belongs to the field of biometric feature recognition. The method includes: providing lip language prompt information, the lip language prompt information including at least two target characters, and the at least two target characters being at least one of: characters of a same lip shape, characters of opposite lip shapes, or characters whose lip shape similarity is in a preset range; collecting at least two frame pictures; detecting whether lip changes of a to-be-identified object in the at least two frame pictures meet a preset condition, when the to-be-identified object reads the at least two target characters; and determining that the to-be-identified object is a living body, if the preset condition is met.
Type: Grant
Filed: March 17, 2017
Date of Patent: March 31, 2020
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Jilin Li, Chengjie Wang, Feiyue Huang, Yongjian Wu, Hui Ni, Ruixin Zhang, Guofu Tan
-
Patent number: 10599913
Abstract: Face model matrix training method, apparatus, and storage medium are provided. The method includes: obtaining a face image library, the face image library including k groups of face images, and each group of face images including at least one face image of at least one person, k>2, and k being an integer; separately parsing each group of the k groups of face images, and calculating a first matrix and a second matrix according to parsing results, the first matrix being an intra-group covariance matrix of facial features of each group of face images, and the second matrix being an inter-group covariance matrix of facial features of the k groups of face images; and training face model matrices according to the first matrix and the second matrix.
Type: Grant
Filed: July 11, 2019
Date of Patent: March 24, 2020
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Shouhong Ding, Jilin Li, Chengjie Wang, Feiyue Huang, Yongjian Wu, Guofu Tan
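The first and second matrices described in this abstract are standard within-group and between-group covariance constructions. A minimal NumPy sketch (function and variable names are my own, not from the patent, which trains further model matrices from these two):

```python
import numpy as np

def covariance_matrices(groups):
    """Compute intra-group and inter-group covariance matrices.

    groups: list of (n_i, d) arrays, one per group of facial feature vectors.
    Returns (S_w, S_b): the within-group and between-group covariance.
    """
    n_total = sum(len(g) for g in groups)
    d = groups[0].shape[1]
    group_means = [g.mean(axis=0) for g in groups]
    overall_mean = np.vstack(groups).mean(axis=0)

    # Intra-group covariance: scatter of samples around their own group mean.
    S_w = np.zeros((d, d))
    for g, mu in zip(groups, group_means):
        centered = g - mu
        S_w += centered.T @ centered
    S_w /= n_total

    # Inter-group covariance: scatter of group means around the overall mean,
    # weighted by group size.
    S_b = np.zeros((d, d))
    for g, mu in zip(groups, group_means):
        diff = (mu - overall_mean).reshape(-1, 1)
        S_b += len(g) * (diff @ diff.T)
    S_b /= n_total
    return S_w, S_b
```

With two internally identical groups, S_w is zero while S_b captures the spread between the two group means.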
-
Publication number: 20200057883
Abstract: A face attribute recognition method, electronic device, and storage medium. The method may include obtaining a face image, inputting the face image into an attribute recognition model, performing a forward calculation on the face image using the attribute recognition model to obtain a plurality of attribute values according to different types of attributes, and outputting the plurality of attribute values, the plurality of attribute values indicating recognition results of a plurality of attributes of the face image. The attribute recognition model may be obtained through training based on a plurality of sample face images, a plurality of sample attribute recognition results of the plurality of sample face images, and the different types of attributes.
Type: Application
Filed: October 28, 2019
Publication date: February 20, 2020
Applicant: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Yanhao Ge, Jilin Li, Chengjie Wang
-
Publication number: 20190372972
Abstract: An identity verification method performed at a terminal includes playing in an audio form action guide information including mouth shape guide information selected from a preset action guide information library at a speed corresponding to the action guide information, and collecting a corresponding set of action images within a preset time window; performing matching detection on the collected set of action images and the action guide information, to obtain a living body detection result indicating whether a living body exists in the collected set of action images; according to the living body detection result that indicates that a living body exists in the collected set of action images: collecting user identity information and performing verification according to the collected user identity information, to obtain a user identity information verification result; and determining the identity verification result according to the user identity information verification result.
Type: Application
Filed: August 15, 2019
Publication date: December 5, 2019
Inventors: Feiyue HUANG, Jilin LI, Guofu Tan, Xiaoli Jiang, Dan Wu, Junwu Chen, Jianguo Xie, Wei Guo, Yihui Liu, Jiandong Xie
-
Publication number: 20190332847
Abstract: Face model matrix training method, apparatus, and storage medium are provided. The method includes: obtaining a face image library, the face image library including k groups of face images, and each group of face images including at least one face image of at least one person, k>2, and k being an integer; separately parsing each group of the k groups of face images, and calculating a first matrix and a second matrix according to parsing results, the first matrix being an intra-group covariance matrix of facial features of each group of face images, and the second matrix being an inter-group covariance matrix of facial features of the k groups of face images; and training face model matrices according to the first matrix and the second matrix.
Type: Application
Filed: July 11, 2019
Publication date: October 31, 2019
Inventors: Shouhong DING, Jilin LI, Chengjie WANG, Feiyue HUANG, Yongjian WU, Guofu TAN
-
Patent number: 10438077
Abstract: A face liveness detection method includes outputting a prompt to complete one or more specified actions in sequence within a specified time period, obtaining a face video, detecting a reference face image frame in the face video using a face detection method, locating a facial keypoint in the reference face image frame, tracking the facial keypoint in one or more subsequent face image frames, determining a state parameter of one of the one or more specified actions using a continuity analysis method according to the facial keypoint, and determining whether the one of the one or more specified actions is completed according to a continuity of the state parameter.
Type: Grant
Filed: October 9, 2017
Date of Patent: October 8, 2019
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Chengjie Wang, Jilin Li, Feiyue Huang, Yongjian Wu
-
Patent number: 10438329
Abstract: The method provided in the present disclosure includes: obtaining an image photographed by a camera, and performing face detection on the image by using a face detection algorithm, to obtain a face pixel set from the image; positioning a facial feature contour mask over the face pixel set, to obtain a to-be-examined pixel set from the face pixel set, the to-be-examined pixel set including: a plurality of pixels within an image area except pixels masked by the facial feature contour mask in the face pixel set; performing edge contour detection on the to-be-examined pixel set, and extracting one or more blemish regions from the to-be-examined pixel set, to obtain a to-be-retouched pixel set, the to-be-retouched pixel set including: a plurality of pixels within an image area belonging to the blemish regions; and retouching all pixels in the to-be-retouched pixel set, to obtain a retouched pixel set.
Type: Grant
Filed: September 8, 2017
Date of Patent: October 8, 2019
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Guofu Tan, Jilin Li
-
Patent number: 10432624
Abstract: An identity verification method performed at a terminal includes: displaying and/or playing in an audio form action guide information selected from a preset action guide information library, and collecting a corresponding set of action images within a preset time window; performing matching detection on the collected set of action images and the action guide information, to obtain a living body detection result indicating whether a living body exists in the collected set of action images; according to the living body detection result that indicates that a living body exists in the collected set of action images: collecting user identity information and performing verification according to the collected user identity information, to obtain a user identity information verification result; and determining the identity verification result according to the user identity information verification result.
Type: Grant
Filed: June 23, 2017
Date of Patent: October 1, 2019
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Feiyue Huang, Jilin Li, Guofu Tan, Xiaoli Jiang, Dan Wu, Junwu Chen, Jianguo Xie, Wei Guo, Yihui Liu, Jiandong Xie
-
Patent number: 10410053
Abstract: A method for detecting an information card in an image is provided. The method includes performing a line detection to obtain two endpoints of a line segment corresponding to each of four sides of the information card; generating a linear equation of each side; obtaining coordinates of four intersection points of the four sides of the information card; mapping the coordinates of the four intersection points to four corners of a rectangular box of the information card, to obtain a perspective transformation matrix; performing perspective transformation on image content encircled by four straight lines represented by the four linear equations to provide transformed image content; forming a gradient template according to a layout of information content on the information card; and using the gradient template to match with the transformed image content and determining whether the image content is a correct information card.
Type: Grant
Filed: September 26, 2017
Date of Patent: September 10, 2019
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Chengjie Wang, Hui Ni, Jilin Li
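The corner-finding step of this abstract — forming a linear equation from each side's two detected endpoints and intersecting adjacent sides — can be sketched in plain Python (names are illustrative, not from the patent; the subsequent perspective transform and gradient-template matching are omitted):

```python
def line_through(p1, p2):
    """Linear equation a*x + b*y = c of the line through two endpoints."""
    (x1, y1), (x2, y2) = p1, p2
    a = y2 - y1
    b = x1 - x2
    c = a * x1 + b * y1
    return a, b, c

def intersect(l1, l2):
    """Intersection point of two lines given as (a, b, c) coefficients.

    Returns None for (near-)parallel lines, which cannot form a card corner.
    """
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1          # determinant of the 2x2 system
    if abs(det) < 1e-9:
        return None
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return (x, y)
```

Intersecting each of the four sides with its two neighbours yields the four corner coordinates that are then mapped to a rectangle to obtain the perspective transformation matrix.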
-
Publication number: 20190266385
Abstract: Disclosed are an evaluation method and an evaluation device for a facial key point positioning result. In some embodiments, the evaluation method includes: acquiring a facial image and one or more positioning result coordinates of a key point of the facial image; performing a normalization process on the positioning result coordinate and an average facial model to obtain a normalized facial image; and extracting a facial feature value of the normalized facial image and calculating an evaluation result based on the facial feature value and a weight vector.
Type: Application
Filed: May 10, 2019
Publication date: August 29, 2019
Inventors: Chengjie WANG, Jilin LI, Feiyue HUANG, Kekai SHENG, Weiming DONG
-
Patent number: 10395095
Abstract: Face model matrix training method, apparatus, and storage medium are provided. The method includes: obtaining a face image library, the face image library including k groups of face images, and each group of face images including at least one face image of at least one person, k>2, and k being an integer; separately parsing each group of the k groups of face images, and calculating a first matrix and a second matrix according to parsing results, the first matrix being an intra-group covariance matrix of facial features of each group of face images, and the second matrix being an inter-group covariance matrix of facial features of the k groups of face images; and training face model matrices according to the first matrix and the second matrix.
Type: Grant
Filed: September 13, 2017
Date of Patent: August 27, 2019
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Shouhong Ding, Jilin Li, Chengjie Wang, Feiyue Huang, Yongjian Wu, Guofu Tan
-
Publication number: 20190251337
Abstract: A facial tracking method can include receiving a first vector of a first frame, and second vectors of second frames that are prior to the first frame in a video. The first vector is formed by coordinates of first facial feature points in the first frame and determined based on a facial registration method. Each second vector is formed by coordinates of second facial feature points in the respective second frame and previously determined based on the facial tracking method. A second vector of the first frame is determined according to a fitting function based on the second vectors of the second frames. The fitting function has a set of coefficients that are determined by solving a problem of minimizing a function formulated based on a difference between the second vector and the first vector of the current frame, and a square sum of the coefficients.
Type: Application
Filed: March 18, 2019
Publication date: August 15, 2019
Applicant: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Yicong LIANG, Chengjie WANG, Shaoxin LI, Yandan ZHAO, Jilin LI
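The minimization described in this abstract — a data-fit term against the registered vector plus a square-sum penalty on the coefficients — has the shape of ridge-regularised least squares, which admits a closed-form solution. A NumPy sketch under that assumption (the patent's exact fitting function and weighting may differ):

```python
import numpy as np

def smoothed_vector(prev_vectors, registered_vector, lam=0.1):
    """Fit the current frame's keypoint vector as a linear combination of the
    previous frames' vectors.

    prev_vectors: (k, d) array, one keypoint vector per previous frame.
    registered_vector: (d,) vector from the facial registration step.
    lam: weight of the square-sum penalty on the combination coefficients.
    """
    X = np.asarray(prev_vectors, dtype=float).T   # (d, k): columns are past frames
    v = np.asarray(registered_vector, dtype=float)
    k = X.shape[1]
    # Closed-form ridge solution: w = (X^T X + lam*I)^(-1) X^T v
    w = np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ v)
    return X @ w                                  # smoothed vector for the current frame
```

When the previous frames agree with the registered vector, the output stays close to it; when they disagree, the penalty keeps the coefficients small and the result temporally smooth.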
-
Publication number: 20190205623
Abstract: A facial tracking method is provided. The method includes: obtaining, from a video stream, an image that currently needs to be processed as a current image frame; and obtaining coordinates of facial key points in a previous image frame and a confidence level corresponding to the previous image frame. The method also includes calculating coordinates of facial key points in the current image frame according to the coordinates of the facial key points in the previous image frame when the confidence level is higher than a preset threshold; and performing multi-face recognition on the current image frame according to the coordinates of the facial key points in the current image frame. The method also includes calculating a confidence level of the coordinates of the facial key points in the current image frame, and returning to process a next frame until recognition on all image frames is completed.
Type: Application
Filed: March 8, 2019
Publication date: July 4, 2019
Inventors: Chengjie WANG, Hui NI, Yandan ZHAO, Yabiao WANG, Shouhong DING, Shaoxin LI, Ling ZHAO, Jilin LI, Yongjian WU, Feiyue HUANG, Yicong LIANG
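The per-frame control flow of this abstract — reuse the previous frame's key points when their confidence clears the threshold, otherwise fall back to full detection — can be sketched as follows. The callables `detect`, `track`, and `score` are hypothetical stand-ins for the model components, not APIs from the patent:

```python
def process_video(frames, detect, track, score, threshold=0.5):
    """Confidence-gated facial tracking loop (sketch).

    detect(frame)           -> key points via full detection/registration (expensive)
    track(frame, prev_kpts) -> key points derived from the previous frame's (cheap)
    score(frame, kpts)      -> confidence level of the key points on that frame
    """
    results = []
    prev_kpts, prev_conf = None, 0.0
    for frame in frames:
        if prev_kpts is not None and prev_conf > threshold:
            kpts = track(frame, prev_kpts)   # previous frame was reliable: track
        else:
            kpts = detect(frame)             # first frame or low confidence: re-detect
        prev_conf = score(frame, kpts)       # gates the decision for the next frame
        prev_kpts = kpts
        results.append(kpts)
    return results
```

The confidence computed on each frame thus decides, one frame later, whether cheap tracking or full re-detection runs.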
-
Patent number: 10331940
Abstract: Disclosed are an evaluation method and an evaluation device for a facial key point positioning result. In some embodiments, the evaluation method includes: acquiring a facial image and one or more positioning result coordinates of a key point of the facial image; performing a normalization process on the positioning result coordinate and an average facial model to obtain a normalized facial image; and extracting a facial feature value of the normalized facial image and calculating an evaluation result based on the facial feature value and a weight vector.
Type: Grant
Filed: August 7, 2017
Date of Patent: June 25, 2019
Assignee: Tencent Technology (Shenzhen) Company Limited
Inventors: Chengjie Wang, Jilin Li, Feiyue Huang, Kekai Sheng, Weiming Dong
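The normalization step against an average facial model in this abstract is commonly a similarity alignment of the positioning-result coordinates to the mean shape. A minimal scale-and-translation sketch (the patent's full procedure may also solve for rotation and then render a normalized image; names are my own):

```python
import numpy as np

def normalize_to_mean_shape(points, mean_shape):
    """Align positioning-result coordinates to an average facial model.

    points, mean_shape: (n, 2) arrays of key-point coordinates.
    Returns the points rescaled and re-centered onto the mean shape's frame.
    """
    p_centered = points - points.mean(axis=0)
    m_centered = mean_shape - mean_shape.mean(axis=0)
    # Match overall scale of the point cloud to the mean shape's scale.
    scale = np.linalg.norm(m_centered) / np.linalg.norm(p_centered)
    return p_centered * scale + mean_shape.mean(axis=0)
```

After normalization, a facial feature value extracted from the aligned face can be scored against a learned weight vector to produce the evaluation result.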
-
Publication number: 20190138791
Abstract: When a target image is captured, the device provides a portion of the target image within a target detection region to a preset first model set to calculate positions of first face key points and a first confidence value. The first face key points and the first confidence value are output by the first model set for a single input of the portion of the target image into the first model set. When the first confidence value meets a first threshold corresponding to whether the target image is a face image, the device obtains a second target image corresponding to the positions of the first face key points; the device inputs the second target image into the first model set to calculate a second confidence value, the second confidence value corresponding to the accuracy of key point positioning, and outputs the first face key points if the second confidence value meets a second threshold.
Type: Application
Filed: December 17, 2018
Publication date: May 9, 2019
Inventors: Chengjie WANG, Jilin Li, Yandan Zhao, Hui Ni, Yabiao Wang, Ling Zhao
-
Publication number: 20190114467
Abstract: A device receives an image-based authentication request from a specified object and performs human face authentication in a manner depending on whether the object wears glasses. Specifically, the device designates a glasses region on a daily photograph of the specified object using a glasses segmentation model. If the regions of the human face in the daily photograph labeled as glasses exceed a first threshold amount, the device modifies the daily photograph by changing pixel values of the regions that are labeled as being obscured by glasses. The device extracts features of a daily human face from the daily photograph and features of an identification human face from the identification photograph. The device approves the authentication request if a matching degree between the features of the daily human face and the features of the identification human face is greater than a second threshold amount.
Type: Application
Filed: December 3, 2018
Publication date: April 18, 2019
Inventors: Yicong LIANG, Jilin LI, Chengjie WANG, Shouhong DING
-
Publication number: 20180349590
Abstract: A sign-in method and server based on facial recognition are provided. The method includes: receiving a face image of a sign-in user from a sign-in terminal; detecting, according to the face image of the sign-in user, whether a target registration user matching the sign-in user exists in a pre-stored registration set, the registration set including a face image of at least one registration user; and confirming that the target registration user has signed in successfully if the target registration user exists in the registration set.
Type: Application
Filed: August 10, 2018
Publication date: December 6, 2018
Inventors: Feiyue HUANG, Yongjian WU, Guofu TAN, Jilin LI, Zhibo CHEN, Xiaoqing LIANG, Zhiwei TAO, Kejing ZHOU, Ke MEI
-
Publication number: 20180307928
Abstract: The present application discloses a live human face verification method and device. The device acquires face images captured by at least two cameras and performs feature point registration on the face images according to preset face feature points, to obtain corresponding feature point combinations between the face images. After fitting a homography transformation matrix to the feature point combinations, the device calculates transformation errors of the feature point combinations using the homography transformation matrix to obtain an error calculation result, and performs live human face verification of the face images according to the error calculation result. The embodiments of the present application do not need to calibrate the cameras, so the amount of calculation of the living body judgment algorithm can be reduced; moreover, the cameras can be freely placed, thereby increasing the flexibility and convenience of living body judgment.
Type: Application
Filed: June 29, 2018
Publication date: October 25, 2018
Inventors: Guanbo BAO, Jilin LI
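The transformation-error computation this abstract relies on can be sketched with NumPy. The intuition (my reading, not the patent's wording): two views of a planar surface, such as a printed or on-screen face, are related by a single homography, so small transfer errors suggest a flat spoof, while a genuine 3-D face produces large errors. Names, the homography source, and the decision direction are assumptions:

```python
import numpy as np

def transfer_errors(H, pts_a, pts_b):
    """Per-correspondence transformation error under homography H.

    H: (3, 3) homography fitted to the feature point combinations.
    pts_a, pts_b: (n, 2) matched feature points from cameras A and B.
    Returns the distance between H-mapped A-points and their B matches.
    """
    pts_a_h = np.hstack([pts_a, np.ones((len(pts_a), 1))])  # homogeneous coords
    mapped = pts_a_h @ H.T
    mapped = mapped[:, :2] / mapped[:, 2:3]                 # back to Cartesian
    return np.linalg.norm(mapped - pts_b, axis=1)

def is_live(H, pts_a, pts_b, err_threshold=3.0):
    """Declare a live face when the points do NOT fit a single plane well
    (assumed decision direction; the threshold is illustrative)."""
    return transfer_errors(H, pts_a, pts_b).mean() > err_threshold
```

Because only matched feature points between the two views are needed, no camera calibration is required, consistent with the abstract's claim.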
-
Patent number: 10068128
Abstract: The present disclosure pertains to the field of image processing technologies and discloses a face key point positioning method and a terminal. The method includes: obtaining a face image; recognizing a face frame in the face image; determining positions of n key points of a target face in the face frame according to the face frame and a first positioning algorithm; performing screening to select, from candidate faces, a similar face whose positions of corresponding key points match the positions of the n key points of the target face; and determining positions of m key points of the similar face selected through screening according to a second positioning algorithm, m being a positive integer. In this way, the problem in related technologies that key point positions obtained by a terminal have relatively large deviations is resolved, thereby improving the accuracy of the positioned key points.
Type: Grant
Filed: February 21, 2017
Date of Patent: September 4, 2018
Assignee: Tencent Technology (Shenzhen) Company Limited
Inventors: Chengjie Wang, Jilin Li, Feiyue Huang, Yongjian Wu
-
Patent number: 10055879
Abstract: A 3D human face reconstruction method and apparatus, and a server are provided. In some embodiments, the method includes determining feature points on an acquired 2D human face image; determining posture parameters of a human face according to the feature points, and adjusting a posture of a universal 3D human face model according to the posture parameters; determining points on the universal 3D human face model corresponding to the feature points, and adjusting the corresponding points in a sheltered status to obtain a preliminary 3D human face model; and performing deformation adjusting on the preliminary 3D human face model, and performing texture mapping on the deformed 3D human face model to obtain a final 3D human face.
Type: Grant
Filed: July 17, 2017
Date of Patent: August 21, 2018
Assignee: Tencent Technology (Shenzhen) Company Limited
Inventors: Chengjie Wang, Jilin Li, Feiyue Huang, Lei Zhang