Patents by Inventor Jilin Li

Jilin Li has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11087476
    Abstract: A trajectory tracking method is provided for a computer device. The method includes performing motion tracking on head images in a plurality of video frames, to obtain motion trajectories corresponding to the head images; acquiring face images corresponding to the head images in the video frames, to obtain face image sets corresponding to the head images; determining, from the face image sets corresponding to the head images, at least two face image sets having the same face images; and combining the motion trajectories corresponding to the at least two face image sets having the same face images, to obtain a final motion trajectory of the trajectory tracking.
    Type: Grant
    Filed: June 2, 2020
    Date of Patent: August 10, 2021
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Changwei He, Chengjie Wang, Jilin Li, Yabiao Wang, Yandan Zhao, Yanhao Ge, Hui Ni, Yichao Xiong, Zhenye Gan, Yongjian Wu, Feiyue Huang
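    The combining step in the abstract above can be pictured as merging any head trajectories whose face-image sets overlap. A minimal Python sketch of that idea (the trajectory/face data structures and the merge_trajectories helper are hypothetical, not taken from the patent):

        # Merge head-motion trajectories whose associated face-image sets overlap,
        # a rough illustration of combining trajectories that share the same faces.
        def merge_trajectories(trajectories):
            # trajectories: list of dicts with "points" (list of (x, y)) and "faces" (set of face ids)
            merged = []
            for traj in trajectories:
                # collect every existing group that shares at least one face with this trajectory
                hits = [g for g in merged if g["faces"] & traj["faces"]]
                group = {"points": list(traj["points"]), "faces": set(traj["faces"])}
                for g in hits:
                    group["points"] = g["points"] + group["points"]
                    group["faces"] |= g["faces"]
                    merged.remove(g)
                merged.append(group)
            return merged

        tracks = [
            {"points": [(0, 0), (1, 1)], "faces": {"face_A"}},
            {"points": [(5, 5), (6, 6)], "faces": {"face_B"}},
            {"points": [(2, 2), (3, 3)], "faces": {"face_A", "face_C"}},  # shares face_A -> merged
        ]
        print(merge_trajectories(tracks))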
  • Patent number: 10990803
    Abstract: When a first target image is captured, the device provides a portion of the first target image within a target detection region to a preset first model set to calculate positions of first face key points and a first confidence value. The first face key points and the first confidence value are output by the first model set for a single input of the portion of the first target image into the first model set. When the first confidence value meets a first threshold corresponding to whether the first target image is a face image, the device obtains a second target image corresponding to the positions of the first face key points. The device then inputs the second target image into the first model set to calculate a second confidence value, which corresponds to the accuracy of the key point positioning, and outputs the first face key points if the second confidence value meets a second threshold.
    Type: Grant
    Filed: December 17, 2018
    Date of Patent: April 27, 2021
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Chengjie Wang, Jilin Li, Yandan Zhao, Hui Ni, Yabiao Wang, Ling Zhao
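    The abstract describes a two-stage confidence check: one pass decides whether the detection region contains a face, and a second pass on a re-cropped image checks positioning accuracy. A rough sketch of that flow, where model_set is a hypothetical stand-in for the patented model set and the thresholds are arbitrary:

        import numpy as np

        # Hypothetical stand-in for the "first model set": returns key-point positions
        # inside the crop plus a confidence value for a single input.
        def model_set(image_crop):
            h, w = image_crop.shape[:2]
            keypoints = np.random.rand(5, 2) * np.array([w, h])   # (x, y) inside the crop
            confidence = float(np.random.rand())
            return keypoints, confidence

        def locate_face_keypoints(image, detection_box, t1=0.5, t2=0.8):
            x0, y0, x1, y1 = detection_box
            keypoints, conf1 = model_set(image[y0:y1, x0:x1])      # first pass: is this a face?
            if conf1 < t1:
                return None                                        # first threshold not met
            xs, ys = keypoints[:, 0] + x0, keypoints[:, 1] + y0    # key points in image coordinates
            crop2 = image[int(ys.min()):int(ys.max()) + 1,         # second target image around them
                          int(xs.min()):int(xs.max()) + 1]
            _, conf2 = model_set(crop2)                            # second pass: positioning accuracy
            return keypoints if conf2 >= t2 else None

        frame = np.zeros((480, 640, 3), dtype=np.uint8)
        print(locate_face_keypoints(frame, (100, 100, 300, 300)))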
  • Patent number: 10992666
    Abstract: An identity verification method performed at a terminal includes playing, in audio form, action guide information (including mouth shape guide information) selected from a preset action guide information library, at a speed corresponding to the action guide information, and collecting a corresponding set of action images within a preset time window; performing matching detection on the collected set of action images and the action guide information, to obtain a living body detection result indicating whether a living body exists in the collected set of action images; when the living body detection result indicates that a living body exists in the collected set of action images, collecting user identity information and performing verification according to the collected user identity information, to obtain a user identity information verification result; and determining the identity verification result according to the user identity information verification result.
    Type: Grant
    Filed: August 15, 2019
    Date of Patent: April 27, 2021
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Feiyue Huang, Jilin Li, Guofu Tan, Xiaoli Jiang, Dan Wu, Junwu Chen, Jianguo Xie, Wei Guo, Yihui Liu, Jiandong Xie
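    One way to read the flow in the abstract: play a randomly chosen audio guide, capture images within a time window, check the captured action against the guide, and only then verify the user's identity information. The helpers below (ACTION_GUIDE_LIBRARY, play_audio_guide, collect_action_images, matches_guide, verify_identity_info) are hypothetical stubs, not the patented implementation:

        import random

        # Hypothetical stubs; a real system would use audio playback, a camera,
        # and trained matching models.
        ACTION_GUIDE_LIBRARY = ["open your mouth", "smile", "read the digits 3 7 2 aloud"]

        def play_audio_guide(text):            # text-to-speech playback (stubbed)
            print(f"[audio] {text}")

        def collect_action_images(window_s):   # camera capture within the time window (stubbed)
            return [f"frame_{i}" for i in range(int(window_s * 5))]

        def matches_guide(images, guide):      # mouth-shape / action matching (stubbed)
            return len(images) > 0

        def verify_identity_info(user_info):   # identity-information check (stubbed)
            return user_info.get("id") == "expected-id"

        def identity_verification(user_info, window_s=3.0):
            guide = random.choice(ACTION_GUIDE_LIBRARY)
            play_audio_guide(guide)                        # play the randomly selected guide
            images = collect_action_images(window_s)
            if not matches_guide(images, guide):           # no living body detected
                return False
            return verify_identity_info(user_info)         # then verify the user identity information

        print(identity_verification({"id": "expected-id"}))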
  • Publication number: 20210049347
    Abstract: This application relates to feature point positioning technologies. The technologies involve positioning a target area in a current image; determining an image feature difference between a target area in a reference image and the target area in the current image, the reference image being a frame of image that is processed before the current image and that includes the target area; determining a target feature point location of the target area in the reference image; determining a target feature point location difference between the target area in the reference image and the target area in the current image according to a feature point location difference determining model and the image feature difference; and positioning a target feature point in the target area in the current image according to the target feature point location of the target area in the reference image and the target feature point location difference.
    Type: Application
    Filed: November 4, 2020
    Publication date: February 18, 2021
    Applicant: Tencent Technology (Shenzhen) Company Limited
    Inventors: Yandan ZHAO, Yichao YAN, Weijian CAO, Yun CAO, Yanhao GE, Chengjie WANG, Jilin LI
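    A minimal sketch of the positioning idea in the abstract: compute an image-feature difference between the reference and current target areas, map it to a per-point location difference, and add that to the reference locations. The feature extractor and the linear LocationDifferenceModel below are hypothetical stand-ins for the patent's models:

        import numpy as np

        # Hypothetical stand-ins: an image-feature extractor and a linear model that maps
        # an image-feature difference to a key-point location difference.
        def image_features(region):
            return region.astype(np.float32).mean(axis=2).flatten()

        class LocationDifferenceModel:
            def __init__(self, feat_dim, num_points):
                rng = np.random.default_rng(0)
                self.W = rng.normal(scale=1e-3, size=(num_points * 2, feat_dim))
            def predict(self, feature_diff):
                return (self.W @ feature_diff).reshape(-1, 2)

        def position_feature_points(ref_region, cur_region, ref_points, model):
            feature_diff = image_features(cur_region) - image_features(ref_region)
            location_diff = model.predict(feature_diff)      # predicted per-point offsets
            return ref_points + location_diff                # reference locations plus offsets

        ref = np.zeros((64, 64, 3), dtype=np.uint8)
        cur = np.ones((64, 64, 3), dtype=np.uint8)
        ref_pts = np.array([[20.0, 30.0], [40.0, 30.0]])
        model = LocationDifferenceModel(feat_dim=64 * 64, num_points=2)
        print(position_feature_points(ref, cur, ref_pts, model))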
  • Patent number: 10922529
    Abstract: A device receives an image-based authentication request from a specified object and performs human face authentication in a manner depending on whether the object wears glasses. Specifically, the device designates a glasses region on a daily photograph of the specified object using a glasses segmentation model. If the regions of the human face in the daily photograph labeled as glasses exceed a first threshold amount, the device modifies the daily photograph by changing pixel values of the regions that are labeled as being obscured by glasses. The device extracts features of a daily human face from the daily photograph and features of an identification human face from an identification photograph of the specified object. The device approves the authentication request if a matching degree between the features of the daily human face and the features of the identification human face is greater than a second threshold amount.
    Type: Grant
    Filed: December 3, 2018
    Date of Patent: February 16, 2021
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Yicong Liang, Jilin Li, Chengjie Wang, Shouhong Ding
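    A rough sketch of the authentication flow described above, with hypothetical glasses_mask and face_features stubs in place of the patent's segmentation and recognition models, and illustrative thresholds:

        import numpy as np

        # Hypothetical stand-ins for the glasses-segmentation model and the face-feature extractor.
        def glasses_mask(photo):                      # True where pixels are labeled as glasses
            mask = np.zeros(photo.shape[:2], dtype=bool)
            mask[20:30, 10:54] = True                 # pretend the model found a glasses band
            return mask

        def face_features(photo):                     # face embedding (stubbed as a mean-pooled vector)
            return photo.reshape(-1, photo.shape[-1]).mean(axis=0)

        def authenticate(daily_photo, id_photo, area_threshold=0.05, match_threshold=0.9):
            mask = glasses_mask(daily_photo)
            if mask.mean() > area_threshold:          # enough pixels labeled as glasses?
                daily_photo = daily_photo.copy()
                daily_photo[mask] = 0                 # change pixel values of the obscured regions
            f_daily, f_id = face_features(daily_photo), face_features(id_photo)
            cosine = float(f_daily @ f_id / (np.linalg.norm(f_daily) * np.linalg.norm(f_id) + 1e-8))
            return cosine > match_threshold           # approve if the matching degree is high enough

        daily = np.random.default_rng(0).integers(0, 255, size=(64, 64, 3)).astype(np.float32)
        print(authenticate(daily, daily))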
  • Patent number: 10909356
    Abstract: A facial tracking method can include receiving a first vector of a first frame, and second vectors of second frames that are prior to the first frame in a video. The first vector is formed by coordinates of first facial feature points in the first frame and determined based on a facial registration method. Each second vector is formed by coordinates of second facial feature points in the respective second frame and previously determined based on the facial tracking method. A second vector of the first frame is determined according to a fitting function based on the second vectors of the second frames. The fitting function has a set of coefficients that are determined by solving a problem of minimizing a function formulated based on a difference between the second vector and the first vector of the first frame, and a square sum of the coefficients.
    Type: Grant
    Filed: March 18, 2019
    Date of Patent: February 2, 2021
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Yicong Liang, Chengjie Wang, Shaoxin Li, Yandan Zhao, Jilin Li
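    The fitting described above, minimizing a difference term plus the square sum of the coefficients, reads like a ridge-regression-style solve. A minimal numpy sketch under that reading (variable names and the penalty weight are assumptions):

        import numpy as np

        def track_keypoints(prev_vectors, registration_vector, lam=1.0):
            """Fit the current frame's key-point vector as a weighted combination of the
            previous frames' tracked vectors, penalizing the square sum of the weights
            (a ridge-regression-style solve, per the abstract above)."""
            V = np.stack(prev_vectors, axis=1)              # (2K, n_prev): one column per previous frame
            A = V.T @ V + lam * np.eye(V.shape[1])          # normal equations with an L2 penalty
            coeffs = np.linalg.solve(A, V.T @ registration_vector)
            return V @ coeffs                               # smoothed second vector for the current frame

        rng = np.random.default_rng(0)
        prev = [rng.normal(size=10) for _ in range(4)]      # 4 previous frames, 5 (x, y) points each
        registered = prev[-1] + rng.normal(scale=0.05, size=10)
        print(track_keypoints(prev, registered))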
  • Publication number: 20200372243
    Abstract: This application relates to an image processing method and apparatus, a facial recognition method and apparatus, a computer device, and a readable storage medium.
    Type: Application
    Filed: August 12, 2020
    Publication date: November 26, 2020
    Applicant: Tencent Technology (Shenzhen) Company Limited
    Inventors: Ying TAI, Yun CAO, Shouhong DING, Shaoxin LI, Chengjie WANG, Jilin LI
  • Publication number: 20200364502
    Abstract: This application relates to a model training method.
    Type: Application
    Filed: August 4, 2020
    Publication date: November 19, 2020
    Inventors: Anping LI, Shaoxin LI, Chao CHEN, Pengcheng SHEN, Shuang WU, Jilin LI
  • Publication number: 20200356767
    Abstract: This application discloses a human attribute recognition method performed at a computing device. The method includes: determining a human body region image in a surveillance image; inputting the human body region image into a multi-attribute convolutional neural network model, to obtain, for each of a plurality of human attributes in the human body region image, a probability that the human attribute corresponds to a respective predefined attribute value, the multi-attribute convolutional neural network model being obtained by performing multi-attribute recognition and training on a set of pre-obtained training images by using a multi-attribute convolutional neural network; determining, for each of the plurality of human attributes in the human body region image, the attribute value of the human attribute based on the corresponding probability; and displaying the attribute values of the plurality of human attributes next to the human body region image.
    Type: Application
    Filed: July 24, 2020
    Publication date: November 12, 2020
    Inventors: Siqian YANG, Jilin LI, Yongjian WU, Yichao YAN, Keke HE, Yanhao GE, Feiyue HUANG, Chengjie WANG
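    A small sketch of how per-attribute probabilities from a multi-attribute network could be decoded into displayable attribute values; the attribute names and value lists below are invented for illustration, not taken from the patent:

        import numpy as np

        # Hypothetical attribute definitions; the real model's attribute set is not given here.
        ATTRIBUTE_VALUES = {
            "gender": ["male", "female"],
            "upper_color": ["red", "blue", "black", "other"],
            "carrying_bag": ["yes", "no"],
        }

        def decode_attributes(probabilities):
            """probabilities: dict mapping each attribute to a probability per predefined value
            (as produced by a multi-attribute CNN head). Picks the most probable value."""
            result = {}
            for attr, probs in probabilities.items():
                idx = int(np.argmax(probs))
                result[attr] = (ATTRIBUTE_VALUES[attr][idx], float(probs[idx]))
            return result

        fake_output = {
            "gender": np.array([0.2, 0.8]),
            "upper_color": np.array([0.1, 0.6, 0.2, 0.1]),
            "carrying_bag": np.array([0.7, 0.3]),
        }
        print(decode_attributes(fake_output))   # values to display next to the human body region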
  • Publication number: 20200342214
    Abstract: This application relates to a face recognition method performed at a computer server. After obtaining a to-be-recognized face image, the server inputs the to-be-recognized face image into a classification model. The server then obtains a recognition result of the to-be-recognized face image through the classification model. The classification model is obtained by inputting a training sample marked with class information into the classification model, outputting an output result of the training sample, calculating a loss of the classification model in a training process according to the output result, the class information and model parameters of the classification model, and performing back propagation optimization on the classification model according to the loss.
    Type: Application
    Filed: July 13, 2020
    Publication date: October 29, 2020
    Inventors: Anping LI, Shaoxin LI, Chao CHEN, Pengcheng SHEN, Jilin LI
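    The abstract describes a loss computed from the model output, the class information, and the model parameters, optimized by back-propagation. A generic sketch of such a training step for a linear classifier (cross-entropy plus a parameter-dependent decay term is an assumption, not the patent's actual loss):

        import numpy as np

        def softmax(z):
            z = z - z.max(axis=1, keepdims=True)
            e = np.exp(z)
            return e / e.sum(axis=1, keepdims=True)

        def train_step(W, x, labels, lr=0.1, weight_decay=1e-3):
            """One optimization step: the loss depends on the output, the class labels,
            and the model parameters (via the decay term), and the parameters are
            updated by back-propagating its gradient."""
            probs = softmax(x @ W)                                  # output result
            n = x.shape[0]
            onehot = np.eye(W.shape[1])[labels]
            loss = -np.log(probs[np.arange(n), labels]).mean() + weight_decay * np.sum(W ** 2)
            grad = x.T @ (probs - onehot) / n + 2 * weight_decay * W
            return W - lr * grad, loss

        rng = np.random.default_rng(0)
        W = rng.normal(scale=0.01, size=(128, 10))                  # 128-d face features, 10 identities
        x, y = rng.normal(size=(32, 128)), rng.integers(0, 10, size=32)
        W, loss = train_step(W, x, y)
        print(round(float(loss), 4))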
  • Patent number: 10817708
    Abstract: A facial tracking method is provided. The method includes: obtaining, from a video stream, an image that currently needs to be processed as a current image frame; and obtaining coordinates of facial key points in a previous image frame and a confidence level corresponding to the previous image frame. The method also includes calculating coordinates of facial key points in the current image frame according to the coordinates of the facial key points in the previous image frame when the confidence level is higher than a preset threshold; and performing multi-face recognition on the current image frame according to the coordinates of the facial key points in the current image frame. The method also includes calculating a confidence level of the coordinates of the facial key points in the current image frame, and returning to process a next frame until recognition on all image frames is completed.
    Type: Grant
    Filed: March 8, 2019
    Date of Patent: October 27, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Chengjie Wang, Hui Ni, Yandan Zhao, Yabiao Wang, Shouhong Ding, Shaoxin Li, Ling Zhao, Jilin Li, Yongjian Wu, Feiyue Huang, Yicong Liang
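    A minimal sketch of the frame-by-frame loop in the abstract: reuse the previous frame's key points when their confidence is high enough, otherwise re-detect; all three helper models are hypothetical stubs:

        import numpy as np

        # Hypothetical helpers standing in for the tracking and registration models.
        def track_from_previous(frame, prev_points):   # propagate key points from the previous frame
            return prev_points + np.random.normal(scale=0.5, size=prev_points.shape)

        def detect_keypoints(frame):                   # full (re-)detection when confidence is low
            return np.random.rand(68, 2) * np.array(frame.shape[1::-1])

        def estimate_confidence(frame, points):        # confidence of the current key-point estimate
            return float(np.random.rand())

        def process_stream(frames, threshold=0.5):
            points, confidence = None, 0.0
            results = []
            for frame in frames:
                if points is not None and confidence > threshold:
                    points = track_from_previous(frame, points)    # cheap path: reuse previous frame
                else:
                    points = detect_keypoints(frame)               # expensive path: re-detect
                confidence = estimate_confidence(frame, points)
                results.append((points, confidence))               # then continue with the next frame
            return results

        frames = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(5)]
        print(len(process_stream(frames)))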
  • Publication number: 20200334830
    Abstract: The present disclosure describes a video image processing method and apparatus, a computer-readable medium, and an electronic device, relating to the field of image processing technologies. The method is performed by a device including a memory storing instructions and a processor in communication with the memory. The method includes determining a target-object region in a current frame in a video; determining a target-object tracking image in a next frame corresponding to the target-object region; and sequentially performing a plurality of sets of convolution processing on the target-object tracking image to determine a target-object region in the next frame. A quantity of convolutions of a first set of convolution processing in the plurality of sets of convolution processing is less than a quantity of convolutions of any other set of convolution processing.
    Type: Application
    Filed: July 7, 2020
    Publication date: October 22, 2020
    Applicant: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Yabiao WANG, Yanhao GE, Zhenye GAN, Yuan HUANG, Changyou DENG, Yafeng ZHAO, Feiyue HUANG, Yongjian WU, Xiaoming HUANG, Xiaolong LIANG, Chengjie WANG, Jilin LI
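    A rough PyTorch sketch of "a plurality of sets of convolution processing" in which the first set contains fewer convolutions than any later set; the layer counts, channel widths, and output head are illustrative assumptions:

        import torch
        import torch.nn as nn

        # Illustrative network: the first set of convolution processing is the lightest.
        class TrackerHead(nn.Module):
            def __init__(self):
                super().__init__()
                self.set1 = nn.Sequential(                       # first set: fewest convolutions
                    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
                self.set2 = nn.Sequential(                       # later sets: more convolutions each
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
                self.set3 = nn.Sequential(
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(64, 4, 3, padding=1))              # 4 channels: a crude box map
            def forward(self, tracking_image):
                return self.set3(self.set2(self.set1(tracking_image)))

        net = TrackerHead()
        crop = torch.zeros(1, 3, 128, 128)                       # target-object tracking image
        print(net(crop).shape)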
  • Patent number: 10796179
    Abstract: The present application discloses a live human face verification method and device. The device acquires face images captured by at least two cameras and performs feature point registration on the face images according to preset face feature points, to obtain corresponding feature point combinations between the face images. After fitting a homography transformation matrix to the feature point combinations, the device calculates transformation errors of the feature point combinations using the homography transformation matrix to obtain an error calculation result, and performs live human face verification of the face images according to the error calculation result. The embodiments of the present application do not need to calibrate the cameras, so the amount of calculation of a living body judgment algorithm can be reduced; moreover, the cameras can be freely placed, thereby increasing the flexibility and convenience of living body judgment.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: October 6, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Guanbo Bao, Jilin Li
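    One common reading of the abstract: matched facial feature points from two views of a flat photo fit a single homography almost exactly, while a real three-dimensional face leaves larger transformation errors. A sketch of that check using OpenCV (the threshold and the decision direction are assumptions):

        import numpy as np
        import cv2

        def liveness_by_homography(points_cam1, points_cam2, error_threshold=5.0):
            """Fit a homography between matched facial feature points from two cameras and
            use the reprojection error to judge liveness (threshold is illustrative)."""
            p1 = np.asarray(points_cam1, dtype=np.float32)
            p2 = np.asarray(points_cam2, dtype=np.float32)
            H, _ = cv2.findHomography(p1, p2)                     # least-squares fit over all points
            projected = cv2.perspectiveTransform(p1.reshape(-1, 1, 2), H).reshape(-1, 2)
            mean_error = float(np.linalg.norm(projected - p2, axis=1).mean())
            return mean_error > error_threshold, mean_error       # large error -> likely a live face

        rng = np.random.default_rng(0)
        pts1 = rng.uniform(0, 200, size=(30, 2)).astype(np.float32)
        pts2 = pts1 + rng.normal(scale=8.0, size=pts1.shape).astype(np.float32)   # non-planar noise
        print(liveness_by_homography(pts1, pts2))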
  • Publication number: 20200294250
    Abstract: A trajectory tracking method is provided for a computer device. The method includes performing motion tracking on head images in a plurality of video frames, to obtain motion trajectories corresponding to the head images; acquiring face images corresponding to the head images in the video frames, to obtain face image sets corresponding to the head images; determining, from the face image sets corresponding to the head images, at least two face image sets having the same face images; and combining the motion trajectories corresponding to the at least two face image sets having the same face images, to obtain a final motion trajectory of the trajectory tracking.
    Type: Application
    Filed: June 2, 2020
    Publication date: September 17, 2020
    Inventors: Changwei HE, Chengjie WANG, Jilin LI, Yabiao WANG, Yandan ZHAO, Yanhao GE, Hui NI, Yichao XIONG, Zhenye GAN, Yongjian WU, Feiyue HUANG
  • Publication number: 20200257914
    Abstract: A face liveness recognition method includes: obtaining a target image containing a facial image; extracting facial feature data of the facial image in the target image; performing face liveness recognition according to the facial feature data to obtain a first confidence level using a first recognition model, the first confidence level denoting a first probability of recognizing a live face; extracting background feature data from an extended facial image, the extended facial image being obtained by extending a region that covers the facial image; performing face liveness recognition according to the background feature data to obtain a second confidence level using a second recognition model, the second confidence level denoting a second probability of recognizing a live face; and according to the first confidence level and the second confidence level, obtaining a recognition result indicating that the target image is a live facial image.
    Type: Application
    Filed: April 30, 2020
    Publication date: August 13, 2020
    Inventors: Shuang WU, Shouhong DING, Yicong LIANG, Yao LIU, Jilin LI
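    A minimal sketch of combining a confidence from the facial region with a confidence from an extended region that includes background context; the two models are hypothetical stubs and the equal-weight fusion is just one possible rule:

        import numpy as np

        # Hypothetical recognition models; each returns a confidence that the input shows a live face.
        def face_model(face_crop):        # trained on facial texture cues (stubbed)
            return 0.9
        def context_model(extended_crop): # trained on background / screen-border cues (stubbed)
            return 0.7

        def liveness(image, face_box, margin=0.5, threshold=0.6):
            x0, y0, x1, y1 = face_box
            w, h = x1 - x0, y1 - y0
            ex0, ey0 = max(0, int(x0 - margin * w)), max(0, int(y0 - margin * h))
            ex1, ey1 = int(x1 + margin * w), int(y1 + margin * h)
            c1 = face_model(image[y0:y1, x0:x1])                 # first confidence (face features)
            c2 = context_model(image[ey0:ey1, ex0:ex1])          # second confidence (background features)
            score = 0.5 * c1 + 0.5 * c2                          # one possible way to combine the two
            return score >= threshold, (c1, c2)

        img = np.zeros((480, 640, 3), dtype=np.uint8)
        print(liveness(img, (200, 150, 320, 300)))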
  • Patent number: 10713532
    Abstract: The present disclosure provides an image recognition method and apparatus, and belongs to the field of computer technologies. The method includes: extracting a local binary pattern (LBP) feature vector of a target image; calculating a high-dimensional feature vector of the target image according to the LBP feature vector; obtaining a training matrix, the training matrix being a matrix obtained by training on images in an image library using a joint Bayesian algorithm; and recognizing the target image according to the high-dimensional feature vector of the target image and the training matrix. The image recognition method and apparatus according to the present disclosure may combine the LBP algorithm with the joint Bayesian algorithm to perform recognition, thereby improving the accuracy of image recognition.
    Type: Grant
    Filed: March 19, 2018
    Date of Patent: July 14, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Shouhong Ding, Jilin Li, Chengjie Wang, Feiyue Huang, Yongjian Wu, Guofu Tan
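    A rough sketch of the pipeline named in the abstract: an LBP-based feature followed by a joint-Bayesian-style score. The uniform-LBP histogram is a simplified stand-in for the patent's high-dimensional feature, and the placeholder matrices stand in for the trained training matrix:

        import numpy as np
        from skimage.feature import local_binary_pattern

        def lbp_histogram(gray, P=8, R=1.0):
            """Uniform LBP codes pooled into a histogram, a rough stand-in for the
            patent's LBP feature vector (grid pooling and dimensionality steps are omitted)."""
            codes = local_binary_pattern(gray, P, R, method="uniform")
            hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
            return hist

        def joint_bayesian_score(x1, x2, A, G):
            # Standard joint-Bayesian log-likelihood-ratio form using matrices learned offline
            # (here A and G stand in for the "training matrix" mentioned in the abstract).
            return float(x1 @ A @ x1 + x2 @ A @ x2 - 2.0 * x1 @ G @ x2)

        rng = np.random.default_rng(0)
        img1 = rng.integers(0, 255, size=(64, 64)).astype(np.uint8)
        img2 = rng.integers(0, 255, size=(64, 64)).astype(np.uint8)
        f1, f2 = lbp_histogram(img1), lbp_histogram(img2)
        d = f1.size
        A, G = np.eye(d) * 0.1, np.eye(d) * 0.05                 # placeholder matrices, not trained
        print(joint_bayesian_score(f1, f2, A, G))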
  • Patent number: 10706263
    Abstract: Disclosed are an evaluation method and an evaluation device for a facial key point positioning result. In some embodiments, the evaluation method includes: acquiring a facial image and one or more positioning result coordinates of a key point of the facial image; performing a normalization process on the positioning result coordinates and an average facial model to obtain a normalized facial image; and extracting a facial feature value of the normalized facial image and calculating an evaluation result based on the facial feature value and a weight vector.
    Type: Grant
    Filed: May 10, 2019
    Date of Patent: July 7, 2020
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventors: Chengjie Wang, Jilin Li, Feiyue Huang, Kekai Sheng, Weiming Dong
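    A minimal sketch of the evaluation idea: normalize the positioning result against an average facial model, extract a feature from the normalized result, and score it with a weight vector. The alignment, feature sampling, and weights below are all illustrative assumptions:

        import numpy as np

        def normalize_to_mean_face(landmarks, mean_face):
            """Least-squares alignment (scale and translation only, for brevity) of the
            positioning result onto the average facial model."""
            src = landmarks - landmarks.mean(axis=0)
            dst = mean_face - mean_face.mean(axis=0)
            scale = (src * dst).sum() / (src * src).sum()
            return scale * src + mean_face.mean(axis=0)

        def evaluate_positioning(image, landmarks, mean_face, weights, bias=0.0):
            aligned = normalize_to_mean_face(landmarks, mean_face)
            # Feature extraction is stubbed: sample the (grayscale) image at the aligned points.
            ys = np.clip(aligned[:, 1].astype(int), 0, image.shape[0] - 1)
            xs = np.clip(aligned[:, 0].astype(int), 0, image.shape[1] - 1)
            feature = image[ys, xs] / 255.0
            return float(feature @ weights + bias)               # higher -> better positioning

        rng = np.random.default_rng(0)
        img = rng.integers(0, 255, size=(128, 128)).astype(np.float32)
        mean_face = rng.uniform(30, 90, size=(5, 2))
        pred = mean_face + rng.normal(scale=2.0, size=mean_face.shape)
        print(evaluate_positioning(img, pred, mean_face, weights=rng.normal(size=5)))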
  • Patent number: 10664693
    Abstract: Aspects of the disclosure provide a method for adding a target contact to a user's friend list in a social network. A target image of a human body part of the target contact can be received from a user terminal. A target biological feature can be extracted from the target image. Whether the target biological feature matches a reference biological feature of a plurality of prestored reference biological features can be determined. A social account associated with the determined reference biological feature that matches the target biological feature may be determined, and added to the user's friend list.
    Type: Grant
    Filed: April 11, 2018
    Date of Patent: May 26, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Feiyue Huang, Jilin Li, Chengjie Wang
  • Patent number: 10664580
    Abstract: A sign-in method and server based on facial recognition are provided. The method includes receiving a face image of a sign-in user from a sign-in terminal and detecting, according to the face image of the sign-in user, whether a target registration user matching the sign-in user exists in a pre-stored registration set. The registration set includes a face image of at least one registration user. The target registration user is confirmed as signed in successfully if the target registration user exists in the registration set.
    Type: Grant
    Filed: August 10, 2018
    Date of Patent: May 26, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Feiyue Huang, Yongjian Wu, Guofu Tan, Jilin Li, Zhibo Chen, Xiaoqing Liang, Zhiwei Tao, Kejing Zhou, Ke Mei
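    A small sketch of the sign-in check: embed the incoming face image, compare it against the pre-stored registration set, and confirm sign-in when a sufficiently close match exists. The embed stub and the cosine threshold are assumptions:

        import numpy as np

        def embed(face_image):                      # hypothetical face-embedding model (stubbed)
            return face_image.reshape(-1)[:128].astype(np.float32)

        def sign_in(face_image, registration_set, threshold=0.8):
            """Detect whether a registered user matches the sign-in face; if so, mark that user as signed in."""
            query = embed(face_image)
            query /= np.linalg.norm(query) + 1e-8
            best_user, best_score = None, -1.0
            for user_id, reg_face in registration_set.items():
                ref = embed(reg_face)
                ref /= np.linalg.norm(ref) + 1e-8
                score = float(query @ ref)
                if score > best_score:
                    best_user, best_score = user_id, score
            return (best_user, "signed in") if best_score >= threshold else (None, "no match")

        rng = np.random.default_rng(0)
        alice = rng.random((32, 32)).astype(np.float32)
        registrations = {"alice": alice, "bob": rng.random((32, 32)).astype(np.float32)}
        print(sign_in(alice + 0.01, registrations))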
  • Patent number: 10650259
    Abstract: An embodiment of the present invention provides a human face recognition method and recognition system. The method includes that: a human face recognition request is acquired, and a statement is randomly generated according to the human face recognition request; audio data and video data returned by a user in response to the statement are acquired; corresponding voice information is acquired according to the audio data; corresponding lip movement information is acquired according to the video data; and when the lip movement information and the voice information satisfy a preset rule, the human face recognition request is permitted. By performing goodness-of-fit matching between the lip movement information and the voice information in a video for dynamic human face recognition, an attack on human face recognition using a real photo may be effectively avoided, and higher security is achieved.
    Type: Grant
    Filed: July 7, 2017
    Date of Patent: May 12, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Chengjie Wang, Jilin Li, Hui Ni, Yongjian Wu, Feiyue Huang
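    One simple reading of "goodness-of-fit matching" between lip movement and voice: correlate a per-frame mouth-opening signal with the audio energy envelope and require the correlation to clear a threshold. The signals and threshold below are synthetic and illustrative:

        import numpy as np

        def goodness_of_fit(lip_openings, audio_energy):
            """Pearson correlation between per-frame mouth-opening measurements and the
            per-frame audio energy envelope, one simple way to check that the lips move
            in step with the randomly generated statement being spoken."""
            lip = (lip_openings - lip_openings.mean()) / (lip_openings.std() + 1e-8)
            aud = (audio_energy - audio_energy.mean()) / (audio_energy.std() + 1e-8)
            return float((lip * aud).mean())

        def recognize(lip_openings, audio_energy, threshold=0.5):
            return goodness_of_fit(lip_openings, audio_energy) >= threshold   # permit the request?

        t = np.linspace(0, 4 * np.pi, 100)
        speech_energy = np.abs(np.sin(t))                       # synthetic audio envelope
        live_lips = speech_energy + np.random.default_rng(0).normal(scale=0.1, size=t.size)
        photo_lips = np.zeros_like(t)                           # a static photo: no lip motion
        print(recognize(live_lips, speech_energy), recognize(photo_lips, speech_energy))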