Patents by Inventor Chengjie Wang

Chengjie Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190138791
    Abstract: When a first target image is captured, the device provides a portion of the first target image within a target detection region to a preset first model set to calculate positions of first face key points and a first confidence value. The first face key points and the first confidence value are output by the first model set for a single input of the portion of the first target image into the first model set. When the first confidence value meets a first threshold corresponding to whether the first target image is a face image, the device obtains a second target image corresponding to the positions of the first face key points. The device then inputs the second target image into the first model set to calculate a second confidence value, which corresponds to the accuracy of the key point positioning, and outputs the first face key points if the second confidence value meets a second threshold.
    Type: Application
    Filed: December 17, 2018
    Publication date: May 9, 2019
    Inventors: Chengjie WANG, Jilin LI, Yandan ZHAO, Hui NI, Yabiao WANG, Ling ZHAO
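
A minimal Python sketch of the two-pass, confidence-gated flow described in the abstract above. The `model_set` stub, the thresholds, and the crop logic are hypothetical stand-ins, not the patented implementation:

```python
import numpy as np

# Hypothetical stand-in for the "first model set": one forward pass
# returns candidate key point positions and a confidence value.
def model_set(image_patch):
    keypoints = np.zeros((5, 2))                  # e.g. eyes, nose, mouth corners
    confidence = float(image_patch.mean() > 0.5)  # placeholder score
    return keypoints, confidence

def detect_keypoints(target_image, detection_region,
                     face_threshold=0.8, accuracy_threshold=0.9):
    # Pass 1: a single input of the detection-region crop yields both
    # candidate key points and an "is this a face?" confidence.
    patch = target_image[detection_region]        # detection_region: (slice, slice)
    keypoints, conf1 = model_set(patch)
    if conf1 < face_threshold:
        return None                               # not a face; stop early

    # Pass 2: re-crop around the candidate key points and score again;
    # the second confidence reflects positioning accuracy.
    x0, y0 = keypoints.min(axis=0).astype(int)
    x1, y1 = keypoints.max(axis=0).astype(int) + 1
    second_image = target_image[y0:y1, x0:x1]
    _, conf2 = model_set(second_image)
    return keypoints if conf2 >= accuracy_threshold else None
```
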
  • Publication number: 20190114467
    Abstract: A device receives an image-based authentication request from a specified object and performs human face authentication in a manner depending on whether the object wears glasses. Specifically, the device designates a glasses region on a daily photograph of the specified object using a glasses segmentation model. If the regions of the human face in the daily photograph labeled as glasses exceed a first threshold amount, the device modifies the daily photograph by changing pixel values of the regions that are labeled as being obscured by glasses. The device extracts features of a daily human face from the daily photograph and features of an identification human face from an identification photograph of the specified object. The device approves the authentication request if a matching degree between the features of the daily human face and the features of the identification human face is greater than a second threshold amount.
    Type: Application
    Filed: December 3, 2018
    Publication date: April 18, 2019
    Inventors: Yicong LIANG, Jilin LI, Chengjie WANG, Shouhong DING
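
A short sketch of the glasses-aware matching step, assuming the segmentation model has already produced a boolean glasses mask and that `extract_features` is some face feature extractor; cosine similarity as the "matching degree" is an assumption, not specified by the abstract:

```python
import numpy as np

def authenticate(daily_photo, id_photo, glasses_mask, extract_features,
                 area_threshold=0.05, match_threshold=0.6):
    """glasses_mask: boolean array, True where a segmentation model
    labeled pixels of the daily photograph as glasses."""
    photo = daily_photo.copy()
    # If enough of the face is labeled as glasses, neutralize those
    # pixels before extracting features (the abstract only says the
    # pixel values are changed; using the image mean is illustrative).
    if glasses_mask.mean() > area_threshold:
        photo[glasses_mask] = photo.mean()

    daily_feat = extract_features(photo)
    id_feat = extract_features(id_photo)
    # Matching degree as cosine similarity (one plausible choice).
    degree = float(daily_feat @ id_feat /
                   (np.linalg.norm(daily_feat) * np.linalg.norm(id_feat) + 1e-12))
    return degree > match_threshold
```
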
  • Patent number: 10068128
    Abstract: The present disclosure pertains to the field of image processing technologies and discloses a face key point positioning method and a terminal. The method includes: obtaining a face image; recognizing a face frame in the face image; determining positions of n key points of a target face in the face frame according to the face frame and a first positioning algorithm; performing screening to select, from candidate faces, a similar face whose positions of corresponding key points match the positions of the n key points of the target face; and determining positions of m key points of the similar face selected through screening according to a second positioning algorithm, m being a positive integer. In this way, the problem in the related technology that key point positions obtained by a terminal deviate considerably is resolved, improving the accuracy of the positioned key points.
    Type: Grant
    Filed: February 21, 2017
    Date of Patent: September 4, 2018
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventors: Chengjie Wang, Jilin Li, Feiyue Huang, Yongjian Wu
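
The coarse-to-fine scheme can be sketched as below; `coarse_locator` and `fine_locator` stand in for the first and second positioning algorithms, and nearest-candidate screening by mean point distance is an illustrative choice:

```python
import numpy as np

def position_keypoints(face_image, face_frame, coarse_locator,
                       candidate_faces, fine_locator):
    """candidate_faces: list of (face_id, keypoints) pairs, where
    keypoints is an (n, 2) array of that candidate's n key points."""
    # Step 1: coarse positions of n key points inside the face frame.
    n_points = coarse_locator(face_image, face_frame)

    # Step 2: screen candidates for the face whose corresponding key
    # points lie closest to the coarse result (mean L2 distance).
    def deviation(candidate):
        return np.linalg.norm(candidate[1] - n_points, axis=1).mean()
    similar_face = min(candidate_faces, key=deviation)

    # Step 3: the second algorithm determines m key points on the
    # similar face selected through screening.
    return fine_locator(similar_face, face_image)
```
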
  • Patent number: 10055879
    Abstract: A 3D human face reconstruction method and apparatus, and a server are provided. In some embodiments, the method includes determining feature points on an acquired 2D human face image; determining posture parameters of a human face according to the feature points, and adjusting a posture of a universal 3D human face model according to the posture parameters; determining points on the universal 3D human face model corresponding to the feature points, and adjusting the corresponding points that are in an occluded state to obtain a preliminary 3D human face model; and performing deformation adjustment on the preliminary 3D human face model, and performing texture mapping on the deformed 3D human face model to obtain a final 3D human face.
    Type: Grant
    Filed: July 17, 2017
    Date of Patent: August 21, 2018
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventors: Chengjie Wang, Jilin Li, Feiyue Huang, Lei Zhang
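
The posture-adjustment step amounts to rotating the universal model by the estimated pose angles. A sketch under the assumption that the posture parameters are yaw/pitch/roll in radians; the occlusion handling, deformation, and texture-mapping stages are not shown:

```python
import numpy as np

def adjust_model_pose(model_points, yaw, pitch, roll):
    """model_points: (N, 3) vertices of the universal 3D face model."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])   # yaw about y
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])   # pitch about x
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])   # roll about z
    return model_points @ (Rz @ Rx @ Ry).T
```
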
  • Publication number: 20180232570
    Abstract: Aspects of the disclosure provide a method for adding a target contact to a user's friend list in a social network. A target image of a human body part of the target contact can be received from a user terminal. A target biological feature can be extracted from the target image. Whether the target biological feature matches a reference biological feature of a plurality of prestored reference biological features can be determined. A social account associated with the determined reference biological feature that matches the target biological feature may be determined, and added to the user's friend list.
    Type: Application
    Filed: April 11, 2018
    Publication date: August 16, 2018
    Applicant: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Feiyue HUANG, Jilin LI, Chengjie WANG
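
A compact sketch of the matching step: given a biological feature extracted from the received image, scan the prestored reference features and return the associated social account. The database layout and cosine-similarity matching are assumptions:

```python
import numpy as np

def find_social_account(target_feature, reference_db, threshold=0.7):
    """reference_db: maps social account IDs to prestored reference
    feature vectors (e.g. face embeddings)."""
    best_account, best_score = None, threshold
    for account, ref in reference_db.items():
        score = float(target_feature @ ref /
                      (np.linalg.norm(target_feature) * np.linalg.norm(ref) + 1e-12))
        if score > best_score:
            best_account, best_score = account, score
    return best_account   # the caller adds this account to the friend list
```
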
  • Publication number: 20180225842
    Abstract: A method of determining a facial pose angle of a human face within an image is provided. After capturing a first image of the human face, respective coordinates of a predefined set of facial feature points of the human face in the first image are obtained. The predefined set of facial feature points includes an odd number of facial feature points, e.g., at least a first pair of symmetrical facial feature points, a second pair of symmetrical facial feature points, and a first single facial feature point. The predefined set of facial feature points are not coplanar. Next, one or more predefined key values are calculated based on the respective coordinates of the predefined set of facial feature points of the human face in the first image. Finally, a pre-established correspondence table is queried using the one or more predefined key values to determine the facial pose angle of the human face in the first image.
    Type: Application
    Filed: April 3, 2018
    Publication date: August 9, 2018
    Inventor: Chengjie WANG
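
A sketch of the table lookup. The abstract does not fix the key-value formula; scale-invariant distance ratios between each symmetric pair and the single point are one plausible choice, and the quantization is illustrative:

```python
import numpy as np

def pose_angle(feature_points, correspondence_table):
    """feature_points: five (x, y) coordinates - two symmetric pairs
    followed by the single point, per the abstract."""
    l1, r1, l2, r2, single = (np.asarray(p, float) for p in feature_points)
    d = np.linalg.norm
    # Ratio-style keys are invariant to image scale.
    key1 = d(l1 - single) / (d(r1 - single) + 1e-12)
    key2 = d(l2 - single) / (d(r2 - single) + 1e-12)
    # Quantize the keys and query the pre-established table.
    bucket = (round(key1, 1), round(key2, 1))
    return correspondence_table.get(bucket)   # e.g. (yaw, pitch) or None
```
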
  • Publication number: 20180225800
    Abstract: A method for identifying whether a standard picture contains a watermark is provided. After obtaining a set of sample standard pictures, one or more sample pictures in the set of sample standard pictures are adjusted to a preset size. The sample pictures in the set of sample standard pictures do not contain watermark information. Next, an average of pixel attribute values of the sample pictures at each pixel position of the preset size is calculated. The average of the pixel attribute values at the pixel positions of the preset size is normalized to obtain the watermark-presence probabilities of the pixel positions of the preset size. Then, a target picture is adjusted to the preset size, and a sum of products of the pixel attribute values of the target picture at the pixel positions of the preset size and the corresponding watermark-presence probabilities is calculated. Finally, it is determined whether the target picture contains a watermark according to the sum of products.
    Type: Application
    Filed: April 4, 2018
    Publication date: August 9, 2018
    Inventor: Chengjie WANG
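
The training and scoring arithmetic is simple enough to show directly. A sketch assuming grayscale pictures already resized to the preset size; the decision cutoff would be calibrated empirically:

```python
import numpy as np

def train_watermark_prior(sample_pictures):
    """sample_pictures: watermark-free grayscale arrays, all of the
    preset size. Returns per-position watermark-presence probabilities
    obtained by normalizing the per-position average pixel values."""
    stack = np.stack([p.astype(np.float64) for p in sample_pictures])
    mean = stack.mean(axis=0)                            # per-position average
    return (mean - mean.min()) / (np.ptp(mean) + 1e-12)  # normalized to [0, 1]

def watermark_score(target, prob):
    """Sum of products of the target's pixel values and the
    per-position probabilities."""
    return float((target.astype(np.float64) * prob).sum())

# has_watermark = watermark_score(resized_target, prob) > cutoff
```
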
  • Publication number: 20180204094
    Abstract: The present disclosure discloses an image recognition method and apparatus, and belongs to the field of computer technologies. The method includes: extracting a local binary pattern (LBP) feature vector of a target image; calculating a high-dimensional feature vector of the target image according to the LBP feature vector; obtaining a training matrix, the training matrix being obtained by training on images in an image library using a joint Bayesian algorithm; and recognizing the target image according to the high-dimensional feature vector of the target image and the training matrix. The image recognition method and apparatus according to the present disclosure may combine the LBP algorithm with a joint Bayesian algorithm to perform recognition, thereby improving the accuracy of image recognition.
    Type: Application
    Filed: March 19, 2018
    Publication date: July 19, 2018
    Inventors: Shouhong Ding, Jilin Li, Chengjie Wang, Feiyue Huang, Yongjian Wu, Guofu Tan
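
A sketch of the high-dimensional LBP feature extraction using scikit-image; concatenating per-cell histograms over a grid is the conventional way to lift LBP codes into a high-dimensional vector. The joint Bayesian training and scoring stage is omitted:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram_feature(gray_image, grid=(8, 8), P=8, R=1.0):
    codes = local_binary_pattern(gray_image, P, R, method="uniform")
    n_bins = P + 2                        # uniform patterns + "other"
    h, w = codes.shape
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            cell = codes[i * h // grid[0]:(i + 1) * h // grid[0],
                         j * w // grid[1]:(j + 1) * w // grid[1]]
            hist, _ = np.histogram(cell, bins=n_bins, range=(0, n_bins))
            feats.append(hist / (hist.sum() + 1e-12))   # per-cell normalization
    return np.concatenate(feats)          # grid[0] * grid[1] * n_bins dimensions
```
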
  • Publication number: 20180032828
    Abstract: A face liveness detection method includes outputting a prompt to complete one or more specified actions in sequence within a specified time period, obtaining a face video, detecting a reference face image frame in the face video using a face detection method, locating a facial keypoint in the reference face image frame, tracking the facial keypoint in one or more subsequent face image frames, determining a state parameter of one of the one or more specified actions using a continuity analysis method according to the facial keypoint, and determining whether the one of the one or more specified actions is completed according to a continuity of the state parameter.
    Type: Application
    Filed: October 9, 2017
    Publication date: February 1, 2018
    Inventors: Chengjie WANG, Jilin LI, Feiyue HUANG, Yongjian WU
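
The continuity analysis can be illustrated with a scalar state parameter per frame (say, a mouth-opening degree computed from the tracked keypoint). A crude sketch; the thresholds are arbitrary:

```python
import numpy as np

def action_completed(state_params, low=0.2, high=0.8, max_jump=0.3):
    """The action counts as completed only if the state swept from
    below `low` to above `high` without discontinuities larger than
    `max_jump` between consecutive frames."""
    s = np.asarray(state_params, dtype=float)
    if len(s) < 2:
        return False
    continuous = np.all(np.abs(np.diff(s)) <= max_jump)
    swept = s.min() <= low and s.max() >= high
    return bool(continuous and swept)

# action_completed([0.1, 0.3, 0.5, 0.7, 0.9])  -> True
# action_completed([0.1, 0.9])                 -> False (discontinuous jump)
```
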
  • Publication number: 20180018512
    Abstract: Method, apparatus, system, and storage medium for detecting an information card in an image are provided. The method includes performing line detection to obtain two endpoints of a line segment corresponding to each of the four sides of the information card; generating a linear equation for each side from its two endpoints; obtaining coordinates of the four intersection points of the four sides of the information card; mapping the coordinates of the four intersection points to the four corners of a rectangular box of the information card to obtain a perspective transformation matrix; performing perspective transformation on the image content enclosed by the four straight lines represented by the four linear equations to produce transformed image content; forming a gradient template according to the layout of information content on the information card; and matching the gradient template against the transformed image content to determine whether the image content is a correct information card.
    Type: Application
    Filed: September 26, 2017
    Publication date: January 18, 2018
    Inventors: Chengjie WANG, Hui NI, Jilin LI
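
The rectification step maps the four detected intersection points onto an upright rectangle. A sketch using OpenCV; the line detection and gradient-template matching stages are omitted, and the card size is an ID-card-like placeholder:

```python
import numpy as np
import cv2

def rectify_card(image, corners, card_size=(856, 540)):
    """corners: the four intersection points of the detected card
    sides, ordered top-left, top-right, bottom-right, bottom-left."""
    w, h = card_size
    src = np.asarray(corners, dtype=np.float32)
    dst = np.asarray([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]],
                     dtype=np.float32)
    matrix = cv2.getPerspectiveTransform(src, dst)   # 3x3 homography
    return cv2.warpPerspective(image, matrix, (w, h))
```
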
  • Publication number: 20180018503
    Abstract: Method, terminal, and storage medium for tracking a facial critical area are provided. The method includes accessing an image frame in a video file; obtaining coordinate frame data of a facial part in the image; determining initial coordinate frame data of a critical area in the facial part according to the coordinate frame data of the facial part; obtaining coordinate frame data of the critical area according to the initial coordinate frame data of the critical area in the facial part; accessing the next adjacent image frame in the video file; obtaining initial coordinate frame data of the critical area in the facial part for that next frame by using the coordinate frame data of the critical area in the current frame; and obtaining coordinate frame data of the critical area for the next frame according to its initial coordinate frame data.
    Type: Application
    Filed: September 26, 2017
    Publication date: January 18, 2018
    Inventor: Chengjie WANG
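
The key idea is the warm start: the critical-area coordinates found in one frame seed the estimate for the next, so full face detection is only needed on the first frame. A sketch with the detector, locator, and refiner as stand-in callables:

```python
def track_critical_area(frames, detect_face, locate_critical_area, refine):
    """frames: decoded video frames in order. Returns one critical-area
    coordinate frame per video frame."""
    face_box = detect_face(frames[0])
    area_box = locate_critical_area(frames[0], face_box)
    results = [area_box]
    for frame in frames[1:]:
        initial = area_box            # warm start from the previous frame
        area_box = refine(frame, initial)
        results.append(area_box)
    return results
```
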
  • Publication number: 20180005017
    Abstract: Face model matrix training method, apparatus, and storage medium are provided. The method includes: obtaining a face image library, the face image library including k groups of face images, and each group of face images including at least one face image of at least one person, k>2, and k being an integer; separately parsing each group of the k groups of face images, and calculating a first matrix and a second matrix according to parsing results, the first matrix being an intra-group covariance matrix of facial features of each group of face images, and the second matrix being an inter-group covariance matrix of facial features of the k groups of face images; and training face model matrices according to the first matrix and the second matrix.
    Type: Application
    Filed: September 13, 2017
    Publication date: January 4, 2018
    Inventors: Shouhong DING, Jilin LI, Chengjie WANG, Feiyue HUANG, Yongjian WU, Guofu TAN
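
The two matrices have direct NumPy expressions. A sketch assuming each group is an (n_i, d) array of facial feature vectors; the subsequent face-model training on these matrices is not shown:

```python
import numpy as np

def covariance_matrices(groups):
    """groups: list of k >= 2 arrays, each of shape (n_i, d)."""
    # First matrix: covariance of features about their own group mean,
    # pooled over all groups (intra-group).
    centered = np.concatenate([g - g.mean(axis=0) for g in groups])
    intra = centered.T @ centered / len(centered)

    # Second matrix: covariance of the group means about the overall
    # mean of the group means (inter-group).
    means = np.stack([g.mean(axis=0) for g in groups])
    m = means - means.mean(axis=0)
    inter = m.T @ m / len(m)
    return intra, inter
```
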
  • Publication number: 20170364738
    Abstract: This application discloses a method and a terminal for detecting glasses in a face image. The method includes: obtaining a face image; determining a nose bridge region in the face image; detecting an image change in the nose bridge region to obtain an image change result of the nose bridge region; and determining whether there are glasses in the face image according to the image change result of the nose bridge region. The terminal for detecting glasses in a face image matches the method.
    Type: Application
    Filed: September 5, 2017
    Publication date: December 21, 2017
    Inventors: Chengjie WANG, Guofu TAN, Hui NI
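
A sketch of the nose-bridge test: glasses frames crossing the bridge produce strong intensity changes in the vertical direction, so a high mean vertical gradient in that region is taken as evidence of glasses. The region bounds and threshold are illustrative:

```python
import numpy as np

def has_glasses(gray_face, bridge_box, edge_threshold=12.0):
    """bridge_box: (y0, y1, x0, x1) bounds of the nose bridge region,
    e.g. derived from detected eye positions."""
    y0, y1, x0, x1 = bridge_box
    region = gray_face[y0:y1, x0:x1].astype(np.float64)
    vertical_change = np.abs(np.diff(region, axis=0))   # row-to-row change
    return bool(vertical_change.mean() > edge_threshold)
```
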
  • Publication number: 20170344811
    Abstract: Embodiments of the present disclosure provide an image processing method and apparatus. The method includes detecting a human face region in each frame of an image in a to-be-processed video; locating a lip region in the human face region; extracting feature column pixels in the lip region from each frame of the image; building a lip change graph based on the feature column pixels; and recognizing a lip movement according to a pattern feature of the lip change graph.
    Type: Application
    Filed: August 18, 2017
    Publication date: November 30, 2017
    Inventors: Hui NI, Chengjie WANG
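
Building the lip change graph is a stacking operation: one pixel column through the lips per frame, laid side by side over time. A sketch with hypothetical region parameters:

```python
import numpy as np

def lip_change_graph(frames, lip_rows, column_x):
    """frames: grayscale video frames; lip_rows: (y0, y1) row bounds of
    the located lip region; column_x: x index of the feature column
    (e.g. the mouth's vertical midline)."""
    y0, y1 = lip_rows
    columns = [f[y0:y1, column_x] for f in frames]   # one column per frame
    return np.stack(columns, axis=1)   # rows: lip pixels, columns: time

# A simple pattern feature of the graph, e.g. temporal variance:
#   graph = lip_change_graph(frames, (120, 160), 96)
#   movement = graph.astype(float).var(axis=1).mean()
```
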
  • Publication number: 20170337420
    Abstract: Disclosed are an evaluation method and an evaluation device for a facial key point positioning result. In some embodiments, the evaluation method includes: acquiring a facial image and one or more positioning result coordinates of a key point of the facial image; performing a normalization process on the positioning result coordinates and an average facial model to obtain a normalized facial image; and extracting a facial feature value of the normalized facial image and calculating an evaluation result based on the facial feature value and a weight vector.
    Type: Application
    Filed: August 7, 2017
    Publication date: November 23, 2017
    Inventors: Chengjie WANG, Jilin LI, Feiyue HUANG, Kekai SHENG, Weiming DONG
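
The evaluation itself reduces to a weighted score of features extracted from the normalized image. A sketch with the normalization warp, feature extractor, and learned weight vector as stand-ins:

```python
import numpy as np

def evaluate_positioning(face_image, result_coords, mean_shape,
                         normalize, extract_feature, weights):
    """normalize: warps face_image so result_coords align with the
    average facial model mean_shape; extract_feature and weights come
    from an offline training stage not shown here."""
    normalized = normalize(face_image, result_coords, mean_shape)
    feature = extract_feature(normalized)
    return float(np.dot(feature, weights))   # the evaluation result
```
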
  • Publication number: 20170316598
    Abstract: A 3D human face reconstruction method and apparatus, and a server are provided. In some embodiments, the method includes determining feature points on an acquired 2D human face image; determining posture parameters of a human face according to the feature points, and adjusting a posture of a universal 3D human face model according to the posture parameters; determining points on the universal 3D human face model corresponding to the feature points, and adjusting the corresponding points that are in an occluded state to obtain a preliminary 3D human face model; and performing deformation adjustment on the preliminary 3D human face model, and performing texture mapping on the deformed 3D human face model to obtain a final 3D human face.
    Type: Application
    Filed: July 17, 2017
    Publication date: November 2, 2017
    Inventors: Chengjie Wang, Jilin Li, Feiyue Huang, Lei Zhang
  • Publication number: 20170308739
    Abstract: Embodiments of the present invention provide a human face recognition method and recognition system. In the method, a human face recognition request is acquired, and a statement is randomly generated according to the human face recognition request; audio data and video data returned by a user in response to the statement are acquired; corresponding voice information is acquired according to the audio data; corresponding lip movement information is acquired according to the video data; and when the lip movement information and the voice information satisfy a preset rule, the human face recognition request is permitted. By performing goodness-of-fit matching between the lip movement information and the voice information in a video for dynamic human face recognition, attacks that spoof face recognition with a photograph of a real person can be effectively prevented, and higher security is achieved.
    Type: Application
    Filed: July 7, 2017
    Publication date: October 26, 2017
    Inventors: Chengjie WANG, Jilin LI, Hui NI, Yongjian WU, Feiyue HUANG
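
One plausible goodness-of-fit test is to correlate per-frame lip movement magnitude with per-frame audio energy: a static photograph yields flat lip motion and therefore low correlation with live speech. A sketch, with the preset rule reduced to a correlation threshold:

```python
import numpy as np

def lip_voice_match(lip_motion, voice_energy, min_corr=0.5):
    """lip_motion: per-frame lip movement magnitudes from the video;
    voice_energy: audio energy resampled to the same frame rate."""
    a = np.asarray(lip_motion, float)
    b = np.asarray(voice_energy, float)
    if a.std() < 1e-9 or b.std() < 1e-9:
        return False                      # no movement or no speech
    corr = float(np.corrcoef(a, b)[0, 1])
    return corr >= min_corr
```
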
  • Publication number: 20170247945
    Abstract: A rotor hub for a wind turbine may generally include a hub body defining both a plurality of blade flanges and a plurality of access ports spaced apart from the blade flanges. In addition, the rotor hub may include a ladder assembly extending within an interior of the hub body. The ladder assembly may include a plurality of platforms, with each platform defining a planar surface and being circumferentially aligned with a respective one of the plurality of access ports. The ladder assembly may also include a connecting frame extending between each pair of adjacent platforms so as to couple the adjacent platforms to one another. The connecting frame may extend lengthwise along a reference line defined between the adjacent platforms. The platforms may be positioned relative to one another such that the reference line extends at a non-perpendicular angle relative to the planar surfaces defined by the adjacent platforms.
    Type: Application
    Filed: February 24, 2017
    Publication date: August 31, 2017
    Inventors: Chengjie Wang, Mohan Muthu Kumar Sivanantham, Vidya Sagar Meesala
  • Publication number: 20170193287
    Abstract: The present disclosure discloses a living body identification method, an information generation method, and a terminal, and belongs to the field of biometric feature recognition. The method includes: providing lip language prompt information, the lip language prompt information including at least two target characters, and the at least two target characters being at least one of: characters of a same lip shape, characters of opposite lip shapes, or characters whose lip shape similarity is in a preset range; collecting at least two frame pictures; detecting whether lip changes of a to-be-identified object in the at least two frame pictures meet a preset condition when the to-be-identified object reads the at least two target characters; and determining that the to-be-identified object is a living body if the preset condition is met.
    Type: Application
    Filed: March 17, 2017
    Publication date: July 6, 2017
    Applicant: Tencent Technology (Shenzhen) Company Limited
    Inventors: Jilin LI, Chengjie WANG, Feiyue HUANG, Yongjian WU, Hui NI, Ruixin ZHANG, Guofu TAN
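
The preset condition can be illustrated by comparing the measured mouth-opening profile against the profile the prompted characters should produce (high for open-lip characters, low for closed-lip ones). A simplified sketch; a real system would likely use a trained classifier:

```python
import numpy as np

def is_living_body(lip_openings, expected_profile, tol=0.25):
    """lip_openings: per-frame mouth-opening measurements while the
    subject reads the prompt; expected_profile: values in [0, 1]
    describing the opening the target characters should produce."""
    a = np.asarray(lip_openings, float)
    b = np.asarray(expected_profile, float)
    n = min(len(a), len(b))
    a, b = a[:n], b[:n]
    a = (a - a.min()) / (np.ptp(a) + 1e-12)   # normalize to [0, 1]
    return bool(np.abs(a - b).mean() < tol)
```
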
  • Publication number: 20170161551
    Abstract: The present disclosure pertains to the field of image processing technologies and discloses a face key point positioning method and a terminal. The method includes: obtaining a face image; recognizing a face frame in the face image; determining positions of n key points of a target face in the face frame according to the face frame and a first positioning algorithm; performing screening to select, from candidate faces, a similar face whose positions of corresponding key points match the positions of the n key points of the target face; and determining positions of m key points of the similar face selected through screening according to a second positioning algorithm, m being a positive integer. In this way, the problem in the related technology that key point positions obtained by a terminal deviate considerably is resolved, improving the accuracy of the positioned key points.
    Type: Application
    Filed: February 21, 2017
    Publication date: June 8, 2017
    Applicant: Tencent Technology (Shenzhen) Company Limited
    Inventors: Chengjie WANG, Jilin LI, Feiyue HUANG, Yongjian WU