Patents by Inventor Chengjie Wang
Chengjie Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10487581
Abstract: A rotor hub for a wind turbine may generally include a hub body defining both a plurality of blade flanges and a plurality of access ports spaced apart from the blade flanges. In addition, the rotor hub may include a ladder assembly extending within an interior of the hub body. The ladder assembly may include a plurality of platforms, with each platform defining a planar surface and being circumferentially aligned with a respective one of the plurality of access ports. The ladder assembly may also include a connecting frame extending between each pair of adjacent platforms so as to couple the adjacent platforms to one another. The connecting frame may extend lengthwise along a reference line defined between the adjacent platforms. The platforms may be positioned relative to one another such that the reference line extends at a non-perpendicular angle relative to the planar surfaces defined by the adjacent platforms.
Type: Grant
Filed: February 24, 2017
Date of Patent: November 26, 2019
Assignee: General Electric Company
Inventors: Chengjie Wang, Mohan Muthu Kumar Sivanantham, Vidya Sagar Meesala
-
Publication number: 20190332847
Abstract: A face model matrix training method, apparatus, and storage medium are provided. The method includes: obtaining a face image library, the face image library including k groups of face images, and each group of face images including at least one face image of at least one person, k > 2, and k being an integer; separately parsing each group of the k groups of face images, and calculating a first matrix and a second matrix according to the parsing results, the first matrix being an intra-group covariance matrix of facial features of each group of face images, and the second matrix being an inter-group covariance matrix of facial features of the k groups of face images; and training face model matrices according to the first matrix and the second matrix.
Type: Application
Filed: July 11, 2019
Publication date: October 31, 2019
Inventors: Shouhong DING, Jilin LI, Chengjie WANG, Feiyue HUANG, Yongjian WU, Guofu TAN
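For illustration, a minimal NumPy sketch of the intra-group and inter-group covariance computation described in this abstract is given below; the group sizes, feature dimensionality, and all function and variable names are assumptions, and the actual training of the face model matrices is not reproduced.

```python
import numpy as np

def intra_inter_covariance(groups):
    """Hypothetical sketch: intra-group (within-group) and inter-group
    (between-group) covariance matrices for a list of groups, each an
    (n_i, d) array of facial feature vectors."""
    d = groups[0].shape[1]
    intra = np.zeros((d, d))
    group_means = []
    total = 0
    for g in groups:
        mean = g.mean(axis=0)
        centered = g - mean                  # deviations within the group
        intra += centered.T @ centered
        group_means.append(mean)
        total += len(g)
    intra /= total                           # first matrix: intra-group covariance
    means = np.array(group_means)
    centered_means = means - means.mean(axis=0)
    inter = centered_means.T @ centered_means / len(groups)  # second matrix: inter-group covariance
    return intra, inter

# Toy usage with k = 3 groups of random 8-dimensional features.
rng = np.random.default_rng(0)
groups = [rng.normal(size=(5, 8)) for _ in range(3)]
first_matrix, second_matrix = intra_inter_covariance(groups)
print(first_matrix.shape, second_matrix.shape)  # (8, 8) (8, 8)
```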
-
Patent number: 10452893
Abstract: A method, terminal, and storage medium for tracking a facial critical area are provided. The method includes accessing a frame of image in a video file; obtaining coordinate frame data of a facial part in the image; determining initial coordinate frame data of a critical area in the facial part according to the coordinate frame data of the facial part; obtaining coordinate frame data of the critical area according to the initial coordinate frame data of the critical area in the facial part; accessing an adjacent next frame of image in the video file; obtaining initial coordinate frame data of the critical area in the facial part for the adjacent next frame of image by using the coordinate frame data of the critical area in the current frame; and obtaining coordinate frame data of the critical area for the adjacent next frame of image according to the initial coordinate frame data thereof.
Type: Grant
Filed: September 26, 2017
Date of Patent: October 22, 2019
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventor: Chengjie Wang
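As a rough sketch of the frame-to-frame propagation this abstract describes, the Python outline below seeds each frame's critical-area estimate with the previous frame's result; `detect_face`, `refine_area`, and the centre-crop heuristic are placeholders, not the patented components.

```python
import numpy as np

def initial_area_from_face(face_box):
    """Hypothetical heuristic: take the central half of the face box as the
    initial critical area."""
    x, y, w, h = face_box
    return (x + w // 4, y + h // 4, w // 2, h // 2)

def track_critical_area(frames, detect_face, refine_area):
    """Placeholder outline: the critical-area result of one frame seeds the
    initial estimate for the adjacent next frame."""
    results = []
    prev_area = None
    for frame in frames:
        if prev_area is None:
            face_box = detect_face(frame)        # coordinate frame data of the facial part
            init_area = initial_area_from_face(face_box)
        else:
            init_area = prev_area                # previous frame's result seeds this frame
        area = refine_area(frame, init_area)     # refined coordinate frame data of the critical area
        results.append(area)
        prev_area = area
    return results

# Toy usage with dummy detection and refinement callables.
frames = [np.zeros((100, 100)) for _ in range(3)]
print(track_critical_area(frames,
                          detect_face=lambda f: (10, 10, 60, 60),
                          refine_area=lambda f, box: box))
```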
-
Patent number: 10438077
Abstract: A face liveness detection method includes outputting a prompt to complete one or more specified actions in sequence within a specified time period, obtaining a face video, detecting a reference face image frame in the face video using a face detection method, locating a facial keypoint in the reference face image frame, tracking the facial keypoint in one or more subsequent face image frames, determining a state parameter of one of the one or more specified actions using a continuity analysis method according to the facial keypoint, and determining whether the one of the one or more specified actions is completed according to a continuity of the state parameter.
Type: Grant
Filed: October 9, 2017
Date of Patent: October 8, 2019
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Chengjie Wang, Jilin Li, Feiyue Huang, Yongjian Wu
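A minimal sketch of the continuity check for one prompted action might look like the following; the choice of a mouth-opening ratio as the state parameter and both thresholds are assumptions for illustration.

```python
def action_completed(state_values, open_thresh=0.4, closed_thresh=0.15):
    """Continuity check for one prompted action (e.g. mouth opening): the
    state parameter must go closed -> open -> closed across the tracked
    frames. Thresholds are illustrative, not from the patent."""
    was_closed = False
    opened = False
    for v in state_values:
        if not opened and v < closed_thresh:
            was_closed = True
        if was_closed and v > open_thresh:
            opened = True
        if opened and v < closed_thresh:
            return True
    return False

print(action_completed([0.10, 0.12, 0.30, 0.52, 0.45, 0.20, 0.10]))  # True
print(action_completed([0.50, 0.50, 0.50]))                          # False: never started closed
```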
-
Patent number: 10410053
Abstract: A method for detecting an information card in an image is provided. The method includes performing line detection to obtain two endpoints of a line segment corresponding to each of the four sides of the information card; generating a linear equation for each side; obtaining coordinates of four intersection points of the four sides of the information card; mapping the coordinates of the four intersection points to four corners of a rectangular box of the information card to obtain a perspective transformation matrix; performing perspective transformation on image content encircled by the four straight lines represented by the four linear equations to provide transformed image content; forming a gradient template according to a layout of information content on the information card; and matching the gradient template against the transformed image content to determine whether the image content is a correct information card.
Type: Grant
Filed: September 26, 2017
Date of Patent: September 10, 2019
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Chengjie Wang, Hui Ni, Jilin Li
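The perspective-transformation step lends itself to a short OpenCV sketch; the corner ordering, output card size, and function name below are assumptions, and the line detection and gradient-template matching stages are not shown.

```python
import cv2
import numpy as np

def rectify_card(image, corners, out_w=856, out_h=540):
    """Map the four detected intersection points to the corners of a
    rectangular card and warp the enclosed content. Corners are assumed to be
    ordered (top-left, top-right, bottom-right, bottom-left)."""
    src = np.float32(corners)                       # 4 intersection points from the line detection
    dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
    M = cv2.getPerspectiveTransform(src, dst)       # perspective transformation matrix
    return cv2.warpPerspective(image, M, (out_w, out_h))

# Toy usage on a synthetic image.
img = np.zeros((480, 640, 3), dtype=np.uint8)
warped = rectify_card(img, [(100, 80), (540, 100), (520, 400), (90, 380)])
print(warped.shape)  # (540, 856, 3)
```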
-
Publication number: 20190266385
Abstract: Disclosed are an evaluation method and an evaluation device for a facial key point positioning result. In some embodiments, the evaluation method includes: acquiring a facial image and one or more positioning result coordinates of a key point of the facial image; performing a normalization process on the positioning result coordinates and an average facial model to obtain a normalized facial image; and extracting a facial feature value of the normalized facial image and calculating an evaluation result based on the facial feature value and a weight vector.
Type: Application
Filed: May 10, 2019
Publication date: August 29, 2019
Inventors: Chengjie WANG, Jilin LI, Feiyue HUANG, Kekai SHENG, Weiming DONG
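A rough sketch of the normalization and scoring steps, under the assumption of a simple translation-and-scale alignment and a plain weighted sum, could read as follows; all array sizes and names are illustrative.

```python
import numpy as np

def normalize_to_mean_shape(points, mean_shape):
    """Crude normalization sketch: translate and scale the positioning result
    so that its centroid and overall spread match the average facial model
    (rotation is ignored for brevity)."""
    p = points - points.mean(axis=0)
    m = mean_shape - mean_shape.mean(axis=0)
    scale = np.linalg.norm(m) / (np.linalg.norm(p) + 1e-12)
    return p * scale + mean_shape.mean(axis=0)

def evaluation_score(face_feature, weight_vector):
    """Evaluation result as a weighted combination of the facial feature
    extracted from the normalized image."""
    return float(np.dot(face_feature, weight_vector))

# Toy usage with random stand-ins for the keypoints, feature, and weights.
rng = np.random.default_rng(1)
points = rng.uniform(0, 200, size=(68, 2))      # hypothetical positioning result coordinates
mean_shape = rng.uniform(0, 100, size=(68, 2))  # hypothetical average facial model
normalized = normalize_to_mean_shape(points, mean_shape)
feature = rng.normal(size=128)                  # stand-in for the extracted facial feature value
weights = rng.normal(size=128)                  # stand-in for the learned weight vector
print(normalized.shape, evaluation_score(feature, weights))
```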
-
Patent number: 10395095
Abstract: A face model matrix training method, apparatus, and storage medium are provided. The method includes: obtaining a face image library, the face image library including k groups of face images, and each group of face images including at least one face image of at least one person, k > 2, and k being an integer; separately parsing each group of the k groups of face images, and calculating a first matrix and a second matrix according to the parsing results, the first matrix being an intra-group covariance matrix of facial features of each group of face images, and the second matrix being an inter-group covariance matrix of facial features of the k groups of face images; and training face model matrices according to the first matrix and the second matrix.
Type: Grant
Filed: September 13, 2017
Date of Patent: August 27, 2019
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Shouhong Ding, Jilin Li, Chengjie Wang, Feiyue Huang, Yongjian Wu, Guofu Tan
-
Patent number: 10395094
Abstract: This application discloses a method and a terminal for detecting glasses in a face image. The method includes: obtaining a face image; determining a nose bridge region in the face image; detecting an image change in the nose bridge region to obtain an image change result of the nose bridge region; and determining whether there are glasses in the face image according to the image change result of the nose bridge region. The terminal for detecting glasses in a face image matches the method.
Type: Grant
Filed: September 5, 2017
Date of Patent: August 27, 2019
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Chengjie Wang, Guofu Tan, Hui Ni
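One way to picture the image-change test is the sketch below, which uses the mean absolute row-to-row intensity change inside an assumed nose bridge box; the region coordinates and the threshold are not taken from the patent.

```python
import numpy as np

def glasses_in_nose_bridge(gray_face, bridge_box, change_thresh=12.0):
    """Illustrative check: measure the image change (mean absolute vertical
    gradient) inside the nose bridge region and compare it to a threshold."""
    x, y, w, h = bridge_box
    roi = gray_face[y:y + h, x:x + w].astype(float)
    vertical_change = np.abs(np.diff(roi, axis=0)).mean()   # row-to-row intensity change
    return vertical_change > change_thresh

# Toy usage on a random grayscale face crop with an assumed bridge box.
rng = np.random.default_rng(2)
face = rng.integers(0, 255, size=(200, 200)).astype(np.uint8)
print(glasses_in_nose_bridge(face, (90, 60, 20, 30)))
```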
-
Publication number: 20190251337
Abstract: A facial tracking method can include receiving a first vector of a first frame, and second vectors of second frames that are prior to the first frame in a video. The first vector is formed by coordinates of first facial feature points in the first frame and determined based on a facial registration method. Each second vector is formed by coordinates of second facial feature points in the respective second frame and previously determined based on the facial tracking method. A second vector of the first frame is determined according to a fitting function based on the second vectors of the first set of second frames. The fitting function has a set of coefficients that are determined by solving a problem of minimizing a function formulated based on a difference between the second vector and the first vector of the first frame, and a square sum of the coefficients.
Type: Application
Filed: March 18, 2019
Publication date: August 15, 2019
Applicant: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Yicong LIANG, Chengjie WANG, Shaoxin LI, Yandan ZHAO, Jilin LI
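The regularized fit described here resembles ridge regression; a minimal NumPy sketch under that reading follows, with the fitting form, lambda, and vector sizes as assumptions.

```python
import numpy as np

def fit_tracking_vector(prev_vectors, registration_vector, lam=1.0):
    """Find coefficients that combine the second vectors of the previous frames
    so the combination is close to the registration (first) vector of the
    current frame, with a penalty on the squared sum of the coefficients
    (closed-form ridge solution)."""
    A = np.column_stack(prev_vectors)          # each column: one previous frame's vector
    b = registration_vector
    k = A.shape[1]
    coeffs = np.linalg.solve(A.T @ A + lam * np.eye(k), A.T @ b)
    return A @ coeffs, coeffs                  # smoothed vector for the current frame, coefficients

# Toy usage with four previous frames of 10-dimensional keypoint vectors.
rng = np.random.default_rng(3)
prev = [rng.normal(size=10) for _ in range(4)]
first_vec = rng.normal(size=10)
smoothed, c = fit_tracking_vector(prev, first_vec)
print(c)
```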
-
Patent number: 10360441
Abstract: Embodiments of the present disclosure provide an image processing method and apparatus. The method includes detecting a human face region in each frame of an image in a to-be-processed video; locating a lip region in the human face region; extracting feature column pixels in the lip region from each frame of the image; building a lip change graph based on the feature column pixels; and recognizing a lip movement according to a pattern feature of the lip change graph.
Type: Grant
Filed: August 18, 2017
Date of Patent: July 23, 2019
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Hui Ni, Chengjie Wang
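A toy sketch of building the lip change graph: take one feature column (here, the centre column, an assumption) of the lip region in every frame and stack the columns over time.

```python
import numpy as np

def build_lip_change_graph(frames, lip_box):
    """Extract the centre column of pixels in the lip region from every frame
    and stack the columns side by side into a 2D image whose horizontal axis
    is time."""
    x, y, w, h = lip_box
    cols = []
    for frame in frames:
        roi = frame[y:y + h, x:x + w]
        cols.append(roi[:, w // 2])            # feature column pixels for this frame
    return np.stack(cols, axis=1)              # shape: (lip_height, num_frames)

# Toy usage: 30 random frames and an assumed lip box (x, y, w, h).
rng = np.random.default_rng(4)
frames = [rng.integers(0, 255, size=(120, 160)).astype(np.uint8) for _ in range(30)]
graph = build_lip_change_graph(frames, (60, 80, 40, 20))
print(graph.shape)  # (20, 30)
```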
-
Publication number: 20190205623
Abstract: A facial tracking method is provided. The method includes: obtaining, from a video stream, an image that currently needs to be processed as a current image frame; and obtaining coordinates of facial key points in a previous image frame and a confidence level corresponding to the previous image frame. The method also includes calculating coordinates of facial key points in the current image frame according to the coordinates of the facial key points in the previous image frame when the confidence level is higher than a preset threshold; and performing multi-face recognition on the current image frame according to the coordinates of the facial key points in the current image frame. The method also includes calculating a confidence level of the coordinates of the facial key points in the current image frame, and returning to process a next frame until recognition on all image frames is completed.
Type: Application
Filed: March 8, 2019
Publication date: July 4, 2019
Inventors: Chengjie WANG, Hui NI, Yandan ZHAO, Yabiao WANG, Shouhong DING, Shaoxin LI, Ling ZHAO, Jilin LI, Yongjian WU, Feiyue HUANG, Yicong LIANG
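Only the control flow of the confidence-gated loop is sketched below; the detection, refinement, and confidence models are placeholder callables, and the threshold is illustrative.

```python
def track_faces(frames, detect_keypoints, refine_from_previous, estimate_confidence,
                threshold=0.8):
    """Reuse the previous frame's facial key points while the confidence stays
    above the threshold, otherwise fall back to full detection; then score the
    new key points and move to the next frame."""
    prev_points, prev_conf = None, 0.0
    results = []
    for frame in frames:
        if prev_points is not None and prev_conf > threshold:
            points = refine_from_previous(frame, prev_points)
        else:
            points = detect_keypoints(frame)
        conf = estimate_confidence(frame, points)
        results.append((points, conf))
        prev_points, prev_conf = points, conf
    return results

# Toy usage with placeholder callables.
out = track_faces(range(5),
                  detect_keypoints=lambda f: [(0, 0), (1, 1)],
                  refine_from_previous=lambda f, pts: pts,
                  estimate_confidence=lambda f, pts: 0.9)
print(out[-1])
```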
-
Patent number: 10331940
Abstract: Disclosed are an evaluation method and an evaluation device for a facial key point positioning result. In some embodiments, the evaluation method includes: acquiring a facial image and one or more positioning result coordinates of a key point of the facial image; performing a normalization process on the positioning result coordinates and an average facial model to obtain a normalized facial image; and extracting a facial feature value of the normalized facial image and calculating an evaluation result based on the facial feature value and a weight vector.
Type: Grant
Filed: August 7, 2017
Date of Patent: June 25, 2019
Assignee: Tencent Technology (Shenzhen) Company Limited
Inventors: Chengjie Wang, Jilin Li, Feiyue Huang, Kekai Sheng, Weiming Dong
-
Publication number: 20190138791
Abstract: When a target image is captured, the device provides a portion of the target image within a target detection region to a preset first model set to calculate positions of first face key points and a first confidence value. The first face key points and the first confidence value are output by the first model set for a single input of the portion of the target image into the first model set. When the first confidence value meets a first threshold corresponding to whether the target image is a face image, the device obtains a second target image corresponding to the positions of the first face key points; the device inputs the second target image into the first model set to calculate a second confidence value, where the second confidence value corresponds to the accuracy of the key point positioning, and outputs the first face key points if the second confidence value meets a second threshold.
Type: Application
Filed: December 17, 2018
Publication date: May 9, 2019
Inventors: Chengjie WANG, Jilin Li, Yandan Zhao, Hui Ni, Yabiao Wang, Ling Zhao
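A compact sketch of the two-stage confidence check, with `model_set` as a placeholder callable returning (key points, confidence) and both thresholds chosen arbitrarily, might look like this.

```python
import numpy as np

def crop_around(image, keypoints, margin=10):
    """Hypothetical helper: crop the image around the key points' bounding box."""
    pts = np.asarray(keypoints)
    x0, y0 = np.maximum(pts.min(axis=0).astype(int) - margin, 0)
    x1, y1 = pts.max(axis=0).astype(int) + margin
    return image[y0:y1, x0:x1]

def locate_keypoints(region, model_set, face_threshold=0.5, accuracy_threshold=0.9):
    """Two-stage check: the first pass decides whether the region is a face at
    all; the second pass, on a patch around the located points, scores how
    accurate the positioning is."""
    keypoints, conf1 = model_set(region)
    if conf1 < face_threshold:
        return None                              # unlikely to be a face image
    refined_patch = crop_around(region, keypoints)
    _, conf2 = model_set(refined_patch)          # second confidence value
    return keypoints if conf2 >= accuracy_threshold else None

# Toy usage with a dummy model set.
rng = np.random.default_rng(7)
image = rng.integers(0, 255, size=(64, 64)).astype(np.uint8)
dummy_model = lambda patch: (np.array([[20.0, 20.0], [40.0, 40.0]]), 0.95)
print(locate_keypoints(image, dummy_model))
```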
-
Publication number: 20190114467
Abstract: A device receives an image-based authentication request from a specified object and performs human face authentication in a manner depending on whether the object wears glasses. Specifically, the device designates a glasses region on a daily photograph of the specified object using a glasses segmentation model. If the regions of the human face in the daily photograph labeled as glasses exceed a first threshold amount, the device modifies the daily photograph by changing pixel values of the regions that are labeled as being obscured by glasses. The device extracts features of a daily human face from the daily photograph and features of an identification human face from the identification photograph. The device approves the authentication request if a matching degree between the features of the daily human face and the features of the identification human face is greater than a second threshold amount.
Type: Application
Filed: December 3, 2018
Publication date: April 18, 2019
Inventors: Yicong LIANG, Jilin LI, Chengjie WANG, Shouhong DING
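The overall flow could be sketched as below; the glasses-area and matching thresholds, the zeroing of masked pixels, the cosine matching degree, and the feature extractor are all assumptions.

```python
import numpy as np

def authenticate(daily_photo, glasses_mask, id_features, extract_features,
                 glasses_area_threshold=0.05, match_threshold=0.6):
    """Blank out pixels labelled as glasses when they cover enough of the face,
    extract features from the daily photograph, and approve the request when
    the matching degree with the identification features is high enough."""
    if glasses_mask.mean() > glasses_area_threshold:
        daily_photo = daily_photo.copy()
        daily_photo[glasses_mask] = 0            # change pixel values of obscured regions
    daily_features = extract_features(daily_photo)
    matching_degree = float(np.dot(daily_features, id_features) / (
        np.linalg.norm(daily_features) * np.linalg.norm(id_features) + 1e-12))
    return matching_degree > match_threshold

# Toy usage with random stand-ins for the photograph, mask, and extractor.
rng = np.random.default_rng(8)
photo = rng.integers(0, 255, size=(112, 112)).astype(np.uint8)
mask = np.zeros((112, 112), dtype=bool)
mask[40:55, 20:92] = True                        # pretend segmentation labelled glasses here
id_feat = rng.normal(size=64)
print(authenticate(photo, mask, id_feat, extract_features=lambda img: rng.normal(size=64)))
```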
-
Patent number: 10068128
Abstract: The present disclosure pertains to the field of image processing technologies and discloses a face key point positioning method and a terminal. The method includes: obtaining a face image; recognizing a face frame in the face image; determining positions of n key points of a target face in the face frame according to the face frame and a first positioning algorithm; performing screening to select, from candidate faces, a similar face whose positions of corresponding key points match the positions of the n key points of the target face; and determining positions of m key points of the similar face selected through screening according to a second positioning algorithm, m being a positive integer. In this way, the problem in the related technologies that key point positions obtained by a terminal have relatively large deviations is resolved, thereby improving the positioning accuracy of the key points.
Type: Grant
Filed: February 21, 2017
Date of Patent: September 4, 2018
Assignee: Tencent Technology (Shenzhen) Company Limited
Inventors: Chengjie Wang, Jilin Li, Feiyue Huang, Yongjian Wu
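The screening step can be illustrated with a nearest-neighbour sketch; the mean Euclidean distance metric and the array shapes are assumptions.

```python
import numpy as np

def select_similar_face(target_n_points, candidate_n_points, candidate_m_points):
    """Pick the candidate face whose n key point positions are closest (mean
    Euclidean distance) to the target's, and return that candidate's m key
    points for the second positioning algorithm."""
    target = np.asarray(target_n_points)
    dists = [np.linalg.norm(np.asarray(c) - target, axis=1).mean()
             for c in candidate_n_points]
    best = int(np.argmin(dists))
    return candidate_m_points[best]

# Toy usage: n = 5 coarse points, m = 68 fine points, 4 candidate faces.
rng = np.random.default_rng(9)
target = rng.uniform(0, 100, size=(5, 2))
cands_n = [rng.uniform(0, 100, size=(5, 2)) for _ in range(4)]
cands_m = [rng.uniform(0, 100, size=(68, 2)) for _ in range(4)]
print(select_similar_face(target, cands_n, cands_m).shape)  # (68, 2)
```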
-
Patent number: 10055879
Abstract: A 3D human face reconstruction method and apparatus, and a server are provided. In some embodiments, the method includes determining feature points on an acquired 2D human face image; determining posture parameters of a human face according to the feature points, and adjusting a posture of a universal 3D human face model according to the posture parameters; determining points on the universal 3D human face model corresponding to the feature points, and adjusting the corresponding points that are in an occluded state to obtain a preliminary 3D human face model; and performing deformation adjustment on the preliminary 3D human face model, and performing texture mapping on the deformed 3D human face model to obtain a final 3D human face.
Type: Grant
Filed: July 17, 2017
Date of Patent: August 21, 2018
Assignee: Tencent Technology (Shenzhen) Company Limited
Inventors: Chengjie Wang, Jilin Li, Feiyue Huang, Lei Zhang
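The posture-parameter step is often approached with a perspective-n-point solve; the sketch below uses OpenCV's solvePnP with a generic six-point reference model and a pinhole camera guess, none of which is the patent's universal 3D model or method.

```python
import cv2
import numpy as np

# Generic 3D reference positions (arbitrary units) for six facial feature
# points; a commonly used illustrative layout, not the patent's universal model.
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0),           # nose tip
    (0.0, -330.0, -65.0),      # chin
    (-225.0, 170.0, -135.0),   # left eye outer corner
    (225.0, 170.0, -135.0),    # right eye outer corner
    (-150.0, -150.0, -125.0),  # left mouth corner
    (150.0, -150.0, -125.0),   # right mouth corner
], dtype=np.float64)

def estimate_pose(image_points, image_size):
    """Recover rotation and translation (posture parameters) of the face from
    2D feature points, assuming a simple pinhole camera."""
    h, w = image_size
    camera = np.array([[w, 0, w / 2],
                       [0, w, h / 2],
                       [0, 0, 1]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS,
                                  np.asarray(image_points, dtype=np.float64),
                                  camera, None)
    return rvec, tvec

# Toy usage with hand-picked 2D points for a roughly frontal face in a 640x480 image.
pts_2d = [(320, 240), (325, 390), (220, 170), (420, 165), (260, 320), (385, 318)]
rvec, tvec = estimate_pose(pts_2d, image_size=(480, 640))
print(rvec.ravel(), tvec.ravel())
```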
-
Publication number: 20180232570
Abstract: Aspects of the disclosure provide a method for adding a target contact to a user's friend list in a social network. A target image of a human body part of the target contact can be received from a user terminal. A target biological feature can be extracted from the target image. Whether the target biological feature matches a reference biological feature of a plurality of prestored reference biological features can be determined. A social account associated with the determined reference biological feature that matches the target biological feature may be determined, and added to the user's friend list.
Type: Application
Filed: April 11, 2018
Publication date: August 16, 2018
Applicant: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Feiyue HUANG, Jilin LI, Chengjie WANG
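A sketch of the matching step, assuming cosine similarity over precomputed feature vectors and an arbitrary threshold, is shown below; account names and dimensions are made up.

```python
import numpy as np

def find_matching_account(target_feature, reference_features, threshold=0.7):
    """Compare the target biological feature with each prestored reference
    feature (cosine similarity) and return the associated social account when
    the best similarity clears the threshold."""
    best_account, best_sim = None, threshold
    t = target_feature / (np.linalg.norm(target_feature) + 1e-12)
    for account, ref in reference_features.items():
        r = ref / (np.linalg.norm(ref) + 1e-12)
        sim = float(t @ r)
        if sim > best_sim:
            best_account, best_sim = account, sim
    return best_account

# Toy usage: the target is a slightly perturbed copy of one reference feature.
rng = np.random.default_rng(10)
refs = {"alice": rng.normal(size=64), "bob": rng.normal(size=64)}
target = refs["alice"] + 0.05 * rng.normal(size=64)
print(find_matching_account(target, refs))  # expected: alice
```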
-
Publication number: 20180225842
Abstract: A method of determining a facial pose angle of a human face within an image is provided. After capturing a first image of the human face, respective coordinates of a predefined set of facial feature points of the human face in the first image are obtained. The predefined set of facial feature points includes an odd number of facial feature points, e.g., at least a first pair of symmetrical facial feature points, a second pair of symmetrical facial feature points, and a first single facial feature point. The predefined set of facial feature points are not coplanar. Next, one or more predefined key values are calculated based on the respective coordinates of the predefined set of facial feature points of the human face in the first image. Finally, a pre-established correspondence table is queried using the one or more predefined key values to determine the facial pose angle of the human face in the first image.
Type: Application
Filed: April 3, 2018
Publication date: August 9, 2018
Inventor: Chengjie WANG
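The table query can be pictured with a one-key simplification; the key value definition (a left/right distance ratio), the table contents, and the nearest-key lookup are all illustrative assumptions.

```python
import numpy as np

def key_value_from_points(left_eye, right_eye, nose):
    """Hypothetical key value: ratio of the distances from the nose point to
    the two eye corners, which changes monotonically with yaw."""
    d_left = np.linalg.norm(np.subtract(nose, left_eye))
    d_right = np.linalg.norm(np.subtract(nose, right_eye))
    return d_left / (d_right + 1e-12)

def lookup_pose_angle(key_value, table):
    """Query a pre-established correspondence table: return the pose angle of
    the entry whose key value is nearest to the measured one."""
    keys = np.array(list(table.keys()))
    nearest = keys[np.argmin(np.abs(keys - key_value))]
    return table[float(nearest)]

# Toy table (key value -> yaw in degrees) and toy feature point coordinates.
table = {0.5: -30.0, 0.8: -10.0, 1.0: 0.0, 1.25: 10.0, 2.0: 30.0}
kv = key_value_from_points((30, 40), (70, 40), (55, 60))
print(lookup_pose_angle(kv, table))  # nearest key 1.25 -> 10.0
```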
-
Publication number: 20180225800
Abstract: A method for identifying whether a standard picture contains a watermark is provided. After obtaining a set of sample standard pictures, one or more sample pictures in the set of sample standard pictures are adjusted to a preset size. The sample pictures in the set of sample standard pictures do not contain watermark information. Next, an average of the pixel attribute values of the sample pictures at the pixel positions of the preset size is calculated. The average of the pixel attribute values at the pixel positions of the preset size is normalized to obtain the watermark-presence probabilities of the pixel positions of the preset size. Then, a target picture is adjusted to the preset size, and a sum of products of the pixel attribute values of the target picture at the pixel positions of the preset size and the corresponding watermark-presence probabilities is calculated. Finally, it is determined whether the target picture contains a watermark according to the sum of products.
Type: Application
Filed: April 4, 2018
Publication date: August 9, 2018
Inventor: Chengjie WANG
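The probability map and the sum of products translate directly into a short NumPy sketch; treating grayscale intensity as the pixel attribute value is an assumption.

```python
import numpy as np

def watermark_probability_map(samples):
    """Average the watermark-free sample pictures per pixel position and
    normalise the averages to sum to 1, giving watermark-presence
    probabilities for the pixel positions of the preset size."""
    avg = np.mean(np.stack(samples, axis=0), axis=0)
    return avg / avg.sum()

def watermark_score(target, prob_map):
    """Sum of products of the target picture's pixel values and the
    corresponding probabilities; the caller compares this against a threshold."""
    return float((target * prob_map).sum())

# Toy usage with random pictures already resized to the preset size (64x64).
rng = np.random.default_rng(5)
samples = [rng.integers(0, 255, size=(64, 64)).astype(float) for _ in range(10)]
prob = watermark_probability_map(samples)
target = rng.integers(0, 255, size=(64, 64)).astype(float)
print(watermark_score(target, prob))
```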
-
Publication number: 20180204094
Abstract: The present disclosure discloses an image recognition method and apparatus, and belongs to the field of computer technologies. The method includes: extracting a local binary pattern (LBP) feature vector of a target image; calculating a high-dimensional feature vector of the target image according to the LBP feature vector; obtaining a training matrix, the training matrix being a matrix obtained by training on images in an image library by using a joint Bayesian algorithm; and recognizing the target image according to the high-dimensional feature vector of the target image and the training matrix. The image recognition method and apparatus according to the present disclosure may combine the LBP algorithm with a joint Bayesian algorithm to perform recognition, thereby improving the accuracy of image recognition.
Type: Application
Filed: March 19, 2018
Publication date: July 19, 2018
Inventors: Shouhong Ding, Jilin Li, Chengjie Wang, Feiyue Huang, Yongjian Wu, Guofu Tan
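The LBP feature step can be sketched with scikit-image; using one global uniform-LBP histogram (rather than concatenated per-block histograms) is a simplification, and the joint Bayesian scoring with the training matrix is not shown.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_image, points=8, radius=1):
    """Compute a uniform-LBP code image and summarise it as a normalised
    histogram; a face recogniser would typically concatenate such histograms
    over many facial blocks to build the high-dimensional feature vector."""
    codes = local_binary_pattern(gray_image, points, radius, method="uniform")
    n_bins = points + 2                          # uniform patterns plus the "non-uniform" bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    return hist / (hist.sum() + 1e-12)

# Toy usage on a random grayscale patch.
rng = np.random.default_rng(6)
patch = rng.integers(0, 255, size=(96, 96)).astype(np.uint8)
print(lbp_histogram(patch).round(3))
```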