Patents by Inventor Yandan Zhao
Yandan Zhao has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240249337
Abstract: Information recommendation systems usually involve a multi-task problem: predicting not only users' click-through rate (CTR) but also the post-click conversion rate (CVR). At the same time, multi-functional information systems commonly offer multiple services to users, such as a news feed, a search engine, and product suggestions, so the prediction/ranking model should operate in a multi-scene manner. In the present patent document, embodiments of a unified ranking model for such a multi-task and multi-scene problem are disclosed. The disclosed model explores independent and non-shared embeddings for each task and scene, which reduces the coupling between tasks and scenes. Therefore, new tasks or scenes may be added easily. Besides, a simplified network may be chosen beyond the embedding layer, which largely improves the ranking efficiency for various online services. Extensive offline and online experiments demonstrated the superiority of the model embodiments.
Type: Application
Filed: October 15, 2021
Publication date: July 25, 2024
Applicants: Baidu USA LLC, Baidu.com Times Technology (Beijing) Co., Ltd.
Inventors: Shulong TAN, Meifang LI, Weijie ZHAO, Yandan ZHENG, Xin PEI, Ping LI
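For readers unfamiliar with this kind of architecture, the sketch below illustrates the core idea of independent, non-shared embeddings per task and scene with a deliberately simple network beyond the embedding layer. It is a toy illustration under stated assumptions, not the patent's implementation; every class name, task/scene label, and dimension is invented for the example.

```python
import torch
import torch.nn as nn

class MultiTaskMultiSceneRanker(nn.Module):
    """Toy ranker: every (task, scene) pair owns its own, non-shared embedding table,
    so a new task or scene only adds a new table instead of touching shared weights."""

    def __init__(self, n_features, tasks=("ctr", "cvr"), scenes=("feed", "search"), dim=16):
        super().__init__()
        keys = [f"{t}_{s}" for t in tasks for s in scenes]
        self.tables = nn.ModuleDict({k: nn.EmbeddingBag(n_features, dim, mode="sum") for k in keys})
        # Deliberately simple network beyond the embedding layer: one linear scorer per pair.
        self.heads = nn.ModuleDict({k: nn.Linear(dim, 1) for k in keys})

    def forward(self, feature_ids, task, scene):
        key = f"{task}_{scene}"
        pooled = self.tables[key](feature_ids)          # (batch, dim) pooled embedding
        return torch.sigmoid(self.heads[key](pooled))   # predicted CTR or CVR in [0, 1]

# Toy usage: two examples, three hashed categorical feature ids each.
model = MultiTaskMultiSceneRanker(n_features=1000)
ids = torch.randint(0, 1000, (2, 3))
print(model(ids, task="ctr", scene="feed").shape)       # torch.Size([2, 1])
```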
-
Patent number: 11961242
Abstract: A target tracking method is provided for a computer device. The method includes determining a target candidate region of a current image frame; capturing a target candidate image matching the target candidate region from the current image frame; determining a target region of the current image frame according to an image feature of the target candidate image; determining motion prediction data of a next image frame relative to the current image frame by using a motion prediction model and according to the image feature of the target candidate image; and determining a target candidate region of the next image frame according to the target region and the motion prediction data.
Type: Grant
Filed: September 25, 2020
Date of Patent: April 16, 2024
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Yandan Zhao, Chengjie Wang, Weijian Cao, Yun Cao, Pan Cheng, Yuan Huang
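A minimal sketch of the claimed tracking loop follows: determine a candidate region, crop it, locate the target from the crop's feature, predict the motion to the next frame, and shift the candidate region accordingly. The callables are stand-ins for the learned models and are assumptions, not the patented components.

```python
import numpy as np

def track_sequence(frames, init_box, extract_feature, locate_target, predict_motion):
    """Illustrative loop over the claimed steps. `frames` is an iterable of H x W x C
    arrays; `init_box` is [x, y, w, h]; the three callables stand in for the learned
    feature extractor, target locator, and motion prediction model."""
    candidate = np.asarray(init_box, dtype=float)        # candidate region of current frame
    trajectory = []
    for frame in frames:
        x, y, w, h = candidate.astype(int)
        crop = frame[y:y + h, x:x + w]                   # capture the target candidate image
        feat = extract_feature(crop)                     # image feature of the candidate image
        target = np.asarray(locate_target(feat, candidate), dtype=float)  # target region
        dx, dy = predict_motion(feat)                    # motion of next frame vs. current frame
        candidate = target + np.array([dx, dy, 0.0, 0.0])  # candidate region of the next frame
        trajectory.append(target)
    return trajectory
```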
-
Publication number: 20230100427
Abstract: This application discloses a face image processing method performed by an electronic device. The method includes: acquiring a face image of a source face and a face template image of a template face; performing three-dimensional face modeling on the face image and the face template image to obtain a three-dimensional face image feature of the face image and a three-dimensional face template image feature of the face template image; fusing the three-dimensional face image feature and the three-dimensional face template image feature to obtain a three-dimensional fusion feature; performing face replacement feature extraction on the face image based on the face template image to obtain an initial face replacement feature; transforming the initial face replacement feature based on the three-dimensional fusion feature to obtain a target face replacement feature; and replacing the template face with the source face based on the target face replacement feature to obtain a target face image.
Type: Application
Filed: November 28, 2022
Publication date: March 30, 2023
Inventors: Keke HE, Junwei ZHU, Yandan ZHAO, Xu CHEN, Ying TAI, Chengjie WANG, Jilin LI, Feiyue HUANG
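The sketch below only traces the dataflow of the steps listed in the abstract (3-D modeling of both faces, fusion, replacement-feature extraction, transformation, and rendering); every function name is a hypothetical placeholder, not a module of the application.

```python
import numpy as np

def swap_face(face_img, template_img, fit_3d, extract_swap_feature, transform, render):
    """Dataflow-only sketch; each callable is a placeholder for a learned component.
    `fit_3d` is assumed to return a 1-D 3-D face feature vector for an image."""
    src_3d = fit_3d(face_img)                        # 3-D feature of the source face image
    tpl_3d = fit_3d(template_img)                    # 3-D feature of the face template image
    fused_3d = np.concatenate([src_3d, tpl_3d])      # fused 3-D feature (a simple concat here)
    init_feat = extract_swap_feature(face_img, template_img)  # initial face replacement feature
    target_feat = transform(init_feat, fused_3d)     # condition it on the fused 3-D feature
    return render(template_img, target_feat)         # template face replaced by the source face
```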
-
Publication number: 20230048906
Abstract: This application provides a method for reconstructing a three-dimensional model, a method for training a three-dimensional reconstruction model, an apparatus, a computer device, and a storage medium. The method for reconstructing a three-dimensional model includes: obtaining an image feature coefficient of an input image; respectively obtaining, according to the image feature coefficient, a global feature map and an initial local feature map based on a texture and a shape of the input image; performing edge smoothing on the initial local feature map, to obtain a target local feature map; respectively splicing the global feature map and the target local feature map based on the texture and the shape, to obtain a target texture image and a target shape image; and performing three-dimensional model reconstruction according to the target texture image and the target shape image, to obtain a target three-dimensional model.
Type: Application
Filed: October 28, 2022
Publication date: February 16, 2023
Applicant: Tencent Technology (Shenzhen) Company Limited
Inventors: Yandan ZHAO, Shuheng LIN, Xuan CAO, Yanhao GE, Chengjie WANG, Weijian CAO
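One plausible reading of the "edge smoothing + splicing" steps is to feather the border of the local feature map before pasting it into the global map, as sketched below. The feathering scheme and all parameter names are assumptions made only for illustration, not the application's method.

```python
import numpy as np

def blend_local_into_global(global_map, local_map, top, left, feather=4):
    """Feather the border of a local (H, W, C) feature map with a linear alpha ramp and
    paste it into the global map at (top, left). Purely illustrative interpretation."""
    h, w = local_map.shape[:2]
    row = np.minimum(np.arange(1, h + 1), np.arange(h, 0, -1)) / float(feather)
    col = np.minimum(np.arange(1, w + 1), np.arange(w, 0, -1)) / float(feather)
    alpha = np.clip(np.minimum.outer(row, col), 0.0, 1.0)[..., None]   # 0 at edges, 1 inside
    out = global_map.astype(float).copy()
    region = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = alpha * local_map + (1.0 - alpha) * region
    return out
```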
-
Patent number: 11200404
Abstract: This application relates to feature point positioning technologies. The technologies involve positioning a target area in a current image; determining an image feature difference between a target area in a reference image and the target area in the current image, the reference image being a frame of image that is processed before the current image and that includes the target area; determining a target feature point location of the target area in the reference image; determining a target feature point location difference between the target area in the reference image and the target area in the current image according to a feature point location difference determining model and the image feature difference; and positioning a target feature point in the target area in the current image according to the target feature point location of the target area in the reference image and the target feature point location difference.
Type: Grant
Filed: November 4, 2020
Date of Patent: December 14, 2021
Assignee: Tencent Technology (Shenzhen) Company Limited
Inventors: Yandan Zhao, Yichao Yan, Weijian Cao, Yun Cao, Yanhao Ge, Chengjie Wang, Jilin Li
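A compact sketch of the claimed pipeline follows: a learned model maps the image-feature difference between reference and current frames to a feature-point location difference, which is then added to the reference-frame point locations. The linear stand-in model in the usage lines is purely illustrative.

```python
import numpy as np

def position_feature_points(ref_points, ref_feature, cur_feature, diff_model):
    """Add the predicted location difference to the reference-frame points.
    `diff_model` stands in for the feature point location difference determining model."""
    feat_diff = cur_feature - ref_feature        # image feature difference
    point_diff = diff_model(feat_diff)           # (N, 2) predicted location difference
    return ref_points + point_diff               # feature point locations in the current image

# Toy usage with a random linear stand-in for the learned model (5 points, 10-d features).
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 10))
linear_model = lambda d: (d @ W).reshape(5, 2)
pts = position_feature_points(rng.normal(size=(5, 2)), rng.normal(size=10),
                              rng.normal(size=10), linear_model)
```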
-
Patent number: 11087476
Abstract: A trajectory tracking method is provided for a computer device. The method includes performing motion tracking on head images in a plurality of video frames, to obtain motion trajectories corresponding to the head images; acquiring face images corresponding to the head images in the video frames, to obtain face image sets corresponding to the head images; determining, from the face image sets corresponding to the head images, at least two face image sets having the same face images; and combining motion trajectories corresponding to the at least two face image sets having the same face images, to obtain a final motion trajectory of trajectory tracking.
Type: Grant
Filed: June 2, 2020
Date of Patent: August 10, 2021
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Changwei He, Chengjie Wang, Jilin Li, Yabiao Wang, Yandan Zhao, Yanhao Ge, Hui Ni, Yichao Xiong, Zhenye Gan, Yongjian Wu, Feiyue Huang
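The merging step can be pictured as grouping head trajectories whose face-image sets share at least one face identity, as in the sketch below. The data layout (lists of (frame, box) points and sets of face ids) is an assumption made only for illustration.

```python
def merge_trajectories(trajectories, face_sets):
    """Concatenate trajectories whose face-image sets share at least one face identity.
    `trajectories[i]` is a list of (frame_index, box) points; `face_sets[i]` is a set of
    face ids recognized along that head trajectory."""
    merged, used = [], set()
    for i in range(len(trajectories)):
        if i in used:
            continue
        group, faces = list(trajectories[i]), set(face_sets[i])
        for j in range(i + 1, len(trajectories)):
            if j not in used and faces & face_sets[j]:   # same face appears in both sets
                group.extend(trajectories[j])
                faces |= face_sets[j]
                used.add(j)
        merged.append(sorted(group, key=lambda p: p[0])) # order combined points by frame
    return merged
```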
-
Patent number: 10990803
Abstract: When a target image is captured, the device provides a portion of the target image within a target detection region to a preset first model set to calculate positions of face key points and a first confidence value. The face key points and the first confidence value are output by the first model set for a single input of the portion of the target image into the first model set. When the first confidence value meets a first threshold corresponding to whether the target image is a face image, the device obtains a second target image corresponding to the positions of the first face key points; the device inputs the second target image into the first model set to calculate a second confidence value, the second confidence value corresponding to the accuracy of key point positioning, and outputs the first key points if the second confidence value meets a second threshold.
Type: Grant
Filed: December 17, 2018
Date of Patent: April 27, 2021
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Chengjie Wang, Jilin Li, Yandan Zhao, Hui Ni, Yabiao Wang, Ling Zhao
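The two-pass confidence check reads roughly as sketched below: one model call on the detection-region crop returns key points and a first confidence; if that passes, a second call on the region around the predicted points re-scores them. Function names, thresholds, and the coordinate convention are assumptions for illustration.

```python
import numpy as np

def detect_keypoints(image, detect_region, model, t1=0.5, t2=0.8):
    """Two-pass check: `model(crop)` returns ((K, 2) key points in (x, y) crop coordinates,
    confidence); `detect_region(image)` returns the (x, y, w, h) target detection region."""
    x, y, w, h = detect_region(image)
    pts, conf1 = model(image[y:y + h, x:x + w])      # first pass: key points + confidence
    if conf1 < t1:                                   # below threshold 1: not a face image
        return None
    pts = pts + np.array([x, y])                     # back to full-image coordinates
    (px, py), (qx, qy) = pts.min(axis=0).astype(int), pts.max(axis=0).astype(int)
    _, conf2 = model(image[py:qy, px:qx])            # second pass on the key-point region
    return pts if conf2 >= t2 else None              # keep only well-localized key points
```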
-
Publication number: 20210049347
Abstract: This application relates to feature point positioning technologies. The technologies involve positioning a target area in a current image; determining an image feature difference between a target area in a reference image and the target area in the current image, the reference image being a frame of image that is processed before the current image and that includes the target area; determining a target feature point location of the target area in the reference image; determining a target feature point location difference between the target area in the reference image and the target area in the current image according to a feature point location difference determining model and the image feature difference; and positioning a target feature point in the target area in the current image according to the target feature point location of the target area in the reference image and the target feature point location difference.
Type: Application
Filed: November 4, 2020
Publication date: February 18, 2021
Applicant: Tencent Technology (Shenzhen) Company Limited
Inventors: Yandan ZHAO, Yichao YAN, Weijian CAO, Yun CAO, Yanhao GE, Chengjie WANG, Jilin LI
-
Patent number: 10909356
Abstract: A facial tracking method can include receiving a first vector of a first frame, and second vectors of second frames that are prior to the first frame in a video. The first vector is formed by coordinates of first facial feature points in the first frame and determined based on a facial registration method. Each second vector is formed by coordinates of second facial feature points in the respective second frame and previously determined based on the facial tracking method. A second vector of the first frame is determined according to a fitting function based on the second vectors of the first set of second frames. The fitting function has a set of coefficients that are determined by solving a problem of minimizing a function formulated based on a difference between the second vector and the first vector of the current frame, and a square sum of the coefficients.
Type: Grant
Filed: March 18, 2019
Date of Patent: February 2, 2021
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Yicong Liang, Chengjie Wang, Shaoxin Li, Yandan Zhao, Jilin Li
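If the fitting function is read as a linear combination of the previous frames' vectors, the described minimization (a data-fit term plus a square sum of the coefficients) amounts to ridge-regularized least squares: choose c minimizing ||V c - r||^2 + lam * ||c||^2, where the columns of V are the previous frames' tracked vectors and r is the current frame's registration vector. A sketch with the closed-form solution, under that assumption and an assumed fixed weight lam, follows.

```python
import numpy as np

def fit_tracked_vector(prev_vectors, registered_vector, lam=1.0):
    """Ridge fit: find c minimizing ||V c - r||^2 + lam * ||c||^2, where the columns of V
    are the previous frames' tracked vectors and r is the current frame's registration
    vector, then return V c as the smoothed tracked vector for the current frame."""
    V = np.stack(prev_vectors, axis=1)               # (2K, m): m previous-frame vectors
    r = np.asarray(registered_vector, dtype=float)   # (2K,) registration vector
    m = V.shape[1]
    c = np.linalg.solve(V.T @ V + lam * np.eye(m), V.T @ r)
    return V @ c
```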
-
Publication number: 20210012510
Abstract: A target tracking method is provided for a computer device. The method includes determining a target candidate region of a current image frame; capturing a target candidate image matching the target candidate region from the current image frame; determining a target region of the current image frame according to an image feature of the target candidate image; determining motion prediction data of a next image frame relative to the current image frame by using a motion prediction model and according to the image feature of the target candidate image; and determining a target candidate region of the next image frame according to the target region and the motion prediction data.
Type: Application
Filed: September 25, 2020
Publication date: January 14, 2021
Inventors: Yandan ZHAO, Chengjie WANG, Weijian CAO, Yun CAO, Pan CHENG, Yuan HUANG
-
Patent number: 10817708
Abstract: A facial tracking method is provided. The method includes: obtaining, from a video stream, an image that currently needs to be processed as a current image frame; and obtaining coordinates of facial key points in a previous image frame and a confidence level corresponding to the previous image frame. The method also includes calculating coordinates of facial key points in the current image frame according to the coordinates of the facial key points in the previous image frame when the confidence level is higher than a preset threshold; and performing multi-face recognition on the current image frame according to the coordinates of the facial key points in the current image frame. The method also includes calculating a confidence level of the coordinates of the facial key points in the current image frame, and returning to process a next frame until recognition on all image frames is completed.
Type: Grant
Filed: March 8, 2019
Date of Patent: October 27, 2020
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Chengjie Wang, Hui Ni, Yandan Zhao, Yabiao Wang, Shouhong Ding, Shaoxin Li, Ling Zhao, Jilin Li, Yongjian Wu, Feiyue Huang, Yicong Liang
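The per-frame control flow can be summarized as below: reuse the previous frame's key points when their confidence exceeds the threshold, otherwise re-register, then score the new points for the next iteration. All callables are placeholders for the detection, registration, and recognition components and are assumptions for illustration.

```python
def track_faces(frames, register, track_from_previous, score_confidence, threshold=0.8):
    """Per-frame loop: reuse the previous frame's facial key points when their confidence
    is above the threshold, otherwise re-register; then score the result for the next frame."""
    prev_pts, prev_conf, results = None, 0.0, []
    for frame in frames:
        if prev_pts is not None and prev_conf > threshold:
            pts = track_from_previous(frame, prev_pts)   # derive points from the previous frame
        else:
            pts = register(frame)                        # fresh facial registration
        prev_conf = score_confidence(frame, pts)         # confidence of the current coordinates
        prev_pts = pts
        results.append((pts, prev_conf))
    return results
```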
-
Publication number: 20200294250
Abstract: A trajectory tracking method is provided for a computer device. The method includes performing motion tracking on head images in a plurality of video frames, to obtain motion trajectories corresponding to the head images; acquiring face images corresponding to the head images in the video frames, to obtain face image sets corresponding to the head images; determining, from the face image sets corresponding to the head images, at least two face image sets having the same face images; and combining motion trajectories corresponding to the at least two face image sets having the same face images, to obtain a final motion trajectory of trajectory tracking.
Type: Application
Filed: June 2, 2020
Publication date: September 17, 2020
Inventors: Changwei HE, Chengjie WANG, Jilin LI, Yabiao WANG, Yandan ZHAO, Yanhao GE, Hui NI, Yichao XIONG, Zhenye GAN, Yongjian WU, Feiyue HUANG
-
Publication number: 20190251337
Abstract: A facial tracking method can include receiving a first vector of a first frame, and second vectors of second frames that are prior to the first frame in a video. The first vector is formed by coordinates of first facial feature points in the first frame and determined based on a facial registration method. Each second vector is formed by coordinates of second facial feature points in the respective second frame and previously determined based on the facial tracking method. A second vector of the first frame is determined according to a fitting function based on the second vectors of the first set of second frames. The fitting function has a set of coefficients that are determined by solving a problem of minimizing a function formulated based on a difference between the second vector and the first vector of the current frame, and a square sum of the coefficients.
Type: Application
Filed: March 18, 2019
Publication date: August 15, 2019
Applicant: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Yicong LIANG, Chengjie WANG, Shaoxin LI, Yandan ZHAO, Jilin LI
-
Publication number: 20190205623
Abstract: A facial tracking method is provided. The method includes: obtaining, from a video stream, an image that currently needs to be processed as a current image frame; and obtaining coordinates of facial key points in a previous image frame and a confidence level corresponding to the previous image frame. The method also includes calculating coordinates of facial key points in the current image frame according to the coordinates of the facial key points in the previous image frame when the confidence level is higher than a preset threshold; and performing multi-face recognition on the current image frame according to the coordinates of the facial key points in the current image frame. The method also includes calculating a confidence level of the coordinates of the facial key points in the current image frame, and returning to process a next frame until recognition on all image frames is completed.
Type: Application
Filed: March 8, 2019
Publication date: July 4, 2019
Inventors: Chengjie WANG, Hui NI, Yandan ZHAO, Yabiao WANG, Shouhong DING, Shaoxin LI, Ling ZHAO, Jilin LI, Yongjian WU, Feiyue HUANG, Yicong LIANG
-
Publication number: 20190138791
Abstract: When a target image is captured, the device provides a portion of the target image within a target detection region to a preset first model set to calculate positions of face key points and a first confidence value. The face key points and the first confidence value are output by the first model set for a single input of the portion of the target image into the first model set. When the first confidence value meets a first threshold corresponding to whether the target image is a face image, the device obtains a second target image corresponding to the positions of the first face key points; the device inputs the second target image into the first model set to calculate a second confidence value, the second confidence value corresponding to the accuracy of key point positioning, and outputs the first key points if the second confidence value meets a second threshold.
Type: Application
Filed: December 17, 2018
Publication date: May 9, 2019
Inventors: Chengjie WANG, Jilin LI, Yandan ZHAO, Hui NI, Yabiao WANG, Ling ZHAO