Patents by Inventor Yibing Song

Yibing Song has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11961325
    Abstract: An image processing method and apparatus, a computer-readable medium, and an electronic device are provided. The image processing method includes: respectively projecting, according to a plurality of view angle parameters corresponding to a plurality of view angles, a face model of a target object onto a plurality of face images of the target object acquired from the plurality of view angles, to determine correspondences between regions on the face model and regions on the face images; respectively extracting, based on the correspondences and a target region in the face model for which a texture image is to be generated, images corresponding to the target region from the plurality of face images; and fusing the images that correspond to the target region and that are respectively extracted from the plurality of face images, to generate the texture image.
    Type: Grant
    Filed: February 24, 2021
    Date of Patent: April 16, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xiangkai Lin, Linchao Bao, Yonggen Ling, Yibing Song, Wei Liu
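The multi-view fusion step above can be sketched in miniature: pixels for the same texture region, extracted from several views, are blended by per-view weights. This is a minimal illustration, not the patented method; the region values and the weighting scheme are assumptions for demonstration.

```python
# Toy sketch of fusing one texture region extracted from multiple views.
# The per-view blend weights (e.g. derived from view-angle confidence)
# and pixel values are illustrative assumptions.

def fuse_region(views, weights):
    """Blend the same texture region extracted from several view angles.

    views   -- list of equal-length pixel lists, one per view
    weights -- per-view blend weights
    """
    total = sum(weights)
    norm = [w / total for w in weights]
    fused = []
    for pixels in zip(*views):  # iterate pixel positions across views
        fused.append(sum(w * p for w, p in zip(norm, pixels)))
    return fused

# Example: a 4-pixel "cheek" region seen from a front and a side view.
front = [200, 210, 220, 230]
side = [180, 190, 200, 210]
texture = fuse_region([front, side], weights=[0.75, 0.25])
```

In practice the weights would come from the view-angle parameters, so views that see a region more frontally contribute more to its texture.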
  • Patent number: 11954574
    Abstract: A neural processor. In some embodiments, the processor includes a first tile, a second tile, a memory, and a bus. The bus may be connected to the memory, the first tile, and the second tile. The first tile may include: a first weight register, a second weight register, an activations buffer, a first multiplier, and a second multiplier. The activations buffer may be configured to include: a first queue connected to the first multiplier and a second queue connected to the second multiplier. The first queue may include a first register and a second register adjacent to the first register, the first register being an output register of the first queue. The first tile may be configured: in a first state: to multiply, in the first multiplier, a first weight by an activation from the output register of the first queue, and in a second state: to multiply, in the first multiplier, the first weight by an activation from the second register of the first queue.
    Type: Grant
    Filed: June 19, 2019
    Date of Patent: April 9, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Ilia Ovsiannikov, Ali Shafiee Ardestani, Joseph H. Hassoun, Lei Wang, Sehwan Lee, JoonHo Song, Jun-Woo Jang, Yibing Michelle Wang, Yuecheng Li
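The two multiplier states described in the abstract can be modeled with a small simulation: in the first state the multiplier reads the queue's output register, and in the second it reads the adjacent register (for example, to look past a zero activation). The class, state numbering, and zero-skipping interpretation are illustrative assumptions, not the patent claims.

```python
# Minimal sketch of the two-state multiplier feed: state 1 reads the
# queue's output register, state 2 reads the adjacent (second) register.

class TileQueue:
    def __init__(self, activations):
        self.regs = list(activations)  # regs[0] is the output register

    def read(self, state):
        # state 1 -> output register; state 2 -> adjacent register
        return self.regs[0] if state == 1 else self.regs[1]

def multiply(weight, queue, state):
    return weight * queue.read(state)

q = TileQueue([0, 5, 7])           # output register holds a zero activation
normal = multiply(3, q, state=1)   # weight * output register
skipped = multiply(3, q, state=2)  # weight * adjacent register
```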
  • Patent number: 11923379
    Abstract: Provided is a method for preparing a display substrate. The method includes: providing a substrate, the substrate including a plurality of pixel island regions spaced apart and a plurality of bridge regions connecting adjacent pixel island regions; forming thin film transistors and first signal lines in the pixel island regions, and forming first connecting bridges in the bridge regions; and forming second signal lines, second connecting bridges, and a source/drain layer on the substrate by a one-time patterning process.
    Type: Grant
    Filed: April 4, 2023
    Date of Patent: March 5, 2024
    Assignee: BOE Technology Group Co., Ltd.
    Inventors: Caiyu Qu, Fangxu Cao, Yanjun Hao, Huijuan Zhang, Yibing Fan, Zunqing Song, Dengyun Chen
  • Patent number: 11704817
    Abstract: This application discloses a method for training a model, performed at a computing device. The method includes: acquiring a template image and a test image; invoking a first object recognition model to process a feature of a tracked object in the template image to obtain a first reference response, and a second object recognition model to process the feature in the template image to obtain a second reference response; invoking the first model to process a feature of a tracked object in the test image to obtain a first test response, and the second model to process the feature to obtain a second test response; tracking the first test response to obtain a tracking response of the tracked object; and updating the first object recognition model based on the differences between the first and second reference responses, between the first and second test responses, and between a tracking label and the tracking response.
    Type: Grant
    Filed: July 7, 2021
    Date of Patent: July 18, 2023
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Ning Wang, Yibing Song, Wei Liu
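The update signal described above combines three differences: teacher/student responses on the template image, teacher/student responses on the test image, and tracking response versus tracking label. A minimal sketch, assuming a squared-difference loss and equal weighting (neither is specified in the abstract):

```python
# Hedged sketch of the three-part training loss implied by the abstract.
# The squared-difference form and the equal weighting are assumptions.

def sq_diff(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def total_loss(ref1, ref2, test1, test2, track_resp, track_label):
    reference_term = sq_diff(ref1, ref2)   # first vs second reference response
    test_term = sq_diff(test1, test2)      # first vs second test response
    tracking_term = sq_diff(track_resp, track_label)
    return reference_term + test_term + tracking_term

loss = total_loss([1.0, 2.0], [1.0, 2.5],
                  [0.5, 0.5], [0.5, 1.0],
                  [0.9], [1.0])
```

Only the first model's parameters would be updated from this loss; the second model supplies the reference and test responses it is compared against.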
  • Patent number: 11636613
    Abstract: A computer application method for generating a three-dimensional (3D) face model is provided, performed by a face model generation model running on a computer device, the method including: obtaining a two-dimensional (2D) face image as input to the face model generation model; extracting global features and local features of the 2D face image; obtaining a 3D face model parameter based on the global features and the local features; and outputting a 3D face model corresponding to the 2D face image based on the 3D face model parameter.
    Type: Grant
    Filed: June 3, 2021
    Date of Patent: April 25, 2023
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventors: Yajing Chen, Yibing Song, Yonggen Ling, Linchao Bao, Wei Liu
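The pipeline above maps global and local 2D features to a 3D face model parameter. As a rough sketch, the two feature sets can be concatenated and regressed to parameters; the concatenation-plus-linear-map step below is an assumption standing in for the networks the abstract leaves unspecified.

```python
# Illustrative sketch: combine global and local 2D-image features, then
# regress a 3D face model parameter vector. The linear regressor and the
# toy feature values are assumptions for demonstration.

def predict_3d_params(global_feats, local_feats, weights):
    combined = global_feats + local_feats  # feature concatenation
    # One linear "regression" row per 3D model parameter.
    return [sum(w * f for w, f in zip(row, combined)) for row in weights]

g = [0.2, 0.4]       # e.g. whole-face shape cues
l = [0.1, 0.3]       # e.g. eye/nose/mouth detail cues
W = [[1, 0, 0, 0],   # toy 2-parameter regressor over 4 features
     [0, 0, 1, 1]]
params = predict_3d_params(g, l, W)
```

The predicted parameter vector would then drive a parametric face model (e.g. a morphable model) to output the final 3D mesh.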
  • Publication number: 20230077356
    Abstract: An image processing method and an image processing apparatus are provided. The method includes: acquiring a first image including an image of a target person and a second image including an image of target clothes; generating, based on image features of the first image and image features of the second image, a target appearance flow feature for representing deformation of the target clothes matching a body of the target person, and generating, based on the target appearance flow feature, a deformed image of the target clothes matching the body; and generating a virtual dress-up image, in which the target person wears the target clothes matching the body, by fusing the deformed image with the first image.
    Type: Application
    Filed: October 31, 2022
    Publication date: March 16, 2023
    Applicant: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Yibing SONG, Yuying Ge, Wei Liu
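The appearance-flow idea above can be sketched with a toy example: a per-pixel flow field says where each output pixel of the deformed clothes image samples from, and the warped result is fused over the person image wherever a clothes mask is set. The 1-D "images", flow, and mask below are simplifying assumptions.

```python
# Rough sketch of appearance-flow warping followed by fusion.
# 1-D pixel lists stand in for images; values are illustrative.

def warp_by_flow(clothes, flow):
    # flow[i] is the source index to sample for output position i
    return [clothes[src] for src in flow]

def fuse(person, warped, mask):
    # keep the warped clothes pixel where the mask is set, else the person
    return [w if m else p for p, w, m in zip(person, warped, mask)]

person = [10, 10, 10, 10]
clothes = [7, 8, 9, 6]
flow = [1, 1, 2, 3]          # deformation matching the body
mask = [0, 1, 1, 0]          # where the warped clothes cover the person
dressed = fuse(person, warp_by_flow(clothes, flow), mask)
```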
  • Publication number: 20230017112
    Abstract: This disclosure is related to an image generation method and apparatus. The method includes: obtaining a first body image including a target body and a first clothes image including target clothes; transforming the first clothes image based on a posture of the target body in the first body image to obtain a second clothes image, the second clothes image including the target clothes, and a posture of the target clothes matching the posture of the target body; performing feature extraction on the second clothes image, an image of a bare area in the first body image, and the first body image to obtain a clothes feature, a skin feature, and a body feature respectively; and generating a second body image based on the clothes feature, the skin feature, and the body feature, the target body in the second body image wearing the target clothes.
    Type: Application
    Filed: September 20, 2022
    Publication date: January 19, 2023
    Applicant: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Yibing SONG, Chongjian GE
  • Publication number: 20210343041
    Abstract: A method for obtaining a position of a target is provided. A plurality of frames of images is received. A first image in the plurality of frames of images includes a to-be-detected target. A position obtaining model is invoked; a model parameter of the position obtaining model is obtained through training based on a first position of a selected target in a first sample image and a second position of the selected target in the first sample image. The second position is predicted based on a third position of the selected target in a second sample image. The third position is predicted based on the first position. A position of the to-be-detected target in a second image is determined based on the model parameter and a position of the to-be-detected target in the first image via the position obtaining model.
    Type: Application
    Filed: July 15, 2021
    Publication date: November 4, 2021
    Applicant: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Ning WANG, Yibing SONG, Wei LIU
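The training signal described above is a forward-backward cycle: the target is tracked from its first-frame position into a second sample image (the third position), then back into the first image (the second position), and the round-trip is compared against the starting position. A minimal sketch, with a constant-offset "tracker" as a stand-in assumption:

```python
# Sketch of the forward-backward (cycle) supervision implied by the abstract.
# The constant-offset tracker is a placeholder for the learned model.

def track(pos, offset):
    return (pos[0] + offset[0], pos[1] + offset[1])

first = (5, 5)                    # labeled position in sample image 1
third = track(first, (2, 1))      # forward prediction into sample image 2
second = track(third, (-2, -1))   # backward prediction into sample image 1
# A consistent tracker returns to where it started; the residual is the
# self-supervised training error.
cycle_error = abs(second[0] - first[0]) + abs(second[1] - first[1])
```

Because the supervision comes from the cycle itself, the model parameter can be learned without per-frame labels in the second sample image.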
  • Publication number: 20210335002
    Abstract: This application discloses a method for training a model, performed at a computing device. The method includes: acquiring a template image and a test image; invoking a first object recognition model to process a feature of a tracked object in the template image to obtain a first reference response, and a second object recognition model to process the feature in the template image to obtain a second reference response; invoking the first model to process a feature of a tracked object in the test image to obtain a first test response, and the second model to process the feature to obtain a second test response; tracking the first test response to obtain a tracking response of the tracked object; and updating the first object recognition model based on the differences between the first and second reference responses, between the first and second test responses, and between a tracking label and the tracking response.
    Type: Application
    Filed: July 7, 2021
    Publication date: October 28, 2021
    Inventors: Ning WANG, Yibing SONG, Wei LIU
  • Publication number: 20210286977
    Abstract: A computer application method for generating a three-dimensional (3D) face model is provided, performed by a face model generation model running on a computer device, the method including: obtaining a two-dimensional (2D) face image as input to the face model generation model; extracting global features and local features of the 2D face image; obtaining a 3D face model parameter based on the global features and the local features; and outputting a 3D face model corresponding to the 2D face image based on the 3D face model parameter.
    Type: Application
    Filed: June 3, 2021
    Publication date: September 16, 2021
    Applicant: Tencent Technology (Shenzhen) Company Limited
    Inventors: Yajing CHEN, Yibing SONG, Yonggen LING, Linchao BAO, Wei LIU
  • Publication number: 20210247761
    Abstract: A guard route security method and a computer-readable storage medium are provided. The method includes receiving positioning information of a security guided vehicle and generating a security guided vehicle track; searching for low-position and high-position cameras around the security guided vehicle according to the positioning information, and feeding back search information; and turning on the high-position camera to track and monitor the security guided vehicle according to the search information. With the guard route security method, the security guided vehicle is filmed and monitored by the high-position cameras, achieving follow-up control of the security guided vehicle, which makes it convenient for security guards to quickly grasp the surrounding environment and carry out security work along the guard route more intuitively and effectively.
    Type: Application
    Filed: March 26, 2019
    Publication date: August 12, 2021
    Inventors: Shenghui Chen, Xiaohong Xu, Guanjie Xu, Bo Wang, Huaming Yang, Ying Hu, Yongtao Zhang, Xiwan Ning, Xiang Yu, Tongyu Huang, Gang Wang, Yibing Song, Yuqing Hou, Shuangguang Liu
  • Publication number: 20210250549
    Abstract: The present invention provides a camera link method and a computer storage medium. The method includes acquiring a real-time video of a camera, selecting a point in the real-time video as a linked point, and acquiring coordinates of the linked point; designating a linked camera; and acquiring a real-time video of the linked camera, and turning on the real-time video of the linked camera or positioning the linked camera to the linked point. With the camera link method and the computer storage medium of the present invention, linkage among multiple cameras and automatic linkage to multiple dome cameras can be achieved, and the application range is wide.
    Type: Application
    Filed: March 7, 2019
    Publication date: August 12, 2021
    Inventors: Shenghui Chen, Wenli Wang, Huaming Yang, Bo Wang, Chaowei Meng, Ying Hu, Jiangming Li, Yongtao Zhang, Xiwan Ning, Xiang Yu, Tongyu Huang, Gang Wang, Yibing Song, Yuqing Hou, Shuangguang Liu
  • Publication number: 20210248380
    Abstract: A video playing method for synchronously displaying AR information includes capturing a video code stream containing AR information by an AR camera; extracting the AR information from the video code stream frame by frame, generating subtitle information during said extraction, and storing the subtitle information as a subtitle file; storing the video code stream after said extraction as a video file; combining the subtitle file with the video file to create a general video file; and parsing and playing the general video file on a third-party player. Video with AR information captured by an AR camera can thus be parsed by a third-party player, with the AR information displayed synchronously during playback.
    Type: Application
    Filed: March 19, 2019
    Publication date: August 12, 2021
    Inventors: Jiebin Li, Tongyu Huang, Gang Wang, Yibing Song, Yuqing Hou, Shuangguang Liu
  • Publication number: 20210183044
    Abstract: An image processing method and apparatus, a computer-readable medium, and an electronic device are provided. The image processing method includes: respectively projecting, according to a plurality of view angle parameters corresponding to a plurality of view angles, a face model of a target object onto a plurality of face images of the target object acquired from the plurality of view angles, to determine correspondences between regions on the face model and regions on the face images; respectively extracting, based on the correspondences and a target region in the face model for which a texture image is to be generated, images corresponding to the target region from the plurality of face images; and fusing the images that correspond to the target region and that are respectively extracted from the plurality of face images, to generate the texture image.
    Type: Application
    Filed: February 24, 2021
    Publication date: June 17, 2021
    Inventors: Xiangkai LIN, Linchao BAO, Yonggen LING, Yibing SONG, Wei LIU
  • Patent number: 10909766
    Abstract: A video map engine system includes a configuration management client, multiple pieces of video equipment, a video access server, an augmented reality processor, and an augmented reality client. The parameters of each piece of video equipment include its azimuth angle P, vertical angle T, and zoom factor Z; the augmented reality client is adapted to calculate the location where the augmented reality tag is presented in the real-time video according to the values of P, T, and Z and the target location carried by the augmented reality tag, and to present the augmented reality tag at the corresponding location in the real-time video. The real-time video thus serves as the base map, and the augmented reality tag is presented on the base map, achieving a video map effect.
    Type: Grant
    Filed: February 1, 2019
    Date of Patent: February 2, 2021
    Assignee: GOSUNCN TECHNOLOGY GROUP CO., LTD.
    Inventors: Shenghui Chen, Guanjie Xu, Chaowei Meng, Weijian Hu, Xianjing Lin, Jianrong Zhong, Zhuofeng Liu, Zhizhao Deng, Shengxin Jiang, Kejun Luo, Wenguo Gao, Xiwan Ning, Chunsen Qiu, Tongyu Huang, Gang Wang, Yibing Song, Yuqing Hou, Shuangguang Liu
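The P/T/Z-to-screen calculation described above can be sketched as follows. The linear pinhole-style mapping, the field-of-view numbers, and the assumption that zoom scales the field of view linearly are all illustrative simplifications, not the patented formula.

```python
# Hedged sketch: place an AR tag on live video from the camera's azimuth
# angle P, vertical angle T, and zoom factor Z. The linear angle-to-pixel
# mapping and field-of-view constants are illustrative assumptions.

def tag_screen_position(tag_az, tag_el, p, t, z,
                        width=1920, height=1080, hfov=60.0, vfov=34.0):
    # Effective field of view narrows as zoom increases.
    hfov_eff, vfov_eff = hfov / z, vfov / z
    x = width / 2 + (tag_az - p) / hfov_eff * width
    y = height / 2 - (tag_el - t) / vfov_eff * height
    return x, y

# A tag dead ahead of the camera lands at the frame centre.
x, y = tag_screen_position(tag_az=30.0, tag_el=5.0, p=30.0, t=5.0, z=2.0)
```

Recomputing this mapping whenever P, T, or Z changes keeps the tag anchored to its real-world target as the camera pans, tilts, or zooms.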
  • Publication number: 20200258304
    Abstract: A video map engine system includes a configuration management client, multiple pieces of video equipment, a video access server, an augmented reality processor, and an augmented reality client. The parameters of each piece of video equipment include its azimuth angle P, vertical angle T, and zoom factor Z; the augmented reality client is adapted to calculate the location where the augmented reality tag is presented in the real-time video according to the values of P, T, and Z and the target location carried by the augmented reality tag, and to present the augmented reality tag at the corresponding location in the real-time video. The real-time video thus serves as the base map, and the augmented reality tag is presented on the base map, achieving a video map effect.
    Type: Application
    Filed: February 1, 2019
    Publication date: August 13, 2020
    Inventors: Shenghui Chen, Guanjie Xu, Chaowei Meng, Weijian Hu, Xianjing Lin, Jianrong Zhong, Zhuofeng Liu, Zhizhao Deng, Shengxin Jiang, Kejun Luo, Wenguo Gao, Xiwan Ning, Chunsen Qiu, Tongyu Huang, Gang Wang, Yibing Song, Yuqing Hou, Shuangguang Liu
  • Patent number: 10593043
    Abstract: Systems and methods are disclosed for segmenting a digital image to identify an object portrayed in the digital image from background pixels in the digital image. In particular, in one or more embodiments, the disclosed systems and methods use a first neural network and a second neural network to generate image information used to generate a segmentation mask that corresponds to the object portrayed in the digital image. Specifically, in one or more embodiments, the disclosed systems and methods optimize the fit of the segmentation mask's boundary to edges of the object portrayed in the digital image to accurately segment the object within the digital image.
    Type: Grant
    Filed: April 10, 2018
    Date of Patent: March 17, 2020
    Assignee: ADOBE INC.
    Inventors: Zhe Lin, Yibing Song, Xin Lu, Xiaohui Shen, Jimei Yang
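The boundary-fit step above can be illustrated in miniature: each predicted mask-boundary coordinate is snapped to the nearest strong image edge within a small search window. The 1-D setting, the gradient threshold, and the snapping rule are simplifying assumptions rather than the patented optimization.

```python
# Toy sketch of fitting a mask boundary to image edges: snap each boundary
# coordinate to the nearest strong edge within a window, or keep it as-is.

def snap_boundary(boundary, edge_strength, window=2, threshold=0.5):
    snapped = []
    for b in boundary:
        lo = max(0, b - window)
        hi = min(len(edge_strength), b + window + 1)
        candidates = [i for i in range(lo, hi)
                      if edge_strength[i] >= threshold]
        # Keep the original coordinate when no strong edge is nearby.
        snapped.append(min(candidates, key=lambda i: abs(i - b))
                       if candidates else b)
    return snapped

edges = [0.0, 0.1, 0.9, 0.2, 0.0, 0.8, 0.1]  # toy edge-strength profile
refined = snap_boundary([1, 4], edges)
```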
  • Publication number: 20180232887
    Abstract: Systems and methods are disclosed for segmenting a digital image to identify an object portrayed in the digital image from background pixels in the digital image. In particular, in one or more embodiments, the disclosed systems and methods use a first neural network and a second neural network to generate image information used to generate a segmentation mask that corresponds to the object portrayed in the digital image. Specifically, in one or more embodiments, the disclosed systems and methods optimize the fit of the segmentation mask's boundary to edges of the object portrayed in the digital image to accurately segment the object within the digital image.
    Type: Application
    Filed: April 10, 2018
    Publication date: August 16, 2018
    Inventors: Zhe Lin, Yibing Song, Xin Lu, Xiaohui Shen, Jimei Yang
  • Patent number: 9972092
    Abstract: Systems and methods are disclosed for segmenting a digital image to identify an object portrayed in the digital image from background pixels in the digital image. In particular, in one or more embodiments, the disclosed systems and methods use a first neural network and a second neural network to generate image information used to generate a segmentation mask that corresponds to the object portrayed in the digital image. Specifically, in one or more embodiments, the disclosed systems and methods optimize the fit of the segmentation mask's boundary to edges of the object portrayed in the digital image to accurately segment the object within the digital image.
    Type: Grant
    Filed: March 31, 2016
    Date of Patent: May 15, 2018
    Assignee: ADOBE SYSTEMS INCORPORATED
    Inventors: Zhe Lin, Yibing Song, Xin Lu, Xiaohui Shen, Jimei Yang
  • Publication number: 20170287137
    Abstract: Systems and methods are disclosed for segmenting a digital image to identify an object portrayed in the digital image from background pixels in the digital image. In particular, in one or more embodiments, the disclosed systems and methods use a first neural network and a second neural network to generate image information used to generate a segmentation mask that corresponds to the object portrayed in the digital image. Specifically, in one or more embodiments, the disclosed systems and methods optimize the fit of the segmentation mask's boundary to edges of the object portrayed in the digital image to accurately segment the object within the digital image.
    Type: Application
    Filed: March 31, 2016
    Publication date: October 5, 2017
    Inventors: Zhe Lin, Yibing Song, Xin Lu, Xiaohui Shen, Jimei Yang