Patents by Inventor Songfan Yang

Songfan Yang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11651255
    Abstract: The present disclosure relates to a method and an apparatus for object preference prediction, and a computer-readable medium. The method includes: acquiring evaluation information indicating preference values of a subset of users in a user set for a subset of objects in an object set; acquiring auxiliary information for at least one of the user set and the object set, where the auxiliary information indicates an attribute of a corresponding user in the user set or of a corresponding object in the object set; determining a user feature representation and an object feature representation using a matrix decomposition model, based on the evaluation information and the auxiliary information; and determining a preference prediction value of a target user in the user set for a target object in the object set based on the user feature representation and the object feature representation.
    Type: Grant
    Filed: December 17, 2019
    Date of Patent: May 16, 2023
    Assignee: BEIJING CENTURY TAL EDUCATION TECHNOLOGY CO., LTD.
    Inventors: Tianqiao Liu, Zitao Liu, Songfan Yang, Yan Huang, Bangxin Zhang
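The matrix-decomposition approach described in the abstract above can be sketched in a few lines. This is an illustrative toy, not the patented method: the latent dimension, learning rate, and the specific way auxiliary attributes are tied to the user factors (a quadratic pull toward a linear projection of the attributes) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_users, n_items, k = 6, 5, 3
# Evaluation information: a sparse set of observed preference values.
obs = [(0, 0, 5.0), (0, 2, 3.0), (1, 1, 4.0), (2, 3, 2.0),
       (3, 0, 4.0), (4, 4, 5.0), (5, 2, 1.0)]

# Auxiliary information: one attribute vector per user (illustrative only).
A = rng.normal(size=(n_users, 2))

U = 0.1 * rng.normal(size=(n_users, k))   # user feature representation
V = 0.1 * rng.normal(size=(n_items, k))   # object feature representation
W = 0.1 * rng.normal(size=(2, k))         # maps attributes into latent space

lr, lam, mu = 0.02, 0.05, 0.1
for epoch in range(500):
    for u, i, r in obs:
        err = r - U[u] @ V[i]
        # SGD steps: fit observed preferences, regularize, and pull each
        # user's factors toward the projection of that user's attributes.
        U[u] += lr * (err * V[i] - lam * U[u] - mu * (U[u] - A[u] @ W))
        V[i] += lr * (err * U[u] - lam * V[i])
        W += lr * mu * np.outer(A[u], U[u] - A[u] @ W)

# Preference prediction value for a target (user, object) pair:
pred = U[0] @ V[2]
```

The auxiliary-information term lets users with few observed ratings borrow strength from their attributes, which is the usual motivation for side information in matrix factorization.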
  • Publication number: 20220044137
    Abstract: The present disclosure relates to a method and an apparatus for object preference prediction, and a computer-readable medium. The method includes: acquiring evaluation information indicating preference values of a subset of users in a user set for a subset of objects in an object set; acquiring auxiliary information for at least one of the user set and the object set, where the auxiliary information indicates an attribute of a corresponding user in the user set or of a corresponding object in the object set; determining a user feature representation and an object feature representation using a matrix decomposition model, based on the evaluation information and the auxiliary information; and determining a preference prediction value of a target user in the user set for a target object in the object set based on the user feature representation and the object feature representation.
    Type: Application
    Filed: December 17, 2019
    Publication date: February 10, 2022
    Inventors: Tianqiao Liu, Zitao Liu, Songfan Yang, Yan Huang, Bangxin Zhang
  • Patent number: 11010600
    Abstract: A face emotion recognition method based on a dual-stream convolutional neural network applies a multi-scale facial expression recognition network to single-frame face images and face sequences to perform classification learning. The method includes constructing a multi-scale facial expression recognition network comprising a channel network with a resolution of 224×224 and a channel network with a resolution of 336×336, extracting facial expression features at the different resolutions through the recognition network, combining the static features of images and the dynamic features of expression sequences for training and learning, fusing the two channel models, and testing to obtain a facial expression classification result. The present invention fully exploits the advantages of deep learning, avoids the feature deviations and long processing time of manual feature extraction, and is thus more adaptable.
    Type: Grant
    Filed: June 24, 2019
    Date of Patent: May 18, 2021
    Assignee: Sichuan University
    Inventors: Linbo Qing, Songfan Yang, Xiaohai He, Qizhi Teng
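The final fusion step of the two channel models described above can be illustrated schematically. Everything below is a stand-in: the class count, fusion weight, and random logits are placeholders for the outputs of trained 224×224 (static, single-frame) and 336×336 (dynamic, sequence) networks, and score-level weighted averaging is one common fusion choice, not necessarily the patent's.

```python
import numpy as np

N_CLASSES = 7  # e.g. the seven basic facial expressions

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(1)
# Placeholder logits from each channel for a batch of 4 test samples.
logits_224 = rng.normal(size=(4, N_CLASSES))  # static stream (single frame)
logits_336 = rng.normal(size=(4, N_CLASSES))  # dynamic stream (sequence)

# Score-level fusion: weighted average of the two channels' class
# probabilities (alpha = 0.5 is an assumed, untuned weight).
alpha = 0.5
p = alpha * softmax(logits_224) + (1 - alpha) * softmax(logits_336)
pred = p.argmax(axis=1)  # fused expression class per sample
```

Averaging probabilities rather than logits keeps each channel's contribution bounded, so one over-confident stream cannot dominate the fused decision.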
  • Publication number: 20190311188
    Abstract: A face emotion recognition method based on a dual-stream convolutional neural network applies a multi-scale facial expression recognition network to single-frame face images and face sequences to perform classification learning. The method includes constructing a multi-scale facial expression recognition network comprising a channel network with a resolution of 224×224 and a channel network with a resolution of 336×336, extracting facial expression features at the different resolutions through the recognition network, combining the static features of images and the dynamic features of expression sequences for training and learning, fusing the two channel models, and testing to obtain a facial expression classification result. The present invention fully exploits the advantages of deep learning, avoids the feature deviations and long processing time of manual feature extraction, and is thus more adaptable.
    Type: Application
    Filed: June 24, 2019
    Publication date: October 10, 2019
    Inventors: Linbo Qing, Songfan Yang, Xiaohai He, Qizhi Teng