Patents by Inventor Yanlin Weng

Yanlin Weng has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11354841
    Abstract: The present disclosure describes a speech-driven facial animation generation method. The method consists of six steps: extracting speech features, collecting frequency information, summarizing time information, decoding action features, driving a facial model, and sliding a signal window. Given an input speech audio signal, the method can drive any facial model in real time, subject to a fixed delay, to generate animation. The quality of the animation matches the current state of the art in speech animation, and the method is lightweight and robust. It can be used to generate speech animation in a variety of scenarios, such as VR social networking, virtual speech assistants, and games.
    Type: Grant
    Filed: March 29, 2021
    Date of Patent: June 7, 2022
    Assignees: ZHEJIANG UNIVERSITY, FACEUNITY TECHNOLOGY CO., LTD.
    Inventors: Kun Zhou, Yujin Chai, Yanlin Weng, Lvdi Wang
  • Publication number: 20210233299
    Abstract: The present disclosure describes a speech-driven facial animation generation method. The method consists of six steps: extracting speech features, collecting frequency information, summarizing time information, decoding action features, driving a facial model, and sliding a signal window. Given an input speech audio signal, the method can drive any facial model in real time, subject to a fixed delay, to generate animation. The quality of the animation matches the current state of the art in speech animation, and the method is lightweight and robust. It can be used to generate speech animation in a variety of scenarios, such as VR social networking, virtual speech assistants, and games.
    Type: Application
    Filed: March 29, 2021
    Publication date: July 29, 2021
    Inventors: Kun Zhou, Yujin Chai, Yanlin Weng, Lvdi Wang
  • Patent number: 9792725
    Abstract: The invention discloses a method for image and video virtual hairstyle modeling. The method acquires data for a target subject with a digital device and segments the hairstyle region from the image; it then obtains a uniformly distributed static hairstyle model that conforms to the original hairstyle region by resolving the orientation ambiguity of the image's hair orientation field. The hairstyle's movement in a video is computed by tracking the motion of a head model and estimating non-rigid deformation, and a dynamic hairstyle model is generated at every moment of the motion so that it naturally fits the real movement of the hairstyle in the video. The method performs physically plausible virtual 3D reconstruction of individual hairstyles from single views and video sequences, and is widely applicable to creating virtual characters and to many hairstyle editing applications for images and videos.
    Type: Grant
    Filed: November 7, 2014
    Date of Patent: October 17, 2017
    Assignee: ZHEJIANG UNIVERSITY
    Inventors: Yanlin Weng, Menglei Chai, Lvdi Wang, Kun Zhou
  • Patent number: 9367940
    Abstract: The invention discloses a method for single-view hair modeling and portrait editing. The method reconstructs the 3D structure of an individual's hairstyle from an input image and requires only a small amount of user input to enable a variety of portrait editing functions. After image preprocessing, 3D head model reconstruction, 2D strand extraction, and 3D hairstyle reconstruction, the method supports portrait editing functions such as portrait pop-ups, hairstyle replacement, and hairstyle editing. The invention is the first method for creating a 3D hair model from a single portrait view, enabling a series of practical portrait hairstyle editing functions whose results surpass prior methods, with simple interaction and highly efficient computation.
    Type: Grant
    Filed: April 25, 2014
    Date of Patent: June 14, 2016
    Assignee: ZHEJIANG UNIVERSITY
    Inventors: Yanlin Weng, Lvdi Wang, Menglei Chai, Kun Zhou
  • Patent number: 9361723
    Abstract: The invention discloses a method for real-time face animation based on a single video camera. The method tracks the 3D locations of facial feature points in real time with a single video camera, parameterizes head poses and facial expressions from those locations, and finally maps the parameters onto an avatar to drive the face animation of an animated character. The method achieves real-time performance using only an ordinary consumer video camera rather than specialized acquisition equipment; it accurately handles wide-angle rotations, translations, and exaggerated facial expressions; and it works under varied illumination and background conditions, including indoor settings and sunny outdoor environments.
    Type: Grant
    Filed: October 17, 2014
    Date of Patent: June 7, 2016
    Assignee: ZHEJIANG UNIVERSITY
    Inventors: Kun Zhou, Yanlin Weng, Chen Cao
  • Publication number: 20150054825
    Abstract: The invention discloses a method for image and video virtual hairstyle modeling. The method acquires data for a target subject with a digital device and segments the hairstyle region from the image; it then obtains a uniformly distributed static hairstyle model that conforms to the original hairstyle region by resolving the orientation ambiguity of the image's hair orientation field. The hairstyle's movement in a video is computed by tracking the motion of a head model and estimating non-rigid deformation, and a dynamic hairstyle model is generated at every moment of the motion so that it naturally fits the real movement of the hairstyle in the video. The method performs physically plausible virtual 3D reconstruction of individual hairstyles from single views and video sequences, and is widely applicable to creating virtual characters and to many hairstyle editing applications for images and videos.
    Type: Application
    Filed: November 7, 2014
    Publication date: February 26, 2015
    Inventors: Yanlin Weng, Menglei Chai, Lvdi Wang, Kun Zhou
  • Publication number: 20150035825
    Abstract: The invention discloses a method for real-time face animation based on a single video camera. The method tracks the 3D locations of facial feature points in real time with a single video camera, parameterizes head poses and facial expressions from those locations, and finally maps the parameters onto an avatar to drive the face animation of an animated character. The method achieves real-time performance using only an ordinary consumer video camera rather than specialized acquisition equipment; it accurately handles wide-angle rotations, translations, and exaggerated facial expressions; and it works under varied illumination and background conditions, including indoor settings and sunny outdoor environments.
    Type: Application
    Filed: October 17, 2014
    Publication date: February 5, 2015
    Inventors: Kun Zhou, Yanlin Weng, Chen Cao
  • Publication number: 20140233849
    Abstract: The invention discloses a method for single-view hair modeling and portrait editing. The method reconstructs the 3D structure of an individual's hairstyle from an input image and requires only a small amount of user input to enable a variety of portrait editing functions. After image preprocessing, 3D head model reconstruction, 2D strand extraction, and 3D hairstyle reconstruction, the method supports portrait editing functions such as portrait pop-ups, hairstyle replacement, and hairstyle editing. The invention is the first method for creating a 3D hair model from a single portrait view, enabling a series of practical portrait hairstyle editing functions whose results surpass prior methods, with simple interaction and highly efficient computation.
    Type: Application
    Filed: April 25, 2014
    Publication date: August 21, 2014
    Applicant: ZHEJIANG UNIVERSITY
    Inventors: Yanlin Weng, Lvdi Wang, Menglei Chai, Kun Zhou
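The six-step pipeline named in patent 11354841 (extract speech features, collect frequency information, summarize time information, decode action features, drive the facial model, slide the signal window) can be illustrated with a minimal sketch. Everything here is a hypothetical stand-in, not the patented method: the toy features (energy and zero-crossing rate) substitute for real spectral analysis, and a single "jaw-open" weight substitutes for a full set of facial action features.

```python
def extract_features(window):
    # Steps 1-2: crude per-window "frequency information" via signal
    # energy and zero-crossing rate (stand-ins for real speech features).
    energy = sum(s * s for s in window) / len(window)
    zcr = sum(1 for a, b in zip(window, window[1:]) if a * b < 0) / len(window)
    return (energy, zcr)

def summarize_time(features):
    # Step 3: average per-window features over the temporal context.
    n = len(features)
    return tuple(sum(f[i] for f in features) / n for i in range(2))

def decode_actions(summary):
    # Step 4: map the summary to one toy "jaw-open" weight in [0, 1].
    energy, zcr = summary
    return min(1.0, energy * 10 + zcr)

def animate(signal, window_size=4, hop=2, context=2):
    # Steps 5-6: slide a window over the signal and emit one weight per
    # hop, using the last `context` windows of history; the fixed hop
    # and context give the bounded delay the abstract mentions.
    windows = [signal[i:i + window_size]
               for i in range(0, len(signal) - window_size + 1, hop)]
    feats = [extract_features(w) for w in windows]
    weights = []
    for t in range(len(feats)):
        ctx = feats[max(0, t - context + 1):t + 1]
        weights.append(decode_actions(summarize_time(ctx)))
    return weights
```

A silent signal yields all-zero weights, while a loud one saturates the toy jaw-open weight at 1.0.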
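Both hairstyle-modeling patents (9792725 and 9367940) hinge on resolving the orientation ambiguity of the image's hair orientation field: a 2D orientation estimator returns strand angles only modulo pi, so each sample's sign must be chosen consistently. A hypothetical greedy resolution pass along one strand (not the patented algorithm) could look like:

```python
import math

def resolve_orientation_ambiguity(thetas):
    # Each sample angle t is indistinguishable from t + pi in the image
    # orientation field.  Greedily pick, for every sample, whichever of
    # the two candidates lies closer (in wrapped angular distance) to
    # the previously resolved direction, yielding a consistently signed
    # direction field along the strand.
    resolved = [thetas[0]]
    for t in thetas[1:]:
        prev = resolved[-1]
        best = min((t, t + math.pi), key=lambda c: abs(
            math.atan2(math.sin(c - prev), math.cos(c - prev))))
        resolved.append(best)
    return resolved
```

For example, a sample reported as pi right after a sample resolved to 0 is flipped to 2*pi, since the strand direction cannot reverse between adjacent samples.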
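The 2D strand extraction stage mentioned in patent 9367940 steps through the image's orientation field. A toy tracer, assuming a callable orientation field (the function name and interface are illustrative, not from the patent):

```python
import math

def trace_strand(orientation, start, step=1.0, n_steps=50):
    # Follow the 2D orientation field from `start`: at each step, query
    # the local strand angle and advance along it.  `orientation(x, y)`
    # returns an angle in radians; a real implementation would sample a
    # per-pixel field and terminate at low-confidence pixels.
    x, y = start
    strand = [(x, y)]
    for _ in range(n_steps):
        t = orientation(x, y)
        x += step * math.cos(t)
        y += step * math.sin(t)
        strand.append((x, y))
    return strand
```

With a constant horizontal field, the tracer walks straight along the x axis.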
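Patent 9361723 maps tracked head-pose and expression parameters onto an avatar. A common way to apply expression coefficients is the linear blendshape model, sketched here over scalar toy "vertices"; this is a standard technique, not necessarily the patented mapping:

```python
def apply_blendshapes(neutral, deltas, weights):
    # Linear blendshape model: each output vertex is the neutral
    # position plus the weighted sum of per-expression offsets.
    # `neutral` lists vertex values, `deltas` lists one offset list per
    # blendshape, and `weights` holds the tracked coefficients.
    out = list(neutral)
    for w, delta in zip(weights, deltas):
        for v, d in enumerate(delta):
            out[v] += w * d
    return out
```

Driving a different avatar then amounts to swapping in that avatar's neutral mesh and blendshape offsets while reusing the tracked weights.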