Patents by Inventor Lvdi Wang

Lvdi Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11354841
    Abstract: The present disclosure discloses a speech-driven facial animation generation method. The method comprises six steps: extracting speech features, collecting frequency information, summarizing time information, decoding action features, driving a facial model, and sliding a signal window. Given an input speech audio signal, the method can drive any facial model in real time, under a fixed delay, to generate animation. The animation quality matches the current state of the art in speech animation, and the method is lightweight and robust. It can generate speech animation in a variety of scenarios, such as VR social networking, virtual speech assistants, and games.
    Type: Grant
    Filed: March 29, 2021
    Date of Patent: June 7, 2022
    Assignees: ZHEJIANG UNIVERSITY, FACEUNITY TECHNOLOGY CO., LTD.
    Inventors: Kun Zhou, Yujin Chai, Yanlin Weng, Lvdi Wang
  • Publication number: 20210233299
    Abstract: The present disclosure discloses a speech-driven facial animation generation method. The method comprises six steps: extracting speech features, collecting frequency information, summarizing time information, decoding action features, driving a facial model, and sliding a signal window. Given an input speech audio signal, the method can drive any facial model in real time, under a fixed delay, to generate animation. The animation quality matches the current state of the art in speech animation, and the method is lightweight and robust. It can generate speech animation in a variety of scenarios, such as VR social networking, virtual speech assistants, and games.
    Type: Application
    Filed: March 29, 2021
    Publication date: July 29, 2021
    Inventors: Kun ZHOU, Yujin CHAI, Yanlin WENG, Lvdi WANG
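The six-step pipeline named in the abstract above (features, frequency, time, decoding, driving, window sliding) could be sketched roughly as follows. All function names are hypothetical, and the FFT-based feature extractor and random-projection decoder are stand-ins for the trained components of the patented method, not its actual implementation:

```python
import numpy as np

def sliding_windows(signal, win_len, hop):
    """Step 6: slide a fixed-length window over the audio signal."""
    starts = range(0, max(len(signal) - win_len + 1, 1), hop)
    return [signal[s:s + win_len] for s in starts]

def extract_features(window):
    """Steps 1-2: extract speech features and collect frequency information.
    A plain FFT magnitude spectrum stands in for learned features."""
    return np.abs(np.fft.rfft(window))

def summarize_time(feature_seq):
    """Step 3: summarize time information across windows (mean pooling here)."""
    return np.mean(feature_seq, axis=0)

def decode_actions(summary, n_blendshapes=10):
    """Steps 4-5: decode action features into blendshape weights that drive
    a facial model. A fixed random projection stands in for the trained decoder."""
    rng = np.random.default_rng(0)
    proj = rng.standard_normal((n_blendshapes, summary.shape[0]))
    raw = proj @ summary
    e = np.exp(raw - raw.max())
    return e / e.sum()  # normalized, non-negative weights

signal = np.sin(np.linspace(0, 200 * np.pi, 16000))  # 1 s of dummy audio
windows = sliding_windows(signal, win_len=512, hop=256)
features = [extract_features(w) for w in windows]
weights = decode_actions(summarize_time(features))
print(weights.shape)  # one weight per blendshape
```

In a real-time setting the window would advance as new audio arrives, and each decoded weight vector would be applied to the facial rig with the fixed latency the abstract mentions.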
  • Patent number: 9792725
    Abstract: The invention discloses a method for image and video virtual hairstyle modeling. The method includes: acquiring data of a target subject with a digital device and segmenting the hairstyle region from the image; obtaining a uniformly distributed static hairstyle model that conforms to the original hairstyle region by resolving the orientation ambiguity of the image's hair orientation field; computing the hairstyle's motion in a video by tracking the motion of a head model and estimating non-rigid deformation; and generating a dynamic hairstyle model at every moment of the motion, so that the dynamic model naturally fits the real motion of the hairstyle in the video. The method performs physically plausible virtual 3D reconstruction of individual hairstyles from single views and video sequences, and is widely applicable to creating virtual characters and to many hairstyle editing applications for images and videos.
    Type: Grant
    Filed: November 7, 2014
    Date of Patent: October 17, 2017
    Assignee: ZHEJIANG UNIVERSITY
    Inventors: Yanlin Weng, Menglei Chai, Lvdi Wang, Kun Zhou
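The orientation field and its mod-π ambiguity, which the abstract above says the method must resolve, could be illustrated with a toy sketch. The gradient-based estimator and the per-pixel seed-direction resolution below are simplified stand-ins; the patented method resolves the ambiguity globally over the field:

```python
import numpy as np

def orientation_field(img):
    """Estimate a per-pixel hair orientation (defined only mod pi) from image
    gradients: strands run roughly perpendicular to the intensity gradient."""
    gy, gx = np.gradient(img.astype(float))
    return (np.arctan2(gy, gx) + np.pi / 2) % np.pi

def resolve_ambiguity(theta, seed_dir=0.0):
    """Toy resolution of the mod-pi ambiguity: per pixel, pick whichever of the
    two opposite directions lies closer to a seed direction."""
    d0 = theta
    d1 = theta + np.pi
    pick = np.where(np.cos(d0 - seed_dir) >= np.cos(d1 - seed_dir), d0, d1)
    return pick % (2 * np.pi)

img = np.tile(np.arange(16.0), (16, 1))  # horizontal ramp -> vertical strands
theta = orientation_field(img)
dirs = resolve_ambiguity(theta)
print(theta.shape, dirs.shape)
```

On the synthetic ramp, the gradient is horizontal everywhere, so the recovered orientation is vertical (π/2), matching the strand direction one would expect.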
  • Patent number: 9367940
    Abstract: The invention discloses a method for single-view hair modeling and portrait editing. The method reconstructs the 3D structure of an individual's hairstyle from a single input image and requires only a small amount of user input to enable a variety of portrait editing functions. After steps of image preprocessing, 3D head model reconstruction, 2D strand extraction, and 3D hairstyle reconstruction, the method supports portrait editing functions such as portrait pop-ups, hairstyle replacement, and hairstyle editing. The invention is the first to create a 3D hair model from a single portrait view, enabling a series of practical portrait hairstyle editing functions whose results surpass prior-art methods while requiring only simple interaction and efficient computation.
    Type: Grant
    Filed: April 25, 2014
    Date of Patent: June 14, 2016
    Assignee: ZHEJIANG UNIVERSITY
    Inventors: Yanlin Weng, Lvdi Wang, Menglei Chai, Kun Zhou
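The 2D strand extraction step mentioned in the abstract above could be sketched as tracing polylines through an orientation field. The function name and the constant test field are illustrative only; the patented method extracts strands from a field estimated from the portrait image:

```python
import numpy as np

def trace_strand(theta, start, step=1.0, n_steps=20):
    """Sketch of 2D strand extraction: from a seed pixel, repeatedly step
    along the local orientation to grow a strand polyline (y, x order)."""
    h, w = theta.shape
    pts = [np.array(start, dtype=float)]
    for _ in range(n_steps):
        y, x = pts[-1]
        iy, ix = int(y), int(x)
        if not (0 <= iy < h and 0 <= ix < w):
            break  # stop when the strand leaves the image
        a = theta[iy, ix]
        pts.append(pts[-1] + step * np.array([np.sin(a), np.cos(a)]))
    return np.array(pts)

# Constant 45-degree orientation field: the strand grows along the diagonal.
theta = np.full((32, 32), np.pi / 4)
strand = trace_strand(theta, start=(0.0, 0.0))
print(strand.shape)
```

Each traced 2D strand would then be lifted onto the reconstructed head geometry during the 3D hairstyle reconstruction step.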
  • Publication number: 20150054825
    Abstract: The invention discloses a method for image and video virtual hairstyle modeling. The method includes: acquiring data of a target subject with a digital device and segmenting the hairstyle region from the image; obtaining a uniformly distributed static hairstyle model that conforms to the original hairstyle region by resolving the orientation ambiguity of the image's hair orientation field; computing the hairstyle's motion in a video by tracking the motion of a head model and estimating non-rigid deformation; and generating a dynamic hairstyle model at every moment of the motion, so that the dynamic model naturally fits the real motion of the hairstyle in the video. The method performs physically plausible virtual 3D reconstruction of individual hairstyles from single views and video sequences, and is widely applicable to creating virtual characters and to many hairstyle editing applications for images and videos.
    Type: Application
    Filed: November 7, 2014
    Publication date: February 26, 2015
    Inventors: YANLIN WENG, MENGLEI CHAI, LVDI WANG, KUN ZHOU
  • Publication number: 20140233849
    Abstract: The invention discloses a method for single-view hair modeling and portrait editing. The method reconstructs the 3D structure of an individual's hairstyle from a single input image and requires only a small amount of user input to enable a variety of portrait editing functions. After steps of image preprocessing, 3D head model reconstruction, 2D strand extraction, and 3D hairstyle reconstruction, the method supports portrait editing functions such as portrait pop-ups, hairstyle replacement, and hairstyle editing. The invention is the first to create a 3D hair model from a single portrait view, enabling a series of practical portrait hairstyle editing functions whose results surpass prior-art methods while requiring only simple interaction and efficient computation.
    Type: Application
    Filed: April 25, 2014
    Publication date: August 21, 2014
    Applicant: ZHEJIANG UNIVERSITY
    Inventors: Yanlin Weng, Lvdi Wang, Menglei Chai, Kun Zhou
  • Patent number: 8346002
    Abstract: An apparatus and method provide an output image from an input image. The input image may contain at least one portion that lacks certain desired information, such as texture information. The desired information may be obtained from a second portion of the input image and applied to the portion that lacks the texture information or contains a diminished amount of it, while at least one characteristic of the second portion, such as illumination information, is not transferred. In another example, the input image may be decomposed into multiple parts, such as a high-frequency and a low-frequency component; each component may be hallucinated individually or independently, and the components combined to form the output image.
    Type: Grant
    Filed: July 20, 2007
    Date of Patent: January 1, 2013
    Assignee: Microsoft Corporation
    Inventors: Li-Yi Wei, Kun Zhou, Baining Guo, Heung-Yeung Shum, Lvdi Wang
  • Publication number: 20090022414
    Abstract: An apparatus and method provide an output image from an input image. The input image may contain at least one portion that lacks certain desired information, such as texture information. The desired information may be obtained from a second portion of the input image and applied to the portion that lacks the texture information or contains a diminished amount of it, while at least one characteristic of the second portion, such as illumination information, is not transferred. In another example, the input image may be decomposed into multiple parts, such as a high-frequency and a low-frequency component; each component may be hallucinated individually or independently, and the components combined to form the output image.
    Type: Application
    Filed: July 20, 2007
    Publication date: January 22, 2009
    Applicant: Microsoft Corporation
    Inventors: Li-Yi Wei, Kun Zhou, Baining Guo, Heung-Yeung Shum, Lvdi Wang
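The high-/low-frequency decomposition described in the two texture-hallucination abstracts above could be sketched with a simple band split. The box filter is an illustrative stand-in for whatever low-pass filter the apparatus actually uses, and the function names are hypothetical:

```python
import numpy as np

def box_blur(img, k=5):
    """Simple box filter used to split the image into frequency bands."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def decompose(img, k=5):
    """Split an image into a low-frequency base and a high-frequency detail
    layer, as in the two-band variant the abstract describes."""
    low = box_blur(img, k)
    return low, img - low

img = np.random.default_rng(1).random((32, 32))
low, high = decompose(img)
recon = low + high
print(np.allclose(recon, img))  # True: the two bands sum back to the input
```

In the patented scheme, each band would be hallucinated independently (e.g., filling the detail layer from an exemplar region while leaving the illumination-carrying base largely untouched) before being recombined into the output image.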