Patents by Inventor Lvdi Wang
Lvdi Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11354841
Abstract: The present disclosure describes a speech-driven facial animation generation method. The method comprises six steps: extracting speech features, collecting frequency information, summarizing time information, decoding action features, driving a facial model, and sliding a signal window. Given an input speech audio signal, the method can drive any facial model in real time, under a fixed delay, to generate animation. The animation quality matches the current state of the art in speech animation, and the method is lightweight and robust. It can be used to generate speech animation in various scenarios, such as VR social networking, virtual speech assistants, and games.
Type: Grant
Filed: March 29, 2021
Date of Patent: June 7, 2022
Assignees: ZHEJIANG UNIVERSITY, FACEUNITY TECHNOLOGY CO., LTD.
Inventors: Kun Zhou, Yujin Chai, Yanlin Weng, Lvdi Wang
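The six steps named in the abstract above form a sliding-window pipeline. The sketch below is purely illustrative: the function names, window sizes, feature dimensions, blendshape names, and the toy linear decoder are assumptions for demonstration, not details taken from the patent.

```python
import numpy as np

WINDOW = 512   # samples per analysis window (assumed)
HOP = 256      # window slide per step (assumed)

def extract_speech_features(window: np.ndarray) -> np.ndarray:
    """Step 1: per-window spectral features (here, a magnitude spectrum)."""
    return np.abs(np.fft.rfft(window * np.hanning(len(window))))

def collect_frequency_info(spectrum: np.ndarray, n_bands: int = 16) -> np.ndarray:
    """Step 2: summarize the spectrum into coarse frequency bands."""
    return np.array([b.mean() for b in np.array_split(spectrum, n_bands)])

def summarize_time_info(history: list, n_frames: int = 8) -> np.ndarray:
    """Step 3: pool band features over the most recent frames."""
    return np.mean(history[-n_frames:], axis=0)

def decode_action_features(context: np.ndarray, n_actions: int = 4) -> np.ndarray:
    """Step 4: map pooled context to facial action weights (toy linear decoder)."""
    rng = np.random.default_rng(0)  # fixed random weights, for reproducibility only
    W = rng.standard_normal((n_actions, context.shape[0])) * 0.01
    return 1.0 / (1.0 + np.exp(-(W @ context)))  # squash weights into (0, 1)

def drive_facial_model(weights: np.ndarray) -> dict:
    """Step 5: apply action weights to a placeholder blendshape rig."""
    names = ["jaw_open", "lips_pucker", "mouth_wide", "brow_raise"]
    return dict(zip(names, weights.round(3)))

def animate(signal: np.ndarray) -> list:
    """Step 6: slide the window over the signal, emitting one pose per hop."""
    history, poses = [], []
    for start in range(0, len(signal) - WINDOW + 1, HOP):
        window = signal[start:start + WINDOW]
        history.append(collect_frequency_info(extract_speech_features(window)))
        context = summarize_time_info(history)
        poses.append(drive_facial_model(decode_action_features(context)))
    return poses

poses = animate(np.sin(np.linspace(0, 200 * np.pi, 16000)))  # one second of a test tone
```

Because each pose depends only on the current window and a bounded history, the latency is fixed by the window and hop sizes, which is consistent with the abstract's "real time under a particular delay" claim.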
-
Publication number: 20210233299
Abstract: The present disclosure describes a speech-driven facial animation generation method. The method comprises six steps: extracting speech features, collecting frequency information, summarizing time information, decoding action features, driving a facial model, and sliding a signal window. Given an input speech audio signal, the method can drive any facial model in real time, under a fixed delay, to generate animation. The animation quality matches the current state of the art in speech animation, and the method is lightweight and robust. It can be used to generate speech animation in various scenarios, such as VR social networking, virtual speech assistants, and games.
Type: Application
Filed: March 29, 2021
Publication date: July 29, 2021
Inventors: Kun Zhou, Yujin Chai, Yanlin Weng, Lvdi Wang
-
Patent number: 9792725
Abstract: The invention discloses a method for image and video virtual hairstyle modeling. The method includes: acquiring data of a target subject with a digital device and segmenting the hairstyle region from the image; obtaining a uniformly distributed static hairstyle model that conforms to the original hairstyle region by resolving the orientation ambiguity of the image's hair orientation field; and computing the motion of the hairstyle in a video by tracking the movement of a head model and estimating non-rigid deformation, generating a dynamic hairstyle model at each moment of the motion so that the model naturally fits the real movement of the hair in the video. The method performs physically plausible virtual 3D reconstruction of individual hairstyles from single views and video sequences, and is widely applicable to creating virtual characters and to many hairstyle editing applications for images and videos.
Type: Grant
Filed: November 7, 2014
Date of Patent: October 17, 2017
Assignee: ZHEJIANG UNIVERSITY
Inventors: Yanlin Weng, Menglei Chai, Lvdi Wang, Kun Zhou
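One sub-problem named in the abstract above is the orientation ambiguity of an image hair orientation field: local filters measure hair orientation only modulo pi (a strand "pointing up" and "pointing down" give the same response), so each pixel's orientation must be lifted to a consistent direction. The greedy row-major propagation below is an illustrative stand-in under that general formulation, not the patent's actual algorithm.

```python
import numpy as np

def resolve_orientation_ambiguity(theta: np.ndarray, seed_dir: float = 0.0) -> np.ndarray:
    """Lift orientations theta (radians, defined modulo pi) to directions modulo 2*pi.

    Scans pixels in row-major order; each pixel keeps theta or flips to
    theta + pi, whichever agrees better with its already-resolved left
    (or, in the first column, top) neighbor. The first pixel is matched
    against seed_dir.
    """
    h, w = theta.shape
    d = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            if i == 0 and j == 0:
                ref = seed_dir
            elif j > 0:
                ref = d[i, j - 1]
            else:
                ref = d[i - 1, j]
            cand = theta[i, j] % np.pi
            if np.cos(cand - ref) < 0:  # the flipped direction fits the neighbor better
                cand += np.pi
            d[i, j] = cand % (2 * np.pi)
    return d

# A constant orientation field resolves to one consistent direction field;
# the seed decides which of the two globally consistent liftings is chosen.
field = np.full((4, 6), 0.3)
up = resolve_orientation_ambiguity(field, seed_dir=0.3)
down = resolve_orientation_ambiguity(field, seed_dir=0.3 + np.pi)
```

A real hair field is noisy, so practical methods typically solve a global labeling or optimization rather than this greedy scan, but the modulo-pi ambiguity being resolved is the same.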
-
Patent number: 9367940
Abstract: The invention discloses a method for single-view hair modeling and portrait editing. The method reconstructs the 3D structure of an individual's hairstyle from a single input image and requires only a small amount of user input to enable a variety of portrait editing functions. After image preprocessing, 3D head model reconstruction, 2D strand extraction, and 3D hairstyle reconstruction, the method supports portrait editing functions such as portrait pop-ups, hairstyle replacement, and hairstyle editing. The invention is the first to create a 3D hair model from a single portrait view, enabling a series of practical portrait hairstyle editing functions whose results are superior to prior-art methods, with simple interaction and highly efficient computation.
Type: Grant
Filed: April 25, 2014
Date of Patent: June 14, 2016
Assignee: ZHEJIANG UNIVERSITY
Inventors: Yanlin Weng, Lvdi Wang, Menglei Chai, Kun Zhou
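The 2D strand extraction step mentioned in the abstract above is commonly realized by tracing polylines through a per-pixel hair direction field. The sketch below shows that general idea with nearest-pixel sampling; the sampling scheme, step size, and stopping rule are assumptions for illustration, not the patent's method.

```python
import numpy as np

def trace_strand(direction: np.ndarray, seed, step: float = 1.0, max_steps: int = 100) -> np.ndarray:
    """Follow a per-pixel direction field (radians) from a seed point.

    Uses nearest-pixel sampling and stops when the strand leaves the image.
    Returns the polyline as an (n, 2) array of (row, col) points.
    """
    h, w = direction.shape
    pts = [np.asarray(seed, dtype=float)]
    for _ in range(max_steps):
        r, c = pts[-1]
        ir, ic = int(round(r)), int(round(c))
        if not (0 <= ir < h and 0 <= ic < w):
            break  # strand left the image
        ang = direction[ir, ic]
        # step along (sin, cos) in (row, col) coordinates
        pts.append(pts[-1] + step * np.array([np.sin(ang), np.cos(ang)]))
    return np.array(pts)

# In a uniform field pointing along +x (angle 0), a strand traced from the
# left edge marches straight across the image.
field = np.zeros((10, 20))
strand = trace_strand(field, seed=(5.0, 0.0))
```

Production strand tracers typically use bilinear sampling and curvature or confidence thresholds to terminate; the polyline output here is the kind of 2D strand that a later stage would lift to 3D against the reconstructed head model.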
-
Publication number: 20150054825
Abstract: The invention discloses a method for image and video virtual hairstyle modeling. The method includes: acquiring data of a target subject with a digital device and segmenting the hairstyle region from the image; obtaining a uniformly distributed static hairstyle model that conforms to the original hairstyle region by resolving the orientation ambiguity of the image's hair orientation field; and computing the motion of the hairstyle in a video by tracking the movement of a head model and estimating non-rigid deformation, generating a dynamic hairstyle model at each moment of the motion so that the model naturally fits the real movement of the hair in the video. The method performs physically plausible virtual 3D reconstruction of individual hairstyles from single views and video sequences, and is widely applicable to creating virtual characters and to many hairstyle editing applications for images and videos.
Type: Application
Filed: November 7, 2014
Publication date: February 26, 2015
Inventors: Yanlin Weng, Menglei Chai, Lvdi Wang, Kun Zhou
-
Publication number: 20140233849
Abstract: The invention discloses a method for single-view hair modeling and portrait editing. The method reconstructs the 3D structure of an individual's hairstyle from a single input image and requires only a small amount of user input to enable a variety of portrait editing functions. After image preprocessing, 3D head model reconstruction, 2D strand extraction, and 3D hairstyle reconstruction, the method supports portrait editing functions such as portrait pop-ups, hairstyle replacement, and hairstyle editing. The invention is the first to create a 3D hair model from a single portrait view, enabling a series of practical portrait hairstyle editing functions whose results are superior to prior-art methods, with simple interaction and highly efficient computation.
Type: Application
Filed: April 25, 2014
Publication date: August 21, 2014
Applicant: ZHEJIANG UNIVERSITY
Inventors: Yanlin Weng, Lvdi Wang, Menglei Chai, Kun Zhou
-
Patent number: 8346002
Abstract: An apparatus and method produce an output image from an input image. The input image may contain at least one portion that lacks certain desired information, such as texture information. The desired information may be obtained from a second portion of the input image and applied to the portion that lacks the texture information or contains a diminished amount of it, while at least one characteristic of the second portion, such as illumination information, is not transferred. In another example, the input image may be decomposed into multiple parts, such as a high-frequency and a low-frequency component; each component may be hallucinated individually or independently, and the components combined to form the output image.
Type: Grant
Filed: July 20, 2007
Date of Patent: January 1, 2013
Assignee: Microsoft Corporation
Inventors: Li-Yi Wei, Kun Zhou, Baining Guo, Heung-Yeung Shum, Lvdi Wang
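The decompose-process-recombine structure in the abstract above can be sketched minimally: split the image into a low-frequency base (here a simple box blur standing in for a proper low-pass filter) and a high-frequency residual, process each component independently, then sum them. The blur kernel and the identity "hallucination" placeholders are assumptions for illustration only.

```python
import numpy as np

def box_blur(img: np.ndarray, k: int = 5) -> np.ndarray:
    """k x k box blur with edge padding (a crude stand-in for a Gaussian low-pass)."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def decompose(img: np.ndarray, k: int = 5):
    """Split an image into a low-frequency base and a high-frequency residual."""
    low = box_blur(img, k)
    return low, img - low

def hallucinate(component: np.ndarray) -> np.ndarray:
    """Placeholder for per-component hallucination (identity here)."""
    return component

rng = np.random.default_rng(1)
img = rng.random((16, 16))
low, high = decompose(img)
out = hallucinate(low) + hallucinate(high)  # recombined output image
```

Because the residual is defined as `img - low`, the decomposition is exactly invertible, so any per-component processing changes only the band it targets; with identity placeholders the recombined output reproduces the input.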
-
Publication number: 20090022414
Abstract: An apparatus and method produce an output image from an input image. The input image may contain at least one portion that lacks certain desired information, such as texture information. The desired information may be obtained from a second portion of the input image and applied to the portion that lacks the texture information or contains a diminished amount of it, while at least one characteristic of the second portion, such as illumination information, is not transferred. In another example, the input image may be decomposed into multiple parts, such as a high-frequency and a low-frequency component; each component may be hallucinated individually or independently, and the components combined to form the output image.
Type: Application
Filed: July 20, 2007
Publication date: January 22, 2009
Applicant: Microsoft Corporation
Inventors: Li-Yi Wei, Kun Zhou, Baining Guo, Heung-Yeung Shum, Lvdi Wang