Patents by Inventor Wen-Liang Chi

Wen-Liang Chi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8648866
    Abstract: A facial animation production method for producing 3-dimensional (3D) facial animation data in response to input video data includes the following steps. First, data positioning and character sorting processes are performed on the input video data to acquire first-layer character data, which indicates multiple first-layer character points, and first-layer model data. Next, first-layer model outline data and first-layer character outline data are obtained from the first-layer model data and the first-layer character data, respectively. Then, the first-layer character outline data is compared with the first-layer model outline data to judge whether a judgment condition is satisfied. If not, output character data are produced according to the first-layer character data, and fundamental facial-mesh transformation data are produced from them. Thereafter, the 3D facial animation data are displayed according to the fundamental facial-mesh transformation data.
    Type: Grant
    Filed: July 7, 2010
    Date of Patent: February 11, 2014
    Assignee: Industrial Technology Research Institute
    Inventors: Wen-Hung Ting, Chen-Lan Yen, Wen-Liang Chi, Duan-Li Liao
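    The processing steps described in the abstract above can be sketched as a minimal Python pipeline. Every name below (the frame layout, the outline representation as a min/max span, and the comparison tolerance) is an illustrative assumption for exposition only, not the patented implementation:

    ```python
    # Hypothetical sketch of the abstract's first-layer pipeline.
    # All data layouts, function names, and the tolerance are assumptions.

    def extract_first_layer(frame):
        """Data positioning and character sorting: derive first-layer
        character points and the matching first-layer model points."""
        character_points = sorted(frame["points"])  # "character sorting"
        model_points = frame["model"]
        return character_points, model_points

    def outline(points):
        """Reduce a point set to a simple outline descriptor (min/max span)."""
        return (min(points), max(points))

    def judgment_satisfied(char_outline, model_outline, tol=2):
        """Judgment condition: the two outlines agree within a tolerance."""
        return all(abs(a - b) <= tol
                   for a, b in zip(char_outline, model_outline))

    def produce_animation_data(frame):
        char_pts, model_pts = extract_first_layer(frame)
        # Obtain outline data from the character and model data, then compare.
        if not judgment_satisfied(outline(char_pts), outline(model_pts)):
            # Condition not satisfied: produce output character data and,
            # from it, fundamental facial-mesh transformation data
            # (here, offsets relative to the first character point).
            mesh_transform = [p - char_pts[0] for p in char_pts]
            return {"mesh_transform": mesh_transform}
        # Condition satisfied: deeper-layer processing (not sketched here).
        return None
    ```

    A caller would then render the 3D facial animation from the returned transformation data; for example, `produce_animation_data({"points": [5, 1, 9], "model": [0, 20]})` yields `{"mesh_transform": [0, 4, 8]}` because the character outline (1, 9) and model outline (0, 20) differ by more than the tolerance.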
  • Publication number: 20110141105
    Abstract: A facial animation production method for producing 3-dimensional (3D) facial animation data in response to input video data includes the following steps. First, data positioning and character sorting processes are performed on the input video data to acquire first-layer character data, which indicates multiple first-layer character points, and first-layer model data. Next, first-layer model outline data and first-layer character outline data are obtained from the first-layer model data and the first-layer character data, respectively. Then, the first-layer character outline data is compared with the first-layer model outline data to judge whether a judgment condition is satisfied. If not, output character data are produced according to the first-layer character data, and fundamental facial-mesh transformation data are produced from them. Thereafter, the 3D facial animation data are displayed according to the fundamental facial-mesh transformation data.
    Type: Application
    Filed: July 7, 2010
    Publication date: June 16, 2011
    Applicant: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
    Inventors: Wen-Hung Ting, Chen-Lan Yen, Wen-Liang Chi, Duan-Li Liao