Patents by Inventor Huan-wen Hsiao

Huan-wen Hsiao has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20160283976
    Abstract: A method of enhancing the accuracy of predicting the gender of a network user comprises: obtaining campaign gender distribution ratios for each advertising campaign by counting the gender information of groundtruth devices which clicked in the respective advertising campaigns; assigning the gender information for each unknown device by finding the advertising campaigns that were clicked by the unknown device, multiplying the campaign gender distribution ratios of the clicked advertising campaigns, and comparing the multiplied result with a first certain value; obtaining updated campaign gender distribution ratios for each advertising campaign by counting the gender information of groundtruth devices and unknown devices which clicked in the respective advertising campaigns; and comparing, for each advertising campaign, a quadratic sum of the difference between the old and updated campaign gender distribution ratios with a second certain value, and returning to the assigning step if the quadratic sum is greater than the second certain value.
    Type: Application
    Filed: March 26, 2015
    Publication date: September 29, 2016
    Inventors: Che-Hua YEH, Chih-Han YU, Jyun-Fan TSAI, Kai-Yueh CHANG, Kuan-Hua LIN, Huan-Wen HSIAO, Tse-Ju LIN
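As a reading aid, the iterative procedure in the abstract above can be sketched in Python. This is a minimal illustration under assumptions, not the patented implementation: the function name, data shapes, binary "M"/"F" labels, and the default threshold and epsilon values are all invented for the example.

```python
def predict_genders(groundtruth, unknown_clicks, threshold=0.5,
                    epsilon=1e-6, max_iters=100):
    """Iterative gender assignment, sketched from the abstract.

    groundtruth:    {device_id: ("M" or "F", set of campaigns clicked)}
    unknown_clicks: {device_id: set of campaigns clicked}
    threshold:      the "first certain value" for the ratio product
    epsilon:        the "second certain value" for convergence
    """
    # Clicks for every device, known and unknown.
    all_clicks = {d: clicks for d, (_, clicks) in groundtruth.items()}
    all_clicks.update(unknown_clicks)
    campaigns = set().union(*all_clicks.values())

    def ratios(labels):
        # Per-campaign fraction of labeled-male clickers
        # (the "campaign gender distribution ratio").
        r = {}
        for c in campaigns:
            clickers = [d for d in labels if c in all_clicks[d]]
            males = sum(1 for d in clickers if labels[d] == "M")
            r[c] = males / len(clickers) if clickers else 0.5
        return r

    labels = {d: g for d, (g, _) in groundtruth.items()}
    old = ratios(labels)                       # groundtruth-only ratios
    for _ in range(max_iters):
        # Assign each unknown device by multiplying the ratios of the
        # campaigns it clicked and comparing with the first value.
        for d, clicks in unknown_clicks.items():
            p = 1.0
            for c in clicks:
                p *= old[c]
            labels[d] = "M" if p > threshold else "F"
        new = ratios(labels)                   # updated ratios include unknowns
        # Stop once the quadratic change is small for every campaign;
        # otherwise go back to the assigning step.
        if all((new[c] - old[c]) ** 2 <= epsilon for c in campaigns):
            break
        old = new
    return {d: labels[d] for d in unknown_clicks}
```

On a toy dataset where two groundtruth males clicked campaign A and two groundtruth females clicked campaign B, an unknown device that clicked only A is labeled "M" and one that clicked only B is labeled "F" after the first pass converges.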
  • Patent number: 9336583
    Abstract: Various embodiments are disclosed for image editing. A frame is obtained from a frame sequence depicting at least one individual, and facial characteristics in the frame are analyzed. A utilization score is assigned to the frame based on the detected facial characteristics, and a determination of whether to utilize the frame is made based on the utilization score. A completeness value is assigned, and a determination is made, based on the completeness value, of whether to repeat the steps above for an additional frame in the frame sequence. Regions from the frames are combined to generate a composite image.
    Type: Grant
    Filed: April 22, 2014
    Date of Patent: May 10, 2016
    Assignee: CYBERLINK CORP.
    Inventors: Ho-Chao Huang, Huan-Wen Hsiao, Chung-Yi Weng, Cheng-da Chung
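The score-then-stop loop in the abstract above can be condensed into a short sketch. This is an assumption-laden illustration, not the patented method: `score_fn` is a hypothetical stand-in for the facial-characteristic analysis (it might reward open eyes and smiles and penalize blinks or blur), and the completeness check is modeled simply as a target frame count.

```python
def select_frames(frames, score_fn, score_threshold, target_count):
    """Pick frames whose facial-characteristic score meets the
    utilization threshold, stopping once enough frames have been
    collected to satisfy the completeness check. Regions from the
    returned frames would then be combined into a composite image."""
    selected = []
    for frame in frames:
        if score_fn(frame) >= score_threshold:  # utilization score check
            selected.append(frame)
        if len(selected) >= target_count:       # completeness value reached
            break
    return selected
```

With numeric frames standing in for scored images, `select_frames([3, 7, 2, 9, 8], lambda f: f, 5, 2)` stops early after collecting the two qualifying frames `[7, 9]`.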
  • Publication number: 20140369627
    Abstract: Various embodiments are disclosed for image editing. A frame is obtained from a frame sequence depicting at least one individual, and facial characteristics in the frame are analyzed. A utilization score is assigned to the frame based on the detected facial characteristics, and a determination of whether to utilize the frame is made based on the utilization score. A completeness value is assigned, and a determination is made, based on the completeness value, of whether to repeat the steps above for an additional frame in the frame sequence. Regions from the frames are combined to generate a composite image.
    Type: Application
    Filed: April 22, 2014
    Publication date: December 18, 2014
    Applicant: Cyberlink Corp.
    Inventors: Ho-Chao Huang, Huan-Wen Hsiao, Chung-Yi Weng, Cheng-da Chung
  • Patent number: 8867789
    Abstract: Disclosed are various embodiments for tracking an object shown as moving in a video. One embodiment is a method for tracking an object in a video that comprises tracking in a first temporal direction an object in a plurality of video frames and generating a first tracking result, evaluating the first tracking result corresponding to tracking of the object in the first temporal direction, and stopping tracking in the first temporal direction upon the occurrence of a predefined event, wherein the predefined event is based on an evaluated tracking result. The method further comprises obtaining data identifying an object outline of the object upon stopping the tracking in the first temporal direction, tracking in a second temporal direction the object based on the data identifying the object outline of the object to generate a second tracking result, and generating a refined tracking result based on at least one of the first tracking result, the second tracking result, or a combination thereof.
    Type: Grant
    Filed: January 14, 2013
    Date of Patent: October 21, 2014
    Assignee: Cyberlink Corp.
    Inventors: Huan-Wen Hsiao, Chung-Yi Weng
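The forward-then-backward scheme in the abstract above can be sketched as follows. This is a simplified illustration under stated assumptions, not the patented tracker: `track(frame, prev)` and `confidence(result)` are hypothetical stand-ins for the actual per-frame tracking step and its evaluation, the "predefined event" is modeled as confidence dropping below a threshold, and refinement is modeled as keeping the higher-confidence result per frame.

```python
def bidirectional_track(frames, track, confidence, threshold):
    """Track forward until confidence drops (the predefined event),
    then track the same span backward from a re-identified outline,
    and merge the two passes into a refined result."""
    # First temporal direction: track until the predefined event fires.
    forward = []
    for frame in frames:
        result = track(frame, forward[-1] if forward else None)
        forward.append(result)
        if confidence(result) < threshold:   # evaluated result triggers stop
            break
    n = len(forward)

    # Second temporal direction: re-track the same frames in reverse,
    # seeded fresh (standing in for the newly obtained object outline).
    prev = None
    backward = []
    for frame in reversed(frames[:n]):
        prev = track(frame, prev)
        backward.append(prev)
    backward.reverse()

    # Refined result: per frame, keep whichever pass scored higher.
    return [f if confidence(f) >= confidence(b) else b
            for f, b in zip(forward, backward)]
```

In a toy run where results are plain numbers acting as their own confidence, the backward pass rescues the frame where the forward pass degraded, which is the point of combining the two tracking results.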
  • Publication number: 20140198945
    Abstract: Disclosed are various embodiments for tracking an object shown as moving in a video. One embodiment is a method for tracking an object in a video that comprises tracking in a first temporal direction an object in a plurality of video frames and generating a first tracking result, evaluating the first tracking result corresponding to tracking of the object in the first temporal direction, and stopping tracking in the first temporal direction upon the occurrence of a predefined event, wherein the predefined event is based on an evaluated tracking result. The method further comprises obtaining data identifying an object outline of the object upon stopping the tracking in the first temporal direction, tracking in a second temporal direction the object based on the data identifying the object outline of the object to generate a second tracking result, and generating a refined tracking result based on at least one of the first tracking result, the second tracking result, or a combination thereof.
    Type: Application
    Filed: January 14, 2013
    Publication date: July 17, 2014
    Applicant: CYBERLINK CORP.
    Inventors: Huan-Wen Hsiao, Chung-Yi Weng
  • Patent number: 8503862
    Abstract: Various embodiments described herein provide users with a fast and efficient way to identify scenes for editing purposes. At least one embodiment is a method for editing video. The method comprises receiving a video with scenes to be edited, receiving a scene selection for editing, and partitioning the selected scene into subscenes based on the presence of subtitles, audio analysis, or a combination of both. The method further comprises identifying subscenes of interest, receiving editing commands for the subscenes of interest, and associating the editing commands with the video for future playback, wherein the video is left unmodified.
    Type: Grant
    Filed: June 12, 2008
    Date of Patent: August 6, 2013
    Assignee: Cyberlink Corp.
    Inventors: Ming-Kai Hsieh, Huan-wen Hsiao
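The subtitle-based partitioning and non-destructive edit association described above can be sketched briefly. This is an illustrative simplification, not the patented method: subscene boundaries are modeled as subtitle start times falling inside the scene (audio analysis could contribute further cut points the same way), and edit commands are recorded in a side structure so the video itself stays untouched.

```python
def partition_scene(scene_start, scene_end, subtitle_starts):
    """Split the scene [scene_start, scene_end) into subscenes at
    subtitle start times that fall strictly inside it; times outside
    the scene are ignored."""
    cuts = sorted(t for t in subtitle_starts if scene_start < t < scene_end)
    bounds = [scene_start] + cuts + [scene_end]
    return list(zip(bounds, bounds[1:]))

def associate_edit(edit_log, subscene, command):
    """Record an editing command against a subscene for future
    playback; the video file is left unmodified."""
    edit_log.setdefault(subscene, []).append(command)
    return edit_log
```

For a 60-second scene with subtitles starting at 10 s, 25 s, and 70 s, the out-of-scene cue is dropped and the scene splits into three subscenes; a "mute" command can then be attached to the middle subscene without touching the video.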
  • Publication number: 20090310932
    Abstract: Various embodiments described herein provide users with a fast and efficient way to identify scenes for editing purposes. At least one embodiment is a method for editing video. The method comprises receiving a video with scenes to be edited, receiving a scene selection for editing, and partitioning the selected scene into subscenes based on the presence of subtitles, audio analysis, or a combination of both. The method further comprises identifying subscenes of interest, receiving editing commands for the subscenes of interest, and associating the editing commands with the video for future playback, wherein the video is left unmodified.
    Type: Application
    Filed: June 12, 2008
    Publication date: December 17, 2009
    Applicant: CYBERLINK CORPORATION
    Inventors: Ming-Kai Hsieh, Huan-wen Hsiao