Patents by Inventor Yea-Shuan Huang

Yea-Shuan Huang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8953852
    Abstract: A method for face recognition is provided, comprising: collecting a match facial image; retrieving a reference image from the image records of a database or from an input image; selecting one or more facial features from each of the match facial image and the reference image; obtaining at least one match facial feature and a match deviation of the reference image corresponding to the facial features of the match facial image; creating a match geometric model and a reference geometric model; obtaining a model deviation by comparing the match geometric model with the reference geometric model; and employing the match deviation and the model deviation to obtain a recognition score based on a predetermined rule. The method performs two-way face recognition by integrating block-matched facial features with geometric model comparison, and it exploits the relationship between the match deviation and the model deviation.
    Type: Grant
    Filed: June 19, 2012
    Date of Patent: February 10, 2015
    Assignee: Chung Hua University
    Inventors: Yea-Shuan Huang, Kuo-Ta Peng
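    A minimal NumPy sketch of the general idea described in the abstract above (not the claimed method): block-matching deviations between corresponding feature patches are combined with a simple pairwise-distance geometric model to produce one recognition score. The function names, the weight w, and the combination rule are illustrative assumptions.
      import numpy as np

      def block_deviation(patches_a, patches_b):
          # Mean squared difference between corresponding feature patches (block matching).
          diffs = [np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2)
                   for a, b in zip(patches_a, patches_b)]
          return float(np.mean(diffs))

      def geometric_model(points):
          # Pairwise distances between feature points, normalized by their mean,
          # as a simple stand-in for the geometric model.
          pts = np.asarray(points, float)
          d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
          dist = d[np.triu_indices(len(pts), k=1)]
          return dist / dist.mean()

      def recognition_score(patches_a, pts_a, patches_b, pts_b, w=0.5):
          match_dev = block_deviation(patches_a, patches_b)
          model_dev = float(np.mean(np.abs(geometric_model(pts_a) - geometric_model(pts_b))))
          # Lower combined deviation -> higher score; the rule and weight are assumptions.
          return 1.0 / (1.0 + w * match_dev + (1.0 - w) * model_dev)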
  • Patent number: 8929597
    Abstract: A method of object tracking is provided, comprising: creating areas of a tracking object and a non-tracking object respectively; determining whether the state of the tracking object and the non-tracking object is separation, proximity, or overlap; creating at least one separation template image of a separation area of the tracking object and/or the non-tracking object if the tracking object is proximate to the non-tracking object; fetching all feature points of an overlapping area of the tracking object and the non-tracking object if the two objects overlap; matching each of the feature points against the separation template image so as to calculate a corresponding matching error score; and comparing the matching error score of each feature point with that of the separation template image so as to determine whether the feature point belongs to the tracking object or the non-tracking object.
    Type: Grant
    Filed: June 19, 2012
    Date of Patent: January 6, 2015
    Assignee: Chung Hua University
    Inventors: Yea-Shuan Huang, Yu-Chung Chen
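    A minimal NumPy sketch of the overlap-resolution step described in the abstract above: each feature point taken from the overlapping area is matched against the separation templates captured while the objects were still apart, and the point is assigned to whichever template gives the lower matching error. The brute-force SSD search and the label names are illustrative assumptions, not the patented procedure.
      import numpy as np

      def patch_error(patch, template):
          # Best sum-of-squared-differences of the patch over all positions in the template.
          patch = np.asarray(patch, float)
          template = np.asarray(template, float)
          ph, pw = patch.shape
          th, tw = template.shape
          best = np.inf
          for y in range(th - ph + 1):
              for x in range(tw - pw + 1):
                  best = min(best, float(((template[y:y + ph, x:x + pw] - patch) ** 2).sum()))
          return best

      def assign_points(point_patches, template_tracked, template_other):
          # Label each overlap-region feature point by whichever separation template matches better.
          labels = []
          for patch in point_patches:
              closer_to_tracked = patch_error(patch, template_tracked) <= patch_error(patch, template_other)
              labels.append("tracking" if closer_to_tracked else "non-tracking")
          return labels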
  • Patent number: 8593523
    Abstract: A method and an apparatus for capturing facial expressions are provided, in which different facial expressions of a user are captured through a face recognition technique. In the method, a plurality of sequentially captured images containing human faces is received. Regional features of the human faces in the images are respectively captured to generate a target feature vector. The target feature vector is compared with a plurality of previously stored feature vectors to generate a parameter value. When the parameter value is higher than a threshold, one of the images is selected as a target image. Moreover, facial expression recognition and classification procedures can be further performed. For example, the target image is recognized to obtain a facial expression state, and the image is classified according to the facial expression state.
    Type: Grant
    Filed: March 24, 2011
    Date of Patent: November 26, 2013
    Assignee: Industrial Technology Research Institute
    Inventors: Shian Wan, Yuan-Shi Liao, Yea-Shuan Huang, Shun-Hsu Chuang
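    A minimal NumPy sketch of the selection step described in the abstract above, assuming the "parameter value" is the minimum distance between the target feature vector and the previously stored vectors; the threshold rule and all names are illustrative, not the patented procedure.
      import numpy as np

      def feature_distance(target_vec, stored_vecs):
          # Minimum Euclidean distance from the target feature vector to the stored vectors.
          stored = np.asarray(stored_vecs, float)
          return float(np.linalg.norm(stored - np.asarray(target_vec, float), axis=1).min())

      def select_target(frames, feature_vectors, stored_vecs, threshold):
          # Pick the first frame whose regional-feature vector differs enough from the
          # stored vectors, i.e. whose parameter value exceeds the threshold.
          for frame, vec in zip(frames, feature_vectors):
              if feature_distance(vec, stored_vecs) > threshold:
                  return frame
          return None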
  • Publication number: 20130259324
    Abstract: A method for face recognition is provided, comprising: collecting a match facial image; retrieving a reference image from the image records of a database or from an input image; selecting one or more facial features from each of the match facial image and the reference image; obtaining at least one match facial feature and a match deviation of the reference image corresponding to the facial features of the match facial image; creating a match geometric model and a reference geometric model; obtaining a model deviation by comparing the match geometric model with the reference geometric model; and employing the match deviation and the model deviation to obtain a recognition score based on a predetermined rule. The method performs two-way face recognition by integrating block-matched facial features with geometric model comparison, and it exploits the relationship between the match deviation and the model deviation.
    Type: Application
    Filed: June 19, 2012
    Publication date: October 3, 2013
    Inventors: Yea-Shuan Huang, Kuo-Ta Peng
  • Publication number: 20130259302
    Abstract: A method of object tracking is provided, comprising: creating areas of a tracking object and a non-tracking object respectively; determining whether the state of the tracking object and the non-tracking object is separation, proximity, or overlap; creating at least one separation template image of a separation area of the tracking object and/or the non-tracking object if the tracking object is proximate to the non-tracking object; fetching all feature points of an overlapping area of the tracking object and the non-tracking object if the two objects overlap; matching each of the feature points against the separation template image so as to calculate a corresponding matching error score; and comparing the matching error score of each feature point with that of the separation template image so as to determine whether the feature point belongs to the tracking object or the non-tracking object.
    Type: Application
    Filed: June 19, 2012
    Publication date: October 3, 2013
    Inventors: Yea-Shuan Huang, Yu-Chung Chen
  • Patent number: 8311358
    Abstract: The present invention provides a method for extracting an image texture signal, a method for identifying an image, and a system for identifying an image. The method for extracting an image texture signal comprises the following steps: extracting a first image signal; applying a first operation procedure to the first image signal to obtain a second image signal; applying a second operation procedure to the second image signal to obtain a third image signal; applying a third operation procedure to the third image signal to obtain a fourth image signal; and outputting the fourth image signal. The first image signal is thereby transformed into the fourth image signal by the method for extracting an image texture signal.
    Type: Grant
    Filed: July 13, 2010
    Date of Patent: November 13, 2012
    Assignee: Chung-Hua University
    Inventors: Yea-Shuan Huang, Chu-Yun Li
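    The abstract above leaves the three operation procedures unspecified, so the sketch below only illustrates the first-to-fourth signal chaining with placeholder operations (normalization, gradient magnitude, binarization); none of these operations are taken from the patent.
      import numpy as np

      def op1(image):
          # Placeholder first operation: convert to float and rescale to [0, 1].
          img = np.asarray(image, float)
          return (img - img.min()) / (img.max() - img.min() + 1e-9)

      def op2(image):
          # Placeholder second operation: gradient magnitude.
          gy, gx = np.gradient(image)
          return np.hypot(gx, gy)

      def op3(image):
          # Placeholder third operation: binarize against the mean response.
          return (image > image.mean()).astype(np.uint8)

      def texture_signal(image):
          # First signal -> second -> third -> fourth signal, mirroring the abstract's flow.
          return op3(op2(op1(image)))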
  • Publication number: 20120169895
    Abstract: A method and an apparatus for capturing facial expressions are provided, in which different facial expressions of a user are captured through a face recognition technique. In the method, a plurality of sequentially captured images containing human faces is received. Regional features of the human faces in the images are respectively captured to generate a target feature vector. The target feature vector is compared with a plurality of previously stored feature vectors to generate a parameter value. When the parameter value is higher than a threshold, one of the images is selected as a target image. Moreover, facial expression recognition and classification procedures can be further performed. For example, the target image is recognized to obtain a facial expression state, and the image is classified according to the facial expression state.
    Type: Application
    Filed: March 24, 2011
    Publication date: July 5, 2012
    Applicant: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
    Inventors: Shian Wan, Yuan-Shi Liao, Yea-Shuan Huang, Shun-Hsu Chuang
  • Publication number: 20110280488
    Abstract: The present invention provides a method for extracting an image texture signal, a method for identifying an image, and a system for identifying an image. The method for extracting an image texture signal comprises the following steps: extracting a first image signal; applying a first operation procedure to the first image signal to obtain a second image signal; applying a second operation procedure to the second image signal to obtain a third image signal; applying a third operation procedure to the third image signal to obtain a fourth image signal; and outputting the fourth image signal. The first image signal is thereby transformed into the fourth image signal by the method for extracting an image texture signal.
    Type: Application
    Filed: July 13, 2010
    Publication date: November 17, 2011
    Inventors: Yea-Shuan Huang, Chu-Yun Li
  • Patent number: 7929729
    Abstract: A method of image processing comprises receiving an image frame including a plurality of pixels, each of the plurality of pixels including image information; conducting a first extraction based on the image information to identify foreground pixels related to a foreground object in the image frame and background pixels related to a background of the image frame; scanning the image frame in regions; identifying whether each of the regions includes a sufficient number of foreground pixels; identifying whether each region that includes a sufficient number of foreground pixels includes a foreground object; clustering regions that include a foreground object into at least one group, each group corresponding to a different foreground object in the image frame; and conducting a second extraction for each group to identify whether a foreground pixel in that group is to be converted to a background pixel.
    Type: Grant
    Filed: April 2, 2007
    Date of Patent: April 19, 2011
    Assignee: Industrial Technology Research Institute
    Inventors: Yea-Shuan Huang, Hao-Ying Cheng, Po-Feng Cheng, Shih-Chun Wang
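    A minimal NumPy sketch of the region scanning and clustering described in the abstract above: the frame's binary foreground mask is scanned in fixed-size regions, regions with enough foreground pixels are kept, and 4-connected kept regions are grouped so each group approximates one foreground object. The region size and density threshold are assumptions.
      import numpy as np

      def dense_regions(mask, size=16, min_ratio=0.3):
          # Keep the size x size regions containing a sufficient share of foreground pixels.
          h, w = mask.shape
          kept = set()
          for y in range(0, h - size + 1, size):
              for x in range(0, w - size + 1, size):
                  if mask[y:y + size, x:x + size].mean() >= min_ratio:
                      kept.add((y // size, x // size))
          return kept

      def cluster_regions(kept):
          # Group 4-connected kept regions; each group corresponds to one candidate object.
          groups, seen = [], set()
          for cell in kept:
              if cell in seen:
                  continue
              stack, group = [cell], []
              while stack:
                  cy, cx = stack.pop()
                  if (cy, cx) in seen:
                      continue
                  seen.add((cy, cx))
                  group.append((cy, cx))
                  for nb in ((cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)):
                      if nb in kept and nb not in seen:
                          stack.append(nb)
              groups.append(group)
          return groups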
  • Publication number: 20080240500
    Abstract: A method of image processing comprises receiving an image frame including a plurality of pixels, each of the plurality of pixels including image information; conducting a first extraction based on the image information to identify foreground pixels related to a foreground object in the image frame and background pixels related to a background of the image frame; scanning the image frame in regions; identifying whether each of the regions includes a sufficient number of foreground pixels; identifying whether each region that includes a sufficient number of foreground pixels includes a foreground object; clustering regions that include a foreground object into at least one group, each group corresponding to a different foreground object in the image frame; and conducting a second extraction for each group to identify whether a foreground pixel in that group is to be converted to a background pixel.
    Type: Application
    Filed: April 2, 2007
    Publication date: October 2, 2008
    Applicant: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
    Inventors: Yea-Shuan Huang, Hao-Ying Cheng, Po-Feng Cheng, Shih-Chun Wang
  • Patent number: 7372981
    Abstract: A statistical facial feature extraction method is disclosed. In a training phase, N training face images are each labeled with n feature points located in n different blocks to form N shape vectors. Next, a principal component analysis (PCA) technique is used to obtain a statistical face shape model after aligning each shape vector with a reference shape vector. In an execution phase, initial positions of the desired facial features are first estimated from the coordinates of the mean shape of the aligned training face images obtained in the training phase, and k candidates are labeled in each of the n search ranges corresponding to the above-mentioned initial positions, yielding k^n different combinations of test shape vectors. Finally, the coordinates of the test shape vector that best matches the mean shape of the aligned training face images and the statistical face shape model are assigned as the facial features of the test face image.
    Type: Grant
    Filed: November 25, 2003
    Date of Patent: May 13, 2008
    Assignee: Industrial Technology Research Institute
    Inventors: Shang-Hong Lai, Jiang-Ge Chen, Yea-Shuan Huang
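    A minimal NumPy sketch of the two phases described in the abstract above: PCA on aligned training shape vectors, then an exhaustive search over the k^n candidate combinations scored by reconstruction error under the shape model. Shapes are assumed to be flattened (x, y) coordinate vectors, and the number of retained modes is an assumption.
      from itertools import product
      import numpy as np

      def train_shape_model(shapes, n_modes=5):
          # PCA shape model from N aligned training shapes (each a flattened coordinate vector).
          X = np.asarray(shapes, float)
          mean = X.mean(axis=0)
          _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
          return mean, vt[:n_modes]

      def shape_score(shape, mean, modes):
          # Reconstruction error of a candidate shape under the PCA model (lower is better).
          d = np.asarray(shape, float) - mean
          return float(np.linalg.norm(d - modes.T @ (modes @ d)))

      def best_combination(candidate_sets, mean, modes):
          # Exhaustively score the k^n combinations (one candidate point per feature).
          best, best_err = None, np.inf
          for combo in product(*candidate_sets):
              shape = np.concatenate([np.asarray(c, float) for c in combo])
              err = shape_score(shape, mean, modes)
              if err < best_err:
                  best, best_err = shape, err
          return best, best_err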
  • Patent number: 7305124
    Abstract: A method for adjusting image acquisition parameters to optimize object extraction is disclosed. It applies to an object that forms a specific cluster in a color coordinate space after a coordinate projection, so that the cluster contributes to a specific color model, such as a human skin color model. The method first locates a target object within a search window in a selected image, then applies the specific color model to obtain the image acquisition parameter(s) according to the color distribution and features of the target object, and transforms the image according to the adjusted image acquisition parameter(s). Consequently, a complete and clear target object can be extracted from the transformed image by applying the specific color model, and follow-up images captured under the same image acquisition conditions as the aforesaid image can also be transformed with the same image acquisition parameter(s).
    Type: Grant
    Filed: June 29, 2004
    Date of Patent: December 4, 2007
    Assignee: Industrial Technology Research Institute
    Inventors: Hung-Xin Zhao, Yea-Shuan Huang, Chung-Mou Pengwu
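    A minimal NumPy sketch of the idea in the abstract above, under the simplifying assumption that the acquisition parameter is a per-channel gain chosen so the located object's mean color matches the skin-color model's mean; a real implementation would instead adjust camera-side parameters such as exposure or white balance, which is outside this sketch.
      import numpy as np

      def estimate_gain(image, window, model_mean):
          # Per-channel gain that maps the windowed object's mean color onto the model mean.
          y0, y1, x0, x1 = window
          obj_mean = image[y0:y1, x0:x1].reshape(-1, image.shape[-1]).astype(float).mean(axis=0)
          return np.asarray(model_mean, float) / np.maximum(obj_mean, 1e-6)

      def apply_gain(image, gain):
          # Transform the image (and follow-up frames captured under the same conditions).
          out = image.astype(float) * gain
          return np.clip(out, 0, 255).astype(np.uint8)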
  • Patent number: 7113637
    Abstract: In order to improve pattern recognition, various kinds of transformations are performed on an input object. One or more recognition algorithms are then performed on the input object's transforms in addition to the input object itself. By performing recognition algorithms on an input object and its transforms, a more comprehensive set of recognition results is generated. A final recognition decision is then made by aggregating the recognition results over the input object and its transforms.
    Type: Grant
    Filed: August 24, 2001
    Date of Patent: September 26, 2006
    Assignee: Industrial Technology Research Institute
    Inventors: Yea-Shuan Huang, Chun-Wei Hsieh
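    A minimal NumPy sketch of the aggregation idea in the abstract above; the classifier is assumed to return per-class scores, and simple score averaging stands in for whatever aggregation rule the patent claims. The example transforms are likewise illustrative.
      import numpy as np

      def recognize_with_transforms(image, transforms, classifier):
          # Run the classifier on the input object and on each transform, then aggregate.
          votes = [classifier(image)] + [classifier(t(image)) for t in transforms]
          scores = np.mean(np.asarray(votes, float), axis=0)
          return int(np.argmax(scores)), scores

      # Example transforms (assumptions): a mirror image and a small horizontal shift.
      example_transforms = [
          lambda img: img[:, ::-1],
          lambda img: np.roll(img, 2, axis=1),
      ]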
  • Publication number: 20060210159
    Abstract: A method for extracting a foreground object from an image comprises selecting a first pixel of the image, selecting a set of second pixels of the image associated with the first pixel, determining a set of contrasts for the first pixel by comparing the image value of the first pixel with that of each of the second pixels, and determining an image structure of the first pixel in accordance with the set of contrasts.
    Type: Application
    Filed: March 15, 2005
    Publication date: September 21, 2006
    Inventors: Yea-Shuan Huang, Hao-Ying Cheng
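    A minimal NumPy sketch of the per-pixel contrast idea in the abstract above: the signs of the contrasts between a pixel and a chosen set of neighbours are packed into a small code representing the pixel's image structure, and a pixel can be flagged as foreground when its structure differs from that of a background frame. The neighbour offsets and the comparison rule are assumptions.
      import numpy as np

      def contrast_structure(gray, y, x, offsets=((-1, 0), (1, 0), (0, -1), (0, 1))):
          # Encode the signs of the contrasts between the pixel and its selected neighbours.
          h, w = gray.shape
          code = 0
          for i, (dy, dx) in enumerate(offsets):
              ny = min(max(y + dy, 0), h - 1)
              nx = min(max(x + dx, 0), w - 1)
              if int(gray[y, x]) > int(gray[ny, nx]):
                  code |= 1 << i
          return code

      def is_foreground(gray_background, gray_current, y, x):
          # Flag the pixel when its contrast structure differs from the background model.
          return contrast_structure(gray_background, y, x) != contrast_structure(gray_current, y, x)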
  • Patent number: 7069257
    Abstract: An RBF pattern recognition method for reducing classification errors is provided. An optimum RBF training approach is obtained for reducing an error calculated by an error function. The invention continuously generates parameter-update differences while learning to recognize the training samples, and the modified parameters are used to adjust the RBF neural network stepwise. The invention can distinguish different degrees of importance and learning contribution among the training samples, evaluating the learning contribution of each training sample to obtain its parameter differences. When the learning contribution is larger, the update difference is larger, which speeds up the learning; accordingly, the parameter difference is zero when a training sample is already classified as the correct pattern type.
    Type: Grant
    Filed: October 25, 2002
    Date of Patent: June 27, 2006
    Assignee: Industrial Technology Research Institute
    Inventor: Yea-Shuan Huang
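    A minimal NumPy sketch of a contribution-scaled update for a single-layer RBF classifier, loosely following the abstract above: correctly classified samples contribute a zero update, while misclassified samples update the output weights in proportion to a margin-based contribution. The margin measure, learning rate, and fixed centers are assumptions, not the patented error function.
      import numpy as np

      def rbf_layer(x, centers, sigma):
          # Gaussian RBF activations of input x for the given centers.
          d = np.linalg.norm(centers - x, axis=1)
          return np.exp(-(d ** 2) / (2.0 * sigma ** 2))

      def train_step(x, label, centers, weights, sigma, lr=0.1):
          # One stepwise adjustment of the output weights, scaled by the sample's contribution.
          phi = rbf_layer(x, centers, sigma)
          scores = weights @ phi
          pred = int(np.argmax(scores))
          if pred == label:
              return weights                              # zero update for correct samples
          contribution = scores[pred] - scores[label]     # larger error margin -> larger update
          grad = np.zeros_like(weights)
          grad[label] -= phi
          grad[pred] += phi
          return weights - lr * contribution * grad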
  • Patent number: 7068843
    Abstract: A method for extracting and matching gesture features of an image is disclosed. An input gesture image is captured, and a closed curve formed by a binary contour image of the gesture is determined by preprocessing the gesture image. A curvature scale space (CSS) image of the gesture is drawn based on the closed curve. Feature parameters for a plurality of sets of the gesture image are determined by extracting the first several peaks from the CSS image as basis points, and each feature parameter of the plurality of sets is compared with each feature parameter of a plurality of reference gesture shapes, each represented by the basis point of its maximal peak, thereby determining the gesture shape corresponding to the gesture image.
    Type: Grant
    Filed: July 31, 2002
    Date of Patent: June 27, 2006
    Assignee: Industrial Technology Research Institute
    Inventors: Chin-Chen Chang, Yea-Shuan Huang
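    A minimal NumPy sketch of the curvature scale space idea in the abstract above: the contour coordinates are smoothed with Gaussian kernels of increasing width and the curvature zero crossings at each scale are collected; their peaks over scale form the CSS feature parameters. A full implementation would smooth the closed contour periodically and track the crossings into peaks, which is omitted here.
      import numpy as np

      def curvature(xs, ys):
          # Curvature of a sampled contour (small epsilon avoids division by zero).
          dx, dy = np.gradient(xs), np.gradient(ys)
          ddx, ddy = np.gradient(dx), np.gradient(dy)
          return (dx * ddy - dy * ddx) / (np.power(dx ** 2 + dy ** 2, 1.5) + 1e-9)

      def css_zero_crossings(xs, ys, sigmas=(1, 2, 4, 8)):
          # Positions of curvature zero crossings at each Gaussian smoothing scale.
          rows = []
          for s in sigmas:
              r = int(3 * s)
              k = np.exp(-0.5 * (np.arange(-r, r + 1) / float(s)) ** 2)
              k /= k.sum()
              sx, sy = np.convolve(xs, k, mode="same"), np.convolve(ys, k, mode="same")
              kappa = curvature(sx, sy)
              rows.append(np.where(np.diff(np.sign(kappa)) != 0)[0])
          return rows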
  • Patent number: 7020345
    Abstract: The present invention relates to methods and systems for illuminant compensation. In particular, these methods and systems include a method for operations on an image, for example, an image of a human face. In the described methods and systems, it is determined for each pixel in the image whether it is part of the face region. A surface fitting is then determined based only on the pixels that are determined to be part of the face region. Also described are methods and systems for image normalization, wherein the standard deviation and average of the gray levels of the pixels are determined and then used to normalize the image so that the gray level of each pixel falls within a particular range.
    Type: Grant
    Filed: April 26, 2001
    Date of Patent: March 28, 2006
    Assignee: Industrial Technology Research Institute
    Inventors: Yao-Hong Tsai, Yea-Shuan Huang, Cheng-Chin Chiang, Chun-Wei Hsieh
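    A minimal NumPy sketch of the two operations described in the abstract above: a least-squares plane fitted over face-region pixels only (a planar surface is an assumed simplification), and mean/standard-deviation normalization of the gray levels to a target range. For instance, normalize(gray - fit_illumination(gray, face_mask)) removes the fitted illumination before normalizing.
      import numpy as np

      def fit_illumination(gray, face_mask):
          # Least-squares plane z = a*x + b*y + c fitted over face-region pixels only.
          ys, xs = np.nonzero(face_mask)
          A = np.column_stack([xs, ys, np.ones_like(xs)]).astype(float)
          coeffs, *_ = np.linalg.lstsq(A, gray[ys, xs].astype(float), rcond=None)
          h, w = gray.shape
          gx, gy = np.meshgrid(np.arange(w), np.arange(h))
          return coeffs[0] * gx + coeffs[1] * gy + coeffs[2]

      def normalize(gray, target_mean=128.0, target_std=40.0):
          # Map the gray levels to a fixed range using their average and standard deviation.
          g = np.asarray(gray, float)
          g = (g - g.mean()) / (g.std() + 1e-6)
          return np.clip(g * target_std + target_mean, 0, 255).astype(np.uint8)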
  • Patent number: 7003135
    Abstract: A system and a method for rapidly tracking multiple faces use a face-like region generator to find face-like regions from skin color, motion, and silhouette information. A face tracking engine tracks faces based on new and old faces and on the skin colors provided by the face-like regions. Each tracked face is fed into a face status checker to determine whether the face-like regions are old faces tracked in a previous frame or possible new faces. If the face-like regions are old faces, a face verification engine checks whether there exists a predefined percentage of overlapping area between an old face and a skin region. If so, the old face is still in the current frame and its position is set to the center of the skin region; otherwise, the position of the old face is found by a correlation operation.
    Type: Grant
    Filed: August 17, 2001
    Date of Patent: February 21, 2006
    Assignee: Industrial Technology Research Institute
    Inventors: Chun-Wei Hsieh, Yea-Shuan Huang
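    A minimal Python sketch of the overlap check described in the abstract above: an old face is kept when a sufficient fraction of its box lies inside a skin region, and its position is re-centred on that region; otherwise a correlation search would be needed (not shown). The box representation and the 50% threshold are assumptions.
      def overlap_ratio(box_a, box_b):
          # Fraction of box_a covered by box_b; boxes are (x0, y0, x1, y1).
          x0, y0 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
          x1, y1 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
          inter = max(0, x1 - x0) * max(0, y1 - y0)
          area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
          return inter / area_a if area_a else 0.0

      def update_face(old_face, skin_region, min_overlap=0.5):
          # Keep tracking the old face if it overlaps the skin region enough; re-centre it there.
          if overlap_ratio(old_face, skin_region) >= min_overlap:
              cx = (skin_region[0] + skin_region[2]) / 2.0
              cy = (skin_region[1] + skin_region[3]) / 2.0
              w, h = old_face[2] - old_face[0], old_face[3] - old_face[1]
              return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)
          return None   # fall back to a correlation search over the previous position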
  • Patent number: 6915022
    Abstract: The present invention discloses an image preprocessing method capable of increasing the accuracy of face detection by enhancing the contrast between dark pixels and their surrounding bright pixels and by increasing the brightness difference between dark pixels and bright pixels. Even under insufficient and non-uniform lighting conditions, the eye-analogue segments of a human face remain obvious, so a subsequent algorithm that uses eye-analogue segments to detect human faces produces more accurate results.
    Type: Grant
    Filed: April 25, 2002
    Date of Patent: July 5, 2005
    Assignee: Industrial Technology Research Institute
    Inventors: Yea-Shuan Huang, Yao-Hong Tsai
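    A minimal NumPy sketch of the enhancement idea in the abstract above: amplifying the difference between each pixel and its local mean pushes dark pixels further below their bright surroundings, so eye-analogue segments stand out even under weak or uneven lighting. The window size and gain are assumptions.
      import numpy as np

      def enhance_dark_regions(gray, k=7, gain=1.5):
          # Amplify the contrast between each pixel and its k x k local mean.
          g = np.asarray(gray, float)
          pad = k // 2
          padded = np.pad(g, pad, mode="edge")
          local = np.zeros_like(g)
          for dy in range(k):          # simple sliding-window sum (box filter)
              for dx in range(k):
                  local += padded[dy:dy + g.shape[0], dx:dx + g.shape[1]]
          local /= k * k
          out = local + gain * (g - local)
          return np.clip(out, 0, 255).astype(np.uint8)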
  • Publication number: 20050141762
    Abstract: A method for adjusting image acquisition parameters to optimize object extraction is disclosed. It applies to an object that forms a specific cluster in a color coordinate space after a coordinate projection, so that the cluster contributes to a specific color model, such as a human skin color model. The method first locates a target object within a search window in a selected image, then applies the specific color model to obtain the image acquisition parameter(s) according to the color distribution and features of the target object, and transforms the image according to the adjusted image acquisition parameter(s). Consequently, a complete and clear target object can be extracted from the transformed image by applying the specific color model, and follow-up images captured under the same image acquisition conditions as the aforesaid image can also be transformed with the same image acquisition parameter(s).
    Type: Application
    Filed: June 29, 2004
    Publication date: June 30, 2005
    Applicant: Industrial Technology Research Institute
    Inventors: Hung-Xin Zhao, Yea-Shuan Huang, Chung-Mou Pengwu