Patents by Inventor Kai-Tai Song

Kai-Tai Song has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20110150301
    Abstract: A face identification method includes the following steps. First, first and second sets of hidden layer parameters, which respectively correspond to first and second database character vectors, are obtained by way of training according to multiple first and second training character data. Next, first and second back propagation neural networks (BPNNs) are established according to the first and second sets of hidden layer parameters, respectively. Then, to-be-identified data are provided to the first BPNN to find a first output character vector. Next, whether the first output character vector satisfies an identification criterion is determined. If not, the to-be-identified data are provided to the second BPNN to find a second output character vector. Then, whether the second output character vector satisfies the identification criterion is determined. If yes, the to-be-identified data are identified as corresponding to the second database character vector.
    Type: Application
    Filed: July 6, 2010
    Publication date: June 23, 2011
    Applicant: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
    Inventors: Kai-Tai Song, Meng-Ju Han, Shih-Chieh Wang
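    The cascaded identification flow described in this abstract can be sketched roughly as below. This is a minimal illustration, assuming small feed-forward networks in place of the trained BPNNs and a cosine-similarity test as the identification criterion; the network sizes, criterion, and threshold are assumptions, not details from the filing.

```python
import numpy as np

def mlp_forward(x, w_hidden, w_out):
    """One hidden-layer feed-forward pass (a stand-in for a trained BPNN)."""
    h = np.tanh(w_hidden @ x)          # hidden layer activation
    return np.tanh(w_out @ h)          # output character vector

def satisfies_criterion(output_vec, db_vec, threshold=0.9):
    """Illustrative criterion: cosine similarity to the database character vector."""
    cos = output_vec @ db_vec / (np.linalg.norm(output_vec) * np.linalg.norm(db_vec) + 1e-9)
    return cos >= threshold

def cascaded_identify(x, networks, db_vectors):
    """Try each (BPNN, database character vector) pair in order; stop at the first match."""
    for idx, ((w_h, w_o), db_vec) in enumerate(zip(networks, db_vectors)):
        out = mlp_forward(x, w_h, w_o)
        if satisfies_criterion(out, db_vec):
            return idx                  # identified as this database character vector
    return None                         # no network satisfied the criterion

# Tiny demo with random weights in place of trained parameters.
rng = np.random.default_rng(0)
dim, hidden = 16, 8
networks = [(rng.normal(size=(hidden, dim)), rng.normal(size=(dim, hidden))) for _ in range(2)]
db_vectors = [rng.normal(size=dim) for _ in range(2)]
print(cascaded_identify(rng.normal(size=dim), networks, db_vectors))
```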
  • Publication number: 20110141258
    Abstract: A method is disclosed in the present disclosure for recognizing emotion by assigning different weights to at least two kinds of unidentified information, such as image and audio information, based on their respective recognition reliability. Each weight is determined by the distance between the test data and the hyperplane and by the standard deviation of the training data, normalized by the mean distance between the training data and the hyperplane, and represents the classification reliability of that kind of information. When the at least two kinds of unidentified information are classified differently by the hyperplane, the method recognizes the emotion according to the information having the higher weight and corrects the wrong classification result of the other information, so as to raise the accuracy of emotion recognition. Meanwhile, the present disclosure also provides a learning step with a higher learning speed through an iterative algorithm.
    Type: Application
    Filed: February 7, 2011
    Publication date: June 16, 2011
    Applicant: Industrial Technology Research Institute
    Inventors: Kai-Tai Song, Meng-Ju Han, Jing-Huai Hsu, Jung-Wei Hong, Fuh-Yu Chang
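    The reliability-weighting idea in this abstract, weighting each modality by its sample's distance to the classification hyperplane normalized by the mean training-data distance, might look roughly like the sketch below. Linear two-class classifiers and the disagreement rule are illustrative assumptions.

```python
import numpy as np

def signed_distance(x, w, b):
    """Signed distance of a sample to the hyperplane w.x + b = 0."""
    return (w @ x + b) / np.linalg.norm(w)

def reliability_weight(x, w, b, train_X):
    """Distance of the test sample to the hyperplane, normalized by the
    mean distance of the training data to the same hyperplane."""
    d_test = abs(signed_distance(x, w, b))
    d_train_mean = np.mean([abs(signed_distance(t, w, b)) for t in train_X])
    return d_test / (d_train_mean + 1e-9)

def fuse(image_x, audio_x, img_clf, aud_clf, img_train, aud_train):
    """If the two modality classifiers disagree, follow the one whose
    sample lies farther (relative to its training data) from its hyperplane."""
    (wi, bi), (wa, ba) = img_clf, aud_clf
    img_label = int(np.sign(wi @ image_x + bi))
    aud_label = int(np.sign(wa @ audio_x + ba))
    if img_label == aud_label:
        return img_label
    img_w = reliability_weight(image_x, wi, bi, img_train)
    aud_w = reliability_weight(audio_x, wa, ba, aud_train)
    return img_label if img_w >= aud_w else aud_label

# Demo with random hyperplanes and training sets.
rng = np.random.default_rng(1)
dim = 8
img_clf, aud_clf = (rng.normal(size=dim), 0.1), (rng.normal(size=dim), -0.2)
img_train, aud_train = rng.normal(size=(20, dim)), rng.normal(size=(20, dim))
print(fuse(rng.normal(size=dim), rng.normal(size=dim), img_clf, aud_clf, img_train, aud_train))
```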
  • Publication number: 20110144804
    Abstract: A device for expressing robot autonomous emotions comprises: a sensing unit; a user emotion recognition unit, recognizing current user emotional states after receiving sensed information from the sensing unit, and calculating user emotional strengths based on the current user emotional states; a robot emotion generation unit, generating robot emotional states based on the user emotional strengths; a behavior fusion unit, calculating a plurality of output behavior weights by a fuzzy-neuro network based on the user emotional strengths and a rule table; and a robot reaction unit, expressing a robot emotional behavior based on the output behavior weights and the robot emotional states.
    Type: Application
    Filed: May 13, 2010
    Publication date: June 16, 2011
    Inventors: Kai-Tai Song, Meng-Ju Han, Chia-How Lin
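    A rough sketch of the behavior-fusion step is given below; a plain weighted rule-table lookup stands in for the patent's fuzzy-neuro network, and the emotion labels, behaviors, and table values are invented for illustration only.

```python
import numpy as np

# Hypothetical rule table: rows are recognized user emotions,
# columns are candidate robot behaviors (weights before fusion).
RULE_TABLE = {
    "happy":   np.array([0.7, 0.2, 0.1]),   # [approach, gesture, retreat]
    "angry":   np.array([0.1, 0.3, 0.6]),
    "neutral": np.array([0.4, 0.4, 0.2]),
}
BEHAVIORS = ["approach", "gesture", "retreat"]

def fuse_behaviors(emotion_strengths):
    """Blend the rule-table rows by the recognized emotional strengths
    (a simple weighted average standing in for the fuzzy-neuro network)."""
    total = sum(emotion_strengths.values()) + 1e-9
    weights = sum(s * RULE_TABLE[e] for e, s in emotion_strengths.items()) / total
    return dict(zip(BEHAVIORS, weights))

print(fuse_behaviors({"happy": 0.6, "neutral": 0.3, "angry": 0.1}))
```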
  • Publication number: 20100284619
    Abstract: A face detection apparatus and a face detection method thereof are provided. The face detection apparatus includes a rectangle integral image unit, a feature mapping unit and a cascade and score unit. The rectangle integral image unit provides a rectangle integral image according to an original image. The feature mapping unit determines a face candidate region according to rectangular face feature templates, and calculates feature values of the rectangular face feature templates according to the rectangle integral image. The cascade and score unit judges whether the face candidate region conforms to cascade conditions or not, and gives the face candidate region a score according to the feature values when the face candidate region conforms to the cascade conditions. The face candidate region is a non-face region if the score of the face candidate region is lower than a threshold value.
    Type: Application
    Filed: October 29, 2009
    Publication date: November 11, 2010
    Applicant: NOVATEK MICROELECTRONICS CORP.
    Inventors: Kai-Tai Song, Meng-Ju Han, Shih-Chieh Wang, Ming-Feng Chiang, Chia-Ho Lin
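    The rectangle integral image and feature-value computation at the heart of this detector can be illustrated as follows; the two-rectangle template and the window size are assumptions made for the sketch, not the patent's actual feature templates.

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[0:y+1, 0:x+1]."""
    return np.cumsum(np.cumsum(img, axis=0), axis=1)

def rect_sum(ii, top, left, height, width):
    """Sum of pixels in a rectangle, in O(1) from the integral image."""
    bottom, right = top + height - 1, left + width - 1
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

def two_rect_feature(ii, top, left, height, width):
    """Feature value of a simple two-rectangle template:
    left half minus right half of the candidate region."""
    half = width // 2
    return rect_sum(ii, top, left, height, half) - rect_sum(ii, top, left + half, height, width - half)

img = np.random.default_rng(0).integers(0, 256, size=(24, 24)).astype(np.int64)
ii = integral_image(img)
print(two_rect_feature(ii, 0, 0, 24, 24))
```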
  • Publication number: 20100278385
    Abstract: A facial expression recognition apparatus and a facial expression recognition method thereof are provided. The facial expression recognition apparatus comprises a gray image generating unit, a face edge detection unit, a motion skin extraction unit, a face contour generating unit and a facial expression recognition unit. The gray image generating unit generates a gray image according to an original image. The face edge detection unit outputs a face edge detection result according to the gray image. The motion skin extraction unit generates a motion skin extraction result according to the original image, and generates a face and background division result according to the motion skin extraction result. The face contour generating unit outputs a face contour according to the gray image, the face edge detection result and the face and background division result. The facial expression recognition unit outputs a facial expression recognition result according to the face contour.
    Type: Application
    Filed: November 16, 2009
    Publication date: November 4, 2010
    Applicant: NOVATEK MICROELECTRONICS CORP.
    Inventors: Kai-Tai Song, Meng-Ju Han, Shih-Chieh Wang, Chia-Ho Lin, Chi-Yi Lin
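    The pipeline of units described above might be wired together roughly as in this sketch; each stage here (luma conversion, gradient edges, a crude skin-color rule) is only a stand-in for the corresponding unit in the patent.

```python
import numpy as np

def to_gray(rgb):
    """Gray image from an RGB original (luma weights)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def edge_map(gray, thresh=40.0):
    """Simple gradient-magnitude edge detection standing in for the
    face edge detection unit."""
    gy, gx = np.gradient(gray)
    return np.hypot(gx, gy) > thresh

def skin_mask(rgb):
    """Very rough skin-color rule standing in for the motion skin extraction unit."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)

def face_contour(edges, skin):
    """Face/background division: edges restricted to the skin region."""
    return edges & skin

rgb = np.random.default_rng(0).integers(0, 256, size=(64, 64, 3)).astype(float)
contour = face_contour(edge_map(to_gray(rgb)), skin_mask(rgb))
print(contour.sum(), "contour pixels")
```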
  • Patent number: 7742894
    Abstract: In the present invention, a multi-person pose recognition system has been developed. This system includes a body pose detection module, a CC2420DBK board and a multi-person pose monitoring software module. The body pose detection module includes a triaxial accelerometer, a Zigbee chip and an 8-bit microcontroller. Several body pose detection modules and the CC2420DBK board form a Zigbee wireless sensor network (WSN). The CC2420DBK board functions as the receiver of the Zigbee WSN and communicates with a robot onboard computer or a host computer through an RS-232 port. The multi-person pose monitoring software monitors and records activities of multiple users simultaneously. The present invention provides a pose recognition algorithm by combining time-domain analysis and wavelet transform analysis. This algorithm has been implemented in the microcontroller of a body pose estimation module.
    Type: Grant
    Filed: March 14, 2008
    Date of Patent: June 22, 2010
    Assignee: National Chiao Tung University
    Inventors: Chun-Wei Chen, Kai-Tai Song
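    The combination of time-domain analysis and wavelet-transform analysis could be prototyped along these lines; the one-level Haar decomposition and the particular statistics are illustrative choices, not the algorithm actually implemented on the module's 8-bit microcontroller.

```python
import numpy as np

def haar_level(signal):
    """One level of the Haar wavelet transform: approximation and detail."""
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:                       # pad to even length
        s = np.append(s, s[-1])
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return approx, detail

def pose_features(accel_window):
    """Combine time-domain statistics with wavelet-detail energy for a
    window of tri-axial accelerometer samples (shape: N x 3)."""
    feats = []
    for axis in range(3):
        sig = accel_window[:, axis]
        _, detail = haar_level(sig)
        feats += [sig.mean(), sig.std(), np.sum(detail ** 2)]
    return np.array(feats)

window = np.random.default_rng(0).normal(size=(64, 3))   # fake accelerometer window
print(pose_features(window))
```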
  • Publication number: 20090303042
    Abstract: This invention is an intruder detection system that integrates a wireless sensor network and security robots. Multiple ZigBee wireless sensor modules installed in the environment can detect intruders and abnormal conditions with various sensors, and transmit alerts to the monitoring center and the security robot via the wireless mesh network. The robot can navigate the environment autonomously and approach a target place using its localization system. If a possible intruder is detected, the robot can approach that location and transmit images to the mobile devices of security personnel and users, in order to determine the exact situation in real time.
    Type: Application
    Filed: October 30, 2008
    Publication date: December 10, 2009
    Applicant: National Chiao Tung University
    Inventors: Kai-Tai Song, Chia-Hao Lin, Chih-Sheng Lin, Su-Hen Yang
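    The alert flow from a sensor node to the monitoring center and the robot might be organized as in the sketch below; the SensorAlert fields and the callback interfaces are hypothetical, not part of the disclosed system.

```python
from dataclasses import dataclass

@dataclass
class SensorAlert:
    """Alert reported by a ZigBee sensor module (fields are illustrative)."""
    node_id: int
    location: tuple      # (x, y) of the sensor node in the map frame
    kind: str            # e.g. "motion", "door", "smoke"

def handle_alert(alert, notify, dispatch_robot):
    """Forward the alert to the monitoring center and send the robot
    toward the reporting node's location."""
    notify(f"node {alert.node_id}: {alert.kind} at {alert.location}")
    dispatch_robot(alert.location)

# Minimal stand-ins for the monitoring-center and robot interfaces.
handle_alert(
    SensorAlert(node_id=7, location=(3.2, 1.5), kind="motion"),
    notify=print,
    dispatch_robot=lambda goal: print("robot navigating to", goal),
)
```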
  • Patent number: 7609307
    Abstract: The present invention discloses a heterogeneity-projection hard-decision interpolation method for color reproduction, which utilizes a heterogeneity-projection method to determine the optimal edge direction and then utilizes a hard-decision rule to determine the optimal interpolation direction and obtain the information of the green color elements. The high-frequency information of the plane of the green color elements is incorporated into the processing of the planes of the red color elements and the blue color elements to reduce the restoration errors of the red and blue color elements. Therefore, the present invention can decrease the interpolation-direction errors and achieve a higher PSNR and a better visual effect.
    Type: Grant
    Filed: September 13, 2006
    Date of Patent: October 27, 2009
    Assignee: National Chiao Tung University
    Inventors: Chi-Yi Tsai, Kai-Tai Song
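    A generic gradient-based hard-decision rule for choosing the green interpolation direction is sketched below; it illustrates the idea of interpolating along the smoother direction, but it does not use the patent's specific heterogeneity-projection measure.

```python
import numpy as np

def interpolate_green_at(cfa, y, x):
    """Hard-decision green interpolation at a non-green CFA site:
    measure horizontal and vertical heterogeneity (here, simple local
    gradients) and interpolate along the smoother direction."""
    h_het = abs(cfa[y, x - 1] - cfa[y, x + 1]) + abs(2 * cfa[y, x] - cfa[y, x - 2] - cfa[y, x + 2])
    v_het = abs(cfa[y - 1, x] - cfa[y + 1, x]) + abs(2 * cfa[y, x] - cfa[y - 2, x] - cfa[y + 2, x])
    if h_het < v_het:        # horizontal direction judged smoother
        return (cfa[y, x - 1] + cfa[y, x + 1]) / 2
    if v_het < h_het:        # vertical direction judged smoother
        return (cfa[y - 1, x] + cfa[y + 1, x]) / 2
    return (cfa[y, x - 1] + cfa[y, x + 1] + cfa[y - 1, x] + cfa[y + 1, x]) / 4

cfa = np.random.default_rng(0).integers(0, 256, size=(8, 8)).astype(float)
print(interpolate_green_at(cfa, 4, 4))
```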
  • Publication number: 20090198374
    Abstract: A nursing system of the present invention can locate a person to be nursed through a sensor network widely deployed in an environment, instantaneously detect whether the person to be nursed has had an accident, and forward a message to inform a relative or medical staff. An autonomous robot will actively move beside the person to be nursed and transmit real-time images to a remote computer or PDA so that the relative or medical staff can swiftly ascertain the situation and, in case of emergency, the person to be nursed can be rescued as soon as possible.
    Type: Application
    Filed: April 4, 2008
    Publication date: August 6, 2009
    Inventors: Chi-Yi Tsai, Fu-Sheng Huang, Chen-Yang Lin, Zhi-Sheng Lin, Chun-Wei Chen, Kai-Tai Song
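    One way to picture the positioning-and-dispatch flow is sketched below; the node map, the centroid-based position estimate, and the callback interfaces are purely illustrative assumptions.

```python
import numpy as np

# Hypothetical map of sensor node positions (node id -> (x, y) in metres).
NODE_POSITIONS = {1: (0.0, 0.0), 2: (4.0, 0.0), 3: (4.0, 3.0), 4: (0.0, 3.0)}

def estimate_position(triggered_nodes):
    """Rough position of the person: centroid of the sensor nodes that
    currently detect them (a stand-in for the system's localization)."""
    pts = np.array([NODE_POSITIONS[n] for n in triggered_nodes])
    return pts.mean(axis=0)

def on_accident(triggered_nodes, send_message, send_robot):
    """Accident handling: inform relatives or medical staff and send the
    robot to the estimated position to stream real-time images."""
    pos = estimate_position(triggered_nodes)
    send_message(f"possible accident near {tuple(np.round(pos, 2))}")
    send_robot(pos)

on_accident([2, 3], send_message=print, send_robot=lambda p: print("robot heading to", p))
```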
  • Publication number: 20090161915
    Abstract: In the present invention, a multi-person pose recognition system has been developed. This system includes a body pose detection module, a CC2420DBK board and a multi-person pose monitoring software module. The body pose detection module includes a triaxial accelerometer, a Zigbee chip and an 8-bit microcontroller. Several body pose detection modules and the CC2420DBK board form a Zigbee wireless sensor network (WSN). The CC2420DBK board functions as the receiver of the Zigbee WSN and communicates with a robot onboard computer or a host computer through an RS-232 port. The multi-person pose monitoring software monitors and records activities of multiple users simultaneously. The present invention provides a pose recognition algorithm by combining time-domain analysis and wavelet transform analysis. This algorithm has been implemented in the microcontroller of a body pose estimation module.
    Type: Application
    Filed: March 14, 2008
    Publication date: June 25, 2009
    Inventors: Chun-Wei Chen, Kai-Tai Song
  • Publication number: 20080201144
    Abstract: A method is disclosed in the present invention for recognizing emotion by assigning different weights to at least two kinds of unidentified information, such as image and audio information, based on their respective recognition reliability. Each weight is determined by the distance between the test data and the hyperplane and by the standard deviation of the training data, normalized by the mean distance between the training data and the hyperplane, and represents the classification reliability of that kind of information. The method is capable of recognizing the emotion according to the information having the higher weight when the at least two kinds of unidentified information are classified differently by the hyperplane, and of correcting the wrong classification result of the other information, so as to raise the accuracy of emotion recognition. Meanwhile, the present invention also provides a learning step with a higher learning speed through an iterative algorithm.
    Type: Application
    Filed: August 8, 2007
    Publication date: August 21, 2008
    Applicant: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
    Inventors: Kai-Tai Song, Meng-Ju Han, Jing-Huai Hsu, Jung-Wei Hong, Fuh-Yu Chang
  • Publication number: 20080062479
    Abstract: The present invention discloses a heterogeneity-projection hard-decision interpolation method for color reproduction, which utilizes a heterogeneity-projection method to determine the optimal edge direction and then utilizes a hard-decision rule to determine the optimal interpolation direction and obtain the information of the green color elements. The high-frequency information of the plane of the green color elements is incorporated into the processing of the planes of the red color elements and the blue color elements to reduce the restoration errors of the red and blue color elements. Therefore, the present invention can decrease the interpolation-direction errors and achieve a higher PSNR and a better visual effect.
    Type: Application
    Filed: September 13, 2006
    Publication date: March 13, 2008
    Inventors: Chi-Yi Tsai, Kai-Tai Song