Patents by Inventor Kai-Tai Song
Kai-Tai Song has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20160342847
Abstract: A method for image recognition of an instrument includes: obtaining an input image containing a to-be-recognized instrument; selecting from the input image a region-of-interest containing the to-be-recognized instrument; determining, in a high-to-low order of priority values of instrument categories, whether the to-be-recognized instrument contained in the region-of-interest belongs to one of the instrument categories according to the region-of-interest and a respective one of plural groups of sample images; and increasing the priority value of the one of the instrument categories when it is determined that the to-be-recognized instrument belongs to the one of the instrument categories.
Type: Application
Filed: February 2, 2016
Publication date: November 24, 2016
Inventors: Kai-Tai Song, Kateryna Zinchenko
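The priority-ordered matching loop described in this abstract can be sketched minimally in Python. The `matches` predicate and the +1 priority increment are hypothetical stand-ins, since the abstract specifies neither the sample-image comparison nor the exact update rule:

```python
def recognize_instrument(roi, categories, matches):
    """Check instrument categories in descending priority order; on a
    match, raise that category's priority so it is tried earlier next time.

    categories: dict mapping category name -> current priority value.
    matches(roi, name) -> bool, a stand-in for comparing the
    region-of-interest against that category's group of sample images.
    Returns the matched category name, or None if nothing matches.
    """
    for name in sorted(categories, key=categories.get, reverse=True):
        if matches(roi, name):
            categories[name] += 1  # assumed increment; abstract only says "increasing"
            return name
    return None
```

With this structure, frequently observed instruments migrate to the front of the search order, which is the efficiency gain the priority values appear to provide.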
-
Patent number: 9489934
Abstract: A method for selecting music based on face recognition, a music selecting system and an electronic apparatus are provided. The method includes the following steps: accessing a database to retrieve a plurality of song emotion coordinates corresponding to a plurality of songs; mapping the song emotion coordinates to an emotion coordinate graph; capturing a human face image; identifying an emotion state corresponding to the human face image, and transforming the emotion state to a current emotion coordinate; mapping the current emotion coordinate to the emotion coordinate graph; updating a song playlist according to a relative position between the current emotion coordinate and a target emotion coordinate, wherein the song playlist includes a plurality of songs to be played that direct the current emotion coordinate to the target emotion coordinate.
Type: Grant
Filed: May 22, 2014
Date of Patent: November 8, 2016
Assignee: National Chiao Tung University
Inventors: Kai-Tai Song, Chao-Yu Lin
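One plausible reading of "songs to be played that direct the current emotion coordinate to the target emotion coordinate" is to pick the song nearest each waypoint along the straight path from the current coordinate to the target. This sketch assumes 2-D emotion coordinates and a hypothetical `steps` parameter; the patent does not disclose the actual selection rule:

```python
import math

def build_playlist(song_coords, current, target, steps=3):
    """Pick one song per waypoint on the path from the listener's current
    emotion coordinate toward the target emotion coordinate.

    song_coords: dict mapping song name -> (x, y) emotion coordinate.
    """
    playlist = []
    for i in range(1, steps + 1):
        t = i / steps  # fraction of the way from current to target
        waypoint = (current[0] + t * (target[0] - current[0]),
                    current[1] + t * (target[1] - current[1]))
        nearest = min(song_coords,
                      key=lambda s: math.dist(song_coords[s], waypoint))
        if nearest not in playlist:
            playlist.append(nearest)
    return playlist
```

Played in order, such a playlist would present songs whose emotional character shifts gradually from the listener's current state toward the target state.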
-
Patent number: 9481087
Abstract: A robot and a control method thereof are provided. The method includes the following steps: receiving a manual control command from a remote control device, and accumulating a duration of issuing the manual control commands; estimating an estimated moving velocity corresponding to the manual control command; detecting a surrounding environment of the robot and generating an autonomous navigation command based on the surrounding environment; determining a first weighting value associated with the manual control command based on the duration, the estimated moving velocity and the distance to obstacles; determining a second weighting value associated with the autonomous navigation command based on the first weighting value; linearly combining the manual control command and the autonomous navigation command based on the first weighting value and the second weighting value to generate a moving control command; and moving based on the moving control command.
Type: Grant
Filed: April 30, 2015
Date of Patent: November 1, 2016
Assignee: National Chiao Tung University
Inventors: Kai-Tai Song, Ming-Han Lin
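A minimal sketch of the linear command blending. The gains `k_d`, `k_v`, `k_o` and the clamped-sum form of the first weighting value are assumptions: the abstract only states that the first weight depends on duration, estimated velocity and obstacle distance, and that the second weight is derived from the first (taken here as its complement):

```python
def blend_commands(manual_v, auto_v, duration, est_velocity, obstacle_dist,
                   k_d=0.1, k_v=0.2, k_o=0.5):
    """Linearly combine a manual control command and an autonomous
    navigation command (both scalar velocities here, for simplicity).

    The gains and the clamped weighted sum are illustrative assumptions,
    not the patented formula.
    """
    # First weighting value: grows with sustained, fast manual driving
    # and with clearance to obstacles; clamped to [0, 1].
    w1 = min(1.0, k_d * duration + k_v * est_velocity + k_o * obstacle_dist)
    w2 = 1.0 - w1  # second weighting value, derived from the first
    return w1 * manual_v + w2 * auto_v
```

The effect is a sliding handover: a confident operator in open space gets near-full manual authority, while hesitant input near obstacles defers to the autonomous planner.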
-
Publication number: 20160184990
Abstract: A robot and a control method thereof are provided. The method includes the following steps: receiving a manual control command from a remote control device, and accumulating a duration of issuing the manual control commands; estimating an estimated moving velocity corresponding to the manual control command; detecting a surrounding environment of the robot and generating an autonomous navigation command based on the surrounding environment; determining a first weighting value associated with the manual control command based on the duration, the estimated moving velocity and the distance to obstacles; determining a second weighting value associated with the autonomous navigation command based on the first weighting value; linearly combining the manual control command and the autonomous navigation command based on the first weighting value and the second weighting value to generate a moving control command; and moving based on the moving control command.
Type: Application
Filed: April 30, 2015
Publication date: June 30, 2016
Inventors: Kai-Tai Song, Ming-Han Lin
-
Publication number: 20150206523
Abstract: A method for selecting music based on face recognition, a music selecting system and an electronic apparatus are provided. The method includes the following steps: accessing a database to retrieve a plurality of song emotion coordinates corresponding to a plurality of songs; mapping the song emotion coordinates to an emotion coordinate graph; capturing a human face image; identifying an emotion state corresponding to the human face image, and transforming the emotion state to a current emotion coordinate; mapping the current emotion coordinate to the emotion coordinate graph; updating a song playlist according to a relative position between the current emotion coordinate and a target emotion coordinate, wherein the song playlist includes a plurality of songs to be played that direct the current emotion coordinate to the target emotion coordinate.
Type: Application
Filed: May 22, 2014
Publication date: July 23, 2015
Applicant: National Chiao Tung University
Inventors: Kai-Tai Song, Chao-Yu Lin
-
Patent number: 9081384
Abstract: An autonomous electronic apparatus and a navigation method thereof are provided. The navigation method includes the following steps. Firstly, a calling signal from a target is received through a wireless sensor network. A position relationship between the target and the autonomous electronic apparatus is analyzed to generate a first speed. Next, an image set is captured and an image relationship between the image set and the target is analyzed to generate a second speed. Afterwards, a weighting value related to the position relationship is calculated. Besides, a moving speed is calculated according to the weighting value, the first speed and the second speed, and a moving status of the autonomous electronic apparatus moving toward the target is controlled via the moving speed.
Type: Grant
Filed: March 31, 2013
Date of Patent: July 14, 2015
Assignee: National Chiao Tung University
Inventors: Kai-Tai Song, Shang-Chun Hung
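The final step, combining the wireless-network-derived first speed and the vision-derived second speed through a weighting value, can be sketched as a convex mix. The tuple representation and the complementary weighting are assumptions; the abstract does not give the combination formula:

```python
def fuse_speeds(wsn_speed, image_speed, weight):
    """Mix two candidate speed vectors into one moving speed.

    wsn_speed:   speed derived from the wireless-sensor-network position
                 relationship (the "first speed").
    image_speed: speed derived from the image relationship (the "second speed").
    weight:      value in [0, 1] related to the position relationship;
                 assumed here to act as a convex-combination coefficient.
    """
    assert 0.0 <= weight <= 1.0
    return tuple(weight * w + (1.0 - weight) * v
                 for w, v in zip(wsn_speed, image_speed))
```

Far from the target, where vision is unreliable, a large weight would favor the coarse wireless cue; close in, a small weight would hand control to the image-based cue.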
-
Patent number: 8965762
Abstract: A method is disclosed in the present disclosure for recognizing emotion by setting different weights for at least two kinds of unidentified information, such as image and audio information, based on their respective recognition reliabilities. The weights are determined by the distance between the test data and the hyperplane and the standard deviation of the training data, normalized by the mean distance between the training data and the hyperplane, representing the classification reliability of the different information. When the at least two kinds of unidentified information are classified differently by the hyperplane, the method recognizes the emotion according to the unidentified information having the higher weight and corrects the wrong classification result of the other unidentified information, so as to raise the accuracy of emotion recognition. The present disclosure also provides a learning step that achieves higher learning speed through an iterative algorithm.
Type: Grant
Filed: February 7, 2011
Date of Patent: February 24, 2015
Assignee: Industrial Technology Research Institute
Inventors: Kai-Tai Song, Meng-Ju Han, Jing-Huai Hsu, Jung-Wei Hong, Fuh-Yu Chang
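The disagreement-resolution rule can be sketched as follows. Margins normalized by the mean training-data margin stand in for the per-modality weights; the standard-deviation term mentioned in the abstract is omitted for brevity, so this is a simplified illustration rather than the patented formula:

```python
def fuse_modalities(image_pred, audio_pred,
                    image_margin, audio_margin,
                    image_mean_margin, audio_mean_margin):
    """Resolve a disagreement between image- and audio-based classifiers.

    *_margin:      distance of the test sample to that classifier's hyperplane.
    *_mean_margin: mean distance of the training data to the same hyperplane,
                   used to normalize margins across modalities.
    When both classifiers agree, the shared label is returned; otherwise
    the modality with the larger normalized margin (higher reliability) wins.
    """
    if image_pred == audio_pred:
        return image_pred
    w_img = abs(image_margin) / image_mean_margin
    w_aud = abs(audio_margin) / audio_mean_margin
    return image_pred if w_img >= w_aud else audio_pred
```

Normalizing by each modality's own mean margin is what makes the two reliability scores comparable, since raw hyperplane distances live on different scales.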
-
Patent number: 8879799
Abstract: The present invention discloses a human identification system based on fusion of face recognition and speaker recognition, together with a method and a service robot thereof. The system fuses the results of the face recognition and the speaker recognition, and further uses confidence indices to estimate the confidence levels of the two recognition results. If only one of the confidence indices of the two recognition results reaches the threshold, then only that result is used as the output. If both confidence indices reach the threshold, then the two recognition results are fused into the final output.
Type: Grant
Filed: November 13, 2012
Date of Patent: November 4, 2014
Assignee: National Chiao Tung University
Inventors: Kai-Tai Song, Shuo-Cheng Chien, Chao-Yu Lin, Yi-Wen Chen, Sin-Horng Chen, Chen-Yu Chiang, Yi-Chiao Wu
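The thresholded decision logic maps directly to code. The fusion rule used when both indices pass (agree, or fall back to the more confident modality) is an assumption, as the abstract does not describe how the two results are fused:

```python
def identify(face_result, face_conf, speaker_result, speaker_conf,
             threshold=0.6):
    """Fuse face and speaker identification results via confidence indices.

    Only results whose confidence index reaches `threshold` are eligible.
    The both-pass fusion rule (agreement, else higher confidence) and the
    default threshold value are illustrative assumptions.
    Returns an identity, or None if neither modality is confident enough.
    """
    face_ok = face_conf >= threshold
    speaker_ok = speaker_conf >= threshold
    if face_ok and speaker_ok:
        if face_result == speaker_result:
            return face_result
        return face_result if face_conf >= speaker_conf else speaker_result
    if face_ok:
        return face_result
    if speaker_ok:
        return speaker_result
    return None
```

Returning `None` when neither index clears the threshold lets a service robot ask for another look or another utterance instead of committing to a weak guess.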
-
Publication number: 20140172431
Abstract: A music playing system and a music playing method suitable for playing music based on speech emotion recognition are provided. The music playing method includes the following steps. A plurality of songs and song emotion coordinates of the songs mapping on an emotion coordinate graph are stored in a first database. Emotion recognition parameters are stored in a second database. A voice data is received and analyzed, and a current emotion coordinate of the voice data mapping on the emotion coordinate graph is obtained according to the second database. The setting of a target emotion coordinate is received. At least one specific song emotion coordinate closest to a cheer-up line connecting the current emotion coordinate and the target emotion coordinate is found. Songs corresponding to aforementioned emotion coordinates are sequentially played.
Type: Application
Filed: April 10, 2013
Publication date: June 19, 2014
Applicant: National Chiao Tung University
Inventors: Kai-Tai Song, Carlos Cervantes
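Finding the song emotion coordinate closest to the cheer-up line is a point-to-segment distance query. A minimal sketch, assuming 2-D coordinates (the abstract does not fix the dimensionality):

```python
import math

def closest_to_cheer_up_line(song_coords, current, target):
    """Return the song whose emotion coordinate lies nearest the
    cheer-up line: the segment joining the current and target coordinates.

    song_coords: dict mapping song name -> (x, y) emotion coordinate.
    """
    def dist_to_segment(p):
        ax, ay = current
        bx, by = target
        dx, dy = bx - ax, by - ay
        seg_len2 = dx * dx + dy * dy
        if seg_len2 == 0:  # degenerate line: current equals target
            return math.dist(p, current)
        # Project p onto the line, clamped to the segment endpoints.
        t = max(0.0, min(1.0, ((p[0] - ax) * dx + (p[1] - ay) * dy) / seg_len2))
        return math.dist(p, (ax + t * dx, ay + t * dy))

    return min(song_coords, key=lambda s: dist_to_segment(song_coords[s]))
```

Clamping the projection parameter `t` to [0, 1] keeps the match on the emotional path itself, so selected songs stay between the listener's current state and the target state.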
-
Publication number: 20140156125
Abstract: An autonomous electronic apparatus and a navigation method thereof are provided. The navigation method includes the following steps. Firstly, a calling signal from a target is received through a wireless sensor network. A position relationship between the target and the autonomous electronic apparatus is analyzed to generate a first speed. Next, an image set is captured and an image relationship between the image set and the target is analyzed to generate a second speed. Afterwards, a weighting value related to the position relationship is calculated. Besides, a moving speed is calculated according to the weighting value, the first speed and the second speed, and a moving status of the autonomous electronic apparatus moving toward the target is controlled via the moving speed.
Type: Application
Filed: March 31, 2013
Publication date: June 5, 2014
Applicant: National Chiao Tung University
Inventors: Kai-Tai Song, Shang-Chun Hung
-
Patent number: 8634595
Abstract: The present invention provides a method for dynamically setting an environmental boundary in an image and a method for instantly determining human activity according to the method for dynamically setting the environmental boundary. The method for instantly determining human activity includes the steps of retrieving at least an initial environmental image with a predetermined angle, and calculating a boundary setting equation of an object and an environmental boundary in the initial environmental image; retrieving a dynamic environmental image having the object by using a movable platform, and figuring out a new environmental boundary; determining a human image in the dynamic environmental image, recording retention time of the human image, and determining a human posture; and determining human location according to the environmental boundary in the dynamic environmental image and the human image, and instantly determining the human activity.
Type: Grant
Filed: August 11, 2011
Date of Patent: January 21, 2014
Assignee: National Chiao Tung University
Inventors: Kai-Tai Song, Wei-Jyun Chen
-
Publication number: 20140016835
Abstract: The present invention discloses a human identification system based on fusion of face recognition and speaker recognition, together with a method and a service robot thereof. The system fuses the results of the face recognition and the speaker recognition, and further uses confidence indices to estimate the confidence levels of the two recognition results. If only one of the confidence indices of the two recognition results reaches the threshold, then only that result is used as the output. If both confidence indices reach the threshold, then the two recognition results are fused into the final output.
Type: Application
Filed: November 13, 2012
Publication date: January 16, 2014
Applicant: National Chiao Tung University
Inventors: Kai-Tai Song, Shuo-Cheng Chien, Chao-Yu Lin, Yi-Wen Chen, Sin-Horng Chen, Chen-Yu Chiang, Yi-Chiao Wu
-
Publication number: 20140005475
Abstract: An image tracking system and an image tracking method are provided. The image tracking system includes an image capture module, a detection module and a processing module. The image capture module captures a real-time image. The detection module analyzes the real-time image and detects the positions of a plurality of instruments in the real-time image. The processing module defines a buffer zone in the real-time image, analyzes whether the instruments are disposed in the buffer zone based on the positions of the instruments, and determines whether the spacing distance between the instruments is smaller than a preset distance. When the spacing distance is smaller than the preset distance or the instruments are disposed outside the buffer zone, the processing module emits a control signal to move the image capture module to a capture position. As a result, the present invention achieves real-time image tracking and provides a stable image.
Type: Application
Filed: November 14, 2012
Publication date: January 2, 2014
Applicant: National Chiao Tung University
Inventors: Kai-Tai Song, Chun-Ju Chen
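The trigger condition for repositioning the camera, as stated in the abstract, reduces to two geometric checks. A minimal sketch, assuming an axis-aligned rectangular buffer zone and 2-D instrument positions:

```python
import math

def camera_needs_to_move(positions, buffer_zone, min_spacing):
    """Decide whether the image capture module should be repositioned.

    positions:   list of (x, y) instrument positions in the real-time image.
    buffer_zone: (x0, y0, x1, y1) axis-aligned rectangle (an assumption;
                 the abstract does not specify the zone's shape).
    min_spacing: the preset distance between instruments.
    Returns True when any instrument leaves the buffer zone or two
    instruments come closer than the preset spacing.
    """
    x0, y0, x1, y1 = buffer_zone
    for x, y in positions:
        if not (x0 <= x <= x1 and y0 <= y <= y1):
            return True  # an instrument has left the buffer zone
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if math.dist(positions[i], positions[j]) < min_spacing:
                return True  # instruments too close together
    return False
```

Keeping the camera still until one of these conditions fires is what yields the stable image the abstract emphasizes: the view only moves when the instruments are about to leave it or crowd each other.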
-
Patent number: 8437515
Abstract: A face detection apparatus and a face detection method thereof are provided. The face detection apparatus includes a rectangle integral image unit, a feature mapping unit and a cascade and score unit. The rectangle integral image unit provides a rectangle integral image according to an original image. The feature mapping unit determines a face candidate region according to rectangular face feature templates, and calculates feature values of the rectangular face feature templates according to the rectangle integral image. The cascade and score unit judges whether the face candidate region conforms to cascade conditions or not, and gives the face candidate region a score according to the feature values when the face candidate region conforms to the cascade conditions. The face candidate region is a non-face region if the score of the face candidate region is lower than a threshold value.
Type: Grant
Filed: October 29, 2009
Date of Patent: May 7, 2013
Assignee: Novatek Microelectronics Corp.
Inventors: Kai-Tai Song, Meng-Ju Han, Shih-Chieh Wang, Ming-Feng Chiang, Chia-Ho Lin
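The rectangle integral image is the standard trick that makes the rectangular feature values cheap to compute: after one pass over the image, the sum over any rectangle costs four table lookups. A generic sketch of that technique (not the specific circuit the patent claims):

```python
def integral_image(img):
    """Build a padded integral image: ii[y][x] is the sum of img over
    the rectangle spanning rows 0..y-1 and columns 0..x-1."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, top, left, height, width):
    """Sum of any rectangle in four lookups -- the basis for evaluating
    rectangular (Haar-like) face feature templates quickly."""
    return (ii[top + height][left + width] - ii[top][left + width]
            - ii[top + height][left] + ii[top][left])
```

A feature value is then just a difference of two or three such rectangle sums, which is why a cascade can score thousands of candidate regions in real time.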
-
Patent number: 8437516
Abstract: A facial expression recognition apparatus and a facial expression recognition method thereof are provided. The facial expression recognition apparatus comprises a gray image generating unit, a face edge detection unit, a motion skin extraction unit, a face contour generating unit and a facial expression recognition unit. The gray image generating unit generates a gray image according to an original image. The face edge detection unit outputs a face edge detection result according to the gray image. The motion skin extraction unit generates a motion skin extraction result according to the original image, and generates a face and background division result according to the motion skin extraction result. The face contour generating unit outputs a face contour according to the gray image, the face edge detection result and the face and background division result. The facial expression recognition unit outputs a facial expression recognition result according to the face contour.
Type: Grant
Filed: November 16, 2009
Date of Patent: May 7, 2013
Assignee: Novatek Microelectronics Corp.
Inventors: Kai-Tai Song, Meng-Ju Han, Shih-Chieh Wang, Chia-Ho Lin, Chi-Yi Lin
-
Patent number: 8380379
Abstract: The present invention discloses a walking assistive system comprising a motion module, a current detecting module and a central control module. Each motion module includes omni-directional wheels, motors, shaft encoders and servo controllers. The omni-directional wheels are connected to and driven by the motors. The motors are connected to the shaft encoders, which generate rotation speed values corresponding to the rotation speeds of the motors. The servo controllers, connected to the shaft encoders and the motors, receive the rotation speed values and control the motors. The current detecting module, connected to the motors, detects the currents of the motors and generates corresponding current values. The central control module, connected to the motion module and the current detecting module, controls compliant motion of the platform through the motion module according to the rotation speed values and the current values.
Type: Grant
Filed: October 12, 2010
Date of Patent: February 19, 2013
Assignee: National Chiao Tung University
Inventors: Kai-Tai Song, Sin-Yi Jiang, Ko-Tung Huang
-
Publication number: 20120281918
Abstract: The present invention provides a method for dynamically setting an environmental boundary in an image and a method for instantly determining human activity according to the method for dynamically setting the environmental boundary. The method for instantly determining human activity includes the steps of retrieving at least an initial environmental image with a predetermined angle, and calculating a boundary setting equation of an object and an environmental boundary in the initial environmental image; retrieving a dynamic environmental image having the object by using a movable platform, and figuring out a new environmental boundary; determining a human image in the dynamic environmental image, recording retention time of the human image, and determining a human posture; and determining human location according to the environmental boundary in the dynamic environmental image and the human image, and instantly determining the human activity.
Type: Application
Filed: August 11, 2011
Publication date: November 8, 2012
Applicant: National Chiao Tung University
Inventors: Kai-Tai Song, Wei-Jyun Chen
-
Patent number: 8214082
Abstract: A nursing system of the present invention can locate a person to be nursed through a sensor network widely deployed in the environment, instantaneously detect whether the person has had an accident, and forward a message to inform a relative or medical staff. An autonomous robot will actively move beside the person to be nursed and transmit real-time images to a remote computer or PDA, so that the relative or medical staff can swiftly ascertain the situation and, in case of emergency, the person to be nursed can be rescued as soon as possible.
Type: Grant
Filed: April 4, 2008
Date of Patent: July 3, 2012
Assignee: National Chiao Tung University
Inventors: Chi-Yi Tsai, Fu-Sheng Huang, Chen-Yang Lin, Zhi-Sheng Lin, Chun-Wei Chen, Kai-Tai Song
-
Patent number: 8111156
Abstract: This invention is an intruder detection system that integrates a wireless sensor network and security robots. Multiple ZigBee wireless sensor modules installed in the environment can detect intruders and abnormal conditions with various sensors, and transmit alerts to the monitoring center and the security robot via the wireless mesh network. The robot can navigate the environment autonomously and approach a target place using its localization system. If a possible intruder is detected, the robot can approach that location and transmit images to the mobile devices of security personnel and users, in order to determine the exact situation in real time.
Type: Grant
Filed: October 30, 2008
Date of Patent: February 7, 2012
Assignee: National Chiao Tung University
Inventors: Kai-Tai Song, Chia-Hao Lin, Chih-Sheng Lin, Su-Hen Yang
-
Publication number: 20110282529
Abstract: The present invention discloses a walking assistive system comprising a motion module, a current detecting module and a central control module. Each motion module includes omni-directional wheels, motors, shaft encoders and servo controllers. The omni-directional wheels are connected to and driven by the motors. The motors are connected to the shaft encoders, which generate rotation speed values corresponding to the rotation speeds of the motors. The servo controllers, connected to the shaft encoders and the motors, receive the rotation speed values and control the motors. The current detecting module, connected to the motors, detects the currents of the motors and generates corresponding current values. The central control module, connected to the motion module and the current detecting module, controls compliant motion of the platform through the motion module according to the rotation speed values and the current values.
Type: Application
Filed: October 12, 2010
Publication date: November 17, 2011
Applicant: National Chiao Tung University
Inventors: Kai-Tai Song, Sin-Yi Jiang, Ko-Tung Huang