Patents by Inventor Meng-Ju Han

Meng-Ju Han has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20160147231
    Abstract: An automatic guided vehicle includes a vehicle body and a positioning identification module furnished in the vehicle body; the positioning identification module further includes a three-axis magnetic signal sensing unit and a logic operation processing unit. The logic operation processing unit is connected to the three-axis magnetic signal sensing unit and receives the signals transmitted from it. A magnetic pointer unit is furnished adjacent to the travel route of the automatic guided vehicle. The three-axis magnetic signal sensing unit senses the magnetic field of the magnetic pointer unit and generates magnetic field information that is transmitted to the logic operation processing unit.
    Type: Application
    Filed: December 26, 2014
    Publication date: May 26, 2016
    Inventors: Kuan-Chun Sun, Meng-Ju Han, Jwu-Sheng Hu, Cheng-Hua Wu
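    A minimal sketch of the sensing idea in the abstract above, with hypothetical names and units: the magnetic pointer unit is modeled as a point dipole, and a simple field-magnitude threshold stands in for the logic operation processing unit; this is not the patented implementation.
        import numpy as np

        def dipole_field(sensor_pos, magnet_pos, moment):
            """Three-axis flux density of a point dipole at magnet_pos, seen at
            sensor_pos (constant factor dropped; units arbitrary for this sketch)."""
            r = np.asarray(sensor_pos, dtype=float) - np.asarray(magnet_pos, dtype=float)
            d = np.linalg.norm(r)
            r_hat = r / d
            m = np.asarray(moment, dtype=float)
            return (3.0 * r_hat * np.dot(m, r_hat) - m) / d**3

        def detect_marker(field, threshold):
            """Hypothetical 'logic operation' step: flag a magnetic pointer when the
            sensed field magnitude exceeds a threshold."""
            return np.linalg.norm(field) > threshold

        # The vehicle travels along the x-axis; a magnetic pointer sits beside the route.
        marker, moment = np.array([2.0, 0.3, 0.0]), np.array([0.0, 0.0, 1.0])
        for x in np.linspace(0.0, 4.0, 9):
            b = dipole_field([x, 0.0, 0.0], marker, moment)
            if detect_marker(b, threshold=5.0):
                print(f"marker detected near x = {x:.1f}, B = {b}")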
  • Patent number: 9329599
    Abstract: An automatic guided vehicle includes a vehicle body and a positioning identification module furnished in the vehicle body; the positioning identification module further includes a three-axis magnetic signal sensing unit and a logic operation processing unit. The logic operation processing unit is connected to the three-axis magnetic signal sensing unit and receives the signals transmitted from it. A magnetic pointer unit is furnished adjacent to the travel route of the automatic guided vehicle. The three-axis magnetic signal sensing unit senses the magnetic field of the magnetic pointer unit and generates magnetic field information that is transmitted to the logic operation processing unit.
    Type: Grant
    Filed: December 26, 2014
    Date of Patent: May 3, 2016
    Assignee: Industrial Technology Research Institute
    Inventors: Kuan-Chun Sun, Meng-Ju Han, Jwu-Sheng Hu, Cheng-Hua Wu
  • Patent number: 8971583
    Abstract: A distance measurement apparatus and a distance measurement method are provided. The apparatus includes a line-shaped laser transmitter, an image sensing device and a processing unit. The line-shaped laser transmitter transmits a line-shaped laser, and the image sensing device senses the line-shaped laser to output a line-shaped laser image. The processing unit receives the line-shaped laser image and segments it into several sub-line-shaped laser images. The processing unit further calculates a vertical position of the laser line in each sub-line-shaped laser image and, for each sub-line-shaped laser image, outputs distance information according to that sub-image and a transformation relation.
    Type: Grant
    Filed: November 5, 2012
    Date of Patent: March 3, 2015
    Assignee: Industrial Technology Research Institute
    Inventors: Meng-Ju Han, Cheng-Hua Wu, Ching-Yi Kuo, Wei-Han Wang, Jwu-Sheng Hu
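    A minimal sketch of the segment-and-measure idea in the abstract above: the laser image is split into sub-images, the vertical (row) position of the laser line is estimated in each, and a hypothetical calibrated mapping stands in for the patented transformation relation.
        import numpy as np

        def laser_row_positions(image, n_segments=4):
            """Split a line-laser image into vertical sub-images and estimate the
            sub-pixel row of the laser line in each via an intensity-weighted centroid."""
            h, _ = image.shape
            idx = np.arange(h)
            rows = []
            for sub in np.array_split(image, n_segments, axis=1):
                profile = sub.sum(axis=1)          # brightness per row
                total = profile.sum()
                rows.append((idx * profile).sum() / total if total > 0 else float("nan"))
            return rows

        def row_to_distance(row, a=1500.0, b=100.0):
            """Hypothetical transformation relation: a calibrated inverse-linear mapping
            from laser row position to distance (triangulation-like)."""
            return a / (row - b)

        # Synthetic 240x320 frame with a bright laser line at row 180.
        img = np.zeros((240, 320))
        img[180, :] = 255.0
        for i, r in enumerate(laser_row_positions(img)):
            print(f"segment {i}: laser row {r:.1f} -> distance {row_to_distance(r):.2f}")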
  • Patent number: 8965762
    Abstract: A method is disclosed in the present disclosure for recognizing emotion by assigning different weights to at least two kinds of unknown information, such as image and audio information, based on their respective recognition reliability. The weights are determined by the distance between the test data and the hyperplane and by the standard deviation of the training data, and are normalized by the mean distance between the training data and the hyperplane, representing the classification reliability of each kind of information. When the at least two kinds of unidentified information are classified differently by the hyperplane, the method recognizes the emotion according to the unidentified information having the higher weight and corrects the wrong classification result of the other unidentified information, so as to raise the accuracy of emotion recognition. Meanwhile, the present disclosure also provides a learning step with higher learning speed through an iterative algorithm.
    Type: Grant
    Filed: February 7, 2011
    Date of Patent: February 24, 2015
    Assignee: Industrial Technology Research Institute
    Inventors: Kai-Tai Song, Meng-Ju Han, Jing-Huai Hsu, Jung-Wei Hong, Fuh-Yu Chang
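    A minimal sketch of the weighting idea in the abstract above, assuming two SVM-style classifiers (image and audio) whose signed distances to the hyperplane are available; the statistics, thresholds, and function names are illustrative assumptions rather than the patented method.
        import numpy as np

        def reliability_weight(decision_value, mean_train_distance, train_std):
            """Weight a modality by how far the test sample lies from the hyperplane,
            normalized by the mean training-sample distance and moderated by the
            spread (standard deviation) of the training data."""
            return abs(decision_value) / (mean_train_distance + train_std)

        def fuse_emotions(img_decision, aud_decision, img_stats, aud_stats):
            """If both modalities agree on the classification, keep it; otherwise trust
            the modality with the higher weight and override (correct) the other."""
            img_label = 1 if img_decision >= 0 else -1
            aud_label = 1 if aud_decision >= 0 else -1
            if img_label == aud_label:
                return img_label
            w_img = reliability_weight(img_decision, *img_stats)
            w_aud = reliability_weight(aud_decision, *aud_stats)
            return img_label if w_img >= w_aud else aud_label

        # Image SVM weakly votes positive (+0.2); audio SVM strongly votes negative (-1.4).
        print(fuse_emotions(0.2, -1.4, img_stats=(1.0, 0.3), aud_stats=(1.1, 0.2)))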
  • Publication number: 20140098218
    Abstract: A moving control device is provided, including a filtering element, an image capturing unit, a calculating unit, and a light-emitting element that emits a structured light with a predetermined wavelength. The filtering element allows the structured light to pass therethrough while filtering out light without the predetermined wavelength. The filtering element is provided in a portion at a front end of the image capturing unit, such that an external image retrieved by the image capturing unit includes a first region generated as a result of the light intersecting the filtering element and a second region generated as a result of the light not intersecting the filtering element. The calculating unit performs image recognition on the first and second regions of the external image to generate identification results, allowing movement of an autonomous mobile platform to be controlled based on the identification results.
    Type: Application
    Filed: June 7, 2013
    Publication date: April 10, 2014
    Inventors: Cheng-Hua Wu, Meng-Ju Han, Ching-Yi Kuo
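    A minimal sketch of how the two regions might drive the platform, assuming the filtered (first) region covers the lower half of the frame and a simple brightness heuristic stands in for the image recognition; thresholds and names are hypothetical.
        import numpy as np

        def split_regions(image):
            """Hypothetical split: the filtering element covers the lower half of the
            sensor, so the lower half is the first (filtered) region and the upper half
            is the second (unfiltered) region."""
            h = image.shape[0] // 2
            return image[h:], image[:h]

        def decide_motion(image, obstacle_thresh=0.15):
            """Rough stand-in for the calculating unit: if the structured-light pattern
            fills too much of the first region, an obstacle is close, so turn; otherwise
            keep moving forward."""
            first, _second = split_regions(image)
            lit_fraction = (first > 200).mean()
            return "turn" if lit_fraction > obstacle_thresh else "forward"

        # Synthetic frame: a bright structured-light stripe reflected by a near obstacle.
        frame = np.zeros((240, 320), dtype=np.uint8)
        frame[180:230, 20:300] = 255
        print(decide_motion(frame))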
  • Patent number: 8437516
    Abstract: A facial expression recognition apparatus and a facial expression recognition method thereof are provided. The facial expression recognition apparatus comprises a gray image generating unit, a face edge detection unit, a motion skin extraction unit, a face contour generating unit and a facial expression recognition unit. The gray image generating unit generates a gray image according to an original image. The face edge detection unit outputs a face edge detection result according to the gray image. The motion skin extraction unit generates a motion skin extraction result according to the original image, and generates a face and background division result according to the motion skin extraction result. The face contour generating unit outputs a face contour according to the gray image, the face edge detection result and the face and background division result. The facial expression recognition unit outputs a facial expression recognition result according to the face contour.
    Type: Grant
    Filed: November 16, 2009
    Date of Patent: May 7, 2013
    Assignee: Novatek Microelectronics Corp.
    Inventors: Kai-Tai Song, Meng-Ju Han, Shih-Chieh Wang, Chia-Ho Lin, Chi-Yi Lin
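    A minimal sketch of the processing chain in the abstract above, using OpenCV as a stand-in for the hardware units; the skin-color thresholds, motion check, and final classifier are simplified placeholders rather than the patented method.
        import cv2

        def recognize_expression(bgr_frame, prev_bgr_frame=None):
            # Gray image generating unit.
            gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
            # Face edge detection unit (Canny as a simple stand-in).
            edges = cv2.Canny(gray, 80, 160)
            # Motion skin extraction unit: skin-colored pixels (placeholder YCrCb
            # thresholds) that also changed between frames.
            ycrcb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2YCrCb)
            skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
            if prev_bgr_frame is not None:
                prev_gray = cv2.cvtColor(prev_bgr_frame, cv2.COLOR_BGR2GRAY)
                motion = cv2.absdiff(gray, prev_gray)
                _, moved = cv2.threshold(motion, 10, 255, cv2.THRESH_BINARY)
                skin = cv2.bitwise_and(skin, moved)
            # Face contour generating unit: combine edges with the face/background split.
            contour_mask = cv2.bitwise_and(edges, skin)
            # Facial expression recognition unit: a trained classifier would take the
            # contour region here; this placeholder only reports whether a face-like
            # contour was found.
            return "no-face" if contour_mask.sum() == 0 else "expression-candidate"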
  • Patent number: 8437515
    Abstract: A face detection apparatus and a face detection method thereof are provided. The face detection apparatus includes a rectangle integral image unit, a feature mapping unit and a cascade and score unit. The rectangle integral image unit provides a rectangle integral image according to an original image. The feature mapping unit determines a face candidate region according to rectangular face feature templates, and calculates feature values of the rectangular face feature templates according to the rectangle integral image. The cascade and score unit judges whether the face candidate region conforms to cascade conditions or not, and gives the face candidate region a score according to the feature values when the face candidate region conforms to the cascade conditions. The face candidate region is a non-face region if the score of the face candidate region is lower than a threshold value.
    Type: Grant
    Filed: October 29, 2009
    Date of Patent: May 7, 2013
    Assignee: Novatek Microelectronics Corp.
    Inventors: Kai-Tai Song, Meng-Ju Han, Shih-Chieh Wang, Ming-Feng Chiang, Chia-Ho Lin
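    A minimal sketch of the two core pieces named in the abstract above: the rectangle integral image and a rectangular feature value computed from it in constant time, plus a toy cascade-and-score check; the feature choice and threshold are illustrative assumptions.
        import numpy as np

        def integral_image(img):
            """Summed-area table with a zero row/column prepended so rectangle sums
            need no bounds checks."""
            ii = np.cumsum(np.cumsum(img.astype(np.int64), axis=0), axis=1)
            return np.pad(ii, ((1, 0), (1, 0)))

        def rect_sum(ii, x, y, w, h):
            """Sum of pixels in the rectangle (x, y, w, h) using four table lookups."""
            return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

        def two_rect_feature(ii, x, y, w, h):
            """Haar-like feature: left half minus right half of the candidate region."""
            half = w // 2
            return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)

        def cascade_score(ii, region, features, threshold=0.0):
            """Toy cascade-and-score stage: accumulate feature values over one candidate
            region and reject it as a non-face region if the score is below threshold."""
            x, y, w, h = region
            score = sum(f(ii, x, y, w, h) for f in features)
            return score if score >= threshold else None

        img = np.random.randint(0, 256, size=(64, 64))
        print(cascade_score(integral_image(img), (8, 8, 24, 24), [two_rect_feature]))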
  • Publication number: 20110150301
    Abstract: A face identification method includes the following steps. First, first and second sets of hidden layer parameters, which respectively correspond to first and second database character vectors, are obtained by way of training according to multiple first and second training character data. Next, first and second back propagation neural networks (BPNNs) are established according to the first and second sets of hidden layer parameters, respectively. Then, to-be-identified data are provided to the first BPNN to find a first output character vector. Next, whether the first output character vector satisfies an identification criterion is determined. If not, the to-be-identified data are provided to the second BPNN to find a second output character vector. Then, whether the second output character vector satisfies the identification criterion is determined. If yes, the to-be-identified data are identified as corresponding to the second database character vector.
    Type: Application
    Filed: July 6, 2010
    Publication date: June 23, 2011
    Applicant: Industrial Technology Research Institute
    Inventors: Kai-Tai Song, Meng-Ju Han, Shih-Chieh Wang
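    A minimal sketch of the cascaded check described in the abstract above, with tiny feed-forward networks standing in for the trained back propagation neural networks; the dimensions, weights, and identification criterion are hypothetical.
        import numpy as np

        def forward(x, hidden_params):
            """One BPNN in inference mode; hidden_params = (W1, b1, W2, b2) would come
            from training."""
            W1, b1, W2, b2 = hidden_params
            h = np.tanh(W1 @ x + b1)
            return W2 @ h + b2                      # output character vector

        def identify(x, bpnn1, bpnn2, db_vec1, db_vec2, criterion=0.5):
            """Try the first BPNN; if its output character vector is not close enough to
            the first database character vector, fall back to the second BPNN."""
            if np.linalg.norm(forward(x, bpnn1) - db_vec1) < criterion:
                return "first database character"
            if np.linalg.norm(forward(x, bpnn2) - db_vec2) < criterion:
                return "second database character"
            return "unknown"

        rng = np.random.default_rng(0)
        dim, hid, out = 8, 6, 4
        def random_bpnn():
            return (rng.normal(size=(hid, dim)), rng.normal(size=hid),
                    rng.normal(size=(out, hid)), rng.normal(size=out))
        print(identify(rng.normal(size=dim), random_bpnn(), random_bpnn(),
                       rng.normal(size=out), rng.normal(size=out)))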
  • Publication number: 20110144804
    Abstract: A device for expressing robot autonomous emotions comprises: a sensing unit; a user emotion recognition unit, recognizing current user emotional states after receiving sensed information from the sensing unit, and calculating user emotional strengths based on the current user emotional states; a robot emotion generation unit, generating robot emotional states based on the user emotional strengths; a behavior fusion unit, calculating a plurality of output behavior weights by a fuzzy-neuro network based on the user emotional strengths and a rule table; and a robot reaction unit, expressing a robot emotional behavior based on the output behavior weights and the robot emotional states.
    Type: Application
    Filed: May 13, 2010
    Publication date: June 16, 2011
    Inventors: Kai-Tai Song, Meng-Ju Han, Chia-How Lin
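    A minimal sketch of the behavior-fusion step in the abstract above, replacing the fuzzy-neuro network with simple normalized weights derived from the user emotional strengths and a rule table; all names and numbers are hypothetical.
        def fuse_behaviors(user_strengths, rule_table):
            """Turn user emotional strengths (e.g. {"happy": 0.7}) into normalized output
            behavior weights via a rule table mapping each user emotion to the robot
            behaviors it should trigger."""
            raw = {}
            for emotion, strength in user_strengths.items():
                for behavior, gain in rule_table.get(emotion, {}).items():
                    raw[behavior] = raw.get(behavior, 0.0) + strength * gain
            total = sum(raw.values()) or 1.0
            return {behavior: w / total for behavior, w in raw.items()}

        rule_table = {
            "happy": {"wag_tail": 1.0, "approach": 0.5},
            "angry": {"back_off": 1.0},
        }
        print(fuse_behaviors({"happy": 0.7, "angry": 0.1}, rule_table))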
  • Publication number: 20110141258
    Abstract: A method is disclosed in the present disclosure for recognizing emotion by assigning different weights to at least two kinds of unknown information, such as image and audio information, based on their respective recognition reliability. The weights are determined by the distance between the test data and the hyperplane and by the standard deviation of the training data, and are normalized by the mean distance between the training data and the hyperplane, representing the classification reliability of each kind of information. When the at least two kinds of unidentified information are classified differently by the hyperplane, the method recognizes the emotion according to the unidentified information having the higher weight and corrects the wrong classification result of the other unidentified information, so as to raise the accuracy of emotion recognition. Meanwhile, the present disclosure also provides a learning step with higher learning speed through an iterative algorithm.
    Type: Application
    Filed: February 7, 2011
    Publication date: June 16, 2011
    Applicant: Industrial Technology Research Institute
    Inventors: Kai-Tai Song, Meng-Ju Han, Jing-Huai Hsu, Jung-Wei Hong, Fuh-Yu Chang
  • Publication number: 20100284619
    Abstract: A face detection apparatus and a face detection method thereof are provided. The face detection apparatus includes a rectangle integral image unit, a feature mapping unit and a cascade and score unit. The rectangle integral image unit provides a rectangle integral image according to an original image. The feature mapping unit determines a face candidate region according to rectangular face feature templates, and calculates feature values of the rectangular face feature templates according to the rectangle integral image. The cascade and score unit judges whether the face candidate region conforms to cascade conditions or not, and gives the face candidate region a score according to the feature values when the face candidate region conforms to the cascade conditions. The face candidate region is a non-face region if the score of the face candidate region is lower than a threshold value.
    Type: Application
    Filed: October 29, 2009
    Publication date: November 11, 2010
    Applicant: Novatek Microelectronics Corp.
    Inventors: Kai-Tai Song, Meng-Ju Han, Shih-Chieh Wang, Ming-Feng Chiang, Chia-Ho Lin
  • Publication number: 20100278385
    Abstract: A facial expression recognition apparatus and a facial expression recognition method thereof are provided. The facial expression recognition apparatus comprises a gray image generating unit, a face edge detection unit, a motion skin extraction unit, a face contour generating unit and a facial expression recognition unit. The gray image generating unit generates a gray image according to an original image. The face edge detection unit outputs a face edge detection result according to the gray image. The motion skin extraction unit generates a motion skin extraction result according to the original image, and generates a face and background division result according to the motion skin extraction result. The face contour generating unit outputs a face contour according to the gray image, the face edge detection result and the face and background division result. The facial expression recognition unit outputs a facial expression recognition result according to the face contour.
    Type: Application
    Filed: November 16, 2009
    Publication date: November 4, 2010
    Applicant: Novatek Microelectronics Corp.
    Inventors: Kai-Tai Song, Meng-Ju Han, Shih-Chieh Wang, Chia-Ho Lin, Chi-Yi Lin
  • Publication number: 20080201144
    Abstract: A method is disclosed in the present invention for recognizing emotion by assigning different weights to at least two kinds of unknown information, such as image and audio information, based on their respective recognition reliability. The weights are determined by the distance between the test data and the hyperplane and by the standard deviation of the training data, and are normalized by the mean distance between the training data and the hyperplane, representing the classification reliability of each kind of information. The method is capable of recognizing the emotion according to the unidentified information having the higher weight when the at least two kinds of unidentified information are classified differently by the hyperplane, and of correcting the wrong classification result of the other unidentified information, so as to raise the accuracy of emotion recognition. Meanwhile, the present invention also provides a learning step with higher learning speed through an iterative algorithm.
    Type: Application
    Filed: August 8, 2007
    Publication date: August 21, 2008
    Applicant: Industrial Technology Research Institute
    Inventors: Kai-Tai Song, Meng-Ju Han, Jing-Huai Hsu, Jung-Wei Hong, Fuh-Yu Chang