Patents by Inventor Mun Sung Han
Mun Sung Han has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9420963
Abstract: An apparatus for recognizing a user's posture in a horse-riding simulator, the apparatus comprising: a standard posture model generation module configured to find out a standard posture model by selecting feature points from an expert database, and generate the standard posture model; and a posture recognizing module configured to obtain a user's posture from the horse-riding simulator, recognize a user's horse-riding posture by matching the obtained user's posture with the standard posture model generated in the standard posture model generation module, and suggest a standard posture model appropriate for a user's level.
Type: Grant
Filed: April 3, 2014
Date of Patent: August 23, 2016
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Kye Kyung Kim, Sang Seung Kang, Suyoung Chi, Dong-Jin Lee, Yun Koo Chung, Mun Sung Han, Jae Hong Kim, Jong-Hyun Park
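The matching step this abstract describes, comparing the user's captured posture against stored standard posture models via selected feature points, can be sketched as follows. This is an illustrative stand-in, not the patented method; the feature-point representation, the mean-distance metric, and the threshold are all hypothetical choices.

```python
import math

def posture_similarity(user_points, model_points):
    """Mean Euclidean distance between corresponding feature points
    (e.g. joint positions) of the user's posture and a standard model."""
    dists = [math.dist(u, m) for u, m in zip(user_points, model_points)]
    return sum(dists) / len(dists)

def match_posture(user_points, standard_models, threshold=0.5):
    """Return the name of the closest standard posture model, or None
    if no model is within the threshold."""
    best_name, best_score = None, float("inf")
    for name, model_points in standard_models.items():
        score = posture_similarity(user_points, model_points)
        if score < best_score:
            best_name, best_score = name, score
    return best_name if best_score <= threshold else None
```

A level-appropriate suggestion, as in the abstract, could then be made by keeping one model set per skill level and matching only within the user's level.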
-
Publication number: 20160216770
Abstract: The present invention relates to a motion based interactive service method and system which provide various motion based interactive services such as repeated training of a motion, progressive difficulty adjustment, evaluation for every body part, and feedback based on the analysis result of the user's motion so that a user of a dance game or a dance lesson can efficiently perform a motion based performance or take a lesson by easy and various methods.
Type: Application
Filed: March 19, 2015
Publication date: July 28, 2016
Inventors: Min Su JANG, Do Hyung KIM, Jae Hong KIM, Nam Shik PARK, Mun Sung HAN, Cheon Shu PARK, Sung Woong SHIN
-
Publication number: 20160110453
Abstract: The present invention provides a choreography searching system and method based on a motion inquiry. A choreography video is captured in real time as the user dances in front of a camera and is used as a query, which is compared with choreographic works, such as K-POP, stored in a choreography database to produce a list of works ranked by similarity. This provides intuitive, choreography-based search rather than text-based search by music title, choreographer, or name of a unit motion.
Type: Application
Filed: March 24, 2015
Publication date: April 21, 2016
Inventors: Do Hyung KIM, Jae Hong KIM, Nam Shik PARK, Min Su JANG, Mun Sung HAN, Cheon Shu PARK, Sung Woong SHIN
-
Patent number: 9008440
Abstract: Disclosed are a component recognizing apparatus and a component recognizing method. The component recognizing apparatus includes: an image preprocessing unit configured to extract component edges from an input component image by using a plurality of edge detecting techniques, and detect a component region by using the extracted component edges; a feature extracting unit configured to extract a component feature from the detected component region, and create a feature vector by using the component feature; and a component recognizing unit configured to input the created feature vector to an artificial neural network which has learned in advance to recognize a component category through a plurality of component image samples, and recognize the component category according to a result.
Type: Grant
Filed: July 10, 2012
Date of Patent: April 14, 2015
Assignee: Electronics and Telecommunications Research Institute
Inventors: Kye Kyung Kim, Woo Han Yun, Hye Jin Kim, Su Young Chi, Jae Yeon Lee, Mun Sung Han, Jae Hong Kim, Joo Chan Sohn
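The three-stage pipeline this abstract describes (edge extraction, feature-vector creation, neural-network classification) can be sketched in miniature. The gradient-based edge operator, edge-strength histogram, and single-layer classifier below are drastic simplifications chosen only to make the data flow concrete; the patent's actual edge detectors, features, and trained ANN are not reproduced here.

```python
def extract_edges(image):
    """Simplified edge map: absolute horizontal/vertical intensity
    differences, standing in for the multiple edge detecting
    techniques named in the abstract."""
    h, w = len(image), len(image[0])
    edges = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = image[y][x + 1] - image[y][x] if x + 1 < w else 0
            gy = image[y + 1][x] - image[y][x] if y + 1 < h else 0
            edges[y][x] = abs(gx) + abs(gy)
    return edges

def feature_vector(edges, bins=4):
    """Coarse normalized histogram of edge strengths as the feature vector."""
    flat = [v for row in edges for v in row]
    top = max(flat) or 1.0
    hist = [0.0] * bins
    for v in flat:
        hist[min(int(v / top * bins), bins - 1)] += 1
    return [c / len(flat) for c in hist]

def classify(features, weights):
    """Single-layer stand-in for the trained ANN: returns the index
    of the category with the highest activation."""
    scores = [sum(w * f for w, f in zip(row, features)) for row in weights]
    return scores.index(max(scores))
```

In the real system the `weights` would come from training on the component image samples; here they are arbitrary placeholders.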
-
Publication number: 20150051512
Abstract: An apparatus for recognizing a user's posture in a horse-riding simulator, the apparatus comprising: a standard posture model generation module configured to find out a standard posture model by selecting feature points from an expert database, and generate the standard posture model; and a posture recognizing module configured to obtain a user's posture from the horse-riding simulator, recognize a user's horse-riding posture by matching the obtained user's posture with the standard posture model generated in the standard posture model generation module, and suggest a standard posture model appropriate for a user's level.
Type: Application
Filed: April 3, 2014
Publication date: February 19, 2015
Applicant: Electronics and Telecommunications Research Institute
Inventors: Kye Kyung Kim, Sang Seung Kang, Suyoung Chi, Dong-Jin Lee, Yun Koo Chung, Mun Sung Han, Jae Hong Kim, Jong-Hyun Park
-
Publication number: 20140306811
Abstract: A system for integrating gestures and sounds including: a gesture recognition unit that extracts gesture feature information corresponding to user commands from image information and acquires gesture recognition information from the gesture feature information; a background recognition unit that acquires background sound information from sound information using a predetermined background sound model; a sound recognition unit that extracts sound feature information corresponding to user commands from the sound information based on the background sound information and acquires sound recognition information from the sound feature information; and an integration unit that generates integration information by integrating the gesture recognition information and the sound recognition information.
Type: Application
Filed: June 24, 2014
Publication date: October 16, 2014
Inventors: Mun Sung HAN, Young Giu Jung, Hyun Kim, Jae Hong Kim, Joo Chan Sohn
-
Patent number: 8793134
Abstract: Disclosed is a system for integrating gestures and sounds including: a gesture recognition unit that extracts gesture feature information corresponding to user commands from image information and acquires gesture recognition information from the gesture feature information; a background recognition unit that acquires background sound information from sound information using a predetermined background sound model; a sound recognition unit that extracts sound feature information corresponding to user commands from the sound information based on the background sound information and acquires sound recognition information from the sound feature information; and an integration unit that generates integration information by integrating the gesture recognition information and the sound recognition information.
Type: Grant
Filed: December 21, 2011
Date of Patent: July 29, 2014
Assignee: Electronics and Telecommunications Research Institute
Inventors: Mun Sung Han, Young Giu Jung, Hyun Kim, Jae Hong Kim, Joo Chan Sohn
-
Publication number: 20140172428
Abstract: Provided is a method for context independent gender recognition utilizing phoneme transition probability. The method for context independent gender recognition includes detecting a voice section from a received voice signal, generating feature vectors within the detected voice section, applying a hidden Markov model to the feature vectors by using a search network that is set according to a phoneme rule to recognize a phoneme and obtain scores of first and second likelihoods, and comparing the final scores of the first and second likelihoods, obtained once phoneme recognition has been performed up to the last section of the voice section, to decide the gender of the speaker of the voice signal.
Type: Application
Filed: September 3, 2013
Publication date: June 19, 2014
Applicant: Electronics and Telecommunications Research Institute
Inventor: Mun Sung HAN
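The decision rule in this abstract, accumulating two likelihood scores over the voice section and comparing the final totals, can be illustrated with a heavily simplified stand-in: frame-wise Gaussian log-likelihoods under a male model and a female model replace the hidden Markov model and phoneme search network, which are beyond a short sketch. The pitch-like scalar feature and the model parameters are hypothetical.

```python
import math

def gaussian_loglik(x, mean, var):
    """Log-likelihood of a scalar feature under a 1-D Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def decide_gender(frames, male_model, female_model):
    """Accumulate per-frame log-likelihoods under a male and a female
    model (each a (mean, variance) pair here) over the whole voice
    section, then compare the final totals; this mirrors comparing the
    first and second likelihood scores in the abstract."""
    male = sum(gaussian_loglik(f, *male_model) for f in frames)
    female = sum(gaussian_loglik(f, *female_model) for f in frames)
    return "male" if male > female else "female"
```

Because log-likelihoods add across frames, the comparison can be updated incrementally as each section of the voice signal is processed, matching the abstract's "up to the last section" phrasing.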
-
Patent number: 8705814
Abstract: Disclosed is a method of detecting an upper body. The method includes detecting an omega candidate area including a shape formed of a face and a shoulder line of a human from a target image, cutting the target image into the upper body candidate area including the omega candidate area, detecting a human face from the upper body candidate area, and judging whether the upper body of the human is included in the target image according to the result of detecting the human face.
Type: Grant
Filed: December 21, 2011
Date of Patent: April 22, 2014
Assignee: Electronics and Telecommunications Research Institute
Inventors: Woo Han Yun, Do Hyung Kim, Jae Yeon Lee, Kyu Dae Ban, Dae Ha Lee, Mun Sung Han, Ho Sub Yoon, Su Young Chi, Yun Koo Chung, Joo Chan Sohn, Hye Jin Kim, Young Woo Yoon, Jae Hong Kim, Jae Il Cho
-
Patent number: 8478600
Abstract: Provided is an input/output apparatus based on voice recognition, and a method thereof. An object of the apparatus is to improve the user interface by enabling pointing input and command execution, such as application program control, according to a user's voice command, based on voice recognition technology and without a separate pointing input device such as a mouse or a touch pad. The apparatus includes: a voice recognizer for recognizing a voice command inputted from outside; a pointing controller for calculating a pointing location on a screen which corresponds to a voice recognition result transmitted from the voice recognizer; a displayer for displaying a screen; and a command controller for processing diverse commands related to a current pointing location.
Type: Grant
Filed: September 11, 2006
Date of Patent: July 2, 2013
Assignee: Electronics and Telecommunications Research Institute
Inventors: Kwan-Hyun Cho, Mun-Sung Han, Jun-Seok Park, Young-Giu Jung
-
Publication number: 20130163858
Abstract: Disclosed are a component recognizing apparatus and a component recognizing method. The component recognizing apparatus includes: an image preprocessing unit configured to extract component edges from an input component image by using a plurality of edge detecting techniques, and detect a component region by using the extracted component edges; a feature extracting unit configured to extract a component feature from the detected component region, and create a feature vector by using the component feature; and a component recognizing unit configured to input the created feature vector to an artificial neural network which has learned in advance to recognize a component category through a plurality of component image samples, and recognize the component category according to a result.
Type: Application
Filed: July 10, 2012
Publication date: June 27, 2013
Applicant: Electronics and Telecommunications Research Institute
Inventors: Kye Kyung KIM, Woo Han YUN, Hye Jin KIM, Su Young CHI, Jae Yeon LEE, Mun Sung HAN, Jae Hong KIM, Joo Chan SOHN
-
Publication number: 20120166200
Abstract: Disclosed is a system for integrating gestures and sounds including: a gesture recognition unit that extracts gesture feature information corresponding to user commands from image information and acquires gesture recognition information from the gesture feature information; a background recognition unit that acquires background sound information from sound information using a predetermined background sound model; a sound recognition unit that extracts sound feature information corresponding to user commands from the sound information based on the background sound information and acquires sound recognition information from the sound feature information; and an integration unit that generates integration information by integrating the gesture recognition information and the sound recognition information.
Type: Application
Filed: December 21, 2011
Publication date: June 28, 2012
Applicant: Electronics and Telecommunications Research Institute
Inventors: Mun Sung HAN, Young Giu JUNG, Hyun KIM, Jae Hong KIM, Joo Chan SOHN
-
Publication number: 20120166190
Abstract: The present invention has been made in an effort to provide an apparatus, and a method thereof, for removing noise for sound/voice recognition: a TV sound corresponding to the noise signal is removed by using an adaptive filter capable of adapting its filter coefficients so as to cancel the analogue signal, after which sound and/or voice recognition is performed.
Type: Application
Filed: December 15, 2011
Publication date: June 28, 2012
Applicant: Electronics and Telecommunications Research Institute
Inventors: Jae Yeon Lee, Mun Sung Han, Jae Il Cho, Jae Hong Kim, Joo Chan Sohn
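Adaptive cancellation of a known interfering signal, such as the TV audio feed this abstract refers to, is classically done with an LMS (least-mean-squares) filter: the filter learns how the reference appears in the microphone signal and subtracts that estimate, leaving the user's voice as the residual. The sketch below is a generic LMS canceller, not the patent's specific design; the tap count and step size are arbitrary.

```python
def lms_cancel(mic, reference, taps=4, mu=0.01):
    """LMS adaptive noise cancellation: estimate the reference's
    contribution to the microphone signal and return the residual
    (ideally the clean voice)."""
    w = [0.0] * taps
    out = []
    for n in range(len(mic)):
        # most recent `taps` reference samples (zero-padded at start)
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        est = sum(wi * xi for wi, xi in zip(w, x))   # filtered reference
        err = mic[n] - est                            # residual signal
        w = [wi + mu * err * xi for wi, xi in zip(w, x)]  # coefficient update
        out.append(err)
    return out
```

With a voice-free stretch of signal, the residual shrinks toward zero as the coefficients converge, which is why such cancellers are typically adapted during pauses in speech.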
-
Publication number: 20120155719
Abstract: Disclosed is a method of detecting an upper body. The method includes detecting an omega candidate area including a shape formed of a face and a shoulder line of a human from a target image, cutting the target image into the upper body candidate area including the omega candidate area, detecting a human face from the upper body candidate area, and judging whether the upper body of the human is included in the target image according to the result of detecting the human face.
Type: Application
Filed: December 21, 2011
Publication date: June 21, 2012
Applicant: Electronics and Telecommunications Research Institute
Inventors: Woo Han YUN, Do Hyung KIM, Jae Yeon LEE, Kyu Dae BAN, Dae Ha LEE, Mun Sung HAN, Ho Sub YOON, Su Young CHI, Yun Koo CHUNG, Joo Chan SOHN, Hye Jin KIM, Young Woo YOON, Jae Hong KIM, Jae Il CHO
-
Publication number: 20100077261
Abstract: The present invention relates to a realistic service and system using a five senses integrated interface, and more particularly, to an apparatus and method for encoding the five senses and a system and method for providing a realistic service using a five senses integrated interface, allowing a user in a remote location to select a product to experience sensorially through the integrated interface. The system includes an integration recognizer that detects data selected by the user, recognizes an object, and transmits five senses data of the object through a network; a five senses data analyzer that receives a five senses data packet including the five senses data of the object through the network and extracts and analyzes the packet in terms of data on each of the five senses; and a five senses integration representer that represents the five senses using the data on each of the five senses.
Type: Application
Filed: December 3, 2007
Publication date: March 25, 2010
Inventors: Young Giu Jung, Mun Sung Han, Jun Seok Park
-
Patent number: 7613611
Abstract: Provided is a method and an apparatus for vocal-cord signal recognition. A signal processing unit receives and digitizes a vocal cord signal, and a noise removing unit removes channel noise included in the vocal cord signal. A feature extracting unit extracts a feature vector from the vocal cord signal from which the channel noise has been removed, and a recognizing unit calculates a similarity between the vocal cord signal and the learned model parameter. Consequently, the apparatus is robust in a noisy environment.
Type: Grant
Filed: May 26, 2005
Date of Patent: November 3, 2009
Assignee: Electronics and Telecommunications Research Institute
Inventors: Kwan Hyun Cho, Mun Sung Han, Young Giu Jung, Hee Sook Shin, Jun Seok Park, Dong Won Han
-
Publication number: 20080288260
Abstract: Provided is an input/output apparatus based on voice recognition, and a method thereof. An object of the apparatus is to improve the user interface by enabling pointing input and command execution, such as application program control, according to a user's voice command, based on voice recognition technology and without a separate pointing input device such as a mouse or a touch pad. The apparatus includes: a voice recognizer for recognizing a voice command inputted from outside; a pointing controller for calculating a pointing location on a screen which corresponds to a voice recognition result transmitted from the voice recognizer; a displayer for displaying a screen; and a command controller for processing diverse commands related to a current pointing location.
Type: Application
Filed: September 11, 2006
Publication date: November 20, 2008
Inventors: Kwan-Hyun Cho, Mun-Sung Han, Jun-Seok Park, Young-Giu Jung
-
Publication number: 20080270126
Abstract: Provided are a vocal-cord signal recognition apparatus and a method thereof. The vocal-cord signal recognition apparatus includes a vocal-cord signal extracting unit for analyzing a feature of a vocal-cord signal inputted through a throat microphone and extracting a vocal-cord feature vector from the vocal-cord signal using the analysis data; and a vocal-cord signal recognition unit for recognizing the vocal-cord signal by extracting the feature of the vocal-cord signal using the vocal-cord feature vector extracted by the vocal-cord signal extracting unit.
Type: Application
Filed: October 19, 2006
Publication date: October 30, 2008
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Young-Giu Jung, Mun-Sung Han, Kwan-Hyun Cho, Jun-Seok Park
-
Publication number: 20080137909
Abstract: A gaze position tracking method and apparatus for simply mapping one's gaze position on a monitor screen are provided. The gaze position tracking apparatus includes an image capturing module and an image processing module. The image capturing module illuminates the user's eyes with infrared rays, reflects the illuminated eye image at 45°, and captures the 45°-reflected eye image. The image processing module obtains the pupil center point of the illuminated eye image by performing a predetermined algorithm, and maps the pupil center point onto the display plane of a display device through a predetermined transform function.
Type: Application
Filed: December 6, 2007
Publication date: June 12, 2008
Applicant: Electronics and Telecommunications Research Institute
Inventors: Jaeseon LEE, Young Giu JUNG, Mun Sung HAN, Jun Seok PARK, Eui Chul LEE, Kang Ryoung PARK, Min Cheol HWANG, Joa Sang LIM, Yongjoo CHO
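The final mapping step in this abstract, applying a transform function to the detected pupil center to obtain a position on the display plane, can be illustrated with the simplest possible calibration: per-axis linear interpolation between pupil positions recorded while the user looks at two screen corners. The patent's "predetermined transform function" is more general; the two-point calibration scheme here is purely a hypothetical stand-in.

```python
def make_gaze_mapper(pupil_tl, pupil_br, screen_w, screen_h):
    """Build a per-axis linear map from pupil-center coordinates to
    screen coordinates, calibrated by pupil positions observed while
    the user looks at the top-left and bottom-right screen corners."""
    (x0, y0), (x1, y1) = pupil_tl, pupil_br

    def to_screen(pupil):
        px, py = pupil
        sx = (px - x0) / (x1 - x0) * screen_w
        sy = (py - y0) / (y1 - y0) * screen_h
        # clamp the result to the display plane
        return (min(max(sx, 0), screen_w), min(max(sy, 0), screen_h))

    return to_screen
```

A production mapper would typically use more calibration points and an affine or polynomial fit to absorb camera tilt and eye-geometry effects.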