Patents by Inventor Mun Sung Han

Mun Sung Han has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9420963
    Abstract: An apparatus for recognizing a user's posture in a horse-riding simulator, the apparatus comprising: a standard posture model generation module configured to derive a standard posture model by selecting feature points from an expert database; and a posture recognizing module configured to obtain the user's posture from the horse-riding simulator, recognize the user's horse-riding posture by matching the obtained posture against the standard posture model generated in the standard posture model generation module, and suggest a standard posture model appropriate for the user's level.
    Type: Grant
    Filed: April 3, 2014
    Date of Patent: August 23, 2016
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Kye Kyung Kim, Sang Seung Kang, Suyoung Chi, Dong-Jin Lee, Yun Koo Chung, Mun Sung Han, Jae Hong Kim, Jong-Hyun Park
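The apparatus above works in two stages: a standard posture model is derived from expert feature points, and the user's posture is then matched against that model. Below is a minimal sketch of those two steps, assuming each posture is an array of 2D joint coordinates; the joint count, the averaging step, and the similarity scale are illustrative assumptions, not details from the patent.

```python
import numpy as np

def build_standard_model(expert_postures):
    """Average expert feature points (joint coordinates) into a standard posture model."""
    # expert_postures: shape (n_experts, n_joints, 2)
    return np.mean(np.asarray(expert_postures, dtype=float), axis=0)

def match_posture(user_posture, standard_model):
    """Return per-joint distances to the model and an overall score in (0, 1]."""
    per_joint = np.linalg.norm(np.asarray(user_posture, dtype=float) - standard_model, axis=1)
    score = 1.0 / (1.0 + per_joint.mean())    # higher means closer to the standard posture
    return per_joint, score

if __name__ == "__main__":
    experts = np.random.rand(5, 15, 2)        # 5 experts, 15 joints, (x, y) each
    model = build_standard_model(experts)
    per_joint, score = match_posture(np.random.rand(15, 2), model)
    print(f"similarity score: {score:.3f}")
```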
  • Publication number: 20160216770
    Abstract: The present invention relates to a motion based interactive service method and system that provide various motion based interactive services, such as repeated training of a motion, progressive difficulty adjustment, evaluation for every body part, and feedback based on the analysis of the user's motion, so that a user of a dance game or a dance lesson can efficiently perform a motion based performance or take a lesson through simple and varied methods.
    Type: Application
    Filed: March 19, 2015
    Publication date: July 28, 2016
    Inventors: Min Su JANG, Do Hyung KIM, Jae Hong KIM, Nam Shik PARK, Mun Sung HAN, Cheon Shu PARK, Sung Woong SHIN
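The per-body-part evaluation mentioned in the abstract above could be sketched as follows: the user's joint trajectories are compared with a reference performance and an error-based score is reported for each body part. The joint grouping and the 0-100 feedback scale are hypothetical, not taken from the application.

```python
import numpy as np

# Hypothetical grouping of joint indices into body parts.
BODY_PARTS = {"left_arm": [0, 1, 2], "right_arm": [3, 4, 5], "legs": [6, 7, 8, 9]}

def score_body_parts(user_traj, reference_traj):
    """Mean per-frame joint error for each body part, mapped to a 0-100 score."""
    # trajectories: shape (n_frames, n_joints, 2)
    err = np.linalg.norm(np.asarray(user_traj) - np.asarray(reference_traj), axis=2)
    return {part: max(0.0, 100.0 - 100.0 * err[:, joints].mean())
            for part, joints in BODY_PARTS.items()}

if __name__ == "__main__":
    ref = np.random.rand(30, 10, 2)                    # 30 frames, 10 joints
    user = ref + 0.05 * np.random.randn(30, 10, 2)     # slightly perturbed performance
    print(score_body_parts(user, ref))
```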
  • Publication number: 20160110453
    Abstract: The present invention provides a choreography searching system and method based on a motion query. A choreography video captured in real time while the user dances in front of a camera is taken as the query and compared against choreographic works, such as K-POP pieces stored in a choreography database, to return a list of works ranked by similarity, providing intuitive choreography-input-based search rather than text-based search by music title, choreographer, or the name of a unit motion.
    Type: Application
    Filed: March 24, 2015
    Publication date: April 21, 2016
    Inventors: Do Hyung KIM, Jae Hong KIM, Nam Shik PARK, Min Su JANG, Mun Sung HAN, Cheon Shu PARK, Sung Woong SHIN
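Ranking stored works by similarity to a captured query motion, as described above, can be illustrated with dynamic time warping over joint trajectories. DTW is a common choice for comparing motions of different lengths; it is used here only as a stand-in, not as the patent's actual matching method.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two motion sequences.

    a, b: arrays of shape (n_frames, n_features), e.g. flattened joint coordinates.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def search_choreography(query, database):
    """Return work titles ranked by similarity (smallest DTW distance first)."""
    return [title for title, _ in
            sorted(database.items(), key=lambda kv: dtw_distance(query, kv[1]))]

if __name__ == "__main__":
    db = {f"work_{k}": np.random.rand(40, 30) for k in range(3)}
    query = db["work_1"] + 0.01 * np.random.randn(40, 30)   # noisy copy of one work
    print(search_choreography(query, db))
```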
  • Patent number: 9008440
    Abstract: Disclosed are a component recognizing apparatus and a component recognizing method. The component recognizing apparatus includes: an image preprocessing unit configured to extract component edges from an input component image by using a plurality of edge detecting techniques, and detect a component region by using the extracted edges; a feature extracting unit configured to extract a component feature from the detected component region, and create a feature vector from the component feature; and a component recognizing unit configured to input the created feature vector to an artificial neural network trained in advance on a plurality of component image samples, and recognize the component category from the result.
    Type: Grant
    Filed: July 10, 2012
    Date of Patent: April 14, 2015
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Kye Kyung Kim, Woo Han Yun, Hye Jin Kim, Su Young Chi, Jae Yeon Lee, Mun Sung Han, Jae Hong Kim, Joo Chan Sohn
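A rough sketch of the pipeline in the abstract above: edges from two detectors are combined, the component region is cropped from the edge mask, a fixed-length feature vector is built, and a neural network classifies it. OpenCV's Canny/Sobel and scikit-learn's MLPClassifier are stand-ins here, and the thresholds and histogram feature are illustrative choices, not the patent's.

```python
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

def detect_component_region(gray):
    """Combine two edge detectors (Canny and Sobel magnitude) and crop the input
    to the bounding box of the resulting edge mask."""
    canny = cv2.Canny(gray, 50, 150)
    sx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    sy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    sobel = cv2.convertScaleAbs(cv2.magnitude(sx, sy))
    _, sobel_bin = cv2.threshold(sobel, 40, 255, cv2.THRESH_BINARY)
    edges = cv2.bitwise_or(canny, sobel_bin)
    ys, xs = np.nonzero(edges)
    if len(xs) == 0:
        return gray                               # no edges: fall back to the full image
    return gray[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

def extract_features(region, bins=32):
    """Fixed-length feature vector: a normalized grey-level histogram of the region."""
    hist = cv2.calcHist([region], [0], None, [bins], [0, 256]).ravel()
    return hist / (hist.sum() + 1e-9)

# A small multilayer perceptron stands in for the pre-trained artificial neural
# network; it would be fitted offline: clf.fit(training_vectors, category_labels).
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)

def recognize_component(gray_image):
    region = detect_component_region(gray_image)
    return clf.predict([extract_features(region)])[0]
```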
  • Publication number: 20150051512
    Abstract: An apparatus for recognizing a user's posture in a horse-riding simulator, the apparatus comprising: a standard posture model generation module configured to derive a standard posture model by selecting feature points from an expert database; and a posture recognizing module configured to obtain the user's posture from the horse-riding simulator, recognize the user's horse-riding posture by matching the obtained posture against the standard posture model generated in the standard posture model generation module, and suggest a standard posture model appropriate for the user's level.
    Type: Application
    Filed: April 3, 2014
    Publication date: February 19, 2015
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Kye Kyung Kim, Sang Seung Kang, Suyoung Chi, Dong-Jin Lee, Yun Koo Chung, Mun Sung Han, Jae Hong Kim, Jong-Hyun Park
  • Publication number: 20140306811
    Abstract: A system for integrating gestures and sounds including: a gesture recognition unit that extracts gesture feature information corresponding to user commands from image information and acquires gesture recognition information from the gesture feature information; a background recognition unit that acquires background sound information from sound information using a predetermined background sound model; a sound recognition unit that extracts sound feature information corresponding to user commands from the sound information based on the background sound information and acquires sound recognition information from the sound feature information; and an integration unit that generates integration information by integrating the gesture recognition information and the sound recognition information.
    Type: Application
    Filed: June 24, 2014
    Publication date: October 16, 2014
    Inventors: Mun Sung HAN, Young Giu Jung, Hyun Kim, Jae Hong Kim, Joo Chan Sohn
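The integration unit described above merges the two recognizers' outputs. The abstract does not spell out the fusion rule, so the sketch below shows a generic weighted late-fusion scheme; the labels, weights, and agreement bonus are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Recognition:
    label: str
    confidence: float    # in [0, 1]

def integrate(gesture: Recognition, sound: Recognition,
              w_gesture: float = 0.6, w_sound: float = 0.4) -> Recognition:
    """Late fusion: agreement between modalities boosts confidence; disagreement
    falls back to the modality with the stronger weighted confidence."""
    if gesture.label == sound.label:
        conf = min(1.0, w_gesture * gesture.confidence + w_sound * sound.confidence + 0.1)
        return Recognition(gesture.label, conf)
    if w_gesture * gesture.confidence >= w_sound * sound.confidence:
        return Recognition(gesture.label, w_gesture * gesture.confidence)
    return Recognition(sound.label, w_sound * sound.confidence)

if __name__ == "__main__":
    print(integrate(Recognition("wave", 0.8), Recognition("hello", 0.7)))
```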
  • Patent number: 8793134
    Abstract: Disclosed is a system for integrating gestures and sounds including: a gesture recognition unit that extracts gesture feature information corresponding to user commands from image information and acquires gesture recognition information from the gesture feature information; a background recognition unit that acquires background sound information from the sound information using a predetermined background sound model; a sound recognition unit that extracts sound feature information corresponding to user commands from the sound information based on the background sound information and acquires sound recognition information from the sound feature information; and an integration unit that generates integration information by integrating the gesture recognition information and the sound recognition information.
    Type: Grant
    Filed: December 21, 2011
    Date of Patent: July 29, 2014
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Mun Sung Han, Young Giu Jung, Hyun Kim, Jae Hong Kim, Joo Chan Sohn
  • Publication number: 20140172428
    Abstract: Provided is a method for context independent gender recognition utilizing phoneme transition probability. The method includes detecting a voice section from a received voice signal, generating feature vectors within the detected voice section, applying a hidden Markov model to the feature vectors using a search network set according to a phoneme rule to recognize phonemes and obtain scores of first and second likelihoods, and comparing the final scores of the first and second likelihoods, accumulated while phoneme recognition proceeds to the end of the voice section, to make a final gender decision for the voice signal.
    Type: Application
    Filed: September 3, 2013
    Publication date: June 19, 2014
    Applicant: Electronics and Telecommunications Research Institute
    Inventor: Mun Sung HAN
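The final step of the method above compares the accumulated scores of the two likelihoods (one per gender model) over the whole voice section. A minimal sketch of that decision step, assuming the per-frame log-likelihoods have already been produced by the two gender-dependent HMMs during phoneme recognition:

```python
import numpy as np

def decide_gender(loglik_male_frames, loglik_female_frames):
    """Accumulate each model's per-frame log-likelihoods over the voice section
    and pick the gender whose model explains the signal better."""
    score_male = float(np.sum(loglik_male_frames))
    score_female = float(np.sum(loglik_female_frames))
    gender = "male" if score_male >= score_female else "female"
    return gender, score_male, score_female

if __name__ == "__main__":
    # Illustrative per-frame scores; in practice these come from HMM decoding.
    print(decide_gender([-12.1, -10.4, -11.8], [-13.0, -11.2, -12.5]))
```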
  • Patent number: 8705814
    Abstract: Disclosed is a method of detecting an upper body. The method includes detecting an omega candidate area, a shape formed by a human face and shoulder line, from a target image, cropping the target image to an upper body candidate area that includes the omega candidate area, detecting a human face within the upper body candidate area, and judging whether the upper body of a human is included in the target image according to the result of the face detection.
    Type: Grant
    Filed: December 21, 2011
    Date of Patent: April 22, 2014
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Woo Han Yun, Do Hyung Kim, Jae Yeon Lee, Kyu Dae Ban, Dae Ha Lee, Mun Sung Han, Ho Sub Yoon, Su Young Chi, Yun Koo Chung, Joo Chan Sohn, Hye Jin Kim, Young Woo Yoon, Jae Hong Kim, Jae Il Cho
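The method above is a two-stage check: propose a head-and-shoulder ("omega") candidate, crop to it, then confirm with a face detector. In the sketch below the omega detector is only a stub and an OpenCV Haar cascade stands in for the face-verification stage; neither is the patent's actual detector.

```python
import cv2

# A pre-trained Haar cascade stands in for the face-verification stage.
FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_omega_candidates(gray):
    """Placeholder for the head-shoulder ('omega') shape detector: returns a list
    of (x, y, w, h) candidate boxes. Here it trivially proposes the whole frame."""
    h, w = gray.shape
    return [(0, 0, w, h)]

def contains_upper_body(gray):
    """Report an upper body only if a face is found inside some omega candidate."""
    for (x, y, w, h) in detect_omega_candidates(gray):
        crop = gray[y:y + h, x:x + w]
        faces = FACE_CASCADE.detectMultiScale(crop, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            return True
    return False
```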
  • Patent number: 8478600
    Abstract: Provided is an input/output apparatus based on voice recognition, and a method thereof. An object of the apparatus is to improve the user interface by making pointing input and command execution, such as application program control, possible through a user's voice command based on voice recognition technology, without a separate pointing input device such as a mouse or a touch pad. The apparatus includes: a voice recognizer for recognizing a voice command inputted from outside; a pointing controller for calculating a pointing location on a screen which corresponds to a voice recognition result transmitted from the voice recognizer; a displayer for displaying a screen; and a command controller for processing diverse commands related to a current pointing location.
    Type: Grant
    Filed: September 11, 2006
    Date of Patent: July 2, 2013
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Kwan-Hyun Cho, Mun-Sung Han, Jun-Seok Park, Young-Giu Jung
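The pointing controller described above turns recognized voice commands into cursor positions and actions. The sketch below uses a hypothetical command vocabulary and step size; the actual command set is not given in the abstract.

```python
SCREEN_W, SCREEN_H = 1920, 1080
STEP = 50    # pixels moved per spoken direction command (illustrative)

def update_pointer(command: str, x: int, y: int):
    """Map a recognized voice command to a new pointing location or an action."""
    moves = {"up": (0, -STEP), "down": (0, STEP), "left": (-STEP, 0), "right": (STEP, 0)}
    if command in moves:
        dx, dy = moves[command]
        x = min(max(x + dx, 0), SCREEN_W - 1)
        y = min(max(y + dy, 0), SCREEN_H - 1)
        return x, y, None
    if command in ("click", "double click", "open"):
        return x, y, command          # handed to the command controller for execution
    return x, y, None                 # unrecognized commands leave the state unchanged

if __name__ == "__main__":
    print(update_pointer("right", 100, 100))
    print(update_pointer("click", 150, 100))
```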
  • Publication number: 20130163858
    Abstract: Disclosed are a component recognizing apparatus and a component recognizing method. The component recognizing apparatus includes: an image preprocessing unit configured to extract component edges from an input component image by using a plurality of edge detecting techniques, and detect a component region by using the extracted edges; a feature extracting unit configured to extract a component feature from the detected component region, and create a feature vector from the component feature; and a component recognizing unit configured to input the created feature vector to an artificial neural network trained in advance on a plurality of component image samples, and recognize the component category from the result.
    Type: Application
    Filed: July 10, 2012
    Publication date: June 27, 2013
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Kye Kyung KIM, Woo Han YUN, Hye Jin KIM, Su Young CHI, Jae Yeon LEE, Mun Sung HAN, Jae Hong KIM, Joo Chan SOHN
  • Publication number: 20120166200
    Abstract: Disclosed is a system for integrating gestures and sounds including: a gesture recognition unit that extracts gesture feature information corresponding to user commands from image information and acquires gesture recognition information from the gesture feature information; a background recognition unit that acquires background sound information from the sound information using a predetermined background sound model; a sound recognition unit that extracts sound feature information corresponding to user commands from the sound information based on the background sound information and acquires sound recognition information from the sound feature information; and an integration unit that generates integration information by integrating the gesture recognition information and the sound recognition information.
    Type: Application
    Filed: December 21, 2011
    Publication date: June 28, 2012
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Mun Sung HAN, Young Giu JUNG, Hyun KIM, Jae Hong KIM, Joo Chan SOHN
  • Publication number: 20120166190
    Abstract: The present invention provides an apparatus and method for removing noise for sound/voice recognition, in which a TV sound corresponding to the noise signal is removed by an adaptive filter capable of adapting its filter coefficients to cancel the analogue signal, after which sound and/or voice recognition is performed.
    Type: Application
    Filed: December 15, 2011
    Publication date: June 28, 2012
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Jae Yeon Lee, Mun Sung Han, Jae Il Cho, Jae Hong Kim, Joo Chan Sohn
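The adaptive filter in the abstract above subtracts an adapted estimate of the TV sound (the known reference signal) from the microphone signal before recognition. A normalized-LMS canceller is a standard way to do this; the tap count and step size below are illustrative, and the patent may use a different update rule.

```python
import numpy as np

def nlms_cancel(mic, tv_ref, taps=64, mu=0.5, eps=1e-6):
    """Normalized LMS noise canceller.

    mic:    microphone samples containing the user's voice plus the TV sound.
    tv_ref: the TV audio sent to the loudspeaker (reference/noise signal).
    Returns the error signal, i.e. the microphone signal with the adapted
    estimate of the TV sound removed.
    """
    mic = np.asarray(mic, dtype=float)
    tv_ref = np.asarray(tv_ref, dtype=float)
    w = np.zeros(taps)
    out = np.zeros(len(mic))
    for n in range(taps, len(mic)):
        x = tv_ref[n - taps:n][::-1]           # most recent reference samples
        y_hat = w @ x                          # estimated TV component at the microphone
        e = mic[n] - y_hat                     # residual: mostly the user's voice
        w += mu * e * x / (x @ x + eps)        # normalized coefficient update
        out[n] = e
    return out
```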
  • Publication number: 20120155719
    Abstract: Disclosed is a method of detecting an upper body. The method includes detecting an omega candidate area, a shape formed by a human face and shoulder line, from a target image, cropping the target image to an upper body candidate area that includes the omega candidate area, detecting a human face within the upper body candidate area, and judging whether the upper body of a human is included in the target image according to the result of the face detection.
    Type: Application
    Filed: December 21, 2011
    Publication date: June 21, 2012
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Woo Han YUN, Do Hyung KIM, Jae Yeon LEE, Kyu Dae BAN, Dae Ha LEE, Mun Sung HAN, Ho Sub YOON, Su Young CHI, Yun Koo CHUNG, Joo Chan SOHN, Hye Jin KIM, Young Woo YOON, Jae Hong KIM, Jae Il CHO
  • Publication number: 20100077261
    Abstract: The present invention relates to a realistic service and system using a five senses integrated interface, and more particularly to an apparatus and method for encoding the five senses and a system and method for providing a realistic service through such an interface, allowing a user in a remote location to select a product to experience sensorially. The system includes an integration recognizer that detects the data selected by the user, recognizes the object, and transmits five senses data of the object through a network; a five senses data analyzer that receives a five senses data packet including the five senses data of the object through the network and extracts and analyzes the packet in terms of data on each of the five senses; and a five senses integration representer that represents the five senses using the data on each of the five senses.
    Type: Application
    Filed: December 3, 2007
    Publication date: March 25, 2010
    Inventors: Young Giu Jung, Mun Sung Han, Jun Seok Park
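One way to picture the five senses data packet exchanged between the integration recognizer and the analyzer is a simple per-sense container; the field names and JSON encoding below are assumptions for illustration, not the packet format defined in the application.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class FiveSensesPacket:
    """Illustrative container for the per-sense data describing one object."""
    object_id: str
    sight: dict = field(default_factory=dict)    # e.g. color or texture descriptors
    sound: dict = field(default_factory=dict)
    smell: dict = field(default_factory=dict)
    taste: dict = field(default_factory=dict)
    touch: dict = field(default_factory=dict)

def encode(packet: FiveSensesPacket) -> bytes:
    return json.dumps(asdict(packet)).encode("utf-8")

def decode(payload: bytes) -> FiveSensesPacket:
    return FiveSensesPacket(**json.loads(payload.decode("utf-8")))

if __name__ == "__main__":
    pkt = FiveSensesPacket("apple-01", sight={"color": "red"}, touch={"hardness": 0.7})
    print(decode(encode(pkt)))
```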
  • Patent number: 7613611
    Abstract: Provided are a method and an apparatus for vocal-cord signal recognition. A signal processing unit receives and digitizes a vocal cord signal, and a noise removing unit removes channel noise included in the vocal cord signal. A feature extracting unit extracts a feature vector from the vocal cord signal from which the channel noise has been removed, and a recognizing unit calculates a similarity between the vocal cord signal and the learned model parameter. Consequently, the apparatus is robust in a noisy environment.
    Type: Grant
    Filed: May 26, 2005
    Date of Patent: November 3, 2009
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Kwan Hyun Cho, Mun Sung Han, Young Giu Jung, Hee Sook Shin, Jun Seok Park, Dong Won Han
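The pipeline above runs noise removal, feature extraction, and similarity scoring against a learned model parameter. In the sketch below, spectral subtraction, log band energies, and cosine similarity are generic stand-ins for steps the abstract does not specify.

```python
import numpy as np

def remove_channel_noise(signal, noise_spectrum):
    """Crude spectral subtraction; noise_spectrum has the length of rfft(signal)."""
    spec = np.fft.rfft(signal)
    cleaned = np.maximum(np.abs(spec) - noise_spectrum, 0.0) * np.exp(1j * np.angle(spec))
    return np.fft.irfft(cleaned, n=len(signal))

def extract_feature_vector(signal, n_bands=20):
    """Log band-energy feature vector of the cleaned vocal-cord signal."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    bands = np.array_split(power, n_bands)
    return np.log(np.array([b.sum() for b in bands]) + 1e-9)

def similarity(features, model_parameter):
    """Cosine similarity between the feature vector and a learned model parameter."""
    return float(features @ model_parameter /
                 (np.linalg.norm(features) * np.linalg.norm(model_parameter) + 1e-9))
```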
  • Publication number: 20080288260
    Abstract: Provided is an input/output apparatus based on voice recognition, and a method thereof. An object of the apparatus is to improve the user interface by making pointing input and command execution, such as application program control, possible through a user's voice command based on voice recognition technology, without a separate pointing input device such as a mouse or a touch pad. The apparatus includes: a voice recognizer for recognizing a voice command inputted from outside; a pointing controller for calculating a pointing location on a screen which corresponds to a voice recognition result transmitted from the voice recognizer; a displayer for displaying a screen; and a command controller for processing diverse commands related to a current pointing location.
    Type: Application
    Filed: September 11, 2006
    Publication date: November 20, 2008
    Inventors: Kwan-Hyun Cho, Mun-Sung Han, Jun-Seok Park, Young-Giu Jung
  • Publication number: 20080270126
    Abstract: Provided are a vocal-cord recognition apparatus and a method thereof. The vocal-cord signal recognition apparatus includes a vocal-cord signal extracting unit for analyzing a feature of a vocal-cord signal inputted through a throat microphone and extracting a vocal-cord feature vector from the vocal-cord signal using the analysis data; and a vocal-cord signal recognition unit for recognizing the vocal-cord signal using the vocal-cord feature vector extracted by the vocal-cord signal extracting unit.
    Type: Application
    Filed: October 19, 2006
    Publication date: October 30, 2008
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Young-Giu Jung, Mun-Sung Han, Kwan-Hyun Cho, Jun-Seok Park
  • Publication number: 20080137909
    Abstract: A gaze position tracking method and apparatus for simply mapping one's gaze position onto a monitor screen are provided. The gaze position tracking apparatus includes an image capturing module and an image processing module. The image capturing module illuminates a user's eyes with infrared rays, reflects the illuminated eye image at 45°, and captures the 45°-reflected eye image. The image processing module obtains the pupil center point of the illuminated eye image by performing a predetermined algorithm and maps the pupil center point onto the display plane of a display device through a predetermined transform function.
    Type: Application
    Filed: December 6, 2007
    Publication date: June 12, 2008
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Jaeseon LEE, Young Giu JUNG, Mun Sung HAN, Jun Seok PARK, Eui Chul LEE, Kang Ryoung PARK, Min Cheol HWANG, Joa Sang LIM, Yongjoo CHO
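The "predetermined transform function" in the entry above, which maps the pupil center onto the display plane, can be illustrated by fitting an affine transform from a few calibration pairs (pupil center vs. known on-screen target). The calibration procedure and affine form below are assumptions, not details from the application.

```python
import numpy as np

def fit_affine(pupil_points, screen_points):
    """Least-squares affine map from pupil-center coordinates to screen pixels.

    pupil_points, screen_points: arrays of shape (n, 2) collected while the user
    looks at n known on-screen targets (n >= 3).
    """
    p = np.asarray(pupil_points, dtype=float)
    s = np.asarray(screen_points, dtype=float)
    A = np.hstack([p, np.ones((len(p), 1))])           # rows are [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(A, s, rcond=None)     # solution has shape (3, 2)
    return coeffs

def map_gaze(pupil_center, coeffs):
    """Apply the fitted transform to a newly detected pupil center."""
    x, y = pupil_center
    return np.array([x, y, 1.0]) @ coeffs

if __name__ == "__main__":
    pupils = np.array([[10, 12], [40, 13], [11, 35], [42, 36]], dtype=float)
    targets = np.array([[0, 0], [1919, 0], [0, 1079], [1919, 1079]], dtype=float)
    T = fit_affine(pupils, targets)
    print(map_gaze((25, 24), T))
```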