Patents by Inventor Woo-Han Yun

Woo-Han Yun has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230142797
    Abstract: Disclosed herein are a method and apparatus for learning a locally-adaptive local device task based on cloud simulation. According to an embodiment of the present disclosure, there is provided a method for learning a locally-adaptive local device task. The method comprises: receiving observation data about a surrounding environment recognized by a local device; performing domain randomization based on the observation data and a failure type of a task assigned to the local device, and relearning a policy network of the assigned task based on the domain randomization; and updating the policy network of the local device for the assigned task by transmitting the relearned policy network to the local device.
    Type: Application
    Filed: September 9, 2022
    Publication date: May 11, 2023
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Tae Woo KIM, Jae Hong KIM, Chan Kyu PARK, Woo Han YUN, Ho Sub YOON, Min Su JANG
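
    A minimal Python sketch of the cloud-side relearning loop described in publication 20230142797. The failure presets, parameter ranges, and the policy object interface are illustrative assumptions, not the patented implementation.

    ```python
    import random

    FAILURE_PRESETS = {  # assumed mapping from reported failure type to randomized ranges
        "grasp_slip": {"friction": (0.2, 0.6), "object_mass": (0.1, 1.0)},
        "collision": {"obstacle_count": (1, 5), "sensor_noise": (0.0, 0.05)},
    }

    def randomize_domain(observation, failure_type):
        """Sample simulator parameters around the reported failure mode."""
        ranges = FAILURE_PRESETS.get(failure_type, {})
        params = {name: random.uniform(lo, hi) for name, (lo, hi) in ranges.items()}
        params["scene"] = observation  # seed the simulation with the local observation data
        return params

    def relearn_policy(policy, observation, failure_type, episodes=100):
        """Relearn the task policy in randomized simulations, then return it for transmission."""
        for _ in range(episodes):
            sim_params = randomize_domain(observation, failure_type)
            trajectory = policy.rollout(sim_params)  # run one randomized simulation episode
            policy.update(trajectory)                # e.g. a policy-gradient step
        return policy                                # sent back to update the local device
    ```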
  • Publication number: 20230147274
    Abstract: Disclosed herein are a method and apparatus for recommending a table service based on image recognition. According to an embodiment of the present disclosure, there is provided a method for recommending a table service, including: receiving a table image that is captured in real time; acquiring, by using a pre-trained artificial intelligence learning model, table information that includes object information and food information of at least one table in the table image; and recommending, based on the table information, a service for each of the at least one table.
    Type: Application
    Filed: September 6, 2022
    Publication date: May 11, 2023
    Inventors: Woo Han YUN, Do Hyung KIM, Jae Hong KIM, Tae Woo KIM, Chan Kyu PARK, Ho Sub YOON, Jae Yeon LEE, Min Su JANG
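
    A hedged sketch of the per-table recommendation step in publication 20230147274. The detector interface and the service rules below are assumptions made for illustration; the publication does not specify a particular model or rule set.

    ```python
    def recommend_services(table_image, detector):
        """Map recognized table state (objects and food) to candidate services, per table."""
        recommendations = {}
        for table in detector(table_image):  # assumed to yield one dict per detected table
            objects, foods = table["objects"], table["foods"]
            services = []
            if "empty_glass" in objects:
                services.append("refill drink")
            if foods and all(food["remaining"] < 0.1 for food in foods):
                services.append("clear plates")
            if not foods and not objects:
                services.append("take order")
            recommendations[table["id"]] = services
        return recommendations
    ```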
  • Publication number: 20230077103
    Abstract: Disclosed herein are a cloud server, an edge server, and a method for generating an intelligence model using the same. The method for generating an intelligence model includes receiving, by the edge server, an intelligence model generation request from a user terminal, generating an intelligence model corresponding to the intelligence model generation request, and adjusting the generated intelligence model.
    Type: Application
    Filed: June 9, 2022
    Publication date: March 9, 2023
    Inventors: Min-Su JANG, Do-Hyung KIM, Jae-Hong KIM, Woo-Han YUN
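
    A minimal sketch of the request flow in publication 20230077103, assuming the edge server obtains a base model from the cloud server, generates a task-specific model, and then adjusts it for the requesting terminal. The class and method names are hypothetical.

    ```python
    class EdgeServer:
        def __init__(self, cloud):
            self.cloud = cloud  # cloud server object assumed to expose fetch_base_model()

        def handle_generation_request(self, request):
            base = self.cloud.fetch_base_model(request["task"])
            model = self.generate_model(base, request)
            return self.adjust_model(model, request)

        def generate_model(self, base, request):
            # specialize the base model to the requested task (placeholder)
            return {"base": base, "task": request["task"]}

        def adjust_model(self, model, request):
            # e.g. prune or quantize to fit the user terminal's resource budget (placeholder)
            model["optimized_for"] = request.get("device", "generic")
            return model
    ```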
  • Publication number: 20170053449
    Abstract: Disclosed herein are a virtual content provision apparatus and method for augmenting the usability of a real object. The virtual content provision apparatus includes a real object information acquisition unit for acquiring real object information corresponding to a real object through an input module, a virtual content search unit for searching for any one piece of virtual content based on the real object information, a virtual interface projection unit for projecting a virtual interface corresponding to the virtual content onto the real object through an output module, a user input detection unit for detecting user input on the virtual interface based on the input module, and a virtual content provision unit for, when the user input is detected, extracting virtual information related to the user input based on the virtual content, and projecting and providing the virtual information to correspond to the real object information through the output module.
    Type: Application
    Filed: August 17, 2016
    Publication date: February 23, 2017
    Inventors: Joo-Haeng LEE, Jae-Hong KIM, Woo-Han YUN, A-Hyun LEE, Jae-Yeon LEE
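
    A hypothetical outline of the interaction loop in publication 20170053449: acquire real object information, look up matching virtual content, project its interface onto the object, and project related virtual information when user input is detected. The module objects and their methods are assumptions passed in as parameters.

    ```python
    def provide_virtual_content(input_module, output_module, content_store):
        real_object = input_module.acquire_object_info()
        content = content_store.search(real_object)  # any one piece of matching content
        if content is None:
            return
        output_module.project(content["interface"], real_object)
        user_input = input_module.detect_input(content["interface"])
        if user_input is not None:
            info = content["information"].get(user_input)  # virtual information for that input
            output_module.project(info, real_object)
    ```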
  • Patent number: 9280703
    Abstract: Disclosed is an apparatus for tracking a location of a hand, including: a skin color image detector for detecting a skin color region from an image input from an image device using a predetermined skin color of a user; a face tracker for tracking a face using the detected skin color image; a motion detector for setting an ROI using location information of the tracked face, and detecting a motion image from the set ROI; a candidate region extractor for extracting a candidate region with respect to a hand of the user using the skin color image detected by the skin color image detector and the motion image detected by the motion detector; and a hand tracker for tracking a location of the hand in the extracted candidate region to find a final location of the hand.
    Type: Grant
    Filed: August 28, 2012
    Date of Patent: March 8, 2016
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Woo Han Yun, Jae Yeon Lee, Do Hyung Kim, Jae Hong Kim, Joo Chan Sohn
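
    A simplified NumPy sketch of the candidate-region step in patent 9,280,703: a hand candidate is where the skin-color mask and the motion mask overlap inside an ROI placed relative to the tracked face. The thresholds, channel ranges, and ROI geometry are assumptions.

    ```python
    import numpy as np

    def skin_mask(frame_ycrcb, cr_range=(133, 173), cb_range=(77, 127)):
        """Binary mask of pixels falling inside an assumed skin-tone range in YCrCb."""
        cr, cb = frame_ycrcb[..., 1], frame_ycrcb[..., 2]
        return ((cr >= cr_range[0]) & (cr <= cr_range[1]) &
                (cb >= cb_range[0]) & (cb <= cb_range[1]))

    def motion_mask(prev_gray, curr_gray, threshold=25):
        """Binary mask of pixels whose intensity changed noticeably between frames."""
        return np.abs(curr_gray.astype(int) - prev_gray.astype(int)) > threshold

    def hand_candidates(frame_ycrcb, prev_gray, curr_gray, face_box):
        """Hand candidate region: skin AND motion, restricted to an ROI around the face."""
        x, y, w, h = face_box
        roi = np.zeros(curr_gray.shape, dtype=bool)
        roi[y:y + 3 * h, max(0, x - 2 * w):x + 3 * w] = True  # assumed ROI beside/below the face
        return skin_mask(frame_ycrcb) & motion_mask(prev_gray, curr_gray) & roi
    ```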
  • Patent number: 9201425
    Abstract: Provided are a human-tracking method and a robot apparatus. The human-tracking method includes receiving an image frame including a color image and a depth image, determining whether user tracking was successful in a previous image frame, and determining a location of a user and a goal position to which a robot apparatus is to move based on the color image and the depth image in the image frame, when user tracking was successful in the previous frame. Accordingly, a current location of the user can be predicted from the depth image, user tracking can be quickly performed, and the user can be re-detected and tracked using user information acquired in user tracking when detection of the user fails due to obstacles or the like.
    Type: Grant
    Filed: September 9, 2013
    Date of Patent: December 1, 2015
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Young Woo Yoon, Do Hyung Kim, Woo Han Yun, Ho Sub Yoon, Jae Yeon Lee, Jae Hong Kim, Jong Hyun Park
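
    A hedged sketch of the per-frame decision in patent 9,201,425: when tracking succeeded in the previous frame, the user's location is estimated from the color and depth images and a goal position for the robot is derived; otherwise the user is re-detected from stored appearance information. The estimator callables and the 1 m standoff are assumptions.

    ```python
    import numpy as np

    def track_user(frame, previous, estimate_from_depth, redetect_user):
        """frame: dict with 'color' and 'depth'; previous: result of the last frame."""
        if previous.get("tracked"):
            location = estimate_from_depth(frame["depth"], frame["color"], previous["location"])
        else:
            location = redetect_user(frame["color"], previous.get("appearance"))
            if location is None:
                return {"tracked": False}  # keep stored appearance for later re-detection
        goal = np.asarray(location) - np.array([0.0, 0.0, 1.0])  # stop about 1 m short of the user
        return {"tracked": True, "location": location, "goal": goal}
    ```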
  • Patent number: 9183431
    Abstract: An apparatus includes an image receiving module configured to collect a depth image provided from a camera, a human body detection module configured to detect a human body from the collected depth image, and an activity recognition module configured to recognize an action of the human body on the basis of a 3-dimensional action volume extracted from the human body and a previously learned action model.
    Type: Grant
    Filed: January 23, 2014
    Date of Patent: November 10, 2015
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Do-Hyung Kim, Jae Hong Kim, Kye Kyung Kim, Youngwoo Yoon, Woo han Yun, Ho sub Yoon, Jae Yeon Lee, Suyoung Chi, Young-Jo Cho, Kyu-Dae Ban, Jong-Hyun Park
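
    A toy NumPy sketch of the "3-dimensional action volume" idea in patent 9,183,431: body masks from a sequence of depth frames are accumulated into a voxel grid and matched against previously learned per-action templates. The voxelization and the nearest-template matching shown here are assumptions, not the patented model.

    ```python
    import numpy as np

    def action_volume(body_masks, grid=(32, 32, 16)):
        """Stack binary body masks over time into a fixed-size occupancy volume."""
        volume = np.zeros(grid, dtype=np.float32)
        for t, mask in enumerate(body_masks):
            z = min(int(t / len(body_masks) * grid[2]), grid[2] - 1)  # time axis of the volume
            ys, xs = np.nonzero(mask)
            volume[ys * grid[0] // mask.shape[0],
                   xs * grid[1] // mask.shape[1], z] = 1.0
        return volume

    def recognize(volume, action_models):
        """Return the learned action whose template volume is closest (sum of squared error)."""
        return min(action_models, key=lambda a: np.sum((action_models[a] - volume) ** 2))
    ```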
  • Patent number: 9129154
    Abstract: Provided is a gesture recognition apparatus. The gesture recognition apparatus includes a human detection unit, a gesture region setting unit, an arm detection unit, and a gesture determination unit. The human detection unit detects a face region of a user from an input image. The gesture region setting unit sets a gesture region, in which a gesture of the user's arm occurs, with respect to the detected face region. The arm detection unit detects an arm region of the user in the gesture region. The gesture determination unit analyzes the position, moving directionality, and shape information of the arm region in the gesture region to determine a target gesture of the user. Such a gesture recognition apparatus may be used as a useful means for human-robot interaction at a long distance, where a robot has difficulty recognizing a user's voice.
    Type: Grant
    Filed: December 15, 2009
    Date of Patent: September 8, 2015
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Do Hyung Kim, Jae Yeon Lee, Woo Han Yun, Su Young Chi, Ho Sub Yoon, Hye Jin Kim, Young Woo Yoon
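
    A simplified sketch of the gesture-region idea in patent 9,129,154: the region where arm gestures are expected is placed relative to the detected face box, and the arm's horizontal oscillation inside it is read as a waving gesture. The region proportions and the oscillation test are assumptions.

    ```python
    def gesture_region(face_box, frame_shape):
        """Rectangle (left, top, right, bottom) around/below the face where arm gestures occur."""
        x, y, w, h = face_box
        height, width = frame_shape[:2]
        left = max(0, x - 2 * w)
        right = min(width, x + 3 * w)
        top = max(0, y - h)
        bottom = min(height, y + 4 * h)
        return left, top, right, bottom

    def is_waving(arm_centroids, min_direction_changes=2):
        """Count sign changes of horizontal motion across recent arm positions."""
        xs = [c[0] for c in arm_centroids]
        deltas = [b - a for a, b in zip(xs, xs[1:]) if b != a]
        changes = sum(1 for a, b in zip(deltas, deltas[1:]) if a * b < 0)
        return changes >= min_direction_changes
    ```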
  • Patent number: 9008440
    Abstract: Disclosed are a component recognizing apparatus and a component recognizing method. The component recognizing apparatus includes: an image preprocessing unit configured to extract component edges from an input component image by using a plurality of edge detecting techniques, and detect a component region by using the extracted component edges; a feature extracting unit configured to extract a component feature from the detected component region, and create a feature vector by using the component feature; and a component recognizing unit configured to input the created feature vector to an artificial neural network which has been trained in advance to recognize a component category through a plurality of component image samples, and recognize the component category according to the result.
    Type: Grant
    Filed: July 10, 2012
    Date of Patent: April 14, 2015
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Kye Kyung Kim, Woo Han Yun, Hye Jin Kim, Su Young Chi, Jae Yeon Lee, Mun Sung Han, Jae Hong Kim, Joo Chan Sohn
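
    A rough NumPy-only sketch of the pipeline in patent 9,008,440. Simple gradient edges stand in for the "plurality of edge detecting techniques", and a single linear layer stands in for the trained artificial neural network; both substitutions are assumptions for illustration.

    ```python
    import numpy as np

    def edge_map(gray):
        """Coarse edge strength from horizontal and vertical intensity differences."""
        gx = np.abs(np.diff(gray.astype(float), axis=1, append=0))
        gy = np.abs(np.diff(gray.astype(float), axis=0, append=0))
        return np.maximum(gx, gy)

    def feature_vector(gray, bins=16):
        """Normalized histogram of edge strengths as a toy component feature."""
        hist, _ = np.histogram(edge_map(gray), bins=bins, range=(0, 255))
        return hist / max(hist.sum(), 1)

    def classify(features, weights, biases, categories):
        """Single linear layer as a stand-in for the pre-trained neural network."""
        scores = features @ weights + biases
        return categories[int(np.argmax(scores))]
    ```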
  • Publication number: 20150092981
    Abstract: An apparatus includes an image receiving module configured to collect a depth image provided from a camera, a human body detection module configured to detect a human body from the collected depth image, and an activity recognition module configured to recognize an action of the human body on the basis of a 3-dimensional action volume extracted from the human body and a previously learned action model.
    Type: Application
    Filed: January 23, 2014
    Publication date: April 2, 2015
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Do-Hyung KIM, Jae Hong KIM, Kye Kyung KIM, Youngwoo YOON, Woo han YUN, Ho sub YOON, Jae Yeon LEE, SUYOUNG CHI, Young-Jo CHO, Kyu-Dae BAN, Jong-Hyun PARK
  • Publication number: 20150088359
    Abstract: A mobile robot having a returning mechanism includes one or more moving members mounted on a body of the mobile robot; and a cable member connected to one side of the mobile robot so as to supply the mobile robot with electrical power. Further, the mobile robot includes a returning member having a rigidity greater than that of the cable member and disposed to wrap the cable member so that the cable member is placed within the returning member; and a take-up unit configured to pull the returning member to keep it taut.
    Type: Application
    Filed: February 28, 2014
    Publication date: March 26, 2015
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Sunglok CHOI, Woo han Yun, Jae Hyun Park, SeungHwan Park, Wonpil Yu, Yu-Cheol Lee
  • Publication number: 20140348380
    Abstract: A method for tracking an object in an object tracking apparatus includes receiving an image frame of an image; detecting a target, a depth-analogous obstacle, and an appearance-analogous obstacle; tracking the target, the depth-analogous obstacle, and the appearance-analogous obstacle; and, when the detected target overlaps the depth-analogous obstacle, comparing the variation of the tracking score of the target with that of the depth-analogous obstacle. Further, the method includes continuously tracking the target when the variation of the tracking score of the target is below that of the depth-analogous obstacle, and re-detecting the target and processing the next frame when the variation of the tracking score of the target is above that of the depth-analogous obstacle.
    Type: Application
    Filed: January 24, 2014
    Publication date: November 27, 2014
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Youngwoo YOON, Woo han YUN, Ho sub YOON, Jae Yeon LEE, Do-Hyung KIM, Jae Hong KIM, Jong-Hyun PARK
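
    A sketch of the overlap-handling rule in publication 20140348380: when the tracked target overlaps a depth-analogous obstacle, the change in tracking score of each is compared to decide whether to keep tracking or to trigger re-detection. The dictionary-based score bookkeeping is an assumption.

    ```python
    def handle_overlap(target, obstacle):
        """Return 'track' to keep tracking the target, 'redetect' otherwise.

        target and obstacle are dicts holding the current and previous tracking scores.
        """
        target_variation = target["score"] - target["prev_score"]
        obstacle_variation = obstacle["score"] - obstacle["prev_score"]
        if target_variation < obstacle_variation:
            return "track"      # target still explains its region better than the obstacle
        return "redetect"       # tracker likely drifted onto the obstacle
    ```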
  • Publication number: 20140218516
    Abstract: A human information recognition method includes analyzing sensor data from multi-sensor resources placed in a recognition space to generate human information based on the sensor data, the human information including identity, location, and activity information of people present in the recognition space. Further, the human information recognition method includes mixing the human information based on the sensor data with human information acquired through interaction with the people present in the recognition space; and storing, in a database unit, a human model of the people present in the recognition space based on the mixed human information.
    Type: Application
    Filed: July 1, 2013
    Publication date: August 7, 2014
    Inventors: Do-Hyung KIM, Ho sub YOON, Jae Yeon LEE, Kyu-Dae BAN, Woo han YUN, Youngwoo YOON, Jae Hong KIM, Young-Jo CHO, Suyoung CHI, Kye Kyung KIM
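
    An illustrative sketch of the information-mixing step in publication 20140218516: identity, location, and activity derived from sensors are merged with information obtained through direct interaction, and the result is stored as that person's model. The dictionary-based merge and storage are assumptions.

    ```python
    def mix_human_information(sensor_info, interaction_info):
        """Interaction-confirmed fields override sensor-only estimates."""
        merged = dict(sensor_info)
        merged.update({k: v for k, v in interaction_info.items() if v is not None})
        return merged

    def store_human_model(database, person_id, sensor_info, interaction_info):
        """Persist the mixed human model for one person (database is any dict-like store)."""
        database[person_id] = mix_human_information(sensor_info, interaction_info)
    ```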
  • Publication number: 20140170628
    Abstract: Provided is a multiple-intelligence detection system. The multiple-intelligence detection system includes an image detection device obtaining image information for evaluating multiple-intelligence from a user, a multiple-intelligence measurement model unit receiving the image information from the image detection device to perform multiple-intelligence evaluation through selection of one of a first reaction and a second reaction, and a content unit receiving a result of the evaluated multiple-intelligence from the multiple-intelligence measurement model unit to generate an individual portfolio on the basis of the received result. The multiple-intelligence measurement model unit selects one of the first and second reactions on the basis of a reference reaction according to feelings and behavior patterns of the user.
    Type: Application
    Filed: June 27, 2013
    Publication date: June 19, 2014
    Inventors: Chan Kyu PARK, Woo han YUN, Do-Hyung KIM, Ho Sub YOON, Jae Hong KIM
  • Patent number: 8705814
    Abstract: Disclosed is a method of detecting an upper body. The method includes detecting an omega candidate area including a shape formed of a face and a shoulder line of a human from a target image, cutting the target image into the upper body candidate area including the omega candidate area, detecting a human face from the upper body candidate area, and judging whether the upper body of the human is included in the target image according to the result of detecting the human face.
    Type: Grant
    Filed: December 21, 2011
    Date of Patent: April 22, 2014
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Woo Han Yun, Do Hyung Kim, Jae Yeon Lee, Kyu Dae Ban, Dae Ha Lee, Mun Sung Han, Ho Sub Yoon, Su Young Chi, Yun Koo Chung, Joo Chan Sohn, Hye Jin Kim, Young Woo Yoon, Jae Hong Kim, Jae Il Cho
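
    A hedged outline of the decision flow in patent 8,705,814: an "omega" (head-and-shoulder) candidate area is found first, the image is cropped to an upper-body candidate area around it, and a face detection inside that crop decides whether an upper body is present. The detectors are passed in as callables because the patent does not fix particular ones, and the crop proportions are assumptions.

    ```python
    def contains_upper_body(image, detect_omega, detect_face):
        """image: NumPy array (H, W[, 3]); detectors return a box (x, y, w, h) or None."""
        candidate = detect_omega(image)
        if candidate is None:
            return False
        x, y, w, h = candidate
        crop = image[max(0, y - h):y + 2 * h, max(0, x - w):x + 2 * w]  # upper-body candidate area
        return detect_face(crop) is not None
    ```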
  • Publication number: 20140107842
    Abstract: Provided are a human-tracking method and a robot apparatus. The human-tracking method includes receiving an image frame including a color image and a depth image, determining whether user tracking was successful in a previous image frame, and determining a location of a user and a goal position to which a robot apparatus is to move based on the color image and the depth image in the image frame, when user tracking was successful in the previous frame. Accordingly, a current location of the user can be predicted from the depth image, user tracking can be quickly performed, and the user can be re-detected and tracked using user information acquired in user tracking when detection of the user fails due to obstacles or the like.
    Type: Application
    Filed: September 9, 2013
    Publication date: April 17, 2014
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Young Woo YOON, Do Hyung KIM, Woo Han YUN, Ho Sub YOON, Jae Yeon LEE, Jae Hong KIM, Jong Hyun PARK
  • Publication number: 20140093851
    Abstract: Disclosed is a horseback riding simulator configured to operate in an adaptive manner with respect to a user and a method using the same, the horseback riding simulator including a user identification unit to identify a user through user identification information based on user facial information extracted from an input image, a posture recognition unit to calculate user posture information by extracting information related to each of designated body parts of the user from the input image, a coaching unit to provide the user with instruction in horseback riding posture, based on the user identification information and the user posture information, and a sensory realization unit to control a horseback riding mechanism based on a user-intended posture calculated through analysis of the user posture information. Accordingly, user-customized horseback riding instruction is provided through identification of a user and recognition of a user's horseback riding posture.
    Type: Application
    Filed: September 19, 2013
    Publication date: April 3, 2014
    Applicant: Electronics & Telecommunications Research Institute
    Inventors: Kye Kyung KIM, Sang Seung KANG, Woo Han YUN, Su Young CHI, Jae Hong KIM, Jong Hyun PARK
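
    A simplified sketch of the coaching step in publication 20140093851: the rider's measured joint angles are compared with a reference riding posture, and the largest deviations become instructions. The joint names, reference values, and 10-degree tolerance are assumptions.

    ```python
    REFERENCE_POSTURE = {"back_angle": 90.0, "knee_angle": 120.0, "heel_angle": 85.0}  # assumed

    def coach(user_posture, tolerance_deg=10.0):
        """Return per-joint feedback for deviations beyond the tolerance."""
        feedback = []
        for joint, target in REFERENCE_POSTURE.items():
            measured = user_posture.get(joint)
            if measured is not None and abs(measured - target) > tolerance_deg:
                direction = "increase" if measured < target else "decrease"
                feedback.append(f"{direction} {joint.replace('_', ' ')} toward {target:.0f} degrees")
        return feedback
    ```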
  • Patent number: 8548201
    Abstract: The present invention detects a candidate ROI group associated with character strings/figure strings on the basis of a result acquired through prior learning of various types of license plates, verifies the detected ROI candidate group by using at least one of five predetermined conditions, and determines an MBR region in the ROI selected from the verified candidate group by considering the ratio between the height and width of the ROI, thereby recognizing the license plate of the automobile. According to the present invention, it is possible to automatically detect the location of the license plate regardless of the various license plate specifications defined for each country.
    Type: Grant
    Filed: September 1, 2011
    Date of Patent: October 1, 2013
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Ho Sub Yoon, Kyu Dae Ban, Young Woo Yoon, Woo Han Yun, Do Hyung Kim, Jae Yeon Lee, Jae Hong Kim, Su Young Chi, Joo Chan Sohn
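
    A sketch of the verification and selection stage in patent 8,548,201: ROI candidates produced by the prior-learning detector are filtered with simple geometric conditions, and the surviving candidate whose width/height ratio best fits a plate is kept as the MBR. The particular conditions and ratio bounds are assumptions; the patent only states that five predetermined conditions exist.

    ```python
    def select_plate_mbr(candidates, frame_area, ratio_range=(2.0, 6.0)):
        """candidates: list of (x, y, w, h) boxes from the prior-learning detector."""
        verified = [
            (x, y, w, h) for (x, y, w, h) in candidates
            if h > 0
            and ratio_range[0] <= w / h <= ratio_range[1]          # plate-like aspect ratio
            and 0.001 * frame_area <= w * h <= 0.2 * frame_area    # plausible plate size
        ]
        if not verified:
            return None
        target_ratio = sum(ratio_range) / 2
        return min(verified, key=lambda box: abs(box[2] / box[3] - target_ratio))
    ```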
  • Publication number: 20130177204
    Abstract: Disclosed is an apparatus for tracking a location of a hand, including: a skin color image detector for detecting a skin color region from an image input from an image device using a predetermined skin color of a user; a face tracker for tracking a face using the detected skin color image; a motion detector for setting an ROI using location information of the tracked face, and detecting a motion image from the set ROI; a candidate region extractor for extracting a candidate region with respect to a hand of the user using the skin color image detected by the skin color image detector and the motion image detected by the motion detector; and a hand tracker for tracking a location of the hand in the extracted candidate region to find a final location of the hand.
    Type: Application
    Filed: August 28, 2012
    Publication date: July 11, 2013
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Woo Han YUN, Jae Yeon Lee, Do Hyung Kim, Jae Hong Kim, Joo Chan Sohn
  • Publication number: 20130163858
    Abstract: Disclosed are a component recognizing apparatus and a component recognizing method. The component recognizing apparatus includes: an image preprocessing unit configured to extract component edges from an input component image by using a plurality of edge detecting techniques, and detect a component region by using the extracted component edges; a feature extracting unit configured to extract a component feature from the detected component region, and create a feature vector by using the component feature; and a component recognizing unit configured to input the created feature vector to an artificial neural network which has been trained in advance to recognize a component category through a plurality of component image samples, and recognize the component category according to the result.
    Type: Application
    Filed: July 10, 2012
    Publication date: June 27, 2013
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Kye Kyung KIM, Woo Han YUN, Hye Jin KIM, Su Young CHI, Jae Yeon LEE, Mun Sung HAN, Jae Hong KIM, Joo Chan SOHN