Patents by Inventor Young Woo Yoon
Young Woo Yoon has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250187180
Abstract: Disclosed herein is a method for performing a robot skill based on skill uncertainty using a large language model. The method includes generating a subtask list using a large language model by receiving a target task and environment information, mapping a subtask in the subtask list into a skill embedding space through an abstract skill policy network, and performing the subtask by decoding the mapped subtask through a manipulation skill policy network.
Type: Application
Filed: December 22, 2023
Publication date: June 12, 2025
Inventors: Tae-Woo KIM, Jae-Hong KIM, Young-Woo YOON, Min-Su JANG, Jae-Woo CHOI
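A minimal sketch of the pipeline the abstract describes (task decomposition, mapping into a skill embedding space, decoding into a manipulation skill). All function names and the toy planner, embedder, and decoder below are hypothetical stand-ins for illustration, not the patented networks.

```python
# Illustrative sketch, not the patented method: subtask planning -> skill
# embedding -> skill decoding. plan_subtasks, embed_subtask, and decode_skill
# are made-up placeholders for the LLM planner and the two policy networks.
from typing import List
import hashlib

def plan_subtasks(target_task: str, environment: str) -> List[str]:
    """Stand-in for the LLM planner: split the target task into subtasks."""
    # A real system would prompt a large language model with the task and
    # environment description; here we just split on "and" for illustration.
    return [part.strip() for part in target_task.split(" and ")]

def embed_subtask(subtask: str, dim: int = 8) -> List[float]:
    """Stand-in for the abstract skill policy network: map text to a vector."""
    digest = hashlib.sha256(subtask.encode()).digest()
    return [b / 255.0 for b in digest[:dim]]

def decode_skill(embedding: List[float]) -> str:
    """Stand-in for the manipulation skill policy network: pick a primitive."""
    primitives = ["reach", "grasp", "move", "place"]
    return primitives[int(sum(embedding) * 10) % len(primitives)]

if __name__ == "__main__":
    for subtask in plan_subtasks("pick up the cup and put it on the shelf", "kitchen table"):
        z = embed_subtask(subtask)
        print(f"{subtask!r} -> skill {decode_skill(z)!r}")
```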
-
Publication number: 20250065188
Abstract: Disclosed herein are an apparatus and method for counting repetitive movements based on artificial intelligence. The method may include generating standard movement information that includes a key pose, extracted from a demonstration movement image stream based on human skeleton information (a set of pieces of positional information of human joints), and major joint information in the key pose, and counting repetitive movements depending on whether a user movement matches the standard movement information based on human skeleton information of a user movement image stream.
Type: Application
Filed: February 1, 2024
Publication date: February 27, 2025
Inventors: Do-Hyung KIM, Jae-Hong KIM, Young-Woo YOON, Ho-Beom JEON
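A hedged sketch of repetition counting against a stored key pose: the joint layout, tolerance, and counting rule are assumptions for illustration, not the patented algorithm.

```python
# Illustrative sketch (not the patented method): count a repetition each time
# the user's skeleton re-enters the key pose defined by the major joints.
from math import dist
from typing import Dict, List, Tuple

Pose = Dict[str, Tuple[float, float]]  # joint name -> (x, y)

def matches_key_pose(pose: Pose, key_pose: Pose, major_joints: List[str], tol: float = 0.1) -> bool:
    """A user pose matches when every major joint is within `tol` of the key pose."""
    return all(dist(pose[j], key_pose[j]) <= tol for j in major_joints)

def count_repetitions(stream: List[Pose], key_pose: Pose, major_joints: List[str]) -> int:
    count, in_pose = 0, False
    for pose in stream:
        hit = matches_key_pose(pose, key_pose, major_joints)
        if hit and not in_pose:  # count on each entry into the key pose
            count += 1
        in_pose = hit
    return count

key = {"wrist": (0.0, 1.0), "elbow": (0.0, 0.5)}
stream = [{"wrist": (0.0, 0.0), "elbow": (0.0, 0.0)},
          {"wrist": (0.0, 1.0), "elbow": (0.0, 0.5)},    # repetition 1
          {"wrist": (0.0, 0.2), "elbow": (0.0, 0.1)},
          {"wrist": (0.02, 0.98), "elbow": (0.0, 0.5)}]  # repetition 2
print(count_repetitions(stream, key, ["wrist", "elbow"]))  # 2
```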
-
Patent number: 12017116
Abstract: Disclosed herein are an apparatus and method for evaluating a human motion using a mobile robot. The method may include identifying the exercise motion of a user by analyzing an image of the entire body of the user captured using a camera installed in the mobile robot, evaluating the pose of the user by comparing the standard pose of the identified exercise motion with images of the entire body of the user captured by the camera of the mobile robot from two or more target locations, and comprehensively evaluating the exercise motion of the user based on the pose evaluation information of the user from each of the two or more target locations.
Type: Grant
Filed: November 30, 2020
Date of Patent: June 25, 2024
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Do-Hyung Kim, Jae-Hong Kim, Young-Woo Yoon, Jae-Yeon Lee, Min-Su Jang, Jeong-Dan Choi
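A sketch of scoring a user's pose against a standard pose from several capture locations and combining the per-location scores. The joint representation, scoring function, and averaging rule are illustrative assumptions, not the patent's specifics.

```python
# Hedged sketch: per-viewpoint pose scoring followed by a simple average,
# standing in for the "comprehensive evaluation" described in the abstract.
from math import dist
from typing import Dict, List, Tuple

Pose = Dict[str, Tuple[float, float]]

def pose_score(observed: Pose, standard: Pose) -> float:
    """Higher is better: inverse of the mean joint-position error."""
    err = sum(dist(observed[j], standard[j]) for j in standard) / len(standard)
    return 1.0 / (1.0 + err)

def evaluate_exercise(per_location_poses: List[Pose], standard: Pose) -> float:
    """Combine scores from each capture location into one overall score."""
    scores = [pose_score(p, standard) for p in per_location_poses]
    return sum(scores) / len(scores)

standard = {"shoulder": (0.0, 1.5), "hand": (0.5, 1.5)}
front = {"shoulder": (0.0, 1.5), "hand": (0.45, 1.5)}
side = {"shoulder": (0.1, 1.4), "hand": (0.6, 1.4)}
print(round(evaluate_exercise([front, side], standard), 3))
```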
-
Patent number: 11740184
Abstract: Provided is a fiber web for a gas sensor. In one exemplary embodiment of the present invention, there is provided a fiber web for a gas sensor including nanofibers including a fiber-forming material and a sensing material for reacting with a target substance in a test gas. According to the exemplary embodiment, the fiber web for a gas sensor is capable of identifying the presence or absence of a target substance in a test gas and quantitatively determining the concentration of a target substance, and exhibits improved sensitivity due to having an increased area of contact and reaction with a target substance contained in a test gas.
Type: Grant
Filed: January 26, 2018
Date of Patent: August 29, 2023
Assignee: Amogreentech Co., Ltd.
Inventor: Young Woo Yoon
-
Patent number: 11691291
Abstract: Disclosed herein are an apparatus and method for generating robot interaction behavior. The method for generating robot interaction behavior includes generating a co-speech gesture of a robot corresponding to utterance input of a user, generating a nonverbal behavior of the robot, which is a sequence of next joint positions of the robot estimated from joint positions of the user and current joint positions of the robot based on a pre-trained neural network model for robot pose estimation, and generating a final behavior using at least one of the co-speech gesture and the nonverbal behavior.
Type: Grant
Filed: November 27, 2020
Date of Patent: July 4, 2023
Assignee: Electronics and Telecommunications Research Institute
Inventors: Woo-Ri Ko, Do-Hyung Kim, Jae-Hong Kim, Young-Woo Yoon, Jae-Yeon Lee, Min-Su Jang
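A rough sketch of combining a co-speech gesture with a predicted nonverbal behavior into a final behavior. The mirroring "model" and the gesture lookup below are toy placeholders for the pre-trained pose-estimation network and gesture generator mentioned in the abstract.

```python
# Illustrative sketch only: pick the co-speech gesture when one exists,
# otherwise fall back to the predicted nonverbal behavior.
from typing import Dict, List

Joints = Dict[str, float]  # joint name -> angle (radians), simplified to 1-D

def predict_next_joints(user: Joints, robot: Joints, gain: float = 0.3) -> Joints:
    """Toy stand-in: move each robot joint a fraction of the way toward the user's."""
    return {name: robot[name] + gain * (user[name] - robot[name]) for name in robot}

def co_speech_gesture(utterance: str) -> List[Joints]:
    """Toy gesture lookup keyed on the utterance."""
    if "hello" in utterance.lower():
        return [{"shoulder": 1.2, "elbow": 0.4}, {"shoulder": 1.2, "elbow": 0.9}]  # wave
    return []

def final_behavior(utterance: str, user: Joints, robot: Joints) -> List[Joints]:
    gesture = co_speech_gesture(utterance)
    # Prefer the explicit gesture when one exists; otherwise follow the predicted behavior.
    return gesture if gesture else [predict_next_joints(user, robot)]

print(final_behavior("Hello there", {"shoulder": 0.8, "elbow": 0.2}, {"shoulder": 0.0, "elbow": 0.0}))
```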
-
Publication number: 20220055221
Abstract: Disclosed herein are an apparatus and method for generating robot interaction behavior. The method for generating robot interaction behavior includes generating a co-speech gesture of a robot corresponding to utterance input of a user, generating a nonverbal behavior of the robot, which is a sequence of next joint positions of the robot estimated from joint positions of the user and current joint positions of the robot based on a pre-trained neural network model for robot pose estimation, and generating a final behavior using at least one of the co-speech gesture and the nonverbal behavior.
Type: Application
Filed: November 27, 2020
Publication date: February 24, 2022
Applicant: Electronics and Telecommunications Research Institute
Inventors: Woo-Ri KO, Do-Hyung KIM, Jae-Hong KIM, Young-Woo YOON, Jae-Yeon LEE, Min-Su JANG
-
Publication number: 20210394021
Abstract: Disclosed herein are an apparatus and method for evaluating a human motion using a mobile robot. The method may include identifying the exercise motion of a user by analyzing an image of the entire body of the user captured using a camera installed in the mobile robot, evaluating the pose of the user by comparing the standard pose of the identified exercise motion with images of the entire body of the user captured by the camera of the mobile robot from two or more target locations, and comprehensively evaluating the exercise motion of the user based on the pose evaluation information of the user from each of the two or more target locations.
Type: Application
Filed: November 30, 2020
Publication date: December 23, 2021
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Do-Hyung KIM, Jae-Hong KIM, Young-Woo YOON, Jae-Yeon LEE, Min-Su JANG, Jeong-Dan CHOI
-
Patent number: 11113988
Abstract: Disclosed herein are an apparatus for writing a motion script and an apparatus and method for self-teaching of a motion. The method for self-teaching of a motion, in which the apparatus for writing a motion script and the apparatus for self-teaching of a motion are used, includes creating, by the apparatus for writing a motion script, a motion script based on expert motion of a first user; analyzing, by the apparatus for self-teaching of a motion, a motion of a second user, who learns the expert motion, based on the motion script; and outputting, by the apparatus for self-teaching of a motion, a result of analysis of the motion of the second user.
Type: Grant
Filed: July 1, 2020
Date of Patent: September 7, 2021
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Do-Hyung Kim, Min-Su Jang, Jae-Hong Kim, Young-Woo Yoon, Jae-Il Cho
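A sketch, under assumed data structures, of the write-then-analyze flow: a "motion script" recorded from an expert and a simple per-pose comparison that reports how closely a learner follows it. The key-pose selection rule and feedback format are illustrative, not the patent's.

```python
# Hedged sketch of a motion script and learner analysis; both functions are
# simplified placeholders for the apparatuses described in the abstract.
from math import dist
from typing import Dict, List, Tuple

Pose = Dict[str, Tuple[float, float]]

def write_motion_script(expert_motion: List[Pose]) -> List[Pose]:
    """Keep every other frame as a key pose; a real tool would be far more selective."""
    return expert_motion[::2]

def analyze_learner(script: List[Pose], learner_motion: List[Pose], tol: float = 0.15) -> List[str]:
    report = []
    for i, (ref, obs) in enumerate(zip(script, learner_motion)):
        worst = max(ref, key=lambda j: dist(ref[j], obs[j]))  # joint with the largest error
        status = "OK" if dist(ref[worst], obs[worst]) <= tol else f"adjust {worst}"
        report.append(f"pose {i}: {status}")
    return report

expert = [{"hand": (0.0, 1.0)}, {"hand": (0.5, 1.0)}, {"hand": (1.0, 1.0)}]
learner = [{"hand": (0.05, 1.0)}, {"hand": (0.9, 0.7)}]
print("\n".join(analyze_learner(write_motion_script(expert), learner)))
```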
-
Publication number: 20200335007
Abstract: Disclosed herein are an apparatus for writing a motion script and an apparatus and method for self-teaching of a motion. The method for self-teaching of a motion, in which the apparatus for writing a motion script and the apparatus for self-teaching of a motion are used, includes creating, by the apparatus for writing a motion script, a motion script based on expert motion of a first user; analyzing, by the apparatus for self-teaching of a motion, a motion of a second user, who learns the expert motion, based on the motion script; and outputting, by the apparatus for self-teaching of a motion, a result of analysis of the motion of the second user.
Type: Application
Filed: July 1, 2020
Publication date: October 22, 2020
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Do-Hyung KIM, Min-Su JANG, Jae-Hong KIM, Young-Woo YOON, Jae-Il CHO
-
Patent number: 10777198
Abstract: Disclosed herein are an apparatus and method for determining the speech and motion properties of an interactive robot. The method for determining the speech and motion properties of an interactive robot includes receiving interlocutor conversation information including at least one of voice information and image information about an interlocutor that interacts with an interactive robot, extracting at least one of a verbal property and a nonverbal property of the interlocutor by analyzing the interlocutor conversation information, determining at least one of a speech property and a motion property of the interactive robot based on at least one of the verbal property, the nonverbal property, and context information inferred from a conversation between the interactive robot and the interlocutor, and controlling the operation of the interactive robot based on at least one of the determined speech property and motion property of the interactive robot.
Type: Grant
Filed: August 13, 2018
Date of Patent: September 15, 2020
Assignee: Electronics and Telecommunications Research Institute
Inventors: Young-Woo Yoon, Jae-Hong Kim, Jae-Yeon Lee, Min-Su Jang
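An illustrative sketch of adapting a robot's speech and motion style to an interlocutor. The specific properties (speech rate, loudness, gesture frequency) and the mapping rules are assumptions chosen for illustration, not the patented logic.

```python
# Hedged sketch: extract simple verbal/nonverbal properties of the interlocutor
# and map them to robot speech/motion properties by mirroring with clamping.
from dataclasses import dataclass

@dataclass
class InterlocutorProperties:
    speech_rate_wpm: float    # verbal property
    loudness_db: float        # verbal property
    gesture_frequency: float  # nonverbal property, gestures per minute

@dataclass
class RobotProperties:
    speech_rate_wpm: float
    volume_db: float
    motion_amplitude: float   # 0..1

def determine_robot_properties(p: InterlocutorProperties) -> RobotProperties:
    # Mirror the interlocutor, but clamp to a comfortable range.
    return RobotProperties(
        speech_rate_wpm=min(max(p.speech_rate_wpm, 90), 160),
        volume_db=min(max(p.loudness_db, 55), 70),
        motion_amplitude=min(p.gesture_frequency / 20.0, 1.0),
    )

print(determine_robot_properties(InterlocutorProperties(180, 72, 12)))
```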
-
Patent number: 10748444
Abstract: Disclosed herein are an apparatus for writing a motion script and an apparatus and method for self-teaching of a motion. The method for self-teaching of a motion, in which the apparatus for writing a motion script and the apparatus for self-teaching of a motion are used, includes creating, by the apparatus for writing a motion script, a motion script based on expert motion of a first user; analyzing, by the apparatus for self-teaching of a motion, a motion of a second user, who learns the expert motion, based on the motion script; and outputting, by the apparatus for self-teaching of a motion, a result of analysis of the motion of the second user.
Type: Grant
Filed: March 6, 2017
Date of Patent: August 18, 2020
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Do-Hyung Kim, Min-Su Jang, Jae-Hong Kim, Young-Woo Yoon, Jae-Il Cho
-
Publication number: 20190391082
Abstract: Provided is a fiber web for a gas sensor. In one exemplary embodiment of the present invention, there is provided a fiber web for a gas sensor including nanofibers including a fiber-forming material and a sensing material for reacting with a target substance in a test gas. According to the exemplary embodiment, the fiber web for a gas sensor is capable of identifying the presence or absence of a target substance in a test gas and quantitatively determining the concentration of a target substance, and exhibits improved sensitivity due to having an increased area of contact and reaction with a target substance contained in a test gas.
Type: Application
Filed: January 26, 2018
Publication date: December 26, 2019
Applicant: AMOGREENTECH CO., LTD.
Inventor: Young Woo YOON
-
Publication number: 20190164548
Abstract: Disclosed herein are an apparatus and method for determining the speech and motion properties of an interactive robot. The method for determining the speech and motion properties of an interactive robot includes receiving interlocutor conversation information including at least one of voice information and image information about an interlocutor that interacts with an interactive robot, extracting at least one of a verbal property and a nonverbal property of the interlocutor by analyzing the interlocutor conversation information, determining at least one of a speech property and a motion property of the interactive robot based on at least one of the verbal property, the nonverbal property, and context information inferred from a conversation between the interactive robot and the interlocutor, and controlling the operation of the interactive robot based on at least one of the determined speech property and motion property of the interactive robot.
Type: Application
Filed: August 13, 2018
Publication date: May 30, 2019
Applicant: Electronics and Telecommunications Research Institute
Inventors: Young-Woo YOON, Jae-Hong KIM, Jae-Yeon LEE, Min-Su JANG
-
Patent number: 9990538
Abstract: Disclosed herein is a face recognition technology using physiognomic feature information, which can improve the accuracy of face recognition. For this, the face recognition method using physiognomic feature information includes defining standard physiognomic types for respective facial elements, capturing a facial image of a user, detecting information about facial elements from the facial image, and calculating similarity scores relative to the standard physiognomic types for respective facial elements of the user based on the facial element information.
Type: Grant
Filed: April 4, 2016
Date of Patent: June 5, 2018
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Ho-Sub Yoon, Kyu-Dae Ban, Young-Woo Yoon, Jae-Hong Kim
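A sketch, under assumed feature encodings, of scoring each facial element of a user against predefined standard physiognomic types to obtain the per-element similarity scores the abstract describes. The feature vectors and type names are made up for illustration.

```python
# Illustrative sketch only: per-element similarity scores against standard
# physiognomic types; the data and similarity measure are assumptions.
from math import dist
from typing import Dict, Tuple

# Standard physiognomic types per facial element: element -> type name -> feature vector.
STANDARD_TYPES: Dict[str, Dict[str, Tuple[float, float]]] = {
    "eye":  {"round": (0.8, 0.6), "narrow": (0.3, 0.9)},
    "nose": {"high": (0.7, 0.5), "flat": (0.4, 0.4)},
}

def similarity(a: Tuple[float, float], b: Tuple[float, float]) -> float:
    return 1.0 / (1.0 + dist(a, b))

def element_scores(user_features: Dict[str, Tuple[float, float]]) -> Dict[str, Dict[str, float]]:
    return {
        element: {t: round(similarity(user_features[element], proto), 3)
                  for t, proto in types.items()}
        for element, types in STANDARD_TYPES.items()
        if element in user_features
    }

print(element_scores({"eye": (0.75, 0.62), "nose": (0.5, 0.45)}))
```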
-
Publication number: 20170358243
Abstract: Disclosed herein are an apparatus for writing a motion script and an apparatus and method for self-teaching of a motion. The method for self-teaching of a motion, in which the apparatus for writing a motion script and the apparatus for self-teaching of a motion are used, includes creating, by the apparatus for writing a motion script, a motion script based on expert motion of a first user; analyzing, by the apparatus for self-teaching of a motion, a motion of a second user, who learns the expert motion, based on the motion script; and outputting, by the apparatus for self-teaching of a motion, a result of analysis of the motion of the second user.
Type: Application
Filed: March 6, 2017
Publication date: December 14, 2017
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Do-Hyung KIM, Min-Su JANG, Jae-Hong KIM, Young-Woo YOON, Jae-Il CHO
-
Publication number: 20170193284
Abstract: Disclosed herein is a face recognition technology using physiognomic feature information, which can improve the accuracy of face recognition. For this, the face recognition method using physiognomic feature information includes defining standard physiognomic types for respective facial elements, capturing a facial image of a user, detecting information about facial elements from the facial image, and calculating similarity scores relative to the standard physiognomic types for respective facial elements of the user based on the facial element information.
Type: Application
Filed: April 4, 2016
Publication date: July 6, 2017
Inventors: Ho-Sub YOON, Kyu-Dae BAN, Young-Woo YOON, Jae-Hong KIM
-
Publication number: 20170076629
Abstract: Disclosed herein are an apparatus and method for supporting choreography, which can easily and systematically search for existing dances through various interfaces and can check the simulation of the found dances. For this, the apparatus includes a dance motion DB for storing pieces of motion capture data about respective multiple dance motions, a dance attribute DB for storing pieces of biomechanical information about respective multiple dance motions, a search unit for receiving a search target dance from a user using a method corresponding to at least one of a sectional motion search and a dance attribute search, and searching the dance motion DB and the dance attribute DB for choreographic data based on similarity determination, and a display unit for displaying choreographic data of the dance motion DB and the dance attribute DB, found as a result of the search based on similarity determined by the search unit.
Type: Application
Filed: March 3, 2016
Publication date: March 16, 2017
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Do-Hyung KIM, Jae-Hong KIM, Young-Woo YOON, Min-Su JANG, Cheon-Shu PARK, Sung-Woong SHIN
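A toy in-memory version of the two databases described in the abstract and a similarity search over the attribute DB. The schemas, attribute choices, and distance-based similarity are illustrative assumptions, not the patented design.

```python
# Hedged sketch: a dance motion DB and a dance attribute DB as dictionaries,
# with a nearest-neighbour attribute search standing in for the search unit.
from math import dist
from typing import Dict, List, Tuple

dance_motion_db: Dict[str, List[Tuple[float, float]]] = {   # name -> motion-capture track (x, y)
    "wave":  [(0.0, 1.0), (0.5, 1.2), (1.0, 1.0)],
    "slide": [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)],
}
dance_attribute_db: Dict[str, Tuple[float, float]] = {      # name -> (tempo, energy)
    "wave": (0.4, 0.6),
    "slide": (0.7, 0.3),
}

def attribute_search(query: Tuple[float, float], top_k: int = 1) -> List[str]:
    """Return the dances whose (tempo, energy) attributes are closest to the query."""
    ranked = sorted(dance_attribute_db, key=lambda name: dist(dance_attribute_db[name], query))
    return ranked[:top_k]

for name in attribute_search((0.45, 0.55)):
    print(name, dance_motion_db[name])
```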
-
Patent number: 9201425
Abstract: Provided are a human-tracking method and a robot apparatus. The human-tracking method includes receiving an image frame including a color image and a depth image, determining whether user tracking was successful in a previous image frame, and determining a location of a user and a goal position to which a robot apparatus is to move based on the color image and the depth image in the image frame, when user tracking was successful in the previous frame. Accordingly, a current location of the user can be predicted from the depth image, user tracking can be quickly performed, and the user can be re-detected and tracked using user information acquired in user tracking when detection of the user fails due to obstacles or the like.
Type: Grant
Filed: September 9, 2013
Date of Patent: December 1, 2015
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Young Woo Yoon, Do Hyung Kim, Woo Han Yun, Ho Sub Yoon, Jae Yeon Lee, Jae Hong Kim, Jong Hyun Park
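A simplified sketch of the tracking loop described in the abstract: reuse the previous result when tracking succeeded, otherwise re-detect the user. The detection and prediction functions are toy placeholders, not the patented algorithms.

```python
# Illustrative sketch: per-frame tracking step with a re-detection fallback.
from typing import Optional, Tuple

Location = Tuple[float, float]  # user position in the robot's frame

def detect_user(color_frame, depth_frame) -> Optional[Location]:
    """Placeholder full re-detection (e.g., a person detector on the color image)."""
    return (1.0, 2.0)

def predict_from_depth(previous: Location, depth_frame) -> Optional[Location]:
    """Placeholder fast update of the previous location using the depth image."""
    return (previous[0] + 0.1, previous[1])

def track_step(color_frame, depth_frame, previous: Optional[Location]):
    """Return (user_location, goal_position) for this frame."""
    user = predict_from_depth(previous, depth_frame) if previous else detect_user(color_frame, depth_frame)
    goal = (user[0], user[1] - 0.8) if user else None  # stop short of the user
    return user, goal

print(track_step(color_frame=None, depth_frame=None, previous=None))
print(track_step(color_frame=None, depth_frame=None, previous=(1.0, 2.0)))
```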
-
Patent number: 9129154
Abstract: Provided is a gesture recognition apparatus. The gesture recognition apparatus includes a human detection unit, a gesture region setting unit, an arm detection unit and a gesture determination unit. The human detection unit detects a face region of a user from an input image. The gesture region setting unit sets a gesture region, in which a gesture of the user's arm occurs, with respect to the detected face region. The arm detection unit detects an arm region of the user in the gesture region. The gesture determination unit analyzes a position, moving directionality and shape information of the arm region in the gesture region to determine a target gesture of the user. Such a gesture recognition apparatus may be used as a useful means for human-robot interaction at a long distance, where a robot has difficulty in recognizing a user's voice.
Type: Grant
Filed: December 15, 2009
Date of Patent: September 8, 2015
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Do Hyung Kim, Jae Yeon Lee, Woo Han Yun, Su Young Chi, Ho Sub Yoon, Hye Jin Kim, Young Woo Yoon
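A rough sketch of the stage order in the abstract: find the face, derive a gesture region relative to it, then decide whether an arm raised into that region counts as the target gesture. All geometry and thresholds below are assumptions for illustration.

```python
# Illustrative sketch only: face-relative gesture region plus a point-in-box test.
from typing import Optional, Tuple

Box = Tuple[float, float, float, float]  # (x, y, width, height)

def gesture_region_for_face(face: Box) -> Box:
    """Place the gesture region beside and above the detected face."""
    x, y, w, h = face
    return (x - 2 * w, y - h, 5 * w, 3 * h)

def inside(point: Tuple[float, float], box: Box) -> bool:
    x, y, w, h = box
    return x <= point[0] <= x + w and y <= point[1] <= y + h

def is_target_gesture(face: Box, hand_point: Optional[Tuple[float, float]]) -> bool:
    return hand_point is not None and inside(hand_point, gesture_region_for_face(face))

face = (100.0, 80.0, 40.0, 40.0)
print(is_target_gesture(face, (90.0, 60.0)))    # hand raised near the head -> True
print(is_target_gesture(face, (300.0, 300.0)))  # hand far away -> False
```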
-
Patent number: 8705814
Abstract: Disclosed is a method of detecting an upper body. The method includes detecting an omega candidate area including a shape formed of a face and a shoulder line of a human from a target image, cutting the target image into the upper body candidate area including the omega candidate area, detecting a human face from the upper body candidate area, and judging whether the upper body of the human is included in the target image according to the result of detecting the human face.
Type: Grant
Filed: December 21, 2011
Date of Patent: April 22, 2014
Assignee: Electronics and Telecommunications Research Institute
Inventors: Woo Han Yun, Do Hyung Kim, Jae Yeon Lee, Kyu Dae Ban, Dae Ha Lee, Mun Sung Han, Ho Sub Yoon, Su Young Chi, Yun Koo Chung, Joo Chan Sohn, Hye Jin Kim, Young Woo Yoon, Jae Hong Kim, Jae Il Cho
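A minimal sketch of the decision flow in the abstract: an omega-shape (head and shoulders) candidate is confirmed only if a face is also found inside it. Both detectors below are placeholders for the real image-processing steps.

```python
# Illustrative sketch only: candidate detection followed by face verification.
from typing import List, Optional, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height)

def detect_omega_candidates(image) -> List[Box]:
    """Placeholder head-and-shoulder (omega) contour detector."""
    return [(40, 20, 80, 100)]

def detect_face(image, region: Box) -> Optional[Box]:
    """Placeholder face detector restricted to the candidate region."""
    x, y, w, h = region
    return (x + w // 4, y + 5, w // 2, h // 3)  # pretend a face was found

def contains_upper_body(image) -> bool:
    return any(detect_face(image, candidate) is not None
               for candidate in detect_omega_candidates(image))

print(contains_upper_body(image=None))  # True with these placeholder detectors
```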