Patents by Inventor Woon-Tack Woo

Woon-Tack Woo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10409447
    Abstract: The present invention includes: a targeting unit configured to, when an event by a user's action is generated in a 3D image displayed on a display device, acquire a first subspace of a first three-dimensional (3D) shape corresponding to the user's action; and a refinement unit configured to acquire a second subspace of a second 3D shape, of which position and scale are adjusted according to a user's gesture within a range of the first subspace acquired by the targeting unit.
    Type: Grant
    Filed: September 7, 2015
    Date of Patent: September 10, 2019
    Assignee: KOREA ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY
    Inventors: Woon Tack Woo, Hyeong Mook Lee
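The two-stage selection described in this abstract (coarse targeting, then gesture-driven refinement constrained to the first subspace) can be sketched as follows. This is an illustrative reading, not the patented implementation; the `Subspace` type and the clamping rule are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Subspace:
    center: tuple   # (x, y, z) center of a cubic subspace
    scale: float    # edge length of the cube

def refine(first: Subspace, delta_pos, scale_factor) -> Subspace:
    """Produce a second subspace whose position and scale follow a
    user gesture but are clamped so it stays inside the first
    (targeted) subspace, per the abstract's "within a range" wording."""
    # Scale may only shrink relative to the targeted subspace.
    scale = min(max(first.scale * scale_factor, 0.0), first.scale)
    # The center may move only as far as the remaining room allows.
    half_room = (first.scale - scale) / 2.0
    center = tuple(
        min(max(fc + d, fc - half_room), fc + half_room)
        for fc, d in zip(first.center, delta_pos)
    )
    return Subspace(center=center, scale=scale)
```

For example, refining a unit-origin cube of edge 2.0 by a half-scale gesture leaves at most 0.5 units of travel per axis before clamping.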
  • Patent number: 10304248
    Abstract: Provided is a method of providing an augmented reality interaction service, the method including: generating reference coordinates based on a 3-dimensional (3D) image including depth information obtained through a camera; segmenting a region corresponding to a pre-set object from the 3D image including the depth information obtained through the camera, based on depth information of the pre-set object and color space conversion; segmenting a sub-object having a motion component from the pre-set object in the segmented region, and detecting a feature point by modeling the sub-object and a palm region linked to the sub-object based on a pre-set algorithm; and controlling a 3D object for use of an augmented reality service by estimating a posture of the sub-object based on joint information of the pre-set object provided through a certain user interface (UI).
    Type: Grant
    Filed: June 26, 2015
    Date of Patent: May 28, 2019
    Assignee: KOREA ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY
    Inventors: Woon Tack Woo, Tae Jin Ha
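The segmentation steps in this abstract (depth range plus color-space cues, then locating the palm region) can be sketched in NumPy. The thresholds and the centroid-based palm estimate are illustrative assumptions, not the patent's algorithm.

```python
import numpy as np

def segment_hand(depth, color_mask, d_min=0.3, d_max=0.8):
    """Segment a hand-like region by combining a depth range with a
    skin-color mask (the color-space-conversion step in the abstract).
    Thresholds in meters are illustrative only."""
    in_range = (depth > d_min) & (depth < d_max)
    return in_range & color_mask

def palm_center(mask):
    """Crude palm estimate: centroid of the segmented pixels."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return float(xs.mean()), float(ys.mean())
```

A real pipeline would follow this with sub-object (finger) segmentation and joint-based pose estimation, which are beyond a short sketch.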
  • Publication number: 20180210627
    Abstract: The present invention includes: a targeting unit configured to, when an event by a user's action is generated in a 3D image displayed on a display device, acquire a first subspace of a first three-dimensional (3D) shape corresponding to the user's action; and a refinement unit configured to acquire a second subspace of a second 3D shape, of which position and scale are adjusted according to a user's gesture within a range of the first subspace acquired by the targeting unit.
    Type: Application
    Filed: September 7, 2015
    Publication date: July 26, 2018
    Inventors: Woon Tack WOO, Hyeong Mook LEE
  • Publication number: 20180121715
    Abstract: The present invention includes the steps of: displaying an unlocking interface via a user interrupt; receiving an unlocking pattern inputted via the unlocking interface; detecting the received unlocking pattern and executing a mode corresponding to the detected unlocking pattern to thereby measure the status degree of an object corresponding to the detected unlocking pattern; and calling a lookup table in which a range of adequate status degrees for each type is measured and matched according to a pre-set and classified object type to thereby determine the status degree of the measured object, and feeding back the result of the determination via the unlocking interface.
    Type: Application
    Filed: June 18, 2015
    Publication date: May 3, 2018
    Inventors: Woon Tack WOO, Jeonghun JO, Sung Sil KIM, Young Kyoon JANG
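The lookup-table check this abstract describes (an adequate status-degree range per pre-classified object type, against which a measurement is judged) reduces to a simple range test. The table contents below are hypothetical placeholders, not values from the patent.

```python
# Hypothetical lookup table: adequate status-degree range per object type.
ADEQUATE_RANGES = {
    "food": (0.0, 0.4),   # e.g. a spoilage index (illustrative)
    "skin": (0.2, 0.6),   # e.g. a moisture index (illustrative)
}

def check_status(object_type: str, measured_degree: float) -> bool:
    """Return True when the measured status degree of the object falls
    within the adequate range registered for its type."""
    lo, hi = ADEQUATE_RANGES[object_type]
    return lo <= measured_degree <= hi
```

The boolean result would then drive the feedback shown through the unlocking interface.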
  • Publication number: 20180081448
    Abstract: The present invention includes: a wearable device including a head mounted display (HMD); an augmented reality service providing terminal paired with the wearable device and configured to reproduce content corresponding to a scenario-based preset flow via a GUI interface, overlay corresponding objects in a three-dimensional (3D) space being viewed from the wearable device when an interrupt occurs in an object formed in the content to thereby generate an augmented reality image, convert a state of each of the overlaid objects according to a user's gesture, and convert location regions of the objects based on motion information sensed by a motion sensor; and a pointing device including a magnetic sensor and configured to select or activate an object output from the augmented reality service providing terminal.
    Type: Application
    Filed: September 14, 2015
    Publication date: March 22, 2018
    Inventors: Woon Tack WOO, Kyung Won GIL, Tae Jin HA, Young Yim DOH, Ji Min RHIM
  • Publication number: 20180047213
    Abstract: The present invention includes the steps of: collecting data associated with a preset authoring item based on a predetermined viewing point within content to be guided, when an authoring mode is active; rendering the authoring item-specific data collected based on a preset augmented reality content service policy, matching the rendered data to metadata corresponding to the viewing point, and storing the matched data; checking the metadata of the viewing point; and, as a result of the checking, matching the metadata to the content displayed in the viewing region and augmenting the content.
    Type: Application
    Filed: June 10, 2015
    Publication date: February 15, 2018
    Inventors: Woon Tack WOO, Tae Jin HA, Jae In KIM
  • Publication number: 20170154471
    Abstract: Provided is a method of providing an augmented reality interaction service, the method including: generating reference coordinates based on a 3-dimensional (3D) image including depth information obtained through a camera; segmenting a region corresponding to a pre-set object from the 3D image including the depth information obtained through the camera, based on depth information of the pre-set object and color space conversion; segmenting a sub-object having a motion component from the pre-set object in the segmented region, and detecting a feature point by modeling the sub-object and a palm region linked to the sub-object based on a pre-set algorithm; and controlling a 3D object for use of an augmented reality service by estimating a posture of the sub-object based on joint information of the pre-set object provided through a certain user interface (UI).
    Type: Application
    Filed: June 26, 2015
    Publication date: June 1, 2017
    Applicant: Korea Advanced Institute of Science and Technology
    Inventors: Woon Tack WOO, Tae Jin HA
  • Publication number: 20170140552
    Abstract: The present invention relates to a technology that allows a user to manipulate a virtual three-dimensional (3D) object with his or her bare hand in a wearable augmented reality (AR) environment, and more particularly, to a technology that is capable of detecting 3D positions of a pair of cameras mounted on a wearable display and a 3D position of a user's hand in a space by using distance input data of an RGB-Depth (RGB-D) camera, without separate hand and camera tracking devices installed in the space (environment) and enabling a user's bare hand interaction based on the detected 3D positions.
    Type: Application
    Filed: June 25, 2015
    Publication date: May 18, 2017
    Inventors: Woon Tack WOO, Tae Jin HA
  • Publication number: 20150325148
    Abstract: A CPR training simulation system is provided. The system obtains signals from various sensors installed in a dummy. Specifically, the system obtains signals representing the pressure applied to the dummy, the bending degree of an air pocket, and the expansion of the airway in the dummy. Using the obtained data, the system calculates a flow rate representing air flow through the airway of the dummy and compares it with a reference flow rate. The system may include a portable terminal for displaying various guides for a trainee during CPR training.
    Type: Application
    Filed: June 11, 2014
    Publication date: November 12, 2015
    Inventors: Won Joon Kim, Ye Ram Kwon, Sung Won Lee, Ji Hoon Jeong, Noh Young Park, Woon Tack Woo
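The comparison step in this abstract (a calculated airway flow rate judged against a reference rate) can be sketched as a tolerance check. The 15% tolerance is an illustrative assumption, not a value from the patent.

```python
def within_tolerance(measured_flow: float, reference_flow: float,
                     tol: float = 0.15) -> bool:
    """Return True when the measured air-flow rate deviates from the
    reference flow rate by no more than `tol` (relative). The trainee
    guide would be driven by this pass/fail result."""
    return abs(measured_flow - reference_flow) <= tol * reference_flow
```

The upstream flow-rate calculation fuses the pressure, air-pocket bending, and airway-expansion signals, whose combination the abstract does not specify.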
  • Patent number: 8983184
    Abstract: The present invention relates to a system for generating a vision image from the viewpoint of an agent in an augmented reality environment, a method thereof, and a recording medium in which a program for implementing the method is recorded. The invention provides a vision image information storage system comprising: a vision image generator which extracts visual objects from an augmented reality environment based on a predetermined agent, and generates a vision image from the viewpoint of the agent; and an information storage unit which evaluates the objects included in the generated vision image based on a predetermined purpose, and stores information on the evaluated objects.
    Type: Grant
    Filed: November 17, 2010
    Date of Patent: March 17, 2015
    Assignee: Gwangju Institute of Science and Technology
    Inventors: Woon Tack Woo, Se Jin Oh
  • Patent number: 8903124
    Abstract: The present invention relates to an object learning method that minimizes time required for learning an object, an object tracking method using the object learning method, and an object learning and tracking system. The object learning method includes: receiving an image to be learned through a camera to generate a front image by a terminal; generating m view points used for object learning and generating first images obtained when viewing the object from the m view points using the front image; generating second images by performing radial blur on the first images; separating an area used for learning from the second images to obtain reference patches; and storing pixel values of the reference patches.
    Type: Grant
    Filed: December 14, 2010
    Date of Patent: December 2, 2014
    Assignee: Gwangju Institute of Science and Technology
    Inventors: Woon Tack Woo, Won Woo Lee, Young Min Park
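The radial-blur step named in this abstract (applied to the synthesized view images before patch extraction) can be sketched as a zoom blur: averaging the image over progressively scaled copies toward a center. Sample count and strength are illustrative parameters.

```python
import numpy as np

def radial_blur(img, center=None, n_samples=8, strength=0.02):
    """Radial (zoom) blur: average `n_samples` nearest-neighbor
    resamplings of the image, each scaled slightly toward `center`.
    A minimal sketch, not the patent's exact filter."""
    h, w = img.shape[:2]
    cy, cx = center if center is not None else (h / 2.0, w / 2.0)
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    acc = np.zeros(img.shape, dtype=float)
    for i in range(n_samples):
        s = 1.0 - strength * i  # shrink factor for this sample
        sy = np.clip(cy + (ys - cy) * s, 0, h - 1).astype(int)
        sx = np.clip(cx + (xs - cx) * s, 0, w - 1).astype(int)
        acc += img[sy, sx]
    return acc / n_samples
```

Blurring the learned views this way makes the stored reference patches less sensitive to small scale changes at matching time.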
  • Patent number: 8823697
    Abstract: A tabletop, mobile augmented reality system for personalization and cooperation and an interaction method using augmented reality is presented. More particularly, the tabletop, mobile augmented reality system for personalization and cooperation adds a mobile interface for providing a personal user space to a table interface for providing a cooperative space for users such that information on a table can be brought to a personal mobile device and controlled directly or manipulated personally by a user, allows for fairness in the right to control the table, and permits users to share the result of personalization.
    Type: Grant
    Filed: February 11, 2009
    Date of Patent: September 2, 2014
    Assignee: Gwangju Institute of Science and Technology
    Inventors: Woon Tack Woo, Se Won Na
  • Patent number: 8660302
    Abstract: A target tracking apparatus and method according to an exemplary embodiment of the present invention may quickly and accurately perform target detection and tracking in a photographed image given as consecutive frames by acquiring at least one target candidate image most similar to a photographed image of a previous frame from among prepared reference target images, determining one of the target candidate images as a target confirmation image based on the photographed image, calculating a homography between the determined target confirmation image and the photographed image, searching the photographed image of the previous frame for feature points according to the calculated homography, and tracking the change of the found feature points from the previous frame to the current frame.
    Type: Grant
    Filed: December 14, 2010
    Date of Patent: February 25, 2014
    Assignee: Gwangju Institute of Science and Technology
    Inventors: Woon Tack Woo, Ki Young Kim
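The homography step in this abstract, mapping feature points from the confirmation image into the photographed frame, is a standard projective transfer. This sketch assumes the 3x3 homography has already been estimated (e.g. from matched correspondences).

```python
import numpy as np

def transfer_points(H, pts):
    """Map 2-D feature points through a 3x3 homography H.

    pts: (N, 2) array of (x, y) points; returns an (N, 2) array of the
    transferred points after perspective division."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]  # divide out the w component
```

The transferred points seed the per-frame search regions, so only local inter-frame changes need to be tracked afterward.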
  • Patent number: 8619160
    Abstract: The present invention concerns an enhanced-image generation system and method which responds adaptively to changes in conditions in real space or virtual space. The enhanced-image generation system according to the present invention comprises: a conditions-judging unit for judging conditions on the basis of conditions data associated with actual objects and conditions data associated with virtual objects; an operational-parameter generating unit for generating operational parameters for a responsive agent in accordance with the judged conditions; and an enhanced-image generating unit for generating an enhanced image through the use of agent operational parameters and an image relating to an actual object.
    Type: Grant
    Filed: January 8, 2009
    Date of Patent: December 31, 2013
    Assignee: Gwangju Institute of Science and Technology
    Inventors: Woon Tack Woo, Se Jin Oh
  • Patent number: 8467576
    Abstract: The present invention relates to a method and an apparatus for tracking multiple objects, and a storage medium. More particularly, the present invention relates to a method and an apparatus for tracking multiple objects in real time that perform object detection on only one subset per camera image, regardless of the number N of objects to be tracked, while tracking all objects across images as the detection proceeds, and a storage medium. The method for tracking multiple objects according to the exemplary embodiment of the present invention includes: (a) performing object detection with respect to only the objects of one subset among the multiple objects in an input image at a predetermined time; and (b) tracking all objects across images, from an image of a time prior to the predetermined time, with respect to all objects in the input image while step (a) is performed.
    Type: Grant
    Filed: November 23, 2010
    Date of Patent: June 18, 2013
    Assignee: Gwangju Institute of Science and Technology
    Inventors: Woon Tack Woo, Young Min Park
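The scheduling idea in this abstract, running the expensive detector on only one subset of the N objects per frame while tracking carries the rest, can be sketched as a round-robin selector. The partitioning rule is an illustrative assumption.

```python
def subset_for_frame(objects, n_subsets, frame_index):
    """Round-robin scheduler: return the one subset of objects to run
    through detection on this frame; all other objects are maintained
    by inter-frame tracking. Over `n_subsets` consecutive frames every
    object is detected exactly once."""
    k = frame_index % n_subsets
    return [obj for i, obj in enumerate(objects) if i % n_subsets == k]
```

This keeps per-frame detection cost constant in the number of subsets rather than in N, which is what makes real-time multi-object tracking feasible.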
  • Patent number: 8379970
    Abstract: The present invention relates to a color marker recognizing device using a color marker. The color marker recognizing device includes a coloring unit and a content unit. The coloring unit acquires a sub marker image from color marker images according to contextual information on a user. The content unit outputs a first content demand signal that requests to display a first content corresponding to the sub marker image acquired by the coloring unit. Since the color marker recognizing device uses the color marker, it is possible to provide personalized content services.
    Type: Grant
    Filed: January 22, 2008
    Date of Patent: February 19, 2013
    Assignee: Gwangju Institute of Science and Technology
    Inventors: Woon-Tack Woo, Won-Woo Lee
  • Publication number: 20120257791
    Abstract: Disclosed are an apparatus for detecting a vertex of an image and a method for the same, which detect the vertex with a high degree of accuracy and reduce the time needed to detect the vertex by minimizing the user's interaction operations, even with a touch input part having a low degree of sensing precision. The method includes inputting a vertex position of an image, setting an ROI, detecting a plurality of edges, detecting a candidate straight line group based on the edges, removing from the candidate straight line group any candidate straight line that forms an angle less than a critical angle with respect to a base candidate straight line, and determining, as the optimal vertex, the intersection point between a remaining candidate straight line and the base candidate straight line at the position with the minimum distance from the input vertex position.
    Type: Application
    Filed: October 28, 2010
    Publication date: October 11, 2012
    Applicant: GWANGJU INSTITUTE OF SCIENCE AND TECHNOLOGY
    Inventors: Woon Tack Woo, Young Kyoon Jang
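The final step of the method above, intersecting a remaining candidate line with the base line to obtain the vertex, is a closed-form computation. This sketch assumes lines in implicit form `a*x + b*y + c = 0`; the near-parallel cutoff is an illustrative choice.

```python
def line_intersection(l1, l2):
    """Intersect two lines given as (a, b, c) coefficients of
    a*x + b*y + c = 0. Returns (x, y), or None when the lines are
    (near-)parallel, mirroring the critical-angle rejection step."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None  # parallel lines: no unique intersection
    x = (b1 * c2 - b2 * c1) / det
    y = (a2 * c1 - a1 * c2) / det
    return x, y
```

Among several candidate intersections, the method keeps the one closest to the user's touched vertex position.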
  • Publication number: 20120087580
    Abstract: The present invention relates to a system for generating a vision image from the viewpoint of an agent in an augmented reality environment, a method thereof, and a recording medium in which a program for implementing the method is recorded. The invention provides a vision image information storage system comprising: a vision image generator which extracts visual objects from an augmented reality environment based on a predetermined agent, and generates a vision image from the viewpoint of the agent; and an information storage unit which evaluates the objects included in the generated vision image based on a predetermined purpose, and stores information on the evaluated objects.
    Type: Application
    Filed: November 17, 2010
    Publication date: April 12, 2012
    Applicant: GWANGJU INSTITUTE OF SCIENCE AND TECHNOLOGY
    Inventors: Woon Tack Woo, Se Jin Oh
  • Publication number: 20110262003
    Abstract: The present invention relates to an object learning method that minimizes time required for learning an object, an object tracking method using the object learning method, and an object learning and tracking system. The object learning method includes: receiving an image to be learned through a camera to generate a front image by a terminal; generating m view points used for object learning and generating first images obtained when viewing the object from the m view points using the front image; generating second images by performing radial blur on the first images; separating an area used for learning from the second images to obtain reference patches; and storing pixel values of the reference patches.
    Type: Application
    Filed: December 14, 2010
    Publication date: October 27, 2011
    Applicant: Gwangju Institute of Science and Technology
    Inventors: Woon Tack WOO, Won Woo Lee, Young Min Park
  • Publication number: 20110216939
    Abstract: A target tracking apparatus and method according to an exemplary embodiment of the present invention may quickly and accurately perform target detection and tracking in a photographed image given as consecutive frames by acquiring at least one target candidate image most similar to a photographed image of a previous frame from among prepared reference target images, determining one of the target candidate images as a target confirmation image based on the photographed image, calculating a homography between the determined target confirmation image and the photographed image, searching the photographed image of the previous frame for feature points according to the calculated homography, and tracking the change of the found feature points from the previous frame to the current frame.
    Type: Application
    Filed: December 14, 2010
    Publication date: September 8, 2011
    Applicant: Gwangju Institute of Science and Technology
    Inventors: Woon Tack Woo, Ki Young Kim