Patents by Inventor Sang Chul Ahn

Sang Chul Ahn has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11676372
    Abstract: The present disclosure relates to an object detection and classification system that achieves higher accuracy and resolution in an environment with limited computer memory. The system comprises an input value generation unit to receive an input image and generate an input value including feature information; a memory value generation unit to receive a reference image and generate a memory value including feature information; a memory management unit to select information having high importance from the memory values and store it in a computer memory; an aggregated value generation unit to compute similarity between the input value and the memory value, calculate a weighted sum to generate an integrated value, and aggregate the integrated value and the input value; and an object detection unit to detect or classify the object from the input image using the aggregated value.
    Type: Grant
    Filed: March 19, 2021
    Date of Patent: June 13, 2023
    Assignee: Korea Institute of Science and Technology
    Inventors: Sang Chul Ahn, Junseok Kang
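The aggregation described in patent 11676372 above resembles an attention-style weighted read from a memory of reference features: compute similarities, turn them into weights, take a weighted sum, then combine the result with the input feature. Below is a minimal NumPy sketch of that idea under assumed shapes; the softmax weighting and the final concatenation are illustrative choices, not the patented implementation, and the sketch assumes the memory management unit has already selected which reference features to keep.

```python
import numpy as np

def aggregate(input_value, memory_values):
    """Similarity-weighted read of stored reference features.

    input_value:   (D,)   feature vector from the input image
    memory_values: (N, D) feature vectors kept by the memory manager
    Returns the input feature concatenated with the weighted memory read.
    """
    # Cosine-style similarity between the input and each stored memory slot.
    sims = memory_values @ input_value
    sims /= (np.linalg.norm(memory_values, axis=1) * np.linalg.norm(input_value) + 1e-8)

    # Softmax turns similarities into aggregation weights.
    weights = np.exp(sims - sims.max())
    weights /= weights.sum()

    # Weighted sum of memory slots gives the "integrated value".
    integrated = weights @ memory_values

    # Aggregate the integrated value with the input value (here: concatenation).
    return np.concatenate([input_value, integrated])

# Example: one 128-d input feature against 16 stored reference features.
agg = aggregate(np.random.rand(128), np.random.rand(16, 128))
print(agg.shape)  # (256,)
```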
  • Publication number: 20220188557
    Abstract: The present disclosure relates to an object detection and classification system that achieves higher accuracy and resolution in an environment with limited computer memory. The system comprises an input value generation unit to receive an input image and generate an input value including feature information; a memory value generation unit to receive a reference image and generate a memory value including feature information; a memory management unit to select information having high importance from the memory values and store it in a computer memory; an aggregated value generation unit to compute similarity between the input value and the memory value, calculate a weighted sum to generate an integrated value, and aggregate the integrated value and the input value; and an object detection unit to detect or classify the object from the input image using the aggregated value.
    Type: Application
    Filed: March 19, 2021
    Publication date: June 16, 2022
    Applicant: KOREA INSTITUTE OF SCIENCE AND TECHNOLOGY
    Inventors: Sang Chul AHN, Junseok KANG
  • Patent number: 10713833
    Abstract: Facial expressions and whole-body gestures of a 3D character are provided based on facial expressions of a user and gestures of a hand puppet perceived using a depth camera.
    Type: Grant
    Filed: March 23, 2017
    Date of Patent: July 14, 2020
    Assignee: Korea Institute of Science and Technology
    Inventors: Hwasup Lim, Youngmin Kim, Jae-In Hwang, Sang Chul Ahn
  • Patent number: 10593083
    Abstract: Disclosed is a method for facial age simulation based on an age of each facial part and environmental factors, which includes: measuring an age of each facial part on the basis of an input face image; designating a personal environmental factor; transforming an age of each facial part by applying an age transformation model according to the age of each facial part and the environmental factor; reconstructing the image transformed for each facial part; and composing the reconstructed images to generate an age-transformed face. Accordingly, it is possible to transform a face realistically based on an age measured for each facial part and an environmental factor.
    Type: Grant
    Filed: September 8, 2016
    Date of Patent: March 17, 2020
    Assignee: Korea Institute of Science and Technology
    Inventors: Ig Jae Kim, Sung Eun Choi, Sang Chul Ahn
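The abstract of patent 10593083 above lists a per-part pipeline: measure an age for each facial part, transform each part under an environmental factor, reconstruct the parts, and compose the result. The sketch below follows those steps with a deliberately crude stand-in for the age transformation model (a brightness scaling); the part names, the 0.005 coefficient, and composition by stacking are assumptions made purely for illustration.

```python
import numpy as np

FACIAL_PARTS = ("forehead", "eyes", "mouth", "cheeks")

def simulate_aging(face_parts, estimated_ages, target_age, env_factor=1.0):
    """face_parts:     dict part -> (H, W) grayscale patch with values in [0, 1]
    estimated_ages: dict part -> age measured for that part
    env_factor:     >1 accelerates ageing (e.g. sun exposure), <1 slows it
    """
    reconstructed = {}
    for part in FACIAL_PARTS:
        delta = (target_age - estimated_ages[part]) * env_factor
        # Stand-in "age transformation model": scale intensity in proportion to
        # the age shift. A real model would synthesize wrinkles, texture and
        # shape changes specific to this part.
        patch = face_parts[part]
        reconstructed[part] = np.clip(patch * (1.0 - 0.005 * delta), 0.0, 1.0)
    # Compose the per-part results back into one face (here: simple stacking).
    return np.vstack([reconstructed[p] for p in FACIAL_PARTS])

parts = {p: np.random.rand(16, 64) for p in FACIAL_PARTS}
ages = dict(zip(FACIAL_PARTS, (30, 35, 28, 32)))
aged = simulate_aging(parts, ages, target_age=60, env_factor=1.2)
print(aged.shape)  # (64, 64)
```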
  • Publication number: 20190124294
    Abstract: In a remote interaction method performed by a remote collaboration system comprising a robot device and a head mounted display, the robot device is located in a remote space and comprises a projector, a panoramic camera and a high resolution camera. The head mounted display is located in a local space apart from the remote space. A communication between the robot device and the head mounted display is established based on a communication request. The remote space is observed by the head mounted display based on first image information and second image information. The first image information is collected by the panoramic camera and received from the robot device. The second image information is collected by the high resolution camera and received from the robot device. Necessary information to be provided to the remote space is searched by the head mounted display based on a result of observing the remote space.
    Type: Application
    Filed: July 19, 2018
    Publication date: April 25, 2019
    Applicant: KOREA INSTITUTE OF SCIENCE AND TECHNOLOGY
    Inventors: Sang Chul Ahn, Jin Uk Kwon
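The interaction flow in publication 20190124294 above is essentially a connection request followed by two image feeds (panoramic and high resolution) that the head mounted display consumes to observe the remote space. The toy sketch below mirrors that flow only; the class names, the "connect" request string, and the in-process generator are stand-ins for whatever transport and streaming the actual system uses.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    source: str        # "panoramic" or "high_resolution"
    data: bytes

class RobotDevice:
    def accept(self, request: str) -> bool:
        return request == "connect"                       # communication request
    def feeds(self):
        yield Frame("panoramic", b"...wide view...")       # panoramic camera
        yield Frame("high_resolution", b"...detail...")    # high resolution camera

class HeadMountedDisplay:
    def observe(self, robot: RobotDevice):
        # Establish the session, then gather both feeds to observe the remote space.
        if robot.accept("connect"):
            return {frame.source: frame.data for frame in robot.feeds()}

print(HeadMountedDisplay().observe(RobotDevice()).keys())
```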
  • Patent number: 10251556
    Abstract: Provided is an in vivo bioimaging method including irradiating near-infrared (NIR) light onto a living body, converting the NIR light that has passed through the living body into visible light using upconversion nanoparticles (UCNPs), and generating a bioimage of the living body by receiving the visible light using a complementary metal-oxide-semiconductor (CMOS) image sensor.
    Type: Grant
    Filed: July 31, 2017
    Date of Patent: April 9, 2019
    Assignees: Korea Institute of Science and Technology, Center of Human-Centered Interaction for Coexistence
    Inventors: Hwa Sup Lim, Seok Joon Kwon, Sang Chul Ahn, Bum Jae You
  • Patent number: 10250845
    Abstract: In a remote interaction method performed by a remote collaboration system comprising a robot device and a head mounted display, the robot device is located in a remote space and comprises a projector, a panoramic camera and a high resolution camera. The head mounted display is located in a local space apart from the remote space. A communication between the robot device and the head mounted display is established based on a communication request. The remote space is observed by the head mounted display based on first image information and second image information. The first image information is collected by the panoramic camera and received from the robot device. The second image information is collected by the high resolution camera and received from the robot device. Necessary information to be provided to the remote space is searched by the head mounted display based on a result of observing the remote space.
    Type: Grant
    Filed: July 19, 2018
    Date of Patent: April 2, 2019
    Assignee: Korea Institute of Science and Technology
    Inventors: Sang Chul Ahn, Jin Uk Kwon
  • Publication number: 20180308246
    Abstract: Embodiments relate to an apparatus for applying a haptic property using a texture perceptual space and a method therefor, the apparatus including an image acquirer configured to acquire an image of a part of a virtual object inside a virtual space, a perceptual space position determiner configured to determine a position of the image inside a texture perceptual space in which a plurality of haptic models are arranged at predetermined positions, using feature points of the acquired image, a haptic model determiner configured to determine a haptic model that is closest to the determined position of the image, and a haptic property applier configured to apply a haptic property of the determined haptic model to the part of the virtual object, in which each of the haptic models includes a texture image and a haptic property for a specific object.
    Type: Application
    Filed: August 23, 2016
    Publication date: October 25, 2018
    Inventors: Seokhee JEON, Sang Chul AHN, Hwasup LIM
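The core lookup in publication 20180308246 above is a nearest-neighbour search in a texture perceptual space whose points carry haptic properties. The sketch below assumes a two-dimensional perceptual space, a toy feature extractor (mean intensity and contrast), and three made-up haptic models; real models would use richer texture features and measured properties.

```python
import numpy as np

HAPTIC_MODELS = [
    # (position in the texture perceptual space, haptic property)
    (np.array([0.1, 0.8]), {"stiffness": 0.9, "roughness": 0.2}),  # e.g. glass
    (np.array([0.7, 0.3]), {"stiffness": 0.4, "roughness": 0.8}),  # e.g. wood
    (np.array([0.9, 0.9]), {"stiffness": 0.2, "roughness": 0.9}),  # e.g. fabric
]

def image_features(patch):
    """Toy perceptual-space embedding: mean intensity and local contrast."""
    return np.array([patch.mean(), patch.std()])

def haptic_property_for(patch):
    pos = image_features(patch)
    # Pick the haptic model whose position is closest to the image's position.
    dists = [np.linalg.norm(pos - model_pos) for model_pos, _ in HAPTIC_MODELS]
    return HAPTIC_MODELS[int(np.argmin(dists))][1]

print(haptic_property_for(np.random.rand(32, 32)))
```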
  • Patent number: 10078796
    Abstract: Disclosed is an apparatus for hand gesture recognition based on a depth image, which includes a depth image acquiring unit configured to acquire a depth image including a hand region, a depth point classifying unit configured to classify depth points of a hand region in the depth image according to a corresponding hand portion by means of a machine learning method, and a hand model matching unit configured to match a three-dimensional hand model with the classified depth points by using distances between the classified depth points and a hand portion respectively corresponding to the depth points. A recognition method using the apparatus is also disclosed.
    Type: Grant
    Filed: February 24, 2016
    Date of Patent: September 18, 2018
    Assignee: Korea Institute of Science and Technology
    Inventors: Hwasup Lim, Sungkuk Chun, Sang Chul Ahn
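Patent 10078796 above describes two stages: label each depth point with a hand portion using a learned classifier, then fit a three-dimensional hand model by minimizing distances between the labelled points and the corresponding parts. In the sketch below a nearest-centroid rule stands in for the learned classifier and the fit is reduced to a rigid translation; the part centers are invented for illustration.

```python
import numpy as np

HAND_PARTS = {          # illustrative 3D part centers of a canonical hand model
    "palm":  np.array([0.0, 0.0, 0.0]),
    "thumb": np.array([-3.0, 2.0, 0.0]),
    "index": np.array([-1.5, 5.0, 0.0]),
}

def classify_points(points):
    """Label each (N, 3) depth point with the nearest hand part."""
    names = list(HAND_PARTS)
    centers = np.stack([HAND_PARTS[n] for n in names])            # (P, 3)
    dists = np.linalg.norm(points[:, None] - centers[None], axis=2)
    return np.array(names)[dists.argmin(axis=1)]

def fit_translation(points, labels):
    """Average offset between labelled points and their model parts."""
    offsets = [points[labels == n].mean(axis=0) - HAND_PARTS[n]
               for n in HAND_PARTS if np.any(labels == n)]
    return np.mean(offsets, axis=0)

pts = np.random.randn(200, 3) * 2.0
labels = classify_points(pts)
print(fit_translation(pts, labels))
```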
  • Publication number: 20180144530
    Abstract: Facial expressions and whole-body gestures of a 3D character are provided based on facial expressions of a user and gestures of a hand puppet perceived using a depth camera.
    Type: Application
    Filed: March 23, 2017
    Publication date: May 24, 2018
    Applicant: KOREA INSTITUTE OF SCIENCE AND TECHNOLOGY
    Inventors: Hwasup LIM, Youngmin KIM, Jae-In HWANG, Sang Chul AHN
  • Publication number: 20180055367
    Abstract: Provided is an in vivo bioimaging method including irradiating near-infrared (NIR) light onto a living body, converting the NIR light that has passed through the living body into visible light using upconversion nanoparticles (UCNPs), and generating a bioimage of the living body by receiving the visible light using a complementary metal-oxide-semiconductor (CMOS) image sensor.
    Type: Application
    Filed: July 31, 2017
    Publication date: March 1, 2018
    Applicants: KOREA INSTITUTE OF SCIENCE AND TECHNOLOGY, CENTER OF HUMAN-CENTERED INTERACTION FOR COEXISTENCE
    Inventors: Hwa Sup LIM, Seok Joon KWON, Sang Chul AHN, Bum Jae YOU
  • Patent number: 9905048
    Abstract: Disclosed is a system for authoring and providing augmented reality contents, which includes a database storing a plurality of place models expressing an inherent physical place, a positioning unit for determining a current position of a user, a place model processing unit for searching and loading a place model corresponding to the current position of the user from the database, and a virtual object processing unit for disposing a virtual object expressed through an HTML document at a predetermined location in the loaded place model, wherein the plurality of place models is stored hierarchically so that at least one place model has at least one other place model as a subordinate concept.
    Type: Grant
    Filed: December 24, 2013
    Date of Patent: February 27, 2018
    Assignee: Korea Institute of Science and Technology
    Inventors: Heedong Ko, Sang-chul Ahn, Byounghyun Yoo
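Patent 9905048 above hinges on a hierarchical store of place models, each of which can contain subordinate place models and HTML-described virtual objects. The sketch below shows one plausible data structure and a descent to the most specific place containing the user's position; the class names, the bounding-box containment test, and the example places are assumptions for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PlaceModel:
    name: str
    bounds: tuple                                                # (min_lat, min_lon, max_lat, max_lon)
    virtual_objects: List[str] = field(default_factory=list)    # e.g. HTML document URIs
    children: List["PlaceModel"] = field(default_factory=list)  # subordinate place models

    def contains(self, lat: float, lon: float) -> bool:
        a, b, c, d = self.bounds
        return a <= lat <= c and b <= lon <= d

def find_place(root: PlaceModel, lat: float, lon: float) -> Optional[PlaceModel]:
    """Descend the hierarchy to the most specific place containing the user."""
    if not root.contains(lat, lon):
        return None
    for child in root.children:
        hit = find_place(child, lat, lon)
        if hit:
            return hit
    return root

campus = PlaceModel("campus", (0, 0, 10, 10),
                    children=[PlaceModel("lobby", (2, 2, 4, 4),
                                         virtual_objects=["lobby_sign.html"])])
print(find_place(campus, 3, 3).name)   # "lobby"
```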
  • Patent number: 9904664
    Abstract: There is provided an augmented reality content providing apparatus based on a Web information structure including: an HTML document that includes a URI setting unit setting a uniform resource identifier (URI) corresponding to a point of interest (POI), a POI processing unit collecting attribute information from a target terminal and identifying the POI by using the collected attribute information, a virtual object processing unit that matches a virtual object associated with the URI to the identified POI; and a 3D browser engine used for setting coordinates in the 3D physical space such that the POI and the virtual object are displayed in a 3D virtual space through a Web browser of the target terminal, analyzing video information of the POI and the virtual object based on the set coordinates, and providing the analyzed video information to the target terminal.
    Type: Grant
    Filed: December 23, 2013
    Date of Patent: February 27, 2018
    Assignee: Korea Institute of Science and Technology
    Inventors: Heedong Ko, Sang-chul Ahn
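Patent 9904664 above ties a URI to a point of interest, identifies the POI from attributes reported by the target terminal, and then matches a virtual object to it. The sketch below reduces that to an attribute-overlap lookup; the URI scheme, attribute keys, and HTML object names are invented for illustration, and the 3D browser engine is not modelled.

```python
from dataclasses import dataclass

@dataclass
class POI:
    uri: str            # uniform resource identifier for the point of interest
    attributes: dict    # attributes expected from the target terminal

VIRTUAL_OBJECTS = {"urn:poi:lobby-desk": "welcome_banner.html"}   # URI -> HTML virtual object

def identify_poi(terminal_attributes, known_pois):
    """Pick the POI whose attributes best match what the terminal reports."""
    def overlap(poi):
        return sum(poi.attributes.get(k) == v for k, v in terminal_attributes.items())
    return max(known_pois, key=overlap)

pois = [POI("urn:poi:lobby-desk", {"ssid": "lobby", "marker": 7})]
poi = identify_poi({"ssid": "lobby", "marker": 7}, pois)
print(poi.uri, "->", VIRTUAL_OBJECTS[poi.uri])
```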
  • Publication number: 20170084069
    Abstract: Disclosed is a method for facial age simulation based on an age of each facial part and environmental factors, which includes: measuring an age of each facial part on the basis of an input face image; designating a personal environmental factor; transforming an age of each facial part by applying an age transformation model according to the age of each facial part and the environmental factor; reconstructing the image transformed for each facial part; and composing the reconstructed images to generate an age-transformed face. Accordingly, it is possible to transform a face realistically based on an age measured for each facial part and an environmental factor.
    Type: Application
    Filed: September 8, 2016
    Publication date: March 23, 2017
    Applicant: KOREA INSTITUTE OF SCIENCE AND TECHNOLOGY
    Inventors: Ig Jae KIM, Sung Eun CHOI, Sang Chul AHN
  • Publication number: 20170068849
    Abstract: Disclosed is an apparatus for hand gesture recognition based on a depth image, which includes a depth image acquiring unit configured to acquire a depth image including a hand region, a depth point classifying unit configured to classify depth points of a hand region in the depth image according to a corresponding hand portion by means of a machine learning method, and a hand model matching unit configured to match a three-dimensional hand model with the classified depth points by using distances between the classified depth points and a hand portion respectively corresponding to the depth points. A recognition method using the apparatus is also disclosed.
    Type: Application
    Filed: February 24, 2016
    Publication date: March 9, 2017
    Applicant: KOREA INSTITUTE OF SCIENCE AND TECHNOLOGY
    Inventors: Hwasup LIM, Sungkuk CHUN, Sang Chul AHN
  • Patent number: 9415505
    Abstract: A device for dynamic reconfiguration of robot components includes: a resource monitoring unit for monitoring resources of a plurality of boards on which components for executing tasks of a robot are loaded; a dynamic reconfiguration unit for dynamically reconfiguring components of the boards in case at least one of the boards is at risk of a scarcity of resources; and a log managing unit for storing a configuration of the present components and a configuration of the reconfigured components. Accordingly, it is possible to recognize a scarcity of resources in advance while the robot is operating and to prevent the robot from malfunctioning by redistributing the components.
    Type: Grant
    Filed: January 11, 2013
    Date of Patent: August 16, 2016
    Assignee: Korea Institute of Science and Technology
    Inventors: Sang Chul Ahn, Yong-Moo Kwon, Ui Kyu Song
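Patent 9415505 above couples a resource monitor, a dynamic reconfigurer, and a log of before/after component configurations. The sketch below shows one possible shape of that loop; the 0.85 utilization threshold, the fixed per-component load share, and the move-to-least-loaded-board policy are illustrative assumptions rather than the patented scheme.

```python
RESOURCE_LIMIT = 0.85   # a board above this utilization is treated as "at risk"

def reconfigure(boards, log):
    """boards: dict board -> {"load": float, "components": list[str]}.
    Moves one component from an at-risk board to the least-loaded board and
    records the before/after configurations, as the log managing unit would."""
    before = {b: list(info["components"]) for b, info in boards.items()}
    for board, info in boards.items():
        if info["load"] > RESOURCE_LIMIT and info["components"]:
            target = min(boards, key=lambda b: boards[b]["load"])
            if target != board:
                component = info["components"].pop()
                boards[target]["components"].append(component)
                # Crude load estimate: shift a fixed share per moved component.
                info["load"] -= 0.2
                boards[target]["load"] += 0.2
    log.append({"before": before,
                "after": {b: list(i["components"]) for b, i in boards.items()}})

boards = {"A": {"load": 0.95, "components": ["vision", "planner"]},
          "B": {"load": 0.30, "components": ["speech"]}}
history = []
reconfigure(boards, history)
print(history[-1]["after"])
```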
  • Patent number: 9325886
    Abstract: An image generator is provided which obtains a specular image and a diffuse image from an image acquired by a polarized light field camera by separating two reflection components of a subject, and a control method thereof. The image generator may include a main lens, a polarizing filter part, a photosensor, a microlens array, and a controller that generates a single image in response to the electrical image signal and extracts, from the generated image, a specular image and a diffuse image that exhibit different reflection characteristics of the subject.
    Type: Grant
    Filed: March 20, 2014
    Date of Patent: April 26, 2016
    Assignee: KOREA INSTITUTE OF SCIENCE AND TECHNOLOGY
    Inventors: Jae Won Kim, Ig Jae Kim, Sang Chul Ahn
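Patent 9325886 above separates specular and diffuse reflection components from polarized light field data. The sketch below uses a common polarization-based decomposition (per-pixel minimum and maximum over polarizer orientations) only to illustrate the idea of splitting the two components; the patent's microlens-array light field pipeline is not reproduced.

```python
import numpy as np

def separate_reflections(polarized_stack):
    """polarized_stack: (K, H, W) images taken at K polarizer orientations.
    Returns per-pixel (diffuse, specular) estimates."""
    i_min = polarized_stack.min(axis=0)
    i_max = polarized_stack.max(axis=0)
    diffuse = 2.0 * i_min        # unpolarized component appears at every angle
    specular = i_max - i_min     # polarized (specular) component varies with angle
    return diffuse, specular

stack = np.random.rand(4, 120, 160)
diffuse, specular = separate_reflections(stack)
print(diffuse.shape, specular.shape)
```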
  • Patent number: 9092456
    Abstract: A method and system for reconstructing an image displayed on a network-connected electronic device as a high resolution image. The method of reconstructing a selected area of the displayed image as a high resolution image includes: receiving a request to expand the selected area; collecting images that include the selected area from the Internet; correcting the selected area to have a high resolution while expanding it based on the collected images; and displaying the expanded high resolution image on the electronic device.
    Type: Grant
    Filed: July 10, 2012
    Date of Patent: July 28, 2015
    Assignee: Korea Institute of Science and Technology
    Inventors: Jaewon Kim, Ig Jae Kim, Sang Chul Ahn, Jong-Ho Lee
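Patent 9092456 above expands a selected image region by collecting matching images from the Internet and using them to correct the expansion to a higher resolution. The sketch below keeps that flow but stubs out the web search with synthetic, pre-registered crops and replaces the correction step with a plain average over nearest-neighbour upscales; both substitutions are for illustration only.

```python
import numpy as np

def collect_candidates(selection):
    """Placeholder for the web collection step; returns already-registered crops."""
    h, w = selection.shape
    return [np.clip(selection + 0.05 * np.random.randn(h, w), 0, 1) for _ in range(4)]

def upscale(image, factor=2):
    """Nearest-neighbour upscaling as a stand-in for a learned upsampler."""
    return image.repeat(factor, axis=0).repeat(factor, axis=1)

def expand_selection(selection, factor=2):
    # Fuse the upscaled selection with upscaled candidate crops from the web.
    candidates = [upscale(c, factor) for c in collect_candidates(selection)]
    return np.mean([upscale(selection, factor)] + candidates, axis=0)

print(expand_selection(np.random.rand(32, 32)).shape)  # (64, 64)
```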
  • Patent number: 9094609
    Abstract: A method is provided that highlights a depth-of-field (DOF) region of an image and performs additional image processing by using the DOF region. The method includes: obtaining a first pattern image and a second pattern image that are captured by emitting light according to different patterns from an illumination device; detecting a DOF region by using the first pattern image and the second pattern image; determining weights to highlight the DOF region; and generating the highlighted DOF image by applying the weights to a combined image of the first pattern image and the second pattern image.
    Type: Grant
    Filed: April 2, 2012
    Date of Patent: July 28, 2015
    Assignee: Korea Institute of Science and Technology
    Inventors: Jaewon Kim, Ig Jae Kim, Sang Chul Ahn
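Patent 9094609 above detects a depth-of-field region from two differently patterned illumination images and then applies weights that highlight that region in the combined image. The sketch below assumes a simple focus measure (the per-pixel difference between the two pattern images) and a single boost weight; the actual detection and weighting in the patent are more involved.

```python
import numpy as np

def highlight_dof(pattern_a, pattern_b, boost=1.5):
    """pattern_a, pattern_b: (H, W) images lit with two different patterns."""
    combined = 0.5 * (pattern_a + pattern_b)
    # Pixels where the two illumination patterns stay sharply distinct are in focus.
    focus_measure = np.abs(pattern_a - pattern_b)
    dof_mask = focus_measure > focus_measure.mean()
    weights = np.where(dof_mask, boost, 1.0)      # emphasize the DOF region
    return np.clip(weights * combined, 0.0, 1.0)

a, b = np.random.rand(2, 100, 100)
print(highlight_dof(a, b).shape)  # (100, 100)
```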
  • Publication number: 20150146082
    Abstract: An image generator is provided which obtains a specular image and a diffuse image from an image acquired by a polarized light field camera by separating two reflection components of a subject, and a control method thereof. The image generator may include a main lens, a polarizing filter part, a photosensor, a microlens array, and a controller that generates a single image in response to the electrical image signal and extracts, from the generated image, a specular image and a diffuse image that exhibit different reflection characteristics of the subject.
    Type: Application
    Filed: March 20, 2014
    Publication date: May 28, 2015
    Applicant: Korea Institute of Science and Technology
    Inventors: Jae Won KIM, Ig Jae KIM, Sang Chul AHN