Patents by Inventor Ki-Min Yun

Ki-Min Yun has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230145028
    Abstract: Disclosed herein are a method and apparatus for processing feature information based on an artificial neural network. According to an embodiment of the present disclosure, the apparatus for processing feature information based on an artificial neural network may include a memory for storing data and a processor for controlling the memory, and the processor may further be configured to extract a graph, which includes vertices, based on a feature map of an image, to extract a feature vector corresponding to the vertices and to process the graph and the feature vector based on an artificial neural network, and the graph may include positions of the vertices and information on a connection relationship between the vertices.
    Type: Application
    Filed: September 7, 2022
    Publication date: May 11, 2023
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Sung Chan OH, Yong Jin KWON, Hyung Il KIM, Jin Young MOON, Yu Seok BAE, Ki Min YUN, Jeun Woo LEE, Joong Won HWANG
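    The abstract of publication 20230145028 above describes extracting a vertex graph from an image feature map and processing the graph and its feature vectors with a neural network. The following is a minimal, hypothetical Python sketch of that idea; the top-activation vertex selection, k-nearest-neighbour edges, and single graph-convolution step are illustrative choices of mine, not the claimed method.

        # Hypothetical sketch (not the patented implementation) of building a graph
        # from a CNN feature map and processing it with one graph-convolution step.
        import numpy as np

        def feature_map_to_graph(feat, num_vertices=8, knn=3):
            """feat: (C, H, W) feature map. Returns vertex positions, features, adjacency."""
            C, H, W = feat.shape
            saliency = feat.sum(axis=0)                      # (H, W) activation strength
            idx = np.argsort(saliency.ravel())[-num_vertices:]
            ys, xs = np.unravel_index(idx, (H, W))
            positions = np.stack([ys, xs], axis=1)           # vertex positions
            features = feat[:, ys, xs].T                     # (V, C) feature vectors
            # connect each vertex to its knn nearest neighbours in image space
            d = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
            adj = np.zeros((num_vertices, num_vertices))
            for i in range(num_vertices):
                for j in np.argsort(d[i])[1:knn + 1]:
                    adj[i, j] = adj[j, i] = 1.0
            return positions, features, adj

        def graph_conv(features, adj, weight):
            """One message-passing step: average neighbour features, then project."""
            adj_hat = adj + np.eye(adj.shape[0])             # add self-loops
            deg = adj_hat.sum(axis=1, keepdims=True)
            return np.maximum((adj_hat / deg) @ features @ weight, 0.0)   # ReLU

        rng = np.random.default_rng(0)
        fmap = rng.random((64, 14, 14)).astype(np.float32)   # stand-in CNN feature map
        pos, x, a = feature_map_to_graph(fmap)
        w = rng.normal(scale=0.1, size=(64, 32))
        print(graph_conv(x, a, w).shape)                     # (8, 32)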
  • Publication number: 20230059462
    Abstract: The present disclosure relates to a method and apparatus for performing multiple tasks based on task similarity by using artificial intelligence. According to an embodiment of the present disclosure, a method for performing multi-task learning based on task similarity may include performing a similarity analysis between a first task and a second task and training a neural network for the second task based on a result of the similarity analysis. Herein, when it is determined that a first training dataset used for the first task and a second training dataset used for the second task are similar, the neural network may learn a second parameter allocated to the second training dataset based on a first parameter allocated to the first training dataset.
    Type: Application
    Filed: November 24, 2021
    Publication date: February 23, 2023
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Eun Woo KIM, Hyun Dong JIN, Ki Min YUN, Jin Young MOON
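    Publication 20230059462 above hinges on reusing parameters across tasks when their training datasets are similar. Below is a hedged sketch under my own assumptions: similarity is reduced to the cosine similarity of mean feature vectors, and learning "based on the first parameter" is reduced to copying it as the initialisation for the second task.

        # Illustrative simplification, not the claimed method: decide whether task 2
        # should reuse task 1's parameters based on a dataset-similarity check.
        import numpy as np

        def dataset_similarity(data_a, data_b):
            """Cosine similarity between the mean feature vectors of two training sets."""
            mu_a, mu_b = data_a.mean(axis=0), data_b.mean(axis=0)
            return float(mu_a @ mu_b / (np.linalg.norm(mu_a) * np.linalg.norm(mu_b) + 1e-8))

        def init_task2_params(task1_params, data1, data2, threshold=0.8):
            """If the datasets look similar, start task 2 from task 1's parameters;
            otherwise start from a fresh random initialisation."""
            if dataset_similarity(data1, data2) >= threshold:
                return task1_params.copy()                 # transfer knowledge from task 1
            rng = np.random.default_rng(0)
            return rng.normal(scale=0.01, size=task1_params.shape)

        rng = np.random.default_rng(1)
        d1 = rng.normal(loc=1.0, size=(100, 16))
        d2 = d1 + rng.normal(scale=0.05, size=(100, 16))   # near-duplicate task data
        theta1 = rng.normal(size=(16, 4))
        theta2 = init_task2_params(theta1, d1, d2)
        print(np.allclose(theta1, theta2))                 # True -> parameters reused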
  • Publication number: 20230015295
    Abstract: Disclosed herein are an object recognition apparatus and method based on environment matching. The object recognition apparatus includes memory for storing at least one program, and a processor for executing the program, wherein the program performs extracting at least one key frame from a video that is input in real time, determining a similarity between the key frame extracted from the input video and each of the videos used as training data of prestored multiple recognition models based on a pretrained similarity-matching network, selecting a recognition model pretrained with a video having a maximal similarity to the key frame extracted from the input video, preprocessing the input video such that at least one of the color and size of a video used as training data of an initial model is similar to that of the input video, and recognizing the preprocessed video based on the initial model.
    Type: Application
    Filed: December 14, 2021
    Publication date: January 19, 2023
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Ki-Min YUN, Jin-Young MOON, Jong-Won CHOI, Joung-Su YOUN, Seok-Jun CHOI, Woo-Seok HYUNG
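    Publication 20230015295 above selects, for each input video, the recognition model whose training environment best matches a key frame, then preprocesses the input to resemble that model's training data. The sketch below is a rough stand-in: a colour-histogram intersection replaces the pretrained similarity-matching network, and a mean-colour shift replaces the full preprocessing step; both substitutions are mine.

        # Minimal, hypothetical sketch of environment matching: pick the recognition
        # model whose reference footage best matches a key frame of the incoming video.
        import numpy as np

        def color_histogram(frame, bins=16):
            """Per-channel histogram, concatenated and L1-normalised."""
            h = [np.histogram(frame[..., c], bins=bins, range=(0, 1))[0] for c in range(3)]
            h = np.concatenate(h).astype(np.float64)
            return h / h.sum()

        def select_model(key_frame, model_reference_frames):
            """Return the index of the model whose reference frame is most similar."""
            ref_hists = [color_histogram(f) for f in model_reference_frames]
            q = color_histogram(key_frame)
            sims = [np.minimum(q, r).sum() for r in ref_hists]   # histogram intersection
            return int(np.argmax(sims))

        def match_color(frame, reference_frame):
            """Shift the input's mean colour toward the selected model's training data."""
            return np.clip(frame + (reference_frame.mean((0, 1)) - frame.mean((0, 1))), 0, 1)

        rng = np.random.default_rng(0)
        day, night = rng.random((64, 64, 3)), rng.random((64, 64, 3)) * 0.2
        incoming = rng.random((64, 64, 3)) * 0.25                 # dark, "night-like" video
        best = select_model(incoming, [day, night])
        print("selected model:", best)                            # the darker (night) model
        print(match_color(incoming, [day, night][best]).shape)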
  • Patent number: 11527067
    Abstract: An electronic device according to an embodiment disclosed herein may include a memory including at least one instruction and a processor. By executing the at least one instruction, the processor may check feature information corresponding to a video and including at least one of an appearance-related feature value and a motion-related feature value from the video, calculate at least one of a starting score related to a starting point of an action instance, an ending score related to an ending point of an action instance, and a relatedness score between action instances on the basis of the feature information corresponding to the video, the action instances being included in the video, and generate an action proposal included in the video on the basis of the at least one score.
    Type: Grant
    Filed: November 3, 2020
    Date of Patent: December 13, 2022
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jin Young Moon, Yong Jin Kwon, Hyung Il Kim, Jong Youl Park, Kang Min Bae, Ki Min Yun
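    Patent 11527067 above generates action proposals from starting, ending, and relatedness scores computed over a video. The simplified sketch below pairs high starting scores with later high ending scores and ranks the pairs; the threshold, maximum length, and product scoring are illustrative assumptions, and the relatedness score is omitted.

        # Simplified, hypothetical sketch of turning per-snippet starting and ending
        # scores into ranked action proposals.
        import numpy as np

        def generate_proposals(start_scores, end_scores, threshold=0.5, max_len=20):
            """Pair probable starting points with later probable ending points."""
            starts = np.where(start_scores >= threshold)[0]
            ends = np.where(end_scores >= threshold)[0]
            proposals = []
            for s in starts:
                for e in ends:
                    if s < e <= s + max_len:
                        proposals.append((int(s), int(e), float(start_scores[s] * end_scores[e])))
            # highest-confidence proposals first
            return sorted(proposals, key=lambda p: p[2], reverse=True)

        rng = np.random.default_rng(0)
        T = 40
        start_scores = rng.random(T) * 0.3
        end_scores = rng.random(T) * 0.3
        start_scores[5], end_scores[17] = 0.9, 0.8       # one clear action instance
        for s, e, score in generate_proposals(start_scores, end_scores)[:3]:
            print(f"proposal [{s}, {e}] score={score:.2f}")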
  • Patent number: 11399205
    Abstract: A USB-C DMP device includes: a USB terminal for connecting to an input port of a monitor or a television (TV); a Wi-Fi module for receiving a mirroring signal from a mobile terminal through Wi-Fi communication; a converter module for converting the mirroring signal received by the Wi-Fi module into a USB-C signal and transmitting the USB-C signal to the monitor or TV through the USB terminal; and a system-on-chip (SoC) module for controlling the Wi-Fi module to receive the mirroring signal from the mobile terminal, and controlling the converter module to convert the mirroring signal received by the Wi-Fi module into the USB-C signal.
    Type: Grant
    Filed: March 14, 2019
    Date of Patent: July 26, 2022
    Assignee: O2O CO., LTD.
    Inventors: Sung Min Ahn, Dong Gil Park, Ki Min Yun
  • Patent number: 11380133
    Abstract: A domain adaptation-based object recognition apparatus includes a memory configured to store a domain adaptation-based object recognition program and a processor configured to execute the program. The processor learns a generative model for generating a feature or an image similar to a gallery image on the basis of domain adaptation in association with an input probe image and learns an object recognition classification model by using a learning database corresponding to the gallery image and the input probe image, thereby performing object recognition using the input probe image.
    Type: Grant
    Filed: March 30, 2020
    Date of Patent: July 5, 2022
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Hyung Il Kim, Yong Jin Kwon, Jin Young Moon, Jong Youl Park, Sung Chan Oh, Ki Min Yun, Jeun Woo Lee
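    Patent 11380133 above adapts probe-domain inputs toward the gallery domain before classification. A minimal sketch follows, with the learned generative model replaced by a simple first- and second-moment alignment and the classification model replaced by a 1-nearest-neighbour lookup; both substitutions are mine, not the patented method.

        # Hypothetical sketch of domain adaptation for recognition: map probe-domain
        # features toward the gallery domain, then classify against the gallery.
        import numpy as np

        def adapt_to_gallery(probe_feats, gallery_feats):
            """Align first and second moments of probe features to the gallery domain."""
            p_mu, p_std = probe_feats.mean(0), probe_feats.std(0) + 1e-8
            g_mu, g_std = gallery_feats.mean(0), gallery_feats.std(0) + 1e-8
            return (probe_feats - p_mu) / p_std * g_std + g_mu

        def nearest_gallery_label(feat, gallery_feats, gallery_labels):
            """1-NN classifier standing in for the object-recognition classification model."""
            return gallery_labels[np.argmin(np.linalg.norm(gallery_feats - feat, axis=1))]

        rng = np.random.default_rng(0)
        gallery = np.concatenate([rng.normal(0, 1, (50, 8)), rng.normal(4, 1, (50, 8))])
        labels = np.array([0] * 50 + [1] * 50)
        # probe images come from a shifted/scaled domain (e.g. a different camera)
        probe = np.concatenate([rng.normal(10, 3, (5, 8)), rng.normal(22, 3, (5, 8))])
        adapted = adapt_to_gallery(probe, gallery)
        print([int(nearest_gallery_label(f, gallery, labels)) for f in adapted])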
  • Publication number: 20220078504
    Abstract: A USB-C DMP device includes: a USB terminal for connecting to an input port of a monitor or a television (TV); a Wi-Fi module for receiving a mirroring signal from a mobile terminal through Wi-Fi communication; a converter module for converting the mirroring signal received by the Wi-Fi module into a USB-C signal and transmitting the USB-C signal to the monitor or TV through the USB terminal; and a system-on-chip (SoC) module for controlling the Wi-Fi module to receive the mirroring signal from the mobile terminal, and controlling the converter module to convert the mirroring signal received by the Wi-Fi module into the USB-C signal.
    Type: Application
    Filed: March 14, 2019
    Publication date: March 10, 2022
    Applicant: O2O CO., LTD.
    Inventors: Sung Min AHN, Dong Gil PARK, Ki Min YUN
  • Publication number: 20220067382
    Abstract: Provided is an apparatus for online action detection, the apparatus including a feature extraction unit configured to extract a chunk-level feature of a video chunk sequence of a streaming video, a filtering unit configured to perform filtering on the chunk-level feature, and an action classification unit configured to classify an action class using the filtered chunk-level feature.
    Type: Application
    Filed: August 25, 2021
    Publication date: March 3, 2022
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Jin Young MOON, Hyung Il KIM, Jong Youl PARK, Kang Min BAE, Ki Min YUN
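    Publication 20220067382 above lays out the three stages of online action detection: chunk-level feature extraction, filtering, and action classification. The streaming sketch below mirrors those stages under my own simplifications (mean pooling, a recursive filter, and a random linear classifier); none of the specific choices come from the patent.

        # Illustrative streaming sketch: extract a chunk-level feature per incoming
        # chunk, filter it temporally, and classify the current action.
        import numpy as np

        NUM_CLASSES = 3
        rng = np.random.default_rng(0)
        W = rng.normal(scale=0.1, size=(16, NUM_CLASSES))   # stand-in action classifier

        def chunk_feature(chunk):
            """Chunk-level feature: mean-pool the per-frame features of the chunk."""
            return chunk.mean(axis=0)

        def temporal_filter(history, feat, momentum=0.7):
            """Simple recursive filter to suppress noisy, background-like chunks."""
            return momentum * history + (1.0 - momentum) * feat

        def classify(feat):
            logits = feat @ W
            probs = np.exp(logits - logits.max())
            return int(np.argmax(probs / probs.sum()))

        filtered = np.zeros(16)
        for t in range(10):                                  # streaming video chunks
            chunk = rng.normal(size=(8, 16))                 # 8 frames x 16-dim features
            filtered = temporal_filter(filtered, chunk_feature(chunk))
            print(f"chunk {t}: action class {classify(filtered)}")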
  • Publication number: 20220059079
    Abstract: A service providing system includes: a voice recognition accessory which recognizes a user's voice, generates a wake-up signal corresponding to the recognized voice, and transmits the wake-up signal in real time; a mobile terminal which receives the wake-up signal from the voice recognition accessory in real time so as to recognize the wake-up signal, and runs an application according to the recognized wake-up signal; and a service providing server for communicating with the application running in the mobile terminal so as to provide a corresponding service.
    Type: Application
    Filed: October 14, 2019
    Publication date: February 24, 2022
    Applicant: O2O CO., LTD.
    Inventors: Sung Min AHN, Dong Gil PARK, Ki Min YUN
  • Publication number: 20210142063
    Abstract: An electronic device according to an embodiment disclosed herein may include a memory including at least one instruction and a processor. By executing the at least one instruction, the processor may check feature information corresponding to a video and including at least one of an appearance-related feature value and a motion-related feature value from the video, calculate at least one of a starting score related to a starting point of an action instance, an ending score related to an ending point of an action instance, and a relatedness score between action instances on the basis of the feature information corresponding to the video, the action instances being included in the video, and generate an action proposal included in the video on the basis of the at least one score.
    Type: Application
    Filed: November 3, 2020
    Publication date: May 13, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Jin Young MOON, Yong Jin KWON, Hyung Il KIM, Jong Youl PARK, Kang Min BAE, Ki Min YUN
  • Publication number: 20200311389
    Abstract: A domain adaptation-based object recognition apparatus includes a memory configured to store a domain adaptation-based object recognition program and a processor configured to execute the program. The processor learns a generative model for generating a feature or an image similar to a gallery image on the basis of domain adaptation in association with an input probe image and learns an object recognition classification model by using a learning database corresponding to the gallery image and the input probe image, thereby performing object recognition using the input probe image.
    Type: Application
    Filed: March 30, 2020
    Publication date: October 1, 2020
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Hyung Il KIM, Yong Jin KWON, Jin Young MOON, Jong Youl PARK, Sung Chan OH, Ki Min YUN, Jeun Woo LEE
  • Patent number: 10789470
    Abstract: Provided is a dynamic object detecting technique, and more specifically, a system and method for determining a state of a motion of a camera on the basis of a local motion estimated on the basis of a video captured by a dynamic camera and a result of analyzing a global motion, flexibly updating a background model according to the state of the motion of the camera, and flexibly detecting a dynamic object according to the state of the motion of the camera.
    Type: Grant
    Filed: July 11, 2018
    Date of Patent: September 29, 2020
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Ki Min Yun, Yong Jin Kwon, Jin Young Moon, Sung Chan Oh, Jong Youl Park, Jeun Woo Lee
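    Patent 10789470 above keys the background-model update and the detection behaviour to the camera's motion state. The sketch below is a rough illustration under assumed thresholds: global motion is summarised by the median local-motion magnitude, and the background learning rate and foreground threshold change with the inferred state.

        # Rough, hypothetical sketch: judge whether the camera is moving from the
        # global motion, then adapt the background-model update and the threshold.
        import numpy as np

        def camera_state(local_motions):
            """Call the camera 'moving' when the global (median) motion is large."""
            global_motion = np.median(np.linalg.norm(local_motions, axis=1))
            return "moving" if global_motion > 1.0 else "static"

        def update_background(background, frame, state):
            """Update the background model faster while the camera is moving."""
            alpha = 0.5 if state == "moving" else 0.05
            return (1 - alpha) * background + alpha * frame

        def detect_foreground(background, frame, state):
            """Use a looser threshold while the camera moves to limit false detections."""
            threshold = 0.3 if state == "moving" else 0.1
            return np.abs(frame - background) > threshold

        rng = np.random.default_rng(0)
        background = rng.random((48, 48))
        frame = background.copy()
        frame[10:20, 10:20] += 0.5                         # a moving object appears
        flows = rng.normal(scale=0.2, size=(100, 2))       # small local motions: static camera
        state = camera_state(flows)
        mask = detect_foreground(background, frame, state)
        background = update_background(background, frame, state)
        print(state, int(mask.sum()))                      # "static 100"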
  • Publication number: 20200074647
    Abstract: Smart glasses for selectively tracking a target of visual cognition according to the present invention include a first camera configured to capture a first input image that is a first-person view image of a user, a second camera configured to capture a second input image containing sight line information of the user, a display configured to output additional information corresponding to the first input image, a memory configured to store a program for selectively tracking a target of visual cognition on the basis of the first and second input images, and a processor configured to execute the program stored in the memory, wherein upon executing the program, the processor is configured to detect the target of visual cognition from the first input image and determine, from the second input image, whether the user is in an inattentive state with respect to the target of visual cognition.
    Type: Application
    Filed: August 15, 2019
    Publication date: March 5, 2020
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Jin Young MOON, Yong Jin KWON, Hyung Il KIM, Ki Min YUN, Jong Youl PARK, Sung Chan OH, Jeun Woo LEE
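    Publication 20200074647 above flags an inattentive state when the wearer's gaze (from the second camera) drifts away from the target detected in the first-person view. The toy sketch below assumes the detected target is already a bounding box and the gaze is already a point in the same image coordinates; the frame-count rule is an illustrative stand-in for the patented determination.

        # Hypothetical attention check: inattentive when the gaze point misses the
        # detected target's bounding box for too many consecutive frames.
        def gaze_on_target(gaze_xy, target_box):
            x, y = gaze_xy
            x1, y1, x2, y2 = target_box
            return x1 <= x <= x2 and y1 <= y <= y2

        def inattentive(gaze_track, target_box, max_missed=5):
            """Inattentive if the gaze misses the target for more than max_missed frames."""
            missed = 0
            for gaze in gaze_track:
                missed = 0 if gaze_on_target(gaze, target_box) else missed + 1
                if missed > max_missed:
                    return True
            return False

        target = (100, 100, 200, 200)                     # detected target of visual cognition
        attentive_track = [(150 + i, 150) for i in range(10)]
        distracted_track = [(300, 50)] * 10               # gaze far from the target
        print(inattentive(attentive_track, target))       # False
        print(inattentive(distracted_track, target))      # True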
  • Patent number: 10311579
    Abstract: An apparatus and method for detecting a foreground in an image are provided, and the foreground detecting apparatus includes a context information estimator configured to estimate context information on a scene from an image frame of the image, a background model constructor configured to construct a background model of the image frame using the estimated context information, and a foreground detector configured to detect a foreground from the image frame based on the constructed background model.
    Type: Grant
    Filed: January 13, 2017
    Date of Patent: June 4, 2019
    Assignees: Samsung Electronics Co., Ltd., Seoul National University R&DB Foundation
    Inventors: Ki Min Yun, Jin Young Choi, Jong In Lim
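    Patent 10311579 above conditions the background model on estimated scene context. In the hypothetical sketch below, context is reduced to a day/night brightness label and a separate running-average background is kept per context; the actual context estimator and background model constructor are richer than this.

        # Hypothetical sketch with context reduced to global brightness: estimate the
        # scene context, maintain a background model per context, detect the foreground.
        import numpy as np

        def estimate_context(frame):
            """Very coarse scene context: day vs. night from mean brightness."""
            return "day" if frame.mean() > 0.5 else "night"

        def detect_foreground(frame, backgrounds, alpha=0.05, threshold=0.15):
            """Construct/update a per-context background model and return a mask."""
            ctx = estimate_context(frame)
            bg = backgrounds.setdefault(ctx, frame.copy())
            mask = np.abs(frame - bg) > threshold
            backgrounds[ctx] = (1 - alpha) * bg + alpha * frame    # slow background update
            return ctx, mask

        rng = np.random.default_rng(0)
        backgrounds = {}
        scene = rng.random((32, 32)) * 0.3 + 0.6                   # bright (daytime) scene
        ctx, _ = detect_foreground(scene, backgrounds)             # initialises the "day" model
        scene2 = scene.copy()
        scene2[5:10, 5:10] = 0.0                                   # a dark object enters
        ctx2, mask = detect_foreground(scene2, backgrounds)
        print(ctx, ctx2, int(mask.sum()))                          # day day 25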
  • Publication number: 20190019031
    Abstract: Provided is a dynamic object detecting technique, and more specifically, a system and method for determining a state of a motion of a camera on the basis of a local motion estimated on the basis of a video captured by a dynamic camera and a result of analyzing a global motion, flexibly updating a background model according to the state of the motion of the camera, and flexibly detecting a dynamic object according to the state of the motion of the camera.
    Type: Application
    Filed: July 11, 2018
    Publication date: January 17, 2019
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Ki Min YUN, Yong Jin KWON, Jin Young MOON, Sung Chan OH, Jong Youl PARK, Jeun Woo LEE
  • Publication number: 20170213100
    Abstract: An apparatus and method for detecting a foreground in an image are provided, and the foreground detecting apparatus includes a context information estimator configured to estimate context information on a scene from an image frame of the image, a background model constructor configured to construct a background model of the image frame using the estimated context information, and a foreground detector configured to detect a foreground from the image frame based on the constructed background model.
    Type: Application
    Filed: January 13, 2017
    Publication date: July 27, 2017
    Applicants: Samsung Electronics Co., Ltd., Seoul National University R&DB Foundation
    Inventors: Ki Min YUN, Jin Young CHOI, Jong In LIM
  • Patent number: 9418320
    Abstract: An apparatus for detecting an object includes a filter for filtering a current input image and a background model generated based on a previous input image, a homography matrix estimation unit for estimating a homography matrix between the current input image and the background model, an image alignment unit for converting the background model by applying the homography matrix to a filtered background model and aligning a converted background model and a filtered current input image, and a foreground/background detection unit for detecting a foreground by comparing corresponding pixels between the converted background model and the filtered current input image.
    Type: Grant
    Filed: November 20, 2012
    Date of Patent: August 16, 2016
    Assignees: Seoul National University Industry Foundation, Hanwha Techwin Co., Ltd.
    Inventors: Il-Kwon Chang, Jeong-Eun Lim, Soo-Wan Kim, Sun-Jung Kim, Kwang Moo Yi, Ki-Min Yun, Jin Young Choi
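    Patent 9418320 above aligns the background model to the current frame through an estimated homography before comparing pixels. The sketch below shows only the alignment step, using a plain DLT estimate from known point correspondences; a real pipeline would obtain the correspondences by feature matching and would also warp and compare the full images.

        # Reduced, hypothetical sketch: estimate the homography between the background
        # model and the current frame, then map background points into the frame.
        import numpy as np

        def estimate_homography(src, dst):
            """Direct Linear Transform from >= 4 point correspondences (x, y) -> (u, v)."""
            rows = []
            for (x, y), (u, v) in zip(src, dst):
                rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
                rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
            _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
            H = vt[-1].reshape(3, 3)
            return H / H[2, 2]

        def apply_homography(H, pts):
            pts_h = np.hstack([pts, np.ones((len(pts), 1))])
            mapped = pts_h @ H.T
            return mapped[:, :2] / mapped[:, 2:3]

        # background-model corners and where they appear in the current (moved) frame
        bg_pts = np.array([[0, 0], [100, 0], [100, 100], [0, 100]], dtype=float)
        cur_pts = bg_pts + np.array([12.0, -7.0])          # camera panned between frames
        H = estimate_homography(bg_pts, cur_pts)
        aligned = apply_homography(H, bg_pts)
        print(np.allclose(aligned, cur_pts, atol=1e-6))    # True: background is aligned
        # after alignment, foreground = pixels where |current - warped background| is large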