Patents by Inventor Jeun Woo Lee

Jeun Woo Lee has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230229740
    Abstract: The present invention provides a multiclass classification apparatus and method robust to imbalanced data, which generate artificial data of a minority class using an over-sampling technique based on adversarial learning to balance the imbalanced data, and perform multiclass classification robust to imbalanced data by using the generated data in class classification learning without additionally collecting data.
    Type: Application
    Filed: November 29, 2022
    Publication date: July 20, 2023
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: MINHO PARK, Dong-oh KANG, Hwajeon SONG, Jeun Woo LEE
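
The abstract above describes balancing an imbalanced multiclass dataset by synthesizing minority-class samples before training. The patent's adversarial-learning generator is not detailed in this listing, so the following is only a minimal sketch in which a simple interpolation-based over-sampler (a SMOTE-like stand-in, hypothetical) plays the generator's role before an ordinary classifier is trained.

```python
# Illustrative sketch only: interpolation-based over-sampling stands in for the
# patent's adversarial-learning generator, which is not described in this listing.
import numpy as np
from sklearn.linear_model import LogisticRegression

def oversample_to_balance(X, y, rng=np.random.default_rng(0)):
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    X_out, y_out = [X], [y]
    for cls, count in zip(classes, counts):
        need = target - count
        if need == 0:
            continue
        Xc = X[y == cls]
        # Interpolate between random pairs of same-class samples.
        i = rng.integers(0, len(Xc), size=need)
        j = rng.integers(0, len(Xc), size=need)
        t = rng.random((need, 1))
        X_out.append(Xc[i] + t * (Xc[j] - Xc[i]))
        y_out.append(np.full(need, cls))
    return np.vstack(X_out), np.concatenate(y_out)

# Toy imbalanced data: class 0 dominates classes 1 and 2.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 2)),
               rng.normal(3, 1, (20, 2)),
               rng.normal(-3, 1, (10, 2))])
y = np.array([0] * 200 + [1] * 20 + [2] * 10)

X_bal, y_bal = oversample_to_balance(X, y)
clf = LogisticRegression(max_iter=1000).fit(X_bal, y_bal)
print(clf.score(X, y))
```
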
  • Patent number: 11663816
    Abstract: Provided is an apparatus for classifying an attribute of an image object, including: a first memory configured to store target object images that are indexed; a second memory configured to store target object images that are un-indexed; and an object attribute classification module configured to perform learning on the un-indexed target object images to construct a classifier for classifying a detailed attribute of a target object, and finely adjust the classifier on the basis of the indexed target object images.
    Type: Grant
    Filed: February 12, 2021
    Date of Patent: May 30, 2023
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jeun Woo Lee, Sung Chan Oh
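
The listing does not say how the un-indexed (unlabeled) images are used to construct the classifier, so the sketch below assumes a generic two-stage recipe: representation pretraining on the unlabeled images (here a tiny autoencoder), followed by fine-tuning a classification head on the indexed (labeled) images. The image sizes, model, and attribute count are hypothetical.

```python
# Illustrative sketch only: generic pretrain-then-fine-tune stand-in for the
# two-memory (un-indexed / indexed) learning scheme named in the abstract.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 128), nn.ReLU())
decoder = nn.Linear(128, 32 * 32 * 3)
head = nn.Linear(128, 10)  # 10 hypothetical attribute classes

unlabeled = torch.rand(256, 3, 32, 32)   # "second memory": un-indexed images
labeled = torch.rand(64, 3, 32, 32)      # "first memory": indexed images
labels = torch.randint(0, 10, (64,))

# Stage 1: reconstruction pretraining on un-indexed images.
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(5):
    z = encoder(unlabeled)
    loss = nn.functional.mse_loss(decoder(z), unlabeled.flatten(1))
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: finely adjust (fine-tune) encoder plus head on indexed images.
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-4)
for _ in range(5):
    logits = head(encoder(labeled))
    loss = nn.functional.cross_entropy(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()
```
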
  • Publication number: 20230145028
    Abstract: Disclosed herein are a method and apparatus for processing feature information based on an artificial neural network. According to an embodiment of the present disclosure, the apparatus for processing feature information based on an artificial neural network may include a memory for storing data and a processor for controlling the memory, and the processor may further be configured to extract a graph, which includes vertices, based on a feature map of an image, to extract a feature vector corresponding to the vertices and to process the graph and the feature vector based on an artificial neural network, and the graph may include positions of the vertices and information on a connection relationship between the vertices.
    Type: Application
    Filed: September 7, 2022
    Publication date: May 11, 2023
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Sung Chan OH, Yong Jin KWON, Hyung Il KIM, Jin Young MOON, Yu Seok BAE, Ki Min YUN, Jeun Woo LEE, Joong Won HWANG
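
The abstract above says a graph of vertices is built from an image feature map, feature vectors are taken at those vertices, and both are processed by a neural network. How vertices and edges are chosen is not given here, so the sketch assumes vertices are the strongest feature-map locations, edges connect spatially nearby vertices, and processing is a single graph-convolution-style update; all of that is an assumption, not the patented design.

```python
# Illustrative sketch only: graph extraction from a feature map plus one
# graph-convolution-style update (normalized adjacency @ features @ weights).
import numpy as np

rng = np.random.default_rng(0)
fmap = rng.random((64, 16, 16))          # (channels, H, W) hypothetical feature map

# Vertices: the K spatially strongest responses.
K = 8
score = fmap.mean(axis=0)                # (H, W) saliency per location
ys, xs = np.unravel_index(np.argsort(score.ravel())[-K:], score.shape)
positions = np.stack([ys, xs], axis=1)   # vertex positions
X = fmap[:, ys, xs].T                    # (K, 64) feature vector per vertex

# Edges: connect vertices closer than a distance threshold.
d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
A = (d < 6.0).astype(float)              # adjacency (self-loops included, d = 0)

# One graph-convolution-style layer over the graph and the feature vectors.
A_hat = A / A.sum(axis=1, keepdims=True)
W = rng.normal(scale=0.1, size=(64, 32))
H = np.maximum(A_hat @ X @ W, 0.0)       # (K, 32) updated vertex features
print(H.shape)
```
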
  • Patent number: 11380133
    Abstract: A domain adaptation-based object recognition apparatus includes a memory configured to store a domain adaptation-based object recognition program and a processor configured to execute the program. The processor learns a generative model for generating a feature or an image similar to a gallery image on the basis of domain adaptation in association with an input probe image and learns an object recognition classification model by using a learning database corresponding to the gallery image and the input probe image, thereby performing object recognition using the input probe image.
    Type: Grant
    Filed: March 30, 2020
    Date of Patent: July 5, 2022
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Hyung Il Kim, Yong Jin Kwon, Jin Young Moon, Jong Youl Park, Sung Chan Oh, Ki Min Yun, Jeun Woo Lee
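
The patent's generative model and training procedure are not given in this listing. The sketch below assumes a common domain-adaptation pattern: a small generator maps probe-domain features toward the gallery-domain feature distribution, trained adversarially against a domain discriminator, and a classifier trained on gallery features is then applied to the translated probe features. Feature dimensions and networks are hypothetical.

```python
# Illustrative sketch only: adversarial feature alignment as a stand-in for the
# domain-adaptation generative model named in the abstract.
import torch
import torch.nn as nn

torch.manual_seed(0)
gallery = torch.randn(512, 64) + 2.0     # gallery-domain features (hypothetical)
probe = torch.randn(512, 64) - 2.0       # probe-domain features, shifted domain

G = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64))   # probe -> gallery-like
D = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))    # domain discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for _ in range(200):
    # Discriminator: gallery features are "real", translated probes are "fake".
    fake = G(probe).detach()
    d_loss = bce(D(gallery), torch.ones(512, 1)) + bce(D(fake), torch.zeros(512, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: make translated probe features indistinguishable from gallery ones.
    g_loss = bce(D(G(probe)), torch.ones(512, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# An object-recognition classifier trained on gallery features can now be applied to G(probe).
print(G(probe).mean().item())   # drifts toward the gallery mean (~2.0)
```
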
  • Publication number: 20210256322
    Abstract: Provided is an apparatus for classifying an attribute of an image object, including: a first memory configured to store target object images that are indexed; a second memory configured to store target object images that are un-indexed; and an object attribute classification module configured to perform learning on the un-indexed target object images to construct a classifier for classifying a detailed attribute of a target object, and finely adjust the classifier on the basis of the indexed target object images.
    Type: Application
    Filed: February 12, 2021
    Publication date: August 19, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Jeun Woo LEE, Sung Chan OH
  • Publication number: 20200311389
    Abstract: A domain adaptation-based object recognition apparatus includes a memory configured to store a domain adaptation-based object recognition program and a processor configured to execute the program. The processor learns a generative model for generating a feature or an image similar to a gallery image on the basis of domain adaptation in association with an input probe image and learns an object recognition classification model by using a learning database corresponding to the gallery image and the input probe image, thereby performing object recognition using the input probe image.
    Type: Application
    Filed: March 30, 2020
    Publication date: October 1, 2020
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Hyung Il KIM, Yong Jin KWON, Jin Young MOON, Jong Youl PARK, Sung Chan OH, Ki Min YUN, Jeun Woo LEE
  • Patent number: 10789470
    Abstract: Provided is a dynamic object detecting technique, and more specifically, a system and method for determining the motion state of a camera on the basis of a local motion estimated from a video captured by a dynamic camera and a result of analyzing a global motion, flexibly updating a background model according to the motion state of the camera, and flexibly detecting a dynamic object according to that state.
    Type: Grant
    Filed: July 11, 2018
    Date of Patent: September 29, 2020
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Ki Min Yun, Yong Jin Kwon, Jin Young Moon, Sung Chan Oh, Jong Youl Park, Jeun Woo Lee
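
The abstract above describes deciding whether the camera itself is moving, by relating local motion to global motion, and adapting the background model accordingly. The patent's actual estimators are not given here, so the sketch assumes dense optical flow for motion, its median as the global (camera) motion, and an OpenCV MOG2 background model whose learning rate is raised while the camera appears to move; the threshold and input file are hypothetical.

```python
# Illustrative sketch only: camera-motion-aware background updating with OpenCV.
import cv2
import numpy as np

cap = cv2.VideoCapture("input.mp4")          # hypothetical input video
bg = cv2.createBackgroundSubtractorMOG2()
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY) if ok else None

while ok:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    global_motion = np.median(flow.reshape(-1, 2), axis=0)     # camera (global) motion
    camera_moving = np.linalg.norm(global_motion) > 1.0        # hypothetical threshold

    # Flexible update: adapt the background model faster while the camera moves.
    lr = 0.05 if camera_moving else 0.002
    fg_mask = bg.apply(frame, learningRate=lr)                 # dynamic-object mask

    prev_gray = gray
```
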
  • Publication number: 20200074647
    Abstract: Smart glasses for selectively tracking a target of visual cognition according to the present invention include a first camera configured to capture a first input image that is a first-person view image of a user, a second camera configured to capture a second input image containing sight line information of the user, a display configured to output additional information corresponding to the first input image, a memory configured to store a program for selectively tracking a target of visual cognition on the basis of the first and second input images, and a processor configured to execute the program stored in the memory, wherein upon executing the program, the processor is configured to detect the target of visual cognition from the first input image and determine, from the second input image, whether the user is in an inattentive state with respect to the target of visual cognition.
    Type: Application
    Filed: August 15, 2019
    Publication date: March 5, 2020
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Jin Young MOON, Yong Jin KWON, HYUNG IL KIM, Ki Min YUN, Jong Youl PARK, Sung Chan OH, Jeun Woo LEE
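
The entry above describes detecting a target in the first-person view and using the eye-facing camera to decide whether the user is attending to it. Detection and gaze estimation are out of scope here; the sketch below only shows the attention decision under an assumed rule: the user is flagged inattentive if the gaze point rarely lands near the target box over a recent window. The margin, window, and threshold are hypothetical.

```python
# Illustrative sketch only: a windowed gaze-on-target test as a stand-in for the
# inattentive-state decision named in the abstract.
from collections import deque

def inside(gaze, box, margin=20):
    x, y = gaze
    x1, y1, x2, y2 = box
    return (x1 - margin) <= x <= (x2 + margin) and (y1 - margin) <= y <= (y2 + margin)

recent = deque(maxlen=30)                       # roughly 1 s of frames at 30 fps

def update_attention(gaze_point, target_box, threshold=0.3):
    """Return True if the user seems inattentive to the target."""
    recent.append(inside(gaze_point, target_box))
    attended_ratio = sum(recent) / len(recent)
    return attended_ratio < threshold

# Example: the gaze stays far from a target box spanning (100, 100)-(200, 200).
for _ in range(30):
    inattentive = update_attention((500, 400), (100, 100, 200, 200))
print(inattentive)   # True: the gaze never landed on the target
```
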
  • Patent number: 10474901
    Abstract: Disclosed herein are a video interpretation apparatus and method. The video interpretation apparatus includes an object information generation unit for generating object information based on objects in an input video, a relation generation unit for generating a dynamic spatial relation between the objects based on the object information, a general event information generation unit for generating general event information based on the dynamic spatial relation, a video information generation unit for generating video information including any one of a sentence and an event description based on the object information and the general event information, and a video descriptor storage unit for storing the object information, the general event information, and the video information.
    Type: Grant
    Filed: January 5, 2017
    Date of Patent: November 12, 2019
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jin-Young Moon, Kyu-Chang Kang, Yong-Jin Kwon, Kyoung Park, Jong-Youl Park, Jeun-Woo Lee
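
The abstract above names a pipeline of units (object information, dynamic spatial relation, general event information, sentence or event description). The toy code below mimics that flow with hand-made tracks; the relation and event vocabulary ("approaches") and the sentence template are hypothetical, not the patent's.

```python
# Illustrative sketch only: object info -> dynamic spatial relation -> event -> sentence.
import math

# Object information: per-frame centers for two tracked objects.
tracks = {
    "person": [(10, 50), (20, 50), (30, 50), (40, 50)],
    "car":    [(80, 50), (70, 50), (60, 50), (50, 50)],
}

def distances(a, b):
    return [math.dist(p, q) for p, q in zip(tracks[a], tracks[b])]

# Dynamic spatial relation: how the pairwise distance evolves over time.
d = distances("person", "car")

# General event information: a monotonically shrinking distance -> "approaches".
if all(later < earlier for earlier, later in zip(d, d[1:])):
    event = "approaches"
else:
    event = "moves_relative_to"

# Video information: render the event as a sentence.
print(f"The person {event.replace('_', ' ')} the car.")
```
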
  • Patent number: 10235410
    Abstract: Disclosed herein are a query input apparatus and method. The query input apparatus includes: an input unit providing a graphic user interface (GUI) to receive, from the user, a schematized composite activity that the user wants to search for; and a processing unit generating a query using an activity descriptor corresponding to the schematized composite activity in response to a query request from an activity searching system and transferring the generated query to the activity searching system.
    Type: Grant
    Filed: February 11, 2016
    Date of Patent: March 19, 2019
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jin-Young Moon, Kyu-Chang Kang, Yong-Jin Kwon, Kyoung Park, Chang-Seok Bae, Jeun-Woo Lee
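
The listing does not define the activity descriptor or the query format, so both structures below are hypothetical. The sketch only shows the flow the abstract names: a schematized composite activity captured in a GUI is turned into a descriptor, and the descriptor into a query for the activity searching system.

```python
# Illustrative sketch only: schematized composite activity -> descriptor -> query.
composite_activity = {                      # what the user schematized in the GUI
    "actor": "person",
    "steps": [
        {"action": "enter", "object": "room"},
        {"action": "pick_up", "object": "bag"},
        {"action": "exit", "object": "room"},
    ],
}

def to_descriptor(activity):
    """Flatten the schematized activity into an ordered activity descriptor."""
    return [(activity["actor"], s["action"], s["object"]) for s in activity["steps"]]

def to_query(descriptor):
    """Serialize the descriptor as a query string for the search system."""
    clauses = [f"{actor}:{action}:{obj}" for actor, action, obj in descriptor]
    return " THEN ".join(clauses)

print(to_query(to_descriptor(composite_activity)))
# person:enter:room THEN person:pick_up:bag THEN person:exit:room
```
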
  • Publication number: 20190019031
    Abstract: Provided is a dynamic object detecting technique, and more specifically, a system and method for determining the motion state of a camera on the basis of a local motion estimated from a video captured by a dynamic camera and a result of analyzing a global motion, flexibly updating a background model according to the motion state of the camera, and flexibly detecting a dynamic object according to that state.
    Type: Application
    Filed: July 11, 2018
    Publication date: January 17, 2019
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Ki Min YUN, Yong Jin KWON, Jin Young MOON, Sung Chan OH, Jong Youl PARK, Jeun Woo LEE
  • Publication number: 20180285744
    Abstract: A system for generating a multimedia knowledge base uses a multimedia information detection unit to detect texted meta information from multimedia data including at least one combination of a text, a voice, an image, and a video, and allows a knowledge base shaping unit to use the texted meta information and context information of the multimedia data to divide the multimedia data into syntactic information, representing extrinsic configuration information, and semantic information, representing intrinsic meaning information, and may shape the syntactic information and the semantic information into the multimedia knowledge base.
    Type: Application
    Filed: April 4, 2018
    Publication date: October 4, 2018
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Kyu Chang KANG, Yongjin KWON, Jin Young MOON, Kyoung PARK, Jongyoul PARK, Yu Seok BAE, Sungchan OH, Jeun Woo LEE
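
The abstract above separates "syntactic" (extrinsic configuration) information from "semantic" (intrinsic meaning) information. The field names and the shaped knowledge format below are hypothetical stand-ins for the patent's actual representation; the sketch only illustrates the split.

```python
# Illustrative sketch only: shaping a multimedia record into syntactic and
# semantic knowledge, with hypothetical fields.
multimedia_item = {
    "id": "clip-001",
    "format": "mp4", "duration_s": 42, "resolution": "1920x1080",   # extrinsic
    "transcript": "a man opens the door and greets a visitor",       # intrinsic
    "detected_objects": ["man", "door", "visitor"],
}

def shape_knowledge(item):
    syntactic = {k: item[k] for k in ("format", "duration_s", "resolution")}
    semantic = {
        "objects": item["detected_objects"],
        "events": [w for w in item["transcript"].split() if w in ("opens", "greets")],
    }
    return {"id": item["id"], "syntactic": syntactic, "semantic": semantic}

print(shape_knowledge(multimedia_item))
```
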
  • Patent number: 9898666
    Abstract: An apparatus and method for providing primitive visual knowledge are disclosed. The method of providing primitive visual knowledge includes receiving an image in the form of a digital image sequence, dividing the received image into scenes, extracting a representative shot from each of the scenes, extracting objects from frames which compose the representative shot, extracting action verbs based on a mutual relationship between the extracted objects, selecting, as a key frame, a frame that best expresses the mutual relationship between the objects on which the extraction of the action verbs is based, generating the primitive visual knowledge based on the selected key frame, storing the generated primitive visual knowledge in a database, and visualizing the primitive visual knowledge stored in the database to provide the primitive visual knowledge to a manager.
    Type: Grant
    Filed: January 22, 2016
    Date of Patent: February 20, 2018
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Kyu-Chang Kang, Yong-Jin Kwon, Jin-Young Moon, Kyoung Park, Chang-Seok Bae, Jeun-Woo Lee
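
The sketch below is a toy rendering of the pipeline the abstract lists (scenes, representative shot, objects, action verbs, key frame, stored knowledge). Every detector in it is a placeholder; the real method relies on visual models that are not described in this listing.

```python
# Illustrative sketch only: placeholder pipeline for primitive visual knowledge.
def split_into_scenes(frames, scene_len=30):
    return [frames[i:i + scene_len] for i in range(0, len(frames), scene_len)]

def representative_shot(scene):
    mid = len(scene) // 2                       # placeholder: middle of the scene
    return scene[max(0, mid - 5):mid + 5]

def detect_objects(frame):
    return frame["objects"]                     # placeholder object detector

def action_verb(objects_a, objects_b):
    return "interacts_with" if set(objects_a) & set(objects_b) else "appears_with"

frames = [{"t": t, "objects": ["person", "ball"] if t % 2 else ["person"]}
          for t in range(90)]

knowledge_db = []
for scene in split_into_scenes(frames):
    shot = representative_shot(scene)
    key_frame = max(shot, key=lambda f: len(detect_objects(f)))   # best-expressing frame
    objs = detect_objects(key_frame)
    verb = action_verb(objs[:1], objs[1:])
    knowledge_db.append({"key_frame_t": key_frame["t"], "objects": objs, "verb": verb})

print(knowledge_db[0])
```
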
  • Publication number: 20170316268
    Abstract: Disclosed herein are a video interpretation apparatus and method. The video interpretation apparatus includes an object information generation unit for generating object information based on objects in an input video, a relation generation unit for generating a dynamic spatial relation between the objects based on the object information, a general event information generation unit for generating general event information based on the dynamic spatial relation, a video information generation unit for generating video information including any one of a sentence and an event description based on the object information and the general event information, and a video descriptor storage unit for storing the object information, the general event information, and the video information.
    Type: Application
    Filed: January 5, 2017
    Publication date: November 2, 2017
    Inventors: Jin-Young MOON, Kyu-Chang KANG, Yong-Jin KWON, Kyoung PARK, Jong-Youl PARK, Jeun-Woo LEE
  • Publication number: 20160232202
    Abstract: Disclosed herein are a query input apparatus and method. The query input apparatus includes: an input unit providing a graphic user interface (GUI) to receive, from the user, a schematized composite activity that the user wants to search for; and a processing unit generating a query using an activity descriptor corresponding to the schematized composite activity in response to a query request from an activity searching system and transferring the generated query to the activity searching system.
    Type: Application
    Filed: February 11, 2016
    Publication date: August 11, 2016
    Inventors: Jin-Young MOON, Kyu-Chang KANG, Yong-Jin KWON, Kyoung PARK, Chang-Seok BAE, Jeun-Woo LEE
  • Publication number: 20160217329
    Abstract: An apparatus and method for providing primitive visual knowledge are disclosed. The method of providing primitive visual knowledge includes receiving an image in the form of a digital image sequence, dividing the received image into scenes, extracting a representative shot from each of the scenes, extracting objects from frames which compose the representative shot, extracting action verbs based on a mutual relationship between the extracted objects, selecting, as a key frame, a frame that best expresses the mutual relationship between the objects on which the extraction of the action verbs is based, generating the primitive visual knowledge based on the selected key frame, storing the generated primitive visual knowledge in a database, and visualizing the primitive visual knowledge stored in the database to provide the primitive visual knowledge to a manager.
    Type: Application
    Filed: January 22, 2016
    Publication date: July 28, 2016
    Inventors: Kyu-Chang Kang, Yong-Jin Kwon, Jin-Young Moon, Kyoung Park, Chang-Seok Bae, Jeun-Woo Lee
  • Patent number: 9380085
    Abstract: Disclosed herein are a server and method for providing collaboration service, and a sociality management server. The server includes a service provision unit and a space construction unit. The service provision unit sets up collaboration service corresponding to the results of combining functions of at least two collaboration terminals based on sociality values corresponding to the at least two collaboration terminals. The space construction unit constructs a service provision device space via which the collaboration service is to be provided. The service provision unit provides the collaboration service to the at least two collaboration terminals via the service provision device space.
    Type: Grant
    Filed: July 19, 2013
    Date of Patent: June 28, 2016
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Dong-Oh Kang, Changseok Bae, Jeun-Woo Lee, Kyuchang Kang, Hyungjik Lee, Joon-Young Jung
  • Patent number: 8806488
    Abstract: Disclosed are a system and method for managing personalization information of a virtual machine based on cloud computing. An exemplary embodiment of the present invention provides a system for managing personalization information of a virtual machine, including: virtual desktops positioned in on-demand service zones, and created and driven on the basis of the virtual machine; zone servers transmitting the personalization information of the virtual machine for at least one virtual desktop positioned in the on-demand service zones; and local servers storing personalization information of the virtual machine for at least one virtual desktop positioned in at least one on-demand service zone and synchronizing the personalization information of the virtual machine with another local server.
    Type: Grant
    Filed: November 25, 2011
    Date of Patent: August 12, 2014
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Kyu Chang Kang, Dong Oh Kang, Hyung Jik Lee, Jeun Woo Lee
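
The entry above describes local servers that store a virtual machine's personalization information per service zone and keep it synchronized with other local servers. The record format and the last-write-wins merge rule below are assumptions for illustration, not the patented protocol.

```python
# Illustrative sketch only: per-zone stores of VM personalization information
# kept in sync with a last-write-wins merge.
import time

class LocalServer:
    def __init__(self, zone):
        self.zone = zone
        self.store = {}           # vm_id -> (timestamp, personalization dict)

    def update(self, vm_id, settings):
        self.store[vm_id] = (time.time(), settings)

    def sync_with(self, other):
        # Merge both stores, keeping the most recently written record per VM.
        for vm_id, record in other.store.items():
            if vm_id not in self.store or record[0] > self.store[vm_id][0]:
                self.store[vm_id] = record
        other.store.update(self.store)

zone_a, zone_b = LocalServer("A"), LocalServer("B")
zone_a.update("vm-42", {"wallpaper": "blue", "locale": "ko_KR"})
zone_b.sync_with(zone_a)
print(zone_b.store["vm-42"][1])   # the personalization follows the user to zone B
```
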
  • Patent number: 8780178
    Abstract: Disclosed herein are a device and method for displaying 3D images. The device includes an image processing unit for calculating the location of a user relative to a reference point and outputting a 3D image which is obtained by performing image processing on 3D content sent by a server based on the calculated location of the user, the image processing corresponding to a viewpoint of the user, and a display unit for displaying the 3D image output by the image processing unit to the user. The method includes calculating the location of a user relative to a reference point, performing image processing on 3D content sent by a server from a viewpoint of the user based on the calculated location of the user, and outputting a 3D image which is obtained by the image processing, and displaying the 3D image output by the image processing unit to the user.
    Type: Grant
    Filed: December 16, 2010
    Date of Patent: July 15, 2014
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Eun-Jin Koh, Jong-Ho Won, Jun-Seok Park, Jeun-Woo Lee
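
The abstract above says the rendered 3D content is adapted to the user's location relative to a reference point. The sketch below only builds a look-at view matrix from that location and maps one content point into the user's view space; the device's actual tracking and rendering pipeline is not described in this listing, and the coordinates are hypothetical.

```python
# Illustrative sketch only: viewpoint-dependent transform from the user's
# tracked location relative to a reference point.
import numpy as np

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    f = target - eye; f = f / np.linalg.norm(f)            # forward
    r = np.cross(f, up); r = r / np.linalg.norm(r)         # right
    u = np.cross(r, f)                                     # true up
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = r, u, -f
    view[:3, 3] = -view[:3, :3] @ eye
    return view

reference_point = np.array([0.0, 0.0, 0.0])      # e.g. the center of the display
user_location = np.array([0.3, 0.1, 1.5])        # tracked user location (meters)

view = look_at(user_location, reference_point)
content_point = np.array([0.0, 0.2, -0.5, 1.0])  # a point of the 3D content
print(view @ content_point)                      # that point in the user's view space
```
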
  • Patent number: 8782137
    Abstract: Disclosed are a multi-hop MIMO system and method. The multi-hop MIMO system according to an exemplary embodiment of the present invention includes: a server including a plurality of virtual machines; one remote screen device first connected to the server through at least one virtual machine of the plurality of virtual machines, receiving screen data from the server in a unicast scheme, and driven as a multicast server; and a plurality of different remote screen devices connected to the server through the at least one virtual machine, existing on a sub network where the one remote screen device exists, operating as multicast clients, and receiving the screen data from the server or the one remote screen device in a multicast scheme, wherein the one remote screen device and the plurality of different remote screen devices simultaneously output the same screen data.
    Type: Grant
    Filed: July 28, 2011
    Date of Patent: July 15, 2014
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Joon Young Jung, Jeun Woo Lee
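
The abstract above describes one remote screen device that receives screen data from the server by unicast and then acts as a multicast server for the other devices on its subnet. The UDP multicast relay below is a generic stand-in for that fan-out step; the group address, port, and payload are hypothetical.

```python
# Illustrative sketch only: generic UDP multicast fan-out of screen data.
import socket
import struct

MCAST_GRP, MCAST_PORT = "224.1.1.1", 5007

def relay(screen_chunk: bytes):
    """Run on the first device: re-send unicast screen data to the multicast group."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(screen_chunk, (MCAST_GRP, MCAST_PORT))

def receive():
    """Run on the other devices: join the group and read the shared screen data."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", MCAST_PORT))
    mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    data, _ = sock.recvfrom(65535)
    return data

relay(b"frame-0 pixels...")   # every client then outputs the same screen data
```
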