Patents by Inventor Mi Kyong HAN

Mi Kyong HAN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11727533
    Abstract: A method for generating a super resolution image may comprise up-scaling an input low resolution image; determining a directivity for each patch included in the up-scaled image; selecting an orientation-specified neural network or an orientation-non-specified neural network according to the directivity of the patch; applying the selected neural network to the patch; and obtaining a super resolution image by combining one or more patches output from the orientation-specified neural network and the orientation-non-specified neural network.
    Type: Grant
    Filed: August 12, 2020
    Date of Patent: August 15, 2023
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Seok Bong Yoo, Mi Kyong Han
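
The per-patch routing described in the abstract can be sketched as below. This is an illustrative toy only, assuming trivial stand-ins for the two neural networks and a crude gradient-ratio directivity measure; all function names are hypothetical and not from the patent.

```python
def patch_directivity(patch):
    """Crude directivity score: how strongly horizontal gradients
    dominate vertical ones (or vice versa), normalized to [0, 1]."""
    h = sum(abs(row[i + 1] - row[i])
            for row in patch for i in range(len(row) - 1))
    v = sum(abs(patch[j + 1][i] - patch[j][i])
            for j in range(len(patch) - 1) for i in range(len(patch[0])))
    return abs(h - v) / (h + v + 1e-9)

def oriented_enhance(patch):   # stand-in for the orientation-specified network
    return [[min(255, p + 1) for p in row] for row in patch]

def generic_enhance(patch):    # stand-in for the orientation-non-specified network
    return [[p for p in row] for row in patch]

def super_resolve(patches, threshold=0.3):
    """Select a network per patch by its directivity, then combine outputs."""
    return [oriented_enhance(p) if patch_directivity(p) > threshold
            else generic_enhance(p) for p in patches]

flat = [[10, 10], [10, 10]]    # no dominant direction -> generic network
edge = [[0, 100], [0, 100]]    # strong directional gradient -> oriented network
out = super_resolve([flat, edge])
```

A real implementation would replace `oriented_enhance` and `generic_enhance` with trained networks; only the select-and-combine control flow is the point here.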
  • Patent number: 11682213
    Abstract: The present disclosure provides a method and a device for training a neural network model for use in analyzing captured images, and an intelligent image capturing apparatus employing the same. The neural network model can be trained by performing image reconstruction and image classification based on image data received from a plurality of image capturing devices installed in the monitoring area, calculating at least one loss function based on data processed by the neural network model or the neural network model training device, and determining parameters minimizing the loss function. In addition, the neural network model can be updated through re-training that takes into account newly acquired image data. Accordingly, the image analysis neural network model can operate with high precision and accuracy.
    Type: Grant
    Filed: April 20, 2021
    Date of Patent: June 20, 2023
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Hyun Jin Yoon, Mi Kyong Han
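
The combined-loss training described above can be pictured with a toy example: two losses are computed and the parameter minimizing their sum is kept. Everything below (a scalar parameter, squared-error losses, grid search instead of gradient descent) is a hypothetical stand-in, not the patented model.

```python
def reconstruction_loss(theta, pixels):
    """MSE of a trivial 'autoencoder' that reconstructs by scaling."""
    return sum((p - theta * p) ** 2 for p in pixels) / len(pixels)

def classification_loss(theta, labels):
    """Squared error of a trivial constant scorer against 0/1 labels."""
    return sum((theta - y) ** 2 for y in labels) / len(labels)

def train(pixels, labels, grid):
    """Keep the parameter minimizing the combined loss (grid search
    stands in for gradient-based training)."""
    return min(grid, key=lambda t: reconstruction_loss(t, pixels)
               + classification_loss(t, labels))

pixels = [0.2, 0.5, 0.9]
labels = [1, 1, 1]
best = train(pixels, labels, [i / 10 for i in range(11)])
```

With positive labels, both losses vanish at `theta = 1.0`, so the joint minimizer is the same as each individual one; the structural point is only that one parameter set is chosen against the sum of several losses.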
  • Patent number: 11430090
    Abstract: A method for removing compressed Poisson noises in an image, based on deep neural networks, may comprise generating a plurality of block-aggregation images by performing block transform on low-frequency components of an input image; obtaining a plurality of restored block-aggregation images by inputting the plurality of block-aggregation images into a first deep neural network; generating a low-band output image from which noises for the low-frequency components are removed by performing inverse block transform on the plurality of restored block-aggregation images; and generating an output image from which compressed Poisson noises are removed by adding the low-band output image to a high-band output image from which noises for high-frequency components of the input image are removed.
    Type: Grant
    Filed: August 6, 2020
    Date of Patent: August 30, 2022
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Seok Bong Yoo, Mi Kyong Han
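
A hedged 1-D sketch of the band-split structure described above: denoise the low-frequency component, keep the high-frequency residual, and add the bands back together. The box filter and the identity "denoiser" stand in for the block transforms and the deep network; all names are assumptions.

```python
def low_band(signal, k=3):
    """Box-filter low-frequency component (edge samples clamped)."""
    n = len(signal)
    return [sum(signal[max(0, min(n - 1, i + d))] for d in (-1, 0, 1)) / k
            for i in range(n)]

def denoise_low(low):
    # identity stand-in for the deep-network restoration of the low band
    return low

def remove_noise(signal):
    low = low_band(signal)
    high = [s - l for s, l in zip(signal, low)]   # high-frequency residual
    return [d + h for d, h in zip(denoise_low(low), high)]

out = remove_noise([1.0, 2.0, 3.0, 4.0])
```

With the identity denoiser the decomposition is lossless, which checks that splitting and recombining the bands is consistent; a trained model would replace `denoise_low`.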
  • Publication number: 20210326654
    Abstract: The present disclosure provides a method and a device for training a neural network model for use in analyzing captured images, and an intelligent image capturing apparatus employing the same. The neural network model can be trained by performing the image reconstruction and the image classification using based on image data received from a plurality of image capturing devices installed in the monitoring area, calculating at least one loss function based on data processed by the neural network model or the neural network model training device, and determining parameters minimizing the loss function. In addition, the neural network model can be updated through the re-training taking into account the newly acquired image data. Accordingly, the image analysis neural network model can operate with high precision and accuracy.
    Type: Application
    Filed: April 20, 2021
    Publication date: October 21, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Hyun Jin YOON, Mi Kyong HAN
  • Publication number: 20210049741
    Abstract: A method for generating a super resolution image may comprise up-scaling an input low resolution image; determining a directivity for each patch included in the up-scaled image; selecting an orientation-specified neural network or an orientation-non-specified neural network according to the directivity of the patch; applying the selected neural network to the patch; and obtaining a super resolution image by combining one or more patches output from the orientation-specified neural network and the orientation-non-specified neural network.
    Type: Application
    Filed: August 12, 2020
    Publication date: February 18, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Seok Bong YOO, Mi Kyong HAN
  • Publication number: 20210042887
    Abstract: A method for removing compressed Poisson noises in an image, based on deep neural networks, may comprise generating a plurality of block-aggregation images by performing block transform on low-frequency components of an input image; obtaining a plurality of restored block-aggregation images by inputting the plurality of block-aggregation images into a first deep neural network; generating a low-band output image from which noises for the low-frequency components are removed by performing inverse block transform on the plurality of restored block-aggregation images; and generating an output image from which compressed Poisson noises are removed by adding the low-band output image to a high-band output image from which noises for high-frequency components of the input image are removed.
    Type: Application
    Filed: August 6, 2020
    Publication date: February 11, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Seok Bong YOO, Mi Kyong HAN
  • Patent number: 10861221
    Abstract: Provided is a sensory effect adaptation method performed by an adaptation engine, the method including identifying first metadata associated with an object in a virtual world and used to describe the object and converting the identified first metadata into second metadata to be applied to a sensory device in a real world, wherein the second metadata is obtained by converting the first metadata based on a scene determined by a gaze of a user in the virtual world.
    Type: Grant
    Filed: July 13, 2018
    Date of Patent: December 8, 2020
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jae-Kwan Yun, Noh-Sam Park, Jong Hyun Jang, Mi Kyong Han
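
The adaptation step might be sketched as follows: keep only the objects inside the gaze-determined scene and convert their effect descriptions into device commands. The field names (`id`, `effect`, `intensity`) are illustrative assumptions, not the patent's actual metadata schema.

```python
def adapt(first_metadata, gaze_scene):
    """Convert virtual-object effect metadata into device commands,
    keeping only objects visible in the gaze-determined scene."""
    return [{"device": obj["effect"], "intensity": obj["intensity"]}
            for obj in first_metadata if obj["id"] in gaze_scene]

objects = [
    {"id": "campfire", "effect": "heater", "intensity": 0.8},
    {"id": "fan",      "effect": "wind",   "intensity": 0.5},
]
commands = adapt(objects, gaze_scene={"campfire"})
```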
  • Publication number: 20200334553
    Abstract: Provided are an apparatus and a method for predicting error probability, including: generating a first annotation for training input data by using an algorithm; performing machine learning of an annotation evaluation model based on the first annotation and a correction history for the first annotation; generating a second annotation for evaluation input data by using the algorithm; and predicting the error probability of the second annotation based on the annotation evaluation model.
    Type: Application
    Filed: April 21, 2020
    Publication date: October 22, 2020
    Inventors: Hyunjin YOON, Mi Kyong HAN
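
One way to picture the annotation evaluation model is as a per-label error rate learned from the correction history. The frequency estimate below is an illustrative stand-in for the machine-learned model in the patent; all names are hypothetical.

```python
def fit_error_model(labels, corrected_flags):
    """Estimate P(error | label) as the fraction of first-pass
    annotations with that label that were later corrected."""
    total, wrong = {}, {}
    for label, corrected in zip(labels, corrected_flags):
        total[label] = total.get(label, 0) + 1
        wrong[label] = wrong.get(label, 0) + (1 if corrected else 0)
    return {lab: wrong[lab] / total[lab] for lab in total}

def predict_error(model, label, default=0.5):
    """Error probability for a new annotation's label; unseen labels
    fall back to a neutral default."""
    return model.get(label, default)

model = fit_error_model(["car", "car", "car", "bike"],
                        [True, False, False, True])
```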
  • Patent number: 10719741
    Abstract: Disclosed is a sensory information providing apparatus. The sensory information providing apparatus may comprise a learning model database storing a plurality of learning models related to sensory effect information with respect to a plurality of videos; and a video analysis engine generating the plurality of learning models by analyzing the plurality of videos and their sensory effect meta information to extract sensory effect association information, and extracting sensory information corresponding to an input video stream by analyzing the input video stream based on the plurality of learning models.
    Type: Grant
    Filed: February 6, 2018
    Date of Patent: July 21, 2020
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Noh Sam Park, Hoon Ki Lee, Mi Kyong Han
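
The two-component structure (learning-model database plus analysis engine) can be sketched as below. The lambda "model" is a toy stand-in for a trained learning model, and all names are hypothetical.

```python
class SensoryInfoProvider:
    """Minimal sketch: a store of per-category models plus an analyze
    step that picks the matching model for an input stream."""

    def __init__(self):
        self.models = {}   # learning-model database: category -> model fn

    def add_model(self, category, model_fn):
        self.models[category] = model_fn

    def analyze(self, category, stream):
        """Extract sensory information with the matching model."""
        return self.models[category](stream)

provider = SensoryInfoProvider()
provider.add_model("explosion",
                   lambda s: {"effect": "vibration", "strength": max(s)})
info = provider.analyze("explosion", [0.2, 0.9, 0.4])
```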
  • Patent number: 10410094
    Abstract: A method and an apparatus for authoring machine learning-based immersive media are provided. The apparatus determines an immersive effect type of an original image of image contents to be converted into immersive media by using an immersive effect classifier trained on existing immersive media in which immersive effects have already been added to images, detects an immersive effect section of the original image based on the immersive effect type determination result, and generates metadata of the detected immersive effect section.
    Type: Grant
    Filed: October 3, 2017
    Date of Patent: September 10, 2019
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Hyunjin Yoon, Siadari T. Suprapto, Hoon Ki Lee, Mi Kyong Han
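
The section-detection step can be illustrated as grouping consecutive frames that the classifier assigned the same immersive effect. The per-frame labels below stand in for real classifier output, and the tuple format is an assumption rather than the patent's metadata schema.

```python
def detect_sections(frame_effects):
    """Group runs of identical non-'none' effects into
    (effect, start_frame, end_frame) metadata tuples."""
    sections, start = [], 0
    padded = frame_effects + ["__end__"]   # sentinel to flush the last run
    for i in range(1, len(padded)):
        if padded[i] != padded[start]:
            if padded[start] != "none":
                sections.append((padded[start], start, i - 1))
            start = i
    return sections

frames = ["none", "wind", "wind", "none", "heat"]
meta = detect_sections(frames)
```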
  • Publication number: 20190019340
    Abstract: Provided is a sensory effect adaptation method performed by an adaptation engine, the method including identifying first metadata associated with an object in a virtual world and used to describe the object and converting the identified first metadata into second metadata to be applied to a sensory device in a real world, wherein the second metadata is obtained by converting the first metadata based on a scene determined by a gaze of a user in the virtual world.
    Type: Application
    Filed: July 13, 2018
    Publication date: January 17, 2019
    Inventors: Jae-Kwan YUN, Noh-Sam PARK, Jong Hyun JANG, Mi Kyong HAN
  • Publication number: 20180270452
    Abstract: Disclosed is a multi-point connection control apparatus and method for a video conference service. The apparatus may include a front end processor configured to receive video streams and audio streams from user terminals of participants using the video conference service, and generate screen configuration information for providing the video conference service based on the received video streams and the received audio streams, and a back end processor configured to receive at least one of the video streams, at least one of the audio streams, and the screen configuration information from the front end processor, and generate a mixed video for the video conference service based on the received at least one of the video streams, at least one of the audio streams, and the screen configuration information.
    Type: Application
    Filed: July 26, 2017
    Publication date: September 20, 2018
    Inventors: Jong Bae MOON, Jung-Hyun CHO, Jin Ah KANG, Hoon Ki LEE, Jong Hyun JANG, Deockgu JEE, Seung Han CHOI, Mi Kyong HAN
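
The front-end/back-end split can be sketched minimally: the front end derives screen-configuration information from the incoming streams, and the back end uses that configuration to compose the mixed output. The tile layout and stream representation are assumptions made for illustration.

```python
def front_end(streams):
    """Build screen-configuration info: one tile per participant,
    in join order (a deliberately simple layout policy)."""
    return {"tiles": list(streams)}

def back_end(streams, config):
    """Compose the mixed video as an ordered list of video streams
    following the screen configuration."""
    return [streams[pid]["video"] for pid in config["tiles"]]

streams = {"p1": {"video": "v1", "audio": "a1"},
           "p2": {"video": "v2", "audio": "a2"}}
mixed = back_end(streams, front_end(streams))
```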
  • Publication number: 20180262716
    Abstract: Provided are a method of providing a video conference service and apparatuses performing the same, the method including determining contributions of a plurality of participants to a video conference based on first video signals and first audio signals of devices of the plurality of participants participating in the video conference, and generating a second video signal and a second audio signal to be transmitted to the devices of the plurality of participants based on the contributions.
    Type: Application
    Filed: March 9, 2018
    Publication date: September 13, 2018
    Inventors: Jin Ah KANG, Hyunjin YOON, Deockgu JEE, Jong Hyun JANG, Mi Kyong HAN
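
The contribution idea can be illustrated with a simple per-participant activity score. The mean-absolute-amplitude measure below is a hypothetical stand-in for however contributions are actually computed in the patent.

```python
def contribution(audio_frames):
    """Mean absolute amplitude as a crude speaking-activity score."""
    return sum(abs(a) for a in audio_frames) / len(audio_frames)

def mix_order(streams):
    """Return participant ids, most active first, to drive the
    generated video/audio layout."""
    scores = {pid: contribution(audio) for pid, (audio, _) in streams.items()}
    return sorted(scores, key=scores.get, reverse=True)

streams = {
    "alice": ([0.9, 0.8, 0.7], "video-a"),   # speaking
    "bob":   ([0.0, 0.1, 0.0], "video-b"),   # mostly silent
}
order = mix_order(streams)
```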
  • Publication number: 20180232606
    Abstract: Disclosed is a sensory information providing apparatus. The sensory information providing apparatus may comprise a learning model database storing a plurality of learning models related to sensory effect information with respect to a plurality of videos; and a video analysis engine generating the plurality of learning models by analyzing the plurality of videos and their sensory effect meta information to extract sensory effect association information, and extracting sensory information corresponding to an input video stream by analyzing the input video stream based on the plurality of learning models.
    Type: Application
    Filed: February 6, 2018
    Publication date: August 16, 2018
    Inventors: Noh Sam PARK, Hoon Ki LEE, Mi Kyong HAN
  • Publication number: 20180096222
    Abstract: A method and an apparatus for authoring machine learning-based immersive media are provided. The apparatus determines an immersive effect type of an original image of image contents to be converted into immersive media by using an immersive effect classifier trained on existing immersive media in which immersive effects have already been added to images, detects an immersive effect section of the original image based on the immersive effect type determination result, and generates metadata of the detected immersive effect section.
    Type: Application
    Filed: October 3, 2017
    Publication date: April 5, 2018
    Inventors: Hyunjin YOON, Siadari T. SUPRAPTO, Hoon Ki LEE, Mi Kyong HAN
  • Publication number: 20160188674
    Abstract: An apparatus and a method capable of recommending content suitable for a user by using emotion annotation information are provided. The emotion-based content recommendation apparatus includes a content annotation information database (DB) configured to store basic annotation information and emotion information for each content; a user profile information DB configured to store preferred emotion information in addition to basic profile information for each user; and a content recommendation management module configured to recommend a content list suitable for an emotion of a user based on the emotion information for each content and the preferred emotion information for each user.
    Type: Application
    Filed: December 30, 2015
    Publication date: June 30, 2016
    Inventor: Mi-Kyong HAN
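
A minimal sketch of emotion-based ranking, assuming a set-overlap score between each item's emotion annotations and the user's preferred emotions; the field names and scoring are illustrative, not the patented method.

```python
def recommend(content_db, preferred, top_k=2):
    """Rank content by |content emotions ∩ preferred emotions|,
    descending, and return the top_k titles."""
    scored = sorted(content_db.items(),
                    key=lambda kv: len(set(kv[1]) & preferred),
                    reverse=True)
    return [title for title, _ in scored[:top_k]]

content_db = {
    "sunset-clip":    ["calm", "joy"],
    "horror-trailer": ["fear"],
    "comedy-short":   ["joy"],
}
picks = recommend(content_db, preferred={"joy", "calm"})
```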
  • Patent number: 9338200
    Abstract: Disclosed herein are a metaverse client terminal and method for providing a metaverse space capable of enabling interaction between users. The metaverse client terminal includes a sensing data collection unit, a motion state determination unit, a server interface unit, and a metaverse space provision unit. The sensing data collection unit collects sensing data regarding a motion of a first user. The motion state determination unit determines a motion state of the first user, and generates state information data of the first user. The server interface unit transmits the state information data of the first user to a metaverse server, and receives metaverse information data and state information data of a second user. The metaverse space provision unit generates a metaverse space, generates a first avatar and a second avatar, incorporates the first and second avatars into the metaverse space, and provides the metaverse space to the first user.
    Type: Grant
    Filed: September 16, 2013
    Date of Patent: May 10, 2016
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Sangwook Park, Noh-Sam Park, Jong-Hyun Jang, Kwang-Roh Park, Hyun-Chul Kang, Eun-Jin Ko, Mi-Kyong Han
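
The motion-state determination step might look like the toy classifier below, which maps a speed magnitude derived from sensing data to a state label and packages it as the state-information record exchanged with the metaverse server. The thresholds and field names are assumptions.

```python
def motion_state(speed):
    """Classify motion state from a speed magnitude (thresholds are
    illustrative, not from the patent)."""
    if speed < 0.1:
        return "idle"
    return "walking" if speed < 2.0 else "running"

def state_info(user_id, speed):
    """State-information record to transmit to the metaverse server."""
    return {"user": user_id, "state": motion_state(speed)}

record = state_info("user-1", 1.2)
```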
  • Patent number: 9258392
    Abstract: Disclosed are a method and an apparatus for generating metadata of immersive media, and also disclosed are an apparatus and a method for transmitting metadata-related information. The apparatus includes at least one of: a camera module photographing or capturing an image; a gyro module sensing horizontality; a global positioning sensor (GPS) module calculating a position by receiving a satellite signal; an audio module recording audio; a network module receiving sensor effect information from a sensor aggregator through a wireless communication network; and an application generating metadata by timer-synchronizing an image photographed by the camera module, a sensor effect collected using the gyro module or the GPS module, and audio collected by the audio module.
    Type: Grant
    Filed: November 21, 2013
    Date of Patent: February 9, 2016
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Mi Ryong Park, Hyun Woo Oh, Jae Kwan Yun, Mi Kyong Han, Ji Yeon Kim, Deock Gu Jee, Kwang Roh Park
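
The timer-synchronization idea can be sketched as nearest-timestamp alignment between the camera timeline and collected sensor events. This is an illustrative stand-in, not the patented synchronization scheme.

```python
def synchronize(video_ts, sensor_events):
    """Attach to each video timestamp the sensor event whose timestamp
    is closest (nearest-neighbor stand-in for timer-based sync)."""
    return [(t, min(sensor_events, key=lambda e: abs(e[0] - t))[1])
            for t in video_ts]

video_ts = [0.0, 1.0, 2.0]
sensor_events = [(0.1, "level"), (1.8, "tilt")]
synced = synchronize(video_ts, sensor_events)
```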
  • Patent number: 9165380
    Abstract: An image encoding method using a Binary Partition Tree (BPT) includes performing the BPT on a reference frame, detecting blocks, each having a difference in a pixel value exceeding a threshold value in a current frame, based on a result of the BPT of the reference frame, and performing the BPT of the current frame on the detected blocks. In accordance with the present invention, block partition is not applied to all frames; instead, a partial partition method based on the difference between the pixel values of a reference frame and a current frame to be encoded is provided. Accordingly, the encoding speed within a P frame or a B frame can be improved. Furthermore, the PSNR of a corresponding frame can be maintained within a specific range of the PSNR of the reference frame, and the compression effect can be improved.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: October 20, 2015
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Eun Jin Ko, Hyun Chul Kang, Sang Wook Park, Noh-Sam Park, Mi Kyong Han, Mi Ryong Park, Jong Hyun Jang, Kwang Roh Park
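
The partial-partition criterion can be illustrated in 1-D: compare each block of the current frame against the reference frame and mark only the blocks whose pixel difference exceeds the threshold for re-partitioning. Block size, threshold, and the max-difference measure are illustrative assumptions.

```python
def changed_blocks(ref, cur, block=2, threshold=10):
    """Indices of blocks whose maximum per-pixel difference from the
    reference frame exceeds the threshold; only these blocks would be
    re-partitioned when encoding the current frame."""
    idx = []
    for b in range(0, len(ref), block):
        diff = max(abs(r - c)
                   for r, c in zip(ref[b:b + block], cur[b:b + block]))
        if diff > threshold:
            idx.append(b // block)
    return idx

ref = [10, 10, 50, 50, 90, 90]
cur = [10, 12, 80, 50, 90, 90]   # only the second block changed notably
blocks = changed_blocks(ref, cur)
```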
  • Patent number: D771735
    Type: Grant
    Filed: July 23, 2015
    Date of Patent: November 15, 2016
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Yong Kwi Lee, Jong Hyun Jang, Sangwook Park, Minney Shin, Hyunjin Yoon, Mi Kyong Han