Patents by Inventor Won-Young Yoo

Won-Young Yoo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230169709
    Abstract: Provided are a face de-identification method and system and a graphical user interface (GUI) provision method for face de-identification employing facial image generation. According to the face de-identification method and system and the GUI provision method, a facial area including eyes, a nose, and a mouth in a face of a person detected in an input image is replaced with a de-identified facial area generated through deep learning to maintain the face in a natural shape while protecting the person's portrait right. Accordingly, qualitative degradation of content is prevented, and viewers' concentration on the image is increased.
    Type: Application
    Filed: August 31, 2022
    Publication date: June 1, 2023
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Dong Hyuck IM, Jung Hyun KIM, Hye Mi KIM, Jee Hyun PARK, Yong Seok SEO, Won Young YOO
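
    A minimal sketch of the replacement step this abstract describes, assuming
    opencv-python for the face detection; generate_deidentified_face is a
    hypothetical stand-in for the deep-learning face generator the patent
    relies on.

        import cv2
        import numpy as np

        def generate_deidentified_face(h, w):
            # Stand-in for the generator network in the abstract; a real
            # system would synthesize a natural-looking face here.
            return np.full((h, w, 3), 128, dtype=np.uint8)

        def deidentify(frame):
            cascade = cv2.CascadeClassifier(
                cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
                # Replace the facial area (eyes, nose, mouth) so the overall
                # head shape stays natural, as the abstract describes.
                frame[y:y + h, x:x + w] = generate_deidentified_face(h, w)
            return frame

        out = deidentify(np.zeros((240, 320, 3), dtype=np.uint8))
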
  • Publication number: 20230153351
    Abstract: The present invention relates to an apparatus and method for identifying music in content. The method includes extracting and storing a fingerprint of an original audio in an audio fingerprint DB; extracting a first fingerprint of a first audio in the content; and searching for a fingerprint corresponding to the first fingerprint in the audio fingerprint DB, wherein the first audio is audio data in a music section detected from the content.
    Type: Application
    Filed: February 25, 2022
    Publication date: May 18, 2023
    Inventors: Jee Hyun PARK, Jung Hyun KIM, Hye Mi KIM, Yong Seok SEO, Dong Hyuck IM, Won Young YOO
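
    A toy illustration of fingerprint registration and lookup in the spirit of
    this abstract; the per-frame argmax "hash" is a deliberately simplified
    stand-in for a real audio fingerprint, and db, register, and identify are
    hypothetical names.

        import numpy as np
        from collections import defaultdict

        def fingerprint(audio, frame=2048):
            # Hash each half-overlapping frame to its dominant FFT bin.
            hashes = []
            for i in range(0, len(audio) - frame, frame // 2):
                spectrum = np.abs(np.fft.rfft(audio[i:i + frame]))
                hashes.append(int(np.argmax(spectrum)))
            return hashes

        db = defaultdict(list)  # audio fingerprint DB: hash -> (track, offset)

        def register(track_id, audio):
            for offset, h in enumerate(fingerprint(audio)):
                db[h].append((track_id, offset))

        def identify(query):
            # Vote for (track, time-offset) pairs; a consistent offset wins.
            votes = defaultdict(int)
            for q_off, h in enumerate(fingerprint(query)):
                for track_id, offset in db[h]:
                    votes[(track_id, offset - q_off)] += 1
            return max(votes, key=votes.get)[0] if votes else None

        track = np.random.randn(16000 * 5)
        register("song1", track)
        print(identify(track[16000:48000]))  # -> "song1"
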
  • Publication number: 20230122553
    Abstract: The present invention relates to an apparatus and method for drawing, the method comprising: inputting a drawing image; recognizing a component in the input drawing image; inferring a structure of an object based on the recognized component; and drawing the inferred structure of the object.
    Type: Application
    Filed: February 2, 2022
    Publication date: April 20, 2023
    Inventors: Seung Jae LEE, Su Woong LEE, Yong Sik LEE, Ju Won LEE, Da Un JUNG, Jong Gook KO, Won Young YOO
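
    A rough sketch of the recognize-then-infer pipeline; recognize_components
    is a hypothetical stand-in for the component recognizer, and a simple
    proximity rule stands in for the structure inference the abstract leaves
    unspecified.

        from dataclasses import dataclass

        @dataclass
        class Component:
            kind: str   # e.g. "wall", "door" -- illustrative component types
            x: float
            y: float

        def recognize_components(image):
            # Stand-in for the recognizer (a detector network in practice).
            return [Component("wall", 0, 0), Component("door", 1, 0)]

        def infer_structure(components):
            # Connect nearby components into the object's structure.
            edges = []
            for i, a in enumerate(components):
                for b in components[i + 1:]:
                    if abs(a.x - b.x) + abs(a.y - b.y) <= 1.5:
                        edges.append((a.kind, b.kind))
            return edges

        print(infer_structure(recognize_components(None)))
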
  • Publication number: 20220012589
    Abstract: A data learning device in a deep learning network characterized by a high image resolution and a thin channel at an input stage and an output stage and a low image resolution and a thick channel in an intermediate deep layer includes a feature information extraction unit configured to extract global feature information considering an association between all elements of data when generating an initial estimate in the deep layer; a direct channel-to-image conversion unit configured to generate expanded data having the same resolution as a final output from the generated initial estimate of the global feature information or intermediate outputs sequentially generated in subsequent layers; and a comparison and learning unit configured to calculate a difference between the expanded data generated by the direct channel-to-image conversion unit and a prepared ground truth value and update network parameters such that the difference is decreased.
    Type: Application
    Filed: July 8, 2021
    Publication date: January 13, 2022
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Jung Jae YU, Jong Gook KO, Won Young YOO, Keun Dong LEE, Su Woong LEE, Seung Jae LEE, Yong Sik LEE, Da Un JUNG
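
    A minimal PyTorch sketch of the direct channel-to-image conversion idea:
    a 1x1 convolution plus bilinear upsampling projects a thick,
    low-resolution deep feature map to an output-sized estimate that can be
    compared against the prepared ground truth; the class name and all shapes
    are assumptions.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class DirectChannelToImage(nn.Module):
            def __init__(self, in_ch, out_ch=3):
                super().__init__()
                self.proj = nn.Conv2d(in_ch, out_ch, kernel_size=1)

            def forward(self, feat, out_hw):
                # Project channels to image space, then expand to full size.
                return F.interpolate(self.proj(feat), size=out_hw,
                                     mode="bilinear", align_corners=False)

        feat = torch.randn(1, 256, 16, 16)   # deep layer: thick channels, low res
        gt = torch.randn(1, 3, 128, 128)     # prepared ground-truth value
        est = DirectChannelToImage(256)(feat, (128, 128))
        loss = F.l1_loss(est, gt)            # difference to be decreased
        loss.backward()                      # gradients for the parameter update
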
  • Patent number: 11182651
    Abstract: A fast object detection method and a fast object detection apparatus using an artificial neural network. The fast object detection method includes obtaining an input image; inputting the obtained input image into an object detection neural network using a plurality of preset bounding boxes; and detecting an object included in the input image by acquiring output data of the object detection neural network.
    Type: Grant
    Filed: December 4, 2019
    Date of Patent: November 23, 2021
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Seung Jae Lee, Jong Gook Ko, Keun Dong Lee, Su Woong Lee, Won Young Yoo
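
    A small sketch of detection with preset bounding boxes: the network is
    assumed to emit one offset vector and one score per preset box, and
    decoding shifts and scales each box accordingly. The parameterization
    shown is one common convention, not necessarily the patent's.

        import numpy as np

        # Preset boxes as (cx, cy, w, h) in normalized image coordinates.
        ANCHORS = np.array([[0.25, 0.25, 0.3, 0.3],
                            [0.75, 0.75, 0.5, 0.5]])

        def decode(offsets, scores, thresh=0.5):
            boxes = ANCHORS.copy()
            boxes[:, :2] += offsets[:, :2] * ANCHORS[:, 2:]  # shift centers
            boxes[:, 2:] *= np.exp(offsets[:, 2:])           # scale sizes
            keep = scores > thresh                           # filter by score
            return boxes[keep], scores[keep]

        offsets = np.array([[0.1, -0.1, 0.0, 0.0], [0.0, 0.0, 0.2, 0.2]])
        scores = np.array([0.9, 0.3])
        print(decode(offsets, scores))
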
  • Patent number: 11106942
    Abstract: Disclosed are a learning data generation method and apparatus for learning animation characters on the basis of deep learning. The learning data generation method may include collecting various images from an external source using wired/wireless communication, acquiring character images from the collected images using a character detection module, clustering the acquired character images, selecting learning data from among the clustered images, and inputting the selected learning data to an artificial neural network for character recognition.
    Type: Grant
    Filed: November 26, 2019
    Date of Patent: August 31, 2021
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Dong Hyuck Im, Jung Hyun Kim, Hye Mi Kim, Jee Hyun Park, Yong Seok Seo, Won Young Yoo
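
    A condensed sketch of the data-generation loop, assuming scikit-learn's
    KMeans for the clustering step; detect_characters is a hypothetical
    placeholder for the character detection module, and the images are
    assumed to already be feature vectors.

        import numpy as np
        from sklearn.cluster import KMeans

        def detect_characters(images):
            # Placeholder for the character detection module.
            return images

        def build_learning_data(images, n_clusters=3, per_cluster=10):
            feats = detect_characters(images)
            labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
            selected = []
            for c in range(n_clusters):
                idx = np.where(labels == c)[0][:per_cluster]
                selected.extend((i, c) for i in idx)  # (sample, cluster) pairs
            return selected  # fed to the character recognition network

        data = build_learning_data(np.random.rand(100, 64))
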
  • Publication number: 20210120355
    Abstract: A method for receiving a mono sound source audio signal including phase information as an input and separating it into a plurality of signals may comprise performing initial convolution and down-sampling on the inputted mono sound source audio signal; generating an encoded signal by encoding the inputted signal using at least one first dense block and at least one down-transition layer; generating a decoded signal by decoding the encoded signal using at least one second dense block and at least one up-transition layer; and performing final convolution and resize on the decoded signal.
    Type: Application
    Filed: September 25, 2020
    Publication date: April 22, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Hye Mi KIM, Jung Hyun KIM, Jee Hyun PARK, Yong Seok SEO, Dong Hyuck IM, Won Young YOO
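
    A minimal PyTorch sketch of the described topology: initial convolution
    with down-sampling, a small 1-D dense block, and an up-transition that
    emits two separated channels; all layer sizes are illustrative
    assumptions.

        import torch
        import torch.nn as nn

        class DenseBlock1d(nn.Module):
            # Tiny dense block: each layer sees all previous feature maps.
            def __init__(self, ch, growth, layers=2):
                super().__init__()
                self.convs = nn.ModuleList(
                    nn.Conv1d(ch + i * growth, growth, 3, padding=1)
                    for i in range(layers))

            def forward(self, x):
                for conv in self.convs:
                    x = torch.cat([x, torch.relu(conv(x))], dim=1)
                return x

        down = nn.Conv1d(1, 16, 4, stride=2, padding=1)   # initial conv + down-sampling
        dense = DenseBlock1d(16, 8)                       # first dense block
        up = nn.ConvTranspose1d(32, 2, 4, stride=2, padding=1)  # up-transition

        audio = torch.randn(1, 1, 1024)                   # mono input signal
        separated = up(dense(down(audio)))                # (1, 2, 1024): two sources
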
  • Publication number: 20210103721
    Abstract: Disclosed are a learning data generation method and apparatus for learning animation characters on the basis of deep learning. The learning data generation method may include collecting various images from an external source using wired/wireless communication, acquiring character images from the collected images using a character detection module, clustering the acquired character images, selecting learning data from among the clustered images, and inputting the selected learning data to an artificial neural network for character recognition.
    Type: Application
    Filed: November 26, 2019
    Publication date: April 8, 2021
    Inventors: Dong Hyuck IM, Jung Hyun KIM, Hye Mi KIM, Jee Hyun PARK, Yong Seok SEO, Won Young YOO
  • Patent number: 10915574
    Abstract: An apparatus for recognizing a person includes a content separator configured to receive contents and separate the contents into video content and audio content; a video processor configured to recognize a face from an image in the video content received from the content separator and obtain information on a face recognition section by analyzing the video content; an audio processor configured to recognize a speaker from voice data in the audio content received from the content separator and obtain information on a speaker recognition section by analyzing the audio content; and a person recognized section information provider configured to provide information on a section of the contents in which a person appears based on the information on the face recognition section and the information on the speaker recognition section.
    Type: Grant
    Filed: January 26, 2018
    Date of Patent: February 9, 2021
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Dong Hyuck Im, Yong Seok Seo, Jung Hyun Kim, Jee Hyun Park, Won Young Yoo
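
    A small sketch of the section-information provider: given
    face-recognition and speaker-recognition time sections, it merges them
    into the sections in which the person appears. Taking the union of the
    intervals is one plausible combination rule; the abstract does not fix
    one.

        def merge_sections(face_secs, speaker_secs):
            # Coalesce overlapping (start, end) sections from both analyzers.
            secs = sorted(face_secs + speaker_secs)
            merged = [list(secs[0])]
            for start, end in secs[1:]:
                if start <= merged[-1][1]:
                    merged[-1][1] = max(merged[-1][1], end)
                else:
                    merged.append([start, end])
            return [tuple(s) for s in merged]

        # Seconds in which the person's face / voice was recognized:
        print(merge_sections([(10, 25), (40, 50)], [(20, 35)]))
        # -> [(10, 35), (40, 50)]
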
  • Publication number: 20200272863
    Abstract: A fast object detection method and a fast object detection apparatus using an artificial neural network. The fast object detection method includes obtaining an input image; inputting the obtained input image into an object detection neural network using a plurality of preset bounding boxes; and detecting an object included in the input image by acquiring output data of the object detection neural network.
    Type: Application
    Filed: December 4, 2019
    Publication date: August 27, 2020
    Inventors: Seung Jae LEE, Jong Gook KO, Keun Dong LEE, Su Woong LEE, Won Young YOO
  • Patent number: 10614312
    Abstract: A signature actor determination method for video identification includes setting a list of actors who appear in each of a plurality of videos, generating a plurality of subsets including the actors, and determining that an actor included in a single final set indicating a first video among the plurality of subsets is a signature actor of the first video. Accordingly, video identification is possible using only a small amount of information.
    Type: Grant
    Filed: February 9, 2017
    Date of Patent: April 7, 2020
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Yong Seok Seo, Dong Hyuck Im, Won Young Yoo, Jee Hyun Park, Jung Hyun Kim, Young Ho Suh
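
    A toy sketch of signature-actor determination: enumerate subsets of a
    video's cast and keep the smallest one contained in no other video's
    cast; the cast lists and the subset-size cap are illustrative
    assumptions.

        from itertools import combinations

        videos = {                       # hypothetical cast lists
            "v1": {"kim", "lee", "park"},
            "v2": {"kim", "lee", "choi"},
            "v3": {"park", "choi"},
        }

        def signature_set(target, videos, max_size=2):
            # Smallest cast subset that indicates only the target video.
            others = [cast for vid, cast in videos.items() if vid != target]
            for k in range(1, max_size + 1):
                for subset in combinations(sorted(videos[target]), k):
                    if not any(set(subset) <= cast for cast in others):
                        return set(subset)
            return None

        print(signature_set("v1", videos))   # -> {'kim', 'park'}
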
  • Patent number: 10565435
    Abstract: A method for determining a video-related emotion and a method of generating data for learning video-related emotions include separating an input video into a video stream and an audio stream; analyzing the audio stream to detect a music section; extracting at least one video clip matching the music section; extracting emotion information from the music section; tagging the video clip with the extracted emotion information and outputting the video clip; learning video-related emotions by using the at least one video clip tagged with the emotion information to generate a video-related emotion classification model; and determining an emotion related to an input query video by using the video-related emotion classification model to provide the emotion.
    Type: Grant
    Filed: May 30, 2018
    Date of Patent: February 18, 2020
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jee Hyun Park, Jung Hyun Kim, Yong Seok Seo, Won Young Yoo, Dong Hyuck Im
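
    A skeleton of the tagging pipeline with hypothetical stand-ins
    (detect_music_sections, music_emotion) for the analyzers the abstract
    relies on; it shows only how clips are paired with emotion labels to
    form training data for the classification model.

        def detect_music_sections(audio):
            # Stand-in for the music-section detector; (start, end) seconds.
            return [(30.0, 90.0)]

        def music_emotion(audio, section):
            # Stand-in for the emotion extraction from the music section.
            return "joy"

        def build_training_clips(video, audio):
            clips = []
            for start, end in detect_music_sections(audio):
                clip = ("clip", start, end)  # would slice the video stream
                clips.append((clip, music_emotion(audio, (start, end))))
            return clips                     # (video clip, emotion) pairs

        print(build_training_clips(video=None, audio=None))
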
  • Publication number: 20190392591
    Abstract: Disclosed herein is a method of detecting a moving object including: predicting an optical flow in an input image clip using a first deep neural network which is trained to predict an optical flow in an image clip including a plurality of frames; obtaining an optical flow image which reflects a result of the optical flow prediction; and detecting a moving object in the image clip on the basis of the optical flow image using a second deep neural network trained using the first deep neural network.
    Type: Application
    Filed: November 27, 2018
    Publication date: December 26, 2019
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Ji Won LEE, Do Won NAM, Sung Won MOON, Jung Soo LEE, Won Young YOO, Ki Song YOON
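
    A minimal sketch of the two-stage idea, with OpenCV's Farneback dense
    optical flow standing in for the first deep network and a flow-magnitude
    threshold standing in for the second, detection network.

        import cv2
        import numpy as np

        def moving_object_mask(prev_gray, cur_gray, thresh=2.0):
            # Dense optical flow between consecutive frames...
            flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            # ...then keep pixels whose motion magnitude is significant.
            mag = np.linalg.norm(flow, axis=2)
            return (mag > thresh).astype(np.uint8)

        prev_f = np.zeros((120, 160), np.uint8)
        cur_f = prev_f.copy()
        cur_f[40:60, 60:90] = 255            # a bright block appears/moves
        print(moving_object_mask(prev_f, cur_f).sum(), "moving pixels")
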
  • Patent number: 10430459
    Abstract: A server for providing a city street search service includes a street information database configured to store city street images, a feature selection unit configured to select at least one feature according to a predetermined criterion when a city street image for searching and two or more features for the image are received from a user terminal, a candidate extraction unit configured to extract a candidate list of a city street image, a feature matching unit configured to match the city street image for registration included in the extracted candidate list and the at least one selected feature, and a search result provision unit configured to provide the user terminal with a result of the matching as result information regarding the city street image for searching.
    Type: Grant
    Filed: October 24, 2016
    Date of Patent: October 1, 2019
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Seung Jae Lee, Keun Dong Lee, Hyung Kwan Son, Weon Geun Oh, Da Un Jung, Young Ho Suh, Wook Ho Son, Won Young Yoo, Gil Haeng Lee
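
    A toy sketch of the search flow: a simple energy criterion stands in for
    the feature selection unit, the full set of registered images stands in
    for the candidate extraction unit, and cosine similarity stands in for
    the feature matching unit; all names and data are illustrative.

        import numpy as np

        registered = {                 # image id -> features (n_feat, dim)
            "street_a": np.random.rand(4, 32),
            "street_b": np.random.rand(4, 32),
        }

        def select_features(feats, k=2):
            # Keep the k highest-energy features (one possible criterion).
            order = np.argsort(-np.linalg.norm(feats, axis=1))
            return feats[order[:k]]

        def search(query_feats):
            q = select_features(query_feats)
            scores = {}
            for img_id, feats in registered.items():
                sim = q @ feats.T / (np.linalg.norm(q, axis=1, keepdims=True)
                                     * np.linalg.norm(feats, axis=1) + 1e-9)
                scores[img_id] = sim.max(axis=1).mean()  # best match per feature
            return max(scores, key=scores.get)           # search result

        print(search(np.random.rand(4, 32)))
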
  • Publication number: 20190278978
    Abstract: A method for determining a video-related emotion and a method of generating data for learning video-related emotions include separating an input video into a video stream and an audio stream; analyzing the audio stream to detect a music section; extracting at least one video clip matching the music section; extracting emotion information from the music section; tagging the video clip with the extracted emotion information and outputting the video clip; learning video-related emotions by using the at least one video clip tagged with the emotion information to generate a video-related emotion classification model; and determining an emotion related to an input query video by using the video-related emotion classification model to provide the emotion.
    Type: Application
    Filed: May 30, 2018
    Publication date: September 12, 2019
    Inventors: Jee Hyun PARK, Jung Hyun KIM, Yong Seok SEO, Won Young YOO, Dong Hyuck IM
  • Publication number: 20190213279
    Abstract: An apparatus and method for analyzing and identifying a song with high performance identify a subject song in which global and local characteristics of a feature vector are reflected, and quickly identify a cover song that reflects changes in tempo and key, by using a feature vector extracting part, a feature vector condensing part, and a feature vector comparing part, and by condensing a feature vector sequence into global and local characteristics in which a melody characteristic is reflected.
    Type: Application
    Filed: February 26, 2018
    Publication date: July 11, 2019
    Inventors: Jung Hyun KIM, Jee Hyun PARK, Yong Seok SEO, Won Young YOO, Dong Hyuck IM, Jin Soo SEO
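
    A toy sketch of condensing a melody-feature sequence into global plus
    local descriptors and comparing them under all twelve key transpositions,
    so a key change in a cover does not break the match; the chroma-like
    features and segment counts are assumptions.

        import numpy as np

        def condense(seq):
            # Global descriptor (overall mean) + local ones (segment means).
            glob = seq.mean(axis=0)
            local = np.array([s.mean(axis=0) for s in np.array_split(seq, 4)])
            return glob, local

        def distance(a, b):
            # Try every key transposition (circular shift of the 12 bins).
            ga, la = condense(a)
            gb, lb = condense(b)
            best = np.inf
            for shift in range(12):
                d = (np.linalg.norm(ga - np.roll(gb, shift))
                     + np.linalg.norm(la - np.roll(lb, shift, axis=1)))
                best = min(best, d)
            return best

        song = np.random.rand(400, 12)
        cover = np.roll(song, 3, axis=1)[::2]   # key-shifted, tempo-changed
        print(distance(song, cover) < distance(song, np.random.rand(200, 12)))
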
  • Publication number: 20190179960
    Abstract: An apparatus for recognizing a person includes a content separator configured to receive contents and separate the contents into video content and audio content; a video processor configured to recognize a face from an image in the video content received from the content separator and obtain information on a face recognition section by analyzing the video content; an audio processor configured to recognize a speaker from voice data in the audio content received from the content separator and obtain information on a speaker recognition section by analyzing the audio content; and a person recognized section information provider configured to provide information on a section of the contents in which a person appears based on the information on the face recognition section and the information on the speaker recognition section.
    Type: Application
    Filed: January 26, 2018
    Publication date: June 13, 2019
    Inventors: Dong Hyuck IM, Yong Seok SEO, Jung Hyun KIM, Jee Hyun PARK, Won Young YOO
  • Publication number: 20180189571
    Abstract: A signature actor determination method for video identification includes setting a list of actors who appear in each of a plurality of videos, generating a plurality of subsets including the actors, and determining that an actor included in a single final set indicating a first video among the plurality of subsets is a signature actor of the first video. Accordingly, video identification is possible using only a small amount of information.
    Type: Application
    Filed: February 9, 2017
    Publication date: July 5, 2018
    Inventors: Yong Seok SEO, Dong Hyuck IM, Won Young YOO, Jee Hyun PARK, Jung Hyun KIM, Young Ho SUH
  • Publication number: 20170199900
    Abstract: A server for providing a city street search service includes a street information database configured to store city street images, a feature selection unit configured to select at least one feature according to a predetermined criterion when a city street image for searching and two or more features for the image are received from a user terminal, a candidate extraction unit configured to extract a candidate list of a city street image, a feature matching unit configured to match the city street image for registration included in the extracted candidate list and the at least one selected feature, and a search result provision unit configured to provide the user terminal with a result of the matching as result information regarding the city street image for searching.
    Type: Application
    Filed: October 24, 2016
    Publication date: July 13, 2017
    Inventors: Seung Jae LEE, Keun Dong LEE, Hyung Kwan SON, Weon Geun OH, Da Un JUNG, Young Ho SUH, Wook Ho SON, Won Young YOO, Gil Haeng LEE
  • Patent number: 9262521
    Abstract: Disclosed are an apparatus and a method for extracting a highlight section of music. The apparatus for extracting a highlight section of music in accordance with the embodiment of the present invention includes a frame divider that divides an audio file into a plurality of frames having a predetermined sample length; an average energy signal calculator that calculates a signal representing the average magnitude of audio energy for a plurality of samples belonging to each frame for each frame of the plurality of frames; and a highlight section selector that extracts a low frequency signal from the signal representing the average audio energy magnitude for each frame and determines the highlight section from frame sections including maximum points of the low frequency signal.
    Type: Grant
    Filed: November 26, 2012
    Date of Patent: February 16, 2016
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Sung-Min Kim, Seung-Jae Lee, Jung-Hyun Kim, Young-Ho Suh, Yong-Seok Seo, Jee-Hyun Park, Sang-Kwang Lee, Jung-Ho Lee, Young-Suk Yoon, Won-Young Yoo
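
    A minimal sketch of the described pipeline: per-frame average energy, a
    moving-average low-pass filter, and a highlight section centered on the
    maximum of the smoothed signal; the frame and section lengths are
    illustrative.

        import numpy as np

        def highlight_section(audio, sr=16000, frame=4096, sec=30):
            n = len(audio) // frame
            # Average energy magnitude per fixed-length frame.
            energy = np.abs(audio[:n * frame]).reshape(n, frame).mean(axis=1)
            # Low-frequency signal via a simple moving average.
            smooth = np.convolve(energy, np.ones(8) / 8, mode="same")
            peak = int(np.argmax(smooth)) * frame        # maximum point
            start = max(0, peak - sec * sr // 2)
            return start, start + sec * sr               # sample range

        audio = np.random.randn(16000 * 120)             # two minutes of audio
        print(highlight_section(audio))
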