Patents by Inventor Won-Young Yoo

Won-Young Yoo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220012589
    Abstract: A data learning device in a deep learning network characterized by a high image resolution and a thin channel at an input stage and an output stage and a low image resolution and a thick channel in an intermediate deep layer includes a feature information extraction unit configured to extract global feature information considering an association between all elements of data when generating an initial estimate in the deep layer; a direct channel-to-image conversion unit configured to generate expanded data having the same resolution as a final output from the generated initial estimate of the global feature information or intermediate outputs sequentially generated in subsequent layers; and a comparison and learning unit configured to calculate a difference between the expanded data generated by the direct channel-to-image conversion unit and a prepared ground truth value and update network parameters such that the difference is decreased.
    Type: Application
    Filed: July 8, 2021
    Publication date: January 13, 2022
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Jung Jae YU, Jong Gook KO, Won Young YOO, Keun Dong LEE, Su Woong LEE, Seung Jae LEE, Yong Sik LEE, Da Un JUNG
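    The "direct channel-to-image conversion" idea in the abstract above can be illustrated with a toy PyTorch sketch: a deep, low-resolution feature map is projected with a 1x1 convolution and upsampled straight to output resolution so it can be compared against the ground truth alongside the final output. The layer sizes, the global-average-pooling stand-in for "global feature information", and the L1 loss are illustrative assumptions, not the patented design.
    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DirectChannelToImage(nn.Module):
        """1x1 convolution plus bilinear upsampling: deep channels -> full-resolution estimate."""
        def __init__(self, in_ch, out_ch, out_size):
            super().__init__()
            self.proj = nn.Conv2d(in_ch, out_ch, kernel_size=1)
            self.out_size = out_size

        def forward(self, feat):
            return F.interpolate(self.proj(feat), size=self.out_size,
                                 mode="bilinear", align_corners=False)

    class ToyNetwork(nn.Module):
        def __init__(self, out_size=(64, 64)):
            super().__init__()
            # High resolution / thin channels at the ends, low resolution / thick channels in the middle.
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 64, 3, stride=2, padding=1), nn.ReLU())
            self.global_ctx = nn.Conv2d(64, 64, kernel_size=1)
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(64, 16, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1))
            self.aux_head = DirectChannelToImage(64, 3, out_size)

        def forward(self, x):
            deep = self.encoder(x)
            ctx = deep.mean(dim=(2, 3), keepdim=True)   # crude stand-in for global feature information
            deep = deep + self.global_ctx(ctx)          # broadcast the global context over the deep layer
            return self.decoder(deep), self.aux_head(deep)

    net = ToyNetwork()
    image = torch.rand(1, 3, 64, 64)
    ground_truth = torch.rand(1, 3, 64, 64)
    final_out, early_estimate = net(image)
    # "Comparison and learning": both the final output and the directly expanded deep
    # estimate are compared against the ground truth, and the summed difference is minimized.
    loss = F.l1_loss(final_out, ground_truth) + F.l1_loss(early_estimate, ground_truth)
    loss.backward()
    ```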
  • Patent number: 11182651
    Abstract: A fast object detection method and a fast object detection apparatus using an artificial neural network. The fast object detection method includes obtaining an input image; inputting the obtained input image into an object detection neural network using a plurality of preset bounding boxes; and detecting an object included in the input image by acquiring output data of the object detection neural network.
    Type: Grant
    Filed: December 4, 2019
    Date of Patent: November 23, 2021
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Seung Jae Lee, Jong Gook Ko, Keun Dong Lee, Su Woong Lee, Won Young Yoo
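    A minimal sketch of detection with preset bounding boxes, in the spirit of the abstract above: a fixed grid of anchor boxes is generated once, the network's predicted offsets and scores are applied to them, and confident boxes are kept. The grid size, box sizes, offset parameterization, and threshold are illustrative assumptions rather than the patented method.
    ```python
    import numpy as np

    def make_anchors(grid=8, sizes=(0.1, 0.3)):
        """Preset bounding boxes: one box of each size centred on every grid cell."""
        anchors = []
        for gy in range(grid):
            for gx in range(grid):
                cx, cy = (gx + 0.5) / grid, (gy + 0.5) / grid
                for s in sizes:
                    anchors.append((cx, cy, s, s))
        return np.array(anchors)                          # (grid*grid*len(sizes), 4) as cx, cy, w, h

    def decode(anchors, offsets, scores, score_thresh=0.5):
        """Apply predicted offsets to the preset boxes and keep confident detections."""
        boxes = anchors.copy()
        boxes[:, :2] += offsets[:, :2] * anchors[:, 2:]   # shift centres
        boxes[:, 2:] *= np.exp(offsets[:, 2:])            # scale widths and heights
        keep = scores > score_thresh
        return boxes[keep], scores[keep]

    anchors = make_anchors()
    offsets = np.random.randn(len(anchors), 4) * 0.05     # stand-in for network output
    scores = np.random.rand(len(anchors))                 # stand-in for objectness scores
    boxes, confs = decode(anchors, offsets, scores)
    print(len(boxes), "detections above threshold")
    ```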
  • Patent number: 11106942
    Abstract: Disclosed are a learning data generation method and apparatus needed to learn animation characters on the basis of deep learning. The learning data generation method needed to learn animation characters on the basis of deep learning may include collecting various images from an external source using wired/wireless communication, acquiring character images from the collected images using a character detection module, clustering the acquired character images, selecting learning data from among the clustered images, and inputting the selected learning data to an artificial neural network for character recognition.
    Type: Grant
    Filed: November 26, 2019
    Date of Patent: August 31, 2021
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Dong Hyuck Im, Jung Hyun Kim, Hye Mi Kim, Jee Hyun Park, Yong Seok Seo, Won Young Yoo
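    The collect / detect / cluster / select pipeline in the abstract above might look roughly like the following sketch, with the character detector stubbed out and scikit-learn's KMeans standing in for the clustering step; feature dimensions, cluster count, and per-cluster quota are illustrative assumptions.
    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def detect_characters(images):
        """Stand-in for the character detection module: one crop embedding per collected image."""
        return [np.random.rand(128) for _ in images]       # pretend 128-D character-crop features

    def build_training_set(images, n_clusters=5, per_cluster=10):
        feats = np.stack(detect_characters(images))
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
        selected = []
        for c in range(n_clusters):
            idx = np.where(labels == c)[0][:per_cluster]   # pick a few samples from every cluster
            selected.extend(idx.tolist())
        return selected                                    # indices of crops used as learning data

    collected = [f"frame_{i}.png" for i in range(200)]     # hypothetical crawled images
    print(len(build_training_set(collected)), "crops selected as learning data")
    ```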
  • Publication number: 20210120355
    Abstract: A method for receiving a mono sound source audio signal including phase information as an input, and separating into a plurality of signals may comprise performing initial convolution and down-sampling on the inputted mono sound source audio signal; generating an encoded signal by encoding the inputted signal using at least one first dense block and at least one down-transition layer; generating a decoded signal by decoding the encoded signal using at least one second dense block and at least one up-transition layer; and performing final convolution and resize on the decoded signal.
    Type: Application
    Filed: September 25, 2020
    Publication date: April 22, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Hye Mi KIM, Jung Hyun KIM, Jee Hyun PARK, Yong Seok SEO, Dong Hyuck IM, Won Young YOO
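    A toy PyTorch sketch of the encoder-decoder shape described above: an initial strided convolution, a dense block plus down-transition, a dense block plus up-transition, and a final convolution followed by a resize back to the input length. Channel counts, growth rate, and layer counts are illustrative assumptions, not the published architecture.
    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DenseBlock1d(nn.Module):
        """Each layer sees the concatenation of all earlier feature maps."""
        def __init__(self, in_ch, growth=8, layers=3):
            super().__init__()
            self.convs = nn.ModuleList(
                nn.Conv1d(in_ch + i * growth, growth, 3, padding=1) for i in range(layers))
            self.out_ch = in_ch + layers * growth

        def forward(self, x):
            for conv in self.convs:
                x = torch.cat([x, F.relu(conv(x))], dim=1)
            return x

    class ToySeparator(nn.Module):
        def __init__(self, sources=2):
            super().__init__()
            self.stem = nn.Conv1d(1, 16, 7, stride=2, padding=3)    # initial convolution + down-sampling
            self.enc = DenseBlock1d(16)                             # first dense block
            self.down = nn.Conv1d(self.enc.out_ch, 32, 3, stride=2, padding=1)          # down-transition
            self.dec = DenseBlock1d(32)                             # second dense block
            self.up = nn.ConvTranspose1d(self.dec.out_ch, 16, 4, stride=2, padding=1)   # up-transition
            self.head = nn.Conv1d(16, sources, 1)                   # final convolution

        def forward(self, mono):
            n = mono.shape[-1]
            x = self.up(self.dec(self.down(self.enc(self.stem(mono)))))
            # "Resize" the decoded signal back to the input length, one channel per source.
            return F.interpolate(self.head(x), size=n, mode="linear", align_corners=False)

    mix = torch.randn(1, 1, 16000)            # one second of a mono mixture at 16 kHz
    print(ToySeparator()(mix).shape)          # -> torch.Size([1, 2, 16000])
    ```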
  • Publication number: 20210103721
    Abstract: Disclosed are a learning data generation method and apparatus needed to learn animation characters on the basis of deep learning. The learning data generation method needed to learn animation characters on the basis of deep learning may include collecting various images from an external source using wired/wireless communication, acquiring character images from the collected images using a character detection module, clustering the acquired character images, selecting learning data from among the clustered images, and inputting the selected learning data to an artificial neural network for character recognition.
    Type: Application
    Filed: November 26, 2019
    Publication date: April 8, 2021
    Inventors: Dong Hyuck IM, Jung Hyun KIM, Hye Mi KIM, Jee Hyun PARK, Yong Seok SEO, Won Young YOO
  • Patent number: 10915574
    Abstract: An apparatus for recognizing a person includes a content separator configured to receive contents and separate the contents into video content and audio content; a video processor configured to recognize a face from an image in the video content received from the content separator and obtain information on a face recognition section by analyzing the video content; an audio processor configured to recognize a speaker from voice data in the audio content received from the content separator and obtain information on a speaker recognition section by analyzing the audio content; and a person recognized section information provider configured to provide information on a section of the contents in which a person appears based on the information on the face recognition section and the information on the speaker recognition section.
    Type: Grant
    Filed: January 26, 2018
    Date of Patent: February 9, 2021
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Dong Hyuck Im, Yong Seok Seo, Jung Hyun Kim, Jee Hyun Park, Won Young Yoo
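    The last step of the abstract above, combining face-recognition and speaker-recognition sections into "person appears here" sections, can be sketched as a simple interval merge; the per-person dictionaries and the union rule are illustrative assumptions.
    ```python
    from collections import defaultdict

    def merge_intervals(intervals):
        """Union overlapping (start, end) sections, in seconds."""
        merged = []
        for start, end in sorted(intervals):
            if merged and start <= merged[-1][1]:
                merged[-1] = (merged[-1][0], max(merged[-1][1], end))
            else:
                merged.append((start, end))
        return merged

    def person_sections(face_sections, speaker_sections):
        """Combine both analysers: a person 'appears' wherever either modality recognized them."""
        combined = defaultdict(list)
        for name, sections in list(face_sections.items()) + list(speaker_sections.items()):
            combined[name].extend(sections)
        return {name: merge_intervals(sections) for name, sections in combined.items()}

    faces = {"Alice": [(10, 25), (40, 55)]}           # from the video processor
    voices = {"Alice": [(20, 30)], "Bob": [(5, 12)]}  # from the audio processor
    print(person_sections(faces, voices))
    # {'Alice': [(10, 30), (40, 55)], 'Bob': [(5, 12)]}
    ```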
  • Publication number: 20200272863
    Abstract: A fast object detection method and a fast object detection apparatus using an artificial neural network. The fast object detection method includes obtaining an input image; inputting the obtained input image into an object detection neural network using a plurality of preset bounding boxes; and detecting an object included in the input image by acquiring output data of the object detection neural network.
    Type: Application
    Filed: December 4, 2019
    Publication date: August 27, 2020
    Inventors: Seung Jae LEE, Jong Gook KO, Keun Dong LEE, Su Woong LEE, Won Young YOO
  • Patent number: 10614312
    Abstract: A signature actor determination method for video identification includes setting a list of actors who appear in each of a plurality of videos, generating a plurality of subsets including the actors, and determining that an actor included in a single final set indicating a first video among the plurality of subsets is a signature actor of the first video. Accordingly, video identification is possible by using just a little information.
    Type: Grant
    Filed: February 9, 2017
    Date of Patent: April 7, 2020
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Yong Seok Seo, Dong Hyuck Im, Won Young Yoo, Jee Hyun Park, Jung Hyun Kim, Young Ho Suh
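    A small sketch of the subset idea in the abstract above: enumerate actor subsets for a video and keep the smallest ones that occur in no other video's cast, so a single such actor can serve as the video's signature. The cast lists and the size cap are illustrative assumptions.
    ```python
    from itertools import combinations

    casts = {                                    # hypothetical cast lists per video
        "video_A": {"Kim", "Lee", "Park"},
        "video_B": {"Kim", "Choi"},
        "video_C": {"Lee", "Choi", "Jung"},
    }

    def signature_sets(target, casts, max_size=2):
        """Smallest actor subsets that appear in the target video and in no other video."""
        others = [cast for name, cast in casts.items() if name != target]
        for size in range(1, max_size + 1):
            hits = [set(c) for c in combinations(casts[target], size)
                    if not any(set(c) <= other for other in others)]
            if hits:
                return hits                      # e.g. a single actor unique to this video
        return []

    print(signature_sets("video_A", casts))      # [{'Park'}] -> Park is the signature actor
    ```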
  • Patent number: 10565435
    Abstract: A method for determining a video-related emotion and a method of generating data for learning video-related emotions include separating an input video into a video stream and an audio stream; analyzing the audio stream to detect a music section; extracting at least one video clip matching the music section; extracting emotion information from the music section; tagging the video clip with the extracted emotion information and outputting the video clip; learning video-related emotions by using the at least one video clip tagged with the emotion information to generate a video-related emotion classification model; and determining an emotion related to an input query video by using the video-related emotion classification model to provide the emotion.
    Type: Grant
    Filed: May 30, 2018
    Date of Patent: February 18, 2020
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jee Hyun Park, Jung Hyun Kim, Yong Seok Seo, Won Young Yoo, Dong Hyuck Im
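    The data-generation flow above, in which music sections supply weak emotion labels for the overlapping video clips, can be sketched as follows; the music-section detector and the music-emotion classifier are stubbed out with hypothetical stand-ins.
    ```python
    def detect_music_sections(audio):
        """Stand-in music-section detector: (start, end) in seconds."""
        return [(30.0, 75.0), (140.0, 180.0)]

    def music_emotion(audio, section):
        """Stand-in music-emotion classifier."""
        return "uplifting" if section[0] < 100 else "tense"

    def make_training_clips(video, audio):
        clips = []
        for section in detect_music_sections(audio):
            clips.append({"video": video, "start": section[0], "end": section[1],
                          "emotion": music_emotion(audio, section)})   # tag clip with the music's emotion
        return clips   # labelled clips used to train the video-related emotion classifier

    print(make_training_clips("movie.mp4", "movie.wav"))
    ```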
  • Publication number: 20190392591
    Abstract: Disclosed herein is a method of detecting a moving object including: predicting an optical flow in an input image clip using a first deep neural network which is trained to predict an optical flow in an image clip including a plurality of frames; obtaining an optical flow image which reflects a result of the optical flow prediction; and detecting a moving object in the image clip on the basis of the optical flow image using a second deep neural network trained using the first deep neural network.
    Type: Application
    Filed: November 27, 2018
    Publication date: December 26, 2019
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Ji Won LEE, Do Won NAM, Sung Won MOON, Jung Soo LEE, Won Young YOO, Ki Song YOON
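    A two-stage toy sketch of the pipeline above: one network maps a short clip to a two-channel optical-flow image, and a second network consumes that flow image to produce a per-pixel moving-object probability. Both networks here are single-layer stand-ins; the patent's actual architectures and training procedure are not reproduced.
    ```python
    import torch
    import torch.nn as nn

    class FlowNetToy(nn.Module):                       # stage 1: clip -> 2-channel optical flow
        def __init__(self, frames=4):
            super().__init__()
            self.net = nn.Conv2d(3 * frames, 2, kernel_size=3, padding=1)

        def forward(self, clip):                       # clip: (B, frames, 3, H, W)
            b, t, c, h, w = clip.shape
            return self.net(clip.reshape(b, t * c, h, w))

    class MotionDetectorToy(nn.Module):                # stage 2: flow image -> motion mask
        def __init__(self):
            super().__init__()
            self.net = nn.Conv2d(2, 1, kernel_size=3, padding=1)

        def forward(self, flow):
            return torch.sigmoid(self.net(flow))

    clip = torch.rand(1, 4, 3, 64, 64)                 # four RGB frames
    flow = FlowNetToy()(clip)                          # optical-flow prediction
    mask = MotionDetectorToy()(flow)                   # per-pixel moving-object probability
    print(flow.shape, mask.shape)                      # (1, 2, 64, 64) (1, 1, 64, 64)
    ```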
  • Patent number: 10430459
    Abstract: A server for providing a city street search service includes a street information database configured to store city street images, a feature selection unit configured to select at least one feature according to a predetermined criterion when a city street image for searching and two or more features for the image are received from a user terminal, a candidate extraction unit configured to extract a candidate list of a city street image, a feature matching unit configured to match the city street image for registration included in the extracted candidate list and the at least one selected feature, and a search result provision unit configured to provide the user terminal with a result of the matching as result information regarding the city street image for searching.
    Type: Grant
    Filed: October 24, 2016
    Date of Patent: October 1, 2019
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Seung Jae Lee, Keun Dong Lee, Hyung Kwan Son, Weon Geun Oh, Da Un Jung, Young Ho Suh, Wook Ho Son, Won Young Yoo, Gil Haeng Lee
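    The query path described above can be sketched end to end: select the most useful of the submitted features, shortlist candidate street images from the database, and return the best match. Cosine similarity, the feature-selection criterion (largest norm), and the database layout are illustrative assumptions.
    ```python
    import numpy as np

    db = {f"street_{i}": np.random.rand(128) for i in range(1000)}   # registered street-image features

    def select_features(features, top_k=1):
        """Keep the most distinctive submitted features by a simple criterion (largest norm here)."""
        return sorted(features, key=np.linalg.norm, reverse=True)[:top_k]

    def candidate_list(query_vec, db, n=10):
        """Shortlist database entries by cosine similarity to the selected feature."""
        names = list(db)
        sims = np.array([query_vec @ db[k] / (np.linalg.norm(query_vec) * np.linalg.norm(db[k]))
                         for k in names])
        return [names[i] for i in np.argsort(-sims)[:n]]

    def search(query_features):
        chosen = select_features(query_features)[0]
        candidates = candidate_list(chosen, db)
        return candidates[0]                          # best match returned to the user terminal

    print(search([np.random.rand(128), np.random.rand(128)]))
    ```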
  • Publication number: 20190278978
    Abstract: A method for determining a video-related emotion and a method of generating data for learning video-related emotions include separating an input video into a video stream and an audio stream; analyzing the audio stream to detect a music section; extracting at least one video clip matching the music section; extracting emotion information from the music section; tagging the video clip with the extracted emotion information and outputting the video clip; learning video-related emotions by using the at least one video clip tagged with the emotion information to generate a video-related emotion classification model; and determining an emotion related to an input query video by using the video-related emotion classification model to provide the emotion.
    Type: Application
    Filed: May 30, 2018
    Publication date: September 12, 2019
    Inventors: Jee Hyun PARK, Jung Hyun KIM, Yong Seok SEO, Won Young YOO, Dong Hyuck IM
  • Publication number: 20190213279
    Abstract: An apparatus and method of analyzing and identifying a song with high performance identify a subject song in which global and local characteristics of a feature vector are reflected, and quickly identify a cover song in which changes in tempo and key are reflected by using a feature vector extracting part, a feature vector condensing part, and a feature vector comparing part, and by condensing a feature vector sequence into global and local characteristics in which a melody characteristic is reflected.
    Type: Application
    Filed: February 26, 2018
    Publication date: July 11, 2019
    Inventors: Jung Hyun KIM, Jee Hyun PARK, Yong Seok SEO, Won Young YOO, Dong Hyuck IM, Jin Soo SEO
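    The condensation idea above, summarizing a feature-vector sequence into a global part plus a few local parts so songs can be compared at fixed length, can be sketched in a few lines; the chroma-like features, segment count, and Euclidean distance are illustrative assumptions.
    ```python
    import numpy as np

    def condense(sequence, segments=4):
        """Global mean plus per-segment means of a (frames, dims) melody-feature sequence."""
        global_part = sequence.mean(axis=0)
        local_parts = [seg.mean(axis=0) for seg in np.array_split(sequence, segments)]
        return np.concatenate([global_part] + local_parts)

    def song_distance(a, b):
        """Compare two songs through their condensed, fixed-length descriptors."""
        return np.linalg.norm(condense(a) - condense(b))

    original = np.random.rand(3000, 12)                # e.g. a chroma sequence for the subject song
    cover = original + 0.05 * np.random.randn(3000, 12)
    other = np.random.rand(2500, 12)
    print(song_distance(original, cover) < song_distance(original, other))   # very likely True
    ```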
  • Publication number: 20190179960
    Abstract: An apparatus for recognizing a person includes a content separator configured to receive contents and separate the contents into video content and audio content; a video processor configured to recognize a face from an image in the video content received from the content separator and obtain information on a face recognition section by analyzing the video content; an audio processor configured to recognize a speaker from voice data in the audio content received from the content separator and obtain information on a speaker recognition section by analyzing the audio content; and a person recognized section information provider configured to provide information on a section of the contents in which a person appears based on the information on the face recognition section and the information on the speaker recognition section.
    Type: Application
    Filed: January 26, 2018
    Publication date: June 13, 2019
    Inventors: Dong Hyuck IM, Yong Seok SEO, Jung Hyun KIM, Jee Hyun PARK, Won Young YOO
  • Publication number: 20180189571
    Abstract: A signature actor determination method for video identification includes setting a list of actors who appear in each of a plurality of videos, generating a plurality of subsets including the actors, and determining that an actor included in a single final set indicating a first video among the plurality of subsets is a signature actor of the first video. Accordingly, video identification is possible by using just a little information.
    Type: Application
    Filed: February 9, 2017
    Publication date: July 5, 2018
    Inventors: Yong Seok SEO, Dong Hyuck IM, Won Young YOO, Jee Hyun PARK, Jung Hyun KIM, Young Ho SUH
  • Publication number: 20170199900
    Abstract: A server for providing a city street search service includes a street information database configured to store city street images, a feature selection unit configured to select at least one feature according to a predetermined criterion when a city street image for searching and two or more features for the image are received from a user terminal, a candidate extraction unit configured to extract a candidate list of a city street image, a feature matching unit configured to match the city street image for registration included in the extracted candidate list and the at least one selected feature, and a search result provision unit configured to provide the user terminal with a result of the matching as result information regarding the city street image for searching.
    Type: Application
    Filed: October 24, 2016
    Publication date: July 13, 2017
    Inventors: Seung Jae LEE, Keun Dong LEE, Hyung Kwan SON, Weon Geun OH, Da Un JUNG, Young Ho SUH, Wook Ho SON, Won Young YOO, Gil Haeng LEE
  • Patent number: 9262521
    Abstract: Disclosed are an apparatus and a method for extracting a highlight section of music. The apparatus for extracting a highlight section of music in accordance with the embodiment of the present invention includes a frame divider that divides an audio file into a plurality of frames having a predetermined sample length; an average energy signal calculator that calculates a signal representing the average magnitude of audio energy for a plurality of samples belonging to each frame for each frame of the plurality of frames; and a highlight section selector that extracts a low frequency signal from the signal representing the average audio energy magnitude for each frame and determines the highlight section from frame sections including maximum points of the low frequency signal.
    Type: Grant
    Filed: November 26, 2012
    Date of Patent: February 16, 2016
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Sung-Min Kim, Seung-Jae Lee, Jung-Hyun Kim, Young-Ho Suh, Yong-Seok Seo, Jee-Hyun Park, Sang-Kwang Lee, Jung-Ho Lee, Young-Suk Yoon, Won-Young Yoo
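    A compact sketch of the highlight logic above: compute average energy per fixed-length frame, take a low-frequency (smoothed) version of that energy signal, and pick the section around its maximum. Frame length, smoothing width, and the synthetic audio are illustrative assumptions.
    ```python
    import numpy as np

    def frame_energy(samples, frame_len=22050):
        """Average magnitude of audio energy per fixed-length frame."""
        n_frames = len(samples) // frame_len
        frames = samples[:n_frames * frame_len].reshape(n_frames, frame_len)
        return np.mean(frames ** 2, axis=1)

    def lowpass(signal, width=9):
        """Crude low-frequency component of the energy signal via a moving average."""
        kernel = np.ones(width) / width
        return np.convolve(signal, kernel, mode="same")

    def highlight_frame(samples):
        smooth = lowpass(frame_energy(samples))
        return int(np.argmax(smooth))             # frame index at the strongest low-frequency peak

    audio = np.random.randn(22050 * 120)          # two minutes of fake audio at 22.05 kHz
    audio[22050 * 60: 22050 * 70] *= 4            # make a loud "chorus" in the middle
    print(highlight_frame(audio))                 # a one-second frame index near 60-70
    ```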
  • Patent number: 9183840
    Abstract: Disclosed herein is an apparatus and method for measuring quality of audio. The apparatus includes a distorted signal generation unit, an extraction unit, a distortion level measurement unit, a distortion function generation unit, and a search and measurement unit. The distorted signal generation unit generates a plurality of distorted signals with respect to audio in compliance. The extraction unit extracts a fingerprint and AV information corresponding to the audio and fingerprints and AV information corresponding to the plurality of distorted signals. The distortion level measurement unit measures fingerprint distance differences, and arousal and valence (AV) distance differences. The distortion function generation unit generates a fingerprint distortion function and an AV distortion function.
    Type: Grant
    Filed: June 14, 2013
    Date of Patent: November 10, 2015
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Seung-Jae Lee, Jung-Hyun Kim, Won-Young Yoo, Yong-Seok Seo, Sang-Kwang Lee, Jee-Hyun Park, Young-Suk Yoon, Young-Ho Suh
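    The abstract above relates fingerprint distances to distortion levels; a toy version of that relationship is sketched below with a sign-of-spectrum fingerprint, a Hamming distance, and a fitted linear "distortion function". The fingerprint design and the noise-based distortions are illustrative assumptions, and the arousal/valence side of the patent is not modelled.
    ```python
    import numpy as np

    def fingerprint(samples, bands=32):
        """Toy fingerprint: sign of band-energy differences across the low end of the spectrum."""
        spectrum = np.abs(np.fft.rfft(samples))[: bands + 1]
        return (np.diff(spectrum) > 0).astype(int)

    def fp_distance(a, b):
        """Hamming distance between two fingerprints."""
        return np.mean(a != b)

    reference = np.random.randn(44100)                     # one second of reference audio
    levels = np.linspace(0.0, 1.0, 6)                      # increasing amounts of added noise
    distances = [fp_distance(fingerprint(reference),
                             fingerprint(reference + lvl * np.random.randn(44100)))
                 for lvl in levels]

    # "Distortion function": fit distance as a function of distortion level, so an unseen
    # signal's distortion can later be estimated from its fingerprint distance alone.
    slope, intercept = np.polyfit(levels, distances, deg=1)
    print("distance ~= %.3f * level + %.3f" % (slope, intercept))
    ```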
  • Publication number: 20150278605
    Abstract: An apparatus and method for managing a representative video image, which selects representative images based on human visual aesthetic criteria and creates an album by arranging the selected representative images in an album template with various layouts, based on the region of interest (ROI).
    Type: Application
    Filed: January 15, 2015
    Publication date: October 1, 2015
    Inventors: Yong Seok SEO, Jung Hyun KIM, Jee Hyun PARK, Young Suk YOON, Won Young YOO, Young Ho SUH, Wook Ho SON
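    A rough sketch of the selection-and-layout idea above: score candidate frames with a crude aesthetic proxy (contrast), rank them, and assign the best images to the largest slots of an album template. The scoring proxy, template, and slot names are illustrative assumptions.
    ```python
    import numpy as np

    def aesthetic_score(image):
        """Stand-in aesthetic criterion: image contrast."""
        return float(image.std())

    def build_album(frames, template=("large", "medium", "small")):
        """Rank frames by score and fill the album template from its largest slot down."""
        ranked = sorted(range(len(frames)), key=lambda i: aesthetic_score(frames[i]), reverse=True)
        return {slot: f"frame_{idx}" for slot, idx in zip(template, ranked)}

    frames = [np.random.rand(64, 64) * (i + 1) for i in range(6)]   # fake grayscale frames
    print(build_album(frames))   # e.g. {'large': 'frame_5', 'medium': 'frame_4', 'small': 'frame_3'}
    ```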