Patents by Inventor Jong Youl PARK

Jong Youl PARK has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12062154
    Abstract: An image correcting method of the present invention includes: a step of performing a preprocessing process on an original image to generate a mask image including only an erased area of the original image; a step of predicting, by using generative adversarial networks, an image which is to be synthesized with the erased area in the mask image; and a step of synthesizing the predicted image with the erased area of the original image to generate a new image.
    Type: Grant
    Filed: March 5, 2020
    Date of Patent: August 13, 2024
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Young Joo Jo, Jong Youl Park, Yu Seok Bae
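    As an illustration of the three steps named in the abstract above (mask generation, prediction with generative adversarial networks, and synthesis), the following is a minimal Python sketch. The predict_fill function is a hypothetical stand-in for the trained GAN generator, which the listing does not describe; only the data flow is meant to match.

        import numpy as np

        def build_mask(original, erased_value=0):
            # Mask image marking only the erased area (assumed here to be zeroed pixels).
            return np.all(original == erased_value, axis=-1).astype(np.uint8)

        def predict_fill(original, mask):
            # Hypothetical stand-in for the GAN generator: fills with the mean color of the kept pixels.
            fill = original[mask == 0].mean(axis=0)
            return np.broadcast_to(fill, original.shape).astype(original.dtype)

        def synthesize(original, predicted, mask):
            # Synthesize the predicted content into the erased area of the original image only.
            return np.where(mask[..., None].astype(bool), predicted, original)

        image = np.random.randint(1, 255, (64, 64, 3), dtype=np.uint8)
        image[20:40, 20:40] = 0                      # simulate an erased area
        mask = build_mask(image)
        restored = synthesize(image, predict_fill(image, mask), mask)

    In practice the generator would be a trained network conditioned on the masked image; the mean-color fill above only keeps the example self-contained.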
  • Patent number: 12046464
    Abstract: A substrate cleaning composition, a method of cleaning a substrate using the same, and a method of fabricating a semiconductor device using the same, the substrate cleaning composition including a styrene copolymer including a first repeating unit represented by Formula 1-1a and a second repeating unit represented by Formula 1-1b; an additive represented by Formula 2-1; and an alcoholic solvent having a solubility of 500 g/L or less in deionized water,
    Type: Grant
    Filed: April 13, 2022
    Date of Patent: July 23, 2024
    Assignees: SAMSUNG ELECTRONICS CO., LTD., DONGJIN SEMICHEM CO., LTD.
    Inventors: Ga Young Song, Mi Hyun Park, Jong Kyoung Park, Jung Youl Lee, Hyun Jin Kim, Hyo San Lee, Han Sol Lim, Hoon Han
  • Patent number: 12036175
    Abstract: Provided is a vibratory stimulation device including a first substrate, a connection band connected to both sides of the first substrate, and a vibration element array including a plurality of vibration elements provided on the first substrate, wherein each of the vibration elements includes a stand provided on the first substrate, a vibration film provided on the stand and in contact with the stand at an edge, a vibrator provided on an upper or lower surface of the vibration film, and an electrode wire connected to the vibrator, wherein the vibration film includes a material that is more flexible and stretchable than the stand.
    Type: Grant
    Filed: May 24, 2021
    Date of Patent: July 16, 2024
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Kang-Ho Park, Jong Tae Lim, Seung Youl Kang, Bock Soon Na, Chan Woo Park, Seongdeok Ahn, Wooseup Youm, Ji-Young Oh
  • Patent number: 11935296
    Abstract: Provided is an apparatus for online action detection, the apparatus including a feature extraction unit configured to extract a chunk-level feature of a video chunk sequence of a streaming video, a filtering unit configured to perform filtering on the chunk-level feature, and an action classification unit configured to classify an action class using the filtered chunk-level feature.
    Type: Grant
    Filed: August 25, 2021
    Date of Patent: March 19, 2024
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jin Young Moon, Hyung Il Kim, Jong Youl Park, Kang Min Bae, Ki Min Yun
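    The apparatus above decomposes online action detection into three units operating on chunk-level features of a streaming video. A minimal Python sketch of that pipeline follows; extract_chunk_feature, filter_feature, and classify_action are illustrative placeholders (mean pooling, temporal smoothing, and a fixed random linear classifier), not the networks used in the patent.

        import numpy as np

        def extract_chunk_feature(chunk):
            # Feature extraction unit: reduce a chunk of frames to one chunk-level feature (placeholder: mean pooling).
            return chunk.reshape(chunk.shape[0], -1).mean(axis=0)

        def filter_feature(feature, history):
            # Filtering unit: placeholder temporal smoothing over the most recent chunk features.
            history.append(feature)
            return np.mean(history[-4:], axis=0)

        def classify_action(feature, n_classes=5):
            # Action classification unit: placeholder linear classifier over the filtered feature.
            rng = np.random.default_rng(0)
            weights = rng.standard_normal((n_classes, feature.shape[0]))
            return int(np.argmax(weights @ feature))

        history = []
        for _ in range(10):                       # simulate a streaming video, chunk by chunk
            chunk = np.random.rand(8, 32, 32, 3)  # 8 frames per chunk
            feature = extract_chunk_feature(chunk)
            action = classify_action(filter_feature(feature, history))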
  • Patent number: 11527067
    Abstract: An electronic device according to an embodiment disclosed herein may include a memory including at least one instruction and a processor. By executing the at least one instruction, the processor may check feature information corresponding to a video and including at least one of an appearance-related feature value and a motion-related feature value from the video, calculate at least one of a starting score related to a starting point of an action instance, an ending score related to an ending point of an action instance, and a relatedness score between action instances on the basis of the feature information corresponding to the video, the action instances being included in the video, and generate an action proposal included in the video on the basis of the at least one score.
    Type: Grant
    Filed: November 3, 2020
    Date of Patent: December 13, 2022
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jin Young Moon, Yong Jin Kwon, Hyung Il Kim, Jong Youl Park, Kang Min Bae, Ki Min Yun
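    One way to picture the scores described above is a small sketch that pairs high starting scores with later high ending scores to form candidate action proposals. The threshold and the way the two scores are combined are illustrative assumptions, and the relatedness score between action instances is omitted for brevity.

        import numpy as np

        def generate_proposals(start_scores, end_scores, threshold=0.7):
            # Pair high-scoring starting points with later high-scoring ending points.
            starts = np.flatnonzero(start_scores >= threshold)
            ends = np.flatnonzero(end_scores >= threshold)
            proposals = []
            for s in starts:
                later = ends[ends > s]
                if later.size:
                    e = later[0]
                    # Proposal confidence: placeholder combination of the two scores.
                    proposals.append((int(s), int(e), float(start_scores[s] * end_scores[e])))
            return proposals

        t = np.arange(100)
        start_scores = np.exp(-((t - 20) ** 2) / 50.0)   # starting score peaks near frame 20
        end_scores = np.exp(-((t - 60) ** 2) / 50.0)     # ending score peaks near frame 60
        print(generate_proposals(start_scores, end_scores))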
  • Patent number: 11517968
    Abstract: A deburring tool includes: a body; and a cutting unit provided on an end portion of the body and including a blade part, where a first channel is provided inside the body, and when a fluid supplied from outside of the deburring tool through the first channel is injected into the cutting unit and presses the cutting unit, the cutting unit moves and a degree to which the blade part protrudes outwardly increases.
    Type: Grant
    Filed: November 16, 2020
    Date of Patent: December 6, 2022
    Assignees: Hyundai Motor Company, Kia Motors Corporation
    Inventors: Sung Min Bae, Jong Youl Park, Jin Youl Kim, Seung Ho Lee, Min Hee Cho
  • Patent number: 11380133
    Abstract: A domain adaptation-based object recognition apparatus includes a memory configured to store a domain adaptation-based object recognition program and a processor configured to execute the program. The processor learns a generative model for generating a feature or an image similar to a gallery image on the basis of domain adaptation in association with an input probe image and learns an object recognition classification model by using a learning database corresponding to the gallery image and the input probe image, thereby performing object recognition using the input probe image.
    Type: Grant
    Filed: March 30, 2020
    Date of Patent: July 5, 2022
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Hyung Il Kim, Yong Jin Kwon, Jin Young Moon, Jong Youl Park, Sung Chan Oh, Ki Min Yun, Jeun Woo Lee
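    The recognition scheme above pairs a generative model that pulls probe images toward the gallery domain with a classification model trained over both. The sketch below substitutes a simple feature-space alignment (matching probe feature statistics to gallery statistics) followed by nearest-gallery matching; it illustrates the domain-adaptation idea only, not the models described in the patent.

        import numpy as np

        def align_to_gallery(probe_feats, gallery_feats):
            # Placeholder "generative" step: match probe feature statistics to the gallery domain.
            z = (probe_feats - probe_feats.mean(0)) / (probe_feats.std(0) + 1e-8)
            return z * gallery_feats.std(0) + gallery_feats.mean(0)

        def recognize(probe_feats, gallery_feats, gallery_labels):
            # Placeholder classifier: nearest gallery feature after alignment.
            adapted = align_to_gallery(probe_feats, gallery_feats)
            dists = np.linalg.norm(adapted[:, None, :] - gallery_feats[None, :, :], axis=-1)
            return gallery_labels[np.argmin(dists, axis=1)]

        rng = np.random.default_rng(0)
        gallery = rng.normal(0.0, 1.0, (50, 16))
        labels = rng.integers(0, 5, 50)
        probes = rng.normal(3.0, 2.0, (10, 16))   # probe images come from a shifted domain
        print(recognize(probes, gallery, labels))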
  • Publication number: 20220180490
    Abstract: An image correcting method of the present invention includes: a step of performing a preprocessing process on an original image to generate a mask image including only an erased area of the original image; a step of predicting, by using generative adversarial networks, an image which is to be synthesized with the erased area in the mask image; and a step of synthesizing the predicted image with the erased area of the original image to generate a new image.
    Type: Application
    Filed: March 5, 2020
    Publication date: June 9, 2022
    Inventors: Young Joo JO, Jong Youl PARK, Yu Seok BAE
  • Publication number: 20220067382
    Abstract: Provided is an apparatus for online action detection, the apparatus including a feature extraction unit configured to extract a chunk-level feature of a video chunk sequence of a streaming video, a filtering unit configured to perform filtering on the chunk-level feature, and an action classification unit configured to classify an action class using the filtered chunk-level feature.
    Type: Application
    Filed: August 25, 2021
    Publication date: March 3, 2022
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Jin Young MOON, Hyung Il KIM, Jong Youl PARK, Kang Min BAE, Ki Min YUN
  • Publication number: 20210394282
    Abstract: A deburring tool includes: a body; and a cutting unit provided on an end portion of the body and including a blade part, where a first channel is provided inside the body, and when a fluid supplied from outside of the deburring tool through the first channel is injected into the cutting unit and presses the cutting unit, the cutting unit moves and a degree to which the blade part protrudes outwardly increases.
    Type: Application
    Filed: November 16, 2020
    Publication date: December 23, 2021
    Inventors: Sung Min Bae, Jong Youl Park, Jin Youl Kim, Seung Ho Lee, Min Hee Cho
  • Publication number: 20210142063
    Abstract: An electronic device according to an embodiment disclosed herein may include a memory including at least one instruction and a processor. By executing the at least one instruction, the processor may check feature information corresponding to a video and including at least one of an appearance-related feature value and a motion-related feature value from the video, calculate at least one of a starting score related to a starting point of an action instance, an ending score related to an ending point of an action instance, and a relatedness score between action instances on the basis of the feature information corresponding to the video, the action instances being included in the video, and generate an action proposal included in the video on the basis of the at least one score.
    Type: Application
    Filed: November 3, 2020
    Publication date: May 13, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Jin Young MOON, Yong Jin KWON, Hyung Il KIM, Jong Youl PARK, Kang Min BAE, Ki Min YUN
  • Publication number: 20200311389
    Abstract: A domain adaptation-based object recognition apparatus includes a memory configured to store a domain adaptation-based object recognition program and a processor configured to execute the program. The processor learns a generative model for generating a feature or an image similar to a gallery image on the basis of domain adaptation in association with an input probe image and learns an object recognition classification model by using a learning database corresponding to the gallery image and the input probe image, thereby performing object recognition using the input probe image.
    Type: Application
    Filed: March 30, 2020
    Publication date: October 1, 2020
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Hyung Il KIM, Yong Jin KWON, Jin Young MOON, Jong Youl PARK, Sung Chan OH, Ki Min YUN, Jeun Woo LEE
  • Patent number: 10789470
    Abstract: Provided is a dynamic object detecting technique, and more specifically, a system and method for determining the motion state of a camera from a local motion estimated from a video captured by a dynamic camera and from a result of analyzing a global motion, flexibly updating a background model according to that motion state, and flexibly detecting a dynamic object according to the same state.
    Type: Grant
    Filed: July 11, 2018
    Date of Patent: September 29, 2020
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Ki Min Yun, Yong Jin Kwon, Jin Young Moon, Sung Chan Oh, Jong Youl Park, Jeun Woo Lee
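    The control flow described above can be sketched as follows: an estimated global motion decides a camera-motion state, and that state sets both how aggressively a running-average background model is updated and how strictly foreground pixels are thresholded. The thresholds and learning rates below are illustrative assumptions.

        import numpy as np

        def camera_state(flow, threshold=1.0):
            # Classify the camera-motion state from an estimated global motion (median of the flow field).
            global_motion = np.median(flow.reshape(-1, 2), axis=0)
            return "moving" if np.linalg.norm(global_motion) > threshold else "static"

        def update_background(background, frame, state):
            # Flexibly update a running-average background model according to the camera state.
            rate = 0.2 if state == "moving" else 0.02     # illustrative learning rates
            return (1 - rate) * background + rate * frame

        def detect_dynamic(frame, background, state):
            # Flexibly threshold the foreground according to the camera state.
            tau = 40 if state == "moving" else 20         # looser threshold while the camera moves
            return np.abs(frame - background) > tau

        frame = np.random.rand(48, 64) * 255
        background = np.full_like(frame, 128.0)
        flow = np.random.randn(48, 64, 2) * 0.5           # stand-in for an estimated local motion field
        state = camera_state(flow)
        mask = detect_dynamic(frame, background, state)
        background = update_background(background, frame, state)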
  • Publication number: 20200074647
    Abstract: Smart glasses for selectively tracking a target of visual cognition according to the present invention include a first camera configured to capture a first input image that is a first-person view image of a user, a second camera configured to capture a second input image containing sight line information of the user, a display configured to output additional information corresponding to the first input image, a memory configured to store a program for selectively tracking a target of visual cognition on the basis of the first and second input images, and a processor configured to execute the program stored in the memory, wherein upon executing the program, the processor is configured to detect the target of visual cognition from the first input image and determine, from the second input image, whether the user is in an inattentive state with respect to the target of visual cognition.
    Type: Application
    Filed: August 15, 2019
    Publication date: March 5, 2020
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Jin Young MOON, Yong Jin KWON, Hyung Il KIM, Ki Min YUN, Jong Youl PARK, Sung Chan OH, Jeun Woo LEE
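    The attention check described above can be illustrated with a small sketch: the target detected in the first-person view is represented by a bounding box, and the user is treated as inattentive when recent gaze samples from the second camera rarely fall inside it. The window size and hit count are illustrative assumptions, not values from the publication.

        from collections import deque

        def inside(box, point):
            # Whether a gaze point (x, y) falls inside a target bounding box (x1, y1, x2, y2).
            x1, y1, x2, y2 = box
            x, y = point
            return x1 <= x <= x2 and y1 <= y <= y2

        def is_inattentive(target_box, gaze_history, min_hits=3):
            # Inattentive if the recent gaze samples rarely land on the detected target.
            hits = sum(inside(target_box, g) for g in gaze_history)
            return hits < min_hits

        gaze_history = deque(maxlen=10)
        for gaze_point in [(5, 5), (6, 7), (200, 150), (210, 140)]:  # gaze samples from the second camera
            gaze_history.append(gaze_point)
        target = (180, 120, 260, 200)                                # target detected in the first-person view
        print(is_inattentive(target, gaze_history))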
  • Patent number: 10474901
    Abstract: Disclosed herein are a video interpretation apparatus and method. The video interpretation apparatus includes an object information generation unit for generating object information based on objects in an input video, a relation generation unit for generating a dynamic spatial relation between the objects based on the object information, a general event information generation unit for generating general event information based on the dynamic spatial relation, a video information generation unit for generating video information including any one of a sentence and an event description based on the object information and the general event information, and a video descriptor storage unit for storing the object information, the general event information, and the video information.
    Type: Grant
    Filed: January 5, 2017
    Date of Patent: November 12, 2019
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jin-Young Moon, Kyu-Chang Kang, Yong-Jin Kwon, Kyoung Park, Jong-Youl Park, Jeun-Woo Lee
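    The units named above map naturally onto a layered descriptor: object information, dynamic spatial relations, general event information, and generated video information. The sketch below shows that structure with toy data; the relation and event logic are placeholders, not the method of the patent.

        from dataclasses import dataclass, field

        @dataclass
        class ObjectInfo:
            object_id: int
            label: str
            box: tuple          # (x1, y1, x2, y2), simplified here to a single box

        @dataclass
        class VideoDescriptor:
            # Simplified container mirroring the units named in the abstract.
            objects: list = field(default_factory=list)        # object information
            relations: list = field(default_factory=list)      # dynamic spatial relations between objects
            events: list = field(default_factory=list)         # general event information
            sentences: list = field(default_factory=list)      # generated video information

        def spatial_relation(a, b):
            # Toy spatial relation: compares horizontal box centers.
            ca, cb = (a.box[0] + a.box[2]) / 2, (b.box[0] + b.box[2]) / 2
            return "left_of" if ca < cb else "right_of"

        desc = VideoDescriptor()
        desc.objects = [ObjectInfo(0, "person", (10, 10, 50, 90)), ObjectInfo(1, "car", (120, 30, 220, 90))]
        rel = spatial_relation(desc.objects[0], desc.objects[1])
        desc.relations.append((0, rel, 1))
        desc.events.append({"type": "approach", "participants": (0, 1)})
        desc.sentences.append(f"A {desc.objects[0].label} is {rel.replace('_', ' ')} a {desc.objects[1].label}.")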
  • Publication number: 20190019031
    Abstract: Provided is a dynamic object detecting technique, and more specifically, a system and method for determining the motion state of a camera from a local motion estimated from a video captured by a dynamic camera and from a result of analyzing a global motion, flexibly updating a background model according to that motion state, and flexibly detecting a dynamic object according to the same state.
    Type: Application
    Filed: July 11, 2018
    Publication date: January 17, 2019
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Ki Min YUN, Yong Jin KWON, Jin Young MOON, Sung Chan OH, Jong Youl PARK, Jeun Woo LEE
  • Patent number: 9992300
    Abstract: Disclosed is an adaptive cache transformation architecture for a cache deployed forward, which minimizes duplicated transmission by automatically storing content in a subscriber network area. The system for adaptively deploying a cache positioned at a subscriber network includes a cache service group configured to store all or part of the content serviced from one or more content providing apparatuses to one or more terminals, the group including a plurality of caches deployed in a distributed manner at a subscriber network between the content providing apparatus and the terminal, and a resource manager configured to transform the deployment structure of the plurality of caches forming the cache service group based on at least one of an increase rate in the number of pieces of content requested by the one or more terminals and a reutilization rate for each content.
    Type: Grant
    Filed: February 25, 2015
    Date of Patent: June 5, 2018
    Inventors: Tai Yeon Ku, Jong Youl Park, Young sik Chung
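    The resource manager described above transforms the cache deployment from two signals: the increase rate of requested content and the reutilization rate of stored content. A minimal sketch of such a decision rule follows; the thresholds and the grow/shrink actions are illustrative assumptions.

        def transform_deployment(n_caches, request_increase_rate, reuse_rate,
                                 growth_threshold=0.3, reuse_threshold=0.5):
            # Resource-manager-style decision: grow the cache service group when demand grows,
            # shrink it when stored content is rarely reused. Thresholds are illustrative.
            if request_increase_rate > growth_threshold:
                return n_caches + 1          # add a cache closer to the subscriber network
            if reuse_rate < reuse_threshold and n_caches > 1:
                return n_caches - 1          # consolidate under-used caches
            return n_caches

        caches = 4
        caches = transform_deployment(caches, request_increase_rate=0.45, reuse_rate=0.7)
        print(caches)   # 5 under these sample rates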
  • Publication number: 20170316268
    Abstract: Disclosed herein are a video interpretation apparatus and method. The video interpretation apparatus includes an object information generation unit for generating object information based on objects in an input video, a relation generation unit for generating a dynamic spatial relation between the objects based on the object information, a general event information generation unit for generating general event information based on the dynamic spatial relation, a video information generation unit for generating video information including any one of a sentence and an event description based on the object information and the general event information, and a video descriptor storage unit for storing the object information, the general event information, and the video information.
    Type: Application
    Filed: January 5, 2017
    Publication date: November 2, 2017
    Inventors: Jin-Young MOON, Kyu-Chang KANG, Yong-Jin KWON, Kyoung PARK, Jong-Youl PARK, Jeun-Woo LEE
  • Publication number: 20170195389
    Abstract: Provided is a large-scale video management system including: a video random binary stream calculator configured to generate a video random binary stream while changing a setting of a random threshold over an entire section of an input video; a video quality measurer configured to measure a quality of the input video; and a video replacement determiner configured to search whether a video having the same video random binary stream value is already stored and, if so, to compare the quality of the previously stored video with the quality of the input video and keep the higher-quality video.
    Type: Application
    Filed: July 25, 2016
    Publication date: July 6, 2017
    Inventors: Young Suk YOON, Kyoung PARK, Jong Youl PARK
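    The abstract above identifies duplicate videos by a random binary stream signature and keeps the higher-quality copy. How the signature is computed is not spelled out in this listing, so the sketch below uses per-frame mean brightness compared against pseudo-random thresholds as a stand-in, followed by a store that replaces a stored video only when a same-signature input has higher measured quality.

        import numpy as np

        def video_signature(frames, n_bits=64, seed=7):
            # Stand-in signature: per-frame mean brightness compared against pseudo-random thresholds.
            rng = np.random.default_rng(seed)                      # fixed seed so identical videos match
            means = frames.reshape(frames.shape[0], -1).mean(axis=1)
            idx = rng.integers(0, frames.shape[0], n_bits)
            thresholds = rng.uniform(means.min(), means.max() + 1e-8, n_bits)
            return "".join("1" if means[i] >= t else "0" for i, t in zip(idx, thresholds))

        def maybe_replace(store, frames, quality):
            # Keep only the higher-quality copy for each signature.
            sig = video_signature(frames)
            if sig not in store or store[sig][1] < quality:
                store[sig] = (frames, quality)

        store = {}
        video = np.random.rand(30, 32, 32)
        maybe_replace(store, video, quality=0.6)
        maybe_replace(store, video, quality=0.9)   # same content, higher quality: replaces the stored copy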
  • Publication number: 20160358039
    Abstract: An apparatus for object detection according to an example includes a level image generating unit configured to generate a plurality of level images with reference to a target image; a feature vector extracting unit configured to extract a feature vector from each level image; a codeword generating unit configured to generate a codeword by clustering the feature vector for each level image; a histogram generating unit configured to generate a histogram corresponding to the codeword; and a classifier configured to generate object recognition information of the target image based on the histogram.
    Type: Application
    Filed: May 25, 2016
    Publication date: December 8, 2016
    Inventors: Jong-Gook KO, Kyoung PARK, Jong-Youl PARK, Joong-Won HWANG
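    The pipeline above is a bag-of-visual-words arrangement: level images, feature vectors per level, codewords from clustering, a histogram over the codewords, and a classifier. The sketch below walks through those stages with simple stand-ins (2x downsampling, flattened patches, plain k-means); the final histogram is what would feed the classifier, which is omitted here.

        import numpy as np

        def level_images(image, levels=3):
            # Generate a simple image pyramid by 2x downsampling (stand-in for the level image generating unit).
            imgs = [image]
            for _ in range(levels - 1):
                imgs.append(imgs[-1][::2, ::2])
            return imgs

        def patch_features(image, patch=8):
            # Extract feature vectors as flattened non-overlapping patches (placeholder descriptor).
            h, w = (image.shape[0] // patch) * patch, (image.shape[1] // patch) * patch
            blocks = image[:h, :w].reshape(h // patch, patch, w // patch, patch).swapaxes(1, 2)
            return blocks.reshape(-1, patch * patch)

        def kmeans_codewords(features, k=16, iters=10, seed=0):
            # Cluster the feature vectors into k codewords with a few iterations of plain k-means.
            rng = np.random.default_rng(seed)
            centers = features[rng.choice(len(features), k, replace=False)]
            for _ in range(iters):
                assign = np.argmin(((features[:, None] - centers[None]) ** 2).sum(-1), axis=1)
                for j in range(k):
                    if np.any(assign == j):
                        centers[j] = features[assign == j].mean(axis=0)
            return centers

        def codeword_histogram(features, centers):
            # Normalized histogram of codeword assignments, ready for a classifier.
            assign = np.argmin(((features[:, None] - centers[None]) ** 2).sum(-1), axis=1)
            hist = np.bincount(assign, minlength=len(centers)).astype(float)
            return hist / hist.sum()

        image = np.random.rand(64, 64)
        feats = np.vstack([patch_features(im) for im in level_images(image)])
        centers = kmeans_codewords(feats)
        hist = codeword_histogram(feats, centers)   # this histogram would feed the classifier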