Patents by Inventor Jong Youl PARK
Jong Youl PARK has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12062154
Abstract: An image correcting method of the present invention includes: a step of performing a preprocessing process on an original image to generate a mask image including only an erased area of the original image; a step of predicting, by using generative adversarial networks, an image which is to be synthesized with the erased area in the mask image; and a step of synthesizing the predicted image with the erased area of the original image to generate a new image.
Type: Grant
Filed: March 5, 2020
Date of Patent: August 13, 2024
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Young Joo Jo, Jong Youl Park, Yu Seok Bae
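The masking and synthesis steps of this abstract can be illustrated with a minimal sketch. The GAN predictor itself is out of scope here and is stood in for by a precomputed `predicted` array; `make_mask` and `composite`, and the convention that erased pixels equal zero, are hypothetical simplifications, not the patented method.

```python
import numpy as np

def make_mask(original, erased_value=0):
    """Build a binary mask that is 1 only where the original image was erased."""
    return (original == erased_value).astype(np.uint8)

def composite(original, predicted, mask):
    """Paste predicted pixels into the erased area, keeping the rest of the original."""
    return original * (1 - mask) + predicted * mask
```

In the real method the `predicted` image would come from the generative adversarial network conditioned on the mask image.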
-
Patent number: 12046464
Abstract: A substrate cleaning composition, a method of cleaning a substrate using the same, and a method of fabricating a semiconductor device using the same, the substrate cleaning composition including a styrene copolymer including a first repeating unit represented by Formula 1-1a and a second repeating unit represented by Formula 1-1b; an additive represented by Formula 2-1; and an alcoholic solvent having a solubility of 500 g/L or less in deionized water,
Type: Grant
Filed: April 13, 2022
Date of Patent: July 23, 2024
Assignees: SAMSUNG ELECTRONICS CO., LTD., DONGJIN SEMICHEM CO., LTD.
Inventors: Ga Young Song, Mi Hyun Park, Jong Kyoung Park, Jung Youl Lee, Hyun Jin Kim, Hyo San Lee, Han Sol Lim, Hoon Han
-
Patent number: 12036175
Abstract: Provided is a vibratory stimulation device including a first substrate, a connection band connected to both sides of the first substrate, and a vibration element array including a plurality of vibration elements provided on the first substrate, wherein each of the vibration elements includes a stand provided on the first substrate, a vibration film provided on the stand and in contact with the stand at an edge, a vibrator provided on an upper or lower surface of the vibration film, and an electrode wire connected to the vibrator, wherein the vibration film includes a material that is more flexible and stretchable than the stand.
Type: Grant
Filed: May 24, 2021
Date of Patent: July 16, 2024
Assignee: Electronics and Telecommunications Research Institute
Inventors: Kang-Ho Park, Jong Tae Lim, Seung Youl Kang, Bock Soon Na, Chan Woo Park, Seongdeok Ahn, Wooseup Youm, Ji-Young Oh
-
Patent number: 11935296
Abstract: Provided is an apparatus for online action detection, the apparatus including a feature extraction unit configured to extract a chunk-level feature of a video chunk sequence of a streaming video, a filtering unit configured to perform filtering on the chunk-level feature, and an action classification unit configured to classify an action class using the filtered chunk-level feature.
Type: Grant
Filed: August 25, 2021
Date of Patent: March 19, 2024
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Jin Young Moon, Hyung Il Kim, Jong Youl Park, Kang Min Bae, Ki Min Yun
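The three units named in this abstract (feature extraction, filtering, classification) map naturally onto a small pipeline. This is only an illustrative sketch under simplified assumptions: mean-pooling as the chunk-level feature, an arbitrary index/feature predicate as the filter, and a linear classifier; none of these specifics come from the patent.

```python
import numpy as np

def chunk_features(frames, chunk_size):
    """Mean-pool per-frame features into chunk-level features (trailing partial chunk dropped)."""
    n = len(frames) // chunk_size
    return [np.mean(frames[i * chunk_size:(i + 1) * chunk_size], axis=0) for i in range(n)]

def filter_chunks(chunks, keep):
    """Keep only chunk features passing a (hypothetical) relevance predicate."""
    return [c for i, c in enumerate(chunks) if keep(i, c)]

def classify(chunk, weights):
    """Score each action class with a linear classifier and return the argmax class index."""
    return int(np.argmax(weights @ chunk))
```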
-
Patent number: 11527067
Abstract: An electronic device according to an embodiment disclosed herein may include a memory including at least one instruction and a processor. By executing the at least one instruction, the processor may check feature information corresponding to a video and including at least one of an appearance-related feature value and a motion-related feature value from the video, calculate at least one of a starting score related to a starting point of an action instance, an ending score related to an ending point of an action instance, and a relatedness score between action instances on the basis of the feature information corresponding to the video, the action instances being included in the video, and generate an action proposal included in the video on the basis of the at least one score.
Type: Grant
Filed: November 3, 2020
Date of Patent: December 13, 2022
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Jin Young Moon, Yong Jin Kwon, Hyung Il Kim, Jong Youl Park, Kang Min Bae, Ki Min Yun
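One common way to turn per-timestep starting and ending scores into action proposals is to pair candidate starts with later ends and keep high-scoring pairs. The sketch below is a generic illustration of that idea, not the claimed method; the score product and threshold are hypothetical choices.

```python
def propose(start_scores, end_scores, threshold):
    """Pair each candidate start with every later end; keep pairs whose combined score clears the threshold."""
    proposals = []
    for s, s_score in enumerate(start_scores):
        for e in range(s + 1, len(end_scores)):
            score = s_score * end_scores[e]  # hypothetical combination rule
            if score >= threshold:
                proposals.append((s, e, score))
    return proposals
```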
-
Patent number: 11517968
Abstract: A deburring tool includes: a body; and a cutting unit provided on an end portion of the body and including a blade part, where a first channel is provided inside the body, and when a fluid supplied from outside of the deburring tool through the first channel is injected into the cutting unit and presses the cutting unit, the cutting unit moves and a degree to which the blade part protrudes outwardly increases.
Type: Grant
Filed: November 16, 2020
Date of Patent: December 6, 2022
Assignees: Hyundai Motor Company, Kia Motors Corporation
Inventors: Sung Min Bae, Jong Youl Park, Jin Youl Kim, Seung Ho Lee, Min Hee Cho
-
Patent number: 11380133
Abstract: A domain adaptation-based object recognition apparatus includes a memory configured to store a domain adaptation-based object recognition program and a processor configured to execute the program. The processor learns a generative model for generating a feature or an image similar to a gallery image on the basis of domain adaptation in association with an input probe image and learns an object recognition classification model by using a learning database corresponding to the gallery image and the input probe image, thereby performing object recognition using the input probe image.
Type: Grant
Filed: March 30, 2020
Date of Patent: July 5, 2022
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Hyung Il Kim, Yong Jin Kwon, Jin Young Moon, Jong Youl Park, Sung Chan Oh, Ki Min Yun, Jeun Woo Lee
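The core idea of adapting probe-domain features toward the gallery domain before matching can be shown with a deliberately crude stand-in for the learned generative model: a shift by the difference of domain means, followed by nearest-neighbour recognition. Both functions and the mean-shift alignment are hypothetical simplifications, not the patented models.

```python
import numpy as np

def align(probe, probe_mean, gallery_mean):
    """Crude domain adaptation: shift a probe feature by the difference of domain means."""
    return probe - probe_mean + gallery_mean

def recognize(feature, gallery_feats, gallery_labels):
    """Nearest-neighbour match of the adapted feature against the gallery."""
    dists = np.linalg.norm(gallery_feats - feature, axis=1)
    return gallery_labels[int(np.argmin(dists))]
```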
-
Publication number: 20220180490
Abstract: An image correcting method of the present invention includes: a step of performing a preprocessing process on an original image to generate a mask image including only an erased area of the original image; a step of predicting, by using generative adversarial networks, an image which is to be synthesized with the erased area in the mask image; and a step of synthesizing the predicted image with the erased area of the original image to generate a new image.
Type: Application
Filed: March 5, 2020
Publication date: June 9, 2022
Inventors: Young Joo JO, Jong Youl PARK, Yu Seok BAE
-
Publication number: 20220067382
Abstract: Provided is an apparatus for online action detection, the apparatus including a feature extraction unit configured to extract a chunk-level feature of a video chunk sequence of a streaming video, a filtering unit configured to perform filtering on the chunk-level feature, and an action classification unit configured to classify an action class using the filtered chunk-level feature.
Type: Application
Filed: August 25, 2021
Publication date: March 3, 2022
Applicant: Electronics and Telecommunications Research Institute
Inventors: Jin Young MOON, Hyung Il KIM, Jong Youl PARK, Kang Min BAE, Ki Min YUN
-
Publication number: 20210394282
Abstract: A deburring tool includes: a body; and a cutting unit provided on an end portion of the body and including a blade part, where a first channel is provided inside the body, and when a fluid supplied from outside of the deburring tool through the first channel is injected into the cutting unit and presses the cutting unit, the cutting unit moves and a degree to which the blade part protrudes outwardly increases.
Type: Application
Filed: November 16, 2020
Publication date: December 23, 2021
Inventors: Sung Min Bae, Jong Youl Park, Jin Youl Kim, Seung Ho Lee, Min Hee Cho
-
Publication number: 20210142063
Abstract: An electronic device according to an embodiment disclosed herein may include a memory including at least one instruction and a processor. By executing the at least one instruction, the processor may check feature information corresponding to a video and including at least one of an appearance-related feature value and a motion-related feature value from the video, calculate at least one of a starting score related to a starting point of an action instance, an ending score related to an ending point of an action instance, and a relatedness score between action instances on the basis of the feature information corresponding to the video, the action instances being included in the video, and generate an action proposal included in the video on the basis of the at least one score.
Type: Application
Filed: November 3, 2020
Publication date: May 13, 2021
Applicant: Electronics and Telecommunications Research Institute
Inventors: Jin Young MOON, Yong Jin KWON, Hyung Il KIM, Jong Youl PARK, Kang Min BAE, Ki Min YUN
-
Publication number: 20200311389
Abstract: A domain adaptation-based object recognition apparatus includes a memory configured to store a domain adaptation-based object recognition program and a processor configured to execute the program. The processor learns a generative model for generating a feature or an image similar to a gallery image on the basis of domain adaptation in association with an input probe image and learns an object recognition classification model by using a learning database corresponding to the gallery image and the input probe image, thereby performing object recognition using the input probe image.
Type: Application
Filed: March 30, 2020
Publication date: October 1, 2020
Applicant: Electronics and Telecommunications Research Institute
Inventors: Hyung Il KIM, Yong Jin KWON, Jin Young MOON, Jong Youl PARK, Sung Chan OH, Ki Min YUN, Jeun Woo LEE
-
Patent number: 10789470
Abstract: Provided is a dynamic object detecting technique, and more specifically, a system and method for determining a state of a motion of a camera on the basis of a local motion estimated on the basis of a video captured by a dynamic camera and a result of analyzing a global motion, flexibly updating a background model according to the state of the motion of the camera, and flexibly detecting a dynamic object according to the state of the motion of the camera.
Type: Grant
Filed: July 11, 2018
Date of Patent: September 29, 2020
Assignee: Electronics and Telecommunications Research Institute
Inventors: Ki Min Yun, Yong Jin Kwon, Jin Young Moon, Sung Chan Oh, Jong Youl Park, Jeun Woo Lee
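The "flexible" background-model update the abstract describes can be pictured as a running average whose learning rate depends on the camera's motion state. This is a generic background-subtraction sketch, not the patented system; the two learning rates and the thresholding rule are hypothetical.

```python
import numpy as np

def update_background(bg, frame, camera_moving):
    """Update a running-average background; adapt faster when the camera is judged to be moving."""
    alpha = 0.5 if camera_moving else 0.1  # hypothetical learning rates
    return (1 - alpha) * bg + alpha * frame

def detect_foreground(bg, frame, threshold):
    """Mark pixels whose deviation from the background model exceeds the threshold."""
    return (np.abs(frame - bg) > threshold).astype(np.uint8)
```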
-
Publication number: 20200074647
Abstract: Smart glasses for selectively tracking a target of visual cognition according to the present invention include a first camera configured to capture a first input image that is a first-person view image of a user, a second camera configured to capture a second input image containing sight line information of the user, a display configured to output additional information corresponding to the first input image, a memory configured to store a program for selectively tracking a target of visual cognition on the basis of the first and second input images, and a processor configured to execute the program stored in the memory, wherein upon executing the program, the processor is configured to detect the target of visual cognition from the first input image and determine, from the second input image, whether the user is in an inattentive state with respect to the target of visual cognition.
Type: Application
Filed: August 15, 2019
Publication date: March 5, 2020
Applicant: Electronics and Telecommunications Research Institute
Inventors: Jin Young MOON, Yong Jin KWON, Hyung Il KIM, Ki Min YUN, Jong Youl PARK, Sung Chan OH, Jeun Woo LEE
-
Patent number: 10474901
Abstract: Disclosed herein are a video interpretation apparatus and method. The video interpretation apparatus includes an object information generation unit for generating object information based on objects in an input video, a relation generation unit for generating a dynamic spatial relation between the objects based on the object information, a general event information generation unit for generating general event information based on the dynamic spatial relation, a video information generation unit for generating video information including any one of a sentence and an event description based on the object information and the general event information, and a video descriptor storage unit for storing the object information, the general event information, and the video information.
Type: Grant
Filed: January 5, 2017
Date of Patent: November 12, 2019
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Jin-Young Moon, Kyu-Chang Kang, Yong-Jin Kwon, Kyoung Park, Jong-Youl Park, Jeun-Woo Lee
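The pipeline from object information to a dynamic spatial relation to a sentence can be sketched in miniature. Deriving a relation from the change in inter-object distance, and the sentence template, are hypothetical illustrations of the stages named in the abstract, not the patented units.

```python
def spatial_relation(dist_before, dist_after):
    """Classify the dynamic spatial relation between two objects from their distance over time."""
    if dist_after < dist_before:
        return "approaching"
    if dist_after > dist_before:
        return "moving away from"
    return "keeping distance from"

def describe(subject, obj, relation):
    """Render a general event as a simple natural-language sentence."""
    return f"{subject} is {relation} {obj}"
```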
-
Publication number: 20190019031
Abstract: Provided is a dynamic object detecting technique, and more specifically, a system and method for determining a state of a motion of a camera on the basis of a local motion estimated on the basis of a video captured by a dynamic camera and a result of analyzing a global motion, flexibly updating a background model according to the state of the motion of the camera, and flexibly detecting a dynamic object according to the state of the motion of the camera.
Type: Application
Filed: July 11, 2018
Publication date: January 17, 2019
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Ki Min YUN, Yong Jin KWON, Jin Young MOON, Sung Chan OH, Jong Youl PARK, Jeun Woo LEE
-
Patent number: 9992300
Abstract: Disclosed is an adaptive cache transformation architecture for a cache deployed forward to minimize duplicated transmission, by automatically storing content in a subscriber network area. The system for adaptively deploying a cache positioned at a subscriber network includes a cache service group configured to store all or a part of pieces of content serviced from one or more content providing apparatuses to one or more terminals and including a plurality of caches deployed at a subscriber network between the content providing apparatus and the terminal in a distributed manner, and a resource manager configured to transform a deployment structure of the plurality of caches forming the cache service group, based on at least one of an increase rate in the number of pieces of contents requested by the one or more terminals and a reutilization rate for each content.
Type: Grant
Filed: February 25, 2015
Date of Patent: June 5, 2018
Inventors: Tai Yeon Ku, Jong Youl Park, Young sik Chung
-
Publication number: 20170316268
Abstract: Disclosed herein are a video interpretation apparatus and method. The video interpretation apparatus includes an object information generation unit for generating object information based on objects in an input video, a relation generation unit for generating a dynamic spatial relation between the objects based on the object information, a general event information generation unit for generating general event information based on the dynamic spatial relation, a video information generation unit for generating video information including any one of a sentence and an event description based on the object information and the general event information, and a video descriptor storage unit for storing the object information, the general event information, and the video information.
Type: Application
Filed: January 5, 2017
Publication date: November 2, 2017
Inventors: Jin-Young MOON, Kyu-Chang KANG, Yong-Jin KWON, Kyoung PARK, Jong-Youl PARK, Jeun-Woo LEE
-
Publication number: 20170195389
Abstract: Provided is a large scale video management system including: a video random binary stream calculator configured to generate a video random binary stream while changing a setting of a random threshold in an entire section of an input video; a video quality measurer configured to measure a quality of the input video; and a video replacement determiner configured to search whether a video having the same value as the video random binary stream generation value is previously stored, and compare a quality of the previously stored video with a quality of the input video to replace with a high quality video when the video having the same value as the video random binary stream generation value is previously stored.
Type: Application
Filed: July 25, 2016
Publication date: July 6, 2017
Inventors: Young Suk YOON, Kyoung PARK, Jong Youl PARK
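The idea of fingerprinting a video as a binary stream under varying thresholds, then keeping only the highest-quality copy per fingerprint, can be shown with a toy sketch. Using per-frame mean brightness as the thresholded statistic is a hypothetical stand-in; the publication does not specify it.

```python
def fingerprint(frame_means, thresholds):
    """Build a binary stream by comparing each frame statistic against each threshold setting."""
    return tuple(int(m > t) for t in thresholds for m in frame_means)

def ingest(store, frame_means, quality, thresholds):
    """Store the video under its fingerprint, keeping only the highest-quality copy."""
    key = fingerprint(frame_means, thresholds)
    if key not in store or store[key] < quality:
        store[key] = quality
    return store[key]
```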
-
Publication number: 20160358039
Abstract: An apparatus for object detection according to an example includes a level image generating unit configured to generate a plurality of level images with reference to a target image; a feature vector extracting unit configured to extract a feature vector from each level image; a codeword generating unit configured to generate a codeword by clustering the feature vector for each level image; a histogram generating unit configured to generate a histogram corresponding to the codeword; and a classifier configured to generate object recognition information of the target image based on the histogram.
Type: Application
Filed: May 25, 2016
Publication date: December 8, 2016
Inventors: Jong-Gook KO, Kyoung PARK, Jong-Youl PARK, Joong-Won HWANG
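The codeword-and-histogram stages of this pipeline follow the familiar bag-of-visual-words pattern: quantize each feature vector to its nearest codeword, then histogram the assignments for the classifier. The sketch below assumes a precomputed codebook and skips the level-image and classifier stages; it illustrates the pattern, not the published apparatus.

```python
import numpy as np

def assign_codewords(features, codebook):
    """Assign each feature vector to its nearest codeword (vector quantisation)."""
    dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    return np.argmin(dists, axis=1)

def codeword_histogram(assignments, n_codewords):
    """Count how often each codeword occurs; this histogram feeds the classifier."""
    return np.bincount(assignments, minlength=n_codewords)
```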