Patents by Inventor Jeong Woo Son
Jeong Woo Son has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240105951
Abstract: The present disclosure relates to an electrode binder composition for a rechargeable battery and an electrode mixture including the same. The electrode binder composition, which comprises emulsified polymer particles having a core-shell structure, can maintain the structural stability of the electrode even over repeated charge and discharge cycles while exhibiting excellent binding force, mechanical properties, and the like, thereby improving the overall performance of the rechargeable battery.
Type: Application
Filed: December 17, 2021
Publication date: March 28, 2024
Applicant: LG Chem, Ltd.
Inventors: Jungeun Woo, Min Ah Kang, Jeong Man Son, Sungjin Lee, Seon Hee Han
-
Patent number: 11930691
Abstract: An apparatus for manufacturing an organic material includes an outer tube including an internal accommodating space, and at least one loading inner tube and at least one collecting inner tube disposed in the accommodating space, the loading inner tube including a mesh boat disposed in a first direction in which the loading inner tube extends.
Type: Grant
Filed: September 5, 2019
Date of Patent: March 12, 2024
Assignee: Samsung Display Co., Ltd.
Inventors: Keun Hee Han, Jong Woo Lee, Myung Ki Lee, Suk Ki, Jeong Hyeon Son
-
Patent number: 11886499
Abstract: Disclosed herein is an apparatus for analyzing a video shot. The apparatus includes at least one program, memory in which the program is recorded, and a processor for executing the program. The program may include a frame extraction unit for extracting at least one frame from a video shot, a shot composition and camera position recognition unit for predicting shot composition and a camera position for the extracted at least one frame based on a previously trained shot composition recognition model, a place and time information extraction unit for predicting a shot location and a shot time for the extracted at least one frame based on previously trained shot location recognition model and shot time recognition model, and an information combination unit for combining pieces of information, respectively predicted for the at least one frame, for each video shot and tagging the video shot with the combined pieces of information.
Type: Grant
Filed: February 3, 2021
Date of Patent: January 30, 2024
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Jeong-Woo Son, Chang-Uk Kwak, Sun-Joong Kim, Alex Lee, Min-Ho Han, Gyeong-June Hahm
-
Publication number: 20220004773
Abstract: Disclosed herein is an apparatus for analyzing a video shot. The apparatus includes at least one program, memory in which the program is recorded, and a processor for executing the program. The program may include a frame extraction unit for extracting at least one frame from a video shot, a shot composition and camera position recognition unit for predicting shot composition and a camera position for the extracted at least one frame based on a previously trained shot composition recognition model, a place and time information extraction unit for predicting a shot location and a shot time for the extracted at least one frame based on previously trained shot location recognition model and shot time recognition model, and an information combination unit for combining pieces of information, respectively predicted for the at least one frame, for each video shot and tagging the video shot with the combined pieces of information.
Type: Application
Filed: February 3, 2021
Publication date: January 6, 2022
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Jeong-Woo SON, Chang-Uk KWAK, Sun-Joong KIM, Alex LEE, Min-Ho HAN, Gyeong-June HAHM
-
Patent number: 11017015
Abstract: A technology for allowing anyone to easily create interactive media capable of easily recognizing a user interaction by using a stored image is provided. A system according to the present invention includes an image reconstruction server, an image ontology, and an image repository. The image reconstruction server includes an image reconstruction controller, a natural language processing module, and an image search module. The image reconstruction controller of the image reconstruction server receives a scenario based on a natural language from a user and searches for images desired by the user by using the natural language processing module, the image search module, and the image repository. The natural language processing module of the image reconstruction server performs a morphological analysis and a syntax analysis on the scenario input by the user as a preliminary operation for the search of the image ontology.
Type: Grant
Filed: May 25, 2017
Date of Patent: May 25, 2021
Assignee: Electronics and Telecommunications Research Institute
Inventors: Min Ho Han, Sun Joong Kim, Won Joo Park, Jong Hyun Park, Jeong Woo Son
-
Patent number: 10795932
Abstract: Disclosed is a method and apparatus for generating a title and a keyframe of a video. According to an embodiment of the present disclosure, the method includes: selecting a main subtitle by analyzing subtitles of the video; selecting the keyframe corresponding to the main subtitle; extracting content information of the keyframe by analyzing the keyframe; generating the title of the video using metadata of the video, the main subtitle, and the content information of the keyframe; and outputting the title and the keyframe of the video.
Type: Grant
Filed: September 5, 2018
Date of Patent: October 6, 2020
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Jeong Woo Son, Sun Joong Kim, Won Joo Park, Sang Yun Lee
-
Publication number: 20200151458
Abstract: A method and apparatus for video data augmentation that automatically construct a large amount of learning data using video data. An apparatus for augmenting video data according to an embodiment of this disclosure includes: a feature information check unit checking feature information including a content feature, a flow feature, and a class feature of a sub video of a predetermined unit constituting an original video; a section check unit selecting a video section including at least one sub video on the basis of the feature information of the sub video; and a video augmentation unit extracting at least one substitute sub video corresponding to the selected video section from multiple pre-stored sub videos, and applying the extracted at least one sub video to the selected video section to generate an augmented video.
Type: Application
Filed: November 13, 2019
Publication date: May 14, 2020
Applicant: Electronics and Telecommunications Research Institute
Inventors: Jeong Woo SON, Sang Hoon LEE, Alex LEE, Sun Joong KIM
-
Patent number: 10433028
Abstract: There are provided an apparatus and method for tracking temporal variation of a video content context using dynamically generated metadata, wherein the method includes generating static metadata on the basis of internal data held during an initial publication of video content and tagging the generated static metadata to the video content, collecting external data related to the video content generated after the video content is published, generating dynamic metadata related to the video content on the basis of the collected external data and tagging the generated dynamic metadata to the video content, repeating regeneration and tagging of the dynamic metadata with an elapse of time, tracking a change in content of the dynamic metadata, and generating and providing a trend analysis report corresponding to a result of tracking the change in the content.
Type: Grant
Filed: November 22, 2017
Date of Patent: October 1, 2019
Assignee: Electronics and Telecommunications Research Institute
Inventors: Won Joo Park, Jeong Woo Son, Sang Kwon Kim, Sun Joong Kim, Sang Yun Lee
-
Patent number: 10372742
Abstract: Disclosed is an apparatus and method for tagging a topic to content. The apparatus may include an unstructured data-based topic generator configured to generate a topic model including an unstructured data-based topic based on content and unstructured data, a viewer group analyzer configured to analyze a characteristic of a viewer group including a viewer of the content based on a social network of the viewer and viewing situation information of the viewer, a multifaceted topic generator configured to generate a multifaceted topic based on the topic model and the characteristic of the viewer group, a content divider configured to divide the content into a plurality of scenes, and a tagger configured to tag the multifaceted topic to the scenes.
Type: Grant
Filed: August 31, 2016
Date of Patent: August 6, 2019
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Jeong Woo Son, Sun Joong Kim, Won Joo Park, Sang Yun Lee, Won Ryu, Sang Kwon Kim, Seung Hee Kim, Woo Sug Jung
-
Publication number: 20190095529
Abstract: Disclosed is a method and apparatus for generating a title and a keyframe of a video. According to an embodiment of the present disclosure, the method includes: selecting a main subtitle by analyzing subtitles of the video; selecting the keyframe corresponding to the main subtitle; extracting content information of the keyframe by analyzing the keyframe; generating the title of the video using metadata of the video, the main subtitle, and the content information of the keyframe; and outputting the title and the keyframe of the video.
Type: Application
Filed: September 5, 2018
Publication date: March 28, 2019
Inventors: Jeong Woo SON, Sun Joong KIM, Won Joo PARK, Sang Yun LEE
-
Publication number: 20180213299
Abstract: There are provided an apparatus and method for tracking temporal variation of a video content context using dynamically generated metadata, wherein the method includes generating static metadata on the basis of internal data held during an initial publication of video content and tagging the generated static metadata to the video content, collecting external data related to the video content generated after the video content is published, generating dynamic metadata related to the video content on the basis of the collected external data and tagging the generated dynamic metadata to the video content, repeating regeneration and tagging of the dynamic metadata with an elapse of time, tracking a change in content of the dynamic metadata, and generating and providing a trend analysis report corresponding to a result of tracking the change in the content.
Type: Application
Filed: November 22, 2017
Publication date: July 26, 2018
Applicant: Electronics and Telecommunications Research Institute
Inventors: Won Joo PARK, Jeong Woo SON, Sang Kwon KIM, Sun Joong KIM, Sang Yun LEE
-
Publication number: 20180213289
Abstract: Provided is a method of authorizing a video scene and metadata for providing a GUI screen provided to a user for authorizing the video scene and the metadata. The method includes generating a GUI screen configuration for an input of data including a video, sound, subtitles, and a script, generating a GUI screen configuration for extracting and editing shots from the data, generating a GUI screen configuration for generating and editing scenes, based on the shots, generating a GUI screen configuration for automatically generating and editing metadata of the scenes, and generating a GUI screen configuration for storing the scenes and the metadata in a database.
Type: Application
Filed: December 12, 2017
Publication date: July 26, 2018
Applicant: Electronics and Telecommunications Research Institute
Inventors: Sang Yun LEE, Sun Joong KIM, Won Joo PARK, Jeong Woo SON
-
Publication number: 20180210890
Abstract: The present invention relates to an apparatus and method for providing a content map service using a story graph of video content and a user structure query.
Type: Application
Filed: August 29, 2017
Publication date: July 26, 2018
Applicant: Electronics and Telecommunications Research Institute
Inventors: Jeong Woo SON, Sang Kwon KIM, Sun Joong KIM, Seung Hee KIM, Hyun Woo LEE
-
Publication number: 20180203855
Abstract: A technology for allowing anyone to easily create interactive media capable of easily recognizing a user interaction by using a stored image is provided. A system according to the present invention includes an image reconstruction server, an image ontology, and an image repository. The image reconstruction server includes an image reconstruction controller, a natural language processing module, and an image search module. The image reconstruction controller of the image reconstruction server receives a scenario based on a natural language from a user and searches for images desired by the user by using the natural language processing module, the image search module, and the image repository. The natural language processing module of the image reconstruction server performs a morphological analysis and a syntax analysis on the scenario input by the user as a preliminary operation for the search of the image ontology.
Type: Application
Filed: May 25, 2017
Publication date: July 19, 2018
Applicant: Electronics and Telecommunications Research Institute
Inventors: Min Ho HAN, Sun Joong KIM, Won Joo PARK, Jong Hyun PARK, Jeong Woo SON
-
Patent number: 9762934
Abstract: An apparatus and method for verifying broadcast content object identification based on web data. The apparatus includes: a web data processor configured to collect and process web data related to broadcast content and create content knowledge information by tagging the web data to the broadcast content; a content knowledge information storage portion configured to store the content knowledge information; and an object identification verifier configured to verify a result of identifying an object contained in the broadcast content, using the content knowledge information.
Type: Grant
Filed: November 4, 2015
Date of Patent: September 12, 2017
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Jeong Woo Son, Kee Seong Cho, Sun Joong Kim, Hwa Suk Kim, So Yung Park, Won Il Chang, Kyong Ha Lee
-
Publication number: 20170060999
Abstract: Disclosed is an apparatus and method for tagging a topic to content. The apparatus may include an unstructured data-based topic generator configured to generate a topic model including an unstructured data-based topic based on content and unstructured data, a viewer group analyzer configured to analyze a characteristic of a viewer group including a viewer of the content based on a social network of the viewer and viewing situation information of the viewer, a multifaceted topic generator configured to generate a multifaceted topic based on the topic model and the characteristic of the viewer group, a content divider configured to divide the content into a plurality of scenes, and a tagger configured to tag the multifaceted topic to the scenes.
Type: Application
Filed: August 31, 2016
Publication date: March 2, 2017
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Jeong Woo SON, Sun Joong KIM, Won Joo PARK, Sang Yun LEE, Won RYU, Sang Kwon KIM, Seung Hee KIM, Woo Sug JUNG
-
Publication number: 20170061215
Abstract: Provided are a clustering method using broadcast content and broadcast related data and a user terminal to perform the method, the clustering method including creating a story graph with respect to each of a plurality of scenes associated with broadcast content based on the broadcast content and broadcast related data, and creating a cluster of a scene based on the created story graph.
Type: Application
Filed: August 31, 2016
Publication date: March 2, 2017
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Jeong Woo SON, Sun Joong KIM, Won Joo PARK, Sang Yun LEE, Won RYU, Sang Kwon KIM, Seung Hee KIM, Woo Sug JUNG
-
Publication number: 20160224864
Abstract: Provided is an object detecting method and apparatus, the apparatus configured to extract a frame image and a motion vector from a video, generate an integrated feature vector based on the frame image and the motion vector, and detect an object included in the video based on the integrated feature vector.
Type: Application
Filed: January 21, 2016
Publication date: August 4, 2016
Inventors: Won Il CHANG, Jeong Woo SON, Sun Joong KIM, Hwa Suk KIM, So Yung PARK, Alex LEE, Kyong Ha LEE, Kee Seong CHO
-
Publication number: 20160127750
Abstract: An apparatus and method for verifying broadcast content object identification based on web data. The apparatus includes: a web data processor configured to collect and process web data related to broadcast content and create content knowledge information by tagging the web data to the broadcast content; a content knowledge information storage portion configured to store the content knowledge information; and an object identification verifier configured to verify a result of identifying an object contained in the broadcast content, using the content knowledge information.
Type: Application
Filed: November 4, 2015
Publication date: May 5, 2016
Inventors: Jeong Woo SON, Kee Seong CHO, Sun Joong KIM, Hwa Suk KIM, So Yung PARK, Won Il CHANG, Kyong Ha LEE
-
Publication number: 20100160496
Abstract: The present invention provides an acrylic acid ester copolymer emulsion composition, and redispersible powders made therefrom. The acrylic acid ester copolymer emulsion composition comprises polyvinyl alcohol having a degree of saponification of 85 mol % or more and an average degree of polymerization of 300 to 1400; hydrophilic ethylenic unsaturated monomers having a water solubility of 1% or more; hydrophobic ethylenic unsaturated monomers having a water solubility of less than 1%; and a lipophilic initiator. The acrylic acid ester copolymer composition according to the present invention has excellent polymerization stability and improved water resistance, alkali resistance, and fluidity, and the redispersible powders prepared by spray-drying the acrylic acid ester copolymer composition have improved water redispersibility and thus can be used in various fields, such as an additive to a hydraulic material, a powder paint, and an adhesive.
Type: Application
Filed: September 21, 2007
Publication date: June 24, 2010
Inventor: Jeong Woo Son