Abstract: Provided are a system and method for automatically recreating personal media through fusion of multimodal features. The system includes: a multimodal fusion analyzer configured to analyze the semantics of personal media of various forms based on a plurality of modalities and to divide the personal media into media fragments, the smallest units having semantics; a semantic-based intelligent retriever configured to store and retrieve the divided media fragments in consideration of their semantics; a personal media recommender configured to learn and analyze a profile of a user through user modeling, and to select and recommend, from among the media fragments retrieved by the semantic-based intelligent retriever, a plurality of media fragments desired by the user; and a personal media creator configured to create new personal media from the plurality of media fragments recommended by the personal media recommender, according to a scenario input by the user.
Type:
Grant
Filed:
June 27, 2016
Date of Patent:
July 2, 2019
Assignees:
Electronics and Telecommunications Research Institute, SOGANG UNIVERSITY RESEARCH FOUNDATION, DIQUEST INC., Soongsil University Research Consortium techno-PARK, Pukyong National University Industry-University Cooperation Foundation
Inventors:
Kyeong Deok Moon, Jong Ho Nang, Yun Kyung Park, Kyung Sun Kim, Chae Kyu Kim, Ki Ryong Kwon, Kyoung Ju Noh, Young Tack Park, Kwang Il Lee
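As one illustrative reading of the abstract, the four components can be wired into a simple pipeline. This is a minimal sketch only: every class name, method signature, and data field below is an assumption made for illustration, not the claimed implementation.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch of the four components named in the abstract.
# All names and signatures are illustrative assumptions.

@dataclass
class MediaFragment:
    media_id: str
    semantics: List[str]  # semantic tags fused from the modalities

class MultimodalFusionAnalyzer:
    """Divides personal media into the smallest units having semantics."""
    def analyze(self, media: dict) -> List[MediaFragment]:
        # Assumed input shape: pre-segmented media with per-segment tags
        # standing in for fused visual/audio/text semantics.
        return [MediaFragment(media["id"], seg["tags"])
                for seg in media["segments"]]

class SemanticRetriever:
    """Stores divided fragments and retrieves them by semantic tag."""
    def __init__(self):
        self._store: List[MediaFragment] = []

    def index(self, fragments: List[MediaFragment]) -> None:
        self._store.extend(fragments)

    def retrieve(self, tag: str) -> List[MediaFragment]:
        return [f for f in self._store if tag in f.semantics]

class PersonalMediaRecommender:
    """Ranks retrieved fragments against a learned user profile."""
    def __init__(self, profile_tags: List[str]):
        # A tag set stands in for the modeled user profile.
        self.profile_tags = set(profile_tags)

    def recommend(self, fragments: List[MediaFragment], k: int = 2):
        return sorted(fragments,
                      key=lambda f: len(self.profile_tags & set(f.semantics)),
                      reverse=True)[:k]

class PersonalMediaCreator:
    """Assembles recommended fragments following a user-input scenario."""
    def create(self, scenario: List[str], recommended: List[MediaFragment]):
        # Order fragments by the scenario's sequence of semantic tags.
        return [f for tag in scenario
                for f in recommended if tag in f.semantics]
```

A caller would chain these in order: analyze, index, retrieve, recommend, then create, mirroring the data flow the abstract describes.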