Patents by Inventor Seunghyun Yoon
Seunghyun Yoon has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250259437
Abstract: Embodiments of the disclosure provide a machine learning model for generating a predicted executable command for an image. The learning model includes an interface configured to obtain an utterance indicating a request associated with the image, an utterance sub-model, a visual sub-model, an attention network, and a selection gate. The machine learning model generates a segment of the predicted executable command from weighted probabilities of each candidate token in a predetermined vocabulary determined based on the visual features, the concept features, current command features, and the utterance features extracted from the utterance or the image.
Type: Application
Filed: April 2, 2025
Publication date: August 14, 2025
Inventors: Seunghyun Yoon, Trung Huu Bui, Franck Dernoncourt, Hyounghun Kim, Doo Soon Kim
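The selection-gate idea in this abstract, weighting per-token probabilities from different feature sources over a fixed vocabulary, can be illustrated with a minimal sketch. The vocabulary, scores, and gate value below are hypothetical stand-ins, not taken from the patent:

```python
import math

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def gated_token_probs(utterance_scores, visual_scores, gate):
    """Blend two per-token score sources with a scalar gate in [0, 1].

    A gate near 1 favors the utterance-derived distribution; a gate
    near 0 favors the visually-derived distribution.
    """
    p_utt = softmax(utterance_scores)
    p_vis = softmax(visual_scores)
    return [gate * u + (1 - gate) * v for u, v in zip(p_utt, p_vis)]

# Hypothetical 4-token vocabulary: ["crop", "rotate", "image", "left"]
probs = gated_token_probs([2.0, 0.5, 0.1, 0.1], [0.2, 1.5, 0.9, 0.4], gate=0.7)
best = max(range(len(probs)), key=probs.__getitem__)
```

The next command token would be chosen from this blended distribution; the patent's actual sub-models and gate computation are far richer than this scalar blend.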
-
Publication number: 20250252265
Abstract: The present disclosure is directed toward systems, methods, and non-transitory computer readable media that provide a contextual query answering system that trains and implements a unique machine learning architecture to generate accurate domain-specific contextual responses. For example, the disclosed systems receive a contextual query indicating a software context of a computer application within a software-specific domain. The disclosed systems utilize a context retrieval model to generate query embeddings from the contextual query and data segment embeddings from data segments of stored digital documents. Further, the context retrieval model determines relevant digital documents from among the stored digital documents based on comparing the query embeddings and the data segment embeddings. The disclosed systems provide the relevant digital documents to a response generator model to generate a contextual response within the software-specific domain.
Type: Application
Filed: February 5, 2024
Publication date: August 7, 2025
Inventors: Varun Kumar Kotte, Trung Bui, Seunghyun Yoon, Sanat Sharma, Franck Dernoncourt, Dewang Sultania
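A minimal sketch of the retrieval step this abstract describes: score each stored document by comparing a query embedding against its data segment embeddings, then hand the best matches onward. The embeddings, document names, and cosine-similarity choice are all hypothetical:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_emb, doc_segments, top_k=1):
    """Score each document by its best-matching segment embedding."""
    scored = []
    for doc_id, segments in doc_segments.items():
        best = max(cosine(query_emb, seg) for seg in segments)
        scored.append((best, doc_id))
    scored.sort(reverse=True)
    return [doc_id for _, doc_id in scored[:top_k]]

# Hypothetical 3-dim segment embeddings for two stored documents
docs = {
    "layers_help.txt": [[0.9, 0.1, 0.0], [0.4, 0.4, 0.2]],
    "export_help.txt": [[0.0, 0.2, 0.9]],
}
top = retrieve([1.0, 0.1, 0.0], docs, top_k=1)
```

In the patented system a response generator model would then condition on the retrieved documents; here retrieval alone is sketched.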
-
Publication number: 20250209278
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for identifying speaker names in transcripts. In particular, in one or more embodiments, the disclosed systems determine, from a set of sentences in a textual transcript of a dialogue, a first sentence spoken by a first speaker and a second sentence spoken by a second speaker. Additionally, in some embodiments, the disclosed systems generate a first feature representation for the first sentence and a second feature representation for the second sentence. Moreover, in some embodiments, the disclosed systems determine a speaker name for at least one of the first sentence or the second sentence by comparing each of the first feature representation and the second feature representation with a name representation for a name spoken in at least one of the first sentence or the second sentence.
Type: Application
Filed: December 20, 2023
Publication date: June 26, 2025
Inventors: Minh Nguyen, Franck Dernoncourt, Hanieh Deilamsalehy, Hao Tan, Quan Tran, Seunghyun Yoon, Trung Bui
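The final comparison step, matching sentence feature representations against name representations, can be sketched as a nearest-name lookup. The 2-dim vectors, names, and dot-product scoring below are illustrative assumptions only:

```python
def dot(a, b):
    """Dot product of two equal-length vectors."""
    return sum(x * y for x, y in zip(a, b))

def assign_speaker(sentence_rep, name_reps):
    """Pick the spoken name whose representation best matches the sentence."""
    return max(name_reps, key=lambda name: dot(sentence_rep, name_reps[name]))

# Hypothetical feature vectors for two names heard in the dialogue
names = {"Alice": [1.0, 0.0], "Bob": [0.0, 1.0]}
first = assign_speaker([0.8, 0.2], names)   # closer to "Alice"
second = assign_speaker([0.1, 0.9], names)  # closer to "Bob"
```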
-
Publication number: 20250200335
Abstract: Methods, non-transitory computer readable media, apparatuses, and systems for identifying misinformation include obtaining a training sample comprising a text graph and a label indicating whether the text graph includes misinformation, and generating a pseudo-sample by modifying the text graph to obtain a modified text graph, where the pseudo-sample includes the modified text graph and the label. A graph classifier is trained to identify misinformation using the training sample and the pseudo-sample.
Type: Application
Filed: December 19, 2023
Publication date: June 19, 2025
Inventors: Jaya Singh, Seunghyun Yoon
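One way to picture the pseudo-sample step is a label-preserving graph perturbation. The sketch below uses edge dropping as the modification; the patent does not specify this particular perturbation, and the graph contents are invented:

```python
import random

def make_pseudo_sample(text_graph, label, drop_fraction=0.2, seed=0):
    """Create a pseudo-sample by dropping a fraction of edges while
    keeping the original label.

    text_graph: {"nodes": [...], "edges": [(u, v), ...]}
    """
    rng = random.Random(seed)
    kept = list(text_graph["edges"])
    n_drop = max(1, int(len(kept) * drop_fraction))
    for _ in range(n_drop):
        kept.pop(rng.randrange(len(kept)))
    modified = {"nodes": list(text_graph["nodes"]), "edges": kept}
    return modified, label

graph = {
    "nodes": ["claim", "source", "quote"],
    "edges": [("claim", "source"), ("claim", "quote"), ("source", "quote")],
}
pseudo_graph, pseudo_label = make_pseudo_sample(graph, label="misinformation")
```

The pseudo-sample then augments the training set alongside the original sample when fitting the graph classifier.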
-
Patent number: 12332939
Abstract: Systems and methods for text processing are described. Embodiments of the present disclosure receive a query comprising a natural language expression; extract a plurality of mentions from the query; generate a relation vector between a pair of the plurality of mentions using a relation encoder network, wherein the relation encoder network is trained using a contrastive learning process where mention pairs from a same document are labeled as positive samples and mention pairs from different documents are labeled as negative samples; combine the plurality of mentions with the relation vector to obtain a virtual knowledge graph of the query; identify a document corresponding to the query by comparing the virtual knowledge graph of the query to a virtual knowledge graph of the document; and transmit a response to the query, wherein the response includes a reference to the document.
Type: Grant
Filed: June 24, 2022
Date of Patent: June 17, 2025
Assignee: Adobe Inc.
Inventors: Yeon Seonwoo, Seunghyun Yoon, Trung Huu Bui, Franck Dernoncourt, Roger K. Brooks, Mihir Naware
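The contrastive labeling rule stated in this abstract (same-document mention pairs are positives, cross-document pairs are negatives) is simple enough to sketch directly. The mention texts and document ids are invented examples:

```python
def label_mention_pairs(mentions):
    """Label mention pairs for contrastive training.

    mentions: list of (mention_text, document_id) tuples. Pairs drawn from
    the same document are positives; pairs from different documents are
    negatives.
    """
    pairs = []
    for i in range(len(mentions)):
        for j in range(i + 1, len(mentions)):
            (m1, d1), (m2, d2) = mentions[i], mentions[j]
            pairs.append((m1, m2, "positive" if d1 == d2 else "negative"))
    return pairs

mentions = [
    ("Ada Lovelace", "doc1"),
    ("Analytical Engine", "doc1"),
    ("Alan Turing", "doc2"),
]
pairs = label_mention_pairs(mentions)
```

These labeled pairs would feed a contrastive loss when training the relation encoder network; the encoder itself is not sketched here.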
-
Patent number: 12293577
Abstract: Embodiments of the disclosure provide a machine learning model for generating a predicted executable command for an image. The learning model includes an interface configured to obtain an utterance indicating a request associated with the image, an utterance sub-model, a visual sub-model, an attention network, and a selection gate. The machine learning model generates a segment of the predicted executable command from weighted probabilities of each candidate token in a predetermined vocabulary determined based on the visual features, the concept features, current command features, and the utterance features extracted from the utterance or the image.
Type: Grant
Filed: February 18, 2022
Date of Patent: May 6, 2025
Assignee: Adobe Inc.
Inventors: Seunghyun Yoon, Trung Huu Bui, Franck Dernoncourt, Hyounghun Kim, Doo Soon Kim
-
Publication number: 20250077775
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for generating aspect-based summaries utilizing deep learning. In particular, in one or more embodiments, the disclosed systems access a transcript comprising sentences. The disclosed systems generate, utilizing a sentence classification machine learning model, aspect labels for the sentences of the transcript. The disclosed systems organize the sentences based on the aspect labels. The disclosed systems generate, utilizing a summary machine learning model, a summary of the transcript for each aspect from the organized sentences.
Type: Application
Filed: August 29, 2023
Publication date: March 6, 2025
Inventors: Zhongfen Deng, Seunghyun Yoon, Trung Bui, Quan Tran, Franck Dernoncourt
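The organize-by-aspect step is the structural heart of this pipeline: classified sentences are grouped under their aspect labels before per-aspect summarization. A minimal sketch, with invented transcript sentences and aspect names (the real system uses machine learning models for both classification and summarization):

```python
def group_by_aspect(sentences, labels):
    """Organize transcript sentences under their predicted aspect labels."""
    grouped = {}
    for sentence, aspect in zip(sentences, labels):
        grouped.setdefault(aspect, []).append(sentence)
    return grouped

sentences = [
    "We agreed to ship the beta on Friday.",
    "The login bug is still open.",
    "Marketing will draft the announcement.",
    "QA will retest the login flow.",
]
labels = ["decisions", "issues", "decisions", "issues"]
grouped = group_by_aspect(sentences, labels)
```

Each value in `grouped` would then be passed to the summary model to produce one summary per aspect.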
-
Patent number: 12242820
Abstract: Techniques for training a language model for code switching content are disclosed. Such techniques include, in some embodiments, generating a dataset, which includes identifying one or more portions within textual content in a first language, the identified one or more portions each including one or more of offensive content or non-offensive content; translating the identified one or more portions to a second language; and reintegrating the translated one or more portions into the textual content to generate code-switched textual content. In some cases, the textual content in the first language includes offensive content and non-offensive content, the identified one or more portions include the offensive content, and the translated one or more portions include a translated version of the offensive content. In some embodiments, the code-switched textual content is at least part of a synthetic dataset usable to train a language model, such as a multilingual classification model.
Type: Grant
Filed: February 17, 2022
Date of Patent: March 4, 2025
Assignee: Adobe Inc.
Inventors: Cesa Salaam, Seunghyun Yoon, Trung Huu Bui, Franck Dernoncourt
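The identify-translate-reintegrate loop can be sketched with the translation step stubbed out by a lookup table; in the real pipeline a translation model would fill that role, and the example sentence and span below are invented:

```python
def code_switch(text, spans, translate):
    """Replace identified spans with their second-language translations.

    spans: list of substrings of `text` to swap out.
    translate: mapping standing in for a real translation step.
    """
    switched = text
    for span in spans:
        switched = switched.replace(span, translate[span])
    return switched

# Hypothetical example: the flagged span is translated (English -> Spanish)
# and reintegrated to produce code-switched training text.
translations = {"you are terrible": "eres terrible"}
result = code_switch(
    "I watched the match and you are terrible at predictions.",
    ["you are terrible"],
    translations,
)
```

The resulting code-switched sentences would form part of the synthetic dataset used to train the multilingual classifier.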
-
Publication number: 20250068924
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for providing multilingual semantic search results utilizing meta-learning and knowledge distillation. For example, in some implementations, the disclosed systems perform a first inner learning loop for a monolingual to bilingual meta-learning task for a teacher model. Additionally, in some implementations, the disclosed systems perform a second inner learning loop for a bilingual to multilingual meta-learning task for a student model. In some embodiments, the disclosed systems perform knowledge distillation based on the first inner learning loop for the monolingual to bilingual meta-learning task and the second inner learning loop for the bilingual to multilingual meta-learning task.
Type: Application
Filed: August 14, 2023
Publication date: February 27, 2025
Inventors: Meryem M'hamdi, Seunghyun Yoon, Franck Dernoncourt, Trung Bui
-
Patent number: 12236975
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for determining speech emotion. In particular, a speech emotion recognition system generates an audio feature vector and a textual feature vector for a sequence of words. Further, the speech emotion recognition system utilizes a neural attention mechanism that intelligently blends together the audio feature vector and the textual feature vector to generate attention output. Using the attention output, which includes consideration of both audio and text modalities for speech corresponding to the sequence of words, the speech emotion recognition system can apply attention methods to one of the feature vectors to generate a hidden feature vector. Based on the hidden feature vector, the speech emotion recognition system can generate a speech emotion probability distribution of emotions among a group of candidate emotions, and then select one of the candidate emotions as corresponding to the sequence of words.
Type: Grant
Filed: November 15, 2021
Date of Patent: February 25, 2025
Assignee: Adobe Inc.
Inventors: Trung Bui, Subhadeep Dey, Seunghyun Yoon
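One common way to blend an audio vector with per-word text vectors is to use the audio vector as an attention query over the text vectors; the patent's exact mechanism may differ, and the 2-dim features below are invented for illustration:

```python
import math

def softmax(xs):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention_blend(audio_vec, text_vecs):
    """Attend over per-word text vectors with the audio vector as the query,
    returning the attention-weighted sum (a blended hidden vector) and the
    attention weights."""
    scores = [sum(a * t for a, t in zip(audio_vec, tv)) for tv in text_vecs]
    weights = softmax(scores)
    dim = len(audio_vec)
    hidden = [sum(w * tv[k] for w, tv in zip(weights, text_vecs))
              for k in range(dim)]
    return hidden, weights

# Hypothetical 2-dim features for one utterance with three words
hidden, weights = attention_blend([1.0, 0.0], [[0.9, 0.1], [0.0, 1.0], [0.5, 0.5]])
```

A classifier head over `hidden` would then yield the emotion probability distribution the abstract describes.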
-
Patent number: 12210825
Abstract: Systems and methods for image captioning are described. One or more aspects of the systems and methods include generating a training caption for a training image using an image captioning network; encoding the training caption using a multi-modal encoder to obtain an encoded training caption; encoding the training image using the multi-modal encoder to obtain an encoded training image; computing a reward function based on the encoded training caption and the encoded training image; and updating parameters of the image captioning network based on the reward function.
Type: Grant
Filed: November 18, 2021
Date of Patent: January 28, 2025
Assignee: Adobe Inc.
Inventors: Jaemin Cho, Seunghyun Yoon, Ajinkya Gorakhnath Kale, Trung Huu Bui, Franck Dernoncourt
-
Publication number: 20250028758
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that learn parameters for a natural language video localization model utilizing a curated dataset. In particular, in some embodiments, the disclosed systems generate a set of similarity scores between a target query and a video dataset that includes a plurality of digital videos. The disclosed systems determine a false-negative threshold by utilizing the set of similarity scores to exclude a subset of false-negative samples from the plurality of digital videos. Further, the disclosed systems determine a negative sample distribution and generate a curated dataset that includes a subset of negative samples, with the subset of false-negative samples excluded.
Type: Application
Filed: July 19, 2023
Publication date: January 23, 2025
Inventor: Seunghyun Yoon
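The curation step can be pictured as a similarity filter: videos that score too close to the query are likely false negatives (they may actually match the query) and are excluded from the negative pool. The video ids, scores, and threshold value are hypothetical:

```python
def curate_negatives(similarities, false_negative_threshold):
    """Split candidate negatives using a false-negative threshold.

    similarities: mapping of video id -> similarity to the target query.
    Videos at or above the threshold are excluded as likely false negatives;
    the rest are kept as negative samples.
    """
    kept, excluded = [], []
    for video_id, score in similarities.items():
        (excluded if score >= false_negative_threshold else kept).append(video_id)
    return kept, excluded

# Hypothetical query-to-video similarity scores
scores = {"vid_a": 0.91, "vid_b": 0.12, "vid_c": 0.75, "vid_d": 0.05}
negatives, false_negatives = curate_negatives(scores, false_negative_threshold=0.7)
```

The kept negatives, sampled according to the learned negative distribution, would then make up the curated training dataset.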
-
Publication number: 20250022459
Abstract: The disclosed method generates training data for a language model, such as a model performing punctuation restoration, on real-world ASR texts. The method uses reinforcement learning with a generative AI model to produce additional data for training the language model, allowing the generative AI model to learn from real-world ASR text and generate more effective training examples based on gradient feedback from the language model.
Type: Application
Filed: July 12, 2023
Publication date: January 16, 2025
Applicant: Adobe Inc.
Inventors: Viet Dac Lai, Trung Bui, Seunghyun Yoon, Quan Tran, Hao Tan, Hanieh Deilamsalehy, Abel Salinas, Franck Dernoncourt
-
Patent number: 12182524
Abstract: Systems and methods for natural language processing are described. One or more aspects of a method, apparatus, and non-transitory computer readable medium include receiving a text phrase; encoding the text phrase using an encoder to obtain a hidden representation of the text phrase, wherein the encoder is trained during a first training phase using self-supervised learning based on a first contrastive loss and during a second training phase using supervised learning based on a second contrastive loss; identifying an intent of the text phrase from a predetermined set of intent labels using a classification network, wherein the classification network is jointly trained with the encoder in the second training phase; and generating a response to the text phrase based on the intent.
Type: Grant
Filed: November 4, 2021
Date of Patent: December 31, 2024
Assignee: Adobe Inc.
Inventors: Jianguo Zhang, Trung Huu Bui, Seunghyun Yoon, Xiang Chen, Quan Hung Tran, Walter W. Chang
-
Publication number: 20240355119
Abstract: One or more aspects of the method, apparatus, and non-transitory computer readable medium include receiving a query relating to a long video; generating a segment of the long video corresponding to the query using a machine learning model trained to identify relevant segments from long videos; and responding to the query based on the generated segment.
Type: Application
Filed: April 24, 2023
Publication date: October 24, 2024
Inventors: Ioana Croitoru, Trung Huu Bui, Zhaowen Wang, Seunghyun Yoon, Franck Dernoncourt, Hailin Jin
-
Patent number: 12124508
Abstract: Systems and methods for intent discovery and video summarization are described. Embodiments of the present disclosure receive a video and a transcript of the video, encode the video to obtain a sequence of video encodings, encode the transcript to obtain a sequence of text encodings, apply a visual gate to the sequence of text encodings based on the sequence of video encodings to obtain gated text encodings, and generate an intent label for the transcript based on the gated text encodings.
Type: Grant
Filed: July 12, 2022
Date of Patent: October 22, 2024
Assignee: Adobe Inc.
Inventors: Adyasha Maharana, Quan Hung Tran, Seunghyun Yoon, Franck Dernoncourt, Trung Huu Bui, Walter W. Chang
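One plausible reading of a "visual gate" is an elementwise sigmoid gate computed from the aligned video encoding and multiplied into the text encoding, so visually supported text features pass through more strongly. The patent does not pin down this exact form, and the 2-dim encodings below are invented:

```python
import math

def sigmoid(x):
    """Standard logistic function, mapping reals into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def visual_gate(text_encodings, video_encodings):
    """Gate each text encoding elementwise by a sigmoid of the aligned
    video encoding."""
    gated = []
    for text_vec, video_vec in zip(text_encodings, video_encodings):
        gate = [sigmoid(v) for v in video_vec]
        gated.append([g * t for g, t in zip(gate, text_vec)])
    return gated

# Hypothetical 2-dim encodings for a two-step transcript/video pair
gated = visual_gate([[1.0, 1.0], [1.0, 1.0]], [[4.0, -4.0], [0.0, 0.0]])
```

The gated text encodings would then feed the intent classifier described in the abstract.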
-
Publication number: 20240304009
Abstract: Embodiments are disclosed for training an image caption evaluation system to perform evaluations of image captions. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving a training image, a ground truth image caption for the training image, and a perturbed image caption for the training image, where the perturbed image caption includes modifications to the ground truth image caption. The disclosed systems and methods further comprise generating, by a visual encoder, a visual embedding representation of the training image and generating, by a perturbation-aware text encoder, a first text embedding for the ground truth image caption and a second text embedding for the perturbed image caption. The disclosed systems and methods further comprise computing losses between the visual embedding, the first text embedding, and the second text embedding and training the perturbation-aware text encoder based on the computed losses.
Type: Application
Filed: March 6, 2023
Publication date: September 12, 2024
Applicant: Adobe Inc.
Inventors: Seunghyun Yoon, Trung Bui
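One natural loss over the three embeddings is a margin loss that pushes the image embedding toward the ground truth caption and away from the perturbed one; this is an assumed formulation, not the patent's stated loss, and all embeddings below are invented:

```python
def dot(a, b):
    """Dot product of two equal-length vectors."""
    return sum(x * y for x, y in zip(a, b))

def perturbation_loss(visual_emb, truth_emb, perturbed_emb, margin=0.2):
    """Margin loss: the image embedding should score higher against the
    ground truth caption embedding than against the perturbed caption
    embedding, by at least `margin`."""
    pos = dot(visual_emb, truth_emb)
    neg = dot(visual_emb, perturbed_emb)
    return max(0.0, margin - (pos - neg))

# Hypothetical embeddings: the ground truth caption aligns with the image,
# the perturbed caption does not, and the margin is already satisfied.
loss = perturbation_loss([1.0, 0.0], [0.9, 0.1], [0.3, 0.7], margin=0.2)
```

Minimizing this loss would teach the text encoder to separate faithful captions from perturbed ones in the shared embedding space.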
-
Patent number: 12038960
Abstract: An incongruent headline detection system receives a request to determine a headline incongruence score for an electronic document. The incongruent headline detection system determines the headline incongruence score for the electronic document by applying a machine learning model to the electronic document. Applying the machine learning model to the electronic document includes generating a graph representing a textual similarity between a headline of the electronic document and each of a plurality of paragraphs of the electronic document and determining the headline incongruence score using the graph. The incongruent headline detection system transmits, responsive to the request, the headline incongruence score for the electronic document.
Type: Grant
Filed: November 17, 2021
Date of Patent: July 16, 2024
Assignee: Adobe Inc.
Inventor: Seunghyun Yoon
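The headline-to-paragraph similarity graph admits a very simple scoring intuition: if no paragraph edge supports the headline, incongruence is high. The rule below (one minus the best-supporting similarity) is an illustrative simplification of whatever the patented model computes, with invented similarity values:

```python
def incongruence_score(headline_similarities):
    """Turn headline-to-paragraph similarity edges into an incongruence score.

    headline_similarities: similarity in [0, 1] between the headline and each
    paragraph of the document.
    """
    best_support = max(headline_similarities)
    return 1.0 - best_support

# Hypothetical similarities for two 3-paragraph articles
congruent = incongruence_score([0.2, 0.9, 0.4])      # one paragraph supports it
incongruent = incongruence_score([0.1, 0.15, 0.05])  # nothing supports it
```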
-
Publication number: 20240020337
Abstract: Systems and methods for intent discovery and video summarization are described. Embodiments of the present disclosure receive a video and a transcript of the video, encode the video to obtain a sequence of video encodings, encode the transcript to obtain a sequence of text encodings, apply a visual gate to the sequence of text encodings based on the sequence of video encodings to obtain gated text encodings, and generate an intent label for the transcript based on the gated text encodings.
Type: Application
Filed: July 12, 2022
Publication date: January 18, 2024
Inventors: Adyasha Maharana, Quan Hung Tran, Seunghyun Yoon, Franck Dernoncourt, Trung Huu Bui, Walter W. Chang
-
Publication number: 20230418868
Abstract: Systems and methods for text processing are described. Embodiments of the present disclosure receive a query comprising a natural language expression; extract a plurality of mentions from the query; generate a relation vector between a pair of the plurality of mentions using a relation encoder network, wherein the relation encoder network is trained using a contrastive learning process where mention pairs from a same document are labeled as positive samples and mention pairs from different documents are labeled as negative samples; combine the plurality of mentions with the relation vector to obtain a virtual knowledge graph of the query; identify a document corresponding to the query by comparing the virtual knowledge graph of the query to a virtual knowledge graph of the document; and transmit a response to the query, wherein the response includes a reference to the document.
Type: Application
Filed: June 24, 2022
Publication date: December 28, 2023
Inventors: Yeon Seonwoo, Seunghyun Yoon, Trung Huu Bui, Franck Dernoncourt, Roger K. Brooks, Mihir Naware