Patents by Inventor Seunghyun Yoon

Seunghyun Yoon has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240020337
    Abstract: Systems and methods for intent discovery and video summarization are described. Embodiments of the present disclosure receive a video and a transcript of the video, encode the video to obtain a sequence of video encodings, encode the transcript to obtain a sequence of text encodings, apply a visual gate to the sequence of text encodings based on the sequence of video encodings to obtain gated text encodings, and generate an intent label for the transcript based on the gated text encodings.
    Type: Application
    Filed: July 12, 2022
    Publication date: January 18, 2024
    Inventors: Adyasha Maharana, Quan Hung Tran, Seunghyun Yoon, Franck Dernoncourt, Trung Huu Bui, Walter W. Chang
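The visual gating step described in this abstract can be sketched as an elementwise sigmoid gate computed from the concatenated video and text encodings. The patent does not specify the gate's exact form, so the sigmoid formulation, weight shapes, and random inputs below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def visual_gate(text_enc, video_enc, W, b):
    """Gate each text encoding by its aligned video encoding.

    text_enc, video_enc: (seq_len, d) arrays; W: (2d, d); b: (d,).
    Returns gated text encodings of shape (seq_len, d).
    """
    fused = np.concatenate([text_enc, video_enc], axis=-1)  # (seq_len, 2d)
    gate = sigmoid(fused @ W + b)                           # each entry in (0, 1)
    return gate * text_enc                                  # elementwise gating

d, seq_len = 8, 5
text_enc = rng.normal(size=(seq_len, d))
video_enc = rng.normal(size=(seq_len, d))
W = rng.normal(scale=0.1, size=(2 * d, d))
b = np.zeros(d)

gated = visual_gate(text_enc, video_enc, W, b)
```

Because the gate values lie in (0, 1), each gated text encoding is an attenuated copy of the original, modulated by the visual signal.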
  • Publication number: 20230419164
    Abstract: Multitask machine-learning model training and training data augmentation techniques are described. In one example, training is performed for multiple tasks simultaneously as part of training a multitask machine-learning model using question pairs. Examples of the multiple tasks include question summarization and recognizing question entailment. Further, a loss function is described that incorporates a parameter sharing loss that is configured to adjust an amount that parameters are shared between corresponding layers trained for the two tasks, respectively. In an implementation, training data augmentation techniques are also employed by synthesizing question pairs, automatically and without user intervention, to improve accuracy in model training.
    Type: Application
    Filed: June 22, 2022
    Publication date: December 28, 2023
    Applicant: Adobe Inc.
    Inventors: Khalil Mrini, Franck Dernoncourt, Seunghyun Yoon, Trung Huu Bui, Walter W. Chang, Emilia Farcas, Ndapandula T. Nakashole
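The combined objective described here can be sketched as the two task losses plus a penalty on the distance between corresponding layer weights; the squared-distance form and the weighting factor `lam` are assumptions, since the abstract does not give the exact loss:

```python
import numpy as np

def parameter_sharing_loss(layers_a, layers_b):
    """Squared-distance penalty between corresponding layer weights
    of the two task-specific towers."""
    return sum(float(np.sum((wa - wb) ** 2)) for wa, wb in zip(layers_a, layers_b))

def multitask_loss(loss_summarize, loss_entail, layers_a, layers_b, lam=0.1):
    """Total loss: both task losses plus the weighted sharing penalty.
    A larger `lam` pushes the corresponding layers to stay closer together."""
    return loss_summarize + loss_entail + lam * parameter_sharing_loss(layers_a, layers_b)
```

Tuning `lam` then controls how strongly parameters are effectively shared between the layers trained for each task.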
  • Publication number: 20230418868
    Abstract: Systems and methods for text processing are described. Embodiments of the present disclosure receive a query comprising a natural language expression; extract a plurality of mentions from the query; generate a relation vector between a pair of the plurality of mentions using a relation encoder network, wherein the relation encoder network is trained using a contrastive learning process where mention pairs from a same document are labeled as positive samples and mention pairs from different documents are labeled as negative samples; combine the plurality of mentions with the relation vector to obtain a virtual knowledge graph of the query; identify a document corresponding to the query by comparing the virtual knowledge graph of the query to a virtual knowledge graph of the document; and transmit a response to the query, wherein the response includes a reference to the document.
    Type: Application
    Filed: June 24, 2022
    Publication date: December 28, 2023
    Inventors: Yeon Seonwoo, Seunghyun Yoon, Trung Huu Bui, Franck Dernoncourt, Roger K. Brooks, Mihir Naware
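The contrastive training of the relation encoder can be sketched with an InfoNCE-style objective, where relation vectors for mention pairs from the same document serve as positives and pairs from different documents as negatives. The exact loss in the patent is not specified; this is one standard formulation:

```python
import numpy as np

def relation_contrastive_loss(anchor, positives, negatives, temp=0.1):
    """InfoNCE-style loss: pull same-document relation vectors toward the
    anchor, push different-document vectors away."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    pos = np.array([cos(anchor, p) for p in positives]) / temp
    neg = np.array([cos(anchor, n) for n in negatives]) / temp
    logits = np.concatenate([pos, neg])
    # numerically stable log-sum-exp over all candidates
    log_denom = np.log(np.sum(np.exp(logits - logits.max()))) + logits.max()
    return float(np.mean(log_denom - pos))
```

A well-trained encoder drives the loss toward zero when positives are near the anchor and negatives are far from it.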
  • Publication number: 20230267726
    Abstract: Embodiments of the disclosure provide a machine learning model for generating a predicted executable command for an image. The machine learning model includes an interface configured to obtain an utterance indicating a request associated with the image, an utterance sub-model, a visual sub-model, an attention network, and a selection gate. The machine learning model generates a segment of the predicted executable command from weighted probabilities of each candidate token in a predetermined vocabulary determined based on the visual features, the concept features, current command features, and the utterance features extracted from the utterance or the image.
    Type: Application
    Filed: February 18, 2022
    Publication date: August 24, 2023
    Inventors: Seunghyun Yoon, Trung Huu Bui, Franck Dernoncourt, Hyounghun Kim, Doo Soon Kim
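The selection-gate step can be sketched as a weighted mixture of per-source token distributions over the vocabulary. The normalization and mixture form below are assumptions, since the abstract only says the probabilities are weighted:

```python
import numpy as np

def gated_token_probs(p_vocab_sources, gate_weights):
    """Mix per-source token distributions (e.g. from utterance, visual,
    concept, and current-command features) into one distribution over the
    predetermined vocabulary, using normalized selection-gate weights."""
    gate = np.asarray(gate_weights, dtype=float)
    gate = gate / gate.sum()
    return sum(w * p for w, p in zip(gate, p_vocab_sources))
```

Because the gate weights are normalized and each input is a distribution, the mixture is itself a valid distribution from which the next command token can be chosen.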
  • Publication number: 20230259708
    Abstract: Systems and methods for key-phrase extraction are described. The systems and methods include receiving a transcript including a text paragraph and generating key-phrase data for the text paragraph using a key-phrase extraction network. The key-phrase extraction network is trained to identify domain-relevant key-phrase data based on domain data obtained using a domain discriminator network. The systems and methods further include generating meta-data for the transcript based on the key-phrase data.
    Type: Application
    Filed: February 14, 2022
    Publication date: August 17, 2023
    Inventors: Amir Pouran Ben Veyseh, Franck Dernoncourt, Walter W. Chang, Trung Huu Bui, Hanieh Deilamsalehy, Seunghyun Yoon, Rajiv Bhawanji Jain, Quan Hung Tran, Varun Manjunatha
  • Publication number: 20230259718
    Abstract: Techniques for training a language model for code switching content are disclosed. Such techniques include, in some embodiments, generating a dataset, which includes identifying one or more portions within textual content in a first language, the identified one or more portions each including one or more of offensive content or non-offensive content; translating the identified one or more portions to a second language; and reintegrating the translated one or more portions into the textual content to generate code-switched textual content. In some cases, the textual content in the first language includes offensive content and non-offensive content, the identified one or more portions include the offensive content, and the translated one or more portions include a translated version of the offensive content. In some embodiments, the code-switched textual content is at least part of a synthetic dataset usable to train a language model, such as a multilingual classification model.
    Type: Application
    Filed: February 17, 2022
    Publication date: August 17, 2023
    Inventors: Cesa Salaam, Seunghyun Yoon, Trung Huu Bui, Franck Dernoncourt
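The translate-and-reintegrate step can be sketched as a span-splice over the text, where identified spans are replaced with their translations. The character-offset representation and the `translate` callback are assumptions for illustration:

```python
def code_switch(text, spans, translate):
    """Replace identified spans with their translations to synthesize
    code-switched text.

    spans: (start, end) character offsets into `text`, assumed non-overlapping.
    translate: callable mapping a source-language span to the second language.
    """
    out, prev = [], 0
    for start, end in sorted(spans):
        out.append(text[prev:start])       # keep text before the span as-is
        out.append(translate(text[start:end]))  # swap in the translation
        prev = end
    out.append(text[prev:])                # keep the remainder
    return "".join(out)
```

For example, `code_switch("hello bad world", [(6, 9)], translate)` would replace only the span `"bad"` with its translation, leaving the surrounding text in the first language.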
  • Publication number: 20230153522
    Abstract: Systems and methods for image captioning are described. One or more aspects of the systems and methods include generating a training caption for a training image using an image captioning network; encoding the training caption using a multi-modal encoder to obtain an encoded training caption; encoding the training image using the multi-modal encoder to obtain an encoded training image; computing a reward function based on the encoded training caption and the encoded training image; and updating parameters of the image captioning network based on the reward function.
    Type: Application
    Filed: November 18, 2021
    Publication date: May 18, 2023
    Inventors: Jaemin Cho, Seunghyun Yoon, Ajinkya Gorakhnath Kale, Trung Huu Bui, Franck Dernoncourt
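The reward computed from the encoded caption and encoded image can be sketched as a cosine similarity between the two multi-modal encodings. Cosine similarity is a common instantiation, but the abstract does not commit to it, so treat this as an assumed form:

```python
import numpy as np

def caption_image_reward(caption_emb, image_emb):
    """Reward for a generated caption: cosine similarity between the
    multi-modal encodings of the caption and the image (assumed form)."""
    return float(caption_emb @ image_emb /
                 (np.linalg.norm(caption_emb) * np.linalg.norm(image_emb)))
```

A caption whose encoding aligns with the image encoding earns a reward near 1, which the training loop can then use to update the captioning network's parameters.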
  • Publication number: 20230153341
    Abstract: An incongruent headline detection system receives a request to determine a headline incongruence score for an electronic document. The incongruent headline detection system determines the headline incongruence score for the electronic document by applying a machine learning model to the electronic document. Applying the machine learning model to the electronic document includes generating a graph representing a textual similarity between a headline of the electronic document and each of a plurality of paragraphs of the electronic document and determining the headline incongruence score using the graph. The incongruent headline detection system transmits, responsive to the request, the headline incongruence score for the electronic document.
    Type: Application
    Filed: November 17, 2021
    Publication date: May 18, 2023
    Inventor: Seunghyun Yoon
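A toy stand-in for the headline incongruence score: compare an embedding of the headline with embeddings of the paragraphs. The patent's graph-based formulation is more involved; the mean-cosine form below is only an illustrative simplification:

```python
import numpy as np

def incongruence_score(headline_vec, paragraph_vecs):
    """Score in [0, 2]: one minus the mean cosine similarity between the
    headline embedding and each paragraph embedding. Higher means the
    headline is less supported by the body text (simplified stand-in)."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    sims = [cos(headline_vec, p) for p in paragraph_vecs]
    return 1.0 - float(np.mean(sims))
```

A headline identical in meaning to every paragraph scores near 0, while an unrelated (orthogonal) headline scores near 1.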
  • Publication number: 20230136527
    Abstract: Systems and methods for natural language processing are described. One or more aspects of a method, apparatus, and non-transitory computer readable medium include receiving a text phrase; encoding the text phrase using an encoder to obtain a hidden representation of the text phrase, wherein the encoder is trained during a first training phase using self-supervised learning based on a first contrastive loss and during a second training phase using supervised learning based on a second contrastive loss; identifying an intent of the text phrase from a predetermined set of intent labels using a classification network, wherein the classification network is jointly trained with the encoder in the second training phase; and generating a response to the text phrase based on the intent.
    Type: Application
    Filed: November 4, 2021
    Publication date: May 4, 2023
    Inventors: Jianguo Zhang, Trung Huu Bui, Seunghyun Yoon, Xiang Chen, Quan Hung Tran, Walter W. Chang
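The second-phase supervised contrastive objective can be sketched as follows: embeddings that share an intent label act as positives for each other, while all other examples act as negatives. The exact loss in the patent is not given; this is one common formulation:

```python
import numpy as np

def supervised_contrastive_loss(embeds, labels, temp=0.1):
    """Supervised contrastive loss over a batch of normalized embeddings.
    Same-label pairs are pulled together; different-label pairs pushed apart."""
    z = embeds / np.linalg.norm(embeds, axis=1, keepdims=True)
    sims = (z @ z.T) / temp
    n = len(labels)
    total, count = 0.0, 0
    for i in range(n):
        others = [j for j in range(n) if j != i]
        pos = [j for j in others if labels[j] == labels[i]]
        if not pos:
            continue  # anchors with no same-label partner contribute nothing
        log_denom = np.log(np.sum(np.exp(sims[i, others])))
        total += float(np.mean(log_denom - sims[i, pos]))
        count += 1
    return total / max(count, 1)
```

Minimizing this loss clusters examples of the same intent in embedding space, which makes the jointly trained classification head's job easier.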
  • Publication number: 20220196428
    Abstract: Disclosed is a method for providing geographic information and analysis information for each user using a universal map, which includes: a map information receiving step of requesting authentication to a universal map server storing map information including one of a map image and guide information by transmitting user authentication information except socially disadvantaged type information, to receive map information from the universal map server; a guide information extraction step of extracting guide information, as guide information for providing the geographic information on the map image to the user terminal from the received map information, corresponding to the socially disadvantaged type information of the user among the user account information from a data model database; and a geographic information providing step of providing geographic information customized according to the user, by outputting geographic information reflecting attribute information of spatial objects to the user terminal based
    Type: Application
    Filed: August 31, 2021
    Publication date: June 23, 2022
    Inventors: Jaemin YOON, Sooyeon HAN, Suyeon CHO, Sunwoo KIM, Seunghyun YOON
  • Publication number: 20220138363
    Abstract: Disclosed are a system for realizing a digital twin using XML parsing of building information modeling data and an energy visualization system using the same that are implemented by a computing device, which includes: an object information definition module for defining attribute information of an energy consumption-related object constituting indoor spatial information; an address mapping module for defining a grid address for a grid region constituting the one space and mapping an object having attribute information; a digital twin implementation module for implementing digital twin data for the predetermined space in a virtual storage space; and an energy flow visualization module for applying the attribute information of the object and virtual energy application scenario data to the digital twin data, and creating energy flow visualization data.
    Type: Application
    Filed: August 23, 2021
    Publication date: May 5, 2022
    Inventors: Jaemin Yoon, Wonwoo Lee, Insun Baek, Dongkyu Na, Suyeon Cho, Seunghyun Yoon
  • Publication number: 20220074760
    Abstract: Disclosed is a method for providing geographic information and analysis information for each user using a universal map, which includes: a map information receiving step of requesting authentication to a universal map server storing map information including one of a map image and guide information by transmitting user authentication information except socially disadvantaged type information, to receive map information from the universal map server; a guide information extraction step of extracting guide information, as guide information for providing the geographic information on the map image to the user terminal from the received map information, corresponding to the socially disadvantaged type information of the user among the user account information from a data model database; and a geographic information providing step of providing geographic information customized according to the user, by outputting geographic information reflecting attribute information of spatial objects to the user terminal based
    Type: Application
    Filed: December 10, 2019
    Publication date: March 10, 2022
    Inventors: Jaemin YOON, Sooyeon HAN, Suyeon CHO, Eunjeong JANG, Sunwoo KIM, Seunghyun YOON
  • Publication number: 20220076693
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for determining speech emotion. In particular, a speech emotion recognition system generates an audio feature vector and a textual feature vector for a sequence of words. Further, the speech emotion recognition system utilizes a neural attention mechanism that intelligently blends together the audio feature vector and the textual feature vector to generate attention output. Using the attention output, which includes consideration of both audio and text modalities for speech corresponding to the sequence of words, the speech emotion recognition system can apply attention methods to one of the feature vectors to generate a hidden feature vector. Based on the hidden feature vector, the speech emotion recognition system can generate a speech emotion probability distribution of emotions among a group of candidate emotions, and then select one of the candidate emotions as corresponding to the sequence of words.
    Type: Application
    Filed: November 15, 2021
    Publication date: March 10, 2022
    Inventors: Trung Bui, Subhadeep Dey, Seunghyun Yoon
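One plausible reading of the described attention mechanism: the audio feature vector queries the per-word text states, and the attended text summary is concatenated with the audio vector to form the hidden feature vector. The dot-product scoring and concatenation are assumptions, not the patent's stated design:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def blend_audio_text(audio_vec, text_states):
    """Blend modalities: attend over per-word text states (seq_len, d) using
    the audio feature vector (d,) as the query, then concatenate the attended
    text summary with the audio vector into a hidden feature vector (2d,)."""
    weights = softmax(text_states @ audio_vec)   # attention over words
    attended = weights @ text_states             # weighted text summary
    return np.concatenate([audio_vec, attended])
```

A classifier over this hidden vector can then produce the probability distribution across the candidate emotions.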
  • Patent number: 11271876
    Abstract: The present disclosure relates to utilizing a graph neural network to accurately and flexibly identify text phrases that are relevant for responding to a query. For example, the disclosed systems can generate a graph topology having a plurality of nodes that correspond to a plurality of text phrases and a query. The disclosed systems can then utilize a graph neural network to analyze the graph topology, iteratively propagating and updating node representations corresponding to the plurality of nodes, in order to identify text phrases that can be used to respond to the query. In some embodiments, the disclosed systems can then generate a digital response to the query based on the identified text phrases.
    Type: Grant
    Filed: August 22, 2019
    Date of Patent: March 8, 2022
    Assignee: Adobe Inc.
    Inventors: Seunghyun Yoon, Franck Dernoncourt, Doo Soon Kim, Trung Bui
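The iterative propagation of node representations over the graph topology can be sketched with a simple message-passing update: each node mixes its own features with the mean of its neighbors' features. The patent's actual update rule is learned; this fixed averaging scheme is only illustrative:

```python
import numpy as np

def propagate(node_feats, adj, steps=2):
    """Iteratively update node representations over an adjacency matrix.

    node_feats: (n, d) initial node features (text phrases and the query).
    adj: (n, n) adjacency matrix of the graph topology.
    """
    h = node_feats.astype(float)
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0  # avoid dividing by zero for isolated nodes
    for _ in range(steps):
        h = 0.5 * h + 0.5 * (adj @ h) / deg  # mix self with neighbor mean
    return h
```

After propagation, each node's representation reflects its neighborhood, so the nodes most relevant to the query node can be scored and used to build the response.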
  • Patent number: 11205444
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for determining speech emotion. In particular, a speech emotion recognition system generates an audio feature vector and a textual feature vector for a sequence of words. Further, the speech emotion recognition system utilizes a neural attention mechanism that intelligently blends together the audio feature vector and the textual feature vector to generate attention output. Using the attention output, which includes consideration of both audio and text modalities for speech corresponding to the sequence of words, the speech emotion recognition system can apply attention methods to one of the feature vectors to generate a hidden feature vector. Based on the hidden feature vector, the speech emotion recognition system can generate a speech emotion probability distribution of emotions among a group of candidate emotions, and then select one of the candidate emotions as corresponding to the sequence of words.
    Type: Grant
    Filed: August 16, 2019
    Date of Patent: December 21, 2021
    Assignee: Adobe Inc.
    Inventors: Trung Bui, Subhadeep Dey, Seunghyun Yoon
  • Publication number: 20210058345
    Abstract: The present disclosure relates to utilizing a graph neural network to accurately and flexibly identify text phrases that are relevant for responding to a query. For example, the disclosed systems can generate a graph topology having a plurality of nodes that correspond to a plurality of text phrases and a query. The disclosed systems can then utilize a graph neural network to analyze the graph topology, iteratively propagating and updating node representations corresponding to the plurality of nodes, in order to identify text phrases that can be used to respond to the query. In some embodiments, the disclosed systems can then generate a digital response to the query based on the identified text phrases.
    Type: Application
    Filed: August 22, 2019
    Publication date: February 25, 2021
    Inventors: Seunghyun Yoon, Franck Dernoncourt, Doo Soon Kim, Trung Bui
  • Publication number: 20210050033
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for determining speech emotion. In particular, a speech emotion recognition system generates an audio feature vector and a textual feature vector for a sequence of words. Further, the speech emotion recognition system utilizes a neural attention mechanism that intelligently blends together the audio feature vector and the textual feature vector to generate attention output. Using the attention output, which includes consideration of both audio and text modalities for speech corresponding to the sequence of words, the speech emotion recognition system can apply attention methods to one of the feature vectors to generate a hidden feature vector. Based on the hidden feature vector, the speech emotion recognition system can generate a speech emotion probability distribution of emotions among a group of candidate emotions, and then select one of the candidate emotions as corresponding to the sequence of words.
    Type: Application
    Filed: August 16, 2019
    Publication date: February 18, 2021
    Inventors: Trung Bui, Subhadeep Dey, Seunghyun Yoon
  • Publication number: 20170065311
    Abstract: An anterior cervical plate assembly is characterized in that it comprises a base plate and fastener with a two-part locking mechanism. The locking mechanism includes a blocking plate that translates along a recess in the base plate to partially overlap a fastener, preventing back-out. The locking mechanism also includes a locking piece that extends through the base plate and has a rotational cam surface that causes the translation of the blocking plate.
    Type: Application
    Filed: September 6, 2016
    Publication date: March 9, 2017
    Applicant: GS Medical Inc.
    Inventors: Milan George, Seunghyun Yoon
  • Patent number: D1023857
    Type: Grant
    Filed: December 10, 2021
    Date of Patent: April 23, 2024
    Assignee: LG ELECTRONICS INC.
    Inventors: Mujin Park, Dongeun Kim, Hyunjin Yoon, Euiseok Lee, Jaebeom Im, Sanghyun Choi, Seunghyun Ji
  • Patent number: D1024353
    Type: Grant
    Filed: December 10, 2021
    Date of Patent: April 23, 2024
    Assignee: LG ELECTRONICS INC.
    Inventors: Seunghyun Ji, Dongeun Kim, Hyunjin Yoon, Euiseok Lee, Jaebeom Im, Sanghyun Choi, Mujin Park