Patents by Inventor Doo Soon Kim

Doo Soon Kim has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11893345
    Abstract: Systems and methods for natural language processing are described. One or more embodiments of the present disclosure receive a document comprising a plurality of words organized into a plurality of sentences, the words comprising an event trigger word and an argument candidate word, generate word representation vectors for the words, generate a plurality of document structures including a semantic structure for the document based on the word representation vectors, a syntax structure representing dependency relationships between the words, and a discourse structure representing discourse information of the document based on the plurality of sentences, generate a relationship representation vector based on the document structures, and predict a relationship between the event trigger word and the argument candidate word based on the relationship representation vector.
    Type: Grant
    Filed: April 6, 2021
    Date of Patent: February 6, 2024
    Assignee: Adobe Inc.
    Inventors: Amir Pouran Ben Veyseh, Franck Dernoncourt, Quan Tran, Varun Manjunatha, Lidan Wang, Rajiv Jain, Doo Soon Kim, Walter Chang
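The abstract above fuses semantic, syntax, and discourse structures into a single relationship representation for the trigger/argument pair. A minimal NumPy sketch of that fusion step is below; treating each structure as an adjacency matrix, using mean aggregation, and concatenating the trigger and argument views are illustrative assumptions, not the patented architecture:

```python
import numpy as np

def relationship_representation(word_vecs, structures, trigger_idx, arg_idx):
    """Mix word representations through each document structure (semantic,
    syntax, discourse adjacency matrices), then concatenate the event-trigger
    and argument-candidate vectors from every view into one representation."""
    views = []
    for adj in structures:
        # Mean-aggregate each word's neighbors under this structure.
        mixed = adj @ word_vecs / np.maximum(adj.sum(1, keepdims=True), 1.0)
        views.append(np.concatenate([mixed[trigger_idx], mixed[arg_idx]]))
    return np.concatenate(views)
```

A downstream classifier would then predict the trigger-argument relationship from this vector.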
  • Publication number: 20230403175
    Abstract: Systems and methods for coreference resolution are provided. One aspect of the systems and methods includes inserting a speaker tag into a transcript, wherein the speaker tag indicates that a name in the transcript corresponds to a speaker of a portion of the transcript; encoding a plurality of candidate spans from the transcript based at least in part on the speaker tag to obtain a plurality of span vectors; extracting a plurality of entity mentions from the transcript based on the plurality of span vectors, wherein each of the plurality of entity mentions corresponds to one of the plurality of candidate spans; and generating coreference information for the transcript based on the plurality of entity mentions, wherein the coreference information indicates that a pair of candidate spans of the plurality of candidate spans corresponds to a pair of entity mentions that refer to a same entity.
    Type: Application
    Filed: June 14, 2022
    Publication date: December 14, 2023
    Inventors: Tuan Manh Lai, Trung Huu Bui, Doo Soon Kim
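The first step described in this abstract, inserting speaker tags so the encoder can tie names to speakers, can be sketched in a few lines. The `<spk>...</spk>` tag format here is an illustrative assumption; the filing does not specify the tag syntax:

```python
def insert_speaker_tags(turns):
    """Prepend a speaker tag to each transcript turn so a downstream
    encoder can link names in the transcript to the speaker of each
    utterance. `turns` is a list of (speaker_name, utterance) pairs."""
    tagged = []
    for speaker, utterance in turns:
        tagged.append(f"<spk> {speaker} </spk> {utterance}")
    return " ".join(tagged)
```

The tagged transcript would then be encoded into span vectors for mention extraction and coreference clustering.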
  • Publication number: 20230297603
    Abstract: Systems and methods for natural language processing are described. Embodiments of the present disclosure identify a task set including a plurality of pseudo tasks, wherein each of the plurality of pseudo tasks includes a support set corresponding to a first natural language processing (NLP) task and a query set corresponding to a second NLP task; update a machine learning model in an inner loop based on the support set; update the machine learning model in an outer loop based on the query set; and perform the second NLP task using the machine learning model.
    Type: Application
    Filed: March 18, 2022
    Publication date: September 21, 2023
    Inventors: Meryem M'hamdi, Doo Soon Kim, Franck Dernoncourt, Trung Huu Bui
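The inner/outer update scheme described above follows the shape of gradient-based meta-learning. A first-order sketch on a toy one-parameter regression loss, with learning rates and loss chosen purely for illustration:

```python
def meta_step(w, support, query, inner_lr=0.1, outer_lr=0.1):
    """One meta-learning step on a pseudo task: adapt a copy of the
    parameters on the support set (inner loop), then update the shared
    parameters from the query-set gradient of the adapted model (outer
    loop). Uses a squared-error loss for a 1-D linear model y = w * x."""
    grad = lambda w, data: sum(2 * (w * x - y) * x for x, y in data) / len(data)
    # Inner loop: adapt on the support set (first NLP task).
    w_adapted = w - inner_lr * grad(w, support)
    # Outer loop: update shared parameters using the query set (second NLP task).
    return w - outer_lr * grad(w_adapted, query)
```

Repeating this step over many pseudo tasks yields parameters that adapt quickly to the second NLP task.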
  • Publication number: 20230267726
    Abstract: Embodiments of the disclosure provide a machine learning model for generating a predicted executable command for an image. The machine learning model includes an interface configured to obtain an utterance indicating a request associated with the image, an utterance sub-model, a visual sub-model, an attention network, and a selection gate. The machine learning model generates a segment of the predicted executable command from weighted probabilities of each candidate token in a predetermined vocabulary, determined based on the visual features, the concept features, the current command features, and the utterance features extracted from the utterance or the image.
    Type: Application
    Filed: February 18, 2022
    Publication date: August 24, 2023
    Inventors: Seunghyun Yoon, Trung Huu Bui, Franck Dernoncourt, Hyounghun Kim, Doo Soon Kim
  • Patent number: 11709690
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating coachmarks and concise instructions based on operation descriptions for performing application operations. For example, the disclosed systems can utilize a multi-task summarization neural network to analyze an operation description and generate a coachmark and a concise instruction corresponding to the operation description. In addition, the disclosed systems can provide a coachmark and a concise instruction for display within a user interface to, directly within a client application, guide a user to perform an operation by interacting with a particular user interface element.
    Type: Grant
    Filed: March 9, 2020
    Date of Patent: July 25, 2023
    Assignee: Adobe Inc.
    Inventors: Nedim Lipka, Doo Soon Kim
  • Patent number: 11630952
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that can classify term sequences within a source text based on textual features analyzed by both an implicit-class-recognition model and an explicit-class-recognition model. For example, by applying machine-learning models for both implicit and explicit class recognition, the disclosed systems can determine a class corresponding to a particular term sequence within a source text and identify the particular term sequence reflecting the class. The dual-model architecture can equip the disclosed systems to apply (i) the implicit-class-recognition model to recognize implicit references to a class in source texts and (ii) the explicit-class-recognition model to recognize explicit references to the same class in source texts.
    Type: Grant
    Filed: July 22, 2019
    Date of Patent: April 18, 2023
    Assignee: Adobe Inc.
    Inventors: Sean MacAvaney, Franck Dernoncourt, Walter Chang, Seokhwan Kim, Doo Soon Kim, Chen Fang
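The dual-model architecture above pairs an implicit-class-recognition model with an explicit one. One simple way to combine two such models, shown here with hypothetical per-class confidence dictionaries rather than the patented scoring scheme, is to keep the higher-confidence signal per class:

```python
def classify(scores_implicit, scores_explicit):
    """Dual-model fusion sketch: for each class, take the higher
    confidence from the implicit- and explicit-recognition models,
    then predict the best-scoring class overall."""
    classes = set(scores_implicit) | set(scores_explicit)
    fused = {c: max(scores_implicit.get(c, 0.0), scores_explicit.get(c, 0.0))
             for c in classes}
    return max(fused, key=fused.get)
```

This lets an explicit mention ("I want a refund") or an implicit cue recognized only by the implicit model each drive the final class.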
  • Patent number: 11620457
    Abstract: Systems and methods for sentence fusion are described. Embodiments receive coreference information for a first sentence and a second sentence, wherein the coreference information identifies entities associated with both a term of the first sentence and a term of the second sentence, apply an entity constraint to an attention head of a sentence fusion network, wherein the entity constraint limits attention weights of the attention head to terms that correspond to a same entity of the coreference information, and predict a fused sentence using the sentence fusion network based on the entity constraint, wherein the fused sentence combines information from the first sentence and the second sentence.
    Type: Grant
    Filed: February 17, 2021
    Date of Patent: April 4, 2023
    Assignee: Adobe Inc.
    Inventors: Logan Lebanoff, Franck Dernoncourt, Doo Soon Kim, Lidan Wang, Walter Chang
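The entity constraint described above, limiting an attention head to token pairs in the same coreference chain, amounts to masking the attention scores before the softmax. A NumPy sketch, where always allowing self-attention and using `-1` for tokens outside any entity chain are illustrative assumptions:

```python
import numpy as np

def entity_attention_mask(entity_ids):
    """Mask limiting an attention head to token pairs whose coreference
    entity ids match; -inf entries drop out of the softmax."""
    ids = np.asarray(entity_ids)
    same = (ids[:, None] == ids[None, :]) & (ids[:, None] >= 0)
    same |= np.eye(len(ids), dtype=bool)  # a token may always attend to itself
    return np.where(same, 0.0, -np.inf)

def constrained_attention(scores, entity_ids):
    """Apply the entity constraint to raw attention scores, then softmax."""
    masked = scores + entity_attention_mask(entity_ids)
    exp = np.exp(masked - masked.max(axis=-1, keepdims=True))
    return exp / exp.sum(axis=-1, keepdims=True)
```

With this mask in place, attention weight flows only between mentions of the same entity, steering the fusion network toward coreferent content.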
  • Patent number: 11561969
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media for generating pairs of natural language queries and corresponding query-language representations. For example, the disclosed systems can generate a contextual representation of a prior-generated dialogue sequence to compare with logical-form rules. In some implementations, the logical-form rules comprise trigger conditions and corresponding logical-form actions for constructing a logical-form representation of a subsequent dialogue sequence. Based on the comparison to logical-form rules indicating satisfaction of one or more trigger conditions, the disclosed systems can perform logical-form actions to generate a logical-form representation of a subsequent dialogue sequence.
    Type: Grant
    Filed: March 30, 2020
    Date of Patent: January 24, 2023
    Assignee: Adobe Inc.
    Inventors: Doo Soon Kim, Anthony M Colas, Franck Dernoncourt, Moumita Sinha, Trung Bui
  • Patent number: 11537950
    Abstract: This disclosure describes one or more implementations of a text sequence labeling system that accurately and efficiently utilize a joint-learning self-distillation approach to improve text sequence labeling machine-learning models. For example, in various implementations, the text sequence labeling system trains a text sequence labeling machine-learning teacher model to generate text sequence labels. The text sequence labeling system then creates and trains a text sequence labeling machine-learning student model utilizing the training and the output of the teacher model. Upon the student model achieving improved results over the teacher model, the text sequence labeling system re-initializes the teacher model with the learned model parameters of the student model and repeats the above joint-learning self-distillation framework. The text sequence labeling system then utilizes a trained text sequence labeling model to generate text sequence labels from input documents.
    Type: Grant
    Filed: October 14, 2020
    Date of Patent: December 27, 2022
    Assignee: Adobe Inc.
    Inventors: Trung Bui, Tuan Manh Lai, Quan Tran, Doo Soon Kim
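The joint-learning self-distillation loop above can be sketched control-flow-first. `train_model` and `evaluate` below are caller-supplied stand-ins for real training and evaluation, not APIs from the filing:

```python
def self_distillation(train_model, evaluate, init_params, rounds=3):
    """Self-distillation sketch: train a student from gold labels plus
    the teacher's output; whenever the student beats the teacher,
    re-initialize the teacher with the student's parameters and repeat."""
    teacher = train_model(init_params, labels="gold")
    for _ in range(rounds):
        # The student learns from gold labels and the teacher's predictions.
        student = train_model(teacher, labels="gold+teacher")
        if evaluate(student) > evaluate(teacher):
            teacher = student  # promote the student to teacher
    return teacher
```

The final promoted model is then used to label text sequences in input documents.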
  • Publication number: 20220383150
    Abstract: This disclosure describes methods, non-transitory computer readable storage media, and systems that provide a platform for on-demand selection of machine-learning models and on-demand learning of parameters for the selected machine-learning models via cloud-based systems. For instance, the disclosed system receives a request indicating a selection of a machine-learning model to perform a machine-learning task (e.g., a natural language task) utilizing a specific dataset (e.g., a user-defined dataset). The disclosed system utilizes a scheduler to monitor available computing devices on cloud-based storage systems for instantiating the selected machine-learning model. Using the indicated dataset at a determined cloud-based computing device, the disclosed system automatically trains the machine-learning model.
    Type: Application
    Filed: May 26, 2021
    Publication date: December 1, 2022
    Inventors: Nham Van Le, Tuan Manh Lai, Trung Bui, Doo Soon Kim
  • Patent number: 11514338
    Abstract: An activity planning system comprises a knowledge base, a query processor, and a temporal reasoner. A query including temporal constraints is input into the query processor. The query processor converts the query into a formal representation: a formal graphical semantic representation grounded on an ontology defined in the knowledge base. The temporal reasoner processes the query representation output by the query processor against the knowledge base, which defines a set of objects. For each object, the temporal reasoner produces a normalized score from 0 to 1 indicating how likely the object is to satisfy the temporal constraints imposed by the query.
    Type: Grant
    Filed: June 27, 2017
    Date of Patent: November 29, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Doo Soon Kim, Fuliang Weng
  • Publication number: 20220374426
    Abstract: Systems and methods for natural language processing are described. One or more embodiments of the present disclosure receive a query related to information in a table, compute an operation selector by combining the query with an operation embedding representing a plurality of table operations, compute a column selector by combining the query with a weighted operation embedding, compute a row selector based on the operation selector and the column selector, compute a probability value for a cell in the table based on the row selector and the column selector, where the probability value represents a probability that the cell provides an answer to the query, and transmit contents of the cell based on the probability value.
    Type: Application
    Filed: May 11, 2021
    Publication date: November 24, 2022
    Inventors: Dung Thai, Doo Soon Kim, Franck Dernoncourt, Trung Bui
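The selector composition described above (operation selector, weighted operation embedding, column and row selectors, then per-cell probabilities) can be sketched with dot-product scoring. The shapes and the outer-product combination of row and column selectors are illustrative assumptions, not the patented scoring functions:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def cell_probabilities(query, op_emb, col_emb, table_rows):
    """Score table operations against the query, re-weight the operation
    embedding, score columns and rows against the combined query, and
    return a (row, column) grid of answer-cell probabilities."""
    op_sel = softmax(op_emb @ query)                      # operation selector
    weighted_op = op_sel @ op_emb                         # weighted operation embedding
    col_sel = softmax(col_emb @ (query + weighted_op))    # column selector
    row_sel = softmax(table_rows @ (query + weighted_op)) # row selector
    return np.outer(row_sel, col_sel)                     # per-cell probability
```

The cell with the highest probability would then be transmitted as the answer to the query.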
  • Patent number: 11468880
    Abstract: Dialog system training techniques using a simulated user system are described. In one example, a simulated user system supports multiple agents. The dialog system, for instance, may be configured for use with an application (e.g., digital image editing application). The simulated user system may therefore simulate user actions involving both the application and the dialog system which may be used to train the dialog system. Additionally, the simulated user system is not limited to simulation of user interactions by a single input mode (e.g., natural language inputs), but also supports multimodal inputs.
    Type: Grant
    Filed: November 21, 2018
    Date of Patent: October 11, 2022
    Assignee: Adobe Inc.
    Inventors: Tzu-Hsiang Lin, Trung Huu Bui, Doo Soon Kim
  • Publication number: 20220318505
    Abstract: Systems and methods for natural language processing are described. One or more embodiments of the present disclosure receive a document comprising a plurality of words organized into a plurality of sentences, the words comprising an event trigger word and an argument candidate word, generate word representation vectors for the words, generate a plurality of document structures including a semantic structure for the document based on the word representation vectors, a syntax structure representing dependency relationships between the words, and a discourse structure representing discourse information of the document based on the plurality of sentences, generate a relationship representation vector based on the document structures, and predict a relationship between the event trigger word and the argument candidate word based on the relationship representation vector.
    Type: Application
    Filed: April 6, 2021
    Publication date: October 6, 2022
    Inventors: Amir Pouran Ben Veyseh, Franck Dernoncourt, Quan Tran, Varun Manjunatha, Lidan Wang, Rajiv Jain, Doo Soon Kim, Walter Chang
  • Publication number: 20220261555
    Abstract: Systems and methods for sentence fusion are described. Embodiments receive coreference information for a first sentence and a second sentence, wherein the coreference information identifies entities associated with both a term of the first sentence and a term of the second sentence, apply an entity constraint to an attention head of a sentence fusion network, wherein the entity constraint limits attention weights of the attention head to terms that correspond to a same entity of the coreference information, and predict a fused sentence using the sentence fusion network based on the entity constraint, wherein the fused sentence combines information from the first sentence and the second sentence.
    Type: Application
    Filed: February 17, 2021
    Publication date: August 18, 2022
    Inventors: Logan Lebanoff, Franck Dernoncourt, Doo Soon Kim, Lidan Wang, Walter Chang
  • Patent number: 11307881
    Abstract: In implementations of systems for generating suggestions with knowledge graph embedding vectors, a computing device implements a suggestion system to receive input data describing user interactions with an application for editing digital content. The suggestion system generates input embedding vectors based on the user interactions with the application and determines an item based on the input embedding vectors and knowledge graph embedding vectors generated from nodes of a knowledge graph describing a tutorial for editing digital content. The suggestion system generates an indication of the item for display in a user interface of a display device.
    Type: Grant
    Filed: November 11, 2020
    Date of Patent: April 19, 2022
    Assignee: Adobe Inc.
    Inventors: Ripul Bhutani, Oliver Markus Michael Brdiczka, Doo Soon Kim, Aliakbar Darabi, Yinglan Ma
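The matching step above, comparing interaction embeddings against knowledge-graph node embeddings to pick a suggestion, can be sketched with cosine similarity. Averaging the interaction vectors into a profile is an illustrative assumption; the filing does not specify the comparison:

```python
import numpy as np

def suggest(input_vecs, kg_vecs, items):
    """Suggestion sketch: summarize recent user-interaction embeddings,
    compare against knowledge-graph node embeddings by cosine similarity,
    and return the best-matching tutorial item."""
    norm = lambda m: m / np.linalg.norm(m, axis=-1, keepdims=True)
    profile = norm(input_vecs).mean(axis=0)   # summarize recent interactions
    scores = norm(kg_vecs) @ profile          # cosine similarity per KG node
    return items[int(np.argmax(scores))]
```

An indication of the returned item would then be displayed in the application's user interface.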
  • Publication number: 20220114476
    Abstract: This disclosure describes one or more implementations of a text sequence labeling system that accurately and efficiently utilize a joint-learning self-distillation approach to improve text sequence labeling machine-learning models. For example, in various implementations, the text sequence labeling system trains a text sequence labeling machine-learning teacher model to generate text sequence labels. The text sequence labeling system then creates and trains a text sequence labeling machine-learning student model utilizing the training and the output of the teacher model. Upon the student model achieving improved results over the teacher model, the text sequence labeling system re-initializes the teacher model with the learned model parameters of the student model and repeats the above joint-learning self-distillation framework. The text sequence labeling system then utilizes a trained text sequence labeling model to generate text sequence labels from input documents.
    Type: Application
    Filed: October 14, 2020
    Publication date: April 14, 2022
    Inventors: Trung Bui, Tuan Manh Lai, Quan Tran, Doo Soon Kim
  • Patent number: 11271876
    Abstract: The present disclosure relates to utilizing a graph neural network to accurately and flexibly identify text phrases that are relevant for responding to a query. For example, the disclosed systems can generate a graph topology having a plurality of nodes that correspond to a plurality of text phrases and a query. The disclosed systems can then utilize a graph neural network to analyze the graph topology, iteratively propagating and updating node representations corresponding to the plurality of nodes, in order to identify text phrases that can be used to respond to the query. In some embodiments, the disclosed systems can then generate a digital response to the query based on the identified text phrases.
    Type: Grant
    Filed: August 22, 2019
    Date of Patent: March 8, 2022
    Assignee: Adobe Inc.
    Inventors: Seunghyun Yoon, Franck Dernoncourt, Doo Soon Kim, Trung Bui
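The iterative propagation of node representations described above is the core of any graph neural network layer. A minimal sketch without learned weights, using mean neighbor aggregation, a residual connection, and a tanh nonlinearity as illustrative choices:

```python
import numpy as np

def propagate(adj, node_states, steps=2):
    """Iteratively update each node's representation from the mean of its
    neighbors' states, keeping a residual of the node's own state."""
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0  # avoid division by zero for isolated nodes
    h = node_states
    for _ in range(steps):
        h = np.tanh(adj @ h / deg + h)  # aggregate neighbors + residual
    return h
```

After propagation, the text-phrase nodes most similar to the query node would be selected to compose the digital response.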
  • Patent number: 11170158
    Abstract: Techniques are disclosed for abstractive summarization process for summarizing documents, including long documents. A document is encoded using an encoder-decoder architecture with attentive decoding. In particular, an encoder for modeling documents generates both word-level and section-level representations of a document. A discourse-aware decoder then captures the information flow from all discourse sections of a document. In order to extend the robustness of the generated summarization, a neural attention mechanism considers both word-level as well as section-level representations of a document. The neural attention mechanism may utilize a set of weights that are applied to the word-level representations and section-level representations.
    Type: Grant
    Filed: March 8, 2018
    Date of Patent: November 9, 2021
    Assignee: Adobe Inc.
    Inventors: Arman Cohan, Walter W. Chang, Trung Huu Bui, Franck Dernoncourt, Doo Soon Kim
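The discourse-aware attention described above considers both word-level and section-level representations. A NumPy sketch where word scores are modulated by the attention weight of the section each word belongs to; the dot-product scoring and log-space combination are illustrative assumptions:

```python
import numpy as np

def discourse_aware_context(query, word_reprs, section_of_word, section_reprs):
    """Attend over words while weighting each word by the relevance of
    its discourse section, then return a context vector for the decoder."""
    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()
    sec_attn = softmax(section_reprs @ query)       # section-level attention
    word_scores = word_reprs @ query                # word-level scores
    # Combine: boost words from relevant sections (log-space product).
    combined = softmax(word_scores + np.log(sec_attn[section_of_word] + 1e-9))
    return combined @ word_reprs                    # context vector
```

The decoder then favors words drawn from the most relevant discourse sections of the document when generating the summary.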
  • Publication number: 20210303555
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media for generating pairs of natural language queries and corresponding query-language representations. For example, the disclosed systems can generate a contextual representation of a prior-generated dialogue sequence to compare with logical-form rules. In some implementations, the logical-form rules comprise trigger conditions and corresponding logical-form actions for constructing a logical-form representation of a subsequent dialogue sequence. Based on the comparison to logical-form rules indicating satisfaction of one or more trigger conditions, the disclosed systems can perform logical-form actions to generate a logical-form representation of a subsequent dialogue sequence.
    Type: Application
    Filed: March 30, 2020
    Publication date: September 30, 2021
    Inventors: Doo Soon Kim, Anthony M Colas, Franck Dernoncourt, Moumita Sinha, Trung Bui
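The logical-form rules described above pair trigger conditions with logical-form actions. A minimal sketch of that rule-application loop, where representing triggers as predicates over the dialogue context and actions as callables is an illustrative assumption:

```python
def apply_logical_form_rules(context, rules):
    """Apply each rule whose trigger condition the dialogue context
    satisfies; the satisfied rules' actions build up the logical-form
    representation of the next dialogue sequence."""
    logical_form = []
    for trigger, action in rules:
        if trigger(context):
            logical_form.append(action(context))
    return logical_form
```

Pairing each generated logical form with its natural language query yields the training pairs the system produces.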