Patents by Inventor FRANCK DERNONCOURT

Franck Dernoncourt has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220318520
    Abstract: Systems and methods for natural language processing are described. One or more embodiments of the present disclosure generate a word embedding for each word of an input phrase, wherein the input phrase indicates a sentiment toward an aspect term, compute a gate vector based on the aspect term, identify a dependency tree representing relations between words of the input phrase, generate a representation vector based on the dependency tree and the word embedding using a graph convolution network, wherein the gate vector is applied to a layer of the graph convolution network, and generate a probability distribution over a plurality of sentiments based on the representation vector.
    Type: Application
    Filed: March 31, 2021
    Publication date: October 6, 2022
    Inventors: Amir Pouran Ben Veyseh, Franck Dernoncourt
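    Illustrative sketch (Python; not the patented implementation, and the layer shapes, sigmoid gate form, and toy inputs are assumptions): an aspect-derived gate vector modulates the output of one graph-convolution layer computed over a dependency-tree adjacency matrix, and the pooled result is mapped to a distribution over sentiments.

        import torch
        import torch.nn as nn

        class GatedGCNLayer(nn.Module):
            def __init__(self, dim):
                super().__init__()
                self.linear = nn.Linear(dim, dim)
                self.gate = nn.Linear(dim, dim)  # hypothetical: gate derived from the aspect embedding

            def forward(self, word_vecs, adjacency, aspect_vec):
                # adjacency: (num_words, num_words) from a dependency tree; word_vecs: (num_words, dim)
                neighbors = adjacency @ self.linear(word_vecs)  # aggregate dependency-tree neighbors
                gate = torch.sigmoid(self.gate(aspect_vec))     # gate vector based on the aspect term
                return torch.relu(neighbors * gate)             # gate applied to this GCN layer

        layer = GatedGCNLayer(dim=8)
        words = torch.randn(5, 8)            # word embeddings for a 5-word input phrase
        adj = torch.eye(5)                   # stand-in for a dependency-tree adjacency matrix
        aspect = torch.randn(8)              # embedding of the aspect term
        hidden = layer(words, adj, aspect)
        probs = torch.softmax(nn.Linear(8, 3)(hidden.mean(dim=0)), dim=-1)  # 3 sentiment classes
        print(probs)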
  • Publication number: 20220292263
    Abstract: Systems and methods for natural language processing are described. One or more embodiments of the disclosure provide an entity matching apparatus trained using machine learning techniques to determine whether a query name corresponds to a candidate name based on a similarity score. In some examples, the query name and the candidate name are encoded using a character encoder to produce a regularized input sequence and a regularized candidate sequence, respectively. The regularized input sequence and the regularized candidate sequence are formed from a regularized character set having fewer characters than a natural language character set.
    Type: Application
    Filed: March 12, 2021
    Publication date: September 15, 2022
    Inventors: Lidan Wang, Franck Dernoncourt
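    Illustrative sketch (Python; the regularization rule and the edit-ratio scorer are assumptions standing in for the trained character encoder): both names are folded onto a smaller regularized character set before a similarity score decides whether the query name matches the candidate name.

        import unicodedata
        from difflib import SequenceMatcher

        def regularize(name: str) -> str:
            # Fold to lowercase ASCII letters, digits, and spaces: a character set far smaller
            # than the full natural-language character set.
            folded = unicodedata.normalize("NFKD", name).encode("ascii", "ignore").decode()
            return "".join(c for c in folded.lower() if c.isalnum() or c.isspace())

        def similarity(query: str, candidate: str) -> float:
            # Stand-in scorer; the disclosure trains an encoder rather than using edit similarity.
            return SequenceMatcher(None, regularize(query), regularize(candidate)).ratio()

        print(similarity("Renée O'Connor", "Renee OConnor"))  # close to 1.0 after regularization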
  • Publication number: 20220277186
    Abstract: The present disclosure describes systems and methods for dialog processing and information retrieval. Embodiments of the present disclosure provide a dialog system (e.g., a task-oriented dialog system) with adaptive recurrent hopping and dual context encoding to receive and understand a natural language query from a user, manage dialog based on natural language conversation, and generate natural language responses. For example, a memory network can employ a memory recurrent neural net layer and a decision meta network (e.g., a subnet) to determine an adaptive number of memory hops for obtaining readouts from a knowledge base. Further, in some embodiments, a memory network uses a dual context encoder to encode information from original context and canonical context using parallel encoding layers.
    Type: Application
    Filed: February 26, 2021
    Publication date: September 1, 2022
    Inventors: Quan Tran, Franck Dernoncourt, Walter Chang
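    Illustrative sketch (Python; the halting rule, dimensions, and single-layer decision subnet are assumptions): a memory network takes attention readouts from a knowledge-base memory, with a small decision subnet choosing whether to take another hop.

        import torch
        import torch.nn as nn

        class AdaptiveHopper(nn.Module):
            def __init__(self, dim, max_hops=4):
                super().__init__()
                self.decide = nn.Linear(dim, 1)  # hypothetical decision meta network
                self.max_hops = max_hops

            def forward(self, query, memory):
                state = query
                for _ in range(self.max_hops):
                    attn = torch.softmax(memory @ state, dim=0)  # address the knowledge-base memory
                    state = state + attn @ memory                # readout updates the dialog state
                    if torch.sigmoid(self.decide(state)) < 0.5:  # adaptive number of memory hops
                        break
                return state

        hopper = AdaptiveHopper(dim=16)
        out = hopper(torch.randn(16), torch.randn(10, 16))  # query vector, 10 memory slots
        print(out.shape)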
  • Publication number: 20220261555
    Abstract: Systems and methods for sentence fusion are described. Embodiments receive coreference information for a first sentence and a second sentence, wherein the coreference information identifies entities associated with both a term of the first sentence and a term of the second sentence, apply an entity constraint to an attention head of a sentence fusion network, wherein the entity constraint limits attention weights of the attention head to terms that correspond to a same entity of the coreference information, and predict a fused sentence using the sentence fusion network based on the entity constraint, wherein the fused sentence combines information from the first sentence and the second sentence.
    Type: Application
    Filed: February 17, 2021
    Publication date: August 18, 2022
    Inventors: Logan Lebanoff, Franck Dernoncourt, Doo Soon Kim, Lidan Wang, Walter Chang
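    Illustrative sketch (Python; the token indices and coreference clusters are made up): a boolean mask limits one attention head to token pairs that the coreference information assigns to the same entity.

        import torch

        tokens = ["Ada", "wrote", "the", "memo", ".", "She", "sent", "it", "."]
        entity_of = {0: "E1", 5: "E1", 3: "E2", 7: "E2"}  # toy coreference clusters by token index

        n = len(tokens)
        mask = torch.zeros(n, n, dtype=torch.bool)
        for i in range(n):
            for j in range(n):
                mask[i, j] = entity_of.get(i) is not None and entity_of.get(i) == entity_of.get(j)

        scores = torch.randn(n, n)                                    # raw scores for one attention head
        constrained = scores.masked_fill(~mask, float("-inf"))        # entity constraint on that head
        weights = torch.softmax(constrained, dim=-1).nan_to_num(0.0)  # rows with no entity get zero weight
        print(weights[0])  # "Ada" attends only to itself and "She"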
  • Publication number: 20220253477
    Abstract: The present disclosure describes systems and methods for information retrieval. Embodiments of the disclosure provide a retrieval network that leverages external knowledge to provide reformulated search query suggestions, enabling more efficient network searching and information retrieval. For example, a search query from a user (e.g., a query mention of a knowledge graph entity that is included in a search query from a user) may be added to a knowledge graph as a surrogate entity via entity linking. Embedding techniques are then invoked on the updated knowledge graph (e.g., the knowledge graph that includes additional edges between surrogate entities and other entities of the original knowledge graph), and entities neighboring the surrogate entity are retrieved based on the embedding (e.g., based on a computed distance between the surrogate entity and candidate entities in the embedding space). Search results can then be ranked and displayed based on relevance to the neighboring entity.
    Type: Application
    Filed: February 8, 2021
    Publication date: August 11, 2022
    Inventors: Nedim Lipka, Seyedsaed Rezayidemne, Vishwa Vinay, Ryan Rossi, Franck Dernoncourt, Tracy Holloway King
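    Illustrative sketch (Python; the two-dimensional toy embeddings and entity names are invented): a query mention is linked into the knowledge graph as a surrogate entity, embedded near its linked entity, and its nearest neighbors in the embedding space drive the reformulated query suggestions.

        import numpy as np

        entity_vecs = {"jaguar_car": np.array([1.0, 0.1]),
                       "land_rover": np.array([0.9, 0.2]),
                       "jaguar_animal": np.array([0.1, 1.0])}

        # Entity linking ties the query mention to one graph entity (assumed here to be "jaguar_car").
        surrogate = entity_vecs["jaguar_car"] + 0.05 * np.random.randn(2)

        def cosine(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        neighbors = sorted(entity_vecs, key=lambda e: cosine(surrogate, entity_vecs[e]), reverse=True)
        print(neighbors)  # entities closest to the surrogate drive the ranked search results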
  • Publication number: 20220245179
    Abstract: Systems and methods for similarity search are described. Embodiments identify a document and a query corresponding to a matching phrase in the document, encode the query and a candidate phrase, score the candidate phrase using at least one learning-based score and at least one surface form score, wherein the at least one learning-based score is based on the encoding, and the at least one surface form score is based on a surface form of the query and a surface form of the candidate phrase, and select the matching phrase based on the scoring.
    Type: Application
    Filed: February 1, 2021
    Publication date: August 4, 2022
    Inventors: Franck Dernoncourt, Amir Pouran Ben Veyseh
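    Illustrative sketch (Python; the cosine stand-in for the learned score and the 0.7 weighting are assumptions): each candidate phrase gets one learning-based score and one surface-form score, and the two are combined to pick the matching phrase.

        from difflib import SequenceMatcher
        import numpy as np

        def learned_score(query_vec: np.ndarray, cand_vec: np.ndarray) -> float:
            # Stand-in for the encoder-based score described in the abstract.
            return float(query_vec @ cand_vec / (np.linalg.norm(query_vec) * np.linalg.norm(cand_vec)))

        def surface_score(query: str, candidate: str) -> float:
            return SequenceMatcher(None, query.lower(), candidate.lower()).ratio()

        def combined(query, candidate, qv, cv, alpha=0.7):
            return alpha * learned_score(qv, cv) + (1 - alpha) * surface_score(query, candidate)

        qv, cv = np.random.rand(8), np.random.rand(8)  # toy encodings of query and candidate
        print(combined("heart attack", "myocardial infarction", qv, cv))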
  • Publication number: 20220179848
    Abstract: The present disclosure provides a memory-based neural network for question answering. Embodiments of the disclosure identify meta-evidence nodes in an embedding space, where the meta-evidence nodes represent salient features of a training set. Each element of the training set may include a question appended to a ground truth answer. The training set may also include questions with wrong answers that are indicated as such. In some examples, a neural Turing machine (NTM) reads a dataset and summarizes the dataset into a few meta-evidence nodes. A subsequent question may be appended to multiple candidate answers to form an input phrase, which may also be embedded in the embedding space. Then, corresponding weights may be identified for each of the meta-evidence nodes. The embedded input phrase and the weighted meta-evidence nodes may be used to identify the most appropriate answer.
    Type: Application
    Filed: December 9, 2020
    Publication date: June 9, 2022
    Inventors: Quan Tran, Walter Chang, Franck Dernoncourt
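    Illustrative sketch (Python; the dimensions and dot-product scoring are assumptions): an embedded question-plus-candidate-answer phrase attends over a few meta-evidence nodes, and the weighted readout scores how appropriate the candidate answer is.

        import torch

        meta_evidence = torch.randn(4, 32)  # a few nodes summarizing salient features of the training set
        phrase = torch.randn(32)            # embedding of a question appended to one candidate answer

        weights = torch.softmax(meta_evidence @ phrase, dim=0)  # weight identified per meta-evidence node
        readout = weights @ meta_evidence                       # weighted meta-evidence summary
        score = torch.dot(readout, phrase)                      # higher score => more appropriate answer
        print(float(score))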
  • Publication number: 20220147770
    Abstract: Certain embodiments involve using a machine-learning tool to generate metadata identifying segments and topics for text within a document. For instance, in some embodiments, a text processing system obtains input text and applies a segmentation-and-labeling model to the input text. The segmentation-and-labeling model is trained to generate a predicted segment for the input text using a segmentation network. The segmentation-and-labeling model is also trained to generate a topic for the predicted segment by applying a pooling network of the model to the predicted segment. The output of the model is usable for generating metadata identifying the predicted segment and the associated topic.
    Type: Application
    Filed: November 6, 2020
    Publication date: May 12, 2022
    Inventors: Rajiv Jain, Varun Manjunatha, Joseph Barrow, Vlad Morariu, Franck Dernoncourt, Sasha Spala, Nicholas Miller
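    Illustrative sketch (Python; the boundary head, mean pooling, and toy dimensions are assumptions): a segmentation head marks sentence boundaries, and a pooled representation of each predicted segment is labeled with a topic.

        import torch
        import torch.nn as nn

        dim, num_topics = 16, 5
        sent_vecs = torch.randn(7, dim)          # encoded sentences of the input text
        boundary = nn.Linear(dim, 2)             # segmentation head: boundary vs. continuation
        topic_head = nn.Linear(dim, num_topics)  # topic head applied to pooled segments

        is_boundary = boundary(sent_vecs).argmax(dim=-1).bool()
        segments, start = [], 0
        for i in range(1, len(sent_vecs)):
            if is_boundary[i]:
                segments.append((start, i))
                start = i
        segments.append((start, len(sent_vecs)))

        for s, e in segments:
            pooled = sent_vecs[s:e].mean(dim=0)  # stand-in for the pooling network
            topic = topic_head(pooled).argmax().item()
            print(f"segment sentences [{s}:{e}) -> topic {topic}")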
  • Publication number: 20220138425
    Abstract: Systems and methods for natural language processing are described. Embodiments of the inventive concept are configured to receive an input sequence and a plurality of candidate long forms for a short form contained in the input sequence, encode the input sequence to produce an input sequence representation, encode each of the plurality of candidate long forms to produce a plurality of candidate long form representations, wherein each of the candidate long form representations is based on a plurality of sample expressions and each of the sample expressions includes a candidate long form and contextual information, compute a plurality of similarity scores based on the candidate long form representations and the input sequence representation, and select a long form for the short form based on the plurality of similarity scores.
    Type: Application
    Filed: November 5, 2020
    Publication date: May 5, 2022
    Inventors: Franck Dernoncourt, Amir Pouran Ben Veyseh
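    Illustrative sketch (Python; the character-histogram encoder and the example long forms are assumptions standing in for the learned encoders): each candidate long form is represented via sample expressions with context, and the candidate whose representation is most similar to the input sequence is selected.

        import numpy as np

        def encode(text: str) -> np.ndarray:
            # Stand-in encoder: a normalized character histogram.
            v = np.zeros(26)
            for c in text.lower():
                if c.isalpha():
                    v[ord(c) - ord("a")] += 1
            return v / (np.linalg.norm(v) + 1e-8)

        sentence = "The CNN reached 90% accuracy on the image benchmark."
        candidates = {
            "convolutional neural network": ["the convolutional neural network classifies images"],
            "cable news network": ["the cable news network aired the interview"],
        }

        sent_vec = encode(sentence)
        scores = {lf: float(np.mean([encode(ex) @ sent_vec for ex in examples]))
                  for lf, examples in candidates.items()}
        print(max(scores, key=scores.get))  # long form selected for the short form "CNN"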
  • Publication number: 20220138534
    Abstract: This disclosure describes methods, non-transitory computer readable storage media, and systems that utilize a plurality of neural networks to determine structural and semantic information via different views of a word sequence and then utilize this information to extract a relationship between word sequence entities. For example, the disclosed systems generate a plurality of sets of encoded word representation vectors utilizing the plurality of neural networks. The disclosed system then extracts the relationship from an overall word representation vector generated based on the sets of encoded word representation vectors. Furthermore, the disclosed system enforces structural and semantic consistency between views via a plurality of constraints involving a control mechanism for the semantic view and a plurality of losses.
    Type: Application
    Filed: November 3, 2020
    Publication date: May 5, 2022
    Inventors: Amir Pouran Ben Veyseh, Franck Dernoncourt, Quan Tran, Lidan Wang
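    Illustrative sketch (Python; the GRU encoders, mean pooling, and mean-squared consistency penalty are assumptions): two views of the same word sequence are encoded separately, combined into an overall representation for relation classification, with a consistency loss pulling the views together.

        import torch
        import torch.nn as nn

        dim, num_relations, seq_len = 16, 4, 6
        structural = nn.GRU(dim, dim, batch_first=True)  # stand-in structural-view encoder
        semantic = nn.GRU(dim, dim, batch_first=True)    # stand-in semantic-view encoder
        classifier = nn.Linear(2 * dim, num_relations)

        words = torch.randn(1, seq_len, dim)
        h_struct, _ = structural(words)
        h_sem, _ = semantic(words)

        overall = torch.cat([h_struct.mean(dim=1), h_sem.mean(dim=1)], dim=-1)  # overall representation
        relation_logits = classifier(overall)
        consistency_loss = ((h_struct - h_sem) ** 2).mean()  # one possible view-consistency constraint
        print(relation_logits.shape, float(consistency_loss))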
  • Publication number: 20220138185
    Abstract: Systems and methods for natural language processing are described. Embodiments are configured to receive a structured representation of a search query, wherein the structured representation comprises a plurality of nodes and at least one edge connecting two of the nodes, receive a modification expression for the search query, wherein the modification expression comprises a natural language expression, generate a modified structured representation based on the structured representation and the modification expression using a neural network configured to combine structured representation features and natural language expression features, and perform a search based on the modified structured representation.
    Type: Application
    Filed: November 3, 2020
    Publication date: May 5, 2022
    Inventors: Quan Tran, Zhe Lin, Xuanli He, Walter Chang, Trung Bui, Franck Dernoncourt
  • Patent number: 11271876
    Abstract: The present disclosure relates to utilizing a graph neural network to accurately and flexibly identify text phrases that are relevant for responding to a query. For example, the disclosed systems can generate a graph topology having a plurality of nodes that correspond to a plurality of text phrases and a query. The disclosed systems can then utilize a graph neural network to analyze the graph topology, iteratively propagating and updating node representations corresponding to the plurality of nodes, in order to identify text phrases that can be used to respond to the query. In some embodiments, the disclosed systems can then generate a digital response to the query based on the identified text phrases.
    Type: Grant
    Filed: August 22, 2019
    Date of Patent: March 8, 2022
    Assignee: Adobe Inc.
    Inventors: Seunghyun Yoon, Franck Dernoncourt, Doo Soon Kim, Trung Bui
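    Illustrative sketch (Python; the fully connected toy topology and two propagation steps are assumptions): query and text-phrase nodes exchange messages over a graph topology, and after propagation each phrase node is scored for relevance to the query node.

        import torch
        import torch.nn as nn

        num_nodes, dim = 5, 16                    # node 0 is the query; nodes 1-4 are text phrases
        adj = torch.ones(num_nodes, num_nodes)    # toy fully connected graph topology
        adj = adj / adj.sum(dim=1, keepdim=True)  # row-normalized so messages are averaged

        node_vecs = torch.randn(num_nodes, dim)
        update = nn.Linear(dim, dim)

        for _ in range(2):                        # iteratively propagate and update node representations
            node_vecs = torch.relu(update(adj @ node_vecs))

        relevance = node_vecs[1:] @ node_vecs[0]  # score each phrase node against the query node
        print(relevance.argsort(descending=True))  # phrases most useful for the digital response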
  • Publication number: 20220067992
    Abstract: This disclosure involves executing artificial intelligence models that infer image editing operations from natural language requests spoken by a user. Further, this disclosure performs the inferred image editing operations using inferred parameters for the image editing operations. Systems and methods may be provided that infer one or more image editing operations from a natural language request associated with a source image, locate areas of the source image that are relevant to the one or more image editing operations to generate image masks, and perform the one or more image editing operations to generate a modified source image.
    Type: Application
    Filed: August 31, 2020
    Publication date: March 3, 2022
    Inventors: Ning Xu, Trung Bui, Jing Shi, Franck Dernoncourt
  • Patent number: 11263394
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for sentence compression in which a provided sentence is compressed to fit within an allotted space. Portions of the input sentence are copied to generate the compressed sentence. Upon receipt of a sentence, top candidate compressed sentences may be determined based on probabilities of segments of the input sentence to be included in a potential compressed sentence. The top candidate compressed sentences are re-ranked based on grammatical accuracy scores for each of the candidate compressed sentences using a language model trained using linguistic features of words and/or phrases. The highest scoring candidate compressed sentence may be presented to the user.
    Type: Grant
    Filed: August 2, 2019
    Date of Patent: March 1, 2022
    Assignee: Adobe Inc.
    Inventors: Sebastian Gehrmann, Franck Dernoncourt
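    Illustrative sketch (Python; the candidate list, probabilities, and the toy length-based scorer are invented): top candidate compressions are re-ranked by a grammatical-accuracy score, and the highest-scoring compressed sentence is kept.

        candidates = [
            ("The committee approved the budget Tuesday", 0.62),
            ("Committee approved budget", 0.55),
            ("The committee approved budget Tuesday after", 0.48),
        ]

        def lm_score(sentence: str) -> float:
            # Stand-in for a trained language model's grammaticality score.
            words = sentence.split()
            return 1.0 / (1.0 + abs(len(words) - 6))  # toy preference for roughly six-word outputs

        reranked = sorted(candidates, key=lambda c: lm_score(c[0]), reverse=True)
        print(reranked[0][0])  # compressed sentence presented to the user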
  • Publication number: 20220050967
    Abstract: This disclosure describes methods, non-transitory computer readable storage media, and systems that extract a definition for a term from a source document by utilizing a single machine-learning framework to classify a word sequence from the source document as including a term definition and to label words from the word sequence. To illustrate, the disclosed system can receive a source document including a word sequence arranged in one or more sentences. The disclosed systems can utilize a machine-learning model to classify the word sequence as comprising a definition for a term and generate labels for the words from the word sequence corresponding to the term and the definition. Based on classifying the word sequence and the generated labels, the disclosed system can extract the definition for the term from the source document.
    Type: Application
    Filed: August 11, 2020
    Publication date: February 17, 2022
    Inventors: Amir Pouran Ben Veyseh, Franck Dernoncourt, Quan Tran, Yiming Yang, Lidan Wang, Rajiv Jain, Vlad Morariu, Walter Chang
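    Illustrative sketch (Python; the shared GRU encoder and three-label tag set are assumptions): a single framework feeds one sentence-level classifier that decides whether the word sequence contains a definition and one per-word tagger that marks the term and definition spans.

        import torch
        import torch.nn as nn

        dim, seq_len = 16, 8
        encoder = nn.GRU(dim, dim, batch_first=True)
        sent_classifier = nn.Linear(dim, 2)  # definition sentence: yes / no
        word_tagger = nn.Linear(dim, 3)      # per-word labels: TERM, DEFINITION, OTHER

        words = torch.randn(1, seq_len, dim)
        hidden, _ = encoder(words)
        has_definition = sent_classifier(hidden.mean(dim=1)).argmax(dim=-1)
        word_labels = word_tagger(hidden).argmax(dim=-1)
        print(bool(has_definition), word_labels.tolist())  # keep TERM/DEFINITION spans when positive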
  • Patent number: 11232263
    Abstract: Certain embodiments involve a method for generating a summary. The method includes one or more processing devices performing operations including generating a set of word embeddings corresponding to each word of a text input. The operations further include generating a set of selection probabilities corresponding to each word of the text input using the respective word embeddings. Further, the operations include calculating a set of sentence saliency scores for a set of sentences of the text input using respective selection probabilities of the set of selection probabilities for each word of the text input. Additionally, the operations include generating the summary of the text input using a subset of sentences from the set of sentences with the greatest sentence saliency scores from the set of sentence saliency scores.
    Type: Grant
    Filed: August 7, 2019
    Date of Patent: January 25, 2022
    Assignee: Adobe Inc.
    Inventors: Sebastian Gehrmann, Franck Dernoncourt
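    Illustrative sketch (Python; the sentences, per-word probabilities, and mean aggregation are invented): per-word selection probabilities are aggregated into sentence saliency scores, and the sentences with the greatest scores form the summary.

        import numpy as np

        sentences = [
            ["storms", "closed", "the", "airport"],
            ["officials", "expect", "delays"],
            ["the", "cafe", "reopened"],
        ]
        # Stand-in for the model's per-word selection probabilities.
        word_probs = [np.array([0.9, 0.7, 0.1, 0.8]),
                      np.array([0.6, 0.5, 0.7]),
                      np.array([0.1, 0.2, 0.3])]

        saliency = [float(p.mean()) for p in word_probs]  # sentence saliency from word probabilities
        top = sorted(range(len(sentences)), key=lambda i: saliency[i], reverse=True)[:2]
        summary = " ".join(" ".join(sentences[i]) for i in sorted(top))
        print(summary)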
  • Patent number: 11232255
    Abstract: Systems, methods, and non-transitory computer-readable media are disclosed that collect and analyze annotation performance data to generate digital annotations for evaluating and training automatic electronic document annotation models. In particular, in one or more embodiments, the disclosed systems provide electronic documents to annotators based on annotator topic preferences. The disclosed systems then identify digital annotations and annotation performance data such as a time period spent by an annotator in generating digital annotations and annotator responses to digital annotation questions. Furthermore, in one or more embodiments, the disclosed systems utilize the identified digital annotations and the annotation performance data to generate a final set of reliable digital annotations. Additionally, in one or more embodiments, the disclosed systems provide the final set of digital annotations for utilization in training a machine learning model to generate annotations for electronic documents.
    Type: Grant
    Filed: June 13, 2018
    Date of Patent: January 25, 2022
    Assignee: Adobe Inc.
    Inventors: Franck Dernoncourt, Walter Chang, Trung Bui, Sean Fitzgerald, Sasha Spala, Kishore Aradhya, Carl Dockhorn
  • Patent number: 11222167
    Abstract: The disclosure describes one or more embodiments of a structured text summary system that generates structured text summaries of digital documents based on an interactive graphical user interface. For example, the structured text summary system can collaborate with users to create structured text summaries of a digital document based on automatically generating document tags corresponding to the digital document, determining segments of the digital document that correspond to a selected document tag, and generating structured text summaries for those document segments.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: January 11, 2022
    Assignee: Adobe Inc.
    Inventors: Sebastian Gehrmann, Franck Dernoncourt, Lidan Wang, Carl Dockhorn, Yu Gong
  • Patent number: 11210470
    Abstract: Methods and systems are provided for identifying subparts of a text. A neural network system can receive a set of sentences that includes context sentences and target sentences that indicate a decision point in a text. The neural network system can generate context sentence vectors and target sentence vectors by encoding context from the set of sentences. These context sentence vectors can be weighted to focus on relevant information. The weighted context sentence vectors and the target sentence vectors can then be used to output a label for the decision point in the text.
    Type: Grant
    Filed: March 28, 2019
    Date of Patent: December 28, 2021
    Assignee: Adobe Inc.
    Inventors: Seokhwan Kim, Walter W. Chang, Nedim Lipka, Franck Dernoncourt, Chan Young Park
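    Illustrative sketch (Python; the dot-product weighting and toy dimensions are assumptions): context sentence vectors are weighted by relevance to the target sentences, then combined with the target vectors to output a label for the decision point.

        import torch
        import torch.nn as nn

        dim, num_labels = 16, 3
        context = torch.randn(6, dim)             # encoded context sentences
        target = torch.randn(2, dim).mean(dim=0)  # encoded target sentences marking the decision point

        weights = torch.softmax(context @ target, dim=0)  # weight context to focus on relevant information
        weighted_context = weights @ context
        label_logits = nn.Linear(2 * dim, num_labels)(torch.cat([weighted_context, target]))
        print(label_logits.argmax().item())               # label output for the decision point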
  • Patent number: 11195048
    Abstract: In implementations of generating descriptions of image relationships, a computing device implements a description system which receives a source digital image and a target digital image. The description system generates a source feature sequence from the source digital image and a target feature sequence from the target digital image. A visual relationship between the source digital image and the target digital image is determined by using cross-attention between the source feature sequence and the target feature sequence. The system generates a description of a visual transformation between the source digital image and the target digital image based on the visual relationship.
    Type: Grant
    Filed: January 23, 2020
    Date of Patent: December 7, 2021
    Assignees: Adobe Inc., The University Of North Carolina At Chapel Hill
    Inventors: Trung Huu Bui, Zhe Lin, Hao Tan, Franck Dernoncourt, Mohit Bansal
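    Illustrative sketch (Python; the 7x7 feature grid, single attention block, and difference pooling are assumptions): cross-attention between the source and target feature sequences produces a visual-relationship vector that a decoder could turn into a description of the transformation.

        import torch
        import torch.nn as nn

        dim = 32
        source_feats = torch.randn(1, 49, dim)  # e.g., a 7x7 grid of source-image features
        target_feats = torch.randn(1, 49, dim)  # target-image feature sequence

        cross_attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)
        attended, _ = cross_attn(query=target_feats, key=source_feats, value=source_feats)

        relationship = (attended - target_feats).mean(dim=1)  # summary of what changed between images
        print(relationship.shape)  # would feed a decoder that generates the description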