Patents by Inventor Franck Dernoncourt

Franck Dernoncourt has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11170158
    Abstract: Techniques are disclosed for an abstractive summarization process for summarizing documents, including long documents. A document is encoded using an encoder-decoder architecture with attentive decoding. In particular, an encoder for modeling documents generates both word-level and section-level representations of a document. A discourse-aware decoder then captures the information flow from all discourse sections of a document. To improve the robustness of the generated summaries, a neural attention mechanism considers both word-level and section-level representations of a document. The neural attention mechanism may utilize a set of weights that are applied to the word-level representations and section-level representations.
    Type: Grant
    Filed: March 8, 2018
    Date of Patent: November 9, 2021
    Assignee: Adobe Inc.
    Inventors: Arman Cohan, Walter W. Chang, Trung Huu Bui, Franck Dernoncourt, Doo Soon Kim
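
A minimal sketch of how the word-level and section-level attention described in 11170158 might be blended into one context vector. The weighting scheme, function names, and array shapes are illustrative assumptions, not the patented method:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def two_level_context(word_reprs, section_reprs, word_to_section, decoder_state):
    """Blend word-level and section-level attention into one context vector.

    word_reprs:      (num_words, d) array of word-level representations
    section_reprs:   (num_sections, d) array of section-level representations
    word_to_section: (num_words,) int array, section index of each word
    decoder_state:   (d,) current state of the attentive decoder
    """
    section_weights = softmax(section_reprs @ decoder_state)  # attention over sections
    word_weights = softmax(word_reprs @ decoder_state)        # attention over words
    # Rescale each word's weight by the weight of its enclosing section.
    combined = word_weights * section_weights[word_to_section]
    combined /= combined.sum()
    return combined @ word_reprs  # context vector for the next decoding step
```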
  • Publication number: 20210326371
    Abstract: Techniques and systems are described for performing semantic text searches. A semantic text-searching solution uses a machine learning system (such as a deep learning system) to determine associations between the semantic meanings of words. These associations are not limited by the spelling, syntax, grammar, or even definition of words. Instead, the associations can be based on the context in which characters, words, and/or phrases are used in relation to one another. In response to detecting a request to locate text within an electronic document associated with a keyword, the semantic text-searching solution can return strings within the document that have matching and/or related semantic meanings or contexts, in addition to exact matches (e.g., string matches) within the document. The semantic text-searching solution can then output an indication of the matching strings.
    Type: Application
    Filed: April 15, 2020
    Publication date: October 21, 2021
    Inventors: Trung Bui, Yu Gong, Tushar Dublish, Sasha Spala, Sachin Soni, Nicholas Miller, Joon Kim, Franck Dernoncourt, Carl Dockhorn, Ajinkya Kale
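
As a rough illustration of the embedding-based matching described in 20210326371, here is a cosine-similarity search over precomputed string embeddings; the embedding model that produces the vectors is assumed and not shown:

```python
import numpy as np

def semantic_search(query_vec, string_vecs, strings, top_k=5):
    """Return the document strings whose embeddings are closest to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    s = string_vecs / np.linalg.norm(string_vecs, axis=1, keepdims=True)
    sims = s @ q                             # cosine similarity to each candidate
    order = np.argsort(sims)[::-1][:top_k]
    return [(strings[i], float(sims[i])) for i in order]
```

Because matching is done in embedding space, semantically related strings rank highly even when they share no characters with the keyword, alongside exact string matches.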
  • Publication number: 20210303555
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media for generating pairs of natural language queries and corresponding query-language representations. For example, the disclosed systems can generate a contextual representation of a prior-generated dialogue sequence to compare with logical-form rules. In some implementations, the logical-form rules comprise trigger conditions and corresponding logical-form actions for constructing a logical-form representation of a subsequent dialogue sequence. Based on the comparison to logical-form rules indicating satisfaction of one or more trigger conditions, the disclosed systems can perform logical-form actions to generate a logical-form representation of a subsequent dialogue sequence.
    Type: Application
    Filed: March 30, 2020
    Publication date: September 30, 2021
    Inventors: Doo Soon Kim, Anthony M Colas, Franck Dernoncourt, Moumita Sinha, Trung Bui
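
One way to read the trigger-condition / logical-form-action mechanism of 20210303555 is as rule matching over a dialogue context. This toy sketch uses hypothetical predicates and actions, not the disclosed rule set:

```python
# Each rule pairs a trigger condition (a predicate over the dialogue context)
# with a logical-form action (a constructor for the logical-form representation).
RULES = [
    (lambda ctx: ctx["intent"] == "count",
     lambda ctx: f"COUNT(FILTER({ctx['entity']}, {ctx['constraint']}))"),
    (lambda ctx: ctx["intent"] == "lookup",
     lambda ctx: f"GET({ctx['entity']}, {ctx['field']})"),
]

def logical_forms(context, rules=RULES):
    """Fire every rule whose trigger condition the context satisfies."""
    return [action(context) for trigger, action in rules if trigger(context)]

# Example: a prior dialogue turn distilled into a context dictionary.
print(logical_forms({"intent": "count", "entity": "orders", "constraint": "open"}))
```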
  • Publication number: 20210303786
    Abstract: Techniques are described herein for determining a long-form of an abbreviation using a machine learning based approach that takes into consideration both sequential context and structural context, where the long-form corresponds to a meaning of the abbreviation as used in a sequence of words that form a sentence. In some embodiments, word representations are generated for different words in the sequence of words, and a combined representation is generated for the abbreviation based on a word representation corresponding to the abbreviation, a sequential context representation, and a structural context representation. The sequential context representation can be generated based on word representations for words positioned near the abbreviation. The structural context representation can be generated based on word representations for words that are syntactically related to the abbreviation.
    Type: Application
    Filed: March 25, 2020
    Publication date: September 30, 2021
    Inventors: Amir Pouran Ben Veyseh, Franck Dernoncourt
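
A bare-bones sketch of the combined representation in 20210303786: the abbreviation's own vector, a sequential-context vector from a positional window, and a structural-context vector from syntactically related words. The mean pooling and scoring choices are assumptions:

```python
import numpy as np

def combined_representation(word_vecs, abbrev_idx, dep_neighbors, window=3):
    """Concatenate word, sequential-context, and structural-context vectors."""
    word = word_vecs[abbrev_idx]
    lo, hi = max(0, abbrev_idx - window), abbrev_idx + window + 1
    sequential = word_vecs[lo:hi].mean(axis=0)            # nearby positions
    structural = (word_vecs[dep_neighbors].mean(axis=0)   # syntactic neighbors
                  if dep_neighbors else np.zeros_like(word))
    return np.concatenate([word, sequential, structural])

def expand(word_vecs, abbrev_idx, dep_neighbors, longform_vecs, longforms, W):
    """Score candidate long-forms against the combined representation."""
    rep = W @ combined_representation(word_vecs, abbrev_idx, dep_neighbors)
    return longforms[int(np.argmax(longform_vecs @ rep))]
```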
  • Publication number: 20210295191
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for selecting hyper-parameter sets by utilizing a modified Bayesian optimization approach based on a combination of accuracy and training efficiency metrics of a machine learning model. For example, the disclosed systems can fit accuracy regression and efficiency regression models to observed metrics associated with hyper-parameter sets of a machine learning model. The disclosed systems can also implement a trade-off acquisition function that implements an accuracy-training efficiency balance metric to explore the hyper-parameter feature space and select hyper-parameters for training the machine learning model considering a balance between accuracy and training efficiency.
    Type: Application
    Filed: March 20, 2020
    Publication date: September 23, 2021
    Inventors: Trung Bui, Lidan Wang, Franck Dernoncourt
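
The trade-off acquisition function in 20210295191 could be approximated as below: candidates scored by a weighted balance of predicted accuracy and predicted training efficiency from the two fitted regression models. The balance parameter and model interface are assumptions:

```python
def select_hyperparameters(candidates, acc_model, eff_model, balance=0.5):
    """Pick the candidate maximizing an accuracy-training-efficiency balance.

    acc_model / eff_model: fitted regressors exposing .predict([hp]) -> [float]
    (e.g., scikit-learn estimators); balance in [0, 1] trades predicted
    accuracy against predicted training efficiency.
    """
    def score(hp):
        acc = acc_model.predict([hp])[0]
        eff = eff_model.predict([hp])[0]
        return balance * acc + (1.0 - balance) * eff
    return max(candidates, key=score)
```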
  • Publication number: 20210279414
    Abstract: Systems and methods for parsing natural language sentences using an artificial neural network (ANN) are described. Embodiments of the described systems and methods may generate a plurality of word representation matrices for an input sentence, wherein each of the word representation matrices is based on an input matrix of word vectors, a query vector, a matrix of key vectors, and a matrix of value vectors, and wherein a number of the word representation matrices is based on a number of syntactic categories, compress each of the plurality of word representation matrices to produce a plurality of compressed word representation matrices, concatenate the plurality of compressed word representation matrices to produce an output matrix of word vectors, and identify at least one word from the input sentence corresponding to a syntactic category based on the output matrix of word vectors.
    Type: Application
    Filed: March 5, 2020
    Publication date: September 9, 2021
    Inventors: Khalil Mrini, Walter Chang, Trung Bui, Quan Tran, Franck Dernoncourt
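
A numpy caricature of the per-category attention in 20210279414: one attention head per syntactic category, each output compressed and all results concatenated into the output matrix. The query construction, dimensions, and compression matrix are invented for illustration:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def category_head(X, Wq, Wk, Wv):
    """Word representation matrix for one syntactic category's attention head."""
    query = X.mean(axis=0) @ Wq                     # query vector
    K, V = X @ Wk, X @ Wv                           # key and value matrices
    attn = softmax(K @ query / np.sqrt(K.shape[1]))
    return attn[:, None] * V                        # (num_words, d_head)

def parse_output(X, heads, Wc):
    """Compress each category's matrix, then concatenate along features."""
    return np.concatenate([category_head(X, *h) @ Wc for h in heads], axis=1)
```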
  • Publication number: 20210279622
    Abstract: Methods are provided for natural language semantic matching performed by training and using a Markov Network model. Training may be performed using question-answer pairs that include labels indicating a correct or incorrect answer to a question. The trained Markov Network model can then be used to identify answers to questions from sources stored on a database. The Markov Network model provides superior performance over other semantic matching models, particularly where the training data set covers a different information domain than the input questions or output answers of the trained model.
    Type: Application
    Filed: March 9, 2020
    Publication date: September 9, 2021
    Inventors: Trung Huu Bui, Tong Sun, Natwar Modani, Lidan Wang, Franck Dernoncourt
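
The abstract of 20210279622 leaves the Markov Network structure unspecified; a drastically simplified stand-in is a log-linear score built from node and edge potentials over a question-answer pair, with the potentials learned from the labeled pairs:

```python
def match_score(q_tokens, a_tokens, node_pot, edge_pot):
    """Log-linear compatibility of a question-answer pair.

    node_pot: dict token -> potential; edge_pot: dict (q_tok, a_tok) -> potential.
    Both would be learned from labeled question-answer pairs.
    """
    score = sum(node_pot.get(t, 0.0) for t in q_tokens + a_tokens)
    score += sum(edge_pot.get((q, a), 0.0) for q in q_tokens for a in a_tokens)
    return score

def best_answer(q_tokens, candidates, node_pot, edge_pot):
    return max(candidates, key=lambda a: match_score(q_tokens, a, node_pot, edge_pot))
```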
  • Patent number: 11113323
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for techniques for identifying textual similarity and performing answer selection. A textual-similarity computing model can use a pre-trained language model to generate vector representations of a question and a candidate answer from a target corpus. The target corpus can be clustered into latent topics (or other latent groupings), and probabilities of a question or candidate answer being in each of the latent topics can be calculated and condensed (e.g., downsampled) to improve performance and focus on the most relevant topics. The condensed probabilities can be aggregated and combined with a downstream vector representation of the question (or answer) so the model can use focused topical and other categorical information as auxiliary information in a similarity computation.
    Type: Grant
    Filed: May 23, 2019
    Date of Patent: September 7, 2021
    Assignee: Adobe Inc.
    Inventors: Seung-hyun Yoon, Franck Dernoncourt, Trung Huu Bui, Doo Soon Kim, Carl Iwan Dockhorn, Yu Gong
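
To illustrate the condensed topic probabilities of 11113323: membership probabilities over the latent topics, downsampled to the strongest few and appended to the text's vector representation. The clustering and the pre-trained language model are assumed to exist upstream:

```python
import numpy as np

def topic_augmented_vector(text_vec, topic_centroids, keep=3):
    """Append condensed (top-k) latent-topic probabilities to a text embedding."""
    sims = topic_centroids @ text_vec        # affinity to each latent topic
    probs = np.exp(sims - sims.max())
    probs /= probs.sum()                     # topic-membership probabilities
    top = np.sort(probs)[-keep:]             # condense: keep the strongest topics
    return np.concatenate([text_vec, top / top.sum()])
```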
  • Patent number: 11100917
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that generate ground truth annotations of target utterances in digital image editing dialogues in order to create a state-driven training data set. In particular, in one or more embodiments, the disclosed systems utilize machine- and user-defined tags, machine learning model predictions, and user input to generate a ground truth annotation that includes frame information in addition to intent, attribute, object, and/or location information. In at least one embodiment, the disclosed systems generate ground truth annotations in conformance with an annotation ontology that results in fast and accurate digital image editing dialogue annotation.
    Type: Grant
    Filed: March 27, 2019
    Date of Patent: August 24, 2021
    Assignee: ADOBE INC.
    Inventors: Trung Bui, Zahra Rahimi, Yinglan Ma, Seokhwan Kim, Franck Dernoncourt
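
The annotation ontology in 11100917 (frame information plus intent/attribute/object/location) suggests a record shape like the hypothetical dataclass below; field names and examples are assumptions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UtteranceAnnotation:
    """One ground-truth annotation for a digital image editing dialogue turn."""
    utterance: str                     # e.g., "make the sky darker"
    frame_id: int                      # frame information from the ontology
    intent: str                        # e.g., "adjust"
    attribute: Optional[str] = None    # e.g., "brightness"
    object: Optional[str] = None       # e.g., "sky"
    location: Optional[str] = None     # e.g., "top half"
```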
  • Publication number: 20210232850
    Abstract: In implementations of generating descriptions of image relationships, a computing device implements a description system which receives a source digital image and a target digital image. The description system generates a source feature sequence from the source digital image and a target feature sequence from the target digital image. A visual relationship between the source digital image and the target digital image is determined by using cross-attention between the source feature sequence and the target feature sequence. The system generates a description of a visual transformation between the source digital image and the target digital image based on the visual relationship.
    Type: Application
    Filed: January 23, 2020
    Publication date: July 29, 2021
    Applicant: Adobe Inc.
    Inventors: Trung Huu Bui, Zhe Lin, Hao Tan, Franck Dernoncourt, Mohit Bansal
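
A compact sketch of the cross-attention step in 20210232850: each target-image feature attends over the source-image feature sequence, yielding aligned source context from which a transformation description could be decoded. Scaled dot-product attention is an assumption:

```python
import numpy as np

def cross_attention(source_seq, target_seq):
    """Align source features to each target position (scaled dot-product).

    source_seq: (len_source, d); target_seq: (len_target, d).
    """
    scores = target_seq @ source_seq.T / np.sqrt(source_seq.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ source_seq   # (len_target, d): source context per target feature
```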
  • Publication number: 20210232770
    Abstract: Embodiments of the present invention provide systems, methods, and non-transitory computer storage media for parsing a given input referring expression into a parse structure and generating a semantic computation graph to identify semantic relationships among and between objects. At a high level, when embodiments of the present invention receive a referring expression, a parse tree is created and mapped into a hierarchical subject, predicate, object graph structure that labels noun objects in the referring expression, the attributes of the labeled noun objects, and predicate relationships (e.g., verb actions or spatial prepositions) between the labeled objects. Embodiments of the present invention then transform the subject, predicate, object graph structure into a semantic computation graph that may be recursively traversed and interpreted to determine how noun objects, their attributes and modifiers, and their interrelationships are provided to downstream image editing, searching, or caption indexing tasks.
    Type: Application
    Filed: January 29, 2020
    Publication date: July 29, 2021
    Inventors: Zhe Lin, Walter W. Chang, Scott Cohen, Khoi Viet Pham, Jonathan Brandt, Franck Dernoncourt
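
The parse-to-graph mapping in 20210232770 might start from (subject, predicate, object) triples; this toy builds the hierarchical graph and walks it recursively, with the parser itself assumed:

```python
def build_graph(triples):
    """Group (subject, predicate, object) triples into an adjacency mapping."""
    graph = {}
    for subj, pred, obj in triples:
        graph.setdefault(subj, []).append((pred, obj))
    return graph

def traverse(graph, node, depth=0):
    """Recursively interpret the semantic computation graph."""
    for pred, obj in graph.get(node, []):
        print("  " * depth + f"{node} --{pred}--> {obj}")
        traverse(graph, obj, depth + 1)

# "the red cup on the wooden table"
g = build_graph([("cup", "has_attribute", "red"),
                 ("cup", "on", "table"),
                 ("table", "has_attribute", "wooden")])
traverse(g, "cup")
```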
  • Publication number: 20210216577
    Abstract: Techniques and systems are provided for predicting answers in response to one or more input queries. For instance, text from a corpus of text can be processed by a reader to generate one or multiple question and answer spaces. A question and answer space can include answerable questions and the answers associated with the questions (referred to as “question and answer pairs”). A query defining a question can be received (e.g., from a user input device) and processed by a retriever portion of the system. The retriever portion of the system can retrieve an answer to the question from the one or more pre-constructed question and answer spaces, and/or can determine an answer by comparing one or more answers retrieved from the one or more pre-constructed question and answer spaces to an answer generated by a retriever-reader system.
    Type: Application
    Filed: January 13, 2020
    Publication date: July 15, 2021
    Applicant: Adobe Inc.
    Inventors: Jinfeng Xiao, Lidan Wang, Franck Dernoncourt, Trung Bui, Tong Sun
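
The retriever half of 20210216577 could look like a nearest-question lookup over a pre-constructed question-and-answer space, falling back to a reader when confidence is low; the threshold and fallback behavior are assumptions:

```python
import numpy as np

def retrieve(query_vec, question_vecs, answers, threshold=0.6):
    """Return the stored answer whose question best matches the query."""
    q = query_vec / np.linalg.norm(query_vec)
    Q = question_vecs / np.linalg.norm(question_vecs, axis=1, keepdims=True)
    sims = Q @ q
    best = int(np.argmax(sims))
    if sims[best] >= threshold:
        return answers[best]
    return None   # caller falls back to the retriever-reader pipeline
```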
  • Publication number: 20210192126
    Abstract: The disclosure describes one or more embodiments of a structured text summary system that generates structured text summaries of digital documents based on an interactive graphical user interface. For example, the structured text summary system can collaborate with users to create structured text summaries of a digital document based on automatically generating document tags corresponding to the digital document, determining segments of the digital document that correspond to a selected document tag, and generating structured text summaries for those document segments.
    Type: Application
    Filed: December 19, 2019
    Publication date: June 24, 2021
    Inventors: Sebastian Gehrmann, Franck Dernoncourt, Lidan Wang, Carl Dockhorn, Yu Gong
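
Pipeline-wise, 20210192126 reads as tag generation, tag-conditioned segmentation, then per-segment summarization; the callables below are hypothetical placeholders for those three models:

```python
def structured_summary(document, tagger, segmenter, summarizer, selected_tag):
    """Summarize only the segments of `document` matching the user-selected tag."""
    tags = tagger(document)                        # automatically generated tags
    if selected_tag not in tags:
        raise ValueError(f"unknown tag: {selected_tag}")
    segments = segmenter(document, selected_tag)   # segments for that tag
    return {selected_tag: [summarizer(seg) for seg in segments]}
```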
  • Publication number: 20210174193
    Abstract: A system, method and non-transitory computer readable medium for editing images with verbal commands are described. Embodiments of the system, method and non-transitory computer readable medium may include an artificial neural network (ANN) comprising a word embedding component configured to convert text input into a set of word vectors, a feature encoder configured to create a combined feature vector for the text input based on the word vectors, a scoring layer configured to compute labeling scores based on the combined feature vectors, wherein the feature encoder, the scoring layer, or both are trained using multi-task learning with a loss function including a first loss value and an additional loss value based on mutual information, context-based prediction, or sentence-based prediction, and a command component configured to identify a set of image editing word labels based on the labeling scores.
    Type: Application
    Filed: December 6, 2019
    Publication date: June 10, 2021
    Inventors: Amir Pouran Ben Veyseh, Franck Dernoncourt
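
The multi-task objective in 20210174193 combines a labeling loss with an auxiliary term; a generic weighted sum is the obvious shape, with the weight an assumption that would be tuned on validation data:

```python
def multitask_loss(labeling_loss, auxiliary_loss, aux_weight=0.1):
    """Total training loss: the word-labeling loss plus a weighted auxiliary
    loss (mutual information, context-based, or sentence-based prediction)."""
    return labeling_loss + aux_weight * auxiliary_loss
```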
  • Patent number: 11016997
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating query results based on domain-specific dynamic word embeddings. For example, the disclosed systems can generate dynamic vector representations of words that include domain-specific embedded information. In addition, the disclosed systems can compare the dynamic vector representations with vector representations of query terms received as part of a search query. The disclosed systems can further identify one or more digital content items to provide as part of a query result that include words corresponding to the query terms based on the comparison of the vector representations. In some embodiments, the disclosed systems can also train a word embedding model to generate accurate vector representations of unique words.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: May 25, 2021
    Assignee: ADOBE INC.
    Inventors: Xiaolei Huang, Franck Dernoncourt, Walter Chang
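
For 11016997, the comparison step might rank documents by how close their words' domain-specific vectors sit to the query-term vectors; the scoring rule here is an illustrative choice, not the disclosed one:

```python
import numpy as np

def rank_documents(query_vecs, doc_word_vecs, doc_ids):
    """Rank documents by their best word matches to the query terms.

    query_vecs:    (num_terms, d) domain-specific query-term embeddings
    doc_word_vecs: list of (num_words_i, d) arrays, one array per document
    """
    def doc_score(W):
        sims = query_vecs @ W.T          # term-by-word similarity matrix
        return sims.max(axis=1).mean()   # best word match per term, averaged
    scores = [doc_score(W) for W in doc_word_vecs]
    return [doc_ids[i] for i in np.argsort(scores)[::-1]]
```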
  • Patent number: 11011157
    Abstract: Techniques are disclosed for generating ASR training data. According to an embodiment, impactful ASR training corpora are generated efficiently, and the quality or relevance of the ASR training corpora being generated is increased by leveraging knowledge of the ASR system being trained. An example methodology includes: selecting one of a word or phrase, based on knowledge and/or content of said ASR training corpora; presenting a textual representation of said word or phrase; receiving a speech utterance that includes said word or phrase; receiving a transcript for said speech utterance; presenting said transcript for review (to allow for editing, if needed); and storing said transcript and said audio file in an ASR system training database. The selecting may include, for instance, selecting a word or phrase that is under-represented in said database, and/or based upon an n-gram distribution on a language, and/or based upon known areas that tend to incur transcription mistakes.
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: May 18, 2021
    Assignee: ADOBE INC.
    Inventor: Franck Dernoncourt
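
The word-selection heuristic in 11011157 (favoring words under-represented in the training database) can be illustrated in a few lines; the vocabulary source and the prompting loop around it are assumptions:

```python
from collections import Counter

def next_prompt_word(vocabulary, transcripts):
    """Select the word least represented in the ASR training database so far,
    so the next recorded utterance adds the most under-covered material."""
    counts = Counter(word for t in transcripts for word in t.lower().split())
    return min(vocabulary, key=lambda w: counts[w])

print(next_prompt_word(["adobe", "acrobat", "premiere"],
                       ["open adobe acrobat", "launch adobe"]))  # -> "premiere"
```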
  • Publication number: 20210133279
    Abstract: The present disclosure relates to utilizing a neural network to flexibly generate label distributions for modifying a segment of text to emphasize one or more words that accurately communicate the meaning of the segment of text. For example, the disclosed systems can utilize a neural network having a long short-term memory neural network architecture to analyze a segment of text and generate a plurality of label distributions corresponding to the words included therein. The label distribution for a given word can include probabilities across a plurality of labels from a text emphasis labeling scheme where a given probability represents the degree to which the corresponding label describes the word. The disclosed systems can modify the segment of text to emphasize one or more of the included words based on the generated label distributions.
    Type: Application
    Filed: November 4, 2019
    Publication date: May 6, 2021
    Inventors: Amirreza Shirani, Franck Dernoncourt, Paul Asente, Nedim Lipka, Seokhwan Kim, Jose Echevarria
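
Downstream of the LSTM in 20210133279, applying the per-word label distributions might look like this; the labeling scheme (a single "emphasis" label index) and the bold markup are assumptions:

```python
import numpy as np

def apply_emphasis(words, label_distributions, emphasis_label=0, top_k=2):
    """Bold the words whose probability of the emphasis label is highest.

    label_distributions: (num_words, num_labels) per-word label probabilities
    produced by the trained model (not shown here).
    """
    scores = label_distributions[:, emphasis_label]
    chosen = set(np.argsort(scores)[-top_k:].tolist())
    return " ".join(f"**{w}**" if i in chosen else w for i, w in enumerate(words))
```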
  • Publication number: 20210058345
    Abstract: The present disclosure relates to utilizing a graph neural network to accurately and flexibly identify text phrases that are relevant for responding to a query. For example, the disclosed systems can generate a graph topology having a plurality of nodes that correspond to a plurality of text phrases and a query. The disclosed systems can then utilize a graph neural network to analyze the graph topology, iteratively propagating and updating node representations corresponding to the plurality of nodes, in order to identify text phrases that can be used to respond to the query. In some embodiments, the disclosed systems can then generate a digital response to the query based on the identified text phrases.
    Type: Application
    Filed: August 22, 2019
    Publication date: February 25, 2021
    Inventors: Seunghyun Yoon, Franck Dernoncourt, Doo Soon Kim, Trung Bui
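
A skeletal version of the propagation loop in 20210058345: node representations for the text phrases and the query are iteratively updated from their neighbors, then phrase nodes are scored against the query node. The mean aggregation and tanh update are assumptions:

```python
import numpy as np

def propagate(node_feats, adjacency, W, steps=2):
    """Iteratively update node representations from neighbor messages."""
    H = node_feats
    degree = adjacency.sum(axis=1, keepdims=True).clip(min=1)
    for _ in range(steps):
        H = np.tanh(((adjacency @ H) / degree) @ W)   # aggregate, then transform
    return H

def relevant_phrases(node_feats, adjacency, W, query_idx, top_k=3):
    H = propagate(node_feats, adjacency, W)
    scores = H @ H[query_idx]         # phrase-to-query affinity
    scores[query_idx] = -np.inf       # exclude the query node itself
    return np.argsort(scores)[-top_k:][::-1]
```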
  • Publication number: 20210042391
    Abstract: Certain embodiments involve a method for generating a summary. The method includes one or more processing devices performing operations including generating a set of word embeddings corresponding to each word of a text input. The operations further include generating a set of selection probabilities corresponding to each word of the text input using the respective word embeddings. Further, the operations include calculating a set of sentence saliency scores for a set of sentences of the text input using the respective selection probabilities for each word of the text input. Additionally, the operations include generating the summary of the text input using a subset of sentences from the set of sentences with the greatest sentence saliency scores.
    Type: Application
    Filed: August 7, 2019
    Publication date: February 11, 2021
    Inventors: Sebastian Gehrmann, Franck Dernoncourt
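
The scoring in 20210042391 reduces to: per-word selection probabilities, pooled into sentence saliency scores, top sentences kept in document order. A sketch with the probability model assumed:

```python
import numpy as np

def summarize(sentences, selection_prob, k=2):
    """Pick the k sentences with the greatest mean word selection probability.

    selection_prob: dict word -> probability, produced by the trained model.
    """
    saliency = [np.mean([selection_prob.get(w, 0.0) for w in s.split()])
                for s in sentences]
    keep = sorted(np.argsort(saliency)[-k:])   # preserve document order
    return " ".join(sentences[i] for i in keep)
```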
  • Publication number: 20210034699
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for sentence compression in which a provided sentence is compressed to fit within an allotted space. Portions of the input sentence are copied to generate the compressed sentence. Upon receipt of a sentence, top candidate compressed sentences may be determined based on the probabilities that segments of the input sentence will be included in a potential compressed sentence. The top candidate compressed sentences are re-ranked based on grammatical accuracy scores for each of the candidate compressed sentences, using a language model trained on linguistic features of words and/or phrases. The highest scoring candidate compressed sentence may be presented to the user.
    Type: Application
    Filed: August 2, 2019
    Publication date: February 4, 2021
    Inventors: Sebastian Gehrmann, Franck Dernoncourt
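
And for 20210034699, the generate-then-re-rank loop might be wired as below, with `span_prob` (the copy probability of a candidate compression) and `grammar_score` (the trained language model's grammaticality score) as hypothetical callables:

```python
def compress(candidates, span_prob, grammar_score, shortlist=5):
    """Shortlist candidate compressions by copy probability, then pick the
    candidate the language model scores as most grammatical."""
    top = sorted(candidates, key=span_prob, reverse=True)[:shortlist]
    return max(top, key=grammar_score)
```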