Patents by Inventor Romain Paulus
Romain Paulus has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11113598
Abstract: A novel unified neural network framework, the dynamic memory network, is disclosed. This unified framework reduces every task in natural language processing to a question answering problem over an input sequence. Inputs and questions are used to create and connect deep memory sequences. Answers are then generated based on dynamically retrieved memories.
Type: Grant
Filed: July 27, 2016
Date of Patent: September 7, 2021
Assignee: salesforce.com, inc.
Inventors: Richard Socher, Ankit Kumar, Ozan Irsoy, Mohit Iyyer, Caiming Xiong, Stephen Merity, Romain Paulus
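The episodic-memory idea in the abstract above can be sketched roughly as follows. This is a minimal numpy illustration, not the patented implementation; the dimensions, the attention scoring, and the memory-update rule are all simplified assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def episode(facts, question, memory):
    """One attention pass over the input facts: facts relevant to the
    question and to the current memory receive the highest gates."""
    scores = np.array([f @ question + f @ memory for f in facts])
    gates = softmax(scores)
    # The episode is a gate-weighted summary of the facts.
    return (gates[:, None] * facts).sum(axis=0)

def dynamic_memory(facts, question, passes=3):
    """Iteratively refine a memory vector over several episodes; an
    answer would then be generated from the final memory state."""
    memory = question.copy()
    for _ in range(passes):
        e = episode(facts, question, memory)
        memory = 0.5 * memory + 0.5 * e  # simplified memory update
    return memory

rng = np.random.default_rng(0)
facts = rng.normal(size=(5, 8))   # 5 input "sentences", dimension 8
question = rng.normal(size=8)
answer_state = dynamic_memory(facts, question)
```

In the actual framework the gating and update functions are learned recurrent networks; the fixed 0.5 blend here only stands in for that trained update.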
-
Patent number: 11003704
Abstract: A system for text summarization includes an encoder for encoding input tokens of a document and a decoder for emitting summary tokens which summarize the document based on the encoded input tokens. At each iteration the decoder generates attention scores between a current hidden state of the decoder and previous hidden states of the decoder, generates a current decoder context from the attention scores and the previous hidden states of the decoder, and selects a next summary token based on the current decoder context. The selection of the next summary token prevents emission of repeated summary phrases in a summary of the document.
Type: Grant
Filed: November 26, 2019
Date of Patent: May 11, 2021
Assignee: salesforce.com, inc.
Inventor: Romain Paulus
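The decoding loop described in this abstract can be sketched as below: the current decoder state attends over its own previous states, and the resulting context informs token selection while already-emitted phrases are blocked. A minimal numpy sketch under assumed dimensions; the masking rule and projection are illustrative, not the claimed method.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def intra_decoder_context(current, previous):
    """Attend from the current decoder hidden state over all previous
    decoder hidden states and return the decoder context vector."""
    scores = previous @ current        # one attention score per past state
    weights = softmax(scores)
    return weights @ previous          # weighted sum of past states

def select_token(current, previous, vocab_proj, banned):
    """Pick the next summary token from the current state plus the
    decoder context; tokens that would repeat an already-emitted
    phrase are masked out."""
    context = intra_decoder_context(current, previous)
    logits = vocab_proj @ np.concatenate([current, context])
    logits[list(banned)] = -np.inf     # block repeated phrases
    return int(np.argmax(logits))

rng = np.random.default_rng(1)
current = rng.normal(size=8)           # current decoder hidden state
previous = rng.normal(size=(4, 8))     # 4 earlier decoder hidden states
vocab_proj = rng.normal(size=(10, 16)) # toy 10-token vocabulary
token = select_token(current, previous, vocab_proj, banned={3})
```

In practice the banned set would be derived from n-grams already present in the partial summary, which is one common way to realize the "prevents emission of repeated summary phrases" behavior.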
-
Patent number: 10909157
Abstract: A system is disclosed for providing an abstractive summary of a source textual document. The system includes an encoder, a decoder, and a fusion layer. The encoder is capable of generating an encoding for the source textual document. The decoder is separated into a contextual model and a language model. The contextual model is capable of extracting words from the source textual document using the encoding. The language model is capable of generating vectors paraphrasing the source textual document based on pre-training with a training dataset. The fusion layer is capable of generating the abstractive summary of the source textual document from the extracted words and the generated vectors for paraphrasing. In some embodiments, the system utilizes a novelty metric to encourage the generation of novel phrases for inclusion in the abstractive summary.
Type: Grant
Filed: July 31, 2018
Date of Patent: February 2, 2021
Assignee: salesforce.com, inc.
Inventors: Romain Paulus, Wojciech Kryscinski, Caiming Xiong
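The fusion layer described above combines two signals: a contextual vector tied to words extracted from the source, and a language-model vector supplying fluent paraphrase. One common way to realize such a layer is a learned gate; the sketch below assumes a simple sigmoid gate and toy dimensions, and is an illustration rather than the patented design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse(contextual_vec, lm_vec, gate_w):
    """Gated fusion: a scalar gate decides, per step, how much to
    trust the contextual (extractive) signal versus the pre-trained
    language model's paraphrase signal."""
    g = sigmoid(gate_w @ np.concatenate([contextual_vec, lm_vec]))
    return g * contextual_vec + (1.0 - g) * lm_vec

rng = np.random.default_rng(2)
contextual = rng.normal(size=8)  # from the contextual model
lm = rng.normal(size=8)          # from the pre-trained language model
gate_w = rng.normal(size=16)     # learned gate weights (here random)
fused = fuse(contextual, lm, gate_w)
```

Because the gate lies in (0, 1), each fused component stays between the corresponding contextual and language-model components, which is the property that lets the layer interpolate between extraction and paraphrase.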
-
Publication number: 20200142917
Abstract: A system for text summarization includes an encoder for encoding input tokens of a document and a decoder for emitting summary tokens which summarize the document based on the encoded input tokens. At each iteration the decoder generates attention scores between a current hidden state of the decoder and previous hidden states of the decoder, generates a current decoder context from the attention scores and the previous hidden states of the decoder, and selects a next summary token based on the current decoder context. The selection of the next summary token prevents emission of repeated summary phrases in a summary of the document.
Type: Application
Filed: November 26, 2019
Publication date: May 7, 2020
Inventor: Romain Paulus
-
Patent number: 10521465
Abstract: A system for text summarization includes an encoder for encoding input tokens of a document and a decoder for emitting summary tokens which summarize the document based on the encoded input tokens. At each iteration the decoder generates attention scores between a current hidden state of the decoder and previous hidden states of the decoder, generates a current decoder context from the attention scores and the previous hidden states of the decoder, and selects a next summary token based on the current decoder context and a current encoder context of the encoder. The attention scores penalize candidate summary tokens having high attention scores in previous iterations. In some embodiments, the attention scores include an attention score for each of the previous hidden states of the decoder. In some embodiments, the selection of the next summary token prevents emission of repeated summary phrases in a summary of the document.
Type: Grant
Filed: June 25, 2019
Date of Patent: December 31, 2019
Assignee: salesforce.com, inc.
Inventor: Romain Paulus
-
Publication number: 20190362020
Abstract: A system is disclosed for providing an abstractive summary of a source textual document. The system includes an encoder, a decoder, and a fusion layer. The encoder is capable of generating an encoding for the source textual document. The decoder is separated into a contextual model and a language model. The contextual model is capable of extracting words from the source textual document using the encoding. The language model is capable of generating vectors paraphrasing the source textual document based on pre-training with a training dataset. The fusion layer is capable of generating the abstractive summary of the source textual document from the extracted words and the generated vectors for paraphrasing. In some embodiments, the system utilizes a novelty metric to encourage the generation of novel phrases for inclusion in the abstractive summary.
Type: Application
Filed: July 31, 2018
Publication date: November 28, 2019
Inventors: Romain Paulus, Wojciech Kryscinski, Caiming Xiong
-
Patent number: 10474709
Abstract: Disclosed RNN-implemented methods and systems for abstractive text summarization process input token embeddings of a document through an encoder that produces encoder hidden states; apply the decoder hidden state to encoder hidden states to produce encoder attention scores for encoder hidden states; generate encoder temporal scores for the encoder hidden states by exponentially normalizing a particular encoder hidden state's encoder attention score over its previous encoder attention scores; generate normalized encoder temporal scores by unity normalizing the temporal scores; produce the intra-temporal encoder attention vector; apply the decoder hidden state to each of previous decoder hidden states to produce decoder attention scores for each of the previous decoder hidden states; generate normalized decoder attention scores for previous decoder hidden states by exponentially normalizing each of the decoder attention scores; identify previously predicted output tokens; and produce the intra-decoder attention vector…
Type: Grant
Filed: November 16, 2017
Date of Patent: November 12, 2019
Assignee: salesforce.com, inc.
Inventor: Romain Paulus
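The intra-temporal encoder attention described in this abstract can be sketched as follows: each encoder position's raw score is exponentially normalized by the sum of that position's scores at previous decoding steps, so positions that were attended heavily before are penalized, and the result is then unity-normalized. A minimal numpy sketch with assumed toy dimensions, not the claimed RNN implementation.

```python
import numpy as np

def intra_temporal_attention(dec_state, enc_states, prev_scores):
    """One decoding step of intra-temporal encoder attention.

    dec_state:   current decoder hidden state, shape (d,)
    enc_states:  encoder hidden states, shape (n, d)
    prev_scores: list of raw score vectors from earlier steps
    """
    scores = enc_states @ dec_state                  # raw attention scores
    if prev_scores:
        # Exponential normalization over previous steps' scores:
        # heavily attended positions get divided down.
        denom = np.sum(np.exp(np.stack(prev_scores)), axis=0)
        temporal = np.exp(scores) / denom
    else:
        temporal = np.exp(scores)                    # first step: no history
    prev_scores.append(scores)                       # remember for later steps
    weights = temporal / temporal.sum()              # unity normalization
    context = weights @ enc_states                   # intra-temporal vector
    return context, weights

rng = np.random.default_rng(3)
enc_states = rng.normal(size=(6, 8))   # 6 encoder positions, dim 8
history = []
ctx1, w1 = intra_temporal_attention(rng.normal(size=8), enc_states, history)
ctx2, w2 = intra_temporal_attention(rng.normal(size=8), enc_states, history)
```

The intra-decoder attention mentioned at the end of the abstract works analogously, but with the current decoder state attending over previous decoder states instead of encoder states.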
-
Publication number: 20190311002
Abstract: A system for text summarization includes an encoder for encoding input tokens of a document and a decoder for emitting summary tokens which summarize the document based on the encoded input tokens. At each iteration the decoder generates attention scores between a current hidden state of the decoder and previous hidden states of the decoder, generates a current decoder context from the attention scores and the previous hidden states of the decoder, and selects a next summary token based on the current decoder context and a current encoder context of the encoder. The attention scores penalize candidate summary tokens having high attention scores in previous iterations. In some embodiments, the attention scores include an attention score for each of the previous hidden states of the decoder. In some embodiments, the selection of the next summary token prevents emission of repeated summary phrases in a summary of the document.
Type: Application
Filed: June 25, 2019
Publication date: October 10, 2019
Inventor: Romain Paulus
-
Patent number: 10380161
Abstract: Disclosed RNN-implemented methods and systems for abstractive text summarization process input token embeddings of a document through an encoder that produces encoder hidden states; apply the decoder hidden state to encoder hidden states to produce encoder attention scores for encoder hidden states; generate encoder temporal scores for the encoder hidden states by exponentially normalizing a particular encoder hidden state's encoder attention score over its previous encoder attention scores; generate normalized encoder temporal scores by unity normalizing the temporal scores; produce the intra-temporal encoder attention vector; apply the decoder hidden state to each of previous decoder hidden states to produce decoder attention scores for each of the previous decoder hidden states; generate normalized decoder attention scores for previous decoder hidden states by exponentially normalizing each of the decoder attention scores; identify previously predicted output tokens; and produce the intra-decoder attention vector…
Type: Grant
Filed: November 16, 2017
Date of Patent: August 13, 2019
Assignee: salesforce.com, inc.
Inventor: Romain Paulus
-
Publication number: 20180300400
Abstract: Disclosed RNN-implemented methods and systems for abstractive text summarization process input token embeddings of a document through an encoder that produces encoder hidden states; apply the decoder hidden state to encoder hidden states to produce encoder attention scores for encoder hidden states; generate encoder temporal scores for the encoder hidden states by exponentially normalizing a particular encoder hidden state's encoder attention score over its previous encoder attention scores; generate normalized encoder temporal scores by unity normalizing the temporal scores; produce the intra-temporal encoder attention vector; apply the decoder hidden state to each of previous decoder hidden states to produce decoder attention scores for each of the previous decoder hidden states; generate normalized decoder attention scores for previous decoder hidden states by exponentially normalizing each of the decoder attention scores; identify previously predicted output tokens; and produce the intra-decoder attention vector…
Type: Application
Filed: November 16, 2017
Publication date: October 18, 2018
Applicant: salesforce.com, inc.
Inventor: Romain Paulus
-
Publication number: 20170024645
Abstract: A novel unified neural network framework, the dynamic memory network, is disclosed. This unified framework reduces every task in natural language processing to a question answering problem over an input sequence. Inputs and questions are used to create and connect deep memory sequences. Answers are then generated based on dynamically retrieved memories.
Type: Application
Filed: July 27, 2016
Publication date: January 26, 2017
Applicant: salesforce.com, inc.
Inventors: Richard Socher, Ankit Kumar, Ozan Irsoy, Mohit Iyyer, Caiming Xiong, Stephen Merity, Romain Paulus
-
Publication number: 20160350653
Abstract: A novel unified neural network framework, the dynamic memory network, is disclosed. This unified framework reduces every task in natural language processing to a question answering problem over an input sequence. Inputs and questions are used to create and connect deep memory sequences. Answers are then generated based on dynamically retrieved memories.
Type: Application
Filed: June 1, 2016
Publication date: December 1, 2016
Applicant: salesforce.com, inc.
Inventors: Richard Socher, Ankit Kumar, Ozan Irsoy, Mohit Iyyer, Caiming Xiong, Stephen Merity, Romain Paulus