Patents by Inventor Alexander R. Fabbri

Alexander R. Fabbri has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12361201
    Abstract: Embodiments described herein provide a document summarization framework that employs an ensemble of summarization models, each of which is a modified version of a base summarization model to control hallucination. For example, a base summarization model may first be trained on a full training data set. The trained base summarization model is then fine-tuned using a first filtered subset of the training data which contains noisy data, resulting in an “anti-expert” model. The parameters of the anti-expert model are subtracted from the parameters of the trained base model to produce a final summarization model which yields robust factual performance.
    Type: Grant
    Filed: August 3, 2022
    Date of Patent: July 15, 2025
    Assignee: Salesforce, Inc.
    Inventors: Prafulla Kumar Choubey, Alexander R. Fabbri, Jesse Vig, Chien-Sheng Wu, Wenhao Liu, Nazneen Rajani
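The parameter-subtraction step in the abstract above can be sketched as follows. This is a minimal illustration, not the patented method: the function name, the use of plain dictionaries in place of real model weights, and the scaling factor `alpha` are all assumptions for demonstration.

```python
# Sketch of the "anti-expert" subtraction described above: an anti-expert
# model is fine-tuned on the noisy subset, and its deviation from the base
# model is subtracted from the base parameters. Names and the scaling
# factor alpha are illustrative; real weights would be tensors, not floats.

def subtract_anti_expert(base_params, anti_expert_params, alpha=0.5):
    """Return final parameters: base minus alpha times the anti-expert's
    deviation from the base, removing the direction associated with
    hallucination-prone (noisy) training data."""
    final = {}
    for name, base_w in base_params.items():
        noise_direction = anti_expert_params[name] - base_w
        final[name] = base_w - alpha * noise_direction
    return final
```

With real models, the same loop would run over matching entries of two parameter dictionaries of identical architecture.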
  • Patent number: 12204847
    Abstract: Embodiments described herein provide a method for text summarization. The method includes receiving a training dataset having at least an uncompressed text, a compressed text, and one or more information entities accompanying the compressed text. The method also includes generating, using a perturber model, a perturbed text with the one or more information entities being inserted into the compressed text. The method further includes training the perturber model based on a first training objective, and generating, using the trained perturber model, a perturbed summary in response to an input of a reference summary. The method further includes generating, via an editor model, a predicted summary by removing information from the perturbed summary conditioned on a source document of the reference summary, and training the editor model based on a second training objective.
    Type: Grant
    Filed: October 6, 2022
    Date of Patent: January 21, 2025
    Assignee: Salesforce, Inc.
    Inventors: Alexander R. Fabbri, Prafulla Kumar Choubey, Jesse Vig, Chien-Sheng Wu, Caiming Xiong
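The perturber/editor data flow described in the abstract can be sketched with toy string operations. In the patented method both components are trained models with their own objectives; here simple functions stand in so only the pipeline shape is shown, and all names are illustrative.

```python
# Toy sketch of the perturber/editor pipeline described above.
# perturb() inserts extra information entities into a compressed summary;
# edit() removes information not supported by the source document.
# Real systems would use trained seq2seq models, not string filters.

def perturb(compressed_text, entities):
    """Perturber: append extra entities to a compressed summary."""
    return compressed_text + " " + " ".join(entities)

def edit(perturbed_summary, source_document):
    """Editor: keep only tokens that appear in the source document."""
    source_tokens = set(source_document.split())
    kept = [t for t in perturbed_summary.split() if t in source_tokens]
    return " ".join(kept)

source = "the model was trained on news articles"
reference = "model trained on news"
perturbed = perturb(reference, ["sports", "articles"])
predicted = edit(perturbed, source)
```

The editor drops "sports" (unsupported by the source) but keeps "articles", mirroring how the trained editor removes unsupported content conditioned on the source.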
  • Publication number: 20240394539
    Abstract: Embodiments described herein provide systems and methods for training neural network based language models using human feedback. An existing (or generated) summary of a document is provided, and that summary may be used to generate a number of other summaries. A human annotator may reject any summary that has a factuality issue. Summaries which are agreed to have no factuality problems are used as baseline summaries. Small atomic edits are made to the baseline summaries (e.g., replacing a single word or phrase) to create a group of summaries. Human annotators label each of these summaries as factual or not. The annotated summaries are used to train a summarization model and/or a factual detector model.
    Type: Application
    Filed: September 26, 2023
    Publication date: November 28, 2024
    Inventors: Wojciech Kryscinski, Alexander R. Fabbri, Caiming Xiong, Shafiq Rayhan Joty, Chien-Sheng Wu, Divyansh Agarwal, Philippe Laban
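The "atomic edit" step above can be sketched as generating one variant per single-word substitution; human annotators would then label each variant as factual or not. The function name and substitution list are illustrative assumptions, not from the patent.

```python
# Sketch of generating atomic-edit variants of a baseline summary, as
# described above: each variant differs from the baseline by exactly one
# word. Annotators would label each variant factual / not factual, and
# the labeled pairs would train a summarizer or factuality detector.

def atomic_edits(summary, substitutions):
    """Yield one variant per (old_word, new_word) single-word swap."""
    tokens = summary.split()
    variants = []
    for old, new in substitutions:
        for i, tok in enumerate(tokens):
            if tok == old:
                edited = tokens[:i] + [new] + tokens[i + 1:]
                variants.append(" ".join(edited))
    return variants

baseline = "profits rose 5 percent in March"
variants = atomic_edits(baseline, [("rose", "fell"), ("March", "April")])
```

Each variant keeps the baseline's structure intact, so an annotator's factuality judgment isolates the effect of the single edited word.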
  • Publication number: 20240370640
    Abstract: Embodiments described herein provide a query-focused summarization model that employs a single or dual encoder model. A two-step approach may be adopted that first extracts parts of the source document and then synthesizes the extracted segments into a final summary. In another embodiment, an end-to-end approach may be adopted that splits the source document into overlapping segments, and then concatenates encodings into a single embedding sequence for the decoder to output a summary.
    Type: Application
    Filed: July 16, 2024
    Publication date: November 7, 2024
    Inventors: Wojciech Kryscinski, Alexander R. Fabbri, Jesse Vig
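The segment-splitting step of the end-to-end approach above can be sketched as a sliding window over the source tokens; the encodings of these segments would then be concatenated into one embedding sequence for the decoder. Window and stride sizes here are illustrative assumptions.

```python
# Sketch of the overlapping-segment split described above. The source
# document's tokens are cut into windows that overlap by (window - stride)
# tokens; in the full model each window is encoded and the encodings are
# concatenated for the decoder. Sizes are illustrative.

def overlapping_segments(tokens, window=8, stride=4):
    """Split a token list into overlapping windows of length `window`,
    advancing by `stride` tokens each step."""
    segments = []
    for start in range(0, len(tokens), stride):
        segments.append(tokens[start:start + window])
        if start + window >= len(tokens):
            break
    return segments
```

Overlap ensures content near a segment boundary is fully visible to at least one encoder pass.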
  • Patent number: 12050855
    Abstract: Embodiments described herein provide a query-focused summarization model that employs a single or dual encoder model. A two-step approach may be adopted that first extracts parts of the source document and then synthesizes the extracted segments into a final summary. In another embodiment, an end-to-end approach may be adopted that splits the source document into overlapping segments, and then concatenates encodings into a single embedding sequence for the decoder to output a summary.
    Type: Grant
    Filed: May 20, 2022
    Date of Patent: July 30, 2024
    Assignee: Salesforce, Inc.
    Inventors: Wojciech Kryscinski, Alexander R. Fabbri, Jesse Vig
  • Publication number: 20240242022
    Abstract: Embodiments described herein provide a structured conversation summarization framework. A user interface may be provided which allows an agent to perform a conversation with a customer, for example regarding resolving a customer support issue. Utterances by both the agent and customer may be stored, and at the end of the conversation, the utterances may be used to generate a structured summary. The structured summary may include components such as a general summary, an issue summary, and a resolution summary. Using neural network models and heuristics, each component of the summary may be automatically generated.
    Type: Application
    Filed: January 18, 2023
    Publication date: July 18, 2024
    Inventors: Victor Yee, Chien-Sheng Wu, Na Cheng, Alexander R. Fabbri, Zachary Alexander, Nicholas Feinig, Sameer Abhinkar, Shashank Harinath, Sitaram Asur, Jacob Nathaniel Huffman, Wojciech Kryscinski, Caiming Xiong
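The structured summary described above can be sketched as assembling three components from the stored utterances. In the patented system each component would be produced by neural models and heuristics; here placeholder keyword checks stand in, and the keywords, function name, and dictionary keys are all illustrative assumptions.

```python
# Sketch of building the structured summary described above from stored
# (role, text) utterances of an agent-customer conversation. Placeholder
# keyword heuristics stand in for the neural components.

def structured_summary(utterances):
    """Build general / issue / resolution sections from conversation turns."""
    issue = [t for role, t in utterances
             if role == "customer" and "problem" in t.lower()]
    resolution = [t for role, t in utterances
                  if role == "agent" and "resolved" in t.lower()]
    general = [t for _, t in utterances]
    return {
        "general_summary": " ".join(general),
        "issue_summary": " ".join(issue) or "(none detected)",
        "resolution_summary": " ".join(resolution) or "(none detected)",
    }

convo = [
    ("customer", "I have a problem with my login"),
    ("agent", "I reset your password, issue resolved"),
]
summary = structured_summary(convo)
```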
  • Publication number: 20230419017
    Abstract: Embodiments described herein provide a method for text summarization. The method includes receiving a training dataset having at least an uncompressed text, a compressed text, and one or more information entities accompanying the compressed text. The method also includes generating, using a perturber model, a perturbed text with the one or more information entities being inserted into the compressed text. The method further includes training the perturber model based on a first training objective, and generating, using the trained perturber model, a perturbed summary in response to an input of a reference summary. The method further includes generating, via an editor model, a predicted summary by removing information from the perturbed summary conditioned on a source document of the reference summary, and training the editor model based on a second training objective.
    Type: Application
    Filed: October 6, 2022
    Publication date: December 28, 2023
    Inventors: Alexander R. Fabbri, Prafulla Kumar Choubey, Jesse Vig, Chien-Sheng Wu, Caiming Xiong
  • Publication number: 20230376677
    Abstract: Embodiments described herein provide a document summarization framework that employs an ensemble of summarization models, each of which is a modified version of a base summarization model to control hallucination. For example, a base summarization model may first be trained on a full training data set. The trained base summarization model is then fine-tuned using a first filtered subset of the training data which contains noisy data, resulting in an “anti-expert” model. The parameters of the anti-expert model are subtracted from the parameters of the trained base model to produce a final summarization model which yields robust factual performance.
    Type: Application
    Filed: August 3, 2022
    Publication date: November 23, 2023
    Inventors: Prafulla Kumar Choubey, Alexander R. Fabbri, Jesse Vig, Chien-Sheng Wu, Wenhao Liu, Nazneen Rajani
  • Publication number: 20220277135
    Abstract: Embodiments described herein provide a query-focused summarization model that employs a single or dual encoder model. A two-step approach may be adopted that first extracts parts of the source document and then synthesizes the extracted segments into a final summary. In another embodiment, an end-to-end approach may be adopted that splits the source document into overlapping segments, and then concatenates encodings into a single embedding sequence for the decoder to output a summary.
    Type: Application
    Filed: May 20, 2022
    Publication date: September 1, 2022
    Inventors: Wojciech Kryscinski, Alexander R. Fabbri, Jesse Vig