Patents by Inventor Francesco Piccinno

Francesco Piccinno has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12380143
    Abstract: Systems and methods for pre-training and fine-tuning of neural-network-based language models to reason directly over tables without generating logical forms. In some examples, a language model can be pre-trained using masked-language modeling tasks synthetically generated from tables pulled from a knowledge corpus. In some examples, the language model may be further pre-trained using pairs of counterfactual statements generated from those tables, and/or one or more statements that compare selected data from those tables. The language model may then be fine-tuned using examples that include only a question, an answer, and a table, allowing fine-tuning examples to be harvested directly from existing benchmark datasets or synthetically generated. [A toy code sketch of this data-generation recipe appears after this listing.]
    Type: Grant
    Filed: November 20, 2023
    Date of Patent: August 5, 2025
    Assignee: Google LLC
    Inventors: Thomas Müller, Jonathan Herzig, Paweł Nowak, Julian Eisenschlos, Francesco Piccinno, Syrine Krichene
  • Publication number: 20240386215
    Abstract: Provided is a one-shot solution to visual language reasoning. Example systems described herein decompose the challenge of visual language reasoning into two steps: translation of a graphical depiction of data (e.g., a plot or chart) into text, followed by reasoning over the translated text. In particular, example systems described herein can include a machine-learned visual-to-language conversion model that translates a graphical depiction of a dataset into a set of text descriptive of the dataset. The output of the visual-to-language conversion model can then be used directly to prompt a language model (e.g., a pretrained large language model (LLM)), exploiting the few-shot reasoning capabilities of the language model. [A toy code sketch of this two-step pipeline appears after this listing.]
    Type: Application
    Filed: May 17, 2023
    Publication date: November 21, 2024
    Inventors: Julian Martin Eisenschlos, Francesco Piccinno, Yasemin Altun, Syrine Krichene, Kenton Chiu Tsun Lee, Fangyu Liu, Mandar Joshi, Chenxi Pang, Wenhu Chen
  • Publication number: 20240086436
    Abstract: Systems and methods for pre-training and fine-tuning of neural-network-based language models to reason directly over tables without generating logical forms. In some examples, a language model can be pre-trained using masked-language modeling tasks synthetically generated from tables pulled from a knowledge corpus. In some examples, the language model may be further pre-trained using pairs of counterfactual statements generated from those tables, and/or one or more statements that compare selected data from those tables. The language model may then be fine-tuned using examples that include only a question, an answer, and a table, allowing fine-tuning examples to be harvested directly from existing benchmark datasets or synthetically generated.
    Type: Application
    Filed: November 20, 2023
    Publication date: March 14, 2024
    Inventors: Thomas Müller, Jonathan Herzig, Paweł Nowak, Julian Eisenschlos, Francesco Piccinno, Syrine Krichene
  • Patent number: 11868381
    Abstract: Systems and methods for pre-training and fine-tuning of neural-network-based language models to reason directly over tables without generating logical forms. In some examples, a language model can be pre-trained using masked-language modeling tasks synthetically generated from tables pulled from a knowledge corpus. In some examples, the language model may be further pre-trained using pairs of counterfactual statements generated from those tables, and/or one or more statements that compare selected data from those tables. The language model may then be fine-tuned using examples that include only a question, an answer, and a table, allowing fine-tuning examples to be harvested directly from existing benchmark datasets or synthetically generated.
    Type: Grant
    Filed: March 29, 2021
    Date of Patent: January 9, 2024
    Assignee: Google LLC
    Inventors: Thomas Müller, Jonathan Herzig, Paweł Nowak, Julian Eisenschlos, Francesco Piccinno, Syrine Krichene
  • Publication number: 20220309087
    Abstract: Systems and methods for pre-training and fine-tuning of neural-network-based language models to reason directly over tables without generating logical forms. In some examples, a language model can be pre-trained using masked-language modeling tasks synthetically generated from tables pulled from a knowledge corpus. In some examples, the language model may be further pre-trained using pairs of counterfactual statements generated from those tables, and/or one or more statements that compare selected data from those tables. The language model may then be fine-tuned using examples that include only a question, an answer, and a table, allowing fine-tuning examples to be harvested directly from existing benchmark datasets or synthetically generated.
    Type: Application
    Filed: March 29, 2021
    Publication date: September 29, 2022
    Inventors: Thomas Müller, Jonathan Herzig, Paweł Nowak, Julian Eisenschlos, Francesco Piccinno, Syrine Krichene
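
The table-reasoning entries above (patent numbers 12380143 and 11868381, and their published applications) share one abstract: pre-train a language model on masked-language-modeling tasks, counterfactual statement pairs, and comparative statements synthesized from tables, then fine-tune on plain (question, answer, table) examples with no logical forms. The Python sketch below illustrates that data-generation recipe under toy assumptions; the table representation and every function name here are hypothetical illustrations, not taken from the patent.

```python
import random

MASK = "[MASK]"

def linearize(table):
    """Flatten a table (a list of row dicts) into a token sequence."""
    header = list(table[0].keys())
    cells = [str(row[col]) for row in table for col in header]
    return header + cells

def mlm_example(table, mask_prob=0.15):
    """Randomly mask table tokens to form a masked-language-modeling example."""
    masked, labels = [], []
    for tok in linearize(table):
        if random.random() < mask_prob:
            masked.append(MASK)
            labels.append(tok)       # model must recover the original token
        else:
            masked.append(tok)
            labels.append(None)      # no loss on unmasked positions
    return masked, labels

def counterfactual_pair(table, row, column, corpus_values):
    """Build a (true, false) statement pair by swapping in a wrong cell value."""
    true_value = str(table[row][column])
    wrong_value = random.choice([v for v in corpus_values if v != true_value])
    template = f"The {column} in row {row + 1} is {{}}."
    return template.format(true_value), template.format(wrong_value)

def comparative_statement(table, column):
    """Generate a statement comparing selected numeric data from the table."""
    values = [(i, float(row[column])) for i, row in enumerate(table)]
    hi = max(values, key=lambda iv: iv[1])
    lo = min(values, key=lambda iv: iv[1])
    return f"Row {hi[0] + 1} has a higher {column} than row {lo[0] + 1}."

def finetune_example(question, answer, table):
    """Fine-tuning needs only (question, answer, table) -- no logical form."""
    return {"question": question, "answer": answer, "table": linearize(table)}

if __name__ == "__main__":
    table = [
        {"city": "Zurich", "population": 421000},
        {"city": "Geneva", "population": 203000},
    ]
    print(mlm_example(table))
    print(counterfactual_pair(table, 0, "population",
                              ["421000", "203000", "9000"]))
    print(comparative_statement(table, "population"))
    print(finetune_example("Which city is larger?", "Zurich", table))
```

Because the fine-tuning format is just a question, an answer, and a table, examples can be pulled straight from existing benchmark datasets or produced by generators like the ones sketched above.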
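
Publication 20240386215 describes a two-step pipeline: a visual-to-language conversion model translates a chart into text, and that text then directly prompts a pretrained LLM. The sketch below mirrors that flow with stub models; ChartToTextModel and LanguageModel are assumed placeholder interfaces invented for illustration, not APIs from the patent or any specific library.

```python
class ChartToTextModel:
    """Stand-in for the machine-learned visual-to-language conversion model."""

    def convert(self, chart_image_path: str) -> str:
        # A real model would consume pixels; this stub returns a fixed
        # markdown-style table for illustration.
        return "year | revenue\n2021 | 10\n2022 | 14\n2023 | 21"


class LanguageModel:
    """Stand-in for a pretrained large language model (LLM)."""

    def generate(self, prompt: str) -> str:
        # A real LLM would perform few-shot reasoning over the prompt.
        return "Revenue roughly doubled between 2021 and 2023."


def visual_language_reasoning(image_path: str, question: str) -> str:
    # Step 1: translate the graphical depiction of the data into text.
    table_text = ChartToTextModel().convert(image_path)
    # Step 2: reason over the translated text by prompting the language
    # model directly with the conversion model's output.
    prompt = (
        "Here is a table extracted from a chart:\n"
        f"{table_text}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return LanguageModel().generate(prompt)


if __name__ == "__main__":
    print(visual_language_reasoning("revenue_chart.png",
                                    "How did revenue change over time?"))
```

Decomposing the task this way lets the conversion model handle perception while the reasoning is delegated, one-shot, to an off-the-shelf pretrained LLM.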