Patents by Inventor Semih Yavuz

Semih Yavuz has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230419027
    Abstract: Embodiments described herein provide a prompt-based transfer learning method that employs shared latent space prompt tuning. Specifically, a shared latent space is assumed among all source and target tasks, where each vector in the space captures a basis skill for performing a particular task. Given an instance (from either a source task or a target task), it is first encoded into an instance representation vector, which then queries the latent space to yield a skill vector for the instance. This vector modulates a frozen model, via soft prompts that are a simple prompt transformation (the prompt generator in FIG. 3) of the basis skill vector, to generate an answer for the instance. The latent space and prompt transformation are learned end-to-end during upstream pre-training on the source tasks. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: November 30, 2022
    Publication date: December 28, 2023
    Inventors: Bo Pang, Semih Yavuz, Caiming Xiong, Yingbo Zhou
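A minimal sketch of the latent-space query described in the entry above, not the patented implementation: the instance encoder, the basis skill vectors, and the linear prompt generator are random stand-ins here, and the attention-style query is an assumption about how "queries the latent space" is realized.

```python
import math
import random

random.seed(0)
DIM, NUM_SKILLS, PROMPT_LEN = 8, 4, 2  # toy sizes

# Parameters learned in upstream pre-training (random stand-ins here):
# a shared latent space of basis skill vectors, and a linear
# "prompt generator" mapping a skill vector to soft prompt tokens.
latent_space = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(NUM_SKILLS)]
prompt_generator = [[random.gauss(0, 0.1) for _ in range(DIM)]
                    for _ in range(PROMPT_LEN * DIM)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    return [e / sum(exps) for e in exps]

def skill_vector(instance_repr):
    """Query the shared latent space: attention over the basis skills."""
    weights = softmax([dot(instance_repr, b) for b in latent_space])
    return [sum(w * b[i] for w, b in zip(weights, latent_space))
            for i in range(DIM)]

def soft_prompts(skill):
    """Prompt generator: linear map from a skill vector to soft prompts."""
    flat = [dot(row, skill) for row in prompt_generator]
    return [flat[t * DIM:(t + 1) * DIM] for t in range(PROMPT_LEN)]

# An instance representation (produced by an encoder in the real method).
instance = [random.gauss(0, 1) for _ in range(DIM)]
prompts = soft_prompts(skill_vector(instance))
# `prompts` would be prepended to the frozen model's input embeddings.
print(len(prompts), "soft prompt vectors of dim", len(prompts[0]))
```

In the actual method the backbone model stays frozen; only `latent_space` and `prompt_generator` are trained end-to-end on the source tasks.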
  • Patent number: 11829721
    Abstract: Embodiments described herein provide dynamic blocking, a decoding algorithm that enables large-scale pretrained language models to generate high-quality paraphrases in an unsupervised setting. Specifically, in order to obtain an alternative surface form, when the language model emits a token that is present in the source sequence, the model is prevented, at the next time step, from generating the token that immediately follows that token in the source sequence. In this way, the language model is forced to generate a paraphrase of the input source sequence, but with mostly different wording. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: January 28, 2021
    Date of Patent: November 28, 2023
    Assignee: salesforce.com, inc.
    Inventors: Tong Niu, Semih Yavuz, Yingbo Zhou, Nitish Shirish Keskar, Huan Wang, Caiming Xiong
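A minimal sketch of the blocking rule in the entry above. The `score_fn` interface and the greedy decoding loop are illustrative assumptions; in the patent the rule constrains a large pretrained language model's decoder.

```python
def blocked_next_tokens(source_tokens, last_generated):
    """Dynamic blocking: if the model just emitted a token that appears in
    the source sequence, ban the token(s) that immediately follow it in the
    source, forcing an alternative surface form at the next step."""
    banned = set()
    for i, tok in enumerate(source_tokens[:-1]):
        if tok == last_generated:
            banned.add(source_tokens[i + 1])
    return banned

def greedy_decode(score_fn, source_tokens, max_len=10, eos="</s>"):
    """Toy greedy decoder; score_fn(prefix) -> {token: score}."""
    out = []
    while len(out) < max_len:
        scores = dict(score_fn(out))
        if out:  # apply the dynamic-blocking constraint
            for tok in blocked_next_tokens(source_tokens, out[-1]):
                scores.pop(tok, None)
        if not scores:
            break
        nxt = max(scores, key=scores.get)
        if nxt == eos:
            break
        out.append(nxt)
    return out

# After emitting "quick", the decoder may not copy "brown",
# the token that follows "quick" in the source sequence.
source = "the quick brown fox".split()
print(blocked_next_tokens(source, "quick"))  # {'brown'}
```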
  • Patent number: 11741142
    Abstract: Embodiments described herein provide document summarization systems and methods that utilize fine-tuning of pre-trained abstractive summarization models to produce summaries that more faithfully track the content of the documents. Such abstractive summarization models may be pre-trained using a corpus consisting of pairs of articles and associated summaries. For each article-summary pair, a pseudo label or control code is generated that represents the faithfulness of the summary with respect to the article. The pre-trained model is then fine-tuned based on the article-summary pairs and the corresponding control codes. The resulting fine-tuned models then provide improved faithfulness in document summarization tasks. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: January 31, 2022
    Date of Patent: August 29, 2023
    Assignee: salesforce.com, inc.
    Inventors: Haopeng Zheng, Semih Yavuz, Wojciech Kryscinski, Kazuma Hashimoto, Yingbo Zhou
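A minimal sketch of the pseudo-labeling and fine-tuning setup described above. The token-overlap faithfulness measure, the `<faithful>`/`<hallucinated>` code names, and the 0.8 threshold are all illustrative assumptions; the abstract does not commit to these specifics.

```python
import re

def tokens(text):
    return re.findall(r"[a-z']+", text.lower())

def faithfulness_control_code(article, summary, threshold=0.8):
    """Pseudo label / control code for one article-summary pair. Token
    overlap is an illustrative stand-in for the faithfulness measure."""
    vocab = set(tokens(article))
    summ = tokens(summary)
    ratio = sum(t in vocab for t in summ) / max(len(summ), 1)
    return "<faithful>" if ratio >= threshold else "<hallucinated>"

def build_finetuning_example(article, summary):
    """Prepend the control code so the model learns to condition on it;
    at inference time the model would be prompted with <faithful>."""
    code = faithfulness_control_code(article, summary)
    return {"input": f"{code} {article}", "target": summary}

# Fully supported summary -> <faithful>; unsupported one -> <hallucinated>.
print(build_finetuning_example("The cat sat on the mat.", "The cat sat on the mat."))
print(build_finetuning_example("The cat sat on the mat.", "A dog chased the cat."))
```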
  • Patent number: 11727210
    Abstract: Embodiments described herein provide systems and methods for data-to-text generation. The embodiments receive input data that includes resource description framework (RDF) triples in an RDF graph. A data-to-text generation system generates position-aware embeddings, including position embeddings, triple role embeddings, and tree-level embeddings. Using the position-aware embeddings and the RDF graph, the data-to-text generation system generates a textual description for the RDF graph. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: January 29, 2021
    Date of Patent: August 15, 2023
    Assignee: Salesforce.com, Inc.
    Inventors: Qingyun Wang, Nazneen Rajani, Semih Yavuz, Xi Lin
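A minimal sketch of the three position-aware indices named above. Assigning a predicate the tree level of its subject, and the particular linearization order, are illustrative assumptions; in the real model each index would select a learned embedding vector.

```python
# Toy RDF graph as (subject, predicate, object) triples forming a tree.
triples = [
    ("Alan_Turing", "birthPlace", "London"),
    ("London", "country", "United_Kingdom"),
]

ROLE_IDS = {"subject": 0, "predicate": 1, "object": 2}

def tree_levels(triples, root):
    """Depth of each entity in the RDF tree rooted at `root`."""
    depth, frontier = {root: 0}, [root]
    while frontier:
        nxt = []
        for node in frontier:
            for s, _, o in triples:
                if s == node and o not in depth:
                    depth[o] = depth[node] + 1
                    nxt.append(o)
        frontier = nxt
    return depth

def position_aware_indices(triples, root):
    """Linearize triples into tokens with (position, triple-role, tree-level)
    indices; each index selects a learned embedding in the real system."""
    depth = tree_levels(triples, root)
    rows, pos = [], 0
    for s, p, o in triples:
        for tok, role in ((s, "subject"), (p, "predicate"), (o, "object")):
            level = depth.get(tok if role != "predicate" else s, 0)
            rows.append((tok, pos, ROLE_IDS[role], level))
            pos += 1
    return rows

for tok, pos, role, level in position_aware_indices(triples, "Alan_Turing"):
    print(f"{tok:15s} pos={pos} role={role} level={level}")
```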
  • Publication number: 20230229957
    Abstract: Methods, apparatuses, and computer-program products are disclosed. The method may include inputting one or more subcomponent training datasets into a plurality of subcomponent models of a machine learning model, where the machine learning model may be configured to perform a final task and the plurality of subcomponent models may be configured to perform sequential subtasks that result in the final task. The method may include computing one or more weights for data points of the one or more subcomponent training datasets, where the one or more weights may be based on a contribution of the data points to an end-to-end error loss measurement associated with performing the final task of the machine learning model. The method may include training the plurality of subcomponent models based on the one or more weights for the data points of the one or more subcomponent training datasets. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: January 14, 2022
    Publication date: July 20, 2023
    Inventors: Shuyang Li, Yingbo Zhou, Semih Yavuz, Govardana Sachithanandam Ramachandran
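A minimal sketch of the weighting idea in this application: each training point is weighted by its contribution to the end-to-end loss of the composed pipeline, and the subcomponents are then trained against the weighted loss. The softmax weighting and the toy two-stage pipeline are illustrative assumptions.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    return [e / sum(exps) for e in exps]

def end_to_end_loss(example, sub_a, sub_b):
    """Run the subcomponents in sequence and measure final-task error."""
    intermediate = sub_a(example["input"])
    prediction = sub_b(intermediate)
    return (prediction - example["target"]) ** 2

def data_point_weights(dataset, sub_a, sub_b):
    """Weight each data point by its contribution to the end-to-end error:
    points implicated in larger final-task losses receive larger weights."""
    losses = [end_to_end_loss(ex, sub_a, sub_b) for ex in dataset]
    return softmax(losses)

# Toy pipeline whose composition should double the input.
sub_a = lambda x: 1.5 * x   # imperfect first-stage model
sub_b = lambda h: h + 0.1   # imperfect second-stage model
dataset = [{"input": x, "target": 2 * x} for x in (1.0, 2.0, 3.0)]

weights = data_point_weights(dataset, sub_a, sub_b)
weighted_loss = sum(w * end_to_end_loss(ex, sub_a, sub_b)
                    for w, ex in zip(weights, dataset))
print([round(w, 3) for w in weights], round(weighted_loss, 3))
```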
  • Publication number: 20230055188
    Abstract: Embodiments described herein provide a question answering approach that answers a question by generating an executable logical form. First, a ranking model is used to select a set of good logical forms from a pool of logical forms obtained by searching over a knowledge graph. The selected logical forms are good in the sense that they are close to (or, in some cases, exactly match) the intents in the question and the final desired logical form. Next, a generation model conditioned on the question as well as the selected logical forms is used to generate the target logical form, which is executed to obtain the final answer. For example, at the inference stage, when a question is received, a matching logical form is identified from the question, based on which the final answer can be generated from the node associated with the matching logical form in the knowledge base. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: December 29, 2021
    Publication date: February 23, 2023
    Inventors: Xi Ye, Semih Yavuz, Kazuma Hashimoto, Yingbo Zhou
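A minimal sketch of the rank-then-generate pipeline this entry describes. The token-overlap ranker, the dictionary "knowledge base", and taking the top-ranked form in place of a learned generation model are all illustrative assumptions.

```python
import re

def toks(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def rank_logical_forms(question, candidates, top_k=2):
    """Ranking step: score each candidate logical form against the question.
    Token overlap stands in for the learned ranking model."""
    q = toks(question)
    return sorted(candidates, key=lambda lf: len(q & toks(lf)), reverse=True)[:top_k]

def generate_and_execute(question, top_forms, kb):
    """Generation step: the patent conditions a seq2seq model on the question
    plus the selected forms; here we simply execute the top-ranked form."""
    return kb.get(top_forms[0], "unknown")

# Tiny knowledge base mapping executable logical forms to their answers.
kb = {"capital_of(France)": "Paris", "population_of(France)": "68 million"}
pool = list(kb)  # candidate pool from searching over the knowledge graph

question = "What is the capital of France?"
selected = rank_logical_forms(question, pool)
print(selected[0], "->", generate_and_execute(question, selected, kb))
```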
  • Publication number: 20230054068
    Abstract: Embodiments described herein provide document summarization systems and methods that utilize fine-tuning of pre-trained abstractive summarization models to produce summaries that more faithfully track the content of the documents. Such abstractive summarization models may be pre-trained using a corpus consisting of pairs of articles and associated summaries. For each article-summary pair, a pseudo label or control code is generated and represents a faithfulness of the summary with respect to the article. The pre-trained model is then fine-tuned based on the article-summary pairs and the corresponding control codes. The resulting fine-tuned models then provide improved faithfulness in document summarization tasks.
    Type: Application
    Filed: January 31, 2022
    Publication date: February 23, 2023
    Inventors: Haopeng Zheng, Semih Yavuz, Wojciech Kryscinski, Kazuma Hashimoto, Yingbo Zhou
  • Publication number: 20230059870
    Abstract: Embodiments described herein provide a question answering approach that answers a question by generating an executable logical form. First, a ranking model is used to select a set of good logical forms from a pool of logical forms obtained by searching over a knowledge graph. The selected logical forms are good in the sense that they are close to (or, in some cases, exactly match) the intents in the question and the final desired logical form. Next, a generation model conditioned on the question as well as the selected logical forms is used to generate the target logical form, which is executed to obtain the final answer. For example, at the inference stage, when a question is received, a matching logical form is identified from the question, based on which the final answer can be generated from the node associated with the matching logical form in the knowledge base.
    Type: Application
    Filed: December 29, 2021
    Publication date: February 23, 2023
    Inventors: Xi Ye, Semih Yavuz, Kazuma Hashimoto, Yingbo Zhou
  • Publication number: 20220415324
    Abstract: Training and/or utilizing a single neural network model to generate, at each of a plurality of assistant turns of a dialog session between a user and an automated assistant, a corresponding automated assistant natural language response and/or a corresponding automated assistant action. For example, at a given assistant turn of a dialog session, both a corresponding natural language response and a corresponding action can be generated jointly and based directly on output generated using the single neural network model. The corresponding response and/or corresponding action can be generated based on processing, using the neural network model, dialog history and a plurality of discrete resources. For example, the neural network model can be used to generate a response and/or action on a token-by-token basis. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: August 30, 2022
    Publication date: December 29, 2022
    Inventors: Arvind Neelakantan, Daniel Duckworth, Ben Goodrich, Vishaal Prasad, Chinnadhurai Sankar, Semih Yavuz
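A minimal sketch of the joint, token-by-token output format the entry above describes. The `<action>`/`<response>` sentinel tokens, the serialization format, and the hard-coded model output are illustrative assumptions standing in for a trained network.

```python
def serialize_turn(dialog_history, resources):
    """Flatten the dialog history and discrete resources (e.g., API results)
    into one input sequence for the single neural network model."""
    parts = [f"user: {u} | assistant: {a}" for u, a in dialog_history]
    parts += [f"resource: {r}" for r in resources]
    return " || ".join(parts)

def parse_joint_output(tokens):
    """The single model emits action and response jointly, token by token;
    split them back apart on sentinel tokens."""
    action, response, target = [], [], None
    for tok in tokens:
        if tok in ("<action>", "<response>"):
            target = action if tok == "<action>" else response
        elif target is not None:
            target.append(tok)
    return " ".join(action), " ".join(response)

# Stand-in for the trained model's token-by-token output.
model_output = ("<action> book_table restaurant=Sushiko time=7pm "
                "<response> Your table at Sushiko is booked for 7pm .").split()

history = [("Book me a table at Sushiko at 7", "Sure, checking availability.")]
print(serialize_turn(history, ["Sushiko: 7pm available"]))
action, response = parse_joint_output(model_output)
print("ACTION:  ", action)
print("RESPONSE:", response)
```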
  • Publication number: 20220383159
    Abstract: Embodiments described herein provide a fusion-in-decoder (FID)-based model (referred to as "PATHID") for open-domain multi-hop question answering. Specifically, PATHID addresses the gap between the general behavior of the FID model on single-hop versus multi-hop question answering, and provides more transparency into the reasoning path. In addition to answer generation, PATHID explicitly models the full reasoning path to resolve the answer with a generative sequence-to-sequence model. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: November 23, 2021
    Publication date: December 1, 2022
    Inventors: Semih Yavuz, Kazuma Hashimoto, Yingbo Zhou
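A minimal sketch of the fusion-in-decoder input layout and the single-sequence reasoning path the entry above describes. The `<hop>`/`<answer>` delimiter scheme and the hard-coded decoder output are illustrative assumptions.

```python
def build_fid_inputs(question, passages):
    """Fusion-in-decoder: the question is paired with each retrieved passage,
    each pair is encoded independently, and the decoder attends over all of
    the encoded blocks jointly."""
    return [f"question: {question} title: {t} context: {c}" for t, c in passages]

def parse_reasoning_path(sequence):
    """The model decodes the full reasoning path as one sequence, not just
    the answer; this delimiter format is an illustrative assumption."""
    path, answer = sequence.split("<answer>")
    hops = [hop.strip() for hop in path.split("<hop>") if hop.strip()]
    return hops, answer.strip()

passages = [("Lake Tahoe", "Lake Tahoe straddles California and Nevada."),
            ("California", "Sacramento is the capital of California.")]
inputs = build_fid_inputs("What is the capital of the state Lake Tahoe partly lies in?",
                          passages)

# Stand-in for the decoder's single-sequence output: hops first, then answer.
decoded = ("<hop> Lake Tahoe: straddles California and Nevada "
           "<hop> California: Sacramento is the capital <answer> Sacramento")
hops, answer = parse_reasoning_path(decoded)
print(len(inputs), "encoder inputs | hops:", hops, "| answer:", answer)
```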
  • Publication number: 20220374459
    Abstract: Embodiments described herein provide dense hierarchical retrieval for open-domain question answering over a corpus of documents, using a document-level and a passage-level dense retrieval model. Specifically, each document is viewed as a structured collection of sections, subsections, and paragraphs. Each document may be split into short passages, and a document-level retrieval model and a passage-level retrieval model may be applied to return a smaller set of filtered texts. Top documents are identified by encoding the question and the documents and computing document relevance scores against the encoded question. Thereafter, a set of top passages is further identified by encoding the passages and computing passage relevance scores against the encoded question. The document and passage relevance scores may be used in combination to determine a final retrieval ranking for the documents containing the set of top passages. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: November 23, 2021
    Publication date: November 24, 2022
    Inventors: Ye Liu, Kazuma Hashimoto, Yingbo Zhou, Semih Yavuz, Caiming Xiong
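A minimal sketch of the two-stage ranking described above. Bag-of-words cosine similarity stands in for the dense document- and passage-level encoders, and the linear combination with weight `alpha` is an assumption; the abstract says only that the two scores are used in combination.

```python
import math

def embed(text):
    """Bag-of-words vector; a stand-in for the dense encoders."""
    vec = {}
    for tok in text.lower().split():
        vec[tok] = vec.get(tok, 0) + 1
    return vec

def score(q, d):
    """Cosine similarity between sparse vectors (stand-in for dense dot products)."""
    num = sum(v * d.get(k, 0) for k, v in q.items())
    den = (math.sqrt(sum(v * v for v in q.values()))
           * math.sqrt(sum(v * v for v in d.values())))
    return num / den if den else 0.0

def hierarchical_retrieve(question, corpus, top_docs=1, top_passages=2, alpha=0.5):
    """Stage 1: rank whole documents. Stage 2: rank passages inside the top
    documents. The final score combines both levels."""
    q = embed(question)
    doc_scores = sorted(((score(q, embed(" ".join(ps))), title, ps)
                         for title, ps in corpus.items()), reverse=True)[:top_docs]
    ranked = []
    for d_score, title, passages in doc_scores:
        for p in passages:
            ranked.append((alpha * d_score + (1 - alpha) * score(q, embed(p)), title, p))
    return sorted(ranked, reverse=True)[:top_passages]

corpus = {
    "Lake Tahoe": ["Lake Tahoe is a freshwater lake.",
                   "It straddles California and Nevada."],
    "Sahara": ["The Sahara is a desert in Africa."],
}
for s, title, passage in hierarchical_retrieve("which states does lake tahoe straddle", corpus):
    print(f"{s:.2f} [{title}] {passage}")
```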
  • Patent number: 11475890
    Abstract: Training and/or utilizing a single neural network model to generate, at each of a plurality of assistant turns of a dialog session between a user and an automated assistant, a corresponding automated assistant natural language response and/or a corresponding automated assistant action. For example, at a given assistant turn of a dialog session, both a corresponding natural language response and a corresponding action can be generated jointly and based directly on output generated using the single neural network model. The corresponding response and/or corresponding action can be generated based on processing, using the neural network model, dialog history and a plurality of discrete resources. For example, the neural network model can be used to generate a response and/or action on a token-by-token basis.
    Type: Grant
    Filed: June 24, 2020
    Date of Patent: October 18, 2022
    Assignee: GOOGLE LLC
    Inventors: Arvind Neelakantan, Daniel Duckworth, Ben Goodrich, Vishaal Prasad, Chinnadhurai Sankar, Semih Yavuz
  • Publication number: 20220129629
    Abstract: Embodiments described herein provide dynamic blocking, a decoding algorithm that enables large-scale pretrained language models to generate high-quality paraphrases in an unsupervised setting. Specifically, in order to obtain an alternative surface form, when the language model emits a token that is present in the source sequence, the model is prevented, at the next time step, from generating the token that immediately follows that token in the source sequence. In this way, the language model is forced to generate a paraphrase of the input source sequence, but with mostly different wording.
    Type: Application
    Filed: January 28, 2021
    Publication date: April 28, 2022
    Inventors: Tong Niu, Semih Yavuz, Yingbo Zhou, Nitish Shirish Keskar, Huan Wang, Caiming Xiong
  • Publication number: 20220050964
    Abstract: Embodiments described herein provide systems and methods for data-to-text generation. The embodiments receive input data that includes resource description framework (RDF) triples in an RDF graph. A data-to-text generation system generates position-aware embeddings, including position embeddings, triple role embeddings, and tree-level embeddings. Using the position-aware embeddings and the RDF graph, the data-to-text generation system generates a textual description for the RDF graph.
    Type: Application
    Filed: January 29, 2021
    Publication date: February 17, 2022
    Inventors: Qingyun Wang, Nazneen Rajani, Semih Yavuz, Xi Lin
  • Publication number: 20210375269
    Abstract: Embodiments described herein utilize pre-trained masked language models as the backbone for dialogue act tagging and provide cross-domain generalization of the resulting dialogue act taggers. For example, the pre-trained MASK token of a BERT model may be used as a controllable mechanism for augmenting text input, e.g., generating tags for an input of unlabeled dialogue history. The model can be trained with semi-supervised learning, e.g., using multiple objectives from a supervised tagging loss, a masked tagging loss, a masked language model loss, and/or a disagreement loss. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: August 21, 2020
    Publication date: December 2, 2021
    Inventors: Semih Yavuz, Kazuma Hashimoto, Wenhao Liu, Nitish Shirish Keskar, Richard Socher, Caiming Xiong
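A minimal sketch of the two pieces of the entry above that are easy to show in isolation: the [MASK]-based input augmentation and the multi-objective semi-supervised loss. The masking probability, the equal loss weights, and the stand-in loss values are all illustrative assumptions.

```python
import random

random.seed(0)

def mask_augment(tokens, mask_prob=0.15, mask_token="[MASK]"):
    """Use BERT's pre-trained [MASK] token as a controllable augmentation:
    randomly mask tokens in the dialogue history to create a second view."""
    return [mask_token if random.random() < mask_prob else t for t in tokens]

def combined_objective(losses, weights=None):
    """Semi-supervised objective: a weighted sum of the supervised tagging
    loss, masked tagging loss, masked-LM loss, and disagreement loss."""
    weights = weights or {k: 1.0 for k in losses}
    return sum(weights[k] * v for k, v in losses.items())

history = "can you book a flight to boston tomorrow".split()
print("masked view:", " ".join(mask_augment(history)))

# Stand-in loss values (in training these come from the model's heads; the
# disagreement loss compares tag predictions on the original and masked views).
losses = {"supervised_tagging": 0.42, "masked_tagging": 0.55,
          "masked_lm": 1.30, "disagreement": 0.08}
print("total loss:", combined_objective(losses))
```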
  • Publication number: 20200402507
    Abstract: Training and/or utilizing a single neural network model to generate, at each of a plurality of assistant turns of a dialog session between a user and an automated assistant, a corresponding automated assistant natural language response and/or a corresponding automated assistant action. For example, at a given assistant turn of a dialog session, both a corresponding natural language response and a corresponding action can be generated jointly and based directly on output generated using the single neural network model. The corresponding response and/or corresponding action can be generated based on processing, using the neural network model, dialog history and a plurality of discrete resources. For example, the neural network model can be used to generate a response and/or action on a token-by-token basis.
    Type: Application
    Filed: June 24, 2020
    Publication date: December 24, 2020
    Inventors: Arvind Neelakantan, Daniel Duckworth, Ben Goodrich, Vishaal Prasad, Chinnadhurai Sankar, Semih Yavuz