Patents by Inventor Semih Yavuz
Semih Yavuz has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240347061
Abstract: Training and/or utilizing a single neural network model to generate, at each of a plurality of assistant turns of a dialog session between a user and an automated assistant, a corresponding automated assistant natural language response and/or a corresponding automated assistant action. For example, at a given assistant turn of a dialog session, both a corresponding natural language response and a corresponding action can be generated jointly and based directly on output generated using the single neural network model. The corresponding response and/or corresponding action can be generated based on processing, using the neural network model, dialog history and a plurality of discrete resources. For example, the neural network model can be used to generate a response and/or action on a token-by-token basis.
Type: Application
Filed: June 24, 2024
Publication date: October 17, 2024
Inventors: Arvind Neelakantan, Daniel Duckworth, Ben Goodrich, Vishaal Prasad, Chinnadhurai Sankar, Semih Yavuz
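The token-by-token joint decoding described in this abstract can be pictured with a short sketch. This is a minimal illustration in plain Python, assuming a hypothetical `model.next_token(context)` interface and an assumed `[action]` token prefix for separating action tokens from response tokens; the abstract does not specify the actual tokenization.

```python
# Minimal sketch of joint token-by-token decoding with a single model.
# `model.next_token(context)` and the "[action]" prefix are assumptions.

def decode_turn(model, dialog_history, resources, max_len=64, eos="[eos]"):
    """Generate one assistant turn: a response and/or an action, jointly."""
    context = list(dialog_history) + list(resources)  # one flat model input
    response_tokens, action_tokens = [], []
    for _ in range(max_len):
        token = model.next_token(context)   # single network for both outputs
        if token == eos:
            break
        context.append(token)               # feed back autoregressively
        if token.startswith("[action]"):
            action_tokens.append(token)
        else:
            response_tokens.append(token)
    return " ".join(response_tokens), action_tokens
```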
-
Patent number: 12105744
Abstract: Embodiments described herein provide a semantic parsing framework which may be referred to as Uni-Parser. The Uni-Parser framework may be applied to question answering on both knowledge bases and databases. The three main stages of the Uni-Parser framework are enumeration, ranking, and generation. At the enumeration stage, primitives are enumerated based on matching the question to the data structure. After enumerating primitives, the Uni-Parser framework may rank the primitives using a trained ranker model. The top-ranked primitives may then be used as inputs to a generator, a learned sequence-to-sequence model that produces a logical form.
Type: Grant
Filed: November 29, 2022
Date of Patent: October 1, 2024
Assignee: Salesforce, Inc.
Inventors: Ye Liu, Semih Yavuz, Yingbo Zhou, Rui Meng
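The three-stage pipeline reads naturally as code. Here is an illustrative sketch, where `enumerate_primitives`, `ranker`, and `generator` are hypothetical stand-ins for the trained components rather than the patented implementation.

```python
# Illustrative enumerate-rank-generate pipeline; all component interfaces
# are assumed placeholders.

def uni_parser(question, schema, enumerate_primitives, ranker, generator,
               top_k=5):
    # Stage 1: enumerate candidate primitives by matching the question
    # against the data structure (knowledge base or database schema).
    primitives = enumerate_primitives(question, schema)
    # Stage 2: rank the primitives with a trained ranker model.
    ranked = sorted(primitives, key=lambda p: ranker.score(question, p),
                    reverse=True)
    # Stage 3: the top-ranked primitives condition a seq2seq generator
    # that produces the final logical form.
    return generator.generate(question, ranked[:top_k])
```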
-
Publication number: 20240249113
Abstract: Embodiments described herein provide systems and methods for question answering using a hybrid question parser and executor model. The hybrid question parser and executor model includes a hybrid parser model and a hybrid executor model. The hybrid parser model includes a first neural network model, and generates a representation of an input question. The representation includes primitives and operations representing relationships among the primitives. The hybrid executor model generates an answer to the input question by executing the representation based on an input text document. The hybrid executor model includes an execution neural network model for executing the primitives of the representation, and an execution programming model for executing the operations of the representation.
Type: Application
Filed: June 14, 2023
Publication date: July 25, 2024
Inventors: Ye LIU, Semih YAVUZ, Rui MENG, Yingbo ZHOU
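The neural/programmatic split can be sketched as follows. The `parser` and `neural_exec` interfaces, and the attribute names on the representation object, are assumptions made for illustration.

```python
# Sketch of the hybrid split: a neural parser maps the question to primitives
# plus operations; primitives are executed neurally against the document,
# while operations run as ordinary program logic.

def hybrid_qa(question, document, parser, neural_exec):
    representation = parser.parse(question)       # primitives + operations
    values = {}
    for prim in representation.primitives:        # neural execution
        values[prim.name] = neural_exec.run(prim, document)
    answer = None
    for op in representation.operations:          # programmatic execution
        args = [values[name] for name in op.arg_names]
        answer = op.apply(*args)                  # e.g. compare, count, add
        values[op.output_name] = answer
    return answer
```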
-
Patent number: 12020706
Abstract: Training and/or utilizing a single neural network model to generate, at each of a plurality of assistant turns of a dialog session between a user and an automated assistant, a corresponding automated assistant natural language response and/or a corresponding automated assistant action. For example, at a given assistant turn of a dialog session, both a corresponding natural language response and a corresponding action can be generated jointly and based directly on output generated using the single neural network model. The corresponding response and/or corresponding action can be generated based on processing, using the neural network model, dialog history and a plurality of discrete resources. For example, the neural network model can be used to generate a response and/or action on a token-by-token basis.
Type: Grant
Filed: August 30, 2022
Date of Patent: June 25, 2024
Assignee: GOOGLE LLC
Inventors: Arvind Neelakantan, Daniel Duckworth, Ben Goodrich, Vishaal Prasad, Chinnadhurai Sankar, Semih Yavuz
-
Publication number: 20240202530
Abstract: Embodiments described herein provide systems and methods for training a text retrieval model. A system may generate queries associated with provided documents. The queries may be generated in one or more different manners. Examples of query generation may include extracting relevant spans of text from the documents and prompting a language model for a topic, title, abstractive summary, and/or extractive summary based on the documents. Metadata such as the title or other HTML tags may also be used as queries. The text retrieval model may then be trained with contrastive learning, pairing each generated query with positive and negative sample documents. A fine-tuning phase may be performed on domain-specific data, either with generated query pairs or in a supervised fashion with provided queries. The text retrieval model may be used to locate documents given an input query.
Type: Application
Filed: April 19, 2023
Publication date: June 20, 2024
Inventors: Rui Meng, Yingbo Zhou, Ye Liu, Semih Yavuz, Ning Yu
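The contrastive step with positive and negative sample documents is the standard dual-encoder recipe with in-batch negatives. The sketch below assumes encoder modules that map batches of text to embedding tensors; the abstract leaves the concrete architecture and the query-generation step open.

```python
# One contrastive-learning step for a dual-encoder retriever with in-batch
# negatives (InfoNCE-style). Encoders are placeholders.
import torch
import torch.nn.functional as F

def contrastive_step(query_encoder, doc_encoder, queries, documents,
                     temperature=0.05):
    q = F.normalize(query_encoder(queries), dim=-1)      # (B, d)
    d = F.normalize(doc_encoder(documents), dim=-1)      # (B, d)
    logits = q @ d.T / temperature                       # (B, B) similarities
    targets = torch.arange(q.size(0), device=q.device)   # diagonal positives
    return F.cross_entropy(logits, targets)
```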
-
Publication number: 20240176805
Abstract: Embodiments described herein provide a semantic parsing framework which may be referred to as Uni-Parser. The Uni-Parser framework may be applied to question answering on both knowledge bases and databases. The three main stages of the Uni-Parser framework are enumeration, ranking, and generation. At the enumeration stage, primitives are enumerated based on matching the question to the data structure. After enumerating primitives, the Uni-Parser framework may rank the primitives using a trained ranker model. The top-ranked primitives may then be used as inputs to a generator, a learned sequence-to-sequence model that produces a logical form.
Type: Application
Filed: November 29, 2022
Publication date: May 30, 2024
Inventors: Ye LIU, Semih YAVUZ, Yingbo ZHOU, Rui MENG
-
Publication number: 20230419027
Abstract: Embodiments described herein provide a prompt-based transfer learning method that employs shared latent space prompt tuning. Specifically, a shared latent space is assumed among all source and target tasks, where each vector in the space captures a basis skill for doing a particular task. Given an instance (from either a source task or a target task), the instance is first encoded into a representation vector that then queries the latent space, yielding a skill vector for the instance. This vector modulates a frozen model, via soft prompts that are a simple prompt transformation (the prompt generator in FIG. 3) of the basis skill vector, to generate an answer for the instance. The latent space and prompt transformation are learned end-to-end in upstream pre-training on source tasks.
Type: Application
Filed: November 30, 2022
Publication date: December 28, 2023
Inventors: Bo Pang, Semih Yavuz, Caiming Xiong, Yingbo Zhou
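The querying of the latent skill space can be sketched as a small module: the instance representation attends over learned basis-skill vectors, and the resulting skill vector is transformed into soft prompts for the frozen model. All dimension sizes and module names here are assumptions.

```python
# Sketch of a shared latent skill space queried by instance representations.
import torch
import torch.nn as nn

class SkillPromptGenerator(nn.Module):
    def __init__(self, n_skills=16, dim=512, prompt_len=8):
        super().__init__()
        self.skills = nn.Parameter(torch.randn(n_skills, dim))  # latent space
        self.to_prompt = nn.Linear(dim, prompt_len * dim)  # prompt transform
        self.prompt_len, self.dim = prompt_len, dim

    def forward(self, instance_repr):                    # (B, dim)
        attn = torch.softmax(instance_repr @ self.skills.T, dim=-1)
        skill = attn @ self.skills            # per-instance skill vector
        prompts = self.to_prompt(skill)       # soft prompts for frozen model
        return prompts.view(-1, self.prompt_len, self.dim)
```

Both `self.skills` (the latent space) and `self.to_prompt` (the prompt transformation) would be trained end-to-end during upstream pre-training on source tasks, as the abstract describes.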
-
Patent number: 11829721
Abstract: Embodiments described herein provide dynamic blocking, a decoding algorithm which enables large-scale pretrained language models to generate high-quality paraphrases in an unsupervised setting. Specifically, in order to obtain an alternative surface form, when the language model emits a token that is present in the source sequence, the language model is prevented, at the next time step, from generating the token that follows it in the source sequence. In this way, the language model is forced to generate a paraphrased sequence of the input source sequence, but with mostly different wording.
Type: Grant
Filed: January 28, 2021
Date of Patent: November 28, 2023
Assignee: salesforce.com, inc.
Inventors: Tong Niu, Semih Yavuz, Yingbo Zhou, Nitish Shirish Keskar, Huan Wang, Caiming Xiong
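A toy decoding loop makes the blocking rule concrete. The `model.next_token(prefix, banned)` interface is hypothetical; the point is only how the banned set is rebuilt after each emission.

```python
# Toy illustration of dynamic blocking: whenever the model emits a token that
# appears in the source, the token that immediately follows it in the source
# is banned at the next step, forcing a different surface form.

def decode_with_dynamic_blocking(model, source_tokens, max_len=64, eos="</s>"):
    output, banned = [], set()
    for _ in range(max_len):
        token = model.next_token(output, banned)
        if token == eos:
            break
        output.append(token)
        # Block each token that directly follows an occurrence of `token`
        # in the source sequence.
        banned = {source_tokens[i + 1]
                  for i, t in enumerate(source_tokens[:-1]) if t == token}
    return output
```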
-
Patent number: 11741142
Abstract: Embodiments described herein provide document summarization systems and methods that utilize fine-tuning of pre-trained abstractive summarization models to produce summaries that more faithfully track the content of the documents. Such abstractive summarization models may be pre-trained using a corpus consisting of pairs of articles and associated summaries. For each article-summary pair, a pseudo label or control code is generated and represents a faithfulness of the summary with respect to the article. The pre-trained model is then fine-tuned based on the article-summary pairs and the corresponding control codes. The resulting fine-tuned models then provide improved faithfulness in document summarization tasks.
Type: Grant
Filed: January 31, 2022
Date of Patent: August 29, 2023
Assignee: salesforce.com, inc.
Inventors: Haopeng Zheng, Semih Yavuz, Wojciech Kryscinski, Kazuma Hashimoto, Yingbo Zhou
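The data-preparation step can be sketched briefly. The `faithfulness_score` callable (for example, an entailment model) and the control-code strings are assumed placeholders; the abstract does not fix how the pseudo label is computed.

```python
# Sketch of control-code labeling: each article-summary pair gets a pseudo
# label marking how faithful the summary is to the article.

def build_finetuning_examples(pairs, faithfulness_score, threshold=0.8):
    examples = []
    for article, summary in pairs:
        score = faithfulness_score(article, summary)
        code = "<faithful>" if score >= threshold else "<hallucinated>"
        # The code conditions fine-tuning; at inference time, prepending
        # "<faithful>" requests a summary that tracks the article closely.
        examples.append((code + " " + article, summary))
    return examples
```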
-
Patent number: 11727210
Abstract: Embodiments described herein provide systems and methods for data-to-text generation. The embodiments receive input data that includes resource description framework (RDF) triples in an RDF graph. A data-to-text generation system generates position-aware embeddings, including position embeddings, triple role embeddings, and tree-level embeddings. Using the position-aware embeddings and the RDF graph, the data-to-text generation system generates a textual description for the RDF graph.
Type: Grant
Filed: January 29, 2021
Date of Patent: August 15, 2023
Assignee: Salesforce.com, Inc.
Inventors: Qingyun Wang, Nazneen Rajani, Semih Yavuz, Xi Lin
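The three embedding types can be illustrated with a small module over linearized triples. The vocabulary and dimension sizes, and the additive combination, are assumptions; the abstract only names the embedding types.

```python
# Sketch of position-aware embeddings for linearized RDF triples: each token
# receives a position embedding, a triple-role embedding (subject, predicate,
# or object), and a tree-level embedding for its depth in the RDF graph.
import torch
import torch.nn as nn

class PositionAwareEmbedding(nn.Module):
    def __init__(self, vocab=30000, dim=256, max_pos=512, roles=3, levels=8):
        super().__init__()
        self.tok = nn.Embedding(vocab, dim)
        self.pos = nn.Embedding(max_pos, dim)    # position embeddings
        self.role = nn.Embedding(roles, dim)     # triple-role embeddings
        self.level = nn.Embedding(levels, dim)   # tree-level embeddings

    def forward(self, token_ids, role_ids, level_ids):  # all (B, L)
        positions = torch.arange(token_ids.size(1), device=token_ids.device)
        return (self.tok(token_ids) + self.pos(positions)
                + self.role(role_ids) + self.level(level_ids))
```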
-
Publication number: 20230229957
Abstract: Methods, apparatuses, and computer-program products are disclosed. The method may include inputting one or more subcomponent training datasets into the plurality of subcomponent models of a machine learning model, where the machine learning model may be configured to perform a final task and the plurality of subcomponent models may be configured to perform sequential subtasks that result in the final task. The method may include computing one or more weights for data points of the one or more subcomponent training datasets, where the one or more weights may be based on a contribution of the data points to an end-to-end error loss measurement associated with performing the final task of the machine learning model. The method may include training the plurality of subcomponent models based on the one or more weights for the data points of the one or more subcomponent training datasets.
Type: Application
Filed: January 14, 2022
Publication date: July 20, 2023
Inventors: Shuyang Li, Yingbo Zhou, Semih Yavuz, Govardana Sachithanandam Ramachandran
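One plausible reading of the weighting step, sketched in plain Python: each data point's weight is proportional to its contribution to the end-to-end error of the final task. The `end_to_end_error` callable is hypothetical; the claims cover the general scheme, not this particular rule.

```python
# Sketch of weighting subcomponent training data by end-to-end error
# contribution; the exact weighting rule is an assumption.

def weight_subcomponent_data(dataset, end_to_end_error):
    """Return one normalized weight per data point."""
    contributions = [end_to_end_error(point) for point in dataset]
    total = sum(contributions) or 1.0        # guard against all-zero error
    return [c / total for c in contributions]
```

Each subcomponent model would then be trained on its usual subtask loss, with each point's loss scaled by its weight.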
-
Publication number: 20230054068
Abstract: Embodiments described herein provide document summarization systems and methods that utilize fine-tuning of pre-trained abstractive summarization models to produce summaries that more faithfully track the content of the documents. Such abstractive summarization models may be pre-trained using a corpus consisting of pairs of articles and associated summaries. For each article-summary pair, a pseudo label or control code is generated and represents a faithfulness of the summary with respect to the article. The pre-trained model is then fine-tuned based on the article-summary pairs and the corresponding control codes. The resulting fine-tuned models then provide improved faithfulness in document summarization tasks.
Type: Application
Filed: January 31, 2022
Publication date: February 23, 2023
Inventors: Haopeng Zheng, Semih Yavuz, Wojciech Kryscinski, Kazuma Hashimoto, Yingbo Zhou
-
Publication number: 20230055188
Abstract: Embodiments described herein provide a question answering approach that answers a question by generating an executable logical form. First, a ranking model is used to select a set of good logical forms from a pool of logical forms obtained by searching over a knowledge graph. The selected logical forms are good in the sense that they are close to (or exactly match, in some cases) the intents in the question and the final desired logical form. Next, a generation model conditioned on the question as well as the selected logical forms generates the target logical form, which is executed to obtain the final answer. For example, at the inference stage, when a question is received, a matching logical form is identified from the question, and the final answer can then be generated from the node associated with that logical form in the knowledge base.
Type: Application
Filed: December 29, 2021
Publication date: February 23, 2023
Inventors: Xi Ye, Semih Yavuz, Kazuma Hashimoto, Yingbo Zhou
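The rank-then-generate pipeline can be sketched directly. All component interfaces (`kg`, `ranker`, `generator`, `executor`) are assumed placeholders for the trained models and the knowledge-graph search described above.

```python
# Sketch of rank-then-generate question answering over a knowledge graph.

def answer_question(question, kg, ranker, generator, executor, top_k=5):
    candidates = kg.search_logical_forms(question)     # pool from KG search
    ranked = sorted(candidates,
                    key=lambda lf: ranker.score(question, lf), reverse=True)
    # A generator conditioned on the question and top candidates emits the
    # final executable logical form.
    target_lf = generator.generate(question, ranked[:top_k])
    return executor.execute(target_lf)                 # run it for the answer
```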
-
Publication number: 20230059870
Abstract: Embodiments described herein provide a question answering approach that answers a question by generating an executable logical form. First, a ranking model is used to select a set of good logical forms from a pool of logical forms obtained by searching over a knowledge graph. The selected logical forms are good in the sense that they are close to (or exactly match, in some cases) the intents in the question and the final desired logical form. Next, a generation model conditioned on the question as well as the selected logical forms generates the target logical form, which is executed to obtain the final answer. For example, at the inference stage, when a question is received, a matching logical form is identified from the question, and the final answer can then be generated from the node associated with that logical form in the knowledge base.
Type: Application
Filed: December 29, 2021
Publication date: February 23, 2023
Inventors: Xi Ye, Semih Yavuz, Kazuma Hashimoto, Yingbo Zhou
-
Publication number: 20220415324
Abstract: Training and/or utilizing a single neural network model to generate, at each of a plurality of assistant turns of a dialog session between a user and an automated assistant, a corresponding automated assistant natural language response and/or a corresponding automated assistant action. For example, at a given assistant turn of a dialog session, both a corresponding natural language response and a corresponding action can be generated jointly and based directly on output generated using the single neural network model. The corresponding response and/or corresponding action can be generated based on processing, using the neural network model, dialog history and a plurality of discrete resources. For example, the neural network model can be used to generate a response and/or action on a token-by-token basis.
Type: Application
Filed: August 30, 2022
Publication date: December 29, 2022
Inventors: Arvind Neelakantan, Daniel Duckworth, Ben Goodrich, Vishaal Prasad, Chinnadhurai Sankar, Semih Yavuz
-
Publication number: 20220383159
Abstract: Embodiments described herein provide a fusion-in-decoder (FID) based model (referred to as "PATHID") for open-domain multi-hop question answering. Specifically, PATHID addresses the gap between the general behavior of the FID model on single-hop and multi-hop question answering, and provides more transparency into the reasoning path. In addition to answer generation, PATHID explicitly models the full reasoning path to resolve the answer with a generative sequence-to-sequence model.
Type: Application
Filed: November 23, 2021
Publication date: December 1, 2022
Inventors: Semih Yavuz, Kazuma Hashimoto, Yingbo Zhou
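The fusion-in-decoder pattern with an explicit reasoning path can be sketched as follows. The encoder/decoder interfaces and the output format are assumptions; only the encode-separately, fuse, decode-once structure is taken from the FID framing in the abstract.

```python
# Sketch of fusion-in-decoder with an explicit reasoning path: each passage
# is encoded with the question independently, the encodings are concatenated,
# and one decoder generates the reasoning path followed by the answer.
import torch

def pathid_generate(encoder, decoder, question, passages):
    # Encode (question, passage) pairs independently, FID-style.
    encoded = [encoder(f"question: {question} passage: {p}")
               for p in passages]
    fused = torch.cat(encoded, dim=1)  # concatenate along the sequence axis
    # The decoder output is assumed to look like
    # "path: <passage ids> answer: <answer>", making the hops explicit.
    return decoder.generate(fused)
```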
-
Publication number: 20220374459
Abstract: Embodiments described herein provide dense hierarchical retrieval for open-domain question answering over a corpus of documents, using a document-level and a passage-level dense retrieval model. Specifically, each document is viewed as a structural collection that has sections, subsections, and paragraphs. Each document may be split into short passages, where a document-level retrieval model and a passage-level retrieval model may be applied to return a smaller set of filtered texts. Top documents are identified after encoding the question and the documents and determining document relevance scores against the encoded question. Thereafter, a set of top passages is further identified based on encoding of the passages and determining passage relevance scores against the encoded question. The document and passage relevance scores may be used in combination to determine a final retrieval ranking for the documents having the set of top passages.
Type: Application
Filed: November 23, 2021
Publication date: November 24, 2022
Inventors: Ye Liu, Kazuma Hashimoto, Yingbo Zhou, Semih Yavuz, Caiming Xiong
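The two-level scoring can be sketched end to end. The encoders, the `.text`/`.passages` document fields, and the additive score combination are all assumptions; the abstract only specifies that document and passage scores are combined.

```python
# Sketch of dense hierarchical retrieval: score documents first, then score
# passages within the top documents, and combine both scores for the final
# ranking.
import torch.nn.functional as F

def hierarchical_retrieve(q_enc, doc_enc, psg_enc, question, docs,
                          k_docs=10, k_psgs=5):
    q = F.normalize(q_enc(question), dim=-1)            # encoded question
    d_scores = [(q @ F.normalize(doc_enc(d.text), dim=-1)).item()
                for d in docs]                          # document relevance
    top = sorted(zip(d_scores, docs), key=lambda x: x[0],
                 reverse=True)[:k_docs]

    candidates = []
    for d_score, doc in top:
        for p in doc.passages:                          # short split passages
            p_score = (q @ F.normalize(psg_enc(p), dim=-1)).item()
            candidates.append((d_score + p_score, p))   # combined score
    candidates.sort(key=lambda x: x[0], reverse=True)
    return [p for _, p in candidates[:k_psgs]]
```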
-
Patent number: 11475890
Abstract: Training and/or utilizing a single neural network model to generate, at each of a plurality of assistant turns of a dialog session between a user and an automated assistant, a corresponding automated assistant natural language response and/or a corresponding automated assistant action. For example, at a given assistant turn of a dialog session, both a corresponding natural language response and a corresponding action can be generated jointly and based directly on output generated using the single neural network model. The corresponding response and/or corresponding action can be generated based on processing, using the neural network model, dialog history and a plurality of discrete resources. For example, the neural network model can be used to generate a response and/or action on a token-by-token basis.
Type: Grant
Filed: June 24, 2020
Date of Patent: October 18, 2022
Assignee: GOOGLE LLC
Inventors: Arvind Neelakantan, Daniel Duckworth, Ben Goodrich, Vishaal Prasad, Chinnadhurai Sankar, Semih Yavuz
-
Publication number: 20220129629
Abstract: Embodiments described herein provide dynamic blocking, a decoding algorithm which enables large-scale pretrained language models to generate high-quality paraphrases in an unsupervised setting. Specifically, in order to obtain an alternative surface form, when the language model emits a token that is present in the source sequence, the language model is prevented, at the next time step, from generating the token that follows it in the source sequence. In this way, the language model is forced to generate a paraphrased sequence of the input source sequence, but with mostly different wording.
Type: Application
Filed: January 28, 2021
Publication date: April 28, 2022
Inventors: Tong Niu, Semih Yavuz, Yingbo Zhou, Nitish Shirish Keskar, Huan Wang, Caiming Xiong
-
Publication number: 20220050964
Abstract: Embodiments described herein provide systems and methods for data-to-text generation. The embodiments receive input data that includes resource description framework (RDF) triples in an RDF graph. A data-to-text generation system generates position-aware embeddings, including position embeddings, triple role embeddings, and tree-level embeddings. Using the position-aware embeddings and the RDF graph, the data-to-text generation system generates a textual description for the RDF graph.
Type: Application
Filed: January 29, 2021
Publication date: February 17, 2022
Inventors: Qingyun Wang, Nazneen Rajani, Semih Yavuz, Xi Lin