Patents by Inventor Noah Constant

Noah Constant has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO). Short, illustrative code sketches of the techniques described in the abstracts follow the listing.

  • Publication number: 20240020546
    Abstract: Systems and methods for prompt tuning can utilize previously learned prompts to initialize prompt tuning on tasks that may differ from the task associated with the previously learned prompt. The prompt utilized for initialization can be a generic prompt and/or a prompt selected based on a determined similarity between two or more task embeddings.
    Type: Application
    Filed: July 13, 2022
    Publication date: January 18, 2024
    Inventors: Tu Thanh Vu, Daniel Matthew Cer, Noah Constant, Brian David Lester, Rami Al-Rfou
  • Publication number: 20230325725
    Abstract: Systems and methods for natural language processing can leverage trained prompts to condition a large pre-trained machine-learned model to generate an output for a specific task. For example, a subset of parameters may be trained for the particular task and then input, together with a set of input data, into the pre-trained machine-learned model to generate the task-specific output. During the training of the prompt, the parameters of the pre-trained machine-learned model can be frozen, which can reduce the computational resources used during training while still leveraging the previously learned data from the pre-trained machine-learned model.
    Type: Application
    Filed: April 12, 2022
    Publication date: October 12, 2023
    Inventors: Brian David Lester, Rami Al-Rfou, Noah Constant
  • Publication number: 20230274100
    Abstract: The technology provides a model-based approach for multilingual text rewriting that is applicable across many languages and across different styles including formality levels or other textual attributes. The model is configured to manipulate both language and textual attributes jointly. This approach supports zero-shot formality-sensitive translation, with no labeled data in the target language. An encoder-decoder architectural approach with attribute extraction is used to train rewriter models that can thus be used in “universal” textual rewriting across many different languages. A cross-lingual learning signal can be incorporated into the training approach. Certain training processes do not employ any exemplars. This approach enables not just straight translation, but also the ability to create new sentences with different attributes.
    Type: Application
    Filed: February 28, 2022
    Publication date: August 31, 2023
    Inventors: Xavier Eduardo Garcia, Orhan Firat, Noah Constant, Xiaoyue Guo
  • Patent number: 11238211
    Abstract: A system may use a machine-learned model to determine whether to classify a sequence of one or more words within a first document that is being edited as a candidate hyperlink, based at least in part on context associated with the first document. In response to classifying the sequence of one or more words as the candidate hyperlink, the system may use the machine-learned model, based at least in part on the sequence of one or more words and the context, to determine one or more candidate documents to be hyperlinked from the sequence of one or more words. In response to receiving an indication of a second document being selected out of the one or more candidate documents, the system may modify the first document to associate the sequence of one or more words with a hyperlink to the second document.
    Type: Grant
    Filed: March 14, 2019
    Date of Patent: February 1, 2022
    Assignee: Google LLC
    Inventors: Jan van de Kerkhof, Balint Miklos, Amr Abdelfattah, Tobias Kaufmann, László Lukács, Bjarke Ebert, Victor Anchidin, Brian Strope, Heeyoung Lee, Yun-hsuan Sung, Noah Constant, Neil Smith
  • Publication number: 20200410157
    Abstract: A system may use a machine-learned model to determine whether to classify a sequence of one or more words within a first document that is being edited as a candidate hyperlink, based at least in part on context associated with the first document. In response to classifying the sequence of one or more words as the candidate hyperlink, the system may use the machine-learned model, based at least in part on the sequence of one or more words and the context, to determine one or more candidate documents to be hyperlinked from the sequence of one or more words. In response to receiving an indication of a second document being selected out of the one or more candidate documents, the system may modify the first document to associate the sequence of one or more words with a hyperlink to the second document.
    Type: Application
    Filed: March 14, 2019
    Publication date: December 31, 2020
    Inventors: Jan van de Kerkhof, Balint Miklos, Amr Abdelfattah, Tobias Kaufmann, László Lukács, Bjarke Ebert, Victor Anchidin, Brian Strope, Heeyoung Lee, Yun-hsuan Sung, Noah Constant, Neil Smith
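
Illustrative code sketches

The sketches below are not taken from the filings; they are minimal, hedged illustrations of the techniques the abstracts describe, and every model, name, and parameter choice in them is invented for illustration.

For publication 20240020546, this first sketch shows one way a previously learned prompt could be selected, via cosine similarity between task embeddings, to initialize prompt tuning on a new task. The prompt library, the task embeddings, and the use of cosine similarity as the similarity measure are assumptions.

```python
# Selecting a previously learned prompt to initialize tuning on a new task.
# All prompts and task embeddings here are random stand-ins for illustration.
import torch
import torch.nn.functional as F

d_model, prompt_len = 64, 8

# Library of prompts already tuned on earlier tasks, plus an embedding for each task.
prompt_library = {
    "sentiment": torch.randn(prompt_len, d_model),
    "entailment": torch.randn(prompt_len, d_model),
    "summarization": torch.randn(prompt_len, d_model),
}
task_embeddings = {name: torch.randn(d_model) for name in prompt_library}

def init_prompt_for(new_task_embedding):
    # Pick the source task whose embedding is most similar to the new task's embedding.
    best_task = max(
        task_embeddings,
        key=lambda t: F.cosine_similarity(
            task_embeddings[t], new_task_embedding, dim=0
        ).item(),
    )
    # Initialize the new task's prompt from the selected (or a generic) prompt.
    return best_task, prompt_library[best_task].clone().requires_grad_(True)

new_task_embedding = torch.randn(d_model)
source_task, new_prompt = init_prompt_for(new_task_embedding)
print(f"initializing from '{source_task}' prompt; shape {tuple(new_prompt.shape)}")
```

The selected prompt would then be tuned with the base model held frozen, as in the next sketch.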
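For publication 20230325725, this sketch illustrates prompt tuning against a frozen model: every pre-trained parameter is frozen and only a small set of continuous prompt embeddings, prepended to the input, receives gradients. The tiny encoder stands in for a large pre-trained model and is not the architecture claimed in the application.

```python
# Prompt tuning with a frozen "pre-trained" model; only the soft prompt is trained.
import torch
import torch.nn as nn

d_model, vocab, prompt_len, num_classes = 64, 1000, 8, 3

# Stand-in for a large pre-trained model: token embedding + one transformer layer + head.
embed = nn.Embedding(vocab, d_model)
encoder = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
head = nn.Linear(d_model, num_classes)

# Freeze every pre-trained parameter; only the prompt will receive gradients.
for module in (embed, encoder, head):
    for p in module.parameters():
        p.requires_grad = False

# The trainable soft prompt: a small set of continuous "virtual token" embeddings.
prompt = nn.Parameter(torch.randn(prompt_len, d_model) * 0.02)
optimizer = torch.optim.Adam([prompt], lr=1e-3)

def forward(token_ids):
    # Prepend the learned prompt to the (frozen) input embeddings.
    x = embed(token_ids)                               # (batch, seq, d_model)
    p = prompt.unsqueeze(0).expand(x.size(0), -1, -1)  # (batch, prompt_len, d_model)
    h = encoder(torch.cat([p, x], dim=1))
    return head(h.mean(dim=1))                         # pooled logits

# One illustrative training step on random data.
tokens = torch.randint(0, vocab, (4, 16))
labels = torch.randint(0, num_classes, (4,))
loss = nn.functional.cross_entropy(forward(tokens), labels)
loss.backward()
optimizer.step()
```

Because only the prompt parameters are updated, the optimizer state and gradients cover a tiny fraction of the model, which is the source of the training-cost reduction the abstract mentions.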
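For publication 20230274100, the sketch below conditions a toy encoder-decoder on both a target-language tag and a textual-attribute tag (for example, formality), so language and attribute are manipulated jointly. The control-embedding scheme, the tag vocabularies, and the architecture (decoder masking omitted) are assumptions made for illustration, not the patented model.

```python
# A toy multilingual rewriter conditioned on target language and a textual attribute.
import torch
import torch.nn as nn

d_model, vocab = 64, 1000
languages = {"en": 0, "fr": 1, "de": 2}
attributes = {"formal": 0, "informal": 1}

token_embed = nn.Embedding(vocab, d_model)
lang_embed = nn.Embedding(len(languages), d_model)
attr_embed = nn.Embedding(len(attributes), d_model)
seq2seq = nn.Transformer(d_model, nhead=4, num_encoder_layers=2,
                         num_decoder_layers=2, batch_first=True)
out_proj = nn.Linear(d_model, vocab)

def rewrite_logits(src_ids, tgt_ids, tgt_lang, tgt_attr):
    # Encode the source sentence; the same encoder serves all input languages.
    src = token_embed(src_ids)
    # Prepend control embeddings for the desired output language and attribute,
    # so the decoder manipulates language and style jointly.
    controls = torch.stack([lang_embed(tgt_lang), attr_embed(tgt_attr)], dim=1)
    tgt = torch.cat([controls, token_embed(tgt_ids)], dim=1)
    hidden = seq2seq(src, tgt)
    return out_proj(hidden)

src_ids = torch.randint(0, vocab, (2, 12))
tgt_ids = torch.randint(0, vocab, (2, 10))
tgt_lang = torch.tensor([languages["fr"], languages["de"]])
tgt_attr = torch.tensor([attributes["formal"], attributes["informal"]])
logits = rewrite_logits(src_ids, tgt_ids, tgt_lang, tgt_attr)
print(logits.shape)  # (2, 12, 1000): 2 control positions + 10 target tokens
```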
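For patent 11238211 and its pre-grant publication 20200410157, the sketch below walks through the two-stage flow the abstract describes: classify a span of words in a document being edited as a candidate hyperlink based on context, then rank candidate target documents for that span. The bag-of-words featurization, the two small models, and the threshold are illustrative assumptions, and the untrained weights produce arbitrary scores; the code only shows the flow.

```python
# Two-stage hyperlink suggestion: flag a span as a candidate, then rank target documents.
import torch
import torch.nn as nn

d_feat = 32

def featurize(text):
    # Toy context/span featurizer: hash words into a fixed-size bag-of-words vector.
    v = torch.zeros(d_feat)
    for w in text.lower().split():
        v[hash(w) % d_feat] += 1.0
    return v

# Stage 1: score whether a span, in the context of the document, should be a hyperlink.
span_classifier = nn.Sequential(nn.Linear(2 * d_feat, 16), nn.ReLU(), nn.Linear(16, 1))

# Stage 2: score how well each candidate target document matches the span and context.
doc_ranker = nn.Sequential(nn.Linear(3 * d_feat, 16), nn.ReLU(), nn.Linear(16, 1))

def suggest_link(document, span, candidate_docs, threshold=0.0):
    ctx, sp = featurize(document), featurize(span)
    if span_classifier(torch.cat([ctx, sp])).item() < threshold:
        return None  # span not classified as a candidate hyperlink
    scores = {
        title: doc_ranker(torch.cat([ctx, sp, featurize(body)])).item()
        for title, body in candidate_docs.items()
    }
    # Return candidates best-first; the user picks one and the editor inserts the link.
    return sorted(scores, key=scores.get, reverse=True)

doc = "Meeting notes: we should link the quarterly roadmap here"
candidates = {"Q3 Roadmap": "quarterly roadmap and milestones",
              "Team offsite": "travel plans and agenda"}
print(suggest_link(doc, "quarterly roadmap", candidates))
```

In the claimed system, the final step would modify the edited document so the selected span carries a hyperlink to whichever candidate document the user chooses.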