Patents by Inventor Aliaksei Severyn

Aliaksei Severyn has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11881210
    Abstract: A method for generating a prosodic representation includes receiving a text utterance having one or more words. Each word has at least one syllable having at least one phoneme. The method also includes generating, using a Bidirectional Encoder Representations from Transformers (BERT) model, a sequence of wordpiece embeddings and selecting an utterance embedding for the text utterance, the utterance embedding representing an intended prosody. Each wordpiece embedding is associated with one of the one or more words of the text utterance. For each syllable, using the selected utterance embedding and a prosody model that incorporates the BERT model, the method also includes generating a corresponding prosodic syllable embedding for the syllable based on the wordpiece embedding associated with the word that includes the syllable and predicting a duration of the syllable by encoding linguistic features of each phoneme of the syllable with the corresponding prosodic syllable embedding for the syllable.
    Type: Grant
    Filed: May 5, 2020
    Date of Patent: January 23, 2024
    Assignee: Google LLC
    Inventors: Tom Marius Kenter, Manish Kumar Sharma, Robert Andrew James Clark, Aliaksei Severyn
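
The pipeline this abstract describes (a word's BERT wordpiece embedding fused with an utterance-level prosody embedding to form a syllable embedding, which is then combined with per-phoneme linguistic features to predict syllable duration) can be illustrated with a minimal sketch. The module below is an illustration only; its dimensions, layer choices, and name are invented and it is not the patented model.

```python
import torch
import torch.nn as nn

class ProsodySyllableSketch(nn.Module):
    """Toy model: fuse a word's wordpiece embedding (e.g., from BERT) with an
    utterance-level prosody embedding into a syllable embedding, then encode
    per-phoneme linguistic features with it to predict syllable duration.
    All dimensions and layers are invented for illustration."""

    def __init__(self, wordpiece_dim=768, utterance_dim=256,
                 phoneme_feat_dim=32, hidden_dim=128):
        super().__init__()
        self.syllable_proj = nn.Linear(wordpiece_dim + utterance_dim, hidden_dim)
        self.phoneme_encoder = nn.GRU(phoneme_feat_dim + hidden_dim,
                                      hidden_dim, batch_first=True)
        self.duration_head = nn.Linear(hidden_dim, 1)

    def forward(self, wordpiece_emb, utterance_emb, phoneme_feats):
        # wordpiece_emb: (batch, 768), for the word containing the syllable
        # utterance_emb: (batch, 256), representing the intended prosody
        # phoneme_feats: (batch, n_phonemes, 32), linguistic features
        syllable_emb = torch.tanh(self.syllable_proj(
            torch.cat([wordpiece_emb, utterance_emb], dim=-1)))
        # Pair the syllable embedding with each phoneme's features.
        tiled = syllable_emb.unsqueeze(1).expand(-1, phoneme_feats.size(1), -1)
        encoded, _ = self.phoneme_encoder(
            torch.cat([phoneme_feats, tiled], dim=-1))
        # One duration prediction per syllable, from the last phoneme state.
        return self.duration_head(encoded[:, -1])

model = ProsodySyllableSketch()
duration = model(torch.randn(2, 768), torch.randn(2, 256), torch.randn(2, 5, 32))
print(duration.shape)  # torch.Size([2, 1])
```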
  • Publication number: 20230376676
    Abstract: Provided are improved machine learning-based text editing models. Specifically, example implementations include a flexible semi-autoregressive text-editing approach for generation, designed to derive the maximum benefit from non-autoregressive text editing and autoregressive decoding. In contrast to conventional sequence-to-sequence (seq2seq) models, the proposed approach is fast at inference time, while being capable of modeling flexible input-output transformations.
    Type: Application
    Filed: May 23, 2022
    Publication date: November 23, 2023
    Inventors: Jonathan Stephen Mallinson, Aliaksei Severyn, Eric Emil Malmi, Jakub Adamek
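
The contrast drawn in this abstract can be made concrete with a toy edit-application step: most target tokens are copied from the source via parallel (non-autoregressive) tagging, and only inserted spans would come from a small autoregressive decoder. The tag names and the `insertions` mapping below are hypothetical simplifications, not the patented model.

```python
# Toy edit application: tags are predicted non-autoregressively (one per
# source token, in parallel); only the tokens in `insertions` would be
# produced by a short autoregressive decoding pass.

def apply_edits(src_tokens, tags, insertions):
    """Rebuild the target: keep/delete source tokens per tag, and splice in
    autoregressively generated tokens at the positions in `insertions`."""
    out = []
    for i, (token, tag) in enumerate(zip(src_tokens, tags)):
        out.extend(insertions.get(i, []))  # infilled span before position i
        if tag == "KEEP":
            out.append(token)
    out.extend(insertions.get(len(src_tokens), []))
    return out

src = ["the", "cat", "sat", "sat", "down"]
tags = ["KEEP", "KEEP", "KEEP", "DELETE", "KEEP"]
print(" ".join(apply_edits(src, tags, {})))  # "the cat sat down"
```

Because every source token is tagged in a single parallel pass and insertions are typically short, inference is much faster than decoding the full target token by token, which is the speed advantage the abstract claims over conventional seq2seq models.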
  • Publication number: 20230342411
    Abstract: Techniques for generating short answers to queries in a search engine include performing a training operation on a corpus of training data to train a score prediction engine. The corpus of training data includes candidate passages that provide short answers for display in callouts, together with the remaining respective passages, from which a top-scoring short answer is generated. In such implementations, the corpus of training data further includes the respective titles of the candidate passages and the remaining respective passages.
    Type: Application
    Filed: March 9, 2022
    Publication date: October 26, 2023
    Inventors: Preyas Dalsukhbhai Popat, Gaurav Bhaskar Gite, John Blitzer, Jayant Madhavan, Aliaksei Severyn
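
As a rough illustration of the inference step this abstract implies (score each candidate passage, surface the top-scoring one as the callout), here is a toy scorer. The overlap features and weights are invented placeholders; a trained score prediction engine would replace them.

```python
# Hypothetical stand-in for a trained score prediction engine: scores a
# passage and its title against the query by simple token overlap.

def score(query, passage, title, w_passage=1.0, w_title=0.5):
    q = set(query.lower().split())
    passage_overlap = len(q & set(passage.lower().split())) / max(len(q), 1)
    title_overlap = len(q & set(title.lower().split())) / max(len(q), 1)
    return w_passage * passage_overlap + w_title * title_overlap

candidates = [
    ("Mount Everest is 8,849 metres tall.", "Mount Everest"),
    ("K2 is the second-highest mountain on Earth.", "K2"),
]
query = "how tall is mount everest"
best_passage, _ = max(candidates, key=lambda c: score(query, *c))
print(best_passage)  # the top-scoring passage supplies the callout answer
```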
  • Publication number: 20220405490
    Abstract: A method of training a text-generating model for grammatical error correction (GEC) includes obtaining a multilingual set of text samples, where each text sample includes a monolingual textual representation of a respective sentence. The method also includes, for each text sample in the multilingual set, generating a corrupted synthetic version of that text sample, the corrupted synthetic version including a grammatical change to the monolingual textual representation of the respective sentence. The method further includes training the text-generating model using a training set of sample pairs, where each sample pair includes one of the text samples from the multilingual set and the corresponding corrupted synthetic version of that text sample.
    Type: Application
    Filed: June 16, 2021
    Publication date: December 22, 2022
    Applicant: Google LLC
    Inventors: Sebastian Krause, Sascha Rothe, Jonathan Mallinson, Eric Malmi, Aliaksei Severyn
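
The data-generation idea at the core of this abstract (pair each clean sentence with a synthetically corrupted copy to form training pairs) is easy to sketch. The two corruption operations below, a random adjacent-token swap and a random token drop, are illustrative stand-ins for the grammatical changes the method actually introduces.

```python
import random

# Build (corrupted source, clean target) pairs for GEC training by applying
# a synthetic corruption to each clean sentence in a multilingual corpus.

def corrupt(sentence, rng):
    tokens = sentence.split()
    if len(tokens) > 1 and rng.random() < 0.5:
        i = rng.randrange(len(tokens) - 1)
        tokens[i], tokens[i + 1] = tokens[i + 1], tokens[i]  # word-order error
    elif tokens:
        tokens.pop(rng.randrange(len(tokens)))  # missing-word error
    return " ".join(tokens)

rng = random.Random(0)
corpus = ["She goes to school every day.", "Er liest gern Bücher."]  # multilingual
pairs = [(corrupt(s, rng), s) for s in corpus]  # (corrupted source, clean target)
for source, target in pairs:
    print(f"{source!r} -> {target!r}")
```

A model trained on such pairs learns to map the corrupted source back to the clean target, which is exactly the correction behavior wanted at inference time on real ungrammatical input.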
  • Publication number: 20210350795
    Abstract: A method for generating a prosodic representation includes receiving a text utterance having one or more words. Each word has at least one syllable having at least one phoneme. The method also includes generating, using a Bidirectional Encoder Representations from Transformers (BERT) model, a sequence of wordpiece embeddings and selecting an utterance embedding for the text utterance, the utterance embedding representing an intended prosody. Each wordpiece embedding is associated with one of the one or more words of the text utterance. For each syllable, using the selected utterance embedding and a prosody model that incorporates the BERT model, the method also includes generating a corresponding prosodic syllable embedding for the syllable based on the wordpiece embedding associated with the word that includes the syllable and predicting a duration of the syllable by encoding linguistic features of each phoneme of the syllable with the corresponding prosodic syllable embedding for the syllable.
    Type: Application
    Filed: May 5, 2020
    Publication date: November 11, 2021
    Applicant: Google LLC
    Inventors: Tom Marius Kenter, Manish Kumar Sharma, Robert Andrew James Clark, Aliaksei Severyn
  • Publication number: 20170270407
    Abstract: A method includes training a neural network on training data, in which the neural network receives an input state and processes the input state to generate a respective score for each decision in a set of decisions. The method includes receiving training data that includes training text sequences and, for each training text sequence, a corresponding gold decision sequence, and training the neural network on the training data to determine trained values of the parameters of the neural network. Training the neural network includes, for each training text sequence: maintaining a beam of candidate decision sequences for the training text sequence, updating each candidate decision sequence by adding one decision at a time, determining that a gold candidate decision sequence matching a prefix of the gold decision sequence has dropped out of the beam, and, in response, performing an iteration of gradient descent to optimize an objective function.
    Type: Application
    Filed: January 17, 2017
    Publication date: September 21, 2017
    Inventors: Christopher Alberti, Aliaksei Severyn, Daniel Andor, Slav Petrov, Kuzman Ganchev Ganchev, David Joseph Weiss, Michael John Collins, Alessandro Presta
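
The training loop this abstract describes follows the early-update scheme familiar from structured prediction with beam search: advance the beam one decision at a time and, as soon as the gold prefix is no longer on the beam, stop and take a gradient step there. The sketch below shows the control flow only; `score_fn` is a hypothetical stand-in for the network's per-decision scores, and the actual gradient computation is elided.

```python
# Beam training with an early update. When the gold prefix drops out of the
# beam, training on this sequence stops and (in a real system) one iteration
# of gradient descent on the objective would be performed at that point.

def beam_train_step(gold_sequence, decisions, score_fn, beam_size=4):
    beam = [((), 0.0)]  # candidate (decision prefix, cumulative score)
    for t, _ in enumerate(gold_sequence):
        # Expand every candidate prefix by every possible decision.
        expanded = [(prefix + (d,), s + score_fn(prefix, d))
                    for prefix, s in beam for d in decisions]
        beam = sorted(expanded, key=lambda x: -x[1])[:beam_size]
        gold_prefix = tuple(gold_sequence[:t + 1])
        if all(prefix != gold_prefix for prefix, _ in beam):
            # Gold prefix dropped out of the beam: early update happens here.
            return t + 1, beam
    return len(gold_sequence), beam

# Toy run: the scorer slightly prefers "a", while the gold sequence is all
# "b", so the gold prefix eventually falls off a beam of size 4.
step, _ = beam_train_step(("b", "b", "b"), ("a", "b"),
                          lambda prefix, d: 1.0 if d == "a" else 0.9)
print("early update at step", step)  # early update at step 3
```

Updating at the moment the gold prefix leaves the beam, rather than after full decoding, directs the gradient at the search error itself, which is the point of performing the gradient iteration "in response" to the drop-out.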