Patents by Inventor Laurent BESACIER

Laurent BESACIER has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230214605
    Abstract: Methods and systems for unsupervised training of a neural multilingual sequence-to-sequence (seq2seq) model. Denoising adapters for each of one or more languages are inserted into an encoder and/or a decoder of the seq2seq model. Parameters of the one or more denoising adapters are trained on a language-specific denoising task using monolingual text for each of the one or more languages. Cross-attention weights of the seq2seq model with the trained denoising adapter layers are then fine-tuned on a translation task in at least one of the one or more languages with parallel data. (A minimal illustrative sketch appears after this listing.)
    Type: Application
    Filed: September 9, 2022
    Publication date: July 6, 2023
    Inventors: Alexandre BÉRARD, Laurent BESACIER, Matthias GALLÉ, Ahmet ÜSTÜN
  • Publication number: 20230215421
    Abstract: Methods and systems for generating an end-to-end neural text-to-speech (TTS) model that processes input text to generate speech representations. An annotated set of text documents, with annotations inserted to indicate prosodic features, is input into the TTS model. The TTS model is trained using the annotated dataset and a corresponding dataset of speech representations of the text documents that include prosody associated with the indicated prosodic features. The trained TTS model thereby learns to associate the prosody with the annotations. (A minimal illustrative sketch appears after this listing.)
    Type: Application
    Filed: September 23, 2022
    Publication date: July 6, 2023
    Inventors: Ioan CALAPODESCU, Inyoung KIM, Laurent BESACIER, Siddique LATIF
  • Patent number: 11625544
    Abstract: In methods for training a natural language generation (NLG) model using a processor, a document-level machine translation (MT) model is provided by training an MT model to receive as input token sequences in a first language and to generate as output token sequences in a second language. An augmented document-level MT model is provided by training the document-level MT model to receive as input paired language-independent structured data and token sequences in the first language, and to generate as output token sequences in the second language. The augmented document-level MT model is then trained to receive as input language-independent structured data alone and to generate as output token sequences in the second language. (A minimal illustrative sketch appears after this listing.)
    Type: Grant
    Filed: September 17, 2020
    Date of Patent: April 11, 2023
    Assignee: NAVER CORPORATION
    Inventors: Ioan Calapodescu, Alexandre Bérard, Fahimeh Saleh, Laurent Besacier
  • Publication number: 20220147721
    Abstract: Multilingual neural machine translation systems with monolingual adapter layers and bilingual adapter layers for zero-shot translation include an encoder that encodes an input sentence in a source language into an encoder representation, and a decoder that processes the output of the selected encoder adapter layer to generate a decoder representation. The encoder includes an encoder adapter selector that selects, from a plurality of encoder adapter layers, the adapter layer for the source language to process the encoder representation. The decoder includes a decoder adapter selector that selects, from a plurality of decoder adapter layers, the adapter layer for the target language, which generates a translated sentence of the input sentence in the target language from the decoder representation. (A minimal illustrative sketch appears after this listing.)
    Type: Application
    Filed: November 8, 2021
    Publication date: May 12, 2022
    Applicant: Naver Corporation
    Inventors: Matthias GALLÉ, Alexandre BÉRARD, Laurent BESACIER, Jerin PHILIP
  • Publication number: 20220050973
    Abstract: In methods for training a natural language generation (NLG) model using a processor, a document-level machine translation (MT) model is provided by training an MT model to receive as input token sequences in a first language and to generate as output token sequences in a second language. An augmented document-level MT model is provided by training the document-level MT model to receive as input paired language-independent structured data and token sequences in the first language, and to generate as output token sequences in the second language. The augmented document-level MT model is then trained to receive as input language-independent structured data alone and to generate as output token sequences in the second language. (The sketch given for patent 11625544 after this listing applies here as well.)
    Type: Application
    Filed: September 17, 2020
    Publication date: February 17, 2022
    Inventors: Ioan CALAPODESCU, Alexandre BERARD, Fahimeh SALEH, Laurent BESACIER
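
The sketches below illustrate, very loosely, the techniques described in the abstracts above. None of them is taken from the patents themselves; every module, function, and tag name is an assumption made for illustration, and all sketches use plain PyTorch or standard Python. First, for publication 20230214605, a sketch of a bottleneck denoising adapter and the two-stage recipe: adapters are trained per language on monolingual denoising while the seq2seq backbone stays frozen, then the cross-attention weights are fine-tuned on parallel data.

```python
import torch
import torch.nn as nn

class DenoisingAdapter(nn.Module):
    """Bottleneck adapter inserted after an encoder or decoder sub-layer."""
    def __init__(self, d_model: int = 512, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # Residual form: the adapter learns a language-specific correction
        # on top of the frozen multilingual representation.
        return hidden + self.up(torch.relu(self.down(hidden)))

# Stage 1 (monolingual text, one adapter per language): freeze the seq2seq
# backbone and train only the adapter on a denoising objective, i.e.
# reconstructing clean text from a noised version of itself.
# Stage 2 (parallel data): keep the trained adapters in place and fine-tune
# only the cross-attention weights on the translation task.

adapter = DenoisingAdapter()
states = torch.randn(2, 10, 512)   # (batch, sequence length, d_model)
assert adapter(states).shape == states.shape
```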
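For publication 20230215421, a sketch of the annotation step: prosody tags are inserted into the input text, and the TTS model is trained on pairs of annotated text and speech representations whose prosody realizes the tags. The tag vocabulary used here is invented for illustration; the abstract does not commit to a specific tag set.

```python
from typing import List, Tuple

def annotate(text: str, spans: List[Tuple[int, int, str]]) -> str:
    """Insert prosody tags around character spans of the input text."""
    out, cursor = [], 0
    for start, end, tag in sorted(spans):
        out.append(text[cursor:start])
        out.append(f"<{tag}>{text[start:end]}</{tag}>")
        cursor = end
    out.append(text[cursor:])
    return "".join(out)

# The TTS model is then trained on pairs of (annotated text, speech
# representation whose prosody realizes the annotated features), so it
# learns to associate each tag with the corresponding prosody.
print(annotate("I never said that.", [(2, 7, "emph")]))
# -> "I <emph>never</emph> said that."
```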
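For patent 11625544 (and its application publication 20220050973), a sketch of the three-stage curriculum: ordinary document-level MT, then MT augmented with linearized structured data, then generation from the structured data alone. The linearization format and the <sep> marker are assumptions.

```python
def linearize(record: dict) -> str:
    """Flatten language-independent structured data into a token sequence."""
    return " | ".join(f"{key} = {value}" for key, value in record.items())

record = {"home_team": "Lions", "away_team": "Hawks", "score": "3-1"}
src_l1 = "Les Lions ont battu les Hawks trois buts à un."   # first language
tgt_l2 = "The Lions beat the Hawks three goals to one."     # second language

# Stage 1: ordinary document-level MT, token sequences L1 -> L2.
stage1_example = (src_l1, tgt_l2)
# Stage 2: augmented MT, paired structured data + L1 tokens -> L2 tokens.
stage2_example = (linearize(record) + " <sep> " + src_l1, tgt_l2)
# Stage 3: NLG, structured data alone -> L2 tokens; the model now
# generates fluent second-language text directly from the data.
stage3_example = (linearize(record), tgt_l2)

for src, tgt in (stage1_example, stage2_example, stage3_example):
    print(f"{src}  ==>  {tgt}")
```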
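Finally, for publication 20220147721, a sketch of language-based adapter selection: the encoder routes hidden states through the adapter for the source language and the decoder through the adapter for the target language. Because the two choices are made independently, a source/target pair never seen together during training still has a valid path at inference time, which is what enables zero-shot translation.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter, as in the first sketch above."""
    def __init__(self, d_model: int = 512, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(torch.relu(self.down(x)))

class AdapterSelector(nn.Module):
    """Holds one adapter per language and routes by language code."""
    def __init__(self, languages, d_model: int = 512):
        super().__init__()
        self.adapters = nn.ModuleDict({lang: Adapter(d_model) for lang in languages})

    def forward(self, hidden: torch.Tensor, lang: str) -> torch.Tensor:
        return self.adapters[lang](hidden)

# Encoder side selects by source language, decoder side by target language.
encoder_select = AdapterSelector(["de", "fr", "ko"])
decoder_select = AdapterSelector(["de", "fr", "ko"])
h = torch.randn(2, 10, 512)
out = decoder_select(encoder_select(h, lang="de"), lang="ko")  # de -> ko, zero-shot
```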