Patents by Inventor Yann Nicolas Dauphin

Yann Nicolas Dauphin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11709996
    Abstract: Systems, methods, and non-transitory computer-readable media can train a sequence model to output respective captions, or portions of captions, for content items. A determination can be made that a user of a social networking system is posting a content item for publication through the social networking system. A set of captions, or portions of captions, can be determined for the content item being posted based at least in part on the sequence model. The set of captions, or portions of captions, can be provided as suggestions to the user for use in a caption describing the content item being posted.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: July 25, 2023
    Assignee: Meta Platforms, Inc.
    Inventors: Anitha Kannan, Yuandong Tian, Yann Nicolas Dauphin
  • Patent number: 10839790
    Abstract: Exemplary embodiments relate to improvements to neural networks for translation and other sequence-to-sequence tasks. A convolutional neural network may include multiple blocks, each having a convolution layer and gated linear units; gating may determine what information passes through to the next block level. Residual connections, which add the input of a block back to its output, may be applied around each block. Further, an attention mechanism may be applied to determine which source word is most relevant to the word being translated next. By applying repeated passes of the attention mechanism to multiple layers of the decoder, the decoder is able to work on the entire structure of a sentence at once (with no temporal dependency). In addition to better accuracy, this configuration is better at capturing long-range dependencies, better models the hierarchical syntax structure of a sentence, and is highly parallelizable and thus faster to run on hardware.
    Type: Grant
    Filed: December 20, 2017
    Date of Patent: November 17, 2020
    Assignee: Facebook, Inc.
    Inventors: Jonas Gehring, Michael Auli, Yann Nicolas Dauphin, David G. Grangier, Dzianis Yarats
  • Publication number: 20180261214
    Abstract: Exemplary embodiments relate to improvements to neural networks for translation and other sequence-to-sequence tasks. A convolutional neural network may include multiple blocks, each having a convolution layer and gated linear units; gating may determine what information passes through to the next block level. Residual connections, which add the input of a block back to its output, may be applied around each block. Further, an attention mechanism may be applied to determine which source word is most relevant to the word being translated next. By applying repeated passes of the attention mechanism to multiple layers of the decoder, the decoder is able to work on the entire structure of a sentence at once (with no temporal dependency). In addition to better accuracy, this configuration is better at capturing long-range dependencies, better models the hierarchical syntax structure of a sentence, and is highly parallelizable and thus faster to run on hardware.
    Type: Application
    Filed: December 20, 2017
    Publication date: September 13, 2018
    Inventors: Jonas Gehring, Michael Auli, Yann Nicolas Dauphin, David G. Grangier, Dzianis Yarats
  • Publication number: 20180197098
    Abstract: Systems, methods, and non-transitory computer-readable media can determine one or more chunks for a content item to be captioned. Each chunk can include one or more terms that describe at least a portion of the subject matter captured in the content item. One or more sentiments can be determined based on the subject matter captured in the content item, and one or more emotions can be determined for the content item. At least one emoted caption can be generated for the content item based at least in part on the one or more chunks, sentiments, and emotions. The emoted caption can include at least one term that conveys an emotion represented by the subject matter captured in the content item.
    Type: Application
    Filed: January 10, 2017
    Publication date: July 12, 2018
    Inventors: Karthik Subbian, Anitha Kannan, Yann Nicolas Dauphin
  • Publication number: 20180189260
    Abstract: Systems, methods, and non-transitory computer-readable media can train a sequence model to output respective captions, or portions of captions, for content items. A determination can be made that a user of a social networking system is posting a content item for publication through the social networking system. A set of captions, or portions of captions, can be determined for the content item being posted based at least in part on the sequence model. The set of captions, or portions of captions, can be provided as suggestions to the user for use in a caption describing the content item being posted.
    Type: Application
    Filed: December 30, 2016
    Publication date: July 5, 2018
    Inventors: Anitha Kannan, Yuandong Tian, Yann Nicolas Dauphin
  • Publication number: 20150310862
    Abstract: One or more aspects of the subject disclosure are directed towards performing a semantic parsing task, such as classifying text corresponding to a spoken utterance into a class. Feature data representative of input data is provided to a semantic parsing mechanism that uses a deep model trained at least in part via unsupervised learning using unlabeled data. For example, if used in a classification task, a classifier may use an associated deep neural network that is trained to have an embedding layer corresponding to at least one of words, phrases, or sentences. These layers are learned from unlabeled data, such as query click log data.
    Type: Application
    Filed: April 24, 2014
    Publication date: October 29, 2015
    Applicant: Microsoft Corporation
    Inventors: Yann Nicolas Dauphin, Dilek Z. Hakkani-Tur, Gokhan Tur, Larry Paul Heck
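
The caption-suggestion abstract (patent 11709996 / publication 20180189260) describes training a sequence model that outputs captions, or portions of captions, as suggestions. A minimal sketch of that idea follows; the bigram model, the `BigramCaptionModel` name, and the toy corpus are illustrative assumptions, not the patented implementation, which trains a far richer sequence model on real caption data.

```python
from collections import defaultdict

class BigramCaptionModel:
    """Toy sequence model that suggests caption completions (sketch only)."""

    def __init__(self):
        # next_words[w][v] counts how often word v follows word w.
        self.next_words = defaultdict(lambda: defaultdict(int))

    def train(self, captions):
        # Count word-to-word transitions across a corpus of captions.
        for caption in captions:
            words = ["<s>"] + caption.lower().split()
            for prev, cur in zip(words, words[1:]):
                self.next_words[prev][cur] += 1

    def suggest(self, prefix, max_len=5):
        # Greedily extend the prefix with the most frequent next word,
        # producing a "portion of a caption" to offer as a suggestion.
        words = ["<s>"] + prefix.lower().split()
        out = []
        for _ in range(max_len):
            candidates = self.next_words[words[-1]]
            if not candidates:
                break
            nxt = max(candidates, key=candidates.get)
            out.append(nxt)
            words.append(nxt)
        return " ".join(out)
```

In use, the model would be trained offline on published captions, and `suggest` would be invoked when the system detects that a user is posting a content item.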
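
The translation abstract (patent 10839790 / publication 20180261214) describes blocks built from a convolution layer, gated linear units, and a residual connection. The sketch below shows one such block on toy NumPy arrays; the shapes, the causal left-padding, and the function name are illustrative assumptions rather than the patented architecture, and the attention mechanism is omitted.

```python
import numpy as np

def glu_conv_block(x, W, b, kernel_width=3):
    """One convolutional block with a gated linear unit (GLU) and a
    residual connection, as sketched in the abstract.

    x: (seq_len, d) float array of token representations
    W: (kernel_width * d, 2 * d) convolution weights
    b: (2 * d,) convolution bias
    """
    seq_len, d = x.shape
    pad = kernel_width - 1
    # Left-pad so the convolution is causal (no peeking at future tokens).
    padded = np.vstack([np.zeros((pad, d)), x])
    out = np.empty_like(x)
    for t in range(seq_len):
        window = padded[t:t + kernel_width].reshape(-1)  # flattened window
        a, g = np.split(window @ W + b, 2)               # two d-dim halves
        out[t] = a * (1.0 / (1.0 + np.exp(-g)))          # GLU: a * sigmoid(g)
    # Residual connection: add the block input back to its output.
    return out + x
```

Stacking several such blocks, with attention applied at each decoder layer, is what lets the decoder operate on a whole sentence in parallel.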
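
The emoted-caption abstract (publication 20180197098) combines descriptive chunks with detected emotions to produce a caption containing an emotion-conveying term. A minimal assembly step is sketched below; the `EMOTION_TERMS` lexicon and the scoring inputs are hypothetical, and the chunk, sentiment, and emotion models from the abstract are assumed to have already run.

```python
# Hypothetical emotion-to-term lexicon (illustrative only).
EMOTION_TERMS = {"joy": "so happy", "awe": "breathtaking", "sadness": "missing"}

def emoted_caption(chunks, emotion_scores):
    """Assemble an emoted caption from descriptive chunks and a dict of
    emotion scores produced upstream (sketch, not the patented method)."""
    top_emotion = max(emotion_scores, key=emotion_scores.get)
    term = EMOTION_TERMS.get(top_emotion, "")
    body = ", ".join(chunks)
    # Prepend the emotion-conveying term when one is available.
    return f"{term} {body}".strip() if term else body
```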
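
The semantic-parsing abstract (publication 20150310862) describes a classifier whose embedding layer is learned from unlabeled data such as query click logs. The two-stage idea can be sketched as follows; the toy co-occurrence "embeddings" and nearest-prototype classifier stand in for the deep network and click-log training described in the abstract, and all names below are assumptions.

```python
import numpy as np

def pretrain_embeddings(unlabeled_texts):
    """Unsupervised step: learn a co-occurrence vector per word from
    unlabeled text (a toy stand-in for embeddings learned from logs)."""
    vocab = sorted({w for t in unlabeled_texts for w in t.split()})
    index = {w: i for i, w in enumerate(vocab)}
    emb = {w: np.zeros(len(vocab)) for w in vocab}
    for t in unlabeled_texts:
        words = t.split()
        for i, w in enumerate(words):
            for j, c in enumerate(words):
                if i != j:
                    emb[w][index[c]] += 1.0  # count within-utterance contexts
    return emb

def embed(text, embeddings):
    # Utterance embedding = mean of word vectors; unknown words map to zeros.
    dim = len(next(iter(embeddings.values())))
    vecs = [embeddings.get(w, np.zeros(dim)) for w in text.split()]
    return np.mean(vecs, axis=0)

def classify(utterance, embeddings, class_prototypes):
    # Supervised step on top of the pretrained layer: nearest prototype.
    v = embed(utterance, embeddings)
    return max(class_prototypes, key=lambda c: float(v @ class_prototypes[c]))
```

Here a handful of labeled examples per class supply the prototypes, while the embedding layer itself never sees a label.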