Patents by Inventor Tuan Manh Lai

Tuan Manh Lai has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230403175
    Abstract: Systems and methods for coreference resolution are provided. One aspect of the systems and methods includes inserting a speaker tag into a transcript, wherein the speaker tag indicates that a name in the transcript corresponds to a speaker of a portion of the transcript; encoding a plurality of candidate spans from the transcript based at least in part on the speaker tag to obtain a plurality of span vectors; extracting a plurality of entity mentions from the transcript based on the plurality of span vectors, wherein each of the plurality of entity mentions corresponds to one of the plurality of candidate spans; and generating coreference information for the transcript based on the plurality of entity mentions, wherein the coreference information indicates that a pair of candidate spans of the plurality of candidate spans corresponds to a pair of entity mentions that refer to a same entity.
    Type: Application
    Filed: June 14, 2022
    Publication date: December 14, 2023
    Inventors: Tuan Manh Lai, Trung Huu Bui, Doo Soon Kim
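The pipeline in this abstract (tag speakers, encode candidate spans, extract mentions, link mentions to the same entity) can be illustrated with a toy sketch. This is not the patented method: the string-matching "clusters" below stand in for the span vectors and learned linking a trained encoder would produce, and all names are invented.

```python
# Hypothetical sketch: insert speaker tags into a transcript, then link
# candidate spans that share a speaker name. Illustrative only.

def insert_speaker_tags(turns):
    """Prefix each (name, text) transcript turn with a <spk NAME> tag."""
    return [f"<spk {name}> {text}" for name, text in turns]

def coreference_clusters(turns):
    """Group candidate spans that refer to the same speaker entity.

    A 'candidate span' here is any token matching a speaker name; each
    cluster maps an entity to (turn index, token index) positions.
    """
    names = {name for name, _ in turns}
    clusters = {}
    for i, (_, text) in enumerate(turns):
        for j, token in enumerate(text.split()):
            word = token.strip(".,!?")
            if word in names:
                clusters.setdefault(word, []).append((i, j))
    return clusters

turns = [("Alice", "Bob, did you send the report?"),
         ("Bob", "Yes, Alice received it this morning.")]
tagged = insert_speaker_tags(turns)
clusters = coreference_clusters(turns)
```

In the patented system, the speaker tags condition the span encoder, so the model can tell that a name occurring in dialog text refers to a speaker of the transcript.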
  • Patent number: 11776036
    Abstract: The present description relates to systems, methods, and non-transitory computer readable media for generating digital responses to digital queries by utilizing a classification model and query-specific analysis models. For example, the described systems can train a classification model to generate query classifications corresponding to product queries, conversational queries, and/or recommendation/purchase queries. Moreover, the described systems can apply the classification model to select pertinent models for particular queries. For example, upon classifying a product query, the described systems can utilize a neural ranking model (trained based on a set of training product specifications and training queries) to generate relevance scores for product specifications associated with a digital query.
    Type: Grant
    Filed: April 19, 2018
    Date of Patent: October 3, 2023
    Assignee: Adobe Inc.
    Inventors: Tuan Manh Lai, Trung Bui, Sheng Li, Quan Hung Tran, Hung Bui
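The routing idea in this abstract (classify the query, then dispatch to a query-specific model) can be sketched as below. The keyword rules and word-overlap scorer are toy stand-ins for the trained classification model and neural ranking model; the query classes follow the abstract, everything else is invented for illustration.

```python
# Illustrative sketch: route a query to a class, then rank product
# specifications by a crude relevance score. Not the patented models.

def classify_query(query):
    """Toy classifier over the query types named in the abstract."""
    q = query.lower()
    if any(w in q for w in ("recommend", "purchase")):
        return "recommendation"
    if any(w in q for w in ("how", "what", "which")):
        return "product"
    return "conversational"

def rank_specs(query, specs):
    """Order specs by word overlap with the query (a stand-in for
    the relevance scores a neural ranking model would produce)."""
    q_words = set(query.lower().split())
    scored = [(len(q_words & set(s.lower().split())), s) for s in specs]
    return [s for _, s in sorted(scored, reverse=True)]

specs = ["battery life 10 hours", "weight 1.2 kg"]
```

A real deployment would replace `rank_specs` with the neural ranker trained on product specifications and queries, selected only after `classify_query` returns "product".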
  • Publication number: 20230197081
    Abstract: A computer-implemented method is disclosed for determining one or more characteristics of a dialog between a computer system and user. The method may comprise receiving a system utterance comprising one or more tokens defining one or more words generated by the computer system; receiving a user utterance comprising one or more tokens defining one or more words uttered by a user in response to the system utterance, the system utterance and the user utterance forming a dialog context; receiving one or more utterance candidates comprising one or more tokens; for each utterance candidate, generating an input sequence combining the one or more tokens of each of the system utterance, the user utterance, and the utterance candidate; and for each utterance candidate, evaluating the generated input sequence with a model to determine a probability that the utterance candidate is relevant to the dialog context.
    Type: Application
    Filed: February 9, 2023
    Publication date: June 22, 2023
    Applicant: Adobe Inc.
    Inventors: Tuan Manh Lai, Trung Bui, Quan Tran
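The input construction described in this abstract (combine system utterance, user utterance, and each candidate into one sequence, then score it) can be sketched as follows. The `[CLS]`/`[SEP]` markers and the overlap-based probability are assumptions standing in for the trained model's tokenizer and output.

```python
# Sketch: build one input sequence per candidate from the dialog context,
# then assign a toy relevance probability. The scorer is illustrative only.

def build_input(system, user, candidate):
    """Concatenate the dialog context and a candidate into one sequence."""
    return (["[CLS]"] + system.split() + ["[SEP]"] +
            user.split() + ["[SEP]"] + candidate.split())

def relevance_prob(system, user, candidate):
    """Toy stand-in for the model's probability that the candidate is
    relevant: fraction of candidate words seen in the dialog context."""
    context = set(system.lower().split()) | set(user.lower().split())
    cand = set(candidate.lower().split())
    return len(context & cand) / max(len(cand), 1)
```

In the patented system, the sequence from `build_input` would be fed to a trained model whose output, not word overlap, gives the relevance probability for each candidate.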
  • Patent number: 11651211
    Abstract: Techniques for training a first neural network (NN) model using a pre-trained second NN model are disclosed. In an example, training data is input to the first and second models. The training data includes masked tokens and unmasked tokens. In response, the first model generates a first prediction associated with a masked token and a second prediction associated with an unmasked token, and the second model generates a third prediction associated with the masked token and a fourth prediction associated with the unmasked token. The first model is trained, based at least in part on the first, second, third, and fourth predictions. In another example, a prediction associated with a masked token, a prediction associated with an unmasked token, and a prediction associated with whether two sentences of training data are adjacent sentences are received from each of the first and second models. The first model is trained using the predictions.
    Type: Grant
    Filed: December 17, 2019
    Date of Patent: May 16, 2023
    Assignee: Adobe Inc.
    Inventors: Tuan Manh Lai, Trung Huu Bui, Quan Hung Tran
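The training signal in this abstract, where the first model learns from both its own predictions and those of the pre-trained model, resembles knowledge distillation. A minimal numeric sketch of such a blended objective is below; the probability lists, `alpha` weight, and loss form are assumptions, not the patented formulation.

```python
# Toy distillation objective: blend a hard-label loss on the student's
# prediction with a loss matching the teacher's soft prediction.
import math

def distill_loss(student_probs, teacher_probs, hard_label, alpha=0.5):
    """student_probs/teacher_probs: per-class probabilities for one token
    (masked or unmasked); hard_label: index of the true token class."""
    hard = -math.log(student_probs[hard_label])          # ground-truth term
    soft = -sum(t * math.log(s)                          # teacher-matching term
                for t, s in zip(teacher_probs, student_probs))
    return alpha * hard + (1 - alpha) * soft
```

In the described training, such a loss would be accumulated over both masked and unmasked token predictions (and, in the second example, the adjacent-sentence prediction) from the two models.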
  • Patent number: 11610584
    Abstract: A computer-implemented method is disclosed for determining one or more characteristics of a dialog between a computer system and user. The method may comprise receiving a system utterance comprising one or more tokens defining one or more words generated by the computer system; receiving a user utterance comprising one or more tokens defining one or more words uttered by a user in response to the system utterance, the system utterance and the user utterance forming a dialog context; receiving one or more utterance candidates comprising one or more tokens; for each utterance candidate, generating an input sequence combining the one or more tokens of each of the system utterance, the user utterance, and the utterance candidate; and for each utterance candidate, evaluating the generated input sequence with a model to determine a probability that the utterance candidate is relevant to the dialog context.
    Type: Grant
    Filed: June 1, 2020
    Date of Patent: March 21, 2023
    Assignee: Adobe Inc.
    Inventors: Tuan Manh Lai, Trung Bui, Quan Hung Tran
  • Patent number: 11537950
    Abstract: This disclosure describes one or more implementations of a text sequence labeling system that accurately and efficiently utilize a joint-learning self-distillation approach to improve text sequence labeling machine-learning models. For example, in various implementations, the text sequence labeling system trains a text sequence labeling machine-learning teacher model to generate text sequence labels. The text sequence labeling system then creates and trains a text sequence labeling machine-learning student model utilizing the training and the output of the teacher model. Upon the student model achieving improved results over the teacher model, the text sequence labeling system re-initializes the teacher model with the learned model parameters of the student model and repeats the above joint-learning self-distillation framework. The text sequence labeling system then utilizes a trained text sequence labeling model to generate text sequence labels from input documents.
    Type: Grant
    Filed: October 14, 2020
    Date of Patent: December 27, 2022
    Assignee: Adobe Inc.
    Inventors: Trung Bui, Tuan Manh Lai, Quan Tran, Doo Soon Kim
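The control flow in this abstract (train a student from the teacher's output, and re-initialize the teacher with the student's parameters whenever the student improves on it) can be sketched with simulated scores. The `train_student` hook and the scores are hypothetical; real parameters and evaluation replace them.

```python
# Minimal sketch of the joint-learning self-distillation loop: when the
# student outperforms the teacher, the teacher adopts the student's
# parameters (here reduced to a single score) and the cycle repeats.

def self_distill(teacher_score, train_student, rounds=3):
    """train_student(teacher_score) -> student_score (hypothetical hook
    standing in for training a student model on the teacher's outputs)."""
    history = []
    for _ in range(rounds):
        student_score = train_student(teacher_score)
        history.append((teacher_score, student_score))
        if student_score > teacher_score:
            teacher_score = student_score  # re-initialize teacher from student
        else:
            break  # no improvement: stop the distillation cycle
    return teacher_score, history

final, history = self_distill(0.8, lambda t: t + 0.05, rounds=3)
```

The final model, in the patented system, is then used to generate text sequence labels from input documents.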
  • Publication number: 20220383150
    Abstract: This disclosure describes methods, non-transitory computer readable storage media, and systems that provide a platform for on-demand selection of machine-learning models and on-demand learning of parameters for the selected machine-learning models via cloud-based systems. For instance, the disclosed system receives a request indicating a selection of a machine-learning model to perform a machine-learning task (e.g., a natural language task) utilizing a specific dataset (e.g., a user-defined dataset). The disclosed system utilizes a scheduler to monitor available computing devices on cloud-based storage systems for instantiating the selected machine-learning model. Using the indicated dataset at a determined cloud-based computing device, the disclosed system automatically trains the machine-learning model.
    Type: Application
    Filed: May 26, 2021
    Publication date: December 1, 2022
    Inventors: Nham Van Le, Tuan Manh Lai, Trung Bui, Doo Soon Kim
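The scheduling step this abstract describes (monitor available cloud computing devices and pick one to instantiate the selected model) can be sketched as a simple capacity check. Device names, memory figures, and the first-fit rule are all invented for illustration.

```python
# Hypothetical sketch of the scheduler: find an available device with
# enough memory for the requested model, then mark it busy.

def schedule(request, devices):
    """request: (model_name, mem_needed_gb); devices: list of dicts with
    'name', 'mem_gb', and 'free' keys. Returns a device name or None."""
    for dev in devices:
        if dev["free"] and dev["mem_gb"] >= request[1]:
            dev["free"] = False  # reserve the device for this job
            return dev["name"]
    return None  # no capacity; a real scheduler would queue and retry

devices = [{"name": "gpu-a", "mem_gb": 8, "free": False},
           {"name": "gpu-b", "mem_gb": 16, "free": True}]
```

Once a device is chosen, the disclosed system instantiates the selected machine-learning model there and trains it on the user-indicated dataset.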
  • Publication number: 20220114476
    Abstract: This disclosure describes one or more implementations of a text sequence labeling system that accurately and efficiently utilize a joint-learning self-distillation approach to improve text sequence labeling machine-learning models. For example, in various implementations, the text sequence labeling system trains a text sequence labeling machine-learning teacher model to generate text sequence labels. The text sequence labeling system then creates and trains a text sequence labeling machine-learning student model utilizing the training and the output of the teacher model. Upon the student model achieving improved results over the teacher model, the text sequence labeling system re-initializes the teacher model with the learned model parameters of the student model and repeats the above joint-learning self-distillation framework. The text sequence labeling system then utilizes a trained text sequence labeling model to generate text sequence labels from input documents.
    Type: Application
    Filed: October 14, 2020
    Publication date: April 14, 2022
    Inventors: Trung Bui, Tuan Manh Lai, Quan Tran, Doo Soon Kim
  • Publication number: 20210375277
    Abstract: A computer-implemented method is disclosed for determining one or more characteristics of a dialog between a computer system and user. The method may comprise receiving a system utterance comprising one or more tokens defining one or more words generated by the computer system; receiving a user utterance comprising one or more tokens defining one or more words uttered by a user in response to the system utterance, the system utterance and the user utterance forming a dialog context; receiving one or more utterance candidates comprising one or more tokens; for each utterance candidate, generating an input sequence combining the one or more tokens of each of the system utterance, the user utterance, and the utterance candidate; and for each utterance candidate, evaluating the generated input sequence with a model to determine a probability that the utterance candidate is relevant to the dialog context.
    Type: Application
    Filed: June 1, 2020
    Publication date: December 2, 2021
    Inventors: Tuan Manh Lai, Trung Bui, Quan Hung Tran

  • Patent number: 11113479
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that can determine an answer to a query based on matching probabilities for combinations of respective candidate answers. For example, the disclosed systems can utilize a gated-self attention mechanism (GSAM) to interpret inputs that include contextual information, a query, and candidate answers. The disclosed systems can also utilize a memory network in tandem with the GSAM to form a gated self-attention memory network (GSAMN) to refine outputs or predictions over multiple reasoning hops. Further, the disclosed systems can utilize transfer learning of the GSAM/GSAMN from an initial training dataset to a target training dataset.
    Type: Grant
    Filed: September 12, 2019
    Date of Patent: September 7, 2021
    Assignee: Adobe Inc.
    Inventors: Quan Tran, Tuan Manh Lai, Trung Bui
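The multi-hop refinement idea in this abstract, where a memory is updated over several reasoning hops, can be shown with a toy gated update. The scalar memory and fixed gate are stand-ins for the attention vectors and learned gates of the actual GSAMN architecture.

```python
# Toy numeric sketch of gated multi-hop refinement: at each hop a gate
# blends the current memory with a candidate update. Illustrative only.

def gated_hops(memory, updates, gate=0.5):
    """memory: current state (scalar stand-in for a memory vector);
    updates: one candidate value per reasoning hop."""
    for u in updates:
        memory = gate * u + (1 - gate) * memory  # gated refinement step
    return memory
```

In the patented system, each hop's update comes from gated self-attention over the context, query, and candidate answers, and the refined state drives the final matching probabilities.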
  • Publication number: 20210182662
    Abstract: Techniques for training a first neural network (NN) model using a pre-trained second NN model are disclosed. In an example, training data is input to the first and second models. The training data includes masked tokens and unmasked tokens. In response, the first model generates a first prediction associated with a masked token and a second prediction associated with an unmasked token, and the second model generates a third prediction associated with the masked token and a fourth prediction associated with the unmasked token. The first model is trained, based at least in part on the first, second, third, and fourth predictions. In another example, a prediction associated with a masked token, a prediction associated with an unmasked token, and a prediction associated with whether two sentences of training data are adjacent sentences are received from each of the first and second models. The first model is trained using the predictions.
    Type: Application
    Filed: December 17, 2019
    Publication date: June 17, 2021
    Applicant: Adobe Inc.
    Inventors: Tuan Manh Lai, Trung Huu Bui, Quan Hung Tran
  • Publication number: 20210081503
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that can determine an answer to a query based on matching probabilities for combinations of respective candidate answers. For example, the disclosed systems can utilize a gated-self attention mechanism (GSAM) to interpret inputs that include contextual information, a query, and candidate answers. The disclosed systems can also utilize a memory network in tandem with the GSAM to form a gated self-attention memory network (GSAMN) to refine outputs or predictions over multiple reasoning hops. Further, the disclosed systems can utilize transfer learning of the GSAM/GSAMN from an initial training dataset to a target training dataset.
    Type: Application
    Filed: September 12, 2019
    Publication date: March 18, 2021
    Inventors: Quan Tran, Tuan Manh Lai, Trung Bui
  • Publication number: 20190325068
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating digital responses to digital queries by utilizing a classification model and query-specific analysis models. For example, the disclosed systems can train a classification model to generate query classifications corresponding to product queries, conversational queries, and/or recommendation/purchase queries. Moreover, the disclosed systems can apply the classification model to select pertinent models for particular queries. For example, upon classifying a product query, the disclosed systems can utilize a neural ranking model (trained based on a set of training product specifications and training queries) to generate relevance scores for product specifications associated with a digital query.
    Type: Application
    Filed: April 19, 2018
    Publication date: October 24, 2019
    Inventors: Tuan Manh Lai, Trung Bui, Sheng Li, Quan Hung Tran, Hung Bui