Patents by Inventor Quan Hung Tran

Quan Hung Tran has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240135165
    Abstract: One aspect of systems and methods for data correction includes identifying a false label from among predicted labels corresponding to different parts of an input sample, wherein the predicted labels are generated by a neural network trained based on a training set comprising training samples and training labels corresponding to parts of the training samples; computing an influence of each of the training labels on the false label by approximating a change in a conditional loss for the neural network corresponding to each of the training labels; identifying a part of a training sample of the training samples and a corresponding source label from among the training labels based on the computed influence; and modifying the training set based on the identified part of the training sample and the corresponding source label to obtain a corrected training set.
    Type: Application
    Filed: October 18, 2022
    Publication date: April 25, 2024
    Inventors: Varun Manjunatha, Sarthak Jain, Rajiv Bhawanji Jain, Ani Nenkova Nenkova, Christopher Alan Tensmeyer, Franck Dernoncourt, Quan Hung Tran, Ruchi Deshpande
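The influence computation this abstract describes can be sketched with a common first-order shortcut: the influence of a training label on a suspected false label is estimated by the alignment of their loss gradients. The logistic model and all names below are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_logistic(w, x, y):
    # Gradient of the binary cross-entropy loss with respect to weights w.
    return (sigmoid(w @ x) - y) * x

def influence_scores(w, train_X, train_y, test_x, test_y):
    # First-order influence proxy: alignment of each training gradient
    # with the gradient of the loss on the suspected false label.
    g_test = grad_logistic(w, test_x, test_y)
    return np.array([g_test @ grad_logistic(w, x, y)
                     for x, y in zip(train_X, train_y)])

w = np.array([1.0, -1.0])
train_X = np.array([[1.0, 0.0], [0.0, 1.0]])
train_y = np.array([1.0, 0.0])
scores = influence_scores(w, train_X, train_y, np.array([1.0, 0.0]), 1.0)
suspect = int(np.argmax(np.abs(scores)))   # training point to inspect/correct
```

A real system would use the influence-function approximation over a neural network; the point here is only the shape of the computation: score every training label, then flag the highest-influence one for correction.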
  • Patent number: 11967128
    Abstract: The present disclosure describes a model for large scale color prediction of objects identified in images. Embodiments of the present disclosure include an object detection network, an attention network, and a color classification network. The object detection network generates object features for an object in an image and may include a convolutional neural network (CNN), region proposal network, or a ResNet. The attention network generates an attention vector for the object based on the object features, wherein the attention network takes a query vector based on the object features, and a plurality of key vectors and a plurality of value vectors corresponding to a plurality of colors as input. The color classification network generates a color attribute vector based on the attention vector, wherein the color attribute vector indicates a probability of the object including each of the plurality of colors.
    Type: Grant
    Filed: May 28, 2021
    Date of Patent: April 23, 2024
    Assignee: ADOBE INC.
    Inventors: Qiuyu Chen, Quan Hung Tran, Kushal Kafle, Trung Huu Bui, Franck Dernoncourt, Walter Chang
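The attention step above can be sketched as scaled dot-product attention with one key/value pair per color, followed by a sigmoid classifier (sigmoid because an object may contain several colors at once). Shapes and names are assumptions for illustration.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def color_attention(query, keys, values):
    # Scaled dot-product attention over one key/value pair per color.
    weights = softmax(keys @ query / np.sqrt(query.size))
    return weights @ values   # attention vector for the object

rng = np.random.default_rng(0)
n_colors, d = 3, 4
query = rng.normal(size=d)             # derived from the object features
keys = rng.normal(size=(n_colors, d))  # one key per color
values = rng.normal(size=(n_colors, d))
attn = color_attention(query, keys, values)

# Color attribute vector: independent per-color probabilities.
W = rng.normal(size=(n_colors, d))
color_attr = 1.0 / (1.0 + np.exp(-(W @ attn)))
```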
  • Publication number: 20240037939
    Abstract: A group captioning system includes computing hardware, software, and/or firmware components in support of the enhanced group captioning contemplated herein. In operation, the system generates a target embedding for a group of target images, as well as a reference embedding for a group of reference images. The system identifies information in-common between the group of target images and the group of reference images and removes the joint information from the target embedding and the reference embedding. The result is a contrastive target embedding and a contrastive reference embedding, which together form a contrastive group embedding that is then input to a model to obtain a group caption for the target group of images.
    Type: Application
    Filed: October 16, 2023
    Publication date: February 1, 2024
    Inventors: Quan Hung TRAN, Long Thanh MAI, Zhe LIN, Zhuowan LI
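One minimal way to model "remove the joint information" is to project each group embedding away from the direction the two groups share. This linear sketch is an assumption for intuition; the patent does not specify this particular projection.

```python
import numpy as np

def group_embedding(image_embs):
    # Pool per-image embeddings into one group embedding.
    return image_embs.mean(axis=0)

def contrastive_embeddings(target_embs, ref_embs):
    t, r = group_embedding(target_embs), group_embedding(ref_embs)
    joint = (t + r) / 2.0                  # information in common
    u = joint / np.linalg.norm(joint)      # shared direction
    # Remove each group's component along the shared direction.
    return t - (t @ u) * u, r - (r @ u) * u

rng = np.random.default_rng(1)
target = rng.normal(size=(4, 8))      # 4 target-image embeddings
reference = rng.normal(size=(4, 8))   # 4 reference-image embeddings
ct, cr = contrastive_embeddings(target, reference)
```

The resulting pair (`ct`, `cr`) plays the role of the contrastive group embedding handed to the captioning model.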
  • Publication number: 20240037906
    Abstract: Systems and methods for color prediction are described. Embodiments of the present disclosure receive an image that includes an object including a color, generate a color vector based on the image using a color classification network, where the color vector includes a color value corresponding to each of a set of colors, generate a bias vector by comparing the color vector to each of a set of center vectors, where each of the set of center vectors corresponds to a color of the set of colors, and generate an unbiased color vector based on the color vector and the bias vector, where the unbiased color vector indicates the color of the object.
    Type: Application
    Filed: July 26, 2022
    Publication date: February 1, 2024
    Inventors: Qiuyu Chen, Quan Hung Tran, Kushal Kafle, Trung Huu Bui, Franck Dernoncourt, Walter W. Chang
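A toy version of the debiasing step: compare the predicted color vector with per-color center vectors (here via cosine similarity, an assumed choice), form a bias vector, and subtract a scaled copy of it. The centers, the 0.5 weight, and all names are hypothetical.

```python
import numpy as np

def debias(color_vec, centers):
    # bias[i]: how much color_vec resembles color i's (biased) center,
    # measured by cosine similarity.
    norms = np.linalg.norm(centers, axis=1) * np.linalg.norm(color_vec)
    bias = centers @ color_vec / norms
    return color_vec - 0.5 * bias   # unbiased color vector (0.5: assumed weight)

colors = ["red", "green", "blue"]
color_vec = np.array([0.7, 0.2, 0.1])       # raw classifier output
centers = np.eye(3) * 0.5 + 0.1             # hypothetical per-color centers
unbiased = debias(color_vec, centers)
predicted = colors[int(np.argmax(unbiased))]
```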
  • Publication number: 20240020337
    Abstract: Systems and methods for intent discovery and video summarization are described. Embodiments of the present disclosure receive a video and a transcript of the video, encode the video to obtain a sequence of video encodings, encode the transcript to obtain a sequence of text encodings, apply a visual gate to the sequence of text encodings based on the sequence of video encodings to obtain gated text encodings, and generate an intent label for the transcript based on the gated text encodings.
    Type: Application
    Filed: July 12, 2022
    Publication date: January 18, 2024
    Inventors: Adyasha Maharana, Quan Hung Tran, Seunghyun Yoon, Franck Dernoncourt, Trung Huu Bui, Walter W. Chang
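The visual gate can be sketched as a per-timestep sigmoid gate computed from the video encoding and multiplied onto the text encoding, so visually grounded spans pass through and others are attenuated. The gating form and shapes are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def visual_gate(text_enc, video_enc, W_gate):
    # One gate value per time step, broadcast over the text dimension.
    gates = sigmoid(video_enc @ W_gate)   # shape (T, 1), values in (0, 1)
    return gates * text_enc               # gated text encodings

rng = np.random.default_rng(2)
T, d_text, d_video = 5, 8, 6
text_enc = rng.normal(size=(T, d_text))    # transcript encodings
video_enc = rng.normal(size=(T, d_video))  # aligned video encodings
W_gate = rng.normal(size=(d_video, 1))
gated = visual_gate(text_enc, video_enc, W_gate)
```

The gated encodings would then feed an intent classifier; since each gate lies in (0, 1), the gate can only down-weight, never amplify, a text encoding.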
  • Patent number: 11790650
    Abstract: A group captioning system includes computing hardware, software, and/or firmware components in support of the enhanced group captioning contemplated herein. In operation, the system generates a target embedding for a group of target images, as well as a reference embedding for a group of reference images. The system identifies information in-common between the group of target images and the group of reference images and removes the joint information from the target embedding and the reference embedding. The result is a contrastive target embedding and a contrastive reference embedding, which together form a contrastive group embedding that is then input to a model to obtain a group caption for the target group of images.
    Type: Grant
    Filed: August 20, 2020
    Date of Patent: October 17, 2023
    Inventors: Quan Hung Tran, Long Thanh Mai, Zhe Lin, Zhuowan Li
  • Patent number: 11776036
    Abstract: The present description relates to systems, methods, and non-transitory computer readable media for generating digital responses to digital queries by utilizing a classification model and query-specific analysis models. For example, the described systems can train a classification model to generate query classifications corresponding to product queries, conversational queries, and/or recommendation/purchase queries. Moreover, the described systems can apply the classification model to select pertinent models for particular queries. For example, upon classifying a product query, the described systems can utilize a neural ranking model (trained based on a set of training product specifications and training queries) to generate relevance scores for product specifications associated with a digital query.
    Type: Grant
    Filed: April 19, 2018
    Date of Patent: October 3, 2023
    Assignee: Adobe Inc.
    Inventors: Tuan Manh Lai, Trung Bui, Sheng Li, Quan Hung Tran, Hung Bui
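The classify-then-route structure above can be sketched as follows. The keyword rules stand in for the trained classification model, and the query types and handlers are hypothetical names, not the patent's.

```python
def classify_query(query):
    # Stand-in for the trained classification model: keyword rules.
    q = query.lower()
    if any(w in q for w in ("buy", "recommend", "price")):
        return "recommendation"
    if "?" in q and any(w in q for w in ("spec", "battery", "weight")):
        return "product"
    return "conversational"

def answer(query):
    # Route the query to the model matching its classification.
    kind = classify_query(query)
    handlers = {
        "product": lambda q: "rank product specifications for: " + q,
        "recommendation": lambda q: "suggest a purchase for: " + q,
        "conversational": lambda q: "chat response to: " + q,
    }
    return kind, handlers[kind](query)

kind, response = answer("What is the battery spec?")
```

In the described system, the "product" branch would invoke the neural ranking model to score product specifications against the query.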
  • Publication number: 20230259708
    Abstract: Systems and methods for key-phrase extraction are described. The systems and methods include receiving a transcript including a text paragraph and generating key-phrase data for the text paragraph using a key-phrase extraction network. The key-phrase extraction network is trained to identify domain-relevant key-phrase data based on domain data obtained using a domain discriminator network. The systems and methods further include generating meta-data for the transcript based on the key-phrase data.
    Type: Application
    Filed: February 14, 2022
    Publication date: August 17, 2023
    Inventors: Amir Pouran Ben Veyseh, Franck Dernoncourt, Walter W. Chang, Trung Huu Bui, Hanieh Deilamsalehy, Seunghyun Yoon, Rajiv Bhawanji Jain, Quan Hung Tran, Varun Manjunatha
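The domain-discriminator idea can be shown as a combined objective: the extractor minimizes its key-phrase loss while maximizing the discriminator's loss, pushing its features to be domain-invariant. This adversarial formulation is a common pattern and an assumption here, not quoted from the filing.

```python
import numpy as np

def cross_entropy(p, y):
    # Binary cross-entropy for a single prediction p against label y.
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def extractor_objective(p_keyphrase, y_keyphrase, p_domain, y_domain, lam=0.1):
    # Minimize the key-phrase loss while MAXIMIZING the domain
    # discriminator's loss (hence the minus sign).
    task = cross_entropy(p_keyphrase, y_keyphrase)
    domain = cross_entropy(p_domain, y_domain)
    return task - lam * domain

# A confused discriminator (p_domain = 0.5) lowers the extractor's objective.
loss = extractor_objective(0.9, 1.0, 0.5, 1.0)
```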
  • Patent number: 11651211
    Abstract: Techniques for training a first neural network (NN) model using a pre-trained second NN model are disclosed. In an example, training data is input to the first and second models. The training data includes masked tokens and unmasked tokens. In response, the first model generates a first prediction associated with a masked token and a second prediction associated with an unmasked token, and the second model generates a third prediction associated with the masked token and a fourth prediction associated with the unmasked token. The first model is trained, based at least in part on the first, second, third, and fourth predictions. In another example, a prediction associated with a masked token, a prediction associated with an unmasked token, and a prediction associated with whether two sentences of training data are adjacent sentences are received from each of the first and second models. The first model is trained using the predictions.
    Type: Grant
    Filed: December 17, 2019
    Date of Patent: May 16, 2023
    Assignee: Adobe Inc.
    Inventors: Tuan Manh Lai, Trung Huu Bui, Quan Hung Tran
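The four-prediction training signal can be sketched as a distillation loss: the student (first model) matches the teacher's (second model's) token distributions at both a masked and an unmasked position. The cross-entropy-to-teacher form below is one standard choice, assumed for illustration.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def distill_loss(student_logits, teacher_logits):
    # Cross-entropy of the student's distribution against the teacher's.
    p_teacher = softmax(teacher_logits)
    log_p_student = np.log(softmax(student_logits))
    return -(p_teacher * log_p_student).sum()

def token_loss(student, teacher, masked_idx, unmasked_idx):
    # Sum the distillation loss over one masked and one unmasked position.
    return (distill_loss(student[masked_idx], teacher[masked_idx])
            + distill_loss(student[unmasked_idx], teacher[unmasked_idx]))

rng = np.random.default_rng(3)
student = rng.normal(size=(4, 10))   # 4 positions, 10-word vocabulary
teacher = rng.normal(size=(4, 10))
loss = token_loss(student, teacher, masked_idx=1, unmasked_idx=2)
```

By Gibbs' inequality the loss is minimized exactly when the student's distributions match the teacher's, which is what training toward these predictions achieves.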
  • Publication number: 20230136527
    Abstract: Systems and methods for natural language processing are described. One or more aspects of a method, apparatus, and non-transitory computer readable medium include receiving a text phrase; encoding the text phrase using an encoder to obtain a hidden representation of the text phrase, wherein the encoder is trained during a first training phase using self-supervised learning based on a first contrastive loss and during a second training phase using supervised learning based on a second contrastive loss; identifying an intent of the text phrase from a predetermined set of intent labels using a classification network, wherein the classification network is jointly trained with the encoder in the second training phase; and generating a response to the text phrase based on the intent.
    Type: Application
    Filed: November 4, 2021
    Publication date: May 4, 2023
    Inventors: Jianguo Zhang, Trung Huu Bui, Seunghyun Yoon, Xiang Chen, Quan Hung Tran, Walter W. Chang
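The supervised contrastive phase can be sketched with the standard supervised contrastive loss: phrases sharing an intent label are pulled together in embedding space, all others pushed apart. The exact loss form and temperature are assumptions.

```python
import numpy as np

def supervised_contrastive_loss(embs, labels, tau=0.5):
    # Normalize, then for each anchor, treat same-label phrases as positives.
    embs = embs / np.linalg.norm(embs, axis=1, keepdims=True)
    sim = embs @ embs.T / tau
    loss, n = 0.0, len(labels)
    for i in range(n):
        pos = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not pos:
            continue
        denom = sum(np.exp(sim[i, j]) for j in range(n) if j != i)
        loss -= sum(np.log(np.exp(sim[i, j]) / denom) for j in pos) / len(pos)
    return loss / n

# Phrases with the same intent embed identically here, so the loss is low.
embs = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])
labels = ["book_flight", "book_flight", "cancel_booking", "cancel_booking"]
loss = supervised_contrastive_loss(embs, labels)
```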
  • Patent number: 11610584
    Abstract: A computer-implemented method is disclosed for determining one or more characteristics of a dialog between a computer system and user. The method may comprise receiving a system utterance comprising one or more tokens defining one or more words generated by the computer system; receiving a user utterance comprising one or more tokens defining one or more words uttered by a user in response to the system utterance, the system utterance and the user utterance forming a dialog context; receiving one or more utterance candidates comprising one or more tokens; for each utterance candidate, generating an input sequence combining the one or more tokens of each of the system utterance, the user utterance, and the utterance candidate; and for each utterance candidate, evaluating the generated input sequence with a model to determine a probability that the utterance candidate is relevant to the dialog context.
    Type: Grant
    Filed: June 1, 2020
    Date of Patent: March 21, 2023
    Assignee: Adobe Inc.
    Inventors: Tuan Manh Lai, Trung Bui, Quan Hung Tran
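The per-candidate input construction can be sketched as packing the system utterance, user utterance, and candidate into one token sequence and scoring it. The `[CLS]`/`[SEP]` convention and the toy overlap model are assumptions standing in for the trained relevance model.

```python
def build_input(system_utt, user_utt, candidate):
    # One combined sequence per candidate: context + candidate tokens.
    return (["[CLS]"] + system_utt.split() + ["[SEP]"]
            + user_utt.split() + ["[SEP]"] + candidate.split() + ["[SEP]"])

def score_candidates(system_utt, user_utt, candidates, model):
    # model: any callable mapping a token sequence to a relevance score.
    return {c: model(build_input(system_utt, user_utt, c)) for c in candidates}

def overlap_model(tokens):
    # Toy stand-in: relevance = fraction of candidate tokens seen in context.
    sep = [i for i, t in enumerate(tokens) if t == "[SEP]"]
    context = set(tokens[1:sep[1]]) - {"[SEP]"}
    cand = set(tokens[sep[1] + 1:sep[2]])
    return len(context & cand) / max(len(cand), 1)

scores = score_candidates(
    "Which plan would you like?", "The premium plan please.",
    ["The premium plan costs $10.", "It is sunny today."], overlap_model)
best = max(scores, key=scores.get)
```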
  • Patent number: 11574142
    Abstract: The technology described herein is directed to a reinforcement learning based framework for training a natural media agent to learn a rendering policy without human supervision or labeled datasets. The reinforcement learning based framework feeds the natural media agent a training dataset to implicitly learn the rendering policy by exploring a canvas and minimizing a loss function. Once trained, the natural media agent can be applied to any reference image to generate a series (or sequence) of continuous-valued primitive graphic actions, e.g., sequence of painting strokes, that when rendered by a synthetic rendering environment on a canvas, reproduce an identical or transformed version of the reference image subject to limitations of an action space and the learned rendering policy.
    Type: Grant
    Filed: July 30, 2020
    Date of Patent: February 7, 2023
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Xihui Liu, Quan Hung Tran, Jianming Zhang, Handong Zhao
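A toy version of the rendering loop: the agent picks primitive stroke actions that reduce the distance between the canvas and the reference image. The 1-D "canvas", the (position, width, intensity) action space, and the greedy policy are illustrative stand-ins for the learned policy and synthetic rendering environment.

```python
import numpy as np

def render(canvas, action):
    # Apply one primitive "stroke": paint a window with a given intensity.
    pos, width, value = action
    lo, hi = max(0, pos - width), min(len(canvas), pos + width + 1)
    canvas = canvas.copy()
    canvas[lo:hi] = value
    return canvas

def greedy_policy(canvas, reference, actions):
    # Stand-in for the learned policy: pick the action whose rendered
    # stroke most reduces the L2 distance to the reference image.
    def dist(c):
        return float(np.sum((c - reference) ** 2))
    return min(actions, key=lambda a: dist(render(canvas, a)))

reference = np.array([0., 1., 1., 1., 0.])
canvas = np.zeros(5)
actions = [(1, 0, 1.0), (2, 1, 1.0), (2, 2, 1.0)]
best = greedy_policy(canvas, reference, actions)
canvas = render(canvas, best)
```

The trained agent replaces this greedy search with a policy learned by reinforcement, but the loop of act, render, compare-to-reference is the same.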
  • Publication number: 20220383031
    Abstract: The present disclosure describes a model for large scale color prediction of objects identified in images. Embodiments of the present disclosure include an object detection network, an attention network, and a color classification network. The object detection network generates object features for an object in an image and may include a convolutional neural network (CNN), region proposal network, or a ResNet. The attention network generates an attention vector for the object based on the object features, wherein the attention network takes a query vector based on the object features, and a plurality of key vectors and a plurality of value vectors corresponding to a plurality of colors as input. The color classification network generates a color attribute vector based on the attention vector, wherein the color attribute vector indicates a probability of the object including each of the plurality of colors.
    Type: Application
    Filed: May 28, 2021
    Publication date: December 1, 2022
    Inventors: Qiuyu Chen, Quan Hung Tran, Kushal Kafle, Trung Huu Bui, Franck Dernoncourt, Walter Chang
  • Publication number: 20220058390
    Abstract: A group captioning system includes computing hardware, software, and/or firmware components in support of the enhanced group captioning contemplated herein. In operation, the system generates a target embedding for a group of target images, as well as a reference embedding for a group of reference images. The system identifies information in-common between the group of target images and the group of reference images and removes the joint information from the target embedding and the reference embedding. The result is a contrastive target embedding and a contrastive reference embedding, which together form a contrastive group embedding that is then input to a model to obtain a group caption for the target group of images.
    Type: Application
    Filed: August 20, 2020
    Publication date: February 24, 2022
    Inventors: Quan Hung Tran, Long Thanh Mai, Zhe Lin, Zhuowan Li
  • Publication number: 20220036127
    Abstract: The technology described herein is directed to a reinforcement learning based framework for training a natural media agent to learn a rendering policy without human supervision or labeled datasets. The reinforcement learning based framework feeds the natural media agent a training dataset to implicitly learn the rendering policy by exploring a canvas and minimizing a loss function. Once trained, the natural media agent can be applied to any reference image to generate a series (or sequence) of continuous-valued primitive graphic actions, e.g., sequence of painting strokes, that when rendered by a synthetic rendering environment on a canvas, reproduce an identical or transformed version of the reference image subject to limitations of an action space and the learned rendering policy.
    Type: Application
    Filed: July 30, 2020
    Publication date: February 3, 2022
    Inventors: Zhe Lin, Xihui Liu, Quan Hung Tran, Jianming Zhang, Handong Zhao
  • Publication number: 20210375277
    Abstract: A computer-implemented method is disclosed for determining one or more characteristics of a dialog between a computer system and user. The method may comprise receiving a system utterance comprising one or more tokens defining one or more words generated by the computer system; receiving a user utterance comprising one or more tokens defining one or more words uttered by a user in response to the system utterance, the system utterance and the user utterance forming a dialog context; receiving one or more utterance candidates comprising one or more tokens; for each utterance candidate, generating an input sequence combining the one or more tokens of each of the system utterance, the user utterance, and the utterance candidate; and for each utterance candidate, evaluating the generated input sequence with a model to determine a probability that the utterance candidate is relevant to the dialog context.
    Type: Application
    Filed: June 1, 2020
    Publication date: December 2, 2021
    Inventors: Tuan Manh LAI, Trung BUI, Quan Hung TRAN
  • Publication number: 20210182662
    Abstract: Techniques for training a first neural network (NN) model using a pre-trained second NN model are disclosed. In an example, training data is input to the first and second models. The training data includes masked tokens and unmasked tokens. In response, the first model generates a first prediction associated with a masked token and a second prediction associated with an unmasked token, and the second model generates a third prediction associated with the masked token and a fourth prediction associated with the unmasked token. The first model is trained, based at least in part on the first, second, third, and fourth predictions. In another example, a prediction associated with a masked token, a prediction associated with an unmasked token, and a prediction associated with whether two sentences of training data are adjacent sentences are received from each of the first and second models. The first model is trained using the predictions.
    Type: Application
    Filed: December 17, 2019
    Publication date: June 17, 2021
    Applicant: Adobe Inc.
    Inventors: Tuan Manh Lai, Trung Huu Bui, Quan Hung Tran
  • Publication number: 20190325068
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating digital responses to digital queries by utilizing a classification model and query-specific analysis models. For example, the disclosed systems can train a classification model to generate query classifications corresponding to product queries, conversational queries, and/or recommendation/purchase queries. Moreover, the disclosed systems can apply the classification model to select pertinent models for particular queries. For example, upon classifying a product query, the disclosed systems can utilize a neural ranking model (trained based on a set of training product specifications and training queries) to generate relevance scores for product specifications associated with a digital query.
    Type: Application
    Filed: April 19, 2018
    Publication date: October 24, 2019
    Inventors: Tuan Manh Lai, Trung Bui, Sheng Li, Quan Hung Tran, Hung Bui