Patents by Inventor Chu Hong Hoi

Chu Hong Hoi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220156530
    Abstract: An interpolative centroid contrastive learning (ICCL) framework is disclosed for learning a more discriminative representation for tail classes. Specifically, data samples, such as natural images, are projected into a low-dimensional embedding space, and class centroids for respective classes are created as average embeddings of the samples that belong to each class. Virtual training samples are then created by interpolating two images from two samplers: a class-agnostic sampler, which returns all images from both the head class and the tail class with equal probability, and a class-aware sampler, which focuses more on tail-class images by sampling images from the tail class with a higher probability than images from the head class. The sampled images, e.g., an image from the class-agnostic sampler and an image from the class-aware sampler, may then be interpolated to generate the virtual training images.
    Type: Application
    Filed: March 1, 2021
    Publication date: May 19, 2022
    Inventors: Anthony Meng Huat Tiong, Junnan Li, Chu Hong Hoi
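    A minimal sketch of the two-sampler interpolation step described in the abstract above, assuming a PyTorch data pipeline; the dataset layout, sampler weights, and Beta-distributed mixing coefficient are illustrative assumptions rather than the patented implementation.

      import torch
      from torch.utils.data import DataLoader, WeightedRandomSampler

      def make_loaders(dataset, labels, class_counts, batch_size=64):
          # Class-agnostic sampler: every image is drawn with equal probability.
          uniform_loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
          # Class-aware sampler: tail-class images are drawn with higher probability
          # (each image weighted inversely to its class frequency).
          weights = [1.0 / class_counts[y] for y in labels]
          tail_sampler = WeightedRandomSampler(weights, num_samples=len(labels))
          tail_loader = DataLoader(dataset, batch_size=batch_size, sampler=tail_sampler)
          return uniform_loader, tail_loader

      def interpolate(batch_a, batch_b, alpha=0.5):
          # Mixup-style interpolation of a class-agnostic batch with a class-aware
          # batch to create virtual training images.
          lam = torch.distributions.Beta(alpha, alpha).sample()
          return lam * batch_a + (1.0 - lam) * batch_b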
  • Publication number: 20220156507
    Abstract: The system and method are directed to prototypical contrastive learning (PCL). The PCL explicitly encodes the hierarchical semantic structure of the dataset into the learned embedding space and prevents the network from exploiting low-level cues for solving the unsupervised learning task. The PCL includes prototypes as the latent variables to help find the maximum-likelihood estimation of the network parameters in an expectation-maximization framework. The PCL iteratively performs an E-step for finding prototypes with clustering and an M-step for optimizing the network on a contrastive loss.
    Type: Application
    Filed: February 2, 2022
    Publication date: May 19, 2022
    Inventors: Junnan Li, Chu Hong Hoi
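    A compact sketch of the E-step/M-step loop described in the abstract above, assuming scikit-learn k-means for the clustering step and a PyTorch encoder; the prototype count and temperature are illustrative assumptions.

      import torch
      import torch.nn.functional as F
      from sklearn.cluster import KMeans

      def e_step(embeddings, num_prototypes=100):
          # E-step: cluster the normalized embeddings; cluster centroids act as prototypes.
          km = KMeans(n_clusters=num_prototypes).fit(embeddings.cpu().numpy())
          prototypes = torch.tensor(km.cluster_centers_, dtype=torch.float32)
          assignments = torch.tensor(km.labels_, dtype=torch.long)
          return F.normalize(prototypes, dim=1), assignments

      def m_step_loss(z, assignments, prototypes, temperature=0.1):
          # M-step: a ProtoNCE-style contrastive loss that pulls each embedding toward
          # its assigned prototype and away from the other prototypes.
          z = F.normalize(z, dim=1)
          logits = z @ prototypes.t() / temperature   # (batch, num_prototypes)
          return F.cross_entropy(logits, assignments)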
  • Patent number: 11334766
    Abstract: Systems and methods are provided for training object detectors of a neural network model with a mixture of label noise and bounding box noise. According to some embodiments, a learning framework is provided which jointly optimizes object labels, bounding box coordinates, and model parameters by performing alternating noise correction and model training. In some embodiments, to disentangle label noise and bounding box noise, a two-step noise correction method is employed. In some examples, the first step performs class-agnostic bounding box correction by minimizing classifier discrepancy and maximizing region objectness. In some examples, the second step uses dual detection heads for label correction and class-specific bounding box refinement.
    Type: Grant
    Filed: January 31, 2020
    Date of Patent: May 17, 2022
    Assignee: salesforce.com, inc.
    Inventors: Junnan Li, Chu Hong Hoi
  • Publication number: 20220139384
    Abstract: Embodiments described herein provide methods and systems for training task-oriented dialogue (TOD) language models. In some embodiments, a TOD language model may receive a TOD dataset including a plurality of dialogues and a model input sequence may be generated from the dialogues using a first token prefixed to each user utterance and a second token prefixed to each system response of the dialogues. In some embodiments, the first token or the second token may be randomly replaced with a mask token to generate a masked training sequence and a masked language modeling (MLM) loss may be computed using the masked training sequence. In some embodiments, the TOD language model may be updated based on the MLM loss.
    Type: Application
    Filed: November 3, 2020
    Publication date: May 5, 2022
    Inventors: Chien-Sheng Wu, Chu Hong Hoi, Richard Socher, Caiming Xiong
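    A small sketch of how the model input sequence and the masked training sequence described in the abstract above might be built, assuming whitespace tokenization; the token names [USR], [SYS], and [MASK] and the masking probability are illustrative assumptions.

      import random

      USR, SYS, MASK = "[USR]", "[SYS]", "[MASK]"

      def build_input_sequence(dialogue):
          # dialogue: list of (speaker, utterance) pairs, speaker in {"user", "system"}.
          # Prefix each user utterance with [USR] and each system response with [SYS].
          tokens = []
          for speaker, utterance in dialogue:
              tokens.append(USR if speaker == "user" else SYS)
              tokens.extend(utterance.split())
          return tokens

      def mask_speaker_tokens(tokens, mask_prob=0.15):
          # Randomly replace [USR]/[SYS] tokens with [MASK]; the masked language
          # modeling (MLM) loss is computed only at the masked positions.
          masked, targets = [], []
          for tok in tokens:
              if tok in (USR, SYS) and random.random() < mask_prob:
                  masked.append(MASK)
                  targets.append(tok)    # the original token is the MLM label
              else:
                  masked.append(tok)
                  targets.append(None)   # position not used in the loss
          return masked, targets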
  • Publication number: 20220114464
    Abstract: Embodiments described herein provide a two-stage model-agnostic approach for generating counterfactual explanation via counterfactual feature selection and counterfactual feature optimization. Given a query instance, counterfactual feature selection picks a subset of feature columns and values that can potentially change the prediction, and then counterfactual feature optimization determines the best feature value for the selected feature as a counterfactual example.
    Type: Application
    Filed: January 29, 2021
    Publication date: April 14, 2022
    Inventors: Wenzhuo Yang, Jia Li, Chu Hong Hoi, Caiming Xiong
  • Publication number: 20220114481
    Abstract: Embodiments described herein provide a two-stage model-agnostic approach for generating counterfactual explanation via counterfactual feature selection and counterfactual feature optimization. Given a query instance, counterfactual feature selection picks a subset of feature columns and values that can potentially change the prediction, and then counterfactual feature optimization determines the best feature value for the selected feature as a counterfactual example.
    Type: Application
    Filed: January 29, 2021
    Publication date: April 14, 2022
    Inventors: Wenzhuo Yang, Jia Li, Chu Hong Hoi, Caiming Xiong
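    Both publications above cover the same two-stage procedure; the following is a minimal, model-agnostic sketch of it, assuming a black-box predict function and a small candidate value set per feature (both illustrative assumptions).

      from itertools import product

      def counterfactual_search(predict, query, candidates, target, max_features=2):
          # predict(instance) -> predicted label for a dict of feature values
          # query             -> dict of feature values to explain
          # candidates        -> dict mapping feature name to alternative values
          # target            -> desired prediction for the counterfactual example

          # Stage 1: counterfactual feature selection - keep features whose change,
          # varied one at a time, can potentially flip the prediction.
          promising = [f for f, vals in candidates.items()
                       if any(predict({**query, f: v}) == target for v in vals)]
          promising = promising[:max_features]

          # Stage 2: counterfactual feature optimization - over the selected features,
          # pick the value combination that reaches the target with the fewest changes.
          best = None
          for values in product(*(candidates[f] for f in promising)):
              example = {**query, **dict(zip(promising, values))}
              if predict(example) == target:
                  changed = sum(example[f] != query[f] for f in promising)
                  if best is None or changed < best[0]:
                      best = (changed, example)
          return None if best is None else best[1]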
  • Publication number: 20220108169
    Abstract: Embodiments described herein provide systems and methods for a partially supervised training model for question answering tasks. Specifically, the partially supervised training model may include two modules: a query parsing module and a program execution module. The query parsing module parses queries into a program, and the program execution module executes the program to reach an answer through explicit reasoning and partial supervision. In this way, the partially supervised training model can be trained with answers as supervision, obviating the need for supervision by gold program operations and gold query-span attention at each step of the program.
    Type: Application
    Filed: January 29, 2021
    Publication date: April 7, 2022
    Inventors: Amrita Saha, Shafiq Rayhan Joty, Chu Hong Hoi
  • Publication number: 20220108688
    Abstract: Embodiments described herein provide an Adapt-and-Adjust (A2) mechanism for a multilingual speech recognition model that combines both adaptation and adjustment methods in an integrated end-to-end training to improve the model's generalization and mitigate the long-tailed issue. Specifically, a multilingual language model, mBERT, is utilized and converted into an autoregressive transformer decoder. In addition, a cross-attention module is added to the encoder on top of mBERT's self-attention layer in order to explore the acoustic space in addition to the text space. The joint training of the encoder and the mBERT decoder can bridge the semantic gap between speech and text.
    Type: Application
    Filed: January 29, 2021
    Publication date: April 7, 2022
    Inventors: Guangsen Wang, Chu Hong Hoi, Genta Indra Winata
  • Patent number: 11288438
    Abstract: Systems and methods are provided for performing a video-grounded dialogue task by a neural network model using bi-directional spatial-temporal reasoning. According to some embodiments, the systems and methods implement a dual network architecture or framework. This framework includes one network or reasoning module that learns dependencies between text and video in the spatial-to-temporal direction, and another network or reasoning module that learns in the temporal-to-spatial direction. The outputs of the multimodal reasoning modules may be combined to learn dependencies between language features in dialogues. The resulting joint representation is used as a contextual feature for the decoding components, which allows the model to generate semantically meaningful responses to users. In some embodiments, pointer networks are extended to the video-grounded dialogue task to allow the model to point to specific tokens from multiple source sequences to generate responses.
    Type: Grant
    Filed: February 4, 2020
    Date of Patent: March 29, 2022
    Assignee: salesforce.com, inc.
    Inventors: Hung Le, Chu Hong Hoi
  • Publication number: 20220067506
    Abstract: A learning mechanism with partially-labeled web images is provided that corrects the noisy labels during learning. Specifically, the mechanism employs a momentum prototype that represents common characteristics of a specific class. One training objective is to minimize the difference between the normalized embedding of a training image sample and the momentum prototype of the corresponding class. Meanwhile, during the training process, the momentum prototype is used to generate a pseudo label for the training image sample, which can then be used to identify and remove out-of-distribution (OOD) samples to correct the noisy labels from the original partially-labeled training images. The momentum prototype for each class is in turn constantly updated based on the embeddings of new training samples and their pseudo labels.
    Type: Application
    Filed: August 28, 2020
    Publication date: March 3, 2022
    Inventors: Junnan Li, Chu Hong Hoi
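    A minimal sketch of the momentum-prototype update and pseudo-labeling step described in the abstract above, assuming PyTorch; the momentum value and the out-of-distribution threshold are illustrative assumptions.

      import torch
      import torch.nn.functional as F

      def update_prototypes(prototypes, z, labels, momentum=0.99):
          # Momentum update: each class prototype drifts slowly toward the normalized
          # embeddings of the new training samples assigned to that class.
          z = F.normalize(z, dim=1)
          for emb, y in zip(z, labels):
              prototypes[y] = momentum * prototypes[y] + (1.0 - momentum) * emb
          return F.normalize(prototypes, dim=1)

      def pseudo_label_and_filter(prototypes, z, ood_threshold=0.3):
          # Pseudo label = nearest prototype by cosine similarity; samples far from
          # every prototype are treated as out-of-distribution and removed.
          z = F.normalize(z, dim=1)
          sims = z @ prototypes.t()
          max_sim, pseudo = sims.max(dim=1)
          keep = max_sim > ood_threshold
          return pseudo, keep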
  • Patent number: 11263476
    Abstract: The system and method are directed to prototypical contrastive learning (PCL). The PCL explicitly encodes the hierarchical semantic structure of the dataset into the learned embedding space and prevents the network from exploiting low-level cues for solving the unsupervised learning task. The PCL includes prototypes as the latent variables to help find the maximum-likelihood estimation of the network parameters in an expectation-maximization framework. The PCL iteratively performs an E-step for finding prototypes with clustering and an M-step for optimizing the network on a contrastive loss.
    Type: Grant
    Filed: May 8, 2020
    Date of Patent: March 1, 2022
    Assignee: salesforce.com, inc.
    Inventors: Junnan Li, Chu Hong Hoi
  • Publication number: 20220050966
    Abstract: A system performs conversations with users using chatbots customized for performing a set of tasks. The system may be a multi-tenant system that allows customization of the chatbots for each tenant. The system receives a task configuration that maps tasks to entity types and an entity configuration that specifies methods for determining entities of a particular entity type. The system receives a user utterance and determines the intent of the user using an intent detection model, for example, a neural network. The intent represents a task that the user is requesting. The system determines one or more entities corresponding to the task. The system performs tasks based on the determined intent and the entities and performs conversations with users based on the tasks.
    Type: Application
    Filed: August 13, 2020
    Publication date: February 17, 2022
    Inventors: Xinyi Yang, Tian Xie, Caiming Xiong, Wenhao Liu, Huan Wang, Jin Qu, Soujanya Lanka, Chu Hong Hoi, Xugang Ye, Feihong Wu
  • Publication number: 20210383188
    Abstract: A method for generating a neural network, including initializing the neural network including a plurality of cells, each cell corresponding to a graph including one or more nodes, each node corresponding to a latent representation of a dataset. A plurality of gates are generated, wherein each gate independently determines whether an operation between two nodes is used. A first regularization is performed using the plurality of gates. The first regularization is one of a group-structured sparsity regularization and a path-depth-wise regularization. An optimization is performed on the neural network by adjusting its network parameters and gate parameters based on the sparsity regularization.
    Type: Application
    Filed: October 16, 2020
    Publication date: December 9, 2021
    Inventors: Pan Zhou, Chu Hong Hoi
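    A rough sketch of the gate mechanism described in the abstract above, assuming PyTorch; the candidate operations and the group-lasso form of the sparsity penalty are illustrative assumptions rather than the patented regularizer.

      import torch
      import torch.nn as nn

      class GatedEdge(nn.Module):
          # One edge of a cell graph: a learnable gate per candidate operation,
          # where a gate near zero effectively switches that operation off.
          def __init__(self, channels):
              super().__init__()
              self.ops = nn.ModuleList([
                  nn.Conv2d(channels, channels, 3, padding=1),
                  nn.Conv2d(channels, channels, 5, padding=2),
                  nn.Identity(),
              ])
              self.gates = nn.Parameter(torch.ones(len(self.ops)))

          def forward(self, x):
              return sum(g * op(x) for g, op in zip(self.gates, self.ops))

      def group_sparsity_penalty(edges):
          # Group-structured sparsity: penalize the L2 norm of each edge's gate group,
          # encouraging whole groups of operations to be pruned together.
          return sum(edge.gates.norm(p=2) for edge in edges)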
  • Publication number: 20210374553
    Abstract: Embodiments described herein provide systems and methods for noise-robust contrastive learning. In view of the need for a noise-robust learning system, embodiments described herein provides a contrastive learning mechanism that combats noise by learning robust representations of the noisy data samples. Specifically, the training images are projected into a low-dimensional subspace, and the geometric structure of the subspace is regularized with: (1) a consistency contrastive loss that enforces images with perturbations to have similar embeddings; and (2) a prototypical contrastive loss augmented with a predetermined learning principle, which encourages the embedding for a linearly-interpolated input to have the same linear relationship with respect to the class prototypes. The low-dimensional embeddings are also trained to reconstruct the high-dimensional features, which preserves the learned information and regularizes the classifier.
    Type: Application
    Filed: September 9, 2020
    Publication date: December 2, 2021
    Inventors: Junnan Li, Chu Hong Hoi
  • Publication number: 20210374132
    Abstract: Embodiments are directed to a machine learning recommendation system. The system receives a user query for generating a recommendation for one or more items with an explanation associated with recommending the one or more items. The system obtains first features of at least one user and second features of a set of items. The system provides the first features and the second features to a first machine learning network for determining a predicted score for an item. The system provides a portion of the first features and a portion of the second features to second machine learning networks for determining explainability scores for an item and generating corresponding explanation narratives. The system provides the recommendation for one or more items and corresponding explanation narratives based on ranking predicted scores and explainability scores for the items.
    Type: Application
    Filed: November 10, 2020
    Publication date: December 2, 2021
    Inventors: Wenzhuo Yang, Jia Li, Chenxi Li, Latrice Barnett, Markus Anderle, Simo Arajarvi, Harshavardhan Utharavalli, Caiming Xiong, Richard Socher, Chu Hong Hoi
  • Publication number: 20210375280
    Abstract: Embodiments described herein provide a dynamic topic tracking mechanism that tracks how the conversation topics change from one utterance to another and uses the tracking information to rank candidate responses. A pre-trained language model may be used for response selection in multi-party conversations through two steps: (1) topic-based pre-training to embed topic information into the language model with self-supervised learning, and (2) multi-task learning on the pre-trained model by jointly training the response selection, dynamic topic prediction, and disentanglement tasks.
    Type: Application
    Filed: September 8, 2020
    Publication date: December 2, 2021
    Inventors: Weishi Wang, Shafiq Rayhan Joty, Chu Hong Hoi
  • Publication number: 20210295091
    Abstract: The system and method are directed to prototypical contrastive learning (PCL). The PCL explicitly encodes the hierarchical semantic structure of the dataset into the learned embedding space and prevents the network from exploiting low-level cues for solving the unsupervised learning task. The PCL includes prototypes as the latent variables to help find the maximum-likelihood estimation of the network parameters in an expectation-maximization framework. The PCL iteratively performs an E-step for finding prototypes with clustering and an M-step for optimizing the network on a contrastive loss.
    Type: Application
    Filed: May 8, 2020
    Publication date: September 23, 2021
    Inventors: Junnan Li, Chu Hong Hoi
  • Publication number: 20210232773
    Abstract: A visual dialogue model receives image input and text input that includes a dialogue history between the model and a current utterance by a human user. The model generates a unified contextualized representation using a transformer encoder network, in which the unified contextualized representation includes a token level encoding of the image input and text input. The model generates an encoded visual dialogue input from the unified contextualized representation using visual dialogue encoding layers. The encoded visual dialogue input includes a position level encoding and a segment type encoding. The model generates an answer prediction from the encoded visual dialogue input using a first self-attention mask associated with discriminative settings or a second self-attention mask associated with generative settings. Dense annotation fine tuning may be performed to increase accuracy of the answer prediction. The model provides the answer prediction as a response to the current utterance of the human user.
    Type: Application
    Filed: July 15, 2020
    Publication date: July 29, 2021
    Inventors: Yue Wang, Chu Hong Hoi, Shafiq Rayhan Joty
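    A small sketch of the two self-attention masks mentioned in the abstract above, assuming PyTorch and the convention that 1 means a position may be attended to; the split of the sequence into context tokens and answer tokens is an illustrative assumption.

      import torch

      def discriminative_mask(seq_len):
          # Discriminative setting: full bidirectional attention over the whole
          # image + dialogue + candidate-answer sequence.
          return torch.ones(seq_len, seq_len)

      def generative_mask(context_len, answer_len):
          # Generative setting: context tokens attend only to the context, while
          # answer tokens attend to the context and to previous answer tokens
          # (causal attention), so the answer can be decoded left to right.
          total = context_len + answer_len
          mask = torch.zeros(total, total)
          mask[:, :context_len] = 1.0                     # every token sees the context
          mask[context_len:, context_len:] = torch.tril(  # causal within the answer
              torch.ones(answer_len, answer_len))
          return mask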
  • Publication number: 20210174798
    Abstract: Embodiments described in this disclosure illustrate the use of self-/semi-supervised approaches for label-efficient dialogue state tracking (DST) in task-oriented dialogue systems. Conversational behavior is modeled by next response generation and turn utterance generation tasks. Prediction consistency is strengthened by augmenting data with stochastic word dropout and label guessing. Experimental results show that, by exploiting self-supervision, the joint goal accuracy can be boosted with limited labeled data.
    Type: Application
    Filed: May 8, 2020
    Publication date: June 10, 2021
    Inventors: Chien-Sheng Wu, Chu Hong Hoi, Caiming Xiong
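    A tiny sketch of the stochastic word dropout augmentation mentioned in the abstract above, assuming whitespace tokenization; the drop probability and the [UNK] placeholder are illustrative assumptions.

      import random

      def word_dropout(utterance, drop_prob=0.1, unk="[UNK]"):
          # Randomly replace tokens with a placeholder so the model cannot rely on
          # any single surface word; augmented copies support consistency training.
          return " ".join(unk if random.random() < drop_prob else tok
                          for tok in utterance.split())

      # Example: word_dropout("i want a cheap hotel in the north") might return
      # "i want a [UNK] hotel in the north".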
  • Publication number: 20210174023
    Abstract: Embodiments described herein provide systems and methods for an Explicit Memory Tracker (EMT) that tracks each rule sentence to perform decision making and to generate follow-up clarifying questions. Specifically, the EMT first segments the regulation text into several rule sentences and allocates the segmented rule sentences into memory modules, and then feeds information regarding the user scenario and dialogue history into the EMT sequentially to update each memory module separately. At each dialogue turn, the EMT decides, based on the current memory status of the memory modules, whether further clarification is needed to come up with an answer to a user question. The EMT determines that further clarification is needed by identifying an underspecified rule sentence span by modulating token-level span distributions with sentence-level selection scores. The EMT extracts the underspecified rule sentence span and rephrases it to generate a follow-up question.
    Type: Application
    Filed: April 30, 2020
    Publication date: June 10, 2021
    Inventors: Yifan Gao, Chu Hong Hoi, Shafiq Rayhan Joty, Chien-Sheng Wu