Patents by Inventor Nitish Shirish Keskar

Nitish Shirish Keskar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210141865
Abstract: A multi-tenant system performs custom configuration of a tenant-specific chatbot to process and act upon natural language requests. The multi-tenant system configures the tenant-specific chatbots without requiring tenant-specific training. The multi-tenant system provides a user interface for configuring a tenant-specific set of permitted actions. The multi-tenant system determines a set of example phrases for each of the selected permitted actions. The multi-tenant system receives a natural language request from a user and identifies the action that the user wants to perform. The multi-tenant system uses a neural network to compare the natural language request with example phrases to identify an example phrase that matches the natural language request. The multi-tenant system performs the action corresponding to the matching example phrase.
    Type: Application
    Filed: November 11, 2019
    Publication date: May 13, 2021
Inventors: Michael Machado, James Douglas Harrison, Caiming Xiong, Xinyi Yang, Thomas Archie Cook, Roojuta Lalani, Jean-Marc Soumet, Karl Ryszard Skucha, Juan Manuel Rodriguez, Manju Vijayakumar, Vishal Motwani, Tian Xie, Bryan McCann, Nitish Shirish Keskar, Armen Abrahamyan, Zhihao Zou, Chitra Gulabrani, Minal Khodani, Adarsha Badarinath, Rohiniben Thakar, Srikanth Kollu, Kevin Schoen, Qiong Liu, Amit Hetawal, Kevin Zhang, Victor Brouk, Johnson Liu, Rafael Amsili
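
A minimal sketch of the matching step this abstract describes: encode the user's request and each configured example phrase with a neural encoder, then pick the action whose phrase is most similar. The toy encoder, action names, and phrases below are illustrative assumptions, not the patented implementation.

```python
import torch

VOCAB = 10_000
emb = torch.nn.EmbeddingBag(VOCAB, 64, mode="mean")  # toy stand-in sentence encoder

def encode(text: str) -> torch.Tensor:
    # Hash words into a fixed vocabulary and average their embeddings.
    ids = torch.tensor([hash(w) % VOCAB for w in text.lower().split()])
    return emb(ids.unsqueeze(0)).squeeze(0)

# Tenant-configured permitted actions, each with example phrases (hypothetical data).
actions = {
    "create_case": ["open a new support case", "file a ticket"],
    "reset_password": ["reset my password", "I forgot my login"],
}

def identify_action(request: str) -> str:
    # Compare the request against every example phrase; return the action
    # whose phrase matches best under cosine similarity.
    req = encode(request)
    best_action, best_sim = None, -1.0
    for action, phrases in actions.items():
        for phrase in phrases:
            sim = torch.cosine_similarity(req, encode(phrase), dim=0).item()
            if sim > best_sim:
                best_action, best_sim = action, sim
    return best_action

print(identify_action("please reset my password"))
```
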
  • Patent number: 11003867
    Abstract: Approaches for cross-lingual regularization for multilingual generalization include a method for training a natural language processing (NLP) deep learning module. The method includes accessing a first dataset having a first training data entry, the first training data entry including one or more natural language input text strings in a first language; translating at least one of the one or more natural language input text strings of the first training data entry from the first language to a second language; creating a second training data entry by starting with the first training data entry and substituting the at least one of the natural language input text strings in the first language with the translation of the at least one of the natural language input text strings in the second language; adding the second training data entry to a second dataset; and training the deep learning module using the second dataset.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: May 11, 2021
    Assignee: salesforce.com, inc.
    Inventors: Jasdeep Singh, Nitish Shirish Keskar, Bryan McCann
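
A minimal sketch of the augmentation loop described in this abstract: start from an existing training entry, substitute one text field with its translation, and add the result to a second dataset. The `translate` stub and dataset fields are illustrative assumptions; a real pipeline would call a machine translation system.

```python
def translate(text: str, src: str, tgt: str) -> str:
    # Placeholder for machine translation from src to tgt.
    return f"<{tgt}> {text}"

def augment(dataset, src="en", tgt="de", field="premise"):
    augmented = list(dataset)
    for entry in dataset:
        new_entry = dict(entry)                               # start from the first entry
        new_entry[field] = translate(entry[field], src, tgt)  # substitute the translation
        augmented.append(new_entry)                           # add to the second dataset
    return augmented

data = [{"premise": "A man walks a dog.",
         "hypothesis": "An animal is outside.",
         "label": "entailment"}]
print(augment(data))
```
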
  • Publication number: 20200380213
Abstract: Approaches for multitask learning as question answering include an input layer for encoding a context and a question, a self-attention based transformer including an encoder and a decoder, a first bi-directional long short-term memory (biLSTM) for further encoding an output of the encoder, a long short-term memory (LSTM) for generating a context-adjusted hidden state from the output of the decoder and a hidden state, an attention network for generating first attention weights based on an output of the first biLSTM and an output of the LSTM, a vocabulary layer for generating a distribution over a vocabulary, a context layer for generating a distribution over the context, and a switch for generating a weighting between the distributions over the vocabulary and the context, generating a composite distribution based on the weighting, and selecting a word of an answer using the composite distribution.
    Type: Application
    Filed: August 18, 2020
    Publication date: December 3, 2020
    Inventors: Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, Richard Socher
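
A minimal sketch of the final switch step from this abstract: mix the vocabulary distribution with a pointer distribution over the context and select the next answer word from the composite. Shapes and the random inputs are stand-ins for the model's actual layers.

```python
import torch

vocab_size, ctx_len = 50, 12
p_vocab = torch.softmax(torch.randn(vocab_size), dim=0)   # from the vocabulary layer
attn = torch.softmax(torch.randn(ctx_len), dim=0)         # attention weights over context
ctx_token_ids = torch.randint(0, vocab_size, (ctx_len,))  # context words as vocab ids

gamma = torch.sigmoid(torch.randn(1))  # switch weighting in [0, 1]

# Scatter the pointer distribution over the context into vocabulary space,
# then form the composite distribution used to select the next answer word.
p_ctx = torch.zeros(vocab_size).index_add_(0, ctx_token_ids, attn)
p_composite = gamma * p_vocab + (1 - gamma) * p_ctx
next_word = int(p_composite.argmax())
```
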
  • Publication number: 20200334334
Abstract: Systems and methods for unifying question answering and text classification via span extraction include a preprocessor for preparing a source text and an auxiliary text based on a task type of a natural language processing task, an encoder for receiving the source text and the auxiliary text from the preprocessor and generating an encoded representation of a combination of the source text and the auxiliary text, and a span-extractive decoder for receiving the encoded representation and identifying a span of text within the source text that is a result of the NLP task. The task type is one of entailment, classification, or regression. In some embodiments, the auxiliary text includes one or more of text received as input when the task type is entailment, a list of classifications when the task type is entailment or classification, or a list of similarity options when the task type is regression.
    Type: Application
    Filed: July 22, 2019
    Publication date: October 22, 2020
    Inventors: Nitish Shirish Keskar, Bryan McCann, Richard Socher, Caiming Xiong
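
A minimal sketch of span-extractive decoding as described above, with candidate class labels placed in the input so the extracted span can serve as a classification. The random encoder output and untrained heads are illustrative assumptions; the patented encoder is a trained model over the combined source and auxiliary text.

```python
import torch

tokens = "positive negative [SEP] the movie was great".split()  # classes + input text
hidden = torch.randn(len(tokens), 64)  # stand-in for the encoded representation

start_head = torch.nn.Linear(64, 1)  # scores each token as a span start
end_head = torch.nn.Linear(64, 1)    # scores each token as a span end

start = int(start_head(hidden).squeeze(-1).argmax())
end = int(end_head(hidden).squeeze(-1)[start:].argmax()) + start  # enforce end >= start
print(tokens[start : end + 1])  # the extracted span, e.g. a class label
```
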
  • Patent number: 10776581
Abstract: Approaches for multitask learning as question answering include an input layer for encoding a context and a question, a self-attention based transformer including an encoder and a decoder, a first bi-directional long short-term memory (biLSTM) for further encoding an output of the encoder, a long short-term memory (LSTM) for generating a context-adjusted hidden state from the output of the decoder and a hidden state, an attention network for generating first attention weights based on an output of the first biLSTM and an output of the LSTM, a vocabulary layer for generating a distribution over a vocabulary, a context layer for generating a distribution over the context, and a switch for generating a weighting between the distributions over the vocabulary and the context, generating a composite distribution based on the weighting, and selecting a word of an answer using the composite distribution.
    Type: Grant
    Filed: May 8, 2018
    Date of Patent: September 15, 2020
    Assignee: salesforce.com, inc.
    Inventors: Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, Richard Socher
  • Publication number: 20200285706
    Abstract: Approaches for cross-lingual regularization for multilingual generalization include a method for training a natural language processing (NLP) deep learning module. The method includes accessing a first dataset having a first training data entry, the first training data entry including one or more natural language input text strings in a first language; translating at least one of the one or more natural language input text strings of the first training data entry from the first language to a second language; creating a second training data entry by starting with the first training data entry and substituting the at least one of the natural language input text strings in the first language with the translation of the at least one of the natural language input text strings in the second language; adding the second training data entry to a second dataset; and training the deep learning module using the second dataset.
    Type: Application
    Filed: April 30, 2019
    Publication date: September 10, 2020
    Inventors: Jasdeep Singh, Nitish Shirish Keskar, Bryan McCann
  • Publication number: 20190355270
Abstract: Approaches for natural language processing include a multi-layer encoder for encoding words from a context and words from a question in parallel, a multi-layer decoder for decoding the encoded context and the encoded question, a pointer generator for generating distributions over the words from the context, the words from the question, and words in a vocabulary based on an output from the decoder, and a switch. The switch generates a weighting of the distributions over the words from the context, the words from the question, and the words in the vocabulary, generates a composite distribution based on the weighting of the distribution over the words from the context, the distribution over the words from the question, and the distribution over the words in the vocabulary, and selects words for inclusion in an answer using the composite distribution.
    Type: Application
    Filed: June 12, 2018
    Publication date: November 21, 2019
    Applicant: salesforce.com, inc.
    Inventors: Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, Richard Socher
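
A minimal sketch extending the two-way switch shown earlier to the three distributions this abstract names (context, question, vocabulary). Shapes and the random inputs are again stand-ins for the model's actual layers.

```python
import torch

V, ctx_len, q_len = 50, 12, 6
p_vocab = torch.softmax(torch.randn(V), dim=0)  # distribution over the vocabulary
ctx_ids = torch.randint(0, V, (ctx_len,))       # context words as vocab ids
q_ids = torch.randint(0, V, (q_len,))           # question words as vocab ids

# Scatter pointer distributions over context and question into vocabulary space.
p_ctx = torch.zeros(V).index_add_(0, ctx_ids, torch.softmax(torch.randn(ctx_len), dim=0))
p_q = torch.zeros(V).index_add_(0, q_ids, torch.softmax(torch.randn(q_len), dim=0))

w = torch.softmax(torch.randn(3), dim=0)  # switch weighting over the three sources
p_composite = w[0] * p_ctx + w[1] * p_q + w[2] * p_vocab
answer_word = int(p_composite.argmax())
```
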
  • Publication number: 20190251168
Abstract: Approaches for multitask learning as question answering include an input layer for encoding a context and a question, a self-attention based transformer including an encoder and a decoder, a first bi-directional long short-term memory (biLSTM) for further encoding an output of the encoder, a long short-term memory (LSTM) for generating a context-adjusted hidden state from the output of the decoder and a hidden state, an attention network for generating first attention weights based on an output of the first biLSTM and an output of the LSTM, a vocabulary layer for generating a distribution over a vocabulary, a context layer for generating a distribution over the context, and a switch for generating a weighting between the distributions over the vocabulary and the context, generating a composite distribution based on the weighting, and selecting a word of an answer using the composite distribution.
    Type: Application
    Filed: May 8, 2018
    Publication date: August 15, 2019
    Inventors: Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, Richard Socher
  • Publication number: 20190251431
Abstract: Approaches for multitask learning as question answering include a method for training that includes receiving a plurality of training samples including training samples from a plurality of task types, presenting the training samples to a neural model to generate an answer, determining an error between the generated answer and the natural language ground truth answer for each training sample presented, and adjusting parameters of the neural model based on the error. Each of the training samples includes a natural language context, question, and ground truth answer. An order in which the training samples are presented to the neural model includes initially selecting the training samples according to a first training strategy and switching to selecting the training samples according to a second training strategy. In some embodiments, the first training strategy is a sequential training strategy and the second training strategy is a joint training strategy.
    Type: Application
    Filed: May 8, 2018
    Publication date: August 15, 2019
    Inventors: Nitish Shirish Keskar, Bryan McCann, Caiming Xiong, Richard Socher
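
A minimal sketch of the training-order idea: a stream that first presents samples task-by-task (the sequential strategy), then switches to sampling uniformly across all tasks (the joint strategy). Task names, data, and the switch point below are illustrative assumptions.

```python
import random

# Each training sample is a (context, question, ground-truth answer) triple.
tasks = {
    "qa": [("context1", "question1", "answer1")],
    "summarization": [("doc1", "what is the summary?", "summary1")],
}

def training_stream(tasks, sequential_epochs=2, joint_steps=4):
    # Phase 1: sequential strategy, one task at a time.
    for name, samples in tasks.items():
        for _ in range(sequential_epochs):
            yield from samples
    # Phase 2: joint strategy, sample uniformly across all tasks.
    pool = [s for samples in tasks.values() for s in samples]
    for _ in range(joint_steps):
        yield random.choice(pool)

for context, question, answer in training_stream(tasks):
    pass  # forward pass, compute error vs. ground truth answer, update parameters
```
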
  • Publication number: 20190188568
Abstract: A system for hybrid training of deep networks includes a multi-layer neural network. The training includes setting a current learning algorithm for the multi-layer neural network to a first learning algorithm. The training further includes iteratively applying training data to the neural network, determining a gradient for parameters of the neural network based on the applying of the training data, updating the parameters based on the current learning algorithm, and determining whether the current learning algorithm should be switched to a second learning algorithm based on the updating. The training further includes, in response to the determining that the current learning algorithm should be switched to a second learning algorithm, changing the current learning algorithm to the second learning algorithm and initializing a learning rate of the second learning algorithm based on the gradient and a step used by the first learning algorithm to update the parameters of the neural network.
    Type: Application
    Filed: March 20, 2018
    Publication date: June 20, 2019
Inventors: Nitish Shirish Keskar, Richard Socher
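
A minimal sketch of the optimizer switch this abstract describes, seeding the second algorithm's learning rate from the size of the step the first algorithm just took relative to the gradient. The fixed switch point and tiny model are toy stand-ins; the patent's switching criterion is more involved.

```python
import torch

model = torch.nn.Linear(4, 1)
data = [(torch.randn(8, 4), torch.randn(8, 1)) for _ in range(80)]
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # first learning algorithm
switch_step, switched = 40, False

for step, (x, y) in enumerate(data):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    if not switched and step >= switch_step:  # toy stand-in switching criterion
        before = [p.detach().clone() for p in model.parameters()]
        opt.step()
        # Initialize the second algorithm's learning rate from the gradient
        # and the step the first algorithm used to update the parameters.
        delta = torch.cat([(p.detach() - b).flatten()
                           for p, b in zip(model.parameters(), before)])
        grad = torch.cat([p.grad.flatten() for p in model.parameters()])
        lr_sgd = (delta.norm() / (grad.norm() + 1e-12)).item()
        opt = torch.optim.SGD(model.parameters(), lr=lr_sgd)  # second algorithm
        switched = True
    else:
        opt.step()
```
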
  • Publication number: 20190130273
    Abstract: A method for sequence-to-sequence prediction using a neural network model includes generating an encoded representation based on an input sequence using an encoder of the neural network model and predicting an output sequence based on the encoded representation using a decoder of the neural network model. The neural network model includes a plurality of model parameters learned according to a machine learning process. At least one of the encoder or the decoder includes a branched attention layer. Each branch of the branched attention layer includes an interdependent scaling node configured to scale an intermediate representation of the branch by a learned scaling parameter. The learned scaling parameter depends on one or more other learned scaling parameters of one or more other interdependent scaling nodes of one or more other branches of the branched attention layer.
    Type: Application
    Filed: January 30, 2018
    Publication date: May 2, 2019
Inventors: Nitish Shirish Keskar, Karim Ahmed, Richard Socher
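
A minimal sketch of a branched attention layer with interdependent scaling: a softmax over the learned per-branch scales makes each scale depend on all the others, as the abstract describes. Dimensions, the number of branches, and the single-head branches are assumptions.

```python
import torch

branches, d_model, seq = 4, 32, 10
x = torch.randn(seq, 1, d_model)  # (length, batch, features)

# Each branch is its own attention module with a learned scaling parameter.
attn_heads = [torch.nn.MultiheadAttention(d_model, 1) for _ in range(branches)]
kappa = torch.nn.Parameter(torch.zeros(branches))  # learned scaling parameters

# The softmax ties the scales together: each branch's scale depends on the
# scaling parameters of all the other branches.
scales = torch.softmax(kappa, dim=0)
out = sum(scales[i] * attn_heads[i](x, x, x, need_weights=False)[0]
          for i in range(branches))
```
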