Patents by Inventor Shashank Harinath

Shashank Harinath has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240143945
    Abstract: Embodiments described herein provide a cross-lingual intent classification model that predicts in multiple languages without requiring training data in all of those languages. For example, the training data requirement can be reduced to just one utterance per intent label. Specifically, when an utterance is fed to the intent classification model, the model checks whether the utterance is similar to any of the example utterances provided for each intent. If any such utterance(s) are found, the model returns the corresponding intent; otherwise, it returns out-of-domain (OOD).
    Type: Application
    Filed: January 30, 2023
    Publication date: May 2, 2024
    Inventors: Shubham Mehrotra, Zachary Alexander, Shilpa Bhagavath, Gurkirat Singh, Shashank Harinath, Anuprit Kale
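    Illustrative sketch: the example-matching idea above can be approximated with a multilingual sentence encoder and cosine similarity against one example utterance per intent. The encoder choice, threshold, and example utterances below are assumptions for illustration, not details from the patent.

        from sentence_transformers import SentenceTransformer  # assumed stand-in encoder
        import numpy as np

        # One example utterance per intent label, as the abstract describes.
        examples = {
            "check_order_status": "Where is my order?",
            "reset_password": "I forgot my password",
        }

        encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
        intent_names = list(examples)
        example_vecs = encoder.encode(list(examples.values()), convert_to_numpy=True)

        def classify(utterance, threshold=0.6):
            vec = encoder.encode([utterance], convert_to_numpy=True)[0]
            sims = example_vecs @ vec / (
                np.linalg.norm(example_vecs, axis=1) * np.linalg.norm(vec))
            best = int(sims.argmax())
            # Return the matched intent, or OOD when nothing is similar enough.
            return intent_names[best] if sims[best] >= threshold else "OOD"

        print(classify("¿Dónde está mi pedido?"))  # Spanish query against English examples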
  • Patent number: 11922303
    Abstract: Embodiments described herein provide a training mechanism that transfers the knowledge from a trained BERT model into a much smaller model that approximates the behavior of BERT. Specifically, the BERT model may be treated as a teacher model, and a much smaller student model may be trained using the same inputs to the teacher model and the output from the teacher model. In this way, the student model can be trained in a much shorter time than the BERT teacher model, but with performance comparable to BERT.
    Type: Grant
    Filed: May 18, 2020
    Date of Patent: March 5, 2024
    Assignee: Salesforce, Inc.
    Inventors: Wenhao Liu, Ka Chun Au, Shashank Harinath, Bryan McCann, Govardana Sachithanandam Ramachandran, Alexis Roos, Caiming Xiong
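    Illustrative sketch: a common way to realize this kind of teacher-student transfer is a distillation loss that blends the teacher's softened output distribution with the gold labels. The temperature, mixing weight, and tensor shapes below are assumptions for illustration, not values from the patent.

        import torch
        import torch.nn.functional as F

        def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
            # Soft targets: make the student match the teacher's softened distribution.
            soft = F.kl_div(
                F.log_softmax(student_logits / T, dim=-1),
                F.softmax(teacher_logits / T, dim=-1),
                reduction="batchmean",
            ) * (T * T)
            # Hard targets: ordinary cross-entropy against the gold labels.
            hard = F.cross_entropy(student_logits, labels)
            return alpha * soft + (1.0 - alpha) * hard

        student_logits = torch.randn(8, 5, requires_grad=True)   # small student model output
        teacher_logits = torch.randn(8, 5)                       # frozen BERT teacher output
        labels = torch.randint(0, 5, (8,))
        print(distillation_loss(student_logits, teacher_logits, labels).item())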
  • Patent number: 11880659
    Abstract: Methods and systems for hierarchical natural language understanding are described. A representation of an utterance is inputted to a first machine learning model to obtain information on the utterance. Based on that information, it is determined that the representation of the utterance is to be inputted to a second machine learning model that performs a dedicated natural language task. In response to that determination, the utterance is inputted to the second machine learning model to obtain an output of the dedicated natural language task.
    Type: Grant
    Filed: January 29, 2021
    Date of Patent: January 23, 2024
    Assignee: Salesforce, Inc.
    Inventors: Shiva Kumar Pentyala, Jean-Marc Soumet, Shashank Harinath, Shilpa Bhagavath, Johnson Liu, Ankit Chadha
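    Illustrative sketch: one way to read the two-stage design above is a first model that routes an utterance representation to a dedicated task head only when it is confident the task applies. The layer sizes, confidence threshold, and routing rule below are assumptions for illustration.

        import torch
        import torch.nn as nn

        class HierarchicalNLU(nn.Module):
            def __init__(self, dim=64, coarse_classes=3, fine_classes=10, threshold=0.7):
                super().__init__()
                self.router = nn.Linear(dim, coarse_classes)   # first model: coarse information
                self.fine_head = nn.Linear(dim, fine_classes)  # second model: dedicated NL task
                self.threshold = threshold

            def forward(self, utterance_repr):
                coarse = self.router(utterance_repr).softmax(dim=-1)
                confidence, label = coarse.max(dim=-1)
                # Route to the dedicated model only when the first stage is confident
                # that this utterance belongs to the task it handles (class 0 here).
                if label.item() == 0 and confidence.item() >= self.threshold:
                    return {"routed": True, "fine_logits": self.fine_head(utterance_repr)}
                return {"routed": False, "coarse_probs": coarse}

        print(HierarchicalNLU()(torch.randn(1, 64))["routed"])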
  • Publication number: 20230333901
    Abstract: Techniques are disclosed that pertain to facilitating the execution of machine learning (ML) models. A computer system may implement an ML model layer that permits ML models built using any of a plurality of different ML model frameworks to be submitted without a submitting entity having to define execution logic for a submitted ML model. The computer system may receive, via the ML model layer, configuration metadata for a particular ML model. The computer system may then receive a prediction request from a user to produce a prediction based on the particular ML model, and may produce that prediction. As a part of producing the prediction, the computer system may select, in accordance with the received configuration metadata, one of a plurality of types of hardware resources on which to load the particular ML model.
    Type: Application
    Filed: April 19, 2022
    Publication date: October 19, 2023
    Inventors: Arpeet Kale, Shashank Harinath
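    Illustrative sketch: configuration metadata can drive the hardware choice without per-model execution logic. The metadata fields and the selection rule below are invented for illustration and are not the patent's schema.

        from dataclasses import dataclass

        @dataclass
        class ModelConfig:
            name: str
            framework: str            # e.g. "pytorch", "tensorflow", "xgboost"
            min_gpu_memory_gb: float  # 0 means the model never needs a GPU
            prefer_gpu: bool

        def select_hardware(config: ModelConfig, free_gpu_memory_gb: float) -> str:
            # Pick the resource type purely from the submitted configuration metadata.
            if config.prefer_gpu and 0 < config.min_gpu_memory_gb <= free_gpu_memory_gb:
                return "gpu"
            return "cpu"

        cfg = ModelConfig("intent-clf", "pytorch", min_gpu_memory_gb=4.0, prefer_gpu=True)
        print(select_hardware(cfg, free_gpu_memory_gb=8.0))  # -> gpu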
  • Publication number: 20230086302
    Abstract: A method includes receiving an input at an interactive conversation service that uses an intent classification model. The method may further include generating, using an encoder model of the intent classification model, a set of output vectors corresponding to the input, where the encoder model is configured to determine a set of metrics corresponding to intent classifications. The method may further include determining, using an outlier detection model of the intent classification model, whether the input is in-domain or out-of-domain (OOD) based on a first vector of the set of output vectors satisfying a domain threshold relative to one or more of the intent classifications. The method may further include outputting, by the intent classification model, a second vector of the set of output vectors that indicates the set of metrics corresponding to the intent classifications or an indication that the input is OOD.
    Type: Application
    Filed: September 20, 2021
    Publication date: March 23, 2023
    Inventors: Shilpa Bhagavath, Shubham Mehrotra, Abhishek Sharma, Shashank Harinath, Na Cheng, Zineb Laraki
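    Illustrative sketch: one simple outlier-detection scheme in this spirit compares the encoder's output vector against per-intent reference vectors and declares OOD when no intent clears a domain threshold. The centroids, threshold, and scoring below are assumptions, not the patented detector.

        import numpy as np

        intent_centroids = {            # one reference vector per intent classification
            "book_flight": np.array([0.9, 0.1, 0.0]),
            "cancel_flight": np.array([0.1, 0.9, 0.0]),
        }

        def classify_or_ood(encoding, domain_threshold=1.0):
            names = list(intent_centroids)
            # Distance of the encoder output to each intent's reference vector.
            dists = np.array([np.linalg.norm(encoding - intent_centroids[n]) for n in names])
            if dists.min() > domain_threshold:
                return {"ood": True}
            return {"ood": False, "metrics": dict(zip(names, dists.round(3).tolist()))}

        print(classify_or_ood(np.array([0.8, 0.2, 0.1])))   # in-domain
        print(classify_or_ood(np.array([5.0, 5.0, 5.0])))   # out-of-domain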
  • Patent number: 11544465
    Abstract: Approaches to using unstructured input to update heterogeneous data stores include receiving unstructured text input, receiving a template for interpreting the unstructured text input, identifying, using an entity classifier, entities in the unstructured text input, identifying one or more potential parent entities from the identified entities based on the template, receiving a selection of a parent entity from the one or more potential parent entities, identifying one or more potential child entities from the identified entities based on the template and the selected parent entity, receiving a selection of a child entity from the one or more potential child entities, identifying an action item in the unstructured text input based on the identified entities and the template, determining, using an intent classifier, an intent of the action item, and updating a data store based on the determined intent, the identified entities, and the selected child entity.
    Type: Grant
    Filed: March 24, 2021
    Date of Patent: January 3, 2023
    Assignee: SALESFORCE.COM, INC.
    Inventors: Michael Machado, John Ball, Thomas Archie Cook, Jr., Shashank Harinath, Roojuta Lalani, Zineb Laraki, Qingqing Liu, Mike Rosenbaum, Karl Ryszard Skucha, Jean-Marc Soumet, Manju Vijayakumar
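    Illustrative sketch: the flow above can be mocked end to end with toy stand-ins for the entity and intent classifiers. The regexes, template fields, and data-store shape below are invented for illustration, and the parent/child selections that a user would make interactively are hard-coded.

        import re

        def classify_entities(text):                 # toy stand-in for the entity classifier
            orgs = re.findall(r"\b[A-Z][a-zA-Z]+ (?:Inc|Corp|LLC)\b", text)
            people = [p for p in re.findall(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b", text) if p not in orgs]
            return {"organization": orgs, "person": people}

        def classify_intent(text):                   # toy stand-in for the intent classifier
            return "create_task" if "follow up" in text.lower() else "log_note"

        def process_note(text, template, data_store):
            entities = classify_entities(text)
            parent = (entities.get(template["parent_type"]) or [None])[0]  # user would pick
            child = (entities.get(template["child_type"]) or [None])[0]    # user would pick
            data_store.setdefault(parent, []).append(
                {"intent": classify_intent(text), "child": child, "note": text})
            return data_store

        template = {"parent_type": "organization", "child_type": "person"}
        print(process_note("met Jane Doe of Acme Inc, follow up next week.", template, {}))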
  • Patent number: 11537899
    Abstract: An embodiment proposed herein uses sparsification techniques to train the neural network with a high feature dimension that may yield desirable in-domain detection accuracy, while pruning away dimensions in the output that are less important. Specifically, a sparsification vector is generated based on a Gaussian distribution (or other probabilistic distribution) and is multiplied with the higher-dimension output to reduce the number of feature dimensions. The pruned output may then be used by the neural network to learn the sparsification vector. In this way, out-of-distribution detection accuracy can be improved.
    Type: Grant
    Filed: May 18, 2020
    Date of Patent: December 27, 2022
    Assignee: Salesforce.com, Inc.
    Inventors: Govardana Sachithanandam Ramachandran, Ka Chun Au, Shashank Harinath, Wenhao Liu, Alexis Roos, Caiming Xiong
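    Illustrative sketch: the sparsification idea can be modeled as a Gaussian-initialized gating vector that scales the high-dimensional features and is learned jointly with the network, with an L1 penalty pushing unneeded dimensions toward zero. The dimensions and penalty weight below are assumptions for illustration.

        import torch
        import torch.nn as nn

        class SparsifiedHead(nn.Module):
            def __init__(self, feature_dim=512, num_intents=20):
                super().__init__()
                self.sparsify = nn.Parameter(torch.randn(feature_dim))  # Gaussian-initialized vector
                self.classifier = nn.Linear(feature_dim, num_intents)

            def forward(self, features):
                pruned = features * self.sparsify    # elementwise multiply gates/prunes dimensions
                return self.classifier(pruned)

            def sparsity_penalty(self):
                # L1 pressure pushes unimportant entries of the vector toward zero.
                return self.sparsify.abs().mean()

        head = SparsifiedHead()
        logits = head(torch.randn(4, 512))
        loss = nn.functional.cross_entropy(logits, torch.randint(0, 20, (4,))) \
               + 0.01 * head.sparsity_penalty()
        loss.backward()   # the sparsification vector is learned with the rest of the network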
  • Patent number: 11481636
    Abstract: An embodiment provided herein preprocesses the input samples to the classification neural network, e.g., by adding Gaussian noise to word/sentence representations to make the function of the neural network satisfy the Lipschitz property, such that a small change in the input does not cause much change to the output if the input sample is in-distribution. A method is described to induce properties in the feature representation of the neural network such that, for out-of-distribution examples, the feature representation magnitude is either close to zero or the feature representation is orthogonal to all class representations. A method is described to generate examples that are structurally similar to in-domain data and semantically out-of-domain, for use in out-of-domain classification training. A method is described to prune the feature representation dimension to mitigate the long-tail error of unused dimensions in out-of-domain classification. Using these techniques, the accuracy of both in-domain and out-of-distribution identification can be improved.
    Type: Grant
    Filed: May 18, 2020
    Date of Patent: October 25, 2022
    Assignee: Salesforce.com, Inc.
    Inventors: Govardana Sachithanandam Ramachandran, Ka Chun Au, Shashank Harinath, Wenhao Liu, Alexis Roos, Caiming Xiong
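    Illustrative sketch: the preprocessing step can be as simple as perturbing word/sentence representations with Gaussian noise at training time. The noise scale and tensor shapes below are assumptions for illustration.

        import torch

        def add_gaussian_noise(representations, sigma=0.1, training=True):
            # Perturb word/sentence representations so small input changes do not
            # swing the classifier output for in-distribution samples.
            if not training:
                return representations
            return representations + sigma * torch.randn_like(representations)

        sentence_reprs = torch.randn(2, 768)     # e.g. two sentence embeddings
        noisy = add_gaussian_noise(sentence_reprs)
        print((noisy - sentence_reprs).abs().mean())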
  • Patent number: 11436481
    Abstract: A method for natural language processing includes receiving, by one or more processors, an unstructured text input. An entity classifier is used to identify entities in the unstructured text input. Identifying the entities includes generating, using a plurality of sub-classifiers of a hierarchical neural network classifier of the entity classifier, a plurality of lower-level entity identifications associated with the unstructured text input. Identifying the entities further includes generating, using a combiner of the hierarchical neural network classifier, a plurality of higher-level entity identifications associated with the unstructured text input based on the plurality of lower-level entity identifications. Identified entities are provided based on the plurality of higher-level entity identifications.
    Type: Grant
    Filed: September 18, 2018
    Date of Patent: September 6, 2022
    Assignee: SALESFORCE.COM, INC.
    Inventors: Govardana Sachithanandam Ramachandran, Michael Machado, Shashank Harinath, Linwei Zhu, Yufan Xue, Abhishek Sharma, Jean-Marc Soumet, Bryan McCann
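    Illustrative sketch: the sub-classifier/combiner structure can be expressed as several lower-level heads whose per-token outputs are concatenated and fed to a combiner layer. The label counts and dimensions below are made up for illustration.

        import torch
        import torch.nn as nn

        class HierarchicalEntityClassifier(nn.Module):
            def __init__(self, token_dim=128, low_label_counts=(5, 7), high_labels=9):
                super().__init__()
                # Sub-classifiers emit lower-level entity identifications per token.
                self.sub_classifiers = nn.ModuleList(nn.Linear(token_dim, n) for n in low_label_counts)
                # The combiner maps those into higher-level entity identifications.
                self.combiner = nn.Linear(sum(low_label_counts), high_labels)

            def forward(self, token_reprs):                     # (batch, seq_len, token_dim)
                low = [sub(token_reprs) for sub in self.sub_classifiers]
                return self.combiner(torch.cat(low, dim=-1))

        out = HierarchicalEntityClassifier()(torch.randn(1, 6, 128))
        print(out.shape)   # torch.Size([1, 6, 9])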
  • Publication number: 20220245349
    Abstract: Methods and systems for hierarchical natural language understanding are described. A representation of an utterance is inputted to a first machine learning model to obtain information on the utterance. Based on that information, it is determined that the representation of the utterance is to be inputted to a second machine learning model that performs a dedicated natural language task. In response to that determination, the utterance is inputted to the second machine learning model to obtain an output of the dedicated natural language task.
    Type: Application
    Filed: January 29, 2021
    Publication date: August 4, 2022
    Inventors: Shiva Kumar Pentyala, Jean-Marc Soumet, Shashank Harinath, Shilpa Bhagavath, Johnson Liu, Ankit Chadha
  • Publication number: 20210209305
    Abstract: Approaches to using unstructured input to update heterogeneous data stores include receiving unstructured text input, receiving a template for interpreting the unstructured text input, identifying, using an entity classifier, entities in the unstructured text input, identifying one or more potential parent entities from the identified entities based on the template, receiving a selection of a parent entity from the one or more potential parent entities, identifying one or more potential child entities from the identified entities based on the template and the selected parent entity, receiving a selection of a child entity from the one or more potential child entities, identifying an action item in the unstructured text input based on the identified entities and the template, determining, using an intent classifier, an intent of the action item, and updating a data store based on the determined intent, the identified entities, and the selected child entity.
    Type: Application
    Filed: March 24, 2021
    Publication date: July 8, 2021
    Inventors: Michael Machado, John Ball, Thomas Archie Cook, Jr., Shashank Harinath, Roojuta Lalani, Zineb Laraki, Qingqing Liu, Mike Rosenbaum, Karl Ryszard Skucha, Jean-Marc Soumet, Manju Vijayakumar
  • Publication number: 20210150366
    Abstract: An embodiment proposed herein uses sparsification techniques to train the neural network with a high feature dimension that may yield desirable in-domain detection accuracy, while pruning away dimensions in the output that are less important. Specifically, a sparsification vector is generated based on a Gaussian distribution (or other probabilistic distribution) and is multiplied with the higher-dimension output to reduce the number of feature dimensions. The pruned output may then be used by the neural network to learn the sparsification vector. In this way, out-of-distribution detection accuracy can be improved.
    Type: Application
    Filed: May 18, 2020
    Publication date: May 20, 2021
    Inventors: Govardana Sachithanandam Ramachandran, Ka Chun Au, Shashank Harinath, Wenhao Liu, Alexis Roos, Caiming Xiong
  • Publication number: 20210150340
    Abstract: Embodiments described herein provide a training mechanism that transfers the knowledge from a trained BERT model into a much smaller model that approximates the behavior of BERT. Specifically, the BERT model may be treated as a teacher model, and a much smaller student model may be trained using the same inputs to the teacher model and the output from the teacher model. In this way, the student model can be trained in a much shorter time than the BERT teacher model, but with performance comparable to BERT.
    Type: Application
    Filed: May 18, 2020
    Publication date: May 20, 2021
    Inventors: Wenhao Liu, Ka Chun Au, Shashank Harinath, Bryan McCann, Govardana Sachithanandam Ramachandran, Alexis Roos, Caiming Xiong
  • Publication number: 20210150365
    Abstract: An embodiment provided herein preprocesses the input samples to the classification neural network, e.g., by adding Gaussian noise to word/sentence representations to make the function of the neural network satisfy the Lipschitz property, such that a small change in the input does not cause much change to the output if the input sample is in-distribution. A method is described to induce properties in the feature representation of the neural network such that, for out-of-distribution examples, the feature representation magnitude is either close to zero or the feature representation is orthogonal to all class representations. A method is described to generate examples that are structurally similar to in-domain data and semantically out-of-domain, for use in out-of-domain classification training. A method is described to prune the feature representation dimension to mitigate the long-tail error of unused dimensions in out-of-domain classification. Using these techniques, the accuracy of both in-domain and out-of-distribution identification can be improved.
    Type: Application
    Filed: May 18, 2020
    Publication date: May 20, 2021
    Inventors: Govardana Sachithanandam Ramachandran, Ka Chun Au, Shashank Harinath, Wenhao Liu, Alexis Roos, Caiming Xiong
  • Patent number: 10970486
    Abstract: Approaches to using unstructured input to update heterogeneous data stores include receiving unstructured text input, receiving a template for interpreting the unstructured text input, identifying, using an entity classifier, entities in the unstructured text input, identifying one or more potential parent entities from the identified entities based on the template, receiving a selection of a parent entity from the one or more potential parent entities, identifying one or more potential child entities from the identified entities based on the template and the selected parent entity, receiving a selection of a child entity from the one or more potential child entities, identifying an action item in the unstructured text input based on the identified entities and the template, determining, using an intent classifier, an intent of the action item, and updating a data store based on the determined intent, the identified entities, and the selected child entity.
    Type: Grant
    Filed: September 18, 2018
    Date of Patent: April 6, 2021
    Assignee: salesforce.com, inc.
    Inventors: Michael Machado, John Ball, Thomas Archie Cook, Jr., Shashank Harinath, Roojuta Lalani, Zineb Laraki, Qingqing Liu, Mike Rosenbaum, Karl Ryszard Skucha, Jean-Marc Soumet, Manju Vijayakumar
  • Publication number: 20200089757
    Abstract: Approaches to using unstructured input to update heterogeneous data stores include receiving unstructured text input, receiving a template for interpreting the unstructured text input, identifying, using an entity classifier, entities in the unstructured text input, identifying one or more potential parent entities from the identified entities based on the template, receiving a selection of a parent entity from the one or more potential parent entities, identifying one or more potential child entities from the identified entities based on the template and the selected parent entity, receiving a selection of a child entity from the one or more potential child entities, identifying an action item in the unstructured text input based on the identified entities and the template, determining, using an intent classifier, an intent of the action item, and updating a data store based on the determined intent, the identified entities, and the selected child entity.
    Type: Application
    Filed: September 18, 2018
    Publication date: March 19, 2020
    Inventors: Michael Machado, John Ball, Thomas Archie Cook, Jr., Shashank Harinath, Roojuta Lalani, Zineb Laraki, Qingqing Liu, Mike Rosenbaum, Karl Ryszard Skucha, Jean-Marc Soumet, Manju Vijayakumar
  • Publication number: 20200090033
    Abstract: A method for natural language processing includes receiving, by one or more processors, an unstructured text input. An entity classifier is used to identify entities in the unstructured text input. Identifying the entities includes generating, using a plurality of sub-classifiers of a hierarchical neural network classifier of the entity classifier, a plurality of lower-level entity identifications associated with the unstructured text input. Identifying the entities further includes generating, using a combiner of the hierarchical neural network classifier, a plurality of higher-level entity identifications associated with the unstructured text input based on the plurality of lower-level entity identifications. Identified entities are provided based on the plurality of higher-level entity identifications.
    Type: Application
    Filed: September 18, 2018
    Publication date: March 19, 2020
    Inventors: Govardana Sachithanandam Ramachandran, Michael Machado, Shashank Harinath, Linwei Zhu, Yufan Xue, Abhishek Sharma, Jean-Marc Soumet, Bryan McCann
  • Publication number: 20200090034
    Abstract: For a database system accessible by one or more users, a neural network model and related method are provided that allow a user of the database system to provide unstructured input in the form of a verbal or textual narrative or utterance that expresses the information in a language and manner that is more comfortable for the user. A portion of the narrative or utterance may relate to one or more action items that the user intends to be taken with respect to the database system, such as creating, updating, modifying, or deleting a database item (e.g., contact, calendar item, deal, etc.). The neural network model processes the unstructured input (narrative or utterance) and determines or classifies the intent with respect to the action item for the database.
    Type: Application
    Filed: September 18, 2018
    Publication date: March 19, 2020
    Inventors: Govardana Sachithanandam Ramachandran, Shashank Harinath, Abhishek Sharma, Jean-Marc Soumet, Michael Machado, Bryan McCann
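    Illustrative sketch: the intent-classification piece can be prototyped with a plain text classifier over action-item utterances. The training utterances, action labels, and model choice below are invented for illustration and are not the patent's neural model.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Toy narratives and the database action each one implies (labels are made up).
        utterances = [
            "add John Smith from Acme as a new contact",
            "create a contact for the buyer I met today",
            "update the Acme deal to stage negotiation",
            "move the renewal opportunity to closed won",
            "schedule a follow up call next Tuesday",
            "put a reminder on my calendar for Friday",
        ]
        actions = ["create_contact", "create_contact", "update_deal",
                   "update_deal", "create_calendar_item", "create_calendar_item"]

        intent_model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
        intent_model.fit(utterances, actions)
        print(intent_model.predict(["met a new prospect, please add her as a contact"]))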