Patents by Inventor Richard Socher

Richard Socher has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11966446
    Abstract: The present application generally relates to search engines, and more specifically to systems and methods for a search tool for code snippets. Embodiments described herein provide a customized code search system that generates code search results from customized data sources, extracts code snippets from the code search results, and presents the code snippets via a user interface. In one embodiment, the search system adopts a machine learning module to generate and highlight search results from different data sources that include code examples, e.g., in a programming language. To improve search efficiency, in response to a code search query, the search system may extract code snippets from search results from relevant sources in a user interface element, such as user-selectable panels.
    Type: Grant
    Filed: June 6, 2023
    Date of Patent: April 23, 2024
    Assignee: SuSea, Inc.
    Inventors: Richard Socher, Bryan McCann
  • Patent number: 11928600
    Abstract: A method for sequence-to-sequence prediction using a neural network model includes generating an encoded representation based on an input sequence using an encoder of the neural network model and predicting an output sequence based on the encoded representation using a decoder of the neural network model. The neural network model includes a plurality of model parameters learned according to a machine learning process. At least one of the encoder or the decoder includes a branched attention layer. Each branch of the branched attention layer includes an interdependent scaling node configured to scale an intermediate representation of the branch by a learned scaling parameter. The learned scaling parameter depends on one or more other learned scaling parameters of one or more other interdependent scaling nodes of one or more other branches of the branched attention layer.
    Type: Grant
    Filed: January 30, 2018
    Date of Patent: March 12, 2024
    Assignee: Salesforce, Inc.
    Inventors: Nitish Shirish Keskar, Karim Ahmed, Richard Socher
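The "interdependent scaling" in the abstract above can be sketched in a few lines: if each branch's scale is normalized over all branches' raw learned parameters, every scale depends on every other scale. This is one plausible reading of the claim, not the patent's actual formulation; the normalization and the summing merge rule below are illustrative.

```python
import math

def interdependent_scales(raw_params):
    """Softmax over the branches' raw scaling parameters, so each
    branch's scale depends on every other branch's parameter
    (one plausible reading of 'interdependent scaling nodes')."""
    exps = [math.exp(p) for p in raw_params]
    total = sum(exps)
    return [e / total for e in exps]

def branched_attention(branch_outputs, raw_params):
    """Scale each branch's intermediate representation by its
    interdependent scale, then sum the branches (hypothetical merge)."""
    scales = interdependent_scales(raw_params)
    merged = [0.0] * len(branch_outputs[0])
    for out, scale in zip(branch_outputs, scales):
        for i, value in enumerate(out):
            merged[i] += scale * value
    return merged
```

Because the scales sum to one, increasing one branch's parameter necessarily shrinks the others' effective weights, which is the interdependence the abstract describes.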
  • Patent number: 11922305
    Abstract: Embodiments described herein provide safe policy improvement (SPI) in a batch reinforcement learning framework for a task-oriented dialogue. Specifically, a batch reinforcement learning framework for dialogue policy learning is provided, which improves the performance of the dialogue and learns to shape a reward that reasons about the intention behind human responses rather than just imitating the human demonstration.
    Type: Grant
    Filed: November 25, 2020
    Date of Patent: March 5, 2024
    Assignee: Salesforce, Inc.
    Inventors: Govardana Sachithanandam Ramachandran, Kazuma Hashimoto, Caiming Xiong, Richard Socher
  • Publication number: 20240020538
    Abstract: Embodiments described herein provide systems and methods for a customized generative AI platform that provides users with a tool to generate various formats of responses to user inputs that incorporate results from searches performed by the generative AI platform. The system may use a neural network to utilize input data and contextual information to identify potential search queries, gather relevant data, sort information, generate text-based responses to user inputs, and present response and search results via user-engageable elements.
    Type: Application
    Filed: July 18, 2023
    Publication date: January 18, 2024
    Inventors: Richard Socher, Bryan McCann
  • Publication number: 20230419050
    Abstract: Embodiments described herein provide a pipelined natural language question answering system that improves a BERT-based system. Specifically, the natural language question answering system uses a pipeline of neural networks, each trained to perform a particular task. The context selection network identifies the premium context from the context for the question. The question type network identifies the natural language question as a yes, no, or span question, and a yes or no answer to the natural language question when the question is a yes or no question. The span extraction model determines an answer span to the natural language question when the question is a span question.
    Type: Application
    Filed: September 7, 2023
    Publication date: December 28, 2023
    Inventors: Akari Asai, Kazuma Hashimoto, Richard Socher, Caiming Xiong
  • Publication number: 20230394095
    Abstract: The present application generally relates to search engines, and more specifically to systems and methods for a search tool for code snippets. Embodiments described herein provide a customized code search system that generates code search results from customized data sources, extracts code snippets from the code search results, and presents the code snippets via a user interface. In one embodiment, the search system adopts a machine learning module to generate and highlight search results from different data sources that include code examples, e.g., in a programming language. To improve search efficiency, in response to a code search query, the search system may extract code snippets from search results from relevant sources in a user interface element, such as user-selectable panels.
    Type: Application
    Filed: June 6, 2023
    Publication date: December 7, 2023
    Inventors: Richard Socher, Bryan McCann
  • Patent number: 11822897
    Abstract: Approaches for the translation of structured text include an embedding module for encoding and embedding source text in a first language, an encoder for encoding output of the embedding module, a decoder for iteratively decoding output of the encoder based on generated tokens in translated text from previous iterations, a beam module for constraining output of the decoder with respect to possible embedded tags to include in the translated text for a current iteration using a beam search, and a layer for selecting a token to be included in the translated text for the current iteration. The translated text is in a second language different from the first language. In some embodiments, the approach further includes scoring and pointer modules for selecting the token based on the output of the beam module or copied from the source text or reference text from a training pair best matching the source text.
    Type: Grant
    Filed: August 31, 2021
    Date of Patent: November 21, 2023
    Assignee: salesforce.com, inc.
    Inventors: Kazuma Hashimoto, Raffaella Buschiazzo, James Bradbury, Teresa Anna Marshall, Caiming Xiong, Richard Socher
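Constraining the beam "with respect to possible embedded tags," as the abstract above describes, amounts to rejecting candidate tokens that would break tag well-formedness. Below is a minimal validity check of that kind, assuming a stack of currently open tags; the `valid_next` helper and its string-based tag encoding are hypothetical, not taken from the patent.

```python
def valid_next(token, open_stack, allowed_tags):
    """Return True if the candidate token keeps the translation's tags
    well-formed: closing tags must match the most recently opened tag,
    opening tags must come from the allowed set, and plain words are
    always permitted (illustrative constraint, not the patent's)."""
    if token.startswith("</"):
        return bool(open_stack) and token == "</" + open_stack[-1] + ">"
    if token.startswith("<"):
        return token.strip("<>") in allowed_tags
    return True
```

A beam search would apply this check to every expansion candidate, pruning hypotheses whose next token fails it.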
  • Patent number: 11797825
    Abstract: The technology disclosed provides a so-called “joint many-task neural network model” to solve a variety of increasingly complex natural language processing (NLP) tasks using growing depth of layers in a single end-to-end model. The model is successively trained by considering linguistic hierarchies, directly connecting word representations to all model layers, explicitly using predictions in lower tasks, and applying a so-called “successive regularization” technique to prevent catastrophic forgetting. Three examples of lower-level model layers are the part-of-speech (POS) tagging layer, chunking layer, and dependency parsing layer. Two examples of higher-level model layers are the semantic relatedness layer and textual entailment layer. The model achieves state-of-the-art results on chunking, dependency parsing, semantic relatedness, and textual entailment.
    Type: Grant
    Filed: May 26, 2021
    Date of Patent: October 24, 2023
    Assignee: Salesforce, Inc.
    Inventors: Kazuma Hashimoto, Caiming Xiong, Richard Socher
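The two ideas in the abstract above, connecting word representations to all model layers and feeding lower-task predictions upward, can be sketched as a bottom-up pass in which every task layer receives the original word representations plus every lower layer's prediction. The callables below stand in for the trained POS, chunking, and parsing layers; this is a control-flow sketch, not the patent's architecture.

```python
def jmt_forward(word_reps, layers):
    """Run the task layers bottom-up. Each layer sees the original
    word representations (shortcut connection) together with all
    lower layers' predictions, as the abstract describes."""
    predictions = []
    for layer in layers:
        predictions.append(layer(word_reps, list(predictions)))
    return predictions
```

In the real model each `layer` would be a biLSTM head; here each stand-in simply records how many lower predictions it was given.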
  • Patent number: 11783164
    Abstract: The technology disclosed provides a so-called “joint many-task neural network model” to solve a variety of increasingly complex natural language processing (NLP) tasks using growing depth of layers in a single end-to-end model. The model is successively trained by considering linguistic hierarchies, directly connecting word representations to all model layers, explicitly using predictions in lower tasks, and applying a so-called “successive regularization” technique to prevent catastrophic forgetting. Three examples of lower-level model layers are the part-of-speech (POS) tagging layer, chunking layer, and dependency parsing layer. Two examples of higher-level model layers are the semantic relatedness layer and textual entailment layer. The model achieves state-of-the-art results on chunking, dependency parsing, semantic relatedness, and textual entailment.
    Type: Grant
    Filed: October 26, 2020
    Date of Patent: October 10, 2023
    Assignee: Salesforce.com, Inc.
    Inventors: Kazuma Hashimoto, Caiming Xiong, Richard Socher
  • Patent number: 11775775
    Abstract: Embodiments described herein provide a pipelined natural language question answering system that improves a BERT-based system. Specifically, the natural language question answering system uses a pipeline of neural networks, each trained to perform a particular task. The context selection network identifies the premium context from the context for the question. The question type network identifies the natural language question as a yes, no, or span question, and a yes or no answer to the natural language question when the question is a yes or no question. The span extraction model determines an answer span to the natural language question when the question is a span question.
    Type: Grant
    Filed: November 26, 2019
    Date of Patent: October 3, 2023
    Assignee: Salesforce.com, Inc.
    Inventors: Akari Asai, Kazuma Hashimoto, Richard Socher, Caiming Xiong
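The pipeline in the abstract above is essentially a routing decision: select the premium context, classify the question, and either return the yes/no decision or fall through to span extraction. The sketch below captures that control flow only; the three callables stand in for the trained networks and their signatures are assumptions.

```python
def answer(question, contexts, select_ctx, question_type, span_model):
    """Route a question through the pipelined QA system:
    1. the context selection stand-in picks the premium context,
    2. the question-type stand-in returns ("yesno", answer) or ("span", None),
    3. yes/no questions are answered directly, span questions go to
       the span extraction stand-in."""
    premium = select_ctx(question, contexts)
    kind, yes_no = question_type(question, premium)
    if kind == "yesno":
        return yes_no
    return span_model(question, premium)
```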
  • Patent number: 11749264
    Abstract: Embodiments described herein provide methods and systems for training task-oriented dialogue (TOD) language models. In some embodiments, a TOD language model may receive a TOD dataset including a plurality of dialogues and a model input sequence may be generated from the dialogues using a first token prefixed to each user utterance and a second token prefixed to each system response of the dialogues. In some embodiments, the first token or the second token may be randomly replaced with a mask token to generate a masked training sequence and a masked language modeling (MLM) loss may be computed using the masked training sequence. In some embodiments, the TOD language model may be updated based on the MLM loss.
    Type: Grant
    Filed: November 3, 2020
    Date of Patent: September 5, 2023
    Assignee: Salesforce, Inc.
    Inventors: Chien-Sheng Wu, Chu Hong Hoi, Richard Socher, Caiming Xiong
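The training-sequence construction in the abstract above is simple to sketch: prefix a first token to each user utterance and a second token to each system response, then randomly mask one speaker token and keep it as the MLM label. The token names `[USR]`, `[SYS]`, and `[MASK]` are illustrative stand-ins, not necessarily the patent's.

```python
import random

USR, SYS, MASK = "[USR]", "[SYS]", "[MASK]"

def build_sequence(dialogue):
    """Flatten a dialogue (a list of (speaker, utterance) pairs) into a
    token sequence, prefixing each turn with its speaker token."""
    tokens = []
    for speaker, utterance in dialogue:
        tokens.append(USR if speaker == "user" else SYS)
        tokens.extend(utterance.split())
    return tokens

def mask_speaker_token(tokens, rng):
    """Replace one randomly chosen speaker token with [MASK], returning
    the masked sequence, the masked position, and the label the MLM
    loss would be computed against."""
    masked = list(tokens)
    idx = rng.choice([i for i, t in enumerate(masked) if t in (USR, SYS)])
    label = masked[idx]
    masked[idx] = MASK
    return masked, idx, label
```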
  • Patent number: 11741372
    Abstract: Approaches to zero-shot learning include partitioning training data into first and second sets according to classes assigned to the training data, training a prediction module based on the first set to predict a cluster center based on a class label, training a correction module based on the second set and each of the class labels in the first set to generate a correction to a cluster center predicted by the prediction module, presenting a new class label for a new class to the prediction module to predict a new cluster center, presenting the new class label, the predicted new cluster center, and each of the class labels in the first set to the correction module to generate a correction for the predicted new cluster center, augmenting a classifier based on the corrected cluster center for the new class, and classifying input data into the new class using the classifier.
    Type: Grant
    Filed: August 9, 2021
    Date of Patent: August 29, 2023
    Assignee: salesforce.com, inc.
    Inventors: Lily Hu, Caiming Xiong, Richard Socher
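The prediction/correction split in the abstract above can be sketched with toy stand-ins: a prediction module maps a class-label embedding to a cluster center, a correction module adjusts that center using the seen classes' labels, and the augmented classifier assigns inputs to the nearest center. Both modules below are deliberately simplistic placeholders for the trained networks.

```python
def predict_center(label_vec, weights):
    """Toy prediction module: a linear map from a class-label embedding
    to a predicted cluster center (stand-in for a trained network)."""
    return [sum(w * x for w, x in zip(row, label_vec)) for row in weights]

def correct_center(center, seen_label_vecs):
    """Toy correction module: nudge the predicted center toward the mean
    of the seen classes' label embeddings (illustrative rule only)."""
    mean = [sum(vs) / len(seen_label_vecs) for vs in zip(*seen_label_vecs)]
    return [0.9 * c + 0.1 * m for c, m in zip(center, mean)]

def classify(x, centers):
    """Nearest-center classifier over the augmented set of centers."""
    def sq_dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centers, key=lambda name: sq_dist(x, centers[name]))
```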
  • Publication number: 20230244733
    Abstract: Embodiments described herein provide systems and methods for a customized search platform that provides users control and transparency in their searches. The system may use a ranker and parser to utilize input data and contextual information to identify search applications, sort the search applications, and present search results via user-engageable elements. The system may also use input from a user to personalize and update search results based on a user's interaction with user-engageable elements.
    Type: Application
    Filed: April 6, 2023
    Publication date: August 3, 2023
    Inventors: Bryan McCann, Swetha Mandava, Nathaniel Roth, Richard Socher
  • Patent number: 11687588
    Abstract: Systems and methods are provided for weakly supervised natural language localization (WSNLL), for example, as implemented in a neural network or model. The WSNLL network is trained with long, untrimmed videos, i.e., videos that have not been temporally segmented or annotated. The WSNLL network or model defines or generates a video-sentence pair, which corresponds to a pairing of an untrimmed video with an input text sentence. According to some embodiments, the WSNLL network or model is implemented with a two-branch architecture, where one branch performs segment sentence alignment and the other one conducts segment selection. These methods and systems are specifically used to predict how a video proposal matches a text query using respective visual and text features.
    Type: Grant
    Filed: August 5, 2019
    Date of Patent: June 27, 2023
    Assignee: Salesforce.com, Inc.
    Inventors: Mingfei Gao, Richard Socher, Caiming Xiong
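The two-branch architecture in the abstract above, one branch for segment-sentence alignment and one for segment selection, can be sketched as two scoring functions whose outputs are combined per segment. The product combination rule below is an assumption; the real model learns both branches jointly.

```python
def wsnll_score(segments, sentence, align_branch, select_branch):
    """Score each video segment with both branches (stand-ins for the
    alignment and selection networks), combine by product, and return
    the best-matching segment index alongside all scores."""
    scores = [align_branch(seg, sentence) * select_branch(seg)
              for seg in segments]
    best = max(range(len(segments)), key=lambda i: scores[i])
    return best, scores
```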
  • Patent number: 11676022
    Abstract: A method for training parameters of a first domain adaptation model. The method includes evaluating a cycle consistency objective using a first task specific model associated with a first domain and a second task specific model associated with a second domain, and evaluating one or more first discriminator models to generate a first discriminator objective using the second task specific model. The one or more first discriminator models include a plurality of discriminators corresponding to a plurality of bands that correspond to domain variable ranges of the first and second domains, respectively. The method further includes updating, based on the cycle consistency objective and the first discriminator objective, one or more parameters of the first domain adaptation model for adapting representations from the first domain to the second domain.
    Type: Grant
    Filed: August 30, 2021
    Date of Patent: June 13, 2023
    Assignee: salesforce.com, inc.
    Inventors: Ehsan Hosseini-Asl, Caiming Xiong, Yingbo Zhou, Richard Socher
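Two pieces of the abstract above are easy to sketch: routing an example to the discriminator whose band covers its domain-variable value, and the cycle consistency objective as a reconstruction error after mapping A to B and back. Both helpers are illustrative simplifications; the real objectives are computed on learned representations.

```python
def band_index(value, bands):
    """Pick the discriminator whose band (a half-open [lo, hi) range of
    the domain variable) contains the value."""
    for i, (lo, hi) in enumerate(bands):
        if lo <= value < hi:
            return i
    raise ValueError("value outside all bands")

def cycle_consistency_loss(x, map_ab, map_ba):
    """L1 reconstruction error after mapping A -> B -> A; zero when the
    round trip reproduces the input exactly."""
    recon = map_ba(map_ab(x))
    return sum(abs(u - v) for u, v in zip(x, recon))
```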
  • Patent number: 11669712
    Abstract: A method for evaluating robustness of one or more target neural network models using natural typos. The method includes receiving one or more natural typo generation rules associated with a first task associated with a first input document type, receiving a first target neural network model, and receiving a first document and corresponding its ground truth labels. The method further includes generating one or more natural typos for the first document based on the one or more natural typo generation rules, and providing, to the first target neural network model, a test document generated based on the first document and the one or more natural typos as an input document to generate a first output. A robustness evaluation result of the first target neural network model is generated based on a comparison between the output and the ground truth labels.
    Type: Grant
    Filed: September 3, 2019
    Date of Patent: June 6, 2023
    Assignee: salesforce.com, inc.
    Inventors: Lichao Sun, Kazuma Hashimoto, Jia Li, Richard Socher, Caiming Xiong
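The evaluation loop in the abstract above can be sketched directly: apply the typo generation rules to each document, run the target model on the perturbed text, and compare against the ground-truth labels. Representing the rules as a word-to-word map and summarizing robustness as simple accuracy are both assumptions for illustration.

```python
def apply_typo_rules(document, rules):
    """Apply natural-typo generation rules (here a word -> typo map,
    a stand-in for the patent's rules) to produce a test document."""
    return " ".join(rules.get(word, word) for word in document.split())

def robustness_score(model, documents, labels, rules):
    """Fraction of typo-perturbed documents the target model still
    labels correctly (one simple way to summarize the comparison
    between outputs and ground-truth labels)."""
    correct = sum(1 for doc, label in zip(documents, labels)
                  if model(apply_typo_rules(doc, rules)) == label)
    return correct / len(documents)
```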
  • Patent number: 11657233
    Abstract: Systems and methods for unifying question answering and text classification via span extraction include a preprocessor for preparing a source text and an auxiliary text based on a task type of a natural language processing task, an encoder for receiving the source text and the auxiliary text from the preprocessor and generating an encoded representation of a combination of the source text and the auxiliary text, and a span-extractive decoder for receiving the encoded representation and identifying a span of text within the source text that is a result of the NLP task. The task type is one of entailment, classification, or regression. In some embodiments, the source text includes one or more of text received as input when the task type is entailment, a list of classifications when the task type is entailment or classification, or a list of similarity options when the task type is regression.
    Type: Grant
    Filed: February 16, 2022
    Date of Patent: May 23, 2023
    Assignee: salesforce.com, inc.
    Inventors: Nitish Shirish Keskar, Bryan McCann, Richard Socher, Caiming Xiong
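The preprocessing step in the abstract above is the key trick: by appending the candidate answers to the source text, every task's answer becomes a span of the source. The sketch below shows that preparation only; the auxiliary-text wordings and option handling are illustrative assumptions.

```python
def prepare_inputs(task_type, text, options=None):
    """Sketch of the preprocessor: build (source, auxiliary) text so the
    span-extractive decoder can always find the answer inside the
    source. Entailment labels, class lists, and similarity options are
    appended to the source for that reason."""
    if task_type == "entailment":
        source = text + " entailment contradiction neutral"
        auxiliary = "Entailment, contradiction, or neutral?"
    elif task_type == "classification":
        source = text + " " + " ".join(options)
        auxiliary = "Which class?"
    elif task_type == "regression":
        source = text + " " + " ".join(options)  # similarity options
        auxiliary = "How similar?"
    else:
        raise ValueError("unknown task type: " + task_type)
    return source, auxiliary
```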
  • Publication number: 20230141023
    Abstract: Embodiments described herein provide systems and methods for a customized search platform that provides users control and transparency in their searches. The system may use a ranker and parser to utilize input data and contextual information to identify search applications, sort the search applications, and present search results via user-engageable elements. The system may also use input from a user to personalize and update search results based on a user's interaction with user-engageable elements.
    Type: Application
    Filed: November 4, 2022
    Publication date: May 11, 2023
    Inventors: Bryan McCann, Swetha Mandava, Nathaniel Roth, Richard Socher
  • Patent number: 11631009
    Abstract: Approaches for multi-hop knowledge graph reasoning with reward shaping include a system and method of training a system to search relational paths in a knowledge graph. The method includes identifying, using a reasoning module, a plurality of first outgoing links from a current node in a knowledge graph, masking, using the reasoning module, one or more links from the plurality of first outgoing links to form a plurality of second outgoing links, rewarding the reasoning module with a reward of one when a node corresponding to an observed answer is reached, and rewarding the reasoning module with a reward identified by a reward shaping network when a node not corresponding to an observed answer is reached. In some embodiments, the reward shaping network is pre-trained.
    Type: Grant
    Filed: July 31, 2018
    Date of Patent: April 18, 2023
    Assignee: Salesforce.com, Inc.
    Inventors: Xi Victoria Lin, Caiming Xiong, Richard Socher
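The reward rule and link masking in the abstract above reduce to a few lines: a hard reward of one for reaching an observed answer, a soft score from the reward shaping network otherwise, and a filter over outgoing links. The `reward_shaper` callable stands in for the pre-trained network; the masking criterion shown is an illustrative choice.

```python
def shaped_reward(reached_node, answer_nodes, reward_shaper):
    """Reward of 1 when the agent stops at an observed answer node;
    otherwise fall back to the reward shaping network's soft score
    (the callable stands in for the pre-trained network)."""
    if reached_node in answer_nodes:
        return 1.0
    return reward_shaper(reached_node)

def mask_links(outgoing_links, masked_links):
    """Form the second set of outgoing links by masking some of the
    first set, as the claim describes."""
    return [link for link in outgoing_links if link not in masked_links]
```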
  • Patent number: 11615249
    Abstract: Approaches for multitask learning as question answering include an input layer for encoding a context and a question, a self-attention based transformer including an encoder and a decoder, a first bi-directional long short-term memory (biLSTM) for further encoding an output of the encoder, a long short-term memory (LSTM) for generating a context-adjusted hidden state from the output of the decoder and a hidden state, an attention network for generating first attention weights based on an output of the first biLSTM and an output of the LSTM, a vocabulary layer for generating a distribution over a vocabulary, a context layer for generating a distribution over the context, and a switch for generating a weighting between the distributions over the vocabulary and the context, generating a composite distribution based on the weighting, and selecting a word of an answer using the composite distribution.
    Type: Grant
    Filed: August 18, 2020
    Date of Patent: March 28, 2023
    Assignee: salesforce.com, inc.
    Inventors: Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, Richard Socher
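The switch at the end of the abstract above is a pointer-generator style mix: a gate weights the vocabulary distribution against the copy-from-context distribution, and the answer word is chosen from the composite. The dict-based representation and greedy selection below are illustrative simplifications.

```python
def composite_distribution(vocab_dist, context_dist, gate):
    """Mix the vocabulary and context (copy) distributions with the
    switch's weighting gate in [0, 1]; both inputs map word -> prob."""
    words = set(vocab_dist) | set(context_dist)
    return {w: gate * vocab_dist.get(w, 0.0)
               + (1.0 - gate) * context_dist.get(w, 0.0)
            for w in words}

def select_word(dist):
    """Greedy choice of the next answer word from the composite
    distribution (the real decoder may sample or beam-search)."""
    return max(dist, key=dist.get)
```

Because both inputs are probability distributions and the gate is convex, the composite also sums to one.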