Patents by Inventor Kazuma Hashimoto

Kazuma Hashimoto is named as an inventor on the patent filings listed below. This listing includes patent applications that are still pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230098809
    Abstract: An information processing apparatus includes a control unit configured to execute a scene detection process, a parameter extraction process, and an output process. The scene detection process detects a scene from input content. The parameter extraction process extracts a realistic sensation parameter for wave control that corresponds to the scene detected by the scene detection process. The output process outputs a wave signal for the content, produced by processing the sound data of the input content with the realistic sensation parameter extracted by the parameter extraction process.
    Type: Application
    Filed: March 18, 2022
    Publication date: March 30, 2023
    Applicant: DENSO TEN Limited
    Inventors: Shinichi SHIOTSU, Yoshikuni MIKI, Rei HIROMI, Yohei KAKEE, Iku NAKAJO, Kazuma HASHIMOTO
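
A minimal sketch, assuming a toy loudness-based scene classifier and a hypothetical gain table, of the three-stage flow the abstract above describes (scene detection, parameter extraction, wave output). None of the function names or values come from the patent itself.

```python
# Illustrative only: scene labels, parameter table, and the gain-based
# processing are hypothetical stand-ins for the patented components.

def detect_scene(frame):
    # Toy scene detection: classify the frame by average loudness.
    loudness = sum(abs(s) for s in frame["audio"]) / len(frame["audio"])
    return "explosion" if loudness > 0.5 else "dialogue"

# Hypothetical mapping from scene to a "realistic sensation" parameter.
SENSATION_PARAMS = {"explosion": {"gain": 2.0}, "dialogue": {"gain": 1.0}}

def extract_parameter(scene):
    return SENSATION_PARAMS[scene]

def output_wave(frame, param):
    # Produce the wave signal by processing the sound data with the parameter.
    return [s * param["gain"] for s in frame["audio"]]

frame = {"audio": [0.1, -0.9, 0.8, -0.7]}
scene = detect_scene(frame)
print(scene, output_wave(frame, extract_parameter(scene)))
```
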
  • Publication number: 20230054068
    Abstract: Embodiments described herein provide document summarization systems and methods that utilize fine-tuning of pre-trained abstractive summarization models to produce summaries that more faithfully track the content of the documents. Such abstractive summarization models may be pre-trained using a corpus consisting of pairs of articles and associated summaries. For each article-summary pair, a pseudo label or control code is generated and represents a faithfulness of the summary with respect to the article. The pre-trained model is then fine-tuned based on the article-summary pairs and the corresponding control codes. The resulting fine-tuned models then provide improved faithfulness in document summarization tasks.
    Type: Application
    Filed: January 31, 2022
    Publication date: February 23, 2023
    Inventors: Haopeng Zheng, Semih Yavuz, Wojciech Kryscinski, Kazuma Hashimoto, Yingbo Zhou
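
A minimal sketch of the control-code idea described in the abstract above: estimate the faithfulness of each summary against its article, map it to a pseudo label, and prepend that label to the fine-tuning input. The overlap heuristic, threshold, and code names are assumptions for illustration, not the method claimed in the application.

```python
# Hypothetical sketch: the scoring heuristic and control-code names below are
# illustrative placeholders for the faithfulness signal used during fine-tuning.

def faithfulness_score(article, summary):
    # Stand-in for a real faithfulness metric (e.g. an entailment model):
    # fraction of summary tokens that also appear in the article.
    article_tokens = set(article.lower().split())
    summary_tokens = summary.lower().split()
    return sum(t in article_tokens for t in summary_tokens) / max(len(summary_tokens), 1)

def control_code(article, summary, threshold=0.8):
    return "<faithful>" if faithfulness_score(article, summary) >= threshold else "<hallucinated>"

def build_finetuning_example(article, summary):
    # The pseudo label becomes part of the conditioning input for fine-tuning.
    return {"input": f"{control_code(article, summary)} {article}", "target": summary}

print(build_finetuning_example("The cat sat on the mat.", "A cat sat on a mat."))
```
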
  • Publication number: 20230055188
    Abstract: Embodiments described herein provide a question answering approach that answers a question by generating an executable logical form. First, a ranking model is used to select a set of good logical forms from a pool of logical forms obtained by searching over a knowledge graph. The selected logical forms are good in the sense that they are close to (or, in some cases, exactly match) the intents in the question and the final desired logical form. Next, a generation model conditioned on the question as well as the selected logical forms is adopted to generate the target logical form and execute it to obtain the final answer. For example, at the inference stage, when a question is received, a matching logical form is identified from the question, and the final answer can then be generated from the node associated with that logical form in the knowledge base.
    Type: Application
    Filed: December 29, 2021
    Publication date: February 23, 2023
    Inventors: Xi Ye, Semih Yavuz, Kazuma Hashimoto, Yingbo Zhou
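
A minimal sketch of the rank-then-generate flow in the abstract above: score candidate logical forms from a knowledge-graph search, keep the top few, and condition a generator on the question plus those candidates. The overlap-based ranker, the trivial "generator", and the logical-form syntax are stand-ins invented for illustration, not the trained models.

```python
# Hypothetical sketch: the ranker and generator below are toy stand-ins for
# the trained ranking and generation models described in the abstract.

def rank_logical_forms(question, candidates, top_k=2):
    # Stand-in ranking model: score candidates by word overlap with the question.
    q_tokens = set(question.lower().split())
    def overlap(lf):
        return len(q_tokens & set(lf.lower().replace("(", " ").replace(")", " ").split()))
    return sorted(candidates, key=overlap, reverse=True)[:top_k]

def generate_target_logical_form(question, top_forms):
    # Stand-in generation model: simply return the best-ranked candidate.
    return top_forms[0]

question = "who directed the movie Inception"
pool = ["(directed_by Inception ?x)", "(release_year Inception ?x)", "(cast_member Inception ?x)"]
selected = rank_logical_forms(question, pool)
print(generate_target_logical_form(question, selected))
```
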
  • Publication number: 20230059870
    Abstract: Embodiments described herein provide a question answering approach that answers a question by generating an executable logical form. First, a ranking model is used to select a set of good logical forms from a pool of logical forms obtained by searching over a knowledge graph. The selected logical forms are good in the sense that they are close to (or, in some cases, exactly match) the intents in the question and the final desired logical form. Next, a generation model conditioned on the question as well as the selected logical forms is adopted to generate the target logical form and execute it to obtain the final answer. For example, at the inference stage, when a question is received, a matching logical form is identified from the question, and the final answer can then be generated from the node associated with that logical form in the knowledge base.
    Type: Application
    Filed: December 29, 2021
    Publication date: February 23, 2023
    Inventors: Xi Ye, Semih Yavuz, Kazuma Hashimoto, Yingbo Zhou
  • Patent number: 11580977
    Abstract: A conversation engine performs conversations with users using chatbots customized for performing a set of tasks that can be performed using an online system. The conversation engine loads a chatbot configuration that specifies the behavior of a chatbot including the tasks that can be performed by the chatbot, the types of entities relevant to each task, and so on. The conversation may be voice based and use natural language. The conversation engine may load different chatbot configurations to implement different chatbots. The conversation engine receives a conversation engine configuration that specifies the behavior of the conversation engine across chatbots. The system may be a multi-tenant system that allows customization of the chatbots for each tenant.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: February 14, 2023
    Assignee: Salesforce, Inc.
    Inventors: Xinyi Yang, Tian Xie, Caiming Xiong, Wenhao Liu, Huan Wang, Kazuma Hashimoto, Yingbo Zhou, Xugang Ye, Jin Qu, Feihong Wu
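
A minimal sketch of the configuration-driven design the abstract above describes: a chatbot configuration lists the tasks and per-task entities, and the engine can load different configurations to behave as different chatbots. The field names, task names, and engine settings are hypothetical, not the schema from the patent.

```python
# Hypothetical sketch: configuration fields and task names are illustrative only.

CHATBOT_CONFIGS = {
    "order_bot": {
        "tasks": ["check_order_status", "cancel_order"],
        "entities": {"check_order_status": ["order_id"], "cancel_order": ["order_id", "reason"]},
    },
    "support_bot": {
        "tasks": ["reset_password"],
        "entities": {"reset_password": ["username"]},
    },
}

class ConversationEngine:
    def __init__(self, engine_config):
        # Engine-level settings apply across all chatbots (e.g. per-tenant limits).
        self.engine_config = engine_config
        self.chatbot = None

    def load_chatbot(self, name):
        # Loading a different configuration turns the engine into a different chatbot.
        self.chatbot = CHATBOT_CONFIGS[name]

    def supported_tasks(self):
        return self.chatbot["tasks"]

engine = ConversationEngine({"max_turns": 20})
engine.load_chatbot("order_bot")
print(engine.supported_tasks())
```
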
  • Patent number: 11544470
    Abstract: An online system allows user interactions using natural language expressions. The online system uses a machine learning based model to infer an intent represented by a user expression. The machine learning based model takes as input a user expression and an example expression to compute a score indicating whether the user expression matches the example expression. Based on the scores, the intent inference module determines a most applicable intent for the expression. The online system determines a confidence threshold such that user expressions indicating a high confidence are assigned the most applicable intent and user expressions indicating a low confidence are assigned an out-of-scope intent. The online system encodes the example expressions using the machine learning based model. The online system may compare an encoded user expression with encoded example expressions to identify a subset of example expressions used to determine the most applicable intent.
    Type: Grant
    Filed: August 28, 2020
    Date of Patent: January 3, 2023
    Assignee: Salesforce, Inc.
    Inventors: Jianguo Zhang, Kazuma Hashimoto, Chien-Sheng Wu, Wenhao Liu, Richard Socher, Caiming Xiong
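
A minimal sketch of the matching-and-threshold logic in the abstract above: encode the user expression and the example expressions, take the best match per intent, and fall back to an out-of-scope label when the best score is below a confidence threshold. The bag-of-words encoder, Jaccard score, and threshold value are placeholders for the learned model.

```python
# Hypothetical sketch: the encoder, matching score, and threshold are toy
# stand-ins for the machine learning based model described in the patent.

def encode(text):
    # Stand-in encoder: a bag-of-words set instead of a learned embedding.
    return set(text.lower().split())

def match_score(encoded_user, encoded_example):
    # Jaccard overlap as a stand-in for the learned matching score.
    union = encoded_user | encoded_example
    return len(encoded_user & encoded_example) / len(union) if union else 0.0

EXAMPLE_EXPRESSIONS = {
    "track_order": ["where is my order", "track my package"],
    "refund": ["i want my money back", "refund my purchase"],
}
ENCODED_EXAMPLES = {intent: [encode(e) for e in exs] for intent, exs in EXAMPLE_EXPRESSIONS.items()}

def infer_intent(user_expression, threshold=0.3):
    encoded_user = encode(user_expression)
    best_intent, best_score = "out_of_scope", 0.0
    for intent, encoded_exs in ENCODED_EXAMPLES.items():
        score = max(match_score(encoded_user, e) for e in encoded_exs)
        if score > best_score:
            best_intent, best_score = intent, score
    # Low-confidence expressions are assigned the out-of-scope intent.
    return best_intent if best_score >= threshold else "out_of_scope"

print(infer_intent("can you track my order"))
print(infer_intent("tell me a joke"))
```
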
  • Patent number: 11537801
    Abstract: Approaches for the translation of structured text include an embedding module for encoding and embedding source text in a first language, an encoder for encoding output of the embedding module, a decoder for iteratively decoding output of the encoder based on generated tokens in translated text from previous iterations, a beam module for constraining output of the decoder with respect to possible embedded tags to include in the translated text for a current iteration using a beam search, and a layer for selecting a token to be included in the translated text for the current iteration. The translated text is in a second language different from the first language. In some embodiments, the approach further includes scoring and pointer modules for selecting the token based on the output of the beam module or copied from the source text or reference text from a training pair best matching the source text.
    Type: Grant
    Filed: March 26, 2021
    Date of Patent: December 27, 2022
    Assignee: Salesforce.com, Inc.
    Inventors: Kazuma Hashimoto, Raffaella Buschiazzo, James Bradbury, Teresa Marshall, Caiming Xiong, Richard Socher
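
A minimal sketch of the constraint idea in the abstract above: collect the embedded tags from the source text and restrict, at each decoding step, which tag may be emitted next, so the translation reproduces the markup in a valid order. The token "vocabulary", the greedy pick standing in for beam search, and the French words are all assumptions for illustration.

```python
# Hypothetical sketch: a toy decoder whose tag emissions are constrained by the
# tags found in the source. The scoring is a greedy placeholder, not a real beam search.
import re

def source_tags(source):
    # Collect XML-like tags in the order they appear in the source text.
    return re.findall(r"</?\w+>", source)

def allowed_tokens(vocab, pending_tags):
    # Only the next pending tag may be emitted; ordinary tokens are always allowed.
    next_tag = pending_tags[0] if pending_tags else None
    return [t for t in vocab if not t.startswith("<") or t == next_tag]

source = "Click <b>Save</b> now"
pending = source_tags(source)                       # ['<b>', '</b>']
vocab = ["Cliquez", "<b>", "Enregistrer", "</b>", "maintenant"]

translated = []
for _ in range(len(vocab)):
    token = allowed_tokens(vocab, pending)[0]       # greedy stand-in for beam scoring
    translated.append(token)
    vocab.remove(token)
    if pending and token == pending[0]:
        pending.pop(0)
print(" ".join(translated))                         # Cliquez <b> Enregistrer </b> maintenant
```
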
  • Publication number: 20220383159
    Abstract: Embodiments described herein provide a fusion-in-decoder (FID) based model (referred to as “PATHID”) for open-domain multi-hop question answering. Specifically, PATHID addresses the gap between the general behavior of the FID model on single-hop and multi-hop question answering, and provides more transparency into the reasoning path. In addition to answer generation, PATHID explicitly models the full reasoning path to resolve the answer with a generative sequence-to-sequence model.
    Type: Application
    Filed: November 23, 2021
    Publication date: December 1, 2022
    Inventors: Semih Yavuz, Kazuma Hashimoto, Yingbo Zhou
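
A minimal sketch of the output format implied by the abstract above: the generated sequence encodes the full reasoning path (the ordered evidence) together with the final answer, rather than the answer alone. The marker tokens and the example titles are invented for illustration; they are not the actual PATHID vocabulary.

```python
# Hypothetical sketch: marker tokens and the linearization are illustrative only.

def linearize_target(reasoning_path, answer):
    # Join the evidence titles in reasoning order, then append the answer.
    return "<path> " + " <hop> ".join(reasoning_path) + " <answer> " + answer

def parse_prediction(sequence):
    # Recover both the reasoning path and the answer from a generated sequence.
    path_part, answer = sequence.split("<answer>")
    hops = [h.strip() for h in path_part.replace("<path>", "").split("<hop>")]
    return hops, answer.strip()

target = linearize_target(["Inception (film)", "Christopher Nolan"], "Christopher Nolan")
print(target)
print(parse_prediction(target))
```
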
  • Publication number: 20220374459
    Abstract: Embodiments described herein provide dense hierarchical retrieval for open-domain question answering over a corpus of documents, using a document-level and a passage-level dense retrieval model. Specifically, each document is viewed as a structural collection that has sections, subsections, and paragraphs. Each document may be split into short passages, and a document-level retrieval model and a passage-level retrieval model may be applied to return a smaller set of filtered texts. Top documents may be identified after encoding the question and the documents and determining document relevance scores with respect to the encoded question. Thereafter, a set of top passages is further identified based on encodings of the passages and their passage relevance scores with respect to the encoded question. The document and passage relevance scores may be used in combination to determine a final retrieval ranking for the documents containing the set of top passages.
    Type: Application
    Filed: November 23, 2021
    Publication date: November 24, 2022
    Inventors: Ye Liu, Kazuma Hashimoto, Yingbo Zhou, Semih Yavuz, Caiming Xiong
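
A minimal sketch of the two-stage scoring in the abstract above: rank documents first, then score passages within the kept documents, and combine the two relevance scores for the final ranking. The word-overlap "relevance" function and the combination weight alpha are placeholders for the dense retrieval models.

```python
# Hypothetical sketch: word overlap stands in for dense (embedding dot-product)
# relevance, and the weight alpha is an illustrative choice.

def relevance(query, text):
    return len(set(query.lower().split()) & set(text.lower().split()))

def hierarchical_retrieve(query, corpus, top_docs=2, top_passages=2, alpha=0.5):
    # Stage 1: document-level retrieval keeps only the most relevant documents.
    doc_scores = {doc: relevance(query, doc + " " + " ".join(passages))
                  for doc, passages in corpus.items()}
    kept = sorted(doc_scores, key=doc_scores.get, reverse=True)[:top_docs]
    # Stage 2: passage-level scores inside kept documents, combined with document scores.
    ranked = [(alpha * doc_scores[doc] + (1 - alpha) * relevance(query, passage), doc, passage)
              for doc in kept for passage in corpus[doc]]
    return sorted(ranked, reverse=True)[:top_passages]

corpus = {
    "Solar System": ["The Sun is a star.", "Mars is the fourth planet."],
    "Cooking": ["Boil pasta in salted water."],
}
print(hierarchical_retrieve("which planet is fourth from the sun", corpus))
```
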
  • Publication number: 20220366893
    Abstract: Some embodiments of the current disclosure disclose methods and systems for training a natural language processing intent classification model to perform few-shot classification tasks. In some embodiments, a pair consisting of an utterance and a first semantic label labeling the utterance may be generated, and a neural network configured to perform natural language inference tasks may be utilized to determine the existence of an entailment relationship between the utterance and the semantic label. The semantic label may be predicted as the intent class of the utterance based on the entailment relationship, and the pair may be used to train the natural language processing intent classification model to perform few-shot classification tasks.
    Type: Application
    Filed: November 23, 2021
    Publication date: November 17, 2022
    Inventors: Jin Qu, Wenhao Liu, Kazuma Hashimoto, Caiming Xiong
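
A minimal sketch of the entailment framing in the abstract above: pair the utterance with a natural-language version of each candidate label and predict the label whose "entailment" score is highest. The verbalized labels and the overlap-based scorer are assumptions standing in for a trained NLI model.

```python
# Hypothetical sketch: the label sentences and the scorer are illustrative
# stand-ins for a neural network trained on natural language inference.

LABEL_SENTENCES = {
    "book_flight": "The user wants to book a flight.",
    "cancel_flight": "The user wants to cancel a flight.",
}

def entailment_score(premise, hypothesis):
    # Stand-in for an NLI model: fraction of hypothesis words found in the premise.
    premise_words = set(premise.lower().split())
    hypothesis_words = hypothesis.lower().rstrip(".").split()
    return sum(w in premise_words for w in hypothesis_words) / len(hypothesis_words)

def predict_intent(utterance):
    # The label whose verbalization is most "entailed" by the utterance wins.
    return max(LABEL_SENTENCES, key=lambda label: entailment_score(utterance, LABEL_SENTENCES[label]))

print(predict_intent("i need to cancel my flight to boston"))
```
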
  • Publication number: 20220227375
    Abstract: An information processing device that is mountable on a vehicle includes: an acquiring unit configured to acquire at least one of an internal condition or an external condition of a user of digital content that includes a virtual space experience; an estimating unit configured to estimate a sickness status of the user based on the at least one of the internal condition or the external condition acquired by the acquiring unit; and a guidance processing unit configured to perform guidance processing such that a driving state of the vehicle is guided to suppress sickness, depending on the sickness status of the user estimated by the estimating unit.
    Type: Application
    Filed: March 24, 2021
    Publication date: July 21, 2022
    Applicant: DENSO TEN Limited
    Inventors: Motoki KOJIMA, Hiroyuki WATABE, Shinichi SHIOTSU, Tomoe OHTSUKI, Kazuma HASHIMOTO, Haruo HARADA
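
A minimal sketch of the acquire-estimate-guide flow in the abstract above. The particular condition signals (heart rate, lateral acceleration), the sickness score, and the guidance rule are hypothetical placeholders, not values or methods from the application.

```python
# Hypothetical sketch: signals, score weights, and guidance rules are illustrative only.

def acquire_conditions(user):
    # An internal condition (e.g. heart rate) and an external condition (e.g. vehicle motion).
    return {"heart_rate": user["heart_rate"], "lateral_accel": user["lateral_accel"]}

def estimate_sickness(conditions):
    # Toy sickness score combining the acquired conditions.
    return 0.01 * conditions["heart_rate"] + 0.5 * conditions["lateral_accel"]

def guidance(sickness_score, threshold=1.5):
    # Guide the driving state toward suppressing sickness when the score is high.
    return "limit acceleration and smooth steering" if sickness_score > threshold else "no change"

score = estimate_sickness(acquire_conditions({"heart_rate": 95, "lateral_accel": 2.0}))
print(round(score, 2), guidance(score))
```
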
  • Publication number: 20220101844
    Abstract: A conversation engine performs conversations with users using chatbots customized for performing a set of tasks that can be performed using an online system. The conversation engine loads a chatbot configuration that specifies the behavior of a chatbot including the tasks that can be performed by the chatbot, the types of entities relevant to each task, and so on. The conversation may be voice based and use natural language. The conversation engine may load different chatbot configurations to implement different chatbots. The conversation engine receives a conversation engine configuration that specifies the behavior of the conversation engine across chatbots. The system may be a multi-tenant system that allows customization of the chatbots for each tenant.
    Type: Application
    Filed: September 29, 2020
    Publication date: March 31, 2022
    Inventors: Xinyi Yang, Tian Xie, Caiming Xiong, Wenhao Liu, Huan Wang, Kazuma Hashimoto, Yingbo Zhou, Xugang Ye, Jin Qu, Feihong Wu
  • Publication number: 20220103491
    Abstract: A conversation engine performs conversations with users using chatbots customized for performing a set of tasks that can be performed using an online system. The conversation engine loads a chatbot configuration that specifies the behavior of a chatbot including the tasks that can be performed by the chatbot, the types of entities relevant to each task, and so on. The conversation may be voice based and use natural language. The conversation engine may load different chatbot configurations to implement different chatbots. The conversation engine receives a conversation engine configuration that specifies the behavior of the conversation engine across chatbots. The system may be a multi-tenant system that allows customization of the chatbots for each tenant.
    Type: Application
    Filed: September 29, 2020
    Publication date: March 31, 2022
    Inventors: Xinyi Yang, Tian Xie, Caiming Xiong, Wenhao Liu, Huan Wang, Kazuma Hashimoto, Jin Qu, Feihong Wu, Yingbo Zhou
  • Publication number: 20220083837
    Abstract: The technology disclosed provides a so-called "joint many-task neural network model" to solve a variety of increasingly complex natural language processing (NLP) tasks using a growing depth of layers in a single end-to-end model. The model is successively trained by considering linguistic hierarchies, directly connecting word representations to all model layers, explicitly using predictions in lower tasks, and applying a so-called "successive regularization" technique to prevent catastrophic forgetting. Three examples of lower-level model layers are a part-of-speech (POS) tagging layer, a chunking layer, and a dependency parsing layer. Two examples of higher-level model layers are a semantic relatedness layer and a textual entailment layer. The model achieves state-of-the-art results on chunking, dependency parsing, semantic relatedness, and textual entailment.
    Type: Application
    Filed: November 23, 2021
    Publication date: March 17, 2022
    Inventors: Kazuma Hashimoto, Caiming Xiong, Richard Socher
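
A minimal sketch of the layering idea in the abstract above: each task layer receives the word-level input together with the predictions of the layers below it. The dictionary-lookup "layers" are trivial stand-ins for the trained POS tagging, chunking, and dependency parsing layers.

```python
# Hypothetical sketch: trivial lookup functions stand in for trained neural layers.

def pos_layer(words):
    pos = {"dogs": "NOUN", "bark": "VERB", "loudly": "ADV"}
    return [pos.get(w, "X") for w in words]

def chunking_layer(words, pos_tags):
    # The lower layer's POS predictions are part of this layer's input.
    return ["B-NP" if t == "NOUN" else "B-VP" if t == "VERB" else "O" for t in pos_tags]

def dependency_layer(words, pos_tags, chunks):
    # Conditioned on everything below; toy rule attaches non-verbs to the verb.
    head = next((w for w, t in zip(words, pos_tags) if t == "VERB"), words[0])
    return [(w, "ROOT" if t == "VERB" else head) for w, t in zip(words, pos_tags)]

words = ["dogs", "bark", "loudly"]
pos_tags = pos_layer(words)
chunks = chunking_layer(words, pos_tags)
print(pos_tags, chunks, dependency_layer(words, pos_tags, chunks))
```
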
  • Publication number: 20220036884
    Abstract: Embodiments described herein provide safe policy improvement (SPI) in a batch reinforcement learning framework for task-oriented dialogue. Specifically, a batch reinforcement learning framework for dialogue policy learning is provided, which improves the performance of the dialogue and learns to shape a reward that reasons about the intention behind human responses rather than just imitating the human demonstration.
    Type: Application
    Filed: October 13, 2021
    Publication date: February 3, 2022
    Inventors: Govardana Sachithanandam Ramachandran, Kazuma Hashimoto, Caiming Xiong, Richard Socher
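
A minimal sketch of the "safe" improvement idea suggested by the abstract above: keep the logged (behavior) policy's action unless the learned value estimates show a large enough gain to justify deviating. The dialogue states, Q-value numbers, and margin are invented for illustration and are not the actual SPI criterion.

```python
# Hypothetical sketch: states, Q-values, and the margin are toy placeholders.

BEHAVIOR_POLICY = {"greet": "ask_name", "ask_name": "request_order"}

# Toy action-value estimates learned offline from logged dialogues.
Q_ESTIMATES = {
    ("greet", "ask_name"): 0.60, ("greet", "offer_help"): 0.62,
    ("ask_name", "request_order"): 0.50, ("ask_name", "end_call"): 0.90,
}

def improved_policy(state, margin=0.1):
    baseline_action = BEHAVIOR_POLICY[state]
    baseline_value = Q_ESTIMATES[(state, baseline_action)]
    best_action = max((a for s, a in Q_ESTIMATES if s == state),
                      key=lambda a: Q_ESTIMATES[(state, a)])
    # Deviate from the behavior policy only when the estimated gain is large enough.
    if Q_ESTIMATES[(state, best_action)] - baseline_value > margin:
        return best_action
    return baseline_action

print(improved_policy("greet"))     # small estimated gain -> keep the behavior action
print(improved_policy("ask_name"))  # large estimated gain -> switch action
```
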
  • Patent number: 11222253
    Abstract: The technology disclosed provides a so-called "joint many-task neural network model" to solve a variety of increasingly complex natural language processing (NLP) tasks using a growing depth of layers in a single end-to-end model. The model is successively trained by considering linguistic hierarchies, directly connecting word representations to all model layers, explicitly using predictions in lower tasks, and applying a so-called "successive regularization" technique to prevent catastrophic forgetting. Three examples of lower-level model layers are a part-of-speech (POS) tagging layer, a chunking layer, and a dependency parsing layer. Two examples of higher-level model layers are a semantic relatedness layer and a textual entailment layer. The model achieves state-of-the-art results on chunking, dependency parsing, semantic relatedness, and textual entailment.
    Type: Grant
    Filed: January 31, 2017
    Date of Patent: January 11, 2022
    Assignee: salesforce.com, inc.
    Inventors: Kazuma Hashimoto, Caiming Xiong, Richard Socher
  • Publication number: 20210397799
    Abstract: Approaches for the translation of structured text include an embedding module for encoding and embedding source text in a first language, an encoder for encoding output of the embedding module, a decoder for iteratively decoding output of the encoder based on generated tokens in translated text from previous iterations, a beam module for constraining output of the decoder with respect to possible embedded tags to include in the translated text for a current iteration using a beam search, and a layer for selecting a token to be included in the translated text for the current iteration. The translated text is in a second language different from the first language. In some embodiments, the approach further includes scoring and pointer modules for selecting the token based on the output of the beam module or copied from the source text or reference text from a training pair best matching the source text.
    Type: Application
    Filed: August 31, 2021
    Publication date: December 23, 2021
    Inventors: Kazuma Hashimoto, Raffaella Buschiazzo, James Bradbury, Teresa Anna Marshall, Caiming Xiong, Richard Socher
  • Publication number: 20210383212
    Abstract: Embodiments described herein provide safe policy improvement (SPI) in a batch reinforcement learning framework for task-oriented dialogue. Specifically, a batch reinforcement learning framework for dialogue policy learning is provided, which improves the performance of the dialogue and learns to shape a reward that reasons about the intention behind human responses rather than just imitating the human demonstration.
    Type: Application
    Filed: November 25, 2020
    Publication date: December 9, 2021
    Inventors: Govardana Sachithanandam Ramachandran, Kazuma Hashimoto, Caiming Xiong, Richard Socher
  • Publication number: 20210374353
    Abstract: An online system allows user interactions using natural language expressions. The online system uses a machine learning based model to infer an intent represented by a user expression. The machine learning based model takes as input a user expression and an example expression to compute a score indicating whether the user expression matches the example expression. Based on the scores, the intent inference module determines a most applicable intent for the expression. The online system determines a confidence threshold such that user expressions indicating a high confidence are assigned the most applicable intent and user expressions indicating a low confidence are assigned an out-of-scope intent. The online system encodes the example expressions using the machine learning based model. The online system may compare an encoded user expression with encoded example expressions to identify a subset of example expressions used to determine the most applicable intent.
    Type: Application
    Filed: August 28, 2020
    Publication date: December 2, 2021
    Inventors: Jianguo Zhang, Kazuma Hashimoto, Chien-Sheng Wu, Wenhao Liu, Richard Socher, Caiming Xiong
  • Publication number: 20210375269
    Abstract: Embodiments described herein utilize pre-trained masked language models as the backbone for dialogue act tagging and provide cross-domain generalization of the resulting dialogue act taggers. For example, the pre-trained MASK token of a BERT model may be used as a controllable mechanism for augmenting text input, e.g., generating tags for an input of unlabeled dialogue history. The model can be trained with semi-supervised learning, e.g., using multiple objectives combining a supervised tagging loss, a masked tagging loss, a masked language model loss, and/or a disagreement loss.
    Type: Application
    Filed: August 21, 2020
    Publication date: December 2, 2021
    Inventors: Semih Yavuz, Kazuma Hashimoto, Wenhao Liu, Nitish Shirish Keskar, Richard Socher, Caiming Xiong
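
A minimal sketch of how the objectives named in the abstract above could be combined into one semi-supervised training loss: labeled turns contribute the supervised tagging term, while unlabeled turns contribute only the masked and disagreement terms. The individual loss values and weights are placeholders, not outputs of an actual BERT-based tagger.

```python
# Hypothetical sketch: loss values and weights are illustrative placeholders.

def combined_loss(supervised_tagging, masked_tagging, masked_lm, disagreement,
                  weights=(1.0, 0.5, 0.5, 0.1)):
    terms = (supervised_tagging, masked_tagging, masked_lm, disagreement)
    return sum(w * t for w, t in zip(weights, terms))

# Labeled dialogue turns use all four terms; unlabeled turns drop the supervised one.
labeled_batch_loss = combined_loss(0.42, 0.31, 0.27, 0.05)
unlabeled_batch_loss = combined_loss(0.0, 0.35, 0.29, 0.08)
print(round(labeled_batch_loss, 3), round(unlabeled_batch_loss, 3))
```
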