Patents by Inventor Kazuma Hashimoto
Kazuma Hashimoto has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12169698
Abstract: Embodiments described herein provide a pipelined natural language question answering system that improves a BERT-based system. Specifically, the natural language question answering system uses a pipeline of neural networks each trained to perform a particular task. The context selection network identifies premium context from context for the question. The question type network identifies the natural language question as a yes, no, or span question and a yes or no answer to the natural language question when the question is a yes or no question. The span extraction model determines an answer span to the natural language question when the question is a span question.
Type: Grant
Filed: September 7, 2023
Date of Patent: December 17, 2024
Assignee: Salesforce, Inc.
Inventors: Akari Asai, Kazuma Hashimoto, Richard Socher, Caiming Xiong
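The abstract describes a three-stage pipeline (context selection, question-type classification, span extraction). The following is a minimal, hypothetical sketch of how such a pipeline could be wired together; the class names and scoring heuristics are illustrative assumptions, not the claimed implementation.

```python
# Hypothetical sketch of the three-stage QA pipeline described in the abstract.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Answer:
    kind: str                  # "yes", "no", or "span"
    span: Optional[str] = None


class ContextSelector:
    """Stage 1: keep only the paragraphs most relevant to the question."""
    def select(self, question: str, paragraphs: List[str], top_k: int = 2) -> List[str]:
        # Toy relevance score: word overlap with the question.
        q_words = set(question.lower().split())
        ranked = sorted(paragraphs,
                        key=lambda p: len(q_words & set(p.lower().split())),
                        reverse=True)
        return ranked[:top_k]


class QuestionTypeClassifier:
    """Stage 2: decide whether the question is yes/no or span-based."""
    def classify(self, question: str) -> str:
        return "yesno" if question.lower().startswith(("is", "are", "does", "can")) else "span"


class SpanExtractor:
    """Stage 3: extract an answer span from the selected context."""
    def extract(self, question: str, context: str) -> str:
        # Placeholder: return the first sentence of the selected context.
        return context.split(".")[0].strip()


def answer_question(question: str, paragraphs: List[str]) -> Answer:
    selected = ContextSelector().select(question, paragraphs)
    q_type = QuestionTypeClassifier().classify(question)
    if q_type == "yesno":
        # A real system would run a yes/no classifier over the selected context.
        return Answer(kind="yes")
    return Answer(kind="span", span=SpanExtractor().extract(question, " ".join(selected)))


if __name__ == "__main__":
    docs = ["BERT is a transformer encoder. It was released in 2018.",
            "Pipelines chain several specialized models."]
    print(answer_question("What is BERT?", docs))
```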
-
Patent number: 12169596
Abstract: An information processing apparatus includes a control unit configured to execute a scene detection process, a parameter extraction process, and an output process. The scene detection process detects a scene from an input content. The parameter extraction process extracts a realistic sensation parameter for wave control that corresponds to a scene that is detected by the scene detection process. The output process outputs a wave signal for the content that is produced by processing sound data of the input content by a realistic sensation parameter that is extracted by the parameter extraction process.
Type: Grant
Filed: March 18, 2022
Date of Patent: December 17, 2024
Assignee: DENSO TEN Limited
Inventors: Shinichi Shiotsu, Yoshikuni Miki, Rei Hiromi, Yohei Kakee, Iku Nakajo, Kazuma Hashimoto
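As an illustration of the flow described above (detect a scene, extract a matching realistic sensation parameter, process the sound data into a wave signal), here is a minimal sketch under assumed scene labels and parameter values; it is not the patented apparatus.

```python
# Minimal, hypothetical sketch of the scene-to-wave-signal flow.
from dataclasses import dataclass
from typing import List


@dataclass
class SensationParameter:
    gain: float            # amplitude scaling applied to the sound data
    low_freq_boost: float  # illustrative extra field; unused in this toy


# Parameter table keyed by detected scene (illustrative values only).
PARAMETERS = {
    "explosion": SensationParameter(gain=1.8, low_freq_boost=2.0),
    "dialogue":  SensationParameter(gain=1.0, low_freq_boost=0.5),
}


def detect_scene(content_tags: List[str]) -> str:
    """Stand-in for the scene detection process (e.g. a classifier over video/audio)."""
    return "explosion" if "action" in content_tags else "dialogue"


def produce_wave_signal(sound_data: List[float], content_tags: List[str]) -> List[float]:
    scene = detect_scene(content_tags)
    param = PARAMETERS[scene]
    # Very rough stand-in for wave processing: scale samples by the scene's gain.
    return [s * param.gain for s in sound_data]


if __name__ == "__main__":
    samples = [0.1, -0.2, 0.4, -0.1]
    print(produce_wave_signal(samples, ["action", "car chase"]))
```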
-
Patent number: 12164878
Abstract: Embodiments described herein provide a cross-lingual sentence alignment framework that is trained only on rich-resource language pairs. To obtain an accurate aligner, a pretrained multi-lingual language model is used, and a classifier is trained on parallel data from rich-resource language pairs. This trained classifier may then be used for cross-lingual transfer with low-resource languages.
Type: Grant
Filed: January 21, 2022
Date of Patent: December 10, 2024
Assignee: Salesforce, Inc.
Inventors: Tong Niu, Kazuma Hashimoto, Yingbo Zhou, Caiming Xiong
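A hedged sketch of the alignment idea, assuming a stand-in embed() function in place of the pretrained multilingual language model: a binary classifier is trained on rich-resource parallel pairs and then reused unchanged on a low-resource pair.

```python
# Sketch: sentence-pair classifier trained on rich-resource pairs, reused zero-shot.
import numpy as np
from sklearn.linear_model import LogisticRegression


def embed(sentence: str, dim: int = 64) -> np.ndarray:
    """Placeholder embedding: a real aligner would call a multilingual encoder."""
    rng = np.random.default_rng(abs(hash(sentence)) % (2**32))
    return rng.normal(size=dim)


def pair_features(src: str, tgt: str) -> np.ndarray:
    u, v = embed(src), embed(tgt)
    # Standard sentence-pair features: element-wise difference and product.
    return np.concatenate([np.abs(u - v), u * v])


# Rich-resource training data: (source, target, is_aligned) triples (toy examples).
train = [
    ("the cat sleeps", "le chat dort", 1),
    ("the cat sleeps", "il pleut beaucoup", 0),
    ("good morning", "bonjour", 1),
    ("good morning", "la voiture est rouge", 0),
]

X = np.stack([pair_features(s, t) for s, t, _ in train])
y = np.array([label for _, _, label in train])
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Zero-shot transfer: score a candidate pair involving a low-resource language.
score = clf.predict_proba(pair_features("the cat sleeps", "pusa ay natutulog")[None, :])[0, 1]
print(f"alignment probability: {score:.2f}")
```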
-
Publication number: 20240347569
Abstract: Each pixel 8 constituting a solid-state imaging device 1 includes a light scattering unit 27 receiving incident light L1 and generating absorbed light L2 including scattered light, and a photoelectric conversion unit 26 receiving the absorbed light L2 from a light input surface 26a and generating a signal voltage corresponding to the received absorbed light L2. The light scattering unit 27 includes a plurality of metal structures 27a disposed with a predetermined cycle length. The light scattering unit 27 generates, as the scattered light, diffracted light caused by plasmons corresponding to the incident light L1.
Type: Application
Filed: January 17, 2022
Publication date: October 17, 2024
Inventors: Nobukazu TERANISHI, Atsushi ONO, Kazuma HASHIMOTO, Takahito YOSHINAGA
-
Patent number: 11922305
Abstract: Embodiments described herein provide safe policy improvement (SPI) in a batch reinforcement learning framework for a task-oriented dialogue. Specifically, a batch reinforcement learning framework for dialogue policy learning is provided, which improves the performance of the dialogue and learns to shape a reward that reasons about the intention behind human responses rather than just imitating the human demonstration.
Type: Grant
Filed: November 25, 2020
Date of Patent: March 5, 2024
Assignee: Salesforce, Inc.
Inventors: Govardana Sachithanandam Ramachandran, Kazuma Hashimoto, Caiming Xiong, Richard Socher
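The abstract does not spell out the safety criterion, so the sketch below illustrates one common form of safe policy improvement for batch RL: an off-policy value estimate of a candidate dialogue policy is compared against the behavior policy before the candidate is adopted. The importance-sampling estimator, the margin, and the toy data are assumptions, not the claimed method.

```python
# Toy illustration of a "safe policy improvement" check on logged dialogue data.
from typing import Callable, Dict, List, Tuple

# A logged turn: (dialogue state, action taken by the behavior policy, reward).
Batch = List[Tuple[str, str, float]]


def estimated_value(policy: Callable[[str], Dict[str, float]],
                    behavior: Callable[[str], Dict[str, float]],
                    batch: Batch) -> float:
    """Per-step importance-sampling estimate of the policy's average reward."""
    total = 0.0
    for state, action, reward in batch:
        weight = policy(state).get(action, 0.0) / max(behavior(state).get(action, 1e-8), 1e-8)
        total += weight * reward
    return total / len(batch)


def safe_update(candidate, behavior, batch: Batch, margin: float = 0.05):
    v_candidate = estimated_value(candidate, behavior, batch)
    v_behavior = sum(r for _, _, r in batch) / len(batch)
    # Keep the behavior policy unless the candidate is estimated not to be much worse.
    return candidate if v_candidate >= v_behavior - margin else behavior


if __name__ == "__main__":
    behavior = lambda s: {"ask_slot": 0.5, "confirm": 0.5}
    candidate = lambda s: {"ask_slot": 0.8, "confirm": 0.2}
    logged: Batch = [("greet", "ask_slot", 1.0), ("greet", "confirm", 0.2)]
    chosen = safe_update(candidate, behavior, logged)
    print("adopted candidate" if chosen is candidate else "kept behavior policy")
```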
-
Publication number: 20240070456
Abstract: Provided are systems and methods for corrective reward optimization for generative sequential labeling. In particular, example aspects of the present disclosure are directed to an effective framework for generative reward optimization of text (or other) data sequences, certain example implementations of which can be referred to as “GROOT”. Example implementations of the proposed framework work by training a generative sequential labeling model to match the decoder output distribution with that of the (possibly black-box) reward function. Using an iterative training regime, the framework can first generate prediction candidates and then correct errors in the candidates. Finally, a loss function can be used that contrasts those candidates based on their reward values (e.g., as measured by a reward function that encodes the specific objectives for a particular setting or application).
Type: Application
Filed: August 31, 2023
Publication date: February 29, 2024
Inventors: Karthik Raman, Kazuma Hashimoto
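A toy rendering of the generate-then-contrast loop described above; the generator, reward function, and hinge-style contrast are stand-ins chosen for illustration, not the GROOT implementation.

```python
# Illustrative sketch: generate label-sequence candidates, then contrast them by reward.
import random
from typing import Callable, List


def generate_candidates(model_sample: Callable[[], str], n: int = 4) -> List[str]:
    return [model_sample() for _ in range(n)]


def contrastive_loss(candidates: List[str], reward_fn: Callable[[str], float]) -> float:
    """Toy pairwise contrast: penalize adjacent candidates (by reward ranking)
    whose reward gap is smaller than a margin of 1."""
    ranked = sorted(candidates, key=reward_fn, reverse=True)
    loss = 0.0
    for better, worse in zip(ranked, ranked[1:]):
        gap = reward_fn(better) - reward_fn(worse)
        loss += max(0.0, 1.0 - gap)   # hinge on the reward gap
    return loss


if __name__ == "__main__":
    random.seed(0)
    labels = ["B-PER I-PER O", "B-PER O O", "O O O"]
    sample = lambda: random.choice(labels)                                  # stand-in generator
    reward = lambda seq: seq.count("B-PER") + 0.5 * seq.count("I-PER")      # black-box stand-in
    cands = generate_candidates(sample)
    print("candidates:", cands)
    print("contrastive loss:", contrastive_loss(cands, reward))
```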
-
Patent number: 11902221
Abstract: A conversation engine performs conversations with users using chatbots customized for performing a set of tasks that can be performed using an online system. The conversation engine loads a chatbot configuration that specifies the behavior of a chatbot including the tasks that can be performed by the chatbot, the types of entities relevant to each task, and so on. The conversation may be voice based and use natural language. The conversation engine may load different chatbot configurations to implement different chatbots. The conversation engine receives a conversation engine configuration that specifies the behavior of the conversation engine across chatbots. The system may be a multi-tenant system that allows customization of the chatbots for each tenant.
Type: Grant
Filed: September 29, 2020
Date of Patent: February 13, 2024
Assignee: Salesforce, Inc.
Inventors: Xinyi Yang, Tian Xie, Caiming Xiong, Wenhao Liu, Huan Wang, Kazuma Hashimoto, Jin Qu, Feihong Wu, Yingbo Zhou
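The configuration-driven design can be pictured as below: a single engine loads per-tenant chatbot configurations that declare tasks and the entity types each task needs. The schema and field names are assumptions for illustration only.

```python
# Hypothetical illustration of a configuration-driven chatbot definition.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class TaskConfig:
    name: str
    required_entities: List[str]


@dataclass
class ChatbotConfig:
    bot_name: str
    tasks: List[TaskConfig]


def load_chatbot_config(raw: Dict) -> ChatbotConfig:
    """Turn a per-tenant config document into the engine's internal representation."""
    tasks = [TaskConfig(t["name"], t["entities"]) for t in raw["tasks"]]
    return ChatbotConfig(bot_name=raw["bot_name"], tasks=tasks)


# Example per-tenant configuration document (e.g. parsed from JSON/YAML).
raw_config = {
    "bot_name": "order_support",
    "tasks": [
        {"name": "check_order_status", "entities": ["order_id"]},
        {"name": "update_shipping_address", "entities": ["order_id", "address"]},
    ],
}

engine_bot = load_chatbot_config(raw_config)
print(engine_bot.bot_name, [t.name for t in engine_bot.tasks])
```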
-
Patent number: 11887599
Abstract: A conversation engine performs conversations with users using chatbots customized for performing a set of tasks that can be performed using an online system. The conversation engine loads a chatbot configuration that specifies the behavior of a chatbot including the tasks that can be performed by the chatbot, the types of entities relevant to each task, and so on. The conversation may be voice based and use natural language. The conversation engine may load different chatbot configurations to implement different chatbots. The conversation engine receives a conversation engine configuration that specifies the behavior of the conversation engine across chatbots. The system may be a multi-tenant system that allows customization of the chatbots for each tenant.
Type: Grant
Filed: February 10, 2023
Date of Patent: January 30, 2024
Assignee: Salesforce, Inc.
Inventors: Xinyi Yang, Tian Xie, Caiming Xiong, Wenhao Liu, Huan Wang, Kazuma Hashimoto, Yingbo Zhou, Xugang Ye, Jin Qu, Feihong Wu
-
Publication number: 20230419050
Abstract: Embodiments described herein provide a pipelined natural language question answering system that improves a BERT-based system. Specifically, the natural language question answering system uses a pipeline of neural networks each trained to perform a particular task. The context selection network identifies premium context from context for the question. The question type network identifies the natural language question as a yes, no, or span question and a yes or no answer to the natural language question when the question is a yes or no question. The span extraction model determines an answer span to the natural language question when the question is a span question.
Type: Application
Filed: September 7, 2023
Publication date: December 28, 2023
Inventors: Akari ASAI, Kazuma HASHIMOTO, Richard SOCHER, Caiming XIONG
-
Patent number: 11822897
Abstract: Approaches for the translation of structured text include an embedding module for encoding and embedding source text in a first language, an encoder for encoding output of the embedding module, a decoder for iteratively decoding output of the encoder based on generated tokens in translated text from previous iterations, a beam module for constraining output of the decoder with respect to possible embedded tags to include in the translated text for a current iteration using a beam search, and a layer for selecting a token to be included in the translated text for the current iteration. The translated text is in a second language different from the first language. In some embodiments, the approach further includes scoring and pointer modules for selecting the token based on the output of the beam module or copied from the source text or reference text from a training pair best matching the source text.
Type: Grant
Filed: August 31, 2021
Date of Patent: November 21, 2023
Assignee: salesforce.com, inc.
Inventors: Kazuma Hashimoto, Raffaella Buschiazzo, James Bradbury, Teresa Anna Marshall, Caiming Xiong, Richard Socher
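One way to picture the constrained decoding described above is a beam search that only emits a markup tag if the same tag occurs in the source segment, so document structure survives translation. The sketch below is a toy illustration with a stub scoring function, not the patented decoder.

```python
# Toy constrained beam search: tags in the output must come from the source's tags.
import re
from typing import List, Tuple


def allowed(token: str, source_tags: set) -> bool:
    # Tag-like tokens may only be emitted if they were present in the source segment.
    return token in source_tags if re.fullmatch(r"</?\w+>", token) else True


def constrained_beam_search(source: str,
                            vocab: List[str],
                            score,
                            steps: int = 4,
                            beam_size: int = 2) -> List[str]:
    source_tags = set(re.findall(r"</?\w+>", source))
    beams: List[Tuple[List[str], float]] = [([], 0.0)]
    for _ in range(steps):
        expanded = []
        for tokens, logp in beams:
            for tok in vocab:
                if allowed(tok, source_tags):
                    expanded.append((tokens + [tok], logp + score(tokens, tok)))
        beams = sorted(expanded, key=lambda b: b[1], reverse=True)[:beam_size]
    return beams[0][0]


if __name__ == "__main__":
    src = "Click <b>Save</b> to continue."
    vocab = ["<b>", "</b>", "<i>", "Cliquez", "Enregistrer", "pour", "continuer"]
    # Stand-in for the decoder's log-probability; a real model scores tokens in context.
    stub_score = lambda prefix, tok: -0.01 * len(prefix)
    print(constrained_beam_search(src, vocab, stub_score))
```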
-
Patent number: 11797825
Abstract: The technology disclosed provides a so-called “joint many-task neural network model” to solve a variety of increasingly complex natural language processing (NLP) tasks using growing depth of layers in a single end-to-end model. The model is successively trained by considering linguistic hierarchies, directly connecting word representations to all model layers, explicitly using predictions in lower tasks, and applying a so-called “successive regularization” technique to prevent catastrophic forgetting. Three examples of lower level model layers are part-of-speech (POS) tagging layer, chunking layer, and dependency parsing layer. Two examples of higher level model layers are semantic relatedness layer and textual entailment layer. The model achieves the state-of-the-art results on chunking, dependency parsing, semantic relatedness and textual entailment.
Type: Grant
Filed: May 26, 2021
Date of Patent: October 24, 2023
Assignee: Salesforce, Inc.
Inventors: Kazuma Hashimoto, Caiming Xiong, Richard Socher
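A compact sketch of the stacking idea, in which a higher-level task layer consumes both the word representations and the predictions of the layer below it. Only two of the task layers are shown, successive regularization is omitted, and the layer sizes are arbitrary assumptions.

```python
# Toy rendering of stacking task layers so higher tasks see lower-task predictions.
import torch
import torch.nn as nn


class JointManyTaskSketch(nn.Module):
    """Each task layer sees the word representations plus the (soft) predictions
    of the layer below it; only POS tagging and chunking are sketched here."""
    def __init__(self, vocab_size=100, dim=32, n_pos=10, n_chunk=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.pos_layer = nn.Linear(dim, n_pos)
        # The chunking layer consumes word vectors AND the POS label distribution.
        self.chunk_layer = nn.Linear(dim + n_pos, n_chunk)

    def forward(self, token_ids):
        words = self.embed(token_ids)                     # (seq, dim)
        pos_logits = self.pos_layer(words)                # lower-level task
        pos_probs = pos_logits.softmax(dim=-1)
        chunk_in = torch.cat([words, pos_probs], dim=-1)  # feed predictions upward
        chunk_logits = self.chunk_layer(chunk_in)         # higher-level task
        return pos_logits, chunk_logits


model = JointManyTaskSketch()
pos, chunk = model(torch.tensor([1, 5, 7]))
print(pos.shape, chunk.shape)
```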
-
Patent number: 11783164
Abstract: The technology disclosed provides a so-called “joint many-task neural network model” to solve a variety of increasingly complex natural language processing (NLP) tasks using growing depth of layers in a single end-to-end model. The model is successively trained by considering linguistic hierarchies, directly connecting word representations to all model layers, explicitly using predictions in lower tasks, and applying a so-called “successive regularization” technique to prevent catastrophic forgetting. Three examples of lower level model layers are part-of-speech (POS) tagging layer, chunking layer, and dependency parsing layer. Two examples of higher level model layers are semantic relatedness layer and textual entailment layer. The model achieves the state-of-the-art results on chunking, dependency parsing, semantic relatedness and textual entailment.
Type: Grant
Filed: October 26, 2020
Date of Patent: October 10, 2023
Assignee: Salesforce.com, Inc.
Inventors: Kazuma Hashimoto, Caiming Xiong, Richard Socher
-
Patent number: 11775775
Abstract: Embodiments described herein provide a pipelined natural language question answering system that improves a BERT-based system. Specifically, the natural language question answering system uses a pipeline of neural networks each trained to perform a particular task. The context selection network identifies premium context from context for the question. The question type network identifies the natural language question as a yes, no, or span question and a yes or no answer to the natural language question when the question is a yes or no question. The span extraction model determines an answer span to the natural language question when the question is a span question.
Type: Grant
Filed: November 26, 2019
Date of Patent: October 3, 2023
Assignee: Salesforce.com, Inc.
Inventors: Akari Asai, Kazuma Hashimoto, Richard Socher, Caiming Xiong
-
Patent number: 11763090
Abstract: An online system that allows users to interact with it using expressions in natural language form includes an intent inference module allowing it to infer an intent represented by a user expression. The intent inference module has a set of possible intents, along with a small set of example natural language expressions known to represent that intent. When a user interacts with the system using a natural language expression for which the intent is not already known, the intent inference module applies a natural language inference model to compute scores indicating whether the user expression textually entails the various example natural language expressions. Based on the scores, the intent inference module determines an intent that is most applicable for the expression. If an intent cannot be determined with sufficient confidence, the intent inference module may further attempt to determine whether the various example natural language expressions textually entail the user expression.
Type: Grant
Filed: December 18, 2019
Date of Patent: September 19, 2023
Assignee: Salesforce, Inc.
Inventors: Tian Xie, Kazuma Hashimoto, Xinyi Yang, Caiming Xiong
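The inference procedure can be sketched as follows, with a stub word-overlap scorer standing in for the natural language inference model, and with illustrative intents and thresholds; a real system would use a trained NLI model.

```python
# Simplified sketch of intent inference by textual entailment against example expressions.
from typing import Dict, List, Optional


def entailment_score(premise: str, hypothesis: str) -> float:
    """Stub NLI score: fraction of hypothesis words covered by the premise."""
    p, h = set(premise.lower().split()), set(hypothesis.lower().split())
    return len(p & h) / max(len(h), 1)


def infer_intent(user_text: str,
                 intent_examples: Dict[str, List[str]],
                 threshold: float = 0.5) -> Optional[str]:
    best_intent, best_score = None, 0.0
    for intent, examples in intent_examples.items():
        # Does the user expression entail any known example for this intent?
        score = max(entailment_score(user_text, ex) for ex in examples)
        if score > best_score:
            best_intent, best_score = intent, score
    if best_score >= threshold:
        return best_intent
    # Fall back to the reverse direction: do the examples entail the user text?
    for intent, examples in intent_examples.items():
        if max(entailment_score(ex, user_text) for ex in examples) >= threshold:
            return intent
    return None


intents = {
    "check_balance": ["what is my account balance", "show my balance"],
    "transfer_money": ["send money to a friend", "transfer funds"],
}
print(infer_intent("show my current balance please", intents))
```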
-
Patent number: 11741142
Abstract: Embodiments described herein provide document summarization systems and methods that utilize fine-tuning of pre-trained abstractive summarization models to produce summaries that more faithfully track the content of the documents. Such abstractive summarization models may be pre-trained using a corpus consisting of pairs of articles and associated summaries. For each article-summary pair, a pseudo label or control code is generated and represents a faithfulness of the summary with respect to the article. The pre-trained model is then fine-tuned based on the article-summary pairs and the corresponding control codes. The resulting fine-tuned models then provide improved faithfulness in document summarization tasks.
Type: Grant
Filed: January 31, 2022
Date of Patent: August 29, 2023
Assignee: salesforce.com, inc.
Inventors: Haopeng Zheng, Semih Yavuz, Wojciech Kryscinski, Kazuma Hashimoto, Yingbo Zhou
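A hedged sketch of the data-preparation step implied by the abstract: each article/summary pair receives a pseudo faithfulness control code, which is prepended to the training input so the fine-tuned model can later be steered toward faithful output. The entity-overlap heuristic and the code names are assumptions for illustration.

```python
# Sketch: derive pseudo faithfulness control codes and prepend them to training inputs.
from typing import List, Tuple


def pseudo_faithfulness_code(article: str, summary: str, threshold: float = 0.8) -> str:
    """Label a summary as faithful if most of its capitalized 'entities' appear
    in the article (a crude stand-in for a real faithfulness metric)."""
    ents = [w for w in summary.split() if w[:1].isupper()]
    if not ents:
        return "<faithful>"
    covered = sum(1 for e in ents if e in article) / len(ents)
    return "<faithful>" if covered >= threshold else "<hallucinated>"


def build_finetuning_examples(pairs: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
    examples = []
    for article, summary in pairs:
        code = pseudo_faithfulness_code(article, summary)
        examples.append((f"{code} {article}", summary))  # control code prepended to input
    return examples


corpus = [
    ("Acme Corp reported record profits in Berlin.", "Acme Corp posted record profits."),
    ("Acme Corp reported record profits in Berlin.", "Acme Corp is moving to Tokyo."),
]
for inp, out in build_finetuning_examples(corpus):
    print(inp, "=>", out)
```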
-
Publication number: 20230186916
Abstract: A conversation engine performs conversations with users using chatbots customized for performing a set of tasks that can be performed using an online system. The conversation engine loads a chatbot configuration that specifies the behavior of a chatbot including the tasks that can be performed by the chatbot, the types of entities relevant to each task, and so on. The conversation may be voice based and use natural language. The conversation engine may load different chatbot configurations to implement different chatbots. The conversation engine receives a conversation engine configuration that specifies the behavior of the conversation engine across chatbots. The system may be a multi-tenant system that allows customization of the chatbots for each tenant.
Type: Application
Filed: February 10, 2023
Publication date: June 15, 2023
Inventors: Xinyi Yang, Tian Xie, Caiming Xiong, Wenhao Liu, Huan Wang, Kazuma Hashimoto, Yingbo Zhou, Xugang Ye, Jin Qu, Feihong Wu
-
Patent number: 11669712
Abstract: A method for evaluating robustness of one or more target neural network models using natural typos. The method includes receiving one or more natural typo generation rules associated with a first task associated with a first input document type, receiving a first target neural network model, and receiving a first document and its corresponding ground truth labels. The method further includes generating one or more natural typos for the first document based on the one or more natural typo generation rules, and providing, to the first target neural network model, a test document generated based on the first document and the one or more natural typos as an input document to generate a first output. A robustness evaluation result of the first target neural network model is generated based on a comparison between the first output and the ground truth labels.
Type: Grant
Filed: September 3, 2019
Date of Patent: June 6, 2023
Assignee: salesforce.com, inc.
Inventors: Lichao Sun, Kazuma Hashimoto, Jia Li, Richard Socher, Caiming Xiong
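The evaluation loop can be illustrated as below: apply natural typo rules to each document, run the target model on the perturbed text, and compare the output against the ground-truth labels. The rules and the stub classifier are illustrative assumptions, not the patented evaluation suite.

```python
# Toy robustness-evaluation loop: perturb inputs with typo rules, then score the model.
from typing import Callable, Dict, List

# Example "natural typo" rules: common spelling slips (illustrative only).
TYPO_RULES: Dict[str, str] = {"receive": "recieve", "definitely": "definately"}


def apply_typos(text: str, rules: Dict[str, str]) -> str:
    for correct, typo in rules.items():
        text = text.replace(correct, typo)
    return text


def evaluate_robustness(model: Callable[[str], str],
                        documents: List[str],
                        labels: List[str]) -> float:
    """Accuracy of the target model on typo-perturbed inputs."""
    correct = 0
    for doc, gold in zip(documents, labels):
        perturbed = apply_typos(doc, TYPO_RULES)
        if model(perturbed) == gold:
            correct += 1
    return correct / len(documents)


if __name__ == "__main__":
    # Stub sentiment model that keys on exact spellings (hence fragile to typos).
    stub_model = lambda text: "positive" if "definitely great" in text else "negative"
    docs = ["this is definitely great", "i did not receive the item"]
    gold = ["positive", "negative"]
    print("robustness accuracy:", evaluate_robustness(stub_model, docs, gold))
```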
-
Publication number: 20230153542
Abstract: Embodiments described herein provide a cross-lingual sentence alignment framework that is trained only on rich-resource language pairs. To obtain an accurate aligner, a pretrained multi-lingual language model is used, and a classifier is trained on parallel data from rich-resource language pairs. This trained classifier may then be used for cross-lingual transfer with low-resource languages.
Type: Application
Filed: January 21, 2022
Publication date: May 18, 2023
Inventors: Tong Niu, Kazuma Hashimoto, Yingbo Zhou, Caiming Xiong
-
Publication number: 20230098809
Abstract: An information processing apparatus includes a control unit configured to execute a scene detection process, a parameter extraction process, and an output process. The scene detection process detects a scene from an input content. The parameter extraction process extracts a realistic sensation parameter for wave control that corresponds to a scene that is detected by the scene detection process. The output process outputs a wave signal for the content that is produced by processing sound data of the input content by a realistic sensation parameter that is extracted by the parameter extraction process.
Type: Application
Filed: March 18, 2022
Publication date: March 30, 2023
Applicant: DENSO TEN Limited
Inventors: Shinichi SHIOTSU, Yoshikuni MIKI, Rei HIROMI, Yohei KAKEE, Iku NAKAJO, Kazuma HASHIMOTO
-
Publication number: 20230054068
Abstract: Embodiments described herein provide document summarization systems and methods that utilize fine-tuning of pre-trained abstractive summarization models to produce summaries that more faithfully track the content of the documents. Such abstractive summarization models may be pre-trained using a corpus consisting of pairs of articles and associated summaries. For each article-summary pair, a pseudo label or control code is generated and represents a faithfulness of the summary with respect to the article. The pre-trained model is then fine-tuned based on the article-summary pairs and the corresponding control codes. The resulting fine-tuned models then provide improved faithfulness in document summarization tasks.
Type: Application
Filed: January 31, 2022
Publication date: February 23, 2023
Inventors: Haopeng Zheng, Semih Yavuz, Wojciech Kryscinski, Kazuma Hashimoto, Yingbo Zhou