Patents by Inventor Ryosuke Kohita
Ryosuke Kohita has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11816581
Abstract: A fast neural transition-based parser. The fast neural transition-based parser includes a decision tree-based classifier and a state vector control loss function. The decision tree-based classifier dynamically replaces a multilayer perceptron in the parser and increases the speed of neural transition-based parsing. The state vector control loss function trains the parser and builds a vector space favorable for constructing the decision tree used by the classifier, maintaining the accuracy of neural transition-based parsing while the decision tree-based classifier is used to increase its speed.
Type: Grant
Filed: September 8, 2020
Date of Patent: November 14, 2023
Assignee: International Business Machines Corporation
Inventors: Ryosuke Kohita, Daisuke Takuma
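The central trade-off above — a few threshold tests in place of an MLP forward pass at every parser step — can be sketched as follows. This is a toy illustration, not the patented method: the decision tree here is hand-written and the state features (stack size, buffer size, a score) are assumptions, whereas the patent learns the tree from a specially shaped state-vector space.

```python
def tree_classifier(feat):
    # Hand-written stand-in for the learned decision tree: a few cheap
    # threshold tests replace a full MLP forward pass at each step.
    stack_size, buffer_size, score = feat
    if stack_size < 2:        # cannot reduce with fewer than two items
        return "SHIFT"
    if buffer_size == 0:      # nothing left to shift
        return "REDUCE"
    return "REDUCE" if score >= 0.5 else "SHIFT"

def parse(words, score=lambda stack, buf: 0.6):
    """Unlabeled arc-standard-style transition loop driven by the tree."""
    stack, buf, actions = [], list(words), []
    while buf or len(stack) > 1:
        action = tree_classifier((len(stack), len(buf), score(stack, buf)))
        if action == "SHIFT":
            stack.append(buf.pop(0))
        else:
            top = stack.pop()
            stack[-1] = (stack[-1], top)  # attach popped item to the new top
        actions.append(action)
    return stack[0], actions
```

With the default scorer this shifts each of the n words once and performs n−1 reduces, so a 3-word input yields 5 transitions.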
-
Patent number: 11645461
Abstract: A method is provided for dictionary expansion. The method acquires an object from a user and adds the object to a set of objects previously acquired from the user that form an expandable dictionary. The method calculates a centroid based on the set. The method calculates a similarity score of each of a plurality of objects relative to the centroid for each of a plurality of object features to calculate a weighted sum of similarity scores for each of the plurality of objects. The method presents candidate objects selected among the plurality of objects based on the weighted sum. The method acquires, from the user, a preferred candidate object among the candidate objects. The method updates weights of the plurality of features to maximize the weighted sum of similarity scores for the preferred candidate object. The method expands the dictionary by adding the preferred candidate object to the expandable dictionary.
Type: Grant
Filed: February 10, 2020
Date of Patent: May 9, 2023
Assignee: International Business Machines Corporation
Inventors: Ryosuke Kohita, Issei Yoshida, Hiroshi Kanayama, Tetsuya Nasukawa
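The centroid-and-weighted-similarity ranking step might look roughly like this sketch, assuming each feature maps objects to plain numeric vectors and using cosine similarity (the feature names, vectors, and similarity choice are illustrative assumptions, not taken from the patent):

```python
import math

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.hypot(*u) * math.hypot(*v)
    return num / den if den else 0.0

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def rank_candidates(dictionary, candidates, features, weights):
    # features: {feature_name: {object: vector}}; weights: {feature_name: float}
    cents = {f: centroid([features[f][o] for o in dictionary]) for f in weights}
    def score(obj):  # weighted sum of per-feature similarities to the centroid
        return sum(w * cosine(features[f][obj], cents[f]) for f, w in weights.items())
    return sorted(candidates, key=score, reverse=True)

# Toy data: candidate "x" matches the seed objects on feature f1, "y" does not.
features = {
    "f1": {"s1": [1.0, 0.0], "s2": [1.0, 0.0], "x": [1.0, 0.0], "y": [0.0, 1.0]},
    "f2": {"s1": [0.0, 1.0], "s2": [0.0, 1.0], "x": [0.0, 1.0], "y": [0.0, 1.0]},
}
ranked = rank_candidates(["s1", "s2"], ["x", "y"], features, {"f1": 1.0, "f2": 0.5})
```

The weight update the abstract describes would then raise the weight of whichever features score the user's chosen candidate highly, before the next ranking round.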
-
Patent number: 11409958
Abstract: Methods and systems for performing a language processing task include setting an angular coordinate for a vector representation of each of a set of words, based on similarity of the words to one another. A radial coordinate is set for the vector representation of each word, according to hierarchical relationships between the words. A language processing task is performed based on hierarchical word relationships using the vector representations of the words.
Type: Grant
Filed: September 25, 2020
Date of Patent: August 9, 2022
Assignee: International Business Machines Corporation
Inventors: Ran Iwamoto, Ryosuke Kohita, Akifumi Wachi
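Read geometrically, each word gets a polar coordinate: an angle from the similarity layout and a radius from its hierarchy depth, so hypernyms sit closer to the origin. A toy sketch under that reading (the depths and angles are assumed inputs here; the patent derives them from word similarity and hierarchical relations):

```python
import math

def polar_embed(depth, angle_deg, base_radius=1.0):
    """Angular coordinate encodes similarity; radial coordinate encodes level."""
    r = base_radius + depth
    t = math.radians(angle_deg)
    return (r * math.cos(t), r * math.sin(t))

words = {
    "animal": polar_embed(0, 15),  # hypernym: smallest radius
    "dog":    polar_embed(1, 10),  # hyponyms: larger radius,
    "cat":    polar_embed(1, 20),  # with similar words at nearby angles
}
```

A downstream task can then recover hierarchy from vector norms ("animal" is closer to the origin than "dog") and similarity from the angle between vectors.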
-
Publication number: 20220180166
Abstract: Next state prediction technology that performs the following computer based operations: receiving state information that includes information indicative of a current state of an environment; processing the state information to predict a future state of the environment, with the processing being performed by a hybrid computer system that includes both of the following: (i) neural network software module(s) that include machine learning functionality, and (ii) symbolic rule based software modules; and using the prediction of the next state of the environment as an input with respect to taking a further action (for example, activating a hardware device or effecting a communication to a human or another device).
Type: Application
Filed: December 3, 2020
Publication date: June 9, 2022
Inventors: Akifumi Wachi, Ryosuke Kohita, Daiki Kimura
-
Publication number: 20220164668
Abstract: A method for safe reinforcement learning receives an action and a current state of an environment. The method evaluates, using a Logical Neural Network (LNN) structure, an action safetyness logical inference based on the current state of the environment and a current action candidate from an agent. The method outputs upper and lower bounds on the action, responsive to an evaluation of the action safetyness logical inference. The method calculates a contradiction value for the action by using the upper and lower bounds. The contradiction value indicates a level of contradiction for each of a plurality of logic rules implemented by the LNN structure. The method evaluates the action with respect to safetyness based on the contradiction value. The method selectively performs the action responsive to an evaluation indicating that the action is safe to perform based on the contradiction value and a safetyness threshold.
Type: Application
Filed: November 24, 2020
Publication date: May 26, 2022
Inventors: Daiki Kimura, Akifumi Wachi, Subhajit Chaudhury, Ryosuke Kohita, Asim Munawar, Michiaki Tatsubori
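In LNN-style inference a proposition carries lower and upper truth bounds, and a contradiction appears when the bounds cross. A minimal sketch of that bookkeeping (the aggregation rule, the threshold value, and the convention that low contradiction means "safe" are assumptions for illustration; the abstract does not fix them):

```python
def tighten(*bounds):
    # Combine evidence about one proposition by intersecting its truth bounds.
    low = max(b[0] for b in bounds)
    up = min(b[1] for b in bounds)
    return low, up

def contradiction(low, up):
    # Positive only when the bounds have crossed, i.e. the rules disagree.
    return max(0.0, low - up)

def is_safe(rule_bounds, state_bounds, threshold=0.2):
    # Sketch: an action whose safety rules conflict with the observed state
    # produces a high contradiction value and is rejected.
    c = contradiction(*tighten(rule_bounds, state_bounds))
    return c <= threshold
```

For example, background knowledge asserting "safe" with bounds (0.8, 1.0) against state evidence of (0.0, 0.3) tightens to crossed bounds (0.8, 0.3), a contradiction of 0.5, so the action is withheld.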
-
Publication number: 20220164647
Abstract: A method for action pruning in Reinforcement Learning receives a current state of an environment. The method evaluates, using a Logical Neural Network (LNN) structure, a logical inference based on the current state. The method outputs upper and lower bounds on each action from a set of possible actions of an agent in the environment, responsive to an evaluation of the logical inference. The method calculates, for each pair of a possible action and the current state, a probability by using the upper and lower bounds. Each of the calculated probabilities indicates a respective priority ratio for its action. The method obtains a policy in Reinforcement Learning for the current state by using the calculated probabilities. The method prunes one or more actions from the set as being in violation of the policy, such that those actions are ignored.
Type: Application
Filed: November 24, 2020
Publication date: May 26, 2022
Inventors: Daiki Kimura, Akifumi Wachi, Subhajit Chaudhury, Ryosuke Kohita, Asim Munawar, Michiaki Tatsubori
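The bounds-to-policy step can be sketched by turning each action's truth bounds into a normalised priority ratio and discarding near-zero actions. The midpoint heuristic and the cutoff value below are illustrative assumptions, not taken from the publication:

```python
def priorities(bounds):
    # bounds: {action: (lower, upper)}; midpoint of the bounds as raw priority
    mids = {a: (low + up) / 2 for a, (low, up) in bounds.items()}
    z = sum(mids.values()) or 1.0
    return {a: m / z for a, m in mids.items()}

def prune(bounds, eps=0.05):
    # Drop actions whose priority ratio is effectively zero under the policy.
    p = priorities(bounds)
    kept = {a: v for a, v in p.items() if v >= eps}
    return kept, set(p) - set(kept)
```

An action the logic rules out entirely (bounds near (0, 0)) gets priority ≈ 0 and is ignored; the remaining priorities form the agent's policy for the state.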
-
Publication number: 20220129637
Abstract: A computer identifies, within a task description, words that correspond to semantic element labels for the task. The computer receives, from a task source operatively connected with the computer, a textual description of a task. The computer receives semantic element labels, element identification rules, and at least one reference sentence showing natural language use of the semantic element labels. The computer parses the description and generates a Rule Match Value (RMV) for each parsed word based on the element identification rules. The computer collects words having RMVs above a threshold into sets of associated candidate words and generates, using a neural network trained on the reference sentence, Match Likelihood Values (MLVs) indicating whether each candidate word represents the semantic element label with which it is associated. The computer selects, to represent the semantic element, the associated candidate word having the highest MLV.
Type: Application
Filed: October 23, 2020
Publication date: April 28, 2022
Inventors: Ryosuke Kohita, Akifumi Wachi, Daiki Kimura
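A rough sketch of the two-stage selection, with the element-identification rules as simple predicates and the neural MLV scorer replaced by a stand-in callable (all names, rules, and scores here are hypothetical):

```python
def rule_match_values(words, rules):
    # RMV = fraction of element-identification rules a word satisfies
    return {w: sum(1 for r in rules if r(w)) / len(rules) for w in words}

def select_element_word(words, rules, mlv_scorer, threshold=0.75):
    rmvs = rule_match_values(words, rules)
    candidates = [w for w in words if rmvs[w] >= threshold]
    # The publication scores candidates with a neural network trained on the
    # reference sentences; mlv_scorer is a stand-in for that network here.
    return max(candidates, key=mlv_scorer)
```

Rules first narrow the description down to a candidate set, then the (stand-in) MLV scorer breaks the tie.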
-
Patent number: 11308274
Abstract: A computer-implemented method is provided. The method includes acquiring a seed word; calculating a similarity score of each of a plurality of words relative to the seed word for each of a plurality of models to calculate a weighted sum of similarity scores for each of the plurality of words; outputting a plurality of candidate words among the plurality of words; acquiring annotations indicating at least one of preferred words and non-preferred words among the plurality of the candidate words; updating weights of the plurality of models in a manner to cause weighted sums of similarity scores for the preferred words to be relatively larger than the weighted sums of the similarity scores for the non-preferred words, based on the annotations; and grouping the plurality of candidate words output based on the weighted sum of similarity scores calculated with updated weights of the plurality of models.
Type: Grant
Filed: May 17, 2019
Date of Patent: April 19, 2022
Assignee: International Business Machines Corporation
Inventors: Ryosuke Kohita, Issei Yoshida, Tetsuya Nasukawa, Hiroshi Kanayama
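One way the annotation-driven weight update could be realised is a perceptron-style step that shifts weight toward models agreeing with the annotations; the abstract only requires preferred words to end up with relatively larger weighted sums, so the exact rule below is an assumption:

```python
def weighted_score(word, models, weights):
    # models: {name: function giving similarity of `word` to the seed word}
    return sum(weights[m] * models[m](word) for m in weights)

def update_weights(weights, models, preferred, nonpreferred, lr=0.5):
    # Perceptron-style step: grow the weight of models that rate the
    # preferred word above the non-preferred one, shrink the others.
    for m in weights:
        weights[m] += lr * (models[m](preferred) - models[m](nonpreferred))
    return weights
```

After one such step, a model that agreed with the annotator dominates the weighted sum, so re-ranking with the new weights surfaces words like the preferred one.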
-
Patent number: 11294945
Abstract: A computer-implemented method is presented for performing Q-learning with a language model for unsupervised text summarization. The method includes mapping each word of a sentence into a vector by using word embedding via a deep learning natural language processing model; assigning each of the words an action and an operation status; determining, for each word whose operation status represents "unoperated," a status by calculating a local encoding and a global encoding and concatenating the two, the local encoding calculated from the word's vector, action, and operation status, and the global encoding calculated from the local encodings of all the words in a self-attention fashion; and determining, via an editorial agent, a Q-value for each word in terms of each of three actions based on the status.
Type: Grant
Filed: May 19, 2020
Date of Patent: April 5, 2022
Assignee: International Business Machines Corporation
Inventors: Ryosuke Kohita, Akifumi Wachi
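The local/global encoding machinery can be sketched with tiny vectors: a local encoding concatenates a word's vector with its action id and operation status, and a global encoding pools all local encodings with plain dot-product self-attention. The three-action names and the dimensions below are hypothetical stand-ins:

```python
import math

ACTIONS = {"keep": 0, "delete": 1, "replace": 2}  # hypothetical 3-action set

def local_encoding(word_vec, action, operated):
    # Concatenate the word vector with its action id and operation status.
    return word_vec + [float(ACTIONS[action]), 1.0 if operated else 0.0]

def global_encodings(locals_):
    # Plain dot-product self-attention over all the words' local encodings.
    out = []
    for q in locals_:
        scores = [sum(a * b for a, b in zip(q, k)) for k in locals_]
        m = max(scores)
        ws = [math.exp(s - m) for s in scores]
        z = sum(ws)
        out.append([sum(w / z * k[i] for w, k in zip(ws, locals_))
                    for i in range(len(q))])
    return out
```

Concatenating each word's local and global encoding then gives the status from which the editorial agent would read off three Q-values per word.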
-
Publication number: 20220100956
Abstract: Methods and systems for performing a language processing task include setting an angular coordinate for a vector representation of each of a set of words, based on similarity of the words to one another. A radial coordinate is set for the vector representation of each word, according to hierarchical relationships between the words. A language processing task is performed based on hierarchical word relationships using the vector representations of the words.
Type: Application
Filed: September 25, 2020
Publication date: March 31, 2022
Inventors: Ran Iwamoto, Ryosuke Kohita, Akifumi Wachi
-
Publication number: 20220076138
Abstract: A fast neural transition-based parser. The fast neural transition-based parser includes a decision tree-based classifier and a state vector control loss function. The decision tree-based classifier dynamically replaces a multilayer perceptron in the parser and increases the speed of neural transition-based parsing. The state vector control loss function trains the parser and builds a vector space favorable for constructing the decision tree used by the classifier, maintaining the accuracy of neural transition-based parsing while the decision tree-based classifier is used to increase its speed.
Type: Application
Filed: September 8, 2020
Publication date: March 10, 2022
Inventors: Ryosuke Kohita, Daisuke Takuma
-
Publication number: 20210365485
Abstract: A computer-implemented method is presented for performing Q-learning with a language model for unsupervised text summarization. The method includes mapping each word of a sentence into a vector by using word embedding via a deep learning natural language processing model; assigning each of the words an action and an operation status; determining, for each word whose operation status represents "unoperated," a status by calculating a local encoding and a global encoding and concatenating the two, the local encoding calculated from the word's vector, action, and operation status, and the global encoding calculated from the local encodings of all the words in a self-attention fashion; and determining, via an editorial agent, a Q-value for each word in terms of each of three actions based on the status.
Type: Application
Filed: May 19, 2020
Publication date: November 25, 2021
Inventors: Ryosuke Kohita, Akifumi Wachi
-
Publication number: 20210248315
Abstract: A method is provided for dictionary expansion. The method acquires an object from a user and adds the object to a set of objects previously acquired from the user that form an expandable dictionary. The method calculates a centroid based on the set. The method calculates a similarity score of each of a plurality of objects relative to the centroid for each of a plurality of object features to calculate a weighted sum of similarity scores for each of the plurality of objects. The method presents candidate objects selected among the plurality of objects based on the weighted sum. The method acquires, from the user, a preferred candidate object among the candidate objects. The method updates weights of the plurality of features to maximize the weighted sum of similarity scores for the preferred candidate object. The method expands the dictionary by adding the preferred candidate object to the expandable dictionary.
Type: Application
Filed: February 10, 2020
Publication date: August 12, 2021
Inventors: Ryosuke Kohita, Issei Yoshida, Hiroshi Kanayama, Tetsuya Nasukawa
-
Publication number: 20200364298
Abstract: A computer-implemented method is provided. The method includes acquiring a seed word; calculating a similarity score of each of a plurality of words relative to the seed word for each of a plurality of models to calculate a weighted sum of similarity scores for each of the plurality of words; outputting a plurality of candidate words among the plurality of words; acquiring annotations indicating at least one of preferred words and non-preferred words among the plurality of the candidate words; updating weights of the plurality of models in a manner to cause weighted sums of similarity scores for the preferred words to be relatively larger than the weighted sums of the similarity scores for the non-preferred words, based on the annotations; and grouping the plurality of candidate words output based on the weighted sum of similarity scores calculated with updated weights of the plurality of models.
Type: Application
Filed: May 17, 2019
Publication date: November 19, 2020
Inventors: Ryosuke Kohita, Issei Yoshida, Tetsuya Nasukawa, Hiroshi Kanayama