Patents by Inventor Ryu IIDA

Ryu IIDA has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11861307
    Abstract: A request paraphrasing system 120 that allows a dialogue system to flexibly handle requests expressed in many different ways includes: a pre-processing unit 130 converting a user input 56 to a word vector sequence; and a neural paraphrasing model 94 trained in advance by machine learning to receive the word vector sequence as an input and to paraphrase the request it represents into a request having a higher probability of obtaining an answer from a question-answering device 122 than the request before paraphrasing. As pre-processing, whether the user input 56 is a request or not may be determined, and paraphrasing may be applied only when the input is determined to be a request. Further, a classification model 98 may classify the input request to determine which request class it belongs to, and the classification may be input as an additional feature to the neural paraphrasing model 94.
    Type: Grant
    Filed: March 5, 2019
    Date of Patent: January 2, 2024
    Assignee: National Institute of Information and Communications Technology
    Inventors: Yoshihiko Asao, Ryu Iida, Canasai Kruengkrai, Noriyuki Abe, Kanako Onishi, Kentaro Torisawa, Yutaka Kidawara
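The pipeline this abstract describes (decide whether the input is a request, optionally classify it, then paraphrase it into a more answerable form) can be sketched as follows. This is purely illustrative: the patented system uses a trained neural paraphrasing model and classifier, so every rule and function name below is a hypothetical stand-in.

```python
def is_request(text):
    # stand-in request detector: the patent determines request-hood as a
    # pre-processing step; here we use a trivial surface heuristic
    return text.endswith("?") or "please" in text.lower()

def classify_request(text):
    # stand-in for classification model 98, whose output class is fed to the
    # paraphrasing model as an additional feature
    return "weather" if "weather" in text.lower() else "other"

def paraphrase(text, request_class):
    # stand-in for neural paraphrasing model 94: rewrite the request into a
    # form more likely to get an answer from the question-answering device
    if request_class == "weather":
        return "What is the weather forecast?"
    return text

def preprocess_and_paraphrase(user_input):
    # paraphrase only inputs judged to be requests, as in the abstract
    if not is_request(user_input):
        return user_input
    return paraphrase(user_input, classify_request(user_input))
```

A non-request passes through unchanged, while a detected request is classified and rewritten.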
  • Publication number: 20230385558
    Abstract: A text classifier 90 for answer identification enables highly accurate identification of answer candidates to a question by effectively using background knowledge related to the question. To extract an answer candidate to the question, the text classifier includes: a BERT (Bidirectional Encoder Representations from Transformers) model receiving a question and an answer candidate as inputs; a knowledge integration transformer receiving the output of BERT as an input; a background knowledge representation generator receiving a question and an answer as inputs and generating a group of background knowledge representation vectors for the question; and a vector converter converting the question and the answer candidate to embedded vectors and inputting them to the background knowledge representation generator.
    Type: Application
    Filed: October 13, 2021
    Publication date: November 30, 2023
    Inventors: Jonghoon OH, Kentaro TORISAWA, Julien KLOETZER, Ryu IIDA
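One way a knowledge integration transformer could fold the background-knowledge vectors into the BERT output is a dot-product attention step, sketched below. The vectors, dimensions, and single-head attention form are assumptions for illustration; the abstract does not disclose the actual integration mechanism.

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attend(query, keys, values):
    # single-head dot-product attention: the query (BERT output) selects a
    # weighted mixture of the background-knowledge representation vectors
    scores = softmax([sum(q * k for q, k in zip(query, key)) for key in keys])
    dim = len(values[0])
    return [sum(s * v[i] for s, v in zip(scores, values)) for i in range(dim)]

bert_output = [1.0, 0.0]               # stand-in for BERT([question; candidate])
background = [[1.0, 0.0], [0.0, 1.0]]  # stand-in background-knowledge vectors
integrated = attend(bert_output, background, background)
```

The integrated vector leans toward the background-knowledge vector most similar to the BERT output.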
  • Publication number: 20220253599
    Abstract: A program trains a representation generator that generates a representation of the answer part included in a passage, in order to classify whether the passage is related to an answer or not. The program causes a computer to operate as: a fake representation generator, responsive to a question and a passage, outputting a fake representation of the answer part of the passage; a real representation generator outputting, for the question and a core answer, a real representation of the core answer in the same format as the fake representation; a discriminator discriminating whether the fake and real representations are real or fake; and a generative adversarial network unit training the discriminator and the fake representation generator through a generative adversarial network, such that the discriminator's error on fake representations is maximized and its error on real representations is minimized.
    Type: Application
    Filed: July 6, 2020
    Publication date: August 11, 2022
    Inventors: Jonghoon OH, Kazuma KADOWAKI, Julien KLOETZER, Ryu IIDA, Kentaro TORISAWA
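The adversarial objective the abstract describes, where the discriminator learns to label real representations correctly while the fake-representation generator tries to induce exactly those mistakes, matches the standard GAN losses. The forms below are a minimal sketch under that assumption; the patent's actual networks and training schedule are not disclosed here.

```python
import math

def discriminator_loss(d_real, d_fake):
    # standard GAN discriminator objective: score real representations high
    # and fake representations low (d_* are probabilities in (0, 1))
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def generator_loss(d_fake):
    # the fake-representation generator is rewarded when the discriminator
    # mislabels its output as real
    return -math.log(d_fake)
```

Both losses fall as each side succeeds at its own goal, which is what drives the adversarial training loop.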
  • Patent number: 11176328
    Abstract: A question answering device includes: a general word vector converter converting a question and an answer to semantic vectors in accordance with general context; a general sentence-level CNN 214 that, in response to the similarities of semantic vectors between words in the question and the answer and to the strength of causality between the words, weights each semantic vector to calculate sentence-level representations of the question and the answer; a general passage-level CNN 218 that, in response to the similarity between the sentence-level representations of the question and the answer, and to the strength of relation of vectors in the sentence-level representations viewed from causality, weights the sentence-level representations to calculate a passage-level representation for the question and answer passage; and a classifier determining whether or not an answer is a correct answer, based on the similarities between the outputs of CNNs 214 and 218.
    Type: Grant
    Filed: June 14, 2018
    Date of Patent: November 16, 2021
    Assignee: National Institute of Information and Communications Technology
    Inventors: Jonghoon Oh, Kentaro Torisawa, Canasai Kruengkrai, Ryu Iida, Julien Kloetzer
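The sentence-level weighting step combines two cues named in the abstract: word-vector similarity between question and answer words, and causality strength between the words. A toy version of that weighting, with cosine similarity and a per-word causality bonus standing in for the learned CNN machinery, might look like this (all specifics are assumptions):

```python
import math

def cosine(u, v):
    # cosine similarity between two word vectors
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def weight_answer_vectors(question_vecs, answer_vecs, causality_bonus):
    # scale each answer word vector by its strongest similarity to any
    # question word, plus a causality-strength bonus for that word
    weighted = []
    for vec, bonus in zip(answer_vecs, causality_bonus):
        w = max(cosine(vec, q) for q in question_vecs) + bonus
        weighted.append([w * x for x in vec])
    return weighted

# first answer word matches the question and carries a causality bonus;
# the second is orthogonal to the question and gets zero weight
weighted = weight_answer_vectors([[1.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]], [0.5, 0.0])
```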
  • Publication number: 20210326675
    Abstract: A memory for a question-answering device that reduces the influence of noise on answer generation and enables highly accurate answers includes: a memory configured to normalize vector expressions of answers included in a set of answers extracted from a prescribed background knowledge source for each of a plurality of mutually different questions, and to store the results as normalized vectors; and a key-value memory access unit responsive to application of a question vector derived from a question, for accessing the memory and updating the question vector by using the degree of relatedness between the question vector and the plurality of questions, together with the normalized vectors corresponding to respective ones of the plurality of questions.
    Type: Application
    Filed: June 18, 2019
    Publication date: October 21, 2021
    Inventors: Jonghoon OH, Kentaro TORISAWA, Canasai KRUENGKRAI, Julien KLOETZER, Ryu IIDA, Ryo ISHIDA, Yoshihiko ASAO
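The key-value access the abstract describes (relatedness between the incoming question vector and stored question keys selects which normalized answer vectors flow back into the updated question vector) can be sketched as one memory read. The softmax relatedness and additive update are illustrative assumptions:

```python
import math

def normalize(v):
    # unit-normalize an answer vector before storing it, as in the abstract
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def update_question_vector(q, memory):
    # memory: list of (question_vector, normalized_answer_vector) pairs;
    # relatedness scores gate how much of each value is read back
    scores = softmax([sum(a * b for a, b in zip(q, mq)) for mq, _ in memory])
    dim = len(q)
    read = [sum(s * mv[i] for s, (_, mv) in zip(scores, memory)) for i in range(dim)]
    return [qi + ri for qi, ri in zip(q, read)]

memory = [([1.0, 0.0], normalize([2.0, 0.0])),
          ([0.0, 1.0], normalize([0.0, 3.0]))]
updated = update_question_vector([1.0, 0.0], memory)
```

The update pulls the question vector mostly toward the answer of the most related stored question.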
  • Patent number: 11106714
    Abstract: A summary generating apparatus includes: a text storage device storing text with information indicating a portion to be focused on; word vector converters vectorizing each word of the text, adding to each vector an element indicating whether the word is focused on or not, and thereby converting the text to a word vector sequence; an LSTM, implemented by a neural network performing sequence-to-sequence conversion, pre-trained by machine learning to output, in response to the word vectors of the word vector sequence input in a prescribed order, a summary of the text consisting of the words represented by the output word sequence; and input units inputting each of the word vectors of the word vector sequence in the prescribed order to the neural network.
    Type: Grant
    Filed: May 7, 2018
    Date of Patent: August 31, 2021
    Assignee: National Institute of Information and Communications Technology
    Inventors: Ryu Iida, Kentaro Torisawa, Jonghoon Oh, Canasai Kruengkrai, Yoshihiko Asao, Noriyuki Abe, Junta Mizuno, Julien Kloetzer
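The distinctive pre-processing step here is appending a focus element to each word vector before the sequence enters the seq2seq LSTM. A minimal sketch of that conversion, with toy embeddings standing in for the real word vectorizer:

```python
def to_word_vectors(tokens, focused_tokens, embeddings):
    # append a focus element (1.0 = focused, 0.0 = not) to each word
    # embedding, so the sequence-to-sequence model can see which portion
    # of the text the summary should concentrate on
    return [embeddings[t] + [1.0 if t in focused_tokens else 0.0] for t in tokens]

embeddings = {"cats": [0.1, 0.2], "sleep": [0.3, 0.4]}  # toy 2-d embeddings
vectors = to_word_vectors(["cats", "sleep"], {"cats"}, embeddings)
```

Each vector gains one extra dimension; only the focused word carries a 1.0 there.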
  • Patent number: 10936664
    Abstract: A dialogue system includes: a question generating unit receiving an input sentence from a user and generating a question using an expression included in the input sentence, by using a dependency relation; an answer obtaining unit inputting the question generated by the question generating unit to a question-answering system and obtaining an answer to the question from the question-answering system; and an utterance generating unit generating an output sentence responding to the input sentence, based on the answer obtained by the answer obtaining unit.
    Type: Grant
    Filed: July 26, 2017
    Date of Patent: March 2, 2021
    Assignee: National Institute of Information and Communications Technology
    Inventors: Noriyuki Abe, Kanako Onishi, Kentaro Torisawa, Canasai Kruengkrai, Jonghoon Oh, Ryu Iida, Yutaka Kidawara
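The three units form a simple pipeline: build a question from an expression in the user's input, send it to a question-answering system, and phrase the answer as a reply. The sketch below fakes the dependency-based question generator with a last-word heuristic and the QA system with a dictionary; both are hypothetical stand-ins.

```python
def generate_question(input_sentence):
    # stand-in for the dependency-based question generating unit: build a
    # question around an expression taken from the user's input
    topic = input_sentence.rstrip(".!").split()[-1]
    return f"What is {topic}?"

def generate_utterance(input_sentence, qa_system):
    # answer obtaining unit + utterance generating unit in one pass
    question = generate_question(input_sentence)
    answer = qa_system.get(question, "I don't know.")
    return f"Speaking of that, {answer}"

qa_system = {"What is tennis?": "a racket sport"}  # toy QA system
response = generate_utterance("I like tennis.", qa_system)
```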
  • Publication number: 20210034817
    Abstract: A request paraphrasing system 120 that allows a dialogue system to flexibly handle requests expressed in many different ways includes: a pre-processing unit 130 converting a user input 56 to a word vector sequence; and a neural paraphrasing model 94 trained in advance by machine learning to receive the word vector sequence as an input and to paraphrase the request it represents into a request having a higher probability of obtaining an answer from a question-answering device 122 than the request before paraphrasing. As pre-processing, whether the user input 56 is a request or not may be determined, and paraphrasing may be applied only when the input is determined to be a request. Further, a classification model 98 may classify the input request to determine which request class it belongs to, and the classification may be input as an additional feature to the neural paraphrasing model 94.
    Type: Application
    Filed: March 5, 2019
    Publication date: February 4, 2021
    Inventors: Yoshihiko ASAO, Ryu IIDA, Canasai KRUENGKRAI, Noriyuki ABE, Kanako ONISHI, Kentaro TORISAWA, Yutaka KIDAWARA
  • Publication number: 20200183983
    Abstract: A dialogue system includes: a question generating unit receiving an input sentence from a user and generating a question using an expression included in the input sentence, by using a dependency relation; an answer obtaining unit inputting the question generated by the question generating unit to a question-answering system and obtaining an answer to the question from the question-answering system; and an utterance generating unit generating an output sentence responding to the input sentence, based on the answer obtained by the answer obtaining unit.
    Type: Application
    Filed: July 26, 2017
    Publication date: June 11, 2020
    Inventors: Noriyuki ABE, Kanako ONISHI, Kentaro TORISAWA, Canasai KRUENGKRAI, Jonghoon OH, Ryu IIDA, Yutaka KIDAWARA
  • Publication number: 20200159755
    Abstract: A summary generating apparatus includes: a text storage device storing text with information indicating a portion to be focused on; word vector converters vectorizing each word of the text, adding to each vector an element indicating whether the word is focused on or not, and thereby converting the text to a word vector sequence; an LSTM, implemented by a neural network performing sequence-to-sequence conversion, pre-trained by machine learning to output, in response to the word vectors of the word vector sequence input in a prescribed order, a summary of the text consisting of the words represented by the output word sequence; and input units inputting each of the word vectors of the word vector sequence in the prescribed order to the neural network.
    Type: Application
    Filed: May 7, 2018
    Publication date: May 21, 2020
    Inventors: Ryu IIDA, Kentaro TORISAWA, Jonghoon OH, Canasai KRUENGKRAI, Yoshihiko ASAO, Noriyuki ABE, Junta MIZUNO, Julien KLOETZER
  • Publication number: 20200134263
    Abstract: A question answering device includes: a general word vector converter converting a question and an answer to semantic vectors in accordance with general context; a general sentence-level CNN 214 that, in response to the similarities of semantic vectors between words in the question and the answer and to the strength of causality between the words, weights each semantic vector to calculate sentence-level representations of the question and the answer; a general passage-level CNN 218 that, in response to the similarity between the sentence-level representations of the question and the answer, and to the strength of relation of vectors in the sentence-level representations viewed from causality, weights the sentence-level representations to calculate a passage-level representation for the question and answer passage; and a classifier determining whether or not an answer is a correct answer, based on the similarities between the outputs of CNNs 214 and 218.
    Type: Application
    Filed: June 14, 2018
    Publication date: April 30, 2020
    Inventors: Jonghoon OH, Kentaro TORISAWA, Canasai KRUENGKRAI, Ryu IIDA, Julien KLOETZER
  • Publication number: 20200034722
    Abstract: A question-answering system includes: a storage unit storing expressions representing causality; an answer receiving unit receiving a question and answer passages each including an answer candidate to the question; a causality expression extracting unit extracting a causality expression from each of the answer passages; a relevant causality expression extracting unit selecting, for a combination of the question and an answer passage, the expression most relevant to the combination from the storage unit; and a neural network receiving the question, the answer passages, semantic relation expressions related to the answer passages, and one of the relevant expressions for the combination of the question and the answer passages, and selecting an answer to the question from the answer passages.
    Type: Application
    Filed: October 2, 2017
    Publication date: January 30, 2020
    Inventors: Jonghoon OH, Kentaro TORISAWA, Canasai KRUENGKRAI, Ryu IIDA, Julien KLOETZER
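The relevant-causality selection step picks, from the stored causality expressions, the one most relevant to a question/answer-passage pair. As a toy stand-in for the patent's relevance model, the sketch below scores relevance by word overlap; the scoring function is entirely an assumption.

```python
def word_overlap(a, b):
    # shared lowercase word count between two strings
    return len(set(a.lower().split()) & set(b.lower().split()))

def most_relevant_causality(question, passage, causality_store):
    # select the stored causality expression sharing the most words with
    # the combined question and answer passage
    return max(causality_store, key=lambda e: word_overlap(e, question + " " + passage))

store = ["rain causes floods", "smoking causes cancer"]  # toy storage unit
best = most_relevant_causality("why do floods happen", "heavy rain fell overnight", store)
```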
  • Publication number: 20190188257
    Abstract: A context analysis apparatus includes an analysis control unit for detecting a predicate whose subject is omitted, together with antecedent candidates for that subject, and an anaphora/ellipsis analysis unit determining the word to be identified. The anaphora/ellipsis analysis unit includes: word vector generating units generating a plurality of different types of word vectors from the sentences for the antecedent candidates; a convolutional neural network receiving a word vector as an input and trained to output a score indicating the probability of each antecedent candidate being the omitted word; and a list storage unit and an identification unit determining the antecedent candidate having the highest score. The word vectors include a plurality of word vectors each extracted at least by using the object of analysis and character sequences of the entire sentences other than the candidates. Similar processing is also possible for other words, such as a referring expression.
    Type: Application
    Filed: August 30, 2017
    Publication date: June 20, 2019
    Inventors: Ryu IIDA, Kentaro TORISAWA, Canasai KRUENGKRAI, Jonghoon OH, Julien KLOETZER
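The identification unit's job reduces to an argmax over candidate scores produced by the trained network. A sketch of that final step, with a toy scoring function standing in for the CNN:

```python
def pick_antecedent(candidates, score):
    # the identification unit: return the antecedent candidate the scorer
    # (a trained CNN in the patent) rates most probable as the omitted word
    return max(candidates, key=score)

# toy scorer standing in for the CNN: prefer the longer noun phrase
best = pick_antecedent(["Tom", "the planning committee"], len)
```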
  • Patent number: 10157171
    Abstract: An annotation data generation assisting system includes: an input/output device receiving input through an interactive process; a morphological analysis system 380 and a dependency parsing system performing morphological and dependency parsing on text data in a text archive; first to fourth candidate generating units detecting a zero anaphor or a referring expression in the dependency relation of a predicate in a sequence of morphemes, identifying a position as an object of annotation, and estimating candidate expressions to be inserted by using language knowledge; a candidate DB storing the estimated candidates; and an interactive annotation device reading candidates for annotation from the candidate DB and annotating a candidate selected through the interactive process by the input/output device.
    Type: Grant
    Filed: January 20, 2016
    Date of Patent: December 18, 2018
    Assignee: National Institute of Information and Communications Technology
    Inventors: Ryu Iida, Kentaro Torisawa, Chikara Hashimoto, Jonghoon Oh, Kiyonori Ootake, Yutaka Kidawara
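The candidate DB plus interactive annotation flow can be sketched as two small steps: build a map from detected positions to estimated candidate expressions, then let a chooser (the human annotator, here a callback) pick one per position. All names and the dictionary shape are illustrative assumptions.

```python
def build_candidate_db(positions, estimate):
    # store, for each position where a zero anaphor or referring expression
    # was detected, the candidate expressions estimated via language knowledge
    return {pos: estimate(pos) for pos in positions}

def annotate(candidate_db, choose):
    # the interactive step: the annotator selects one candidate per position
    return {pos: choose(cands) for pos, cands in candidate_db.items()}

db = build_candidate_db([3], lambda pos: ["he", "she"])  # candidates at token 3
annotations = annotate(db, lambda cands: cands[0])       # annotator picks the first
```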
  • Publication number: 20180246953
    Abstract: A training device includes: a question issuing unit issuing questions stored in a question and expected answer storage unit to a question answering system; an answer candidate filtering unit, an answer candidate determining unit, a training data generating/labeling unit, and a training data selecting unit, which together generate and add to a training data storage unit training data for a ranking unit of the question answering system, from pairs of a question and each of a plurality of answer candidates output with scores from the why-question answering system; and an iteration control unit controlling the question issuing unit, answer candidate filtering unit, answer candidate determining unit, training data generating/labeling unit, and training data selecting unit such that training of the training device, issuance of questions, and addition of training data are repeated until an end condition is satisfied.
    Type: Application
    Filed: August 26, 2016
    Publication date: August 30, 2018
    Inventors: Jonghoon OH, Kentaro TORISAWA, Chikara HASHIMOTO, Ryu IIDA, Masahiro TANAKA, Julien KLOETZER
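One round of the loop the abstract describes — issue questions, collect scored answer candidates, label them, and keep the selected pairs as new training data — can be sketched generically. The callback decomposition below is an assumption about structure, not the patented implementation.

```python
def self_training_round(questions, answer_fn, label_fn, select_fn, training_data):
    # one iteration: issue each question, label its scored answer candidates,
    # and add the selected (question, candidate, label) triples as training data
    for q in questions:
        for candidate, score in answer_fn(q):
            example = (q, candidate, label_fn(q, candidate))
            if select_fn(example, score):
                training_data.append(example)
    return training_data

data = self_training_round(
    ["Why do floods occur?"],
    lambda q: [("because of heavy rain", 0.9), ("maybe", 0.2)],  # toy scored candidates
    lambda q, c: "because" in c,                                 # toy labeler
    lambda example, score: score > 0.5,                          # toy selector
    [],
)
```

The iteration control unit would repeat calls like this until an end condition is satisfied.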
  • Publication number: 20180011830
    Abstract: An annotation data generation assisting system includes: an input/output device receiving input through an interactive process; a morphological analysis system and a dependency parsing system performing morphological and dependency parsing on text data in a text archive; first to fourth candidate generating units detecting a zero anaphor or a referring expression in the dependency relation of a predicate in a sequence of morphemes, identifying a position as an object of annotation, and estimating candidate expressions to be inserted by using language knowledge; a candidate DB storing the estimated candidates; and an interactive annotation device reading candidates for annotation from the candidate DB and annotating a candidate selected through the interactive process by the input/output device.
    Type: Application
    Filed: January 20, 2016
    Publication date: January 11, 2018
    Inventors: Ryu IIDA, Kentaro TORISAWA, Chikara HASHIMOTO, Jonghoon OH, Kiyonori OOTAKE, Yutaka KIDAWARA
  • Patent number: 8868407
    Abstract: A referring expression processor which uses a probabilistic model and in which referring expressions including descriptive, anaphoric and deictic expressions are understood and generated in the course of dialogue is provided. The referring expression processor according to the present invention includes: a referring expression processing section which performs at least one of understanding and generation of referring expressions using a probabilistic model constructed with a referring expression Bayesian network, each referring expression Bayesian network representing relationships between a reference domain (D) which is a set of possible referents, a referent (X) in the reference domain, a concept (C) concerning the referent and a word (W) which represents the concept; and a memory which stores data necessary for constructing the referring expression Bayesian network.
    Type: Grant
    Filed: June 25, 2012
    Date of Patent: October 21, 2014
    Assignee: Honda Motor Co., Ltd.
    Inventors: Kotaro Funakoshi, Mikio Nakano, Takenobu Tokunaga, Ryu Iida
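The Bayesian network relates a reference domain (D), a referent (X), a concept (C), and a word (W). Under the simplifying assumption of a single fixed domain and one observed word, understanding a referring expression amounts to computing P(X | W) ∝ P(X) Σ_C P(C | X) P(W | C); the sketch below implements just that posterior, with all probability tables being toy illustrations.

```python
def referent_posterior(prior, concept_given_referent, word_given_concept, word):
    # P(X | W) ∝ P(X) * sum over concepts C of P(C | X) * P(W | C),
    # a one-word, single-domain slice of the referring-expression network
    scores = {}
    for x, p_x in prior.items():
        likelihood = sum(p_c * word_given_concept.get((word, c), 0.0)
                         for c, p_c in concept_given_referent[x].items())
        scores[x] = p_x * likelihood
    z = sum(scores.values())
    return {x: s / z for x, s in scores.items()}

prior = {"ball": 0.5, "box": 0.5}                               # P(X): toy domain
concept_given_referent = {"ball": {"round": 1.0},               # P(C | X)
                          "box": {"square": 1.0}}
word_given_concept = {("round", "round"): 1.0,                  # P(W | C)
                      ("square", "square"): 1.0}
posterior = referent_posterior(prior, concept_given_referent, word_given_concept, "round")
```

Hearing "round" makes the ball the certain referent in this toy domain.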
  • Publication number: 20130013290
    Abstract: A referring expression processor which uses a probabilistic model and in which referring expressions including descriptive, anaphoric and deictic expressions are understood and generated in the course of dialogue is provided. The referring expression processor according to the present invention includes: a referring expression processing section which performs at least one of understanding and generation of referring expressions using a probabilistic model constructed with a referring expression Bayesian network, each referring expression Bayesian network representing relationships between a reference domain (D) which is a set of possible referents, a referent (X) in the reference domain, a concept (C) concerning the referent and a word (W) which represents the concept; and a memory which stores data necessary for constructing the referring expression Bayesian network.
    Type: Application
    Filed: June 25, 2012
    Publication date: January 10, 2013
    Applicant: HONDA MOTOR CO., LTD.
    Inventors: Kotaro FUNAKOSHI, Mikio NAKANO, Takenobu TOKUNAGA, Ryu IIDA