Patents by Inventor Junji Tomita
Junji Tomita has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11972365
Abstract: A question generation device includes: generating means which uses a query and a relevant document including an answer to the query as input and, using a machine learning model trained in advance, generates a revised query in which a potentially defective portion of the query is supplemented with a word included in a prescribed lexical set.
Type: Grant
Filed: April 25, 2019
Date of Patent: April 30, 2024
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Atsushi Otsuka, Kyosuke Nishida, Itsumi Saito, Kosuke Nishida, Hisako Asano, Junji Tomita
-
Patent number: 11954435
Abstract: A text generation apparatus includes a memory and a processor configured to execute acquiring a reference text based on an input text and information different from the input text; and generating a text based on the input text and the reference text, wherein the acquiring and the generating are implemented as neural networks based on learned parameters.
Type: Grant
Filed: March 3, 2020
Date of Patent: April 9, 2024
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Itsumi Saito, Kyosuke Nishida, Kosuke Nishida, Hisako Asano, Junji Tomita, Atsushi Otsuka
-
Patent number: 11941365
Abstract: A model learning apparatus of the present invention has a question/answer-pair expansion unit and a translation-model learning unit. The question/answer-pair expansion unit generates expansion question/answer pairs by increasing the number of question/answer pairs associated with an index indicating that it sounds more like the character. The translation-model learning unit learns a translation model and a reverse translation model by using the expansion question/answer pairs. A response selecting apparatus of the present invention has a record unit, a document search unit, a score calculation unit, and a ranking unit. The record unit records question/answer pairs and the above-described learned translation model. The score calculation unit obtains a translation likelihood, which is a numerical value based on the probability of obtaining the answer from the input question, and calculates a score of each search-result question/answer pair with respect to the input question.
Type: Grant
Filed: April 9, 2019
Date of Patent: March 26, 2024
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Ryuichiro Higashinaka, Masahiro Mizukami, Junji Tomita
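The retrieval-and-ranking stage described above (score each stored question/answer pair against the input question, then rank) can be sketched in a few lines. The unigram-overlap score below is an invented stand-in for the learned translation model; the function names and example pairs are illustrative only.

```python
# Illustrative sketch: score and rank question/answer pairs against an
# input question. The toy "translation likelihood" below just counts
# how many answer words also appear in the question; the real system
# uses a learned translation model.

def translation_likelihood(question, answer):
    """Toy stand-in: fraction of answer words that appear in the question."""
    q_words = set(question.split())
    a_words = answer.split()
    return sum(w in q_words for w in a_words) / len(a_words)

def rank_pairs(question, qa_pairs):
    """Return (question, answer) pairs sorted by descending score."""
    return sorted(qa_pairs,
                  key=lambda qa: translation_likelihood(question, qa[1]),
                  reverse=True)

pairs = [
    ("what do you like", "i like cats"),
    ("how are you", "you look fine today"),
]
ranked = rank_pairs("do you like cats", pairs)
```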
-
Patent number: 11893353
Abstract: To make it possible to accurately generate a word vector even if vocabulary of a word vector data set is not limited. In a vector generating device 10 that generates vectors representing an input sentence P, when generating a series of the vectors representing the input sentence P based on vectors corresponding to words included in the input sentence P, a definition-sentence-considered-context encode unit 280 generates, based on a dictionary DB 230 storing sets of headwords y and definition sentences Dy, which are sentences defining the headwords y, concerning a word, which is the headword stored in the dictionary DB, among the words included in the input sentence P, the series of the vectors representing the input sentence P using the definition sentence Dy of the headwords y.
Type: Grant
Filed: March 4, 2019
Date of Patent: February 6, 2024
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Kosuke Nishida, Kyosuke Nishida, Hisako Asano, Junji Tomita
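The core idea, falling back to a headword's dictionary definition when building a word's vector, can be illustrated with a minimal sketch. The tiny embedding table, dictionary, and simple averaging are invented stand-ins for the learned encode unit (280) and dictionary DB (230) in the abstract.

```python
# Illustrative sketch: derive a vector for an out-of-vocabulary word by
# averaging the vectors of the words in its dictionary definition.
# The 2-dimensional embeddings and the dictionary entry are made up.

EMBED = {
    "feline": [0.9, 0.1],
    "animal": [0.5, 0.5],
    "small":  [0.2, 0.4],
}
DICTIONARY = {  # headword -> definition sentence
    "cat": "small feline animal",
}

def word_vector(word):
    """Return the word's vector; fall back to its definition if absent."""
    if word in EMBED:
        return EMBED[word]
    definition = DICTIONARY.get(word)
    if definition is None:
        return [0.0, 0.0]  # unknown word with no definition
    vecs = [EMBED[w] for w in definition.split() if w in EMBED]
    if not vecs:
        return [0.0, 0.0]
    return [sum(v[d] for v in vecs) / len(vecs) for d in range(2)]

cat_vec = word_vector("cat")  # averaged from "small feline animal"
```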
-
Patent number: 11809820
Abstract: It is an object to successfully absorb a difference in characteristics to be taken into consideration between languages and implement common named entity extraction in a processing system.
Type: Grant
Filed: April 22, 2019
Date of Patent: November 7, 2023
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Kuniko Saito, Nozomi Kobayashi, Junji Tomita
-
Publication number: 20230306202
Abstract: A language processing apparatus includes: a preprocessing unit that splits an input text into a plurality of short texts; a language processing unit that calculates a first feature and a second feature using a trained model for each of the plurality of short texts; and an external storage unit configured to store a third feature for one or more short texts, and the language processing unit uses the trained model to calculate the second feature for a certain short text using the first feature of the short text and the third feature stored in the external storage unit.
Type: Application
Filed: August 20, 2020
Publication date: September 28, 2023
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Yasuhito OSUGI, Itsumi SAITO, Kyosuke NISHIDA, Hisako ASANO, Junji TOMITA
-
Publication number: 20230252983
Abstract: An input unit receives a morpheme array and parts-of-speech of morphemes of the morpheme array. An ambiguous word candidate acquisition unit (26) acquires, for each morpheme of the morpheme array, based on a notation and a part-of-speech of the morpheme, reading candidates of the morpheme from reading candidates of the morpheme defined in advance for each combination of a notation and a part-of-speech of the morpheme. A disambiguation unit (30) determines a reading of the morpheme from the acquired reading candidates of the morpheme by using a disambiguation rule in which a reading of the morpheme is defined in advance correspondingly to appearance positions of other morphemes and notations, parts-of-speech, or character types of the other morphemes.
Type: Application
Filed: May 8, 2019
Publication date: August 10, 2023
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Nozomi KOBAYASHI, Yusuke IJIMA, Junji TOMITA
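A minimal sketch of the two-stage flow in this abstract: reading candidates are looked up by (notation, part-of-speech), and a context rule resolves any remaining ambiguity. The English entries and the single rule below are invented examples; the actual rules key on appearance positions, notations, parts-of-speech, and character types of neighboring morphemes.

```python
# Illustrative sketch: candidate lookup keyed by (notation, POS),
# then a context-based disambiguation rule. Entries and the rule
# are invented for demonstration.

# (notation, part_of_speech) -> possible readings
READING_CANDIDATES = {
    ("bass", "noun"): ["BASS-fish", "BASS-music"],
    ("deep", "adj"):  ["DEEP"],
}

def disambiguate(morphemes):
    """morphemes: list of (notation, pos). Returns one reading each."""
    readings = []
    for i, (notation, pos) in enumerate(morphemes):
        candidates = READING_CANDIDATES.get((notation, pos), [notation])
        if len(candidates) == 1:
            readings.append(candidates[0])
            continue
        # Invented rule: when the preceding morpheme is "deep", pick the
        # second candidate (the musical reading); otherwise the first.
        prev = morphemes[i - 1][0] if i > 0 else None
        readings.append(candidates[1] if prev == "deep" else candidates[0])
    return readings

result = disambiguate([("deep", "adj"), ("bass", "noun")])
```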
-
Patent number: 11704485
Abstract: An appropriate vector of any phrase can be generated. A lattice construction unit 212 constructs a lattice structure formed by links binding adjacent word or phrase candidates based on a morphological analysis result and a dependency analysis result of input text. A first learning unit 213 performs learning of a neural network A for estimating nearby word or phrase candidates from word or phrase candidates based on the lattice structure. A vector generation unit 214 acquires a vector of each of the word or phrase candidates from the neural network A and sets the vector as learning data. A second learning unit performs learning of a neural network B for vectorizing the word or phrase candidates based on the learning data.
Type: Grant
Filed: February 15, 2019
Date of Patent: July 18, 2023
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Itsumi Saito, Kyosuke Nishida, Hisako Asano, Junji Tomita
-
Patent number: 11693854
Abstract: This disclosure provides an answer generation unit configured to receive a document and a question as inputs, and execute processing of generating an answer sentence for the question by a learned model, using words included in the union of a predetermined first vocabulary and a second vocabulary composed of words included in the document and the question. The learned model includes a neural network that has learned in advance whether a word included in the answer sentence is included in the second vocabulary, and increases or decreases the probability that a word included in the second vocabulary is selected as a word of the answer sentence at the time of generating the answer sentence.
Type: Grant
Filed: March 27, 2019
Date of Patent: July 4, 2023
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Kyosuke Nishida, Atsushi Otsuka, Itsumi Saito, Hisako Asano, Junji Tomita
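The vocabulary-union mechanism resembles a copy/pointer-style mixture: a distribution over a fixed first vocabulary is blended with a distribution over words from the source, so that source words can gain (or lose) probability. A minimal sketch, with made-up distributions and a fixed gate in place of the learned one:

```python
# Illustrative sketch: mix a fixed-vocabulary distribution with a
# source-copy distribution over the union vocabulary. In the actual
# model the gate p_copy is produced by a trained network; here it is
# simply a constant.

def mix_distributions(p_vocab, p_source, p_copy):
    """Blend generation and copy distributions over the union vocabulary."""
    words = set(p_vocab) | set(p_source)
    return {
        w: (1 - p_copy) * p_vocab.get(w, 0.0) + p_copy * p_source.get(w, 0.0)
        for w in words
    }

p_vocab  = {"the": 0.6, "cat": 0.3, "dog": 0.1}   # first vocabulary
p_source = {"cat": 0.8, "mat": 0.2}               # words from the document
p = mix_distributions(p_vocab, p_source, p_copy=0.5)
# "cat" appears in both distributions, so its probability is boosted;
# "mat" becomes reachable even though the fixed vocabulary lacks it.
```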
-
Publication number: 20230195723
Abstract: An estimating device according to an embodiment includes a first input processing unit that takes a question sentence relating to a database and configuration information representing a configuration of the database as input, and creates first input data composed of the question sentence, a table name of a table stored in the database, a column name of a column included in the table of the table name, and a value of the column, and a first estimating unit that estimates whether or not a column name included in the first input data is used in an SQL query for searching the database for an answer with regard to the question sentence, using a first parameter that is trained in advance.
Type: Application
Filed: May 20, 2020
Publication date: June 22, 2023
Inventors: Soichiro KAKU, Kyosuke NISHIDA, Junji TOMITA
-
Patent number: 11651166
Abstract: A learning device of a phrase generation model includes a memory; and a processor configured to execute learning the phrase generation model including an encoder and a decoder, by using, as training data, a 3-tuple. The 3-tuple includes a combination of phrases and at least one of a conjunctive expression representing a relationship between the phrases, and a relational label indicating the relationship represented by the conjunctive expression. The encoder is configured to convert a phrase into a vector from a 2-tuple. The 2-tuple includes a phrase and at least one of the conjunctive expression and the relational label. The decoder is configured to generate, from the converted vector and the conjunctive expression or the relational label, a phrase having the relationship represented by the conjunctive expression or the relational label with respect to the phrase.
Type: Grant
Filed: February 22, 2019
Date of Patent: May 16, 2023
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Itsumi Saito, Kyosuke Nishida, Hisako Asano, Junji Tomita
-
Publication number: 20230130902
Abstract: A text generation apparatus includes a processor and a memory storing program instructions that cause the processor to: receive an input sentence including one or more words; estimate an importance of each word included in the input sentence and encode the input sentence; and take the importance and the result of encoding the input sentence as inputs to generate a sentence based on the input sentence. The processor uses a neural network based on learned parameters. This improves the accuracy of sentence generation.
Type: Application
Filed: March 3, 2020
Publication date: April 27, 2023
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Itsumi SAITO, Kyosuke NISHIDA, Kosuke NISHIDA, Hisako ASANO, Junji TOMITA
-
Publication number: 20230076576
Abstract: A learning device includes a memory; and a processor configured to execute answer generation means for taking data including text, and a question text related to the data as inputs; creating, by using a model parameter of a neural network, a token sequence that takes visual information in the data into consideration, and generating an answer text to the question text, based on the created token sequence; and learning means for learning the model parameter by using the answer text and a correct answer text to the question text.
Type: Application
Filed: December 9, 2020
Publication date: March 9, 2023
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Kyosuke NISHIDA, Ryota TANAKA, Sen YOSHIDA, Junji TOMITA
-
Publication number: 20230072537
Abstract: A learning apparatus according to an embodiment has a feature generation means configured to take a search query, a first document related to the search query, and a second document that is not related to the search query as input, and generate a feature of the search query, a feature of the first document, and a feature of the second document, by using model parameters of a neural network, and an update means configured to take the feature of the search query, the feature of the first document, and the feature of the second document as input, and update the model parameters by using an error function including a cost function that is a differentiable approximation function of an L0 norm.
Type: Application
Filed: January 29, 2020
Publication date: March 9, 2023
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Taku HASEGAWA, Kyosuke NISHIDA, Junji TOMITA, Hisako ASANO
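The notable ingredient here is a differentiable approximation of the L0 norm, which counts nonzero entries but has no useful gradient. One common family of smooth surrogates (the exponential form below is an illustrative choice, not necessarily the one used in the application) replaces the per-entry 0/1 indicator with a saturating function:

```python
import math

def l0_exact(xs):
    """True L0 norm: the number of nonzero entries (not differentiable)."""
    return sum(1 for x in xs if x != 0)

def l0_smooth(xs, alpha=50.0):
    """Differentiable surrogate: 1 - exp(-alpha*|x|) approaches the 0/1
    indicator as alpha grows, yet has a nonzero gradient everywhere,
    so it can drive features toward exact sparsity during training."""
    return sum(1.0 - math.exp(-alpha * abs(x)) for x in xs)

sparse = [0.0, 0.0, 0.9, 0.0, -1.2]
exact = l0_exact(sparse)    # 2 nonzero entries
approx = l0_smooth(sparse)  # very close to 2 at this alpha
```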
-
Publication number: 20230034414
Abstract: A dialogue processing apparatus includes one or more computers each including a memory and a processor configured to receive a question Qi as a word string representing a current question in an interactive machine reading comprehension task, a question history {Q1, . . . , Qi−1} as a set of word strings representing previous questions, and an answer history {A1, . . . , Ai−1} as a set of word strings representing previous answers to the previous questions, and use a pre-learned first model parameter, to generate an encoded context vector reflecting an attribute or an importance degree of each of the previous questions and the previous answers; and receive a document P to be used to generate an answer Ai to the question Qi and the encoded context vector, and use a pre-learned second model parameter, to perform matching between the document and the previous questions and previous answers, to generate the answer.
Type: Application
Filed: December 12, 2019
Publication date: February 2, 2023
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Yasuhito OSUGI, Itsumi SAITO, Kyosuke NISHIDA, Hisako ASANO, Junji TOMITA
-
Patent number: 11568132
Abstract: The present disclosure relates to concurrent learning of a relationship estimation model and a phrase generation model. The relationship estimation model estimates a relationship between phrases. The phrase generation model generates a phrase that relates to an input phrase. The phrase generation model includes an encoder and a decoder. The encoder converts a phrase into a vector using a three-piece set as learning data. The decoder generates, based on the converted vector and a connection expression or a relationship label, a phrase having a relationship expressed by the connection expression or the relationship label for the phrase. The relationship estimation model generates a relationship score from the converted vector, which indicates each phrase included in a combination of the phrases, and a vector indicating the connection expression and the relationship label.
Type: Grant
Filed: March 1, 2019
Date of Patent: January 31, 2023
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Itsumi Saito, Kyosuke Nishida, Hisako Asano, Junji Tomita
-
Publication number: 20230026110
Abstract: In a training data generation method, a computer executes: a generation step for generating partial data of a summary sentence created for text data; an extraction step for extracting, from the text data, a sentence set that is a portion of the text data, based on a similarity with the partial data; and a determination step for determining whether or not the partial data is to be used as training data for a neural network that generates a summary sentence, based on the similarity between the partial data and the sentence set. Thus, it is possible to streamline the collection of training data for a neural summarization model.
Type: Application
Filed: December 18, 2019
Publication date: January 26, 2023
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Itsumi SAITO, Kyosuke NISHIDA, Hisako ASANO, Junji TOMITA
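The accept/reject decision in this abstract can be sketched as a similarity threshold between a summary fragment and its best-matching source sentences. Jaccard word overlap and the 0.5 threshold below are invented stand-ins for the similarity measure actually used:

```python
# Illustrative sketch of the filter: a summary fragment is kept as
# training data only if some source sentence is sufficiently similar
# to it, i.e. the fragment is well supported by the source text.

def similarity(a, b):
    """Word-overlap (Jaccard) similarity between two sentences."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def keep_as_training_data(fragment, source_sentences, threshold=0.5):
    """Accept the fragment if its best source match clears the threshold."""
    return max(similarity(fragment, s) for s in source_sentences) >= threshold

source = [
    "the model was trained on news articles",
    "results improved on all benchmarks",
]
supported = keep_as_training_data("the model was trained on articles", source)
unsupported = keep_as_training_data("completely unrelated sentence here", source)
```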
-
Publication number: 20230028376
Abstract: The efficiency of summary learning that requires an additional input parameter is improved by causing a computer to execute: a first learning step of learning a first model for calculating an importance value of each component in source text, with use of a first training data group and a second training data group, the first training data group including source text, a query related to a summary of the source text, and summary data related to the query in the source text, and the second training group including source text and summary data generated based on the source text; and a second learning step of learning a second model for generating summary data from source text of training data, with use of each piece of training data in the second training data group and a plurality of components extracted for each piece of training data in the second training data group based on importance values calculated by the first model for components of the source text of the piece of training data.
Type: Application
Filed: December 18, 2019
Publication date: January 26, 2023
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Itsumi SAITO, Kyosuke NISHIDA, Kosuke NISHIDA, Hisako ASANO, Junji TOMITA
-
Patent number: 11545150
Abstract: Provided are a dialogue device, a dialogue method, a data structure, and a program capable of realizing various dialogues while reducing the amount of description of a dialogue scenario. A knowledge transition unit 120 determines a next type of knowledge based on: a knowledge base 130 in which a relation label indicating each of relations between a plurality of types of knowledge is attached to knowledge about each of utterances to express the knowledge about the utterance; a user utterance; current knowledge; and a dialogue scenario including a basic scenario in which a transition method between the plurality of types of knowledge in the knowledge base is determined using the relation label, and an utterance generation unit 150 generates a system utterance based on the next type of knowledge.
Type: Grant
Filed: March 4, 2019
Date of Patent: January 3, 2023
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Atsushi Otsuka, Ko Mitsuda, Taichi Katayama, Junji Tomita
-
Patent number: 11537790
Abstract: To arrange all words so that the distance of a given word pair will be appropriate. Using as input a concept base 22 which is a set of pairs of a word and a vector representing a concept of the word, and a dictionary 24 which is a set of semantically distant or close word pairs, when a word pair C being a pair of given words A, B in the concept base 22 is present in the dictionary 24, conversion means 30 associates with the word pair C a magnitude D of a difference vector between a difference vector V′ between a converted vector of the word A and a converted vector of the word B, and a vector kV determined by multiplying a difference vector V between the vector of the word A in the concept base 22 and the vector of the word B in the concept base 22 by a scalar value k. When the word pair C is not present in the dictionary 24, the conversion means 30 associates the magnitude D of the difference vector between the difference vector V′ and the difference vector V with the word pair C.
Type: Grant
Filed: April 4, 2019
Date of Patent: December 27, 2022
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Katsuji Bessho, Hisako Asano, Junji Tomita
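The quantity D attached to each word pair follows directly from the abstract: the norm of the difference between the converted-space difference vector V′ and k times the concept-base difference vector V. A small sketch with made-up vectors:

```python
import math

def pair_distance(va_conv, vb_conv, va, vb, k):
    """D = ||V' - k*V||, where V' is the difference of the converted
    vectors of words A and B, and V the difference of their original
    concept-base vectors."""
    v_prime = [a - b for a, b in zip(va_conv, vb_conv)]  # V' after conversion
    v = [a - b for a, b in zip(va, vb)]                  # V in the concept base
    return math.sqrt(sum((p - k * q) ** 2 for p, q in zip(v_prime, v)))

# Made-up concept-base vectors for words A and B, and converted counterparts.
va, vb = [1.0, 0.0], [0.0, 1.0]
va_conv, vb_conv = [2.0, 0.0], [0.0, 2.0]
d = pair_distance(va_conv, vb_conv, va, vb, k=2.0)  # V' equals 2*V, so D = 0
```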