Patents by Inventor Tomoharu Iwata
Tomoharu Iwata has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12039786
Abstract: A learning device includes: input means for inputting route information on a set of routes each constituted by one or more ways, and passing mobile object information that indicates the number of passing mobile objects on an observed way, out of the one or more ways, at each time point; and learning means for learning parameters of a model in which the travel speed of the mobile objects is taken into consideration, using the route information and the passing mobile object information.
Type: Grant
Filed: December 3, 2019
Date of Patent: July 16, 2024
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Tomoharu Iwata, Naoki Marumo, Hitoshi Shimizu
-
Publication number: 20240232624
Abstract: Provided is a technique for training a neural network including an encoder and a decoder such that a certain latent variable in the latent variable vector becomes larger, or smaller, as the magnitude of a certain property of the input vector increases. A neural network learning device trains a neural network including an encoder that converts an input vector into a latent variable vector and a decoder that converts the latent variable vector into an output vector such that the input vector and the output vector are substantially equal to each other, and the training causes the latent variable to be monotonic with respect to the input vector.
Type: Application
Filed: May 17, 2021
Publication date: July 11, 2024
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Takashi HATTORI, Hiroshi SAWADA, Tomoharu IWATA
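As a rough illustration only: the sketch below trains a toy autoencoder whose latent dimension 0 is encouraged to be monotone in an observed property of the input. The architecture, the pairwise hinge penalty, and the choice of property `p` are assumptions made for this example, not the published method.

```python
# Illustrative sketch only: monotonicity via a pairwise hinge penalty (assumed).
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
dec = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 8))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

x = torch.randn(256, 8)   # toy input vectors
p = x[:, 0]               # assumed observable "property" of each input

for _ in range(200):
    z = enc(x)
    loss = ((dec(z) - x) ** 2).mean()          # output vector ~ input vector
    i, j = torch.randint(0, 256, (2, 128))     # random pairs of samples
    sign = torch.sign(p[i] - p[j])
    # hinge: whenever p_i > p_j, push latent z_i[0] above z_j[0] (monotonicity)
    loss = loss + torch.relu(-sign * (z[i, 0] - z[j, 0])).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```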
-
Publication number: 20240232646
Abstract: A learning device for predicting an occurrence of an event includes a memory and a processor configured to: divide a support set extracted from a set of previous data for learning into a plurality of sections; output a first latent vector based on each of the divided sections and output a second latent vector based on the output first latent vectors; and output an intensity function indicating a likelihood of the event occurring based on the second latent vector.
Type: Application
Filed: May 7, 2021
Publication date: July 11, 2024
Inventors: Yoshiaki TAKIMOTO, Takeshi KURASHIMA, Yusuke TANAKA, Tomoharu IWATA
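A minimal sketch of the two-level encoding described above, assuming mean pooling at both levels and a softplus-parameterized intensity; the network shapes and the toy timestamps are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

sec_enc = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 8))    # sections -> first latents
agg = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 8))        # first -> second latent
inten = nn.Sequential(nn.Linear(1 + 8, 16), nn.ReLU(), nn.Linear(16, 1))  # intensity head

events = torch.rand(100, 1).sort(dim=0).values     # toy support-set event times
sections = events.chunk(5, dim=0)                  # divide the support set
z1 = torch.stack([sec_enc(s).mean(0) for s in sections])  # one first latent per section
z2 = agg(z1).mean(0)                               # second latent vector

t = torch.linspace(0, 1, 50).unsqueeze(1)          # query time points
lam = F.softplus(inten(torch.cat([t, z2.unsqueeze(0).expand(50, 8)], dim=1)))
# lam[k] is the (illustrative) event intensity at time t[k]
```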
-
Publication number: 20240220800
Abstract: Provided is a technique for training a neural network including an encoder and a decoder such that a certain latent variable in the latent variable vector becomes larger, or smaller, as the magnitude of a certain property of the input vector increases. A neural network learning device trains a neural network including an encoder that converts an input vector into a latent variable vector and a decoder that converts the latent variable vector into an output vector such that the input vector and the output vector are substantially identical to each other, and the training is performed so that either all weight parameters of the decoder are non-negative or all weight parameters of the decoder are non-positive.
Type: Application
Filed: May 17, 2021
Publication date: July 4, 2024
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Takashi HATTORI, Hiroshi SAWADA, Tomoharu IWATA
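One straightforward way such a sign constraint could be kept satisfied is to clamp the decoder weights after every optimizer step; the tiny autoencoder below is an assumption for illustration, not the claimed device.

```python
# Illustrative sketch: enforce the all-non-negative decoder option by clamping.
import torch
import torch.nn as nn

enc = nn.Linear(8, 4)
dec = nn.Linear(4, 8)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

x = torch.randn(256, 8)
for _ in range(200):
    loss = ((dec(torch.relu(enc(x))) - x) ** 2).mean()   # output ~ input
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        dec.weight.clamp_(min=0.0)   # every decoder weight kept non-negative
```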
-
Publication number: 20240169204
Abstract: A learning method executed by a computer including a memory and a processor, the method including: inputting a learning data set including a plurality of pieces of observation data; estimating, by a neural network, parameters of prior distributions of a plurality of pieces of data in a case where the post-missing observation data is expressed by a product of the plurality of pieces of data, using the post-missing observation data in which some values included in the observation data are set as missing values; updating the plurality of pieces of data using the parameters of the prior distributions such that the product of the plurality of pieces of data matches the post-missing observation data; estimating a missing value of the post-missing observation data from the plurality of pieces of updated data; and updating model parameters, including the parameters of the neural network, to increase the estimation accuracy of the missing value.
Type: Application
Filed: March 11, 2021
Publication date: May 23, 2024
Inventor: Tomoharu IWATA
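To make the "product matches the observed data" step concrete, here is a sketch of the factorization step alone: fit X ≈ U·V on the observed entries and read imputations off the product. The neural-network prior over the factors described in the abstract is deliberately omitted; shapes and ranks are assumptions.

```python
import torch

X = torch.randn(20, 15)                  # toy observation data
mask = torch.rand_like(X) > 0.3          # True where a value was observed
U = torch.randn(20, 4, requires_grad=True)
V = torch.randn(4, 15, requires_grad=True)
opt = torch.optim.Adam([U, V], lr=1e-2)

for _ in range(500):
    loss = (((U @ V - X) ** 2)[mask]).mean()    # product matches observed data
    opt.zero_grad(); loss.backward(); opt.step()

imputed = torch.where(mask, X, (U @ V).detach())  # estimated missing values
```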
-
Patent number: 11940281
Abstract: A learning apparatus includes an input unit configured to input route information indicating a route including one or more paths of a plurality of paths, and moving body number information indicating the number of moving bodies for a date and a time on a path to be observed among the plurality of paths, and a learning unit configured to learn a parameter of a model indicating a relationship between the number of moving bodies for each of the plurality of paths and the number of moving bodies for the route, and a relationship between the numbers of moving bodies for the route at different dates and times, by using the route information and the moving body number information.
Type: Grant
Filed: February 12, 2020
Date of Patent: March 26, 2024
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Tomoharu Iwata, Hitoshi Shimizu
-
Publication number: 20240045921
Abstract: A parameter estimation device for estimating a plurality of parameters used for calculating high-resolution data from data aggregated to coarse granularity, the parameter estimation device comprising: a parameter estimation unit configured to estimate a plurality of parameters that are unknown variables in a model so as to maximize a marginal likelihood, on the assumption that the actually observed aggregated data is generated from a model based on a multivariate Gaussian process in which a plurality of latent Gaussian processes, for a plurality of types of aggregated data in a plurality of domains, are combined by linear mixing; and a storage unit configured to store the plurality of parameters, wherein the plurality of parameters include a hyperparameter of a prior distribution over the mixing coefficients used in the linear mixing.
Type: Application
Filed: December 10, 2020
Publication date: February 8, 2024
Inventors: Yusuke TANAKA, Tomoharu IWATA, Takeshi KURASHIMA, Hiroyuki TODA
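To show the estimation principle in isolation, here is a heavily simplified sketch: a single latent Gaussian process, no linear mixing and no prior over mixing coefficients, with two kernel hyperparameters fitted by gradient descent on the negative log marginal likelihood. Everything beyond "maximize a Gaussian marginal likelihood over unknown parameters" is an assumption.

```python
import torch

X = torch.linspace(0, 1, 30).unsqueeze(1)                 # toy inputs
y = torch.sin(6 * X).squeeze() + 0.1 * torch.randn(30)    # toy observations
log_ls = torch.zeros((), requires_grad=True)              # log lengthscale
log_sn = torch.zeros((), requires_grad=True)              # log noise scale
opt = torch.optim.Adam([log_ls, log_sn], lr=0.05)

for _ in range(200):
    d2 = (X - X.T) ** 2                                   # pairwise squared distances
    K = torch.exp(-0.5 * d2 / log_ls.exp() ** 2) + log_sn.exp() ** 2 * torch.eye(30)
    mvn = torch.distributions.MultivariateNormal(torch.zeros(30), covariance_matrix=K)
    loss = -mvn.log_prob(y)                 # negative log marginal likelihood
    opt.zero_grad(); loss.backward(); opt.step()
```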
-
Publication number: 20240012869
Abstract: An estimation apparatus includes: input means for inputting aggregated data, in which a plurality of data are aggregated, and feature data representing a feature of the aggregated data; determination means for determining a parameter of a model of the plurality of data before aggregation, using a predetermined function and the feature data; and estimation means for estimating a parameter of the function and the plurality of data by optimizing a predetermined objective function, using the aggregated data.
Type: Application
Filed: November 7, 2019
Publication date: January 11, 2024
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Tomoharu IWATA, Hitoshi SHIMIZU
-
Patent number: 11859992
Abstract: The present disclosure relates to an apparatus and methods of estimating the traffic volume of moving objects. In particular, the present disclosure estimates the traffic volume from the amounts of traffic observed at observation points, based on a routing matrix and a visitor matrix. The routing matrix indicates whether moving objects that pass through specific waypoints are to be observed at an observation point. The visitor matrix indicates whether a moving object departs from or arrives at the observation point. The present disclosure enables estimating the traffic volume of moving objects on various routes from observed data that contains errors and varying observation-period lengths.
Type: Grant
Filed: March 26, 2019
Date of Patent: January 2, 2024
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Hitoshi Shimizu, Tatsushi Matsubayashi, Yusuke Tanaka, Takuma Otsuka, Hiroshi Sawada, Tomoharu Iwata, Naonori Ueda
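As an illustration of the underlying linear structure, the sketch below recovers non-negative per-route traffic x from observation-point counts y ≈ Rx, with R standing in for the routing matrix. The visitor matrix, the error model, and the projected-gradient solver are all assumptions, not the patented estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.integers(0, 2, size=(6, 10)).astype(float)   # which routes each point sees
x_true = rng.poisson(20, size=10).astype(float)      # true per-route traffic
y = R @ x_true + rng.normal(0, 1, size=6)            # noisy observed counts

x = np.zeros(10)
for _ in range(2000):
    x -= 1e-3 * R.T @ (R @ x - y)   # gradient step on the squared error
    x = np.clip(x, 0, None)         # project: traffic volume is non-negative
```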
-
Publication number: 20230419120
Abstract: A learning method according to an embodiment causes a computer to execute: an input step of inputting a plurality of data sets; and a learning step of learning, based on the plurality of input data sets, an estimation model for estimating a parameter of a topic model from a smaller amount of data than the amount of data included in the plurality of data sets.
Type: Application
Filed: October 5, 2020
Publication date: December 28, 2023
Inventor: Tomoharu IWATA
-
Publication number: 20230325661
Abstract: A learning method, executed by a computer including a memory and a processor, includes: inputting a plurality of items of data and a plurality of labels representing the clusters to which the items of data belong; converting each item of data by a predetermined neural network to generate a plurality of items of representation data; clustering the items of representation data; calculating a predetermined evaluation scale indicating the performance of the clustering, based on the clustering result and the labels; and learning a parameter of the neural network based on the evaluation scale.
Type: Application
Filed: September 18, 2020
Publication date: October 12, 2023
Inventor: Tomoharu IWATA
-
Publication number: 20230274133
Abstract: A learning method includes: receiving as input a set of data sets {D1, . . . , DT}, wherein Dt for a task t in a task set {1, . . . , T} includes feature amount vectors of cases of t; sampling t from the task set, and sampling a first subset from Dt and a second subset from Dt excluding the first subset; generating a task vector representing a property of t corresponding to the first subset by a first neural network; nonlinearly transforming the feature amount vectors included in the second subset by a second neural network using the task vector; calculating scores representing degrees of anomaly of the feature amount vectors using the transformed feature amount vectors and a preset center vector; and learning parameters of the first and second neural networks so as to raise an index value representing the generalization performance of anomaly detection, using the scores.
Type: Application
Filed: July 6, 2020
Publication date: August 31, 2023
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventor: Tomoharu IWATA
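A minimal sketch of the scoring path only (no training loop), assuming mean pooling for the task vector and illustrative shapes throughout; in training, the abstract's generalization index would be computed from these scores and backpropagated through both networks.

```python
import torch
import torch.nn as nn

task_net = nn.Sequential(nn.Linear(5, 16), nn.ReLU(), nn.Linear(16, 8))       # first NN
trans_net = nn.Sequential(nn.Linear(5 + 8, 16), nn.ReLU(), nn.Linear(16, 8))  # second NN
center = torch.zeros(8)                          # preset center vector

support = torch.randn(30, 5)                     # first subset sampled from Dt
query = torch.randn(10, 5)                       # second subset

task_vec = task_net(support).mean(0)             # task vector for task t
h = trans_net(torch.cat([query, task_vec.unsqueeze(0).expand(10, 8)], dim=1))
scores = ((h - center) ** 2).sum(dim=1)          # higher score = more anomalous
```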
-
Publication number: 20230244928
Abstract: A learning apparatus includes a memory and a processor configured to execute: receiving as input, where Cr denotes the set of indices of the response variables of a task r in a set of tasks R, a data set Drc composed of pairs of response variables and explanatory variables; sampling the task r from R, an index c from Cr, a first subset from Drc, and a second subset from Drc excluding the first subset; generating a task vector representing a property of the task corresponding to the first subset with a first neural network; calculating, from the task vector and the explanatory variables in the second subset, predicted values of the response variables with a second neural network; and updating the first and second neural networks using the error between the response variables in the second subset and their predicted values.
Type: Application
Filed: June 8, 2020
Publication date: August 3, 2023
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventor: Tomoharu IWATA
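An illustrative end-to-end sketch on a toy sine task in the spirit of the abstract; the encoder pooling, the shapes, and the task distribution are assumptions, not the claimed apparatus.

```python
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 8))       # first NN
pred = nn.Sequential(nn.Linear(1 + 8, 16), nn.ReLU(), nn.Linear(16, 1))  # second NN
opt = torch.optim.Adam(list(enc.parameters()) + list(pred.parameters()), lr=1e-3)

for _ in range(300):
    x = torch.rand(20, 1) * 6
    y = torch.sin(x) + 0.1 * torch.randn_like(x)       # one sampled toy task
    sx, sy, qx, qy = x[:10], y[:10], x[10:], y[10:]    # first / second subset
    task_vec = enc(torch.cat([sx, sy], dim=1)).mean(0, keepdim=True)
    y_hat = pred(torch.cat([qx, task_vec.expand(10, 8)], dim=1))
    loss = ((y_hat - qy) ** 2).mean()                  # error on second subset
    opt.zero_grad(); loss.backward(); opt.step()
```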
-
Publication number: 20230222319
Abstract: A learning method, executed by a computer, according to one embodiment includes: an input procedure for receiving a series data set X = {Xd}d∈D composed of series data sets Xd for learning in a task d ∈ D, where D is the task set; a sampling procedure for sampling the task d from the task set D and then sampling a first subset from the series data set Xd corresponding to the task d and a second subset from the set obtained by excluding the first subset from Xd; a generation procedure for generating a task vector representing the characteristics of the first subset using parameters of a first neural network; a prediction procedure for calculating, from the task vector and the series data included in the second subset, a predicted value of each value included in the series data using parameters of a second neural network; and a learning procedure for updating learning target parameters, including the parameters of the first neural network and the parameters of the second neural network, using an error between the predicted values and the corresponding values in the second subset.
Type: Application
Filed: June 8, 2020
Publication date: July 13, 2023
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventor: Tomoharu IWATA
-
Publication number: 20230222324
Abstract: A method includes: receiving data including cases and labels therefor; calculating a predicted value of a label for each case included in the data, using parameters of a neural network and information representing the cases in which the labels are observed among the cases in the data; selecting one case from the data, using parameters of another neural network and the information representing the cases where the labels are observed; training the parameters of the neural network using an error between the predicted value and the value of the label for each case in the data; and training the parameters of the other neural network using that error and another error between the predicted value of a label for each case when the one case is additionally observed and the value of the label for that case.
Type: Application
Filed: June 8, 2020
Publication date: July 13, 2023
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventor: Tomoharu IWATA
-
Publication number: 20230196097
Abstract: A ranking function generating apparatus includes a memory and a processor configured to execute: producing training data including at least a first search log related to a first item included in a search result of a search query, a second search log related to a second item included in the search result, and the respective domains of the first and second search logs; and learning, using the training data, parameters of a neural network that implements ranking functions for a plurality of domains through multi-task learning, regarding each of the domains as a task.
Type: Application
Filed: May 18, 2020
Publication date: June 22, 2023
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Hitoshi SHIMIZU, Tomoharu IWATA
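A minimal sketch of the multi-task arrangement, assuming a shared body with one head per domain and a pairwise logistic ranking loss over (higher-ranked, lower-ranked) item pairs from the logs; the loss choice, feature extraction, and all shapes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

body = nn.Sequential(nn.Linear(6, 16), nn.ReLU())            # shared across tasks
heads = nn.ModuleList([nn.Linear(16, 1) for _ in range(3)])  # one per domain
opt = torch.optim.Adam(list(body.parameters()) + list(heads.parameters()), lr=1e-3)

for _ in range(300):
    d = torch.randint(0, 3, ()).item()       # sample a domain (= task)
    pos = torch.randn(32, 6)                 # features of higher-ranked items
    neg = torch.randn(32, 6)                 # features of lower-ranked items
    margin = heads[d](body(pos)) - heads[d](body(neg))
    loss = F.softplus(-margin).mean()        # pairwise logistic ranking loss
    opt.zero_grad(); loss.backward(); opt.step()
```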
-
Patent number: 11615273
Abstract: To maintain the classification accuracy of a classifier without frequently collecting labeled learning data, a learning unit learns the classification criterion of the classifier at each time point in the past up to the present, and learns the time-series change of that criterion, using labeled learning data collected up to the present. A classifier creating unit predicts the classification criterion of a future classifier and creates a classifier that outputs a label representing an attribute of input data, using the learned classification criterion and its time-series change.
Type: Grant
Filed: January 19, 2017
Date of Patent: March 28, 2023
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Atsutoshi Kumagai, Tomoharu Iwata
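As a toy illustration of "predict the future criterion from its past trajectory": fit a classifier per past period, model the weight trajectory with a linear trend, and extrapolate. The linear trend and the logistic-regression criteria are assumptions for this example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
weights = []
for t in range(5):                                     # labeled data per past period
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + 0.3 * t * X[:, 1] > 0).astype(int)  # slowly drifting boundary
    weights.append(LogisticRegression().fit(X, y).coef_[0])

W = np.stack(weights)                                  # time series of criteria
slope, intercept = np.polyfit(np.arange(5), W, deg=1)  # per-dimension linear trend
w_future = slope * 6 + intercept                       # predicted criterion at t = 6
predict = lambda X_new: (X_new @ w_future > 0).astype(int)
```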
-
Publication number: 20230016231
Abstract: A learning device relating to one embodiment includes: an input unit configured to input a plurality of datasets of different feature spaces; a first generation unit configured to generate a feature latent vector indicating a property of an individual feature of the dataset, for each of the datasets; a second generation unit configured to generate an instance latent vector indicating the property of observation data, for each of the observation vectors included in the datasets; a prediction unit configured to predict a solution by a model for solving a machine learning problem of interest, using the feature latent vector and the instance latent vector; and a learning unit configured to learn a parameter of the model by optimizing a predetermined objective function, using the feature latent vector, the instance latent vector and the solution for each of the datasets.
Type: Application
Filed: November 29, 2019
Publication date: January 19, 2023
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Tomoharu IWATA, Atsutoshi KUMAGAI
-
Publication number: 20220405585
Abstract: A latent representation calculation unit (131) uses a first model to calculate, from samples belonging to a domain, a latent representation representing a feature of the domain. A domain-by-domain objective function generation unit (132) and an all-domain objective function generation unit (133) generate, from the samples belonging to the domain and from the latent representation of the domain calculated by the latent representation calculation unit (131), an objective function related to a second model that calculates an anomaly score for each of the samples. An update unit (134) updates the first model and the second model so as to optimize the objective functions of a plurality of domains calculated by the domain-by-domain objective function generation unit (132) and the all-domain objective function generation unit (133).
Type: Application
Filed: October 16, 2019
Publication date: December 22, 2022
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Atsutoshi KUMAGAI, Tomoharu IWATA
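A minimal sketch of the two-model arrangement, assuming a set encoder for the domain latent (first model) and a conditioned autoencoder whose reconstruction error serves as the anomaly score (second model); the pooling, shapes, and summed objective are assumptions.

```python
import torch
import torch.nn as nn

set_enc = nn.Sequential(nn.Linear(5, 16), nn.ReLU(), nn.Linear(16, 4))   # first model
ae = nn.Sequential(nn.Linear(5 + 4, 16), nn.ReLU(), nn.Linear(16, 5))    # second model
opt = torch.optim.Adam(list(set_enc.parameters()) + list(ae.parameters()), lr=1e-3)

domains = [torch.randn(40, 5) + i for i in range(3)]    # toy multi-domain samples
for _ in range(300):
    loss = 0.0
    for xs in domains:
        z = set_enc(xs).mean(0, keepdim=True)           # latent representation
        recon = ae(torch.cat([xs, z.expand(len(xs), 4)], dim=1))
        loss = loss + ((recon - xs) ** 2).mean()        # per-domain objective
    opt.zero_grad(); loss.backward(); opt.step()
```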
-
Publication number: 20220405624
Abstract: An acquisition unit 15a acquires data in a task. A learning unit 15b learns a generation model representing the probability distribution of the data in the task so that the mutual information between a latent variable and an observed variable in the model is minimized.
Type: Application
Filed: November 21, 2019
Publication date: December 22, 2022
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Hiroshi TAKAHASHI, Tomoharu IWATA, Sekitoshi KANAI, Atsutoshi KUMAGAI, Yuki YAMANAKA, Masanori YAMADA, Satoshi YAGI
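One standard handle on that quantity: in a VAE, the KL term averaged over the data upper-bounds the mutual information between observed and latent variables, so weighting it more heavily (beta > 1) pushes that bound down. The sketch below shows this generic device under toy assumptions; it is not necessarily the procedure claimed here.

```python
import torch
import torch.nn as nn

enc = nn.Linear(8, 4)    # outputs [mu, logvar] for a 2-d latent
dec = nn.Linear(2, 8)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
beta = 4.0               # >1 penalizes the KL / mutual-information bound harder

x = torch.randn(256, 8)
for _ in range(300):
    mu, logvar = enc(x).chunk(2, dim=1)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
    recon = ((dec(z) - x) ** 2).mean()
    kl = 0.5 * (mu ** 2 + logvar.exp() - logvar - 1).sum(1).mean()
    loss = recon + beta * kl
    opt.zero_grad(); loss.backward(); opt.step()
```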