Patents by Inventor Yasutoshi IDA

Yasutoshi IDA has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230142452
    Abstract: A data processing apparatus includes a lower bound calculation unit that, during hyperparameter search, calculates from the norm of each row or column of a Gram matrix to be processed a lower bound of the optimality condition value for the case in which the parameter vector corresponding to that row or column is a zero vector, and an important matrix determination unit that determines whether the row or column is important. It further includes an important matrix extraction unit that extracts the rows or columns determined to be important, and an important matrix updating unit that updates the parameters corresponding to those rows or columns. It also includes an upper bound calculation unit that calculates an upper bound of the optimality condition value for the rows or columns to be processed, a calculation omission determination unit, and an updating calculation unit. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: April 27, 2020
    Publication date: May 11, 2023
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventor: Yasutoshi IDA
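The entry above describes screening rows or columns of a Gram matrix with lower and upper bounds on an optimality condition value so that updates for unimportant rows or columns can be skipped. Below is a minimal sketch of that general idea for Lasso-style coordinate descent, assuming a squared-error objective with an l1 penalty `lam`; the Cauchy-Schwarz skip bound and the update formula are generic stand-ins, not the bounds defined in the application.

```python
import numpy as np

def screened_lasso_cd(X, y, lam, n_iters=20):
    """Lasso coordinate descent with a norm-based skip rule (illustrative).

    A loose Cauchy-Schwarz bound built from the row norms of the Gram
    matrix detects coordinates whose soft-thresholded update is provably
    zero, so the full inner-product update is omitted for them.  The bound
    is a generic placeholder, not the expression defined in the patent.
    """
    n_samples, n_features = X.shape
    K = X.T @ X / n_samples            # Gram matrix
    c = X.T @ y / n_samples            # correlation of each feature with y
    row_norms = np.linalg.norm(K, axis=1)
    w = np.zeros(n_features)

    for _ in range(n_iters):
        for j in range(n_features):
            # Upper bound on the optimality-condition value |rho_j|:
            # if it cannot exceed lam, w[j] stays zero and the full
            # update for this coordinate is omitted.
            if abs(c[j]) + row_norms[j] * np.linalg.norm(w) <= lam:
                w[j] = 0.0
                continue
            rho = c[j] - K[j] @ w + K[j, j] * w[j]
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / K[j, j]
    return w
```

For example, with a tall design matrix `X` and target `y`, `screened_lasso_cd(X, y, lam=0.1)` returns a sparse weight vector in which coordinates rejected by the bound never incur the full dot product against the Gram matrix row.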
  • Publication number: 20220147537
    Abstract: A data analysis device (10) extracts groups of important features from multidimensional data by using Sparse Group Lasso, and includes: a matrix norm computation unit (11) that computes a norm of the Gram matrix of the given data; a score computation unit (12) that computes, based on that norm, a score for a computation-target group among the groups of the data; an omission determination unit (13) that determines, based on the score, whether or not to omit the computation for the computation-target group; and a solver application unit (14) that, when the omission determination unit (13) determines not to omit the computation, applies to the computation-target group the Block Coordinate Descent computation that Sparse Group Lasso uses to solve its optimization problem. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: March 26, 2020
    Publication date: May 12, 2022
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Yasutoshi IDA, Yasuhiro FUJIWARA
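As a rough illustration of the group-skipping scheme described above, the sketch below runs Sparse Group Lasso passes in which a cheap score built from Gram-matrix norms decides whether a group's block update can be omitted. The score formula, the single proximal-gradient step used in place of the full Block Coordinate Descent inner solver, and the `groups` layout are all assumptions, not the patent's definitions.

```python
import numpy as np

def soft_threshold(v, t):
    """Element-wise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def screened_sgl(X, y, groups, lam1, lam2, n_iters=50):
    """Sparse Group Lasso with a norm-based skip rule per group (sketch).

    `groups` is a dict mapping a group id to a list of column indices.
    The skip score is a loose Cauchy-Schwarz bound built from Gram-matrix
    norms; it stands in for the patent's score and is not its formula.
    """
    n, p = X.shape
    w = np.zeros(p)
    gram_norm = {g: np.linalg.norm(X[:, idx].T @ X / n) for g, idx in groups.items()}
    corr_norm = {g: np.linalg.norm(X[:, idx].T @ y / n) for g, idx in groups.items()}

    for _ in range(n_iters):
        for g, idx in groups.items():
            # Cheap score: an upper bound on the group's (thresholded)
            # gradient norm.  If it cannot reach lam2, the whole group
            # provably stays zero and the block update is omitted.
            score = corr_norm[g] + 2.0 * gram_norm[g] * np.linalg.norm(w)
            if score <= lam2:
                w[idx] = 0.0
                continue
            # Otherwise update the block: one proximal-gradient step with
            # the sparse-group prox, standing in for the inner BCD solver.
            grad = X[:, idx].T @ (X @ w - y) / n
            L = gram_norm[g] + 1e-12               # crude step-size bound
            z = soft_threshold(w[idx] - grad / L, lam1 / L)
            z_norm = np.linalg.norm(z)
            shrink = max(0.0, 1.0 - lam2 / (L * z_norm)) if z_norm > 0 else 0.0
            w[idx] = shrink * z
    return w
```

The design point the abstract hinges on is visible in the loop: the score uses only precomputed norms and the current weight norm, so skipped groups never trigger the gradient computation that dominates the cost of the block update.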
  • Publication number: 20210192341
    Abstract: A first calculation unit (121), for each layer of a neural network, discretizes a parameter using a step function and then calculates an output signal. A second calculation unit (122), for each layer, calculates the gradient of an error function of the output signal with respect to the parameter using a continuous function that approximates the step function. An updating unit (123) then updates the parameter on the basis of the gradient calculated by the second calculation unit (122). (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: April 11, 2019
    Publication date: June 24, 2021
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Yu OYA, Yasutoshi IDA
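The abstract above pairs a step-function forward pass with a gradient taken through a continuous approximation of the step function (a straight-through-style estimator). The one-layer example below is an illustrative reading only, assuming a sign-function discretizer, a tanh surrogate, and a squared-error loss; none of these specific choices come from the application.

```python
import numpy as np

def step(x):
    """Discretize with a step (sign) function, used in the forward pass."""
    return np.where(x >= 0.0, 1.0, -1.0)

def tanh_surrogate_grad(x, beta=1.0):
    """Derivative of a continuous surrogate (tanh) that approximates the
    step function; used only in the backward pass."""
    return beta * (1.0 - np.tanh(beta * x) ** 2)

def train_binarized_linear(X, y, lr=0.1, n_epochs=100, seed=None):
    """One-layer example: the forward pass uses step-discretized weights,
    while the backward pass differentiates through the tanh surrogate.

    This is an illustrative reading of the abstract, not the patented
    procedure; the surrogate and layer shape are assumptions.
    """
    rng = np.random.default_rng(seed)
    n_samples, n_features = X.shape
    w = rng.normal(scale=0.1, size=n_features)   # real-valued latent weights

    for _ in range(n_epochs):
        wb = step(w)                              # discretized parameters
        pred = X @ wb                             # forward with step output
        err = pred - y                            # squared-error residual
        # Chain rule with the surrogate: d(pred)/d(w) is approximated by
        # X scaled by the surrogate's derivative at w.
        grad = (X.T @ err / n_samples) * tanh_surrogate_grad(w)
        w -= lr * grad                            # update the latent weights
    return w, step(w)
```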
  • Publication number: 20200410348
    Abstract: A learning device (10) calculates, for each layer in a multilayer neural network, a degree of contribution to the estimation result of the multilayer neural network, and selects a to-be-erased layer on the basis of each layer's degree of contribution. The learning device (10) erases the to-be-erased layer from the multilayer neural network and then trains the network from which that layer has been erased. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: April 4, 2019
    Publication date: December 31, 2020
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventor: Yasutoshi IDA
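The procedure above amounts to scoring each layer's contribution, erasing the least contributing layer, and retraining the reduced network. The sketch below shows only that control flow, with hypothetical `evaluate` and `retrain` callables; measuring contribution as the score drop when a layer is skipped is an assumption, since the abstract does not fix the contribution measure.

```python
def prune_least_contributing_layer(model_layers, evaluate, retrain):
    """Erase the layer whose removal hurts the estimate the least, then
    retrain the reduced network (illustrative control flow only).

    `model_layers` is a list of layer objects, `evaluate(layers)` returns
    a validation score for a network built from those layers, and
    `retrain(layers)` trains that network; all three are hypothetical
    placeholders standing in for the learning device in the abstract.
    """
    baseline = evaluate(model_layers)
    # Contribution of a layer: how much the score drops when it is skipped.
    contributions = []
    for i in range(len(model_layers)):
        reduced = model_layers[:i] + model_layers[i + 1:]
        contributions.append(baseline - evaluate(reduced))
    # Erase the layer with the smallest contribution and retrain the rest.
    to_erase = min(range(len(model_layers)), key=lambda i: contributions[i])
    pruned = model_layers[:to_erase] + model_layers[to_erase + 1:]
    retrain(pruned)
    return pruned, to_erase
```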
  • Publication number: 20190156240
    Abstract: A learning apparatus according to the present invention performs learning using a stochastic gradient descent method in machine learning, and includes a processor configured to: calculate a first-order gradient in the stochastic gradient descent method; calculate a statistic of the first-order gradient; remove the initialization bias from the calculated statistic; adjust a learning rate by dividing it by the standard deviation of the first-order gradient derived from that statistic; and update a parameter of a learning model using the adjusted learning rate. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: April 14, 2017
    Publication date: May 23, 2019
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Yasutoshi IDA, Yasuhiro FUJIWARA
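The abstract above describes Adam-style bookkeeping: accumulate statistics of the first-order gradient, correct their initialization bias, and divide the learning rate by the gradient's standard deviation before updating the parameters. The sketch below follows that outline, but the exponential-moving-average estimators, the bias-correction factors, and the use of the bias-corrected mean as the step direction are assumptions rather than the patented formulas.

```python
import numpy as np

def sgd_with_std_scaled_lr(grad_fn, w0, lr=0.01, beta1=0.9, beta2=0.999,
                           eps=1e-8, n_steps=1000):
    """Stochastic gradient descent whose learning rate is divided by the
    bias-corrected standard deviation of the first-order gradient.

    `grad_fn(w, t)` returns a stochastic gradient at step t.  The moment
    estimators and bias correction mirror Adam-style bookkeeping and are
    an illustrative assumption, not the patent's precise update rule.
    """
    w = np.array(w0, dtype=float)
    m = np.zeros_like(w)      # running mean of the gradient
    v = np.zeros_like(w)      # running mean of the squared gradient

    for t in range(1, n_steps + 1):
        g = grad_fn(w, t)                      # first-order gradient
        m = beta1 * m + (1.0 - beta1) * g      # gradient statistics
        v = beta2 * v + (1.0 - beta2) * g * g
        m_hat = m / (1.0 - beta1 ** t)         # remove initialization bias
        v_hat = v / (1.0 - beta2 ** t)
        # Standard deviation of the gradient from its first two moments.
        std = np.sqrt(np.maximum(v_hat - m_hat ** 2, 0.0)) + eps
        w -= (lr / std) * m_hat                # update with the adjusted rate
    return w
```

Dividing by the standard deviation rather than by the raw second moment makes the effective step size insensitive to a consistently large gradient mean, which is the property the adjusted learning rate in the abstract appears to target.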