Patents by Inventor Akihiko Kasagi

Akihiko Kasagi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220147872
    Abstract: A non-transitory computer-readable recording medium storing a calculation processing program for causing a computer to execute processing, the processing including: calculating error gradients of a plurality of layers of a machine learning model that includes an input layer of the machine learning model at the time of machine learning of the machine learning model; selecting a layer of which the error gradient is less than a threshold as a suppression target of the machine learning; and controlling a learning rate and performing the machine learning on the layer selected as the suppression target in a certain period of time before the machine learning is suppressed.
    Type: Application
    Filed: August 30, 2021
    Publication date: May 12, 2022
    Applicant: FUJITSU LIMITED
    Inventors: Yutaka Kai, Akihiko Kasagi, Yasushi Hara, Takumi Danjo
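
A rough reading of the abstract above: layers whose error gradient falls below a threshold become suppression targets, and their learning rate is controlled for a period before their training is suppressed. The following is a minimal PyTorch sketch of that idea, not the patented implementation; BASE_LR, THRESHOLD and RAMP_STEPS are illustrative values.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
BASE_LR, THRESHOLD, RAMP_STEPS = 0.1, 1e-3, 100

# One parameter group per layer so each layer's learning rate can be controlled separately.
groups = [{"params": list(layer.parameters()), "lr": BASE_LR}
          for layer in model if any(True for _ in layer.parameters())]
optimizer = torch.optim.SGD(groups, lr=BASE_LR)
ramp_left = [None] * len(groups)                 # None = not yet a suppression target

def train_step(x, y, loss_fn=nn.CrossEntropyLoss()):
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    for i, group in enumerate(optimizer.param_groups):
        grads = [p.grad for p in group["params"] if p.grad is not None]
        if not grads:
            continue
        grad_norm = sum(g.norm() for g in grads)
        if ramp_left[i] is None and grad_norm < THRESHOLD:
            ramp_left[i] = RAMP_STEPS            # the layer becomes a suppression target
        if ramp_left[i] is not None and ramp_left[i] > 0:
            group["lr"] = BASE_LR * ramp_left[i] / RAMP_STEPS   # learning-rate control period
            ramp_left[i] -= 1
        elif ramp_left[i] == 0:
            group["lr"] = 0.0                    # machine learning suppressed for this layer
    optimizer.step()
```
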
  • Publication number: 20220147772
    Abstract: A computer-implemented method includes: calculating error gradients with respect to a plurality of layers included in a machine learning model at a time of machine learning of the machine learning model, the plurality of layers including an input layer of the machine learning model; specifying, as a layer to be suppressed, a layer located in a range from a position of the input layer to a predetermined position among the layers in which the error gradient is less than a threshold; and suppressing the machine learning for the layer to be suppressed.
    Type: Application
    Filed: July 15, 2021
    Publication date: May 12, 2022
    Applicant: FUJITSU LIMITED
    Inventors: Yutaka Kai, Akihiko Kasagi, Yasushi Hara, Takumi Danjo
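
This entry differs from the previous one in that suppression is restricted to layers lying between the input layer and a predetermined position. A minimal sketch follows, assuming the layers are torch.nn modules listed input-first and freezing stands in for suppressing the machine learning; the threshold and depth limit are illustrative.

```python
def suppress_prefix_layers(layers, threshold=1e-3, max_freeze_depth=3):
    """Freeze low-gradient layers, counting only from the input side of the model."""
    for depth, layer in enumerate(layers):
        if depth >= max_freeze_depth:            # the predetermined position
            break
        params = [p for p in layer.parameters() if p.grad is not None]
        if not params:
            continue
        grad_norm = sum(p.grad.norm() for p in params)
        if grad_norm < threshold:                # error gradient below the threshold
            for p in layer.parameters():
                p.requires_grad_(False)          # no further gradient computation
                p.grad = None                    # and skip the pending update
        else:
            break                                # stop at the first layer that keeps learning
```

A typical place to call this would be after `loss.backward()` and before `optimizer.step()`.
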
  • Patent number: 11327764
    Abstract: A method for controlling an information processing system, the information processing system including multiple information processing devices coupled to each other, each of the multiple information processing devices including multiple main operation devices and multiple aggregate operation devices that are coupled to each other, the method includes: acquiring, by each of the aggregate operation devices, array data items from a main operation device coupled to the concerned aggregate operation device; determining the order of dimensions in which a process is executed and in which the information processing devices are coupled to each other; executing for each of the dimensions in accordance with the order of the dimensions, a process of halving the array data items and distributing the array data items to information processing devices arranged in the dimension; executing a process of transmitting, to information processing devices arranged in the dimension, operation results calculated based on data items.
    Type: Grant
    Filed: October 28, 2019
    Date of Patent: May 10, 2022
    Assignee: FUJITSU LIMITED
    Inventors: Akihiko Kasagi, Takashi Arakawa
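
The claimed process resembles a reduce-scatter ("halving the array data items and distributing") followed by an exchange of partial results, applied dimension by dimension across the interconnect. The sketch below simulates only a single dimension with numpy (recursive-halving reduce-scatter plus recursive-doubling all-gather); the device count is assumed to be a power of two and the arrays stand in for the per-device array data items.

```python
import numpy as np

def allreduce_halving_doubling(arrays):
    """arrays: one equal-length array per device (device count must be a power of two).
    Returns, for every device, the element-wise sum over all devices."""
    n = len(arrays)
    data = [a.astype(float).copy() for a in arrays]
    span = [(0, data[0].size)] * n                # the segment each device is responsible for

    step = n // 2                                 # reduce-scatter: halve and distribute
    while step >= 1:
        sends = []
        for r in range(n):
            peer = r ^ step
            lo, hi = span[r]
            mid = (lo + hi) // 2
            if r < peer:                          # keep the lower half, send the upper half
                sends.append((peer, mid, hi, data[r][mid:hi].copy()))
                span[r] = (lo, mid)
            else:                                 # keep the upper half, send the lower half
                sends.append((peer, lo, mid, data[r][lo:mid].copy()))
                span[r] = (mid, hi)
        for peer, lo, hi, chunk in sends:         # apply all exchanges "simultaneously"
            data[peer][lo:hi] += chunk
        step //= 2

    step = 1                                      # all-gather: transmit the partial results back
    while step < n:
        copies = []
        for r in range(n):
            lo, hi = span[r]
            copies.append((r ^ step, lo, hi, data[r][lo:hi].copy()))
        for peer, lo, hi, chunk in copies:
            data[peer][lo:hi] = chunk
        for r in range(n):                        # responsible segments merge pairwise
            peer = r ^ step
            span[r] = (min(span[r][0], span[peer][0]), max(span[r][1], span[peer][1]))
        step *= 2
    return data
```

The patented method applies an analogous halve-and-distribute step, followed by transmission of the partial operation results, once per dimension in the determined dimension order.
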
  • Publication number: 20220019898
    Abstract: An information processing method executed by a computer, the method includes inputting training data to a machine learning model that includes a convolution layer and acquiring an output result by the machine learning model; extracting a specific element that meets a specific condition from among elements included in error information based on an error between the training data and the output result; and performing machine learning of the convolution layer using the specific element.
    Type: Application
    Filed: April 16, 2021
    Publication date: January 20, 2022
    Applicant: FUJITSU LIMITED
    Inventor: Akihiko Kasagi
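
A minimal PyTorch sketch of learning a convolution layer only from selected error elements; the "specific condition" is stood in for here by keeping the largest 10% of error magnitudes, and the shapes, loss form and learning rate are illustrative.

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
x = torch.randn(4, 3, 16, 16)                        # training data (illustrative shapes)
target = torch.randn(4, 8, 16, 16)

out = conv(x)                                        # acquire the output result
error = out - target                                 # error information
keep = error.abs() >= error.abs().quantile(0.90)     # the "specific condition": largest 10% of elements
sparse_error = torch.where(keep, error, torch.zeros_like(error)).detach()

out.backward(gradient=sparse_error)                  # learn the convolution layer from the kept elements
with torch.no_grad():
    for p in conv.parameters():
        p -= 0.01 * p.grad                           # plain SGD step (illustrative learning rate)
```
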
  • Publication number: 20210406683
    Abstract: A process includes starting a learning process for building a model including multiple layers each including a parameter. The learning process executes iterations, each including calculating output error of the model using training data and updating the parameter value based on the output error. The process also includes selecting two or more candidate layers representing candidates for layers, where the updating is to be suppressed, based on results of a first iteration of the learning process. The process also includes calculating, based on the number of iterations executed up to the first iteration, a ratio value which becomes larger when the number of iterations executed is greater, and determining, amongst the candidate layers, one or more layers, where the updating is to be suppressed at a second iteration following the first iteration. The number of one or more layers is determined according to the ratio value.
    Type: Application
    Filed: April 9, 2021
    Publication date: December 30, 2021
    Applicant: FUJITSU LIMITED
    Inventors: Yutaka Kai, Akihiko Kasagi
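
A minimal sketch of the selection step: the fraction of candidate layers whose updates are suppressed (the "ratio value") grows with the number of iterations already executed. The helper name and the linear ratio are illustrative, not taken from the patent.

```python
import math

def layers_to_suppress(candidate_layers, iterations_done, total_iterations):
    """candidate_layers: layers flagged as suppression candidates after an iteration."""
    ratio = min(1.0, iterations_done / total_iterations)   # larger when more iterations have run
    count = math.floor(ratio * len(candidate_layers))      # number of layers to suppress next iteration
    return candidate_layers[:count]
```
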
  • Publication number: 20210397948
    Abstract: A memory holds a model including a plurality of layers including their respective parameters and training data. A processor starts learning processing, which repeatedly calculates an error of an output of the model by using the training data, calculates an error gradient, which indicates a gradient of the error with respect to the parameters, for each of the layers, and updates the parameters based on the error gradients. The processor calculates a difference between a first error gradient calculated in a first iteration in the learning processing and a second error gradient calculated in a second iteration after the first iteration for a first layer among the plurality of layers. In a case where the difference is less than a threshold, the processor skips the calculating of the error gradient and the updating of the parameter for the first layer in a third iteration after the second iteration.
    Type: Application
    Filed: March 10, 2021
    Publication date: December 23, 2021
    Applicant: FUJITSU LIMITED
    Inventors: Yasushi Hara, Akihiko Kasagi, Takumi Danjo, Yutaka Kai
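
A minimal sketch, assuming a torch.nn model whose immediate children are the layers and using per-layer gradient norms as a stand-in for the claimed error gradients: once a layer's gradient stops changing by more than a threshold between iterations, its gradient computation and update are skipped from the next iteration on. The helper and threshold are illustrative.

```python
prev_grad_norm = {}                                  # gradient norm seen at the previous iteration

def skip_settled_layers(model, skipped, threshold=1e-4):
    """Call after loss.backward(); `skipped` is a set of layer names already frozen."""
    for name, layer in model.named_children():
        if name in skipped:
            continue
        grads = [p.grad for p in layer.parameters() if p.grad is not None]
        if not grads:
            continue
        norm = float(sum(g.norm() for g in grads))
        if name in prev_grad_norm and abs(norm - prev_grad_norm[name]) < threshold:
            skipped.add(name)                        # difference below the threshold
            for p in layer.parameters():
                p.requires_grad_(False)              # no gradient calculation in later iterations
                p.grad = None                        # and no pending parameter update
        prev_grad_norm[name] = norm
```
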
  • Publication number: 20210158212
    Abstract: A first learning rate is set to a first block including a first parameter, and a second learning rate, which is smaller than the first learning rate, is set to a second block including a second parameter. The first block and the second block are included in a model. Learning processing in which updating the first parameter based on a prediction error of the model, the prediction error having been calculated by using training data, and the first learning rate and updating the second parameter based on the prediction error and the second learning rate are performed iteratively is started. An update frequency of the second parameter is controlled such that this update frequency becomes lower than an update frequency of the first parameter by intermittently omitting the updating of the second parameter in the learning processing based on a relationship between the first and second learning rates.
    Type: Application
    Filed: October 8, 2020
    Publication date: May 27, 2021
    Applicant: FUJITSU LIMITED
    Inventors: Yutaka Kai, Akihiko Kasagi
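
A minimal sketch of the frequency control: the block with the smaller learning rate is updated only every few iterations, with the interval derived from the ratio of the two learning rates. The rounding rule is an assumption.

```python
def second_block_updates_this_step(step, lr_first, lr_second):
    """True when the low-learning-rate block should be updated at this iteration."""
    interval = max(1, round(lr_first / lr_second))   # e.g. 0.1 vs 0.025 -> update every 4th step
    return step % interval == 0
```
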
  • Publication number: 20200372336
    Abstract: Each of a plurality of processors enters, to a model representing a neural network and including a common first weight, first data different from that used by the other processors, calculates an error gradient for the first weight, and integrates the gradients calculated by each processor. Each processor stores the first weight in a memory and updates the weight of the model to a second weight based on a hyperparameter value different from those used by the other processors, the integrated error gradient, and the first weight. Each processor enters common second data to the model, compares the evaluation results acquired by each processor, and selects a common hyperparameter value. Each processor updates the weight of the model to a third weight based on the selected hyperparameter value, the integrated error gradient, and the first weight stored in the memory.
    Type: Application
    Filed: May 14, 2020
    Publication date: November 26, 2020
    Applicant: FUJITSU LIMITED
    Inventors: Akihiko Kasagi, Akihiro Tabuchi, Masafumi Yamazaki
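
A minimal numpy sketch, simulated in a single process rather than on a plurality of processors: every worker derives a trial weight from the same integrated error gradient using its own learning rate (standing in for the differing hyperparameter values), the evaluation results are compared, and the update is redone from the stored first weight with the selected value. Names are illustrative.

```python
import numpy as np

def hyperparameter_trial_step(weight, integrated_grad, candidate_lrs, evaluate):
    """evaluate(w) -> loss on the common second data (lower is better)."""
    saved = weight.copy()                                  # the first weight kept in memory
    trials = [saved - lr * integrated_grad for lr in candidate_lrs]   # one second weight per processor
    scores = [evaluate(w) for w in trials]                 # each processor evaluates its own trial
    best_lr = candidate_lrs[int(np.argmin(scores))]        # compare results, select a common value
    third_weight = saved - best_lr * integrated_grad       # redo the update from the saved first weight
    return third_weight, best_lr
```
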
  • Publication number: 20200372347
    Abstract: A method of controlling an information processing apparatus, the information processing apparatus being configured to perform learning processing by using a neural network, the method includes: executing a calculation processing that includes calculating a learning rate, the learning rate being configured to change in the form of a continuous curve such that the time from when the learning rate is at an intermediate value of a maximum value to when the learning rate reaches a minimum value is shorter than the time from when the learning processing starts to when the learning rate reaches the intermediate value of the maximum value; and executing a control processing that includes controlling, based on the calculated learning rate, an amount of update at the time when a weight parameter is updated in an update processing.
    Type: Application
    Filed: April 20, 2020
    Publication date: November 26, 2020
    Applicant: FUJITSU LIMITED
    Inventors: Masafumi Yamazaki, Akihiko Kasagi, Akihiro Tabuchi
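
A minimal sketch of one possible curve with the stated property (not necessarily the patented schedule): lr(t) = lr_max * (1 - (t/T)^q) with q > 1 stays near the maximum for most of training and then falls quickly, so the stretch from the half-maximum point down to the minimum is shorter than the stretch before it.

```python
def learning_rate(step, total_steps, lr_max=0.1, lr_min=0.0, q=4.0):
    """Continuous decay that stays high for a long time and drops quickly near the end."""
    frac = min(1.0, step / total_steps)
    return lr_min + (lr_max - lr_min) * (1.0 - frac ** q)

# With q = 4 the rate only reaches half of lr_max at about 84% of total_steps,
# so the remaining descent to the minimum takes far less time than the first part.
```
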
  • Publication number: 20200372321
    Abstract: An information processing method implemented by a computer includes: executing a generation processing that includes generating a first mini-batch by performing data extension processing on learning data and processing to generate a second mini-batch without performing the data extension processing on the learning data; and executing a learning processing by using a neural network, the learning processing being configured to perform first learning by using the first mini-batch, and then perform second learning by using the second mini-batch.
    Type: Application
    Filed: April 29, 2020
    Publication date: November 26, 2020
    Applicant: FUJITSU LIMITED
    Inventors: Akihiro Tabuchi, Akihiko Kasagi
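
A minimal sketch; the `augment` callable stands in for the data extension processing and nothing here is tied to a particular library: the same learning data yields one mini-batch with augmentation for the first learning and one without it for the second learning.

```python
def make_minibatches(samples, augment):
    first = [augment(s) for s in samples]      # data extension processing applied
    second = list(samples)                     # no data extension processing
    return first, second

# Typical use: run most of the training on `first`, then finish on `second`.
```
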
  • Patent number: 10768932
    Abstract: An arithmetic processing circuit includes a dividing circuit that divides a plurality of data blocks into groups of a number equal to the number of arithmetic processing circuits included in an information processing apparatus, a data selecting circuit that selects respective first data blocks from the plurality of data blocks included in the respective groups, a transmission destination selecting circuit that selects arithmetic processing circuits different from each other as respective transmission destinations from the plurality of arithmetic processing circuits for the respective first data blocks selected by the data selecting circuit based on destination number information obtained by exclusive disjunction operation on identification number information assigned to each arithmetic processing circuit and cyclic number information assigned to each group, and a transmitting circuit that transmits the respective first data blocks selected by the data selecting circuit to the respective arithmetic processing circuits selected as the transmission destinations.
    Type: Grant
    Filed: October 30, 2018
    Date of Patent: September 8, 2020
    Assignee: FUJITSU LIMITED
    Inventor: Akihiko Kasagi
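
A minimal sketch of the destination selection: with the data blocks divided into as many groups as there are circuits, XOR-ing each circuit's identification number with a group's cyclic number yields, per round, a conflict-free pairing of senders and receivers (the XOR is its own inverse). A power-of-two circuit count is assumed.

```python
def transfer_schedule(num_circuits):
    """Yield (cyclic_number, sender_id, destination_id) triples; num_circuits is a power of two."""
    for cyclic in range(num_circuits):               # one communication round per group
        for me in range(num_circuits):
            yield cyclic, me, me ^ cyclic            # exclusive disjunction picks the destination

# e.g. in the round with cyclic number 1 on 4 circuits, 0<->1 and 2<->3 exchange blocks
```
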
  • Publication number: 20200192717
    Abstract: An information processing apparatus for controlling a plurality of nodes mutually coupled via a plurality of cables, the apparatus includes: a memory; a processor coupled to the memory, the processor being configured to cause a first node to execute first processing to extract coupling relationship between the plurality of nodes, the first node being one of the plurality of nodes, being sequentially allocated from each of the plurality of nodes, the first processing including executing allocation processing that allocates unique coordinate information to the first node and allocates common coordinate information to nodes excluding the first node; executing transmission processing that causes the first node to transmit first information to each of the cables coupled to the first node; and executing identification processing that identifies a node having received the first information as neighboring node coupled to one of the plurality of cables coupled to the first node.
    Type: Application
    Filed: November 6, 2019
    Publication date: June 18, 2020
    Applicant: FUJITSU LIMITED
    Inventor: Akihiko Kasagi
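
A minimal, single-process sketch of the neighbour-identification step; the cable list and coordinate placeholders are illustrative, and real nodes would probe their physical links rather than a shared data structure.

```python
def identify_neighbours(first_node, cables):
    """cables: iterable of (node_a, node_b) pairs standing in for physical links."""
    coords = {}
    for a, b in cables:
        coords.setdefault(a, "common")                # common coordinate information
        coords.setdefault(b, "common")
    coords[first_node] = ("unique", first_node)       # allocation processing
    neighbours = []
    for a, b in cables:                               # transmission processing: probe each cable
        if first_node in (a, b):
            neighbours.append(b if a == first_node else a)   # identification processing
    return coords, neighbours
```
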
  • Publication number: 20200167162
    Abstract: A method for controlling an information processing system, the information processing system including multiple information processing devices coupled to each other, each of the multiple information processing devices including multiple main operation devices and multiple aggregate operation devices that are coupled to each other, the method includes: acquiring, by each of the aggregate operation devices, array data items from a main operation device coupled to the concerned aggregate operation device; determining the order of dimensions in which a process is executed and in which the information processing devices are coupled to each other; executing for each of the dimensions in accordance with the order of the dimensions, a process of halving the array data items and distributing the array data items to information processing devices arranged in the dimension; executing a process of transmitting, to information processing devices arranged in the dimension, operation results calculated based on data items.
    Type: Application
    Filed: October 28, 2019
    Publication date: May 28, 2020
    Applicant: FUJITSU LIMITED
    Inventors: Akihiko Kasagi, Takashi Arakawa
  • Patent number: 10558730
    Abstract: A computing method includes: generating first partitioned matrices by partitioning the first matrix by a least common multiple of the M and the N in the row direction and by the N in the column direction; generating second partitioned matrices by partitioning the second matrix by the M in the row direction and by the least common multiple in the column direction; adding a first product of the first partitioned matrices and the second partitioned matrices to a first result matrix; transmitting the first partitioned matrices to computing elements directly connected to that computing element out of other computing elements connected to each other in a torus-like manner in the row direction; transmitting the second partitioned matrices to computing elements directly connected to that computing element out of other computing elements connected to each other in a torus-like manner in the column direction.
    Type: Grant
    Filed: February 13, 2018
    Date of Patent: February 11, 2020
    Assignee: FUJITSU LIMITED
    Inventor: Akihiko Kasagi
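
The claims describe a torus block-matrix multiply whose partitioning uses the least common multiple of the grid dimensions M and N. The sketch below shows only the simpler square-grid special case, a classic Cannon-style rotation simulated with numpy, rather than the patented LCM partitioning.

```python
import numpy as np

def cannon_matmul(A, B, p):
    """Block multiply of square matrices on a simulated p x p torus of computing elements."""
    n = A.shape[0]
    assert n % p == 0
    b = n // p
    blockA = [[A[i*b:(i+1)*b, j*b:(j+1)*b].copy() for j in range(p)] for i in range(p)]
    blockB = [[B[i*b:(i+1)*b, j*b:(j+1)*b].copy() for j in range(p)] for i in range(p)]
    C = [[np.zeros((b, b)) for _ in range(p)] for _ in range(p)]
    # Initial skew: shift row i of A left by i, column j of B up by j.
    blockA = [[blockA[i][(j + i) % p] for j in range(p)] for i in range(p)]
    blockB = [[blockB[(i + j) % p][j] for j in range(p)] for i in range(p)]
    for _ in range(p):
        for i in range(p):
            for j in range(p):
                C[i][j] += blockA[i][j] @ blockB[i][j]   # add the local partial product
        # Rotate A one step along rows and B one step along columns (torus-like transfers).
        blockA = [[blockA[i][(j + 1) % p] for j in range(p)] for i in range(p)]
        blockB = [[blockB[(i + 1) % p][j] for j in range(p)] for i in range(p)]
    return np.block(C)

A, B = np.random.rand(6, 6), np.random.rand(6, 6)
assert np.allclose(cannon_matmul(A, B, p=3), A @ B)
```
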
  • Patent number: 10387997
    Abstract: An information processing device includes a first memory and a processor. The processor includes a circuit and a second memory. The circuit calculates a first value from a plurality of third pixels that are located above an interpolation pixel from among a plurality of first pixels, calculates a second value from a plurality of fourth pixels that are located above the interpolation pixel from among a plurality of second pixels, calculates a third value from a plurality of fifth pixels that are located below the interpolation pixel, calculates a fourth value from a plurality of sixth pixels that are located below the interpolation pixel, calculates a first gradient value from the first and third values, calculates a second gradient value from the second and fourth values, determines an edge direction according to the first gradient value and the second gradient value, and calculates a pixel value of the interpolation pixel.
    Type: Grant
    Filed: October 23, 2017
    Date of Patent: August 20, 2019
    Assignee: FUJITSU LIMITED
    Inventor: Akihiko Kasagi
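
A minimal sketch of generic edge-directed interpolation for a missing scan line: gradient values are formed from pixels above and below the interpolation pixel, the direction with the smallest gradient is treated as the edge direction, and the new pixel is averaged along it. The three candidate directions and the simple averaging are assumptions, not the exact pixel sets of the claims.

```python
import numpy as np

def interpolate_row(above, below):
    """above/below: 1-D arrays, the rows over and under the interpolation row."""
    out = np.empty_like(above, dtype=float)
    n = above.size
    for x in range(n):
        l, r = max(x - 1, 0), min(x + 1, n - 1)
        candidates = {
            abs(float(above[l]) - float(below[r])): (above[l], below[r]),   # "\" edge
            abs(float(above[x]) - float(below[x])): (above[x], below[x]),   # "|" edge
            abs(float(above[r]) - float(below[l])): (above[r], below[l]),   # "/" edge
        }
        a, b = candidates[min(candidates)]        # direction with the smallest gradient value
        out[x] = (float(a) + float(b)) / 2.0      # interpolate along the edge direction
    return out
```
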
  • Publication number: 20190138302
    Abstract: An arithmetic processing circuit includes a dividing circuit that divides a plurality of data blocks into groups of a number equal to the number of arithmetic processing circuits included in an information processing apparatus, a data selecting circuit that selects respective first data blocks from the plurality of data blocks included in the respective groups, a transmission destination selecting circuit that selects arithmetic processing circuits different from each other as respective transmission destinations from the plurality of arithmetic processing circuits for the respective first data blocks selected by the data selecting circuit based on destination number information obtained by exclusive disjunction operation on identification number information assigned to each arithmetic processing circuit and cyclic number information assigned to each group, and a transmitting circuit that transmits the respective first data blocks selected by the data selecting circuit to the respective arithmetic processing circuits selected as the transmission destinations.
    Type: Application
    Filed: October 30, 2018
    Publication date: May 9, 2019
    Applicant: FUJITSU LIMITED
    Inventor: Akihiko Kasagi
  • Publication number: 20180246854
    Abstract: A computing method includes: generating first partitioned matrices by partitioning the first matrix by a least common multiple of the M and the N in the row direction and by the N in the column direction; generating second partitioned matrices by partitioning the second matrix by the M in the row direction and by the least common multiple in the column direction; adding a first product of the first partitioned matrices and the second partitioned matrices to a first result matrix; transmitting the first partitioned matrices to computing elements directly connected to that computing element out of other computing elements connected to each other in a torus-like manner in the row direction; transmitting the second partitioned matrices to computing elements directly connected to that computing element out of other computing elements connected to each other in a torus-like manner in the column direction.
    Type: Application
    Filed: February 13, 2018
    Publication date: August 30, 2018
    Applicant: FUJITSU LIMITED
    Inventor: Akihiko Kasagi
  • Publication number: 20180150934
    Abstract: An information processing device includes a first memory and a processor. The processor includes a circuit and a second memory. The circuit calculates a first value from a plurality of third pixels that are located above an interpolation pixel from among a plurality of first pixels, calculates a second value from a plurality of fourth pixels that are located above the interpolation pixel from among a plurality of second pixels, calculates a third value from a plurality of fifth pixels that are located below the interpolation pixel, calculates a fourth value from a plurality of sixth pixels that are located below the interpolation pixel, calculates a first gradient value from the first and third values, calculates a second gradient value from the second and fourth values, determines an edge direction according to the first gradient value and the second gradient value, and calculates a pixel value of the interpolation pixel.
    Type: Application
    Filed: October 23, 2017
    Publication date: May 31, 2018
    Applicant: FUJITSU LIMITED
    Inventor: Akihiko Kasagi
  • Publication number: 20180032911
    Abstract: The parallel information processing apparatus includes a plurality of nodes each including a first processor and a second processor. The first processor is configured to execute a computation process using a coefficient for a target data, computing a coefficient variation based on a result of the computation process, transferring the computed coefficient variation to the second processor and requesting the second processor to execute a transfer/receipt process. The second processor is configured to transmit the coefficient variation transferred from the first processor to another node and receive the coefficient variation computed by another node and integrate the coefficient variation transferred from the first processor and the coefficient variation computed by another node. At least one of the first processor and the second processor updates the coefficient to be used for the computation process from next time onward based on the integrated coefficient variation.
    Type: Application
    Filed: June 27, 2017
    Publication date: February 1, 2018
    Applicant: FUJITSU LIMITED
    Inventors: Masafumi Yamazaki, Tsuguchika Tabaru, Akihiko Kasagi
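
A minimal, one-process numpy sketch of a training round: on a real node the first processor would run grad_fn while the second processor transfers and integrates the coefficient variations; here both happen sequentially, and the names and averaging rule are illustrative.

```python
import numpy as np

def training_round(shards, coeff, grad_fn, lr=0.01):
    """shards: per-node target data; grad_fn(coeff, shard) -> that node's coefficient variation."""
    variations = [grad_fn(coeff, shard) for shard in shards]   # computed on each first processor
    integrated = np.mean(variations, axis=0)                   # second processors exchange and integrate
    return coeff - lr * integrated                             # coefficient used from next time onward
```
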
  • Publication number: 20180032869
    Abstract: A machine learning method, using a neural network as a model, executed by a computer, the machine learning method including dividing a first batch data into a plurality of pieces of second batch data, the first batch data being a set of sample data to be input into the model in a machine learning, allocating the plurality of pieces of second batch data to a plurality of computers, the model having a specified layered structure and a specified parameter of the neural network being applied to the plurality of computers, causing the plurality of computers to execute the machine learning based on the plurality of allocated second batch data, obtaining, from each of the plurality of computers, a plurality of correction amounts of the parameter derived by the executed machine learning, and correcting the model by modifying the specified parameter in accordance with the plurality of correction amounts.
    Type: Application
    Filed: July 27, 2017
    Publication date: February 1, 2018
    Applicant: FUJITSU LIMITED
    Inventors: Tsuguchika Tabaru, Masafumi Yamazaki, Akihiko Kasagi
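
A minimal, single-process numpy stand-in for the plurality of computers: the first batch is divided into second batches, each sub-batch yields a correction amount for the parameter, and the corrections are folded back into the shared model. The averaging rule and names are assumptions.

```python
import numpy as np

def data_parallel_step(param, first_batch, num_computers, correction_fn):
    """correction_fn(param, second_batch) -> correction amount derived by one computer."""
    second_batches = np.array_split(first_batch, num_computers)      # dividing the first batch
    corrections = [correction_fn(param, b) for b in second_batches]  # machine learning on each computer
    return param + np.mean(corrections, axis=0)                      # correcting the model
```
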