Patents by Inventor Yasufumi Sakai

Yasufumi Sakai has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240127051
    Abstract: A non-transitory computer-readable recording medium stores a machine learning program for causing a computer to execute a process including: for a machine learning model that includes a plurality of preliminarily trained layers, a first output layer formed according to a downstream task and coupled to a final layer of the plurality of layers, and a plurality of second output layers that are coupled to respective outputs of layers other than the final layer and have the same configuration as the first output layer, training only the first output layer and the second output layers of the machine learning model using the downstream task; and training the entire machine learning model, including the first output layer and the second output layers, using the downstream task.
    Type: Application
    Filed: July 26, 2023
    Publication date: April 18, 2024
    Applicant: Fujitsu Limited
    Inventor: Yasufumi SAKAI
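
The two-stage scheme described in publication 20240127051 can be illustrated with a short sketch. The PyTorch snippet below is only an approximation under assumed names (MultiHeadModel, train_stage, loader, hidden_dim, num_classes) and assumes every pretrained block emits features of the same width; it is not the implementation filed with the USPTO.

```python
import torch
import torch.nn as nn

class MultiHeadModel(nn.Module):
    def __init__(self, blocks, hidden_dim, num_classes):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)   # preliminarily trained layers
        # one head per block output; the last head plays the role of the
        # "first output layer", the others are the "second output layers"
        self.heads = nn.ModuleList(nn.Linear(hidden_dim, num_classes)
                                   for _ in blocks)

    def forward(self, x):
        logits = []
        for block, head in zip(self.blocks, self.heads):
            x = block(x)
            logits.append(head(x))
        return logits

def train_stage(model, loader, stage, epochs=1):
    # Stage 1: update only the output layers; stage 2: update the whole model.
    for p in model.blocks.parameters():
        p.requires_grad = (stage == 2)
    params = model.parameters() if stage == 2 else model.heads.parameters()
    opt = torch.optim.Adam(params, lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            loss = sum(loss_fn(out, y) for out in model(x))
            opt.zero_grad()
            loss.backward()
            opt.step()
```
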
  • Patent number: 11811427
    Abstract: An information processing apparatus includes: a memory configured to store program instructions to perform quantization on quantization target data; and a processor configured to execute the program instructions stored in the memory, the program instructions including: obtaining a distribution of appearance frequencies of a plurality of variable elements included in the quantization target data; and aligning a most significant bit position of a quantization position to a variable element smaller than a variable element of a maximum value among the plurality of variable elements based on the distribution of the appearance frequencies of the plurality of variable elements.
    Type: Grant
    Filed: July 16, 2020
    Date of Patent: November 7, 2023
    Assignee: FUJITSU LIMITED
    Inventors: Yasufumi Sakai, Sosaku Moriki
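
A rough NumPy sketch of the anchoring idea in patent 11811427 follows. The patent derives the anchor from the appearance-frequency distribution of the elements; using a fixed percentile of the absolute values and an 8-bit width are assumptions made here for illustration only.

```python
import numpy as np

def quantize_msb_aligned(values, bits=8, coverage=99.0):
    # anchor the quantization range to an element smaller than the maximum,
    # chosen from the appearance-frequency distribution of |values|
    ref = np.percentile(np.abs(values), coverage)
    msb = int(np.floor(np.log2(ref)))          # most significant bit position
    lsb = msb - (bits - 2)                     # bits left after the sign bit
    scale = 2.0 ** lsb
    q = np.clip(np.round(values / scale),
                -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return q * scale                           # dequantized, for comparison

x = np.random.randn(10000) * 0.1
x[0] = 50.0                                    # outlier that would waste range
print(np.abs(x - quantize_msb_aligned(x)).mean())
```
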
  • Patent number: 11809995
    Abstract: An information processing device includes a memory; and a processor coupled to the memory and configured to: calculate a quantization error when a variable to be used in a neural network is quantized, generate a threshold value based on reference information related to a first recognition rate obtained by past learning of the neural network and a second recognition rate obtained by calculation of the neural network, determine, among the variables to be used for calculation of the neural network, a variable whose data type is to be quantized based on the calculated quantization error and the generated threshold value, and execute the calculation of the neural network by using the variable of the determined data type.
    Type: Grant
    Filed: August 20, 2020
    Date of Patent: November 7, 2023
    Assignee: FUJITSU LIMITED
    Inventor: Yasufumi Sakai
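
The decision rule in patent 11809995 can be sketched as below. The particular form of make_threshold and the int8/fp32 choice are assumptions; the patent only specifies that the threshold is generated from a reference recognition rate obtained by past learning and the current recognition rate, and that quantization is gated by the quantization error.

```python
import numpy as np

def quantization_error(v, bits=8):
    scale = np.max(np.abs(v)) / (2 ** (bits - 1) - 1)
    q = np.round(v / scale) * scale
    return float(np.mean(np.abs(v - q)))

def make_threshold(reference_rate, current_rate, base=1e-2):
    # assumed form: tolerate more quantization error while the current
    # recognition rate compares favourably with the reference rate
    return base * max(current_rate / reference_rate, 0.1)

def choose_dtypes(variables, reference_rate, current_rate):
    thr = make_threshold(reference_rate, current_rate)
    return {name: ("int8" if quantization_error(v) < thr else "fp32")
            for name, v in variables.items()}

params = {"conv1.w": np.random.randn(64, 3, 3, 3),
          "fc.w": np.random.randn(10, 512) * 5.0}
print(choose_dtypes(params, reference_rate=0.91, current_rate=0.93))
```
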
  • Publication number: 20230281440
    Abstract: A method including: obtaining a reduction ratio of each element of layers in a trained model of a neural network; when the neural network includes a process that outputs a tensor as a result of a given calculation on tensors and tensors from first layers preceding the process are inputted, inserting second layers that perform zero padding between the first layers and the process, the first layers including a preceding layer of the process and one or more layers preceding the preceding layer and being shortcut-connected to the process; and padding the tensors inputted into the second layers, one associated with each first layer, with one or more zero matrices such that the number of elements of each tensor inputted into the process from the first layers, after the elements of each first layer are reduced in accordance with the reduction ratio, equals a first number.
    Type: Application
    Filed: November 10, 2022
    Publication date: September 7, 2023
    Applicant: Fujitsu Limited
    Inventor: Yasufumi Sakai
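
The zero-padding idea in publication 20230281440 can be shown with a tiny NumPy sketch: when two branches feeding an element-wise addition have been pruned by different ratios, each input is padded with zeros back to a common channel count. A channel-last layout and taking the larger channel count as the common size ("first number") are assumptions for illustration.

```python
import numpy as np

def pad_channels(tensor, target_channels):
    pad = target_channels - tensor.shape[-1]
    return np.concatenate(
        [tensor, np.zeros(tensor.shape[:-1] + (pad,))], axis=-1)

def shortcut_add(branch_a, branch_b):
    # pad both inputs back to a common channel count before the addition
    target = max(branch_a.shape[-1], branch_b.shape[-1])
    return pad_channels(branch_a, target) + pad_channels(branch_b, target)

a = np.random.randn(1, 8, 8, 48)   # branch pruned to 48 channels
b = np.random.randn(1, 8, 8, 56)   # shortcut branch pruned to 56 channels
print(shortcut_add(a, b).shape)    # (1, 8, 8, 56)
```
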
  • Patent number: 11675567
    Abstract: An information processing device that executes calculation of a neural network includes a memory; and a processor coupled to the memory, the processor configured to: set a division position for quantization of a variable to be used for the calculation so that a quantization error based on a difference between the variable before the quantization and the variable after the quantization is reduced; and quantize the variable based on the set division position.
    Type: Grant
    Filed: April 6, 2020
    Date of Patent: June 13, 2023
    Assignee: FUJITSU LIMITED
    Inventor: Yasufumi Sakai
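
A minimal NumPy sketch of the idea in patent 11675567: sweep candidate division (binary point) positions for a fixed-point representation and keep the one that minimizes the error between the variable before and after quantization. The candidate range and the mean-squared error metric are assumptions.

```python
import numpy as np

def best_division_position(v, bits=8, candidates=range(-16, 9)):
    best_pos, best_err = None, np.inf
    for pos in candidates:
        scale = 2.0 ** pos                    # weight of the least significant bit
        q = np.clip(np.round(v / scale),
                    -(2 ** (bits - 1)), 2 ** (bits - 1) - 1) * scale
        err = float(np.mean((v - q) ** 2))    # error before vs. after quantization
        if err < best_err:
            best_pos, best_err = pos, err
    return best_pos, best_err

v = np.random.randn(4096) * 0.05
print(best_division_position(v))
```
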
  • Publication number: 20230130638
    Abstract: A computer-readable recording medium has stored therein a program for causing a computer to execute a process including: calculating, for each element of a plurality of layers in a trained model of a neural network including the layers, a threshold for the error between tensors before and after reduction; selecting reduction ratio candidates to be applied to each of the layers based on the thresholds and the errors between tensors before and after reduction in cases where the elements are reduced by each of the reduction ratio candidates in each of the layers; and determining reduction ratios to be applied to each of the layers based on the inference accuracy of the trained model and the inference accuracy of a reduced model after machine learning, the reduced model being obtained by reducing each element of the layers in the trained model according to the reduction ratio candidates to be applied.
    Type: Application
    Filed: July 13, 2022
    Publication date: April 27, 2023
    Applicant: FUJITSU LIMITED
    Inventor: Yasufumi Sakai
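
The selection loop in publication 20230130638 can be sketched as follows. The helper callables (tensor_error, retrain_and_evaluate), the "largest candidate under the threshold" rule, and the accuracy margin are all assumptions supplied for illustration.

```python
def select_reduction_ratios(layers, thresholds, candidates, tensor_error,
                            baseline_accuracy, retrain_and_evaluate,
                            accuracy_margin=0.01):
    # per layer, keep the largest candidate whose tensor error fits the threshold
    chosen = {name: max([r for r in candidates
                         if tensor_error(name, r) <= thresholds[name]],
                        default=0.0)
              for name in layers}
    # prune by `chosen`, fine-tune, and compare inference accuracy to the baseline
    if retrain_and_evaluate(chosen) >= baseline_accuracy - accuracy_margin:
        return chosen
    return {name: 0.0 for name in layers}      # fall back to no reduction

# toy usage with synthetic errors and a pretend retraining step
print(select_reduction_ratios(
    ["conv1", "conv2"],
    thresholds={"conv1": 0.05, "conv2": 0.02},
    candidates=[0.25, 0.5, 0.75],
    tensor_error=lambda name, r: r * 0.08,
    baseline_accuracy=0.90,
    retrain_and_evaluate=lambda ratios: 0.90 - 0.005 * sum(ratios.values())))
```
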
  • Publication number: 20230123756
    Abstract: A tensor quantization apparatus includes one or more memories; and one or more processors coupled to the one or more memories, the one or more processors configured to quantize a plurality of elements included in a tensor in first training of a neural network by changing a data type of each of the plurality of elements to a first data type.
    Type: Application
    Filed: December 19, 2022
    Publication date: April 20, 2023
    Applicant: FUJITSU LIMITED
    Inventors: Yasufumi SAKAI, Enxhi Kreshpa
  • Publication number: 20230100644
    Abstract: A process includes: given second training data in which a subset of the elements of first training data is masked, generating, from the second training data, third training data in which a subset of elements is masked, the elements including output of a generator that estimates an element appropriate for the masked portion in the first training data and a first element other than the masked portion in the second training data; and updating a parameter of a discriminator, which identifies whether the first element in the third training data replaces an element of the first training data and which estimates an element appropriate for the masked portion in the third training data, so as to minimize an integrated loss function obtained by integrating a first and a second loss function that are calculated based on the output of the discriminator and the first training data and that are related, respectively, to the identification result and the estimation result of the discriminator.
    Type: Application
    Filed: July 5, 2022
    Publication date: March 30, 2023
    Applicant: FUJITSU LIMITED
    Inventors: Masahiro ASAOKA, Yasufumi SAKAI, Akihiko KASAGI
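
The integrated loss in publication 20230100644 combines a replacement-identification term and a masked-estimation term. The PyTorch sketch below shows one way such a combination could look; the shapes, the equal weighting, and the binary-cross-entropy/cross-entropy pairing are assumptions, not details taken from the filing.

```python
import torch
import torch.nn.functional as F

def integrated_loss(replace_logits, replaced_labels,
                    token_logits, original_tokens, mask, weight=1.0):
    # first loss: did the discriminator spot the generator's replacements?
    identification = F.binary_cross_entropy_with_logits(
        replace_logits, replaced_labels.float())
    # second loss: can it recover the original tokens at the re-masked positions?
    estimation = F.cross_entropy(token_logits[mask], original_tokens[mask])
    return identification + weight * estimation

# toy shapes: batch of 2, sequence of 5, vocabulary of 100
replace_logits = torch.randn(2, 5)
replaced_labels = torch.randint(0, 2, (2, 5))
token_logits = torch.randn(2, 5, 100)
original_tokens = torch.randint(0, 100, (2, 5))
mask = torch.zeros(2, 5, dtype=torch.bool)
mask[:, 1] = True
print(integrated_loss(replace_logits, replaced_labels,
                      token_logits, original_tokens, mask))
```
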
  • Publication number: 20230064003
    Abstract: A non-transitory computer-readable storage medium storing a threshold determination program that causes a processor included in a computer to execute a process, the process including: quantizing a plurality of numerical values of a quantization target using a variable representing a candidate threshold, and determining the threshold based on a quantization error for each of the plurality of numerical values, the quantization error being specified based on the quantizing.
    Type: Application
    Filed: May 13, 2022
    Publication date: March 2, 2023
    Applicant: FUJITSU LIMITED
    Inventors: Enxhi Kreshpa, Tsuguchika TABARU, Yasufumi Sakai
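
A minimal NumPy sketch of the threshold search in publication 20230064003: each candidate threshold is used as the clipping point for quantization, and the candidate giving the smallest total quantization error is kept. The candidate grid and the 8-bit width are assumptions.

```python
import numpy as np

def determine_threshold(values, bits=8, num_candidates=64):
    top = np.abs(values).max()
    candidates = np.linspace(top / num_candidates, top, num_candidates)
    best_t, best_err = None, np.inf
    for t in candidates:
        scale = t / (2 ** (bits - 1) - 1)
        q = np.clip(np.round(values / scale),
                    -(2 ** (bits - 1) - 1), 2 ** (bits - 1) - 1) * scale
        err = float(np.sum((values - q) ** 2))   # per-value errors, summed
        if err < best_err:
            best_t, best_err = t, err
    return best_t

print(determine_threshold(np.random.laplace(size=10000)))
```
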
  • Publication number: 20220374716
    Abstract: A non-transitory computer-readable storage medium storing a machine learning program that causes at least one computer to execute a process, the process including: acquiring a calculation amount of each partial network of a plurality of partial networks included in a neural network; determining a target channel based on the calculation amount of each partial network and a scaling coefficient of each channel in a batch normalization layer included in each partial network; and deleting the target channel.
    Type: Application
    Filed: March 21, 2022
    Publication date: November 24, 2022
    Applicant: FUJITSU LIMITED
    Inventors: Hong GAO, Yasufumi SAKAI
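
One way to read the channel selection in publication 20220374716 is sketched below: each channel's batch-normalization scaling coefficient (gamma) is combined with the calculation amount of the partial network it belongs to, and the lowest-ranked channels become pruning targets. The exact combination rule (|gamma| divided by FLOPs) and the 30% pruning budget are assumptions.

```python
import numpy as np

def select_target_channels(partial_networks, prune_fraction=0.3):
    # partial_networks: list of (name, flops, gammas); gammas holds the
    # per-channel BN scaling coefficients of that partial network
    scored = []
    for name, flops, gammas in partial_networks:
        for ch, g in enumerate(gammas):
            # small |gamma| in an expensive block -> strong pruning candidate
            scored.append((abs(g) / flops, name, ch))
    scored.sort()
    n_prune = int(len(scored) * prune_fraction)
    return [(name, ch) for _, name, ch in scored[:n_prune]]

nets = [("block1", 2.0e8, np.random.rand(64)),
        ("block2", 5.0e7, np.random.rand(128))]
print(select_target_channels(nets)[:5])
```
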
  • Publication number: 20220172022
    Abstract: A non-transitory computer-readable storage medium storing a quantization program that causes at least one computer to execute a process, the process including: calculating, for all layers of a neural network, differences, each between a trust region radius threshold and a quantization error at a first bit width that is one step narrower than a second bit width; calculating, based on the differences, a scaling coefficient for each of the layers; updating a trust region radius by using the smallest value among the scaling coefficients; and quantizing a parameter of the neural network by a third bit width set based on the trust region.
    Type: Application
    Filed: October 14, 2021
    Publication date: June 2, 2022
    Applicant: FUJITSU LIMITED
    Inventor: Yasufumi Sakai
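
The overall flow of publication 20220172022 (per-layer differences, smallest scaling coefficient, radius update, bit-width choice) is sketched below in NumPy. The mapping from a difference to a scaling coefficient is an assumption made purely for illustration; the abstract does not specify it.

```python
import numpy as np

def quant_error(v, bits):
    scale = np.max(np.abs(v)) / (2 ** (bits - 1) - 1)
    return float(np.mean(np.abs(v - np.round(v / scale) * scale)))

def update_trust_region(layers, radius, bit_widths):
    # per-layer difference between the trust-region radius threshold and the
    # quantization error at the bit width one step narrower than the current one
    diffs = [radius - quant_error(v, b - 1) for v, b in zip(layers, bit_widths)]
    # assumed mapping from differences to scaling coefficients: grow the radius
    # when a layer has slack, shrink it when the narrower width already exceeds it
    coeffs = [2.0 if d > 0 else 0.5 for d in diffs]
    radius *= min(coeffs)               # smallest scaling coefficient wins
    new_bits = [b - 1 if quant_error(v, b - 1) <= radius else b
                for v, b in zip(layers, bit_widths)]
    return radius, new_bits

layers = [np.random.randn(1000) * s for s in (0.1, 1.0)]
print(update_trust_region(layers, radius=0.05, bit_widths=[8, 8]))
```
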
  • Publication number: 20220027758
    Abstract: First clustering is performed on a plurality of samples each including time-series measurement values of power consumption to thereby generate a plurality of first clusters. The plurality of first clusters are each classified as a second cluster satisfying a determination condition or a third cluster that does not satisfy the determination condition. The determination condition includes at least one of a first criterion in which the variance of correlation values between samples is less than a first threshold and a second criterion in which the average of the correlation values exceeds a second threshold. Second clustering is performed on samples included in the third cluster to divide the third cluster into a plurality of fourth clusters. Training data for use in generation of a model for predicting power consumption is generated based on the second cluster and at least one of the fourth clusters.
    Type: Application
    Filed: April 12, 2021
    Publication date: January 27, 2022
    Applicant: FUJITSU LIMITED
    Inventors: Enxhi Kreshpa, Shigeto SUZUKI, Yasufumi Sakai, Takashi Shiraishi, Takuji YAMAMOTO
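
A rough sketch of the two-stage clustering in publication 20220027758 follows. The cluster counts, the thresholds, and the use of k-means are assumptions; the abstract only specifies the correlation-based acceptance test and the re-clustering of clusters that fail it.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_power_samples(samples, k1=8, k2=3, var_thr=0.05, mean_thr=0.8):
    labels = KMeans(n_clusters=k1, n_init=10).fit_predict(samples)
    accepted = []
    for c in range(k1):
        members = samples[labels == c]
        if len(members) < 2:
            continue
        corr = np.corrcoef(members)                     # pairwise correlations
        vals = corr[np.triu_indices_from(corr, k=1)]
        if vals.var() < var_thr or vals.mean() > mean_thr:
            accepted.append(members)                    # "second" cluster: keep
        else:                                           # "third" cluster: split again
            sub = KMeans(n_clusters=min(k2, len(members)),
                         n_init=10).fit_predict(members)
            accepted.extend(members[sub == s] for s in np.unique(sub))
    return accepted                                     # basis for training data

samples = np.random.rand(200, 24)   # e.g. 200 daily curves of hourly power use
print(len(cluster_power_samples(samples)))
```
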
  • Publication number: 20210216867
    Abstract: A processor quantizes a plurality of first intermediate data obtained from training into intermediate data of a first fixed-point number according to a first fixed-point number format, obtains a first quantization error between the first intermediate data and the intermediate data of the first fixed-point number, quantizes the first intermediate data into intermediate data of a second fixed-point number according to a second fixed-point number format, and obtains a second quantization error between the first intermediate data and the intermediate data of the second fixed-point number. The processor compares the first quantization error with the second quantization error, determines as the fixed-point number format to use the format having the lower of the two quantization errors, and executes the training operation with intermediate data of a fixed-point number obtained by quantizing the plurality of first intermediate data according to the determined fixed-point number format.
    Type: Application
    Filed: November 25, 2020
    Publication date: July 15, 2021
    Applicant: FUJITSU LIMITED
    Inventor: Yasufumi Sakai
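
A minimal NumPy sketch of the format selection in publication 20210216867: the same intermediate data is quantized with two candidate fixed-point formats and the format with the smaller quantization error is used for the training operation. The two candidate formats (8 and 12 fractional bits in a 16-bit word) are illustrative assumptions.

```python
import numpy as np

def fixed_point(v, frac_bits, bits=16):
    scale = 2.0 ** (-frac_bits)
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return np.clip(np.round(v / scale), lo, hi) * scale

def pick_format(intermediate, format_a=8, format_b=12):
    err_a = np.mean(np.abs(intermediate - fixed_point(intermediate, format_a)))
    err_b = np.mean(np.abs(intermediate - fixed_point(intermediate, format_b)))
    chosen = format_a if err_a <= err_b else format_b
    return chosen, fixed_point(intermediate, chosen)

grads = np.random.randn(1024) * 0.01
print(pick_format(grads)[0])
```
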
  • Publication number: 20210097397
    Abstract: An information processing apparatus includes: a memory; and a processor configured to: execute a predetermined operation on each of pieces of input data so as to generate pieces of first operation result data that are results of the predetermined operation; acquire statistical information regarding a distribution of the digits of the unsigned most significant bits of the pieces of first operation result data; store the pieces of first operation result data in a register based on a predetermined data type; and execute a saturation process or a rounding process on the pieces of first operation result data based on, out of a first data type and a second data type that represent operation result data with a predetermined bit width, the second data type having a narrower bit width than the first data type, so as to generate pieces of second operation result data.
    Type: Application
    Filed: September 11, 2020
    Publication date: April 1, 2021
    Applicant: FUJITSU LIMITED
    Inventor: Yasufumi Sakai
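
The statistics-driven narrowing in publication 20210097397 can be approximated as follows: gather the distribution of most-significant-unsigned-bit positions of the operation results, then round and saturate them into a narrower data type whose range is set from that distribution. The 32-bit accumulator, 8-bit target, and 99% coverage are assumptions.

```python
import numpy as np

def msb_positions(results):
    mags = np.abs(results.astype(np.int64))
    return np.where(mags > 0,
                    np.floor(np.log2(np.maximum(mags, 1))), 0).astype(int)

def to_narrow_type(results, target_bits=8, coverage=0.99):
    stats = np.bincount(msb_positions(results), minlength=64)   # digit histogram
    cum = np.cumsum(stats) / stats.sum()
    top = int(np.searchsorted(cum, coverage))    # digit covering most results
    shift = max(top - (target_bits - 2), 0)      # bits dropped by rounding
    rounded = np.round(results / 2.0 ** shift)
    lo, hi = -(2 ** (target_bits - 1)), 2 ** (target_bits - 1) - 1
    return np.clip(rounded, lo, hi).astype(np.int8), shift       # saturate

acc = (np.random.randn(4096) * 3000).astype(np.int32)
print(to_narrow_type(acc)[1])
```
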
  • Patent number: 10954113
    Abstract: A flexible container (80) includes a reservoir part (82) that reserves a liquid and a discharge passage part (81) that communicates with the reservoir part (82) to take out the liquid. A discharge pump includes a plurality of roller parts (50) that press the discharge passage part (81) and an endless transfer mechanism unit (60) that has the plurality of roller parts (50) attached at a predetermined interval and causes the roller parts (50) to go around. The endless transfer mechanism unit (60) has an arrangement including a straight-line portion in which the roller parts (50) linearly move, in a path of the go-around movement. An attachment interval of the plurality of roller parts (50) is set to such an interval that before a certain roller part (50) going around reaches a position where the certain roller part (50) does not press the discharge passage part (81), a subsequent roller part (50) is able to reach a position where the subsequent roller part (50) presses the discharge passage part (81).
    Type: Grant
    Filed: October 20, 2017
    Date of Patent: March 23, 2021
    Assignee: SUNTORY HOLDINGS LIMITED
    Inventors: Yasufumi Sakai, Yuji Suzuki, Masatoshi Aihara, Hiroki Yokoyama
  • Publication number: 20210081802
    Abstract: An information processing device includes a memory; and a processor coupled to the memory and configured to: calculate a quantization error when a variable to be used in a neural network is quantized, generate a threshold value based on reference information related to a first recognition rate obtained by past learning of the neural network and a second recognition rate obtained by calculation of the neural network, determine, among the variables to be used for calculation of the neural network, a variable whose data type is to be quantized based on the calculated quantization error and the generated threshold value, and execute the calculation of the neural network by using the variable of the determined data type.
    Type: Application
    Filed: August 20, 2020
    Publication date: March 18, 2021
    Applicant: FUJITSU LIMITED
    Inventor: Yasufumi Sakai
  • Publication number: 20210081801
    Abstract: An information processing apparatus includes a memory; and a processor coupled to the memory and configured to: quantize at least one of the variables used in a neural network, add predetermined noise to each of the at least one variable, and execute the neural network by using the at least one quantized variable to which the predetermined noise has been added.
    Type: Application
    Filed: August 7, 2020
    Publication date: March 18, 2021
    Applicant: FUJITSU LIMITED
    Inventor: Yasufumi Sakai
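
The quantize-then-perturb idea in publication 20210081801 fits in a few lines of NumPy. Uniform noise at half a quantization step is an assumption; the abstract only says that predetermined noise is added to the quantized variables.

```python
import numpy as np

def quantize_with_noise(v, bits=8, rng=np.random.default_rng(0)):
    scale = np.max(np.abs(v)) / (2 ** (bits - 1) - 1)
    q = np.round(v / scale) * scale                     # quantized variable
    noise = rng.uniform(-0.5, 0.5, size=v.shape) * scale  # predetermined noise
    return q + noise

w = np.random.randn(256, 256).astype(np.float32)
print(np.abs(w - quantize_with_noise(w)).mean())
```
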
  • Publication number: 20210081783
    Abstract: An information processing apparatus includes: a memory configured to store program instructions to perform quantization on quantization target data; and a processor configured to execute the program instructions stored in the memory, the program instructions including: obtaining a distribution of appearance frequencies of a plurality of variable elements included in the quantization target data; and aligning a most significant bit position of a quantization position to a variable element smaller than a variable element of a maximum value among the plurality of variable elements based on the distribution of the appearance frequencies of the plurality of variable elements.
    Type: Application
    Filed: July 16, 2020
    Publication date: March 18, 2021
    Applicant: FUJITSU LIMITED
    Inventors: Yasufumi Sakai, Sosaku Moriki
  • Publication number: 20210081785
    Abstract: An information processing device, includes a memory; and a processor coupled to the memory and configured to: determine a plurality of bit ranges after quantization for at least one of a plurality of types of variables to be used in a neural network, calculate a plurality of recognition rates of the neural network by using each of a plurality of variable groups which includes the plurality of types of variables, and in which a bit range of at least one of the plurality of types of variables is different, and determine to use a variable group of the plurality of variable groups, the variable group having a maximum recognition rate among the plurality of calculated recognition rates, for calculation of the neural network.
    Type: Application
    Filed: August 26, 2020
    Publication date: March 18, 2021
    Applicant: FUJITSU LIMITED
    Inventor: Yasufumi Sakai
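
The search in publication 20210081785 can be sketched as an exhaustive sweep: several candidate bit ranges are assigned to the variable types used by the network, the recognition rate is measured for each resulting variable group, and the group with the highest rate is kept. The evaluate callable and the toy evaluator below are assumptions supplied for illustration.

```python
import itertools

def pick_variable_group(variable_types, candidate_bit_ranges, evaluate):
    best_group, best_rate = None, -1.0
    for bits in itertools.product(candidate_bit_ranges,
                                  repeat=len(variable_types)):
        group = dict(zip(variable_types, bits))
        rate = evaluate(group)            # recognition rate with this group
        if rate > best_rate:
            best_group, best_rate = group, rate
    return best_group, best_rate

# toy usage with an evaluator that simply prefers wider bit ranges
group, rate = pick_variable_group(
    ["weight", "activation", "gradient"], [8, 16],
    evaluate=lambda g: sum(g.values()) / 48.0)
print(group, rate)
```
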
  • Publication number: 20200334521
    Abstract: An information processing device that executes calculation of a neural network includes a memory; and a processor coupled to the memory, the processor configured to: set a division position for quantization of a variable to be used for the calculation so that a quantization error based on a difference between the variable before the quantization and the variable after the quantization is reduced; and quantize the variable based on the set division position.
    Type: Application
    Filed: April 6, 2020
    Publication date: October 22, 2020
    Applicant: FUJITSU LIMITED
    Inventor: Yasufumi Sakai