Patents by Inventor Daichi MURATA

Daichi MURATA has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240070463
    Abstract: The compression algorithm to be applied is optimized in units of subgraphs of a neural network. A preferred aspect of the present invention is an information processing device that selects an algorithm for compressing a neural network. The information processing device includes a subgraph dividing section, which divides the neural network into subgraphs, and an optimizing section, which outputs a compression configuration in which one compression technique, selected from a plurality of compression techniques, is associated with each of the subgraphs (an illustrative sketch follows this listing).
    Type: Application
    Filed: September 22, 2021
    Publication date: February 29, 2024
    Applicant: Hitachi Astemo, Ltd.
    Inventors: Daichi MURATA, Akira KITAYAMA, Hiroaki ITO, Masayoshi KURODA
  • Patent number: 11886874
    Abstract: An arithmetic operation device causes a convolution arithmetic unit to perform a convolution arithmetic operation between a filter and target data corresponding to the size of the filter in each of a plurality of convolution layers constituting a neural network. The arithmetic operation device includes a bit reduction unit and a bit addition unit. For each convolution layer, the bit reduction unit reduces a bit string corresponding to a first bit number from the least significant bit of the target data and reduces a bit string corresponding to a second bit number from the least significant bit of a weight that is an element of the filter. The target data and the weight reduced by the bit reduction unit are input to the convolution arithmetic unit, and the bit addition unit adds a bit string corresponding to a third bit number, obtained by adding the first bit number and the second bit number, to the least significant bit of the convolution arithmetic operation result output from the convolution arithmetic unit (an illustrative sketch of the bit reduction and bit addition follows this listing).
    Type: Grant
    Filed: April 8, 2020
    Date of Patent: January 30, 2024
    Assignee: HITACHI ASTEMO, LTD.
    Inventors: Tadashi Kishimoto, Goichi Ono, Akira Kitayama, Daichi Murata
  • Publication number: 20230005244
    Abstract: When compression is applied to a large-scale deep neural network for autonomous driving, a large number of harmful or unnecessary training images (invalid training images) causes a decrease in the recognition accuracy of the post-compression neural network (NN) model and an increase in the compression design period. A training image selection unit B100 calculates an influence value of each image on an inference and, using the influence value, generates an indexed training image set 1004-1 necessary for the NN compression design. A neural network compression unit P200, notified of the result via a memory P300, compresses the NN (an illustrative sketch of influence-based image selection follows this listing).
    Type: Application
    Filed: October 30, 2020
    Publication date: January 5, 2023
    Applicant: Hitachi Astemo, Ltd.
    Inventor: Daichi MURATA
  • Publication number: 20220236985
    Abstract: An arithmetic operation device causes a convolution arithmetic unit to perform a convolution arithmetic operation between a filter and target data corresponding to the size of the filter in each of a plurality of convolution layers constituting a neural network. The arithmetic operation device includes a bit reduction unit and a bit addition unit. For each convolution layer, the bit reduction unit reduces a bit string corresponding to a first bit number from the least significant bit of the target data and reduces a bit string corresponding to a second bit number from the least significant bit of a weight that is an element of the filter. The target data and the weight reduced by the bit reduction unit are input to the convolution arithmetic unit, and the bit addition unit adds a bit string corresponding to a third bit number, obtained by adding the first bit number and the second bit number, to the least significant bit of the convolution arithmetic operation result output from the convolution arithmetic unit (the bit reduction and bit addition sketch following this listing also applies here).
    Type: Application
    Filed: April 8, 2020
    Publication date: July 28, 2022
    Applicant: HITACHI ASTEMO, LTD.
    Inventors: Tadashi KISHIMOTO, Goichi ONO, Akira KITAYAMA, Daichi MURATA
  • Publication number: 20220129704
    Abstract: A computing device includes: an inference circuit that calculates a recognition result of a recognition target and the reliability of the recognition result, using sensor data from a sensor group that detects the recognition target and a first classifier that classifies the recognition target; and a classification circuit that classifies the sensor data into either an associated target, with which the recognition result is associated, or a non-associated target, with which the recognition result is not associated, based on the reliability of the recognition result calculated by the inference circuit (an illustrative sketch follows this listing).
    Type: Application
    Filed: October 21, 2019
    Publication date: April 28, 2022
    Applicant: Hitachi Astemo, Ltd.
    Inventor: Daichi MURATA
  • Publication number: 20220092395
    Abstract: A computing device having input data and a neural network which performs an operation using weighting factors includes a network analyzing unit which calculates the firing (ignition) state of the neurons of the neural network for the input data, and a contracting unit which, based on that firing state, narrows down the candidates from a plurality of contraction patterns, each of which sets a contraction rate for the neural network, and executes the contraction of the neural network based on the narrowed-down candidates to generate a post-contraction neural network (an illustrative sketch follows this listing).
    Type: Application
    Filed: October 11, 2019
    Publication date: March 24, 2022
    Applicant: HITACHI ASTEMO, LTD.
    Inventor: Daichi MURATA
  • Publication number: 20200250529
    Abstract: An arithmetic device receives input data, a neural network, and hyperparameters, and optimizes the hyperparameters. The arithmetic device includes: a sensitivity analysis part which inputs the input data to the neural network and calculates, for each hyperparameter, a sensitivity with respect to the recognition accuracy of the neural network; an optimization part which holds a plurality of kinds of optimization algorithms, selects an optimization algorithm according to the sensitivity, and optimizes the hyperparameter with the selected optimization algorithm; and a reconfiguration part which reconfigures the neural network on the basis of the optimized hyperparameters (an illustrative sketch follows this listing).
    Type: Application
    Filed: January 29, 2020
    Publication date: August 6, 2020
    Inventor: Daichi MURATA
  • Publication number: 20200012926
    Abstract: Provided is a learning device for a neural network including a bitwidth reducing unit, a learning unit, and a memory. The bitwidth reducing unit executes a first quantization that applies a first quantization area to numerical values to be calculated in a neural network model. The learning unit performs learning on the neural network model to which the first quantization has been applied. The bitwidth reducing unit then executes a second quantization that applies a second quantization area to numerical values to be calculated in the trained neural network model. The memory stores the neural network model to which the second quantization has been applied (an illustrative sketch follows this listing).
    Type: Application
    Filed: July 2, 2019
    Publication date: January 9, 2020
    Applicant: HITACHI, LTD.
    Inventor: Daichi MURATA
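
Illustrative code sketches

The subgraph-wise selection described in publication 20240070463 can be pictured as follows. This is a minimal sketch, not the patented method: the toy "network" is just a list of weight matrices, and divide_into_subgraphs, prune_small, quantize_8bit, and the proxy metric are assumed placeholder stand-ins for the subgraph dividing section, the candidate compression techniques, and the optimizing section.

```python
import numpy as np

rng = np.random.default_rng(0)

def prune_small(weights, ratio=0.5):
    """Candidate technique A: zero out the smallest-magnitude weights."""
    w = weights.copy()
    w[np.abs(w) < np.quantile(np.abs(w), ratio)] = 0.0
    return w

def quantize_8bit(weights):
    """Candidate technique B: uniform 8-bit quantization."""
    scale = max(np.abs(weights).max() / 127.0, 1e-12)
    return np.round(weights / scale) * scale

TECHNIQUES = {"prune": prune_small, "quant8": quantize_8bit}

def divide_into_subgraphs(layers, size=2):
    """Stand-in for the subgraph dividing section: group consecutive layers."""
    return [layers[i:i + size] for i in range(0, len(layers), size)]

def optimize_compression(subgraphs, evaluate):
    """Stand-in for the optimizing section: associate, with each subgraph,
    the compression technique that scores best under the given metric."""
    return {i: max(TECHNIQUES,
                   key=lambda name: evaluate([TECHNIQUES[name](w) for w in sub]))
            for i, sub in enumerate(subgraphs)}

layers = [rng.standard_normal((8, 8)) for _ in range(6)]              # toy "network"
proxy_metric = lambda sub: -sum(float(np.abs(w).sum()) for w in sub)  # placeholder metric
print(optimize_compression(divide_into_subgraphs(layers), proxy_metric))
```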
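The bit reduction and bit addition units of patent 11886874 (and the corresponding application 20220236985) amount to truncating low-order bits before the multiply-accumulate and shifting the result back afterwards. The sketch below assumes plain integer arrays and a single filter window; conv_with_bit_reduction and the bit counts n1 and n2 are illustrative names, not terms from the patent.

```python
import numpy as np

def conv_with_bit_reduction(data, weights, n1, n2):
    """data, weights: integer arrays; n1 / n2: bits dropped from data / weights."""
    data_r = data >> n1          # bit reduction unit: drop n1 LSBs of the target data
    weights_r = weights >> n2    # ...and n2 LSBs of the filter weights
    acc = int(np.sum(data_r * weights_r))    # convolution arithmetic unit (one window)
    return acc << (n1 + n2)      # bit addition unit: append n1 + n2 bits to the result

data = np.array([120, 64, 200, 48], dtype=np.int64)
weights = np.array([3, -5, 2, 7], dtype=np.int64)
exact = int(np.sum(data * weights))
approx = conv_with_bit_reduction(data, weights, n1=2, n2=1)
print(exact, approx)   # the shifted result approximates the full-precision sum
```

Because the low-order bits are discarded before the multiplication, the multiplier and accumulator can be narrower; the final shift restores the magnitude of the result, not the lost precision.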
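For publication 20230005244, selecting training images by an influence value can be sketched as below. The influence proxy (the gap between the current prediction and the label), the placeholder classifier, and keep_ratio are assumptions made for illustration; the patent's actual influence computation is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

def influence_value(sample, label, predict):
    """Toy influence proxy: gap between the current prediction and the label."""
    return abs(predict(sample) - label)

def select_training_images(samples, labels, predict, keep_ratio=0.5):
    """Keep the indices of the most influential images (the indexed image set)."""
    scores = np.array([influence_value(s, y, predict)
                       for s, y in zip(samples, labels)])
    order = np.argsort(scores)                           # least influential first
    keep = order[int(len(order) * (1 - keep_ratio)):]    # drop low-influence images
    return sorted(keep.tolist())

samples = rng.standard_normal(10)
labels = (samples > 0).astype(float)
predict = lambda x: 1.0 / (1.0 + np.exp(-x))             # placeholder classifier
print(select_training_images(samples, labels, predict))
```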
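Publication 20220129704 splits sensor data according to the reliability of the recognition result. A minimal sketch, assuming a scalar reliability score and a fixed threshold; Recognition, classify_sensor_data, and the toy inference function are invented names, not the patent's.

```python
from dataclasses import dataclass

@dataclass
class Recognition:
    label: str
    reliability: float   # e.g. confidence reported by the first classifier

def classify_sensor_data(readings, infer, threshold=0.8):
    """Classification circuit: associate the result only when reliability is high."""
    associated, non_associated = [], []
    for reading in readings:
        result = infer(reading)                      # inference circuit
        if result.reliability >= threshold:
            associated.append((reading, result.label))
        else:
            non_associated.append(reading)
    return associated, non_associated

# Toy inference: "detect" a pedestrian with confidence proportional to the reading.
infer = lambda x: Recognition("pedestrian", reliability=min(1.0, x / 10.0))
print(classify_sensor_data([2.0, 9.5, 7.0], infer))
```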
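For publication 20220092395, the firing ("ignition") state of the neurons is used to narrow the candidate contraction patterns before the network is contracted. The sketch below assumes a single ReLU layer, a fixed firing-rate threshold, and hand-picked candidate patterns; all of these, and the function names, are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def firing_rates(inputs, weights, bias):
    """Network analyzing unit: fraction of inputs for which each neuron fires."""
    activations = np.maximum(inputs @ weights + bias, 0.0)   # single ReLU layer
    return (activations > 0).mean(axis=0)

def contract(inputs, weights, bias, patterns, threshold=0.3):
    """Contracting unit: keep only candidate patterns whose neurons rarely fire,
    pick the largest surviving pattern, and zero out those neurons."""
    rates = firing_rates(inputs, weights, bias)
    candidates = [p for p in patterns if rates[p].mean() < threshold]
    best = max(candidates, key=len) if candidates else []
    pruned = weights.copy()
    pruned[:, best] = 0.0                     # post-contraction neural network
    return pruned, best

inputs = rng.standard_normal((100, 4))
weights = rng.standard_normal((4, 6))
bias = np.array([0.0, 0.0, -3.0, -3.0, -3.0, 0.0])   # neurons 2-4 rarely fire
patterns = [[0, 1], [2, 3, 4], [5]]                   # candidate contraction patterns
_, removed = contract(inputs, weights, bias, patterns)
print("removed neurons:", removed)
```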
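Publication 20200250529 selects an optimization algorithm per hyperparameter according to its sensitivity. In the sketch below, the sensitivity is a finite-difference estimate and the two "algorithms" are random search and a coarse coordinate sweep; these choices, the threshold, and the toy objective are assumptions, not the patent's algorithms.

```python
import random

random.seed(0)

def sensitivity(evaluate, params, name, delta=0.1):
    """Sensitivity analysis part: finite-difference change in the metric."""
    lo, hi = dict(params), dict(params)
    lo[name] *= (1 - delta)
    hi[name] *= (1 + delta)
    return abs(evaluate(hi) - evaluate(lo))

def optimize(evaluate, params, trials=20):
    """Optimization part: random search for sensitive hyperparameters,
    a coarse coordinate sweep for insensitive ones (placeholder algorithms)."""
    best = dict(params)
    for name in params:
        if sensitivity(evaluate, best, name) > 0.005:
            candidates = [{**best, name: best[name] * random.uniform(0.5, 2.0)}
                          for _ in range(trials)]
        else:
            candidates = [{**best, name: best[name] * f} for f in (0.5, 1.0, 2.0)]
        best = max(candidates + [best], key=evaluate)
    return best   # a reconfiguration part would rebuild the network from this

# Toy objective standing in for recognition accuracy of the reconfigured network.
evaluate = lambda p: -(p["lr"] - 0.01) ** 2 - 0.001 * abs(p["width"] - 64)
print(optimize(evaluate, {"lr": 0.05, "width": 32}))
```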
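Publication 20200012926 applies a first quantization, retrains the quantized model, then applies a second quantization before storing it. The sketch below substitutes a linear least-squares model for the neural network and uses uniform 8-bit and 4-bit quantization as the two quantization areas; those widths and the training loop are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def quantize(weights, bits):
    """Bitwidth reducing unit: uniform quantization to the given bit width."""
    scale = np.abs(weights).max() / (2 ** (bits - 1) - 1)
    return np.round(weights / scale) * scale

def train_step(weights, inputs, targets, lr=0.05):
    """Learning unit (placeholder): one gradient step on a linear least-squares model."""
    grad = inputs.T @ (inputs @ weights - targets) / len(inputs)
    return weights - lr * grad

inputs = rng.standard_normal((64, 4))
true_w = np.array([1.0, -2.0, 0.5, 3.0])
targets = inputs @ true_w

weights = rng.standard_normal(4)
weights = quantize(weights, bits=8)      # first quantization area
for _ in range(200):                     # learning on the quantized model
    weights = train_step(weights, inputs, targets)
weights = quantize(weights, bits=4)      # second quantization area
print(weights)                           # the memory would store this model
```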