Patents by Inventor Shuhei Nitta

Shuhei Nitta has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250078475
    Abstract: According to one embodiment, a training apparatus includes processing circuitry. The processing circuitry acquires a plurality of items of subject data and a plurality of items of incidental data corresponding to the plurality of items of subject data, calculates an importance of each of the plurality of items of subject data based on a distribution of the plurality of items of incidental data, determines, for each of the plurality of items of subject data, a number of items of training data according to the importance, generates a plurality of items of training data corresponding to the determined number, and iteratively trains a learning model on the plurality of items of training data for each of the plurality of items of subject data by unsupervised learning.
    Type: Application
    Filed: July 1, 2024
    Publication date: March 6, 2025
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Shun HIRAO, Shuhei NITTA
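    Sketch: The abstract does not specify how importance is derived from the incidental-data distribution; a minimal Python sketch, assuming importance is the inverse of the empirical density of a scalar incidental value (all function names are hypothetical):
      import numpy as np

      def allocate_training_counts(incidental, total_budget=1000, bins=10):
          # Items with rare incidental values get a larger share of the budget.
          hist, edges = np.histogram(incidental, bins=bins)
          idx = np.digitize(incidental, edges[1:-1])
          importance = 1.0 / (hist[idx] / len(incidental) + 1e-8)
          weights = importance / importance.sum()
          return np.maximum(1, np.round(weights * total_budget).astype(int))

      def generate_training_items(subject, counts, noise=0.01, rng=None):
          # Per subject item, emit the allocated number of jittered copies.
          rng = rng or np.random.default_rng(0)
          return [x + noise * rng.standard_normal(x.shape)
                  for x, n in zip(subject, counts) for _ in range(n)]

      rng = np.random.default_rng(0)
      subject = rng.standard_normal((100, 8))
      incidental = rng.exponential(1.0, size=100)
      items = generate_training_items(subject, allocate_training_counts(incidental))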
  • Publication number: 20250078455
    Abstract: According to one embodiment, a training apparatus includes processing circuitry. The processing circuitry acquires a plurality of items of subject data and a target cluster number, iteratively trains a learning model on the plurality of items of subject data by unsupervised learning based on learning conditions, estimates a feature cluster number based on a plurality of feature vectors corresponding to the plurality of items of subject data, and updates the learning conditions based on the feature cluster number and the target cluster number.
    Type: Application
    Filed: July 1, 2024
    Publication date: March 6, 2025
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Shun HIRAO, Shuhei NITTA
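    Sketch: One plausible reading of the cluster-number feedback loop, with k-means plus silhouette score standing in for the unspecified estimator and a "temperature" knob standing in for the learning conditions (both are assumptions):
      from sklearn.cluster import KMeans
      from sklearn.metrics import silhouette_score

      def estimate_cluster_number(features, k_max=10):
          # Pick the k that maximizes the silhouette score.
          scores = {}
          for k in range(2, k_max + 1):
              labels = KMeans(n_clusters=k, n_init=10,
                              random_state=0).fit_predict(features)
              scores[k] = silhouette_score(features, labels)
          return max(scores, key=scores.get)

      def update_conditions(conditions, k_est, k_target, step=0.1):
          # Nudge the granularity knob toward the target cluster count.
          if k_est > k_target:
              conditions["temperature"] *= 1 + step
          elif k_est < k_target:
              conditions["temperature"] *= 1 - step
          return conditions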
  • Publication number: 20250068697
    Abstract: A feature vector calculation apparatus includes processing circuitry. The processing circuitry is configured to: acquire a plurality of pieces of target data, a plurality of pieces of deformed data obtained by deforming the target data, and a trained model adapted to receive each piece of deformed data as input and output a feature vector; calculate the feature vector for each piece of deformed data using the trained model; and calculate, for each piece of target data, a degree of variation in the corresponding feature vectors.
    Type: Application
    Filed: August 21, 2024
    Publication date: February 27, 2025
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Shuhei NITTA, Shun HIRAO, Yasutaka FURUSHO
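    Sketch: The degree-of-variation computation, assuming a PyTorch feature extractor and additive Gaussian jitter as the (unspecified) deformation:
      import torch

      @torch.no_grad()
      def feature_variation(model, target, n_deform=8, noise=0.05):
          # Feature vectors for several random deformations of one target item.
          deformed = target.unsqueeze(0) + noise * torch.randn(n_deform, *target.shape)
          feats = model(deformed)                  # (n_deform, d)
          return feats.var(dim=0).mean().item()    # scalar degree of variation

      model = torch.nn.Sequential(torch.nn.Linear(16, 4))   # stand-in trained model
      scores = [feature_variation(model, x) for x in torch.randn(5, 16)]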
  • Publication number: 20250068985
    Abstract: A clustering apparatus includes processing circuitry. The processing circuitry is configured to: acquire target data, a first trained model adapted to receive input of the target data and output a first feature vector, and a second trained model adapted to receive input of the target data and output a second feature vector; calculate the first feature vector using the first trained model and the target data; calculate the second feature vector using the second trained model and the target data; calculate a second cluster by dividing the second feature vector; and integrate the first feature vector with the second cluster.
    Type: Application
    Filed: August 21, 2024
    Publication date: February 27, 2025
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Shuhei NITTA, Shun HIRAO, Yasutaka FURUSHO
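    Sketch: The abstract leaves the integration step open; one reading, sketched here, appends the second model's cluster assignment (one-hot) to the first model's feature vector:
      import numpy as np
      from sklearn.cluster import KMeans

      def integrate(first_feats, second_feats, n_clusters=5):
          # Divide the second feature vectors into clusters ...
          labels = KMeans(n_clusters=n_clusters, n_init=10,
                          random_state=0).fit_predict(second_feats)
          # ... and integrate the assignment with the first feature vectors.
          return np.hstack([first_feats, np.eye(n_clusters)[labels]]), labels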
  • Publication number: 20250045588
    Abstract: According to an embodiment, a learning method of optimizing a neural network includes updating and specifying. In the updating, each of a plurality of weight coefficients included in the neural network is updated so that an objective function, obtained by adding a basic loss function and an L2 regularization term multiplied by a regularization strength, is minimized. In the specifying, an inactive node and an inactive channel are specified among a plurality of nodes and a plurality of channels included in the neural network.
    Type: Application
    Filed: August 14, 2024
    Publication date: February 6, 2025
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Atsushi YAGUCHI, Wataru ASANO, Shuhei NITTA, Yukinobu SAKATA, Akiyuki TANIZAWA
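    Sketch: The objective is the basic loss plus an L2 term scaled by the regularization strength; a minimal PyTorch version, with the inactive-node test (all incoming weights near zero) as an assumption:
      import torch
      import torch.nn as nn

      model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
      opt = torch.optim.SGD(model.parameters(), lr=0.1)
      lam = 1e-3                                   # regularization strength

      x, y = torch.randn(32, 8), torch.randint(0, 2, (32,))
      for _ in range(100):
          loss = nn.functional.cross_entropy(model(x), y)
          l2 = sum((w ** 2).sum() for w in model.parameters())
          opt.zero_grad()
          (loss + lam * l2).backward()             # basic loss + lam * L2 term
          opt.step()

      # A hidden node is "inactive" if its incoming weights are (near) zero.
      inactive = (model[0].weight.abs().sum(dim=1) < 1e-3).nonzero().flatten()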
  • Publication number: 20250037016
    Abstract: According to one embodiment, a machine learning apparatus includes processing circuitry. The processing circuitry acquires a training sample including an object sample and a target value correlated with the object sample. The processing circuitry generates a first augmented sample by applying data augmentation to the object sample in accordance with a first data augmentation parameter. The processing circuitry generates, by machine learning based on the object sample, the target value, and the first augmented sample, a parameter output function that receives the object sample as input and outputs a second data augmentation parameter corresponding to the object sample.
    Type: Application
    Filed: February 29, 2024
    Publication date: January 30, 2025
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Yasutaka FURUSHO, Shuhei NITTA
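    Sketch: A minimal PyTorch sketch of learning a parameter output function, assuming the second augmentation parameter is a per-sample noise scale and the training signal is the task loss on the re-augmented sample (the abstract fixes neither choice):
      import torch
      import torch.nn as nn

      predictor = nn.Linear(8, 1)                     # stand-in task model
      param_net = nn.Sequential(nn.Linear(8, 8), nn.ReLU(),
                                nn.Linear(8, 1), nn.Softplus())
      opt = torch.optim.Adam(param_net.parameters(), lr=1e-2)

      x, y = torch.randn(64, 8), torch.randn(64, 1)   # object samples, targets
      for _ in range(200):
          sigma = param_net(x)                        # second augmentation parameter
          x_aug = x + sigma * torch.randn_like(x)     # augmentation at that strength
          loss = nn.functional.mse_loss(predictor(x_aug), y)
          opt.zero_grad(); loss.backward(); opt.step()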
  • Patent number: 12175365
    Abstract: According to one embodiment, a learning apparatus includes a setting unit, a training unit, and a display. The setting unit sets one or more second training conditions based on a first training condition relating to a first trained model. The training unit trains one or more neural networks in accordance with the one or more second training conditions and generates one or more second trained models which execute a task identical to a task executed by the first trained model. The display displays a graph showing an inference performance and calculation cost of each of the one or more second trained models.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: December 24, 2024
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Atsushi Yaguchi, Shuhei Nitta, Yukinobu Sakata, Akiyuki Tanizawa
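    Sketch: The displayed graph plots inference performance against calculation cost for each retrained model; a matplotlib sketch with hypothetical numbers:
      import matplotlib.pyplot as plt

      results = {"base": (1.00, 0.91), "slim-a": (0.55, 0.89), "slim-b": (0.30, 0.85)}
      for name, (cost, acc) in results.items():
          plt.scatter(cost, acc)
          plt.annotate(name, (cost, acc))
      plt.xlabel("relative calculation cost")
      plt.ylabel("inference performance")
      plt.show()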
  • Publication number: 20240353357
    Abstract: According to one embodiment, a crystal phase information extraction apparatus includes a first estimation unit and a second estimation unit. The first estimation unit estimates first crystal phase information on a first polycrystalline material. The second estimation unit performs, using the first crystal phase information, iterative optimization on diffraction data acquired from a second polycrystalline material having a component ratio of a crystal phase of interest smaller than that of the first polycrystalline material, and estimates second crystal phase information on the second polycrystalline material.
    Type: Application
    Filed: July 2, 2024
    Publication date: October 24, 2024
    Applicants: KABUSHIKI KAISHA TOSHIBA, TOSHIBA MATERIALS CO., LTD.
    Inventors: Shuhei NITTA, Naoyuki SANADA, Seiichi SUENAGA, Katsuyuki AOKI, Kentaro IWAI, Yoshihito YAMAGATA
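    Sketch: The iterative optimization resembles fitting the observed diffraction data as a mixture of per-phase reference patterns; a simplified non-negative least-squares stand-in (the patent's actual refinement procedure is not detailed in the abstract):
      import numpy as np
      from scipy.optimize import nnls

      def estimate_phase_fractions(pattern, reference_patterns):
          # Columns are reference diffraction patterns, one per crystal phase;
          # first-material phase information would seed or constrain this fit.
          A = np.column_stack(reference_patterns)
          fractions, _ = nnls(A, pattern)
          return fractions / fractions.sum()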
  • Publication number: 20240289635
    Abstract: According to one embodiment, a learning system includes a plurality of local devices and a server. Each of the local devices includes processing circuitry configured to determine a federated local training condition indicating a training condition in federated learning of a local model, based on preliminary local training information including a preliminary local training condition and a preliminary local training result obtained when a model is preliminarily trained using local data. The server includes processing circuitry configured to determine a global training condition of a global model based on the preliminary local training information.
    Type: Application
    Filed: October 11, 2023
    Publication date: August 29, 2024
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Albert RODRIGUEZ MULET, Shuhei NITTA, Ryusuke HIRAI, Yasutaka FURUSHO
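    Sketch: One plausible server-side rule for deriving the global training condition from the preliminary local reports, here a data-size-weighted average (the aggregation rule is an assumption):
      def choose_global_condition(reports):
          # Each report: {"lr": ..., "epochs": ..., "n": local data size}.
          total = sum(r["n"] for r in reports)
          lr = sum(r["lr"] * r["n"] for r in reports) / total
          epochs = round(sum(r["epochs"] * r["n"] for r in reports) / total)
          return {"lr": lr, "epochs": epochs}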
  • Patent number: 11926195
    Abstract: A gap formed in a front portion of a door is easily and reliably filled so that occurrence of noise is reduced. A frame front side portion retaining a glass run front side portion being inserted is provided in a front portion of a window frame. A front side of an outer panel is provided with a mirror base to which a mirror base cover is attached. A rear edge of the mirror base is provided with a rear plate portion extending to a cabin inner side. A gap is formed between the frame front side portion and the rear plate portion. An outer sealing plate portion overlapping with a cabin outer side of the mirror base is provided with a gap filling portion extending downward.
    Type: Grant
    Filed: October 22, 2021
    Date of Patent: March 12, 2024
    Assignee: Nishikawa Rubber Co., Ltd.
    Inventors: Haruka Yanoshita, Masaki Motodera, Keizo Matsuoka, Shuhei Nitta, Tatsuya Nagai
  • Publication number: 20240028902
    Abstract: According to one embodiment, a learning apparatus includes a processor. The processor trains a neural network model having a plurality of pathways and generates a trained model. The processor performs pruning on the trained model and calculates the number of remaining parameters of each of the pathways. The processor generates a candidate model for reconstruction by deleting any pathway in which the number of remaining parameters is equal to or less than a threshold. The processor determines whether or not deletion of a further pathway included in the candidate model for reconstruction is possible. If it is determined that deletion of a further pathway is possible, the candidate model for reconstruction is again subjected to the training, the pruning, and the generating.
    Type: Application
    Filed: February 24, 2023
    Publication date: January 25, 2024
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Albert RODRIGUEZ MULET, Shuhei NITTA, Ryusuke HIRAI
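    Sketch: The prune-count-delete loop body in PyTorch, with magnitude pruning and a dict of per-pathway modules standing in for the multi-pathway model (both assumptions):
      import torch.nn as nn

      def prune_and_count(pathways, sparsity=0.8):
          # Zero the smallest weights in each pathway, then count survivors.
          counts = {}
          for name, module in pathways.items():
              w = module.weight.data
              thresh = w.abs().flatten().kthvalue(int(sparsity * w.numel())).values
              w[w.abs() < thresh] = 0.0
              counts[name] = int((w != 0).sum())
          return counts

      pathways = {"a": nn.Linear(16, 16), "b": nn.Linear(16, 16)}
      counts = prune_and_count(pathways)
      candidate = {n: m for n, m in pathways.items() if counts[n] > 32}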
  • Publication number: 20240028901
    Abstract: According to one embodiment, a learning apparatus includes a processor. The processor performs, on a neural network model, adaptation processing that includes at least one of insertion of an activation function and correction of an existing activation function. The processor generates a trained model by training the neural network model on which the adaptation processing has been performed. The processor performs pruning on the trained model to generate a reconstructed model from which parameters have been reduced.
    Type: Application
    Filed: February 22, 2023
    Publication date: January 25, 2024
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Albert RODRIGUEZ MULET, Shuhei NITTA, Yoshiyuki KOKOJIMA, Ryusuke HIRAI, Yasutaka FURUSHO, Manabu NISHIYAMA, Yusuke NATSUI
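    Sketch: One reading of the adaptation step, inserting an activation between consecutive linear layers that lack one ("correction" might instead swap ReLU for a trainable PReLU); training and pruning then proceed as in the abstract:
      import torch.nn as nn

      def adapt_activations(seq):
          mods, out = list(seq), []
          for i, m in enumerate(mods):
              out.append(m)
              nxt = mods[i + 1] if i + 1 < len(mods) else None
              if isinstance(m, nn.Linear) and isinstance(nxt, nn.Linear):
                  out.append(nn.PReLU())           # inserted activation function
          return nn.Sequential(*out)

      model = adapt_activations(nn.Sequential(nn.Linear(8, 8), nn.Linear(8, 2)))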
  • Publication number: 20240005172
    Abstract: According to one embodiment, a learning system includes a plurality of local devices and a server. Each of the local devices includes a processor. The processor selects a mini-batch from local data. The processor trains a local model using the mini-batch. The processor generates local data information that relates to the local data included in the mini-batch and indicates information other than a label. The processor transmits a local model parameter relating to the local model and the local data information to the server. The server includes a processor. The processor calculates an integrated parameter using the local data information acquired from each of the local devices. The processor updates a global model using the integrated parameter and the local model parameter acquired from each of the local devices.
    Type: Application
    Filed: February 15, 2023
    Publication date: January 4, 2024
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Shuhei NITTA, Ryusuke HIRAI, Yoshiyuki KOKOJIMA
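    Sketch: A FedAvg-style aggregation where the label-free "local data information" is taken to be the mini-batch size (an assumption; the abstract only says it differs from a label):
      import numpy as np

      def server_aggregate(submissions):
          # submissions: list of (local_parameters, local_data_info) pairs.
          total = sum(info["batch_size"] for _, info in submissions)
          return sum(p * (info["batch_size"] / total) for p, info in submissions)

      submissions = [(np.ones(4), {"batch_size": 32}),
                     (np.zeros(4), {"batch_size": 96})]
      global_update = server_aggregate(submissions)   # -> array of 0.25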
  • Publication number: 20230297811
    Abstract: According to one embodiment, a learning apparatus includes a processor. The processor divides target data into pieces of partial data. The processor inputs the pieces of partial data into a first network model to output a first prediction result and calculates a first confidence indicating a degree of contribution to the first prediction result. The processor inputs the target data into a second network model to output a second prediction result and calculates a second confidence indicating a degree of contribution to the second prediction result. The processor updates a parameter of the first network model, based on the first prediction result, the second prediction result, the first confidence and the second confidence.
    Type: Application
    Filed: August 31, 2022
    Publication date: September 21, 2023
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Atsushi YAGUCHI, Shuhei NITTA, Akiyuki TANIZAWA, Ryusuke HIRAI
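    Sketch: A PyTorch sketch of the confidence-weighted update, with max-softmax as a stand-in for "degree of contribution" and a confidence-weighted distillation term (the exact combination rule is an assumption):
      import torch.nn.functional as F

      def confidence(logits):
          return F.softmax(logits, dim=-1).max(dim=-1).values

      def combined_loss(partial_logits, full_logits, target):
          c1, c2 = confidence(partial_logits), confidence(full_logits)
          task = F.cross_entropy(partial_logits, target, reduction="none")
          # Pull the partial-data model toward the full-data model where the
          # latter is relatively more confident.
          kd = F.mse_loss(partial_logits, full_logits.detach(),
                          reduction="none").mean(-1)
          return (task + (c2 / (c1 + c2)) * kd).mean()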
  • Patent number: 11747424
    Abstract: A magnetic resonance imaging apparatus according to an embodiment includes an MRI system and processing circuitry. The MRI system includes a receiving coil to receive a magnetic resonance signal. The processing circuitry is configured to generate an image based on the magnetic resonance signal, the image including a plurality of pixels; calculate a feature value corresponding to the signal value of each pixel; correct the feature values based on a sensitivity of the receiving coil; and reduce noise in the image based on the distribution of the corrected feature values.
    Type: Grant
    Filed: March 2, 2022
    Date of Patent: September 5, 2023
    Assignee: CANON MEDICAL SYSTEMS CORPORATION
    Inventors: Kenzo Isogawa, Toshiyuki Ono, Kenichi Shimoyama, Nobuyuki Matsumoto, Shuhei Nitta, Satoshi Kawata, Toshimitsu Kaneko, Mai Murashima
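    Sketch: A simplified numpy reading of the pipeline: divide out the coil sensitivity, then treat pixels whose corrected feature is an outlier of the distribution as noise (the actual feature and reduction steps are not detailed in the abstract):
      import numpy as np

      def reduce_noise(image, sensitivity, k=2.0):
          feat = image / np.maximum(sensitivity, 1e-6)   # sensitivity-corrected
          mu, sd = feat.mean(), feat.std()
          noisy = np.abs(feat - mu) > k * sd             # distribution outliers
          out = image.copy()
          out[noisy] = mu * sensitivity[noisy]           # pull back toward mean
          return out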
  • Patent number: 11704570
    Abstract: A learning device includes a structure search unit that searches for a first learned model structure obtained by selecting search space information in accordance with a target constraint condition of target hardware for each of a plurality of convolution processing blocks included in a base model structure in a neural network model; a parameter search unit that searches for a learning parameter of the neural network model in accordance with the target constraint condition; and a pruning unit that deletes a unit of at least one of the plurality of convolution processing blocks in the first learned model structure based on the target constraint condition and generates a second learned model structure.
    Type: Grant
    Filed: February 26, 2020
    Date of Patent: July 18, 2023
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Akiyuki Tanizawa, Wataru Asano, Atsushi Yaguchi, Shuhei Nitta, Yukinobu Sakata
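    Sketch: A greedy stand-in for the constraint-driven structure search: per convolution block, keep the widest candidate that still fits the remaining cost budget (the patent's search strategy is not specified in the abstract):
      def search_structure(blocks, budget):
          # blocks: per-block candidate lists of (width, cost) tuples.
          chosen, used = [], 0.0
          for candidates in blocks:
              fits = [c for c in candidates if used + c[1] <= budget]
              width, cost = (max(fits, key=lambda c: c[1]) if fits
                             else min(candidates, key=lambda c: c[1]))
              chosen.append(width)
              used += cost
          return chosen, used

      print(search_structure([[(32, 1.0), (64, 2.0)], [(32, 1.0), (64, 2.0)]], 3.0))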
  • Patent number: 11654553
    Abstract: A robot system according to an embodiment includes one or more processors. The processors acquire first input data predetermined as data affecting an operation of a robot. The processors calculate a calculation cost of inference processing using a machine learning model for inferring control data used for controlling the robot, on the basis of the first input data. The processors infer the control data by the machine learning model set according to the calculation cost. The processors control the robot using the inferred control data.
    Type: Grant
    Filed: February 25, 2020
    Date of Patent: May 23, 2023
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Shuhei Nitta, Atsushi Yaguchi, Yukinobu Sakata, Akiyuki Tanizawa, Yasutoyo Takeyama, Tomoki Watanabe
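    Sketch: The cost-adaptive inference step, with hypothetical field names and a fixed cost-estimate rule (the abstract leaves the cost model open):
      def infer_control(first_input, small_model, large_model, budget_ms=10.0):
          # E.g. a fast-moving robot leaves less time per inference.
          est_cost_ms = 20.0 if first_input["speed"] > 1.0 else 5.0
          model = small_model if est_cost_ms > budget_ms else large_model
          return model(first_input["sensor"])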
  • Patent number: 11640530
    Abstract: A learning device includes one or more processors. The processors acquire input data and a target label indicating a correct answer of inference based on the input data. The processors add noise to at least one of the input data and intermediate layer data of a neural network and perform inference by the neural network on the input data. The noise is based on the contributions of a plurality of elements included in the input data to the inference result obtained when the input data is input to the neural network. The processors update parameters of the neural network so that the inference result by the neural network matches the target label.
    Type: Grant
    Filed: February 24, 2020
    Date of Patent: May 2, 2023
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventor: Shuhei Nitta
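    Sketch: A PyTorch training step matching this description, with the input gradient used as the per-element contribution measure (a common choice, though the abstract does not name one):
      import torch
      import torch.nn.functional as F

      def train_step(model, opt, x, y, eps=0.1):
          x = x.clone().requires_grad_(True)
          loss = F.cross_entropy(model(x), y)
          contrib = torch.autograd.grad(loss, x)[0].abs()    # element contributions
          x_noisy = x.detach() + eps * contrib * torch.randn_like(x)
          loss = F.cross_entropy(model(x_noisy), y)          # inference under noise
          opt.zero_grad(); loss.backward(); opt.step()
          return loss.item()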
  • Publication number: 20230090616
    Abstract: According to one embodiment, a learning system includes a plurality of local devices and a server. Each of the local devices includes a processor. The processor of the local device selects a first parameter set from a plurality of parameters related to the local model and transmits the first parameter set to the server. At least one of the local devices differs from the other local devices in the size of its local model, in accordance with the resolution of its input data. The server includes a processor. The processor of the server integrates the first parameter sets acquired from the local devices and updates a global model. The processor of the server transmits a second parameter set to the local device that has transmitted the corresponding first parameter set.
    Type: Application
    Filed: February 23, 2022
    Publication date: March 23, 2023
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Shuhei NITTA, Atsushi YAGUCHI, Akiyuki TANIZAWA
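    Sketch: A numpy sketch of a server round where devices submit different-sized parameter subsets; each coordinate is averaged over the devices that reported it, and each device gets back its own slice (the indexing scheme is an assumption):
      import numpy as np

      def server_round(global_w, submissions):
          # submissions: {device: (indices, values)}.
          acc, cnt = np.zeros_like(global_w), np.zeros_like(global_w)
          for idx, vals in submissions.values():
              acc[idx] += vals
              cnt[idx] += 1
          mask = cnt > 0
          global_w[mask] = acc[mask] / cnt[mask]             # integrate first sets
          return {d: global_w[idx] for d, (idx, _) in submissions.items()}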
  • Patent number: 11604999
    Abstract: A learning device according to an embodiment includes one or more hardware processors configured to function as a generation unit, an inference unit, and a training unit. The generation unit generates input data with which an error between a value output from each of one or more target nodes and a preset aimed value is equal to or less than a preset value, the target nodes being in a target layer of a plurality of layers included in a first neural network. The inference unit causes the input data to propagate in a forward direction of the first neural network to generate output data. The training unit trains a second neural network differing from the first neural network by using training data including a set of the input data and the output data.
    Type: Grant
    Filed: February 26, 2020
    Date of Patent: March 14, 2023
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Wataru Asano, Akiyuki Tanizawa, Atsushi Yaguchi, Shuhei Nitta, Yukinobu Sakata
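    Sketch: The generation and distillation flow in PyTorch: optimize an input until the target-layer output approaches the aimed value, then use the first network's forward pass to label it for the second network (target-layer selection simplified to the network head):
      import torch
      import torch.nn.functional as F

      def synthesize_input(teacher, aim, shape=(1, 8), steps=200, lr=0.1):
          x = torch.randn(shape, requires_grad=True)
          opt = torch.optim.Adam([x], lr=lr)
          for _ in range(steps):
              loss = F.mse_loss(teacher(x), aim)   # error to the aimed value
              opt.zero_grad(); loss.backward(); opt.step()
          return x.detach()

      teacher = torch.nn.Linear(8, 4)              # stand-in first network
      x = synthesize_input(teacher, aim=torch.ones(1, 4))
      y = teacher(x)                               # output data for the training set
      # The second network is then trained on (x, y) by ordinary supervision.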