Patents by Inventor Atsushi YAGUCHI
Atsushi YAGUCHI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250045588
Abstract: According to an embodiment, a learning method of optimizing a neural network includes updating and specifying. In the updating, each of a plurality of weight coefficients included in the neural network is updated so that an objective function obtained by adding a basic loss function and an L2 regularization term multiplied by a regularization strength is minimized. In the specifying, an inactive node and an inactive channel are specified among a plurality of nodes and a plurality of channels included in the neural network.
Type: Application
Filed: August 14, 2024
Publication date: February 6, 2025
Applicant: KABUSHIKI KAISHA TOSHIBA
Inventors: Atsushi YAGUCHI, Wataru ASANO, Shuhei NITTA, Yukinobu SAKATA, Akiyuki TANIZAWA
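The abstract above describes minimizing a basic loss plus a scaled L2 penalty and then flagging inactive nodes and channels. A minimal NumPy sketch of that idea follows; the quadratic basic loss, the learning rate, and the inactivity threshold are illustrative assumptions, not details taken from the patent:

```python
import numpy as np

# Toy objective: 0.5*||W - target||^2 (basic loss) + lam*||W||^2 (L2 term).
# Channels that the basic loss does not need are driven toward zero by the
# regularizer and can then be flagged as inactive by a norm threshold.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))        # 4 channels x 3 weights each
target = np.zeros_like(W)
target[0] = [1.0, -2.0, 0.5]       # only channel 0 matters to the basic loss

lam = 0.1                          # regularization strength
lr = 0.1
for _ in range(500):
    grad = (W - target) + lam * 2.0 * W   # gradient of the combined objective
    W -= lr * grad

channel_norms = np.linalg.norm(W, axis=1)
inactive = np.where(channel_norms < 1e-3)[0]
print(inactive)                    # channels 1..3 shrink toward zero
```

Only channel 0 survives with a meaningful norm; the others decay geometrically under the L2 term and fall below the threshold.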
-
Patent number: 12175365
Abstract: According to one embodiment, a learning apparatus includes a setting unit, a training unit, and a display. The setting unit sets one or more second training conditions based on a first training condition relating to a first trained model. The training unit trains one or more neural networks in accordance with the one or more second training conditions and generates one or more second trained models which execute a task identical to a task executed by the first trained model. The display displays a graph showing an inference performance and calculation cost of each of the one or more second trained models.
Type: Grant
Filed: February 26, 2021
Date of Patent: December 24, 2024
Assignee: KABUSHIKI KAISHA TOSHIBA
Inventors: Atsushi Yaguchi, Shuhei Nitta, Yukinobu Sakata, Akiyuki Tanizawa
-
Publication number: 20230297811
Abstract: According to one embodiment, a learning apparatus includes a processor. The processor divides target data into pieces of partial data. The processor inputs the pieces of partial data into a first network model to output a first prediction result and calculates a first confidence indicating a degree of contribution to the first prediction result. The processor inputs the target data into a second network model to output a second prediction result and calculates a second confidence indicating a degree of contribution to the second prediction result. The processor updates a parameter of the first network model, based on the first prediction result, the second prediction result, the first confidence and the second confidence.
Type: Application
Filed: August 31, 2022
Publication date: September 21, 2023
Applicant: KABUSHIKI KAISHA TOSHIBA
Inventors: Atsushi YAGUCHI, Shuhei NITTA, Akiyuki TANIZAWA, Ryusuke HIRAI
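The abstract does not specify how the confidences are computed or combined. The sketch below is one plausible reading under stated assumptions: two hypothetical linear models (one seeing partial pieces, one the full input), confidence taken as the maximum softmax probability, and the partial-view model's update weighted by the product of the two confidences:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(1)
x_full = rng.normal(size=8)
parts = np.split(x_full, 2)                 # "pieces of partial data"

W1 = rng.normal(size=(3, 4)) * 0.1          # first model (sees partial pieces)
W2 = rng.normal(size=(3, 8)) * 0.1          # second model (sees full input)

def predict_partial(W):
    return softmax(sum(W @ p for p in parts))   # aggregate partial logits

gap_before = np.abs(predict_partial(W1) - softmax(W2 @ x_full)).max()
for _ in range(300):
    p1 = predict_partial(W1)
    p2 = softmax(W2 @ x_full)
    c1, c2 = p1.max(), p2.max()             # stand-in confidences
    grad_logits = c1 * c2 * (p1 - p2)       # confidence-weighted consistency gradient
    for p in parts:
        W1 -= 0.1 * np.outer(grad_logits, p)
gap_after = np.abs(predict_partial(W1) - softmax(W2 @ x_full)).max()
print(gap_after < gap_before)               # partial-view model moves toward full-view model
```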
-
Patent number: 11704570
Abstract: A learning device includes a structure search unit that searches for a first learned model structure obtained by selecting search space information in accordance with a target constraint condition of target hardware for each of a plurality of convolution processing blocks included in a base model structure in a neural network model; a parameter search unit that searches for a learning parameter of the neural network model in accordance with the target constraint condition; and a pruning unit that deletes a unit of at least one of the plurality of convolution processing blocks in the first learned model structure based on the target constraint condition and generates a second learned model structure.
Type: Grant
Filed: February 26, 2020
Date of Patent: July 18, 2023
Assignee: KABUSHIKI KAISHA TOSHIBA
Inventors: Akiyuki Tanizawa, Wataru Asano, Atsushi Yaguchi, Shuhei Nitta, Yukinobu Sakata
-
Patent number: 11654553
Abstract: A robot system according to an embodiment includes one or more processors. The processors acquire first input data predetermined as data affecting an operation of a robot. The processors calculate a calculation cost of inference processing using a machine learning model for inferring control data used for controlling the robot, on the basis of the first input data. The processors infer the control data by the machine learning model set according to the calculation cost. The processors control the robot using the inferred control data.
Type: Grant
Filed: February 25, 2020
Date of Patent: May 23, 2023
Assignee: KABUSHIKI KAISHA TOSHIBA
Inventors: Shuhei Nitta, Atsushi Yaguchi, Yukinobu Sakata, Akiyuki Tanizawa, Yasutoyo Takeyama, Tomoki Watanabe
-
Publication number: 20230090616
Abstract: According to one embodiment, a learning system includes a plurality of local devices and a server. Each of the local devices includes a processor. The processor of the local device selects a first parameter set from a plurality of parameters related to the local model, and transmits the first parameter set to the server. At least one of the local devices differs from the other local devices in the size of the local model, in accordance with the resolution of its input data. The server comprises a processor. The processor of the server integrates the first parameter sets acquired from the local devices and updates a global model. The processor of the server transmits the second parameter set to a local device that has transmitted the corresponding first parameter set.
Type: Application
Filed: February 23, 2022
Publication date: March 23, 2023
Applicant: KABUSHIKI KAISHA TOSHIBA
Inventors: Shuhei NITTA, Atsushi YAGUCHI, Akiyuki TANIZAWA
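The server-side integration step described above can be sketched as follows. This is a hedged illustration, not the patented procedure: each device submits values for only a subset of parameter indices (since local models differ in size), and the server averages whatever it received per index. Function and variable names are invented for the example:

```python
import numpy as np

def server_aggregate(global_params, submissions):
    """Average per-index parameter values received from local devices.

    submissions: list of (indices, values) pairs, one pair per device.
    Indices not covered by any submission keep their current global value.
    """
    updated = global_params.copy()
    totals = np.zeros_like(global_params)
    counts = np.zeros_like(global_params)
    for idx, vals in submissions:
        totals[idx] += vals
        counts[idx] += 1
    mask = counts > 0
    updated[mask] = totals[mask] / counts[mask]
    return updated

global_params = np.zeros(6)
# Two devices with different model sizes send different index subsets.
subs = [(np.array([0, 1, 2]), np.array([3.0, 3.0, 3.0])),
        (np.array([1, 2, 3, 4]), np.array([1.0, 1.0, 1.0, 1.0]))]
new_global = server_aggregate(global_params, subs)
print(new_global)   # [3. 2. 2. 1. 1. 0.]
```

Indices covered by both devices are averaged; index 5, reported by neither, is left at its previous global value.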
-
Patent number: 11604999
Abstract: A learning device according to an embodiment includes one or more hardware processors configured to function as a generation unit, an inference unit, and a training unit. The generation unit generates input data with which an error between a value output from each of one or more target nodes and a preset aimed value is equal to or less than a preset value, the target nodes being in a target layer of a plurality of layers included in a first neural network. The inference unit causes the input data to propagate in a forward direction of the first neural network to generate output data. The training unit trains a second neural network differing from the first neural network by using training data including a set of the input data and the output data.
Type: Grant
Filed: February 26, 2020
Date of Patent: March 14, 2023
Assignee: KABUSHIKI KAISHA TOSHIBA
Inventors: Wataru Asano, Akiyuki Tanizawa, Atsushi Yaguchi, Shuhei Nitta, Yukinobu Sakata
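The three units above form a data-free distillation loop: synthesize an input that drives target nodes toward aimed values, label it with the first network's forward pass, and fit the second network on that pair. A rough sketch under simplifying assumptions (a single linear "teacher" layer, squared error, a least-squares "student"), with all names illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
W_teacher = rng.normal(size=(2, 5))    # first network: one linear layer
aimed = np.array([1.0, -1.0])          # preset aimed values for target nodes

# Generation unit: gradient descent on x so ||W x - aimed||^2 becomes small.
x = np.zeros(5)
for _ in range(200):
    err = W_teacher @ x - aimed
    x -= 0.05 * (W_teacher.T @ err)

# Inference unit: forward pass of the generated input gives the label.
y = W_teacher @ x

# Training unit: least-squares "student" fit on the generated (x, y) pair.
# (With one sample this is underdetermined; lstsq returns the minimum-norm fit.)
W_student = np.linalg.lstsq(x[None, :], y[None, :], rcond=None)[0].T
print(np.allclose(W_student @ x, y))   # student reproduces the teacher on x
```

A real use would generate many such pairs across layers; one pair is enough to show the mechanics.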
-
Publication number: 20230056947
Abstract: According to one embodiment, a learning apparatus includes a processing circuit. The processing circuit acquires a first training condition and a first model trained in accordance with the first training condition; sets a second training condition, different from the first training condition, used to reduce the model size of the first model; trains, in accordance with the second training condition and based on the first model, a second model whose model size is smaller than that of the first model; and trains, in accordance with a third training condition that is not the same as the second training condition and complies with the first training condition, a third model based on the second model.
Type: Application
Filed: February 28, 2022
Publication date: February 23, 2023
Applicant: KABUSHIKI KAISHA TOSHIBA
Inventors: Shuhei NITTA, Yasutaka Furusho, Albert Rodriguez Mulet, Atsushi Yaguchi, Akiyuki TANIZAWA
-
Patent number: 11436490
Abstract: A providing apparatus according to an embodiment of the present disclosure includes a memory and a hardware processor coupled to the memory. The hardware processor is configured to: store, in the memory, a first machine learning model capable of changing an amount of calculation of a model of a neural network; acquire device information; set, based on the device information, extraction conditions representing conditions for extracting second machine learning models from the first machine learning model; extract the second machine learning models from the first machine learning model based on the extraction conditions; and provide the second machine learning models to a device specified by the device information.
Type: Grant
Filed: February 26, 2020
Date of Patent: September 6, 2022
Assignee: KABUSHIKI KAISHA TOSHIBA
Inventors: Akiyuki Tanizawa, Atsushi Yaguchi, Shuhei Nitta, Yukinobu Sakata
-
Publication number: 20220138569
Abstract: According to one embodiment, a learning apparatus includes a processing circuit. The processing circuit acquires first sequence data representing transition of inference performance according to a training progress of a first model trained in accordance with a first training parameter value concerning a specific training condition. The processing circuit performs iterative learning of a second model in accordance with a second training parameter value concerning the specific training condition and changes the second training parameter value based on the inference performance of the second model and the first sequence data in a training process of the second model.
Type: Application
Filed: August 31, 2021
Publication date: May 5, 2022
Applicant: KABUSHIKI KAISHA TOSHIBA
Inventors: Shuhei Nitta, Atsushi Yaguchi, Yukinobu Sakata, Akiyuki Tanizawa
-
Publication number: 20220083856
Abstract: According to one embodiment, a learning apparatus includes a setting unit, a training unit, and a display. The setting unit sets one or more second training conditions based on a first training condition relating to a first trained model. The training unit trains one or more neural networks in accordance with the one or more second training conditions and generates one or more second trained models which execute a task identical to a task executed by the first trained model. The display displays a graph showing an inference performance and calculation cost of each of the one or more second trained models.
Type: Application
Filed: February 26, 2021
Publication date: March 17, 2022
Applicant: KABUSHIKI KAISHA TOSHIBA
Inventors: Atsushi YAGUCHI, Shuhei NITTA, Yukinobu SAKATA, Akiyuki TANIZAWA
-
Publication number: 20220076116
Abstract: According to one embodiment, a learning apparatus includes processing circuitry. The processing circuitry acquires a plurality of learning samples to be learned and a plurality of target labels associated with the respective learning samples, iteratively learns a learning model so that a learning error between output data corresponding to the learning sample and the target label is small with respect to the learning model to which the output data is output by inputting the learning sample, and displays a layout image in which at least some of the learning samples are arranged based on a learning progress regarding the iterative learning of the learning model and a plurality of the learning errors.
Type: Application
Filed: February 26, 2021
Publication date: March 10, 2022
Applicant: KABUSHIKI KAISHA TOSHIBA
Inventors: Shuhei NITTA, Atsushi YAGUCHI, Yukinobu SAKATA, Akiyuki TANIZAWA
-
Patent number: 11227391
Abstract: According to one embodiment, an image processing apparatus includes processing circuitry. The processing circuitry is configured to acquire medical image data. The processing circuitry is configured to obtain, for each of a plurality of texture patterns, a spatial distribution of likelihood values representing a likelihood of corresponding to the texture pattern in a predetermined region of a medical image, based on the medical image data. The processing circuitry is configured to calculate feature values in the predetermined region of the medical image based on the spatial distribution obtained for each of the plurality of texture patterns.
Type: Grant
Filed: December 9, 2020
Date of Patent: January 18, 2022
Assignee: Canon Medical Systems Corporation
Inventors: Atsushi Yaguchi, Tomoya Okazaki, Yasunori Taguchi
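The abstract leaves the feature computation open. One plausible, deliberately simple reading is sketched below: given per-pattern likelihood maps over a region, take the mean likelihood of each pattern inside a region mask as its feature value. The pattern names, map shapes, and the mean itself are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(3)
patterns = ["reticular", "honeycomb", "normal"]     # hypothetical pattern names
likelihood = rng.random(size=(3, 16, 16))           # one spatial map per pattern
likelihood /= likelihood.sum(axis=0, keepdims=True) # per-pixel likelihoods sum to 1

mask = np.zeros((16, 16), dtype=bool)
mask[4:12, 4:12] = True                             # the predetermined region

# Feature value per pattern: mean likelihood inside the region.
features = {p: float(likelihood[i][mask].mean()) for i, p in enumerate(patterns)}
print(sum(features.values()))                       # ~1.0 by construction
```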
-
Patent number: 11138735
Abstract: An image processing apparatus includes processing circuitry configured: to obtain a plurality of images taken so as to include a target site of a subject in temporal phases; and to calculate an index indicating a state of an adhesion at a boundary between a first site of the subject corresponding to a first region and a second site of the subject corresponding to a second region, by using classification information used for classifying each of pixels into one selected from between a first class related to the first region and a second class related to a second region positioned adjacent to the first region in a predetermined direction, on a basis of mobility information among the images in the temporal phases with respect to the pixels in the images that are arranged in the predetermined direction across the boundary between the first region and the second region of the images.
Type: Grant
Filed: October 17, 2018
Date of Patent: October 5, 2021
Assignee: CANON MEDICAL SYSTEMS CORPORATION
Inventors: Misaki Haratake, Toshimitsu Kaneko, Atsushi Yaguchi, Tatsuya Kimoto, Shinsuke Tsukagoshi
-
Publication number: 20210241172
Abstract: A machine learning model compression system according to an embodiment includes one or more hardware processors configured to: select a layer of a trained machine learning model in order from an output side to an input side of the trained machine learning model; calculate, in units of an input channel, a first evaluation value evaluating a plurality of weights included in the selected layer; sort, in ascending order or descending order, the first evaluation values each calculated in units of the input channel; select a given number of the first evaluation values in ascending order of the first evaluation values; and delete the input channels used for calculation of the selected first evaluation values.
Type: Application
Filed: August 26, 2020
Publication date: August 5, 2021
Applicant: KABUSHIKI KAISHA TOSHIBA
Inventors: Takahiro TANAKA, Kosuke HARUKI, Ryuji SAKAI, Akiyuki TANIZAWA, Atsushi YAGUCHI, Shuhei NITTA, Yukinobu SAKATA
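The per-layer compression step above can be sketched concisely. This is a minimal illustration, assuming the first evaluation value is the L1 norm of a channel's weights (the abstract does not fix the evaluation function) and a 2-D weight matrix standing in for a layer:

```python
import numpy as np

def prune_input_channels(W, num_delete):
    """Score input channels of a layer, then delete the lowest-scoring ones.

    W: (out_channels, in_channels) weight matrix.
    Returns the pruned weights and the kept input-channel indices.
    """
    scores = np.abs(W).sum(axis=0)     # evaluation value per input channel (L1 norm)
    order = np.argsort(scores)         # ascending order of evaluation values
    delete = set(order[:num_delete].tolist())
    keep = [c for c in range(W.shape[1]) if c not in delete]
    return W[:, keep], keep

W = np.array([[0.9, 0.01, 0.5, 0.02],
              [0.8, 0.02, 0.4, 0.01]])
W_pruned, kept = prune_input_channels(W, num_delete=2)
print(kept)            # [0, 2]: the two weakest input channels are removed
```

In a full system this would run layer by layer from the output side toward the input side, with the preceding layer's output channels pruned to match.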
-
Publication number: 20210104044
Abstract: According to one embodiment, an image processing apparatus includes processing circuitry. The processing circuitry is configured to acquire medical image data. The processing circuitry is configured to obtain, for each of a plurality of texture patterns, a spatial distribution of likelihood values representing a likelihood of corresponding to the texture pattern in a predetermined region of a medical image, based on the medical image data. The processing circuitry is configured to calculate feature values in the predetermined region of the medical image based on the spatial distribution obtained for each of the plurality of texture patterns.
Type: Application
Filed: December 9, 2020
Publication date: April 8, 2021
Applicant: Canon Medical Systems Corporation
Inventors: Atsushi YAGUCHI, Tomoya OKAZAKI, Yasunori TAGUCHI
-
Publication number: 20210081781
Abstract: A providing apparatus according to an embodiment of the present disclosure includes a memory and a hardware processor coupled to the memory. The hardware processor is configured to: store, in the memory, a first machine learning model capable of changing an amount of calculation of a model of a neural network; acquire device information; set, based on the device information, extraction conditions representing conditions for extracting second machine learning models from the first machine learning model; extract the second machine learning models from the first machine learning model based on the extraction conditions; and provide the second machine learning models to a device specified by the device information.
Type: Application
Filed: February 26, 2020
Publication date: March 18, 2021
Applicant: KABUSHIKI KAISHA TOSHIBA
Inventors: Akiyuki TANIZAWA, Atsushi Yaguchi, Shuhei Nitta, Yukinobu Sakata
-
Publication number: 20210073641
Abstract: According to an embodiment, a learning device includes one or more hardware processors configured to function as a structure search unit. The structure search unit searches for a first learned model structure. The first learned model structure is obtained by selecting search space information in accordance with a target constraint condition of target hardware for each of a plurality of convolution processing blocks included in a base model structure in a neural network model.
Type: Application
Filed: February 26, 2020
Publication date: March 11, 2021
Applicant: KABUSHIKI KAISHA TOSHIBA
Inventors: Akiyuki TANIZAWA, Wataru ASANO, Atsushi YAGUCHI, Shuhei NITTA, Yukinobu SAKATA
-
Publication number: 20210060768
Abstract: A robot system according to an embodiment includes one or more processors. The processors acquire first input data predetermined as data affecting an operation of a robot. The processors calculate a calculation cost of inference processing using a machine learning model for inferring control data used for controlling the robot, on the basis of the first input data. The processors infer the control data by the machine learning model set according to the calculation cost. The processors control the robot using the inferred control data.
Type: Application
Filed: February 25, 2020
Publication date: March 4, 2021
Applicant: KABUSHIKI KAISHA TOSHIBA
Inventors: Shuhei NITTA, Atsushi Yaguchi, Yukinobu Sakata, Akiyuki Tanizawa, Yasutoyo Takeyama, Tomoki Watanabe
-
Publication number: 20210034983
Abstract: A learning device according to an embodiment includes one or more hardware processors configured to function as a generation unit, an inference unit, and a training unit. The generation unit generates input data with which an error between a value output from each of one or more target nodes and a preset aimed value is equal to or less than a preset value, the target nodes being in a target layer of a plurality of layers included in a first neural network. The inference unit causes the input data to propagate in a forward direction of the first neural network to generate output data. The training unit trains a second neural network differing from the first neural network by using training data including a set of the input data and the output data.
Type: Application
Filed: February 26, 2020
Publication date: February 4, 2021
Applicant: KABUSHIKI KAISHA TOSHIBA
Inventors: Wataru Asano, Akiyuki Tanizawa, Atsushi Yaguchi, Shuhei Nitta, Yukinobu Sakata