Patents by Inventor Wataru Asano
Wataru Asano has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Publication number: 20250045588
  Abstract: According to an embodiment, a learning method of optimizing a neural network includes updating and specifying. In the updating, each of a plurality of weight coefficients included in the neural network is updated so that an objective function obtained by adding a basic loss function and an L2 regularization term multiplied by a regularization strength is minimized. In the specifying, an inactive node and an inactive channel are specified among a plurality of nodes and a plurality of channels included in the neural network.
  Type: Application
  Filed: August 14, 2024
  Publication date: February 6, 2025
  Applicant: Kabushiki Kaisha Toshiba
  Inventors: Atsushi Yaguchi, Wataru Asano, Shuhei Nitta, Yukinobu Sakata, Akiyuki Tanizawa
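The idea in this abstract can be illustrated with a toy sketch: minimize a basic loss plus an L2 term scaled by a regularization strength, then flag weights driven toward zero as "inactive". All values here (data sizes, the 0.1 cutoff, plain gradient descent) are my assumptions for illustration, not the patented method.

```python
import numpy as np

# Toy objective: mean squared error + reg_strength * L2 penalty.
# After training, coordinates whose weights collapsed toward zero
# are treated as inactive candidates (0.1 is an assumed threshold).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
true_w = np.zeros(8)
true_w[:3] = [2.0, -1.5, 0.5]            # only 3 of 8 inputs matter
y = X @ true_w + 0.01 * rng.normal(size=1000)

w = np.zeros(8)
reg_strength = 0.5
lr = 0.01
for _ in range(2000):
    # gradient of: mean((Xw - y)^2) + reg_strength * ||w||^2
    grad = X.T @ (X @ w - y) / len(y) + reg_strength * 2 * w
    w -= lr * grad

inactive = np.flatnonzero(np.abs(w) < 0.1)   # weights shrunk near zero
```

Running this, the five irrelevant input coordinates end up with near-zero weights while the informative ones survive the shrinkage, which is the separation the "specifying" step relies on.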
- Patent number: 11704570
  Abstract: A learning device includes a structure search unit that searches for a first learned model structure obtained by selecting search space information in accordance with a target constraint condition of target hardware for each of a plurality of convolution processing blocks included in a base model structure in a neural network model; a parameter search unit that searches for a learning parameter of the neural network model in accordance with the target constraint condition; and a pruning unit that deletes a unit of at least one of the plurality of convolution processing blocks in the first learned model structure based on the target constraint condition and generates a second learned model structure.
  Type: Grant
  Filed: February 26, 2020
  Date of Patent: July 18, 2023
  Assignee: Kabushiki Kaisha Toshiba
  Inventors: Akiyuki Tanizawa, Wataru Asano, Atsushi Yaguchi, Shuhei Nitta, Yukinobu Sakata
- Patent number: 11604999
  Abstract: A learning device according to an embodiment includes one or more hardware processors configured to function as a generation unit, an inference unit, and a training unit. The generation unit generates input data with which an error between a value output from each of one or more target nodes and a preset aimed value is equal to or less than a preset value, the target nodes being in a target layer of a plurality of layers included in a first neural network. The inference unit causes the input data to propagate in a forward direction of the first neural network to generate output data. The training unit trains a second neural network differing from the first neural network by using training data including a set of the input data and the output data.
  Type: Grant
  Filed: February 26, 2020
  Date of Patent: March 14, 2023
  Assignee: Kabushiki Kaisha Toshiba
  Inventors: Wataru Asano, Akiyuki Tanizawa, Atsushi Yaguchi, Shuhei Nitta, Yukinobu Sakata
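The generation step in this abstract resembles data-free synthesis: optimize an input until a chosen node of the first network hits a preset aimed value, then label that input with the first network's forward pass. The sketch below is a minimal toy version under my own assumptions (random tiny weights, tanh activations, plain gradient descent); the patent does not prescribe these details.

```python
import numpy as np

# (1) synthesize an input driving target node 0 of the first network's
#     target layer to the aimed value, (2) run the forward pass to get
#     output data, yielding an (input, output) training pair for a
#     second network. Sizes and weights are arbitrary toy values.
rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 6))
W1 /= np.linalg.norm(W1, axis=0)         # normalize columns to tame step sizes
W2 = rng.normal(size=(6, 2))

def target_layer(x):
    return np.tanh(x @ W1)               # "target layer" of the first network

aimed = 0.9                              # preset aimed value for node 0
x = np.zeros(4)
for _ in range(500):
    h0 = target_layer(x)[0]
    # gradient of (h0 - aimed)^2 w.r.t. x, using tanh' = 1 - tanh^2
    x -= 1.0 * (h0 - aimed) * (1 - h0 ** 2) * W1[:, 0]

input_data = x
output_data = target_layer(x) @ W2       # forward pass -> training label
```

The resulting `(input_data, output_data)` pair plays the role of the training data the abstract feeds to the second network.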
- Patent number: 11411575
  Abstract: According to an embodiment, an information processing apparatus includes a computing unit and a compressing unit. The computing unit is configured to execute computation of an input layer, a hidden layer, and an output layer of a neural network. The compressing unit is configured to irreversibly compress output data of at least a part of the input layer, the hidden layer, and the output layer and output the compressed data.
  Type: Grant
  Filed: February 16, 2018
  Date of Patent: August 9, 2022
  Assignee: Kabushiki Kaisha Toshiba
  Inventors: Takuya Matsuo, Wataru Asano
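One common way to irreversibly compress a layer's output is coarse uniform quantization, shown in the sketch below. The 4-bit quantizer is a stand-in of my own choosing; the abstract only requires that the compression be lossy.

```python
import numpy as np

# Lossy compression of layer activations: map floats to 4-bit codes.
# Decompression recovers only an approximation (irreversible).
def compress(acts, bits=4):
    lo, hi = acts.min(), acts.max()
    scale = (hi - lo) / (2 ** bits - 1)
    codes = np.round((acts - lo) / scale).astype(np.uint8)   # lossy step
    return codes, lo, scale

def decompress(codes, lo, scale):
    return codes * scale + lo          # approximate reconstruction only

acts = np.random.default_rng(2).normal(size=(1, 64)).astype(np.float32)
codes, lo, scale = compress(acts)
approx = decompress(codes, lo, scale)
```

Each activation shrinks from 32 bits to 4, at the cost of a reconstruction error bounded by half a quantization step.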
- Publication number: 20210073641
  Abstract: According to an embodiment, a learning device includes one or more hardware processors configured to function as a structure search unit. The structure search unit searches for a first learned model structure. The first learned model structure is obtained by selecting search space information in accordance with a target constraint condition of target hardware for each of a plurality of convolution processing blocks included in a base model structure in a neural network model.
  Type: Application
  Filed: February 26, 2020
  Publication date: March 11, 2021
  Applicant: Kabushiki Kaisha Toshiba
  Inventors: Akiyuki Tanizawa, Wataru Asano, Atsushi Yaguchi, Shuhei Nitta, Yukinobu Sakata
- Publication number: 20210034983
  Abstract: A learning device according to an embodiment includes one or more hardware processors configured to function as a generation unit, an inference unit, and a training unit. The generation unit generates input data with which an error between a value output from each of one or more target nodes and a preset aimed value is equal to or less than a preset value, the target nodes being in a target layer of a plurality of layers included in a first neural network. The inference unit causes the input data to propagate in a forward direction of the first neural network to generate output data. The training unit trains a second neural network differing from the first neural network by using training data including a set of the input data and the output data.
  Type: Application
  Filed: February 26, 2020
  Publication date: February 4, 2021
  Applicant: Kabushiki Kaisha Toshiba
  Inventors: Wataru Asano, Akiyuki Tanizawa, Atsushi Yaguchi, Shuhei Nitta, Yukinobu Sakata
- Publication number: 20210012228
  Abstract: An inference apparatus according to an embodiment of the present disclosure includes a memory and a hardware processor coupled to the memory. The hardware processor is configured to: acquire at least one control parameter of a second machine learning model, the second machine learning model having a size smaller than a size of a first machine learning model input to the inference apparatus; change the first machine learning model to the second machine learning model based on the at least one control parameter; and perform inference in response to input data by using the second machine learning model.
  Type: Application
  Filed: February 27, 2020
  Publication date: January 14, 2021
  Applicant: Kabushiki Kaisha Toshiba
  Inventors: Atsushi Yaguchi, Akiyuki Tanizawa, Wataru Asano, Shuhei Nitta, Yukinobu Sakata
- Publication number: 20200012945
  Abstract: According to an embodiment, a learning method of optimizing a neural network includes updating and specifying. In the updating, each of a plurality of weight coefficients included in the neural network is updated so that an objective function obtained by adding a basic loss function and an L2 regularization term multiplied by a regularization strength is minimized. In the specifying, an inactive node and an inactive channel are specified among a plurality of nodes and a plurality of channels included in the neural network.
  Type: Application
  Filed: February 27, 2019
  Publication date: January 9, 2020
  Applicant: Kabushiki Kaisha Toshiba
  Inventors: Atsushi Yaguchi, Wataru Asano, Shuhei Nitta, Yukinobu Sakata, Akiyuki Tanizawa
- Patent number: 10341660
  Abstract: According to an embodiment, a video compression apparatus includes a first compressor, a second compressor, a partitioner and a communicator. The first compressor compresses a first video to generate a first bitstream. The second compressor sets regions in a second video and compresses the regions so as to enable each region to be independently decoded, to generate a second bitstream. The partitioner partitions the second bitstream according to the set regions to obtain a partitioned second bitstream. The communicator receives region information indicating a specific region that corresponds to one or more regions and selects and transmits a bitstream corresponding to the specific region from the partitioned second bitstream.
  Type: Grant
  Filed: August 26, 2015
  Date of Patent: July 2, 2019
  Assignee: Kabushiki Kaisha Toshiba
  Inventors: Akiyuki Tanizawa, Tomoya Kodama, Takeshi Chujoh, Shunichi Gondo, Wataru Asano, Takayuki Itoh
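The communicator's selection step can be sketched with fixed-size tiles: the partitioned second bitstream is held as independently decodable per-tile chunks, and only tiles overlapping the requested region are sent. Tile size, the coordinate convention, and the dummy payloads are all my assumptions.

```python
# Per-tile selection of a partitioned bitstream (illustrative only).
TILE = 64

def tiles_for_region(x, y, w, h, frame_w, frame_h):
    """Tile indices (col, row) covering a requested pixel region."""
    cols = range(x // TILE, min((x + w - 1) // TILE + 1, frame_w // TILE))
    rows = range(y // TILE, min((y + h - 1) // TILE + 1, frame_h // TILE))
    return [(c, r) for r in rows for c in cols]

# partitioned second bitstream: one independently decodable chunk per tile
partitioned = {(c, r): f"tile-{c}-{r}".encode()
               for c in range(4) for r in range(4)}

def select_bitstream(region):
    """Return only the chunks needed to decode the requested region."""
    return [partitioned[t] for t in tiles_for_region(*region, 256, 256)]
```

For a 60x60 region at (70, 10) in a 256x256 frame, only the four tiles it touches are transmitted rather than the full second bitstream.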
- Publication number: 20190058489
  Abstract: According to an embodiment, an information processing apparatus includes a computing unit and a compressing unit. The computing unit is configured to execute computation of an input layer, a hidden layer, and an output layer of a neural network. The compressing unit is configured to irreversibly compress output data of at least a part of the input layer, the hidden layer, and the output layer and output the compressed data.
  Type: Application
  Filed: February 16, 2018
  Publication date: February 21, 2019
  Applicant: Kabushiki Kaisha Toshiba
  Inventors: Takuya Matsuo, Wataru Asano
- Publication number: 20190034781
  Abstract: According to an embodiment, a network coefficient compression method includes: outputting, with respect to input data input into an input layer of a learned neural network, an output value in a hidden layer or an output layer of the neural network; and generating a compressed network coefficient by learning a network coefficient of the neural network, with the input data and the output value as training data, while performing lossy compression of the network coefficient.
  Type: Application
  Filed: February 2, 2018
  Publication date: January 31, 2019
  Applicant: Kabushiki Kaisha Toshiba
  Inventors: Wataru Asano, Takuya Matsuo
- Patent number: 10164654
  Abstract: A data compressing device according to an embodiment includes a data cutting unit configured to divide continuously inputted data into W-bit data blocks and to output the data blocks in segments such that each of the segments is composed of N data blocks, and a compression-method determining unit configured to select, as a compression portion for each of the segments, a run length system, a flag system, or no compression, according to a ratio of data blocks of specific data in any of the segments. The data compressing device further includes an RL compression unit configured to execute, on any of the segments, a run length system of storing a consecutive amount of the specific data into compressed data, and a flag compression unit configured to execute, on any of the segments, a flag system of storing positional information of the specific data into compressed data.
  Type: Grant
  Filed: August 28, 2017
  Date of Patent: December 25, 2018
  Assignee: Kabushiki Kaisha Toshiba
  Inventors: Kazuki Inoue, Keiri Nakanishi, Yasuki Tanabe, Wataru Asano
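The per-segment selection logic reads naturally as: pick run-length coding when the specific value dominates a segment, a position-flag encoding when it is sparse, and no compression otherwise. The sketch below is runnable but the thresholds and encodings are my assumptions; the patent defines its own formats.

```python
# Per-segment choice among run-length, flag, and raw (illustrative).
SPECIFIC = 0    # the "specific data" value

def rl_compress(seg):
    """Run-length system: [value, run_length] pairs."""
    out = []
    for b in seg:
        if out and out[-1][0] == b:
            out[-1][1] += 1
        else:
            out.append([b, 1])
    return out

def flag_compress(seg):
    """Flag system: positions of the specific value, plus the literals."""
    positions = [i for i, b in enumerate(seg) if b == SPECIFIC]
    literals = [b for b in seg if b != SPECIFIC]
    return positions, literals

def compress_segment(seg):
    """Select a method from the ratio of specific-value blocks (assumed cutoffs)."""
    ratio = seg.count(SPECIFIC) / len(seg)
    if ratio > 0.5:
        return ("RL", rl_compress(seg))
    if ratio > 0.1:
        return ("FLAG", flag_compress(seg))
    return ("RAW", seg)
```

A segment of mostly zeros compresses to two short runs, a segment with a few scattered zeros stores just their positions, and a segment with no zeros passes through untouched.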
- Publication number: 20180275659
  Abstract: According to one embodiment, a route generation apparatus includes a memory and a circuit coupled with the memory. The circuit acquires a depth image regarding a capturing object including a first object, generates three-dimensional data by using the depth image, receives first region information that specifies a first region including at least part of the first object based on the three-dimensional data, and generates route data by using the first region information and the three-dimensional data.
  Type: Application
  Filed: August 31, 2017
  Publication date: September 27, 2018
  Applicant: Kabushiki Kaisha Toshiba
  Inventors: Toshiyuki Ono, Yusuke Moriuchi, Wataru Asano
- Publication number: 20180102788
  Abstract: A data compressing device according to an embodiment includes a data cutting unit configured to divide continuously inputted data into W-bit data blocks and to output the data blocks in segments such that each of the segments is composed of N data blocks, and a compression-method determining unit configured to select, as a compression portion for each of the segments, a run length system, a flag system, or no compression, according to a ratio of data blocks of specific data in any of the segments. The data compressing device further includes an RL compression unit configured to execute, on any of the segments, a run length system of storing a consecutive amount of the specific data into compressed data, and a flag compression unit configured to execute, on any of the segments, a flag system of storing positional information of the specific data into compressed data.
  Type: Application
  Filed: August 28, 2017
  Publication date: April 12, 2018
  Inventors: Kazuki Inoue, Keiri Nakanishi, Yasuki Tanabe, Wataru Asano
- Patent number: 9872654
  Abstract: A medical image processing apparatus according to an embodiment includes compressing circuitry. The compressing circuitry is configured to compress, for each of a first data group and a second data group both of which are included in data pertaining to a medical image, each of the first data group and the second data group separately, starting from data corresponding to a boundary between the first data group and the second data group and shifting sequentially in a direction away from the boundary.
  Type: Grant
  Filed: June 30, 2015
  Date of Patent: January 23, 2018
  Assignee: Toshiba Medical Systems Corporation
  Inventors: Nakaba Kogure, Tomoya Kodama, Shinichiro Koto, Wataru Asano, Hiroaki Nakai
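The distinctive part of this abstract is the traversal order: each group is compressed separately, starting at the sample adjacent to the shared boundary and stepping away from it. The sketch below illustrates only that ordering, with simple delta coding standing in for the actual compression scheme.

```python
import numpy as np

# Delta-code a 1-D group, walking away from the shared boundary
# (the compression itself is an assumed stand-in for illustration).
def delta_from_boundary(group, boundary_first):
    seq = group if boundary_first else group[::-1]
    deltas = np.diff(seq, prepend=seq[0])
    deltas[0] = seq[0]                 # keep the anchor value at the boundary
    return deltas

data = np.array([3, 4, 4, 5, 9, 9, 8, 8])     # boundary between index 3 and 4
group1, group2 = data[:4], data[4:]
enc1 = delta_from_boundary(group1, boundary_first=False)  # boundary is group1's last sample
enc2 = delta_from_boundary(group2, boundary_first=True)   # boundary is group2's first sample
```

Decoding is a cumulative sum in the same boundary-outward order, so each group reconstructs independently starting from its boundary anchor.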
- Patent number: 9844350
  Abstract: A medical image processing apparatus according to an embodiment includes compressing circuitry. The compressing circuitry is configured to compress, for each of a first data group and a second data group both of which are included in data pertaining to a medical image, each of the first data group and the second data group separately, starting from data corresponding to a boundary between the first data group and the second data group and shifting sequentially in a direction away from the boundary.
  Type: Grant
  Filed: June 30, 2015
  Date of Patent: December 19, 2017
  Assignee: Toshiba Medical Systems Corporation
  Inventors: Nakaba Kogure, Tomoya Kodama, Shinichiro Koto, Wataru Asano, Hiroaki Nakai
- Publication number: 20160065993
  Abstract: According to an embodiment, a video compression apparatus includes a first compressor, a second compressor, a partitioner and a communicator. The first compressor compresses a first video to generate a first bitstream. The second compressor sets regions in a second video and compresses the regions so as to enable each region to be independently decoded, to generate a second bitstream. The partitioner partitions the second bitstream according to the set regions to obtain a partitioned second bitstream. The communicator receives region information indicating a specific region that corresponds to one or more regions and selects and transmits a bitstream corresponding to the specific region from the partitioned second bitstream.
  Type: Application
  Filed: August 26, 2015
  Publication date: March 3, 2016
  Applicant: Kabushiki Kaisha Toshiba
  Inventors: Akiyuki Tanizawa, Tomoya Kodama, Takeshi Chujoh, Shunichi Gondo, Wataru Asano, Takayuki Itoh
- Publication number: 20150374313
  Abstract: A medical image processing apparatus according to an embodiment includes compressing circuitry. The compressing circuitry is configured to compress, for each of a first data group and a second data group both of which are included in data pertaining to a medical image, each of the first data group and the second data group separately, starting from data corresponding to a boundary between the first data group and the second data group and shifting sequentially in a direction away from the boundary.
  Type: Application
  Filed: June 30, 2015
  Publication date: December 31, 2015
  Applicants: Kabushiki Kaisha Toshiba, Toshiba Medical Systems Corporation
  Inventors: Nakaba Kogure, Tomoya Kodama, Shinichiro Koto, Wataru Asano, Hiroaki Nakai
- Publication number: 20150117525
  Abstract: According to an embodiment, an encoding apparatus includes a processor and a memory. The memory stores processor-executable instructions that, when executed by the processor, cause the processor to: divide an image included in an image group into a plurality of regions; calculate a priority for each of the regions on the basis of levels of importance of the regions; determine an order of processing for the regions on the basis of the corresponding priority; and encode the regions according to the determined order of processing.
  Type: Application
  Filed: August 18, 2014
  Publication date: April 30, 2015
  Applicant: Kabushiki Kaisha Toshiba
  Inventors: Wataru Asano, Tomoya Kodama, Jun Yamaguchi, Akiyuki Tanizawa
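The divide / prioritize / order steps can be sketched directly. Using per-region variance as the importance measure is my assumption for illustration; the abstract leaves the importance metric open.

```python
import numpy as np

# Split an image into a grid of regions, score each by an importance
# measure (variance here, as an assumed stand-in), and determine the
# encoding order: highest-priority regions first.
def encoding_order(image, grid=4):
    h, w = image.shape
    bh, bw = h // grid, w // grid
    regions = [(r, c) for r in range(grid) for c in range(grid)]

    def priority(rc):
        r, c = rc
        return image[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw].var()

    return sorted(regions, key=priority, reverse=True)

img = np.zeros((64, 64))
img[0:16, 16:32] = np.random.default_rng(3).normal(size=(16, 16))  # one busy region
order = encoding_order(img)
```

With a single textured region and an otherwise flat image, that region is scheduled first and the flat regions follow, which is the behavior the priority-driven ordering aims for.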
- Publication number: 20140376628
  Abstract: According to an embodiment, a multi-view image encoding device encodes a multi-view image including a plurality of viewpoint images. The device includes an assignor, a predictor, a subtractor, and an encoder. The assignor assigns reference image numbers to the reference images according to a number of reference images used in predicting already-encoded blocks obtained by dividing the viewpoint images. The predictor generates a prediction image with respect to an encoding target block obtained by dividing the viewpoint images by referring to the reference images. The subtractor calculates a residual error between an encoding target image and the prediction image. The encoder encodes: a coefficient of transformation which is obtained by performing orthogonal transformation and quantization with respect to the residual error; and the reference image numbers of the reference images used in generating the prediction image.
  Type: Application
  Filed: September 9, 2014
  Publication date: December 25, 2014
  Applicant: Kabushiki Kaisha Toshiba
  Inventors: Youhei Fukazawa, Tomoya Kodama, Wataru Asano, Tatsuya Tanaka