Patents by Inventor Wataru Asano

Wataru Asano has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250045588
    Abstract: According to an embodiment, a learning method of optimizing a neural network includes updating and specifying. In the updating, each of a plurality of weight coefficients included in the neural network is updated so that an objective function obtained by adding a basic loss function and an L2 regularization term multiplied by a regularization strength is minimized. In the specifying, an inactive node and an inactive channel are specified among a plurality of nodes and a plurality of channels included in the neural network.
    Type: Application
    Filed: August 14, 2024
    Publication date: February 6, 2025
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Atsushi YAGUCHI, Wataru ASANO, Shuhei NITTA, Yukinobu SAKATA, Akiyuki TANIZAWA
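    Illustrative sketch: the updating and specifying steps described in the abstract can be mimicked in a few lines of NumPy. The function names, learning rate, regularization strength, and pruning threshold below are assumptions made for illustration, not values taken from the application.
      import numpy as np

      def update_step(w, grad_basic_loss, lr=0.1, reg_strength=1e-2):
          # One gradient step on: basic loss + reg_strength * ||w||^2 (the L2 term).
          return w - lr * (grad_basic_loss + 2.0 * reg_strength * w)

      def find_inactive(weights, threshold=1e-3):
          # Channels whose weights have decayed toward zero under the L2 penalty
          # are treated as inactive (prunable); the threshold is an assumption.
          return np.where(np.linalg.norm(weights, axis=1) < threshold)[0]

      # Toy layer with 4 channels; channel 2 has already decayed to near zero.
      w = np.array([[0.5, -0.2, 0.1],
                    [0.3, 0.4, -0.6],
                    [1e-4, -2e-4, 5e-5],
                    [0.7, 0.1, 0.2]])
      w = update_step(w, grad_basic_loss=np.zeros_like(w))
      print("inactive channels:", find_inactive(w))   # -> [2]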
  • Patent number: 11704570
    Abstract: A learning device includes a structure search unit that searches for a first learned model structure obtained by selecting search space information in accordance with a target constraint condition of target hardware for each of a plurality of convolution processing blocks included in a base model structure in a neural network model; a parameter search unit that searches for a learning parameter of the neural network model in accordance with the target constraint condition; and a pruning unit that deletes a unit of at least one of the plurality of convolution processing blocks in the first learned model structure based on the target constraint condition and generates a second learned model structure.
    Type: Grant
    Filed: February 26, 2020
    Date of Patent: July 18, 2023
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Akiyuki Tanizawa, Wataru Asano, Atsushi Yaguchi, Shuhei Nitta, Yukinobu Sakata
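    Illustrative sketch: a toy version of the search-then-prune flow, assuming a made-up search space of channel widths per block and a channel budget standing in for the target hardware constraint. The greedy strategy below is an illustration only, not the patented procedure.
      # Hypothetical search space: candidate channel widths per block with a
      # rough score standing in for validation accuracy.
      search_space = {
          "block1": [(16, 0.70), (32, 0.78), (64, 0.80)],
          "block2": [(16, 0.60), (32, 0.75), (64, 0.82)],
          "block3": [(16, 0.55), (32, 0.71), (64, 0.79)],
      }
      TARGET_BUDGET = 112  # total channels the target hardware is assumed to afford

      def search_structure(space, budget):
          # Greedy stand-in for the structure search: start from the widest option
          # per block, then shrink whichever block loses the least score.
          choice = {b: max(opts, key=lambda o: o[1]) for b, opts in space.items()}
          def total(c): return sum(w for w, _ in c.values())
          while total(choice) > budget:
              candidates = []
              for b, opts in space.items():
                  smaller = [o for o in opts if o[0] < choice[b][0]]
                  if smaller:
                      nxt = max(smaller, key=lambda o: o[0])
                      candidates.append((choice[b][1] - nxt[1], b, nxt))
              if not candidates:
                  break
              _, b, nxt = min(candidates)
              choice[b] = nxt
          return choice

      def prune_blocks(choice, budget):
          # Pruning stand-in: drop whole low-scoring blocks if still over budget.
          while sum(w for w, _ in choice.values()) > budget and len(choice) > 1:
              del choice[min(choice, key=lambda b: choice[b][1])]
          return choice

      first_model = search_structure(search_space, TARGET_BUDGET)
      second_model = prune_blocks(dict(first_model), TARGET_BUDGET)
      print(second_model)   # {'block1': (16, 0.7), 'block2': (32, 0.75), 'block3': (64, 0.79)}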
  • Patent number: 11604999
    Abstract: A learning device according to an embodiment includes one or more hardware processors configured to function as a generation unit, an inference unit, and a training unit. The generation unit generates input data with which an error between a value output from each of one or more target nodes and a preset aimed value is equal to or less than a preset value, the target nodes being in a target layer of a plurality of layers included in a first neural network. The inference unit causes the input data to propagate in a forward direction of the first neural network to generate output data. The training unit trains a second neural network differing from the first neural network by using training data including a set of the input data and the output data.
    Type: Grant
    Filed: February 26, 2020
    Date of Patent: March 14, 2023
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Wataru Asano, Akiyuki Tanizawa, Atsushi Yaguchi, Shuhei Nitta, Yukinobu Sakata
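    Illustrative sketch: the generate/infer/train loop from the abstract, shrunk to a single linear layer so the example stays self-contained. The optimizer settings and the least-squares "second network" are assumptions for illustration only.
      import numpy as np

      rng = np.random.default_rng(0)
      W_teacher = rng.normal(size=(4, 3))   # "first neural network": a single linear layer, as a toy

      def generate_input(target_node, aimed_value, steps=500, lr=0.01):
          # Gradient descent on the input so the target node's output approaches
          # the preset aimed value (the generation unit of the abstract).
          x = rng.normal(size=3)
          for _ in range(steps):
              err = W_teacher[target_node] @ x - aimed_value
              x -= lr * 2.0 * err * W_teacher[target_node]
          return x

      # Inference unit: propagate the generated inputs forward to get output data.
      inputs = np.stack([generate_input(n, aimed_value=1.0) for n in range(4)])
      outputs = inputs @ W_teacher.T

      # Training unit: fit the second network (here, plain least squares) on the pairs.
      W_student, *_ = np.linalg.lstsq(inputs, outputs, rcond=None)
      print(np.allclose(inputs @ W_student, outputs))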
  • Patent number: 11411575
    Abstract: According to an embodiment, an information processing apparatus includes a computing unit and a compressing unit. The computing unit is configured to execute computation of an input layer, a hidden layer, and an output layer of a neural network. The compressing unit is configured to irreversibly compress output data of at least a part of the input layer, the hidden layer, and the output layer and output the compressed data.
    Type: Grant
    Filed: February 16, 2018
    Date of Patent: August 9, 2022
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Takuya Matsuo, Wataru Asano
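    Illustrative sketch: one simple way to compress a layer's output irreversibly is uniform 8-bit quantization. The bit width and helper names are assumptions for illustration, not details from the patent.
      import numpy as np

      def compress_lossy(activations, bits=8):
          # Uniform quantization of a layer's output: compact but irreversible.
          lo, hi = activations.min(), activations.max()
          scale = (hi - lo) / (2 ** bits - 1) or 1.0   # guard the constant-activation case
          return np.round((activations - lo) / scale).astype(np.uint8), lo, scale

      def decompress(q, lo, scale):
          return q.astype(np.float32) * scale + lo

      hidden = np.random.randn(1, 64).astype(np.float32)   # output of a hidden layer (toy)
      q, lo, scale = compress_lossy(hidden)
      restored = decompress(q, lo, scale)
      print("max error:", float(np.abs(hidden - restored).max()))   # small but non-zero: lossy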
  • Publication number: 20210073641
    Abstract: According to an embodiment, a learning device includes one or more hardware processors configured to function as a structure search unit. The structure search unit searches for a first learned model structure. The first learned model structure is obtained by selecting search space information in accordance with a target constraint condition of target hardware for each of a plurality of convolution processing blocks included in a base model structure in a neural network model.
    Type: Application
    Filed: February 26, 2020
    Publication date: March 11, 2021
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Akiyuki TANIZAWA, Wataru ASANO, Atsushi YAGUCHI, Shuhei NITTA, Yukinobu SAKATA
  • Publication number: 20210034983
    Abstract: A learning device according to an embodiment includes one or more hardware processors configured to function as a generation unit, an inference unit, and a training unit. The generation unit generates input data with which an error between a value output from each of one or more target nodes and a preset aimed value is equal to or less than a preset value, the target nodes being in a target layer of a plurality of layers included in a first neural network. The inference unit causes the input data to propagate in a forward direction of the first neural network to generate output data. The training unit trains a second neural network differing from the first neural network by using training data including a set of the input data and the output data.
    Type: Application
    Filed: February 26, 2020
    Publication date: February 4, 2021
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Wataru Asano, Akiyuki Tanizawa, Atsushi Yaguchi, Shuhei Nitta, Yukinobu Sakata
  • Publication number: 20210012228
    Abstract: An inference apparatus according to an embodiment of the present disclosure includes a memory and a hardware processor coupled to the memory. The hardware processor is configured to: acquire at least one control parameter of a second machine learning model, the second machine learning model having a size smaller than a size of a first machine learning model input to the inference apparatus; change the first machine learning model to the second machine learning model based on the at least one control parameter; and perform inference in response to input data by using the second machine learning model.
    Type: Application
    Filed: February 27, 2020
    Publication date: January 14, 2021
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Atsushi YAGUCHI, Akiyuki TANIZAWA, Wataru ASANO, Shuhei NITTA, Yukinobu SAKATA
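    Illustrative sketch: a toy reading of the control parameter as a channel keep ratio that shrinks the first model before inference. The norm-based channel selection is an assumption for illustration, not the patented rule.
      import numpy as np

      def shrink_model(weights, keep_ratio):
          # Interpret the control parameter as the fraction of channels to keep;
          # channels with the largest weight norms survive (an illustrative rule).
          n_keep = max(1, int(len(weights) * keep_ratio))
          order = np.argsort(-np.linalg.norm(weights, axis=1))
          kept = np.sort(order[:n_keep])
          return weights[kept]

      first_model = np.random.randn(64, 16)                   # first (larger) model, toy weights
      second_model = shrink_model(first_model, keep_ratio=0.25)

      x = np.random.randn(16)
      y = second_model @ x                                     # inference with the smaller second model
      print(second_model.shape, y.shape)                       # (16, 16) (16,)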
  • Publication number: 20200012945
    Abstract: According to an embodiment, a learning method of optimizing a neural network includes updating and specifying. In the updating, each of a plurality of weight coefficients included in the neural network is updated so that an objective function obtained by adding a basic loss function and an L2 regularization term multiplied by a regularization strength is minimized. In the specifying, an inactive node and an inactive channel are specified among a plurality of nodes and a plurality of channels included in the neural network.
    Type: Application
    Filed: February 27, 2019
    Publication date: January 9, 2020
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Atsushi YAGUCHI, Wataru ASANO, Shuhei NITTA, Yukinobu SAKATA, Akiyuki TANIZAWA
  • Patent number: 10341660
    Abstract: According to an embodiment, a video compression apparatus includes a first compressor, a second compressor, a partitioner and a communicator. The first compressor compresses a first video to generate a first bitstream. The second compressor sets regions in a second video and compresses the regions so as to enable each region to be independently decoded, to generate a second bitstream. The partitioner partitions the second bitstream according to the set regions to obtain a partitioned second bitstream. The communicator receives region information indicating a specific region that corresponds to one or more regions and selects and transmits a bitstream corresponding to the specific region from the partitioned second bitstream.
    Type: Grant
    Filed: August 26, 2015
    Date of Patent: July 2, 2019
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Akiyuki Tanizawa, Tomoya Kodama, Takeshi Chujoh, Shunichi Gondo, Wataru Asano, Takayuki Itoh
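    Illustrative sketch: tile-wise zlib compression stands in for the independently decodable regions, and a dictionary lookup stands in for selecting and transmitting the bitstream of the requested region. Both are simplifications for illustration, not the patented codec.
      import zlib
      import numpy as np

      def compress_regions(frame, tile=32):
          # "Second compressor": each region is compressed so it can be decoded alone.
          regions = {}
          for r in range(0, frame.shape[0], tile):
              for c in range(0, frame.shape[1], tile):
                  regions[(r, c)] = zlib.compress(frame[r:r + tile, c:c + tile].tobytes())
          return regions

      def select_and_transmit(partitioned, requested):
          # "Communicator": send only the bitstreams covering the requested region.
          return {key: partitioned[key] for key in requested}

      frame = (np.random.rand(128, 128) * 255).astype(np.uint8)   # one frame of the second video
      partitioned = compress_regions(frame)
      payload = select_and_transmit(partitioned, requested=[(0, 0), (0, 32)])
      print(len(partitioned), "regions compressed,", len(payload), "transmitted")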
  • Publication number: 20190058489
    Abstract: According to an embodiment, an information processing apparatus includes a computing unit and a compressing unit. The computing unit is configured to execute computation of an input layer, a hidden layer, and an output layer of a neural network. The compressing unit is configured to irreversibly compress output data of at least a part of the input layer, the hidden layer, and the output layer and output the compressed data.
    Type: Application
    Filed: February 16, 2018
    Publication date: February 21, 2019
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: Takuya MATSUO, Wataru ASANO
  • Publication number: 20190034781
    Abstract: According to an embodiment, a network coefficient compression method includes: outputting, with respect to input data input into an input layer of a learned neural network, an output value in a hidden layer or an output layer of the neural network; and generating a compressed network coefficient by learning a network coefficient of the neural network, with the input data and the output value as training data, while performing lossy compression of the network coefficient.
    Type: Application
    Filed: February 2, 2018
    Publication date: January 31, 2019
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: Wataru ASANO, Takuya MATSUO
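    Illustrative sketch: the learned network's own outputs serve as training data while the coefficients are kept in a coarsely quantized (lossily compressed) form throughout training. The 4-bit quantizer, learning rate, and toy linear layer are assumptions for illustration.
      import numpy as np

      rng = np.random.default_rng(1)
      W_network = rng.normal(size=(8, 4))        # learned network coefficients (toy linear layer)

      def quantize(w, bits=4):
          # Lossy compression of the coefficients: coarse uniform quantization.
          scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
          return np.round(w / scale) * scale

      # Training data: inputs together with the learned network's own output values.
      X = rng.normal(size=(256, 4))
      Y = X @ W_network.T

      # Re-learn the coefficients while keeping them in compressed (quantized) form.
      W = rng.normal(size=(8, 4)) * 0.1
      for _ in range(300):
          grad = 2.0 * (X @ W.T - Y).T @ X / len(X)
          W = quantize(W - 0.1 * grad)
      print("fit error:", float(np.mean((X @ W.T - Y) ** 2)))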
  • Patent number: 10164654
    Abstract: A data compressing device according to an embodiment includes a data cutting unit configured to divide continuously inputted data into W-bit data blocks and to output the data blocks in segments such that each of the segments is composed of N data blocks, and a compression-method determining unit configured to select, as a compression portion for each of the segments, a run length system, a flag system, or no compression, according to a ratio of data blocks of specific data in any of the segments. The data compressing device further includes an RL compression unit configured to execute, on any of the segments, a run length system of storing a consecutive amount of the specific data into compressed data, and a flag compression unit configured to execute, on any of the segments, a flag system of storing positional information of the specific data into compressed data.
    Type: Grant
    Filed: August 28, 2017
    Date of Patent: December 25, 2018
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Kazuki Inoue, Keiri Nakanishi, Yasuki Tanabe, Wataru Asano
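    Illustrative sketch: per-segment selection among the run length system, the flag system, and no compression, driven by the ratio of blocks holding the specific value. The segment size, the specific value, and the ratio thresholds are assumptions for illustration, not values from the patent.
      def split_segments(data, n_blocks=8):
          # W = 8 bits per block here; N = n_blocks blocks per segment (assumptions).
          blocks = list(data)
          return [blocks[i:i + n_blocks] for i in range(0, len(blocks), n_blocks)]

      def compress_segment(seg, specific=0):
          ratio = seg.count(specific) / len(seg)
          if ratio > 0.75:                      # mostly the specific value: run length system
              out, run = [], 0
              for b in seg:
                  if b == specific:
                      run += 1
                  else:
                      out += [("run", run), ("lit", b)]
                      run = 0
              return ("RL", out + [("run", run)])
          if ratio > 0.25:                      # some of it: flag system (positions of the value)
              return ("FLAG", [i for i, b in enumerate(seg) if b == specific],
                      [b for b in seg if b != specific])
          return ("RAW", seg)                   # hardly any: no compression

      data = bytes([0, 0, 0, 0, 0, 0, 7, 0,
                    1, 0, 2, 0, 3, 0, 5, 6,
                    9, 8, 7, 6, 5, 4, 3, 2])
      for seg in split_segments(data):
          print(compress_segment(seg))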
  • Publication number: 20180275659
    Abstract: According to one embodiment, a route generation apparatus includes a memory and a circuit coupled with the memory. The circuit acquires a depth image regarding a capturing object including a first object, generates three-dimensional data by using the depth image, receives first region information that specifies a first region including at least part of the first object based on the three-dimensional data, and generates route data by using the first region information and the three-dimensional data.
    Type: Application
    Filed: August 31, 2017
    Publication date: September 27, 2018
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Toshiyuki ONO, Yusuke MORIUCHI, Wataru ASANO
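    Illustrative sketch: back-projecting a depth image into three-dimensional data, taking a region box around the first object, and emitting simple waypoints as route data. The camera intrinsics, clearance, and straight-line route are assumptions for illustration, not the patented method.
      import numpy as np

      def depth_to_points(depth, fx=500.0, fy=500.0, cx=32.0, cy=32.0):
          # Back-project a depth image to 3D data with a pinhole camera model (assumed intrinsics).
          v, u = np.indices(depth.shape)
          return np.stack([(u - cx) * depth / fx, (v - cy) * depth / fy, depth], axis=-1)

      depth = np.full((64, 64), 2.0)
      depth[20:40, 20:40] = 1.0                 # the first object sits closer to the camera
      points = depth_to_points(depth)

      # First region information: a bounding box around the object, taken from the 3D data.
      obj = points[20:40, 20:40].reshape(-1, 3)
      region_min = obj.min(axis=0)

      # Route data: waypoints that keep an assumed clearance in front of the region.
      clearance = 0.2
      route = [(float(x), 0.0, float(region_min[2]) - clearance) for x in np.linspace(-1.0, 1.0, 5)]
      print(route)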
  • Publication number: 20180102788
    Abstract: A data compressing device according to an embodiment includes a data cutting unit configured to divide continuously inputted data into W-bit data blocks and to output the data blocks in segments such that each of the segments is composed of N data blocks, and a compression-method determining unit configured to select, as a compression portion for each of the segments, a run length system, a flag system, or no compression, according to a ratio of data blocks of specific data in any of the segments. The data compressing device further includes an RL compression unit configured to execute, on any of the segments, a run length system of storing a consecutive amount of the specific data into compressed data, and a flag compression unit configured to execute, on any of the segments, a flag system of storing positional information of the specific data into compressed data.
    Type: Application
    Filed: August 28, 2017
    Publication date: April 12, 2018
    Inventors: Kazuki Inoue, Keiri Nakanishi, Yasuki Tanabe, Wataru Asano
  • Patent number: 9872654
    Abstract: A medical image processing apparatus according to an embodiment includes compressing circuitry. The compressing circuitry is configured to compress, for each of a first data group and a second data group, both of which are included in data pertaining to a medical image, each of the first data group and the second data group separately, starting from data corresponding to a boundary between the first data group and the second data group and shifting sequentially in a direction away from the boundary.
    Type: Grant
    Filed: June 30, 2015
    Date of Patent: January 23, 2018
    Assignee: Toshiba Medical Systems Corporation
    Inventors: Nakaba Kogure, Tomoya Kodama, Shinichiro Koto, Wataru Asano, Hiroaki Nakai
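    Illustrative sketch: two row groups are compressed separately, each ordered so that the row at the shared boundary comes first and rows farther from it follow. The delta coding and zlib back end are assumptions for illustration, not the patented scheme.
      import zlib
      import numpy as np

      def compress_from_boundary(image, boundary):
          # Each group starts at the shared boundary and moves row by row away from it.
          def encode(rows):
              rows = rows.astype(np.int16)
              deltas = np.diff(np.vstack([rows[:1], rows]), axis=0)   # row-to-row deltas
              return zlib.compress(deltas.tobytes())
          group1 = image[:boundary][::-1]       # above the boundary, boundary-side row first
          group2 = image[boundary:]             # below the boundary, boundary row first
          return encode(group1), encode(group2)

      image = np.tile(np.arange(256, dtype=np.uint8), (128, 1))   # toy stand-in for a medical image
      bits1, bits2 = compress_from_boundary(image, boundary=64)
      print(len(bits1), len(bits2))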
  • Patent number: 9844350
    Abstract: A medical image processing apparatus according to an embodiment includes compressing circuitry. The compressing circuitry is configured to compress, for each of a first data group and a second data group, both of which are included in data pertaining to a medical image, each of the first data group and the second data group separately, starting from data corresponding to a boundary between the first data group and the second data group and shifting sequentially in a direction away from the boundary.
    Type: Grant
    Filed: June 30, 2015
    Date of Patent: December 19, 2017
    Assignee: Toshiba Medical Systems Corporation
    Inventors: Nakaba Kogure, Tomoya Kodama, Shinichiro Koto, Wataru Asano, Hiroaki Nakai
  • Publication number: 20160065993
    Abstract: According to an embodiment, a video compression apparatus includes a first compressor, a second compressor, a partitioner and a communicator. The first compressor compresses a first video to generate a first bitstream. The second compressor sets regions in a second video and compresses the regions so as to enable each region to be independently decoded, to generate a second bitstream. The partitioner partitions the second bitstream according to the set regions to obtain a partitioned second bitstream. The communicator receives region information indicating a specific region that corresponds to one or more regions and selects and transmits a bitstream corresponding to the specific region from the partitioned second bitstream.
    Type: Application
    Filed: August 26, 2015
    Publication date: March 3, 2016
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: Akiyuki TANIZAWA, Tomoya Kodama, Takeshi Chujoh, Shunichi Gondo, Wataru Asano, Takayuki Itoh
  • Publication number: 20150374313
    Abstract: A medical image processing apparatus according to an embodiment includes compressing circuitry. The compressing circuitry is configured to compress, for each of a first data group and a second data group, both of which are included in data pertaining to a medical image, each of the first data group and the second data group separately, starting from data corresponding to a boundary between the first data group and the second data group and shifting sequentially in a direction away from the boundary.
    Type: Application
    Filed: June 30, 2015
    Publication date: December 31, 2015
    Applicants: Kabushiki Kaisha Toshiba, Toshiba Medical Systems Corporation
    Inventors: Nakaba KOGURE, Tomoya Kodama, Shinichiro Koto, Wataru Asano, Hiroaki Nakai
  • Publication number: 20150117525
    Abstract: According to an embodiment, an encoding apparatus includes a processor and a memory. The memory stores processor-executable instructions that, when executed by the processor, cause the processor to: divide an image included in an image group into a plurality of regions; calculate a priority for each of the regions on the basis of levels of importance of the regions; determine an order of processing for the regions on the basis of the corresponding priority; and encode the regions according to the determined order of processing.
    Type: Application
    Filed: August 18, 2014
    Publication date: April 30, 2015
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: Wataru ASANO, Tomoya Kodama, Jun Yamaguchi, Akiyuki Tanizawa
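    Illustrative sketch: regions are scored from an importance map, sorted by the resulting priority, and encoded in that order. The tile size, mean-importance score, and zlib back end are assumptions for illustration, not the patented encoder.
      import zlib
      import numpy as np

      def encode_by_priority(image, importance, tile=64):
          # Score each region from the importance map, then encode in priority order.
          scored = []
          for r in range(0, image.shape[0], tile):
              for c in range(0, image.shape[1], tile):
                  scored.append((float(importance[r:r + tile, c:c + tile].mean()), (r, c)))
          scored.sort(reverse=True)             # highest priority first
          return [((r, c), zlib.compress(image[r:r + tile, c:c + tile].tobytes()))
                  for _, (r, c) in scored]

      image = (np.random.rand(128, 128) * 255).astype(np.uint8)
      importance = np.zeros((128, 128))
      importance[:64, 64:] = 1.0                # e.g. a region judged important
      encoded = encode_by_priority(image, importance)
      print([key for key, _ in encoded])        # the important region is encoded first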
  • Publication number: 20140376628
    Abstract: According to an embodiment, a multi-view image encoding device encodes a multi-view image including a plurality of viewpoint images. The device includes an assignor, a predictor, a subtractor, and an encoder. The assignor assigns reference image numbers to the reference images according to a number of reference images used in predicting already-encoded blocks obtained by dividing the viewpoint images. The predictor generates a prediction image with respect to an encoding target block obtained by dividing the viewpoint images by referring to the reference images. The subtractor calculates a residual error between an encoding target image and the prediction image. The encoder encodes: a coefficient of transformation which is obtained by performing orthogonal transformation and quantization with respect to the residual error; and the reference image numbers of the reference images used in generating the prediction image.
    Type: Application
    Filed: September 9, 2014
    Publication date: December 25, 2014
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: Youhei Fukazawa, Tomoya Kodama, Wataru Asano, Tatsuya Tanaka
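    Illustrative sketch: reference images that already-encoded blocks used more often receive the smaller reference image numbers, a prediction is taken from one of them, and the residual error is what would then be transformed and quantized. The usage counts and toy blocks are assumptions for illustration, not the patented encoder.
      import numpy as np

      def assign_reference_numbers(usage_counts):
          # References used more often by already-encoded blocks get smaller numbers,
          # so signalling them costs fewer bits (the counts here are made up).
          order = sorted(usage_counts, key=usage_counts.get, reverse=True)
          return {name: i for i, name in enumerate(order)}

      refs = {"same_view": np.full((8, 8), 100, dtype=np.int16),
              "other_view": np.full((8, 8), 90, dtype=np.int16)}
      numbers = assign_reference_numbers({"same_view": 12, "other_view": 3})

      target = np.full((8, 8), 98, dtype=np.int16)     # encoding target block
      prediction = refs["same_view"]                    # reference chosen by the predictor
      residual = target - prediction                    # residual error, to be transformed and quantized
      print(numbers, int(residual.mean()))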