Patents by Inventor Ryota Tomioka

Ryota Tomioka has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220222531
    Abstract: A neural network training apparatus is described which has a network of worker nodes, each having a memory storing a subgraph of a neural network to be trained. The apparatus has a control node connected to the network of worker nodes. The control node is configured to send training data instances into the network to trigger parallelized message passing operations that implement a training algorithm for training the neural network. At least some of the message passing operations asynchronously update parameters of individual subgraphs of the neural network at the individual worker nodes.
    Type: Application
    Filed: March 28, 2022
    Publication date: July 14, 2022
    Inventors: Ryota TOMIOKA, Matthew Alastair JOHNSON, Daniel Stefan TARLOW, Samuel Alexander WEBSTER, Dimitrios VYTINIOTIS, Alexander Lloyd GAUNT, Maik RIECHERT
  • Patent number: 11288575
    Abstract: A neural network training apparatus is described which has a network of worker nodes, each having a memory storing a subgraph of a neural network to be trained. The apparatus has a control node connected to the network of worker nodes. The control node is configured to send training data instances into the network to trigger parallelized message passing operations that implement a training algorithm for training the neural network. At least some of the message passing operations asynchronously update parameters of individual subgraphs of the neural network at the individual worker nodes.
    Type: Grant
    Filed: May 18, 2017
    Date of Patent: March 29, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ryota Tomioka, Matthew Alastair Johnson, Daniel Stefan Tarlow, Samuel Alexander Webster, Dimitrios Vytiniotis, Alexander Lloyd Gaunt, Maik Riechert
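The abstract above describes workers that each own a subgraph and update its parameters locally as messages arrive. The following is a minimal toy sketch of that idea, not the patented implementation: each worker holds one scalar "subgraph", a control node streams training instances through the chain, and each worker applies its own update when the backward message reaches it.

```python
class Worker:
    """Owns one tiny 'subgraph' (a single scalar weight)."""

    def __init__(self, weight):
        self.w = weight          # parameters of this node's subgraph
        self.last_input = None

    def forward(self, x):        # forward message: activation
        self.last_input = x
        return self.w * x

    def backward(self, grad_out, lr=0.01):  # backward message: gradient
        grad_w = grad_out * self.last_input
        grad_in = grad_out * self.w          # uses the pre-update weight
        self.w -= lr * grad_w                # local, node-level update
        return grad_in

def control_node(workers, data, lr=0.01):
    """Send training instances into the network of workers."""
    for x, target in data:
        act = x
        for wk in workers:                   # forward message passing
            act = wk.forward(act)
        grad = 2.0 * (act - target)          # gradient of squared error
        for wk in reversed(workers):         # backward message passing
            grad = wk.backward(grad, lr)

workers = [Worker(0.5), Worker(0.5)]
control_node(workers, [(1.0, 1.0)] * 200)
print(round(workers[0].w * workers[1].w, 2))  # product of weights approaches 1.0
```

In a real system the workers would run on separate machines and the backward messages would arrive asynchronously; here the "asynchrony" is reduced to each worker updating its own parameters the moment its message is processed.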
  • Publication number: 20210304066
    Abstract: A computation graph of a machine learning model is accessed from memory and a constraint solver is used to compute a partition of the computation graph into ordered stages of an execution pipeline. In use, when inference or training of the machine learning model takes place by executing the pipeline, execution costs of the stages are balanced according to the computed partition.
    Type: Application
    Filed: April 21, 2020
    Publication date: September 30, 2021
    Inventors: Ryota TOMIOKA, Juliana Patrícia VICENTE FRANCO, Alberto MAGNI, Nuno CLAUDINO PEREIRA LOPES, Siddharth KRISHNA, Renato GOLIN
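The pipeline-partitioning abstract above amounts to choosing stage boundaries so that the most expensive stage is as cheap as possible. As a hypothetical illustration (the patent uses a constraint solver; here a brute-force search over cut points plays that role), consider a linear graph with known per-operation costs:

```python
from itertools import combinations

def partition(costs, num_stages):
    """Split a chain of op costs into contiguous stages, minimizing the
    cost of the most expensive (bottleneck) stage."""
    n = len(costs)
    best, best_bounds = float("inf"), None
    for cuts in combinations(range(1, n), num_stages - 1):
        bounds = (0,) + cuts + (n,)
        stage_costs = [sum(costs[a:b]) for a, b in zip(bounds, bounds[1:])]
        if max(stage_costs) < best:
            best, best_bounds = max(stage_costs), bounds
    return best_bounds, best

bounds, bottleneck = partition([4, 2, 3, 7, 1, 3], 3)
print(bounds, bottleneck)  # → (0, 3, 4, 6) 9
```

The expensive op of cost 7 forces any stage containing it to cost at least 7; the search places it alone in the middle stage, giving stage costs 9, 7, and 4. A constraint solver reaches the same kind of answer declaratively instead of by enumeration.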
  • Patent number: 10956535
    Abstract: Disclosed in some examples are methods, systems, machine-readable media, and devices which operate a neural network defined by user code. A method includes identifying operations from user code that are integral to operating the neural network; combining a subset of the identified operations into a single processing sequence to be transmitted to an array of hardware processors; performing operations that are not integral to operating the neural network in a separate thread of execution from the operations that are; and mapping results to the combined operations that were included in the single processing sequence.
    Type: Grant
    Filed: June 15, 2017
    Date of Patent: March 23, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Frank Torsten Bernd Seide, Ryota Tomioka, Wilhelm Richert, Bruno S Bozza
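The method in the abstract above separates network-integral operations (fused into one dispatch) from side-effecting ones (run off the critical path). A minimal sketch of that split, with invented tags and helpers for illustration only:

```python
import threading

def run_user_code(ops):
    """Fuse 'net' ops into a single processing sequence and run the rest
    on a separate thread, then map results back to their original slots."""
    integral = [(i, fn) for i, (kind, fn) in enumerate(ops) if kind == "net"]
    other = [fn for kind, fn in ops if kind != "net"]

    # Non-integral work (e.g. logging) on its own thread of execution.
    side = threading.Thread(target=lambda: [fn() for fn in other])
    side.start()

    # One fused pass over the whole integral sequence, standing in for a
    # single dispatch to an array of hardware processors.
    results = {}
    value = 1.0
    for i, fn in integral:
        value = fn(value)
        results[i] = value       # map result back to the op's position
    side.join()
    return results

ops = [
    ("net", lambda v: v * 2),    # integral: part of the network
    ("log", lambda: None),       # not integral: side effect only
    ("net", lambda v: v + 3),    # integral
]
print(run_user_code(ops))        # → {0: 2.0, 2: 5.0}
```

The point of the fusion is that the accelerator sees one sequence rather than many small calls, while logging never blocks the network's thread.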
  • Patent number: 10742990
    Abstract: A data compression apparatus is described which has an encoder configured to receive an input data item and to compress the data item into an encoding comprising a plurality of numerical values. The numerical values are grouped at least according to whether they relate to content of the input data item or style of the input data item. The encoder has been trained using a plurality of groups of training data items grouped according to the content and where training data items within individual ones of the groups vary with respect to the style. The encoder has been trained using a training objective which takes into account the groups.
    Type: Grant
    Filed: September 20, 2018
    Date of Patent: August 11, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sebastian Nowozin, Ryota Tomioka, Diane Bouchacourt
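The grouped content/style encoding described in the abstract above can be illustrated with a deliberately simplified sketch (hypothetical, not the patented encoder): each per-item code is split into a content part and a style part, and the content codes are tied together across a group of items that share content.

```python
def encode(item):
    # toy "encoder": first value stands for content, second for style
    return [float(item[0]), float(item[1])]

def group_encode(group):
    """Encode a group of items with shared content: the content value is
    pooled across the group, while each item keeps its own style value."""
    codes = [encode(it) for it in group]
    shared_content = sum(c[0] for c in codes) / len(codes)
    return [(shared_content, c[1]) for c in codes]

# one group: same content (the digit 7), different styles (stroke widths)
group = [(7, 0.2), (7, 0.9), (7, 0.5)]
print(group_encode(group))  # content identical across the group, styles vary
```

In the actual apparatus the grouping enters through the training objective rather than through explicit averaging at encode time, but the effect sketched here is the same: content values agree within a group and style values are free to differ.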
  • Publication number: 20190297328
    Abstract: A data compression apparatus is described which has an encoder configured to receive an input data item and to compress the data item into an encoding comprising a plurality of numerical values. The numerical values are grouped at least according to whether they relate to content of the input data item or style of the input data item. The encoder has been trained using a plurality of groups of training data items grouped according to the content and where training data items within individual ones of the groups vary with respect to the style. The encoder has been trained using a training objective which takes into account the groups.
    Type: Application
    Filed: September 20, 2018
    Publication date: September 26, 2019
    Inventors: Sebastian NOWOZIN, Ryota TOMIOKA, Diane BOUCHACOURT
  • Patent number: 10158859
    Abstract: A data compression apparatus is described which has an encoder configured to receive an input data item and to compress the data item into an encoding comprising a plurality of numerical values. The numerical values are grouped at least according to whether they relate to content of the input data item or style of the input data item. The encoder has been trained using a plurality of groups of training data items grouped according to the content and where training data items within individual ones of the groups vary with respect to the style. The encoder has been trained using a training objective which takes into account the groups.
    Type: Grant
    Filed: June 29, 2017
    Date of Patent: December 18, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sebastian Nowozin, Ryota Tomioka, Diane Bouchacourt
  • Publication number: 20180336461
    Abstract: Disclosed in some examples are methods, systems, machine-readable media, and devices which operate a neural network defined by user code. A method includes identifying operations from user code that are integral to operating the neural network; combining a subset of the identified operations into a single processing sequence to be transmitted to an array of hardware processors; performing operations that are not integral to operating the neural network in a separate thread of execution from the operations that are; and mapping results to the combined operations that were included in the single processing sequence.
    Type: Application
    Filed: June 15, 2017
    Publication date: November 22, 2018
    Inventors: Frank Torsten Bernd SEIDE, Ryota TOMIOKA, Wilhelm RICHERT, Bruno S. BOZZA
  • Publication number: 20180338147
    Abstract: A data compression apparatus is described which has an encoder configured to receive an input data item and to compress the data item into an encoding comprising a plurality of numerical values. The numerical values are grouped at least according to whether they relate to content of the input data item or style of the input data item. The encoder has been trained using a plurality of groups of training data items grouped according to the content and where training data items within individual ones of the groups vary with respect to the style. The encoder has been trained using a training objective which takes into account the groups.
    Type: Application
    Filed: June 29, 2017
    Publication date: November 22, 2018
    Inventors: Sebastian NOWOZIN, Ryota TOMIOKA, Diane BOUCHACOURT
  • Publication number: 20180336458
    Abstract: A neural network training apparatus is described which has a network of worker nodes, each having a memory storing a subgraph of a neural network to be trained. The apparatus has a control node connected to the network of worker nodes. The control node is configured to send training data instances into the network to trigger parallelized message passing operations that implement a training algorithm for training the neural network. At least some of the message passing operations asynchronously update parameters of individual subgraphs of the neural network at the individual worker nodes.
    Type: Application
    Filed: May 18, 2017
    Publication date: November 22, 2018
    Inventors: Ryota TOMIOKA, Matthew Alastair JOHNSON, Daniel Stefan TARLOW, Samuel Alexander WEBSTER, Dimitrios VYTINIOTIS, Alexander Lloyd GAUNT, Maik RIECHERT
  • Publication number: 20180075347
    Abstract: A computation node of a neural network training system is described. The node has a memory storing a plurality of gradients of a loss function of the neural network and an encoder. The encoder encodes the plurality of gradients by setting individual ones of the gradients either to zero or to a quantization level according to a probability related to at least the magnitude of the individual gradient. The node has a processor which sends the encoded plurality of gradients to one or more other computation nodes of the neural network training system over a communications network.
    Type: Application
    Filed: September 15, 2016
    Publication date: March 15, 2018
    Inventors: Dan Alistarh, Jerry Zheng Li, Ryota Tomioka, Milan Vojnovic
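The encoding step in the abstract above can be sketched in a few lines. This is a hypothetical illustration (the chosen keep-probability and single quantization level are simplifications): each gradient is stochastically set to zero or to the quantization level, with the probability of keeping it tied to its magnitude, so the encoding is unbiased in expectation.

```python
import random

def encode_gradients(grads, level):
    """Stochastically quantize gradients to 0 or +/-level. For |g| <= level
    the expected value of each encoded entry equals the original gradient."""
    encoded = []
    for g in grads:
        p = min(abs(g) / level, 1.0)     # probability related to magnitude
        if random.random() < p:
            encoded.append(level if g > 0 else -level)
        else:
            encoded.append(0.0)
    return encoded

random.seed(0)
grads = [0.03, -0.4, 0.9, -0.02]
print(encode_gradients(grads, level=1.0))  # → [0.0, 0.0, 1.0, 0.0]
```

Small gradients are usually dropped and large ones usually kept, so the encoded vector is sparse and cheap to send over the communications network, while averaging many such encodings recovers the true gradients.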