Patents by Inventor Ryota Tomioka
Ryota Tomioka has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20220222531
Abstract: A neural network training apparatus is described which has a network of worker nodes each having a memory storing a subgraph of a neural network to be trained. The apparatus has a control node connected to the network of worker nodes. The control node is configured to send training data instances into the network to trigger parallelized message passing operations which implement a training algorithm which trains the neural network. At least some of the message passing operations asynchronously update parameters of individual subgraphs of the neural network at the individual worker nodes.
Type: Application
Filed: March 28, 2022
Publication date: July 14, 2022
Inventors: Ryota Tomioka, Matthew Alastair Johnson, Daniel Stefan Tarlow, Samuel Alexander Webster, Dimitrios Vytiniotis, Alexander Lloyd Gaunt, Maik Riechert
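The abstract above (which recurs below as granted patent 11288575 and application 20180336458) describes a neural network split into subgraphs owned by separate worker nodes, with a control node feeding training instances into the network and parameter updates happening asynchronously as messages arrive. The following Python sketch illustrates that flow under heavily simplified assumptions: two worker threads, a one-layer "subgraph" per worker, and a crude local gradient step. None of the names or update rules come from the patent itself.

```python
# A minimal sketch (not the patented implementation): each worker owns one
# "subgraph" (here a single linear layer), a control node injects training
# instances, and each worker updates its own parameters asynchronously as
# messages arrive, without global synchronization.
import threading
import queue
import numpy as np

class Worker(threading.Thread):
    """Owns one subgraph's parameters and updates them locally."""
    def __init__(self, dim, lr=0.01):
        super().__init__(daemon=True)
        self.w = np.random.randn(dim, dim) * 0.1
        self.inbox = queue.Queue()
        self.next_worker = None     # downstream worker, if any
        self.lr = lr

    def run(self):
        while True:
            kind, payload = self.inbox.get()
            if kind == "stop":
                break
            x, target = payload
            h = self.w @ x                       # local forward pass on the subgraph
            if self.next_worker is not None:
                self.next_worker.inbox.put(("data", (h, target)))
            # Asynchronous local update: gradient of 0.5 * ||w @ x - target||^2.
            grad = np.outer(h - target, x)
            self.w -= self.lr * grad

# Control-node wiring: two workers forming a chain.
w1, w2 = Worker(dim=4), Worker(dim=4)
w1.next_worker = w2
w1.start()
w2.start()

# The control node sends training data instances into the network.
for _ in range(100):
    x = np.random.randn(4)
    target = np.zeros(4)
    w1.inbox.put(("data", (x, target)))

w1.inbox.put(("stop", None)); w1.join()
w2.inbox.put(("stop", None)); w2.join()
```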
-
Patent number: 11288575
Abstract: A neural network training apparatus is described which has a network of worker nodes each having a memory storing a subgraph of a neural network to be trained. The apparatus has a control node connected to the network of worker nodes. The control node is configured to send training data instances into the network to trigger parallelized message passing operations which implement a training algorithm which trains the neural network. At least some of the message passing operations asynchronously update parameters of individual subgraphs of the neural network at the individual worker nodes.
Type: Grant
Filed: May 18, 2017
Date of Patent: March 29, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Ryota Tomioka, Matthew Alastair Johnson, Daniel Stefan Tarlow, Samuel Alexander Webster, Dimitrios Vytiniotis, Alexander Lloyd Gaunt, Maik Riechert
-
Publication number: 20210304066
Abstract: A computation graph of a machine learning model is accessed from memory and a constraint solver is used to compute a partition of the computation graph into ordered stages of an execution pipeline. In use, when inference or training of the machine learning model takes place by executing the pipeline, execution costs of the stages are balanced according to the computed partition.
Type: Application
Filed: April 21, 2020
Publication date: September 30, 2021
Inventors: Ryota Tomioka, Juliana Patrícia Vicente Franco, Alberto Magni, Nuno Claudino Pereira Lopes, Siddharth Krishna, Renato Golin
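The abstract describes computing a partition of the computation graph into ordered pipeline stages whose execution costs are balanced, using a constraint solver. As a rough illustration of the balancing objective only, the sketch below splits an ordered chain of operations with estimated costs into k stages so that the most expensive stage is as cheap as possible; it uses a small dynamic program rather than a constraint solver, and the operation costs are invented.

```python
# A simplified sketch of the cost-balancing objective: split an ordered
# chain of operations into k pipeline stages minimizing the bottleneck
# (maximum per-stage) cost. The patent describes a constraint solver over
# a general computation graph; this dynamic program over a linear chain
# only illustrates the same objective.
from functools import lru_cache

def partition(costs, k):
    """Return the minimum achievable bottleneck cost over k ordered stages."""
    n = len(costs)
    prefix = [0]
    for c in costs:
        prefix.append(prefix[-1] + c)

    @lru_cache(maxsize=None)
    def best(i, stages):
        # Minimum bottleneck cost for ops i..n-1 using `stages` stages.
        if stages == 1:
            return prefix[n] - prefix[i]
        result = float("inf")
        for j in range(i + 1, n - stages + 2):   # first stage = ops i..j-1
            stage_cost = prefix[j] - prefix[i]
            result = min(result, max(stage_cost, best(j, stages - 1)))
        return result

    return best(0, k)

# Example: estimated per-op execution costs for a 6-op model, 3 stages.
op_costs = [4, 2, 7, 3, 5, 1]
print(partition(op_costs, 3))   # bottleneck cost of the best balanced split
```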
-
Patent number: 10956535
Abstract: Disclosed in some examples are methods, systems, machine-readable media, and devices which operate a neural network defined by user code. A method includes identifying operations from user code that are integral in operating the neural network; combining a subset of the identified operations into a single processing sequence to be transmitted to an array of hardware processors; performing operations that are not integral in operating the neural network in a separate thread of execution from the operations that are integral in operating the neural network; and mapping results to the combined operations that were included in the single processing sequence.
Type: Grant
Filed: June 15, 2017
Date of Patent: March 23, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Frank Torsten Bernd Seide, Ryota Tomioka, Wilhelm Richert, Bruno S. Bozza
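This abstract (also published as application 20180336461 below) describes classifying operations derived from user code, batching the integral ones into a single processing sequence for an array of hardware processors, running the remaining operations on a separate thread, and mapping results back. The sketch below mimics that control flow with placeholder operations and a stubbed dispatch call; the classifier, the op format, and dispatch_sequence are assumptions for illustration, not a real accelerator API.

```python
# A hedged sketch of the described flow: classify ops, send the integral
# subset as one combined sequence, run auxiliary ops on a separate thread,
# then map results back to the combined operations.
import threading

def is_integral(op):
    return op["kind"] in {"matmul", "conv", "activation"}   # assumed classifier

def dispatch_sequence(ops):
    # Stand-in for transmitting one combined sequence to hardware processors.
    return {op["name"]: f"result_of_{op['name']}" for op in ops}

def run_auxiliary(ops, results):
    # Non-integral work (e.g. logging, bookkeeping) runs off the critical
    # path, in its own thread of execution.
    for op in ops:
        results[op["name"]] = f"aux_{op['name']}"

user_ops = [
    {"name": "fc1", "kind": "matmul"},
    {"name": "relu1", "kind": "activation"},
    {"name": "log_step", "kind": "logging"},
    {"name": "fc2", "kind": "matmul"},
]

integral = [op for op in user_ops if is_integral(op)]
auxiliary = [op for op in user_ops if not is_integral(op)]

aux_results = {}
aux_thread = threading.Thread(target=run_auxiliary, args=(auxiliary, aux_results))
aux_thread.start()

# The integral ops go out as a single combined processing sequence.
results = dispatch_sequence(integral)
aux_thread.join()

# Map results back to the operations that were included in the sequence.
results.update(aux_results)
print(results)
```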
-
Patent number: 10742990
Abstract: A data compression apparatus is described which has an encoder configured to receive an input data item and to compress the data item into an encoding comprising a plurality of numerical values. The numerical values are grouped at least according to whether they relate to content of the input data item or style of the input data item. The encoder has been trained using a plurality of groups of training data items grouped according to the content and where training data items within individual ones of the groups vary with respect to the style. The encoder has been trained using a training objective which takes into account the groups.
Type: Grant
Filed: September 20, 2018
Date of Patent: August 11, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sebastian Nowozin, Ryota Tomioka, Diane Bouchacourt
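This abstract (shared by entries 20190297328, 10158859, and 20180338147 below) describes an encoder whose latent values are grouped into content-related and style-related parts, trained on groups of items that share content but vary in style. The sketch below is one plausible reading of that setup, not the patented training objective: an encoder that splits its output into content and style slices, trained with a simple loss that pulls the content codes of a group together while leaving style codes free to vary. The dimensions and the loss are illustrative assumptions.

```python
# A minimal sketch (an assumption, not the patented objective) of an encoder
# whose encoding is grouped into a "content" part and a "style" part.
import torch
import torch.nn as nn

class GroupedEncoder(nn.Module):
    def __init__(self, input_dim=32, content_dim=8, style_dim=4):
        super().__init__()
        self.content_dim = content_dim
        self.net = nn.Sequential(
            nn.Linear(input_dim, 64),
            nn.ReLU(),
            nn.Linear(64, content_dim + style_dim),
        )

    def forward(self, x):
        z = self.net(x)
        # Group the numerical values of the encoding: content vs. style.
        return z[:, : self.content_dim], z[:, self.content_dim :]

encoder = GroupedEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# One training group: several items known to share the same content
# but varying in style (e.g. the same object under different conditions).
group = torch.randn(5, 32)

opt.zero_grad()
content, style = encoder(group)
# Content codes within a group should agree; style is left unconstrained here.
group_loss = ((content - content.mean(dim=0, keepdim=True)) ** 2).mean()
group_loss.backward()
opt.step()
print(content.shape, style.shape)
```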
-
Publication number: 20190297328
Abstract: A data compression apparatus is described which has an encoder configured to receive an input data item and to compress the data item into an encoding comprising a plurality of numerical values. The numerical values are grouped at least according to whether they relate to content of the input data item or style of the input data item. The encoder has been trained using a plurality of groups of training data items grouped according to the content and where training data items within individual ones of the groups vary with respect to the style. The encoder has been trained using a training objective which takes into account the groups.
Type: Application
Filed: September 20, 2018
Publication date: September 26, 2019
Inventors: Sebastian Nowozin, Ryota Tomioka, Diane Bouchacourt
-
Patent number: 10158859
Abstract: A data compression apparatus is described which has an encoder configured to receive an input data item and to compress the data item into an encoding comprising a plurality of numerical values. The numerical values are grouped at least according to whether they relate to content of the input data item or style of the input data item. The encoder has been trained using a plurality of groups of training data items grouped according to the content and where training data items within individual ones of the groups vary with respect to the style. The encoder has been trained using a training objective which takes into account the groups.
Type: Grant
Filed: June 29, 2017
Date of Patent: December 18, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sebastian Nowozin, Ryota Tomioka, Diane Bouchacourt
-
Publication number: 20180336461
Abstract: Disclosed in some examples are methods, systems, machine-readable media, and devices which operate a neural network defined by user code. A method includes identifying operations from user code that are integral in operating the neural network; combining a subset of the identified operations into a single processing sequence to be transmitted to an array of hardware processors; performing operations that are not integral in operating the neural network in a separate thread of execution from the operations that are integral in operating the neural network; and mapping results to the combined operations that were included in the single processing sequence.
Type: Application
Filed: June 15, 2017
Publication date: November 22, 2018
Inventors: Frank Torsten Bernd Seide, Ryota Tomioka, Wilhelm Richert, Bruno S. Bozza
-
Publication number: 20180338147
Abstract: A data compression apparatus is described which has an encoder configured to receive an input data item and to compress the data item into an encoding comprising a plurality of numerical values. The numerical values are grouped at least according to whether they relate to content of the input data item or style of the input data item. The encoder has been trained using a plurality of groups of training data items grouped according to the content and where training data items within individual ones of the groups vary with respect to the style. The encoder has been trained using a training objective which takes into account the groups.
Type: Application
Filed: June 29, 2017
Publication date: November 22, 2018
Inventors: Sebastian Nowozin, Ryota Tomioka, Diane Bouchacourt
-
Publication number: 20180336458
Abstract: A neural network training apparatus is described which has a network of worker nodes each having a memory storing a subgraph of a neural network to be trained. The apparatus has a control node connected to the network of worker nodes. The control node is configured to send training data instances into the network to trigger parallelized message passing operations which implement a training algorithm which trains the neural network. At least some of the message passing operations asynchronously update parameters of individual subgraphs of the neural network at the individual worker nodes.
Type: Application
Filed: May 18, 2017
Publication date: November 22, 2018
Inventors: Ryota Tomioka, Matthew Alastair Johnson, Daniel Stefan Tarlow, Samuel Alexander Webster, Dimitrios Vytiniotis, Alexander Lloyd Gaunt, Maik Riechert
-
Publication number: 20180075347
Abstract: A computation node of a neural network training system is described. The node has a memory storing a plurality of gradients of a loss function of the neural network, and an encoder. The encoder encodes the plurality of gradients by setting individual ones of the gradients either to zero or to a quantization level according to a probability related to at least the magnitude of the individual gradient. The node has a processor which sends the encoded plurality of gradients to one or more other computation nodes of the neural network training system over a communications network.
Type: Application
Filed: September 15, 2016
Publication date: March 15, 2018
Inventors: Dan Alistarh, Jerry Zheng Li, Ryota Tomioka, Milan Vojnovic
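The abstract describes encoding gradients by stochastically setting each one either to zero or to a quantization level, with a probability tied to its magnitude, before sending them to other nodes. The sketch below shows one common scheme that fits this description (keep a component with probability proportional to its magnitude relative to the vector norm, otherwise zero it); the specific scale and probability choices are assumptions, not necessarily the patented variant.

```python
# A sketch of stochastic gradient quantization: each component is set to a
# signed quantization level with probability |g_i| / ||g||, and to zero
# otherwise, so the encoded vector is sparse, cheap to transmit, and
# unbiased in expectation.
import numpy as np

rng = np.random.default_rng(0)

def stochastic_quantize(grad):
    scale = np.linalg.norm(grad)
    if scale == 0.0:
        return np.zeros_like(grad)
    keep_prob = np.abs(grad) / scale             # each value lies in [0, 1]
    keep = rng.random(grad.shape) < keep_prob    # keep with prob ~ magnitude
    return np.where(keep, np.sign(grad) * scale, 0.0)

g = np.array([0.5, -0.1, 0.02, 0.9])
print(stochastic_quantize(g))
```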