Patents by Inventor Ryota Tomioka

Ryota Tomioka has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250156692
    Abstract: Examples are disclosed that relate to a generative model for generating inorganic material candidates, such as crystalline structures. One example provides a method comprising training an unconditional generative model, the unconditional generative model comprising a diffusion model, using a dataset of stable periodic material structures. The training comprises teaching the diffusion model to iteratively noise the stable periodic material structures of the dataset towards a random periodic structure by noising atom types of atoms in the periodic material structure, noising fractional coordinates of the atoms, and noising a lattice of the periodic material structure. The method further comprises using the trained unconditional generative model to generate a material structure by iteratively denoising an initial structure sampled from a random distribution.
    Type: Application
    Filed: June 28, 2024
    Publication date: May 15, 2025
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Tian XIE, Andrew Thomas FOWLER, Claudio ZENI, Daniel ZUEGNER, Robert PINSLER, Ryota TOMIOKA, Matthew Kristofer HORTON, Xiang FU
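
The abstract above describes a forward diffusion process that jointly noises atom types, fractional coordinates, and the lattice of a periodic structure. The following is a minimal Python sketch of one such forward-noising step; the linear schedule, noise scales, and function names are illustrative assumptions, not details from the patent.

```python
import random

def noise_structure(atom_types, frac_coords, lattice, t, T, num_types=100, rng=None):
    """One forward-noising step at time t of T (toy sketch, hypothetical API)."""
    rng = rng or random.Random(0)
    beta = t / T  # illustrative linear schedule: no noise at t=0, fully random at t=T
    # Noise atom types: with probability beta, resample a random element type
    new_types = [a if rng.random() > beta else rng.randrange(1, num_types + 1)
                 for a in atom_types]
    # Noise fractional coordinates: Gaussian perturbation, wrapped back into [0, 1)
    new_coords = [[(x + rng.gauss(0.0, 0.1 * beta)) % 1.0 for x in atom]
                  for atom in frac_coords]
    # Noise the lattice: interpolate the 3x3 lattice matrix towards a random one
    new_lattice = [[(1 - beta) * lattice[i][j] + beta * rng.gauss(0.0, 1.0)
                    for j in range(3)] for i in range(3)]
    return new_types, new_coords, new_lattice
```

A denoising model would then be trained to reverse steps of this kind, so that sampling starts from a random structure and iteratively denoises it.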
  • Publication number: 20250005455
    Abstract: A computation graph of a machine learning model is accessed from memory and a constraint solver is used to compute a partition of the computation graph into ordered stages of an execution pipeline. In use, when inference or training of the machine learning model takes place by executing the pipeline, the execution costs of the stages are balanced according to the computed partition.
    Type: Application
    Filed: July 2, 2024
    Publication date: January 2, 2025
    Inventors: Ryota TOMIOKA, Juliana Patrícia VICENTE FRANCO, Alberto MAGNI, Nuno CLAUDINO PEREIRA LOPES, Siddharth KRISHNA, Renato GOLIN
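
The abstract above describes partitioning a computation graph into ordered pipeline stages with balanced execution costs. As a minimal sketch, the snippet below balances a linearized graph (a list of per-node costs) into contiguous stages by exhaustive search over cut points; a real system would use a constraint solver and the full graph structure, so this is only an illustration of the balancing objective.

```python
from itertools import combinations

def partition_pipeline(costs, num_stages):
    """Split an ordered list of node costs into `num_stages` contiguous stages,
    minimizing the cost of the most expensive stage (brute-force sketch)."""
    n = len(costs)
    best, best_bounds = float("inf"), None
    for cuts in combinations(range(1, n), num_stages - 1):
        bounds = [0, *cuts, n]
        stage_costs = [sum(costs[a:b]) for a, b in zip(bounds, bounds[1:])]
        if max(stage_costs) < best:
            best, best_bounds = max(stage_costs), bounds
    stages = [costs[a:b] for a, b in zip(best_bounds, best_bounds[1:])]
    return stages, best
```

For example, splitting costs `[1, 2, 3, 4, 5, 6]` into three stages yields stage sums `[6, 9, 6]`, the most balanced contiguous partition.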
  • Publication number: 20250003920
    Abstract: A sensor element for detecting a target gas to be measured in a measurement-object gas includes: a base part in an elongated plate shape, including an oxygen-ion-conductive solid electrolyte layer; a measurement-object gas flow part formed from one end part in a longitudinal direction of the base part; an inner main pump electrode disposed on an inner surface of the measurement-object gas flow part; a porous coating layer covering at least an electrode end part of the inner main pump electrode closer to the one end part in the longitudinal direction of the base part; and a measurement electrode disposed at a position farther from the one end part in the longitudinal direction of the base part than the inner main pump electrode on the inner surface of the measurement-object gas flow part.
    Type: Application
    Filed: September 12, 2024
    Publication date: January 2, 2025
    Inventors: Shiho IWAI, Sotaro INAGAKI, Ryota TOMIOKA, Takayuki SEKIYA
  • Publication number: 20240419967
    Abstract: A neural network training apparatus is described which has a network of worker nodes each having a memory storing a subgraph of a neural network to be trained. The apparatus has a control node connected to the network of worker nodes. The control node is configured to send training data instances into the network to trigger parallelized message passing operations which implement a training algorithm which trains the neural network. At least some of the message passing operations asynchronously update parameters of individual subgraphs of the neural network at the individual worker nodes.
    Type: Application
    Filed: August 23, 2024
    Publication date: December 19, 2024
    Inventors: Ryota TOMIOKA, Matthew Alastair JOHNSON, Daniel Stefan TARLOW, Samuel Alexander WEBSTER, Dimitrios VYTINIOTIS, Alexander Lloyd GAUNT, Maik RIECHERT
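
The abstract above describes a control node routing training instances to worker nodes, each of which asynchronously updates the parameters of its own subgraph. The following toy Python sketch mimics that shape with threads and message queues: each "worker" holds a single scalar parameter standing in for a subgraph, and updates it asynchronously as instances arrive. All names and the trivial squared-error update are illustrative assumptions.

```python
import queue
import threading

def run_training(num_workers, instances, lr=0.1):
    """Toy message-passing trainer: one parameter per worker 'subgraph',
    updated asynchronously as the control node routes instances."""
    params = [0.0] * num_workers
    inboxes = [queue.Queue() for _ in range(num_workers)]

    def worker(i):
        while True:
            msg = inboxes[i].get()
            if msg is None:          # shutdown signal from the control node
                return
            # Asynchronous local update: gradient step on (w - target)^2 / 2
            params[i] -= lr * (params[i] - msg)

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(num_workers)]
    for t in threads:
        t.start()
    # Control node: send instances into the network, triggering message passing
    for k, x in enumerate(instances):
        inboxes[k % num_workers].put(x)
    for q in inboxes:
        q.put(None)
    for t in threads:
        t.join()
    return params
```

Each worker converges towards the targets it receives without any global synchronization barrier between updates, which is the essence of the asynchronous scheme.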
  • Publication number: 20240377350
    Abstract: A sensor element includes: a base part in an elongated plate shape; a measurement-object gas flow part formed from one end part in a longitudinal direction of the base part; an inner main pump electrode disposed on an inner surface of the measurement-object gas flow part; an outer pump electrode disposed in correspondence with the inner main pump electrode; and a measurement electrode disposed at a position farther from the one end part than the inner main pump electrode on the inner surface of the measurement-object gas flow part. The outer pump electrode is disposed at a position farther from the one end part in the longitudinal direction of the base part than one electrode end of the inner main pump electrode closer to the one end part in the longitudinal direction of the base part, with respect to the longitudinal direction of the base part.
    Type: Application
    Filed: July 25, 2024
    Publication date: November 14, 2024
    Inventors: Sotaro INAGAKI, Shiho IWAI, Ryota TOMIOKA, Takayuki SEKIYA
  • Patent number: 12099927
    Abstract: A neural network training apparatus is described which has a network of worker nodes each having a memory storing a subgraph of a neural network to be trained. The apparatus has a control node connected to the network of worker nodes. The control node is configured to send training data instances into the network to trigger parallelized message passing operations which implement a training algorithm which trains the neural network. At least some of the message passing operations asynchronously update parameters of individual subgraphs of the neural network at the individual worker nodes.
    Type: Grant
    Filed: March 28, 2022
    Date of Patent: September 24, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ryota Tomioka, Matthew Alastair Johnson, Daniel Stefan Tarlow, Samuel Alexander Webster, Dimitrios Vytiniotis, Alexander Lloyd Gaunt, Maik Riechert
  • Patent number: 12093791
    Abstract: A computation graph of a machine learning model is accessed from memory and a constraint solver is used to compute a partition of the computation graph into ordered stages of an execution pipeline. In use, when inference or training of the machine learning model takes place by executing the pipeline, the execution costs of the stages are balanced according to the computed partition.
    Type: Grant
    Filed: April 21, 2020
    Date of Patent: September 17, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ryota Tomioka, Juliana Patrícia Vicente Franco, Alberto Magni, Nuno Claudino Pereira Lopes, Siddharth Krishna, Renato Golin
  • Publication number: 20240249800
    Abstract: A computerized method for forecasting a future conformation of a molecular system based on a current conformation of the molecular system comprises (a) receiving the current conformation in a machine-learning model that has been previously trained to map a plurality of conformations received to a corresponding plurality of conformations proposed; (b) mapping the current conformation to a proposed conformation via the trained machine-learning model, wherein the proposed conformation is appended to a Markov chain; and (c) returning the proposed conformation as the future conformation.
    Type: Application
    Filed: March 7, 2023
    Publication date: July 25, 2024
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Leon Immanuel KLEIN, Yue Kwang FOONG, Tor Erlend FJELDE, Bruno Kacper MLODOZENIEC, Marc Manuel Johannes BROCKSCHMIDT, Reinhard Sebastian Bernhard NOWOZIN, Frank NOE, Ryota TOMIOKA
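
The abstract above describes a trained model that proposes a future conformation, which is appended to a Markov chain. A common way to keep such a chain statistically valid is a Metropolis-style accept/reject step; the sketch below illustrates that pattern with a stub `propose` function standing in for the trained model. The acceptance rule, stub functions, and symmetric-proposal assumption are illustrative, not details from the patent.

```python
import math
import random

def forecast_conformation(current, propose, energy, steps=10, rng=None):
    """Build a Markov chain of conformations: `propose` plays the role of the
    trained model; proposals are accepted or rejected Metropolis-style
    (assuming, for simplicity, a symmetric proposal distribution)."""
    rng = rng or random.Random(0)
    chain = [current]
    for _ in range(steps):
        proposal = propose(chain[-1], rng)
        delta = energy(chain[-1]) - energy(proposal)   # energy decrease is good
        accept = delta >= 0 or rng.random() < math.exp(delta)
        chain.append(proposal if accept else chain[-1])
    return chain
```

With a quadratic toy energy and a Gaussian stub proposal, the chain samples conformations concentrated near the energy minimum.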
  • Publication number: 20220222531
    Abstract: A neural network training apparatus is described which has a network of worker nodes each having a memory storing a subgraph of a neural network to be trained. The apparatus has a control node connected to the network of worker nodes. The control node is configured to send training data instances into the network to trigger parallelized message passing operations which implement a training algorithm which trains the neural network. At least some of the message passing operations asynchronously update parameters of individual subgraphs of the neural network at the individual worker nodes.
    Type: Application
    Filed: March 28, 2022
    Publication date: July 14, 2022
    Inventors: Ryota TOMIOKA, Matthew Alastair JOHNSON, Daniel Stefan TARLOW, Samuel Alexander WEBSTER, Dimitrios VYTINIOTIS, Alexander Lloyd GAUNT, Maik RIECHERT
  • Patent number: 11288575
    Abstract: A neural network training apparatus is described which has a network of worker nodes each having a memory storing a subgraph of a neural network to be trained. The apparatus has a control node connected to the network of worker nodes. The control node is configured to send training data instances into the network to trigger parallelized message passing operations which implement a training algorithm which trains the neural network. At least some of the message passing operations asynchronously update parameters of individual subgraphs of the neural network at the individual worker nodes.
    Type: Grant
    Filed: May 18, 2017
    Date of Patent: March 29, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ryota Tomioka, Matthew Alastair Johnson, Daniel Stefan Tarlow, Samuel Alexander Webster, Dimitrios Vytiniotis, Alexander Lloyd Gaunt, Maik Riechert
  • Publication number: 20210304066
    Abstract: A computation graph of a machine learning model is accessed from memory and a constraint solver is used to compute a partition of the computation graph into ordered stages of an execution pipeline. In use, when inference or training of the machine learning model takes place by executing the pipeline, the execution costs of the stages are balanced according to the computed partition.
    Type: Application
    Filed: April 21, 2020
    Publication date: September 30, 2021
    Inventors: Ryota TOMIOKA, Juliana Patrícia VICENTE FRANCO, Alberto MAGNI, Nuno CLAUDINO PEREIRA LOPES, Siddharth KRISHNA, Renato GOLIN
  • Patent number: 10956535
    Abstract: Disclosed in some examples are methods, systems, machine-readable media, and devices which operate a neural network defined by user code. A method includes identifying operations from user code that are integral to operating the neural network; combining a subset of the identified operations into a single processing sequence to be transmitted to an array of hardware processors; performing operations that are not integral to operation of the neural network in a separate thread of execution from the operations that are; and mapping results to the combined operations that were included in the single processing sequence.
    Type: Grant
    Filed: June 15, 2017
    Date of Patent: March 23, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Frank Torsten Bernd Seide, Ryota Tomioka, Wilhelm Richert, Bruno S Bozza
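
The abstract above separates user-code operations into those batched into one processing sequence for the hardware array and those run on a separate host thread, with results mapped back afterwards. A minimal sketch of that partitioning step, with hypothetical names throughout:

```python
def build_processing_sequence(ops, is_integral):
    """Partition user-code ops: integral ops go into one combined sequence
    for the hardware array; the rest go to a separate host thread. An index
    map records where each integral op's result lands (illustrative)."""
    sequence, host_ops, result_map = [], [], {}
    for idx, op in enumerate(ops):
        if is_integral(op):
            result_map[idx] = len(sequence)
            sequence.append(op)
        else:
            host_ops.append((idx, op))
    return sequence, host_ops, result_map
```

After execution, `result_map` lets results of the combined sequence be routed back to the positions the user code expects.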
  • Patent number: 10742990
    Abstract: A data compression apparatus is described which has an encoder configured to receive an input data item and to compress the data item into an encoding comprising a plurality of numerical values. The numerical values are grouped at least according to whether they relate to content of the input data item or style of the input data item. The encoder has been trained using a plurality of groups of training data items grouped according to the content and where training data items within individual ones of the groups vary with respect to the style. The encoder has been trained using a training objective which takes into account the groups.
    Type: Grant
    Filed: September 20, 2018
    Date of Patent: August 11, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sebastian Nowozin, Ryota Tomioka, Diane Bouchacourt
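
The abstract above describes an encoding whose numerical values are grouped into a content part and a style part, trained on groups of items that share content but vary in style. The sketch below illustrates the grouping idea only: it splits each latent vector into content and style dimensions and pools the content code across a same-content group, leaving each item's style code intact. The pooling-by-averaging and all names are illustrative assumptions, not the patent's training objective.

```python
def group_content(encodings, content_dims):
    """Split each latent vector into content (first `content_dims` entries)
    and style (the rest); items in a same-content group share one pooled
    content code while keeping individual style codes (sketch)."""
    contents = [z[:content_dims] for z in encodings]
    pooled = [sum(col) / len(contents) for col in zip(*contents)]
    return [(pooled, z[content_dims:]) for z in encodings]
```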
  • Publication number: 20190297328
    Abstract: A data compression apparatus is described which has an encoder configured to receive an input data item and to compress the data item into an encoding comprising a plurality of numerical values. The numerical values are grouped at least according to whether they relate to content of the input data item or style of the input data item. The encoder has been trained using a plurality of groups of training data items grouped according to the content and where training data items within individual ones of the groups vary with respect to the style. The encoder has been trained using a training objective which takes into account the groups.
    Type: Application
    Filed: September 20, 2018
    Publication date: September 26, 2019
    Inventors: Sebastian NOWOZIN, Ryota TOMIOKA, Diane BOUCHACOURT
  • Patent number: 10158859
    Abstract: A data compression apparatus is described which has an encoder configured to receive an input data item and to compress the data item into an encoding comprising a plurality of numerical values. The numerical values are grouped at least according to whether they relate to content of the input data item or style of the input data item. The encoder has been trained using a plurality of groups of training data items grouped according to the content and where training data items within individual ones of the groups vary with respect to the style. The encoder has been trained using a training objective which takes into account the groups.
    Type: Grant
    Filed: June 29, 2017
    Date of Patent: December 18, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sebastian Nowozin, Ryota Tomioka, Diane Bouchacourt
  • Publication number: 20180336461
    Abstract: Disclosed in some examples are methods, systems, machine-readable media, and devices which operate a neural network defined by user code. A method includes identifying operations from user code that are integral to operating the neural network; combining a subset of the identified operations into a single processing sequence to be transmitted to an array of hardware processors; performing operations that are not integral to operation of the neural network in a separate thread of execution from the operations that are; and mapping results to the combined operations that were included in the single processing sequence.
    Type: Application
    Filed: June 15, 2017
    Publication date: November 22, 2018
    Inventors: Frank Torsten Bernd SEIDE, Ryota TOMIOKA, Wilhelm RICHERT, Bruno S. BOZZA
  • Publication number: 20180336458
    Abstract: A neural network training apparatus is described which has a network of worker nodes each having a memory storing a subgraph of a neural network to be trained. The apparatus has a control node connected to the network of worker nodes. The control node is configured to send training data instances into the network to trigger parallelized message passing operations which implement a training algorithm which trains the neural network. At least some of the message passing operations asynchronously update parameters of individual subgraphs of the neural network at the individual worker nodes.
    Type: Application
    Filed: May 18, 2017
    Publication date: November 22, 2018
    Inventors: Ryota TOMIOKA, Matthew Alastair JOHNSON, Daniel Stefan TARLOW, Samuel Alexander WEBSTER, Dimitrios VYTINIOTIS, Alexander Lloyd GAUNT, Maik RIECHERT
  • Publication number: 20180338147
    Abstract: A data compression apparatus is described which has an encoder configured to receive an input data item and to compress the data item into an encoding comprising a plurality of numerical values. The numerical values are grouped at least according to whether they relate to content of the input data item or style of the input data item. The encoder has been trained using a plurality of groups of training data items grouped according to the content and where training data items within individual ones of the groups vary with respect to the style. The encoder has been trained using a training objective which takes into account the groups.
    Type: Application
    Filed: June 29, 2017
    Publication date: November 22, 2018
    Inventors: Sebastian NOWOZIN, Ryota TOMIOKA, Diane BOUCHACOURT
  • Publication number: 20180075347
    Abstract: A computation node of a neural network training system is described. The node has a memory storing a plurality of gradients of a loss function of the neural network and an encoder. The encoder encodes the plurality of gradients by setting individual ones of the gradients either to zero or to a quantization level according to a probability related to at least the magnitude of the individual gradient. The node has a processor which sends the encoded plurality of gradients to one or more other computation nodes of the neural network training system over a communications network.
    Type: Application
    Filed: September 15, 2016
    Publication date: March 15, 2018
    Inventors: Dan Alistarh, Jerry Zheng Li, Ryota Tomioka, Milan Vojnovic