Patents by Inventor Misha E. Kilmer

Misha E. Kilmer has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11948093
    Abstract: Techniques for generating and managing, including simulating and training, deep tensor neural networks are presented. A deep tensor neural network comprises a graph of nodes connected via weighted edges. A network management component (NMC) extracts features from tensor-formatted input data based on tensor-formatted parameters. NMC evolves tensor-formatted input data based on a defined tensor-tensor layer evolution rule, the network generating output data based on evolution of the tensor-formatted input data. The network is activated by non-linear activation functions, wherein the weighted edges and non-linear activation functions operate, based on tensor-tensor functions, to evolve tensor-formatted input data. NMC trains the network based on tensor-formatted training data, comparing output training data output from the network to simulated output data, based on a defined loss function, to determine an update.
    Type: Grant
    Filed: October 5, 2022
    Date of Patent: April 2, 2024
    Assignees: INTERNATIONAL BUSINESS MACHINES CORPORATION, TRUSTEES OF TUFTS COLLEGE, RAMOT AT TEL-AVIV UNIVERSITY LTD.
    Inventors: Lior Horesh, Elizabeth Newman, Misha E. Kilmer, Haim Avron
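The abstract above does not name a specific tensor-tensor function; a plausible instance, given the inventors' published work, is the FFT-based t-product, with ReLU standing in for the unspecified non-linear activation. The following is a minimal illustrative sketch of one tensor-layer "evolution" step, not the patented method itself:

```python
import numpy as np

def t_product(A, B):
    """Tensor-tensor (t-)product of A (l x p x n) and B (p x m x n):
    facewise matrix products in the Fourier domain along the third mode."""
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.einsum('ipk,pjk->ijk', Af, Bf)
    return np.real(np.fft.ifft(Cf, axis=2))

def tensor_layer(W, X, B):
    """One forward evolution step of tensor-formatted input X under a
    tensor-tensor affine map followed by a ReLU activation."""
    return np.maximum(t_product(W, X) + B, 0.0)

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 2, 8))   # tensor-formatted input data
W = rng.standard_normal((4, 4, 8))   # tensor-formatted weight parameters
B = rng.standard_normal((4, 2, 8))   # tensor-formatted bias
Y = tensor_layer(W, X, B)            # evolved tensor, same shape as X
```

In a deep tensor network, such layers would be stacked and the weights updated against a loss on training data, as the abstract describes.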
  • Publication number: 20230306276
    Abstract: Techniques for generating and managing, including simulating and training, deep tensor neural networks are presented. A deep tensor neural network comprises a graph of nodes connected via weighted edges. A network management component (NMC) extracts features from tensor-formatted input data based on tensor-formatted parameters. NMC evolves tensor-formatted input data based on a defined tensor-tensor layer evolution rule, the network generating output data based on evolution of the tensor-formatted input data. The network is activated by non-linear activation functions, wherein the weighted edges and non-linear activation functions operate, based on tensor-tensor functions, to evolve tensor-formatted input data. NMC trains the network based on tensor-formatted training data, comparing output training data output from the network to simulated output data, based on a defined loss function, to determine an update.
    Type: Application
    Filed: October 5, 2022
    Publication date: September 28, 2023
    Inventors: Lior Horesh, Elizabeth Newman, Misha E. Kilmer, Haim Avron
  • Patent number: 11531902
    Abstract: Techniques for generating and managing, including simulating and training, deep tensor neural networks are presented. A deep tensor neural network comprises a graph of nodes connected via weighted edges. A network management component (NMC) extracts features from tensor-formatted input data based on tensor-formatted parameters. NMC evolves tensor-formatted input data based on a defined tensor-tensor layer evolution rule, the network generating output data based on evolution of the tensor-formatted input data. The network is activated by non-linear activation functions, wherein the weighted edges and non-linear activation functions operate, based on tensor-tensor functions, to evolve tensor-formatted input data. NMC trains the network based on tensor-formatted training data, comparing output training data output from the network to simulated output data, based on a defined loss function, to determine an update.
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: December 20, 2022
    Assignees: INTERNATIONAL BUSINESS MACHINES CORPORATION, TRUSTEES OF TUFTS COLLEGE, RAMOT AT TEL-AVIV UNIVERSITY LTD.
    Inventors: Lior Horesh, Elizabeth Newman, Misha E. Kilmer, Haim Avron
  • Patent number: 11386507
    Abstract: A computer-implemented method for analyzing a time-varying graph is provided. The time-varying graph includes nodes representing elements in a network, edges representing transactions between elements, and data associated with the nodes and the edges. The computer-implemented method includes constructing, using a processor, adjacency and feature matrices describing each node and edge of each time-varying graph for stacking into an adjacency tensor and describing the data of each time-varying graph for stacking into a feature tensor, respectively. The adjacency and feature tensors are partitioned into adjacency and feature training tensors and into adjacency and feature validation tensors, respectively. An embedding model and a prediction model are created using the adjacency and feature training tensors. The embedding and prediction models are validated using the adjacency and feature validation tensors to identify an optimized embedding-prediction model pair.
    Type: Grant
    Filed: September 23, 2019
    Date of Patent: July 12, 2022
    Assignees: INTERNATIONAL BUSINESS MACHINES CORPORATION, Trustees of Tufts College, RAMOT AT TEL-AVIV UNIVERSITY LTD.
    Inventors: Lior Horesh, Osman Asif Malik, Shashanka Ubaru, Misha E. Kilmer, Haim Avron
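The tensor-construction and partitioning steps in the abstract above can be sketched in a few lines; the graph sizes and the 80/20 split below are illustrative assumptions, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(1)
T, n, f = 10, 5, 3  # time steps, nodes, node features (illustrative sizes)

# One adjacency matrix and one feature matrix per snapshot of the graph.
adj_mats  = [(rng.random((n, n)) < 0.3).astype(float) for _ in range(T)]
feat_mats = [rng.standard_normal((n, f)) for _ in range(T)]

# Stack the per-snapshot matrices into adjacency and feature tensors.
A = np.stack(adj_mats, axis=2)   # n x n x T adjacency tensor
F = np.stack(feat_mats, axis=2)  # n x f x T feature tensor

# Partition along the time axis into training and validation tensors.
split = int(0.8 * T)
A_train, A_val = A[:, :, :split], A[:, :, split:]
F_train, F_val = F[:, :, :split], F[:, :, split:]
```

The embedding and prediction models would then be fit on the training tensors and scored on the validation tensors to select the optimized pair.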
  • Publication number: 20210090182
    Abstract: A computer-implemented method for analyzing a time-varying graph is provided. The time-varying graph includes nodes representing elements in a network, edges representing transactions between elements, and data associated with the nodes and the edges. The computer-implemented method includes constructing, using a processor, adjacency and feature matrices describing each node and edge of each time-varying graph for stacking into an adjacency tensor and describing the data of each time-varying graph for stacking into a feature tensor, respectively. The adjacency and feature tensors are partitioned into adjacency and feature training tensors and into adjacency and feature validation tensors, respectively. An embedding model and a prediction model are created using the adjacency and feature training tensors. The embedding and prediction models are validated using the adjacency and feature validation tensors to identify an optimized embedding-prediction model pair.
    Type: Application
    Filed: September 23, 2019
    Publication date: March 25, 2021
    Inventors: Lior Horesh, Osman Asif Malik, Shashanka Ubaru, Misha E. Kilmer, Haim Avron
  • Patent number: 10885241
    Abstract: Methods and systems for generating output of a simulation model in a simulation system are described. In an example, a processor may retrieve observed output data from a memory. The observed output data may be generated based on a simulation operator of a simulation model. The processor may further optimize a generalization error of a distance measure between the observed output data and model output data. The model output data may be generated based on a high-fidelity operator. The processor may further determine a correction operator based on the optimized generalization error of the distance measure. The processor may further append the correction operator to the simulation operator to produce a supplemented operator. The processor may further generate supplemented output data by applying the simulation model with the supplemented operator on a set of inputs.
    Type: Grant
    Filed: January 3, 2018
    Date of Patent: January 5, 2021
    Assignees: International Business Machines Corporation, Trustees of Tufts College
    Inventors: Lior Horesh, Ning Hao, Raya Horesh, David Nahamoo, Misha E. Kilmer
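The abstract above does not specify the distance measure or the optimization; as a hedged sketch, take both operators to be linear maps and fit the correction operator by ordinary least squares, so that the supplemented operator better reproduces the observed output data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
A_sim = np.diag(np.linspace(1.0, 2.0, n))          # low-fidelity simulation operator
A_hf  = A_sim + 0.1 * rng.standard_normal((n, n))  # high-fidelity operator

X = rng.standard_normal((n, 200))   # set of inputs
Y = A_hf @ X                        # observed output data (high-fidelity)

# Correction operator C minimizing ||Y - (A_sim + C) X||_F in the
# least-squares sense, via the residual the correction must explain.
R = Y - A_sim @ X
C = R @ np.linalg.pinv(X)
A_supp = A_sim + C                  # supplemented operator

err_before = np.linalg.norm(Y - A_sim @ X)
err_after  = np.linalg.norm(Y - A_supp @ X)
```

Appending the learned correction drives the misfit toward zero on these inputs, mirroring the abstract's supplemented-output step.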
  • Patent number: 10771088
    Abstract: A tensor decomposition method, system, and computer program product include compressing multi-dimensional data by truncated tensor-tensor decompositions.
    Type: Grant
    Filed: February 28, 2019
    Date of Patent: September 8, 2020
    Assignees: INTERNATIONAL BUSINESS MACHINES CORPORATION, TUFTS UNIVERSITY, TEL AVIV-YAFO UNIVERSITY
    Inventors: Lior Horesh, Misha E. Kilmer, Haim Avron, Elizabeth Newman
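One common truncated tensor-tensor decomposition consistent with the abstract above is the truncated t-SVD, computed slice-by-slice in the Fourier domain; the sketch below is an illustrative instance, not necessarily the exact decomposition claimed:

```python
import numpy as np

def tsvd_truncate(A, k):
    """Rank-k truncated t-SVD of A: SVD each frontal slice in the Fourier
    domain along the third mode, keep the k largest terms, transform back."""
    Af = np.fft.fft(A, axis=2)
    Bf = np.zeros_like(Af)
    for i in range(A.shape[2]):
        U, s, Vh = np.linalg.svd(Af[:, :, i], full_matrices=False)
        Bf[:, :, i] = (U[:, :k] * s[:k]) @ Vh[:k, :]
    return np.real(np.fft.ifft(Bf, axis=2))

rng = np.random.default_rng(3)
A = rng.standard_normal((20, 15, 6))  # multi-dimensional data to compress
A2 = tsvd_truncate(A, 2)              # aggressive compression
A8 = tsvd_truncate(A, 8)              # milder compression

# Relative reconstruction errors: keeping more terms approximates better.
e2 = np.linalg.norm(A - A2) / np.linalg.norm(A)
e8 = np.linalg.norm(A - A8) / np.linalg.norm(A)
```

Storing only the truncated factors (rather than the full tensor) is what yields the compression the abstract refers to.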
  • Publication number: 20200280322
    Abstract: A tensor decomposition method, system, and computer program product include compressing multi-dimensional data by truncated tensor-tensor decompositions.
    Type: Application
    Filed: February 28, 2019
    Publication date: September 3, 2020
    Inventors: Lior Horesh, Misha E. Kilmer, Haim Avron, Elizabeth Newman
  • Publication number: 20200151580
    Abstract: Techniques for generating and managing, including simulating and training, deep tensor neural networks are presented. A deep tensor neural network comprises a graph of nodes connected via weighted edges. A network management component (NMC) extracts features from tensor-formatted input data based on tensor-formatted parameters. NMC evolves tensor-formatted input data based on a defined tensor-tensor layer evolution rule, the network generating output data based on evolution of the tensor-formatted input data. The network is activated by non-linear activation functions, wherein the weighted edges and non-linear activation functions operate, based on tensor-tensor functions, to evolve tensor-formatted input data. NMC trains the network based on tensor-formatted training data, comparing output training data output from the network to simulated output data, based on a defined loss function, to determine an update.
    Type: Application
    Filed: November 13, 2018
    Publication date: May 14, 2020
    Inventors: Lior Horesh, Elizabeth Newman, Misha E. Kilmer, Haim Avron
  • Publication number: 20190205488
    Abstract: Methods and systems for generating output of a simulation model in a simulation system are described. In an example, a processor may retrieve observed output data from a memory. The observed output data may be generated based on a simulation operator of a simulation model. The processor may further optimize a generalization error of a distance measure between the observed output data and model output data. The model output data may be generated based on a high-fidelity operator. The processor may further determine a correction operator based on the optimized generalization error of the distance measure. The processor may further append the correction operator to the simulation operator to produce a supplemented operator. The processor may further generate supplemented output data by applying the simulation model with the supplemented operator on a set of inputs.
    Type: Application
    Filed: January 3, 2018
    Publication date: July 4, 2019
    Inventors: Lior Horesh, Ning Hao, Raya Horesh, David Nahamoo, Misha E. Kilmer