Patents by Inventor Matteo INTERLANDI

Matteo INTERLANDI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240126604
    Abstract: A system provisions resources of a processing unit. The system predicts a performance impact on a workload attributable to a performance constraint of the processing unit for the workload according to a resource model, wherein the workload includes a query and the resource model characterizes attainable compute bandwidth, attainable memory bandwidth, and arithmetic intensity based on peak compute bandwidth and peak memory bandwidth of the processing unit. The system determines a resource allocation of the processing unit based on the predicted performance impact, and instructs the processing unit to allocate the resources for processing the workload based on the determined resource allocation.
    Type: Application
    Filed: January 30, 2023
    Publication date: April 18, 2024
    Inventors: Rathijit SEN, Matteo INTERLANDI, Jiashen CAO
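
The resource model in this abstract is a roofline-style model. As a minimal sketch in Python (the function names and the GPU numbers in the example are illustrative, not from the filing), attainable throughput is the lesser of peak compute bandwidth and peak memory bandwidth times arithmetic intensity, which is already enough to predict whether a resource constraint will actually slow a workload down:

```python
def attainable_throughput(arithmetic_intensity, peak_compute, peak_mem_bw):
    # Roofline model: a workload is capped either by the processing
    # unit's peak compute rate or by how fast memory can feed it.
    return min(peak_compute, peak_mem_bw * arithmetic_intensity)

def predicted_slowdown(arithmetic_intensity, peak_compute, peak_mem_bw,
                       compute_fraction, mem_bw_fraction):
    # Predicted performance impact of constraining the processing unit
    # to a fraction of its compute and memory-bandwidth resources.
    full = attainable_throughput(arithmetic_intensity,
                                 peak_compute, peak_mem_bw)
    constrained = attainable_throughput(arithmetic_intensity,
                                        peak_compute * compute_fraction,
                                        peak_mem_bw * mem_bw_fraction)
    return full / constrained

# A memory-bound query (4 FLOP/byte) on a GPU with 19.5 TFLOP/s peak
# compute and 1.55 TB/s peak memory bandwidth: halving the compute
# allocation predicts no slowdown, so that compute can be provisioned
# to other workloads.
print(predicted_slowdown(4.0, 19.5e12, 1.55e12, 0.5, 1.0))  # 1.0
```
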
  • Publication number: 20240119050
    Abstract: Example aspects include techniques for query processing over deep neural network runtimes. These techniques include receiving a query including a query operator and a trainable user-defined function (UDF). In addition, the techniques include determining a query representation based on the query, and determining, for performing the query in a neural network runtime, an initial neural network program based on the query representation, the initial neural network program including a differentiable operator corresponding to the query operator, and executing the neural network program in the neural network runtime over a neural network data structure to generate a query result. Further, the techniques include training the initial neural network program via the neural network runtime to determine a trained neural network program, and executing the trained neural network program in the neural network runtime to generate inference information.
    Type: Application
    Filed: October 11, 2022
    Publication date: April 11, 2024
    Inventors: Matteo INTERLANDI, Apurva Sandeep Gandhi, Yuki Asada, Advitya Gemawat, Victor Renjie Fu, Lihao Zhang, Rathijit Sen, Dalitso Hansini Banda
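
The distinguishing feature of this application over the related one below (20230244662) is that the generated neural network program is trainable. A minimal sketch of one differentiable operator, assuming PyTorch (the soft_filter name and the sigmoid relaxation are illustrative choices, not the filing's construction):

```python
import torch

def soft_filter(values, threshold, temperature=0.1):
    # Differentiable relaxation of `WHERE values > threshold`: a sigmoid
    # yields soft membership weights instead of a hard 0/1 mask, so
    # gradients can flow back into the trainable threshold.
    return torch.sigmoid((values - threshold) / temperature)

values = torch.tensor([1.0, 2.0, 3.0, 4.0])
threshold = torch.tensor(2.5, requires_grad=True)  # trainable parameter
weights = soft_filter(values, threshold)
soft_sum = (values * weights).sum()  # differentiable SUM over the filter
soft_sum.backward()                  # training signal reaches the threshold
print(weights, threshold.grad)
```
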
  • Patent number: 11922315
    Abstract: Solutions for adapting machine learning (ML) models to neural networks (NNs) include receiving an ML pipeline comprising a plurality of operators; determining operator dependencies within the ML pipeline; determining recognized operators; for each of at least two recognized operators, selecting a corresponding NN module from a translation dictionary; and wiring the selected NN modules in accordance with the operator dependencies to generate a translated NN. Some examples determine a starting operator for translation, which is the earliest recognized operator having parameters. Some examples connect inputs of the translated NN to upstream operators of the ML pipeline that had not been translated. Some examples further tune the translated NN using backpropagation. Some examples determine whether an operator is trainable or non-trainable and flag related parameters accordingly for later training.
    Type: Grant
    Filed: August 26, 2019
    Date of Patent: March 5, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Matteo Interlandi, Byung-Gon Chun, Markus Weimer, Gyeongin Yu, Saeed Amizadeh
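
A minimal sketch of the dictionary-driven translation these claims describe, assuming PyTorch (the operator names, module classes, and sequential wiring are illustrative; the patent covers general operator dependency graphs):

```python
import torch

class ScalerModule(torch.nn.Module):
    # NN counterpart of a feature-scaling ML operator.
    def __init__(self, mean, scale):
        super().__init__()
        self.mean, self.scale = torch.tensor(mean), torch.tensor(scale)
    def forward(self, x):
        return (x - self.mean) / self.scale

class LinearModule(torch.nn.Module):
    # NN counterpart of a linear-model ML operator.
    def __init__(self, weights, bias):
        super().__init__()
        self.weights, self.bias = torch.tensor(weights), torch.tensor(bias)
    def forward(self, x):
        return x @ self.weights + self.bias

# Translation dictionary: recognized ML operators -> NN module builders.
TRANSLATION_DICTIONARY = {
    "Scaler": lambda op: ScalerModule(op["mean"], op["scale"]),
    "LinearModel": lambda op: LinearModule(op["weights"], op["bias"]),
}

def translate(pipeline):
    # Look each recognized operator up in the dictionary and wire the
    # selected NN modules in dependency (here: sequential) order.
    return torch.nn.Sequential(
        *(TRANSLATION_DICTIONARY[op["name"]](op) for op in pipeline))

pipeline = [
    {"name": "Scaler", "mean": [0.0, 1.0], "scale": [1.0, 2.0]},
    {"name": "LinearModel", "weights": [[0.5], [-0.25]], "bias": [0.1]},
]
print(translate(pipeline)(torch.tensor([[2.0, 3.0]])))  # tensor([[0.8500]])
```
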
  • Publication number: 20230244662
    Abstract: Example aspects include techniques for query processing over deep neural network runtimes. These techniques may include receiving a query including one or more query operators and determining a query representation based on the one or more query operators. In addition, the techniques may include determining a neural network program based on the query representation, the neural network program including one or more neural network operators for performing the query in a neural network runtime, generating a neural network data structure based on a dataset associated with the query, and executing the neural network program in the neural network runtime over the neural network data structure to generate a query result.
    Type: Application
    Filed: January 28, 2022
    Publication date: August 3, 2023
    Inventors: Matteo INTERLANDI, Konstantinos KARANASOS, Dong HE, Dalitso Hansini BANDA, Jesus CAMACHO RODRIGUEZ, Rathijit SEN, Supun Chathuranga NAKANDALA
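
Unlike 20240119050 above, this application targets plain (non-trainable) query execution on a neural network runtime. As a minimal illustration, assuming PyTorch (the relation and the query are invented for the example), a filter-plus-aggregate compiles to a boolean mask and a reduction:

```python
import torch

# Columnar "neural network data structure" built from the query's dataset.
price = torch.tensor([10.0, 25.0, 40.0, 5.0])
qty = torch.tensor([3.0, 1.0, 2.0, 10.0])

# SELECT SUM(price * qty) WHERE price > 20, as neural network operators:
mask = price > 20.0                  # filter -> boolean tensor
result = (price * qty * mask).sum()  # masked multiply, then aggregate
print(result)  # tensor(105.) = 25*1 + 40*2
```
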
  • Publication number: 20230177053
    Abstract: Methods for optimization in query plans are performed by computing systems via a query optimizer advisor. A query optimizer advisor (QO-Advisor) is configured to steer a query plan optimizer towards more efficient plan choices by providing rule hints to improve navigation of the search space for each query in formulation of its query plan. The QO-Advisor receives historical information of a distributed data processing system as an input, and then generates a set of rule hint pairs based on the historical information. The QO-Advisor provides the set of rule hint pairs to a query plan optimizer, which then optimizes a query plan of an incoming query through application of a rule hint pair in the set. The rule hint pair is applied based at least on a characteristic of the incoming query matching a portion of that pair.
    Type: Application
    Filed: March 28, 2022
    Publication date: June 8, 2023
    Inventors: Matteo INTERLANDI, Wangda ZHANG, Paul S. MINEIRO, Marc T. FRIEDMAN, Alekh JINDAL, Hiren S. PATEL, Rafah Aboul HOSN, Shi QIAO
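
A toy sketch of the steering loop in Python (the feature names and hint format are invented; the actual QO-Advisor derives its rule hint pairs from historical telemetry of a production optimizer):

```python
# Rule hint pairs mined from historical runs: a query characteristic
# paired with an optimizer rule to enable or disable.
RULE_HINTS = [
    ("has_large_join", {"rule": "BroadcastJoin", "enabled": False}),
    ("scans_partitioned_table", {"rule": "PartitionPruning", "enabled": True}),
]

def advise(query_features):
    # Return the hints whose characteristic matches the incoming query;
    # the query plan optimizer applies them while searching the plan space.
    return [hint for feature, hint in RULE_HINTS
            if query_features.get(feature, False)]

print(advise({"has_large_join": True}))
# [{'rule': 'BroadcastJoin', 'enabled': False}]
```
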
  • Publication number: 20220051104
    Abstract: Methods, systems, and computer program products are provided for generating a neural network model. An ML pipeline parser is configured to identify a set of ML operators for a previously trained ML pipeline, and map the set of ML operators to a set of neural network operators. The ML pipeline parser generates a first neural network representation using the set of neural network operators. A neural network optimizer is configured to perform an optimization on the first neural network representation to generate a second neural network representation. A tensor set provider outputs a set of tensor operations based on the second neural network representation for execution on a neural network framework. In this manner, a traditional ML pipeline can be converted into a neural network pipeline that may be executed on an appropriate framework, such as one that utilizes specialized hardware accelerators.
    Type: Application
    Filed: August 14, 2020
    Publication date: February 17, 2022
    Inventors: Matteo INTERLANDI, Markus WEIMER, Saeed AMIZADEH, Konstantinos KARANASOS, Supun Chathuranga NAKANDALA, Karla J. SAUR, Carlo Aldo CURINO, Gyeongin YU
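
The granted patent above (11922315) covers the translation step; this application adds an optimization pass between the first and second neural network representations. A toy example of such a pass, assuming PyTorch (the fusion rule is illustrative): two adjacent affine operators fold into one, halving the work at inference time:

```python
import torch

def fuse_affine(w1, b1, w2, b2):
    # Rewrite y = (x @ w1 + b1) @ w2 + b2 as a single affine operator:
    # y = x @ (w1 @ w2) + (b1 @ w2 + b2).
    return w1 @ w2, b1 @ w2 + b2

w1, b1 = torch.randn(4, 8), torch.randn(8)
w2, b2 = torch.randn(8, 2), torch.randn(2)
w, b = fuse_affine(w1, b1, w2, b2)

x = torch.randn(3, 4)
unfused = (x @ w1 + b1) @ w2 + b2
fused = x @ w + b
print(torch.allclose(unfused, fused, atol=1e-5))  # True: same output, one op
```
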
  • Publication number: 20210124739
    Abstract: The description relates to executing an inference query relative to a database management system, such as a relational database management system. In one example, a trained machine learning model can be stored within the database management system. An inference query can be received that applies the trained machine learning model on data local to the database management system. Analysis can be performed on the inference query and the trained machine learning model to generate a unified intermediate representation of the inference query and the trained model. Cross-optimization can be performed on the unified intermediate representation. Based upon the cross-optimization, a first portion of the unified intermediate representation to be executed by a database engine of the database management system can be determined, and a second portion of the unified intermediate representation to be executed by a machine learning runtime can be determined.
    Type: Application
    Filed: August 11, 2020
    Publication date: April 29, 2021
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Konstantinos KARANASOS, Matteo INTERLANDI, Fotios PSALLIDAS, Rathijit SEN, Kwanghyun PARK, Ivan POPIVANOV, Subramaniam VENKATRAMAN KRISHNAN, Markus WEIMER, Yuan YU, Raghunath RAMAKRISHNAN, Carlo Aldo CURINO, Doris Suiyi XIN, Karla Jean SAUR
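
A schematic of the split the abstract describes, in Python (the plan fields and split function are invented; real systems build the unified IR from both the SQL plan and the model graph). Cross-optimization can, for instance, learn which columns the model actually reads and push that projection into the database portion:

```python
# Hypothetical unified intermediate representation of an inference query:
# relational operators plus a model-scoring operator over their output.
unified_ir = {
    "table": "patients",
    "columns": ["id", "age", "bmi", "notes"],
    "model_inputs": ["age", "bmi"],  # discovered from the model graph
    "predicate": "age > 40",
}

def split(ir):
    # Cut the cross-optimized IR into a portion the database engine runs
    # (scan, filter, pruned projection) and a portion the ML runtime runs.
    db_portion = (f"SELECT {', '.join(ir['model_inputs'])} "
                  f"FROM {ir['table']} WHERE {ir['predicate']}")
    ml_portion = "score(model, rows)"  # executed by the ML runtime
    return db_portion, ml_portion

db_part, ml_part = split(unified_ir)
print(db_part)  # only the columns the model needs leave the database engine
```
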
  • Publication number: 20210065007
    Abstract: Solutions for adapting machine learning (ML) models to neural networks (NNs) include receiving an ML pipeline comprising a plurality of operators; determining operator dependencies within the ML pipeline; determining recognized operators; for each of at least two recognized operators, selecting a corresponding NN module from a translation dictionary; and wiring the selected NN modules in accordance with the operator dependencies to generate a translated NN. Some examples determine a starting operator for translation, which is the earliest recognized operator having parameters. Some examples connect inputs of the translated NN to upstream operators of the ML pipeline that had not been translated. Some examples further tune the translated NN using backpropagation. Some examples determine whether an operator is trainable or non-trainable and flag related parameters accordingly for later training.
    Type: Application
    Filed: August 26, 2019
    Publication date: March 4, 2021
    Inventors: Matteo INTERLANDI, Byung-Gon CHUN, Markus WEIMER, Gyeongin YU, Saeed AMIZADEH
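
This application shares its abstract with granted patent 11922315 above; the trainable/non-trainable flagging is the detail worth illustrating. In a PyTorch-style sketch (the freezing mechanism is an illustrative stand-in, not the filing's), parameters of operators flagged non-trainable are frozen before backpropagation tunes the translated network:

```python
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(4, 8),  # translated from an operator flagged non-trainable
    torch.nn.ReLU(),
    torch.nn.Linear(8, 1),  # translated from an operator flagged trainable
)

# Freeze the non-trainable module's parameters so that later
# backpropagation only tunes the operators flagged trainable.
for p in net[0].parameters():
    p.requires_grad = False

optimizer = torch.optim.SGD(
    [p for p in net.parameters() if p.requires_grad], lr=0.01)

x, y = torch.randn(16, 4), torch.randn(16, 1)
loss = torch.nn.functional.mse_loss(net(x), y)
loss.backward()
optimizer.step()  # only the flagged-trainable parameters are updated
```
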