Patents by Inventor Eric Anthony Erhardt

Eric Anthony Erhardt has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
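Short illustrative code sketches of several of the mechanisms described in these abstracts follow the listing.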

  • Patent number: 11609746
    Abstract: Methods, systems, and computer products are herein provided for lazy evaluation of input data by a machine learning (ML) framework. An ML pipeline receives input data and compiles a chain of operators into a chain of dataviews configured for lazy evaluation of the input data. Each dataview in the chain represents a computation over data as a non-materialized view of the data. The ML pipeline receives a request for column data and selects a chain of delegates comprising one or more delegates for one or more dataviews in the chain to fulfill the request. The ML pipeline processes the input data with the selected chain of delegates. The ML pipeline performs delegate chaining on a dataview. A feature value for a feature column of the dataview is determined based on the delegate chaining and provided to an ML algorithm to predict column data.
    Type: Grant
    Filed: October 23, 2019
    Date of Patent: March 21, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Gary Shon Katzenberger, Thomas William Finley, Pete Luferenko, Mohammad Zeeshan Siddiqui, Costin Eseanu, Eric Anthony Erhardt, Yael Dekel, Ivan Matantsev
  • Patent number: 10977006
    Abstract: Embodiments provide a machine learning framework that enables developers to author and deploy machine learning pipelines into their applications regardless of the programming language in which the applications are structured. The framework may provide a programming language-specific API that enables the application to call a plurality of operators provided by the framework. The framework provides any number of APIs, each for a different programming language. The pipeline generated via the application is represented as an execution graph comprising node(s), where each node represents a particular operator. When a pipeline is submitted for execution, calls to the operators are detected, and nodes corresponding to the operators are generated for the execution graph.
    Type: Grant
    Filed: October 10, 2019
    Date of Patent: April 13, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Gary Shon Katzenberger, Thomas William Finley, Petro Luferenko, Mohammad Zeeshan Siddiqui, Costin I. Eseanu, Eric Anthony Erhardt, Yael Dekel, Ivan Matantsev
  • Publication number: 20200348912
    Abstract: Embodiments provide a machine learning framework that enables developers to author and deploy machine learning pipelines into their applications regardless of the programming language in which the applications are structured. The framework may provide a programming language-specific API that enables the application to call a plurality of operators provided by the framework. The framework provides any number of APIs, each for a different programming language. The pipeline generated via the application is represented as an execution graph comprising node(s), where each node represents a particular operator. When a pipeline is submitted for execution, calls to the operators are detected, and nodes corresponding to the operators are generated for the execution graph.
    Type: Application
    Filed: October 10, 2019
    Publication date: November 5, 2020
    Inventors: Gary Shon Katzenberger, Thomas William Finley, Petro Luferenko, Mohammad Zeeshan Siddiqui, Costin I. Eseanu, Eric Anthony Erhardt, Yael Dekel, Ivan Matantsev
  • Publication number: 20200349469
    Abstract: An efficient, streaming-based, lazily-evaluated machine learning (ML) framework is provided. An ML pipeline of operators produces and consumes a chain of dataviews representing a computation over data. Non-materialized (e.g., virtual) views of data in dataviews permit efficient, lazy evaluation of data on demand regardless of size (e.g., in excess of main memory). Data may be materialized by DataView cursors (e.g., movable windows over rows of an input dataset or DataView). Computation and data movement may be limited to rows for active columns without processing or materializing unnecessary data. A chain of dataviews may comprise a chain of delegates that reference a chain of functions. Assembled pipelines of schematized compositions of operators may be validated and optimized with efficient execution plans. A compiled chain of functions may be optimized and executed in a single call. DataView-based ML pipelines may be developed, trained, evaluated, and integrated into applications.
    Type: Application
    Filed: October 23, 2019
    Publication date: November 5, 2020
    Inventors: Gary Shon Katzenberger, Thomas William Finley, Pete Luferenko, Mohammad Zeeshan Siddiqui, Costin Eseanu, Eric Anthony Erhardt, Yael Dekel, Ivan Matantsev
  • Publication number: 20200348990
    Abstract: Sparse data handling and/or buffer sharing are implemented. Data may be buffered in reusable buffer arrays. Data may comprise fixed or variable length vectors, which may be represented as sparse or dense vectors in a values array and indices array. Data may be materialized from a dataview comprising a non-materialized view of data in a machine-learning pipeline by cursoring over rows of the dataview and calling delegate functions to compute data for rows in an active column. A buffer and/or its set of arrays storing a first vector may be reused for a second and additional vectors, for example, when the length of buffer arrays is equal to or greater than the length of the second and additional vectors, which may be selectively stored as sparse or dense vectors to fit the array set. Shared buffers may be passed as references between delegate functions for reuse.
    Type: Application
    Filed: October 31, 2019
    Publication date: November 5, 2020
    Inventors: Gary Shon Katzenberger, Petro Luferenko, Costin I. Eseanu, Eric Anthony Erhardt, Ivan Matantsev
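The sketches below are rough illustrations of some of the mechanisms described in the abstracts above; they are not the claimed implementations, and every class, function, and parameter name in them is hypothetical.

First, a minimal Python sketch of the lazy-evaluation idea in patent 11609746: each dataview is a non-materialized view whose column values come from delegates, chaining an operator merely wraps the upstream delegate, and data is computed row by row only when a cursor asks for an active column.

```python
# Illustrative sketch of lazy dataviews and delegate chaining.
# DataView, transform, and cursor are made-up names, not the patented API.

class DataView:
    def __init__(self, column_getters):
        # Map column name -> delegate taking a row index and returning a value.
        self._getters = column_getters

    def transform(self, out_column, in_column, fn):
        # Build a new dataview whose delegate for `out_column` chains onto the
        # upstream delegate for `in_column`; nothing is materialized here.
        upstream = self._getters[in_column]
        getters = dict(self._getters)
        getters[out_column] = lambda row: fn(upstream(row))
        return DataView(getters)

    def cursor(self, active_columns, n_rows):
        # Materialize values only for the requested (active) columns, row by row.
        for row in range(n_rows):
            yield {col: self._getters[col](row) for col in active_columns}


# Source dataview backed by an in-memory list; real inputs could stream from disk.
raw = [1.0, 2.0, 3.0]
source = DataView({"value": lambda row: raw[row]})

# Chain two operators: normalize, then square. Each step only adds a delegate.
pipeline = (source
            .transform("normalized", "value", lambda v: v / max(raw))
            .transform("feature", "normalized", lambda v: v * v))

# Only "feature" is active, so only its delegate chain runs for each row.
for r in pipeline.cursor(["feature"], n_rows=len(raw)):
    print(r)
```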
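Next, a minimal sketch of the execution-graph idea in patent 10977006 and publication 20200348912: operator calls made through a language-specific API are detected and recorded as graph nodes, and it is this graph, not the host-language code, that is submitted for execution. The operator names used here are invented for illustration.

```python
# Illustrative sketch of recording operator calls as an execution graph.
# Node, PipelineBuilder, and the operator names are hypothetical.

class Node:
    def __init__(self, operator, params, inputs):
        self.operator = operator   # operator name, e.g. "load_text"
        self.params = params       # operator parameters
        self.inputs = inputs       # upstream Node objects

class PipelineBuilder:
    def __init__(self):
        self._nodes = []

    def call(self, operator, inputs=(), **params):
        # Detect the operator call and append a corresponding graph node.
        node = Node(operator, params, list(inputs))
        self._nodes.append(node)
        return node

    def graph(self):
        # Nodes are already in dependency order because inputs are built first.
        return self._nodes

# The application (in whatever host language exposes such an API) composes
# operators; only the resulting graph is handed to the framework runtime.
builder = PipelineBuilder()
data = builder.call("load_text", path="train.csv")
featurized = builder.call("featurize_text", inputs=[data], column="Review")
model = builder.call("train_binary_classifier", inputs=[featurized], label="Sentiment")

for node in builder.graph():
    print(node.operator, node.params, [n.operator for n in node.inputs])
```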
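A further sketch of the compiled delegate chain described in publication 20200349469, where a chain of per-operator functions can be composed and executed in a single call per row rather than walking the chain step by step. The compose helper and the numeric stand-in delegates are assumptions made to keep the example self-contained.

```python
# Illustrative sketch: fold a chain of delegates into one callable.
from functools import reduce

def compose(delegates):
    # Compose single-argument delegates left to right into one function.
    return reduce(lambda f, g: (lambda x: g(f(x))), delegates, lambda x: x)

# Delegates that three chained operators might contribute (numeric stand-ins).
chain = [lambda v: v + 1.0, lambda v: v * 2.0, lambda v: v ** 2]
compiled = compose(chain)           # one call per row instead of three

rows = [0.0, 1.0, 2.0]
print([compiled(v) for v in rows])  # [4.0, 16.0, 36.0]
```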
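Finally, a minimal sketch of the buffer-sharing idea in publication 20200348990: a vector buffer keeps a values array and an indices array, later vectors reuse those arrays when they are long enough, and each vector is stored densely or sparsely to fit. The VectorBuffer class and its methods are illustrative, not the claimed structures.

```python
# Illustrative sketch of reusing a values/indices array pair across vectors.

class VectorBuffer:
    def __init__(self):
        self.values = []    # reusable values array
        self.indices = []   # reusable indices array (used only for sparse vectors)
        self.length = 0     # logical length of the stored vector
        self.is_dense = True

    def set_dense(self, values):
        # Reuse the existing values array when it is long enough; otherwise grow it.
        if len(self.values) < len(values):
            self.values = [0.0] * len(values)
        for i, v in enumerate(values):
            self.values[i] = v
        self.length = len(values)
        self.is_dense = True

    def set_sparse(self, length, index_value_pairs):
        # Store only the explicit entries, reusing both arrays when they fit.
        count = len(index_value_pairs)
        if len(self.values) < count:
            self.values = [0.0] * count
        if len(self.indices) < count:
            self.indices = [0] * count
        for i, (idx, v) in enumerate(index_value_pairs):
            self.indices[i] = idx
            self.values[i] = v
        self.length = length
        self.is_dense = False


# One buffer reused across rows, as a shared buffer passed between delegate
# functions might be.
buf = VectorBuffer()
buf.set_dense([0.5, 1.5, 2.5])                   # row 1: dense vector of length 3
buf.set_sparse(1000, [(3, 1.0), (42, 7.0)])      # row 2: sparse vector, arrays reused
print(buf.is_dense, buf.length, buf.indices[:2], buf.values[:2])
```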