Patents by Inventor Michael Merideth

Michael Merideth has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO). The three filings below belong to the same family and share a common abstract; an illustrative sketch of the approach they describe follows the listing.

  • Publication number: 20230053164
    Abstract: Edge server compute capacity demand in an overlay network is predicted and used to pre-position compute capacity in advance of application-specific demands. Preferably, machine learning is used to proactively predict anticipated compute capacity needs for an edge server region (e.g., a set of co-located edge servers). In advance, compute capacity (application instances) are made available in-region, and data associated with an application instance is migrated to be close to the instance. The approach facilitates compute-at-the-edge services, which require data (state) to be close to a pre-positioned latency-sensitive application instance. Overlay network mapping (globally) may be used for more long-term positioning, with short-duration scheduling then being done in-region as needed. Compute instances and associated state are migrated intelligently based on predicted (e.g., machine-learned) demand, and with full data consistency enforced.
    Type: Application
    Filed: November 1, 2022
    Publication date: February 16, 2023
    Applicant: Akamai Technologies, Inc.
    Inventors: Vinay Kanitkar, Robert B. Bird, Aniruddha Bohra, Michael Merideth
  • Patent number: 11490307
    Abstract: Edge server compute capacity demand in an overlay network is predicted and used to pre-position compute capacity in advance of application-specific demands. Preferably, machine learning is used to proactively predict anticipated compute capacity needs for an edge server region (e.g., a set of co-located edge servers). In advance, compute capacity (application instances) are made available in-region, and data associated with an application instance is migrated to be close to the instance. The approach facilitates compute-at-the-edge services, which require data (state) to be close to a pre-positioned latency-sensitive application instance. Overlay network mapping (globally) may be used for more long-term positioning, with short-duration scheduling then being done in-region as needed. Compute instances and associated state are migrated intelligently based on predicted (e.g., machine-learned) demand, and with full data consistency enforced.
    Type: Grant
    Filed: June 13, 2019
    Date of Patent: November 1, 2022
    Assignee: Akamai Technologies, Inc.
    Inventors: Vinay Kanitkar, Robert B. Bird, Aniruddha Bohra, Michael Merideth
  • Publication number: 20200196210
    Abstract: Edge server compute capacity demand in an overlay network is predicted and used to pre-position compute capacity in advance of application-specific demands. Preferably, machine learning is used to proactively predict anticipated compute capacity needs for an edge server region (e.g., a set of co-located edge servers). In advance, compute capacity (application instances) are made available in-region, and data associated with an application instance is migrated to be close to the instance. The approach facilitates compute-at-the-edge services, which require data (state) to be close to a pre-positioned latency-sensitive application instance. Overlay network mapping (globally) may be used for more long-term positioning, with short-duration scheduling then being done in-region as needed. Compute instances and associated state are migrated intelligently based on predicted (e.g., machine-learned) demand, and with full data consistency enforced.
    Type: Application
    Filed: June 13, 2019
    Publication date: June 18, 2020
    Applicant: Akamai Technologies, Inc.
    Inventors: Vinay Kanitkar, Robert B. Bird, Aniruddha Bohra, Michael Merideth
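
The shared abstract above describes predicting per-region edge compute demand (e.g., with machine learning) and pre-positioning application instances and their associated state before application-specific demand arrives. As a rough, non-authoritative illustration of that idea only, the Python sketch below pairs a toy demand predictor (a moving average standing in for the machine-learned model) with a simple in-region pre-positioning step. All names here (DemandPredictor, Region, preposition) are hypothetical and are not drawn from the patent documents.

    from dataclasses import dataclass, field
    from statistics import mean
    from typing import Dict, List

    # Illustrative sketch only: predict per-region compute demand and
    # pre-position application instances (with their state) ahead of requests.

    @dataclass
    class Region:
        """A set of co-located edge servers (an edge server region)."""
        name: str
        instances: int = 0                              # application instances provisioned
        state_keys: set = field(default_factory=set)    # data/state staged in-region

    class DemandPredictor:
        """Toy stand-in for the machine-learned demand model: a moving
        average over recent per-region request counts."""

        def __init__(self, window: int = 3):
            self.window = window
            self.history: Dict[str, List[int]] = {}

        def observe(self, region: str, requests: int) -> None:
            self.history.setdefault(region, []).append(requests)

        def predict(self, region: str) -> float:
            recent = self.history.get(region, [])[-self.window:]
            return mean(recent) if recent else 0.0

    def preposition(regions: Dict[str, Region], predictor: DemandPredictor,
                    requests_per_instance: int = 100) -> None:
        """Scale each region's instance count to predicted demand and stage
        the application state next to the instances (migration is simulated)."""
        for name, region in regions.items():
            predicted = predictor.predict(name)
            needed = max(1, round(predicted / requests_per_instance))
            if needed != region.instances:
                print(f"{name}: scaling {region.instances} -> {needed} instances "
                      f"(predicted demand ~{predicted:.0f} req/interval)")
                region.instances = needed
            # Migrate/replicate the state the instances will need so it is local.
            region.state_keys.add("app-state")

    if __name__ == "__main__":
        predictor = DemandPredictor()
        regions = {"us-east": Region("us-east"), "eu-west": Region("eu-west")}

        # Feed observed traffic, then pre-position capacity for the next interval.
        for us_east, eu_west in ([120, 300], [80, 150], [90, 600]):
            predictor.observe("us-east", us_east)
            predictor.observe("eu-west", eu_west)

        preposition(regions, predictor)

This sketch omits the parts the abstract treats as essential in practice, such as global overlay-network mapping for long-term positioning, short-duration in-region scheduling, and enforcing full data consistency during state migration; it is meant only to show where a demand predictor and a pre-positioning step would fit together.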