Patents by Inventor Robert B. Bird

Robert B. Bird has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230053164
    Abstract: Edge server compute capacity demand in an overlay network is predicted and used to pre-position compute capacity in advance of application-specific demands. Preferably, machine learning is used to proactively predict anticipated compute capacity needs for an edge server region (e.g., a set of co-located edge servers). In advance, compute capacity (application instances) is made available in-region, and data associated with an application instance is migrated to be close to the instance. The approach facilitates compute-at-the-edge services, which require data (state) to be close to a pre-positioned latency-sensitive application instance. Overlay network mapping (globally) may be used for more long-term positioning, with short-duration scheduling then being done in-region as needed. Compute instances and associated state are migrated intelligently based on predicted (e.g., machine-learned) demand, and with full data consistency enforced.
    Type: Application
    Filed: November 1, 2022
    Publication date: February 16, 2023
    Applicant: Akamai Technologies, Inc.
    Inventors: Vinay Kanitkar, Robert B. Bird, Aniruddha Bohra, Michael Merideth
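The abstract above describes predicting per-region demand and pre-positioning compute capacity ahead of it. As a rough illustration only (not the patented implementation), the sketch below stands in for the ML predictor with a simple sliding-window average and decides how many instances to pre-position in a region; all class and parameter names here are hypothetical.

```python
from collections import deque

class RegionCapacityPlanner:
    """Illustrative sketch: predict a region's demand from recent
    samples and pre-position instances before demand arrives."""

    def __init__(self, window=5, headroom=1.2):
        self.history = deque(maxlen=window)  # recent demand samples
        self.headroom = headroom             # over-provisioning factor

    def observe(self, demand):
        self.history.append(demand)

    def predicted_demand(self):
        # Trivial stand-in for the machine-learned predictor
        # described in the abstract.
        if not self.history:
            return 0.0
        return sum(self.history) / len(self.history)

    def instances_to_preposition(self, current_instances, per_instance_capacity):
        # Number of additional application instances to make
        # available in-region ahead of the predicted demand.
        needed = self.predicted_demand() * self.headroom / per_instance_capacity
        return max(0, round(needed) - current_instances)
```

In a real system the predictor would be a trained model and the decision would also drive migration of each instance's associated state into the region, with consistency enforced, as the abstract notes.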
  • Publication number: 20220377079
    Abstract: A distributed computing system provides a distributed data store for network enabled devices at the edge. The distributed database is partitioned such that each node in the system has its own partition and some number of followers that replicate the data in the partition. The data in the partition is typically used in providing services to network enabled devices from the edge. The set of data for a particular network enabled device is owned by the node to which the network enabled device connects. Ownership of the data (and the data itself) may move around the distributed computing system to different nodes, e.g., for load balancing, fault-resilience, and/or due to device movement. Security/health checks are enforced at the edge as part of a process of transferring data ownership, thereby providing a mechanism to mitigate compromised or malfunctioning network enabled devices.
    Type: Application
    Filed: May 18, 2021
    Publication date: November 24, 2022
    Inventors: Mark M. Ingerman, Robert B. Bird
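This abstract describes per-device data ownership that follows the device across edge nodes, with a security/health check gating each ownership transfer. The sketch below is a minimal, hypothetical model of that flow (it is not the patented system, and the names are invented for illustration): each node holds the partitions for its connected devices, and a failed health check blocks the transfer.

```python
class EdgeNode:
    """Illustrative sketch: a node owns the data partition for each
    device connected to it; ownership (and the data) moves to another
    node only after a health check at the edge."""

    def __init__(self, name):
        self.name = name
        self.partitions = {}  # device_id -> that device's data

    def connect(self, device_id, state=None):
        self.partitions[device_id] = state if state is not None else {}

    def transfer_ownership(self, device_id, new_owner, health_check):
        # Enforce the security/health check before ownership moves,
        # mitigating compromised or malfunctioning devices.
        if not health_check(device_id):
            raise PermissionError(f"device {device_id} failed health check")
        state = self.partitions.pop(device_id)
        new_owner.connect(device_id, state)
```

The same hook could drive transfers for load balancing or fault resilience, as the abstract mentions; only device-movement-triggered transfer is shown here.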
  • Patent number: 11490307
    Abstract: Edge server compute capacity demand in an overlay network is predicted and used to pre-position compute capacity in advance of application-specific demands. Preferably, machine learning is used to proactively predict anticipated compute capacity needs for an edge server region (e.g., a set of co-located edge servers). In advance, compute capacity (application instances) is made available in-region, and data associated with an application instance is migrated to be close to the instance. The approach facilitates compute-at-the-edge services, which require data (state) to be close to a pre-positioned latency-sensitive application instance. Overlay network mapping (globally) may be used for more long-term positioning, with short-duration scheduling then being done in-region as needed. Compute instances and associated state are migrated intelligently based on predicted (e.g., machine-learned) demand, and with full data consistency enforced.
    Type: Grant
    Filed: June 13, 2019
    Date of Patent: November 1, 2022
    Assignee: Akamai Technologies, Inc.
    Inventors: Vinay Kanitkar, Robert B. Bird, Aniruddha Bohra, Michael Merideth
  • Publication number: 20200196210
    Abstract: Edge server compute capacity demand in an overlay network is predicted and used to pre-position compute capacity in advance of application-specific demands. Preferably, machine learning is used to proactively predict anticipated compute capacity needs for an edge server region (e.g., a set of co-located edge servers). In advance, compute capacity (application instances) is made available in-region, and data associated with an application instance is migrated to be close to the instance. The approach facilitates compute-at-the-edge services, which require data (state) to be close to a pre-positioned latency-sensitive application instance. Overlay network mapping (globally) may be used for more long-term positioning, with short-duration scheduling then being done in-region as needed. Compute instances and associated state are migrated intelligently based on predicted (e.g., machine-learned) demand, and with full data consistency enforced.
    Type: Application
    Filed: June 13, 2019
    Publication date: June 18, 2020
    Applicant: Akamai Technologies, Inc.
    Inventors: Vinay Kanitkar, Robert B. Bird, Aniruddha Bohra, Michael Merideth
  • Publication number: 20200175419
    Abstract: Individual nodes (e.g., edge machines) in an overlay network each build local machine learning (ML) models associated with a particular behavior of interest. Through a communication mechanism, nodes exchange some portion of their ML models between or among each other. The portion of the local model that is exchanged with one or more other nodes encodes or encapsulates relevant knowledge (learned at the source node) for the particular behavior of interest; in this manner, relevant transfer learning is enabled such that individual node models become smarter. Sets of machines that collaborate converge their models toward a solution that is then used to facilitate another overlay network function or optimization. The local knowledge exchange among the nodes creates an emergent behavioral profile used to control the edge machine behavior. Example functions managed with this ML front-end include predictive pre-fetching, anomaly detection, image management, forecasting to allocate resources, and others.
    Type: Application
    Filed: June 11, 2019
    Publication date: June 4, 2020
    Applicant: Akamai Technologies, Inc.
    Inventors: Robert B. Bird, Jan Galkowski
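The final abstract describes edge nodes that each train a local model and exchange portions of those models so that peer knowledge improves every node. The sketch below reduces that exchange to weighted averaging of model weights, purely to illustrate the shape of the idea; it is not the patented mechanism, and the class and method names are hypothetical.

```python
class EdgeLearner:
    """Illustrative sketch: each node holds a local model (here just a
    weight vector) and merges in the portion a peer shares, a simple
    stand-in for the knowledge exchange the abstract describes."""

    def __init__(self, weights):
        self.weights = list(weights)

    def share(self):
        # The portion of the local model exchanged with peers.
        return list(self.weights)

    def merge(self, peer_weights, alpha=0.5):
        # Blend peer knowledge into the local model; transfer learning
        # is reduced to weighted averaging for illustration.
        self.weights = [(1 - alpha) * w + alpha * p
                        for w, p in zip(self.weights, peer_weights)]
```

Repeated rounds of share/merge across a set of collaborating nodes would converge their models toward a common solution, which the abstract says is then used to drive overlay functions such as predictive pre-fetching or anomaly detection.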