Patents by Inventor William E. Weihl

William E. Weihl has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10270817
    Abstract: A server in a distributed environment includes a process that manages incoming client requests and selectively forwards service requests to other servers in the network. The server includes storage in which at least one forwarding queue is established. The server includes code for aggregating service requests in the forwarding queue and then selectively releasing the requests, or some of them, to another server. The queuing mechanism preferably is managed by metadata, which, for example, controls how many service requests may be placed in the queue, how long a given service request may remain in the queue, what action to take in response to a client request if the forwarding queue's capacity is reached, etc. In one embodiment, the server generates an estimate of a current load on an origin server (to which it is sending forwarding requests) and instantiates the forward request queuing when that current load is reached.
    Type: Grant
    Filed: September 6, 2016
    Date of Patent: April 23, 2019
    Assignee: Akamai Technologies, Inc.
    Inventors: William E. Weihl, Gene Shekhtman
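
The forwarding-queue behavior described in the abstract above can be illustrated with a short sketch. The following Python is not Akamai's implementation; it is a minimal model, with hypothetical names (ForwardingQueue, maybe_queue_or_forward, load_threshold), of metadata-controlled limits on queue size and request age, a configurable action when the queue is full, and activation only once the estimated origin load reaches a threshold.

```python
import time
from collections import deque

class ForwardingQueue:
    """Illustrative sketch of a metadata-managed forwarding queue (names are hypothetical)."""

    def __init__(self, max_entries=100, max_age_seconds=5.0, on_full="reject"):
        # Metadata-style limits: queue capacity, per-request lifetime, overflow action.
        self.max_entries = max_entries
        self.max_age_seconds = max_age_seconds
        self.on_full = on_full          # e.g. "reject" or "drop_oldest"
        self._queue = deque()

    def enqueue(self, request):
        """Queue a service request for later forwarding; returns False if rejected."""
        self._expire_old()
        if len(self._queue) >= self.max_entries:
            if self.on_full == "drop_oldest":
                self._queue.popleft()
            else:
                return False            # stand-in for the configured "queue full" action
        self._queue.append((time.monotonic(), request))
        return True

    def _expire_old(self):
        now = time.monotonic()
        while self._queue and now - self._queue[0][0] > self.max_age_seconds:
            self._queue.popleft()

    def release(self, send, batch_size=10):
        """Selectively release up to batch_size queued requests to another server."""
        self._expire_old()
        for _ in range(min(batch_size, len(self._queue))):
            _, request = self._queue.popleft()
            send(request)

def maybe_queue_or_forward(request, queue, estimated_origin_load, load_threshold, send):
    # Queue only when the estimated origin load reaches the configured threshold;
    # otherwise forward the request immediately.
    if estimated_origin_load >= load_threshold:
        return queue.enqueue(request)
    send(request)
    return True
```

The enqueue/release split in this sketch mirrors the abstract's "aggregate, then selectively release" flow; dropping or rejecting an overflowing request stands in for whatever action the metadata actually configures.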
  • Publication number: 20160381088
    Abstract: A server in a distributed environment includes a process that manages incoming client requests and selectively forwards service requests to other servers in the network. The server includes storage in which at least one forwarding queue is established. The server includes code for aggregating service requests in the forwarding queue and then selectively releasing the requests, or some of them, to another server. The queuing mechanism preferably is managed by metadata, which, for example, controls how many service requests may be placed in the queue, how long a given service request may remain in the queue, what action to take in response to a client request if the forwarding queue's capacity is reached, etc. In one embodiment, the server generates an estimate of a current load on an origin server (to which it is sending forwarding requests) and instantiates the forward request queuing when that current load is reached.
    Type: Application
    Filed: September 6, 2016
    Publication date: December 29, 2016
    Inventors: William E. Weihl, Gene Shekhtman
  • Patent number: 9438482
    Abstract: A server in a distributed environment includes a process that manages incoming client requests and selectively forwards service requests to other servers in the network. The server includes storage in which at least one forwarding queue is established. The server includes code for aggregating service requests in the forwarding queue and then selectively releasing the requests, or some of them, to another server. The queuing mechanism preferably is managed by metadata, which, for example, controls how many service requests may be placed in the queue, how long a given service request may remain in the queue, what action to take in response to a client request if the forwarding queue's capacity is reached, etc. In one embodiment, the server generates an estimate of a current load on an origin server (to which it is sending forwarding requests) and instantiates the forward request queuing when that current load is reached.
    Type: Grant
    Filed: April 15, 2013
    Date of Patent: September 6, 2016
    Assignee: Akamai Technologies, Inc.
    Inventors: William E. Weihl, Gene Shekhtman
  • Patent number: 9009267
    Abstract: A content file purge mechanism for a content delivery network (CDN) is described. A Web-enabled portal is used by CDN customers to enter purge requests securely. A purge request identifies one or more content files to be purged. The purge request is pushed over a secure link from the portal to a purge server, which validates purge requests from multiple CDN customers and batches the requests into an aggregate purge request. The aggregate purge request is pushed from the purge server to a set of staging servers. Periodically, CDN content servers poll the staging servers to determine whether an aggregate purge request exists. If so, the CDN content servers obtain the aggregate purge request and process the request to remove the identified content files from their local storage.
    Type: Grant
    Filed: September 10, 2012
    Date of Patent: April 14, 2015
    Assignee: Akamai Technologies, Inc.
    Inventors: Alexander Sherman, Philip A. Lisiecki, Joel M. Wein, Don A. Dailey, John A. Dilley, William E. Weihl
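
To make the purge pipeline in the abstract above concrete, here is a minimal Python sketch, with hypothetical names and data shapes, of the flow: validate per-customer purge requests, batch them into an aggregate request, publish that aggregate to a staging server, and let content servers poll and remove the named files from local storage.

```python
def validate(purge_request, known_customers):
    """Accept only purge requests from known customers that name at least one file."""
    return purge_request["customer"] in known_customers and bool(purge_request["files"])

def aggregate(purge_requests, known_customers):
    """Batch validated requests from multiple customers into one aggregate purge request."""
    aggregate_request = {"files": []}
    for request in purge_requests:
        if validate(request, known_customers):
            aggregate_request["files"].extend(request["files"])
    return aggregate_request

class StagingServer:
    """Holds the current aggregate purge request for content servers to fetch."""
    def __init__(self):
        self._pending = None

    def publish(self, aggregate_request):
        self._pending = aggregate_request

    def poll(self):
        # Content servers call this periodically to see whether a purge is pending.
        return self._pending

def content_server_poll(staging, local_cache):
    """One polling cycle on a content server: fetch the aggregate request and purge locally."""
    aggregate_request = staging.poll()
    if aggregate_request:
        for path in aggregate_request["files"]:
            local_cache.pop(path, None)   # remove purged files from local storage
```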
  • Patent number: 8972461
    Abstract: Content is dynamically assembled at the edge of the Internet, preferably on content delivery network (CDN) edge servers. A content provider leverages an “edge side include” (ESI) markup language that is used to define Web page fragments for dynamic assembly at the edge. Dynamic assembly improves site performance by caching objects that comprise dynamically-generated pages at the edge of the Internet, close to the end user. Instead of being assembled by an application/web server in a centralized data center, the application/web server sends a page template and content fragments to a CDN edge server where the page is assembled. Each content fragment can have its own cacheability profile to manage the “freshness” of the content. Once a user requests a page, the edge server examines its cache for the included fragments and assembles the page on-the-fly.
    Type: Grant
    Filed: October 28, 2013
    Date of Patent: March 3, 2015
    Assignee: Akamai Technologies, Inc.
    Inventors: Daniel M. Lewin, Andrew T. Davis, Samuel D. Gendler, Marty Kagan, Jay G. Parikh, William E. Weihl
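
The edge-side assembly idea recurs in several of the entries above and below. The sketch below is a toy Python model, not the ESI specification: a page template carries include markers, each fragment has its own TTL standing in for a cacheability profile, and the edge assembles the page at request time, refetching only fragments that are no longer fresh.

```python
import re
import time

# Marker format and cache shape are assumptions for illustration only.
INCLUDE = re.compile(r'<esi:include src="([^"]+)"\s*/>')

class FragmentCache:
    def __init__(self):
        self._entries = {}   # src -> (body, expires_at)

    def get(self, src, fetch_from_origin, ttl_seconds):
        entry = self._entries.get(src)
        if entry and entry[1] > time.monotonic():
            return entry[0]                      # fragment is still "fresh"
        body = fetch_from_origin(src)            # otherwise refetch from the origin
        self._entries[src] = (body, time.monotonic() + ttl_seconds)
        return body

def assemble(template, cache, fetch_from_origin, ttl_for):
    """Replace each include marker in the page template with its (possibly cached) fragment."""
    def substitute(match):
        src = match.group(1)
        return cache.get(src, fetch_from_origin, ttl_for(src))
    return INCLUDE.sub(substitute, template)
```

A template such as `<html>...<esi:include src="/frag/header"/>...</html>` would then be assembled on the edge server per request, with each fragment's freshness governed by its own TTL.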
  • Publication number: 20140052811
    Abstract: Content is dynamically assembled at the edge of the Internet, preferably on content delivery network (CDN) edge servers. A content provider leverages an “edge side include” (ESI) markup language that is used to define Web page fragments for dynamic assembly at the edge. Dynamic assembly improves site performance by caching objects that comprise dynamically-generated pages at the edge of the Internet, close to the end user. Instead of being assembled by an application/web server in a centralized data center, the application/web server sends a page template and content fragments to a CDN edge server where the page is assembled. Each content fragment can have its own cacheability profile to manage the “freshness” of the content. Once a user requests a page, the edge server examines its cache for the included fragments and assembles the page on-the-fly.
    Type: Application
    Filed: October 28, 2013
    Publication date: February 20, 2014
    Applicant: Akamai Technologies, Inc.
    Inventors: Daniel M. Lewin, Andrew T. Davis, Samuel D. Gendler, Marty Kagan, Jay G. Parikh, William E. Weihl
  • Patent number: 8572132
    Abstract: Content is dynamically assembled at the edge of the Internet, preferably on content delivery network (CDN) edge servers. A content provider leverages an “edge side include” (ESI) markup language that is used to define Web page fragments for dynamic assembly at the edge. Dynamic assembly improves site performance by caching objects that comprise dynamically-generated pages at the edge of the Internet, close to the end user. Instead of being assembled by an application/web server in a centralized data center, the application/web server sends a page template and content fragments to a CDN edge server where the page is assembled. Each content fragment can have its own cacheability profile to manage the “freshness” of the content. Once a user requests a page, the edge server examines its cache for the included fragments and assembles the page on-the-fly.
    Type: Grant
    Filed: April 23, 2012
    Date of Patent: October 29, 2013
    Assignee: Akamai Technologies, Inc.
    Inventors: Andrew T. Davis, Samuel D. Gendler, Marty Kagan, Jay G. Parikh, William E. Weihl, Anne E. Lewin
  • Publication number: 20130232249
    Abstract: A server in a distributed environment includes a process that manages incoming client requests and selectively forwards service requests to other servers in the network. The server includes storage in which at least one forwarding queue is established. The server includes code for aggregating service requests in the forwarding queue and then selectively releasing the requests, or some of them, to another server. The queuing mechanism preferably is managed by metadata, which, for example, controls how many service requests may be placed in the queue, how long a given service request may remain in the queue, what action to take in response to a client request if the forwarding queue's capacity is reached, etc. In one embodiment, the server generates an estimate of a current load on an origin server (to which it is sending forwarding requests) and instantiates the forward request queuing when that current load is reached.
    Type: Application
    Filed: April 15, 2013
    Publication date: September 5, 2013
    Applicant: Akamai Technologies, Inc.
    Inventors: William E. Weihl, Gene Shekhtman
  • Patent number: 8438291
    Abstract: Business applications running on a content delivery network (CDN) having a distributed application framework can create, access and modify state for each client. Over time, a single client may desire to access a given application on different CDN edge servers within the same region and even across different regions. Each time, the application may need to access the latest “state” of the client even if the state was last modified by an application on a different server. A difficulty arises when a process or a machine that last modified the state dies or is temporarily or permanently unavailable. The present invention provides techniques for migrating session state data across CDN servers in a manner transparent to the user. A distributed application thus can access a latest “state” of a client even if the state was last modified by an application instance executing on a different CDN server, including a nearby (in-region) or a remote (out-of-region) server.
    Type: Grant
    Filed: July 26, 2010
    Date of Patent: May 7, 2013
    Assignee: Akamai Technologies, Inc.
    Inventors: Andrew T. Davis, Jay G. Parikh, Srikanth Thirumalai, William E. Weihl, Mark Tsimelzon
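
A rough Python sketch of the session-state migration idea follows. The replica layout and version-based merge are assumptions for illustration, not the patented mechanism: state writes go to an in-region and a remote (out-of-region) replica, and a reader takes the newest version it can find, so another edge server can recover the latest state even when the server that last wrote it is unavailable.

```python
class StateStore:
    """A replica holding the latest-known version of each client's session state."""
    def __init__(self):
        self._state = {}     # session_id -> (version, data)

    def put(self, session_id, version, data):
        current = self._state.get(session_id)
        if current is None or version > current[0]:
            self._state[session_id] = (version, data)

    def get(self, session_id):
        return self._state.get(session_id)

def save_state(session_id, version, data, in_region_replica, remote_replica):
    # Write the new state to both an in-region and an out-of-region replica.
    for replica in (in_region_replica, remote_replica):
        replica.put(session_id, version, data)

def load_state(session_id, replicas):
    """Return the newest available copy of the session state across all reachable replicas."""
    candidates = [replica.get(session_id) for replica in replicas if replica.get(session_id)]
    return max(candidates, default=None, key=lambda entry: entry[0])
```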
  • Patent number: 8423662
    Abstract: An edge server in a distributed processing environment includes at least one process that manages incoming client requests and selectively forwards given service requests to other servers in the distributed network. According to the invention, the edge server includes storage (e.g., disk and/or memory) in which at least one forwarding queue is established. The server includes code for aggregating service requests in the forwarding queue and then selectively releasing the service requests, or some of them, to another server. The forward request queuing mechanism preferably is managed by metadata, which, for example, controls how many service requests may be placed in the queue, how long a given service request may remain in the queue, what action to take in response to a client request if the forwarding queue's capacity is reached, and the like.
    Type: Grant
    Filed: April 28, 2004
    Date of Patent: April 16, 2013
    Assignee: Akamai Technologies, Inc.
    Inventors: William E. Weihl, Gene Shekhtman
  • Patent number: 8392912
    Abstract: An application deployment model for enterprise applications to enable applications to be deployed to and executed from a globally distributed computing platform, such as an Internet content delivery network (CDN). According to the invention, application developers separate their Web application into two layers: a highly distributed edge layer and a centralized origin layer. In a representative embodiment, the edge layer supports a servlet container that executes a Web tier, typically the presentation layer of a given Java-based application. Where necessary, the edge layer communicates with code running on an origin server to respond to a given request. In an alternative embodiment, the edge layer supports a more fully-provisioned application server that executes both Web tier (e.g., presentation) and Enterprise tier application (e.g., business logic) components.
    Type: Grant
    Filed: October 23, 2006
    Date of Patent: March 5, 2013
    Assignee: Akamai Technologies, Inc.
    Inventors: Andrew Thomas Davis, Jay Parikh, Srinivasan Pichai, Eddie Ruvinsky, Daniel Stodolsky, Mark Tsimelzon, William E. Weihl
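
The patent describes a Java servlet container at the edge executing the Web (presentation) tier and calling back to origin-hosted business logic. The sketch below is a much simplified Python analogue of that split, with a placeholder origin URL and hypothetical handler names; it only illustrates where the boundary sits, not the actual application server.

```python
import json
import urllib.request

ORIGIN_URL = "https://origin.example.com/api"   # placeholder origin endpoint

def call_origin(path):
    """Fetch business-tier results from the centralized origin layer."""
    with urllib.request.urlopen(ORIGIN_URL + path) as response:
        return json.load(response)

def edge_handler(request_path):
    """Presentation tier running at the edge: render the page close to the end user."""
    data = call_origin("/catalog" + request_path)   # business logic stays at the origin
    items = "".join(f"<li>{item['name']}</li>" for item in data.get("items", []))
    return f"<html><body><ul>{items}</ul></body></html>"
```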
  • Publication number: 20130007282
    Abstract: A method and system of load balancing application server resources operating in a distributed set of servers is described. In a representative embodiment, the set of servers comprises a region of a content delivery network. Each server in the set typically includes a server manager process and an application server on which edge-enabled applications or application components are executed. As service requests are directed to servers in the region, the application servers manage the requests in a load-balanced manner, and without any requirement that a particular application server be spawned on-demand.
    Type: Application
    Filed: September 10, 2012
    Publication date: January 3, 2013
    Applicant: Akamai Technologies, Inc.
    Inventors: Andrew T. Davis, Nate Kushman, Jay G. Parikh, Srinivasan Pichai, Daniel Stodolsky, Ashis Tarafdar, William E. Weihl
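
The load-balancing arrangement described in this abstract, and in the related entries below, can be sketched as a server manager choosing among already-running application servers. The Python below is a hypothetical illustration (the class names and the least-loaded policy are assumptions), showing dispatch without spawning a particular application server on demand.

```python
class ApplicationServer:
    """An already-running edge application server with a simple load counter."""
    def __init__(self, name):
        self.name = name
        self.active_requests = 0

    def handle(self, request):
        self.active_requests += 1
        try:
            return f"{self.name} handled {request}"
        finally:
            # In this synchronous demo the counter returns to zero after each call;
            # a real server manager would observe concurrent, longer-lived requests.
            self.active_requests -= 1

class ServerManager:
    """Per-server manager process that dispatches requests across the region's app servers."""
    def __init__(self, application_servers):
        self.application_servers = list(application_servers)

    def dispatch(self, request):
        # Pick the least-loaded running application server rather than spawning a new one.
        target = min(self.application_servers, key=lambda s: s.active_requests)
        return target.handle(request)

region = ServerManager([ApplicationServer("edge-app-1"), ApplicationServer("edge-app-2")])
print(region.dispatch("GET /app/checkout"))
```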
  • Publication number: 20130007228
    Abstract: A content file purge mechanism for a content delivery network (CDN) is described. A Web-enabled portal is used by CDN customers to enter purge requests securely. A purge request identifies one or more content files to be purged. The purge request is pushed over a secure link from the portal to a purge server, which validates purge requests from multiple CDN customers and batches the requests into an aggregate purge request. The aggregate purge request is pushed from the purge server to a set of staging servers. Periodically, CDN content servers poll the staging servers to determine whether an aggregate purge request exists. If so, the CDN content servers obtain the aggregate purge request and process the request to remove the identified content files from their local storage.
    Type: Application
    Filed: September 10, 2012
    Publication date: January 3, 2013
    Applicant: Akamai Technologies, Inc.
    Inventors: Alexander Sherman, Philip A. Lisiecki, Joel M. Wein, Don A. Dailey, John Dilley, William E. Weihl
  • Patent number: 8266305
    Abstract: A content file purge mechanism for a content delivery network (CDN) is described. A Web-enabled portal is used by CDN customers to enter purge requests securely. A purge request identifies one or more content files to be purged. The purge request is pushed over a secure link from the portal to a purge server, which validates purge requests from multiple CDN customers and batches the requests into an aggregate purge request. The aggregate purge request is pushed from the purge server to a set of staging servers. Periodically, CDN content servers poll the staging servers to determine whether an aggregate purge request exists. If so, the CDN content servers obtain the aggregate purge request and process the request to remove the identified content files from their local storage.
    Type: Grant
    Filed: September 18, 2006
    Date of Patent: September 11, 2012
    Assignee: Akamai Technologies, Inc.
    Inventors: Alexander Sherman, Philip A. Lisiecki, Joel M. Wein, Don A. Dailey, John Dilley, William E. Weihl
  • Patent number: 8266293
    Abstract: A method and system of load balancing application server resources operating in a distributed set of servers is described. In a representative embodiment, the set of servers comprises a region of a content delivery network. Each server in the set typically includes a server manager process and an application server on which edge-enabled applications or application components are executed. As service requests are directed to servers in the region, the application servers manage the requests in a load-balanced manner, and without any requirement that a particular application server be spawned on-demand.
    Type: Grant
    Filed: March 5, 2012
    Date of Patent: September 11, 2012
    Assignee: Akamai Technologies, Inc.
    Inventors: Andrew T. Davis, Nate Kushman, Jay G. Parikh, Srinivasan Pichai, Daniel Stodolsky, Ashis Tarafdar, William E. Weihl
  • Publication number: 20120166650
    Abstract: A method and system of load balancing application server resources operating in a distributed set of servers is described. In a representative embodiment, the set of servers comprises a region of a content delivery network. Each server in the set typically includes a server manager process and an application server on which edge-enabled applications or application components are executed. As service requests are directed to servers in the region, the application servers manage the requests in a load-balanced manner, and without any requirement that a particular application server be spawned on-demand.
    Type: Application
    Filed: March 5, 2012
    Publication date: June 28, 2012
    Applicant: Akamai Technologies, Inc.
    Inventors: Andrew T. Davis, Nate Kushman, Jay G. Parikh, Srinivasan Pichai, Daniel Stodolsky, Ashis Tarafdar, William E. Weihl
  • Patent number: 8166079
    Abstract: The disclosed technique enables a content provider to dynamically assemble content at the edge of the Internet, preferably on content delivery network (CDN) edge servers. Preferably, the content provider leverages an “edge side include” (ESI) markup language that is used to define Web page fragments for dynamic assembly at the edge. Dynamic assembly improves site performance by caching the objects that comprise dynamically generated pages at the edge of the Internet, close to the end user. The content provider designs and develops the business logic to form and assemble the pages, for example, by using the ESI language within its development environment. Instead of being assembled by an application/web server in a centralized data center, the application/web server sends a page template and content fragments to a CDN edge server where the page is assembled. Each content fragment can have its own cacheability profile to manage the “freshness” of the content.
    Type: Grant
    Filed: June 29, 2010
    Date of Patent: April 24, 2012
    Assignee: Akamai Technologies, Inc.
    Inventors: Daniel M. Lewin, Anne E. Lewin, legal representative, Andrew T. Davis, Samuel D. Gendler, Marty Kagan, Jay G. Parikh, William E. Weihl
  • Patent number: 8131835
    Abstract: A method and system of load balancing application server resources operating in a distributed set of servers is described. In a representative embodiment, the set of servers comprises a region of a content delivery network. Each server in the set typically includes a server manager process and an application server on which edge-enabled applications or application components are executed. As service requests are directed to servers in the region, the application servers manage the requests in a load-balanced manner, and without any requirement that a particular application server be spawned on-demand.
    Type: Grant
    Filed: February 8, 2010
    Date of Patent: March 6, 2012
    Assignee: Akamai Technologies, Inc.
    Inventors: Andrew T. Davis, Nate Kushman, Jay G. Parikh, Srinivasan Pichai, Daniel Stodolsky, Ashis Tarafdar, William E. Weihl
  • Patent number: 7958249
    Abstract: A file transport mechanism according to the invention is responsible for accepting, storing and distributing files, such as configuration or control files, to a large number of field machines. The mechanism is comprised of a set of servers that accept, store and maintain submitted files. The file transport mechanism implements a distributed agreement protocol based on “vector exchange.” A vector exchange is a knowledge-based algorithm that works by passing around to potential participants a commitment bit vector. A participant that observes a quorum of commit bits in a vector assumes agreement. Servers use vector exchange to achieve consensus on file submissions. Once a server learns of an agreement, it persistently marks (in a local data store) the request as “agreed.” Once the submission is agreed, the server can stage the new file for download.
    Type: Grant
    Filed: August 2, 2010
    Date of Patent: June 7, 2011
    Assignee: Akamai Technologies, Inc.
    Inventors: Alexander Sherman, Andrew D. Berkheimer, Philip A. Lisiecki, William E. Weihl, Joel M. Wein
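
The “vector exchange” agreement protocol in this abstract lends itself to a compact sketch. The Python below is an illustrative model, not the patented protocol: each server sets its own bit in a commitment vector for a file submission, merges vectors received from peers, and once it observes a quorum of commit bits it marks the submission agreed and stages the file. The majority-quorum rule and in-memory "persistence" here are assumptions made for brevity.

```python
class VectorExchangeServer:
    """Toy model of a server participating in vector-exchange agreement on file submissions."""

    def __init__(self, server_id, all_server_ids):
        self.server_id = server_id
        self.all_server_ids = list(all_server_ids)
        self.vectors = {}          # submission_id -> set of server ids that have committed
        self.agreed = set()        # submissions marked "agreed" (persisted in the real system)

    def commit(self, submission_id):
        """Set our own bit for this submission and return the vector to pass to peers."""
        vector = self.vectors.setdefault(submission_id, set())
        vector.add(self.server_id)
        self._check_quorum(submission_id)
        return vector

    def receive(self, submission_id, remote_vector):
        """Merge a commitment vector received from another server, adding our own bit."""
        vector = self.vectors.setdefault(submission_id, set())
        vector.update(remote_vector)
        vector.add(self.server_id)
        self._check_quorum(submission_id)
        return vector

    def _check_quorum(self, submission_id):
        # A participant that observes a quorum of commit bits assumes agreement.
        quorum = len(self.all_server_ids) // 2 + 1
        if len(self.vectors[submission_id]) >= quorum and submission_id not in self.agreed:
            self.agreed.add(submission_id)
            self.stage_for_download(submission_id)

    def stage_for_download(self, submission_id):
        # Once the submission is agreed, the new file can be staged for download.
        print(f"{self.server_id}: staging {submission_id} for download")
```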
  • Publication number: 20100293281
    Abstract: Business applications running on a content delivery network (CDN) having a distributed application framework can create, access and modify state for each client. Over time, a single client may desire to access a given application on different CDN edge servers within the same region and even across different regions. Each time, the application may need to access the latest “state” of the client even if the state was last modified by an application on a different server. A difficulty arises when a process or a machine that last modified the state dies or is temporarily or permanently unavailable. The present invention provides techniques for migrating session state data across CDN servers in a manner transparent to the user. A distributed application thus can access a latest “state” of a client even if the state was last modified by an application instance executing on a different CDN server, including a nearby (in-region) or a remote (out-of-region) server.
    Type: Application
    Filed: July 26, 2010
    Publication date: November 18, 2010
    Applicant: Akamai Technologies, Inc.
    Inventors: Mark Tsimelzon, Srikanth Thirumalai, Andrew T. Davis, Jay G. Parikh, William E. Weihl