Patents by Inventor Derek Shiell

Derek Shiell has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190208554
    Abstract: Provided is a device that persistently distributes connectionless traffic across different simultaneously executing server instances in a manner that allows a first set of server instances of the device to commence a new first set of connectionless data streams during a first interval, and a different second set of server instances of the device to commence a different second set of connectionless data streams as the first set of server instances respond to ongoing connectionless data streams of the first set of connectionless data streams during a subsequent second interval. The persistent distribution further supports virtual connection migration by distributing, to the same server instance, data packets that are directed to the same connectionless data stream even when the sending user equipment changes addressing during the connectionless data stream.
    Type: Application
    Filed: March 11, 2019
    Publication date: July 4, 2019
    Applicant: Verizon Digital Media Services Inc.
    Inventors: Sergio Leonardo Ruiz, Derek Shiell
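The persistent distribution above hinges on routing by a stream identifier rather than the sender's address. A minimal sketch, assuming a string connection identifier and a fixed list of server instances (both illustrative, not from the patent):

```python
import hashlib

def instance_for(conn_id, instances):
    """Route a packet to a server instance by hashing the stream's connection
    identifier, not the client's source address, so packets of the same
    connectionless data stream reach the same instance even after the
    sending user equipment changes addressing mid-stream."""
    digest = int(hashlib.sha1(conn_id.encode()).hexdigest(), 16)
    return instances[digest % len(instances)]

instances = ["instance-0", "instance-1", "instance-2"]
# Two packets of one stream, sent from different client addresses,
# still map to the same instance.
a = instance_for("stream-42", instances)
b = instance_for("stream-42", instances)
```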
  • Patent number: 10284674
    Abstract: Disclosed are systems and methods for performing consistent request distribution across a set of servers based on a request Uniform Resource Locator (URL) and one or more cache keys, wherein some but not all cache keys modify the content requested by the URL. The cache keys include query string parameters and header parameters. A request director parses a received request, excludes irrelevant cache keys, reorders relevant cache keys, and distributes the request to a server from the set of servers tasked with serving content differentiated from the request URL by the relevant cache keys. The exclusion and reordering preserves the consistent distribution of requests directed to the same URL but different content as a result of different cache keys, irrespective of the placement of the relevant cache keys and the inclusion of irrelevant cache keys in the request.
    Type: Grant
    Filed: January 23, 2017
    Date of Patent: May 7, 2019
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Donnevan Scott Yeager, Derek Shiell
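The director's exclude, reorder, and hash step can be sketched as follows; the relevant-key set, the MD5 choice, and the server names are illustrative assumptions, not taken from the patent:

```python
import hashlib
from urllib.parse import urlsplit, parse_qsl, urlencode

def route_request(url, relevant_keys, servers):
    """Exclude irrelevant cache keys, reorder the relevant ones into a
    canonical form, then hash to pick the serving server."""
    parts = urlsplit(url)
    params = [(k, v) for k, v in parse_qsl(parts.query) if k in relevant_keys]
    params.sort()   # placement of relevant keys in the URL no longer matters
    canonical = parts.path + "?" + urlencode(params)
    digest = int(hashlib.md5(canonical.encode()).hexdigest(), 16)
    return servers[digest % len(servers)]

servers = ["edge-a", "edge-b", "edge-c"]
# Same content despite reordered keys and an irrelevant session parameter:
s1 = route_request("/video.mp4?quality=hd&session=123", {"quality"}, servers)
s2 = route_request("/video.mp4?session=999&quality=hd", {"quality"}, servers)
```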
  • Publication number: 20190081862
    Abstract: Disclosed are different implementations for rapid configuration propagation across multiple servers of a distributed platform. One implementation is a push-based distribution of update segments that are generated from a one-time differential analysis of an updated particular configuration relative to a previous instance of the particular configuration. Sequence numbers attached to the update segments identify whether a server's copy of a configuration is up-to-date and can receive a new update segment, or whether missing intervening segments are to be retrieved from peers and applied prior to applying the new update segment. Another implementation is a pull-based distribution of compressed images of the configurations. A complete set of configurations is distributed as a compressed file system that is loaded into server memory. Individual configurations are read out of the file system and loaded into memory when implicated by client requests.
    Type: Application
    Filed: September 13, 2017
    Publication date: March 14, 2019
    Inventors: Daniel Lockhart, Derek Shiell, Harkeerat Bedi, Paulo Tioseco, William Rosecrans, David Andrews
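The push-based variant's sequence-number check might look like the following sketch, where each update segment is modeled as a dict of configuration changes (an assumption for illustration):

```python
class ConfigReplica:
    """One server's copy of a configuration, updated by differential
    segments; each segment is modeled as a dict of changed settings."""
    def __init__(self):
        self.seq = 0        # sequence number of the last applied segment
        self.config = {}
        self.pending = {}   # buffered out-of-order segments

    def receive(self, seq, segment):
        """Apply the segment if in order; report a gap if intervening
        segments must first be retrieved from peers."""
        if seq <= self.seq:
            return "duplicate"
        self.pending[seq] = segment
        applied = 0
        while self.seq + 1 in self.pending:   # apply any contiguous run
            self.config.update(self.pending.pop(self.seq + 1))
            self.seq += 1
            applied += 1
        return "applied" if applied else "gap"
```

A "gap" result is the cue to fetch the missing intervening segments from peers before applying the buffered update.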
  • Publication number: 20190075182
    Abstract: Some embodiments provide partitioned serialized caching and delivery of large sized content and files. Some embodiments partition requests for large sized content into segment requests with each segment request identifying a different byte range of the requested content. Each segment request is hashed to identify a particular server from a set of servers tasked with caching and delivering a different segment of the requested content. In this manner, no single server caches or delivers the entirety of large sized content. The segment requests are distributed serially across the set of servers so that the segments are passed in order, wherein the serial distribution involves handing off the requesting user's connection serially to each server of the set of servers in the order in which the set of servers deliver the content segments.
    Type: Application
    Filed: November 8, 2018
    Publication date: March 7, 2019
    Inventors: Juan Bran, Derek Shiell
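The partitioning and per-segment hashing described above can be sketched as follows; the 1 MiB segment size and SHA-1 choice are illustrative assumptions:

```python
import hashlib

SEGMENT_SIZE = 1 << 20   # 1 MiB per segment (illustrative; not specified)

def segment_requests(url, content_length):
    """Partition one large-content request into byte-range segment requests."""
    for start in range(0, content_length, SEGMENT_SIZE):
        end = min(start + SEGMENT_SIZE, content_length) - 1
        yield (url, f"bytes={start}-{end}")

def server_for(url, byte_range, servers):
    """Hash each segment request to a server, so no single server caches
    or delivers the entire content."""
    digest = int(hashlib.sha1(f"{url}:{byte_range}".encode()).hexdigest(), 16)
    return servers[digest % len(servers)]
```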
  • Publication number: 20190058755
    Abstract: A network device receives, from a customer, a customer subscription to a media transformation service; receives, from the customer as a first component of the subscription, data associated with customer media; and receives, from the customer as a second component of the subscription, one or more customer-selected parameters that specify media transformations to be performed upon the customer media. The network device receives, from a client browser, a request for the customer media, and transforms, responsive to receipt of the request from the client browser, the customer media based on the one or more customer-selected parameters to produce a transformed version of the customer media.
    Type: Application
    Filed: October 23, 2018
    Publication date: February 21, 2019
    Inventors: Brian W. Joe, Hayes Kim, Derek Shiell
  • Publication number: 20190020541
    Abstract: A dynamic runtime reconfigurable server modifies its operational logic as it performs tasks for different customers at different times based on different configurations defined by the customers. The embodiments reduce resource overhead associated with reconfiguring and loading the different customer configurations into server memory at runtime. The server selectively loads different customer configurations from storage into server memory as the configurations are accessed in response to received client requests. The server also selectively preloads configurations into memory at periodic server restarts based on configurations accessed during the interval prior to each restart. The restart allows the server to remove old configurations from memory and maintain the most recently accessed ones. Restarting is also performed without interrupting server operation in responding to user requests.
    Type: Application
    Filed: July 12, 2017
    Publication date: January 17, 2019
    Inventors: Dian Peng, Derek Shiell
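The selective loading and restart-time preloading could be sketched like this, with configurations modeled as dicts in a storage mapping (an illustrative stand-in for on-disk configuration files):

```python
class ConfigServer:
    """Lazily load customer configurations into memory on first access, and
    preload only recently accessed ones across a periodic restart."""
    def __init__(self, storage, preload=()):
        self.storage = storage
        self.memory = {c: storage[c] for c in preload}
        self.accessed = set(preload)

    def handle(self, customer):
        # Selective runtime load: pull the config from storage only when a
        # client request first implicates it.
        if customer not in self.memory:
            self.memory[customer] = self.storage[customer]
        self.accessed.add(customer)
        return self.memory[customer]

    def restart(self):
        # Periodic restart: old configs are dropped; configs accessed during
        # the prior interval are preloaded into the fresh instance.
        return ConfigServer(self.storage, preload=self.accessed)

storage = {"acme": {"ttl": 60}, "beta": {"ttl": 30}}
srv = ConfigServer(storage)
srv.handle("acme")
srv2 = srv.restart()
```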
  • Patent number: 10133673
    Abstract: The embodiments implement file size variance caching optimizations. The optimizations are based on a differentiated caching implementation involving a first cache optimized for small size content and a second cache optimized for large size content. The first cache reads and writes data using a first block size. The second cache reads and writes data using a different second block size that is larger than the first block size. A request management server controls request distribution across the first and second caches. The request management server differentiates large size content requests from small size content requests. The request management server uses a first request distribution scheme to restrict large size content request distribution across the first cache and a second request distribution scheme to restrict small size content request distribution across the second cache.
    Type: Grant
    Filed: March 9, 2016
    Date of Patent: November 20, 2018
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Harkeerat Singh Bedi, Amir Reza Khakpour, Derek Shiell
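At its core, the size-differentiated request routing reduces to a threshold decision; a sketch with invented block sizes and threshold (the patent specifies no values):

```python
# All sizes are invented for illustration; the patent does not give values.
SMALL_BLOCK = 4 * 1024       # first cache: small read/write block size
LARGE_BLOCK = 256 * 1024     # second cache: larger read/write block size
SIZE_THRESHOLD = 1 << 20     # content at or above this is "large"

def cache_for(content_length):
    """Route a request to the cache whose block size suits the content size."""
    return "large" if content_length >= SIZE_THRESHOLD else "small"
```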
  • Patent number: 10129358
    Abstract: Some embodiments provide partitioned serialized caching and delivery of large sized content and files. Some embodiments partition requests for large sized content into segment requests with each segment request identifying a different byte range of the requested content. Each segment request is hashed to identify a particular server from a set of servers tasked with caching and delivering a different segment of the requested content. In this manner, no single server caches or delivers the entirety of large sized content. The segment requests are distributed serially across the set of servers so that the segments are passed in order, wherein the serial distribution involves handing off the requesting user's connection serially to each server of the set of servers in the order in which the set of servers deliver the content segments.
    Type: Grant
    Filed: January 15, 2016
    Date of Patent: November 13, 2018
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Juan Bran, Derek Shiell
  • Patent number: 10120871
    Abstract: Some embodiments provide a file system for purging content based on a single traversal of the file system that identifies the directory containing the content, without performing a secondary traversal within the directory to target the operation to only the files that are associated with the content, such that other files contained in the directory are unaffected. The file system supplements traditional directory structures with file-level directories. Each file-level directory is created to contain a root file associated with particular content, different variants of the particular content, and supporting files. Consequently, the file system can complete an operation targeting particular content by performing that operation on the file-level directory containing the particular content, thereby eliminating the need to conduct a file-by-file traversal of the containing directory as a prerequisite to identifying the files associated with the particular content and performing the operation on the files individually.
    Type: Grant
    Filed: July 21, 2015
    Date of Patent: November 6, 2018
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Derek Shiell, Robert J. Peters
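The purge-by-directory idea maps naturally onto an ordinary file system; a sketch assuming content identifiers double as directory names (an illustrative choice):

```python
import shutil
import tempfile
from pathlib import Path

def store_variant(cache_root, content_id, variant_name, data):
    """Write a variant of one piece of content into that content's own
    file-level directory (content ids as directory names is an assumption)."""
    d = Path(cache_root) / content_id
    d.mkdir(parents=True, exist_ok=True)
    (d / variant_name).write_bytes(data)

def purge(cache_root, content_id):
    """Purge every file for the content with a single directory removal;
    no file-by-file traversal of a shared containing directory is needed."""
    shutil.rmtree(Path(cache_root) / content_id, ignore_errors=True)

root = tempfile.mkdtemp()
store_variant(root, "logo.png", "root", b"original bytes")
store_variant(root, "logo.png", "webp", b"variant bytes")
purge(root, "logo.png")
```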
  • Patent number: 10116729
    Abstract: A network device receives, from a customer, a customer subscription to a media transformation service; receives, from the customer as a first component of the subscription, data associated with customer media; and receives, from the customer as a second component of the subscription, one or more customer-selected parameters that specify media transformations to be performed upon the customer media. The network device receives, from a client browser, a request for the customer media, and transforms, responsive to receipt of the request from the client browser, the customer media based on the one or more customer-selected parameters to produce a transformed version of the customer media. The network device sends the transformed version of the customer media, via a content delivery network, to the client browser.
    Type: Grant
    Filed: October 6, 2015
    Date of Patent: October 30, 2018
    Assignee: VERIZON DIGITAL MEDIA SERVICES INC.
    Inventors: Brian W. Joe, Hayes Kim, Derek Shiell
  • Patent number: 10069859
    Abstract: Some embodiments provide distributed rate limiting to combat network based attacks launched against a distributed platform or customers thereof. The distributed rate limiting involves graduated monitoring to identify when an attack expands beyond a single server to other servers operating from within the same distributed platform distribution point, and when the attack further expands from one distributed platform distribution point to other distribution points. Once request rates across the distributed platform distribution points exceed a global threshold, a first set of attack protections are invoked across the distributed platform. Should request rates increase or continue to exceed the threshold, additional attack protections can be invoked. Distributed rate limiting allows any server within the distributed platform to assume command and control over the graduated monitoring as well as escalating the response to any identified attack.
    Type: Grant
    Filed: December 16, 2015
    Date of Patent: September 4, 2018
    Assignee: Verizon Digital Media Services Inc.
    Inventors: David Andrews, Reed Morrison, Derek Shiell, Robert J. Peters
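The graduated response can be sketched as a count of platform-wide thresholds crossed; the threshold values and protection names below are invented for illustration, since the patent only describes a first set of protections at a global threshold with further escalation as rates continue to climb:

```python
# Illustrative tiers; not taken from the patent.
THRESHOLDS = [50_000, 100_000, 150_000]   # platform-wide requests/sec
PROTECTIONS = ["rate_limit_clients", "challenge_suspect_requests",
               "block_attack_signatures"]

def protections_for(global_rate):
    """Return the protections to invoke platform-wide for the current rate;
    each threshold crossed adds one more protection."""
    level = sum(global_rate > t for t in THRESHOLDS)
    return PROTECTIONS[:level]
```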
  • Publication number: 20180241771
    Abstract: Some embodiments provide techniques for mitigating against layer 7 distributed denial of service attacks. Some embodiments submit a computationally intensive problem, also referred to as a bot detection problem, in response to a user request. Bots that lack the sophistication needed to render websites or are configured to not respond to the server response will be unable to provide a solution to the problem, and their requests will therefore be denied. If the requesting user is a bot that has the sophistication to correctly solve the problem, the server will monitor the user request rate. For subsequent requests from that same user, the server can increase the difficulty of the problem when the request rate exceeds different thresholds. In so doing, the problem consumes greater resources of the user, slowing the rate at which the user can submit subsequent requests, and thereby preventing the user from overwhelming the server.
    Type: Application
    Filed: April 17, 2018
    Publication date: August 23, 2018
    Inventors: Derek Shiell, Amir Reza Khakpour, Robert J. Peters, David Andrews
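The rate-based difficulty escalation might be sketched as follows, using a one-second sliding window and invented thresholds (the patent does not fix these values):

```python
class BotChallenge:
    """Scale the bot-detection problem's difficulty with the requester's
    rate; the one-second window and thresholds are illustrative."""
    def __init__(self, thresholds=(2, 5)):
        self.thresholds = thresholds   # requests/second tiers
        self.history = {}              # client id -> recent request times

    def difficulty(self, client, now):
        """Return the difficulty level for this client's next problem."""
        window = [t for t in self.history.get(client, []) if now - t < 1.0]
        window.append(now)
        self.history[client] = window
        rate = len(window)             # requests in the last second
        # Each threshold the rate exceeds bumps the difficulty one level.
        return sum(rate > t for t in self.thresholds)
```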
  • Patent number: 10044602
    Abstract: Some embodiments provide loop detection and loop prevention mechanisms for message passing between peers in a multi-tier hierarchy. In some embodiments, the messaging header is modified to track which peers have received a copy of the message. Each peer appends its identifier to the message header before passing the message to another peer. When selecting a receiving peer, the sending peer ensures that the receiving peer is not already identified in the message header. If the receiving peer has already received the message, then another peer from a next-peer list is selected to receive the message. If all peers in the next-peer list have been traversed, the sending peer returns an error message via a reverse traversal of the peers in the message header.
    Type: Grant
    Filed: September 1, 2017
    Date of Patent: August 7, 2018
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Amir Reza Khakpour, Robert J. Peters, Derek Shiell
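The header-based loop avoidance reduces to a small routine; a sketch with peers modeled as string identifiers (illustrative):

```python
def forward_to_peer(header, self_id, next_peers):
    """Append our id to the message header, then pick a next peer that has
    not yet received the message. Returns (peer, header) on success, or
    (None, reverse_path) when every candidate has already seen it."""
    header = header + [self_id]
    for peer in next_peers:
        if peer not in header:      # loop prevention check
            return peer, header
    # Every next-peer already received the message: return an error along
    # a reverse traversal of the peers recorded in the header.
    return None, list(reversed(header))
```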
  • Publication number: 20180213053
    Abstract: Disclosed are systems and methods for performing consistent request distribution across a set of servers based on a request Uniform Resource Locator (URL) and one or more cache keys, wherein some but not all cache keys modify the content requested by the URL. The cache keys include query string parameters and header parameters. A request director parses a received request, excludes irrelevant cache keys, reorders relevant cache keys, and distributes the request to a server from the set of servers tasked with serving content differentiated from the request URL by the relevant cache keys. The exclusion and reordering preserves the consistent distribution of requests directed to the same URL but different content as a result of different cache keys, irrespective of the placement of the relevant cache keys and the inclusion of irrelevant cache keys in the request.
    Type: Application
    Filed: January 23, 2017
    Publication date: July 26, 2018
    Inventors: Donnevan Scott Yeager, Derek Shiell
  • Publication number: 20180167434
    Abstract: The solution distributes the management of stream segments from a central storage cluster to different edge servers that upload stream segments to and receive stream segments from the central storage cluster. Each edge server tracks the stream segments it has uploaded to the central storage cluster as well as the expiration times for those segments. The tracking is performed without a database, using a log file and file system arrangement. First-tier directories are created in the file system for different expiration intervals. Entries under the first-tier directories track individual segments that expire within the expiration interval of the first-tier directory, with the file system entries being files or a combination of subdirectories and files. Upon identifying expired stream segments, the edge servers instruct the central storage cluster to delete those stream segments. This removes the management overhead from the central storage cluster and implements the distributed management without a database.
    Type: Application
    Filed: December 6, 2017
    Publication date: June 14, 2018
    Inventors: Karthik Sathyanarayana, Harkeerat Singh Bedi, Derek Shiell, Robert J. Peters
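The first-tier expiration-interval directories can be sketched with hourly buckets; the interval length, and naming the buckets by their start timestamp, are illustrative assumptions:

```python
import time
import tempfile
from pathlib import Path

BUCKET = 3600   # seconds per first-tier expiration interval (illustrative)

def track_segment(log_root, segment_name, expires_at):
    """Record an uploaded segment under the first-tier directory for its
    expiration interval; the entry is a plain file, no database involved."""
    bucket = int(expires_at) // BUCKET * BUCKET
    d = Path(log_root) / str(bucket)
    d.mkdir(parents=True, exist_ok=True)
    (d / segment_name).touch()

def expired_segments(log_root, now=None):
    """Yield segments in first-tier directories whose interval has fully
    elapsed; the edge server can tell the central cluster to delete them."""
    now = time.time() if now is None else now
    for d in sorted(Path(log_root).iterdir()):
        if int(d.name) + BUCKET <= now:
            for f in d.iterdir():
                yield f.name

root = tempfile.mkdtemp()
track_segment(root, "seg-001.ts", expires_at=0)                   # long past
track_segment(root, "seg-002.ts", expires_at=time.time() + 7200)  # future
```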
  • Patent number: 9954891
    Abstract: Some embodiments provide techniques for mitigating against layer 7 distributed denial of service attacks. Some embodiments submit a computationally intensive problem, also referred to as a bot detection problem, in response to a user request. Bots that lack the sophistication needed to render websites or are configured to not respond to the server response will be unable to provide a solution to the problem, and their requests will therefore be denied. If the requesting user is a bot that has the sophistication to correctly solve the problem, the server will monitor the user request rate. For subsequent requests from that same user, the server can increase the difficulty of the problem when the request rate exceeds different thresholds. In so doing, the problem consumes greater resources of the user, slowing the rate at which the user can submit subsequent requests, and thereby preventing the user from overwhelming the server.
    Type: Grant
    Filed: May 18, 2015
    Date of Patent: April 24, 2018
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Derek Shiell, Amir Reza Khakpour, Robert J. Peters, David Andrews
  • Publication number: 20180054482
    Abstract: The embodiments provide request multiplexing whereby a server receiving a first request for content clones and issues the cloned request to an origin to initiate retrieval of the content. The first request and subsequent requests for the same content are placed in a queue. The server empties a receive buffer that is populated with packets of the requested content as the packets arrive from the origin by writing the packets directly to local storage without directly distributing packets from the receive buffer to any user. The rate at which the server empties the receive buffer is therefore independent of the rate at which any user receives the packets. A first set of packets written to local storage can then be simultaneously distributed to one or more queued requests as the server continues emptying the receive buffer and writing a second set of packets to local storage.
    Type: Application
    Filed: August 16, 2016
    Publication date: February 22, 2018
    Inventors: Sergio Leonardo Ruiz, Derek Shiell
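The queue-and-clone flow could be sketched as follows, with packets modeled as byte strings and clients as opaque identifiers (both illustrative stand-ins):

```python
class OriginMultiplexer:
    """Clone only the first request for a piece of content to the origin;
    queue duplicates, drain origin packets straight to local storage, and
    serve every queued requester from that storage."""
    def __init__(self):
        self.queues = {}   # url -> clients waiting on the content
        self.store = {}    # url -> packets written to local storage

    def request(self, url, client):
        first = url not in self.queues
        self.queues.setdefault(url, []).append(client)
        return first       # True -> caller issues the cloned origin fetch

    def on_packet(self, url, packet):
        # Empty the receive buffer by writing straight to local storage;
        # this rate is independent of any client's read rate.
        self.store.setdefault(url, []).append(packet)

    def serve(self, url):
        """Distribute the stored packets to every queued requester at once."""
        packets = self.store.get(url, [])
        return {c: list(packets) for c in self.queues.pop(url, [])}

m = OriginMultiplexer()
first = m.request("/a", "client-1")     # triggers the cloned origin fetch
second = m.request("/a", "client-2")    # queued behind the first
m.on_packet("/a", b"p1")
m.on_packet("/a", b"p2")
out = m.serve("/a")
```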
  • Publication number: 20180048731
    Abstract: The embodiments provide peer cache filling. The peer cache filling allocates a set of caching servers to distribute content in response to user requests, with a limited first subset of the set of servers having access to retrieve the content from an origin and a larger second subset of the set of servers retrieving the content from the first subset of servers without accessing the origin. The peer cache filling dynamically escalates and de-escalates the allocation of the caching servers to the first and second subsets as demand for the content rises and falls. Peer cache filling is implemented by modifying request headers to identify designated hot content, provide a request identifier hash result for identifying the ordering of servers, and provide a value designating which servers in the ordering act as primary servers with access to the origin.
    Type: Application
    Filed: August 15, 2016
    Publication date: February 15, 2018
    Inventors: Donnevan Scott Yeager, Derek Shiell
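The hash-derived server ordering and primary designation might be sketched like this; the SHA-1 rendezvous-style ordering is an illustrative assumption, not the patent's exact mechanism:

```python
import hashlib

def server_ordering(request_id, servers):
    """Derive a per-request ordering of servers from a hash of the request
    identifier (rendezvous-style; an illustrative construction)."""
    return sorted(servers,
                  key=lambda s: hashlib.sha1(f"{request_id}:{s}".encode()).hexdigest())

def fill_source(server, request_id, servers, primaries=1):
    """Primary servers in the ordering fetch from the origin; all others
    fill their cache from the first primary peer instead."""
    order = server_ordering(request_id, servers)
    if order.index(server) < primaries:
        return "origin"
    return order[0]
```

Raising `primaries` escalates the first subset as demand grows; lowering it de-escalates back toward a single origin-facing server.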
  • Publication number: 20170366447
    Abstract: Some embodiments provide loop detection and loop prevention mechanisms for message passing between peers in a multi-tier hierarchy. In some embodiments, the messaging header is modified to track which peers have received a copy of the message. Each peer appends its identifier to the message header before passing the message to another peer. When selecting a receiving peer, the sending peer ensures that the receiving peer is not already identified in the message header. If the receiving peer has already received the message, then another peer from a next-peer list is selected to receive the message. If all peers in the next-peer list have been traversed, the sending peer returns an error message via a reverse traversal of the peers in the message header.
    Type: Application
    Filed: September 1, 2017
    Publication date: December 21, 2017
    Inventors: Amir Reza Khakpour, Robert J. Peters, Derek Shiell
  • Publication number: 20170366590
    Abstract: Some embodiments provide a multi-tenant over-the-top multicast solution that integrates the per user stream customizability of unicast with the large scale streaming efficiencies of multicast. The solution involves an application, different multicast groups streaming an event with different customizations, and a manifest file or metadata identifying the different groups and customizations. The solution leverages the different multicast groups in order to provide different time shifts in the event stream, different quality level encodings of the event stream, and different secondary content to be included with a primary content stream. The application configured with the manifest file or metadata dynamically switches between the groups in order to customize the experience for a user or user device on which the application executes. Switching from multicast to unicast is also supported to supplement available customizations and for failover.
    Type: Application
    Filed: September 1, 2017
    Publication date: December 21, 2017
    Inventors: Alexander A. Kazerani, Jayson G. Sakata, Robert J. Peters, Amir Khakpour, Derek Shiell