Patents by Inventor Derek Shiell

Derek Shiell has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20170339047
    Abstract: Some embodiments provide redundancy and failover for accelerating and improving the processing of commands across a distributed platform. A distributed platform administrative server distributes commands to different distributed platform points-of-presence (PoPs) for execution. The administrative server distributes the commands over a first set of transit provider paths that connect the server to each PoP. The administrative server selects the first set of paths based on different addressing associated with each of the paths. If any of the first paths is unavailable or underperforming, the administrative server selects a second path by changing a destination address and resends the command to the particular PoP over the second path. Some embodiments further modify PoP server operation so that the PoP servers can identify commands issued according to the different path addressing and distribute such commands to all other servers of the same PoP upon identifying the different path addressing.
    Type: Application
    Filed: August 7, 2017
    Publication date: November 23, 2017
    Inventors: Amir Reza Khakpour, Derek Shiell
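A minimal sketch, not taken from the filing, of the failover behavior described in the abstract above: an administrative server tries a PoP's primary transit-path address first and retries over an alternate address if the first attempt fails or times out. The PoP names, addresses, port, and command format are illustrative assumptions.

```python
import socket

# Hypothetical mapping: each PoP is reachable over two transit providers,
# exposed to the administrative server as two different destination addresses.
POP_PATHS = {
    "lax": ["198.51.100.10", "203.0.113.10"],   # primary path, alternate path
    "nyc": ["198.51.100.20", "203.0.113.20"],
}
COMMAND_PORT = 9000
TIMEOUT_SECONDS = 2.0


def send_command(pop_name, command):
    """Send a command to a PoP, failing over to the alternate path address."""
    last_error = None
    for address in POP_PATHS[pop_name]:
        try:
            with socket.create_connection((address, COMMAND_PORT),
                                          timeout=TIMEOUT_SECONDS) as conn:
                conn.sendall(command.encode() + b"\n")
                return conn.recv(1024)          # acknowledgement from the PoP
        except OSError as error:                # unreachable, refused, or timed out
            last_error = error                  # fall through to the next path
    raise RuntimeError(f"all paths to {pop_name} failed") from last_error


if __name__ == "__main__":
    try:
        print(send_command("lax", "purge /images/logo.png"))
    except RuntimeError as err:
        print(err)
```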
  • Patent number: 9787579
    Abstract: Some embodiments override network or router level path selection with application or server controlled path selection by repurposing the type-of-service (ToS) or differentiated services header field. A mapping table maps different ToS values to different available transit provider paths to a particular destination. A server generating a packet to the destination selects one of the available paths according to any of load balanced, failover, or performance optimization criteria. The server sets the packet header ToS field with the value assigned to the selected path. A router operating in the same network as the server is configured with policy based routing rules that similarly map the ToS values to different transit provider paths to the particular destination network. Upon receiving the server generated packet, the router routes the packet to the destination network through the transit provider path identified in the packet header by the server set ToS value.
    Type: Grant
    Filed: August 3, 2015
    Date of Patent: October 10, 2017
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Amir Reza Khakpour, Alexander A. Kazerani, Robert J. Peters, Derek Shiell
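A minimal sketch of the server-side half of the ToS-based path selection described in the abstract above, assuming a Linux host; the router's matching policy-based routing rules are not shown. The path names and ToS byte values are illustrative.

```python
import socket

# Illustrative mapping of transit provider paths to ToS byte values.
# A router in the same network would carry matching policy-based routing rules.
PATH_TOS = {
    "transit_a": 0x20,
    "transit_b": 0x40,
    "transit_c": 0x60,
}


def open_connection(dest_host, dest_port, path):
    """Open a TCP connection whose packets carry the ToS value of the chosen path."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Mark every packet of this connection; policy-based routing at the router
    # maps the ToS value back to the corresponding transit provider path.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, PATH_TOS[path])
    sock.connect((dest_host, dest_port))
    return sock


if __name__ == "__main__":
    # The application picks the path per load-balancing or failover criteria.
    conn = open_connection("example.com", 80, "transit_b")
    conn.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    print(conn.recv(128))
    conn.close()
```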
  • Publication number: 20170262373
    Abstract: The embodiments implement file size variance caching optimizations. The optimizations are based on a differentiated caching implementation involving a small size content optimized first cache and a large size content optimized second cache. The first cache reads and writes data using a first block size. The second cache reads and writes data using a different second block size that is larger than the first block size. A request management server controls request distribution across the first and second caches. The request management server differentiates large size content requests from small size content requests. The request management server uses a first request distribution scheme to restrict large size content request distribution across the first cache and a second request distribution scheme to restrict small size content request distribution across the second cache.
    Type: Application
    Filed: March 9, 2016
    Publication date: September 14, 2017
    Inventors: Harkeerat Singh Bedi, Amir Reza Khakpour, Derek Shiell
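A minimal sketch of the size-differentiated request distribution described in the abstract above. The 1 MB split point, the server names, and the hashing scheme are assumptions for illustration, not details from the filing.

```python
# Illustrative request router: small objects go to a cache tier tuned for a
# small block size, large objects to a tier tuned for a large block size.
from hashlib import md5

SMALL_CACHE_SERVERS = ["cache-small-1", "cache-small-2"]
LARGE_CACHE_SERVERS = ["cache-large-1", "cache-large-2"]
SIZE_SPLIT_BYTES = 1 * 1024 * 1024


def pick_cache_server(url, expected_size_bytes):
    """Choose a cache tier by expected content size, then hash the URL within it."""
    if expected_size_bytes >= SIZE_SPLIT_BYTES:
        pool = LARGE_CACHE_SERVERS      # large-size-content-optimized tier
    else:
        pool = SMALL_CACHE_SERVERS      # small-size-content-optimized tier
    index = int(md5(url.encode()).hexdigest(), 16) % len(pool)
    return pool[index]


if __name__ == "__main__":
    print(pick_cache_server("/video/movie.mp4", 700_000_000))   # -> large tier
    print(pick_cache_server("/css/site.css", 40_000))           # -> small tier
```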
  • Publication number: 20170262767
    Abstract: The embodiments provide systems and methods for efficiently and accurately differentiating requests directed to uncacheable content from requests directed to cacheable content based on identifiers from the requests. The differentiation occurs without analysis or retrieval of the content being requested. Some embodiments hash identifiers of prior requests that resulted in uncacheable content being served in order to set indices within a bloom filter. The bloom filter then tracks prior uncacheable requests without storing each of the identifiers so that subsequent requests for uncacheable requests can be easily identified based on a hash of the request identifier and set indices of the bloom filter. Some embodiments produce a predictive model identifying uncacheable content requests by tracking various characteristics found in identifiers of prior requests that resulted in uncacheable content being served.
    Type: Application
    Filed: March 9, 2016
    Publication date: September 14, 2017
    Inventors: Hooman Mahyar, Amir Reza Khakpour, Derek Shiell, Robert J. Peters
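A minimal sketch of the Bloom-filter approach described in the abstract above: identifiers of requests that previously resolved to uncacheable content set bits in the filter, so later requests can be flagged without storing every identifier. The filter size and hash count are illustrative, not from the filing.

```python
import hashlib


class BloomFilter:
    """Toy Bloom filter over request identifiers (e.g. URLs)."""

    def __init__(self, size_bits=1 << 20, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _indices(self, identifier):
        digest = hashlib.sha256(identifier.encode()).digest()
        for i in range(self.num_hashes):
            chunk = digest[4 * i:4 * i + 4]     # 4 bytes per derived hash function
            yield int.from_bytes(chunk, "big") % self.size

    def add(self, identifier):
        for idx in self._indices(identifier):
            self.bits[idx // 8] |= 1 << (idx % 8)

    def probably_contains(self, identifier):
        return all(self.bits[idx // 8] & (1 << (idx % 8))
                   for idx in self._indices(identifier))


if __name__ == "__main__":
    seen_uncacheable = BloomFilter()
    seen_uncacheable.add("/api/session?user=123")   # previously served uncacheable
    print(seen_uncacheable.probably_contains("/api/session?user=123"))  # True
    print(seen_uncacheable.probably_contains("/img/logo.png"))          # almost surely False
```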
  • Patent number: 9755949
    Abstract: Some embodiments provide loop detection and loop prevention mechanisms for message passing between peers in a multi-tier hierarchy. In some embodiments, the messaging header is modified to track which peers have received a copy of the message. Each peer appends its identifier to the message header before passing the message to another peer. When selecting a receiving peer, the sending peer ensures that the receiving peer is not already identified in the message header. If the receiving peer has already received the message, then another peer from a next-peer list is selected to receive the message. If all peers in the next-peer list have been traversed, the sending peer returns an error message via a reverse traversal of the peers in the message header.
    Type: Grant
    Filed: September 21, 2015
    Date of Patent: September 5, 2017
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Amir Reza Khakpour, Robert J. Peters, Derek Shiell
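A minimal sketch of the loop-prevention check described in the abstract above: the sending peer records itself in the message header and only forwards to a next-peer that has not already received the message. The peer names and message structure are assumptions for the sketch.

```python
def forward(message, self_id, next_peer_list):
    """Return the peer to forward to, or None if every next-peer already got it."""
    visited = message.setdefault("visited_peers", [])
    if self_id not in visited:
        visited.append(self_id)                 # record this hop in the header
    for peer in next_peer_list:
        if peer not in visited:                 # skip peers that already received it
            return peer
    return None                                 # caller returns an error upstream


if __name__ == "__main__":
    msg = {"payload": "fetch /object", "visited_peers": ["edge-1"]}
    print(forward(msg, "mid-1", ["edge-1", "mid-2", "origin-shield"]))  # -> mid-2
    print(forward(msg, "mid-2", ["edge-1", "mid-1"]))                   # -> None (loop avoided)
```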
  • Patent number: 9756098
    Abstract: Some embodiments provide a multi-tenant over-the-top multicast solution that integrates the per user stream customizability of unicast with the large scale streaming efficiencies of multicast. The solution involves an application, different multicast groups streaming an event with different customizations, and a manifest file or metadata identifying the different groups and customizations. The solution leverages the different multicast groups in order to provide different time shifts in the event stream, different quality level encodings of the event stream, and different secondary content to be included with a primary content stream. The application configured with the manifest file or metadata dynamically switches between the groups in order to customize the experience for a user or user device on which the application executes. Switching from multicast to unicast is also supported to supplement available customizations and for failover.
    Type: Grant
    Filed: September 15, 2014
    Date of Patent: September 5, 2017
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Alexander A. Kazerani, Jayson G. Sakata, Robert J. Peters, Amir Khakpour, Derek Shiell
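A minimal sketch of the client-side group switching described in the abstract above: the application reads a manifest that maps stream variants (quality levels, time shifts) to multicast groups and joins the group for the variant it wants. The manifest contents and addresses are illustrative assumptions.

```python
import socket
import struct

# Hypothetical manifest: each variant of the event stream is carried on its
# own multicast group; a unicast fallback URL could also be listed here.
MANIFEST = {
    "live-1080p":      ("239.1.1.1", 5000),
    "live-480p":       ("239.1.1.2", 5000),
    "delay-30s-1080p": ("239.1.2.1", 5000),
}


def join_variant(variant):
    """Join the multicast group that carries the requested stream variant."""
    group, port = MANIFEST[variant]
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    membership = struct.pack("4s4s", socket.inet_aton(group),
                             socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)
    return sock


if __name__ == "__main__":
    # To switch quality or time shift, the application drops this group and
    # joins another one from the manifest.
    stream = join_variant("live-480p")
    print("joined", MANIFEST["live-480p"])
    stream.close()
```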
  • Patent number: 9736059
    Abstract: Some embodiments provide redundancy and failover for accelerating and improving the processing of commands across a distributed platform. A distributed platform administrative server distributes commands to different distributed platform points-of-presence (PoPs) for execution. The administrative server distributes the commands over a first set of transit provider paths that connect the server to each PoP. The administrative server selects the first set of paths based on different addressing associated with each of the paths. If any of the first paths is unavailable or underperforming, the administrative server selects a second path by changing a destination address and resends the command to the particular PoP over the second path. Some embodiments further modify PoP server operation so that the PoP servers can identify commands issued according to the different path addressing and distribute such commands to all other servers of the same PoP upon identifying the different path addressing.
    Type: Grant
    Filed: March 10, 2016
    Date of Patent: August 15, 2017
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Amir Reza Khakpour, Derek Shiell
  • Publication number: 20170208148
    Abstract: Some embodiments provide partitioned serialized caching and delivery of large sized content and files. Some embodiments partition requests for large sized content into segment requests with each segment request identifying a different byte range of the requested content. Each segment request is hashed to identify a particular server from a set of servers tasked with caching and delivering a different segment of the requested content. In this manner, no single server caches or delivers the entirety of large sized content. The segment requests are distributed serially across the set of servers so that the segments are passed in order, wherein the serial distribution involves handing off the requesting user's connection serially to each server of the set of servers in the order in which the set of servers delivers the content segments.
    Type: Application
    Filed: January 15, 2016
    Publication date: July 20, 2017
    Inventors: Juan Bran, Derek Shiell
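A minimal sketch of the segmentation step described in the abstract above: a large-object request is split into byte-range segment requests, and each segment is hashed to the cache server responsible for it, in delivery order. The segment size and server list are assumptions.

```python
from hashlib import md5

CACHE_SERVERS = ["cache-1", "cache-2", "cache-3", "cache-4"]
SEGMENT_BYTES = 50 * 1024 * 1024   # 50 MB per segment (illustrative)


def segment_plan(url, content_length):
    """Yield (server, byte-range header) pairs in the order segments are served."""
    start = 0
    while start < content_length:
        end = min(start + SEGMENT_BYTES, content_length) - 1
        segment_key = f"{url}#{start}-{end}"
        server = CACHE_SERVERS[int(md5(segment_key.encode()).hexdigest(), 16)
                               % len(CACHE_SERVERS)]
        yield server, f"bytes={start}-{end}"
        start = end + 1


if __name__ == "__main__":
    # The requesting user's connection would be handed off serially to each
    # server below, so no single server delivers the whole object.
    for server, byte_range in segment_plan("/downloads/image.iso", 170_000_000):
        print(server, byte_range)
```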
  • Publication number: 20170180414
    Abstract: Some embodiments provide distributed rate limiting to combat network based attacks launched against a distributed platform or customers thereof. The distributed rate limiting involves graduated monitoring to identify when an attack expands beyond a single server to other servers operating from within the same distributed platform distribution point, and when the attack further expands from one distributed platform distribution point to other distribution points. Once request rates across the distributed platform distribution points exceed a global threshold, a first set of attack protections are invoked across the distributed platform. Should request rates increase or continue to exceed the threshold, additional attack protections can be invoked. Distributed rate limiting allows any server within the distributed platform to assume command and control over the graduated monitoring as well as escalating the response to any identified attack.
    Type: Application
    Filed: December 16, 2015
    Publication date: June 22, 2017
    Inventors: David Andrews, Reed Morrison, Derek Shiell, Robert J. Peters
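A minimal sketch of the graduated response described in the abstract above: as the aggregate request rate observed across the platform crosses successive thresholds, stronger protections are invoked. The threshold values and protection names are assumptions, not details from the filing.

```python
THRESHOLDS = [
    (50_000, "rate_limit_per_client"),      # requests/sec across the platform
    (200_000, "challenge_suspect_clients"),
    (500_000, "block_attack_signatures"),
]


def protections_for_rate(global_requests_per_second):
    """Return the protections that should be active at this aggregate request rate."""
    return [action for threshold, action in THRESHOLDS
            if global_requests_per_second >= threshold]


if __name__ == "__main__":
    print(protections_for_rate(30_000))    # []  -> normal operation
    print(protections_for_rate(250_000))   # first two protections invoked
    print(protections_for_rate(900_000))   # all protections invoked
```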
  • Publication number: 20170099341
    Abstract: A network device receives, from a customer, a customer subscription to a media transformation service; receives, from the customer as a first component of the subscription, data associated with customer media; and receives, from the customer as a second component of the subscription, one or more customer-selected parameters that specify media transformations to be performed upon the customer media. The network device receives, from a client browser, a request for the customer media, and transforms, responsive to receipt of the request from the client browser, the customer media based on the one or more customer-selected parameters to produce a transformed version of the customer media. The network device sends the transformed version of the customer media, via a content delivery network, to the client browser.
    Type: Application
    Filed: October 6, 2015
    Publication date: April 6, 2017
    Inventors: Brian W. Joe, Hayes Kim, Derek Shiell
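A minimal sketch of the transformation step described in the abstract above, using the Pillow imaging library as an assumed stand-in; the parameter names and customer table are made up for the sketch, and the filing does not specify an implementation.

```python
from io import BytesIO

from PIL import Image  # assumed third-party dependency for the sketch

# Hypothetical per-customer subscription parameters selected by the customer.
CUSTOMER_PARAMETERS = {
    "customer-42": {"width": 640, "format": "JPEG"},
}


def transform(customer_id, original_bytes):
    """Return the transformed media, per the customer's selected parameters."""
    params = CUSTOMER_PARAMETERS[customer_id]
    image = Image.open(BytesIO(original_bytes))
    scale = params["width"] / image.width
    resized = image.resize((params["width"], int(image.height * scale)))
    out = BytesIO()
    resized.convert("RGB").save(out, format=params["format"])
    return out.getvalue()   # handed to the content delivery network
```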
  • Publication number: 20170085464
    Abstract: Some embodiments provide loop detection and loop prevention mechanisms for message passing between peers in a multi-tier hierarchy. In some embodiments, the messaging header is modified to track which peers have received a copy of the message. Each peer appends its identifier to the message header before passing the message to another peer. When selecting a receiving peer, the sending peer ensures that the receiving peer is not already identified in the message header. If the receiving peer has already received the message, then another peer from a next-peer list is selected to receive the message. If all peers in the next-peer list have been traversed, the sending peer returns an error message via a reverse traversal of the peers in the message header.
    Type: Application
    Filed: September 21, 2015
    Publication date: March 23, 2017
    Inventors: Amir Reza Khakpour, Robert J. Peters, Derek Shiell
  • Publication number: 20160344765
    Abstract: Some embodiments provide techniques for mitigating against layer 7 distributed denial of service attacks. Some embodiments submit a computationally intensive problem, also referred to as a bot detection problem, in response to a user request. Bots that lack the sophistication needed to render websites, or that are configured not to respond to the server response, will be unable to provide a solution to the problem, and their requests will therefore be denied. If the requesting user is a bot and has the sophistication to correctly solve the problem, the server will monitor the user request rate. For subsequent requests from that same user, the server can increase the difficulty of the problem when the request rate exceeds different thresholds. In so doing, the problem consumes greater resources of the user, slowing the rate at which the user can submit subsequent requests, and thereby preventing the user from overwhelming the server.
    Type: Application
    Filed: May 18, 2015
    Publication date: November 24, 2016
    Inventors: Derek Shiell, Amir Reza Khakpour, Robert J. Peters, David Andrews
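A minimal sketch of an escalating client-side puzzle in the spirit of the abstract above: the client must find a nonce whose hash has a given number of leading zero bits, and the required difficulty grows as the client's request rate crosses thresholds. The thresholds and difficulty values are assumptions, and the filing does not prescribe this particular proof-of-work form.

```python
import hashlib
import itertools

RATE_TO_DIFFICULTY = [(5, 12), (20, 16), (100, 20)]   # (req/sec, leading zero bits)


def difficulty_for_rate(requests_per_second):
    """Pick the puzzle difficulty for the client's observed request rate."""
    bits = 8                                    # baseline difficulty
    for rate, harder in RATE_TO_DIFFICULTY:
        if requests_per_second >= rate:
            bits = harder
    return bits


def solve(challenge, bits):
    """Work the client must do: find a nonce giving `bits` leading zero bits."""
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") >> (256 - bits) == 0:
            return nonce


if __name__ == "__main__":
    bits = difficulty_for_rate(25)              # client is requesting aggressively
    print("difficulty:", bits, "nonce:", solve("challenge-abc123", bits))
```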
  • Publication number: 20160294678
    Abstract: Some embodiments provide redundancy and failover for accelerating and improving the processing of commands across a distributed platform. A distributed platform administrative server distributes commands to different distributed platform points-of-presence (PoPs) for execution. The administrative server distributes the commands over a first set of transit provider paths that connect the server to each PoP. The administrative server selects the first set of paths based on different addressing associated with each of the paths. If any of the first paths is unavailable or underperforming, the administrative server selects a second path by changing a destination address and resends the command to the particular PoP over the second path. Some embodiments further modify PoP server operation so that the PoP servers can identify commands issued according to the different path addressing and distribute such commands to all other servers of the same PoP upon identifying the different path addressing.
    Type: Application
    Filed: March 10, 2016
    Publication date: October 6, 2016
    Inventors: Amir Reza Khakpour, Derek Shiell
  • Publication number: 20160294681
    Abstract: Some embodiments override network or router level path selection with application or server controlled path selection by repurposing the type-of-service (ToS) or differentiated services header field. A mapping table maps different ToS values to different available transit provider paths to a particular destination. A server generating a packet to the destination selects one of the available paths according to any of load balanced, failover, or performance optimization criteria. The server sets the packet header ToS field with the value assigned to the selected path. A router operating in the same network as the server is configured with policy based routing rules that similarly map the ToS values to different transit provider paths to the particular destination network. Upon receiving the server generated packet, the router routes the packet to the destination network through the transit provider path identified in the packet header by the server set ToS value.
    Type: Application
    Filed: August 3, 2015
    Publication date: October 6, 2016
    Inventors: Amir Reza Khakpour, Alexander A. Kazerani, Robert J. Peters, Derek Shiell
  • Patent number: 9444718
    Abstract: A test network is provided to test updates to configurations and resources of a distributed platform and to warm servers prior to their deployment in the distributed platform. The test network tests and warms using real-time production traffic of the distributed platform in a manner that does not impact users or performance of the distributed platform. At least one distributed platform caching server passes content requests that it receives to the test network using a connectionless protocol. The test network includes a test server that is loaded with any of a configuration or resource under test or whose cache is to be loaded prior to the server's deployment into the distributed platform. The test network also includes a replay server that receives the requests from the caching server, distributes the requests to the test server, and monitors the test server responses.
    Type: Grant
    Filed: July 28, 2014
    Date of Patent: September 13, 2016
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Amir Khakpour, Robert J. Peters, Derek Shiell, Hossein Lotfi, Thomren Boyd
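A minimal sketch of the mirroring step described in the abstract above: a production caching server copies each request it receives to the test network's replay server over a connectionless protocol (UDP here), so a dropped or slow copy never affects the production response. The replay server address and request format are illustrative assumptions.

```python
import socket

REPLAY_SERVER = ("10.0.0.50", 7070)            # hypothetical replay server
_mirror_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)


def mirror_request(raw_request_line):
    """Fire-and-forget copy of a production request to the replay server."""
    try:
        _mirror_socket.sendto(raw_request_line.encode(), REPLAY_SERVER)
    except OSError:
        pass          # never let mirroring interfere with production traffic


if __name__ == "__main__":
    mirror_request("GET /assets/app.js HTTP/1.1\r\nHost: www.example.com")
```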
  • Publication number: 20160241670
    Abstract: Some embodiments provide instantaneous and non-blocking content purging across storage servers of a distributed platform. When a server receives a purge operation, it extracts an identifier from the purge operation. The server then generates a content purge pattern from the identifier and injects the pattern to its configuration. Instantaneous purging is then realized as the server averts access to any cached content identified by the pattern. The purging also occurs in a non-blocking fashion as the physical purge of the content occurs in-line with the server's cache miss operation. The content purge pattern causes the server to respond to a subsequently received content request with a cache miss, whereby the server retrieves the requested content from an origin source, serves the retrieved content to the requesting user, and replaces a previously cached copy of the content that is to be purged with the newly retrieved copy.
    Type: Application
    Filed: April 25, 2016
    Publication date: August 18, 2016
    Inventors: Derek Shiell, Robert J. Peters, Amir Khakpour, Alexander A. Kazerani
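A minimal sketch of the purge-pattern idea described in the abstract above (and in the granted patent that follows): a purge adds a pattern to the server's configuration, and any cached entry matching a pattern is treated as a cache miss and replaced in-line. The in-memory cache, pattern syntax, and origin fetch are stand-ins for illustration.

```python
import re
import time

purge_patterns = []            # (compiled pattern, time the purge was issued)
cache = {}                     # identifier -> (stored_at, content)


def purge(identifier_pattern):
    """Instantaneous purge: only record the pattern, touch no cached files."""
    purge_patterns.append((re.compile(identifier_pattern), time.time()))


def fetch_from_origin(identifier):
    return f"fresh copy of {identifier}"        # placeholder for an origin fetch


def get(identifier):
    entry = cache.get(identifier)
    if entry:
        stored_at, content = entry
        purged = any(p.search(identifier) and stored_at < issued
                     for p, issued in purge_patterns)
        if not purged:
            return content                      # normal cache hit
    content = fetch_from_origin(identifier)     # miss path replaces the old copy
    cache[identifier] = (time.time(), content)
    return content


if __name__ == "__main__":
    cache["/img/banner.png"] = (time.time() - 60, "stale copy")  # cached earlier
    purge(r"^/img/")                            # non-blocking: no file I/O here
    print(get("/img/banner.png"))               # served as a miss -> fresh copy
```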
  • Patent number: 9413842
    Abstract: Some embodiments provide instantaneous and non-blocking content purging across storage servers of a distributed platform. When a server receives a purge operation, it extracts an identifier from the purge operation. The server then generates a content purge pattern from the identifier and injects the pattern to its configuration. Instantaneous purging is then realized as the server averts access to any cached content identified by the pattern. The purging also occurs in a non-blocking fashion as the physical purge of the content occurs in-line with the server's cache miss operation. The content purge pattern causes the server to respond to a subsequently received content request with a cache miss, whereby the server retrieves the requested content from an origin source, serves the retrieved content to the requesting user, and replaces a previously cached copy of the content that is to be purged with the newly retrieved copy.
    Type: Grant
    Filed: September 25, 2013
    Date of Patent: August 9, 2016
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Derek Shiell, Robert J. Peters, Amir Khakpour, Alexander A. Kazerani
  • Publication number: 20160080445
    Abstract: Some embodiments provide a multi-tenant over-the-top multicast solution that integrates the per user stream customizability of unicast with the large scale streaming efficiencies of multicast. The solution involves an application, different multicast groups streaming an event with different customizations, and a manifest file or metadata identifying the different groups and customizations. The solution leverages the different multicast groups in order to provide different time shifts in the event stream, different quality level encodings of the event stream, and different secondary content to be included with a primary content stream. The application configured with the manifest file or metadata dynamically switches between the groups in order to customize the experience for a user or user device on which the application executes. Switching from multicast to unicast is also supported to supplement available customizations and for failover.
    Type: Application
    Filed: September 15, 2014
    Publication date: March 17, 2016
    Inventors: Alexander A. Kazerani, Jayson G. Sakata, Robert J. Peters, Amir Khakpour, Derek Shiell
  • Publication number: 20160028598
    Abstract: A test network is provided to test updates to configurations and resources of a distributed platform and to warm servers prior to their deployment in the distributed platform. The test network tests and warms using real-time production traffic of the distributed platform in a manner that does not impact users or performance of the distributed platform. At least one distributed platform caching server passes content requests that it receives to the test network using a connectionless protocol. The test network includes a test server that is loaded with any of a configuration or resource under test or whose cache is to be loaded prior to the server's deployment into the distributed platform. The test network also includes a replay server that receives the requests from the caching server, distributes the requests to the test server, and monitors the test server responses.
    Type: Application
    Filed: July 28, 2014
    Publication date: January 28, 2016
    Inventors: Amir Khakpour, Robert J. Peters, Derek Shiell, Hossein Lotfi, Thomren Boyd
  • Publication number: 20150324380
    Abstract: Some embodiments provide a file system for purging content based on a single traversal of the file system that identifies the directory containing the content without performing a secondary traversal within the directory to target the operation to only the files that are associated with the content, such that other files contained in the directory are unaffected. The file system supplements traditional directory structures with file-level directories. Each file-level directory is created to contain a root file associated with particular content, different variants of the particular content, and supporting files. Consequently, the file system can complete an operation targeting particular content by performing that operation on the file-level directory containing the particular content, thereby eliminating the need to conduct a file-by-file traversal of the containing directory as a prerequisite to identifying the files associated with the particular content and performing the operation on the files individually.
    Type: Application
    Filed: July 21, 2015
    Publication date: November 12, 2015
    Inventors: Derek Shiell, Robert J. Peters
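A minimal sketch of the file-level directory layout described in the abstract above: every piece of content gets its own directory holding the root file, its variants, and supporting files, so a purge is a single directory operation rather than a file-by-file traversal. The paths and variant names are assumptions for the sketch.

```python
import pathlib
import shutil

CACHE_ROOT = pathlib.Path("/tmp/cache-demo")    # illustrative cache location


def store(content_id, files):
    """Write the root file and each variant under one file-level directory."""
    content_dir = CACHE_ROOT / content_id
    content_dir.mkdir(parents=True, exist_ok=True)
    for name, data in files.items():
        (content_dir / name).write_bytes(data)
    return content_dir


def purge(content_id):
    """Purge the content and every variant with a single directory removal."""
    shutil.rmtree(CACHE_ROOT / content_id, ignore_errors=True)


if __name__ == "__main__":
    store("logo-png", {"root": b"...", "webp-variant": b"...", "headers": b"..."})
    purge("logo-png")            # one operation, no per-file traversal
```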