Patents by Inventor Derek Shiell

Derek Shiell has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10791157
    Abstract: Some embodiments provide a multi-tenant over-the-top multicast solution that integrates the per-user stream customizability of unicast with the large-scale streaming efficiencies of multicast. The solution involves an application, different multicast groups streaming an event with different customizations, and a manifest file or metadata identifying the different groups and customizations. The solution leverages the different multicast groups in order to provide different time shifts in the event stream, different quality level encodings of the event stream, and different secondary content to be included with a primary content stream. The application, configured with the manifest file or metadata, dynamically switches between the groups in order to customize the experience for a user or user device on which the application executes. Switching from multicast to unicast is also supported to supplement available customizations and for failover.
    Type: Grant
    Filed: September 1, 2017
    Date of Patent: September 29, 2020
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Alexander A. Kazerani, Jayson G. Sakata, Robert J. Peters, Amir Khakpour, Derek Shiell
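    Illustrative sketch: a minimal Python example of how a player application might use a manifest of this kind to pick a multicast group matching a requested quality and time shift, falling back to unicast when no group matches. The manifest layout, group addresses, and fallback URL are invented for illustration; the patent does not publish a schema.

      # Hypothetical manifest: each multicast group carries one customization of the event stream.
      MANIFEST = {
          "groups": [
              {"address": "239.1.1.1", "bitrate_kbps": 3000, "time_shift_s": 0},
              {"address": "239.1.1.2", "bitrate_kbps": 1500, "time_shift_s": 0},
              {"address": "239.1.1.3", "bitrate_kbps": 3000, "time_shift_s": 30},
          ],
          "unicast_fallback": "https://example.invalid/live/stream.m3u8",
      }

      def select_source(manifest, bitrate_kbps, time_shift_s):
          """Return the multicast group matching the requested customization,
          or the unicast fallback when no group matches."""
          for group in manifest["groups"]:
              if (group["bitrate_kbps"] == bitrate_kbps
                      and group["time_shift_s"] == time_shift_s):
                  return ("multicast", group["address"])
          return ("unicast", manifest["unicast_fallback"])

      print(select_source(MANIFEST, 3000, 30))  # ('multicast', '239.1.1.3')
      print(select_source(MANIFEST, 750, 0))    # ('unicast', 'https://example.invalid/live/stream.m3u8')
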
  • Patent number: 10715588
    Abstract: Multiple hit load balancing provides a quasi-persistent request distribution for encrypted requests passing over secure connections as well as for multiple requests passing over the same connection. The multiple hit load balancing involves tracking object demand at each server of a set of servers. The multiple hit load balancing further involves dynamically scaling the servers that cache and directly serve frequently requested objects based on the demand that is tracked by each of the servers. For infrequently requested objects, the servers perform a peer retrieval of the objects so as to limit the number of copies of the same object that are redundantly cached by multiple servers of the set of servers.
    Type: Grant
    Filed: January 29, 2018
    Date of Patent: July 14, 2020
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Derek Shiell, Marcel Eric Schechner Flores, Harkeerat Singh Bedi
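    Illustrative sketch: a rough Python rendering of the scaling decision described in the abstract, assuming each server keeps a simple per-object hit counter. The threshold, peer-selection rule, and origin interface are assumptions, not details from the patent.

      from collections import Counter

      HOT_THRESHOLD = 3  # hits before a server caches and serves the object itself (assumed)

      class Server:
          def __init__(self, name):
              self.name = name
              self.demand = Counter()  # per-object hit counts observed by this server
              self.cache = {}

          def handle(self, obj_key, peers, origin):
              self.demand[obj_key] += 1
              if obj_key in self.cache:
                  return self.cache[obj_key]
              if self.demand[obj_key] >= HOT_THRESHOLD:
                  # Frequently requested: cache locally so more servers serve it directly.
                  self.cache[obj_key] = origin(obj_key)
                  return self.cache[obj_key]
              # Infrequently requested: retrieve from a peer that already holds the object,
              # limiting redundant copies of cold objects across the set of servers.
              for peer in peers:
                  if obj_key in peer.cache:
                      return peer.cache[obj_key]
              data = origin(obj_key)
              self.cache[obj_key] = data  # first copy lives on a single server
              return data

      origin = lambda key: f"body of {key}"
      s1, s2 = Server("edge-1"), Server("edge-2")
      print(s1.handle("/video.ts", peers=[s2], origin=origin))
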
  • Publication number: 20200219011
    Abstract: The embodiments provide systems and methods for efficiently and accurately differentiating requests directed to uncacheable content from requests directed to cacheable content based on identifiers from the requests. The differentiation occurs without analysis or retrieval of the content being requested. Some embodiments hash identifiers of prior requests that resulted in uncacheable content being served in order to set indices within a Bloom filter. The Bloom filter then tracks prior uncacheable requests without storing each of the identifiers, so that subsequent requests for uncacheable content can be easily identified based on a hash of the request identifier and the set indices of the Bloom filter. Some embodiments produce a predictive model identifying uncacheable content requests by tracking various characteristics found in identifiers of prior requests that resulted in uncacheable content being served.
    Type: Application
    Filed: November 11, 2019
    Publication date: July 9, 2020
    Applicant: Verizon Digital Media Services Inc.
    Inventors: Hooman Mahyar, Amir Reza Khakpour, Derek Shiell, Robert J. Peters
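    Illustrative sketch: a self-contained Python version of the Bloom-filter idea, assuming request URLs serve as the identifiers. The hash construction, filter size, and number of hash functions are arbitrary choices for illustration.

      import hashlib

      class UncacheableFilter:
          """Tracks identifiers of requests that previously returned uncacheable content."""
          def __init__(self, size_bits=1 << 20, num_hashes=4):
              self.size = size_bits
              self.num_hashes = num_hashes
              self.bits = bytearray(size_bits // 8)

          def _indices(self, identifier):
              for i in range(self.num_hashes):
                  digest = hashlib.sha256(f"{i}:{identifier}".encode()).digest()
                  yield int.from_bytes(digest[:8], "big") % self.size

          def record_uncacheable(self, identifier):
              for idx in self._indices(identifier):
                  self.bits[idx // 8] |= 1 << (idx % 8)

          def probably_uncacheable(self, identifier):
              # May return false positives, never false negatives.
              return all(self.bits[idx // 8] & (1 << (idx % 8))
                         for idx in self._indices(identifier))

      f = UncacheableFilter()
      f.record_uncacheable("GET /api/session?user=1")
      print(f.probably_uncacheable("GET /api/session?user=1"))  # True
      print(f.probably_uncacheable("GET /static/logo.png"))     # False (with high probability)
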
  • Patent number: 10705978
    Abstract: Asynchronous file tracking may include a first process that adds files to a cache and that generates different instances of a tracking file to track the files as they are entered into the cache. A second process, executing on the device, asynchronously accesses one or more instances of the tracking file at a different rate than the first process generates the tracking file instances. The second process may update a record of cached files based on a set of entries from each of the different instances of the tracking file accessed by the second process. Each set of entries may identify a different set of files that are cached by the device. The second process may then purge one or more cached files that satisfy eviction criteria while the first process continues to asynchronously add files to the cache and create new instances to track the newly cached files.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: July 7, 2020
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Harkeerat Singh Bedi, Donnevan Scott Yeager, Derek Shiell, Hayes Kim
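    Illustrative sketch: a simplified, single-process Python walk-through of the two roles described in the abstract. The tracking-file naming, its JSON format, and the age-based eviction criterion are invented for illustration.

      import glob, json, os, tempfile, time

      CACHE_DIR = tempfile.mkdtemp()
      TRACK_DIR = tempfile.mkdtemp()

      def writer_add(name, data, seq):
          """First process: cache a file and log it in a new tracking-file instance."""
          with open(os.path.join(CACHE_DIR, name), "w") as fh:
              fh.write(data)
          entry = {"file": name, "cached_at": time.time()}
          with open(os.path.join(TRACK_DIR, f"track.{seq}.json"), "w") as fh:
              json.dump([entry], fh)

      def reaper_pass(record, max_age_s):
          """Second process: fold new tracking-file instances into the record of cached
          files, then purge cached files that satisfy the (illustrative) age criterion."""
          for path in sorted(glob.glob(os.path.join(TRACK_DIR, "track.*.json"))):
              with open(path) as fh:
                  for entry in json.load(fh):
                      record[entry["file"]] = entry["cached_at"]
              os.remove(path)  # this instance has been consumed
          now = time.time()
          for name, cached_at in list(record.items()):
              if now - cached_at > max_age_s:
                  os.remove(os.path.join(CACHE_DIR, name))
                  del record[name]

      record = {}
      writer_add("a.bin", "payload", seq=1)
      reaper_pass(record, max_age_s=3600)
      print(record)  # {'a.bin': <timestamp>}
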
  • Patent number: 10666709
    Abstract: A network device receives, from a customer, a customer subscription to a media transformation service; receives, from the customer as a first component of the subscription, data associated with customer media; and receives, from the customer as a second component of the subscription, one or more customer-selected parameters that specify media transformations to be performed upon the customer media. The network device receives, from a client browser, a request for the customer media, and transforms, responsive to receipt of the request from the client browser, the customer media based on the one or more customer-selected parameters to produce a transformed version of the customer media. The network device sends the transformed version of the customer media, via a content delivery network, to the client browser.
    Type: Grant
    Filed: October 23, 2018
    Date of Patent: May 26, 2020
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Brian W. Joe, Hayes Kim, Derek Shiell
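    Illustrative sketch: a toy Python rendering of the request-time flow, with invented parameter names (width, format) standing in for whatever transformations a customer actually selects, and a placeholder in place of a real image pipeline.

      # Hypothetical subscription: customer media plus customer-selected transformation parameters.
      SUBSCRIPTION = {
          "media": {"cat.jpg": b"<original image bytes>"},
          "params": {"width": 480, "format": "webp"},
      }

      def transform(media_bytes, params):
          # Stand-in for a real pipeline (resize, transcode, re-encode, ...).
          return media_bytes + f" [w={params['width']} fmt={params['format']}]".encode()

      def handle_request(name):
          """Transform the customer media on demand before it is delivered over the CDN."""
          original = SUBSCRIPTION["media"][name]
          return transform(original, SUBSCRIPTION["params"])

      print(handle_request("cat.jpg"))
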
  • Publication number: 20200133883
    Abstract: Asynchronous file tracking may include a first process that adds files to a cache and that generates different instances of a tracking file to track the files as they are entered into the cache. A second process, executing on the device, asynchronously accesses one or more instances of the tracking file at a different rate than the first process generates the tracking file instances. The second process may update a record of cached files based on a set of entries from each of the different instances of the tracking file accessed by the second process. Each set of entries may identify a different set of files that are cached by the device. The second process may then purge one or more cached files that satisfy eviction criteria while the first process continues to asynchronously add files to the cache and create new instances to track the newly cached files.
    Type: Application
    Filed: October 29, 2018
    Publication date: April 30, 2020
    Applicant: Verizon Digital Media Services Inc.
    Inventors: Harkeerat Singh Bedi, Donnevan Scott Yeager, Derek Shiell, Hayes Kim
  • Patent number: 10574520
    Abstract: A dynamic runtime reconfigurable server changes its operational logic as it performs tasks for different customers at different times based on different configurations defined by the customers. The embodiments reduce the resource overhead associated with reconfiguring and loading the different customer configurations into server memory at runtime. The server selectively loads different customer configurations from storage into server memory as the configurations are accessed in response to received client requests. The server also selectively preloads configurations into memory at periodic server restarts based on the configurations accessed during the interval prior to each restart. The restart allows the server to remove old configurations from memory and maintain the most recently accessed ones. Restarting is also performed without interrupting server operation in responding to user requests.
    Type: Grant
    Filed: July 12, 2017
    Date of Patent: February 25, 2020
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Dian Peng, Derek Shiell
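    Illustrative sketch: a Python outline of the lazy-load-and-preload behavior, assuming configurations come from a simple keyed store and that "accessed since the last restart" decides what gets preloaded. Both assumptions are for illustration only.

      class ConfigCache:
          def __init__(self, store):
              self.store = store     # customer_id -> configuration (stand-in for on-disk storage)
              self.loaded = {}       # configurations currently held in server memory
              self.accessed = set()  # customers served since the last restart

          def get(self, customer_id):
              """Load a customer configuration into memory the first time it is needed."""
              self.accessed.add(customer_id)
              if customer_id not in self.loaded:
                  self.loaded[customer_id] = self.store[customer_id]
              return self.loaded[customer_id]

          def restart(self):
              """Periodic restart: drop stale configurations, preload the recently used ones."""
              self.loaded = {cid: self.store[cid] for cid in self.accessed}
              self.accessed = set()

      cache = ConfigCache({"cust_a": {"gzip": True}, "cust_b": {"gzip": False}})
      cache.get("cust_a")
      cache.restart()
      print(list(cache.loaded))  # ['cust_a'] -- cust_b was never accessed, so it is not preloaded
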
  • Patent number: 10567540
    Abstract: Disclosed are systems and methods for performing consistent request distribution across a set of servers based on a request Uniform Resource Locator (URL) and one or more cache keys, wherein some but not all cache keys modify the content requested by the URL. The cache keys include query string parameters and header parameters. A request director parses a received request, excludes irrelevant cache keys, reorders relevant cache keys, and distributes the request to a server from the set of servers tasked with serving content differentiated from the request URL by the relevant cache keys. The exclusion and reordering preserve the consistent distribution of requests directed to the same URL but different content as a result of different cache keys, irrespective of the placement of the relevant cache keys and the inclusion of irrelevant cache keys in the request.
    Type: Grant
    Filed: May 1, 2019
    Date of Patent: February 18, 2020
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Donnevan Scott Yeager, Derek Shiell
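    Illustrative sketch: a Python version of the normalize-then-hash step, with an invented whitelist of relevant cache keys; the real director's key list and hashing scheme are not taken from the patent.

      import hashlib
      from urllib.parse import parse_qsl, urlsplit

      RELEVANT_KEYS = {"lang", "device"}  # cache keys assumed to actually change the content

      def normalized_cache_key(url, headers):
          parts = urlsplit(url)
          params = dict(parse_qsl(parts.query))
          params.update({k.lower(): v for k, v in headers.items()})
          # Exclude irrelevant keys and reorder the relevant ones deterministically.
          relevant = sorted((k, v) for k, v in params.items() if k in RELEVANT_KEYS)
          return parts.path + "?" + "&".join(f"{k}={v}" for k, v in relevant)

      def pick_server(url, headers, servers):
          key = normalized_cache_key(url, headers)
          digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
          return servers[digest % len(servers)]

      servers = ["edge-1", "edge-2", "edge-3"]
      # Same relevant keys in a different order, plus differing irrelevant tracking parameters:
      print(pick_server("/video.mp4?lang=en&utm_source=ad", {"Device": "tv"}, servers))
      print(pick_server("/video.mp4?utm_source=mail&lang=en", {"X-Trace": "1", "Device": "tv"}, servers))
      # Both requests map to the same server.
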
  • Patent number: 10567427
    Abstract: Some embodiments provide techniques for mitigating against layer 7 distributed denial of service attacks. Some embodiments submit a computationally intensive problem, also referred to as a bot detection problem, in response to a user request. Bots that lack the sophistication needed to render websites, or that are configured to not respond to the server response, will be unable to provide a solution to the problem, and their requests will therefore be denied. If the requesting user is a bot and has the sophistication to correctly solve the problem, the server will monitor the user request rate. For subsequent requests from that same user, the server can increase the difficulty of the problem when the request rate exceeds different thresholds. In so doing, the problem consumes greater resources of the user, slowing the rate at which the user can submit subsequent requests, and thereby preventing the user from overwhelming the server.
    Type: Grant
    Filed: April 17, 2018
    Date of Patent: February 18, 2020
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Derek Shiell, Amir Reza Khakpour, Robert J. Peters, David Andrews
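    Illustrative sketch: one common way to realize such a problem is a hashcash-style proof of work whose difficulty grows with the client's observed request rate. The problem type, thresholds, and difficulty values below are assumptions; the abstract does not specify them.

      import hashlib, time
      from collections import defaultdict

      request_counts = defaultdict(int)

      def difficulty_for(client_ip):
          """More requests from the same client -> more leading zero bits required."""
          n = request_counts[client_ip]
          if n > 100:
              return 22
          if n > 20:
              return 18
          return 12

      def issue_challenge(client_ip):
          request_counts[client_ip] += 1
          seed = f"{client_ip}:{time.time()}"
          return seed, difficulty_for(client_ip)

      def verify(seed, solution, difficulty):
          digest = hashlib.sha256(f"{seed}:{solution}".encode()).digest()
          return int.from_bytes(digest, "big") >> (256 - difficulty) == 0

      def solve(seed, difficulty):
          """The work a client must do before its request is served."""
          solution = 0
          while not verify(seed, solution, difficulty):
              solution += 1
          return solution

      seed, bits = issue_challenge("203.0.113.7")
      print(verify(seed, solve(seed, bits), bits))  # True
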
  • Patent number: 10530679
    Abstract: Some embodiments provide redundancy and failover for accelerating and improving the processing of commands across a distributed platform. A distributed platform administrative server distributes commands to different distributed platform points-of-presence (PoPs) for execution. The administrative server distributes the commands over a first set of transit provider paths that connect the server to each PoP. The administrative server selects the first set of paths based on different addressing associated with each of the paths. If any of the first paths is unavailable or underperforming, the administrative server selects a second path by changing a destination address and resends the command to the particular PoP over the second path. Some embodiments further modify PoP server operation so that the PoP servers can identify commands issued according to the different path addressing and distribute such commands to all other servers of the same PoP upon identifying the different path addressing.
    Type: Grant
    Filed: August 7, 2017
    Date of Patent: January 7, 2020
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Amir Reza Khakpour, Derek Shiell
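    Illustrative sketch: a Python outline of the retry-over-a-second-path idea, where a PoP is reachable at one address per transit provider and the administrative server fails over to the next address when a send does not succeed. The addresses, transport, and failure model are invented.

      # Each PoP is announced over several transit providers, so it has one address per path.
      POP_PATHS = {
          "lax": ["198.51.100.10", "203.0.113.10"],  # provider A, provider B (illustrative)
      }

      def send_command(address, command):
          """Stand-in for the real transport; pretend provider A is currently unreachable."""
          if address.startswith("198.51.100."):
              raise TimeoutError("path unavailable")
          return f"ack from {address} for {command!r}"

      def distribute(pop, command):
          last_error = None
          for address in POP_PATHS[pop]:  # try the preferred path first, then fail over
              try:
                  return send_command(address, command)
              except TimeoutError as err:
                  last_error = err
          raise last_error

      print(distribute("lax", "purge /index.html"))  # acknowledged over the second path
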
  • Publication number: 20190394512
    Abstract: Systems and methods provide low latency video streaming from a delivery platform via a distributed key-value store. The distributed key-value store is implemented by enhancing distribution devices of the delivery platform with a reflector. The reflector may detect publish messages for entering data into the distributed key-value store. The reflector may generate and issue a first message to cause a distribution device to enter a retrieval state/mode based on the first message specifying a request for the key-value of the publish message, and the request resulting in a cache miss at the distribution device. The reflector may also generate and issue a second message to cause the distribution device to store the key-value from the publish message in the key-value store. The second message may be a response to the first message request, and may include data from the key-value of the publish message.
    Type: Application
    Filed: June 25, 2018
    Publication date: December 26, 2019
    Inventors: David Andrews, Derek Shiell
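    Illustrative sketch: the reflector's two-message exchange played against an ordinary pull-through cache, in Python. The message names and the in-memory cache standing in for the distribution device are assumptions.

      class DistributionDevice:
          """A pull-through cache: a miss puts it into a retrieval state until a response arrives."""
          def __init__(self):
              self.cache = {}
              self.pending = set()

          def request(self, key):
              if key in self.cache:
                  return self.cache[key]
              self.pending.add(key)  # retrieval state: waiting for a response for this key
              return None

          def response(self, key, value):
              if key in self.pending:
                  self.cache[key] = value
                  self.pending.discard(key)

      def reflector_on_publish(device, key, value):
          # First message: a request for the published key, guaranteed to miss,
          # which places the device in its retrieval state for that key.
          device.request(key)
          # Second message: a response carrying the published value, which the
          # device stores exactly as it would store an origin response.
          device.response(key, value)

      device = DistributionDevice()
      reflector_on_publish(device, "segment/123", b"...video bytes...")
      print(device.request("segment/123"))  # served from the key-value store
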
  • Patent number: 10476800
    Abstract: A load balancing appliance distributes data packets across different virtual connections for ongoing communications with clients over a connectionless communication protocol, including User Datagram Protocol (UDP) or Quick UDP Internet Connections (QUIC). The load balancing appliance includes a distributor that binds to and listens on a port through which the connectionless traffic is received. The distributor distributes the traffic to a different set of managers at each interval based on each set of managers binding to and listening on a different set of ports.
    Type: Grant
    Filed: October 16, 2017
    Date of Patent: November 12, 2019
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Derek Shiell, Marcel Eric Schechner Flores, Sergio Leonardo Ruiz, David Andrews
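    Illustrative sketch: a Python outline of the port-rotation scheme, where the distributor always listens on the public port and hands each interval's traffic to a different set of manager ports. The interval length, port numbers, and per-client hashing are assumptions.

      import time

      PUBLIC_PORT = 443
      MANAGER_PORT_SETS = [[5001, 5002], [5003, 5004]]  # one set of manager ports per interval
      INTERVAL_S = 60

      def current_manager_ports(now=None):
          """Pick the manager port set for the current interval."""
          now = time.time() if now is None else now
          index = int(now // INTERVAL_S) % len(MANAGER_PORT_SETS)
          return MANAGER_PORT_SETS[index]

      def forward(datagram, client_addr):
          """Distributor: keep one client's datagrams on one manager within the interval's set."""
          ports = current_manager_ports()
          manager_port = ports[hash(client_addr) % len(ports)]
          return (manager_port, datagram)

      print(forward(b"quic initial", ("192.0.2.5", 61000)))
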
  • Patent number: 10474965
    Abstract: The embodiments provide systems and methods for efficiently and accurately differentiating requests directed to uncacheable content from requests directed to cacheable content based on identifiers from the requests. The differentiation occurs without analysis or retrieval of the content being requested. Some embodiments hash identifiers of prior requests that resulted in uncacheable content being served in order to set indices within a Bloom filter. The Bloom filter then tracks prior uncacheable requests without storing each of the identifiers, so that subsequent requests for uncacheable content can be easily identified based on a hash of the request identifier and the set indices of the Bloom filter. Some embodiments produce a predictive model identifying uncacheable content requests by tracking various characteristics found in identifiers of prior requests that resulted in uncacheable content being served.
    Type: Grant
    Filed: March 9, 2016
    Date of Patent: November 12, 2019
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Hooman Mahyar, Amir Reza Khakpour, Derek Shiell, Robert J. Peters
  • Publication number: 20190342420
    Abstract: The embodiments provide peer cache filling. The peer cache filling allocates a set of caching servers to distribute content in response to user requests, with a limited first subset of the set of servers having access to retrieve the content from an origin and a larger second subset of the set of servers retrieving the content from the first subset of servers without accessing the origin. The peer cache filling dynamically escalates and deescalates the allocation of the caching servers to the first and second subsets as demand for the content rises and falls. Peer cache filling is implemented by modifying request headers to identify designated hot content, provide a request identifier hash result for identifying the ordering of servers, and provide a value designating which servers in the ordering serve as primary servers with access to the origin.
    Type: Application
    Filed: July 15, 2019
    Publication date: November 7, 2019
    Applicant: Verizon Digital Media Services Inc.
    Inventors: Donnevan Scott Yeager, Derek Shiell
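    Illustrative sketch: a Python reading of the header-driven selection, assuming the per-request "ordering" is a rendezvous-style hash over server names and the escalation value is simply the number of primary servers. Both are assumptions about details the abstract leaves open.

      import hashlib

      SERVERS = ["cache-1", "cache-2", "cache-3", "cache-4", "cache-5"]

      def server_ordering(request_id):
          """Deterministic per-request ordering of servers from a hash of the request identifier."""
          def score(server):
              return hashlib.sha256(f"{request_id}:{server}".encode()).hexdigest()
          return sorted(SERVERS, key=score)

      def fill_source(request_id, server, num_primaries):
          """Primary servers may go to the origin; every other server fills from a primary peer."""
          ordering = server_ordering(request_id)
          primaries = ordering[:num_primaries]
          if server in primaries:
              return "origin"
          return primaries[0]  # fetch from a primary peer instead of the origin

      # Demand rises: escalating from 1 primary to 2 lets at most two servers hit the origin.
      print(fill_source("/hot/video.ts", "cache-3", num_primaries=1))
      print(fill_source("/hot/video.ts", "cache-3", num_primaries=2))
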
  • Publication number: 20190334992
    Abstract: The embodiments provide request multiplexing whereby a server receiving a first request for content clones the request and issues the cloned request to an origin to initiate retrieval of the content. The first request and subsequent requests for the same content are placed in a queue. The server empties a receive buffer that is populated with packets of the requested content as the packets arrive from the origin by writing the packets directly to local storage, without directly distributing packets from the receive buffer to any user. The rate at which the server empties the receive buffer is therefore independent of the rate at which any user receives the packets. A first set of packets written to local storage can then be simultaneously distributed to one or more queued requests as the server continues emptying the receive buffer and writing a second set of packets to local storage.
    Type: Application
    Filed: July 9, 2019
    Publication date: October 31, 2019
    Applicant: Verizon Digital Media Services Inc.
    Inventors: Sergio Leonardo Ruiz, Derek Shiell
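    Illustrative sketch: a condensed Python version of the queue-and-write-through flow; the chunk list standing in for the receive buffer, the in-memory "storage", and the tiny client class are simplifications of the real socket and disk handling.

      import io

      class Client:
          def __init__(self, name):
              self.name = name
              self.sent = 0
          def send(self, data):
              self.sent += len(data)
              print(f"{self.name} received {len(data)} bytes")

      class MultiplexedFetch:
          def __init__(self):
              self.waiters = []            # first and subsequent requests for the same content
              self.storage = io.BytesIO()  # stand-in for the local storage file

          def add_request(self, client):
              first = not self.waiters
              self.waiters.append(client)
              return "clone request to origin" if first else "queued behind in-flight fetch"

          def on_origin_packets(self, packets):
              """Empty the receive buffer straight into storage, then fan out what was written."""
              for packet in packets:
                  self.storage.write(packet)  # write rate is independent of any client's read rate
              written = self.storage.getvalue()
              for client in self.waiters:
                  client.send(written[client.sent:])
                  client.sent = len(written)

      fetch = MultiplexedFetch()
      print(fetch.add_request(Client("a")))
      print(fetch.add_request(Client("b")))
      fetch.on_origin_packets([b"chunk1", b"chunk2"])
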
  • Publication number: 20190260846
    Abstract: Disclosed are systems and methods for performing consistent request distribution across a set of servers based on a request Uniform Resource Locator (URL) and one or more cache keys, wherein some but not all cache keys modify the content requested by the URL. The cache keys include query string parameters and header parameters. A request director parses a received request, excludes irrelevant cache keys, reorders relevant cache keys, and distributes the request to a server from the set of servers tasked with serving content differentiated from the request URL by the relevant cache keys. The exclusion and reordering preserve the consistent distribution of requests directed to the same URL but different content as a result of different cache keys, irrespective of the placement of the relevant cache keys and the inclusion of irrelevant cache keys in the request.
    Type: Application
    Filed: May 1, 2019
    Publication date: August 22, 2019
    Applicant: Verizon Digital Media Services Inc.
    Inventors: Donnevan Scott Yeager, Derek Shiell
  • Publication number: 20190238627
    Abstract: Multiple hit load balancing provides a quasi-persistent request distribution for encrypted requests passing over secure connections as well as for multiple requests passing over the same connection. The multiple hit load balancing involves tracking object demand at each server of a set of servers. The multiple hit load balancing further involves dynamically scaling the servers that cache and directly serve frequently requested objects based on the demand that is tracked by each of the servers. For infrequently requested objects, the servers perform a peer retrieval of the objects so as to limit the number of copies of the same object that are redundantly cached by multiple servers of the set of servers.
    Type: Application
    Filed: January 29, 2018
    Publication date: August 1, 2019
    Inventors: Derek Shiell, Marcel Eric Schechner Flores, Harkeerat Singh Bedi
  • Patent number: 10367910
    Abstract: Some embodiments provide instantaneous and non-blocking content purging across storage servers of a distributed platform. When a server receives a purge operation, it extracts an identifier from the purge operation. The server then generates a content purge pattern from the identifier and injects the pattern into its configuration. Instantaneous purging is then realized as the server prevents access to any cached content identified by the pattern. The purging also occurs in a non-blocking fashion, as the physical purge of the content occurs in-line with the server's cache miss operation. The content purge pattern causes the server to respond to a subsequently received content request with a cache miss, whereby the server retrieves the requested content from an origin source, serves the retrieved content to the requesting user, and replaces a previously cached copy of the content that is to be purged with the newly retrieved copy.
    Type: Grant
    Filed: April 25, 2016
    Date of Patent: July 30, 2019
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Derek Shiell, Robert J. Peters, Amir Khakpour, Alexander A. Kazerani
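    Illustrative sketch: a Python rendering of the pattern-based purge, in which the purge operation only records a pattern and the stale copy is replaced later, in line with a normal cache miss. The glob-style pattern syntax and the timestamp comparison are illustrative choices.

      import fnmatch, time

      class Edge:
          def __init__(self, origin):
              self.origin = origin
              self.cache = {}   # url -> (cached_at, body)
              self.purges = []  # (pattern, purged_at) pairs injected by purge operations

          def purge(self, pattern):
              """Instantaneous and non-blocking: just record the pattern; nothing is deleted here."""
              self.purges.append((pattern, time.time()))

          def _is_purged(self, url, cached_at):
              return any(fnmatch.fnmatch(url, pat) and cached_at <= at
                         for pat, at in self.purges)

          def get(self, url):
              entry = self.cache.get(url)
              if entry and not self._is_purged(url, entry[0]):
                  return entry[1]
              # Treated as a cache miss: fetch fresh content and replace the stale copy in-line.
              body = self.origin(url)
              self.cache[url] = (time.time(), body)
              return body

      edge = Edge(origin=lambda url: f"fresh copy of {url}")
      edge.get("/img/logo.png")
      edge.purge("/img/*")
      print(edge.get("/img/logo.png"))  # refetched from the origin; old copy replaced
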
  • Patent number: 10362134
    Abstract: The embodiments provide peer cache filling. The peer cache filling allocates a set of caching servers to distribute content in response to user requests, with a limited first subset of the set of servers having access to retrieve the content from an origin and a larger second subset of the set of servers retrieving the content from the first subset of servers without accessing the origin. The peer cache filling dynamically escalates and deescalates the allocation of the caching servers to the first and second subsets as demand for the content rises and falls. Peer cache filling is implemented by modifying request headers to identify designated hot content, provide a request identifier hash result for identifying the ordering of servers, and provide a value designating which servers in the ordering serve as primary servers with access to the origin.
    Type: Grant
    Filed: August 15, 2016
    Date of Patent: July 23, 2019
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Donnevan Scott Yeager, Derek Shiell
  • Patent number: 10356175
    Abstract: The embodiments provide request multiplexing whereby a server receiving a first request for content clones the request and issues the cloned request to an origin to initiate retrieval of the content. The first request and subsequent requests for the same content are placed in a queue. The server empties a receive buffer that is populated with packets of the requested content as the packets arrive from the origin by writing the packets directly to local storage, without directly distributing packets from the receive buffer to any user. The rate at which the server empties the receive buffer is therefore independent of the rate at which any user receives the packets. A first set of packets written to local storage can then be simultaneously distributed to one or more queued requests as the server continues emptying the receive buffer and writing a second set of packets to local storage.
    Type: Grant
    Filed: August 16, 2016
    Date of Patent: July 16, 2019
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Sergio Leonardo Ruiz, Derek Shiell