Patents by Inventor Donnevan Scott Yeager

Donnevan Scott Yeager has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11451623
    Abstract: Provided is a controller for dynamically balancing load between different servers using different thresholds that are continually modified for each of the servers. The controller may generate a baseline load measure based on load measures received from the different servers, and may configure a first threshold for a first server and a second threshold for a second server based on the load measure at the first server deviating from the baseline load measure by a first amount that is greater than a second amount by which the load measure at the second server deviates from the baseline load measure. The controller may allocate an additional server to distribute first content with the first server in response to first content load at the first server satisfying the first threshold and the same load or a greater load of second content at the second server not satisfying the second threshold.
    Type: Grant
    Filed: May 25, 2021
    Date of Patent: September 20, 2022
    Assignee: Edgecast Inc.
    Inventors: Kyriakos Zarifis, Harkeerat Singh Bedi, Donnevan Scott Yeager, Derek Shiell
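    A minimal illustrative sketch of the threshold idea this abstract describes, in Python; the function names, baseline formula, and numeric parameters below are assumptions, not the claimed implementation. Each server's threshold tightens as its load deviates further above a fleet-wide baseline, and a server whose load satisfies its own threshold becomes a candidate for an additional allocated server.
    ```python
    from statistics import mean

    def compute_thresholds(loads, base_threshold=0.8, sensitivity=0.5):
        """Give each server a threshold that tightens as its load deviates
        further above the fleet-wide baseline load measure."""
        baseline = mean(loads.values())
        thresholds = {}
        for server, load in loads.items():
            deviation = (load - baseline) / baseline if baseline else 0.0
            thresholds[server] = base_threshold - sensitivity * max(deviation, 0.0)
        return thresholds

    def servers_needing_capacity(loads, thresholds):
        """Servers whose current load satisfies (meets or exceeds) their threshold."""
        return [s for s, load in loads.items() if load >= thresholds[s]]

    loads = {"edge-1": 0.95, "edge-2": 0.70, "edge-3": 0.65}
    print(servers_needing_capacity(loads, compute_thresholds(loads)))  # ['edge-1']
    ```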
  • Patent number: 11354271
    Abstract: A Multi-Threaded Indexed (“MTI”) file system may use a first set of threads, processes, or executable instances to index desired file attributes in a database while simultaneously but independently executing file operations with a second set of threads, processes, or executable instances. In response to receiving a file operation, the second set of threads, processes, or executable instances may query the database to directly identify files that are indirectly implicated by the file operation with a wildcard, regular expression, and/or other expression that indirectly identifies the files based on different file attributes, paths, name expressions, or combinations thereof. The second set of threads, processes, or executable instances is therefore able to identify the files implicated by the file operation based solely on the indexed file attributes already entered in the database, without the need to load and scan the metadata of files in directories targeted by the file operation.
    Type: Grant
    Filed: November 7, 2019
    Date of Patent: June 7, 2022
    Assignee: Edgecast Inc.
    Inventors: Donnevan Scott Yeager, Harkeerat Singh Bedi, Derek Shiell
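    A minimal sketch of the division of labor this abstract describes; the SQLite schema, queue, and helper names are assumptions rather than the patented file system. One thread indexes file attributes into a database while another resolves a wildcard file operation purely from that index, without loading or scanning directory metadata.
    ```python
    import queue
    import sqlite3
    import threading

    db = sqlite3.connect(":memory:", check_same_thread=False)
    db.execute("CREATE TABLE files (path TEXT PRIMARY KEY, size INTEGER, mtime REAL)")
    lock = threading.Lock()
    incoming = queue.Queue()

    def indexer():
        # First set of threads: record attributes as files enter the cache.
        while True:
            item = incoming.get()
            if item is None:
                break
            with lock:
                db.execute("INSERT OR REPLACE INTO files VALUES (?, ?, ?)", item)

    def resolve(pattern):
        # Second set of threads: answer a wildcard operation from the index alone.
        like = pattern.replace("*", "%").replace("?", "_")
        with lock:
            rows = db.execute("SELECT path FROM files WHERE path LIKE ?", (like,)).fetchall()
        return [row[0] for row in rows]

    worker = threading.Thread(target=indexer)
    worker.start()
    for entry in [("/cache/a.tmp", 10, 1.0), ("/cache/b.dat", 20, 2.0)]:
        incoming.put(entry)
    incoming.put(None)
    worker.join()
    print(resolve("/cache/*.tmp"))  # ['/cache/a.tmp']
    ```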
  • Publication number: 20220131933
    Abstract: Provided is a controller for dynamically balancing load between different servers using different thresholds that are continually modified for each of the servers. The controller may generate a baseline load measure based on load measures received from the different servers, and may configure a first threshold for a first server and a second threshold for a second server based on the load measure at the first server deviating from the baseline load measure by a first amount that is greater than a second amount by which the load measure at the second server deviates from the baseline load measure. The controller may allocate an additional server to distribute first content with the first server in response to first content load at the first server satisfying the first threshold and the same load or a greater load of second content at the second server not satisfying the second threshold.
    Type: Application
    Filed: May 25, 2021
    Publication date: April 28, 2022
    Applicant: Verizon Digital Media Services Inc.
    Inventors: Kyriakos Zarifis, Harkeerat Singh Bedi, Donnevan Scott Yeager, Derek Shiell
  • Patent number: 11025710
    Abstract: Provided is a controller for dynamically balancing load between different servers using different thresholds that are continually modified for each of the servers. The controller may generate a baseline load measure based on load measures received from the different servers, and may configure a first threshold for a first server and a second threshold for a second server based on the load measure at the first server deviating from the baseline load measure by a first amount that is greater than a second amount by which the load measure at the second server deviates from the baseline load measure. The controller may allocate an additional server to distribute first content with the first server in response to first content load at the first server satisfying the first threshold and the same load or a greater load of second content at the second server not satisfying the second threshold.
    Type: Grant
    Filed: October 26, 2020
    Date of Patent: June 1, 2021
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Kyriakos Zarifis, Harkeerat Singh Bedi, Donnevan Scott Yeager, Derek Shiell
  • Publication number: 20210141763
    Abstract: A Multi-Threaded Indexed (“MTI”) file system may use a first set of threads, processes, or executable instances to index desired file attributes in a database while simultaneously but independently executing file operations with a second set of threads, processes, or executable instances. In response to receiving a file operation, the second set of threads, processes, or executable instances may query the database to directly identify files that are indirectly implicated by the file operation with a wildcard, regular expression, and/or other expression that indirectly identifies the files based on different file attributes, paths, name expressions, or combinations thereof. The second set of threads, processes, or executable instances is therefore able to identify the files implicated by the file operation based solely on the indexed file attributes already entered in the database, without the need to load and scan the metadata of files in directories targeted by the file operation.
    Type: Application
    Filed: November 7, 2019
    Publication date: May 13, 2021
    Applicant: Verizon Digital Media Services Inc.
    Inventors: Donnevan Scott Yeager, Harkeerat Singh Bedi, Derek Shiell
  • Patent number: 10827027
    Abstract: The embodiments provide peer cache filling. The peer cache filling allocates a set of caching servers to distribute content in response to user requests, with a limited first subset of the set of servers having access to retrieve the content from an origin and with a larger second subset of the set of servers retrieving the content from the first subset of servers without accessing the origin. The peer cache filling dynamically escalates and deescalates the allocation of the caching servers to the first and second subsets as demand for the content rises and falls. Peer cache filling is implemented by modifying request headers to identify designated hot content, provide a request identifier hash result for identifying the ordering of servers, and provide a value designating which servers in the ordering serve as primary servers with access to the origin.
    Type: Grant
    Filed: July 15, 2019
    Date of Patent: November 3, 2020
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Donnevan Scott Yeager, Derek Shiell
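    A minimal sketch of the ordering idea this abstract describes; in the described system the hash result and primary designation travel in request headers, while here they are computed locally and the names are assumptions. A hash of the request identifier orders the servers, the first few are primaries with origin access, and every other server fills from a primary. Escalation then amounts to raising the primary count as demand grows and lowering it as demand falls.
    ```python
    import hashlib

    def server_ordering(request_id, servers):
        """Deterministic per-request ordering of the caching servers."""
        return sorted(servers, key=lambda s: hashlib.sha256(f"{request_id}:{s}".encode()).hexdigest())

    def fill_source(request_id, servers, primary_count, me):
        """Where this server should fetch designated hot content from."""
        order = server_ordering(request_id, servers)
        primaries = order[:primary_count]
        if me in primaries:
            return "origin"   # limited subset allowed to contact the origin
        return primaries[0]   # everyone else peers off a primary server

    servers = ["cache-a", "cache-b", "cache-c", "cache-d"]
    print(fill_source("GET /video/seg42.ts", servers, primary_count=1, me="cache-b"))
    ```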
  • Patent number: 10705978
    Abstract: Asynchronous file tracking may include a first process that adds files to a cache and that generates different instances of a tracking file to track the files as they are entered into the cache. A second process, executing on the same device, asynchronously accesses one or more instances of the tracking file at a different rate than the first process generates the tracking file instances. The second process may update a record of cached files based on a set of entries from each of the different instances of the tracking file accessed by the second process. Each set of entries may identify a different set of files that are cached by the device. The second process may then purge one or more cached files that satisfy eviction criteria while the first process continues to asynchronously add files to the cache and create new instances to track the newly cached files.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: July 7, 2020
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Harkeerat Singh Bedi, Donnevan Scott Yeager, Derek Shiell, Hayes Kim
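    A minimal sketch of the asynchrony this abstract describes; the on-disk layout, JSON format, and eviction test are assumptions for illustration. The caching process writes successive tracking-file instances, and an independent purge pass consumes whatever instances exist at its own cadence, folds them into its record of cached files, and evicts entries that meet the criteria.
    ```python
    import glob
    import json
    import os
    import time

    TRACK_DIR = "/tmp/tracking-demo"  # hypothetical location

    def write_tracking_instance(seq, cached_entries):
        # First process: one tracking-file instance per batch of newly cached files.
        os.makedirs(TRACK_DIR, exist_ok=True)
        with open(os.path.join(TRACK_DIR, f"track.{seq}.json"), "w") as fh:
            json.dump(cached_entries, fh)

    def purge_pass(record, should_evict):
        # Second process: read whatever instances exist right now, update the
        # record of cached files, then purge entries satisfying the eviction test.
        for path in sorted(glob.glob(os.path.join(TRACK_DIR, "track.*.json"))):
            with open(path) as fh:
                record.update(json.load(fh))
            os.remove(path)
        for name in [n for n, meta in record.items() if should_evict(meta)]:
            del record[name]  # the cached file itself would also be deleted here
        return record

    write_tracking_instance(1, {"a.bin": {"ts": time.time() - 3600}})
    write_tracking_instance(2, {"b.bin": {"ts": time.time()}})
    record = purge_pass({}, should_evict=lambda m: time.time() - m["ts"] > 1800)
    print(sorted(record))  # ['b.bin']
    ```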
  • Publication number: 20200133883
    Abstract: Asynchronous file tracking may include a first process that adds files to a cache and that generates different instances of a tracking file to track the files as they are entered into the cache. A second process, executing on the same device, asynchronously accesses one or more instances of the tracking file at a different rate than the first process generates the tracking file instances. The second process may update a record of cached files based on a set of entries from each of the different instances of the tracking file accessed by the second process. Each set of entries may identify a different set of files that are cached by the device. The second process may then purge one or more cached files that satisfy eviction criteria while the first process continues to asynchronously add files to the cache and create new instances to track the newly cached files.
    Type: Application
    Filed: October 29, 2018
    Publication date: April 30, 2020
    Applicant: Verizon Digital Media Services Inc.
    Inventors: Harkeerat Singh Bedi, Donnevan Scott Yeager, Derek Shiell, Hayes Kim
  • Patent number: 10567540
    Abstract: Disclosed are systems and methods for performing consistent request distribution across a set of servers based on a request Uniform Resource Locator (URL) and one or more cache keys, wherein some but not all cache keys modify the content requested by the URL. The cache keys include query string parameters and header parameters. A request director parses a received request, excludes irrelevant cache keys, reorders relevant cache keys, and distributes the request to a server from the set of servers tasked with serving content differentiated from the request URL by the relevant cache keys. The exclusion and reordering preserve the consistent distribution of requests directed to the same URL but different content as a result of different cache keys, irrespective of the placement of the relevant cache keys and the inclusion of irrelevant cache keys in the request.
    Type: Grant
    Filed: May 1, 2019
    Date of Patent: February 18, 2020
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Donnevan Scott Yeager, Derek Shiell
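    A minimal sketch of the normalization this abstract describes; the RELEVANT_KEYS set, hash choice, and server list are assumptions. Irrelevant query parameters are excluded, relevant ones are reordered canonically, and the normalized key drives a consistent server choice regardless of how the client ordered or padded the URL.
    ```python
    import hashlib
    from urllib.parse import parse_qsl, urlencode, urlsplit

    RELEVANT_KEYS = {"lang", "quality"}  # assumption: the keys that change content

    def cache_key(url):
        parts = urlsplit(url)
        params = sorted((k, v) for k, v in parse_qsl(parts.query) if k in RELEVANT_KEYS)
        return parts.path + ("?" + urlencode(params) if params else "")

    def pick_server(url, servers):
        digest = hashlib.sha256(cache_key(url).encode()).digest()
        return servers[int.from_bytes(digest[:8], "big") % len(servers)]

    servers = ["cache-1", "cache-2", "cache-3"]
    a = pick_server("/movie.mp4?quality=hd&lang=en&sessionid=123", servers)
    b = pick_server("/movie.mp4?lang=en&quality=hd&sessionid=999", servers)
    print(a == b)  # True: same content despite reordered and extra parameters
    ```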
  • Publication number: 20190342420
    Abstract: The embodiments provide peer cache filling. The peer cache filling allocates a set of caching servers to distribute content in response to user requests, with a limited first subset of the set of servers having access to retrieve the content from an origin and with a larger second subset of the set of servers retrieving the content from the first subset of servers without accessing the origin. The peer cache filling dynamically escalates and deescalates the allocation of the caching servers to the first and second subsets as demand for the content rises and falls. Peer cache filling is implemented by modifying request headers to identify designated hot content, provide a request identifier hash result for identifying the ordering of servers, and provide a value designating which servers in the ordering serve as primary servers with access to the origin.
    Type: Application
    Filed: July 15, 2019
    Publication date: November 7, 2019
    Applicant: Verizon Digital Media Services Inc.
    Inventors: Donnevan Scott Yeager, Derek Shiell
  • Publication number: 20190260846
    Abstract: Disclosed are systems and methods for performing consistent request distribution across a set of servers based on a request Uniform Resource Locator (URL) and one or more cache keys, wherein some but not all cache keys modify the content requested by the URL. The cache keys include query string parameters and header parameters. A request director parses a received request, excludes irrelevant cache keys, reorders relevant cache keys, and distributes the request to a server from the set of servers tasked with serving content differentiated from the request URL by the relevant cache keys. The exclusion and reordering preserve the consistent distribution of requests directed to the same URL but different content as a result of different cache keys, irrespective of the placement of the relevant cache keys and the inclusion of irrelevant cache keys in the request.
    Type: Application
    Filed: May 1, 2019
    Publication date: August 22, 2019
    Applicant: Verizon Digital Media Services Inc.
    Inventors: Donnevan Scott Yeager, Derek Shiell
  • Patent number: 10362134
    Abstract: The embodiments provide peer cache filling. The peer cache filling allocates a set of caching servers to distribute content in response to user requests, with a limited first subset of the set of servers having access to retrieve the content from an origin and with a larger second subset of the set of servers retrieving the content from the first subset of servers without accessing the origin. The peer cache filling dynamically escalates and deescalates the allocation of the caching servers to the first and second subsets as demand for the content rises and falls. Peer cache filling is implemented by modifying request headers to identify designated hot content, provide a request identifier hash result for identifying the ordering of servers, and provide a value designating which servers in the ordering serve as primary servers with access to the origin.
    Type: Grant
    Filed: August 15, 2016
    Date of Patent: July 23, 2019
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Donnevan Scott Yeager, Derek Shiell
  • Patent number: 10284674
    Abstract: Disclosed are systems and methods for performing consistent request distribution across a set of servers based on a request Uniform Resource Locator (URL) and one or more cache keys, wherein some but not all cache keys modify the content requested by the URL. The cache keys include query string parameters and header parameters. A request director parses a received request, excludes irrelevant cache keys, reorders relevant cache keys, and distributes the request to a server from the set of servers tasked with serving content differentiated from the request URL by the relevant cache keys. The exclusion and reordering preserve the consistent distribution of requests directed to the same URL but different content as a result of different cache keys, irrespective of the placement of the relevant cache keys and the inclusion of irrelevant cache keys in the request.
    Type: Grant
    Filed: January 23, 2017
    Date of Patent: May 7, 2019
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Donnevan Scott Yeager, Derek Shiell
  • Patent number: 10116737
    Abstract: Some embodiments provide a proprietary 64-bit consistent distribution scheme that preserves the efficiencies of CARP while providing a significantly more balanced distribution of requests that is on par with schemes reliant on computationally expensive cryptographic hashes. The scheme performs hashing of requested URLs and identifiers of available servers over a 64-bit space while optimizing the hashing to remove computationally expensive operations. Some embodiments provide a variant of the scheme to provide a differentiated distribution on the basis of one or more differentiating factors. A first variant utilizes load factor values to adjust the resulting hashes and to produce a first distribution of differentiated content that varies from a second distribution of undifferentiated content.
    Type: Grant
    Filed: February 3, 2016
    Date of Patent: October 30, 2018
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Donnevan Scott Yeager, Timothy W. Hartrick, Robert J. Peters
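    A minimal sketch of a CARP-style 64-bit rendezvous selection in the spirit of this abstract; the FNV-1a hash and mixing constant are generic stand-ins, not the patented optimized hash. The URL hash is combined with each server's hash over a 64-bit space, an optional load factor scales the score to produce a differentiated distribution, and the highest score wins.
    ```python
    MASK64 = (1 << 64) - 1

    def hash64(text):
        # FNV-1a as a stand-in 64-bit non-cryptographic hash.
        h = 0xCBF29CE484222325
        for byte in text.encode():
            h = ((h ^ byte) * 0x100000001B3) & MASK64
        return h

    def pick_server(url, servers, load_factors=None):
        """Highest combined 64-bit score wins; load factors skew the choice."""
        url_hash = hash64(url)
        best, best_score = None, -1.0
        for server in servers:
            combined = ((url_hash ^ hash64(server)) * 0x9E3779B97F4A7C15) & MASK64
            score = combined * (load_factors or {}).get(server, 1.0)
            if score > best_score:
                best, best_score = server, score
        return best

    servers = ["cache-1", "cache-2", "cache-3"]
    print(pick_server("http://example.com/asset.js", servers))
    print(pick_server("http://example.com/asset.js", servers, {"cache-2": 1.5}))
    ```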
  • Publication number: 20180213053
    Abstract: Disclosed are systems and methods for performing consistent request distribution across a set of servers based on a request Uniform Resource Locator (URL) and one or more cache keys, wherein some but not all cache keys modify the content requested by the URL. The cache keys include query string parameters and header parameters. A request director parses a received request, excludes irrelevant cache keys, reorders relevant cache keys, and distributes the request to a server from the set of servers tasked with serving content differentiated from the request URL by the relevant cache keys. The exclusion and reordering preserve the consistent distribution of requests directed to the same URL but different content as a result of different cache keys, irrespective of the placement of the relevant cache keys and the inclusion of irrelevant cache keys in the request.
    Type: Application
    Filed: January 23, 2017
    Publication date: July 26, 2018
    Inventors: Donnevan Scott Yeager, Derek Shiell
  • Publication number: 20180048731
    Abstract: The embodiments provide peer cache filling. The peer cache filling allocates a set of caching servers to distribute content in response to user requests, with a limited first subset of the set of servers having access to retrieve the content from an origin and with a larger second subset of the set of servers retrieving the content from the first subset of servers without accessing the origin. The peer cache filling dynamically escalates and deescalates the allocation of the caching servers to the first and second subsets as demand for the content rises and falls. Peer cache filling is implemented by modifying request headers to identify designated hot content, provide a request identifier hash result for identifying the ordering of servers, and provide a value designating which servers in the ordering serve as primary servers with access to the origin.
    Type: Application
    Filed: August 15, 2016
    Publication date: February 15, 2018
    Inventors: Donnevan Scott Yeager, Derek Shiell
  • Publication number: 20160164964
    Abstract: Some embodiments provide a proprietary 64-bit consistent distribution scheme that preserves the efficiencies of CARP while providing a significantly more balanced distribution of requests that is on par with schemes reliant on computationally expensive cryptographic hashes. The scheme performs hashing of requested URLs and identifiers of available servers over a 64-bit space while optimizing the hashing to remove computationally expensive operations. Some embodiments provide a variant of the scheme to provide a differentiated distribution on the basis of one or more differentiating factors. A first variant utilizes load factor values to adjust the resulting hashes and to produce a first distribution of differentiated content that varies from a second distribution of undifferentiated content.
    Type: Application
    Filed: February 3, 2016
    Publication date: June 9, 2016
    Inventors: Donnevan Scott Yeager, Timothy W. Hartrick, Robert J. Peters
  • Patent number: 9277005
    Abstract: Some embodiments provide a proprietary 64-bit consistent distribution scheme that preserves the efficiencies of CARP while providing a significantly more balanced distribution of requests that is on par with schemes reliant on computationally expensive cryptographic hashes. The scheme performs hashing of requested URLs and identifiers of available servers over a 64-bit space while optimizing the hashing to remove computationally expensive operations. Some embodiments provide a variant of the scheme to provide a differentiated distribution on the basis of one or more differentiating factors. A first variant utilizes load factor values to adjust the resulting hashes and to produce a first distribution of differentiated content that varies from a second distribution of undifferentiated content.
    Type: Grant
    Filed: January 9, 2013
    Date of Patent: March 1, 2016
    Assignee: EDGECAST NETWORKS, INC.
    Inventors: Donnevan Scott Yeager, Timothy W. Hartrick, Robert J. Peters
  • Publication number: 20140195686
    Abstract: Some embodiments provide a proprietary 64-bit consistent distribution scheme that preserves the efficiencies of CARP while providing a significantly more balanced distribution of requests that is on par with schemes reliant on computationally expensive cryptographic hashes. The scheme performs hashing of requested URLs and identifiers of available servers over a 64-bit space while optimizing the hashing to remove computationally expensive operations. Some embodiments provide a variant of the scheme to provide a differentiated distribution on the basis of one or more differentiating factors. A first variant utilizes load factor values to adjust the resulting hashes and to produce a first distribution of differentiated content that varies from a second distribution of undifferentiated content.
    Type: Application
    Filed: January 9, 2013
    Publication date: July 10, 2014
    Applicant: EDGECAST NETWORKS, INC.
    Inventors: Donnevan Scott Yeager, Timothy W. Hartrick, Robert J. Peters