Patents by Inventor Harkeerat Singh Bedi

Harkeerat Singh Bedi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10621110
    Abstract: Some embodiments modify caching server operation to evict cached content based on a deterministic and multifactor modeling of the cached content. The modeling produces eviction scores for the cached items. The eviction scores are derived from two or more factors of age, size, cost, and content type. The eviction scores determine what content is to be evicted based on the two or more factors included in the eviction score derivation. The eviction scores modify caching server eviction operation for specific traffic or content patterns. The eviction scores further modify caching server eviction operation for granular control over an item's lifetime on cache.
    Type: Grant
    Filed: June 26, 2018
    Date of Patent: April 14, 2020
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Harkeerat Singh Bedi, Amir Reza Khakpour, Robert J. Peters
  • Patent number: 10609173
    Abstract: Some embodiments set forth probability based caching, whereby a probability value determines in part whether content identified by an incoming request should be cached or not. Some embodiments further set forth probability based eviction, whereby a probability value determines in part whether cached content should be evicted from the cache. Selection of the content for possible eviction can be based on recency and/or frequency of the content being requested. The probability values can be configured manually or automatically. Automatic configuration involves using a function to compute the probability values. In such scenarios, the probability values can be computed as a function of any of fairness, cost, content size, and content type as some examples.
    Type: Grant
    Filed: April 16, 2019
    Date of Patent: March 31, 2020
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Amir Reza Khakpour, Harkeerat Singh Bedi
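Probability-based admission and eviction as described here could look roughly like the sketch below. The size-based admission probability and the eviction probability values are illustrative assumptions; the abstract says the probability can be a function of fairness, cost, content size, or content type.

```python
import random

def should_cache(size_bytes: int, p_base: float = 0.5) -> bool:
    """Admit an object to cache with some probability.
    Assumption: smaller objects are admitted more readily."""
    p = min(1.0, p_base * (1_000_000 / max(size_bytes, 1)))
    return random.random() < p

def maybe_evict(candidates: list[dict], p_evict: float = 0.3):
    """Select the least-recently-used candidate, then evict it only
    with probability p_evict (probability-based eviction)."""
    lru = min(candidates, key=lambda c: c["last_access"])
    if random.random() < p_evict:
        return lru
    return None
```

The effect is that one-hit-wonder objects are statistically kept out of cache, while popular objects survive repeated coin flips.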
  • Publication number: 20200036768
    Abstract: A server distributing a stream to a client device may use server-side metrics to detect issues that interrupt or otherwise affect playback of the stream by the client device. The server reproduces the issues experienced by the client device from the server-side metrics without accessing or using client-side metrics. The server-side metrics may include data that may be produced or obtained by the server such as requested stream segment filenames that identify changes in the stream bitrate, and timestamps at which the client device requests different segments. The client-side metrics may include metrics that are produced by the client device, and that directly identify the same client-side issues the server reproduces via the server-side metrics. The server or a distributed platform in which the server operates may dynamically alter the delivery of the stream or perform other remedial actions if the server detects various client-side issues from the server-side metrics.
    Type: Application
    Filed: July 27, 2018
    Publication date: January 30, 2020
    Applicant: Verizon Digital Media Services Inc.
    Inventors: Harkeerat Singh Bedi, Satheesh Ravi
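Detecting playback issues purely from server-side request logs could be sketched as below. The segment filename convention and the stall heuristic are assumptions for illustration; the abstract only says bitrate changes and request timestamps are inferred from requested segment filenames.

```python
import re

# Assumed filename convention: "<stream>_<bitrate>kbps_<seq>.ts"
SEG_RE = re.compile(r"_(\d+)kbps_(\d+)\.ts$")

def detect_issues(requests, segment_duration=4.0):
    """Infer client-side playback issues from server-side logs only.

    requests: list of (timestamp, filename) tuples as seen by the server.
    Returns a list of (timestamp, issue) tuples.
    """
    issues, last_bitrate, last_ts = [], None, None
    for ts, name in requests:
        m = SEG_RE.search(name)
        if not m:
            continue
        bitrate = int(m.group(1))
        # A request for a lower bitrate implies the client downshifted.
        if last_bitrate is not None and bitrate < last_bitrate:
            issues.append((ts, "bitrate downshift"))
        # A long gap between segment requests suggests a stall/rebuffer.
        if last_ts is not None and ts - last_ts > segment_duration * 2:
            issues.append((ts, "possible stall/rebuffer"))
        last_bitrate, last_ts = bitrate, ts
    return issues
```

A server or platform could feed these inferred issues into remedial actions (e.g., rerouting or capacity changes) without ever collecting client-side metrics.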
  • Publication number: 20200028927
    Abstract: Hybrid pull and push based streaming selectively performs a pull-based distribution of a stream to a first point-of-presence (“PoP”) of a distributed platform having low demand for the stream, and a push-based distribution of the stream to a second PoP of the distributed platform having high demand for the stream. The push-based distribution may be used to prepopulate the second PoP cache with the live stream data as the live stream data is uploaded from an encoder to a source PoP of the distributed platform, and before that live stream data is requested by the second PoP. In doing so, requests for the live stream data received at the second PoP may result in cache hits with the requested live stream data being immediately served from the second PoP cache without having to retrieve the live stream data from outside the second PoP.
    Type: Application
    Filed: July 19, 2018
    Publication date: January 23, 2020
    Applicant: Verizon Digital Media Services Inc.
    Inventors: Karthik Sathyanarayana, Harkeerat Singh Bedi, Sergio Leonardo Ruiz
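The hybrid pull/push selection could be sketched as below. The request-count threshold for classifying a PoP as high-demand is a hypothetical knob; the abstract does not specify how demand is measured.

```python
class PoP:
    def __init__(self, name: str):
        self.name = name
        self.cache = {}
        self.requests = 0

class SourcePoP:
    """Source PoP: pushes (prepopulates) segments to hot PoPs as they are
    ingested from the encoder; cold PoPs pull on cache miss."""
    def __init__(self, pops: list[PoP], hot_threshold: int = 100):
        self.pops = pops
        self.store = {}
        self.hot_threshold = hot_threshold

    def ingest(self, seg_name: str, data: bytes):
        self.store[seg_name] = data
        for pop in self.pops:                      # push-based distribution
            if pop.requests >= self.hot_threshold:
                pop.cache[seg_name] = data

    def serve(self, pop: PoP, seg_name: str):
        pop.requests += 1
        if seg_name in pop.cache:                  # hit: segment was prepopulated
            return pop.cache[seg_name], "hit"
        data = self.store[seg_name]                # pull-based distribution
        pop.cache[seg_name] = data
        return data, "miss"
```

In this sketch, requests arriving at a hot PoP hit cache immediately, matching the abstract's claim that pushed live stream data is served without leaving the second PoP.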
  • Publication number: 20190312948
    Abstract: Some embodiments set forth probability based caching, whereby a probability value determines in part whether content identified by an incoming request should be cached or not. Some embodiments further set forth probability based eviction, whereby a probability value determines in part whether cached content should be evicted from the cache. Selection of the content for possible eviction can be based on recency and/or frequency of the content being requested. The probability values can be configured manually or automatically. Automatic configuration involves using a function to compute the probability values. In such scenarios, the probability values can be computed as a function of any of fairness, cost, content size, and content type as some examples.
    Type: Application
    Filed: April 16, 2019
    Publication date: October 10, 2019
    Applicant: Verizon Digital Media Services Inc.
    Inventors: Amir Reza Khakpour, Harkeerat Singh Bedi
  • Publication number: 20190238627
    Abstract: Multiple hit load balancing provides a quasi-persistent request distribution for encrypted requests passing over secure connections as well as for multiple requests passing over the same connection. The multiple hit load balancing involves tracking object demand at each server of a set of servers. The multiple hit load balancing further involves dynamically scaling the servers that cache and directly serve frequently requested objects based on the demand that is tracked by each of the servers. For infrequently requested objects, the servers perform a peer retrieval of the objects so as to limit the number of copies of the same object being redundantly cached by multiple servers of the set of servers.
    Type: Application
    Filed: January 29, 2018
    Publication date: August 1, 2019
    Inventors: Derek Shiell, Marcel Eric Schechner Flores, Harkeerat Singh Bedi
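The demand-based scaling described in this abstract could be sketched as below. The hit threshold and the owner-selection hashing are illustrative assumptions standing in for whatever assignment scheme the claims cover.

```python
class EdgeServer:
    """Each server counts per-object demand. Below the threshold it does a
    peer retrieval from the object's designated owner (limiting redundant
    copies); at or above it, the object is cached and served locally."""
    def __init__(self, name: str, peers: list, hot_threshold: int = 2):
        self.name = name
        self.peers = peers            # stand-in for a consistent-hash ring
        self.hot_threshold = hot_threshold
        self.hits = {}
        self.cache = {}

    def owner(self, key: str):
        return self.peers[hash(key) % len(self.peers)]

    def get(self, key: str, origin):
        self.hits[key] = self.hits.get(key, 0) + 1
        if key in self.cache:
            return self.cache[key]
        if self.hits[key] >= self.hot_threshold:
            data = origin(key)        # demand is high: scale out the object
            self.cache[key] = data
            return data
        peer = self.owner(key)        # demand is low: peer retrieval
        return peer.get(key, origin) if peer is not self else origin(key)
```

Because demand is tracked per server rather than centrally, the scheme works even when TLS termination or connection reuse hides individual requests from a front-end load balancer, which is the problem the abstract frames.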
  • Patent number: 10270876
    Abstract: Some embodiments set forth probability based caching, whereby a probability value determines in part whether content identified by an incoming request should be cached or not. Some embodiments further set forth probability based eviction, whereby a probability value determines in part whether cached content should be evicted from the cache. Selection of the content for possible eviction can be based on recency and/or frequency of the content being requested. The probability values can be configured manually or automatically. Automatic configuration involves using a function to compute the probability values. In such scenarios, the probability values can be computed as a function of any of fairness, cost, content size, and content type as some examples.
    Type: Grant
    Filed: June 2, 2014
    Date of Patent: April 23, 2019
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Amir Reza Khakpour, Harkeerat Singh Bedi
  • Patent number: 10133673
    Abstract: The embodiments implement file size variance caching optimizations. The optimizations are based on a differentiated caching implementation involving a first cache optimized for small size content and a second cache optimized for large size content. The first cache reads and writes data using a first block size. The second cache reads and writes data using a different second block size that is larger than the first block size. A request management server controls request distribution across the first and second caches. The request management server differentiates large size content requests from small size content requests. The request management server uses a first request distribution scheme to restrict large size content request distribution across the first cache and a second request distribution scheme to restrict small size content request distribution across the second cache.
    Type: Grant
    Filed: March 9, 2016
    Date of Patent: November 20, 2018
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Harkeerat Singh Bedi, Amir Reza Khakpour, Derek Shiell
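The differentiated routing described in this abstract could be sketched as below. The size cutoff, block sizes, and server names are hypothetical; the abstract specifies only that small and large content requests are restricted to different block-size-optimized caches.

```python
# Hypothetical split: objects under 1 MiB go to the small-block cache tier,
# larger objects to the large-block tier.
SMALL_BLOCK = 4 * 1024          # 4 KiB reads/writes (assumed)
LARGE_BLOCK = 2 * 1024 * 1024   # 2 MiB reads/writes (assumed)
SIZE_CUTOFF = 1024 * 1024       # 1 MiB (assumed)

small_cache_servers = ["small-0", "small-1"]
large_cache_servers = ["large-0", "large-1"]

def route(url: str, content_length: int) -> str:
    """Request management: restrict small-object requests to the first
    cache tier and large-object requests to the second."""
    tier = (small_cache_servers if content_length < SIZE_CUTOFF
            else large_cache_servers)
    return tier[hash(url) % len(tier)]
```

Matching block size to object size avoids wasting large reads on tiny objects and avoids fragmenting large objects across many small blocks.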
  • Publication number: 20180314647
    Abstract: Some embodiments modify caching server operation to evict cached content based on a deterministic and multifactor modeling of the cached content. The modeling produces eviction scores for the cached items. The eviction scores are derived from two or more factors of age, size, cost, and content type. The eviction scores determine what content is to be evicted based on the two or more factors included in the eviction score derivation. The eviction scores modify caching server eviction operation for specific traffic or content patterns. The eviction scores further modify caching server eviction operation for granular control over an item's lifetime on cache.
    Type: Application
    Filed: June 26, 2018
    Publication date: November 1, 2018
    Inventors: Harkeerat Singh Bedi, Amir Reza Khakpour, Robert J. Peters
  • Publication number: 20180167434
    Abstract: The solution distributes the management of stream segments from a central storage cluster to different edge servers that upload stream segments to and receive stream segments from the central storage cluster. Each edge server tracks the stream segments it has uploaded to the central storage cluster as well as the expiration times for those segments. The tracking is performed without a database using a log file and file system arrangement. First-tier directories are created in the file system for different expiration intervals. Entries under the first-tier directories track individual segments that expire within the expiration interval of the first-tier directory with the file system entries being files or a combination of subdirectories and files. Upon identifying expired stream segments, the edge servers instruct the central storage cluster to delete those stream segments. This removes the management overhead from the central storage cluster and implements the distributed management without a database.
    Type: Application
    Filed: December 6, 2017
    Publication date: June 14, 2018
    Inventors: Karthik Sathyanarayana, Harkeerat Singh Bedi, Derek Shiell, Robert J. Peters
  • Publication number: 20170262373
    Abstract: The embodiments implement file size variance caching optimizations. The optimizations are based on a differentiated caching implementation involving a first cache optimized for small size content and a second cache optimized for large size content. The first cache reads and writes data using a first block size. The second cache reads and writes data using a different second block size that is larger than the first block size. A request management server controls request distribution across the first and second caches. The request management server differentiates large size content requests from small size content requests. The request management server uses a first request distribution scheme to restrict large size content request distribution across the first cache and a second request distribution scheme to restrict small size content request distribution across the second cache.
    Type: Application
    Filed: March 9, 2016
    Publication date: September 14, 2017
    Inventors: Harkeerat Singh Bedi, Amir Reza Khakpour, Derek Shiell
  • Publication number: 20150350365
    Abstract: Some embodiments set forth probability based caching, whereby a probability value determines in part whether content identified by an incoming request should be cached or not. Some embodiments further set forth probability based eviction, whereby a probability value determines in part whether cached content should be evicted from the cache. Selection of the content for possible eviction can be based on recency and/or frequency of the content being requested. The probability values can be configured manually or automatically. Automatic configuration involves using a function to compute the probability values. In such scenarios, the probability values can be computed as a function of any of fairness, cost, content size, and content type as some examples.
    Type: Application
    Filed: June 2, 2014
    Publication date: December 3, 2015
    Applicant: Edgecast Networks, Inc.
    Inventors: Amir Reza Khakpour, Harkeerat Singh Bedi