Patents Assigned to VERIZON DIGITAL MEDIA SERVICES INC.
  • Patent number: 10491509
    Abstract: Some embodiments move the task of selecting between different transit provider paths from the network level to the application level. Some embodiments perform network level configurations involving a destination network router advertising, over a first transit provider path, a unique first address identifying a destination network server as reachable via the first path, and advertising, over a second transit provider path, a unique second address identifying the destination network server as reachable via the second path. Some embodiments further perform application level configurations involving a source network server passing a first packet to the destination network server over the first path by addressing the first packet to the first address, and passing a second packet to the destination network server over the second path by addressing the second packet to the second address. The path selection may be based on policies accounting for congestion, performance, and other metrics.
    Type: Grant
    Filed: July 19, 2018
    Date of Patent: November 26, 2019
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Alexander A. Kazerani, Amir Reza Khakpour, Kyle Duren
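    A minimal Python sketch of the application-level path selection described in the abstract above; the path labels, addresses, port, and congestion-score policy are illustrative assumptions, not details from the patent.

      import socket

      # The destination network advertises a distinct address over each transit
      # provider path, so addressing a packet selects the path it travels.
      PATH_ADDRESSES = {
          "transit_a": "203.0.113.10",   # hypothetical address reachable via the first path
          "transit_b": "198.51.100.10",  # hypothetical address reachable via the second path
      }

      def select_path(congestion_scores):
          # Illustrative policy: pick the path with the lowest congestion score.
          return min(congestion_scores, key=congestion_scores.get)

      def send_over_selected_path(payload, congestion_scores, port=9000):
          path = select_path(congestion_scores)
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          try:
              # Steering happens purely by choice of destination address.
              sock.sendto(payload, (PATH_ADDRESSES[path], port))
          finally:
              sock.close()
          return path

      # send_over_selected_path(b"data", {"transit_a": 0.8, "transit_b": 0.3})  # uses transit_b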
  • Publication number: 20190356711
    Abstract: An encodingless transmuxer produces a manifest file and segments for streaming media content over different streaming protocols without encoding or otherwise modifying the binary data from the original encoding of the media content file. The transmuxer detects key frame positions from the media content file metadata. The transmuxer maps segment start times to a subset of the identified key frames based on a segment duration parameter. The transmuxer generates a manifest file listing the segments with each segment identifier comprising a timestamp specifying a time offset for the key frame at which the segment commences. In response to a request for a particular segment, the transmuxer or a streaming server copies or reuses from the original media content file, the binary data for the key frame that commences the particular segment up to the bit immediately before the start of the next segment key frame.
    Type: Application
    Filed: July 23, 2019
    Publication date: November 21, 2019
    Applicant: Verizon Digital Media Services Inc.
    Inventor: Seungyeob Choi
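    A minimal Python sketch of the key frame to segment mapping described in the abstract above; the manifest line format and segment naming scheme are illustrative assumptions, not the transmuxer's actual output.

      def map_segment_starts(keyframe_times, target_duration):
          # A key frame becomes a segment boundary once at least target_duration
          # seconds have passed since the previous boundary.
          starts, last = [], None
          for t in keyframe_times:
              if last is None or t - last >= target_duration:
                  starts.append(t)
                  last = t
          return starts

      def build_manifest(keyframe_times, target_duration):
          # Each segment identifier embeds the time offset of its starting key frame,
          # so serving it later is a byte-for-byte copy from the original file.
          return [f"segment_{start:.3f}.ts" for start in
                  map_segment_starts(keyframe_times, target_duration)]

      # build_manifest([0, 2, 4, 6, 8, 10, 12], 6)
      # -> ['segment_0.000.ts', 'segment_6.000.ts', 'segment_12.000.ts']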
  • Patent number: 10474965
    Abstract: The embodiments provide systems and methods for efficiently and accurately differentiating requests directed to uncacheable content from requests directed to cacheable content based on identifiers from the requests. The differentiation occurs without analysis or retrieval of the content being requested. Some embodiments hash identifiers of prior requests that resulted in uncacheable content being served in order to set indices within a bloom filter. The bloom filter then tracks prior uncacheable requests without storing each of the identifiers so that subsequent requests for uncacheable requests can be easily identified based on a hash of the request identifier and set indices of the bloom filter. Some embodiments produce a predictive model identifying uncacheable content requests by tracking various characteristics found in identifiers of prior requests that resulted in uncacheable content being served.
    Type: Grant
    Filed: March 9, 2016
    Date of Patent: November 12, 2019
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Hooman Mahyar, Amir Reza Khakpour, Derek Shiell, Robert J. Peters
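    A minimal Python sketch of the bloom-filter tracking described in the abstract above; the filter size and number of hash functions are illustrative assumptions.

      import hashlib

      class UncacheableFilter:
          # Tracks identifiers of requests that previously returned uncacheable
          # content, so later requests can be classified without fetching or
          # inspecting the content itself.

          def __init__(self, size=8192, hashes=3):
              self.size = size
              self.hashes = hashes
              self.bits = bytearray(size)

          def _indices(self, identifier):
              for i in range(self.hashes):
                  digest = hashlib.sha256(f"{i}:{identifier}".encode()).digest()
                  yield int.from_bytes(digest[:8], "big") % self.size

          def mark_uncacheable(self, identifier):
              for idx in self._indices(identifier):
                  self.bits[idx] = 1

          def probably_uncacheable(self, identifier):
              # False positives are possible; false negatives are not.
              return all(self.bits[idx] for idx in self._indices(identifier))

      # f = UncacheableFilter()
      # f.mark_uncacheable("/live/stream.m3u8?session=abc")
      # f.probably_uncacheable("/live/stream.m3u8?session=abc")  # True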
  • Patent number: 10476800
    Abstract: A load balancing appliance distributes data packets across different virtual connections for ongoing communications with clients over a connectionless communication protocol including User Datagram Protocol (UDP) or Quick UDP Internet Connections (QUIC). The load balancing appliance includes a distributor that binds to and listens on a port through which the connectionless traffic is received. The distributor distributes the traffic to a different set of managers at each interval based on each set of managers binding to and listening on a different set of ports.
    Type: Grant
    Filed: October 16, 2017
    Date of Patent: November 12, 2019
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Derek Shiell, Marcel Eric Schechner Flores, Sergio Leonardo Ruiz, David Andrews
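    A minimal Python sketch of the distributor behavior described in the abstract above; the port numbers, interval length, local forwarding, and hash-by-client-address choice within an interval are illustrative assumptions.

      import socket
      import time

      # Each set of managers listens on its own set of ports; the distributor
      # rotates which set receives newly arriving traffic each interval.
      MANAGER_PORT_SETS = [(9001, 9002), (9101, 9102), (9201, 9202)]
      INTERVAL_SECONDS = 10

      def current_manager_ports(now=None):
          # Select the manager port set for the current interval.
          now = time.time() if now is None else now
          slot = int(now // INTERVAL_SECONDS) % len(MANAGER_PORT_SETS)
          return MANAGER_PORT_SETS[slot]

      def distribute(listen_port=8443):
          # Bind to the public port and forward each datagram to a manager in the
          # currently active set, chosen here by hashing the client address.
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          sock.bind(("0.0.0.0", listen_port))
          while True:
              data, client = sock.recvfrom(65535)
              ports = current_manager_ports()
              target = ports[hash(client[0]) % len(ports)]
              sock.sendto(data, ("127.0.0.1", target))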
  • Publication number: 20190342420
    Abstract: The embodiments provide peer cache filling. The peer cache filling allocates a set of caching servers to distribute content in response to user requests, with a limited first subset of the set of servers having access to retrieve the content from an origin and with a larger second subset of the set of servers retrieving the content from the first subset of servers without accessing the origin. The peer cache filling dynamically escalates and deescalates the allocation of the caching servers to the first and second subsets as demand for the content rises and falls. Peer cache filling is implemented by modifying request headers to identify designated hot content, provide a request identifier hash result for identifying the ordering of servers, and provide a value designating which servers in the ordering serve as primary servers with access to the origin.
    Type: Application
    Filed: July 15, 2019
    Publication date: November 7, 2019
    Applicant: Verizon Digital Media Services Inc.
    Inventors: Donnevan Scott Yeager, Derek Shiell
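    A minimal Python sketch of the request-header annotation described in the abstract above; the header names and the hash-based server ordering are illustrative assumptions, not the patent's.

      import hashlib

      def annotate_hot_request(headers, url, servers, primary_count):
          # Mark a request for designated hot content, attach a hash of the request
          # identifier that fixes an ordering of the caching servers, and record how
          # many servers at the front of that ordering are primaries with origin access.
          digest = hashlib.md5(url.encode()).hexdigest()
          ordering = sorted(
              servers,
              key=lambda s: hashlib.md5(f"{s}:{digest}".encode()).hexdigest(),
          )
          headers = dict(headers)
          headers["X-Hot-Content"] = "1"
          headers["X-Request-Hash"] = digest
          headers["X-Primary-Count"] = str(primary_count)
          return headers, ordering

      # headers, order = annotate_hot_request({}, "/video/seg1.ts",
      #                                       ["cache1", "cache2", "cache3", "cache4"], 1)
      # order[:1] (the primary_count front of the ordering) may fetch from the origin;
      # the remaining servers fill their caches from those primaries.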
  • Publication number: 20190334992
    Abstract: The embodiments provide request multiplexing whereby a server receiving a first request for content clones the request and issues the cloned request to an origin to initiate retrieval of the content. The first request and subsequent requests for the same content are placed in a queue. The server empties a receive buffer that is populated with packets of the requested content as the packets arrive from the origin by writing the packets directly to local storage, without directly distributing packets from the receive buffer to any user. The rate at which the server empties the receive buffer is therefore independent of the rate at which any user receives the packets. A first set of packets written to local storage can then be simultaneously distributed to one or more queued requests as the server continues emptying the receive buffer and writing a second set of packets to local storage.
    Type: Application
    Filed: July 9, 2019
    Publication date: October 31, 2019
    Applicant: Verizon Digital Media Services Inc.
    Inventors: Sergio Leonardo Ruiz, Derek Shiell
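    A simplified Python sketch of the request multiplexing described in the abstract above; it collapses the streaming behavior (queued requests here are released only after the whole object is written), and the fetch and storage callback interfaces are assumptions.

      import threading

      class RequestMultiplexer:
          # The first request for a URL triggers one origin fetch; concurrent
          # requests for the same URL are queued and later served from the locally
          # written copy rather than from the origin receive buffer.

          def __init__(self, fetch_from_origin, write_to_storage):
              self.fetch_from_origin = fetch_from_origin    # callable(url) -> iterator of packets
              self.write_to_storage = write_to_storage      # callable(url, packet)
              self.lock = threading.Lock()
              self.in_flight = {}                           # url -> list of waiting events

          def request(self, url):
              with self.lock:
                  if url in self.in_flight:
                      done = threading.Event()
                      self.in_flight[url].append(done)
                      return done                           # caller waits, then reads local storage
                  self.in_flight[url] = []
              # Drain the origin receive buffer straight to local storage, independent
              # of how fast any queued client can read.
              for packet in self.fetch_from_origin(url):
                  self.write_to_storage(url, packet)
              with self.lock:
                  waiters = self.in_flight.pop(url)
              for done in waiters:
                  done.set()
              return None                                   # first requester already has the copy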
  • Patent number: 10447753
    Abstract: A scalable architecture is provided for decentralized scaling of resources in a media content encoding platform. The scalable architecture is comprised of a first slicing tier, a second broker tier, and a third encoding tier. Each tier can be horizontally and vertically scaled independent of one another. The second broker tier receives media content slices from the first slicing tier. The second broker tier retains the slices directly in main memory of different brokers without writing the slices to a database or disk. The brokers distribute the slices from main memory across the third encoding tier for encoding based on availability of different encoders in the third tier. This architecture improves overall encoding performance as some of the delays associated with managing and distributing the slices at the second tier are eliminated by operation of the brokers.
    Type: Grant
    Filed: October 13, 2016
    Date of Patent: October 15, 2019
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Grady Player, Calvin Ryan Owen, David Frederick Brueck
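    A minimal Python sketch of the broker tier described in the abstract above; the encoder interface (an object with an encode method) is an illustrative assumption.

      from collections import deque

      class Broker:
          # Slices from the slicing tier are held directly in memory (no database
          # or disk write) and handed to whichever encoder reports itself available.

          def __init__(self):
              self.slices = deque()        # in-memory only
              self.idle_encoders = deque()

          def receive_slice(self, media_slice):
              self.slices.append(media_slice)
              self._dispatch()

          def encoder_available(self, encoder):
              self.idle_encoders.append(encoder)
              self._dispatch()

          def _dispatch(self):
              # Pair queued slices with idle encoders as both become available.
              while self.slices and self.idle_encoders:
                  encoder = self.idle_encoders.popleft()
                  encoder.encode(self.slices.popleft())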
  • Publication number: 20190312948
    Abstract: Some embodiments set forth probability based caching, whereby a probability value determines in part whether content identified by an incoming request should be cached or not. Some embodiments further set forth probability based eviction, whereby a probability value determines in part whether cached content should be evicted from the cache. Selection of the content for possible eviction can be based on recency and/or frequency of the content being requested. The probability values can be configured manually or automatically. Automatic configuration involves using a function to compute the probability values. In such scenarios, the probability values can be computed as a function of any of fairness, cost, content size, and content type as some examples.
    Type: Application
    Filed: April 16, 2019
    Publication date: October 10, 2019
    Applicant: Verizon Digital Media Services Inc.
    Inventors: Amir Reza Khakpour, Harkeerat Singh Bedi
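    A minimal Python sketch of probability-based caching and eviction as described in the abstract above; the size-based probability formula is one illustrative automatic configuration, not the patent's.

      import random

      def should_cache(probability):
          # Admit the requested content into cache only with the given probability.
          return random.random() < probability

      def maybe_evict(candidates, probability):
          # candidates: items already ranked by recency and/or frequency; each is
          # evicted only with the given probability, so even candidates may survive.
          return [item for item in candidates if random.random() < probability]

      def size_based_probability(content_size, reference_size=1_000_000):
          # Illustrative automatic configuration: larger objects get a lower
          # caching probability (fairness, cost, and content type could feed in too).
          return min(1.0, reference_size / max(content_size, 1))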
  • Patent number: 10440156
    Abstract: Some embodiments provide a director agent, a server agent, and a specialized hand-off protocol for improving scalability and resource usage within a server farm. A first network connection is established between a client and the director agent in order to receive a content request from the client from which to select a server from a set of servers that is responsible for hosting the requested content. A second network connection is established between the server agent that is associated with the selected server and a protocol stack of the selected server. The first network connection is handed-off to the server agent using the specialized hand-off protocol. The server agent performs network connection state parameter transformations between the two connections to create a network connection through which content can be passed from the selected server to the client without passing through the director.
    Type: Grant
    Filed: May 1, 2017
    Date of Patent: October 8, 2019
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Timothy W. Hartrick, Alexander A. Kazerani, Jayson G. Sakata
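    A minimal Python sketch of the kind of connection state parameter transformation described in the abstract above, using a sequence-number offset as the transformed parameter; the field names and single-offset model are illustrative assumptions, not the patent's hand-off protocol.

      class ServerAgent:
          # The client negotiated its connection with the director, so the selected
          # server's sequence numbering differs from what the client expects; the
          # agent translates between the two so the server can reply to the client
          # directly, bypassing the director.

          def __init__(self, director_initial_seq, server_initial_seq):
              # Offset between the server stack's sequence space and the one the
              # client saw during the director handshake.
              self.delta = server_initial_seq - director_initial_seq

          def client_to_server(self, segment):
              segment = dict(segment)
              segment["ack"] += self.delta   # client acks director numbering; shift to the server's
              return segment

          def server_to_client(self, segment):
              segment = dict(segment)
              segment["seq"] -= self.delta   # server sends its own numbering; shift back
              return segment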
  • Publication number: 20190260846
    Abstract: Disclosed are systems and methods for performing consistent request distribution across a set of servers based on a request Uniform Resource Locator (URL) and one or more cache keys, wherein some but not all cache keys modify the content requested by the URL. The cache keys include query string parameters and header parameters. A request director parses a received request, excludes irrelevant cache keys, reorders relevant cache keys, and distributes the request to a server from the set of servers tasked with serving content differentiated from the request URL by the relevant cache keys. The exclusion and reordering preserve the consistent distribution of requests directed to the same URL but different content as a result of different cache keys, irrespective of the placement of the relevant cache keys and the inclusion of irrelevant cache keys in the request.
    Type: Application
    Filed: May 1, 2019
    Publication date: August 22, 2019
    Applicant: Verizon Digital Media Services Inc.
    Inventors: Donnevan Scott Yeager, Derek Shiell
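    A minimal Python sketch of the exclusion, reordering, and consistent distribution described in the abstract above; the set of relevant cache keys and the modulo-hash server selection are illustrative assumptions.

      import hashlib
      from urllib.parse import urlsplit, parse_qsl

      RELEVANT_KEYS = {"lang", "format"}   # hypothetical cache keys that change the content

      def canonical_request_id(url):
          # Drop irrelevant cache keys and sort the relevant ones so every variant
          # of the same request maps to the same identifier.
          parts = urlsplit(url)
          params = sorted((k, v) for k, v in parse_qsl(parts.query) if k in RELEVANT_KEYS)
          canon_query = "&".join(f"{k}={v}" for k, v in params)
          return f"{parts.path}?{canon_query}" if canon_query else parts.path

      def pick_server(url, servers):
          # Consistently map the canonical identifier to one server.
          digest = int(hashlib.md5(canonical_request_id(url).encode()).hexdigest(), 16)
          return servers[digest % len(servers)]

      # pick_server("/video.mp4?utm_source=x&lang=en", ["s1", "s2", "s3"]) and
      # pick_server("/video.mp4?lang=en&session=123", ["s1", "s2", "s3"]) hit the same server.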
  • Patent number: 10389840
    Abstract: Disclosed is a dynamically adaptable stream segment prefetcher for prefetching stream segments from different media streams with different segment name formats and with different positioning of the segment name iterator within the differing segment name formats. In response to receiving a client-issued request for a particular segment of a particular media stream, the prefetcher identifies the segment name format and iterator location by matching a regular expression to the client-issued request. The prefetcher then generates prefetch requests based on the segment name format by incrementing the current value of the iterator in the segment name of the client-issued request.
    Type: Grant
    Filed: December 28, 2016
    Date of Patent: August 20, 2019
    Assignee: Verizon Digital Media Services Inc.
    Inventor: Ravikiran Patil
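    A minimal Python sketch of the prefetcher described in the abstract above; the two segment name patterns and the prefetch count are illustrative assumptions.

      import re

      # Hypothetical patterns for two segment naming schemes; the "iter" group is
      # the part the prefetcher increments.
      SEGMENT_PATTERNS = [
          re.compile(r"^(?P<prefix>.*/seg_)(?P<iter>\d+)(?P<suffix>\.ts)$"),
          re.compile(r"^(?P<prefix>.*/chunk-)(?P<iter>\d+)(?P<suffix>\.m4s)$"),
      ]

      def prefetch_urls(requested_url, count=3):
          # Match the request against known segment name formats, locate the
          # iterator, and generate the next few segment URLs to prefetch.
          for pattern in SEGMENT_PATTERNS:
              m = pattern.match(requested_url)
              if not m:
                  continue
              width = len(m.group("iter"))          # preserve zero padding, if any
              current = int(m.group("iter"))
              return [
                  f"{m.group('prefix')}{current + i:0{width}d}{m.group('suffix')}"
                  for i in range(1, count + 1)
              ]
          return []

      # prefetch_urls("/live/stream1/seg_0042.ts")
      # -> ['/live/stream1/seg_0043.ts', '/live/stream1/seg_0044.ts', '/live/stream1/seg_0045.ts']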
  • Patent number: 10373219
    Abstract: Some embodiments provide a capacity exchange whereby capacity from different content delivery networks (CDNs) can be bought, sold, and traded. The capacity exchange is part of an “Open CDN” platform. The Open CDN platform federates the independent operation of CDNs and of other operators of, and service providers to, distributed platforms participating in the Open CDN platform so that each participant can (1) dynamically scale its capacity without incurring additional infrastructure costs, (2) expand its service into previously untapped geographic regions without physically establishing points of presence (POPs) at those geographic regions, and (3) reduce sunk costs associated with unused capacity of already deployed infrastructure by selling that unused capacity to other participants that are in need of additional capacity.
    Type: Grant
    Filed: August 18, 2014
    Date of Patent: August 6, 2019
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Ted Middleton, Alexander A. Kazerani
  • Patent number: 10367865
    Abstract: An encodingless transmuxer produces a manifest file and segments for streaming media content over different streaming protocols without encoding or otherwise modifying the binary data from the original encoding of the media content file. The transmuxer detects key frame positions from the media content file metadata. The transmuxer maps segment start times to a subset of the identified key frames based on a segment duration parameter. The transmuxer generates a manifest file listing the segments with each segment identifier comprising a timestamp specifying a time offset for the key frame at which the segment commences. In response to a request for a particular segment, the transmuxer or a streaming server copies or reuses from the original media content file, the binary data for the key frame that commences the particular segment up to the bit immediately before the start of the next segment key frame.
    Type: Grant
    Filed: July 28, 2016
    Date of Patent: July 30, 2019
    Assignee: Verizon Digital Media Services Inc.
    Inventor: Seungyeob Choi
  • Patent number: 10367910
    Abstract: Some embodiments provide instantaneous and non-blocking content purging across storage servers of a distributed platform. When a server receives a purge operation, it extracts an identifier from the purge operation. The server then generates a content purge pattern from the identifier and injects the pattern to its configuration. Instantaneous purging is then realized as the server averts access to any cached content identified by the pattern. The purging also occurs in a non-blocking fashion as the physical purge of the content occurs in-line with the server's cache miss operation. The content purge pattern causes the server to respond to a subsequently received content request with a cache miss, whereby the server retrieves the requested content from an origin source, serves the retrieved content to the requesting user, and replaces a previously cached copy of the content that is to be purged with the newly retrieved copy.
    Type: Grant
    Filed: April 25, 2016
    Date of Patent: July 30, 2019
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Derek Shiell, Robert J. Peters, Amir Khakpour, Alexander A. Kazerani
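    A minimal Python sketch of the pattern-based purge check described in the abstract above; representing purge patterns as regular expressions with timestamps is an illustrative assumption.

      import re
      import time

      class PurgeConfig:
          # A purge operation injects a pattern and timestamp into the server
          # configuration; any cached copy matching the pattern is treated as a miss
          # and replaced in-line with the normal cache-miss retrieval, so the purge
          # takes effect immediately without a blocking sweep of the cache.

          def __init__(self):
              self.purge_rules = []   # list of (compiled_pattern, purge_time)

          def add_purge(self, identifier_pattern):
              self.purge_rules.append((re.compile(identifier_pattern), time.time()))

          def is_purged(self, identifier, cached_at):
              # Cached before a matching purge was issued -> avert access, force a miss.
              return any(
                  pattern.search(identifier) and cached_at < purge_time
                  for pattern, purge_time in self.purge_rules
              )

      # cfg = PurgeConfig()
      # cfg.add_purge(r"^/images/")
      # cfg.is_purged("/images/logo.png", cached_at=time.time() - 60)  # True -> refetch from origin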
  • Patent number: 10362134
    Abstract: The embodiments provide peer cache filling. The peer cache filling allocates a set of caching servers to distribute content in response to user requests, with a limited first subset of the set of servers having access to retrieve the content from an origin and with a larger second subset of the set of servers retrieving the content from the first subset of servers without accessing the origin. The peer cache filling dynamically escalates and deescalates the allocation of the caching servers to the first and second subsets as demand for the content rises and falls. Peer cache filling is implemented by modifying request headers to identify designated hot content, provide a request identifier hash result for identifying the ordering of servers, and provide a value designating which servers in the ordering serve as primary servers with access to the origin.
    Type: Grant
    Filed: August 15, 2016
    Date of Patent: July 23, 2019
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Donnevan Scott Yeager, Derek Shiell
  • Patent number: 10356175
    Abstract: The embodiments provide request multiplexing whereby a server receiving a first request for content clones the request and issues the cloned request to an origin to initiate retrieval of the content. The first request and subsequent requests for the same content are placed in a queue. The server empties a receive buffer that is populated with packets of the requested content as the packets arrive from the origin by writing the packets directly to local storage, without directly distributing packets from the receive buffer to any user. The rate at which the server empties the receive buffer is therefore independent of the rate at which any user receives the packets. A first set of packets written to local storage can then be simultaneously distributed to one or more queued requests as the server continues emptying the receive buffer and writing a second set of packets to local storage.
    Type: Grant
    Filed: August 16, 2016
    Date of Patent: July 16, 2019
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Sergio Leonardo Ruiz, Derek Shiell
  • Publication number: 20190208554
    Abstract: Provided is a device that persistently distributes connectionless traffic across different simultaneously executing server instances in a manner that allows a first set of server instances of the device to commence a new first set of connectionless data streams during a first interval, and a different second set of server instances of the device to commence a different second set of connectionless data streams as the first set of server instances respond to ongoing connectionless data streams of the first set of connectionless data streams during a subsequent second interval. The persistent distribution further supports virtual connection migration by distributing, to the same server instance, data packets that are directed to the same connectionless data stream even when the sending user equipment changes addressing during the connectionless data stream.
    Type: Application
    Filed: March 11, 2019
    Publication date: July 4, 2019
    Applicant: Verizon Digital Media Services Inc.
    Inventors: Sergio Leonardo Ruiz, Derek Shiell
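    A minimal Python sketch of the persistent, connection-identifier-based distribution described in the abstract above; keying on a QUIC-style connection ID string and the per-interval instance sets are illustrative assumptions.

      import hashlib

      class PersistentDistributor:
          # Datagrams are keyed on a connection identifier carried in the packet
          # rather than on the client's source address, so a stream keeps landing on
          # the same server instance even if the client changes address mid-stream,
          # while streams started in a later interval go to that interval's active set.

          def __init__(self, instance_sets):
              self.instance_sets = instance_sets      # one set of instances per interval
              self.assignments = {}                   # connection id -> instance

          def route(self, connection_id, interval):
              if connection_id in self.assignments:
                  return self.assignments[connection_id]      # ongoing stream: stick
              active = self.instance_sets[interval % len(self.instance_sets)]
              digest = int(hashlib.md5(connection_id.encode()).hexdigest(), 16)
              instance = active[digest % len(active)]
              self.assignments[connection_id] = instance      # new stream: pin to active set
              return instance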
  • Patent number: 10326703
    Abstract: Some embodiments increase throughput across a connection between a host and a client by initializing the congestion window for that connection dynamically using a previously settled value from a prior instance of the connection established between the same or similar endpoints. An initialization agent tracks congestion window values for previously established connections between a host and various clients. For the tracked congestion window values of each monitored connection, the initialization agent stores an address identifying the client endpoint. When establishing a new connection, the initialization agent determines if the new connection is a recurring connection. A new connection is recurring when the new connection client address is similar or related to an address identified for a previous monitored connection.
    Type: Grant
    Filed: October 10, 2017
    Date of Patent: June 18, 2019
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Marcel Eric Schechner Flores, Amir Reza Khakpour, Robert J. Peters
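    A minimal Python sketch of the congestion window initialization described in the abstract above; grouping clients by /24 prefix and the default of 10 segments are illustrative assumptions.

      import ipaddress

      class CwndInitializer:
          # Remember the settled congestion window of finished connections, keyed by
          # the client's address prefix, and reuse it as the initial window when a
          # new connection from the same or a related address arrives.

          DEFAULT_INIT_CWND = 10   # segments

          def __init__(self):
              self.settled = {}    # prefix -> last settled congestion window

          def _prefix(self, client_ip):
              return ipaddress.ip_network(f"{client_ip}/24", strict=False)

          def record(self, client_ip, settled_cwnd):
              self.settled[self._prefix(client_ip)] = settled_cwnd

          def initial_cwnd(self, client_ip):
              # Recurring connection: start from the previously settled value.
              return self.settled.get(self._prefix(client_ip), self.DEFAULT_INIT_CWND)

      # init = CwndInitializer()
      # init.record("198.51.100.23", settled_cwnd=42)
      # init.initial_cwnd("198.51.100.77")   # 42, same /24 as the earlier connection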
  • Patent number: 10284674
    Abstract: Disclosed are systems and methods for performing consistent request distribution across a set of servers based on a request Uniform Resource Locator (URL) and one or more cache keys, wherein some but not all cache keys modify the content requested by the URL. The cache keys include query string parameters and header parameters. A request director parses a received request, excludes irrelevant cache keys, reorders relevant cache keys, and distributes the request to a server from the set of servers tasked with serving content differentiated from the request URL by the relevant cache keys. The exclusion and reordering preserve the consistent distribution of requests directed to the same URL but different content as a result of different cache keys, irrespective of the placement of the relevant cache keys and the inclusion of irrelevant cache keys in the request.
    Type: Grant
    Filed: January 23, 2017
    Date of Patent: May 7, 2019
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Donnevan Scott Yeager, Derek Shiell
  • Patent number: 10270876
    Abstract: Some embodiments set forth probability based caching, whereby a probability value determines in part whether content identified by an incoming request should be cached or not. Some embodiments further set forth probability based eviction, whereby a probability value determines in part whether cached content should be evicted from the cache. Selection of the content for possible eviction can be based on recency and/or frequency of the content being requested. The probability values can be configured manually or automatically. Automatic configuration involves using a function to compute the probability values. In such scenarios, the probability values can be computed as a function of any of fairness, cost, content size, and content type as some examples.
    Type: Grant
    Filed: June 2, 2014
    Date of Patent: April 23, 2019
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Amir Reza Khakpour, Harkeerat Singh Bedi