NETWORK PARAMETER CONFIGURATION BASED ON END USER DEVICE CHARACTERISTICS
Systems, methods, and software for operating a content node are provided herein. In one example, a method of operating a content node is presented. The method includes receiving a characteristic of an end user device, and configuring one or more transmission control protocol (TCP) parameters for communications with the end user device based on at least the characteristic of the end user device. The method also includes transferring the communications using the one or more TCP parameters for delivery to the end user device.
This application is a continuation of, and claims the benefit of priority to, U.S. patent application Ser. No. 14/331,333, filed on Jul. 15, 2014, and entitled NETWORK PARAMETER CONFIGURATION BASED ON END USER DEVICE CHARACTERISTICS, which itself claims the benefit of priority to U.S. Patent Application 61/846,821, entitled “TCP PARAMETER CONFIGURATION BASED ON END USER DEVICE CHARACTERISTICS,” and filed Jul. 16, 2013, both of which are hereby incorporated by reference in their entirety.
TECHNICAL FIELD
Aspects of the disclosure are related to the field of data transfer, and in particular, data transfer between content nodes of a content delivery network and end user devices.
TECHNICAL BACKGROUND
Network-provided content, such as Internet web pages or media content such as video, pictures, music, and the like, is typically served to end users via networked computer systems. End user requests for the network content are processed and the content is responsively provided over various network links. These networked computer systems can include origin hosting servers which originally host network content of content creators or originators, such as web servers for hosting a news website. However, these computer systems of individual content creators can become overloaded and slow due to frequent requests of content by end users.
Content delivery networks have been developed which add a layer of caching between the origin servers of the content providers and the end users. The content delivery networks typically have one or more content nodes distributed across a large geographic region to provide faster and lower latency access to the content for the end users. When end users request content, such as a web page, which is handled through a content node, the content node is configured to respond to the end user requests instead of the origin servers. In this manner, a content node can act as a proxy for the origin servers.
Content of the origin servers can be cached into the content nodes, and can be requested via the content nodes from the origin servers of the content originators when the content has not yet been cached. Content nodes usually cache only a portion of the original source content rather than caching all content or data associated with an original content source. The content nodes can thus maintain only recently accessed and most popular content as cached from the original content sources. Thus, content nodes exchange data with the original content sources when new or un-cached information is requested by the end users or if something has changed in the original content source data.
However, various slowdowns and latency problems in content nodes can exist due to components and software included in the content nodes, such as data storage using spinning hard disk drives, poor management of caching processes, and slow handling of changes to the original content and content configurations. Other slowdowns and latency problems exist due to the capabilities of the end user devices that are accessing content from the content nodes.
OVERVIEW
Systems, methods, and software for operating a content node are provided herein. In one example, a method of operating a content node is presented. The method includes receiving a characteristic of an end user device, and configuring one or more transmission control protocol (TCP) parameters for communications with the end user device based on at least the characteristic of the end user device. The method also includes transferring the communications using the one or more TCP parameters for delivery to the end user device.
In another example, a content delivery network for delivering content to an end user device is provided. The content delivery network includes a local data storage system configured to store content, and a content node coupled to the data storage system. The content node is configured to receive a characteristic of the end user device, configure one or more transmission control protocol (TCP) parameters for communications with the end user device based on at least the characteristic of the end user device, and transfer the communications using the one or more TCP parameters for delivery to the end user device.
In a further example, one or more computer readable storage media having program instructions stored thereon for delivering content to an end user device are provided. The program instructions, when executed by a content node, direct the content node to at least receive a characteristic of the end user device, configure one or more transmission control protocol (TCP) parameters for communications with the end user device based on at least the characteristic of the end user device, and transfer the communications using the one or more TCP parameters for delivery to the end user device.
Many aspects of the disclosure can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the views. While multiple embodiments are described in connection with these drawings, the disclosure is not limited to the embodiments disclosed herein. On the contrary, the intent is to cover all alternatives, modifications, and equivalents.
Network content, such as web page content, typically includes content such as text, hypertext markup language (HTML) pages, pictures, video, audio, code, scripts, or other content viewable by an end user in a browser or other application. This various network content can be stored and served by origin servers and equipment. The network content includes example website content referenced in
Content delivery networks can add a layer of caching between origin servers of the content providers and the end users. The content delivery networks typically have one or more content nodes distributed across a large geographic region to provide faster and lower latency local access to the content for the end users. When end users request content, such as a web page, a locally proximate content node will respond to the content request instead of the associated origin server. Various techniques can be employed to ensure the content node responds to content requests instead of the origin servers, such as associating web content of the origin servers with network addresses of the content nodes instead of network addresses of the origin servers using domain name system (DNS) registration and lookup procedures.
Some embodiments of content delivery systems use the Transmission Control Protocol (TCP) of the Internet protocol suite (TCP/IP) to transfer data to end user devices. TCP/IP allows for configuration of the TCP segments transferred between devices to optimize the speed and efficiency of the transfer. End user devices, in particular mobile devices, have a wide variety of characteristics affecting the speed at which data may be transferred to them. By configuring TCP headers appropriately, optimum transfer speeds may be obtained between the content delivery network and a mobile end user device. Although TCP parameters are discussed herein, it should be understood that these parameters can include other network parameters and content transmission parameters.
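As a non-limiting sketch (not part of the described embodiments), the following Python fragment shows how a content node could apply per-connection TCP settings through the standard socket API; the mapping from a reported radio mode to buffer sizes, and the field names in the device profile, are hypothetical illustrations.

```python
# Hedged sketch: tune an accepted TCP connection based on reported end user
# device characteristics. The profile keys and chosen values are assumptions.
import socket

def tune_connection(conn: socket.socket, device_profile: dict) -> None:
    # Larger send buffer for high-bandwidth cellular links, smaller otherwise.
    if device_profile.get("radio") in ("LTE", "4G"):
        send_buffer = 256 * 1024
    else:
        send_buffer = 64 * 1024
    conn.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, send_buffer)

    # Disable Nagle's algorithm for latency-sensitive interactive content.
    if device_profile.get("latency_sensitive"):
        conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
```

In practice the chosen values would come from measurement or operator policy rather than the fixed thresholds shown here.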
As a first example employing a content delivery network,
To further illustrate
Although
In
Content node 111 configures (202) one or more transmission control protocol (TCP) parameters based on the one or more characteristics 121 of the end user device 120. Example TCP parameters can include congestion settings and transmission window sizes. Other TCP parameters can include the various fields included in a TCP header. An example TCP header is illustrated in detail in
Content node 111 transfers (203) the communications using the TCP parameters for delivery to the end user device, such as end user device 120. The communications can include user content requested by the end user device. The communications can include at least one TCP segment transferred to end user device 120 including the customized TCP header fields. Each TCP segment can include a TCP header and a data section containing content carried from content node 111 to end user device 120.
In further examples, end user device 120 can execute a user application which provides content to end user device 120. The user application can be a browser or Internet web browser which can access any website via a uniform resource locator (URL). In other examples, the user application is a custom application for the specific content desired by the user, such as a news application or video streaming application. The custom user application can be configured to identify one or more characteristics of the end user device and transfer the one or more characteristics of the end user device for delivery to any of CN 111-112.
For example, end user device 120 can be presently communicating over a fourth generation (4G) wireless communication protocol, such as Long Term Evolution (LTE). The custom application executed on end user device 120 can identify the 4G mode and transfer an indication of this 4G mode for delivery to any of CN 111-112. Other characteristics can also be transferred by end user device 120. Responsively, any of CN 111-112 can process these characteristics, such as the 4G mode indicator, and modify TCP transfer settings based on these characteristics. For example, a TCP window size can be altered based on these characteristics. Other TCP parameters can be altered, as described herein. Typically, if a user of end user device 120 uses a web browser application, these characteristics are not provided to any of CN 111-112. Instead, a custom user application can be employed which is configured to identify and transfer these characteristics.
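A minimal sketch of that exchange follows, assuming the characteristics travel in a custom HTTP request header; the header name, JSON payload, and window lookup table are illustrative assumptions rather than details from this disclosure.

```python
# Client side (custom user application): attach device characteristics to a
# content request. The header name and payload shape are hypothetical.
import json
import urllib.request

def request_content(url: str) -> bytes:
    characteristics = {"radio": "4G", "memory_mb": 2048}  # example values
    req = urllib.request.Request(url)
    req.add_header("X-Device-Characteristics", json.dumps(characteristics))
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Content node side: map the reported radio mode to a TCP window size hint.
WINDOW_BY_RADIO = {"4G": 512 * 1024, "3G": 128 * 1024, "2G": 32 * 1024}

def select_window_size(characteristics: dict) -> int:
    return WINDOW_BY_RADIO.get(characteristics.get("radio"), 64 * 1024)
```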
As a second example employing a content delivery system,
To further illustrate
Although
Management system 360 handles configuration changes and status information collection and delivery for system operators and for the origin server operators or managers. For example, operator device 350 can transfer configuration 351 for delivery to management system 360, where configuration 351 can alter the handling of network content requests by CN 311-313, among other operations. Also, management system 360 can monitor status information for the operation of CN 311-313, such as operational statistics, and provide this status information as 353 to operator device 350. Furthermore, operator device 350 can transfer content 352 for delivery to origin servers 340-341 to include in content 345-346. Although one operator device 350 is shown in
End user device 402 responds to the request by sending one or more characteristics of end user device 402 to content node 404 (operation 416). Content node 404 processes the one or more characteristics of end user device 402 and configures TCP parameters based on the one or more characteristics (operation 418). Content node 404 then sends the content to end user device 402 using the TCP parameters (operation 420).
In the case where content node 404 does not have the content stored locally, content node 404 requests the content from origin server 406. Origin server 406 responsively delivers content 408 to end user device 402, and content node 404 may store some or all of the delivered content 408 locally for future requests.
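The cache-miss path described above can be summarized with a short, hypothetical sketch; the cache mapping and the origin-fetch callable are stand-ins rather than interfaces defined in this description.

```python
# Illustrative cache-miss handling at a content node: serve from the local
# cache when possible, otherwise fetch from the origin and optionally store.
def serve_request(url, cache, fetch_from_origin):
    if url in cache:                      # cache hit: respond locally
        return cache[url]
    content = fetch_from_origin(url)      # cache miss: ask the origin server
    cache[url] = content                  # keep a copy for future requests
    return content
```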
To further describe the operation of any of CN 311-313 of
Processing system 501 can include one or more of processor 511, RAM 512, and storage 513. In operation, processor 511 is operatively linked to communication interface 510, RAM 512, and storage 513. Processor 511 is capable of executing software stored in RAM 512 or storage 513. When executing the software, processor 511 drives CN 500 to operate as described herein. CN 500 can also include other elements, such as user interfaces, computer systems, databases, distributed storage and processing elements, and the like.
Processor 511 can be implemented within a single processing device but can also be distributed across multiple processing devices or sub-systems that cooperate in executing program instructions. Examples of processor 511 include general purpose central processing units, microprocessors, application specific processors, and logic devices, as well as any other type of processing device.
Communication interface 510 includes one or more network interfaces for communicating over communication networks, such as packet networks, the Internet, and the like. The network interfaces can include one or more local or wide area network communication interfaces which can communicate over Ethernet or Internet protocol (IP) links. Examples of communication interface 510 include network interface card equipment, transceivers, modems, and other communication circuitry.
RAM 512 and storage 513 together can comprise a data storage system, such as that illustrated in data storage system 320 in
Software stored on or in RAM 512 or storage 513 can comprise computer program instructions, firmware, or some other form of machine-readable processing instructions having processes that when executed by processor 511 direct CN 500 to operate as described herein. For example, software drives CN 500 to receive requests for content, determine if the content is stored in CN 500, retrieve content from origin servers, transfer content to end user devices, manage data storage systems for handling and storing the content, and configure a transmission control protocol (TCP) header based on the characteristic of the mobile device, among other operations. The software can also include user software applications. The software can be implemented as a single application or as multiple applications. In general, the software can, when loaded into processor 511 and executed, transform processor 511 from a general-purpose device into a special-purpose device customized as described herein.
RAM space 520 illustrates a detailed view of an example configuration of RAM 512. It should be understood that different configurations are possible. RAM space 520 includes applications 530, operating system (OS) 540, and content RAM cache 550. Content RAM cache 550 includes RAM space, such as dynamic random access memory (DRAM), for temporary storage of content received over content interface 531.
Applications 530 include content interface 531, configuration interface 532, statistics interface 533, and content caching application 534. Content caching application 534 handles caching of content and management of storage spaces, such as content RAM cache 550 and storage space 565, as well as the exchange of content, data, and instructions via content interface 531, configuration interface 532, and statistics interface 533. Content caching application 534 can comprise a custom application, Varnish caching software, hypertext transfer protocol (HTTP) accelerator software, or other content caching and storage applications, including variations, modifications, and improvements thereof. Applications 530 and OS 540 can reside in RAM space 520 during execution and operation of CN 500, and can reside in system software storage space 562 on solid state storage system 560 during a powered-off state, among other locations and states. Applications 530 and OS 540 can be loaded into RAM space 520 during a startup or boot procedure as described for computer operating systems and applications.
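For illustration only, a bounded least-recently-used store of the kind such a caching application might maintain for content RAM cache 550 could resemble the following sketch; this is neither Varnish nor the claimed application, and the byte-based capacity policy is an assumption.

```python
# Hypothetical in-memory content cache with LRU eviction by total size.
from collections import OrderedDict

class ContentRamCache:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.items = OrderedDict()        # maps content key -> bytes

    def get(self, key):
        if key not in self.items:
            return None                   # cache miss
        self.items.move_to_end(key)       # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.used -= len(self.items.pop(key))
        self.items[key] = value
        self.used += len(value)
        while self.used > self.capacity:  # evict least recently used entries
            _, evicted = self.items.popitem(last=False)
            self.used -= len(evicted)
```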
Content interface 531, configuration interface 532, and statistics interface 533 each allow a user to interact with and exchange data with content caching application 534. In some examples, each of content interface 531, configuration interface 532, and statistics interface 533 comprise an application programming interface (API). Content interface 531 allows for exchanging content for caching in CN 500 by content caching application 534, and can also receive instructions to purge or erase data from CN 500. Configuration interface 532 allows for altering the configuration of various operational features of content caching application 534. In some examples, configuration interface 532 comprises a scripting language interface, such as Varnish Configuration Language (VCL), Perl, PHP, Javascript, or other scripting or interpreted language-based interfaces. Statistics interface 533 allows for exchange of statistical information related to the operation of CN 500, such as cache hits/misses, cache fullness information, cache performance statistics, timing statistics, history metrics, among other statistical information. Content interface 531, configuration interface 532, and statistics interface 533 each can communicate with external systems via communication interface 510 over any associated network links.
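A similarly hedged sketch of the counters such a statistics interface could expose appears below; the counter names and JSON shape are assumptions for illustration, not the actual interface.

```python
# Hypothetical statistics surface tracking cache hits and misses.
import json

class CacheStatistics:
    def __init__(self):
        self.hits = 0
        self.misses = 0

    def record(self, hit):
        if hit:
            self.hits += 1
        else:
            self.misses += 1

    def snapshot(self):
        total = self.hits + self.misses
        return json.dumps({
            "cache_hits": self.hits,
            "cache_misses": self.misses,
            "hit_ratio": (self.hits / total) if total else None,
        })
```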
Solid state storage system 560 illustrates a detailed view of an example configuration of storage 513. Solid state storage system 560 can comprise flash memory such as NAND flash or NOR flash memory, among other solid state storage technologies. As shown in
It should be understood that content node 500 is generally intended to represent a computing system with which at least software 530 and 540 are deployed and executed in order to render or otherwise implement the operations described herein. However, content node 500 can also represent any computing system on which at least software 530 and 540 can be staged and from where software 530 and 540 can be distributed, transported, downloaded, or otherwise provided to yet another computing system for deployment and execution, or yet additional distribution.
TCP headers include the following fields:
- Source port (16 bits)—identifies the sending port.
- Destination port (16 bits)—identifies the receiving port.
- Sequence number (32 bits)—has a dual role:
  - If the SYN flag is set (1), then this is the initial sequence number. The sequence number of the actual first data byte and the acknowledged number in the corresponding ACK are this sequence number plus 1.
  - If the SYN flag is clear (0), then this is the accumulated sequence number of the first data byte of this segment for the current session.
- Acknowledgment number (32 bits)—if the ACK flag is set then the value of this field is the next sequence number that the receiver is expecting. This acknowledges receipt of all prior bytes (if any). The first ACK sent by each end acknowledges the other end's initial sequence number itself, but no data.
- Data offset (4 bits)—specifies the size of the TCP header in 32-bit words. The minimum header size is 5 words (20 bytes) and the maximum is 15 words (60 bytes), allowing for up to 40 bytes of options in the header. The Data offset is also the offset from the start of the TCP segment to the actual data.
- Reserved (3 bits)—reserved for future use and should be set to 0.
- Flags (9 bits) (aka Control bits)—contains 9 1-bit flags:
  - NS (1 bit)—ECN-nonce concealment protection (added to header by RFC 3540).
  - CWR (1 bit)—Congestion Window Reduced (CWR) flag is set by the sending host to indicate that it received a TCP segment with the ECE flag set and had responded in the congestion control mechanism (added to header by RFC 3168).
  - ECE (1 bit)—ECN-Echo indicates:
    - If the SYN flag is set (1), that the TCP peer is ECN capable.
    - If the SYN flag is clear (0), that a packet with the Congestion Experienced flag in the IP header set was received during normal transmission (added to header by RFC 3168).
  - URG (1 bit)—indicates that the Urgent pointer field is significant.
  - ACK (1 bit)—indicates that the Acknowledgment field is significant. All segments after the initial SYN segment sent by the client should have this flag set.
  - PSH (1 bit)—Push function. Asks to push the buffered data to the receiving application.
  - RST (1 bit)—Reset the connection.
  - SYN (1 bit)—Synchronize sequence numbers. Only the first segment sent from each end should have this flag set. Some other flags change meaning based on this flag, and some are only valid when it is set, and others when it is clear.
  - FIN (1 bit)—No more data from sender.
- Window size (16 bits)—the size of the receive window, which specifies the number of window size units (by default, bytes) (beyond the sequence number in the acknowledgment field) that the sender of this segment is currently willing to receive.
- Checksum (16 bits)—The 16-bit checksum field is used for error-checking of the header and data.
- Urgent pointer (16 bits)—if the URG flag is set, then this 16-bit field is an offset from the sequence number indicating the last urgent data byte.
- Options (Variable 0-320 bits, divisible by 32)—The length of this field is determined by the data offset field. Options have up to three fields: Option-Kind (1 byte), Option-Length (1 byte), Option-Data (variable). The Option-Kind field indicates the type of option, and is the only field that is not optional. Depending on what kind of option is being dealt with, the next two fields may be set: the Option-Length field indicates the total length of the option, and the Option-Data field contains the value of the option, if applicable. For example, an Option-Kind byte of 0x01 indicates that this is a No-Op option used only for padding, and does not have an Option-Length or Option-Data byte following it. An Option-Kind byte of 0 is the End Of Options option, and is also only one byte. An Option-Kind byte of 0x02 indicates that this is the Maximum Segment Size option, and will be followed by a byte specifying the length of the MSS field (should be 0x04). Note that this length is the total length of the given options field, including Option-Kind and Option-Length bytes. So while the MSS value is typically expressed in two bytes, the length of the field will be 4 bytes (+2 bytes of kind and length). In short, an MSS option field with a value of 0x05B4 will show up as (0x02 0x04 0x05B4) in the TCP options section.
- Padding—The TCP header padding is used to ensure that the TCP header ends and data begins on a 32 bit boundary. The padding is composed of zeros.
Content node 311 may configure one or more of these fields within TCP header 600 based on one or more characteristics of mobile end user device 330. These fields within the TCP header are configured to optimize data transfer between content node 311 and mobile end user device 330.
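To make the field layout above concrete, the following hedged sketch packs a base header plus a Maximum Segment Size option with Python's struct module; the port, sequence, and window values are arbitrary examples, and the checksum is left at zero (in a real implementation it is computed over a pseudo-header and the segment data).

```python
# Illustrative packing of a 24-byte TCP header (20-byte base header plus a
# 4-byte MSS option) following the field layout listed above.
import struct

def build_tcp_header(src_port, dst_port, seq, ack, window, flags=0x018):
    data_offset_words = 6                       # 5 words base + 1 word options
    offset_and_flags = (data_offset_words << 12) | (flags & 0x1FF)
    header = struct.pack(
        "!HHIIHHHH",
        src_port,          # Source port (16 bits)
        dst_port,          # Destination port (16 bits)
        seq,               # Sequence number (32 bits)
        ack,               # Acknowledgment number (32 bits)
        offset_and_flags,  # Data offset, reserved bits, and flags
        window,            # Window size (16 bits)
        0,                 # Checksum placeholder
        0,                 # Urgent pointer
    )
    # MSS option: Option-Kind 0x02, Option-Length 0x04, value 0x05B4 (1460).
    options = struct.pack("!BBH", 0x02, 0x04, 0x05B4)
    return header + options

# Example: a PSH/ACK-style header advertising a 64 KB receive window.
segment = build_tcp_header(443, 51000, 1000, 2000, window=65535)
assert len(segment) == 24
```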
Referring back to
End user devices 330-332 can each be a user device, subscriber equipment, customer equipment, access terminal, smartphone, personal digital assistant (PDA), computer, tablet computing device, e-book, Internet appliance, media player, game console, or some other user communication apparatus, including combinations thereof.
Communication links 370-375 each use metal, glass, optical, air, space, or some other material as the transport media. Communication links 370-375 can each use various communication protocols, such as Time Division Multiplex (TDM), asynchronous transfer mode (ATM), Internet Protocol (IP), Ethernet, synchronous optical networking (SONET), hybrid fiber-coax (HFC), circuit-switched, communication signaling, wireless communications, or some other communication format, including combinations, improvements, or variations thereof. Communication links 370-375 can each be a direct link or can include intermediate networks, systems, or devices, and can include a logical network link transported over multiple physical links. Although one main link for each of links 370-375 is shown in
The included descriptions and figures depict specific embodiments to teach those skilled in the art how to make and use the best mode. For the purpose of teaching inventive principles, some conventional aspects have been simplified or omitted. Those skilled in the art will appreciate variations from these embodiments that fall within the scope of the invention. Those skilled in the art will also appreciate that the features described above can be combined in various ways to form multiple embodiments. As a result, the invention is not limited to the specific embodiments described above, but only by the claims and their equivalents.
Claims
1. A method of operating a server in a content network, the method comprising:
- receiving, over a connection between the server and a user application on an end-user device, a request for content from the user application;
- sending, to the user application, a request for device information;
- receiving a reply to the request for the device information, wherein the reply comprises one or more characteristics of the end-user device;
- configuring one or more parameters of the connection based on at least the one or more characteristics of the end user device; and
- sending the content to the user application over the connection and in accordance with the one or more parameters.
2. The method of claim 1 wherein the connection comprises a transmission control protocol (TCP) connection.
3. The method of claim 2 wherein the one or more characteristics of the end-user device comprise memory capacity and available memory.
4. The method of claim 3 wherein the one or more parameters of the connection comprise a TCP window size.
5. The method of claim 1 wherein the user application comprises a video streaming application.
6. The method of claim 1 wherein the user application comprises a news application.
7. A computer server comprising:
- one or more computer readable storage media;
- one or more processors operatively coupled with the one or more computer readable storage media; and
- program instructions stored on the one or more computer readable storage media that, when executed by the one or more processors, direct the computer server to at least:
- receive, over a connection between the server and a user application on an end-user device, a request for content from the user application;
- send, to the user application, a request for device information;
- receive a reply to the request for the device information, wherein the reply comprises one or more characteristics of the end-user device;
- configure one or more parameters of the connection based on at least the one or more characteristics of the end user device; and
- send the content to the user application over the connection and in accordance with the one or more parameters.
8. The computer server of claim 7 wherein the connection comprises a transmission control protocol (TCP) connection.
9. The computer server of claim 8 wherein the one or more characteristics of the end-user device comprise memory capacity and available memory.
10. The computer server of claim 9 wherein the one or more parameters of the connection comprise a TCP window size.
11. The computer server of claim 7 wherein the user application comprises a video streaming application.
12. The computer server of claim 7 wherein the user application comprises a news application.
13. A computing apparatus comprising:
- one or more computer readable storage media;
- one or more processors operatively coupled with the one or more computer readable storage media; and
- a user application comprising program instructions stored on the one or more computer readable storage media that, when executed by the one or more processors, direct the computing apparatus to at least:
- send, over a connection between the computing apparatus and a content server, a request for content served by the content server;
- receive, from the content server, a request for device information;
- send a reply to the request for the device information, wherein the reply comprises one or more characteristics of the computing apparatus; and
- receive the content from the content server over the connection, wherein the connection is configured in accordance with one or more parameters based on at least the one or more characteristics of the computing apparatus.
14. The computing apparatus of claim 13 wherein the connection comprises a transmission control protocol (TCP) connection.
15. The computing apparatus of claim 14 wherein the one or more characteristics of the computing apparatus comprise memory capacity and available memory.
16. The computing apparatus of claim 15 wherein the one or more parameters of the connection comprise a TCP window size.
17. The computing apparatus of claim 13 wherein the user application comprises a video streaming application.
18. The computing apparatus of claim 13 wherein the user application comprises a news application.
19. The computing apparatus of claim 13 wherein the content server comprises a cache node in a content delivery network.
20. A system comprising:
- a means for receiving, over a connection between a server and a user application on an end-user device, a request for content from the user application;
- a means for sending, to the user application, a request for device information;
- a means for receiving a reply to the request for the device information, wherein the reply comprises one or more characteristics of the end-user device;
- a means for configuring one or more parameters of the connection based on at least the one or more characteristics of the end user device; and
- a means for sending the content to the user application over the connection and in accordance with the one or more parameters.
Type: Application
Filed: Sep 13, 2019
Publication Date: Jan 2, 2020
Inventors: Artur Bergman (San Francisco, CA), Simon Wistow (Oakland, CA), Tyler B. McMullen (San Francisco, CA)
Application Number: 16/570,671