DATA PACKET NETWORK
The invention includes a network node, and a method of controlling the network node in a data packet network, the method comprising the steps of: receiving a first data packet from a first external network node; analysing the first data packet to determine if it is of a class of service deemed to be queuable or unqueuable; and, if it is queuable, sending the first data packet to a second external node via an intermediate node, wherein the first data packet is reclassified to be of the unqueuable class of service such that the intermediate node should not forward the first data packet to the second external network node if a packet queue exists at the intermediate node.
The present invention relates to a data packet network and to a method of controlling packets in a data packet network.
BACKGROUND
A majority of networks in use today use discrete data packets which are transferred between a sender and receiver node via one or more intermediate nodes. A common problem in these data packet networks is that the sender node has little or no information on the available capacity in the data packet network, and thus cannot immediately determine the appropriate transmission rate at which it may send data packets. The appropriate transmission rate would be the maximum rate at which data packets can be sent without causing congestion in the network, which would otherwise cause some of the data packets to be dropped and can also cause data packets on other data flows (e.g. between other pairs of nodes which share one or more intermediate nodes along their respective transmission paths) to be dropped.
To address this problem, nodes in data packet networks use either closed-loop or open-loop congestion control algorithms. Closed-loop algorithms rely on congestion feedback being supplied to the sender node, allowing it to determine or estimate the appropriate rate at which to send future data packets. However, this congestion feedback can become stale in a very short amount of time, as other pairs of nodes in the network (sharing one or more intermediate nodes along their transmission paths) may start or stop data flows at any time. Accordingly, the congestion feedback can quickly become outdated, and the closed-loop algorithms then fail to accurately predict the appropriate rate at which to send data packets. This shortcoming becomes ever more serious as the capacities of links in data packet networks increase, since ever larger swings in available capacity and congestion can then occur.
Open-loop congestion control algorithms are commonly used at the start of a new data flow, when there is little or no congestion information from the network. One of the most common congestion control algorithms is the Transmission Control Protocol, TCP, ‘Slow-Start’ algorithm for Internet Protocol, IP, networks, which has an initial exponential growth phase followed by a congestion avoidance phase. When a new TCP Slow-Start flow begins, the sender's congestion window (a value representing an estimate of the congestion on the network) is set to an initial value and a first set of packets is sent to the receiver node. The receiver node sends back an acknowledgement to the sender node for each data packet it receives. During the initial exponential growth phase, the sender node increases its congestion window by one packet for every acknowledgment packet received. The congestion window, and thus the transmission rate, is therefore doubled every round trip time. Once the congestion window reaches the sender node's Slow-Start Threshold (‘ssthresh’), the exponential growth phase ends and the congestion avoidance phase begins, in which the congestion window is increased by only one packet per round trip in which an acknowledgement is received, regardless of how many acknowledgment packets arrive. If at any point an acknowledgement packet (or its absence) indicates that a loss has occurred, which is likely due to congestion on the network, then the sender node responds by halving the congestion window in an attempt to reduce the amount of congestion caused by that particular data flow. However, the sender node receives this feedback (i.e. the acknowledgment packet indicating that a loss has occurred) one round trip time after its transmission rate exceeded the available capacity. By the time it receives this feedback it will already be sending data twice as fast as the available capacity. This is known as ‘overshoot’.
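By way of illustration only, the window arithmetic of this algorithm can be sketched in a few lines of Python. The initial window and ssthresh values below are assumptions for illustration, not those of any particular TCP implementation.

```python
# Minimal sketch of the Slow-Start window dynamics described above.
# cwnd and ssthresh follow common TCP naming; initial values are assumed.

def on_ack(state):
    """Grow the congestion window for one received acknowledgement."""
    if state["cwnd"] < state["ssthresh"]:
        # Exponential growth phase: +1 packet per ACK, which doubles
        # the window (and hence the transmission rate) every round trip.
        state["cwnd"] += 1.0
    else:
        # Congestion avoidance: roughly +1 packet per round trip,
        # i.e. +1/cwnd per ACK, however many ACKs arrive.
        state["cwnd"] += 1.0 / state["cwnd"]

def on_loss(state):
    """A detected loss halves the congestion window."""
    state["cwnd"] = max(1.0, state["cwnd"] / 2.0)

state = {"cwnd": 3.0, "ssthresh": 64.0}  # illustrative initial values
for _ in range(10):                      # ten ACKs arrive during start-up
    on_ack(state)
print(state["cwnd"])                     # 13.0: still in exponential growth
```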
The exponential growth phase can cause issues with non-TCP traffic. Consider the case of a low-rate (e.g. 64 kb/s) constant bit-rate voice flow in progress over an otherwise empty 1 Gb/s link. Further imagine that a large TCP flow starts on the same link with an initial congestion window of ten 1500 B packets and a round trip time of 200 ms. The flow keeps doubling its congestion window every round trip until, after nearly eleven round trips, its window is 16,666 packets per round (1 Gb/s). In the next round it will double to 2 Gb/s before it gets the first feedback detecting drops that imply it exceeded the available capacity in the network a round trip earlier. About 50% of the packets in this next round (16,666 packets) will be dropped.
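The figures in this example can be checked by replaying the doubling with the values stated above:

```python
# Reproducing the arithmetic of the example, using only the stated figures:
# ten 1500 B packets initially, a 200 ms round trip and a 1 Gb/s link.

PACKET_BYTES = 1500
RTT_SECONDS = 0.2
LINK_BPS = 1e9

window = 10      # packets sent per round trip
rounds = 0
while window * PACKET_BYTES * 8 / RTT_SECONDS < LINK_BPS:
    window *= 2  # exponential growth phase doubles the window each round
    rounds += 1

print(rounds, window)
# Prints "11 20480": the sending rate first exceeds 1 Gb/s during the
# eleventh doubling, consistent with the ~16,666-packet window quoted above.
```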
In this example, the TCP Slow-Start algorithm has taken eleven round-trip times (over two seconds) to find its correct operating rate. Furthermore, when TCP drops such a large number of packets, it can take a long time to recover, sometimes leading to a black-out of many more seconds. The voice flow is also likely to black-out for at least 200 ms and often much longer, due to at least 50% of the voice packets being dropped over this period.
There are thus two main issues with the overshoot problem. Firstly, it takes a long time for data flows to stabilise at an appropriate rate for the available network capacity and, secondly, a very large amount of damage occurs to any data flow having a transmission path sharing the now congested part of the network.
Further concepts of data packet networks will now be described.
A node typically has a receiver for receiving data packets, a transmitter for transmitting data packets, and a buffer for storing data packets. When the node receives a data packet at the receiver, it is temporarily stored in the buffer. If there are no other packets currently stored in the buffer (i.e. the new packet is not in a ‘queue’) then the packet is immediately forwarded to the transmitter. If there are other packets in the buffer such that the new packet is in a queue, then it must wait its turn before being forwarded to the transmitter. A few concepts regarding the management and exploitation of node buffers will now be described.
A node implementing a very basic management technique for its buffer would simply store any arriving packet in its buffer until it reaches capacity. At this point, any data packet which is larger than the remaining capacity of the buffer will be discarded. This is known as drop-tail. However, this results in larger packets being dropped more often than smaller packets, which may still be added to the end of the buffer queue. An improvement on this technique is a process known as Active Queue Management (AQM), in which data packets are dropped when it is detected that the queue of packets in the buffer is starting to grow beyond a threshold, but before the buffer is full. This leaves the buffer sufficient capacity to absorb bursts of packets, even during long-running data flows.
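The difference between the two policies can be sketched as follows. The byte capacities and the linear drop probability are assumptions for illustration; practical AQM schemes (such as RED) use smoothed queue measurements rather than the instantaneous queue length used here.

```python
import random

BUFFER_CAPACITY = 64_000  # bytes (assumed)
AQM_THRESHOLD = 48_000    # bytes; dropping begins above this level (assumed)

def drop_tail_discards(queue_bytes, packet_bytes):
    # Discard only when the arriving packet no longer fits in the buffer.
    return queue_bytes + packet_bytes > BUFFER_CAPACITY

def aqm_discards(queue_bytes, packet_bytes):
    # Begin dropping before the buffer is full, preserving headroom
    # so that bursts of packets can still be absorbed.
    if queue_bytes + packet_bytes > BUFFER_CAPACITY:
        return True
    if queue_bytes > AQM_THRESHOLD:
        # Drop probability rises as the queue grows beyond the threshold.
        excess = (queue_bytes - AQM_THRESHOLD) / (BUFFER_CAPACITY - AQM_THRESHOLD)
        return random.random() < excess
    return False
```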
Some nodes may treat every data packet in their buffers the same, such that data packets are transmitted in the same sequence in which they were received (known as “First In First Out”). However, node buffer management techniques introduced the concept of marking data packets with different classes of service. This technique can be used by defining certain classes as higher than others, and a network node can then implement a forwarding function that prevents or mitigates the loss or delay of packets in a higher class at the expense of a packet in a lower class. Examples of techniques that manage packet buffers using differing classes of service include the following (a sketch combining two of these techniques follows the list):
- (Non-strict) Prioritisation: In this technique, higher class packets will be forwarded by a network node before a lower class packet, even if the lower class packet arrived at the node earlier. This is often implemented by assigning a lower weight to a lower class, and serving each class in proportion to its weight.
- Strict Prioritisation: Similar to the non-strict prioritisation, although a lower class packet will never be forwarded whilst a higher class packet is present in the buffer.
- Traffic Policer: A network node may enforce a traffic profile specifying, for example, limits on the average rate and the maximum size of bursts. Any data flow that does not meet the profile is marked accordingly and may be discarded if congestion reaches a certain level.
- Preferential Discard: If a buffer is filled with a queue of data packets, then any lower class packets will be preferentially discarded before higher class packets.
- Selective Packet Discard: A proportion of the buffer is reserved for higher class data packets. The lower class packets may only occupy a smaller proportion of the buffer (relative to the buffer of that node without selective packet discard), and packets will be discarded if that smaller buffer is full.
- AQM: AQM, as mentioned above, drops packets when it is detected that the queue of packets in the buffer is starting to grow beyond a threshold. This can be modified such that the packets dropped by AQM are those of a lower class of service.
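By way of illustration, the following sketch combines two of the listed techniques, strict prioritisation and preferential discard, for a two-class buffer. The two-class split and the packet-count capacity are assumptions for illustration.

```python
from collections import deque

class TwoClassBuffer:
    def __init__(self, capacity=100):
        self.capacity = capacity
        self.high = deque()  # higher class packets
        self.low = deque()   # lower class packets

    def enqueue(self, packet, high_class):
        if len(self.high) + len(self.low) >= self.capacity:
            # Preferential discard: make room for a higher class arrival
            # by discarding a queued lower class packet, if there is one.
            if high_class and self.low:
                self.low.pop()
            else:
                return False  # arrival discarded
        (self.high if high_class else self.low).append(packet)
        return True

    def dequeue(self):
        # Strict prioritisation: a lower class packet is never forwarded
        # while any higher class packet is present in the buffer.
        if self.high:
            return self.high.popleft()
        return self.low.popleft() if self.low else None
```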
The approaches of Strict Prioritisation and Preferential Discard were both proposed to ensure lower class packets cannot cause harm to higher class packets. However, there are still problems with these techniques. In Strict Prioritisation, some network nodes may have one or more higher priority packets in the buffer for long periods (many seconds or even minutes), particularly during peak hours. This causes any lower class data packets to remain in the buffer for a long period of time. During this period, the sending/receiving nodes would probably time out and the data would be retransmitted in a higher class (on the assumption that the lower class packet was discarded). When the busy period in the higher priority buffer ends, the queued lower class data packets are finally transmitted. This merely wastes capacity, as the data has already been received via the retransmitted higher-priority packets.
Network nodes can exploit lower class data packets to determine the available capacity in the network (known as ‘probing’). In Preferential Discard, a burst of ‘discard eligible’ probing data packets may fill up a buffer, and only then is Preferential Discard triggered. During probing, the discard eligible packets will build a queue up to the discard threshold even if newly arriving probing traffic is discarded. Probing will therefore be intrusive, because higher class traffic from established flows will experience increased delay.
It is therefore desirable to alleviate some or all of the above problems.
SUMMARY OF THE INVENTION
According to a first aspect of the invention, there is provided a method of controlling a network node in a data packet network, the method comprising the steps of: receiving a first data packet from a first external network node; analysing the first data packet to determine if it is of a class of service deemed to be queuable or unqueuable; and, if it is queuable, sending the first data packet to a second external node via an intermediate node, wherein the first data packet is reclassified to be of the unqueuable class of service such that the intermediate node should not forward the first data packet to the second external network node if a packet queue exists at the intermediate node.
The present invention allows a network node, being one of a plurality of intermediate nodes between a source and receiver node, to reclassify any queuable packet as an unqueuable packet. The network node may then send this reclassified packet towards the receiver node via one or more intermediate nodes. The network node and source node may therefore establish a data flow using a conventional congestion control algorithm, which has the advantage that the transmission rate between the source node and network node will increase rapidly due to the short round trip time. Meanwhile, the network node and receiver node may establish a data flow over a wide area network using unqueuable packets, which has the advantage that the transmission rate between the network node and receiver node should increase up to the bottleneck rate quickly and, in doing so, drop fewer data packets (compared to conventional congestion control algorithms, such as TCP Slow-Start). The present invention has the additional benefit of being able to exploit the unqueuable class of service for data packets without having to modify the source and receiver nodes. Thus, only intermediate nodes between the source and receiver nodes (which are typically owned and maintained by network operators) need to be upgraded to exploit the new unqueuable class of service.
According to a second aspect of the invention, there is provided a non-transitory computer-readable storage medium storing a computer program or suite of computer programs which, upon execution by a computer system, performs the method of the first aspect of the invention.
According to a third aspect of the invention, there is provided a network node for a data packet network, the network node comprising a receiver adapted to receive a first data packet from a first external network node; a processor adapted to analyse the first data packet to determine if it is of a class of service deemed to be queuable or unqueuable; and a transmitter adapted to transmit the first data packet to a second network node, via an intermediate node, if the processor determines that the first data packet is queuable, wherein the processor is further adapted to reclassify the first data packet to be of the unqueuable class of service such that the intermediate node should not forward the first data packet to the second network node if a packet queue exists at the intermediate node.
In order that the present invention may be better understood, embodiments thereof will now be described, by way of example only, with reference to the accompanying drawings in which:
A first embodiment of a communications network 10 of the present invention will now be described with reference to
When the client 11 sends a data packet along path 12, it is initially forwarded to a first customer edge router 13, which forwards it on to the first provider edge router 14. The first provider edge router 14 forwards the data packet to a core router 15, which in turn forwards it on to a second provider edge router 16 (which may be via one or more other core routers). The second provider edge router 16 forwards the data packet to a second customer edge router 17, which forwards it on to the server 18.
A core router 15 is shown in more detail in
The skilled person will understand that the identifier may be stored in the 6-bit Differentiated Services Code Point (DSCP) of the Differentiated Services field of an IPv4 or IPv6 packet, the 3-bit 802.1p Class of Service (CoS) field of an Ethernet frame, or the 3-bit Traffic Class field of an MPLS frame. The skilled person will also understand that other identifiers or codepoints could be used, so long as the relevant nodes in the network understand that this identifier/codepoint indicates that the data packet is unqueuable. This will now be explained with reference to two scenarios illustrated in
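By way of illustration only, such an identifier might be set and tested in the Differentiated Services field of an IPv4 header as sketched below. The codepoint value UQ_DSCP is purely an assumption; the description only requires that the relevant nodes agree on whichever value is chosen.

```python
UQ_DSCP = 0b000101  # assumed 6-bit codepoint for the unqueuable class

def mark_unqueuable(ipv4_header: bytearray) -> None:
    # Byte 1 of an IPv4 header is the 8-bit DS field: the DSCP occupies
    # its upper six bits, leaving the 2-bit ECN field untouched.
    ipv4_header[1] = (UQ_DSCP << 2) | (ipv4_header[1] & 0b11)

def is_unqueuable(ipv4_header: bytes) -> bool:
    return (ipv4_header[1] >> 2) == UQ_DSCP
```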
A schematic diagram illustrating an overview of the processing of data packets by core router 15 in accordance with the present invention is shown in
Whilst the first packet 23 is being forwarded to the transmitter 15d, a second packet 24 arrives at the receiver 15a. The management function 22 determines that the second packet 24 is a queuable BE packet. In this scenario, the first packet 23 has not yet been fully transmitted and is thus still present in the buffer 20. The second packet 24 is thus stored in the buffer 20 behind the first packet 23. A third packet 25 then arrives at the receiver 15a whilst the first and second packets 23, 24 are still present in the buffer 20. The management function 22 determines that the third packet 25 is a UQ packet and that there are already data packets in the buffer 20. In this case, the management function 22 discards the data packet (i.e. it is prevented from being transmitted to the server 18). Lastly, a fourth packet 26 arrives, and is again determined to be a queuable BE packet and is therefore stored in the buffer 20.
A second scenario is illustrated in
In the above two scenarios, a packet is deemed to have left the buffer at the time the transmitter completes transmission of the last byte of that packet. Once this last byte has been transmitted, the buffer may store an unqueuable packet.
A flow diagram representing a first embodiment of the management function 22 of the processor 15b is shown in
If the processor 15b determines that the new data packet is of a queuable class, the processor 15b passes the new data packet to the enqueuing function and it is stored in buffer 20 (step S2). However, if the processor 15b determines that the new data packet is unqueuable, then the processor 15b determines whether the buffer 20 is empty or not. If it is empty, then the processor 15b again passes the new data packet to the enqueuing function and it is stored in buffer 20 (step S3). Alternatively, if the processor 15b determines that the buffer 20 is not empty, then the processor 15b discards the packet (step S4).
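This first embodiment of management function 22 can be summarised in the following sketch, which follows steps S2 to S4 directly; the buffer model is an assumption for illustration.

```python
from collections import deque

class ManagementFunction:
    def __init__(self):
        self.buffer = deque()  # models buffer 20

    def on_packet(self, packet, unqueuable: bool) -> bool:
        """Return True if the packet was enqueued, False if discarded."""
        if not unqueuable:
            self.buffer.append(packet)  # step S2: queuable packets always enqueued
            return True
        if not self.buffer:
            self.buffer.append(packet)  # step S3: buffer empty, enqueue UQ packet
            return True
        return False                    # step S4: queue exists, discard UQ packet
```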
A flow diagram illustrating a second embodiment of the management function 22 of the processor 15b is shown in
The unqueuable class of service can be exploited by a sender/receiver node 11, 18 pair in order to determine an appropriate transfer rate to use in the communications network 10 (i.e. the maximum rate at which data can be transmitted without causing any packets to be dropped or causing packets on data flows sharing part of the same transmission path to be dropped). Before an embodiment of this algorithm is described, an overview of the conventional TCP Slow-Start process and its corresponding timing diagram will be presented with reference to
In this example, these three packets do not experience any congestion and are all received by the client in a timely manner. The client therefore sends an acknowledgment packet (represented by thin unbroken arrows) for each of the three packets of data to the server. The server receives these acknowledgements and, in response, increases the congestion window (by one packet for each acknowledgement received). The server therefore sends six data packets in the next transmission. In
The skilled person would understand that if the data stream were much larger, then the TCP Slow-Start algorithm would increase its congestion window by one packet for each acknowledgement received until it reaches its slow start threshold. Once this threshold is reached, then the congestion window is increased by one packet if it receives an acknowledgment within one round-trip time (i.e. before a time-out occurs), regardless of how many acknowledgments are received in that time. The algorithm therefore moves from an exponential growth phase to a linear congestion avoidance phase. The skilled person would also understand that if a time-out occurs without receiving any acknowledgements, or an acknowledgement is received indicating that packets have been dropped, then the congestion window is halved.
An embodiment of a method of the present invention will now be described with reference to
The initial steps of the method of the present invention are very similar to the Slow-Start method outlined above. The client 11 sends an initial request 52 to the server 18 for data. The server 18 responds by buffering a stream of data packets to send to the client 11 and sets its initial congestion window to the current standard TCP size of three packets. Accordingly, the server 18 sends three packets of data 54 from the buffer towards the client 11, which are all marked as BE class of service (represented by thick, unbroken arrows).
At this point, the method of the present invention differs from the conventional Slow-Start algorithm. Following the initial three BE packets of data, the server 18 continues to send further data packets 55 from the buffer towards the client 11. Each of these further data packets is marked as UQ (e.g. the header portions contain an identifier/codepoint which all nodes in the communications network 10 recognise as being of the unqueuable class), and, in this embodiment, is sent at a higher transmission rate than the first three BE packets. These UQ data packets are represented by dashed arrows in
The initial BE data packets and the following burst of UQ data packets leave the server 18 at the maximum rate of its transmitter. In this example, this is over a 1 Gb/s connection between the network interface on the server 18 and the second customer edge router 17 (e.g. a 1 Gb/s Ethernet link). Once these BE and UQ packets arrive at the second customer edge router 17, they are forwarded to the second provider edge router 16. In this example, this is over a 500 Mb/s access link. Thus, when the first UQ packet arrives at the second customer edge router 17, the second customer edge router's 17 relatively slower output rate (i.e. the slower transmission rate of forwarding packets to the second provider edge router 16 relative to the transmission rate of receiving packets from the server 18) represents a bottleneck in the communications network 10. The second customer edge router's 17 buffer 20 will therefore have to queue the received data packets according to the management function 22 described earlier.
Accordingly, the first three BE packets arrive at the second customer edge router 17. The header portions of all these BE packets are decoded and the management function 22 determines that they are all queuable BE packets. In this example, there are initially no other data packets in buffer 20. Accordingly, all three BE packets are stored in the buffer 20 and the first of these BE packets is forwarded to the transmitter.
As noted above, a stream of UQ packets is sent from the server 18 to the second customer edge router 17 after these initial three BE packets. The first of these UQ packets arrives at the second customer edge router 17 and the header portion is decoded. The management function 22 determines that it is an UQ packet. It also determines that the buffer 20 is not empty (as the three BE packets have not all been transmitted when the first UQ packet arrives) and thus discards the first UQ packet. The discarded UQ packet is represented by a line having a diamond head (rather than an arrow head) terminating in the area between the server 18 and client 11 in
The second of the UQ packets arrives at the second customer edge router 17 and the header portion is decoded. The management function 22 again determines that it is an UQ packet and again also determines that the buffer 20 is not empty. The second UQ packet is therefore discarded.
Eventually, all three BE packets are successfully transmitted to the second provider edge router 16 and the buffer 20 of the second customer edge router 17 is empty. The third UQ packet then arrives at the second customer edge router 17 and the header portion is decoded. Again, the management function 22 determines that it is an UQ packet but now determines that the buffer 20 is empty. The third UQ packet is therefore stored in the buffer 20 and forwarded to the transmitter 57 for onward transmission to the provider edge router 16 (and ultimately the client 11). This is illustrated in
Whilst the third UQ packet is being transmitted, a fourth UQ packet arrives and the header portion is decoded. The management function 22 determines that it is an UQ packet and that the buffer is not empty (as the third UQ packet is stored in the buffer 20 whilst it is being transmitted). The fourth UQ packet is therefore discarded.
Meanwhile, as shown in
Whilst these BE acknowledgment messages traverse the communications network 10 to the server 18, the server 18 continues sending UQ packets to the client 11. As noted above and as shown in
As shown in
When the first BE acknowledgment message arrives at the server 18, the server 18 stops sending UQ data packets to the client 11. The server 18 is configured, on receipt of this BE acknowledgment message, to end its start-up phase and enter a congestion-avoidance phase. Like the conventional TCP Slow-Start algorithm, the algorithm of this embodiment of the present invention is ‘self-clocking’, such that a new data packet is transmitted from the server 18 towards the client 11 in response to each acknowledgement it receives. In this embodiment, following receipt of the first BE acknowledgment packet from the client 11, the server 18 starts sending a second batch of BE packets 60 to the client 11. The first three BE packets of this second batch are sent at a transmission rate corresponding to the rate at which the server 18 receives the first three BE acknowledgment messages. However, it will be seen from
This self-clocking nature can be explained using the schematic diagram shown in
Accordingly, as shown in
The skilled person will understand that the first UQ acknowledgment message to arrive at the server 18 will indicate that some data has not arrived at the client 11 (due to some UQ packets being dropped). The server 18 therefore retransmits this data by including it in the second batch of BE packets. This behaviour therefore repairs all losses of data in the UQ packets. Once all this lost data has been retransmitted, the server 18 will send out any remaining new data until its buffered data has all been sent. The server will then terminate the connection (not shown).
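The server-side logic described above may be sketched as follows. The packet and acknowledgement representations are assumptions for illustration; only the phase logic (UQ probing until the first BE acknowledgement, then self-clocked BE transmission that repairs UQ losses before sending new data) is taken from the description.

```python
class Sender:
    def __init__(self, blocks):
        self.new_data = list(blocks)  # data not yet sent as BE packets
        self.to_repair = []           # data lost in discarded UQ packets
        self.phase = "startup"        # initially sending UQ probe packets

    def on_ack(self, is_be_ack, reported_lost):
        """Process one acknowledgement; return the next BE packet to send, if any."""
        # UQ acknowledgements may report data that never arrived.
        self.to_repair.extend(reported_lost)
        if is_be_ack and self.phase == "startup":
            # The first BE ACK ends the start-up phase; UQ sending stops.
            self.phase = "congestion_avoidance"
        if self.phase != "congestion_avoidance":
            return None
        # Self-clocking: each acknowledgement releases one BE packet,
        # repairing UQ losses before any remaining new data is sent.
        if self.to_repair:
            return ("BE", self.to_repair.pop(0))
        if self.new_data:
            return ("BE", self.new_data.pop(0))
        return None  # all buffered data sent; the connection may terminate
```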
The method of the present invention therefore uses the new UQ packets to probe the network and more rapidly establish the appropriate transmission rate of the end-to-end path through the network. This is clear when the algorithm of the present invention is compared to TCP Slow-Start for a larger data stream, as shown in
It will be seen from
A second embodiment of the present invention will now be described with reference to
The client 81 sends a request packet 82 to the server 85 for a data transfer. In this embodiment, the middlebox 83 intercepts this request packet 82 (for example, by monitoring all data packets passing through the second customer edge router 17 and determining if any are request packets), and opens a connection back to the client 81. The middlebox 83 cannot yet send the data the client 81 has requested from the server, as it does not store it. The middlebox 83 therefore forwards the request onwards (84) to the server 85. The server 85 then starts a traditional TCP data transfer to the middlebox 83.
In this embodiment, the server 85 does not need to be modified in any way. The data transfer between the server 85 and the middlebox 83 can therefore proceed according to the traditional TCP Slow-Start algorithm, which is illustrated in
However, as can be seen in
The advantages of the second embodiment are that the traditional TCP Slow-Start exchange between the server 85 and the middlebox 83 may accelerate to a very fast rate in a relatively short amount of time (compared to a traditional TCP exchange over a WAN), and then the data transfer is translated into an unqueuable class of service data transfer to establish the bottleneck rate over the WAN. This may also be implemented without any modifications to the server 85, such that only the nodes from the customer edge router onwards (which are maintained by network operators) need to be able to distinguish an unqueuable packet from a packet of any other class of service.
The skilled person would understand that the network could implement two middleboxes of the second embodiment, one associated with the server and another associated with the client, such that the advantages of the present invention could be realised in both the forward and reverse directions.
In an enhancement to the above embodiments, any intermediate node between the client and server could dequeue packets at a slightly lower rate than its normal transmission rate. In this manner, a greater number of UQ packets would be dropped by the intermediate node, and consequently the rate of UQ acknowledgment packets being returned to the server decreases. As these UQ acknowledgment packets clock out further packets from the server, the new transmission rate may be artificially lowered below the rate that would be established by the method outlined above. This can therefore provide a safer transmission rate, which is just less than the bottleneck rate of the network.
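A sketch of this enhancement, under an assumed safety factor of 0.95, is as follows; any factor slightly below 1 would serve the same purpose.

```python
SAFETY_FACTOR = 0.95  # assumed fraction of the normal transmission rate

def dequeue_interval_seconds(packet_bytes: int, line_rate_bps: float) -> float:
    # Serialisation time of one packet at the artificially lowered rate.
    return (packet_bytes * 8) / (line_rate_bps * SAFETY_FACTOR)

# A 1500 B packet on a 500 Mb/s link is released every ~25.3 microseconds
# instead of every 24, so marginally more UQ packets find the buffer
# occupied and are discarded, clocking the sender just below the bottleneck.
print(dequeue_interval_seconds(1500, 500e6))
```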
In another enhancement, a management entity could be connected to a node in the network (preferably the provider edge node), which may monitor data packets passing through the node to determine the proportion of packets which are being sent in the unqueuable class of service. This may be achieved by an interface with the header decoder function of the node, and appropriate logging mechanisms. Alternatively, deep packet inspection techniques could be used. The management entity allows the network operator to determine the usage of the unqueuable class of service by different clients and can thus help in deployment planning.
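Such a management entity might be sketched as follows; the interface to the header decoder and the class labels are assumptions for illustration.

```python
from collections import Counter

class UQUsageMonitor:
    def __init__(self):
        self.counts = Counter()

    def on_decoded_header(self, class_of_service: str) -> None:
        # Invoked by the node's header decoder for each packet observed.
        self.counts[class_of_service] += 1

    def uq_fraction(self) -> float:
        # Proportion of observed packets in the unqueuable class.
        total = sum(self.counts.values())
        return self.counts["UQ"] / total if total else 0.0
```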
In the above embodiment, the server 18 transmits the packets towards the core network routers via customer edge and provider edge routers. However, this is non-essential and the skilled person would understand that the invention may be implemented between any two network nodes communicating via at least one intermediate node. For example, the server may be connected directly to a core router 15 (which may be the case, for example, where the server is a high-bandwidth storage server for popular video streaming websites). In this case, the bottleneck node is likely to be a more distant intermediate node (such as a provider edge router associated with the client), and the bottleneck rate can be established by this node dropping the UQ packets. Furthermore, the two network nodes implementing the invention could be in a peer-to-peer arrangement, rather than the server/client arrangement detailed above.
In the above embodiments, the UQ packets are marked as unqueuable by a specific identifier in the header portion of the packet. However, the skilled person will understand that this method of ensuring a packet is unqueuable is non-essential. That is, the packets may be marked as unqueuable by using an identifier at any point in the packet, so long as any node in the network is able to decode this identifier. Furthermore, this marking does not necessarily need to be consistent, as a node may use deep packet inspection to determine the class of service without having to decode the identifier. The skilled person will understand that the UQ packet does not require any marking at all to be identifiable as of the unqueuable class of service. Instead, the unqueuable class of service may be inferred from a particular characteristic of the packet, such as its protocol, it being addressed to a particular range of addresses, etc. An intermediate node can then treat the packet as unqueuable based on this inference. Thus, the skilled person will understand that an ‘unqueuable’ data packet is one which network nodes generally understand should not be queued if a packet queue exists in the node.
In the above embodiments, the UQ packets include data that is part of the data to be transmitted from the server to the client, and any data lost as a result of a dropped UQ packet is resent by the server. However, the UQ packets may instead include dummy data (i.e. data which is not part of the data requested by the client, and typically just a random collection of bits). In this way, there are fewer packets of data which need to be retransmitted by the server.
The skilled person will also understand that the use of the TCP protocol is non-essential, and the present invention may be applied in many other transport protocols implementing congestion control, such as the Stream Control Transmission Protocol or Real-time Transport Protocol over Datagram Congestion Control Protocol.
The above embodiments describe the present invention operating between a server and client at the start of a new data flow. However, the skilled person will understand that the present invention may be used at any time in order to establish the bottleneck rate in the network. For example, the server may have established data flows with several clients, and one of the data flows may terminate. The server may then use the method of the present invention to quickly probe the network and establish the new bottleneck rate for its remaining data flow(s). Furthermore, the skilled person will understand that the second embodiment of the method of the invention, in which a middlebox is provided at an ingress and/or egress point of the core network, may be used to probe the network to determine a bottleneck capacity. Thereafter, when a new flow starts from a client associated with that middlebox, the transmission rate can be set based on this information.
In the above embodiments, the intermediate node is configured to determine that its buffer is empty once the final byte of data for the last packet leaves the transmitter. However, the skilled person will understand that the transmitter may also implement a buffer to temporarily store packets as they are transmitted. The node may therefore disregard any packets stored in this temporary transmitter buffer when determining whether or not the node buffer is empty and thus whether a new UQ packet can be queued or not.
The skilled person will understand that any combination of features is possible within the scope of the invention, as claimed.
Claims
1. A method of controlling a network node in a data packet network, the method comprising the steps of:
- receiving a first data packet from a first external network node;
- analysing the first data packet to determine if it is of a class of service deemed to be queuable or unqueuable;
- determining that the first data packet is queuable;
- reclassifying the first data packet to be of the unqueuable class of service; and
- sending the first data packet to a second external node via an intermediate node,
- wherein the first data packet is of the queuable class of service if it may be queued by the intermediate node and is of the unqueuable class of service if the intermediate node should not forward the first data packet to the second external network node if a buffer of the intermediate node is not empty.
2. A non-transitory computer-readable storage medium storing a computer program or suite of computer programs, which upon execution by a computer system performs the method of claim 1.
3. A network node for a data packet network, the network node comprising:
- a receiver adapted to receive a first data packet from a first external network node;
- a processor adapted to analyse the first data packet to determine if it is of a class of service deemed to be queuable or unqueuable; and if it is determined that the first data packet is queuable, to reclassify the first data packet to be of the unqueuable class of service; and
- a transmitter adapted to transmit the first data packet to a second network node, via an intermediate node, if the processor determines that the first data packet is queuable,
- wherein the first data packet is of a queuable class of service if it may be queued by the intermediate node and is of an unqueuable class of service if the intermediate node should not forward the first data packet to the second network node if a buffer of the intermediate node is not empty.
4. A network including a network node as claimed in claim 3.
Type: Application
Filed: Jun 16, 2016
Publication Date: Nov 7, 2019
Inventors: Robert BRISCOE (London), Philip EARDLEY (London)
Application Number: 15/746,957