Single cycle weighted random early detection circuit and method
A system and method is provided for traffic management and regulation in a packet-based communication network, the system and method facilitating proactive, discriminating congestion control on a per-flow basis of packets traversing the Internet via use of a Weighted Random Early Detection (WRED) algorithm that monitors the incoming packet queue and optimizes enqueuing or discard of incoming packets to stabilize queue length and promote efficient packet processing. During optimized discard conditions, the system and method discern a relative priority among incoming packets, distribute packets with a relatively high priority, and discard packets with a relatively low priority. Additionally, packet traffic is policed and discarded according to packet type, quantity, or other predetermined criteria. The present invention performs in periodic mode, demand mode, or both, and can be implemented as a hardware solution, a software solution, or a combination thereof.
The present application claims priority to U.S. Provisional Patent Application Ser. No. 60/341,342, filing date Dec. 14, 2001, the entire content of which is incorporated herein by reference.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates generally to a system and method for traffic management of packets traversing a communications network. More particularly, the present invention provides an intelligent solution for discriminatory packet discard according to predetermined priority and traffic levels.
2. Description of the Background Art
Current technology provides communication services via a connectionless network such as the Internet. These services are implemented using packet technology, whereby relatively small units called packets are routed through the network to an input queue in a destination system based on the destination address contained within each packet. Breaking communication down into packets allows the same data path to be shared among many users in the network. Such use, however, often results in network congestion and resultant delays in receipt of communications. For example, network traffic often fills input queues faster than processing mechanisms can disperse the content, thus causing a bottleneck in the communication process.
Current and prior art technologies employ various traffic management methods in an attempt to alleviate packet glut of this kind. Typically, these methods use algorithms to check queue levels; when a queue level exceeds an acceptable threshold, incoming packets are randomly discarded. While such methods alleviate queue congestion to some degree, they suffer from a number of inherent disadvantages. Specifically, they arbitrarily discard all packets once the queue reaches a predetermined level, and continue to drop all packets received until the queue level recedes below that level. This necessitates the retransmission and reprocessing of all dropped packets, resulting in process inefficiency, distribution delay, and generally substandard processing performance. These methods further fail to discriminate between packets, simply selecting all packets for discard after the predetermined queue threshold is reached. Thus, packets deemed high priority are discarded at the same rate and time as packets of lower priority, necessitating a resend of all discarded packets regardless of priority. The time lost in reprocessing higher-priority packets negates the benefit of an orderly system of transmission according to priorities, with resultant negative business ramifications.
Yet another disadvantage resides in the current state-of-art hardware devices, which limit the use of efficient algorithms and utilize cumbersome hardware configurations, resulting in inefficient processes. For example, certain algorithms utilize a division step. This step generally requires several cycles to complete, thus extending latency time during this particular phase of operations. (Hereafter, the terms “clock cycle” and “cycle” are used interchangeably to mean the time between two adjacent pulses of an oscillator that sets the tempo of a computer processor.) Alternatively, multiple dividers or other hardware components are required to complete the operation in parallel, resulting in inflated design, manufacturing, and purchase costs.
What is needed, therefore, is a system and method capable of optimizing both packet distribution and queue conditions without sacrificing performance objectives. Further, the system and method should encompass both hardware and software embodiments in various configurations to suit business and performance objectives.
SUMMARY
The system and method of the present invention address the issues of the current art and prior art by providing an innovative traffic management and processing solution with complex functionality whereby an average queue volume is calculated and compared to predetermined threshold values for the queue; e.g., a minimum threshold and maximum threshold. The present invention enqueues all packets so long as the queue size remains smaller than a minimum threshold value, thus permitting optimal distribution of the packets when traffic conditions are optimal. If the queue exceeds a maximum threshold value, the present invention provides the functionality to discard all packets, thus immediately acting to reduce the queue to a level conducive to efficient packet processing in spite of heavy traffic conditions. For periods of time during which the queue size remains between the minimum and maximum threshold values, the present invention calculates and discards an optimal number of packets, thus optimizing packet delivery in a congested traffic climate while alleviating queue congestion. Further, upon a discard condition, the present invention intelligently discerns between packets of varying priorities, marks low priority packets, and discards the same to permit delivery of critical packets while alleviating queue congestion. The present invention can be implemented in any combination of hardware or software.
Various embodiments of the present invention include use of a weighted random early detection (WRED) algorithm and policing functionality. Typically, WRED applies to queued packets, while policing applies to packets that cut through the buffer (hereafter, cut-through packets). The WRED algorithm randomly discards packets when input queues begin to fill. The weighted nature of the WRED enables packets deemed to be of a relatively high priority to be discarded at a lower probability than those packets of relatively low priority. The WRED functionality facilitates various implementations that perform a discard probability calculation in a single cycle; e.g., the system and method embody a hardware component solution that performs a 12-bit probability calculation in a single cycle to determine discard probabilities. The present invention supports both periodic and on-demand (per packet) single cycle operations.
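The patent does not disclose the specific datapath, but one conventional way to complete such a probability calculation in a single cycle is to precompute the reciprocal of (max_th − min_th) once, when the thresholds are programmed, so the per-packet path needs only a subtract, a multiply, and a shift rather than a divider. The following Python sketch models that approach in 12-bit fixed point; all names (precompute_recip, pdrop_fixed, FRAC_BITS) are illustrative assumptions, not taken from the patent.

```python
# Illustrative model of a division-free, fixed-point drop-probability
# calculation. The slow division happens once at configuration time;
# the per-packet datapath is subtract, multiply, shift.
FRAC_BITS = 12  # Q12 fixed-point fraction, matching the 12-bit example above

def precompute_recip(min_th, max_th):
    """One-time division when thresholds are programmed:
    returns round-down of 2^12 / (max_th - min_th)."""
    return (1 << FRAC_BITS) // (max_th - min_th)

def pdrop_fixed(avqlen, min_th, recip, pdmax_fixed):
    """Single-cycle-style datapath. pdmax_fixed is Pdmax in Q12
    (4096 represents probability 1.0); the result is also Q12."""
    frac = (avqlen - min_th) * recip          # Q12 approx of (avq-min)/(max-min)
    return (pdmax_fixed * frac) >> FRAC_BITS  # scale by Pdmax, renormalize
```

With min_th = 100 and max_th = 200, the precomputed reciprocal is 4096 // 100 = 40, and an average queue length of 150 with Pdmax = 0.5 (2048 in Q12) yields a drop probability of 1000/4096 ≈ 0.244, close to the exact 0.25; the small error comes from truncating the reciprocal.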
The policing function marks and drops packets according to predetermined criteria. For example, if packets violate a level established by service agreement, the policing function monitors that level, selects offending packets, and discards them. In various embodiments, the system and method support periodic policing functionality.
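The mechanism behind the policing function is not specified here; a token bucket is one common way to police a contracted traffic level, with a periodic refill matching the periodic mode described above. The sketch below models policing under that assumption; the Policer class and its parameters are hypothetical.

```python
class Policer:
    """Hypothetical token-bucket policer: packets that exceed the
    contracted level (the 'level established by service agreement')
    fail the conformance check and are selected for discard."""

    def __init__(self, rate_bytes_per_tick, burst_bytes):
        self.rate = rate_bytes_per_tick   # tokens added per periodic tick
        self.burst = burst_bytes          # bucket depth (max tokens)
        self.tokens = burst_bytes         # start with a full bucket

    def tick(self):
        # Periodic refill, capped at the configured burst size.
        self.tokens = min(self.burst, self.tokens + self.rate)

    def conforms(self, packet_len):
        # True -> forward the packet; False -> discard the offender.
        if packet_len <= self.tokens:
            self.tokens -= packet_len
            return True
        return False
```

A packet stream that stays under the refill rate always conforms; a burst larger than the bucket depth is trimmed until subsequent ticks replenish the tokens.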
In one embodiment of the present invention, a system includes an interface block for initiating memory references for label reads and updates and for interacting with an ICU; a calculation block for calculating an average queue size and calculating probability in a single cycle, the calculation block associated with the interface block. Alternatively, the system may also include a policing block for performing police updates, the policing block associated with the interface block.
In another embodiment of the present invention, a method includes the steps of determining a minimum threshold and a maximum threshold; if the average queue size is less than the minimum threshold, enqueuing an arriving packet; if the average queue size is greater than the maximum threshold, dropping the packet; if the average queue size is between the minimum threshold and the maximum threshold, calculating a packet drop probability to determine packet disposition and either enqueuing or dropping the packet, according to the determined packet disposition.
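The three-way disposition described in this method can be sketched as follows; the function name and the pdrop_fn helper are illustrative, not from the patent.

```python
import random

def wred_disposition(avg_qsize, min_th, max_th, pdrop_fn):
    """Decide the fate of an arriving packet from the average queue size.
    pdrop_fn(avg_qsize) returns a drop probability in [0, 1] for the
    region between the thresholds (illustrative helper)."""
    if avg_qsize < min_th:
        return "enqueue"   # light traffic: accept every packet
    if avg_qsize > max_th:
        return "drop"      # heavy congestion: discard every packet
    # Between thresholds: drop probabilistically.
    return "drop" if random.random() < pdrop_fn(avg_qsize) else "enqueue"
```

Below the minimum threshold the probability is never consulted; above the maximum it is likewise irrelevant, since every packet is dropped.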
Further advantages of the invention will be brought out in the following portions of the specification, wherein the detailed description is for the purpose of fully disclosing preferred embodiments of the invention without placing limitations thereon.
The present invention provides an intelligent system and method for proactive, discriminating congestion control on a per-flow basis of packets traversing the Internet. In various embodiments, a Weighted Random Early Detection (WRED) algorithm monitors the incoming packet queue and optimizes distribution versus discard of packets to stabilize queue length. During optimized discard conditions, the system and method discern a relative priority among packets, distributing packets with a relatively high priority and discarding packets with a relatively low priority. Additionally, packet traffic may be policed according to packet type, quantity, or other criteria, and discarded. Typically, the present invention performs in periodic mode, demand mode, or both. A skilled artisan will recognize that the system and method disclosed herein may be implemented as a hardware solution, a software solution, or a combination thereof.
Referring specifically to the drawings, wherein like references are made to the same items throughout, a method of the present invention is generally exemplified stepwise in
In the first step, the minimum threshold and maximum threshold values are determined. The minimum threshold value represents the lowest level of congestion and the point at which proactive traffic management must occur to prevent further congestion. The maximum threshold value represents the highest level of congestion, at which point all incoming packets must be discarded. These respective values are generally environmentally specific, based on such criteria as traffic pattern, flow, computer system capacity, etc.
In the second step, an average queue size is calculated using, for example, the formula:
average=(old_average*(1−½^n))+(current_queue_size*½^n),
where average is the average size of the queue; old_average is the previous average of the queue; current_queue_size is the current size of the queue; and n is the exponential weight factor, a user-configurable value. The user-configurability of n permits the user to meet various objectives by varying n. For example, relatively high values of n minimize extreme highs or lows in queue size. With a high value of n, the WRED process does not immediately initiate a packet drop when the queue size exceeds the maximum threshold, and once it does begin to drop packets in response to a maximum threshold condition, it may continue to drop packets for a period of time after the actual queue size has receded below the minimum threshold. The average queue size is therefore unlikely to change very quickly over time. This slow-moving average accommodates temporary bursts in traffic. Thus, to avoid drastic swings in queue size caused by bursts in traffic, the user selects a relatively high value for n.
Conversely, if the user selects a relatively low value for n, the average queue size closely tracks the current queue size. The resultant average fluctuates with changes in the traffic levels, responding quickly to long queues. Once the queue falls below the minimum threshold, the WRED process stops dropping packets. For relatively consistent traffic patterns, the user selects a relatively low value for n, thus permitting the average queue size to fluctuate in response to traffic levels and permitting packet queuing when the queue has receded below the minimum threshold.
As a skilled artisan will note, values of n in the range between these extremes permit the user to configure the process according to varying degrees and varying objectives; e.g., setting n to a value at which some fluctuation of the queue size occurs preserves the benefit of curtailing packet drops once the queue size recedes below the minimum threshold. By contrast, an extremely low value of n causes the WRED process to overreact to temporary traffic bursts and drop packets unnecessarily, while an extremely high value of n causes complete nonreaction to congestion, with packets transmitted and dropped as if the WRED process were not in effect.
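The weighting formula above can be exercised directly; this short sketch (function name illustrative) shows how the choice of n trades responsiveness for smoothing:

```python
def avg_queue_size(old_average, current_queue_size, n):
    """average = old_average*(1 - 1/2^n) + current_queue_size*(1/2^n).
    Large n -> slow-moving average that rides out traffic bursts;
    small n -> average closely tracks the instantaneous queue size."""
    w = 1.0 / (1 << n)  # the exponential weight factor 1/2^n
    return old_average * (1.0 - w) + current_queue_size * w
```

With old_average = 100 and current_queue_size = 200, n = 1 moves the average halfway to 150, n = 2 only a quarter of the way to 125, and a large n such as 10 barely moves it at all.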
In the third step, the average queue size is compared to the minimum threshold value. If the average queue value is less than the minimum threshold value, then the packet is enqueued. If the average queue size exceeds the maximum threshold value, then the packet is automatically discarded. Finally, for average queue values in the range of minimum threshold value to maximum threshold value, the method calculates a packet drop probability (Pdrop) at 24 and accordingly either enqueues or drops the incoming packet.
The drop probability calculation is based on the minimum threshold value, maximum threshold value, and a mark probability denominator. For example, the drop probability Pdrop may be calculated according to the formula:
Pdrop=Pdmax*(avqlen−min_th)/(max_th−min_th),
where Pdmax is a maximum probability; avqlen is a periodic or per packet average queue length; min_th is a minimum threshold under which all packets must be accepted; and max_th is a maximum threshold over which all packets can be dropped.
When the average queue size exceeds the minimum threshold value, the WRED process begins to drop packets. The rate of drop increases linearly as the average queue size increases until the average queue size reaches the maximum threshold. Referring now to
The mark probability denominator determines the fraction of packets dropped when the average queue size reaches the maximum threshold. For example, if the denominator is 512, one out of every 512 packets is dropped when the average queue size reaches the maximum threshold value. The minimum threshold value should be set high enough to maximize link utilization; if set too low, packets may be dropped unnecessarily, resulting in less than optimal utilization of the link. The difference between the maximum threshold and the minimum threshold should be large enough to avoid global synchronization; if the difference is too small, many packets may be dropped at once, and global synchronization results.
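The linear rise of the drop rate between the two thresholds follows directly from the Pdrop formula; a minimal sketch, with the mark probability denominator entering as Pdmax = 1/denominator:

```python
def pdrop(avqlen, min_th, max_th, pdmax):
    """Pdrop = Pdmax * (avqlen - min_th) / (max_th - min_th):
    zero at the minimum threshold, rising linearly to Pdmax
    at the maximum threshold."""
    return pdmax * (avqlen - min_th) / (max_th - min_th)

# Example: a mark probability denominator of 512 gives Pdmax = 1/512,
# so one packet in 512 is dropped at the maximum threshold.
```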
Turning now to
The following Table 1 provides an exemplar of interface signals that are associated with WRED process components such as the interface block. The table includes columned information categorized according to Signal Name, Dir and Signal Description.
With reference to
With reference to
With reference to
Turning to
Having illustrated and described the principles of the system and method of the present invention in various embodiments, it should be apparent to those skilled in the art that the embodiments can be modified in arrangement and detail without departing from such principles. For example, the physical manifestation of the hardware media may be changed if preferred. Therefore, the illustrated embodiments should be considered only as examples of the invention and not as a limitation on its scope. Although the description above contains much specificity, this should not be construed as limiting the scope of the invention but as merely providing illustrations of some of the presently preferred embodiments of this invention. Further, it is appreciated that the scope of the present invention encompasses other embodiments which may become obvious to those skilled in the art, and that the scope of the present invention is accordingly to be limited by nothing other than the appended claims, in which reference to an element in the singular is not intended to mean “one and only one” unless explicitly so stated, but rather “one or more”. All structural and functional equivalents to the elements of the above-described preferred embodiment that are known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the present claims. Moreover, it is not necessary for a device or method to address each and every problem sought to be solved by the present invention for it to be encompassed by the present claims. Furthermore, no element, component, or method step in the present disclosure is intended to be dedicated to the public regardless of whether the element, component, or method step is explicitly recited in the claim. No claim element herein is to be construed under the provisions of 35 U.S.C. 112, sixth paragraph, unless the element is expressly recited using the phrase “means for”.
Claims
1. A method for regulating network packet traffic, the method comprising the steps of:
- determining a minimum threshold and a maximum threshold;
- calculating an average queue size according to the following formula: average=(old_average*(1−½^n))+(current_queue_size*½^n), wherein the average is the average size of a queue; the old_average is a previous average of the queue; the current_queue_size is a current size of the queue; and n is an exponential weight factor and a user-configurable value;
- when the average queue size is less than the minimum threshold, enqueuing an arriving packet;
- when the average queue size is greater than the maximum threshold, dropping the packet;
- when the average queue size is between the minimum threshold and the maximum threshold, calculating a packet drop probability (Pdrop); and
- performing one of a set consisting of enqueuing and dropping the packet, according to the calculated probability.
2. The method of claim 1 wherein the minimum threshold and the maximum threshold are individually determined based on at least one criterion selected from a group consisting essentially of traffic pattern, flow, and computer system capacity.
3. The method of claim 1, wherein the step of calculating a packet drop probability further comprises calculating the packet drop probability according to the formula:
- Pdrop=Pdmax*(avqlen−min_th)/(max_th−min_th)
- wherein Pdmax is a maximum probability; avqlen is one of a set consisting of a periodic and a per packet average queue length; min_th is a minimum threshold under which all packets must be accepted; and max_th is a maximum threshold over which all packets can be dropped.
4. The method of claim 1, wherein the step of calculating a packet drop probability is performed in a single cycle.
5. The method of claim 1, further comprising the step of policing packets according to a predetermined criterion.
6. The method of claim 1, further comprising the step of calculating the packet drop probability on at least one of a set consisting of a demand basis and a periodic basis.
7. The method of claim 1, further comprising the step of generating a new average queue length per flow.
8. A method for regulating network packet traffic, the method comprising the steps of:
- determining a minimum threshold and a maximum threshold;
- calculating an average queue size according to the following formula, average=(old_average*(1−½^n))+(current_queue_size*½^n), wherein the average is the average size of a queue; the old_average is a previous average of the queue; the current_queue_size is a current size of the queue; and n is an exponential weight factor and a user-configurable value;
- when the average queue size is less than the minimum threshold, enqueuing an arriving packet;
- when the average queue size is greater than the maximum threshold, dropping the packet;
- when the average queue size is between the minimum threshold and the maximum threshold, calculating a packet drop probability in a single cycle, and either enqueuing or dropping the packet according to the calculated probability; and
- policing packets according to a predetermined criterion.
9. The method of claim 8, wherein the step of calculating a packet drop probability further comprises calculating the packet drop probability according to the formula:
- Pdrop=Pdmax*(avqlen−min_th)/(max_th−min_th)
- wherein Pdmax is a maximum probability; avqlen is one of a set consisting of a periodic and a per packet average queue length; min_th is a minimum threshold under which all packets must be accepted; and max_th is a maximum threshold over which all packets can be dropped.
10. A system for management of packet traffic, the system comprising:
- an interface block for initiating memory references for label reads and updates, and interacting with an Ingress Control Unit (ICU);
- a calculation block for calculating an average queue size according to a formula and calculating packet drop probability depending on the average queue size, the calculation block associated with the interface block, the formula being: average=(old_average*(1−½^n))+(current_queue_size*½^n),
- wherein the average is the average size of a queue; the old_average is a previous average of the queue; the current_queue_size is a current size of the queue; and n is an exponential weight factor and a user-configurable value; and
- a policing block for performing police updates, the policing block associated with the interface block.
11. The system of claim 10, wherein the packet drop probability is calculated in a single cycle.
12. The system of claim 10, wherein the interface block further comprises a timer.
13. The system of claim 10, further comprising at least one interface signal.
14. The system of claim 13, wherein the at least one interface signal is selected from a group consisting essentially of wru_enable; first_flowid_addr; max_flowcnt; wred_cyclecnt; zcu_rddata; zcu_rddata_calid; zcu_readvalid; and icu_rddata.
15. The system of claim 10, wherein the calculation block performs packet drop probability calculations on a periodic basis.
16. The system of claim 10, wherein the calculation block performs packet drop probability calculations on a demand basis.
17. The system of claim 10, wherein the calculation block further calculates a new average queue size.
Patent Citations
5,737,314 | April 7, 1998 | Hatono et al.
6,252,848 | June 26, 2001 | Skirmont
6,333,917 | December 25, 2001 | Lyon et al.
6,463,068 | October 8, 2002 | Lin et al.
6,856,596 | February 15, 2005 | Blumer et al.
6,996,062 | February 7, 2006 | Freed et al.
7,106,731 | September 12, 2006 | Lin et al.
2002/0188648 | December 12, 2002 | Aweya et al.
2003/0086140 | May 8, 2003 | Thomas et al.
Type: Grant
Filed: Dec 13, 2002
Date of Patent: Oct 23, 2007
Patent Publication Number: 20030112814
Assignee: Tundra Semiconductor Corporation
Inventors: Prasad Modali (San Jose, CA), Nirmal Raj Saxena (Los Altos Hills, CA)
Primary Examiner: Brian Nguyen
Attorney: LaRiviere, Grubman & Payne, LLP
Application Number: 10/318,769
International Classification: H04L 12/28 (20060101);