Apparatus and Method for Adaptive Throttling of Traffic Across Multiple Network Nodes
One embodiment of a method of throttling network traffic comprises obtaining traffic rate data from available peer network nodes; computing a maximum permissible rate for a network node based on the traffic rate data from the peer network nodes, wherein the maximum permissible rate represents a maximum number of transactions permitted to pass into that network node for processing during a current period; and employing the maximum permissible rate to govern a number of transactions admitted for processing by the network node in the current period.
This application claims priority to copending U.S. provisional application entitled, “Apparatus and Methods for Adaptive Throttling of Traffic Across Multiple Network Nodes,” having Ser. No. 60/923,288, filed Apr. 13, 2007, which is entirely incorporated herein by reference.
TECHNICAL FIELD
The present disclosure is generally related to computer networks and, more particularly, is related to managing network traffic.
BACKGROUND
In the computer art, users often need to employ the computing facilities of a service provider to satisfy their computing needs. For example, one or more users, each utilizing one or more applications, may contract with a service provider to request that the service provider perform transaction processing using the service provider's computing facilities. For a variety of economic and technical reasons, the service provider typically employs a server farm that comprises a plurality of servers to service the transaction requests from the users. In an example case, each user may contract with the service provider to require the service provider to provide a certain contracted processing rate or a contracted service level (e.g., 1000 transactions per second, 100 transactions per second, 10 transactions per second, etc.).
Generally speaking, the service provider desires to provide at least the contracted transaction processing rate to each customer to keep the customers happy. However, there is also a strong need to manage the traffic (e.g., transactions) such that no user would be able to grossly abuse his contractual arrangement by vastly exceeding his contracted transaction processing rate. If the traffic is not properly managed by the service provider, the volume of incoming transactions may result in clogged traffic at the servers in the server farm. Due to the clogged traffic, transaction processing performance may be reduced, and a user who legitimately pays for his/her contracted transaction processing rate may not be able to satisfactorily process transactions, leading to reduced customer satisfaction.
SUMMARY
Embodiments of the present disclosure provide systems and methods of throttling network traffic. One embodiment, among others, of a method comprises obtaining traffic rate data from available peer network nodes; computing a maximum permissible rate for a network node based on the traffic rate data from the peer network nodes, wherein the maximum permissible rate represents a maximum number of transactions permitted to pass into that network node for processing during a current period; and employing the maximum permissible rate to govern a number of transactions admitted for processing by the network node in the current period.
Briefly described, one embodiment of a system of throttling network traffic, among others, comprises a peer communication module configured to obtain traffic rate data from available peer network nodes; a threshold stats module configured to compute a maximum permissible rate for a network node based on the traffic rate data from the peer network nodes, wherein the maximum permissible rate represents a maximum number of transactions permitted to pass into that network node for processing during a current period; and a gatekeeper module configured to employ the maximum permissible rate to govern a number of transactions admitted for processing by the network node in a current period.
Other systems, methods, features, and advantages of the present disclosure will be or become apparent to one with skill in the art upon examination of the following drawings and detailed description. It is intended that all such additional systems, methods, features, and advantages be included within this description, be within the scope of the present disclosure, and be protected by the accompanying claims.
Many aspects of the present disclosure can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views.
The present disclosure will now be described in detail with reference to a few embodiments thereof as illustrated in the accompanying drawings. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. It will be apparent, however, to one skilled in the art, that the present disclosure may be practiced without some or all of these specific details. In other instances, well known process steps and/or structures have not been described in detail in order to not unnecessarily obscure the present disclosure.
Various embodiments are described herein below, including methods and techniques.
It should be kept in mind that the present disclosure might also cover articles of manufacture that include a computer readable medium on which computer-readable instructions for carrying out embodiments of the inventive technique are stored. The computer readable medium may include, for example, semiconductor, magnetic, opto-magnetic, optical, or other forms of computer readable medium for storing computer readable code. Further, the disclosure may also cover apparatuses for practicing embodiments of the present disclosure. Such apparatus may include circuits, dedicated and/or programmable, to carry out tasks pertaining to embodiments of the present disclosure. Examples of such apparatus include a general-purpose computer and/or a dedicated computing device when appropriately programmed and may include a combination of a computer/computing device and dedicated/programmable circuits adapted for the various tasks pertaining to embodiments of the present disclosure.
Embodiments of the present disclosure relate to apparatus and methods for implementing adaptive throttling across a plurality of network nodes (e.g., servers of a server farm). For example, traffic (e.g., transactions received by the individual network nodes of the server farm) may be adaptively throttled to control the amount of traffic admitted by the plurality of network nodes for processing. In accordance with the inventive adaptive throttling technique, the allowed traffic rate for each network node (i.e., the individual, specific rate at which transactions are admitted by each network node for processing) is adaptively controlled on a period-by-period or cycle-by-cycle basis instead of on a transaction-by-transaction basis.
Each period or cycle (the terms period and cycle are used synonymously herein) typically includes multiple transactions. During a given cycle of the adaptive throttling technique, each network node communicates with available peer network nodes to obtain traffic rate data from the peer network nodes. The obtained peer traffic rate data, among other data, enables a given network node to compute its own allowable threshold value. The allowable threshold value for that network node is then employed to govern the number of transactions admitted for processing by that network node in the current period. As will be discussed in detail herein, this determination is based on the total threshold value of admitted transactions for the network nodes as a whole in the current cycle, and based on the traffic rates allowed in the last cycle for the peer network nodes.
By employing period-by-period adaptive throttling, the communication overhead required for coordinating traffic throttling across multiple network nodes is reduced. The reduction in network node communication overhead thus reduces network traffic usage and potential congestion. Concomitantly, network delay is reduced, and overall transaction processing performance is improved.
The features and advantages of the present disclosure may be better understood with reference to the figures and discussions that follow.
In the example of
As shown in
At each network node, a gatekeeper function is implemented to throttle incoming traffic to ensure that transaction processing performance is maximized. Although the examples herein will be discussed in connection with traffic throttling at the network nodes, it should be kept in mind that this traffic throttling function can be readily implemented by one skilled in the art given this disclosure at any suitable functional component in the network, including for example at the applications, at routers or switches in the network, in the middleware layer, etc.
With reference to
Note that this allowed traffic rate may at times differ from the actual traffic rate that arrives at the peer network node awaiting processing. For example, 15 transactions may arrive at a given network node but only 8 transactions may be admitted for processing in a given cycle. In this case, the actual traffic rate is 15 (signifying the number of transactions actually received by the network node) while the allowed traffic rate is 8 (signifying the number of transactions allowed to pass into the network node for processing by the network node's processing logic).
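This distinction between actual and allowed rates can be sketched in a few lines of Python; the function name and the list-based representation of transactions are illustrative assumptions, not part of the disclosure:

```python
def gatekeep(arrivals, mpr):
    """Admit at most `mpr` of the arriving transactions this cycle;
    the remainder are held back by the gatekeeper."""
    admitted = arrivals[:mpr]
    held_back = arrivals[mpr:]
    return admitted, held_back

# 15 transactions arrive, but the allowed traffic rate (MPR) is 8:
admitted, held = gatekeep(list(range(15)), 8)
# actual rate = 15 (received), allowed rate = 8 (admitted)
```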
Thus, as can be seen by reference number 202, a temporary variable X is first computed as

X = TV − ΣPRx

where TV equals the threshold value for the plurality of network nodes, n represents the number of network nodes, and PRx represents the peer allowed traffic rate in the previous cycle for a given network node x, the summation being taken over the peer network nodes.
If temporary variable X≦0 (the condition shown by reference number 204), then MPR=MTV, wherein the Maximum Threshold Value, or MTV, is defined as MTV=TV/n.
On the other hand, if temporary variable X>0 (as shown by reference number 208), then MPR=the value of temporary variable X.
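The conservative rule above can be sketched in Python as follows; the sketch assumes integer rates, that `peer_rates` holds the peer nodes' allowed traffic rates from the previous cycle, and that MTV is TV divided by the number of network nodes (consistent with the worked tables below):

```python
def conservative_mpr(tv, n, peer_rates):
    """Conservative throttling: X = TV minus the sum of the peers'
    allowed rates from the previous cycle. If X <= 0, fall back to
    MTV = TV / n; otherwise the node may admit X transactions."""
    mtv = tv // n
    x = tv - sum(peer_rates)
    return mtv if x <= 0 else x

# TV = 12 across n = 4 nodes; the three peers each admitted 3
# transactions last cycle, so X = 12 - 9 = 3 and MPR = 3.
```

Note that the conservative rule never drives a node's MPR to zero: even when the peers have consumed the entire threshold value, the node falls back to MTV rather than shutting off admission.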
The steps of
Turning now to the aggressive throttling case (
When X≦0 (the condition shown by reference number 214), MPR=0 (as shown by reference number 216). In other words, unlike the conservative throttling case, aggressive throttling shuts down the MPR value to zero when the temporary variable is less than or equal to zero. However, if X>0 (the condition shown by reference number 218), the following considerations apply. If the MPR for this network node in the previous cycle is zero and the temporary variable X=TV for the current cycle (the condition shown by reference number 220), then MPR=MTV, wherein MTV is again equal to TV/n (as shown by reference number 222). On the other hand, if the MPR for the previous cycle is not zero or the temporary variable X≠TV for the current cycle (the condition shown by reference number 224), then MPR=temporary variable X (as shown by reference number 226).
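The aggressive rule can be sketched in Python under the same assumptions as the conservative sketch (integer rates, previous-cycle peer rates in `peer_rates`, MTV = TV/n); `prev_mpr` is this node's own MPR from the previous cycle:

```python
def aggressive_mpr(tv, n, peer_rates, prev_mpr):
    """Aggressive throttling: shut MPR down to zero when X <= 0.
    When X > 0, recover to MTV = TV / n only if this node's previous
    MPR was zero and X equals TV; otherwise MPR = X."""
    mtv = tv // n
    x = tv - sum(peer_rates)
    if x <= 0:
        return 0
    if prev_mpr == 0 and x == tv:
        return mtv
    return x
```

The recovery branch (prev_mpr == 0 and X == TV) prevents all nodes from remaining shut down indefinitely: once the peers report zero allowed traffic, each node reopens at the even share MTV rather than claiming the entire threshold value.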
The Tables that follow provide examples of the aggressive and conservative traffic throttling.
Time is set to be UNIX time (seconds since 1970) for ease of calculation, and a cycle is defined to be (time) minus (time % duration), where the percent sign indicates a remainder (modulo) operation.
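Expressed in Python (the function name is an illustrative choice), the cycle boundary computation is:

```python
import time

def cycle_start(duration_s, now=None):
    """Start of the current cycle: (time) minus (time % duration),
    with time measured as UNIX seconds since 1970."""
    if now is None:
        now = int(time.time())
    return now - (now % duration_s)

# With a 10-second cycle, every instant in [1234567890, 1234567900)
# maps to the same cycle start, so all nodes agree on cycle boundaries
# without exchanging clock data (assuming synchronized clocks).
```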
As shown in the Table of
Under the column “ACTUAL” (314) and in the cycle 1 row, the value “3” represents the number of transactions actually received by the Transaction Throttling Mechanism, or “TTM”, of a given network node. For network node A (336), this actual value represents the number of transactions transmitted by the plurality of applications and destined to be processed by network node A during cycle 1. In this example, the actual value measured for the number of transactions destined for network node B (338) is 3. The actual value measured for the number of transactions destined for network node C (340) is 3. The actual value measured for the number of transactions destined for network node D (342) is 3. Note that these actual transaction values represent only transactions received at the various network nodes in a given cycle (e.g., cycle 1 as indicated by row 1). The number of transactions actually allowed in for processing at each network node during that cycle is computed as follows to implement traffic throttling.
As can be seen in
Furthermore, under the column “TOTAL” (318) and the cycle 1 row, the value therein represents the total number of transactions that is admitted during the current cycle by all network nodes for processing. The total number of transactions actually admitted for processing during cycle 1 is calculated to be 12.
It should be understood that this total allowed transaction value is not the total number of transactions actually being processed by the plurality of network nodes during a given cycle. The total allowed transaction value is the additional number of transactions passed or admitted to the plurality of network nodes for processing during the cycle. Since it is possible that the plurality of network nodes may be processing other transactions allowed in one or more previous cycles, the total number of transactions actually processed by the plurality of network nodes during a given cycle may be greater than the total allowed transaction value for that cycle. For cycle 1, since there was no previous cycle and no pending transactions being processed, the total number of transactions actually processed by the plurality of network nodes during cycle 1 equals the number of transactions allowed during cycle 1. However, such will not always be the case in subsequent cycles.
As shown in the Table of
Furthermore, under the column “ACTUAL” (314) and in the cycle 2 row, the value “4” represents the number of transactions actually received by the Transaction Throttling Mechanism, or “TTM”, which number of transactions represents the transactions transmitted by the plurality of applications to be processed by network node A (336) during cycle 2. In this example, the actual value measured for the number of transactions destined for network node B (338) is 5 during cycle 2. The actual value measured for the number of transactions destined for network node C (340) is 6 during cycle 2. The actual value measured for the number of transactions destined for network node D (342) is 7 during cycle 2.
Furthermore, as can be seen in
The allowed number of transactions processed by network node B (346) is 3 during cycle 2 since the number of transactions actually received (5) is greater than the MPR (3) for network node B during cycle 2. Likewise, the allowed number of transactions processed by network node C (348) is 3 during cycle 2. The allowed number of transactions processed by network node D (350) is also 3 during cycle 2.
Furthermore, under the column “TOTAL” (318) and the cycle 2 row, the value represents the total number of transactions admitted to be processed by all network nodes during cycle 2. The total value of transactions actually allowed for cycle 2 is calculated to be 12. Again, it should be understood that this total allowed transaction value is not the total number of transactions actually processed by the plurality of network nodes during cycle 2. The total allowed transaction value is the additional number of transactions passed to the plurality of network nodes for processing during cycle 2. Since it is possible that the plurality of network nodes may be processing other transactions allowed in one or more previous cycles, the total number of transactions actually processed by the plurality of network nodes during cycle 2 may be greater than the total allowed transaction value shown in
As shown in the Table of
Furthermore, under the column “ACTUAL” (314) and in the cycle 3 row, the value “2” represents the number of transactions actually received by the Transaction Throttling Mechanism, or “TTM”, which number of transactions represents the transactions transmitted by the plurality of applications to be processed by network node A (336) during cycle 3. In this example, the actual value measured for the number of transactions destined for network node B (338) is 3 during cycle 3. The actual value measured for the number of transactions destined for network node C (340) is 4 during cycle 3. The actual value measured for the number of transactions destined for network node D (342) is 5 during cycle 3.
Furthermore, as can be seen in
Furthermore, under the column “TOTAL” (318) and the cycle 3 row, the value 11 represents the total number of transactions admitted to be processed by all network nodes during cycle 3.
As shown in the Table of
Furthermore, under the column “ACTUAL” (314) and in the cycle 4 row, the value “2” represents the number of transactions actually received by the Transaction Throttling Mechanism, or “TTM”, which number of transactions represents the transactions transmitted by the plurality of applications to be processed by network node A (336) during cycle 4. In this example, the actual value measured for the number of transactions destined for network node B (338) is 3 during cycle 4. The actual value measured for the number of transactions destined for network node C (340) is 4 during cycle 4. The actual value measured for the number of transactions destined for network node D (342) is 5 during cycle 4.
Furthermore, as can be seen in
Furthermore, under the column “TOTAL” (318) and the cycle 4 row, the value 13 represents the total number of transactions admitted to be processed by all network nodes during cycle 4. The total value of transactions actually allowed for cycle 4 is calculated to be 13.
As it can be seen in
As shown with the example of
The Table of
As shown in the Table of
Furthermore, as can be seen in
Furthermore, under the column “TOTAL” (410) and the cycle 1 row, the value 48 represents the total number of transactions admitted to be processed by all network nodes. The total value of transactions actually allowed for cycle 1 is calculated to be 48. In this case, all network nodes admit for processing the maximum number of transactions set by the TV during cycle 1.
As shown in the Table of
Furthermore, under the column “ACTUAL” (406) and in the cycle 2 row, the value “12” represents the number of transactions actually received by the Transaction Throttling Mechanism, or “TTM”, which number of transactions represents the transactions transmitted by the plurality of applications to be processed by network node A (420) during cycle 2. In this example, the actual value measured for the number of transactions destined for network node B (422) is 13. The actual value measured for the number of transactions destined for network node C (424) is 14. The actual value measured for the number of transactions destined for network node D (426) is 15.
Furthermore, as can be seen in
Furthermore, under the column “TOTAL” (410) and the cycle 2 row, the value represents the total number of transactions admitted to be processed by all network nodes. The total value of transactions actually allowed for cycle 2 is calculated to be 12.
As shown in the Table of
Furthermore, under the column “ACTUAL” (406) and in the cycle 3 row, the value “2” represents the number of transactions actually received by the Transaction Throttling Mechanism, or “TTM”, which number of transactions represents the transactions transmitted by the plurality of applications to be processed by network node A (420) during cycle 3. In this example, the actual value measured for the number of transactions destined for network node B (422) is 3. The actual value measured for the number of transactions destined for network node C (424) is 2. The actual value measured for the number of transactions destined for network node D (426) is 3.
Furthermore, as can be seen in
Furthermore, under the column “TOTAL” (410) and the cycle 3 row, the value represents the total number of transactions admitted to be processed by all network nodes. The total value of transactions actually allowed for cycle 3 is calculated to be 10.
As shown in the Table of
Furthermore, under the column “ACTUAL” (406) and in the cycle 4 row, the value “5” represents the number of transactions actually received by the Transaction Throttling Mechanism, or “TTM”, which number of transactions represents the transactions transmitted by the plurality of applications to be processed by network node A (420) during cycle 4. In this example, the actual value measured for the number of transactions destined for network node B (422) is 5. The actual value measured for the number of transactions destined for network node C (424) is 5. The actual value measured for the number of transactions destined for network node D (426) is 5.
Furthermore, as can be seen in
Furthermore, under the column “TOTAL” (410) and the cycle 4 row, the value represents the total number of transactions admitted to be processed by all network nodes during cycle 4. The total value of transactions actually allowed for cycle 4 is calculated to be 18.
As shown in the Table of
As it can be seen in
As shown with the example of
The Table of
As shown in the Table of
Furthermore, under the column “ACTUAL” (450) and in the cycle 1 row, the value “4” represents the number of transactions actually received by the Transaction Throttling Mechanism, or “TTM”, which number of transactions represents the transactions transmitted by the plurality of applications to be processed by network node A (464) during cycle 1. In this example, the actual value measured for the number of transactions destined for network node B (468) is 3. The actual value measured for the number of transactions destined for network node C (470) is 2. The actual value measured for the number of transactions destined for network node D (472) is 2.
Furthermore, as can be seen in
Furthermore, under the column “TOTAL” (454) and the cycle 1 row, the value represents the total number of transactions admitted to be processed by all network nodes. The total value of transactions actually allowed for cycle 1 is calculated to be 9.
As shown in the Table of
Furthermore, under the column “ACTUAL” (450) and in the cycle 2 row, the value “2” represents the number of transactions actually received by the Transaction Throttling Mechanism, or “TTM”, which number of transactions represents the transactions transmitted by the plurality of applications to be processed by network node A (464) during cycle 2. In this example, the actual value measured for the number of transactions destined for network node B (468) is 2. The actual value measured for the number of transactions destined for network node C (470) is 2. The actual value measured for the number of transactions destined for network node D (472) is 2.
Furthermore, as can be seen in
Furthermore, under the column “TOTAL” (454) and the cycle 2 row, the value represents the total number of transactions admitted to be processed by all network nodes. The total value of transactions actually allowed for cycle 2 is calculated to be 8.
As shown in the Table of
Furthermore, under the column “ACTUAL” (450) and in the cycle 3 row, the value “2” represents the number of transactions actually received by the Transaction Throttling Mechanism, or “TTM”, which number of transactions represents the transactions transmitted by the plurality of applications to be processed by network node A (464) during cycle 3. In this example, the actual value measured for the number of transactions destined for network node B (468) is 2. The actual value measured for the number of transactions destined for network node C (470) is 2. The actual value measured for the number of transactions destined for network node D (472) is 2.
Furthermore, as can be seen in
Furthermore, under the column “TOTAL” (454) and the cycle 3 row, the value represents the total number of transactions admitted to be processed by all network nodes. The total value of transactions actually allowed for cycle 3 is calculated to be 8.
As shown in the Table of
Furthermore, as can be seen in
Furthermore, under the column “TOTAL” (454) and the cycle 4 row, the value represents the total number of transactions admitted to be processed by all network nodes. The total value of transactions actually allowed for cycle 4 is calculated to be 24.
As shown in the Table of
Furthermore, as can be seen in
Furthermore, under the column “TOTAL” (454) and the cycle 5 row, the value represents the total number of transactions admitted to be processed by all network nodes. The total value of transactions actually allowed for cycle 5 is calculated to be 12.
As it can be seen in
As shown with the example of
The Table of
As shown in the Table of
Furthermore, as can be seen in
Furthermore, under the column “TOTAL” (510) and the cycle 1 row, the value represents the total number of transactions admitted to be processed by all network nodes. The total value of transactions actually allowed for cycle 1 is calculated to be 9.
As shown in the Table of
Furthermore, under the column “ACTUAL” (506) and in the cycle 2 row, the value “2” represents the number of transactions actually received by the Transaction Throttling Mechanism, or “TTM”, which number of transactions represents the transactions transmitted by the plurality of applications to be processed by network node A (520) during cycle 2. In this example, the actual value measured for the number of transactions destined for network node B (522) is 2. The actual value measured for the number of transactions destined for network node C (524) is 2. The actual value measured for the number of transactions destined for network node D (526) is 2.
Furthermore, as can be seen in
Furthermore, under the column “TOTAL” (510) and the cycle 2 row, the value represents the total number of transactions admitted to be processed by all network nodes. The total value of transactions actually allowed for cycle 2 is calculated to be 8.
As shown in the Table of
Furthermore, under the column “ACTUAL” (506) and in the cycle 3 row, the value “6” represents the number of transactions actually received by the Transaction Throttling Mechanism, or “TTM”, which number of transactions represents the transactions transmitted by the plurality of applications to be processed by network node A (520) during cycle 3. In this example, the actual value measured for the number of transactions destined for network node B (522) is 6. The actual value measured for the number of transactions destined for network node C (524) is 6. The actual value measured for the number of transactions destined for network node D (526) is 6.
Furthermore, as can be seen in
Furthermore, under the column “TOTAL” (510) and the cycle 3 row, the value represents the total number of transactions admitted to be processed by all network nodes. The total value of transactions actually allowed for cycle 3 is calculated to be 24.
As shown in the Table of
Furthermore, under the column “ACTUAL” (506) and in the cycle 4 row, the value “2” represents the number of transactions actually received by the Transaction Throttling Mechanism, or “TTM”, which number of transactions represents the transactions transmitted by the plurality of applications to be processed by network node A (520) during cycle 4. In this example, the actual value measured for the number of transactions destined for network node B (522) is 3. The actual value measured for the number of transactions destined for network node C (524) is 4. The actual value measured for the number of transactions destined for network node D (526) is 5.
Furthermore, as can be seen in
Furthermore, under the column “TOTAL” (510) and the cycle 4 row, the value represents the total number of transactions admitted to be processed by all network nodes. The total value of transactions actually allowed for cycle 4 is calculated to be 0.
As shown in the Table of
Furthermore, under the column “ACTUAL” (506) and in the cycle 5 row, the value “2” represents the number of transactions actually received by the Transaction Throttling Mechanism, or “TTM”, which number of transactions represents the transactions transmitted by the plurality of applications to be processed by network node A (520) during cycle 5. In this example, the actual value measured for the number of transactions destined for network node B (522) is 3. The actual value measured for the number of transactions destined for network node C (524) is 4. The actual value measured for the number of transactions destined for network node D (526) is 5.
Furthermore, as can be seen in
Furthermore, under the column “TOTAL” (510) and the cycle 5 row, the value represents the total number of transactions admitted to be processed by all network nodes. The total value of transactions actually allowed for cycle 5 is calculated to be 11.
As it can be seen in
As shown with the example of
For completeness, a plurality of arrows depicting communication paths among network nodes 608, 610, 612, and 614 are shown to signify that the network nodes can exchange data regarding traffic usage by the applications when traffic throttling is performed by the network nodes. It should be kept in mind that although the traffic throttling has been discussed for traffic from individual applications, such throttling may be performed on any type or class of transactions. By way of example, traffic throttling may be performed based on a certain type of traffic from one or more of the applications or may be based on traffic from one or more users. Further, traffic throttling may be performed on a combination of specific users and/or specific application request types. As another example, traffic throttling may be performed for traffic that is exchanged with only certain types or classes of applications. Accordingly, any combination of users and/or request types and/or other parameters may be specified to be throttled.
Based on this configuration value TV and the peer traffic rates from peer network nodes for the last communication cycle (obtained via peer communication module 618), the MPR value for the network node is set to govern the traffic admittance rate in the current cycle for network node 616. These traffic rate values are computed in threshold stats block 622 in the example of
In the peer network node 626, gatekeeper block 634, threshold stats block 632 and configuration block 630 perform various functions for network node 626, which functions are analogous to functions performed by counterpart blocks in network node 616. Based on this configuration value, the traffic rates from peer network nodes (obtained from peer communication module 628) and in some cases the traffic rate from the past cycle, the MPR value for the network node is set to govern the incoming traffic rate in the current cycle for network node 626. These traffic rate values are computed in threshold stats block 632 in the example of
As can be appreciated from the foregoing, adaptive throttling on a period-by-period basis can effectively control the traffic load across multiple network nodes. For both the aggressive and conservative traffic throttling case, the amount of data exchanged among the network nodes to accomplish adaptive throttling is fairly minimal since only the peer allowed rate in the previous cycle is exchanged. Accordingly, network bandwidth overhead is minimized, leading to improved performance.
The flow chart of
In block 710, traffic rate data from available peer network nodes is obtained by a peer communication module 628. Based on the traffic rate data obtained by the peer communication module 628, a maximum permissible rate for a network node is determined (720) by a threshold stats block or module 622. The maximum permissible rate represents a maximum number of transactions permitted to pass into that network node for processing during a current period. The gatekeeper module 624 employs (730) the maximum permissible rate to govern a number of transactions admitted for processing by the network node in the current period.
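The per-cycle flow of blocks 710, 720, and 730 might be tied together as in the following sketch; the class and attribute names are hypothetical stand-ins for the peer communication, threshold stats, and gatekeeper modules, and the conservative throttling rule is assumed:

```python
class Peer:
    """Minimal stand-in for a peer node exposing its allowed
    traffic rate from the previous cycle."""
    def __init__(self, last_allowed):
        self.last_allowed = last_allowed

class ThrottledNode:
    def __init__(self, tv, n):
        self.tv, self.n = tv, n  # configured threshold value, node count

    def run_cycle(self, peers, arrivals):
        # Block 710: obtain peer traffic rate data (peer communication).
        peer_rates = [p.last_allowed for p in peers]
        # Block 720: compute the MPR (threshold stats, conservative rule).
        x = self.tv - sum(peer_rates)
        mpr = self.tv // self.n if x <= 0 else x
        # Block 730: gatekeeper admits at most MPR transactions.
        return arrivals[:mpr]

node = ThrottledNode(tv=12, n=4)
admitted = node.run_cycle([Peer(3), Peer(3), Peer(3)], list(range(5)))
# X = 12 - 9 = 3, so 3 of the 5 arriving transactions are admitted
```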
While the present disclosure has been described in terms of several embodiments, there are alterations, permutations, and equivalents that fall within its scope. Also, the title, summary, and abstract are provided herein for convenience and should not be used to construe the scope of the claims. Further, in this application, a set of "n" refers to one or more "n" in the set. It should also be noted that there are many alternative ways of implementing the methods and apparatuses of the present disclosure. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations, and equivalents as fall within the true spirit and scope of the present disclosure.
Claims
1. A method comprising:
- obtaining traffic rate data from available peer network nodes;
- computing a maximum permissible rate for a network node based on the traffic rate data from the peer network nodes, wherein the maximum permissible rate represents a maximum number of transactions permitted to pass into that network node for processing during a current period; and
- employing the maximum permissible rate to govern a number of transactions admitted for processing by the network node in the current period.
2. The method of claim 1, wherein the maximum permissible rate is based on a total threshold value of admitted transactions for network nodes as a whole in a current period and further based on traffic rates allowed in a prior period for the peer network nodes.
3. The method of claim 1, wherein each of the peer network nodes computes a maximum permissible rate to be applied for its own use.
4. The method of claim 2, wherein the total threshold value is configurable.
5. The method of claim 1, wherein implementation of governing of the number of transactions admitted for processing by the network node is activated based on which users make a transaction request.
6. The method of claim 1, wherein implementation of governing of the number of transactions admitted for processing by the network node is activated based on which type of application makes a transaction request.
7. The method of claim 1, further comprising:
- inhibiting or allowing a transaction to be processed based on whether the maximum permissible rate for a current period has been satisfied for the network node.
8. A system comprising:
- a peer communication module configured to obtain traffic rate data from available peer network nodes;
- a threshold stats module configured to compute a maximum permissible rate for a network node based on the traffic rate data from the peer network nodes, wherein the maximum permissible rate represents a maximum number of transactions permitted to pass into that network node for processing during a current period; and
- a gatekeeper module configured to employ the maximum permissible rate to govern a number of transactions admitted for processing by the network node in a current period.
9. The system of claim 8, wherein the system is located at the network node.
10. The system of claim 8, wherein the maximum permissible rate is based on a total threshold value of admitted transactions for network nodes as a whole in a current period and further based on traffic rates allowed in a prior period for the peer network nodes.
11. The system of claim 10, wherein the total threshold value is configurable.
12. The system of claim 8, wherein implementation of governing of the number of transactions admitted for processing by the network node is activated based on which users make a transaction request.
13. The system of claim 8, wherein implementation of governing of the number of transactions admitted for processing by the network node is activated based on which type of application makes a transaction request.
14. The system of claim 8, wherein the gatekeeper module is further configured to:
- inhibit or allow a transaction to be processed based on whether the maximum permissible rate for a current period has been satisfied for the network node.
15. A system comprising:
- means for obtaining traffic rate data from available peer network nodes;
- means for computing a maximum permissible rate for a network node based on the traffic rate data from the peer network nodes, wherein the maximum permissible rate represents a maximum number of transactions permitted to pass into that network node for processing during a current period; and
- means for employing the maximum permissible rate to govern a number of transactions admitted for processing by the network node in the current period.
16. The system of claim 15, wherein the maximum permissible rate is based on a total threshold value of admitted transactions for network nodes as a whole in a current period and further based on traffic rates allowed in a last period for the peer network nodes.
17. The system of claim 15, wherein each of the peer network nodes computes a maximum permissible rate to be applied for its own use.
18. The system of claim 15, wherein implementation of governing of the number of transactions admitted for processing by the network node is activated based on which users make a transaction request.
19. The system of claim 15, wherein implementation of governing of the number of transactions admitted for processing by the network node is activated based on which type of application makes a transaction request.
20. The system of claim 15, further comprising:
- means for inhibiting or allowing a transaction to be processed based on whether the maximum permissible rate for a current period has been satisfied for the network node.
Type: Application
Filed: Apr 4, 2008
Publication Date: Jun 3, 2010
Patent Grant number: 8553573
Applicant: HEWLETT PACKARD DEVELOPMENT COMPANY, L.P. (Houston, TX)
Inventor: Vasu Sasikanth Sankhavaram (Mt. View, CA)
Application Number: 12/594,245
International Classification: H04L 12/56 (20060101);