Methods, systems, and computer readable media for generating simulated network traffic using different traffic flows and maintaining a configured distribution of traffic between the different traffic flows and a device under test

Methods, systems, and computer readable media for generating simulated network traffic from a plurality of different traffic flows and maintaining a configured distribution among the flows are disclosed. One exemplary method includes determining a number of operations per flow for each of a plurality of flows that generate simulated network traffic between the flows and a device under test. The method further includes determining a desired traffic distribution among the traffic generated by the traffic flows. The method further includes assigning a weight to each flow that determines the number of times to execute each flow during execution of a batch of flows. The method further includes executing the flows in batches according to the assigned weights to generate the desired distribution of traffic between the different flows and the device under test.

Description
TECHNICAL FIELD

The subject matter described herein relates to generating simulated network traffic. More particularly, the subject matter described herein relates to methods, systems, and computer readable media for generating simulated network traffic using different traffic flows and maintaining a configured distribution of traffic between the different traffic flows and a device under test.

BACKGROUND

Some network components, such as firewalls, network address translators (NATs), intrusion detection systems (IDSs), intrusion protection systems (IPSs), deep packet inspection (DPI) devices, wide area network (WAN) optimization devices, layer 7 acceleration devices, and server load balancers, will see a diverse mix of network traffic in operation. Accordingly, before deploying such equipment in a live network, it is desirable to test the equipment with a traffic mix that is representative of the traffic mix that the equipment will see in operation. For example, it may be desirable to test a firewall by repeatedly sending traffic from different applications through the firewall. Each application may generate different amounts of traffic and may operate independently of other applications that generate simulated traffic to test the functionality of the firewall. It may be desirable to maintain a desired distribution of traffic among the applications. However, without some regulation of transmissions by the applications, a desired traffic distribution cannot be maintained.

Accordingly, there exists a long-felt need for methods, systems, and computer readable media for generating simulated traffic using different traffic flows and for maintaining a configured traffic distribution between the traffic flows and a device under test.

SUMMARY

Methods, systems, and computer readable media for generating simulated network traffic between a plurality of different traffic flows and a device under test and maintaining a configured distribution between the flows and the device under test are disclosed. One exemplary method includes determining a number of operations per flow for each of a plurality of flows that generate simulated network traffic between each flow and a device under test. The method further includes determining a desired traffic distribution among the traffic generated by the traffic flows. The method further includes assigning a weight to each flow that determines the number of times to execute each flow during execution of a batch of flows. The method further includes executing the flows in batches according to the assigned weights to generate the desired distribution of traffic between the different flows and the device under test.

The subject matter described herein can be implemented using a non-transitory computer readable medium having stored thereon executable instructions that when executed by the processor of a computer control the computer to perform steps. Exemplary computer readable media suitable for implementing the subject matter described herein include chip memory devices, disk memory devices, programmable logic devices, and application specific integrated circuits. In one example, the subject matter described herein can be implemented by a processor and a memory, where the memory stores instructions executable by the processor for implementing the subject matter described herein. In addition, a computer readable medium that implements all or part of the subject matter described herein can be located on a single device or computing platform or may be distributed across multiple devices or computing platforms.

BRIEF DESCRIPTION OF THE DRAWINGS

Preferred embodiments of the subject matter described herein will now be explained with reference to the accompanying drawings, of which:

FIG. 1 is a block diagram of a system for generating simulated traffic using different traffic flows and maintaining a configured traffic distribution between the traffic flows and a device under test according to an embodiment of the subject matter described herein;

FIG. 2 is a flow chart illustrating exemplary steps of a process for generating simulated traffic using different traffic flows and maintaining a configured traffic distribution between the flows and a device under test according to an embodiment of the subject matter described herein; and

FIG. 3 is a flow diagram illustrating execution of batches that may be performed by a network traffic emulator according to an embodiment of the subject matter described herein.

DETAILED DESCRIPTION

The subject matter described herein can be used for any traffic emulation system where the primary objective is to emulate multiple traffic profiles generating traffic at different rates following a certain configured ratio or percentage distribution.

In an application/layer 7 traffic emulation system, one of the most common objectives is to generate traffic at a certain rate configured by the user of the system. The rates covered in this document may be any one of the following:

1. Connection Rate/Connections Initiated Per Second (CPS)

2. Transaction Rate/Transactions Initiated Per Second (TPS)

3. Throughput/Bytes exchanged per second (TPUT)

For the sake of simplicity, we can generalize these traffic emulation objectives as operations per second (OPS). The operation can be one connection initiation, one transaction initiation, or one byte exchanged for the CPS, TPS, and TPUT objectives, respectively.

When a traffic emulation system emulates multiple traffic profiles (either multiple application profiles, multiple protocol profiles, or a mix of application and protocol profiles) simultaneously, the user of the system may need each profile to generate a fixed percentage of the total OPS. In other words, a configured OPS ratio should be maintained across the profiles.

FIG. 1 is a block diagram of a traffic emulation system that generates simulated network traffic between a plurality of traffic flows and a device under test and maintains a configured distribution among the flows according to an embodiment of the subject matter described herein. Referring to FIG. 1, a traffic emulator 100 sends simulated traffic to and receives traffic from a device under test 102. Device under test 102 may be any suitable network device whose functionality is being tested. Examples of such devices include NATs, firewalls, IDSs, IPSs, DPI devices, server load balancers, layer 7 accelerators, such as video, storage, or cloud accelerators, WAN optimizers, etc.

Traffic emulator 100 includes traffic flows 104, 106, and 108 that generate different types of traffic between the flows and device under test 102. Each flow 104, 106, and 108 may emulate an application, a protocol, or both. For example, flow 104 may emulate Google traffic, flow 106 may emulate Facebook traffic, and flow 108 may emulate streaming video traffic. If flows 104, 106, and 108 emulate protocols, flow 104 may emulate TCP/IP, flow 106 may emulate SCTP/IP, and flow 108 may emulate UDP/IP.

Each flow 104, 106, and 108 may generate a fixed amount of traffic when executed. For example, flow 104 may generate 500 bytes of information per execution, flow 106 may generate 1,000 bytes per execution, and flow 108 may generate 300 bytes per execution. It may be desirable to execute flows 104, 106, and 108 repeatedly. It may also be desirable to maintain a traffic distribution among the flows. For example, it may be desirable to generate 10% of the traffic from flow 104, 30% from flow 106, and 60% from flow 108.

The desired traffic distribution may be unidirectional or bi-directional. Continuing with the previous example, 10%, 30% and 60% may represent the desired bi-directional (i.e., transmitted and received) traffic generated between each flow and the device under test. Alternatively, if the flows are unidirectional (i.e., transmit only or receive only), the desired distribution may be the desired unidirectional traffic generated by a flow relative to the unidirectional traffic generated by all of the flows.

In order to coordinate execution of the flows so that the desired traffic mix is maintained, traffic flow execution controller 110 is provided. Traffic flow execution controller 110 receives user input regarding the desired traffic distribution across the traffic flows. Traffic flow execution controller 110 may also determine the number of operations per execution per flow by executing each flow and maintaining a count of operations (e.g., bytes transmitted, connection initiations, bytes received, bytes transmitted and received, etc.) per flow. Using this input, traffic flow execution controller 110 assigns a weight to each flow that determines the number of times to execute each flow during execution of a batch and executes the flows in batches according to the assigned weights to generate the desired distribution of traffic between the different flows and device under test 102. Traffic emulator 100 may further include an outgoing traffic monitor 112 that monitors traffic output by device under test 102.

FIG. 2 is a flow chart illustrating exemplary steps for generating simulated network traffic from a plurality of different traffic flows and maintaining a configured distribution among the flows according to an embodiment of the subject matter described herein. Referring to FIG. 2, in step 200, a number of operations per execution per flow is determined for each of a plurality of flows that generate different traffic profiles for testing a device under test. For example, traffic flow execution controller 110 may determine the number of operations performed by each of flows 104, 106, and 108. Traffic flow execution controller 110 may make this determination automatically by monitoring each flow or by receiving user input regarding the number of operations performed by each flow. For automatic determination, traffic flow execution controller 110 may monitor the operations per execution per flow periodically, and if the number of operations per execution per flow changes, traffic flow execution controller 110 can modify the weights using the new values. If traffic flow execution controller 110 observes that a flow is not performing any operations (for example, if an IDS blocks a particular flow), then traffic flow execution controller 110 may remove that flow from the execution and recompute the weights Wi for the remaining flows to maintain their relative percentages. If the blocked flow is allowed at a later point in time, traffic flow execution controller 110 may add it back to the execution and recompute the Wi values for all of the flows according to the original percentages.
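By way of illustration only, a minimal Python sketch of such renormalization follows; the function name, data layout, and the idea of using observed operation counts as the removal criterion are our own assumptions, not the patented implementation.

```python
# Hypothetical sketch (names and data layout are our assumptions, not the
# patented implementation): drop flows that performed no operations in the
# last monitoring interval and renormalize the remaining configured shares.
def active_shares(shares, ops_observed):
    # shares[i]: configured percentage for flow i
    # ops_observed[i]: operations observed for flow i in the last interval
    kept = {i: s for i, (s, n) in enumerate(zip(shares, ops_observed)) if n > 0}
    total = sum(kept.values())
    return {i: 100.0 * s / total for i, s in kept.items()}

# Example: flow 1 (30%) is blocked by an IDS; flows 0 and 2 keep their
# relative proportions (10/70 and 60/70 of the remaining traffic).
print(active_shares([10, 30, 60], [1200, 0, 9000]))
```

When the blocked flow becomes active again, the original configured shares are simply restored and the weights recomputed.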

In step 202, a desired traffic distribution among the flows is determined. For example, traffic flow execution controller 110 may receive user input regarding the percentage distribution that it is desired to maintain among the flows. For example, it may be desirable to generate 10% of the traffic from flow 104, 30% from flow 106, and 60% from flow 108.

In step 204, a weight is assigned to each flow that determines the number of times to execute each flow during execution of a batch of flows. Exemplary weight calculations will be described in detail below. Traffic flow execution controller 110 may calculate these weights using any of the algorithms described herein.

In step 206, the flows are executed in batches according to the assigned weights to generate a distribution of traffic between the different flows and the device under test (to and from, to, or from the device under test) that maintains the desired traffic distribution. For example, traffic flow execution controller 110 may execute flows 104, 106, and 108 according to their assigned weights during each batch to maintain the desired traffic distribution.

In step 208, outgoing traffic from the device under test is monitored to determine a performance metric of the device under test. For example, if the device under test is a firewall, outgoing traffic monitor 112 may verify operation of the firewall by confirming that the correct traffic is blocked or passed by the firewall. In a load or stress testing scenario, outgoing traffic monitor 112 may monitor the throughput of the firewall for traffic that the firewall is supposed to pass.

In this document, the term flow is used to refer to a traffic generator that generates a well-defined traffic profile.

The subject matter described herein assumes that each flow is a defined sequence of actions. Each action performs a defined or quantifiable number of operations. Based on the number of operations performed by each action, it is possible to determine the number of operations that will be performed per execution of a flow (OPF). A ratio (R) may be assigned to each flow, where the ratio R is the total operations performed by one flow in a given batch over the total operations performed by all flows in the batch.

From the OPF and R associated with a flow, a weight (W) is computed for each flow. W is computed in such a way that the (W*OPF) values across the flows follow the configured ratio. In other words, for a flow i, Ri, the desired fraction of traffic for flow i, equals

$$R_i = \frac{W_i O_i}{\sum_{j=1}^{N} W_j O_j}$$

where Wi is the weight for flow i, Oi is the number of operations per execution of flow i, and N is the total number of flows.

A batch of flows is defined in such a way that each single execution of the batch executes the ith flow Wi times, where Wi may be different for different flows. This way the number of operations performed by the flows in a single execution of the batch follows the configured ratio/percentage in a deterministic way (without using random selection).

With the batch properly defined, multiple threads of execution are started in parallel. Each thread executes the same batch again and again in sequence. That way, if T threads are running in parallel, then T batches may be executed in parallel, and a large number of batches may be executed over time.

The number T is varied to determine the value at which the total OPS is maximized. This is the same as saying that the number of batches per second (BPS) is maximized.

Now, whatever the maximum BPS may be, since the configured ratios are maintained within each batch boundary, the ratios will automatically be maintained across BPS batches. If BPS is a sufficiently large integer, then the ratios will also be maintained per second.

If the lifetime of a batch is so long that it does not finish within one second, then BPS will be a fraction. In that case, the ratios may not be maintained within one-second intervals but will be maintained over a longer interval, and hence will be maintained on average if the emulation is run for a significant amount of time.
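As an illustration of this threading model, here is a minimal Python sketch (our own construction; names such as execute_batch and my_batch are assumptions) that runs T threads over the same batch and samples BPS so that different values of T can be compared:

```python
import threading, time

def run_batches(execute_batch, stop, counter, lock):
    # Each worker executes the same batch repeatedly until told to stop.
    while not stop.is_set():
        execute_batch()              # one batch preserves the configured ratios
        with lock:
            counter[0] += 1

def measure_bps(execute_batch, num_threads, seconds=5.0):
    stop, lock, counter = threading.Event(), threading.Lock(), [0]
    workers = [threading.Thread(target=run_batches,
                                args=(execute_batch, stop, counter, lock))
               for _ in range(num_threads)]
    for w in workers:
        w.start()
    time.sleep(seconds)
    stop.set()
    for w in workers:
        w.join()
    return counter[0] / seconds      # total batches per second at this T

# Sweep T to find the thread count that maximizes BPS (and hence OPS), e.g.:
# best_T = max(range(1, 17), key=lambda t: measure_bps(my_batch, t))
```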

The following items will now be described. Together these items ensure that the emulated traffic follows the configured OPS ratios in a deterministic fashion:

    • 1. How the W values are calculated for each flow.
    • 2. How the batch is executed.

The following description includes two approaches for computing the W values. Depending on the way the W values are computed, the execution logic of the batch may vary. How the batch execution logic varies with the W value computation method is also described below.

The first approach computes W values in such a way that they are always integers. In the second approach, the calculated W values may be real numbers with fractions and the execution process is modified to handle those.

The following terms and variables will be used in describing the subject matter described herein:

  • N: Number of flows in the configuration.
  • Oi: The number of operations performed per execution of the ith flow (OPF), for all i=1, 2, . . . , N.
  • Ri: The ratio of OPS that should be maintained by the ith flow, for all i=1, 2, . . . , N.
  • Wi: The weight calculated for the ith flow, for all i=1, 2, . . . , N.
  • LCM(x,y): The least common multiple of two integers x and y.
  • GCF(x,y): The greatest common factor of two integers x and y.
  • LCM(X1, X2, . . . , XN): LCM(LCM(X1, X2, . . . , XN−1), XN)
First Approach

Calculating the Wi Values:

Here we calculate the Wi values as described below.

Here we may assume that Ri is an integer, for all i=1, 2, . . . , N. If some are not, it is always possible to find a multiplier that converts all of the ratios to integers without changing the distribution.

If the Ri values have a common factor, then we may first divide the Ri values by that common factor to reduce them before applying them.

We want the Wi values to be as small as possible.

We calculate Di as shown below, for all i=1, 2, . . . , N:
Di=LCM(Oi,Ri)/Ri

Here we may note that Di will always be an integer.

After that, we calculate D as LCM(D1, D2, . . . , DN).

Finally Wi is calculated as Wi=(D*Ri)/Oi, for all i=1, 2, . . . , N.

Clearly W1O1, W2O2, . . . , WNON will follow the ratios R1:R2: . . . :RN.

Here we may also note that D is divisible by Di, and hence (D*Ri) is divisible by (Di*Ri)=LCM(Oi, Ri), which in turn is divisible by Oi. This means that Wi is always an integer.
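The following minimal Python sketch illustrates this calculation (the function and helper names are our own; this is an illustration, not the patented code):

```python
# Minimal sketch of the first approach (helper names are ours): compute integer
# weights Wi from operations-per-execution Oi and reduced integer ratios Ri.
from math import gcd
from functools import reduce

def lcm(x, y):
    # Standard LCM via GCF (see the note at the end of this section).
    return x * y // gcd(x, y)

def integer_weights(O, R):
    D_i = [lcm(o, r) // r for o, r in zip(O, R)]   # Di = LCM(Oi, Ri) / Ri
    D = reduce(lcm, D_i)                           # D = LCM(D1, ..., DN)
    return [D * r // o for o, r in zip(O, R)]      # Wi = (D * Ri) / Oi

# The worked example below: O = (300, 500, 900), R = (2, 5, 3) -> W = [2, 3, 1]
print(integer_weights([300, 500, 900], [2, 5, 3]))
```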

Executing a Batch:

Now we define a batch in such a way that one single execution of the batch executes the ith flow Wi times, for all i=1, 2, . . . , N.

One simple approach is to first run Flow 1 W1 times, then run Flow 2 W2 times, and so on, finally running Flow N WN times.

This approach has one issue: it will generate traffic from the different traffic profiles in big bursts.

So the batch can be executed in an interleaved fashion as described below:

  • Step 1: Reset all Wi to the calculated values.
  • Step 2: Continue while at least one Wi is non-zero:
    • Step 2.1: Start from i=1 and continue until i=N:
      • Step 2.1.1: If Wi is non-zero:
        • Step 2.1.1.1: Execute Flow i.
        • Step 2.1.1.2: Decrement Wi by 1.

In one exemplary implementation, Step 2.1 may be optimized by iterating over a set of flows which is gradually reduced by removing flows for which Wi becomes zero.
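A minimal Python sketch of this interleaved execution, including the set-shrinking optimization (the execute callback and names are our own assumptions):

```python
# Minimal sketch (our names, not the patented code) of interleaved batch
# execution with the shrinking-set optimization described above.
def execute_batch(flows, weights, execute):
    # flows: flow identifiers; weights: integer Wi per flow; execute(flow): run once
    remaining = {f: w for f, w in zip(flows, weights) if w > 0}
    while remaining:
        for flow in list(remaining):   # one interleaved pass over active flows
            execute(flow)
            remaining[flow] -= 1
            if remaining[flow] == 0:
                del remaining[flow]    # drop flows whose count reached zero

# With W = (2, 3, 1), this yields the order F1 F2 F3 F1 F2 F2, as in FIG. 3.
execute_batch(["F1", "F2", "F3"], [2, 3, 1], lambda f: print(f, end=" "))
```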

The following example illustrates an exemplary calculation of weights to be assigned to flows in a batch.

Let's consider 3 flows and the test objective to be throughput. Let's assume Flow 1, Flow 2 and Flow 3 exchange 300 bytes, 500 bytes and 900 bytes respectively. Let's say we have to maintain a throughput distribution of 20%, 50% and 30% respectively across Flow 1, Flow 2 and Flow 3.

Here O1, O2 and O3 are 300, 500 and 900 respectively. Reduced R1, R2 and R3 values are 2, 5 and 3 respectively.
D1=LCM(R1,O1)/R1=300/2=150
D2=LCM(R2,O2)/R2=500/5=100
D3=LCM(R3,O3)/R3=900/3=300
D=LCM(D1,D2,D3)=300
W1=(D*R1)/O1=600/300=2
W2=(D*R2)/O2=1500/500=3
W3=(D*R3)/O3=900/900=1

The batch will be executed as shown in FIG. 3. More particularly, FIG. 3 is a flow diagram illustrating one batch of execution of flows 1, 2, and 3, in which the batch executes the flows in the order F1, F2, F3, F1, F2, F2, which maintains the desired traffic distribution of 20%, 50%, and 30%, respectively.

To calculate LCM(x, y), we first calculate GCF(x, y), since the GCF calculation is faster than the LCM calculation, and then we calculate LCM(x, y) as (x*y)/GCF(x, y). This way of calculating the LCM is standard.
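A small sketch of this standard computation, using Python's built-in GCF routine math.gcd (illustrative only):

```python
# Illustrative only: LCM via GCF, using Python's built-in math.gcd.
from math import gcd

def lcm(x, y):
    return (x * y) // gcd(x, y)

print(lcm(150, 100))  # 300, as in the worked example above
```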

Second Approach

Calculating the Wi Values:

We need to compute the weights Wi for each flow such that:
W1*O1 : W2*O2 : W3*O3 : . . . : WN*ON = R1 : R2 : R3 : . . . : RN
This can be represented as N−1 individual equations:

$$\frac{W_2 O_2}{R_2} = \frac{W_1 O_1}{R_1},\qquad \frac{W_3 O_3}{R_3} = \frac{W_1 O_1}{R_1},\qquad \ldots,\qquad \frac{W_i O_i}{R_i} = \frac{W_1 O_1}{R_1},\qquad \ldots,\qquad \frac{W_N O_N}{R_N} = \frac{W_1 O_1}{R_1}$$

The equations can be rearranged to get the values of Wi in terms of R1, O1, W1, Ri and Oi.

$$W_i = \frac{R_i}{O_i} \cdot \frac{O_1}{R_1} \cdot W_1$$

We can assume any value of W1 and compute the values of all other weights W2, W3, . . . WN. For simplicity, W1 can be assumed to be 1. This leads to the following equation for computing Wi, for all i=2,3, . . . , N:

$$W_i = \frac{R_i}{O_i} \cdot \frac{O_1}{R_1}$$

In order to ensure that the weights are all at least 1, we must choose the first flow carefully: the first flow should be the one with the smallest (Ri/Oi) value.
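A minimal Python sketch of this weight computation (the function name and the example ratios are our own assumptions):

```python
# Hypothetical sketch of the second approach (names are ours): choose the flow
# with the smallest Ri/Oi as the base flow so that all weights come out >= 1.
def fractional_weights(O, R):
    base = min(range(len(O)), key=lambda i: R[i] / O[i])  # smallest Ri/Oi
    scale = O[base] / R[base]                             # O1/R1, with W1 = 1
    return [(R[i] / O[i]) * scale for i in range(len(O))]

# Equal 1:1:1 shares across flows exchanging 300, 500, and 900 bytes:
print(fractional_weights([300, 500, 900], [1, 1, 1]))  # [3.0, 1.8, 1.0]
```

Note that, unlike the first approach, the resulting weights (here 1.8 for the second flow) are generally fractional.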

Executing a Batch:

Unlike the first approach, the equation for computing the weights in the second approach does not guarantee integer weight values. This requires a change in how the batch is executed relative to the first approach.

Like the first approach, we can execute the batch by interleaving the flows. However, as the weights Wi can be fractional numbers, we have to use the integer parts of the weights, [Wi], in executing the batch, where [ ] denotes the greatest integer smaller than or equal to the value. Using an integer smaller than the weight Wi introduces an error for each batch:
Δi=Wi−[Wi]

If left uncorrected, this error can add up, causing the actual distribution to drift from the configured ratios. In order to correct this error, we retain each Δi at the end of each batch. Before executing the next batch, Wi is adjusted to Wi′ using the following equation:
Wi′=Wi+Δi
The adjusted [Wi′] value is used as the weight for the ith flow in the next batch.

Similarly, the error between Wi′ and [Wi′] is used as the error for the subsequent batch. This approach guarantees that the error never grows unbounded: eventually Wi′ will become greater than or equal to [Wi]+1, which means that [Wi′] will be [Wi]+1. As a result, the error Δi always remains between 0 and 1.
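Combining the floor-and-carry correction with the interleaved execution from the first approach, a minimal Python sketch (our own illustration, with assumed names) could look like this:

```python
import math

# Hypothetical sketch: execute batches with fractional weights, flooring each
# adjusted weight and carrying the fractional error (the Δi above) forward.
def run_fractional_batches(flows, weights, execute, num_batches):
    carry = [0.0] * len(flows)                              # Δi between batches
    for _ in range(num_batches):
        adjusted = [w + c for w, c in zip(weights, carry)]  # Wi' = Wi + Δi
        counts = [math.floor(a) for a in adjusted]          # [Wi'] for this batch
        carry = [a - n for a, n in zip(adjusted, counts)]   # next Δi, in [0, 1)
        remaining = {f: n for f, n in zip(flows, counts) if n > 0}
        while remaining:                                    # interleaved execution
            for flow in list(remaining):
                execute(flow)
                remaining[flow] -= 1
                if remaining[flow] == 0:
                    del remaining[flow]

# With weights (3.0, 1.8, 1.0), F2 runs twice in most batches and once in the
# others, so the 3 : 1.8 : 1 ratio is maintained over time.
run_fractional_batches(["F1", "F2", "F3"], [3.0, 1.8, 1.0],
                       lambda f: print(f, end=" "), num_batches=5)
```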

It will be understood that various details of the subject matter described herein may be changed without departing from the scope of the subject matter described herein. Furthermore, the foregoing description is for the purpose of illustration only, and not for the purpose of limitation.

Claims

1. A method for generating simulated network traffic using a plurality of different traffic flows and maintaining a configured distribution between the flows and a device under test, the method comprising:

at a network emulator: determining a number of operations per flow for each of a plurality of flows that generate simulated network traffic between the flows and a device under test; determining a desired traffic distribution among the traffic generated by the traffic flows; assigning a weight to each flow that determines the number of times to execute each flow during execution of a batch of flows; and executing the flows in batches according to the assigned weights to generate the desired distribution of traffic between the different flows and the device under test; wherein each of the weights indicates a number of times to execute a flow in a given batch and wherein executing the flows in batches includes executing each flow wi times in a given batch, where wi is the weight that defines the number of times for executing the ith flow in a batch to achieve the desired traffic distribution.

2. The method of claim 1 wherein each of the flows generates application layer traffic between itself and the device under test.

3. The method of claim 1 wherein determining a desired traffic distribution includes receiving input from a user regarding a desired ratio of traffic among the flows.

4. The method of claim 1 wherein the device under test includes at least one of: a network address translator (NAT), a firewall, an intrusion detection system (IDS), an intrusion protection system (IPS), a deep packet inspection (DPI) device, a wide area network (WAN) optimization device, a layer 7 accelerator, and a server load balancer (SLB).

5. The method of claim 1 wherein each value wi represents a smallest number of times to execute the ith flow to achieve the desired traffic distribution.

6. The method of claim 1 wherein the desired traffic distribution comprises a ratio of a volume of traffic from each flow to a total volume of traffic from all of the flows.

7. A method for generating simulated network traffic using a plurality of different traffic flows and maintaining a configured distribution between the flows and a device under test, the method comprising:

at a network emulator: determining a number of operations per flow for each of a plurality of flows that generate simulated network traffic between the flows and a device under test; determining a desired traffic distribution among the traffic generated by the traffic flows; assigning a weight to each flow that determines the number of times to execute each flow during execution of a batch of flows; and executing the flows in batches according to the assigned weights to generate the desired distribution of traffic between the different flows and the device under test, wherein executing the flows in batches comprises executing the batches in parallel such that the configured distribution is maintained within each batch boundary.

8. The method of claim 1 wherein executing the flows in batches comprises, within each batch, interleaving the wi executions of the flows with each other.

9. The method of claim 1 wherein executing the flows in batches comprises, within each batch, executing each ith flow wi times before executing the (i+1)th flow.

10. A method for generating simulated network traffic using a plurality of different traffic flows and maintaining a configured distribution between the flows and a device under test, the method comprising:

at a network emulator: determining a number of operations per flow for each of a plurality of flows that generate simulated network traffic between the flows and a device under test; determining a desired traffic distribution among the traffic generated by the traffic flows; assigning a weight to each flow that determines the number of times to execute each flow during execution of a batch of flows; executing the flows in batches according to the assigned weights to generate the desired distribution of traffic between the different flows and the device under test; and monitoring the number of operations per flow and automatically updating the weights in response to changes in the number of operations per flow.

11. A system for generating simulated network traffic using a plurality of different traffic flows and maintaining a configured distribution between the flows and a device under test, the system comprising:

a plurality of traffic flows that generate simulated network traffic between the flows and a device under test; and
a traffic flow execution controller for determining a number of operations per flow for each of the traffic flows, for determining a desired traffic distribution among the traffic generated by the traffic flows, for assigning a weight to each flow that determines the number of times to execute each flow during execution of a batch of flows, and for executing the flows in batches according to the assigned weights to generate the desired distribution of traffic between the different flows and the device under test;
wherein each of the weights indicates a number of times to execute a flow in a given batch and wherein executing the flows in batches includes executing each flow wi times in a given batch, where wi is the weight that defines the number of times for executing the ith flow in a batch to achieve the desired traffic distribution.

12. The system of claim 11 wherein each of the flows generates application layer traffic between each flow and the device under test.

13. The system of claim 11 wherein determining a desired traffic distribution includes receiving input from a user regarding a desired ratio of traffic among the flows.

14. The system of claim 11 wherein the device under test includes at least one of: a network address translator (NAT), a firewall, an intrusion detection system (IDS), an intrusion protection system (IPS), a deep packet inspection (DPI) device, a wide area network (WAN) optimization device, a layer 7 accelerator, and a server load balancer (SLB).

15. The system of claim 11 wherein each value wi represents a smallest number of times to execute the ith flow to achieve the desired traffic distribution.

16. The system of claim 11 wherein the desired traffic distribution comprises a ratio of a volume of traffic generated by each flow to a total volume of traffic generated by all of the flows.

17. A system for generating simulated network traffic using a plurality of different traffic flows and maintaining a configured distribution between the flows and a device under test, the system comprising:

a plurality of traffic flows that generate simulated network traffic between the flows and a device under test; and
a traffic flow execution controller for determining a number of operations per flow for each of the traffic flows, for determining a desired traffic distribution among the traffic generated by the traffic flows, for assigning a weight to each flow that determines the number of times to execute each flow during execution of a batch of flows, and for executing the flows in batches according to the assigned weights to generate the desired distribution of traffic between the different flows and the device under test, wherein executing the flows in batches comprises executing the batches in parallel such that the configured distribution is maintained within each batch boundary.

18. The system of claim 15 wherein executing the flows in batches comprises, within each batch, interleaving the wi executions of the flows with each other.

19. The system of claim 15 wherein executing the flows in batches comprises, within each batch, executing each ith flow wi times before executing the (i+1)th flow.

20. A system for generating simulated network traffic using a plurality of different traffic flows and maintaining a configured distribution between the flows and a device under test, the system comprising:

a plurality of traffic flows that generate simulated network traffic between the flows and a device under test; and
a traffic flow execution controller for determining a number of operations per flow for each of the traffic flows, for determining a desired traffic distribution among the traffic generated by the traffic flows, for assigning a weight to each flow that determines the number of times to execute each flow during execution of a batch of flows, for executing the flows in batches according to the assigned weights to generate the desired distribution of traffic between the different flows and the device under test, for monitoring the number of operations per flow, and for automatically updating the weights in response to changes in the number of operations per flow.

21. A non-transitory computer readable medium having stored thereon executable instructions that when executed by the processor of a computer control the computer to perform steps comprising:

determining a number of operations per flow for each of a plurality of flows that generate simulated network traffic between the flows and a device under test;
determining a desired traffic distribution among the traffic generated by the traffic flows;
assigning a weight to each flow that determines the number of times to execute each flow during execution of a batch of flows; and
executing the flows in batches according to the assigned weights to generate the desired distribution of traffic between the different flows and the device under test;
wherein each of the weights indicates a number of times to execute a flow in a given batch and wherein executing the flows in batches includes executing each flow wi times in a given batch, where wi is the weight that defines the number of times for executing the ith flow in a batch to achieve the desired traffic distribution.
References Cited
U.S. Patent Documents
5247517 September 21, 1993 Ross et al.
5327437 July 5, 1994 Balzer
5343463 August 30, 1994 van Tetering et al.
5477531 December 19, 1995 McKee
5535338 July 9, 1996 Krause et al.
5568471 October 22, 1996 Hershey et al.
5590285 December 31, 1996 Krause et al.
5600632 February 4, 1997 Schulman
5657438 August 12, 1997 Wygodny
5671351 September 23, 1997 Wild
5761486 June 2, 1998 Watanabe
5787253 July 28, 1998 McCreery et al.
5838919 November 17, 1998 Schwaller et al.
5878032 March 2, 1999 Mirek et al.
5881237 March 9, 1999 Schwaller et al.
5905713 May 18, 1999 Anderson et al.
5937165 August 10, 1999 Schwaller et al.
5974237 October 26, 1999 Shurmer et al.
6028847 February 22, 2000 Beanland
6044091 March 28, 2000 Kim
6061725 May 9, 2000 Schwaller et al.
6065137 May 16, 2000 Dunsmore et al.
6108800 August 22, 2000 Asawa
6122670 September 19, 2000 Bennett et al.
6148277 November 14, 2000 Asava
6157955 December 5, 2000 Narad et al.
6172989 January 9, 2001 Yanagihara et al.
6173333 January 9, 2001 Jolitz
6189031 February 13, 2001 Badger
6233256 May 15, 2001 Dieterich et al.
6279124 August 21, 2001 Brouwer
6321264 November 20, 2001 Fletcher
6345302 February 5, 2002 Bennett et al.
6360332 March 19, 2002 Weinberg
6363056 March 26, 2002 Beigi et al.
6397359 May 28, 2002 Chandra et al.
6401117 June 4, 2002 Narad
6408335 June 18, 2002 Schwaller et al.
6421730 July 16, 2002 Narad
6434513 August 13, 2002 Sherman et al.
6446121 September 3, 2002 Shah
6507923 January 14, 2003 Wall et al.
6545979 April 8, 2003 Poulin
6601098 July 29, 2003 Case
6621805 September 16, 2003 Kondylis et al.
6625648 September 23, 2003 Schwaller et al.
6625689 September 23, 2003 Narad
6662227 December 9, 2003 Boyd et al.
6708224 March 16, 2004 Tsun et al.
6763380 July 13, 2004 Mayton et al.
6789100 September 7, 2004 Nemirovsky
6920407 July 19, 2005 Adamian et al.
6950405 September 27, 2005 Van Gerrevink
7006963 February 28, 2006 Maurer
7010782 March 7, 2006 Narayan et al.
7516216 April 7, 2009 Ginsberg et al.
8010469 August 30, 2011 Kapoor et al.
8135657 March 13, 2012 Kapoor et al.
8145949 March 27, 2012 Silver
8341462 December 25, 2012 Broda et al.
8402313 March 19, 2013 Pleis et al.
8510600 August 13, 2013 Broda et al.
8522089 August 27, 2013 Jindal
8676188 March 18, 2014 Olgaard
8839035 September 16, 2014 Dimitrovich et al.
20020080781 June 27, 2002 Gustavsson
20030009544 January 9, 2003 Wach
20030012141 January 16, 2003 Gerrevink
20030033406 February 13, 2003 John et al.
20030043434 March 6, 2003 Brachmann et al.
20030231741 December 18, 2003 Rancu et al.
20060268933 November 30, 2006 Kellerer et al.
20080285467 November 20, 2008 Olgaard
20080298380 December 4, 2008 Rittmeyer et al.
20090100296 April 16, 2009 Srinivasan et al.
20090100297 April 16, 2009 Srinivasan et al.
20100050040 February 25, 2010 Samuels et al.
20110238855 September 29, 2011 Korsunsky et al.
20110283247 November 17, 2011 Ho et al.
20120192021 July 26, 2012 Jindal
20120240185 September 20, 2012 Kapoor et al.
20120314576 December 13, 2012 Hasegawa et al.
20130111257 May 2, 2013 Broda et al.
20130286860 October 31, 2013 Dorenbosch et al.
20140036700 February 6, 2014 Majumdar et al.
20140173094 June 19, 2014 Majumdar et al.
20140289561 September 25, 2014 Majumdar et al.
Foreign Patent Documents
0 895 375 August 2004 EP
Other references
  • Notice of Allowance and Fee(s) Due for U.S. Appl. No. 11/462,351 (Feb. 6, 2009).
  • Restriction Requirement for U.S. Appl. No. 11/462,351 (Jan. 2, 2009).
  • Ye et al., “Large-Scale Network Parameter Configuration Using an On-line Simulation Framework,” Technical report, ECSE Department, Rensselaer Polytechnic Institute (2002).
  • Business Wire, “Ixia's Web Stressing and In-Service Monitoring Products Names Best of Show Finalist at NetWorld+Interop 2001, Atlanta,” 2 pages (Sep. 10, 2001).
  • PRNewsWire, “Caw Network Doubles Performance of Real-World Capacity Assessment Appliance Suite: WebAvalanche and WebReflector Now Generate and Respond to 20,000+ HTTP requests per Second With Over One Million Open Connections,” 2 pages (Sep. 10, 2001).
  • Caw Networks, Inc. and Foundry Networks, Inc., “Caw Networks Performance Brief: Caw Networks and Foundry Networks 140,000 Transactions per Second Assessment,” 1 page (Sep. 7, 2001).
  • MacVittie, “Online Only: CAW's WebReflector Makes Load-Testing a Cakewalk,” Network Computing, 2 pages (Sep. 3, 2001).
  • “Caw Networks Unveils New Web-Stressing Appliance,” press release from Caw Networks, Inc., 2 pages (Mar. 5, 2001).
  • Ye et al., “Network Management and Control Using collaborative On-Line Simulation,” Proc. IEEE International Conference on Communications ICC2001, Helsinki, Finland, pp. 1-8 (2001).
  • Business Wire, “NetIQ's Chariot 4.0 Goes Internet-Scale: ASPs and Service Providers Can Conduct Tests With Up to 10,000 Connections; New Visual Test Designer Simplifies Testing of All Sizes,” 1 page (Oct. 23, 2000).
  • Business Wire, “Spirent Communications TeraMetrics and NetIQ’s Chariot Work Together to Create First Complete Network Performance Analysis Solution,” 2 pages (Sep. 25, 2000).
  • Kovac, “Validate your equipment performance—Netcom Systems' SmartBits—Hardware Review—Evaluation,” Communications News, 2 pages (May 2000).
  • Marchette, “A Statistical Method for Profiling Network Traffic,” USENIX (Apr. 12-19, 1999).
  • Cooper et al., “Session traces: an enhancement to network simulator,” Performance, Computing and Communications Conference, Scottsdale, AZ (Feb. 1, 1999).
  • San-qi, et al., “SMAQ: A Measurement-Based Tool for Traffic Modeling and Queuing Analysis Part I; Design methodologies and software architecture,” IEEE Communications Magazine, pp. 56-65 (Aug. 1, 1998).
  • San-qi, et al., “SMAQ: A Measurement-Based Tool for Traffic Modeling and Queuing Analysis Part II; Network Applications,” IEEE Communications Magazine, pp. (Aug. 1, 1998).
  • Notice of Allowance and Fee(s) Due for U.S. Appl. No. 13/871,909 (Apr. 20, 2015).
  • Notice of Allowance and Fee(s) Due for U.S. Appl. No. 13/567,747 (Mar. 5, 2015).
  • Non-Final Office Action for U.S. Appl. No. 13/718,813 (Jan. 14, 2015).
  • Non-Final Office Action for U.S. Appl. No. 13/871,909 (Nov. 20, 2014).
  • Non-Final Office Action for U.S. Appl. No. 13/567,747 (Nov. 19, 2014).
  • “Ixload Specifications,” http://web.archive.org/web/20130127053319//www.ixiacom.com/products/networktext/applications/ixload/specifications/index.php. pp. 1-7, (Jan. 27, 2013).
  • “A TCP Tutorial,” ssfnet.org/Exchange/tcp/tcpTutorialNotes.html, pp. 1-10 (Apr. 4, 2012).
  • “IxLoad,” Solution Brief, 915-3030-01. D, Ixia, pp. 1-4 (Feb. 2012).
  • Business Wire, “REMINDER/Caw Networks to Spotlight WebAvalanche 2.0 and WebReflector At Networld+Interop,” 2 pages (May 8, 2001).
  • Mneimneh, “Computer Networks Flow control with TCP,” pp. 1-5 (Publication Date Unknown).
Patent History
Patent number: 9178823
Type: Grant
Filed: Dec 12, 2012
Date of Patent: Nov 3, 2015
Patent Publication Number: 20140160927
Assignee: IXIA (Calabasas, CA)
Inventors: Partha Majumdar (Woodland Hills, CA), Rohan Chitradurga (Simi Valley, CA)
Primary Examiner: Kevin C Harper
Application Number: 13/712,499
Classifications
Current U.S. Class: Of A Switching System (370/250)
International Classification: H04L 1/00 (20060101); H04L 12/801 (20130101);