SYSTEM AND METHOD FOR ECMP LOAD SHARING

Foundry Networks, LLC

A packet classifier and a method for routing a data packet are provided. The packet classifier includes a content addressable memory, a translation table and a parameter memory. The method includes looking up the content addressable memory, using a header of the data packet, to obtain a base address into the parameter memory. The base address is associated with the routes available under ECMP for forwarding the data packet. Using multiple headers of the data packet, an adjustment to the base address is computed; the adjustment specifies an actual address in the parameter memory corresponding to a selected route for forwarding the data packet. The parameter memory is then accessed using the actual address to obtain parameter values relevant to the selected route. The data packet is then forwarded according to the parameter values thus obtained.

Description
CROSS REFERENCE TO RELATED APPLICATIONS

The present application claims priority of U.S. provisional patent application No. 60/823,178, filed Aug. 22, 2006, incorporated herein by reference.

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to routing data packets in a computer network. In particular, the present invention relates to routing data packets in a computer network in which equal cost multi-path (ECMP) load sharing is available.

2. Discussion of the Related Art

In a router, a data packet is directed from an input port to an output port based on the data specified in the header of the data packet. Typically, a content addressable memory (CAM) quickly maps the IP header information or layer 3 header of the data packet to a memory location of a parameter random access memory (PRAM) which stores information that indicates the route over which the data packet should be forwarded, and the parameter values relevant to the route. In one prior art router, the data stored in the PRAM includes a forwarding identifier (FID), type of service (TOS), priority, number of copies (e.g., for a multicast address) and other information. The circuit including the CAM and the PRAM is sometimes referred to as a “packet classifier.” The packet classifier encodes the FID and other data into an internal header for the data packet which is used by the router to direct the data packet at line speed to an output port, where the data packet is forwarded to the next switch on a path to its destination.
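For illustration only, the per-route record described above can be pictured as the following C structure; the field names and widths are assumptions made for this sketch and do not reflect the actual hardware format of the prior art router.

    #include <stdint.h>

    /* Hypothetical layout of one PRAM entry; field names and widths are
       illustrative assumptions, not the actual hardware format. */
    typedef struct {
        uint32_t fid;        /* forwarding identifier: selects the output port / next hop */
        uint8_t  tos;        /* type of service carried into the internal header */
        uint8_t  priority;   /* internal forwarding priority */
        uint8_t  copy_count; /* number of copies, e.g., for a multicast address */
        uint8_t  flags;      /* other per-route information */
    } pram_entry_t;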

In that prior art router, the CAM entries are mapped one-to-one to the PRAM entries. That is, only one FID can result from the CAM look-up based on the header of the data packet. In an Internet Protocol (IP) network, a data packet may be routed through any of a number of paths through multiple switches to its destination. ECMP load sharing is one method known to those skilled in the art by which multiple-path routing of IP data traffic can be accomplished. One method to support ECMP is to resolve the level 3 (IP layer) and level 4 (“Transmission Control Protocol” or TCP layer) headers of a data packet into any one of a number of FIDs, where each FID represents a different output port of the router that is connected to a switch on a different one of the possible paths for the data packet. However, because the CAM entries are mapped one-to-one with the PRAM entries, the prior art router does not provide efficient hardware support for ECMP load sharing.

“JetCore™ Based Chassis System: An Architecture Brief on NetIron, BigIron and FastIron Systems” and “Next Generation Terabit System Architecture: The High Performance Revolution for 10 Gigabit Networks” are white papers available from Foundry Networks, Inc. that disclose designs for high performance routers in the prior art. These white papers are hereby incorporated by reference in their entireties to provide background information.

SUMMARY OF THE INVENTION

According to one embodiment of the present invention, a packet classifier and a method for routing a data packet are provided. The packet classifier includes a content addressable memory, a translation table and a parameter memory. The method includes looking up the content addressable memory, using a first portion of the data packet, to obtain a base address into the parameter memory. The base address is associated with the various routes available for forwarding the data packet. Using a second portion of the data packet, an adjustment to the base address is computed, the adjustment specifying an actual address in the parameter memory corresponding to a selected route for forwarding the data packet. The parameter memory is then accessed using the actual address to obtain parameter values relevant to the selected route. The data packet is then forwarded according to the parameter values thus obtained.

The present invention is better understood upon consideration of the detailed description below, in conjunction with the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 shows packet classifier 100 that provides translation table 102 between CAM 101 and PRAM 103, according to one embodiment of the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

The present invention provides hardware support for ECMP through an indirect coupling between the CAM and the PRAM. (While the detailed description herein uses a CAM in one example to illustrate one way to implement a parameter look-up or search function in a packet classifier, other implementations are possible. Other memory elements, such as dynamic random access memories (DRAM) or static random access memories (SRAM), may also be used.) According to one embodiment of the present invention, FIG. 1 shows packet classifier 100, which provides translation table 102 between CAM 101 and PRAM 103. In the embodiment of FIG. 1, the header of a data packet is used to perform a route look-up in CAM 101 to obtain an index (“route lookup result”) into translation table (or “re-mapping RAM”) 102, which provides a base address to PRAM 103. This base address is then modified to obtain the address of one of a number of entries in PRAM 103. Each of these entries contains a different FID pointing to a different output port representing the next hop in a different ECMP route.
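As a rough software model of the indirection shown in FIG. 1, the three stages can be sketched in C as a CAM search followed by two memory reads; the depths, types and function signature below are illustrative assumptions, not the device's actual interfaces.

    #include <stdint.h>
    #include <stddef.h>

    #define TCAM_DEPTH (1u << 21)   /* assumed depth matching the 21-bit route lookup result */
    #define PRAM_DEPTH (1u << 19)   /* assumed depth matching the 19-bit PRAM index */

    /* Stage 1: CAM search on the packet header returns TCAM_INDEX (stub only). */
    uint32_t cam_route_lookup(const uint8_t *hdr, size_t hdr_len);

    /* Stage 2: translation table ("re-mapping RAM"): one 32-bit word per CAM index,
       carrying the base address into the PRAM and the ECMP_MASK field. */
    extern uint32_t translation_table[TCAM_DEPTH];

    /* Stage 3: PRAM: each word here stands in for a full parameter entry
       (FID, TOS, priority, and so on) selected by the adjusted base address. */
    extern uint32_t pram[PRAM_DEPTH];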

According to the embodiment of the present invention shown in FIG. 1, CAM 101 is logically organized as a 256 k×72×3-bit memory, which provides a 21-bit index (TCAM_INDEX) into translation table 102. Translation table 102 is logically organized as a 2 M×32-bit memory. The 32-bit output datum read from translation table 102 includes a base index to PRAM 103. In one embodiment, the 32-bit output datum includes a 19-bit base index (“PRAM_INDEX_BASE”), a 4-bit field “ECMP_MASK” and management information (e.g., an aging bit). The ECMP_MASK field may encode the number of ECMP paths available for that data packet. The 19-bit base index is combined with a hash function of the IP and TCP headers of the data packet to obtain an actual 19-bit index into PRAM 103. Each entry of PRAM 103 contains an FID and other information relevant to the routing of the data packet. Mathematically, the 32-bit output datum CAM2PRAM_DATA is given by

CAM2PRAM_DATA[31:0] = READ(TCAM_INDEX[20:0])

where the function READ( ) represents the data read from translation table 102 using the 21-bit TCAM_INDEX as the address into translation table 102.
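The text specifies the widths of the fields packed into the 32-bit word but not their positions; the C helpers below are a sketch in which the bit offsets are assumed for illustration.

    #include <stdint.h>

    /* Assumed packing of CAM2PRAM_DATA: PRAM_INDEX_BASE in bits [18:0],
       ECMP_MASK in bits [22:19]; the remaining bits carry management
       information such as the aging bit. */
    static inline uint32_t pram_index_base(uint32_t cam2pram_data)
    {
        return cam2pram_data & 0x7FFFFu;        /* 19-bit base index */
    }

    static inline uint32_t ecmp_mask(uint32_t cam2pram_data)
    {
        return (cam2pram_data >> 19) & 0xFu;    /* 4-bit ECMP_MASK */
    }

    static inline uint32_t read_translation_table(const uint32_t *table,
                                                  uint32_t tcam_index)
    {
        return table[tcam_index & 0x1FFFFFu];   /* 21-bit TCAM_INDEX as address */
    }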

According to one embodiment of the present invention, an 8-bit ECMP index is obtained by a hash function using the MAC, IP and TCP headers of the data packet, and a random number. The portions of the MAC, IP and TCP headers that are used may be, for example, the source and destination MAC addresses, the IP source and destination addresses and the TCP source and destination port number. The random number is a number generated at the initialization process of the router. The random number provides a personalization for the router (i.e., from the same data packet, a different router would map to a different address in the PRAM). One advantage of using the Random_Number is a more distributed load balance in situations where multiple routers are connected in a hierarchical fashion. Mathematically, the 8-bit ECMP index is given by:

ECMP_INDEX[7:0] = hash(L2, L3, L4, Random_Number)

where hash is a hash function with low collision probability, L2, L3 and L4 represent selected fields in the MAC, IP and TCP headers of the data packet, and Random_Number is the personality random number of the router.
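The hash function itself is not specified beyond having a low collision probability; the C sketch below uses a simple FNV-style byte fold purely as a placeholder, seeded with the router's personality random number.

    #include <stdint.h>
    #include <stddef.h>

    /* Placeholder hash over selected L2, L3 and L4 header fields; the actual
       hardware hash is not specified, only that collisions should be rare. */
    static uint8_t ecmp_index_hash(const uint8_t *l2, size_t l2_len,
                                   const uint8_t *l3, size_t l3_len,
                                   const uint8_t *l4, size_t l4_len,
                                   uint32_t random_number)
    {
        const uint8_t *parts[3] = { l2, l3, l4 };
        size_t lens[3] = { l2_len, l3_len, l4_len };
        uint32_t h = random_number;             /* per-router personalization seed */

        for (int p = 0; p < 3; p++)
            for (size_t i = 0; i < lens[p]; i++) {
                h ^= parts[p][i];
                h *= 0x01000193u;               /* FNV-style mixing constant */
            }

        return (uint8_t)(h ^ (h >> 8) ^ (h >> 16) ^ (h >> 24));  /* fold to 8 bits */
    }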

In one embodiment, the ECMP_INDEX is mapped into one of the possible routes (up to a maximum of 16, i.e., a 5-bit number). As mentioned above, the number of possible routes is encoded in the ECMP_MASK field. The number of possible routes (“ECMP_BASE”) is given by adding one to the 4-bit ECMP_MASK field:

ECMP_BASE[4:0] = ECMP_MASK[3:0] + 4'h1;

Using the ECMP_INDEX and the ECMP_BASE, a 4-bit offset to the 19-bit base index to PRAM 103 (“ECMP_ADJUST”) is obtained using the modulo function (i.e., taking the remainder from an integer divide of ECMP_INDEX by ECMP_BASE):

ECMP_ADJUST[3:0] = ECMP_INDEX[7:0] % ECMP_BASE[4:0];

The modulo operation provides close to perfect load balancing across equal cost paths. The 4-bit offset ECMP_ADJUST is logically OR'd with the 19-bit base index PRAM_INDEX_BASE to obtain the actual index into PRAM 103:

PRAM_INDEX_TO_USE = PRAM_INDEX_BASE | ECMP_ADJUST[3:0]
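Taken together, the three equations above reduce to a few integer operations; the C sketch below follows them directly. Note that the bitwise OR only behaves like an addition if PRAM_INDEX_BASE is aligned so that its low four bits are zero, which appears to be an implicit assumption of the scheme.

    #include <stdint.h>

    /* Compute the actual PRAM index from the translation-table fields and the
       per-packet ECMP_INDEX, following the equations above. */
    static uint32_t pram_index_to_use(uint32_t pram_index_base,  /* 19-bit base */
                                      uint32_t ecmp_mask,        /* 4-bit field */
                                      uint8_t  ecmp_index)       /* 8-bit hash  */
    {
        uint32_t ecmp_base   = (ecmp_mask & 0xFu) + 1u;   /* 1 to 16 equal cost routes */
        uint32_t ecmp_adjust = ecmp_index % ecmp_base;    /* offset 0 .. ecmp_base-1  */

        /* OR acts as an add only when the base is 16-entry aligned (assumed). */
        return (pram_index_base | ecmp_adjust) & 0x7FFFFu;
    }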

Using this actual index, the parameter values and the FID are obtained from PRAM 103 to perform forwarding of the data packet.
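As a usage example, the helper sketches above can be chained into a single software model of the classification path; the function names and signatures are the illustrative ones introduced earlier, not the device's actual interfaces.

    #include <stdint.h>
    #include <stddef.h>

    /* Software model of the classification path, chaining the illustrative
       helpers sketched above. For brevity the same header buffer is passed
       for the L2, L3 and L4 field selections. */
    uint32_t classify_packet(const uint8_t *hdr, size_t hdr_len,
                             const uint32_t *translation_table,
                             const uint32_t *pram,
                             uint32_t router_random_number)
    {
        uint32_t tcam_index    = cam_route_lookup(hdr, hdr_len);
        uint32_t cam2pram_data = read_translation_table(translation_table, tcam_index);

        uint8_t ecmp_index = ecmp_index_hash(hdr, hdr_len, hdr, hdr_len,
                                             hdr, hdr_len, router_random_number);

        uint32_t index = pram_index_to_use(pram_index_base(cam2pram_data),
                                           ecmp_mask(cam2pram_data),
                                           ecmp_index);

        return pram[index];   /* stands in for the FID and other route parameters */
    }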

In this manner, hardware support for load balanced ECMP computations is provided without requiring line card or central processing unit (CPU) management intervention. ECMP traffic can therefore be processed at line rate. Further, increasing the number of routes handled by the CAM for ECMP can be achieved without a corresponding increase in PRAM size, as the PRAM entries can be shared for the same next hop, thereby providing significant cost savings. Because software controls the many-to-one or one-to-many mappings between route lookups of CAM 101 and the associated entries in PRAM 103, the present invention allows certain statistics to be collected (e.g., for route groups).

The detailed description above is provided to illustrate specific embodiments of the present invention and is not intended to be limiting. Numerous variations and modifications within the scope of the present invention are possible. The present invention is set forth in the following claims.

Claims

1. A network device comprising:

at least one port configured to receive a data packet; and
a first memory and a second memory,
wherein the network device is configured to:
retrieve a first entry in the first memory based on a portion of a first received data packet;
retrieve a first entry in the second memory based on the first entry in the first memory, the first entry in the second memory including information associated with a route over which the first received data packet should be forwarded;
retrieve the first entry in the first memory based on a portion of a second received data packet; and
retrieve a second entry in the second memory based on the first entry in the first memory, the second entry in the second memory being distinct from the first entry in the second memory, the second entry in the second memory including information associated with a route over which the second received data packet should be forwarded,
wherein the portion of the first received data packet and the portion of the second received data packet are substantially the same.

2. The network device of claim 1 wherein the first memory is a content addressable memory (CAM) and wherein the second memory is a parameter random access memory (PRAM).

Patent History
Publication number: 20110044340
Type: Application
Filed: Oct 7, 2010
Publication Date: Feb 24, 2011
Applicant: Foundry Networks, LLC (San Jose, CA)
Inventors: Deepak Bansal (San Jose, CA), Yuen Wong (San Jose, CA)
Application Number: 12/900,279
Classifications
Current U.S. Class: Processing Of Address Header For Routing, Per Se (370/392)
International Classification: H04L 12/56 (20060101);