System and method for providing a loop free topology in a network environment

- Cisco Technology, Inc.

An example method is provided and includes receiving a bridge protocol data unit (BPDU) on a particular link of a network. A port associated with the particular link is blocked in response to receiving the BPDU. The method also includes communicating a claim message to a spanning tree protocol (STP) root of the network; receiving, from a network element, a reply in which a grant bit is set; and changing the port associated with the particular link to a forwarding state.

Description
TECHNICAL FIELD

This disclosure relates in general to the field of communications and, more particularly, to providing a loop free topology in a network environment.

BACKGROUND

Ethernet architectures have grown in complexity in recent years. This is due, at least in part, to diverse technologies that have emerged to accommodate a plethora of end users. For example, Data Center Ethernet (DCE) represents an extension to Classical Ethernet (CE), and it can offer a lower cost, lower latency, high-bandwidth configuration. The forwarding methodology adopted by DCE networks is generally scalable and, further, provides forwarding paths with equal-cost multipathing with support for different forwarding topologies. In certain network scenarios, topology information may not be current, accurate, and/or consistent. Optimally managing network topologies presents a significant challenge to system designers, network operators, and service providers alike.

BRIEF DESCRIPTION OF THE DRAWINGS

To provide a more complete understanding of the present disclosure and features and advantages thereof, reference is made to the following description, taken in conjunction with the accompanying figures, wherein like reference numerals represent like parts, in which:

FIG. 1 is a simplified block diagram of a communication system for providing a loop free topology in a network environment in accordance with one embodiment of the present disclosure;

FIG. 2 is a simplified block diagram illustrating additional details related to the communication system in accordance with one embodiment;

FIG. 3 is a simplified block diagram illustrating details related to a possible example of the communication system in accordance with one embodiment;

FIG. 4 is a simplified block diagram illustrating details related to a possible example of the communication system in accordance with one embodiment;

FIG. 5 is a simplified block diagram illustrating details related to a possible example of the communication system in accordance with one embodiment;

FIG. 6 is a simplified flowchart illustrating a series of example operations for a flow associated with the communication system;

FIGS. 7-10 are simplified block diagrams illustrating details associated with example segmented spanning tree architectures in accordance with one embodiment;

FIGS. 11-12 are simplified block diagrams illustrating details associated with cloud merging architectures in accordance with one embodiment;

FIG. 13 is a simplified block diagram illustrating an example network core and gateway switch architecture in accordance with one embodiment; and

FIGS. 14-17 are simplified flowcharts illustrating example operations associated with the present disclosure.

DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

Overview

A method is provided in one example embodiment and includes receiving a bridge protocol data unit (BPDU) on a particular link of a network. The BPDU can be any type of packet communication and/or data segment. A port associated with the particular link is blocked in response to receiving the BPDU. In such a context, ‘blocked’ simply means that data propagation would be limited in some appropriate manner. The method also includes communicating a claim message to a spanning tree protocol (STP) root of the network. The claim message is reflective of any suitable packet communication and/or data segment that indicates a claim (e.g., a signal, an identifier, an object, etc.) to a particular node of the network. The method may also include receiving, from a network element, a reply in which a grant bit is set. For example, a given field of the packet can be set in order to indicate the grant identification, condition, etc. The method may further include changing the port associated with the particular link to a forwarding state. The term ‘forwarding state’ simply refers to the ability of a given network element to facilitate data flow (or data propagation) in the network. For example, certain incoming data may be forwarded to other network elements.
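
The following is a minimal, non-authoritative sketch (with hypothetical class and parameter names) of the port-state handling this overview describes: the port is blocked when a BPDU arrives, a claim message is sent toward the STP root, and the port moves to the forwarding state only after a reply with the grant bit set is received.

```python
from enum import Enum

class PortState(Enum):
    BLOCKED = "blocked"
    FORWARDING = "forwarding"

class GatewayPort:
    def __init__(self, link_id):
        self.link_id = link_id
        self.state = PortState.FORWARDING
        self.claim_pending = False

    def on_bpdu(self, send_claim):
        # Block the port as soon as a BPDU arrives on the link.
        self.state = PortState.BLOCKED
        self.claim_pending = True
        send_claim(self.link_id)          # claim message toward the STP root

    def on_reply(self, grant_bit_set):
        # Unblock only when a reply carrying the grant bit is received.
        if self.claim_pending and grant_bit_set:
            self.state = PortState.FORWARDING
            self.claim_pending = False
```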

In more specific embodiments, the method may include executing an intermediate system to intermediate system (IS-IS) protocol for a first set of network links in the network; and executing a spanning tree protocol (STP) for a second set of network links in the network. In yet other implementations, the claim message is sent using a spanning tree root Type Length Value (TLV), which is provisioned within the IS-IS protocol.

Other implementations may result in the spanning tree root TLV being flooded to neighboring network elements through a plurality of link-state packets (LSPs). In certain scenarios, the network includes a Data Center Ethernet (DCE) network and a Classical Ethernet (CE) network, which form a layer-2 (L2) broadcast domain. In detailed architectures, a selected port is configured to receive a superior BPDU, and in response to the superior BPDU, the selected port remains in a blocked state. In still other embodiments, the particular link is a CE link, and the claim message includes a TLV associated with a virtual local area network (VLAN).

EXAMPLE EMBODIMENTS

Turning to FIG. 1, FIG. 1 is a simplified block diagram of a communication system 10 for providing a loop free topology in a network environment in accordance with one embodiment. FIG. 1 may include a Data Center Ethernet (DCE) network 12, a Classical Ethernet (CE) switch 14 and a CE switch 16, which can be coupled together via a communication link 15. Additionally, FIG. 1 may include a CE network 18, a set of DCE switches 20 and 22, along with a set of CE-DCE gateway switches 24 and 28.

DCE networks commonly use a routing protocol (e.g., intermediate system to intermediate system (IS-IS)) for forwarding purposes, whereas CE networks commonly use a spanning tree protocol (STP) as their forwarding protocol. DCE and CE form the same layer-2 (L2) broadcast domain such that a mechanism is needed to avoid the loops that form across interconnected CE and DCE networks. In the illustration of FIG. 1, DCE switches 20 and 22 can be executing an IS-IS protocol. Concurrently, CE-DCE switches 24 and 28 can be employing the IS-IS protocol on DCE links, and using STP on the CE links. In this particular configuration of FIG. 1, CE switch 16 may be executing STP. In regards to the example of FIG. 1, a loop could exist, where the path is defined from CE switch 16, to CE-DCE switch 24, to DCE switch 20, to DCE switch 22, to CE-DCE switch 28, to CE switch 14, and then returning back to CE switch 16.

In one particular example, DCE network 12 is representative of a layer 2 (L2) multi-pathing (L2MP) network, which may be executing the IS-IS forwarding protocol. DCE network 12 and CE network 18 are associated with the same broadcast domain. This could mean, for example, that a virtual local area network (VLAN) associated with CE network 18 can span into DCE network 12. Because of their inherent protocols, if a loop occurs between DCE network 12 and CE network 18, it is not properly blocked. This is because the protocols fail to uniformly evaluate both of these networks as a collective whole.

In order to prevent loops in DCE/CE hybrid networks, communication system 10 can provide an L2 gateway spanning tree protocol (L2G-STP) mechanism and an L2 Gateway Interconnect Protocol (L2GIP) mechanism. As an overview, L2G-STP can ensure a loop-free CE-DCE L2 domain, while L2GIP can offer extended capabilities to L2G-STP, as discussed below. In a general sense, the main functionality of the L2G-STP mechanism is to offer a segmented spanning tree, whereas the main functionality of the L2GIP mechanism is to build (and to maintain) adjacencies between CE-DCE gateway switches 24 and 28.

Hence, communication system 10 can be employed to prevent loops from occurring within networks (and/or clouds) of FIG. 1. More particularly, certain embodiments of communication system 10 can represent DCE network 12 as a pseudo bridge, or a virtual switch (as outlined below). Furthermore, embodiments presented herein allow the STP block to be placed on the redundant link between CE switch 16 and CE switch 14. This is commonly referred to as the ‘wiring closet’ in switching applications. Communication system 10 allows CE-DCE gateway switches (connected to the same DCE network) to be represented as a single virtual switch such that the STP blocked port is pushed down. This stands in contrast to blocking a core link, which connects gateway switches. Hence, communication system 10 can use a broadcast tree system identifier (e.g., per-VLAN) as the STP root bridge identifier (ID) on the associated CE-DCE gateway switches. This operation would effectively push the STP block to communication link 15, which offers redundancy for the system. The architecture of communication system 10 can offer seamless interoperability with existing Ethernet switches and end devices.

In terms of advantages, any possible user provisioning for the CE-DCE gateway switches is minimal, when employing the described approach to achieve a virtual switch paradigm. Additionally, such a strategy is pragmatic because it is less prone to errors and, further, because it reduces debugging operations. Moreover, an object is locally derived using IS-IS (instead of STP), which explicitly synchronizes the virtual switch information across CE-DCE gateway switches 24 and 28 in DCE network 12. Note that such a solution is simple to implement and, furthermore, transient loops would be avoided whenever DCE network 12 would split, change, or merge with another entity. Details relating to the possible signaling and interactions between the components of communication system 10 are provided below with reference to FIGS. 2-6.

Note that before turning to the example flows and infrastructure of example embodiments of the present disclosure, a brief overview of the switching environment is provided for purposes of context and explanation. Link state routing is a protocol that allows a node in a network to determine network topology by sharing information about a transmission cost to each of its neighboring nodes. Link state routing packets are transmitted to (and received from) neighbors. The least expensive path to various destinations can be determined using the link state information. Link state information can be used to generate network topology information at various network nodes for creating forwarding tables. The forwarding tables allow network nodes (such as switches and bridges) to forward the received traffic on an appropriate output interface. In order to generate a network topology map and a forwarding table at a specific node, link state information is distributed from various network nodes. Each network node is configured to create a link state packet having information about the distance, delay, or cost to each of its neighbors. A link state record (LSR) can then be transmitted to neighboring nodes.
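
To illustrate the link-state concept above, the following hedged sketch shows how link-state information gathered from neighbors can be turned into a forwarding table by running a shortest-path computation (Dijkstra) over the collected database. The topology, costs, and function names are hypothetical and for illustration only.

```python
import heapq

def shortest_paths(lsdb, source):
    """lsdb maps node -> {neighbor: cost}; returns (cost, first_hop) per destination."""
    dist = {source: 0}
    first_hop = {}
    heap = [(0, source, None)]
    while heap:
        cost, node, hop = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue
        for neighbor, link_cost in lsdb.get(node, {}).items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                # The first hop from the source determines the output interface.
                first_hop[neighbor] = neighbor if node == source else hop
                heapq.heappush(heap, (new_cost, neighbor, first_hop[neighbor]))
    return {dst: (dist[dst], first_hop.get(dst)) for dst in dist if dst != source}

lsdb = {"A": {"B": 1, "C": 4}, "B": {"A": 1, "C": 1}, "C": {"A": 4, "B": 1}}
print(shortest_paths(lsdb, "A"))   # e.g., C is reached via B at a total cost of 2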

Transient loops arise when network topology changes because neighboring nodes may not be forwarding transmissions using the same generated network topology. Transient and permanent loops waste network bandwidth and, further, may burden end nodes with duplicate copies of topology information. One mechanism for preventing loops is STP. STP commonly runs on a switch and, further, operates to maintain a loop-free topology in an L2 switched network. The term spanning tree protocol (STP) as used herein includes any version of STP, including for example, traditional STP (IEEE 802.1d), rapid spanning tree protocol (RSTP) (IEEE 802.1w), multiple spanning tree protocol (MSTP) (IEEE 802.1s), or any other spanning tree protocol. CE switches may use STP to prevent loops, whereas other devices such as DCE switches may be configured to use protocols other than STP (e.g., IS-IS) to provide loop-free operations. While STP and other protocols work well for a standalone network comprising switches that utilize only one protocol for preventing loops, the different protocols may not interoperate with each other and, therefore, cannot effectively be used in a combined (i.e., a hybrid) network.

FIG. 1 depicts clouds that form a switching network, where a cloud is defined as a set of one or more network switches/bridges and end hosts: all of which may be interconnected. At the edge of a DCE cloud and a CE cloud, a model for control plane interaction between the two clouds is commonly defined. Specifically, DCE and CE use different protocols to construct their respective forwarding topology (IS-IS versus STP). Thus, even though a single L2 broadcast domain would span the clouds, two different protocols govern the determination of the forwarding topology. This is especially important for the behavior of broadcast traffic (e.g., frames with an unknown Media Access Control (MAC) address are flooded using broadcast throughout the network). The inconsistency in topology creates an opportunity for problematic loops to develop. One immediate issue surfaces as to which of the two clouds (DCE or CE) has the responsibility to detect and to break the loops in the broadcast domain.

In certain embodiments, communication system 10 interoperates with STP at the CE cloud level and interacts with IS-IS in an interlock manner at the DCE cloud level to prevent transient loops. Operationally, CE-DCE gateway switches 24 and 28 can be configured to send out the same STP root bridge ID. Stated in different terminology, CE-DCE gateway switches 24 and 28 can virtualize the STP root inside the DCE network. In more specific implementations, the STP mechanism being executed on CE-DCE gateway switches 24 and 28 can use the IS-IS broadcast tree system ID (which may be operating on the DCE links) as the STP root bridge ID. Certain routing protocols (e.g., IS-IS) provide that gateway switches 24 and 28 (after IS-IS converges) should derive their topology at the same root switch (and, therefore, the same system ID). STP on both gateway switches 24 and 28 can advertise the same STP root bridge ID and, hence, the STP block can be pushed down to a redundant link 15, which connects CE switch 16 and CE switch 14 (i.e., the wiring closet).

In a general sense, the STP being executed on a given CE-DCE gateway switch uniformly enforces the STP root that is within DCE network 12. Further, communication system 10 can prevent a loop by enforcing the STP being executed on CE-DCE gateway switches 24 and 28 in order to block the port when it receives a superior STP Bridge Protocol Data Unit (BPDU) from CE network 18. Hence, in an example of FIG. 1, CE-DCE gateway switches 24 and 28 can block the link between gateway switch 24 and CE switch 16 (or between gateway switch 28 and CE switch 14) if they receive a superior BPDU from CE switch 16, or from CE switch 14. Such an operation could effectively prevent the CE-DCE L2 network loops. The notion is to uniformly enforce the STP root that is within DCE network 12.

A given gateway that has connectivity to both CE network 18 and DCE network 12 can be referred to as an L2 gateway switch. In essence, such switches should use the same information in their STP BPDUs. In this sense, the gateway switches (connected to both networks) would view DCE network 12 as a virtual switch. In one particular example, communication system 10 can be provisioned to leverage the IS-IS broadcast tree system ID (e.g., per-VLAN) as the STP root bridge ID on CE-DCE gateway switches 24 and 28. For example, the MAC address of the broadcast tree can be used by CE-DCE gateway switches 24 and 28. In using such a mechanism, the gateway switches that are executing the IS-IS protocol would eventually converge at one root system ID (e.g., one broadcast tree ID, one root MAC address, etc.). Hence, all the gateway switches would communicate BPDUs with the same root bridge ID. This allows the network to represent DCE network 12 as a single switch. Before turning to some of the additional operations of this architecture, a brief discussion is provided about some of the infrastructure of the architecture, which is depicted by FIG. 2.
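
The following is a hedged sketch of the derivation just described: each CE-DCE gateway locally combines a default bridge priority with the MAC address of the IS-IS broadcast tree root system ID to obtain the same STP root bridge ID. The default value of 8192 is taken from the example of FIG. 5; the helper name and MAC values are hypothetical.

```python
DEFAULT_BRIDGE_PRIORITY = 8192  # default priority used in the example of FIG. 5

def derive_root_bridge_id(broadcast_tree_root_mac, priority=DEFAULT_BRIDGE_PRIORITY):
    """Return a (priority, MAC) pair used as the STP root bridge ID in BPDUs."""
    return (priority, broadcast_tree_root_mac.lower())

# Two gateways attached to the same DCE cloud converge on the same broadcast tree
# and therefore advertise the same root bridge ID, presenting the DCE network as
# a single virtual switch to the attached CE cloud.
gw1 = derive_root_bridge_id("00:11:22:33:44:55")
gw2 = derive_root_bridge_id("00:11:22:33:44:55")
assert gw1 == gw2
```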

FIG. 2 is a simplified block diagram illustrating potential details associated with communication system 10. In this particular example, CE-DCE gateway switches 24 and 28 include a respective processor 36a-b, a respective memory element 38a-b, a respective L2GIP module 30a-b, a respective L2G-STP module 32a-b, and respective routing modules 34a-b (e.g., IS-IS routing modules). Also included in FIG. 2 is a dashed line 40, which depicts L2GIP adjacency being built between CE-DCE gateway switches 24 and 28 (e.g., using an IS-IS protocol). Note also that L2GIP modules 30a-b may readily be incorporated into respective L2G-STP modules 32a-b in different embodiments (or vice versa), or routing modules 34a-b can be part of hybrid configurations in which any of the modules of FIG. 2 are suitably consolidated, combined, etc.

DCE network 12 and CE network 18 represent a series of points or nodes of interconnected communication paths for receiving and transmitting packets of information that propagate through communication system 10. These networks offer a communicative interface between network elements (e.g., switches, bridges, gateways, etc.) and may be any IP network, local area network (LAN), virtual LAN (VLAN), wireless LAN (WLAN), metropolitan area network (MAN), wide area network (WAN), extranet, Intranet, virtual private network (VPN), or any other appropriate architecture or system that facilitates communications in a network environment. The networks can support a transmission control protocol (TCP)/IP, or a user datagram protocol (UDP)/IP in particular embodiments of the present disclosure; however, these networks may alternatively implement any other suitable communication protocol for transmitting and receiving data packets within communication system 10.

DCE switches 20, 22 and CE-DCE gateway switches 24, 28 are network elements that route (or that cooperate with each other in order to route) traffic and/or packets in a network environment. As used herein in this Specification, the term ‘network element’ is meant to encompass switches, routers, gateways, bridges, loadbalancers, firewalls, inline service nodes, proxies, servers, processors, modules, or any other suitable device, component, element, or object operable to exchange information in a network environment. This network element may include any suitable hardware, software, components, modules, interfaces, or objects that facilitate the operations thereof. This may be inclusive of appropriate algorithms and communication protocols that allow for the effective exchange (reception and/or transmission) of data or information. DCE switches 20 and 22 are L2MP core switches in one particular example. At a point in time, DCE switches 20 and 22, along with CE-DCE gateway switches 24 and 28, converge at one IS-IS broadcast tree. CE-DCE gateway switches 24 and 28 can share the same perspective of the network via STP BPDUs.

In operation, L2G-STP modules 32a-b are configured to terminate an STP instance at respective CE-DCE gateway switches 24 and 28. L2G-STP defines the ports to be provided in one of the following L2 gateway port types: a backbone gateway port (i.e., the default gateway port type), and an uplink gateway port. Switches commonly have a priority list, which can be influenced by configuration, updates, information from their peers, etc. Upon receiving a superior BPDU, a given switch can compare its local priority list to this received information.
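
As an illustration of the comparison mentioned above, the hedged sketch below compares received BPDU priority information against locally held information, treating a numerically lower tuple as superior. The field names and values are hypothetical and only show the general priority-vector idea.

```python
from dataclasses import dataclass

@dataclass(frozen=True, order=True)
class BpduPriority:
    root_bridge_id: int
    root_path_cost: int
    designated_bridge_id: int
    designated_port_id: int

def is_superior(received: BpduPriority, local: BpduPriority) -> bool:
    # Lexicographic comparison: lower values win, field by field.
    return received < local

local = BpduPriority(0x2000_0011_2233_4455, 4, 0x8000_0000_0000_0002, 1)
rx    = BpduPriority(0x2000_0011_2233_4455, 2, 0x8000_0000_0000_0001, 1)
print(is_superior(rx, local))  # True: lower root path cost toward the same root
```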

DCE addressing and forwarding can include the use of a locally assigned aggregatable (or hierarchical) MAC address for forwarding activities. Edge switches can maintain a mapping between the classical MAC address and the corresponding hierarchical MAC address. The mapping can be established via learning between the network elements, where use of a MAC-in-MAC header can be employed to carry the hierarchical MAC address across the DCE network. A link state protocol can be used to determine the forwarding topology and, further, to support shortest path and equal cost multi-path forwarding for unicast traffic. Multicast frames can also readily use multi-pathing, albeit using a slightly different scheme. Additionally, a single control protocol can be used to compute unicast paths, multicast paths, and broadcast distribution trees.
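
The sketch below illustrates, under loose assumptions, the edge-switch mapping described above: a classical MAC address learned on a CE-facing port is associated with a hierarchical (DCE) address, and an outer MAC-in-MAC header carries the hierarchical address across the DCE cloud. The class, field names, and address formats are hypothetical.

```python
class EdgeMacTable:
    def __init__(self):
        self._map = {}   # classical MAC -> hierarchical MAC

    def learn(self, classical_mac, hierarchical_mac):
        self._map[classical_mac] = hierarchical_mac

    def encapsulate(self, frame):
        """Wrap a CE frame in an outer (MAC-in-MAC) header for DCE forwarding."""
        outer_dst = self._map.get(frame["dst"])
        if outer_dst is None:
            return {"flood": True, "inner": frame}   # unknown: flood on broadcast tree
        return {"outer_dst": outer_dst, "inner": frame}

table = EdgeMacTable()
table.learn("aa:bb:cc:dd:ee:01", "02:00:00:01:00:05")
print(table.encapsulate({"dst": "aa:bb:cc:dd:ee:01", "payload": b"..."}))
```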

For the L2G-STP mechanism, and in regards to the backbone gateway port, the STP root is enforced to be inside the DCE cloud. The backbone gateway port can ensure that it is consistently designated as the superior port. [Note that a backbone network or network backbone is commonly a part of computer network infrastructure that interconnects various pieces of a network: providing a path for the exchange of information between different LANs or sub-networks. A backbone can tie together diverse networks in the same building, in different buildings in a campus environment, over wide areas, etc. Normally, the backbone's capacity is greater than the networks connected to it.] In cases where the backbone gateway port receives a superior STP BPDU, the architecture will respond by placing the port in a gateway port inconsistency blocked state. For the uplink gateway port, the L2G-STP mechanism allows the STP root to be outside the DCE network. In using an auto-detection approach, when receiving superior BPDUs, the backbone port type can transition to an uplink port type. In addition, for an explicit configuration of the uplink port type approach, the uplink gateway port can ensure that it is not the designated superior port. Hence, if the uplink gateway port received an inferior STP BPDU, then the architecture would respond by putting the port in an uplink gateway port inconsistency blocked state.
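
The following hedged sketch summarizes the two port behaviors described above. A backbone gateway port expects the STP root inside the DCE cloud, so a superior BPDU either puts it into an inconsistency-blocked state or, with auto-detection, transitions it to an uplink port; an explicitly configured uplink gateway port expects the root outside the DCE cloud, so an inferior BPDU blocks it. The state names and function signature are illustrative only.

```python
def handle_bpdu(port_type, bpdu_is_superior, auto_detect=False):
    if port_type == "backbone":
        if bpdu_is_superior:
            # Auto-detection: become an uplink port; otherwise block as inconsistent.
            return ("uplink", "forwarding") if auto_detect \
                else ("backbone", "gateway-port-inconsistency-blocked")
        return ("backbone", "forwarding")
    if port_type == "uplink":
        if not bpdu_is_superior:
            # The uplink port must not be the designated superior port.
            return ("uplink", "uplink-gateway-port-inconsistency-blocked")
        return ("uplink", "forwarding")
    raise ValueError(port_type)

print(handle_bpdu("backbone", bpdu_is_superior=True))   # blocked by default
print(handle_bpdu("uplink", bpdu_is_superior=False))    # blocked: designated role not allowed
```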

Note that DCE switches 20, 22 and CE-DCE gateway switches 24, 28 may share (or coordinate) certain processing operations. Using a similar rationale, their respective memory elements may store, maintain, and/or update data in any number of possible manners. In a general sense, the arrangement depicted in FIG. 2 may be more logical in its representations, whereas a physical architecture may include various permutations/combinations/hybrids of these elements. In one example implementation, CE-DCE gateway switches 24, 28 include software (e.g., as part of L2GIP modules 30a-b and/or L2G-STP modules 32a-b) to achieve the switching operations, as outlined herein in this document. In other embodiments, this feature may be provided externally to any of the aforementioned elements, or included in some other network element to achieve this intended functionality. Alternatively, several elements may include software (or reciprocating software) that can coordinate in order to achieve the operations, as outlined herein. In still other embodiments, any of the devices of FIGS. 1-17 may include any suitable algorithms, hardware, software, components, modules, interfaces, or objects that facilitate these switching operations.

FIG. 3 is a simplified block diagram illustrating one example of a segmented spanning tree system 50. In the aggregate, FIGS. 3-5 can represent an example flow for particular switching events and, therefore, these illustrations are discussed together. FIG. 3 includes a set of DCE networks 52 and 54, which include a set of switches and gateways for propagating information in a network environment. Note that each switch group belongs to the same domain. In this particular example, an assumption is made that the STP root is inside the DCE cloud. A given switch (e.g., switch #12) can terminate the STP data, as shown by an element 58. In addition, a peer switch (e.g., switch #13) can receive a superior BPDU from switch #12. This is illustrated by an element 60 of FIG. 3. In addition, switch #22 can receive a superior BPDU from switch #13, as is depicted by an element 62.

In regards to the possible advantages associated with a segmented spanning tree, there is virtually no configuration obligation. Furthermore, there is a smaller, more manageable sized STP in the CE clouds. Such a strategy can also achieve a rapid convergence. No topology change (TC) is needed across the DCE, and there is no STP mechanism or L2GIP required inside the DCE network. Moreover, such an approach avoids concerns about DCE cloud merging, splitting, changing, etc.

Such a segmented spanning tree approach may present certain challenges. For example, the STP block typically is provided at the DCE-CE gateway switch port. It is desirable to utilize a high-bandwidth CE-DCE gateway link, and have the redundant link blocked in the CE wiring closet. Moreover, such an approach may not provide connectivity between DCE clouds using CE. To resolve the issue associated with utilizing CE-DCE links, a locally derived common root bridge ID can be employed. Example operations associated with the common root bridge ID are detailed below with reference to FIG. 4.

FIG. 4 is a simplified block diagram illustrating another example of a segmented spanning tree system 70. In this particular example, a DCE network 72 is provided with a number of switches and gateways for directing network traffic. Each of the switches uses the IS-IS broadcast tree system such that information being exchanged between the gateway switches is similar. An element 74 represents the notion of utilizing high bandwidth core links in this example. Note that a link between switch #13 and switch #15 has been blocked in this scenario. This block is problematic because it represents a core link being blocked, which inhibits optimal bandwidth usage. Furthermore, an element 76 is provided to illustrate that switch #13 receives a superior BPDU from switch #12. The link between switch #14 and switch #15 is also blocked, where this link is representative of the wiring closet. This link is generally provided for redundancy purposes. Ideally, any block should be pushed to this link (i.e., to the wiring closet).

FIG. 5 is a simplified block diagram illustrating example activities associated with segmented spanning tree system 70. This particular illustration is reflective of a locally derived common root bridge ID. Note that there is a better bridge priority in the CE cloud (e.g., if switch #9 were used instead of switch #15, this would be deemed as a misconfiguration). FIG. 5 also illustrates a set of BPDUs 84 and 86. In regards to BPDU 84, the following parameters are configured: Root BID=R10, Root Path Cost=0, and a Designated BID=S12. More specifically, for the root bridge ID, the bridge priority default value is given as 8192. The system ID is locally derived, and the MAC address is locally derived from the DCE IS-IS broadcast tree root system ID. For BPDU 86, the following parameters are configured: Root BID=R10, Root Path Cost=0, and a Designated BID=S13. In terms of the root bridge ID, the bridge priority has a default value of 8192. In addition, the system ID is locally derived, and the MAC address is locally derived from the DCE IS-IS broadcast tree root system ID.

In operation, each gateway switch locally derives a common root bridge ID. The bridge priority field (e.g., 4 bits) of the common root bridge ID can employ administrative control to relocate the STP root. In terms of how a given gateway switch would understand the configured root bridge priority value, there are several possible approaches. In a first approach, a better bridge priority is adopted from the received BPDUs in the case where the MAC address component of the root bridge ID presents a match. In a second approach, a new link-state packet (LSP) Type Length Value (TLV) can be employed to propagate the incremental bridge priority changes.
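
As a minimal sketch of the first approach above (names and values are hypothetical), a gateway adopts a better, i.e., numerically lower, bridge priority from a received BPDU only when the MAC-address component of the root bridge ID matches its own locally derived value, meaning both IDs refer to the same DCE broadcast tree root.

```python
def maybe_adopt_priority(local_priority, local_root_mac, rx_priority, rx_root_mac):
    if rx_root_mac == local_root_mac and rx_priority < local_priority:
        return rx_priority            # administratively relocated root: adopt it
    return local_priority             # otherwise keep the locally derived priority

print(maybe_adopt_priority(8192, "00:11:22:33:44:55", 4096, "00:11:22:33:44:55"))  # 4096
print(maybe_adopt_priority(8192, "00:11:22:33:44:55", 4096, "00:aa:bb:cc:dd:ee"))  # 8192
```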

Such an approach provides the capability to select CE-DCE core links, while pushing the redundant link block down into a CE wiring closet. However, such a strategy may not provide optimal connectivity between DCE clouds via the CE cloud. Furthermore, such a strategy would not have the STP root outside the DCE. To resolve the DCE cloud connectivity issue via the CE and/or to address the STP root outside of DCE, the L2GIP protocol can be used.

Turning to FIG. 6, FIG. 6 is a simplified flowchart illustrating one example scenario 100 that could be accommodated by communication system 10. The flow may begin at 110, where there is a detection of a gateway port, which serves as a first trigger. This typically happens when the first port is up on the CE side of a particular VLAN, and on the L2MP side. This would essentially allow the VLAN to operate as a traffic gateway. At 120, the IS-IS broadcast tree system ID is used as a root bridge ID to develop a proper network topology. At 130, the root bridge ID is communicated to a neighboring network element. At 140, by enforcing the STP being executed on CE-DCE gateway switches, when a superior STP BPDU is received, the port is blocked (which can effectively block particular links between gateway switches, CE switches, etc.). At 150, the gateway switches that are executing the IS-IS protocol would eventually converge at one root system ID (e.g., one broadcast tree ID, one root MAC address, etc.). Hence, all the gateway switches would communicate BPDUs with the same root bridge ID.

FIGS. 7-10 are simplified block diagrams illustrating example segmented spanning tree architectures. Before turning to those activities, along with the subsequent flow diagrams, it is important to provide an overview for the L2GIP mechanism. One objective for the L2GIP mechanism is to exchange information between CE-DCE gateway switches. This mainly targets L2G STP root summary information, which allows for relocating an STP root outside the DCE network. Additionally, the L2GIP mechanism is configured to discover CE-DCE gateway switches from the DCE IS-IS LSP. Subsequently, the L2GIP mechanism builds adjacency by reliably exchanging the L2G-STP information among the L2 gateway switches. In addition, the L2GIP adjacency is built between gateway switches having uplink ports. The L2GIP mechanism can rely on DCE IS-IS for a reliable distribution.

Consider an example in which:

    • A1=CE switches;
    • S1, S2=CE/DCE gateway switches that execute IS-IS on DCE links, and STP on CE links;
    • S3, S4=DCE switches; and
    • A1 is the STP root.

Initially, when S1 receives the superior BPDU from A1 on its CE link, it will block that link. Subsequently, S1 will send a (e.g., proprietary) spanning-tree root/interested VLAN TLV using IS-IS with a claim to the STP root A1. This TLV can be carried in the IS-IS LSP and, further, reliably flooded to the CE/DCE gateway and core DCE switches (e.g., involving S2, S3, and S4).

When S2 receives the spanning-tree root TLV from S1, and because it does not have connectivity to the STP root A1 (e.g., S2 has not received a superior BPDU from A1 yet), it can reply back using the spanning-tree root TLV with a grant bit set. The DCE core switches S3 and S4 can also reply back using the spanning-tree root TLV with the grant bit set. In addition, when S1 receives the grant TLVs from S2, S3, and S4, it can move the CE link connecting to A1 to the STP forwarding state. Next, when S2 receives the superior BPDU from A1 (and since it has already granted A1), it can keep the CE link to A1 STP blocked and, hence, prevent the loop.
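
The following hedged sketch models the claim/grant exchange in the S1-S4 example. The message contents and classes are hypothetical; the intent is only to show that S1 unblocks its CE link once every other switch has replied with the grant bit set, while a switch that has already granted that root keeps its own CE link blocked when it later hears the same superior BPDU.

```python
class Switch:
    def __init__(self, name, has_root_connectivity=False):
        self.name = name
        self.has_root_connectivity = has_root_connectivity
        self.granted_roots = set()
        self.ce_link_state = "forwarding"

    def on_claim(self, root_id):
        # Grant if this switch has not itself claimed connectivity to the STP root.
        if not self.has_root_connectivity:
            self.granted_roots.add(root_id)
            return {"type": "spanning-tree-root-tlv", "root": root_id, "grant": True}
        return {"type": "spanning-tree-root-tlv", "root": root_id, "grant": False}

    def on_superior_bpdu(self, root_id):
        # Having already granted this root to a peer, keep the local CE link blocked.
        if root_id in self.granted_roots:
            self.ce_link_state = "blocked"

s1 = Switch("S1", has_root_connectivity=True)   # received the superior BPDU from A1
s2, s3, s4 = Switch("S2"), Switch("S3"), Switch("S4")

s1.ce_link_state = "blocked"                     # S1 blocks its CE link upon the BPDU
replies = [peer.on_claim("A1") for peer in (s2, s3, s4)]
if all(r["grant"] for r in replies):
    s1.ce_link_state = "forwarding"              # grants received: unblock the CE link
s2.on_superior_bpdu("A1")                        # S2 later hears A1 but stays blocked
print(s1.ce_link_state, s2.ce_link_state)        # forwarding blocked
```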

These activities provide a loop-free fast-convergence mechanism for an associated architecture. More specifically, the claim-grant mechanism (employed by the CE/DCE and DCE core switches using the DCE (L2MP) IS-IS) can allow for a loop-free fast-convergence CE-DCE L2 hybrid network. This provides a fast-convergence CE/DCE topology change. Additionally, such capabilities can work with STP on CE, with IS-IS on DCE, and would not necessarily involve an overlay protocol.

Operationally, the particular example architecture 700 of FIG. 7 is illustrative of no connectivity existing between DCE 1 and DCE 2 via CE clouds (i.e., switch 16 (S16)). In the particular implementation of FIG. 7, the block that exists at DCE 2 (between S20 and S16) can be unblocked after the L2GIP receives an acknowledgment (e.g., an ACK) from the gateway switches. Additionally, S20 can receive superior BPDU transitions associated with the uplink port.

The example architecture 800 of FIG. 8 illustrates a number of root summaries in a particular database (DB). In the particular example of FIG. 8, a root summary database indicates that S20 is a better ‘selected forwarder’ and, therefore, it remains blocked. In a particular implementation, there can be multiple DBs within each switch. For example, there could be a link-state database (LSDB) configured to store the DCE IS-IS LSPs. Each LSP (e.g., learned or self-generated) could be properly stored at the switch. The data structure used for implementing the LSDB can be designed for fast insertion and retrieval to optimize performance. The LSDB can be maintained efficiently to allow for unicast SPF, broadcast-SPF graph computation (B-SPF), and multicast SPF graph computation (M-SPF) per-topology.
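
Below is a minimal sketch of an LSDB keyed for fast insertion and retrieval, as described above. The LSP structure and field names are hypothetical; a real LSDB would also track remaining lifetime and the per-topology data consumed by the SPF, B-SPF, and M-SPF computations.

```python
class LinkStateDatabase:
    def __init__(self):
        self._lsps = {}    # (system_id, lsp_number) -> LSP contents

    def insert(self, system_id, lsp_number, lsp):
        key = (system_id, lsp_number)
        existing = self._lsps.get(key)
        # Keep only the newest copy of each LSP, based on its sequence number.
        if existing is None or lsp["seq"] > existing["seq"]:
            self._lsps[key] = lsp

    def lookup(self, system_id, lsp_number):
        return self._lsps.get((system_id, lsp_number))

    def all_lsps(self):
        return list(self._lsps.values())

db = LinkStateDatabase()
db.insert("S20", 0, {"seq": 3, "neighbors": {"S22": 10}})
db.insert("S20", 0, {"seq": 2, "neighbors": {"S22": 40}})   # older copy is ignored
print(db.lookup("S20", 0))
```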

Additionally, an interface DB can be provided to store the configured interfaces. It can contain a list of interfaces configured, along with any IS-IS parameters, such as metrics on each topology associated with them. The operational state and attributes of the interface can be obtained from an interface manager. Certain interface information is cached locally for speed and simplified logic. The interface DB can also maintain the state of the ports, as may be appropriate for implementing the loop-free proposals. In addition, each switch can be provided with an adjacency DB, which may be closely related to the interface database (as each adjacency is associated with an interface). The adjacency database can hold the state of each discovered adjacency. In operation, the adjacency state can change as the protocol state machine performs its functions.

The example architecture 900 of FIG. 9 further illustrates activities associated with the segmented spanning tree. This particular depiction highlights the block that exists between S16 and S22. In this particular implementation, the link between S16 and S22 is unblocked after the L2GIP mechanism receives an acknowledgment (e.g., an ACK) from the gateway switches. In the example architecture 1000 of FIG. 10, particular root summary DBs are being illustrated for S22, S23, and S20. Note that a new TLV type can be added to the DCE IS-IS level 1 PDU. This can be used to distribute the L2 gateway port root summary information. For example, the L2 gateway port root summary LSP can be used to advertise a set of root bridge summary information of the attached CE segment.
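
The following hedged sketch shows what an L2 gateway port root summary TLV carried in an IS-IS LSP might look like. The type code, field widths, and packing are hypothetical and simply illustrate a type/length/value layout for the root summary information.

```python
import struct

HYPOTHETICAL_ROOT_SUMMARY_TLV_TYPE = 250   # illustrative, not an assigned code point

def encode_root_summary_tlv(vlan_id, root_priority, root_mac_bytes):
    # Value: 2-byte VLAN, 2-byte bridge priority, 6-byte root MAC of the CE segment.
    value = struct.pack("!HH6s", vlan_id, root_priority, root_mac_bytes)
    return struct.pack("!BB", HYPOTHETICAL_ROOT_SUMMARY_TLV_TYPE, len(value)) + value

tlv = encode_root_summary_tlv(100, 8192, bytes.fromhex("001122334455"))
print(tlv.hex())
```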

Turning to FIG. 11, an example architecture 1100 is depicted. This particular example is reflective of a DCE cloud merge. For example, at S3, there is a CE port block, where S8 has an uplink port, and S3 has an uplink port. Further, at S4 at DCE 1, there is no need to block the backbone port. Additionally, there is no need to block the backbone port of S2 at DCE 2. Furthermore, there can be a CE port block at DCE 2 (as shown). In FIG. 12, an example architecture 1200 is illustrating additional details associated with the DCE cloud merge. A dashed line 1202 is being shown as connecting S3 (of DCE 1) to S2 (of DCE 2). The L2GIP mechanism can build an adjacency between S3 and S8. Additionally, L2G-STP root summary information can be exchanged over this link. Note also that the STP recomputation port moves to forwarding at S3 of DCE 1. In addition, at adjacency time out from S3, the STP can be recomputed.

Referring now to FIG. 13, an example architecture 1300 is illustrated. At S3, L2GIP sends root summary information using an IS-IS LSA. In this particular example, the selection is made for the ‘root’ port (unblocked). After the gateway switch discovery time, L2GIP signals L2G-STP to begin the STP computation. In addition, root summary information propagates over the hashed link using an IS-IS LSA. L2GIP builds an adjacency between S3 and S8. L2G-STP root summary information is exchanged.

Turning to FIG. 14, an example flow 1400 is being depicted. This particular set of activities involves IS-IS, a multicast routing information base (M2RIB), and an L2G-STP/L2GIP. M2RIB is a DCE multicast routing information base component that can store broadcast and multicast state. This particular example may begin at 1402, where a new LSP is sent. Subsequently, an SPF update is sent to the M2RIB at 1404. A new gateway (GW) switch address (SW ADD) request is sent to L2G-STP/L2GIP at 1406. S8 has an uplink port, and because S3 also has an uplink port, all uplink ports are blocked. At 1408, a new GW SW ADD (S8) response is sent back to the M2RIB. At 1410, the root summary LSP is sent to IS-IS. Then, the IS-IS root summary update LSP is sent by the IS-IS protocol at 1412.

FIG. 15 illustrates another example flow 1500 involving interactions with respect to the CE and gateway switch. Initially, at 1502, an SPF update is sent by the IS-IS protocol. Subsequently, at 1504, a new GW SW ADD (local) request is sent to the L2G-STP/L2GIP. STP ports are blocked at this juncture, where a new GW SW ADD (local) response can be sent by the L2G-STP/L2GIP at 1506. At 1508, the L2G-STP/L2GIP communicates the notification for the CE gateway switch capability to the IS-IS protocol.

FIG. 16 illustrates another example flow 1600 involving similar components, along with an Ethernet Port-Channel manager (EthPM), which can be used to retrieve notifications about ports. At 1602, an L2 logical port is functional. At 1604, a notification is sent to IS-IS (indicative of a CE gateway switch capability). At 1606, root summary LSPs are retrieved. Subsequently, a gateway switch discovery timeout occurs. At this juncture, STP can be started. Turning to FIG. 17, another example flow 1700 is shown to illustrate how to delete a gateway switch LSP. This particular flow may begin at 1702, where an SPF update is communicated from the IS-IS protocol to the M2RIB. At 1704, the GW SW delete message is sent to the L2G-STP/L2GIP. The STP topology is recomputed, and the SPF update may continue. These activities would effectively delete the gateway switch LSP.

Note that in certain example implementations, the switching functions outlined herein may be implemented by logic encoded in one or more tangible media (e.g., embedded logic provided in an application specific integrated circuit (ASIC), digital signal processor (DSP) instructions, software (potentially inclusive of object code and source code) to be executed by a processor, or other similar machine, etc.). In some of these instances, a memory element (as shown in FIG. 2) can store data used for the operations described herein. This includes the memory element being able to store software, logic, code, or processor instructions that can be executed to carry out the activities described in this Specification. A processor can execute any type of instructions associated with the data to achieve the operations detailed herein in this Specification. In one example, the processor (as shown in FIG. 2) could transform an element or an article (e.g., data) from one state or thing to another state or thing. In another example, the activities outlined herein may be implemented with fixed logic or programmable logic (e.g., software/computer instructions executed by a processor) and the elements identified herein could be some type of a programmable processor, programmable digital logic (e.g., a field programmable gate array (FPGA), an erasable programmable read only memory (EPROM), an electrically erasable programmable ROM (EEPROM)) or an ASIC that includes digital logic, software, code, electronic instructions, or any suitable combination thereof.

In one example implementation, L2GIP modules 30a-b and/or L2G-STP modules 32a-b include software in order to achieve the switching functions outlined herein. These activities can be facilitated by CE-DCE switches 24, 28 and/or any of the elements of the FIGURES. CE-DCE switches 24, 28 can include memory elements for storing information to be used in achieving the intelligent switching control, as outlined herein. Additionally, CE-DCE switches 24, 28 may include a processor that can execute software or an algorithm to perform the switching activities, as discussed in this Specification. These devices may further keep information in any suitable memory element (random access memory (RAM), ROM, EPROM, EEPROM, ASIC, etc.), software, hardware, or in any other suitable component, device, element, or object where appropriate and based on particular needs. Any possible memory items (e.g., database, table, cache, etc.) should be construed as being encompassed within the broad term ‘memory element.’ Similarly, any of the potential processing elements, modules, and machines described in this Specification should be construed as being encompassed within the broad term ‘processor.’

Note that with the examples provided herein, interaction may be described in terms of two or three elements. However, this has been done for purposes of clarity and example only. In certain cases, it may be easier to describe one or more of the functionalities of a given set of flows by only referencing a limited number of network elements. It should be appreciated that communication system 10 (and its teachings) are readily scalable and can accommodate a large number of clouds, networks, and/or switches, as well as more complicated/sophisticated arrangements and configurations. Accordingly, the examples provided herein should not limit the scope or inhibit the broad teachings of communication system 10 as potentially applied to a myriad of other architectures. Additionally, although described with reference to particular scenarios where L2GIP modules 30a-b, L2G-STP modules 32a-b, and/or routing modules 34a-b are provided separately, these modules can be consolidated or combined in any suitable fashion, or provided in a single proprietary unit.

It is also important to note that the operations discussed with reference to FIGS. 1-17 illustrate only some of the possible scenarios that may be executed by, or within, communication system 10. Some of these operations may be deleted or removed where appropriate, or these steps may be modified or changed considerably without departing from the scope of the present disclosure. In addition, a number of these operations have been described as being executed concurrently with, or in parallel to, one or more additional operations. However, the timing of these operations may be altered considerably. The preceding operational flows have been offered for purposes of example and discussion. Substantial flexibility is provided by communication system 10 in that any suitable arrangements, chronologies, configurations, and timing mechanisms may be provided without departing from the teachings of the present disclosure.

Although the present disclosure has been described in detail with reference to particular embodiments, it should be understood that various other changes, substitutions, and alterations may be made hereto without departing from the spirit and scope of the present disclosure. For example, although the present disclosure has been described as operating in particular switching environments or arrangements, the present disclosure may be used in any communications environment that could benefit from such technology. Virtually any configuration that seeks to intelligently switch packets could enjoy the benefits of the present disclosure.

Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained to one skilled in the art and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the appended claims. In order to assist the United States Patent and Trademark Office (USPTO) and, additionally, any readers of any patent issued on this application in interpreting the claims appended hereto, Applicant wishes to note that the Applicant: (a) does not intend any of the appended claims to invoke paragraph six (6) of 35 U.S.C. section 112 as it exists on the date of the filing hereof unless the words “means for” or “step for” are specifically used in the particular claims; and (b) does not intend, by any statement in the specification, to limit this disclosure in any way that is not otherwise reflected in the appended claims.

Claims

1. A method, comprising:

receiving, at a gateway element, a bridge protocol data unit (BPDU) on a particular link of a network, wherein a port associated with the particular link is blocked in response to receiving the BPDU, and wherein the BPDU is received from a first network element;
communicating, by the gateway element to a second network element, a claim message comprising a claim to a spanning tree protocol (STP) root of the network, wherein the claim message comprises a spanning tree root Type Length Value (TLV) provisioned within intermediate system to intermediate system (IS-IS) protocol;
receiving, from the second network element, a reply in which a grant bit is set, wherein the reply comprises the spanning tree root TLV; and
changing the port associated with the particular link to a forwarding state subsequent to receiving the reply;
wherein the network includes a Data Center Ethernet (DCE) network and a Classical Ethernet (CE) network, which together form a layer-2 (L2) broadcast domain; and
wherein the gateway element is disposed between the DCE network and the CE network.

2. The method of claim 1, further comprising:

executing an intermediate system to intermediate system (IS-IS) protocol for a first set of network links in the network; and
executing a spanning tree protocol (STP) for a second set of network links in the network.

3. The method of claim 1, wherein the spanning tree root TLV is flooded to neighboring network elements through a plurality of link-state packets (LSPs).

4. The method of claim 1, wherein the second network element comprises a part of the Data Center Ethernet (DCE) network and the first network element comprises a part of the Classical Ethernet (CE) network.

5. The method of claim 1, wherein a selected port is configured to receive a superior BPDU, and in response to the superior BPDU, the selected port remains in a blocked state.

6. The method of claim 1, wherein the particular link is a CE link, and wherein the claim message includes a TLV associated with a virtual local area network (VLAN).

7. One or more non-transitory tangible media that includes code for execution, and when executed by a processor operable to perform operations comprising:

receiving, at a gateway element, a bridge protocol data unit (BPDU) on a particular link of a network, wherein a port associated with the particular link is blocked in response to receiving the BPDU, and wherein the BPDU is received from a first network element;
communicating, by the gateway element to a second network element, a claim message comprising a claim to a spanning tree protocol (STP) root of the network, wherein the claim message comprises a spanning tree root Type Length Value (TLV) provisioned within intermediate system to intermediate system (IS-IS) protocol;
receiving, from the second network element, a reply in which a grant bit is set, wherein the reply comprises the spanning tree root TLV; and
changing the port associated with the particular link to a forwarding state subsequent to receiving the reply;
wherein the network includes a Data Center Ethernet (DCE) network and a Classical Ethernet (CE) network, which together form a layer-2 (L2) broadcast domain; and
wherein the gateway element is disposed between the DCE network and the CE network.

8. The media of claim 7, the operations further comprising:

executing an intermediate system to intermediate system (IS-IS) protocol for a first set of network links in the network; and
executing a spanning tree protocol (STP) for a second set of network links in the network.

9. The media of claim 7, wherein the spanning tree root TLV is flooded to neighboring network elements through a plurality of link-state packets (LSPs).

10. The media of claim 7, wherein the second network element comprises a part of the Data Center Ethernet (DCE) network and the first network element comprises a part of the Classical Ethernet (CE) network.

11. The media of claim 7, wherein a selected port is configured to receive a superior BPDU, and in response to the superior BPDU, the selected port remains in a blocked state.

12. The media of claim 7, wherein the particular link is a CE link, and wherein the claim message includes a TLV associated with a virtual local area network (VLAN).

13. An apparatus, comprising:

a memory element configured to store electronic code,
a processor operable to execute instructions associated with the electronic code, and
a layer-2 (L2) gateway interconnect protocol (L2GIP) module configured to interface with the processor and the memory element such that the apparatus is configured for: receiving, at a gateway element, a bridge protocol data unit (BPDU) on a particular link of a network, wherein a port associated with the particular link is blocked in response to receiving the BPDU, and wherein the BPDU is received from a first network element; communicating, by the gateway element to a second network element, a claim message comprising a claim to a spanning tree protocol (STP) root of the network, wherein the claim message comprises a spanning tree root Type Length Value (TLV) provisioned within intermediate system to intermediate system (IS-IS) protocol; receiving, from the second network element, a reply in which a grant bit is set, wherein the reply comprises the spanning tree root TLV; and changing the port associated with the particular link to a forwarding state subsequent to receiving the reply;
wherein the network includes a Data Center Ethernet (DCE) network and a Classical Ethernet (CE) network, which together form a layer-2 (L2) broadcast domain; and
wherein the gateway element is disposed between the DCE network and the CE network.

14. The apparatus of claim 13, wherein the apparatus is further configured for:

executing an intermediate system to intermediate system (IS-IS) protocol for a first set of network links in the network; and
executing a spanning tree protocol (STP) for a second set of network links in the network.

15. The apparatus of claim 13, wherein the spanning tree root TLV is flooded to neighboring network elements through a plurality of link-state packets (LSPs).

16. The apparatus of claim 13, wherein the second network element comprises a part of the Data Center Ethernet (DCE) network and the first network element comprises a part of the Classical Ethernet (CE) network.

17. The apparatus of claim 13, wherein a selected port is configured to receive a superior BPDU, and in response to the superior BPDU, the selected port remains in a blocked state.

18. The method of claim 1, wherein the second network element comprises a second gateway element, the method further comprising:

receiving at the second network element a BPDU on a CE link of the second network element; and
blocking the CE link of the second network element subsequent to the receiving.

19. The media of claim 7, wherein the second network element comprises a second gateway element, the operations further comprising:

receiving at the second network element a BPDU on a CE link of the second network element; and
blocking the CE link of the second network element subsequent to the receiving.

20. The apparatus of claim 13, wherein the second network element comprises a second gateway element, the apparatus further being configured for:

receiving at the second network element a BPDU on a CE link of the second network element; and
blocking the CE link of the second network element subsequent to the receiving.
20040073690 April 15, 2004 Hepworth et al.
20040114539 June 17, 2004 Beshai et al.
20040125965 July 1, 2004 Alberth et al.
20040170163 September 2, 2004 Yik et al.
20040184323 September 23, 2004 Mori et al.
20040193709 September 30, 2004 Selvaggi et al.
20040218617 November 4, 2004 Sagfors
20040223458 November 11, 2004 Gentle
20040240431 December 2, 2004 Makowski et al.
20040252646 December 16, 2004 Adhikari et al.
20050036519 February 17, 2005 Balakrishnan et al.
20050105474 May 19, 2005 Metzler
20050111487 May 26, 2005 Matta et al.
20050117576 June 2, 2005 McDysan et al.
20050152406 July 14, 2005 Chauveau
20050216599 September 29, 2005 Anderson et al.
20050220123 October 6, 2005 Wybenga et al.
20050226172 October 13, 2005 Richardson et al.
20050243733 November 3, 2005 Crawford
20050246041 November 3, 2005 Kreifeldt et al.
20050259597 November 24, 2005 Benedetto et al.
20050283639 December 22, 2005 Le Pennec et al.
20050286419 December 29, 2005 Joshi et al.
20050286436 December 29, 2005 Flask
20060007869 January 12, 2006 Hirota et al.
20060018333 January 26, 2006 Windisch et al.
20060041431 February 23, 2006 Maes
20060098586 May 11, 2006 Farrell et al.
20060104217 May 18, 2006 Lehane
20060104306 May 18, 2006 Adamczyk et al.
20060112400 May 25, 2006 Zhang et al.
20060122835 June 8, 2006 Huart et al.
20060133286 June 22, 2006 Elie-Dit-Cosaque et al.
20060140136 June 29, 2006 Filsfils et al.
20060159029 July 20, 2006 Samuels et al.
20060179338 August 10, 2006 Sumner
20060215684 September 28, 2006 Capone
20060250967 November 9, 2006 Miller et al.
20060268742 November 30, 2006 Chu et al.
20060274760 December 7, 2006 Loher
20060280130 December 14, 2006 Nomura et al.
20060291385 December 28, 2006 Yang
20070041335 February 22, 2007 Znamova et al.
20070058571 March 15, 2007 Rose
20070064616 March 22, 2007 Miranda
20070107034 May 10, 2007 Gotwals
20070127395 June 7, 2007 Jain et al.
20070150480 June 28, 2007 Hwang et al.
20070153774 July 5, 2007 Shay et al.
20070171835 July 26, 2007 Gobara et al.
20070204017 August 30, 2007 Maes
20070212065 September 13, 2007 Shin et al.
20070223462 September 27, 2007 Hite et al.
20070258359 November 8, 2007 Ogasawara et al.
20070263554 November 15, 2007 Finn
20070286165 December 13, 2007 Chu et al.
20080019282 January 24, 2008 Alaria et al.
20080031149 February 7, 2008 Hughes et al.
20080031154 February 7, 2008 Niazi et al.
20090022069 January 22, 2009 Khan et al.
20090028044 January 29, 2009 Windisch et al.
20090059800 March 5, 2009 Mohan
20090080334 March 26, 2009 DeCusatis et al.
20090125595 May 14, 2009 Maes
20090144403 June 4, 2009 Sajassi et al.
20090175274 July 9, 2009 Aggarwal et al.
20090193057 July 30, 2009 Maes
20090201937 August 13, 2009 Bragg et al.
20090219823 September 3, 2009 Qian et al.
20090219836 September 3, 2009 Khan et al.
20090274153 November 5, 2009 Kuo et al.
20090296588 December 3, 2009 Nishi et al.
20090328051 December 31, 2009 Maes
20100049826 February 25, 2010 Maes
20100061254 March 11, 2010 Thottakkara et al.
20100061269 March 11, 2010 Banerjee et al.
20100069052 March 18, 2010 Ahomaki et al.
20100182937 July 22, 2010 Bellagamba
20100189118 July 29, 2010 Nonaka
20100226244 September 9, 2010 Mizutani et al.
20100302936 December 2, 2010 Jan et al.
20110019678 January 27, 2011 Mehta et al.
20110134804 June 9, 2011 Maes
20120106339 May 3, 2012 Mishra et al.
Foreign Patent Documents
WO 2008/010918 January 2008 WO
WO 2009/014967 January 2009 WO
Other references
  • U.S. Appl. No. 11/297,882, filed Dec. 7, 2005, entitled “Preventing Transient Loops in Broadcast/Multicast Trees During Distribution of Link State Information,” Inventor: Sachin Jain.
  • U.S. Appl. No. 11/378,990, filed Mar. 17, 2006, entitled “Preventing Transient Loops in Broadcast/Multicast Trees During Distribution of Link State Information,” Inventor: Sachin Jain.
  • U.S. Appl. No. 11/490,806, filed Jul. 20, 2006, entitled “Methods and Apparatus for Improved Determination of Network Metrics,” Inventor: Valentina Alaria.
  • U.S. Appl. No. 11/880,322, filed Jul. 20, 2007, entitled “Preventing Loops in Networks Operating Different Protocols to Provide Loop-Free Topology,” Inventor: Tameen Khan.
  • U.S. Appl. No. 12/475,124, filed May 29, 2009, entitled “Transient Loops Prevention in a Hybrid Layer-2 Network,” Inventor: Saurabh Jain.
  • U.S. Appl. No. 12/658,503, filed Feb. 5, 2010, entitled “Fault Isolation in Trill Networks,” Inventor(s): Ali Sajassi et al.
  • U.S. Appl. No. 12/916,763, filed Nov. 1, 2010, entitled “Probing Specific Customer Flow in Layer-2 Multipath Networks,” Inventor(s): Chandan Mishra et al.
  • U.S. Appl. No. 12/938,237, filed Nov. 2, 2010, entitled “System and Method for Providing Proactive Fault Monitoring in a Network Environment,” Inventor(s): Chandan Mishra, et al.
  • U.S. Appl. No. 12/941,881, filed Nov. 8, 2010, entitled “System and Method for Providing a Loop Free Topology in a Network Environment,” Inventor: Shekher Bulusu.
  • U.S. Appl. No. 13/041,148, filed Mar. 4, 2011, entitled “System and Method for Managing Topology Changes in a Network Environment,” Inventor: Shekher Bulusu.
  • U.S. Appl. No. 13/077,262, filed Mar. 31, 2011, entitled “System and Method for Probing Multiple Paths in a Network Environment,” Inventors: Hariharan Balasubramanian, et al.
  • U.S. Appl. No. 13/152,200, filed Jun. 2, 2011, entitled “System and Method for Managing Network Traffic Disruption,” Inventor: Shekher Bulusu.
  • Wikipedia, “IEEE 802.1ag,” Connectivity Fault Management, retrieved and printed Nov. 2, 2010, 4 pages; http://en.wikipedia.org/wiki/IEEE802.1ag.
  • G. Malkin, “Traceroute Using an IP Option,” Network Working Group, RFC 1393, Jan. 1993, 8 pages; http://tools.ietf.org/pdf/rfc1393.pdf.
  • K. Kompella and G. Swallow, “Detecting Multi-Protocol Label Switched (MPLS) Data Plane Failures,” Network Working Group, RFC 4379, Feb. 2006, 51 pages; http://tools.ietf.org/pdf/rfc4379.pdf.
  • PCT “International Preliminary Report on Patentability (issued Jan. 20, 2009; 1 page), Written Opinion of the International Searching Authority (mailed Feb. 7, 2008; 6 pages), and International Search Report (mailed Feb. 7, 2008; 2 pages),” for PCT/US2007/015506.
  • PCT “International Preliminary Report on Patentability (dated Jan. 26, 2010; 1 page) and Written Opinion of the International Searching Authority and International Search Report (dated Oct. 2, 2008; 7 pages),” for PCT International Application PCT/US2008/070243.
  • IEEE Standards Department, “Virtual Bridged Local Area Networks—Amendment 6: Provider Backbone Bridges—IEEE P802.1ah/D4.2,” © 2008, Institute of Electrical and Electronics Engineers, Inc., Mar. 26, 2008; 116 pages.
  • IEEE Standards Department, “Virtual Bridged Local Area Networks—Amendment 9: Shortest Path Bridging—IEEE P802.1aq/D2.1,” © 2009, Institute of Electrical and Electronics Engineers, Inc., Aug. 21, 2009; 208 pages (Part 1 of 2 [100 pages]; Part 2 of 2 [108 pages]).
  • D. Fedyk, et al., IS-IS Extensions Supporting IEEE 802.1aq Shortest Path Bridging, Network Working Group Internet Draft, Mar. 8, 2011, 42 pages; http://tools.ietf.org/html/draft-ietf-isis-ieee-aq-05.
  • USPTO Sep. 25, 2012 Non-Final Office Action from U.S. Appl. No. 12/941,881.
  • USPTO Dec. 11, 2012 Response to Sep. 25, 2012 Non-Final Office Action from U.S. Appl. No. 12/941,881.
  • USPTO Feb. 27, 2013 Final Office Action from U.S. Appl. No. 12/941,881.
  • USPTO Mar. 26, 2013 Non-Final Office Action from U.S. Appl. No. 13/077,828.
  • USPTO Nov. 26, 2012 Non-Final Office Action from U.S. Appl. No. 12/938,237.
  • USPTO Feb. 22, 2013 Response to Nov. 26, 2012 Non-Final Office Action from U.S. Appl. No. 12/938,237.
  • USPTO Mar. 26, 2013 Final Office Action from U.S. Appl. No. 12/938,237.
  • Andreasen et al., “RTP No-Op Payload Format,” Internet Draft, Internet Engineering Task Force, Feb. 2004, pp. 1-8.
  • Cheng, Jin et al., “Fast TCP: Motivation, Architecture, Algorithms, Performance,” IEEE INFOCOM 2004, Aug. 2, 2004, 44 pages.
  • Niccolini, S., et al., “How to store traceroute measurements and related metrics,” Internet Draft draft-niccolini-ippm-storetraceroutes-02.txt, Oct. 24, 2005.
  • Woundy et al., “ARIS: Aggregate Route-Based IP Switching,” Internet Draft draft-woundy-aris-ipswitching-00.txt, Nov. 1996.
  • Perlman, Radia, “Rbridges: Transparent Routing,” in Proc. IEEE INFOCOM, Mar. 2004.
  • Perlman, et al., “Rbridges: Base Protocol Specification,” IETF Draft <draft-ietf-trill-rbridge-protocol-11.txt>, Jan. 2009.
  • Touch, et al., Transparent Interconnection of Lots of Links (TRILL): Problem and Applicability Statement, RFC 5556, IETF, May 2009.
  • Kessler, G., “Chapter 2.2 PING of TCP: A Primer on Internet and TCP/IP Tools,” RFC 1739; Dec. 1994; www.ietf.org.
  • Callon et al., “A Framework for Multiprotocol Label Switching,” IETF Network Working Group, Internet Draft draft-ietf-mpls-framework-02.txt, Nov. 21, 1997.
  • Deering, S., et al., “Internet Protocol Version 6,” RFC 1883, Dec. 1995.
  • Feldman, N., “ARIS Specification,” Internet Draft, Mar. 1997.
  • Gobrial, Margret N., “Evaluation of Border Gateway Protocol (BGP) Version 4 (V4) in the Tactical Environment,” Military Communications Conference, Oct. 21-24, 1996; Abstract only: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=569372&url=http%3A%2F%2Fieeexplore.ieee.org%2Fie13%2F4198%2F12335%2F00569372.pdf%3Farnumber%3D569372.
  • Halabi, Bassam, Internet Routing Architectures (Cisco), Macmillan Technical Publishing, Apr. 23, 1997; Abstract and Table of Contents only. http://www.ciscopress.com/store/internet-routing-architectures-cisco-9781562056520.
  • Handley and V. Jacobson, “SDP: Session Description Protocol,” RFC 2327, Apr. 1998, 43 pages.
  • Heinanen, J., “Multiprotocol Encapsulation over ATM Adaptation Layer 5,” RFC 1483, Jul. 1993.
  • Jennings, C., “NAT Classification Test Results,” Internet Draft draft-jennings-behave-test-results-02.txt, Jun. 25, 2006.
  • Katsube, Y. et al., “Toshiba's Router Architecture Extensions for ATM: Overview,” RFC 2098, Feb. 1997.
  • Laubach, M., “Classical IP and ARP over ATM,” RFC 1577, Jan. 1994.
  • Laubach, M., “IP over ATM Working Group's Recommendations for the ATM Forum's Multiprotocol BOF Version 1,” RFC 1754, Jan. 1995.
  • Liao et al., “Adaptive Recovery Techniques for Real-time Audio Streams,” IEEE INFOCOM 2001; Twentieth Annual Joint Conference of the IEEE Computer and Communications Societies Proceedings, Apr. 22-26, 2001, vol. 2, pp. 815-823.
  • McGovern, M., et al., “CATNIP: Common Architecture for the Internet,” RFC 1707, Oct. 1994.
  • Nagami, K., et al., “Toshiba's Flow Attribute Notification Protocol (FANP) Specification,” RFC 2129, Apr. 1997.
  • Newman, P. et al., “Ipsilon Flow Management Protocol Specification for IPv4 Version 1.0,” RFC 1953, May 1996.
  • Newman, P. et al., “Ipsilon's General Switch Management Protocol Specification Version 1.1,” RFC 1987, Aug. 1996.
  • PCT Feb. 7, 2008 International Search Report for PCT/US2007/015506.
  • Perez, M., et al., “ATM Signaling Support for IP over ATM,” RFC 1755, Feb. 1995.
  • Rosen et al., “A Proposed Architecture for MPLS,” IETF Network Working Group, Internet Draft draft-ietf-mpls-arch-00.txt, Aug. 1997.
  • Rosen et al., “MPLS Label Stack Encoding,” RFC 3032, Jan. 2001.
  • Rosenberg et al., “STUN—Simple Traversal of User Datagram Protocol (UDP) Through Network Address Translators (NATs),” Network Working Group, RFC 3489, Mar. 2003, 44 pgs.
  • Schulzrinne, H., et al., “RTP, A Transport Protocol for Real-Time Applications,” Network Working Group RFC3550, Jul. 2003, 98 pages.
  • Smith, Bradley R., et al., “Securing the Border Gateway Routing Protocol,” Global Telecommunications Conference, Nov. 18-22, 1996.
  • Townsley, et al., “Layer Two Tunneling Protocol, L2TP,” Network Working Group, RFC 2661, Aug. 1999, 75 pages.
  • Ullman, R., “Rap: Internet Route Access Protocol,” RFC 1476, Jun. 1993.
  • Viswanathan et al., “ARIS: Aggregate Route-Based IP Switching,” Internet Draft, Mar. 1997.
  • Wang, Q. et al., “TCP-Friendly Congestion Control Schemes in the Internet,” National Key Lab of Switching Technology and Telecommunication Networks, Beijing University of Posts & Telecommunications; 2001, pp. 211-216; http://www.sics.se/~runtong/11.pdf.
  • USPTO May 24, 2013 RCE Response to Final Office Action dated Feb. 27, 2013 from U.S. Appl. No. 12/941,881.
  • USPTO May 24, 2013 Supplemental Response to Final Office Action dated Feb. 27, 2013 from U.S. Appl. No. 12/941,881.
  • USPTO Jun. 14, 2013 Notice of Allowance from U.S. Appl. No. 12/941,881.
  • USPTO Jun. 20, 2013 Non-Final Office Action from U.S. Appl. No. 13/041,148.
  • USPTO Aug. 26, 2013 Response to Jun. 20, 2013 Non-Final Office Action from U.S. Appl. No. 13/041,148.
  • USPTO Jul. 5, 2013 Non-Final Office Action from U.S. Appl. No. 13/152,200.
  • USPTO Jun. 13, 2013 RCE Response to Mar. 26, 2013 Final Office Action from U.S. Appl. No. 12/938,237.
  • USPTO Jul. 19, 2013 Non-Final Office Action from U.S. Appl. No. 12/938,237.
  • USPTO Nov. 11, 2013 Final Office Action from U.S. Appl. No. 13/041,148.
  • USPTO Jan. 14, 2014 Notice of Allowance from U.S. Appl. No. 13/152,200.
  • USPTO Oct. 30, 2013 Notice of Allowance from U.S. Appl. No. 13/077,828.
Patent History
Patent number: 8830875
Type: Grant
Filed: Jun 15, 2011
Date of Patent: Sep 9, 2014
Assignee: Cisco Technology, Inc. (San Jose, CA)
Inventor: Shekher Bulusu (Fremont, CA)
Primary Examiner: Andrew Chriss
Assistant Examiner: John Lequang
Application Number: 13/160,957