SYSTEMS AND METHODS FOR LOAD BALANCING MULTICAST TRAFFIC
A computer-implemented method for load balancing multicast traffic may include (1) identifying a plurality of switches that include at least a first switch that is connected to a second switch by a first path and a second path, (2) calculating a plurality of multicast distribution trees for distributing multicast traffic among the plurality of switches that includes (i) a first tree that includes the first path and whose root is different than the root of a second tree and (ii) the second tree that includes the second path, (3) receiving a plurality of multicast packets ingress to the plurality of switches at the first switch, and (4) using at least two of the plurality of multicast distribution trees to transmit the plurality of multicast packets from the first switch to the second switch. Various other methods, systems, and computer-readable media are also disclosed.
This application is a continuation of U.S. application Ser. No. 14/231,225, filed 31 Mar. 2014, the disclosure of which is incorporated, by this reference, in its entirety.
BACKGROUND
Traditional techniques for distributing multicast traffic within layer-2 networks have generally relied on the Spanning Tree Protocol to prevent multicast packets from looping in and flooding the layer-2 networks. Unfortunately, while successfully eliminating looping, the Spanning Tree Protocol may constrain multicast traffic to a single set of data links, which may cause redundant data links to go unused.
Some techniques for distributing multicast traffic have attempted to use more than one set of data links to distribute multicast traffic in layer-2 networks. For example, at least one technique may allocate, to each switch in a layer-2 network, a set of data links that includes the shortest paths from that switch to the other switches in the layer-2 network and that may be used to distribute multicast traffic ingress to the layer-2 network at that switch to the other switches within the layer-2 network. Because each switch is allocated its own set of data links, multicast traffic ingress to the layer-2 network at two different switches may be distributed using different sets of data links. Unfortunately, data-link utilization may still be inefficient because only a single path may be used for distributing traffic between any two switches. Thus, the instant disclosure identifies and addresses a need for improved systems and methods for load balancing multicast traffic.
SUMMARY
As will be described in greater detail below, the instant disclosure generally relates to systems and methods for load balancing multicast traffic across the data links that interconnect a network of switches, such as a network of switches that make up a virtual-chassis fabric. In one example, a computer-implemented method for load balancing multicast traffic may include (1) identifying a plurality of switches that include at least a first switch that is connected to a second switch by a first path and a second path, (2) calculating a plurality of multicast distribution trees for distributing multicast traffic among the plurality of switches that include (i) a first tree that includes the first path and whose root is different than the root of a second tree and (ii) the second tree that includes the second path, (3) receiving a plurality of multicast packets ingress to the plurality of switches at the first switch, and (4) using at least two of the plurality of multicast distribution trees to transmit the plurality of multicast packets from the first switch to the second switch.
Similarly, a system incorporating the above-described method may include (1) an identifying module that identifies a plurality of switches that include at least a first switch that is connected to a second switch by a first path and a second path, (2) a calculating module that calculates a plurality of multicast distribution trees for distributing multicast traffic among the plurality of switches that include at least (i) a first tree that includes the first path and whose root is different than the root of a second tree and (ii) the second tree that includes the second path, (3) a receiving module that receives a plurality of multicast packets ingress to the plurality of switches at the first switch, (4) a transmitting module that uses at least two of the plurality of multicast distribution trees to transmit the plurality of multicast packets from the first switch to the second switch, and (5) at least one physical processor that executes the identifying module, the calculating module, the receiving module, and the transmitting module.
A corresponding non-transitory computer-readable medium may include one or more computer-readable instructions that may, when executed by at least one processor of a network device, cause the network device to (1) identify a plurality of switches that include at least a first switch that is connected to a second switch by a first path and a second path, (2) calculate a plurality of multicast distribution trees for distributing multicast traffic among the plurality of switches that include (i) a first tree that includes the first path and whose root is different than the root of a second tree and (ii) the second tree that includes the second path, (3) receive a plurality of multicast packets ingress to the plurality of switches at the first switch, and (4) use at least two of the plurality of multicast distribution trees to transmit the plurality of multicast packets from the first switch to the second switch.
Features from any of the above-mentioned embodiments may be used in combination with one another in accordance with the general principles described herein. These and other embodiments, features, and advantages will be more fully understood upon reading the following detailed description in conjunction with the accompanying drawings and claims.
The accompanying drawings illustrate a number of exemplary embodiments and are a part of the specification. Together with the following description, these drawings demonstrate and explain various principles of the instant disclosure.
Throughout the drawings, identical reference characters and descriptions indicate similar, but not necessarily identical, elements. While the exemplary embodiments described herein are susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and will be described in detail herein. However, the exemplary embodiments described herein are not intended to be limited to the particular forms disclosed. Rather, the instant disclosure covers all modifications, equivalents, and alternatives falling within the scope of the appended claims.
DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS
The present disclosure is generally directed to systems and methods for using multicast distribution trees to load balance multicast traffic across the data links that interconnect a network of switches, such as a network of switches that make up a virtual-chassis fabric. Embodiments of the instant disclosure may compute two or more multicast distribution trees that are rooted on particular switches in a network and may balance multicast traffic ingress to the network at any one switch in the network across two or more of these multicast distribution trees.
As will be explained in greater detail below, by load balancing multicast traffic across multiple multicast distribution trees, embodiments of the instant disclosure may substantially improve data-link utilization. For example, by generating multicast distribution trees that may include paths to all switches in a network, embodiments of the instant disclosure may forward multicast packets ingress on a switch from outside of the network to any destination switch in the network along any of the multicast distribution trees. Moreover, by assigning multicast-packet flows (e.g., the multicast-packet flow of a particular Virtual Local Area Network (VLAN)) to multiple multicast distribution trees, embodiments of the instant disclosure may distribute multicast packets ingress on any one switch in a network using different multicast distribution trees and different data links and/or may distribute multicast packets ingress on different switches in the network using the same multicast distribution tree and the same data links. Embodiments of the instant disclosure may also provide various other advantages and features, as discussed in greater detail below.
The following will provide, with reference to the accompanying drawings, detailed descriptions of exemplary systems and methods for load balancing multicast traffic.
In addition, and as will be described in greater detail below, exemplary system 100 may include a receiving module 108 that receives a plurality of multicast packets ingress to the plurality of switches at the first switch. Exemplary system 100 may also include a transmitting module 110 that uses at least two of the plurality of multicast distribution trees to transmit the plurality of multicast packets from the first switch to the second switch. Although illustrated as separate elements, one or more of modules 102 may represent portions of a single module or application.
In certain embodiments, one or more of modules 102 may represent one or more software applications or programs that, when executed by a computing device, may cause the computing device to perform one or more tasks.
Exemplary system 100 may also include one or more databases, such as database 120.
Database 120 may represent portions of a single database or computing device or a plurality of databases or computing devices. For example, database 120 may represent a portion of switches 202-212 and/or exemplary computing system 800 described below.
Exemplary system 100 may be implemented in a variety of ways.
In one embodiment, one or more of modules 102 may, when executed by at least one processor of one or more of switches 202-212, enable switches 202-212 to load balance multicast traffic across the data links that interconnect switches 202-212.
Switches 202-212 generally represent any intermediary computing device that connects network segments or network devices and/or facilitates communication between two or more other computing devices within a computing environment.
Computing devices 214(1)-(N), 216(1)-(N), 218(1)-(N), and 220(1)-(N) generally represent any type or form of computing device capable of reading computer-executable instructions. Examples of computing devices 214(1)-(N), 216(1)-(N), 218(1)-(N), and 220(1)-(N) include, without limitation, laptops, tablets, desktops, servers, cellular phones, Personal Digital Assistants (PDAs), multimedia players, embedded systems, network devices, application servers, web servers, storage servers, deduplication servers, database servers, exemplary computing system 800 described below, and/or any other suitable computing device.
At step 310, one or more of the systems described herein may identify a plurality of switches that include at least a first switch that is connected to a second switch by a first path and a second path.
As used herein, the term “switch” generally refers to any device, system, or application capable of routing or forwarding information, which may be in the form of packets, among devices of a computing network. Two switches in a network may be connected via one or more data links and/or paths. As used herein, the phrase “data link” may refer to any physical or logical connection between two devices in a network. For example, the phrase “data link” may refer to the physical connection between two switches whose physical ports are connected via a physical wire or cable. Examples of data links include data links 220-234.
In some situations, multiple switches may be combined to form a virtual-chassis fabric (e.g., a virtual switch fabric) that may behave as a single logical switch. As used herein, the phrase “virtual-chassis fabric” generally refers to a collection of interconnected switches that may function and/or be managed as a single, logical device. In general, the switches within a virtual-chassis fabric may interconnect incoming data from ingress ports of the virtual-chassis fabric to egress ports of the virtual-chassis fabric. In some instances, a virtual-chassis fabric may facilitate a high level of scalability by providing any-to-any connectivity among nodes (e.g., switches) within the virtual-chassis fabric. Moreover, a virtual-chassis fabric may facilitate high availability by providing redundant switches and/or redundant data links. In some examples, the topology of the switches that make up a virtual-chassis fabric may not be restricted.
At step 320, one or more of the systems described herein may calculate a plurality of multicast distribution trees for distributing multicast traffic among the plurality of switches identified as part of step 310 that includes (i) a first tree that includes the first path and whose root is different than the root of a second tree and (ii) the second tree that includes the second path. For example, at step 320, calculating module 106 may, as part of switch 206, calculate a plurality of multicast distribution trees (e.g., multicast distribution trees 400 and 600) for distributing multicast traffic among switches 202-212.
The systems described herein may perform step 320 in a variety of ways. In general, calculating module 106 may calculate multicast distribution trees for load balancing multicast traffic amongst a network of switches by (1) selecting two or more of the switches to be the roots of the multicast distribution trees and (2) developing the multicast distribution trees rooted on each selected switch.
Calculating module 106 may select which switches within a network should be roots in a variety of ways. In one example, calculating module 106 may simply select each switch within a network to be a root of a multicast distribution tree. For example, calculating module 106 may select each of switches 202-212 to be the root of its own multicast distribution tree.
Additionally or alternatively, calculating module 106 may select which switches within a network should be roots based on the physical topology of the network (e.g., the physical structure and/or physical interconnection of the switches within the network). For example, in a two-tier network in which some switches act as hubs, calculating module 106 may select the switches that act as hubs to be roots.
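As a minimal sketch of these two root-selection strategies, the following Python fragment selects either every switch or only the hub switches as roots; the network representation, switch names, and the use of link count to approximate a hub are assumptions made for illustration only.

    # Hypothetical network: each switch maps to the set of switches it is
    # directly linked to; the names and links are illustrative only.
    network = {
        "switch_202": {"switch_210", "switch_212"},
        "switch_204": {"switch_210", "switch_212"},
        "switch_206": {"switch_210", "switch_212"},
        "switch_208": {"switch_210", "switch_212"},
        "switch_210": {"switch_202", "switch_204", "switch_206", "switch_208", "switch_212"},
        "switch_212": {"switch_202", "switch_204", "switch_206", "switch_208", "switch_210"},
    }

    # Strategy 1: select every switch to be the root of its own tree.
    all_switch_roots = list(network)

    # Strategy 2: select only the hub switches, approximated here as the
    # switches with the largest number of directly attached data links.
    max_degree = max(len(links) for links in network.values())
    hub_roots = sorted(s for s, links in network.items() if len(links) == max_degree)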
In at least one example, calculating module 106 may select roots based on input from an administrator of a network of switches. For example, calculating module 106 may allow an administrator to select which switches in a network should be roots. Additionally or alternatively, calculating module 106 may enable an administrator to indicate criteria for selecting roots.
Upon selecting switches to be roots of multicast distribution trees, calculating module 106 may calculate the multicast distribution trees using any suitable multicast routing algorithm or heuristic. In at least one example, calculating module 106 may calculate a multicast distribution tree rooted on each selected switch using a shortest path algorithm, such as Dijkstra's algorithm. In some examples, calculating module 106 may calculate multicast distribution trees such that path bandwidth and/or number of hops are taken into consideration. In general, calculating module 106 may calculate distribution trees such that each distribution tree includes a path from the root of the distribution tree to each of the plurality of switches.
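A minimal sketch of developing one multicast distribution tree per selected root with a shortest-path approach such as Dijkstra's algorithm might look as follows; the graph, unit link costs, and function name are illustrative assumptions rather than the disclosed implementation.

    import heapq

    def shortest_path_tree(graph, root):
        # Compute a shortest-path tree rooted on `root`. `graph` maps each
        # switch to a dict of {neighbor: link_cost}. The result maps every
        # reachable switch to its parent in the tree (the root maps to None).
        distance = {root: 0}
        parent = {root: None}
        heap = [(0, root)]
        done = set()
        while heap:
            cost, switch = heapq.heappop(heap)
            if switch in done:
                continue
            done.add(switch)
            for neighbor, link_cost in graph[switch].items():
                candidate = cost + link_cost
                if neighbor not in distance or candidate < distance[neighbor]:
                    distance[neighbor] = candidate
                    parent[neighbor] = switch
                    heapq.heappush(heap, (candidate, neighbor))
        return parent

    # Illustrative topology with unit link costs (hypothetical names).
    graph = {
        "switch_202": {"switch_210": 1, "switch_212": 1},
        "switch_204": {"switch_210": 1, "switch_212": 1},
        "switch_210": {"switch_202": 1, "switch_204": 1, "switch_212": 1},
        "switch_212": {"switch_202": 1, "switch_204": 1, "switch_210": 1},
    }
    # One tree per selected root; here every switch is selected as a root.
    trees = {root: shortest_path_tree(graph, root) for root in graph}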
In some examples, calculating module 106 may store each calculated multicast distribution tree at each switch within a network. For example, calculating module 106 may store each of multicast distribution trees 400 and 600 at each of switches 202-212. In some examples, calculating module 106 may store a multicast distribution tree at a switch as a list of the interfaces of the switch that connect the switch to the paths represented by the multicast distribution tree. Calculating module 106 may also assign an identifier to the list of interfaces such that the switch can look up the multicast distribution tree when the switch receives multicast packets that should be forwarded across the multicast distribution tree.
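One plausible representation of this per-switch state, assuming each tree identifier simply maps to the local interfaces that attach the switch to that tree's paths (the interface names and tree identifiers below are hypothetical), is sketched here.

    # Hypothetical forwarding state stored at one switch: each multicast
    # distribution tree identifier maps to the list of local interfaces
    # that connect this switch to the paths of that tree.
    tree_interfaces = {
        "tree_400": ["et-0/0/1", "et-0/0/2"],
        "tree_600": ["et-0/0/3"],
    }

    def lookup_tree(tree_id):
        # Return the interfaces over which multicast packets assigned to
        # `tree_id` should be forwarded from this switch.
        return tree_interfaces.get(tree_id, [])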
Calculating module 106 may calculate multicast distribution trees as part of any switch in a network. In one example, calculating module 106 may calculate a multicast distribution tree as part of the switch that is the root of the multicast distribution tree. Additionally or alternatively, calculating module 106 may calculate a multicast distribution tree as part of a switch that is not the root of the multicast distribution tree.
At step 330, one or more of the systems described herein may receive a plurality of multicast packets ingress to the plurality of switches at the first switch. For example, receiving module 108 may, as part of switch 206, receive a plurality of multicast packets ingress to switches 202-212 at switch 206.
As used herein, the phrase “multicast packet” may refer to any layer-2 frame and/or layer-3 packet that may be distributed using a flooding algorithm, a spanning-tree algorithm, a reverse-path-forwarding algorithm, a reverse-path-broadcasting algorithm, a truncated-reverse-path-broadcasting algorithm, a reverse-path-multicasting algorithm, a core-based tree algorithm, or any other multicast forwarding algorithm. Examples of multicast packets may include, without limitation, broadcast packets and/or unicast packets with unknown destination addresses. In some examples, the phrase “multicast packet” may refer to a multicast packet of a Virtual Local Area Network (VLAN). Additionally or alternatively, the phrase “multicast packet” may refer to a multicast packet of an Internet Protocol multicast group.
At step 340, one or more of the systems described herein may use at least two of the plurality of multicast distribution trees to transmit the plurality of multicast packets from the first switch to the second switch. For example, transmitting module 110 may, as part of switch 206, use at least two of the multicast distribution trees calculated at step 320 to transmit the multicast packets received at step 330.
The systems described herein may perform step 340 in a variety of ways. In general, transmitting module 110 may transmit multicast packets using two or more of the multicast distribution trees and a suitable multicast algorithm or protocol (e.g., a flooding algorithm, a spanning-tree algorithm, a reverse-path-forwarding algorithm, a reverse-path-broadcasting algorithm, a truncated-reverse-path-broadcasting algorithm, a reverse-path-multicasting algorithm, a core-based tree algorithm, or any other multicast forwarding algorithm). In the event that the multicast packets are IP multicast packets, transmitting module 110 may use two or more of the multicast distribution trees and a suitable IP multicasting protocol (such as, e.g., Protocol Independent Multicast (PIM), Internet Group Management Protocol (IGMP), or any other protocol for use in multicast communication) to transmit the IP multicast packets from the first switch to the second switch.
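As one simple illustration of forwarding along such a tree, a switch could flood a multicast packet out of every interface that belongs to the selected tree except the interface on which the packet arrived; the sketch below assumes the hypothetical per-tree interface state shown earlier and is not intended to restate the disclosed forwarding algorithm.

    def forward_along_tree(tree_interfaces, tree_id, ingress_interface):
        # Flood a multicast packet out of every interface that belongs to
        # the selected tree, excluding the interface it arrived on, so the
        # packet does not loop back toward its sender.
        return [iface for iface in tree_interfaces.get(tree_id, [])
                if iface != ingress_interface]

    # Example: a packet assigned to "tree_400" arrives on "et-0/0/1" and is
    # forwarded out of the remaining tree interfaces (hypothetical names).
    state = {"tree_400": ["et-0/0/1", "et-0/0/2"], "tree_600": ["et-0/0/3"]}
    print(forward_along_tree(state, "tree_400", "et-0/0/1"))  # ['et-0/0/2']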
In some examples, transmitting module 110 may use at least two multicast distribution trees calculated as part of step 320 to transmit multicast packets ingress at any one switch in a network to other switches in the network. Using multicast distribution trees 400 and 600 as an example, transmitting module 110 may transmit one portion of the multicast packets ingress at switch 206 using multicast distribution tree 400 and another portion using multicast distribution tree 600.
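As a sketch of dividing the ingress traffic into portions, one possible (assumed) policy is to assign packets to the available trees in round-robin fashion, so that each tree carries one portion of the traffic; the packet labels and tree identifiers below are hypothetical.

    def split_across_trees(packets, tree_ids):
        # Assign each ingress multicast packet to one of the available
        # multicast distribution trees in round-robin fashion, producing
        # one portion of the traffic per tree.
        portions = {tree_id: [] for tree_id in tree_ids}
        for index, packet in enumerate(packets):
            portions[tree_ids[index % len(tree_ids)]].append(packet)
        return portions

    portions = split_across_trees(["pkt1", "pkt2", "pkt3", "pkt4"],
                                  ["tree_400", "tree_600"])
    # portions == {'tree_400': ['pkt1', 'pkt3'], 'tree_600': ['pkt2', 'pkt4']}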
In some examples, transmitting module 110 may assign the multicast traffic of a multicast group (e.g., a VLAN or an Internet Protocol multicast group) to one or more multicast distribution trees and may transmit multicast packets of the multicast group using the assigned multicast distribution trees. Additionally or alternatively, transmitting module 110 may use different multicast distribution trees to transmit the multicast traffic of two different multicast groups. For example, transmitting module 110 may use one multicast distribution tree to transmit the multicast packets of one of the multicast groups and a different multicast distribution tree to transmit the multicast packets of the other multicast group. Using multicast distribution trees 400 and 600 as an example, transmitting module 110 may use multicast distribution tree 400 to transmit the multicast packets of a first VLAN and multicast distribution tree 600 to transmit the multicast packets of a second VLAN.
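One possible (assumed) way to map each multicast group to a tree is to take a numeric group identifier, such as a VLAN ID, modulo the number of available trees, as in the following sketch; this hash-style policy is an illustrative assumption, not the disclosed assignment method.

    def tree_for_group(vlan_id, tree_ids):
        # Deterministically map a multicast group (here identified by a
        # numeric VLAN ID) to one of the available multicast distribution
        # trees, so different groups can use different trees and data links.
        return tree_ids[vlan_id % len(tree_ids)]

    # Example with hypothetical VLAN IDs and tree identifiers.
    trees = ["tree_400", "tree_600"]
    print(tree_for_group(10, trees))  # tree_400
    print(tree_for_group(11, trees))  # tree_600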
As explained above, by load balancing multicast traffic across multiple multicast distribution trees, embodiments of the instant disclosure may substantially improve data-link utilization. For example, by generating multicast distribution trees that may include paths to all switches in a network, embodiments of the instant disclosure may forward multicast packets ingress on a switch from outside of the network to any destination switch in the network along any of the multicast distribution trees. Moreover, by assigning multicast-packet flows (e.g., the multicast-packet flow of a particular Virtual Local Area Network (VLAN)) to multiple multicast distribution trees, embodiments of the instant disclosure may distribute multicast packets ingress on any one switch in a network using different multicast distribution trees and different data links and/or may distribute multicast packets ingress on different switches in the network using the same multicast distribution tree and the same data links.
Computing system 800 broadly represents any type or form of electrical load, including a single or multi-processor computing device or system capable of executing computer-readable instructions. Examples of computing system 800 include, without limitation, workstations, laptops, client-side terminals, servers, distributed computing systems, mobile devices, network switches, network routers (e.g., backbone routers, edge routers, core routers, mobile service routers, broadband routers, etc.), network appliances (e.g., network security appliances, network control appliances, network timing appliances, SSL VPN (Secure Sockets Layer Virtual Private Network) appliances, etc.), network controllers, gateways (e.g., service gateways, mobile packet gateways, multi-access gateways, security gateways, etc.), and/or any other type or form of computing system or device.
Computing system 800 may be programmed, configured, and/or otherwise designed to comply with one or more networking protocols. According to certain embodiments, computing system 800 may be designed to work with protocols of one or more layers of the Open Systems Interconnection (OSI) reference model, such as a physical layer protocol, a link layer protocol, a network layer protocol, a transport layer protocol, a session layer protocol, a presentation layer protocol, and/or an application layer protocol. For example, computing system 800 may include a network device configured according to a Universal Serial Bus (USB) protocol, an Institute of Electrical and Electronics Engineers (IEEE) 1394 protocol, an Ethernet protocol, a T1 protocol, a Synchronous Optical Networking (SONET) protocol, a Synchronous Digital Hierarchy (SDH) protocol, an Integrated Services Digital Network (ISDN) protocol, an Asynchronous Transfer Mode (ATM) protocol, a Point-to-Point Protocol (PPP), a Point-to-Point Protocol over Ethernet (PPPoE), a Point-to-Point Protocol over ATM (PPPoA), a Bluetooth protocol, an IEEE 802.XX protocol, a frame relay protocol, a token ring protocol, a spanning tree protocol, and/or any other suitable protocol.
Computing system 800 may include various network and/or computing components. For example, computing system 800 may include at least one processor 814 and a system memory 816. Processor 814 generally represents any type or form of processing unit capable of processing data or interpreting and executing instructions. Processor 814 may represent an application-specific integrated circuit (ASIC), a system on a chip (e.g., a network processor), a hardware accelerator, a general purpose processor, and/or any other suitable processing element.
Processor 814 may process data according to one or more of the networking protocols discussed above. For example, processor 814 may execute or implement a portion of a protocol stack, may process packets, may perform memory operations (e.g., queuing packets for later processing), may execute end-user applications, and/or may perform any other processing tasks.
System memory 816 generally represents any type or form of volatile or non-volatile storage device or medium capable of storing data and/or other computer-readable instructions. Examples of system memory 816 include, without limitation, Random Access Memory (RAM), Read Only Memory (ROM), flash memory, or any other suitable memory device. Although not required, in certain embodiments computing system 800 may include both a volatile memory unit (such as, for example, system memory 816) and a non-volatile storage device (such as, for example, primary storage device 832, as described in detail below). System memory 816 may be implemented as shared memory and/or distributed memory in a network device. Furthermore, system memory 816 may store packets and/or other information used in networking operations. In one example, one or more of modules 102 may be loaded into system memory 816.
In certain embodiments, exemplary computing system 800 may also include one or more components or elements in addition to processor 814 and system memory 816. For example, as described in greater detail below, computing system 800 may include a memory controller 818, an Input/Output (I/O) controller 820, and a communication interface 822, each of which may be interconnected via a communication infrastructure 812.
Memory controller 818 generally represents any type or form of device capable of handling memory or data or controlling communication between one or more components of computing system 800. For example, in certain embodiments memory controller 818 may control communication between processor 814, system memory 816, and I/O controller 820 via communication infrastructure 812. In some embodiments, memory controller 818 may include a Direct Memory Access (DMA) unit that may transfer data (e.g., packets) to or from a link adapter.
I/O controller 820 generally represents any type or form of device or module capable of coordinating and/or controlling the input and output functions of a computing device. For example, in certain embodiments I/O controller 820 may control or facilitate transfer of data between one or more elements of computing system 800, such as processor 814, system memory 816, communication interface 822, and storage interface 830.
Communication interface 822 broadly represents any type or form of communication device or adapter capable of facilitating communication between exemplary computing system 800 and one or more additional devices. For example, in certain embodiments communication interface 822 may facilitate communication between computing system 800 and a private or public network including additional computing systems. Examples of communication interface 822 include, without limitation, a link adapter, a wired network interface (such as a network interface card), a wireless network interface (such as a wireless network interface card), and any other suitable interface. In at least one embodiment, communication interface 822 may provide a direct connection to a remote server via a direct link to a network, such as the Internet. Communication interface 822 may also indirectly provide such a connection through, for example, a local area network (such as an Ethernet network), a personal area network, a wide area network, a private network (e.g., a virtual private network), a telephone or cable network, a cellular telephone connection, a satellite data connection, or any other suitable connection.
In certain embodiments, communication interface 822 may also represent a host adapter configured to facilitate communication between computing system 800 and one or more additional network or storage devices via an external bus or communications channel. Examples of host adapters include, without limitation, Small Computer System Interface (SCSI) host adapters, Universal Serial Bus (USB) host adapters, IEEE 1394 host adapters, Advanced Technology Attachment (ATA), Parallel ATA (PATA), Serial ATA (SATA), and External SATA (eSATA) host adapters, Fibre Channel interface adapters, Ethernet adapters, or the like. Communication interface 822 may also enable computing system 800 to engage in distributed or remote computing. For example, communication interface 822 may receive instructions from a remote device or send instructions to a remote device for execution.
Exemplary computing system 800 may also include a primary storage device 832 and a backup storage device 834 coupled to communication infrastructure 812 via a storage interface 830. Storage devices 832 and 834 generally represent any type or form of storage device or medium capable of storing data and/or other computer-readable instructions.
In certain embodiments, storage devices 832 and 834 may be configured to read from and/or write to a removable storage unit configured to store computer software, data, or other computer-readable information. Examples of suitable removable storage units include, without limitation, a floppy disk, a magnetic tape, an optical disk, a flash memory device, or the like. Storage devices 832 and 834 may also include other similar structures or devices for allowing computer software, data, or other computer-readable instructions to be loaded into computing system 800. For example, storage devices 832 and 834 may be configured to read and write software, data, or other computer-readable information. Storage devices 832 and 834 may be a part of computing system 800 or may be separate devices accessed through other interface systems.
Many other devices or subsystems may be connected to computing system 800. Conversely, all of the components and devices described herein need not be present to practice the embodiments described and/or illustrated herein.
While the foregoing disclosure sets forth various embodiments using specific block diagrams, flowcharts, and examples, each block diagram component, flowchart step, operation, and/or component described and/or illustrated herein may be implemented, individually and/or collectively, using a wide range of hardware, software, or firmware (or any combination thereof) configurations. In addition, any disclosure of components contained within other components should be considered exemplary in nature since many other architectures can be implemented to achieve the same functionality.
In some examples, all or a portion of system 100 may represent portions of a cloud-computing or network-based environment.
The process parameters and sequence of the steps described and/or illustrated herein are given by way of example only and can be varied as desired. For example, while the steps illustrated and/or described herein may be shown or discussed in a particular order, these steps do not necessarily need to be performed in the order illustrated or discussed. The various exemplary methods described and/or illustrated herein may also omit one or more of the steps described or illustrated herein or include additional steps in addition to those disclosed.
The preceding description has been provided to enable others skilled in the art to best utilize various aspects of the exemplary embodiments disclosed herein. This exemplary description is not intended to be exhaustive or to be limited to any precise form disclosed. Many modifications and variations are possible without departing from the spirit and scope of the instant disclosure. The embodiments disclosed herein should be considered in all respects illustrative and not restrictive. Reference should be made to the appended claims and their equivalents in determining the scope of the instant disclosure.
Unless otherwise noted, the terms “connected to” and “coupled to” (and their derivatives), as used in the specification and claims, are to be construed as permitting both direct and indirect (i.e., via other elements or components) connection. In addition, the terms “a” or “an,” as used in the specification and claims, are to be construed as meaning “at least one of.” Finally, for ease of use, the terms “including” and “having” (and their derivatives), as used in the specification and claims, are interchangeable with and have the same meaning as the word “comprising.”
Claims
1. A computer-implemented method comprising:
- identifying a single logical switch that comprises a plurality of physical switches, wherein: the plurality of physical switches comprises at least a first switch and a second switch; an ingress port of the logical switch at the first switch is connected to an egress port of the logical switch at the second switch by a first path that extends from the first switch to the second switch; and the ingress port of the logical switch at the first switch is also connected to the egress port of the logical switch at the second switch by a second path that extends from the first switch to the second switch;
- calculating a plurality of multicast distribution trees for distributing multicast traffic from ingress ports of the logical switch to egress ports of the logical switch by: selecting a first root switch from the plurality of physical switches; selecting a second root switch from the plurality of physical switches; generating a first bi-directional tree that is rooted on the first root switch and that includes the first path; and generating a second bi-directional tree that is rooted on the second root switch and that includes the second path, wherein the first root switch is different than the second root switch;
- receiving a plurality of multicast packets at the ingress port of the logical switch at the first switch;
- using, at each physical switch along the first path, the first bi-directional tree to transmit a first portion of the plurality of multicast packets to the egress port of the logical switch at the second switch; and
- using, at each physical switch along the second path, the second bi-directional tree to transmit a second portion of the plurality of multicast packets to the egress port of the logical switch at the second switch.
2. The method of claim 1, wherein:
- the logical switch comprises a virtual-chassis fabric; and
- using the first bi-directional tree and the second bi-directional tree comprises load balancing the plurality of multicast packets across the first path and the second path.
3. The method of claim 1, wherein the plurality of multicast distribution trees comprises a plurality of bi-directional trees.
4. The method of claim 3, wherein calculating the plurality of multicast distribution trees comprises calculating, for each physical switch in the plurality of physical switches, a bi-directional tree rooted on the physical switch.
5. The method of claim 1, wherein the first root switch and the second root switch are selected based at least in part on a physical topology of the plurality of physical switches.
6. The method of claim 1, wherein the first root switch is selected based at least in part on the first root switch being a hub.
7. The method of claim 1, wherein the first root switch and the second root switch are selected based at least in part on input from an administrator of the plurality of physical switches.
8. The method of claim 1, wherein:
- the first bi-directional tree is generated by the first root switch; and
- the second bi-directional tree is generated by the second root switch.
9. The method of claim 1, wherein the first bi-directional tree is generated by one of the plurality of physical switches that is not the first root switch.
10. The method of claim 1, wherein each of the plurality of multicast distribution trees comprises a path from the root of the multicast distribution tree to each of the plurality of physical switches.
11. The method of claim 1, wherein each of the plurality of multicast packets is of a single multicast group.
12. The method of claim 11, wherein the multicast group comprises a virtual local area network.
13. The method of claim 11, wherein the multicast group comprises an internet protocol multicast group.
14. The method of claim 1, wherein:
- the plurality of multicast packets comprises: a plurality of multicast packets of a first multicast group; and a plurality of multicast packets of a second multicast group; and
- using the first bi-directional tree and the second bi-directional tree comprises: using the first bi-directional tree rather than the second bi-directional tree to transmit the plurality of multicast packets of the first multicast group from the ingress port of the logical switch at the first switch to the egress port of the logical switch at the second switch; and using the second bi-directional tree rather than the first bi-directional tree to transmit the plurality of multicast packets of the second multicast group from the ingress port of the logical switch at the first switch to the egress port of the logical switch at the second switch.
15. The method of claim 14, further comprising:
- receiving an additional plurality of multicast packets of the first multicast group and an additional plurality of multicast packets of the second multicast group at an additional ingress port of the logical switch at a third physical switch in the plurality of physical switches;
- using the first bi-directional tree rather than the second bi-directional tree to transmit the additional plurality of multicast packets of the first multicast group from the additional ingress port of the logical switch at the third physical switch to the egress port of the logical switch at the second switch; and
- using the second bi-directional tree rather than the first bi-directional tree to transmit the additional plurality of multicast packets of the second multicast group from the additional ingress port of the logical switch at the third physical switch to the egress port of the logical switch at the second switch.
16. A system comprising:
- a memory that stores: an identifying module that identifies a single logical switch that comprises a plurality of physical switches, wherein: the plurality of physical switches comprises at least a first switch and a second switch; an ingress port of the logical switch at the first switch is connected to an egress port of the logical switch at the second switch by a first path that extends from the first switch to the second switch; and the ingress port of the logical switch at the first switch is also connected to the egress port of the logical switch at the second switch by a second path that extends from the first switch to the second switch; a calculating module that calculates a plurality of multicast distribution trees for distributing multicast traffic from ingress ports of the logical switch to egress ports of the logical switch by: selecting a first root switch from the plurality of physical switches; selecting a second root switch from the plurality of physical switches; generating a first bi-directional tree that is rooted on the first root switch and that includes the first path; and generating a second bi-directional tree that is rooted on the second root switch and that includes the second path, wherein the first root switch is different than the second root switch; a receiving module that receives a plurality of multicast packets at the ingress port of the logical switch at the first switch; a transmitting module that: uses, at each physical switch along the first path, the first bi-directional tree to transmit a first portion of the plurality of multicast packets to the egress port of the logical switch at the second switch; and uses, at each physical switch along the second path, the second bi-directional tree to transmit a second portion of the plurality of multicast packets to the egress port of the logical switch at the second switch; and
- at least one physical processor coupled to the memory that executes the identifying module, the calculating module, the receiving module, and the transmitting module.
17. The system of claim 16, wherein:
- the logical switch comprises a virtual-chassis fabric; and
- the transmitting module uses the first bi-directional tree and the second bi-directional tree to load balance the plurality of multicast packets across the first path and the second path.
18. The system of claim 16, wherein the calculating module calculates the plurality of multicast distribution trees by calculating, for each physical switch in the plurality of physical switches, a bi-directional tree rooted on the physical switch.
19. The system of claim 16, wherein:
- the first bi-directional tree is generated at the first root switch; and
- the second bi-directional tree is generated at the second root switch.
20. A non-transitory computer-readable medium comprising one or more computer-executable instructions that, when executed by at least one processor of at least one network device, cause the network device to:
- identify a single logical switch that comprises a plurality of physical switches, wherein: the plurality of physical switches comprises at least a first switch and a second switch; an ingress port of the logical switch at the first switch is connected to an egress port of the logical switch at the second switch by a first path that extends from the first switch to the second switch; and the ingress port of the logical switch at the first switch is also connected to the egress port of the logical switch at the second switch by a second path that extends from the first switch to the second switch;
- calculate a plurality of multicast distribution trees for distributing multicast traffic from ingress ports of the logical switch to egress ports of the logical switch by: selecting a first root switch from the plurality of physical switches; selecting a second root switch from the plurality of physical switches; generating a first bi-directional tree that is rooted on the first root switch and that includes the first path; and generating a second bi-directional tree that is rooted on the second root switch and that includes the second path, wherein the first root switch is different than the second root switch;
- receive a plurality of multicast packets at the ingress port of the logical switch at the first switch;
- use, at each physical switch along the first path, the first bi-directional tree to transmit a first portion of the plurality of multicast packets to the egress port of the logical switch at the second switch; and
- use, at each physical switch along the second path, the second bi-directional tree to transmit a second portion of the plurality of multicast packets to the egress port of the logical switch at the second switch.
Type: Application
Filed: Jun 30, 2016
Publication Date: Oct 27, 2016
Inventors: Qiang Wu (Sunnyvale, CA), Xichun Hong (Fremont, CA)
Application Number: 15/199,025