Method for providing efficient multipoint network services
A method, system and device for enabling efficient bandwidth utilization of a multipoint network service over an arbitrary topology network that includes a plurality of network elements (NEs). In a preferred embodiment, the method comprises the steps of setting up a full connectivity between the NEs and providing the multipoint network service using the full connectivity, whereby data packets of the multipoint network service are transmitted from a source NE to at least one edge NE through at least one intermediate NE, and whereby data packets that need to be flooded are not replicated at the source NE. The full connectivity includes an arbitrary combination of a first plurality of point-to-multipoint connections between each source NE and each edge NE and a second plurality of point-to-point connections between each source NE and every edge NE.
The present invention relates generally to virtual private networks, and particularly to methods and systems for enabling the operation of virtual private local area network (LAN) services.
BACKGROUND OF THE INVENTION

Ethernet has emerged as the standard of choice for local area networks. With speeds of 10 Mbps, 100 Mbps, 1 Gbps, and 10 Gbps, Ethernet capacity has grown to meet the need for increased network capacities. Consequently, there is considerable interest by operators to offer multipoint network services over public networks. A multipoint network service is a service that allows each of the customer sites to communicate directly and independently with all other customer sites connected to the network through a single interface.
A new network technology that renders multipoint connectivity services has been introduced recently in U.S. patent application Ser. No. 10/265,621 by Casey. This technology is known as “virtual private LAN service” (VPLS). VPLS is a multipoint Layer 2 virtual private network (VPN) technology that allows multiple sites to be connected over an emulated Ethernet broadcast domain that is supported across, for example, multi-protocol label switching (MPLS) networks. That is, VPLS provides connectivity between geographically dispersed customer sites across metropolitan area networks (MANs) or wide area networks (WANs), seemingly as if the customer sites were connected using a LAN.
Abstractly, a VPLS can be defined as a group of virtual switch instances (VSIs) that are interconnected in a full mesh topology to form an emulated LAN. Specifically, a full mesh of connections, i.e., pseudowires (PWs), needs to be established between network elements (NEs) participating in a single VPLS. Concretely, a VSI can be seen as a bridging function, in which a packet is switched based upon its destination address (“DA”), e.g., a media access control (MAC) address, and its membership in a VPLS. If the packet destination address is unknown, or is a broadcast or multicast address, the packet is flooded (i.e., replicated and broadcast) to all connections, i.e., PWs, associated with the VSI. All NEs participating in a single VPLS instance appear to be on the same LAN.
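The bridging decision described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the class and field names are ours, and the PW identifiers are invented for the example.

```python
# Sketch of the VSI bridging function: a packet is switched on its
# destination address (DA); unknown, broadcast, or multicast DAs are
# flooded (replicated) to every PW associated with the VSI.

BROADCAST = "ff:ff:ff:ff:ff:ff"

class Vsi:
    def __init__(self, pws):
        self.pws = set(pws)   # pseudowires of this VPLS instance
        self.fdb = {}         # learned DA -> PW (forwarding database)

    def is_multicast(self, da):
        # Ethernet multicast: least-significant bit of the first octet is set
        return int(da.split(":")[0], 16) & 1 == 1

    def forward(self, packet):
        da = packet["da"]
        if da == BROADCAST or self.is_multicast(da) or da not in self.fdb:
            # flood: one replica per PW of the VSI
            return sorted(self.pws)
        return [self.fdb[da]]

vsi = Vsi(["pw1", "pw2", "pw3"])
vsi.fdb["00:aa:bb:cc:dd:01"] = "pw2"
```

A known unicast DA goes out on its single learned PW; everything else is replicated onto all three PWs, which is exactly the replication cost the invention later avoids.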
Reference is now made to
NEs in VPLS 100 need to support a “split-horizon” scheme in order to prevent loops. Namely, a NE in VPLS 100 is not allowed to forward traffic from one PW to another PW in the same VPLS. Furthermore, each NE in VPLS 100 needs to implement basic bridging capabilities, such as flooding and replicating packets, as well as learning destination addresses and aging out unused ones. A packet received at a source NE (e.g., NE 120) is transmitted to its destination based on the DA designated in the packet. If the source NE (120) does not recognize the destination NE associated with the DA, the packet is flooded to all other NEs in VPLS 100.
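The split-horizon rule can be captured in a few lines. This is a sketch with invented port names, not the patent's code: traffic that arrived on a PW may only leave toward local customer ports, never onto another PW of the same VPLS.

```python
# Split horizon: a packet received from a PW of a VPLS is never
# forwarded onto another PW of the same VPLS, which prevents
# forwarding loops in the full mesh.

def egress_ports(ingress, pw_ports, customer_ports):
    if ingress in pw_ports:
        # split horizon: PW-to-PW forwarding is forbidden
        return list(customer_ports)
    # traffic from a customer port may go to every PW and the other
    # customer ports, but never back out the ingress port
    return [p for p in list(pw_ports) + list(customer_ports) if p != ingress]
```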
A packet to be flooded is replicated in as many copies as the number of PWs 130 connected to a NE, namely, a packet is replicated on all connections that are associated with a particular VSI. The number of VPLS replications increases linearly as the number of connections in the VSI increases. The number of connections in a VSI is equal to the number of NEs minus one. This replication is not as efficient as the mechanism for transmitting flooded traffic with a physical device based on Ethernet switching technology, in which flooded traffic is transmitted only once per physical interface.
The primary shortcoming of VPLS and other network services that emulate multipoint connectivity lies in the broadcast and multicast packet replications that are performed at a source NE. These replications significantly limit the bandwidth utilization when providing such network services. Furthermore, replicating packets and transmitting them at wire speed may not be feasible. Therefore, it would be advantageous to eliminate the shortcomings resulting from broadcast replication.
SUMMARY OF THE INVENTION

According to the present invention there is provided a method for enabling efficient bandwidth utilization of a multipoint network service over an arbitrary topology network that includes a plurality of NEs, comprising the steps of: setting up a full connectivity between the NEs of the arbitrary topology network that participate in the multipoint network service; and, providing the multipoint network service between the NEs using the full connectivity, whereby data packets of the multipoint network service that have to be flooded are transmitted from one NE that serves as a source NE to at least one other NE that serves as an edge NE and may be transmitted through at least one other NE that serves as an intermediate NE, and whereby when data packets need to be flooded they are not replicated at the source NE.
According to the present invention there is provided a network element operative to enable efficient bandwidth utilization of a multipoint network service over an arbitrary topology network, comprising a virtual connection selector (VCS) capable of mapping incoming data packets to connections, and a forwarding unit coupled to the VCS and configured to perform a forwarding function on each data packet, whereby the NE may communicate with other NEs sharing the multipoint network service over the arbitrary topology network.
According to the present invention there is provided a system for efficient bandwidth utilization of a multipoint network service over an arbitrary topology network, the system comprising a plurality of network elements (NEs), each NE operative to provide a forwarding function, and a full connectivity mechanism that facilitates multipoint network services between all the NEs on the arbitrary topology network.
The present invention discloses a method, system and device for providing efficient multipoint network services established over a physical network of an arbitrary physical topology. According to a preferred embodiment of the method, point-to-point (P2P) connections and point-to-multipoint (P2MP) connections are established between network elements providing the same network service. Transferring packets through these connections significantly improves the bandwidth utilization of the underlying network.
Reference is now made to
Network 200 comprises five NEs 220-1 through 220-5 connected to sites 270 of a customer A and to sites 280 of a customer B. Each site is connected to an output port of a respective NE 220 through a customer edge (CE) device (not shown). Each NE 220 is capable of forwarding labeled packets to other NEs through the underlying network. NEs 220-1, 220-3, and 220-5 establish a VPLS between the sites 270 of customer A (hereinafter “VPLS-A”), while NEs 220-1, 220-3 and 220-4 establish a VPLS between the sites 280 of customer B (hereinafter “VPLS-B”). Note that NE 220-2 is connected to a site of customer C and does not participate in either VPLS-A or VPLS-B. Also note that NE 220-1 and NE 220-3 participate in both VPLS-A and VPLS-B. Network 200 may be, but is not limited to, an MPLS network where a mesh of MPLS transport tunnels (not shown) is established between NEs 220.
To allow the operation of a VPLS, a full connectivity needs to be established between NEs 220 of a particular VPLS. In accordance with this invention, this is effected by creating P2P connections and P2MP connections between the NEs 220 participating in the same VPLS. Specifically, a full connectivity is achieved by a full mesh of n*(n−1) P2P unidirectional connections, ‘n’ P2MP connections, or any combination of P2P connections and P2MP connections.
Generally, a VPLS comprises ‘n’ NEs, where from each NE at most ‘n’ connections are created, i.e., at most ‘n−1’ unidirectional P2P connections and at most a single unidirectional P2MP connection. Hence, the number of connections required to be established between ‘n’ NEs participating in a VPLS is at most ‘n*(n−1)’ unidirectional P2P connections and ‘n’ additional unidirectional P2MP connections. The connections may pass through one or more NEs that do not participate in the VPLS. The creation of the P2P and the P2MP connections is described in greater detail below. In one embodiment of this invention, the P2MP connections and the P2P connections are established through PWs over MPLS transport tunnels. Specifically, the P2MP connections are established through multipoint PWs over multipoint MPLS transport tunnels or through multipoint PWs over P2P MPLS transport tunnels.
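The connection counts above reduce to a simple formula; the helper below is a sketch for checking them, not part of the patent.

```python
def max_connections(n):
    # Full connectivity among n participating NEs needs at most
    # n*(n-1) unidirectional P2P connections (n-1 from each NE)
    # plus n unidirectional P2MP connections (one rooted at each NE).
    return n * (n - 1), n
```

For the five-NE network 200, a single VPLS spanning all five NEs would need at most 20 P2P connections and 5 P2MP connections.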
To allow for the functionality of the P2P and the P2MP connections, each NE (e.g., NEs 220) implements at least one of the following functions: ‘replicate-and-forward’, ‘drop-and-forward’, ‘drop’, ‘forward’, ‘replicate-drop-and-forward’, or a combination thereof. The drop function terminates a connection and drops packets at a specific NE with the aim of sending them to one or more CE devices of a customer site connected to the specific NE. The forward function forwards packets from a NE to an immediately adjacent NE, which may or may not participate in the VPLS. It should be noted that if a NE does not participate in a specific VPLS service, it is configured to forward or replicate-and-forward all traffic that belongs to that specific VPLS. The replicate-and-forward function replicates an incoming packet internally and forwards copies of the replicated packet to NEs connected to the replicating NE. The drop-and-forward function replicates an incoming packet internally in a NE, sends a copy of the replicated packet to one or more CE devices connected to the NE, and forwards the packet to another NE. The replicate-drop-and-forward function replicates an incoming packet internally, forwards multiple copies of the replicated packet to NEs connected to the replicating NE, and sends a copy of the packet to one or more CE devices connected to the replicating NE. For each connection, NEs are configured with the appropriate functions. Specifically, for the operation of the P2P connections the NEs are configured to perform only the forward function or the drop function. For the operation of the P2MP connections the NEs are configured to perform the forward function, the drop function, the drop-and-forward function, the replicate-and-forward function, or the replicate-drop-and-forward function. However, in some implementations, the operation of P2MP connections can be realized by configuring the NEs to perform only the drop or drop-and-forward functions.
In addition, a NE is capable of learning destination addresses, as described in greater detail below.
The VPLS is realized through a virtual connection selector (VCS) and a forwarding unit. The VCS executes all activities related to mapping incoming packets to connections while the forwarding unit executes activities related to forwarding packets. A VCS is included in each NE connected to at least one CE device. Note that there is a VCS for each provided multipoint network service. A schematic diagram of a VCS 300 is shown in
VCS 300 further includes a destination address to a destination label mapping (“DA2DL”) table 350 and a destination NE to a destination label mapping (“DNE2DL”) table 352. VCS 300 of a source NE assigns a destination label to each incoming packet received on an input port. Specifically, a destination label is assigned to a packet according to its destination address. Each destination label is associated with a different connection. The mapping information of destination addresses to destination labels is kept in DA2DL table 350. DNE2DL table 352 maintains the mapping information of destination NEs to P2P connections. The content of DA2DL table 350 and DNE2DL table 352 may be preconfigured or dynamically updated. Specifically, DA2DL table 350 can be dynamically configured through a learning procedure described in greater detail below. VCS 300 is included in each NE connected to a customer site.
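Label selection by the VCS of a source NE might look as follows. This is a hypothetical sketch: the table contents, label values, and the existence of a single default P2MP label are our assumptions for illustration, not details stated in the text.

```python
# DA2DL maps learned destination addresses to destination labels (each
# label naming a P2P connection); DNE2DL maps destination NEs to P2P
# labels. A DA with no DA2DL entry falls back to the default label of
# the P2MP connection, i.e., the packet is flooded.

DEFAULT_P2MP_LABEL = 100

da2dl = {"00:aa:bb:cc:dd:01": 201}   # destination address -> label
dne2dl = {"NE-3": 201}               # destination NE -> P2P label

def select_label(da):
    return da2dl.get(da, DEFAULT_P2MP_LABEL)
```

A known DA is sent on its P2P connection (label 201 here); an unknown or broadcast DA gets the P2MP default label and is flooded downstream.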
The forwarding unit is included in each NE in the network and is capable of performing the replicate-drop-and-forward, replicate-and-forward, drop-and-forward, drop, and forward functions. A schematic diagram of the forwarding unit 800 is provided in
As shown in
Next, the P2P connections are established, where a single connection is added at a time. As shown in
The creation of P2P and P2MP connections can be executed manually by means of a network management system (NMS) or command line interface (CLI) or automatically by means of a signaling protocol.
In accordance with this invention, flooded packets are preferably transmitted over a P2MP connection. This significantly reduces the number of packet replications performed at a source NE. For example, a packet with an unknown destination address received at NE 420-1 is transmitted over P2MP connection 431. Subsequently, the packet is received at NE 420-2, which replicates the packet only once and forwards a copy of the packet to NE 420-3, which handles the incoming packet in the same manner as does NE 420-2. Once a packet is received at NE 420-4, the packet is sent to a customer site connected to NE 420-4. As can be understood from this example, a packet to be flooded is replicated internally only once at NEs 420-2 and 420-3. This contrasts with prior art solutions, in which a packet to be flooded is replicated at the source NE (e.g., NE 420-1) as many times as the number of other NEs that are associated with a particular VPLS (e.g., three times). Replicating packets at arbitrary NEs on the network rather than at the source NE significantly increases traffic efficiency and decreases network congestion, as fewer packets are transferred over the network. For example, in the prior art, three copies of the same packet travel between NE 420-1 and NE 420-2.
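The savings in the example can be stated as a toy count of copies crossing the first link (NE 420-1 to NE 420-2). This sketch assumes, as the example does, that all destination NEs lie behind that first link.

```python
def first_link_copies_full_mesh(n_destinations):
    # Prior-art source-side replication: one copy per destination NE
    # crosses the first link.
    return n_destinations

def first_link_copies_p2mp():
    # Over the P2MP connection a single copy crosses the first link;
    # the intermediate NEs replicate internally further downstream.
    return 1
```

For the three destinations of the example, the full mesh puts three copies on the first link where the P2MP connection puts one.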
It should be noted by one skilled in the art that the implementation of the VPLS discussed hereinabove is merely one embodiment of the disclosed invention. For example, another approach is to create the P2P connections first and then add the P2MP connections. However, where P2MP connections are not created, packet replication must be performed. Yet another approach is to create the P2MP connections first, and then create the P2P connections dynamically following the real traffic patterns.
In accordance with this invention, the P2P connections are associated with DAs, e.g., media access control (MAC) addresses. The association between a P2P connection and a DA may be performed either by manual configuration, e.g., using a NMS, or by using a learning procedure. A NMS defines for each DA a respective destination label associated with a P2P connection. A source NE (e.g., NE 420-1) determines the DA of an incoming packet and sends the packet over a P2P connection associated with the designated DA.
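A conventional MAC-learning realization of that procedure can be sketched as below. The names and label values are illustrative assumptions: on packet arrival the NE records the packet's *source* address against the label of the connection it arrived on, so later traffic toward that address uses the P2P connection instead of the P2MP flood path.

```python
DEFAULT_P2MP_LABEL = 100
da2dl = {}   # learned destination address -> destination label

def learn(source_address, arrival_label):
    # associate the sender's address with the connection it came from
    da2dl[source_address] = arrival_label

def lookup(destination_address):
    # unknown addresses fall back to the P2MP (flood) label
    return da2dl.get(destination_address, DEFAULT_P2MP_LABEL)

learn("00:aa:bb:cc:dd:02", 305)
```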
As an example,
The invention has now been described with reference to a specific embodiment where MPLS transport tunnels are used. Other embodiments will be apparent to those of ordinary skill in the art. For example, the method can be adapted for the use of other transport tunnels, such as generic route encapsulation (GRE), Layer 2 tunnel protocol (L2TP), Internet protocol security (IPSEC), and so on.
All publications, patents and patent applications mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual publication, patent or patent application was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention.
Claims
1. A method for enabling efficient bandwidth utilization of a multipoint network service over a network that includes a plurality of network elements (NEs), comprising the steps of: a) setting up a full connectivity between all NEs of the plurality of NEs; and, b) providing the multipoint network service between all of the NEs of the plurality of NEs using said full connectivity, whereby data packets of the multipoint network service are transmitted from one NE that serves as a source NE to at least one other NE that serves as an edge NE via at least one of a direct connection and an intermediate NE, and whereby data packets that need to be flooded are not replicated at said source NE, wherein said step of setting up full connectivity includes establishing a first plurality of point-to-multipoint connections between the source NE and more than one edge NEs, and wherein the step of establishing a first plurality of point-to-multipoint connections between the source NE and the more than one edge NEs includes configuring the intermediate NE in a path of each said point-to-multipoint connection to perform an operation on said data packet selected from the group consisting of a forward operation, a replicate-drop-and-forward operation and a replicate-and-forward operation.
2. The method of claim 1, wherein the step of establishing a first plurality of point-to-multipoint connections between the source NE and the more than one edge NEs further includes configuring at least one edge NE in a path of each point-to-multipoint connection to perform an operation on said data packet selected from the group consisting of a drop operation, a replicate-drop-and-forward operation and a drop-and-forward operation.
3. The method of claim 2, wherein the step of establishing of a first plurality of point-to-multipoint connections between the source NE and the more than one edge NEs further includes configuring a respective source NE of each point-to-multipoint connection to add a respective default label to each data packet being transmitted over said point-to-multipoint connection.
4. The method of claim 3, wherein the step of providing each data packet with a respective default label by said source NE includes appending said respective default label if a destination address is not designated in said packet or if said destination address is at least one of a multicast address and a broadcast address.
5. The method of claim 4, wherein designating each data packet with a destination address includes designating each data packet with an Ethernet media access control (MAC) address.
6. The method of claim 3, wherein configuring is performed by an operator selected from the group consisting of a network management system (NMS), a command line interface (CLI) and a signaling protocol.
7. The method of claim 1, wherein configuring the intermediate NE in said path of each said point-to-multipoint connection to perform a replicate-and-forward function further includes: i. replicating said data packet internally in each intermediate NE, which thereby performs as a replicate-and-forward NE; and ii. sending a copy of said replicated packet to NEs connected to said replicate-and-forward NE.
8. The method of claim 2, wherein configuring each at least one edge NE in said path of each point-to-multipoint connection to perform a drop-and-forward function further includes: i. replicating said data packet internally in each edge NE, which thereby performs as a drop-and-forward NE; ii. sending a copy of said replicated packet to at least one customer site connected to said drop-and-forward NE; and, iii. sending said data packet to one NE selected from the group of an edge NE and an intermediate NE and connected to said drop-and-forward NE.
9. The method of claim 2, wherein configuring each of said at least one edge NE in said path of each point-to-multipoint connection to perform a replicate-drop-and-forward function further includes: i. replicating said data packet internally in each edge NE, which thereby performs as a replicate-drop-and-forward NE; ii. sending a copy of said replicated data packet to at least one customer site connected to said replicate-drop-and-forward NE; and, iii. sending said data packet to at least two NEs selected from the group of an edge NE and an intermediate NE, wherein the at least two NEs are connected to said replicate-drop-and-forward NE.
10. The method of claim 1, wherein configuring said at least one intermediate NE in said path of each point-to-multipoint connection to perform a forward function further includes sending an incoming data packet to a NE selected from the group consisting of an edge NE and an intermediate NE, wherein the NE is connected to said intermediate NE that performs said forward function.
11. The method of claim 2, wherein configuring each of said at least one edge NE in said path of each point-to-multipoint connection to perform a drop function further includes sending an incoming data packet to at least one customer site connected to said edge NE that performs said drop function.
12. A method for enabling efficient bandwidth utilization of a multipoint network service over a network that includes a plurality of network elements (NEs), comprising the steps of: a) setting up a full connectivity between all NEs of the plurality of NEs; and, b) providing the multipoint network service between all of the NEs of the plurality of NEs using said full connectivity, whereby data packets of the multipoint network service are transmitted from one NE that serves as a source NE to at least one other NE that serves as an edge NE via at least one of a direct connection and an intermediate NE, and whereby data packets that need to be flooded are not replicated at said source NE, wherein said step of setting up full connectivity includes establishing a first plurality of point-to-multipoint connections between the source NE and more than one edge NEs, and wherein said step of setting up full connectivity further includes establishing a second plurality of point-to-point connections between said source NE and every edge NE of the plurality of NEs, wherein establishing a second plurality of point-to-point connections between each source NE and every one of said edge NEs further includes: i. configuring each intermediate NE positioned on a path between each source NE and a respective edge NE to perform a forward action on each packet including said specific label; and, ii. configuring said respective edge NE to perform a drop action on each data packet including said specific label.
13. The method of claim 12, wherein configuring is performed by an operator selected from the group consisting of a network management system (NMS), a command line interface (CLI), and a signaling protocol.
14. The method of claim 12, wherein configuring of each intermediate NE positioned on a path between each source NE and a respective edge NE to perform a forward action on each data packet including said specific label includes sending said data packet to a NE connected to said intermediate NE that performs said forward action.
15. The method of claim 12, wherein configuring of said respective edge NE to perform a drop action on each packet including said specific label includes sending said packet to at least one customer site connected to said edge NE.
16. The method of claim 12, wherein associating each point-to-point connection with a specific label includes appending said specific label by said source NE based on a respective destination address of said data packet.
17. A network element (NE) operative to enable efficient bandwidth utilization of a multipoint network service over a network, comprising: a) a virtual connection selector (VCS) capable of mapping incoming data packets to connections; and, b) a forwarding unit coupled to said VCS and configured to perform a forwarding function on each data packet; whereby said NE may communicate with other NEs that share the multipoint network service over the arbitrary topology network, and wherein said forwarding function is selected from the group consisting of a drop function, a replicate-and-forward function, replicate-drop-and-forward function, a forward function and a drop-and-forward function.
18. The NE of claim 17, wherein said forwarding unit includes: i. a forwarding information table (FIT) that indicates said forwarding functions to be performed per each label; ii. a forwarding mechanism operative to execute said forward function; iii. a drop mechanism operative to execute said drop function; iv. a drop-and-forward mechanism operative to execute said drop-and-forward function; v. a replicate-and-forward mechanism operative to execute said replicate-and-forward function; and, vi. a replicate-drop-and-forward mechanism operative to execute said replicate-drop-and-forward function.
6532088 | March 11, 2003 | Dantu et al. |
6680922 | January 20, 2004 | Jorgensen |
6970475 | November 29, 2005 | Fraser et al. |
7088717 | August 8, 2006 | Reeves et al. |
20030031192 | February 13, 2003 | Furuno |
20030142674 | July 31, 2003 | Casey |
20040081203 | April 29, 2004 | Sodder et al. |
20040165600 | August 26, 2004 | Lee |
20040174887 | September 9, 2004 | Lee |
20050027782 | February 3, 2005 | Jalan et al. |
20050147104 | July 7, 2005 | Ould-Brahim |
20050190757 | September 1, 2005 | Sajassi |
20060143300 | June 29, 2006 | See et al. |
20080279196 | November 13, 2008 | Friskney et al. |
Type: Grant
Filed: Jun 7, 2004
Date of Patent: Sep 14, 2010
Patent Publication Number: 20050271036
Assignee: Alcatel (Paris)
Inventors: Yoav Cohen (Kfar Saba), Gilad Goren (Nirit)
Primary Examiner: Phirin Sam
Attorney: Carmen Patti Law Group, LLC
Application Number: 10/861,528
International Classification: H04L 12/28 (20060101);