NETWORK MULTI-SOURCE INBOUND QUALITY OF SERVICE METHODS AND SYSTEMS
A computerized method useful for implementing a Multi-Source Inbound QoS (Quality of Service) process in a computer network includes the step of calculating a current usage rate of a provider entity, where the provider entity is classified by a network traffic priority. The method includes the step of implementing a fair-sharing policy among a set of provider entities. The method includes the step of adjusting any excess bandwidth among the set of provider entities. The method includes the step of implementing link sharing at a provider-entity level.
This application claims priority to U.S. Provisional Application No. 62/457,816, titled METHOD AND SYSTEM OF OVERLAY FLOW CONTROL, filed on 11 Feb. 2017. This provisional application is incorporated by reference in its entirety.
FIELD OF THE INVENTION
This application relates generally to computer networking, and more specifically to a system, article of manufacture and method of Multi-Source Inbound QoS (quality of service).
DESCRIPTION OF THE RELATED ART
Employees working in branch offices of an enterprise typically need to access resources that are located in another branch office. In some cases, these resources are located in the Enterprise Data Center, which is a central location for resources. Access to these resources is typically obtained by using a site-to-site VPN, which establishes a secure connection over a public network (e.g. the Internet, etc.). There may be dedicated computer equipment in the branch office, the other branch office and/or the Data Center which establishes and maintains the secure connection. These types of site-to-site VPNs are set up one at a time and can be resource intensive to set up and maintain.
It is typical in deployments that a VCMP endpoint (e.g. a VCMP tunnel initiator or responder, which can be a VCE or VCG), acting as a receiver, receives traffic from multiple VCMP sources (henceforth, providers), such as other VCMP endpoints and/or a host in the Internet. In these scenarios, the sum of all the traffic arriving at the receiver can be greater than the rated receive capacity of the link. This can be because the providers are independent of each other. Each provider can also be agnostic of the total unused receive capacity at the receiver. This can lead to oversubscription at the receiver, which may adversely impact application performance.
SUMMARY
A computerized method useful for implementing a Multi-Source Inbound QoS (Quality of Service) process in a computer network includes the step of calculating a current usage rate of a provider entity, where the provider entity is classified by a network traffic priority. The method includes the step of implementing a fair-sharing policy among a set of provider entities. The method includes the step of adjusting any excess bandwidth among the set of provider entities. The method includes the step of implementing link sharing at a provider-entity level.
DESCRIPTION
Disclosed are a system, method, and article of manufacture for multi-source inbound QoS and overlay flow control. The following description is presented to enable a person of ordinary skill in the art to make and use the various embodiments. Descriptions of specific devices, techniques, and applications are provided only as examples. Various modifications to the examples described herein will be readily apparent to those of ordinary skill in the art, and the general principles defined herein may be applied to other examples and applications without departing from the spirit and scope of the various embodiments.
Reference throughout this specification to “one embodiment,” “an embodiment,” “one example,” or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases “in one embodiment,” “in an embodiment,” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
Furthermore, the described features, structures, or characteristics of the invention may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided, such as examples of programming, software modules, user selections, network transactions, database queries, database structures, hardware modules, hardware circuits, hardware chips, etc., to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art can recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, materials, and so forth. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention.
The schematic flow chart diagrams included herein are generally set forth as logical flow chart diagrams. As such, the depicted order and labeled steps are indicative of one embodiment of the presented method. Other steps and methods may be conceived that are equivalent in function, logic, or effect to one or more steps, or portions thereof, of the illustrated method. Additionally, the format and symbols employed are provided to explain the logical steps of the method and are understood not to limit the scope of the method. Although various arrow types and line types may be employed in the flow chart diagrams, they are understood not to limit the scope of the corresponding method. Indeed, some arrows or other connectors may be used to indicate only the logical flow of the method. For instance, an arrow may indicate a waiting or monitoring period of unspecified duration between enumerated steps of the depicted method. Additionally, the order in which a particular method occurs may or may not strictly adhere to the order of the corresponding steps shown.
Definitions
Example definitions for some embodiments are now provided.
Border Gateway Protocol (BGP) can be a standardized exterior gateway protocol designed to exchange routing and reachability information among autonomous systems (AS) on the Internet.
Cloud computing can involve deploying groups of remote servers and/or software networks that allow centralized data storage and online access to computer services or resources. These groups of remote servers and/or software networks can be a collection of remote computing services.
CE router (customer edge router) can be a router located on the customer premises that provides an Ethernet interface between the customer's LAN and the provider's core network. CE routers can be a component in an MPLS architecture.
Customer-premises equipment (CPE) can be any terminal and associated equipment located at a subscriber's premises and connected with a carrier's telecommunication channel at the demarcation point.
Edge device can be a device that provides an entry point into enterprise or service provider core networks. An edge device can be software running in a virtual machine (VM) located in a branch office and/or customer premises.
Firewall can be a network security system that monitors and controls the incoming and outgoing network traffic based on predetermined security rules.
Flow can be a grouping of packets that match a five (5) tuple, which is a combination of Source IP Address (SIP), Destination IP Address (DIP), L4 Source Port (SPORT), L4 Destination Port (DPORT), and the L4 protocol (PROTO).
Forward error correction (FEC) (e.g. channel coding) can be a technique used for controlling errors in data transmission over unreliable or noisy communication channels.
Deep learning can be a type of machine learning based on a set of algorithms that attempt to model high-level abstractions in data by using model architectures, with complex structures or otherwise, composed of multiple non-linear transformations.
Deep Packet Inspection (DPI) can be the ability to analyze the different layers of a packet on the network.
Gateway can be a node (e.g. a router) on a computer network that serves as an access point to another network.
Internet Protocol Security (IPsec) can be a protocol suite for securing Internet Protocol (IP) communications by authenticating and encrypting each IP packet of a communication session.
Multiprotocol Label Switching (MPLS) can be a mechanism in telecommunications networks that directs data from one network node to the next based on short path labels rather than long network addresses, thus avoiding complex lookups in a routing table.
Orchestrator can include a software component that provides multi-tenant and role-based centralized configuration management and visibility.
Open Shortest Path First (OSPF) can be a routing protocol for Internet Protocol (IP) networks. OSPF can use a link-state routing (LSR) algorithm and falls into the group of interior gateway protocols (IGPs), operating within a single autonomous system (AS).
Quality of Service (QoS) can include the ability to define a guaranteed set of actions, such as routing and resource constraints (e.g. bandwidth, latency, etc.).
Software as a service (SaaS) can be a software licensing and delivery model in which software is licensed on a subscription basis and is centrally hosted.
Tunneling protocol can allow a network user to access or provide a network service that the underlying network does not support or provide directly.
Virtual Desktop Infrastructure (VDI) is a desktop-oriented service that hosts user desktop environments on remote servers and/or blade PCs. Users access the desktops over a network using a remote display protocol.
Virtual private network (VPN) can extend a private network across a public network, such as the Internet. It can enable users to send and receive data across shared or public networks as if their computing devices were directly connected to the private network, and thus benefit from the functionality, security and management policies of the private network.
Voice over IP (VoIP) can be a methodology and group of technologies for the delivery of voice communications and multimedia sessions over Internet Protocol (IP) networks, such as the Internet.
Additional example definitions are provided herein.
Example Systems and Processes of Overlay Flow Control
In order to integrate into customer environments with minimal configuration required on existing devices, an Edge device and a gateway system can support dynamic routing protocols. In order to facilitate simplified use and management of dynamic routing protocols such as OSPF, various Overlay Flow Control methods and systems can be implemented. These can provide a user a single, simple point of configuration for all routes in a network without requiring changes to the protocol configuration itself.
The customer can indicate whether routes are preferred (e.g. VELOCLOUD® Overlay becomes the default path with MPLS as a backup) and/or non-preferred (e.g. where MPLS remains the default path with VELOCLOUD® Overlay as a backup). The route costs for preferred, non-preferred and/or default routes can be configurable. For example, they can have different defaults based on whether OE1 or OE2 routes are used in the redistribution.
In one example, a CE Router can advertise an OE2 route. For routes with cost ‘n’ (where n>1), the route can be advertised with cost ‘n−1’. For routes with cost ‘1’, the route can be advertised with cost ‘1’ and a link cost ‘m−1’, where ‘m’ is the link cost from the L3 Switch/Router to the CE router.
In another example, a CE Router advertises an OE1 route. Take the OE1 route cost as ‘n’ and the link cost from the L3 Switch/Router to the CE router as ‘m’. A route can then be advertised with cost ‘n-prime’ and link cost ‘m-prime’ such that (‘n-prime’ + ‘m-prime’) < (‘n’ + ‘m’).
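The two cost rules above can be illustrated with a brief sketch. This is a minimal, hedged illustration only; the function names and return shapes are assumptions and not taken from the specification.

```python
# Illustrative sketch (assumed names/shapes) of the OE2/OE1 advertisement rules above.

def advertise_oe2(n: int, m: int) -> tuple[int, int]:
    """Return (route cost, link cost) a CE router could advertise for an OE2 route."""
    if n > 1:
        return n - 1, m      # cost 'n-1'; link cost assumed unchanged
    return 1, m - 1          # cost '1' and link cost 'm-1'

def advertise_oe1(n: int, m: int) -> tuple[int, int]:
    """Return (n_prime, m_prime) such that n_prime + m_prime < n + m for an OE1 route."""
    n_prime, m_prime = (n - 1, m) if n > 1 else (n, m - 1)
    assert n_prime + m_prime < n + m
    return n_prime, m_prime
```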
To simplify the visualization and management of routes, they are presented in the Overlay Flow Control table. This table provides an enterprise-wide view of routes, routing adjacencies and preferred exits for each specific route. The preferred exit for any given route can be selected, which results in the routing preferences being automatically updated at each Edge device and advertised to influence routing changes across the network, without the customer having to perform any further configuration actions. An edge device can implement the following rules for redistributing VCRP (e.g. a routing protocol) into OSPF. First, an edge device can redistribute VCRP prefixes that belong to various bronze sites as OE1, metric <m>, if the VCRP route preference is lower than the DIRECT (if available) route preference. Otherwise, the prefixes are redistributed as OE2, metric <m>, where m is a low priority. A Direct route preference can be fixed to two hundred and fifty-six (256). A VCRP route preference lower than 256 can indicate the route is preferred; otherwise a Direct route (if available) is preferred. The system can watch how CPEs redistribute this prefix into the MPLS cloud. The system can determine whether the metric type is preserved by BGP attributes while redistributing into OSPF. The system can determine whether the cost is preserved by BGP attributes while redistributing into OSPF.
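As a rough sketch of the redistribution rule just described (the constant values and function shape are assumptions; the low-priority metric is a placeholder, not a value from the specification):

```python
DIRECT_ROUTE_PREFERENCE = 256   # Direct route preference is fixed to 256 per the text
LOW_PRIORITY_METRIC = 1000      # placeholder for "metric <m> where m = low priority"

def redistribute_bronze_prefix(vcrp_preference: int, metric: int, direct_available: bool) -> dict:
    """Decide how a VCRP bronze-site prefix is redistributed into OSPF."""
    direct_pref = DIRECT_ROUTE_PREFERENCE if direct_available else float("inf")
    if vcrp_preference < direct_pref:
        # VCRP preference below the Direct preference: the VCRP route is preferred -> OE1.
        return {"metric_type": "OE1", "metric": metric}
    # Otherwise the Direct route (if available) is preferred -> low-priority OE2.
    return {"metric_type": "OE2", "metric": LOW_PRIORITY_METRIC}
```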
Route insertion rules can be implemented. Routes can be inserted into a unified routing table based on the type of VPN profile configured. Hubs can set up direct routes for all VCRP prefixes. Branches can set up direct routes for prefixes via CG and/or VPN hubs and/or a DE2E direct route. For the same prefix, there can be two routes per transit point. This can be because the prefix is advertised by both the owner and the hub. A first route can have a next-hop logical ID as the transit point and a destination logical ID as the owner. A next route can have a next-hop logical ID and/or destination logical ID as the VPN hub (e.g. not applicable for CG and DE2E).
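A minimal data-structure sketch of the two-routes-per-transit-point insertion described above; the field names and helper function are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class OverlayRoute:
    prefix: str
    next_hop_logical_id: str    # the transit point (VPN hub, CG, or DE2E peer)
    dest_logical_id: str        # the prefix owner, or the VPN hub itself

def insert_prefix_routes(table: list, prefix: str, transit_id: str,
                         owner_id: str, transit_is_vpn_hub: bool) -> None:
    """Insert the route(s) a branch keeps for one prefix via one transit point."""
    # First route: next hop is the transit point, destination is the prefix owner.
    table.append(OverlayRoute(prefix, transit_id, owner_id))
    if transit_is_vpn_hub:
        # Second route: next hop and destination are the VPN hub (not applicable to CG/DE2E).
        table.append(OverlayRoute(prefix, transit_id, transit_id))
```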
A first example use case can include provisioning an edge device inside a datacenter location that previously did not contain one. In this example, Hub1 can be inserted into the Datacenter site, as shown in the associated figure, with a routed interface connected to the L3 switch and the other WAN link connected to the Internet. The leg connecting the L3 switch and Hub1 can have OSPF enabled. Hub1 can advertise the default route 0.0.0.0/0 (originate-default) with metric 0 to the L3 switch. This can allow Hub1 to take over Internet traffic sourced by subnets connected to the L3 switch. Route ‘H’ can have been learned as an intra-area route (O). Route ‘S’ can have been learned as an external-type route (e.g. OEx). Route ‘H’ and Route ‘S’ can be added to the OSPF view and are sent to VCO for GNDT sync up. Hub1 can be marked as owner of prefix ‘H’, and VCO responds to Hub1 with the advertise flag set to True for prefix ‘H’. Sites that advertise intra-area (O) or inter-area (IA) routes can be marked as owners of the routes in GNDT and can be allowed to advertise the routes to VCG. VCO can respond to Hub1 with the advertise flag set to False for prefix ‘S’, as ‘S’ is an external route and requires administrator intervention. Hub1 can advertise route ‘H’ to VCG through VCRP.
In a second use-case example, a Bronze site can be brought online. It is noted that, as a prerequisite, the Datacenter is already online. A Bronze1 site (e.g. a simple branch office site with only Internet connections and no MPLS or dynamic routing protocols such as OSPF in use at the site) can be provisioned and connected to VCG through an Internet link. The Bronze1 site can advertise route ‘B’ to VCG through VCRP. VCG can be a reflector that reflects route ‘B’ to Hub1 with the Bronze1 site as next hop, and can reflect route ‘H’ to the Bronze1 site with the Hub1 site as next hop.
In a third use-case example, a Silver site (e.g. a branch office site containing a hybrid of MPLS and Internet WAN links as well as an L3 device which is learning and advertising routes via OSPF) can be brought online. It is noted that, as a prerequisite, the Datacenter and associated Bronze site are already online. The Silver1 site can be stood up and connected to VCG through an Internet link. The Silver1 site can learn routes ‘H’ and ‘B’ through VCG and install the learned routes into a unified route table. For example, the Silver1 site can learn route ‘S’ as an intra-area route and routes ‘H’ and ‘B’ as external routes (e.g. from the L3 switch). Routes ‘S’, ‘H’, and ‘B’ can be added to the OSPF view and are communicated to VCO for GNDT synchronization. VCO responds with the advertise flag set to True for prefix ‘S’ but set to False for prefixes ‘H’ and ‘B’. Silver1 can advertise ‘S’ to other branches via VCG over VCRP.
In a fourth use-case example, Legacy site route advertisement can be implemented. It is noted that, as a prerequisite, the Datacenter and associated Bronze and Silver sites are already online. Legacy site route ‘L’ can be learned by the Hub1 site and the Silver1 site as an external route (e.g. OEx). Hub1 and Silver1 can communicate route ‘L’ to VCO for GNDT synchronization. Hub1 can be chosen as owner for the external route ‘L’ (e.g. without administrator intervention). Hub1 can advertise route ‘L’ to other branches via VCG over VCRP. This can enable connectivity between the legacy site ‘L’ and the Bronze1 site ‘B’.
Various examples of hybrid sites distributing routes learned through VCRP into OSPF are now discussed. In a first example, a hybrid site, on receiving route ‘R’ over VCRP, can redistribute ‘R’ to the L3 switch as an external route based on various criteria. VELOCLOUD® (B2B) can be set as preferred. Route ‘R’ can be revoked if it was installed with metric type OE2. Route ‘R’ can be redistributed with metric type OE1, metric ‘M’=1, etc. Accordingly, the L3 switch can be programmed with route ‘R’ pointing to the VCE. Additionally, OE1 allows the adjacent routers to add cost to route ‘R’ as the route gets redistributed further, and thus may not impact the route priority for route ‘R’ on other receiving sites. In one example, Silver1 can install route ‘R’ with metric 1, metric type OE1. This route ‘R’ can be installed as the high-priority route on adjacent L3 router(s). However, when this route ‘R’ reaches another hybrid site (e.g. the Datacenter site), it is seen with a metric greater than one (1). Accordingly, this does not affect the route ‘R’ on the adjacent L3 routers of the Datacenter site, which would be pointing to the Datacenter site as next hop.
A Direct criterion can be set as preferred when it is available. In one example, route ‘R’ can be revoked if it was installed with metric type OE1, metric ‘M’=one (1). Route ‘R’ can then be redistributed with metric type OE2, metric ‘M’=cost of ‘R’+<low_prio_offset>. <low_prio_offset> can be some value that installs the route as a low-priority route. The value can be updated based on lab experiments.
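The two hybrid-site criteria above (B2B preferred vs. Direct preferred) can be summarized in one hedged sketch; LOW_PRIO_OFFSET is an arbitrary placeholder since the text leaves its value to experimentation.

```python
LOW_PRIO_OFFSET = 100   # placeholder; the text says this value is tuned via lab experiments

def hybrid_redistribute_metric(route_cost: int, b2b_preferred: bool) -> dict:
    """Choose the OSPF metric type/metric a hybrid site uses when redistributing route 'R'."""
    if b2b_preferred:
        # VELOCLOUD (B2B) preferred: high-priority OE1 so downstream routers add cost
        # as the route is redistributed further.
        return {"metric_type": "OE1", "metric": 1}
    # Direct preferred: low-priority OE2 so the direct path keeps winning.
    return {"metric_type": "OE2", "metric": route_cost + LOW_PRIO_OFFSET}
```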
A hybrid site redistributing ‘R’ to the L3 switch can enable connectivity between ‘R’ and ‘B’ over the VELOCLOUD® B2B overlay. The VELOCLOUD B2B Overlay is the VELOCLOUD Edge and Gateway multipath system that was defined in the original patent providing multipath VPN connectivity between sites. Additionally, it allows connectivity between legacy sites ‘L’ and ‘B’ over private links and the VELOCLOUD B2B overlay.
It should be noted that, though OSPF has been used for illustration purposes supra, the Overlay Flow Control table supports other dynamic routing protocols. For instance, if the protocol is BGP instead of OSPF, metric ‘M’ can be automatically calculated using MED or local preference.
Multi-Source Inbound QoS
It is further noted that a provider's QoS class-based queueing may not be honored at the receiver. For example, Provider 1 802 can be sending ‘High’ priority ‘Realtime’ class traffic and Provider 2 804 can be sending ‘Low’ priority ‘Bulk’ traffic. However, there is no guarantee that the High/Realtime traffic will be prioritized over the Low/Bulk traffic.
Multi-Source Inbound QoS is now discussed. Multi-Source Inbound QoS addresses the problems discussed supra by letting the receiver assess the volume and priority of the inbound traffic and then assign transmission bandwidth to the providers 802-806.
Various example topologies for implementing Multi-Source Inbound QoS are now provided. In some examples, two classes of topologies can differ in the way a provider ‘shares’ the allocated bandwidth. The topologies may differ in the number of links on which VCMP paths can be terminated on a provider, which in turn changes the link scheduling hierarchy and the caps that are configured at the nodes.
An example process 1500 for implementing Multi-Source Inbound QoS can begin by calculating the current usage rate of each provider (e.g. providers 802-806), classified by network traffic priority, as noted in the summary supra. In step 1504, process 1500 can implement fair sharing among the providers. For example, step 1504 can allocate bandwidth to the providers based on the traffic priority and provider share ratios.
An example of allocating bandwidth to providers is now discussed. The score communicated from a provider can be considered to be the bandwidth required for a given priority. This is because process 1500 measures the received and dropped Kbps; the sum of these is the amount of bandwidth that would be needed to eliminate drops at the current traffic rate. The total bandwidth to be used can be considered to be the sum of all the required bandwidths per priority. If the total bandwidth to be used is less than the total link bandwidth, then it can be allocated to the provider in toto. If the total bandwidth to be used is greater than the total link bandwidth, then for each priority process 1500 can assign the minimum of the bandwidth required or the total link bandwidth divided by the number of peers.
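A hedged sketch of this fair-share step follows. The data shapes (nested provider/priority dictionaries of Kbps scores) are assumptions; only the allocation rule itself is taken from the description above.

```python
def fair_share(scores: dict[str, dict[str, float]],
               link_bw_kbps: float) -> dict[str, dict[str, float]]:
    """Allocate per-priority bandwidth to each provider.

    scores[provider][priority] = received + dropped Kbps reported by that provider,
    i.e. the bandwidth needed to eliminate drops at the current traffic rate.
    """
    total_required = sum(bw for per_prio in scores.values() for bw in per_prio.values())
    num_peers = len(scores)
    alloc: dict[str, dict[str, float]] = {}
    for provider, per_prio in scores.items():
        alloc[provider] = {}
        for priority, required in per_prio.items():
            if total_required <= link_bw_kbps:
                # Link is not oversubscribed: grant the full requirement.
                alloc[provider][priority] = required
            else:
                # Oversubscribed: cap each peer at an equal share of the link.
                alloc[provider][priority] = min(required, link_bw_kbps / num_peers)
    return alloc
```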
In step 1506, process 1500 can allocate excess bandwidth. For example, step 1506 can adjust excess bandwidth, if any, amongst the providers. For example, process 1500 can iterate through each peer and assign any leftover bandwidth to the first peer found that still requires more bandwidth.
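Continuing the sketch above (same assumed data shapes), step 1506 might hand any leftover bandwidth to the first peer that is still short of its requirement:

```python
def allocate_excess(scores: dict[str, dict[str, float]],
                    alloc: dict[str, dict[str, float]],
                    link_bw_kbps: float) -> dict[str, dict[str, float]]:
    """Assign leftover link bandwidth to the first peer that still requires more."""
    allocated = sum(bw for per_prio in alloc.values() for bw in per_prio.values())
    leftover = link_bw_kbps - allocated
    if leftover <= 0:
        return alloc
    for provider, per_prio in scores.items():
        for priority, required in per_prio.items():
            if alloc[provider][priority] < required:
                alloc[provider][priority] += leftover   # give the excess to this peer
                return alloc
    return alloc
```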
In step 1508, process 1500 can implement link sharing at the provider level. For example, step 1508 can share the allocated bandwidth by configuring the link scheduler at the provider, when appropriate, in such a way that path selection policies are honored.
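Finally, a purely illustrative sketch of programming the allocations into a link scheduler; the scheduler.set_rate_cap interface is hypothetical, standing in for whatever per-queue cap mechanism the provider's link scheduler exposes.

```python
def apply_link_sharing(alloc: dict[str, dict[str, float]], scheduler) -> None:
    """Cap each (provider, priority) queue at its allocated rate on the link scheduler."""
    for provider, per_prio in alloc.items():
        for priority, kbps in per_prio.items():
            # Hypothetical API: one rate cap per provider/priority queue, leaving
            # path selection policies untouched.
            scheduler.set_rate_cap(provider=provider, priority=priority, kbps=kbps)
```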
CONCLUSION
Although the present embodiments have been described with reference to specific example embodiments, various modifications and changes can be made to these embodiments without departing from the broader spirit and scope of the various embodiments. For example, the various devices, modules, etc. described herein can be enabled and operated using hardware circuitry, firmware, software or any combination of hardware, firmware, and software (e.g., embodied in a machine-readable medium).
In addition, it can be appreciated that the various operations, processes, and methods disclosed herein can be embodied in a machine-readable medium and/or a machine accessible medium compatible with a data processing system (e.g., a computer system), and can be performed in any order (e.g., including using means for achieving the various operations). Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. In some embodiments, the machine-readable medium can be a non-transitory form of machine-readable medium.
Claims
1-14. (canceled)
15. For a wide area network (WAN) comprising a plurality of forwarding nodes, a method for implementing multi-source inbound quality of service (QoS) at an edge first forwarding node that connects a site to the WAN, the method comprising:
- identifying a plurality of usage values for a plurality of other forwarding nodes of the WAN that forward packets to the edge first forwarding node along a shared WAN link;
- based on the identified usage values, allocating bandwidth to each of the other forwarding nodes; and
- configuring at least one particular other forwarding node to honor the bandwidth allocated to the particular other forwarding node.
16. The method of claim 15, wherein identifying the plurality of usage values comprises receiving, from each of the other forwarding nodes, at least one usage value that represents a desired quantity of network traffic that the other forwarding node expects to send to the first forwarding node.
17. The method of claim 15, wherein allocating bandwidth comprises allocating bandwidth for different types of traffic that are associated with different priority levels.
18. The method of claim 17, wherein allocating bandwidth for different traffic types comprises allocating first and second bandwidth levels for first and second traffic types to at least one of the other forwarding nodes.
19. The method of claim 18, wherein
- identifying the plurality of usage values comprises receiving, from each of the other forwarding nodes, at least one usage value that represents a desired quantity of network traffic that the other forwarding node expects to send to the first forwarding node;
- the other forwarding node that is assigned first and second bandwidth levels for first and second traffic types provides first and second usage values for the first and second traffic types to the first forwarding node.
20. The method of claim 15, wherein the other forwarding nodes comprise a cloud gateway.
21. The method of claim 15, wherein the other forwarding nodes comprise an edge second forwarding node connecting another site to the WAN.
22. The method of claim 21, wherein the sites comprise a branch office of an enterprise and a datacenter of the enterprise, or two branch offices of the enterprise.
23. The method of claim 21 further comprising forcing the particular other forwarding node to reduce its transmission rate to a rate that does not exceed the amount of bandwidth allocated to the particular other forwarding node.
24. The method of claim 21, wherein the usage value for each of the other forwarding nodes specifies an amount of bandwidth needed by the other forwarding node to send network traffic to the first forwarding node without dropping any packet.
25. An electronic device comprising:
- a set of one or more processing units; and
- a non-transitory machine readable medium storing a program which, when executed by at least one of the processing units, implements a multi-source inbound Quality of Service (QoS) process at an edge first forwarding node that connects a site to a WAN, the program comprising a set of instructions for: identifying a plurality of usage values for a plurality of other forwarding nodes of the WAN that forward packets to the edge first forwarding node along a shared WAN link; based on the identified usage values, allocating bandwidth to each of the other forwarding nodes; and configuring at least one particular other forwarding node to honor the bandwidth allocated to the particular other forwarding node.
26. The electronic device of claim 25, wherein the set of instructions for identifying the plurality of usage values comprises a set of instructions for receiving, from each of the other forwarding nodes, at least one usage value that represents a desired quantity of network traffic that the other forwarding node expects to send to the first forwarding node.
27. The electronic device of claim 25, wherein the set of instructions for allocating bandwidth comprises a set of instructions for allocating bandwidth for different types of traffic that are associated with different priority levels.
28. The electronic device of claim 27, wherein the set of instructions for allocating bandwidth for different traffic types comprises a set of instructions for allocating first and second bandwidth levels for first and second traffic types to at least one of the other forwarding nodes.
29. The electronic device of claim 28, wherein
- the set of instructions for identifying the plurality of usage values comprises a set of instructions for receiving, from each of the other forwarding nodes, at least one usage value that represents a desired quantity of network traffic that the other forwarding node expects to send to the first forwarding node;
- the other forwarding node that is assigned first and second bandwidth levels for first and second traffic types provides first and second usage values for the first and second traffic types to the first forwarding node.
30. The electronic device of claim 25, wherein the other forwarding nodes comprise a cloud gateway.
31. The electronic device of claim 25, wherein the other forwarding nodes comprise an edge second forwarding node connecting another site to the WAN.
32. The electronic device of claim 31, wherein the sites comprise a branch office of an enterprise and a datacenter of the enterprise, or two branch offices of the enterprise.
33. The electronic device of claim 31, wherein the program further comprises a set of instructions for forcing the particular other forwarding node to reduce its transmission rate to a rate that does not exceed the amount of bandwidth allocated to the particular other forwarding node.
34. The electronic device of claim 31, wherein the usage value for each of the other forwarding nodes specifies an amount of bandwidth needed by the other forwarding node to send network traffic to the first forwarding node without dropping any packet.
Type: Application
Filed: Feb 9, 2020
Publication Date: Jul 23, 2020
Inventors: Ajit Ramachandra Mayya (Saratoga, CA), Parag Pritam Thakore (Los Gatos, CA), Stephen Craig Connors (San Jose, CA), Steven Michael Woo (Los Altos, CA), Sunil Mukundan (Chennai), Mukamala Swaminathan Srihari (Chennai)
Application Number: 16/785,628