SYSTEM AND METHOD FOR PROVIDING FAULT TOLERANCE AND RESILIENCY IN A CLOUD NETWORK

In accordance with an embodiment, described herein is a system and method for providing fault tolerance and resiliency within a cloud network. A cloud computing environment provides access, via the cloud network, to software applications executing within the cloud environment. The cloud network can include a plurality of network devices, of which various network devices can be configured as virtual chassis devices, cluster members, or standalone devices. A fault tolerance and resiliency framework can monitor the network devices, to receive status information associated with the devices. In the event the system determines a failure or error associated with a network device, it can attempt to perform recovery operations to restore the cloud network to its original capacity or state. If the system determines that a particular network device cannot recover from the failure or error, it can alert an administrator for further action.

Description
CLAIM OF PRIORITY

This application claims the benefit of priority to U.S. Provisional Patent Application titled “SYSTEM AND METHOD FOR PROVIDING FAULT TOLERANCE AND RESILIENCY IN A CLOUD NETWORK”, Application No. 63/021,564, filed May 7, 2020; which application is herein incorporated by reference.

COPYRIGHT NOTICE

A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.

TECHNICAL FIELD

Embodiments described herein are generally related to computer networks, network devices, and cloud computing environments, and are particularly related to systems and methods for providing fault tolerance and resiliency in a cloud network.

BACKGROUND

A cloud computing environment or cloud network typically comprises a large number of heterogeneous network devices, such as, for example, firewalls, load balancers, routers, switches, power distribution units, or other types of network devices, appliances, or components operating according to various different protocols.

Network devices may be sourced from various device vendors; with each device type or model assigned to a particular role. For example, a cloud computing environment or cloud network may comprise a variety of fabric, spine, and leaf node devices, sourced from a variety of different vendors. In some environments, network devices can be configured as virtual chassis devices, cluster members, or standalone network devices.

However, network devices are susceptible to hardware or software failures or errors. Examples of potential hardware errors can include non-maskable interrupt (NMI) errors, parity errors, or application-specific integrated circuit (ASIC) errors; while examples of potential software errors can include problems with software daemons, operating system subsystems, or service blades running specialized software for add-on services.

Some failures or errors may be catastrophic, in that they disrupt mission-critical software applications running in the cloud environment. As such, to achieve high availability and continuity of applications running in the cloud, there is an incentive to build fault tolerance and resiliency into the cloud network. However, since cloud providers often utilize network devices sourced from several third-party network device manufacturers or vendors, there is a challenge in providing resiliency in a cloud network that can handle hardware or software failures across a wide variety of network devices.

SUMMARY

In accordance with an embodiment, described herein is a system and method for providing fault tolerance and resiliency within a cloud network. A cloud computing environment provides access, via the cloud network, to software applications executing within the cloud environment. The cloud network can include a plurality of network devices, of which various network devices can be configured as virtual chassis devices, cluster members, or standalone devices. A fault tolerance and resiliency framework can monitor the network devices, to receive status information associated with the devices. In the event the system determines a failure or error associated with a network device, it can attempt to perform recovery operations to restore the cloud network to its original capacity or state. If the system determines that a particular network device cannot recover from the failure or error, it can alert an administrator for further action.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 illustrates an example cloud computing environment, including an availability domain and cloud network, in accordance with an embodiment.

FIG. 2 illustrates a system for providing fault tolerance and resiliency in a cloud network, in accordance with an embodiment.

FIG. 3 illustrates an example of a network device inventory, in accordance with an embodiment.

FIG. 4 illustrates a method or algorithm for providing fault tolerance and resiliency in a cloud network, in accordance with an embodiment.

FIG. 5 illustrates the operation of the system to provide fault tolerance and resiliency in a cloud network, in accordance with an embodiment.

FIG. 6 further illustrates the operation of the system to provide fault tolerance and resiliency in a cloud network, in accordance with an embodiment.

FIG. 7 illustrates a method or process for providing fault tolerance and resiliency in a cloud network, in accordance with an embodiment.

FIG. 8 illustrates the use of the method or process for providing fault tolerance and resiliency in a cloud network, in accordance with an embodiment.

FIG. 9 further illustrates the use of the method or process for providing fault tolerance and resiliency in a cloud network, in accordance with an embodiment.

FIG. 10 further illustrates the use of the method or process for providing fault tolerance and resiliency in a cloud network, in accordance with an embodiment.

FIG. 11 further illustrates the use of the method or process for providing fault tolerance and resiliency in a cloud network, in accordance with an embodiment.

FIG. 12 further illustrates the use of the method or process for providing fault tolerance and resiliency in a cloud network, in accordance with an embodiment.

FIG. 13 further illustrates the use of the method or process for providing fault tolerance and resiliency in a cloud network, in accordance with an embodiment.

FIG. 14 further illustrates the use of the method or process for providing fault tolerance and resiliency in a cloud network, in accordance with an embodiment.

FIG. 15 illustrates a method or process performed by an event monitor/poller to determine hardware or software errors, or device-specific objects, in accordance with an embodiment.

FIG. 16 illustrates use of an event monitor/poller to determine hardware or software errors, or device-specific objects, in accordance with an embodiment.

DETAILED DESCRIPTION

As described above, a cloud computing environment or cloud network typically comprises a large number of heterogeneous network devices, such as, for example, firewalls, load balancers, routers, switches, power distribution units, or other types of network devices, appliances, or components operating according to various different protocols.

To achieve high availability and continuity of applications running in the cloud, there is an incentive to build fault tolerance and resiliency into the cloud network. However, since cloud providers often utilize network devices sourced from several third-party network device manufacturers or vendors, there is a challenge in providing resiliency in a cloud network that can handle hardware or software failures across a wide variety of network devices.

In accordance with an embodiment, described herein is a system and method for providing fault tolerance and resiliency within a cloud network. A cloud computing environment provides access, via the cloud network, to software applications executing within the cloud environment. The cloud network can include a plurality of network devices, of which various network devices can be configured as virtual chassis devices, cluster members, or standalone devices.

In accordance with an embodiment, a fault tolerance and resiliency framework can monitor the network devices, to receive status information associated with the devices. In the event the system determines a failure or error associated with a network device, it can attempt to perform recovery operations to restore the cloud network to its original capacity or state. If the system determines that a particular network device cannot recover from the failure or error, it can alert an administrator for further action.

In accordance with various embodiments, technical advantages of the systems and methods described herein include, for example, that the fault tolerance and resiliency framework is adapted to monitor and remediate faults spanning potentially thousands of network devices within a cloud network, to provide fault tolerance and resiliency within such an environment.

FIG. 1 illustrates an example cloud computing environment, including an availability domain and cloud network, in accordance with an embodiment.

As illustrated in FIG. 1, in accordance with an embodiment, a cloud computing environment (cloud environment) 100, including a cloud infrastructure 104 having computing hardware or devices, can provide access, via a cloud network 120, to software applications 140 executing within an availability domain A 110 (for example, as provided at a data center) of the cloud environment.

For example, in accordance with an embodiment, the cloud environment can provide access by one or more user devices 130 or computers to the applications. Other availability domains, for example availability domain N 112, can be similarly associated with another cloud network 122, or applications 142.

In accordance with an embodiment, the cloud network can include a plurality of network devices, of which particular network devices can be configured as virtual chassis devices 150, cluster devices 160, or standalone network devices 170.

For example, in accordance with an embodiment, in the environment illustrated in FIG. 1, which is provided for purposes of illustration, a cloud network can include a virtual chassis (VC) 152 having a plurality of spine nodes 153, 154, and leaf nodes 156, 157, 158, each of which network devices can be associated with a network device type, vendor, or model information.

As further illustrated in this example, the cloud network can include a plurality of clusters 162, 164, which in turn can include a plurality of cluster members 165, 166, 167, 168, each of which can similarly be associated with a network device type, vendor, or model information.

In accordance with an embodiment, a cloud network can include a variety of fabric, spine, and leaf nodes, for example switches, wherein the leaf nodes connect to other network devices such as firewalls, load balancers, routers, or servers. In some environments, leaf nodes or devices can be provisioned as part of a virtual chassis; while fabric or spine nodes or devices can be provisioned either as part of a virtual chassis or as standalone devices. Firewalls and load balancers are typically configured as clusters.

The example illustrated in FIG. 1 is provided for purposes of illustration. In accordance with various embodiments, a cloud network may include either some or all of the components illustrated therein, or may include only a selection of, for example, leaf nodes or cluster members.

For example, in accordance with an embodiment, a cloud environment may include a cloud network that is built using network devices from several third-party network device manufacturers or vendors, intended for use within an availability domain within a data center. Such a cloud environment must scale and operate without failure, to provide application continuity or high availability. Although switches and clusters can occasionally fail, this should preferably not have an effect on the applications, and instead the system should be able to accommodate or tolerate such failures or errors.

In cloud environments or cloud networks that include a cluster, configuration information can be duplicated to all of the network devices in that cluster; so that each cluster member includes all of the information needed to provide the functions of that cluster. Clusters are typically scalable, for example from 2 members/nodes to 4 members/nodes; and typically operate in an active-active mode, with all of the cluster members active at the same time. This provides devices such as firewalls and load balancers that are configured as clusters with a measure of built-in redundancy. However, an administrator generally has to manually add or delete cluster members/nodes.

In cloud environments or cloud networks that include a virtual chassis, a particular network device within the virtual chassis can assume the role of a control module, while remaining network devices assume roles of line cards that process data. Such a virtual chassis generally operates in an active-standby mode, wherein at least one network device is in a standby mode. If the device presently operating as the control module fails, then another device takes over as the control module. This similarly provides a virtual chassis with a measure of built-in redundancy. However as with the cluster scenario described above, with a virtual chassis the administrator generally has to manually specify the active-passive network devices.

In accordance with an embodiment, to address scenarios such as the above, the system described herein provides an automated means to, for example, add standby leaf nodes as replacements, rectify faulty components, and generate alerts for non-recoverable errors. A fault tolerance and resiliency framework is used to achieve fault tolerance and resiliency at various granular levels of the cloud network, including network devices, hardware components (for example, line cards, ASICs, power supplies), and software subsystems.

Generally described, the described fault tolerance and resiliency framework can provide an approach to fault tolerance and resiliency for cloud networks that generally includes: (a) detecting broken network devices in a virtual chassis (e.g., leaf nodes) and/or clusters (e.g., firewall/load balancers) and replacing them with standby network devices; and (b) monitoring for catastrophic hardware/software errors, and taking appropriate actions.

FIG. 2 illustrates a system for providing fault tolerance and resiliency in a cloud network, in accordance with an embodiment.

As illustrated in FIG. 2, in accordance with an embodiment, the system can include a fault tolerance and resiliency framework 200, that can operate at one or more computers 202 including computer hardware (e.g., processor, memory), and can monitor network devices within a cloud network, such as, for example, leaf nodes (devices) 180, cluster members (devices) 190, and standalone network devices, so as to receive status or state information associated with the devices.

In accordance with an embodiment, the fault tolerance and resiliency framework can include device failure handler 210 and event monitor/poller 214 components, which can operate together to provide fault tolerance and resiliency; and can receive, from a network device inventory database 216, a network device inventory 220 or data descriptive of the various network devices in the cloud network, including their type, vendor, or model information.

In accordance with an embodiment, in the event the system determines a failure or error associated with a network device, it can attempt recovery operations to restore the cloud network to its original capacity or state. If the system determines that a particular network device cannot recover from the failure or error, it can use an alerts 228 component to alert an administrator for further action.

In accordance with an embodiment, the components and processes illustrated in FIG. 2, and as further described herein with regard to various other embodiments, can be provided as software or program code executable by a computer system or other type of processing device.

FIG. 3 illustrates an example of a network device inventory, in accordance with an embodiment.

As illustrated in FIG. 3, in accordance with an embodiment, a network device inventory can provide data descriptive of the various network devices in the cloud network, including their type, vendor, or model information, and their present lifecycle status or state.

The example illustrated in FIG. 3 is provided for purposes of illustration. In accordance with various embodiments, a network device inventory can include different types of vendor, model, or status or state information associated with network devices.
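
For purposes of illustration only, the following is a minimal sketch, in Python, of how such inventory data might be represented and queried by the fault tolerance and resiliency framework; the field names and sample records shown here are hypothetical and are not taken from FIG. 3.

    # Minimal sketch of a network device inventory record and lookup
    # (hypothetical field names and sample values).
    from dataclasses import dataclass

    @dataclass
    class NetworkDevice:
        serial_number: str    # device serial number
        vendor: str           # device vendor
        model: str            # device model
        role: str             # "virtual_chassis", "cluster_member", or "standalone"
        lifecycle_state: str  # for example "provisioned", "unprovisioned", "not_present"

    # Example inventory records (hypothetical values).
    inventory = [
        NetworkDevice("SN-0001", "vendor-a", "leaf-model-x", "virtual_chassis", "provisioned"),
        NetworkDevice("SN-0002", "vendor-a", "leaf-model-x", "virtual_chassis", "unprovisioned"),
        NetworkDevice("SN-0003", "vendor-b", "firewall-y", "cluster_member", "provisioned"),
    ]

    def find_unprovisioned(inventory, vendor):
        """Return devices of a given vendor that are available as replacements."""
        return [d for d in inventory
                if d.vendor == vendor and d.lifecycle_state == "unprovisioned"]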

FIG. 4 illustrates a method or algorithm for providing fault tolerance and resiliency in a cloud network, in accordance with an embodiment.

As illustrated in FIG. 4, in accordance with an embodiment, in a first aspect (232), for leaf nodes: the system (fault tolerance and resiliency framework) monitors leaf nodes which are provided as part of a virtual chassis; and, if the system determines that a particular leaf node is in a "not present" state, then that leaf node is replaced with another leaf node that is in an unprovisioned state.

For example, in an environment that includes Juniper™ leaf nodes, the system can perform a “show virtual-chassis” command on all leaf node switches periodically, to detect if there are replacement switches that are not provisioned (unprovisioned), for example as in the following output:

    0 (FPC 0) Present
    TR0216320086 qfx5100-48t-6q 129 Master* N VC
    0 vcp-255/0/48
    0 vcp-255/0/49
    1 (FPC 1) NotPresent
    TR0216320093 qfx5100-48t-6q - Unprovisioned
    TR0214480144 qfx5100-48t-6q

In the above example, the system can determine, based on the output from running the command, that the switch with serial number TR0216320086 is the master switch; and the switch with serial number TR0216320093 should be replaced with the switch with serial number TR0214480144.

The above is provided by way of example and for purposes of illustration. In accordance with various embodiments that employ devices provided by other vendors, other types of commands can be used that are appropriate to those types of devices. Information provided by the device inventory database, which describes the various network devices in the cloud network, including their type, vendor, or model information, can be used to determine the device-type commands for use in configuring or otherwise operating with the various network devices.
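
By way of a non-authoritative illustration, the following Python sketch shows one way that output such as the example above might be scanned for serial numbers and status keywords, so that the framework can decide which member is broken and which switch is available as a replacement; the serial number pattern and status keywords are assumptions based only on the example output shown above.

    # Sketch: extract (serial number, statuses) pairs from "show virtual-chassis"
    # style output (the serial number pattern and keywords are assumptions).
    import re

    def line_statuses(line):
        """Return the status keywords found on a single line of output."""
        statuses = []
        if "NotPresent" in line:
            statuses.append("NotPresent")
        elif "Present" in line:
            statuses.append("Present")
        if "Unprovisioned" in line:
            statuses.append("Unprovisioned")
        if "Master*" in line:
            statuses.append("Master")
        return statuses

    def extract_member_status(output):
        """Return (serial_number, statuses) pairs found in the command output."""
        members = []
        for line in output.splitlines():
            for serial in re.findall(r"\b[A-Z]{2}\d{10}\b", line):   # e.g., TR0216320086
                members.append((serial, line_statuses(line)))
        return members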

In accordance with an embodiment, in a second aspect (234), for cluster members: the system can monitor cluster members; and, if the system determines that a cluster member has failed, then it attempts to reboot or reload that failed cluster member; or if a reboot/reload is not successful, then a node delete event is generated for the non-functional device. An administrator can then manually add another cluster member.

In accordance with an embodiment, in a third aspect (236): the system can monitor hardware or software failures or errors associated with individual network devices, and perform error-specific actions to address the hardware or software failures or errors.

For example, in accordance with an embodiment, example remedy techniques can include: for ASIC errors, performing operations to restart the ASIC; for parity errors, performing operations to restart a specific blade; for NMI errors, performing operations to reload the network device; for power supply/voltage/temperature critical events, performing operations to initiate leaf node replacement or cluster member removals; for standalone network devices, performing operations to generate critical alerts, or if action thresholds are reached, performing operations to shut down the network device; and for software errors, performing operations to generate alerts, or if the action thresholds are exceeded, performing operations to reboot the network device.

The above examples of various remedy techniques and operations are provided by way of example, and for purposes of illustration. In accordance with various embodiments, other types of remedy techniques and operations can be supported.
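
As a minimal sketch, and assuming a simple dispatch-table organization, the following Python fragment maps error categories to remediation handlers; the category names and handler functions are hypothetical placeholders, and the real actions would be issued as device-type commands appropriate to the device vendor.

    # Hypothetical mapping of error categories to remediation actions; each
    # handler is a placeholder for the corresponding device-type commands.
    def restart_asic(device): ...          # ASIC errors: restart the ASIC
    def restart_blade(device): ...         # parity errors: restart the specific blade
    def reload_device(device): ...         # NMI errors: reload the network device
    def replace_or_remove(device): ...     # power/voltage/temperature: leaf replacement or member removal
    def raise_critical_alert(device): ...  # default: alert the administrator

    REMEDIATION = {
        "asic_error": restart_asic,
        "parity_error": restart_blade,
        "nmi_error": reload_device,
        "power_or_thermal_critical": replace_or_remove,
    }

    def remediate(error_type, device):
        handler = REMEDIATION.get(error_type, raise_critical_alert)
        handler(device)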

In accordance with an embodiment, in a fourth aspect (238): the system monitors device objects, and reports state failures associated with the device objects and/or performs appropriate actions to address the state failures.

In accordance with an embodiment, examples of the types of device objects that can be monitored include: expiration of SSL certificates for load balancers and firewalls; and the status or state of virtual servers or virtual IPs (VIPs) and pools on load balancers. The system can report such state failures; and in some embodiments attempt to automatically, for example, renew the expiring SSL certificates.

The above are provided by way of example, and for purposes of illustration. In accordance with various embodiments, other types of state failures can be supported.

In accordance with an embodiment, the system can recognize various thresholds as illustrated below, in monitoring the cloud network and providing fault tolerance and resiliency.

Alarm Threshold: In accordance with an embodiment, an alarm threshold can be configured to control the number of times that an event can be generated in the last (N) time interval defined before alerting the administrator. A value for "N" can be defined by cloud administrators per event; for critical events, this value will be 1.

Action Threshold: In accordance with an embodiment, an action threshold can be configured to control the number of times that an event can be generated in the last (N) time interval defined before taking the corrective action. A value for “N” can be defined by cloud administrators per event; for critical events, this value will be 1.

Restart Threshold: In accordance with an embodiment, a restart threshold can be configured to control the number of times that a faulty component can be restarted in the last (N) time interval defined before taking a system level action such as, for example, rebooting or isolating the component.

Reload Threshold: In accordance with an embodiment, a reload threshold can be configured to control the number of times a faulty network device can be reloaded in the last (N) time interval defined before isolating the network device from a virtual chassis or cluster, and initiating automatic network device replacement functionality.

In accordance with an embodiment, in each of the above scenarios, the fault tolerance and resiliency framework can use a sliding window approach for checking a network device status against the particular threshold using the event count. This means that the history of events seen before the defined N interval is erased, and the threshold validation is performed using only the event count within the defined N interval.

For example, in accordance with an embodiment, the system can maintain a database that records, for each error, the timestamps at which the error was seen within the past N time interval.
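
A minimal sketch of such a sliding-window check follows, in Python, assuming the per-error history is kept as a list of timestamps; the class and method names (ErrorHistory, record, threshold_reached) are hypothetical.

    # Sliding-window threshold check (sketch): events older than the defined
    # interval N are erased, and only events within the window are counted.
    import time
    from collections import defaultdict, deque

    class ErrorHistory:
        def __init__(self, window_seconds):
            self.window = window_seconds
            self.events = defaultdict(deque)     # error id -> timestamps seen

        def _trim(self, q, now):
            # Erase history of events seen before the defined N interval.
            while q and now - q[0] > self.window:
                q.popleft()

        def record(self, error_id, now=None):
            now = time.time() if now is None else now
            q = self.events[error_id]
            q.append(now)
            self._trim(q, now)

        def threshold_reached(self, error_id, threshold, now=None):
            now = time.time() if now is None else now
            q = self.events[error_id]
            self._trim(q, now)
            return len(q) >= threshold

    # Example: for a critical event the threshold is 1, so the first
    # occurrence within the window already reaches the threshold.
    history = ErrorHistory(window_seconds=300)
    history.record("asic_error")
    assert history.threshold_reached("asic_error", threshold=1)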

FIG. 5 illustrates the operation of the system to provide fault tolerance and resiliency in a cloud network, in accordance with an embodiment.

As illustrated in FIG. 5, in accordance with an embodiment, when a virtual chassis appears to have gone down, the system can attempt a leaf node replacement. For example, the system can periodically check to see if there is any nonresponsive network device. Since virtual chassis and cluster members generally use a heartbeat mechanism to talk to one another, the system can check and monitor heartbeats. Each device vendor provides an API that allows monitoring of its network devices, and heartbeats can be monitored using these APIs, enabling the system to detect a problem.

For example, in accordance with an embodiment, if one virtual chassis card is not present and another is present, the system can perform operations to replace the non-present virtual chassis card with the present virtual chassis card. Generally described, a status of "not present" means the card has lost its connection with the control module; while a status of "present" means the card is not active, but is in connection with the control module. In such a scenario the system can perform operations to change the card's status from "present" to "provisioned".

Since each network device can be associated with a network device type, vendor, or model information, as indicated in the device inventory database, the system can determine which commands to use for which vendor type.

In accordance with an embodiment, a particular virtual chassis must generally include devices of the same vendor type within that particular virtual chassis. However, the cloud network can utilize multiple types of virtual chassis, each having different vendors/devices. The device inventory database can include different types of virtual chassis, for use in determining command types associated with the different types of virtual chassis.

In accordance with an embodiment, the contents of the database and inventory can be updated periodically, for example daily, which further assists in providing resiliency and fault tolerance.

FIG. 6 further illustrates the operation of the system to provide fault tolerance and resiliency in a cloud network, in accordance with an embodiment.

As illustrated in FIG. 6, in accordance with an embodiment, in the event the system determines a failure or error associated with a network device, it can attempt recovery operations to restore the cloud network to its original capacity or state. If the system subsequently determines that a particular network device cannot recover from the failure or error, it can communicate or otherwise signal an alert to an administrator 240 for further action.

FIG. 7 illustrates a method or process for providing fault tolerance and resiliency in a cloud network, in accordance with an embodiment.

In accordance with an embodiment, generally described, the fault tolerance and resiliency framework, including components such as the device failure handler and the event monitor/poller, can be provided as one or more automated software components running on a server in the cloud, where it can operate like a daemon that monitors each network device.

For example, in accordance with an embodiment, if the fault tolerance and resiliency framework tries to check the status of a network device, and there is no acknowledgment from the device, then it can proceed to perform device-type commands to bring up another network device. Such errors may be due to, for example, a bad line card, or a bad fan, which causes the network device to start malfunctioning, or other types of errors, for example ASIC errors, or software errors.

As illustrated in FIG. 7, in accordance with an embodiment, the system (fault tolerance and resiliency framework) can operate a leaf node replacement and cluster member failure handling process 250 that includes: loading the device inventory database (252); monitoring for broken devices (254); and determining if the device is a virtual chassis, cluster, or standalone device (256).

In accordance with an embodiment, if the fault tolerance and resiliency framework determines the device is a virtual chassis device, then the system can perform device-type commands to proceed to determine any broken leaf node (i.e., the leaf node is in a “not present” state) (260). If such a broken leaf node is found, then the device is locked for configuration push (262).

In accordance with an embodiment, the process can proceed to perform device-type commands to find a replacement leaf node in an unprovisioned state (264). If no replacement device is found then a critical alert (e.g., pager, e-mail, or SNMP trap) can be raised (276). The process can then remove the broken leaf node from the virtual chassis (266); and ready the non-provisioned leaf node, e.g., by zeroing (268).

In accordance with an embodiment, the process can proceed to perform device-type commands to add the non-provisioned leaf node to the virtual chassis (270); and run a command to provision the replacement leaf node (272). The process can then unlock the device for configuration push (274); and raise a notification alert (e.g., pager, e-mail, or SNMP trap) (278).

As illustrated in FIG. 7, in accordance with an embodiment, if instead the fault tolerance and resiliency framework determines the device is a cluster member, then the system (fault tolerance and resiliency framework) can proceed to detect the broken cluster member (280); and lock the device for configuration push (282).

In accordance with an embodiment, the process can perform device-type commands to attempt to reload the broken cluster member (284). If the member becomes healthy (286), then the system can unlock the device for configuration push. Alternatively, if the member does not become healthy, then the system evaluates whether a reload threshold has been reached (288); if not, it re-attempts the operation. Otherwise, the system removes the broken member/node from the cluster (290), and raises a critical alert.
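
For purposes of illustration, the flow described above might be sketched in Python as follows; the driver object is assumed to expose vendor-appropriate device-type commands, the method names are hypothetical placeholders rather than an actual API, and the reload threshold is tracked with a sliding-window helper such as the one sketched earlier.

    # Sketch of the FIG. 7 flow (hypothetical driver methods; not an actual API).
    RELOAD_THRESHOLD = 3   # hypothetical value; defined per event by the administrator

    def handle_broken_leaf(driver, virtual_chassis):
        broken = driver.find_broken_leaf(virtual_chassis)        # leaf in "not present" state
        if broken is None:
            return
        driver.lock_for_config_push(broken)
        replacement = driver.find_unprovisioned_leaf(virtual_chassis)
        if replacement is None:
            driver.raise_critical_alert(virtual_chassis)         # pager, e-mail, or SNMP trap
            return
        driver.remove_from_virtual_chassis(virtual_chassis, broken)
        driver.zeroize(replacement)                              # ready the non-provisioned leaf
        driver.add_to_virtual_chassis(virtual_chassis, replacement)
        driver.provision(replacement)
        driver.unlock_for_config_push(replacement)
        driver.raise_notification_alert(virtual_chassis)

    def handle_broken_cluster_member(driver, cluster, history):
        broken = driver.find_broken_member(cluster)
        if broken is None:
            return
        driver.lock_for_config_push(broken)
        while not driver.is_healthy(broken):
            if history.threshold_reached(broken, RELOAD_THRESHOLD):
                driver.remove_from_cluster(cluster, broken)      # administrator adds a new member
                driver.raise_critical_alert(cluster)
                return
            driver.reload(broken)
            history.record(broken)
        driver.unlock_for_config_push(broken)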

FIG. 8 illustrates the use of the method or process for providing fault tolerance and resiliency in a cloud network, in accordance with an embodiment.

As illustrated in FIG. 8, in accordance with an embodiment, the virtual chassis devices (150) include, in this example, a first leaf node A 292, which is initially operational within the virtual chassis (152); and a second leaf node B 294, which is initially in an unprovisioned state. The system (fault tolerance and resiliency framework) can operate a leaf node replacement and cluster member failure handling process that includes loading the device inventory database, and monitoring for broken virtual chassis devices.

FIG. 9 further illustrates the use of the method or process for providing fault tolerance and resiliency in a cloud network, in accordance with an embodiment.

As illustrated in FIG. 9, in accordance with an embodiment, if the system determines a broken leaf node, for example, that leaf node A is found in a “not present” state, then the device is locked for configuration push.

FIG. 10 further illustrates the use of the method or process for providing fault tolerance and resiliency in a cloud network, in accordance with an embodiment.

As illustrated in FIG. 10, in accordance with an embodiment, the process can proceed to find a replacement leaf node, in this example leaf node B, in an unprovisioned state; remove the broken leaf node A from the virtual chassis; and ready the non-provisioned leaf node B, for example by zeroing.

FIG. 11 further illustrates the use of the method or process for providing fault tolerance and resiliency in a cloud network, in accordance with an embodiment.

As illustrated in FIG. 11, in accordance with an embodiment, the process can proceed to add the non-provisioned leaf node B to the virtual chassis; and run a command to provision the replacement leaf node.

FIG. 12 further illustrates the use of the method or process for providing fault tolerance and resiliency in a cloud network, in accordance with an embodiment.

As illustrated in FIG. 12, in accordance with an embodiment, the system (fault tolerance and resiliency framework) can operate a leaf node replacement and cluster member failure handling process that includes loading the device inventory database, and monitoring for broken devices, including cluster devices. In this example, the cluster devices include a first cluster member A 296, and a second cluster member B 298, both of which are initially operational.

FIG. 13 further illustrates the use of the method or process for providing fault tolerance and resiliency in a cloud network, in accordance with an embodiment.

As illustrated in FIG. 13, in accordance with an embodiment, upon detecting a broken cluster member, in this example cluster member A, the system can lock the device for configuration push.

FIG. 14 further illustrates the use of the method or process for providing fault tolerance and resiliency in a cloud network, in accordance with an embodiment.

As illustrated in FIG. 14, in accordance with an embodiment, the system can attempt to reload the broken cluster member; and, if the member does not become healthy or a reload threshold has been reached, the system removes the broken member/node from the cluster.

FIG. 15 illustrates a method or process performed by an event monitor/poller to determine hardware or software errors, or device-specific objects, in accordance with an embodiment.

In accordance with an embodiment, generally described, the system can monitor for various failures or errors, and then apply thresholds to determine how to proceed. An administrator can configure at what point the system should signal an alarm, for example via an e-mail or an SNMP trap, which tells the administrator about, for example, faulty hardware in a switch.

For example, in accordance with an embodiment, the system can be configured to poll for an error every 5 minutes; if the error occurs in the last two polls, then that error is flagged, whereas if the error is less frequent then the system will not flag it. In the case of a restart threshold, the system can be configured to assess whether it is detecting a reset 3 out of 5 times, and if so it can reset the ASIC to try to fix the problem. A similar approach can be applied to any component that is faulty, for example to reload/reboot the network device. For example, when an action threshold is reached, the action for a virtual chassis may include bringing in another member that is healthier; or, for a cluster, deleting the member/node from the cluster.

In accordance with an embodiment, each of the thresholds can be associated with a ratio of times the system sees the problem before taking the appropriate action. This allows the administrator to have control as to how to address the situation by making various/different attempts until the issue is escalated, and provides an incremental approach to addressing and fixing hardware and software errors.

As illustrated in FIG. 15, in accordance with an embodiment, the event monitoring and auto-remediation process 300 provided by the fault tolerance and resiliency framework can be performed by an event monitor/poller (302) as described above, that operates to determine hardware errors (304), such as for example NMI errors, parity errors, ASIC errors, chassis errors; software errors (306), such as for example malfunctioning daemons, storage errors, session failures, route/interface failures; or changes in states associated with device-specific objects (308) such as for example expiring SSL certificates, VIPs, pool objects, or miscellaneous objects.

In accordance with an embodiment, for a hardware error, the process can include determining if an alarm threshold for the error is reached (310); if so, generating a critical alert (314); and otherwise, continuing to monitor the error (312). If the action threshold for the error is reached (316), then the faulty component is isolated and restarted (318). The system can make a determination as to whether the fault is rectified (320); and if so, generate a notification alert (326). Otherwise, a determination is made as to whether a restart threshold is reached (322); and if so, leaf node replacement and cluster member/node failure handling is initiated (324).
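
As a non-authoritative illustration of the hardware-error branch of this flow, the following Python sketch shows how the alarm, action, and restart thresholds might be applied in sequence; the monitor interface and its method names are hypothetical, and the threshold checks reuse the sliding-window helper sketched earlier.

    # Sketch of the hardware-error branch of FIG. 15 (hypothetical monitor
    # interface; thresholds are checked with the sliding-window helper above).
    def handle_hardware_error(monitor, history, device, error_id):
        history.record(error_id)
        if history.threshold_reached(error_id, monitor.alarm_threshold(error_id)):
            monitor.generate_critical_alert(device, error_id)         # (314)
        if not history.threshold_reached(error_id, monitor.action_threshold(error_id)):
            return                                                    # continue monitoring (312)
        monitor.isolate_and_restart_component(device, error_id)       # (318)
        history.record(("restart", error_id))                         # count restarts separately
        if monitor.fault_rectified(device, error_id):
            monitor.generate_notification_alert(device, error_id)     # (326)
        elif history.threshold_reached(("restart", error_id), monitor.restart_threshold(error_id)):
            # Escalate to leaf node replacement / cluster member failure handling (324).
            monitor.initiate_device_replacement(device)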

In accordance with an embodiment, software errors can generate a notification alert. Examples of changes in states associated with device-specific objects can include, for example, a VIP down or a pool down (320), in which case a critical alert is generated; or a certificate that is expiring (332), in which case the system can attempt to auto-renew the SSL certificate (334).

For example, in the case of an expiring SSL certificate, the system can automate the process of reviewing and acting on certificates that are expiring, and of renewing them, to take the burden off the administrator.
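
As a simplified sketch, and assuming a renewal window of 30 days, the following Python code checks a certificate's expiry time so that the framework can decide whether to attempt renewal; the renew_certificate callable is a hypothetical placeholder, since actual renewal depends on the load balancer or certificate authority in use.

    # Sketch: flag SSL certificates that are close to expiry so the framework
    # can attempt auto-renewal (renew_certificate is a hypothetical placeholder).
    import socket
    import ssl
    from datetime import datetime, timedelta, timezone

    def certificate_expiry(host, port=443):
        """Return the notAfter (expiry) time of the server certificate."""
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        # notAfter is formatted like "May  7 12:00:00 2021 GMT".
        expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
        return expires.replace(tzinfo=timezone.utc)

    def check_and_renew(host, renew_certificate, renew_window_days=30):
        """Attempt renewal if the certificate expires within the renewal window."""
        remaining = certificate_expiry(host) - datetime.now(timezone.utc)
        if remaining < timedelta(days=renew_window_days):
            renew_certificate(host)    # placeholder for the vendor-specific renewal action
            return True
        return False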

FIG. 16 illustrates use of an event monitor/poller to determine hardware or software errors, or device-specific objects, in accordance with an embodiment.

As illustrated in FIG. 16, in accordance with an embodiment, the system (event monitor/poller) operates to determine hardware errors 344, software errors 346, or device-specific objects 348 associated with the network devices in the cloud network; and based on such event monitoring 350, perform remediation tasks as described above, and generate appropriate notification alerts 352 and/or critical alerts 354.

Technical Advantages

In accordance with various embodiments, technical advantages of the systems and methods described herein can include, for example:

A fault tolerance and resiliency framework adapted to monitor and remediate faults spanning potentially thousands of network devices, with a vast number of underlying components.

The fault tolerance and resiliency framework is tightly coupled to achieve fault tolerance, resiliency, and monitoring. A single component failure can be addressed at a component level, and, if not fixed, a fault-tolerant action can be percolated to the network device and upward in terms of network granularity.

The fault tolerance and resiliency framework can operate autonomously, to automatically perform corrective actions for many correctable issues.

The fault tolerance and resiliency framework can be extended, for example with plugins associated with heterogeneous network devices and network device vendors, to support additional types of network device, component, object, or software errors.

The fault tolerance and resiliency framework can use stateful processes to provide resiliency. States can be highly correlated from individual components up to an entire availability domain, and can be used to build predictable fault tolerant networks. The system can also be extended to include support for artificial intelligence techniques.

In accordance with various embodiments, the teachings herein can be implemented using one or more general purpose or specialized computer, computing device, machine, or microprocessor, including one or more processors, memory and/or computer readable storage media programmed according to the teachings of the present disclosure. Appropriate software coding can readily be prepared by skilled programmers based on the teachings of the present disclosure, as will be apparent to those skilled in the software art.

In some embodiments, the teachings herein can include a computer program product which is a non-transitory computer readable storage medium (media) having instructions stored thereon/in which can be used to program a computer to perform any of the processes of the present teachings. Examples of such storage mediums can include, but are not limited to, hard disk drives, hard disks, hard drives, fixed disks, or other electromechanical data storage devices, floppy disks, optical discs, DVD, CD-ROMs, microdrive, and magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, DRAMs, VRAMs, flash memory devices, magnetic or optical cards, nanosystems, or other types of storage media or devices suitable for non-transitory storage of instructions and/or data.

The foregoing description has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the scope of protection to the precise forms disclosed. Many modifications and variations will be apparent to the practitioner skilled in the art.

The embodiments were chosen and described in order to best explain the principles of the present teachings and their practical application, thereby enabling others skilled in the art to understand the various embodiments and with various modifications that are suited to the particular use contemplated. It is intended that the scope be defined by the following claims and their equivalents.

Claims

1. A system for providing fault tolerance and resiliency in a cloud network, comprising:

a computer including a processor and a fault tolerance and resiliency framework operating therein that monitors network devices within a cloud network, to receive status or state information associated with the network devices;
wherein the cloud network provides access by a cloud environment to software applications executing within an availability domain of the cloud environment;
wherein particular network devices can be configured as virtual chassis devices, cluster members, or standalone network devices; and
wherein the system monitors the network devices, and if a failure or error associated with a network device is determined, attempts recovery operations on the network device, to restore the cloud network to an original capacity or state.

2. The system of claim 1, wherein for a virtual chassis device, if the system determines a broken leaf node, then the system performs device-type commands to:

lock the device for configuration push;
determine a replacement leaf node;
remove the broken leaf node from the virtual chassis;
prepare the non-provisioned replacement leaf node; and
add the replacement leaf node to the virtual chassis, and provision the replacement leaf node.

3. The system of claim 1, wherein for a network device that includes cluster members, upon detecting a broken cluster member, the system performs device-type commands to:

lock the device for configuration push;
attempt to reload the broken cluster member; and
if the member does not become healthy or a reload threshold has been reached, remove the broken member from the cluster.

4. The system of claim 1, wherein the system operates to determine hardware errors, software errors, or device-specific objects associated with the network devices in the cloud network; and based on such event monitoring, performs remediation tasks and generates appropriate notification alerts and/or critical alerts.

5. A method for providing fault tolerance and resiliency in a cloud network, comprising:

monitoring network devices within a cloud network, to receive status or state information associated with the network devices;
wherein the cloud network provides access by a cloud environment to software applications executing within an availability domain of the cloud environment;
wherein particular network devices can be configured as virtual chassis devices, cluster members, or standalone network devices; and
in the event of determining a failure or error associated with a network device, attempting recovery operations on the network device, to restore the cloud network to an original capacity or state.

6. The method of claim 5, wherein for a virtual chassis device, if the system determines a broken leaf node, then the system performs device-type commands to:

lock the device for configuration push;
determine a replacement leaf node;
remove the broken leaf node from the virtual chassis;
prepare the non-provisioned replacement leaf node; and
add the replacement leaf node to the virtual chassis, and provision the replacement leaf node.

7. The method of claim 5, wherein for a network device that includes cluster members, upon detecting a broken cluster member, the system performs device-type commands to:

lock the device for configuration push;
attempt to reload the broken cluster member; and
if the member does not become healthy or a reload threshold has been reached, remove the broken member from the cluster.

8. The method of claim 5, wherein the system operates to determine hardware errors, software errors, or device-specific objects associated with the network devices in the cloud network; and based on such event monitoring, performs remediation tasks and generates appropriate notification alerts and/or critical alerts.

9. A non-transitory computer readable storage medium, including instructions stored thereon which when read and executed by at least one of a computer or other electronic network device causes the at least one of a computer or other electronic network device to perform a method comprising:

monitoring network devices within a cloud network, to receive status or state information associated with the network devices;
wherein the cloud network provides access by a cloud environment to software applications executing within an availability domain of the cloud environment;
wherein particular network devices can be configured as virtual chassis devices, cluster members, or standalone network devices; and
in the event of determining a failure or error associated with a network device, attempting recovery operations on the network device, to restore the cloud network to an original capacity or state.

10. The non-transitory computer readable storage medium of claim 9, wherein for a virtual chassis device, if the system determines a broken leaf node, then the system performs device-type commands to:

lock the device for configuration push;
determine a replacement leaf node;
remove the broken leaf node from the virtual chassis;
prepare the non-provisioned replacement leaf node; and
add the replacement leaf node to the virtual chassis, and provision the replacement leaf node.

11. The non-transitory computer readable storage medium of claim 9, wherein for a network device that includes cluster members, upon detecting a broken cluster member, the system performs device-type commands to:

lock the device for configuration push;
attempt to reload the broken cluster member; and
if the member does not become healthy or a reload threshold has been reached, remove the broken member from the cluster.

12. The non-transitory computer readable storage medium of claim 9, wherein the system operates to determine hardware errors, software errors, or device-specific objects associated with the network devices in the cloud network; and based on such event monitoring, performs remediation tasks and generates appropriate notification alerts and/or critical alerts.

Patent History
Publication number: 20210349796
Type: Application
Filed: Jul 22, 2020
Publication Date: Nov 11, 2021
Patent Grant number: 11334453
Inventor: Rishi Mutnuru (San Jose, CA)
Application Number: 16/935,961
Classifications
International Classification: G06F 11/20 (20060101); H04L 12/24 (20060101); H04L 29/08 (20060101); G06F 11/30 (20060101);