Identifying root causes of network service degradation

- Ciena Corporation

Systems and methods for analyzing the root cause of service failures and service degradation in a telecommunications network are provided. A method, according to one implementation, includes a step of receiving any of Performance Monitoring (PM) data, standard path alarms, service PM data, standard service alarms, network topology information, and configuration logs from equipment configured to provide services in a network. The method also includes a step of automatically detecting a root cause of a service failure or signal degradation from the available PM data, standard path alarms, service PM data, standard service alarms, network topology information, and configuration logs.

Description
TECHNICAL FIELD

The present disclosure generally relates to networking systems and methods. More particularly, the present disclosure relates to identifying root causes of service failures and signal degradation in a network.

BACKGROUND

Telecommunications networks are typically managed by a team of network operators. These network operators have the responsibility of minimizing service disruptions when failures occur in the network, such as by quickly and precisely determining the location and the root cause of failures.

Typically, Root Cause Analysis (RCA) is performed manually by a team of domain experts who leverage various types of data, such as equipment Performance Monitoring (PM) data and standard alarms. For example, the standard alarms may be provided when certain parameters (e.g., PM data) cross certain threshold values. In addition to path PM data and path alarms, the team of experts can also utilize other data, such as service PM data, service alarms, network topology, and configuration logs.

Currently, RCA requires expert knowledge of the telecommunications network. Typically, if a failure occurs in a network using equipment from one vendor, that vendor is usually the one called. This means the vendor may need experts who are ready at any time to troubleshoot and recover from the failure. For multi-vendor, multi-layer applications, end-to-end domain expertise is usually not available for all network equipment.

The conventional troubleshooting procedure requires the availability of all of the above-mentioned types of data (i.e., path PM data, standard path alarms, service PM data, standard service alarms, network topology information, and configuration logs, etc.). Also, the troubleshooting procedure is normally performed manually by the network operators. For example, the troubleshooting procedure may require looking at the PM and alarm data from different ports and sources and stitching the paths of failed services. In addition, among the substantial amounts of PM data and alarms reported in a path, the domain experts usually have to manually identify the specific alarm or abnormal PM data that might be recognized as the root cause of the service issues.

Since some failures on the path may not set any alarms and may not be recognized as an issue, even experts may not be able to diagnose network problems quickly and accurately. Therefore, there is a need in the field of network management to quickly and accurately detect the root cause of service failures and/or signal degradation when PM data and alarms are obtained and to detect root causes, even when an incomplete dataset of PM data and alarms is obtained or when end-to-end network expertise is unavailable.

BRIEF SUMMARY

The present disclosure is directed to systems, methods, and non-transitory computer-readable media for performing Root Cause Analysis (RCA) in a communications network. According to the various embodiments described in the present disclosure, RCA procedures may be performed with incomplete data and without the need for expertise from a network operator. A method, according to one implementation, includes the step of receiving any of Performance Monitoring (PM) data, standard path alarms, service PM data, standard service alarms, network topology information, and configuration logs from equipment configured to provide services in a network. Also, the method includes the step of automatically detecting a root cause of a service failure or signal degradation from the available PM data, standard path alarms, service PM data, standard service alarms, network topology information, and configuration logs.

BRIEF DESCRIPTION OF THE DRAWINGS

The present disclosure is illustrated and described herein with reference to the various drawings. Like reference numbers are used to denote like components/steps, as appropriate. Unless otherwise noted, components depicted in the drawings are not necessarily drawn to scale.

FIG. 1 is a block diagram illustrating an example of underlay equipment configured to support multiple overlay services, according to various embodiments of the present disclosure.

FIG. 2 is a block diagram illustrating a service path of a network, according to various embodiments.

FIG. 3 is a block diagram illustrating a computing system configured to analyze root causes of network service degradation, according to various embodiments of the present disclosure.

FIG. 4 is a diagram illustrating different use cases for performing root cause analysis based on different levels of availability of network data, according to various embodiments.

FIG. 5 is a flow diagram illustrating a process related to a first use case shown in FIG. 4, according to various embodiments.

FIG. 6 is a graph illustrating a sample of Performance Monitoring (PM) data obtained in an example network, according to various embodiments.

FIG. 7 is a flow diagram illustrating a process for creating additional derived alarms, according to various embodiments.

FIG. 8 is a table illustrating a sample of additional derived alarms created using the first use case shown in FIG. 4, according to various embodiments.

FIG. 9 is a chart illustrating a Pearson correlation between Rx alarms and path alarms in an example network, according to various embodiments.

FIG. 10 is a flow diagram illustrating a process related to a second use case shown in FIG. 4, according to various embodiments.

FIG. 11 is a flow diagram illustrating a process related to a third use case shown in FIG. 4, according to various embodiments.

FIG. 12 is a table illustrating a sample of a number of instances of training datasets and testing datasets from a root cause analysis of an example network, according to various embodiments.

FIG. 13 is a table illustrating a sample of PM data obtained from an example network for root cause analysis, according to various embodiments.

FIG. 14 is a table illustrating a sample of PM data of an example network related to the third use case shown in FIG. 4, according to various embodiments.

FIG. 15 is a chart illustrating a confusion matrix of PM data of an example network related to the third use case shown in FIG. 4, according to various embodiments.

FIG. 16 is a flow diagram illustrating a general process for performing root cause analysis, according to various embodiments of the present disclosure.

DETAILED DESCRIPTION

The present disclosure relates to systems and methods for monitoring telecommunications networks and performing Root Cause Analysis (RCA) to determine a root cause of service failures and/or signal degradation in the network. As described in the present disclosure, the embodiments for performing RCA can include procedures that can be a) executed automatically, b) used even in situations where there is incomplete data, c) learned from historical data, d) performed without networking domain expertise, and e) applied to a variety of communications network services (e.g., optical networks).

FIG. 1 is a block diagram illustrating an embodiment of a portion of a network 10 having underlay equipment (E1, E2, . . . , E10). The underlay equipment E1, E2, . . . , E10 is configured to support multiple overlay services (S1, S2, S3, S4). In this example, suppose that one or more of the services S1-S4 fails or degrades. As mentioned above, a network operator would want to identify the root cause of these issues so that proper remediation can be performed to restore the network 10. For example, according to some embodiments, a root cause may be associated with a specific alarm raised with respect to a specific piece of equipment E1-E10 at a given time. The alarm may be associated with the piece of equipment itself or with a communication path or link connecting one piece of equipment to an adjacent piece.

Ideally, the availability of all relevant data regarding the network 10 would be useful for determining the root cause. However, at times, not all of this data may be available and therefore alternative procedures may need to be performed to adequately detect the root cause. The embodiments of the present disclosure are configured to determine root cause based on any amount of data that is available. For example, as described in more detail below, a first procedure may be performed when all (or much) of the relevant data is available.

In particular, this “relevant data” may include Performance Monitoring (PM) data associated with each of the pieces of equipment E1-E10 on the path (i.e., path PM data), standard alarms that are often associated with the equipment E1-E10 on the path (i.e., standard path alarms), PM data associated with each of the services S1-S4 (i.e., service PM data), standard alarms that are often associated with the services S1-S4 (i.e., standard service alarms), topology of the network 10, and configuration logs. In this embodiment, the term “topology” may include the physical devices (e.g., equipment E1-E10) and the connectivity of the equipment (e.g., communication or transmission paths between the respective pairs of equipment) configured to provide the services.

According to some embodiments, “services” may include, for example, optical Dense Wavelength Division Multiplexing (DWDM) operations, Internet Protocol (IP) and/or Multi-Protocol Label Switching (MPLS) operations, virtual Local Area Network (vLAN) operations, Layer 3 (L3) Virtual Private Network (VPN) operations, Software-Defined Wide Area Network (SD-WAN) tunnel operations, etc. As shown in FIG. 1, services S1 and S3 may utilize equipment E1 as a transmitter (Tx) device and equipment E5 as a receiver (Rx) device. Services S2 and S4 may utilize E1 as a Tx device and E10 as a Rx device. Thus, the services S1-S4 may include a specific Tx device, Rx device, and one or more additional devices forming a path in the network 10.

The standard alarms (e.g., standard path alarms and standard service alarms) may be threshold-crossing alarms or other similar alarms that are normally used for indicating issues in the network 10. In addition to these standard alarms, the embodiments of the present disclosure introduce a new type of alarm that may be calculated from the PM data. These new alarms may be different from the standard alarms and can be used along with the standard alarms. In some embodiments, the new alarms may be referred to as “derived alarms” since they may be derived from the PM data using any suitable rules, algorithms, techniques, procedures, etc. For example, these derived alarms may be associated with conditions of the network 10 that may impact or may likely have an impact on any of the services S1-S4 of the network 10. Therefore, the present disclosure is able to calculate these derived alarms to capture issues that may otherwise be invisible to network operators or other experts.

According to some embodiments, the derived alarms may include, for example, a) specific PM data patterns (e.g., power drop), b) abnormal PM data patterns detected by anomaly detection, c) specific network configuration changes, etc. The derived alarms may be associated with conditions (or issues) with the Tx devices, Rx devices, ports, paths, connections, links, topology, etc.

FIG. 2 is a block diagram showing an embodiment of a portion of a network 20. In this embodiment, the network 20 includes a service path for enabling a transmitter (Tx) device 22 to provide a network service to a receiving (Rx) device 24. The service path of the network 20 also includes a Multiplexer/Demultiplexer (MD) device 26, a Wavelength Selective Switch (WSS) 28, a first amplifier 30, a second amplifier 32, a third amplifier 34, another WSS 36, and another MD device 38. The service path also includes a number of links 40 or transmission paths connecting adjacent devices 22, 24, 26, 28, 30, 32, 34, 36, 38 together. The links 40 may be configured to connect one or more ports of one device to one or more ports of the corresponding adjacent device. For communication, signals are transmitted from one device to another via the connecting link 40, which is regarded as one hop.

The following description includes various root cause procedures for handling various levels of availability of different types of data. The RCA procedures described herein may be applicable to the network 10 of FIG. 1, the network 20 of FIG. 2, or any other suitable type of network configured to provide various network services.

I. Automated Root Cause Analysis (RCA) with Complete Data

In the ideal situation, all the important Tx alarms, path alarms, Rx alarms, topology, etc. would be known and would be available to or possibly calculated by domain experts. In this case, it is possible to determine the root cause of degraded service with a “path traversal” procedure (and/or a “triangulation” procedure as described below). The path traversal procedure may also be referred to as a “circuit traversal” procedure. With reliable labels for identifying path degradation (e.g., “bad path hop”) and/or service degradation (e.g., “bad service quality”), the embodiments of the present disclosure may be configured to use Supervised Machine Learning (SML) to train multi-variate classifier algorithms. These SML classifiers may outperform domain expert heuristics (e.g., threshold crossings) in complex network scenarios.

II. Automated RCA with Incomplete Domain Expertise

Typically, there may only be a few teams of experts having sufficient domain expertise to perform end-to-end RCA, especially when considering multi-layer and multi-vendor networks. However, it is more common that each network operator has expertise about only a part of the network. In this situation (with incomplete domain expertise), the present disclosure may use statistical methods (e.g., Machine Learning (ML) methods, etc.) to extend the limited expert knowledge to correlated data about which there is little or no expertise. In particular, the present embodiments can encode domain expertise with data “labels” in an SML framework, using either the current domain expertise or third-party data (e.g., Network Operating Center (NOC) tickets, etc.).

A. Identified Degraded Services without Path PMs and Alarms

It may be possible in a network to know how to identify degraded services from Rx alarms (e.g., “bad service quality” labels), but without domain expertise about path alarms. In this case, the embodiments of the present disclosure may be configured to perform one or more different procedures. For example, in this situation, the embodiments may include a) training SML models to determine path alarm patterns that are service-affecting or service-impacting, b) using a feature-ranking process provided by the trained SML model to determine which Tx alarms and path alarms are important (and possibly suppress other path alarms), c) using anomaly detection to determine Tx alarm patterns and path alarm patterns that are service-affecting, d) using Pearson correlation (or other similar statistical process) to determine which Tx alarms and path alarms are correlated with relevant Rx alarms (and possibly suppress the others), and/or e) using Pearson correlation and/or SML models to test if new derived alarms are service-affecting.
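For illustration only, the following is a minimal sketch of options a) and b) above, assuming each service is summarized by a fixed-size vector of candidate Tx/path alarm indicators and a “bad service quality” label. The feature names, the placeholder data, and the use of a random-forest classifier (one possible SML classifier among many) are assumptions rather than requirements of the present disclosure.

    # Illustrative sketch only: ranking candidate Tx/path alarms by how well they
    # predict "bad service quality" labels, using a generic SML classifier.
    # Feature names, data shapes, and the choice of RandomForest are assumptions.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # X: one row per service-day; columns are candidate alarm features aggregated
    # along the path (e.g., presence/count of each alarm type on the path).
    feature_names = ["optical_line_fail", "loss_of_signal", "automatic_shutoff",
                     "derived_power_drop", "derived_channel_los"]
    X = np.random.randint(0, 2, size=(500, len(feature_names)))          # placeholder data
    y = (X[:, 0] | X[:, 3] | (np.random.rand(500) < 0.05)).astype(int)   # placeholder "bad service" labels

    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    # Feature ranking: higher importance -> alarm is more likely service-affecting;
    # low-importance alarms are candidates for suppression.
    ranking = sorted(zip(feature_names, clf.feature_importances_),
                     key=lambda kv: kv[1], reverse=True)
    for name, score in ranking:
        print(f"{name}: {score:.3f}")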

One difficulty with conventional SML models for these tasks is that the number of hops along a path may change from service to service and may change over time (e.g., after a service re-route). Hence, many conventional algorithms cannot be used because they may require a fixed-size input. The embodiments of the present disclosure, however, are configured to overcome this difficulty and provide solutions to this problem. For example, the present embodiments may include procedures to a) aggregate PM data and alarms along the path to a fixed size (e.g., use average values, minimum values, maximum values, etc., for each PM parameter) before feeding the SML classifier, b) use a long fixed-size input vector corresponding to the maximum number of hops, leave null for hops that are not present, and use an algorithm that can handle null inputs (e.g., XGBoost), and/or c) use the Recurrent Neural Network (RNN) family of algorithms, input each path hop sequentially, and make an inference after seeing all hops (for any number of hops).
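A minimal sketch of options a) and b) above follows, assuming each hop reports a small set of PM parameters; the values, array shapes, and helper names are placeholders, and XGBoost is mentioned only as one example of a null-tolerant classifier.

    # Illustrative sketch of two ways to handle paths with a varying number of hops
    # (per options a) and b) above); all values and names are placeholders.
    import numpy as np

    def aggregate_path(hop_pm_rows):
        """Option a): collapse an (n_hops x n_pm) array to a fixed-size vector of
        per-PM statistics (min, max, mean), independent of the number of hops."""
        hop_pm_rows = np.asarray(hop_pm_rows, dtype=float)
        return np.concatenate([hop_pm_rows.min(axis=0),
                               hop_pm_rows.max(axis=0),
                               hop_pm_rows.mean(axis=0)])

    def pad_path(hop_pm_rows, max_hops, n_pm):
        """Option b): flatten into a long fixed-size vector sized for max_hops,
        leaving NaN for hops that are not present (XGBoost treats NaN as missing)."""
        padded = np.full((max_hops, n_pm), np.nan)
        padded[:len(hop_pm_rows)] = hop_pm_rows
        return padded.reshape(-1)

    # Example usage with a 3-hop path and 2 PM parameters per hop (e.g., min/avg power):
    path = [[-12.0, -11.5], [-14.2, -13.8], [-30.1, -29.0]]
    x_fixed = aggregate_path(path)                    # length 6, regardless of hop count
    x_padded = pad_path(path, max_hops=10, n_pm=2)    # length 20, NaN for missing hops
    print(x_fixed.shape, x_padded.shape)

    # The padded vectors can be fed to a null-tolerant classifier such as XGBoost:
    #   from xgboost import XGBClassifier
    #   model = XGBClassifier().fit(X_padded_matrix, service_quality_labels)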

B. Identified Equipment/Path Alarms without Service-Impact Knowledge

It may be possible in the network to know how to identify important path alarms (e.g., device alarms, path alarms, “bad path hop” labels, etc.), but without knowing the expected impact on overlay services. In this case, the embodiments of the present disclosure may be configured to a) train an SML model to determine Rx alarm patterns that are indicative of underlay path issues, b) use a feature-ranking procedure provided by the SML model to determine which Rx alarms are important (and possibly suppress the other Rx alarms), c) use anomaly detection to determine Rx alarm patterns that are indicative of underlay path issues, d) use Pearson correlation to determine which Rx alarms are correlated with important path alarms, and/or e) use Pearson correlation and/or SML models to test if new derived alarms are indicative of underlay path issues.

Similar to the situation above with “identified degraded services without equipment/path alarms,” one difficulty with SML models for these tasks is that the number of services may change from hop to hop and may change over time (e.g., after new services are provisioned, deleted, re-routed, etc.). The present disclosure therefore provides similar solutions, including a) performing PM data and alarm aggregation across services before feeding the fixed-size classifier, b) using a long fixed-size input vector corresponding to a maximum number of services, leaving nulls for services not present, and using an algorithm that can handle nulls (e.g., XGBoost), and/or c) using the RNN family of algorithms, inputting each service (Rx alarms) sequentially, and making an inference after seeing all services (for any number of services).
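For illustration only, a minimal sketch of option c) above is shown below, assuming PyTorch as the framework and arbitrary feature and hidden sizes; the class name, dimensions, and placeholder inputs are assumptions, and any RNN-family model could be substituted.

    # Illustrative sketch of option c): a GRU-based classifier that consumes a
    # variable-length sequence of per-service Rx alarm/PM vectors and makes an
    # inference only after the last item. Framework (PyTorch) and sizes are assumptions.
    import torch
    import torch.nn as nn

    class RxSequenceClassifier(nn.Module):
        def __init__(self, n_features, hidden=32, n_classes=2):
            super().__init__()
            self.rnn = nn.GRU(input_size=n_features, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, seq):
            # seq: (batch, n_items, n_features); n_items may differ between batches.
            _, h_last = self.rnn(seq)           # h_last: (1, batch, hidden)
            return self.head(h_last[-1])        # logits per class, after seeing all items

    # Example: one hop carrying 7 services, each described by 5 Rx features.
    model = RxSequenceClassifier(n_features=5)
    rx_vectors = torch.randn(1, 7, 5)           # placeholder Rx PM/alarm vectors
    logits = model(rx_vectors)                  # inference after all services are seen
    print(logits.shape)                         # torch.Size([1, 2])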

C. Additional Processes

As a result of the above scenarios, the present embodiments can obtain a list of Tx alarms and path alarms or alternatively obtain a list of Rx alarms about which there may be little or no domain expertise. From these results, the systems and methods of the present disclosure may effectively create new derived alarms that are known to be effective for identifying 1) overlay service issues or 2) underlay infrastructure issues. These additional derived alarms can then be used like standard alarms in an RCA process, which may include utilizing standard alarms and derived alarms to locate the root cause of service failure/degradation (e.g., as described below with respect to use case #1) and may include RCA with incomplete data.

Furthermore, collecting and accessing complete data from the entire network may be possible, but it is also expensive. Having access to only a subset of the data is usually a more common scenario. With incomplete data, the present embodiments would not use the “path traversal” (or circuit traversal) method but may instead use 1) a triangulation procedure from services, which may include obtaining Rx alarms and network topology information, but not equipment/path alarms (e.g., as described below with respect to use case #2), or 2) another procedure where only Rx alarms are obtained, but not topology (e.g., as described below with respect to use case #3). With expert rules, these methods can be used in a straightforward manner. With ML, they can also be used for inference, but a complete data set may need to be available for model training and testing.

According to various embodiments, the present disclosure provides a suite of solutions for performing RCA when there is a service failure on a network (e.g., network 10, 20, etc.). The RCA solutions may include automatically providing diagnostics in spite of incomplete data and without domain expertise. The present disclosure may be configured to I) automatically create derived alarms with incomplete domain expertise, II) automatically create derived alarms for optical networks based on domain expertise, III) automatically select service-affecting alarms amongst all standard alarms and derived alarms that could be the root cause of a service failure, IV) utilize the selected service-affecting alarms to locate the root-cause of service degradation, V) locate the root-cause with incomplete data, and VI) determine generalization to multi-vendor and multi-layer services, each of which is described in more detail below.

I. Automatically Create Derived Alarms with Incomplete Domain Expertise

A. One possible scenario includes a case where only service degradation information (e.g., “bad service quality” labels) is available, but no domain expertise about an underlay path (e.g., links 40). The process for this scenario may be similar to the “Identified degraded services without path PMs and alarms” scenario described above and may include:

    • 1. using Pearson correlation to determine which path alarms are useful for a Service Assurance (SA) task;
    • 2. training an SML model to create new derived alarms from path information for the SA task;
    • 3. identifying abnormal path PM behavior with anomaly detection for the SA task (see the sketch following this list); and
    • 4. using SML feature ranking to determine which path alarms are useful for the SA task.
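The following is a minimal sketch of item 3 above, assuming daily per-hop PM records and an IsolationForest anomaly detector (one possibility among many; threshold-based rules such as Eq. 1 below are another). The PM values and the contamination setting are placeholders.

    # Illustrative sketch of item 3 above: flagging abnormal daily path PM behavior
    # with an unsupervised anomaly detector, so a derived alarm can be raised even
    # when no standard alarm exists. Detector choice and feature layout are assumptions.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Placeholder daily PM history for one hop: [daily_min_power, daily_avg_power] in dBm.
    history = np.array([[-12.1, -11.8], [-12.0, -11.7], [-12.3, -11.9],
                        [-12.2, -11.8], [-24.5, -20.1], [-12.1, -11.8]])

    detector = IsolationForest(contamination=0.2, random_state=0).fit(history)
    flags = detector.predict(history)            # -1 = anomalous day, +1 = normal day

    for day, flag in enumerate(flags):
        if flag == -1:
            print(f"derived alarm: abnormal PM pattern on hop, day index {day}")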

B. Another possible scenario includes a case where only path alarms (e.g., “bad path hop” labels) are available, but no domain expertise about overlay services (e.g., S1-S4). The process for this scenario may be similar to the “Identified equipment/path alarms without service-impact knowledge” scenario described above and may include:

    • 1. using Pearson correlation to determine which Rx alarms are useful for a Network Assurance (NA) task;
    • 2. training an SML model to create new derived alarms from services information for the NA task;
    • 3. identifying abnormal Rx PM behavior with anomaly detection for the NA task; and
    • 4. using the SML feature-ranking process to determine which service alarms are useful for the NA task.

C. Another possible scenario includes a case having either path alarms (e.g., “bad hop” labels) with a varying number of overlay services or service degradation labels (e.g., “bad service” labels) with a varying number of underlay hops. The process for this scenario may use various techniques, procedures, algorithms, etc. to handle varying-size inputs and may include:

    • 1. aggregating Tx PM data, Tx alarms, path PM data, path alarms, Rx PM data, and/or Rx alarms to a fixed-size vector before feeding the SML classifier;
    • 2. using a long fixed-size input (corresponding to the maximum possible length), leaving null for missing items, and using an algorithm that can handle nulls (e.g., XGBoost); and
    • 3. using the Recurrent Neural Network (RNN) family of techniques/algorithms, inputting each item sequentially, and making an inference after considering all of the items.

II. Automatically Create Derived Alarms for Optical Networks Based on Domain Expertise

D. Another possible scenario includes a case where new specific derived alarms, indicative of issues or changes of the network that are not captured by existing alarms, are derived. The network issues and changes may include the following (a sketch of deriving alarms from configuration logs is provided after this list):

    • 1. abnormal behavior of PM data (e.g., minor changes, as described below); and
    • 2. configuration changes from log files or NOC tickets, such as:
      • a. channel add, delete, and/or re-route changes,
      • b. manually set channels/equipment in-service or out-of-service, and
      • c. system optimization
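A minimal sketch of item 2 above follows, assuming configuration log entries (or NOC tickets) are available as simple records with a timestamp, device, and message; the record format, keyword list, and function name are illustrative assumptions.

    # Illustrative sketch of item 2 above: turning configuration-log entries into
    # derived alarms. The log record format and keyword list are assumptions.
    CONFIG_EVENT_KEYWORDS = {
        "channel add": "derived: channel added",
        "channel delete": "derived: channel deleted",
        "re-route": "derived: channel re-routed",
        "out-of-service": "derived: equipment set out-of-service",
        "in-service": "derived: equipment set in-service",
        "optimization": "derived: system optimization run",
    }

    def derive_config_alarms(log_entries):
        """log_entries: iterable of dicts like {"time": ..., "device": ..., "message": ...}."""
        alarms = []
        for entry in log_entries:
            text = entry["message"].lower()
            for keyword, alarm_type in CONFIG_EVENT_KEYWORDS.items():
                if keyword in text:
                    alarms.append({"time": entry["time"], "device": entry["device"],
                                   "alarm": alarm_type})
        return alarms

    # Example usage with placeholder log entries:
    logs = [{"time": "2020-05-16T03:10", "device": "E5", "message": "Channel re-route executed"},
            {"time": "2020-05-17T09:42", "device": "E6", "message": "Port set Out-of-Service by operator"}]
    print(derive_config_alarms(logs))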

III. Automatically Select Service-Affecting Alarms Amongst All Standard Alarms and Derived Alarms that Could Be the Root Cause of a Service Failure

E. Another possible scenario includes a case where, without sufficient domain expertise, alarms that are service-affecting are selected amongst all standard alarms and derived alarms by a) using a feature-ranking procedure provided by the SML model and/or b) using Pearson correlation to determine which Rx alarms are correlated with important path alarms.

IV. Utilization of Selected Service Affecting Alarms to Locate the Root-Cause of Service Degradation

F. Another possible scenario includes a case where a single root cause may be automatically identified from a list of standard alarms and/or derived alarms. This process may include:

    • 1. a “path traversal” process for one or more degraded services or one or more alarms to identify the first hop having an alarm as the root cause;
    • 2. a “triangulation” process for a group of several service failures and/or degradations occurring at the same time and in a similar way, to identify the root cause as being on a common hop;
    • 3. a “Rx only” process when Rx patterns indicate the type of root cause along the path (but not where the issue is); and
    • 4. a combination of the “path traversal,” “triangulation,” and “Rx only” processes, which may include:
      • a. triangulation to find a multi-hop section,
      • b. traversal to find an alarm on the first common hop, which is the root-cause, and
      • c. if several alarms are found, the “Rx only” process may resolve the ambiguity.

V. Locating the Root-Cause with Incomplete Data

G. Another possible scenario includes a case where RCA may include the triangulation process when path PMs/alarms are not available. From a list of many services, the embodiment can locate common root-cause sections. This process may include:

    • 1. Triangulation from services, which may utilize Rx alarms and network topology information, but not path alarms, and
    • 2. Rx only process, which may utilize only Rx alarms, but not network topology information.

VI. Generalization to Multi-Vendor and Multi-Layer Services

H. Another possible scenario includes a case where all the above procedures may be applied to a variety of telecommunications network services, such as:

    • 1. Layer-1: DWDM channels,
    • 2. Layer-2: vLAN,
    • 3. Layer-3: IP/MPLS tunnels, L3 VPN, and
    • 4. Over the top: SD-WAN tunnels.

FIG. 3 is a block diagram illustrating an embodiment of a computer system 50 configured to analyze root causes of network service degradation. The computer system 50 may be implemented in a Network Management System (NMS), Network Operations Center (NOC), or other suitable management facility for managing a network. In some embodiments, the computer system 50 may be usable by one or more network operators, network administrators, network technicians, etc. working in association with the NMS, NOC, etc. For example, the computer system 50 may be configured to perform various high-level methods as described herein. The methods can be used in combination with expert rules and/or ML classifiers to prepare derived alarms and/or derived-alarm inputs.

In the illustrated embodiment, the computer device 50 may be a digital computing device that generally includes a processing device 52, a memory device 54, Input/Output (I/O) interfaces 56, a network interface 58, and a database 60. It should be appreciated that FIG. 3 depicts the computer device 50 in a simplified manner, where some embodiments may include additional components and suitably configured processing logic to support known or conventional operating features. The components (i.e., 52, 54, 56, 58, 60) may be communicatively coupled via a local interface 62. The local interface 62 may include, for example, one or more buses or other wired or wireless connections. The local interface 62 may also include controllers, buffers, caches, drivers, repeaters, receivers, among other elements, to enable communication. Further, the local interface 62 may include address, control, and/or data connections to enable appropriate communications among the components 52, 54, 56, 58, 60.

It should be appreciated that the processing device 52, according to some embodiments, may include or utilize one or more generic or specialized processors (e.g., microprocessors, CPUs, Digital Signal Processors (DSPs), Network Processors (NPs), Network Processing Units (NPUs), Graphics Processing Units (GPUs), Field Programmable Gate Arrays (FPGAs), semiconductor-based devices, chips, and the like). The processing device 52 may also include or utilize stored program instructions (e.g., stored in hardware, software, and/or firmware) for control of the computer device 50 by executing the program instructions to implement some or all of the functions of the systems and methods described herein. Alternatively, some or all functions may be implemented by a state machine that may not necessarily include stored program instructions, may be implemented in one or more Application Specific Integrated Circuits (ASICs), and/or may include functions that can be implemented as custom logic or circuitry. Of course, a combination of the aforementioned approaches may be used. For some of the embodiments described herein, a corresponding device in hardware (and optionally with software, firmware, and combinations thereof) can be referred to as “circuitry” or “logic” that is “configured to” or “adapted to” perform a set of operations, steps, methods, processes, algorithms, functions, techniques, etc., on digital and/or analog signals as described herein with respect to various embodiments.

The memory device 54 may include volatile memory elements (e.g., Random Access Memory (RAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Static RAM (SRAM), and the like), nonvolatile memory elements (e.g., Read Only Memory (ROM), Programmable ROM (PROM), Erasable PROM (EPROM), Electrically-Erasable PROM (EEPROM), hard drive, tape, Compact Disc ROM (CD-ROM), and the like), or combinations thereof. Moreover, the memory device 54 may incorporate electronic, magnetic, optical, and/or other types of storage media. The memory device 54 may have a distributed architecture, where various components are situated remotely from one another, but can be accessed by the processing device 52.

The memory device 54 may include a data store, database (e.g., database 60), or the like, for storing data. In one example, the data store may be located internal to the computer device 50 and may include, for example, an internal hard drive connected to the local interface 62 in the computer device 50. Additionally, in another embodiment, the data store may be located external to the computer device 50 and may include, for example, an external hard drive connected to the Input/Output (I/O) interfaces 56 (e.g., SCSI or USB connection). In a further embodiment, the data store may be connected to the computer device 50 through a network and may include, for example, a network attached file server.

Software stored in the memory device 54 may include one or more programs, each of which may include an ordered listing of executable instructions for implementing logical functions. The software in the memory device 54 may also include a suitable Operating System (O/S) and one or more computer programs. The O/S essentially controls the execution of other computer programs, and provides scheduling, input/output control, file and data management, memory management, and communication control and related services. The computer programs may be configured to implement the various processes, algorithms, methods, techniques, etc. described herein.

Moreover, some embodiments may include non-transitory computer-readable media having instructions stored thereon for programming or enabling a computer, server, processor (e.g., processing device 52), circuit, appliance, device, etc. to perform functions as described herein. Examples of such non-transitory computer-readable medium may include a hard disk, an optical storage device, a magnetic storage device, a ROM, a PROM, an EPROM, an EEPROM, Flash memory, and the like. When stored in the non-transitory computer-readable medium, software can include instructions executable (e.g., by the processing device 52 or other suitable circuitry or logic). For example, when executed, the instructions may cause or enable the processing device 52 to perform a set of operations, steps, methods, processes, algorithms, functions, techniques, etc. as described herein according to various embodiments.

The methods, sequences, steps, techniques, and/or algorithms described in connection with the embodiments disclosed herein may be embodied directly in hardware, in software/firmware modules executed by a processor (e.g., the processing device 52), or any suitable combination thereof. Software/firmware modules may reside in the memory device 54, memory controllers, Double Data Rate (DDR) memory, RAM, flash memory, ROM, PROM, EPROM, EEPROM, registers, hard disks, removable disks, CD-ROMs, or any other suitable storage medium.

Those skilled in the pertinent art will appreciate that various embodiments may be described in terms of logical blocks, modules, circuits, algorithms, steps, and sequences of actions, which may be performed or otherwise controlled with a general purpose processor, a DSP, an ASIC, an FPGA, programmable logic devices, discrete gates, transistor logic, discrete hardware components, elements associated with a computing device, controller, state machine, or any suitable combination thereof designed to perform or otherwise control the functions described herein.

The I/O interfaces 56 may be used to receive user input from and/or for providing system output to one or more devices or components. For example, user input may be received via one or more of a keyboard, a keypad, a touchpad, a mouse, and/or other input receiving devices. System outputs may be provided via a display device, monitor, User Interface (UI), Graphical User Interface (GUI), a printer, and/or other user output devices. I/O interfaces 56 may include, for example, one or more of a serial port, a parallel port, a Small Computer System Interface (SCSI), an Internet SCSI (iSCSI), an Advanced Technology Attachment (ATA), a Serial ATA (SATA), a fiber channel, InfiniBand, a Peripheral Component Interconnect (PCI), a PCI eXtended interface (PCI-X), a PCI Express interface (PCIe), an InfraRed (IR) interface, a Radio Frequency (RF) interface, and a Universal Serial Bus (USB) interface.

The network interface 58 may be used to enable the computer device 50 to communicate over a network 64, such as the network 10, 20, the Internet, a Wide Area Network (WAN), a Local Area Network (LAN), and the like. The network interface 58 may include, for example, an Ethernet card or adapter (e.g., 10BaseT, Fast Ethernet, Gigabit Ethernet, 10 GbE) or a Wireless LAN (WLAN) card or adapter (e.g., 802.11a/b/g/n/ac). The network interface 58 may include address, control, and/or data connections to enable appropriate communications on the network 64.

In addition, the computer device 50 includes a root cause analyzer 66, which is configured to determine a root cause of signal degradation and/or service failure/interruption in the network 64. The root cause analyzer 66 may be implemented as software or firmware and stored in the memory device 54 for execution by the processing device 52. Alternatively, the root cause analyzer 66 may be implemented as hardware in the processing device 52. According to other embodiments, the root cause analyzer 66 may include any suitable combination of hardware, software, and/or firmware and may include instructions (e.g., stored on a non-transitory computer-readable medium) that enable or cause the processing device 52 to perform various procedures for detecting root causes of service issues as described in the present disclosure.

According to various embodiments of the present disclosure, a system may include the processing device 52 and the memory device 54, which may be configured to store a computer program (e.g., root cause analyzer 66) having instructions. The instructions, when executed, enable the processing device 52 to receive any of Performance Monitoring (PM) data, standard path alarms, service PM data, standard service alarms, network topology information, and configuration logs from equipment configured to provide services in a network. Also, the instructions further enable the processing device 52 to automatically detect a root cause of a service failure or signal degradation from the available PM data, standard path alarms, service PM data, standard service alarms, network topology information, and configuration logs.

The root cause analyzer 66 may further include instructions to enable the processing device 52 to automatically detect the root cause independently of a network operator associated with the network. For example, the network may be a multi-layer, multi-vendor network. The instructions of the root cause analyzer 66 may further enable the processing device 52 to determine one or more derived alarms from the available path PM data, standard path alarms, service PM data, standard service alarms, network topology information, and configuration logs. The derived alarms may be different from the standard path alarms and standard service alarms. The standard path alarms and standard service alarms may be threshold-crossing alarms. The one or more derived alarms may include one or more of PM data patterns, power drops, loss of signal, and network configuration changes. Determining the one or more derived alarms may include determining network conditions that have an impact on the services.

Furthermore, the instructions of the root cause analyzer 66 may further enable the processing device 52 to perform a Pearson correlation procedure, a Supervised Machine Learning (SML) procedure, a “derived-alarm” generation procedure, and a path traversal procedure when the path PM data, standard path alarms, service PM data, standard service alarms, network topology information, and configuration logs are available. The processing device 52 may further be enabled to perform one or more of a triangulation procedure and an SML procedure when the network topology information is available and alarms related to receiving equipment are available. The instructions can also enable the processing device 52 to perform an SML procedure for multi-variate root cause classification when alarms related to receiving equipment are available for identifying the service failure or signal degradation.

According to additional embodiments, the instructions of the root cause analyzer 66 may also enable the processing device 52 to rank the standard path alarms based on a level of impact the respective standard path alarms have on the services. For example, ranking the standard path alarms may include utilizing a Pearson correlation technique to determine a usefulness of transmission paths for a service assurance procedure. Also, in some embodiments, the system may be configured for use with an optical network having at least a transmitter device, a receiver device, and one or more network devices configured to communicate optical signals along transmission paths.

FIG. 4 is a diagram illustrating different use cases for performing Root Cause Analysis (RCA) based on different levels of availability of network data. In some embodiments, the RCA may be executed with respect to the root cause analyzer 66 shown in FIG. 3. Three use cases, as illustrated, may be based on various availability characteristics of network topology information, Rx PM data, Rx alarms, path PM data, and path alarms. Three processes may correspond to the illustrated use cases, the processes including a “path traversal” technique, a “triangulation” technique, and an “Rx-only” technique.

Use Case #1: “Path Traversal” with Full Knowledge of Network Topology Information, PM Data, and Alarms of Entire Network

For this use case, the “path traversal” procedure is performed. Input features include network topology information, Rx PM data, and alarms from each port along the path. Output labels may include a label for a good circuit or bad circuit (e.g., Rx PM data or alarms), and a label of a good hop (e.g., ports and link) or bad hop on the path (e.g., port alarms or derived alarms). An example for illustrating the “path traversal” method includes reference to the network 20 of FIG. 2. The complete path of the circuit includes the components and links from the Tx device 22 to the Rx device 24 and includes specific topology information of the network 20.

FIG. 5 is a flow diagram illustrating an embodiment of a process 70 related to the first use case shown in FIG. 4. Again, the process 70 relies on input data including network topology information, path PM data, path alarms, Rx PM data, Rx alarms, and Rx failures. As described in the flow chart of FIG. 5, the path traversal process 70 includes a first step (block 72) of associating the PM data and alarm data to each individual hop on the path and Rx, as shown in the graphical data of FIG. 6 described below.

A second step (block 74) of the path traversal process 70 includes generating derived alarms for hops based on abnormal PM patterns (if they are not captured by any alarms or if the alarm data is missing). It may be noted that many minor power drops may not be captured by alarms with hard-coded thresholds. However, these minor power drops could be significant enough to fail the Rx if there is not enough margin allocated. Therefore, it is important to identify and label these power drops for RCA. In this example, abnormal behaviors are detected based on a dynamic threshold between the current day and the most recent day with no failure, where the threshold is set to the minimum Q-value of the most recent good day minus 6, that is:

power_drop_threshold = Qmin(most recent good day) − 6  (Eq. 1)

If the power drop of the current day is greater than this threshold, then there is a high possibility that it will have a hit to the received signal. Derived alarms are generated where the abnormal PM pattern is detected and are marked in FIG. 6 with <hop #>.<failure # of the hop>.

FIG. 6 shows a graph 84 of a sample of Performance Monitoring (PM) data obtained in an example network. The graph 84 shows PM data related to the different links 40. For example, the PM data in this example include Daily Min/Max/Avg Power of the various hops (or links 40) reported by the respective ports. The graph 84 also shows Daily Qmin/Qavg values and Daily Min/Avg Power reported by the Rx device 24. According to expert rules, the circuit is considered problematic if High Correction Count Second (HCCS) is reported on the Rx. As shown in the second-to-last subplot in FIG. 6, HCCS was reported on five different days over the monitoring period in this example. The “path traversal” method may be used in this case for root cause and failure location analysis of these Rx failures. For example, the graph 84 shows three events (i.e., labelled 5.1, 5.2, and 5.3) in the PM data associated with hop #5, four events (i.e., labelled 6.1, 6.2, 6.3, and 6.4) in the PM data associated with hop #6, and five events (i.e., labelled 8.1, 8.2, 8.3, 8.4, and 8.5) in the PM data associated with hop #8.

FIG. 7 is a flow diagram illustrating an embodiment of a process 90 for creating additional derived alarms based on expert rules. The process 90 includes getting the current day power of a hop, as indicated in block 92. Then, it is determined whether there is a channel monitoring (CHMON) facility, as indicated in decision block 94. If so, the process 90 proceeds to decision block 96, which includes the step of determining if the daily min power is less than −30 dBm. If so, the process 90 goes to block 98, which includes the step of creating a derived alarm indicating a channel Loss of Signal (LOS). If it is determined in decision block 96 that the daily min power is not less than −30 dBm, then the process 90 goes to block 100, which includes the step of calculating the power drop between the current day and the previous good day daily min. The process 90 also includes the step of determining if the power drop is greater than or equal to a threshold, as indicated in decision block 102. If so, the process goes to block 104, which includes the step of creating a derived alarm to indicate a channel power drop. If it is determined that the power drop is less than the threshold, then the process 90 proceeds to block 106.

If it is determined in decision block 94 that there is no CHMON facility, then the process 90 proceeds instead to decision block 108. The process 90 includes determining whether the daily min power is greater than −35 dBm, as indicated in decision block 108. If it is not greater (i.e., the total power is below −35 dBm), then the process 90 goes to block 110, which includes the step of creating a derived alarm to indicate a total power LOS. If it is greater, then the process 90 goes to block 112, which includes the step of calculating the power drop between the current day and the previous good day daily min. Then, the process 90 includes determining if the power drop is greater than or equal to another threshold. If so, the process 90 goes to block 116, which includes the step of creating a derived alarm indicating a total power drop. Otherwise, if the power drop is less than this threshold, the process 90 goes to block 106, which includes passing (on the creation of any alarm for this hop). The process 90 may be performed in real-time to detect abnormal PM behavior on each hop to help with real-time diagnoses whenever a failure happens in the network.

The process 90 summarizes the expert-derived methods that may be used in creating the derived alarms for the network. In this example, there are four derived alarms that may be created when abnormal behavior of channel power and total power is detected from the PM data. If the power is below a hard-coded threshold of invalid low power, a Loss of Signal (LOS) alarm can be raised. If the power dropped by more than a dynamic threshold (e.g., calculated by Eq. 1), a power drop alarm can be raised. Note that derived alarms can also be created based on data-driven methods such as anomaly detection.
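For illustration only, a minimal sketch of the expert rules of process 90 and Eq. 1 follows. The −30 dBm, −35 dBm, and 6 dB values are taken from the description above; the assumption that the Eq. 1 dynamic threshold is used for both the channel and total power drop decisions, and the function and parameter names, are illustrative.

    # Illustrative sketch of the expert rules of process 90 (FIG. 7) and Eq. 1.
    # Thresholds (-30 dBm, -35 dBm, 6 dB) come from the text; function and field
    # names are assumptions.
    def derived_alarm_for_hop(has_chmon, daily_min_power, good_day_qmin, good_day_min_power):
        """Return a derived alarm string for one hop on the current day, or None.

        good_day_qmin: Rx Qmin (dB) on the most recent day with no failure (per Eq. 1).
        good_day_min_power: this hop's daily min power (dBm) on that same good day.
        """
        if has_chmon:
            if daily_min_power < -30.0:                    # invalid low channel power
                return "derived: channel loss of signal (LOS)"
            drop = good_day_min_power - daily_min_power    # drop vs. most recent good day
            if drop >= good_day_qmin - 6.0:                # dynamic threshold, Eq. 1
                return "derived: channel power drop"
        else:
            if daily_min_power < -35.0:                    # invalid low total power
                return "derived: total power loss of signal (LOS)"
            drop = good_day_min_power - daily_min_power
            if drop >= good_day_qmin - 6.0:                # dynamic threshold, Eq. 1
                return "derived: total power drop"
        return None                                        # pass: no derived alarm for this hop

    # Example: a hop with CHMON whose power dropped sharply relative to the last good day.
    print(derived_alarm_for_hop(has_chmon=True, daily_min_power=-24.0,
                                good_day_qmin=9.5, good_day_min_power=-12.0))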

FIG. 8 is a table 120 illustrating a sample of additional derived alarms created using the first use case shown in FIG. 4. The table 120 shows the detailed derived alarms raised in an example network, where the PM data shown in FIG. 6 is considered. Note that the abnormal pattern detection in this example is based on expert rules. However, in some embodiments, Machine Learning (ML) based anomaly detection can also be used in these procedures.

FIG. 9 is a chart 124 illustrating an example of a Pearson correlation between Rx alarms and path alarms in an example network. Returning again to the process 70 of FIG. 5, a third step (block 76) of the path traversal method includes selecting the most relevant alarms on the path to Rx failures based on Pearson correlation. The Pearson correlation in this example may include the correlation of three of the most critical failure indicators in the Rx device 24 (e.g., HCCS-OTU, CV-OTU, UAS-OTU), obtained from the PM data, versus the possible alarms raised on the path. With help from the Pearson correlation, the three most relevant alarms (e.g., Alarm optical line fail, Alarm loss of signal, and Alarm automatic shutoff) may be selected. The three alarms in this step (in addition to the four derived alarms created from the previous step) may be used to create “bad hop” labels that could cause failure in the Rx device 24.
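A minimal sketch of the selection step of block 76 follows, assuming daily indicator columns for one Rx failure indicator and several candidate path alarms; the column names mirror the text, while the data values and the 0.5 cutoff are placeholders.

    # Illustrative sketch of block 76: Pearson correlation between an Rx failure
    # indicator and candidate path alarms, used to select the most relevant alarms.
    # Column names mirror the text; the data values and cutoff are placeholders.
    import pandas as pd

    daily = pd.DataFrame({
        # Rx failure indicator (per day)
        "HCCS-OTU": [0, 0, 1, 0, 1, 1],
        # candidate standard/derived alarms observed on the path (per day)
        "optical_line_fail": [0, 0, 1, 0, 1, 1],
        "loss_of_signal":    [0, 0, 1, 0, 0, 1],
        "automatic_shutoff": [0, 0, 0, 0, 1, 1],
        "unrelated_alarm":   [0, 1, 0, 1, 0, 1],
    })

    alarm_cols = ["optical_line_fail", "loss_of_signal", "automatic_shutoff", "unrelated_alarm"]
    corr = daily[alarm_cols].corrwith(daily["HCCS-OTU"])   # Pearson correlation by default
    selected = corr[corr.abs() >= 0.5].sort_values(ascending=False)
    print(selected)    # alarms retained as potentially service-affecting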

Up to this point in the process 70 of FIG. 5, labels for both good Rx hops and bad Rx hops are prepared. In the next step (block 78), for each Rx failure, the algorithm traverses the circuit hop by hop from the first hop to look for bad hop labels (i.e., where a selected standard alarm or derived alarm is present). The traversing stops at the first hop with an alarm since any subsequent alarms are most likely considered to be consequences of the first alarm in the path. For example, derived alarms #6.1 and #8.1 may be viewed simply as consequences of the derived alarm #5.1. Therefore, the root cause of the failure on the Rx device 24 on the corresponding day (i.e., 2020-05-16) is derived alarm #5.1 at hop 5. Similarly, the root causes and locations of the remaining four failures are derived alarm #6.2 for the failure on 2020-05-17, derived alarm #8.3 for the failure on 2020-05-25, derived alarm #5.2 for the failure on 2020-07-01, and derived alarm #5.3 for the failure on 2020-07-02.
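For illustration only, the following is a minimal sketch of the traversal step of block 78, assuming the hops of one failed circuit are already ordered from Tx to Rx and annotated with any selected standard or derived alarms; the data layout and alarm labels are placeholders.

    # Illustrative sketch of block 78: traverse the circuit hop by hop (from Tx to Rx)
    # and stop at the first hop with a selected standard or derived alarm; subsequent
    # alarms on the path are treated as consequences. Data layout is an assumption.
    def locate_root_cause(hops, selected_alarm_types):
        """hops: ordered list of {"hop": int, "alarms": [str, ...]} for one Rx failure."""
        for hop in hops:
            bad = [a for a in hop["alarms"] if a in selected_alarm_types]
            if bad:
                return {"root_cause_hop": hop["hop"], "root_cause_alarm": bad[0]}
        return None   # no alarm on the path; fall back to use case #2 (triangulation)

    # Example resembling the failure on 2020-05-16 described above (labels are placeholders):
    selected = {"optical_line_fail", "loss_of_signal", "automatic_shutoff",
                "derived_power_drop", "derived_channel_los"}
    circuit = [{"hop": 1, "alarms": []}, {"hop": 2, "alarms": []},
               {"hop": 5, "alarms": ["derived_power_drop"]},     # derived alarm #5.1
               {"hop": 6, "alarms": ["derived_power_drop"]},     # consequence (#6.1)
               {"hop": 8, "alarms": ["derived_power_drop"]}]     # consequence (#8.1)
    print(locate_root_cause(circuit, selected))   # -> hop 5, derived_power_drop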

The process 70 further includes a step of determining if there is any alarm in the path before the end of the circuit, as indicated in decision block 80. If yes, the process 70 provides the outputs of the root cause and location of the Rx failures. Otherwise, the process 70 may end and proceed with the use case #2.

Use Case #2: “Triangulation” with Knowledge of Network Topology Information, Rx PM Data, and Rx Alarms

Some networks do not have the availability of PM data and standard alarms of every single port in the network. However, the network topology information, the PM data of the Rx device, and Rx alarms are a much smaller dataset and should be much easier to obtain and monitor. In addition, even for networks with a full set of PM data and alarm data of every port that enables the “path traversal” procedure of use case #1, not every single type of issue can be detected by the PM data and standard alarms. For example, conventional networks do not have thorough built-in instrumentation for monitoring polarization-related parameters, WSS filter shape effects, fiber nonlinear performance of the entire network, etc. Therefore, Rx failures caused by these types of issues are not detectable by PM data and standard alarms on the path.

However, according to the embodiments of the present disclosure, the systems and methods described herein are configured to cover this use case #2, where the failures are observed by the Rx device while there may be no data available to indicate the issue in the path. Thus, the present disclosure can execute a “triangulation” method to localize the failure in the path. Input features in this case may include network topology information, PM data, and/or standard alarms from the Rx ports. The output labels may include groups of failed Rx devices.

FIG. 10 is a flow diagram illustrating an embodiment of a process 130 related to the second use case (use case #2) shown in FIG. 4 and related to the triangulation method. After getting the input data of topology, Rx PM data, standard Rx alarms, and timestamp information, the process 130 includes identifying Rx failures and grouping the failures based on timestamps and PM/alarm data, as indicated in block 132. For example, it may be determined that the Rx devices in each group fail at the same time in the same way. Then, for each group of failures (block 134), the groups can be indexed as n, where n = 1 up to N, starting with failure group #1. The process 130 finds a common section (e.g., a section in an optical network that links two Reconfigurable Optical Add/Drop Multiplexers (ROADMs)) of the failed Rx devices as the potential location of the root cause, as indicated in block 136. The process 130 further includes moving to the next failure group (if one exists) until all the groups have been processed, as indicated in block 138. The output includes the possible root cause location of each Rx failure.
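A minimal sketch of the triangulation step (block 136) follows, assuming the ROADM-to-ROADM sections traversed by each service are known from the topology; the section naming, data structures, and placeholder topology are assumptions.

    # Illustrative sketch of process 130 (triangulation): for a group of Rx devices
    # that failed at the same time in the same way, intersect the sections their
    # paths traverse; the common section is the candidate root-cause location.
    # The section/topology representation is an assumption.
    def triangulate(failure_group, service_sections):
        """failure_group: list of service IDs that failed together.
        service_sections: dict mapping service ID -> list of ROADM-to-ROADM sections."""
        common = None
        for service in failure_group:
            sections = set(service_sections[service])
            common = sections if common is None else common & sections
        return common or set()

    # Example with placeholder topology: three services failing at the same timestamp.
    service_sections = {
        "S1": ["ROADM-A:ROADM-B", "ROADM-B:ROADM-C"],
        "S2": ["ROADM-D:ROADM-B", "ROADM-B:ROADM-C"],
        "S3": ["ROADM-B:ROADM-C", "ROADM-C:ROADM-E"],
    }
    print(triangulate(["S1", "S2", "S3"], service_sections))   # {'ROADM-B:ROADM-C'}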

Use Case #3: Supervised ML for Root Cause Classification with RX PM/Alarm Data Only

In this case, the input features only include the PM data and/or standard alarms from the Rx ports. Thus, the path PM data, standard path alarms, and network topology information are unknown or unavailable. The output labels in this case include classes of root cause from the “path traversal” method. For this use case #3, since only Rx PM and Rx alarm data are available, it will be impossible to tell the location of the root cause. However, a root cause classification model using only Rx PM data and alarms would be useful for identifying the type of the failures.

FIG. 11 is a flow diagram illustrating an embodiment of a process 140 related to the third use case (i.e., use case #3) shown in FIG. 4. The process 140 also shows a model that can be used in a case where only Rx PM data is obtained (e.g., from transponders of the various network equipment). For model training, the training data and testing data can be obtained from the path traversal method. The classes and number of instances in the training and testing datasets are shown in table 150 of FIG. 12. Table 152 of FIG. 13 shows the input features of the PM data and standard alarms reported by the receiver Rx.

FIG. 12 shows the table 150 having a sample of a number of instances of training datasets and testing datasets from a root cause analysis of an example network according to one example. FIG. 13 shows the table 152 having a sample of PM data obtained from an example network for root cause analysis according to one example. An XGBoost model is used in this prototype of Rx-only root cause classification. FIG. 14 shows a table 154 having a sample of PM data of an example network related to the third use case according to one example. Table 154 shows the performance of the Rx-only root cause classification based on XGBoost and shows the classification result.
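For illustration only, a minimal sketch of this Rx-only classification setup follows, assuming Rx PM/alarm features and root-cause class labels produced by the path traversal method; the feature names, synthetic data, and hyperparameters are placeholders rather than the trained prototype described above.

    # Illustrative sketch of use case #3: Rx-only root cause classification with
    # XGBoost, trained on labels produced by the path traversal method.
    # Feature names and the synthetic data are placeholders.
    import numpy as np
    from xgboost import XGBClassifier

    rng = np.random.default_rng(0)
    feature_names = ["Qmin", "Qavg", "rx_min_power", "rx_avg_power", "HCCS-OTU", "CV-OTU", "UAS-OTU"]
    X_train = rng.normal(size=(300, len(feature_names)))     # Rx PM/alarm features only
    y_train = rng.integers(0, 3, size=300)                   # classes: e.g., power drop, LOS, other

    model = XGBClassifier(n_estimators=100, max_depth=3, eval_metric="mlogloss")
    model.fit(X_train, y_train)

    # Inference on a new failure observed only at the Rx (no path data or topology needed):
    x_new = rng.normal(size=(1, len(feature_names)))
    print(model.predict(x_new))        # predicted root-cause class (type, not location)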

FIG. 15 is a chart 156 illustrating a confusion matrix of PM data of an example network related to the third use case shown in FIG. 4. The chart 156 may be related to the confusion matrix of XGBoost for Rx only failure classification.

It may be noted that the various systems and methods of the present disclosure may be executed for root cause classification of example optical network cards that do not obtain PM data for monitoring non-power-related behaviors, such as polarization parameters (e.g., Polarization Dependent Loss (PDL), Polarization Mode Dispersion (PMD), State of Polarization (SOP), etc.), chromatic dispersion, nonlinear performance, etc. In such cases, the failure classes that can be identified by PM data of the Rx are limited, while the above-mentioned non-power-related failures all go into the “other” group. However, it is hoped that for new generations of transponders that have richer datasets of PM, the Rx-only PM classification could identify more types of failures.

FIG. 16 is a flow diagram illustrating a general process 160 for performing root cause analysis, according to one embodiment of the present disclosure. In this embodiment, the process 160 includes the step of receiving any of Performance Monitoring (PM) data, standard path alarms, service PM data, standard service alarms, network topology information, and configuration logs from equipment configured to provide services in a network, as indicated in block 162. The process 160 further includes the step of automatically detecting a root cause of a service failure or signal degradation from the available PM data, standard path alarms, service PM data, standard service alarms, network topology information, and configuration logs.

It should be noted that the process 160 can be further defined according to the following description. For example, the process 160 may include automatically detecting the root cause independently of a network operator associated with the network. For example, the network may be a multi-layer, multi-vendor network. The process 160 may also include the step of determining one or more derived alarms from the available path PM data, standard path alarms, service PM data, standard service alarms, network topology information, and configuration logs, the derived alarms being different from the standard path alarms and standard service alarms. The standard path alarms and standard service alarms, for example, may be threshold-crossing alarms. The one or more derived alarms, for example, may include one or more of PM data patterns, power drops, loss of signal, and network configuration changes. In some embodiments, the step of determining the one or more derived alarms may include determining network conditions that have an impact on the services.

Furthermore, the process 160 can also include the step of performing a Pearson correlation procedure, a derived-alarm generation procedure, a Supervised Machine Learning (SML) procedure, and a path traversal procedure when the path PM data, standard path alarms, service PM data, standard service alarms, network topology information, and configuration logs are available. In some embodiments, the process 160 may additionally or alternatively include the step of performing one or more of a triangulation procedure and an SML procedure when the network topology information and alarms related to receiving equipment are available. In some embodiments, the process 160 may additionally or alternatively include the step of performing an SML procedure for multi-variate root cause classification when alarms related to receiving equipment are available for identifying the service failure or signal degradation.

Also, the process 160 may include additional steps and features. For example, the process 160 may include the step of ranking the standard path alarms based on a level of impact the respective standard path alarms have on the services. The step of ranking the standard path alarms may include the step of utilizing a Pearson correlation technique to determine the usefulness of transmission paths for a service assurance procedure. In some embodiments, the network for which Root Cause Analysis (RCA) is performed may be an optical network having at least a transmitter device, a receiver device, and one or more network devices configured to communicate optical signals along one or more transmission paths.
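
As a non-limiting illustration, and assuming hypothetical alarm activity series aligned with a service PM metric, the following sketch ranks path alarms by the absolute Pearson correlation coefficient as a proxy for their level of impact on the service:

```python
# Illustrative sketch only: the alarm activity series and service PM metric
# are hypothetical; ranking uses the absolute Pearson correlation coefficient.
import numpy as np

def rank_alarms_by_correlation(alarm_series, service_pm):
    """Return alarm names sorted by |Pearson r| against the service PM series."""
    y = np.asarray(service_pm, dtype=float)
    scores = {}
    for name, series in alarm_series.items():
        x = np.asarray(series, dtype=float)
        r = np.corrcoef(x, y)[0, 1]
        scores[name] = 0.0 if np.isnan(r) else abs(r)
    return sorted(scores, key=scores.get, reverse=True)

# Example: the alarm whose activity tracks the service errored seconds ranks first.
print(rank_alarms_by_correlation(
    {"OPTICAL_LOS": [0, 0, 1, 1, 0], "FAN_FAIL": [1, 0, 0, 1, 0]},
    [0, 2, 40, 38, 1]))
```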

One of the benefits of the various systems and methods described in the present disclosure is that they may provide automatic failure diagnoses without the need for network expertise. Network operators using the embodiments described herein can benefit from fast and precise diagnoses, which can significantly accelerate failure analysis and recovery. Moreover, network operators associated with multi-vendor, multi-layer networks may be more motivated to utilize the systems and methods of the present disclosure, since the present embodiments are configured to work with incomplete data and do not require domain expertise.

Although the present disclosure has been illustrated and described herein with reference to various embodiments and examples, it will be readily apparent to those of ordinary skill in the art that other embodiments and examples may perform similar functions, achieve like results, and/or provide other advantages. Modifications, additions, or omissions may be made to the systems, apparatuses, and methods described herein without departing from the spirit and scope of the present disclosure. All equivalent or alternative embodiments that fall within the spirit and scope of the present disclosure are contemplated thereby and are intended to be covered by the following claims.

Claims

1. A system comprising:

a processing device, and
a memory device configured to store a computer program having instructions that, when executed, enable the processing device to receive data that includes a subset of any of Performance Monitoring (PM) data, standard path alarms, service PM data, standard service alarms, network topology information, and configuration logs from equipment configured to provide services in a network, wherein the data is incomplete and includes at least all received service alarms, and responsive to training a model, automatically detect, using the model, a root cause of a service failure or signal degradation from the received PM data, standard path alarms, service PM data, standard service alarms, network topology information, and configuration logs, wherein the model is trained with incomplete domain expertise and the training includes derived alarms which are a combination of patterns in the data and configuration changes in the configuration logs.

2. The system of claim 1, wherein the instructions enable the processing device to automatically detect the root cause independently of a network operator associated with the network, and wherein the network is a multi-layer, multi-vendor network.

3. The system of claim 1, wherein the instructions further enable the processing device to determine one or more derived alarms from the received path PM data, standard path alarms, service PM data, standard service alarms, network topology information, and configuration logs, the derived alarms being different from the standard path alarms and standard service alarms.

4. The system of claim 3, wherein standard path alarms and standard service alarms are threshold-crossing alarms.

5. The system of claim 3, wherein the one or more derived alarms include one or more of PM data patterns, power drops, loss of signal, and network configuration changes.

6. The system of claim 3, wherein determining the one or more derived alarms includes determining network conditions that have an impact on the services utilizing Pearson correlation or Supervised Machine Learning (SML) techniques.

7. The system of claim 1, wherein the instructions further enable the processing device to perform a path traversal procedure when the path PM data, standard path alarms, service PM data, standard service alarms, network topology information, and configuration logs are available.

8. The system of claim 1, wherein the instructions further enable the processing device to perform one or more of a triangulation procedure and a Supervised Machine Learning (SML) procedure when the network topology information is available and PMs and/or alarms related to receiving equipment are available.

9. The system of claim 1, wherein the instructions further enable the processing device to perform a Supervised Machine Learning (SML) procedure for multi-variate root cause classification when PMs and alarms related to receiving equipment are available for identifying the service failure or signal degradation.

10. The system of claim 1, wherein the instructions further enable the processing device to perform a Pearson correlation procedure and/or use feature ranking in a Supervised Machine Learning (SML) procedure to determine the service-affecting alarms that could be the root cause of the service failure amongst all standard and derived path alarms.

11. The system of claim 1, wherein the network is an optical network having at least a transmitter device, a receiver device, and one or more network devices configured to communicate optical signals along transmission paths.

12. A non-transitory computer-readable medium configured to store computer logic having instructions that, when executed, cause one or more processing devices to:

receive data that includes a subset of any of Performance Monitoring (PM) data, standard path alarms, service PM data, standard service alarms, network topology information, and configuration logs from equipment configured to provide services in a network, wherein the data is incomplete and includes at least all received service alarms, and
responsive to training a model, automatically detect, using the model, a root cause of a service failure or signal degradation from the available path PM data, standard path alarms, service PM data, standard service alarms, network topology information, and configuration logs, wherein the model is trained with incomplete domain expertise and the training includes derived alarms which are a combination of patterns in the data and configuration changes in the configuration logs.

13. The non-transitory computer-readable medium of claim 12, wherein the instructions enable the processing device to automatically detect the root cause independently of a network operator associated with the network, and wherein the network is a multi-layer, multi-vendor network.

14. The non-transitory computer-readable medium of claim 12, wherein the instructions further enable the processing device to determine one or more derived alarms from the available path PM data, standard path alarms, service PM data, standard service alarms, network topology information, and configuration logs, wherein the standard path alarms and standard service alarms are threshold-crossing alarms, wherein the one or more derived alarms have an impact on the services and include one or more of PM data patterns, power drops, loss of signal, and network configuration changes.

15. A method comprising the steps of:

receiving data that includes a subset of any of Performance Monitoring (PM) data, standard path alarms, service PM data, standard service alarms, network topology information, and configuration logs from equipment configured to provide services in a network, wherein the data is incomplete and includes at least all received service alarms, and
responsive to training a model, automatically detecting, using the model, a root cause of a service failure or signal degradation from the available path PM data, standard path alarms, service PM data, standard service alarms, network topology information, and configuration logs, wherein the model is trained with incomplete domain expertise and the training includes derived alarms which are a combination of patterns in the data and configuration changes in the configuration logs.

16. The method of claim 15, further comprising the step of performing a path traversal procedure when the path PM data, standard path alarms, service PM data, standard service alarms, network topology information, and configuration logs are available.

17. The method of claim 15, further comprising the step of performing one or more of a triangulation procedure and a Supervised Machine Learning (SML) procedure when the network topology information is available and PMs and/or alarms related to receiving equipment are available.

18. The method of claim 15, further comprising the step of performing a Supervised Machine Learning (SML) procedure for multi-variate root cause classification when PMs and alarms related to receiving equipment are available for identifying the service failure or signal degradation.

19. The method of claim 15, further comprising the step of performing a Pearson correlation procedure and/or using feature ranking in a Supervised Machine Learning (SML) procedure to determine the service-affecting alarms that could be the root cause of the service failure amongst all standard path alarms.

20. The method of claim 15, wherein the network is an optical network having at least a transmitter device, a receiver device, and one or more network devices configured to communicate optical signals along transmission paths.

Referenced Cited
U.S. Patent Documents
8477679 July 2, 2013 Sharifian et al.
8887217 November 11, 2014 Salem et al.
9060292 June 16, 2015 Callard et al.
9432257 August 30, 2016 Li et al.
9686816 June 20, 2017 Sun et al.
9819565 November 14, 2017 Djukic et al.
9832681 November 28, 2017 Callard et al.
9871582 January 16, 2018 Djukic et al.
9980284 May 22, 2018 Djukic et al.
10015057 July 3, 2018 Djukic et al.
10069570 September 4, 2018 Djukic et al.
10148578 December 4, 2018 Morris et al.
10153869 December 11, 2018 Djukic et al.
10390348 August 20, 2019 Zhang et al.
10448425 October 15, 2019 Au et al.
10491501 November 26, 2019 Armolavicius et al.
10503535 December 10, 2019 Hickey
10623277 April 14, 2020 Djukic et al.
10631179 April 21, 2020 Djukic et al.
10644941 May 5, 2020 Djukic et al.
10746602 August 18, 2020 Pei et al.
10887899 January 5, 2021 Au
10945243 March 9, 2021 Kar et al.
20140229210 August 14, 2014 Sharifian et al.
20180062943 March 1, 2018 Djukic et al.
20190230046 July 25, 2019 Djukic et al.
20190379589 December 12, 2019 Ryan et al.
20200067935 February 27, 2020 Carnes, III et al.
20200084087 March 12, 2020 Sharma
20200313380 October 1, 2020 Pei et al.
20200351380 November 5, 2020 Fedorov et al.
20200387797 December 10, 2020 Ryan et al.
20210028973 January 28, 2021 Cote et al.
20210076111 March 11, 2021 Shew et al.
20210092036 March 25, 2021 Jain
20210150305 May 20, 2021 Amiri et al.
Other references
  • Teixeira et al., “Advanced Fiber-Optic Acoustic Sensors,” Photonic Sensors, vol. 4, no. 3, 2014, pp. 198-208.
Patent History
Patent number: 11477070
Type: Grant
Filed: Jul 12, 2021
Date of Patent: Oct 18, 2022
Assignee: Ciena Corporation (Hanover, MD)
Inventors: Yinqing Pei (Kanata), David Côté (Gatineau), Philippe Alain Ngani Sigue (Montreal), Ali Mahmoudialami (Montreal), Christine Tremblay (Mont-Royal), Christian Desrosiers (Montreal)
Primary Examiner: Kyung H Shin
Application Number: 17/372,678
Classifications
International Classification: H04L 41/0631 (20220101); H04L 41/12 (20220101); H04L 41/16 (20220101); H04W 24/04 (20090101); G06F 11/30 (20060101); H04L 43/16 (20220101); H04L 43/0817 (20220101);