FLEET LEVEL PROGNOSTICS FOR IMPROVED MAINTENANCE OF VEHICLES
A ground-based computing system receives data of performance parameters for like components disposed on like aircraft, and determines corresponding levels of degradation and rates of change of degradation for the respective like components. A fleet-level degradation for groups of like components is generated based on analysis of the combined degradations of the like components in the respective group, and models of the components are revised. A predicted time for maintenance for each like component is determined based on at least one of a remaining useful lifetime (RUL) and a state-of-health (SOH) of the like component, thereby enabling cost-effective maintenance determinations for components based on fleet-level information. The ground-based computing system transmits a modified component model to the like aircraft to replace a prior version of the component model used to generate on-board degradation analysis, thereby enhancing the accuracy of the on-board degradation analysis based on fleet-level data.
CROSS REFERENCE TO RELATED APPLICATION(S)
This patent application is a divisional application of U.S. application Ser. No. 16/945,263 filed Jul. 31, 2020, entitled FLEET LEVEL PROGNOSTICS FOR IMPROVED MAINTENANCE OF VEHICLES, which is a continuation-in-part of U.S. application Ser. No. 16/801,596 filed on Feb. 26, 2020, entitled PROGNOSTICS FOR IMPROVED MAINTENANCE OF VEHICLES, which is a continuation-in-part of U.S. application Ser. No. 16/583,678 filed on Sep. 26, 2019, entitled HIGH FREQUENCY SENSOR DATA ANALYSIS AND INTEGRATION WITH LOW FREQUENCY SENSOR DATA USED FOR PARAMETRIC DATA MODELING FOR MODEL BASED REASONERS, which was a continuation-in-part of U.S. application Ser. No. 16/163,726 filed on Oct. 18, 2018, entitled PARAMETRIC DATA MODELING FOR MODEL BASED REASONERS, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
Embodiments of the present invention generally relate to determining when maintenance of components of complex vehicles, e.g. aircraft, is needed. Additionally, the embodiments identify whether or not maintenance of like components on all like aircraft is required based on analysis of fleet-level data associated with such components, and generate actionable, dynamic, and cost-effective “predictive maintenance” schedules, based on the condition of the components, for certified maintenance technicians, as opposed to conventional fixed time-interval “scheduled maintenance,” which is costly and labor intensive.
BACKGROUND
Maintenance of complex systems, such as vehicles, aircraft, spacecraft and other systems, represents a significant cost of operation. Maintenance of such systems is typically done on a predetermined schedule (periodic, time based) for the various components of the system. The schedule may be solely time-based, e.g. every three months, or may be based on a combination of time, usage, and reliability metrics, e.g. every three months or 1000 hours of operation as determined by mean-time-between-failure (MTBF) component reliability calculations. The amounts of time and usage are typically based on the performance history of the same or similar components as utilized in a similar operational environment. Scheduled maintenance based entirely on such reliability metrics has been shown to be less than optimal in numerous commercial and Department of Defense systems. The Office of the Secretary of Defense (OSD) issued directive 4151.22 (a mandate) that all systems will follow Condition Based Maintenance Plus (CBM+) processes by the year 2032. CBM+ processes provide for maintenance to be scheduled based on the condition of the component rather than on predetermined time-based and usage-based intervals. However, compliance with the CBM+ requirements has presented significant challenges.
Although such predetermined scheduled maintenance of components has been satisfactory in some environments, this type of maintenance has not performed as well for some equipment/vehicles in other environments. Predetermined scheduled maintenance has proved to be costly and a cause of delays in vehicle availability due to unnecessary maintenance, which may itself result in inadvertent mishaps when parts are removed for testing and replaced. For example, consider a jet aircraft. The same type of jet aircraft may be utilized in a variety of extremely different environmental conditions, e.g. a desert with widely varying temperatures and blowing sand versus a temperate latitude with only minor airborne dust. Additionally, even if the same aircraft is operated under the same external environmental conditions, different pilots, especially military pilots, may choose to fly the same aircraft in substantially different ways, causing different load variations on aircraft structures. Also, different missions will pose differing levels of stress on the components of the aircraft. Hence, predetermined scheduled maintenance may result in not performing needed maintenance due to greater than anticipated stress, or in performing unneeded maintenance due to a significantly lower level of stress than anticipated. Reliable prognostics for improved timing of component maintenance provide a more cost-effective solution, increase the operational availability of an aircraft/system by avoiding unneeded maintenance, and allow better utilization of the maintenance workforce. Performing maintenance only when actually needed is even more critical where a fleet of like aircraft must be maintained.
Therefore, there exists a need for more accurate prediction of maintenance needs for components in a fleet of like-type aircraft, so that metrics of performance for the same component across multiple like aircraft can be utilized to improve on-board testing criteria for the component as well as to make fleet-wide concurrent maintenance determinations for like components.
SUMMARY
One object of embodiments of the present invention is to satisfy the need to update on-board component models for increased accuracy to yield reliable prognostics based on the incorporation of fleet-wide metrics of corresponding component performance across multiple like aircraft.
Another object of one embodiment of the present invention is to identify whether or not maintenance of the same components located on all like aircraft is required based on analysis of fleet-level data associated with such components. A ground-based computing system receives data of performance parameters for like components disposed on like aircraft, and determines corresponding levels of degradation and rates of change of degradation for the respective like components. A fleet-level degradation for groups of like components is generated based on analysis of the combined degradations of the like components in the respective group. At least one of a remaining useful lifetime (RUL) and a state-of-health (SOH) for each of the respective like components is determined based on a comparison of the levels of degradation for each of the like components and the fleet-level degradation of the group of like components. A predicted time for maintenance for each like component is determined based on the corresponding at least one of the RUL and SOH of the like component, thereby enabling cost-effective maintenance determinations for components based on fleet-level information.
Some example embodiments of the present invention incorporate inputs from an IVHM system and a data fusion module which are described below with reference to the accompanying drawings in order to better understand the operation of embodiments of the present invention in which:
The fleet level prognostics embodiments benefit from the explanation of the embodiments and features associated with other inventions as discussed in relationship to
In one embodiment the prognostics system utilizes inputs from the Data Fusion Module 1175 and the MBR Diagnostics Engine 106. The IVHM system includes all modules in
IVHM using Model Driven Architectures (MDA) and Model Based Engineering (MBE) is a solution whereby software and hardware elements are flight qualified once instead of every time the system is changed or upgraded. This results in significant cost savings by using an XML-format configuration file containing a model with the diagnostics domain knowledge of the system. The model needs to be verified for accuracy but does not require expensive and time-consuming software flight qualification. This saves between 25% and 35% in military operations and support costs.
Operational Flight Program (OFP) 102 encompasses hardware and software for managing the overall operation of the vehicle. OFP 102 includes a runtime diagnostics engine IVHMExec 104. OFP 102 may also be implemented as a standalone avionics IVHM computer attached passively to the avionics data buses, actively interfaced with mission planning systems, and actively interfaced with ground systems and maintenance systems 122. IVHMExec 104 includes a diagnostic Model Based Reasoner (MBR) Engine 106. MBR Engine 106 combines a physical model of a vehicle system or subsystem with input data describing the system state, then performs deterministic inference reasoning to determine whether the system is operating normally, if any system anomalies exist, and if so, to isolate and identify the locations and types of faults and false alarms that exist. IVHMExec 104 writes maintenance records to a disk 126 that may also be accessed by Portable Maintenance Device Viewer 122.
MBR Engine 106 receives real-time sensor data through Data Message Interface 108, in which high-frequency and low-frequency sensor data are analyzed and integrated together to facilitate the decision-making by MBR Engine 106. It also receives a Run Time Operational Model 110 of the vehicle through Real-Time Operational Interface 112. Model 110 of the vehicle is created by a modeling engineer using a Model Development Graphical User Interface (GUI) 114. Model 110 is created and verified with the MBR Engine 106 offline (non-real time) and then exported to an XML file that is used by a real-time embedded build of IVHMExec 104. In addition to creation of model 110, GUI 114 is also used to verify the model. Verification and validation test the model's internal logic and elements, without the use of any specific input data. This process is necessary to ensure that the model is logically consistent, without errors that would prevent it from operating properly or at all.
As a further step in the model development process, Test Manager 116 evaluates a model by testing it against simulated or actual flight data 118. Development Interface 120 allows for modification and addition of MBR Engine 106 algorithms, which are separate classes statically or dynamically linked to the IVHMExec 104 runtime executable (statically for standalone IVHMExec and dynamically for integration with the Graphical User Interfaces (GUIs)). While verification tests a model logically, Test Manager 116 ensures that the model performance and output is as desired. Once a model is verified and tested, an XML model configuration file 110 is generated.
IVHMExec 104 is the executive that loads the XML representation of the model and executes the MBR Engine 106 in real time by applying the model to input sensor data messages as they are received from various buses in the vehicle and/or stored history data in various formats for replay on the ground. IVHMExec 104 may also be used by Test Manager 116 through Development Interface 120. Storage interface 124 connects MBR Engine 106 to Recorded Data storage 126. Recorded Data 126 includes log files: the complete time-stamped state of the equipment (for example, snapshots), time-stamped fault/failure anomalies, detections, isolations, and any functional assessments on the isolations. The log files also include the MBR Engine software states (version number, failures, and reboots) as well as identification of other aircraft software, their version numbers, their states at failure if failed, software reboots, and the functional assessments that led to the failure. Collection of this data allows for replay and diagnostics visualization of the actual events that occurred on the aircraft, and allows the maintainer to better understand both the hardware and software interactions leading to the failed component(s). Recorded Data storage 126 stores the raw data used by the MBR Engine 106 and the results of its processing.
In an embodiment, MBR Engine 106 includes dynamically calibrated data input capability, and a set of logic gates (intersection AND, union OR, exclusive-or XOR, and others), rules, cases (histories), and decision trees combined in sensor logic for IVHM data fusion of parameterized and direct analog sensor data with corresponding Built-In-Test (BIT) inputs. A comparison of parametric data, direct analog sensor data, and BIT results produce confidence measures in failure and false alarm predictions.
An example of the creation of a model for use by MBR Engine 106 will now be described. In an embodiment, the model provides for data fusion from many sources within a modeled vehicle. In particular, the model may include parameterized data input capabilities that allow MBR Engine 106 to include analog and quantified digital data input, with either fixed or dynamically calibrated bounds to the measured physical quantities to determine the existence of anomalies. The parameterized data anomaly decision can be based on simple fixed bounds, dynamically changing calibration values based on physical sensor operations, or more complex decision properties including signal noise reduction, windowing, latency times and similar parameterized data conditioning. These data calibration parameters and thresholds become sensor node properties for evaluation during real time operations of the system. Functions can be represented as logic sets and operands while rules may be represented as logic sets or natural language semantics, historic behaviors (case based), or decision trees (fault tree analysis). For example, in the case of pressure functions, the model would evaluate whether flow pressure is provided and combine other inputs according to the function logic desired. In an embodiment, each input must indicate a positive result for the function to be evaluated as true although other logic functions may also be used. Various user-defined parameters for this function can be represented as node properties of the function. The XML MBR Model(s) 110 of the vehicle and the binary IVHMExec 104 real time engine running on an avionics computational device provide IVHM capability/functionality for the entire vehicle.
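By way of illustration only, the parameterized anomaly decision described above, with either fixed or dynamically calibrated bounds, can be sketched as follows. The class name, bound values, and recalibration rule are assumptions made for this sketch and are not part of the disclosed model:

```python
from dataclasses import dataclass


@dataclass
class SensorNode:
    """Hypothetical parameterized sensor node with calibrated bounds."""
    lower: float  # lower bound of the acceptable value window
    upper: float  # upper bound of the acceptable value window

    def recalibrate(self, nominal_readings):
        # Dynamic calibration: widen the bounds to cover data observed
        # during known-nominal operation of the physical sensor.
        self.lower = min(self.lower, min(nominal_readings))
        self.upper = max(self.upper, max(nominal_readings))

    def is_anomaly(self, value: float) -> bool:
        # Anomaly decision: the measured quantity lies outside the window.
        return not (self.lower <= value <= self.upper)


node = SensorNode(lower=10.0, upper=50.0)
print(node.is_anomaly(60.0))    # True: outside the fixed bounds
node.recalibrate([8.0, 55.0])   # observed nominal data widen the window
print(node.is_anomaly(54.0))    # False: within the recalibrated bounds
```

The same reading can thus change classification after calibration, which is the behavior the dynamically calibrated bounds are meant to provide.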
A parametric and BIT MBR model may include components and sensors that are related by their functions. In an embodiment, a model of a vehicle system or subsystem may be represented as nodes in a graph as shown in
Diagnostic nodes are used directly in the MBR model reasoning engine to determine the system components causing a fault or false alarm, while non-diagnostic nodes are used for tasks such as sensor output and BIT test comparison. The non-diagnostic nodes are used for real-time comparison of parametric sensor data with BIT data results. The parametric sensors represent the true system behavior (when the sensors have not failed), and if they are operating nominally while the BIT data show failure of corresponding components, this result is shown as a false alarm. Failed sensors are identified from false positive and false negative tests upon the sensors. Components, such as a Flow Pressure component, refer to a specific system element whose state (e.g. on, off, high or low pressure, etc.) and status (operational, failed, leaking, etc.) is indicated by MBR Engine 106, by connecting the component to other elements of the model. Sensor nodes are modeled with input data, which could take many forms, for example, direct sensor analog input, parametric data input and binary BIT data input. Referring to
In the default parameter values 303, item 311 indicates a failure probability (failure modes) entered from a component supplier, with a “0” indicating that no supplier data are available. Alternatively, the failure probability can be entered from historical performance data. It can be recalculated with degradation events, i.e. the failure probability increases with degradation events. The intermittency threshold 313 refers to a time period of intermittent or random behaviors, with an exemplary default value of five seconds. The state 315 defines the various states of the component, e.g. ON, OFF, high-pressure, etc. The available and in-use parameters 317 are shown as both being set to “true”, i.e. the component is both available and in use. A “false” state in either of the parameters 317 could be due to failure and/or to other reasons such as loss of power. The link specification 319 specifies links to other components by function nodes.
Another type of node in the model of
Another type of node in the model of
Another example of a physical sensor node is BIT ECS_FlowPressureFault 220. This sensor uses Built-In-Test (BIT) data from the modeled system, which indicates either an anomaly or normal operation in the data output. This BIT test is designed to use the same upper and lower bounds as the corresponding parameterized sensor, but could produce a different result in the case of an anomalous operation. As such, we use the BIT test as an input along with a separate parameterized data input, into XOR_ECS_FlowPressure node 222 which is an exclusive logical or (XOR) sensor node. In some cases, only a BIT test sensor may be available to the maintainer; in this case, the BIT test will be used as a diagnostic sensor similar to the parametric sensor node used here for the ECS Flow Pressure 218. Other physical sensor nodes in the model of
XOR_ECS_FlowPressure node 222 receives inputs from physical sensor node BIT_ECS_FlowPressureFault 220 and ECS_FlowPressure_ND 228 (non-diagnostic), which is a parameterized input sensor node. The reason that a separate parameterized input sensor is used for the XOR input is that this input is non-diagnostic (no diagnostics cycle is performed). Sensors can be either diagnostic, which means that they are used in the MBR engine to determine system faults and false alarms, or non-diagnostic to remove them from the MBR engine assessment. For XOR sensor input, a non-diagnostic parametric sensor input 228 is desirable to prevent interference with the MBR engine, as the XOR logic and output are complementary to and separated from the MBR engine processing. In the example used here, the BIT test sensor 220 is also non-diagnostic, for the same reasons. In addition, for XOR sensors, a blank function 226 is used to fulfill a design requirement that each sensor have a downstream function attached to it. Another blank function is shown at 236. Similarly to node 222, XOR_ECS_Temp node 244 receives input from physical sensor node BIT_ECS_TempFault 242 and parameterized sensor node ECS_Temperature_ND 224.
XOR_ECS_FlowPressure node 222 produces a separate output stream, only indicating a positive Boolean result when the connected sensors (the parameterized sensor node 228 and the corresponding BIT test node 220) provide different assessments. Under normal operating conditions this should not happen, therefore the XOR sensor node is useful to determine when one of the system's BIT or parameterized inputs is providing an anomalous result. This provides the modeler with another tool to diagnose the system's health, which may otherwise be difficult to analyze.
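A minimal sketch of this XOR sensor-node behavior, assuming Boolean anomaly assessments from the parameterized sensor and the corresponding BIT test (the function name and inputs are illustrative):

```python
def xor_sensor(parametric_anomaly: bool, bit_fault: bool) -> bool:
    # Positive Boolean result only when the parameterized sensor and the
    # BIT test provide different assessments of the same quantity.
    return parametric_anomaly != bit_fault


# BIT reports a fault while the parametric data are nominal:
# a candidate false alarm flagged for the modeler.
print(xor_sensor(False, True))   # True
# Both inputs agree (here, both report a fault), so the XOR node stays quiet.
print(xor_sensor(True, True))    # False
```

Under normal operation both assessments agree and the node's output stays false, so any true output localizes a disagreement between the BIT path and the parametric path.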
An example of a case where only a BIT test data field is available is shown in
In some cases, parametric nodes will not have fixed upper and lower bounds. In this case, a separate node class can be used, as shown, for example, in
In an embodiment, a model such as that shown in
Referring back to
In an alternative embodiment, a modeling engineer using GUI 114 (
An example of output data from a model test is shown in
The central purpose of the invention is to produce a High Fidelity Real Time Diagnostics capability (False Alarm (FA) rejections, Fault Detection (FD), Fault Isolation (FI), and parameter trending for equipment failures) for vehicles and other systems; it is especially (but not exclusively) suited to aircraft. This invention provides embedded software diagnostics capability on numerous hardware devices and operating systems, and system avionics integration for determining the health of the system during in-flight real-time system operations. By implementing parametric data input from high-frequency and low-frequency sensors and XOR parametric-BIT comparator fusion, the system has the capability to analyze quantitative sensor input, develop sophisticated fault and false alarm confidence measures, and identify and analyze BIT failures while maintaining valid system health management and diagnostic capabilities.
The digital signal representations of the sensor outputs are supplied as inputs to the alarm detector 1120, which functions to make a determination of whether an alarm condition exists. Such a determination is based on a comparison of whether the digital value of the sensor output is within a fixed window of values defined by static, stored, upper and lower threshold values associated with each respective sensor. Such a comparison can be made by a microprocessor comparing the sensor value with the corresponding threshold values, or could be made by dedicated circuitry, e.g. integrated circuit comparators. If the value of the sensor output is within the respective window, the functioning of the component's parameter being sensed is determined to be within an acceptable range, i.e. no alarm condition. If the value of the sensor output is outside the respective window, functioning of the parameter is determined to be not within an acceptable range, i.e. an alarm is needed. If a sensor window is relatively wide (low and high threshold values are far apart), an extreme or unusual, but normal, operating condition may cause the parameter being sensed to exceed such a window and thereby cause an alarm. This corresponds to a false positive. The wide window allows for most signals to register as alarms, especially noisy signals, even while the system may be functioning properly. This is generally the case in pre-flight testing, when components and sensors are achieving normal steady state. The time interval for reaching steady state can be up to 30 minutes for certain systems such as radars. As steady state is achieved, false alarms are significantly reduced. Current methods require a long schedule and large budget to achieve an understanding of remaining false alarms and an acceptable static lower and upper threshold for each sensor. Our MBR Engine implementation reduces this effort and budget by 90% within two test flights. True False Alarms are easily identified.
True Faults can then be worked upon for maintenance (repair or replacement). Persistent false positives (above the upper threshold) are an indication that the corresponding sensor has failed. A zero sensor raw value represents an electrical short circuit in the sensor. If the sensor window is set relatively narrow (low and high threshold values closer together) to accommodate a sensor output corresponding to extreme or unusual operating conditions so as to minimize false alarms, there is a risk that the parameter being sensed may be operating with an unacceptable characteristic that will not be determined to be an anomaly/alarm condition because the sensor output lies within the window. This corresponds to a false negative. False negatives indicate that possible real anomalies have missed alarm registration and tagging that would otherwise be processed in the detection cycle for causal analysis. Hence, there are challenges in establishing a window with fixed upper and lower threshold values.
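The fixed-window alarm decision and its false-positive/false-negative trade-off can be sketched as follows; the threshold and sensor values are invented for illustration only:

```python
def detect_alarm(value: float, lower: float, upper: float) -> str:
    """Tag a digitized sensor value against a fixed threshold window."""
    if lower <= value <= upper:
        return "no_alarm"   # value within acceptable range
    return "alarm"          # value outside acceptable range


# An extreme but harmless transient just above the window trips an alarm:
# a false positive under these (invented) thresholds.
print(detect_alarm(9.8, lower=2.0, upper=9.5))   # alarm
# A genuinely degraded reading that still sits inside the window goes
# unflagged: a false negative.
print(detect_alarm(9.2, lower=2.0, upper=9.5))   # no_alarm
```

Because both error types are governed by the same static pair of thresholds, widening the window trades false positives for false negatives and vice versa, which is the difficulty the dynamic windows described later are intended to address.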
The output from the alarm detector 1120 consists of the input digital sensor values with respective digital tags indicating alarm or no alarm. This provides an input to data conditioning 1125 which provides data formatting and alignment. Since the digital output from different sensors may have a different number of digits or may have different ways of encoding values, data formatting converts these values into standardized data representations and formats (i.e., floats, integers, binary bits, etc.), as well as padding of digits of data as necessary. Also, because the sensor data rate (frequency) will typically differ for different sensors, converting each sensor data stream into a data stream having a common data rate, e.g. 50 Hz, makes it easier to integrate and process the information from such a variety of sensors and data branches. The data conditioning 1125 can be implemented on a microprocessor which can make formatting changes to provide conformity of the expression of the sensor values, and can also utilize a common clock to establish time synchronized signals into a single common data rate for the respective digital sensor outputs which may require either up-sampling or down-sampling of each sensor data stream to convert it to the target common data rate, e.g. 50 Hz.
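The conversion of differently-rated sensor streams to a common data rate, e.g. 50 Hz, might be sketched with a simple nearest-sample scheme; a production implementation would more likely use interpolation or filtering, so this is only an illustrative assumption:

```python
def resample(samples: list, src_hz: int, dst_hz: int) -> list:
    """Nearest-sample resampling of a uniformly sampled stream.

    Up-samples by repeating samples or down-samples by skipping them so
    that every stream can be merged at one common data rate.
    """
    n_out = int(len(samples) * dst_hz / src_hz)
    return [samples[min(int(i * src_hz / dst_hz), len(samples) - 1)]
            for i in range(n_out)]


slow = [1.0, 2.0, 3.0]           # a 10 Hz stream
fast = resample(slow, 10, 50)    # up-sampled to 50 Hz: 15 samples
print(len(fast))                 # 15
print(resample([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], 50, 10))  # [0, 5]
```

Once every stream shares the common rate, the per-sample tags from the alarm detector stay aligned in time across all sensor branches.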
The other data 1130 represents other information obtained from sensors or monitoring such as hardware and software BIT, system fault codes, warnings, cautions, advisories, meteorological, and biological (heart rate, etc. of the vehicle operator, e.g. pilot) data. Signals associated with this information are further processed by the A/D converter 1135, alarm detector 1140, and data conditioning 1145 which perform similar functions as explained above for the corresponding A/D converter 1115, alarm detector 1120, and data conditioning 1125, respectively.
The high-frequency sensors 1150 provide high data rate analog information and may, for example, include sensors such as stress gauges, strain gauges, accelerometers, vibration sensors, transducers, torque gauges, acoustic sensors, optical sensors, etc. Such sensor outputs are converted to a digital signal representation by A/D converter 1155 and are input to the anomaly/degradation detector 1160 (see
The data fusion module 1170 (see
A consistency of sensor data indicating out of norm conditions from more than one sensor in a sensor group is a step in identifying the presence of an actual degradation or failure. The actual failure isolation is determined by the MBR Engine algorithms 106 (
Sensor data associated with other upstream or downstream components can also be included within a group. In the above pump example, assume that the pump is controlled to produce a flow of liquid that is channeled through a component, e.g. an engine that requires cooling. In this further example a heat sensor associated with the engine could be included within the group since a failure of the pump would also likely produce an increased engine heating that could exceed a desired operational range. Thus, it will be understood that the grouping of sensor data that are correlated can be associated with the sensed attributes for more than one component. A group of sensor data may include sensor information from a high-frequency sensor 1150, a low-frequency sensor 1110, and/or other data sensors 1130. Of course, the data from some sensors may not be included in any group and hence will be analyzed and considered individually.
The data fusion module 1170 analyzes the mapped sensor data within a time window that increments over time, either on a group basis for the sensor data included within a predetermined group of correlated sensors or on an individual basis where sensor data is not part of any group. The data fusion module 1170 makes a determination, based on stored usage and operational norm information for each group/individual set of sensor data, of whether a tag should be associated with the group/individual sensor data, where the tag consists of one of a predetermined set of conditional codes. Each conditional code is mapped to and compared with a similar fault code generated by the component. The conditional codes are then transmitted for further processing in MBR Engine 106 (
The sensor data along with the conditional codes are transmitted from the data fusion module 1170 to the diagnostic model-based reasoner engine 106 for further analysis. The data fusion module 1170 is implemented in software that runs on a microprocessor/computer capable of mapping the sensor data streams into correlated groups, comparing the respective sensor values against a dynamic normal window of operation having an upper and lower threshold, determining if an anomaly/fault associated with one sensor in a group is supported by a correlation of an anomaly/fault by another sensor in the group, and encoding the respective sensor data with an error code tag representative of the malfunction/fault determined.
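The group-level corroboration performed by the data fusion module 1170 might be sketched as a simple agreement rule over a correlated sensor group; the two-sensor threshold and the labels below are illustrative assumptions, not the disclosed algorithm:

```python
def fuse_group(group_flags: dict) -> str:
    """Corroborate anomaly flags within a correlated sensor group.

    group_flags maps sensor name -> True if that sensor flagged an anomaly.
    A degradation is reported only when two or more correlated sensors
    agree; a single flag is treated as a possible false positive (e.g. a
    transient, interference, or a failing sensor).
    """
    n_flagged = sum(group_flags.values())
    if n_flagged >= 2:
        return "degradation"
    if n_flagged == 1:
        return "possible_false_positive"
    return "nominal"


# Pump example: vibration and power draw both out of norm corroborate
# a bearing problem; flow rate alone has not yet deviated.
pump_group = {"vibration": True, "power_draw": True, "flow_rate": False}
print(fuse_group(pump_group))   # degradation
```

This mirrors the vibration-plus-power-sensor example discussed later: positive correlation from two sensors yields a high-reliability determination, while a lone flag is held back as a candidate false positive.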
These parameters are transmitted to the anomaly/degradation detection module 1240, which utilizes the corresponding parameters for each sensor data stream to identify values that lie outside of the anticipated operational norms defined by those parameters. Thus, the dynamic windows of normalized operation for each sensor vary depending on the mode of operation. This provides a dynamic change of normal parameters for each sensor based upon the mode of operation and thus allows a more accurate determination of whether an anomaly/degradation is being sensed, because the corresponding “normal value windows” can be changed to allow for values anticipated during a specific mode of operation. Because sensor values can vary considerably depending upon the mode of operation, tailoring window thresholds and parameters for the respective modes of operation greatly enhances the ability to eliminate false alarms without having to utilize a single large acceptable range to cover all modes of operation. Off-line training based on collected and stored previous sensor data for various modes of operation allows for refinement of these window threshold values.
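The mode-dependent selection of “normal value windows” can be illustrated as a lookup of per-mode thresholds; the modes and numeric values here are invented for the sketch:

```python
# Illustrative mode-dependent threshold table (values are invented).
MODE_WINDOWS = {
    "takeoff": (20.0, 95.0),   # wide window for high-stress transients
    "cruise":  (30.0, 70.0),   # tighter window for steady-state flight
    "landing": (15.0, 80.0),
}


def in_normal_window(value: float, mode: str) -> bool:
    """Check a sensor value against the window for the current mode."""
    lower, upper = MODE_WINDOWS[mode]
    return lower <= value <= upper


# The same reading can be nominal in one mode and anomalous in another,
# which is what eliminates the need for one large catch-all range.
print(in_normal_window(85.0, "takeoff"))  # True
print(in_normal_window(85.0, "cruise"))   # False
```

Off-line training over recorded per-mode data would then amount to refining the entries of a table like `MODE_WINDOWS` rather than a single global pair of thresholds.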
The off-normal measurement module 1245 receives the respective sensor data from the anomaly/degradation detection module 1240. Module 1245 makes parameter distance measurements of the values associated with each sensor output relative to normal parameter values for the determined mode of operation. Based on these parameter distance measurements, the off-normal measurement module 1245 makes a determination for each sensor output of whether the function being sensed by the sensor is operating within a normal mode or whether an anomaly exists. If the sensor output value falls within the corresponding normal value window, normal operation is determined, i.e. the function is operating within the anticipated range of operation. If the sensor output falls outside the corresponding normal value window, an anomaly of operation is determined, i.e. the function is operating with degraded performance or failure, or a problem with the sensor or its calibration exists. Refer to the tag conditional codes as explained above. Such a tag is applied to each sensor output and transmitted to the set anomaly and degradation flag module 1250. Module 1250 incorporates such a tag with each of the sensor output values, which are transmitted as outputs 1162 to the data conditioning module 1165.
Sensor output data 1505 represents vibrations from a malfunctioning/defective bearing in a pump. Somewhat similar to the variations in
Once sensor data has been collected and stored corresponding to the normal anticipated bearing vibrations during the operation of a pump in good working order, this data can be compared/contrasted with sensor data during an in-service operation (in-flight for an aircraft) to make a determination of whether the subject pump is operating normally or is in need of maintenance or replacement. As explained below with regard to
The fusion of the data from the pump vibration sensor with the pump power sensor leads to a high reliability determination of whether the bearing of the pump is malfunctioning/degrading. Positive correlation by both a defective bearing signal 1505 and the power sensor data 1705 results in a highly reliable determination that the associated pump, at a minimum, needs maintenance or perhaps replacement. Conversely, without a positive correlation from two or more sensor signals, it is possible that only one sensor signal indicating a defect could be a false positive. Such a false positive could be the result of a temporary condition, such as a temporary change in operation of the vehicle or transient electrical interference. Alternatively, a lack of positive correlation could also indicate the sensor associated with the detection of the malfunction being itself defective or perhaps going out of calibration.
|s| < 0.0167
and
SD/μ < 1/6
This technique accommodates the verification of persistent shifts in sensor output values as well as the determination of alarm coefficients, i.e. when an alarm should be raised. The technique is based on the low probability, under Gaussian distribution statistics, of observing a consistent value greater than six standard deviations, as normalized by the mean. It will be noted that the standard deviation is normalized by the mean to accommodate different sensor output values. In comparison with
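A minimal sketch of these two persistence tests, assuming a least-squares slope over the sample window, a nonzero mean, and the thresholds quoted above:

```python
import statistics

def persistent_shift(window):
    """Apply the two persistence tests from the text to one data window:
    the fitted slope magnitude must stay below 0.0167 and the standard
    deviation normalized by the mean must stay below 1/6, indicating a
    stable (persistent) level rather than a transient. Assumes a
    nonzero-mean sensor signal."""
    n = len(window)
    mean = statistics.fmean(window)
    # Least-squares slope of the window values against sample index.
    sxx = sum((i - (n - 1) / 2) ** 2 for i in range(n))
    sxy = sum((i - (n - 1) / 2) * (v - mean) for i, v in enumerate(window))
    slope = sxy / sxx
    sd = statistics.pstdev(window)
    return abs(slope) < 0.0167 and sd / mean < 1 / 6
```

A flat, low-variance window passes both tests; a steadily ramping window fails the slope test and is treated as still in transition.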
In step 2315 a determination is made of the current operational mode and the corresponding stored parameters for that operational mode are selected. For an aircraft, the current operational mode could be takeoff, normal acceleration, combat acceleration, cruising at steady-state speed, landing, etc. This information can be determined, for example, from a flight plan stored in memory or from analysis of the sensor data that reflect the mode of operation, e.g. weight on wheels, accelerometers, speed, rate of change of altitude, etc. Stored predetermined criteria/thresholds for such sensor data can be utilized to determine the mode of operation when compared with the current sensor tags. Detection parameters, e.g. upper and lower threshold values, or stored normal values for the determined mode of operation, associated with particular modes of operation are selected. Each of multiple anomaly detectors 1160 is connected to a set of identical existing high frequency sensors (from 1 to n sensors) in the component and implemented in one core of the GPU. Alternatively, multiple anomaly detectors 1160 can be executed in the same GPU core for different sensors from similar or differing components. The sensor thresholds and calibration information are available from supplier data and stored on the vehicle for processing against real time input vehicle data. There are sufficient GPU cores that one can be used for each high frequency sensor in the vehicle.
In step 2320 the current sensor values are compared with the selected detection parameters for a current moving window. With actual measurements (real time input signal), these selected detection parameters conform to nominal operation of the component to which the sensor is attached. An artificial neural network (ANN) with input, hidden, and output layers with backward propagation may be utilized as the anomaly detection mechanism. Stored training data is organized into groups of classes and is utilized in a supervisory capacity (off-line supervised learning). An n-dimension Gaussian function can be utilized for modeling each class. These are also referred to as radial basis functions (RBF). They capture the statistical properties and dimensional interrelationships between the input and the output layers. The algorithmic goal of the RBF ANNs is to output the parameter “0” for nominal component behavior and “1” for off-nominal component behavior.
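A one-dimensional sketch of an RBF-style nominal/off-nominal decision; the center, width, and activation threshold are illustrative assumptions, not trained values from the system described here:

```python
import math

def rbf(x, center, width):
    """Gaussian radial basis function in one dimension."""
    return math.exp(-((x - center) ** 2) / (2 * width ** 2))

def classify(x, nominal_center=0.0, nominal_width=1.0, threshold=0.5):
    """Return 0 for nominal behavior, 1 for off-nominal behavior, by
    thresholding the activation of an RBF centered on the (assumed)
    trained nominal class."""
    return 0 if rbf(x, nominal_center, nominal_width) >= threshold else 1
```

In the full system the RBF centers and widths would come from off-line supervised training over the stored class data, and the input would be multi-dimensional rather than scalar.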
In step 2325, for an off-normal sensor output value, i.e. an anomaly, the difference between the sensor values and the corresponding normal detection parameters is calculated and stored. This information is useful in off-line training of sensor data and RBF function model refinement. In step 2330, data flags/tags are set, if needed, for the corresponding sensor data.
In step 2335 a determination is made of short, medium, and long moving averages for the output of each sensor for each moving window. The computation of moving averages is well understood, and those skilled in the art will have no trouble implementing such calculations in software. In step 2340 a determination of the differences among these moving averages is made, as well as the trends of the moving averages. In step 2345 the determined trends are compared to stored historical trend data to determine if off-normal conditions exist. If a persistent shift (determined as discussed above) exists per step 2347, the process continues with verifying and validating the need for an alarm flag and sends corresponding sensor data to step 2350.
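The short/medium/long moving-average comparison of steps 2335-2345 might be sketched as follows; the span lengths are illustrative assumptions:

```python
import statistics

def trend_state(data, spans=(4, 8, 16)):
    """Compare short, medium, and long trailing moving averages of a
    sensor stream. A consistent ordering (short > medium > long)
    suggests an upward trend toward off-normal behavior; the reverse
    ordering suggests a downward trend. Span lengths are assumptions
    for illustration."""
    short, medium, long_ = (statistics.fmean(data[-s:]) for s in spans)
    if short > medium > long_:
        return "rising"
    if short < medium < long_:
        return "falling"
    return "steady"
```

The returned trend label would then be compared against stored historical trend data, per step 2345, before any alarm flag is considered.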
In step 2350 the slope, mean, and standard deviation for each sensor output in each moving window are computed. One of ordinary skill in the art will know how to implement such calculations in software, either using a standard microprocessing unit or an arithmetic processing unit. These calculations can also be implemented on a graphical processing unit. In step 2355 a ‘test 1’ is made where the slope is compared with a stored predetermined slope threshold value to determine if an off-normal condition exists. In step 2360 a ‘test 2’ is made where the normalized standard deviation is compared with a stored predetermined standard deviation threshold value to determine if an off-normal condition exists. In step 2365 off-normal behavior is determined to be present if both ‘test 1’ and ‘test 2’ yield out-of-normal values. If needed, anomaly/degradation flags are set in step 2330 following step 2365. Also, in step 2330, the high-frequency sensor data is down-sampled in order to have substantially the same data rate as the data rate received from the low-frequency sensors and the other data sensors. This facilitates easier processing and integration of the sensor data from all the sources by the data fusion block 1170.
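A sketch of ‘test 1’ and ‘test 2’ over a single moving window; the threshold parameters passed in are hypothetical stand-ins for the stored predetermined values:

```python
import statistics

def off_normal(window, slope_threshold, sd_threshold):
    """Steps 2355-2365: off-normal behavior is declared only when BOTH
    the window slope ('test 1') and the mean-normalized standard
    deviation ('test 2') exceed their stored thresholds. Assumes a
    nonzero-mean window."""
    n = len(window)
    mean = statistics.fmean(window)
    # Least-squares slope against sample index within the window.
    sxx = sum((i - (n - 1) / 2) ** 2 for i in range(n))
    sxy = sum((i - (n - 1) / 2) * (v - mean) for i, v in enumerate(window))
    slope = sxy / sxx
    norm_sd = statistics.pstdev(window) / mean
    return abs(slope) > slope_threshold and norm_sd > sd_threshold
```

Requiring both tests to fire, rather than either one alone, is what suppresses single-statistic false alarms in this step.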
In step 2410, for each of the correlated groups, the sensor values are compared with the corresponding normal range of values associated with the current operational mode. Based on this analysis, the sensor data associated with a group identified to be off-normal is tagged with a conditional code. In step 2415, the fused group sensor data is compared with individual (single) sensor values for correlation or lack of correlation over multiple data windows to detect off-normal or trending-to-off-normal behavior. For example, individual sensor data coming from one of sensors 1110 or 1130 that is correlated with a group of correlated high-frequency sensors 1150 can be useful in either confirming an anomaly or preventing a potential false alarm where the individual sensor data is not validated by off-normal outputs from other sensors in the group. Alternatively, such individual sensor data may reflect normal operation while the corresponding group of correlated high-frequency sensors shows a trend towards off-normal behavior. This represents a “false negative” for the individual sensor, in which the single sensor data is not responsive enough to provide a warning that the subject component may require some form of maintenance.
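The group-versus-individual cross-validation of step 2415 can be sketched as a simple quorum check; the quorum fraction and the label strings are assumptions for illustration:

```python
def cross_validate(individual_flag, group_flags, quorum=0.5):
    """Confirm or reject a single sensor's off-normal indication against
    the fused group of correlated sensors. `individual_flag` is True if
    the single sensor reads off-normal; `group_flags` are the off-normal
    flags of the correlated group; `quorum` is the assumed fraction of
    the group that must agree to confirm."""
    group_ratio = sum(group_flags) / len(group_flags)
    if individual_flag and group_ratio >= quorum:
        return "confirmed"            # anomaly corroborated by the group
    if individual_flag:
        return "possible false positive"   # group does not corroborate
    if group_ratio >= quorum:
        return "possible false negative"   # group trends off-normal, sensor silent
    return "nominal"
```

The two disagreement branches correspond directly to the false-positive and false-negative cases discussed in the paragraph above.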
As will be understood by those skilled in the art, the ROM 2510 and/or nonvolatile storage device 2520 will store an operating system by which the microprocessor 2505 is enabled to communicate information to and from the peripherals as shown. More specifically, sensor data is received through the I/O 2525, stored in memory, and then processed in accordance with stored program instructions to achieve the detection of anomalies and degradation of components associated with the respective sensors. Based on the analysis of the sensor data as explained above, those skilled in the art will know how to implement in the computer system software to determine different length moving averages such as discussed with regard to
If used and unless otherwise stated, the terms “upper,” “lower,” “front,” “back,” “over,” “under,” and similar such terms are not to be construed as limiting embodiments to a particular orientation. Instead, these terms are used only on a relative basis.
The Data Interface Module 2605 takes input 1175 from the Data Fusion Module, identifies the tagged data and alarms, and passes this separated data to the LRU/System Selection and Data Transport Module 2610. Any alarms are passed immediately by module 2610 to the Damage Estimator Module 2625. The output from 106 (MBR diagnostics engine) consists of multiple component fault codes corresponding to respective components' fault/failure and false alarm isolations as well as nominal data and functional analysis for the cause of fault/failure (e.g., leaking pipe, worn bearing, motor shaft misalignment, etc.). Both faulty and nominal data corresponds to BIT and parametric data that is passed to the Damage Estimator Module 2625. Alarms and corresponding decomposed fused data (i.e., data that is separated for degraded components in LRU/System Selection & Data Transport Module 2610 for each component and built into multiple streams that are simultaneously transferred into various modules with highest priority given to component data with alarms) from multiple sources are also passed to the Damage Estimator Module 2625 from the Data Fusion Module 1175 via the Data Interface 2605 and the LRU/System Selection & Data Transport Module 2610.
The LRU/System Selection & Data Transport Module 2610 identifies and separates the pertinent tagged data (tagged data with alarms getting the highest priority) associated with the component to be analyzed and the corresponding BIT, parametric, analog (direct sensor signals without analog-to-digital (A/D) conversion), discrete (hardware ON/OFF control signals passed via software bits), environmental (internal environmental conditions, e.g. humidity, pressure, temperature, dust, others), external meteorological (e.g., sand, dust, heat, wind, rain, others), and other air vehicle data. This data is transmitted to the Model Interface Module 2620. Model Interface Module 2620 creates multiple streams of data, one for each component. The tagged data is metadata obtained from the Anomaly Detector (high frequency sensors), low frequency sensors, and other data streams with tagged alarms and data fused with corroborated evidence data. Corroborated evidence data consists of data representing 1) no fault, 2) fault, 3) a false alarm, either false negative or false positive, and 4) functional analysis for the cause of 2) and 3). Corroborated evidence data from various interacting/interconnected component sensors for the current component fault/failure mode is received from the Anomaly/Degradation Detector Module 1160, fused in the Data Fusion Module 1175, and provided to the Data Interface Module 2605. This data is decomposed in the LRU/System Selection & Data Transport Module 2610 for the current component fault/failure mode. Independently corroborated evidence data is received from the MBR Diagnostics Engine 106, which performs fault/failure/false alarm isolations from the parametric and BIT data of various interacting/interconnected components' sensors for the current component fault/failure mode.
LRU/System Selection & Data Transport Module 2610 also requests, via the Maintenance History Database Interface 2615, case-based histories from the Maintenance History Database 2618 associated with the pertinent component maintenance data (including pilot squawks and maintainer notes post-flight), which are then compared in real time against current anomalous component behavior and alarm data. These histories are case-based records and stored parameter values for the pertinent component. Maintenance History DB Interface Module 2615 may, for example, utilize ANSI SQL statements to extract previously stored repair and replacement information and tests conducted, contained in these maintenance records written by the maintainer. The maintainer is the technician assigned to perform the maintenance on the aircraft/system components for which he/she has attained maintenance certification. These maintenance records also contain previously stored in-flight real time assessments made by the prognostics engine and stored in the Maintenance History Database 2618, which is preferably a relational database. The maintainer enters maintenance notes via a graphical user interface (GUI) when fixing, repairing, or replacing a component, along with additional visual observations on the status of a component's degradation. The pertinent records also contain the original BIT and sensor parametric recorded data from which degradation is determined. Recorded notes may also contain manually input explanations of the nature of an alarm, a functional analysis of why the component alarm was issued, and what remediation steps were taken to fix the alarm/problem.
The Model Interface Module 2620, based on the decomposed tagged data received from LRU/System Selection & Data Transport Module 2610 for a particular component, transmits a request to the Prognostics Model Database 2635 identifying the associated component and requesting that the Prognostics Model Database 2635 transmit the relevant physics-based model, e.g. an XML file, empirical model, e.g. an XML file, and physical system logical/functional model, e.g. an XML file to the Hybrid PS Model Module 2630. These models/files are the “blue prints” that contain the diagnostics and prognostics definition of and knowledge of the component in terms of the respective component attributes, functions, behaviors, and semantics. As will be explained in more detail with regard to
The Damage Estimator Module 2625 utilizes the residues from the physics-based model (i.e., physical system damage equations) and empirical model along with physical system logical/functional model to generate a representation of the degradation behavior of each component. The residues are the differences between the expected component attributes, functions, behaviors, and semantics and the corresponding attributes generated by the current data streams for the component being evaluated. Residues are generated for each of the three models during the evaluation of a component and are typically near zero for aircraft components with good performance. The level of degradation represents the severity level of the alarm displayed to the pilot/mission operator. The alarm levels may, for example, be correlated to the remaining useful life (RUL) of the component which is determined by the Damage Estimator Module 2625. An RUL between 70% and 51% may indicate a mild degradation behavior (assuming a gradual decline and not a sharp drop from recent RUL values), between 50% and 11% representing a medium degradation behavior, and below 11% requiring repair or replacement of the component. Of course, various percentages may result depending on the anticipated future wear/degradation and severity of future environments. The Damage Estimator Module 2625, RUL and EOL (end of life) determination are explained in more detail below.
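The RUL bands quoted above might be mapped to severity levels as in the following sketch; treating values above 70% as nominal is an assumption, and real percentages would vary with anticipated future wear and environments as the text notes:

```python
def degradation_severity(rul_percent):
    """Map a remaining-useful-life percentage onto the alarm severity
    bands quoted in the text: 70-51% mild degradation, 50-11% medium
    degradation, below 11% repair or replace. Above 70% is treated
    here (as an assumption) as nominal."""
    if rul_percent > 70:
        return "nominal"
    if rul_percent >= 51:
        return "mild degradation"
    if rul_percent >= 11:
        return "medium degradation"
    return "repair or replace"
```

As the text cautions, a band label is only meaningful for a gradual decline; a sharp drop from recent RUL values is handled separately by the slope logic of the Alarm Generator Module 2645.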
The Alarm Generator Module 2645 generates alarms based on the level of the RUL determined by the Damage Estimator Module 2625. It calculates the slope of the RUL from current and previous stored RUL data and generates the level of the alarm based on this slope and the current value of the RUL. For example, a change of slope greater than a predetermined amount would likely signal too rapid a degradation and cause an alarm even if the value of the RUL alone would not warrant generating an alarm. This is further described for
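A sketch of the slope-sensitive alarm logic described above; the slope limit and RUL floor are hypothetical parameters, not values from the specification:

```python
def rul_alarm(rul_history, slope_limit=5.0, rul_floor=50.0):
    """Raise an alarm when RUL drops too fast between successive
    evaluations, even while its absolute value is still acceptable,
    or when the current RUL itself falls below a floor. `slope_limit`
    (percent RUL lost per evaluation) and `rul_floor` are assumptions
    for illustration."""
    current = rul_history[-1]
    drop = rul_history[-2] - current if len(rul_history) > 1 else 0.0
    return drop > slope_limit or current < rul_floor
```

This captures the case noted in the text where a change of slope greater than a predetermined amount triggers an alarm even though the RUL value alone would not warrant one.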
The Algorithm Selector Module 2660 determines the algorithms utilized by the Damage Estimator Module 2625 and the State Predictor Module 2670. Algorithms 2668 associated with the component currently being analyzed are identified and loaded from the Algorithms Selector Module 2660 into memory for Damage Estimator Module 2625 and State Predictor Module 2670 for processing. The State Predictor Module 2670 uses historic past state and calculates the current and predicted future states for the component being analyzed (using the particle filter algorithm). These states are updated as new BIT, parametric sensor, analog, discretes, environment, etc. data relevant to the subject component are received. The Analyzed Data Database 2655 receives and stores all the analyzed data by the Damage Estimator Module 2625 along with hybrid model parameters from the Hybrid Model Module 2630 and the State Predictor Module 2670.
The Quality of Service (QoS) Calculator 2675 calculates an IVHM diagnostics and prognostics system level quality of service metric as well as component level quality of service metric. Some of these metrics are depicted in
- where:
- p=parametric sensor data
- Mpbm=physics-based model
- ypbm=parametric output of the physics-based model, i.e. ypbm(p)=Mpbm(p)u(p)
- u=input stream of parametric data
- with similar definitions for the empirical model (Mem, yem) and the physical system model (yps)
- The residues r′(p) and r(p) are defined as:
- r′(p)=ypbm(p)+yem(p)=Mpbm(p)u(p)+Mem(p)u(p)={Mpbm(p)+Mem(p)}u(p)
- r(p)=yps(p)−r′(p)
- A residual at or near zero is ideal, indicating that the component process and the model match. Of course, additive noise will change the residual and must be accounted for in the model if not eliminated from the system.
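The residue combination described by the equations above can be sketched as follows, treating the three model outputs as already-evaluated scalars:

```python
def residues(y_ps, y_pbm, y_em):
    """Combine the outputs of the three models per the residue equations:
    r' sums the physics-based and empirical model outputs, and r is the
    physical system model output minus r'. A near-zero r indicates the
    component process and the models agree."""
    r_prime = y_pbm + y_em
    r = y_ps - r_prime
    return r_prime, r
```

In the actual system each output is a vector produced from the parametric data stream u(p); the scalar form here only illustrates the summation structure of nodes 2735 and 2740.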
- The three-tier models contain the entire prognostics models of the component and are preferably stored as XML files. The Inputs 2705 for each of the models are the parametric data sensed for each respective component, i.e. the output of the Data Fusion Module 1175 and MBR Diagnostics Engine 106. The residue outputs 2725 and 2730 from the Physics-based Model Module 2710 and Empirical Model Module 2715, respectively, are summed by summation node 2735 with its output forming an input to summation node 2740. The residue output 2745 from the Physical System Model Module 2720 forms the other input to summation node 2740, which is subtracted from the other input, i.e. the combination of the addition of the residues from the Physics-based Model Module 2710 and Empirical Model Module 2715 (i.e., this combination of residues representing the difference between anticipated behavior vs observed behavior as shown in
FIG. 1 ).
The output 2751 of the summation node 2740 is an input to the Performance Estimation Module 2755, and the output 2750 is forwarded to the Damage Estimator Module 2625, stored in the Analyzed Data Database 2655, and used for model parameter refinements in the Prognostics Models DB 2635. State Predictor Module 2670 receives the updated analyses (consisting of model parameters and residues) from the Analyzed Data Database 2655. In the Performance Estimation Module 2755 the initial input is received from the Initial Performance Parameter Database 2760, which stores historical performance data on every aircraft/system component. A comparison of the parameter residues 2751 and the corresponding parameters obtained from database 2760 provides an input to the Enhanced Kalman Filter Observer Module 2765, which filters the input and provides an output to Performance Estimation Module 2755 containing a delta differential of performance. The output of Performance Estimation Module 2755 is a feedback loop routed to Physics-based Model 2710, where a predetermined performance difference from the expected and historical performance measures triggers a root cause analysis. Continuously decreasing performance is caused by correspondingly increasing degradation of the component.
As an example, a degraded brine pump was selected and monitored for its various component signals over tens of minutes of operation. The degradation for this pump's bearing performance and pump's power distribution performance is shown in
The Physics-based Model 2710 will contain a plurality of equations that characterizes the operation of the subject component based on physics of the component, e.g. electrical, mechanical, fluid dynamics, etc. It describes the nominal behavior and when component damage indication exists (from input parametric and BIT data streams), how this damage is expected to grow, both in quality and quantity. Damage indications may not be monotonic in nature. Damage could be caused by the intrinsic properties of the component (e.g., effects due to recovery in batteries, or semiconductors in power systems) or extrinsic effects such as incomplete/partial maintenance actions. Each fault mode may in general have a different damage propagation model. The Empirical Model 2715 is very helpful in capturing these differences in different component fault mode damage variations and possibly component healing (if hardware has this capability) of the component Physics-Based Model 2710. These equations will, of course, vary depending upon the particular component that is to be characterized. For example, the exemplary brine pump could be characterized with individual component operations as shown in Table 1 below.
The definitions of the above parameters are given in Table 2.
Typical brine pump nominal parameter values are given in Table 3.
The Empirical Model 2715 provides a model of the subject component data values that models normal system operation based on a statistically significant sample of operational data of the component. Such an empirical model based on historical performance data of the component allows for a wider variation of performance expectations than the physics-based model, since the same component may have been operated under different stress levels and/or in different environments. As an example, monitoring time dependent (at different times of the day over days, weeks, months, etc.) fluid flow through the brine pump impeller housing and pipes characterizes local brine pump operations and usage. The corresponding data values provide a statistically significant empirical model that empirically defines the brine pump as utilized in local aircraft/system operations and usage. By utilizing the empirical model, differing RUL and EOL predictions for identical brine pumps at different locations on the same aircraft/system, or identical brine pumps in other aircraft/systems, provide for real-world variations by which the RUL and EOL of monitored brine pumps can be judged. Such results provide increased accuracy for predictions of RUL and EOL.
The Physical System Model 2720 is a data-driven functional model. The prognostics nodes in the Physical System Model 2720 contain the expected usage parameters of the component pertinent to the mission profile of the aircraft/system and its operating modes (i.e., preflight, taxi, takeoff, loiter, etc.). The observed/measured behaviors, i.e. data values, are compared against the corresponding functional model, i.e. acceptable ranges (thresholds) of values for the corresponding measured data values, which are a “blue print” of acceptable behaviors with the residue 2745 of this comparison being the output.
For example, a subset of brine pump components of an exemplary Physical System Model 2720 is shown in
The Performance Estimation Module 2755 and the Enhanced Kalman Filter 2765 together measure component performance differences in time dependent sliding sensor data windows. The Enhanced Kalman Filter 2765 is used as an observer of component sensor parameter(s) over time. That is, it uses sensor parameter(s) history to monitor and calculate the change in the parameter(s) of the component over a variable time dependent sliding sensor parametric data window. The Enhanced Kalman Filter provides for nonlinear dynamics in component performance. It initially calculates the performance of the component from existing stored trained data (which is trained off-line), calculates differences with current data, and compares with historic component performance. Any change in performance is forwarded to the Performance Estimation Module 2755. The Enhanced Kalman Filter Observer 2765 and the Performance Estimation Module 2755 are founded on robust banks of two-stage Kalman Filters (in the first module 2765 used as an “observer”) where both simultaneously estimate the performance state and the degradation bias (if one is seen for the component; see
The two-stage Kalman Filter is depicted in more detail in
- uk+1≈C1uk+C2vk−C2zkyk+wku
- yk+1=yk+wky
- where
- uk is the input data stream
- vk is the first performance estimate
- C1 and C2 are constants
- the covariance matrix zk provides the parameter coupling between stage 1 and stage 2
- wku and wky are uncorrelated random Gaussian vectors
- The input data stream uk(p) and residue 2751 go to the Performance State Estimator Module 2756 of the Performance Estimation Module 2755, which calculates two sets of equations: 1) the time update equations and 2) the measurement update equations. These are distinguished in
FIG. 34 with subscripts (k+1|k) for time update equations and (k+1|k+1) for measurement update equations.
Time update equations are responsible for calculating a priori estimates by moving the state and error covariance 1 . . . n steps forward in time. Measurement equations are represented by a static calculation of uk+1 and are responsible for obtaining a posteriori estimates through feedback of measurements into the a priori estimates. Time dependent updates are responsible for performance prediction, while measurement updates are responsible for corrections in the predictions. This prediction-correction iterative process estimates states close to their real values. The measurement equations v(k+1|k+1) are passed to the Coupling Module 2757. New time dependent parameter states v(k+1|k) are passed to the Performance Optimization Module 2766; parameter states with subscript (k+1|k) are modified with parameter residues 2751, whereas parameter states with subscript (k+1|k+1) do not include residues in their determination. Performance History Module 2768 provides, maintains, and updates the history of optimized performance predictions. New optimized performance states yk+1 are produced by the Performance Optimization Module 2766 and passed to the Δ Performance Generator Module 2767. Module 2767 provides the dynamic Δ (delta) component parameter performance over the current number of forward time steps 1 . . . n. Module 2767 passes the time dependent delta updates y(k+1|k) to the Coupling Module 2757 and the measurement updates y(k+1|k+1) to the Error Correction Module 2758. The Coupling Module 2757 couples (solves for) the measurement updates v(k+1|k+1) and the time dependent delta prediction updates y(k+1|k) by solving the covariance matrix zk, resulting in the final performance parameters ṽ(k+1|k+1) corrected for errors in the Error Correction Module 2758.
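The predict-correct iteration underlying the time and measurement updates can be illustrated with a minimal scalar Kalman step; this is a single-stage, random-walk simplification for illustration only, not the full two-stage filter with its coupling covariance:

```python
def kalman_step(x, p, z, q, r):
    """One scalar predict-correct iteration: the time update produces
    the a priori estimate (k+1|k), and the measurement update folds in
    the observation z to produce the a posteriori estimate (k+1|k+1).
    q and r are the process and measurement noise variances."""
    # Time update (prediction): random-walk state model.
    x_prior = x
    p_prior = p + q
    # Measurement update (correction).
    k = p_prior / (p_prior + r)          # Kalman gain
    x_post = x_prior + k * (z - x_prior)
    p_post = (1 - k) * p_prior
    return x_post, p_post
```

Iterating this predict-then-correct cycle is what drives the estimated state close to its real value, exactly as the paragraph above describes for the two-stage case.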
The flow chart utilized with the two-stage Kalman Filter method is shown
The Damage Estimator Module 2625 makes a determination of the amount of damage for each component. For the brine pump example, the damage vector equation is given by (note that all variables in the physics-based model equations are directly measured parametric sensor values or derived values from these sensor parametric data):
- Friction Wear (sliding and rolling friction)
- rthrust(t)=wthrust rthrust ω²;
- rradial(t)=wradial rradial ω²;
- where
wthrust=the thrust bearing wear coefficient
wradial=the radial bearing wear coefficient
ω=pump rotational speed (defined earlier)
rthrust=the sliding friction
rradial=the rolling friction
- Damage vector
- d(t)=[a4(t), rthrust(t), rradial(t)]θ
- where
d(t)=is the damage vector
a4 (t)=is the impeller area coefficient (defined earlier)
rthrust (t)=is the sliding friction (defined earlier)
rradial (t)=is the rolling friction (defined earlier)
θ=is pump temperature
- Significant damage in brine pumps occurs due to bearing wear, which is a function of increased friction (i.e., subject to friction coefficients).
- The Wear Vector is formed by the wear coefficients:
- w(t)=ϕ(t)=[wa4, wthrust, wradial]θ
- where
ϕ(t) will be used as a parameter vector in the predictive algorithm, differentiating it from the weight calculations
θ=pump temperature
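A sketch evaluating the friction wear equations and assembling the damage vector; the numeric coefficient values in the test are hypothetical, since Table 3's nominal values are not reproduced here:

```python
def friction_wear_rates(w_thrust, w_radial, r_thrust, r_radial, omega):
    """Evaluate the bearing friction wear equations from the text:
    each friction term grows in proportion to its wear coefficient,
    its current friction value, and the square of pump rotational
    speed omega."""
    return (w_thrust * r_thrust * omega ** 2,
            w_radial * r_radial * omega ** 2)

def damage_vector(a4, r_thrust, r_radial):
    """Assemble the damage vector d(t) = [a4(t), rthrust(t), rradial(t)]."""
    return [a4, r_thrust, r_radial]
```

The quadratic dependence on pump speed reflects the text's observation that significant brine pump damage occurs through bearing wear driven by increased friction.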
- The RUL calculation in the Damage Estimator Module 2625 is identical to the EOL calculation discussed below for the State Predictor 2670, except that RUL is calculated for the current point in time and not a future projection/prediction in time. A typical RUL graph 3005 representing brine pump degradation over time is shown in
FIG. 30 . RUL may be estimated at any time in the aircraft/system operating history, even in the absence of faults/failures and/or component defects (i.e., the defect equation is valid for all operating conditions in either historic or current time horizons where data is available). Future predictions of RUL (where data is not available) are known as end-of-life (EOL) predictions.
State Predictor 2670 uses a state vector that is a time dependent equation. For the Brine Pump the complete state vector equation can be written as (note that all variables in the physics-based model equations are directly measured parametric sensor values or derived values from these parametric sensor data):
- x(t)=[ω(t), θthrust(t), θradial(t), θoil(t), a4(t), rthrust(t), rradial(t)]θ
- θthrust (t)=is the temperature at thrust bearings (defined earlier)
- θradial (t)=is the temperature at radial bearings (defined earlier)
- θoil=is the temperature of the oil (defined earlier)
- a4 (t)=the impeller area coefficient (defined earlier)
- rthrust(t)=is the sliding friction (defined earlier)
- rradial=is the rolling friction (defined earlier)
- ω(t)=is the pump motor rotation (defined earlier)
- State Predictor 2670 uses a state vector that is a time dependent equation, i.e., the state of the brine pump at any point in time. Together, the damage equation and the state equation define the physics of the component at any point in time.
- The prediction of EOL of the brine pump is calculated numerically using a particle filter algorithm that predicts the future state of the brine pump using the state vector equation and the damage equation as defined above. The future state particle probability density (note x is the state vector given above) is given by the particle filter (PF) process:
- The Particle Filter (PF) computes the probability density of the future state, p(xk+n|y0:k).
- This distribution is approximated in n steps,
- so that each particle i is propagated n steps forward without new data available, taking its weight as wki.
- The EOL probability density p(EOLk|y0:k) is approximated by the weighted sum Σi wki δ(EOLk−EOLki),
- i.e., propagate each particle forward to its own EOL while using the particle's weight at k for the weight of its EOL prediction.
The particle filter process is a robust approach that avoids the linearity and Gaussian noise assumption of Kalman filtering, and provides a robust framework for long time horizon prognosis while accounting effectively for uncertainties. Correction terms are estimated in off-line training/learning to improve the accuracy and precision of the algorithm for long time horizon prediction from collected Analyzed Data 2655. Particle filtering methods assume that the state equations that represent the evolution of the degradation mode in time can be modeled as a first order Markov process with additive noise, iteratively refined sampling weights, and conditionally independent outputs.
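The propagate-to-EOL step of the particle filter can be sketched as follows; the `propagate` and `failed` callables stand in for the component's state/damage equations and failure threshold, which are supplied by the models described above:

```python
def predict_eol(particles, weights, propagate, failed, max_steps=1000):
    """Propagate each particle forward (with no new data) until its
    state crosses the failure threshold; the EOL distribution keeps
    each particle's weight at time k as the weight of its EOL
    prediction. Returns weighted (steps-to-EOL, weight) samples."""
    eols = []
    for state in particles:
        x = state
        steps = 0
        while not failed(x) and steps < max_steps:
            x = propagate(x)
            steps += 1
        eols.append(steps)
    return list(zip(eols, weights))
```

A real implementation would propagate stochastic state/damage dynamics with process noise and resample the weights; the deterministic loop here only illustrates the propagate-to-threshold structure.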
Once the “Server” button icon in toolbar 3603 is changed from “OFF” to “ON” (by the mission operator in the mission operator mode), the fleet level prognostics system goes into an autonomous mode and establishes various data socket links with the COMMS System 3605, which receives data from various aircraft in flight or in other ground operations such as pre-flight testing, taxiing, and pre-launch. All GUI functions will typically run autonomously unless selected by the mission operator to run manually. The maintainer, via the GUI 3601, can direct the system to run in an offline “maintenance mode” (i.e. where new aircraft data is not acquired or COMMS 3605 is inactive) and run various calculations and predictions, generate reports as required for predictive maintenance schedules, and build maintenance schedules based on mean-time-between-failures (MTBF) of aircraft components. MTBF figures are developed at the design phase from component specifications from the component supplier reliability and test data. These assist in inventory control and supply chain operations in Enterprise Asset Management (EAM) systems, keeping components and parts stocked and ready for repair/replacement when predictive maintenance informs a need, as will be described in association with
Models, algorithms, and components are mapped in triplets (one, one, one), (one, many, one), or (one, many, many). Mappings (one, many, one) and (one, many, many) occur when components are similar/identical and multiple algorithms may produce accurate results for the component. Note that in this mapping scheme there is always exactly one model (i.e., the three-tier model combination) pertinent to the component. This allows for automatic selection of model-algorithm-component in Prognostics Models Module 3640. Data Distribution Module 3630 pulls data from the Storage Module 3626 database (it has access to database triggers that allow data to be pulled automatically as new data arrives) and distributes the “pulled” data to the Prognostics Models Module 3640 analysis system. Module 3640 selects the appropriate three-tier models unique to the component, with the appropriate algorithms applied to the component in Algorithms Module 3670 and chosen from the Algorithms Library 3680. This allows the model-algorithm-component system to perform the prognosis on the component. The Data Distribution Module 3630 transmits raw data to the Automated Logistics Environment (ALE) modules 3800 and 3830. Prognostics Models Module 3640 has a mapping of algorithms pertinent to the component model. Each component has an individually tailored algorithm (e.g., electronics systems will not have the same algorithm as mechanical systems or structural systems). Within the Prognostics Models Module 3640, the data is further distributed to a Physics-Based Prognostics Model 3641, an Empirical Prognostics Model 3642, and a Data Driven Prognostics Model 3643. Similarly, the Prognostics Algorithms Module 3670 consists of Physics Based Module 3671, Empirical Module 3672, and Data Driven Module 3673. Data is transmitted from Module 3641 to Module 3671, as are the cumulative model residues from Residue Distribution Module 3650.
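The triplet mapping can be sketched as a small registry keyed by component, with exactly one three-tier model entry and one or more algorithms per component. All names here are illustrative, not from the patent.

```python
# One three-tier model entry per component (always), one or more
# algorithms per component; all names are illustrative.
MAPPINGS = {
    "hydraulic_pump_A": {
        "model": ("physics_based", "empirical", "data_driven"),
        "algorithms": ["kalman_filter", "particle_filter"],  # (one, many, one)
    },
    "pressure_sensor_B": {
        "model": ("physics_based", "empirical", "data_driven"),
        "algorithms": ["neural_network"],  # (one, one, one)
    },
}

def select(component):
    """Automatic model-algorithm-component selection: returns the single
    three-tier model and the algorithms mapped to the component."""
    entry = MAPPINGS[component]
    return entry["model"], entry["algorithms"]

model, algorithms = select("hydraulic_pump_A")
```

Keeping the model entry a single tuple makes the "one and only one model per component" invariant structural rather than a convention to remember.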
This happens in the same way for Modules 3642, 3672, and 3650, as well as Modules 3643, 3673, and 3650, respectively. The processing and operations for these Modules are substantially the same as described for Module 2630 (in
A Test Manager 4355 utilizing a model from the Prognostics Model Library 4350 and corresponding algorithms from Algorithm Library 4385 can test the prediction accuracy based on simulated or recorded flight data from storage database 4360 (flight data migrated from 3626 Storage (
In the design mode, the user has the capability of choosing analysis for like components in like aircraft and time horizons (hours, days, months, years, and decades) via a graphical user interface (GUI), as well as the appropriate component algorithms. These choices may be written as XML files that include the ANSI SQL statements for component models and algorithms, and are stored in the Prognostics Model Library 4350 and Algorithms Library 4385 databases. Embedded SQL statements in the XML file allow for automatic extraction of data from the various databases when run in the Test Manager 4355. The XML file becomes a script file for running reproducible tests. Automated script file testing is very important for regression testing of models and algorithms applied to the component(s) under test. Script files are easily updated for new parameter refinements while old parameters are retained under the <histories></histories> label in the XML file. In the operational mode, in System 2600, the prognostics model and corresponding algorithm configuration files are transferred (model database migrated) to and stored on the aircraft in the Prognostics Models DB 2635 and Algorithms 2668 files, respectively. Similarly, in System 3600 these are stored in the Models Library DB 3660 and Algorithms Library DB 3680, and are utilized for processing in the Prognostics Models Module 3640 and Algorithms Module 3670. The fleet level analysis proceeds further in the Damage Estimator Module 3674 (similar processing control taken from System 2600 with modification of communication line 2600 in System 3600), which operates on the fleet data the same way as described for Damage Estimator Module 2625 operating on the on-board System 2600 data.
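A hypothetical XML test script of this kind, with an embedded SQL statement and retained parameters under <histories>, might be parsed as below. Only the <histories> label comes from the text; every other element name, the SQL statement, and the component name are assumptions for illustration.

```python
import xml.etree.ElementTree as ET

# Hypothetical script layout; only <histories> is named in the text.
SCRIPT = """\
<test_script component="fuel_pump">
  <algorithm>particle_filter</algorithm>
  <sql>SELECT ts, pressure FROM flight_data WHERE tail_no = 'AC-101'</sql>
  <histories>
    <param name="noise_scale" value="0.02"/>
  </histories>
</test_script>
"""

def load_script(xml_text):
    root = ET.fromstring(xml_text)
    return {
        "component": root.get("component"),
        "algorithm": root.findtext("algorithm"),
        "sql": root.findtext("sql").strip(),
        # Old parameters retained under <histories> for regression runs.
        "history": {p.get("name"): p.get("value")
                    for p in root.find("histories")},
    }

script = load_script(SCRIPT)
```

Because the SQL travels inside the script file, re-running the file reproduces the exact data extraction, which is what makes the regression tests repeatable.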
Similarly, the fleet level analysis proceeds in the State Predictor Module 3675 (similar processing control taken from System 2600 with modification for SOH calculation and communication line 2600 in System 3600) which operates on the fleet data the same way as described for State Estimator Module 2670 operating on the on-board System 2600 data.
The Algorithms Library DB 3680 stores all of the algorithms developed for all of the components on the aircraft for which performance analysis is available. The algorithms used in Algorithms Module 2668 are a small subset (i.e., neural networks, Kalman filter, and particle filter, which are efficient and fast for onboard processing) of the complete set of algorithms developed for the Fleet Level Prognostics System 3600. Some of these algorithms as applied to specific components include: logistic regression, linear regression, time series, symbolic time series, random forest, decision trees, moving averages, principal component analysis (PCA), support vector machines (SVM), Markov chains, Bayesian methods, Monte-Carlo methods, neural networks (of numerous types), particle-swarm optimization, various filters (Kalman and particle filters), fast Fourier transforms (FFT), gradient and Ada boost, wavelet, genetic, natural language, clustering, classification, learning & training, and data mining.
The output from the Damage Estimator Module 3674 is provided as an input to the Analysis Fusion Module 3690, which is shown in more detail in
The output from Analysis Fusion Module 3690 is provided as an input to the Metrics Module 3740 which provides the grouping of information on a user selectable basis. For example, the GUI 4400 of
An exemplary predictive maintenance schedule 4500 generated by the Reports Module 3750 (
The History and Training Data Database 3730 contains historical data collected from the analysis of data associated with all of the analyzed components during online and offline operations. The Training Systems Module 3720 utilizes the data contained in Database 3730 to periodically update the algorithms utilized in the Analysis Fusion Module 3690 in order to provide a more refined and accurate analysis.
The Analyzed Data Repository 3710 contains all of the data from the MBR Diagnostics Engine 106 (14), the PMBR onboard Prognostics Engine 2600, and the previously analyzed data from the PMBRGCS (the ground-based Fleet Level Prognostics Engine 3600). This information is made available to the Analysis Fusion Module 3690 to assist in increasing the accuracy of analysis and to provide access to this information by the Metrics Module 3740 in order to generate the user selected reports by Module 3750. Also, the analysis as made by Module 3690 is sent to the Database 3710 to be integrated with the historical data information. The Database 3710 also communicates with the Algorithms Library Database 3680 so that the algorithms can be continually refined. Refinements of both models & algorithms can occur autonomously in the Training Systems Module 3720 online or offline (since Module 3600 is running on a ground located server and is not a flight or mission critical system for aircraft operations). These refinements provide cumulative analysis in Module 3720 with data and analysis received from Module 3690, which receives data from Module 3710, which in turn receives data and analysis from Modules 3770 & 3780. The training algorithms for Module 3720 are obtained from the Algorithms Library DB 3680. With data from the History Database, and in conjunction with the moving-averages algorithm for existing analyses and the principal-component algorithm for parametric and BIT data of other modules (described above), high fidelity refinements can be made and stored for analysis of the next data for a like component in the History & Training Data DB Module 3730. The Prognostics Accuracy and Algorithms Tuning and Training Module 3770 also provides interactive communication with the Algorithms Library Database 3680 and provides input to the Analyzed Data Repository 3710.
The Module 3770 provides continuing accuracy updates as reflected in modifications of the corresponding algorithms utilized for analysis of the component data. Similarly, Module 3710 is also in communication with the Prognostics Models Tuning and Training Module 3780 which is in turn in communication with the models as stored in the Model Library Database 3660.
The Prognostics Outcome Database 3760 receives and stores the user selected metrics and data from Metrics Module 3740 and the corresponding report from Reports Module 3750. The Database 3760 is also in communication with the Analyzed Data Repository 3710, providing access to analyzed data. The Database 3760 also provides information to the Mission Dashboard 3785 and the Maintainer Dashboard 3790. Mission Dashboard 3785 is accessible in the mission operator mode (role-based access) of operating Fleet Level Prognostics System 3600. It provides situational awareness of all operating and ground-based aircraft, as well as access to all menus and buttons in the Fleet Level Prognostics System 3600 GUI Module 3601, except for the model design functions. Similarly, the Maintainer Dashboard 3790 is accessible to the maintainer in the maintainer mode of operation or the maintainer's role. The maintainer will have access to all the menus and buttons in the GUI 3601, except for the model design functions. The output from each of these dashboards is provided to Print Function Module 3795 when the print button is clicked in the GUI 3601, which then provides a printed output of the chosen item in the dashboard (see
In this illustrative embodiment, the data from the Data Distribution Module 3630 is provided to the Organic Automated Logistics Environment (ALE) (i.e., Maintenance and Support Systems) 3800 and the ALE Contractor Logistics 3830. The output from Module 3800 is provided to Data Mining Module 3810 which selects data for storage and pulls data for use from the Database Farm 3820. Similarly, the output from module 3830 is provided to Data Mining Module 3840 which selects data for storage and pulls data for use from the Database Farm 3850. The ALE maintenance and support systems (logistics operations both for military and its contractors) provide the necessary enterprise level applications for maintenance functions such as 3-levels of maintenance, repair and/or replacement, overhaul, and other organization functions such as inventory control, supply chain functions for spares and parts ordering, and so on. The Fleet Level Prognostics System 3600 provides the diagnostics and prognostics data and analysis to these ALE.
In the operational state 4102, the step 4165 Model & Algorithms Databases refers to the databases 3660 Models Library DB and 3680 Algorithms Library DB in System 3600, respectively. Steps 4170 to 4185 run in parallel with the existing processing in 3770 Prognostics Accuracy & Algorithms Tuning & Training and 3780 Prognostics Model Tuning & Training for algorithms and models, respectively. This occurs only when new data arrives (when COMMS 3605 is active and transmitting data to System 3600; otherwise existing data in 3626 Storage is utilized) for parameter performance tuning and for refinement of models and algorithms. Initial analysis and data are extracted in step 4101 Training Systems Learning & Training Algorithms, described above. With new data arrival, the sensitivity and specificity of model analysis are initiated in step 4170. For models, this analysis searches for changes in model output values that result from changes in the model input values. It also determines the uncertainty in the model parameters from input to output results. Algorithm sensitivity ascertains the rate of “true positive” degradations correctly identified, while algorithm specificity ascertains the rate of “true negative” degradations correctly identified. True negative rates tend to reduce the true positive rates when taken over all like components on an aircraft. Sensitivity and specificity enhance the accuracy of the RUL, EOL, and SOH measurements. Step 4175 determines the uncertainty in the metrics calculations, e.g. to ±5%. If this percentage is greater than the set amount, step 4170 is reinitiated; otherwise the process proceeds to step 4180 Verification and Validation Test. In step 4180 regression tests are performed (existing successful test scripts are executed) and compared with previous test results. If these test results pass in step 4185, the new analysis and data are transmitted to the step 4101 databases for storage and future processing.
If these test results do not pass, the analysis & processing goes back to step 4170 and the process continues as described above.
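The step 4170 sensitivity/specificity calculation and the ±5% uncertainty gate of step 4175 can be sketched as follows, using the standard confusion-matrix definitions; the counts and the tolerance handling are illustrative assumptions.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity: fraction of true degradations correctly flagged.
    Specificity: fraction of healthy cases correctly cleared."""
    return tp / (tp + fn), tn / (tn + fp)

def within_uncertainty(estimate, truth, tolerance=0.05):
    """Step-4175-style gate: accept the metric only if it is within
    the set amount (here +/-5%) of the reference value."""
    return abs(estimate - truth) / truth <= tolerance

# Illustrative counts pooled over all like components on an aircraft.
sens, spec = sensitivity_specificity(tp=90, fn=10, tn=180, fp=20)
```

If the gate fails, the flow loops back to step 4170 for another tuning pass, mirroring the feedback described above.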
SOH is a percentage score that depends on the relevant physics, chemistry, and/or biology (i.e., biotechnology) of the component; for instance, the SOH for a battery is estimated by dividing the maximum residual capacity Ci of the i-th cycle by the nominal capacity C0:
- SOH = (Ci / C0) × 100%
- SOH for batteries depends on cyclic charging and discharging, while its RUL and EOL depend on its constituent materials' physical and chemical properties. For other components this process is not cyclic, e.g., structures, mechanical equipment, electronics boards, rotating machinery, etc. For example, sand particles in an oil-water mixture can cause serious erosion in choke valves. When erosion occurs, the valve fluid flow coefficient Cv increases due to the increased valve opening. The SOH for this system would be:
- Cf is the coefficient of friction in pipe flow given by the Darcy-Weisbach equation:
- Hf = Cf × (L / D) × v² / (2g)
- where
- Hf is the head loss (ft)
- L is the length of pipe (ft)
- D is the inner diameter of pipe (ft)
- v is the velocity of fluid (ft/s)
- g is the gravitational acceleration (ft/s²)
- Cf is generally provided in the corresponding supplier specification. Cv is not measured directly, but it can be calculated from sensor data, i.e., upstream pressure, downstream pressure, temperature, flow rates, fluid density, and entropy, depending on the sensors available within and near the choke valves.
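The head-loss relation and an indirect Cv estimate from sensor data can be sketched as below. The Darcy-Weisbach form follows the variables listed above; the Cv formula is the standard liquid-service valve-sizing relation Cv = Q·sqrt(SG/ΔP), used here as an assumed stand-in since the patent's own Cv computation is not shown.

```python
import math

G_FT_S2 = 32.174  # gravitational acceleration g (ft/s^2)

def head_loss(cf, length_ft, diameter_ft, velocity_ft_s):
    """Darcy-Weisbach head loss: Hf = Cf * (L / D) * v^2 / (2 g)."""
    return cf * (length_ft / diameter_ft) * velocity_ft_s**2 / (2 * G_FT_S2)

def flow_coefficient(flow_gpm, upstream_psi, downstream_psi, specific_gravity=1.0):
    """Indirect Cv from sensor data, via the standard liquid-service
    valve-sizing relation Cv = Q * sqrt(SG / dP) (an assumption here)."""
    dp = upstream_psi - downstream_psi
    return flow_gpm * math.sqrt(specific_gravity / dp)

hf = head_loss(cf=0.02, length_ft=100.0, diameter_ft=0.5, velocity_ft_s=10.0)
cv = flow_coefficient(flow_gpm=50.0, upstream_psi=120.0, downstream_psi=95.0)
```

Tracking Cv over successive flights would expose the erosion-driven growth in valve opening that the text describes.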
- Another example is crack growth in structures (such as turbine blades of a turbofan aircraft engine, the turbofan housing, helicopter blades, wing structures, and others). The flow diagram is shown in FIG. 45. Finite element analysis (FEA) 4610 (using available data 4605) is performed during the design phase with an initial mesh that is a partition of the problem crack space into elements (smaller spatial cells or zones) over which partial differential equations can be solved, producing the result over the larger domain of the crack space. In the operational phase 4615-4650 in FIG. 45, step 4615 extracts the structure fracture parameters from the FEA 4610, processes these parameters with incoming sensor data, calculates the crack propagation direction and its growth in length, and then registers the new position of the crack tip. Step 4625 determines the crack length in all directions and compares it with previous calculations or with a previous flight calculation. In step 4630 SOH is calculated and transmitted to Module 3690. The FEA 4610 also calculates the critical crack length “Lcritical” that would cause aerodynamic instability of the aircraft. If “Lcritical” is reached in step 4635, an alarm is generated in step 4640 and shown on the Mission Operator Dashboard 3785 as a blinking alarm for immediate action. If the critical crack length is not reached in decision step 4635, processing continues in step 4645 with an update of the crack parameters, and a re-mesh is applied in step 4650 with these new crack growth parameters. Processing continues with feedback to step 4615. SOH for these components would indicate the percentage of crack propagation over time and its criticality. SOH would depend on the material composition (i.e., micro-structural deformities and material impurities), environmental conditions, and the usage of the aircraft under various operating maneuvers and missions (high and low acceleration, fast turning, and fast ascent & descent having the effect of increasing and decreasing gravitational forces). As shown in FIG. 47, the crack growth can be a small crack which grows slowly over time (4805) or a long crack (4810) which grows quickly over time and may propagate to critical system failure in a very short time interval, depending on the cyclic external load cycle N (i.e., da/dN 4815) and the increasing tearing of the area around the crack due to this cyclic external load and the stress (4820) on the structural material. The SOH is given mathematically by the crack growth rate:
- da/dN = C × (ΔK)^m
- where:
- a=crack length
- N=load cycle
- ΔK=the stress intensity factor
- “C” and “m” are experimentally obtained from frequency, temperature, stress ratio and
- environmental conditions
- ΔK = sN√(π×a)
- where s=applied stress
- It is noted that the battery example deals with cyclic charging and discharging due to the battery's internal chemical and physical structure as it provides electricity to an external load, whereas in the crack growth example an external cyclic load causes deformation in the structure, leading to tearing shear stresses & strains in the material that cause the crack to grow and tear. This external load on the structure could have been continuous rather than cyclic.
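The cyclic crack-growth behavior above can be sketched by numerically integrating the crack growth rate da/dN = C·(ΔK)^m with a stress intensity factor of the form ΔK = s·√(π·a). This is an illustrative sketch: the parameter values are invented, and no geometry correction factor is applied.

```python
import math

def crack_growth_cycles(a0, a_crit, c, m, stress, da=1e-5):
    """Number of load cycles N for a crack to grow from a0 to a_crit,
    integrating da/dN = c * (dK)**m with dK = stress * sqrt(pi * a)."""
    a, cycles = a0, 0.0
    while a < a_crit:
        dk = stress * math.sqrt(math.pi * a)
        rate = c * dk**m       # crack growth per cycle (da/dN)
        cycles += da / rate    # cycles consumed growing by da
        a += da
    return cycles

def soh_from_crack(a, a_crit):
    """SOH as the percentage of the critical crack length not yet consumed."""
    return max(0.0, 100.0 * (1.0 - a / a_crit))

n_cycles = crack_growth_cycles(a0=0.001, a_crit=0.01, c=1e-11, m=3.0, stress=100.0)
```

Because ΔK grows with crack length, each increment of growth consumes fewer cycles than the last, reproducing the slow small-crack / fast long-crack behavior of curves 4805 and 4810.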
In step 4635 a determination is made of whether the critical crack length Lcritical has been reached for which structural failure is projected. A YES determination by step 4635 results in an alarm being generated at the mission dashboard in step 4640. The generation of the alarm is appropriate since the critical length Lcritical of the crack to potentially create a structural failure has been determined. A NO determination by step 4635, indicating that the critical crack length has not yet reached Lcritical projected to cause a structural failure, results in step 4645 updating the crack parameters, sensor parameters, new environmental condition and mission profile, and new usage parameters. Following this updating, in step 4650 another calculation iteration is continued by returning to step 4615 so that continuing determinations can be made by step 4635 based on new information.
- From this graph it should be apparent that at the onset of potential failure 4710, preventive maintenance 4715 will typically be substantially less cost-effective than predictive maintenance 4730, since a component will often have substantial RUL beyond the scheduled time of preventive maintenance 4715. Hence, maintenance based on actual performance, i.e., predictive maintenance, will obtain a longer operational lifetime of the component and thus minimize component replacement/repair costs as well as the labor costs for performing the maintenance, while keeping the aircraft operational longer.
- In FIG. 46B the dashed line 4755 represents a Maintenance Value 4750 of zero. This value represents the transition point between aircraft downtime, i.e., Mean Logistics Delay Time (MLDT) plus Mean Time To Repair (MTTR) (due to work orders for unscheduled maintenance), and the aircraft's ready-for-operation state 4755. MLDT can be due to various delays in administrative work, lack of transportation, long round-trip transportation times, and other reasons. Once MLDT has been resolved, additional MTTR is needed to perform the maintenance repair or replacement action. Note that the positive peak in graph 4760 and its direct fall to zero generally represent the component nearing its EOL, which may lead to unscheduled maintenance (a complete overhaul for an expensive component).
Claims
1-7. (canceled)
8. A method implemented by a ground-based computing system to improve the accuracy of component degradation determinations on-board an aircraft comprising the steps of:
- receiving data of performance parameters for like components disposed on a plurality of like aircraft and storing the data in memory associated with a microprocessor in the ground-based computing system;
- determining by the microprocessor, by at least two models of each of the corresponding like components, corresponding levels of degradation and rates of change of degradation for the respective like components based on the received data;
- generating by the microprocessor a fleet level of degradation for each group of like components based on the degradations of the like components in each group;
- modifying by the microprocessor at least one of the at least two models of like components based on the fleet level degradation to improve the accuracy of the at least one model;
- transmitting from the ground-based computing system the modified at least one model to the like aircraft so that the modified at least one model can be used to replace a prior version of the at least one model to generate on-board degradation analysis, thereby enhancing the accuracy of on-board degradation analysis based on fleet level data.
9. The method of claim 8 wherein:
- the data includes identification of an operational mode of the aircraft for the performance parameters;
- the determining step determines the corresponding levels of degradation and rates of change of degradation for the respective like components based on the stored performance parameters and the operational mode of the aircraft associated with the respective performance parameters.
10. The method of claim 8 wherein:
- the determining step determines at least one of a remaining useful lifetime (RUL) and a state-of-health (SOH) for each of the respective like components based on a comparison of the levels of degradation for each of the like components and the fleet level of degradation of the group of like components;
- the determining step determines a predicted time for maintenance for each like component based on the corresponding at least one of the RUL and SOH of the like component, thereby enabling cost effective maintenance determinations for components based on fleet level information.
11. The method of claim 8 wherein the receiving of data includes transferring a plurality of like components' data from a respective like aircraft to the ground-based computing system while the aircraft is on the ground, the data based on component performance parameters stored in an aircraft data storage device while the aircraft was in flight.
12. The method of claim 8 wherein the receiving of data includes transferring via a wireless communication link a plurality of like components' data from a respective like aircraft to the ground-based computing system while the aircraft is in flight.
13. The method of claim 8 wherein the determining corresponding levels of degradation and rates of change of degradation for the respective like components comprises:
- determining, by at least one of a physics-based model of the respective like components and an empirical model of the respective like components, a first residue that is a difference between a current state of the performance parameters of the like components and corresponding states of the performance parameters of the like components as determined by the at least one of the physics-based model and the empirical model;
- determining, by a physical system model of the respective like components, a second residue that is a difference between the current state of the performance parameters of the like components and predetermined ranges of states of performance parameters of the corresponding like components as determined by the physical system model; and
- determining, based on a combination of the first and second residues, a level of degradation for the respective like components and a rate of change of degradation for the respective like components;
- determining a current state of health (SOH) for each like component;
- comparing for each like component the current corresponding SOH with a state of 100% performance to generate a current SOH for the like component.
14. The method of claim 10 wherein determining the RUL for each of the respective like components comprises:
- determining the RUL for each like component based on a previous determination of RUL of the like component and the rate of degradation of performance of the respective like component.
15. A computer readable non-transitory storage medium having data stored therein representing software instructions executable by a computer, the software including instructions to implement a ground-based computing system to improve the accuracy of component degradation determinations on-board an aircraft, the storage medium comprising:
- instructions for storing, in memory associated with a microprocessor in the ground-based computing system, received data of performance parameters for like components disposed on a plurality of like aircraft;
- instructions to cause determining by the microprocessor, by at least two models of each of the corresponding like components, corresponding levels of degradation and rates of change of degradation for the respective like components based on the received data;
- instructions to cause generating by the microprocessor a fleet level of degradation for each group of like components based on the degradations of the like components in each group;
- instructions to cause modifying by the microprocessor at least one of the at least two models of like components based on the fleet level degradation to improve the accuracy of the at least one model;
- instructions to cause transmitting from the ground-based computing system the modified at least one model to the like aircraft so that the modified at least one model can be used to replace a prior version of the at least one model to generate on-board degradation analysis, thereby enhancing the accuracy of on-board degradation analysis based on fleet level data.
16. The medium of claim 15 wherein:
- the data includes identification of an operational mode of the aircraft for the performance parameters;
- the instructions to cause determining includes determining the corresponding levels of degradation and rates of change of degradation for the respective like components based on the stored performance parameters and the operational mode of the aircraft associated with the respective performance parameters.
17. The medium of claim 15 wherein:
- the instructions to cause the determining includes determining at least one of a remaining useful lifetime (RUL) and a state-of-health (SOH) for each of the respective like components based on a comparison of the levels of degradation for each of the like components and the fleet level of degradation of the group of like components;
- the instructions to cause the determining includes determining a predicted time for maintenance for each like component based on the corresponding at least one of the RUL and SOH of the like component, thereby enabling cost effective maintenance determinations for components based on fleet level information.
18. The medium of claim 15 wherein the instructions to cause the determining comprises:
- instructions to cause determining, by at least one of a physics-based model of the respective like components and an empirical model of the respective like components, a first residue that is a difference between a current state of the performance parameters of the like components and corresponding states of the performance parameters of the like components as determined by the at least one of the physics-based model and the empirical model;
- instructions to cause determining, by a physical system model of the respective like components, a second residue that is a difference between the current state of the performance parameters of the like components and predetermined ranges of states of performance parameters of the corresponding like components as determined by the physical system model; and
- instructions to cause determining, based on a combination of the first and second residues, a level of degradation for the respective like components and a rate of change of degradation for the respective like components;
- instructions to cause determining a current state of health (SOH) for each like component;
- instructions to cause comparing for each like component the current corresponding SOH with a state of 100% performance to generate a current SOH for the like component.
19. The medium of claim 17 wherein the instructions to cause the determining of the RUL for each of the respective like components comprises:
- instructions to cause determining the RUL for each like component based on a previous determination of RUL of the like component and the rate of degradation of performance of the respective like component.
Type: Application
Filed: Jan 15, 2021
Publication Date: Oct 28, 2021
Inventor: Sunil Dixit (Torrance, CA)
Application Number: 17/150,752