Method of Hierarchical Machine Learning for an Industrial Plant Machine Learning System

- ABB Schweiz AG

A method of hierarchical machine learning includes receiving a topology model having information on hierarchical relations between components of the industrial plant, determining a representation hierarchy comprising a plurality of levels, wherein each representation on a higher level represents a group of representations on a lower level, wherein the representations comprise a machine learning model, and training an output machine learning model using the determined hierarchical representations.

Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This patent application claims priority to International Patent Application No. PCT/EP2021/058476, filed on Mar. 31, 2021, and to International Patent Application No. PCT/EP2020/059169, filed on Mar. 31, 2020, each of which is incorporated herein in its entirety by reference.

FIELD OF THE DISCLOSURE

The present disclosure relates to a computer-implemented method of hierarchical machine learning for an industrial plant machine learning system, as well as an industrial plant machine learning system.

BACKGROUND OF THE INVENTION

In industrial plants, a relatively high number of signals and events produce data that can potentially be leveraged to train machine learning, ML, models for tasks like event prediction or process monitoring, in particular anomaly detection and soft sensors. Not all of the data is equally important for the various tasks, and either feature engineering has to be done manually or large amounts of data are required to use ML methods that identify relevant features automatically, in particular deep artificial neural networks. At the same time, the curse of dimensionality makes machine learning models very prone to overfitting on high-dimensional input data. In short, one has to choose between (a) manual feature selection and engineering or (b) the requirement of a very large amount of training data.

BRIEF SUMMARY OF THE INVENTION

According to an aspect of the disclosure, a computer-implemented method of hierarchical machine learning for an industrial plant machine learning system comprises the following steps. In one step, a topology model, comprising structural information on hierarchical relations between components of the industrial plant is received by a machine learning unit. The components comprise data signals of sensors of the industrial plant and hierarchical units, wherein the hierarchical units comprise assets, plant sub-units, plant units and plant sections of the industrial plant. In another step, the machine learning unit determines a representation hierarchy comprising a plurality of levels using the received data signals and the received topology model, wherein the representation hierarchy comprises a signal representation for each of the plurality of received data signals and a hierarchical representation for each of the hierarchical units on different levels. Each representation on a higher level represents a group of representations on a lower level.

Each of the signal representation and the hierarchical representation comprise a machine learning model. In another step, an output machine learning model of the machine learning unit is trained by the machine learning unit using the determined hierarchical representations.

The term “topology model,” as used herein, comprises a tree-like structure comprising root nodes, wherein each root node represents a data signal, and hierarchical nodes, wherein each hierarchical node represents a hierarchical unit. In other words, a first level of hierarchical nodes groups the root nodes. The hierarchical nodes of the first level might be grouped again by hierarchical nodes of a second level. For each higher level, the number of representations is reduced. A node of a higher level grouping a plurality of nodes of a lower level is referred to as a parent node, and the plurality of grouped nodes of the lower level are referred to as children nodes.
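By way of illustration, such a tree-like topology model may be sketched as follows. This is a minimal sketch; the node names and the grouping are hypothetical and serve only to show the parent/children structure described above:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One node of the topology model: a data signal (leaf/root node)
    or a hierarchical unit (hierarchical node)."""
    name: str
    children: list = field(default_factory=list)

    def is_leaf(self):
        return not self.children

# Hypothetical plant: two signals grouped under asset "A", one under
# asset "B", both assets grouped under sub-unit "SU".
s11, s12, s21 = Node("S1,1"), Node("S1,2"), Node("S2,1")
asset_a = Node("A", [s11, s12])
asset_b = Node("B", [s21])
su = Node("SU", [asset_a, asset_b])

def count_per_level(root):
    """Breadth-first count of nodes per level, from the top down;
    the count shrinks toward the top, reflecting the reduction in
    the number of representations on higher levels."""
    counts, frontier = [], [root]
    while frontier:
        counts.append(len(frontier))
        frontier = [c for n in frontier for c in n.children]
    return counts
```

In this toy tree, `count_per_level(su)` yields `[1, 2, 3]`, i.e. one node on the target level, two on the intermediate level and three leaves on the bottom level.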

The topology model is preferably determined using plant diagrams, in particular diagram names, e.g. piping and instrumentation diagrams (P&ID) or process flow diagrams (PFD), or naming schemes like the identification system for power stations (KKS) or customer-specific naming schemes. Preferably, a hierarchical representation is determined for the topology model. For example, from the plant diagram a flow direction of the product in a production process, parallel production paths, parallel sensors, as well as a location of the sensors on different components of the plant (e.g. upper or lower sensor of a tank), are determined. Based on this, a hierarchical relationship is determined.

The term “representation hierarchy,” as used herein, comprises a tree-like structure in line with the topology model. The representation hierarchy reflects which representation of a higher level is learned based on which representations of a lower level.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING(S)

FIG. 1 is a topology model of an industrial plant in accordance with the disclosure.

FIG. 2 is a diagram of a representation hierarchy of the industrial plant in accordance with the disclosure.

FIG. 3 is a diagram of a combination of machine learning models by training on compressed data in accordance with the disclosure.

FIG. 4 is a diagram of a combination of machine learning models by training on reconstructed data in accordance with the disclosure.

FIG. 5 is a schematic of the method of hierarchical machine learning for an industrial plant machine learning system in accordance with the disclosure.

FIG. 6 is a schematic of an industrial plant machine learning system in accordance with the disclosure.

DETAILED DESCRIPTION OF THE INVENTION

The reference symbols used in the drawings, and their meanings, are listed in summary form in the list of reference symbols. In principle, identical assembly parts are provided with the same reference symbols in the figures.

Preferably, the functional modules and/or the configuration mechanisms are implemented as programmed software modules or procedures, respectively; however, one skilled in the art will understand that the functional modules and/or the configuration mechanisms can be implemented fully or partially in hardware.

FIG. 1 shows a topology model T of an industrial plant 20. The topology model T comprises a bottom level Lb, also called bottom layer, and a target level Lt, also called target layer, with an intermediate level Li, also called intermediate layer, in between. In the bottom level Lb, signals of different assets are grouped. Thus, the bottom level Lb comprises data signals of an asset A, indicated S1,1 to S1,n, data signals of an asset B, indicated S2,1 to S2,m, and data signals of an asset α, indicated So,1 to So,α. On the intermediate level Li, the respective assets A, B and α are indicated. On the target level Lt, a sub-unit SU is indicated, comprising all the assets A, B and α of the intermediate level Li. In this hierarchical display in the form of a so-called tree, each entry is called a node. Nodes of a lower level combined into a node of a higher level are called children nodes, and the combining node of the higher level is called the parent node.

FIG. 2 shows hierarchical representations H of the industrial plant 20 of FIG. 1. For each signal Si,j in the bottom level Lb, a representation AEi,j is learned. The same applies to the assets A, B and α in the intermediate level Li, wherein for each asset a representation AEA, AEB and AEα is learned. Consequently, this also applies to the target level Lt, wherein for the sub-unit SU a representation AESU is learned. The machine learning unit 10 can then be trained on the final representation AESU of the sub-unit. In this case, the representations AE1,1 of signal 1 of asset A to AE1,n of signal n of asset A are used to learn the representation AEA of asset A. Furthermore, the representations AE2,1 of signal 1 of asset B to AE2,m of signal m of asset B are used to learn the representation AEB of asset B. Furthermore, the representations AEo,1 of signal 1 of asset α to AEo,α of signal o of asset α are used to learn the representation AEα of asset α. Then, the representation AEA of asset A, the representation AEB of asset B and the representation AEα of asset α are used to learn the representation AESU of subunit SU. The output machine learning model 11, providing the prediction for the given task for the industrial plant 20, is thus only trained on the machine learning model of representation AESU of subunit SU, instead of on the raw data signals S.

In industrial plants 20, a relatively high number of signals and events produce data that can potentially be leveraged to train machine learning models 10 for tasks like event prediction, process monitoring, anomaly detection, soft sensors, and so on. Not all of the data is equally important for the various tasks, and either feature engineering has to be done manually or large amounts of data are required to use machine learning methods that identify relevant features automatically, e.g. deep artificial neural networks. At the same time, the curse of dimensionality makes machine learning models 10 very prone to overfitting on high-dimensional input data. In other words, one has to choose between (a) manual feature selection and engineering or (b) the requirement of a very large amount of training data.

Instead of learning directly on the possibly large number of individual data signals S in an industrial plant, representations of hierarchical units, in particular assets, sub-units, units and plant sections, are learned, and lower-level representations are combined step-wise on the next higher level in a self-supervised fashion.

This is a bottom-up iterative method that starts with treating each data signal (leaf node in the bottom level of the tree) individually to build its representation. Then, the method goes on agglomerating these representations in groups based on their parent node in the plant hierarchy until all representations are merged into a single representation that contains all the data.
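The bottom-up agglomeration described above may be sketched as follows. The tree, the node names and the stand-in "learner" (which here merely records which inputs each representation was built from) are illustrative assumptions, not the disclosed models:

```python
def build_representations(node, learn):
    """Bottom-up pass: first build a representation for each leaf
    (individual data signal), then agglomerate the children's
    representations at each parent node up to the root."""
    if not node["children"]:                        # leaf = data signal
        return {node["name"]: learn([node["name"]])}
    reps = {}
    for child in node["children"]:
        reps.update(build_representations(child, learn))
    child_names = [c["name"] for c in node["children"]]
    reps[node["name"]] = learn(child_names)         # parent merges children
    return reps

# Toy topology: signals S1, S2 under asset A, signal S3 under asset B,
# both assets under sub-unit SU (names are illustrative only).
tree = {"name": "SU", "children": [
    {"name": "A", "children": [{"name": "S1", "children": []},
                               {"name": "S2", "children": []}]},
    {"name": "B", "children": [{"name": "S3", "children": []}]},
]}

# Stand-in learner; a real system would fit e.g. an autoencoder here.
reps = build_representations(tree, learn=lambda inputs: tuple(inputs))
```

The resulting dictionary shows the agglomeration order: `reps["A"]` was built from `("S1", "S2")`, and the root representation `reps["SU"]` from `("A", "B")`, i.e. all data is eventually merged into a single representation.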

For example, each of the first asset A, the second asset B and the third asset α has 12 signals available as input. If a machine learning model has to predict trips of the main pumps in the reinjection pump subsystem based on data with a sampling rate of one minute and an input size of 60 minutes, that results in 2160 data points in the machine learning model input. This high dimensionality makes the machine learning model prone to overfitting and will require large amounts of data. However, trip events are rare events, and the resulting (large) data set would be very imbalanced.

For asset A, the input is composed of 12 signals, resulting in 720 inputs. The self-learning algorithm (e.g. an autoencoder) can compress the data to, for instance, 64 outputs. Asset B and asset α also have 12 signals each, likewise resulting in 720 inputs that can be compressed to 64 outputs. The machine learning model of representation AESU of subunit SU then has 192 input points, which it might or might not further compress to a lower number of outputs. The output machine learning model 11 is then trained on the 192 data points.
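The dimensionality bookkeeping of this example can be checked directly; the constant names below are introduced only for this illustration:

```python
SIGNALS_PER_ASSET = 12
SAMPLES_PER_WINDOW = 60   # one-minute sampling over a 60-minute input window
LATENT_SIZE = 64          # example autoencoder bottleneck size per asset
NUM_ASSETS = 3            # assets A, B and α

raw_inputs_per_asset = SIGNALS_PER_ASSET * SAMPLES_PER_WINDOW  # 12 * 60
raw_inputs_total = raw_inputs_per_asset * NUM_ASSETS           # without hierarchy
compressed_total = LATENT_SIZE * NUM_ASSETS                    # input to AESU
```

The raw, flat input would have 2160 points per sample, whereas the hierarchical compression reduces the output model's input to 192 points.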

In the training process, first each model can be trained individually to learn to compress the data in a self-supervised fashion. The output machine learning model might be trained independently on the output of the final compression layer, or it might propagate its losses to the models in the hierarchy to fine-tune their parameters to include information relevant for the output machine learning model 11.

FIG. 3 shows a combination of machine learning models by training on compressed data Dc. The representations AE, being machine learning models in the form of an autoencoder, encode the data signals S, or in other words, the raw data of the industrial plant 20. The representations AE are thus referred to as children encoders. The representations AE provide compressed data Dc, being compressed data of the data signals S. The parent encoder in this case is the output machine learning model 11, being trained on the compressed data Dc. The output machine learning model 11 thus provides its output, a prediction P, based on the compressed data Dc.
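A minimal sketch of this pipeline follows, with block-averaging standing in for a trained child autoencoder and a trivial aggregate standing in for the parent model; both stand-ins, the window contents and the latent size are assumptions for illustration only:

```python
def child_encoder(window, latent_size):
    """Stand-in for a trained child encoder: compresses a raw signal
    window to `latent_size` values by block-averaging."""
    block = max(1, len(window) // latent_size)
    return [sum(window[i:i + block]) / len(window[i:i + block])
            for i in range(0, len(window), block)][:latent_size]

def parent_model(compressed_children):
    """Stand-in for the parent/output model 11: it only ever sees the
    compressed data Dc, never the raw signals S."""
    flat = [v for child in compressed_children for v in child]
    return sum(flat) / len(flat)   # trivial 'prediction' score

raw_a = list(range(60))            # hypothetical 60-sample window, asset A
raw_b = [x * 2 for x in range(60)] # hypothetical window, asset B
dc = [child_encoder(raw_a, 4), child_encoder(raw_b, 4)]
prediction = parent_model(dc)
```

Each child reduces 60 raw samples to 4 compressed values, so the parent operates on 8 inputs instead of 120.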

FIG. 4 shows a combination of machine learning models by training on reconstructed data. The setup is similar to the data pipeline described in FIG. 3. However, the compressed data Dc is provided to children decoders D before the data is provided to the output machine learning model 11. The children decoders D determine reconstructed and/or de-noised data Dr based on the provided compressed data. The output machine learning model 11 is thus trained on the reconstructed data Dr instead of the compressed data Dc. The output machine learning model 11 thus provides its prediction based on the reconstructed data Dr.
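The decode step can be sketched with the same block-averaging stand-in for the encoder; the decoder here simply expands each latent value back over its block. Encoder, decoder and the raw window are illustrative assumptions, not the disclosed autoencoder:

```python
def child_encoder(window, latent_size):
    """Block-average compression, standing in for a trained encoder."""
    block = len(window) // latent_size
    return [sum(window[i:i + block]) / block
            for i in range(0, len(window), block)]

def child_decoder(code, window_len):
    """Stand-in child decoder D: expands each latent value back over its
    block, yielding reconstructed data Dr with the original length. Detail
    lost in compression (here: within-block variation) does not return,
    which is why the round trip acts as de-noising."""
    block = window_len // len(code)
    return [v for v in code for _ in range(block)]

raw = [10, 12, 11, 9, 50, 52, 49, 51]          # hypothetical raw window
code = child_encoder(raw, 2)                    # compressed data Dc
reconstructed = child_decoder(code, len(raw))   # reconstructed data Dr
```

Note that `reconstructed` has the same length as the raw window but keeps only the two block levels (about 10 and about 50); the small fluctuations around them are gone.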

The data pipeline described in FIGS. 3 and 4 also applies to a setup, in which the parent encoder is a hierarchical representation of an intermediate level instead of the output machine learning model 11 of the target level.

FIG. 5 shows a schematic view of the method of hierarchical machine learning for an industrial plant machine learning system.

In a first step S10, a topology model T, comprising structural information on hierarchical relations between components of the industrial plant 20, is received by a machine learning unit 10, wherein the components comprise data signals S of sensors of the industrial plant 20 and hierarchical units A, SU, wherein the hierarchical units A, SU comprise assets A, plant sub-units SU, plant units and plant sections of the industrial plant 20, and wherein the topology model T comprises root nodes, wherein each root node represents a data signal S, and hierarchical nodes, wherein each hierarchical node represents a hierarchical unit A, SU. In a second step S20, the machine learning unit 10 determines a representation hierarchy H comprising a plurality of levels Lb, Li, Lt using the received data signals S and the received topology model T, wherein the representation hierarchy H comprises a signal representation AE1,1 for each of the plurality of received data signals S and a hierarchical representation AEA, AESU for each of the hierarchical units A, SU on different levels. Each representation AEA, AESU on a higher level represents a group of representations AEA, AE1,1 on a lower level. Each of the signal representation AE1,1 and the hierarchical representation AEA, AESU comprises a machine learning model. In a third step S30, an output machine learning model 11 of the machine learning unit 10 is trained, by the machine learning unit 10, using the determined hierarchical representations AEA, AESU.

FIG. 6 shows a schematic view of an industrial plant machine learning system 100. The industrial plant machine learning system 100 comprises an industrial plant 20 providing data signals S of sensors of the industrial plant 20, as well as a topology model T, comprising structural information on hierarchical relations between components of the industrial plant 20, to a machine learning unit 10. The machine learning unit 10 uses the provided information to provide predictions on a given task of the industrial plant 20. The output of the machine learning unit 10 is defined by an output of an output machine learning model trained on representations from lower-level plant components of the industrial plant 20 instead of the raw data signals S.

Preferably, the assets comprise a pump or a motor. Further preferably, the plant sub-unit comprises a pressure vessel like a supercharged boiler, or a feeder system. Further preferably, the plant unit comprises a support structure, an enclosure or a steam generator interior. Further preferably, the plant section comprises a conventional heat generation.

In other words, instead of training the output machine learning model using the data signals of the sensors of the industrial plant, and thus the raw data, representations of the data signals are determined and grouped over at least one level in order to reduce the amount of input data for the output machine learning model. Thus, the output machine learning model is trained on representations of hierarchical units of the industrial plant instead of the data signals.

Preferably, at least two root nodes, also referred to as leaves, are associated with one hierarchical node. For example, two root nodes represent two data signals of an asset. The hierarchical node associated with the two root nodes is thus the hierarchical node representing said asset. The tree-like node structure of the topology model is thus transferred into the machine learning model structure.

Thus, based on the topology model, the machine learning models of the various representations of components of the industrial plant are automatically combined. This allows for self-supervised learning to automatically construct meaningful features without human interaction.

The output machine learning model preferably is configured to monitor or control the industrial plant, in particular components of the industrial plant.

Preferably, the industrial plant is configured to execute a production process, in particular based on a process recipe, thereby manufacturing a product.

Preferably, the plant assets comprise columns and reactors of the industrial plant.

Preferably, the topology model is a digital topology model indicating the industrial plant hierarchy.

Preferably, the output machine learning model is configured to perform anomaly detection, prediction of specific events and/or prediction of specific variables, like a quality of a production product manufactured by the industrial plant.

Due to the training of the output machine learning model based on the hierarchical representations instead of the raw sensor data, problems of dimensionality can be avoided. Thus, the provided hierarchical machine learning method is robust against overfitting, in particular overfitting to specific raw data signals or event data of the industrial plant.

Furthermore, combining the learning representations over the plant hierarchy improves a robustness of the output machine learning model. This facilitates detecting new complex patterns in the industrial plant, which can be crucial to detecting and preventing breakdowns and disruptions.

The hierarchical structured machine learning models are more informative than unstructured data sets, especially when visualized using tree-like structures, for example dendrograms.

Thus, an improved industrial plant machine learning method is provided.

In a preferred embodiment, the representation hierarchy comprises at least a bottom level and at least a target level, wherein the bottom level comprises the signal representations and wherein the target level comprises the hierarchical representations.

Preferably, an amount of signal representations in the bottom level is larger than an amount of hierarchical representations on the target level.

Preferably, the hierarchical representations on the target level comprise all information of the signal representations of the bottom level.

Due to the reduced amount of hierarchical representations on the target level compared to the signal representations on the bottom level, an amount of input data for the output machine learning model is reduced.

In a preferred embodiment, the representation hierarchy comprises at least an intermediate level, wherein the at least one intermediate level comprises the hierarchical representations with a lower level than the target level.

The at least one intermediate level allows for a subsequent reduction in representations over each level, from the bottom level over the at least one intermediate level to the target level.

In a preferred embodiment, the target level comprises only one hierarchical representation that contains information about all lower level representations.

In a preferred embodiment, training the output machine learning model comprises training the output machine learning model using the hierarchical representations of the previous level.

Preferably, the output machine learning model is trained based on the hierarchical representations on each intermediate level and the target level. This allows for a reduced input for the output machine learning model.

In a preferred embodiment, training the output machine learning model comprises training the output machine learning model using the hierarchical representations of the target level.

Preferably, the output machine learning model only uses one representation to be trained, namely the hierarchical representation of the target level. This allows for a reduced input for the output machine learning model.

In a preferred embodiment, determining the representation hierarchy comprises learning, for each data signal, a signal representation and learning, for each hierarchical unit, a hierarchical representation, wherein each hierarchical representation is learned based on corresponding representations of a previous level.

In other words, the representation hierarchy comprises a tree-like relationship between signal representations and hierarchical representations over a plurality of levels. The representation hierarchy is determined using the topology model of the industrial plant.

For example, the data signals comprise data signals from a first asset and from a second asset. The output machine learning model could, in principle, be trained using the raw data signals. Instead, the representation hierarchy reflects a grouping of the data signals of the first asset and the data signals of the second asset. A first hierarchical representation of the first asset corresponds to the data representations reflecting the data signals of the first asset. Thus, the first hierarchical representation comprises a machine learning model that is learned from those data representations. A second hierarchical representation of the second asset corresponds to the data representations reflecting the data signals of the second asset. Thus, the second hierarchical representation comprises a machine learning model that is learned from those data representations. The output machine learning model is thus trained using the first and second hierarchical representations instead of the raw data signals.

Preferably, a hierarchical representation is learned using corresponding signal representations.

In a preferred embodiment, learning the signal representation and learning the hierarchical representation comprises using a dimensionality reduction method. A dimensionality reduction method preferably projects data from a high-dimensional space (e.g. many signal values with a one-minute sampling over one hour) to a lower-dimensional space while preserving most of the information of the original data. The content of information might be measured in terms of the variance still present in the lower-dimensional space or the ability to reconstruct the original data from the lower-dimensional space.
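The variance-based measure of preserved information can be illustrated with a hand-picked 2-D to 1-D projection. The data set and the projection direction are assumptions chosen for the example; a real method such as PCA would find the dominant direction itself:

```python
import math

def variance(xs):
    """Population variance of a list of values."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Hypothetical 2-D data lying almost on the line y = 2x: one direction
# carries nearly all the variance, so a single coordinate suffices.
points = [(x, 2 * x + noise)
          for x, noise in zip(range(10), [0.1, -0.1] * 5)]

# Project onto the dominant direction (1, 2)/sqrt(5): 2-D -> 1-D.
d = (1 / math.sqrt(5), 2 / math.sqrt(5))
projected = [px * d[0] + py * d[1] for px, py in points]

total_var = variance([p[0] for p in points]) + variance([p[1] for p in points])
retained = variance(projected) / total_var   # fraction of variance kept
```

Here the single retained coordinate still carries more than 99% of the total variance, which is the sense in which "most of the information" is preserved.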

Preferably, the dimensionality reduction method comprises an autoencoder, principal component analysis (PCA), t-distributed stochastic neighbour embedding (t-SNE), locally linear embeddings, or Laplacian eigenmaps.

In a preferred embodiment, determining the representation hierarchy comprises determining a distance matrix between the representations using the received topology model, identifying hierarchical representations as parent representations and their corresponding children representations on a lower level using the determined distance matrix, and learning the parent representations using the identified children representations.
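One possible way to derive such a distance matrix from the topology model is the edge distance in the tree: representations under the same parent have a smaller distance than representations in different branches. The parent map below is a hypothetical example, not the disclosed topology:

```python
# Parent map for a toy topology: signals S1, S2 under asset A,
# S3 under asset B, both assets under sub-unit SU.
parent = {"S1": "A", "S2": "A", "S3": "B", "A": "SU", "B": "SU"}

def ancestors(node):
    """Path from a node up to the root of the topology tree."""
    path = [node]
    while path[-1] in parent:
        path.append(parent[path[-1]])
    return path

def tree_distance(a, b):
    """Number of edges between two nodes via their lowest common ancestor."""
    pa, pb = ancestors(a), ancestors(b)
    common = next(x for x in pa if x in pb)
    return pa.index(common) + pb.index(common)

signals = ["S1", "S2", "S3"]
dist = {(a, b): tree_distance(a, b) for a in signals for b in signals}
```

Siblings S1 and S2 end up at distance 2 (via asset A), while S1 and S3 are at distance 4 (via sub-unit SU), so grouping by smallest distance recovers exactly the parent/children structure.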

In a preferred embodiment, learning the parent representations using the identified children representations comprises determining reconstructed children data by decoding the identified children representations and learning the parent representations using the reconstructed children data.

The parent representation preferably is a hierarchical representation on an intermediate level, for example an autoencoder on an intermediate level, or the output machine learning model on the target level.

Thus, the output machine learning model is less sensitive to noise and bias.

In a preferred embodiment, decoding the identified children representations comprises reconstructing and/or de-noising the data of the identified children representations.

For example, the dimensionality reduction simplifies the data and removes parts of it in the reduction process. The goal is always to keep the variance in the data. Starting from the encoded data, the decoding step will construct data following this logic; thus, data aspects which were lost in the dimensionality reduction will not be present anymore. This procedure acts as de-noising.

In a preferred embodiment, the method comprises the step of repeating the identification of hierarchical representations as parent representations and their corresponding children representations, from a lower level to a higher level, until the target level is reached.

Preferably, the target level is externally provided to the machine learning unit.

In other words, depending on the use case, the target level can refer to any hierarchy level of the industrial plant. In some cases, the target level is reached when an asset level is reached. For example, a target level is reached when a sub-unit level of the industrial plant is reached.

In a preferred embodiment, the topology model and the data signals of sensors of the industrial plant are provided by the industrial plant and/or by an industrial plant simulation.

Preferably, the data signals are provided by modules of a modular plant.

Preferably, the data signals, and in particular the topology model, are provided by a simulator instead of an actual physical plant, or by a combination of the two, where the representations, or in other words the machine learning models, on all levels are trained on samples derived from simulated data and data from the actual physical plant.

In a preferred embodiment, the topology model comprises structural information on hierarchical relations between process steps in a process recipe processed by the industrial plant. The aforementioned could also be applied to modules in a modular plant.

According to an aspect of the invention, an industrial plant machine learning system comprising means for carrying out the steps of a method, as described here, is provided.

Preferably, the industrial plant machine learning system comprises an input interface for receiving input data from an industrial plant. The input data preferably comprises a topology model, comprising structural information on hierarchical relations between components of the industrial plant, and data signals of sensors of the industrial plant. The industrial plant machine learning system furthermore comprises a machine learning unit processing the input data. The machine learning unit preferably trains an output machine learning model of the machine learning unit configured for prediction of specific high-level elements of the industrial plant.

LIST OF REFERENCE SYMBOLS

  • 10 machine learning unit
  • 11 output machine learning model
  • 20 industrial plant
  • 100 Industrial plant machine learning system
  • A first asset
  • B second asset
  • α third asset
  • AEA representation of first asset A
  • AEB representation of second asset B
  • AEα representation of third asset α
  • S1,1 signal 1 of first asset A
  • S1,n signal n of first asset A
  • S2,1 signal 1 of second asset B
  • S2,m signal m of second asset B
  • So,1 signal 1 of third asset α
  • So,α signal o of third asset α
  • AE1,1 representation of signal 1 of first asset A
  • AE1,n representation of signal n of first asset A
  • AE2,1 representation of signal 1 of second asset B
  • AE2,m representation of signal m of second asset B
  • AEo,1 representation of signal 1 of third asset α
  • AEo,α representation of signal o of third asset α
  • SU sub unit
  • AESU representation of sub unit
  • Lb Bottom level
  • Li Intermediate level
  • Lt target level (prediction level)
  • S data signal
  • Dc compressed data
  • P prediction
  • Dr reconstructed data
  • T topology model
  • H hierarchical representation
  • S10 first step
  • S20 second step
  • S30 third step

All references, including publications, patent applications, and patents, cited herein are hereby incorporated by reference to the same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein.

The use of the terms “a” and “an” and “the” and “at least one” and similar referents in the context of describing the invention (especially in the context of the following claims) are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The use of the term “at least one” followed by a list of one or more items (for example, “at least one of A and B”) is to be construed to mean one item selected from the listed items (A or B) or any combination of two or more of the listed items (A and B), unless otherwise indicated herein or clearly contradicted by context. The terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (i.e., meaning “including, but not limited to,”) unless otherwise noted. Recitation of ranges of values herein are merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate the invention and does not pose a limitation on the scope of the invention unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the invention.

Preferred embodiments of this invention are described herein, including the best mode known to the inventors for carrying out the invention. Variations of those preferred embodiments may become apparent to those of ordinary skill in the art upon reading the foregoing description. The inventors expect skilled artisans to employ such variations as appropriate, and the inventors intend for the invention to be practiced otherwise than as specifically described herein. Accordingly, this invention includes all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the invention unless otherwise indicated herein or otherwise clearly contradicted by context.

Claims

1. A computer-implemented method of hierarchical machine learning for an industrial plant machine learning system, comprising:

receiving, by a machine learning unit, a topology model comprising structural information on hierarchical relations between components of the industrial plant, wherein the components comprise data signals of sensors of the industrial plant and hierarchical units, wherein the hierarchical units comprise assets, plant sub-units, plant units and plant sections of the industrial plant;
determining, by the machine learning unit, a representation hierarchy comprising a plurality of levels using the received data signals and the received topology model, wherein the representation hierarchy comprises a signal representation for each of the plurality of received data signals and a hierarchical representation for each of the hierarchical units on different levels;
wherein each representation on a higher level represents a group of representations on a lower level;
wherein each of the signal representation and the hierarchical representation comprises a machine learning model; and
training, by the machine learning unit, an output machine learning model of the machine learning unit using the determined hierarchical representations.
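The bottom-up flow recited in claim 1 can be sketched as follows. This is an illustrative toy example only, not part of the claims: the topology, signal names, and summary functions are all hypothetical, and the simple mean-based summaries stand in for the learned machine learning models the claim actually recites.

```python
# Hypothetical topology model: plant section -> units -> data signals (leaves).
topology = {
    "section_A": ["unit_1", "unit_2"],
    "unit_1": ["signal_temp", "signal_pressure"],
    "unit_2": ["signal_flow"],
}

# Toy received data signals (short windows of sensor readings).
signals = {
    "signal_temp": [20.0, 21.0, 19.5],
    "signal_pressure": [1.0, 1.1, 0.9],
    "signal_flow": [5.0, 5.2, 4.8],
}

def signal_representation(values):
    """Bottom-level representation: (mean, spread) of the window, as a
    placeholder for a learned per-signal model."""
    mean = sum(values) / len(values)
    spread = max(values) - min(values)
    return [mean, spread]

def hierarchical_representation(child_reps):
    """Higher-level representation: element-wise mean over the children,
    as a placeholder for a learned model over the group below."""
    dim = len(child_reps[0])
    return [sum(r[i] for r in child_reps) / len(child_reps) for i in range(dim)]

def build_representation(node):
    """Recursively build the representation hierarchy from the topology."""
    if node in signals:  # bottom level: a data signal
        return signal_representation(signals[node])
    children = [build_representation(c) for c in topology[node]]
    return hierarchical_representation(children)

# Target-level representation, usable as input to the output ML model.
target_rep = build_representation("section_A")
```

Each representation on a higher level thus summarizes exactly the group of representations below it, mirroring the claimed hierarchy.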

2. The method of claim 1, wherein the representation hierarchy comprises at least a bottom level and at least a target level, wherein the bottom level comprises the signal representations, and wherein the target level comprises the hierarchical representations.

3. The method of claim 1, wherein the representation hierarchy comprises at least one intermediate level, wherein the at least one intermediate level comprises the hierarchical representations on a level lower than the target level.

4. The method of claim 2, wherein the target level comprises only one hierarchical representation that contains information about all lower level representations.

5. The method of claim 1, wherein training the output machine learning model comprises training the output machine learning model using the hierarchical representations of the previous levels.

6. The method of claim 1, wherein training the output machine learning model comprises training the output machine learning model using the hierarchical representations of the target level.

7. The method of claim 1, wherein determining the representation hierarchy comprises learning, for each data signal, a signal representation and learning, for each hierarchical unit, a hierarchical representation, and wherein each hierarchical representation is learned based on corresponding representations of a previous level.

8. The method of claim 7, wherein learning the signal representation and the hierarchical representation comprises using a dimensionality reduction method.
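One possible dimensionality reduction method in the sense of claim 8 is principal component analysis; the sketch below (illustrative only, the claims do not fix a particular method, and all names are assumptions) compresses a group of correlated child representations into a low-dimensional parent representation via SVD.

```python
import numpy as np

# Toy data: 100 time steps of 3 strongly correlated child representations.
rng = np.random.default_rng(0)
base = rng.normal(size=(100, 1))
children = np.hstack([base, 2 * base, -base]) + 0.01 * rng.normal(size=(100, 3))

X = children - children.mean(axis=0)      # center the data
U, S, Vt = np.linalg.svd(X, full_matrices=False)

k = 1                                     # reduced dimensionality
parent_rep = X @ Vt[:k].T                 # 1-D parent representation

# Fraction of variance captured by the first principal component.
explained = S[0] ** 2 / (S ** 2).sum()
```

Because the toy children are near-duplicates of one underlying signal, a single component captures almost all of their variance, which is exactly the situation hierarchical representation learning exploits.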

9. The method of claim 1, wherein determining the representation hierarchy comprises:

determining a distance matrix between the representations using the received topology model;
identifying hierarchical representations as parent representations and their corresponding children representations on a lower level using the determined distance matrix; and
learning the parent representations using the identified children representations.
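The distance matrix of claim 9 can be derived from the topology model, for example as path lengths in the topology tree; siblings (minimal nonzero distance) then share a parent representation. The sketch below is one illustrative reading, with a hypothetical topology, not the patent's exact algorithm.

```python
# Hypothetical topology model as child -> parent pointers.
parent_of = {
    "signal_temp": "unit_1",
    "signal_pressure": "unit_1",
    "signal_flow": "unit_2",
    "unit_1": "section_A",
    "unit_2": "section_A",
}

def path_to_root(node):
    """Path from a node up to the topology root."""
    path = [node]
    while node in parent_of:
        node = parent_of[node]
        path.append(node)
    return path

def tree_distance(a, b):
    """Number of edges between a and b in the topology tree."""
    pa, pb = path_to_root(a), path_to_root(b)
    common = set(pa) & set(pb)
    da = next(i for i, n in enumerate(pa) if n in common)
    db = next(i for i, n in enumerate(pb) if n in common)
    return da + db

leaves = ["signal_temp", "signal_pressure", "signal_flow"]
D = [[tree_distance(a, b) for b in leaves] for a in leaves]
# Representations at distance 2 are siblings under one parent representation.
```

Here temp and pressure sit at distance 2 (same unit), while flow is at distance 4 from both, so the matrix alone identifies unit_1's children as a group.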

10. The method of claim 9, wherein learning the parent representations using the identified children representations comprises:

determining reconstructed children data by decoding the identified children representations; and
learning the parent representations using the reconstructed children data.
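The decode-then-learn steps of claim 10 can be sketched with a linear encoder/decoder pair obtained from PCA; this is an assumed stand-in for whatever learned decoder the embodiment uses, and every name below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))              # toy child data: 4 signals, 50 steps

# Linear "autoencoder" from PCA: encoder = top-2 components, decoder = transpose.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
child_code = Xc @ Vt[:2].T                # children representations (2-D codes)
reconstructed = child_code @ Vt[:2]       # decoded / de-noised children data

# Parent representation learned on the reconstructed children data,
# again with a 1-component reduction as a placeholder model.
_, _, Vt_p = np.linalg.svd(reconstructed, full_matrices=False)
parent_code = reconstructed @ Vt_p[:1].T
```

Decoding before learning the parent means the parent model sees a cleaned, rank-reduced version of the children's data rather than the raw signals, which is one way to read the reconstructing/de-noising step of claim 11.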

11. The method of claim 10, wherein decoding the identified children representations comprises reconstructing and/or de-noising the data of the identified children representations.

12. The method of claim 9, further comprising repeating identifying hierarchical representations as parent representations and their corresponding children representations from a lower level to a higher level until the target level is reached.

13. The method of claim 1, wherein the topology model and the data signals of sensors of the industrial plant are provided by the industrial plant and/or by an industrial plant simulation.

14. The method of claim 1, wherein the topology model comprises structural information on hierarchical relations between process steps in a process recipe processed by the industrial plant.

Patent History
Publication number: 20230029400
Type: Application
Filed: Sep 30, 2022
Publication Date: Jan 26, 2023
Applicant: ABB Schweiz AG (Baden)
Inventors: Benedikt Schmidt (Heidelberg), Ido Amihai (Heppenheim), Arzam Muzaffar Kotriwala (Ladenburg), Moncef Chioua (Montreal), Dennis Janka (Heidelberg), Felix Lenders (Darmstadt), Jan Christoph Schlake (Darmstadt), Martin Hollender (Dossenheim), Hadil Abukwaik (Weinheim), Benjamin Kloepper (Mannheim)
Application Number: 17/957,609
Classifications
International Classification: G06N 20/00 (20060101);