POWER DISTRIBUTION SYSTEM RECONFIGURATIONS FOR MULTIPLE CONTINGENCIES
A system and method simulate power distribution system reconfigurations for multiple contingencies. A decision tree model is instantiated as a graph with nodes and edges corresponding to simulated outage states of one or more buses in the power distribution system and simulated states of reconfigurable switches in the power distribution system. Edges related to each outage are disconnected. A reconfiguration path is determined with a plurality of switches reconfigured to a closed state by an iteration of tree search algorithms. A simulation estimates feeder cable and transformer loading and bus voltages on the reconfigured path for comparison against constraints including system capacity ratings and minimum voltage. Further iterations identify additional candidate reconfiguration paths, which can be ranked by total load restoration.
This application relates to power distribution systems. More particularly, this application relates to power distribution system reconfigurations for multiple outage contingencies.
BACKGROUND

In large scale power distribution systems, robustness is designed into the systems by inclusion of redundant feeder paths for every load. Switches are strategically positioned in the network allowing power to follow different paths to each load. The switches are either normally open or normally closed. Contingency planning for line outages (e.g., due to severe weather or wildfires) involves performing contingency studies for determining best case switching decisions for reconfiguring radial feeder paths to bypass the fault for restoration of power to system loads, ideally to as many of the interrupted buses as possible. Critical loads (e.g., hospitals, emergency responders, etc.) are a priority and minimizing restoration time is an important objective. Fast response to line outages requires taking critical control actions in a proper sequence for system risk mitigation. Finding alternate paths for the power supply is complex as there can be thousands of buses and hundreds of switches throughout the network. Switching often requires dispatching a work crew to manually operate the switches, and the proper sequence for multiple switch operations is critical.
In industry, it is common practice to perform N-1 contingency studies, where N-1 represents N buses in the distribution system less 1 bus due to a single component failure (i.e., studying power system performance under various scenarios having a single component failure, such as one line outage). Another contingency study type is N-1-1, in which there is a single loss followed by another single loss. During the planning stage, power system engineers run exhaustive N-1 cases to ensure the power system is robust under any single line failure/outage. A more comprehensive contingency study attempts to model more severe distribution system failures for scenarios with multiple outages (i.e., k failures). However, N-k contingency studies are not typically explored in industry because the number of possible contingencies, even for a small value of k, makes total enumeration computationally intractable. Tractable approaches instead rely on determining service restoration strategies once a set of k line outages has been identified (post-outage).
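The combinatorial intractability noted above can be made concrete with a short computation. The line counts below are illustrative values, not figures from this disclosure: even a system with only 500 outage-prone lines yields tens of millions of distinct N-3 scenarios.

```python
from math import comb

# Number of N-k contingency scenarios for a system with m_lines
# outage-prone lines: choose which k of the m_lines fail.
def n_k_scenarios(m_lines: int, k: int) -> int:
    return comb(m_lines, k)

print(n_k_scenarios(500, 1))  # N-1: 500 scenarios, routinely enumerated
print(n_k_scenarios(500, 3))  # N-3: 20,708,500 scenarios
```

The N-1 case is routinely exhaustible, while the N-k count grows as a binomial coefficient, which is why total enumeration is abandoned in favor of guided search.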
N-k contingency studies have been explored in academia with a focus on two approaches to a solution. A first approach formulates the problem as an optimization problem and solves it with standard optimization solvers. Advantages of this approach are that continuous control variables are modeled and that it is capable of multistep decision making. However, the limitation of this approach is that it is only applicable to small distribution systems, as the presence of integer variables makes it unscalable to larger systems. A second approach formulates the problem as a graph reduction problem and then uses graph search (e.g., Minimum Spanning Tree) for a solution using a single-step decision process. While this approach solves large-scale problems, it cannot model continuous control variables, nor can it perform multistep decision making.
Another shortcoming of prior works is the attempt to model contingencies using deterministic outages, such as with distribution system software tools (e.g., the open source software OpenDSS). Depending on whether there is a dedicated function for an N-k deterministic distribution system resiliency study, deterministic N-k resiliency can be performed by stacking multiple N-1 studies. However, line outages are not deterministic, as power lines in certain areas (for example, lines routed across snowy mountains) are more vulnerable than lines in other areas. Moreover, natural disasters generate outages that are inherently stochastic. Hence, proper response contingencies require stochastic analysis.
SUMMARY

System and method are provided for power distribution system reconfiguration simulations for multiple contingencies. In one aspect, a greedy topology reconfiguration algorithm models a distribution system and simulates single (N-1), sequential (N-1-1), or simultaneous (N-k) contingency scenarios. The topology reconfiguration algorithm seeks to determine which set of switches to operate in a distribution system to serve maximum load while adhering to network and operational constraints such as radial structure, line and transformer loading limits, and bus voltages. The contingency analyses are useable for either pre-outage planning or post-outage recovery.
In an aspect, a computer system is provided for power distribution system reconfigurations for multiple contingencies. A memory stores algorithmic modules executable by a processor, the modules including a decision tree engine and a power flow simulation engine. The decision tree engine instantiates a decision tree model configured as a graph with nodes and edges corresponding to simulated outage states of one or more buses in the power distribution system and simulated states of reconfigurable switches in the power distribution system. The model spans from parent nodes to child nodes in a radial pattern of branches. The decision tree engine disconnects edges in the model related to each outage and determines a reconfiguration path with a plurality of switches reconfigured to a closed state by iteration of tree search algorithms. The power flow simulation engine generates a simulation to estimate feeder cable and transformer loading and bus voltages on the reconfigured original graph in response to a simulation trigger, compares the estimates against constraints including system capacity ratings and minimum voltage, the constraints extracted from a power distribution system database, and classifies the reconfiguration as successful on a condition that the constraints are satisfied. Iterations of tree search algorithms are repeated to identify additional candidate reconfiguration paths and to rank reconfiguration paths classified as successful.
Non-limiting and non-exhaustive embodiments of the present embodiments are described with reference to the following FIGURES, wherein like reference numerals refer to like elements throughout the drawings unless otherwise specified.
Systems and methods are disclosed for enhancing resilience of a large-scale power distribution system by optimizing total load service restoration in the context of N-k contingency analysis. Power distribution systems, consisting of feeder buses, feeder lines (herein, “lines” relate to feeder cables from buses) and transformers, follow a radial tree pattern to distribute power in one direction from feeder head buses in a downstream direction to load buses, which maintains protective safeguards. As part of an N-k contingency analysis for hypothetical k failures, a decision tree model engine generates a model that can identify all feasible restoration paths. As an example of a practical application, a distribution system may have N>=10,000 buses and k<=4 outages. An objective is to find top candidate restoration paths that restore the greatest load to the system, including prioritized critical loads. Another objective is to determine the optimum operation sequence of configurable switches. The benefit is more resilient operation of distribution systems. On one hand, contingencies, such as severe weather events or natural disasters, often hit distribution systems sequentially. On the other hand, typical distribution systems contain many manual breakers that are costly to operate, as utilities have to dispatch field technicians to open/close them. As a result, if decisions are made poorly, it is possible that a first decision to close an originally opened switch is followed by a second decision to open the same switch.
In addition, a stochastic feature is incorporated into the contingency analysis, which is useful for both pre-outage planning and post-outage recovery. By considering the probabilities of adverse contingencies, a system operator is able to best account for the scenarios that have higher average damage. Natural disasters are usually stochastic: some line outages are likely to happen but have only regional impacts, while other line outages are unlikely to happen but can introduce cascading failures. Deterministic N-k planning tends to ignore contingencies with small probability but large consequences. The embodiments of this disclosure find a balance between large probability/small damage and small probability/large damage events, thereby helping distribution system operators to avoid black swan events in power distribution systems.
In an embodiment, decision tree engine 111 generates a circuit graph based on the power distribution system data 131. As shown, circuit graph 230 illustrates such a circuit graph that represents a portion of distribution system 220 for top level buses B0, B1, B2, B3, R1 and R2. Normally open switches are represented by dashed edges, and normally closed switches are represented by solid edges. In an embodiment, decision tree engine 111 generates a decision tree based on the power distribution system data 131. The decision tree may be generated directly from the power distribution system data 131 or based on the intermediate data using circuit graph 230. An example of a decision tree section is illustrated by decision tree 240, in which each node represents a state of the system. In this example, node N1H0 represents the initial state of the system, such as a normal state. Upon a first simulated outage between buses B0 and B1, an outage decision tree node N1H1 is generated representing the outage event, from which three possible decision tree paths span out to three action decision tree nodes N2H1. For example, a first action tree node may represent a decision, in response to an outage to edge (B0, B1), to close normally open switch of edge (R1, B1). Similarly, action decision tree nodes can be generated for decisions to close normally open switches of edges (R2, B1) or (B2, B1).
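The circuit-graph representation and the candidate reconfiguration actions described above can be sketched with a plain adjacency structure. This is an illustrative fragment only: the bus names follow the example (B0–B3, R1, R2), but the exact edge set and the helper function are assumptions for illustration, not the disclosed circuit graph 230.

```python
# Each edge carries a switch state: "NC" (normally closed, solid edge)
# or "NO" (normally open, dashed edge). Illustrative topology only.
circuit_graph = {
    ("B0", "B1"): "NC",
    ("B0", "B2"): "NC",
    ("B2", "B3"): "NC",
    ("R1", "B1"): "NO",  # tie switch from reserve feeder R1
    ("R2", "B1"): "NO",  # tie switch from reserve feeder R2
    ("B2", "B1"): "NO",  # tie switch between feeder branches
}

def reconfiguration_options(outage_edge):
    """Candidate actions in response to an outage: close a normally
    open switch on an edge that reconnects the de-energized bus."""
    _, lost_bus = outage_edge
    return [e for e, state in circuit_graph.items()
            if state == "NO" and lost_bus in e]

# Outage on edge (B0, B1): three candidate closing actions, matching
# the three action decision tree nodes N2H1 in the example.
print(reconfiguration_options(("B0", "B1")))
# → [('R1', 'B1'), ('R2', 'B1'), ('B2', 'B1')]
```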
The outage simulation by decision tree engine 111 generates decision tree nodes that track information related to a respective decision for the node, which may include one or more of the following: switch and line status, actions related to an edge, reward and penalty values to promote or discourage a decision path, and sequence of k outages. Switch and line status represents an open line due to a lost bus from an outage (i.e., an edge outage in the circuit graph). Action information can include the action of an open edge representing an outage, or a manual action due to a reconfigured switch in response to an outage. Reward values are computed as a bus load that would be restored by a switch reconfiguration (e.g., closing a normally open switch) for the current decision path. An objective for restoration is maximizing lost load restoration. Penalty values are computed by weighting according to depth of the circuit graph tree being reconnected, which accounts for anticipated voltage drop being proportional to circuit length. Penalty values can satisfy an objective to maintain bus voltage to be greater than the minimum allowable threshold as defined for stable power delivery (e.g., system transformers having minimum input voltage requirements to meet delivery of standard output voltage to consumers). Outage sequence information may be tracked by the decision tree for a simulation so that different contingencies can be compared. For example, to simulate N-k for k=3 outages for circuit graph 230, different sequences may be simulated and the results can be evaluated for resiliency across the different contingencies. To continue the simulation in
For sequential decision making (i.e., N-k contingencies), the MCTS engine uses a Monte Carlo Tree Search (MCTS) algorithm to determine the optimal operation sequence of configurable switches. For each of the k contingencies, a switch configuration decision is determined at each step, so the depth of the decision tree corresponds to the k contingencies. The MCTS algorithm executes an iterative method where every iteration has four steps: selection 311, expansion 312, simulation 313, and backpropagation 314. In the selection phase 311, the algorithm searches for the best child node according to an Upper Confidence Bound. Once it reaches the best child node, the expansion step 312 expands the decision tree. MCTS algorithm 320 calls STS algorithm 330 at this stage to determine the possible decisions to be made.
For every contingency, STS algorithm 330 seeks out all possible feasible reconfiguration solutions according to the following steps. STS algorithm 330 retrieves expanded decision tree 331 and opens one or more of all configurable switches 332 (e.g., sets open a subset of configurable switches). This step provides a significant improvement over prior art solutions that typically only analyze the distribution system keeping all normally closed switches closed. By opening one or more of all configurable switches for the contingency study, a greater number of possible reconfiguration contingencies are within the pool of candidates. Next, STS algorithm 330 identifies islands of connected components 333, finds spanning trees for a condensed graph 334, and reconstructs the decision tree 335. During step 333, the original graph size is significantly reduced by aggregating the islands of connected components as a single load node on the graph, which will be explained in greater detail below with reference to
Following the spanning tree construction, MCTS algorithm 320 takes the reconstructed graph and executes the simulation step 313 using a decision for operating an open switch to a closed state, which connects a load to the expanded bus in the virtual model. The selection of the switch for closing may be a random decision or may be based on optimizations that will be described in greater detail below with reference to
After the simulation 313 is finished, the MCTS algorithm 320 executes a backpropagation 314 on the simulation outcome (often called the reward) to update the success rate for each node along the path that leads to this decision. For example, each node keeps a ratio score (s/A), where s is the number of successful reconfigurations over A attempts. The MCTS algorithm 320 and STS algorithm 330 operate for a number of iterations N as described above. Candidate reconfigurations are ranked according to success rate scores and/or according to which reconfigurations maximize the reconnected load. For example, the number of iterations may be defined by a minimum value for N based on experimentation. Alternatively, the iterations may be repeated until a convergence test is satisfied, such as convergence of the ranked candidate list.
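The four-step MCTS loop described above can be sketched as follows. This is a simplified illustration, not the disclosed implementation: the `Node` class, the stub `expand` and `simulate` callables, and the random child choice stand in for the STS expansion step, the power flow simulation engine, and the optimized switch selection, respectively.

```python
import math
import random

class Node:
    """Decision tree node tracking the s/A success-ratio score."""
    def __init__(self, parent=None):
        self.parent, self.children = parent, []
        self.successes, self.attempts = 0, 0  # s and A

    def ucb(self, c=1.4):
        # Upper Confidence Bound: exploit the success rate while
        # exploring rarely visited children.
        if self.attempts == 0:
            return float("inf")
        return (self.successes / self.attempts +
                c * math.sqrt(math.log(self.parent.attempts) / self.attempts))

def mcts_iteration(root, expand, simulate):
    node = root
    # 1) Selection: descend to the best child by UCB.
    while node.children:
        node = max(node.children, key=Node.ucb)
    # 2) Expansion: expand() stands in for STS supplying feasible decisions.
    for _ in expand(node):
        node.children.append(Node(parent=node))
    child = random.choice(node.children) if node.children else node
    # 3) Simulation: simulate() stands in for the power flow check.
    success = simulate(child)
    # 4) Backpropagation: update s/A ratios along the path to the root.
    while child is not None:
        child.attempts += 1
        child.successes += int(success)
        child = child.parent
    return success
```

Repeating `mcts_iteration` N times and ranking nodes by their s/A ratios mirrors the iteration and ranking described above.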
In the first part 501 of process 500, the decision tree is traversed in a Breadth-first-search (BFS) manner to identify connected components (step 333), and then traversed in a bottom-up manner with the objective of determining the aggregated load under each line, so that if one line is taken out (N-1 contingency), the load loss can be immediately retrieved.
The second part 502 of process 500 relates to a calculation of load losses for N-k based on the aggregated load loss result. In subpart 502a, power flow simulation engine 112 determines all possible N-k load loss scenarios. Given M lines in the distribution system subject to loss, the number of possible scenarios can be denoted as the combination C(M, k) = M!/(k!(M − k)!).
In sub-part 502b, power flow simulation engine 112 determines the relationships of these k outages for load loss in each scenario. The two-part algorithm 500 is configured to avoid miscalculating a load loss (i.e., an overestimation) that would result from simply summing aggregated loads of two lines on the same branch of a distribution circuit.
To perform the first part 501 of process 500, load aggregation for individual distribution circuits is determined. Without loss of generality, assume there are a total of N buses in the distribution system. For a distribution system with radial structure, the number of lines/transformers is N. M out of the N lines/transformers have chances of outages, and k is the actual number of line outages. The N-k resiliency prediction problem is evaluated using an abstracted directed graph G=(V, E) with direction pointing from root to leaves, where V is the set of nodes in the system and E is the set of edges in the system.
An example of a pseudo code for part 501 is presented below in Algorithm 1.
To calculate the possible load losses under each scenario, Algorithm 1 first defines the aggregated load under each bus. The BFS traversal algorithm calculates the load aggregation: Algorithm 1 performs a BFS of the graph and stores the sequence of bus (node) visits in a queue. Node visits are useful for tracking a level of confidence for a particular contingency path in the decision tree, where a higher number of visits represents a higher level of confidence. The queue Q is then offloaded from its tail, and the aggregated load L is added up from the feeder end in a bottom-up manner. As a result, Algorithm 1 executes an N-1 resiliency level prediction problem to determine the overall load loss after an outage. For this analysis, outage failures are defined by M different lines/transformers that could fail under a natural disaster. The probability of outage of line/transformer i is denoted pi, and the probability of outage of line/transformer j is denoted pj, where pi and pj are assumed to be independent if i≠j. N-1 contingency considers the event that only one of the M possible outages happens, and each such event is denoted as a scenario. With the definition of M possible outages, the algorithm determines the load loss under each scenario and calculates the joint probability of only one of the M possible outages happening. Given the aggregated load calculated using Algorithm 1, the power flow simulation engine determines the probability associated with each of the N-1 scenarios, which can be expressed as follows:
P(scenario i) = pi · ∏_{j∈M, j≠i} (1 − pj)

where pi is the probability that line/switch element i has an outage.
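The bottom-up load aggregation of Algorithm 1 can be sketched as follows. The feeder topology and load values are illustrative; the disclosed algorithm operates on the full distribution circuit and additionally tracks scenario probabilities.

```python
from collections import deque

def aggregate_loads(children, bus_load, root="B0"):
    """Sketch of Algorithm 1: a BFS from the feeder head records the
    visit order in a queue; replaying that queue from the tail sums each
    bus's downstream load, so the load lost when the line feeding a bus
    is cut is simply L[bus]."""
    order, queue = [], deque([root])
    while queue:                       # BFS traversal, visits into a queue
        bus = queue.popleft()
        order.append(bus)
        queue.extend(children.get(bus, []))
    L = dict(bus_load)                 # start from each bus's own load
    for bus in reversed(order):        # bottom-up: fold children into parents
        for child in children.get(bus, []):
            L[bus] += L[child]
    return L

# Illustrative radial feeder: B0 -> {B1, B2}, B2 -> {B3}
children = {"B0": ["B1", "B2"], "B2": ["B3"]}
loads = {"B0": 0.0, "B1": 2.0, "B2": 1.5, "B3": 0.5}
print(aggregate_loads(children, loads))
# losing line (B0, B2) cuts L["B2"] = 2.0 (its own 1.5 plus B3's 0.5)
```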
The second part 502 of process 500 determines the N-k resiliency level prediction for contingency reconfiguration paths based on a variant of Depth-first-search (DFS) traversal, which traverses a graph in a depthward motion and uses a stack to recall the next vertex from which to start a search when a dead end occurs in any iteration. The probability associated with each N-k scenario can be calculated as follows:
P(scenario K) = ∏_{i∈K} pi · ∏_{j∈M\K} (1 − pj)

where K is the set of k lines that have outages.
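The N-1 and N-k scenario probabilities described above reduce to simple products over the per-line outage probabilities. The following sketch uses illustrative probability values; the probabilities themselves would come from the vulnerability data discussed earlier.

```python
from math import prod

def n1_probability(p, i):
    """N-1 scenario probability: only line i (of the M candidate lines
    with outage probabilities p) fails: pi * prod_{j != i} (1 - pj)."""
    return p[i] * prod(1 - pj for j, pj in enumerate(p) if j != i)

def nk_probability(p, K):
    """N-k scenario probability: exactly the lines in set K fail:
    prod_{i in K} pi * prod_{j not in K} (1 - pj)."""
    return (prod(p[i] for i in K) *
            prod(1 - p[j] for j in range(len(p)) if j not in K))

p = [0.1, 0.2, 0.05]              # illustrative per-line outage probabilities
print(n1_probability(p, 0))       # ≈ 0.1 * 0.8 * 0.95 = 0.076
print(nk_probability(p, {0, 1}))  # ≈ 0.1 * 0.2 * 0.95 = 0.019
```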
When it comes to determining the load loss for each distribution circuit lost under k outages where k≥2, the situation is more complicated than the N-1 case.
DFS is performed to determine the parent-child relationship among the outage edges. Given two nodes u and v, DFS is performed from the substation bus. Referring to
- If intime(u)<intime(v) and outtime(u)>outtime(v)→u is the parent of v
- Else if intime(u)>intime(v) and outtime(u)<outtime(v)→v is the parent of u
- Otherwise, u and v are not on the same branch
For example, where node u corresponds to node 0 and node v corresponds to node 3, applying the above rule reveals that u is the parent of v. Alternatively, where node u corresponds to node 2 and node v corresponds to node 3, applying the above rule reveals that u and v are not on the same branch.
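The intime/outtime rule above can be sketched directly. The tree below mirrors the worked example (substation bus 0, with node 0 the parent of node 3 and nodes 2 and 3 on different branches); the iterative stack-based DFS reflects the stack-for-recall traversal described earlier, though the exact bookkeeping is an illustration.

```python
def dfs_times(children, root):
    """Record DFS entry (intime) and exit (outtime) times for each node,
    using an explicit stack: a node is pushed once to be entered and once
    more to be closed after its subtree is done."""
    intime, outtime, clock = {}, {}, 0
    stack = [(root, False)]
    while stack:
        node, done = stack.pop()
        if done:
            outtime[node] = clock
            clock += 1
            continue
        intime[node] = clock
        clock += 1
        stack.append((node, True))
        for c in reversed(children.get(node, [])):
            stack.append((c, False))
    return intime, outtime

def relation(u, v, intime, outtime):
    """Apply the interval-nesting rule from the text."""
    if intime[u] < intime[v] and outtime[u] > outtime[v]:
        return "u is parent of v"
    if intime[u] > intime[v] and outtime[u] < outtime[v]:
        return "v is parent of u"
    return "not on the same branch"

# Illustrative tree rooted at substation bus 0: 0 -> {1, 2}, 1 -> {3}
children = {0: [1, 2], 1: [3]}
it, ot = dfs_times(children, 0)
print(relation(0, 3, it, ot))  # node 0 is on the parent branch of node 3
print(relation(2, 3, it, ot))  # nodes 2 and 3 are on different branches
```

Strictly the rule identifies ancestry (u on the upstream branch of v), which is how the text's node-0/node-3 example uses it.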
An example of a pseudo code for part 502 is presented below as Algorithm 2.
Advantages of the two-part process 500 with stochastic reconfiguration include the following. Compared with deterministic N-k resiliency, the results provide a system operator a better overview of the system resiliency level for each candidate reconfiguration. In an embodiment, the results of process 500 are sent to a user interface for display presentation to a user. As an example of such a display presentation,
For additional enhancements to the resiliency forecasting methods described above, parallelization and model reduction features are introduced in accordance with embodiments of this disclosure, as shown by flow chart examples of
possible scenario combinations. Because the different scenarios are mutually independent, the probability computation for each scenario (lines 6-7 of Algorithm 2) can be parallelized on different processors (step 805). Additionally, the DFS for parent-child relationship (lines 8-14 of Algorithm 2) can be parallelized on different processors. A collector operation at step 806 tabulates the aggregated loads and probabilities for each scenario.
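Because each scenario's probability depends only on the per-line probabilities, the per-scenario work can be mapped onto a pool of workers, as in step 805. The sketch below uses a thread pool for brevity and a toy line set; parallelism across physical processors, as described above, would typically use a process pool instead, and the line probabilities are illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import combinations
from math import prod

# Illustrative per-line outage probabilities
p = {"L1": 0.1, "L2": 0.2, "L3": 0.05, "L4": 0.15}

def scenario_probability(K):
    """Independent per-scenario work: joint probability that exactly
    the lines in K fail. Mutually independent across scenarios, so
    safe to compute on different workers."""
    return (prod(p[i] for i in K) *
            prod(1 - p[j] for j in p if j not in K))

scenarios = list(combinations(p, 2))     # all N-2 scenario combinations
with ThreadPoolExecutor() as pool:       # map scenarios onto workers
    probs = dict(zip(scenarios, pool.map(scenario_probability, scenarios)))

total = sum(probs.values())              # collector step tabulates results
```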
In another example, process 850 is similar to process 800, where steps 851, 852, 854, 855 and 856 correspond to steps 801, 802, 804, 805 and 806. Process 850 introduces model reduction step 853, in which insignificant buses are filtered out. For example, in the original set of M possible line outages, it is possible that some of the outages have either a very low probability or a very small aggregated load under that line. As a result, step 853 applies empirical thresholds to reduce the number of candidate loss loads from M to M′. Accordingly, the number of scenario combinations subsequently generated can be greatly reduced, which accelerates the resiliency forecast computation.
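The model reduction filter can be sketched as a simple threshold pass. The threshold values are illustrative empirical assumptions, and the keep-if-either-significant rule is one reading of the text: a line is dropped only when both its outage probability and its aggregated load are insignificant, so that low-probability but high-consequence lines (the black swan concern discussed earlier) are retained.

```python
def reduce_candidates(lines, prob, agg_load, p_min=0.01, load_min=0.5):
    """Model reduction sketch: shrink the candidate outage set from M to
    M' by dropping lines whose outage probability AND aggregated
    downstream load are both below empirical thresholds."""
    return [ln for ln in lines
            if prob[ln] >= p_min or agg_load[ln] >= load_min]

lines = ["L1", "L2", "L3"]
prob = {"L1": 0.10, "L2": 0.002, "L3": 0.05}
agg_load = {"L1": 3.0, "L2": 0.1, "L3": 2.0}
print(reduce_candidates(lines, prob, agg_load))  # L2 filtered out
```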
An advantage of the parallelization and model reduction features described above compared with deterministic approaches is that the results are obtained much faster and are achievable for large power distribution systems having 10,000 feeder buses or more. Table 1 summarizes the improvement in computational time compared with a conventional deterministic approach that uses OpenDSS.
The model reduction component is made possible by the combination of BFS and DFS algorithms used in the proposed stochastic N-k resiliency analysis. Both BFS and DFS run in O(V+E) time (linear in the number of vertices and edges), so the approach scales to very large distribution systems.
The processors 1020 may include one or more central processing units (CPUs), graphical processing units (GPUs), or any other processor known in the art. More generally, a processor as described herein is a device for executing machine-readable instructions stored on a computer readable medium, for performing tasks and may comprise any one or combination of hardware and firmware. A processor may also comprise memory storing machine-readable instructions executable for performing tasks. A processor acts upon information by manipulating, analyzing, modifying, converting or transmitting information for use by an executable procedure or an information device, and/or by routing the information to an output device. A processor may use or comprise the capabilities of a computer, controller or microprocessor, for example, and be conditioned using executable instructions to perform special purpose functions not performed by a general purpose computer. A processor may include any type of suitable processing unit including, but not limited to, a central processing unit, a microprocessor, a Reduced Instruction Set Computer (RISC) microprocessor, a Complex Instruction Set Computer (CISC) microprocessor, a microcontroller, an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a System-on-a-Chip (SoC), a digital signal processor (DSP), and so forth. Further, the processor(s) 1020 may have any suitable microarchitecture design that includes any number of constituent components such as, for example, registers, multiplexers, arithmetic logic units, cache controllers for controlling read/write operations to cache memory, branch predictors, or the like. The microarchitecture design of the processor may be capable of supporting any of a variety of instruction sets. A processor may be coupled (electrically and/or as comprising executable components) with any other processor enabling interaction and/or communication there-between.
A user interface processor or generator is a known element comprising electronic circuitry or software or a combination of both for generating display images or portions thereof. A user interface comprises one or more display images enabling user interaction with a processor or other device.
The system bus 1021 may include at least one of a system bus, a memory bus, an address bus, or a message bus, and may permit exchange of information (e.g., data (including computer-executable code), signaling, etc.) between various components of the computer system 1010. The system bus 1021 may include, without limitation, a memory bus or a memory controller, a peripheral bus, an accelerated graphics port, and so forth. The system bus 1021 may be associated with any suitable bus architecture including, without limitation, an Industry Standard Architecture (ISA), a Micro Channel Architecture (MCA), an Enhanced ISA (EISA), a Video Electronics Standards Association (VESA) architecture, an Accelerated Graphics Port (AGP) architecture, a Peripheral Component Interconnects (PCI) architecture, a PCI-Express architecture, a Personal Computer Memory Card International Association (PCMCIA) architecture, a Universal Serial Bus (USB) architecture, and so forth.
Continuing with reference to
The operating system 1038 may be loaded into the memory 1030 and may provide an interface between other application software executing on the computer system 1010 and hardware resources of the computer system 1010. More specifically, the operating system 1038 may include a set of computer-executable instructions for managing hardware resources of the computer system 1010 and for providing common services to other application programs (e.g., managing memory allocation among various application programs). In certain example embodiments, the operating system 1038 may control execution of one or more of the program modules depicted as being stored in the data storage 1040. The operating system 1038 may include any operating system now known or which may be developed in the future including, but not limited to, any server operating system, any mainframe operating system, or any other proprietary or non-proprietary operating system.
The computer system 1010 may also include a disk/media controller 1043 coupled to the system bus 1021 to control one or more storage devices for storing information and instructions, such as a magnetic hard disk 1041 and/or a removable media drive 1042 (e.g., floppy disk drive, compact disc drive, tape drive, flash drive, and/or solid state drive). Storage devices 1040 may be added to the computer system 1010 using an appropriate device interface (e.g., a small computer system interface (SCSI), integrated device electronics (IDE), Universal Serial Bus (USB), or FireWire). Storage devices 1041, 1042 may be external to the computer system 1010.
The computer system 1010 may include a user interface module 1060 for communication with a graphical user interface (GUI) 1061, which may comprise one or more input/output devices, such as a keyboard, touchscreen, tablet and/or a pointing device, for interacting with a computer user and providing information to the processors 1020, and a display screen or monitor. In an aspect, the GUI 1061 relates to a display for presenting resiliency level distributions as earlier described.
The computer system 1010 may perform a portion or all of the processing steps of embodiments of the invention in response to the processors 1020 executing one or more sequences of one or more instructions contained in a memory, such as the system memory 1030. Such instructions may be read into the system memory 1030 from another computer readable medium of storage 1040, such as the magnetic hard disk 1041 or the removable media drive 1042. The magnetic hard disk 1041 and/or removable media drive 1042 may contain one or more data stores and data files used by embodiments of the present disclosure. The data store 1040 may include, but is not limited to, databases (e.g., relational, object-oriented, etc.), file systems, flat files, distributed data stores in which data is stored on more than one node of a computer network, peer-to-peer network data stores, or the like. Data store contents and data files may be encrypted to improve security. The processors 1020 may also be employed in a multi-processing arrangement to execute the one or more sequences of instructions contained in system memory 1030. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. Thus, embodiments are not limited to any specific combination of hardware circuitry and software.
As stated above, the computer system 1010 may include at least one computer readable medium or memory for holding instructions programmed according to embodiments of the invention and for containing data structures, tables, records, or other data described herein. The term “computer readable medium” as used herein refers to any medium that participates in providing instructions to the processors 1020 for execution. A computer readable medium may take many forms including, but not limited to, non-transitory, non-volatile media, volatile media, and transmission media. Non-limiting examples of non-volatile media include optical disks, solid state drives, magnetic disks, and magneto-optical disks, such as magnetic hard disk 1041 or removable media drive 1042. Non-limiting examples of volatile media include dynamic memory, such as system memory 1030. Non-limiting examples of transmission media include coaxial cables, copper wire, and fiber optics, including the wires that make up the system bus 1021. Transmission media may also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications.
Computer readable medium instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure.
Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, may be implemented by computer readable medium instructions.
The computing environment 1000 may further include the computer system 1010 operating in a networked environment using logical connections to one or more remote computers, such as remote computing device 1073. The network interface 1070 may enable communication, for example, with other remote devices 1073 or systems and/or the storage devices 1041, 1042 via the network 1071. Remote computing device 1073 may be a personal computer (laptop or desktop), a mobile device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to computer system 1010. When used in a networking environment, computer system 1010 may include modem 1072 for establishing communications over a network 1071, such as the Internet. Modem 1072 may be connected to system bus 1021 via user network interface 1070, or via another appropriate mechanism.
Network 1071 may be any network or system generally known in the art, including the Internet, an intranet, a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a direct connection or series of connections, a cellular telephone network, or any other network or medium capable of facilitating communication between computer system 1010 and other computers (e.g., remote computing device 1073). The network 1071 may be wired, wireless or a combination thereof. Wired connections may be implemented using Ethernet, Universal Serial Bus (USB), RJ-6, or any other wired connection generally known in the art. Wireless connections may be implemented using Wi-Fi, WiMAX, and Bluetooth, infrared, cellular networks, satellite or any other wireless connection methodology generally known in the art. Additionally, several networks may work alone or in communication with each other to facilitate communication in the network 1071.
It should be appreciated that the program modules, applications, computer-executable instructions, code, or the like depicted in
Although specific embodiments of the disclosure have been described, one of ordinary skill in the art will recognize that numerous other modifications and alternative embodiments are within the scope of the disclosure. For example, any of the functionality and/or processing capabilities described with respect to a particular device or component may be performed by any other device or component. Further, while various illustrative implementations and architectures have been described in accordance with embodiments of the disclosure, one of ordinary skill in the art will appreciate that numerous other modifications to the illustrative implementations and architectures described herein are also within the scope of this disclosure. In addition, it should be appreciated that any operation, element, component, data, or the like described herein as being based on another operation, element, component, data, or the like can be additionally based on one or more other operations, elements, components, data, or the like. Accordingly, the phrase “based on,” or variants thereof, should be interpreted as “based at least in part on.”
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
Claims
1. A computer system for simulating power distribution system reconfigurations for multiple contingencies, the computer system comprising:
- a processor; and
- a memory having algorithmic modules stored thereon executable by the processor, the modules comprising:
- a decision tree engine configured to: instantiate a decision tree model configured as a graph with nodes and edges corresponding to simulated outage states of one or more buses in the power distribution system and simulated states of reconfigurable switches in the power distribution system, the model spanning from parent nodes to child nodes in a radial pattern of branches; disconnect edges in the model related to each outage; and determine a reconfiguration path with a plurality of switches reconfigured to a closed state by an iteration of tree search algorithms; and
- a power flow simulation engine configured to: generate a simulation to estimate feeder cable and transformer loading and bus voltages on the reconfigured path; compare the estimates against constraints including system capacity ratings and minimum voltage, the constraints extracted from a power distribution system database; and
- classify the reconfiguration as successful on a condition that the constraints are satisfied; wherein further iterations of the tree search algorithms are repeated to identify additional candidate reconfiguration paths and to rank reconfiguration paths classified as successful.
2. The computer system of claim 1, wherein the iteration of tree search algorithms comprises:
- executing a Monte Carlo tree search (MCTS) algorithm and a spanning tree search (STS) algorithm, wherein the MCTS algorithm is configured to select a child node for expansion, and the STS algorithm is configured to: set open a subset of configurable switches in the model; identify islands of connected components through aggregation of connected loads; and reconstruct a condensed graph from spanning trees across aggregated components;
- wherein the MCTS algorithm triggers the power flow simulation with a selection of at least one switch closure.
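The two search components recited in claim 2 can be sketched separately: a standard UCB1 rule for the MCTS child-selection step, and a union-find pass that identifies islands of connected components with their aggregated loads. The node statistics, bus names, and load values below are assumptions for illustration, not the patented algorithm.

```python
import math
from collections import defaultdict

def select_child(children, c=1.4):
    """UCB1 child selection (standard MCTS rule).
    children: list of dicts with 'visits' and 'value' (total reward)."""
    total = sum(ch["visits"] for ch in children) or 1
    def ucb(ch):
        if ch["visits"] == 0:
            return float("inf")  # expand unvisited children first
        return (ch["value"] / ch["visits"]
                + c * math.sqrt(math.log(total) / ch["visits"]))
    return max(children, key=ucb)

def find_islands(buses, closed_edges, load):
    """Union-find over closed edges; aggregate the load on each island."""
    parent = {b: b for b in buses}
    def root(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for a, b in closed_edges:
        parent[root(a)] = root(b)
    agg = defaultdict(float)
    for b in buses:
        agg[root(b)] += load.get(b, 0.0)
    return dict(agg)
```

The aggregated islands correspond to the condensed graph of claim 2, over which spanning trees are reconstructed before the MCTS step triggers a power flow simulation for a chosen switch closure.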
3. The computer system of claim 1, wherein the decision tree engine is further configured to generate chance nodes in the decision tree model for tracking probabilities for a reconfiguration branch decision of a parent node to either of two child nodes, wherein the probabilities relate to a successful reconfiguration classification.
4. The computer system of claim 3, wherein the processor comprises a set of parallel processors, and the probabilities are computed in a parallelized manner across the parallel processors.
5. The computer system of claim 1, wherein the power flow simulation engine is further configured to determine an aggregated load under each feeder line by traversing the decision tree using a breadth-first-search traversal algorithm.
6. The computer system of claim 5, wherein the power flow simulation engine is further configured to determine all combinations of load loss scenarios and parent-child relationships among outage edges, and to calculate a total load loss for all aggregated loads for each distribution circuit lost in the outage.
7. The computer system of claim 6, wherein the parent-child relationships are determined based on intime( ) and outtime( ) recorded time stamp values for when outage nodes are pushed into and out of a stack during a depth-first-search traversal of the decision tree model.
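The intime( )/outtime( ) mechanism of claim 7 is the classic stack-based DFS interval test: a node u is an ancestor of v exactly when v's entry/exit interval nests inside u's. A minimal sketch, with an illustrative tree (node names are assumptions):

```python
def dfs_times(tree, root):
    """Record entry/exit time stamps for each node via an explicit stack,
    mirroring the intime()/outtime() values recited above."""
    intime, outtime, clock = {}, {}, 0
    stack = [(root, False)]
    while stack:
        node, leaving = stack.pop()
        if leaving:
            clock += 1
            outtime[node] = clock        # node popped out of the stack
            continue
        clock += 1
        intime[node] = clock             # node pushed into the stack
        stack.append((node, True))
        for child in tree.get(node, []):
            stack.append((child, False))
    return intime, outtime

def is_ancestor(u, v, intime, outtime):
    """u is an ancestor of v iff v's DFS interval nests inside u's."""
    return intime[u] <= intime[v] and outtime[v] <= outtime[u]
```

This constant-time ancestor test is what makes it cheap to enumerate parent-child relationships among many outage edges after a single DFS traversal.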
8. The computer system of claim 1, wherein the power flow simulation engine is further configured to apply thresholds to reduce the number of candidate load losses based on outages having low probability or outages having aggregated load below a low threshold.
9. The computer system of claim 1, wherein k outages are known to have occurred, and the decision tree engine is further configured to determine which switch to close by:
- ranking islanded components in order of importance criteria,
- filtering highest ranking components based on loading of a grid-connected feeder on an energized side of the switch, and nodal voltage being above minimum specifications.
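The rank-then-filter selection of claim 9 can be sketched as follows. The importance criteria (critical-load flag, then total load), the feeder headroom figure, and the voltage estimate are all illustrative assumptions, not the claimed scoring function.

```python
def pick_switch(islands, feeder_headroom_kw, v_min_pu):
    """Rank islanded components by importance, then return the switch of the
    highest-ranked island that passes the feeder-loading and voltage filters.
    islands: list of dicts with 'switch', 'critical', 'load_kw',
    'est_voltage_pu' (hypothetical fields for this sketch)."""
    ranked = sorted(islands,
                    key=lambda i: (i["critical"], i["load_kw"]),
                    reverse=True)
    for isl in ranked:
        # Filter: the energized feeder must have headroom for the island's
        # load, and the estimated nodal voltage must stay above the floor.
        if (isl["load_kw"] <= feeder_headroom_kw
                and isl["est_voltage_pu"] >= v_min_pu):
            return isl["switch"]
    return None  # no feasible closure among the candidates
```

For example, a critical island that would overload the feeder is skipped in favor of the next-ranked island that satisfies both constraints.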
10. The computer system of claim 1, wherein the power flow simulation engine is further configured to:
- determine a probability for each contingency;
- send a resiliency level distribution for the power distribution system to a display as a graph of contingency probability versus load loss for the contingency; and
- rank the candidate reconfigurations according to resiliency level.
11. A computer-implemented method of simulating power distribution system reconfigurations for multiple contingencies, the method comprising:
- instantiating a decision tree model configured as a graph with nodes and edges corresponding to simulated outage states of one or more buses in the power distribution system and simulated states of reconfigurable switches in the power distribution system, the model spanning from parent nodes to child nodes in a radial pattern of branches;
- disconnecting edges in the model related to each outage; and
- determining a reconfiguration path with a plurality of switches reconfigured to a closed state by an iteration of tree search algorithms;
- generating a simulation to estimate feeder cable and transformer loading and bus voltages on the reconfigured path;
- comparing the estimates against constraints including system capacity ratings and minimum voltage, the constraints extracted from a power distribution system database; and
- classifying the reconfiguration as successful on a condition that the constraints are satisfied;
- wherein further iterations of tree search algorithms are repeated to identify additional candidate reconfiguration paths and to rank reconfiguration paths classified as successful.
12. The method of claim 11, wherein the iteration of tree search algorithms comprises:
- executing a Monte Carlo tree search (MCTS) algorithm and a spanning tree search (STS) algorithm, wherein the MCTS algorithm is configured to select a child node for expansion, and the STS algorithm is configured to: set open a subset of configurable switches in the model; identify islands of connected components through aggregation of connected loads; and reconstruct a condensed graph from spanning trees across aggregated components;
- wherein the MCTS algorithm triggers the power flow simulation with a selection of at least one switch closure.
13. The method of claim 11, further comprising:
- generating chance nodes in the decision tree model for tracking probabilities for a reconfiguration branch decision of a parent node to either of two child nodes, wherein the probabilities relate to a successful reconfiguration classification.
14. The method of claim 11, further comprising:
- determining an aggregated load under each feeder line by traversing the decision tree using a breadth-first-search traversal algorithm;
- determining all combinations of load loss scenarios and parent-child relationships among outage edges; and
- calculating a total load loss for all aggregated loads for each distribution circuit lost in the outage;
- wherein the parent-child relationships are determined based on intime( ) and outtime( ) recorded time stamp values for when outage nodes are pushed into and out of a stack during a depth-first-search traversal of the decision tree model.
15. The method of claim 11, wherein k outages are known to have occurred, the method further comprising: determining which switch to close by:
- ranking islanded components in order of importance criteria, and
- filtering highest ranking components based on loading of a grid-connected feeder on an energized side of the switch, and nodal voltage being above minimum specifications.
Type: Application
Filed: Aug 30, 2021
Publication Date: Nov 9, 2023
Applicant: Siemens Corporation (Washington, DC)
Inventors: Yubo Wang (Princeton, NJ), Siddharth Bhela (Kendall Park, NJ), Ulrich Muenz (Princeton, NJ)
Application Number: 18/245,017