Dynamic allocation of computing resources for electronic design automation operations
A system may include a set of compute engines. The compute engines may be configured to perform electronic design automation (EDA) operations on a hierarchical dataset representative of an integrated circuit (IC) design. The system may also include a dynamic resource balancing engine configured to allocate computing resources to the set of compute engines and reallocate a particular computing resource allocated to a first compute engine based on an operation priority of an EDA operation performed by a second compute engine, an idle indicator for the first compute engine, or a combination of both.
Electronic circuits, such as integrated microcircuits, are used in nearly every facet of modern society from automobiles to microwaves to personal computers. Design of microcircuits may involve many steps, known as a “design flow.” The particular steps of a design flow often are dependent upon the type of microcircuit being designed, its complexity, the design team, and the microcircuit fabricator or foundry that will manufacture the microcircuit. Electronic design automation (EDA) applications support the design and verification of circuits prior to fabrication. EDA applications may include various functions, tools, or features to test or verify a design at various stages of the design flow, e.g., through execution of software simulators and/or hardware emulators for error detection.
SUMMARY

Disclosed implementations include systems, methods, devices, and logic that may support dynamic allocation of computing resources for EDA operations.
In one example, a method may be performed, executed, or otherwise carried out by a computing system. The method may include allocating computing resources to a set of compute engines, each compute engine configured to perform EDA operations on a hierarchical dataset representative of an integrated circuit (IC) design, and dynamically reallocating a particular computing resource from a first compute engine to a second compute engine based on an operation priority of an EDA operation executed by the second compute engine, an idle indicator for the first compute engine, or a combination of both.
In another example, a system may include a set of compute engines, and each compute engine may be configured to perform EDA operations on a hierarchical dataset representative of an IC design. The system may also include a dynamic resource balancing engine configured to allocate computing resources to the set of compute engines and reallocate a particular computing resource allocated to a first compute engine based on an operation priority of an EDA operation performed by a second compute engine, an idle indicator for the first compute engine, or a combination of both.
In yet another example, a non-transitory machine-readable medium may store processor-executable instructions. When executed, the instructions may cause a system to allocate computing resources to a set of compute engines, each compute engine configured to perform EDA operations on a hierarchical dataset representative of a circuit design and dynamically reallocate a particular computing resource from a first compute engine to a second compute engine based on an operation priority of an EDA operation executed by the second compute engine, an idle indicator for the first compute engine, or a combination of both.
Certain examples are described in the following detailed description and in reference to the drawings.
The following disclosure relates to EDA applications and CAD systems which may be used to facilitate the design and manufacture of circuits. As technology improves, modern circuit designs may include billions of components and more. To support increasing degrees of circuit design complexity, EDA applications may include various features such as high-level synthesis, schematic capture, transistor or logic simulation, field solvers, functional and physical verifications, geometry processing, equivalence checking, design rule checks, mask data preparation, and more.
Execution of EDA applications and processes may require significant computational resources, and computing environments may vary between different entities using EDA applications for circuit design and verification. Computing environments configured to execute EDA applications and processes may range from 16 CPUs to 10,000 CPUs or more. As circuit designs continue to increase in complexity, the computational requirements of EDA applications may continue to increase. As such, improvements in the computational performance and capability of computing systems used to execute EDA applications may provide significant technical benefits.
The features described herein may support dynamic allocation of computing resources for execution of EDA operations. In particular, the dynamic resource allocation features (also referred to as dynamic resource balancing features) described herein may provide specific criteria and mechanisms by which computing resources can be dynamically distributed during execution of EDA processes and underlying EDA operations. The various dynamic load balancing features described herein may be specific to EDA processes for circuit design, and example load balancing criteria include EDA operation-based resource reallocations and idle-based resource reallocations (e.g., at execution tails of EDA operations). Such dynamic resource balancing specific to EDA operations may increase the computational efficiency and effectiveness of EDA computing systems.
In connection with the various dynamic resource balancing features described herein, the computing system 100 may implement, utilize, or otherwise support dynamic resource allocation as described in U.S. patent application Ser. No. 15/873,827 filed on Jan. 17, 2018 and titled “DYNAMIC DISTRIBUTED RESOURCE MANAGEMENT” (the '827 application), which is incorporated herein by reference in its entirety. Computing resources used and allocated by the computing system 100 may be maintained and allocated according to the various dynamic allocation mechanisms as described in the '827 application and in accordance with the various criteria and resource balancing features described herein.
The computing system 100 may include various computing resources to execute EDA processes and operations. As an example implementation, the computing system 100 shown in
Each compute engine may be implemented as a combination of hardware and software, and may thus include physical computing resources (e.g., CPUs, memory, network resources, etc.) and processor-executable instructions (e.g., workflow processes, instruction scheduling logic, resource acquisition or thread activation instructions, etc.) to support EDA computations. In operation, the compute engines may operate in parallel, for example, each serving as a command server that performs EDA operations on specific portions of an IC design or performs specific sets of EDA operations to provide parallelism and operation-level concurrency in EDA process execution. For example, the compute engines 101, 102, and 103 shown in
EDA applications may be executed in various types of computing environments, including in whole or in part via cloud computing. As such, the computing system 100 (including the compute engines 101, 102, and 103) may be implemented in part via a public cloud, private cloud, or hybrid cloud. Additionally or alternatively, EDA applications may be executed via a software-as-a-service (“SaaS”) distribution model (whether in whole or in part), and the computing resources that comprise the computing system 100 may be off-premise (e.g., with regard to EDA application users), on-premise, or a combination of both. The various EDA features described herein may be implemented as part of a SaaS distribution model or via cloud computing implementations.
As described in greater detail herein, the computing system 100 may dynamically balance computing resources assigned to different compute engines according to various balancing criteria. In doing so, the computing system 100 may increase computational efficiency by reallocating computing resources (e.g., physical CPUs, memory, or any other computational resource used for operation execution) according to a priority of EDA operations, idle compute engines, or various other factors. Such dynamic resource allocation and balancing may be performed by a dynamic resource balancing engine 110, e.g., as shown in
The computing system 100 may implement the dynamic resource balancing engine 110 (and components thereof) in various ways, for example as hardware and programming. The programming for the dynamic resource balancing engine 110 may take the form of processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the dynamic resource balancing engine 110 may include a processor to execute those instructions. A processor may take the form of single processor or multi-processor systems, and in some examples, the computing system 100 implements multiple engine components or system elements using the same computing system features or hardware components (e.g., a common processor or common storage medium for the dynamic resource balancing engine 110 and compute engines 101, 102, and 103).
In operation, the dynamic resource balancing engine 110 may allocate computing resources to a set of compute engines, such as the compute engines 101, 102, and 103. As used herein, a computing resource may include any physical or logical resource that is used for execution of an EDA operation. Computing resources may thus include physical CPUs (including remote resources), network adapters, I/O bandwidth, memory (physical or virtual), scheduling slots for use of particular computational elements, etc. The dynamic resource balancing engine 110 may also reallocate a particular computing resource allocated to a first compute engine (e.g., the compute engine 101) based on an operation priority of an EDA operation performed by a second compute engine (e.g., the compute engine 102), an idle indicator for the first compute engine, or a combination of both.
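As a non-limiting illustration only (not part of the disclosed implementation), the allocation and reallocation bookkeeping described above might be sketched as follows; the engine names, the use of CPU slots as the tracked resource, the integer priorities, and the rebalance policy are all hypothetical assumptions for this sketch.

```python
from dataclasses import dataclass

@dataclass
class ComputeEngine:
    name: str
    cpu_slots: int = 0   # computing resources currently assigned
    priority: int = 0    # operation priority of the running EDA operation
    idle: bool = False   # idle indicator reported by the engine

class DynamicResourceBalancer:
    """Tracks per-engine resource assignments and moves resources between
    engines based on operation priority or idle indicators."""

    def __init__(self, engines):
        self.engines = {e.name: e for e in engines}

    def allocate(self, name, slots):
        self.engines[name].cpu_slots += slots

    def reallocate(self, src, dst, slots):
        source, dest = self.engines[src], self.engines[dst]
        moved = min(slots, source.cpu_slots)
        source.cpu_slots -= moved
        dest.cpu_slots += moved
        return moved

    def rebalance(self):
        # Move all resources from idle engines to the highest-priority engine.
        busiest = max(self.engines.values(), key=lambda e: e.priority)
        for e in self.engines.values():
            if e.idle and e is not busiest and e.cpu_slots > 0:
                self.reallocate(e.name, busiest.name, e.cpu_slots)

balancer = DynamicResourceBalancer(
    [ComputeEngine("engine_101"), ComputeEngine("engine_102"), ComputeEngine("engine_103")]
)
for name in ("engine_101", "engine_102", "engine_103"):
    balancer.allocate(name, 8)
balancer.engines["engine_101"].idle = True       # idle indicator
balancer.engines["engine_103"].priority = 10     # high-priority EDA operation
balancer.rebalance()
print(balancer.engines["engine_103"].cpu_slots)  # 16: engine_103 absorbed engine_101's slots
```

The policy shown (drain idle engines into the single highest-priority engine) is one of many possible balancing criteria; the description below discusses several others.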
These and other dynamic resource balancing features are described in greater detail next. Various balancing criteria are described specific to execution of EDA applications, including computing resource reallocations based on specific EDA operations or operation types, idle-based reallocations (e.g., at EDA operation tails), and others.
The hierarchical dataset 220 may take the form of any hierarchical representation of a circuit design (e.g., the circuit layout 210). In that regard, the hierarchical dataset 220 (which may be in the form of a hierarchical database) may include various hierarchies in circuit representation to provide computational efficiencies. Hierarchies in the hierarchical dataset 220 may include hierarchies of design elements (e.g., combination of individual circuit structures into larger circuit structures), of stacking order of individual layers in an integrated circuit, or in other hierarchical forms.
In some instances, the hierarchical dataset 220 may include various design cells that represent the circuit layout in a cell-based hierarchical structure. Hierarchical cells may encapsulate individual design structures (e.g., electrode contacts) which can be grouped to form other hierarchical cells (e.g., NAND gates) on a higher hierarchical layer, which may be further grouped into other hierarchical cells on yet other higher hierarchical layers. In such a way, the hierarchical dataset 220 may represent various design hierarchies in the circuit layout 210. Such hierarchical structures may support parallelism and operational-level concurrency for EDA application executions.
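The cell-based grouping described above can be pictured with a minimal sketch, assuming a simple tree of named cells (the cell names and the `leaf_count` helper are illustrative, not part of any actual hierarchical database format):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DesignCell:
    """A cell in a hierarchical layout representation: leaf cells hold
    individual design structures; composite cells group lower-level cells."""
    name: str
    children: List["DesignCell"] = field(default_factory=list)

    def leaf_count(self) -> int:
        # Total number of leaf-level design structures under this cell.
        if not self.children:
            return 1
        return sum(child.leaf_count() for child in self.children)

# Electrode contacts grouped into a NAND gate, gates grouped into a block.
nand = DesignCell("nand_gate", children=[
    DesignCell("electrode_contact"), DesignCell("electrode_contact")])
block = DesignCell("logic_block", children=[nand, nand])
print(block.leaf_count())  # 4
```

Because each subtree is self-contained, subtrees can be handed to different compute engines, which is what enables the parallelism and operation-level concurrency noted above.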
The dynamic resource balancing engine 110 may allocate computing resources for performing EDA operations on the hierarchical dataset 220, or delineated portions thereof. In some examples, the dynamic resource balancing engine 110 identifies specific sections of the hierarchical dataset 220 (e.g., specific layers or hierarchical cells) to assign to different compute engines for EDA computations. Additionally or alternatively, the dynamic resource balancing engine 110 may issue different EDA operations for execution on the hierarchical dataset 220 (or selected portions thereof) to different compute engines. As such, the compute engine 101 may perform a first EDA operation on a portion of the hierarchical dataset 220 (e.g., a multi-patterning color operation on a metal 1 layer of the circuit layout 210) whereas the compute engine 102 may (simultaneously) perform a second EDA operation on a different portion of the hierarchical dataset 220 (e.g., a design-rule-check operation on a metal 2 layer of the circuit layout 210).
EDA operations may vary in computational complexity and some EDA operations may use (e.g., consume) outputs generated from other EDA operations. That is, EDA operations may have different timing, computing, or dependency requirements. To address such differences in requirements and complexity, the dynamic resource balancing engine 110 may reallocate resources among compute engines to prioritize execution of selected EDA operations. Such re-provisioning of computing resources among compute engines for execution of an EDA application may be based on preconfigured or determined EDA operation priorities.
To illustrate, upon instantiation of an EDA application, the dynamic resource balancing engine 110 may assign the compute engines 101, 102, and 103 a fixed set of computing resources to use for execution of the EDA operations assigned to each respective compute engine. At any point during execution of the EDA application, the dynamic resource balancing engine 110 may reallocate computing resources assigned to a given compute engine to a different compute engine based on the operation priorities of the EDA operation executed by the given compute engine, the different compute engine, or both.
As a particular example, the dynamic resource balancing engine 110 may prioritize EDA operations in a critical execution path of EDA processes of an EDA application. Various EDA processes may be performed on the hierarchical dataset 220 such as layout-versus-schematic (LVS) verifications, design rule checks (DRC), design for manufacturing (DFM) processes, and optical proximity corrections (OPC) to name a few. Each such EDA process may include a critical execution path of EDA operations, particularly when executed upon the hierarchical dataset 220. The critical execution path may represent a critical path of execution time for execution of an EDA process (e.g., the sequence of EDA operations in the EDA process with the longest execution time).
The dynamic resource balancing engine 110 may determine the critical execution path of a particular EDA process (e.g., an LVS verification) through a graph analysis of the various EDA operations that make up the particular EDA process. For instance, the dynamic resource balancing engine 110 may represent the EDA process as a graph, with each node representing an EDA operation on a different hierarchical portion of a circuit design and edges between nodes representing input dependencies and their traversal (e.g., computational) costs. As such, the dynamic resource balancing engine 110 may determine or otherwise identify a critical execution path for execution of an EDA process.
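One common way to perform such a graph analysis is a longest-path computation over the dependency graph. The sketch below assumes a hypothetical four-operation EDA process with made-up edge costs; the operation names and costs are illustrative only and do not reflect any actual EDA process.

```python
import functools

# Hypothetical dependency graph: op -> [(dependent op, traversal cost), ...]
graph = {
    "read_layout":   [("boolean_merge", 2), ("fill", 5)],
    "boolean_merge": [("drc_check", 3)],
    "fill":          [("drc_check", 1)],
    "drc_check":     [],
}

@functools.lru_cache(maxsize=None)
def longest_path(op):
    """Return (cost, path) of the most expensive chain starting at op."""
    best_cost, best_path = 0, [op]
    for succ, cost in graph[op]:
        sub_cost, sub_path = longest_path(succ)
        if cost + sub_cost > best_cost:
            best_cost, best_path = cost + sub_cost, [op] + sub_path
    return best_cost, best_path

cost, critical_path = longest_path("read_layout")
print(cost, critical_path)  # 6 ['read_layout', 'fill', 'drc_check']
```

Engines executing operations on the returned path would then be candidates for additional resources, as described next.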
During execution of the EDA process, the dynamic resource balancing engine 110 may provision additional computing resources to any compute engine(s) executing EDA operations in the critical execution path of an EDA process. To do so, the dynamic resource balancing engine 110 may identify a particular compute engine performing an EDA operation on a critical execution path (e.g., of an EDA process) and reallocate computing resources assigned to a different compute engine executing an EDA operation that is not on the critical execution path.
In some examples, a compute engine itself may provide an indication of operation priority to the dynamic resource balancing engine 110. In
Responsive to a determination that the compute engine 103 (in this example) is executing an EDA operation on a critical execution path, the dynamic resource balancing engine 110 may reallocate computing resources from other compute engines to the compute engine 103. In the example shown in
In some implementations, the dynamic resource balancing engine 110 may reallocate computing resources based on specific EDA operation types. As EDA operations may vary in computational complexity, the dynamic resource balancing engine 110 may prioritize EDA operations with higher computational complexities, latencies, or timing requirements. Such operation priorities may be specified through operation priority indicator messages sent from compute engines (e.g., as part of the operation priority indicator 230), which may specify a prioritized EDA operation type being executed by a compute engine. The dynamic resource balancing engine 110 may issue reallocation instructions 240 responsive to receiving the operation priority indicator 230 from the compute engine 103 indicative of a high-priority EDA operation being executed by the compute engine 103.
EDA operation priority may be specified, identified, or determined in various ways. In some examples, the dynamic resource balancing engine 110 maintains a priority list of EDA operation types, and such a priority list may be user-configurable or preconfigured. In other examples, the dynamic resource balancing engine 110 may implement or consult specific load balancing criteria, which may specify particular EDA operations (or EDA operation types) and corresponding resource reallocation actions. As particular examples of EDA operation types, the dynamic resource balancing engine 110 may allocate additional computing resources to a compute engine executing fill operations, multi-patterning operations, high-performance compute (HPC) operations, or any other EDA operations identified or otherwise specified as computationally complex. On the other hand, the dynamic resource balancing engine 110 may deallocate computing resources for less computationally complex EDA operations, for example by reassigning a subset of the computing resources assigned to compute engines executing Boolean-based EDA operations or other EDA operations with lesser complexity or reduced computational requirements or dependencies.
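A user-configurable priority list of operation types, as described above, might be sketched as a simple lookup table; the specific type names, weights, and default below are hypothetical assumptions for illustration.

```python
# Assumed, user-configurable priority weights per EDA operation type.
OPERATION_PRIORITY = {
    "fill": 3,
    "multi_patterning": 3,
    "hpc": 3,
    "boolean": 1,
}
DEFAULT_PRIORITY = 2  # assumed weight for unlisted operation types

def reallocation_action(source_op: str, target_op: str) -> str:
    """Decide whether to move resources from the engine running source_op
    to the engine running target_op, based on the priority table."""
    src = OPERATION_PRIORITY.get(source_op, DEFAULT_PRIORITY)
    tgt = OPERATION_PRIORITY.get(target_op, DEFAULT_PRIORITY)
    return "reallocate" if tgt > src else "keep"

print(reallocation_action("boolean", "multi_patterning"))  # reallocate
print(reallocation_action("fill", "boolean"))              # keep
```

In this sketch, resources flow away from engines running low-weight (e.g., Boolean-based) operations toward engines running high-weight operations, matching the prioritization described above.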
In some instances, the dynamic resource balancing engine 110 may determine EDA operation priority based on the particular hierarchical level that an EDA operation operates on. As one example, the dynamic resource balancing engine 110 may prioritize resource allocations for various layers of an IC design differently. The dynamic resource balancing engine 110 may prioritize front-end layers of an IC design higher than other layers, e.g., by reallocating a computing resource from a compute engine that executes an EDA operation on a metal 10 layer to another compute engine that executes an EDA operation on a metal 1 or metal 2 layer, as one particular example. Such hierarchical-level based priorities and resource reallocations may be configured by a system administrator to allocate additional computing resources to hierarchical levels with specific physical characteristics (e.g., thicker IC layers with increased computational requirements, such as metal 1 and metal 2 layers) or hierarchical levels which are otherwise computationally complex or resource intensive in terms of processing requirements.
As yet another example of EDA operation-based priority, the dynamic resource balancing engine 110 may prioritize EDA operations with reduced-parallelization capability. In some implementations, compute engines may instantiate other compute engines to perform a sub-portion of EDA operations. In such cases, the compute engine may act, in effect, as a command server to initiate execution threads (e.g., through acquisition of local or remote computing resources) to process EDA operations or portions thereof. Such an instantiating compute engine may be referred to as a command compute engine. The compute engines instantiated by the command compute engine (which may be referred to as instantiated-compute engines) may operate on a selected dataset apportioned by the command compute engine. These instantiated compute engines may increase parallelism and support operation-level concurrency to more efficiently perform EDA operations on the hierarchical dataset 220 as logically separate execution units.
In some implementations, instantiated compute engines need not access or have knowledge of the different hierarchies of data in the hierarchical dataset 220, instead executing EDA operations on a data subset provided by the command compute engine. However, some EDA operations may specifically require hierarchical knowledge or cross multiple hierarchical layers, and thus may not be suitable for independent execution by the instantiated compute engines. Such EDA operations may require access to the hierarchical dataset 220, and may require execution by the command compute engine itself instead of the instantiated compute engines. In such scenarios, the dynamic resource balancing engine 110 may prioritize execution of these hierarchy-dependent EDA operations by allocating additional computing resources to a command compute engine, e.g., by drawing from computing resources allocated to the instantiated compute engines or other compute engines executing lower-priority EDA operations.
As described above, the dynamic resource balancing engine 110 may dynamically vary computing resources assigned to different compute engines that execute an EDA application. By leveraging operation-level concurrency and using balancing criteria based on operation priority (whether explicitly set for specific operation-types or based on a critical execution path), the dynamic resource balancing engine 110 may improve EDA computing systems to reduce execution times, increase operational efficiency, and reduce overall resource consumption. Such dynamic resource balancing may reduce EDA application execution times by 10% or more, thus improving computing efficiency of EDA computing systems.
While various individual balancing criteria are described above, the dynamic resource balancing engine 110 may utilize any of the described balancing criteria in combination. Additionally or alternatively, the dynamic resource balancing engine 110 may reallocate computing resources amongst various compute engines to efficiently use idle resources at various junctures of EDA operation execution, for example as described next in
Compute engines may include underutilized or idle computing resources at various points during EDA operation execution. Such points can include the execution tail of EDA processes in an EDA application. To illustrate, different compute engines (e.g., the compute engines 101, 102, and 103) may serve as command compute engines, each assigned to execute various EDA processes, each of which may include an associated set of EDA operations. The command compute engines may instantiate other compute engines to help perform the EDA processes, doing so to increase parallelism and operation concurrency. As each command compute engine (and its respectively instantiated compute engines) executes different EDA processes, the dynamic resource balancing engine 110 (or a command compute engine itself) may continually reallocate resources as these compute engines finish execution of assigned EDA operations.
As a compute engine nears completion of an EDA operation or EDA process (e.g., during an execution tail), there may be decreased opportunities for parallelism as a critical execution path may require serial execution of EDA operations. Such EDA operations executed at the end of an EDA process may be referred to as an EDA operation tail, which may result in unused computing resources during EDA operation execution. In some cases, the “tail” of EDA operation execution or the EDA operation tail may refer to the last 10% of the execution time or the last 10% of an EDA instruction set, though other percentages or gauges to measure the end of an EDA operation sequence are contemplated as well.
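One way to gauge whether an EDA process has entered its operation tail is a simple remaining-work check; the 10% threshold below matches the example above, while the operation-count metric (rather than execution time) is an assumption of this sketch.

```python
def in_operation_tail(completed_ops: int, total_ops: int,
                      tail_fraction: float = 0.10) -> bool:
    """True when the remaining work falls within the configured tail,
    e.g. the last 10% of an EDA instruction set (threshold assumed)."""
    remaining = total_ops - completed_ops
    return remaining <= tail_fraction * total_ops

print(in_operation_tail(91, 100))  # True: 9 of 100 operations remain
print(in_operation_tail(80, 100))  # False: 20 of 100 operations remain
```

A tail check like this could trigger the idle-resource reallocations described below.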
A scenario may occur when multiple command compute engines reach the execution tail of EDA processes, during which the dynamic resource balancing engine 110 may reallocate idle computing resources to other compute engines to increase computational efficiency. In that regard, the dynamic resource balancing engine 110 may increase the overall utilization of computing resources in a computing environment, particularly during the “tail” of EDA operation execution.
In operation, the dynamic resource balancing engine 110 may identify idle or unused computing resources during an EDA operation tail in various ways. In some implementations, the dynamic resource balancing engine 110 polls the various compute engines of a computing system to determine resource utilization rates. Such polling may occur at periodic or irregular (e.g., user-triggered) times. As another example, the compute engines themselves may communicate idle indicators to alert the dynamic resource balancing engine 110 of idle computing resources of a compute engine.
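Processing such poll results might look like the following sketch, where the engine names, the utilization scale, and the 25% idle threshold are all hypothetical assumptions chosen for illustration.

```python
from typing import Dict, List

def find_idle_engines(utilization_by_engine: Dict[str, float],
                      idle_threshold: float = 0.25) -> List[str]:
    """Given poll results mapping engine name -> CPU utilization (0.0-1.0),
    return engines whose utilization falls below the assumed idle threshold."""
    return [name for name, util in utilization_by_engine.items()
            if util < idle_threshold]

poll = {"engine_101": 0.05, "engine_102": 0.90, "engine_103": 0.60}
print(find_idle_engines(poll))  # ['engine_101']
```

Engines returned by such a check would be the candidates whose resources are reallocated to busier engines, as described next.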
Examples of idle indicators are illustrated in
In response, the dynamic resource balancing engine 110 may send a reallocation instruction 320 to the compute engine 101, which in
In the example shown in
While various resource balancing examples are described with respect to EDA operation tails, the dynamic resource balancing engine 110 may reallocate resources for compute engines with any idle computing resources at any point in time. By doing so, the dynamic resource balancing engine 110 may increase computational efficiency and increase resource utilization, which may improve the computing output and operation of EDA systems.
As described above, the dynamic resource balancing engine 110 may dynamically reallocate computing resources during execution of an EDA application. Moreover, the dynamic resource balancing engine 110 may support any of the described dynamic resource balancing features whether compute engines have idle/unused computing resources or not (e.g., EDA operation-based reallocations when computing resources are fully utilized). By flexibly supporting such resource reallocations specific to the execution of EDA operations, the dynamic resource balancing engine 110 may provide the various technical benefits described herein.
In implementing the logic 400, the dynamic resource balancing engine 110 may allocate computing resources to a set of compute engines, each compute engine configured to perform EDA operations on a hierarchical dataset representative of a circuit design (402). The dynamic resource balancing engine 110 may further dynamically reallocate a particular computing resource from a first compute engine to a second compute engine based on an operation priority of an EDA operation executed by the second compute engine, an idle indicator for the first compute engine, or a combination of both (404).
While example dynamic resource balancing features are shown and described through
The system 500 may execute instructions stored on the machine-readable medium 520 through the processor 510. Executing the instructions may cause the system 500 to perform any of the dynamic resource balancing features described herein, including according to any of the features of the dynamic resource balancing engine 110.
For example, execution of the dynamic resource balancing instructions 522 by the processor 510 may cause the system 500 to allocate computing resources to a set of compute engines, each compute engine configured to perform EDA operations on a hierarchical dataset representative of a circuit design and dynamically reallocate a particular computing resource from a first compute engine to a second compute engine based on an operation priority of an EDA operation executed by the second compute engine, an idle indicator for the first compute engine, or a combination of both.
The systems, methods, devices, and logic described above, including the compute engines 101, 102, and 103 as well as the dynamic resource balancing engine 110, may be implemented in many different ways in many different combinations of hardware, logic, circuitry, and executable instructions stored on a machine-readable medium. For example, the compute engines 101, 102, and 103, the dynamic resource balancing engine 110, or combinations thereof, may include circuitry in a controller, a microprocessor, or an application specific integrated circuit (ASIC), or may be implemented with discrete logic or components, or a combination of other types of analog or digital circuitry, combined on a single integrated circuit or distributed among multiple integrated circuits. A product, such as a computer program product, may include a storage medium and machine readable instructions stored on the medium, which when executed in an endpoint, computer system, or other device, cause the device to perform operations according to any of the description above, including according to any features of the compute engines 101, 102, and 103, the dynamic resource balancing engine 110, or combinations of both.
The processing capability of the systems, devices, and engines described herein, including the compute engines 101, 102, and 103 as well as the dynamic resource balancing engine 110, may be distributed among multiple system components, such as among multiple processors and memories, optionally including multiple distributed processing systems or cloud/network elements. Parameters, databases, and other data structures may be separately stored and managed, may be incorporated into a single memory or database, may be logically and physically organized in many different ways, and may be implemented in many ways, including data structures such as linked lists, hash tables, or implicit storage mechanisms. Programs may be parts (e.g., subroutines) of a single program, separate programs, distributed across several memories and processors, or implemented in many different ways, such as in a library (e.g., a shared library).
While various examples have been described above, many more implementations are possible.
Claims
1. A method comprising:
- through a computing system: allocating computing resources to a set of compute engines, each compute engine configured to perform electronic design automation (EDA) operations on a hierarchical dataset representative of an integrated circuit (IC) design; and dynamically reallocating a particular computing resource from a first compute engine to a second compute engine based on an operation priority of an EDA operation executed by the second compute engine, an idle indicator for the first compute engine, or a combination of both, wherein dynamically reallocating the particular computing resource based on the idle indicator for the first compute engine comprises: identifying that the first compute engine is executing an EDA process that has reached an EDA operation tail that requires execution of particular EDA operations for the EDA process in serial; and identifying the particular computing resource as an idle resource unused by the first compute engine during the EDA operation tail of the EDA process for which the particular EDA operations for the EDA process are executed in serial.
2. The method of claim 1, wherein dynamically reallocating the particular computing resource based on the operation priority of the EDA operation comprises:
- determining that the EDA operation executed by the second compute engine is on a critical execution path of an EDA process and, in response, reallocating the particular computing resource to the second compute engine.
3. The method of claim 1, wherein dynamically reallocating the particular computing resource based on the operation priority of the EDA operation comprises:
- determining an operation type of the EDA operation executed by the second compute engine; and
- reallocating the particular computing resource to the second compute engine responsive to a determination that the operation type of the EDA operation is a fill operation or multi-patterning operation.
4. The method of claim 1, wherein dynamically reallocating the particular computing resource based on the operation priority of the EDA operation comprises:
- determining a circuit layer that the EDA operation executed by the second compute engine operates on; and
- reallocating the particular computing resource to the second compute engine responsive to a determination that the circuit layer is a metal 1 or metal 2 layer.
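Claims 2 through 4 give three priority signals for the reallocation decision: membership on a critical execution path, operation type (fill or multi-patterning), and target circuit layer (metal 1 or metal 2). The sketch below combines the three into one predicate; the attribute names and the dict-based operation records are invented for illustration and are not part of the claims.

```python
# Hypothetical priority check combining claims 2-4: an EDA operation earns
# the reallocated resource if it is on the critical execution path, is a
# fill or multi-patterning operation, or targets a metal 1 / metal 2 layer.

PRIORITY_OP_TYPES = {"fill", "multi-patterning"}
PRIORITY_LAYERS = {"metal1", "metal2"}


def has_operation_priority(op):
    return (
        op.get("on_critical_path", False)
        or op.get("op_type") in PRIORITY_OP_TYPES
        or op.get("layer") in PRIORITY_LAYERS
    )


ops = [
    {"op_type": "drc-check", "layer": "via1"},     # no priority signal
    {"op_type": "fill", "layer": "via1"},          # priority: operation type
    {"op_type": "drc-check", "layer": "metal2"},   # priority: circuit layer
    {"op_type": "drc-check", "layer": "via1",
     "on_critical_path": True},                    # priority: critical path
]
print([has_operation_priority(op) for op in ops])  # [False, True, True, True]
```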
5. A system comprising:
- a set of compute engines, each compute engine configured to perform electronic design automation (EDA) operations on a hierarchical dataset representative of an integrated circuit (IC) design; and
- a dynamic resource balancing engine configured to: allocate computing resources to the set of compute engines; and reallocate a particular computing resource allocated to a first compute engine based on an operation priority of an EDA operation executed by a second compute engine, an idle indicator for the first compute engine, or a combination of both,
- wherein the dynamic resource balancing engine is configured to dynamically reallocate the particular computing resource based on the idle indicator for the first compute engine by: identifying that the first compute engine is executing an EDA process that has reached an EDA operation tail that requires execution of particular EDA operations for the EDA process in serial; and identifying the particular computing resource as an idle resource unused by the first compute engine during the EDA operation tail of the EDA process for which the particular EDA operations for the EDA process are executed in serial.
6. The system of claim 5, wherein the dynamic resource balancing engine is configured to dynamically reallocate the particular computing resource based on the operation priority of the EDA operation by:
- determining that the EDA operation executed by the second compute engine is on a critical execution path of an EDA process and, in response, reallocating the particular computing resource to the second compute engine.
7. The system of claim 5, wherein the dynamic resource balancing engine is configured to dynamically reallocate the particular computing resource based on the operation priority of the EDA operation by:
- determining an operation type of the EDA operation executed by the second compute engine; and
- reallocating the particular computing resource to the second compute engine responsive to a determination that the operation type of the EDA operation is a fill operation or multi-patterning operation.
8. The system of claim 5, wherein the dynamic resource balancing engine is configured to dynamically reallocate the particular computing resource based on the operation priority of the EDA operation by:
- determining a circuit layer that the EDA operation executed by the second compute engine operates on; and
- reallocating the particular computing resource to the second compute engine responsive to a determination that the circuit layer is a metal 1 or metal 2 layer.
9. A non-transitory machine-readable medium comprising instructions that, when executed by a processor, cause a system to:
- allocate computing resources to a set of compute engines, each compute engine configured to perform electronic design automation (EDA) operations on a hierarchical dataset representative of a circuit design; and
- dynamically reallocate a particular computing resource from a first compute engine to a second compute engine based on an operation priority of an EDA operation executed by the second compute engine, an idle indicator for the first compute engine, or a combination of both,
- wherein the instructions to dynamically reallocate the particular computing resource based on the idle indicator for the first compute engine comprise instructions that, when executed by the processor, cause the system to: identify that the first compute engine is executing an EDA process that has reached an EDA operation tail that requires execution of particular EDA operations for the EDA process in serial; and identify the particular computing resource as an idle resource unused by the first compute engine during the EDA operation tail of the EDA process for which the particular EDA operations for the EDA process are executed in serial.
10. The non-transitory machine-readable medium of claim 9, wherein the instructions to dynamically reallocate the particular computing resource based on the operation priority of the EDA operation comprise instructions that, when executed by the processor, cause the system to:
- determine that the EDA operation executed by the second compute engine is on a critical execution path of an EDA process and, in response, reallocate the particular computing resource to the second compute engine.
11. The non-transitory machine-readable medium of claim 9, wherein the instructions to dynamically reallocate the particular computing resource based on the operation priority of the EDA operation comprise instructions that, when executed by the processor, cause the system to:
- determine an operation type of the EDA operation executed by the second compute engine; and
- reallocate the particular computing resource to the second compute engine responsive to a determination that the operation type of the EDA operation is a fill operation or multi-patterning operation.
12. The non-transitory machine-readable medium of claim 9, wherein the instructions to dynamically reallocate the particular computing resource based on the operation priority of the EDA operation comprise instructions that, when executed by the processor, cause the system to:
- determine a circuit layer that the EDA operation executed by the second compute engine operates on; and
- reallocate the particular computing resource to the second compute engine responsive to a determination that the circuit layer is a metal 1 or metal 2 layer.
U.S. Patent Documents:

| Document No. | Date | Name |
| --- | --- | --- |
| 6625638 | September 23, 2003 | Kubala |
6865591 | March 8, 2005 | Garg |
6937969 | August 30, 2005 | Vandersteen |
7051188 | May 23, 2006 | Kubala |
8365177 | January 29, 2013 | Chan |
8548790 | October 1, 2013 | Tylutki |
8706798 | April 22, 2014 | Suchter |
10492023 | November 26, 2019 | Gurin |
10884778 | January 5, 2021 | Dunagan |
11354164 | June 7, 2022 | Dennis |
11811680 | November 7, 2023 | Jain |
11822971 | November 21, 2023 | Macha |
20030139838 | July 24, 2003 | Marella |
20030182348 | September 25, 2003 | Leong |
20060036735 | February 16, 2006 | Gasca |
20060143390 | June 29, 2006 | Kottapalli |
20110161972 | June 30, 2011 | Dillenberger |
20120284492 | November 8, 2012 | Zievers |
20130086543 | April 4, 2013 | Agarwal |
20130214408 | August 22, 2013 | Zhao |
20140195673 | July 10, 2014 | Cook |
20140310722 | October 16, 2014 | McGaughy |
20150006140 | January 1, 2015 | Parikh |
20150089511 | March 26, 2015 | Smith |
20150095918 | April 2, 2015 | Alameldeen |
20150286492 | October 8, 2015 | Breitgand |
20150363527 | December 17, 2015 | McGaughy |
20160210154 | July 21, 2016 | Lin |
20160210174 | July 21, 2016 | Hsieh |
20160294728 | October 6, 2016 | Jain |
20160378545 | December 29, 2016 | Ho |
20170061054 | March 2, 2017 | Kalafala |
20170083077 | March 23, 2017 | Hankendi |
20170192887 | July 6, 2017 | Herdrich |
20170262320 | September 14, 2017 | Newburn |
20180052711 | February 22, 2018 | Zhou |
20180121103 | May 3, 2018 | Kavanagh |
20180137031 | May 17, 2018 | Jain |
20180145871 | May 24, 2018 | Golin |
20180173090 | June 21, 2018 | Wang |
20190034326 | January 31, 2019 | Nalluri |
20190340034 | November 7, 2019 | Saballus |
20190361753 | November 28, 2019 | Macha |
20200005127 | January 2, 2020 | Baum |
20200026564 | January 23, 2020 | Bahramshahry |
20200050530 | February 13, 2020 | Orth |
20200073717 | March 5, 2020 | Hari |
20200111713 | April 9, 2020 | Zang |
20200218534 | July 9, 2020 | Lim |
20200241921 | July 30, 2020 | Calmon |
Foreign Patent Documents:

| Document No. | Date | Country |
| --- | --- | --- |
| 1955931 | May 2007 | CN |
101288049 | October 2008 | CN |
105718318 | June 2016 | CN |
108293011 | July 2018 | CN |
Other Publications:

- PCT International Search Report and Written Opinion of the International Searching Authority dated Jul. 15, 2019, corresponding to PCT International Application No. PCT/US2018/056877, filed Oct. 22, 2018.
Type: Grant
Filed: Oct 22, 2018
Date of Patent: Apr 9, 2024
Patent Publication Number: 20210374319
Assignee: Siemens Industry Software Inc. (Plano, TX)
Inventors: Patrick D. Gibson (Tualatin, OR), Robert A. Todd (Beaverton, OR), Jimmy J. Tomblin (Santa Rosa Beach, FL)
Primary Examiner: Phallaka Kik
Application Number: 17/281,649
International Classification: G06F 30/398 (20200101); G06F 9/48 (20060101); G06F 9/50 (20060101); G06F 30/20 (20200101); G06F 30/337 (20200101); G06F 30/373 (20200101);