POWER MANAGEMENT FOR MULTIPLE PROCESSOR CORES
Methods and apparatus relating to power management for multiple processor cores are described. In one embodiment, one or more techniques may be utilized locally (e.g., on a per core basis) to manage power consumption in a processor. In another embodiment, power may be distributed among different power planes of a processor based on energy-based considerations. Other embodiments are also disclosed and claimed.
This application is a continuation of U.S. patent application Ser. No. 12/263,421, entitled “POWER MANAGEMENT FOR MULTIPLE PROCESSOR CORES,” filed Oct. 31, 2008, issued as U.S. Pat. No. 8,402,290 on Mar. 19, 2013, which is hereby incorporated herein by reference in its entirety and for all purposes.
FIELD
The present disclosure generally relates to the field of electronics. More particularly, an embodiment of the invention relates to power management for multiple processor cores.
BACKGROUND
As integrated circuit (IC) fabrication technology improves, manufacturers are able to integrate additional functionality onto a single silicon substrate. As the number of these functionalities increases, however, so does the number of components on a single IC chip. Additional components add additional signal switching, which in turn generates more heat. The additional heat may damage an IC chip by, for example, thermal expansion. Also, the additional heat may limit usage locations and/or applications of a computing device that includes such chips. For example, a portable computing device may rely solely on battery power. Hence, as additional functionality is integrated into portable computing devices, the need to reduce power consumption becomes increasingly important, for example, to maintain battery power for an extended period of time. Non-portable computing systems also face cooling and power generation issues as their IC components use more power and generate more heat.
To limit damage from thermal emergencies, one approach may utilize Dynamic Voltage Scaling (DVS). For example, when the temperature exceeds a certain threshold, the frequency and the voltage are dropped to a certain level, and then increased to another level (not necessarily the original one). In multiple core processor designs, however, such an approach would reduce performance, as all cores may be penalized whether or not they are causing a thermal emergency. Another approach may use frequency throttling (which may be viewed as a projection of DVS onto the frequency domain only). However, the penalty for such an approach may be linear relative to the power reduction. By contrast, the penalty of DVS techniques may be less, in part because reducing the frequency by a factor x is accompanied by reducing the voltage by the same factor, and the power is reduced by a factor of x^3 as a result.
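As a worked illustration of this difference (the specific numbers here are illustrative, not from the original disclosure): since dynamic power scales roughly as V^2·f, scaling both voltage and frequency by x = 0.8 under DVS reduces power to about 0.8^3 ≈ 0.51 of its original value for a 20% frequency loss, whereas pure throttling with the same 20% frequency loss only reduces power to 0.8 of its original value.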
The detailed description is provided with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical items.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of various embodiments. However, various embodiments of the invention may be practiced without the specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to obscure the particular embodiments of the invention. Further, various aspects of embodiments of the invention may be performed using various means, such as integrated semiconductor circuits (“hardware”), computer-readable instructions organized into one or more programs (“software”), or some combination of hardware and software. For the purposes of this disclosure, reference to “logic” shall mean either hardware, software, or some combination thereof.
Some of the embodiments discussed herein may provide efficient power management for multiple processor cores. As discussed above, relying on DVS may reduce performance, as all cores may be penalized whether or not they are causing a thermal emergency. In an embodiment, one or more throttling techniques may be utilized locally (e.g., on a per core basis) for one or more processor cores (e.g., in a multiple core processor), for example, that share a single power plane, in response to detection of a thermal event (e.g., detection of excessive temperature at one or more of the cores). Also, in designs with multiple power planes, power may be distributed among different power planes under energy-based definitions in accordance with one embodiment. Moreover, some embodiments may be applied in computing systems that include one or more processors (e.g., with one or more processor cores), such as those discussed below.
More particularly, an example computing system 100, according to an embodiment, may include one or more processors (collectively referred to herein as “processors 102”), among other components described below.
In an embodiment, the processor 102-1 may include one or more processor cores 106-1 through 106-M (referred to herein as “cores 106,” or “core 106”), a cache 108, and/or a router 110. The processor cores 106 may be implemented on a single integrated circuit (IC) chip. Moreover, the chip may include one or more shared and/or private caches (such as cache 108), buses or interconnections (such as a bus or interconnection 112), graphics and/or memory controllers (such as those discussed below), or other components.
In one embodiment, the router 110 may be used to communicate between various components of the processor 102-1 and/or system 100. Moreover, the processor 102-1 may include more than one router 110. Furthermore, the multiple routers 110 may be in communication to enable data routing between various components inside or outside of the processor 102-1.
The cache 108 may store data (e.g., including instructions) that are utilized by one or more components of the processor 102-1, such as the cores 106. For example, the cache 108 may locally cache data stored in a memory 114 for faster access by the components of the processor 102 (e.g., faster access by cores 106).
The system 100 may also include a power source 120 (e.g., a direct current (DC) power source or an alternating current (AC) power source) to provide power to one or more components of the system 100. In some embodiments, the power source 120 may include one or more battery packs. The power source 120 may be coupled to components of system 100 through a voltage regulator (VR) 130. Moreover, even though a single power source 120 and a single voltage regulator 130 are shown, additional power sources and/or voltage regulators may be utilized in some embodiments.
Additionally, while the power source 120 and the VR 130 are described as separate components, the power source 120 and/or the VR 130 may be incorporated into other components of the system 100 in some embodiments.
The processor 102 may also include a power management logic 140 to control the supply of power to one or more components of the processor 102 (e.g., the cores 106). The logic 140 may receive information regarding the operational status and/or temperature of the cores 106, e.g., from one or more power monitor(s) 145 and/or sensor(s) 150, and may cause a modification to an operational characteristic of one or more of the cores 106 in response, such as discussed with reference to the thermal management techniques below.
Assume that we have a system of n processors, and that there is a single hot core, whose power is to be reduced by a factor of k (i.e., to be multiplied by k, which may lie between 0 and 1) in order to prevent violating thermal constraints. We measure the total slowdown of the system as a weighted sum of the slowdowns of each one of the processors. For example, if one core out of 4 is slowed down by 20%, the effective frequency factor may be determined to be (3+0.8)/4, or 95%.
In an embodiment, the slowdown for different thermal management methods (where S_dvs refers to the slowdown for DVS and S_ft refers to the slowdown for frequency throttling) may be determined as follows, with n cores and a power-reduction factor k for the hot core (these expressions follow from the weighted-sum definition above together with the cubic and linear power-frequency relationships, respectively):

S_dvs = (n − 1 + k^(1/3))/n

S_ft = (n − 1 + k)/n
More particularly, the relationship between pure throttling versus DVS may be described by the formula S_ft/S_dvs − 1. As can be seen from this expression, the gap between throttling and DVS shrinks as the number of cores n grows, since the slowdown of the single throttled core is increasingly diluted by the unaffected cores.
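By way of illustration only, the following minimal Python sketch computes these slowdowns and the relative penalty (assuming the slowdown formulas given above, with slowdown expressed as an effective frequency factor; the function names are illustrative and not part of the original disclosure):

    def slowdown_dvs(n, k):
        # DVS: power scales roughly cubically with frequency, so cutting the
        # hot core's power by a factor k needs a frequency factor of k**(1/3).
        return (n - 1 + k ** (1.0 / 3.0)) / n

    def slowdown_throttle(n, k):
        # Pure throttling: power scales roughly linearly with frequency, so
        # the hot core's frequency factor equals the power factor k itself.
        return (n - 1 + k) / n

    # Relative penalty of throttling versus DVS: S_ft/S_dvs - 1.
    for n in (2, 4, 8, 16):
        k = 0.5  # halve the hot core's power
        penalty = slowdown_throttle(n, k) / slowdown_dvs(n, k) - 1
        print(n, round(penalty, 4))  # the magnitude of the penalty shrinks as n grows

For n = 4 and a 20% frequency reduction (k = 0.8 under pure throttling), slowdown_throttle(4, 0.8) reproduces the (3+0.8)/4 = 95% effective frequency factor from the example above.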
In some embodiments, the determination of whether to use one or more of DVS and/or throttling may be based on the number of cores. For example, such techniques may be applied in processors with more than 2 cores; however, they may also be applied to processors with fewer than 3 cores.
Referring to an embodiment of a method 300 for managing power consumption locally, at an operation 302, a thermal event (e.g., an excessive temperature at one or more of the cores 106, for example, as detected by the sensor(s) 150) may be detected.
At an operation 304, it may be determined how many cores in a processor are active and cold (e.g., at a temperature value that is below an excessive threshold temperature value, for example, as detected by the sensor(s) 150). For example, logic 140 may consider statistics (e.g., provided by monitor(s) 145, sensor(s) 150, and/or cores 106 themselves) on operating states of the cores 106 to determine which cores 106 are active and cold. At an operation 306, the information of operations 302 and/or 304 may be taken into account (including various penalties such as those discussed above) to determine which technique (e.g., DVS and/or per core throttling) to apply. At an operation 308, the determined technique may be applied (e.g., by the logic 140) to one or more of the cores 106.
In an embodiment, the operations of method 300 may be repeated continuously (or on a periodic basis), e.g., after operation 308, operation 302 may be resumed without delay (or after elapsing of a time period, e.g., set by a timer logic). In some embodiments, method 300 may allow a processor to decrease the thermal management performance penalty in thermally limited applications for multiple cores sharing the same power plane.
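A minimal control-loop sketch of such a flow in Python is shown below; the core objects and their helper methods (temperature, threshold, is_active, throttle, and so on) are hypothetical stand-ins and not part of the original disclosure:

    import time

    def thermal_management_loop(cores, period_s=0.001):
        # Illustrative sketch of method 300 for cores sharing a power plane.
        while True:
            hot = [c for c in cores if c.temperature() > c.threshold()]  # op. 302
            if hot:
                active_cold = [c for c in cores
                               if c.is_active() and c not in hot]        # op. 304
                # Op. 306: with several unaffected cores, per core throttling
                # penalizes only the hot core; with few cores, DVS may cost less.
                if len(active_cold) >= 2:
                    for c in hot:
                        c.throttle()  # op. 308: local, per core action
                else:
                    for c in cores:
                        c.scale_voltage_and_frequency()  # op. 308: DVS on the plane
            time.sleep(period_s)  # then resume at op. 302, per the periodic variant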
Additional power management challenges may, however, be present with respect to processors having multiple power planes. For example, if multiple power planes sharing the same power source 120 are present in system 100, power management may need to satisfy both individual constraints per power plane and global constraints per package. By doing so, it is possible to share a common package power/energy budget and to allow, for example, unused processor core power to be used for more graphics (GFX) performance in a GFX-intensive workload, when processor resources are not fully utilized. However, the techniques discussed herein with reference to multiple power planes may also be applied to implementations that utilize a single power plane.
Some current power-based management schemes may miss the temporal aspect of power management, since they generally address only the current point in time. Accordingly, in some embodiments, an energy budget per power plane may be used that permits: (1) defining both individual component constraints (e.g., per processor core) and shared constraints (e.g., shared amongst multiple components or processor cores) in a way that takes temporal aspects into account; (2) expressing effective constraints corresponding to different time constants; and (3) handling on-line changes in the constraints. In an embodiment, power distribution may be managed among different power planes and under energy-based definition(s).
In some embodiments, an energy budget may be managed and/or power setting(s) (e.g., voltage and/or frequency changes) may be made according to a current budget. Energy-based power management may be performed by controlling the energy budget, defined iteratively as follows:
E_{n+1} = α·E_n + (TDP_n − P_n)·Δt_n   (1)
where TDP_n is the Thermal Design Power (TDP) limit on step n; P_n is the power spent on step n over time Δt_n; and α is the decay component. For example, using α = 0.999 corresponds to a time window on the order of seconds, while using α = 0.9 corresponds to a much smaller window size. The expression for E_n corresponds to the energy “remainder,” which is the amount of energy not consumed by the system. The value of the decay component is defined by the requirements on the time window size. Also, the TDP values may be provided by a software application or user in some embodiments.
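For illustration, a minimal Python sketch of one step of this budget update follows (the fixed time step and the constraint names are assumptions for the example, not from the original text):

    def update_budget(E, tdp_w, power_w, dt_s, alpha=0.999):
        # One step of recurrence (1): E_{n+1} = alpha*E_n + (TDP_n - P_n)*dt_n.
        # A positive budget means unconsumed headroom; alpha sets how quickly
        # past surpluses and deficits decay (the effective window size).
        return alpha * E + (tdp_w - power_w) * dt_s

    # One budget may be kept per constraint: per power plane for individual
    # constraints and/or per package for shared constraints.
    budgets = {"core_plane": 0.0, "gfx_plane": 0.0, "package": 0.0}
    budgets["package"] = update_budget(budgets["package"],
                                       tdp_w=35.0, power_w=28.5, dt_s=0.001)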
Such an energy budget corresponds to imposing an exponential mask on the difference between the TDP at each time moment and the power spent at that moment, namely (unrolling recurrence (1) with E_0 = 0):

E_{n+1} = Σ_{i=0..n} α^(n−i)·(TDP_i − P_i)·Δt_i
Accordingly, the system (e.g., logic 140) may set TDP constraints on each of the power planes separately, and/or set TDP constraints on the whole IC package. According to the imposed constraints, multiple budgets may be maintained in some embodiments, for example, per power plane for individual constraints and/or per package for shared constraints. Moreover, different budgets may be maintained per window size (expressed by α) and/or per TDP constraint. Note that this framework smoothly handles the case in which TDP is changed on the fly (e.g., by a software application or a user) in some embodiments. In an embodiment, a set of energy budgets E_n^k may be maintained, e.g., where k corresponds to a specific constraint and n to the time step. One goal of such power management mechanisms is to keep the energy budgets positive, while maximizing performance.
In an embodiment, a budget controller may be defined (e.g., implemented by the logic 140). Per power plane i, we define a controller function, denoted by f_i(E), which maps an energy budget onto the discrete set of power states. In one embodiment, a controller, which is required to be a non-decreasing function, maps a range [E_low^i, E_high^i] onto the discrete range [P_n^i, P_0^i], where P_0^i is the maximal turbo state for this power plane and P_n^i is the maximal efficiency state. Budget values below E_low^i are mapped onto P_n^i, while budget values above E_high^i are mapped onto P_0^i. The controller may also be required to meet certain anchor points; for example, the zero budget corresponds to the guaranteed power state called P_1:
f_i(E) = P_n^i, for E ≤ E_low^i
f_i(0) = P_1^i
f_i(E) = P_0^i, for E ≥ E_high^i
In accordance with one embodiment, an example of such a controller function is a non-decreasing, piecewise mapping that passes through the anchor points above.
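A minimal Python sketch of such a controller follows; the piecewise-linear interpolation between anchor points, and the assumption that E_low < 0 < E_high, are illustrative choices rather than requirements from the original text:

    def controller(E, E_low, E_high, p_states, p1_index):
        # p_states is ordered from maximal efficiency P_n (index 0) to maximal
        # turbo P_0 (last index); p1_index locates the guaranteed state P_1.
        # The mapping is non-decreasing in E and pinned so that f(0) = P_1.
        if E <= E_low:
            return p_states[0]           # f(E) = P_n for E <= E_low
        if E >= E_high:
            return p_states[-1]          # f(E) = P_0 for E >= E_high
        if E < 0:
            frac = (E - E_low) / (0.0 - E_low)   # between P_n and P_1
            return p_states[int(frac * p1_index)]
        frac = E / E_high                         # between P_1 and P_0
        return p_states[p1_index + int(frac * (len(p_states) - 1 - p1_index))]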
Regarding the constraints, assume the existence of m controllers f_i(E) corresponding to different power planes. For constraint k, user-defined (or application-defined) preferences that describe how the budget E^k is distributed among power planes may be determined. For example, such information may be provided as an input in the form of a vector W^k of length m, such that entry i corresponds to the portion of the budget that goes to power plane i. For individual constraints, a single power plane may obtain the entire budget, so the corresponding weight vector is a unit vector. In the general case, power plane i may obtain the portion of the budget:
E^{k,i} = W_i^k · E^k   (2)
In some cases, for a power plane i, its portion W_i^k·E^k may be high enough to provide maximal turbo (namely, W_i^k·E^k ≥ E_high^i). In this case, the “unused” budget for this power plane, W_i^k·E^k − E_high^i, may be distributed among the rest of the power planes proportionally to their weights.
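The following short Python sketch illustrates this weighted split with a single redistribution pass (a fuller implementation might iterate until no plane overflows; the helper name is hypothetical):

    def distribute_budget(E_k, weights, e_high):
        # Split budget E_k across m power planes per weight vector W^k, then
        # hand any excess above a plane's E_high^i to the remaining planes
        # in proportion to their weights (one pass only, for illustration).
        shares = [w * E_k for w in weights]
        excess = sum(max(s - h, 0.0) for s, h in zip(shares, e_high))
        shares = [min(s, h) for s, h in zip(shares, e_high)]
        open_w = sum(w for w, s, h in zip(weights, shares, e_high) if s < h)
        if excess > 0.0 and open_w > 0.0:
            shares = [s + (w / open_w) * excess if s < h else s
                      for w, s, h in zip(weights, shares, e_high)]
        return shares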
Let us denote by E^{k,i}(t_n) the portion of the budget E^k that power plane i receives at step t_n; then the resulting P-state recommendation on this step for power plane i may be written as:

P^{k,i}(t_n) = f_i(E^{k,i}(t_n))
Collecting the recommendations over all the constraints (e.g., by taking the most restrictive recommendation per power plane) may provide the resulting setup per power plane. Note that the result may be an upper bound and may be further modified by other algorithms in some embodiments. Moreover, in accordance with an embodiment of the invention, an example of a flow for energy-based power management is described below with reference to a method 500.
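A brief sketch of this collection step, assuming P-states are encoded as performance levels (0 = maximal efficiency, highest = maximal turbo) so that the most restrictive recommendation per plane is the minimum:

    def collect_recommendations(recs_per_constraint):
        # recs_per_constraint: one list of per-plane performance levels per
        # constraint k; return the most restrictive level for each plane.
        # The result is an upper bound that other algorithms may lower further.
        return [min(levels) for levels in zip(*recs_per_constraint)]

    # e.g. two constraints over three power planes:
    collect_recommendations([[3, 5, 2], [4, 1, 2]])  # -> [3, 1, 2]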
More particularly, a method 500 for energy-based power management, according to an embodiment, may proceed as follows.
Referring to the method 500, at an operation 502, one or more energy budgets (e.g., per power plane for individual constraints and/or per package for shared constraints) may be updated, e.g., in accordance with equation (1). The budget(s) may then be distributed among the power planes (e.g., per equation (2) and the weights W^k), and, at an operation 508, the corresponding power state recommendations (e.g., per the controller functions f_i) may be applied.
In an embodiment, the operations of method 500 may be repeated continuously (or on a periodic basis), e.g., after operation 508, operation 502 may be resumed without delay (or after elapsing of a time period, e.g., set by a timer logic). In an embodiment, the techniques discussed with reference to method 300 (e.g., local, per core throttling) may be combined with the energy-based techniques of method 500.
A chipset 606 may also communicate with the interconnection network 604. The chipset 606 may include a graphics and memory control hub (GMCH) 608. The GMCH 608 may include a memory controller 610 that communicates with a memory 612. The memory 612 may store data, including sequences of instructions that are executed by the processor 602, or any other device included in the computing system 600. In one embodiment of the invention, the memory 612 may include one or more volatile storage (or memory) devices such as random access memory (RAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), static RAM (SRAM), or other types of storage devices. Nonvolatile memory may also be utilized such as a hard disk. Additional devices may communicate via the interconnection network 604, such as multiple CPUs and/or multiple system memories.
The GMCH 608 may also include a graphics interface 614 that communicates with a graphics accelerator 616. In one embodiment of the invention, the graphics interface 614 may communicate with the graphics accelerator 616 via an accelerated graphics port (AGP). In an embodiment of the invention, a display (such as a flat panel display, a cathode ray tube (CRT), a projection screen, etc.) may communicate with the graphics interface 614 through, for example, a signal converter that translates a digital representation of an image stored in a storage device such as video memory or system memory into display signals that are interpreted and displayed by the display. The display signals produced by the display device may pass through various control devices before being interpreted by and subsequently displayed on the display.
A hub interface 618 may allow the GMCH 608 and an input/output control hub (ICH) 620 to communicate. The ICH 620 may provide an interface to I/O devices that communicate with the computing system 600. The ICH 620 may communicate with a bus 622 through a peripheral bridge (or controller) 624, such as a peripheral component interconnect (PCI) bridge, a universal serial bus (USB) controller, or other types of peripheral bridges or controllers. The bridge 624 may provide a data path between the processor 602 and peripheral devices. Other types of topologies may be utilized. Also, multiple buses may communicate with the ICH 620, e.g., through multiple bridges or controllers. Moreover, other peripherals in communication with the ICH 620 may include, in various embodiments of the invention, integrated drive electronics (IDE) or small computer system interface (SCSI) hard drive(s), USB port(s), a keyboard, a mouse, parallel port(s), serial port(s), floppy disk drive(s), digital output support (e.g., digital video interface (DVI)), or other devices.
The bus 622 may communicate with an audio device 626, one or more disk drive(s) 628, and one or more network interface device(s) 630 (which is in communication with the computer network 603). Other devices may communicate via the bus 622. Also, various components (such as the network interface device 630) may communicate with the GMCH 608 in some embodiments of the invention. In addition, the processor 602 and the GMCH 608 may be combined to form a single chip. Furthermore, the graphics accelerator 616 may be included within the GMCH 608 in other embodiments of the invention.
Furthermore, the computing system 600 may include volatile and/or nonvolatile memory (or storage). For example, nonvolatile memory may include one or more of the following: read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically EPROM (EEPROM), a disk drive (e.g., 628), a floppy disk, a compact disk ROM (CD-ROM), a digital versatile disk (DVD), flash memory, a magneto-optical disk, or other types of nonvolatile machine-readable media that are capable of storing electronic data (e.g., including instructions). In an embodiment, components of the system 600 may be arranged in a point-to-point (PtP) configuration. For example, processors, memory, and/or input/output devices may be interconnected by a number of point-to-point interfaces.
As illustrated below, a computing system arranged in a point-to-point (PtP) configuration may include several processors, such as processors 702 and 704, interconnected with other components by a number of point-to-point interfaces.
In an embodiment, the processors 702 and 704 may each be one of the processors 602 discussed with reference to the system 600 above.
In at least one embodiment, one or more of the operations discussed herein (e.g., the operations of methods 300 and/or 500) may be performed by one or more components of the systems described above.
Chipset 720 may communicate with the bus 740 using a PtP interface circuit 741. The bus 740 may have one or more devices that communicate with it, such as a bus bridge 742 and I/O devices 743. Via a bus 744, the bus bridge 742 may communicate with other devices such as a keyboard/mouse 745, communication devices 746 (such as modems, network interface devices, or other communication devices that may communicate with the computer network 603), audio I/O device, and/or a data storage device 748. The data storage device 748 may store code 749 that may be executed by the processors 702 and/or 704.
In various embodiments of the invention, the operations discussed herein may be implemented as hardware (e.g., circuitry), software, firmware, microcode, or combinations thereof, which may be provided as a computer program product, e.g., including a machine-readable or computer-readable medium having stored thereon instructions (or software procedures) used to program a computer to perform a process discussed herein.
Reference in the specification to “one embodiment” or “an embodiment” means that a particular feature, structure, and/or characteristic described in connection with the embodiment may be included in at least an implementation. The appearances of the phrase “in one embodiment” in various places in the specification may or may not be all referring to the same embodiment. Also, in the description and claims, the terms “coupled” and “connected,” along with their derivatives, may be used. In some embodiments of the invention, “connected” may be used to indicate that two or more elements are in direct physical or electrical contact with each other. “Coupled” may mean that two or more elements are in direct physical or electrical contact. However, “coupled” may also mean that two or more elements may not be in direct contact with each other, but may still cooperate or interact with each other.
Thus, although embodiments of the invention have been described in language specific to structural features and/or methodological acts, it is to be understood that claimed subject matter may not be limited to the specific features or acts described. Rather, the specific features and acts are disclosed as sample forms of implementing the claimed subject matter.
Claims
1. An apparatus comprising:
- a processor having a plurality of processor cores;
- a single power plane to supply power to more than one of the plurality of processor cores; and
- a power management logic to cause a modification to an operational characteristic of at least one processor core of the plurality of processor cores in response to: a detection of excessive temperature at the at least one processor core; and a determination of which ones of the other processor cores of the plurality of processor cores are active and at a temperature value below a threshold temperature value.
Type: Application
Filed: Mar 19, 2013
Publication Date: Aug 22, 2013
Inventors: Lev Finkelstein (Netanya), Efraim Rotem (Haifa), Aviad Cohen (TA), Ronny Ronen (Haifa), Doron Rajwan (Rishon Le-Zion)
Application Number: 13/847,392
International Classification: G06F 1/26 (20060101);