GPU power and performance management
Systems, methods, and computer readable media to improve the operation of graphics systems are described. In general, techniques are disclosed for determining the computational need of GPU-centric elements executing from within pages of another application, selecting one or more GPUs appropriate to that need, and transitioning the system to the selected GPUs.
This disclosure relates generally to the use of graphics processing units (GPUs). More particularly, but not by way of limitation, this disclosure relates to a technique for determining the computational need of GPU-centric elements executing from within pages of another application, selecting one or more GPUs appropriate to that need, and transitioning the system to the selected GPUs.
In the mobile and embedded device markets, consumers typically want designs that provide both more processing power and lower power consumption. As a result, application processors have become increasingly heterogeneous, integrating multiple components into a single System-on-Chip (SoC) design. One SoC may include a central processing unit (CPU), a GPU, an image signal processor (ISP), video encoders and decoders, and a radio processing unit. For still further processing power, one or more additional GPUs may be provided. These additional GPUs, because of their power consumption, are often not incorporated directly in the SoC and are instead included as separate functional units. Again because of their power consumption, it is important that the use of such devices be properly controlled to preserve a device's battery life.
SUMMARY

In one embodiment the disclosed concepts provide a method to select a graphics processing unit (GPU) from a collection of GPUs based on the operational characteristics of an object embedded in a page of an application (e.g., an object in a web browser window). The method may include detecting an event in a system based on an action associated with a GPU-centric object, the system having two or more GPUs, the GPU-centric object including GPU instructions and non-GPU instructions. By way of example, the GPU-centric object may be a WebGL object, a WebGPU object, or a Stage3D object. The method may then continue by determining, in response to the event, a computational need of the system based on one or more rules. In one or more embodiments the rules may be heuristic in nature and can evaluate a viewable characteristic of the GPU-centric object. Viewable characteristics can include, but are not limited to, whether: the GPU-centric object changes its operational state from active to idle (or idle to active); or the GPU-centric object changes its operational state from not-viewable to viewable (or from viewable to not-viewable). The method continues by identifying that a first GPU of the two or more GPUs is the currently active GPU for the system; identifying, based on the computational need, a second GPU, wherein the second GPU is different from the first GPU; transitioning the system from the first GPU to the second GPU; and executing the GPU-centric object's GPU instructions on the second GPU. In one embodiment, the first GPU has a lower computational capability than the second GPU. In another embodiment, it is the second GPU that has the lower computational capability. In still other embodiments, the operation of transitioning from the first to the second GPU may include evaluating one or more timing restrictions associated with the first and/or second GPU. In one or more other embodiments, the various methods may be embodied in computer-executable program code and stored in a non-transitory storage device. In yet another embodiment, the method may be implemented in an electronic device or system having two or more GPUs.
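For concreteness, the following JavaScript sketch illustrates the selection flow summarized above: detect an event, determine the computational need, identify the currently active GPU, select a different GPU if appropriate, and transition. The two-entry GPU table, the capability scores, and the helper names (computationalNeed, onObjectEvent) are illustrative assumptions and not part of the disclosure; the heuristic shown (a viewable, active object implies a high computational need) is only one possible rule.

```javascript
// Minimal sketch of the GPU selection flow (assumed names and data shapes).
const gpus = [
  { id: "integrated", capability: 1, active: true },   // lower performance / lower power
  { id: "discrete",   capability: 10, active: false }, // higher performance / higher power
];

// Assumed heuristic rule: a viewable, active object needs high capability.
function computationalNeed(object) {
  return object.viewable && object.state === "active" ? "high" : "low";
}

function onObjectEvent(object) {
  const need = computationalNeed(object);          // determine computational need
  const current = gpus.find((g) => g.active);      // first (currently active) GPU
  const target = need === "high"
    ? gpus.reduce((a, b) => (a.capability > b.capability ? a : b))
    : gpus.reduce((a, b) => (a.capability < b.capability ? a : b));
  if (target !== current) {                        // a different, second GPU was identified
    current.active = false;                        // transition the system
    target.active = true;
  }
  return target;                                   // GPU instructions would then execute here
}

// Example: a WebGL-like object becoming viewable and active selects the discrete GPU.
console.log(onObjectEvent({ viewable: true, state: "active" }).id); // "discrete"
```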
This disclosure pertains to systems, methods, and computer readable media to improve the operation of graphics systems. In general, techniques are disclosed for determining the computational need of GPU-centric elements executing from within pages of another application, selecting one or more GPUs appropriate to that need, and transitioning the system to the selected GPUs.
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the disclosed concepts. As part of this description, some of this disclosure's drawings represent structures and devices in block diagram form in order to avoid obscuring the novel aspects of the disclosed concepts. In the interest of clarity, not all features of an actual implementation may be described. Further, as part of this description, some of this disclosure's drawings are in the form of flowcharts. The boxes in any particular flowchart may be presented in a particular order. It should be understood however that the particular sequence of any given flowchart is used only to exemplify one embodiment. In one or more embodiments, one or more of the disclosed steps may be omitted, repeated, and/or performed in a different order than that described herein. In addition, other embodiments may include additional steps not depicted as part of the flowchart. Moreover, the language used in this disclosure has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter. Reference in this disclosure to “one embodiment” or to “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosed subject matter, and multiple references to “one embodiment” or “an embodiment” should not be understood as necessarily all referring to the same embodiment.
Embodiments of the GPU selection and transition techniques set forth herein can assist with improving the functionality of computing devices or systems that utilize multiple GPUs. Computer functionality can be improved by enabling such computing devices or systems to select the most appropriate GPU so as to optimize its processing capability while minimizing its power consumption. By way of example, when computational capability is not needed (such as when the device or system's display is not changing), a low(er) computationally capable GPU may be selected (as opposed to a higher performance and higher power GPU), thereby reducing overall system power consumption. Conversely, when computational capability is needed (such as when the device or system's display is changing), a high(er) computationally capable GPU may be selected (as opposed to a lower performance and lower power GPU), thereby increasing the computing device or system's performance. These actions can have the effect of improving the device or system's performance (when needed) and, for mobile devices, extending the device or system's time before its battery needs to be charged.
It will be appreciated that in the development of any actual implementation (as in any software and/or hardware development project), numerous decisions must be made to achieve a developer's specific goals (e.g., compliance with system- and business-related constraints), and that these goals may vary from one implementation to another. It will also be appreciated that such development efforts might be complex and time-consuming, but would nevertheless be a routine undertaking for those of ordinary skill in the design and implementation of multi-GPU computing devices and systems having the benefit of this disclosure.
Referring to
Referring to
Referring to
Once the current object instance has been evaluated, a check may be made to determine whether additional object instances remain to be evaluated (block 310). If at least one object instance remains to be evaluated (the “NO” prong of block 310), a next object instance may be selected (block 315) and evaluated as described above. If all object instances have been evaluated (the “YES” prong of block 310), GPU selection operation 100 may continue to block 115. In another embodiment, only a specified subset of object instances may be evaluated during performance of block 110. For example, only object instances associated with a display window that is at least partially visible may be evaluated.
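The evaluation loop of blocks 305 through 315 might be sketched in JavaScript as follows. The visibility filter (windowPartiallyVisible) and the aggregation rule (any viewable, active instance raises the system's need to "high") are illustrative assumptions, not the patent's actual logic.

```javascript
// Sketch of the per-instance evaluation loop (blocks 305, 310, 315).
function determineComputationalNeed(objectInstances) {
  // Optionally evaluate only instances whose display window is at least partially visible.
  const candidates = objectInstances.filter((o) => o.windowPartiallyVisible);
  let need = "low";
  for (const instance of candidates) {                      // block 315: select next instance
    if (instance.viewable && instance.state === "active") { // evaluate current instance
      need = "high";                                        // this instance raises the system's need
    }
  }                                                         // block 310: all instances evaluated
  return need;                                              // result feeds block 115 of operation 100
}
```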
Once a system's computational need has been determined, GPU selection operation 100 may determine what GPU(s) are currently active. This information may be obtained, for example, through GPU-level system APIs. Referring to
Referring to
Tables 2 through 4 provide pseudo-code that may be used to guide the implementation of the disclosed subject matter in a computational system that supports the WebGL API and has two (2) GPUs (one GPU may be a lower-performance/lower-power device while the other is a higher-performance/higher-power device). The Web Graphics Library (WebGL) is a JavaScript API for rendering 3D graphics within any compatible application. WebGL elements can be mixed with other embedded objects and/or HyperText Markup Language (HTML) elements and composited with other parts of a page or page background. WebGL programs consist of control code written in JavaScript and shader code written in the OpenGL Shading Language (GLSL), which is executed by a computer system's GPU. GLSL is a high-level shading language created by the OpenGL Architecture Review Board (ARB) to give developers more direct control of the graphics pipeline without having to use ARB assembly language or hardware-specific languages. Referring to Table 2, the “createWebGL” routine may be called every time a WebGL object is instantiated (created). As shown, the system defaults to using the lower-power GPU, and thereafter sends a message to the host controller (“host”) that a new WebGL object has been created.
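Because Table 2 itself is not reproduced here, the following JavaScript sketch shows one way such a createWebGL routine could behave. The postMessage-style host interface, the "low-power" identifier, and the object fields are assumptions made for illustration and do not come from the disclosure.

```javascript
// Sketch of a "createWebGL"-style routine (cf. Table 2), under assumed names.
function createWebGL(canvas, host) {
  const webglObject = {
    canvas,
    gpu: "low-power",   // default to the lower-performance / lower-power GPU
    state: "active",
    viewable: true,
  };
  // Notify the host controller that a new WebGL object has been instantiated.
  host.postMessage({ type: "webgl-object-created", object: webglObject });
  return webglObject;
}
```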
Table 3 shows how the host controller may respond to a message generated in accordance with the pseudo-code of Table 2 (i.e., to the creation of a WebGL object). More specifically, the host controller checks the new object's GPU requirements.
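A hedged sketch of such a host-controller handler is shown below. It assumes the message shape produced by the createWebGL sketch above and relies on the hypothetical checkGPURequirements helper sketched after the Table 4 discussion; neither is the patent's actual Table 3 code.

```javascript
// Sketch of the host-controller response to the object-creation message (cf. Table 3).
function onHostMessage(message, system) {
  if (message.type !== "webgl-object-created") return;
  system.webglObjects.push(message.object);
  // Determine which GPU the full set of WebGL objects now requires.
  const requiredGPU = checkGPURequirements(system.webglObjects, system);
  if (requiredGPU !== system.activeGPU) {
    system.activeGPU = requiredGPU;   // transition from the current GPU to the selected one
  }
}
```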
Table 4 provides pseudo-code showing how one implementation for a two-GPU system determines which GPU a WebGL object requires. Additional details may be obtained from Table 4's internal comments.
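Again, since Table 4 is not reproduced, the following is only a sketch of such a requirements check for a two-GPU system. The specific heuristic (any viewable, active WebGL object calls for the higher-performance GPU; otherwise the lower-power GPU suffices) and the system-state shape are assumptions consistent with the viewable characteristics discussed earlier.

```javascript
// Sketch of a two-GPU requirements check (cf. Table 4), under assumed names.
function checkGPURequirements(webglObjects, system) {
  const needsHighPerformance = webglObjects.some(
    (o) => o.viewable && o.state === "active"
  );
  return needsHighPerformance ? system.highPowerGPU : system.lowPowerGPU;
}

// Example system state such a check might operate on.
const exampleSystem = {
  lowPowerGPU: "low-power",     // e.g., a GPU integrated on the SoC
  highPowerGPU: "high-power",   // e.g., a discrete GPU
  activeGPU: "low-power",
  webglObjects: [],
};
```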
While the above detailed example is directed to a WebGL API-compliant, two-GPU system, in general any computational environment that supports the embedding of objects that execute GPU-directed code and has two or more GPUs may benefit from the techniques disclosed herein. The GPUs may be similar or dissimilar. For example, all of a system's GPUs could be functionally equivalent (and consume equal amounts of power). In other embodiments the GPUs could be dissimilar. One or more could be lower-performance/lower-power while others (one or more) could be higher-performance/higher-power devices. In addition, one (or more) GPUs could be combined with one or more central processing units (CPUs) on a common substrate while other GPUs could be separate (e.g., discrete) from the CPU and, possibly, from other GPUs. Some of the GPUs may have multiple cores, others may have single cores. Some of the GPUs may be programmable, others may not.
Referring to
Processor module or circuit 605 may include one or more processing units, each of which may include at least one central processing unit (CPU) and zero or more GPUs; each of which in turn may include one or more processing cores. Each processing unit may be based on reduced instruction-set computer (RISC) or complex instruction-set computer (CISC) architectures or any other suitable architecture. Processor module 605 may be a system-on-chip, an encapsulated collection of integrated circuits (ICs), or a collection of ICs affixed to one or more substrates. Memory 610 may include one or more different types of media (typically solid-state, but not necessarily so) used by processor 605, graphics hardware 620, device sensors 625, image capture module 630, communication interface 635, user interface adapter 640 and display adapter 645. For example, memory 610 may include memory cache, read-only memory (ROM), and/or random access memory (RAM). Storage 615 may include one or more non-transitory storage media including, for example, magnetic disks (fixed, floppy, and removable) and tape, optical media such as CD-ROMs and digital video disks (DVDs), and semiconductor memory devices such as Electrically Programmable Read-Only Memory (EPROM) and Electrically Erasable Programmable Read-Only Memory (EEPROM). Memory 610 and storage 615 may be used to retain media (e.g., audio, image and video files), preference information, device profile information, computer program instructions or code organized into one or more modules and written in any desired computer programming languages (e.g., WebGL and/or the GLSL language), and any other suitable data. Graphics hardware module or circuit 620 may be special-purpose computational hardware for processing graphics and/or assisting processor 605 in performing computational tasks. In one embodiment, graphics hardware 620 may include one or more GPUs, and/or one or more programmable GPUs, and each such unit may include one or more processing cores. When executed by processor(s) 605 and/or graphics hardware 620, computer program code may implement one or more of the methods described herein. Device sensors 625 may include, but need not be limited to, an optical activity sensor, an optical sensor array, an accelerometer, a sound sensor, a barometric sensor, a proximity sensor, an ambient light sensor, a vibration sensor, a gyroscopic sensor, a compass, a barometer, a magnetometer, a thermistor sensor, an electrostatic sensor, a temperature sensor, a heat sensor, a thermometer, a light sensor, a differential light sensor, an opacity sensor, a scattering light sensor, a diffractional sensor, a refraction sensor, a reflection sensor, a polarization sensor, a phase sensor, a fluorescence sensor, a phosphorescence sensor, a pixel array, a micro pixel array, a rotation sensor, a velocity sensor, an inclinometer, a pyranometer and a momentum sensor. Image capture module or circuit 630 may include one or more image sensors, one or more lens assemblies, and any other known imaging component that enables image capture operations (still or video). In one embodiment, the one or more image sensors may include a charge-coupled device (CCD) or complementary metal oxide semiconductor (CMOS) sensor. Image capture module 630 may also include an image signal processing (ISP) pipeline that is implemented as specialized hardware, software, or a combination of both.
The ISP pipeline may perform one or more operations on raw images (also known as raw image files) received from image sensors and can also provide processed image data to processor 605, memory 610, storage 615, graphics hardware 620, communication interface 635 and display adapter 645. Communication interface 635 may be used to connect computer system 600 to one or more networks. Illustrative networks include, but are not limited to, a local network such as a Universal Serial Bus (USB) network, an organization's local area network, and a wide area network such as the Internet. Communication interface 635 may use any suitable technology (e.g., wired or wireless) and protocol (e.g., Transmission Control Protocol (TCP), Internet Protocol (IP), User Datagram Protocol (UDP), Internet Control Message Protocol (ICMP), Hypertext Transfer Protocol (HTTP), Post Office Protocol (POP), File Transfer Protocol (FTP), and Internet Message Access Protocol (IMAP)). User interface adapter 640 may be used to connect microphone(s) 650, speaker(s) 655, pointer device(s) 660, keyboard 665 (or other input device such as a touch-sensitive element), and a separate image capture element 670—which may or may not avail itself of the functions provided by graphics hardware 620 or image capture module 630. Display adapter 645 may be used to connect one or more display units 675 which may also provide touch input capability. System bus or backplane 650 may be comprised of one or more continuous (as shown) or discontinuous communication links and be formed as a bus network, a communication network, or a fabric comprised of one or more switching devices. System bus or backplane 650 may be, at least partially, embodied in a network router or hub, a wireless access point (AP) or repeater, a set-top box, or a combination thereof.
Referring to
Processor 705, display 710, user interface 715, graphics hardware 720, device sensors 725, communications circuitry 745, image capture module or circuit 750, memory 760 and storage 765 may be of the same or similar type and serve the same or similar function as the similarly named component described above with respect to
It is to be understood that the above description is intended to be illustrative, and not restrictive. The material has been presented to enable any person skilled in the art to make and use the disclosed subject matter as claimed and is provided in the context of particular embodiments, variations of which will be readily apparent to those skilled in the art (e.g., some of the disclosed embodiments may be used in combination with each other). Accordingly, the specific arrangement of steps or actions shown in any of
Claims
1. A computer system, comprising:
- a display unit;
- a memory operatively coupled to the display unit;
- two or more graphics processing units (GPUs) operatively coupled to the memory; and
- one or more processors operatively coupled to the memory and the GPUs, the processors configured to execute program instructions stored in the memory for causing the computer system to—
  - detect an event based on an action associated with a GPU-centric object embedded in a page of an application, the GPU-centric object including GPU instructions and non-GPU instructions,
  - determine, in response to the event, a computational need of the computer system based on heuristic rules that evaluate a status and a size of multiple GPU-centric objects including the GPU-centric object embedded in the page of the application,
  - identify a first GPU of the two or more GPUs is a currently active GPU for the system,
  - identify, based on the computational need, a second GPU, wherein the second GPU is different from the first GPU,
  - transition the system from the first GPU to the second GPU, and
  - execute the GPU-centric object's GPU instructions on the second GPU.
2. The computer system of claim 1, wherein the GPU-centric object comprises one of a WebGL object, a WebGPU object, and a Stage3D object.
3. The computer system of claim 1, wherein the heuristic rules evaluate a change in a viewable characteristic of the GPU-centric object.
4. The computer system of claim 3, wherein the viewable characteristic comprises one or more of the following:
- the GPU-centric object changes its operational state from active to idle;
- the GPU-centric object changes its operational state from idle to active;
- the GPU-centric object changes its operational state from not-viewable to viewable; and
- the GPU-centric object changes its operational state from viewable to not-viewable.
5. The computer system of claim 1, wherein the first GPU comprises a GPU having a lower-computational capacity than the second GPU.
6. The computer system of claim 1, wherein the instructions to cause the computer system to transition from the first GPU to the second GPU comprise instructions to cause the system to:
- take the second GPU from an idle state to an active state; and
- execute the GPU-centric object's GPU instructions with the second GPU once the second GPU is in the active state.
7. The computer system of claim 6, further comprising instructions to cause the computer system to take the first GPU from an active state to an idle state prior to executing the GPU-centric object's GPU instructions with the second GPU.
8. The computer system of claim 6, wherein the instructions cause the computer system to take the second GPU from the idle state to the active state comprise instructions to cause the computer system to:
- evaluate at least one time restriction related to the second GPU; and
- transition the second GPU from the idle state to the active state based on the evaluation.
9. A non-transitory program storage device having instructions stored thereon to cause, when executed by one or more processors, the one or more processors to:
- detect an event in a system based on an action associated with a GPU-centric object embedded in a page of an application, the system having two or more graphics processing units (GPUs), the GPU-centric object including GPU instructions and non-GPU instructions;
- determine, in response to the event, a computational need of the system based on heuristic rules that evaluate a status and a size of multiple GPU-centric objects including the GPU-centric object embedded in the page of the application;
- identify a first GPU of the two or more GPUs is a currently active GPU for the system;
- identify, based on the computational need, a second GPU, wherein the second GPU is different from the first GPU;
- transition the system from the first GPU to the second GPU; and
- execute the GPU-centric object's GPU instructions on the second GPU.
10. The non-transitory program storage device of claim 9, wherein the GPU-centric object comprises one of a WebGL object, a WebGPU object, and a Stage3D object.
11. The non-transitory program storage device of claim 9, wherein the heuristic rules evaluate a change in a viewable characteristic of the GPU-centric object.
12. The non-transitory program storage device of claim 11, wherein the viewable characteristic comprises one or more of the following:
- the GPU-centric object changes its operational state from active to idle;
- the GPU-centric object changes its operational state from idle to active;
- the GPU-centric object changes its operational state from not-viewable to viewable; and
- the GPU-centric object changes its operational state from viewable to not-viewable.
13. The non-transitory program storage device of claim 9, wherein the first GPU comprises a GPU having a lower-computational capacity than the second GPU.
14. The non-transitory program storage device of claim 9, wherein the instructions to cause the system to transition from the first GPU to the second GPU comprise instructions to cause the system to:
- take the second GPU from an idle state to an active state; and
- execute the GPU-centric object's GPU instructions with the second GPU once the second GPU is in the active state.
15. The non-transitory program storage device of claim 14, further comprising instructions to cause the system to take the first GPU from an active state to an idle state prior to executing the GPU-centric object's GPU instructions with the second GPU.
16. The non-transitory program storage device of claim 14, wherein the instructions to cause the system to take the second GPU from the idle state to the active state comprise instructions to cause the system to:
- evaluate at least one time restriction related to the second GPU; and
- transition the second GPU from the idle state to the active state based on the evaluation.
17. A method to select a graphics processing unit (GPU), comprising:
- detecting an event in a system based on an action associated with a GPU-centric object embedded in a page of an application, the system having two or more GPUs, the GPU-centric object including GPU instructions and non-GPU instructions;
- determining, in response to the event, a computational need of the system based on heuristic rules that evaluate a status and a size of multiple GPU-centric objects including the GPU-centric object embedded in the page of the application;
- identifying a first GPU of the two or more GPUs is a currently active GPU for the system;
- identifying, based on the computational need, a second GPU, wherein the second GPU is different from the first GPU;
- transitioning the system from the first GPU to the second GPU; and
- executing the GPU-centric object's GPU instructions on the second GPU.
18. The method of claim 17, wherein the GPU-centric object comprises one of a WebGL object, a WebGPU object, and a Stage3D object.
19. The method of claim 17, wherein the heuristic rules evaluate a change in a viewable characteristic of the GPU-centric object.
20. The method of claim 19, wherein the viewable characteristic comprises one or more of the following:
- the GPU-centric object changes its operational state from active to idle;
- the GPU-centric object changes its operational state from idle to active;
- the GPU-centric object changes its operational state from not-viewable to viewable; and
- the GPU-centric object changes its operational state from viewable to not-viewable.
21. The method of claim 17, wherein the first GPU comprises a GPU having a lower-computational capacity than the second GPU.
22. The method of claim 17, wherein transitioning from the first GPU to the second GPU comprises:
- taking the second GPU from an idle state to an active state; and
- executing the GPU-centric object's GPU instructions with the second GPU once the second GPU is in the active state.
23. The method of claim 22, further comprising taking the first GPU from an active state to an idle state prior to executing the GPU-centric object's GPU instructions with the second GPU.
24. The method of claim 22, wherein taking the second GPU from the idle state to the active state comprises:
- evaluating at least one time restriction related to the second GPU; and
- transitioning the second GPU from the idle state to the active state based on the evaluation.
References Cited
20070103474 | May 10, 2007 | Huang |
20100164962 | July 1, 2010 | Sakariya |
20100328323 | December 30, 2010 | Redman |
20110057936 | March 10, 2011 | Gotwalt |
20110060924 | March 10, 2011 | Khodorkovsky |
20110134132 | June 9, 2011 | Wolf |
20110164046 | July 7, 2011 | Niederauer |
20110169840 | July 14, 2011 | Bakalash |
20130063450 | March 14, 2013 | Kabawala |
20140366040 | December 11, 2014 | Parker |
20150317762 | November 5, 2015 | Park |
20160055667 | February 25, 2016 | Goel |
20160132083 | May 12, 2016 | Jeganathan |
20170060633 | March 2, 2017 | Suarez Gracia |
Type: Grant
Filed: Feb 12, 2018
Date of Patent: Sep 15, 2020
Patent Publication Number: 20180232847
Assignee: Apple Inc. (Cupertino, CA)
Inventors: Dean Jackson (Holt), Jonathan J. Lee (San Francisco, CA), Christopher C. Niederauer (Los Altos Hills, CA), Gavin Barraclough (Sunnyvale, CA)
Primary Examiner: Terrell M Robinson
Application Number: 15/894,622
International Classification: G06T 1/20 (20060101); G06F 1/3293 (20190101); G06F 1/3228 (20190101); G06F 1/3215 (20190101);