FULL SCREEN PROCESSING IN MULTI-APPLICATION ENVIRONMENTS

Systems and methods for determining a foreground application and at least one background application from multiple graphics applications executing within an execution environment are disclosed. Pixel data rendered by the foreground application may be displayed in the execution environment while a rendering thread of the background application may be paused.

Description
BACKGROUND

Various graphics software applications may be utilized by different digital/electronic systems to render graphical scenes. In some cases, multiple graphics software applications may run in the same execution environment or system. In a multi-application execution environment such as a multi-application framework (MAF) environment, multiple native application user interfaces (UIs) may need to be composed to create a designated user experience. In MAF full screen mode, a particular application may be selected and brought to the foreground, while the remaining applications, other than the UI framework itself, render to off-screen surfaces that are redirected to the UI framework for final output.

However, compared with an application rendering natively directly to screen, such full screen mode processing may consume more hardware (HW) resources than desirable for cross-process rendering and UI compositing purposes. For instance, a graphics core, such as an embedded Graphics Processing Unit (GPU), may generally support only one execution thread. To support multiple applications, a typical GPU may time slice between rendering applications regardless of whether those applications are rendering on screen or off screen. As a result, in a conventional MAF environment, even if only one of multiple rendering applications is rendering on screen, that application benefits from only a fraction of the GPU's rendering capacity. To devote greater rendering capacity to on-screen rendering, a typical MAF environment may shut down all other rendering processes to permit an on-screen rendering process sole access to GPU resources.

BRIEF DESCRIPTION OF THE DRAWINGS

The material described herein is illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. For example, the dimensions of some elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements. In the figures:

FIG. 1 is an illustrative diagram of an example execution environment;

FIG. 2 illustrates an example process;

FIG. 3 illustrates an example process;

FIG. 4 illustrates an example process;

FIG. 5 illustrates an example process;

FIG. 6 is an illustrative diagram of an example system; and

FIG. 7 illustrates an example process, all arranged in accordance with at least some implementations of the present disclosure.

DETAILED DESCRIPTION

One or more embodiments are now described with reference to the enclosed figures. While specific configurations and arrangements are discussed, it should be understood that this is done for illustrative purposes only. Persons skilled in the relevant art will recognize that other configurations and arrangements may be employed without departing from the spirit and scope of the description. It will be apparent to those skilled in the relevant art that techniques and/or arrangements described herein may also be employed in a variety of other systems and applications other than what is described herein.

While the following description sets forth various implementations that may be manifested in architectures such as system-on-a-chip (SoC) architectures for example, implementation of the techniques and/or arrangements described herein are not restricted to particular architectures and/or computing systems and may be implemented by any architecture and/or computing system for similar purposes. For instance, various architectures employing, for example, multiple integrated circuit (IC) chips and/or packages, and/or various computing devices and/or consumer electronic (CE) devices such as set top boxes, smart phones, etc., may implement the techniques and/or arrangements described herein. Further, while the following description may set forth numerous specific details such as logic implementations, types and interrelationships of system components, logic partitioning/integration choices, etc., claimed subject matter may be practiced without such specific details. In other instances, some material such as, for example, control structures and full software instruction sequences, may not be shown in detail in order not to obscure the material disclosed herein.

The material disclosed herein may be implemented in hardware, firmware, software, or any combination thereof. The material disclosed herein may also be implemented as instructions stored on a machine-readable medium, which may be read and executed by one or more processors. A machine-readable medium may include any medium and/or mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device). For example, a machine-readable medium may include read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.), and others.

References in the specification to “one implementation”, “an implementation”, “an example implementation”, etc., indicate that the implementation described may include a particular feature, structure, or characteristic, but every implementation may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same implementation. Further, when a particular feature, structure, or characteristic is described in connection with an implementation, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other implementations whether or not explicitly described herein.

Material described herein may be implemented in the context of a multi-application execution environment hereinafter referred to as a Multiple Application Framework (MAF) that permits the compositing of multiple application UIs for graphical display. FIG. 1 illustrates a MAF environment 100 in accordance with the present disclosure. Environment 100 may be implemented in hardware, software, firmware or any combination thereof. For example, environment 100 may be implemented, at least in part, by software and/or firmware instructions executed by or within a computing system such as a CE system employing a SoC architecture.

Environment 100 includes an operating system (OS) 102 that may be stored in memory (not shown). OS 102 may be of any type and may be operably and/or communicatively coupled to or with a graphics device library (GDL) driver 104. In various implementations, GDL driver 104 may be, by way of non-limiting example, a rendering application or program that may be executed by system hardware (not shown).

Environment 100 further includes multiple graphics or rendering applications 106, 108 and 110. In various implementations, rendering applications 106, 108 and/or 110 may include one or more rendering functions and may communicate with other software such as GDL driver 104. By way of non-limiting example, application 106 may be a DirectFB application (see, e.g., DirectFB version 1.4.11, released Nov. 15, 2010), application 108 may be an OpenGL ES application (see, e.g., OpenGL Specification version 4.1, published Jul. 25, 2010), and application 110 may represent one or more rendering applications such as Simple DirectMedia Layer (SDL) or the like. In various implementations, applications 106, 108 and 110 may be associated with respective application programming interface (API) engines or libraries 112, 114 and 115. Further, in various implementations, applications 106, 108 and 110 and/or API libraries 112, 114 and 115 may be associated with corresponding software agents or graphics wrappers 116, 118 and 120. For instance, by way of non-limiting example, application 106 may be a DirectFB rendering application and may include a DirectFB API library 112 and a DirectFB wrapper 116 while application 108 may be an OpenGL ES rendering application and may include an OpenGL ES API library 114 and an OpenGL ES wrapper 118.

In various implementations, any of rendering API agents or wrappers 116, 118 and 120, such as wrapper 116, may act within a rendering API library, such as API library 112, to change an on-screen rendering output to off-screen and to provide associated memory surface information to other entities as will be described in greater detail below. Those of skill in the art will recognize that a memory surface may be implemented in a memory buffer and may contain pixel information or data. Environment 100 further includes an application/surface management component or Global Scene Graph Library (GSGL) 122 operably and/or communicatively coupled to wrappers 116, 118 and 120. In various implementations, GSGL 122 may host all underlying memory surfaces and may communicate with wrappers 116, 118 and 120 using well known inter-process communication methods. In response to communications from GSGL 122, wrappers 116, 118 and/or 120 may cause the rendering output of respective applications 106, 108 and/or 110 to switch between on-screen or off-screen memory surfaces.
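The wrapper/GSGL interaction described above may be illustrated with a minimal Python sketch. The disclosure provides no source code, so all identifiers here (`GraphicsWrapper`, `GSGL`, `register_surface`, etc.) are hypothetical stand-ins for the disclosed components:

```python
from enum import Enum

class Target(Enum):
    ON_SCREEN = "on-screen"
    OFF_SCREEN = "off-screen"

class GraphicsWrapper:
    """Models a per-application agent (e.g. wrapper 116) that owns the
    application's render target and reports surface info to the GSGL."""
    def __init__(self, app_name, gsgl):
        self.app_name = app_name
        self.gsgl = gsgl
        self.target = Target.OFF_SCREEN  # start composited off-screen

    def allocate_surface(self, surface_id):
        # Report the underlying memory surface to the scene-graph library.
        self.gsgl.register_surface(self.app_name, surface_id)

    def set_target(self, target):
        # Invoked in response to a GSGL message; switches rendering output
        # between the on-screen and off-screen memory surfaces.
        self.target = target

class GSGL:
    """Hosts records of all underlying memory surfaces."""
    def __init__(self):
        self.surfaces = {}

    def register_surface(self, app_name, surface_id):
        self.surfaces.setdefault(app_name, []).append(surface_id)

gsgl = GSGL()
w = GraphicsWrapper("app-108", gsgl)
w.allocate_surface(0xA0)
w.set_target(Target.ON_SCREEN)
```

In a real implementation the wrapper would interpose on the rendering API itself; this sketch only models the bookkeeping the description attributes to the wrappers and GSGL 122.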

In various implementations, environment 100 further includes an application registry 124 to maintain information about and to manage applications 106, 108 and/or 110. Environment 100 further includes a rendering service application or UI application 126 to composite off-screen output from applications 106, 108 and 110, and to display a final UI on a display screen (not shown). UI application 126 may also act to determine, at least in part, whether a particular application's output should be provided to an on-screen memory surface or to an off-screen memory surface. UI application 126 may obtain application information and/or memory surface information from registry 124. In some implementations, environment 100 may include a binding library or layer 128 to transform underlying memory surfaces to various rendering API surfaces. In various implementations, binding layer 128 may implement Clutter Binding or any other graphics engines such as OpenGL ES or Qt.

FIG. 2 illustrates a flow diagram of an example process 200 according to various implementations of the present disclosure. Process 200 may include one or more operations, functions or actions as illustrated by one or more of blocks 202, 204, 206, 208, 210, 212, 214, 216, and 218. While, by way of non-limiting example, process 200 will be described herein in the context of example environment 100 of FIG. 1, those skilled in the art will recognize that process 200 may be implemented in various other execution environments. Process 200 may begin at block 202.

At block 202, a UI application may be started and the UI application may wait to receive memory surface information. For example, UI application 126 may begin at block 202 and may wait to receive memory surface information regarding one or more of applications 106, 108 and/or 110. At block 204, a rendering application may be started and the rendering application may allocate a memory surface from an API library. For instance, application 108 may begin at block 204 and may allocate a memory surface from API library 114. At block 206, the application may provide rendering application information to an application registry. For example, at block 206, application 108 may provide rendering application information to application registry 124 where that application information includes information identifying the rendering application such as process name, process identification number, etc. At block 208, the underlying memory surface may be detected. For instance, at block 208, wrapper 118 associated with API library 114 and application 108 may detect the memory surface allocated at block 204 by application 108.
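Blocks 204-206 — surface allocation and registration — may be sketched as follows; the function names and the record layout are illustrative only, and a real surface allocation would come from the API library rather than a plain dictionary:

```python
# Hypothetical model of registry entries keyed by process id.
application_registry = {}

def register_application(process_name, pid, api="opengl-es"):
    # Block 206: identifying information (process name, process id, etc.)
    # goes to the application registry.
    application_registry[pid] = {"name": process_name, "api": api}

def allocate_surface(pid, width, height):
    # Block 204: a memory surface sized for the app's output, modeled here
    # as a plain record rather than a real GPU buffer allocation.
    return {"owner": pid, "width": width, "height": height}

register_application("player", pid=1042)
surface = allocate_surface(1042, 1280, 720)
```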

Process 200 may continue at block 210 where memory surface information may be provided to a graph library. For example, at block 210, wrapper 118 may provide application information including information identifying the allocated memory surface to GSGL 122. The memory surface information may also include information specifying the relationship between that memory surface and the rendering process such as how the allocated memory surface is ordered with respect to other memory surfaces the rendering application may be using, the location of the allocated memory surface in memory, and so forth. At block 212 the memory surface information and information specifying the relationship between that surface and the corresponding application process may be stored. For instance, at block 212, GSGL 122 may store the memory surface information provided in block 210 and may also store information specifying the relationship between that memory surface and the rendering process of application 108.

Process 200 may continue at block 214 where a flip call may be intercepted and the graph library may be notified of the flip call. For example, at block 214, wrapper 118 may intercept a flip call (e.g., a gdl_flip( ) call) that may result from rendering by application 108 and wrapper 118 may notify GSGL 122 that wrapper 118 has intercepted a flip call. As those skilled in the art may recognize, a flip call such as gdl_flip( ) may occur when a graphics application switches from rendering to a background or off-screen memory surface to rendering to a foreground or on-screen memory surface or vice versa. At block 216 execution of the flip call may be blocked. For instance, at block 216, wrapper 118 may block execution of the flip call intercepted at block 214. In various implementations, block 216 may include, for example, blocking the transfer of pixel data from an internal application buffer to a physical display device.
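The interception and blocking of a flip call (blocks 214-216) may be modeled as an interposer that notifies the graph library and then swallows the call, so that pixel data never reaches the physical display. The class below is a hypothetical sketch, not the disclosed gdl_flip( ) mechanism itself:

```python
class FlipInterceptor:
    """Stands in for a wrapper (e.g. wrapper 118) interposed on the
    flip entry point of a rendering API."""
    def __init__(self, notify):
        self.notify = notify      # callback into the graph library
        self.blocked_flips = 0

    def flip(self, surface_id):
        self.notify(surface_id)   # block 214: tell the GSGL a flip occurred
        self.blocked_flips += 1   # block 216: swallow the call; no real flip
        return False              # pixel data stays in the off-screen buffer

events = []                       # models notifications received by the GSGL
interceptor = FlipInterceptor(events.append)
result = interceptor.flip(0xA0)
```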

Process 200 may conclude at block 218 where the memory surface information may be updated. For example, at block 218, GSGL 122 may, in response to the notification provided at block 214, update the memory surface information previously stored at block 212 to indicate that a flip call associated with application 108 has been intercepted and/or to identify which of the memory surfaces used by application 108 are affected by the flip call. While the implementation of example process 200, as illustrated in FIG. 2, may include the undertaking of all of blocks 202-218 in the order illustrated, claimed subject matter is not limited in this regard and, in various examples, implementation of process 200 may include the undertaking of only a subset of blocks 202-218 and/or in a different order than illustrated.

FIG. 3 illustrates a flow diagram of an example process 300 according to various implementations of the present disclosure. Process 300 may include one or more operations, functions or actions as illustrated by one or more of blocks 302, 304, 306, 308, and 310. While, by way of non-limiting example, process 300 will be described herein in the context of example environment 100 of FIG. 1, those skilled in the art will recognize that process 300 may be implemented in various other execution environments. Process 300 may begin at block 302.

At block 302 a rendering application may be brought to the foreground for full screen display and, at block 304, application information may be obtained for the foreground application. For instance, referring to process 200 of FIG. 2, block 302 may occur after a flip call is intercepted and corresponding memory surface information is updated at blocks 214-218. Thus, block 302 may involve bringing the rendering output of the application that issued the flip call at block 214 to the foreground for full screen display. For example, block 302 may involve UI application 126 bringing application 108 to the foreground for full screen rendering. UI application 126 may then undertake block 304 by obtaining application information corresponding to application 108 from application registry 124. At block 306, corresponding wrappers may be notified. For instance, UI application 126 may undertake block 306 by requesting that GSGL 122 provide instructions to wrappers 116, 118 and 120 where those instructions may specify that the rendering output of application 108 is to be processed for foreground rendering while applications 106 and 110 are to be treated as background applications.

Process 300 may continue at block 308 where a native flip may be performed for the foreground application while, at block 310, the rendering process(es) of the background application(s) may be paused. For example, in response to an instruction provided by GSGL 122 at block 306, wrapper 118 may undertake block 308 by routing application 108's rendering process to a direct flip, while, also in response to instructions provided by GSGL 122 at block 306, wrappers 116 and 120 may undertake block 310 by pausing the rendering threads of respective applications 106 and 110. In various implementations, when performing a native flip at block 308, a rendering application such as application 108 may be allowed direct access to physical hardware planes and/or display devices such that intermediate compositing of rendered output may not be required. Further, when implementing block 310, wrappers 116 and 120 may also block flip calls from respective applications 106 and 110 and wait for further notification from GSGL 122.
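The foreground/background transition of blocks 306-310 may be sketched with a `threading.Event` standing in for the pause/resume signal delivered to each background rendering thread. All names are hypothetical:

```python
import threading

class AppWrapper:
    def __init__(self, name):
        self.name = name
        self.native_flip = False
        self.running = threading.Event()
        self.running.set()  # rendering thread initially active

    def to_foreground(self):
        # Block 308: route the rendering process to a direct (native) flip,
        # allowing direct access to hardware planes/display devices.
        self.native_flip = True

    def to_background(self):
        # Block 310: the wrapper pauses the application's rendering thread;
        # a real thread would wait on this event before each frame.
        self.native_flip = False
        self.running.clear()

wrappers = {n: AppWrapper(n) for n in ("106", "108", "110")}
wrappers["108"].to_foreground()
for name in ("106", "110"):
    wrappers[name].to_background()
```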

While the implementation of example process 300, as illustrated in FIG. 3, may include the undertaking of all of blocks 302-310 in the order illustrated, claimed subject matter is not limited in this regard and, in various examples, implementation of process 300 may include the undertaking of only a subset of blocks 302-310 and/or in a different order than illustrated. Thus, for example, in various implementations process 300 may involve undertaking blocks 308 and 310 substantially in parallel or may involve undertaking block 310 prior to undertaking block 308, etc.

FIG. 4 illustrates a flow diagram of an example process 400 according to various implementations of the present disclosure. Process 400 may include one or more operations, functions or actions as illustrated by one or more of blocks 402, 404, 406, 408, and 410. While, by way of non-limiting example, process 400 will be described herein in the context of example environment 100 of FIG. 1, those skilled in the art will recognize that process 400 may be implemented in various other execution environments. Process 400 may begin at block 402.

At block 402 a foreground application may be returned to the background and, at block 404, application information may be obtained from an application registry. For example, UI application 126 may undertake block 402 by sending application 108 to the background and may undertake block 404 by obtaining application information corresponding to application 108 from application registry 124. At block 406, corresponding wrappers may be notified. For instance, UI application 126 may undertake block 406 by requesting that GSGL 122 provide instructions to wrappers 116, 118 and 120.

Process 400 may continue at block 408 where a native flip may be disabled for the foreground application while, at block 410, the rendering process(es) of the background application(s) may be resumed. For example, in response to an instruction provided by GSGL 122 at block 406, wrapper 118 may undertake block 408 by inhibiting application 108's rendering process from making a flip call and by routing the rendering to an off-screen memory surface. Further, and also in response to instructions provided by GSGL 122 at block 406, wrappers 116 and 120 may, for example, undertake block 410 by resuming the rendering threads of respective applications 106 and 110.
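The reverse transition (blocks 408-410) may be sketched in the same hypothetical style: the former foreground wrapper inhibits flip calls and routes output back to an off-screen surface, while background wrappers resume their rendering threads:

```python
class AppWrapper:
    def __init__(self, name, native_flip=False, paused=False):
        self.name = name
        self.native_flip = native_flip
        self.paused = paused
        self.target = "on-screen" if native_flip else "off-screen"

    def leave_foreground(self):
        # Block 408: inhibit flip calls; route rendering to an
        # off-screen memory surface for compositing by the UI application.
        self.native_flip = False
        self.target = "off-screen"

    def resume(self):
        # Block 410: the background rendering thread runs again.
        self.paused = False

fg = AppWrapper("108", native_flip=True)
bg = [AppWrapper(n, paused=True) for n in ("106", "110")]
fg.leave_foreground()
for w in bg:
    w.resume()
```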

While the implementation of example process 400, as illustrated in FIG. 4, may include the undertaking of all of blocks 402-410 in the order illustrated, claimed subject matter is not limited in this regard and, in various examples, implementation of process 400 may include the undertaking of only a subset of blocks 402-410 and/or in a different order than illustrated. Thus, for example, in various implementations process 400 may involve undertaking blocks 408 and 410 substantially in parallel or may involve undertaking block 410 prior to undertaking block 408, etc.

FIG. 5 illustrates a flow diagram of an example process 500 for full screen application processing in a multi-application environment according to various implementations of the present disclosure. Process 500 may include one or more operations, functions or actions as illustrated by one or more of blocks 502, 504 and 506. While, by way of non-limiting example, process 500 will be described herein in the context of example environment 100 of FIG. 1, those skilled in the art will recognize that process 500 may be implemented in various other execution environments. Process 500 may begin at block 502.

At block 502 a rendering application may exit. For example, application 108 may undertake block 502 by exiting execution. At block 504, notice may be provided that memory surfaces have been destroyed, and, at block 506, memory surface information may be updated. For instance, block 504 may involve wrapper 118, in response to application 108 exiting at block 502, notifying GSGL 122 that one or more memory surfaces used by application 108 have been destroyed. Block 506 may then include GSGL 122 updating memory surface information in response to the notice received from wrapper 118. While the implementation of example process 500, as illustrated in FIG. 5, may include the undertaking of all of blocks 502-506 in the order illustrated, claimed subject matter is not limited in this regard and, in various examples, implementation of process 500 may include the undertaking of only a subset of blocks 502-506 and/or in a different order than illustrated.
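The cleanup path of blocks 504-506 may be sketched as the graph library dropping its records for the exiting application's surfaces; the class and surface identifiers below are illustrative only:

```python
class GSGL:
    """Minimal model of the graph library's per-application surface records."""
    def __init__(self):
        self.surfaces = {"108": [0xA0, 0xA1], "106": [0xB0]}

    def on_surfaces_destroyed(self, app):
        # Blocks 504-506: the wrapper reports that the application's
        # surfaces were destroyed; the stored records are updated.
        return self.surfaces.pop(app, [])

gsgl = GSGL()
freed = gsgl.on_surfaces_destroyed("108")
```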

Any one or more of the processes of FIGS. 2-5 may be undertaken in response to instructions provided by one or more computer program products. Such program products may include signal bearing media providing instructions that, when executed by, for example, a processor, may provide the functionality described above with respect to FIGS. 1-5. The computer program products may be provided in any form of computer readable medium. Thus, for example, a processor including one or more processor core(s) may undertake one or more of the blocks shown in FIGS. 2-5 in response to instructions conveyed to the processor by a computer readable medium.

FIG. 6 illustrates an example system 600 in accordance with the present disclosure. System 600 may be used to perform some or all of the various functions discussed herein and may include any device or collection of devices capable of undertaking full screen application processing in a multi-application environment in accordance with various implementations of the present disclosure. For example, system 600 may include selected components of a computing platform or device such as a desktop, mobile or tablet computer, a smart phone, a set top box, etc., although the present disclosure is not limited in this regard. In some implementations, system 600 may be a computing platform or SoC based on Intel® architecture (IA) for CE devices. It will be readily appreciated by one of skill in the art that the implementations described herein can be used with alternative processing systems without departure from the scope of the present disclosure.

System 600 includes a processor 602 having one or more processor cores 604. Processor cores 604 may be any type of processor logic capable at least in part of executing software and/or processing data signals. In various examples, processor cores 604 may include a complex instruction set computer (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing a combination of instruction sets, or any other processor device, such as a digital signal processor or microcontroller.

Processor 602 also includes a decoder 606 that may be used for decoding instructions received by, e.g., a display processor 608 and/or a graphics processor 610, into control signals and/or microcode entry points. While illustrated in system 600 as components distinct from core(s) 604, those of skill in the art may recognize that one or more of core(s) 604 may implement decoder 606, display processor 608 and/or graphics processor 610. In some implementations, core(s) 604 and/or graphics processor 610 may be configured to undertake any of the processes described herein including the example processes described with respect to FIGS. 2-5. Further, in response to control signals and/or microcode entry points, core(s) 604, decoder 606, display processor 608 and/or graphics processor 610 may perform corresponding operations.

Processing core(s) 604, decoder 606, display processor 608 and/or graphics processor 610 may be communicatively and/or operably coupled through a system interconnect 616 with each other and/or with various other system devices, which may include but are not limited to, for example, a memory controller 614, an audio controller 618 and/or peripherals 620. Peripherals 620 may include, for example, a universal serial bus (USB) host port, a Peripheral Component Interconnect (PCI) Express port, a Serial Peripheral Interface (SPI) interface, an expansion bus, and/or other peripherals. While FIG. 6 illustrates memory controller 614 as being coupled to decoder 606 and the processors 608 and 610 by interconnect 616, in various implementations, memory controller 614 may be directly coupled to decoder 606, display processor 608 and/or graphics processor 610.

In some implementations, system 600 may communicate with various I/O devices not shown in FIG. 6 via an I/O bus (also not shown). Such I/O devices may include but are not limited to, for example, a universal asynchronous receiver/transmitter (UART) device, a USB device, an I/O expansion interface or other I/O devices. In various implementations, system 600 may represent at least portions of a system for undertaking mobile, network and/or wireless communications.

System 600 may further include memory 612. Memory 612 may be one or more discrete memory components such as a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, flash memory device, or other memory devices. While FIG. 6 illustrates memory 612 as being external to processor 602, in various implementations, memory 612 may be internal to processor 602. Memory 612 may store instructions and/or data represented by data signals that may be executed by the processor 602. In some implementations, memory 612 may include a system memory portion and a display memory portion. Further, in various implementations, the display memory may include one or more frame buffers to store memory surfaces.

The systems described above, and the processing performed by them as described herein, may be implemented in hardware, firmware, or software, or any combination thereof. In addition, any one or more features disclosed herein may be implemented in hardware, software, firmware, and combinations thereof, including discrete and integrated circuit logic, application specific integrated circuit (ASIC) logic, and microcontrollers, and may be implemented as part of a domain-specific integrated circuit package, or a combination of integrated circuit packages. The term software, as used herein, refers to a computer program product including a computer readable medium having computer program logic stored therein to cause a computer system to perform one or more features and/or combinations of features disclosed herein.

FIG. 7 illustrates a flow diagram of an example process 700 for full screen application processing in a multi-application environment according to various implementations of the present disclosure. While, by way of non-limiting example, process 700 will be described herein in the context of example MAF environment 100 of FIG. 1 and/or the processes of FIGS. 2-5, those skilled in the art will recognize that process 700 may be implemented in various other execution environments and/or other processes.

Process 700 may begin at block 702 with the determination of a foreground application and at least one background application from among multiple graphics applications executing in an execution environment. For example, referring also to process 200 of FIG. 2, block 702 may include at least the following operations, functions or actions: beginning a UI application and waiting for memory surface information (block 202); beginning a rendering application and allocating a rendering surface from an API library (block 204); providing rendering application information including memory surface information to a graph library (block 210); and, intercepting a flip call made by the rendering application and notifying the graph library that the flip call has been intercepted (block 214). Although not illustrated in FIG. 7, block 702 may also include detecting the underlying memory surface allocated to the rendering application. In addition, when providing rendering application information including memory surface information to a graph library, block 702 may also include using a wrapper or agent associated with the foreground application to provide the application information to the graph library.

Process 700 may continue at block 704 with the provision of pixel data rendered by the foreground application while pausing a rendering thread of the background application. For example, referring also to process 300 of FIG. 3, block 704 may include at least the following operations, functions or actions: bringing the rendering application to foreground for full screen display (block 302); performing a native flip for the foreground application rendering process (block 308); and, pausing rendering process(es) of the background application(s) (block 310). Although not illustrated in FIG. 7, block 704 may also include obtaining application information for the foreground application(s) and notifying corresponding wrappers.

Process 700 may continue at block 706 with the ending or disabling of the native flip for the rendering process of the foreground application and the resumption of the rendering thread(s) or process(es) of the background application(s). For example, referring also to process 400 of FIG. 4, block 706 may include at least the following operations, functions or actions: the return of the foreground application to background rendering (block 402) by exiting the foreground application from full screen rendering; disabling native flip for foreground application rendering process (block 408); and, resumption of the rendering process(es) of background application(s) (block 410). Although not illustrated in FIG. 7, block 706 may also include obtaining application information from the application registry and notifying corresponding wrappers.

Process 700 may end at block 708 with the ending of the foreground or rendering application. For example, referring also to process 500 of FIG. 5, block 708 may include at least the following operations, functions or actions: providing notice to the UI application that the memory surface allocated to the foreground application has been destroyed (block 504); and, the corresponding updating of memory surface information (block 506).
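The overall lifecycle of process 700 may be summarized as a minimal state machine whose states and transitions mirror blocks 702-708; the state and event names are illustrative only:

```python
# Transition table: (current state, event) -> next state.
TRANSITIONS = {
    ("background", "enter_fullscreen"): "foreground",  # blocks 702-704
    ("foreground", "exit_fullscreen"): "background",   # block 706
    ("background", "exit"): "terminated",              # block 708
    ("foreground", "exit"): "terminated",
}

def step(state, event):
    # Unknown (state, event) pairs leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

state = "background"
state = step(state, "enter_fullscreen")  # -> "foreground"
state = step(state, "exit_fullscreen")   # -> "background"
state = step(state, "exit")              # -> "terminated"
```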

While certain features set forth herein have been described with reference to various implementations, this description is not intended to be construed in a limiting sense. Hence, various modifications of the implementations described herein, as well as other implementations, which are apparent to persons skilled in the art to which the present disclosure pertains, are deemed to lie within the spirit and scope of the present disclosure.

Claims

1-20. (canceled)

21. A machine readable medium including at least one memory, storage device or storage disk comprising machine readable instructions that, when executed, cause one or more processors to at least:

execute a first application and a second application, the first application to render first graphics data to a display and the second application to render second graphics data to the display;
responsive to a first instruction, transition the first application to a foreground to cause the first application to continue to render the first graphics data to the display; and
pause the second application responsive to the first instruction.

22. The machine readable medium of claim 21, wherein the first graphics data includes pixel data.

23. The machine readable medium of claim 21, wherein the instructions are to cause the one or more processors to transition the second application to a background responsive to the first instruction.

24. The machine readable medium of claim 21, wherein the first application includes a first thread to render first graphics data to a display, and the second application includes a second thread to render second graphics data to the display.

25. The machine readable medium of claim 24, wherein the instructions are to cause the one or more processors to cause the first thread of the first application to render the first graphics data to the display responsive to the first instruction.

26. The machine readable medium of claim 24, wherein the instructions are to cause the one or more processors to pause the second thread of the second application responsive to the first instruction.

27. The machine readable medium of claim 21, wherein the one or more processors are included in a smartphone.

28. An apparatus comprising:

at least one memory;
instructions; and
processor circuitry to execute the instructions to at least: execute a first application and a second application, the first application to render first graphics data to a display and the second application to render second graphics data to the display; responsive to a first instruction, transition the first application to a foreground to cause the first application to continue to render the first graphics data to the display; and pause the second application responsive to the first instruction.

29. The apparatus of claim 28, wherein the first graphics data includes pixel data.

30. The apparatus of claim 28, wherein the processor circuitry is to transition the second application to a background responsive to the first instruction.

31. The apparatus of claim 28, wherein the first application includes a first thread to render first graphics data to a display, and the second application includes a second thread to render second graphics data to the display.

32. The apparatus of claim 31, wherein the processor circuitry is to cause the first thread of the first application to render the first graphics data to the display responsive to the first instruction.

33. The apparatus of claim 31, wherein the processor circuitry is to pause the second thread of the second application responsive to the first instruction.

34. The apparatus of claim 28, wherein the processor circuitry is included in a smartphone.

35. A method comprising:

executing a first application and a second application, the first application to render first graphics data to a display and the second application to render second graphics data to the display;
responsive to a first instruction, transitioning the first application to a foreground to cause the first application to continue to render the first graphics data to the display; and
pausing the second application responsive to the first instruction.

36. The method of claim 35, wherein the first graphics data includes pixel data.

37. The method of claim 35, further including transitioning the second application to a background responsive to the first instruction.

38. The method of claim 35, wherein the first application includes a first thread to render first graphics data to a display, and the second application includes a second thread to render second graphics data to the display.

39. The method of claim 38, further including:

causing the first thread of the first application to render the first graphics data to the display responsive to the first instruction; and
pausing the second thread of the second application responsive to the first instruction.

40. The method of claim 35, wherein the method is performed by a smartphone.

Patent History
Publication number: 20220230272
Type: Application
Filed: Apr 8, 2022
Publication Date: Jul 21, 2022
Inventors: Tao Zhao (Shanghai), John C. Weast (Portland, OR), Brett P. Wang (Shanghai)
Application Number: 17/716,595
Classifications
International Classification: G06T 1/20 (20060101); G06F 9/48 (20060101); G06F 9/451 (20060101);