DOUBLE RENDER PROCESSING FOR HANDHELD VIDEO GAME DEVICE
Methods and systems for alternating rendering of information of a common display for a video game are provided by identifying certain data relating to a scene as belonging to a first layer and identifying other certain data relating to a scene as belonging to a second layer. Rendered information of each layer is captured in memory, and the rendered information for each layer is used for two successive video frames.
The present invention relates generally to handheld devices, and more particularly to image rendering for a handheld video game device.
Video games provide a source of enjoyment to users by allowing users to engage in simulated scenarios and situations the users may not otherwise be able to experience. Video games receive different types of interactive user inputs, and process the inputs into vibrant interactive visual displays and audio accompaniments for the users to enjoy.
Handheld video game devices, or other mobile devices providing video game functions, are often preferred over traditional video game consoles due to their convenience and mobility. Because of the relatively small size of most handheld video game devices, handheld video game devices allow for easy transport and playing flexibility in environments which would typically be unsuitable for video game play using traditional video game consoles.
The tradeoff for the small size and mobility of handheld video game devices is generally manifested in the processing power and video display capabilities of the handheld video game devices. The relatively small housings of handheld video game devices do not allow for the hardware capacity and processing power of traditional video game consoles. In addition, the smaller platforms allow for only limited screen sizes, further reducing the video display capabilities of handheld video game devices. While recent years have seen marked improvements in the video display capabilities of a number of handheld video game device platforms, generally the capacity of video displays in handheld video game devices still falls far short of the video display capabilities of more traditional video game consoles.
BRIEF SUMMARY OF THE INVENTION
The invention provides for displays of a video game. In one aspect the invention provides a method of providing images for a video game, comprising: associating objects with either a first layer or a second layer; rendering objects associated with the first layer; rendering objects associated with the second layer; displaying the rendered objects associated with the first layer on a display; and displaying the rendered objects associated with the second layer on the display.
In another aspect the invention provides a method of providing images for a music based video game, comprising: associating a first object with a first display layer, the first object representative of a musician in the music based video game; associating a plurality of background objects with a second display layer, the plurality of background objects representative of a venue in the music based video game; iteratively rendering the first object, storing rendered information of the first object in a first memory, and displaying rendered information of the first object on the display; iteratively rendering the plurality of background objects, storing rendered information of the plurality of background objects in a second memory, and displaying rendered information of the plurality of background objects on the display; with displaying rendered information of the first object on the display utilizing the information stored in the first memory in first alternating time periods and displaying rendered information of the plurality of background objects utilizing the information stored in the second memory in second alternating time periods, the first alternating time periods and the second alternating time periods occurring at different times.
In another aspect the invention provides a handheld game system, comprising: memory storing scene data, the scene data including first scene data and second scene data; a processor configured to render the first scene data and the second scene data; first video memory, coupled to the processor, configured to store rendered first scene data; second video memory, coupled to the processor, configured to store rendered second scene data; and a display coupled to the first video memory and the second video memory; the processor being further configured to alternately: a) render the first scene data, command display on the display of the rendered first scene data, command display on the display of the rendered second scene data in the second video memory, and command storage of the rendered first scene data in the first video memory, and b) render the second scene data, command display on the display of the rendered second scene data, command display on the display of the rendered first scene data in the first video memory, and command storage of the rendered second scene data in the second video memory.
In another aspect the invention provides a method of providing images for a video game, comprising: associating different objects with different layers; rendering objects associated with a first layer of the different layers; storing information of rendered objects associated with the first layer in a first memory; rendering objects associated with a second layer of the different layers; combining the information of rendered objects associated with the first layer in the first memory with information of rendered objects associated with the second layer; and displaying the combined information.
These and other aspects of the invention are more fully comprehended on review of this disclosure.
In some embodiments, for example, the embodiment as illustrated in
In the embodiment as illustrated in
The handheld video game system illustrated in
In the embodiment as illustrated in
The scene data and other video generation information is stored in scene data memory 211. In some embodiments, scene data memory may be included as an allocated portion of a main memory in the handheld video game device. In other embodiments, scene data memory may be separate memory in the handheld video game device, allocated specifically for storage of scene data and other video generation information.
A graphics processing unit 213 retrieves scene data and other video generation information stored in the scene data memory. The graphics processing unit processes the information from scene data memory and renders images, for example 2D or 3D images, for display on the video display based on the information. In one embodiment, the graphics processing unit in the handheld video game device performs image rendering at a rate of 60 images per second. The graphics processing unit may alternate image rendering between what may be considered a front layer and what may be considered a back layer. In the context of a music video game, the front layer may be used, exclusively in some embodiments, to render images of a lead character or characters, for example a simulated lead guitarist and a simulated lead singer, while the back layer may be used to render images of a background environment, for example a venue, other band members, and remaining image details. In most embodiments, the front layer may be considered on top of the back layer, thereby blocking display of portions of the back layer. In other words, if a particular pixel in a composite image includes image information for both the front layer and the back layer, the image information for the back layer is occluded. As the lead guitarist and lead singer in the front layer are the main features of the video game footage, by using this arrangement, it is possible to render the lead guitarist and the lead singer in greater detail than the rest of the image.
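The occlusion rule described above, in which front-layer image information blocks back-layer image information at any shared pixel, may be sketched as follows. This is an illustrative Python sketch only; the function name `composite` and the use of `None` as an "no object rendered here" sentinel are assumptions, not part of the disclosure.

```python
# Hypothetical sketch of per-pixel compositing: wherever the front layer
# has image information, the corresponding back-layer pixel is occluded.
# None marks pixels for which no object was rendered in a layer.

def composite(front, back):
    """Combine two equally sized layers; front pixels occlude back pixels."""
    return [f if f is not None else b for f, b in zip(front, back)]

# Example: a 4-pixel scanline. The front layer covers pixels 1 and 2 only,
# so the venue imagery of the back layer shows through elsewhere.
front = [None, "singer", "guitarist", None]
back = ["venue", "venue", "venue", "venue"]
print(composite(front, back))
```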
In the embodiment of
The graphics processing unit may continually alternate rendering front layers and back layers in this manner. Therefore, on even frames, for example, the display of the handheld video game device may display a newly updated front layer, and a back layer reused from the previous frame and retrieved from VRAM B. On odd frames, for example, the display of the handheld video game device may alternatively display a newly updated back layer, and a front layer reused from the previous frame and retrieved from VRAM A. As stated previously, the graphics processing unit of the handheld video game device may only be capable of rendering image layers with 2048 polygons. However, the graphics processing unit may be capable of combining two previously rendered image layers into one composite image including more than 2048 polygons. Likewise, the video memory associated with the graphics processing unit is capable of storing images containing more than 2048 polygons, and the display is capable of displaying images containing more than 2048 polygons. Using an alternating layer rendering approach as described, the handheld video game device is therefore capable of displaying video with double the original resolution capacity of the handheld video game device.
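The alternating scheme above may be sketched, purely for illustration, as a small Python loop; the function name `run_frames` and the representation of rendered layers as simple labels are assumptions, not part of the disclosure. Each frame renders only one layer, stores it in its VRAM slot, and reuses the other slot's previous contents for the composite.

```python
# Illustrative sketch of alternating front/back layer rendering.
# Even frames render a new front layer into VRAM A; odd frames render a
# new back layer into VRAM B; the displayed image always composites the
# current contents of both slots.

def run_frames(front_frames, back_frames):
    vram_a = None  # most recently rendered front layer
    vram_b = None  # most recently rendered back layer
    shown = []
    for frame in range(len(front_frames) + len(back_frames)):
        if frame % 2 == 0:              # even frame: render new front layer
            vram_a = front_frames[frame // 2]
        else:                           # odd frame: render new back layer
            vram_b = back_frames[frame // 2]
        shown.append((vram_a, vram_b))  # composite of the two stored layers
    return shown

print(run_frames(["A1", "A2"], ["B1", "B2"]))
# → [('A1', None), ('A1', 'B1'), ('A2', 'B1'), ('A2', 'B2')]
```

Note how each rendered layer persists for two successive frames, matching the frame sequence described for images A1, B1, A2, and B2 below.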
In block 311, the process processes scene data. Scene data may include, for example, video game instructions from a removable memory including information for running the particular video game being played. Scene data may also include, for example, user inputs generated through video game play, retrieved from, for example, user input apparatuses built into the handheld video game device, or from, for example, a peripheral device as illustrated in
In block 313, the process, usually by way of a graphics processing unit, renders objects within an image. Generally, the process renders objects associated with different layers at different times. For example, with two layers, the process may render objects associated with a first layer during a first time period, render objects associated with a second layer during a second time period, and repeat the rendering of objects in different layers in an alternating manner, thereby effectively rendering a first layer of an image and a second layer of an image in an alternating manner.
In block 317, the process stores the information associated with each object generated during the image rendering process, or in other words, stores results of rendering objects. In some embodiments rendered object information for different layers is stored in different memories. In addition, in some embodiments the process may also command display of the rendered objects. In embodiments where images are rendered polygon by polygon, the process may generate a variety of information pertaining to each polygon. For example, the polygon shape and color are generated in block 313, and the polygon layer identification information and exact display location of the polygon within the layer are generated in block 315. After the rendered object information has been generated and compiled, it is stored in memory until the entire image layer has been successfully rendered. For example, the object information may be stored in the video memory associated with the graphics processing unit, or alternatively, the object information may be temporarily stored in the main memory of the handheld video game device. An image layer may be considered to be successfully rendered when all the objects to be rendered associated with the layer have been compiled, and the graphics processing unit can use the compilation of object information to render a completed image layer.
It should be recognized that in some embodiments the process performs the operations of block 311, relating to association of objects with layers, prior to or when storing game data on, for example, a game cartridge or other memory storing video game instructions and data, and the process may thereafter repetitively perform the operations of blocks 313, 315, and 317 during game play.
The process afterwards returns. The process may be repeated based on the object generation progress of the image layer being rendered and on the image rendering requirements of the graphics processing unit.
At the next frame, frame 1, image B1 413, representing the first back layer, is rendered and displayed on the video display, and stored into VRAM B, the video memory slot allocated for storage of the most recent back layer image. In accordance with embodiments of the invention, the display of the handheld video game device is capable of displaying at least two image layers at the same time. Therefore, during frame 1, image A1 is not replaced in VRAM A, and is instead recycled and combined with the new image B1 into a composite image making up the complete screenshot.
At the next frame, frame 2, image A2 415, representing the second front layer, is rendered and displayed on the video display, while simultaneously stored into VRAM A. When stored into VRAM A, image A2 overwrites and replaces the previous image A1, so that there is only one front layer image stored in VRAM A at any given time. Image B1, which is still stored in VRAM B, is reused, and image A2 is rendered on top of image B1, completing the screenshot.
At frame 3, image B2 417, representing the second back layer, is rendered and displayed on the video display, and stored into VRAM B, overwriting the previous image B1. This process is similar to the storage process associated with VRAM A during frame 2. The new image B2 is layered with image A2 to create the composite screenshot at frame 3. As can be seen in
Likewise, layer B is a rendered back layer 517, including all the background imagery associated with the screenshot. Similar to layer A, layer B can display a maximum of 2048 polygons as well. In this fashion, the background imagery may utilize an increased number of polygons, as the polygons of the 2048 polygons that would otherwise be associated with the lead singer and/or lead guitarist can be used instead to enhance details in the background imagery.
The rendered layers A and B are combined into one composite screenshot 519 including both layers. With both the front layer and the back layer capable of displaying up to 2048 polygons, the composite layer thereby has a maximum rendering capability of 4096 polygons, equivalent to double the video display resolution of traditional image rendering on similar handheld video game devices. In the embodiment as illustrated in
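The polygon budget arithmetic above may be made explicit with a trivial sketch; the names are illustrative only.

```python
# Back-of-envelope sketch of the polygon budget described above: each
# layer is limited to 2048 polygons per rendering pass, but the composite
# of the two stored layers can carry up to twice that.

PER_LAYER_LIMIT = 2048

front_budget = PER_LAYER_LIMIT       # layer A: lead characters
back_budget = PER_LAYER_LIMIT        # layer B: background imagery
composite_budget = front_budget + back_budget

print(composite_budget)  # → 4096
```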
Handheld video game devices generally integrate displays, speakers, and user inputs directly into the handheld video game device.
The removable memory interface of the handheld video game device is configured to communicate with a removable memory, for example, a video game cartridge providing video game instructions related to the operation of a specific video game. The processor executes the video game instructions from the removable memory by communicating with each component, including the removable memory, via the bus. The main memory receives and stores information from the other components as needed for the video game to run properly. Stored information may include, for example, video game play instructions, input processing instructions, audio and video generation information, and configuration information from the removable memory, as well as user inputs from either the user I/O or the peripheral interface. The processor adjusts the video game's audio and video properties based in large part on a combination of video game processing instructions from the removable memory and the user inputs. A processor may also receive additional video game instructions and inputs from other handheld video game devices via the wireless communication interface, for example, during wireless multiplayer game play.
The processor of the handheld video game device receives and processes the video game instructions and inputs, and generates audio and video information for the video game based on the instructions and inputs. The audio driver is configured to receive the audio information from the processor, and to translate the audio information into audio signals to be sent to the speakers.
In an embodiment of a handheld video game device associated with the invention, the graphics processing unit renders images with a maximum resolution of 2048 polygons per image, at a frame rate of 60 images per second. The graphics processing unit for each display is configured to retrieve video generation information, and to translate the video generation information into display images to be sent to the display coupled to the graphics processing unit, to the video memory coupled to the graphics processing unit, or to both. In some embodiments, such as embodiments of the invention and the embodiment as illustrated in
In block 713, the process determines whether to render a first image layer A or a second image layer B. In some embodiments, the first image layer A may be a front layer, and the second image layer B may be a back layer, where the front layer is always layered atop the back layer, and features of the front layer occlude features of the back layer at pixels where objects are rendered for both layers. In some embodiments, certain features associated with each screenshot will be grouped into either a front layer A or a back layer B. For example, in the embodiments associated with the invention, polygons used to generate images of the lead singer and the lead guitarist are grouped into front layer A, and the remaining polygons, which may be used to generate image details including backup singers or venue, may be grouped into back layer B. In other embodiments, polygons falling within a predefined portion of the display, for example, polygons located in the left half of the display image, may be included in layer A, and polygons falling outside the predefined portion of the display, the right half of the display image in this example, may be included in layer B. The process may include a render index to help determine whether to render layer A or layer B. The initial render index value may be arbitrary, or may be preset to either layer A or layer B. If layer A is to be rendered, the process proceeds to block 715. Similarly, if layer B is to be rendered, the process proceeds to block 721.
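The two layer-assignment strategies described for block 713 (grouping by feature, or grouping by display region) may be sketched as follows. The function names, the feature labels, and the 256-pixel display width are illustrative assumptions, not part of the disclosure.

```python
# Illustrative sketch of the two grouping strategies from block 713.

LEAD_FEATURES = {"lead_singer", "lead_guitarist"}

def assign_by_feature(polygon):
    """Group lead-character polygons into front layer A, the rest into B."""
    return "A" if polygon["feature"] in LEAD_FEATURES else "B"

def assign_by_region(polygon, display_width=256):
    """Group polygons in the left half of the display into layer A,
    and polygons in the right half into layer B."""
    return "A" if polygon["x"] < display_width // 2 else "B"

p1 = {"feature": "lead_singer", "x": 200}    # lead character, right half
p2 = {"feature": "backup_singer", "x": 40}   # background figure, left half
print(assign_by_feature(p1), assign_by_feature(p2))  # → A B
print(assign_by_region(p1), assign_by_region(p2))    # → B A
```

As the example shows, the same polygon may land in different layers depending on which grouping strategy an embodiment uses.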
In block 715, the process renders image layer A. The graphics processing unit retrieves video information associated with features included in image layer A, and processes the video information to generate polygon information for the image layer A features. Video information associated with image generation may be initially processed by the processor associated with the handheld video game device, and may include video game information stored in the removable memory and/or user input signals originating from input buttons, an attached peripheral, or a wireless communication interface. The video information may be temporarily stored in, and retrieved by a graphics processing unit from, the main memory of the handheld video game device. In embodiments associated with the invention, the process includes rendering a front layer A, including polygons used to generate a simulated lead singer and a simulated lead guitarist with respect to a music based video game. In other embodiments associated with other video games, different features of video display images may be grouped with and rendered as image layer A.
In block 717, the process stores image layer A. Storage space may be provided by a video memory connected to the graphics processing unit. The video memory may have the capacity to store and sort multiple images simultaneously. In embodiments of the invention, the video memory is capable of storing at least two images in two separate memory allocations at any given time. The process stores the rendered image layer A into one of the memory slots, which may be labeled, for example, video memory slot A. In some embodiments of the invention, the rendering process of block 715 and the storing process of block 717 may be performed in conjunction with each other. In other words, when an object in an image layer is rendered during the rendering process, the newly rendered object may immediately be stored into the associated video memory slot before another object in the image layer is rendered. In other embodiments, the entire image layer may be rendered before storage of the completed image layer into the video memory slot A. If video memory slot A is occupied with a previously rendered image layer A, the previously rendered image layer A is overwritten and replaced by the newly rendered image layer A.
In block 719, the process displays the rendered image layer A and a second image layer B stored in a second video memory slot dedicated to holding rendered layer B images. The process may combine the two image layers into one composite image before sending the display information to the video display. Alternatively, the process may send the image layers separately, and layer the images on top of one another on the display. If the process is in its first iteration, and no image layer B has yet been rendered, the process may display the rendered image layer A alone, or alternatively, the process may display a blank image layer B.
In block 721, the process renders image layer B. The rendering process closely mirrors the rendering process for image layer A as described in block 715. The graphics processing unit retrieves video information associated with features included in image layer B, and processes the video information to generate polygon information for the image layer B features. The features included in image layer B may be the features in a typical screenshot of the video game which were not rendered in image layer A. In the context of a music based video game, the image layer B may be a back layer B, which includes polygons associated with background imagery, for example, background singers, the remaining band members, and the venue.
In block 723, the process stores image layer B. The storing process again closely mirrors the storage process for image layer A as described in block 717. Image layer B may be stored in a separate memory allocation in a video memory slot B, or a similar memory allocation dedicated to the storage of newly rendered layer B images. In some embodiments, the image layer B storing process in block 723 may be performed in conjunction with the image layer B rendering process in block 721. In other embodiments, an entire image layer B may be rendered before storage into the video memory slot B. If video memory slot B already holds a previously rendered image layer B, the previously rendered image layer B is overwritten and replaced by the newly rendered image layer B.
In block 725, the process displays on a video display a composite image including the rendered image layer B and the image layer A stored in video memory slot A. In most embodiments, the graphics processing unit recombines the rendered image layer B and the stored image layer A before sending the composite image to the video display. The image layer defined to be the front layer, generally image layer A as has been described herein, may be layered atop the image layer defined to be the back layer, generally image layer B in the described embodiments, thereby occluding objects in the back layer. In other words, pixels where both layers have object information will display the pixel information of the front layer, and the object information for the pixel included in the back layer will not be displayed.
In block 727, the process swaps the render index. If image layer A was rendered and stored in the previous iteration, in other words, if the process performed the tasks associated with blocks 715, 717, and 719, the render index is switched from A to B. Likewise, if image layer B was rendered and stored in the previous iteration, meaning the process performed the tasks associated with blocks 721, 723, and 725, the render index is switched from B to A. Therefore, upon the next iteration of the process, the process will render the image layer which was not rendered in the previous iteration of the process. Swapping the image render index allows for image layer A and image layer B to be alternately rendered, thereby allowing for generation of a completely original screenshot every two iterations, and 30 times every second for embodiments where rendering is performed at a rate of 60 images per second.
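The render-index behavior of block 727 may be sketched as a small state machine; the function name `render_sequence` and the "A"/"B" index values are illustrative assumptions, not part of the disclosure.

```python
# Illustrative sketch of the render index from blocks 713-727: each
# iteration renders one layer (blocks 715-719 or 721-725), then block 727
# swaps the index so the next iteration renders the other layer.

def render_sequence(iterations, initial_index="A"):
    index = initial_index
    rendered = []
    for _ in range(iterations):
        rendered.append(index)                 # render the indexed layer
        index = "B" if index == "A" else "A"   # block 727: swap the index
    return rendered

print(render_sequence(6))  # → ['A', 'B', 'A', 'B', 'A', 'B']
```

At 60 rendered images per second, six iterations of this loop correspond to three complete composite screenshots, consistent with a completely original screenshot every two iterations.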
In block 729, the process determines whether to exit the image rendering process. If the process determines to remain in image rendering, the process cycles back to render either a new image layer A or a new image layer B, depending on the current render index. If the process determines to exit image rendering, the process returns.
In the embodiment of
During odd frames, a new image B 827 is rendered by the graphics processing unit. In the embodiment as illustrated in
In the embodiment as illustrated in
In some situations, a second level of detail of an image layer may be available. For example, a zoomed in shot or close-up of one of the characters may be desired. A close-up of, for example, the lead guitarist character may be rendered when a particular task, such as a high score, has been achieved in the context of the video game. The second image 917 is an example of an image layer including only one character, a lead guitarist 919, from the music based video game. The second image illustrates a second level of detail at which the front layer may be rendered, and may be used interchangeably with the first image as a front layer in the embodiments of the invention described herein. Because there is no lead singer in the second image, the 2048 polygons in the image layer may be completely dedicated to rendering the lead guitarist, and the lead guitarist at the second level of detail is rendered at a much higher resolution than the lead guitarist at the first level of detail.
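The two front-layer levels of detail described above may be sketched in terms of how the fixed layer budget is allocated. The trigger condition and the exact polygon counts are illustrative assumptions only.

```python
# Illustrative sketch of front-layer level-of-detail selection: the normal
# shot shares the 2048-polygon layer budget between two characters, while
# the close-up spends the entire budget on the lead guitarist alone.

LAYER_BUDGET = 2048

def front_layer_polygons(close_up):
    if close_up:
        # Close-up (e.g. after a high score): all polygons dedicated
        # to the lead guitarist, yielding a higher-resolution character.
        return {"lead_guitarist": LAYER_BUDGET}
    # Normal shot: budget split between lead singer and lead guitarist.
    return {"lead_singer": LAYER_BUDGET // 2,
            "lead_guitarist": LAYER_BUDGET // 2}

print(front_layer_polygons(close_up=False))
print(front_layer_polygons(close_up=True))
```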
In various aspects, the invention may allow for the capability to generate different levels of detail for either one image layer alone, or for both image layers. In some embodiments, such as the embodiment as illustrated in
In block 1011 the process determines whether to render objects associated with a first layer or a second layer. If the process determines to render objects associated with the first layer, the process proceeds to block 1013. If the process determines to render objects associated with the second layer, the process proceeds to block 1019. In most embodiments the process alternates between rendering of objects associated with the first layer and rendering of objects associated with the second layer. In various embodiments the process may maintain a flag, a register setting, or an index indicating whether to render objects of the first layer or the second layer, or which layer was last rendered.
In block 1013 the process renders objects associated with the first layer. The process may perform the rendering of objects associated with the first layer by way of use of a graphics processing unit, which may be a separate chip or portion of a chip configured to process graphic information. In block 1015 the process stores information of the rendered objects associated with the first layer in a first memory, which may be considered a memory A. In block 1017 the process displays information stored in a second memory, which may be denoted as a memory B.
In block 1025 the process swaps a render index. The purpose of the swapping of the render index is to indicate to the process that the process should thereafter render objects associated with a layer other than the objects associated with the layer just rendered. This may be done by way of an index, but many other ways of doing this may also be performed, for example a flag may be set, a register may be set, separate code sections may be used, or other methods may be used.
In block 1027 the process determines whether the process should exit. If yes, the process thereafter exits. If no, the process returns to block 1011.
Upon returning to block 1011 the process again determines whether to render the objects associated with layer 1 or render the objects associated with layer 2. Assuming, for the sake of example, that the process had previously rendered objects associated with layer 1, the process proceeds to block 1019. In block 1019 the process renders objects associated with layer 2. In block 1021 the process layers rendered information of objects of layer 1 and the rendered information of objects of layer 2. In some embodiments the layering may be performed by the graphics processing unit. In some embodiments, the rendering may be performed by a 3D render engine and the layering may be performed by a 2D graphics engine. In other embodiments the process may be performed by another processor. In some embodiments the layering of information may be performed as part of performance of operations of block 1023, in which the layered information of objects associated with layer 1 and layer 2 is stored in the second memory, which may be denoted as memory B. For example, in some embodiments information associated with the rendered objects of layer 2 may be first stored in memory B, with the information stored in memory A thereafter overwriting information stored in memory B, thereby effectively occluding the information of layer 2, or vice versa.
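The store-then-overwrite layering into memory B described for blocks 1021 and 1023 may be sketched as follows; the function name, the pixel labels, and the use of `None` as an empty-pixel sentinel are illustrative assumptions, not part of the disclosure.

```python
# Illustrative sketch of blocks 1021/1023: rendered layer-2 information is
# placed in memory B first, and the layer-1 information held in memory A is
# then written over it, occluding layer 2 wherever layer 1 has object
# information. None marks pixels with no rendered object.

def layer_into_memory_b(memory_a, layer2_pixels):
    memory_b = list(layer2_pixels)        # store rendered layer 2 first
    for i, pixel in enumerate(memory_a):  # then overwrite with layer 1
        if pixel is not None:
            memory_b[i] = pixel
    return memory_b

memory_a = [None, "musician", None]       # rendered layer 1 (front)
layer2 = ["venue", "venue", "venue"]      # freshly rendered layer 2 (back)
print(layer_into_memory_b(memory_a, layer2))
# → ['venue', 'musician', 'venue']
```

Memory B then holds the complete composite, ready for display in block 1017.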
The process then again goes to block 1017 and displays the information stored in memory B, and thereafter continues as previously discussed.
During odd frames, the process renders objects associated with a second layer, and layers, which in some embodiments comprises combining, the information stored in video memory A with the rendered information of objects of the second layer. The process stores the layered information in video memory B. The process also, during odd frames, displays the information stored in video memory B on the display.
Thus, in every other frame, the process renders objects associated with different layers. In addition, in alternating frames, the process either displays information that had previously been stored in video memory B, or displays information rendered during the frame together with information previously stored in video memory A on the display.
The invention therefore provides an image rendering process for, for example, a handheld video game device. Although the invention has been described with respect to certain embodiments, it should be recognized that the invention may be practiced other than as specifically described, the invention comprising the claims and their insubstantial variations supported by this disclosure.
Claims
1. A method of providing images for a video game, comprising:
- associating objects with either a first layer or a second layer;
- rendering objects associated with the first layer;
- rendering objects associated with the second layer;
- displaying the rendered objects associated with the first layer on a display; and
- displaying the rendered objects associated with the second layer on the display.
2. The method of claim 1, further comprising storing information of the rendered objects associated with the first layer in a first memory.
3. The method of claim 2, further comprising storing information of the rendered objects associated with the second layer in a second memory.
4. The method of claim 3 wherein rendering objects associated with the first layer occurs during a first time period.
5. The method of claim 4 wherein rendering objects associated with the second layer occurs during a second time period.
6. The method of claim 5 wherein the second time period follows the first time period.
7. The method of claim 3 wherein rendering objects associated with the first layer and rendering objects associated with the second layer occurs repetitively during play of the video game.
8. The method of claim 3 further comprising additionally rendering objects associated with the first layer, additionally storing information of the additionally rendered objects associated with the first layer in the first memory, displaying the additionally rendered objects associated with the first layer on the display and displaying the rendered objects associated with the second layer on the display using the information of the rendered objects associated with the second layer stored in the second memory.
9. The method of claim 8 wherein objects associated with the first layer include an object representative of a first musician in a music based video game and objects associated with the second layer include a venue in the music based video game.
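By way of illustration only, the basic method of claims 1-7 can be sketched as follows. All names in this sketch (the scene dictionary, the `render` stand-in, the memory lists) are hypothetical and do not appear in the disclosure; the sketch assumes each layer's renders are kept in a separate memory and composited back-to-front.

```python
# Hypothetical sketch of claims 1-7; names are illustrative only.

# Associate each object with either a first or a second layer (claim 1).
scene = {"musician": "first", "stage": "second", "crowd": "second"}

def render(obj):
    """Stand-in renderer: returns a token representing a rendered object."""
    return f"rendered:{obj}"

# Render objects of each layer (in successive time periods per claims 4-6)
# and store each layer's renders in its own memory (claims 2-3).
first_memory = [render(o) for o, layer in scene.items() if layer == "first"]
second_memory = [render(o) for o, layer in scene.items() if layer == "second"]

# Display both layers, with the second (background) layer behind the first.
displayed = second_memory + first_memory
```

The sketch makes the ordering choice explicit: the second layer (e.g., the venue of claim 9) is drawn first so the first layer (e.g., the musician) appears in front.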
10. A method of providing images for a music based video game, comprising:
- associating a first object with a first display layer, the first object representative of a musician in the music based video game;
- associating a plurality of background objects with a second display layer, the plurality of background objects representative of a venue in the music based video game;
- iteratively rendering the first object, storing rendered information of the first object in a first memory, and displaying rendered information of the first object on a display;
- iteratively rendering the plurality of background objects, storing rendered information of the plurality of background objects in a second memory, and displaying rendered information of the plurality of background objects on the display;
- wherein displaying rendered information of the first object on the display utilizes the information stored in the first memory in first alternating time periods, and displaying rendered information of the plurality of background objects utilizes the information stored in the second memory in second alternating time periods, the first alternating time periods and the second alternating time periods occurring at different times.
11. The method of claim 10 wherein displaying rendered information of the first object on the display and displaying rendered information of the plurality of background objects on the display occur during both the first alternating time periods and the second alternating time periods.
12. The method of claim 11 wherein displaying rendered information of the first object on the display does not utilize the information stored in the first memory in the second alternating time periods.
13. The method of claim 12 wherein displaying rendered information of the plurality of background objects on the display does not utilize the information stored in the second memory in the first alternating time periods.
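The alternating scheme of claims 8-13 can be sketched roughly as follows. All identifiers are hypothetical, not drawn from the disclosure; the sketch assumes one layer is freshly rendered per frame while the other layer is displayed from its stored render, with both layers shown every frame (claim 11).

```python
# Hypothetical sketch of alternating per-layer rendering (claims 8-13).
# Layer contents and buffer names are illustrative only.

def render(objects):
    """Stand-in renderer: returns 'rendered' representations."""
    return [f"rendered:{obj}" for obj in objects]

first_layer = ["musician"]          # e.g., the first musician of claim 9
second_layer = ["stage", "crowd"]   # e.g., the venue of claim 9

first_memory = render(first_layer)   # initial renders of both layers,
second_memory = render(second_layer) # stored in separate memories

def frame(frame_index):
    """Produce one video frame, re-rendering only one layer per frame."""
    global first_memory, second_memory
    if frame_index % 2 == 0:
        # First alternating time period: re-render the first layer and
        # store it; reuse the second layer's stored render (claim 8).
        first_memory = render(first_layer)
    else:
        # Second alternating time period: the roles swap.
        second_memory = render(second_layer)
    # Both layers are displayed every frame (claim 11), background first.
    return second_memory + first_memory

display_output = frame(0)
```

Under this sketch each layer is only re-rendered every other frame, so its stored render is reused for two successive video frames, as stated in the abstract.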
14. A handheld game system, comprising:
- memory storing scene data, the scene data including first scene data and second scene data;
- a processor configured to render the first scene data and the second scene data;
- first video memory, coupled to the processor, configured to store rendered first scene data;
- second video memory, coupled to the processor, configured to store rendered second scene data; and
- a display coupled to the first video memory and the second video memory;
- the processor being further configured to alternately:
  - a) render the first scene data, command display on the display of the rendered first scene data, command display on the display of the rendered second scene data in the second video memory, and command storage of the rendered first scene data in the first video memory; and
  - b) render the second scene data, command display on the display of the rendered second scene data, command display on the display of the rendered first scene data in the first video memory, and command storage of the rendered second scene data in the second video memory.
15. The handheld game system of claim 14, wherein the memory additionally stores layer information, the layer information identifying which of the scene data is first scene data and which of the scene data is second scene data.
16. The handheld game system of claim 14, wherein the first scene data includes information of a representation of an individual and a musical instrument.
17. The handheld game system of claim 16, wherein the second scene data includes information of a representation of a musical venue.
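A rough model of the claim-14 system might look like the following. The class, method, and data names are hypothetical, not from the disclosure; the sketch assumes the processor alternates between steps a) and b) of claim 14, refreshing one video memory per tick while displaying both.

```python
# Hypothetical model of the handheld game system of claims 14-17.
# All identifiers are illustrative only.

class HandheldGameSystem:
    def __init__(self, first_scene_data, second_scene_data):
        # Memory storing scene data, split into first and second scene data.
        self.first_scene_data = first_scene_data
        self.second_scene_data = second_scene_data
        # First and second video memories hold each layer's rendered data.
        self.first_video_memory = self.render(first_scene_data)
        self.second_video_memory = self.render(second_scene_data)
        self.phase = 0  # alternates between step a) and step b)

    @staticmethod
    def render(scene_data):
        """Stand-in for the processor's rendering of scene data."""
        return [f"rendered:{item}" for item in scene_data]

    def tick(self):
        """One alternation of the processor's configured behavior."""
        if self.phase == 0:
            # Step a): render the first scene data, display it together
            # with the second video memory's contents, then store it.
            fresh = self.render(self.first_scene_data)
            frame = self.second_video_memory + fresh
            self.first_video_memory = fresh
        else:
            # Step b): the mirror image, refreshing the second scene data.
            fresh = self.render(self.second_scene_data)
            frame = fresh + self.first_video_memory
            self.second_video_memory = fresh
        self.phase ^= 1
        return frame

# Usage, echoing claims 16-17: an individual with an instrument in the
# first scene data, a musical venue in the second.
system = HandheldGameSystem(["guitarist"], ["venue"])
frame_a = system.tick()  # step a)
frame_b = system.tick()  # step b)
```

Either step yields a complete frame containing both layers; the steps differ only in which video memory is refreshed.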
18. A method of providing images for a video game, comprising:
- associating different objects with different layers;
- rendering objects associated with a first layer of the different layers;
- storing information of rendered objects associated with the first layer in a first memory;
- rendering objects associated with a second layer of the different layers;
- combining the information of rendered objects associated with the first layer in the first memory with information of rendered objects associated with the second layer; and
- displaying the combined information.
19. The method of claim 18, further comprising storing at least some of the information of rendered objects associated with the second layer in a second memory.
20. The method of claim 19, wherein combining the information of rendered objects associated with the first layer in the first memory with information of rendered objects associated with the second layer comprises storing at least some of the information of the rendered objects associated with the first layer in the second memory.
21. The method of claim 20 wherein displaying the combined information comprises displaying the combined information stored in the second memory.
22. The method of claim 18, further comprising storing the combined information in a second memory.
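The combining variant of claims 18-22 can be sketched as follows. Identifiers are hypothetical, not from the disclosure; the sketch assumes the combining of claim 20 is performed by copying the first layer's stored render into the second memory, which then holds the composite that is displayed (claim 21).

```python
# Hypothetical sketch of claims 18-22: compositing the stored first-layer
# render into the second memory before display. Names are illustrative.

def render(objects):
    """Stand-in renderer: maps each object to its 'rendered' data."""
    return {obj: f"pixels:{obj}" for obj in objects}

first_layer = ["player"]     # objects associated with a first layer
second_layer = ["backdrop"]  # objects associated with a second layer

# Render the first layer and store the result in a first memory (claim 18).
first_memory = render(first_layer)

# Render the second layer, storing at least some of it in a second
# memory (claim 19).
second_memory = render(second_layer)

# Combine by storing the first layer's rendered information in the second
# memory (claim 20), so the second memory holds the combined information.
second_memory.update(first_memory)

def show(memory):
    """Stand-in for displaying the combined information (claim 21)."""
    return sorted(memory)

displayed = show(second_memory)
```

In this reading the second memory doubles as the composition target, which avoids the separate combined-information memory of claim 22.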
Type: Application
Filed: Jun 10, 2008
Publication Date: Dec 10, 2009
Inventors: Gregory Keith Oberg (Albany, NY), Jesse Nathaniel Booth (Schoharie, NY)
Application Number: 12/136,563
International Classification: A63F 9/24 (20060101);