STORAGE MEDIUM HAVING STORED THEREON GAME PROGRAM, IMAGE PROCESSING APPARATUS, IMAGE PROCESSING SYSTEM, AND IMAGE PROCESSING METHOD
A face image is acquired during a predetermined game or before a start of the predetermined game, and a first character object is created, the first character object including the face image. Then, in the predetermined game, a game related to the first character object is advanced in accordance with an operation of a player. At least when a success in the game related to the first character object has been determined, the face image is saved in a storage area in an accumulating manner.
The disclosures of Japanese Patent Application No. 2010-232869, filed on Oct. 15, 2010, and Japanese Patent Application No. 2010-293443, filed on Dec. 28, 2010, are incorporated herein by reference.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a storage medium having stored thereon a game program, an image processing apparatus, an image processing system, and an image processing method.
2. Description of the Background Art
Conventionally, as disclosed in, for example, Japanese Laid-Open Patent Publication No. 2006-72669, a proposal is made for an apparatus that displays an image obtained by combining an image of the real world with an image of a virtual world. Further, as disclosed in, for example, Japanese Laid-Open Patent Publication No. 2010-142592, a proposal is also made for an information processing technique, such as a game using a user image, which is information obtained in the real world. In this technique, in the progression of the game, at a time when a user image included in image data (e.g., an image of a human face area) satisfies conditions defined in advance, the image data is captured.
An image of a face area (hereinafter a “face image”) is an image of the most characteristic part of a living thing, and therefore is very useful as information for reflecting the real world on a virtual world. The conventional techniques, however, do not make sufficient use of the feature of a face image in which it is possible to reflect a situation in the real world on a virtual world.
SUMMARY OF THE INVENTION
It is an object of the present invention to assist a user in the acquisition, the collection, and the like of face images, and also to make it possible to represent a virtual world on which the real world is reflected by the face images.
To achieve the above object, the present invention may employ, for example, the following configurations. It is understood that when the description of the scope of the appended claims is interpreted, the scope should be interpreted only by the description of the scope of the appended claims. If the description of the scope of the appended claims contradicts the description of these columns, the description of the scope of the appended claims has priority.
A configuration example of a computer-readable storage medium having stored thereon a game program according to the present invention is executed by a computer of a game apparatus that displays an image on a display device. The game program causes the computer to execute an image acquisition step, a step of creating a first character object, a first game processing step, a determination step, and a step of saving in a second storage area in an accumulating manner. The image acquisition step acquires a face image and temporarily stores the acquired face image in a first storage area, during a predetermined game or before a start of the predetermined game. The step of creating a first character object creates a first character object, the first character object being a character object including the face image stored in the first storage area. The first game processing step, in the predetermined game, advances a game related to the first character object in accordance with an operation of a player. The determination step determines a success in the game related to the first character object. The step of saving in a second storage area in an accumulating manner, at least when a success in the game has been determined in the determination step, saves the face image stored in the first storage area, in a second storage area in an accumulating manner.
Based on the above, until a success in the game is determined, the player cannot save in the second storage area the face image acquired during the game or before the start of the game and temporarily stored in the first storage area, and therefore enjoys a sense of tension. On the other hand, the face image may be saved in the second storage area when a success in the game has been determined, whereby, for example, it is possible to utilize the acquired face image even after the game has ended in the first game processing step. This causes the player to tackle the game of the first game processing step very enthusiastically and with great concentration.
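The save-on-success flow described above can be sketched as follows. This is a minimal illustrative model, not the patented implementation; the class and method names (`FaceGame`, `acquire_face`, and so on) are assumptions introduced only for this sketch.

```python
# Hypothetical sketch of the claimed flow: a face image is held only
# temporarily in a first storage area, and is moved into an accumulating
# second storage area only when the related game succeeds.

class FaceGame:
    def __init__(self):
        self.first_storage = None   # temporary storage for the acquired face image
        self.second_storage = []    # accumulating storage for saved face images

    def acquire_face(self, face_image):
        """Image acquisition step: temporarily store the face image."""
        self.first_storage = face_image

    def play(self, player_won):
        """First game processing + determination step (outcome passed in)."""
        if player_won and self.first_storage is not None:
            # Save in the second storage area in an accumulating manner.
            self.second_storage.append(self.first_storage)
        # The temporary image is discarded once the game ends either way.
        self.first_storage = None
        return player_won
```

A failed game thus discards the temporarily stored image, while a successful one accumulates it for later reuse (for example, by the second character object described below).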
In addition, in the image acquisition step, before the start of the predetermined game, the face image may be acquired and temporarily stored in the first storage area.
Based on the above, it is possible to execute the game of the first game processing step such that the face image acquired before the start of the predetermined game serves as a target to be saved in the second storage area.
In addition, the game program may further cause the computer to execute a step of creating a second character object. The step of creating a second character object creates a second character object, the second character object being a character object including a face image selected automatically or by the player from among the face images saved in the second storage area. In this case, in the first game processing step, in the predetermined game, a game related to the second character object may be additionally advanced in accordance with an operation of the player.
Based on the above, it is possible to utilize in a game the face image saved in the second storage area. Accordingly, it is possible to advance a game related to a character object including the face image acquired in the current predetermined game and related to a character object including a previously stored face image. This makes it possible to make various representations in the game.
In addition, the game program may further cause the computer to execute a step of creating a second character object and a second game processing step. The step of creating a second character object creates a second character object, the second character object being a character object including a face image selected automatically or by the player from among the face images saved in the second storage area. The second game processing step advances a game related to the second character object in accordance with an operation of the player.
Based on the above, the player can create the second character object including a face image selected from among the face images saved in the second storage area, and execute the game of the second game processing step. That is, the player can enjoy the game of the second game processing step by utilizing the face images stored by succeeding in the game of the first game processing step. In this case, the character object that appears in the game includes a face image selected automatically or by an operation of the player, and therefore, the player can simply introduce a mental picture of the real world into a virtual world.
In addition, the game apparatus may be capable of acquiring an image from a capturing device. In this case, in the image acquisition step, the face image may be acquired from the capturing device before the start of the predetermined game.
Based on the above, it is possible to cause a character object including a face image captured by the capturing device, to appear in a game, and save the face image captured by the capturing device, in an accumulating manner.
In addition, the game apparatus may be capable of acquiring an image from a first capturing device that captures a front direction of a display surface of the display device, and an image from a second capturing device that captures a direction of a back surface of the display surface of the display device, the first capturing device and the second capturing device serving as the capturing device. In this case, the image acquisition step may include: a step of acquiring a face image captured by the first capturing device in preference to acquiring a face image captured by the second capturing device; and a step of, after the face image from the first capturing device has been saved in the second storage area, permitting the face image captured by the second capturing device to be acquired.
Based on the above, a face image is preferentially acquired using the first capturing device, which captures the front direction of the display surface of the display device. This increases the possibility that a face image of the player of the game apparatus or the like who views the display surface of the display device is preferentially acquired. This also increases the possibility of restricting the acquisition of an image with the second capturing device, which captures the direction of the back surface of the display surface of the display device, in the state where the player of the game apparatus or the like is not specified.
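The camera-preference rule above amounts to a simple gate: the outer (second) camera becomes usable only after a face image from the inner (first) camera has been saved. The following sketch and its camera names are illustrative assumptions, not part of the claimed text.

```python
# Hypothetical gating of the two capturing devices: the inner camera, which
# faces the player, must contribute a saved face image before acquisition
# from the outer camera is permitted.

def select_allowed_cameras(inner_face_saved):
    """Return the cameras from which face acquisition is currently permitted."""
    if inner_face_saved:
        return ["inner_camera", "outer_camera"]
    # Until a face from the inner camera is saved, only it may be used.
    return ["inner_camera"]
```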
In addition, the game program may further cause the computer to execute a step of specifying attributes of the face images and a step of prompting the player to acquire a face image. The step of specifying attributes of the face images specifies attributes of the face images saved in the second storage area. The step of prompting the player to acquire a face image prompts the player to acquire a face image corresponding to an attribute different from the attributes specified from the face images saved in the second storage area.
Based on the above, it is possible to reduce the imbalance among the attributes of the face images saved in an accumulating manner, and thereby assist the player who wishes to save face images having various attributes in an accumulating manner.
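The attribute-balancing prompt can be sketched as follows. The attribute set and function names here are assumptions for illustration only; the patent does not specify which attributes are used.

```python
# Hypothetical sketch: specify the attributes of the saved face images and
# prompt the player for an attribute that is missing or under-represented.

ALL_ATTRIBUTES = ["male", "female", "child", "adult", "senior"]

def attribute_to_prompt(saved_attributes):
    """Return the least-represented attribute (a missing one counts as 0)."""
    counts = {a: saved_attributes.count(a) for a in ALL_ATTRIBUTES}
    return min(ALL_ATTRIBUTES, key=lambda a: counts[a])
```

For example, if only male adult faces have been saved so far, the player would be prompted to acquire a face with a different attribute.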
In addition, the first game processing step may include a step of advancing the game related to the first character object by attacking the character objects in accordance with an operation of the player. In this case, in the first game processing step, an attack on the first character object may be a valid attack for succeeding in the game related to the first character object, and an attack on the second character object may be an invalid attack for succeeding in the game related to the first character object.
Based on the above, the player needs to control an attack on the second character object in the game of the first game processing step, and therefore selects and attacks the first character object. Thus, the player needs to correctly recognize the first character object, and requires concentration.
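The valid/invalid attack distinction reduces to a target check when resolving each attack. This is a minimal sketch under assumed names; a penalty for invalid attacks could equally be added.

```python
# Hypothetical attack resolution: only a hit on the first character object
# is a valid attack that advances progress toward success; a hit on the
# second character object is invalid and does not count.

def apply_attack(target, progress):
    """Return the updated progress after an attack on the given target."""
    if target == "first_character":
        return progress + 1          # valid attack: counts toward success
    return progress                  # invalid attack: no effect on success
```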
In addition, the game program may further cause the computer to execute a step of creating a third character object. The step of creating a third character object creates a third character object, the third character object being a character object including a face image different from the face image included in the second character object. In this case, the second game processing step may include a step of advancing the game related to the second character object by attacking the character objects in accordance with an operation of the player. In the second game processing step, an attack on the second character object may be a valid attack for succeeding in the game related to the second character object, and an attack on the third character object may be an invalid attack for succeeding in the game related to the second character object.
Based on the above, the player needs to control an attack on the third character object in the game of the second game processing step, and therefore, selects and attacks the second character object. Thus, the player needs to correctly recognize the second character object, and requires concentration.
In addition, the game program may further cause the computer to execute a step of creating a third character object and a step of creating a fourth character object. The step of creating a third character object creates a third character object, the third character object being a character object including the face image stored in the first storage area and being smaller in dimensions than the first character object. The step of creating a fourth character object creates a fourth character object, the fourth character object being a character object including a face image different from the face image stored in the first storage area and being smaller in dimensions than the first character object. In this case, the first game processing step may include: a step of advancing the game related to the first character object by attacking the character objects in accordance with an operation of the player; a step of, when the fourth character object has been attacked, advancing deformation of the face image included in the first character object; and a step of, when the third character object has been attacked, reversing the deformation such that the face image included in the first character object approaches the original face image stored in the first storage area.
Based on the above, the player needs to correctly recognize the first character object, the third character object, and the fourth character object, and requires concentration. Particularly when the acquired face image is a face image of a person in an intimate relationship with the player, the player can tackle the game of the first game processing step increasingly enthusiastically.
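The deformation mechanic above can be modeled as a level counter that advances when the wrong small character is attacked and reverses toward the original face when the right one is. The level cap and character tags below are assumed parameters, not values from the patent.

```python
# Hypothetical sketch of the deformation mechanic: attacking the fourth
# character object (different face) deforms the face image further; attacking
# the third character object (same face) restores it toward the original.

MAX_DEFORMATION = 5

def update_deformation(level, attacked):
    """Return the new deformation level after an attack on a small character."""
    if attacked == "fourth_character":       # different face: deform further
        return min(level + 1, MAX_DEFORMATION)
    if attacked == "third_character":        # same face: reverse deformation
        return max(level - 1, 0)
    return level
```

At level 0 the face image matches the original stored in the first storage area; higher levels would select progressively more deformed renderings.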
In addition, the game program may further cause the computer to execute a step of creating a third character object and a step of creating a fourth character object. The step of creating a third character object creates a third character object, the third character object being a character object including the same face image as the face image included in the second character object and being smaller in dimensions than the second character object. The step of creating a fourth character object creates a fourth character object, the fourth character object being a character object including a face image different from the face image included in the second character object and being smaller in dimensions than the second character object. In this case, the second game processing step may include: a step of advancing the game related to the second character object by attacking the character objects in accordance with an operation of the player; a step of, when the fourth character object has been attacked, advancing deformation of the face image included in the second character object; and a step of, when the third character object has been attacked, reversing the deformation such that the face image included in the second character object approaches the original face image saved in the second storage area.
Based on the above, the player needs to correctly recognize the second character object, the third character object, and the fourth character object, and requires concentration.
In addition, in the step of creating the first character object, a character object including a face image obtained by deforming the face image stored in the first storage area may be created as the first character object. In this case, the first game processing step may include a step of, when the game related to the first character object has been successful, restoring the deformed face image to the original face image stored in the first storage area.
Based on the above, for example, when the acquired face image is a face image of a person in an intimate relationship with the player, the player can tackle the game of the first game processing step increasingly enthusiastically in order to restore the deformed face image to the original face image.
In addition, in the step of creating the second character object, a character object including a face image obtained by deforming the face image saved in the second storage area may be created as the second character object. In this case, the second game processing step may include a step of, when the game related to the second character object has been successful, restoring the deformed face image to the original face image saved in the second storage area.
Based on the above, for example, when the acquired face image is a face image of a person in an intimate relationship with the player, the player can tackle the game of the second game processing step increasingly enthusiastically in order to restore the deformed face image to the original face image.
In addition, in the image acquisition step, the face image may be acquired and temporarily stored in the first storage area during the predetermined game. In this case, in the first game processing step, in accordance with the creation of the first character object based on the acquisition of the face image during the predetermined game, the first character object may be caused to appear in the predetermined game, and the game related to the first character object may be advanced.
Based on the above, it is possible to execute the game of the first game processing step such that the face image acquired during the predetermined game serves as a target to be saved in the second storage area.
In addition, the game program may further cause the computer to execute a captured image acquisition step, a display image generation step, and a display control step. The captured image acquisition step acquires a captured image captured by a real camera. The display image generation step generates a display image in which a virtual character object that appears in the predetermined game is placed so as to have, as a background, the captured image acquired in the captured image acquisition step. The display control step displays on the display device the display image generated in the display image generation step. In this case, in the image acquisition step, during the predetermined game, at least one face image may be extracted from the captured image displayed on the display device, and may be temporarily stored in the first storage area.
Based on the above, a face image included in a captured image of the real world displayed as a background appears as a character object. This makes it possible to save the face image in an accumulating manner by a success in a game related to the character object.
In addition, in the display image generation step, the display image may be generated by placing the first character object such that, when displayed on the display device, the first character object overlaps a position of the face image in the captured image, the face image extracted in the image acquisition step.
Based on the above, it is possible to represent the first character object as if appearing from the captured image, and display an image as if the first character object is present in a real space captured by the real camera.
In addition, in the captured image acquisition step, captured images of a real world captured in real time by the real camera may be repeatedly acquired. In the display image generation step, the captured images repeatedly acquired in the captured image acquisition step may be sequentially set as the background. In the image acquisition step, face images corresponding to the already extracted face image may be repeatedly acquired from the captured images sequentially set as the background. In the step of creating the first character object, the first character object may be repeatedly created so as to include the face images repeatedly acquired in the image acquisition step. In the display image generation step, the display image may be generated by placing the repeatedly created first character object such that, when displayed on the display device, the repeatedly created first character object overlaps positions of the face images in the respective captured images, the face images repeatedly acquired in the image acquisition step.
Based on the above, even when the capturing position and the capturing direction of the real camera have changed, it is possible to place the first character object in accordance with the changes, and even when the position and the expression of a person who is a subject from which a face image has been acquired have changed, it is also possible to reflect the changes on the first character object. This makes it possible to display the first character object as if present in a real space represented by a captured image captured in real time.
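The repeated acquisition loop above re-extracts the already-known face from each newly captured frame and rebuilds the character object at its current position. In the sketch below, `detect_face` is a stand-in for a real face tracker, and frames are modeled as simple mappings; all names are assumptions for illustration.

```python
# Per-frame sketch of the repeated acquisition: for each background frame
# captured in real time, the region corresponding to the already-extracted
# face image is re-acquired and the first character object is re-created so
# as to overlap the face's position in that frame.

def detect_face(frame, reference_face):
    """Hypothetical tracker: return the face's position in the frame, or None."""
    return frame.get(reference_face)  # here a frame maps face id -> position

def track_character(frames, reference_face):
    """Rebuild the character over the face's position in each frame."""
    placements = []
    for frame in frames:
        position = detect_face(frame, reference_face)
        if position is not None:
            # Re-create the first character object overlapping the face.
            placements.append({"face": reference_face, "pos": position})
    return placements
```

Because the character is re-placed every frame, changes in the camera's position and in the subject's position or expression are reflected automatically, as the passage describes.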
In addition, the game apparatus may be capable of using image data stored in storage means for storing data in a non-temporary manner. In this case, in the image acquisition step, before the start of the predetermined game, at least one face image may be extracted from the image data stored in the storage means, and may be temporarily stored in the first storage area.
Based on the above, a face image is acquired from image data stored in advance in the game apparatus. This makes it possible that a face image acquired in advance by another application or the like (e.g., a face image included in an image photographed by a camera capturing application, or included in an image received from another device by a communication application) serves as an acquisition target.
In addition, other configuration examples of the present invention may be carried out in the form of an image processing apparatus and an image processing system that include means for executing the above steps, and may be carried out in the form of an image processing method including operations performed in the above steps.
Based on a configuration example of the present invention, it is possible to assist the user in the acquisition, the collection, and the like of face images, and also to represent a virtual world on which the real world is reflected by the face images.
These and other objects, features, aspects and advantages of the present invention will become more apparent from the following detailed description of the present invention when taken in conjunction with the accompanying drawings.
A description is given of a specific example of an image processing apparatus that executes an image processing program according to an embodiment of the present invention. The following embodiment, however, is merely illustrative, and the present invention is not limited to the configuration of the following embodiment.
It should be noted that in the following embodiment, data processed by a computer is illustrated using graphs and natural language. More specifically, however, the data is specified by computer-recognizable pseudo-language, commands, parameters, machine language, arrays, and the like. The present invention does not limit the method of representing the data.
<Configuration Example of Hardware>
First, with reference to the drawings, a description is given of a hand-held game apparatus 10 as an example of the image processing apparatus that executes the image processing program according to the present embodiment. The image processing apparatus according to the present invention, however, is not limited to a game apparatus. The image processing apparatus according to the present invention may be a given computer system, such as a general-purpose computer.
It should be noted that the image processing program according to the present embodiment is a game program. The image processing program according to the present invention, however, is not limited to a game program. The image processing program according to the present invention can be applied by being executed by a given computer system. Further, the processes of the present embodiment may be subjected to distributed processing by a plurality of networked devices, or may be performed by a network system where, after main processes are performed by a server, the process results are distributed to terminals, or may be performed by a so-called cloud network.
The game apparatus 10 shown in
Projections 11A are provided at the upper long side portion of the lower housing 11, each projection 11A projecting perpendicularly (in a z-direction in
The inner surface 11B of the lower housing 11 shown in
The lower LCD 12 is accommodated in the lower housing 11. The lower LCD 12 has a wider-than-high rectangular planar shape, and is placed such that its long side direction coincides with the longitudinal direction of the lower housing 11 (the x-direction in
The touch panel 13 is one of the input devices of the game apparatus 10. The touch panel 13 is mounted so as to cover the screen of the lower LCD 12. In the first embodiment, the touch panel 13 may be, but is not limited to, a resistive touch panel. The touch panel may instead be of any press-operated type, such as an electrostatic capacitance type. In the first embodiment, the touch panel 13 has the same resolution (detection accuracy) as that of the lower LCD 12. The resolutions of the touch panel 13 and the lower LCD 12, however, may not necessarily need to be the same.
The operation buttons 14A through 14L are each an input device for providing a predetermined input. Among the operation buttons 14A through 14L, the cross button 14A (direction input button 14A), the button 14B, the button 14C, the button 14D, the button 14E, the power button 14F, the select button 14J, the home button 14K, and the start button 14L are provided on the inner surface (main surface) of the lower housing 11.
The cross button 14A is cross-shaped, and includes buttons for indicating at least up, down, left, and right directions, respectively. The cross button 14A is provided in a lower area of the area to the left of the lower LCD 12. The cross button 14A is placed so as to be operated by the thumb of a left hand holding the lower housing 11.
The button 14B, the button 14C, the button 14D, and the button 14E are placed in a cross formation in an upper portion of the area to the right of the lower LCD 12. The button 14B, the button 14C, the button 14D, and the button 14E are placed where the thumb of a right hand holding the lower housing 11 is naturally placed. The power button 14F is placed in a lower portion of the area to the right of the lower LCD 12.
The select button 14J, the home button 14K, and the start button 14L are provided in a lower area of the lower LCD 12.
The buttons 14A through 14E, the select button 14J, the home button 14K, and the start button 14L are appropriately assigned functions, respectively, in accordance with the program executed by the game apparatus 10. The cross button 14A is used for, for example, a selection operation and a moving operation of a character during a game. The operation buttons 14B through 14E are used for, for example, a determination operation or a cancellation operation. The power button 14F is used to power on/off the game apparatus 10.
The analog stick 15 is a device for indicating a direction. The analog stick 15 is provided to an upper portion of the area to the left of the lower LCD 12 of the inner surface (main surface) of the lower housing 11. That is, the analog stick 15 is provided above the cross button 14A. The analog stick 15 is placed so as to be operated by the thumb of a left hand holding the lower housing 11. The provision of the analog stick 15 in the upper area places the analog stick 15 at the position where the thumb of the left hand of the user holding the lower housing 11 is naturally placed. The cross button 14A is placed at the position where the thumb of the left hand holding the lower housing 11 is moved slightly downward. This enables the user to operate the analog stick 15 and the cross button 14A by moving up and down the thumb of the left hand holding the lower housing 11. The key top of the analog stick 15 is configured to slide parallel to the inner surface of the lower housing 11. The analog stick 15 functions in accordance with the program executed by the game apparatus 10. When, for example, the game apparatus 10 executes a game where a predetermined object appears in a three-dimensional virtual space, the analog stick 15 functions as an input device for moving the predetermined object in the three-dimensional virtual space. In this case, the predetermined object is moved in the direction in which the key top of the analog stick 15 has slid. It should be noted that the analog stick 15 may be a component capable of providing an analog input by being tilted by a predetermined amount in any one of up, down, right, left, and diagonal directions.
It should be noted that the four buttons, namely the button 14B, the button 14C, the button 14D, and the button 14E, and the analog stick 15 are placed symmetrically to each other with respect to the lower LCD 12. This also enables, for example, a left-handed person to provide a direction indication input using these four buttons, namely the button 14B, the button 14C, the button 14D, and the button 14E, depending on the game program.
The first LED 16A (
The microphone hole 18 is a hole for a microphone built into the game apparatus 10 as a sound input device. The built-in microphone detects a sound from outside the game apparatus 10 through the microphone hole 18. The microphone and the microphone hole 18 are provided below the power button 14F on the inner surface (main surface) of the lower housing 11.
The upper side surface of the lower housing 11 includes an opening 17 (a dashed line shown in
The upper side surface of the lower housing 11 includes an insertion slot 11D (a dashed line shown in
The inner surface 21B of the upper housing 21 shown in
The upper LCD 22 is a display device capable of displaying a stereoscopically visible image. The upper LCD 22 is capable of displaying a left-eye image and a right-eye image, using substantially the same display area. Specifically, the upper LCD 22 is a display device using a method in which the left-eye image and the right-eye image are displayed alternately in the horizontal direction in predetermined units (e.g., in every other line). It should be noted that the upper LCD 22 may be a display device using a method in which the left-eye image and the right-eye image are displayed alternately for a predetermined time. Further, the upper LCD 22 is a display device capable of displaying an image stereoscopically visible with the naked eye. In this case, a lenticular type display device or a parallax barrier type display device is used so that the left-eye image and the right-eye image that are displayed alternately in the horizontal direction can be viewed separately with the left eye and the right eye, respectively. In the first embodiment, the upper LCD 22 is a parallax-barrier-type display device. The upper LCD 22 displays an image stereoscopically visible with the naked eye (a stereoscopic image), using the right-eye image and the left-eye image. That is, the upper LCD 22 allows the user to view the left-eye image with their left eye, and the right-eye image with their right eye, using the parallax barrier. This makes it possible to display a stereoscopic image giving the user a stereoscopic effect (stereoscopically visible image). Furthermore, the upper LCD 22 is capable of disabling the parallax barrier. When disabling the parallax barrier, the upper LCD 22 is capable of displaying an image in a planar manner (the upper LCD 22 is capable of displaying a planar view image, as opposed to the stereoscopically visible image described above. This is a display mode in which the same displayed image can be viewed with both the left and right eyes). 
Thus, the upper LCD 22 is a display device capable of switching between: the stereoscopic display mode for displaying a stereoscopically visible image; and the planar display mode for displaying an image in a planar manner (displaying a planar view image). The switching of the display modes is performed by the 3D adjustment switch 25 described later.
The upper LCD 22 is accommodated in the upper housing 21. A planar shape of the upper LCD 22 is a wider-than-high rectangle, and is placed at the center of the upper housing 21 such that the long side direction of the upper LCD 22 coincides with the long side direction of the upper housing 21. As an example, the area of the screen of the upper LCD 22 is set greater than that of the lower LCD 12. Specifically, the screen of the upper LCD 22 is set horizontally longer than the screen of the lower LCD 12. That is, the proportion of the width in the aspect ratio of the screen of the upper LCD 22 is set greater than that of the lower LCD 12. The screen of the upper LCD 22 is provided on the inner surface (main surface) 21B of the upper housing 21, and is exposed through an opening of the inner surface of the upper housing 21. Further, the inner surface of the upper housing 21 is covered by a transparent screen cover 27. The screen cover 27 protects the screen of the upper LCD 22, and integrates the upper LCD 22 and the inner surface of the upper housing 21, and thereby provides unity. As an example, the number of pixels of the upper LCD 22 is 800 dots×240 dots (horizontal×vertical). It should be noted that an LCD is used as the upper LCD 22 in the first embodiment. The upper LCD 22, however, is not limited to this, and a display device using EL or the like may be used. Furthermore, a display device having any resolution may be used as the upper LCD 22.
The loudspeaker holes 21E are holes through which sounds from the loudspeakers 44, which serve as a sound output device of the game apparatus 10 and are described later, are output. The loudspeaker holes 21E are placed symmetrically with respect to the upper LCD 22.
The inner capturing section 24 functions as a capturing section having an imaging direction that is the same as the inward normal direction of the inner surface 21B of the upper housing 21. The inner capturing section 24 includes an imaging device having a predetermined resolution, and a lens. The lens may have a zoom mechanism.
The inner capturing section 24 is placed: on the inner surface 21B of the upper housing 21; above the upper edge of the screen of the upper LCD 22; and in the center of the upper housing 21 in the left-right direction (on the line dividing the upper housing 21 (the screen of the upper LCD 22) into two equal left and right portions). Such a placement of the inner capturing section 24 makes it possible that when the user views the upper LCD 22 from the front thereof, the inner capturing section 24 captures the user's face from the front thereof. A left outer capturing section 23a and a right outer capturing section 23b will be described later.
The 3D adjustment switch 25 is a slide switch, and is used to switch the display modes of the upper LCD 22 as described above. The 3D adjustment switch 25 is also used to adjust the stereoscopic effect of a stereoscopically visible image (stereoscopic image) displayed on the upper LCD 22. The 3D adjustment switch 25 is provided at an end portion shared by the inner surface and the right side surface of the upper housing 21, so as to be visible to the user, regardless of the open/closed state of the game apparatus 10. The 3D adjustment switch 25 includes a slider that is slidable to any position in a predetermined direction (e.g., the up-down direction), and the display mode of the upper LCD 22 is set in accordance with the position of the slider.
When, for example, the slider of the 3D adjustment switch 25 is placed at the lowermost position, the upper LCD 22 is set to the planar display mode, and a planar image is displayed on the screen of the upper LCD 22. It should be noted that the same image may be used as the left-eye image and the right-eye image, while the upper LCD 22 remains set to the stereoscopic display mode, and thereby performs planar display. On the other hand, when the slider is placed above the lowermost position, the upper LCD 22 is set to the stereoscopic display mode. In this case, a stereoscopically visible image is displayed on the screen of the upper LCD 22. When the slider is placed above the lowermost position, the visibility of the stereoscopic image is adjusted in accordance with the position of the slider. Specifically, the amount of deviation in the horizontal direction between the position of the right-eye image and the position of the left-eye image is adjusted in accordance with the position of the slider.
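The slider behavior described above can be sketched as a simple mapping from slider position to display mode and horizontal deviation. This is an illustrative sketch only; the function name, the position scale (0.0 at the lowermost position to 1.0 at the top), and the maximum deviation value are assumptions, not taken from the specification.

```python
LOWERMOST = 0.0  # assumed position value for the bottom of the slider's travel

def slider_to_display_mode(position):
    """Return ('planar', 0) at the lowermost position; otherwise return
    ('stereoscopic', deviation), where the horizontal deviation between the
    right-eye image and the left-eye image grows with the slider position."""
    if position <= LOWERMOST:
        return ("planar", 0)
    max_deviation_px = 10  # assumed maximum horizontal deviation in pixels
    return ("stereoscopic", round(position * max_deviation_px))

# At the lowermost position the planar display mode is set:
# slider_to_display_mode(0.0)  ->  ("planar", 0)
# Above it, the deviation is adjusted in accordance with the position:
# slider_to_display_mode(0.5)  ->  ("stereoscopic", 5)
```

In this sketch, the deviation value would then be applied when positioning the right-eye and left-eye images for display.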
The 3D indicator 26 indicates whether or not the upper LCD 22 is in the stereoscopic display mode. For example, the 3D indicator 26 is an LED, and is lit on when the stereoscopic display mode of the upper LCD 22 is enabled. The 3D indicator 26 is placed on the inner surface 21B of the upper housing 21 near the screen of the upper LCD 22. Accordingly, when the user views the screen of the upper LCD 22 from the front thereof, the user can easily view the 3D indicator 26. This enables the user to easily recognize the display mode of the upper LCD 22 even when viewing the screen of the upper LCD 22.
Within the cover section 11C, a connector (not shown) is provided for electrically connecting the game apparatus 10 and a data storage external memory 46 (see
The left outer capturing section 23a and the right outer capturing section 23b each include an imaging device (e.g., a CCD image sensor or a CMOS image sensor) having a predetermined common resolution, and a lens. The lens may have a zoom mechanism. The imaging directions of the left outer capturing section 23a and the right outer capturing section 23b (the optical axes of the cameras) are each the same as the outward normal direction of the outer surface 21D. That is, the imaging direction of the left outer capturing section 23a and the imaging direction of the right outer capturing section 23b are parallel to each other. Hereinafter, the left outer capturing section 23a and the right outer capturing section 23b are collectively referred to as an “outer capturing section 23”. The outer capturing section 23 is an example of a second capturing device.
The left outer capturing section 23a and the right outer capturing section 23b included in the outer capturing section 23 are placed along the horizontal direction of the screen of the upper LCD 22. That is, the left outer capturing section 23a and the right outer capturing section 23b are placed such that a straight line connecting between the left outer capturing section 23a and the right outer capturing section 23b is placed along the horizontal direction of the screen of the upper LCD 22. When the user has pivoted the upper housing 21 at a predetermined angle (e.g., 90°) relative to the lower housing 11, and views the screen of the upper LCD 22 from the front thereof, the left outer capturing section 23a is placed on the left side of the user viewing the screen, and the right outer capturing section 23b is placed on the right side of the user (see
The left outer capturing section 23a and the right outer capturing section 23b are placed symmetrically with respect to the line dividing the upper LCD 22 (the upper housing 21) into two equal left and right portions. Further, the left outer capturing section 23a and the right outer capturing section 23b are placed in the upper portion of the upper housing 21 and in the back of the portion above the upper edge of the screen of the upper LCD 22, in the state where the upper housing 21 is in the open state (see
Thus, the left outer capturing section 23a and the right outer capturing section 23b of the outer capturing section 23 are placed symmetrically with respect to the center line of the upper LCD 22 extending in the transverse direction. This makes it possible that when the user views the upper LCD 22 from the front thereof, the imaging directions of the outer capturing section 23 coincide with the directions of the respective lines of sight of the user's right and left eyes. Further, the outer capturing section 23 is placed in the back of the portion above the upper edge of the screen of the upper LCD 22, and therefore, the outer capturing section 23 and the upper LCD 22 do not interfere with each other inside the upper housing 21. Further, when the inner capturing section 24 provided on the inner surface of the upper housing 21 as shown by a dashed line in
The left outer capturing section 23a and the right outer capturing section 23b can be used as a stereo camera, depending on the program executed by the game apparatus 10. Alternatively, either one of the two outer capturing sections (the left outer capturing section 23a and the right outer capturing section 23b) may be used solely, so that the outer capturing section 23 can also be used as a non-stereo camera, depending on the program. When a program is executed for causing the left outer capturing section 23a and the right outer capturing section 23b to function as a stereo camera, the left outer capturing section 23a captures a left-eye image, which is to be viewed with the user's left eye, and the right outer capturing section 23b captures a right-eye image, which is to be viewed with the user's right eye. Yet alternatively, depending on the program, images captured by the two outer capturing sections (the left outer capturing section 23a and the right outer capturing section 23b) may be combined together, or may be used to compensate for each other, so that imaging can be performed with an extended imaging range. Yet alternatively, a left-eye image and a right-eye image that have a parallax may be generated from a single image captured using one of the outer capturing sections 23a and 23b, and a pseudo-stereo image as if captured by two cameras can be generated. To generate the pseudo-stereo image, it is possible to appropriately set the distance between virtual cameras.
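The pseudo-stereo generation mentioned above can be sketched as producing a left-eye/right-eye pair from a single captured image by introducing a horizontal offset between the two views. This is a minimal sketch under assumed conventions: the image is represented as a list of pixel rows, the offset (standing in for the distance between virtual cameras) is a pixel count, and rows are wrapped at the edges purely to keep the example self-contained.

```python
def pseudo_stereo(image, disparity):
    """Generate a (left_eye, right_eye) pair from a single image by shifting
    the left-eye view `disparity` pixels horizontally (wrapping at the edge).
    A larger disparity corresponds to a wider virtual-camera distance."""
    left = [row[disparity:] + row[:disparity] for row in image]
    right = [list(row) for row in image]  # right eye keeps the original view
    return left, right

img = [[0, 1, 2, 3], [4, 5, 6, 7]]
left, right = pseudo_stereo(img, 1)
# left  -> [[1, 2, 3, 0], [5, 6, 7, 4]]
# right -> [[0, 1, 2, 3], [4, 5, 6, 7]]
```

A real implementation would shift content by depth rather than uniformly, but the uniform shift shows how a parallax can be synthesized from one camera.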
The third LED 29 is lit on when the outer capturing section 23 is operating, and informs that the outer capturing section 23 is operating. The third LED 29 is provided near the outer capturing section 23 on the outer surface of the upper housing 21.
The L button 14G and the R button 14H are provided on the upper side surface of the lower housing 11 shown in
It should be noted that although not shown in the figures, a rechargeable battery that serves as the power supply of the game apparatus 10 is accommodated in the lower housing 11, and the battery can be charged through a terminal provided on the side surface (e.g., the upper side surface) of the lower housing 11.
In the example shown in
The information processing section 31 is information processing means including a central processing unit (CPU) 311 that executes a predetermined program, a graphics processing unit (GPU) 312 that performs image processing, and the like. In the first embodiment, a predetermined program is stored in a memory (e.g., the external memory 45 connected to the external memory I/F 33, or the data storage internal memory 35) included in the game apparatus 10. The CPU 311 of the information processing section 31 executes the predetermined program, and thereby performs the image processing described later or game processing. It should be noted that the program executed by the CPU 311 of the information processing section 31 may be acquired from another device by communication with said another device. The information processing section 31 further includes a video RAM (VRAM) 313. The GPU 312 of the information processing section 31 generates an image in accordance with an instruction from the CPU 311 of the information processing section 31, and draws the image in the VRAM 313. The GPU 312 of the information processing section 31 outputs the image drawn in the VRAM 313 to the upper LCD 22 and/or the lower LCD 12, and the image is displayed on the upper LCD 22 and/or the lower LCD 12.
To the information processing section 31, the main memory 32, the external memory I/F 33, the data storage external memory I/F 34, and the data storage internal memory 35 are connected. The external memory I/F 33 is an interface for establishing a detachable connection with the external memory 45. The data storage external memory I/F 34 is an interface for establishing a detachable connection with the data storage external memory 46.
The main memory 32 is volatile storage means used as a work area or a buffer area of the information processing section 31 (the CPU 311). That is, the main memory 32 temporarily stores various types of data used for image processing or game processing, and also temporarily stores a program acquired from outside (the external memory 45, another device, or the like) the game apparatus 10. In the first embodiment, the main memory 32 is, for example, a pseudo SRAM (PSRAM).
The external memory 45 is nonvolatile storage means for storing the program executed by the information processing section 31. The external memory 45 is composed of, for example, a read-only semiconductor memory. When the external memory 45 is connected to the external memory I/F 33, the information processing section 31 can load a program stored in the external memory 45. In accordance with the execution of the program loaded by the information processing section 31, a predetermined process is performed. The data storage external memory 46 is composed of a readable/writable non-volatile memory (e.g., a NAND flash memory), and is used to store predetermined data. For example, the data storage external memory 46 stores images captured by the outer capturing section 23 and/or images captured by another device. When the data storage external memory 46 is connected to the data storage external memory I/F 34, the information processing section 31 loads an image stored in the data storage external memory 46, and the image can be displayed on the upper LCD 22 and/or the lower LCD 12.
The data storage internal memory 35 is composed of a readable/writable non-volatile memory (e.g., a NAND flash memory), and is used to store predetermined data. For example, the data storage internal memory 35 stores data and/or programs downloaded by wireless communication through the wireless communication module 36.
The wireless communication module 36 has the function of establishing connection with a wireless LAN by, for example, a method based on the IEEE 802.11b/g standard. Further, the local communication module 37 has the function of wirelessly communicating with another game apparatus of the same type by a predetermined communication method (e.g., infrared communication). The wireless communication module 36 and the local communication module 37 are connected to the information processing section 31. The information processing section 31 is capable of transmitting and receiving data to and from another device via the Internet, using the wireless communication module 36, and is capable of transmitting and receiving data to and from another game apparatus of the same type, using the local communication module 37.
The acceleration sensor 39 is connected to the information processing section 31. The acceleration sensor 39 detects the magnitudes of accelerations (linear accelerations) in the directions of straight lines along three axial (x, y, and z axes in the present embodiment) directions, respectively. The acceleration sensor 39 is provided, for example, within the lower housing 11. As shown in
The angular velocity sensor 40 is connected to the information processing section 31. The angular velocity sensor 40 detects angular velocities generated about three axes (x, y, and z axes in the present embodiment) of the game apparatus 10, respectively, and outputs data indicating the detected angular velocities (angular velocity data) to the information processing section 31. The angular velocity sensor 40 is provided, for example, within the lower housing 11. The information processing section 31 receives the angular velocity data output from the angular velocity sensor 40, and calculates the orientation and the motion of the game apparatus 10.
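The calculation described above, deriving orientation from the angular velocity data, can be sketched as integrating the detected rates over time. The function name, the axis ordering, and the use of degrees per second with a fixed sampling interval are assumptions for illustration.

```python
def integrate_orientation(samples, dt):
    """Accumulate angular velocity samples (x, y, z rates in deg/s) taken at
    interval dt seconds into an orientation estimate in degrees per axis."""
    angle = [0.0, 0.0, 0.0]
    for wx, wy, wz in samples:
        angle[0] += wx * dt  # rotation about the x axis
        angle[1] += wy * dt  # rotation about the y axis
        angle[2] += wz * dt  # rotation about the z axis
    return tuple(angle)

# Four samples of 90 deg/s about x, 0.25 s apart, integrate to a 90° rotation:
# integrate_orientation([(90, 0, 0)] * 4, 0.25)  ->  (90.0, 0.0, 0.0)
```

An actual orientation filter would also combine the acceleration sensor 39's gravity reading to correct drift; the bare integration above only shows the role the angular velocity data plays.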
The RTC 38 and the power circuit 41 are connected to the information processing section 31. The RTC 38 counts time, and outputs the counted time to the information processing section 31. The information processing section 31 calculates the current time (date) based on the time counted by the RTC 38. The power circuit 41 controls the power from the power supply (the rechargeable battery accommodated in the lower housing 11, which is described above) of the game apparatus 10, and supplies power to each component of the game apparatus 10.
The I/F circuit 42 is connected to the information processing section 31. A microphone 43, a loudspeaker 44, and the touch panel 13 are connected to the I/F circuit 42. Specifically, the loudspeaker 44 is connected to the I/F circuit 42 through an amplifier not shown in the figures. The microphone 43 detects a sound from the user, and outputs a sound signal to the I/F circuit 42. The amplifier amplifies the sound signal from the I/F circuit 42, and outputs the sound from the loudspeaker 44. The I/F circuit 42 includes: a sound control circuit that controls the microphone 43 and the loudspeaker 44 (amplifier); and a touch panel control circuit that controls the touch panel 13. For example, the sound control circuit performs A/D conversion and D/A conversion on the sound signal, and converts the sound signal to sound data in a predetermined format. The touch panel control circuit generates touch position data in a predetermined format, based on a signal from the touch panel 13, and outputs the touch position data to the information processing section 31. The touch position data indicates the coordinates of the position (touch position), on the input surface of the touch panel 13, at which an input has been provided. It should be noted that the touch panel control circuit reads a signal from the touch panel 13, and generates the touch position data, once in a predetermined time. The information processing section 31 acquires the touch position data, and thereby recognizes the touch position, at which the input has been provided on the touch panel 13.
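The touch panel control circuit's role described above can be sketched as converting a raw panel signal, read once per predetermined interval, into touch position data in a predetermined format. The format chosen here (a dictionary of integer coordinates, with `None` for no touch) is an assumption; the specification does not define the format.

```python
def to_touch_position_data(raw_signal):
    """Convert a raw touch panel signal into touch position data.
    `raw_signal` is assumed to be an (x, y) pair of raw readings, or None
    when no input has been provided on the input surface."""
    if raw_signal is None:
        return None  # no touch this sampling period
    x, y = raw_signal
    return {"x": int(x), "y": int(y)}

# to_touch_position_data((12.7, 30.2))  ->  {"x": 12, "y": 30}
# to_touch_position_data(None)          ->  None
```

The information processing section would then read this data each period to recognize the touch position.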
An operation button 14 includes the operation buttons 14A through 14L described above, and is connected to the information processing section 31. Operation data is output from the operation button 14 to the information processing section 31, the operation data indicating the states of inputs provided to the respective operation buttons 14A through 14L (indicating whether or not the operation buttons 14A through 14L have been pressed). The information processing section 31 acquires the operation data from the operation button 14, and thereby performs processes in accordance with the inputs provided to the operation button 14.
The lower LCD 12 and the upper LCD 22 are connected to the information processing section 31. The lower LCD 12 and the upper LCD 22 each display an image in accordance with an instruction from the information processing section 31 (the GPU 312). In the first embodiment, the information processing section 31 causes the lower LCD 12 to display an image for a hand-drawn image input operation, and causes the upper LCD 22 to display an image acquired from either one of the outer capturing section 23 and the inner capturing section 24. That is, for example, the information processing section 31 causes the upper LCD 22 to display a stereoscopic image (stereoscopically visible image) using a right-eye image and a left-eye image that are captured by the outer capturing section 23, or causes the upper LCD 22 to display a planar image using one of a right-eye image and a left-eye image that are captured by the outer capturing section 23.
Specifically, the information processing section 31 is connected to an LCD controller (not shown) of the upper LCD 22, and causes the LCD controller to set the parallax barrier to on/off. When the parallax barrier is on in the upper LCD 22, a right-eye image and a left-eye image that are stored in the VRAM 313 of the information processing section 31 (that are captured by the outer capturing section 23) are output to the upper LCD 22. More specifically, the LCD controller repeatedly alternates the reading of pixel data of the right-eye image for one line in the vertical direction, and the reading of pixel data of the left-eye image for one line in the vertical direction, and thereby reads the right-eye image and the left-eye image from the VRAM 313. Thus, the right-eye image and the left-eye image are each divided into strip images, each of which has one line of pixels placed in the vertical direction, and an image including the divided left-eye strip images and the divided right-eye strip images alternately placed is displayed on the screen of the upper LCD 22. The user views the images through the parallax barrier of the upper LCD 22, whereby the right-eye image is viewed with the user's right eye, and the left-eye image is viewed with the user's left eye. This causes the stereoscopically visible image to be displayed on the screen of the upper LCD 22.
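The strip interleaving the LCD controller performs, as described above, can be sketched as building one output image whose vertical lines alternate between the left-eye image and the right-eye image. Images are represented as lists of pixel rows; which eye supplies the even-numbered columns is an assumption, as is the function name.

```python
def interleave_columns(left_eye, right_eye):
    """Build the image shown behind the parallax barrier: even-numbered
    vertical lines (columns) are taken from the left-eye image and
    odd-numbered ones from the right-eye image."""
    out = []
    for lrow, rrow in zip(left_eye, right_eye):
        out.append([lrow[i] if i % 2 == 0 else rrow[i]
                    for i in range(len(lrow))])
    return out

L = [["L"] * 4, ["L"] * 4]
R = [["R"] * 4, ["R"] * 4]
# interleave_columns(L, R)  ->  [["L", "R", "L", "R"], ["L", "R", "L", "R"]]
```

Viewed through the barrier, the "L" columns reach only the left eye and the "R" columns only the right eye, producing the stereoscopic effect.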
The outer capturing section 23 and the inner capturing section 24 are connected to the information processing section 31. The outer capturing section 23 and the inner capturing section 24 each capture an image in accordance with an instruction from the information processing section 31, and output data of the captured image to the information processing section 31. In the first embodiment, the information processing section 31 gives either one of the outer capturing section 23 and the inner capturing section 24 an instruction to capture an image, and the capturing section that has received the instruction captures an image, and transmits data of the captured image to the information processing section 31. Specifically, the user selects the capturing section to be used, through an operation using the touch panel 13 and the operation button 14. The information processing section 31 (the CPU 311) detects that an capturing section has been selected, and the information processing section 31 gives the selected one of the outer capturing section 23 and the inner capturing section 24 an instruction to capture an image.
When started by an instruction from the information processing section 31 (CPU 311), the outer capturing section 23 and the inner capturing section 24 perform capturing at, for example, a speed of 60 images per second. The captured images captured by the outer capturing section 23 and the inner capturing section 24 are sequentially transmitted to the information processing section 31, and displayed on the upper LCD 22 or the lower LCD 12 by the information processing section 31 (GPU 312). When output to the information processing section 31, the captured images are stored in the VRAM 313, are output to the upper LCD 22 or the lower LCD 12, and are deleted at predetermined times. Thus, images are captured at, for example, a speed of 60 images per second, and the captured images are displayed, whereby the game apparatus 10 can display views in the imaging ranges of the outer capturing section 23 and the inner capturing section 24, on the upper LCD 22 or the lower LCD 12 in real time.
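The capture-display pipeline described above can be sketched as follows: each arriving frame is buffered (the buffer standing in for the VRAM 313), output for display, and older frames are deleted. The buffer size and names are assumed for illustration; the real timing of deletion is not specified beyond "at predetermined times".

```python
from collections import deque

def run_pipeline(frames, buffer_size=2):
    """Feed captured frames through a small buffer, displaying the newest
    frame each step; frames older than the buffer size are deleted
    automatically by the deque's maxlen behavior."""
    vram = deque(maxlen=buffer_size)  # stand-in for the VRAM 313
    displayed = []
    for frame in frames:
        vram.append(frame)            # store the captured image
        displayed.append(vram[-1])    # output the newest frame to the LCD
    return displayed

# Every captured frame is displayed once, in order:
# run_pipeline([1, 2, 3])  ->  [1, 2, 3]
```

At 60 frames per second this loop is what lets the imaging range appear on the screen in real time.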
The 3D adjustment switch 25 is connected to the information processing section 31. The 3D adjustment switch 25 transmits to the information processing section 31 an electrical signal in accordance with the position of the slider.
The 3D indicator 26 is connected to the information processing section 31. The information processing section 31 controls whether or not the 3D indicator 26 is to be lit on. When, for example, the upper LCD 22 is in the stereoscopic display mode, the information processing section 31 lights on the 3D indicator 26.
<Descriptions of Functions>
Next, a description is given of an overview of an example of game processing performed by the game apparatus 10. The game apparatus 10 provides the function of collecting face images by acquiring and saving, for example, face images of people through the inner capturing section 24, the outer capturing section 23, or the like in accordance with an operation of a user (hereinafter also referred to as a “player”). To collect face images, the user executes a game (first game) using an acquired face image, and when the result of the game has been successful, the user can save the acquired image. It should be noted that the user can acquire a face image, which is a target to be saved, from: an image captured by the inner capturing section 24, the outer capturing section 23, or the like before executing the first game; an image acquired by an application different from the first game before executing the first game; an image captured by the inner capturing section 24, the outer capturing section 23, or the like during the execution of the first game; or the like. As described later, the game apparatus 10 saves the face image acquired before the first game or during the first game, in a saved data storage area Do (see
The games executed by the game apparatus 10 (the first game and the second game) are each, for example, a game where the user aims at enemy objects EO, attacks them, and destroys them. In the first embodiment, for example, a face image acquired by the user and yet to be saved in the saved data storage area Do, or a face image acquired by the user and already saved in the saved data storage area Do, is mapped as a texture onto a character object, such as an enemy object EO.
First, at the stage of the execution of the first game according to the first embodiment, the user can execute the first game by acquiring a desired face image through a capturing section, such as a camera. Then, when having succeeded in the first game, the user can save the acquired face image in the saved data storage area Do in an accumulating manner, cause a list of the face images to be displayed, and use the face images in the second game. Here, “in an accumulating manner” means that when the user has acquired a new face image and further succeeded in the first game, the new face image is added.
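The accumulating save just described can be sketched as: a newly acquired face image is appended to the saved data storage area only when the first game has been determined a success. The names below are illustrative, and the storage area is modeled as a plain list.

```python
def save_on_success(saved_area, new_face_image, game_succeeded):
    """Append the acquired face image to the saved data storage area
    (modeled as a list, standing in for the area Do) when the first game
    ended in success; otherwise leave the area unchanged."""
    if game_succeeded:
        saved_area.append(new_face_image)  # accumulate, never overwrite
    return saved_area

# A success adds the new image; a failure leaves the collection as it was:
# save_on_success([], "face_G1", True)            ->  ["face_G1"]
# save_on_success(["face_G0"], "face_G1", False)  ->  ["face_G0"]
```

Because images are only ever appended, the collection grows with each success, matching the "accumulating manner" described above.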
At the stage of the execution of the second game, the user can select a desired face image from among the collected face images, and create an enemy object EO. Then, the user can execute a game using the enemy object EO created using the desired face image, for example, a game where the user destroys the created enemy object EO. Without an operation of the user, however, the game apparatus 10 may automatically select a face image, for example, randomly from the saved data storage area Do, and may create an enemy object EO or the like. It should be noted that also at the stage of the execution of the first game, a character object may be created using a face image already collected in the saved data storage area Do, and may be caused to appear in the game together with a character object created using a face image yet to be saved in the saved data storage area Do. It should be noted that hereinafter, when a plurality of enemy objects EO are distinguished from one another, the enemy objects EO are referred to as, for example, “enemy objects EO1, EO2 . . . .” On the other hand, when the enemy objects EO1, EO2, and the like are collectively referred to, or a plurality of enemy objects do not need to be distinguished from one another, the enemy objects EO are referred to as “enemy objects EO”.
Next, with reference to
In
It should be noted that in
The switching of the state of being selected, however, is not limited to the pressing of the left and right directions of the cross button 14A, and the state of being selected may be switched by pressing the up and down directions. Further, the switching of the state of being selected is not limited to the pressing of the cross button 14A, and the image in the state of being selected may be switched by pressing other operation buttons 14, such as the operation button 14B (A button). Alternatively, the face image in the state of being selected may be switched by performing an operation on the touch panel 13 of the lower LCD 12. For example, the game apparatus 10 displays in advance on the lower LCD 12 a list of the face images similar to the list of the face images displayed on the upper LCD 22. Then, the game apparatus 10 may detect an operation on the touch panel 13, and thereby detect which face image has entered the state of being selected. Then, the game apparatus 10 may display the face image having entered the state of being selected, e.g., the face image G1, by surrounding the face image G1 by the heavy line L1.
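The selection switching described above can be sketched as moving an index through the list of face images when the right or left direction of the cross button 14A (or another assigned input) is pressed. Clamping at the ends of the list, rather than wrapping around, is an assumption; the specification does not say which happens.

```python
def switch_selection(index, button, count):
    """Return the index of the face image in the state of being selected
    after a button press. `count` is the number of face images listed."""
    if button == "right":
        return min(index + 1, count - 1)  # assumed: clamp at the last image
    if button == "left":
        return max(index - 1, 0)          # assumed: clamp at the first image
    return index                          # other inputs leave selection as-is

# switch_selection(0, "right", 3)  ->  1
# switch_selection(0, "left", 3)   ->  0   (already at the first image)
```

The image at the returned index would then be displayed surrounded by the heavy line L1.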
As described above, the user can browse a list of the face images already acquired, using the screen shown in
The reactions of the face images related to the face image in the state of being selected, however, are not limited to actions such as: turning its face; giving a look with one eye closed while a heart mark is displayed near the face image; and turning its face and giving a look. For example, a related face image may show reactions such as smiling and producing a voice. Conversely, a face image unrelated to the face image in the state of being selected may change its expression from a smiling expression to a straight expression. Alternatively, a face image unrelated to the face image in the state of being selected may turn in the direction opposite to the direction of the face image in the state of being selected.
Here, a related face image may be defined at the stage of, for example, the acquisition of a face image. For example, groups for classifying face images may be set in advance, and when a face image is acquired, the user may input the group to which the face image to be acquired belongs. Alternatively, for example, a group of face images may be defined in accordance with the progression of the game, and the face images may be classified. For example, when the face image G1 has been newly acquired using the face image G0 during the progression of the game, it may be determined that the face image G0 and the face image G1 are face images that are related to each other and belong to the same group.
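The grouping just described can be sketched as recording, at acquisition time, which face images belong to the same group, so that when one image enters the state of being selected, the related images can be looked up and made to react. The dictionary-of-sets representation and the function names are assumptions.

```python
def add_related(groups, face_a, face_b):
    """Record that face_a and face_b belong to the same group (e.g., when
    face_b was newly acquired using face_a during the progression of the
    game). `groups` maps each face image to its shared group set."""
    group = groups.setdefault(face_a, {face_a})
    group.add(face_b)
    groups[face_b] = group  # both keys share one group set
    return groups

def related_faces(groups, face):
    """Return the face images related to `face`, excluding `face` itself."""
    return groups.get(face, {face}) - {face}

# After acquiring G1 using G0, each is related to the other:
# g = add_related({}, "G0", "G1")
# related_faces(g, "G1")  ->  {"G0"}
```

The list-display logic would call `related_faces` for the selected image and trigger reactions (turning, winking, smiling) on the returned images.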
As shown in
In
The peripheral portion H13 may have a shape suggesting the feature of the enemy object EO to appear in the game. For example, if the peripheral portion H13 has a shape representing a helmet, it is possible to represent an aggressive mental picture of the enemy object EO. In
In
On the screen shown in
Such a combination of the peripheral portion H13 and the face image G2 is displayed, whereby an enemy object EO is temporarily created. The user imagines a mental picture of the enemy to confront in the game, by the temporarily displayed enemy object EO. The user can operate the cross button 14A or the like to switch the face image in the state of being selected, and thereby can switch the face image of the enemy object EO. That is, the user can switch the faces of the enemy objects EO, one after another, and thereby can create an enemy object EO that fits a mental picture of the enemy to fight with in the game.
In the game apparatus 10, for example, the face images collected by succeeding in the first game and accumulated in the saved data storage area Do as described above are used in the subsequent second game. That is, the game apparatus 10 performs game processing using enemy objects EO created using the collected face images. For example, in accordance with an operation of the user, the game apparatus 10 performs a process termed a “cast determination process” before the execution of the game, and generates enemy objects EO that fit mental pictures formed by the user, or the game apparatus 10 automatically generates enemy objects EO before the execution of the game. “Automatically” means that, for example, the game apparatus 10 can generate the enemy objects EO to appear in the game by selecting a required number of face images randomly from among the collected face images. Further, for example, in accordance with the history of the game processing performed by the user in the past, the game apparatus 10 may create enemy objects EO by selecting face images expected to be desired next by the user, based on the properties, the taste, and the like of the user. In accordance with the execution history of the game that has been executed by the user up to the current time, the game apparatus 10 may select a face image to be used next, based on the attributes of the subjects of the face images, such as age, gender, friendship (family, friends, and relationships in work, school, and community), or, if a subject is a living thing such as a pet, the ownership relationship of the subject. Further, for example, the game apparatus 10 may select a face image to be used next, based on the performances of the user in the game executed in the past.
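The automatic variant of the cast determination described above, selecting a required number of face images randomly from among the collected ones, can be sketched as follows. The function name is illustrative; a fixed seed is accepted here only so the example is reproducible.

```python
import random

def cast_enemies(collected_faces, required, seed=None):
    """Pick the face images for the enemy objects EO to appear in the game,
    chosen randomly from the collected face images. If fewer images have
    been collected than required, all of them are used."""
    rng = random.Random(seed)
    return rng.sample(collected_faces, min(required, len(collected_faces)))

faces = ["G0", "G1", "G2", "G3"]
cast = cast_enemies(faces, 2, seed=0)  # two faces chosen at random
```

History-based selection (by the user's taste or past performance) would replace the random draw with a scoring step over the same collection.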
The game apparatus 10 performs game processing (the second game) using the enemy objects EO created by the specification made by such operations of the user, or created by the processing of the game apparatus 10. It should be noted that a character object that appears in the game, which is described using the term “enemy object EO” as an example, is not limited to an object having an adversarial relationship with the user, and may be a friend character object. Further, the present invention is not limited to a game where there are relationships such as enemies and friends, and may be a game where a player object representing the user themselves appears. Alternatively, the present invention may be, for example, a game where an object termed an “agent” appears, the object assisting the user in executing the game.
The game apparatus 10 executes a game where various character objects, such as the enemy objects EO described above, appear. To the character objects that appear in the game, the face images collected by the user succeeding in the first game are attached by texture mapping or the like. Accordingly, in the game executed by the game apparatus 10, the character objects including the face images collected by the user themselves appear. Thus, using images of portions representing the features of people and living things, such as face images, the user can execute a game where the real-world relationships with the people represented by the face images or with the living things of the face images are reflected on the various character objects. For example, it is possible to execute a game including emotions, such as affection, friendliness, favorable impression, and hatred.
It should be noted that also in the face image selection screen shown in
<Example of Various Data>
It should be noted that programs for performing the processing of the game apparatus 10 are included in a memory built into the game apparatus 10 (e.g., the data storage internal memory 35), or included in the external memory 45 or the data storage external memory 46, and the programs are: loaded from the built-in memory, or loaded from the external memory 45 through the external memory I/F 33 or from the data storage external memory 46 through the data storage external memory I/F 34, into the main memory 32 when the game apparatus 10 is turned on; and executed by the CPU 311.
Referring to
<<Operation Data Da>>
The operation data Da indicates operation information of an operation of the user on the game apparatus 10. The operation data Da includes controller data Da1 and angular velocity data Da2. The controller data Da1 indicates that the user has operated a controller, such as the operation buttons 14 or the analog stick 15, of the game apparatus 10. The angular velocity data Da2 indicates the angular velocities detected by the angular velocity sensor 40. For example, the angular velocity data Da2 includes x-axis angular velocity data indicating an angular velocity about the x-axis, y-axis angular velocity data indicating an angular velocity about the y-axis, and z-axis angular velocity data indicating an angular velocity about the z-axis, the angular velocities detected by the angular velocity sensor 40. For example, the operation data from the operation buttons 14 or the analog stick 15 and the angular velocity data from the angular velocity sensor 40 are acquired per unit of time in which the game apparatus 10 performs processing (e.g., 1/60 seconds), and are stored in the controller data Da1 and the angular velocity data Da2, respectively, in accordance with the acquisition, to thereby be updated.
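The shape of the operation data Da described above may be sketched as follows (an assumption-laden illustration; the field names and Python types are not from the present embodiment).

```python
from dataclasses import dataclass, field

@dataclass
class OperationData:
    """Mirror of the operation data Da described above (names illustrative)."""
    controller: dict = field(default_factory=dict)  # Da1: button/stick state
    angular_velocity: tuple = (0.0, 0.0, 0.0)       # Da2: about x, y, z axes

    def update(self, buttons, angular_velocity):
        # Called once per processing unit (e.g., every 1/60 s): the most
        # recently acquired samples overwrite the stored ones.
        self.controller = dict(buttons)
        self.angular_velocity = tuple(angular_velocity)

da = OperationData()
da.update({"A": True, "cross_up": False}, (0.1, -0.02, 0.0))
```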
It should be noted that game processing (e.g., the processes performed in
<<Real Camera Image Data Db>>
The real camera image data Db indicates a real camera image captured by either one of the outer capturing section 23 and the inner capturing section 24. In the following descriptions of processing, in the step of acquiring a real camera image, the real camera image data Db is updated using a real camera image captured by either one of the outer capturing section 23 and the inner capturing section 24. It should be noted that the cycle of updating the real camera image data Db using the real camera image captured by the outer capturing section 23 or the inner capturing section 24 may be the same as the unit of time of the processing of the game apparatus 10 (e.g., 1/60 seconds), or may be shorter than this unit of time. When the cycle of updating the real camera image data Db is shorter than the cycle of the processing of the game apparatus 10, the real camera image data Db may be updated as necessary, independently of the processing described later. In this case, in the step described later of acquiring a real camera image, the process may be performed invariably using the most recent real camera image indicated by the real camera image data Db. Hereinafter, in the present embodiment, the real camera image data Db is data indicating a real camera image captured by the outer capturing section 23 (e.g., the left outer capturing section 23a).
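The decoupling described above, where the capture cycle may be shorter than the processing cycle and the processing step always reads the most recent frame, can be sketched as follows (class and method names are illustrative, not from the specification).

```python
import threading

class RealCameraImageData:
    """Holds the most recent real camera image (the data Db above).

    The capture side updates at its own cycle; the processing side
    always acquires the latest frame, as described in the text.
    """
    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None

    def update(self, frame):   # called by the capturing section
        with self._lock:
            self._frame = frame

    def acquire(self):         # called by the game processing step
        with self._lock:
            return self._frame

db = RealCameraImageData()
db.update("frame_0001")
db.update("frame_0002")   # a newer frame arrives before processing runs
```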
<<Real World Image Data Dc>>
In the game processing described later, e.g., the process of the execution of the game in step 18 of
<<Boundary Surface Data Dd>>
The boundary surface data Dd is data for, in combination with the real world image data Dc described above, generating the real world image that seems to be present on the boundary surface 3. In the first drawing method, for example, the boundary surface data Dd is data concerning the screen object, and includes: opening determination data (corresponding to data of an α-texture described later) indicating the state (e.g., the presence or absence of an opening) of each point included in the boundary surface 3; data indicating the placement position of the boundary surface 3 in the virtual space (the coordinates of the boundary surface 3 in the virtual space); and the like. Further, in the second drawing method, for example, the boundary surface data Dd is data for representing an opening in a planar polygon of the real world image, and includes: opening determination data (corresponding to data of an α-texture described later) indicating the state (e.g., the presence or absence of an opening) of each point included in the boundary surface 3; data indicating the placement position of the boundary surface 3 in the virtual space (the coordinates of the boundary surface 3 in the virtual space); and the like. The data indicating the placement position of the boundary surface 3 in the virtual space is, for example, conditional equations for a spherical surface (relational expressions for defining a spherical surface in the virtual space), and indicates the existence range of the boundary surface 3 in the virtual space.
The opening determination data indicating the state of being open is, for example, two-dimensional (e.g., a rectangular shape having 2048 pixels×384 pixels) texture data in which the alpha value (non-transparency) of each point can be set. The alpha value is a value from “0” to “1”, with “0” being minimum and “1” being maximum. An alpha value of “0” indicates transparent, and an alpha value of “1” indicates non-transparent. The opening determination data can indicate that a position where “0” is stored in the opening determination data is in the state of being open, and a position where “1” is stored is not in the state of being open. The alpha value can be set in, for example, an image of a game world generated in the game apparatus 10, or a pixel block unit including a pixel or a plurality of pixels in the upper LCD 22. In the present embodiment, a predetermined value over 0 but less than 1 (0.2 in the present embodiment) is stored in an unopen area. This value, however, is not used as-is when applied to the real world image; when applied to the real world image, alpha values of “0.2” stored in the opening determination data are handled as “1”. It should be noted that an alpha value of “0.2” is used to draw a shadow ES of each of the enemy objects EO described above. The setting of the alpha value and the range of the alpha value, however, do not limit the image processing program according to the present invention.
In the image processing program according to the present embodiment, in the first drawing method, it is possible to generate the real world image having an opening by multiplying: the opening determination data corresponding to an area of the range of the visual space of the virtual camera; by color information (pixel values) of a texture of the real world image to be attached to the boundary surface 3. Further, in the second drawing method, it is possible to generate the real world image having an opening by multiplying: the opening determination data corresponding to an area of the range of the visual space of a virtual world drawing camera; by color information (pixel values) of the real world image (specifically, rendered image data of the real camera image rendered with a parallel projection described later using the real world image data Dc). This is because when alpha values of “0” stored at the position of the opening are multiplied by the color information of the real world image at the position, the values of the color information of the real world image are “0” (the state of being completely transparent).
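The multiplication described above can be sketched as follows (a minimal illustration over small pixel grids; the function name and the grids are not from the present embodiment). Where the opening determination data stores “0”, the product is “0”, so the real world image becomes completely transparent at the opening and the second space shows through.

```python
def apply_opening(real_world_pixels, opening_alpha):
    """Multiply each pixel of the real world image by the opening
    determination value at the same position (sketch of the texture
    multiplication described above).
    """
    out = []
    for row_px, row_a in zip(real_world_pixels, opening_alpha):
        out_row = []
        for px, a in zip(row_px, row_a):
            a = 1.0 if a == 0.2 else a  # 0.2 (shadow value) is handled as 1
            out_row.append(px * a)
        out.append(out_row)
    return out

image = [[100, 100], [100, 100]]        # real world image (grayscale values)
alpha = [[0.2, 0.0], [0.2, 0.2]]        # one open point at row 0, column 1
masked = apply_opening(image, alpha)
```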
It should be noted that in the first drawing method, as described later, an image to be displayed on the upper LCD 22 is generated by rendering a virtual space image in which virtual objects are placed so as to include an object of the real world image to which the opening determination data is applied.
In addition, in the second drawing method, specifically, as described later, the virtual space image is rendered, taking into account the opening determination data. That is, the priority of each virtual object relative to the boundary surface (the priority relative to the real world image) is determined based on the opening determination data, and the virtual space image is generated by rendering each virtual object. Then, an image to be displayed on the upper LCD 22 is generated by combining the real world image with the virtual space image generated as described above.
In addition, in the image processing program according to the present embodiment, the shape of the boundary surface 3 is a spherical surface (see
It should be noted that in the present embodiment, the opening determination data is only data corresponding to the central portion of the spherical surface shown in
The image processing for an opening created in the boundary surface 3 will be described later.
<<Back Wall Image Data De>>
The back wall image data De is data concerning a back wall BW, which is present in a second space 2. For example, the back wall image data De includes: image data for generating an image of the back wall BW; data indicating the position of a polygon model defining the back wall BW in the virtual space; and the like.
The polygon model defining the back wall BW is typically a model that has a radius greater than that of the sphere shown in
Image data (texture) to be attached to the polygon model of the back wall BW may be given data. This image data represents another space (the second space 2) existing behind the real world image, and therefore, the image data is preferably an image representing unreality, such as an image representing outer space, the sky, or an area in water, because it is possible to give the player a strange feeling as if an unreal space exists behind real space. For example, when the user is playing the game according to the present embodiment in a room, it is possible to give the user a feeling as if an unreal space exists outside the room. Alternatively, a texture of the back wall may represent landscapes that are not normally seen, such as a desert and a wilderness. As described above, the selection of a texture of the back wall BW allows the player to form a desired mental picture of another world hidden behind a real image represented as a background of the game world.
In addition, for example, if the image data is an image that can use repeated representations, such as an image of outer space, it is possible to reduce the data size of the image data (texture). Further, if the image data is such an image, it is possible to draw an image of the back wall BW without specifying the position where the back wall BW is to be drawn in the virtual space. This is because if an image can use repeated representations, the image is drawn without depending on the position (the repeated pattern can be represented on the entire polygon model).
It should be noted that in the present embodiment, the priority of drawing described later is defined by alpha values, and therefore, it is assumed that an alpha value is defined for the image data. In the present embodiment, it is assumed that an alpha value of “1” is defined for the image data.
<<Enemy Object Data Df>>
The enemy object data Df is data concerning an enemy object EO, and includes substance data Df1, silhouette data Df2, and opening shape data Df3.
The substance data Df1 is data for drawing the substance of the enemy object EO, and includes, for example, a polygon model defining a three-dimensional shape of the substance of the enemy object EO, and texture data to be mapped onto the polygon model. The texture data may be, for example, a photograph of the face of the user or the like captured by each capturing section of the game apparatus 10. It should be noted that in the present embodiment, the priority of drawing described later is defined by alpha values, and therefore, it is assumed that an alpha value is defined for the texture data. In the present embodiment, it is assumed that an alpha value of “1” is defined for the texture data.
The silhouette data Df2 is data for semi-transparently drawing in the real world image the shadow of the enemy object EO present in the second space 2, and includes a polygon model and texture data to be attached to the polygon model. For example, this silhouette model includes eight planar polygons, and is placed at the same position as that of the enemy object EO present in the second space 2. The silhouette model to which a texture is attached is drawn, for example, semi-transparently, in the real world image as viewed from the virtual world drawing camera, whereby it is possible to represent the shadow of the enemy object EO present in the second space 2. Further, the texture data of the silhouette data Df2 may be, for example, images of the enemy object EO as viewed from all directions as shown in
The opening shape data Df3 is data concerning the shape of an opening generated in the boundary surface 3 when the enemy object EO moves between a first space 1 and the second space 2. In the present embodiment, the opening shape data Df3 is data for setting alpha values of “0” at the position in the opening determination data corresponding to the position in the boundary surface 3 where the opening is generated. For example, the opening shape data Df3 is texture data that corresponds to the shape of the opening to be generated and has alpha values of “0”. It should be noted that in the present embodiment alpha values of “0” are set in the opening determination data for the shape indicated by the opening shape data Df3, the shape formed around the portion corresponding to the position through which the enemy object EO has passed in the boundary surface 3. The image processing performed when the enemy object EO generates an opening in the boundary surface 3 will be described later.
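The stamping described above, i.e., writing alpha values of “0” into the opening determination data around the position through which the enemy object EO has passed, can be sketched as follows (function name, grid sizes, and the hole shape are illustrative assumptions).

```python
def stamp_opening(opening_alpha, shape_alpha, top, left):
    """Write the opening shape data (alpha 0 where the hole is) into the
    opening determination data at the given position on the boundary
    surface. A sketch; a real implementation would also clamp to the
    texture bounds.
    """
    for dy, row in enumerate(shape_alpha):
        for dx, a in enumerate(row):
            if a == 0.0:   # only the hole itself overwrites the surface
                opening_alpha[top + dy][left + dx] = 0.0
    return opening_alpha

surface = [[0.2] * 6 for _ in range(4)]  # unopened boundary surface (0.2)
hole = [[1.0, 0.0, 1.0],
        [0.0, 0.0, 0.0],
        [1.0, 0.0, 1.0]]                 # plus-shaped opening (alpha 0)
surface = stamp_opening(surface, hole, 0, 2)
```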
<<Bullet Object Data Dg>>
The bullet object data Dg is data concerning a bullet object BO, which is fired in accordance with an attack operation of the player. For example, the bullet object data Dg includes: a polygon model and bullet image (texture) data for drawing the bullet object BO; data indicating the placement direction and the placement position of the bullet object BO; and data indicating the moving velocity and the moving direction (e.g., a moving velocity vector) of the bullet object BO. It should be noted that in the present embodiment, the priority of drawing described later is defined by alpha values, and therefore, it is assumed that an alpha value is defined for the bullet image data. In the present embodiment, an alpha value of “1” is defined for the bullet image data.
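The position and moving velocity vector held in the bullet object data Dg imply a per-frame update of the following shape (a minimal sketch; the function name and the 1/60-second step are taken from the processing unit mentioned above, everything else is illustrative).

```python
def step_bullet(position, velocity, dt=1.0 / 60.0):
    """Advance the bullet object BO by its moving velocity vector over
    one processing unit of time.
    """
    return tuple(p + v * dt for p, v in zip(position, velocity))

pos = (0.0, 0.0, 0.0)
vel = (0.0, 0.0, 60.0)   # fired straight ahead along the z-axis
pos = step_bullet(pos, vel)
```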
<<Score Data Dh>>
The score data Dh indicates the score of a game where the enemy object EO appears. For example, as described above, points are added to the score of the game when the user has vanquished the enemy object EO by an attack operation, and points are deducted from the score of the game when the enemy object EO has reached the position of the user (i.e., the placement position of the virtual camera in the virtual space).
<<Motion Data Di>>
The motion data Di indicates the motion of the game apparatus 10 in real space. As an example, the motion of the game apparatus 10 is calculated by the angular velocities detected by the angular velocity sensor 40.
<<Virtual Camera Data Dj>>
The virtual camera data Dj is data concerning a virtual camera set in the virtual space. In the first drawing method, for example, the virtual camera data Dj includes data indicating the placement direction and the placement position of a virtual camera in the virtual space. Further, in the second drawing method, for example, the virtual camera data Dj includes: data indicating the placement direction and the placement position of a real world drawing camera in the virtual space; and data indicating the placement direction and the placement position of a virtual world drawing camera in the virtual space. Then, for example, the data indicating the placement direction and the placement position of the virtual camera in the virtual space in the first drawing method, and the data indicating the placement direction and the placement position of the virtual world drawing camera in the virtual space in the second drawing method change in accordance with the motion of the game apparatus 10 (angular velocities) indicated by the motion data Di. Further, the virtual camera data Dj includes angle-of-view (drawing range) data of the virtual camera. With this, in accordance with changes in the positions and the orientations of the virtual camera in the first drawing method and the virtual world drawing camera in the second drawing method, a drawing range (drawing position) in the boundary surface 3 changes.
<<Rendered Image Data Dk>>
The rendered image data Dk is data concerning an image rendered by processing described later.
In the first drawing method, the real world image is rendered as an object in the virtual space, and therefore, the rendered image data Dk includes rendered image data of the virtual space. The rendered image data of the virtual space is data indicating a virtual world image obtained by rendering with a perspective projection from the virtual camera the virtual space where the enemy object EO, the bullet object BO, the boundary surface 3 (screen object) to which the real world image is applied as a texture, and the back wall BW are placed.
On the other hand, in the second drawing method, the real world image and the virtual world image are rendered by virtual cameras different from each other, and therefore, the rendered image data Dk includes rendered image data of the real camera image and rendered image data of the virtual space. The rendered image data of the real camera image indicates the real world image obtained by rendering with a parallel projection from the real world image drawing camera a planar polygon on which a texture of the real camera image is mapped. The rendered image data of the virtual space indicates the virtual world image obtained by rendering with a perspective projection from the virtual world drawing camera the virtual space where the enemy object EO, the bullet object BO, the boundary surface 3, and the back wall BW are placed.
<<Display Image Data Dl>>
The display image data Dl indicates a display image to be displayed on the upper LCD 22. In the first drawing method, for example, a display image to be displayed on the upper LCD 22 is generated by a process of rendering the virtual space. Further, in the second drawing method, for example, a display image to be displayed on the upper LCD 22 is generated by combining the rendered image data of the camera image with the rendered image data of the virtual space by a method described later.
<<Aiming Cursor Image Data Dm>>
The aiming cursor image data Dm is image data of an aiming cursor AL that is displayed on the upper LCD 22. The image data may be given data.
It should be noted that in the present embodiment, the data concerning each object (the boundary surface data Dd, the back wall image data De, the substance data Df1, the silhouette data Df2, and the bullet image data) includes information about the priority, the information defining the priority of drawing. In the present embodiment, the information about the priority uses alpha values. The relationship between the alpha values and the image processing will be described later.
In addition, in the present embodiment, the data concerning each object used for drawing includes data indicating whether or not a depth determination is to be made between the object and another. As described above, the data is set such that a depth determination is valid between each pair of: the enemy object EO; the bullet object BO; a semi-transparent enemy object; an effect object; and the screen object (boundary surface 3). Further, the data is set such that a depth determination is valid “between the shadow planar polygon (silhouette data Df2) and the enemy object EO (substance data Df1)”, “between the shadow planar polygon (silhouette data Df2) and the bullet object BO”, “between the shadow planar polygon (silhouette data Df2) and the semi-transparent enemy object”, and “between the shadow planar polygon (silhouette data Df2) and the effect object”. Furthermore, the data is set such that a depth determination is invalid between the shadow planar polygon (silhouette data Df2) and the screen object (boundary surface data Dd).
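The per-pair depth-determination settings listed above can be summarized as a lookup table, sketched as follows (the string keys and the helper function are illustrative, not part of the present embodiment).

```python
# True: a depth determination is valid between the pair; False: invalid.
DEPTH_TEST = {
    frozenset({"silhouette", "enemy"}): True,
    frozenset({"silhouette", "bullet"}): True,
    frozenset({"silhouette", "semi_transparent_enemy"}): True,
    frozenset({"silhouette", "effect"}): True,
    frozenset({"silhouette", "screen"}): False,  # shadow drawn over the
                                                 # boundary surface regardless
                                                 # of depth
}

def depth_test_enabled(a, b, default=True):
    # Pairs not listed (enemy, bullet, semi-transparent enemy, effect, and
    # screen objects among themselves) default to a valid depth
    # determination, as described above.
    return DEPTH_TEST.get(frozenset({a, b}), default)
```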
<<Management Data Dn>>
The management data Dn is data for managing: data to be processed by the game apparatus 10, such as collected face images; data accumulated by the game apparatus 10; and the like. The management data Dn includes face image management information Dn1, a face image attribute aggregate table Dn2, and the like. The face image management information Dn1 stores: the destination for storing the data of each face image (e.g., the address in the main memory 32 or the like); the source of acquiring the face image (e.g., the inner capturing section 24 or the outer capturing section 23); the attributes of the face image (e.g., the gender, the age, and the like of the subject of the face image); information of other face images related to the face image; and the like. Further, the face image attribute aggregate table Dn2 stores by attribute the numbers of collections of the face images currently already collected by the user. For example, when the subjects of the collected face images are classified by gender, age, and the like, the collection achievement value of each category is stored. Examples of the data structures of the face image management information Dn1 and the face image attribute aggregate table Dn2 will be described later.
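The aggregation held in the face image attribute aggregate table Dn2, i.e., per-attribute counts of the already collected face images, can be sketched as follows (the record fields and example data are illustrative assumptions).

```python
from collections import Counter

def aggregate_by_attribute(face_records):
    """Build per-attribute collection counts of the kind held in the face
    image attribute aggregate table Dn2 (field names are illustrative).
    """
    table = Counter()
    for rec in face_records:
        table[(rec["gender"], rec["age_band"])] += 1
    return table

records = [
    {"gender": "female", "age_band": "20s"},
    {"gender": "male", "age_band": "30s"},
    {"gender": "female", "age_band": "20s"},
]
table = aggregate_by_attribute(records)
```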
<<Saved Data Storage Area Do>>
The saved data storage area Do is an area where, when the information processing section 31 executes the image processing program such as a game program, data to be processed by the information processing section 31, the resulting data of the process of the information processing section 31, and the like are saved. As an example, in the present embodiment, data of a face image acquired by the game apparatus 10 through the inner capturing section 24, the outer capturing section 23, the wireless communication module 36, the local communication module 37, and the like is saved. In the present embodiment, for example, the information processing section 31 executes the first game in the state where a face image acquired by the game apparatus 10 is temporarily stored in the main memory 32. Then, when it is determined that the user has succeeded in the first game in accordance with an operation of the user, the information processing section 31 saves in the saved data storage area Do the face image temporarily stored in the main memory 32. The face image saved in the saved data storage area Do is available in the subsequent game processing or the like.
The structure of the saved data storage area Do is not particularly limited. For example, the saved data storage area Do may be placed in the same physical address space as that of a regular memory, so as to be accessible to the information processing section 31. Further, for example, the saved data storage area Do may allow in advance the information processing section 31 to secure (or allocate) a predetermined block unit or a predetermined page unit at a necessary time. Furthermore, for example, the saved data storage area Do may have a structure where connections are made by management information, such as pointers connecting blocks, as in the file system of a computer.
In addition, the saved data storage area Do may, for example, secure an individual area for each program executed by the game apparatus 10. Accordingly, when a game program has been loaded into the main memory 32, the information processing section 31 may access the saved data storage area Do (input and output data) based on management information or the like of the game program.
In addition, the saved data storage area Do of a program may be accessible to the information processing section 31 that is executing another program. With this, data processed in the program may be delivered to said another program. For example, the information processing section 31 that is executing the second game may create a character object by reading data of a face image saved in the saved data storage area Do as a result of the execution of the first game described later. It should be noted that the saved data storage area Do is an example of a second storage area.
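The save-on-success flow described above, i.e., holding the acquired face image temporarily and committing it to the saved data storage area Do only when the first game succeeds, can be sketched as follows (function and variable names are illustrative; plain lists stand in for the main memory 32 and the saved data storage area Do).

```python
def play_first_game(face_image, run_game, temporary_store, saved_data_store):
    """Hold the face image temporarily, run the first game, and save the
    image to the saved data storage area only on success.
    """
    temporary_store.append(face_image)       # e.g., the main memory 32
    succeeded = run_game(face_image)
    if succeeded:
        saved_data_store.append(face_image)  # available to later games
    temporary_store.remove(face_image)
    return succeeded

main_memory, saved_area = [], []
play_first_game("face_X", lambda f: True, main_memory, saved_area)   # success
play_first_game("face_Y", lambda f: False, main_memory, saved_area)  # failure
```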
<Structures of Various Data>
With reference to
In the example of
The face image identification information is information uniquely identifying the saved face image. The face image identification information may be, for example, a serial number.
The address of face image data is, for example, the address where data of the face image is stored in the data storage internal memory 35 or the data storage external memory 46. However, for example, when the data of the face image is stored in a storage medium in which a file system is constructed by an OS (operating system), a path name, a file name, and the like in the file system may be set as the address of face image data.
The source of acquiring the face image is, for example, information identifying the capturing device that has acquired the face image. As the source of acquiring the face image, for example, information identifying the inner capturing section 24, the left outer capturing section 23a, or the right outer capturing section 23b is set. However, when both the left outer capturing section 23a and the right outer capturing section 23b have been used to acquire the face image, information indicating both capturing sections is set. Further, for example, when the face image has been acquired by a capturing device other than the inner capturing section 24, the left outer capturing section 23a, and the right outer capturing section 23b, e.g., by a capturing device provided outside the game apparatus 10, information indicating such a state (e.g., “other”) is set. “When the face image has been acquired by a capturing device provided outside the game apparatus 10” is, for example, the case where an image captured by another game apparatus 10 similar to the game apparatus 10 has been acquired through the external memory interface 33, the wireless communication module 36, the local communication module 37, or the like. Furthermore, examples of such a case also include the cases: where an image obtained by a camera not included in the game apparatus 10 has been acquired; where an image obtained by a scanner has been acquired; and where an image such as a video image obtained from a video device has been acquired, each image obtained through the external memory interface 33, the wireless communication module 36, or the like.
The estimation of gender is information indicating whether the subject of the face image is male or female. The estimation of gender may be, for example, made by a process shown in another embodiment described later. The estimation of age is information indicating the age of a person represented by the face image. The estimation of age may be, for example, made by a process shown in another embodiment described later.
Each of the pieces of related image identification information 1 through N is information indicating another face image related to the face image. For example, as the pieces of related image identification information 1 through N, pieces of face image identification information of up to N related other face images may be set. The related other face images may be, for example, specified by an operation of the user through a GUI. For example, when a face image has been newly acquired, the information processing section 31 may detect, in the state where the user has operated the operation buttons 14 or the like to cause one or more face images related to the acquired face image to enter the state of being selected, an operation on the GUI of giving an instruction to set related images. Alternatively, the acquired face image may be classified by categories prepared by the game apparatus 10, such as themselves, friends, colleagues, and strangers. Then, face images belonging to the same category may be linked together using the pieces of related image identification information 1 through N. However, when face images are classified by the categories prepared by the game apparatus 10, an element “classification of face images” may be simply prepared, instead of the preparation of the entry of the pieces of related face image identification information 1 through N, so that themselves, friends, colleagues, strangers, and the like may be set. Further, in
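Taken together, one entry of the face image management information Dn1 described above has roughly the following shape (an illustrative sketch: the element names follow the description above, but all values, including the example address, are hypothetical).

```python
# Illustrative shape of one face image management information entry.
face_record = {
    "face_image_id": 17,                  # unique serial number
    "address": "faces/0017.img",          # storage destination (example path)
    "source": "inner_capturing_section",  # capturing device that acquired it
    "gender_estimate": "female",
    "age_estimate": 24,
    "related_image_ids": [3, 9],          # up to N related face images
}
```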
<Example of Process Flow>
With reference to
Next, when having detected an operation of the user, the information processing section 31 performs the processes of step 9 and thereafter. For example, when the operation of the user on the GUI is an instruction to “acquire a face image with the inner capturing section 24” (“Yes” in step 9), the information processing section 31 performs a face image acquisition process 1 (step 10). Here, the instruction “to acquire a face image with the inner capturing section 24” is, for example, an instruction for acquisition using the inner capturing section 24, in accordance with an operation of the user on the GUI or the like. Subsequently, the information processing section 31 proceeds to step 19. The face image acquisition process 1 will be described later with reference to
Next, for example, when the operation of the user on the GUI is an instruction to “acquire a face image with the outer capturing section 23” (“Yes” in step 11), the information processing section 31 performs a face image acquisition process 2 (step 12). Subsequently, the information processing section 31 proceeds to step 19. Here, the instruction “to acquire a face image with the outer capturing section 23” is, for example, an instruction for acquisition using the outer capturing section 23 by an operation of the user on the GUI or the like. The face image acquisition process 2 will be described later with reference to
Next, when the operation of the user on the GUI is an instruction to display a list of collected face images (“Yes” in step 13), the information processing section 31 performs a list display process (step 14). Subsequently, the information processing section 31 proceeds to step 19. The list display process will be described later with reference to
When the operation of the user on the GUI is an instruction to determine a cast (“Yes” in step 15), the information processing section 31 performs a cast determination process (step 16). Subsequently, the information processing section 31 proceeds to step 19. The cast determination process will be described later with reference to
When the operation of the user is an instruction to execute a game (“Yes” in step 17), the information processing section 31 executes the game (step 18). The process of step 18 is an example of a second game processing step. The game apparatus 10 performs the game processing of the game where various character objects, such as enemy objects EO, created in the cast determination process in step 16 appear. The type of the game is not limited. For example, the game executed in step 18 may be a game where the user fights with enemy objects EO created in the cast determination process. In this case, for example, the user fights with enemy objects EO having face images collected in the face image acquisition process 1 in step 10 and in the face image acquisition process 2 in step 12, and displayed in the list display process in step 14. Further, for example, this game may be an adventure game where a player object representing the user moves forward by overcoming various hurdles, obstacles, and the like. Alternatively, examples of the game may include: a war simulation where historical characters appear; a management simulation where a player object appears; and a driving simulation of a vehicle or the like, where a player object appears. Yet alternatively, the game may be a novel game based on an original novel, where character objects appear. Yet alternatively, the game may be one termed a role-playing game (RPG) where the user controls a main character and characters that appear in a story, to play their roles. Yet alternatively, the game may be one where the user simply has some training with the assistance of an agent that appears.
To the character objects that appear in such game processing, face images collected by the user having succeeded in the first game in step 10 are attached by texture mapping or the like. Accordingly, in the game executed in step 18, the character objects including the face images collected by the user themselves appear. Thus, using images of portions representing people and living things, such as face images, the user can execute a game where the real-world relationships with the people (or the living things) of the face images are reflected in the various character objects. For example, it is possible to perform game processing including emotions, such as affection, friendliness, favorable impression, and hatred.
On the other hand, when the operation of the user is not an instruction to execute a game (“No” in step 17), the information processing section 31 proceeds to step 19.
Then, the information processing section 31 determines whether or not the process is to be ended. When having detected through the GUI an instruction to end the process, the information processing section 31 ends the process of
Next, the information processing section 31 performs a face image acquisition process (step 101). The CPU 311 of the information processing section 31 performs the process of step 101 as an example of image acquisition means.
The information processing section 31 obtains images captured by, for example, the inner capturing section 24, the left outer capturing section 23a, and/or the right outer capturing section 23b in predetermined cycles, and displays the obtained images on the upper LCD 22. In this case, the display cycle may be the same as the unit of time of the processing of the game apparatus 10 (e.g., 1/60 seconds), or may be shorter than this unit of time. Immediately after the power to the game apparatus 10 has been turned on and the image processing program has been loaded, or in an initial state immediately after the process of
For example, when the inner capturing section 24 is used, if the user turns their face toward the inner surface 21B of the upper housing 21 in the state where the upper housing 21 is open, the user's face is displayed on the upper LCD 22. Then, when the user has pressed, for example, the R button 14H (or the L button 14G), the information processing section 31 acquires, as data, an image from the inner capturing section 24 that is displayed on the upper LCD 22, and temporarily stores the acquired data in the main memory 32. At this time, the data of the image is only present in the main memory 32, and is not saved in the saved data storage area Do described later. The data present in the main memory 32 is only used in the game in step 106 described later, and as will be described later, is discarded when the game has not been successful and has been ended. The main memory 32 is an example of a first data storage area.
It should be noted that in the processing according to the present embodiment, the face image acquired in step 101 is texture-mapped onto the facial surface portion or the like of an enemy object EO, and the game is executed. Accordingly, in the process of step 101, it is preferable that the face image should be acquired by clipping particularly the face portion from the image acquired from the capturing section. In the present embodiment, for example, it is assumed that the following processing is performed. (1) The information processing section 31 detects the contour of the face in the acquired image. The contour of the face is estimated from the distance between the eyes, and the positional relationships between the eyes and the mouth. That is, the information processing section 31 recognizes the boundary line between the contour of the face and the background, based on the arrangement of the eyes and the mouth, using the dimensions of a standard face. The boundary line can be acquired by combining, for example, differential processing (contour enhancement) and average processing (smoothing calculation), which are standard image processing operations. It should be noted that the method of detecting the contour of the face may be another known method. (2) The information processing section 31 fits the obtained face image to the dimensions of the facial surface portion of the head shape of the enemy object EO by enlarging or reducing the obtained face image. This process enables the game apparatus 10 to acquire even face images varying to some extent in dimensions and attach them to enemy objects EO.
In the game apparatus 10 according to the present embodiment, however, the process of acquiring a face image is not limited to the procedure described above. For example, when a face image is acquired, a face image having target dimensions may be acquired from the capturing section, instead of the acquisition of an image from a given distance and in given dimensions. For example, a face image may be acquired on the condition that a distance from a subject is established such that the distance between the eyes of the face image obtained from the subject approximates a predetermined number of pixels. For example, the information processing section 31 may derive the distance from the subject. Alternatively, on the condition that a distance from a subject is established, the information processing section 31 may, for example, lead a person who is the subject, or the user who is the capturer, to adjust the angle of the subject's face with respect to the direction of the optical axis of the capturing section. Further, instead of the user pressing, for example, the R button 14H (or the L button 14G) to save the image, when it is determined that the adjustment of the distance from the subject and the adjustment of the angle of the face with respect to the direction of the optical axis of the capturing section are completed, the information processing section 31 may save the image. For example, the information processing section 31 may display marks representing target positions for positioning the eyes and the mouth, in superposition with the face image of the subject on the upper LCD 22. Then, when the positions of the eyes and the mouth of the subject that have been acquired from the capturing section have fallen within predetermined tolerance ranges from the marks of the target positions corresponding to the eyes and the mouth, the information processing section 31 may save the image in a memory.
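The automatic-save condition described above can be sketched as follows: the image is saved once the detected eye and mouth positions have fallen within predetermined tolerance ranges from the target marks. The tolerance value and the feature names are assumptions for illustration.

```python
TOLERANCE = 6.0  # assumed tolerance radius in pixels

def within_tolerance(detected, target, tol=TOLERANCE):
    """True when a detected (x, y) position is within tol pixels of its target mark."""
    dx = detected[0] - target[0]
    dy = detected[1] - target[1]
    return (dx * dx + dy * dy) ** 0.5 <= tol

def ready_to_capture(features, marks, tol=TOLERANCE):
    """features/marks: dicts keyed by 'left_eye', 'right_eye', 'mouth'."""
    return all(within_tolerance(features[k], marks[k], tol)
               for k in ("left_eye", "right_eye", "mouth"))
```

A capture loop would evaluate `ready_to_capture` on each frame and save the image in place of a button press once it first returns true.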
It should be noted that when the face image is acquired in step 101, the information processing section 31 updates the number of acquired face images in the corresponding row of the face image attribute aggregate table Dn2 shown in
Next, the information processing section 31 displays the image acquired in the process of step 101 on, for example, the upper LCD 22 (step 102).
Next, the information processing section 31 performs a process of selecting an enemy object EO (step 103). Here, the information processing section 31 prompts the user to select the head shape of an enemy object EO. For example, the information processing section 31 may display the list of head shapes as shown in
Then, the information processing section 31 executes a game using the generated enemy object EO (step 106). The CPU 311 of the information processing section 31 performs the process of step 106 as an example of first game processing means. Here, the type of the game is not limited. The game is, for example, a game simulating a battle with the enemy object EO. Alternatively, the game may be, for example, a game where the user competes with the enemy object EO in score. Then, after the execution of the game, the information processing section 31 determines whether or not the user has succeeded in the game (step 107). The information processing section 31 performs the process of step 107 as an example of means for determining a success or a failure. A “success” is, for example, the case where the user has defeated the enemy object EO in the game where the user fights with the enemy object EO. Alternatively, a “success” is, for example, the case where the user has scored more points than the enemy object EO in the game where the user competes with the enemy object EO in score. Yet alternatively, a “success” may be, for example, the case where the user has reached a goal in a game where the user overcomes obstacles and the like set by the enemy object EO.
It should be noted that in the game executed in step 106, as well as a character object including the face image acquired in step 101, a character object using a face image already collected in the past may be caused to appear. For example, when a face image already collected in the past is attached to an enemy object EO or a friend object and appears, the user can play a game on which human relationships in the real world and the like are reflected.
When the user has succeeded in the game, the information processing section 31 saves, in the saved data storage area Do of the game, data of the face image present in the main memory 32 that has been acquired in step 101 described above, in addition to data of face images that have been saved up to the current time (step 109). The CPU 311 of the information processing section 31 performs the process of step 109 as an example of means for saving. The saved data storage area Do of the game is a storage area where the information processing section 31 that executes the game can perform writing and reading, the storage area constructed in, for example, the main memory 32, the data storage internal memory 35, or the data storage external memory 46. Data of a new face image is stored in the saved data storage area Do of the game, whereby the information processing section 31 that executes the game can display on the screen of the upper LCD 22 the data of the new face image by adding the data to, for example, the list of the face images described with reference to
At this time, to manage the face image newly saved in the saved data storage area Do of the game, the information processing section 31 generates the face image management information Dn1 described with reference to
In addition, the information processing section 31 may estimate the attributes of the face image added to the saved data storage area Do, to thereby update the aggregate result of the face image attribute aggregate table Dn2 described with reference to
In addition, the information processing section 31 may permit the user to, for example, copy or modify the data stored in the saved data storage area Do of the game, or transfer the data through the wireless communication module 36. Then, the information processing section 31 may, for example, save, copy, modify, or transfer the face image stored in the saved data storage area Do in accordance with an operation of the user through the GUI, or with an operation of the user through the operation buttons 14.
On the other hand, when the user has not succeeded in the game, the information processing section 31 inquires of the user as to whether or not to retry the game (step 108). For example, the information processing section 31 displays on the upper LCD 22 a message indicating an inquiry about whether or not to retry the game, and receives the selection of the user in accordance with an operation on the GUI provided on the lower LCD 12 (e.g., a positive icon, a negative icon, or a menu) through the touch panel 13, an operation through the operation buttons 14, or the like. When the user has given an instruction to retry the game, the information processing section 31 returns to step 106. On the other hand, when the user has not given an instruction to retry the game, the information processing section 31 discards the face image acquired in step 101 (step 110), and ends the process. It should be noted that when the game has not been successful, the information processing section 31 may discard the face image acquired in step 101, without waiting for an instruction to retry the game in step 108.
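The save-or-discard flow of steps 106 through 110 can be sketched as follows: the newly acquired face image exists only in working memory until the player succeeds in the game, is saved in an accumulating manner on success, and is discarded if the player fails and declines a retry. The function names and the callback interfaces (`play_game`, `ask_retry`) are stand-ins for the actual game processing and GUI steps.

```python
def acquisition_game_flow(face_image, saved_data, play_game, ask_retry):
    """Returns the (possibly updated) list of saved face images.

    face_image: the image held only in the first data storage area (main memory)
    saved_data: the accumulating saved data storage area Do
    """
    while True:
        if play_game(face_image):           # steps 106/107: execute game, judge success
            saved_data.append(face_image)   # step 109: save in an accumulating manner
            return saved_data
        if not ask_retry():                 # step 108: inquire about a retry
            return saved_data               # step 110: image is discarded with the memory
```

The image is never copied into `saved_data` on the failure path, which mirrors the text's point that data present only in the main memory is lost when the game ends unsuccessfully.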
With reference to
With reference to
When a face image has not already been acquired by the inner capturing section 24 (“No” in step 121), the information processing section 31 prompts the user to first perform a face image acquisition process with the inner capturing section 24 (step 124), and ends the process of this subroutine. More specifically, for example, the information processing section 31 displays on the upper LCD 22 a message indicating “In the game apparatus 10, if a face image has not already been acquired by the inner capturing section 24, a face image acquisition process cannot be performed with the outer capturing section 23”. Alternatively, the information processing section 31 may request the user to first register the face image of the owner.
On the other hand, when a face image has already been acquired by the inner capturing section 24 (“Yes” in step 121), the information processing section 31 performs a face image management assistance process 3 (step 122). The face image management assistance process 3 will be described later with reference to
It should be noted that in the processing according to the present embodiment, the face image acquired in step 123 can also be texture-mapped onto the facial surface portion or the like of an enemy object EO, and the game can be executed. Accordingly, in the process of step 123, it is preferable that the face image should be acquired by clipping particularly the face portion from the image acquired from the capturing section, by a process similar to that of step 101 described above. Further, also when a face image is acquired in step 123, the information processing section 31 updates the number of acquired face images in the corresponding row of the face image attribute aggregate table Dn2 shown in
Next, the information processing section 31 displays the image acquired in the process of step 123 on, for example, the upper LCD 22 (step 125).
Next, the information processing section 31 performs a process of selecting an enemy object EO (step 126). Here, the information processing section 31 prompts the user to select the head shape of an enemy object EO. For example, the information processing section 31 may display the list of head shapes as shown in
Then, the information processing section 31 executes a game using the generated enemy object EO (step 129). The CPU 311 of the information processing section 31 performs the process of step 129 as an example of the first game processing means. The game executed in step 129 is similar to that of step 106. That is, the type of the game executed in step 129 varies, and possible examples of the game may include: a game simulating a battle with the enemy object EO; and a game where the user competes with the enemy object EO in score. Then, after the execution of the game, the information processing section 31 determines whether or not the user has succeeded in the game (step 130). The information processing section 31 performs the process of step 130 as an example of the means for determining a success or a failure. A “success” is, for example, the case where the user has defeated the enemy object EO in the game where the user fights with the enemy object EO. Alternatively, a “success” is, for example, the case where the user has scored more points than the enemy object EO in the game where the user competes with the enemy object EO in score. Yet alternatively, a “success” may be, for example, the case where the user has reached a goal in a game where the user overcomes obstacles and the like set by the enemy object EO.
It should be noted that in the game executed in step 129, as well as a character object including the face image acquired in step 123, a character object using a face image already collected in the past may be caused to appear. For example, when a face image already collected in the past is attached to an enemy object EO or a friend object and appears, the user can play a game on which human relationships in the real world and the like are reflected.
When the user has succeeded in the game, the information processing section 31 saves, in the saved data storage area Do of the game, data of the face image present in the main memory 32 that has been acquired in step 123 described above, in addition to data of face images that have been saved up to the current time (step 132), and ends the process of the subroutine. The CPU 311 of the information processing section 31 performs the process of step 132 as an example of the means for saving. Data of a new face image is stored in the saved data storage area Do of the game, whereby the information processing section 31 that executes the game can display on the screen of the upper LCD 22 the data of the new face image by adding the data to, for example, the list of the face images described with reference to
At this time, as in the face image acquisition process 1 in step 10, to manage the face image newly saved in the saved data storage area Do of the game, the information processing section 31 generates the face image management information Dn1 described with reference to
On the other hand, when the user has not succeeded in the game, the information processing section 31 inquires of the user as to whether or not to retry the game (step 131). For example, the information processing section 31 displays on the upper LCD 22 a message indicating an inquiry about whether or not to retry the game, and receives the selection of the user in accordance with an operation on the GUI provided on the lower LCD 12 (e.g., a positive icon, a negative icon, or a menu) through the touch panel 13, an operation through the operation buttons 14, or the like. When the user has given an instruction to retry the game, the information processing section 31 returns to step 129. On the other hand, when the user has not given an instruction to retry the game, the information processing section 31 discards the face image acquired in step 123 (step 133), and ends the process of the subroutine. It should be noted that when the game has not been successful, the information processing section 31 may discard the face image acquired in step 123, without waiting for an instruction to retry the game in step 131.
Next, the information processing section 31 waits for an operation of the user (step 141). Then, in accordance with an operation of the user, the information processing section 31 determines whether or not a face image is in the state of being selected (step 142). The determination of whether or not a face image is in the state of being selected is made based on, when the list of the face images is displayed on the upper LCD 22 as shown in
Then, when any one of the face images is in the state of being selected, the information processing section 31 searches for face images related to the face image in the state of being selected, using the face image management information Dn1 (see
Then, the information processing section 31 performs a process of causing the found face images to react, such as causing the found face images to give looks to the face image in the state of being selected (step 144). The process of causing the found face images to react can be performed by, for example, the following procedure. For example, the following are prepared in advance: a plurality of patterns of eyes, in which the orientation of the eyes is directed to another face image as shown in
In addition, concerning the orientation of a face, patterns of a face image are prepared in which, on the assumption that the case of being directed in the normal direction of the screen is 0 degrees, the orientation is changed in the left-right directions at angles, e.g., 30 degrees, 60 degrees, and 90 degrees. Further, patterns are also prepared in which the orientation is changed in the up-down directions at, for example, approximately 30 degrees. Further, patterns may be prepared in which, for a face image whose orientation has been changed in the left-right direction at an angle of 90 degrees, the orientation is further changed in the up-down direction, i.e., diagonally upward (e.g., 15 degrees upward, 30 degrees upward, and 45 degrees upward) and diagonally downward (e.g., 15 degrees downward, 30 degrees downward, and 45 degrees downward). Then, based on the positional relationships between the face image in the state of being selected and the found face images, angles may be determined, and the angles of faces closest to the corresponding angles may be selected. Further, to emphasize intimacy, an expression such as an animation of a three-dimensional model closing one eye may be displayed. Further, a heart mark and the like may be prepared in advance, and displayed near the face images related to the face image in the state of being selected.
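The selection of a prepared pattern described above can be sketched as follows: from the positional relationship between the two face images, a target angle is computed, and the prepared pattern whose angle is closest is chosen. The angle set matches the 0/30/60/90-degree left-right patterns in the text; the angle computation itself (including the assumed screen depth) is a hypothetical mapping, not one given by the embodiment.

```python
import math

LEFT_RIGHT_ANGLES = [-90, -60, -30, 0, 30, 60, 90]  # prepared yaw patterns, degrees

def angle_toward(own_pos, target_pos, screen_depth=200.0):
    """Yaw angle toward the selected face, assuming faces look out of the screen.
    screen_depth is an assumed constant controlling how strongly faces turn."""
    dx = target_pos[0] - own_pos[0]
    return math.degrees(math.atan2(dx, screen_depth))

def nearest_pattern(angle, prepared=LEFT_RIGHT_ANGLES):
    """Pick the prepared pattern angle closest to the computed angle."""
    return min(prepared, key=lambda a: abs(a - angle))
```

For a face directly to the right at the same depth as the assumed screen distance, `angle_toward` yields 45 degrees, so the 30-degree and 60-degree patterns are the nearest candidates and `nearest_pattern` resolves the tie in list order.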
In the determination in step 142, when a face image is not in the state of being selected, the information processing section 31 performs another process (step 145). This other process includes, for example, an operation on another GUI provided on the lower LCD 12, and a process on the operation buttons 14 other than those used for the selection of face images (buttons 14a, 14b, 14c, and the like). Subsequently, the information processing section 31 determines whether or not the process is to be ended (step 146). For example, when having detected that the button 14c (B button) has been pressed while the screen shown in
Next, the information processing section 31 detects a selection operation of the user through the GUI, the operation buttons 14, or the like, and receives the selection of the head shape of an enemy object EO (step 161). When the selection of the head shape of an enemy object EO has been ended, subsequently, the information processing section 31 displays a list of face images (step 162). Then, the information processing section 31 detects a selection operation of the user through the GUI, the operation buttons 14, or the like, and receives the selection of a face image (step 163). It should be noted that in the example of the process of
Then, the information processing section 31 sets the selected face image as a texture of the enemy object EO (step 164). Then, the information processing section 31 generates the enemy object EO by texture-mapping the selected face image onto the facial surface portion of the enemy object EO (step 165). The enemy object generated in step 165 is an example of a second character object. Then, the information processing section 31 displays the generated enemy object EO on the screen of the upper LCD 22 in the form of, for example, the enemy object EO shown in
In addition, the information processing section 31 performs a process of causing related face images to react (step 166). This process is similar to the processes of steps 143 and 144 of
Next, the information processing section 31 performs a process of prompting the user to acquire a face image corresponding to an unacquired attribute (step 1A2). For example, the information processing section 31 may display on the lower LCD 12 or the upper LCD 22 a message combining the attribute “male” and the attribute “10's” with the phrase “the number of acquired images is 0”, based on the table shown in
It should be noted that here, the description is given, taking the face image management assistance process 1 shown in
Then, when an acquired image is not present (“No” in the determination in step 1000), the information processing section 31 ends the process. On the other hand, when an acquired image is present (“Yes” in the determination in step 1000), the information processing section 31 proceeds to step 1001. Then, the information processing section 31 receives a request to acquire the face image (step 1001). The information processing section 31 recognizes the request to acquire the face image, for example, when having received an acquisition instruction through the L button 14G or the R button 14H in the state where the face image is displayed on the upper LCD 22 through the inner capturing section 24 or the outer capturing section 23.
Then, the information processing section 31 estimates the attributes, e.g., the gender and the age, of the face image acquired through the inner capturing section 24 or the outer capturing section 23 and displayed on the upper LCD 22 (step 1002). For example, the gender can be estimated from the size of the skeleton including the cheekbones and the mandible that are included in the face image, and the dimensions of the face. That is, the information processing section 31 calculates the relative dimensions of the contour of the face relative to the distance between the eyes and the distances between the eyes and the mouth (e.g., the width of the face, and the distance between the eyes and the chin). Then, when the relative dimensions are close to statistically obtained male average values, it may be determined that the face image is male. Further, for example, when the relative dimensions are close to statistically obtained female average values, it may be determined that the face image is female.
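The comparison against statistical averages described in step 1002 can be sketched as follows: relative facial dimensions are normalized by the eye distance and compared against per-gender average profiles. The average values below are placeholders for illustration, not real statistics.

```python
MALE_AVG   = {"face_width": 3.1, "eye_to_chin": 2.4}  # assumed, in eye-distance units
FEMALE_AVG = {"face_width": 2.8, "eye_to_chin": 2.1}  # assumed

def relative_dims(eye_dist, face_width, eye_to_chin):
    """Dimensions of the contour relative to the distance between the eyes."""
    return {"face_width": face_width / eye_dist,
            "eye_to_chin": eye_to_chin / eye_dist}

def estimate_gender(dims):
    """Return the label whose average profile is closer to the measured dims."""
    def dist(avg):
        return sum((dims[k] - avg[k]) ** 2 for k in avg) ** 0.5
    return "male" if dist(MALE_AVG) < dist(FEMALE_AVG) else "female"
```

A real implementation would use statistically obtained averages and more features (e.g., cheekbone and mandible measurements, as the text mentions), but the nearest-profile decision is the same.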
In addition, the information processing section 31 may store, in advance, feature information by gender and by age bracket (e.g., under 10, 10's, 20's, 30's, 40's, 50's, 60's, or 70 or over), such as the average positions of parts of faces and the number of wrinkles in the portions of faces. Then, the information processing section 31 may calculate the feature information of the face image, for example, acquired through the outer capturing section 23 and displayed on the upper LCD 22, and may estimate the age bracket closest to the calculated feature information. The above descriptions of the specification of the gender and the age, however, are illustrative, and the determination of the gender and the age is not limited to the above process. In the process of step 1002, it is possible to apply various gender determination techniques and various age specification techniques that are conventionally proposed.
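The nearest-bracket estimation described above can be sketched with a single hypothetical feature (a "wrinkle score"); the stored per-bracket values are assumptions, and a real implementation would use a vector of features such as part positions and wrinkle counts.

```python
AGE_BRACKETS = {          # bracket -> assumed average wrinkle score (illustrative)
    "under 10": 0.0, "10's": 0.05, "20's": 0.1, "30's": 0.2,
    "40's": 0.35, "50's": 0.5, "60's": 0.7, "70 or over": 0.85,
}

def estimate_age_bracket(wrinkle_score, brackets=AGE_BRACKETS):
    """Return the bracket whose stored feature value is closest to the measured one."""
    return min(brackets, key=lambda b: abs(brackets[b] - wrinkle_score))
```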
Next, the information processing section 31 prompts the user to acquire a face image having an unacquired attribute (step 1003). For example, the information processing section 31 may display on the upper LCD 22 a message prompting the user to acquire a face image having an unacquired attribute. This process is similar to that of
On the other hand, when acquisition target face images have not been switched, the information processing section 31 ends the process as it is. Here, “when acquisition target face images have not been switched” is, for example, the case where the user has ended the face image management assistance process 2 through the GUI, the operation button 14C (B button), or the like. Alternatively, for example, when the state where acquisition target face images are not switched has continued for a predetermined time, the information processing section 31 may determine that acquisition target face images have not been switched. In this case, the information processing section 31 proceeds to step 101 of
In addition, in the determination process of step 1004, “when acquisition target face images have not been switched” is, for example, the case where the amount of change in the distance between the eyes and the amounts of change in the distances between the eyes and the mouth are within tolerances. Alternatively, for example, when an acquisition instruction from the L button 14G or the R button 14H has not been simply canceled, but has continued for a predetermined time, the information processing section 31 may determine that the acquisition instruction has not been canceled.
Based on the processes of
It should be noted that in the present embodiment, an example of the process is shown where age brackets are classified as under 10, 10's, 20's, 30's, 40's, 50's, 60's, or 70 or over. The present invention, however, is not limited to such classification of age brackets. For example, the age brackets may be further classified into smaller categories. Alternatively, age brackets may be roughly classified, such as children, adults, and the elderly.
In the present embodiment, when having received an acquisition instruction through the L button 14G or the R button 14H, the information processing section 31 recognizes a request to acquire a face image. Instead of such a process, however, as has already been described in the present embodiment, the information processing section 31 may estimate the attributes, e.g., the gender and the age, of the face image in the course of deriving the distance between the game apparatus 10 and the face, the angle between the optical axis of the capturing section and the face, and the like, so as to acquire from the capturing section a face image having target dimensions. That is, when, for example, the information processing section 31 acquires a face image in real time or in each frame cycle (e.g., 1/60 seconds) for such a deriving process, the information processing section 31 may specify the attributes of the face image from the acquired face image.
<Detailed Example of Game Processing>
Here, with reference to
First, in the present embodiment, a description is given of an overview of a game that can be played by the player executing the game program with the game apparatus 10. The game according to the present embodiment is a so-called shooting game where the player, as a main character of the game, shoots down enemy characters that appear in a virtual three-dimensional space prepared as a game world. For example, the virtual three-dimensional space forming the game world (a virtual space (also referred to as a “game space”)) is displayed on a display screen of the game apparatus 10 (e.g., the upper LCD 22) from the player's point of view (a so-called first-person point of view). As a matter of course, display may be performed from a third-person point of view. When the player has shot down an enemy character, points are added to the score. In contrast, when an enemy character has collided with the player (specifically, when the enemy character has come within a certain distance of the position of the virtual camera), points are deducted from the score.
In addition, in the game according to the present embodiment, display is performed by combining an image of the real world acquired by the capturing section included in the game apparatus 10 (hereinafter referred to as a “real world image”), with a virtual world image representing the virtual space. Specifically, the virtual space is divided into an area closer to the virtual camera (hereinafter referred to as a “front area”) and an area further from the virtual camera (hereinafter referred to as a “back area”). Then, an image representing a virtual object present in the front area is displayed in front of the real world image, and the virtual object present in the back area is displayed behind the real world image. More specifically, as will be described later, combination is made such that the virtual object present in the front area is given preference over the real world image, and the real world image is given preference over the virtual object present in the back area.
The method of combining the real world image with the virtual world image is not limited. For example, the real world image may be rendered with the virtual object by a common virtual camera such that the real world image is present as an object in the same virtual space as the virtual object (more specifically, for example, by being attached as a texture to a virtual object).
In addition, in another example, a first rendered image may be obtained by rendering the real world image from a first virtual camera (hereinafter referred to as a “real world drawing camera”), and a second rendered image may be obtained by rendering the virtual object from a second virtual camera (hereinafter referred to as a “virtual world drawing camera”). Then, the first rendered image may be combined with the second rendered image such that the virtual object present in the front area is given preference over the real world image, and the real world image is given preference over the virtual object present in the back area.
In the first method, typically, the object to which the real world image is applied as a texture (hereinafter referred to as a “screen object”) may be placed at a position, which is the boundary between the front area and the back area, and may be drawn together with the virtual object, such as an enemy object, as viewed from the common virtual camera. In this case, typically, the object to which the real world image is attached is an object having a surface which has a certain distance from the virtual camera and whose normal line coincides with the direction of the line of sight of the virtual camera, and the real world image may be attached to this surface (hereinafter referred to as a “boundary surface”) as a texture.
In addition, in the second method, the second rendered image is obtained by rendering the virtual object while making a depth determination (determination by Z-buffering) based on the boundary surface between the front area and the back area (hereinafter referred to simply as a “boundary surface”), and the first rendered image is obtained by performing rendering by attaching the real world image as a texture to a surface which has a certain distance from the virtual camera and whose normal line coincides with the direction of the line of sight of the virtual camera. Then, when the second rendered image is combined with the first rendered image such that the second rendered image is given preference over the first rendered image, a combined image is generated, in which the real world image seems to be present on the boundary surface.
In either method, the relationships between the distance from, and the angle of view of, the virtual camera and the size of the object of the real world image (the size in the direction of the line of sight) are set such that the real world image includes the range of the field of view of the virtual camera.
It should be noted that hereinafter, the first method is referred to as a “first drawing method”, and the second method is referred to as a “second drawing method”.
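The preference order that both drawing methods realize can be summarized as a single per-pixel rule. The following is a minimal Python sketch (illustrative only, not the embodiment's code; the function name and the colour placeholders are assumptions): a front-area object hides the real world image, the unopened real world image hides the back area, and an opening exposes a back-area object or, failing that, the back wall of the back area.

```python
def composite_pixel(front_px, real_px, back_px, is_open):
    """Return the colour displayed for one pixel.

    front_px / back_px: colour of the nearest virtual object in the front /
    back area covering this pixel, or None if no object covers it.
    real_px: colour of the real world image at this pixel.
    is_open: whether the boundary surface is open at this pixel.
    """
    if front_px is not None:   # front-area object beats the real world image
        return front_px
    if not is_open:            # unopened boundary: real image beats the back area
        return real_px
    if back_px is not None:    # opening: back-area object shows through
        return back_px
    return "back_wall"         # otherwise the back wall of the back area
```

Applying this rule at every pixel yields a combined image in which the real world image seems to lie on the boundary surface between the two areas.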
In addition, when predetermined event occurrence conditions in the game have been satisfied, a part of the real world image is opened, and display is performed such that the virtual space in the back area can be viewed through the opening. Further, an enemy character object is present in the front area, and when predetermined conditions have been satisfied, a special enemy character (a so-called “boss character”) appears in the back area. This stage is completed by shooting down the boss character. Several stages are prepared, and the game is completed by completing all the stages. In contrast, when predetermined game over conditions have been satisfied, the game is over.
In a typical example of the first drawing method described above, for the opening in the real world image, data indicating the position of the opening may be set on the boundary surface of the screen object. More specifically, the non-transparency of a texture to be applied to the boundary surface (a so-called α-texture) may indicate open or unopen. Further, in the second drawing method, data indicating the position of the opening may be set on the boundary surface.
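As a sketch of how such an α-texture might be held and used, the following Python fragment (illustrative only; the grid size and the names `punch_opening` and `shade` are assumptions, not the embodiment's code) stores a per-texel non-transparency, where 1.0 means unopened and 0.0 means fully open, and blends the real world image with the background accordingly.

```python
W, H = 8, 8
# non-transparency per texel: 1.0 = unopened (real image covers), 0.0 = open
alpha_tex = [[1.0] * W for _ in range(H)]

def punch_opening(tex, cx, cy, radius):
    """Set non-transparency to 0.0 inside a circle, creating an opening."""
    for y in range(len(tex)):
        for x in range(len(tex[0])):
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                tex[y][x] = 0.0

def shade(real, back, alpha):
    """Real-image colour weighted by non-transparency; background by the rest."""
    return real * alpha + back * (1.0 - alpha)
```

An event that forms an opening would call `punch_opening` at the collision point; repairing the opening would write values back toward 1.0 in the same region.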
In addition, in the present embodiment, the open/unopen state is set in the real world image. Alternatively, other image processing may be performed on the real world image. For example, given image processing can be performed by common technical knowledge of those skilled in the art, such as attaching dirt to the real world image, or pixelating the real world image. Also in these examples, data may be set that indicates the position on the boundary surface where the image processing is performed.
<Game World>
As described above, in the game according to the present embodiment, a game screen is displayed that represents the virtual space having such an improved sense of depth that the existence of the virtual space (back area) is felt also behind the real image. It should be noted that the real world image may be a regular image captured by a monocular camera, or may be a stereo image captured by a compound eye camera.
In the game according to the present embodiment, an image captured by the outer capturing section 23 is used as the real world image. That is, a real world image in the periphery of the player captured by the outer capturing section 23 (a real-world moving image acquired in real time) is used during the game play. Accordingly, when the user (the player of the game) holding the game apparatus 10 has changed the imaging range of the outer capturing section 23 by changing the orientation of the game apparatus 10 in the left-right direction or the up-down direction during the game play, the real world image displayed on the upper LCD 22 also changes so as to follow the change in the imaging range.
Here, the change in the orientation of the game apparatus 10 during the game play is made roughly in accordance with: (1) the player's intention; or (2) the intention (scenario) of the game. When the player has intentionally changed the orientation of the game apparatus 10 during play, the real world image captured by the outer capturing section 23 changes. This makes it possible to intentionally change the real world image displayed on the upper LCD 22.
In addition, the angular velocity sensor 40 of the game apparatus 10 detects the change in the orientation of the game apparatus 10, and the orientation of the virtual camera is changed in accordance with the detected change. More specifically, the current orientation of the virtual camera is changed in the direction of the change in the orientation of the outer capturing section 23. Further, the current orientation of the virtual camera is changed by the amount of change (angle) in the orientation of the outer capturing section 23. That is, when the orientation of the game apparatus 10 is changed, the real world image changes, and the displayed range of the virtual space changes. That is, a change in the orientation of the game apparatus 10 changes the real world image in conjunction with the virtual world image. This makes it possible to display a combined image as if the real world is associated with the virtual world. It should be noted that in the present embodiment, the position of the virtual camera is not changed. Alternatively, the position of the virtual camera may be changed by detecting the movement of the game apparatus 10.
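A minimal sketch of this gyro-driven update, in Python (illustrative; the yaw-only simplification and the names `update_camera_yaw` and `forward_vector` are assumptions, not the embodiment's code): each frame, the detected angular velocity is integrated into the virtual camera's orientation, so the line of sight turns by the same amount as the apparatus.

```python
import math

FRAME_DT = 1.0 / 60.0  # one frame at 60 fps

def update_camera_yaw(camera_yaw, angular_velocity_y, dt=FRAME_DT):
    """Integrate the gyro's yaw rate (radians/second) into the virtual
    camera's yaw, so the camera turns with the apparatus."""
    return camera_yaw + angular_velocity_y * dt

def forward_vector(yaw):
    """Line-of-sight direction in the horizontal (XZ) plane for a yaw angle."""
    return (math.sin(yaw), 0.0, math.cos(yaw))
```

Turning the apparatus at π/2 radians per second for one second, for example, turns the virtual camera's line of sight by a quarter turn, so the displayed range of the virtual space follows the real world image.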
It should be noted that in the second drawing method, such a process of changing the orientation of the virtual camera is applied to the virtual world drawing camera, but is not applied to the real world drawing camera.
In addition, when an object is displayed at a local position, such as the end of the screen (e.g., the right end or the left end) during the game play, the player naturally intends to attempt to capture the object at the center of the screen, and therefore moves the game apparatus 10 (outer capturing section 23). As a result, the real world image displayed on the screen changes. Such a change in the orientation of the game apparatus 10 (a change in the real world image) can be naturally made by the user, by performing programming such that an object displayed in accordance with the scenario of the game is intentionally displayed at the end of the screen.
<Details of Virtual Space>
(Drawing of Real World Image)
The real world image captured by the outer capturing section 23 is combined with the virtual space such that the real world image seems to be present at the boundary position between the front area and the back area of the virtual space.
In the second drawing method, to display the real world image, a planar polygon to which a texture of the real world image is attached is placed in the virtual space. In the virtual space, the relative position of the planar polygon relative to the real world drawing camera is always fixed. That is, the planar polygon is placed so as to have a certain distance from the real world drawing camera, and is placed such that the normal direction of the planar polygon coincides with the point of view (optical axis) of the real world drawing camera.
In addition, the planar polygon is set to include the range of the field of view of the real world drawing camera. Specifically, the size of the planar polygon and the distance of the planar polygon from the virtual camera are set such that the planar polygon can include the range of the field of view of the virtual camera. The real world image is attached to the entire surface of the planar polygon on its virtual camera side. Thus, when the planar polygon to which the real world image is attached is drawn from the virtual camera, display is performed such that the real world image corresponds to the entire area of an image generated by the virtual camera.
It should be noted that as shown in
(Process of Opening Real World Image)
Further, in the game according to the present embodiment, an opening is provided in the real world image so that the player recognizes the existence of the back area behind the real world image. More specifically, the portion of the opening included in the real world image is displayed in a transparent or semi-transparent manner, and is combined with the world behind this portion. With this, in the game, the occurrence of a predetermined event triggers the opening (removal) of a part of the real world image, and an image representing another virtual space existing behind the real world image (back area) is displayed through the opening.
In the present embodiment, the boundary surface is a spherical surface, and such a process of displaying the back area by providing an opening in the real world image is achieved in the first drawing method by a texture attached to the inner surface of the spherical screen object described above, as shown in
As described above, the screen object to which the real world image is attached and on which the α-texture is set is drawn from the virtual camera, and therefore, drawing is performed such that the real world image having an opening is present on the boundary surface (the inner surface of the sphere). In the α-texture, the portion corresponding to the real world image is calculated by drawing from the virtual camera.
Also in the second drawing method, data indicating the position of an opening is set on the boundary surface of the virtual space (here, the inner surface of the sphere). Typically, data is set that indicates the presence or absence of an opening at each point of the boundary surface. More specifically, a spherical object similar to the above is placed in the virtual world where a virtual object is present, and a similar α-texture is set on the spherical object. Then, when the real world image is rendered, rendering is performed by applying to the planar polygon described above an α-texture corresponding to the portion drawn by the virtual world drawing camera, the corresponding α-texture included in the α-texture set on the spherical object. Alternatively, after a process is performed of making the opening transparent in the real world image using the α-texture corresponding to this portion, rendering is performed by the real world drawing camera such that the real world image after this process is attached to the planar polygon described above. It should be noted that this spherical object is an object used only to calculate an opening, but is an object not drawn when the virtual world is drawn.
It should be noted that in the present embodiment, data indicating an opening is data having information of each point of the boundary surface. Alternatively, the data may be information defining the position of an opening in the boundary surface by a calculation formula.
In the second space, a polygon (object) is placed, to which a background image (texture) of the second space included in the field of view of the virtual camera through an opening is to be attached. The background of the second space is occasionally referred to as a “back wall”.
In the first space, objects are placed so as to represent enemy characters and various characters representing bullets for shooting down the enemy characters. Also in the second space, predetermined objects (e.g., some of the enemy characters) are placed. The objects placed in the virtual space move in the virtual space in accordance with logic (algorithm) programmed in advance.
In addition, some of the enemy characters can move between the first space and the second space through an opening formed in the boundary surface, or can move between the first space and the second space by forming an opening in the boundary surface themselves. A particular event for forming an opening in the game is, for example, an event where an enemy character collides with the boundary surface (a collision event). Alternatively, the event is where in the progression of the game scenario, the boundary surface is destroyed based on predetermined timing, and an enemy character present in the second space enters the first space (an enemy character appearance event). Yet alternatively, an opening may be automatically formed in accordance with the passage of time. Yet alternatively, an opening may be repaired in accordance with a predetermined game operation of the player. For example, the player may reduce (repair) a formed opening by hitting the opening with a bullet.
In addition, as shown in
On the boundary surface 3, a camera image CI, which is a real world image captured by a real camera built into the game apparatus 10 (
In the present embodiment, the real world image is a planar view image. The virtual world image is also a planar view image. That is, a planar view image is displayed on the upper LCD 22. The real world image, however, may be a stereoscopically visible image. The present embodiment is not limited by the type of the real world image. It should be noted that in the present embodiment, the camera image CI may be a still image, or may be a real-time real world image (moving image). In the game according to the present embodiment, the camera image CI is a real-time real world image. Further, the camera image CI, which is a real world image, is not limited by the type of the camera. For example, the camera image CI may be an image obtained by a camera that can be externally connected to the game apparatus 10. Furthermore, in the present embodiment, the camera image CI may be an image acquired from the outer capturing section 23 (compound eye camera) and/or the inner capturing section 24 (monocular camera). In the game according to the present embodiment, the camera image CI is an image acquired using one of the left outer capturing section 23a and the right outer capturing section 23b of the outer capturing section 23 as a monocular camera.
As described above, the first space 1 is a space closer when viewed from the virtual camera than the boundary surface 3, and is also a space surrounded by the boundary surface 3. Further, the second space 2 is a space behind the boundary surface 3 as viewed from the virtual camera. Although not shown in
As described above, however, the image processing program according to the present invention is not limited to a game program, and these settings and rules do not limit the image processing program according to the present invention. It should be noted that as shown in
On the screen, display is performed such that the enemy object EO moves between the first space 1 and the second space 2, using an opening (hole) produced in the real world image due to the game scenario or an event.
It should be noted that in the image processing program according to the present embodiment, objects present in the first space 1 or the second space 2 are of three types: enemy objects EO, a bullet object BO, and a back wall BW. The image processing program according to the present invention, however, is not limited by the types of the objects. In the image processing program according to the present embodiment, objects are virtual physical bodies present in the virtual space (the first space 1 and the second space 2). For example, in the image processing program according to the present embodiment, given objects, such as obstacle objects, may be present.
<Examples of Forms of Display>
First, a description is given of an aiming cursor AL, which is displayed commonly in
For example, the aiming cursor AL is set so as to be fixed in the direction of the line of sight of the virtual camera, i.e., at the center of the screen of the upper LCD 22. In this case, as described above, in the present embodiment, the direction of the line of sight of the virtual camera (the virtual camera in the first drawing method or the virtual world drawing camera in the second drawing method) is changed in accordance with the imaging direction of the outer capturing section 23. Thus, the player can change the direction of the aiming cursor AL in the virtual space by changing the orientation of the game apparatus 10. Then, the player performs an attack operation by, for example, pressing the button 14B (A button) of the game apparatus 10 with the thumb of the right hand holding the lower housing 11. With this, the player fires the bullet object BO by the attack operation, to thereby vanquish an enemy object EO and repair an opening present in the boundary surface 3, in the game according to the present embodiment.
Next, descriptions are given separately of
In
The enemy object EO is, for example, an object obtained by using, as a texture, an image (e.g., a photograph of a person's face) stored in the data storage external memory 46 or the like of the game apparatus 10, and attaching the image to a three-dimensional polygon model of a predetermined shape (a polygon model representing a three-dimensional shape of a human head) by a predetermined method.
Further, in the present embodiment, the camera image CI displayed on the upper LCD 22 is, as described above, a real-time real world image captured by the real camera built into the game apparatus 10. Alternatively, for example, the camera image CI may be an image (e.g., a photograph of a landscape) stored in the data storage external memory 46 or the like of the game apparatus 10.
In the state where the camera image CI is displayed on the upper LCD 22, the enemy object EO can arbitrarily move. For example, the enemy object EO present in the first space 1 can move to the second space 2.
It should be noted that all the eight planar polygons are rendered. When the enemy object EO is present behind an unopen area in the boundary surface 3, the substance model of the enemy object EO is hidden by the boundary surface (screen object) 3 based on a depth determination, and therefore is not drawn. It is set, however, such that a silhouette model is not subjected to a depth determination with the boundary surface (screen object) 3, and therefore, even when the enemy object EO (and its silhouette model) is present behind an unopen area in the boundary surface 3, the silhouette model is drawn, and the shadow is displayed as shown in
On the upper LCD 22, display is performed such that images are combined together in the following preference order.
(1) An image of an object present in the first space 1; (2) in an unopen area in the real world image, a combined image of a shadow image of an object present in the second space 2 and the real world image (e.g., a semi-transparent shadow image is combined with the real world image); and (3) in an open area in the real world image, an image (substance image) of an object present in the second space 2 is preferentially combined, and a back wall image is combined in the back of the image. Depending on the state of the movement of the enemy object EO present in the second space 2, however, there may be a scene where the enemy object EO is present across an open area and an unopen area. That is, there may be a scene where the enemy object EO is present on the edge of an opening, as viewed from the virtual camera.
More specifically, as shown in
In addition, a depth determination is valid between each pair of: an enemy object; a bullet object; a semi-transparent enemy object; an effect object; and the screen object. A depth determination is valid “between the shadow planar polygon and the enemy object”, “between the shadow planar polygon and the bullet object”, “between the shadow planar polygon and the semi-transparent enemy object”, and “between the shadow planar polygon and the effect object”. A depth determination is invalid between the shadow planar polygon and the screen object.
When a depth determination is valid, rendering is performed in accordance with a normal perspective projection, and a hidden surface is removed in accordance with the depth direction from the virtual camera. When a depth determination is invalid, an object is rendered even if another object is present in an area closer to the virtual camera than that of the object.
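The valid/invalid depth determination can be sketched as a tiny Z-buffered fragment write in Python (illustrative only; a one-row framebuffer and the name `draw_fragment` are assumptions, not the embodiment's code):

```python
def draw_fragment(zbuffer, framebuffer, x, depth, color, depth_test=True):
    """Write one fragment at column x of a one-row framebuffer.

    With the depth test valid, the fragment lands only if it is nearer than
    what is already stored (hidden-surface removal). With the test invalid,
    as between the silhouette model and the screen object, the fragment is
    drawn regardless of what lies in front of it.
    """
    if depth_test and depth >= zbuffer[x]:
        return
    framebuffer[x] = color
    if depth_test:
        zbuffer[x] = depth
```

Drawing the silhouette model with `depth_test=False` is what lets the shadow appear even though the screen object (boundary surface) is nearer to the virtual camera.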
Then, in the present embodiment, when rendering is performed, it is possible to set a formula for rendering on an object-by-object basis. Specifically, the formulas are set as follows.
The substances of the enemy object, the bullet object, the semi-transparent enemy object, and the effect object are drawn by the following formula.
“color of object×non-transparency of object+color of background×(1−non-transparency of object)”
The screen object is drawn by the following formula.
“color of object (color of real world image)×non-transparency of texture of object+color of background×(1−non-transparency of texture of object)”
The silhouette model of the enemy object is drawn by the following formula.
“color of object×(1−non-transparency of material of background)+color of background×non-transparency of material of background”
It should be noted that when the enemy object is drawn, the background of the enemy object is the screen object (boundary surface 3), and therefore, in the above formula, “non-transparency of material of background” is “non-transparency of material of screen object (boundary surface 3)”.
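The three formulas above can be written directly as blending functions. The following Python sketch (illustrative; colour values are reduced to scalars for brevity, and the function names are assumptions) mirrors the formulas term by term:

```python
def draw_substance(obj_color, obj_alpha, bg_color):
    """color of object x non-transparency of object
       + color of background x (1 - non-transparency of object)"""
    return obj_color * obj_alpha + bg_color * (1.0 - obj_alpha)

def draw_screen(real_color, tex_alpha, bg_color):
    """color of object (color of real world image) x non-transparency of
       texture of object + color of background x (1 - that non-transparency)"""
    return real_color * tex_alpha + bg_color * (1.0 - tex_alpha)

def draw_silhouette(sil_color, bg_alpha, bg_color):
    """color of object x (1 - non-transparency of material of background)
       + color of background x non-transparency of material of background"""
    return sil_color * (1.0 - bg_alpha) + bg_color * bg_alpha
```

For the screen object, a texture non-transparency of 0 (an opening) lets the background colour pass through unchanged, while a non-transparency of 1 shows the real world image at full strength.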
Based on the various settings as described above, when the enemy object is present behind an unopen portion in the boundary surface, not the substance but the shadow of the enemy object is displayed. When the enemy object is present in front of the boundary surface, or when the enemy object is present in an opening in the boundary surface, not the shadow but the substance of the enemy object is displayed.
In addition, in the game according to the present embodiment, an opening present in the boundary surface 3 can be repaired by hitting it with the bullet object BO.
It should be noted that, as described above, on the upper LCD 22, the real-time real world image captured by the real camera built into the game apparatus 10 is displayed as an image such that the real-time real world image seems to be present on the boundary surface 3. A change in the direction of the game apparatus 10 in real space also changes the imaging range captured by the game apparatus 10, and therefore also changes the camera image CI displayed on the upper LCD 22. In this case, the game apparatus 10 changes the position and the direction of the virtual camera (the virtual world drawing camera in the second drawing method) in the virtual space in accordance with the motion of the game apparatus 10 in real space. With this, the enemy object EO displayed as if placed in real space, and an opening present in the boundary surface 3, are displayed as if placed at the same positions in real space even when the direction of the game apparatus 10 has changed in real space. For example, it is assumed that the imaging direction of the real camera of the game apparatus 10 is turned left. In this case, the display positions of the enemy object EO displayed on the upper LCD 22 and of the opening present in the boundary surface 3 move in the direction opposite to the turn in the imaging direction of the real camera (i.e., in the right direction); that is, the direction of the virtual camera (the virtual world drawing camera in the second drawing method) in the virtual space, where the enemy object EO and the opening present in the boundary surface 3 are placed, turns to the left as does that of the real camera. Thus, even when a change in the direction of the game apparatus 10 also changes the imaging range of the real camera, the enemy object EO and the opening present in the boundary surface 3 are displayed on the upper LCD 22 as if placed in a real space represented by the camera image CI.
<<Examples of Operations of Image Processing>>
Next, with reference to
<<Example of Image Processing>>
With reference to
Referring to
Next, the information processing section 31 performs an enemy-object-related process (step 53), and proceeds to the subsequent step 54. With reference to
Referring to
Then, when the conditions for the appearance of an enemy object EO have been satisfied, the information processing section 31 proceeds to the subsequent step 62. On the other hand, when the conditions for the appearance of an enemy object EO have not been satisfied, the information processing section 31 proceeds to the subsequent step 63.
In step 62, the information processing section 31 generates and initializes the enemy object data Df corresponding to the enemy object EO that has satisfied the conditions for the appearance, and proceeds to the subsequent step 63. For example, the information processing section 31 acquires the substance data Df1, the silhouette data Df2, the opening shape data Df3, and data of polygons corresponding to the enemy object EO, using the group of various programs Pa stored in the main memory 32. The information processing section 31 generates the enemy object data Df including the above items of data. Further, for example, the information processing section 31 initializes: data indicating the placement direction and the placement position of the polygons corresponding to the enemy object EO in the virtual space; and data indicating the moving velocity and the moving direction of the enemy object EO in the virtual space, the data included in the generated enemy object data Df. The initialization is made by a known method.
Next, the information processing section 31 moves the enemy object EO placed in the virtual space (step 63), and proceeds to the subsequent step 64. As an example, the information processing section 31 updates data indicating the placement position of the enemy object EO, the data included in the enemy object data Df, based on the data indicating the moving velocity and the moving direction of the enemy object EO in the virtual space, the data included in the enemy object data Df. At this time, the information processing section 31 updates the data indicating the placement direction of the enemy object EO, the data included in the enemy object data Df, based on the data indicating the moving direction. After the update, the information processing section 31 may update the data indicating the moving velocity and the moving direction of the enemy object EO in the virtual space, the data included in the enemy object data Df. The update of the data indicating the moving velocity and the moving direction allows the enemy object EO to move in the virtual space at a given velocity in a given direction.
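The per-frame update of step 63 can be sketched as follows in Python (illustrative only; the dictionary representation of the enemy object data and the name `move_enemy` are assumptions, not the embodiment's data structures): the placement position advances along the moving velocity, and the placement direction follows the moving direction.

```python
FRAME_DT = 1.0 / 60.0  # one frame at 60 fps

def move_enemy(enemy, dt=FRAME_DT):
    """Advance the enemy one frame along its moving velocity and direction,
    then let its placement direction follow the direction it moved in."""
    x, y, z = enemy["position"]
    vx, vy, vz = enemy["velocity"]
    enemy["position"] = (x + vx * dt, y + vy * dt, z + vz * dt)
    if (vx, vy, vz) != (0.0, 0.0, 0.0):
        enemy["direction"] = (vx, vy, vz)
    return enemy
```

Updating `enemy["velocity"]` after each call, as the text describes, would let the enemy move at a given velocity in a given direction from frame to frame.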
Next, the information processing section 31 determines whether or not the enemy object EO has reached a certain distance from the position of the virtual camera (the virtual camera in the first drawing method or the virtual world drawing camera in the second drawing method) (step 64). For example, the information processing section 31 compares the data indicating the placement position of the enemy object EO, the data included in the enemy object data Df, with data indicating the placement position of the virtual camera (the virtual camera in the first drawing method or the virtual world drawing camera in the second drawing method), the data included in the virtual camera data Dj. Then, when the two items of data have satisfied predetermined conditions (e.g., the distance between the placement position of the enemy object EO and the placement position of the virtual camera has fallen below a predetermined value), the information processing section 31 determines that the enemy object EO has reached the certain distance from the position of the virtual camera, and when the two items of data have not satisfied the predetermined conditions, the information processing section 31 determines that the enemy object EO has not reached the certain distance from the position of the virtual camera. It should be noted that hereinafter, when the term “virtual camera” is simply used without distinguishing between the first drawing method and the second drawing method, the “virtual camera” refers to the virtual camera in the first drawing method or the virtual world drawing camera in the second drawing method. When it is determined that the enemy object EO has reached the certain distance from the position of the virtual camera, the information processing section 31 proceeds to the subsequent step 65. 
On the other hand, when it is determined that the enemy object EO has not reached the certain distance from the position of the virtual camera, the information processing section 31 proceeds to step 66.
In step 65, the information processing section 31 performs a point deduction process, and proceeds to the subsequent step 66. For example, the information processing section 31 deducts a predetermined value from the score of the game indicated by the score data Dh, to thereby update the score data Dh using the score after the deduction. It should be noted that in the point deduction process, the information processing section 31 may perform a process of causing the enemy object EO having reached the certain distance from the position of the virtual camera, to disappear from the virtual space (e.g., initializing the enemy object data Df concerning the enemy object EO having reached the certain distance from the position of the virtual camera, such that the enemy object EO is not present in the virtual space). Further, the predetermined value in the point deduction process may be a given value, and for example, may be set by the group of various programs Pa stored in the main memory 32.
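The determination of step 64 above reduces to a distance comparison. A minimal Python sketch (illustrative; the tuple positions, the threshold parameter, and the name `reached_camera` are assumptions):

```python
def reached_camera(enemy_pos, camera_pos, threshold):
    """True when the enemy has come within `threshold` of the virtual camera,
    i.e., the condition that triggers the point deduction of step 65."""
    dx = enemy_pos[0] - camera_pos[0]
    dy = enemy_pos[1] - camera_pos[1]
    dz = enemy_pos[2] - camera_pos[2]
    # compare squared distances to avoid an unnecessary square root
    return dx * dx + dy * dy + dz * dz < threshold * threshold
```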
In step 66, the information processing section 31 determines whether or not the enemy object EO is to pass through the boundary surface 3 (the enemy object EO is to move between the first space 1 and the second space 2). For example, the information processing section 31 compares the data indicating the placement position of the enemy object EO, the data included in the enemy object data Df, with the data indicating the placement position of the boundary surface 3, the data included in the boundary surface data Dd. Then, when the two items of data have satisfied predetermined conditions, the information processing section 31 determines that the enemy object EO is to pass through the boundary surface 3. When the two items of data have not satisfied the predetermined conditions, the information processing section 31 determines that the enemy object EO is not to pass through the boundary surface 3. It should be noted that the predetermined conditions are, for example, that the coordinates (placement position) of the enemy object EO in the virtual space satisfy conditional equations for the spherical surface of the boundary surface 3. As described above, the data indicating the placement position of the boundary surface 3 in the virtual space indicates the existence range of the boundary surface 3 in the virtual space, and is, for example, conditional equations for the spherical surface (the shape of the boundary surface 3 according to the present embodiment). When the placement position of the enemy object EO satisfies the conditional equations, the enemy object EO is present on the boundary surface 3 in the virtual space. In the present embodiment, for example, in such a case, it is determined that the enemy object EO is to pass through the boundary surface 3.
When it is determined that the enemy object EO is to pass through the boundary surface 3, the information processing section 31 proceeds to the subsequent step 67. On the other hand, when it is determined that the enemy object EO is not to pass through the boundary surface 3, the information processing section 31 ends the process of this subroutine.
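The pass-through determination of step 66 reduces to testing whether the enemy's coordinates satisfy the conditional equation of the spherical boundary surface. A minimal sketch, assuming a sphere centered at an arbitrary point and a hypothetical tolerance band (needed in practice because a discretely moving object rarely lands exactly on the surface):

```python
def on_boundary_surface(pos, center, radius, tolerance=0.05):
    """Step 66: the object is taken to pass through the boundary surface 3
    when its placement position satisfies the sphere's conditional equation
    (x-cx)^2 + (y-cy)^2 + (z-cz)^2 = r^2, within a relative tolerance.
    The tolerance parameter is an assumption, not from the specification."""
    dist_sq = sum((p - c) ** 2 for p, c in zip(pos, center))
    return abs(dist_sq - radius ** 2) <= tolerance * radius ** 2
```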
In step 67, the information processing section 31 performs a process of updating the opening determination data included in the boundary surface data Dd, and ends the process of the subroutine. This process is a process for registering, in the boundary surface data Dd, information of an opening produced in the boundary surface 3 by the enemy object EO passing through the boundary surface 3. For example, in the first drawing method and the second drawing method, the information processing section 31 multiplies: the alpha values of the opening determination data of an area having its center at a position corresponding to the position where the enemy object EO passes through the boundary surface 3 in the virtual space, the opening determination data included in the boundary surface data Dd; by the alpha values of the opening shape data Df3. The opening shape data Df3 is texture data in which alpha values of “0” are stored and which has its center at the placement position of the enemy object EO. Accordingly, based on the multiplication, the alpha values of the opening determination data of the area where the opening is generated so as to have its center at the placement position of the enemy object EO (the coordinates of the position where the enemy object EO passes through the boundary surface 3) are “0”. That is, the information processing section 31 can update the state of the boundary surface (specifically, the opening determination data) without determining whether or not an opening is already present in the boundary surface 3. It should be noted that it may be determined whether or not an opening is already present at the position of the collision between the enemy object and the boundary surface. Then, when an opening is not present, an effect may be displayed such that a real world image corresponding to the collision position flies as fragments.
In addition, in the updating process of the opening determination data, the information processing section 31 may perform a process of staging the generation of the opening (e.g., causing a wall to collapse at the position where the opening is generated). In this case, the information processing section 31 needs to determine whether or not the position where the enemy object EO passes through the boundary surface 3 (the range where the opening is to be generated) has already been open. The information processing section 31 can determine whether or not the range where the opening is to be generated has already been open, by, for example, multiplying: data obtained by inverting the alpha values of the opening shape data Df3 from “0” to “1”; by the alpha values of the opening determination data multiplied as described above. That is, when the entire range where the opening is to be generated has already been open, the alpha values of the opening determination data are “0”. Thus, the multiplication results are “0”. On the other hand, when even a part of the range where the opening is to be generated is not open, there is a part where the alpha values of the opening determination data are not “0”. Thus, the multiplication results are other than “0”.
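The alpha-value multiplications described in steps 67 and in the staging check above can be sketched as follows, treating the opening determination data and the opening shape data Df3 as small 2D grids of alpha values ("1" = unopen, "0" = open). The grid representation and function names are illustrative assumptions:

```python
def punch_opening(opening_alpha, shape_alpha, cx, cy):
    """Step 67: multiply the determination alphas by the shape alphas of the
    area centred at the pass-through position. Because 0 * x = 0, no prior
    test for an existing opening is needed."""
    for j in range(len(shape_alpha)):
        for i in range(len(shape_alpha[0])):
            opening_alpha[cy + j][cx + i] *= shape_alpha[j][i]

def already_open(opening_alpha, shape_alpha, cx, cy):
    """Staging check: invert the shape alphas (0 -> 1) and multiply by the
    determination alphas. A result of all zeros means the entire range where
    the opening would be generated is already open."""
    total = 0.0
    for j in range(len(shape_alpha)):
        for i in range(len(shape_alpha[0])):
            total += (1 - shape_alpha[j][i]) * opening_alpha[cy + j][cx + i]
    return total == 0.0
```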
It should be noted that the opening shape data Df3 of the enemy object EO is texture data in which alpha values of “0” are stored so as to correspond to the shape of the enemy object EO. The information processing section 31 may convert the alpha values of the texture data into “1”, based on a predetermined event. When the above process is performed after the conversion, the alpha values of the opening shape data Df3 are “1”. Thus, the alpha values of the opening determination data are not changed. In this case, the enemy object EO passes through the boundary surface 3 without forming an opening. That is, this makes it possible to stage the enemy object EO as if it slips through the boundary surface 3 (see
Referring back to
Referring to
Next, the information processing section 31 determines whether or not the user of the game apparatus 10 has performed a firing operation (step 72). For example, with reference to the controller data Da1, the information processing section 31 determines whether or not the user has performed a predetermined firing operation (e.g., pressing the button 14B (A button)). When the firing operation has been performed, the information processing section 31 proceeds to the subsequent step 73. On the other hand, when the firing operation has not been performed, the information processing section 31 proceeds to the subsequent step 74.
In step 73, in accordance with the firing operation, the information processing section 31 places the bullet object BO at the position of the virtual camera in the virtual space, sets the moving velocity vector of the bullet object BO, and proceeds to the subsequent step 74. For example, the information processing section 31 generates the bullet object data Dg corresponding to the firing operation. Then, for example, the information processing section 31 stores the data indicating the placement position and the placement direction (the direction of the line of sight) of the virtual camera, the data included in the virtual camera data Dj, in the data indicating the placement position and the placement direction of the bullet object BO, the data included in the generated bullet object data Dg. Further, for example, the information processing section 31 stores a given value in the data indicating the moving velocity vector, the data included in the generated bullet object data Dg. The value to be stored in the data indicating the moving velocity vector may be set by the group of various programs Pa stored in the main memory 32.
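The generation of the bullet object data Dg in step 73 amounts to copying the virtual camera's pose and assigning a velocity along the line of sight. A minimal sketch; the `Bullet` type and the speed constant are hypothetical, as the specification leaves the velocity value to the program group:

```python
from dataclasses import dataclass

@dataclass
class Bullet:
    position: tuple   # placed at the virtual camera's placement position
    direction: tuple  # the camera's placement direction (line of sight)
    velocity: tuple   # moving velocity vector

BULLET_SPEED = 0.5  # illustrative assumption

def fire_bullet(camera_pos, camera_dir, speed=BULLET_SPEED):
    """Step 73: generate bullet object data from the virtual camera's pose,
    with the moving velocity vector pointing along the line of sight."""
    velocity = tuple(speed * d for d in camera_dir)
    return Bullet(position=camera_pos, direction=camera_dir, velocity=velocity)
```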
In step 74, the information processing section 31 determines whether or not the enemy object EO and the bullet object BO have made contact with each other in the virtual space. For example, by comparing the data indicating the placement position of the enemy object EO, the data included in the enemy object data Df, with the data indicating the placement position of the bullet object BO, the data included in the bullet object data Dg, the information processing section 31 determines whether or not the enemy object EO and the bullet object BO have made contact with each other in the virtual space. For example, when the data indicating the placement position of the enemy object EO and the data indicating the placement position of the bullet object BO have satisfied predetermined conditions, the information processing section 31 determines that the enemy object EO and the bullet object BO have made contact with each other. If not, the information processing section 31 determines that the enemy object EO and the bullet object BO have not made contact with each other. It should be noted that the predetermined conditions are, for example, that the distance between the placement position of the enemy object EO and the placement position of the bullet object BO falls below a predetermined value. The predetermined value may be, for example, a value based on the size of the enemy object EO.
When it is determined that the enemy object EO and the bullet object BO have made contact with each other, the information processing section 31 proceeds to the subsequent step 75. On the other hand, when it is determined that the enemy object EO and the bullet object BO have not made contact with each other, the information processing section 31 proceeds to the subsequent step 76.
In step 75, the information processing section 31 performs a point addition process, and proceeds to the subsequent step 76. For example, in the point addition process, the information processing section 31 adds predetermined points to the score of the game indicated by the score data Dh, to thereby update the score data Dh using the score after the addition. Further, in the point addition process, the information processing section 31 performs a process of causing both objects having made contact with each other based on the determination in step 74 described above (i.e., the enemy object EO and the bullet object BO), to disappear from the virtual space (e.g., initializing the enemy object data Df concerning the enemy object EO having made contact with the bullet object BO, and the bullet object data Dg concerning the bullet object BO having made contact with the enemy object EO, such that the enemy object EO and the bullet object BO are not present in the virtual space). It should be noted that the predetermined points in the point addition process may be a given value, and may be, for example, set by the group of various programs Pa stored in the main memory 32.
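The contact determination of step 74 and the point addition of step 75 can be sketched as follows. The size-based threshold matches the specification's note that the predetermined value may be based on the size of the enemy object EO; the point value itself is a hypothetical placeholder:

```python
import math

ADDITION_POINTS = 500  # illustrative assumption; left to the program group

def objects_in_contact(enemy_pos, bullet_pos, enemy_size):
    """Step 74: contact when the distance between the two placement positions
    falls below a value based on the enemy object's size (here, its radius)."""
    return math.dist(enemy_pos, bullet_pos) < enemy_size

def resolve_hit(score, points=ADDITION_POINTS):
    """Step 75: add points; the caller also initializes the enemy object data
    Df and bullet object data Dg so both objects disappear from the space."""
    return score + points
```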
In step 76, the information processing section 31 determines whether or not the bullet object BO has made contact with an unopen area in the boundary surface 3. For example, using the placement position of the bullet object BO included in the bullet object data Dg and the opening determination data, the information processing section 31 determines whether or not the bullet object BO has made contact with an unopen area in the boundary surface 3.
For example, the information processing section 31 determines whether or not the data indicating the placement position of the bullet object BO, the data included in the bullet object data Dg, satisfies conditional equations for the spherical surface of the boundary surface 3, as in the process of the enemy object EO. Then, when the data indicating the placement position of the bullet object BO does not satisfy the conditional equations for the spherical surface, the information processing section 31 determines that the bullet object BO has not made contact with the boundary surface 3. On the other hand, when the data indicating the placement position of the bullet object BO satisfies the conditional equations for the spherical surface of the boundary surface 3, the bullet object BO is present on the boundary surface 3 in the virtual space. At this time, the information processing section 31, for example, acquires the alpha values of the opening determination data of a predetermined area having its center at a position corresponding to the position where the bullet object BO is present on the boundary surface 3. The predetermined area is a predetermined area having its center at the contact point of the bullet object BO and the boundary surface 3. Then, when the alpha values of the opening determination data corresponding to at least a part of the predetermined area are alpha values of “1”, which correspond to an unopen area, the information processing section 31 determines that the bullet object BO has made contact with an unopen area in the boundary surface 3.
When it is determined that the bullet object BO has made contact with an unopen area in the boundary surface 3, the information processing section 31 proceeds to the subsequent step 77. On the other hand, when it is determined that the bullet object BO has not made contact with an unopen area in the boundary surface 3, the information processing section 31 proceeds to the subsequent step 78.
In step 77, the information processing section 31 performs a process of updating the opening determination data, and proceeds to the subsequent step 78. For example, in the updating process, the information processing section 31 updates, in the boundary surface 3, the alpha values of the opening determination data of the predetermined area having its center at the position corresponding to the placement position of the bullet object BO that has made contact with the unopen area in the boundary surface 3 based on the determination, to alpha values of “1”, which correspond to an unopen area. When the bullet object BO has made contact with the unopen area by this updating process, all the alpha values of the opening determination data in a predetermined area having its center at the contact point are updated to “1”. Accordingly, when there is a part where the alpha values of the opening determination data are set to “0” in the predetermined area having its center at the contact point, the alpha values of the opening determination data of this part are also updated to “1”. That is, when the bullet object BO has made contact with the edge of an opening provided in the boundary surface 3, the opening included in a predetermined area having its center at the position of the contact is repaired to the state of being unopen. Further, in the updating process, the information processing section 31 performs a process of causing the bullet object BO having made contact based on the determination in step 76, to disappear from the virtual space (e.g., initializing the bullet object data Dg concerning the bullet object BO having made contact with the unopen area in the boundary surface 3, such that the bullet object BO is not present in the virtual space). It should be noted that the predetermined area used in the updating process may be a given area, and may be, for example, set by the group of various programs Pa stored in the main memory 32.
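The unopen-area contact test of step 76 and the repair of step 77 can be sketched over the same grid representation of the opening determination data as above (an illustrative assumption; a square area stands in for the "predetermined area having its center at the contact point"):

```python
def bullet_hits_unopen_area(opening_alpha, cx, cy, radius):
    """Step 76: true when any alpha of "1" (unopen) lies within the area of
    the given radius around the bullet's contact point on the boundary."""
    for j in range(cy - radius, cy + radius + 1):
        for i in range(cx - radius, cx + radius + 1):
            if opening_alpha[j][i] == 1:
                return True
    return False

def repair_area(opening_alpha, cx, cy, radius):
    """Step 77: set every alpha in the area to "1", so that an opening whose
    edge the bullet touched is repaired to the state of being unopen."""
    for j in range(cy - radius, cy + radius + 1):
        for i in range(cx - radius, cx + radius + 1):
            opening_alpha[j][i] = 1
```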
In step 78, the information processing section 31 determines whether or not the bullet object BO has reached a predetermined position in the virtual space. The predetermined position may be, for example, the position where a back wall BW is present in the virtual space. In this case, for example, the information processing section 31 determines whether or not the data indicating the placement position of the bullet object BO, the data included in the bullet object data Dg, indicates that the bullet object BO has collided with the back wall BW.
Then, when the bullet object BO has reached the predetermined position, the information processing section 31 proceeds to the subsequent step 79. On the other hand, when the bullet object BO has not reached the predetermined position, the information processing section 31 ends the process of this subroutine.
In step 79, the information processing section 31 performs a process of causing the bullet object BO having reached the predetermined position based on the determination in step 78 described above, to disappear from the virtual space (e.g., initializing the bullet object data Dg concerning the bullet object BO such that the bullet object BO is not present in the virtual space), and ends the process of the subroutine.
Referring back to
Next, in accordance with the motion of the game apparatus 10, the information processing section 31 changes the position of the virtual camera in the virtual space (step 56), and proceeds to the subsequent step 57. For example, using the motion data Di, the information processing section 31 imparts the same changes as those in the imaging direction of the real camera of the game apparatus 10 in real space, to the virtual camera in the virtual space, to thereby update the virtual camera data Dj using the position and the direction of the virtual camera after the changes. As an example, if the imaging direction of the real camera of the game apparatus 10 in real space has turned left by A°, the direction of the virtual camera in the virtual space also turns left by A°. With this, the enemy object EO and the bullet object BO displayed as if placed in real space are displayed as if placed at the same positions in real space even when the direction and the position of the game apparatus 10 have changed in real space.
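The camera update of step 56 mirrors the real camera's change of direction onto the virtual camera. A minimal sketch of the left-turn example, assuming a right-handed coordinate frame with a left turn as a counter-clockwise rotation about the +Y axis (the frame convention is an assumption, not stated by the specification):

```python
import math

def rotate_camera_yaw(camera_dir, degrees):
    """Step 56: if the real camera of the game apparatus 10 turns left by A
    degrees in real space, turn the virtual camera's direction left by the
    same A degrees (rotation about the Y axis)."""
    a = math.radians(degrees)
    x, y, z = camera_dir
    return (x * math.cos(a) + z * math.sin(a),
            y,
            -x * math.sin(a) + z * math.cos(a))
```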
Next, the information processing section 31 performs a process of updating the display image (step 57), and proceeds to the subsequent step 58. With reference to
First, a description is given of the display image updating process in the first drawing method.
Referring to
Next, the information processing section 31 generates a display image by a process of rendering the virtual space (step 82), and ends the process of this subroutine. For example, the information processing section 31 generates an image obtained by rendering the virtual space where the boundary surface 3 (screen object), the enemy object EO, the bullet object BO, and the back wall BW are placed, to thereby update the rendered image data of the virtual space using the generated image, the rendered image data included in the rendered image data Dk. Further, the information processing section 31 updates the display image data Dl using the rendered image data of the virtual space. With reference to
As shown in
In the perspective projection described above, the object present in the second space 2 (the enemy object EO or the back wall BW in the present embodiment) is present behind the boundary surface 3. Here, the boundary surface 3 is the screen object to which the texture data of the real camera image is applied in the direction of the field of view (the range of the field of view) of the virtual camera C0 in step 81 described above. Further, as described above, to the texture data of the real camera image, the opening determination data corresponding to each position is applied. Accordingly, in the range of the field of view of the virtual camera C0, the real world image to which the opening determination data is applied is present.
It should be noted that in the present embodiment, for example, in an area having the opening determination data in which alpha values of “0” are stored (an open area), the information processing section 31 draws (renders) images of a virtual object and the back wall BW that are present in the second space 2, in an area that can be viewed through the open area. Further, in an area having the opening determination data in which alpha values of “0.2”, which correspond to an unopen area, are stored (an area handled as an area where alpha values of “1” are stored as an unopen area), the information processing section 31 does not draw the virtual object and the back wall BW that are present in the second space 2. That is, in the image to be displayed, the real world image attached in step 81 described above is drawn in the portion corresponding to this area.
Therefore, in an area having the opening determination data in which “0” is stored as viewed from the virtual camera C0, rendering is performed such that image data included in the substance data Df1 or the back wall image data De is drawn. Then, on the upper LCD 22, images of the virtual object and the back wall BW are displayed in the portion corresponding to this area.
In addition, in an area having the opening determination data in which alpha values of “0.2”, which indicate an unopen area, are stored as viewed from the virtual camera C0 (an area handled as an area where alpha values of “1” are stored as an unopen area), the virtual object and the back wall BW that are present in the second space 2 are not drawn. That is, in the image to be displayed on the upper LCD 22, the real world image is drawn in the portion corresponding to this area. For the shadow ES (silhouette model) of the enemy object EO present in the second space 2 described above, however, a depth determination is set to invalid between the boundary surface 3 and the shadow ES. Accordingly, alpha values of “1” of the silhouette model are greater than alpha values of “0.2” of the boundary surface 3, and therefore, the shadow ES is drawn in an area where alpha values of “1”, which indicate an unopen area, are stored (an area having the opening determination data in which alpha values of “0.2” are stored). With this, an image of the shadow ES is drawn on the real world image.
In addition, when the enemy object EO is present in the first space 1, the silhouette model of the shadow ES is sized and placed so as to be contained within the substance model, and a depth determination is set to valid between the substance model of the enemy object EO and the silhouette model of the shadow ES. The silhouette model is therefore hidden by the substance model, and is not drawn.
It should be noted that in the present embodiment, as shown in
In addition, the silhouette data Df2 included in the enemy object data Df corresponding to the enemy object EO according to the present embodiment is set such that the normal directions of a plurality of planar polygons correspond to radiation directions as viewed from the enemy object EO, and to each planar polygon, a texture of the silhouette image of the enemy object EO as viewed from the corresponding direction is applied. Accordingly, in the image processing program according to the present embodiment, the shadow of the enemy object EO in the virtual space image is represented as an image on which the orientation of the enemy object EO in the second space 2 is reflected.
In addition, the information processing section 31 performs the rendering process such that the image data included in the aiming cursor image data Dm is preferentially drawn at the center of the field of view of the virtual camera C0 (the center of the image to be rendered).
By the above process, the information processing section 31 renders with a perspective projection the enemy object EO, the bullet object BO, and the back wall BW that are placed in the virtual space, and generates a virtual world image as viewed from the virtual camera C0 (an image including the aiming cursor AL), to thereby update the rendered image data of the virtual space (step 82). Then, the information processing section 31 updates the display image data Dl, using the updated rendered image data of the virtual space.
Next, a description is given of the display image updating process in the second drawing method.
In
In the present embodiment, as shown in
First, a planar polygon is considered, on which a texture having i pixels is mapped in 1 unit of a coordinate system of the virtual space where the planar polygon is placed. In this case, a texture having i pixels×i pixels is mapped onto an area of 1 unit×1 unit of the coordinate system. Here, it is assumed that the display screen of the upper LCD 22 has horizontal W dots×vertical H dots, and the entire texture of the real camera image corresponds to the entire display screen having W dots×H dots. That is, it is assumed that the size of the texture data of the camera image is horizontal W pixels×vertical H pixels.
In this case, the planar polygon only needs to be placed such that 1 dot×1 dot on the display screen corresponds to a texture of 1 pixel×1 pixel in the real camera image, and the above coordinate system only needs to be defined as shown in
With the arrangement as described above, an area of 1 unit×1 unit in the above coordinate system corresponds to an area of i pixels×i pixels in the texture, and therefore, an area of horizontal (W/i)×vertical (H/i) in the planar polygon corresponds to the size of W pixels×H pixels in the texture.
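The arithmetic above can be checked with a short worked example: with i texture pixels mapped per coordinate unit, a screen of W×H dots requires a planar polygon of (W/i)×(H/i) units so that 1 texture pixel corresponds to 1 screen dot. The concrete numbers in the test below are illustrative, not the apparatus's actual screen size:

```python
def polygon_size(w_dots, h_dots, i_pixels_per_unit):
    """Units of the planar polygon needed so that a W x H dot screen maps
    1:1 onto a texture of W x H pixels at i pixels per coordinate unit."""
    return (w_dots / i_pixels_per_unit, h_dots / i_pixels_per_unit)
```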
As described above, the planar polygon placed in the coordinate system of the virtual space is rendered with a parallel projection such that 1 pixel in the real camera image (texture) corresponds to 1 dot on the display screen. Thus, a real world image is generated that corresponds to the camera image obtained from the real camera of the game apparatus 10.
It should be noted that as described above, the texture data of the real camera image included in the real world image data Dc is updated by the real camera image data Db. There is, however, a case where a size of horizontal A×vertical B of an image in the real camera image data Db does not coincide with a size of horizontal W×vertical H of the texture data. In this case, the information processing section 31 updates the texture data by a given method. For example, the information processing section 31 may update the texture data, using an image obtained by enlarging or reducing the sizes of horizontal A and vertical B of the image in the real camera image data Db so as to coincide with an image having a size of W×H (an image of the texture data). Alternatively, for example, it is assumed that the sizes of horizontal A and vertical B of the image in the real camera image data Db are greater than the sizes of horizontal W and vertical H of the texture data, respectively. In this case, for example, the information processing section 31 may update the texture data by clipping an image having a size of W×H (an image of the texture data) from a predetermined position in the image in the real camera image data Db. Yet alternatively, for example, it is assumed that at least one of the sizes of horizontal A and vertical B of the image in the real camera image data Db is smaller than the sizes of horizontal W and vertical H in the texture data. In this case, for example, the information processing section 31 may update the texture data by enlarging the image in the real camera image data Db so as to exceed the size of the texture data, and subsequently clipping an image having a size of W×H (an image of the texture data) from a predetermined position in the enlarged image.
In addition, in the present embodiment, the horizontal×vertical size of the display screen of the upper LCD 22 coincides with the horizontal×vertical size of the texture data in the real camera image; however, these sizes do not need to coincide with each other. In this case, the size of the display screen of the upper LCD 22 and the size of the real world image do not coincide with each other. The information processing section 31 may change the size of the real world image by a known method when the real world image is displayed on the display screen of the upper LCD 22.
Next, as shown in
Here, first, a description is given of the position of the boundary surface 3 (opening determination data). As described above, in the image processing program according to the present embodiment, a real image in which an opening is provided is generated by multiplying the opening determination data by color information of the real world image (the rendered image data of the real camera image). Accordingly, for example, 1 horizontal coordinate unit×1 vertical coordinate unit in the rendered image data of the real camera image (see the positional relationships in the planar polygon in
tan θ = (H/2i)/D = H/(2Di)
Thus, when a virtual world image is generated by performing a perspective projection on the enemy object EO and the like described later, taking the boundary surface 3 into account, the settings of the virtual world drawing camera C2 for generating the virtual world image are "the angle of view θ in the Y-axis direction = tan⁻¹(H/(2Di)), and the aspect ratio = W:H". Then, the boundary surface 3 (specifically, the opening determination data indicating the state of the boundary surface 3) is placed at the view coordinates of Z=Z0 from the virtual world drawing camera C2. With this, the range of the boundary surface 3 in the field of view of the virtual world drawing camera C2 has a size of W×H.
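The angle-of-view derivation above can be verified numerically: half the boundary surface's height in coordinate units is H/(2i), and dividing by the view-space distance D gives tan θ. A sketch with illustrative numbers (the values of H, i, and D below are assumptions chosen for a clean result, not the apparatus's actual parameters):

```python
import math

def y_axis_angle_of_view(h_dots, i_pixels_per_unit, d_distance):
    """theta = atan((H / 2i) / D), in degrees: the Y-axis angle of view that
    makes the boundary surface at distance D span exactly H dots."""
    half_height_units = h_dots / (2 * i_pixels_per_unit)
    return math.degrees(math.atan(half_height_units / d_distance))
```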
Next, the rendering process of the virtual space is described. The information processing section 31 generates an image obtained by rendering the virtual space such that the boundary surface 3 is present at the position described above. The information processing section 31 performs the rendering process taking into account the combination of the real world image to be made later. An example of the rendering process is specifically described below.
The information processing section 31 renders with a perspective projection from the virtual world drawing camera C2 the enemy object EO, the bullet object BO, and the back wall BW that are placed in the virtual space, such that the boundary surface 3 is present as shown in
In the perspective projection described above, the object present in the second space 2 (the enemy object EO or the back wall BW in the present embodiment) is present behind the boundary surface 3. Here, the opening determination data is set in the boundary surface 3. As described above, the opening determination data is texture data of a rectangle in which alpha values are stored, and sets of coordinates in the texture data correspond to positions on the boundary surface in the virtual space. Thus, the information processing section 31 can specify an area of the opening determination data in the range of the field of view of the virtual world drawing camera C2, the area corresponding to the object present in the second space 2.
It should be noted that in the present embodiment, for example, in an area having the opening determination data in which alpha values of “0” are stored (an open area), the information processing section 31 draws (renders) images of a virtual object and the back wall that are present in the second space 2, in an area that can be viewed through the open area. Further, in an area having the opening determination data in which alpha values of “0.2”, which correspond to an unopen area, are stored (an area handled as an area where alpha values of “1” are stored as an unopen area), the information processing section 31 does not draw the virtual object and the back wall that are present in the second space 2. That is, in the image to be displayed, a real world image is drawn in the portion corresponding to this area by a combination process in step 85 described later.
Therefore, in an area having the opening determination data in which "0" is stored as viewed from the virtual world drawing camera C2, rendering is performed such that image data included in the substance data Df1 or the back wall image data De is drawn. Then, on the upper LCD 22, images of the virtual object and the back wall are displayed in the portion corresponding to this area by the combination process in step 85 described later.
In addition, in an area having the opening determination data in which alpha values of “0.2”, which indicate an unopen area, are stored as viewed from the virtual world drawing camera C2 (an area handled as an area where alpha values of “1” are stored as an unopen area), the virtual object and the back wall that are present in the second space 2 are not drawn. That is, in the image to be displayed on the upper LCD 22, a real world image is drawn in the portion corresponding to this area by the combination process in step 85 described later. For the shadow ES (silhouette model) of the enemy object EO described above, however, a depth determination is set to invalid between the shadow ES and the boundary surface 3. Accordingly, alpha values of “1” of the silhouette model are greater than alpha values of “0.2” of the boundary surface 3, and therefore, the shadow ES is drawn in an area where alpha values of “1”, which indicate an unopen area, are stored. With this, the shadow ES of the enemy object EO is drawn on the real world image. Further, when the enemy object EO is present in the first space 1 such that the silhouette model of the enemy object EO has a size included in the substance model and is placed in such a manner, and such that a depth determination is set to valid between the substance model of the enemy object EO and the silhouette model, the silhouette model is hidden by the substance model, and therefore is not drawn.
It should be noted that in the present embodiment, as shown in
It should be noted that the silhouette data Df2 included in the enemy object data Df corresponding to the enemy object EO according to the present embodiment is set such that the normal directions of a plurality of planar polygons correspond to radial directions as viewed from the enemy object, and to each planar polygon, a texture of the silhouette image of the enemy object as viewed from the corresponding direction is applied. Accordingly, in the image processing program according to the present embodiment, the shadow ES of the enemy object EO in the virtual space image is represented as an image that reflects the orientation of the enemy object in the second space 2.
By the above process, the information processing section 31 renders, with a perspective projection, the enemy object EO, the bullet object BO, and the back wall BW that are placed in the virtual space, and generates a virtual world image as viewed from the virtual world drawing camera C2, to thereby update the rendered image data of the virtual space (step 84 of
Next, the information processing section 31 generates a display image obtained by combining the real world image with the virtual space image (step 85), and ends the process of this subroutine.
For example, the information processing section 31 generates a combined image of the real world image and the virtual space image by combining the rendered image data of the real camera image with the rendered image of the virtual space such that the rendered image of the virtual space is given preference. Then, the information processing section 31 generates a display image by preferentially combining the image data included in the aiming cursor image data at the center of the combined image (the center of the field of view of the virtual world drawing camera C2) (
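The combination step described above can be sketched as the following minimal illustration. The function name, the use of `None` to mark pixels where nothing was rendered, and the grid representation are all assumptions made for this sketch; they are not the program's actual data structures.

```python
def combine_display_image(real, virtual, cursor, center):
    """Combine a real camera image with a rendered virtual space image.

    `real` and `virtual` are 2-D grids of pixels; None in `virtual`
    means nothing was rendered at that pixel. The virtual image is
    given preference over the real image, and the aiming cursor is
    composited last, at `center` (the center of the field of view of
    the virtual world drawing camera).
    """
    h, w = len(real), len(real[0])
    out = [[virtual[y][x] if virtual[y][x] is not None else real[y][x]
            for x in range(w)] for y in range(h)]
    cy, cx = center
    out[cy][cx] = cursor            # cursor drawn with top priority
    return out
```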
As described above, the updating process of the display image (subroutine) is completed by the first drawing method or the second drawing method.
Referring back to
Next, the information processing section 31 determines whether or not the game is to be ended (step 59). Conditions for ending the game may be, for example: that the predetermined conditions described above (the game is completed or the game is over) have been satisfied; or that the user has performed an operation for ending the game. When the game is not to be ended, the information processing section 31 proceeds to step 52 described above, and repeats the same process. On the other hand, when the game is to be ended, the information processing section 31 ends the process of the flow chart.
<Operations and Effects of Image Processing According to First Embodiment>
As described above, in the image processing program according to the present embodiment, as shown in the processes of
In addition, when the operation of the user on the GUI at the start of the game is an instruction to “acquire a face image with the inner capturing section 24” (“Yes” in step 9 of
In addition, based on the image processing program according to the present embodiment, when the user has succeeded in the first game, the user can collect, in the saved data storage area Do, various face images, such as a face image of the user themselves, face images of people around the user, a face image included in an image obtained by a video device, and a face image of a living thing owned by the user. The game apparatus 10 can display the collected face images, for example, on the screen as shown in
In addition, based on the image processing program according to the present embodiment, it is possible to generate an enemy object EO by texture-mapping a face image selected from among the collected face images onto the facial surface portion of the enemy object EO, and execute the game. The user can freely determine a cast by attaching a face image selected from among the collected face images to the enemy object EO that appears in the game. Accordingly, during the execution of the game, the user can enhance the possibility of becoming increasingly enthusiastic about the game, by an effect obtained from the face of the enemy object EO.
In addition, based on the image processing program according to the present embodiment, in the case where an enemy object EO is generated, when a face image has entered the state of being selected in order to be attached to the enemy object EO, face images related to the face image in the state of being selected show reactions. Accordingly, it is possible to give the user who determines the cast of the enemy object EO a sense of affinity for the virtual reality world, a familiarity with the face images displayed as a list, emotions similar to those toward people in the real world, and the like.
It should be noted that in the first embodiment, an enemy object EO is generated by attaching a face image to the enemy object EO. Such a process, however, is not limited to the generation of an enemy object EO, and can also be applied to the generation of character objects in general that appear in the game. For example, a face image acquired by the user may be attached to an agent who guides an operation on the game apparatus 10 or the progression of the game. Alternatively, a face image acquired by the user may be attached to characters that appear in the game apparatus 10, such as: a character object representing the user themselves; a character object that appears in the game in a friendly relationship with the user; a character object representing the owner of the game apparatus; and the like.
In the above descriptions, a person's face is assumed to be a face image; however, the present invention is not limited to a face image of a person, and can also be applied to a face image of an animal. For example, face images may be collected by performing the face image acquisition process described in the first embodiment, in order to acquire face images of various animals, such as mammals, e.g., dogs, cats, and horses, birds, fish, reptiles, amphibians, and insects. For example, with the game apparatus 10, it is possible to represent the relationships between people and animals in the real world, such that, as shown in the relationships between the people on the screen shown in
It should be noted that the relationship between pet and master, the relationships between the master and their family, and the like may be defined in the face image management information Dn1 through a UIF (user interface), so that reference can be made to these relationships as the relationships between face images. It may also be set such that emotions such as love and hate, or good and bad feelings toward the pet of a loved person or the pet of a hated person, can be defined. Alternatively, for example, a setting may be stored in the face image management information Dn1 such that an animal whose face image has been successfully saved in the saved data storage area Do when the game executed with a face image of the master has ended in success is in an intimate relationship with the master. With the game apparatus 10, the user can execute a game in which a character object is generated and on which consciousness of the real world is reflected, based on the various face images collected as described above.
In addition, in the image processing program according to the present embodiment, as shown in
In addition, in the image processing program according to the present embodiment, as shown in
In addition, in the image processing program according to the present embodiment, display is performed such that a real world image obtained from a real camera and a virtual space image including an object present behind the real world image are combined.
Therefore, in the image processing program according to the present embodiment, it is possible to generate an image capable of attracting the user's interest, by performing drawing so as to represent unreality in a background in which a real world image is used.
In addition, when an object is present behind the real world image in a combined image to be displayed (e.g., the enemy object EO present in the second space 2), a substance image of the object is displayed in the real world image (boundary surface 3), in an area where an opening is present. Further, a shadow image of the object is displayed in the real world image, in an area where an opening is not present (see
Therefore, in the image processing program according to the present embodiment, it is possible to generate an image in which the user can recognize the activities, such as the number and the moving directions, of objects present behind the real world image.
In addition, in the image processing program according to the present embodiment, an image of an unreal space, such as an image of outer space, can be used as image data of the back wall BW. The image of the unreal space can be viewed through an opening in the real world image. The opening is specified at a position in the virtual space. Then, the orientation of the real camera and the orientation of the virtual camera are associated together.
Therefore, in the image processing program according to the present embodiment, it is possible to provide an opening at a position corresponding to the orientation of the real camera, and represent the opening at the same position in the real world image. That is, in the image processing program according to the present embodiment, even when the orientation of the real camera has changed, the opening is represented at the same position in real space. This makes it possible to generate an image that can be recognized by the user as if real space is linked with the unreal space.
In addition, the real world image in which an opening is represented is generated by the multiplication of the real world image obtained from the real camera and alpha values.
Therefore, in the image processing program according to the present embodiment, it is possible to generate and represent an opening by a simplified method.
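The multiplication described above can be sketched as follows, for a single row of pixels. The function name and the flat-list representation are illustrative assumptions; in practice this kind of per-pixel multiply is typically left to the GPU's blending hardware.

```python
def apply_opening(real_pixels, alphas):
    """Multiply each real-world pixel value by the stored alpha value.

    An alpha of 0 (open area) erases the real world image so the
    second space can show through; an alpha treated as 1 (unopen
    area) keeps the real world image intact.
    """
    return [p * a for p, a in zip(real_pixels, alphas)]
```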
In addition, an opening in the real world image that is created when the enemy object EO passes through the boundary surface 3 is generated by multiplying the opening shape data Df3 included in the enemy object data Df by the opening determination data corresponding to a predetermined position.
Therefore, in the image processing program according to the present embodiment, it is possible to set an opening corresponding to the shape of a character having collided, by a simplified method.
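The element-wise multiplication of the opening shape data into the opening determination data can be sketched as follows. The grid layout, the `(row, column)` anchor, and the in-place update are assumptions for illustration, not the embodiment's actual data format.

```python
def punch_opening(determination, shape, top_left):
    """Update opening determination data where a character has collided.

    `determination` is a 2-D grid of alpha values and `shape` is a
    smaller grid of the character's opening shape (0 where it opens a
    hole, 1 where it does not). The shape is multiplied element-wise
    into the determination data at the collision position `top_left`,
    so zeros in the shape carve an opening of the character's outline.
    """
    oy, ox = top_left
    for y, row in enumerate(shape):
        for x, s in enumerate(row):
            determination[oy + y][ox + x] *= s
    return determination
```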
In addition, in the image processing program according to the present embodiment, it is possible to draw a shadow image by comparing alpha values. Further, it is possible to switch between the on/off states of the drawing of a shadow image by changing alpha values set in the silhouette data Df2.
Therefore, in the image processing program according to the present embodiment, it is possible to leave the drawing of a shadow image to the GPU, and also switch between the display and hiding of a shadow by a simplified operation.
Second Embodiment

With reference to
In addition, on the background of the screen, for example, an enemy object EO1 is displayed, the enemy object EO1 created in accordance with the procedure described with reference to the example of the screen shown in
In addition, in
In addition, to any one or more of the enemy objects EO1 through EO7, e.g., to the enemy object EO6, the same face image as that of the enemy object EO1 is attached. On the other hand, to the other enemy objects EO2 through EO5, face images different from that of the enemy object EO1 are attached.
In addition, on the screen shown in
That is, the enemy object EO1 freely moves around on the screen shown in
Therefore, when the user has changed the orientation of the game apparatus 10 relative to the enemy objects EO1 through EO7 that freely move around in the virtual space, the user can point the aiming cursor AL displayed on the screen at the enemy objects EO1 through EO7. When the user has pressed the operation button 14B (A button) corresponding to a trigger button in the state where the aiming cursor AL is pointed at the enemy objects EO1 through EO7, the user can fire a bullet at the enemy objects EO1 through EO7.
In the game according to the present embodiment, however, an attack on, among the enemy objects EO1 through EO7, those other than one having the same face image as that of the enemy object EO1 is not a valid attack. For example, when the enemy object EO1 or an enemy object having the same face image as that of the enemy object EO1 has been attacked by the user, the user scores points, or the enemy objects lose points. Further, when the enemy objects EO2 through EO7, each of which is smaller in dimensions than the enemy object EO1, have been attacked by the user, the user scores more points. Alternatively, when the enemy objects EO2 through EO7 have been attacked, the enemy objects lose more points than when the enemy object EO1 has been attacked. An attack on, among the enemy objects EO2 through EO7, those having face images different from that of the enemy object EO1, however, is an invalid attack. That is, the user is obliged to attack an enemy object having the same face image as that of the enemy object EO1. Hereinafter, an enemy object having a face image different from that of the enemy object EO1 is referred to as a “misidentification object”. It should be noted that in
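The attack-resolution rule described above can be sketched as follows. The function name, the return values, and the concrete point amounts are placeholders invented for this illustration; the specification defines only that mismatched faces are invalid and that smaller valid targets are worth more.

```python
def resolve_attack(target_face, key_face, is_large):
    """Score an attack in the second embodiment's game (illustrative).

    Only objects carrying the same face image as the key enemy object
    EO1 can be attacked validly; an attack on a misidentification
    object (different face) scores nothing. Smaller valid targets
    score more than the large EO1 itself.
    """
    if target_face != key_face:
        return 0                    # misidentification object: invalid
    return 10 if is_large else 30   # small valid targets score more
```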
With reference to
Next, the information processing section 31 generates misidentification objects (step 31). The misidentification objects may be generated by, for example, attaching face images other than the face image of the enemy objects EO specified in step 30, to the facial surface portion of the head shape of the enemy objects EO. The specification of the face images of the misidentification objects is not limited. For example, the face images of the misidentification objects may be selected from among face images already acquired by the user, as shown in
Next, the information processing section 31 starts the game of the enemy objects EO and the misidentification objects (step 32). Then, the information processing section 31 determines whether or not the user has made an attack (step 33). The attack of the user is detected by a trigger input, for example, the pressing of the operation buttons 14B in the state where the aiming cursor AL shown in
Then, the information processing section 31 determines whether or not the game is to be ended (step 37). The game is ended, for example, when the user has destroyed all the propagating enemy objects EO, or when the score of the user has exceeded a reference value. Alternatively, the game is ended, for example, when the enemy objects EO have propagated so as to exceed a predetermined limit, or when the points lost by the user have exceeded a predetermined limit. When the game is not to be ended, the information processing section 31 returns to step 33.
As described above, based on the image processing program according to the present embodiment, it is possible to create enemy objects EO using collected face images, and execute the game. This enables the user to execute a game in a virtual reality world, based on face images of people existing in the real world.
In addition, based on the game apparatus 10 according to the present embodiment, the game is executed so as to confuse the user by combining appropriate enemy objects EO with misidentification objects. Accordingly, the user needs to correctly recognize the face images of the enemy objects EO. As a result, the user requires the ability to distinguish the enemy objects EO, as well as concentration. Thus, the game apparatus 10 according to the present embodiment makes it possible to give the user a sense of tension when the game is executed, or to stimulate the user's brain while the user recognizes the face images.
In the second embodiment, for example, as in the cast determination process in step 16 and the process of the execution of the game in step 18 of
With reference to
That is, as in the case described in the second embodiment, the information processing section 31 of the game apparatus 10 executes the game according to the present embodiment as an example of the processing of the cast determination process in the first embodiment (step 16 of
In addition, in the second embodiment, a description is given of an example of the game processing of the game where a face image is acquired, and enemy objects EO including the acquired face image and misidentification objects are used. Then, in the second embodiment, it is determined that an attack on a misidentification object is an invalid attack.
In the present embodiment, a description is given of a game where, when an attack on a misidentification object has been detected, a part of the face image of an enemy object EO is replaced with a part of another face image, instead of the game according to the second embodiment. For example, the enemy object EO is formed by combining the peripheral portion of the enemy object EO (see H13 in
In the game according to the present embodiment, for example, it is easy to point the aiming cursor AL at the enemy object EO1, which is larger in dimensions, and therefore, even when the user has attacked the enemy object EO1 and a bullet has hit the enemy object EO1, the points scored by the user or the damage inflicted on the enemy object EO1 are small. Further, it is difficult to point the aiming cursor AL at the enemy object EO11, which is smaller in dimensions, and therefore, when the user has attacked the enemy object EO11 and a bullet has hit the enemy object EO11, the points scored by the user or the damage inflicted on the enemy object EO11 are greater than those in the case of the enemy object EO1.
In addition, in the present embodiment, when the misidentification objects EO12 through EO16 have been attacked by the user, a part of the face image attached to the enemy object EO1 is replaced with that of another face image. For example, in the case of
With reference to
Then, when having detected an attack on the enemy objects EO (step 43), the information processing section 31 reduces the deformation of the face image, and brings the face image of the enemy object EO1 closer to the face image that is originally attached (step 44). In this case, when the enemy object EO11 shown in
On the other hand, when having detected an attack on the misidentification objects EO12 through EO16 and the like (step 45), the information processing section 31 advances the switching of parts of the face image attached to the enemy object EO1. That is, the information processing section 31 additionally deforms the face image (step 46). Further, when having detected a state other than an attack on the enemy objects EO and an attack on the misidentification objects, the information processing section 31 performs another process (step 47). Said another process is similar to that in the case of step 34 of
Then, the information processing section 31 determines whether or not the game is to be ended (step 48). It is determined that the game is to be ended, for example, when the deformation of the face image of the enemy object EO has exceeded a reference limit. Alternatively, it is determined that the game is to be ended, for example, when the user has destroyed the enemy objects EO and scored points of a predetermined limit. When the game is not to be ended, the information processing section 31 returns to step 43.
As described above, based on the game apparatus 10 according to the present embodiment, when the user has succeeded in attacking the enemy objects EO, the deformed face image is restored. Further, when the misidentification objects have been attacked by the user, the deformation of the face image is advanced further. Accordingly, the user needs to tackle the game with concentration, and this increases a sense of tension during the execution of the game, and therefore makes it possible to train concentration. Further, based on the game apparatus 10 according to the present embodiment, a face image of the user or a face image of a person close to the user is deformed. This makes it possible to increase the possibility that the user becomes enthusiastic about a game in a virtual reality world on which the real world is reflected.
In the third embodiment, for example, as in the process of step 30 of
With reference to
That is, as in the case described in the second embodiment, the information processing section 31 of the game apparatus 10 can execute the game according to the present embodiment as an example of the processing of the cast determination process in the first embodiment (step 16 of
In addition, in the third embodiment, a description is given of an example of the game processing of the game where a face image is acquired, and enemy objects EO including the acquired face image and misidentification objects are used. Further, in the third embodiment, when an attack on a misidentification object has been detected, a part of the face image of one of the enemy objects EO is replaced with a part of another face image.
In the present embodiment, instead of the game according to the second embodiment and the game according to the third embodiment, a description is given of a process where, at the start of the game, a part of the face included in an enemy object EO has already been replaced with a part of another face image, and when the user has won the game, the part of the face included in the enemy object EO returns to that of the original face image.
In addition, on the screen shown in
Before the start of the game, however, parts of the faces are switched between the enemy object EO20 and the enemy objects EO21 through EO25. For example, noses are switched between the enemy object EO20 and the enemy object EO22. Further, for example, left eyebrows and left eyes are switched between the enemy object EO20 and the enemy object EO25. The switching of parts of the faces may be performed, for example, on a polygon-by-polygon basis when the face images are texture-mapped, using the polygons that form the three-dimensional models onto which the face images are texture-mapped.
Such switching of parts of the faces may be performed by, for example, randomly changing the number of parts to be switched and target parts to be switched. Further, for example, the number of parts to be switched may be determined in accordance with a success or a failure in, and the score of, the game that has already been executed or another game. For example, when the performance, or the degree of achievement, of the user has been excellent in the game that has already been executed, the number of parts to be switched is decreased. When the performance, or the degree of achievement, of the user has been poor, the number of parts to be switched is increased. Alternatively, the game may be divided into levels, and the number of parts to be switched may be changed in accordance with the level of the game. For example, the number of parts to be switched is decreased at an introductory level, whereas the number of parts to be switched is increased at an advanced level.
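One way to read the paragraph above is as a mapping from past performance and game level to a part count; the following sketch makes that concrete. The function name, the level labels, and the specific numbers are entirely invented for illustration — the specification states only the direction of the adjustment (better performance or a lower level means fewer switched parts).

```python
def parts_to_switch(score_ratio, level):
    """Choose how many face parts to switch (a hedged example).

    `score_ratio` in [0, 1] summarizes the user's performance in a
    previously executed game; higher performance reduces the count.
    Higher game levels increase it. Both the level names and the
    counts are placeholder assumptions.
    """
    base = {'introductory': 1, 'intermediate': 2, 'advanced': 4}[level]
    handicap = 0 if score_ratio >= 0.5 else 2   # poor performance adds parts
    return base + handicap
```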
In addition, a face image of the user may be acquired by the inner capturing section 24, face recognition may be performed, and the number of parts to be switched may be determined in accordance with the expression obtained from the recognition. For example, a determination may be made on: the case where the face image is smiling; the case where the face image is surprised; the case where the face image is sad; and the case where the face image is almost expressionless. Then, the number of parts to be switched may be determined in accordance with the determination result. The expression of the face may be determined in accordance with: the dimensions of the eyes; the area of the mouth; the shape of the mouth; the positions of the contours of the cheeks relative to reference points, such as the centers of the eyes, the center of the mouth, and the nose; and the like. For example, an expressionless face image of the user may be registered in advance, and the expression of the user's face may be estimated from the difference values between: values obtained when a face image of the user has been newly acquired, such as the dimensions of the eyes, the area of the mouth, the shape of the mouth, the positions of the contours of the cheeks from the reference points, and the like; and values obtained from the face image registered in advance. It should be noted that such a method of estimating the expression of the face is not limited to the above procedure, and various procedures can be used.
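The expression-estimation idea above — comparing measured feature values against a registered expressionless face — can be sketched as follows. The feature names, the threshold, and the decision order are assumptions chosen for illustration; the text itself leaves the estimation procedure open.

```python
def estimate_expression(measured, neutral, threshold=0.15):
    """Estimate an expression from differences against a neutral face.

    `measured` and `neutral` map feature names (eye size, mouth area,
    and so on, as listed in the text) to normalized values. The
    expression is chosen from the signed differences; the specific
    rules and threshold here are illustrative only.
    """
    diff = {k: measured[k] - neutral[k] for k in neutral}
    if diff.get('mouth_area', 0) > threshold:
        return 'smiling'        # mouth much larger than the neutral face
    if diff.get('eye_size', 0) > threshold:
        return 'surprised'      # eyes much wider than the neutral face
    if diff.get('mouth_area', 0) < -threshold:
        return 'sad'            # mouth much smaller than the neutral face
    return 'expressionless'
```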
In the present embodiment, in the game apparatus 10, the user executes a game where the user fights with the enemy objects EO, parts of whose faces are switched as shown in
With reference to
As described above, based on the image processing program according to the present embodiment, for example, the game is started in the state where a face image of the user has been acquired by the inner capturing section 24, and parts of the faces have been switched between the acquired face image and another face image. Then, when the user has succeeded in the game, for example, when the user has won battles with the enemy objects EO, the face images whose parts are switched are restored to the original face images.
Therefore, for example, when the face image, a part of whose face is switched, is a face image of the user themselves, or is a face image of a person intimate with the user, the user is given a high motivation to succeed in the game.
In addition, parts of the faces are switched at the start of the game according to the present embodiment, in accordance with the performance in another game, and therefore, it is possible to give the user a handicap or an advantage based on the result of said another game. Further, parts of the faces are switched in accordance with the level of the game, and therefore, it is possible to represent the difficulty level of the game by the degree of the deformation of the faces.
In the fourth embodiment, for example, as in the processes of step 30 of
With reference to
In addition, on the background of the screen, for example, enemy objects EO and an aiming cursor AL are displayed, the enemy objects EO and the aiming cursor AL created in accordance with the procedure described in the above embodiments. Display is performed such that face images selected in the cast determination process and the like are texture-mapped on the facial surface portions of the enemy objects EO. Then, when the user of the game apparatus 10 has pressed the operation button 14B (A button) corresponding to a trigger button in the state where the aiming cursor AL is pointed at the enemy objects EO, the user can fire a bullet at the enemy objects EO.
Also during the game, the game apparatus 10 sequentially performs a predetermined face recognition process on the camera image CI captured by the real camera (e.g., the outer capturing section 23), and determines the presence or absence of a person's face in the camera image CI. Then, when the game apparatus 10 has determined in the face recognition process that a person's face is present in the camera image CI, and conditions for the appearance of an acquisition target object AO have been satisfied, an acquisition target object AO appears from the portion recognized as a face in the camera image CI.
As shown in
For example, similarly to the enemy objects EO, the acquisition target object AO is placed in the virtual space described above, and an image of the virtual space (virtual world image), in which the acquisition target object AO and/or the enemy objects EO are viewed from the virtual camera, is combined with a real world image obtained from the camera image CI, whereby display is performed on the upper LCD 22 as if the acquisition target object AO and/or the enemy objects EO are placed in real space. In accordance with an attack operation using the game apparatus 10 (e.g., pressing the button 14B (A button)), a bullet object BO is fired in the direction of the aiming cursor AL, and the acquisition target object AO also serves as a target of attack for the user. Then, when the user has won a battle with the acquisition target object AO, the user can store in the saved data storage area Do the face image attached to the acquisition target object AO.
It should be noted that not only winning a battle with the acquisition target object AO, but also completing the game where the user attacks the enemy objects EO, that is, completing the game that has already been executed when the face of the face image attached to the acquisition target object AO has been recognized, may be added to conditions for storing the face image of the acquisition target object AO in the saved data storage area Do. For example, possible conditions for completing the game where the user attacks the enemy objects EO may be that a predetermined number or more of enemy objects EO are defeated. In this case, the specification of the game is that, during the execution of the game where the user attacks the enemy objects EO, when the acquisition target object AO has appeared in the middle of the game and has been defeated, the face image of the acquisition target object AO can be additionally acquired.
It should be noted that a face image used for the acquisition target object AO may be a face image obtained from a face recognized in the camera image CI (a still image), or may be a face image obtained from a face recognized by repeatedly performing face recognition on the repeatedly captured camera image CI (a moving image). For example, in the second case, when the expression and the like of the person's face repeatedly captured in the camera image CI has changed, the changes are reflected on a texture of the acquisition target object AO. That is, it is possible to reflect in real time the expression of the person captured by the real camera of the game apparatus 10, on the expression of the face image attached to the acquisition target object AO.
In addition, the acquisition target object AO that appears from the portion recognized as a face in the camera image CI may be placed so as to always overlap the recognized portion when displayed in combination with the camera image CI. For example, changes in the direction and the position of the game apparatus 10 (i.e., the direction and the position of the outer capturing section 23) in real space also change the imaging range captured by the game apparatus 10, and therefore also change the camera image CI displayed on the upper LCD 22. In this case, the game apparatus 10 changes the position and the direction of the virtual camera in the virtual space in accordance with the motion of the game apparatus 10 in real space. With this, the acquisition target object AO displayed as if placed in real space is displayed as if placed at the same position in real space even when the direction and the position of the game apparatus 10 have changed in real space. Further, on the upper LCD 22, a real-time real world image captured by the real camera built into the game apparatus 10 is displayed, and therefore, a subject may move in real space. In this case, the game apparatus 10 sequentially performs a face recognition process on the repeatedly captured camera image CI, and thereby sequentially places the acquisition target object AO in the virtual space such that the acquisition target object AO is displayed so as to overlap the position of the recognized face when combined with the camera image CI. Thus, even when changes in the imaging direction and the imaging position of the game apparatus 10, or a change in the position of the captured person, have changed the position and the size in the camera image CI of the face image from which the acquisition target object AO has appeared, these processes make it possible to draw the acquisition target object AO so as to overlap the face image.
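The per-frame tracking loop described above can be sketched as follows. The callback signature, the rectangle representation, and the fallback to the last known placement are assumptions made for this sketch, not details taken from the embodiment.

```python
def track_acquisition_target(recognize_face, camera_frames):
    """Re-place the acquisition target object over a recognized face.

    For each repeatedly captured frame, `recognize_face` returns the
    recognized face's screen rectangle, or None when no face is found.
    The object is re-placed at the newest recognized rectangle so it
    keeps overlapping the face as the camera or the subject moves;
    when recognition fails for a frame, the last known placement is
    reused.
    """
    placements = []
    last = None
    for frame in camera_frames:
        rect = recognize_face(frame)
        if rect is not None:
            last = rect             # follow the newly recognized face
        placements.append(last)     # reuse the last placement otherwise
    return placements
```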
It should be noted that the acquisition target object AO displayed on the upper LCD 22 may be displayed by, for example, enlarging, reducing, or deforming the face image actually captured and displayed in the camera image CI, or may be displayed by changing the display direction of the model to which the face image is attached. Such image processing differentiates the actually captured face image from the acquisition target object AO, and therefore enables the user of the game apparatus 10 to easily determine that the acquisition target object AO has appeared from the camera image CI.
Next, with reference to
It should be noted that programs for performing these processes are included in a memory built into the game apparatus 10 (e.g., the data storage internal memory 35), or included in the external memory 45 or the data storage external memory 46, and the programs are: loaded from the built-in memory, or loaded from the external memory 45 through the external memory I/F 33 or from the data storage external memory 46 through the data storage external memory I/F 34, into the main memory 32 when the game apparatus 10 is turned on; and executed by the CPU 311.
The processing operations performed by executing the image processing program according to the fifth embodiment are performed as follows. For the processing operations performed by executing the image processing program according to the first embodiment, a during-game face image acquisition process described later is performed during the game processing described with reference to
In addition, various data stored in the main memory 32 in accordance with the execution of the image processing program according to the fifth embodiment is similar to the various data stored in accordance with the execution of the image processing program according to the first embodiment, except that appearance flag data, face recognition data, and acquisition target object data are further stored. It should be noted that the appearance flag data indicates an appearance flag indicating whether the current state of the appearance of the acquisition target object AO is “yet to appear”, “during appearance”, or “already appeared”, and the appearance flag is set to “yet to appear” in the initialization in step 51 described above (
Referring to
In step 202, the information processing section 31 performs a yet-to-appear process, and proceeds to the subsequent step 203. With reference to
Referring to
Next, the information processing section 31 determines whether or not conditions for the appearance of the acquisition target object AO in the virtual space have been satisfied (step 212). For example, the conditions for the appearance of the acquisition target object AO, on an essential condition that a person's face has been recognized in the camera image in step 211 described above, may be: that the acquisition target object AO appears only once from the start to the end of the game; that the acquisition target object AO appears at predetermined time intervals; that in accordance with the disappearance of the acquisition target object AO from the virtual world, a new acquisition target object AO appears; or that the acquisition target object AO appears at a random time. When the conditions for the appearance of the acquisition target object AO have been satisfied, the information processing section 31 proceeds to the subsequent step 213. On the other hand, when the conditions for the appearance of the acquisition target object AO have not been satisfied, the information processing section 31 ends the process of this subroutine.
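The alternative appearance conditions listed above could be sketched in a single check as follows (a Python illustration; the function name, parameters, and default values are assumptions for illustration, not elements of the embodiment):

```python
import random

def appearance_conditions_met(face_recognized, already_appeared_once,
                              frames_since_last, interval_frames=300,
                              object_present=False, random_chance=0.01):
    """Example appearance test combining the alternatives described above.

    A face being recognized in the camera image is the essential condition;
    the remaining parameters model the optional conditions (appear once,
    appear at intervals, appear upon disappearance of the previous object,
    appear at a random time). All names and values are illustrative.
    """
    if not face_recognized:
        return False                         # essential condition
    if already_appeared_once:
        return False                         # "appears only once" variant
    if object_present:
        return False                         # wait for previous AO to disappear
    if frames_since_last >= interval_frames:
        return True                          # fixed-interval variant
    return random.random() < random_chance   # random-timing variant
```

A caller would evaluate this once per frame and, when it returns true, proceed to the texture-setting and placement steps.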
In step 213, the information processing section 31 sets an image of the face recognized in the face recognition process in step 211 described above, as a texture of the acquisition target object AO, and proceeds to the subsequent step. For example, in the camera image indicated by the camera image data Db, the information processing section 31 sets an image included in the region of the face indicated by the face recognition result of the face recognition process in step 211 described above, as a texture of the acquisition target object AO, to thereby update the acquisition target object data using the set texture.
Next, the information processing section 31 sets the acquisition target object AO, using the face image obtained from the face recognized in the face recognition process in step 211 (step 214), and proceeds to the subsequent step. As an example, in accordance with the region of the image of the face recognized in the face recognition process in step 211, the information processing section 31 sets the size and the shape of a polygon (e.g., a planar polygon) corresponding to the state of the start of the appearance of the acquisition target object AO, and sets the acquisition target object AO corresponding to the state of the start of the appearance by attaching the texture of the face image set in step 213 to the main surface of the polygon, to thereby update the acquisition target object data.
Next, the information processing section 31 newly places the acquisition target object AO in the virtual space (step 215), and proceeds to the subsequent step. For example, when the camera image is displayed on the upper LCD 22, the information processing section 31 places the acquisition target object AO at the position in the virtual space, at which a perspective projection is performed such that the acquisition target object AO overlaps the position of the face image obtained from the face recognized in step 211, to thereby update the acquisition target object data.
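The placement described above, in which the acquisition target object AO is positioned in the virtual space so that its perspective projection overlaps the recognized face region, might be sketched for a simple pinhole camera model as follows (the axis conventions, parameter names, and the chosen depth are assumptions):

```python
import math

def place_over_face(face_rect, screen_w, screen_h, fov_y_deg, depth):
    """Return a camera-space position and world-space size for an object
    so that its perspective projection covers the recognized face
    rectangle on screen.

    face_rect is (x, y, w, h) in pixels, with y increasing downward;
    the camera looks down -z with a vertical field of view fov_y_deg.
    """
    cx = face_rect[0] + face_rect[2] / 2.0
    cy = face_rect[1] + face_rect[3] / 2.0
    # Normalized device coordinates in [-1, 1], y up.
    ndc_x = 2.0 * cx / screen_w - 1.0
    ndc_y = 1.0 - 2.0 * cy / screen_h
    # Half-extents of the view frustum at the chosen depth.
    half_h = depth * math.tan(math.radians(fov_y_deg) / 2.0)
    half_w = half_h * screen_w / screen_h
    x = ndc_x * half_w
    y = ndc_y * half_h
    size = face_rect[3] / screen_h * 2.0 * half_h  # world height of the face
    return (x, y, -depth), size
```

Re-running this whenever face recognition reports a new rectangle keeps the object overlapping the face as the apparatus or the subject moves.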
In the present embodiment, an image is generated by rendering with a perspective projection from the virtual camera the virtual space where the acquisition target object AO is newly placed in addition to the enemy objects EO, and a display image including at least the generated image is displayed. Here, to make representation such that the acquisition target object AO appears from the face image in the camera image displayed on the upper LCD 22, the information processing section 31 places the acquisition target object AO in the virtual space such that the acquisition target object AO overlaps the region corresponding to the face image in the boundary surface 3 on which the texture of the camera image is mapped, and performs a perspective projection on the placed acquisition target object AO from the virtual camera. It should be noted that the method of placing the acquisition target object AO in the virtual space is similar to the example of the placement of the enemy object EO described with reference to
Next, the information processing section 31 sets the appearance flag to “during appearance” to thereby update the appearance flag data (step 216), and ends the process of this subroutine.
Referring back to
In step 204, the information processing section 31 performs a during-appearance process, and proceeds to the subsequent step. For example, in step 204, the information processing section 31 represents the state of the acquisition target object AO appearing, by gradually changing the face image included in the camera image to a three-dimensional object. Specifically, as in steps 211 and 213 described above, the information processing section 31 sets the face image as a texture of the acquisition target object AO, based on the result of a face recognition performed on the camera image. Then, as an example, the information processing section 31 sets the acquisition target object AO by performing a morphing process for changing a planar polygon to predetermined three-dimensional polygons (e.g., a three-dimensional model formed by combining a plurality of polygons so as to represent a human head shape). Then, as in step 215, the information processing section 31 places the acquisition target object AO subjected to the morphing process at the position in the virtual space, at which a perspective projection is performed such that the acquisition target object AO overlaps the position of the face image obtained from the face recognized in step 204, to thereby update the acquisition target object data. When the acquisition target object AO appears from the image of the face recognized in the real world image, the acquisition target object AO is represented so as to gradually change from planar to three-dimensional in the face image, by performing such a morphing process.
It should be noted that the three-dimensional polygons, to which the planar polygon is changed by the morphing process, include polygons of various possible shapes. As a first example, the acquisition target object AO is generated by performing the morphing process to change the planar polygon to three-dimensional polygons having the shape of the head of a predetermined character. In this case, the image of the face recognized in the camera image in the face recognition process is mapped as a texture onto the facial surface of the head-shaped polygons. As a second example, the acquisition target object AO is generated by performing the morphing process to change the planar polygon to plate polygons having a predetermined thickness. In this case, the image of the face recognized in the camera image in the face recognition process is mapped as a texture onto the main surface of plate polygons. As a third example, the acquisition target object AO is generated by performing the morphing process to change the planar polygon to three-dimensional polygons having the shape of a predetermined weapon (e.g., missile-shaped polygons). In this case, the image of the face recognized in the camera image in the face recognition process is mapped as a texture onto a part of the weapon-shaped polygons (e.g., mapped onto the missile-shaped polygons at the head of the missile).
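The morphing from a planar polygon to three-dimensional polygons described above can be illustrated by linear vertex interpolation, as in the following sketch (the one-to-one vertex correspondence and the function name are illustrative assumptions; a real morphing process would also handle differing vertex counts):

```python
def morph_vertices(planar_verts, target_verts, t):
    """Linearly interpolate each vertex of a planar polygon toward the
    corresponding vertex of a three-dimensional target model (a head,
    plate, or weapon shape), with t running from 0.0 (planar) to 1.0
    (fully three-dimensional). Vertices are (x, y, z) tuples.
    """
    return [tuple(p + (q - p) * t for p, q in zip(v0, v1))
            for v0, v1 in zip(planar_verts, target_verts)]
```

Advancing `t` a little each frame produces the gradual planar-to-three-dimensional change, and the final stage of the morphing corresponds to `t` reaching 1.0.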
Next, the information processing section 31 determines whether or not the during-appearance process on the acquisition target object AO has ended (step 205). For example, when the morphing process on the acquisition target object AO has reached its final stage, the information processing section 31 determines that the during-appearance process has ended. Then, when the during-appearance process on the acquisition target object AO has ended, the information processing section 31 proceeds to the subsequent step 206. On the other hand, when the during-appearance process on the acquisition target object AO has not ended, the information processing section 31 proceeds to the subsequent step 207. For example, when the repetition of the morphing process in step 204 has changed the polygon corresponding to the acquisition target object AO to a three-dimensional model (the three-dimensional model formed by combining a plurality of polygons so as to represent a human head shape), the information processing section 31 determines that the morphing process on the acquisition target object AO is at the final stage.
In step 206, the information processing section 31 sets the appearance flag to “already appeared” to thereby update the appearance flag data, and proceeds to the subsequent step 207.
In step 207, the information processing section 31 determines whether or not the acquisition target object AO has already appeared. For example, with reference to the appearance flag data, the information processing section 31 makes the determination in step 207, based on whether or not the appearance flag is set to “already appeared”. When the acquisition target object AO has already appeared, the information processing section 31 proceeds to the subsequent step 208. On the other hand, when the acquisition target object AO has not yet appeared, the information processing section 31 ends the process of this subroutine.
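The appearance flag and the dispatch among the yet-to-appear, during-appearance, and already-appeared processes amount to a simple per-frame state machine, which could be sketched as follows (a Python illustration; the state names and handler names are assumptions, not identifiers from the actual program):

```python
from enum import Enum

class AppearanceFlag(Enum):
    YET_TO_APPEAR = 0      # set in the initialization (step 51)
    DURING_APPEARANCE = 1  # morphing from planar face image to 3D object
    ALREADY_APPEARED = 2   # fully three-dimensional; battle in progress

class AcquisitionTargetState:
    def __init__(self):
        # The appearance flag starts as "yet to appear".
        self.flag = AppearanceFlag.YET_TO_APPEAR

    def update(self):
        # One handler is dispatched per frame according to the flag.
        if self.flag is AppearanceFlag.YET_TO_APPEAR:
            return "yet_to_appear_process"      # steps 211 to 216
        if self.flag is AppearanceFlag.DURING_APPEARANCE:
            return "during_appearance_process"  # step 204
        return "already_appeared_process"       # steps 221 to 230
```

The handlers themselves advance the flag, e.g. the yet-to-appear process sets it to "during appearance" in step 216, and step 206 sets it to "already appeared".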
In step 208, the information processing section 31 performs an already-appeared process, and ends the process of the subroutine. With reference to
Referring to
Next, the information processing section 31 sets an image of the face recognized in the face recognition process in step 221 described above (an image included in the face area in the camera image), as a texture of the acquisition target object AO (step 222), and proceeds to the subsequent step. For example, in the camera image indicated by the camera image data Db, the information processing section 31 sets an image included in the region of the face indicated by the face recognition result of the face recognition process in step 221 described above, as a texture of the acquisition target object AO, to thereby update the acquisition target object data using the set texture.
Next, the information processing section 31 sets the acquisition target object AO corresponding to the region of the image of the face recognized in the face recognition process in step 221 described above (step 223), and proceeds to the subsequent step. For example, the information processing section 31 sets the acquisition target object AO by attaching the texture of the face image set in step 222 to the facial surface portion of a three-dimensional model formed by combining a plurality of polygons so as to represent a human head shape, to thereby update the acquisition target object data. It should be noted that in step 223, the polygons to which the face image obtained from the face recognized in the face recognition process in step 221 is attached as a texture may be, for example, enlarged, reduced, or deformed, or the texture of the face image may be deformed.
Next, the information processing section 31 places the acquisition target object AO set in step 223 described above in the virtual space (step 224), and proceeds to the subsequent step. For example, as in step 215, when the camera image is displayed on the upper LCD 22, the information processing section 31 places the acquisition target object AO at the position in the virtual space, at which a perspective projection is performed such that the acquisition target object AO overlaps the position of the face image obtained from the face recognized in step 221, to thereby update the acquisition target object data. It should be noted that in step 224, the acquisition target object AO may be placed such that the facial surface portion to which the texture of the face image is attached opposes the virtual camera, or the orientation of the acquisition target object AO may be changed to a given direction in accordance with the progression of the game.
Next, the information processing section 31 determines whether or not the acquisition target object AO and the bullet object BO have made contact with each other in the virtual space (step 225). For example, using the position of the acquisition target object AO indicated by the acquisition target object data and the position of the bullet object BO indicated by the bullet object data Dg, the information processing section 31 determines whether or not the acquisition target object AO and the bullet object BO have made contact with each other in the virtual space. When the acquisition target object AO and the bullet object BO have made contact with each other, the information processing section 31 proceeds to the subsequent step 226. On the other hand, when the acquisition target object AO and the bullet object BO have not made contact with each other, the information processing section 31 proceeds to the subsequent step 229.
In step 226, the information processing section 31 performs a point addition process, and proceeds to the subsequent step. For example, in the point addition process, the information processing section 31 adds predetermined points to the score of the game indicated by the score data Dh, to thereby update the score data Dh using the score after the addition. Further, in the point addition process, the information processing section 31 performs a process of causing the bullet object BO having made contact based on the determination in step 225 described above, to disappear from the virtual space (e.g., initializing the bullet object data Dg concerning the bullet object BO having made contact with the acquisition target object AO, such that the bullet object BO is not present in the virtual space).
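The contact determination of step 225 and the point addition of step 226 might be sketched as follows, assuming sphere-shaped collision volumes and a 100-point award (neither of which is specified above):

```python
def sphere_contact(pos_a, radius_a, pos_b, radius_b):
    """Sphere-versus-sphere contact test, a common way to realize the
    contact determination between the acquisition target object AO and
    the bullet object BO; positions are (x, y, z) tuples."""
    dx, dy, dz = (a - b for a, b in zip(pos_a, pos_b))
    dist_sq = dx * dx + dy * dy + dz * dz
    r = radius_a + radius_b
    return dist_sq <= r * r

def on_hit(score, points=100):
    """Point addition performed when a bullet contacts the object;
    the 100-point value is an assumed example."""
    return score + points
```

After a hit, the contacting bullet object BO would additionally be removed from the virtual space, as described in the point addition process.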
Next, the information processing section 31 determines whether or not the acquisition of the face image attached to the acquisition target object AO having made contact with the bullet object BO has been successful (step 227). As an example of means for determining whether or not the acquisition of the face image has been successful, the information processing section 31 performs the process of step 227. Then, when the acquisition of the face image has been successful, the information processing section 31 proceeds to the subsequent step 228. On the other hand, when the acquisition of the face image has not been successful, the information processing section 31 proceeds to the subsequent step 229.
Here, a success in the acquisition of the face image is, for example, the case where the user has won a battle with the acquisition target object AO. As an example, a predetermined life value for existing in the virtual space is set for the acquisition target object AO, and when the acquisition target object AO has made contact with the bullet object BO, a predetermined number is subtracted from the life value. Then, when the life value of the acquisition target object AO has become 0 or below, the acquisition target object AO is caused to disappear from the virtual space, and it is determined that the acquisition of the face image attached to the acquisition target object AO has been successful.
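The life-value bookkeeping described above could be sketched as follows (the initial life value and per-hit damage are assumed example numbers):

```python
class AcquisitionTarget:
    """Life-value handling for the success determination: each bullet
    hit subtracts a fixed amount, and the acquisition of the face image
    succeeds when the life value reaches 0 or below, at which point the
    object disappears from the virtual space."""

    def __init__(self, life=3, damage_per_hit=1):
        self.life = life
        self.damage = damage_per_hit

    def hit(self):
        self.life -= self.damage
        return self.life <= 0  # True: acquisition of the face image succeeded
```

On a `True` return, the face image data would be saved in the saved data storage area Do and the object removed from the virtual space.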
In step 228, when the acquisition of the face image has been successful, the information processing section 31 saves, in the saved data storage area Do, the data that indicates the face image obtained from the face recognized in step 221 and is stored in the main memory 32, in addition to the data of the face images that have been saved there up to the current time, and proceeds to the subsequent step 229. As an example of the means for saving, the CPU 311 of the information processing section 31 performs the process of step 228. As described above, the saved data storage area Do is a storage area to and from which the information processing section 31 can write and read, and which is constructed in, for example, the data storage internal memory 35 or the data storage external memory 46. When data of a new face image is stored in the saved data storage area Do, the information processing section 31 can display the data of the new face image on the screen of the upper LCD 22, for example, in addition to the list of face images described with reference to
At this time, to manage the face image newly saved in the saved data storage area Do of the game, the information processing section 31 generates and saves the face image management information Dn1 described with reference to
In addition, the information processing section 31 may estimate the attributes of the face image added to the saved data storage area Do, to thereby update the aggregate result of the face image attribute aggregate table Dn2 described with reference to
In addition, the information processing section 31 may permit the user to, for example, copy or modify the data stored in the saved data storage area Do, or transfer the data through the wireless communication module 36. Then, the information processing section 31 may, for example, save, copy, modify, or transfer the face image stored in the saved data storage area Do in accordance with an operation of the user through the GUI, or with an operation of the user through the operation buttons 14.
In addition, in the process of step 228 described above, the information processing section 31 may cause the acquisition target object AO that is the target used to succeed in the acquisition of the face image, to disappear from the virtual space. In this case, the information processing section 31 initializes the acquisition target object data concerning the acquisition target object AO that is the target used to succeed in the acquisition of the face image, such that the acquisition target object AO is not present in the virtual space.
In step 229, the information processing section 31 determines whether or not the acquisition of the face image attached to the acquisition target object AO present in the virtual space has failed. Then, when the acquisition of the face image attached to the acquisition target object AO has failed, the information processing section 31 proceeds to the subsequent step 230. It should be noted that in the case where a plurality of acquisition target objects AO are present, when the acquisition of any one of the face images attached to the acquisition target objects AO has failed, the information processing section 31 proceeds to the subsequent step 230. On the other hand, when the acquisition of none of the face images attached to the acquisition target objects AO has failed, the information processing section 31 ends the process of this subroutine.
Here, a failure in the acquisition of the face image is, for example, the case where the user has lost a battle with the acquisition target object AO. As an example, when the acquisition target object AO has continued to be present in the virtual space for a predetermined time or longer, it is determined that the acquisition of the face image attached to the acquisition target object AO has failed.
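The failure determination based on the acquisition target object AO remaining present for a predetermined time could be sketched as follows (the timeout value is an assumed example):

```python
def acquisition_failed(frames_present, timeout_frames=1800):
    """Failure determination sketched from the description above: if the
    acquisition target object AO has continued to be present in the
    virtual space for a predetermined time or longer (here an assumed
    1800 frames, i.e. 30 seconds at 60 fps), the acquisition of its face
    image fails and the face image data is discarded."""
    return frames_present >= timeout_frames
```

A `True` return leads to the discard of the face image data in step 230 and, optionally, the disappearance of the object from the virtual space.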
In step 230, when the acquisition of the face image has failed, the information processing section 31 discards the data that indicates the face image obtained from the face recognized in step 221 described above and is stored in the main memory 32, and ends the process of the subroutine. It should be noted that in the process of step 230 described above, the information processing section 31 may cause the acquisition target object AO that is the target used to fail in the acquisition of the face image, to disappear from the virtual space. In this case, the information processing section 31 initializes the acquisition target object data concerning the acquisition target object AO that is the target used to fail in the acquisition of the face image, such that the acquisition target object AO is not present in the virtual space.
As described above, based on the processes of
It should be noted that in the fifth embodiment described above, as an example, when the user has attacked and defeated the acquisition target object AO that appears during the game, permission is given to store the face image attached to the acquisition target object AO in the saved data storage area Do. Alternatively, permission may be given to store the face image in the saved data storage area Do, by executing another game where the user fights with the acquisition target object AO. As an example, in a game where the user competes with enemy objects in score, the acquisition target object AO appears, to which a face image included in a camera image captured during the game is attached. Then, when the user has scored more points than the acquisition target object AO that has appeared during the game, permission is given to store the face image attached to the acquisition target object AO in the saved data storage area Do. As another example, in a game where the user overcomes obstacles set by enemy objects, the acquisition target object AO appears, to which a face image included in a camera image captured during the game is attached. Then, when the user has overcome the obstacles set by the acquisition target object AO that has appeared during the game, and the user has reached a goal, permission is given to store the face image attached to the acquisition target object AO in the saved data storage area Do.
In addition, in the first through fifth embodiments described above, as an example, a face image acquired in the image processing based on the flow chart shown in
When a face image already acquired by an application different from the application of the image processing serves as a target to be stored in the saved data storage area Do, at least one face image is extracted by performing a face recognition process on photographed images saved during the execution of the different application, and the extracted face image serves as a target to be stored. Specifically, to prompt selection of a character (face image) to appear in the first game or the second game, when the character is displayed on the upper LCD 22 and/or the lower LCD 12 (e.g., steps 30, 40, 90, 103, 126, 140, 160, and 162), at least one character including a face image acquired in advance by executing the different application is also displayed as a selection target. In this case, before the character is displayed, at least one face image is extracted by performing a face recognition process on the photographed images saved in advance, and a given face image among the extracted face images is displayed as a selection target in addition to the character. Then, when the user has selected the character including the face image obtained by the extraction to appear in the game, and the user has won a battle with the character, the face image is stored in the saved data storage area Do.
As described above, a face image already acquired by an application different from the application of the image processing also serves as a target to be stored in the saved data storage area Do. This increases the variations of face images that can be acquired by the user, and therefore makes it easy to collect face images. Additionally, a face image unexpected by the user is suddenly added as a target to participate in the game, and therefore, it is also possible to prevent the user from growing weary of collecting face images.
In the above descriptions, as an example, the angular velocities generated in the game apparatus 10 are detected, and the motion of the game apparatus 10 in real space is calculated using the angular velocities. Alternatively, the motion of the game apparatus 10 may be calculated using another method. As a first example, the motion of the game apparatus 10 may be calculated using the accelerations detected by the acceleration sensor 39 built into the game apparatus 10. As an example, when the computer performs processing on the assumption that the game apparatus 10 having the acceleration sensor 39 is in a static state (i.e., performs processing on the assumption that the acceleration detected by the acceleration sensor 39 is the gravitational acceleration only), if the game apparatus 10 is actually in a static state, it is possible to determine, based on the detected acceleration, whether or not the game apparatus 10 is tilted relative to the direction of gravity, and also possible to determine to what degree the game apparatus 10 is tilted. As another example, when it is assumed that the game apparatus 10 having the acceleration sensor 39 is in a dynamic state, the acceleration sensor 39 detects the acceleration corresponding to the motion of the acceleration sensor 39 in addition to a component of the gravitational acceleration. This makes it possible to determine the motion direction and the like of the game apparatus 10 by removing the component of the gravitational acceleration by a predetermined process. Specifically, when the game apparatus 10 having the acceleration sensor 39 is moved by being dynamically accelerated with the user's hand, it is possible to calculate various motions and/or positions of the game apparatus 10 by processing the acceleration signals generated by the acceleration sensor 39. 
It should be noted that even when it is assumed that the acceleration sensor 39 is in a dynamic state, it is possible to determine the tilt of the game apparatus 10 relative to the direction of gravity by removing the acceleration corresponding to the motion of the acceleration sensor 39 by a predetermined process.
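The tilt determination from a static-state acceleration sample, as in the first example above, might be sketched as follows (the axis orientation and sign conventions are assumptions; the result is valid only while the apparatus is not being accelerated by the user's hand):

```python
import math

def tilt_from_acceleration(ax, ay, az):
    """Estimate the tilt of the apparatus relative to the direction of
    gravity from one acceleration sample, assuming the detected
    acceleration is the gravitational acceleration only (static state).
    Returns (pitch, roll) in degrees for an assumed x-right, y-forward,
    z-up sensor frame."""
    pitch = math.degrees(math.atan2(ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll
```

With the apparatus level, the sample is (0, 0, 1 g) and both angles are zero; tilting it changes how gravity projects onto the axes, which the arctangents convert back to angles.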
As a second example, the motion of the game apparatus 10 may be calculated using the amount of movement of a camera image captured in real time by the real camera built into the game apparatus 10 (the outer capturing section 23 or the inner capturing section 24). For example, when the motion of the game apparatus 10 has changed the imaging direction and the imaging position of the real camera, the camera image captured by the real camera also changes. Accordingly, it is possible to calculate the angle of change in the imaging direction of the real camera, the amount of movement of the imaging position, and the like, using changes in the camera image captured by the real camera built into the game apparatus 10. As an example, a predetermined physical body is recognized in a camera image captured by the real camera built into the game apparatus 10, and the imaging angles and the imaging positions of the physical body are chronologically compared to one another. This makes it possible to calculate the angle of change in the imaging direction of the real camera, the amount of movement of the imaging position, and the like, from the amounts of changes in the imaging angle and the imaging position. As another example, the entire camera images captured by the real camera built into the game apparatus 10 are chronologically compared to one another. This makes it possible to calculate the angle of change in the imaging direction of the real camera, the amount of movement of the imaging position, and the like, from the amounts of changes in the imaging direction and the imaging range in the entire image.
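The second example above, estimating the change in the imaging direction from the movement of a recognized physical body between chronologically compared camera images, might be sketched for a pinhole camera model as follows (assuming the physical body itself is stationary; the function name and parameters are illustrative):

```python
import math

def imaging_direction_change(prev_px, curr_px, focal_length_px):
    """Estimate the horizontal angle of change in the real camera's
    imaging direction, in degrees, from the horizontal pixel movement of
    a stationary recognized physical body between two chronologically
    compared camera images, for a pinhole camera with the given focal
    length in pixels."""
    return math.degrees(math.atan2(curr_px - prev_px, focal_length_px))
```

Accumulating this angle over successive frame pairs yields the overall change in the imaging direction since some reference frame.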
As a third example, the motion of the game apparatus 10 may be calculated by combining at least two of: the angular velocities generated in the game apparatus 10; the accelerations generated in the game apparatus 10; and a camera image captured by the game apparatus 10. Thus, in a state where it is difficult to calculate the motion of the game apparatus 10 from one parameter alone, the motion of the game apparatus 10 can be calculated by combining that parameter with another parameter, thereby compensating for such a state. As an example, to calculate the motion of the game apparatus 10 in the second example described above, if the captured camera image has moved chronologically in a horizontal direction, it may be difficult to accurately determine whether the capturing angle of the game apparatus 10 has rotated about the vertical axis, or the game apparatus 10 has moved horizontally. In this case, it is possible to easily determine, using the angular velocities generated in the game apparatus 10, whether the game apparatus 10 has moved so as to rotate or moved horizontally.
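The disambiguation in this third example, using the angular velocities to decide whether a horizontal image shift comes from a rotation about the vertical axis or from a horizontal movement, could be sketched as follows (the threshold values are assumed examples):

```python
def classify_horizontal_motion(image_shift_px, gyro_yaw_dps,
                               rotation_threshold_dps=5.0):
    """Combine the horizontal camera-image shift (pixels per frame) with
    the gyro-detected yaw rate (degrees per second) to disambiguate the
    case described above: a significant yaw rate indicates a rotation
    about the vertical axis, otherwise the shift is attributed to a
    horizontal movement of the apparatus."""
    if abs(image_shift_px) < 1.0:
        return "stationary"
    if abs(gyro_yaw_dps) >= rotation_threshold_dps:
        return "rotation"
    return "translation"
```

This is the sense in which one parameter (the gyro) compensates for the ambiguity of another (the camera image).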
In addition, as a fourth example, the motion of the game apparatus 10 may be calculated using so-called AR (augmented reality) technology.
In addition, in the above descriptions, as an example, mainly, a planar image (a planar view image, as opposed to the stereoscopically visible image described above) of the real world based on a camera image CI acquired from either one of the outer capturing section 23 and the inner capturing section 24 is displayed on the upper LCD 22. Alternatively, an image stereoscopically visible with the naked eye (a stereoscopic image) may be displayed on the upper LCD 22. For example, as described above, the game apparatus 10 can display on the upper LCD 22 a stereoscopically visible image (stereoscopic image) using camera images acquired from the left outer capturing section 23a and the right outer capturing section 23b. In this case, drawing is performed such that the enemy objects EO are present in the stereoscopic image displayed on the upper LCD 22, and the acquisition target object AO appears from the stereoscopic image.
For example, to draw the enemy objects EO and the acquisition target object AO in the stereoscopic image, the image processing described above is performed using a left-eye image obtained from the left outer capturing section 23a and a right-eye image obtained from the right outer capturing section 23b. Specifically, in the image processing described above, either one of the left-eye image and the right-eye image is used as the camera image from which a face image is extracted by performing a face recognition process, and the enemy objects EO or the acquisition target object AO obtained by mapping a texture of the face image obtained from the one of the images are set in the virtual space. Further, a perspective transformation is performed from two virtual cameras (a stereo camera), on the enemy objects EO, the acquisition target object AO, and the bullet object BO, and the like that are placed in the virtual space, whereby a left-eye virtual world image and a right-eye virtual world image are obtained. Then, a left-eye display image is generated by combining a left-eye real world image with the left-eye virtual world image, and a right-eye display image is generated by combining a right-eye real world image with the right-eye virtual world image. Then, the left-eye display image and the right-eye display image are output to the upper LCD 22.
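The per-eye combination described above, pairing each eye's real world image with the virtual world image rendered from the corresponding virtual camera of the stereo pair, could be sketched as follows (the image representation and the compositing function are left abstract):

```python
def compose_stereo_frame(left_real, right_real, left_virtual, right_virtual,
                         combine):
    """Build the left-eye and right-eye display images as described
    above: each eye's real world image is combined with the virtual
    world image obtained by a perspective transformation from the
    corresponding virtual camera of the stereo pair. `combine` is the
    compositing function (e.g., drawing the virtual image over the real
    one); both results would then be output to the display."""
    return combine(left_real, left_virtual), combine(right_real, right_virtual)
```

Feeding the two results to the parallax barrier display produces the stereoscopically visible image in which the objects appear to be present in real space.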
In addition, in the above descriptions, a real-time moving image captured by the real camera built into the game apparatus 10 is displayed on the upper LCD 22, and display is performed such that the enemy objects EO and the acquisition target object AO appear in the moving image (camera image) captured by the real camera. In the present invention, however, various variations are possible for the images to be displayed on the upper LCD 22. As a first example, a moving image recorded in advance, or a moving image or the like obtained from television broadcast or another device, is displayed on the upper LCD 22. In this case, the moving image is displayed on the upper LCD 22, and the enemy objects EO and the acquisition target object AO appear in the moving image. As a second example, a still image obtained from the real camera built into the game apparatus 10 or another real camera is displayed on the upper LCD 22. In this case, the still image obtained from the real camera is displayed on the upper LCD 22, and the enemy objects EO and the acquisition target object AO appear in the still image. Here, the still image obtained from the real camera may be a still image of the real world captured in real time by the real camera built into the game apparatus 10, may be a still image of the real world photographed in advance by that real camera or another real camera, or may be a still image obtained from television broadcast or another device.
In addition, in the above embodiments, the upper LCD 22 is a parallax barrier type liquid crystal display device, and therefore is capable of switching between stereoscopic display and planar display by controlling the on/off states of the parallax barrier. In another embodiment, for example, the upper LCD 22 may be a lenticular type liquid crystal display device, which is also capable of displaying a stereoscopic image and a planar image. In the case of the lenticular type, an image is displayed stereoscopically by dividing two images captured by the outer capturing section 23 each into vertical strips, and alternately arranging the divided vertical strips. Further, in the case of the lenticular type, an image can be displayed in a planar manner by causing the user's right and left eyes to view one image captured by the inner capturing section 24. That is, even the lenticular type liquid crystal display device is capable of causing the user's left and right eyes to view the same image by dividing one image into vertical strips, and alternately arranging the divided vertical strips. This makes it possible to display an image captured by the inner capturing section 24 as a planar image.
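The column interleaving described for the lenticular type can be sketched as below. This is a simplified illustration under assumed conventions (one-pixel-wide strips, even columns taken from the left-eye image and odd columns from the right-eye image); the function names are hypothetical.

```python
# Hypothetical sketch of lenticular-style interleaving: the left-eye
# and right-eye images are divided into vertical strips and the
# strips are arranged alternately across the panel.

def interleave_columns(left_image, right_image):
    """Alternate columns: even columns from left, odd columns from right."""
    out = []
    for l_row, r_row in zip(left_image, right_image):
        out.append([l_px if x % 2 == 0 else r_px
                    for x, (l_px, r_px) in enumerate(zip(l_row, r_row))])
    return out

def planar(image):
    """Planar display on the same panel: both eyes receive strips of
    the same single image, so the viewer sees it without parallax."""
    return interleave_columns(image, image)
```

Feeding two parallax images produces stereoscopic display; feeding the same image to both inputs, as `planar` does, reproduces the planar display described for the inner capturing section 24.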
In addition, in the above embodiments, the descriptions are given using the hand-held game apparatus 10. The present invention, however, may be achieved by causing a stationary game apparatus or an information processing apparatus, such as a general personal computer, to execute the image processing program according to the present invention. Alternatively, in another embodiment, not only a game apparatus but also any hand-held electronic device may be used, such as a personal digital assistant (PDA), a mobile phone, a personal computer, or a camera. For example, a mobile phone may include two display sections and a real camera on the main surface of a housing.
In addition, in the above descriptions, the image processing is performed by the game apparatus 10. Alternatively, at least some of the process steps in the image processing may be performed by another device. For example, when the game apparatus 10 is configured to communicate with another device (e.g., a server or another game apparatus), the process steps in the image processing may be performed by the cooperation of the game apparatus 10 and said another device. As an example, the game apparatus 10 may perform the face image acquisition process and the game processing that permits face images to be saved in an accumulating manner, while the face images permitted to be saved when the game has been successful are saved in said another device. In this case, a plurality of game apparatuses 10 save face images in another device in an accumulating manner, which further encourages the collection of face images, and may also create a different enjoyment, namely browsing the face images saved by other game apparatuses 10. As another example, another device may perform the processes of steps 52 through 57 of
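The cooperative arrangement above, in which each apparatus runs the game locally and a shared device accumulates face images only on success, can be sketched as follows. The class names, the identifier strings, and the success flag are all hypothetical illustration, not the patent's protocol.

```python
# Hypothetical sketch: several game apparatuses saving face images
# in an accumulating manner on one shared device (e.g., a server),
# and browsing the images saved by other apparatuses.

class FaceImageServer:
    def __init__(self):
        self.saved = []  # face images accumulated across all apparatuses

    def save(self, apparatus_id, face_image):
        self.saved.append((apparatus_id, face_image))

    def browse(self):
        # lets one player browse images saved by other apparatuses too
        return list(self.saved)

class GameApparatus:
    def __init__(self, apparatus_id, server):
        self.apparatus_id = apparatus_id
        self.server = server

    def play(self, face_image, game_succeeded):
        # the face image is saved in an accumulating manner only when
        # the game related to it has been successful
        if game_succeeded:
            self.server.save(self.apparatus_id, face_image)
```

Because all apparatuses share one accumulating store, the collection grows faster than any single apparatus could manage alone, which is the effect the passage describes.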
In addition, the shape of the game apparatus 10, and the shapes, the number, the placement, or the like of the various buttons of the operation button 14, the analog stick 15, and the touch panel 13 that are provided in the game apparatus 10 are merely illustrative, and the present invention can be achieved with other shapes, numbers, placements, and the like. Further, the processing orders, the setting values, the criterion values, and the like that are used in the image processing described above are also merely illustrative, and it is needless to say that the present invention can be achieved with other orders and values.
In addition, the image processing program (game program) described above may be supplied to the game apparatus 10 not only from an external storage medium, such as the external memory 45 or the data storage external memory 46, but also via a wireless or wired communication link. Further, the program may be stored in advance in a non-volatile storage device of the game apparatus 10. It should be noted that examples of the information storage medium having stored thereon the program may include a CD-ROM, a DVD, and any other optical disk storage medium similar to these, a flexible disk, a hard disk, a magneto-optical disk, and a magnetic tape, as well as a non-volatile memory. Furthermore, the information storage medium for storing the program may be a volatile memory that temporarily stores the program. Such storage media can be defined as storage media that can be read by a computer or the like. For example, a computer or the like is caused to read and execute the program stored in each of these storage media, and thereby can provide the various functions described above.
While the invention has been described in detail, the foregoing description is in all aspects illustrative and not restrictive. It is understood that numerous other modifications and variations can be devised without departing from the scope of the invention. It is understood that the scope of the invention should be interpreted only by the appended claims. It is also understood that one skilled in the art can implement the invention in an equivalent range, based on the description of the specific embodiments of the invention and on common technical knowledge. Further, throughout the specification, it should be understood that terms in singular form include the concept of plurality unless otherwise specified. Thus, it should be understood that articles or adjectives indicating the singular form (e.g., “a”, “an”, “the”, and the like in English) include the concept of plurality unless otherwise specified. Furthermore, it should be understood that terms used in the present specification have meanings generally used in the art unless otherwise specified. Therefore, unless otherwise defined, all technical terms and terms of art have the same meanings as those generally understood by one skilled in the art of the invention. In the event of any contradiction, the present specification (including meanings defined herein) has priority.
(Appended Notes)
The above embodiments can be exemplified by the following forms (referred to as “appended notes”). The components included in each appended note can be combined with the components included in the other appended notes.
(Appended Note 1)
A computer-readable storage medium having stored thereon a game program to be executed by a computer that displays an image on a display device, the game program causing the computer to execute:
an image acquisition step of acquiring a face image;
a step of creating a first character object based on the acquired face image; and
a game processing step of executing a game by displaying the first character object together with a second character object, the second character object being different from the first character object,
the game processing step including:
- a step of contributing to a success in the game by an attack on the first character object, the attack made in accordance with an operation of a player; and
- a step of invalidating an attack on the second character object, the attack made in accordance with an operation of the player.
(Appended Note 2)
A computer-readable storage medium having stored thereon a game program to be executed by a computer that displays an image on a display device, the game program causing the computer to execute:
an image acquisition step of acquiring at least one face image;
a step of creating a first character object, the first character object including one of the acquired face images; and
a game processing step of executing a game by displaying the first character object together with a second character object and a third character object, the second character object being smaller in dimensions than the first character object and including the one of the acquired face images, the third character object being smaller in dimensions than the first character object and including a face image other than the one of the acquired face images,
the game processing step including:
- a step of advancing deformation of the face image included in the first character object by an attack on the third character object, the attack made in accordance with an operation of a player; and
- a step of, by an attack on the second character object in accordance with an operation of the player, reversing the deformation such that the face image included in the first character object approaches the acquired original face image.
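The advance/reverse mechanic enumerated in Appended Note 2 can be illustrated with a minimal sketch. This is purely illustrative: modeling the deformation as an integer level with an assumed maximum, and the class and method names, are inventions of the example, not part of the claimed subject matter.

```python
# Hypothetical sketch of the Appended Note 2 mechanic: an attack on
# the third character object advances deformation of the first
# character object's face image; an attack on the second character
# object reverses the deformation back toward the acquired original
# face image (level 0).

class FirstCharacterObject:
    MAX_DEFORMATION = 5  # assumed cap on the deformation level

    def __init__(self):
        self.deformation = 0  # 0 == acquired original face image

    def on_third_object_attacked(self):
        # deformation advances, up to the maximum level
        self.deformation = min(self.deformation + 1, self.MAX_DEFORMATION)

    def on_second_object_attacked(self):
        # deformation reverses, approaching the original face image
        self.deformation = max(self.deformation - 1, 0)
```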
(Appended Note 3)
A computer-readable storage medium having stored thereon a game program to be executed by a computer that displays an image on a display device, the game program causing the computer to execute:
an image acquisition step of acquiring a face image;
a step of creating a character object, the character object including a face image obtained by deforming the acquired face image;
a game processing step of receiving an operation of a player, and advancing a game related to the face image;
a step of determining a success or a failure in the game, the success or the failure made in accordance with an operation of the player; and
a step of, when a result of the game has been successful, restoring the deformed face image to the acquired original face image.
(Appended Note 4)
An image processing apparatus connectable to a display device, the image processing apparatus comprising:
image acquisition means for acquiring a face image;
means for creating a first character object based on the acquired face image; and
game processing means for executing a game by displaying on the display device the first character object together with a second character object, the second character object being different from the first character object,
the game processing means including:
- means for contributing to a success in the game by an attack on the first character object, the attack made in accordance with an operation of a player; and
- means for invalidating an attack on the second character object, the attack made in accordance with an operation of the player.
(Appended Note 5)
An image processing apparatus connectable to a display device, the image processing apparatus comprising:
image acquisition means for acquiring at least one face image;
means for creating a first character object, the first character object including one of the acquired face images; and
game processing means for executing a game by displaying the first character object together with a second character object and a third character object, the second character object being smaller in dimensions than the first character object and including the one of the acquired face images, the third character object being smaller in dimensions than the first character object and including a face image other than the one of the acquired face images,
the game processing means including:
- means for advancing deformation of the face image included in the first character object by an attack on the third character object, the attack made in accordance with an operation of a player; and
- means for, by an attack on the second character object in accordance with an operation of the player, reversing the deformation such that the face image included in the first character object approaches the acquired original face image.
(Appended Note 6)
An image processing apparatus connectable to a display device, the image processing apparatus comprising:
image acquisition means for acquiring a face image;
means for creating a character object, the character object including a face image obtained by deforming the acquired face image;
game processing means for receiving an operation of a player, and advancing a game related to the face image by displaying the character object on the display device;
means for determining a success or a failure in the game, the success or the failure made in accordance with an operation of the player; and
means for, when a result of the game has been successful, restoring the deformed face image to the acquired original face image.
(Appended Note 7)
An image processing apparatus comprising:
a display device;
image acquisition means for acquiring a face image;
means for creating a first character object based on the acquired face image; and
game processing means for executing a game by displaying on the display device the first character object together with a second character object, the second character object being different from the first character object,
the game processing means including:
- means for contributing to a success in the game by an attack on the first character object, the attack made in accordance with an operation of a player; and
- means for invalidating an attack on the second character object, the attack made in accordance with an operation of the player.
(Appended Note 8)
An image processing apparatus comprising:
a display device;
image acquisition means for acquiring at least one face image;
means for creating a first character object, the first character object including one of the acquired face images; and
game processing means for executing a game by displaying on the display device the first character object together with a second character object and a third character object, the second character object being smaller in dimensions than the first character object and including the one of the acquired face images, the third character object being smaller in dimensions than the first character object and including a face image other than the one of the acquired face images,
the game processing means including:
- means for advancing deformation of the face image included in the first character object by an attack on the third character object, the attack made in accordance with an operation of a player; and
- means for, by an attack on the second character object in accordance with an operation of the player, reversing the deformation such that the face image included in the first character object approaches the acquired original face image.
(Appended Note 9)
An image processing apparatus comprising:
a display device;
image acquisition means for acquiring a face image;
means for creating a character object, the character object including a face image obtained by deforming the acquired face image;
game processing means for receiving an operation of a player, and advancing a game related to the face image by displaying the character object on the display device;
means for determining a success or a failure in the game, the success or the failure made in accordance with an operation of the player; and
means for, when a result of the game has been successful, restoring the deformed face image to the acquired original face image.
(Appended Note 10)
An image processing system comprising:
a capturing device;
a display device that displays information including an image acquired by the capturing device; and
an image processing apparatus that cooperates with the capturing device and the display device,
the image processing apparatus including:
- image acquisition means for acquiring a face image;
- means for creating a first character object based on the acquired face image; and
- game processing means for executing a game by displaying on the display device the first character object together with a second character object, the second character object being different from the first character object,
the game processing means including:
- means for contributing to a success in the game by an attack on the first character object, the attack made in accordance with an operation of a player; and
- means for invalidating an attack on the second character object, the attack made in accordance with an operation of the player.
(Appended Note 11)
An image processing system comprising:
a capturing device;
a display device that displays information including an image acquired by the capturing device; and
an image processing apparatus that cooperates with the capturing device and the display device,
the image processing apparatus including:
- image acquisition means for acquiring at least one face image;
- means for creating a first character object, the first character object including one of the acquired face images; and
- game processing means for executing a game by displaying on the display device the first character object together with a second character object and a third character object, the second character object being smaller in dimensions than the first character object and including the one of the acquired face images, the third character object being smaller in dimensions than the first character object and including a face image other than the one of the acquired face images,
the game processing means including:
- means for advancing deformation of the face image included in the first character object by an attack on the third character object, the attack made in accordance with an operation of a player; and
- means for, by an attack on the second character object in accordance with an operation of the player, reversing the deformation such that the face image included in the first character object approaches the acquired original face image.
(Appended Note 12)
An image processing system comprising:
a capturing device;
a display device that displays information including an image acquired by the capturing device; and
an image processing apparatus that cooperates with the capturing device and the display device,
the image processing apparatus including:
- image acquisition means for acquiring a face image;
- means for creating a character object, the character object including a face image obtained by deforming the acquired face image;
- game processing means for receiving an operation of a player, and advancing a game related to the face image by displaying the character object on the display device;
- means for determining a success or a failure in the game, the success or the failure made in accordance with an operation of the player; and
- means for, when a result of the game has been successful, restoring the deformed face image to the acquired original face image.
(Appended Note 13)
An information processing method performed by a computer that displays an image on a display device, the computer executing:
an image acquisition step of acquiring a face image;
a step of creating a first character object based on the acquired face image; and
a game processing step of executing a game by displaying the first character object together with a second character object, the second character object being different from the first character object,
the game processing step including:
- a step of contributing to a success in the game by an attack on the first character object, the attack made in accordance with an operation of a player; and
- a step of invalidating an attack on the second character object, the attack made in accordance with an operation of the player.
(Appended Note 14)
An information processing method performed by a computer that displays an image on a display device, the computer executing:
an image acquisition step of acquiring at least one face image;
a step of creating a first character object, the first character object including one of the acquired face images; and
a game processing step of executing a game by displaying the first character object together with a second character object and a third character object, the second character object being smaller in dimensions than the first character object and including the one of the acquired face images, the third character object being smaller in dimensions than the first character object and including a face image other than the one of the acquired face images,
the game processing step including:
- a step of advancing deformation of the face image included in the first character object by an attack on the third character object, the attack made in accordance with an operation of a player; and
- a step of, by an attack on the second character object in accordance with an operation of the player, reversing the deformation such that the face image included in the first character object approaches the acquired original face image.
(Appended Note 15)
An information processing method performed by a computer that displays an image on a display device, the computer executing:
an image acquisition step of acquiring a face image;
a step of creating a character object, the character object including a face image obtained by deforming the acquired face image;
a game processing step of receiving an operation of a player, and advancing a game related to the face image;
a step of determining a success or a failure in the game, the success or the failure made in accordance with an operation of the player; and
a step of, when a result of the game has been successful, restoring the deformed face image to the acquired original face image.
A storage medium having stored thereon a game program, an image processing apparatus, an image processing system, and an image processing method, according to the present invention can generate a new image by combining a real world image with a virtual world image, and therefore are suitable for use as a game program, an image processing apparatus, an image processing system, an image processing method, and the like that perform a process of displaying various images on a display device.
Claims
1. A computer-readable storage medium having stored thereon a game program to be executed by a computer of a game apparatus that displays an image on a display device, the game program causing the computer to execute:
- an image acquisition step of acquiring a face image and temporarily storing the acquired face image in a first storage area, during a predetermined game or before a start of the predetermined game;
- a step of creating a first character object, the first character object being a character object including the face image stored in the first storage area;
- a first game processing step of, in the predetermined game, advancing a game related to the first character object in accordance with an operation of a player;
- a determination step of determining a success in the game related to the first character object; and
- a step of, at least when a success in the game has been determined in the determination step, saving the face image stored in the first storage area, in a second storage area in an accumulating manner.
2. The computer-readable storage medium having stored thereon the game program according to claim 1, wherein
- in the image acquisition step, the face image is acquired and temporarily stored in the first storage area before the start of the predetermined game.
3. The computer-readable storage medium having stored thereon the game program according to claim 1, further causing the computer to execute:
- a step of creating a second character object, the second character object being a character object including a face image selected automatically or by the player from among the face images saved in the second storage area, wherein
- in the first game processing step, in the predetermined game, a game related to the second character object is additionally advanced in accordance with an operation of the player.
4. The computer-readable storage medium having stored thereon the game program according to claim 1, further causing the computer to execute:
- a step of creating a second character object, the second character object being a character object including a face image selected automatically or by the player from among the face images saved in the second storage area; and
- a second game processing step of advancing a game related to the second character object in accordance with an operation of the player.
5. The computer-readable storage medium having stored thereon the game program according to claim 2, wherein
- the game apparatus is capable of acquiring an image from a capturing device, and
- in the image acquisition step, the face image is acquired from the capturing device before the start of the predetermined game.
6. The computer-readable storage medium having stored thereon the game program according to claim 5, wherein
- the game apparatus is capable of acquiring an image from a first capturing device that captures a front direction of a display surface of the display device, and an image from a second capturing device that captures a direction of a back surface of the display surface of the display device, the first capturing device and the second capturing device serving as the capturing device, and
- the image acquisition step includes: a step of acquiring a face image captured by the first capturing device in preference to acquiring a face image captured by the second capturing device; and a step of, after the face image from the first capturing device has been saved in the second storage area, permitting the face image captured by the second capturing device to be acquired.
7. The computer-readable storage medium having stored thereon the game program according to claim 1, further causing the computer to execute:
- a step of specifying attributes of the face images saved in the second storage area; and
- a step of prompting the player to acquire a face image corresponding to an attribute different from the attributes specified from the face images saved in the second storage area.
8. The computer-readable storage medium having stored thereon the game program according to claim 3, wherein
- the first game processing step includes: a step of advancing the game related to the first character object by attacking the character objects in accordance with an operation of the player, and
- in the first game processing step, an attack on the first character object is a valid attack for succeeding in the game related to the first character object, and an attack on the second character object is an invalid attack for succeeding in the game related to the first character object.
9. The computer-readable storage medium having stored thereon the game program according to claim 4, further causing the computer to execute:
- a step of creating a third character object, the third character object being a character object including a face image different from the face image included in the second character object, wherein
- the second game processing step includes: a step of advancing the game related to the second character object by attacking the character objects in accordance with an operation of the player, and
- in the second game processing step, an attack on the second character object is a valid attack for succeeding in the game related to the second character object, and an attack on the third character object is an invalid attack for succeeding in the game related to the second character object.
10. The computer-readable storage medium having stored thereon the game program according to claim 3, further causing the computer to execute:
- a step of creating a third character object, the third character object being a character object including the face image stored in the first storage area and being smaller in dimensions than the first character object; and
- a step of creating a fourth character object, the fourth character object being a character object including a face image different from the face image stored in the first storage area and being smaller in dimensions than the first character object, wherein
- the first game processing step includes: a step of advancing the game related to the first character object by attacking the character objects in accordance with an operation of the player; a step of, when the fourth character object has been attacked, advancing deformation of the face image included in the first character object; and a step of, when the third character object has been attacked, reversing the deformation such that the face image included in the first character object approaches the original face image stored in the first storage area.
11. The computer-readable storage medium having stored thereon the game program according to claim 4, further causing the computer to execute:
- a step of creating a third character object, the third character object being a character object including the same face image as the face image included in the second character object and being smaller in dimensions than the second character object; and
- a step of creating a fourth character object, the fourth character object being a character object including a face image different from the face image included in the second character object and being smaller in dimensions than the second character object, wherein
- the second game processing step includes: a step of advancing the game related to the second character object by attacking the character objects in accordance with an operation of the player; a step of, when the fourth character object has been attacked, advancing deformation of the face image included in the second character object; and a step of, when the third character object has been attacked, reversing the deformation such that the face image included in the second character object approaches the original face image saved in the second storage area.
12. The computer-readable storage medium having stored thereon the game program according to claim 1, wherein
- in the step of creating the first character object, a character object including a face image obtained by deforming the face image stored in the first storage area is created as the first character object, and
- the first game processing step includes: a step of, when the game related to the first character object has been successful, restoring the deformed face image to the original face image stored in the first storage area.
13. The computer-readable storage medium having stored thereon the game program according to claim 4, wherein
- in the step of creating the second character object, a character object including a face image obtained by deforming the face image saved in the second storage area is created as the second character object, and
- the second game processing step includes: a step of, when the game related to the second character object has been successful, restoring the deformed face image to the original face image saved in the second storage area.
14. The computer-readable storage medium having stored thereon the game program according to claim 1, wherein
- in the image acquisition step, the face image is acquired and temporarily stored in the first storage area during the predetermined game, and
- in the first game processing step, in accordance with the creation of the first character object based on the acquisition of the face image during the predetermined game, the first character object is caused to appear in the predetermined game, and the game related to the first character object is advanced.
15. The computer-readable storage medium having stored thereon the game program according to claim 14, further causing the computer to execute:
- a captured image acquisition step of acquiring a captured image captured by a real camera;
- a display image generation step of generating a display image in which a virtual character object that appears in the predetermined game is placed so as to have, as a background, the captured image acquired in the captured image acquisition step; and
- a display control step of displaying on the display device the display image generated in the display image generation step, wherein
- in the image acquisition step, during the predetermined game, at least one face image is extracted from the captured image displayed on the display device, and is temporarily stored in the first storage area.
16. The computer-readable storage medium having stored thereon the game program according to claim 15, wherein
- in the display image generation step, the display image is generated by placing the first character object such that, when displayed on the display device, the first character object overlaps a position of the face image in the captured image, the face image extracted in the image acquisition step.
17. The computer-readable storage medium having stored thereon the game program according to claim 16, wherein
- in the captured image acquisition step, captured images of a real world captured in real time by the real camera are repeatedly acquired,
- in the display image generation step, the captured images repeatedly acquired in the captured image acquisition step are sequentially set as the background,
- in the image acquisition step, face images corresponding to the already extracted face image are repeatedly acquired from the captured images sequentially set as the background,
- in the step of creating the first character object, the first character object is repeatedly created so as to include the face images repeatedly acquired in the image acquisition step, and
- in the display image generation step, the display image is generated by placing the repeatedly created first character object such that, when displayed on the display device, the repeatedly created first character object overlaps positions of the face images in the respective captured images, the face images repeatedly acquired in the image acquisition step.
18. The computer-readable storage medium having stored thereon the game program according to claim 1, wherein
- the game apparatus is capable of using image data stored in storage means for non-temporarily storing data, and
- in the image acquisition step, before the start of the predetermined game, at least one face image is extracted from the image data stored in the storage means, and is temporarily stored in the first storage area.
19. An image processing apparatus connectable to a display device, the image processing apparatus comprising:
- image acquisition means for acquiring a face image and temporarily storing the acquired face image in a first storage area, during a predetermined game or before a start of the predetermined game;
- means for creating a character object, the character object including the face image stored in the first storage area;
- game processing means for, in the predetermined game, advancing a game related to the character object in accordance with an operation of a player by displaying the game related to the character object on the display device;
- determination means for determining a success in the game related to the character object; and
- means for, at least when a success in the game has been determined by the determination means, saving the face image stored in the first storage area, in a second storage area in an accumulating manner.
20. An image processing apparatus comprising:
- a display device;
- image acquisition means for acquiring a face image and temporarily storing the acquired face image in a first storage area, during a predetermined game or before a start of the predetermined game;
- means for creating a character object, the character object including the face image stored in the first storage area;
- game processing means for, in the predetermined game, advancing a game related to the character object in accordance with an operation of a player by displaying the game related to the character object on the display device;
- determination means for determining a success in the game related to the character object; and
- means for, at least when a success in the game has been determined by the determination means, saving the face image stored in the first storage area, in a second storage area in an accumulating manner.
21. An image processing system that displays information including an image on a display device, the image processing system comprising:
- image acquisition means for acquiring a face image and temporarily storing the acquired face image in a first storage area, during a predetermined game or before a start of the predetermined game;
- means for creating a character object, the character object including the face image stored in the first storage area;
- game processing means for, in the predetermined game, advancing a game related to the character object in accordance with an operation of a player by displaying the game related to the character object on the display device;
- determination means for determining a success in the game related to the character object; and
- means for, at least when a success in the game has been determined by the determination means, saving the face image stored in the first storage area, in a second storage area in an accumulating manner.
22. An image processing method performed by a computer that displays an image on a display device, the computer executing:
- an image acquisition step of acquiring a face image and temporarily storing the acquired face image in a first storage area, during a predetermined game or before a start of the predetermined game;
- a step of creating a character object, the character object including the face image stored in the first storage area;
- a game processing step of, in the predetermined game, advancing a game related to the character object in accordance with an operation of a player;
- a determination step of determining a success in the game related to the character object; and
- a step of, at least when a success in the game has been determined in the determination step, saving the face image stored in the first storage area, in a second storage area in an accumulating manner.
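The method of claim 22 above — acquire a face image into a first storage area, create a character object including it, advance the game, and, upon a determined success, save the image in a second storage area in an accumulating manner — can be sketched roughly as follows. This is a minimal illustrative sketch only; all class, method, and variable names are hypothetical, as the claims do not specify any particular implementation:

```python
# Hypothetical sketch of the image processing method of claim 22.
# Names are illustrative; the patent claims do not prescribe an implementation.

class FaceImageGame:
    def __init__(self):
        self.first_storage = None   # first storage area: temporary face image
        self.second_storage = []    # second storage area: accumulated face images

    def acquire_face_image(self, face_image):
        # Image acquisition step: temporarily store the acquired face image.
        self.first_storage = face_image

    def create_character_object(self):
        # Character creation step: the object includes the stored face image.
        return {"face_image": self.first_storage}

    def play_game(self, character_object, player_succeeds):
        # Game processing and determination steps; success is modeled here
        # simply as a boolean supplied by the caller.
        if player_succeeds:
            self.save_face_image()
        return player_succeeds

    def save_face_image(self):
        # Saving step: accumulate the face image in the second storage area.
        self.second_storage.append(self.first_storage)


# Usage: a "won" game accumulates the acquired face image.
game = FaceImageGame()
game.acquire_face_image("face_A")
enemy = game.create_character_object()
game.play_game(enemy, player_succeeds=True)
print(game.second_storage)
```

Note that the claim requires saving "at least when" success is determined, so an implementation may also save in other cases; the sketch shows only the minimal claimed behavior.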
Type: Application
Filed: Apr 6, 2011
Publication Date: Apr 19, 2012
Applicant: NINTENDO CO., LTD. (Kyoto)
Inventor: Toshiaki SUZUKI (Kyoto)
Application Number: 13/080,989
International Classification: A63F 9/24 (20060101);