ENTERTAINMENT DEVICE AND ENTERTAINMENT METHODS

An entertainment device comprises a display, an object position detector to detect a three-dimensional position of an object in front of an image plane of the display, and a source position detector to detect a source position of a pointing source with respect to the display, the pointing source having an associated pointing direction indicative of a direction in which the pointing source is pointing. The device also comprises a direction detector to detect a pointing direction of a user control device. The pointing direction of the user control device is associated with the pointing direction of the pointing source, so that a change in the pointing direction of the user control device causes a change in the pointing direction of the pointing source. The device also comprises an alignment detector to detect whether the three-dimensional position lies on a line passing through the source position in the direction of the pointing direction of the pointing source.

Description
BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to an entertainment device and entertainment methods.

2. Description of the Prior Art

Recently, motion controllers such as the controller for the Nintendo® Wii® (sometimes known as the “Wiimote”) have become popular for controlling entertainment devices. Such controllers typically use motion data from internal motion sensors such as gyroscopes and accelerometers, combined with optical data generated in dependence on a light source mounted on a display, to detect the position and orientation of the motion controller. Accordingly, a user can use the motion controller to point at objects displayed on the display to select one of the objects. Furthermore, the user can move the motion controller to control motion of a game character. However, such motion control is limited to objects displayed on the display.

Additionally, so-called three-dimensional (3D) TVs which can display images to be viewed in 3D are becoming more popular. Such TVs allow a user to perceive a three-dimensional image, for example by the user wearing suitable viewing glasses to view the TV. As such, there is increasing interest in videogames which can output images which can be displayed on 3D TVs so that the user perceives the video game content as three-dimensional images.

It is an object of the present invention to provide improved techniques for three-dimensional display.

SUMMARY OF THE INVENTION

In a first aspect, there is provided an entertainment device comprising: means for displaying an image on a display to a user; means for detecting a three-dimensional position of an object in front of an image plane of the display; means for detecting a source position of a pointing source with respect to the display, the pointing source having an associated pointing direction indicative of a direction in which the pointing source is pointing; means for detecting a pointing direction of a user control device, the pointing direction of the user control device being associated with the pointing direction of the pointing source so that a change in the pointing direction of the user control device causes a change in the pointing direction of the pointing source; and means for detecting whether the three-dimensional position lies on a line passing through the source position in the direction of the pointing direction of the pointing source.

In a second aspect, there is provided an entertainment method comprising: displaying an image on a display to a user; detecting a three-dimensional position of an object in front of an image plane of the display; detecting a source position of a pointing source with respect to the display, the pointing source having an associated pointing direction indicative of a direction in which the pointing source is pointing; detecting a pointing direction of a user control device, the pointing direction of the user control device being associated with the pointing direction of the pointing source so that a change in the pointing direction of the user control device causes a change in the pointing direction of the pointing source; and detecting whether the three-dimensional position lies on a line passing through the source position in the direction of the pointing direction of the pointing source.

By detecting whether the three-dimensional position lies on a line passing through the source position in the direction of the pointing direction of the pointing source, embodiments of the invention can advantageously detect whether a user is using the user control device to cause the pointing source to point at an object in front of the image plane of the display. For example, a user could use the user control device to point at a real object in front of the display to select that object for image processing. As another example, during a game executed by the entertainment device, the user could use the user control device to point at a computer generated object which is caused to appear in front of the image plane of the display. Embodiments of the present invention therefore can provide a more immersive and diverse experience for a user. In some embodiments, the pointing source corresponds to a position of a game character within a game executed by the entertainment device. In other embodiments, the pointing source corresponds to the position of the user control device.

In a third aspect, there is provided an entertainment device comprising: generating means for generating a three-dimensional image to be displayed on a display; means for detecting, with respect to the display, a three-dimensional image display region in which the three-dimensional image can be perceived as three-dimensional by a viewer viewing the display; and means for detecting a source position and pointing direction of a user control device for controlling the entertainment device, the source position being indicative of a position of the user control device with respect to the display, and the pointing direction being indicative of a direction in which the user control device is pointing with respect to the display; in which the generating means is operable to generate the three-dimensional image within the image display region so that the three-dimensional image appears to be associated with the source position and the pointing direction of the user control device.

In a fourth aspect, there is provided an entertainment method comprising: generating, using an entertainment device, a three-dimensional image to be displayed on a display; detecting, with respect to the display, a three-dimensional image display region in which the three-dimensional image can be perceived as three-dimensional by a viewer viewing the display; detecting a source position and pointing direction of a user control device for controlling the entertainment device, the source position being indicative of a position of the user control device with respect to the display, and the pointing direction being indicative of a direction in which the user control device is pointing with respect to the display; and generating the three-dimensional image within the image display region so that the three-dimensional image appears to be associated with the source position and the pointing direction of the user control device.

By generating a three-dimensional image within the image display region so that the three-dimensional image appears to be associated with the source position and the pointing direction of the user control device, embodiments of the present invention advantageously provide a more interactive user experience. For example, the user control device could act as a flame thrower within a game with the flames appearing to come from the motion controller. A more powerful and immersive 3D experience can therefore be provided to the user.

In a fifth aspect, there is provided an entertainment device comprising: generating means for generating a three-dimensional image to be displayed on a display; means for detecting, with respect to the display, a three-dimensional image display region in which the three-dimensional image can be perceived as three-dimensional by a viewer viewing the display; means for detecting the apparent position of the three-dimensional image with respect to the display; and controlling means for controlling the appearance of the three-dimensional image in dependence upon the relative position of the three-dimensional image with respect to the image display region.

In a sixth aspect, there is provided an entertainment method comprising: generating a three-dimensional image to be displayed on a display; detecting, with respect to the display, a three-dimensional image display region in which the three-dimensional image can be perceived as three-dimensional by a viewer viewing the display; detecting the apparent position of the three-dimensional image with respect to the display; and controlling the appearance of the three-dimensional image in dependence upon the relative position of the three-dimensional image with respect to the image display region.

Embodiments of the present invention advantageously address a problem which may occur when displaying 3D images, for example when rendering 3D images for 3D video games which use pseudo-realistic physics engines. Typically, such video games may use realistic physics models to predict the motion of objects within a game such as debris from an explosion. The motion of objects within physics based video games or physics based virtual reality simulations typically allows 3D objects to travel unrestricted through the virtual environment.

However, this may mean that objects may quickly move out of the 3D image display region. This can degrade the 3D effect and possibly cause the user to experience headaches and/or nausea because it is more likely that the user may only be able to view one image of a stereo pair if the object is outside the three-dimensional image display region. Therefore, embodiments of the invention control the appearance of the 3D image in dependence upon the relative position of the 3D image with respect to the 3D image display region.

For example, the path or trajectory of the object (such as explosion debris in a game) could be controlled so that the object remains within the 3D image display region. As another example, the appearance of the 3D image could be caused to fade when a position of the 3D image is within a threshold distance of an edge of the 3D image display region. Accordingly, a likelihood that the user experiences nausea and/or headaches can be reduced. Furthermore, a more diverse and immersive 3D experience can be provided, because the user is more likely to feel comfortable experiencing the 3D experience for longer periods of time.
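By way of illustration only, the fading behaviour described above could be implemented along the following lines. This is a minimal sketch rather than the claimed implementation: it assumes the 3D image display region is approximated by an axis-aligned box in display coordinates, and the fade margin and example values are arbitrary.

```python
import numpy as np

def fade_factor(position, region_min, region_max, threshold):
    """Return an opacity in [0, 1] for a 3D object at `position`.

    The object is fully opaque well inside the (assumed axis-aligned)
    3D image display region and fades linearly to transparent as it
    comes within `threshold` of any face of the region.
    """
    position = np.asarray(position, dtype=float)
    region_min = np.asarray(region_min, dtype=float)
    region_max = np.asarray(region_max, dtype=float)

    # Distance to the nearest face of the region along each axis
    # (negative if the position is outside the region on that axis).
    dist_to_faces = np.minimum(position - region_min, region_max - position)
    nearest = float(dist_to_faces.min())

    if nearest <= 0.0:           # outside the region: fully faded
        return 0.0
    if nearest >= threshold:     # comfortably inside: fully opaque
        return 1.0
    return nearest / threshold   # linear fade within the margin

# Example: an object 2 cm from a face of the region, with a 10 cm fade margin.
print(fade_factor([0.0, 0.0, 0.48], [-0.5, -0.3, 0.0], [0.5, 0.3, 0.5], 0.10))  # 0.2
```

The same distance-to-edge value could equally be used to clamp or redirect an object's trajectory so that it remains within the region, as in the first example above.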

BRIEF DESCRIPTION OF THE DRAWINGS

The above and other objects, features and advantages of the invention will be apparent from the following detailed description of illustrative embodiments which is to be read in connection with the accompanying drawings, in which:

FIG. 1 is a schematic diagram of an entertainment device;

FIG. 2 is a schematic diagram of a cell processor;

FIG. 3 is a schematic diagram of a video graphics processor;

FIGS. 4A and 4B are schematic diagrams of a stereoscopic camera and captured stereoscopic images;

FIG. 5A is a schematic diagram of a stereoscopic camera;

FIGS. 5B and 5C are schematic diagrams of a viewed stereoscopic image;

FIG. 6 is a schematic diagram of a motion controller in accordance with embodiments of the present invention;

FIG. 7 is a schematic diagram of a user using the motion controller to control the entertainment device in accordance with embodiments of the present invention;

FIG. 8 is a schematic diagram of a user using the motion controller to control the entertainment device in accordance with embodiments of the present invention;

FIG. 9 is a schematic diagram of a user using the motion controller to control the entertainment device in accordance with embodiments of the present invention;

FIG. 10 is a schematic diagram showing a plan view of the user seen from above viewing a display at two different horizontal positions in accordance with embodiments of the present invention;

FIG. 11 is a schematic diagram of the user using the motion controller to point at a computer generated object as seen from one side of the display in accordance with embodiments of the present invention;

FIG. 12 is a schematic diagram of a three-dimensional (3D) image display region shown with respect to the display seen from above in accordance with embodiments of the present invention;

FIG. 13 is a schematic diagram of the 3D image display region viewed from the side in accordance with embodiments of the present invention;

FIG. 14 is a schematic diagram of control of the appearance of a 3D image in accordance with embodiments of the present invention;

FIG. 15 is a schematic diagram of a 3D icon field in accordance with embodiments of the present invention;

FIG. 16 is a flow chart of an entertainment method in accordance with embodiments of the present invention;

FIG. 17 is a flow chart of an entertainment method in accordance with embodiments of the present invention; and

FIG. 18 is a flow chart of an entertainment method in accordance with embodiments of the present invention.

DESCRIPTION OF THE PREFERRED EMBODIMENTS

An entertainment device and entertainment methods are disclosed. In the following description, a number of specific details are presented in order to provide a thorough understanding of the embodiments of the present invention. It will be apparent, however, to a person skilled in the art that these specific details need not be employed to practise the present invention. Conversely, specific details known to the person skilled in the art are omitted for the purposes of clarity where appropriate.

FIG. 1 schematically illustrates the overall system architecture of a Sony® Playstation 3® entertainment device. A system unit 10 is provided, with various peripheral devices connectable to the system unit.

The system unit 10 comprises: a Cell processor 100; a Rambus® dynamic random access memory (XDRAM) unit 500; a Reality Synthesiser graphics unit 200 with a dedicated video random access memory (VRAM) unit 250; and an I/O bridge 700.

The system unit 10 also comprises a Blu Ray® Disk BD-ROM® optical disk reader 430 for reading from a disk 440 and a removable slot-in hard disk drive (HDD) 400, accessible through the I/O bridge 700. Optionally the system unit also comprises a memory card reader 450 for reading compact flash memory cards, Memory Stick® memory cards and the like, which is similarly accessible through the I/O bridge 700.

The I/O bridge 700 also connects to four Universal Serial Bus (USB) 2.0 ports 710; a gigabit Ethernet port 720; an IEEE 802.11b/g wireless network (Wi-Fi) port 730; and a Bluetooth® wireless link port 740 capable of supporting up to seven Bluetooth connections.

In operation the I/O bridge 700 handles all wireless, USB and Ethernet data, including data from one or more game controllers 751. For example when a user is playing a game, the I/O bridge 700 receives data from the game controller 751 via a Bluetooth link and directs it to the Cell processor 100, which updates the current state of the game accordingly.

The wireless, USB and Ethernet ports also provide connectivity for other peripheral devices in addition to game controllers 751, such as: a remote control 752; a keyboard 753; a mouse 754; a portable entertainment device 755 such as a Sony Playstation Portable® entertainment device; a video camera such as an EyeToy® video camera 756; a microphone headset 757; and a motion controller 758. The motion controller 758 will be described in more detail later below.

In embodiments of the present invention, the video camera is a stereoscopic video camera 1010. Such peripheral devices may therefore in principle be connected to the system unit 10 wirelessly; for example the portable entertainment device 755 may communicate via a Wi-Fi ad-hoc connection, whilst the microphone headset 757 may communicate via a Bluetooth link.

The provision of these interfaces means that the Playstation 3 device is also potentially compatible with other peripheral devices such as digital video recorders (DVRs), set-top boxes, digital cameras, portable media players, Voice over IP telephones, mobile telephones, printers and scanners.

In addition, a legacy memory card reader 410 may be connected to the system unit via a USB port 710, enabling the reading of memory cards 420 of the kind used by the Playstation® or Playstation 2® devices.

In the present embodiment, the game controller 751 is operable to communicate wirelessly with the system unit 10 via the Bluetooth link. However, the game controller 751 can instead be connected to a USB port, thereby also providing power by which to charge the battery of the game controller 751. In addition to one or more analogue joysticks and conventional control buttons, the game controller is sensitive to motion in 6 degrees of freedom, corresponding to translation and rotation in each axis. Consequently gestures and movements by the user of the game controller may be translated as inputs to a game in addition to or instead of conventional button or joystick commands. Optionally, other wirelessly enabled peripheral devices such as the Playstation Portable device may be used as a controller. In the case of the Playstation Portable device, additional game or control information (for example, control instructions or number of lives) may be provided on the screen of the device. Other alternative or supplementary control devices may also be used, such as a dance mat (not shown), a light gun (not shown), a steering wheel and pedals (not shown) or bespoke controllers, such as a single or several large buttons for a rapid-response quiz game (also not shown).

The remote control 752 is also operable to communicate wirelessly with the system unit 10 via a Bluetooth link. The remote control 752 comprises controls suitable for the operation of the Blu Ray Disk BD-ROM reader 430 and for the navigation of disk content.

The Blu Ray Disk BD-ROM reader 430 is operable to read CD-ROMs compatible with the Playstation and PlayStation 2 devices, in addition to conventional pre-recorded and recordable CDs, and so-called Super Audio CDs. The reader 430 is also operable to read DVD-ROMs compatible with the Playstation 2 and PlayStation 3 devices, in addition to conventional pre-recorded and recordable DVDs. The reader 430 is further operable to read BD-ROMs compatible with the Playstation 3 device, as well as conventional pre-recorded and recordable Blu-Ray Disks.

The system unit 10 is operable to supply audio and video, either generated or decoded by the Playstation 3 device via the Reality Synthesiser graphics unit 200, through audio and video connectors to a display and sound output device 300 such as a monitor or television set having a display 305 and one or more loudspeakers 310. The audio connectors 210 may include conventional analogue and digital outputs whilst the video connectors 220 may variously include component video, S-video, composite video and one or more High Definition Multimedia Interface (HDMI) outputs. Consequently, video output may be in formats such as PAL or NTSC, or in 720p, 1080i or 1080p high definition. Audio processing (generation, decoding and so on) is performed by the Cell processor 100. The Playstation 3 device's operating system supports Dolby® 5.1 surround sound, Dolby® Theatre Surround (DTS), and the decoding of 7.1 surround sound from Blu-Ray® disks.

In the present embodiment, the video camera 756 comprises a single charge coupled device (CCD), an LED indicator, and hardware-based real-time data compression and encoding apparatus so that compressed video data may be transmitted in an appropriate format such as an intra-image based MPEG (motion picture expert group) standard for decoding by the system unit 10. The camera LED indicator is arranged to illuminate in response to appropriate control data from the system unit 10, for example to signify adverse lighting conditions. Embodiments of the video camera 756 may variously connect to the system unit 10 via a USB, Bluetooth or Wi-Fi communication port. Embodiments of the video camera may include one or more associated microphones and also be capable of transmitting audio data. In embodiments of the video camera, the CCD may have a resolution suitable for high-definition video capture. In use, images captured by the video camera may for example be incorporated within a game or interpreted as game control inputs.

In general, in order for successful data communication to occur with a peripheral device such as a video camera or remote control via one of the communication ports of the system unit 10, an appropriate piece of software such as a device driver should be provided. Device driver technology is well-known and will not be described in detail here, except to say that the skilled man will be aware that a device driver or similar software interface may be required in the present embodiment described.

Referring now to FIG. 2, the Cell processor 100 has an architecture comprising four basic components: external input and output structures comprising a memory controller 160 and a dual bus interface controller 170A,B; a main processor referred to as the Power Processing Element 150; eight co-processors referred to as Synergistic Processing Elements (SPEs) 110A-H; and a circular data bus connecting the above components referred to as the Element Interconnect Bus 180. The total floating point performance of the Cell processor is 218 GFLOPS, compared with the 6.2 GFLOPs of the Playstation 2 device's Emotion Engine.

The Power Processing Element (PPE) 150 is based upon a two-way simultaneous multithreading Power 970 compliant PowerPC core (PPU) 155 running with an internal clock of 3.2 GHz. It comprises a 512 kB level 2 (L2) cache and a 32 kB level 1 (L1) cache. The PPE 150 is capable of eight single precision operations per clock cycle, translating to 25.6 GFLOPs at 3.2 GHz. The primary role of the PPE 150 is to act as a controller for the Synergistic Processing Elements 110A-H, which handle most of the computational workload. In operation the PPE 150 maintains a job queue, scheduling jobs for the Synergistic Processing Elements 110A-H and monitoring their progress. Consequently each Synergistic Processing Element 110A-H runs a kernel whose role is to fetch a job, execute it and to synchronise with the PPE 150.

Each Synergistic Processing Element (SPE) 110A-H comprises a respective Synergistic Processing Unit (SPU) 120A-H, and a respective Memory Flow Controller (MFC) 140A-H comprising in turn a respective Dynamic Memory Access Controller (DMAC) 142A-H, a respective Memory Management Unit (MMU) 144A-H and a bus interface (not shown).

Each SPU 120A-H is a RISC processor clocked at 3.2 GHz and comprising 256 kB local RAM 130A-H, expandable in principle to 4 GB. Each SPE gives a theoretical 25.6 GFLOPS of single precision performance. An SPU can operate on 4 single precision floating point numbers, 4 32-bit numbers, 8 16-bit integers, or 16 8-bit integers in a single clock cycle. In the same clock cycle it can also perform a memory operation. The SPU 120A-H does not directly access the system memory XDRAM 500; the 64-bit addresses formed by the SPU 120A-H are passed to the MFC 140A-H which instructs its DMA controller 142A-H to access memory via the Element Interconnect Bus 180 and the memory controller 160.

The Element Interconnect Bus (EIB) 180 is a logically circular communication bus internal to the Cell processor 100 which connects the above processor elements, namely the PPE 150, the memory controller 160, the dual bus interface 170A,B and the 8 SPEs 110A-H, totalling 12 participants. Participants can simultaneously read and write to the bus at a rate of 8 bytes per clock cycle. As noted previously, each SPE 110A-H comprises a DMAC 142A-H for scheduling longer read or write sequences. The EIB comprises four channels, two each in clockwise and anti-clockwise directions. Consequently for twelve participants, the longest step-wise data-flow between any two participants is six steps in the appropriate direction. The theoretical peak instantaneous EIB bandwidth for 12 slots is therefore 96B per clock, in the event of full utilisation through arbitration between participants. This equates to a theoretical peak bandwidth of 307.2 GB/s (gigabytes per second) at a clock rate of 3.2 GHz.

The memory controller 160 comprises an XDRAM interface 162, developed by Rambus Incorporated. The memory controller interfaces with the Rambus XDRAM 500 with a theoretical peak bandwidth of 25.6 GB/s.

The dual bus interface 170A,B comprises a Rambus FlexIO® system interface 172A,B. The interface is organised into 12 channels each being 8 bits wide, with five paths being inbound and seven outbound. This provides a theoretical peak bandwidth of 62.4 GB/s (36.4 GB/s outbound, 26 GB/s inbound) between the Cell processor and the I/O Bridge 700 via controller 170A and the Reality Synthesiser graphics unit 200 via controller 170B.

Data sent by the Cell processor 100 to the Reality Synthesiser graphics unit 200 will typically comprise display lists, being a sequence of commands to draw vertices, apply textures to polygons, specify lighting conditions, and so on.

Referring now to FIG. 3, the Reality Synthesiser graphics (RSX) unit 200 is a video accelerator based upon the NVidia® G70/71 architecture that processes and renders lists of commands produced by the Cell processor 100. The RSX unit 200 comprises a host interface 202 operable to communicate with the bus interface controller 170B of the Cell processor 100; a vertex pipeline 204 (VP) comprising eight vertex shaders 205; a pixel pipeline 206 (PP) comprising 24 pixel shaders 207; a render pipeline 208 (RP) comprising eight render output units (ROPs) 209; a memory interface 210; and a video converter 212 for generating a video output. The RSX 200 is complemented by 256 MB double data rate (DDR) video RAM (VRAM) 250, clocked at 600 MHz and operable to interface with the RSX 200 at a theoretical peak bandwidth of 25.6 GB/s. In operation, the VRAM 250 maintains a frame buffer 214 and a texture buffer 216. The texture buffer 216 provides textures to the pixel shaders 207, whilst the frame buffer 214 stores results of the processing pipelines. The RSX can also access the main memory 500 via the EIB 180, for example to load textures into the VRAM 250.

The vertex pipeline 204 primarily processes deformations and transformations of vertices defining polygons within the image to be rendered.

The pixel pipeline 206 primarily processes the application of colour, textures and lighting to these polygons, including any pixel transparency, generating red, green, blue and alpha (transparency) values for each processed pixel. Texture mapping may simply apply a graphic image to a surface, or may include bump-mapping (in which the notional direction of a surface is perturbed in accordance with texture values to create highlights and shade in the lighting model) or displacement mapping (in which the applied texture additionally perturbs vertex positions to generate a deformed surface consistent with the texture).

The render pipeline 208 performs depth comparisons between pixels to determine which should be rendered in the final image. Optionally, if the intervening pixel process will not affect depth values (for example in the absence of transparency or displacement mapping) then the render pipeline and vertex pipeline 204 can communicate depth information between them, thereby enabling the removal of occluded elements prior to pixel processing, and so improving overall rendering efficiency. In addition, the render pipeline 208 also applies subsequent effects such as full-screen anti-aliasing over the resulting image.

Both the vertex shaders 205 and pixel shaders 207 are based on the shader model 3.0 standard. Up to 136 shader operations can be performed per clock cycle, with the combined pipeline therefore capable of 74.8 billion shader operations per second, outputting up to 840 million vertices and 10 billion pixels per second. The total floating point performance of the RSX 200 is 1.8 TFLOPS.

Typically, the RSX 200 operates in close collaboration with the Cell processor 100; for example, when displaying an explosion, or weather effects such as rain or snow, a large number of particles must be tracked, updated and rendered within the scene. In this case, the PPU 155 of the Cell processor may schedule one or more SPEs 110A-H to compute the trajectories of respective batches of particles. Meanwhile, the RSX 200 accesses any texture data (e.g. snowflakes) not currently held in the video RAM 250 from the main system memory 500 via the element interconnect bus 180, the memory controller 160 and a bus interface controller 170B. The or each SPE 110A-H outputs its computed particle properties (typically coordinates and normals, indicating position and attitude) directly to the video RAM 250; the DMA controller 142A-H of the or each SPE 110A-H addresses the video RAM 250 via the bus interface controller 170B. Thus in effect the assigned SPEs become part of the video processing pipeline for the duration of the task.

In general, the PPU 155 can assign tasks in this fashion to six of the eight SPEs available; one SPE is reserved for the operating system, whilst one SPE is effectively disabled. The disabling of one SPE provides a greater level of tolerance during fabrication of the Cell processor, as it allows for one SPE to fail the fabrication process. Alternatively if all eight SPEs are functional, then the eighth SPE provides scope for redundancy in the event of subsequent failure by one of the other SPEs during the life of the Cell processor.

The PPU 155 can assign tasks to SPEs in several ways. For example, SPEs may be chained together to handle each step in a complex operation, such as accessing a DVD, video and audio decoding, and error masking, with each step being assigned to a separate SPE. Alternatively or in addition, two or more SPEs may be assigned to operate on input data in parallel, as in the particle animation example above.

Software instructions implemented by the Cell processor 100 and/or the RSX 200 may be supplied at manufacture and stored on the HDD 400, and/or may be supplied on a data carrier or storage medium such as an optical disk or solid state memory, or via a transmission medium such as a wired or wireless network or internet connection, or via combinations of these.

The software supplied at manufacture comprises system firmware and the Playstation 3 device's operating system (OS). In operation, the OS provides a user interface enabling a user to select from a variety of functions, including playing a game, listening to music, viewing photographs, or viewing a video. The interface takes the form of a so-called cross media-bar (XMB), with categories of function arranged horizontally. The user navigates by moving through the function icons (representing the functions) horizontally using the game controller 751, remote control 752 or other suitable control device so as to highlight a desired function icon, at which point options pertaining to that function appear as a vertically scrollable list of option icons centred on that function icon, which may be navigated in analogous fashion. However, if a game, audio or movie disk 440 is inserted into the BD-ROM optical disk reader 430, the Playstation 3 device may select appropriate options automatically (for example, by commencing the game), or may provide relevant options (for example, to select between playing an audio disk or compressing its content to the HDD 400).

In addition, the OS provides an on-line capability, including a web browser, an interface with an on-line store from which additional game content, demonstration games (demos) and other media may be downloaded, and a friends management capability, providing on-line communication with other Playstation 3 device users nominated by the user of the current device; for example, by text, audio or video depending on the peripheral devices available. The on-line capability also provides for on-line communication, content download and content purchase during play of a suitably configured game, and for updating the firmware and OS of the Playstation 3 device itself. It will be appreciated that the term “on-line” does not imply the physical presence of wires, as the term can also apply to wireless connections of various types.

Referring now to FIGS. 4A and 4B, in conventional stereoscopic image generation a stereoscopic camera 1010 generates a pair of images whose viewpoints are separated by a known distance equal to average eye separation. In FIG. 4A, both lenses of the stereoscopic camera are looking at a sequence of objects P, Q, R, S and T, and two additional objects N and O (assumed for the purposes of explanation to be positioned above the other objects). As can be seen in the resulting pair of images comprising left-eye image 1012 and right-eye image 1014, the different image viewpoints result in a different image of the objects from each lens. In FIG. 4B, an overlay image 1020 of the stereoscopic image pair illustrates that the displacement between the objects within the image pair 1012 and 1014 is inversely proportional to the distance of the object from the stereoscopic camera.

Subsequently, the stereoscopic image pair is displayed via a display mechanism (such as alternate frame sequencing and glasses with switchably occluded lenses, or lenticular lensing on an autostereoscopic display screen) that delivers a respective one of the pair of images (1012, 1014) to a respective eye of the viewer, and the object displacement between the images delivered to each eye causes an illusion of depth in the viewed content.

This relative displacement between corresponding image elements of the left- and right-image is also referred to in stereoscopy as parallax (as distinct from the visual effect of objects at different distances panning at different speeds, also sometimes known as the parallax effect). In the context of stereoscopy, so-called ‘positive parallax’ causes an object to appear to be within or behind the plane of the screen, and in this case the displacement is such that a left eye image element is to the left of a right eye image element. Meanwhile, ‘negative parallax’ causes an object to appear to be in front of the plane of the screen, and in this case the displacement is such that a left eye image element is to the right of a right eye image element, as is the case in FIGS. 4A and 4B. Finally, ‘zero parallax’ occurs at the plane of the screen, where the user focuses their eyes and hence there is no displacement between left and right image elements.
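The relationship between screen parallax and perceived depth can be made concrete with the usual similar-triangles model. The sketch below is only illustrative: it assumes the sign convention given above (positive parallax places the left-eye element to the left of the right-eye element), and the 65 mm eye separation and 2 m viewing distance are assumed example values rather than figures taken from this description.

```python
def perceived_distance(parallax_m, eye_separation_m=0.065, viewing_distance_m=2.0):
    """Perceived distance from the viewer to a stereoscopic image element.

    Similar-triangles model: zero parallax maps to the screen plane,
    positive parallax (uncrossed) to behind the screen and negative
    parallax (crossed) to in front of it.  Parallax, eye separation and
    viewing distance are all in metres.
    """
    if parallax_m >= eye_separation_m:
        raise ValueError("parallax at or beyond eye separation has no finite depth")
    return eye_separation_m * viewing_distance_m / (eye_separation_m - parallax_m)

print(perceived_distance(0.0))    # 2.00 m: zero parallax, at the screen plane
print(perceived_distance(0.02))   # ~2.89 m: positive parallax, behind the screen
print(perceived_distance(-0.02))  # ~1.53 m: negative parallax, in front of the screen
```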

Referring now to FIGS. 5A and 5B, the display mechanism and the position of the viewer in combination determine whether the apparent distance of the objects is faithfully reproduced. Firstly, the size of the display acts as a scaling factor on the apparent displacement (parallax) of the objects; as a result a large screen (such as in a cinema) requires a greater distance from the user (i.e. in the cinema auditorium) to produce the appropriate parallax. Meanwhile a smaller screen such as that of a domestic television requires a smaller distance.

In FIG. 5A, reference is made only to objects P and T for clarity, but it will be appreciated that the following holds true for all stereoscopic image elements. In FIG. 5A, the respective distances from the objects P and T to the stereoscopic camera 1010 are δP and δT. As described previously, these respective distances result in different displacements for the objects between the images captured by the two lenses of the stereoscopic camera, as seen again in the overlay 1020 of the two captured images in FIG. 5B. In this case, both objects show negative parallax.

As seen in FIG. 5B, there is a small displacement (negative parallax) for distant object T and a large displacement (negative parallax) for nearby object P.

With a suitable 3D display arranged to project the image from the left lens to the viewer's left eye, and the image from the right lens to the viewer's right eye, the viewer's brain interprets the position of the objects as being located at the point of intersection of the respective lines of sight of each eye and the object as depicted in each image. In FIG. 5B, these are at distances δP′ and δT′ from the user.

Where the factors of the size of display, the distance of the user (and individual eye separation) are correct, as in FIG. 5B, then δP′≈δP and δT′≈δT. Moreover, the relative distance between these objects (δT′−δP′) is substantially the same as in the original scene (δT−δP). Consequently in this case the sense of depth experienced by the viewer feels correct and natural, with no distortion in the separation of depth between objects or image elements in or out of the plane of the image.

Notably the apparent depth is basically correct as long as the distance of the user is correct for the current screen size, even if the user is not central to the image; as can be seen in FIG. 5C, where again δP′≈δP, δT′≈δT and (δT′−δP′)≈(δT−δP).

It will be appreciated that in this case for the sake of explanation the effective magnification of the captured scene is 1:1. Of course typically different scenes may zoom in or out, and different screen sizes also magnify the reproduced image. Thus more generally the apparent depth is correct if the apparent scale (magnification) along the depth or ‘Z’ axis is the same as the apparent scale in the ‘X’ and ‘Y’ axis of the image plane.

In an embodiment, the distance of the viewer is detected using, for example, a video camera such as the EyeToy coupled with a remote distance measuring system, such as an infra-red emitter and detector. Such combined devices are currently available, such as for example the so-called ‘z-cam’ from 3DV Systems (http://www.3dvsystems.com/). Alternatively a stereoscopic video camera can be used to determine the distance of the user based on the same displacement measurements noted for the stereoscopic images as described above. Another alternative is to use a conventional webcam or EyeToy camera and to use known face recognition techniques to identify faces or heads of viewers, and from these to generate a measure of viewer distance from the display screen. In any of these cases, the relative position of the camera with respect to the 3D image display is also known, so that the distance from the viewer to the 3D image can be computed.
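As one possible realisation of the face-recognition alternative mentioned above, the apparent width of a detected face can be converted to a distance with a pinhole-camera approximation. The sketch below uses OpenCV's stock Haar cascade; the face width and focal length constants are assumptions that would have to be replaced by calibrated values for the actual camera used.

```python
import cv2

# Illustrative constants: a typical adult face width, and a focal length (in
# pixels) that would have to come from calibrating the actual camera.
FACE_WIDTH_M = 0.16
FOCAL_LENGTH_PX = 700.0

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def estimate_viewer_distance(frame_bgr):
    """Estimate viewer distance (metres) from the apparent face width.

    Pinhole model: distance = focal_length_px * real_width / width_px.
    Returns None if no face is found in the frame.
    """
    grey = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # Use the largest detected face, assumed to be the nearest viewer.
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    return FOCAL_LENGTH_PX * FACE_WIDTH_M / float(w)
```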

As mentioned above, the Bluetooth 740, Wi-Fi 730 and ethernet 720 ports provide interfaces for peripherals such as the motion controller 758. The operation of the motion controller 758 in cooperation with the system unit 10 will now be described with reference to FIGS. 6 and 7.

FIG. 6 is a schematic diagram of a motion controller in accordance with embodiments of the present invention.

The motion controller 758 is operable to communicate wirelessly with the system unit 10 via the Bluetooth link. However, the motion controller 758 can instead be connected to a USB port, thereby also providing power by which to charge the battery of the motion controller 758. The motion controller 758 comprises motion sensors such as accelerometers and gyroscopes which are sensitive to motion in 6 degrees of freedom, corresponding to translation and rotation in each of three orthogonal axes. Consequently gestures and movements by the user of the motion controller 758 may be translated as inputs to a game or other application executed by the system unit 10. However, it will be appreciated that the motion controller 758 could be sensitive to any other suitable number of degrees of freedom as appropriate. The motion controller 758 is operable to transmit motion data generated by the motion sensors to the system unit via the Bluetooth link or any other suitable communications link.

The motion controller 758 comprises a light source 2005 and four input buttons 2010. The system unit 10 is operable to carry out image processing on images captured by the camera 756 using known techniques so as to detect a position of the light source 2005 within images captured by the camera 756. Therefore, the system unit 10 can track the position of the light source 2005 within the captured images and hence track the position of the motion controller 758 with respect to the camera 756. Accordingly, the system unit 10 is operable to generate position tracking data from the captured images which relates to the position of the light source 2005 with respect to the camera 756 or system unit 10.
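The "known techniques" for locating the light source 2005 in a captured image could be as simple as colour thresholding followed by a centroid calculation, sketched below. The HSV bounds and the minimum blob size are assumptions made for illustration; they stand in for whatever colour the light source is currently emitting.

```python
import cv2
import numpy as np

def track_light_source(frame_bgr, hsv_low, hsv_high):
    """Return the (x, y) pixel centroid of the controller's light source,
    or None if no sufficiently large blob of the expected colour is visible.

    `hsv_low`/`hsv_high` are HSV bounds for the colour currently emitted by
    the light source (assumed to be known to the system unit).
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_low), np.array(hsv_high))
    moments = cv2.moments(mask, binaryImage=True)
    if moments["m00"] < 50:      # too few matching pixels: treat as not found
        return None
    return (moments["m10"] / moments["m00"], moments["m01"] / moments["m00"])
```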

The system unit 10 is operable to combine data from the motion sensors of the motion controller 758 with the position tracking data. This allows a more accurate estimation of the position (e.g. x, y, z coordinates) and attitude (e.g. pitch, roll, yaw) of the motion controller 758 because any error in either the position tracking data or the motion data can be compensated for by data from the other data source. In other words, the motion controller 758 acts as a user control device.
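How the motion-sensor data and the position tracking data might be combined is not specified in detail here; one simple possibility is a complementary filter, sketched below purely for illustration. The gain value and the integration scheme are assumptions, not the device's actual algorithm.

```python
import numpy as np

class PositionFusion:
    """Blend camera-derived positions with accelerometer data.

    A simple complementary filter: accelerometer dead-reckoning tracks fast
    motion between camera frames, while the slower but drift-free optical
    position pulls the estimate back towards it.  The gain is illustrative.
    """
    def __init__(self, gain=0.05):
        self.gain = gain
        self.position = np.zeros(3)
        self.velocity = np.zeros(3)

    def predict(self, acceleration, dt):
        """Dead-reckon from the accelerometer between camera updates."""
        self.velocity += np.asarray(acceleration, dtype=float) * dt
        self.position += self.velocity * dt

    def correct(self, optical_position):
        """Nudge the estimate towards the camera-derived position."""
        error = np.asarray(optical_position, dtype=float) - self.position
        self.position += self.gain * error
        return self.position
```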

In embodiments, the light source 2005 comprises one or more light emitting diodes (LEDs) so as to provide a full spectrum of colours in the visible range. However, any other suitable light source such as organic light emitting diodes (OLEDs), incandescent bulbs, laser light sources, and the like may be used. In the embodiment shown in FIG. 6, the light source 2005 comprises a spherical translucent housing which surrounds the LEDs, although it will be appreciated that any other suitable shape could be used. Furthermore, the light source need not comprise the housing and the light source could comprise one or more light sources mounted so that light from the light sources can be emitted to the surroundings of the motion controller 758.

The input buttons 2010 act in a similar manner to control buttons on the game controller 751 as described above. Although the embodiment of FIG. 6 has four input buttons, it will be appreciated that the motion controller 758 could comprise any number of input buttons or indeed not have any input buttons. Furthermore, the motion controller 758 could comprise one or more analogue or digital joysticks for controlling the system unit 10.

In embodiments, the motion controller is operable to control a colour and/or intensity of light emitted by the light source 2005. Alternatively, the colour and/or intensity of the light source 2005 can be controlled by the system unit 10 by sending appropriate commands via the wireless link.

The control of the colour and/or intensity of the light source 2005 can be useful when two or more motion controllers are to be used to control the system unit 10. For example, each motion controller could be caused to have a different colour, thereby allowing the motion controllers to be distinguished from each other in the images captured by the camera 756. Furthermore, the colour of the light source can be controlled so as to contrast with an environmental background colour. This improves the reliability of the position tracking data because the light source is more easily distinguished from the background.
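One way the background-contrast idea could be realised is to estimate the dominant hue of the captured scene and drive the light source towards roughly the opposite hue. The histogram binning and the complementary-hue rule below are illustrative assumptions rather than a prescribed method.

```python
import cv2
import numpy as np

def contrasting_led_hue(frame_bgr):
    """Pick an LED hue (OpenCV scale, 0-179) far from the scene's dominant hue."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [180], [0, 180]).ravel()
    dominant_hue = int(np.argmax(hist))
    # OpenCV's hue wheel wraps at 180, so the "opposite" hue is 90 bins away.
    return (dominant_hue + 90) % 180
```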

In some embodiments, the colour and/or intensity of the light source 2005 can be controlled in response to game state or operational state. For example, a red colour could indicate that the motion controller is off or not paired with the system unit 10. In another example, the light source 2005 could be caused to turn red if a game player is shot in a game, or caused to turn blue or flash different colours if a game player casts a spell in a game.

FIG. 7 shows an example of a user 2015 using the motion controller to control the system unit 10 in accordance with embodiments of the present invention. As shown in FIG. 7, the user 2015 is using the motion controller 758 to point at the display 300. The system unit 10 is operable to analyse images captured by the camera 756 and correlate the motion data from the motion sensors to detect the relative position of the light source 2005 with respect to the camera 756 (as denoted by a vector A in FIG. 7). The cell processor 100 is therefore operable to detect the pointing direction (as indicated by a vector B) of the motion controller 758. The user 2015 can therefore use the motion controller 758 to control a game or other functionality of the system unit 10 by appropriate manipulation of the motion controller 758.

For example, the user 2015 could point the motion controller 758 at a menu item displayed as part of a menu on the display 300 and activate one of the input buttons 2010 so as to select that menu item. As another example, the user 2015 could use the motion controller 758 to point at a game object displayed on the display which they wish their game character to pick up and use within the game.

However, whilst this can provide some extra functionality, the user 2015 is limited to using the motion controller to point at objects displayed on the display 300. To provide additional functionality, in embodiments, the user 2015 can use the motion controller 758 to point at objects that are in front of an image plane of the display 300. This will now be described in more detail with reference to FIGS. 8 and 9.

FIG. 8 shows a schematic diagram of a user using the motion controller 758 to control the system unit 10 in accordance with embodiments of the present invention. In particular, FIG. 8 shows the user 2015 using the motion controller 758 to point at an object 2020. The position of the motion controller 758 with respect to the camera, as indicated by the vector A, is detected by the cell processor by analysis of the images captured by the camera 756. As mentioned above, the cell processor 100 is operable to carry out known image analysis techniques on the captured images so as to detect the location of the light source within the captured images.

The direction in which the motion controller is pointing (referred to herein as a pointing direction) is denoted by the vector B in FIG. 8. The cell processor 100 is operable to combine the motion data (such as pitch, roll, and yaw data and x, y, z position data) from the motion sensors of the motion controller 758 with the position data of the light source as generated from the captured images so as to detect the position and pointing direction of the motion controller 758.
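For illustration, the pointing direction B can be expressed as a unit vector derived from the fused pitch and yaw estimates. The axis convention below (x to the user's right, y up, z from the user towards the display) is an assumption made for the sketch; roll does not alter the direction of the pointing axis itself.

```python
import math

def pointing_direction(pitch_rad, yaw_rad):
    """Unit vector for the controller's pointing direction.

    Assumed convention: x to the user's right, y up, z from the user
    towards the display; pitch is rotation about x, yaw about y.
    pitch = yaw = 0 points straight at the display, i.e. (0, 0, 1).
    """
    return (
        math.cos(pitch_rad) * math.sin(yaw_rad),
        math.sin(pitch_rad),
        math.cos(pitch_rad) * math.cos(yaw_rad),
    )
```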

In embodiments, the object 2020 can be a real physical object. For example, the object 2020 could be a tennis ball as shown schematically in FIG. 8. As there are potentially a large number of real physical objects which a user could point at, in embodiments, the system unit 10 is operable to cause a list of possible objects for detection to be displayed on the display 300. For example, the list of possible objects for detection could comprise a ball, a face, a box and the like. In embodiments, each object is associated with a template for pattern matching by the cell processor 100. The cell processor 100 is operable to carry out known image recognition techniques and image processing techniques such as pattern matching and face detection so as to detect the position of the object 2020 with respect to the camera 756.

Whilst geometrically simple objects such as a ball or a box are likely to be less computationally expensive to detect, it will be appreciated that any suitable object could be detected subject to the necessary processing resources being available.
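The pattern matching referred to above could, for example, be ordinary template matching against the template associated with the selected object. The sketch below uses OpenCV's matchTemplate; the score threshold is an arbitrary illustrative value, and the template image is assumed to be supplied with the list of selectable objects.

```python
import cv2

def locate_object(frame_grey, template_grey, min_score=0.7):
    """Return the (x, y) pixel position of the best template match,
    or None if the match score falls below `min_score`."""
    result = cv2.matchTemplate(frame_grey, template_grey, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < min_score:
        return None
    h, w = template_grey.shape[:2]
    return (max_loc[0] + w // 2, max_loc[1] + h // 2)   # centre of the match
```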

As mentioned above, in embodiments, the cell processor 100 is operable to carry out image processing techniques on the images captured by the camera so as to detect a three-dimensional position of the object 2020 with respect to the camera 756, as denoted by the vector C in FIG. 8. The detection of the horizontal and vertical position of the object can be carried out using known image processing and pattern matching techniques to locate the object in the captured images. However, to detect the three-dimensional position of the object 2020, the distance from the camera 756 to the object 2020 also needs to be detected.

In embodiments, the system unit 10 is operable to detect the distance from the camera 756 to the object in a similar manner to that described above for detecting the distance of the viewer. For example, a so-called “z-cam” (from 3DV Systems (http://www.3dvsystems.com/)) could be used, or a stereoscopic pair of cameras could be used to detect the distance to the object 2020. However, it will be appreciated that any suitable technique for detecting the distance between the object 2020 and the camera 756 could be used.
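Once the object's image position and its distance from the camera are known, its three-dimensional position (the vector C) follows from a pinhole back-projection, sketched below. The focal length and principal point are assumed calibration values for whichever camera is actually used.

```python
import numpy as np

def back_project(pixel_xy, depth_m, focal_px=700.0, principal=(320.0, 240.0)):
    """Three-dimensional position of a point in the camera frame.

    `pixel_xy` is the point's image position and `depth_m` its distance
    along the optical axis (e.g. from a depth camera or stereo pair).
    The focal length and principal point are illustrative calibration
    values only.
    """
    u, v = pixel_xy
    x = (u - principal[0]) * depth_m / focal_px
    y = (v - principal[1]) * depth_m / focal_px
    return np.array([x, y, depth_m])
```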

In the embodiment shown in FIG. 8, the motion controller 758 acts as a pointing source with the pointing direction being in the direction of the vector B. The cell processor 100 is operable to detect whether the three-dimensional position of the object 2020 lies on a line passing through the source position in the direction of the pointing direction of the pointing source. In other words, the cell processor 100 is operable to detect whether the object 2020 corresponds to the pointing direction of the motion controller 758. Therefore, the system unit 10 can detect whether the user 2015 is pointing at the object 2020.
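The alignment test itself amounts to asking whether the object's three-dimensional position lies close to the ray that starts at the source position and runs along the pointing direction. A minimal sketch follows; the tolerance is an assumed value chosen for illustration rather than anything prescribed by this description.

```python
import numpy as np

def is_pointing_at(source_pos, pointing_dir, object_pos, tolerance_m=0.05):
    """True if `object_pos` lies (approximately) on the line through
    `source_pos` in the direction `pointing_dir`.

    The point must be in front of the source (positive projection onto
    the pointing direction) and within `tolerance_m` of the ray.
    """
    source_pos = np.asarray(source_pos, dtype=float)
    direction = np.asarray(pointing_dir, dtype=float)
    direction /= np.linalg.norm(direction)

    offset = np.asarray(object_pos, dtype=float) - source_pos
    along = float(np.dot(offset, direction))          # distance along the ray
    if along < 0.0:
        return False                                  # object is behind the source
    perpendicular = offset - along * direction        # rejection from the ray
    return float(np.linalg.norm(perpendicular)) <= tolerance_m
```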

The system unit 10 is operable to enact functions of the system unit in response to detection that the user 2015 is pointing at the object 2020. For example, the user 2015 could point at an object to select that object for further image processing or to act as an object within a game. However, it will be appreciated that the functionality of pointing at an object in front of the image plane of the display 300 could be used in any suitable way to interact with the system unit 10.

Although, in the embodiment shown in FIG. 8, the motion controller 758 acts as the pointing source, in other embodiments the motion controller 758 need not be the pointing source. This will now be described in more detail with reference to FIG. 9.

FIG. 9 shows a schematic diagram of the user 2015 using the motion controller 758 to control the system unit 10 in accordance with embodiments of the present invention. However, in the embodiment illustrated with respect to FIG. 9, the system unit 10 is operable to cause the display 300 to display a game character 2025, which acts as the pointing source. In other words, in some embodiments, the pointing source corresponds to a position of the game character within a game executed by the system unit 10. However, in other embodiments, the pointing source corresponds to the position of the motion controller 758. In embodiments, the pointing source has an associated pointing direction indicative of a direction in which the pointing source is pointing.

The cell processor 100 is operable to determine the position of the game character 2025 with respect to the camera 756. As illustrated in FIG. 9, the position of the game character 2025 with respect to the camera is illustrated by the vector A′. To achieve this functionality, the position of the camera 756 with respect to the display 300 can be calibrated by a user via a suitable calibration user interface executed by the system unit 10 and displayed on the display 300. Alternatively, the user 2015 could be instructed, by a suitable message displayed on the display 300 by the system unit 10, to place the camera 756 at a predetermined position with respect to the display 300. As part of the calibration or setup process, a user may also input information regarding the make and model of the display, screen size, display resolution and the like. Alternatively, this information may be acquired automatically by the system unit 10 via a suitable communication link between the display 300 and the system unit 10. However, it will be appreciated that any suitable method for determining the position of the camera with respect to the display 300 could be used.

As shown in FIG. 9, in some embodiments, the pointing source corresponds to the position of the game character 2025. In these embodiments, the game character is the source of the pointing direction as indicated by the vector B′. The pointing direction of the motion controller 758 (as indicated by a vector B″ with the motion controller 758 as the source of the vector B″) is associated with the pointing direction B′ of the game character 2025. In embodiments, the cell processor is operable to associate the pointing direction B′ of the game character 2025 with the pointing direction B″ of the motion controller 758 so that they are parallel with each other. However, any other appropriate association between the pointing direction of the game character 2025 and the pointing direction of the motion controller 758 could be used.

More generally, in embodiments, the pointing direction of the motion controller 758 (user control device) is associated with the pointing direction of the pointing source so that a change in the pointing direction of the motion controller 758 (user control device) causes a change in the pointing direction of the pointing source. In embodiments, this applies both when the motion controller acts as the pointing source and when a computer generated object such as a game character acts as the pointing source. In other words, in embodiments, movement of the motion controller 758 such that the pointing direction of the motion controller 758 changes causes a change in the pointing direction of the pointing source.

As mentioned above, the system unit 10 is also operable to detect the position of the object 2020 with respect to the camera 756. Therefore, the three-dimensional position of the object 2020 with respect to the camera 756 can be detected in a similar manner to that described above with reference to FIG. 8. The position of the object 2020 is denoted by the vector C′ in FIG. 9.

As mentioned above, the pointing direction of the pointing source is associated with the pointing direction of the motion controller 758. In the embodiment illustrated in FIG. 9, the pointing direction B′ of the game character 2025 is associated with the pointing direction B″ of the motion controller 758 so that B′ is parallel to B″. For example, the user 2015 could manipulate the motion controller 758 so that the game character 2025 points at the object 2020. The cell processor 100 is operable to detect whether the three-dimensional position of the object (as defined by C′) lies on a line passing through the source position in the direction of the pointing direction of the pointing source (game character 2025).
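For the FIG. 9 arrangement this reduces to running the same geometric test with the game character as the pointing source, its direction simply kept parallel to that of the motion controller. The helper below is hypothetical, with an assumed tolerance, and restates the ray test from the earlier sketch so that it is self-contained.

```python
import numpy as np

def character_points_at_object(character_pos, controller_dir, object_pos, tolerance_m=0.05):
    """FIG. 9 case (illustrative): the character's pointing direction B' is
    kept parallel to the controller's direction B'', and the alignment test
    is applied from the character's position A' rather than the controller's."""
    character_dir = np.asarray(controller_dir, dtype=float)
    character_dir /= np.linalg.norm(character_dir)          # B' parallel to B''

    offset = np.asarray(object_pos, dtype=float) - np.asarray(character_pos, dtype=float)
    along = float(np.dot(offset, character_dir))
    if along < 0.0:                                          # object behind the character
        return False
    perpendicular = offset - along * character_dir
    return float(np.linalg.norm(perpendicular)) <= tolerance_m
```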

Therefore, in embodiments, the cell processor 100 can detect whether the game character 2025 is pointing at the object 2020. Additionally, in some embodiments, the cell processor 100 is operable to generate the game character 2025 so that the pointing direction B′ is associated with the pointing direction B″ of the motion controller 758 so that the pointing direction B′ moves with changes in the pointing direction B″. Therefore, the user 2015 can control the pointing direction B′ of the game character 2025 by appropriate manipulation and control of the motion controller 758. In other words, in some embodiments, the pointing source corresponds to a position of a game character within a game executed by the entertainment device. However, it will be appreciated that the pointing source could correspond to a position of any suitable computer generated object generated by the system unit 10 (entertainment device).

In some embodiments, the game character 2025 is a two-dimensional image displayed on the display 300 by the system unit 10. However, using a two-dimensional image can mean that it is difficult to convey the pointing direction B′ of the game character to the user 2015. For example, it may be difficult to convey to the user 2015 that the game character 2025 is pointing at the object 2020 if the game character is displayed as a two-dimensional image. Therefore, in some embodiments, the cell processor 100 is operable to generate the game character 2025 so that when displayed on the display 300 the game character 2025 appears as a three-dimensional image. In embodiments, this is achieved using the 3D viewing techniques described above with reference to FIGS. 4A, 4B, 5A, 5B, and 5C, although it will be appreciated that any other suitable technique could be used.

Furthermore, where the game character 2025 is displayed as a 3D representation, in some embodiments the three-dimensional position of the game character 2025 may be generated by the cell processor 100 so that the game character appears at an image plane which is different from the image plane of the display. In this case, the cell processor 100 is operable to determine the position A′ of the game character by calculating the apparent position of the game character with respect to the display 300. This technique will be described in more detail below.

In the embodiments described above with reference to FIGS. 8 and 9, the object 2020 was treated as a real object. However, in some embodiments, the object 2020 is a computer generated object. For example, the cell processor 100 could generate the object 2020 as a stereo pair of images such that the object appears in front of the display 300. However, the position of the object 2020 should then be determined so that a user can use the motion controller 758 to point to the object 2020. A way in which this may be achieved in accordance with embodiments of the present invention will now be described with reference to FIGS. 10 and 11.

FIG. 10 shows a schematic plan view of the user, seen from above, viewing the display at two different horizontal positions. The cell processor 100 is operable to cause the display 300 to display a stereo pair of images corresponding to the object 2020 so that, when viewed in a suitable manner (for example by using polarised glasses), the user 2015 will perceive the object 2020 in front of the display 300. In FIG. 10, the user 2015 is illustrated at a first position (position 1) and a second position (position 2), which is to the right of position 1. When the user 2015 is at position 1, the user should perceive the object 2020 at a first object position 2035. However, when the user is at position 2, the user should perceive the object 2020 at a second object position 2040. Therefore, it will be understood from FIGS. 5A, 5B, 5C and 10 that the three-dimensional position at which the computer generated object will appear to the user depends on the position of the user with respect to the display 300.

Therefore, in embodiments, the system is operable to detect the position of the user's face with respect to the display. In embodiments, the cell processor 100 is operable to carry out known face recognition techniques on the images captured by the camera 756 so as to detect the position of the user's face with respect to the camera 756. Provided the position of the camera with respect to the display 300 is known, the cell processor can calculate the position of the user's face with respect to the display. Furthermore, as the position at which the object will appear to the user depends on the distance from the user to the display as described above, in embodiments, the system unit 10 is operable to detect the distance from the display 300 to the user 2015 as described above.

To determine the position of the object with respect to the display, the object is assumed to lie on a line (as indicated by the dotted line 2045) between a midpoint 2050 between the stereo pair of images 2030 and a midpoint 2055 between the user's eyes. Additionally, the distance between the user's eyes can be estimated by the cell processor 100, either by analysis of the captured images or by using the average interpupillary distance (distance between the eyes) of adult humans. However, any other suitable technique for estimating the interpupillary distance could be used.

The cell processor 100 is operable to calculate lines between the stereo pair of images 2030 and the user's eyes, as indicated by the dashed lines 2060 and 2065. As the distance between the stereo pair of images 2030 can be calculated from known parameters of the display such as display resolution, screen size and the like, the apparent position of the computer generated object in the horizontal and vertical directions can be estimated from the intersection of the lines 2045, 2060 and 2065. For example, the lines 2045, 2060 and 2065 intersect at the first object position 2035. Therefore, the computer generated object can reasonably be assumed to be perceived by the user at the first object position 2035. Similarly, when the user is at position 2, the object can be assumed to appear to the user at the second object position 2040. More generally, the apparent position of the object in the horizontal and vertical directions can be calculated by the cell processor for any stereo pair of images in a similar manner to that described above for the first object position 2035.
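
By way of illustration only, the following Python sketch shows one way of estimating the apparent horizontal position and depth of the computer generated object from the crossing of the eye-to-image lines discussed above with reference to FIG. 10. The display plane is taken as z = 0, each eye and image position is given as an (x, z) pair, and the pairing of each eye with its corresponding image of the stereo pair is an assumption of the sketch rather than a feature of the embodiments; the names are illustrative.

    def line_intersection_2d(p1, p2, p3, p4):
        # Intersection of the line through p1 and p2 with the line through
        # p3 and p4, each point given as an (x, z) pair.
        x1, z1 = p1; x2, z2 = p2; x3, z3 = p3; x4, z4 = p4
        denom = (x1 - x2) * (z3 - z4) - (z1 - z2) * (x3 - x4)
        if abs(denom) < 1e-9:
            return None  # lines are (nearly) parallel
        t = ((x1 - x3) * (z3 - z4) - (z1 - z3) * (x3 - x4)) / denom
        return (x1 + t * (x2 - x1), z1 + t * (z2 - z1))

    def apparent_position_xz(left_eye, right_eye, left_image_x, right_image_x):
        # Apparent (x, z) of the virtual object, taken as the intersection of
        # the eye-to-image lines corresponding to 2060 and 2065
        # (display plane at z = 0).
        return line_intersection_2d(left_eye, (left_image_x, 0.0),
                                    right_eye, (right_image_x, 0.0))

For example, with the eyes 0.06 m apart at a distance of 1 m from the display and a crossed image separation of 0.06 m, this sketch yields an apparent position 0.5 m in front of the display.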

In some embodiments, the line 2045 is not calculated so as to save processing resources. However, the use of three intersecting lines to calculate the apparent position of the object can improve the accuracy of the calculation.

Here, horizontal (x) and vertical (y) are taken to mean directions in the image plane of the display, and the depth (z) is taken to mean the distance from the display 300. However, it will be appreciated that any other suitable coordinate system could be used to define the apparent position of the object, the user, the motion controller, and the like.

FIG. 10 illustrates how the horizontal (x) position of the object and the depth (z) of the object from the display may be determined. However, the vertical (y) position also needs to be determined. A technique by which this may be achieved will now be described with reference to FIG. 11.

FIG. 11 shows a schematic diagram of the user 2015 using the motion controller 758 to point at a computer generated object, as seen from one side of the display 300. As mentioned above, the system unit 10 is operable to detect the distance of the user 2015 from the display 300. Additionally, the cell processor 100 is operable to detect the position of the user's eyes 2090 with respect to the camera 756 using known eye detection techniques (in some cases in combination with known face recognition techniques). In embodiments, the cell processor 100 is operable to detect, by analysis of the captured images and depth data generated by the distance detector, a vector D (as illustrated by dashed line 2070 in FIG. 11) which indicates a position of the user's eyes 2090 with respect to the camera 756. As mentioned above, the position of the camera 756 with respect to the display 300 can be input by a suitable calibration process or preset in any other suitable way. Therefore, the cell processor 100 is operable to detect the distance between the user's eyes 2090 and the display 300 using appropriate trigonometry calculations.

As shown in FIG. 11, the system unit 10 is operable to cause computer generated objects (such as the stereo pair of images 2030 and a stereo pair of images 2075) to be displayed on the display 300. As FIG. 11 shows a side view of the display, it will be appreciated that the stereo pair 2030 and the stereo pair 2075 will appear as one image when viewed from the side because they share the same horizontal axis, although each stereo pair in fact comprises two images. When viewed in an appropriate manner by the user 2015, the stereo pair of images 2075 should cause the user 2015 to perceive a virtual object at a position 2080 in front of the display 300.

The vertical position of a computer generated object is assumed to lie on a line between the user's eyes 2090 and the position of the stereo pair of images corresponding to that object. For example, the position 2080 lies on a line (indicated by the dashed line 2095) between the user's eyes 2090 and the stereo pair of images 2075. Additionally, the position 2035 lies on a line (indicated by dashed line 3000) between the user's eyes 2090 and the stereo pair of images 2030. As mentioned above, the horizontal position (x) and depth (z) can be determined using the technique described above with reference to FIG. 10. In embodiments, the system unit 10 is operable to calculate the vertical position (y) of a computer generated object from the relative position of the user's eyes 2090 with respect to the display 300 and the position of the stereo pairs of images as displayed on the display 300 using appropriate trigonometric calculations.
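
By way of illustration only, and continuing the coordinate convention of the previous sketch (display plane at z = 0, with z measured towards the user), the vertical position of the virtual object can be interpolated along the line between the screen position of the stereo pair and the user's eyes once the depth has been estimated as described with reference to FIG. 10. The parameter names are illustrative.

    def apparent_y(eye_y, eye_z, image_y, object_z):
        # eye_y, eye_z: height and distance of the user's eyes 2090
        # image_y: vertical position of the stereo pair on the display
        # object_z: depth of the virtual object estimated as in FIG. 10
        # The object is assumed to lie on the line (e.g. 2095 or 3000)
        # between the stereo pair and the user's eyes.
        s = object_z / eye_z  # fraction of the way from the display to the eyes
        return image_y + s * (eye_y - image_y)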

As mentioned above, the system unit 10 is operable to detect whether the three-dimensional position of an object lies on a line passing through the source position in the direction of the pointing direction of the pointing source. In the embodiment illustrated in FIG. 11, the motion controller 758 is the pointing source, and the pointing direction of the motion controller 758 (as indicated by a dotted line 2085 in FIG. 11) passes through the three-dimensional position 2035. Therefore, the system unit 10 can detect that the user 2015 is pointing at the position 2035 rather than the position 2080.

In other words, by combining the distance data from the camera 756 with the positions of the motion controller 758 and the computer generated objects (such as the position 2035 and the position 2080), the cell processor 100 can detect which object is being pointed at by the user.

Accordingly, embodiments of the present invention provide an enhanced technique for allowing a user to interact with an entertainment device. For example, in the case of a computer generated object, the user could use the motion controller 758 to point at the object to select the object in a game, or to pick up the object in a game. In the case of a real object, the user could select a real object for further image processing or as a real object for augmented reality applications. However, it will be appreciated that the embodiments described herein could enable any suitable functionality in which a user can point at an object (computer generated or a real physical object) to be achieved.

In some embodiments, the cell processor 100 is operable to generate a three-dimensional image, such as a virtual image feature, so that the three-dimensional image appears to be associated with the source position and pointing direction of the motion controller 758. This will now be described in more detail with reference to FIGS. 12 and 13.

FIG. 12 is a schematic diagram of a three-dimensional (3D) image display region shown with respect to the display 300 seen from above. The 3D image display region 3010 is a region in front of the display 300 in which a three-dimensional image (for example a stereo pair of images) can be perceived as three-dimensional by the user 2015. The 3D image display region 3010 occurs where the viewpoint of both of the user's eyes 2090 overlap. The origin of the 3D image display region can also be understood by referring to FIGS. 5B and 5C.

Outside of the 3D image display region 3010, only one of the left or the right image of a stereo pair of images is likely to be seen by one eye. Therefore, outside the 3D image display region, the 3D effect will not be perceived correctly because only one image of the stereo pair is likely to be viewable by the user's eyes 2090. This can cause the user 2015 to experience headaches and/or nausea.

In embodiments, the system unit 10 is operable to detect the 3D image display region by detecting the position of the user's eyes 2090 with respect to the display. In embodiments, the cell processor 100 is operable to analyse the images captured by the camera 756 so as to detect the horizontal (x) and vertical (y) positions of the user's eyes with respect to the display 300. Additionally, as mentioned above, the system unit 10 is operable to detect the distance (z) of the user from the display 300. Typically, in the x-z plane (i.e. a plane normal to the image plane of the display when viewed from above), the 3D image display region corresponds to a triangular region bounded by: a line 3015 from the user's right eye 3020 to a left-hand edge 3025 of a display area of the display 300; a line 3030 from the user's left eye 3035 to a right-hand edge 3040 of the display area of the display 300; and a line 3045 corresponding to an image plane of the display 300.

In the y-z plane (i.e. a plane normal to the image plane of the display when viewed from the side), the 3D image display region corresponds to a triangle from the user's eyes 2090 to the display 300. This is illustrated in more detail with respect to FIG. 13. FIG. 13 shows a schematic diagram of the 3D image display region viewed from the side. The 3D image display region 3010 is bounded by: a line 3050 from the user's eyes 2090 to a lower edge 3055 of the display area of the display 300; a line 3060 from the user's eyes 2090 to an upper edge 3065 of the display area of the display 300; and a line 3070 corresponding to the image plane of the display 300. The cell processor 100 can therefore determine the 3D image display region based on the position of the user's eyes 2090 with respect to the display 300.

In other words, when considered in three dimensions, the 3D image display region 3010 can be thought of as an irregular four-sided pyramid with a base corresponding to the display area of the display 300 and an apex located between the display and the user's eyes 2090. However, it will be appreciated that any other suitable 3D image display region could be used.
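
By way of illustration only, the following Python sketch tests whether a three-dimensional point falls inside a 3D image display region modelled, as above, as a pyramid whose base is the display area and whose bounding lines run to the user's eyes (lines 3015, 3030, 3050 and 3060). The coordinate convention (display plane at z = 0, z towards the user) and the parameter names are illustrative.

    def in_3d_display_region(point, eyes, display):
        # point: (x, y, z) position to be tested
        # eyes: (left_eye_x, right_eye_x, eye_y, eye_z) of the user's eyes 2090
        # display: (x_left, x_right, y_bottom, y_top) edges of the display area
        x, y, z = point
        left_eye_x, right_eye_x, eye_y, eye_z = eyes
        x_left, x_right, y_bottom, y_top = display
        if z < 0.0 or z > eye_z:
            return False
        s = z / eye_z
        # Horizontal bounds: lines 3015 and 3030 evaluated at depth z.
        x_min = x_left + s * (right_eye_x - x_left)
        x_max = x_right + s * (left_eye_x - x_right)
        # Vertical bounds: lines 3050 and 3060 evaluated at depth z.
        y_min = y_bottom + s * (eye_y - y_bottom)
        y_max = y_top + s * (eye_y - y_top)
        return x_min <= x <= x_max and y_min <= y <= y_max

Beyond the apex of the pyramid the horizontal bounds cross over, so the test naturally fails for points closer to the user than the apex.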

In some embodiments, the system unit 10 is operable to generate the three-dimensional image so that it appears to lie along the pointing direction B of the motion controller 758. This is illustrated with respect to FIG. 13.

As shown in FIG. 13, the system unit 10 is operable to generate a 3D image 4000. In the example shown in FIG. 13, the 3D image corresponds to flames which are caused to appear to come from the motion controller 758 along the pointing direction B. The user 2015 can therefore manipulate the motion controller 758 to control where the 3D image appears to point. Therefore, the motion controller 758 could act as a flame thrower within a game, with the flames appearing to come from the motion controller. However, any other suitable functionality could be realised, such as tracer bullets, a laser pointer and the like. By generating the 3D image 4000 so that it appears to emanate from the motion controller 758, for example to provide an illusion that the user 2015 is pointing or firing a weapon into the display 300, embodiments of the present invention can create a more powerful and immersive 3D experience for the user 2015.

In embodiments, the cell processor 100 is operable to generate the 3D image so that it is positioned within the 3D image display region 3010. If the motion controller 758 is detected to be positioned outside the 3D image display region 3010, the cell processor 100 is operable to generate the 3D image 4000 so that the 3D image 4000 only appears within the 3D image display region 3010 rather than outside. This helps prevent the user 2015 from experiencing headaches and/or nausea.
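
By way of illustration only, and reusing in_3d_display_region from the sketch above, an effect such as the flames 4000 could be restricted to the part of the pointing ray that falls inside the 3D image display region 3010. The controller position, pointing direction, step size and sample count used here are illustrative only.

    def effect_points_along_ray(controller_pos, pointing_dir, eyes, display,
                                step=0.05, max_samples=200):
        # Sample points from the motion controller outwards along the pointing
        # direction B, keeping only those inside the 3D image display region.
        points = []
        for i in range(max_samples):
            p = tuple(c + i * step * d
                      for c, d in zip(controller_pos, pointing_dir))
            if in_3d_display_region(p, eyes, display):
                points.append(p)
        return points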

The use of a 3D image display region can also be important when rendering 3D images for 3D games which use pseudo-realistic physics engines. Typically, such video games may use realistic physics models to predict the motion of objects within a game such as debris from an explosion. The motion of objects within physics based video games or physics based virtual reality simulations typically allows 3D objects to travel unrestricted through the virtual environment. However, this may mean that objects may quickly move out of the 3D image display region, thus degrading the 3D effect and possibly causing the user to experience headaches and/or nausea. Therefore, to address this problem, in some embodiments, the system unit 10 is operable to control the appearance of the 3D image in dependence upon the relative position of the 3D image with respect to the 3D image display region. This will now be described in more detail with reference to FIGS. 14 and 15.

FIG. 14 shows a schematic view of control of the appearance of a 3D image in accordance with embodiments of the present invention. In particular, FIG. 14 shows the user's eyes 2090 and the display 300. As mentioned above, the system unit 10 is operable to generate a 3D image. In some embodiments, the system unit is operable to generate the 3D image so that, over a sequence of image frames, the 3D image appears to move towards the user 2015.

For example, a 3D object may be caused by a so-called physics engine to move along a trajectory 4010 as indicated in FIG. 14. However, this may cause the object to leave the 3D image display region 3010 over relatively few image frames. Therefore, in some embodiments, the cell processor 100 is operable to control the apparent motion of the 3D image so that the 3D image appears to remain within the 3D image display region 3010.

This is illustrated schematically in FIG. 14. At a time t1, the 3D image appears at a position 4000a. At a time t2, the 3D image appears at a position 4000b. At a time t3, the 3D image appears at a position 4000c. At a time t4, the 3D image appears at a position 4000d. Here, t1, t2, t3 and t4 refer to image frames of an image sequence. Typically, the frame period is 1/25 second (a frame rate of 25 frames per second), although it will be appreciated that any suitable frame rate could be used.

Therefore, over t1 to t4, the 3D image will appear to move towards the user 2015 but remain within the 3D image display region 3010. In other words, in embodiments, the cell processor 100 is operable to steer the object so that it follows a trajectory 4015 and remains within the 3D image display region 3010.

In contrast, referring to FIG. 14, if the trajectory of the object followed the trajectory 4010, the object would leave the 3D image display region 3010 after the time t2. Therefore, by controlling the path or trajectory of the object so that the object remains within the 3D image display region, embodiments of the present invention can reduce the likelihood that the user will experience any nausea and/or headaches.
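
By way of illustration only, and again reusing in_3d_display_region, the following sketch shows one possible steering scheme: if the position proposed by the physics engine (trajectory 4010) would fall outside the region 3010, the displayed position is blended back towards the previous in-region position so that the displayed trajectory (4015) stays inside. The blending scheme and parameter names are illustrative; any other suitable steering scheme could be used.

    def steer_within_region(prev_pos, physics_pos, eyes, display, steps=10):
        # prev_pos: last displayed (in-region) position of the object
        # physics_pos: next position proposed by the physics engine
        if in_3d_display_region(physics_pos, eyes, display):
            return physics_pos
        for i in range(1, steps + 1):
            t = i / steps
            candidate = tuple(p + t * (q - p)
                              for p, q in zip(physics_pos, prev_pos))
            if in_3d_display_region(candidate, eyes, display):
                return candidate
        return prev_pos  # fall back to the last known in-region position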

In some embodiments, the cell processor 100 is operable to control the appearance of the 3D image so that the 3D image appears to fade when a position of the 3D image is within a threshold distance of an edge of the 3D image display region 3010. This is illustrated in FIG. 14.

As shown in FIG. 14, the physics engine of a game executed by the system unit 10 may cause the object to follow a trajectory 4020. For example, the cell processor 100 may cause the 3D image to appear at a position 4000e at a time t5, at a position 4000f at a time t6, at a position 4000g at a time t7, and at a position 4000h at a time t8. In embodiments, the cell processor 100 is operable to detect when the object is within a threshold distance 4050 of an edge (such as edge 4055) of the 3D image display region 3010.

If the object is within the threshold distance 4050 of the edge 4055 of the 3D image display region 3010, then the cell processor causes the 3D image to appear to fade at the positions 4000f, 4000g, and 4000h over the corresponding image frame times t6, t7, and t8.

However, it will be appreciated that the 3D image could be caused to fade over any suitable number of image frames and that any suitable threshold distance could be used. It should be appreciated that the term “fade” can be taken to mean become more transparent, become less bright, or any other suitable image processing operation that causes the object to become less apparent over a sequence of image frames.
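
By way of illustration only, the fade described above could be driven by an opacity value derived from the object's distance to the edge of the 3D image display region, using the same bounds as in_3d_display_region: the opacity is 1 well inside the region and falls towards 0 within the threshold distance 4050 of an edge. The distance measure and parameter names are simplifying assumptions of the sketch.

    def fade_alpha(point, eyes, display, threshold):
        # Returns an opacity in [0, 1] for the 3D image at the given point.
        x, y, z = point
        left_eye_x, right_eye_x, eye_y, eye_z = eyes
        x_left, x_right, y_bottom, y_top = display
        if z <= 0.0 or z >= eye_z:
            return 0.0
        s = z / eye_z
        x_min = x_left + s * (right_eye_x - x_left)
        x_max = x_right + s * (left_eye_x - x_right)
        y_min = y_bottom + s * (eye_y - y_bottom)
        y_max = y_top + s * (eye_y - y_top)
        # Smallest slack against any bound approximates the distance to an edge.
        slack = min(x - x_min, x_max - x, y - y_min, y_max - y, z, eye_z - z)
        if slack <= 0.0:
            return 0.0
        return min(1.0, slack / threshold)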

In some embodiments, virtual objects which have a predefined relationship to the virtual environment (such as trees at the side of a road) or choreographed objects are caused to fade. However, in some embodiments, free-moving objects, such as virtual objects whose position is generated by a physics based engine (e.g. explosion debris), can be steered as described above. Accordingly, the likelihood that the user experiences nausea and/or headaches is reduced.

Additionally, the use of a 3D image display region can help improve the display of icons for controlling the system unit 10. This will now be described with reference to FIG. 15.

FIG. 15 shows a schematic diagram of a 3D icon field in accordance with embodiments of the present invention. In particular, FIG. 15 shows a plurality of icons 5005, 5010, 5015, 5020, 5025, 5030, and 5035. The cell processor 100, together with the other functional units of the system unit 10, is operable to generate one or more icons such as those illustrated in FIG. 15. In embodiments, each icon is associated with a control function of the system unit 10. The plurality of icons 5005 to 5035 comprise one or more subgroups of icons. As shown in FIG. 15, a first subgroup comprises the icons 5005, 5010, 5015 and 5020. A second subgroup comprises the icons 5025 and 5030. A third subgroup comprises the icon 5035. However, it will be appreciated that any number of icons could be used and that each subgroup could comprise one or more icons as appropriate.

In embodiments, the cell processor 100 is operable to generate the subgroups so that each respective subgroup appears located at a respective distance from the display. As illustrated in FIG. 15, the first subgroup is caused to appear at a distance f from the display 300. The second subgroup is caused to appear at a distance g from the display 300. The third subgroup is caused to appear at a distance h from the display 300.

To try to ensure that the icons are positioned within the 3D image display region 3010, so as to preserve the 3D effect and reduce nausea and/or headaches on the part of the user, in embodiments the cell processor is operable to control the number of icons in each subgroup in dependence upon the relative position of each subgroup with respect to the 3D image display region 3010. For example, at the distance h from the display there is only space for one icon to be displayed, whereas at the distance f four icons can be displayed without having to display some or all of one or more icons outside the 3D image display region 3010. In other words, the cell processor 100 is operable to dynamically adjust the number of icons in each subgroup in dependence on the relative position of the subgroup with respect to the 3D image display region 3010.
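
By way of illustration only, the number of icons that can be displayed in a subgroup at a given apparent distance from the display could be derived from the width of the 3D image display region at that distance, using the same bounds as in_3d_display_region. The icon width, and the fact that only the horizontal extent is considered, are simplifying assumptions of the sketch.

    def icons_per_subgroup(subgroup_depth, icon_width, eyes, display):
        # How many icons of a given apparent width fit side by side within the
        # 3D image display region at the subgroup's distance (e.g. f, g or h)
        # from the display.
        left_eye_x, right_eye_x, eye_y, eye_z = eyes
        x_left, x_right, y_bottom, y_top = display
        if subgroup_depth <= 0.0 or subgroup_depth >= eye_z:
            return 0
        s = subgroup_depth / eye_z
        x_min = x_left + s * (right_eye_x - x_left)
        x_max = x_right + s * (left_eye_x - x_right)
        if x_max <= x_min:
            return 0
        return int((x_max - x_min) // icon_width)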

In some embodiments, more frequently accessed icons can be displayed closer to the user whilst less frequently used icons can be displayed closer to the display 300. However, any other suitable method for ordering the icons could be used. Additionally, in some embodiments, the icons can be generated by the cell processor 100 so that they appear semi-transparent so that the user can see icons which may appear to be behind other icons. However, any other suitable method of displaying the icons could be used.

A method of detecting where the motion controller is pointing will now be described with reference to FIG. 16.

FIG. 16 is a flow chart of an entertainment method in accordance with embodiments of the present invention.

At a step s10, the cell processor 100 causes an image (such as a control screen, game image and the like) to be displayed to the user 2015.

At a step s12, the system unit 10 detects a three-dimensional position of an object in front of an image plane of the display. For example, the object could be a real object or a computer generated object as described above with reference to FIGS. 8 to 11.

Then, at a step s14, the system unit 10 detects a source position of a pointing source with respect to the display using techniques as described above with reference to FIGS. 6 to 11. As mentioned above, the pointing source has an associated pointing direction indicative of a direction in which the pointing source is pointing.

At a step s16, the system unit detects a pointing direction of a user control device (such as the motion controller 758). As mentioned above, the pointing direction of the user control device (e.g. motion controller 758) is associated with the pointing direction of the pointing source so that a change in the pointing direction of the user control device causes a change in the pointing direction of the pointing source.

Then, at a step s18, the system unit 10 detects whether the three-dimensional position lies on a line passing through the source position in the direction of the pointing direction of the pointing source. In other words, the system unit 10 can detect whether the user is using the motion controller 758 to point at a real physical object and/or a computer generated object in front of the display.

A method of associating a three-dimensional image with the motion controller will now be described with reference to FIG. 17.

FIG. 17 is a flow chart of an entertainment method in accordance with embodiments of the present invention.

At a step s20, the system unit 10 generates a 3D image such as the 3D image 4000 described with reference to FIG. 13.

At a step s22, the system unit 10 detects, with respect to the display 300, the 3D image display region 3010 in which the 3D image can be perceived as three-dimensional by the user 2015 using the techniques described above.

Then, at a step s24, the system unit detects a source position and pointing direction of a user control device (such as motion controller 758) for controlling the entertainment device (such as system unit 10). The source position is indicative of a position of the user control device (such as motion controller 758) with respect to the display 300, and the pointing direction is indicative of a direction in which the user control device is pointing with respect to the display 300. In embodiments, the source position and pointing direction of the user control device are detected using the techniques described above. However, it will be appreciated that any other suitable method for detecting the source position and pointing direction of the user control device could be used.

At a step s26, the system unit 10 generates the 3D image within the 3D image display region so that the 3D image appears to be associated with the source position and pointing direction of the user control device. In embodiments, the system unit 10 generates the 3D image so that it appears to be associated with the source position and pointing direction of the user control device in a similar manner to that described above with reference to FIG. 13. However, it will be appreciated that any other suitable technique could be used.

A method for controlling the appearance of a 3D image in dependence upon its relative position with respect to a 3D image display region will now be described with reference to FIG. 18.

FIG. 18 is a flow chart of an entertainment method in accordance with embodiments of the present invention.

At a step s30, the system unit 10 generates a 3D image, such as a 3D image described above with reference to FIGS. 14 and 15, for displaying on a display (such as the display 300).

Then, at a step s32, the system unit 10 detects the 3D image display region in a similar manner to that described above with reference to FIGS. 12 and 13. As mentioned above, the 3D image display region is an image display region in which the three-dimensional image can be perceived as three-dimensional by a viewer viewing the display.

At a step s34, the system unit 10 detects the apparent position of the 3D image with respect to the display. In embodiments, the position of the 3D image is detected in a similar manner to the techniques described above for determining the position of the computer generated object (described with reference to FIGS. 10 and 11). However, it will be appreciated that any other suitable method for detecting the apparent position of the 3D image could be used.

At a step s36, the system unit 10 controls the appearance of the 3D image in dependence upon the relative position of the three-dimensional image with respect to the image display region. In embodiments, the appearance of the 3D image can be controlled in a similar manner to that described above with respect to FIGS. 14 and 15, although it will be appreciated that any other suitable technique could be used.

It will be appreciated that any or all of the above techniques could be combined as appropriate. For example, a user could use the motion controller to point at, and select, an icon displayed on the display in a similar manner to that described with reference to FIG. 15. However, any appropriate functionality achieved by appropriate combination of the above embodiments could be used.

Additionally, although the use of one motion controller 758 has been described above, any number of motion controllers could be used to implement the functionality of the embodiments described above. For example, a first user could use a first motion controller to point at a first object and a second user could use a second motion controller to point at a second object.

Additionally, any real physical object could be pointed to. In some embodiments, the system unit 10 is operable to implement an augmented reality application for an augmented reality environment using known techniques. In these embodiments, the real physical object to which a user can point using the motion controller 758 could be an augmented reality marker, although any other suitable real physical object could be used.

The various methods set out above may be implemented by adaptation of an existing entertainment device, for example by using a computer program product comprising processor implementable instructions stored on a data carrier (a storage medium) such as a floppy disk, optical disk, hard disk, PROM, RAM, flash memory or any combination of these or other storage media, or transmitted via data signals on a network such as an Ethernet, a wireless network, the Internet, or any combination of these or other networks, or realised in hardware as an ASIC (application specific integrated circuit) or an FPGA (field programmable gate array) or other configurable circuit suitable for use in adapting the existing equivalent device.

In conclusion, although a variety of embodiments have been described herein, these are provided by way of example only, and many variations and modifications on such embodiments will be apparent to the skilled person and fall within the scope of the present invention, which is defined by the appended claims and their equivalents.

Although illustrative embodiments of the invention have been described in detail herein with reference to the accompanying drawings, it is to be understood that the invention is not limited to those precise embodiments, and that various changes and modifications can be effected therein by one skilled in the art without departing from the scope and spirit of the invention as defined by the appended claims.

Claims

1. An entertainment device comprising:

a display to display an image to a user;
an object position detector to detect a three-dimensional position of an object in front of an image plane of the display;
a source position detector to detect a source position of a pointing source with respect to the display, the pointing source having an associated pointing direction indicative of a direction in which the pointing source is pointing;
a direction detector to detect a pointing direction of a user control device, the pointing direction of the user control device being associated with the pointing direction of the pointing source so that a change in the pointing direction of the user control device causes a change in the pointing direction of the pointing source; and
an alignment detector to detect whether the three-dimensional position lies on a line passing through the source position in the direction of the pointing direction of the pointing source.

2. A device according to claim 1, in which:

the object is a computer generated object;
the image comprises the computer generated object; and
the entertainment device is operable to generate the computer generated object so that, when displayed on the display, the computer generated object appears to be positioned at the three-dimensional position.

3. A device according to claim 2, comprising:

a user position detector to detect a position of the user with respect to the display; and
a computer generated object position detector to detect an apparent position of the computer generated object with respect to the display.

4. A device according to claim 1, in which:

the object is a real object.

5. A device according to claim 4, in which the object position detector comprises:

a camera operable to capture one or more images of the real object; and
a processor operable to carry out image processing on the captured images so as to detect the three-dimensional position of the real object.

6. A device according to claim 5, in which the camera comprises a distance detector to detect a relative distance between the camera and the real object.

7. A device according to claim 1, in which the pointing source corresponds to a position of a game character within a game executed by the entertainment device.

8. A device according to claim 1, in which the pointing source corresponds to the position of the user control device.

9. A device according to claim 8, comprising:

a generator to generate a three-dimensional image to be displayed on a display;
an image region detector to detect, with respect to the display, a three-dimensional image display region in which the three-dimensional image can be perceived as three-dimensional by a viewer viewing the display;
in which the generator is operable to generate the three-dimensional image within the image display region so that the three-dimensional image appears to be associated with the position and the pointing direction of the user control device.

10. An entertainment system comprising:

an object position detector to detect a three-dimensional position of an object in front of an image plane of a display;
a source position detector to detect a source position of a pointing source with respect to the display, the pointing source having an associated pointing direction indicative of a direction in which the pointing source is pointing;
a direction detector to detect a pointing direction of a user control device, the pointing direction of the user control device being associated with the pointing direction of the pointing source so that a change in the pointing direction of the user control device causes a change in the pointing direction of the pointing source; and
an alignment detector to detect whether the three-dimensional position lies on a line passing through the source position in the direction of the pointing direction of the pointing source.

11. An entertainment method comprising:

displaying an image on a display to a user;
detecting a three-dimensional position of an object in front of an image plane of the display;
detecting a source position of a pointing source with respect to the display, the pointing source having an associated pointing direction indicative of a direction in which the pointing source is pointing;
detecting a pointing direction of a user control device, the pointing direction of the user control device being associated with the pointing direction of the pointing source so that a change in the pointing direction of the user control device causes a change in the pointing direction of the pointing source; and
detecting whether the three-dimensional position lies on a line passing through the source position in the direction of the pointing direction of the pointing source.

12. An entertainment device comprising:

a generator to generate a three-dimensional image to be displayed on a display;
an image region detector to detect, with respect to the display, a three-dimensional image display region in which the three-dimensional image can be perceived as three-dimensional by a viewer viewing the display; and
a control device detector to detect a source position and pointing direction of a user-operated control device for controlling the entertainment device, the source position being indicative of a position of the user-operated control device with respect to the display, and the pointing direction being indicative of a direction in which the user-operated control device is pointing with respect to the display;
in which the generator is operable to generate the three-dimensional image within the image display region so that the three-dimensional image appears to be associated with the source position and the pointing direction of the user-operated control device.

13. A device according to claim 12, in which the three-dimensional image is associated with the source position and the pointing direction of the user-operated control device so that the three-dimensional image appears to lie along the pointing direction.

14. An entertainment method comprising:

generating, using an entertainment device, a three-dimensional image to be displayed on a display;
detecting, with respect to the display, a three-dimensional image display region in which the three-dimensional image can be perceived as three-dimensional by a viewer viewing the display;
detecting a source position and pointing direction of a user control device for controlling the entertainment device, the source position being indicative of a position of the user control device with respect to the display, and the pointing direction being indicative of a direction in which the user control device is pointing with respect to the display; and
generating the three-dimensional image within the image display region so that the three-dimensional image appears to be associated with the source position and the pointing direction of the user control device.

15. An entertainment device comprising:

a generator to generate a three-dimensional image to be displayed on a display;
an image region detector to detect, with respect to the display, a three-dimensional image display region in which the three-dimensional image can be perceived as three-dimensional by a viewer viewing the display;
an apparent position detector to detect the apparent position of the three-dimensional image with respect to the display; and
a controller to control an appearance of the three-dimensional image in dependence upon the relative position of the three-dimensional image with respect to the image display region.

16. A device according to claim 15, in which the generator is operable to generate the three-dimensional image so that, over a sequence of image frames, the three-dimensional image appears to move towards the viewer.

17. A device according to claim 16, in which the controller is operable to control apparent motion of the three-dimensional image so that the three-dimensional image appears to remain within the three-dimensional image display region.

18. A device according to claim 16, in which the controller is operable to control the appearance of the three-dimensional image so that the three-dimensional image appears to fade when a position of the three-dimensional image is within a threshold distance of an edge of the three-dimensional image display region.

19. A device according to claim 15, in which:

the generator is operable to generate one or more icons each associated with a function of the device, the one or more icons comprising one or more subgroups of icons generated by the generator so that the respective subgroups appear located at respective distances from the display; and
the controller is operable to cause the generator to control the number of icons in each subgroup in dependence upon the relative position of each subgroup with respect to the three-dimensional image display region.

20. An entertainment method comprising:

generating a three-dimensional image to be displayed on a display;
detecting, with respect to the display, a three-dimensional image display region in which the three-dimensional image can be perceived as three-dimensional by a viewer viewing the display;
detecting an apparent position of the three-dimensional image with respect to the display; and
controlling an appearance of the three-dimensional image in dependence upon the relative position of the three-dimensional image with respect to the image display region.

21. A tangible, non-transitory computer program product comprising a storage medium on which is stored computer readable program code, the program code, when executed by a processor, causing the processor to perform an entertainment method,

the method comprising:
displaying an image on a display to a user;
detecting a three-dimensional position of an object in front of an image plane of the display;
detecting a source position of a pointing source with respect to the display, the pointing source having an associated pointing direction indicative of a direction in which the pointing source is pointing;
detecting a pointing direction of a user control device, the pointing direction of the user control device being associated with the pointing direction of the pointing source so that a change in the pointing direction of the user control device causes a change in the pointing direction of the pointing source; and
detecting whether the three-dimensional position lies on a line passing through the source position in the direction of the pointing direction of the pointing source.

22. A tangible, non-transitory computer program product comprising a storage medium on which is stored computer readable program code, the program code, when executed by a processor, causing the processor to perform an entertainment method, the method comprising:

generating, using an entertainment device, a three-dimensional image to be displayed on a display;
detecting, with respect to the display, a three-dimensional image display region in which the three-dimensional image can be perceived as three-dimensional by a viewer viewing the display;
detecting a source position and pointing direction of a user control device for controlling the entertainment device, the source position being indicative of a position of the user control device with respect to the display, and the pointing direction being indicative of a direction in which the user control device is pointing with respect to the display; and
generating the three-dimensional image within the image display region so that the three-dimensional image appears to be associated with the source position and the pointing direction of the user control device.

23. A tangible, non-transitory computer program product comprising a storage medium on which is stored computer readable program code, the program code, when executed by a processor, causing the processor to perform an entertainment method, the method comprising:

generating a three-dimensional image to be displayed on a display;
detecting, with respect to the display, a three-dimensional image display region in which the three-dimensional image can be perceived as three-dimensional by a viewer viewing the display;
detecting an apparent position of the three-dimensional image with respect to the display; and
controlling an appearance of the three-dimensional image in dependence upon the relative position of the three-dimensional image with respect to the image display region.
Patent History
Publication number: 20110306413
Type: Application
Filed: Jun 1, 2011
Publication Date: Dec 15, 2011
Applicant: D YOUNG & CO LLP (London)
Inventors: Ian Bickerstaff (London), Ian Michael Hocking (London), Nigel Kershaw (London), Simon Benson (London)
Application Number: 13/150,357
Classifications
Current U.S. Class: Visual (e.g., Enhanced Graphics, Etc.) (463/31)
International Classification: A63F 9/24 (20060101);