Autostereoscopic display system
An autostereoscopic display system includes a lenticular lens display screen that projects a plurality of views of a scene from its front surface. A plurality of video projectors are disposed to the rear of the display screen and are focused on a convergence point on the display screen's rear surface. Imaging computers drive the video projectors, each imaging computer having a memory storing a scene to be displayed on the display screen. Each computer renders the scene from a preselected viewpoint that may be different from the viewpoints of the other imaging computers.
This application is a division of U.S. patent application Ser. No. 09/921,090 filed Aug. 2, 2001, the specification of which is fully incorporated by reference herein.
BACKGROUND OF THE INVENTION

As display screens have grown in size and fineness of resolution, investigators have experimented with placing several such display screens adjacent to each other and causing three dimensional graphical data to be displayed on them. In 1992, the University of Illinois introduced a multi-user, room-sized immersive environment called the CAVE (for “CAVE automatic virtual environment”). Three dimensional graphics were projected onto the walls and floor of a large cube composed of display screens, each typically measuring eight to ten feet. The cubic environment uses stereoscopic projection and spatialized sound to enhance immersion. Computers and display systems by Silicon Graphics, Inc. have created multi-panel displays which process three dimensional graphics, imaging and video data in real time. However, known CAVEs and like displays by SGI and others share a single apex point of view, with all panels around the viewers having only perspective views streaming from that apex point. Further, much of the prior work requires shuttered or polarized glasses on the viewer for stereoscopic output. A need therefore continues to exist for multiple-display imaging systems permitting the imaging of three-dimensional scenes from multiple perspectives. Further, the treatment of animation graphics across multiple displays currently requires extremely high end, custom hardware and software and large bandwidth capability. The cost and communication requirements of rendering and displaying animation across multiple displays should be reduced.
SUMMARY OF THE INVENTION

According to one aspect of the invention, a multiple-display video system and method are provided by which a rendering image processor is coupled to a plurality of virtual cameras, which in one embodiment occupy separate nodes on a network. Associated with the rendering image processor is a first memory that defines a world having three dimensional spatial coordinates, a second memory for storing graphical image data for a plurality of objects, and a third memory for storing instructions on the positioning of the objects in the world. For each virtual camera, a viewpoint of the world is defined and stored. The rendering image processor renders a scene of the world according to the viewpoint of the virtual camera. Each virtual camera has at least one display associated with it to display the scene rendered according to the virtual camera's viewpoint. The virtual camera viewpoints may be chosen to be different from each other.
According to a second aspect of the invention, a rendering node or server has first, second and third memories as above defined, the third memory storing instructions for positioning the objects in the virtual world and animating these objects. A plurality of clients, which are preferably disposed remotely from the server, each have associated memory and processing capability. Each of the clients has one or more display units associated with it, and viewpoints are established for each. Each of the clients stores, prior to a first time, graphical image data for the objects to be displayed. Each of the clients constructs a respective scene based on instructions received from the server at the first time. The previous storage of the graphical image data (such as textural and geometric data) associated with the animated objects dramatically reduces the amount of bandwidth necessary to communicate animation instructions from the server to each of the clients, permitting real-time animation effects across a large number of associated displays.
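The three-memory organization described above (world coordinates, per-object graphical image data, and positioning instructions) can be pictured with a simple data model. The following Python sketch is illustrative only; the class and field names (World, SceneObject, PlacementInstruction) are invented for this illustration and do not appear in the patent.

```python
from dataclasses import dataclass, field

# Hypothetical data model for the three memories described above.
@dataclass
class SceneObject:
    name: str
    geometry_file: str   # pre-stored geometry (exterior limits of the object)
    texture_file: str    # pre-stored color/surface treatment

@dataclass
class PlacementInstruction:
    object_name: str
    position: tuple      # (x, y, z) in world coordinates
    orientation: tuple   # e.g. Euler angles, for placing/animating the object

@dataclass
class World:
    extent: tuple                                   # first memory: 3D spatial coordinates
    objects: dict = field(default_factory=dict)     # second memory: graphical image data
    placements: list = field(default_factory=list)  # third memory: positioning instructions

world = World(extent=(100.0, 50.0, 100.0))
world.objects["aircraft"] = SceneObject("aircraft", "aircraft.geo", "aircraft.tex")
world.placements.append(PlacementInstruction("aircraft", (10.0, 0.0, 25.0), (0.0, 90.0, 0.0)))
```

Because the bulky object data sit in the second memory on each client, only the small placement instructions need to travel over the network at animation time.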
In a third aspect of the invention, these displays may be physically sited to be contiguous with each other so as to create a single large display. Relatedly, contiguous displays can be directed to display the same scene or overlapping scenes, and the viewpoints of the displays can be varied so that, to an observer passing by the displays, the rendered scene appears to shift as a function of the position of the observer, just as it would if the observer were looking at a real scene through a bank of windows. Other viewpoint shifts are possible to produce, e.g., arcuate or circumferential virtual camera arrays, of either convex or concave varieties.
According to a fourth aspect of the invention, a large multiple-screen animated array may be provided at a commercial location and used to display a combination of animations and text data derived from a local database. These data, such as the Flight Information Data System (FIDS) of an airline at an airport, can be used to display such things as airline arrivals and departures on predetermined portions of the displays. The present invention provides apparatus for producing an overlay of the FIDS data on the animated sequences.
According to a fifth aspect of the invention, the method and system of the invention may be used to illuminate large lenticular arrays to create an autostereoscopic display.
BRIEF DESCRIPTION OF THE DRAWINGS

Further aspects of the invention and their advantages can be discerned in the following detailed description, in which like characters denote like parts.
Each of the hubs 18-22 has associated with it a respective rendering server 38, 44 or 48. The rendering server 38 controls clients 40 and 42 through hub 18. The rendering server 44 controls a client 46 through hub 20. The rendering server 48 controls a client 50 through hub 22. The rendering servers 38, 44 and 48 and their respective clients 40-42, 46 and 50 together constitute the imaging computers 38-50 that run the multipanel displays in the illustrated embodiment.
Server/client groups 24, 26 and 28 preferably are kept isolated from each other by the use of hubs 18-22 to prevent unnecessary cross talk. Each of the imaging computers 38-50 has a set 52, 54, 56, 58 of projectors, each projector 52-58 being controlled by a “virtual camera” set up by the software as will be described below and accepting one video channel output from a respective controlling imaging computer 38-50. The illustrated CRT projectors 52-58 are exemplary only in kind and number and are one of many possible kinds of display units, which also include rear projectors, various kinds of flat panel displays, and autostereoscopic projection screens (described further below).
The system 10 also includes a plurality of video multiplexers 60, 62, each of which accepts one or more channels per client workstation 38-50. The multiplexers 60, 62 are used to relay video signals from the imaging computers 38-50 to a monitoring station at which are positioned monitors 64, 66 for user-induced functional changes, imagery updating or image alignment as may be necessary for a particular type of video wall or other multiunit display. A single monitor 64 or 66 may be connected to each of the multiplexers 60, 62, so as to be capable of instantaneous switching between the large number of video channels present.
The server 12 further provides high speed conduits 69, 70, 71 to each of the hubs 18, 20 and 22 while keeping those hubs 18-22 effectively isolated from each other. As controlled by an overall executable program on main server 12, conduits 69-71 may pass packets of positional data or sequencing information that relay positioning and rendering cues among the rendering servers 38, 44, 48. The conduits 69-71 further simultaneously transmit FIDS text data as overlay text information on the animations displayed on, e.g., the video wall created by units 52-58.
A further workstation 72, which may be UNIX-based, monitors activity on the entire system through main server 12. Workstation 72 also supports a link 74 to the outside world, through firewall 76. The external connection permits data pertaining to the imaging array to be accessed remotely through the firewall 76, and permits remote network management of the system. For example, artwork shown on the video wall constituted by projection units 52-58 may be transformed or reconstituted by commands issued remotely, and may also be viewed remotely to verify image quality and stability. The *.cfg file, described below and copied to each of the rendering computers 38, 44, 48, contains animation start functions and further permits the recognition of an interrupt sent from the workstation 72 in order to effect changes in the animation. Path 74 may be used to load new sets of textures and geometries onto the hard drive storage of server 12, and thence to rendering servers 38, 44, 48, in order to partly or completely replace the imagery shown on the video wall, nearly instantaneously. In the illustrated embodiment, it is preferred that these changes be done by replacing the old *.cfg file with a new one.
System 10 is modular in its design, easily permitting the addition of further rendering servers and associated client imaging computers, with no theoretical upward limit to the number of video channels to be included in the total system.
The *.ini file may contain as many as two hundred separate parameter adjustments, and an even greater number of specifications of parameters pertaining to the animation. The *.ini file on any one imaging computer will differ from the *.ini file on any other imaging computer in its assignment of station ID and node ID. In the illustrated embodiment, each imaging computer controls four stations or virtual cameras. Each imaging computer will also be assigned a unique node number. The *.ini file further contains a bit which tells the system whether the imaging computer in question is a render server or not. The imaging computer uses the station ID contained in the *.ini file to determine which of the several virtual cameras or viewpoints it should use; to minimize network traffic, the parameters for all of the virtual cameras for all of the viewpoints are stored on each imaging computer's hard drive.
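As a minimal sketch of the role of the *.ini file, the fragment below shows how the three identifiers named above (node ID, station IDs, render-server bit) might be stored and read. The section and key names are invented for this illustration; the patent does not give the actual parameter names.

```python
import configparser

# Hypothetical fragment of an imaging computer's *.ini file; the parameter
# names are assumptions, not taken from the text.
INI_TEXT = """
[identity]
node_id = 3
station_ids = 8,9,10,11     ; four virtual cameras per imaging computer
is_render_server = 0        ; bit distinguishing render servers from clients
"""

cfg = configparser.ConfigParser(inline_comment_prefixes=(";",))
cfg.read_string(INI_TEXT)

node_id = cfg.getint("identity", "node_id")
station_ids = [int(s) for s in cfg.get("identity", "station_ids").split(",")]
is_render_server = cfg.getboolean("identity", "is_render_server")

# All virtual-camera parameters are stored locally; the station IDs simply
# select which of the locally stored viewpoints this computer should use.
print(node_id, station_ids, is_render_server)
```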
As loaded and executing on one of the general-purpose processors of the imaging computer, the *.cfg file responds to commands from the *.ini file. The *.cfg file is an artwork developer's tool for configuring specific sequences of preloaded art material to behave in certain ways. The *.cfg file responds directly to the textures and geometries which the art developer has established for the animation sequences, and has a direct association with all textures and geometries that are stored on all mass storage media in the system. The *.cfg file controls how the animation progresses; it contains calls to portions of the rendering sequence, such as layerings, timings of certain sequences and motions of specific objects found in the texture and geometry files. The *.cfg file either contains or points to all of the information that any rendering client or server would need to handle its portion of the full rendering of the entire multi-channel array. For any one contiguous display, the *.cfg files distributed to the imaging computers controlling the individual display panels will be identical to each other, but the information and calls therein are accessed and interpreted differently from one computer to the next according to whether the computer has been identified in the *.ini file as a render server or not, the node ID of the imaging computer, and the station IDs controlled by that imaging computer. The *.cfg file also contains command lines used to make an interrupt, as when the system administrator wishes to change the animation or other scene elements during runtime.
All of the software components described below reside on the imaging computers and are shown schematically in the accompanying drawings.
Each of the rendering servers and clients has stored thereon a world scene 556 or a replica 558, 560 thereof. These world scenes are constructed using a library of graphical imaging data files (in this embodiment, partitioned into geometry and texture files) 562, 564 and 566 stored on the hard drives. The render server 38 further has foreground, background, viewpoint generation and sequencing algorithms 568 which it accesses to set the viewpoints. Algorithms 568 together make up an overall system monitoring protocol which permits the system administrator to manually review or intervene in making on-line changes and adjustments to any viewpoint already established on the system.
Also present on all rendering computers (servers and clients) is an executable (*.exe) file which, when executed by any imaging computer's processor, interprets data stream commands coming from the rendering server and received by each of the clients. The render server 38 further keeps a clock 570 that is used to synchronize the animation across all of the displays.
At process step 102, “virtual cameras” are created by the render server viewpoint algorithm, each corresponding to one of the output video channels. These “virtual cameras” are logical partitions of the processors and memories of imaging computers 38-50, four such virtual cameras being created for each imaging computer 38-50 in the illustrated embodiment. The system administrator sets up the properties of these virtual cameras in the software in advance of execution. The “align cameras” process 102 begins the selection of previously stored imaging textures and geometries that will be used to create the final set of images. Camera alignment step 102 is linked to a step 104, which in the illustrated airport terminal embodiment establishes each of these virtual cameras as driving a display for either a desk or a gate. Process step 104 makes it possible to assign certain text data to each of the virtual camera nodes established at step 102. Registration with the FIDS server at step 104 also includes defining a prescribed set of locations for the overlay of the animation by these text data.
Step 102 establishes which prestored geometry and texture files are needed for a scene. Step 106 queries these files and loads them. A geometry file possesses information on the exterior limits of a displayed object. A texture file relates to a color/surface treatment of such an object or of the background. These geometries and textures are stored prior to runtime on the mass storage device(s) of each server and client, so that they do not have to be transmitted over the network.
At step 112, each rendering server or node 38, 44, 48 establishes a scene by compiling the previously loaded geometries and textures, setting their values in terms of displayed geometric positions and orientations within this newly created scene. As this operation is taking place, the results are sent (step 114) by each render server and are received (step 110) by each client 40-42, 46, 50. This data flow of vector positions and orientations, also known as sequencing instructions, across the network tells the imaging computers 38-50 (and the virtual cameras set up by them) how to direct their respective portions of the full scene's animation layout across any of the screens or displays of the composite video array. The novel approach of distributing geometries and textures to clients/virtual camera nodes first, and compositing them later into scenes (step 116) using subsequently transmitted vector information, provides the technical advantage of greatly reducing the amount of information that has to flow across the network between the rendering servers 38, 44, 48 and their respective clients 40-42, 46, 50. After texture and geometry loading, the transmissions between the servers 38, 44, 48 and their respective clients 40-42, 46, 50 consist only of vector positions of the stored textures and geometries, rather than the very large graphical data sets generated by the rendering computers.
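The bandwidth saving can be seen in a back-of-the-envelope sketch of a per-object sequencing item. The wire format below is invented for illustration (the patent does not specify one); it simply shows that an object identifier plus a position vector and an orientation fits in a few tens of bytes, versus the megabytes of preloaded texture and geometry data that never cross the network after loading.

```python
import struct

# Hypothetical per-object sequencing item: object id, x/y/z position,
# quaternion orientation. This is a sketch, not the patent's wire format.
ITEM_FORMAT = "<I3f4f"

def pack_sequencing_item(object_id, position, orientation):
    return struct.pack(ITEM_FORMAT, object_id, *position, *orientation)

def unpack_sequencing_item(payload):
    values = struct.unpack(ITEM_FORMAT, payload)
    return values[0], values[1:4], values[4:8]

item = pack_sequencing_item(42, (1.0, 0.5, -3.0), (0.0, 0.0, 0.0, 1.0))
print(len(item), "bytes per animated object per update")   # 32 bytes
```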
At step 116, which takes place in each of the client and server imaging computers, the positions and orientations are used to place the geometries within scenes. The placement step 116 uses a coordinate system previously established by the software. The geometries, positions and orientations may change or may be modified as rapidly as the rendering servers 38, 44, 48 and the client computers 40-42, 46, 50 can individually generate the subsequent set of rendered images, or as fast as the speed of the network in relaying new positions and coordinates to the referenced client computers to produce the full scene, whichever factor is more limiting.
Once the geometries pertaining to the animation are properly positioned at step 116, at step 118 the FIDS data accessed by the UNIX server 12 (which in turn is linked to the network via path 74) are overlaid as text on the rendered scenes at the display locations registered at step 104.
At the termination of each of these cycles at a step 122, the texture memory is purged to replenish available space for new imaging data in the animation to be loaded. The process then reverts to step 106 for the next cycle.
Within any world, a scene may be rendered from several different viewpoints, each of which is associated with a particular virtual camera. Each virtual camera is associated with a scene graph. In some instances, the same scene graph may be shared between or among several virtual cameras, where their perspective views intersect. If, for example, two different rows of virtual cameras cross each other at some intersection point, then only those two overlapping virtual cameras might end up sharing a particular scene graph since they share the same viewpoint perspective field. Virtual camera windows depicting different scenes would use different scene graphs. In this manner, the viewpoint is determined before the scene is rendered.
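The sharing rule described above can be sketched as a simple registry that hands the same scene-graph object to virtual cameras whose viewpoints coincide and a fresh graph otherwise. The class and method names here are invented for illustration and are not the patent's software.

```python
# Illustrative only: cameras that share a viewpoint perspective field share
# one scene graph; cameras depicting different scenes get different graphs.
class SceneGraph:
    def __init__(self, viewpoint_key):
        self.viewpoint_key = viewpoint_key
        self.nodes = []

class SceneGraphRegistry:
    def __init__(self):
        self._graphs = {}

    def graph_for(self, viewpoint_key):
        return self._graphs.setdefault(viewpoint_key, SceneGraph(viewpoint_key))

registry = SceneGraphRegistry()
g1 = registry.graph_for(("row_a", 5))
g2 = registry.graph_for(("row_a", 5))   # same intersection point -> shared graph
g3 = registry.graph_for(("row_b", 2))   # different viewpoint -> separate graph
print(g1 is g2, g1 is g3)               # True False
```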
At step 150, a scene graph is selected for the virtual camera whose viewpoint is being established.
Next, at step 154, a corresponding identity matrix for the scene graph is enabled. Position and orientation are parameterized within an X, Y and Z coordinate system which defines the identity matrix.
At step 158, parallax settings are selected. The parallax settings may be used to establish a separation distance between virtual cameras along a linear path that is virtually spaced from the scene being rendered. The shape of this path is arbitrary; the path may be curved or straight.
In many cases, a convergence angle may be desired among the virtual cameras on the path, depending on the type of scene selected, and this convergence angle is supplied at step 160. For example, when a scene is being rendered in multiple displays, it may be desirable for the viewpoint established in the scene to vary from one display to the next as an observer walks along the displays on a path parallel to them. The establishment of a convergence angle provides for a balanced and smooth proportional viewing of a scene and the matching of infinity point perspective from one display to the next. At step 162, after all of these coordinates and parameters have been selected by the user, the viewpoint of the scene is created and stored in the virtual camera memory and is available at runtime for the rendering and projection of the image.
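A sketch of how such a row of virtual cameras might be laid out, under assumed conventions, is shown below: a parallax setting gives the separation along a straight baseline, and an optional convergence distance turns each camera in toward a common point. The function name and parameter choices are invented for this illustration, not taken from the patent's software.

```python
import math

def linear_camera_baseline(num_cameras, parallax, convergence_distance=None):
    """Place cameras along a straight baseline; optionally converge them."""
    cameras = []
    half_span = (num_cameras - 1) / 2.0
    for i in range(num_cameras):
        x = (i - half_span) * parallax        # separation distance between cameras
        if convergence_distance is None:
            yaw = 0.0                          # parallel viewing axes
        else:
            yaw = math.atan2(-x, convergence_distance)  # turn toward a shared point
        cameras.append({"station_id": i, "position": (x, 0.0, 0.0),
                        "yaw_radians": yaw})
    return cameras

for cam in linear_camera_baseline(4, parallax=0.065, convergence_distance=3.0):
    print(cam)
```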
For example, the software architecture of the system may be laid out as follows.
The clients and server(s) communicate using an application level protocol. Server-side command stubs are provided as a way to map the software animation application programming interface (API) calls to their distributed equivalents. Reciprocally, the clients' API or stub procedures provide a way to map commands received from the servers over the network to local software API calls. Copies of the APIs reside both on the rendering servers 38, 44, 48 and their respective clients 40-42, 46 and 50. Both the server and the matching client(s) maintain a copy of the current scene graph, which may be edited remotely through the network, and each scene graph is identical across each server group (e.g., group 24).
A naming scheme or module 200 allows the client and the server to which the client is connected to address remote objects within the scene graph and to specify operations to be performed on them. The name module 200 is linked to a pointer to a name map at 202.
In the communication protocol, both the client and the server use calls to the software's network functions to connect to a multicast group. For example, the rendering server 38 issues commands to its multicast group 24. The application level protocol uses a net item syntax that is included within the animation software. In the actual transmission of information between any of the clients 40-42 and the server 38, an item type field is used to distinguish data items from command items. In the illustrated embodiment, the command items are distinguished from the data items by the most significant four bits of the type field, which are all ones. Type values 0xF0 to 0xFF are reserved for command codes.
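The type-field convention described above reduces to a single mask test, sketched below. The helper names are invented for illustration; only the 0xF0-0xFF command range comes from the text.

```python
# Items whose most significant four bits are all ones (0xF0 through 0xFF)
# are command items; everything else is a data item.
COMMAND_MASK = 0xF0

def is_command_item(type_field: int) -> bool:
    return (type_field & COMMAND_MASK) == COMMAND_MASK

assert is_command_item(0xF3)        # a command code
assert not is_command_item(0x2A)    # a data item
print([hex(t) for t in range(0xF0, 0x100)])   # the reserved command range
```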
The server loads a terrain model and computes the behavior at 204 for the activity taking place within the terrain. It initiates changes to the scene graph at 206 by making software calls to the client stub procedures. It may also make use of the naming module 200 to name objects in the scene graph. The rendering server 38 may also use a command encoding/decoding module 208 to process items addressed to it by respective clients, or by commands delivered to it from outside the network to re-edit or recompile an updated set of scene graph features at 206. The server 38 initializes and controls the scene at 210.
Rendering server 38 is responsible for initializing the animation simulation at 204 and also manages swap synchronization at 212 of all client computers linked with it. The main role of the associated clients 40-42 (and similar logic within server 38 itself) is to render the scene from the respective viewpoints of the virtual camera objects that have been created in them, which have been adjusted for their respective viewing pyramids.
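Swap synchronization of this kind is commonly expressed as a barrier: every client finishes rendering its portion of the frame, then all displays swap buffers together so the composite wall stays in step. The sketch below shows the generic barrier pattern only; it is not the patent's actual synchronization mechanism, and the function names are invented.

```python
import threading

NUM_CLIENTS = 3
swap_barrier = threading.Barrier(NUM_CLIENTS)

def client(client_id):
    for frame in range(2):
        # ... render this client's portion of the scene for `frame` ...
        swap_barrier.wait()          # wait until every client has rendered
        print(f"client {client_id} swaps buffers for frame {frame}")

threads = [threading.Thread(target=client, args=(i,)) for i in range(NUM_CLIENTS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```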
The software protocol for the FIDS text overlay operates as follows.
When the listening thread 220 detects a parcel of flight data in response to a preloaded data query, it delivers a sequential set of commands to a desk monitor thread 230, a flight monitor thread 234 and a command listen thread 238. Threads 230, 234 and 238 each activate in response to receiving these commands and route appropriate information to either a desk or a gate.
The desk monitor thread 230 selects which desks are to receive which sets of flight arrival and departure information; different ones of these information sets pertain to particular desks. For each desk, a desk thread 232 is updated (233) by the system. Flight monitor thread 234 completes a process of determining a flight thread 236. Once this occurs, the command listen thread 238 acknowledges the arrival of all of the data, which is now fully parsed. The command listen thread 238 issues commands as to how the text is to be allocated within the video array as well as into the independent gates within the terminal, switching a set of command threads 240, 242, 244 (a representative three of which are shown) to complete this stage of the process. Command threads 240-244 are “fire and forget” operations, which engage and then detach, logging a respective update thread 246, 248 or 250 as they finish.
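A much-simplified sketch of this thread structure appears below: a listening routine parses incoming flight data and hands records to desk and flight queues, while short-lived "fire and forget" command threads carry each text update to its display and then exit. The names, record fields and queue-based hand-off are assumptions made for this illustration, not the patent's implementation.

```python
import threading, queue

desk_queue, flight_queue = queue.Queue(), queue.Queue()

def command_thread(target, record):
    # Fire-and-forget: overlay the text on the assigned display area, then exit.
    print(f"overlay on {target}: {record}")

def listening_thread(records):
    for rec in records:
        (desk_queue if rec["kind"] == "desk" else flight_queue).put(rec)

def desk_monitor():
    # The flight monitor path would be analogous, reading flight_queue.
    while not desk_queue.empty():
        rec = desk_queue.get()
        threading.Thread(target=command_thread, args=("desk display", rec)).start()

listening_thread([{"kind": "desk", "flight": "AA100", "status": "Boarding"}])
desk_monitor()
```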
The illustrated embodiment is one way of overlaying text on animations displayed along large video walls and on adjacent screens located at gates within an airport environment. The present invention is also useful in other situations where rapidly changing or time-variant text is closely integrated with large video walls having a multiplicity of screens, where detailed animations, simulations and video overlays stretch along the full length of the video wall, and where such animations are to be monitored and modified remotely by users via the Internet. The present invention has applications which include public municipal stations, malls, stadiums, museums, scientific research laboratories and universities.
Each motherboard 300 must be equipped with a BIOS 302 which acknowledges the presence of multiple graphics cards 304-318 plugged into their specific slots. In the illustrated embodiment these include both 32-bit and 64-bit PCI slots 304-316, numbering up to seven slots per motherboard, and one AGP high speed slot 318. The BIOS 302 built onto the motherboard must be able to assign a different memory address to each of the cards 304-318, enabling separate video driver information to be sent to each specific card through the PCI or AGP bus (not shown), in turn allowing video output data to be allocated to that card. Once this is achieved, the imaging system can detect each card and direct each respective virtual camera windowing aperture frame to the VGA output of that card. Different video cards and their manufacturers have differing means of assigning these addresses for their respective video drivers under this arrangement, requiring that all video cards loaded onto the motherboard 300 in a multiple array be of the same type. The customization of the video drivers for this imaging system array and its software controls allows different video card types to share the same motherboard under the Windows NT 4.0 and Windows 2000 operating systems, provided the motherboard chosen has a BIOS 302 that can acknowledge all the separate cards and assign unique memory addresses to those cards.
In a preferred hardware configuration, an AGP card 318 with one VGA output port can share the same motherboard with at least three PCI cards 304-308 of the same type, providing a total of four video output channels on that motherboard 300. This is a typical arrangement for all the rendering servers and their client counterparts with the multiple channel imaging software being used. Each video output then occupies the same resolution value and color depth for that computer, which can be modified independently on each video channel. Using dual or even quad CPU processors 320, 322 (a representative two of which are shown) on motherboard 300 maximizes the graphical computational speed delivered through the AGP and PCI buses to the graphics cards to enhance the speed of the rendering animation. Since the textures and geometries of the animation sequence reside on the hard drives 324 of their designated computers, the speed of accessing those libraries is maximized through the motherboard's own SCSI or fiber channel buses 325.
Choosing the number of video cards per motherboard 300 must also take into account the most efficient use of available CPU speed on the board 300, the speed of the onboard network, and the presence of other cards running in the system. The addition of video frame grabber cards (not shown) on the motherboard 300 also allows live outside video to be introduced concurrently into the outputted animation video as nondestructive overlays, which may be routed along the video array at a desired degree of resolution.
A primary PCI bus controller 332 is joined directly to the I/O bridge 340 and serves as the maximum throughput device for the PCI cards 304, 306, 308 connected to the motherboard, in the illustrated embodiment operating at 66 MHz. The other PCI controller interfaces 328, 330 are attached at a juncture 356 between I/O bridge 340 and south bridge 342, and in the illustrated embodiment run at secondary, lower speeds of 33 MHz. It is preferred that the PCI graphics cards 304, 306, 308 or their equivalents communicate at bus speeds of at least 66 MHz to the rest of the system.
South bridge 342 joins all “legacy” devices such as SCSI controllers (one shown at 325), IDE controllers (one shown at 336), onboard networks and USB ports (not shown). It also connects to network port 358, through which the positional coordinates of an animation's formatted graphics are transferred. South bridge 342 is meant to attach to lower-speed data storage devices, including the disk array 324 from which source data for the system is derived.
Each of the graphics cards 304-318 has a respective graphics card CPU or processor 362, 364, 366 or 368. The “processor” or processing function of the invention is therefore, in the illustrated embodiment, made up by CPUs 320, 322, and 362-368. The graphics processors 362-368 complete the image rendering processes started by general-purpose processors 320 and 322. General-purpose processors 320 and 322 also handle all of the nonrendering tasks required by the invention.
It is also useful to consider the ability of the motherboard, its drivers, and its BIOS to perform these tasks under other operating systems such as LINUX, running on separate rendering servers and client computer systems in a manner that may be more efficient in retrieving and compiling the graphical data. This may also be a determining factor in how fully the computational time of the multiple CPU processors on the motherboards can be used for multithreading the animation rendering software integrated with the functions of the graphics cards chosen for the system.
The preferably UNIX-based main server 438 joining the hubs linked to the groups of rendering servers is the entry point for the introduction of the FIDS text data to be overlaid on the various animation screens of the multi-channel imaging system. A total of eight virtual camera windows may be provided for each of the rendering servers 402, 404, 406, and there is no upper limit to the number of rendering servers which can be brought into the system. The number of client computers 408-414 in each server group may be as high as eight, matching the number of separate virtual camera windows permitted within each server, or may be unlimited if repetition is required to establish additional operations on separate client computers that distinguish them from the first group. Situations where this might arise include the creation of a backup of the system, the introduction of additional universes running on separate rendering servers simultaneously with nondestructive overlays presented on the first group, or where additional features are implemented specifically on certain client boxes. Each rendering server 402-406 may be identified with one particular world, or it may function to elaborate upon that same world with an additional set of virtual camera windows set up on another rendering server with its own set of new clients. The hardware used with each client and its respective server must be the same for purposes of symmetry in computing the final video image, but different sets of hardware, including graphics cards, drivers and motherboards, may be used in each separate rendering server group.
In a standard contiguous video wall arrangement, each rendering server 402-406 provides a consecutive set of video channels that match precisely in a graphical sense as one views the video array from left to right, with the last image of the first group matching its right side with the left side of the first image from the second rendering server group, and so on. Under this arrangement, there is no upper limit to the length of the video wall, and the real-time animation rendering is regulated by the processing speed of each client computer box, the server computer boxes, and the network that joins them.
Each rendering server and its adjoining client computer units make up contiguous portions of the video wall, which may be oriented either horizontally or vertically, numbering from bottom to top for vertical video walls. A video wall constructed according to the system may have other shapes and directions, including cylindrical, domed, spherical, parabolic, rear or front screen projected configurations, and may include additional tiers of horizontal rows of video screens. This flexibility is possible because the virtual camera windows that the user assigns to specific video card outputs are based upon a coordinate system that the user is able to define and control as a part of the viewpoint software, with the animation rendering portion of the software responding to those portions of the worlds the user has established within each rendering server or client computer.
All video drivers introduced into the system may be used to access worlds, but some worlds may be created to suit one video card's manner of displaying imagery through its own specific video driver. In addition, newer graphics cards recently introduced to the market may be loaded and tested against the existing video cards present on the system without having to rewrite software code for the entire system. By distinguishing and separating the newer cards' video driver from the set of video drivers already present within the system, a new set of differentiated tests may be implemented in the video array while the system remains continually online.
In each camera base instance, the same worlds may be used, or separate worlds may be newly introduced. The parallax value in each base configuration 508, 510, 514 is chosen by the user, as well as the three-dimensional coordinate system parameters that describe the particular virtual camera base orientation responsible for capturing the viewpoints within a particular world. The “horizontal”, linear based configuration 508 has a parallax value set as a virtual distance between each of the virtual cameras 509. On a separate rendering server 504 and its clients 520, 522, an arcing base 510 anchors convergent viewpoints whose coordinates the user may select in the software's parameters. Such curved camera bases are able to work with the convergence used in certain animations which encourage the viewer to focus more on activity and objects that exist in the foreground as opposed to the more distant background features, depending on the angles between the curving set of viewpoints. Also, within certain types of generated worlds, a linear horizontal base may not provide needed convergence but a curved virtual camera base will. The arcuate path 510 can be used, for example, in a set of displays arranged along a wall to simulate a set of windows in the wall to the outside. As the viewer moves along the wall, the viewpoint changes such that what the viewer is seeing mimics what he or she would see if those displays really were windows.
The circular virtual camera base 514 covers a full 360° sweep of an animated world. This camera base lends itself to more three dimensional applications of animation viewing, requiring the system to allocate geometries and textures around the entire perimeter of a world. An endless base 514 can be used to show portions of multiple worlds in larger detail. Arcing virtual camera bases like base 510 can be used in video projection for “caves” and other rounded enclosures, where the projected imagery surrounds the viewer or viewers in a theater type arrangement. In this instance, the three dimensional coordinate system that defines the viewpoints set by the user of this system determines the degree of arc of the projected imagery against a curved or sloping screen surface. Since the viewpoint controls within the software allow for both flat plane as well as curved surface structure, the nonlinear aspects of projecting against any curved surface may be programmed into the system to compensate for the curvature of the projection screen, even if that curved surface is discontinuous. The final image will be viewed as a transposition of a flat rectilinear scene onto a curved surface screen, without distortions or with reduced distortions, in either a rear projected or a front projected format. Consequently, the contiguous set of images along an arc may also be joined together seamlessly, in the same fashion as a set of contiguous flat images that are precisely matched along each other on a flat display screen.
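As a generic illustration of the kind of curvature compensation described above, the sketch below pre-distorts the columns of a flat rectilinear rendering for a cylindrical screen viewed from its axis: each output column corresponds to a uniform angle, so the flat image is resampled with a tangent mapping. This is a simplified, assumed model for illustration only, not the patent's specific correction; the half-angle parameter is invented.

```python
import math

def cylindrical_warp_columns(width, half_angle_deg=30.0):
    """Map each output column to the flat-image column it should sample."""
    half_angle = math.radians(half_angle_deg)
    mapping = []
    for col in range(width):
        theta = (2.0 * col / (width - 1) - 1.0) * half_angle   # angle of this column
        u = math.tan(theta) / math.tan(half_angle)             # flat-image coordinate in [-1, 1]
        mapping.append((u + 1.0) / 2.0 * (width - 1))
    return mapping

# Columns near the screen edges sample the flat image more sparsely than the center:
print([round(c, 1) for c in cylindrical_warp_columns(9)])
```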
While three representative virtual camera baselines or paths 508, 510, 514 have been shown, others are possible. The illustrated baselines are all within a single plane, but this need not be the case. For example, the viewpoints of contiguous displays could differ one from the next in elevation, such that, as a passer-by viewed these displays, he or she would perceive the same scene from an ever-higher viewpoint. Suppose that the displays were placed along a wall, and passers-by viewing the displays were walking up a ramp. The viewpoints of the displays could be selected such that the perceived change in viewpoint matched, or was a function of, the viewer's real change in elevation. Nor would the change in viewpoint from one virtual camera to the next have to be at a constant spacing; a set of viewpoints could be chosen such that the change in viewpoint from one virtual camera to the next could be accelerated or decelerated.
The software controls enable the user to set the shapes of the viewpoint windows themselves, thereby creating apertures that are rectangular, triangular, or keystoned, depending on the nature of the projection screen's shape. Prior to the invention, the projection apparatus had to be fitted with special lenses and apertures on the projectors to create an undistorted, balanced image on a curved screen. According to the invention, the networked set of rendering server and client computers all share the same programmed curvilinear settings for projecting each image on an elongated curved screen, and are not limited in the number of channels used in the full system. This feature provides the capability of increasing the resolution of the final projected image along the inside of the cave enclosure by increasing the number of channels per horizontal degree of view. The system further provides for the introduction of rows or tiers of curved images, vertically, which can be especially useful in the projection of images within large domes or spheres, or where imagery is seen both above and below the vantage point of the viewers. Superimposed projected imagery may also be used, as described below.
The modularity of the system readily accommodates such arrangements.
In certain cases both front and rear projection may be chosen for an installation involving different cave enclosures, altering the manner in which images appear on the enclosed viewing screen. In such an embodiment a group of rendering servers and their client computers would be assigned to rear projection, and another separate group would be assigned to front projection imagery, each specifically addressing the nonlinearity corrections necessary for projecting onto curved surfaces. A single cave enclosure may provide both front and rear screen viewing zones simultaneously within the same chamber, as in the case of a sphere or dome inside a large spheroidal or domed theater enclosure. Within this structure, the outer spheroidal sets of screens may use front projection, joined with one group of rendering servers and their rendering clients, and an inner sphere or domed structure would make use of rear projection for another associated group of rendering servers and their own rendering clients.
The system may also be used to drive an autostereoscopic display, as will now be described.
Since each rendering server 606 and its rendering clients 608, 610 (a representative two of which are shown) has established within it a software set of angled viewpoint controls assigned to video output ports, such ports may be used to supply images to angled projectors 612-626 that converge their output beams on a central point behind the autostereoscopic screen device 604. These screen devices are available from several manufacturers, but their construction and operation may be summarized as follows. Screen device 604 is a rear projection system that includes two large rectangular lenticular lenses 605, 607 positioned one behind the other, on a central axis 632, with their vertical lenticules identical in spacing, such as 50 lines per inch. A front view detail of each of these lenticular lenses 605, 607 is shown at 609. The lenticules are arranged to be parallel to each other and are separated laterally by a fractional amount of a single lenticule. This lateral offset is determined by the focal length of the lenses, which should also be identical, and the spacing between the two lenses, which the user may adjust to shift the convergence point of the incident projectors placed behind the viewing screen assembly 604. Clear spacing plates such as acrylic plates 611, 613 may be used between the lenses to keep their separation fixed. The designer may also insert an additional circular lenticular lens 615 (a front view detail being shown at 617) between the two outer vertical lenticular lenses to change the size of the viewing cone or angle of viewing for 3D images to be viewed by audiences in front of the screen assembly.
The video projectors 612-626 should have identical focal length lenses, resolution and aperture size, and should be anchored along a single stationary arc having an axis 632 which is orthogonal to the screen 604. With very large screens, the degree of arcing is slight. If the size of the rear screen assembly 604 is small, the arcing is more pronounced. While eight projectors 612-626 are shown, any number of projectors greater than or equal to two can be used. Screen device 604 receives the array of light beams directed towards the back of the screen, and after that array travels through several layers of lenticular lensing material sandwiched inside the screen, re-projects the projector light rays from the front of the screen with a summation of each of the projectors' rays across a widened viewing aperture. The point of convergence 636 of all of the projectors' beams is located at the intersection of a central axis 632, itself perpendicular to the plane of screen 604, and a rear surface 634 of the rear lenticular lens 605.
The rectangular pattern created on the back of the rear lenticular screen by video projectors 612-626 should be identical in size and shape, and any keystone corrections should be done electronically either within each video projector 612-626 or by software operating within the graphics cards in the imaging computer 608 or 610 driving the projectors.
In this embodiment, increasing the number of projectors 612-626 increases the number of views visible to viewers in front of the screen 604. The distance between the projectors 612-626 and convergence point 636 is determined by the size of the rectangular image they create on the rear lenticular lens 605 of screen 604, with the objective of completely filling the viewing aperture of the rear lenticular lens 605.
If the number of the projectors 612-626 is large, as in eight or more, and if the resolution of the projectors 612-626 is large, for example 1280×1024 pixels each, then the lenticular lenses themselves will be able to support a number of lines per inch greater than 50 and as high as 150, thereby increasing the total number of views perceived on the front of the screen for 3D viewing.
The typical light path for a rear projector beam first passes through the rear lenticular lens 605 at a given incident angle with respect to surface 634. The rear lenticular lens 605 then refracts this incident beam at an angle determined by the focal length of the lenticular lens 605 and the angle of the incident beam, as well as the distance of the projector from convergence point 636. The first, rear lenticular lens 605 establishes an initial number of viewing zones and directs these rays through the middle, circular lenticular lens 615, which widens the viewing zones set by the first, rear lenticular lens 605. The amount of widening is set by the focal length of this middle lens. As the ray passes through the front lenticular lens 607, which preferably is identical to the rear lens and is offset to the right or left by a fractional distance less than the width of a single lenticule, the number of contiguous perspective viewing zones is multiplied. The amount of this multiplication is determined by the number of lines per inch of the lenticular lens, the number of projectors arrayed behind the rear lenticular lens, the amount of right or left offset distance of the front lenticular lens relative to the rear lenticular lens, and the separation distance between the planes of the front and rear lenticular lenses. Usually, this multiplication factor is three. The lenticular lenses are held firmly in flat positions by glass plates or by acrylic plates 611, 613 mounted in frames, depending on the thickness of the lenticular lenses being used. The projector array 612-626 in conjunction with screen 604 possesses the ability to repeat the total number of views delivered to the back of the screen several times in order to provide an even wider 3D convergent viewing zone for large audiences to collectively view such autostereoscopic images in a large theatre environment, or along a video wall.
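A worked arithmetic sketch, using only the figures mentioned in the text, illustrates the effect: with eight projectors behind the screen and the roughly threefold multiplication attributed to the offset front lenticular lens, the screen presents on the order of twenty-four contiguous perspective zones. The simple product below is an illustration, not an optical model.

```python
num_projectors = 8
zone_multiplication = 3          # "usually, this multiplication factor is three"
viewing_zones = num_projectors * zone_multiplication
print(viewing_zones)             # 24
```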
In this embodiment, with eight projectors 612-626 positioned behind the screen 604, a viewer in front of screen 604 would see a succession of eight stereo views of the given scene, with his or her left eye observing a left perspective view, and his or her right eye seeing a right perspective view, the view determined by the angle of view that he or she has with respect to the front surface of screen 607.
Several screens may be optically joined together to provide an immersive 3D enclosure, consisting of the screens' individual systems of lenticules, or the screen may be curved or shaped to arc around the audience's viewing perspectives. The real-time rendering facilities inherent in the distributed image processing of the invention permit the rapid movement associated with large-scale, high-resolution motion 3D viewing.
With the addition of a video multiplexer 628, autostereoscopic flat panel devices such as device 600 may be joined to the system, for smaller 3D viewing applications that do not require stereo glasses or head-tracking devices. Furthermore, a lenticular printer 630 may be added to the system to view, edit, and print lenticular photos and 3D animations created within the multi-channel imaging system. This is a particularly useful aspect of the system in that it gives the creator of 3D lenticular work the ability to view changes to a lenticular image instantaneously on a 3D screen, instead of having to reprint an image array many times on an inkjet or laser printer to achieve the kind of 3D viewing he or she wishes to make.
The way in which autostereoscopic images may be delivered or constructed within the system of the invention is based on the parameters set up to control the perspective fields of the various images to be assembled. This specialized software is capable of selecting these values for a given 3D world, which may be computer generated or transferred from an external source of 3D data such as digital camera sources or scanned film photography. Such controls may regulate viewing distance from a centralized scene, viewing angles, parallax adjustments between virtual cameras, the number of virtual cameras used, perspective convergence points, and the placement of objects or background material compositionally for the scene.
Since there is no upper limit to the number of viewpoints created by the system, recorded source data that possess only a low number of views, or even just two views, may be expanded through a mathematical algorithm used within the system to generate more views between or among the original set of views. The results of this 3D reconstruction of an actual scene may be composited with other autostereo images in much the same way as portions of a 3D world may be joined together. For the 3D flat panel display 600, software interleaving functions that are established within the multi-channel imaging system may be used to optically join multiple perspective views in combination with a video multiplexer to support a minimum of four channels, with the upper limit regulated by the line pitch of the lenticular lens positioned on the 3D panel, as well as the flat panel 600's total screen resolution.
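A minimal column-interleaving sketch for a lenticular flat panel appears below: pixel columns are taken from the rendered views in round-robin order so that each lenticule covers one column from every view. Real interleaving must also account for the lens pitch and sub-pixel layout mentioned above; the function and the toy data are invented for this illustration.

```python
def interleave_views(views):
    """views: list of equally sized images, each represented as a list of columns."""
    num_views = len(views)
    width = len(views[0])
    # Output column c is drawn from view (c mod num_views).
    return [views[col % num_views][col] for col in range(width)]

# Four views, each represented here by four labeled columns:
views = [[f"V{v}C{c}" for c in range(4)] for v in range(4)]
print(interleave_views(views))   # ['V0C0', 'V1C1', 'V2C2', 'V3C3']
```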
In summary, a real-time, animated, multiple screen display system has been shown and described in which a plurality of virtual cameras is set up, each having its own viewpoint. The present invention permits animated objects to displace themselves across multiple displays, allows changing text data to be superimposed on these images, and permits contiguous multiple-screen displays of other than flat shape that are capable of displaying scenes from different viewpoints.
While the present invention has been described in conjunction with the illustrated embodiments, the invention is not limited thereto but only by the scope and spirit of the appended claims.
Claims
1. A method for rendering images in a multiple display unit video system, comprising the steps of:
- forming a network including a server and a plurality of clients in communication with the server;
- storing, prior to a first time, in memories associated with each of the clients, graphical image data for each of a plurality of objects to be displayed by one or more of the multiple display units;
- storing, prior to the first time, in memories associated with each of the clients, a scene in which the objects are to be displayed;
- transmitting, from the server to each of the clients, at the first time, object position and aspect data;
- rendering, by each of the clients, using the stored graphical image data, the stored scene and the object position and aspect data, images to be displayed in each of the displays; and
- displaying the rendered images on the displays driven by the clients.
2. The method of claim 1, and further comprising the steps of
- connecting, prior to the first time, at least one display unit to the server;
- storing, in a memory associated with the server and prior to the first time, the graphical image data;
- storing, in the memory associated with the server and prior to the first time, the scene; and
- rendering, by the server, using the stored textural and geometric data, the stored scene, and the object position and aspect data, images to be displayed in said at least one display unit connected to the server.
3. The method of claim 1, and further comprising the steps of
- connecting a plurality of display units to each client; and
- using the stored graphical image data, the stored scene, and the object and position data, rendering images for each of the connected display units.
4. The method of claim 1, and further comprising the steps of
- storing, prior to the first time, in the memories associated with each of the clients, a plurality of viewpoints of the scene;
- assigning and storing, prior to the first time, in the memories associated with each of the clients, each of a plurality of station identities, each station identity associated with a display unit;
- for each station identity, assigning to the last said identity one of the stored viewpoints; and
- at the first time, rendering images of the scene and selected textural and geometric data according to each of the assigned viewpoints.
5. A modular multiple display system, comprising:
- a central node;
- a plurality of rendering nodes each coupled to the central node, each rendering node having a processor and a memory used to define a three dimensional virtual world in which at least one depicted object is placed, each world being a subset of a single universe shared among all of the rendering nodes; and
- for each world, at least one virtual camera having a viewpoint into said world, a display associated with said at least one virtual camera and displaying the depicted object from the viewpoint.
6. The system of claim 5, wherein the object is displayed as a motion picture and exhibits movement within the world with respect to time.
7. The system of claim 6, wherein each rendering node is associated with a respective video driver, the video drivers being preselectable as different from one another.
8. The system of claim 6, wherein the object is animated.
9. The system of claim 5, where said at least one virtual camera is one of a plurality of virtual cameras each associated with a separate display, one of said worlds being associated with at least two of the virtual cameras.
10. The system of claim 9, wherein at least one virtual camera is instantiated by a client node coupled to but physically remote from a rendering node.
11. The system of claim 5, wherein the central node provides a communication path among the rendering nodes coupled thereto for the sharing of data, the rendering nodes otherwise being isolated from each other.
12. The system of claim 5, wherein the central node provides overlay data to selected ones of the rendering nodes such that the rendering nodes may render scenes of their respective worlds as overlaid with selected portions of the data.
13. The system of claim 12, wherein at least one display is an autostereoscopic display device for simultaneously displaying at least two viewpoints of said depicted object, at least two virtual cameras coupled to the display device for transmitting imaging data concerning the depicted object from said at least two viewpoints, the overlay data overlaying both viewpoints.
14. Apparatus for depicting at least one object in a multiple display system, comprising:
- plural rendering node means each coupled to central node means, at least one virtual camera means coupled to each rendering node means, each virtual camera means driving a respective display unit,
- each rendering node means including means for creating a three-dimensional world in which the object will be depicted, each world being a subset of a universe shared among all of the rendering node means and being preselectable as being possibly different from others of the worlds;
- each rendering node means further including means for creating at least one viewpoint into the respective world, the viewpoint used in defining a respective virtual camera means, the virtual camera means driving an associated display to depict said at least one object in a world from the viewpoint of the virtual camera means.
15. A method for depicting at least one object in a multiple display system, comprising the steps of:
- connecting each of a plurality of rendering nodes to a central node;
- for each rendering node, creating a three-dimensional world in which the object will be depicted, each world being a subset of a universe shared among all of the rendering nodes and being preselectable as possibly different from others of the worlds;
- for each rendering node, creating at least one viewpoint into the respective world, the viewpoint used in defining a respective virtual camera; and
- using each virtual camera to drive a respective display coupled thereto in order to depict said at least one object in a world from the viewpoint of the virtual camera.
16. The method of claim 15, and further comprising the step of using the rendering node to render the object as a motion picture.
17. The method of claim 16, and further comprising the step of associating each rendering node with a respective video driver; and
- preselecting the video drivers as being possibly different from one another.
18. The method of claim 15, and further comprising, for at least one of the worlds, the step of creating a plurality of virtual cameras each having a respective viewpoint into said at least one world; and
- selecting a viewpoint of one of the virtual cameras as being possibly different from a viewpoint of another one of the virtual cameras.
19. The method of claim 18, and further comprising the step of physically disposing at least one of the virtual cameras to be remote from the rendering node.
20. The method of claim 15, and further comprising the step of sharing data among worlds only through the central node.
21. The method of claim 15, and further comprising the steps of
- retrieving text overlay data to the central node;
- predetermining areas of the displays on which the text overlay data is to be overlaid;
- determining which portions of the text overlay data are to be overlaid on which areas of the displays;
- transmitting the portions to selected ones of the rendering nodes; and
- at each rendering node, rendering scenes as including the transmitted portions of the text overlay data.
22. An autostereoscopic display system, comprising:
- a lenticular lens display screen having a front surface and a rear surface, the lenticular lens display screen projecting a plurality of views of a scene from the front surface of the display screen;
- a plurality of video projectors disposed to the rear of the lenticular lens display screen, each of the video projectors focused on a convergence point on the rear surface of the lenticular lens display screen; and
- a plurality of imaging computers driving the video projectors, memories of each of the imaging computers storing a scene to be displayed on the lenticular lens display screen, each imaging computer rendering the scene from one or more viewpoints preselectable to be different from other ones of the viewpoints, each projector projecting an image from a respective one of the viewpoints.
23. An autostereoscopic display system, comprising:
- a lenticular lens display screen having a front surface and a rear surface, multiple viewpoints of a scene visible to a viewer in front of the front surface of the screen;
- a plurality of video projectors disposed to the rear of the screen, each of the video projectors focused on a convergence point on the rear surface of the screen;
- a plurality of client imaging computers driving the video projectors, memories of each of the client imaging computers storing the scene to be displayed, object imaging data used to render animated objects within the scene, and a plurality of the viewpoints from which the scene is to be rendered; and
- a rendering server having a memory for storing animation sequencing instructions, the rendering server coupled to each of the imaging computers for communicating the sequencing instructions to the client imaging computers at a time after the storing, by the client imaging computers, of the scene and the object imaging data, each of the client imaging computers rendering the scene from one or more of the stored viewpoints responsive to the sequencing instructions and causing the projectors to project respective images from respective ones of the viewpoints.
24. An autostereoscopic display system, comprising:
- at least one flat panel display;
- a lenticular lens positioned on the flat panel display;
- a plurality of video channels being received by the flat panel display, a plurality of viewpoints of an imaged scene being transmitted by respective ones of the video channels, the flat panel display and lenticular lens permitting a viewer to view different ones of the viewpoints from different positions relative to the lenticular lens; and
- for each viewpoint, a virtual camera coupled to the flat panel display for transmitting thereto a respective channel of video data, the virtual camera rendering the imaged scene from a respective viewpoint.
25. The autostereoscopic display system of claim 24, wherein the virtual cameras are logical partitions of one or more imaging computers.
26. The autostereoscopic display system of claim 24, further including other flat panel displays like said flat panel display, the flat panel displays together forming a video wall.
27. An autostereoscopic display system, comprising:
- at least first and second autostereoscopic display devices having characteristics which are different from each other;
- a plurality of virtual cameras coupled to each of the autostereoscopic display devices, each virtual camera rendering a scene from a preselected viewpoint; and
- for each autostereoscopic display device, images of a scene appearing thereon being viewable from different ones of the viewpoints depending on the position of a viewer relative to the display device.
28. The system of claim 27, wherein the virtual cameras are logical partitions of one or more imaging computers.
29. The system of claim 27, in which at least one of the autostereoscopic display devices is a flat panel display on which has been positioned a lenticular lens array, at least one other of the autostereoscopic devices not including a flat panel display.
30. An autostereoscopic display system, comprising:
- an autostereoscopic display device displaying at least two different viewpoints of an imaged scene; and
- at least two virtual cameras coupled to the display device for supplying respective channels of video data corresponding to said at least two different viewpoints, said at least two virtual cameras being comprised of a single central processor unit and a single graphics processor card coupled to the central processor unit with at least two video output ports, each port outputting a channel of video data to the display device.
Type: Application
Filed: Sep 24, 2004
Publication Date: Mar 24, 2005
Inventors: David Mark (San Francisco, CA), Brett Weichers (Cedar Falls, IA)
Application Number: 10/955,339