Scene splitting for perspective presentations

- ModViz, Inc.

A controlling device 110 that splits a 3D scene 131 into 3D sub-scenes, each including a sub-volume 133 of the 3D scene 131, and distributes the 3D sub-scenes to multiple rendering devices 120. Each rendering device 120 independently determines a 2D sub-image 141 responsive to its 3D sub-scene and a rendering viewpoint 132. The 2D sub-images 141 are composited using a back-to-front partial ordering with respect to the rendering viewpoint 132.

Description
BACKGROUND OF THE INVENTION

1. Field of the Invention

The invention relates to scene splitting for perspective presentations.

2. Related Art

In some applications of computing devices, it is desirable to present a visualization of a scene to a user. Some of these applications include the following:

    • CAD (computer aided design);
    • computer aided search, such as used in the oil and gas industry;
    • computer simulations, such as battlefield simulations and flight simulation; and
    • video games, including multiplayer video games.

One problem in the known art is that computing the scene to be presented requires relatively large resources, including both computing power and memory.

Known solutions include breaking up computing the scene into parts, and assigning each of those parts to a separate graphics processor. These separate graphics processors each operate under control of a single controlling processor, which determines how to break up computing the scene into parts. The controlling processor sends each separate graphics processor a set of commands telling the receiver what to render. Each graphics processor generates data showing how to render its part of the scene. This data might be sent back to the controlling processor for presentation, or might be sent on to a presenting device, such as a graphics compositor, a monitor, or a set of monitors.

While this method generally achieves the goal of providing increased resources to render the scene, it still has several drawbacks. One drawback is that it might still take substantial resources to compose a single image for the presentation device from the distinct sub-images generated by multiple graphics processors. For example, if one of the graphics processors is assigned objects to render that are “behind” others, as seen from a selected viewpoint, composing the 2D (2-dimensional) image for display might involve substantial resources to account for effects such as occlusion and partial occlusion, transparency, and reflection.

Some known systems distribute the 3D (3-dimensional) scene for rendering in a relatively simple manner, such as slices of the 3D scene to be rendered, and include specialized hardware as a graphics compositor. These systems include the HP “Sepia” product and the Orad “DVG” product. However, specialized hardware can be quite expensive, and is in general not very suitable for flexible configuration of the system.

Other known systems also distribute the 3D scene for rendering in a relatively simple manner, and include software to perform the function of a graphics compositor (either in the controlling device itself, or in a separate processor). However, software solutions are subject to the drawback that they are much slower when the data they work with does not fit into rapidly accessible memory, such as main memory (as opposed to disk drive storage).

Moreover, both hardware and software “flat” distribution solutions are subject to the drawback that they use substantial network bandwidth, and might involve limitations due to use of that resource.

Other known systems also distribute the 3D scene for rendering in a more complex manner, and use a tree or other multi-tiered structure for the rendering processors to deliver their results to a graphics compositor (again, either in the controlling device itself, or in a separate processor). However, multi-tier solutions are subject to the drawback that they involve substantially greater latency between the time a rendering processor generates its portion of the 3D scene and the time those portions can be combined into a 2D image capable of being presented.

Accordingly, it would be advantageous to provide methods and systems in which 3D scenes might be rendered and composed into 2D images without being subject to the drawbacks of the known art.

SUMMARY OF THE INVENTION

The invention provides techniques, embodied in methods and systems, including scene splitting for perspective presentations.

A system embodying the invention includes a controlling device and a set of rendering devices, with the effect that the controlling device can distribute a set of objects to be rendered to the rendering devices. The controlling device splits up the 3D scene to be rendered into a set of 3D sub-scenes, each of which is relatively smaller than the original 3D scene. Each rendering device determines a 2D image in response to the 3D sub-scene assigned to it, and in response to a rendering viewpoint. In one embodiment, elements of a 3D scene are included within an enclosing volume, such as a cube, and a set of 3D sub-scenes are each included within an enclosing sub-volume, such as a smaller cube (i.e., a “cubelet”) proportional to the entire scene's larger enclosing cube. Each rendering device determines a 3D rendering of the elements in its sub-volume, as seen from that rendering viewpoint. Each rendering device also determines a 2D image of the 3D rendering, as seen from that rendering viewpoint.
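
For concreteness, the following minimal sketch (in Python; the function name and the tuple convention for boxes are illustrative assumptions, not part of the disclosure) shows one way to split an enclosing volume into n×n×n proportional cubelets:

    def split_into_cubelets(world, n):
        """Split an enclosing volume into n*n*n proportional sub-volumes.

        world: axis-aligned box (xmin, xmax, ymin, ymax, zmin, zmax).
        Returns a list of n**3 boxes in the same format.
        """
        xmin, xmax, ymin, ymax, zmin, zmax = world
        dx, dy, dz = (xmax - xmin) / n, (ymax - ymin) / n, (zmax - zmin) / n
        return [(xmin + i * dx, xmin + (i + 1) * dx,
                 ymin + j * dy, ymin + (j + 1) * dy,
                 zmin + k * dz, zmin + (k + 1) * dz)
                for i in range(n) for j in range(n) for k in range(n)]

For example, split_into_cubelets((-1, 1, -1, 1, -1, 1), 5) yields the 125 cubelets of a 5×5×5 arrangement such as the one mentioned in the definitions below.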

Each rendering device sends the 2D image it determines to a compositor, which combines that 2D image with the 2D images from rendering devices in “front” of it with respect to the rendering viewpoint. In various embodiments, the 2D images might be sent for composition in one of several ways, such as (a) directly to the controlling device, (b) through a multi-tier hierarchy, such as one determined by the controlling device in response to the rendering viewpoint, or (c) through a switch coupling rendering devices in response to the rendering viewpoint. A result of compositing the 2D images should be suitable for sending to a presentation device.

After reading this application, those skilled in the art would recognize that the invention provides an enabling technology by which substantial advance is made in the art of rendering scenes.

For example, the invention might be used to provide one or more of, or some combination or extension of, any of the following.

    • rendering 3D scenes in substantially real-time, such as for example as might be used in battlefield simulations, flight simulations, other testing or training devices, and the like;
    • rendering 3D scenes in various detail and from various selected perspectives, such as for example as might be used in computer-aided design, in examination of computer simulations of natural phenomena such as weather simulations or wind-tunnel simulations, and the like; and
    • rendering 3D scenes to present information, such as for example as might be used in computer-aided presentation or search of databases, user interfaces for computer-aided control of real-time systems or other systems, and the like.

After reading this application, these and other and further uses of the invention would be clear to those skilled in the art.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 shows a block diagram of a system including a controlling device and a set of rendering devices.

FIG. 2 shows a process flow diagram of a method of using a system including a controlling device and a set of rendering devices.

INCORPORATED DISCLOSURES

This application incorporates by reference and claims priority of at least the following documents.

    • Application Ser. No. 60/676,240, filed Apr. 29, 2005, in the name of inventor Thomas Ruge, titled “Scene Splitting for Perspective Presentations”, attorney docket number 233.1008.01
    • Application Ser. No. 60/676,254, filed Apr. 29, 2005, in the name of inventor Thomas Ruge, titled “Alpha Blending”, attorney docket number 233.1012.01
    • Application Ser. No. 60/676,241, filed Apr. 29, 2005, in the name of inventor Thomas Ruge, titled “Compression of Streams of Rendering Commands”, attorney docket number 233.1007.01

These documents are hereby incorporated by reference as if fully set forth herein, and are sometimes referred to herein as the “incorporated disclosures”. Inventions described herein can be used in combination or conjunction with technology described in the incorporated disclosures.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

In the description herein, a preferred embodiment of the invention is described, including preferred process steps and data structures. Those skilled in the art would realize, after perusal of this application, that embodiments of the invention might be implemented using a variety of other techniques not specifically described, without undue experimentation or further invention, and that such other techniques would be within the scope and spirit of the invention.

Definitions

The general meaning of each of these following terms is intended to be illustrative and in no way limiting.

    • The phrases (1) “controlling device” and (2) “rendering device”, and the like, refer respectively to devices for (1) controlling the allocation of rendering commands, and (2) actually rendering 3D scenes and 2D images of those 3D scenes from a selected perspective, as further described below.
    • In one embodiment, there is a single controlling device and as many rendering devices as necessary so that the information for rendering each cubelet fits into the graphics memory of a single rendering device, such as for example a 5×5×5 array of rendering devices. However, in the context of the invention, there is no particular requirement of having only a single controlling device or of having a specified number of rendering devices.
    • For example, in the oil and gas industry, a database of 50 gigabytes might be allocated into sub-portions in a 10×10×10 array of rendering devices, with the effect of presenting each rendering device with only about 50 megabytes of information to process. In one embodiment, each rendering device might have about 128 megabytes of graphics memory, with the effects that the rendering information would fit into graphics memory, and that each rendering device might operate relatively quickly.
    • The phrases (1) “compositor” and (2) “presentation device”, and the like, refer respectively to devices (1) for composing 2D images in response to a 3D scene, or for composing a single 2D image from multiple 2D images, and (2) for making a presentation to a user in response to one or more 2D images, as further described below.
    • In one embodiment, the presentation device might include a device for breaking information about a 2D image into a set of information for presentation on multiple display panels, monitors, or projection devices, such as for example a “power wall” of 5×5 display panels. However, in the context of the invention, there is no particular requirement of having any particular number of presentation devices. In alternative embodiments, the 2D image might be transmitted to another computing device for additional processing before, or instead of, actually being presented to a user.
    • The phrases “model”, “3D scene”, “3D sub-scene”, “rendering viewpoint”, “visualization of a scene”, “front”, “2D image”, and the like, all refer to facts and information about objects in the 3D scene and the 2D image or images used to represent those objects, as further described below.
    • In one embodiment, a “model” includes information about what objects are to be represented in the 3D scene, as distinguished from a “3D scene”, which includes information about where objects are placed in an encompassing volume, what they look like, and what their effects are on viewing other such objects (i.e., whether they are opaque, transparent, translucent, reflective, and the like).
    • In one embodiment, a “3D sub-scene” includes information similar to a 3D scene, but only for a selected portion of that 3D scene, such as for example a set of cubelets within a cube encompassing that 3D scene. However, in the context of the invention, there is no particular requirement of using a set of cubelets for allocating rendering commands. In addition to, or instead of, cubelets, the system might allocate rendering commands in response to other volumes, whether smoothly space-filling or not, such as for example tetrahedra or spheres.
    • In one embodiment, the “rendering viewpoint” might be static, or might be dynamic, such as in response to (1) controls by a user, (2) a set of sensors, such as motion sensors focused on the user, (3) a time-varying parameter, such as in a roller-coaster ride, and the like. The “front” of a 3D scene is that 2D image presented to a viewer at the rendering viewpoint.
    • In one embodiment, a “2D image” includes a set of information for 2D presentation, such as for example pixel values for color (e.g., red, green, and blue) or a set of presentable polygons or vectors. In the context of the invention, there is no particular requirement of any one selected representation of a 2D image, nor is there any particular requirement of actually presenting the 2D image to a user.
    • The phrases (1) “network bandwidth”, (2) “multi-tier hierarchy”, (3) “switch”, and the like, refer respectively to (1) a rate at which information can be sent back and forth between the controlling device and the rendering devices, or among the rendering devices where appropriate, (2) an arrangement in which the controlling device is coupled to the rendering devices using one or more intermediate devices, such as for example partial compositors, and (3) an arrangement in which the controlling device and the rendering devices are coupled, as further described below.
    • The phrases (1) “scene splitting”, (2) “encompassing volume”, (3) “encompassing sub-volume”, (4) “cubelet”, and the like, refer to concepts relating to allocation of rendering commands by the controlling device to the rendering devices, as further described below.

The scope and spirit of the invention is not limited to any of these definitions, or to specific examples mentioned therein, but is intended to include the most general concepts embodied by these and other terms.

System Elements

FIG. 1 shows a block diagram of a system including a controlling device and a set of rendering devices.

A system 100 includes elements as shown in FIG. 1, plus possibly other elements as described in the incorporated disclosure. These elements include at least a controlling device 110, a set of rendering devices 120, a (conceptual—not shown but understood by those skilled in the art) encompassing volume 130, and a (conceptual—not shown but understood by those skilled in the art) 2D image 140 capable of presentation.

The controlling device 110 includes elements as shown in FIG. 1, plus possibly other elements as described in the incorporated disclosure. These elements include at least a model or database 111, a communication network 112, and a set of rendering commands 113.

The rendering devices 120 each include elements as shown in FIG. 1, plus possibly other elements as described in the incorporated disclosure. These elements include, for each rendering device 120, at least an input port 121, a processor and memory 122, and an output port 123.

As described herein, the encompassing volume 130 includes elements as shown in FIG. 1, plus possibly other elements as described in the incorporated disclosure. These elements include at least the following:

    • a 3D scene 131 to be rendered (as represented by information available to the controlling device 110 or the rendering devices 120);
    • a rendering viewpoint 132 (as represented by information available to the controlling device 110 or the rendering devices 120); and
    • a set of sub-volumes 133 (as determined by the controlling device 110).

As described herein, the 2D image 140 includes an image responsive to the 3D scene 131 and the rendering viewpoint 132.

After reading this application, it would be clear to those skilled in the art that the 2D image 140 is responsive to at least the following:

    • a 2D sub-image 141 presented by each of the sub-volumes 133, each responsive to the rendering viewpoint 132;
    • a back-to-front partial ordering 142 of the sub-volumes 133, also responsive to the rendering viewpoint 132; and
    • a composition of each of those 2D sub-images 141, responsive to the back-to-front partial ordering 142.

As described herein, each rendering device 120, allocated rendering commands for its sub-volume 133, need only compute the 2D sub-image 141 for its own sub-volume 133, responsive to the rendering viewpoint 132. This has the effect of generating a 2D sub-image 141 for each such sub-volume 133.

After reading this application, it would be clear to those skilled in the art that each of the 2D sub-images 141 need only encompass those three faces (for a cubelet) of the sub-volume 133 viewable from the rendering viewpoint 132. That 2D sub-image 141 has a size proportional to O(1/n²), where n is the number of rendering devices 120 on a side of a cubic arrangement thereof.
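
The disclosure does not prescribe how the back-to-front partial ordering 142 is computed. As one illustrative possibility (a sketch only; boxes follow the same (xmin, xmax, ymin, ymax, zmin, zmax) tuple convention as in the sketch above), sub-volumes can be sorted by the distance of their centers from the rendering viewpoint 132, farthest first:

    def back_to_front(sub_volumes, viewpoint):
        """Order sub-volumes farthest-to-nearest from the viewpoint.

        For equal-sized, non-overlapping cubelets in a regular grid this
        distance sort is consistent with a back-to-front partial ordering;
        an exact order can also be derived axis by axis, slab by slab.
        """
        def dist2(v):
            cx = (v[0] + v[1]) / 2
            cy = (v[2] + v[3]) / 2
            cz = (v[4] + v[5]) / 2
            return ((cx - viewpoint[0]) ** 2 +
                    (cy - viewpoint[1]) ** 2 +
                    (cz - viewpoint[2]) ** 2)
        return sorted(sub_volumes, key=dist2, reverse=True)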

The system 100 also optionally includes a compositing device 150. The compositing device 150 includes elements as shown in FIG. 1, plus possibly other elements as described in the incorporated disclosure. These elements include at least an input port 151, a compositing element 152, and an output port 153.

The input port 151 is coupled to the 2D sub-images 141, and to the back-to-front partial ordering 142. The compositing element 152 is coupled to the input port 151, and generates the 2D image 140 (as represented by data in memory, storage, or a signal). The output port 153 is coupled to the 2D image 140.

The system 100 also optionally includes a presentation device 160. The presentation device 160 is coupled to the 2D image 140 (as represented by data in memory, storage, or a signal), and is capable of presenting that 2D image 140 to a user 170.

Although the user 170 is shown herein as a person, in the context of the invention, there is no particular requirement that the user 170 is so limited. The user 170 might include a group of people, a computer imaging or motion detection program, an image compression program such as JPEG or MPEG, a system including a broadcast or other distribution system for images, an analysis program for 2D image 140, or even an artificial intelligence program capable of reviewing the 2D image 140.

Method of Operation

FIG. 2 shows a process flow diagram of a method of using a system including a controlling device and a set of rendering devices.

Although described serially, the flow points and method steps of the method 200 can be performed by separate elements in conjunction or in parallel, whether asynchronously or synchronously, in a pipelined manner, or otherwise. In the context of the invention, there is no particular requirement that the method must be performed in the same order in which this description lists flow points or method steps, except where explicitly so stated.

The method 200 includes flow points and process steps as shown in FIG. 2, plus possibly other flow points and process steps as described in the incorporated disclosure. These flow points and process steps include at least the following:

    • At a flow point 210, the method 200 is ready to determine a 2D image 140 in response to a model, the model including a 3D scene 131 and a rendering viewpoint 132.
    • At an (optional) step 211, further described below, the controlling device 110 determines a set of sub-volumes 133, and allocates them to the rendering devices 120.
    • In the context of the invention, there is no particular requirement that the controlling device 110 allocates sub-volumes 133 on a one-for-one basis with rendering devices 120.
    • At a step 212, the controlling device 110 allocates portions of the 3D scene 131 to the rendering devices 120, and sends each rendering device 120 information identifying the rendering commands 113 for its allocated portion.
    • At a step 213, the rendering devices 120 each render their allocated portions of the 3D scene 131 independently with respect to the rendering viewpoint 132, with the effect of each independently generating a 2D sub-image 141.
    • At a step 214, further described below, the rendering devices 120 each couple their independently generated 2D sub-images 141 to the compositing device 150, responsive to the back-to-front partial ordering 142.
    • At a step 215, further described below, the compositing device 150 combines the 2D sub-images 141 responsive to the rendering viewpoint 132, generating the complete 2D image 140 (a sketch of one such combination follows this list).
    • At an (optional) step 216, the presentation device 160 presents the complete 2D image 140 to the user 170. In one embodiment, the presentation device 160 might include more than one power wall, such as for example a cube of 6 power walls to give the illusion of being suspended within the 3D scene. In such embodiments, the method 200 would determine the 2D image 140 for each such power wall with respect to a distinct rendering viewpoint 132.
    • At a flow point 220, the method 200 has finished determining a 2D image 140 in response to a model, the model including a 3D scene 131 and a rendering viewpoint 132.
    • In one embodiment, the method 200 is repeated rapidly enough that the user 170 sees the 2D image 140 as a motion picture, with the effect that the user 170 sees the 3D scene 131 itself as a virtual reality motion picture. In such embodiments, the model might be responsive to user inputs or other inputs, with the effect that the 3D scene 131 and the rendering viewpoint 132 might change rapidly with time, and with the effect that the user 170 would perceive a view very much like actually interacting with a virtual reality as defined by the model.
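
As a minimal sketch of the combination performed in steps 214 and 215 (the disclosure does not mandate a particular blend; premultiplied-alpha “over” compositing, in the spirit of the incorporated “Alpha Blending” disclosure, is assumed here):

    def composite_over(back, front):
        """Blend one premultiplied-RGBA pixel over another, back-to-front."""
        br, bg, bb, ba = back
        fr, fg, fb, fa = front
        return (fr + (1 - fa) * br,
                fg + (1 - fa) * bg,
                fb + (1 - fa) * bb,
                fa + (1 - fa) * ba)

    def composite_image(sub_images):
        """Fold 'over' across same-sized per-pixel RGBA sub-images.

        sub_images must already be in back-to-front partial order, so each
        later (nearer) sub-image is blended over the accumulated result.
        """
        result = list(sub_images[0])
        for img in sub_images[1:]:
            result = [composite_over(b, f) for b, f in zip(result, img)]
        return result
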
Software Package Overview

The system 100 uses “sub-volumes” to split the 3D scene. These sub-volumes are initially defined by a configuration file (see the sample configuration file below). A sub-volume as implemented by the system 100 is represented by a cube (defined in 3 dimensions by xmin, xmax, ymin, ymax, zmin, zmax). The content of each sub-volume is rendered by an individual rendering device 120. The 3D scene is split by assigning each object to at least one sub-volume. The criterion that determines where to assign an individual 3D object is the spatial overlap between the spatial representation of that object (its “bounding box”) and each sub-volume. A 3D object is copied onto all rendering devices 120 assigned to sub-volumes that overlap or enclose its bounding box. This algorithm ensures that every rendering device 120 has a copy of at least all the 3D objects it has to render.
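
A minimal sketch of this assignment rule (Python; the helper names and data shapes are assumptions for illustration): each object's bounding box is tested for overlap against every sub-volume, and the object is copied to each rendering device whose sub-volume it overlaps.

    def overlaps(box_a, box_b):
        # interval-overlap test per axis; boxes are
        # (xmin, xmax, ymin, ymax, zmin, zmax); touching counts as overlap
        return all(box_a[2 * a] <= box_b[2 * a + 1] and
                   box_b[2 * a] <= box_a[2 * a + 1] for a in range(3))

    def assign_objects(objects, sub_volumes):
        """Copy each object to every sub-volume its bounding box overlaps.

        objects: iterable of (object_id, bounding_box) pairs.
        Returns {sub_volume_index: [object_id, ...]}, so every rendering
        device receives at least all the 3D objects it may have to render.
        """
        assignment = {i: [] for i in range(len(sub_volumes))}
        for obj_id, bbox in objects:
            for i, sub_volume in enumerate(sub_volumes):
                if overlaps(bbox, sub_volume):
                    assignment[i].append(obj_id)
        return assignment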

Sample Configuration File

    #
    # Copyright (C) ModViz, Inc. 2004
    # All Rights Reserved
    #
    # This sample VGP configuration file uses two rendering nodes in alpha
    # compositing mode. Render1 renders a sub-volume encompassing the WORLD_VOLUME
    # minX to 0 and the entire WORLD_VOLUME in Y and Z. Render2 renders a
    # sub-volume encompassing 0 to WORLD_VOLUME maxX and the entire WORLD_VOLUME in
    # Y and Z. This splits the WORLD_VOLUME in half down the X=0 plane and gives
    # each render node half of the WORLD_VOLUME. These two render nodes send their
    # rendered buffers to the AppNode which composites and displays the results in
    # the original application context.

    File:
    # version number of config file
    VERSION=0.9

    # Application level configuration
    # CONTEXT_STRATEGY=which OGL context to use (LAST=last one created by the
    # application)
    # WORLD_VOLUME=bounding box of all 3D vertices in the application {minX,maxX,
    # minY,maxY, minZ,maxZ}
    AppNode:
    CONTEXT_STRATEGY=LAST

    # Rendering node configuration
    # NAME=unique name for this node
    # IP_ADDRESS=host ip address and port this node runs on (port is normally the
    # xinetd configured port)
    # SUB_VOLUME=normalized bounding box of the 3D vertices that should be sent to
    # this node {minX,maxX, minY,maxY, minZ,maxZ}
    RenderNode:
    NAME=Render1
    IP_ADDRESS=127.0.0.1:24900
    SUB_VOLUME={-1,0, -1,1, -1,1}

    RenderNode:
    NAME=Render2
    IP_ADDRESS=127.0.0.1:24902
    SUB_VOLUME={0,1, -1,1, -1,1}
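
The grammar of this file is not specified beyond the sample above. A minimal sketch of a reader for it, under the assumptions that '#' begins a comment line, a line ending in ':' opens a section (File, AppNode, RenderNode), and settings are KEY=VALUE pairs:

    def parse_vgp_config(text):
        """Parse the sample VGP configuration format shown above.

        Returns a list of (section_name, {key: value}) pairs, preserving
        repeated RenderNode sections in order. The real format may differ;
        this follows only what the sample file exhibits.
        """
        sections = []
        for raw in text.splitlines():
            line = raw.strip()
            if not line or line.startswith('#'):
                continue
            if line.endswith(':'):
                sections.append((line[:-1], {}))
            elif '=' in line and sections:
                key, value = line.split('=', 1)
                sections[-1][1][key] = value
        return sections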

The parameters of a sub-volume can change dynamically if the controlling device 110 determines a more optimal sub-volume configuration, where “more optimal” means a better load balance across all rendering devices 120. At the optimum, all rendering devices 120 preferably need the same time to render their individual parts of the 3D scene.
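
The disclosure states the optimization goal (equal render times) but no specific algorithm. As a hedged sketch of one simple feedback scheme for the two-node sample above (the function and its rate parameter are assumptions), a split plane can be nudged into the slower device's half after each frame:

    def rebalance_split(xsplit, t_left, t_right, rate=0.1):
        """Nudge an X split plane toward balance of per-frame render times.

        xsplit: current plane position within the WORLD_VOLUME X range.
        t_left, t_right: last-frame render times of the two devices.
        If the left device was slower, the plane moves left, shrinking
        the left sub-volume so the two render times converge.
        """
        total = max(t_left + t_right, 1e-9)  # guard against division by zero
        imbalance = (t_left - t_right) / total  # in [-1, 1]
        return xsplit - rate * imbalance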

Changing the sub-volumes can be expensive (i.e., it can take a long time), because 3D objects have to be transferred from one rendering device 120 to another. To avoid this costly operation, the system 100 optionally gives a copy of all 3D objects to all rendering devices 120. To prevent an overload of the memory 122 of a rendering device 120, the system 100 can write 3D objects to cheaper, slower memory with higher capacity (e.g., a hard disk) associated with each rendering device 120.

Alternative Embodiments

Although preferred embodiments are disclosed herein, many variations are possible which remain within the concept, scope, and spirit of the invention. These variations would become clear to those skilled in the art after perusal of this application.

After reading this application, those skilled in the art will recognize that these alternative embodiments and variations are illustrative and are intended to be in no way limiting. After reading this application, those skilled in the art would recognize that the techniques described herein provide an enabling technology, with the effect that advantageous features can be provided that heretofore were substantially infeasible.

Claims

1. A method, including steps of

allocating information representing a three-dimensional scene among a set of sub-scenes;
generating a two-dimensional sub-image for each sub-scene, responsive to a rendering viewpoint;
combining the two-dimensional sub-images, responsive to the rendering viewpoint.

2. A method as in claim 1, including steps of presenting a result of the steps of combining.

3. A method as in claim 1, including steps of sending information representing each sub-scene to a substantially independent computing device.

4. A method as in claim 1, wherein the steps of generating include rendering the sub-scenes substantially concurrently and substantially independently.

5. A method as in claim 1, wherein the sub-images include substantially compact and continuous planar regions.

6. A method as in claim 1, wherein the sub-scenes include substantially compact and continuous spatial regions.

7. A method as in claim 1, wherein the sub-scenes smoothly fill substantially the entire scene.

8. A method as in claim 1, wherein the steps of allocating include steps of

determining a set of rendering commands associated with the scene; and
optimizing the set of sub-scenes with respect to at least one selected parameter.

9. A method as in claim 8, wherein the parameter includes at least one of: a number of rendering commands, an amount of bandwidth for sending rendering commands, an amount of memory for maintaining rendering commands, an amount of time for performing rendering commands.

10. A method as in claim 8, wherein the steps of optimizing include positioning planar borders between sub-scenes.

11. A method as in claim 8, wherein

the sub-scenes include rectilinear sub-objects of a rectilinear object encompassing the scene; and
the steps of optimizing include positioning planar borders between sets of sub-scenes, with the effect of allocating selected spatial regions of the scene to selected sub-scenes.

12. A method as in claim 1, wherein the steps of combining include

determining a partial ordering of the sub-images responsive to the rendering viewpoint; and
combining any overlapping sub-images in response to the partial ordering.

13. A method as in claim 12, wherein the partial ordering is responsive to a back-to-front ordering of the sub-images responsive to the rendering viewpoint.

14. A method as in claim 12, wherein the steps of combining overlapping sub-images include steps of

coupling the sub-images in a hierarchy responsive to the partial ordering; and
combining overlapping sub-images substantially concurrently and substantially independently.

15. A method as in claim 12, wherein the steps of combining overlapping sub-images include steps of

coupling the sub-images using a switch responsive to the partial ordering; and
combining overlapping sub-images substantially concurrently and substantially independently.

16. A method as in claim 1, wherein the steps of generating include steps of, for at least one selected sub-scene

allocating information representing that sub-scene among a set of sub-sub-scenes;
generating a sub-sub-image for each sub-sub-scene, responsive to the rendering viewpoint; and
combining the sub-sub-images, responsive to the rendering viewpoint.

17. A method as in claim 16, wherein the at least one sub-scene is selected responsive to at least one of: a desired fineness of detail, a proximity of the rendering viewpoint to the sub-scene, a rate of change of the sub-scene, a relative range of angles within the sub-scene with respect to the rendering viewpoint.

18. A method as in claim 16, wherein the at least one selected sub-scene is selected responsive to at least one of: a number of rendering commands, an amount of bandwidth for sending rendering commands, an amount of memory for maintaining rendering commands, an amount of time for performing rendering commands.

19. Apparatus including

a set of computing devices, at least one of which takes on the role of a controlling device, and at least one of which takes on the role of a rendering device;
a communication link between the controlling device and one or more rendering devices;
information, at the controlling device, representing a set of objects in a three-dimensional scene and a rendering viewpoint with respect to that scene; and
information, at one such rendering device, representing a two-dimensional sub-image associated with only a portion of that scene;
wherein at least one of those devices takes on the role of a compositing device.

20. Apparatus as in claim 19, wherein that portion of the scene includes a substantially compact and continuous spatial sub-region of the scene.

21. Apparatus as in claim 19, wherein that sub-image includes a substantially compact and continuous planar region.

22. Apparatus as in claim 19, wherein the compositing device includes the controlling device.

23. Apparatus as in claim 19, including information, at the compositing device, representing a back-to-front partial ordering of one or more such sub-images, with respect to the rendering viewpoint.

24. Apparatus as in claim 22, wherein

the compositing device includes more than one computing device taking on the role of a portion of the compositing device;
the portions of the compositing device include a hierarchy responsive to the partial ordering.

25. Apparatus as in claim 22, wherein

the compositing device includes more than one computing device taking on the role of a portion of the compositing device;
the portions of the compositing device include a switch responsive to the partial ordering.
Patent History
Publication number: 20070070067
Type: Application
Filed: Apr 26, 2006
Publication Date: Mar 29, 2007
Applicant: ModViz, Inc. (Oakland, CA)
Inventor: Thomas Ruge (Oakland, CA)
Application Number: 11/412,410
Classifications
Current U.S. Class: 345/421.000
International Classification: G06T 15/40 (20060101);