Client/server-based animation software, systems and methods

- Accelerated Pictures, LLC

Various embodiments of the invention provide novel software, systems and methods for animation and/or filmmaking. In a set of embodiments, for example, a client-server system provides the ability to control various aspects of a live-action and/or an animated scene, including cameras and/or light sources (either real and/or virtual), animated characters, and other objects. This can include, merely by way of example, moving cameras, lights and/or the like, as well as rendering animated objects (e.g., based on movements of the objects themselves and/or based on movements of cameras, lights, etc.).

Description
CROSS-REFERENCE TO RELATED APPLICATIONS

The present disclosure may be related to the following commonly assigned applications/patents:

This application claims priority from co-pending U.S. Provisional Patent Application No. 60/623,414 filed Oct. 28, 2004 by Alvarez et al. and entitled “Client/Server-Based Animation Software.”

This application also claims priority from co-pending U.S. Provisional Patent Application No. 60/623,415 filed Oct. 28, 2004 by Alvarez et al. and entitled “Control Having Interchangeable Coordinate Control Systems.”

This application is also related to co-pending U.S. patent application Ser. No. ______, filed on a date even herewith by Alvarez et al. and entitled “Camera and Animation Controller, Systems and Methods.”

The respective disclosures of these applications/patents are incorporated herein by reference in their entirety for all purposes.

COPYRIGHT STATEMENT

A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.

FIELD OF THE INVENTION

The present invention relates to the field of animation and filmmaking in general and, in particular, to software, systems and methods for creating and/or editing animations and/or films, including any type of film-based and/or digital still and/or video image production.

BACKGROUND OF THE INVENTION

Animated films have long been favorites of both children and adults. More recently, advances in computer animation have facilitated the process of making animated films (as well as the processes of storyboarding and adding computer effects to live-action films). Generally, animation has been produced on a PC equipped with a display, keyboard, mouse, animation software, and rendering software. Each PC is a standalone unit that contains the animation data to be worked on and runs the animation software that applies to that data the contributions and movements imparted by the programmer or artist at the PC.

In some cases, especially in large organizations, networked computers have been used in the animation process. Referring to FIG. 1, a typical network might comprise a central server system S with version tracking software 100, which stores the animation data files in bulk storage 101. When a user wishes to perform an animation task, she accesses the server, checks out the relevant data files, and alters the animation data files with animation software 111 and actuators 115 (here represented as a keyboard and mouse) resident in the PC. In order to check her work, the artist replays the altered animation data locally through rendering software 112 resident on the PC, viewing the animation data movements at the PC's local display 114. Finally, at the end of a working session, the user often will check in the altered data files as a new version, added to bulk storage 101 and tracked by version tracking software 100.

Other systems perform animation and rendering at a server (and/or a server farm). These systems generally require very powerful servers, because the server(s) have to render (i.e., generate an image or sequence of images for) animations produced by many client workstations. Moreover, users may suffer long delays while waiting for a scene to be rendered by the server(s).

Such systems present significant limitations. For instance, the traditional server-based animation systems fail to take maximum advantage of available computing resources. While most artists have relatively high-performance workstations, systems that render animations at a server often fail to take full advantage of the processing available on the workstation. Conversely, systems that rely on the workstation to perform the animation and rendering fail to take advantage of a principal strength of server-class computers: high-performance file input/output. While rendering generally is a very processor-intensive task, animating generally is less processor-intensive but involves accessing a large amount of data, a task at which servers generally excel.

Moreover, client-based animation systems make the management of data (including version control, security of intellectual property, etc.) quite cumbersome. Merely by way of example, if two or more programmers or artists are working on otherwise substantially identical portions of an animation, incompatible variations can be introduced. These incompatible variations are a direct result of the local temporary storage of the modified data. Because a final animation project typically represents many man-years of effort, the presence of incompatible variations can present severe complications.

Additionally, modern animation includes the use of expensive tools and processes to generate models of the three-dimensional shapes describing objects or characters used in the animation. Local, unsupervised, and inconsistent modification of the models in checked-out animation data can occur. Further, if a model is modified during the animation process, previously recorded animation data must, in the usual case, be completely reworked.

Furthermore, both the animation software and the work product of the animators are subject to a high risk of piracy. By storing the suite of animation software at the local PC 110 and/or allowing a user to obtain all relevant files related to an animation, the producer of a movie exposes these assets to unauthorized copying.

Hence, existing systems, which generally concentrate the animation and rendering tasks together on either a server or a client, suffer significant drawbacks.

Definition of Terms

Certain terms, as used in this disclosure, have the following defined meanings:

Model. A three-dimensional shape, usually described in terms of coordinates and mathematical data, describing the shape of any character or object. Examples of characters include actors, animals, or other beings whose animation can tell or portray the story. The model is typically provided in a neutral pose (known in the art as a “da Vinci pose”), in which the model is shown standing with limbs spread apart and head looking forward. It is understood in the art that, in many situations, the generation of the model can be extraordinarily expensive. In some cases, the model is generated, scanned, or otherwise digitized with recorded spatial coordinates of numerous points on its surface. A virtual representation of the model can occur when the data is reconstructed. Furthermore, the model may include connectivity data, such that the collection of points defining the model can be treated as the vertices of polygonal approximations of the surface shape of the model. The model may include various mathematical smoothing and/or interpolation algorithms. Such models can include collections of spatial points ranging from hundreds of points to hundreds of thousands or more points.

Render. To make a model viewable as an image, such as by applying textures to a model and/or imaging the model using a real or virtual camera or by photographing a real object.

Rig. In general, the term “rig” is used to refer to a deformation engine that specifies how movement of the model should translate into animation of a character based on the model. This is the software and data used to deform or transform the “neutral pose” of the model into a specific “active pose” variation of the model. Taking the example of the human figure, the rig would impart to the model skeletal joint movement, including shoulder, elbow, hand, finger, neck, head, hip, knee, and foot movement and the like. By having animation software manipulate a rig incorporated into a model, animated movement of the model is achieved.

Texture. In the usual modern case, one or more images are mapped onto the surface of a model to provide the digital imagery portrayed by the model as it is manipulated by the rig.

Virtual Character. The model as deformed by the rig and presented by the texture in animation.

Virtual Set. The vicinity or fiducial reference point and coordinate system with respect to which the location of any element may be specified.

Prop. An object on the virtual set usually comprising a model without a rig.

Scene. A virtual set, one or more props, and one or more virtual characters.

Action. Animation associated with a scene. It should be noted that, upon editing of the final animation story, portions of an action may be distributed without regard to time, for example at the beginning, middle, and end of the animation story.

Editing. The process by which portions of actions are assembled to construct a story, narrative, or other product.

Actuator. A device such as a mouse or keyboard on a personal computer enabling input to the animation software. This term includes our novel adaptation of a “game controller” for imparting animation to characters.

BRIEF SUMMARY OF THE INVENTION

Various embodiments of the invention provide novel software, systems and methods for animation and/or filmmaking. In a set of embodiments, for example, a client-server system provides the ability to control various aspects of a live-action and/or an animated scene, including cameras and/or light sources (either real and/or virtual), animated characters, and other objects. This can include, merely by way of example, moving cameras, lights and/or the like, as well as rendering animated objects (e.g., based on movements of the objects themselves and/or based on movements of cameras, lights, etc.).

One set of embodiments, for example, provides systems that can be used in the filmmaking process and/or systems for producing animated works. An exemplary system, in accordance with some embodiments, includes an animation client computer, which may comprise a first processor, a display device, at least one input device, and/or animation client software. The system may further include an animation server computer comprising a second processor and animation server software.

In certain embodiments, the animation client software may comprise instructions executable by the first processor to accept a set of input data from the at least one input device. The set of input data may indicate a desired position for an animated object, which might comprise a set of one or more polygons and/or a set of one or more textures to be applied to the set of one or more polygons. The animation client software might further comprise instructions executable by the first processor to transmit the set of input data for reception by the animation server computer.

The animation server software can comprise instructions executable by the second processor to receive the set of input data from the animation client computer and/or to process the input data to determine the desired position of the animated object. The animation server software may also comprise additional instructions executable by the second processor to calculate a set of joint rotations defining the desired position of the animated object and/or to transmit the set of joint rotations for reception by the animation client computer.

The animation client software, then, may comprise further instructions executable by the first processor to receive the set of joint rotations defining the position of the animated object and/or to calculate (perhaps based on the set of joint rotations) a set of positions for the set of one or more polygons. There may also be additional instructions executable by the first processor to apply to the set of one or more polygons at least one of the textures from the set of one or more textures to render the animated object in the desired position. The rendered animated object then may be displayed by the animation client, and/or the set of joint rotations may be stored at a data store associated with the animation server computer.

In a particular embodiment, the animation client computer may be a plurality of animation client computers including a first animation client computer and a second animation client computer. The first animation client computer might comprise the input device(s), while the second animation client computer might comprise the display device(s). The animation server computer then, might receive the set of input data from the first animation client computer and/or transmit the set of joint rotations for reception by the second animation client computer, which might be configured to receive the set of joint rotations, calculate a set of positions for the set of one or more polygons based on the set of joint rotations, apply to the set of one or more polygons at least one of the textures from the set of one or more textures to render the animated object in the desired position, and/or display on the display device the rendered animated object.

In another set of embodiments, the animation client software may comprise instructions executable by the first processor to accept a set of input data (which might indicate a desired position for an object) from the at least one input device and/or instructions executable by the first processor to transmit the set of input data for reception by the animation server computer. In some embodiments, the animation server software comprises instructions executable by the second processor to receive the set of input data from the animation client computer and/or to transmit for reception by the animation client computer a set of position data, perhaps based on the set of input data received from the animation client computer. The animation client software might further comprise instructions executable by the first processor to receive the set of position data from the animation server computer and/or to place the object in the desired position, based at least in part on the set of position data.

In various embodiments, the object can be a virtual object (including without limitation a virtual camera, a virtual light source, etc.) and/or a physical object (including without limitation a device, such as a camera, a light source, etc., in communication with the animation client computer, and/or any other appropriate object). Merely by way of example, the object may be an animated character, which might comprise a set of polygons and at least one texture, such that placing the object in the desired position comprises rendering the animated character in the desired position.

In one aspect, the set of position data might comprise data (such as joint rotations, joint angles, etc.) defining a position of the object and/or defining a deformation of a rig describing the object. In another aspect, the set of position data might comprise a position and/or orientation of a real or virtual camera; the position of the object in the scene may be affected by the position and/or orientation of the real or virtual camera, such that the placement of the object depends on the position and/or orientation of the real or virtual camera.

In certain embodiments, the animation server has an associated data store configured to hold a set of one or more object definition files for the animated object, the set of one or more object definition files collectively specifying a set of polygons and textures that define the object (e.g., the object definition files may comprise one or more textures associated with the object). Hence, the animation client software may comprise instructions executable by the first processor to download from the animation server computer at least a portion of the set of one or more object definition files necessary to render the object. In some cases, however, the downloaded portion of the set of one or more object definition files may be insufficient to independently recreate the animated object without additional data, which might be resident on the animation server computer. Similarly, in some configurations, the animation client computer might be unable to upload to the animation server computer any modifications of the at least a portion of the set of one or more object definition files.

In other configurations, the animation client software comprises further instructions executable by the first processor to modify the object definition files to produce a set of modified object definition files. Optionally, the animation server software comprises instructions executable by the second processor to receive the set of modified object definition files and/or to track changes to the set of object definition files. In some cases, the animation server computer may be configured to identify a user of the animation client computer and/or to determine whether to accept the set of modified object definition files, perhaps based on an identity of the user of the animation client computer. In other cases, the animation server computer may be configured to distribute the set of modified object definition files to a set of animation client computers comprising at least a second animation client computer.

In some embodiments, the data store is configured to hold a plurality of sets of one or more object definition files for a plurality of animated objects. Optionally, the animation server software might comprise further instructions executable by the second processor to determine whether to provide to the animation client computer one or more of the sets of the object definition files, based on, for example, a set of payment or billing information and/or an identity of a user of the animation client computer.

In other embodiments, the animation server software further comprises instructions executable by the second processor to identify a user of the animation client computer and/or to determine (e.g., based on an identification of the user and/or a set of payment or billing information) whether to allow the animation client computer to interact with the animation server software.

In further embodiments, the animation server software comprises instructions executable by the second processor to store the set of position data at a data store (which might be associated with the animation server computer). In an aspect, the animation server software comprises instructions to store a plurality of sets of position data (each of which may be, but need not be, based on a separate set of input data) and/or to track a series of changes to a position of the object, based on the plurality of sets of position data.

In a particular set of embodiments, the animation client computer is a first animation client computer, and the system comprises a second animation client computer in communication with the animation server computer. The second animation client computer may comprise a third processor, a second display device, a second input device, and/or second animation client software.

The second animation client software may comprise instructions executable by the third processor to accept a second set of input data (which may indicate a desired position for a second object) from the second input device and/or to transmit the second set of input data for reception by the animation server computer. The animation server software may comprise instructions executable by the second processor to receive the second set of input data from the second animation client computer and/or to transmit (e.g., for reception by the second animation client computer) a second set of position data, which may be based on the second set of input data received from the second animation client computer.

The first object and the second object may be the same object. Accordingly, in some cases, the animation server software might comprise instructions to transmit the second set of position data for reception by the first animation client computer, and the animation client software on the first animation client computer might further comprise instructions to place the object in a position defined by the second set of position data, such that the first display displays the object in a position desired by a user of the second animation client computer. In other cases (e.g., if the first object and the second object are not the same object), the second set of position data might have no impact on a rendering of the first object on the first client computer, and/or the first set of position data might have no impact on a rendering of the second object on the second client computer.

A variety of input devices may be used. Exemplary devices include a joystick, a game controller, a mouse, a keyboard, a steering wheel, an inertial control system, an optical control system, a full or partial body motion capture unit, an optical, mechanical or electromagnetic system configured to capture the position or motion of an actor, puppet or prop, and/or the like.

In another set of embodiments, a system for producing animated works comprises a first animation client computer comprising a first processor, a first display device, at least one first input device, and first animation client software. The system further comprises an animation server computer in communication with the animation client computer and comprising a second processor and animation server software.

The first animation client software comprises instructions executable by the first processor to accept a first set of input data from the at least one input device; the first set of input data indicates a desired position for a first object. The first animation client software also comprises instructions to transmit the first set of input data for reception by the animation server computer. The animation server software comprises instructions executable by the second processor to receive the first set of input data from the first animation client computer, to calculate a first set of position data (perhaps based on the first set of input data received from the first animation client computer) and to render the first object, based at least in part on the first set of position data. The first animation client software further comprises instructions to display the first object in the desired position.

The system may further comprise a second animation client computer comprising a third processor, a second display device, at least one second input device, and second animation client software. The second animation client software can comprise instructions executable by the third processor to accept a second set of input data from the second input device, the second set of input data indicating a desired position for a second object, and/or to transmit the second set of input data for reception by the animation server computer. The animation server software may further comprise instructions to receive the second set of input data from the second animation client computer and/or instructions to transmit for reception by the second animation client computer a second set of position data, based on the second set of input data received from the second animation client computer.

In some cases, the second animation client software comprises instructions to receive the second set of position data from the animation server computer. The second animation client software may also comprise instructions to place the second object in the desired position for that object, based at least in part on the second set of position data.

Another set of embodiments provides animation client computers and/or animation server computers, which may be similar to those described above.

A further set of embodiments provides animation software, including software that can be used to operate the systems described above. An exemplary animation software package may be embodied on at least one computer readable medium and may comprise an animation client component and an animation server component. The animation client component might comprise instructions executable by a first computer to accept a set of input data from at least one input device at the first computer and/or to transmit the set of input data for reception by a second computer. The input data may indicate a desired position for an object.

The animation server component may comprise instructions executable by a second computer to receive the set of input data from the first computer and/or to transmit for reception by the first computer a set of position data, based on the set of input data received from the first computer. The animation client component, then, may comprise further instructions executable by the first computer to receive the set of position data from the second computer and/or to place the animated object in the desired position, based at least in part on the set of position data.

Still another set of embodiments provides methods, including without limitation methods that can be implemented by the systems and/or software described above. An exemplary method of creating an animated work comprises accepting at an animation client computer a set of input data (which might indicate a desired position for an object) from at least one input device, and/or transmitting the set of input data for reception by an animation server computer. In some cases, the method further comprises receiving at the animation server computer the set of input data from the animation client computer and/or transmitting for reception by the animation client computer a set of position data, based on the set of input data received from the animation client computer. The set of position data from the animation server computer may be received at the client computer. The method can further include placing the object in the desired position, based at least in part on the set of position data.

BRIEF DESCRIPTION OF THE DRAWINGS

A further understanding of the nature and advantages of the present invention may be realized by reference to the remaining portions of the specification and the drawings wherein like reference numerals are used throughout the several drawings to refer to similar components. In some instances, a sublabel is associated with a reference numeral to denote one of multiple similar components. When reference is made to a reference numeral without specification to an existing sublabel, it is intended to refer to all such multiple similar components.

FIG. 1 is a block diagram of the prior art animation software design illustrating artist and/or programmer PCs connected to a server system for checking out animation data files, processing the animation data files, and returning the animation data files to bulk storage of the animation data at the server, the exemplary server here being shown with version tracking software;

FIG. 2 is a block diagram of an animation system in accordance with one set of embodiments;

FIG. 3 is a block diagram of an animation system in accordance with another set of embodiments;

FIG. 4A is a representation of a model that can be animated by various embodiments of the invention;

FIG. 4B is a schematic representation of a rig suitable for deforming the model of FIG. 4A, the rig here having manipulation at the neck, shoulders, elbow, hand, hips, knees, and ankles;

FIG. 4C is a schematic representation of texture for placement over the model of FIG. 4A to impart a texture to a portion of the exterior of the model in the form of a man's suit;

FIG. 5 is a representation of a scene;

FIG. 6 is a generalized schematic drawing illustrating various components of a client/server animation system, in accordance with embodiments of the invention;

FIG. 7 is a flow diagram illustrating a method of creating an animated work, in accordance with various embodiments of the invention; and

FIG. 8 is a generalized schematic drawing of a computer architecture that can be used in various embodiments of the invention.

DETAILED DESCRIPTION

Various embodiments of the invention provide novel software, systems and methods for animation and/or filmmaking (the term “filmmaking” is used broadly herein to connote creating and/or producing any type of film-based and/or digital still and/or video image production, including without limitation feature-length films, short films, television programs, etc.). In a set of embodiments, for example, a client-server system provides the ability to control various aspects of a live-action and/or an animated scene, including cameras and/or light sources (either real and/or virtual), animated characters, and other objects. This can include, merely by way of example, moving cameras, lights and/or the like, as well as rendering animated objects (e.g., based on movements of the objects themselves and/or based on movements of cameras, lights, etc.).

Merely by way of example, in a set of embodiments, an animation client computer accepts input (e.g., via one or more input devices) and provides that input to an animation server computer. (In some cases, the animation client may provide raw input from the input device.) The input indicates a desired movement and/or position of an animated character, relative to other objects in a virtual scene. The animation server computer, after receiving the input, calculates a set of data (including, merely by way of example, data describing a deformation of a model, such as joint rotations and/or joint angles) that describes the desired movement and/or position of the character. (The use of joint rotations in animation is described below.) After calculating the set of joint angles, the animation server computer transmits the set of joint angles to the animation client computer. The animation client computer then renders the animated character in the desired position, based on the set of joint angles, as well as a set of polygons and one or more textures defining the animated character. (As used herein, the term “polygons” broadly refers not only to the traditional polygons used to form a model of an object, but also to any other structures that commonly are used to form a model of an object, including merely by way of example, NURBS surfaces, subdivision surfaces, level sets, volumetric representations, and point sets, among others.)
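Merely by way of illustration, this division of labor might be sketched as follows. This is a minimal Python sketch, not a definitive implementation; the message format, the function names, and the trivial stand-in for the server's joint-rotation engine are all assumptions for illustration only:

    import json
    import math

    # Client side: package raw controller input as a position request
    # (hypothetical message format).
    def build_input_message(character_id, desired_elbow_deg):
        return json.dumps({"character": character_id,
                           "input": {"elbow_deg": desired_elbow_deg}})

    # Server side: resolve the request into joint rotations. This is a
    # trivial stand-in for the real engine, which remains on the server.
    def solve_joint_rotations(message):
        request = json.loads(message)
        rotations = {"elbow": math.radians(request["input"]["elbow_deg"])}
        return json.dumps({"character": request["character"],
                           "joint_rotations": rotations})

    # Client side again: pose the locally stored model and render it
    # (stubbed here as a print statement).
    def apply_joint_rotations(reply):
        data = json.loads(reply)
        print("render", data["character"], "posed by", data["joint_rotations"])

    apply_joint_rotations(solve_joint_rotations(build_input_message("hero", 30)))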

In this way, the animation client computer can store some of the files necessary to render the character, and can in fact render the character if provided the proper joint angles. This is beneficial, in many situations, because it relieves the animation server of the relatively processor-intensive task of rendering the animation. This arrangement, however, also allows the server to perform the joint calculations, which, while generally not as processor-intensive as the rendering process, often impose relatively high file input/output (“I/O”) requirements, due to the extensive size of the databases used to hold data for performing the calculation of joint angles.

This exemplary system, then, provides a distribution of work that takes advantage of the strength of the animation client (that is, the ability to provide a plurality of animation client computers for performing the processor-intensive rendering tasks for various animation projects), while also taking advantage of the strength of typical server computers (that is, the ability to accommodate relatively high file I/O requirements). By contrast, in systems where a central server provides rendering services, extremely powerful (and therefore expensive) servers (and in many cases, server farms) are required to provide the rendering services. Ironically, such systems often also feature relatively powerful workstations as animation clients, but the processing power of the workstations is not harnessed for the rendering.

This exemplary system provides additional advantages, especially when compared with systems on which the animation (i.e., joint rotation calculation) and rendering processes occur on the animation client. Merely by way of example, the exemplary system described above facilitates the maintenance of data. For instance, since the joint rotations for a particular animation are calculated at the animation server, they can easily be stored there as well, and a variety of version-tracking and change-management protocols may be employed. By contrast, when individual clients (as opposed to an animation server) calculate joint rotations (and/or other position data), such data must either be stored at the several client machines or uploaded to the server after calculation, and management of that data therefore becomes much more burdensome.

Moreover, because, in the exemplary system described above, the physics engine that calculates the joint rotations remains on the server, the system can be configured to prevent the animation client from accessing sufficient data to independently perform the animation process, preventing unauthorized copying of animations and thereby providing greater security for that intellectual property.

Because various embodiments of the invention can be used to create animated works, it is helpful to provide a brief overview of the animation process. Referring first to FIG. 4A, a model 10 in the form of a human figure is disclosed. Model 10 includes face 11, neck 12, and arms 14 with elbow 15 and wrist 16 leading to hand 17. The model further includes hip 18, knees 19, and ankles 20. In a virtual character, the “model” (or “virtual model”) is a geometric description of the shape of the character in one specific pose (commonly called the “model pose,” “neutral pose,” or “reference pose”). The neutral pose used in the model is commonly a variation on the so-called “da Vinci pose,” in which the model is shown standing with eyes and head looking forward, arms outstretched, and legs straight with feet approximately shoulder width apart.

The model can be duplicated in any number of ways. In one common prior art process, a clay model or human model is scanned or digitized, recording the spatial coordinates of numerous points on the surface of the physical model so that a virtual representation of the model may be reconstructed from the data. It is to be understood that such models can be the product of great effort, taking man-years to construct.

The model also includes connectivity data (also called an “edge list”). This data is recorded at the time of scanning or inferred from the locations of the points, so that the collection of points can be treated as the vertices of a polygonal approximation of the surface shape of the original physical model. It is common, but not required, in the prior art for various mathematical smoothing and interpolation algorithms to be performed on the virtual model, so as to provide for a smoother surface representation than is achieved with a pure polygonal representation. One skilled in the art will appreciate that virtual models commonly include collections of spatial coordinates ranging from hundreds of points to hundreds of thousands or more points.
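Merely by way of illustration, a toy model in this form might be recorded as follows (hypothetical data, far smaller than the hundreds to hundreds of thousands of points of a production model):

    # Spatial coordinates for each recorded point on the surface.
    vertices = [
        (0.0, 0.0, 0.0),   # vertex 0
        (1.0, 0.0, 0.0),   # vertex 1
        (1.0, 1.0, 0.0),   # vertex 2
        (0.0, 1.0, 0.0),   # vertex 3
    ]

    # Connectivity data: each face lists vertex indices, so that the
    # faces together form a polygonal approximation of the surface of
    # the original physical model.
    faces = [(0, 1, 2), (0, 2, 3)]   # two triangles forming a quad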

Referring to FIG. 4B, a rig 30 is illustrated which is compatible with model 10 shown in FIG. 4A. Rig 30 includes head 31, neck 32, eyes 33, shoulders 34, elbows 35, and wrist 36. Further, hips 38, knees 39, and ankles 40 are also disclosed. Simply stated, rig 30 is mathematically disposed on model 10 so that animation can move the rig 30 at neck 32, shoulders 34, elbows 35, and wrist 36. Further, movement of hips 38, knees 39, and ankles 40 can also occur through manipulation of the rig 30.

The rig 30 enables the model 10 to move with realistic changes of shape. The rig 30 thus turns the model 10 into a virtual character, which is commonly required to move and bend, such as at the knees or elbows, in order to convey a virtual performance. The software and data used to deform (or transform) the “neutral pose” model data into a specific “active pose” variation of the model is commonly called a “rig” or “IK rig” (where “IK” is a shortened form of “Inverse Kinematics”).

“Inverse Kinematics” (as in “IK Rig”) is a body of mathematics that enables the computation of joint angles (or joint rotations) from joint locations and skeletal relationships. “Forward Kinematics” is the term of art for computing joint locations based on a collection of joint angles and skeletal relationships.

To a programmer skilled in the art, a rig is a piece of software which has as its inputs a collection of joint rotations, joint angles, and/or joint locations (“the right elbow is bent 30 degrees” or “the tip of the left index finger is positioned 2 cm above the center of the light switch”), the skeletal relationships between the joints (“the head bone is connected to the neck bone”), and a neutral pose representation of the virtual model, and has as its output a collection of spatial coordinates and connectivity data describing the shape that the virtual actor's body takes when posed as described by the input data.
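Merely by way of illustration, this input/output relationship can be sketched for a planar two-joint chain. The following is a minimal Python sketch of forward kinematics under assumed bone lengths and angles; an Inverse Kinematics rig would run the same relationship in the opposite direction, computing angles from a desired end location:

    import math

    # Forward kinematics for a planar chain: joint angles and skeletal
    # relationships (bone lengths in parent-to-child order) in; posed
    # joint locations out.
    def forward_kinematics(joint_angles_deg, bone_lengths):
        x, y, heading = 0.0, 0.0, 0.0
        locations = [(x, y)]                    # root joint at the origin
        for angle, length in zip(joint_angles_deg, bone_lengths):
            heading += math.radians(angle)      # each joint rotates the chain
            x += length * math.cos(heading)
            y += length * math.sin(heading)
            locations.append((x, y))
        return locations

    # "The elbow is bent 30 degrees": shoulder, elbow, and wrist
    # locations for a 0.30 m upper arm and a 0.25 m forearm.
    print(forward_kinematics([30, 30], [0.30, 0.25]))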

To an artist skilled in the art, a rig is a visual representation of the skeleton of the virtual actor, with graphical or other controls which allow the artist to manipulate the virtual actor. In the case of an “Inverse Kinematics Rig,” the artist might place the mouse cursor on the left index finger of the virtual actor and drag the left index finger across the screen so as to cause the virtual actor's arm to extend in a pointing motion. In the case of a “Forward Kinematics Rig,” the artist might click on the elbow of the virtual character and bend or straighten the rotation of the elbow joint by dragging the mouse across the screen or by typing a numeric angle on the keyboard.

Referring to FIG. 4C, texture 50 is illustrated here in the form of a man's suit alone, having a coat 51 and pants 52. In actuality, texture 50 would include many other surfaces; for example, a human face, socks, shoes, and hands (possibly with gloves) would all be part of the illustrated texture 50. It is common for a virtual actor to be drawn (or “rendered”) to the screen with, for example, blue eyes and a red jacket. The virtual model described earlier contains purely spatial data. Additional data and/or software, commonly called “Textures,” “Maps,” “Shaders,” or “Shading,” is employed to control the colors used to render the various parts of the model.

In the earliest forms of the prior art, the color (or “texture”) information was encoded directly into the model, in the form of a “Vertex Color.” A Vertex Color is commonly an RGB triplet (Red 0-255, Green 0-255, Blue 0-255) assigned to a specific vertex in the virtual model. By assigning different RGB triplets to different vertices, the model may be colored in such a way as to convey blue eyes and a red dress.
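Merely by way of illustration, such an encoding might be as simple as the following (hypothetical values):

    # A Vertex Color is an RGB triplet (Red 0-255, Green 0-255,
    # Blue 0-255) assigned to a specific vertex; here vertex 2 is
    # colored red and the remaining vertices are white.
    vertex_colors = {
        0: (255, 255, 255),
        1: (255, 255, 255),
        2: (255, 0, 0),
        3: (255, 255, 255),
    }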

While vertex colors are still used in the creation of virtual characters, in the current state of the art it is more common to make use of “texture coordinates” and “texture maps.” “Texture coordinates” are additional data that is recorded with the vertex location data and connectivity data in order to allow an image (or “texture”) to be “mapped” onto the surface.

In order to provide realistic coloring to the eyes of the virtual character (perhaps including veins in the whites of the eyes and/or color striations in the iris), a digital image of an eye will be acquired (possibly via a digital camera or possibly by an artist who paints such a picture using computer software). The digital image of the eye is a “texture map” (or “texture”). The vertices in the model that comprise the surface of the eye will be tagged with additional data (“texture coordinates”) that is analogous to latitude and longitude coordinates on a globe. The texture is “mapped” onto the surface by use of the texture coordinates.
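Merely by way of illustration, the mapping might be sketched as follows. This is a minimal nearest-neighbor lookup over a hypothetical 2x2 “image”; production renderers filter and interpolate rather than snapping to the nearest texel:

    # A tiny 2x2 texture map: rows of RGB triplets.
    texture = [
        [(255, 255, 255), (200, 0, 0)],
        [(0, 0, 200), (20, 20, 20)],
    ]

    # Texture coordinates recorded per vertex, each (u, v) in [0, 1],
    # analogous to longitude and latitude on a globe.
    uv_per_vertex = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (1.0, 1.0)}

    def sample(texture, u, v):
        height, width = len(texture), len(texture[0])
        col = min(int(u * width), width - 1)    # nearest-neighbor lookup
        row = min(int(v * height), height - 1)
        return texture[row][col]

    print(sample(texture, *uv_per_vertex[2]))   # color mapped at vertex 2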

Referring back to FIG. 4B, when the virtual character is instructed to look to the left (for example, by rotating the neck controls 32 or eye controls 33 in the rig), the virtual model is deformed in a manner which rotates all of the vertices making up the head to the left. The head texture is then rendered in the desired location on the screen based upon the vertex locations and the texture coordinates of those vertices.

Referring to FIG. 5, an animated scene comprises some or all of the following: a virtual set (set 60 being shown), one or more props (floor 69, wall 70, chair 61, and table 62 being shown), and one or more virtual characters (model 10, manipulated by rig 30 and bearing texture 50, being shown [here designated only by the numeral 10]).

Each of the virtual sets, props, and characters has a fiducial reference point with respect to which the location of the element may be specified. Here the fiducial reference point is shown at 65. The virtual set 60, props (see chair 61 and table 62), and character 10 are assembled together by specifying their spatial locations using a shared coordinate system from fiducial 65. The choice of coordinate system is arbitrary, but a common practice is to locate the virtual set at the origin of the coordinate system.
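Merely by way of illustration, a scene assembled in this way might be recorded as follows (hypothetical coordinates, with the virtual set placed at the origin per the common practice noted above):

    # Every element's location is expressed in one shared coordinate
    # system whose origin is the fiducial reference point 65.
    scene = {
        "set": {"position": (0.0, 0.0, 0.0)},        # set 60 at the origin
        "props": {
            "chair": {"position": (2.0, 0.0, 1.5)},  # chair 61
            "table": {"position": (3.0, 0.0, 1.0)},  # table 62
        },
        "characters": {
            "hero": {"position": (2.5, 0.0, 2.0)},   # character 10
        },
    }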

Often, the background (or “virtual set”) is essentially a virtual character with either no rig (in the case of a purely static virtual set) or what is commonly a very simple rig (where the joint angles might control the opening angles of a door 67 or the joint locations might control the opening height of a window 68). It is common to embellish the scene used in an action with a variety of props. As with the background, props are again essentially virtual characters which are used to represent inanimate objects.

For the purposes of creating a motion picture sequence, animation data is associated with the elements of a scene to create an action. A sequence of images may be constructed by providing the rig of character 10 with a sequence of input data (“animation data”), such as 24 sets of joint angles per second, so as to produce a 24 frame per second movie. In the prior art, the animation data provided to the rigs is commonly compressed through the use of various interpolation techniques.

For example, it is common in the prior art to compress the animation data into “key frames.” A key frame is typically associated with a specific point in the timeline of the sequence of images (“t=2.4 seconds”) and specifies joint angles or joint locations for some or all of the joints in the rig. Any joints (or more generally input parameters) whose values are not specified in this key frame interpolate their values at t=2.4 seconds from other preceding and following key frames that do specify values for those joints or input parameters. The various families of mathematical formulas used to interpolate between key frame values (such as “Bezier curves” and “b-Splines”) are well known to practitioners of the art.
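Merely by way of illustration, key frame interpolation might be sketched as follows. This is a minimal linear interpolator over a single hypothetical joint-angle channel; production systems use the Bezier and b-Spline families noted above, but the principle is the same:

    # Key frames for one channel: (time in seconds, elbow angle in degrees).
    keys = [(0.0, 0.0), (2.4, 30.0), (3.0, 45.0)]

    def sample_channel(keys, t):
        """Interpolate the channel's value at time t from its key frames."""
        if t <= keys[0][0]:
            return keys[0][1]
        for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
            if t0 <= t <= t1:
                blend = (t - t0) / (t1 - t0)
                return v0 + blend * (v1 - v0)   # linear interpolation
        return keys[-1][1]                      # hold the last key

    print(sample_channel(keys, 2.7))   # halfway between the last two keys: 37.5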

Several methods are commonly used by artists to specify the input data provided to the rig. Merely by way of example, in the “animation” method, the artist indicates a specific point in the timeline (“t=2.4 seconds”), adjusts one or more joint angles or locations (for example using the keyboard or by manipulating on-screen controls using a mouse), and “sets a key frame” on those joint angles or locations. The artist then moves to a different point on the timeline (“t=3.0 seconds”) and again adjusts joint angles or locations before “setting a key frame.”

Once the artist has set two or more key frames, the artist can move an indicator on the timeline or press a “play” button to watch the animation that she has created. By repeatedly adding, modifying, or moving key frames while repeatedly watching the playback of the animation, the artist can create the desired performance.

In the “motion capture” method, the artist performs the motion in some manner while the computer records the motion. Common input devices for use with motion capture include a full-body suit equipped with sensing devices to record the physical joint angles of a human actor and so-called “Waldo” devices, which allow a skilled puppeteer to control a large number of switches and knobs with their hands (Waldo devices are most commonly used for recording facial animations). It is common to perform multiple captures of the same motion, during which sequence of captures the actor repeatedly reenacts the same motions until data is collected which is satisfactory both artistically and technically.

In the “procedural” method, custom software is developed which generates animation data from high-level input data. Procedural animation is commonly used when animating non-human actors such as flocks of birds or falling rocks. In FIG. 5, bird 63 illustrates this technique.

In the “hybrid” method, the motion capture and/or procedural method is used to specify the initial data. For example, the initial movement of bird 63 would be generated using the procedural method. The data obtained via the motion capture or procedural method is then compressed in a manner that makes it technically similar to (or compatible with) data obtained via the animation method. For example, presuming that bird 63 were going to interact with character 10 in scene 60, modification of the procedural animation of bird 63 would occur. Once the initial data has been compressed, it is then manipulated, re-timed, and/or extended through the use of the animation method.

The animation software often plays back previously specified animations by interpolating animation data at a specific point in time, providing the interpolated animation data to the rigs, making use of the rigs to deform the models, applying textures to the models, and presenting a rendered image on the display. The animation software then advances to a different point in the timeline and repeats the process.
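Merely by way of illustration, that playback cycle might be sketched in miniature as follows; the interpolation and rendering steps here are stand-ins (a linear ramp and a print statement) for the real subsystems:

    # Play back a short animation at 24 frames per second.
    def play(fps=24, duration=0.25):
        for frame in range(int(duration * fps)):
            t = frame / fps
            elbow_deg = 45.0 * t    # interpolated animation data at time t
            # A real player would provide this value to the rig, deform
            # the model, apply textures, and present the rendered image.
            print(f"t={t:.3f}s  pose rig with elbow={elbow_deg:.1f} deg")

    play()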

Based on this general description of the animation process, we turn now to FIG. 2, which illustrates a client/server animation system in accordance with a set of embodiments. The system comprises a plurality of animation client computers 200 in communication (e.g., via a network) with a server computer 210, as shown in FIG. 2. (The network can be any suitable network, including without limitation a local area network, wide area network, wired and/or wireless network, the Internet, an intranet or extranet, etc. Those skilled in the art will appreciate that any of a variety of connection facilities, including without limitation such networks, can provide communication between the server 210 and the clients 200.) In accordance with some embodiments, models 10, rigs 30, and/or textures 50 (and/or portions thereof) may be stored at the client 200, e.g., at model, rig, and texture storage 201. In this particular embodiment, such local storage has advantages, as described below.

Whether a major studio subscribes to models 10, rigs 30, and textures 50 or maintains its own, great expense and effort can go into developing these discrete components of a character. Further, because of the effort and expense required, the owner or operator of the client 200 may choose not to share the contents of storage 201 with anyone, including the provider of animation server 210.

The animation client computer may, in some embodiments, include rendering software 203 operatively connected to model, rig, and texture storage 201. The rendering software may be part of an animation client application. Furthermore, a controller 202 (here shown as a keyboard and mouse) operates through network connection 205. It should be noted that any suitable controller, including those described in U.S. patent application Ser. No. ______, (attorney docket number 020071-000210), already incorporated by reference, can be used in accordance with various embodiments.

The animation server computer 210 includes animation data storage 211, animation software 212, and/or version tracking software 214. Presuming that the artist or programmer has created the models 10, rigs 30, and textures 50, manipulation of an Action on a scene 60 can either occur from the beginning (de novo) or, alternately, the artist and/or programmer may check out a previous version of the Action through the network connection 205 by accessing animation server 210 and retrieving from animation data storage 211 the desired data to animation software 212.

In either event, utilizing the animation techniques described above, input data (e.g., from the controller 202 and/or an actuator thereof) is received by the client 200. In some cases, the input data may be described by an auxiliary coordinate system, in which case the input data may be processed as described in U.S. patent application Ser. No. ______ (attorney docket number 020071-000210), already incorporated by reference. Other processing may be provided as well, as necessary to format the raw input data received from the controller 202.

The animation client 200 in turn transmits the input (either as raw input data and/or as processed by the animation client computer 200), e.g., via network connection 205, to the animation server computer 210 and, more particularly, to animation software 212 (which might be incorporated in animation server software). Processing of the selected Action will occur at animation software 212 within animation server 210. Such processing will utilize the techniques described above. In particular, a set of joint rotations may be calculated, based on the input data. The joint rotations will describe the position and/or motion desired in the Action.

Playback will occur by having animation software 212 emit return animation information through network connection 205 and then to rendering software 203. Rendering software 203 will access model, rig, and texture storage 201 to display at display 204 the end result of modifications introduced by the artist and/or programmer at the client 200.

Thus, it will be seen that when an Action is modified at a client 200, if the models, rigs, and/or textures are available only at client 200, then replay can occur only at client 200. This replay is not possible with the information possessed by server 210, because the model, rig, and/or texture storage 201 is not resident in or available to animation server 210. (It should be noted, however, that in various embodiments, some or all portions of the models, rigs, and/or textures may be stored at the animation server in addition to—or instead of—at the animation client.) Alternatively, if the models, rigs, and textures are available at both client 200 and client 200A, then replay can occur at both client 200 and client 200A, allowing for cooperative work activities between two users.

It should be understood that animation server 210 (and/or another server in communication therewith) can provide a number of services. For example, the server 210 can provide access control; for instance, client 200 is required to log in to server 210. Furthermore, a subscription may be required as a prerequisite for access to server 210. In some cases, server 210 can deliver different capabilities for different users. By way of example, client 200A can be restricted to modification of character motion while client 200 modifies animation of bird 63.

Generally, the client 200 controls the time when playback starts and stops for any individual Action. Moreover, the client 200 may arbitrarily change the portion of the Action being worked on by simply referring to that Action at a specified time period.

It will further be realized that, with the rendering software 203 and the model, rig, and/or texture data in storage 201 resident at the client, the data transmitted over the network through network connection 205 is kept to a minimum. Specifically, just a small section of data need be transmitted. This data will include that which is needed to play the animation (e.g., a set of joint rotations, etc.). As the rendering software 203 and some or all of the model, rig, and/or texture storage 201 may be resident at client 200, only small batches of data need be transmitted over the Internet.
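Merely by way of illustration, one frame's worth of such data (hypothetical joint names and values) occupies only on the order of a hundred bytes, versus the megabytes of a rendered image or a complete model:

    import json

    # A per-frame update carrying only the data needed to play the
    # animation at the client.
    frame_update = {"character": "hero", "t": 2.4,
                    "joint_rotations": {"neck": 0.10, "l_elbow": 0.52,
                                        "r_elbow": 0.35, "l_knee": 0.00}}

    payload = json.dumps(frame_update)
    print(len(payload), "bytes")   # roughly a hundred bytes for this update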

It will be understood that the server 210 is useful in serving multiple clients. Further, the server 210 can act as a studio, providing the artist and/or programmer at client 200 with a full range of services, including storage and delivery of updated model, texture, and rig data to client 200 and client 200A. In a set of embodiments, the server 210 will store all animation data. Furthermore, through version tracking software 214, animation data storage 211 will provide animation data (such as joint rotations, etc.) to the respective clients 200 and 200A on an as-needed basis.

Referring now to FIG. 3, a system in accordance with another set of embodiments is illustrated. Specifically, client 300 includes a network connection 305, a controller 302, and a display 304. Server 310 includes animation data storage 211, version tracking software 214, and animation software 212. Additionally, server 310 includes rendering software 303.

In this case, the manipulation of the animation software from controller 302 through network connection 305 of the client 300 is identical to that shown in FIG. 2. Likewise, the animation software for calculating joint rotations and the like is resident on the server 310. In the embodiments illustrated by FIG. 3, however, the rendering component also resides on the server 310. Specifically, rendering software 303 will generate actual images (e.g., bitmaps, etc.), which images will be sent through the network to network connection 305 and may be displayed thereafter at display 304.

FIG. 6 provides a generalized schematic diagram of a client/server system in accordance with some embodiments of the invention. The system 600 includes an animation server computer 605, which may be a PC server, minicomputer, mainframe, etc. running any of a variety of available operating systems including UNIX™ (and/or any of its derivatives, such as Linux, BSD, etc.), various varieties of Microsoft Windows™ (e.g., NT™, XP™, 2003, Vista™, Mobile™, CE™, etc.), Apple's Macintosh OS™ and/or any other appropriate server operating system. The animation server computer also includes animation server software 610, which provides animation services in accordance with embodiments of the invention. The animation server computer 605 may also comprise (and/or have associated therewith) one or more storage media 615, which can include storage for the animation server software 610, as well as a variety of associated databases (such as a database of animation data 615a, a data store 615b for model data, such as the polygons and textures that describe an animated character, a data store 615c for scene data, and any other appropriate data stores).

The system 600 further comprises one or more animation client computers 620, one or more of which may include local storage (not shown), as well as animation client software 625. (In some cases, such as a case in which the animation client computer 620 is designed only to provide input and display a rendered image, the rendering subsystem may reside on the animation server 605, as described with respect to FIG. 3, for example. In this way, thin clients, such as wireless phones, PDAs, etc., may be used to provide input even if they have insufficient processing power to render the objects.)

The animation client computer 620 thus may be, inter alia, a PC, workstation, laptop, tablet computer, PDA, wireless phone, etc. running any appropriate operating system (such as Apple's Macintosh OS™, UNIX and/or its derivatives, Microsoft Windows™, etc.). Each animation client 620 may also include one or more display devices 630 (such as monitors, LCD panels, projectors, etc.) and/or one or more input devices 635 (such as the controllers described above and in U.S. patent application Ser. No. ______ (attorney docket number 020071-000210), already incorporated by reference, as well as, to name but a few examples, a telephone keypad, a stylus, etc.).

In accordance with a set of embodiments, the system 600 may operate in the following exemplary manner, which is described by additional reference to FIG. 7, which illustrates a method 700 of creating an animated work in accordance with some embodiments of the invention. (It should be noted that, while the method 700 of FIG. 7 is described in conjunction with the system 600 of FIG. 6, that description is provided for exemplary purposes only, and the methods of the invention are not limited to any particular hardware or software implementation. Likewise, the operation of the system 600 of FIG. 6 is not limited to the described methods.)

The animation client software 625 comprises instructions executable by the animation client computer 620 to accept a set of input data from one or more input devices (block 705). The input data may, for example, indicate a desired position of an object in a scene (which may be a virtual scene, a physical set, etc.). In particular embodiments, the object may be an animated object, which may comprise a plurality of polygons and/or textures, as described above. The animation client software optionally may process the input data, for example as described above. The animation client software then transmits the set of input data for reception by the animation server computer (block 710).
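A minimal sketch of how blocks 705 and 710 might look on the client, assuming a JSON wire format and a dictionary standing in for the input device; the function names and the sensitivity parameter are illustrative assumptions, not the disclosed implementation.

```python
# Hypothetical client-side handling of blocks 705-710: read an input
# device, optionally normalize the raw data, and ship it to the server.
import json

def accept_input(raw_event: dict) -> dict:
    # Block 705: accept a set of input data from an input device.
    # Here the "device" is just a dict; a real client would poll a
    # controller, stylus, telephone keypad, etc.
    return {"object_id": raw_event["object_id"],
            "dx": raw_event["dx"], "dy": raw_event["dy"]}

def process_input(input_data: dict, sensitivity: float = 0.5) -> dict:
    # Optional client-side processing (e.g., scaling controller deltas).
    return {**input_data,
            "dx": input_data["dx"] * sensitivity,
            "dy": input_data["dy"] * sensitivity}

def transmit(input_data: dict) -> bytes:
    # Block 710: serialize for reception by the animation server.
    return json.dumps(input_data).encode()

packet = transmit(process_input(accept_input(
    {"object_id": "char_01", "dx": 4.0, "dy": -2.0})))
print(packet)
```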

The animation server computer 605 (and, more particularly in some cases, the animation server software 610) receives the input data (block 715). The animation server software 610 calculates a set of position data (block 720), based on the received input data. In some cases, calculating the set of position data can include processing the input data to determine a desired position of an animated object and/or calculating a set of joint rotations defining that desired position (and/or the deformation of a rig describing the character, in order to place the character in the desired position). In other cases, merely by way of example, if the object is a light or a camera on a physical set, there may be no need to calculate any animation data; instead, the position can be determined based solely on the input data, perhaps in conjunction with a current position of the object.
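Under the same illustrative assumptions, the following sketch suggests how block 720 might branch on the type of object: a physical camera or light needs only the input plus the current position, while an animated character also needs joint rotations (a toy mapping stands in here for a real inverse-kinematics solver).

```python
# One way blocks 715-720 might look on the server; the message shape,
# the "kind" field, and the joint-rotation math are all assumptions.
import json

def calculate_position_data(input_data: dict, current: dict) -> dict:
    desired = {"x": current["x"] + input_data["dx"],
               "y": current["y"] + input_data["dy"]}
    if input_data.get("kind") == "physical":
        # A physical light or camera needs no animation data: the new
        # position follows from the input plus the current position.
        return {"position": desired}
    # An animated character also needs joint rotations; a real system
    # would run IK over the character's rig.
    return {"position": desired,
            "joint_rotations": {"shoulder": desired["x"] * 0.1,
                                "elbow": desired["y"] * 0.1}}

packet = json.loads(b'{"kind": "character", "dx": 4.0, "dy": -2.0}')  # block 715
print(calculate_position_data(packet, {"x": 0.0, "y": 0.0}))           # block 720
```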

In yet other cases, the object may be an animated character (or other object in a virtual scene), and the position of the object in the scene may be affected by the position of a virtual camera and/or light source. In these cases, the position data might comprise data about the position and/or orientation of the virtual camera/light.

The animation server computer 605 (perhaps based on instructions from the server software 610) then transmits the set of position data (e.g., joint rotations, etc.) for reception by the animation client 620 (block 725). When the animation client computer receives the set of position data (block 730), the animation client software 625 is responsible for placing the object in the desired position (block 735). This procedure necessarily will vary according to the nature of the object. Merely by way of example, if the object is an animated character, placing the object in the desired position generally will comprise rendering the animated character in the desired position, for example by calculating a set of positions for the polygons that describe the character and/or by applying any necessary textures to the model. If the object is a physical object, such as a light, placing the object in the desired position may require interfacing with a movement system, which is not illustrated in FIG. 6 but examples of which are described in detail in U.S. patent application Ser. No. ______ (attorney docket number 020071-000210), already incorporated by reference.
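One plausible rendering of blocks 730 through 735 on the client, sketched as forward kinematics over a hypothetical two-joint chain; the bone length, joint names, and "texture application" are invented for illustration and are not the patented method.

```python
# A toy version of blocks 730-735 on the client: turn received joint
# rotations into polygon (vertex) positions and bind a texture.
import math

def place_polygons(joint_rotations: dict, bone_length: float = 1.0) -> list:
    # Forward kinematics over a two-joint chain: each rotation moves
    # the vertices that hang off that joint.
    angle = 0.0
    x = y = 0.0
    points = [(x, y)]
    for name in ("shoulder", "elbow"):
        angle += math.radians(joint_rotations.get(name, 0.0))
        x += bone_length * math.cos(angle)
        y += bone_length * math.sin(angle)
        points.append((x, y))
    return points

def apply_texture(points: list, texture: str) -> dict:
    # Block 735 finishes the render by applying a texture to the
    # placed polygons (reduced here to labeling the vertex list).
    return {"texture": texture, "vertices": points}

rendered = apply_texture(place_polygons({"shoulder": 30.0, "elbow": 45.0}),
                         "skin_diffuse")
print(rendered)
```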

In some cases, the object (for instance, if the object is a virtual object) may be displayed on a display device 630 (block 740). In particular, the object may be displayed in the desired position. In other cases, the client 620 may be configured to upload the rendered object to the animation server 605 for storage and/or distribution to other computers (which might be, inter alia, other animation servers and/or clients).

The system 600 may provide a number of other features, some of which are described above. In some cases, the animation server 605 can provide animation services to a plurality of animation client computers (e.g., 620a, 620b). In an exemplary embodiment, input may be received at a first client 620a, and the position data may be transmitted to a second client 620b for rendering and/or display. (Optionally, the plurality of client computers 620 may perform rendering tasks in parallel for a given scene.) In another embodiment, each client 620a, 620b accepts input and receives position data, such that two artists may collaborate on a given character and/or scene, each being able to view changes made by the other. In yet another embodiment, each client 620a, 620b may interact individually with the server 605, with each client 620 providing its own input and receiving position data based on that input. (That is, the position data received by one client has no impact on the rendering of an object on another client.)
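The collaborative mode just described might be sketched as follows; the broker class and its fan-out policy are assumptions for illustration, not a prescribed architecture.

```python
# A sketch of the collaborative embodiment: position data computed from
# one client's input is fanned out to every client on the same scene.
class CollaborationBroker:
    def __init__(self):
        self.clients = {}          # client_id -> list of received updates

    def join(self, client_id: str):
        self.clients[client_id] = []

    def submit_input(self, from_client: str, input_data: dict):
        # The server turns the input into position data...
        position_data = {"object": input_data["object"],
                         "x": input_data["dx"], "source": from_client}
        # ...and broadcasts it, so each artist sees the other's changes.
        for updates in self.clients.values():
            updates.append(position_data)

broker = CollaborationBroker()
broker.join("artist_a")
broker.join("artist_b")
broker.submit_input("artist_a", {"object": "char_01", "dx": 2.0})
print(broker.clients["artist_b"])   # artist_b sees artist_a's change
```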

As noted above, in some cases, the animation server software 610 may be configured not only to calculate the position data, but also to render the object (which can include, merely by way of example, not only applying one or more textures to a model of the object, but also calculating the positions of the polygons that make up the model, based on the position data). Hence, in such cases, the rendered object may be provided to an animation client computer 620 (which may or may not be the same client computer that provided the input on which the position data is based), which then can display the object in the desired position. In some cases, the animation server 605 might render a first object for a first client 620 and might merely provide to a second client a set of position data describing a desired position of a second object.
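The per-client choice described above might be dispatched along the following lines; the capability flag and message shapes are assumptions, sketched only to make the branching concrete.

```python
# A sketch of the mixed mode: render on the server for thin clients,
# or send position data and let a capable client render for itself.
def respond(client_profile: dict, position_data: dict) -> dict:
    if client_profile.get("can_render", False):
        # Capable client: send joint rotations etc. and let it render.
        return {"type": "position_data", "data": position_data}
    # Thin client: render here and send the finished image instead.
    image = f"bitmap({position_data})".encode()
    return {"type": "rendered_frame", "data": image}

print(respond({"can_render": True}, {"shoulder": 30.0})["type"])
print(respond({"can_render": False}, {"shoulder": 30.0})["type"])
```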

In some embodiments, one or more of the data stores (e.g., data store 615c) may be used to store object definition files, which can include some or all of the information necessary for rendering a given object, such as the model, rig, polygons, textures, etc. describing that object. An animation client 620 then can be configured to download from the server 605 the object definition files (and/or a subset thereof) to perform the rendering of the object in accordance with embodiments of the invention. It should be noted, however, that for security, the downloaded object definition files (and/or portions thereof) may be insufficient to allow a user of the client 620 to independently recreate the object without additional data resident on the server.
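A toy illustration of that security point: the server below hands out only the subset of an object definition that a client needs to render, withholding server-only data. The file categories and the choice of subset are assumptions.

```python
# Hypothetical object-definition store: the client's download is
# sufficient to render, but insufficient to recreate the object
# independently of the server.
OBJECT_DEFINITIONS = {
    "char_01": {
        "mesh": "polygons...", "textures": "textures...",    # needed to render
        "rig": "rig...", "source_model": "master model...",  # server-only
    },
}
RENDER_SUBSET = {"mesh", "textures"}

def download_definition(object_id: str) -> dict:
    full = OBJECT_DEFINITIONS[object_id]
    return {k: v for k, v in full.items() if k in RENDER_SUBSET}

print(download_definition("char_01"))
```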

The system 600 may be configured such that a user of the client 620 is not allowed to modify these object definition files locally at the client 620 and/or, if local modification is allowed, the client 620 may not be allowed to upload modified object definition files. In this way, the system 600 can prevent the unauthorized modification of a “master copy” of the object definition files. Alternatively, the server software 610 may be configured to allow modified object definition files to be uploaded (and thus to receive such files), perhaps based on an identification of the user of the animation client computer—that is, the server software 610 may be configured to identify the user and determine whether the user has sufficient privileges to upload modified files. (It should be noted that the identification, authentication and/or authorization of users may be performed either by the animation server 605 and/or by another server, which might communicate such identification, authorization and/or authentication data to the animation server 605.)
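One way the upload policy might be realized, sketched with an invented role table; real systems would rely on whatever identification, authentication and authorization services are available, as noted above.

```python
# A minimal sketch of privilege-checked uploads: only users whose role
# carries the "upload" privilege may replace the master copy.
PRIVILEGES = {"senior_artist": {"upload"}, "artist": set()}

def try_upload(user: str, role: str, object_id: str,
               modified_files: dict, master_store: dict) -> bool:
    # The server identifies the user, then checks privileges before
    # letting a modified object definition replace the master copy.
    if "upload" not in PRIVILEGES.get(role, set()):
        return False
    master_store[object_id] = modified_files
    return True

store = {}
print(try_upload("bob", "artist", "char_01", {"mesh": "..."}, store))         # False
print(try_upload("ann", "senior_artist", "char_01", {"mesh": "..."}, store))  # True
```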

Similarly, in other embodiments, the animation server software 610 may be configured to determine whether to allow an animation client 620 to interact with the server software 610. Merely by way of example, the animation server software 610 may control access to rendered objects, object definition files, the position data, and/or the software components used to create any of these, based on any number of factors. For instance, the server software 610 (and/or another component) may be configured to identify, authenticate and/or authorize a user of the animation client 620. Based on an identity of the user of a client computer 620 (as well, in some cases, as the authentication status of the user and/or the user's authorization) the animation server software 610 may determine whether it will receive input from the client computer 620, whether it will provide position data to the animation client computer 620 and/or whether it will allow the animation client computer 620 to access files and/or animation services on the animation server 605.

Alternatively and/or in addition, the animation server 605 may be configured to provide for-fee services. Hence, the animation server software (and/or another component) may be configured to evaluate a set of payment and/or billing information (which may be, but is not necessarily, associated with an identity of a user of the animation client computer 620), and based on the set of payment and/or billing information, determine whether to allow the client 620 to interact with the server software 610 (including, as mentioned above, whether it will accept/provide data and/or allow access to files and/or services). The set of billing and/or payment data can include, without limitation, information about whether a user has a subscription for animation services and/or files, whether the user has paid a per-use fee, whether the user's account is current, and/or any other relevant information.

In some cases, various levels of interaction with the server software 610 may be allowed. Merely by way of example, if the animation server computer 605 stores a plurality of sets of rendered objects and/or object definition files (wherein, for example, each set of files comprises information describing a different animated character), the animation server 605 may allow an unregistered user to download files for a few “free” characters, while paid subscribers have access to files for an entire library of characters (it should be appreciated that there may be various levels of subscription, each with access to files for a corresponding number of characters). Similarly, a user may be allowed to pay a per-character fee for a particular character, upon which the user is allowed to download the set of files for that character. (Such commerce functionality may be provided by a separate server, third-party service, etc.) In some cases, if a user has a subscription (and/or pays a per-use fee), the user (and/or an animation client computer operated by the user) may be given access to services and/or data on the animation server. Merely by way of example, if the user pays a per-use fee to obtain object definition files for a given animated character, that user may use animation services (including without limitation those described above) for that animated character. As another example, a user may have a monthly subscription to use files for a set of animated characters, and the user may use the animation server as part of the monthly subscription. Other for-fee uses are possible as well. A user may pay, for example, a per-use and/or subscription fee for access to the services of an animation server, apart from any fees that might be paid for the use of object definition files.
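A toy model of the tiered, for-fee access just described; the tier names, character catalog, and entitlement logic are invented for illustration only.

```python
# Hypothetical entitlement check for tiered access: free characters,
# a full-library subscription, and per-character purchases.
from typing import Optional

FREE_CHARACTERS = {"demo_bot"}                         # available to anyone
LIBRARY = {"demo_bot", "hero", "villain", "sidekick"}  # full catalog

def can_download(character: str, subscription: Optional[str],
                 purchased: set) -> bool:
    if character in FREE_CHARACTERS:
        return True                  # unregistered users get a few "free" characters
    if subscription == "full":
        return character in LIBRARY  # subscribers see the whole library
    return character in purchased    # per-character fees unlock single characters

print(can_download("demo_bot", None, set()))   # True: free tier
print(can_download("hero", None, {"hero"}))    # True: per-character fee paid
print(can_download("villain", None, set()))    # False: no entitlement
```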

The animation server software 610 (and/or another software component) may also be configured to perform change tracking and/or version management of object definition files (and/or rendered objects, position data, etc.). In some embodiments, any of several known methods of change tracking and/or version management may be used for this purpose. In particular, the change tracking/version management functions may be configured to allow various levels of access to files based on an identity of a user and/or a project that the identified user is working on. Merely by way of example, an artist in a group working on a particular character, scene, film, etc. may be authorized to access (as well, perhaps, as download) files related to that character, scene, or film, while a manager or senior artist might be authorized to modify such files. An artist working on another project might not have access to any such files.
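As the text notes, any of several known change-tracking schemes would serve; the following minimal sketch (with access checks omitted for brevity) shows only the check-in/latest/history shape such a store might take.

```python
# A minimal version-tracking sketch: each check-in appends a new
# version, and the history records who made each change.
class VersionedStore:
    def __init__(self):
        self.versions = {}   # file_id -> list of (author, content)

    def check_in(self, file_id: str, author: str, content: str):
        self.versions.setdefault(file_id, []).append((author, content))

    def latest(self, file_id: str) -> str:
        return self.versions[file_id][-1][1]

    def history(self, file_id: str):
        return [(i, who) for i, (who, _) in enumerate(self.versions[file_id])]

store = VersionedStore()
store.check_in("char_01.rig", "ann", "rig v1")
store.check_in("char_01.rig", "bob", "rig v2")
print(store.latest("char_01.rig"))    # rig v2
print(store.history("char_01.rig"))   # [(0, 'ann'), (1, 'bob')]
```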

The animation server software 610 may also be configured to distribute (e.g., to other clients and/or servers) a set of modified object definition files, such that each user has access to the most recent version of these files. As described above, access to a distribution of these modified files may be controlled based on an identity of the user, various payment or billing information, etc.

Embodiments of the invention can be configured to protect stored and/or transmitted data, including without limitation object definition files, rendered objects, input data, position data, and the like. Such data can be protected in a variety of ways. As but one example, data may be protected with access control mechanisms, such as those described above. In addition, other protection measures may be implemented as well. Merely by way of example, such data may be encrypted prior to being stored at an animation server and/or prior to being transmitted between an animation server and an animation client, to prevent unauthorized access to such data. As another example, data may be digitally signed and/or certified before storage and/or before transmission between computers. Such signatures and/or certifications can be used, inter alia, to verify the identification of an entity that created and/or modified such data, which can also facilitate change tracking and/or version management of various data used by embodiments of the invention.
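As one concrete example of the signing idea (not mandated by the disclosure), a message authentication code over the payload lets the receiver verify who created or modified the data; the shared key, HMAC-SHA256 scheme, and framing below are deliberately simplified assumptions.

```python
# A sketch of signing transmitted animation data so the receiver can
# verify its origin, which also helps change tracking.
import hmac, hashlib

SHARED_KEY = b"demo-key-not-for-production"

def sign(payload: bytes) -> bytes:
    # Prepend a 32-byte MAC computed over the payload.
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).digest() + payload

def verify(message: bytes) -> bytes:
    mac, payload = message[:32], message[32:]
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("signature check failed")
    return payload

wire = sign(b'{"joint_rotations": {"shoulder": 30.0}}')
print(verify(wire))
```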

While a few examples of the data management services that can be provided by various embodiments of the invention are described above, one skilled in the art should appreciate, based on the disclosure herein, that a variety of additional services may be enabled by certain features of the disclosed embodiments.

FIG. 8 provides a generalized schematic illustration of one embodiment of a computer system 800 that can perform the methods of the invention and/or the functions of a computer, such as the animation server and client computers described above. FIG. 8 is meant only to provide a generalized illustration of various components, any of which may be utilized as appropriate. The computer system 800 can include hardware components that can be coupled electrically via a bus 805, including one or more processors 810 and one or more storage devices 815, which can include without limitation a disk drive, an optical storage device, and/or a solid-state storage device such as a random access memory (“RAM”) and/or a read-only memory (“ROM”), which can be programmable, flash-updateable and/or the like (and which can function as a data store, as described above). Also in communication with the bus 805 can be one or more input devices 820, which can include without limitation a mouse, a keyboard and/or the like; one or more output devices 825, which can include without limitation a display device, a printer and/or the like; and a communications subsystem 830, which can include without limitation a modem, a network card (wireless or wired), an infra-red communication device, and/or the like.

The computer system 800 also can comprise software elements, shown as being currently located within a working memory 835, including an operating system 840 and/or other code 845, such as the application programs (including without limitation the animation server and client software) described above and/or designed to implement methods of the invention. Those skilled in the art will appreciate that substantial variations may be made in accordance with specific embodiments and/or requirements. For example, customized hardware might also be used, and/or particular elements might be implemented in hardware, software (including portable software, such as applets), or both.

While the invention has been described with respect to exemplary embodiments, one skilled in the art will recognize that numerous modifications are possible. For example, the processes described herein may be implemented using hardware components, software components, and/or any combination thereof. Thus, although the invention has been described with respect to exemplary embodiments, it will be appreciated that the invention is intended to cover all modifications and equivalents within the scope of the following claims.

Claims

1. A system for producing animated works, the system comprising:

an animation client computer in communication with an animation server computer, the animation client computer comprising a first processor, a display device, at least one input device, and animation client software comprising: a) instructions executable by the first processor to accept a set of input data from the at least one input device, the set of input data indicating a desired position for an animated object, the animated object comprising a set of one or more polygons and a set of one or more textures to be applied to the set of one or more polygons; and b) instructions executable by the first processor to transmit the set of input data for reception by the animation server computer; and
the animation server computer comprising a second processor and animation server software comprising: a) instructions executable by the second processor to receive the set of input data from the animation client computer; b) instructions executable by the second processor to process the set of input data to determine the desired position of the animated object; c) instructions executable by the second processor to calculate a set of joint rotations defining the desired position of the animated object; and d) instructions executable by the second processor to transmit the set of joint rotations for reception by the animation client computer;
wherein the animation client software further comprises: a) instructions executable by the first processor to receive the set of joint rotations defining the position of the animated object; b) instructions executable by the first processor to calculate, based on the set of joint rotations, a set of positions for the set of one or more polygons; and c) instructions executable by the first processor to apply to the set of one or more polygons at least one of the textures from the set of one or more textures to render the animated object in the desired position.

2. A system as recited by claim 1, wherein the animation client software comprises further instructions executable by the first processor to:

display on the display device the rendered animated object.

3. A system as recited by claim 1, wherein the animation server software comprises further instructions executable by the second processor to:

store at a data store associated with the animation server computer the set of joint rotations.

4. A system as recited by claim 1, wherein:

the animation client computer is a plurality of animation client computers comprising a first animation client computer and a second animation client computer;
the first animation client computer comprises the at least one input device;
the second animation client computer comprises the display device;
the animation server computer receives the set of input data from the first animation client computer;
the animation server computer transmits the set of joint rotations for reception by the second animation client computer; and
the second animation client computer is configured to: a) receive the set of joint rotations; b) based on the set of joint rotations, calculate a set of positions for the set of one or more polygons; c) apply to the set of one or more polygons at least one of the textures from the set of one or more textures to render the animated object in the desired position; and d) display on the display device the rendered animated object.

5. A system for producing animated works, the system comprising:

an animation client computer, the animation client computer comprising a first processor, a display device, at least one input device, and animation client software comprising: a) instructions executable by the first processor to accept a set of input data from the at least one input device, the set of input data indicating a desired position for an object; and b) instructions executable by the first processor to transmit the set of input data for reception by an animation server computer; and
an animation server computer in communication with the animation client computer, the animation server computer comprising a second processor and animation server software comprising: a) instructions executable by the second processor to receive the set of input data from the animation client computer; b) instructions executable by the second processor to transmit for reception by the animation client computer a set of position data, based on the set of input data received from the animation client computer;
wherein the animation client software further comprises: a) instructions executable by the first processor to receive the set of position data from the animation server computer; and b) instructions executable by the first processor to place the object in the desired position, based at least in part on the set of position data.

6. A system as recited by claim 5, wherein the object is a virtual object.

7. A system as recited by claim 5, wherein the object is a physical object.

8. A system as recited by claim 5, wherein the object is an animated character.

9. A system as recited by claim 8, wherein the animated character comprises a set of polygons and at least one texture, and wherein placing the object in the desired position comprises rendering the animated character in the desired position.

10. A system as recited by claim 5, wherein the object is a device in communication with the client computer and is selected from the group consisting of a camera and a light source.

11. A system as recited by claim 5, wherein the object is selected from a group consisting of a virtual camera and a virtual light source.

12. A system as recited by claim 5, wherein the set of position data comprises a set of joint rotations defining a position of the object.

13. A system as recited by claim 5, wherein the set of position data comprises a set of joint rotations defining a deformation of a rig describing the object.

14. A system as recited by claim 5, wherein the set of position data comprises a position and/or orientation of a real or virtual camera, and wherein the position of the object in the scene is affected by the position and/or orientation of the real or virtual camera.

15. A system as recited by claim 5, wherein the animation client software further comprises instructions executable by the first processor to process a set of raw input data received from the at least one input device to produce a set of processed input data, such that the set of processed input data is transmitted for reception by the animation server computer.

16. A system as recited by claim 5, wherein the set of input data and/or the set of position data is protected prior to transmission.

17. A system as recited by claim 5, wherein the animation server has an associated data store configured to hold a set of one or more object definition files for the animated object, the set of one or more object definition files collectively specifying a set of polygons and textures that define the object.

18. A system as recited by claim 17, wherein the animation client software further comprises instructions executable by the first processor to download from the animation server computer at least a portion of the set of one or more object definition files necessary to render the object.

19. A system as recited by claim 18, wherein the object definition files are protected prior to transmission.

20. A system as recited by claim 18, wherein the object definition files comprise one or more textures associated with the object.

21. A system as recited by claim 18, wherein the at least a portion of the set of one or more object definition files is insufficient to independently recreate the animated object without additional data resident on the animation server computer.

22. A system as recited by claim 18, wherein the animation client computer is unable to upload to the animation server computer any modifications of the at least a portion of the set of one or more object definition files.

23. A system as recited by claim 18, wherein:

the animation client software comprises further instructions executable by the first processor to modify the at least a portion of the set of one or more object definition files to produce a set of modified object definition files; and
the animation server software comprises further instructions executable by the second processor to receive the set of modified object definition files.

24. A system as recited by claim 23, wherein the animation server software comprises further instructions executable by the second processor to track changes to the set of object definition files.

25. A system as recited by claim 23, wherein the animation server software comprises further instructions executable by the second processor to:

identify a user of the animation client computer; and
based on an identity of the user of the animation client computer, determine whether to accept the set of modified object definition files.

26. A system as recited by claim 23, wherein:

the animation client computer is a first animation client computer; and
the animation server software comprises further instructions executable by the second processor to distribute the set of modified object definition files to a set of animation client computers comprising at least a second animation client computer.

27. A system as recited by claim 17, wherein the data store is configured to hold a plurality of sets of one or more object definition files for a plurality of animated objects.

28. A system as recited by claim 27, wherein the animation server software comprises further instructions executable by the second processor to determine whether to provide to the animation client computer one or more of the sets of object definition files based on (a) a set of payment or billing information, or (b) an identity of a user of the animation client computer.

29. A system as recited by claim 5, wherein the animation server software further comprises:

instructions executable by the second processor to identify a user of the animation client computer; and
instructions executable by the second processor to determine, based at least on an identification of the user, whether to allow the animation client computer to interact with the animation server software.

30. A system as recited by claim 5, wherein the animation server software comprises further instructions executable by the second processor to determine whether to allow the animation client computer to interact with the animation server software based on a set of payment or billing information.

31. A system as recited by claim 5, wherein the animation server software comprises further instructions executable by the second processor to store at a data store associated with the animation server computer the set of position data.

32. A system as recited by claim 31, wherein the animation server software further comprises:

instructions executable by the second processor to store a plurality of sets of position data, each of the sets of position data based on a separate set of input data; and
instructions executable by the second processor to track a series of changes to a position of the object, based on the plurality of sets of position data.

33. A system as recited by claim 5, wherein:

the animation client computer is a first animation client computer; and
the system comprises a second animation client computer in communication with the animation server computer, the second animation client computer comprising a third processor, a second display device, a second input device, and a second animation client software comprising: a) instructions executable by the third processor to accept a second set of input data from the second input device, the second set of input data indicating a desired position for a second object; and b) instructions executable by the third processor to transmit the second set of input data for reception by the animation server computer; wherein:
the animation server software further comprises: a) instructions executable by the second processor to receive the second set of input data from the animation client computer; and b) instructions executable by the second processor to transmit for reception by the second animation client computer a second set of position data, based on the second set of input data received from the second animation client computer; and
the second animation client software further comprises: a) instructions executable by the third processor to receive the second set of position data from the animation server computer; and b) instructions executable by the third processor to place the second object in the desired position, based at least in part on the second set of position data.

34. A system as recited by claim 33, wherein the second object and the first object are the same object.

35. A system as recited by claim 34, wherein:

the animation server software further comprises: a) instructions executable by the second processor to transmit the second set of position data for reception by the first animation client computer; and
the animation client software on the first animation client computer further comprises: a) instructions executable by the first processor to place the object in a position defined by the second set of position data, such that the first display displays the object in a position desired by a user of the second animation client computer.

36. A system as recited by claim 33, wherein the second set of position data has no impact on a rendering of the first object on the first client computer, and wherein the first set of position data has no impact on a rendering of the second object on the second client computer.

37. A system as recited by claim 5, wherein the at least one input device comprises a device selected from the group consisting of a joystick, a game controller, a mouse, a keyboard, a steering wheel, an inertial control system, and an optical control system.

38. A system as recited by claim 5, wherein the at least one input device comprises a full or partial body motion capture unit.

39. A system as recited by claim 5, wherein the at least one input device comprises an optical, mechanical or electromagnetic system configured to capture the position or motion of an actor, puppet or prop.

40. An animation client computer for producing animated works, the animation client computer comprising at least one input device, a processor and animation client software comprising:

instructions executable by the processor to accept a set of input data from the at least one input device, the set of input data indicating a desired position for an animated object comprising a set of one or more polygons and a set of one or more textures to be applied to the set of one or more polygons;
instructions executable by the processor to transmit the set of input data for reception by an animation server computer; and
instructions executable by the processor to receive from the animation server computer a set of joint rotations defining the desired position of the animated object, the set of joint rotations being calculated based on the set of input data;
instructions executable by the processor to calculate, based on the set of joint rotations, a set of positions for the set of one or more polygons; and
instructions executable by the processor to apply to the set of one or more polygons at least one of the textures from the set of one or more textures to render the animated object in the desired position.

41. An animation client computer as recited in claim 40, further comprising a display device, wherein the animation client software further comprises:

instructions executable by the processor to display on the display device the rendered animated object.

42. An animation server computer for producing animated works, the animation server computer comprising a processor and animation server software comprising:

instructions executable by the processor to receive from an animation client computer a set of input data obtained from at least one input device, the set of input data indicating a desired position for an animated object comprising a set of one or more polygons and a set of one or more textures to be applied to the set of one or more polygons;
instructions executable by the processor to process the set of input data to determine a desired position of the animated object;
instructions executable by the processor to calculate a set of joint rotations defining the desired position of the animated object; and
instructions executable by the processor to transmit for reception by the animation client computer the set of joint rotations, such that the animation client computer can use the set of joint rotations to render the animated object in the desired position.

43. A system for producing animated works, the system comprising:

a first animation client computer, the first animation client computer comprising a first processor, a first display device, at least one first input device, and first animation client software comprising: a) instructions executable by the first processor to accept a first set of input data from the at least one first input device, the first set of input data indicating a desired position for a first object; and b) instructions executable by the first processor to transmit the first set of input data for reception by an animation server computer; and
an animation server computer in communication with the first animation client computer, the animation server computer comprising a second processor and animation server software comprising: a) instructions executable by the second processor to receive the first set of input data from the first animation client computer; b) instructions executable by the second processor to calculate a first set of position data, based on the first set of input data received from the first animation client computer; c) instructions executable by the second processor to render the first object, based at least in part on the first set of position data;
wherein the first animation client software further comprises: a) instructions executable by the first processor to display the first object in the desired position.

44. A system as recited by claim 43, wherein:

the system comprises a second animation client computer comprising a third processor, a second display device, at least one second input device, and second animation client software comprising: a) instructions executable by the third processor to accept a second set of input data from the at least one second input device, the second set of input data indicating a desired position for a second object; and b) instructions executable by the third processor to transmit the second set of input data for reception by the animation server computer;
the animation server software further comprises: a) instructions executable by the second processor to receive the second set of input data from the second animation client computer; and b) instructions executable by the second processor to transmit for reception by the second animation client computer a second set of position data, based on the second set of input data received from the second animation client computer; and
the second animation client software further comprises: a) instructions executable by the third processor to receive the second set of position data from the animation server computer; and b) instructions executable by the third processor to place the second object in the desired position for the second object, based at least in part on the second set of position data.

45. An animation software package embodied on at least one computer readable medium, the animation software package comprising:

an animation client component comprising: a) instructions executable by a first computer to accept a set of input data from at least one input device at the first computer, the input data indicating a desired position for an object; and b) instructions executable by the first computer to transmit the set of input data for reception by a second computer; and
an animation server component comprising: a) instructions executable by the second computer to receive the set of input data from the first computer; b) instructions executable by the second computer to transmit for reception by the first computer a set of position data, based on the set of input data received from the first computer;
wherein the animation client component further comprises: a) instructions executable by the first computer to receive the set of position data from the second computer; and b) instructions executable by the first computer to place the object in the desired position, based at least in part on the set of position data.

46. A method of creating an animated work, the method comprising:

accepting at an animation client computer a set of input data from at least one input device, the input data indicating a desired position for an object;
transmitting the set of input data for reception by an animation server computer;
receiving at the animation server computer the set of input data from the animation client computer;
transmitting for reception by the animation client computer a set of position data, based on the set of input data received from the animation client computer;
receiving at the client computer the set of position data from the animation server computer; and
based at least in part on the set of position data, placing the object in the desired position.
Patent History
Publication number: 20060109274
Type: Application
Filed: Oct 28, 2005
Publication Date: May 25, 2006
Applicant: Accelerated Pictures, LLC (Redmond, WA)
Inventors: Donald Alvarez (Woodinville, WA), Mark Parry (Huntington Beach, CA)
Application Number: 11/262,492
Classifications
Current U.S. Class: 345/473.000
International Classification: G06T 15/70 (20060101); G06T 13/00 (20060101);