Method and system for producing a sequence of views
A method for producing a sequence of views, comprising the steps of providing a screenplay as an initial meta script in a meta script language for a computer; converting the initial meta script into commands for controlling at least one motion picture production device; executing the converted commands with said at least one motion picture production device in order to create a sequence of views; and displaying, in real time, the sequence of views on a display device.
In view of the above, a method for producing a sequence of views is provided, the method comprising the steps of providing a screenplay as an initial meta script in a meta script language for a computer; converting the initial meta script into commands for controlling at least one motion picture production device; executing the converted commands with said at least one motion picture production device in order to create a sequence of views; and displaying, in real time, the sequence of views on a display device. Further aspects, advantages and features are apparent from the dependent claims, the description and the accompanying drawings.
A full and enabling disclosure to one of ordinary skill in the art is set forth more particularly in the remainder of the specification, including reference to the accompanying figures wherein:
Reference will now be made in detail to the various embodiments, one or more examples of which are illustrated in the figures. Each example is provided by way of explanation, and is not meant as a limitation. For example, features illustrated or described as part of one embodiment can be used on or in conjunction with other embodiments to yield yet a further embodiment. It is intended that such modifications and variations are included within the present specification.
In the context of this application, the term “screenplay” should be understood as including a blueprint for a motion picture. The screenplay may be either an adaptation of a previous work, such as a novel, a play, a TV show, or a short story, or may be an original work. Furthermore, it is intended that the term screenplay also includes the meaning of a “script”, which may be less detailed. Typically, a screenplay differs from traditional literary conventions in that it may omit emotion-related descriptions and other aspects of the story that are, in fact, conveyed visually in the end product, i.e. the motion picture.
In the context of this application, the term “script language” or “scripting language” should be understood as including a computer programming language that is interpreted command-by-command. It should be distinguished from a compiled programming language, whose source code is converted permanently into binary executable files by means of a compiler.
In the context of this application, the term “interpreter” should be understood as including a means of translating a computer programming language command-by-command into another computer programming language. In particular, the term “interpreter” as it is used in the present application may especially relate to a computer program which translates a first or meta script language into a second script language.
In the context of this application, the term “game engine” should be understood as including a software component of a computer or video game with real-time graphics ability. Typically, a game engine includes several components like a rendering engine, also called a “renderer”, for rendering 2D or 3D graphics in real time. Typically, a game engine also includes an animation engine which is adapted to create the illusion of movement of an animated object. Furthermore, a game engine may include a physics engine which simulates Newtonian (or other) physics models so that simulated objects behave as if obeying the laws of physics. In particular, physics engines may include collision detection and, optionally, also collision response functionality to handle collisions between simulated objects. Typically, the game engine also includes a scene graph which is a logical and/or spatial representation of a graphical scene. For example, a scene graph may be a collection of nodes in a graph or tree structure representing entities or objects in the scene. It should be understood by those skilled in the art that the above list of game engine elements is not exhaustive and further elements may be included. Furthermore, the term “real time 3D engine” or “real time 3D game engine” should be understood as including a game engine capable of real-time animation and rendering of 3D objects.
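The scene graph described above can be illustrated by a minimal sketch. The following Python fragment is not taken from any particular game engine; the class and node names are hypothetical, and the "transform" is reduced to a 2D offset for brevity. It shows the essential property of a scene graph: a node's position in the scene accumulates the transforms of all its ancestors.

```python
# Minimal scene-graph sketch: nodes form a tree, each with a local 2D
# offset; a node's world position accumulates the offsets of all ancestors.
# All names are illustrative, not taken from any particular engine.

class SceneNode:
    def __init__(self, name, offset=(0.0, 0.0)):
        self.name = name
        self.offset = offset          # local translation relative to parent
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def world_position(self, parent_pos=(0.0, 0.0)):
        return (parent_pos[0] + self.offset[0],
                parent_pos[1] + self.offset[1])

    def walk(self, parent_pos=(0.0, 0.0)):
        """Yield (name, world_position) for this node and all descendants."""
        pos = self.world_position(parent_pos)
        yield self.name, pos
        for child in self.children:
            yield from child.walk(pos)

root = SceneNode("scene")
table = root.add(SceneNode("table", offset=(5.0, 0.0)))
glass = table.add(SceneNode("glass", offset=(0.5, 1.0)))

positions = dict(root.walk())
```

Moving the table node would implicitly move the glass with it, which is exactly the logical/spatial grouping the scene graph provides.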
In the context of this application, the term “animation asset” should be understood as including a predefined animated sequence. For example, an animated trailer for a TV show, a falling glass, a character getting out of bed, etc. may be stored as predefined animation assets which can be triggered at a desired moment. The animation assets may be saved as complete graphical information of the animated sequence or only as a time sequence of animation variables (so-called “avars”) for the animated object, e.g. a character. Furthermore, an animation asset may also include information gathered by motion capturing equipment.
In the context of this application, the term “motion picture production device” should be understood as including a device which is used in the production of videos, films, TV serials, TV shows, internet serials, mobile serials etc. In particular, the term “motion picture production device” may relate to any hardware or software component used in the production of the aforementioned audio-visual products. In particular, the term relates to cameras, microphones, lighting consoles and robot arms as well as to software components like game engines for producing animated sequences.
In the context of this application, the term “computer-generated imagery” should be understood as including application of computer graphics for example to special effects in motion pictures, TV programs, commercials, simulators, video games or the like.
In a next step 1100, the initial meta script is converted into commands for controlling at least one motion picture production device. Typically, the conversion is done by an interpreter which translates the meta script language command-by-command into control commands for the production device. In this context, it will be understood by those skilled in the art that the control commands may themselves be commands of a script language. However, such a script language is on a lower level than the script language in which the screenplay is provided. Accordingly, the term “meta” specifies that the meta script language in which the screenplay is provided is a higher-level language compared to the language into which it is translated. An example of such a motion picture production device may be at least one of a camera, a robot arm, a lighting console, a spotlight, a sound mixer, a video server, or a video hard disc system. Typically, each of the aforementioned production devices has a computer interface so that it can be remotely controlled by a computer. Typically, there will exist different command sets for different production devices. For example, the command set for a camera or a sound mixer will be more complex than the command set for a spotlight. Therefore, it is intended that the interpreter is able to convert the meta script into various languages or command sets for different production devices. Thus, the interpreter can also translate complex instructions like “camera zoom on main actor and soften light” simultaneously into the different command sets for the camera and the spotlight. According to other embodiments, the motion picture production device includes a computer-generated imagery (CGI) device. For example, such a CGI device may be a real time 3D engine (RT3DE). In this case, the initial meta script, or at least the part relating to CGI, will be converted into commands of a RT3DE script language, i.e. into the script language of the game engine.
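The fan-out from one high-level meta-script instruction to several device command sets can be sketched as follows. The meta-script syntax, the device names and the command tuples are hypothetical illustrations, not a disclosed format; the example merely shows how an interpreter could emit separate command lists for a camera and a spotlight from the single instruction quoted above.

```python
# Sketch of an interpreter that translates one high-level meta-script
# instruction into separate command sets for different production devices.
# The meta-script phrasing and the device command tuples are hypothetical.

def interpret(meta_command):
    """Translate one meta-script line into per-device command lists."""
    device_commands = {}
    if "zoom on" in meta_command:
        # extract the zoom target, e.g. "main actor"
        target = meta_command.split("zoom on")[1].split(" and ")[0].strip()
        device_commands["camera"] = [("ZOOM", target, {"speed": 1.0})]
    if "soften light" in meta_command:
        device_commands["spotlight"] = [("DIM", None, {"level": 0.6})]
    return device_commands

cmds = interpret("camera zoom on main actor and soften light")
```

A single meta-script line thus yields one command list per addressed device, which is the "various languages or command sets" behaviour described above.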
In the next step 1200, the converted commands are executed with said at least one motion picture production device. For example, instructions contained in the screenplay, e.g. “camera zoom on face of main character” or “soft blue light”, are then realized by the production device, the camera or lighting console in the above examples, due to the control commands sent via the interface. According to some embodiments, not only a single production device but two or more production devices are controlled simultaneously. For example, a camera, a robot arm, a lighting console, a spotlight, a sound mixer, a video server, and a video hard disc system may be simultaneously controlled to execute the instructions contained in the meta script. Furthermore, a combination of controlling a “real” camera and controlling an animated scenery or background can also be carried out with the present method. For example, real actors may move within a bluescreen setup while the scenery is provided by a CGI device, e.g. a 3D game engine.
In a final step 1300, a sequence of views generated by the above method is displayed in real time on a display device. In one embodiment, the display device is a screen of the director so that the director can watch the motion picture and, e.g., instruct actors, cameramen or the like. In another embodiment, the display device is a TV set or a computer located in the home of a viewer. In this embodiment, the sequence of views produced by the above described method is transmitted to the TV set or the computer via broadcasting, the internet or similar means. It will be understood by those skilled in the art that the sequence of views may be displayed not only on a single display device but simultaneously on a larger number of display devices. For example, millions of viewers may be reached when broadcasting the produced sequence of views. According to another embodiment, not only the director but also a cameraman, a lighting technician or other staff members of a production team may each have their own display device for displaying the sequence of views in real time.
According to a further embodiment, the sequence of views is a fully animated sequence of views. For example, the production device may be a real time 3D game engine (RT3DE) for producing fully animated views. In this embodiment, the meta script is converted into commands of the RT3DE script language which are then executed on the RT3DE to produce the sequence of views. The fully animated views produced by the RT3DE are then displayed on a display device. Thus, a director watching the views on the display device may check whether the produced motion picture is in order or whether changes have to be made. As explained above, other members of the production team may also each have their own display device to check the fully animated sequence of views produced by the above-described method.
The above described method can be used to produce any desired sequence of views for any desired visual or audio-visual medium. In particular, this method is useful for producing a TV serial, an internet serial, or a mobile serial. Furthermore, the above described method is also useful for producing commercials, animated advertisements or the like. It will be understood by those skilled in the art that the above list of applications is not exhaustive and other applications of the production methods described herein are also considered to be within the scope of the appended claims.
The above described method enables efficient production of motion pictures. Providing the screenplay in a meta script language together with the computerized translation of the meta script into commands selected from one or more computer languages for directly controlling production devices achieves an at least partial automation of motion picture production. For example, in conventional motion picture production the screenplay had to be copied and distributed to the director, the cameramen, the lighting technicians, the actors and, in principle, almost every member of the production staff. With the present method, by contrast, the screenplay is available as a machine-readable meta script from which the production devices are controlled directly.
According to a further embodiment, predefined animation assets are arranged on a time line to define the sequence of views. In the context of this application, the term “time line” should be understood as defining the chronological order of actions and/or dialogues in a screenplay and/or motion picture. In other words, the time line defines the chronological order of the views within a sequence of views and/or the chronological order of sound accompanying the views. For example, when using a GUI the animation assets may be represented by icons which can be arranged on the time line by a simple drag-and-drop action. In another example, the time line can be graphically represented by a line shown on the GUI. However, other representations of the time line may also be used, especially for complex settings. Typically, the animation assets include at least one of the following: a ragdoll skeleton of a character, a 3D model of a character, a full body animation information for a character, a facial animation information for a character, a predefined motion sequence for a character, motion capturing information for a character, a surface texture information, a scenery information. Thus, predefined animations are provided to an author who may compose the screenplay from the predefined animations. Of course, the author will also have the option of creating new animation assets and/or altering predefined animation assets.
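The arrangement of animation assets on a time line can be sketched minimally as follows. The class, the asset names and the tuple layout are hypothetical; the example only illustrates the essential operations of placing assets in chronological order and querying which assets are active at a given playback time.

```python
# Sketch of a time line on which predefined animation assets are arranged.
# Each asset is stored with a start time and a duration; the time line can
# report which assets are active at a given playback time. Names are
# illustrative only.

class Timeline:
    def __init__(self):
        self.entries = []             # list of (start, duration, asset_name)

    def place(self, asset_name, start, duration):
        self.entries.append((start, duration, asset_name))
        self.entries.sort()           # keep chronological order

    def active_at(self, t):
        return [name for start, dur, name in self.entries
                if start <= t < start + dur]

tl = Timeline()
tl.place("character_gets_out_of_bed", start=0.0, duration=4.0)
tl.place("falling_glass", start=3.0, duration=1.5)

overlap = tl.active_at(3.5)
```

In a GUI, the drag-and-drop action described above would amount to calling `place` with the drop position translated into a start time.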
However, the method according to the present embodiment allows altering the initial meta script or the content of the initial meta script. For this purpose, one or more users may input data in method step 5200. Typically, inputting the user input data is effected via at least one input device. In one example, the input device is a manual input device like a keyboard, a joystick, a mouse, a scrollwheel, a trackball or a similar device. For example, a cameraman (user) may alter the position of a camera via a joystick while altering the zoom of the camera via a keyboard. According to another additional or optional embodiment, the input device is a motion capturing device. Motion capturing, sometimes also called motion tracking or mocap, is a technique of digitally recording movements. With a motion capturing device, the movement of one or more actors can be recorded and used for animating characters. For example, an actor may wear a special mocap suit having multiple active or passive optical markers which can be tracked by a camera system. The movement of the markers is then used to animate a 2D or 3D model of an animated character. With modern motion capturing devices and animation software, e.g. 2D or 3D game engines, the motion capturing data can be transformed into animated views of a character in real time.
In method step 5300, the user input data is used to alter the meta script, i.e. to provide an altered version of the meta script. In the present embodiment, altering of the initial meta script is allowed prior to conversion into the converted commands. Furthermore, the initial meta script can typically be altered in real time, i.e. without any noticeable delay. When inputting data, a user may alter the meta script commands themselves or only the content thereof, e.g. parameter values of meta script commands. For example, a cameraman may decide to zoom on an object although this was not scheduled in the initial screenplay. Accordingly, a new “zoom” command has to be created and added to the meta script. Alternatively, a cameraman may control only the speed of a zoom or pan shot, thus altering only a parameter (speed) of the already scheduled “zoom” or “pan shot” command. It will be understood by those skilled in the art that this principle can be transferred also to other users like directors, actors, lighting technicians and/or any other member of the production staff.
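The two kinds of alteration described above — inserting a new command versus changing only a parameter of a scheduled command — can be sketched as follows. The dictionary-based command format and the event structure are hypothetical, chosen only to make the distinction concrete.

```python
# Sketch of altering a meta script from user input: either a new command
# is inserted (a zoom that was not scheduled) or only a parameter of an
# already scheduled command is changed (e.g. the speed of a pan shot).
# The command format is hypothetical.

meta_script = [
    {"cmd": "pan_shot", "target": "scenery", "speed": 0.5},
]

def apply_user_input(script, user_event):
    if user_event["kind"] == "new_command":
        script.append(user_event["command"])
    elif user_event["kind"] == "set_parameter":
        script[user_event["index"]][user_event["name"]] = user_event["value"]
    return script

# cameraman adds an unscheduled zoom ...
apply_user_input(meta_script, {
    "kind": "new_command",
    "command": {"cmd": "zoom", "target": "main actor", "speed": 1.0},
})
# ... and slows down the already scheduled pan shot
apply_user_input(meta_script, {
    "kind": "set_parameter", "index": 0, "name": "speed", "value": 0.25,
})
```

In both cases the result is simply an altered meta script, which is what step 5400 then converts into control commands.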
Typically, a set of alterable variables is defined for each user, wherein a user's set of alterable variables contains the variables that can be altered by this user. For example, the set of alterable variables for a cameraman includes camera-related variables only whereas a set of alterable variables for an actor includes only variables related to the character played by this actor. Thus, users can influence the produced sequence of views only within their restricted range of alterable variables.
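The per-user restriction described above amounts to a simple permission check before any alteration is applied. The role names and variable identifiers below are hypothetical; the sketch only shows that an alteration outside a user's set of alterable variables is rejected.

```python
# Sketch of restricting each user to a set of alterable variables: an
# alteration is applied only if the variable lies in that user's set.
# Role names and variable identifiers are illustrative.

ALTERABLE = {
    "cameraman": {"camera.position", "camera.zoom", "camera.speed"},
    "actor_anna": {"character.anna.pose", "character.anna.speech"},
}

def try_alter(user, variable, value, state):
    """Apply the alteration only if the user may change this variable."""
    if variable in ALTERABLE.get(user, set()):
        state[variable] = value
        return True
    return False

state = {}
ok = try_alter("cameraman", "camera.zoom", 2.0, state)                  # allowed
denied = try_alter("cameraman", "character.anna.pose", "sitting", state)  # rejected
```

Each user thus influences the produced sequence of views only within their own restricted range of variables.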
After altering the initial meta script, i.e. the screenplay, by the user input information, the altered meta script is converted into commands for the production device(s) in step 5400. This step 5400 is similar to steps 1100, 2200, 3100, and 4100 except that it is executed on the altered meta script. Therefore, the above explanations also apply to step 5400. In particular, the altered meta script is converted sequentially into the control commands in step 5400. As has been described above, sequential conversion is typically done command-by-command by means of an interpreter. The arrows on the right hand side of
The above described option to alter the initial meta script, i.e. the screenplay or the way in which the screenplay is realized, in real time by user input information approximates the production method shown in
In the following, examples and embodiments of variables which may be altered and/or controlled via user input data are described. It will be understood by those skilled in the art that the following list of examples and/or embodiments is not intended to be limiting. In one example, the steps 5200 and 5300 of altering the initial meta script include at least one of the following: controlling, in real time, a camera view during display of the sequence of views, wherein a camera view information is recorded and used to modify the initial meta script to generate a modified meta script including the recorded camera view information. In another example, the steps 5200 and 5300 of altering the initial meta script include at least one of the following: controlling, in real time, a character during display of the sequence of views, wherein a character information is recorded and used to modify the initial meta script to generate a modified meta script including the recorded character information. In particular, said character information may include at least one of the following: a 3D model of a character, a surface texture information for a character, a full body animation information for a character, a facial animation information for a character, a motion sequence for a character, a motion capturing information for a character. In a further example, the steps 5200 and 5300 of altering the initial meta script include: including, in real time, a character speech during display of the sequence of views, wherein a character speech information is recorded and used to modify the initial meta script to generate a modified meta script including the recorded character speech information. Therein, a lip movement animation information for a character may be generated, in real-time, depending on the character speech information, wherein the lip movement animation information is included in the modified or altered meta script. 
In one example, the steps 5200 and 5300 of altering the initial meta script include controlling, in real time, a scenery during display of the sequence of views, wherein a scenery information is recorded and used to modify the initial meta script to generate a modified meta script including the recorded scenery information. Therein, the scenery information may include at least one of the following: a background information, an object information, a 3D model for an object, a surface texture information for an object, a sound information for an object, an animation information for an object, an effect information for an object.
In another embodiment, coherence control is carried out so that conflicting alterations are resolved. For example, if an actor navigates his character to a position occupied by a solid object, e.g. a table or a stone, a collision between the character and the solid object is detected. The same may happen if two characters are navigated on colliding paths. In such situations, different options for resolving the situation may be chosen, e.g. simply outputting a warning or not moving the character any farther. In embodiments using a game engine as a production device, a collision detection of the game engine (or its physics engine) may be utilized within the coherence control.
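The "not moving the character any farther" option of the coherence control can be sketched as follows. Circle colliders and the specific positions are assumptions made for brevity; a game engine's own collision detection would normally supply the `collides` test.

```python
# Sketch of a coherence check: a character's requested position is
# accepted only if it does not collide with a solid object; otherwise the
# character is simply not moved any farther. Circle colliders with a
# fixed minimum distance are an assumption made for brevity.

def collides(pos_a, pos_b, min_dist=1.0):
    dx, dy = pos_a[0] - pos_b[0], pos_a[1] - pos_b[1]
    return dx * dx + dy * dy < min_dist * min_dist

def move_character(char_pos, requested_pos, solid_objects):
    """Return the new position, refusing moves that cause a collision."""
    for obstacle in solid_objects:
        if collides(requested_pos, obstacle):
            return char_pos        # conflicting alteration: keep old position
    return requested_pos

table = (5.0, 5.0)                                       # a solid object
blocked = move_character((0.0, 0.0), (5.2, 5.1), [table])  # collision: stay
allowed = move_character((0.0, 0.0), (2.0, 2.0), [table])  # free: move
```

The alternative resolution mentioned above, outputting a warning, would replace the early return with a logged message; checking characters against each other handles the colliding-paths case the same way.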
In the following, examples and embodiments of variables which may be altered and/or controlled via user input data are described. It will be understood by those skilled in the art that the following list of examples and/or embodiments is not intended to be limiting. In one example, the steps 6300 and 6400 of altering the initial meta script include at least one of the following: controlling, in real time, a camera view during display of the sequence of views, wherein a camera view information is recorded and used to modify the initial meta script to generate a modified meta script including the recorded camera view information. In another example, the steps 6300 and 6400 of altering the initial meta script include at least one of the following: controlling, in real time, a character during display of the sequence of views, wherein a character information is recorded and used to modify the initial meta script to generate a modified meta script including the recorded character information. In particular, said character information may include at least one of the following: a 3D model of a character, a surface texture information for a character, a full body animation information for a character, a facial animation information for a character, a motion sequence for a character, a motion capturing information for a character. In a further example, the steps 6300 and 6400 of altering the initial meta script include: including, in real time, a character speech during display of the sequence of views, wherein a character speech information is recorded and used to modify the initial meta script to generate a modified meta script including the recorded character speech information. Therein, a lip movement animation information for a character may be generated, in real-time, depending on the character speech information, wherein the lip movement animation information is included in the modified or altered meta script. 
In one example, the steps 6300 and 6400 of altering the initial meta script include controlling, in real time, a scenery during display of the sequence of views, wherein a scenery information is recorded and used to modify the initial meta script to generate a modified meta script including the recorded scenery information. Therein, the scenery information may include at least one of the following: a background information, an object information, a 3D model for an object, a surface texture information for an object, a sound information for an object, an animation information for an object, an effect information for an object.
In another embodiment, coherence control is carried out so that conflicting alterations are resolved. For example, if an actor navigates his character to a position occupied by a solid object, e.g. a table or a stone, a collision between the character and the solid object is detected. The same may happen if two characters are navigated on colliding paths. In such situations, different options for resolving the situation may be chosen, e.g. simply outputting a warning or not moving the character any farther. In embodiments using a game engine as a production device, a collision detection of the game engine (or its physics engine) may be utilized within the coherence control.
Typically, a set of alterable variables is defined for each user, wherein a user's set of alterable variables contains the variables that can be altered by this user. For example, the set of alterable variables for a camera-controlling user includes only camera-related variables whereas a set of alterable variables for a character animator includes only variables related to the character controlled by this user. Thus, users can influence the produced sequence of views only within their restricted range of alterable variables.
After altering the initial meta script, i.e. the screenplay, by the user input information, the altered meta script is converted into commands for a real time 3D game engine (RT3DE) in step 8400. In particular, the altered meta script is converted sequentially into commands of the RT3DE script language in step 8400. As has been described above, sequential conversion is typically done command-by-command by means of an interpreter. The arrows on the right hand side of
The above described option to alter the initial meta script in real time by user input information brings the production method for a fully-animated motion picture closer to the conventional process of motion picture production. In particular, actors may control their characters, cameramen may control their cameras etc. However, the advantage of the present production method is still obtained since all the information is transformed into a meta script language, i.e. into an altered meta script defining the sequence of views to be produced. Furthermore, a commercially available RT3DE is utilized for rendering the sequence of views in real time. Thus, the increased efficiency of computerized and fully-animated motion picture production can be maintained while still allowing artistic expression and influence of the director, actors and/or other members of the production staff. In particular, the above described motion picture production method is more time-efficient than conventional production methods for animated motion pictures. Furthermore, an RT3DE can be implemented on relatively inexpensive computers compared with the large specialized rendering farms operated by animation studios like Pixar and others. Due to the faster production time and the reduced hardware costs, the present production method promotes the development of fully-animated motion pictures.
In the following, examples and embodiments of variables which may be altered and/or controlled via user input data are described. It will be understood by those skilled in the art that the following list of examples and/or embodiments is not intended to be limiting. In one example, the steps 8200 and 8300 of altering the initial meta script include at least one of the following: controlling, in real time, a camera view during display of the sequence of views, wherein a camera view information is recorded and used to modify the initial meta script to generate a modified meta script including the recorded camera view information. In another example, the steps 8200 and 8300 of altering the initial meta script include at least one of the following: controlling, in real time, a character during display of the sequence of views, wherein a character information is recorded and used to modify the initial meta script to generate a modified meta script including the recorded character information. In particular, said character information may include at least one of the following: a 3D model of a character, a surface texture information for a character, a full body animation information for a character, a facial animation information for a character, a motion sequence for a character, a motion capturing information for a character. In a further example, the steps 8200 and 8300 of altering the initial meta script include: including, in real time, a character speech during display of the sequence of views, wherein a character speech information is recorded and used to modify the initial meta script to generate a modified meta script including the recorded character speech information. Therein, a lip movement animation information for a character may be generated, in real-time, depending on the character speech information, wherein the lip movement animation information is included in the modified or altered meta script. 
In one example, the steps 8200 and 8300 of altering the initial meta script include controlling, in real time, a scenery during display of the sequence of views, wherein a scenery information is recorded and used to modify the initial meta script to generate a modified meta script including the recorded scenery information. Therein, the scenery information may include at least one of the following: a background information, an object information, a 3D model for an object, a surface texture information for an object, a sound information for an object, an animation information for an object, an effect information for an object.
In another embodiment, coherence control is carried out so that conflicting alterations are resolved. For example, if an actor navigates his character to a position occupied by a solid object, e.g. a table or a stone, a collision between the character and the solid object is detected. The same may happen if two characters are navigated on colliding paths. In such situations, different options for resolving the situation may be chosen, e.g. simply outputting a warning or not moving the character any farther. In embodiments using a game engine as a production device, a collision detection of the game engine (or its physics engine) may be utilized within the coherence control.
In one embodiment, the computer program 9000 provides a set of alterable user input information for each user. A user's set of alterable input information contains the information that can be altered by this user.
In another embodiment, the computer program 9000 is adapted to create an altered version of the initial screenplay information by logging the alterations caused by the user input information and including said alterations into the initial screenplay information. In one example, computer program 9000 is adapted to create the altered version of the screenplay information parallel to converting the initial screenplay information into the control commands by interpreter 9300. In another example, computer program 9000 is adapted to buffer the alterations during conversion of the initial screenplay information into the control commands. The altered version of the screenplay information is then created after conversion of the screenplay information. In this embodiment, computer program 9000 may include a re-translator (not shown) which is adapted to retranslate control commands into the meta script language in which the screenplay information is provided.
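The buffering variant described above — logging alterations during conversion and merging them into the screenplay afterwards — can be sketched as follows. The command dictionaries and the log format are hypothetical; the sketch shows only that the initial screenplay information is left intact while an altered version is created from the logged alterations.

```python
# Sketch of creating an altered version of the screenplay by logging the
# alterations caused by user input and merging them in after conversion
# (the "buffering" variant). The command and log structures are
# hypothetical.

initial_screenplay = [
    {"cmd": "zoom", "target": "main actor", "speed": 1.0},
    {"cmd": "light", "color": "blue", "level": 0.8},
]

alteration_log = []

def log_alteration(index, name, value):
    """Buffer an alteration during conversion instead of applying it now."""
    alteration_log.append((index, name, value))

def build_altered_screenplay(screenplay, log):
    altered = [dict(cmd) for cmd in screenplay]  # keep the initial version intact
    for index, name, value in log:
        altered[index][name] = value
    return altered

log_alteration(0, "speed", 0.5)   # cameraman slowed the zoom
log_alteration(1, "level", 0.6)   # lighting technician softened the light
altered = build_altered_screenplay(initial_screenplay, alteration_log)
```

A re-translator as mentioned above would additionally map already converted control commands back into such meta script entries before merging.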
Typically, computer program 9000 is adapted to be executed on a server of a computer system having a client-server architecture. In such an embodiment, the first to third interfaces are interfaces to clients of the client-server architecture. In other embodiments, computer program 9000 may further be adapted to be executed on a client in a client-server architecture. In this embodiment, computer program 9000 may further include a fourth interface adapted to transmit user input information to a server.
The system 10 further includes at least one first client 200 which is adapted to provide screenplay information in a meta script language to the server 100. For example, first client 200 may include a file server on which the screenplay information is saved. Furthermore, a graphical user interface (GUI) may be implemented on first client 200, thus allowing an author to create or convert a screenplay in the meta script language.
The system 10 further includes at least one second client 300 which is adapted to provide user input information for altering the screenplay information provided by first client 200. For example, the at least one second client is connected to at least one input device for inputting user input information for altering the screenplay information. In one example, the input device is a manual input device like a keyboard, joystick, a mouse, a scrollwheel, a trackball or a similar device. For example, a cameraman (user) may alter the position of a camera via a joystick while altering the zoom of the camera via a keyboard or scrollwheel. According to another additional or optional embodiment, the input device is a motion capturing device. Motion capturing, sometimes also called motion tracking or mocap, is a technique of digitally recording movements. With a motion capturing device, the movement of one or more actors can be recorded and used for animating characters. For example, an actor may wear a special mocap suit having multiple active or passive optical markers which can be tracked by a camera system. The movement of the markers is then used to animate a 2D or 3D model of an animated character. Thus, the user input device connected to second client 300 may be a complex system in itself. For example, the input device may include virtual reality (VR) devices, e.g. a VR glove, a VR suit or the like.
Furthermore, system 10 typically includes at least one motion picture production device 400 which is connected to server 100 and adapted to be controlled by control commands transmitted from server 100. For example, the motion picture production device may be at least one of a camera, a robot arm, a lighting console, a spotlight, a sound mixer, a video server, and a video hard disc system. In other embodiments, the motion picture production device is a computer-generated imagery (CGI) device, e.g. a real time 3D engine (RT3DE).
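Routing the converted control commands from server 100 to the various production devices can be sketched as below. The "device:action" command format and the device names are assumptions made for this example only.

```python
# Illustrative sketch: the server dispatches converted commands to
# heterogeneous production devices (camera, spotlight, RT3DE, ...).
# Command syntax and device names are assumptions.

class Device:
    def __init__(self, name):
        self.name = name
        self.log = []   # commands received by this device

    def execute(self, command):
        self.log.append(command)

def dispatch(commands, devices):
    """Route each 'device:action' command to the addressed device;
    commands for unknown devices are silently skipped here."""
    for command in commands:
        target, _, action = command.partition(":")
        if target in devices:
            devices[target].execute(action)

devices = {"camera": Device("camera"), "spotlight": Device("spotlight")}
dispatch(["camera:pan left", "spotlight:dim 50"], devices)
```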
Finally, system 10 includes at least one display device 500 which is connected to the computer system and is adapted to display a sequence of views, i.e. a motion picture produced with system 10. For example, display device 500 is a monitor or other device for visualizing video and audio data and/or computer-generated graphics and/or sound. As shown in
From the above description, it will be understood by those skilled in the art that system 10 is specifically adapted for executing a production method according to embodiments described or indicated herein. Furthermore, it will be understood by those skilled in the art that at least a part of the computer system may be realized as a workstation or PC cluster.
This written description uses examples to enable any person skilled in the art to make and use the described technical teaching. While various specific embodiments have been described herein, those skilled in the art will recognize that the technical teaching can be practiced also with modification within the spirit and scope of the claims. Especially, mutually non-exclusive features of the embodiments described above may be combined with each other. The patentable scope is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.
Claims
1. A method for producing a sequence of views, comprising the steps of:
- (a) providing a screenplay as an initial meta script in a meta script language for a computer;
- (b) converting the initial meta script into commands for controlling at least one motion picture production device;
- (c) executing the converted commands with said at least one motion picture production device in order to create a sequence of views; and
- (d) displaying, in real time, the sequence of views on a display device.
2. The method according to claim 1, wherein the motion picture production device is at least one of a camera, a robot arm, a lighting console, a spotlight, a sound mixer, a video server, and a video hard disc system.
3. The method according to claim 1, wherein the motion picture production device is a computer-generated imagery (CGI) device.
4. The method according to claim 3, wherein the CGI device is a real time 3D engine (RT3DE) and wherein the initial meta script is converted into commands of an RT3DE script language.
5. The method according to claim 4, further including the step of linking animation assets to the initial meta script.
6. The method according to claim 1, wherein the sequence of views is a fully animated sequence of views.
7. The method according to claim 1, wherein, in step (a), the screenplay is created as an initial meta script.
8. The method according to claim 1, wherein, in step (a), the screenplay is transformed into an initial meta script.
9. The method according to claim 7 or 8, wherein, in step (a), predefined animation assets are arranged on a time line to define the sequence of views.
10. The method according to claim 9, wherein the animation assets include at least one of the following: a ragdoll skeleton of a character, a 3D model of a character, a full body animation information for a character, a facial animation information for a character, a predefined motion sequence for a character, motion capturing information for a character, a surface texture information, a scenery information.
11. The method according to claim 7 or 8, wherein, in step (a), a graphical user interface is used for creating or transforming the screenplay into the initial meta script.
12. The method according to claim 1, wherein, in step (b), the initial meta script is converted sequentially while executing previously converted commands with the motion picture production device.
13. The method according to claim 1, wherein altering the initial meta script or the content of the initial meta script is allowed for one or more users during execution of the converted commands with the motion picture production device.
14. The method according to claim 13, wherein the altering of the initial meta script is allowed prior to conversion into the converted commands.
15. The method according to claim 13, wherein the altering of the initial meta script is allowed after conversion into the converted commands.
16. The method according to claim 13, wherein the initial meta script is altered in real time.
17. The method according to claim 13, wherein the altering of the initial meta script is effected by a user via at least one input device.
18. The method according to claim 17, wherein the at least one input device is a manual input device.
19. The method according to claim 17, wherein the at least one input device is a motion capturing device.
20. The method according to claim 13, wherein a set of alterable variables is defined for each user, wherein a user's set of alterable variables contains the variables that can be altered by this user.
21. The method according to claim 13, wherein the step of altering the initial meta script includes
- controlling, in real time, a camera view during display of the sequence of views, wherein a camera view information is recorded and used to modify the initial meta script to generate a modified meta script including the recorded camera view information.
22. The method according to claim 13, wherein the step of altering the initial meta script includes controlling, in real time, a character during display of the sequence of views,
- wherein a character information is recorded and used to modify the initial meta script to generate a modified meta script including the recorded character information.
23. The method according to claim 22, wherein said character information includes at least one of the following: a 3D model of a character, a surface texture information for a character, a full body animation information for a character, a facial animation information for a character, a motion sequence for a character, a motion capturing information for a character.
24. The method according to claim 13, wherein the step of altering the initial meta script includes
- including, in real time, a character speech during display of the sequence of views, wherein a character speech information is recorded and used to modify the initial meta script to generate a modified meta script including the recorded character speech information.
25. The method according to claim 24, wherein a lip movement animation information for a character is generated, in real-time, depending on the character speech information, wherein the lip movement animation information is included in the modified meta script.
26. The method according to claim 13, wherein the step of altering the initial meta script includes
- controlling, in real time, a scenery during display of the sequence of views, wherein a scenery information is recorded and used to modify the initial meta script to generate a modified meta script including the recorded scenery information.
27. The method according to claim 26, wherein the scenery information includes at least one of the following: a background information, an object information, a 3D model for an object, a surface texture information for an object, a sound information for an object, an animation information for an object, an effect information for an object.
28. The method according to claim 13, further including coherence control so that conflicting alterations are resolved.
29. The method according to claim 13, wherein an altered version of the initial meta script is generated by logging the alterations and including the alterations into the initial meta script.
30. The method according to claim 29, wherein the altered version of the meta script is generated in parallel with the execution of the initial meta script.
31. The method according to claim 29, wherein the alterations are buffered during execution of the initial meta script and the altered version of the meta script is generated after the execution of the initial meta script.
32. The method according to claim 29, wherein, if an alteration is executed on one or more converted commands, the one or more altered commands are translated back into the meta script language to be included into the altered meta script.
33. The method according to claim 1, wherein an altered version of the meta script is used as the initial meta script to allow iterative alteration of the meta script to generate an iteratively modified sequence of views.
34. The method according to claim 33, further comprising the step of converting the iteratively modified sequence of views into a high definition SDI video format or a computer-based video file format.
35. The method according to claim 1, wherein the method is used to produce a TV serial, an internet serial, or a mobile serial.
36. A method of producing a fully-animated motion picture, a fully-animated movie, a fully-animated TV serial, a fully-animated internet serial, or a fully-animated mobile serial, including the steps of
- providing a meta script in a meta script language for a computer, the meta script representing a screenplay for the motion picture, movie, TV serial, internet serial, or mobile serial;
- linking animation assets to the meta script;
- converting the meta script into commands for controlling a real time 3D game engine;
- executing the converted commands with said real time 3D game engine in order to create a fully-animated sequence of views; and
- displaying, in real time, the fully-animated sequence of views on a display device.
37. The method according to claim 36, wherein altering the meta script or the content of the meta script is allowed for one or more users during execution of the converted commands with the real time 3D game engine.
38. The method according to claim 37, wherein the altering of the meta script is effected by a user via at least one of a manual input device and a motion capturing device.
39. The method according to claim 37, wherein a set of alterable variables is defined for each user, wherein a user's set of alterable variables contains the variables that can be altered by this user.
40. A computer program for converting a screenplay written in a meta script language into a sequence of commands of a motion picture production device, comprising:
- a first interface adapted to receive screenplay information provided in a meta script language for a computer;
- at least one second interface adapted to receive user input information for altering said screenplay information received via the first interface;
- an interpreter adapted to convert the screenplay information provided in said meta script language into control commands for the motion picture production device; and
- at least one third interface adapted to transmit the converted control commands to at least one motion picture production device.
41. The computer program according to claim 40, wherein the motion picture production device is at least one of a camera, a robot arm, a lighting console, a spotlight, a sound mixer, a video server, and a video hard disc system.
42. The computer program according to claim 40, wherein the motion picture production device is a computer-generated imagery (CGI) device.
43. The computer program according to claim 42, wherein the CGI device is a real time 3D engine (RT3DE), and wherein the screenplay information is converted into commands of an RT3DE script language.
44. The computer program according to claim 40, wherein a set of alterable user input information is defined for each user, wherein a user's set of alterable input information contains the information that can be altered by this user.
45. The computer program according to claim 40, wherein the program is further adapted to create an altered version of the initial screenplay information by logging the alterations caused by the user input information and including said alterations into the initial screenplay information.
46. The computer program according to claim 45, wherein the program is adapted to create the altered version of the screenplay information in parallel with converting the initial screenplay information into the control commands.
47. The computer program according to claim 45, wherein the program is adapted to buffer said alterations during conversion of the initial screenplay information into said control commands, and to create the altered version of the screenplay information after conversion of the screenplay information into said control commands.
48. The computer program according to claim 45, the program comprising a re-translator adapted to retranslate control commands into said meta script language.
49. The computer program according to claim 40, wherein the computer program is adapted to be executed on a server of a client-server architecture, and wherein the first to third interfaces are interfaces to clients of the client-server architecture.
50. The computer program according to claim 40, wherein the computer program is adapted to be executed on a client in a client-server architecture, wherein the computer program further includes a fourth interface adapted to transmit user input information to a server.
51. A system for producing a sequence of views, comprising:
- a computer system having a client-server architecture and comprising:
  - at least one server adapted to receive screenplay information provided in a meta script language, to receive user input information for altering said screenplay information, to convert the screenplay information provided in a meta script language into control commands for a motion picture production device, and to transmit the converted control commands to the at least one motion picture production device,
  - at least one first client adapted to provide screenplay information in a meta script language to said server, and
  - at least one second client adapted to provide user input information for altering said screenplay information,
- at least one motion picture production device connected to said server and being adapted to be controlled by said converted control commands, and
- at least one display device connected to said computer system and being adapted to display a sequence of views.
52. The system according to claim 51, wherein the motion picture production device is at least one of a camera, a robot arm, a lighting console, a spotlight, a sound mixer, a video server, and a video hard disc system.
53. The system according to claim 51, wherein the motion picture production device is a computer-generated imagery (CGI) device.
54. The system according to claim 53, wherein the CGI device is a real time 3D engine (RT3DE) and wherein the initial meta script is converted into commands of an RT3DE script language.
55. The system according to claim 54, wherein the computer system further comprises a file server for storing at least one of animation assets, screenplay information, and a sequence of views.
56. The system according to claim 51, wherein said at least one second client is connected to at least one input device for inputting user input information for altering said screenplay information.
57. The system according to claim 56, wherein the at least one input device is a manual input device.
58. The system according to claim 56, wherein the at least one input device is a motion capturing device.
59. The system according to claim 51, wherein at least a part of the computer system is realized as a workstation cluster.
60. The system according to claim 51, further comprising a multi-channel HD video server for storing the sequence of views.
Type: Application
Filed: Jun 7, 2007
Publication Date: Dec 11, 2008
Inventors: Ernst Feiler (Falkensee), Thomas Knop (Berlin), Jonas Baur (Berlin), Jan Marquardt (Berlin)
Application Number: 11/810,839
International Classification: G06F 3/00 (20060101);