Method and apparatus for advanced leadership training simulation

A method and apparatus are disclosed for advanced leadership training simulation wherein the simulation teaches skills in leadership and related topics through an Internet-based distance-learning architecture. The distance-learning features link trainees at remote locations into a single collaborative experience via computer networks. Instructional storylines are created and programmed into a computer and then delivered as a simulated but realistic story to one or more participants. The participants' reactions are monitored and compared with expected results. The storyline may be altered in response to the participants' responses, and synthetic characters may be generated to act as automated participants or coaches. Constructive feedback is provided to the participants during or after the simulation.

Description
FIELD OF THE INVENTION

[0002] The present invention relates generally to simulation technology, and more particularly to the use of simulation technology to teach skills in leadership and related topics through an Internet-based distance-learning architecture, as well as for general consumer gaming use. The distance-learning features link participants at remote locations into a single collaborative experience via computer networks.

BACKGROUND OF THE INVENTION

[0003] Recent United States Army studies have indicated that the leadership requirements of the modern war fighting force involve several significant differences from historical experience. Some factors of particular importance to the new generation of military leaders include: (i) the broad variety of people-centered, crisis-based military missions, including counter-terrorism, peacekeeping, operations in urban terrain and the newly emphasized homeland defense, in addition to more conventional warfare; (ii) the command of and dependence on a number of complex weapon, communication and intelligence systems involving advanced technology and specialized tasks; (iii) increased robotic and automated elements present on the battlefield; (iv) distributed forces at all echelons, requiring matching forms of distributed command; and (v) increased emphasis on collaboration in planning and operations.

[0004] The demographics of the military leadership corps are changing in several ways, and among the positive features of this change is a high level of sophistication and experience in computer use, including computer communication, gaming and data acquisition. This means that modern training simulations must be as motivating and as well-implemented as commercial gaming and information products in order to capture and hold the attention of the new army generation.

[0005] There are currently highly developed aircraft, tank and other ground vehicle virtual simulators that realistically present military terrain and the movement of the vehicles within the terrain. Such simulators are very effective at teaching basic operational skills. Networks of virtual simulators, including SIMNET, CCTT and the CATT family, are also available to teach leader coordination of combined arms weapons systems during conventional and MOUT (Military Operations on Urbanized Terrain) warfare in highly lifelike settings. Likewise, constructive simulations such as BBS, Janus, WARSIM, WARSIM 2000 and others are very effective in focusing on the tactical aspects of leadership—representing movement of material, weapons and personnel—particularly for higher echelon maneuvers.

[0006] But the same level of developmental effort has not been directed toward equally effective virtual and/or constructive simulators for training leadership and related cognitive skills in scenarios involving substantial human factor challenges. Driving a tank does not require the background knowledge, the collaboration or the complex political, diplomatic and psychological judgments that must be made in a difficult, people-centered crisis leadership situation. These judgments depend largely on the actual and estimated behavior of human participants, both friend and foe, in the crisis situation. And unfortunately, the complete modeling of complex human behavior is still beyond current technical capabilities.

[0007] As a result, these kinds of leadership skills have routinely been taught in the classroom through lectures and exercises featuring handouts and videotapes. It is possible for a good instructor to build the tension needed to approximate a leadership crisis, but sustaining that tension is difficult. Showing the heartbreak of the crisis and the gut-wrenching decisions that must be made is not the strong suit of paper-and-pencil materials or low-budget, home-grown videos.

[0008] Large classroom exercises such as “Army After Next” and “The Crisis Decision Exercise” at the National Defense University have attempted to give some sense of the leaders' experience through week-long exercises that involve months of planning. These exercises are effective, but they cannot be distributed widely. They are also not easy to update and modify, and they require a large contingent of designers and developers, as well as on-site operators, to run them.

[0009] Story-based simulations, on the other hand, increase participant attention and retention because story-based experiences are more involving and easier to remember. Participants are also able to build judgmental, cognitive and decision-making leadership skills because the simulations provide a realistic context in which to model outstanding leadership behavior. Story-based simulations can teach innovation because they are able to challenge participants by providing dramatic encounters with unexpected events and possibilities. Also, story-based simulations overcome the limitations of current constructive and virtual simulations in modeling complex human behavior, which is an increasing part of today's leadership challenges.

[0010] A prime consideration in training modern leadership skills is the establishment of a simulation network for collective training that reflects the real world network of distributed command nodes. Today's budgetary constraints, which necessitate the most efficient use of resources, require that collective as well as individualized training simulation be delivered remotely via distance learning as well as in classrooms, to avoid costly travel and subsistence.

[0011] Crisis-based leadership training requires an awareness of human factors that has been especially difficult to teach through media or the classroom. Giving complexity to an adversary's personality or turning a political confrontation into a battle of wits and will (things that, in fact, represent so much of today's military decision making) are easier to talk about than to practice or simulate.

[0012] From a computational perspective, the greatest challenge in the development of interactive storytelling environments is handling the autonomy and unpredictability of the participants. In non-interactive storytelling genres, the focus of development can be placed entirely on a single storyline that is to be experienced by the audience. However, when the audience itself becomes an actor in the story, the number of potential storylines that could unfold becomes much larger, based on the number of times the actors have the possibility of taking an action, and the number of possible actions that they could take at those times.

[0013] Given the autonomy of the actors' characters in the storyline, the story composer is immediately faced with a number of critical problems: How can the composer prevent the actor from taking actions in the imagined world that will move the story in a completely unforeseen direction, or from taking actions that will derail the storyline entirely? How can the composer allow the actors to make critical decisions, devise creative plans, and explore different options without giving up the narrative control that is necessary to deliver a compelling experience? And in the case of interactive tutoring systems, how can the composer understand enough about the beliefs and abilities of the actors to create an experience that has some real educational value, i.e., that improves the quality of the decisions that they would make when faced with similar situations in the real world?

[0014] Therefore, what is needed is a method and apparatus for advanced leadership training simulation that allows the participants to make real-time critical decisions, devise creative plans and explore different options without relinquishing the composer's narrative control and while allowing the composer to create an experience that improves the quality of leadership decision-making and delivers a compelling experience.

[0015] The present invention proposes to overcome the above limitations and problems through a broad, long-range solution that creates a unique, fully immersive type of leadership training simulation that provides complex, realistic human interactions through a highly innovative and adaptive story-generation technology. The same technology may also be applied to simulations created for consumer gaming.

SUMMARY OF THE INVENTION

[0016] The present application discloses simulation technology that teaches skills in leadership and related topics through an Internet-based distance-learning architecture. The simulations are extremely compelling and memorable because they employ dramatic, people-centered stories and real-time instructional feedback managed by artificial intelligence software tools.

[0017] The advanced leadership training simulation system comprises a story representation system for representing simulation content for use in the training simulation, a story execution system for delivering the simulation content to one or more participants via a computer network, and an experience manager system for monitoring the participants' responses to the simulation content, providing feedback to the participants and adjusting story events to match a change in the story's direction.

[0018] The story representation system provides a computer model of a story divided into discrete tasks, actions, goals or contingencies to be achieved by the participants in an engrossing story format. The experience manager monitors the progress of the simulation with respect to the story representation tasks achieved by the participants and reports progress to an instructor interface. An instructor monitoring the instructor interface may intervene in the simulation to adjust the direction of the simulation to maximize the dramatic and educational effectiveness of the simulation. In a gaming application, such a system would serve the needs of the game manager or game monitor.

[0019] The instructor may intervene in the simulation by changing the events of the story, by giving direct instruction to the participants, or by introducing a synthetic character into the simulation to change the simulation in a desired manner or to encourage certain responses from the participants. An automated coaching system may also be used as part of or instead of the instructor intervention.

[0020] The system may also comprise an immersive audio system for enhancing realistic situations and an authoring tools system for developing new simulation scenarios, as well as tools allowing interoperability with other systems and/or simulations.

BRIEF DESCRIPTION OF THE DRAWINGS

[0021] While the specification concludes with claims specifically pointing out and distinctly claiming the subject matter of the invention, it is believed the invention will be better understood from the following description taken in conjunction with the accompanying drawings wherein like reference characters designate the same or similar elements and wherein:

[0022] FIG. 1 is a diagram of the main components of the preferred embodiment as disclosed in the present application;

[0023] FIG. 2 is a diagram of certain components of the content delivery process of the preferred embodiment;

[0024] FIG. 3 is a diagram of the monitoring process of the preferred embodiment;

[0025] FIG. 4 illustrates an example of the monitoring process of the preferred embodiment;

[0026] FIG. 5 is a diagram of the media record structure of the preferred embodiment; and

[0027] FIG. 6 is a diagram of synthetic character generation for the preferred embodiment.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

[0028] It is to be understood that the figures and descriptions of the present invention have been simplified to illustrate elements that are relevant for a clear understanding of the invention, while eliminating, for purposes of clarity, other elements that may be well known. Those of ordinary skill in the art will recognize that other elements are desirable and/or required in order to implement the present invention. However, because such elements are well known in the art, and because they do not facilitate a better understanding of the present invention, a discussion of such elements is not provided herein. The detailed description will be provided hereinbelow with reference to the attached drawings.

[0029] The present invention's distance-learning and general gaming technology employs a computer-based architecture that operates over the Internet to bring together dispersed participants into a single collaborative activity that simulates a realistic experience. However, the experience is designed to be fully immersive and engaging in many ways, and to have the interactivity of a leading-edge multi-player game in order to appeal to and motivate a new generation of game-savvy participants.

[0030] Referring to FIG. 1, the story representation system 20 is a computer program that provides a representation model within the system, i.e., it represents stories, structure and events in the program (akin to a storyboard), allows integration of media and characters into a series of events, and includes a task model 22. Expected participant behavior can be mapped onto the task model 22, which is a list of tasks to be performed and goals to be reached. By turning blocks of expository text into numbered sets of task steps, with preconditions, structured contingencies and action descriptions that are more algorithmic in nature, the task model 22 may be used as an expectation of participant action. By comparing the specific actions of a participant to the task model 22 for the participant's ideal real-world counterpart, the participant's progress may be tracked, and deviations warranting pedagogical or dramatic interventions may be flagged. The task model 22 preferably has three components. First, there is a goal hierarchy 24, which is an outline of all the goals that are to be achieved in the task, where each major goal may be subdivided into a set of sub-goals, which in turn may be subdivided into sub-goals of their own, and so on. Sub-goals may be thought of as necessary conditions for the achievement of the parent goal, but not always sufficient conditions. Second, there is an expected plan 26, which is a recipe for the successful attainment of the goals in the goal hierarchy 24. The expected plan 26 is initially presented as a linear plan of action, which itself begins the execution of a set of repetitive sub-plans and the monitoring for trigger conditions of a set of triggered plans. Thus, the expected plan 26 may branch into a system of plans and sub-plans, wherein the repetitive plans are those that the participant is expected to repeat at certain intervals, such as repeated communications with other officers or repeated checking of maps and charts. Triggered plans, as the name suggests, are triggered by certain events or conditions, such as transferring control to a Tactical Command Center once certain conditions are met. The third component of the task model 22 is a staff battle plan 28, which is a set of prescribed activities that the participants and other characters are expected to follow in the event of an unforeseen occurrence. The occurrence is unforeseen, but, as with the expected plan 26, the possibilities and the proper activities for handling it are well defined.
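By way of illustration only, the three-part task model 22 described above might be represented in software as a simple data structure. The following Python sketch is hypothetical: the class and field names are assumptions for clarity, not structures disclosed in the specification.

```python
# Hypothetical sketch of the task model 22; names are illustrative only.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Goal:
    """A node in the goal hierarchy 24. Sub-goals are necessary (but not
    always sufficient) conditions for achieving the parent goal."""
    name: str
    subgoals: List["Goal"] = field(default_factory=list)

@dataclass
class PlanStep:
    """One step of a plan, with preconditions and an action description."""
    description: str
    preconditions: List[str] = field(default_factory=list)

@dataclass
class TaskModel:
    goal_hierarchy: Goal                                   # goal hierarchy 24
    expected_plan: List[PlanStep]                          # linear plan 26
    repetitive_plans: List[PlanStep] = field(default_factory=list)
    triggered_plans: Dict[str, List[PlanStep]] = field(default_factory=dict)
    staff_battle_plan: List[PlanStep] = field(default_factory=list)  # 28
```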

[0031] Referring to FIGS. 1 and 2, a story execution system 30 is a computer that selects the story elements and delivers them to the participants through a participant interface 31 connected to each participant's workstation 32. The story execution system 30 sends the story elements to the participant workstations 32 and records the participants' reactions to these elements, which the participants enter at the participant workstations 32. Thus, the story execution system 30 provides both input and output for the run-time operation of the simulated environment. Additionally, participants preferably have video connectivity so that they can see their fellow participants on their computer screens.

[0032] The story execution system 30 includes a story execution server 33, which is a web server, such as an Apache Web Server, having additional server-side logic that manages the simulation. A content database 34 is linked to the story execution server 33 and delivers to it the media content for the simulation according to the programmed story execution server logic 35 derived from the task model 22 and in response to input from the participants and/or input from the instructor. The story execution server 33 then delivers the media content to the participants' workstations 32 through the participant interface 31, which relies on readily-available web technology to communicate with the story execution server 33. The story execution server 33 also creates and delivers the simulation's web pages in accordance with known web page construction techniques and inserts keyed Hypertext Reference (HREF) anchors into the interactive controls so that the server can track and relate the participants' actions. The participant workstations 32 can then be web browsers that use plug-in components, such as a Shockwave Player, and basic scripting for display and interaction with the media. This approach also allows the participants to use a variety of existing media presentation components without source modification. FIGS. 1 and 2 show three participant workstations 32, although more or fewer than three may be used as necessary, depending on the number of participants.
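A minimal sketch of the keyed HREF anchor technique mentioned above is shown below. The key format and handler URL are assumptions for illustration; the specification does not prescribe them.

```python
# Hypothetical sketch: generating a keyed HREF anchor so the server can
# relate a click back to a specific participant and simulation action.
import uuid

def keyed_anchor(label: str, participant_id: str, action: str) -> str:
    key = f"{participant_id}:{action}:{uuid.uuid4().hex[:8]}"
    return f'<a href="/simulate?key={key}">{label}</a>'

# Example: an interactive control delivered to participant "p42".
html = keyed_anchor("View reconnaissance map", "p42", "open_map")
```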

[0033] The story execution server 33 preferably includes a participant manager 36, which is a web page publishing engine that creates and maintains all interactions with the participant workstations 32. The participant manager 36 keeps the tables listing the current state of the participant interface and the triggers for the experience manager 40 (discussed below). It also outputs to a system activity database 37, which is the log of all activity of the participants and the system itself.

[0034] The story execution server 33 further includes a page output engine 38, which is a server that creates and delivers the formatted output (web pages and media content) to the participant workstations 32. The page output engine 38 utilizes tag substitution, which is managed by the participant manager 36. Tag substitution creates a normalized reference between the display control element on the participant workstations 32 and the related function on the story execution server 33 that the tag will trigger. The participant manager 36 can then pre-process and forward the related command to the story execution server 33 components to influence the simulation's future course. Dynamic tags are thereby generated that are specific to the singular nature of the currently running simulation, rather than relying upon hard-coded tags generated during authoring, which would not support a dynamic experience manager 40. This allows different simulation events to use the same content files in various ways and with various individuals with alternative feedback results.
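The tag substitution performed by the page output engine 38 might look like the following sketch, assuming a {{TAG}} placeholder syntax and a binding table that are not specified in the document.

```python
# Hedged sketch of run-time tag substitution; the {{TAG}} syntax and the
# binding table are assumptions for illustration.
import re

def substitute_tags(template: str, bindings: dict) -> str:
    """Replace each {{TAG}} with the dynamically generated reference that
    the participant manager 36 registered for the current simulation."""
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: bindings.get(m.group(1), m.group(0)),
                  template)

page = substitute_tags('<a href="{{MAP_ACTION}}">Open map</a>',
                       {"MAP_ACTION": "/simulate?key=p42:open_map:3f9c"})
```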

[0035] The participant manager 36 is preferably broad enough to maintain connections to any remote entity that utilizes or communicates with the story execution server 33. This allows for a pass-through design where tagged elements can be normalized with remote simulations that may not be in the same simulation environment. The participant manager 36 provides a common interface through which the simulations may inter-communicate. The participant manager's 36 tag substitution allows alternative tag types for various participant types. Such a structure also allows for automated systems to interact as virtual participants or for media generators to create dynamic new media with the system as necessary. This remote capability frees up the story execution server 33 to support the output and create a platform-independent runtime environment for automated media generation.

[0036] Creation and delivery of the output page is done by dynamically allocating media elements into a set of templates that are specific to the participant. In this way, a unique control set can be created for each participant that is specific to their function. This also allows for support of multiple browsers or client platforms that react in different ways to HTML layout rules.

[0037] Time is often of the essence for the participant's character, but occasionally time may be suspended while the participant receives advice or criticism from the instructor or, in a gaming application, from the game manager or game monitor. Thus, the story execution server 33 further includes a master clock 39, which can receive external commands that will suspend or halt the story execution server 33, or suspend a participant's time. Time preferably may be halted for the entire simulation, for any set of participants, or for any event. When time is halted for an individual or exclusive group during a simulation, it may be thought of as a suspension, after which the participant or participants will rejoin at the current system time, missing events that have occurred during the suspension period. If desired, default reactions specified during the authoring process may be automatically inserted by the story execution server 33. When the suspended participants re-enter the scenario, their participant interfaces 31 are refreshed to bring them up to date with the current simulation. This mechanism also handles participants who drop their connection to the story execution server 33: the server provides default responses to the scenario, enabling the simulation to play out without adversely affecting the continuity of the experience. Alternatively, the instructor may wish to use the dropped connection as part of the exercise.
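The halt/suspend/rejoin semantics of the master clock 39 can be illustrated with the following hypothetical sketch, which is an assumption about one possible implementation rather than the disclosed one.

```python
# Illustrative sketch (not from the specification) of master clock 39
# semantics: time may be halted globally or suspended per participant.
import time

class MasterClock:
    def __init__(self):
        self.start = time.time()
        self.halt_at = None        # simulation time when a global halt began
        self.suspended = set()     # participants whose time is suspended

    def now(self) -> float:
        if self.halt_at is not None:
            return self.halt_at    # whole simulation frozen
        return time.time() - self.start

    def halt(self):
        self.halt_at = self.now()

    def resume_all(self):
        # shift the origin so simulation time continues from the halt point
        self.start = time.time() - self.halt_at
        self.halt_at = None

    def suspend(self, pid: str):
        self.suspended.add(pid)    # participant misses events meanwhile

    def rejoin(self, pid: str):
        # the participant rejoins at the current system time; their
        # participant interface 31 is then refreshed
        self.suspended.discard(pid)
```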

[0038] Referring to FIGS. 1, 2 and 3, an experience manager 40 is an artificial intelligence rule engine residing on the story execution server 33 that monitors the progress of participants in the simulation and compares the progress to the pedagogical and dramatic goals of the simulation as expressed in the story representation system 20. When differences cause specific rules to be triggered, the experience manager 40 generates an alert 41 and recommends modifications to the storyline that help keep the simulation on track. Participants' reactions to the simulation events are expressed through the interactive components, such as audio/video conferencing, that are part of the participant interface 31.
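As a rules-based engine, the experience manager 40 might be organized along the following lines. This is a minimal sketch; the rule structure and state keys are assumptions for illustration only.

```python
# Minimal rules-based sketch of the experience manager 40: rules compare
# the observed simulation state with expectations and yield alerts 41.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Alert:                             # alert 41
    message: str
    recommendation: str

@dataclass
class Rule:
    condition: Callable[[dict], bool]    # predicate over simulation state
    alert: Alert

def run_rules(state: dict, rules: List[Rule]) -> List[Alert]:
    """Return the alerts whose trigger conditions hold in this state."""
    return [r.alert for r in rules if r.condition(state)]

rules = [Rule(lambda s: s["deviations"] > 0,
              Alert("Participant behavior deviates from the task model",
                    "Adjust storyline or employ a coach"))]
alerts = run_rules({"deviations": 2}, rules)   # forwarded to the instructor
```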

[0039] Referring to FIGS. 1 and 4, an instructor interface 50 is a web client that communicates as a special class of participant through the story execution system 30 with the content database 34 and the experience manager 40 in order to present to the instructor an event-by-event description of the simulation as it actually unfolds and to display the participants' expected and actual behaviors. In a general gaming application, the game manager or game monitor may use the instructor interface 50 in much the same way as an instructor would. A plug-in, such as Java Applets or Shockwave Player, manages the communications from the instructor interface 50 through the story execution system 30 in order to update media event records, call routines that would affect properties that influence the experience manager 40, select alternative media for a participant, or manage the story state. Thus, the instructor may adjust the direction of the simulation to maximize the dramatic and educational effectiveness of the simulation and to interject new elements and information when necessary. The instructor interface 50 includes a heading 51, which indicates the name or number of the simulation. Also present on the instructor interface 50 are an experience manager display 52, a story representation display 53 and a participant display 54. Alerts 41 and corresponding recommendations generated by the experience manager 40 are displayed in the experience manager display 52. The story representation display 53 depicts the expected storyline and the way it is affected by the participants' behavior. The participant display 54, along with various access tools 55, gives the instructor access to all of the participant elements, such as maps, charts, newscasts, tools and so forth. The instructor may preview any or all of these elements and may also modify them as necessary. The instructor interface 50 also includes various other tools, such as an email tool 56 for communicating with participants, a synthetic character development tool 57 for generating and inserting synthetic characters 60 (discussed below), and a clock 58 for keeping track of time in each story state.

[0040] The instructor interface 50 handles the master state of the story. Present on the instructor interface 50 is a master list of states for all media to be presented in the expected story, along with a set of entries that represent each media element that must be selected in order to transition to the next state. The state of the instructor interface 50 is defined as the totality of media that is currently displayed and that can be triggered in the immediate future by selecting any interactive control on the instructor interface 50. The transition from one state to the next is the updating of the media on the participant interface 31 by initiating a selection that alters what is seen or what may be selected in the immediate future. As the participants access each media element, an identification tag is sent to the instructor interface 50 to be presented as text and icons in the story representation display 53 and the participant display 54. To progress to the next state in the story, each required item in the current state must be accessed while in that current state. Participants may access other media not related to the current state, and these will be transmitted to the instructor interface 50 as well, but without influencing the state transition. Once all of the required media elements are selected, the state then transitions to the next state, and this transition is reflected accordingly on the clock 58.
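The state-transition rule described above (every required media element must be accessed before the story advances) might be implemented as in the following hypothetical sketch; the class and media identifiers are assumptions.

```python
# Sketch of the master state logic: a transition occurs only once every
# required media element has been accessed in the current state.
class StoryState:
    def __init__(self, required_media: set):
        self.required = set(required_media)
        self.accessed = set()

    def record_access(self, media_id: str) -> bool:
        """Log an access; return True when the state may transition.
        Accesses to non-required media are reported to the instructor
        interface 50 but do not count toward the transition."""
        if media_id in self.required:
            self.accessed.add(media_id)
        return self.accessed == self.required

state = StoryState({"briefing_video", "sector_map"})
state.record_access("news_clip")          # shown, but no transition
state.record_access("briefing_video")
assert state.record_access("sector_map")  # all required items accessed
```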

[0041] Returning to FIG. 1, synthetic characters 60, which are computer-generated speaking images, may be introduced into the simulation for various reasons. For example, a synthetic character may be required to play the role of a character in the story or the role of another participant.

[0042] Alternatively, a synthetic character 60 may be required to provide coaching to participants automatically or through directives from the instructor via the instructor interface 50. Synthetic characters can play adversaries or friends or other personalities that say or do things that steer the story in the required direction. They can also substitute as participants when sufficient numbers of live simulation participants are unavailable.

[0043] An automated coaching system 70 is a computer program connected to the story execution system 30 that provides pre-programmed advice and instruction to the participants, either automatically or when prompted by the instructor. It uses artificial intelligence technology to monitor participant performance and recommend appropriate actions to the participants.

[0044] Authoring tools 80, which are applications connected to the story representation system 20, enable non-programmers to create new simulations based on new or existing storylines. The authoring tools 80 are a collection of applications that allow for the generation and integration of the media that represents the story into the content database 34. They are image, video, audio, graphic and text editors, interactive tools (such as for simulated radio communications or radar displays), interface template layout editors, or tools that integrate these media elements into the story. The authoring tools 80 enable non-programmers to create new scenarios that take into consideration pedagogical goals and the principles of good drama and storytelling.

[0045] Immersive audio 90 is connected to the story representation system 20 and may be used to give the experience an especially rich and authentic feel. Immersive audio 90 provides a level of realism that helps propel the participants' emotional states and raise the credibility of the simulation.

[0046] The system is preferably designed to support a story-based simulation. Story-based simulations depend upon information transferred to the active participants and upon the participants' interaction with that content. The information is presented to the participants in terms of content media. The media may take any form of representation that the participant workstations 32 are able to present to the participants. The media may play out in a multitude of representational contexts. For example, audio may be a recorded speech, the sound of a communications center or a simulated interactive radio call. These three examples could be represented with different participant interfaces, yet they are all audio files or streams.

[0047] Referring to FIGS. 2 and 5, the story execution system 30 obtains the simulation media components from the content database 34. All simulation-related media and references have record definitions in the content database 34 that define them as media events 100. Media events 100 are the master records for content that is presented by the story execution system 30. A media event 100 describes the nature of the corresponding media component, the impact it has on the simulation, the required content media, and positioning and playback control information. Not only can media components be played out from the content database 34, but they can be created and inserted into the content database 34 during authoring (i.e., internally) or from an external system during the runtime. Information related to the story representation system 20 and required by the experience manager 40 is also expressed as a media event 100. The media events 100 not only allow for markers for authoring, monitoring and evaluation, but also provide required data to assist the experience manager 40 in processing directives.

[0048] Media events 100 can appear differently to different participants and preferably support polymorphism, because participants' interfaces 31 may differ in terms of display components, alert importance and desired representational form.

[0049] The records of media events 100 preferably contain one or more simulation event records 102. Each simulation event record 102 contains information related to action and performance of the simulation event in a particular participant interface. The simulation event records 102 contain the parameters for the individual component they will represent. They also contain the identification symbols for the components and parameters that manage their layout. This data is transferred to and referenced by the participant manager 36, which acts as the repository of current state information for the experience manager 40.

[0050] The simulation event records 102 hold the information that is related to the role of the media in the participants' interfaces 31. If required, a specific media event 100 may contain a separate simulation event record 102 for each participant. Different participants may utilize different layouts for the media in their interface.

[0051] A simulation event record 102 is linked to content media 104 through a media operation record 106. The media operation record 106 is specific to the simulation event record's 102 usage of the media. The content media 104 is a generic media record that is indifferent to playback component requirements. This many-to-one relationship between media operation records 106 and content media 104 facilitates effective polymorphic usage of the media and its application. All participant interaction and simulation milestones are logged into the system activity database 37, which allows for manual review and re-creation of a simulation.
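The record structure of FIG. 5 might be sketched as follows. Field names are assumptions; the point of the sketch is the many-to-one link from media operation records 106 to content media 104, which enables polymorphic reuse of a single media file.

```python
# Hypothetical sketch of the FIG. 5 record structure.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ContentMedia:              # 104: generic, playback-agnostic record
    media_id: str
    uri: str

@dataclass
class MediaOperation:            # 106: how one event record uses the media
    media: ContentMedia
    presentation: str            # e.g. "forced_audio" vs. "menu_text"

@dataclass
class SimulationEventRecord:     # 102: per-participant layout and role
    participant_id: str
    layout_id: str
    operation: MediaOperation

@dataclass
class MediaEvent:                # 100: master record for presented content
    event_id: str
    records: List[SimulationEventRecord] = field(default_factory=list)

# Two participants reuse the same content media through different
# operation records -- the many-to-one relationship described above.
audio = ContentMedia("intercept_017", "media/intercept_017.wav")
event = MediaEvent("evt_comms_intercept", records=[
    SimulationEventRecord("intel_officer", "radio_panel",
                          MediaOperation(audio, "forced_audio")),
    SimulationEventRecord("aviation_officer", "message_menu",
                          MediaOperation(audio, "menu_text")),
])
```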

[0052] Several of the components disclosed herein rely on artificial intelligence technology. These artificial intelligence engines are preferably rules-based systems, wherein a computer is programmed with a set of rules for generating output in response to various inputs corresponding to as many different scenarios as can be anticipated.

[0053] The approach of the present invention can best be described with the term “story-channels,” to replace the traditional notion of a “storyline.” The term is derived from the metaphor of the system of gullies and channels that are formed as rainwater drains into lakes and oceans. Globally, the channels may be either linear (a single valley, for example) or may have a branching tree structure, caused when a main valley is fed by multiple sources. Locally, the channels can be very wide, such that someone paddling a canoe could choose from a huge range of positions as they navigated along their way. In the same manner, the invention's approach to interactive storytelling is akin to making the inter-actor direct a canoe upstream in a system of story-channels. The storyline could potentially have significant branching structure, where certain decisions could have drastic effects on the way the story unfolds. However, most decisions will simply serve to bounce the actor from side to side within the boundaries of the channel walls, never allowing the actor to leave the channel system entirely to explore in some unforeseen direction. This metaphor is useful in describing four key parts of the development and use of the invention. First, the “authoring process” for interactive narrative is to construct the geographical terrain, to describe the (potentially branching) series of mental events that the actors should experience as they play their role in the story. Second, during the actual running of the simulation, a “tracking process” monitors the position of the canoe, observing the actions of the characters controlled by the actors in order to gather evidence for whether or not the actors' mental states adhere to the designers' expectations. Third, a “containing process” will serve as the walls of the channels, employing a set of explicit narrative strategies to keep the actors on track and moving forward. Fourth, a “tutoring process” will serve as the actors' experienced canoeing partner, watching the way that they navigate upstream and looking for opportunities to throw an educationally valuable twist in their paths.

[0054] The simulation delivered to the participants preferably depicts a series of events, characters and places arranged in a specified order and presented via web pages and media components, such as video, audio, text and graphic elements. The media components may include items such as news stories, media clips, fly-over video from reconnaissance aircraft, synthetic representations of characters, maps, electronic mail, database materials, character biographies and dossiers. Initially, a specific “story-channel” (or a branching set of storylines) is constructed for the interactive environment, and the events that the participants are expected to experience are explicitly represented in the story representation system 20. The story execution system 30 initially selects the appropriate simulation elements from the content database 34 according to the story representation system 20 and the task model 22.

[0055] The experience manager 40 tracks the participants' actions and reports them to the story execution system 30 for comparison with the story representation system 20 and the task model 22. Each participant action is identified, for example, as “as expected” or “different from expectations,” although other types of identifiers may be used. The experience manager 40 analyzes the participants' input and flags performance that does not correspond to expectations. In response to such unexpected performance, the experience manager 40 then generates the alert 41 and sends it to the instructor interface 50. The alert 41 not only points out when participant behavior deviates from expectations, but also suggests responses that the system or the instructor can make in reaction to the unexpected participant performance. These responses are designed to set the simulation story back on course or to plot out a new direction for the story.

[0056] Alerts 41 generated by the experience manager 40 pass to the instructor interface 50 for acceptance or rejection by the instructor and then back to the story execution system 30 for forwarding to the experience manager 40. Changes to events and media initiated by the instructor via the instructor interface 50 also pass to the story execution system 30 for forwarding to the experience manager 40. The chosen option is converted by the experience manager 40 into a media event 100 and inserted into the content database 34 for immediate or later playback to the participants. Thus, when the experience manager 40 determines that it will generate a new media event 100, it will create a record that allows the story execution system 30 to present the media event 100 to the participant. As such, the experience manager 40 is not required to know about the intricacies of the particular participant interface 31 that the participant maintains, only the nature of the media event 100 that must be produced. The participant manager 36 matches the media event 100 to the layout specifications for the participant interface 31 when triggered. Tags are substituted with the aid of the experience manager 40 and the media event 100 will be actualized by the participant workstation 32.

[0057] By way of example, multiple participants may be placed in the roles of United States Army personnel in a Tactical Operations Center (TOC) during a Stability and Security Operations mission, and may be presented with a number of challenging decisions that must be addressed. Or, to imagine a simple example in general game-play, the United States Army personnel described below may be replaced with the crew of a 24th Century spacecraft. Actions and decisions that are made by the participants cause changes in the simulated environment, ultimately causing the system to adapt the storyline in ways that achieve certain pedagogical or dramatic goals.

[0058] In the military example, one of the participants may play the role of the Battle Captain, who runs the operation of the TOC and ensures proper flow of information into, within and out of the TOC. The Battle Captain tracks the missions that are underway, tracks the activities of friendly and enemy forces, and reacts appropriately to unforeseen events. Thus, the following goals, among many others, may be set up as the Battle Captain's goal hierarchy: (i) assist the commanding officer, (ii) assist in unit planning, (iii) set the conditions for the success of the brigade, and (iv) ensure that information flows in the TOC. Each of these goals may have one or more sub-goals, such as (i.a) provide advice and recommendations to the commanding officer, (ii.a) assist in developing troop-leading procedures, (iii.a) synchronize the efforts of the brigade staff, and (iv.a) repeatedly monitor radios, aviation reports, and activities of friendly units. Each of these sub-goals may have one or more further sub-goals, and so on.

[0059] Next, by combining the goal hierarchy with evidence from actual military documents, a plan may be devised that hypothesizes the expected plan of a Battle Captain for a typical 12-hour shift. For example: (i) arrive at the TOC, (ii) participate in battle update activity, (iii) collaboratively schedule first staff huddle for current staff, (iv) collaboratively schedule battle update activity for next shift, (v) begin monitoring for triggered sub-plans, (vi) begin the execution of repetitive sub-plans, (vii) terminate execution of repetitive sub-plans, (viii) participate in scheduled battle update activity, (ix) terminate execution of triggered sub-plans, and (x) leave the TOC.

[0060] Next in the example, a staff battle plan is identified for responding to battle drills. These plans are the military's tool for quickly responding to unforeseen or time-critical situations. For example, the system may simulate an unforeseen communications loss with a subordinate unit, necessitating a quick response from the Battle Captain. Identifying which staff battle drills are appropriate in any given task model generally depends on the storylines that are created for each simulation.

[0061] Task models such as these may be authored at varying levels of detail and formality, depending on the specific needs that they will serve. The content of a task model 22 preferably comes from doctrinal publications and military training manuals, but also preferably includes assumptions or tacit knowledge obtained from known military stories and anecdotes.

[0062] Scenarios and elements thereof may also be developed by artists and other creative people with skill in dramatic writing and storytelling, such as screenplay writers and movie makers.

[0063] Continuing with the Battle Captain example, after an unforeseen loss of communications with a subordinate unit, it may be expected that the Battle Captain first checks recent activities and locations of enemy troops and then sends a second unit towards the location of the first unit.

[0064] If, however, the Battle Captain fails to check the activities and locations of enemy troops before deploying the second unit, the experience manager 40 generates an alert that the participant playing the Battle Captain is not acting as expected and sends the alert to the instructor interface 50 along with suggested responses for the instructor, such as “Employ coach to advise Battle Captain.” The instructor may then accept or reject the experience manager's 40 recommendation, depending on the instructor's desire to set the simulation back on track, to plot out a new direction for the simulation, or simply to teach the participant a valuable lesson.
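Expressed in the rules-based form sketched earlier, the deviation in this example might reduce to a single rule. The state keys and wording below are hypothetical.

```python
# Hypothetical rule for the Battle Captain example (same shape as the
# experience manager sketch shown earlier).
def battle_captain_rule(state: dict):
    if state.get("deployed_second_unit") and not state.get("checked_enemy_activity"):
        return {"alert": "Battle Captain deployed a unit without first "
                         "checking enemy activities and locations",
                "recommendation": "Employ coach to advise Battle Captain"}
    return None

alert = battle_captain_rule({"deployed_second_unit": True,
                             "checked_enemy_activity": False})
print(alert["recommendation"])   # sent to the instructor interface 50
```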

[0065] As discussed, a specific media event 100 may contain a separate simulation event record 102 for each participant, and different participants may utilize different layouts for the media in their interface. For example, while the media delivered to a participant acting as a radar sector operator would be the same as the media delivered to a participant acting as a brigade commander, their access and presentation of that media would differ. Also, some media may be treated differently on different participants' interfaces. For example, an updated inventory of aircraft would be of great importance to an aviation officer but would be of passing interest to an intelligence officer. The notice may be visually highlighted in the aviation officer's interface through an alert. As such, the information related to the event must contain not only a layout identifier for the media, but also qualities for different participants in the story that affect the presentational rules for the media. Also, the media may differ from participant to participant. The intelligence officer may receive an audio file of a conversation while the aviation officer may only have access to a text transcript of the file. On the other hand, the intelligence officer may have a simulated radio communication alert him that an active communication is taking place and force him to listen to it, while the aviation officer may gain access to the file only by navigating a series of menus that present the audio file in the context of the message. While the media file is the same, the display, presentation and impact on the participants differ greatly.

[0066] The designers of the simulation may anticipate many kinds of variations from the normal progress of the story. These variations can be pre-produced in traditional media forms and stored in the content database 34 for future use in the event that they are called for by the participant performance. The use of these kinds of media and the new directions in which they take the story correspond to the traditional branching storylines that have been used in interactive lessons in the past. These options are preferably presented to the instructor on the instructor interface 50 before they are used in the simulation, although the experience manager 40, as an artificial intelligence engine, may be programmed to deploy the elements as needed. Moreover, the instructor has the capability to edit many of the pre-produced options.

[0067] Other options, such as the use of the synthetic characters 60 as coaches, are not pre-produced but can be generated by the system or the instructor on the spot. The synthetic character engine has the capability to select an appropriate response to the participant action and create that response in real time. However, the original response is preferably presented to the instructor in the instructor interface 50 so that it can be approved and/or edited by the instructor before it is implemented. Once the response is created and approved, the experience manager 40 sends it to the story execution system 30. Approved options are converted by the experience manager 40 into media event records and inserted into the content database 34.

[0068] The automated coaching system 70 contains the artificial intelligence to understand the performance of the participants and judge whether it is correct or incorrect. It can then automatically and immediately articulate advice, examples or criticism to the participants that will help tutor them and guide them to the correct performance according to the pedagogical goals of the exercise. Because the simulation is story-based, the synthetic character 60 that delivers the advice to the participant can play the role of one of the characters in the story. As such, the character will display the personality and style of the character as it imparts information to the appropriate participant. As with the experience manager 40, the artificial intelligence of the automated coaching system 70 is preferably rules-based. In another preferred embodiment, the artificial intelligence may be knowledge-based.

[0069] Turning to FIG. 6, once the decision has been made by the system and the instructor to deploy a synthetic character 60 with a specific statement, the story execution system 30 displays a media item on the participants' screens that portrays the synthetic character 60 saying the words. Preferably, this media item has both audio and visual components that cause the participants to believe that the character is a real human being participating in the simulation from an off-site location and using the same video-conferencing tools that are available to the participants.

[0070] The most believable media that could be presented to the participants is a pre-produced digital video file 120 capturing an actor delivering a predetermined speech. Special effects may be added to the video file to simulate the effects of latency caused by such things as video-conferencing over the Internet, among other factors. Alternatively, an algorithm could be created to transform textual input into audio output by voice synthesis, accompanied by a static photograph 122 of the speaking character. This enables the instructor to tailor the communications to the particular participants as necessary. As a further alternative, the synthetic text-to-speech algorithm could be used with articulation photographs 124 (i.e., photographs of actors articulating specific vowel and consonant sounds) or animated character models.
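The articulation-photograph alternative might be driven by a phoneme-to-photograph table, as in the hypothetical sketch below. The phoneme labels and file names are assumptions; a real system would take phoneme timings from the speech synthesizer, which is beyond this illustration.

```python
# Hypothetical sketch: pairing synthesized speech with articulation
# photographs 124. Phoneme labels and file names are assumptions.
ARTICULATION_PHOTOS = {
    "AA": "open_vowel.jpg",      # actor photographed saying "ah"
    "M":  "closed_lips.jpg",     # lips pressed together
    "F":  "lip_teeth.jpg",       # lower lip against upper teeth
}

def frames_for_phonemes(phonemes):
    """Select one articulation photograph to display while the
    synthesized audio for each phoneme plays."""
    return [ARTICULATION_PHOTOS.get(p, "neutral.jpg") for p in phonemes]

frames = frames_for_phonemes(["M", "AA", "F"])   # photos for a short utterance
```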

[0071] Although the invention has been described in terms of particular embodiments in an application, one of ordinary skill in the art, in light of the teachings herein, can generate additional embodiments and modifications without departing from the spirit of, or exceeding the scope of, the claimed invention. Nothing in the above description is meant to limit the present invention to any specific materials, geometry, or orientation of elements. Many part/orientation substitutions are contemplated within the scope of the present invention and will be apparent to those skilled in the art. Accordingly, it is understood that the drawings, descriptions and examples herein are proffered only to facilitate comprehension of the invention and should not be construed to limit the scope thereof.

Claims

1. A method of training comprising the steps of

generating simulation content;
delivering the simulation content to one or more participants via a computer network;
monitoring the one or more participants' responses to the simulation content; and
providing feedback to the one or more participants.

2. The method of claim 1, further including the step of generating one or more synthetic characters.

3. The method of claim 2, wherein the feedback is provided by the one or more synthetic characters.

4. The method of claim 2, wherein the one or more synthetic characters are used to alter the simulation content.

5. The method of claim 1, wherein the feedback is provided by an instructor.

6. The method of claim 1, further comprising the steps of

generating a representation of expected responses to the simulation content; and
alerting an instructor of the one or more participants' responses when the one or more participants' responses deviate from the representation of expected responses to the simulation content.

7. The method of claim 1, further comprising the step of altering the simulation content in response to the one or more participants' responses.

8. The method of claim 1, wherein the simulation content depicts military scenarios.

9. The method of claim 1, further comprising the step of delivering immersive audio to the one or more participants.

10. The method of claim 1, wherein the computer network comprises the Internet.

11. A training apparatus comprising

means for generating simulation content;
means for delivering the simulation content to one or more participants via a computer network;
means for monitoring the one or more participants' responses to the simulation content; and
means for providing feedback to the one or more participants.

12. The apparatus of claim 11, further including means for generating one or more synthetic characters.

13. The apparatus of claim 12, wherein the feedback is provided by the one or more synthetic characters.

14. The apparatus of claim 12, wherein the one or more synthetic characters are used to alter the simulation content.

15. The apparatus of claim 11, wherein the feedback is provided by an instructor.

16. The apparatus of claim 11, further comprising

means for generating a representation of expected responses to the simulation content; and
means for alerting an instructor of the one or more participants' responses when the one or more participants' responses deviate from the representation of expected responses to the simulation content.

17. The apparatus of claim 11, further comprising means for altering the simulation content in response to the one or more participants' responses.

18. The apparatus of claim 11, wherein the simulation content depicts military scenarios.

19. The apparatus of claim 11, further comprising means for delivering immersive audio to the one or more participants.

20. The apparatus of claim 11, wherein the computer network comprises the Internet.

21. A simulation method comprising the steps of

generating simulation content;
generating a representation of expected responses to the simulation content;
delivering the simulation content to one or more participants via a computer network;
monitoring the one or more participants' responses to the simulation content;
comparing the one or more participants' responses with the representation of expected responses to the simulation content; and
altering the simulation content in response to the one or more participants' responses.

22. The method of claim 21, further including the step of generating one or more synthetic characters.

23. The method of claim 21, wherein the simulation content depicts military scenarios.

24. The method of claim 21, further comprising the step of delivering immersive audio to the one or more participants.

25. The method of claim 21, wherein the computer network comprises the Internet.

26. A simulation apparatus comprising

means for generating simulation content;
means for generating a representation of expected responses to the simulation content;
means for delivering the simulation content to one or more participants via a computer network;
means for monitoring the one or more participants' responses to the simulation content;
means for comparing the one or more participants' responses with the representation of expected responses to the simulation content; and
means for altering the simulation content in response to the one or more participants' responses.

27. The apparatus of claim 26, further including means for generating one or more synthetic characters.

28. The apparatus of claim 26, wherein the simulation content depicts military scenarios.

29. The apparatus of claim 26, further comprising means for delivering immersive audio to the one or more participants.

30. The apparatus of claim 26, wherein the computer network comprises the Internet.

31. A simulation apparatus comprising

a database containing simulation content;
one or more participant workstations;
a web server for delivering the simulation content to the one or more participant workstations;
an instructor interface for displaying information to an instructor and receiving input from the instructor;
one or more participant interfaces connecting the web server to the respective one or more participant workstations; and
an artificial intelligence engine for analyzing input into the one or more participant workstations and altering the simulation content in response to the input.

32. The apparatus of claim 31, further comprising means for generating one or more synthetic characters.

33. The apparatus of claim 32, wherein the one or more synthetic characters are represented by digital video.

34. The apparatus of claim 32, wherein the one or more synthetic characters are represented by one or more static photographs.

35. The apparatus of claim 32, wherein the one or more synthetic characters are represented by a plurality of articulation photographs.

36. The apparatus of claim 31, further comprising one or more authoring tools for generating additional simulation content.

37. The apparatus of claim 31, further comprising means for delivering immersive audio to the one or more participant workstations.

38. The apparatus of claim 31, further comprising means for providing feedback.

39. The apparatus of claim 31, further comprising a system activity database for logging information generated in response to the simulation content.

Patent History
Publication number: 20030091970
Type: Application
Filed: Nov 9, 2001
Publication Date: May 15, 2003
Applicant: ALTSIM, INC. AND UNIVERSITY OF SOUTHERN CALIFORNIA
Inventors: Nathaniel A. Fast (Santa Rosa, CA), Andrew S. Gordon (Marina Del Rey, CA), Randall W. Hill (Pasadena, CA), Nicholas V. Iuppa (Belmont, CA), Richard D. Lindheim (Beverly Hills, CA), William R. Swartout (Malibu, CA)
Application Number: 10036107
Classifications
Current U.S. Class: Question Or Problem Eliciting Response (434/322)
International Classification: G09B003/00;