INSTRUCTOR-LED TRAINING ENVIRONMENT AND INTERFACES THEREWITH
An infantry training simulation system comprising at least one firing lane, with at least one display arranged substantially near the end of the firing lane. A trainee experiencing the simulation can carry at least one physical or virtual weapon, which is typically similar to a traditional infantry weapon. To facilitate navigation and other interaction with the simulation, the weapon is preferably outfitted with at least one controller. At least one computer is communicatively coupled to the display and the weapon. The computer can monitor input from the at least one controller, and modifies the training simulation displayed on the display based on the input.
This application relates to and claims priority from U.S. patent application Ser. No. 11/285,390, filed Nov. 23, 2005, which claims priority from U.S. Provisional Patent Application Ser. No. 60/630,304, filed Nov. 24, 2004, and Provisional U.S. Patent Application Ser. No. 60/734,276, filed Nov. 8, 2005, the disclosures of which are hereby incorporated by reference in their entirety for all purposes.
COPYRIGHT NOTIFICATION
This application includes material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent disclosure, as it appears in the Patent and Trademark Office files or records, but otherwise reserves all copyright rights whatsoever.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to the field of instructor-based simulated training environments, and more specifically provides new interfaces to such environments.
2. Description of the Related Art
Armed forces throughout the world rely on well-trained men and women to protect their countries from harm. Training varies widely among the different military branches, but until recently it has essentially involved one of two extremes: either highly advanced simulations, or hands-on, real-world training.
This training divide exists for several reasons. One such reason is that the cost of developing simulated training environments is typically significantly higher than that of real-world training. For example, according to statistics compiled in 2001, it costs the United States Army approximately $35,000 to train a new infantry recruit using traditional training methods. When this is compared to the cost of developing and deploying an infantry simulator, which can easily cost tens of millions of dollars, it is typically seen as more cost effective to provide traditional, hands-on training. The exception is in the aviation and maritime realms, where each real-world aircraft or watercraft can easily cost tens of millions of dollars, and training a pilot can cost hundreds of thousands of dollars. In such instances, developing simulators that allow entry-level pilots to gain experience without entering the cockpit or bridge of a real aircraft or watercraft has proven to be a much more cost-effective training approach than risking the lives and safety of valuable instructors, trainees, and equipment.
Another reason for the training divide is that most infantry-related tasks require maneuvering. Unlike pilots, who sit in a relatively static, fixed-dimension cockpit or bridge, infantry and other service members are required to move around a much broader area. For example, an infantry training exercise may involve securing a building in a city. Where the simulation begins at the outskirts of the city, the recruit must be able to navigate the city and find the appropriate building, enter it, and secure it. Such interactions have heretofore required awkward interfaces that tended to be distracting, and have not allowed the recruits to be fully immersed in the simulation. Thus, traditional, hands-on training has traditionally been preferred for infantry recruits.
While hands-on, real-world training has traditionally been preferred for training infantry recruits, such training has its disadvantages. For example, it is often difficult to simulate the various environmental, structural, and linguistic differences experienced in different theaters. By contrast, a simulated training environment can readily allow a recruit to experience these differences.
SUMMARY OF THE INVENTION
What is needed is a system and methods through which infantry and other recruits can be trained using simulated environments that overcome one or more of the limitations of the prior art.
It is an object of the present invention to provide a lane-based, instructor controllable, simulated training environment.
It is another object of the present invention to provide a user interface device through which an infantry recruit or other such trainee can easily navigate large simulated geographies.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Technologies have been developed to support the needs of defense and civilian security forces. Specific training areas addressed by this technology can include, but are not limited to, small arms non-live fire marksmanship training, situation-based deadly force application decision-making skills, driver training, and convoy protection skills training. Although the exemplary embodiments described below address the software and hardware technologies as they exist today in the context of demonstration applications for military and law-enforcement training systems, it should be apparent to one skilled in the art that such systems can be readily adapted for alternative use contexts, such as, without limitation, video games, civilian weapons training, paramilitary training, and the like. The technology building blocks explained in the exemplary embodiments can be enhanced, combined, and configured in various ways as a solution to a diverse set of training needs.
The system is preferably scalable, and allows multiple lanes to interoperate with the simulation simultaneously, thereby allowing multiple team members to practice tactics, techniques, and procedures both individually and as a team. Such a configuration also allows multiple teams to train together simultaneously and permits force-on-force training, including fire team vs. fire team engagements and any combination of multiple fire teams vs. multiple fire teams. Using the integrated simulation controls, a single-lane fire team or squad leader can command other trainees during the exercise or practice, such as by interactive GUI or voice command.
One embodiment of the invention includes an infantry training simulation system comprising at least one firing lane, with at least one display arranged substantially near the end of the firing lane. The trainee using the simulation can carry at least one weapon, which is typically similar to an infantry weapon. To facilitate navigation and other interaction with the simulation, the weapon is preferably outfitted with at least one controller. At least one computer is communicatively coupled to the display and the weapon, monitors input from the at least one controller, and modifies the training simulation displayed on the display based on the input.
Another embodiment of the invention includes an infantry training simulation system comprising a plurality of firing lanes, wherein each firing lane has associated therewith at least one display. At least one computer is communicatively coupled to at least one of the plurality of displays and generates a training simulation for display by the at least one display to which it is attached. The embodiment preferably further includes at least one instructor station, wherein the instructor station is communicatively coupled to the at least one computer and allows an instructor to take control of at least one entity in the simulation. The trainee and/or instructor can interact with the simulation through a variety of means, including through at least one weapon. Each weapon is preferably associated with a firing lane, and each of the weapons is preferably communicatively coupled to the at least one computer such that the at least one computer can monitor the trainee and/or instructor as he interacts with the weapon.
Still another embodiment of the invention includes a method of interacting with a simulated infantry scenario, comprising equipping a physical weapon with at least one controller; navigating the simulation with the at least one controller; monitoring the simulation for at least one hostile target; and engaging the hostile target using the physical weapon.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are intended to provide further explanation of the invention as claimed.
Reference will now be made in detail to preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings.
An aspect of the present invention provides a lane-based, instructor-led, infantry training simulator. In the embodiments illustrated in
At its most basic, an embodiment of the present invention can be implemented using a computer system, a simulated or modified weapon, and a training lane. All elements of the current embodiment of the system software execute on Windows-based PCs, although it should be apparent to one skilled in the art that alternative operating systems may be substituted therefor without departing from the spirit or the scope of the invention. Each of the software components is easily configured and controlled using standard input devices such as a keyboard and mouse. Additional input devices such as, without limitation, gamepads, joysticks, steering wheels, foot pedals, foot pads, light gloves, any Microsoft DirectInput compatible USB device, or the like can also be used with the software components as desired. While DirectInput is a preferred API for interfacing with such devices, it should be apparent to one skilled in the art that alternative interface means may be substituted therefor without departing from the spirit or the scope of the invention.
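By way of illustration, and not by way of limitation, the following is a minimal sketch of polling such an input device. It uses the cross-platform pygame library as a stand-in for DirectInput; the device index, axis numbering, and button assignment are assumptions that vary by device.

```python
import pygame

# Minimal polling loop for a gamepad-style input device. pygame is used
# here as a cross-platform stand-in for DirectInput; the axis and button
# numbers below are assumptions and vary by device.
pygame.init()
pygame.joystick.init()

if pygame.joystick.get_count() == 0:
    raise SystemExit("no compatible input device found")

pad = pygame.joystick.Joystick(0)

clock = pygame.time.Clock()
while True:
    pygame.event.pump()              # refresh internal device state
    move_x = pad.get_axis(0)         # assumed: thumbstick, left/right
    move_y = pad.get_axis(1)         # assumed: thumbstick, forward/back
    trigger = pad.get_button(0)      # assumed: primary trigger button
    # ...propagate (move_x, move_y, trigger) to the simulation...
    clock.tick(60)                   # poll at 60 Hz
```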
The PCs are preferably standard COTS gaming-level performance PCs, although as technology progresses such high-end machines may not be necessary. A typical, presently preferred computer configuration is as follows:
Pentium-4 2.5 GHz or better
1 GB RAM
ATI Radeon 9800 XT 128 MB video card or better
40 GB Hard drive
The software running on these PCs is preferably capable of operating in a stand-alone mode or a collaborative, networked mode. In one stand-alone mode, the trainee is the only user-controlled entity in the environment, with all other entities controlled by AI per the scenario definition. In collaborative, networked mode, each instantiation of the application software, such as, without limitation, each separately controlled trainee PC on the network, represents a trainee-controlled entity. The trainee-controlled entity can be friendly or hostile, with its role and beginning position set by the scenario. With this capability, the following engagement scenarios can be trained:
Single vs. Programmable AI
Team vs. Programmable AI
Single vs. Single
Team vs. Team
The hit detection system allows trainee PC 230 or other computing device to determine when a shot is fired from weapon 100. Upon actuation of a firing mechanism, such as trigger 110, associated with the weapon, a laser “fires” one pulse per shot which, through hit detection system 208, indicates to the software where the shot enters the virtual environment space. Laser signatures specific to each weapon can identify individual shots fired from multiple weapons in the same lane, enabling multi-trainee training in a single lane as illustrated in
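The disclosure does not spell out how a detected laser pulse is mapped into the scene, but a common approach is to unproject the two-dimensional spot detected on the display into a world-space ray that can then be traced into the virtual environment. A minimal sketch, assuming the rendering engine exposes the inverse of its combined view-projection matrix and OpenGL-style device coordinates:

```python
import numpy as np

def screen_point_to_ray(px, py, width, height, inv_view_proj):
    """Unproject a detected laser spot (pixel coords) into a world-space ray.

    inv_view_proj: inverse of the combined view-projection matrix (4x4),
    assumed to be available from the rendering engine.
    """
    # Pixel -> normalized device coordinates in [-1, 1]
    ndc_x = 2.0 * px / width - 1.0
    ndc_y = 1.0 - 2.0 * py / height   # flip: pixel y grows downward

    def unproject(z):
        p = inv_view_proj @ np.array([ndc_x, ndc_y, z, 1.0])
        return p[:3] / p[3]           # perspective divide

    near = unproject(-1.0)            # point on the near plane
    far = unproject(1.0)              # point on the far plane
    direction = far - near
    return near, direction / np.linalg.norm(direction)
```

The resulting origin and direction can then be intersected against the scene geometry to register where the shot enters the virtual environment.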
In addition to the three stations, four separate software applications are preferably implemented across the various system components. Although the four software applications are described herein as separate entities, it should be apparent to one skilled in the art that the functionality of one or more applications can be combined together, and one or more applications may be divided into a plurality of applications, without departing from the spirit or the scope of the invention. The following are summaries of each application. More detailed descriptions of each appear below.
The first application is the trainee application which is used to present a real-time image to the trainee in the lane via display system 200. This application also handles the input for hit detection system 208, weapon 100 inputs (including inputs from controllers 115, 120, and 140, described below), and clip stand inputs (described below), and propagates these inputs to a simulation server. In one embodiment, the inputs are propagated to the simulation server via the trainee station. In this embodiment, the trainee station preferably processes all input to the simulation from any trainee control device. As described below, by using these control devices, the trainee has the ability to fully interact with the 3D environment, shooting weapons, throwing grenades, climbing up on chairs, climbing ladders, climbing ropes, and the like. In one embodiment, the instructor station controls the observer station, which can run the trainee application in slave mode.
The second application is the instructor station. The instructor station preferably acts as the simulation server, network host, and simulation control station for mission execution.
The third application is the Scenario Editor. This application enables course designers to customize the tactical situation using a simple point and click interface and a standard scripting language.
The final application is the Level Editor. This application is used to build the environment, consisting of visible and invisible geometry, collision geometry, lighting information, special rendering pipeline information, and other characteristics of the environment, objects, and actors in the simulation.
The trainee station preferably includes at least one physical or virtual weapon. Referring to
A preferred embodiment of the invention allows controllers 115, 120 to be located where convenient and comfortable for the trainee. The trainee can adjust control positions based on arm length, hand size, and the like using a plurality of set screws 117 and brackets 116, and/or by simply removing and rotating both the joystick/thumbstick and button mechanisms for a left-handed configuration. Although the illustrated embodiment utilizes screws 117 to mount the controllers to the weapon, it should be apparent to one skilled in the art that alternative mounting means, including, without limitation, double-stick tape or other adhesive, and rubber bands or other mechanical devices, may be substituted therefor without departing from the spirit or the scope of the invention.
In a preferred embodiment, controllers 115 and 120 are implemented as traditional joysticks or thumbsticks, with the added functionality that pressing directly down on the joystick acts as an additional input. While a joystick is presently preferred, it should be apparent to one skilled in the art that alternative controller arrangements, including, without limitation, a plurality of buttons, or a trackball, may be substituted therefor without departing from the spirit or the scope of the invention.
A plurality of controllers are presently preferred because they allow the trainee to simultaneously navigate the simulated environment and adjust the view angle. By way of example, without intending to limit the present invention, controller 115 may be configured as a view angle controller. In such a configuration, activation of controller 115 can cause the display to change as though the trainee were turning or tilting his or her head. By contrast, when controller 120 is configured as a movement or navigation controller, activation of controller 120 can cause the trainee's position within the simulation to change as appropriate. The combination of these controls allows, for example, a trainee to look to his or her left while stepping backward.
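A minimal sketch of such a dual-controller scheme follows; the speeds, clamp limits, and axis conventions are illustrative assumptions, not values taken from this disclosure.

```python
import math

def update_pose(x, y, yaw, pitch, move_stick, view_stick, dt,
                move_speed=2.0, turn_speed=1.5):
    """One integration step for a dual-stick control scheme (a sketch).

    move_stick: (mx, my) in [-1, 1]; strafe and forward/back.
    view_stick: (vx, vy) in [-1, 1]; yaw and pitch rates.
    Speeds and limits are illustrative defaults only.
    """
    vx, vy = view_stick
    yaw += vx * turn_speed * dt
    pitch = max(-1.2, min(1.2, pitch + vy * turn_speed * dt))  # clamp look up/down

    mx, my = move_stick
    # Movement is relative to the current facing, so the trainee can
    # look one way (view stick) while stepping another (move stick),
    # e.g. looking left while stepping backward.
    x += (my * math.cos(yaw) + mx * math.cos(yaw + math.pi / 2)) * move_speed * dt
    y += (my * math.sin(yaw) + mx * math.sin(yaw + math.pi / 2)) * move_speed * dt
    return x, y, yaw, pitch
```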
Controllers 115 and 120 are preferably located at or near where the trainee traditionally holds the weapon. In the embodiment illustrated in
This methodology provides highly realistic simulated weapons engagement training. The conversion of a trainee's weapon into an indoor training weapon is a simple procedure that replaces the receiver or barrel with a simulation barrel or standard blank firing adapter, and adds laser 155 for indicating shot location. The weapon is then loaded with special indoor blanks or standard blanks as appropriate. In situations where users do not desire to use blanks, active simulated weapons that meet the same form/fit/weight/function as the real weapons may be substituted therefor without departing from the spirit or the scope of the invention.
In addition to the weapon being instrumented for simulation input with controllers 115 and 120, standard button presses may also be used to trigger simulation control functions such as throwing a grenade, jumping, unjamming a weapon, switching weapons, or the like. The layout and placement of these buttons are configurable for each trainee to account for ergonomic variation and personal preference, as illustrated by input system 140 of
By utilizing a multi-state input system such as input system 140, individual commands can be defined in terms of the activation of multiple controls, either simultaneously or in a defined temporal sequence. This allows a far greater set of commands to be readily available to each trainee. By way of example, without intending to limit the present invention, one chord of buttons from input system 140 may temporarily place the input state machine into an external control mode in which the next command will affect the entire group with which the trainee is associated.
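The following sketch illustrates one way such a chorded, multi-state scheme could be implemented; the button names, the chord, and the command are hypothetical.

```python
# Sketch of a multi-state input system: a chord of buttons temporarily
# switches the state machine into a group-command mode, so that the next
# command is applied to the trainee's whole group. Button names and
# commands are hypothetical.
GROUP_CHORD = frozenset({"BTN_1", "BTN_2"})

class InputStateMachine:
    def __init__(self):
        self.group_mode = False

    def on_buttons(self, pressed: frozenset) -> None:
        if pressed == GROUP_CHORD:
            self.group_mode = True       # next command affects the group

    def on_command(self, command: str) -> str:
        scope = "group" if self.group_mode else "individual"
        self.group_mode = False          # mode applies to one command only
        return f"{command} -> {scope}"

fsm = InputStateMachine()
fsm.on_buttons(frozenset({"BTN_1", "BTN_2"}))
print(fsm.on_command("MOVE_TO_COVER"))   # MOVE_TO_COVER -> group
print(fsm.on_command("MOVE_TO_COVER"))   # MOVE_TO_COVER -> individual
```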
In one embodiment, trainees can customize the functionality represented by the various buttons on input system 140 and the functionality associated with each of controllers 115 and 120 through a weapon controls configuration screen such as that illustrated in
An object of the present invention is to provide an immersive, simulated environment in which a trainee can become more familiar with a weapon, practice various techniques and tactics, and the like. The immersive environment is a collaborative virtual world that preferably supports a variety of exterior terrain types such as urban, rural, and urban/rural transitions, as well as various building interior and exterior types, and specific custom-built interiors. The user's view of this environment can be static or moving. A moving viewpoint simulates walking, running, driving, or other movement within the environment, and can be controlled directly by the user, scripted in the scenario, or controlled by a secondary user. While walking or running through the environment, the trainee can explore building interiors by moving through doorways from room to room, around corners, and by climbing up and down stairs, ropes, ladders, or the like.
Whether the viewpoint is static or moving, the software can place scenario-driven artificial intelligence (AI) entities 1800 throughout the immersive environment to provide situational engagement opportunities. The AI can represent an individual entity or a group of entities, and can exhibit innocent/non-combatant, armed/opposition, or other such behaviors. These behaviors are preferably programmable and can be grouped and/or event-driven to compose complex behavior sequences. This technology differs from branching video scenarios by offering a wider variety of situations to be trained. Additionally, this technology provides the capability to add variability to AI behavior responses so the trainee learns to handle the situation, not the training device.
A goal of the present invention is to allow trainees to train under a variety of conditions, and to allow instructors to modify a given training scenario so that trainees learn to respond to events occurring in the simulation, rather than merely anticipating an event based on a prior simulation. To that end, a preferred embodiment of the present invention includes a Scenario Editor and a Level Editor. The Scenario Editor allows an instructor, curriculum developer, or other user to create new scenarios and to modify existing scenarios. It preferably provides the user with at least two different viewing modes, free fly camera mode and a locked camera view, effectively providing a 2D orthographic view. The Level Editor allows a curriculum developer to create new environments.
The Level Editor user can import new terrains or geometry from a variety of external software, such as, without limitation, those capable of generating OpenFlight™ files. Additionally, geometry created in 3DStudioMax or other three dimensional CAD or drawing tools can be imported as well. Such importation may occur through, for example, the use of Apex™ Exporter or other such tool. Using the mouse and keyboard, or the navigation area, the user can move, fly, or otherwise navigate around the imported terrain and place objects in the scenario. Objects can be placed by explicitly specifying a location (e.g., by mouse click), by using a paint function to rapidly populate objects (such as a forest or shrubs, trash or other urban clutter), or by using a random placement function with a user specified object density. Depending on the methodology used for rendering the terrain, the user may also specify the terrain textures, tiling factors, and detail texture to be used. The terrain may also have visual details, such as water, roads, scorch marks, and other types of visual detail placed on it. Objects added to the environment using the Level Editor can be moved, rotated, scaled, and have their object-specific attributes edited. The Level Editor is also used to generate or specify terrain and object collision meshes.
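As one illustration of the random placement function described above, the following sketch scatters objects over a rectangular region at a user-specified density; the density units and the random heading are assumptions for illustration.

```python
import random

def random_placement(xmin, ymin, xmax, ymax, density, rng=None):
    """Scatter objects over a rectangular region (a sketch).

    density: objects per square unit, as a user-specified parameter.
    Returns a list of (x, y, heading) placement tuples.
    """
    rng = rng or random.Random()
    area = (xmax - xmin) * (ymax - ymin)
    count = round(area * density)
    return [(rng.uniform(xmin, xmax),
             rng.uniform(ymin, ymax),
             rng.uniform(0.0, 360.0))      # random heading in degrees
            for _ in range(count)]

# e.g. populate a 100 x 100 m plot with shrubs at 0.02 objects per m^2
shrubs = random_placement(0.0, 0.0, 100.0, 100.0, 0.02)
```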
The Scenario Editor preferably includes an AI menu which enables the user to populate the environment with entities and specify their default behaviors. Opposing force entities can be given a mission or objective, a skill level, stealth level, and a set of human characteristics similar to those given to live participants. Non-combatant entities can be either given, for example, a starting point, number, path, and destination (i.e., an area in which they maneuver), or a place where they remain but perform a specified action. Other functions include a trigger/event system for specifying complex scenario behaviors.
The Scenario Editor preferably also contains other menu items which allow the user to specify attributes of special objects such as weapons (e.g., weapon type, useful range, slope, lethality, damage/interaction with objects), and explosive devices (e.g., fireball size, lethal range, injury range and damage/interaction with objects). The Scenario Editor also supports the ability to assign “health” to objects in the environment. Anything interacting with a particular object has the capability of doing damage (reducing the “health”) to the object, by virtue of its speed, hardness, and other factors.
The destructible object system is closely tied to the material system. If the user specifies that a submesh of an object is "wood", the properties of wood will be applied to that submesh. The "wood" material's basic properties include particle effects, collision interaction sounds, and bullet and scorch marks, as well as more advanced physical properties such as brittleness, which is where the destructible object system comes into play.
The brittleness of a material, such as wood or glass, determines the amount of impulse or force required to break the object to which the material is assigned. Break points and fracture paths are determined on-the-fly based on the position and direction of the applied contact force. By way of clarification, without intending to limit the present invention, in two dimensions, a fracture path can be thought of as a series of connected line segments with randomly perturbed orientations. The implementation of brittleness simulation preferably involves splitting the volume that comprised the original object, and applying a pre-assigned texture to the newly created polygons. This texture represents the object's newly exposed interior.
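A minimal sketch of the two-dimensional fracture-path construction described above follows; the segment count, segment length, and perturbation range are illustrative assumptions.

```python
import math
import random

def fracture_path(start, base_angle, segments=8, seg_len=0.1,
                  max_perturb=math.radians(25), rng=None):
    """Generate a 2D fracture path: connected line segments whose
    orientations are randomly perturbed about the direction of the
    applied contact force (a sketch; parameters are illustrative).
    """
    rng = rng or random.Random()
    points = [start]
    x, y = start
    for _ in range(segments):
        angle = base_angle + rng.uniform(-max_perturb, max_perturb)
        x += seg_len * math.cos(angle)
        y += seg_len * math.sin(angle)
        points.append((x, y))
    return points

# Fracture propagating roughly along the +x axis from the impact point
path = fracture_path((0.0, 0.0), base_angle=0.0)
```

In three dimensions, the volume on either side of such a path would be split into separate meshes, with the pre-assigned interior texture applied to the newly created polygons, as described above.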
Entity visual representations within the simulation are preferably composed of various body-type representations (man, woman, child, etc.) combined with customizable appearances enabling different face, skin, hair, and clothing styles to be employed. With this multi-variable approach to entity implementations, the software provides a virtually limitless set of human representations.
Entities are engaged during a simulation using physical or virtual weapons. When a shot is registered in the simulated environment, the appropriate response is elicited through the infliction of damage on the scene or other entities. Visual special effects as well as physical reactions provide visual indications of damage. Some of the visual special effects include explosions, blood splatter, dust bursts, debris bursts, sparks, wood chips (from trees), cement bursts, bullet holes, and scorch marks. Physical indications include body movement reactions, large flying debris, and vehicle physical impacts. Additionally, shots registered in the environment can also elicit responsive behavior from the entities in the scene per their programmed behavior patterns in the given scenario.
The present invention preferably uses a base morale score combined with different factors to compute a morale score for each entity or group of entities in the simulation. The morale score affects the behavior of friendly and enemy AI entities, and can also be used to determine whether an entity should be suppressed. The following list of factors contributing to the morale score is exemplary, and is not intended to be comprehensive.
Enemy Shot At +1
Enemy Hit +3
Enemy Killed +5
Friendly Shot At −1
Friendly Hit −3
Friendly Killed −5
When morale is lower, AI entities may prefer cover, maintain line-of-sight to friendlies, stay closer to friendlies, reduce cutoff cost, fire less accurately, and adopt a lower profile (such as a crouched attack); an entity that is running away may panic, stop firing back, and try to increase its distance from the threat. Based on the morale score and the number of bullets shot at an AI entity or group of entities, the simulation can determine whether the entity becomes suppressed (i.e., exhibits longer cover intervals), pinned (suppressed and does not move), cowered (pinned and does not fire back), or the like.
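The following sketch ties together the exemplary morale factors listed above with the suppression states just described; the base score, the thresholds, and the rule combining morale with incoming fire are assumptions for illustration only.

```python
# Sketch of the morale bookkeeping described above. The per-event deltas
# come from the exemplary list; the base score, thresholds, and the rule
# combining morale with incoming fire are assumptions.
MORALE_DELTAS = {
    "enemy_shot_at": +1, "enemy_hit": +3, "enemy_killed": +5,
    "friendly_shot_at": -1, "friendly_hit": -3, "friendly_killed": -5,
}

def morale_score(base, events):
    return base + sum(MORALE_DELTAS[e] for e in events)

def suppression_state(morale, shots_at_entity):
    pressure = shots_at_entity - morale      # assumed combination rule
    if pressure > 30:
        return "cowered"     # pinned + does not fire back
    if pressure > 20:
        return "pinned"      # suppressed + does not move
    if pressure > 10:
        return "suppressed"  # exhibits longer cover intervals
    return "normal"

m = morale_score(10, ["friendly_killed", "enemy_hit", "friendly_shot_at"])
print(m, suppression_state(m, shots_at_entity=25))   # 7 suppressed
```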
To detect shot proximity, a sphere is created around each entity, and a raycast is calculated along the bullet's path through its entry point, mid point (the middle of the line connecting the entry and exit points on the sphere), and exit point. If the ray passes through the sphere, the bullet passed close enough for the entity to perceive it, which in turn alters the entity's morale score as described above.
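A minimal sketch of this proximity test follows, treating the bullet's path as a ray and checking whether its closest approach to the entity falls within the perception sphere:

```python
import numpy as np

def bullet_perceived(origin, direction, entity_pos, radius):
    """Shot-proximity test (a sketch): a sphere of the given radius is
    placed around the entity, and the bullet's path is treated as a ray.
    If the ray passes through the sphere, the bullet came close enough
    for the entity to perceive it (and its morale score is adjusted).
    """
    d = direction / np.linalg.norm(direction)
    to_entity = entity_pos - origin
    t = np.dot(to_entity, d)            # closest approach along the ray
    if t < 0.0:
        return False                    # bullet travelling away
    closest = origin + t * d
    return np.linalg.norm(entity_pos - closest) <= radius

print(bullet_perceived(np.array([0.0, 0.0, 0.0]),
                       np.array([1.0, 0.0, 0.0]),
                       np.array([5.0, 0.5, 0.0]), radius=1.0))  # True
```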
Although scenarios can be designed for stand-alone training, training is preferably controlled via an instructor station. During a simulation, the instructor is presented with a user interface similar to those illustrated in
The instructor can also select a role to play in the first person to enhance the reality of the training. Using an object oriented actor management and command system, exemplary interfaces to which are illustrated in
During the scenario, an after action review (AAR) log is compiled at the instructor station. The AAR information preferably includes, but is not limited to, the number of shots fired by each trainee, where those shots landed, the firing line of sight, reaction time, and other pertinent data deemed important by the instructors. With the AAR log, the instructor can play back the scenario on his local display and/or the trainees' display system to debrief performance. The playback system preferably employs a "play from here" methodology from any given time-point in the AAR log file.
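One way to support such time-ordered logging with "play from here" is sketched below; the event payloads and the API are hypothetical.

```python
import bisect

class AARLog:
    """After action review log with "play from here" (a sketch).

    Events are (timestamp, payload) pairs; playback from any time-point
    replays every event at or after that point, in order.
    """
    def __init__(self):
        self.times, self.events = [], []

    def record(self, t, payload):
        i = bisect.bisect(self.times, t)   # keep the log time-ordered
        self.times.insert(i, t)
        self.events.insert(i, payload)

    def play_from(self, t):
        start = bisect.bisect_left(self.times, t)
        for ts, ev in zip(self.times[start:], self.events[start:]):
            yield ts, ev

log = AARLog()
log.record(0.0, "shot fired: lane 1")
log.record(2.5, "hit registered")
for ts, ev in log.play_from(1.0):
    print(ts, ev)                          # replays from t = 1.0 onward
```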
In one embodiment, an instructor can use a simple user interface, such as scenario control 1900 of
Although an instructor can insert bookmarks via the add mark button of scenario control interface 1900 or other such user interface element, certain events should trigger a bookmark automatically. These events include, but are not limited to, an enemy entering the trainee's field of view for the first time, trainee death, enemy death, all explosions, trigger activation, and events generated from scripts. Specific evaluation triggers will automatically log the event in the trainee's individual AAR statistics.
Multi-channel hit detection is accomplished by sending a network packet whenever a mouse click input is received on a slave channel. The network packet contains the position and direction of the projected ray created from the two-dimensional mouse click. That network packet is processed by the master channel, and a hit is registered using the ray information. To avoid registering multiple hits where the screen area overlaps between channels, the time of each hit is compared against the maximum fire rate of the weapon. If a hit arrives too soon after the previous one, it is discarded.
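A sketch of the duplicate-hit filter on the master channel follows; the fire rate and the weapon identifiers are illustrative.

```python
import time

class HitDeduplicator:
    """Discard duplicate hits in overlapping screen regions (a sketch).

    When two channels both report the same shot, the master channel
    compares hit times against the weapon's maximum fire rate: a second
    hit arriving faster than the weapon can cycle is discarded.
    """
    def __init__(self, max_fire_rate_hz):
        self.min_interval = 1.0 / max_fire_rate_hz
        self.last_hit_time = {}          # weapon_id -> timestamp

    def register_hit(self, weapon_id, hit_time=None):
        t = time.monotonic() if hit_time is None else hit_time
        last = self.last_hit_time.get(weapon_id)
        if last is not None and (t - last) < self.min_interval:
            return False                 # too fast: overlap duplicate
        self.last_hit_time[weapon_id] = t
        return True

dedup = HitDeduplicator(max_fire_rate_hz=12.5)   # ~750 rounds/min
print(dedup.register_hit("rifle-1", 0.00))       # True
print(dedup.register_hit("rifle-1", 0.02))       # False (duplicate)
```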
One embodiment of the present invention can simulate a weapon system jamming based on the current clip 135, cumulative effects of the rounds fired, and the like, as illustrated in
By tracking clips, the rounds within the clips, and the types of those rounds (standard, tracer, custom, or other) in a database, using RFID or other wireless or wired technology, including a special clip stand that is instrumented to identify which clips 135 remain unused (and may thereby derive which clips have been used), the simulation can statistically derive the probabilities used to simulate a realistic and reasonable weapon jam during the currently executing firing cycle. The probability of jamming during any firing cycle is related to the round type being fired, the total rounds fired in the session, the total rounds fired since the last jam, the number of rounds fired of each ammunition type in the session, the number of rounds fired of each ammunition type since the last jam, and other items tracked in the simulation. Alternatively, a jam may be simulated at the command of the instructor, or as a predefined command in a control script.
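The disclosure does not give the statistical model itself, so the following sketch uses an invented linear weighting purely to illustrate how round types and round counts could feed a per-cycle jam probability.

```python
def jam_probability(rounds_since_last_jam, rounds_by_type, type_weights=None):
    """Derive a per-firing-cycle jam probability (a sketch).

    The disclosure ties the probability to round type, totals fired, and
    rounds fired since the last jam; the weights and the linear form
    below are assumptions for illustration only.
    """
    type_weights = type_weights or {"standard": 1.0, "tracer": 1.5,
                                    "custom": 2.0, "other": 1.2}
    wear = sum(type_weights.get(t, 1.0) * n for t, n in rounds_by_type.items())
    p = 0.0001 * wear + 0.00005 * rounds_since_last_jam
    return min(p, 0.05)                  # cap so jams stay "reasonable"

# e.g. 180 standard and 20 tracer rounds this session, 60 since last jam
p = jam_probability(60, {"standard": 180, "tracer": 20})   # ~0.024
```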
Once a weapon jam has occurred, the firing signal to the instrumented weapon is blocked and the trainee must perform a jam clearing procedure to reactivate the weapon. This procedure may range in complexity from the press of a single button on the instrumented weapon, to a complex series of steps which are sensed by the simulation from a fully instrumented weapon, such as that illustrated in
While the invention has been described in detail and with reference to specific embodiments thereof, it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope thereof. Thus, it is intended that the present invention cover the modifications and variations of this invention provided they come within the scope of the appended claims and their equivalents.
Claims
1. A training simulation system, comprising:
- a computer to generate an immersive virtual environment;
- a display in communication with the computer to display the computer-generated immersive virtual environment; and
- a controller in communication with the computer to allow a user to interact with the computer-generated immersive virtual environment;
- wherein the display is configured to display a viewpoint within the computer-generated immersive virtual environment corresponding to each user,
- wherein the computer monitors inputs from the controller to simulate movements and interactions of a user within the computer-generated immersive virtual environment,
- wherein the computer-generated immersive virtual environment represents the movements and interactions of one or more users within the same computer-generated immersive virtual environment,
- wherein the immersive virtual environment comprises an instructor station to serve as the control station, server, and network host for the computer-generated immersive virtual environment, the instructor station receiving and propagating inputs from each controller associated with a user and controlling, modifying, and recording the computer-generated immersive virtual environment, and
- wherein the instructor station comprises a scenario editor configured to edit the computer-generated immersive virtual environment in real-time during the training simulation, the scenario editor configured to allow the taking over of an entity represented within the computer-generated immersive virtual environment during the training simulation.
2. The training simulation system of claim 1, further comprising a plurality of communication channels to allow communication among users of the computer-generated immersive virtual environment, wherein the communication channels allow users to coordinate simulated movement within the computer-generated immersive virtual environment.
3. A training simulation system, comprising:
- a computer to generate an immersive virtual environment;
- a trainee station in communication with the computer to host a user, comprising: a display in communication with the computer to display the computer-generated immersive virtual environment; and a controller in communication with the computer to allow the user to interact with the computer-generated immersive virtual environment, wherein the display is configured to display a viewpoint within the computer-generated immersive virtual environment corresponding to the user; and
- an instructor station in communication with the computer to serve as the control station, server, and network host for the trainee station and the computer-generated immersive virtual environment, the instructor station receiving and propagating inputs from each trainee station and controlling, modifying, and recording the computer-generated immersive virtual environment,
- wherein the instructor station comprises a scenario editor configured to edit the computer-generated immersive virtual environment in real-time during the training simulation, the scenario editor configured to allow the taking over of an entity represented within the computer-generated immersive virtual environment during the training simulation,
- wherein the computer monitors inputs from the trainee station to simulate movements and interactions of the user within the computer-generated immersive virtual environment.
4. The training simulation system of claim 3, wherein the training simulation system comprises two or more trainee stations, and wherein the computer-generated immersive virtual environment represents the movements and interactions of each trainee station within the same computer-generated immersive virtual environment, and the display of each trainee station displays a representation of the movements and interactions of the users in the other trainee stations.
5. A method of interacting with a training simulation, comprising:
- generating a computer-generated immersive virtual environment;
- displaying the computer-generated immersive virtual environment in a trainee station;
- monitoring the trainee station for user inputs through a controller associated with the trainee station;
- modifying the computer-generated immersive virtual environment displayed in the trainee station according to the user inputs, wherein the user inputs correspond to simulated movements and interactions of the user within the computer-generated immersive virtual environment; and
- modifying the computer-generated immersive virtual environment displayed in the trainee station according to instructor inputs through an instructor station, the instructor station configured to serve as a control station, server, and network host for the computer-generated immersive virtual environment, the instructor station receiving and propagating inputs from the trainee station and controlling, modifying, and recording the computer-generated immersive virtual environment, wherein the instructor station comprises a scenario editor configured to edit the computer-generated immersive virtual environment in real-time during the training simulation, the scenario editor configured to allow the taking over of an entity represented within the computer-generated immersive virtual environment during the training simulation.
6. The method of claim 5, wherein the monitoring of the trainee station comprises the monitoring of two or more trainee stations,
- wherein the modifying of the computer-generated immersive virtual environment displayed in the trainee station according to the user inputs comprises modifying the computer-generated immersive virtual environment according to the user inputs from the two or more trainee stations, and
- wherein the displaying of the computer-generated immersive virtual environment in the trainee station comprises displaying the movements and interactions of each user of a trainee station within the same computer-generated immersive virtual environment, the display in each trainee station corresponding to the viewpoint of each user of a trainee station.
Type: Application
Filed: Mar 5, 2014
Publication Date: Jul 3, 2014
Applicant: DYNAMIC ANIMATION SYSTEMS, INC. (Fairfax, VA)
Inventors: David A. Slayton (Burke, VA), Dale E. Newcomb, JR. (Leesburg, VA), Eric A. Preisz (Fairfax, VA), Carl Douglas Walker (Front Royal, VA), Charles W. Lutz, JR. (Orlando, FL), Robert J. Kobes (Thornburg, VA), Christopher M. Ledwith (Fairfax, VA), Delbert Cope (Orlando, FL), Robert E. Young (Waynesboro, PA), Syrus Mesdaghi (Centreville, VA)
Application Number: 14/198,297