SYSTEMS AND METHODS FOR ARRANGING FIREARMS TRAINING SCENARIOS

Systems and methods of arranging a firearms training scenario utilising at least one robotic mobile target in a training area are disclosed, the method including the steps of: sending commands to at least one robotic target in a training area to cause the target to operate in the training area; recording operations data representative of the operations carried out by the at least one robotic target; and subsequently conducting a training scenario in the training area wherein the at least one robotic target bases its actions at least partially on the previously recorded operations data.

Description
TECHNICAL FIELD

The present invention relates to systems and methods for arranging firearms training scenarios and particularly relates to firearms scenarios utilising robotic mobile targets.

BACKGROUND TO THE INVENTION

Armed personnel such as soldiers typically receive training to assist them in dealing with armed combat situations that they might encounter during their active duties. Such training can include training exercises using live ammunition such as practice in shooting at targets. Such training is crucial to the personnel's performance and safety in real life situations. There remains a need for improved systems and methods for training armed personnel.

To date, such training has involved the use of static shooting targets, pop-up targets, and targets moved on tracks. For targets on tracks, the routes are defined by the tracks and the motion along those routes is controlled directly in real-time or is pre-defined on a computer screen.

In some cases, mobile targets have been used in the form of a mannequin or the like mounted on a moveable platform on wheels. These may be directly radio-controlled by a human operator during a training exercise. This adds a significant workload to training exercises, particularly when multiple moving targets are required, and it is difficult to present multiple trainees with identical training scenarios.

In some cases, these mobile targets have been programmed to move along a pre-programmed route in a training area to simulate persons moving about, and the personnel being trained must attempt to hit the mannequins. Route definition is performed on a computer screen. In other cases, the mobile targets are autonomous and the target's onboard computer generates the route for the target to follow according to constraints pre-defined on the computer screen. An example of such a system is described in the present applicant's International Patent application no PCT/AU2010/001165 (published as WO/2011/035363), the contents of which are incorporated herein by reference.

In all cases, the intended outcome is to present targets to a trainee in some desired fashion. When presenting moving targets along tracks, considerable thought should be put into the routes of the tracks, since they are difficult to move subsequently. With the advent of trackless targets that can move along any route, novel methods of defining the routes are required to facilitate quick, easy, and intuitive generation of new routes.

Problems with definition of routes on a computer screen include:

1. When looking at the computer screen, the operator has to imagine what the trainee will see from their perspective, what angles and openings will be visible from a certain vantage point, etc. This is especially difficult when there are elevation changes within the training range, so the operator has to think in three dimensions while plotting target trajectories on a two-dimensional screen. As a result, creating a route for a mobile target may involve iterating between defining the route on the screen, watching the target move from the trainee's intended vantage point, modifying the route on the screen, etc (a potentially cumbersome process).

2. Defining a route on a computer screen requires that the route be defined relative to something meaningful that can be displayed on the screen, i.e. a map of some kind. This mandates an extra step before a mobile trackless target can be used in a new training range: that map must first be generated. Even if the map is very simple, e.g. an aerial photograph of the training range geo-referenced in a GPS coordinate system, it is still an extra step and may require additional resources such as an internet connection to download the aerial photograph.

SUMMARY OF THE INVENTION

In a first aspect the present invention provides a method of arranging a firearms training scenario utilising at least one robotic mobile target in a training area, the method including the steps of: sending commands to at least one robotic target in a training area to cause the target to operate in the training area; recording operations data representative of the operations carried out by the at least one robotic target; and subsequently conducting a training scenario in the training area wherein the at least one robotic target bases its actions at least partially on the previously recorded operations data.

The operations data may include command data representative of at least some of the commands sent to the robotic target.

The operations data may include actions data representative of at least some of the actions carried out by the robotic target in reacting to the commands.

The operations data may include outcome data representative of at least some of the outcomes of executing the commands.

The step of sending commands to the at least one robotic target may be carried out by a human operator using a remote control input device.

The step of sending commands may be carried out whilst the human operator is situated at a location in the training area where at least one of the trainees will be situated during the step of conducting the training scenario.

The operations data may be recorded by the robotic mobile unit.

The operations data may include data representative of the location, orientation or velocity of the at least one robotic target in the training area.

The operations data may include data representative of any of: sounds produced by the at least one robotic target, raising or lowering of simulated weapons, deployment of special effects by the at least one robotic target, or the at least one robotic target remaining static.

During the step of conducting the training scenario, the robotic target may intentionally deviate from the operations data.

The robotic target may deviate from the operations data to avoid an obstacle.

The robotic target may randomly deviate from the operations data.

The scenario may utilise more than one robotic target, with each basing its operations on its own set of operations data.

The at least one robotic target may commence operations in the training scenario following the elapsing of a pre-determined interval of time, or in response to detecting personnel in the training area, or in response to detecting movement of another target in the training area.

In a second aspect the present invention provides a system for use in conducting a firearms training scenario utilising at least one robotic mobile target in a training area, the system including: sending means for sending commands to at least one robotic target in a training area to cause the target to operate in the training area; recording means for recording operations data representative of the operations carried out by the at least one robotic target; the at least one robotic target is arranged to participate in a firearms training scenario in the training area; and wherein the at least one robotic target is arranged to base its actions at least partially on recorded operations data.

The operations data may include command data representative of commands sent to the robotic target.

The operations data may include actions data representative of actions carried out by the robotic target in reacting to commands.

The operations data may include outcome data representative of outcomes of executing the commands.

The sending means may include a remote control input device.

The recording means may be embodied in the robotic mobile unit.

The operations data may include data representative of the location, orientation or velocity of the at least one robotic target in the training area.

The operations data may include data representative of any of: sounds produced by the at least one robotic target, raising or lowering of simulated weapons, deployment of special effects by the at least one robotic target, or the at least one robotic target remaining static.

The robotic target may be arranged to intentionally deviate from the operations data.

The robotic target may be arranged to deviate from the operations data to avoid an obstacle.

The robotic target may be arranged to randomly deviate from the operations data.

The system may include more than one robotic target.

The at least one robotic target may be arranged to commence actions following the elapsing of a pre-determined interval of time, or in response to detecting personnel in the training area, or in response to detecting movement of another target in the training area.

In this specification the following terms have the following intended meanings:

    • “commands”: instructions sent by an operator to a robot during the recording session.
    • “outcomes”: all aspects of the target's performance which, when taken in aggregate, define how the target is presented to the trainees.
    • “operations data”: the persistent record created during the recording session, which may include a combination of commands, actions and outcomes.
    • “actions”: operation steps planned and executed by the robot. During the recording session, the actions are generated in response to the operator commands. During the replay session, the actions are generated based on the operations data and real-time sensor data, with the objective of recreating the operations carried out during the recording session as faithfully as possible.

In embodiments of the invention, a human operator manually controls the operations of one target in a recording session. This can be achieved through the use of a remote user interface. The target records its operations. The operator later commands the target to replay the operations any number of times, for the benefit of the same or different trainees.
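By way of illustration only, the operations data produced by such a recording session might be organised as a timestamped log pairing the operator's commands with the resulting outcomes. The following is a minimal Python sketch assuming a simple in-memory layout; all type and field names are illustrative rather than taken from the specification.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Command:
    t: float        # seconds since the start of the recording session
    kind: str       # e.g. "drive", "raise_arms", "play_audio"
    params: dict    # e.g. {"linear": 0.8, "angular": 0.1}

@dataclass
class Outcome:
    t: float
    position: Tuple[float, float]   # GPS-derived easting/northing, metres
    heading: float                  # radians
    effects: List[str] = field(default_factory=list)  # e.g. ["speech"]

@dataclass
class OperationsData:
    """The persistent record replayed in later training scenarios."""
    commands: List[Command] = field(default_factory=list)
    outcomes: List[Outcome] = field(default_factory=list)
    trainee_location: Optional[Tuple[float, float]] = None
```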

The operations of the mobile units may include any of: sounds produced by the mobile units, movements of the mobile units, raising or lowering of simulated weapons, deployment of special effects by the mobile units, changes in velocity or direction of the mobile units or mobile units remaining static.

The target may be unable to faithfully replay the previously recorded operations. This may happen, for example, if it encounters an obstacle which was not in the training area at the time of the recording. In this case the target may use its sensors to detect the obstacle and navigate safely around it while attempting to return to the original path as soon as practicable.

Instead of faithfully replaying the original sequence of operations, the target may be instructed to alter some of the parameters during replay. The change in the parameters may be random or repeatable, or a combination of the two. Random changes make the actions of the robots more unpredictable, and therefore, more challenging for the trainees. Repeatable changes allow the instructor to fine-tune the scenario to the training needs of a particular trainee. Repeatable changes are also well-suited for firearms training courses where it is desirable that each trainee faces essentially the same training scenario.
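A minimal sketch of how such parameter variation might be implemented follows, assuming the recorded speed profile is available as a list of values. With no seed the perturbation differs on every replay (unpredictable); with a fixed seed the same perturbation is reproduced for every trainee (repeatable). The function name and parameters are illustrative.

```python
import random

def vary_speed_profile(speeds, scale=0.2, seed=None):
    """Perturb a recorded speed profile by up to +/- scale (fractional).

    seed=None  -> a fresh random perturbation on every replay
    fixed seed -> the same perturbation reproduced for every trainee
    """
    rng = random.Random(seed)
    return [s * (1.0 + rng.uniform(-scale, scale)) for s in speeds]

# Repeatable variation: every trainee faces essentially the same scenario.
profile = vary_speed_profile([1.0, 1.2, 0.8], seed=42)
```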

The replay of recorded operations may be triggered manually by the instructor or automatically, based on a timer, or actions of other targets, or sensed actions of human participants in the exercise.

Operations of multiple targets may also be recorded and replayed using the described approach. The recording can be achieved by multiple instructors controlling multiple targets simultaneously, or by one instructor controlling one target at a time.

BRIEF DESCRIPTION OF THE DRAWINGS

An embodiment of the present invention will now be described, by way of example only, with reference to the accompanying drawings, in which:

FIG. 1 is a schematic representation of a human-shaped robot used in embodiments of the invention;

FIG. 2 is a schematic bird's eye view of a training area in which recording of a training exercise is taking place using a robot according to FIG. 1;

FIG. 3 shows the training area of FIG. 2 in which a replay of the training exercise of FIG. 2 is taking place;

FIG. 4 shows the training area of FIG. 2 in which another replay of the training exercise of FIG. 2 is taking place;

FIG. 5 shows the training area of FIG. 2 in which yet another replay of the training exercise of FIG. 2 is taking place;

FIG. 6 shows the training area of FIG. 2 in which recording of another training exercise is taking place;

FIG. 7 shows the training area of FIG. 2 in which a replay of the training exercise of FIG. 6 is taking place; and

FIG. 8 is a flow chart illustrating the steps of collecting and using operations data in the recording and replay sessions of a training scenario.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

Referring to FIG. 1, an embodiment of a robotic mobile target is shown in the form of human-shaped robot 100. Robot 100 has a motorised wheeled base 1. On the base 1 is mounted a mannequin 6 shaped like a human torso. Robot 100 is controlled by an onboard computer 2, configured with software, which is mounted on the base 1 and protected by an armoured cover 3 from bullet strikes. Robot 100 includes wireless communication means 4 such as wifi to enable sending and receiving of information to and from a human operator (not shown), or to and from other robots, or to and from a control base station (not shown). Robot 100 includes a GPS receiver 12 to determine its own position.

Robot 100 includes a laser rangefinder 13 that enables it to detect features in the local environment and thereby “see” its surroundings. Fixed and moving obstacles are detected by analysing each laser scan. When an obstacle is detected in the robot's intended motion path, the motion plan is modified to safely navigate around it.
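The following sketch illustrates one simple form such a scan-based check might take, assuming the scan arrives as a list of ranges at evenly spaced bearings in the robot's frame (x forward); it flags any return inside a straight-ahead corridor. It is illustrative only, not the specification's detection algorithm.

```python
import math

def scan_blocks_path(ranges, angle_min, angle_step,
                     corridor_half_width=0.4, lookahead=3.0):
    """Return True if any laser return lies inside the straight-ahead
    corridor the robot intends to drive through (robot frame, x forward)."""
    for i, r in enumerate(ranges):
        if not (0.05 < r < lookahead):
            continue                      # skip invalid or distant returns
        a = angle_min + i * angle_step    # bearing of this return, radians
        x, y = r * math.cos(a), r * math.sin(a)
        if x > 0.0 and abs(y) < corridor_half_width:
            return True                   # obstacle in the corridor: replan
    return False
```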

FIGS. 2 to 7 depict preparation and execution of firearms training exercises carried out in a training area using one or several robots 100 of FIG. 1.

Referring to FIG. 2, a training area 10 is shown in which are located high walls 15, 16, 17 and a barrel 18. At the south edge of the training area is a firearms instructor 41 arranging a firearms training exercise. In the training area is a mobile unit in the form of human-shaped robot 31. This robot 31 is of the type of robot 100 shown in FIG. 1. The robot is arranged to execute and record operations based on the remote commands of the human instructor, as will now be described.

Firearms instructor 41 positions himself at the south edge of the exercise area to observe the area from the position where the trainee(s) will later be situated. The instructor sends a sequence of remote commands 51 to the target 31 using a specialised remote control hand-held device. The device includes a joystick for inputting directional commands, along with other buttons for sending commands to carry out other types of operations, such as deploying special effects, as will be described later. The desired speed of movement of the target in any direction is indicated by the degree of deflection the instructor applies to the joystick. The remote control device communicates with the target 31 by radio communication.

The instructor can issue the following commands to the target:

1. Motion control commands by joystick input, i.e. turn left, turn right, go straight, move faster or slower, reverse direction, or stop moving;

2. Raise or lower arms holding objects or simulated weapons;

3. Create audio effects from an onboard speaker;

4. Create light effects to illuminate the target itself or the ground around it;

5. Create other effects such as simulated gunfire, pyrotechnics, explosions, or smoke.

The target 31 operates in the training area in response to the commands it receives. The target also records operations data representative of the operations that are carried out. The operations data recorded includes data representative of the commands issued and also data indicative of the operation steps carried out in response to the commands.

For example, if the target reacts to directional commands to move between certain positions in the training area, then it records these operations in the form of positional outcomes of executing those commands, by storing GPS coordinate data of the points that it moved between in the form of waypoints. This ensures that the movements made subsequently by the target during the replay of a training scenario are a faithful reproduction of the movements witnessed by the instructor at the time of recording the scenario. The recorded operations data enables compensation for minor variations in conditions, such as increased wheel slippage in wet weather.
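A minimal sketch of such waypoint recording follows, assuming GPS fixes have already been converted to a local easting/northing frame in metres; the class name and spacing threshold are illustrative.

```python
import math

class WaypointRecorder:
    """Store a GPS fix as a waypoint whenever the target has moved at
    least `spacing` metres since the last stored point."""

    def __init__(self, spacing=0.5):
        self.spacing = spacing
        self.waypoints = []               # list of (easting, northing)

    def update(self, easting, northing):
        if self.waypoints:
            px, py = self.waypoints[-1]
            if math.hypot(easting - px, northing - py) < self.spacing:
                return                    # not far enough from last point
        self.waypoints.append((easting, northing))
```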

Target 31 may record the following outcomes resulting from executing operator commands:

1. Positional outcomes such as the target's coordinates and orientation;
2. Movement outcomes such as the target's linear and rotational velocities, as well as linear and angular accelerations;
3. Raising or lowering arms holding objects or simulated weapons;
4. Creation of audio effects from an onboard speaker;
5. Creation of light effects to illuminate the target itself or the ground around it;
6. Creation of other effects such as simulated gunfire, pyrotechnics, explosions, or smoke.

The instructor commands the target to move along the path 36 from position 71 behind the wall 15, out into the open area, in front of and around the barrel 18, and to its final position 72 behind the wall 17. Target 31 operates in the training area by executing the commands received from the instructor. The target 31 records its operations in the form of operations data which includes data representative of the commands and also data representative of the actions taken in reacting to the commands.

The instructor also provides information to the target as to the future intended location of trainees in the training exercise. The remote control device includes its own GPS positioning capability and a button which indicates “I'm at the Trainee Location”.

The remote localises itself and sends the location to the robotic target which saves it for future use. Alternatively, the instructor drives the target to the intended trainee location by way of joystick control and pushes a button which indicates “You're at the Trainee Location”. The robot uses its own GPS positioning system to determine the location and saves it for future use.
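For illustration, the handling of the two “Trainee Location” buttons might be sketched as follows; the Robot stand-in and its gps_fix method are hypothetical placeholders for the onboard software and GPS receiver 12.

```python
class Robot:
    """Hypothetical stand-in for the target's onboard software."""

    def __init__(self):
        self.saved_trainee_location = None

    def gps_fix(self):
        # Placeholder for a reading from the onboard GPS receiver 12.
        return (0.0, 0.0)

def on_trainee_location_button(robot, remote_fix=None):
    """Save the intended trainee vantage point for future use.

    remote_fix: (easting, northing) from the remote's own GPS
    ("I'm at the Trainee Location"); if None, the target has been
    driven to the spot and its own fix is used ("You're at the
    Trainee Location").
    """
    robot.saved_trainee_location = (
        remote_fix if remote_fix is not None else robot.gps_fix())
```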

Referring to FIG. 3, a training exercise is being carried out in the training area. The actions of the robot 31 are based on the operations data that was previously recorded in FIG. 2. For the purpose of the training exercise, the armed personnel 21 is the “blue” force (friendly), and the robot 31 is the “red” force (enemy). In this exercise, it is imagined that the red force has occupied the training area; the blue force must clear the area of the red force. The armed personnel 21 is entering the training area from the south. The firearms instructor 41 initiates the previously recorded exercise, causing the target 31 to start moving along the path 36. Armed person 21 takes note of target 31, takes aim and shoots.

Referring to FIG. 4, the exercise proceeds as in FIG. 3, but there is now a barrel 19 which was not there at the time when the exercise was recorded. Based on continuous analysis of the output of the laser rangefinder mounted on robot 31, the onboard computer determines that there is an obstacle which prevents it from following the pre-recorded path 36. The onboard computer calculates a new path 37 which allows it to navigate safely around the obstacle and return to the pre-recorded path 36 as soon as practicable.
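One simple way of choosing the point at which the target rejoins the pre-recorded path might be sketched as follows, assuming the path is stored as a list of waypoints and the target tracks the index of the last waypoint reached before the detour; the function is illustrative, not the onboard planner itself.

```python
import math

def rejoin_index(path, position, last_index):
    """After a detour, pick the closest remaining waypoint so the target
    returns to the recorded path as soon as practicable, without
    backtracking past points it has already covered."""
    best, best_dist = last_index, float("inf")
    for i in range(last_index, len(path)):
        d = math.hypot(path[i][0] - position[0], path[i][1] - position[1])
        if d < best_dist:
            best, best_dist = i, d
    return best
```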

In FIG. 5 the instructor 41 has commanded the target to execute the scenario recorded in FIG. 2 with an increased level of difficulty for armed personnel 21. Based on an analysis of the training area, the shape of the pre-recorded path 36, and the location of personnel 21, the onboard computer calculates a new path 38 which takes the target behind the barrel 18. The barrel partially obscures the target, making it more difficult to observe and to shoot.

In FIG. 6, two firearms instructors 41 and 42 are recording another firearms training exercise. The instructors position themselves inside the training area in order to better observe the targets 31, 32 and the high walls 15, 17. Instructor 41 sends a sequence of remote commands 51 to target 31 while instructor 42 sends a sequence of remote commands 52 to target 32. Target 31 is commanded to move along path 38, from position 73, around the western end of high wall 15; simulate loud human speech when it reaches position 74; and proceed south to position 75. Target 32 is commanded to move along path 39 from position 76, around the western end of high wall 17; simulate multiple shots 77; and proceed south to position 78.

Referring to FIG. 7, the training exercise recorded in FIG. 6 is being replayed. The armed personnel 21 is again entering the training area from the south. The firearms instructor 42 initiates the previously recorded exercise, causing the targets 31 and 32 to start moving along the paths 38 and 39. The timing of the two targets' actions is arranged such that target 32 waits until target 31 has produced simulated speech as a trigger for emerging from behind wall 17. Therefore, the simulated speech occurs before target 32 exposes itself from behind high wall 17. Armed person 21 is challenged to shoot and hit target 32 before it simulates firing shots, despite the distraction from target 31.
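For illustration, such event-chained timing might be sketched as follows, using a shared event flag as a stand-in for a trigger message sent over the wifi link; the function and event names are hypothetical.

```python
import threading

speech_done = threading.Event()   # stands in for a trigger sent over wifi

def target_31_activity():
    # ... move along path 38 to position 74, simulate loud speech ...
    speech_done.set()             # announce the trigger to other targets
    # ... proceed south to position 75 ...

def target_32_activity():
    speech_done.wait()            # hold behind wall 17 until triggered
    # ... emerge, simulate shots 77, proceed south to position 78 ...

threading.Thread(target=target_31_activity).start()
threading.Thread(target=target_32_activity).start()
```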

The record of changes in the target's position over time forms the target's trajectory. The record of other operations, e.g. audio effects, may be correlated to the recorded trajectory. This type of geo-referencing enables more faithful reproduction of the original target presentation. For example, the audio effect was intended to be played by target 31 at position 74 and not simply 15 seconds after the start of motion.
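A minimal sketch of such position-correlated replay follows, assuming each recorded effect is stored with the position at which it occurred; the names and trigger radius are illustrative.

```python
import math

def due_effects(recorded_effects, position, radius=1.0):
    """Yield the names of effects whose recorded trigger position lies
    within `radius` metres of the target's current position."""
    for name, (ex, ny) in recorded_effects:
        if math.hypot(position[0] - ex, position[1] - ny) <= radius:
            yield name

# e.g. play speech near recorded position 74, not at a fixed time offset
effects = [("speech", (12.0, 34.0))]
for effect in due_effects(effects, position=(11.5, 33.8)):
    print("trigger:", effect)
```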

Referring to FIG. 8, the steps in a recording session and replay session are illustrated. In the recording session an operator issues commands to a robotic target. The robot performs actions by way of utilising its various actuators. The results of its actions are referred to as outcomes. During the recording session, data relating to commands, actions and outcomes is recorded and referred to as operations data.

In the replay session, the robotic target uses the previously recorded operations data to plan and carry out actions by way of its various actuators in an attempt to reproduce the outcomes of the recording session.

The ability of the robots to maintain estimates of their own positions within the training area is important to their ability to repeat the operations that they took in response to the commands. In the embodiments described above, the robots 100 carried GPS receivers to localise themselves within the training range. In other embodiments the robots may localise themselves by way of any of the many methods described in the literature, e.g. tracking range and bearing to laser-reflecting beacons, measuring signal strength of radio beacons, or detecting buried magnets.

In the embodiments described above, the robots 100 carried laser rangefinders to sense objects and movements of objects in front of them. In other embodiments the robots may sense objects and movements of objects by way of other sensors such as cameras, radars or sonars. After the obstacles in the robot's vicinity are detected, one of many well-known obstacle avoidance algorithms may be employed to calculate a safe motion plan which avoids collision with the obstacles.

In various scenarios, the robots might perform the following variations to the previously recorded operations, or a combination of these variations (one such variation is sketched after the list):

1. Deviate from the recorded velocity profile, i.e. faster, slower, pause, skip a pause, change pause duration;

2. Make small deviations from the recorded path, i.e. to the left or to the right;

3. Make significant deviations from the recorded path in order to use the cover of natural or man-made obstacles;

4. Make more or fewer sound-effects or other actions.
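As an illustration of variation 2 above, small lateral deviations from the recorded path might be generated as follows, assuming the path is a list of (x, y) waypoints in metres; a fixed seed makes the deviation repeatable, while no seed makes it random. The function is a sketch under those assumptions, not the system's planner.

```python
import math
import random

def lateral_deviations(path, max_offset=0.3, seed=None):
    """Nudge each waypoint sideways (perpendicular to the local travel
    direction) by up to max_offset metres, to the left or the right."""
    rng = random.Random(seed)
    out = [path[0]]
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        dx, dy = x1 - x0, y1 - y0
        n = math.hypot(dx, dy) or 1.0
        px, py = -dy / n, dx / n            # unit normal to the segment
        o = rng.uniform(-max_offset, max_offset)
        out.append((x1 + o * px, y1 + o * py))
    return out
```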

In the embodiments described above, the replay of recorded operations was triggered manually by the instructor. In other embodiments it may be triggered automatically, based on a timer, the actions of other targets, or sensed actions of human participants in the exercise. Through the user interface, the operator may also pause the replay somewhere in the middle, or begin the replay part-way through the activity sequence.

Operations of multiple targets may also be recorded and replayed. In the embodiments described above, the operations of multiple targets were recorded in parallel, i.e. multiple operators control multiple targets simultaneously. With this approach, the timing of the targets' actions relative to one-another is also captured. In other embodiments the operations of multiple targets may be recorded in series, i.e. a single operator controls the targets one after another, and then assembles the individual activities into a coordinated scenario.

During replay of multi-target recordings, the targets begin their activities on one of the triggers listed above (the two simplest approaches being that all activities begin simultaneously, or each activity is triggered independently by the operator). Some form of dynamic obstacle avoidance may be needed when multiple robots operate in close proximity to one-another.
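For illustration, a replay that preserves the relative timing captured during parallel recording might be sketched as follows, with each target's activity represented as a callable plus a recorded start offset; this is a sketch under those assumptions, not the system's scheduler.

```python
import threading
import time

def replay_scenario(activities):
    """Launch each target's recorded activity at its offset (in seconds)
    from scenario start, preserving the relative timing captured during
    parallel recording. activities: list of (offset, callable) pairs."""
    t0 = time.monotonic()
    threads = []
    for offset, run in activities:
        def delayed(offset=offset, run=run):
            time.sleep(max(0.0, offset - (time.monotonic() - t0)))
            run()
        t = threading.Thread(target=delayed)
        t.start()
        threads.append(t)
    for t in threads:
        t.join()
```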

The operator can stand anywhere while recording the target's activity, but there are two advantageous locations:

1. The operator can follow along behind the target. This addresses two general problems encountered when controlling a robot from a distance:

    • obstacles are difficult to avoid due to poor depth perception at long range; and
    • when a robot passes behind an obstacle and line-of-sight is lost, situation awareness suffers.

2. The operator can stand exactly where the trainees will be when they are conducting the training scenario. This eliminates the problems associated with defining routes via the abstraction of a computer screen: the operator can see exactly what the trainees will see, and can define the route accordingly. Providing feedback to the operator, e.g. in the form of a live video feed, can mitigate the problems associated with controlling a robot at a distance or out of line-of-sight.

In the embodiments described above, the remote commands are sent to the robot using a specialised hand-held device. In other embodiments the remote commands could be sent using a computer, phone, gaming device, etc.

In the embodiments described above, the remote commands are sent to the robot using a wifi connection. In other embodiments the remote commands could be sent using any radio or a wired link.

In the embodiments described above, the firearms training exercises were carried out using live ammunition. In other embodiments the ammunition used could be simunition (simulated ammunition) or the firearms may be replaced by or augmented with lasers and laser targets to simulate ammunition.

In the embodiment described above, the armed personnel taking part in the training exercise were soldiers. However, embodiments of the invention also have application in training other types of people, such as security guards, members of private military companies, law enforcement officers, and private citizens who may be members of a gun club or shooting academy.

It can be seen that embodiments of the invention have at least one of the following advantages:

    • saving labour by allowing a scenario to be recorded once and replayed many times later, possibly with programmed or random variations. In the case of multi-target recording, a single instructor can operate multiple targets.
    • eliminating requirement for a pre-existing map of the training range. The routes are defined relative to what the operator sees, and may be stored e.g. as a path in a GPS coordinate system.
    • providing an intuitive, “what-you-see-is-what-you-get” interface for describing the scenario, without at any point needing an abstract visual representation of the training range to be shown to an operator on a computer screen.

Any reference to prior art contained herein is not to be taken as an admission that the information is common general knowledge, unless otherwise indicated.

Finally, it is to be appreciated that various alterations or additions may be made to the parts previously described without departing from the spirit or ambit of the present invention.

Claims

1. A method of arranging a firearms training scenario utilising at least one robotic mobile target in a training area, the method including the steps of:

sending commands to at least one robotic target in a training area to cause the target to operate in the training area;
recording operations data representative of the operations carried out by the at least one robotic target; and
subsequently conducting a training scenario in the training area wherein the at least one robotic target bases its actions at least partially on the previously recorded operations data.

2. A method according to claim 1 wherein the operations data includes command data representative of at least some of the commands sent to the robotic target.

3. A method according to claim 1 wherein the operations data includes actions data representative of at least some of the actions carried out by the robotic target in reacting to the commands.

4. A method according to claim 1 wherein the operations data includes outcome data representative of at least some of the outcomes of executing the commands.

5. A method according to claim 1 wherein the step of sending commands to the at least one robotic target is carried out by a human operator using a remote control input device.

6. A method according to claim 5 wherein the step of sending commands is carried out whilst the human operator is situated at a location in the training area where at least one of the trainees will be situated during the step of conducting the training scenario.

7. A method according to claim 1 wherein the operations data is recorded by the robotic mobile unit.

8. A method according to claim 1 wherein the operations data includes data representative of the location, orientation or velocity of the at least one robotic target in the training area.

9. A method according to claim 1 wherein the operations data includes data representative of any of sounds produced by the at least one robotic target, raising or lowering of simulated weapons, deployment of special effects by the at least one robotic target or at least one robotic target remaining static.

10. A method according to claim 1 wherein, during the step of conducting the training scenario, the robotic target intentionally deviates from the operations data.

11. A method according to claim 10 wherein the robotic target deviates from the operations data to avoid an obstacle.

12. A method according to claim 10 wherein the robotic target randomly deviates from the operations data.

13. A method according to claim 1 wherein the scenario utilises more than one robotic target and each bases its operations on its own set of operations data.

14. A method according to claim 1 wherein the at least one robotic target commences operations in the training scenario following the elapsing of a pre-determined interval of time, or in response to detecting personnel in the training area, or in response to detecting movement of another target in the training area.

15. A system for use in conducting a firearms training scenario utilising at least one robotic mobile target in a training area, the system including:

sending means for sending commands to at least one robotic target in a training area to cause the target to operate in the training area;
recording means for recording operations data representative of the operations carried out by the at least one robotic target;
the at least one robotic target is arranged to participate in a firearms training scenario in the training area; and
wherein the at least one robotic target is arranged to base its actions at least partially on recorded operations data.

16. A system according to claim 15 wherein the operations data includes command data representative of commands sent to the robotic target.

17. A system according to claim 15 wherein the operations data includes actions data representative of actions carried out by the robotic target in reacting to commands.

18. A system according to claim 15 wherein the operations data includes outcome data representative of outcomes of executing the commands.

19. A system according to claim 16 wherein the sending means includes a remote control input device.

20. A system according to claim 15 wherein the recording means is embodied in the robotic mobile unit.

21. A system according to claim 16 wherein the operations data includes data representative of the location, orientation or velocity of the at least one robotic target in the training area.

22. A system according to claim 16 wherein the operations data includes data representative of any of sounds produced by the at least one robotic target, raising or lowering of simulated weapons, deployment of special effects by the at least one robotic target or at least one robotic target remaining static.

23. A system according to claim 16 wherein the robotic target is arranged to intentionally deviate from the operations data.

24. A system according to claim 23 wherein the robotic target is arranged to deviate from the operations data to avoid an obstacle.

25. A system according to claim 23 wherein the robotic target is arranged to randomly deviate from the operations data.

26. A system according to claim 16 including more than one robotic target.

27. A system according to claim 16 wherein the at least one robotic target is arranged to commence actions following the elapsing of a predetermined interval of time, or in response to detecting personnel in the training area, or in response to detecting movement of another target in the training area.

Patent History
Publication number: 20140356817
Type: Application
Filed: Jan 23, 2013
Publication Date: Dec 4, 2014
Inventors: Alex Brooks (Newtown), Tobias Kaupp (St Peters), Alexei Makarenko (Erskineville)
Application Number: 14/364,865
Classifications
Current U.S. Class: Gun Aiming (434/19)
International Classification: F41G 3/26 (20060101); F41J 9/02 (20060101);