GUNNERY TRAINING DEVICE USING A WEAPON

The invention relates to a gunnery training device using a weapon (1), said device being characterised in that it comprises video capturing means (5) for capturing the field of vision, angular capturing means for detecting angles defining the position of insertion of the synthesis images, processing means (21, 22) for inserting, in real-time, synthesis images into the captured field of vision, and means for visualising (6) the captured field of vision containing said at least one inserted synthesis image.

Description

The present invention relates to a gunnery training device based on a weapon allowing a user to train in a real context but with the insertion of virtual objects into the field of view.

The subject of the invention is more precisely a device for gunnery training or a gunnery simulator, in particular based on a real weapon in a real context.

Gunnery training used in particular by the military consists in the use of non-real weapons in as real a context as possible. Training is carried out on the basis of various types of weapons, for example: rifle, tank, missile launcher.

Systems allowing gunnery training are today embodied mainly by means of non-real weapons dedicated specifically to training. Reproduction of the weapons attempts to comply fully with the dimensions, mass and ergonomics of the real system.

However, although these reproductions are as close as possible to the real weapons, they remain reproductions and may behave differently from the real weapons.

The systems also use certain imaging techniques to model a visual environment of the battle field, in particular synthesis scenery in which targets are present, likewise through synthesis images.

These systems have a low level of realism given that the scenery is modeled through synthesis imaging.

Moreover, the preparation of a training scene is expensive. Furthermore, the training scene must be entirely remodeled in a realistic manner (with the geometry, textures and materials) if the training terrain is different.

Moreover, these systems operate indoors only, that is to say very far from the real conditions.

Having regard to the foregoing, it would consequently be beneficial to be able to embody a gunnery training device based both on real weapons and in a real context in which the position of the weapon may vary, as may the place and the moment of appearance of the target virtual objects while circumventing at least some of the drawbacks mentioned above.

The present invention is firstly aimed at providing a gunnery training device based on a weapon, characterized in that the device comprises:

    • video capture means adapted to capture an angular field sighted by the weapon,
    • means for measuring angles adapted to determine at least one angle representative of the position of the weapon,
    • processing means adapted to insert, in real time, at least one synthesis image into the captured field, according to the position information received from said means for measuring angles, and
    • means for viewing the captured field containing said at least one inserted synthesis image.

The device according to the invention makes it possible to perform gunnery training under real conditions, that is to say in a genuine theater of operations. Accordingly, a real interaction is created, in real time, between a weapon and a real context on the one hand, and virtual objects representing targets, fixed or moving, on the other hand.

The virtual targets are adapted to be positioned by means of the measurements obtained by the means for measuring angles.

According to a particular embodiment, the device comprises real-time control means, adapted to control the processing means for the insertion of synthesis images.

According to a particular characteristic, the weapon is a real weapon.

According to this characteristic, since the training can be performed on the basis of a real weapon, it is all the more realistic and less expensive since it does not require the creation of a dummy weapon.

According to a particular embodiment, the weapon comprising sighting means, the training device comprises optical adaptation means adapted to allow the viewing of an image in the sighting means on the basis of the means for viewing the captured field of view containing the inserted synthesis images.

According to a particular characteristic, the position information received from said means for measuring angles is the yaw and the pitch.

According to another particular characteristic, the device furthermore comprises drive means adapted to drive, in real time, the processing means adapted to insert synthesis images.

According to a particular characteristic, the device comprises location means adapted to determine the position of the video capture means in the reference frame of a synthesis terrain in three dimensions, the synthesis terrain being a modeling corresponding to at least one element of the captured angular field.

According to this characteristic, the synthesis terrain comprises geometry information for determining the location of the capture means, so that the subsequent insertion of the synthesis images is carried out rapidly and precisely.

According to an embodiment, the location means comprise means for the real-time pairing of points of the synthesis terrain with corresponding points of the captured angular field.

According to another embodiment, the device comprises means for locking the synthesis terrain on the captured angular field.

According to a particular characteristic, the device comprises a location receiver adapted to locate the device, in particular a GPS.

According to another particular characteristic, the device comprises image analysis means adapted to increase the precision of the measurements carried out by the means for measuring angles.

Other aspects and advantages of the present invention will be more clearly apparent on reading the description which follows, this description being given solely by way of nonlimiting example with reference to the appended drawings, in which:

FIG. 1 represents in a diagrammatic manner a device for gunnery training in accordance with the invention;

FIG. 2 illustrates a hardware architecture of the device for gunnery training according to a first embodiment in accordance with the invention;

FIG. 3 illustrates a hardware architecture of the device for gunnery training according to a second embodiment in accordance with the invention;

FIG. 4 is a hardware or software architecture of the device for gunnery training according to the first embodiment illustrated in FIG. 2; and

FIG. 5 is a hardware or software architecture of the device for gunnery training according to the second embodiment illustrated in FIG. 3.

The invention is described on the basis of a real weapon, for example a rifle, such as illustrated in FIG. 1. However, it can be implemented on any type of weapon, real or not.

As illustrated in FIG. 1, a real weapon 1 comprises a gun 2, a sight 3 and possibly a tripod 4, in order to carry the weapon. However, depending on the weapon, the tripod is not always necessary.

The sight 3 is in particular an optical system mounted on the gun, so that the optical axis almost coincides with the axis of the gun. The parallax, that is to say the apparent angular displacement of a body observed from two different points, is negligible.

To embody a gunnery training device based on a real weapon, video capture means 5, in particular a high-definition video camera, are fixed in a rigid manner to the weapon. The optical axis of the camera almost coincides with the axis of the gun; the parallax is therefore negligible. In this way, the camera is able to acquire a video stream corresponding to the gunner's view. The video capture means are able to capture an angular field sighted by the weapon.

Furthermore, the weapon comprises viewing means 6. These viewing means allow the gunner to see a sharp image, for example through the sight 3. They comprise for example a video monitor 7 and an optical adaptation block 8. The video monitor is of small size so as to be able to be fitted on the weapon. Moreover, in order to obtain as real a rendition as possible, the video monitor is of high definition.

It is also possible to supplement the weapon with means for measuring angles 9, also called a movement sensor or angular sensor, based on various technologies, for example an inertial platform, laser positioning, optical encoders or tracking by image analysis.

A movement sensor makes it possible to ascertain the orientation of the weapon, for example in the case of a weapon placed on a tripod.

The movement sensor also makes it possible to indicate the position and the orientation of the weapon, for example in the case of a weapon free of its movements, in particular in the case of a missile launcher positioned on the gunner's shoulder.

With a view to carrying out gunnery training, the video arising from the video capture means is modified in real time, so as to supplement the real scenes with virtual targets that the gunner has to hit.

Accordingly, a hardware architecture of the device for gunnery training is now described with reference to FIG. 2. This architecture is in particular used when the weapon is equipped with a movement sensor having good accuracy.

This architecture comprises video capture means 5, in particular a high-definition camera, and viewing means 6 allowing the gunner to view the real landscape enhanced with virtual targets.

Furthermore, the enhanced real landscape can comprise visual effects, in particular the addition of virtual elements to the landscape. Thus, it is made possible to add virtual buildings, or to obscure vision as the shot departs: for example, opaque smoke is simulated at the moment of firing, which clears as time passes.

The hardware architecture comprises a first processing means 21 called the gunner processing means, adapted to generate the video images enhanced with virtual objects constituting the gunnery training targets and a second processing means 22 called the instructor processing means, adapted to control the appearance of the virtual targets on the video images.

The video sensor 5 is linked to the gunner processing means 21 by way of a converter 23 from the HD-YUV component output to the HD-SDI standard (“High Definition Serial Digital Interface”), in particular in the 16/9 format. The converted HD-SDI signal is thereafter dispatched to the HD-SDI input 24 of the gunner processing means 21.

YUV designates an analog video interface which physically separates, on three conductors, the luminance (Y), the first chrominance component (U) and the second chrominance component (V), so as to link a video source to a processing means.

The architecture also relies on a firing button 25 present on the weapon. This button is linked to an input/output port 26 of the gunner processing means 21. The port may be in particular the serial port, the parallel port, the USB port or an analog input on a PCI (“Peripheral Component Interconnect”) card.

The movement sensor 9, such as shown in FIG. 1, with which the weapon is supplemented is also connected to an input/output port 26 of the gunner processing means 21.

The gunner processing means 21 is equipped with an audio output 27 making it possible to connect loudspeakers 28 and with a video output 29.

The aim of the audio output 27 is to reproduce the sound effects caused by firing, explosions, destruction of the virtual targets, etc.

The video output 29 is linked to the viewing means 6 mounted on the weapon so as to display in the weapon's sight the real landscape, corresponding to the angular field, enhanced with virtual targets and visual effects, sighted through the gunner's weapon.

The video output is in particular in the UXGA format (“Ultra eXtended Graphics Array”).

According to an embodiment, the video output 29 is linked to a video distributor 30. This distributor can, in this way, duplicate the video signal to the viewing means 6 mounted on the weapon and to a recorder 32, in particular by way of a converter 33 able to convert the images, for example from the UXGA format into a PAL or NTSC format, with a view to allowing the recording of the video images by the recorder 32.

The recorder 32 is in particular a DVD recorder comprising a hard disk. This recorder makes it possible, on the one hand, to store the video images seen by the gunner and, on the other hand, to replay the gunnery sequences during the evaluation or debriefing phase.

The video distributor 30 can also duplicate the video signal to the screen 34 of an instructor so as to allow the instructor to view the images of the gunner.

The instructor processing means 22 comprises a video output 35, in particular in the UXGA format, linked to a screen 34, in particular, by way of a video switch 36. By means of this switch, the screen of the instructor 34 allows a viewing either of the field of view of the gunner enhanced with the virtual targets, or of the man-machine interface of the instructor station generated by the instructor processing means 22.

The gunner processing means 21 and the instructor processing means 22 can be connected together via, in particular, an Ethernet concentrator 37.

The data corresponding to the virtual objects to be inserted into the video corresponding to the real training landscape are stored, for example in a database, either in the gunner processing means 21, or in the instructor processing means 22.

Also linked to the instructor processing means 22 by way of an input/output port, in particular by way of the USB (“Universal Serial Bus”) port, is a handle of mouse or joystick type.

According to a variant embodiment, a hardware architecture of the gunnery training device comprising image analysis means is now described with reference to FIG. 3. This architecture is in particular used when the weapon is equipped with a movement sensor not having good accuracy.

The hardware architecture illustrated in FIG. 3 is equivalent to the hardware architecture illustrated in FIG. 2, except that a specific image analysis means 40 has been added.

The means illustrated in FIG. 3 already present in FIG. 2 and described above bear the same identifier.

According to this architecture, the movement sensor 9 is no longer linked to the gunner processing means 21, but to the image analysis means 40 by way of input/output ports 41 of the analysis means 40.

Specifically, in this way, the movement sensor indicates approximately the orientation (yaw, pitch) of the video capture means 5 in the real environment. On the basis of this information and of the image originating from the video capture means, the image analysis means 40 refines the yaw and pitch values.

Thus, the image analysis enhances the accuracy of the orientation of the video capture means, the orientation consisting of the values of yaw and pitch.

According to a particular embodiment, the values (yaw, pitch) are refined by pairing points of interest contained in the video image originating from the video capture means, with predetermined points of interest of the global panorama.

The time required by the image analysis to determine the orientation is optimized by virtue of the coarse knowledge of the values of yaw and pitch dispatched by the imperfect movement sensor.
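
By way of illustration, the refinement step can be sketched as follows, assuming OpenCV for the point-of-interest matching and a pre-built panorama whose keypoints are tagged with known (yaw, pitch) values; the window size and the averaging of matched angles are assumptions, not taken from the patent.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def refine_orientation(frame, panorama_desc, panorama_angles,
                       coarse_yaw, coarse_pitch, window_deg=2.0):
    """Refine the coarse (yaw, pitch) from the movement sensor by pairing
    points of interest in the frame with panorama points whose angles lie
    near the coarse estimate (hypothetical helper, not the patent's code)."""
    # Restrict the search to panorama points near the coarse values; this is
    # how the imperfect sensor data shortens the image-analysis time.
    near = [i for i, (y, p) in enumerate(panorama_angles)
            if abs(y - coarse_yaw) < window_deg
            and abs(p - coarse_pitch) < window_deg]
    if not near:
        return coarse_yaw, coarse_pitch
    kp, desc = orb.detectAndCompute(frame, None)
    if desc is None:
        return coarse_yaw, coarse_pitch
    matches = matcher.match(desc, panorama_desc[near])
    if not matches:
        return coarse_yaw, coarse_pitch
    # Crude refinement: average the angles of the matched panorama points.
    ys = [panorama_angles[near[m.trainIdx]][0] for m in matches]
    ps = [panorama_angles[near[m.trainIdx]][1] for m in matches]
    return float(np.mean(ys)), float(np.mean(ps))
```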

Furthermore, the video sensor 5 is also linked to the image analysis means 40 via the video input 42 of the image analysis means 40.

The image analysis means 40 is linked to the various processing means, in particular to the gunner processing means 21 and to the instructor processing means 22, via a network, for example the Ethernet network.

This image analysis means 40, according to this variant embodiment, is dedicated to the dispatching via the network of the orientation information (yaw, pitch) of the video capture means to the gunner processing means 21.

This information is generated, in particular, by means of two data streams, namely the data stream originating from the movement sensor 9 and the data stream originating from an image analysis algorithm.

The software architecture for the implementation of the gunnery training system in accordance with the invention is now described with reference to FIG. 4.

This software architecture is presented with reference to the hardware architecture illustrated in FIG. 2.

According to this implementation, the gunner processing means 21 is equipped with enhanced-reality processing means 45, in particular the D'FUSION software from the company TOTAL IMMERSION.

Enhanced reality consists in mixing synthesis images, also called virtual objects, with real images arising from the video capture means 5.

The enhanced-reality processing means carries out the addition of virtual objects to a video and the viewing arising from this addition in real time, thus generating videos of enhanced reality in real time.

Accordingly, the gunner processing means 21 mixes synthesis images and the real images in real time, that is to say operates the processing at a video image rate of 50 frames per second or 60 frames per second as a function of the standard of the video capture means.

The gunner processing means 21, as well as containing enhanced-reality processing software, comprises various software modules allowing the processing of the gunnery training system.

One of the modules consists of a firing management module 44 able to detect and process the pressing of the firing button by the gunner.

A second module 47 consists in determining, in real time, the trajectory of the shot. Accordingly, this module calculates, in real time, the coordinates of the projectile, in particular the X, Y and Z coordinates and the yaw, pitch and roll.

As a function also of the type of the weapon, it is possible that the gunner's sighting axis may influence the trajectory of the missile, in particular in the case of guided missiles.
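
A minimal sketch of such a trajectory module is given below; since the patent does not specify the flight model, a simple ballistic model without drag or guidance is assumed, with yaw and pitch derived from the velocity vector.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def step_projectile(pos, vel, dt):
    """Advance the projectile state by dt seconds and return the new
    position, velocity and the (yaw, pitch, roll) implied by the velocity."""
    x, y, z = pos
    vx, vy, vz = vel
    vz -= G * dt                                  # gravity acts along -Z
    x, y, z = x + vx * dt, y + vy * dt, z + vz * dt
    yaw = math.degrees(math.atan2(vy, vx))
    pitch = math.degrees(math.atan2(vz, math.hypot(vx, vy)))
    roll = 0.0                                    # no spin model in this sketch
    return (x, y, z), (vx, vy, vz), (yaw, pitch, roll)
```

Called once per video frame (a dt of 1/50 or 1/60 second), such a module supplies the projectile coordinates in real time.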

A third module 48 gathers data originating from the movement sensor. These data are dependent on the type of sensor and are, for example, the yaw and pitch pair, or the whole set of yaw, pitch and roll data, or else the X, Y and Z coordinates and the yaw, pitch and roll.

As regards the instructor processing means 22, a control module 49, also called the exercise management module, is present so as to control in particular the passage of the virtual targets in real time over the weapon viewing means.

This module makes it possible to instruct a gunner (missile, tank, or any other weapon) under the guidance of an instructor.

Accordingly, by means of a man/machine interface and of a handle, the instructor controls the virtual targets that are to be inserted into the video acquired by the capture means 5 and are presented to the gunner.

Thus, these instructions are, according to an embodiment, transmitted to the gunner processing means 21 so that the latter carries out, through the enhanced-reality processing software 45, the inlaying of virtual targets into the acquired video.

Furthermore, the instructor can control the symbology, that is to say the reticle also called the sighting aid. Specifically, the instructor can manipulate symbols which appear in the gunner viewing screen. In this way, he can guide the gunner, in particular by showing him a target that he has not seen in the landscape, or ask him to sight a certain point of the landscape.

A communication interface between the gunner processing means 21 and the instructor processing means 22 makes it possible to manage the communications. Specifically, for each virtual target to be inserted, the coordinates of the target are transmitted from the instructor processing means 22 to the gunner processing means 21, in particular providing the following information: the coordinates X, Y and Z, the yaw, pitch and roll and, for each type of symbology, the coordinates of the screen namely the X and Y coordinates.
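
The patent does not specify a wire format for these communications; the following sketch merely illustrates one possible packing of the information listed above (the struct layouts are assumptions).

```python
import struct

TARGET_FMT = "<I6f"   # target identifier, then X, Y, Z, yaw, pitch, roll
SYMBOL_FMT = "<I2f"   # symbology type, then screen X and Y coordinates

def pack_target(target_id, x, y, z, yaw, pitch, roll):
    return struct.pack(TARGET_FMT, target_id, x, y, z, yaw, pitch, roll)

def pack_symbology(symbology_type, screen_x, screen_y):
    return struct.pack(SYMBOL_FMT, symbology_type, screen_x, screen_y)

def unpack_target(payload):
    tid, x, y, z, yaw, pitch, roll = struct.unpack(TARGET_FMT, payload)
    return {"id": tid, "position": (x, y, z), "attitude": (yaw, pitch, roll)}
```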

According to a variant embodiment, a software architecture related to the implementation of the gunnery training system comprising image analysis means is now described with reference to FIG. 5. This architecture is in particular used when the weapon is equipped with a movement sensor not having good accuracy.

The software architecture illustrated in FIG. 5 is equivalent to the software architecture illustrated in FIG. 4, except that a specific module for image analysis is installed in the image analysis means 40, in accordance with the hardware architecture illustrated in FIG. 3.

The modules illustrated in FIG. 5 already present in FIG. 4 and described above bear the same identifier.

According to this implementation, the gunner processing means 21 no longer receives directly the data originating from the movement sensor 9, but possesses a module for receiving data originating from the image analysis means 40, by way of a communication interface between the gunner processing means and the image analysis means.

The image analysis means 40 receives, on the one hand, the data originating from the movement sensor 9 and, on the other hand, the video stream originating from the video sensor 5. The image analysis module analyzes the video stream received with the data received from the movement sensor with a view to determining the weapon's yaw and pitch values. The result of this analysis is sent, in real time, to the gunner processing means 21 by way of the network.

A communication interface between the gunner processing means 21 and the image analysis means 40 makes it possible to manage the communications. Specifically, by means of this interface, the data relating to the position of the camera are transmitted, in particular providing the following information: the yaw and pitch or the yaw, pitch and roll or the coordinates X, Y and Z, the yaw, pitch and roll.

In these various embodiments, the gunner processing means 21 refreshes the images transmitted to the viewing means 6 at a frequency compatible with the video capture means 5 and the viewing means 6, namely 50 Hz in the case of a European-standard video capture means and 60 Hz in the case of an American-standard video capture means. It should be noted that the viewing means must operate at the same frequency as the video sensor.

Concerning the aiming of the weapon, a movement of the weapon induces a corresponding visual offset of the video landscape, of the targets and of the missile.

The precision of the locking of the virtual targets in the captured real landscape is dependent on the precision of the movement capture procedure, in particular as a function of the analysis or otherwise of images.

Concerning the resolution of the video capture of high-definition type, it should be noted that the 16/9 HD-SDI standard comprises 1920 pixels by 1080 interlaced lines, that the angular field sighted by the gunner's weapon is circular, and that the useful video resolution inside the circular field is therefore 1080 pixels by 1080 interlaced lines.

Likewise, during the viewing on the instructor's screen of the angular field sighted by the gunner's weapon, the screen is for example in the 4/3 format, i.e. a resolution of 1440 pixels by 1080 lines.

For these reasons, the video resolution must be redimensioned to the resolution of the display.

Furthermore, by means of the bilinear texture filtering of the gunner processing means 21, the video image can be redimensioned without any visual artefact, that is to say without pixelation of the video image.

This redimensioning is carried out for the display of the real landscape enhanced with virtual targets in the viewing means 6. Furthermore, it can be used for the display of the enhanced landscape on the instructor's screen. In this case, the redimensioning is performed to a format of 1280 pixels by 1024 lines in SXGA mode, or to a format of 1600 pixels by 1200 lines in UXGA mode.
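
A sketch of this redimensioning step, assuming OpenCV: the 1080-by-1080 useful field is cropped from the center of the HD frame and resized with bilinear interpolation to the SXGA or UXGA display format.

```python
import cv2

def redimension(frame, mode="SXGA"):
    """frame: a 1920x1080 image; returns it cropped to the useful circular
    field and resized for the display (helper assumed for illustration)."""
    h, w = frame.shape[:2]                    # expected 1080, 1920
    left = (w - h) // 2                       # center the 1080x1080 field
    useful = frame[:, left:left + h]
    size = (1280, 1024) if mode == "SXGA" else (1600, 1200)  # UXGA otherwise
    return cv2.resize(useful, size, interpolation=cv2.INTER_LINEAR)
```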

The viewing means 6 are in particular a small-size monitor; specifically, the latter must be lightweight and compact. The technologies that can be used are the following: miniature LCD, LCOS, DLP, OLED.

The viewing means impose a certain display resolution. Thus, the graphics card of the gunner processing means is configured for these special viewing means. For example, the display is 1280 pixels by 1024 lines or 1600 pixels by 1200 lines.

The manner of determining the distance at which a target is detected, recognized and identified by the gunner is now described on the basis of this display information. It should be noted that a target is detected when it is seen to move but is not yet recognized; it is recognized when, for example, it is made out far off as a tank, although the exact type of the tank is not yet determined; it is called identified when that exact type is determined.

Detection, recognition and identification criteria according to two types of screen resolution are now described.

In a first case, when the display has a resolution of 1280 pixels by 1024 lines and a field of view of 8.5 degrees (corresponding to 1024 lines), a target is detected when it is present on one video line, recognized when it is present on 3 video lines, and identified when it is present on 5 lines.

According to this example, for a target 2 meters in height, the following results are obtained: detection of the target takes place at a distance greater than 6 km, recognition is effected at 4600 meters and identification at 2760 meters.

In a second case, when the display has a resolution of 1600 pixels by 1200 lines and a field of view of 8.5 degrees (corresponding to 1200 lines), a target is likewise detected when it is present on one video line, recognized when it is present on 3 video lines, and identified when it is present on 5 lines.

According to this example, for a target 2 meters in height, the following results are obtained: detection of the target takes place at a distance greater than 6 km, recognition is effected at 5390 meters and identification at 3200 meters.
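
These figures follow from the angular size of one video line: a target of height h subtends one line at a distance of roughly h divided by the per-line angle (the field of view in radians over the number of lines). The small-angle check below reproduces, to rounding, the recognition and identification distances quoted above; the one-line detection distance comes out well beyond 6 km.

```python
import math

def dri_distances(target_height_m=2.0, fov_deg=8.5, n_lines=1024):
    """Distances at which a target spans 1, 3 and 5 video lines."""
    rad_per_line = math.radians(fov_deg) / n_lines
    d_one_line = target_height_m / rad_per_line
    return d_one_line, d_one_line / 3, d_one_line / 5

print(dri_distances(n_lines=1024))  # ~ (13800, 4600, 2760) meters
print(dri_distances(n_lines=1200))  # ~ (16180, 5390, 3240) meters
```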

As indicated previously, the enhanced-reality processing means supplements a captured real landscape with mobile or immobile virtual targets in real time. Specifically, the virtual targets can move in the real landscape captured by the video sensor.

For example, targets of tank type or of helicopter type can move in the captured landscape.

It is furthermore possible to add visual effects such as the animation of the rotor of a helicopter.

The instructor, through the instructor processing means 22, can choose, in particular, between two types of displacement for each target.

A first mode consists in displacing the target according to a list of transit points, the transit points being modifiable during the exercise by the instructor. This displacement mode is called the “transit point” mode.

Accordingly, the instructor processing means 22 comprises means for inputting and modifying the transit points, in particular by means of the handle.

To input or modify the points of transit of the targets, the instructor can obtain a view of the scene. On the basis of the handle, for example, the instructor can add, delete or modify a transit point. The transit points appear numbered on the instructor's screen.

The transit points for terrestrial vehicles are associated with the relief of the terrain. The transit points for aerial vehicles have an altitude adjustable by the instructor.

The trajectory of the virtual target along the transit points is calculated by the enhanced-reality processing means with linear interpolation.

According to an embodiment, 16 transit points per virtual target are envisaged.
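
A minimal sketch of the interpolation used in the “transit point” mode, under the assumptions that the points are given as (x, y, z) triples and that the parameter s advances with the target's speed (the patent does not detail the timing):

```python
def position_on_route(points, s):
    """points: list of up to 16 (x, y, z) transit points; s: abscissa in
    [0, len(points) - 1]; returns the linearly interpolated position."""
    if len(points) == 1:
        return points[0]
    i = min(int(s), len(points) - 2)      # segment index
    t = s - i                             # fraction along the segment
    (x0, y0, z0), (x1, y1, z1) = points[i], points[i + 1]
    return (x0 + t * (x1 - x0), y0 + t * (y1 - y0), z0 + t * (z1 - z0))
```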

A second displacement mode consists in displacing the target by means of the handle 38 driven by the instructor. This displacement mode is called the “joystick” mode.

It should be noted that very often, the gunner's view exhibits a field of view that is too narrow to select the transit points.

Thus, the instructor processing means is equipped with an enhanced-reality processing means, in particular the D'FUSION software from the company TOTAL IMMERSION, specifically configured for inputting and modifying the transit points, as well as for managing the joystick.

To input or modify the points of transit of the targets, the instructor has a view from above of the scene. With the aid, in particular, of the keyboard and the mouse, the instructor can add, delete or modify a transit point. The transit points appear numbered on the screen.

Concerning the transit points, those for terrestrial vehicles are associated with the relief of the terrain, while those for aerial vehicles are associated with an altitude adjustable by the instructor.

The enhanced-reality processing means allows management of the toggling from the “transit point” mode to the “joystick” mode. Accordingly, the virtual target starts from its current position.

As regards the toggling from the “joystick” mode to the “transit point” mode, the virtual target repositions itself on the first transit point defined by the instructor.

The position and orientation data of the virtual targets defined by the instructor are transmitted in real time from the instructor processing means 22 to the gunner processing means 21 equipped with the enhanced-reality processing means, with a view to their processing in real time for display on the viewing means of the gunner undergoing training.

Furthermore, the targets can appear as non-destroyed or destroyed. For example, the enhanced-reality processing means can add the immobile carcass of a tank destroyed by the gunner.

However, the management of the destruction of a target by the enhanced-reality processing means can take several forms.

For example, in the case of the destruction of a tank, when the target is hit by the gunner during his training, the enhanced-reality processing means adds the explosion of the tank to the real landscape, then displays the carcass of the tank.

In the case of the destruction of a helicopter, when the target is hit by the gunner during his training, the enhanced-reality processing means adds the explosion of the helicopter to the real landscape, then the helicopter disappears.

However, three explosion effects for the virtual target can be implemented by the enhanced-reality processing means. These involve, first of all, the explosion following the impact of the shot on the ground or on an element of the scenery; thereafter, the explosion of the virtual target following the impact of the shot on an aerial virtual target; finally, the explosion of the virtual target following the impact of the shot on a terrestrial virtual target.

The enhanced-reality processing means manages each explosion by a virtual object of “billboard” type in which a silhouetted texture is played, a specific video being used as a function of the type of impact.

A virtual object of “billboard” type is a synthesis object consisting of the following elements: a rectangle of zero thickness and a texture applied to this rectangle. The rectangle is positioned on the ground facing the camera. The texture applied can be a dynamic texture originating from a film stored on the hard disk. For example, in the case of an explosion, an “explosion” film is played in this rectangle. Furthermore, the edges of the rectangle are not seen, the latter being transparent.
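
The following sketch captures the geometry of such an object; the rendering interface is not described in the patent, so only the camera-facing rectangle and the frame-by-frame playing of the film texture are shown, under assumed names.

```python
import math

class Billboard:
    """A zero-thickness rectangle facing the camera, playing a film texture
    whose alpha channel makes the rectangle's edges invisible."""

    def __init__(self, position, width, height, film_frames):
        self.position = position          # anchor point on the ground
        self.width, self.height = width, height
        self.film_frames = film_frames    # pre-decoded RGBA frames
        self.frame_index = 0

    def corners(self, camera_position):
        # Rotate the rectangle about the vertical axis to face the camera.
        px, py, pz = self.position
        cx, cy, _ = camera_position
        yaw = math.atan2(cy - py, cx - px)
        rx = -math.sin(yaw) * self.width / 2
        ry = math.cos(yaw) * self.width / 2
        return [(px - rx, py - ry, pz), (px + rx, py + ry, pz),
                (px + rx, py + ry, pz + self.height),
                (px - rx, py - ry, pz + self.height)]

    def next_texture(self):
        # Advance the film played in the rectangle (e.g. the explosion).
        frame = self.film_frames[self.frame_index % len(self.film_frames)]
        self.frame_index += 1
        return frame
```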

However, in the case of an impact on the ground or on an element of the real landscape, for example a house, the landscape might not be visually modified.

During a training shot by a gunner, the distance d1 of the shot with respect to the terrain in the landscape, that is to say the point of the terrain on the axis of the shot, is determined. Then, the distance d(N) of the shot with respect to the virtual target N is determined.

According to a simplified embodiment, the distance d(N) is the distance between the center of gravity of the shot and the center of gravity of the target.

On completion of the calculation of these distances, an algorithm for displaying the explosion effects comprises a first test making it possible to determine whether the distance d1 of the shot with respect to the terrain in the landscape is less than a threshold distance. If such is the case, then the coordinates X, Y and Z of the impact of the shot are determined and the addition and the displaying of the explosion following the impact of the shot on the terrain are triggered.

It is also determined whether the distance d(N) of the shot with respect to the virtual target N is less than a threshold distance. If such is the case, then the coordinates X, Y and Z of the impact of the shot are determined and the addition and the displaying of the explosion following the impact of the shot on the virtual target are triggered, followed by the displaying of the virtual target N destroyed.
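
Transcribed directly, the display algorithm amounts to two proximity tests; the threshold values and the dictionary of targets below are illustrative assumptions.

```python
import math

def process_shot(impact_point, terrain_point_on_axis, targets,
                 terrain_threshold=5.0, target_threshold=3.0):
    """impact_point, terrain_point_on_axis: (x, y, z) triples; targets maps
    a target name to its center of gravity. Returns the display events."""
    events = []
    # First test: is the shot within the threshold distance d1 of the
    # terrain point lying on the axis of the shot?
    d1 = math.dist(impact_point, terrain_point_on_axis)
    if d1 < terrain_threshold:
        events.append(("ground_explosion", terrain_point_on_axis))
    # Second test: is the shot within the threshold distance d(N) of a
    # virtual target N (center of gravity to center of gravity)?
    for name, center_of_gravity in targets.items():
        if math.dist(impact_point, center_of_gravity) < target_threshold:
            events.append(("target_explosion", center_of_gravity))
            events.append(("display_destroyed", name))
    return events
```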

In the case of a shot of missile type, the latter is represented by a 3D virtual object mainly seen from the rear. Furthermore, in order to simulate the smoke of the missile, M objects of circular “billboard” type are displayed, to which a texture is applied with management of the alpha and transparency parameters, so as to obtain a realistic smoke trail.

The displaying of the missile may be deferred by a few milliseconds after receipt of the firing information so as to simulate the time taken by the missile to exit the gun of the weapon.

It should be noted that the trajectory of the missile is stored so as to be able to position the M objects of “billboard” type along the trajectory.

For the management of the virtual targets and of the impacts on the ground, it is necessary to have available in memory, in particular within a database, virtual objects of the terrain, that is to say of the ground and the buildings.

During the use of the device for gunnery training, it is necessary to install the weapon on the real terrain and to register, before the start of the exercise, the synthesis terrain with respect to the real terrain.

Accordingly, a locating tool makes it possible, by means of a pairing of points, to extract the position of the video sensor in the reference frame of the synthesis terrain. Thus, it is possible to lock the real terrain to the synthesis terrain. This pairing consists in associating points of the synthesis terrain with their equivalent in the captured video.

On the basis of a few paired points, the locating tool determines the position of the video sensor in the reference frame of the synthesis terrain, the synthesis terrain being a modeling in three dimensions of the real terrain.
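
By way of a hedged sketch, such a locating tool can be realized with a perspective-n-point solve, assuming OpenCV and a known camera matrix; a handful of pairings (at least four) between 3D points of the synthesis terrain and their 2D equivalents in the captured video suffice.

```python
import cv2
import numpy as np

def locate_sensor(terrain_points_3d, video_points_2d, camera_matrix):
    """terrain_points_3d: Nx3 points in the synthesis-terrain frame;
    video_points_2d: the paired Nx2 pixel positions in the captured video;
    returns the position of the video sensor in the terrain frame."""
    obj = np.asarray(terrain_points_3d, dtype=np.float64)
    img = np.asarray(video_points_2d, dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(obj, img, camera_matrix, None)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)
    # The camera center in the terrain frame is C = -R^T t.
    return (-R.T @ tvec).ravel()
```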

This technology allows the instructor to install the system at any location of a theater of operations in a reasonable time, in particular by means of the three-dimensional modeling of the theater.

According to a particular embodiment, a GPS (“Global Positioning System”) receiver can be associated with the weapon so that the latter can be located on the real terrain.

The GPS receiver then dispatches the various coordinates for positioning the weapon on the real terrain.

Once this information has been received by the locating tool, it is possible to locate the video sensor, in particular in two ways.

According to a first embodiment, virtual points are associated with real points, captured as previously described, of the video stream.

According to a second embodiment, the synthesis terrain is locked, in an angular manner, by means in particular of a handle. The synthesis terrain is then displayed by transparency on the video image so as to allow the locking.

The enhanced-reality processing means is also able to process obscuration. Specifically, as the shot departs, a textured object of circular “billboard” type is displayed on the screen. As the shot recedes, the transparency coefficient is modified so as to pass from an opaque aspect to a transparent aspect. The position of the object of “billboard” type is coupled to the position of the missile so as to obtain a more realistic obscuration effect.
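
A one-line model of this fade, with the distance scale as an assumed parameter:

```python
def obscuration_alpha(shot_distance_m, fade_distance_m=200.0):
    """Transparency coefficient of the obscuration billboard: 1.0 (opaque)
    at the moment of firing, 0.0 (transparent) once the shot has receded
    beyond fade_distance_m (an illustrative value)."""
    return max(0.0, min(1.0, 1.0 - shot_distance_m / fade_distance_m))
```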

Furthermore, the enhanced-reality processing means can display a sighting reticle. The latter can be controlled so as to make it appear or disappear and to modify its position on the sighting screen.

Moreover, the enhanced-reality processing means can allow the addition of a supplementary item of information to aid sighting, for example, by means of a chevron. This item of information can also be controlled so as to make it appear or disappear and to adjust its position on the sighting screen according to the X and Y coordinates.

The gunner processing means 21 is equipped with loudspeakers 28, making it also possible to reproduce the sound effects induced by the shot.

For this purpose, on receipt of the firing information, the gunner processing means 21 activates a sound representing the departure of the shot, in particular by playing an audio file of “.wav” type. Likewise, if the shot hits the ground, an element of the landscape or a virtual target, a sound file is activated so as to represent the noise of the impact.

As illustrated in FIGS. 2 and 3, the hardware architecture can be furnished with a recorder, in particular a DVD recorder with hard disk, so as to record the content of the gunner's view.

Thus, replay of the gunnery exercise is possible by performing a playback of the video film stored by the recorder.

The presence of a hard disk on the recorder allows replay without having to burn the gunnery exercise onto a medium.

Likewise, it is possible to record the positioning and orientation information for the targets, the positioning and orientation information for the missiles and the positioning and sighting axis information for the weapon. Thus, on completion of the gunnery training exercise, the information is available to analyze the exercise, in particular by means of an exercise analysis software module.

The gunnery training system as previously described can be used in various contexts.

Specifically, it is possible to use the gunnery training system indoors. In this case, the system is placed in front of a mockup of a landscape to which virtual targets are added by the enhanced-reality processing means. The mockup must be modeled beforehand.

Furthermore, it is possible to use the gunnery training system on a real terrain. Accordingly, it is necessary to have modeled a synthesis terrain, the latter corresponding to the real terrain.

By way of illustration, the gunner processing means 21 can be an individual computer having the following characteristics:

    • a 3 GHz Pentium IV processor, having in particular the Windows XP Pro operating system, service pack 2,
    • 1 GB of random access memory (RAM) and an 80 GB hard disk,
    • a motherboard with PCI-X and PCI-Express buses,
    • a DeckLink HD acquisition card,
    • an Nvidia GeForce6 (6800 GT) or ATI Radeon X800XT graphics card.

The video capture means 5 are for example a high-definition camera of Sony HDR-FX1 type. The converter 23 from the HD-YUV component output to the HD-SDI standard is in particular an AJA HD10A converter. The UXGA video distributor 30 is for example the Komelec MSV1004 distributor. The video switch 36 is in particular a Comprehensive Video HRS-2x1-SVA/2x1 VGA/UXGA High Resolution Vertical Switcher switch. The converter 33 is for example the Folsom ViewMax converter. Finally, the recorder 32 is a Philips HDRW 720 DVD recorder having an 80 GB hard disk.

Claims

1. Gunnery training device based on a weapon, characterized in that the device comprises:

video capture means adapted to capture an angular field sighted by the weapon,
means for measuring angles adapted to determine at least one angle representative of the position of the weapon,
processing means adapted to insert, in real time, at least one synthesis image into the captured field, according to the position information received from said means for measuring angles, and
means for viewing the captured field containing said at least one inserted synthesis image.

2. Training device according to claim 1, characterized in that the device comprises real-time control means, adapted to control the processing means for the insertion of synthesis images.

3. Training device according to claim 1, characterized in that the weapon is a real weapon.

4. Training device according to claim 1, characterized in that the weapon comprising sighting means, the training device comprises optical adaptation means adapted to allow the viewing of an image in the sighting means on the basis of the means for viewing the captured field of view containing the inserted synthesis images.

5. Training device according to claim 1, characterized in that the position information received from said means for measuring angles are the yaw and the pitch.

6. Training device according to claim 1, characterized in that the device furthermore comprises drive means adapted to drive, in real time, the processing means able to insert synthesis images.

7. Training device according to claim 1, characterized in that the device comprises location means adapted to determine the position of the video capture means in the reference frame of a synthesis terrain in three dimensions, the synthesis terrain being a modeling corresponding to at least one element of the captured angular field.

8. Training device according to claim 7, characterized in that the location means comprise means for the real-time pairing of points of the synthesis terrain with corresponding points of the captured angular field.

9. Training device according to claim 7, characterized in that the device comprises means for locking the synthesis terrain on the captured angular field.

10. Training device according to claim 1, characterized in that the device comprises a location receiver adapted to locate the device.

11. Training device according to claim 1, characterized in that the device comprises image analysis means adapted to increase the precision of the measurements carried out by the means for measuring angles.

Patent History
Publication number: 20090305198
Type: Application
Filed: Aug 9, 2006
Publication Date: Dec 10, 2009
Applicant: GDI SIMULATION (Paris)
Inventors: Valentin Lefevre (Puteaux), Laurent Chabin (Asnieres Sur Seine), Emmanuel Marin (Paris)
Application Number: 12/063,519
Classifications
Current U.S. Class: Cinematographic Or Cathode Ray Screen Display (434/20)
International Classification: F41A 33/00 (20060101); F41G 3/26 (20060101);