Target Practice Evaluation Unit

A method for evaluating hits on a target is disclosed comprising capturing frames of the target by a camera, detecting a target in a captured frame, classifying the target in the captured frame as a target type, determining a depth of the target from a user, identifying a hit on the target, by a processing device, and scoring the hit. Detecting the target, classifying the target, and/or identifying a hit on the target may be performed by a respective machine learning model. A portable and self-contained target evaluation unit is also disclosed.

Description
RELATED APPLICATION

The present application claims the benefit of U.S. Provisional Patent Application No. 63/335,145, which was filed on Apr. 26, 2022, and is incorporated by reference herein.

FIELD OF THE INVENTION

The present disclosure relates to the automatic scoring of targets in a shooting range, for example, and, more particularly, to the automatic scoring of targets in a shooting range and the like by a portable target practice evaluation unit that identifies targets in a firing lane, determines a type of target, identifies hits on the target, and scores the hits, without user intervention.

BACKGROUND

Firearm proficiency is a critical skill for the military, law enforcement, and people who endeavor to hunt for their livelihood or sustenance, for example. It is also a sport. Ample practice is required to achieve a high degree of competency. Because of the destructive and dangerous nature of firearms, as well as the potential for hearing loss, practice with live ammo on typical indoor rifle ranges is performed in compartmentalized lanes, generally alone, and always with sound-deadening hearing protection. As a result, firearm practice has become more solitary and less engaging. The opportunities to improve proficiency are also less frequent.

Known target evaluation systems rely on additional equipment in contact with the firearm, special targets with sensors, and markings, for example. US Patent Application Publication No. 2020/0398168 requires the use of external cameras, and target hits are determined by a server.

SUMMARY OF THE INVENTION

A portable, self-contained target evaluation unit would facilitate the target evaluation process. In one embodiment of the invention, a portable target evaluation unit is disclosed that analyzes images of a target taken by a camera in the portable unit to: 1) determine the paper target type, 2) determine the distance or depth to the target, 3) identify hits on the target and their location, and 4) score the session. In one embodiment, machine learning is used to analyze the target to make at least some of the determinations and identifications. In one example, no modifications to the firearm, the target, or the facility are required. Conventional, off-the-shelf paper targets may be used. Scoring may be performed in real time or near real time. In one example, scoring may be performed in less than one (1) second.

In one example of an embodiment of the invention, camera images of a target are analyzed to determine a type of the target. Target types include a bullseye target; a silhouette target including one or more dangerous characters, such as one or more criminals; and/or a hostage target including at least one real or imagined dangerous character, such as a criminal, terrorist, or a zombie, for example, and a hostage. A distance to the target is also determined. The optimal point of impact and the actual point of impact are determined. An accuracy score is determined based on the actual points of impact versus the optimal point of impact. These determinations may be performed by a single, portable device that includes the camera and one or more processing devices, for example. The portable device may be owned or rented by the user, for example. In this example, no specialized equipment need be installed by the gun range and direct contact with the firearm is not required. In another example, a shooting range may be configured to perform the determinations for respective users.

In one embodiment of the invention, the target practice evaluation unit 12 may be configured to identify a type of target and score hits on the target based on the identified target type.

In accordance with an embodiment of the invention, a method for evaluating hits on a target is disclosed, comprising capturing frames of the target by a camera; detecting a target in a captured frame, by a processing device; classifying the target in the captured frame as a target type, by a processing device; determining a depth of the target from a user; identifying a hit on the target, by a processing device; and scoring the hit, by a processing device. Detecting the target, classifying the target, and/or identifying a hit on the target may be performed by a respective machine learning model.

In accordance with another embodiment of the invention, a system for evaluating bullet hits on a target is disclosed, comprising a camera to capture frames of a target; at least one processing device; and storage; wherein the at least one processing device is configured to: detect a target in a captured frame; classify the target in the captured frame as a target type; determine a depth of the target from a user; identify a hit on the target, by running a machine learning model; and score the hit. The system may further comprise a casing having an opening, wherein the camera has a lens proximate the opening to capture frames down range of the casing, and the camera, the at least one processing device, and the storage are contained within a portable casing. The system may be portable and self-contained. In one example, the casing has a second opening different from the first opening and contains a second camera different from the first camera, the second opening and the second camera being configured to image at least a user's shooting hand during use.

In accordance with another embodiment of the invention, a method for evaluating bullet hits on a target comprises classifying the target in a captured frame by running a first machine learning model and identifying a hit on the target by running a second machine learning model different from the first machine learning model; and scoring the hit.

Embodiments of the invention are also applicable to other types of target practice, such as archery, darts, paintball, airsoft, and virtual reality games, for example.

BRIEF DESCRIPTION OF THE FIGURES

FIG. 1A and FIG. 1B are examples of a shooting range in which a target practice evaluation unit may be used, in accordance with an embodiment of the invention;

FIG. 2A and FIG. 2B are a front side face and forward side face, respectively, of a target practice evaluation unit corresponding to the target practice evaluation unit of FIG. 1A, in accordance with an embodiment of the invention;

FIG. 3 is a block diagram of an example of internal components of the target evaluation unit of FIG. 1A, FIG. 1B, FIG. 2A, and FIG. 2B, in accordance with an embodiment of the invention;

FIG. 4A, FIG. 4B, and FIG. 4C are examples of a bullseye type target, a silhouette type target, and a hostage type target, respectively, which may be used in embodiments of the invention;

FIG. 5 is a flow diagram of an overview of an example of a process for evaluating targets in accordance with an embodiment of the invention;

FIG. 6 is a flowchart of an example of a session start procedure, which includes a portion of FIG. 5, in accordance with an embodiment of the invention;

FIG. 7 is an example of a machine learning model that may be used in this and other steps of embodiments of the present invention;

FIG. 8A and FIG. 8B are flowcharts of an example of a target classification procedure, in accordance with an embodiment of the invention;

FIG. 8C is an example of a machine learning model that may be used in an embodiment of the invention;

FIG. 9A, FIG. 9B, and FIG. 9C are examples of a bullseye type target, a silhouette type target, and a hostage type target, respectively, including respective masks;

FIG. 10A, FIG. 10B, FIG. 10C, FIG. 10D, FIG. 10E, and FIG. 10F are examples of a hostage type target after analysis by the machine learning models of FIG. 8A for an unknown target;

FIG. 11A is a flowchart 1100 of an example of a process for preprocessing frames in the hit detection process;

FIG. 11B is an example of a hit detection machine learning model; and

FIG. 12A, FIG. 12B, FIG. 13, FIG. 14A, and FIG. 14B are examples of scoring of hits on different types of targets.

DESCRIPTION OF PREFERRED EMBODIMENTS

In accordance with one embodiment of the invention, a target practice evaluation unit for use in a shooting range is portable and includes a case that contains a camera for imaging targets. The target practice evaluation unit may be placed by a user on a table or shelf in a firing lane and positioned so that the camera faces down the firing lane, where a target is or will be located, to image the target. In one example, a processing device in the target practice evaluation unit analyzes the images to identify a target, classifies the type of target, determines a distance to the target, detects hits on the target, and scores the hits. Target identification, target classification, determination of the distance to the target, and/or hit detection may be performed by the processing device using artificial intelligence, such as machine learning. A second camera may be provided in the case of the target practice evaluation unit to image the shooter's shooting hand or upper body including the hands while shooting. The images of the shooter's hand may be analyzed by machine learning or by an expert to identify problems in form that the shooter can correct. A report may be provided to the user including the user's score and an evaluation of form, for example.

FIG. 1A is an example of a shooting range 10 in which a target practice evaluation unit 12 may be used, in accordance with an embodiment of the invention. In this example, the target practice evaluation unit 12 is portable, but that is not required, as discussed further below. The target practice evaluation unit 12 shown in FIG. 1A includes a front face 12a, which includes a touch screen display in this example, and a forward side 12b. A rear side 12c of the target practice evaluation unit 12 is shown in FIG. 1B. In this example, the casing of the target practice evaluation unit 12 further includes a bottom surface 12d supported by the shelf or ledge 28.

The gun range 10 in this example includes a plurality of firing lanes 14a, 14b, 14c . . . , three of which are shown at least in part. The firing lane 14b, which is best shown in FIG. 1A, is representative of the other firing lanes and will be discussed herein. FIG. 1B is a side view of the firing lane 14b in FIG. 1A. The firing lane 14b in FIG. 1A and FIG. 1B includes a firing point 16, which is better shown in the side view of the firing lane 14b of FIG. 1B. Dividers 18a, 18b are provided to separate a shooter 20 from shooters in adjacent firing lanes 14a, 14c. A target 22, which in this example is a bullseye target, is shown at the rear of the firing lane 14b. Other types of targets 22 may also be used, as discussed herein. The target 22 is typically supported by a carrier 24, which is supported by a ceiling 26 of the firing lane 14b, as shown in FIG. 1B. The shooter 20 mounts a target 22 to the carrier 24 by a clip, for example, and the carrier moves the target a desired distance down the firing lane 14b. The carrier 24 also enables the shooter 20 to retrieve the target 22. The target practice evaluation unit 12 in this example is supported by a table or shelf. In FIG. 1A, a shelf 28 is shown, supported by the dividers 18a, 18b. In FIG. 1B, a table 28 is shown. A carrier control (not shown), to determine the distance the target 22 is positioned down the firing lane, may be provided on a wall or on the table or shelf 28. Ammunition may also be placed on the shelf, for example.

FIG. 1B further shows the carrier 24 suspended from the ceiling 26. A clip 30 depends from the carrier 24 for attachment of the target 22 by the shooter 20 or another party. The carrier 24 may include a cable and pulley system (not shown), for example. As mentioned above, the carrier 24 moves the target 22 down the firing lane 14b to a desired position for shooting and moves the target to the shooter for retrieval of the target by the shooter.

FIG. 1B also shows a bullet trap 32 to capture bullets so they do not ricochet back to the shooter in the firing lane 14b and other shooters in other lanes. The bullet trap may be a rubber berm, for example, as is known in the art.

Other components of a shooting range that are known in the art are not shown in FIG. 1A and FIG. 1B to simplify the Figure.

FIG. 2A and FIG. 2B are a front side face 202 and a forward side face 204, respectively, of a target practice evaluation unit 206 corresponding to the target practice evaluation unit 12 of FIG. 1A, in accordance with an embodiment of the invention. The front face 202, which corresponds to the front face 12a shown in FIG. 1A, includes a display 208, here a touchscreen display, and an opening 210 through the casing, in which, through which, or behind which a lens (not shown) of an optional camera resides. A non-touchscreen display and a keyboard for data entry may be provided instead. The optional camera is for imaging the hand or upper body of the user. An on/off button 212, which may be a toggle switch, for example, is also provided in the front face 202. The on/off button may be provided in other locations on the target practice evaluation unit 206, instead. The forward side face 204 shown in FIG. 2B includes an opening 214 in which, through which, or behind which a lens of a camera (not shown) resides. The camera behind the forward side face 204 is for imaging the target 22 down the firing lane 14b, for example, as shown in FIG. 1A and FIG. 1B.

The casing of the target practice evaluation unit 206 may be plastic, such as an injection-molded plastic, for example. The plastic may be acrylonitrile butadiene styrene (“ABS”), for example. It is noted that the target evaluation unit 206 may have other shapes and the components of the target evaluation unit 12 may be in different locations.

FIG. 3 is a block diagram 300 of an example of internal components of the target evaluation unit 12 of FIG. 1A and FIG. 1B and the target evaluation unit 206 of FIG. 2A and FIG. 2B, in accordance with an embodiment of the invention. Operation of the target evaluation unit 12 is controlled by a processing device 302, under the control of software stored on a non-volatile memory 304. The non-volatile memory 304 may also include a table correlating the dimensions of a target in pixels and depth, as discussed further below. Volatile memory 306, such as Random Access Memory, is also provided for use by the processing device 302. The processing device 302 may be a central processing unit (“CPU”), such as a 32 bit or 64 bit CPU with an operating speed of one (1) or more gigahertz, for example. A commercially available CPU, such as a 64-bit ARM A52, which is available from Broadcom, San Jose, California, may be used, for example. The non-volatile memory may be a commercially available multi-gigabyte non-volatile memory, such as an SDXC UHS-I memory card, available from Western Digital, San Jose, CA, for example.

The processing device 302 provides input to a display 308 via another processing device, here the video controller 310. It is noted that the video controller may also execute machine learning models, or a co-processor (not shown) may be provided to execute machine learning models. As discussed above, the display 308 may be a touch screen. When the display 308 is a touchscreen display, the display may provide input from the touchscreen to the processing device 302 via an input/output (“I/O”) controller 312, for example. The display may be a 480×800 pixel, touchscreen display measuring 3½ inches×6 inches, or about 7 inches on the diagonal, in portrait mode, for example. The video controller 310 may be an ARM MALI graphics processing unit, available from Arm, Ltd., San Jose, CA, for example. The I/O controller 312 may be an embedded general purpose I/O or subordinate 32 bit processor, such as an ESP 32, available from Espressif Systems, Shanghai, China, for example.

In one example, the video controller 310 may also perform the machine learning functions, on its own or in conjunction with the processing device 302, to detect a target, classify a target, determine a depth of a target, identify hits on the target, score hits, and/or analyze a user's movement while shooting, as is described in more detail below.

If the display 308 is not a touchscreen, then a keyboard may be provided (not shown). The keyboard, if provided, may also provide input to the processing device 302 via the I/O controller 312.

A camera 314 also provides input to the processing device 302 via the I/O controller 312. A second, optional camera 316 also provides input to the processing device 302 via the I/O controller 312. The camera may be a complementary metal oxide semiconductor (“CMOS”) camera, for example. As is known in the art, a CMOS camera is mounted on a printed circuit board along with a processing device, lens, and power supply, such as a battery, for example. The battery may be one or more commercially available lithium polymer (“LiPo”) batteries, such as LiPo 18650 battery cells, for example. Alternatively, or in addition, power may be provided to the system board by an external power supply, via a USB port (not shown) through the casing, such as through a rear wall of the casing, for example.

The camera 314, which acquires images of the target 22 in FIG. 1A and FIG. 1B, for example, may have a prime focal length comparable to that of the human eye when looking at a rear wall of the range, such as from 35 mm to 85 mm, for example, and a resolution of at least about 1024×2160, for example. The camera 314 may have a prime focal length of 55 mm, for example, and a camera speed of 20 frames per second, for example. The camera 314 may be a B0371 16 megapixel autofocus camera from ArduCAM Technology Company, Ltd., Kowloon, Hong Kong, for example, which has a resolution of 3840×2160.

The camera 316, if provided to image a user's hand(s), may have a wide-angle lens, such as a focal length of 24 mm, for example, for imaging at a distance of from about 1 foot to about 3 feet from the user's hands while shooting, for example.

The camera 316 may also be a B0371 16 megapixel autofocus camera available from ArduCAM Technology Company, Ltd., Kowloon, Hong Kong, for example.

The target evaluation unit 300 may also include a fan (not shown), if needed, based on the processing device 302.

The portable target evaluation unit of FIGS. 1-3 may weigh from about 1 pound to about 10 pounds, for example. The target evaluation unit may weigh 2 pounds, for example.

FIG. 4A, FIG. 4B, and FIG. 4C are examples of a bullseye type target 400a, a silhouette type target 400b, and a hostage type target 400c, respectively, which may be used in embodiments of the invention. Other types of targets may also be used. The bullseye target 400a in FIG. 4A includes a circular center 402, numbered 10, and concentric rings, numbered 9, 8, and 7, around the circular center. While in this example three (3) concentric rings 9, 8, 7 around the circular center 402 are shown, more or fewer concentric rings may be provided. Different numbers may be provided to identify the circular center 402 and the concentric rings 9, 8, 7. In this example, the circular center 402 and the first concentric ring 404 are colored grey or another color, to highlight the highest scoring portions of the bullseye target 400a.

FIG. 4B is an example of a silhouette or character type target 400b in which the target has the shape of a character, here a criminal 410 carrying a gun 412. It is noted that the criminal 410 has a face 414 with a menacing facial expression.

FIG. 4C is an example of a hostage type target 400c, where a victim 420, here a cheerleader, is being threatened by one or more evil characters, such as the zombies 422, 424 or other monsters in FIG. 4C. The victim may instead be threatened by one or more criminals, such as the criminal 410 in FIG. 4B, or one or more terrorists, for example. In this example, the zombies 422, 424 are valid targets while the cheerleader is not.

Other types of targets may also be used in embodiments of the invention.

FIG. 5 is a flow diagram of an overview of an example of a process 500 for evaluating targets in accordance with an embodiment of the invention. A session starts in Step 502, when a user 20 enters a firing lane, such as the firing lane 14b in FIG. 1A and FIG. 1B. After positioning the target evaluation unit 12 of FIG. 1A, for example, on the table or shelf 28 of the firing lane 14b, as is also shown in FIG. 1A and FIG. 1B, the target evaluation unit is turned on, via the on/off button 212 in FIG. 2A, for example, turning on the camera 314. The camera 314 starts to capture frames of the region downstream of the target evaluation unit 12 when the target evaluation unit 12 is turned on, in Step 504. (See FIG. 1A and FIG. 1B.) Each captured frame is received by the processing device 302 and forwarded to the video controller 310 of FIG. 3, to identify a target in the region downstream of the target evaluation unit 12, in Step 506. The target 22 may be identified by the video controller 310 by executing a machine learning model, such as by performing an object identification machine learning algorithm, for example, as discussed further below. If a co-processor (not shown) is provided, the machine learning model may be executed by the co-processor. After the target 22 is identified in the firing lane in Step 506, the target is classified by type of target, in Step 508. As discussed above, in one example, the target may be a simple bullseye, a silhouette target, and/or a hostage target, for example, as discussed above with respect to FIG. 4A, FIG. 4B, and FIG. 4C. There may be other types or different types of targets, as well. The target may be classified by machine learning, for example, as is discussed in more detail, below. If no target is detected in a frame, the flowchart 500 returns to Step 504 to capture another frame, detect a target, in Step 506, and if a target is detected, classify the target, in Step 508.

Frames continue to be captured, in Step 510. When a target is classified, in Step 508, the depth or distance from the target evaluation unit to the target is then determined, in Step 512. The depth (z) may be determined by the processing device 302 and/or the video controller 310 based on the field of view of the camera 314, the known dimensions of the target (x,y), and the detected dimensions (x, y) of the target in pixels. The depth corresponding to the detected number of pixels may be stored in a table, for example. The table may be stored in the non-volatile memory 304 or in another memory or storage device, for example. Alternatively, the depth can be calculated based on the perceived size of a known target, based on the focal length of the camera, and the camera resolution, as is known in the art.
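The depth determination described above can be illustrated with a minimal sketch. The table values, the target dimensions, and the focal length expressed in pixels below are hypothetical placeholders, not values disclosed for the device; the pinhole-camera relation is the standard one referred to as "known in the art."

```python
# Illustrative sketch of the depth determination described above (not the
# device firmware). Assumes a known physical target height, the camera's
# focal length expressed in pixels, and the target's detected height in
# pixels from the target-detection bounding box.

DEPTH_TABLE = {  # hypothetical lookup: detected height in pixels -> depth in feet
    1800: 10.0,
    900: 20.0,
    600: 30.0,
}

def depth_from_table(detected_height_px, table=DEPTH_TABLE):
    """Return the tabulated depth for the nearest stored pixel height."""
    nearest = min(table, key=lambda h: abs(h - detected_height_px))
    return table[nearest]

def depth_from_pinhole(target_height_in, detected_height_px, focal_length_px):
    """Pinhole-camera estimate: depth = real height * focal length / pixel height."""
    return target_height_in * focal_length_px / detected_height_px

# Example: a 36 inch tall target imaged 600 pixels tall with a 3000 pixel
# focal length is roughly 180 inches (15 feet) down range.
print(depth_from_pinhole(36.0, 600.0, 3000.0))
```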

It is then determined whether the depth (z) of the target 22 is zero (0), in Step 514. This may be determined by the processing device 302 and/or the video controller 310 based on whether the detected number of pixels is the same as the actual dimensions of the target in pixels. If the depth (z) is zero (0) (Yes in Step 514), then the target has either been returned to the firing line or has not been advanced to a location downstream of the firing line. In either case, the session is then stopped in Step 516.

If it is determined that the depth is not zero (0) (No in Step 514), then the target is downstream of the firing line and can be shot at by a user. Hit detection is then performed by the processing device 302 or the video controller 310, in Step 518. Hit detection may be performed by comparing a captured frame to a prior captured frame and determining whether there is a change in the two frames indicative of a hit on the target. Hit detection may be performed by the processing device 302 or the video controller 310 by a machine learning algorithm, for example, as discussed further below. If a hit has been detected in Step 520, the hit is scored, in Step 522. The hit may be scored by the processing device 302 and/or the video controller 310 based on the depth of the target 22 and the proximity of the hit to a center of the target or center of mass of the target, for example, as discussed further below. After scoring the hit, the position of the user 20 may be analyzed using frames captured by the second camera 316, if provided, in Step 524. Step 524 is shown in dashed lines because Step 524 is optional. Position analysis is also discussed in more detail, below. After scoring a hit in Step 522 or analyzing the position of the user in Step 524, if performed, the process returns to Step 512 to continue to capture frames and repeat Steps 512 through 522.

If a hit is not detected (No in Step 520), then the process returns to Step 512 to continue to capture frames and repeat Steps 512 through 522.

FIG. 6 is a flowchart 600 of an example of a session start procedure, which includes Steps 502-508 of FIG. 5, in accordance with an embodiment of the invention. The activities of the user 20 are shown in the block 602, the actions of the user interface, such as the touchscreen display 308, and system orchestration, under the control of the processing device 302 of FIG. 3, are shown in the block 604, and actions by the machine learning manager are shown in the block 606. The machine learning manager may be software that controls operation of the processing device 302, for example. It is noted that the machine learning models are run by the video controller 310 when called by the processing device 302. The machine learning manager 606 determines which machine learning algorithm to implement at respective Steps in the process and runs the machine learning algorithm at the appropriate time.

A session starts in Step 608. A session is a self-contained period of time or activities related to the shooting of a respective target 22. The target evaluation unit 12 is positioned on the shelf or table 28 in the firing lane by the user, in Step 610, for example, as discussed above. The target evaluation unit 12 could also be placed on the floor, if the user 602 is shooting in a prone position. A target is selected and set up by clipping it to the carrier 24 in FIG. 1A and FIG. 1B, in Step 612. The user 602 then turns on the target evaluation unit 12 via the on/off button 212 in FIG. 2A, for example, in Step 614, turning on the camera 314 so that it starts to capture images (or frames) of the target.

In one example, the display 308 of the target evaluation unit 300 may display a select mode option during the start-up procedure, in Step 616, for selection by the user. Step 616 is shown in dashed lines because it is an optional Step. Available modes may include a standard target practice mode, a government bureau agent qualification or certification mode, and/or a game mode, for example, as discussed herein. A game mode may enable one user or multiple users to play tic-tac-toe, for example, on targets shaped like a tic-tac-toe board, for example. The target 22 and/or scoring in Step 522 of FIG. 5 may be affected by the particular mode that is selected. The user interface and system orchestration then creates a session, in Step 618. The second camera 316 (FIG. 3), if provided, may be turned on when the session is created in Step 618, for example. If the target evaluation unit 300 does not provide for different modes, then Step 616 need not be performed and the flowchart 600 proceeds from Step 614 to Step 618, to create a session. Creation of a session involves the performance of operations and collection of information needed to perform hit detection and scoring. For example, in order to perform hit detection and scoring, a target needs to be detected in Step 506 and classified in Step 508 of FIG. 5.

After a session is created, in Step 618 of FIG. 6, one (1) or more targets in the field of view of the camera 314 are identified in a captured frame, in Step 620. In this example, Step 620 is performed by the machine learning manager 606. The processing device 302 and/or the video controller 310 may perform object detection to identify the target, under the control of the machine learning manager 606.

Object detection to identify a target in the firing lane 14b may be performed by a machine learning model. FIG. 7 is an example of a machine learning model 700 that may be used in this and other steps of embodiments of the present invention. The machine learning model 700 includes an input layer 702, hidden layers 704, and an output layer 706. The machine learning model 700 in this example is trained to identify targets of a typical size of 2 feet by 3 feet. The input layer 702 formats the captured frame, including putting the frame in the form of a linear string, for analysis by the neural network 704. The neural network 704 may be a deep-learning multi-layer convolutional neural network (“CNN”), for example, as is known in the art. The neural network 704 in this example outputs an inference of whether a target is present or not, and if present, the confidence of the inference. The output layer 706 determines whether the confidence of the inference that a target is present meets a threshold developed based on the training set. The output layer 706 may use a sigmoid function, for example. The machine learning model 700 may be trained by an unlabeled training set of rectangular white poster boards of varying sizes against a dark or black background, for example. The training set may include thousands of images and the machine learning model may include six (6) or more layers, for example. Additional details of the machine learning models that may be used are described below with respect to FIG. 8C, which is a similar machine learning model.
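A minimal PyTorch sketch of the general shape of the machine learning model 700 (an input stage, hidden convolutional layers, and a sigmoid output giving the confidence that a target is present) follows. The layer counts, channel sizes, and 224×224 grayscale input are illustrative assumptions, not the trained production model.

```python
import torch
import torch.nn as nn

class TargetPresenceModel(nn.Module):
    """Sketch of model 700: hidden CNN layers plus a sigmoid output layer."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(          # hidden layers 704 (assumed sizes)
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(               # output layer 706
            nn.Flatten(),
            nn.Linear(16 * 56 * 56, 1),
            nn.Sigmoid(),                        # confidence that a target is present
        )

    def forward(self, frame):                    # frame: (batch, 1, 224, 224) grayscale
        return self.head(self.features(frame))

model = TargetPresenceModel()
confidence = model(torch.rand(1, 1, 224, 224))   # untrained example inference
print(float(confidence))                         # compared against a learned threshold
```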

Returning to FIG. 6, after a target is detected in a frame, in Step 620, it is determined whether there is more than one target 22 in a captured frame, in Step 624. There may be more than one target 22 in a frame if the camera 314 has a wide enough field of view to capture the target 22 of an adjacent firing lane, for example. If there is more than one target 22 (Yes in Step 624), then the user 602 is prompted to select the target 22 in the user's lane, in Step 626, via the user interface 604 presented on the touch screen display 308, for example. The user 602 may select the target 22 in the firing lane of the user 602 from the multiple targets displayed on the touch screen display 308, for example, in Step 628. If the display is not a touch screen, then the user can select the target in the user's lane via a mouse or other input device, for example.

The flowchart 600 then proceeds to perform target classification to determine the type of target 22 detected in the frame, in Step 630. In this example, target classification is performed by machine learning, under the control of the machine learning manager 606. If there is only one (1) target in the frames (No in Step 624), the process proceeds directly from Step 624 to target classification, in Step 630.

FIG. 8A is a flowchart 800 of an example of a target classification procedure, in accordance with an embodiment of the invention. After being called by Step 630 of FIG. 6, target classification starts in Step 810. A frame including an object identified as a target is received from Step 622, or a frame including a target identified by the user among multiple targets is received from Step 626, in Step 810.

The frame is sent to a first machine learning model (“ML 1”), in Step 815. The ML 1 predicts whether the target identified in the frame matches a known target and the confidence of the prediction, in Step 820. The predicted matching target may be identified by a target ID. The ML 1 may have the form of the machine learning model 700 of FIG. 7. The input layer 702 in this example formats the data defining the frame, including putting the data in the form of a linear string. The neural network 704 in this example may be a deep learning algorithm, such as a convolutional neural network (“CNN”), for example. The CNN may be trained by a training set of a large number of commercially available targets. The training set may be from about 50,000 to about 100,000 images of commercially available targets, for example. Other deep learning algorithms that may be used include Long Short Term Memory Networks (LSTMs), Generative Adversarial Networks (GANs), Radial Basis Function Networks (RBFNs), Multilayer Perceptrons (MLPs), Self Organizing Maps (SOMs), Deep Belief Networks (DBNs), Restricted Boltzmann Machines (RBMs), and autoencoders, for example.

The output layer 706 in this example may include a softmax function to classify the target 22 as a specifically identified, known target referenced by a unique ID and confidence score. This unique ID will be used to retrieve additional details such as whether it is a bullseye type, silhouette type, or hostage type with a determined confidence score, for example.

The classification and confidence score are output by the output layer 706, and the confidence score is evaluated in Step 825. The classification and confidence score may be evaluated by the processing device 302, the video controller 310, or a co-processor (not shown), for example. If it is determined that the confidence score associated with the predicted target is greater than a predetermined confidence score for an acceptable target inference, such as 0.8, for example, in Step 825, then target area data is retrieved for the predicted target type, in Step 830. Target area data for known targets may be stored in a database or table in the non-volatile memory 304 or in other storage, for example. The threshold may be set when the system is programmed. The threshold may be a value in the range of from 0.7 to 0.8, for example. Higher or lower thresholds may be used. The target area data is added to the session and stored in the RAM 306 in FIG. 3, in Step 835, and the process returns to Step 510, in FIG. 5.
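A short sketch of the confidence check of Step 825 and the lookup of Step 830 is given below. The 0.8 threshold follows the example value in the text; the target IDs and database contents are hypothetical placeholders.

```python
CONFIDENCE_THRESHOLD = 0.8

TARGET_DATABASE = {  # hypothetical target-ID keyed entries
    "TGT-001": {"type": "bullseye", "center": (512, 512), "radius_px": 300},
    "TGT-002": {"type": "silhouette", "valid_mask_id": "mask_criminal_410"},
}

def classify_with_threshold(target_id, confidence, session):
    """Accept the ML 1 prediction only if its confidence clears the threshold."""
    if confidence > CONFIDENCE_THRESHOLD and target_id in TARGET_DATABASE:
        session["target_area_data"] = TARGET_DATABASE[target_id]   # Steps 830/835
        return True
    return False        # otherwise fall through to ML Model 2 (Step 840)

session = {}
print(classify_with_threshold("TGT-001", 0.92, session), session)
```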

Target area data may comprise the x, y coordinates of each point in an array of points defining valid target areas and invalid target areas. For example, in the bullseye target 400a of FIG. 4A, the valid area may be the area within a circle with a known radius, such as the outer radius of the band 408, for example. The valid area is centered about the bullseye 402. The area outside the radius is invalid. FIG. 9A is an example of a bullseye 900 corresponding to the bullseye type target 400a of FIG. 4A. Since the bullseye target 400a is a known target, a mask 902 defining the valid area 904 of the bullseye type target is retrieved from the database or other storage and overlaid over the bullseye type target. The valid area 904 is indicated by dots in FIG. 9A. The mask also defines a center of mass 906 of the valid area 904, which is the optimum target within the valid area 904. In the case of a bullseye target, the center of mass is equidistant from the periphery of the valid area. In the case of an irregularly shaped target, such as a silhouette type target and a hostage type target, the center of mass is the average of the x coordinate values and the average of the y coordinate values of the mask. The invalid area 908 is outside the valid area 904. A bullet hit in the invalid area 908 receives no points or may be penalized, for example. Scoring a hit in the valid area 904 is discussed below. The mask 902 may be overlaid on the target 900 during scoring or may comprise coordinates that are compared to the coordinates of the target.
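The target area data just described can be sketched as follows: a mask stored as a list of (x, y) points defining the valid area, a center of mass computed as the average of the coordinates, and a simple membership test for a hit. The rectangular toy mask is an illustrative assumption.

```python
def center_of_mass(mask_points):
    """Average of the x coordinates and of the y coordinates of the mask."""
    xs = [x for x, _ in mask_points]
    ys = [y for _, y in mask_points]
    return sum(xs) / len(xs), sum(ys) / len(ys)

def hit_in_valid_area(hit, mask_points):
    """A hit outside the mask falls in the invalid area and receives no points."""
    return hit in set(mask_points)

valid_mask = [(x, y) for x in range(400, 625) for y in range(400, 625)]  # toy square mask
cx, cy = center_of_mass(valid_mask)
print((cx, cy), hit_in_valid_area((500, 510), valid_mask))
```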

In a known silhouette target 400b of FIG. 4B, the valid target area may be points within a contour of the criminal 410 or other such character. FIG. 9B is an example of a known silhouette target 910 corresponding to the target 410 of FIG. 4B, with the mask 912 defining a valid area 914 indicated in dots. A center of mass 916 of the valid area is indicated. The area 918 outside the valid area 914 is invalid, and a bullet hit in the invalid area receives no points (or may be penalized).

FIG. 9C is an example of a hostage type target 920 that corresponds to the hostage type target 400c of FIG. 4C. Since in this example the hostage type target 920 is a known target, two masks 926, 928 defining the two valid target areas within the contours of the two zombies 922, 924, and another mask defining an invalid area within the contour of the cheerleader 930, are retrieved. The target area data may be retrieved from a target database, for example, that correlates target IDs with target area data. The target database may be part of the non-volatile memory 304 or may be part of other storage, for example.

A user will receive positive scores for hits in the areas covered by the valid masks 926, 928 and no score or a penalty score for hits in the area covered by the invalid mask. A penalty score would reduce the total score in a session. A miss of all masks results in no score. In certain modes, such as a professional certification for law enforcement, for example, a hit in an invalid area, such as a hostage, results in a failure of the test, ending the session.

Returning to Step 830 of FIG. 8A, after the target area data is retrieved, the data is added to the session for use in scoring hits on the target, as described further below. The process 800 then goes to Step 510 of FIG. 5, in Step 837, to continue the process.

Returning to Step 825 of FIG. 8A, if the ML Model 1 does not classify the target in the frame with a confidence score greater than 0.80 in this example, then the frame containing the target is analyzed by a second ML model (“ML Model 2”), in Step 840, to determine the target classification when the type of target is not a known target. In accordance with this embodiment of the invention, the ML Model 2 includes multiple machine learning models: 1) an ML Model 845 for object segment detection to identify segments of the target; 2) an ML Model 850 for object detection to detect circles in the target; 3) an ML Model 855 for object identification to identify faces; and 4) an ML Model 860 for sentiment detection to infer the sentiment expressed by the faces. Each of the ML Models 845-860 may generally be in the form of the machine learning model 700 of FIG. 7. A Final Target Classification Determination Algorithm 865 is provided to resolve the results of the multiple machine learning algorithms to classify the target in the frame. Examples of certain of the ML Models are described in more detail, below.

FIG. 8B is a flowchart 870 of an example of the operation of the ML Model 2 of FIG. 8A. The hidden layers 704 in the ML model for Object SEGMENT may be a convolutional neural network (“CNN”), for example, trained to infer the presence of one or more segments or blobs in the identified target, in Step 874. In this example, the presence of segments is inferred by the ML Model for Object SEGMENT detection 845 based on the identification of dense pixel areas, in Step 874. For example, the ML Model for Object SEGMENT detection 845 may detect dense pixel areas that may be indicative of a contour of a bullseye in a bullseye type target, the contour of a criminal in a silhouette type target, and the contours of multiple parties in a hostage type target, for example. The contrast between portions of the target may also indicate the presence of different segments in the target, for example. The ML Model 845 may be trained by a training set including thousands to tens of thousands of images, for example. Some of the images in the training set may be generated synthetically, as is known in the art. An output of the ML Model 845 may be provided in the form of counts for how many segments are identified, in Step 876. FIG. 10A is the hostage type target discussed above, which in this case is an unknown target, after being analyzed by the ML Model for Object SEGMENT detection 845.

The presence of one or more circles in the target is inferred, in Step 878. In this example, the presence of circles is inferred by the ML Model for Object Circle detection 850. The hidden layers 704 (FIG. 7) in the ML model for Object Circle detection 850 may be a CNN, for example, trained to infer the presence of one or more circles. The training set may include thousands of images, including synthetically generated images, for example. The ML Model for Object Circle detection 850 may reduce the dense images to contours, which are then analyzed for being a circle. Ellipses may also be identified. The presence of circles and ellipses may be used to infer the presence of a bullseye in a bullseye type target, for example. An output of the ML Model 850 may be provided in the form of a count for each identified circle, in Step 880. FIG. 10B is an example of the unknown hostage type target after no circles or ellipses are identified.

The presence of one or more faces in the target is inferred, in Step 882. In this example, the presence of faces is inferred by the ML Model for Object FACE detection 855, in Step 882. The hidden layers 704 in the ML model for Object FACE may be a CNN, for example, trained to infer the presence of one or more faces based on face-like shapes, such as an eye, a nose, and a mouth, in spatial ratios consistent with human faces. The presence of one or more faces may be used to infer that the target is a silhouette type target or a hostage type target, for example. The training set for the ML Model for Object FACE detection 855 may include images of faces of different types and sizes. The size of the training set may be several thousand images, for example. Some of the images in the training set may be synthetically generated. An output of the ML Model 855 may be provided in the form of a count for each identified face, in Step 884. FIG. 10C is an example of the unknown hostage type target where faces are identified.

The sentiment of one or more faces in the target is inferred, in Step 886. Sentiment is used in the hostage type target to differentiate between the hostage and the hostage takers. For example, in the hostage type target 400c of FIG. 4C, the faces of the zombies 422, 424 show the threatening sentiments of anger and disgust, for example, while the victim 420 (the cheerleader) shows victimized sentiments of fear and surprise, for example.

In this example, the sentiment of faces is inferred by the ML Model for SENTIMENT detection 860, in Step 886. The hidden layers 704 in the ML model for Object Sentiment may be a neural network, such as a multi-class neural network, for example, trained to infer the sentiment expressed by the identified faces. The training set for the ML Model for SENTIMENT detection 860 may be a labelled set of one or several thousand images of faces expressing different sentiments, including synthetically generated images, for example. An output of the ML Model for SENTIMENT detection 860 may be provided in the form of a count for different classes of sentiment, such as a count for the inference of anger or disgust, for example, and a count for fear, sadness, and surprise, for example, in Step 888. The ML Model for SENTIMENT detection 860 need not be executed if no faces are inferred, as indicated by the arrow 885.

FIG. 8C is a more detailed example of the ML Model for SENTIMENT detection 860. In this example, there is an input layer that formats the data defining the frame and creates a linear string of pixel values from the two-dimensional (x,y) pixel values in each frame. The input layer is followed by a first CNN layer, which is shown broken out to further include an activate rectified linear unit (“ReLU”) layer, a normalization layer, a maxpool layer, and a dropout layer. Each subsequent CNN layer includes these same layers. The last CNN layer is followed by a softmax function layer, which classifies the output of the CNN as Angry, Disgust, Fear, and Surprise, for example. The confidence of the classification is also output. FIG. 10D, FIG. 10E, and FIG. 10F show faces with inferred sentiments of anger, in FIG. 10D and FIG. 10E, and surprise/fear, in FIG. 10F.
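A minimal PyTorch sketch of the FIG. 8C structure (repeated CNN blocks of convolution, ReLU, normalization, max-pool, and dropout, followed by a softmax over the sentiment classes) follows. The channel counts, the 48×48 face crop size, and the dropout rate are illustrative assumptions rather than the trained production model.

```python
import torch
import torch.nn as nn

def cnn_block(in_ch, out_ch):
    """One CNN layer as broken out in FIG. 8C."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(),                 # activate ReLU layer
        nn.BatchNorm2d(out_ch),    # normalization layer
        nn.MaxPool2d(2),           # maxpool layer
        nn.Dropout(0.25),          # dropout layer
    )

sentiment_model = nn.Sequential(
    cnn_block(1, 16),
    cnn_block(16, 32),
    cnn_block(32, 64),             # last CNN layer
    nn.Flatten(),
    nn.Linear(64 * 6 * 6, 4),      # classes: Angry, Disgust, Fear, Surprise
    nn.Softmax(dim=1),             # classification with confidence
)

scores = sentiment_model(torch.rand(1, 1, 48, 48))   # one cropped face (untrained)
print(scores)
```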

The outputs of Steps 876, 880, 884, and 888 are resolved based on the received counts, in Step 890, for example. Resolution of the counts is performed by the Final Target Classification Determination algorithm 865 of FIG. 8A, for example. The Final Target Classification Determination algorithm 865 may be in the form of a table, such as Table I below:

TABLE I

| # | SEGMENT_COUNT | CIRCLE_COUNT | FACE_COUNT | FACE_SENTIMENT | Final Target Classification |
|---|---------------|--------------|------------|----------------|-----------------------------|
| 1 | 1 | 1 | 0 | N/A | BULLSEYE |
| 2 | 2+ | 2+ | 0 | N/A | MULTIPLE BULLSEYE |
| 3 | 1 | 0 | 1 | ANY | SIMPLE SILHOUETTE |
| 4 | 2+ | 0 | 2+ | VALID_HIT_AREA match to (Angry; Disgust); AVOID_AREA match to (Fear; Sadness; Surprise) | HOSTAGE |

Table I resolves the SEGMENT_COUNT, CIRCLE_COUNT, FACE_COUNT, and FACE_SENTIMENT to reach a Final Target Classification Determination in the final column. A SEGMENT_COUNT of 1 and a CIRCLE_COUNT of 1 is resolved to a Bullseye Target, in Row 1. A SEGMENT_COUNT of 2+ (indicating more than one (1) segment) and a CIRCLE_COUNT of 2+ (indicating more than one (1) circle) is resolved to a Multiple Bullseye Target (where more than one bullseye is provided on the same target). A SEGMENT_COUNT of 1, a CIRCLE_COUNT of zero (0), a FACE_COUNT of 1, and any FACE_SENTIMENT is resolved to a Simple Silhouette Target. A SEGMENT_COUNT of 2+, a CIRCLE_COUNT of 0, a FACE_COUNT of 2+, and a FACE_SENTIMENT including Angry and/or Disgust as well as Fear, Sadness, or Surprise, for example, resolves to a Hostage Target. In the case of the Hostage Target, valid areas correspond to faces showing Angry or Disgust sentiments and invalid areas correspond to faces showing Fear, Sadness, or Surprise, for example.
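A sketch of the resolution logic of Table I follows. The rule encoding is one illustrative reading of the table rows; it omits the experimental weightings mentioned below and the function and label names are assumptions.

```python
def resolve_classification(segment_count, circle_count, face_count, sentiments):
    """sentiments: list of per-face labels such as 'angry', 'disgust', 'fear'."""
    threatening = {"angry", "disgust"}
    if face_count == 0 and circle_count == 1 and segment_count == 1:
        return "BULLSEYE"                                   # Row 1
    if face_count == 0 and circle_count >= 2 and segment_count >= 2:
        return "MULTIPLE BULLSEYE"                          # Row 2
    if face_count == 1 and circle_count == 0:
        return "SIMPLE SILHOUETTE"                          # Row 3
    if face_count >= 2 and circle_count == 0 and any(s in threatening for s in sentiments):
        return "HOSTAGE"                                    # Row 4
    return "UNKNOWN"

print(resolve_classification(3, 0, 3, ["angry", "angry", "fear"]))   # -> HOSTAGE
```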

The Final Target Classification Determination Algorithm 865 may apply weightings to the outputs of the machine learning models 845-860, based on experimentation.

The resolved target classification is output in Step 892. The target area data is added to the session and stored in RAM 306 in FIG. 3, in Step 835, and the process returns to Step 510, in FIG. 5, in Step 837.

Returning to FIG. 5, after target classification, in Step 508, frames continue to be captured, in Step 510, and the depth of the target is determined, in Step 512. Depth may be determined by comparing the actual size of the target to the detected size of the target, taking into account the field of view of the camera. In one example, a statistical machine learning model using linear regression, implemented by the video controller 310, may be used to infer the depth of the target. The statistical machine learning model may be trained by a labelled training set, for example.

After the depth is inferred, it is determined whether the depth is zero (0), in Step 514. If the depth is zero (0) (Yes in Step 514), it indicates that the user has retrieved the target and the session is ended, in Step 516. If the depth is not zero (0) (No in Step 514), then the process 500 continues to determine whether a hit is detected in the target, in Step 518.

Hit detection in Step 518 will now be described. FIG. 11A is a flowchart 1100 of an example of a process for preprocessing frames in the hit detection process. It is noted that this preprocessing may also be performed as part of the other machine learning models described herein.

A current frame is received and converted to grayscale, in Step 1105. A blur is then applied to the current grayscale frame, in Step 1110. The current blurred frame is compared to a prior blurred frame, in Step 1115. The result of the compared frames is referred to as a CleanFrame.
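The preprocessing of Steps 1105-1115 can be sketched with OpenCV as follows: convert the current frame to grayscale, blur it, and difference it against the prior blurred frame to produce the CleanFrame. The Gaussian kernel size and frame dimensions are illustrative assumptions.

```python
import cv2
import numpy as np

def preprocess(current_bgr, prior_blurred):
    gray = cv2.cvtColor(current_bgr, cv2.COLOR_BGR2GRAY)       # Step 1105
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)                # Step 1110
    clean_frame = cv2.absdiff(blurred, prior_blurred)          # Step 1115: CleanFrame
    return clean_frame, blurred    # carry the blurred frame forward as the new prior

# Toy usage with random frames standing in for camera captures.
frame = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)
prior = np.zeros((480, 640), dtype=np.uint8)
clean, prior = preprocess(frame, prior)
print(clean.shape)
```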

The CleanFrame is provided to a machine learning model that is trained to detect bullet hits. FIG. 11B is a schematic representation of a hit detection machine learning model 1120 that may be used, in accordance with an embodiment of the invention. The machine learning model 1120 may include a CNN including hidden activation ReLU layers, for example. The hit detection machine learning model 1120 includes an input layer 1125 (x,y), a flatten layer 1130, a CNN including hidden activation ReLU layer 1 (1135) through hidden activation ReLU layer N (1140), and an output layer 1145. The input layer (x,y) formats the data in the CleanFrame for processing. The flatten layer (x) creates a linear string of pixel values from the two-dimensional (x,y) pixel values in each CleanFrame.

The hidden activation layers 1 . . . N (1135 . . . 1140) are the first and last layers of a multiple layer neural network. The hidden activation layers 1 . . . N (1135 . . . 1140) in this example are each rectified linear unit (“ReLU”) layers. The neural network may be a convolutional neural network or another type of deep learning neural network, for example. The neural network may include eight (8) layers, for example. Each layer may have 10 activations, for example.

The hidden activation ReLU layers of the neural network progressively analyze the change between the current frame and the prior frame that are indicated in the CleanFrame, to determine whether any difference between the frames is indicative of a possible bullet hit in terms of size and shape, for example. The activation layers also determine a confidence score of the determination that there is a bullet hit. Possible hits are identified as 3×3, or 5×5 pixel squares referred to as kernels.

The neural network 1135-1140 is trained by a training set including from about 50,000 to about 100,000 training images of different types of targets with labelled bullet hits of different calibers, and of clean, white paper with bullet hits of different calibers, for example. The training set may be based on about 10,000 different images of examples of bullet hits on targets, from which the remaining images in the training set may be generated synthetically, as is known in the art. In accordance with an embodiment of the invention, some of the synthetically generated images may have a modified texture to simulate real world defects in a potential actual target, such as being crumpled, worn, dog-eared, etc. The training set may also include labelled images of muzzle flash to train the machine learning model not to identify a muzzle flash as a bullet hit. The training set may also include clean targets.

In addition, the output layer 1145 learns where to define a threshold between a hit or not based on the training set. The threshold identifies how much change from one frame to the next indicates a bullet hit. The output layer 1145 provides a binary decision of Yes or No of whether a frame indicates the presence of a bullet hit, based on the learned threshold, and a confidence score. If Yes, the output layer 1145 identifies the x, y coordinate(s) of the pixel(s) where the hit is identified.
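The thresholding and coordinate extraction of the output stage can be sketched as follows, assuming the network's output is viewed as a per-pixel hit probability map, which is one possible reading of the output layer described above; the map values and the 0.9 threshold are illustrative assumptions.

```python
import numpy as np

def extract_hit(probability_map, threshold=0.9):
    """Return (x, y) of a detected hit, or None if nothing clears the threshold."""
    ys, xs = np.where(probability_map >= threshold)
    if xs.size == 0:
        return None                        # "No" decision: no bullet hit in this frame
    return int(xs.mean()), int(ys.mean())  # "Yes": coordinates recorded for scoring

prob = np.zeros((100, 100))
prob[40:43, 60:63] = 0.97                  # toy 3x3 kernel of candidate hit pixels
print(extract_hit(prob))                   # -> (61, 41)
```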

Other machine learning techniques could be used instead of deep learning, such as statistical techniques.

When a hit in a CleanFrame is output by the output layer 1145, the x, y coordinates of the hit are recorded, in Step 1150, and provided to Step 522 of FIG. 5 to score the hit. The coordinates of the hit may be stored in the RAM 306 in FIG. 3 in association with a current session, for example.

Scoring of hits is explained with respect to FIG. 12A, FIG. 12B, FIG. 13, FIG. 14A, and FIG. 14B. FIG. 12A is a flowchart 1200 of an example of a process for scoring a known bullseye type target, such as the bullseye target 400a of FIG. 4A. The process may be implemented by the processing device 302, for example. The coordinates (x, y) of a hit are retrieved from the RAM 306, in Step 1205. The coordinates of the center of the bullseye (cx, cy), where c refers to the center of the bullseye, and the outer radius R of the bullseye are retrieved from storage, in Step 1210. The distance D from the hit to the center is determined by the Pythagorean theorem, for example, as D = ((x − cx)^2 + (y − cy)^2)^(1/2), in Step 1215.

It is then determined whether the distance D is greater than R, in Step 1220. If the distance D is greater than R (Yes in Step 1220), then the hit is outside of the bullseye and no score is granted, in Step 1225. If the distance D is not greater than R (No in Step 1220), then the hit is within the bullseye. The distance D is rounded down to the nearest whole number, in Step 1230. The score is then determined to be 10−D (rounded down), in Step 1235.
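The known-bullseye scoring of FIG. 12A can be sketched as follows: distance to the center by the Pythagorean theorem, no score outside radius R, and otherwise 10 minus the rounded-down distance. The units of the example values (rings versus pixels or inches) are an illustrative assumption.

```python
import math

def score_known_bullseye(hit, center, radius):
    x, y = hit
    cx, cy = center
    distance = math.sqrt((x - cx) ** 2 + (y - cy) ** 2)    # Step 1215
    if distance > radius:                                   # Step 1220
        return 0                                            # Step 1225: outside the bullseye
    return 10 - math.floor(distance)                        # Steps 1230-1235

print(score_known_bullseye(hit=(3, 4), center=(0, 0), radius=9))   # D = 5 -> score 5
```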

FIG. 12B is a flowchart 1250 of an example of a process for scoring the bullseye type target 400a of FIG. 4A, when the bullseye type target is an unknown target.

The coordinates (x, y) of the hit and the target area data are retrieved from the RAM 306 by the processing device 302, for example, in Step 1260. The center of mass of the target hit area is determined and a distance D from the hit to the center of mass (cx, cy) is calculated, in Step 1265. It is then determined whether the distance D is less than the bullseye target radius R, in Step 1270.

If the distance D is not less than the bullseye target radius (No in Step 1270), then the bullseye has been missed and no score is assigned to the hit, in Step 1275. If the distance D is less than the bullseye radius (Yes in Step 1270), then the hit is within the area of the bullseye. To score the hit based on how close the hit is to the center of mass of the bullseye in this example, a minute of angle (“MOA”) is calculated based on the distance D. The MOA may be calculated by dividing the distance D (in inches) between the hit and the center by 100, for example.

The hit is then scored by calculating the square of the depth and subtracting the MOA ((depth)^2 − MOA). The score is returned for storage in the RAM 306 in association with the session, for example. Other scoring methodologies may also be used.
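A sketch of the unknown-bullseye scoring of FIG. 12B, using the example formulas in the text (MOA approximated as the hit-to-center distance in inches divided by 100, and score = depth squared minus MOA), is given below. The unit choices for distance and depth are illustrative assumptions.

```python
import math

def score_unknown_bullseye(hit_in, center_of_mass_in, radius_in, depth):
    dx = hit_in[0] - center_of_mass_in[0]
    dy = hit_in[1] - center_of_mass_in[1]
    distance_in = math.sqrt(dx * dx + dy * dy)     # Step 1265: distance to center of mass
    if distance_in >= radius_in:                   # Step 1270
        return 0                                   # Step 1275: bullseye missed
    moa = distance_in / 100.0                      # Step 1280 (example formula)
    return depth ** 2 - moa                        # Step 1285: (depth)^2 - MOA

print(score_unknown_bullseye((1.0, 2.0), (0.0, 0.0), 6.0, depth=7))
```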

FIG. 13 is a flowchart 1300 of an example of a scoring process for a known or unknown silhouette type target, such as the silhouette type target 400b in FIG. 4B, for example. The coordinates of the hit and the target hit area are retrieved, in Step 1305, as discussed above. In this case, the distance D from the hit to a center of mass (cx, cy) of the valid hit area is calculated, in Step 1310. If the target is a known target, the center of mass of the target is defined by a mask of the target that is stored in a database or table in the non-volatile memory 304, for example. The center of mass of an unknown target is defined after the valid hit area is defined, as discussed above, for example. The distance D may be calculated using the Pythagorean theorem, as discussed above.

The coordinates (x, y) of the hit are compared to a valid hit area of the target, in Step 1315. If the target is a known target, the valid hit area is defined by the mask of the target. If the target is an unknown target, the valid hit area may be defined by a machine learning model in the manner described above.

It is determined whether the coordinates of the hit are within the valid hit area, in Step 1320. If the coordinates of the hit are not within the valid hit area (No in Step 1320), the hit receives no score, in Step 1325.

If the coordinates of the hit are within the valid hit area (Yes in Step 1320), then the hit is scored by calculating the MOA, in Step 1330, as discussed above with respect to Step 1280 of FIG. 12B. The hit is then scored by calculating the square of the depth of the target and subtracting the MOA ((depth)^2 − MOA), as in Step 1285 of FIG. 12B. The score is then stored in the RAM 306 in association with the session, for example.
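The silhouette scoring of FIG. 13 can be sketched as follows, treating the valid hit area as a boolean mask indexed by pixel coordinates. The toy rectangular mask, the depth value, and the unit handling are illustrative assumptions.

```python
import math
import numpy as np

def score_silhouette(hit, valid_mask, center_of_mass, depth):
    x, y = hit
    if not valid_mask[y, x]:                               # Steps 1315-1320
        return 0                                           # Step 1325: no score
    cx, cy = center_of_mass
    distance = math.sqrt((x - cx) ** 2 + (y - cy) ** 2)    # Step 1310
    moa = distance / 100.0                                 # Step 1330
    return depth ** 2 - moa                                # (depth)^2 - MOA

mask = np.zeros((1000, 800), dtype=bool)
mask[200:900, 250:550] = True                              # toy rectangular valid area
print(score_silhouette((400, 500), mask, center_of_mass=(400, 550), depth=7))
```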

FIG. 14A is a flowchart 1400 of an example of a scoring process for a known or unknown hostage type target, such as the hostage type target 400c in FIG. 4C, for example.

The coordinates of the hit are retrieved, in Step 1405, as discussed above. In this case, where there are multiple valid hit areas, the distance D from the hit to a center of mass (cx, cy) of the closest valid area is calculated, in Step 1410. As discussed above with respect to FIG. 13, if the target is a known target, the centers of mass of the valid hit areas of the target are defined by a mask of the target that is stored in a database or table in the non-volatile memory 304, for example. The center of mass of an unknown target is defined after the valid hit area is defined, as described with respect to FIG. 8A, FIG. 8B, and FIG. 9C, for example. The distance D may be calculated using the Pythagorean theorem, as discussed above.

The coordinates of the hit (x, y) are compared to the coordinates of the valid hit areas of the target, in Step 1415. The coordinates of the hit (x, y) are compared to the coordinates of the invalid hit areas of the target, in Step 1420.

The process proceeds to Step 1425, in FIG. 14B, where it is determined whether the hit is within a valid hit area (such as zombies or criminals, for example). If the coordinates of the hit are not within the valid hit area (No in Step 1425), it is determined whether the hit is within an invalid hit area (such as a cheerleader or other such victim), in Step 1430.

If the hit is not within an invalid hit area (No in Step 1430), then no score is returned, in Step 1435. If the hit is within an invalid hit area (Yes in Step 1430), then a penalty score is returned, in Step 1440.

If the coordinates of the hit are within the valid hit area (Yes in Step 1425), then the MOA is calculated, in Step 1445, as discussed above with respect to Step 1280 of FIG. 12B. The hit is then scored by calculating the square of the depth and subtracting the MOA ((depth)²−MOA), in Step 1450. The score is returned and stored in the RAM 306 in association with the session, for example.

Returning to FIG. 5, the user's position may optionally be evaluated to identify flaws in position and technique that the user could correct to improve their score, in Step 524. As discussed above, a second camera (camera 316 in FIG. 3) may be provided in the target evaluation unit 12 of FIG. 1A and FIG. 1B, for example, so that the camera lens faces the user 20 after placement on the shelf or table 28. The lens of the second camera has a wide field of view. The second camera 316 may be positioned perpendicular to the first camera 314 inside the target evaluation unit 12, for example, to image the shooter's shooting hand and/or upper body, including the hands, while shooting. In one example, frames from a fraction of a second up to a second before a bullet hit are evaluated so that the user's position can be imaged prior to and during each shot. The frames of the shooter may be sent to a human expert and/or analyzed by a machine learning model to identify flaws that can be corrected. Referring to FIG. 7, the hidden layers 704 of the machine learning model may be a recurrent neural network that performs skeletal tracking, for example. A report may be provided to the user including the user's score and an evaluation of form, for example, along with a video of the user's hand and body and a statistical evaluation of the video, including timing, steadiness, reflex reactions, and reactions to recoil. Information may be sent to the expert and received from the expert via email, for example. The information may be collected from the target evaluation device by a centralized server via the Internet, for example.

It is noted that a user may inform the system of a hit that was not counted as a hit, via the display, to improve the CNN. In one example, a user interface may be displayed on the display 208 showing hits on the target. The user interface may include an option to indicate a hit on the target that has not been counted, via the touchscreen 208 or other user input device, for example. Feedback may also be useful where, for example, a user always attends the same shooting range, so that the CNN may be further trained on the lighting and shadowing unique to that shooting range. In this case, the user may be provided with an option on the display to further train the CNN with the frames of the targets that the user has shot.

In addition, new targets may be entered into the system so that they become a known target. This may be done by updating a training set or by sending the new target to a centralized system, which can update the target evaluation device.

A data structure representative of the range session may be generated and made available for the user's review and/or for submission to a combination of automated and/or expert review for evaluation, suggested training, and classification of whether a user would qualify for a specific government, military, or specialist certification based on performance, for example.

While embodiments of the invention have been described in which a portable target evaluation unit performs the analysis of targets to identify bullet hits and performs scoring, some or all of the determinations and inferences may instead be performed by a server at the location of the firing range or in the cloud, for example.

Examples of implementations of embodiments of the invention are described above. Modifications may be made to those examples without departing from the scope of the invention, which is defined by the claims below.

Claims

1. A method for evaluating hits on a target, comprising:

capturing frames of the target by a camera;
detecting a target in a captured frame, by a processing device;
classifying the target in the captured frame as a target type, by a processing device;
determining a depth of the target from a user;
identifying a hit on the target, by a processing device; and
scoring the hit, by a processing device.

2. The method of claim 1, comprising:

detecting the target, classifying the target, and/or identifying a hit on the target by a respective machine learning model run by a processing device.

3. The method of claim 2, comprising:

classifying the target as a known target by a machine learning model trained on known targets, the known target having known target area data stored in a storage device;
the method further comprising:
retrieving target area data stored in the storage device, by the processing device; and
scoring a hit on the target based, at least in part, on the target area data.

4. The method of claim 3, wherein the target area data for a target includes a valid hit area, the method comprising:

scoring the hit on the target based, at least in part, on whether the hit is within the valid hit area for the target.

5. The method of claim 4, comprising scoring the hit based, at least in part, on a distance between the hit and a center of mass of a valid target hit area.

6. The method of claim 4, wherein the target area data includes an invalid hit area, the method comprising:

scoring the hit on the target based, at least in part, on whether the hit is within the invalid hit area for the target.

7. The method of claim 3, comprising determining whether the target is a known target by a machine learning model trained on a training set including known targets.

8. The method of claim 7, wherein the training set includes synthetically generated targets based on actual targets, with a modified texture.

9. The method of claim 3, wherein the known target type is a bullseye target type, a silhouette target type, or a hostage target type.

10. The method of claim 3, wherein, if a known target is not identified based on the machine learning model, the method further comprises:

running a second machine learning model different from the first machine learning model, the second machine learning model including multiple, different machine learning models for identifying different characteristics of the target; and
resolving the outputs of the multiple, different machine learning models to define target area data for the target.

11. The method of claim 10, wherein the multiple machine learning models comprise a segmentation machine learning model, a circle detection machine learning model, a face detection machine learning model, and/or a sentiment detection machine learning model.

12. The method of claim 11, comprising running the sentiment detection machine learning model only if at least one face is identified by the face detection machine learning model.

13. The method of claim 12, wherein the sentiment detection machine learning model is trained on a training set including faces expressing anger, disgust, fear, sadness and/or surprise.

14. The method of claim 10, wherein the target area data includes a valid hit area, the method comprising:

scoring the hit on the target based, at least in part, on whether the hit is within the valid hit area for the target.

15. The method of claim 14, wherein the target area data for a target includes an invalid hit area, the method comprising:

scoring the hit on the target based, at least in part, on whether the hit is within the invalid hit area for the target.

16. The method of claim 10, comprising scoring the hit based, at least in part, on a center of mass of an invalid target hit area.

17. A system for evaluating bullet hits on a target, comprising:

a camera to capture frames of a target;
at least one processing device; and
storage;
the at least one processing device configured to: detect a target in a captured frame; classify the target in the captured frame as a target type; determine a depth of the target from a user; identify a hit on the target, by running a machine learning model; and score the hit.

18. The system of claim 17, further comprising:

a casing having an opening, wherein the camera has a camera lens proximate the opening to capture frames down range of the casing;
wherein the camera, the processing device, and the storage are contained within a portable casing; and
the system is self-contained and portable.

19. The system of claim 18, wherein the casing has a second opening different from the first opening, and contains a second camera different from the first camera, the second opening and the second camera being configured to image at least a user's shooting hand during use.

20. The system of claim 17, wherein the at least one processing device is configured to:

classify the target in the captured frame as a target type by running a machine learning model.

21. A method for evaluating bullet hits on a target, comprising:

classifying the target in a captured frame as a target type by running a first machine learning model;
identifying a hit on the target by running a second machine learning model different from the first machine learning model; and
scoring the hit.

22. The method of claim 21, further comprising:

determining a depth of the target from a user by a third machine learning model different from the first and second machine learning models.
Patent History
Publication number: 20240068786
Type: Application
Filed: Apr 26, 2023
Publication Date: Feb 29, 2024
Inventor: Ian David Biggs (Parker, CO)
Application Number: 18/139,938
Classifications
International Classification: F41J 5/14 (20060101); F41J 5/10 (20060101);