Interactive combat game between a real player and a projected image of a computer generated player or a real player with a predictive method

A method for engaging a player or a pair of players in a motion related game including the steps of attaching plural colored elements onto selected portions of the player(s)' garments and processing a video stream of each of the players to separately identify the positions, velocities and accelerations of the several colored elements. The method further comprises generation of a combatant competitor image and moving the image in a manner to overcome the player. In a further approach, two players are recorded and their video images are presented on screens frontal to the other of the players. The same colored elements are used to enable computer calculations of the fighting proficiency of the players.

Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This is a continuation-in-part application of U.S. patent application Ser. No. 11/189,176, filed Jul. 25, 2005, which is incorporated herein by reference.

STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT

Not applicable.

THE NAMES OF THE PARTIES TO A JOINT RESEARCH AGREEMENT

Not applicable.

INCORPORATION-BY-REFERENCE OF MATERIAL SUBMITTED ON A COMPACT DISC

Not applicable.

REFERENCE TO A “MICROFICHE APPENDIX”

Not applicable.

BACKGROUND OF THE INVENTION

1. Field of the Present Disclosure

This disclosure relates generally to games of interactive play between two or more entities including individuals and computer simulated opponents, i.e., the invention may be used by two individuals, an individual and a simulation, and even between two simulations, as for demonstration purposes, and more particularly to a computer controlled interactive movement and contact simulation game in which a player mutually interacts with a computer generated image that responds to the player's movement in real-time.

2. Description of Related Art Including Information Disclosed Under 37 CFR 1.97 and 1.98

Invention and use of computer generated, interactive apparatus are known to the public, in that such apparatus are currently employed for a wide variety of uses, including interactive games, exercise equipment, and astronaut training. Ahdoot, U.S. Pat. No. 5,913,727 discloses an interactive contact and simulation game apparatus in which a player and a three dimensional computer generated image interact in simulated physical contact. Alternately, two players may interact through the apparatus of the invention. The game apparatus includes a computerized control means generating a simulated image or images of the players, and displaying the images on a large display. A plurality of position sensing and impact generating means are secured to various locations on each of the players' bodies. The position sensing means relay information to the control means indicating the exact position of the player. This is accomplished by the display means generating a moving light signal, invisible to the player, but detected by the position sensing means and relayed to the control means. The control means then responds in real time to the player's position and movements by moving the image in a combat strategy. When simulated contact between the image and the player is determined by the control means, the impact generating means positioned at the point of contact is activated to apply pressure to the player, thus simulating contact. With two players, each player sees his opponent as a simulated image on his display device. Lewis et al., U.S. Pat. No. 5,177,872 discloses a novel device for determining the position of a person or object. The device is responsive to head or hand movements in order to move a dampened substance contained within a confined tube past one or more sensors. Light passing through the tube is interrupted by the movement of the dampened substance. The intended use of the device, as disclosed, is changing the perspective shown on a video display. Goo, U.S. Pat. No. 4,817,950 teaches a video game controller for surfboarding simulation, and of particular interest is the use of a unique attitude sensing device to determine the exact position of the surfboard. The attitude sensing device employs a plurality of switch closures to determine the tilt angle of the platform and open and close a plurality of electrical contacts enabling a signal input to a computer control unit. Good et al., U.S. Pat. No. 5,185,561 teaches the principles of tactile feedback through the use of a torque motor. As disclosed, the device consists of a hand held, one dimensional torque feedback device used to manipulate computer generated visual information and associated torque forces. Kosugi et al., U.S. Pat. No. 5,229,756 disclose a combination of components forming an interactive image control apparatus. The main components of the device are a movement detector for detecting movement, a judging device for determining the state of the operator on the basis of the movement signal provided by the movement detector, and a controller that controls the image in accordance with the movement signal and the judgment of the judging device. The movement detector, judging device and controller cooperate so as to control the image in accordance with the movement of the operator. Kosugi requires that a detection means be attached adjacent to the operator's elbow and knee joints so as to measure the bending angle of the extremity and thus more accurately respond to the operator's movements.

The present invention employs a system in which the position of the player is continually monitored. Between the simple types of combat games typically found in game arcades, wherein the player's input is via a simple control joystick and punch-buttons, and the very sophisticated and complex artificial reality types of game, wherein headgear provides a full sensory input structure and a highly instrumented and wired glove allows manual contact on a limited basis with the simulation, there is a need for a fully interactive game. The present invention takes the approach of simulating a combat adversary image while allowing the player to exercise every part of his body in combat with the image; this is the final and most important objective.

The prior art described above teaches interactive game technology, technique and know-how. However, the prior art fails to teach the instant technique featuring simulated “stand-up” combat between two individuals or between an individual and a computer simulation. The present invention fulfills these needs and provides further related advantages as described in the following summary.

BRIEF SUMMARY OF THE INVENTION

A best mode embodiment of the present invention provides a method for engaging a player or a pair of players in a motion related game including the steps of: attaching plural colored elements onto selected portions of the player(s); processing a video stream from a digital camera to separately identify the positions, velocities and accelerations of the several colored elements in time; providing a data stream from the video recorder to a data processor; calculating the distance between the player and the camera as a function of time; and predicting the motions of the players and providing anticipatory motions of a virtual image in compensation thereof.

A primary objective of the present invention is to provide an apparatus and method of use of such apparatus that yields advantages not taught by the prior art.

Another objective of the invention is to provide a game for simulated combat between two individuals.

A further objective of the invention is to provide a game for simulated combat between an individual and a simulated second player of the game.

A further objective of the invention is to provide a game for simulated combat between an individual carrying a sport instrument in hand and simulated offense and defense players of the game.

A still further objective of the invention is to provide a virtual image that anticipates and predicts the movement of the real player and changes accordingly.

Other features and advantages of the embodiments of the present invention will become apparent from the following more detailed description, taken in conjunction with the accompanying drawings, which illustrate, by way of example, the principles of at least one of the possible embodiments of the invention.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING(S)

Illustrated in the accompanying drawing(s) is at least one of the best mode embodiments of the present invention. In such drawing(s):

FIG. 1 is a perspective view of the present invention as seen from behind a projection screen transparent to a camera mounted there behind so as to record the motions of a first player moving in front of the screen, the screen being translucent to the first player;

FIG. 2 is a perspective view thereof from the front of the screen showing the first player at left being recorded from the camera mounted behind the screen wherein the player in front of the screen is able to view an image of a second player projected onto the screen from a projector behind the screen;

FIG. 3 is a perspective view thereof showing the first and the second players in separate locations with video images of each projected onto a screen at the other player's location;

FIG. 4 is a logic diagram of the method of the invention showing event detection and prediction processing steps;

FIG. 5 is a continuation of the logic diagram of FIG. 4 showing player offense event processing steps;

FIG. 6 is a continuation of the logic diagram of FIG. 4 showing player defense event processing steps; and

FIG. 7 is a flow chart showing an associative address generator of the invention.

DETAILED DESCRIPTION OF THE INVENTION

The above described drawing figures illustrate the described apparatus and its method of use in at least one of its preferred, best mode embodiments, which is further defined in detail in the following description. Those having ordinary skill in the art may be able to make alterations and modifications to what is described herein without departing from its spirit and scope. Therefore, it should be understood that what is illustrated is set forth only for the purposes of example and that it should not be taken as a limitation on the scope of the present apparatus and method of use.

In the present apparatus and method, one or two players take part in a game involving physical movements. Such games may comprise simulated combat, games of chance, competition, cooperative engagement, and similar subjects. However, the present invention is ideal for use in contact games of hand-to-hand combat such as karate, aikido, kick-boxing and American style boxing where the players have contact but are not physically intertwined as they are in wrestling, Judo and similar sports. In this disclosure a combat game is described, but such is not meant to limit the range of possible uses of the present invention. In one embodiment of the instant combat game, as shown in FIG. 2, a first player 5 engages in simulated combat with a second player's image 7′ projected by video projector 40 onto a screen 20 placed in front of the player 5. In this embodiment, the image 7′ is computer generated using the same technology as found in game arcades and the like and which is well known in the art. In an alternate embodiment shown in FIG. 3, two live players 5 and 7 stand in front of two separate screens 20 and 22 and engage in mutual simulated combat against recorded and projected images 5′ and 7′ of players 5 and 7 respectively. This avoids physical face-to-face combat where one of the players might receive injury. In this second approach, the images projected onto the screens 20 and 22 are not computer generated but are real-time projections of video recordings taken as shown in FIG. 1 using cameras 10.

In the first approach, shown in FIG. 1, player 5 is positioned in front of rear projection screen 20. One or more video cameras 10 are positioned behind screen 20. The camera 10 is able to view player 5 through the screen 20, which is transparent from the position of camera 10, and record the player's movements. Alternatively, a miniature vidicon CCTV camera (not shown) may be mounted on the front of screen 20, or may be operated through a small hole in the screen 20. The screen 20 may be supported by a screen stand (not shown) or it may be mounted on a wall 25 as shown in the figures.

Simulated image 7′ is visible to the player 5 as shown in FIG. 2. In an approach where the camera 10 is located behind the screen 20 and the image 7′ is visible on screen 20, in order for the camera 10 to not record the projected image 7′, both the camera 10 and the projector 40 are operated at identical rates (frames per second), but each records or projects a frame and blanks for an equal time, interlacing the two functions in time so that one is operating while the other is blanking and vice-versa. The net result is that player 5, positioned at the front of the screen 20, sees the projected image 7′ while the camera 10 sees player 5 and not the projected image 7′.
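The alternation between camera and projector can be sketched as a simple timing schedule. This is an illustrative sketch only, not the patented hardware: the even/odd slot assignment and the frame-slot abstraction are assumptions for clarity.

```python
def interlace_schedule(n_frames):
    """Sketch of the time-interlacing described above: camera and
    projector run at the same frame rate, each active while the
    other blanks, so the camera never records the projected image.
    Returns one (camera_active, projector_active) pair per time slot."""
    return [(t % 2 == 0, t % 2 == 1) for t in range(n_frames)]
```

In every slot exactly one device is active, which is the property that keeps the projected image 7′ out of the camera's recording.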

Preferably, projection screen 20 is transparent to camera 10 mounted behind it so as to enable recording the motions of first player 5 moving in front of screen 20. Preferably also, screen 20 is translucent to first player 5 so that he sees only the projected image 7′ and not the camera 10 or projector 40.

In both of the above described embodiments, players 5 and 7 each wear colored bands as best seen in FIG. 2. Preferably, player 5 has a band 51 secured at his forehead, a band above each elbow 52, on each wrist 53, around the waist 54, above each knee 55 and on each ankle 56. Each of these 10 bands is a different color. Further bands may be placed in additional locations on the players, but the 10 bands shown in FIG. 2, as described, are able to achieve the objectives of the instant innovation as will be shown. In the instant method, the image 5′ of the player 5, as recorded by camera 10, is converted into a digital electronic primary signal. This primary signal is split into 10 derivative secondary signals by color filtering the primary signal for each of the ten colors. Each of the secondary signals contains three pieces of information with respect to each frame of the video recording: a location “x” (left to right), a location “y” (top to bottom) in the camera's field of view, and finally, a number of pixels “p” subtended by the color in the field of view. It is noted that each secondary signal is a representation of only the color band to which it has been filtered, and all other aspects of the recorded image are invisible, i.e., not present in that secondary signal. To summarize then, each frame of the recorded image yields 30 pieces of information, i.e., for each of the ten bands, an x, y and p value. The x and y information locates the band in the plane of the field of view of the camera, while the p information approximates the location in the “z” direction, i.e., the distance from the camera lens to the band. The z coordinate is approximated by taking the value for each band at time zero to be the nominal value of the distance z; when the numerical value of p drops in a subsequent frame of the recording the distance z is increased, and when the numerical value of p increases, the value of z lessens.
By rigorous calibration prior to the use of the present invention, a reasonable qualitative approximation of the motion of the bands in the z direction is made from the p count. Computer 60 processes the locations of all ten bands for each frame of the recording in real time, i.e., there is no appreciable lag between the computer's numeric calculation of the locations of the bands and the actual locations thereof.
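The per-band (x, y, p) extraction and the pixel-count-based depth estimate can be sketched as follows. This is an illustrative sketch, not the patented implementation: the binary-mask input format is assumed, and the square-root depth model is an assumption — the disclosure states only that z rises as p falls and vice-versa, and apparent area scales roughly with the inverse square of distance.

```python
def band_xyp(mask):
    """Given a binary mask (rows of 0/1) for one color-filtered band,
    return its centroid (x, y) and pixel count p, or None if hidden."""
    xs, ys = [], []
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            if v:
                xs.append(x)
                ys.append(y)
    p = len(xs)
    if p == 0:
        return None          # band occluded in this frame
    return (sum(xs) / p, sum(ys) / p, p)

def estimate_z(p, p0, z0):
    """Approximate camera-to-band distance: p0 and z0 are the pixel
    count and nominal distance calibrated at time zero."""
    return z0 * (p0 / p) ** 0.5
```

A band shrinking to a quarter of its calibrated pixel count would thus read as twice the calibrated distance.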

EXAMPLE 1

The player 5 stands facing the screen 20 with feet a comfortable distance apart, legs straight, and arms hanging at the player's sides. Each of the ten colored bands 51-56 is visible to the camera 10 and, with a simple set of anatomical rules, the computer 60 is able to compose a model of the player's form that accurately represents the player's physical position and anatomical orientation at that moment, including approximations of arm and leg length, height, and so on. When a band moves, its image on the recording plane of camera 10 moves accordingly, so that the computer 60 is able to plot the motion trajectory of the band in three-space using coordinates x, y and p. When a band disappears, i.e., is hidden behind another part of the player's anatomy, as is the case in FIG. 2 where band 52 on the player's right arm is hidden by his body, the trajectory of the band is approximated taking into account its locus of locations in preceding frames.
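Approximating an occluded band from its locus in preceding frames might be done, in its simplest form, by linear extrapolation at the last observed frame-to-frame velocity. This sketch is an assumption — the disclosure does not fix the extrapolation rule:

```python
def extrapolate_hidden(history):
    """Approximate an occluded band's position from preceding frames.
    history: list of (x, y, z) positions, most recent last."""
    if len(history) < 2:
        return history[-1]   # nothing to extrapolate from
    (x1, y1, z1), (x2, y2, z2) = history[-2], history[-1]
    # continue along the last observed frame-to-frame displacement
    return (2 * x2 - x1, 2 * y2 - y1, 2 * z2 - z1)
```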

In the case of a single player 5 with a computer generated virtual opponent, the opponent's image 7′ is generated and projected onto the screen 20. As player 5 moves to attack or defend against the image 7′, trajectories of the player's bands 51-56 enable the computer to model the player's motion. The computer is programmed to move the image 7′ to attack and defend accordingly. Preferably, the image 7′ is projected with three dimensional realism by any one of the well known techniques reported in the art. One such technique is the projection of two orthogonally polarized and slightly separated identical images, which appear fuzzy to the unaided eye on screen 20. However, when player 5 wears glasses with lenses that are also orthogonally polarized, the image 7′ appears in three-dimensional realism. Calibration of the image 7′ enables a virtual plane of contact between the player 5 and the image 7′, where this plane of contact is in front of the screen 20. See the virtual three-dimensional image shown in FIG. 2, where player 5 is blocking a kick from the image 7′ of player 7.

EXAMPLE 2

As shown in FIG. 3, players 5 and 7 stand facing their respective screens 20 and 22, each with feet a comfortable distance apart, legs slightly bent, and arms hanging at their sides. Each of the ten colored bands 51-56 on each of the players 5 and 7 is visible to the respective cameras 10, so that the computers 60 are able to compose mathematical models of the positions of each of the players 5 and 7 that accurately represent each player's physical position and anatomical orientation at that moment relative to the other player. The vertical planes represented by the screens 20 and 22 represent the same plane in the combat three-space of the game. Therefore, when one player moves a fist, elbow, knee or foot toward his screen, the computers 60 calculate that motion as projecting toward the other player. In this manner the computers 60 calculate contacts between the players in offensive and defensive moves when their respective body parts occupy the same space coordinates. As in actual combat, the players initially and nominally stand slightly more than an arm's length away from their screens, i.e., mathematically, from their opponents. Points are awarded to each of the players for successful offensive and defensive moves. As discussed above, the images are preferably projected with three-dimensional realism by use of the well known polarized dual images technique, so that each player sees the illusion of the opponent player's image projecting toward him from the screen 20 or 22.
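The shared-plane convention can be sketched as a coordinate mapping. Everything below is an assumed convention for illustration — each screen is taken as the plane z = 0 of a common combat space, the remote player's coordinates are mirrored through that plane, and the contact tolerance is a made-up parameter:

```python
def to_shared_space(x, y, z):
    """Map a remote player's body-part position, measured in his own
    screen frame, into the local combat space: x mirrored left-right,
    z negated so the opponent sits on the far side of the plane z = 0."""
    return (-x, y, -z)

def in_contact(local_part, remote_part, reach=0.1):
    """Contact when the two body parts occupy the same space
    coordinates, within an assumed tolerance (meters)."""
    ax, ay, az = local_part
    bx, by, bz = to_shared_space(*remote_part)
    return ((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2) ** 0.5 <= reach
```

Under this convention, a fist thrust toward one screen advances toward the opponent's mirrored coordinates, as the text describes.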

The present disclosure teaches an improved video frame processing method that enables the combative motions between two distant players 5 and 7, as described above and shown in FIGS. 1-3, to be calculated and compared with respect to each other. This method is described as follows and is shown in FIGS. 4-7. Once the game is initiated, a stream of information from the video recorder frames is processed. Frame by frame, each of the 30 coordinate data elements x, y, and p is recorded, with z being calculated, so that for each frame the position of all parts of the players is known, and using a simple physical model of the human body, a mathematical model of each of the players' positions in three-space is determined. The changes in the locations of the players' body parts from frame to frame enable the calculation of velocity and acceleration of these parts by taking the first and second differentials of the change in position. Furthermore, at each frame, a prediction of the positions, velocities and accelerations of each of the body parts is made. These predictions are made using data from multiple frames. These calculations continue until the number of frames is at least equal to a specified set point. Depending on whether the motion in any of the body parts is defensive, i.e., responsive to the opponent's movement, or offensive, i.e., independent of the opponent's movement, the computer generated image is modified so as to defend against an offensive move by player 5 or to initiate a new offensive move from an inventory of such moves.
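The first- and second-differential calculation over per-frame positions can be sketched directly. This is an illustrative sketch; the frame rate is an assumed parameter:

```python
def derivatives(positions, fps=30.0):
    """Velocity and acceleration of one coordinate of a band, by
    first and second finite differences of its per-frame positions."""
    dt = 1.0 / fps
    vel = [(b - a) / dt for a, b in zip(positions, positions[1:])]
    acc = [(b - a) / dt for a, b in zip(vel, vel[1:])]
    return vel, acc
```

Applied to the x, y and calculated z series of each of the ten bands, this yields the velocity and acceleration figures used in the event processing described below.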

With respect to two real opponents, the logical steps of the present method are shown in FIGS. 4 through 6 and comprise the determination of incoming offense information, calculation of the player's new coordinates, determination of whether the defense or offense is complete, and calculation of the player's offensive positions as compared to the image's defensive moves and vice-versa. Finally, a scoring method is used and, for each of the motion and counter-motion determinations for both offensive and defensive motions of the players, a score is created and projected onto the screen.

Referring now to the numerical reference numbers in the logic flow chart shown in FIGS. 4-7, we find at (1) the game is initiated, whereby all game counters and variables, such as player weight, skill level and expertise, are entered by the players. Counters are initialized. At this time camera auto-focus, zoom and player-position functions are operating, and data is being taken and stored in memory. At (2) each incoming video frame is compared to the previous frame to detect a magnitude of change. Changes surpassing a fixed threshold value trigger further processing at (4). This occurrence triggers the start of “event detection,” and represents the recognition of a player's initial motion. Frame to frame changes that do not surpass the threshold are counted, discarded and directed to further processing at (3).

At (3) and (8), counts “a” of frames that do not surpass threshold are compared with a set constant “c.” If a>c, then an offensive action is taken against the player (11). Otherwise the system waits for action to occur.
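Steps (2), (3) and (8) amount to a thresholded frame-differencing loop with an idle counter. The sketch below is illustrative only — the change metric, threshold and constant c are assumed inputs, and the event labels are invented for the example:

```python
def detect_events(frame_changes, threshold, c):
    """Per-frame change magnitudes above threshold start event
    detection; sub-threshold frames bump idle counter a, and once
    a > c the system itself initiates an offensive action."""
    events = []
    a = 0                                  # count of quiet frames
    for i, change in enumerate(frame_changes):
        if change > threshold:
            events.append(("motion", i))   # player's initial motion
            a = 0
        else:
            a += 1
            if a > c:                      # player idle too long
                events.append(("image_attacks", i))
                a = 0
    return events
```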

At (4) and (5) frame changes are compared with a prior trajectory; if consistency is found logic moves to (6), otherwise to (1). Changes in position, speed and acceleration of the player are measured each frame. If motion is consistent frame to frame as per (6), this indicates that the motion detected at (2) continues. Frame to frame changes in the orientation of each body part suggest body part rotation.

At (6) calculated changes are appended to previous trajectory information. At (7) motion is checked to determine if it has been continuous for “b” frames and if so, logic moves to (9), otherwise back to (4) and (5). During the b frames the motion is determined to be offensive or defensive.

At (9), during initial time periods and between event detection periods, a pattern in the player's motions is sought by the system and characterized as a specific stored pattern. This is accomplished by recognizing a prediction area within a selected variance range. Based on newly received input information, an associative memory generator, e.g., an FPGA (see FIG. 7), stores player motion habits as an inventory related to the specific player. It is noted here that an “FPGA” is a Field-Programmable Gate Array, a type of logic chip that can be programmed. An FPGA is similar to a PLD but has an order of magnitude more gates. FPGAs are commonly used for prototyping integrated circuit designs and other applications.
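A software stand-in for the associative memory generator could fold quantized motion attributes into a single lookup-table address, so that retrieving a stored prediction requires no further calculation. This sketch is hypothetical: the quantization step, folding constant and table size are all assumptions, not values from the disclosure:

```python
def associative_address(features, table_size=1024):
    """Fold quantized motion attributes (e.g. positions, speeds)
    into one deterministic lookup-table address in [0, table_size)."""
    addr = 0
    for f in features:
        addr = (addr * 31 + int(round(f * 10))) % table_size
    return addr
```

Identical attribute vectors always map to the same address, which is what lets stored predictions be recalled by direct lookup rather than recomputation.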

At (17), when an end of an event is characterized by the completion of b frames, if the motion is determined to be offensive logic moves to (10); if the motion is determined to be defensive logic moves to (11); if neither, the next step is taken. If the motion is a combination of offense and defense, then a likelihood of hit success is established by comparing the player's and the image's motions. If the player's offense is stronger, logic moves to (10), otherwise to (11).

At (10) and (11) the physical attributes of the player which were determined after b frames are fed to the associative memory generator (FIG. 7). The output of the generator is fed to a memory address lookup table which provides a memory address of the various predictions. The use of a generator of this type, which relates physical attributes to a memory address, does not burden the processor since no calculations are necessary.

In FIG. 6 at (14), an event follower processor for player offense waits for event detection from (15). At (15) the player's offense prediction is read, along with recovering a defense absolute address from the lookup table. This address is generated in conjunction with the associative memory generator. The address stores the physical attributes of the player, which are quantities representing a degree of expertise. The output from the associative memory generator is the address used at the lookup table, which has been previously prepared. The output of the lookup table is the address used by the memory holding prediction data.

At (16) the next frame is considered and processed, calculating the player's new coordinates and amending prior coordinate information. The trajectory is calculated and the image's defensive moves are predicted.

At (17) the player's offense and the image's defense predictions are compared for each frame. If the player's actual offense correlates with prediction, logic moves to (18) and if not correlated, logic moves to (21). If significant correlation variance is determined, logic moves to (1).

At (18), if an end of the image's defense is not determined, logic moves to (16) and (17). If an end is determined, logic moves to (19). At (19) the player's trajectory is compared with the predicted and planned trajectory of the image and a score is determined. At (20) scores are displayed and the event detection processor is informed of an end of the player's offensive motion. At (21) player information is stored in memory and received at (1) as needed.

Coordinates in three-space of the positions of body parts of the image and of the player are calculated, and when a collision is determined, velocity and acceleration vectors of both the player and the image are used to determine scores. As an example, the score number for player contact with the image's hand (the image parrying a player's thrust) is relatively low, while player contact with the image's face results in a large score number.
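One way to realize the scoring rule above is a base score per target body part, scaled by the striking part's speed at the moment of collision. The part names, base values and speed scaling below are assumed for illustration — the disclosure specifies only that the hand scores low and the face high:

```python
# Assumed base scores per target body part (hand low, face high).
BASE_SCORE = {"hand": 1, "torso": 5, "face": 10}

def contact_score(target_part, strike_speed):
    """Score for a determined collision: base value for the struck
    part, scaled by the magnitude of the striking part's velocity.
    Unlisted parts get an assumed middle base value."""
    return BASE_SCORE.get(target_part, 3) * strike_speed
```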

In FIG. 7, (24) waits for the player's defense command from the event detector processor. At (25) the player's defense prediction and the image's offense address from the address lookup table are read. This address is generated in conjunction with the associative memory generator. The address contains the player's physical attributes, which represent a degree of expertise. This output is stored in the lookup table memory and is used to establish the prediction.

At (26) frames are processed in sequence and the player's new coordinates are calculated and updated. The trajectory of the player's moves is calculated and the image's offense is predicted.

At (27) the player's defense trajectory and the image's offense predictions are compared at each frame. If the player's defensive prediction corresponds to the measured actual motion, logic moves to (28) and if it does not correspond, logic moves to (31). If correspondence is poor logic moves to (1).

At (28) the end of the image's offense is determined and if an end is not found, logic moves to (26) and (27). If an end is determined, logic moves to (29). The image's planned offense is used to provide a score considering the player's actual defense and the image's planned offense motions.

At (29) the player's trajectory is compared with the image's planned trajectory and scores are determined in accordance with the outcome. At (30) the scores are displayed and the end of the player's defensive motion is logged. At (31) the player's information is stored in memory for future reference.

Preferably, an imaginary boundary is set around the projected image. The actual motion of the player is compared with this boundary to determine the relative position of the player's hands and feet with respect to the boundary, and scores are determined by the relative positions and sensitivities of the parts of the player's or image's body. As play proceeds, the actual speed, accuracy, acceleration and positioning of the player (history information) is stored and used to improve the prediction model of the player.
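The boundary comparison can be sketched as a simple containment test. The axis-aligned box representation of the imaginary boundary is an assumption made for this illustration:

```python
def within_boundary(point, boundary):
    """Compare a hand or foot position (x, y, z) with the imaginary
    boundary set around the projected image, here assumed to be an
    axis-aligned box ((xmin, xmax), (ymin, ymax), (zmin, zmax))."""
    return all(lo <= c <= hi for c, (lo, hi) in zip(point, boundary))
```

The relative position returned by such a test, together with the stored history of the player's speed, accuracy and positioning, would feed the scoring and prediction steps described above.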

The enablements described in detail above are considered novel over the prior art of record and are considered critical to the operation of at least one aspect of one best mode embodiment of the instant invention and to the achievement of the above described objectives. The words used in this specification to describe the instant embodiments are to be understood not only in the sense of their commonly defined meanings, but to include by special definition in this specification: structure, material or acts beyond the scope of the commonly defined meanings. Thus if an element can be understood in the context of this specification as including more than one meaning, then its use must be understood as being generic to all possible meanings supported by the specification and by the word or words describing the element.

The definitions of the words or elements of the embodiments of the herein described invention and its related embodiments not described are, therefore, defined in this specification to include not only the combination of elements which are literally set forth, but all equivalent structure, material or acts for performing substantially the same function in substantially the same way to obtain substantially the same result. In this sense it is therefore contemplated that an equivalent substitution of two or more elements may be made for any one of the elements in the invention and its various embodiments or that a single element may be substituted for two or more elements in a claim.

Changes from the claimed subject matter as viewed by a person with ordinary skill in the art, now known or later devised, are expressly contemplated as being equivalents within the scope of the invention and its various embodiments. Therefore, obvious substitutions now or later known to one with ordinary skill in the art are defined to be within the scope of the defined elements. The invention and its various embodiments are thus to be understood to include what is specifically illustrated and described above, what is conceptually equivalent, what can be obviously substituted, and also what essentially incorporates the essential idea of the invention.

The enablements described in detail above are considered novel over the prior art of record and are considered critical to the operation of at least one aspect of the apparatus and its method of use and to the achievement of the above described objectives. The words used in this specification to describe the instant embodiments are to be understood not only in the sense of their commonly defined meanings, but to include by special definition in this specification: structure, material or acts beyond the scope of the commonly defined meanings. Thus if an element can be understood in the context of this specification as including more than one meaning, then its use must be understood as being generic to all possible meanings supported by the specification and by the word or words describing the element.

The scope of this description is to be interpreted only in conjunction with the appended claims and it is made clear, here, that each named inventor believes that the claimed subject matter is what is intended to be patented.

Claims

1. A method of playing a motion-related hand-to-hand combat-type game between a real player and a virtual player; the method comprising the steps of:

a) identifying portions of the real player with distinct colored elements;
b) positioning the real player in front of a video screen upon which a virtual player image is projected;
c) recording video frames of a real player image and filtering the real player image into separate signals according to the colored elements;
d) reviewing each of the frames in sequence so as to determine the change over time of the positions in 3-space of the portions, and calculating the velocity, acceleration and trajectory of each of the portions;
e) moving the virtual player on the video screen, said movement corresponding in defense and offense to the movement of the real player;
f) identifying portions of the virtual player corresponding to the portions of the real player and establishing position, velocity, acceleration and trajectory of the portions of the virtual player to determine when real player and virtual player portions intersect in three-space;
g) assigning scores to the players in accordance with a set of scoring rules; and
h) repeating steps (c) through (g) until one of the players has achieved a set number of points and is therefore declared a winner.
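The frame-by-frame arithmetic of steps (c) through (f) can be sketched as follows. This is a minimal illustration only; the frame rate, the finite-difference estimators, and the sphere-based intersection test are our assumptions, not anything the claim specifies:

```python
# Sketch of the per-frame tracking arithmetic behind claim 1, steps (c)-(f).
# Positions of a colored element are sampled once per frame; velocity and
# acceleration are estimated by first differences. All names are illustrative.

FPS = 30.0          # assumed camera frame rate
DT = 1.0 / FPS      # time between frames, seconds

def velocity(p_prev, p_curr, dt=DT):
    """First difference of two 3-space positions (x, y, z)."""
    return tuple((c - p) / dt for p, c in zip(p_prev, p_curr))

def acceleration(v_prev, v_curr, dt=DT):
    """First difference of two velocity samples."""
    return tuple((c - p) / dt for p, c in zip(v_prev, v_curr))

def portions_intersect(p_a, p_b, radius=0.05):
    """Treat each tracked portion as a small sphere in 3-space and test
    whether the two spheres overlap (the 'intersection' of step (f))."""
    dist2 = sum((a - b) ** 2 for a, b in zip(p_a, p_b))
    return dist2 <= (2 * radius) ** 2
```

A scoring loop per step (g) would call `portions_intersect` on each (real portion, virtual portion) pair every frame and award points under the game's scoring rules.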

2. The method of claim 1, utilizing a digital video camera interfaced to a distributed processor to capture real-time 3-d motions of a player, comprising the further steps of:

a) calibrating the system by initially placing the player(s) at a fixed distance from the camera and calibrating the colored elements and bodily signatures of the player(s) for real-time 3-d motion detection;
b) continually receiving the camera's real-time electro-optical, auto-focus, and zooming control information along with video camera signals measuring the three-dimensional positions of the player(s) in motion;
c) while in motion, calculating the depth (z) from the ratio of the total pixel count of the colored elements worn by the player(s) to the total pixel count of the colored elements measured during initial calibration;
d) utilizing a camera that can be commanded to perform auto-focus or computer-controlled focus;
e) adjusting the pixel count information of the colored elements and the player(s)' bodily signatures based upon the camera's received auto-focus or computer-controlled focus information;
f) measuring the trajectory of motion, speed, and acceleration of the player's body parts from the differential changes between the most recent frame and the previous frame, and filtering the images to provide a sharp image and eliminate background noise;
g) measuring differential changes from frame to frame by following the periphery of each colored element and measuring pixel changes;
h) utilizing a computer-controlled camera that is commanded to focus and stay focused on a specific moving colored element;
i) utilizing a camera whose zoom is computer controlled;
j) placing the digital camera on a computer-controlled gimbal to follow the player's motions, the pixel count derived from step (c) being further adjusted based upon the 2-d gimbal motions; and
k) utilizing the digital camera with infrared sensors to monitor the body temperature of the player.
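Step (c) recovers depth from the pixel count of a colored element relative to its calibration-time count. The claim states only "the ratio"; the square root below is our assumption, based on apparent pixel area falling off with the square of distance from the camera:

```python
import math

# Hedged reading of claim 2, step (c): a colored patch of fixed physical size
# covers pixels in inverse proportion to the square of its distance, so
# z = z_cal * sqrt(pixels_cal / pixels_now). The sqrt mapping is an
# assumption; the claim does not specify the functional form.

def depth_from_pixels(pixels_now, pixels_cal, z_cal):
    """Estimate depth z (same units as z_cal) given the current pixel count
    of a colored element, its count at calibration, and the known
    calibration distance."""
    if pixels_now <= 0:
        raise ValueError("colored element not visible in frame")
    return z_cal * math.sqrt(pixels_cal / pixels_now)
```

For example, an element that covered 400 pixels at a 2 m calibration distance and now covers 100 pixels is estimated to be at 4 m.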

3. The method of claim 1 wherein the computer's further actions are synchronized to the start of a player's motions or verbal commands on a frame-by-frame basis, further comprising the steps of:

a) comparing each incoming frame to the previous frame to detect the magnitude of change; changes in the incoming frames surpassing a threshold lead to further processing, while frames whose changes do not surpass the threshold are counted and discarded;
b) counting and discarding continuous incoming frames not surpassing the threshold; when such frames persist for a certain period of time ("c" number of frames), leading to an offense motion by the computer-generated image; and
c) analyzing the voice-activated or other commands and routing them to different processing stages, depending upon the nature of the commands.
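The threshold-and-idle-counter behavior of steps (a) and (b) can be sketched as a small state machine. Frame "magnitude of change" is abstracted to a single number here, and all names are illustrative:

```python
# Sketch of claim 3, steps (a)-(b): frames whose change exceeds a threshold
# go on to further processing; quiet frames are counted and discarded, and
# after "c" consecutive quiet frames the computer-generated image attacks.

def drive_image(changes, threshold, c):
    """Yield 'process' for frames whose change exceeds the threshold,
    'discard' otherwise, and 'image_offense' once c quiet frames accrue."""
    quiet = 0
    for change in changes:
        if change > threshold:
            quiet = 0
            yield "process"
        else:
            quiet += 1
            if quiet >= c:
                quiet = 0
                yield "image_offense"
            else:
                yield "discard"
```

In a real implementation the per-frame change might be, e.g., a sum of absolute pixel differences restricted to the colored elements.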

4. The method of claim 1 wherein an event detection and prediction distributed digital image processor continually monitors the movement of the player to detect motions that are consistent within a certain time period ("b" number of frames), an event being defined as an offense or a defense motion by the player; and wherein the event detector's algorithm comprises the steps of:

a) comparing each consecutive frame that has passed the threshold to the previous frame to detect the magnitude of change, the changes being added to the previous trajectory of the player's motions;
b) if the received frame count is less than "b" number of frames, repeating the previous step; otherwise proceeding to the next step;
c) at the end of "b" number of frames, determining whether the player's motions indicate an offense aimed at the image's sensitive parts; if yes, proceeding to the player's offensive play (step (f)), otherwise continuing;
d) at the end of "b" number of frames, determining whether the player's motions indicate a defense, protecting against and dodging the image's offensive moves; if yes, proceeding to the player's defense (step (g)), otherwise continuing;
e) at the end of "b" number of frames, determining whether the player's motions indicate a combination of offense and defense against the image's body parts; if yes, calculating the likelihood of hit success by comparing the player's and the image's motions; if the player's offense is stronger, proceeding to step (f), and if weaker, to step (g);
f) predicting the player's offense course of motion as the continuation of the motion detections in "b"; planning a defensive course of motion for the image in conjunction with the player's prediction; calculating the player's and the image's final trajectories and coordinates at the end of the predicted or planned period and sending them to the event follower processor (step (h)); then returning to step (a);
g) predicting the player's defense course of motion as the continuation of the motion detections in "b"; planning an offense course of motion for the image in conjunction with the player's prediction; calculating the player's and the image's final trajectories and coordinates at the end of the planned period and sending the results to the event follower processor (step (l)); then returning to step (a);
h) determining whether a new player's offense command has been received from the event detection and prediction processor; if no, proceeding to the next step; otherwise continuing to display the planned image and repeating this step;
i) continually displaying the planned defense or offense motions of the image; getting the next frame, processing the frame, calculating the player's new coordinates, and adding them to the previous coordinates;
j) determining whether the end of the player's prediction period or of the image's planned defense has been reached; if yes, proceeding to the next step; if no, returning to the previous step;
k) comparing the player's offense to the image's defense; calculating and showing scores; returning to step (h);
l) determining whether a new player's defense command has been received from the event detection processor; if no, proceeding to the next step; otherwise continuing to display the planned image and repeating this step;
m) continually displaying the planned defense or offense motions of the image; getting the next frame, processing the frame, calculating the player's new coordinates, and adding them to the previous coordinates;
n) determining whether the end of the player's prediction period or of the image's planned offense period has been reached; if yes, proceeding to the next step; otherwise returning to the previous step;
o) calculating the image's offense compared to the player's defense; calculating and showing scores; returning to step (l).
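The "predict the continuation of motion detections in b" operation of steps (f) and (g) can be read as an extrapolation from the most recent observed position, velocity, and acceleration. A constant-acceleration model is our assumption; the claim does not name one:

```python
# Sketch of the prediction step in claim 4 (f)/(g): extrapolate each axis
# forward by t seconds under constant acceleration. Inputs are the last
# observed 3-space position p, velocity v, and acceleration a of a tracked
# body portion; all units are illustrative.

def predict_position(p, v, a, t):
    """Constant-acceleration extrapolation per axis:
    p + v*t + 0.5*a*t^2."""
    return tuple(pi + vi * t + 0.5 * ai * t * t
                 for pi, vi, ai in zip(p, v, a))
```

The image's planned counter-motion would then be chosen against the predicted endpoint rather than against the player's current position, which is what gives the method its anticipatory character.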

5. The method of claim 1 wherein the degree of the player's speed is decided by adjustment of the "b" number of frames during initialization, and the degree of expertise is decided by classifying predictions and plans.

6. The method of claim 1 further comprising the step of increasing the number of cameras and display monitors to assist the player's view of the image at different angles while turning and facing from one camera to another, wherein:

a) the image processor providing an image 3-d field of play for the player to use as a visual guideline for his or her movements in the field of play, while the image is moved around from one side of the field of play to the other; and
b) the image processor detecting player positions from the different cameras, deciding which camera provides the best detection angle, and displaying the image in the relevant field of play to be viewed by the player.
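The "best detection angle" decision of step (b) is not defined in the claim; one minimal proxy (our assumption) is to pick the camera whose frame contains the most pixels of the player's colored elements:

```python
# Hypothetical camera-selection rule for claim 6, step (b): prefer the view
# in which the colored elements are largest, i.e. most frontal / least
# occluded. pixel_counts[i] is the colored-element pixel count in camera i.

def best_camera(pixel_counts):
    """Return the index of the camera with the largest colored-element
    pixel count; raises ValueError on an empty list."""
    if not pixel_counts:
        raise ValueError("no cameras")
    return max(range(len(pixel_counts)), key=lambda i: pixel_counts[i])
```

The selected camera's feed would then drive the display facing the player's current orientation.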

7. The method of claim 1 wherein two local players use two sets of camera(s), two sets of displays, and an image processor that examines the individual video pictures from each player and displays the video or planned image of the opponent player.

8. The method of claim 1 wherein two remote players use two sets of camera(s), two sets of displays, and two sets of image processors, further comprising the steps of:

p) each processor examining the video from its relevant local player;
q) each processor receiving the opponent's image motion information (or the actual opponent's video and other relevant information) via remote transmission facilities on a frame-by-frame basis;
r) each processor displaying an image of the opponent and controlling the image based upon the information received from the opponent's motions;
s) each processor providing scores on each display; and
t) one processor determining the winning score, to be displayed on both monitors.
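Steps (q) and (r) exchange per-frame motion information between the two processors. The claim does not define a message format; the JSON structure and field names below are our invention, purely to make the frame-by-frame exchange concrete:

```python
import json

# Hypothetical wire format for claim 8: one message per frame carrying the
# frame number and the 3-space coordinates of each tracked portion. The
# "frame" / "portions" field names are illustrative assumptions.

def encode_frame(frame_no, portions):
    """Serialize one frame of tracked portion coordinates, e.g.
    portions = {"head": [x, y, z], ...}, for transmission."""
    return json.dumps({"frame": frame_no, "portions": portions})

def decode_frame(message):
    """Parse a received frame message back into (frame_no, portions)."""
    data = json.loads(message)
    return data["frame"], data["portions"]
```

Each processor would decode the incoming stream and drive the opponent's displayed image from the received coordinates, frame by frame.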

9. A method of playing baseball, tennis, golf, or other related games between a player having a sport instrument in hand and images of offense and defense players, wherein a ball, the images of the players, and an image of the field are generated by a processor; the method comprising the steps of:

a. identifying the play instrument and portions of the player's body with individual colored elements;
b. planning and playing each computer-generated image ball with a known trajectory, speed, acceleration, and prediction, simulating a professional player;
c. generating an image of the field of play in 3-d whereby offense and defense actions take place by the image offense and image defense players;
d. processing the incoming frames in the vicinity of the time of impact of the player's instrument with the image ball;
e. recording the movements of the player and the instrument as a video image and electronically or optically filtering the image into separate signals according to the colored elements;
f. determining positions in 3-space of the portions of the player and the instrument on each recorded video frame;
g. following the trajectory of the player's body parts and the sport instrument, utilizing the method of claim 4, to calculate changes in trajectory, velocity, and acceleration of the portions of the player's body and the trajectory of the instrument;
h. predicting the trajectory, velocity, and acceleration of the image ball being hit by the player's instrument;
i. moving the image players, as the result of the predicted trajectory of the ball and each image's physical location in the field of play, at the moment of impact of the instrument with the image ball;
j. calculating the likelihood of success for the image ball to stay within the image 3-d field of play;
k. displaying the predicted trajectory of the ball hit by the player's instrument;
l. displaying the images of the players playing offense and defense and reacting to the image ball, based upon the positions of the image players and the prediction; and
m. comparing the player's prediction and follow-on to the initial planned trajectory (step b), and displaying scores.
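Step (b) plans each image ball "with a known trajectory, speed, acceleration". A minimal ballistic plan under gravity alone can illustrate this; drag is ignored and the launch parameters and axis convention are our assumptions:

```python
# Sketch of the planned ball trajectory in claim 9, step (b): simple
# projectile motion with gravity on the vertical (index 1) axis. p0 is the
# launch position (x, y, z) in metres, v0 the launch velocity in m/s.

G = 9.81  # gravitational acceleration, m/s^2

def ball_position(p0, v0, t):
    """Position of the image ball t seconds after launch."""
    x = p0[0] + v0[0] * t
    y = p0[1] + v0[1] * t - 0.5 * G * t * t
    z = p0[2] + v0[2] * t
    return (x, y, z)
```

The prediction of step (h) would re-run the same model from the estimated post-impact velocity of the ball, and step (j)'s in-bounds likelihood would test the resulting landing point against the 3-d image field.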

10. A method of playing a motion-related hand-to-hand combat-type game between a first real player and a second real player; the method comprising the steps of:

a) identifying portions of the players with distinct colored elements;
b) positioning the players in front of separate video screens upon which an image of the first player is projected and viewable by the second player, and an image of the second player is projected and viewable by the first player;
c) recording video frames of images of combat movement between the players and filtering each of the images into respective separate signals according to the colored elements;
d) reviewing each of the frames of each of the players in sequence so as to determine position changes from frame to frame, and calculating the velocity, acceleration and trajectory of each of the portions of each of the players;
e) identifying portions of the players in contact to determine when the portions of the players virtually intersect;
f) assigning scores to the players in accordance with a set of scoring rules; and
g) repeating steps (c) through (f) until one of the players has achieved a set number of points and is therefore declared a winner.
Patent History
Publication number: 20070021207
Type: Application
Filed: Feb 6, 2006
Publication Date: Jan 25, 2007
Inventor: Ned Ahdoot (Rancho Palos Verdes, CA)
Application Number: 11/349,431
Classifications
Current U.S. Class: 463/36.000
International Classification: A63F 9/24 (20060101);