POSITION DETECTION AND/OR MOVEMENT TRACKING VIA IMAGE CAPTURE AND PROCESSING

- BROADCOM CORPORATION

Position detection and/or movement tracking via image capture and processing. Digital cameras perform image capture of one or more objects within a particular region (e.g., a physical gaming environment). A game module or processing module processes the images captured by the digital cameras to identify a position of and/or track movement of objects (e.g., a player, a gaming object, a game controller, etc.). Various digital image processing techniques may be employed including pattern recognition of objects, color recognition/distinction, intensity recognition/distinction, relative size comparison, etc. to identify objects and/or track their movement. The coupling between the digital cameras and the game module or processing module may be wired, wireless, or a combination thereof. If wireless, any number of different signaling means may be employed including Code Division Multiple Access (CDMA) signaling, Time Division Multiple Access (TDMA) signaling, or Frequency Division Multiple Access (FDMA) signaling.

Description
CROSS REFERENCE TO RELATED PATENTS/PATENT APPLICATIONS

Provisional Priority Claims

The present U.S. Utility Patent Application claims priority pursuant to 35 U.S.C. §119(e) to the following U.S. Provisional Patent Application which is hereby incorporated herein by reference in its entirety and made part of the present U.S. Utility Patent Application for all purposes:

1. U.S. Provisional Application Ser. No. 60/936,724, entitled “Position and motion tracking of an object,” (Attorney Docket No. BP6471), filed Jun. 22, 2007, pending.

BACKGROUND OF THE INVENTION

1. Technical Field of the Invention

The invention relates generally to position and tracking systems; and, more particularly, it relates to such systems that employ captured digital images to determine position of or track movement of an object.

2. Description of Related Art

Communication systems are known to support wireless and wire lined communications between wireless and/or wire lined communication devices. Such communication systems range from national and/or international cellular telephone systems to the Internet to point-to-point in-home wireless networks to radio frequency identification (RFID) systems. Each type of communication system is constructed, and hence operates, in accordance with one or more communication standards. For instance, radio frequency (RF) wireless communication systems may operate in accordance with one or more standards including, but not limited to, RFID, IEEE 802.11, Bluetooth, advanced mobile phone services (AMPS), digital AMPS, global system for mobile communications (GSM), code division multiple access (CDMA), local multi-point distribution systems (LMDS), multi-channel-multi-point distribution systems (MMDS), and/or variations thereof. As another example, infrared (IR) communication systems may operate in accordance with one or more standards including, but not limited to, IrDA (Infrared Data Association).

Depending on the type of RF wireless communication system, a wireless communication device, such as a cellular telephone, two-way radio, personal digital assistant (PDA), personal computer (PC), laptop computer, home entertainment equipment, RFID reader, RFID tag, et cetera communicates directly or indirectly with other wireless communication devices. For direct communications (also known as point-to-point communications), the participating wireless communication devices tune their receivers and transmitters to the same channel or channels (e.g., one of the plurality of radio frequency (RF) carriers of the wireless communication system) and communicate over that channel(s). For indirect wireless communications, each wireless communication device communicates directly with an associated base station (e.g., for cellular services) and/or an associated access point (e.g., for an in-home or in-building wireless network) via an assigned channel. To complete a communication connection between the wireless communication devices, the associated base stations and/or associated access points communicate with each other directly, via a system controller, via the public switched telephone network, via the Internet, and/or via some other wide area network.

For each RF wireless communication device to participate in wireless communications, it includes a built-in radio transceiver (i.e., receiver and transmitter) or is coupled to an associated radio transceiver (e.g., a station for in-home and/or in-building wireless communication networks, RF modem, etc.). As is known, the receiver is coupled to the antenna and includes a low noise amplifier, one or more intermediate frequency stages, a filtering stage, and a data recovery stage. The low noise amplifier receives inbound RF signals via the antenna and amplifies them. The one or more intermediate frequency stages mix the amplified RF signals with one or more local oscillations to convert the amplified RF signal into baseband signals or intermediate frequency (IF) signals. The filtering stage filters the baseband signals or the IF signals to attenuate unwanted out of band signals to produce filtered signals. The data recovery stage recovers raw data from the filtered signals in accordance with the particular wireless communication standard.

As is also known, the transmitter includes a data modulation stage, one or more intermediate frequency stages, and a power amplifier. The data modulation stage converts raw data into baseband signals in accordance with a particular wireless communication standard. The one or more intermediate frequency stages mix the baseband signals with one or more local oscillations to produce RF signals. The power amplifier amplifies the RF signals prior to transmission via an antenna.

In most applications, radio transceivers are implemented in one or more integrated circuits (ICs), which are inter-coupled via traces on a printed circuit board (PCB). The radio transceivers operate within licensed or unlicensed frequency spectrums. For example, wireless local area network (WLAN) transceivers communicate data within the unlicensed Industrial, Scientific, and Medical (ISM) frequency spectrum of 900 MHz, 2.4 GHz, and 5 GHz. While the ISM frequency spectrum is unlicensed, there are restrictions on power, modulation techniques, and antenna gain.

In IR communication systems, an IR device includes a transmitter, a light emitting diode, a receiver, and a silicon photo diode. In operation, the transmitter modulates a signal, which drives the LED to emit infrared radiation that is focused by a lens into a narrow beam. The receiver, via the silicon photo diode, receives the narrow beam infrared radiation and converts it into an electric signal.

IR communications are used in video games to detect the direction in which a game controller is pointed. As an example, an IR sensor is placed near the game display, where the IR sensor detects the IR signal transmitted by the game controller. If the game controller is too far away, too close, or angled away from the IR sensor, the IR communication will fail.

Further advances in video gaming include three accelerometers in the game controller to detect motion by way of acceleration. The motion data is transmitted to the game console via a Bluetooth wireless link. The Bluetooth wireless link may also transmit the IR direction data to the game console and/or convey other data between the game controller and the game console.

While the above technologies allow video gaming to include motion sensing, they do so with limitations. As mentioned, the IR communication works only within a limited area in which the player must remain. Further, the accelerometers measure only acceleration, such that true one-to-one detection of motion is not achieved. Thus, the gaming motion is limited to a handful of directions (e.g., horizontal, vertical, and a few diagonal directions).

Therefore, a need exists for motion tracking and positioning determination for video gaming and other applications that overcome the above limitations.

BRIEF SUMMARY OF THE INVENTION

The present invention is directed to apparatus and methods of operation that are further described in the following Brief Description of the Several Views of the Drawings, the Detailed Description of the Invention, and the claims. Other features and advantages of the present invention will become apparent from the following detailed description of the invention made with reference to the accompanying drawings.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

FIG. 1 is a diagram of an embodiment of an apparatus that performs position determination and/or movement tracking via image capture and processing.

FIG. 2 is a diagram of an alternative embodiment of an apparatus that performs position determination and/or movement tracking via image capture and processing.

FIG. 3 is a diagram of an embodiment showing a means by which position of a point, object, etc. may be determined using multiple directional vectors extending from multiple known locations, respectively, to that point, object, etc.

FIG. 4 is a diagram of an embodiment showing the relationship between an object point and various image planes that have performed image capture of the object point.

FIG. 5 is a diagram of an embodiment showing the relationship between multiple object points and various image planes that have performed image capture of the multiple object points.

FIG. 6 is a diagram of an embodiment showing an image sensor and the association of physical pixels and the image pixels generated therefrom.

FIG. 7A and FIG. 7B are diagrams of an embodiment of an apparatus that employs directional vectors associated with captured images, at least some of which depict an object, to determine position of the object.

FIG. 8A and FIG. 8B are diagrams of an embodiment of an apparatus that employs directional vectors associated with images that depict a number of objects to determine position of a device that has captured the images.

FIG. 9 is a schematic block diagram of an overhead view of an embodiment of a gaming system.

FIG. 10 is a schematic block diagram of a side view of an embodiment of a gaming system.

FIG. 11 is a diagram illustrating an embodiment of a gaming system including multiple digital cameras for capturing images to undergo processing in a game module, that is wire-coupled to the multiple digital cameras, for position detection and/or movement tracking.

FIG. 12 is a diagram illustrating an alternative embodiment of a gaming system including multiple digital cameras for capturing images to undergo processing in a game module, that is wirelessly coupled to at least some of the multiple digital cameras, for position detection and/or movement tracking.

FIG. 13 is a schematic block diagram of a side view of another embodiment of a gaming system.

FIG. 14 is a schematic block diagram of an overhead view of another embodiment of a gaming system.

FIG. 15, FIG. 16, and FIG. 17 are diagrams of an embodiment of a coordinate system of a gaming system.

FIG. 18, FIG. 19, and FIG. 20 are diagrams of another embodiment of a coordinate system of a gaming system.

FIG. 21 is a diagram of a method for determining position and/or motion tracking.

FIG. 22 is a diagram of another method for determining position and/or motion tracking.

FIG. 23, FIG. 24, and FIG. 25 are diagrams of another embodiment of a coordinate system of a gaming system.

FIG. 26, FIG. 27, and FIG. 28 are diagrams of another embodiment of a coordinate system of a gaming system.

FIG. 29 is a diagram of another method for determining position and/or motion tracking.

FIG. 30 is a diagram of another method for determining position and/or motion tracking.

FIG. 31 is a diagram of another method for determining position and/or motion tracking.

FIG. 32 is a diagram of another method for determining position and/or motion tracking.

FIG. 33 is a diagram of another embodiment of a coordinate system of a gaming system.

FIG. 34 is a diagram of a method for determining motion.

FIG. 35 is a diagram of an example of reference points on a player and/or gaming object.

FIG. 36, FIG. 37, and FIG. 38 are diagrams of examples of motion patterns.

FIG. 39 is a diagram of an example of motion estimation.

FIG. 40 and FIG. 41 are diagrams of examples of reference points on a player to determine a player's physical measurements.

FIG. 42 is a diagram of an example of mapping a player to an image.

FIG. 43 is a diagram of another method for determining motion.

FIG. 44 is a schematic block diagram of an embodiment of a gaming object and/or game console.

FIG. 45, FIG. 46, and FIG. 47 are diagrams of various embodiments of methods for determining position and/or motion tracking.

FIG. 48 is a diagram of an embodiment of a method for determining a distance based on captured digital images.

DETAILED DESCRIPTION OF THE INVENTION

FIG. 1 is a diagram of an embodiment of an apparatus that performs position determination and/or movement tracking via image capture and processing. The apparatus includes a number of digital cameras that generate digital images. An object is depicted within at least some of the digital images. A processing module is coupled to receive the digital images. The processing module processes the digital images to identify characteristics of the object as depicted within at least some of the digital images. Based on the identified characteristics, the processing module determines position of the object with respect to locations of at least some of the digital cameras.

In one embodiment, the processing module identifies directional vectors based on the identified characteristics of the object. These directional vectors may be viewed as extending from known locations (e.g., locations of the digital cameras, points of reference within the digital cameras, etc.) to the object. In the context of using a digital camera, the digital camera includes an electronic image sensor. A digital image sensor, when mounted on a surface of an integrated circuit and implemented for performing image capture directly, may also be viewed as an alternative embodiment of a digital camera.

The specifications of such digital image sensors are oftentimes defined in terms of the number of physical pixels within the digital image sensor, which corresponds to the number of image pixels that a picture captured by the image sensor will have. For example, as the processes by which digital cameras are manufactured continue to improve, the number of mega-pixels that a digital image sensor includes continues to increase. Generally, the digital image sensors within digital cameras have more than a million physical pixels (e.g., a mega-pixel or more).

A reference point within a digital camera may serve as a point from which a directional vector is defined. As one example, when an image is captured by a digital camera, the camera center of projection of the digital camera is a point to which all points in the image can be traced back. The focal distance of the digital camera may also correspond to the camera center of projection of the digital camera. A directional vector may be defined as extending from such a reference point within the digital camera to a physical pixel that has captured a particular portion of an object of interest. In other words, an image pixel of interest within a digital image corresponds to a physical pixel of the digital image sensor of the digital camera, and a directional vector may be defined as extending from that reference point within the digital camera to that physical pixel.

Any of a variety of means may be employed to identify the characteristics of the object depicted within at least some of the digital images, including any of a variety of pattern recognition processes. Moreover, an object may include one or more sensing tags thereon to assist in the identification of the characteristics of the object depicted within at least some of the digital images.

Some examples of sensing tags include a particular type of material (e.g., metal, etc.), an RFID tag, a material having particular properties (e.g., a light reflective material, a light absorbent material, etc.), a specific RGB [red, green, blue] color or combination of colors, a particular pattern, etc. By discerning and distinguishing different sensing tags that may be placed on different parts of the object, the relative position of those parts of the object may be determined. This may be performed in addition to the overall position of the object that may be determined by identifying the entire object.

In addition, the object whose characteristics are identified may have a predetermined size. In some of the embodiments depicted herein, a player/user may employ a gaming object when playing a game, and the size of such a gaming object may be known beforehand. When an object having a predetermined size is identified in a digital image, then the actual/physical size of the object may be associated with the identified ‘image size’ as depicted within the digital image. The relationship between these two (e.g., image size and predetermined size) may be employed to determine a scaling factor for that digital image. With this information, a distance between two objects depicted within the digital image may be determined.
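
As an illustration of this scaling relationship, the following is a minimal sketch in Python; the object size, pixel measurements, and the assumption that both measured points lie at roughly the same depth as the reference object are hypothetical choices for the example, not requirements of the approach described above.

```python
# Sketch: derive a per-image scaling factor from an object of predetermined
# physical size, then measure the distance between two points depicted in
# the same digital image. All values below are hypothetical.
import math

def scaling_factor(actual_size_m, image_size_px):
    """Meters of physical extent represented by one image pixel."""
    return actual_size_m / image_size_px

def image_distance_m(point_a_px, point_b_px, scale_m_per_px):
    """Physical distance between two depicted points, assuming both lie at
    roughly the same depth as the reference object."""
    dx = point_b_px[0] - point_a_px[0]
    dy = point_b_px[1] - point_a_px[1]
    return math.hypot(dx, dy) * scale_m_per_px

# Example: a gaming object known to be 0.30 m tall spans 120 image pixels.
scale = scaling_factor(0.30, 120)                  # 0.0025 m per pixel
d = image_distance_m((100, 50), (500, 50), scale)  # 400 px -> 1.0 m
```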

Moreover, it is noted that once the position of the object is known, then that position may be mapped to a virtual 3D (three-dimensional) coordinate system. This may be employed within a variety of systems including a gaming system such as is described herein.

Each of the digital cameras has a corresponding field of view in which it can perform image capture. Again, the object is depicted within at least some of the fields of view of at least some of the digital cameras. When the object is not within any field of view of any digital camera, then at least some of the digital cameras can be adjusted (e.g., such as using an actuator coupled to or integrated with a digital camera) so that the object may be visible within at least one of the fields of view of at least one of the cameras.

It is also noted that the configuration of any of the digital cameras may be adjusted. For example, a digital camera may have auto-focus capability in which the focal distance of the digital camera is adjusted to provide a maximum clarity image of the object of interest. Moreover, the image capture rate of any digital camera may be adjusted based on a number of factors including a predetermined setting within the processing module, a user-selected setting within the processing module, a movement history of the object, a current movement of the object, and an expected future movement of the object.

It is noted that, while position determination is described herein with respect to an object, the movement of the object may also be determined by merely updating the position of the object as a function of time. For example, the processing module may determine a first position of the object during a first time, and the processing module may then determine a second position of the object during a second time. The movement of the object may be estimated by comparing the first determined position and the second determined position. The rate of the movement of the object may be determined by also considering the times associated with each of the first determined position and the second determined position.
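
A minimal sketch of this position-differencing approach follows; the positions, timestamps, and units are hypothetical.

```python
# Sketch: estimate movement and rate of movement from two timed position
# determinations. Positions are (x, y, z) in meters; times in seconds.
def estimate_motion(pos1, t1, pos2, t2):
    displacement = tuple(b - a for a, b in zip(pos1, pos2))
    velocity = tuple(d / (t2 - t1) for d in displacement)  # m/s per axis
    return displacement, velocity

# Hypothetical example: the object moved 0.5 m along x in 0.1 s -> ~5 m/s.
disp, vel = estimate_motion((1.0, 2.0, 0.0), 10.0, (1.5, 2.0, 0.0), 10.1)
```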

It is also noted that the digital cameras may be ‘smart’ digital cameras in some embodiments that include means by which the configuration of the digital camera may be determined and communicated back to the processing module. Certain information such as focal length of the digital camera, the image capture setting of the digital camera (e.g., for digital cameras that can capture images having different numbers of pixels), physical orientation, physical location, etc. may be determined by such a smart digital camera, communicated back to the processing module, and then the processing module can consider this higher level of information when employing the identified characteristics of the object to determine the position of the object.

Moreover, it is noted that while wire-coupling between the digital cameras and the processing module is illustrated in this embodiment, wireless communication may also be employed between the various components of such an apparatus without departing from the scope and spirit of the invention.

FIG. 2 is a diagram of an alternative embodiment of an apparatus that performs position determination and/or movement tracking via image capture and processing. This embodiment is somewhat analogous to the previous embodiment, with at least one difference being that the digital cameras are wirelessly coupled to the processing module. It is also noted that at least one digital camera may be integrated into the processing module.

The wireless means by which communication is supported may be varied, and it may be supported using any desired radio frequency (RF) communication standard including any that operates in accordance with one or more standards including, but not limited to, RFID, IEEE 802.11, Bluetooth, advanced mobile phone services (AMPS), digital AMPS, global system for mobile communications (GSM), code division multiple access (CDMA), local multi-point distribution systems (LMDS), multi-channel-multi-point distribution systems (MMDS), and/or variations thereof.

Moreover, when RF communication is employed within such an apparatus, at least one of the digital cameras includes a first radio frequency (RF) transceiver, and the processing module includes a second RF transceiver. Based on an RF signal transmitted between the first RF transceiver and the second RF transceiver, the processing module can then determine a distance between the processing module and the digital camera from which the RF signal was transmitted. By using a transmission time at which the RF signal is transmitted from a first device, a receive time at which the RF signal is received by a second device, and the known speed at which the RF signal travels, the distance between the first device and the second device may be determined.
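
The distance computation described here reduces to multiplying the propagation speed by the measured flight time. A minimal sketch follows, assuming the two devices share a common clock (a simplification; a real system must account for clock offset):

```python
# Sketch: one-way time-of-flight ranging. Assumes the transmit and receive
# timestamps are on a common clock; all values are hypothetical.
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def rf_distance_m(transmit_time_s, receive_time_s):
    return SPEED_OF_LIGHT_M_PER_S * (receive_time_s - transmit_time_s)

# A flight time of about 33.4 nanoseconds corresponds to roughly 10 m.
d = rf_distance_m(0.0, 33.4e-9)
```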

FIG. 3 is a diagram of an embodiment showing a means by which position of a point, object, etc. may be determined using multiple directional vectors extending from multiple known locations, respectively, to that point, object, etc. This diagram depicts 3D space in a right handed, Cartesian coordinate system (e.g., shown as having axes xyz). Clearly, the principles described with respect to this diagram are applicable to any other 3D coordinate system as well.

When at least two locations are known, and when the directional vectors extending from each of those locations are known, then, if those directional vectors intersect, the location of the intersection may be determined using triangulation. If additional locations are known, and if additional directional vectors extending from those additional locations are also known, then a greater certainty of an intersection among the various directional vectors may be had.
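
In practice, two measured directional vectors rarely intersect exactly, so a common numerical approach (one possible realization, not necessarily the one intended here) is to find the shortest segment joining the two rays and take its midpoint as the estimated position:

```python
# Sketch: estimate the "intersection" of two rays as the midpoint of the
# shortest segment joining them. p1, p2 are known locations; d1, d2 are
# directional vectors. Assumes the rays are not parallel.
import numpy as np

def triangulate(p1, d1, p2, d2):
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b          # approaches zero for parallel rays
    s = (b * e - c * d) / denom    # parameter along ray 1
    t = (a * e - b * d) / denom    # parameter along ray 2
    return (p1 + s * d1 + p2 + t * d2) / 2.0

# Rays from (0,0,0) toward (1,1,0) and from (2,0,0) toward (-1,1,0)
# intersect at (1,1,0).
pos = triangulate(np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 0.0]),
                  np.array([2.0, 0.0, 0.0]), np.array([-1.0, 1.0, 0.0]))
```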

It is noted that once the position associated with the intersection of these directional vectors is known, then this position (or location) may be mapped to a virtual 3D coordinate system. The upper right hand corner of the diagram depicts a virtual 3D space in a right handed, Cartesian coordinate system (e.g., shown as having axes x′y′z′).

FIG. 4 is a diagram of an embodiment showing the relationship between an object point and various image planes that have performed image capture of the object point. This diagram shows two separate image planes, corresponding to two separate digital cameras, that capture digital images of an object from different perspectives or fields of view. The image plane of a digital camera may be considered as corresponding to the digital image sensor component of the digital camera. For example, a digital image sensor may be a complementary metal-oxide-semiconductor (CMOS) device or a charge coupled device (CCD).

As is known, various parameters generally are employed to define a digital image sensor, including an image sensor type (e.g., ¼″, 1/3.6″, etc.), a width and height (typically provided in millimeters), a total number of physical pixels (e.g., X megapixels, where X is a number such as 3, 6, 8.1, etc.), a number of physical pixels along each of the width and height of the digital image sensor (e.g., y×z, where y and z are integer numbers), a diagonal size (again, typically provided in millimeters) that corresponds to the normal lens focal length, the focal length factor, etc. The general trend in digital image sensor development over the years is to pack more and more physical pixels into a digital image sensor while also trying to reduce the overall size of the digital image sensor.

In any case, each physical pixel of a digital image sensor captures information (e.g., color, intensity, etc.) of a portion of the field of view of the digital camera, and this information is employed to generate an image pixel of a digital image. Therefore, in the digital image context, there is a one-to-one relationship between each physical pixel of a digital image sensor and each image pixel of an image generated from information captured by the digital image sensor.

In this diagram, a first directional vector (DV1) extends from a reference point of a first digital camera (DC1) through the image plane of DC1 (e.g., which corresponds to the digital image sensor of DC1) to a point on the object of interest. This camera reference point may be a camera center of projection for DC1 based on its current configuration (e.g., focus, etc.). Alternatively, another camera reference point may be employed (e.g., focal point, predetermined point within the camera, etc.) without departing from the scope and spirit of the invention.

Analogously, for a second digital camera (DC2), another directional vector extends from a reference point of DC2 through the image plane of DC2 to the same point on the object of interest. If the locations of DC1 and DC2 are known, and if the directional vectors extending from the respective points of reference of each of DC1 and DC2 are known, then the principles of triangulation may be employed to determine the location of the object point on the object of interest.

As can also be seen in this diagram, there is a relationship between the physical dimensions of the object and the corresponding depictions of that object in the digital images captured by DC1 and DC2. For one example, when considering the actual height of the object, an image 1 height is the height of the object as depicted in a digital image captured by DC1, and an image 2 height is the height of the object as depicted in a digital image captured by DC2. These two image heights need not be the same (e.g., the object may be closer to one of the digital cameras than the other, the focus of one of the digital cameras may be different than the other, etc.). It is noted that if the actual height of the object is known, then a first ratio of the actual height to the image 1 height may be computed, and a second ratio of the actual height to the image 2 height may be computed. By knowing the actual size of something depicted within a digital image, and by knowing the configuration of the digital camera (e.g., focus, etc.), a distance between the digital camera and the object may be determined.
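
A minimal sketch of this known-size distance computation, using a simple pinhole model with hypothetical numbers:

```python
# Sketch: pinhole relation between actual size, depicted size, and distance.
# image_height / focal_length = actual_height / distance, so:
def distance_to_object_m(actual_height_m, image_height_m, focal_length_m):
    return actual_height_m * focal_length_m / image_height_m

# Hypothetical example: a 1.8 m tall player depicted 3.6 mm tall on the
# sensor of a camera with a 6 mm focal length is about 3 m away.
d = distance_to_object_m(1.8, 0.0036, 0.006)  # 3.0
```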

FIG. 5 is a diagram of an embodiment showing the relationship between multiple object points and various image planes that have performed image capture of the multiple object points. This diagram has some similarities to the previous embodiment, in that a directional vector extends from a reference point of a digital camera through the image plane of the digital camera to a point on the object of interest.

However, the object of this embodiment includes a number of sensing tags thereon. These sensing tags can be portions of the object having a particular color, a light reflective material, a light absorbent material, an infrared light source, etc. Generally, the sensing tags have some associated characteristic that is identifiable on the object.

The object in this diagram also has different types of sensing tags (e.g., of type 1, type 2, etc.). This use of different types of sensing tags on an object may be employed to assist in determining the position and orientation of the object (sometimes referred to as 'pose' in the image processing context), since different sides, areas, etc. of the object may be better distinguished from one another. For example, when considering an object such as a cube, it may be determined whether the cube is right side up (or upside down) with reference to a desired convention of which side of the cube is deemed to be 'up'.

In this embodiment, first directional vectors associated with type 1 sensing tags extend from a reference point of a digital camera through the image plane of the digital camera to two separate points on the object that have type 1 sensing tags. Second directional vectors associated with type 2 sensing tags extend from the reference point of the digital camera through the image plane of the digital camera to two separate points on the object that have type 2 sensing tags.

FIG. 6 is a diagram of an embodiment showing an image sensor and the association of physical pixels and the image pixels generated therefrom. Within a digital camera, a digital image sensor is the element that captures information (e.g., color, intensity, contrast, etc.) of a field of view of the digital camera. Each individual physical pixel of the digital image sensor captures a small portion of the field of view of the digital camera. For example, if the digital image sensor includes one million physical pixels, then each individual physical pixel of the digital image sensor captures information of one-millionth of the field of view of the digital camera. More generally, if the digital image sensor includes X megapixels, then each individual physical pixel of the digital image sensor captures information of (1/(X×10⁶))th of the field of view of the digital camera.

Together, each of these discrete pieces of information, as captured by the physical pixels, is used to form a digital image corresponding to what is seen in the field of view of the digital camera.

A directional vector extends from a reference point of a digital camera to one of the physical pixels of the digital image sensor. For example, when a particular image pixel of a digital image is identified, then the corresponding physical pixel that captured information used to generate that image pixel can be determined. Such a directional vector can then be determined. This directional vector may be the directional vector generated from this digital camera to a particular point on the object of interest.
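
The mapping from an image pixel back to such a directional vector can be sketched as follows under a simple pinhole model; the sensor dimensions and focal length are hypothetical, and a real implementation would also correct for lens distortion:

```python
# Sketch: map an image pixel back to a directional vector through the
# camera's center of projection under a pinhole model. Sensor geometry and
# focal length are hypothetical; lens distortion is ignored.
import numpy as np

def pixel_to_direction(px, py, width_px, height_px,
                       sensor_w_m, sensor_h_m, focal_length_m):
    # Physical offset of the pixel from the center of the image sensor.
    x = (px - width_px / 2.0) * (sensor_w_m / width_px)
    y = (py - height_px / 2.0) * (sensor_h_m / height_px)
    # Direction from the center of projection through that physical pixel.
    v = np.array([x, y, focal_length_m])
    return v / np.linalg.norm(v)

# The center pixel of the sensor points straight down the optical axis.
d = pixel_to_direction(500, 500, 1000, 1000, 0.0048, 0.0048, 0.006)
```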

FIG. 7A and FIG. 7B are diagrams of an embodiment of an apparatus, shown from two separate perspectives, that employs directional vectors associated with captured images, at least some of which depict an object, to determine position of the object.

Referring to the perspective of FIG. 7A, which is viewed in the xy plane of a 3D space having an xyz coordinate system, the principles of triangulation may be employed when determining the position of an object that is depicted in digital images captured by multiple digital cameras. For example, a projection of a first directional vector (DV1 proj.) from a first digital camera (DC1) extends from the first digital camera to the object. A projection of a second directional vector (DV2 proj.) from a second digital camera (DC2) extends from the second digital camera to the object. Additional directional vectors, associated with additional digital cameras, may also be employed. The directional vectors then undergo processing in a processing module to determine the intersection of the various directional vectors. The intersection of these directional vectors is the location of the object.

Referring to the perspective of FIG. 7B, this diagram is viewed in the xz plane of a 3D space having an xyz coordinate system.

FIG. 8A and FIG. 8B are diagrams of an embodiment of an apparatus, shown from two separate perspectives, respectively, that employs directional vectors associated with images that depict a number of objects to determine position of a device that has captured the images.

Referring to the embodiment of FIG. 8A, which is viewed in the xy plane of a 3D space having an xyz coordinate system, the principles of triangulation may be employed when determining the position of a device that includes multiple digital cameras (e.g., a first digital camera (DC1), a second digital camera (DC2), etc.) that capture digital images that depict various known objects (e.g., a first object (object 1), a second object (object 2), etc.).

The principles of triangulation are employed in this embodiment, but in reverse of the previous embodiment. The orientation of each digital camera of the device, when capturing a digital image of a known object, is determined.

For example, a projection of a first directional vector (DV1 proj.) extends from a first object (object 1) to the first digital camera (DC1). A projection of a second directional vector (DV2 proj.) extends from a second object (object 2) to a second digital camera (DC2). Additional directional vectors, associated with additional objects, may also be employed. The directional vectors' orientations undergo processing in a processing module to determine their intersection. The intersection of these directional vectors is the location of the device that includes the multiple digital cameras.

Referring to the embodiment of FIG. 8B, this diagram is viewed in the xz plane of a 3D space having an xyz coordinate system.

FIG. 9 is a schematic block diagram of an overhead view of an embodiment of a gaming system that includes a game console and a gaming object. The gaming system has an associated physical area in which the game console and the gaming object are located. The physical area may be a room, a portion of a room, and/or any other space where the gaming object and game console are proximally co-located (e.g., an airport terminal, on a bus, on an airplane, etc.).

The gaming object may be a wireless game controller and/or any object used or worn by the player to facilitate play of a video game. For example, the gaming object may be a simulated sword, a simulated gun, a helmet, a vest, a hat, shoes, socks, pants, shorts, gloves, etc.

In this system, the game console determines the positioning of the gaming object within the physical area using one or more position determination techniques as subsequently discussed. Once the gaming object's position is determined, the game console tracks the motion of the gaming object using one or more motion tracking techniques as subsequently discussed to facilitate video game play. In this embodiment, the game console may determine the positioning of the gaming object within a positioning tolerance (e.g., within a meter) at a positioning update rate (e.g., once every second or once every few seconds) and tracks the motion within a motion tracking tolerance (e.g., within a few millimeters) at a motion tracking update rate (e.g., once every 10-100 milliseconds).

FIG. 10 is a schematic block diagram of a side view of an embodiment of a gaming system of FIG. 9 to illustrate that the positioning and motion tracking are done in three-dimensional space. As such, the gaming system provides accurate motion tracking of the gaming object, which may be used to map the player's movements to a graphics image for true interactive video game play.

FIG. 11 is a diagram illustrating an embodiment of a gaming system including multiple digital cameras for capturing images to undergo processing in a game module, that is wire-coupled to the multiple digital cameras, for position detection and/or movement tracking. A physical gaming environment (at least a portion of which may be represented within a virtual gaming environment) includes a number of digital cameras arranged at various locations therein to effectuate the image capture of a player and/or gaming object associated with the player. There may be some instances where the player has no gaming object (e.g., when simulating boxing), and the bodily position and/or movement of the player are those elements being monitored and/or tracked.

Each digital camera has a corresponding field of view in which it can perform image capture. By appropriately placing the digital cameras throughout various locations in an area, an entirety of the physical gaming environment can be visually captured by the digital images generated by the digital cameras. By crossing more than one field of view of more than one digital camera, multiple views of a single object within the physical gaming environment can be obtained. The game module (or another processing module) may then process the digital images captured by the digital cameras to make estimates of a position of an object within the physical gaming environment. Also, by comparing various digital images taken at different times (e.g., digital image 1 taken at time 1, digital image 2 taken at time 2 = time 1 + Δt), movement of the object within the physical gaming environment may be estimated.
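
As a rough illustration of comparing digital images taken Δt apart, the sketch below flags the pixels whose intensity changed by more than a threshold between two frames; this is generic frame differencing, offered as one possible building block rather than the game module's specific processing:

```python
# Sketch: flag regions of movement by differencing two grayscale frames
# captured a time delta-t apart. The threshold of 30 is hypothetical.
import numpy as np

def changed_pixels(frame_t1, frame_t2, threshold=30):
    """frame_t1, frame_t2: 2-D uint8 arrays of equal shape."""
    diff = np.abs(frame_t2.astype(np.int16) - frame_t1.astype(np.int16))
    return diff > threshold  # boolean mask of pixels that changed

# The centroid of the True pixels gives a coarse estimate of where the
# moving object is between the two capture times.
```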

The game console is operable to perform processing of digital images captured by the digital cameras to identify characteristics of an object depicted within at least some of the digital images. Based on the identified object characteristics, the game console is operable to determine position of the object with respect to the digital cameras.

Moreover, it is noted that, in this embodiment as well as other embodiments, certain initialization processes can be performed in which the player and/or gaming object remains motionless. The digital cameras then may perform image capture of the motionless player and/or gaming object for calibration purposes. In addition, if a size (e.g., height, width, etc.) of the player and/or gaming object is known and provided to the game console (e.g., by being entered via a user interface by the player, or by being estimated by the game console), then the size of other objects within the physical gaming environment may be estimated based on their proportional size relative to the known object.

Also, various means of performing digital image processing may be employed, including pattern recognition, in which a predetermined pattern (e.g., as corresponding to a particular shape) is compared to patterns detected within one of the digital images captured by one of the digital cameras. It is noted that a particular shape may have more than one pattern corresponding thereto (e.g., a pattern 1 of a person-related shape corresponding to a taller/slender person vs. a pattern 2 of a person-related shape corresponding to a shorter/bulky person, etc.). Also, it is noted that even if a pattern detected within a digital image is not an expected pattern, or cannot be associated with a predetermined pattern being searched for within the digital image, the detected pattern can be added to a memory that stores a number of patterns/shapes that may be detected within digital images.
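
One common way to compare a predetermined pattern against a digital image is normalized template matching; the sketch below uses the OpenCV library purely for illustration, as the description above does not name any particular library or algorithm, and the match threshold is a hypothetical tuning value:

```python
# Sketch: locate a predetermined pattern (template) within a captured frame
# using normalized cross-correlation. Assumes OpenCV is available.
import cv2

def find_pattern(frame_gray, template_gray, min_score=0.8):
    scores = cv2.matchTemplate(frame_gray, template_gray,
                               cv2.TM_CCOEFF_NORMED)
    _, max_score, _, max_loc = cv2.minMaxLoc(scores)
    if max_score >= min_score:
        return max_loc, max_score  # top-left corner of the best match
    return None, max_score         # pattern not confidently detected
```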

Another means of performing digital image processing includes searching for a particular color (e.g., as associated with a player, gaming object, etc.) within a digital image captured by a digital camera. For example, a player may wear a particular colored clothing article, and when processing the digital image captured by a digital camera, the color associated with that known-colored clothing article is sought.

Other means of performing digital image processing include searching for reflections off of reflective material that covers the player and/or gaming object. This digital image processing may involve searching for pixels or groups of pixels within a digital image having intensity above a certain threshold (which may be predetermined or adaptively set for each digital image). When the intensity is above that threshold, that pixel (or group of pixels) can be associated with the reflective material covering the player and/or gaming object. Additional variations of the physical gaming environment may be employed, such as providing special lighting to enhance the reflection of light off of reflective material covering at least a portion of the player and/or gaming object. Moreover, an appropriate backdrop could also be employed to provide a higher degree of contrast between the player and/or gaming object and the rest of the physical gaming environment.
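
A minimal sketch of this intensity-threshold search, with the threshold set adaptively per image as suggested above (the mean-plus-k-standard-deviations rule and its multiplier are hypothetical choices):

```python
# Sketch: find bright reflective-tag pixels with a per-image adaptive
# threshold (mean plus k standard deviations; k = 3.0 is hypothetical).
import numpy as np

def reflective_tag_mask(frame_gray, k=3.0):
    threshold = frame_gray.mean() + k * frame_gray.std()
    return frame_gray > threshold  # True where reflective material likely is
```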

Certain operational parameters of the digital cameras may also be adjusted by a user/player or in real time by control signals provided by the game console. For example, the image capture rate employed by the digital cameras may be adjusted based on any number of considerations, including a predetermined setting within the game console, a player-selected setting within the game console (e.g., as selected by the player via a user interface), a type of game being played, a movement history of the player and/or gaming object, a current or expected movement of the player and/or gaming object, etc. Also, any one of the digital cameras may include an integrated actuator to perform real-time re-positioning of the digital camera to effectuate better image capture of the player and/or gaming object within the physical gaming environment. Alternatively, the camera may be mounted on an actuator that can perform such re-positioning of the digital camera. Clearly, a player/user can perform re-positioning of any digital camera as well.

As can be seen in this embodiment, the digital cameras are all wire-coupled to the game console. Any desired wire-based communication protocol (e.g., Ethernet) may be employed to effectuate communication between the digital cameras and the game console to communicate digital images from the digital cameras to the game console and command signals (if necessary) from the game console to the digital cameras.

FIG. 12 is a diagram illustrating an alternative embodiment of a gaming system including multiple digital cameras for capturing images to undergo processing in a game module, that is wirelessly coupled to at least some of the multiple digital cameras, for position detection and/or movement tracking.

This embodiment is somewhat analogous to the previous embodiment, with at least one difference being that at least some of the digital cameras and the game console each include wireless communication capability to effectuate wireless communication therebetween. In this embodiment, at least one of the digital cameras may remain wire-coupled to the game console. For example, each of the wirelessly coupled digital cameras and the game console either includes an integrated wireless transceiver or is coupled to a wireless transceiver to effectuate communication between those digital cameras and the game console. In addition, a digital camera may be integrated into the game console as well without departing from the scope and spirit of the invention.

This wireless communication can be supported using any number of desired wireless protocols including Code Division Multiple Access (CDMA) signaling, Time Division Multiple Access (TDMA) signaling, Frequency Division Multiple Access (FDMA) signaling, or some other desired wireless standard, protocol, or proprietary means of communication.

In addition, the wireless communication can be supported using any desired radio frequency (RF) communication standard including any that operates in accordance with one or more standards including, but not limited to, RFID, IEEE 802.11, Bluetooth, advanced mobile phone services (AMPS), digital AMPS, global system for mobile communications (GSM), code division multiple access (CDMA), local multi-point distribution systems (LMDS), multi-channel-multi-point distribution systems (MMDS), and/or variations thereof.

FIG. 13 is a schematic block diagram of a side view of another embodiment of a gaming system that includes multiple gaming objects, the player, and a game console. In this embodiment, the gaming objects include one or more sensing tags (e.g., metal, RFID tag, light reflective material, light absorbent material, a specific RGB [red, green, blue] color, etc.). For example, the gaming objects may include a game controller, a helmet, a shirt, pants, gloves, and socks, each of which includes one or more sensing tags. In this manner, the sensing tags facilitate the determining of position and/or facilitate motion tracking as will be subsequently discussed.

FIG. 14 is a schematic block diagram of an overhead view of another embodiment of a gaming system that includes a game console, a plurality of players, and a plurality of gaming objects. In this instance, the positioning and motion tracking of each of the gaming objects (and hence of each player) are determined by the game console and/or one or more peripheral sensors.

FIG. 15, FIG. 16, and FIG. 17 are diagrams of an embodiment of a coordinate system of a localized physical area that may be used for a gaming system. In these diagrams, an xyz origin is selected to be somewhere in the localized physical area, and each point being tracked and/or used for positioning on the player and/or on the gaming object is determined based on its Cartesian coordinates (e.g., x1, y1, z1). As the player and/or gaming object moves, the new position of each tracking and/or positioning point is determined in Cartesian coordinates with respect to the origin.

FIG. 18, FIG. 19, and FIG. 20 are diagrams of another embodiment of a coordinate system of a localized physical area that may be used for a gaming system. In these diagrams, an origin is selected to be somewhere in the localized physical area, and each point being tracked and/or used for positioning on the player and/or on the gaming object is determined based on its vector, or spherical, coordinates (ρ, φ, θ), which are defined as follows: ρ ≥ 0 is the distance from the origin to a given point P; 0° ≤ φ ≤ 180° is the angle between the positive z-axis and the line formed between the origin and P; and 0° ≤ θ < 360° is the angle between the positive x-axis and the projection onto the xy-plane of the line from the origin to P. φ is referred to as the zenith, colatitude, or polar angle, while θ is referred to as the azimuth. φ and θ lose significance when ρ = 0, and θ loses significance when sin(φ) = 0 (i.e., at φ = 0° and φ = 180°). To plot a point from its spherical coordinates, go ρ units from the origin along the positive z-axis, rotate φ about the y-axis in the direction of the positive x-axis, and rotate θ about the z-axis in the direction of the positive y-axis. As the player and/or gaming object moves, the new position of each tracking and/or positioning point is determined in vector, or spherical, coordinates with respect to the origin.
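
For reference, the conversion from these spherical coordinates to the Cartesian coordinates of FIGS. 15-17, using the same zenith/azimuth convention, is sketched below:

```python
# Sketch: convert the spherical coordinates defined above to Cartesian
# coordinates, using the same zenith (phi) / azimuth (theta) convention.
import math

def spherical_to_cartesian(rho, phi_deg, theta_deg):
    phi, theta = math.radians(phi_deg), math.radians(theta_deg)
    x = rho * math.sin(phi) * math.cos(theta)
    y = rho * math.sin(phi) * math.sin(theta)
    z = rho * math.cos(phi)
    return x, y, z

# A point 2 m from the origin with phi = 90 degrees (in the xy-plane) and
# theta = 0 lies on the positive x-axis: approximately (2.0, 0.0, 0.0).
p = spherical_to_cartesian(2.0, 90.0, 0.0)
```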

While FIGS. 15-20 illustrate two types of coordinate systems, any three-dimensional coordinate system may be used for tracking motion and/or establishing position within a gaming system.

FIG. 21 is a diagram of a method for determining position and/or motion tracking that begins by determining the environment parameters (e.g., determining the properties of the localized physical area such as height, width, depth, objects in the physical area, etc.). The method then continues by mapping the environment parameters to a coordinate system (e.g., Cartesian coordinate system of FIGS. 15-17). The method continues in one or more branches. Along one branch, the initial coordinates of the player are determined using one or more of a plurality of position determining techniques as described herein. This branch continues by updating the player's position to track the player's motion using one or more of a plurality of motion tracking techniques as described herein.

The other branch includes determining the coordinates of the gaming object's initial position using one or more of a plurality of position determining techniques as described herein. This branch continues by updating the gaming object's position to track the gaming object's motion using one or more of a plurality of motion tracking techniques as described herein. Note that the tracking of the motion of the player and/or gaming object may be done at a rate based on the video game being played and the expected speed of motion. Further note that a tracking update rate of once every 10 milliseconds provides approximately 0.1 mm accuracy in motion tracking.

FIG. 22 is a diagram of another method for determining position and/or motion tracking that begins by determining a reference point within a coordinate system (e.g., the vector coordinate system of FIGS. 18-20). The reference point may be the origin or any other point within the localized physical area. The method continues in one or more branches. Along one branch, a vector with respect to the reference point is determined to indicate the player's initial position, which may be done by using one or more of a plurality of position determining techniques as described herein. This branch continues by updating the player's position to track the player's motion using one or more of a plurality of motion tracking techniques as described herein.

The other branch includes determining a vector with respect to the reference point for the gaming object to establish its initial position, which may be done by using one or more of a plurality of position determining techniques as described herein. This branch continues by updating the gaming object's position to track the gaming object's motion using one or more of a plurality of motion tracking techniques as described herein. Note that the tracking of the motion of the player and/or gaming object may be done at a rate based on the video game being played and the expected speed of motion. Further note that a tracking update rate of once every 10 milliseconds provides approximately 0.1 mm accuracy in motion tracking.

FIG. 23, FIG. 24, and FIG. 25 are diagrams of another embodiment of a coordinate system of a localized physical area that may be used for a gaming system. In these diagrams, an xyz origin is selected to be somewhere in the localized physical area, and the initial position of a point being tracked on the player and/or gaming object is determined based on its Cartesian coordinates (e.g., x1, y1, z1). As the player and/or gaming object moves, the new position of each tracking and/or positioning point is determined in Cartesian coordinates with respect to the preceding location (e.g., Δx, Δy, Δz).

As another example, the positioning and motion tracking of the player may be done with reference to the position of the gaming object, such that the gaming object's position is determined with reference to the origin and/or its previous position, and the position of the player is determined with reference to the gaming object's position. The reverse could be used as well. Further, both the position and the motion of the gaming object and the player may be referenced to a personal item of the player, such as a cell phone.

FIG. 26, FIG. 27, and FIG. 28 are diagrams of another embodiment of a coordinate system of a localized physical area that may be used for a gaming system. In these diagrams, an origin is selected to be somewhere in the localized physical area, and the initial position of a point being tracked on the player and/or gaming object is determined based on its vector, or spherical, coordinates (e.g., ρ1, φ1, θ1). As the player and/or gaming object moves, the new position of each tracking and/or positioning point is determined as a vector, or in spherical coordinates, with respect to the preceding location (e.g., ΔV, or Δρ, Δφ, Δθ).

As another example, the positioning and motion tracking of the player may be done with reference to the position of the gaming object, such that the gaming object's position is determined with reference to the origin and/or its previous position, and the position of the player is determined with reference to the gaming object's position. The reverse could be used as well. Further, both the position and the motion of the gaming object and the player may be referenced to a personal item of the player, such as a cell phone.

FIG. 29 is a diagram of another method for determining position and/or motion tracking that begins by determining environment parameters of the physical area in which the gaming object lies and/or in which the game system lies. The environmental parameters include, but are not limited to, the height, width, and depth of the localized physical area, objects in the physical area, differing materials in the physical area, multiple path effects, interferers, etc.

The method then proceeds by mapping the environment parameters to a coordinate system (e.g., one of the systems shown in FIGS. 15-17). As an example, if the physical area is a room, a point in the room is selected as the origin and the coordinate system is applied to at least some of the room. In addition, objects in the room (e.g., a couch, a chair, etc.) are mapped to the coordinate system based on their physical location in the room.

The method then proceeds by determining the coordinates of the player's, or players', position in the physical area. The method then continues by determining the coordinates of a gaming object's initial position. Note that the positioning of the gaming object may be used to determine the position of the player(s) if the gaming object is something worn by the player or is in close proximity to the player. Alternatively, the initial position of the player may be used to determine the initial position of the gaming object. Note that one or more of the plurality of positioning techniques described herein may be used to determine the position of the player and/or of the gaming object.

The method then proceeds by updating the coordinates of the player's, or players', position in the physical area to track the player's motion. The method also continues by updating the coordinates of a gaming object's position to track its motion. Note that the motion of the gaming object may be used to determine the motion of the player(s) if the gaming object is something worn by the player or is in close proximity to the player. Alternatively, the motion of the player may be used to determine the motion of the gaming object. Note that one or more of the plurality of motion tracking techniques described herein may be used to track the motion of the player and/or of the gaming object.

FIG. 30 is a diagram of another method for determining position and/or motion tracking that begins by determining a reference point within the physical area in which the gaming object lies and/or in which the game system lies. The method then proceeds by determining a vector for a player's initial position with respect to a reference point of a coordinate system (e.g., one of the systems shown in FIGS. 18-20). As an example, if the physical area is a room, a point in the room is selected as the origin and the coordinate system is applied to at least some of the room.

The method then continues by determining a vector of a gaming object's initial position. Note that the positioning of the gaming object may be used to determine the position of the player(s) if the gaming object is something worn by the player or is in close proximity to the player. Alternatively, the initial position of the player may be used to determine the initial position of the gaming object. Note that one or more of the plurality of positioning techniques described herein may be used to determine the position of the player and/or of the gaming object.

The method then proceeds by updating the vector of the player's, or players', position in the physical area to track the player's motion. The method also continues by updating the vector of the gaming object's position to track its motion. Note that the motion of the gaming object may be used to determine the motion of the player(s) if the gaming object is something worn by the player or is in close proximity to the player. Alternatively, the motion of the player may be used to determine the motion of the gaming object. Note that one or more of the plurality of motion tracking techniques described herein may be used to track the motion of the player and/or of the gaming object.

FIG. 31 is a diagram of another method for determining position and/or motion tracking that begins by determining environment parameters of the physical area in which the gaming object lies and/or in which the game system lies. The environment parameters include, but are not limited to, height, width, and depth of the localized physical area, objects in the physical area, differing materials in the physical area, multiple path effects, interferers, etc.

The method then proceeds by mapping the environment parameters to a coordinate system (e.g., one of the systems shown in FIGS. 23-25). As an example, if the physical area is a room, a point in the room is selected as the origin and the coordinate system is applied to at least some of the room. In addition, objects in the room (e.g., a couch, a chair, etc.) are mapped to the coordinate system based on their physical location in the room.

The method then proceeds by determining the coordinates of the gaming object's initial position in the physical area. The method then continues by determining the coordinates of the player's initial position with respect to the gaming object's initial position. Note that one or more of the plurality of positioning techniques described herein may be used to determine the position of the player and/or of the gaming object.

The method then proceeds by updating the coordinates of the gaming object's position in the physical area to track its motion. The method also continues by updating the coordinates of the player's position to track the player's motion with respect to the gaming object. Note that one or more of the plurality of motion tracking techniques described herein may be used to track the motion of the player and/or of the gaming object.

FIG. 32 is a diagram of another method for determining position and/or motion tracking that begins by determining a reference point within the physical area in which the gaming object lies and/or in which the game system lies. The method then proceeds by determining a vector for a gaming object's initial position with respect to a reference point of a coordinate system (e.g., one of the systems shown in FIGS. 26-28). As an example, if the physical area is a room, a point in the room is selected as the origin and the coordinate system is applied to at least some of the room.

The method then continues by determining a vector of the player's initial position with respect to the gaming object's initial position. Note that one or more of the plurality of positioning techniques described herein may be used to determine the position of the player and/or of the gaming object.

The method then proceeds by updating the vector of the gaming object's position in the physical area to track its motion. The method also continues by updating the vector of the player's position with respect to the gaming object to track the player's motion. Note that one or more of the plurality of motion tracking techniques described herein may be used to track the motion of the player and/or of the gaming object.

FIG. 33 is a diagram of another embodiment of a coordinate system of a gaming system that is an extension of the coordinate systems discussed above. In this embodiment, the coordinate system includes a positioning coordinate grid and a motion tracking grid, where the motion tracking grid is of a finer resolution than the positioning coordinate grid. In general, the player's or gaming object's position within the physical area can have a first tolerance (e.g., within a meter) and the motion tracking of the player and/or the gaming object has a second tolerance (e.g., within a few millimeters). As such, the position of the player and/or gaming object can be updated infrequently in comparison to the updating of the motion (e.g., the position can be updated once every second or so while the motion may be updated once every 10 milliseconds).
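
A rough sketch of such a two-rate scheme, using the example rates above (position roughly once per second, motion once every 10 milliseconds); the update functions are hypothetical placeholders rather than any particular positioning or motion tracking technique:

```python
# Sketch: coarse position updates interleaved with fine motion updates.
# The rates follow the example above; the update functions are placeholders.

POSITION_PERIOD_MS = 1000  # coarse positioning grid (e.g., ~1 m tolerance)
MOTION_PERIOD_MS = 10      # fine motion tracking grid (e.g., ~mm tolerance)

def update_position():
    pass  # placeholder for one of the positioning techniques described herein

def update_motion():
    pass  # placeholder for one of the motion tracking techniques described herein

# Simulate one second of game time in 10 ms ticks.
for t_ms in range(0, 1000, MOTION_PERIOD_MS):
    update_motion()                       # runs every tick (100 times per second)
    if t_ms % POSITION_PERIOD_MS == 0:
        update_position()                 # runs infrequently (about once per second)
```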

FIG. 34 is a diagram of a method for determining motion of a gaming object and/or a player that begins by determining an initial position of the player and/or gaming object using one or more of the positioning techniques described herein. The method continues by determining motion reference points for the player and/or for the gaming object as shown in FIG. 35. The reference points may be sensors on the player and/or on the gaming object, particular body parts (e.g., nose, elbow, knee, etc.), particular points on the gaming object, and/or a combination thereof. The number of reference points and the location thereof may be dependent on the video game, on the player's physical characteristics, on the player's skill level, on the desired motion tracking resolution, and/or on the motion tracking technique being used.

The method continues by determining initial motion coordinates for each reference point using one or more of the position determining techniques and/or motion tracking techniques described herein. The method continues by establishing one or more data rates for the reference points based on the location of the reference point, motion patterns (e.g., in a video bowling game, the player will have particular motions for bowling), previous motion (e.g., halfway through bowling a ball, the next motion is largely predictable), and/or human bio-mechanics (e.g., arms and legs bend in a certain manner). For example, the reference point of a hand may have a faster data rate than a reference point on the head since the hand will most likely be moving faster and more often than the head.
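
For illustration, such per-reference-point data rates might be tabulated as in the following sketch; the specific rates are assumptions for the example, not values from this disclosure:

```python
# Sketch: faster-moving reference points are assigned higher data rates.
# The rates below are hypothetical.
data_rates_hz = {
    "hand": 100,   # anticipated to move fastest and most often
    "elbow": 50,
    "head": 20,    # relatively steady during most play
}

def sample_interval_ms(reference_point):
    """Interval between samples for a given reference point."""
    return 1000.0 / data_rates_hz[reference_point]

for point in data_rates_hz:
    print(f"{point}: sampled every {sample_interval_ms(point):.0f} ms")
```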

The method continues by obtaining motion tracking data (e.g., distances, vectors, distance changes, vector changes, etc.) for the reference points at intervals of the one or more data rates. The method continues by determining motion of the reference points based on the motion tracking data at intervals of the one or more data rates.

FIG. 36, FIG. 37, FIG. 38, and FIG. 39 are diagrams of examples of motion patterns in accordance with human bio-mechanics. As shown in FIG. 36, a head can move up/down, it can tilt, it can rotate, and/or a combination thereof. For a given video game, head motion can be anticipated based on current play of the game. For example, during an approach shot, the head will be relatively steady with respect to tilting and rotating, and may move up or down along with the body.

FIG. 37 shows the motion patterns of an arm (or leg) in accordance with human bio-mechanics. As shown, the arm (or leg) may contract or extend, go up or down, move side to side, rotate, or a combination thereof. For a given video game, an arm (or leg) motion can be anticipated based on the current play of the game. Note that the arm (or leg) may be broken down into smaller body parts (e.g., upper arm, elbow, forearm, wrist, hand, fingers). Further note that the gaming object's motion will be similar to that of the body part it is associated with.

FIG. 38 illustrates the likely motions of a torso, which can move up/down, side to side, front to back, and/or a combination thereof. For a given video game, torso motion can be anticipated based on current play of the game. As such, based on the human bio-mechanical limitations and ranges of motion along with the video game being played, the motion of the player and/or the associated gaming object may be anticipated, which facilitates better motion tracking.

FIG. 39 is a diagram of an example of motion estimation for the head, right arm, left arm, torso, right leg, and left leg of a video game player. In this game, it is anticipated that the arms will move the most often and over the most distance, followed by the legs, torso, and head. In this example the interval rate may be 10 milliseconds, which provides a 1 mm resolution for an object moving at 200 miles per hour. In this example, the body parts are not anticipated to move at or near 200 mph.

At interval 1, at least some of the reference points on the corresponding body parts are sampled. Note that each body part may include one or more reference points. Since the arms are anticipated to move the most and/or over the greatest distances, the reference point(s) associated with the arms are sampled once every third interval (e.g., intervals 1, 4, 7). For intervals 2 and 3, the motion of the reference points is estimated based on the samples of intervals 1 and 4 (and possibly more samples at other intervals), the motion pattern of the arm, human bio-mechanics, and/or a combination thereof. The estimation may be a linear estimation, a most-likely estimation, and/or any other mathematical technique for estimating data points between two or more samples. A similar estimation is made for intervals 5 and 6.

The legs have a data rate of sampling once every four intervals (e.g., intervals 1, 5, 9, etc.). The motion data for the intervening intervals is estimated in a similar manner as the motion data of the arms. The torso has a data rate of sampling once every five intervals (e.g., intervals 1, 6, 11, etc.). The head has a data rate of sampling once every six intervals (e.g., intervals 1, 7, 13, etc.). Note that the initial sampling does not need to be done during the same interval for all of the reference points.
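
A minimal sketch of estimating the intervening intervals by linear interpolation between samples, following the arm example above (samples at intervals 1, 4, and 7); the sample coordinates are made up for the illustration:

```python
import numpy as np

# Sketch: linear estimation of a reference point's position at unsampled
# intervals, given samples at intervals 1, 4, and 7 (hypothetical coordinates).
sampled = {
    1: np.array([0.0, 1.0, 0.0]),
    4: np.array([0.3, 1.2, 0.1]),
    7: np.array([0.6, 1.1, 0.3]),
}

def estimate(interval, sampled):
    """Linearly interpolate between the nearest surrounding samples."""
    ticks = sorted(sampled)
    lo = max(t for t in ticks if t <= interval)
    hi = min(t for t in ticks if t >= interval)
    if lo == hi:
        return sampled[lo]                 # this interval was actually sampled
    frac = (interval - lo) / (hi - lo)
    return sampled[lo] + frac * (sampled[hi] - sampled[lo])

for i in range(1, 8):
    print(i, estimate(i, sampled))         # intervals 2, 3, 5, 6 are estimated
```

A most-likely estimation or a bio-mechanical motion model could replace the linear step without changing the sampling structure.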

FIG. 40 and FIG. 41 are diagrams of examples of reference points on a player used to determine the player's physical measurements. In this example, once the positioning of the reference points is determined, their positioning may be used to determine the physical attributes of the player (e.g., height, width, arm length, leg length, shoe size, etc.).
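
By way of example, physical measurements might be derived from reference-point coordinates as in the following sketch; the point positions are hypothetical:

```python
import numpy as np

# Sketch: derive a player's physical attributes from reference-point
# coordinates ((x, y, z) in meters; all values hypothetical).
points = {
    "head_top":      np.array([0.0, 0.0, 1.80]),
    "feet":          np.array([0.0, 0.0, 0.00]),
    "left_shoulder": np.array([-0.20, 0.0, 1.50]),
    "left_hand":     np.array([-0.90, 0.0, 1.40]),
}

height = np.linalg.norm(points["head_top"] - points["feet"])
arm_length = np.linalg.norm(points["left_hand"] - points["left_shoulder"])

print(f"height: {height:.2f} m, arm length: {arm_length:.2f} m")
```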

FIG. 42 is a diagram of an example of mapping a player to an image of the video game. In this embodiment, the image displayed in the video game corresponds to the player such that, as the player moves, the image moves the same way. The image may be a stored image of the actual player, a celebrity player (e.g., a professional athlete), a default image, and/or a user-created image. The mapping involves estimating motion of the non-reference points of the player based on the reference points of the player. In addition, the mapping involves equating the reference points on the player to the same points on the image. The same may be done for the gaming object.

FIG. 43 is a diagram of another method for determining motion that begins by obtaining coordinates for the reference points of the player and/or gaming object. The method continues by determining the player's dimensions and/or determining the dimensions of the gaming object. The method continues by mapping the reference points of the player to corresponding points of a video image based on the player's dimensions. This step may also include mapping the reference points of the gaming object (e.g., a sword) to the corresponding image of the gaming object based on the gaming object's dimensions.

The method continues by determining coordinates of other non-referenced body parts and/or parts of the gaming object based on the coordinates of the reference points. This may be done by a linear interpolation, by a most-likely motion algorithm, by a look up table, and/or any other method for estimating data points from surrounding data points. The method continues by tracking motion of the reference points and predicting motion of the non-referenced body parts and/or parts of the gaming object based on the motion of the reference points. This may also be done by a linear interpolation, by a most-likely motion algorithm, by a look up table, and/or any other method for estimating data points from surrounding data points.
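
A minimal sketch of this flow, assuming hypothetical player dimensions and reference-point coordinates: the tracked reference points are scaled onto the image using the player's dimensions, and a non-referenced point is then estimated by linear interpolation between mapped reference points:

```python
import numpy as np

# Sketch: map tracked reference points onto a game image and estimate a
# non-referenced point between them. All values below are hypothetical.
player_height_m = 1.80
image_height_units = 120.0                  # avatar height in game units
scale = image_height_units / player_height_m

elbow = np.array([0.10, 0.50, 1.20])        # tracked reference points (meters)
wrist = np.array([0.35, 0.55, 1.00])

elbow_img = elbow * scale                   # corresponding points on the image
wrist_img = wrist * scale

# Non-referenced forearm midpoint estimated by linear interpolation.
forearm_mid_img = elbow_img + 0.5 * (wrist_img - elbow_img)
print("estimated forearm midpoint (image units):", forearm_mid_img)
```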

FIG. 44 is a schematic block diagram of an embodiment of a gaming object and/or game console that includes a physical layer (PHY) integrated circuit (IC) and a medium access control (MAC) layer processing module. The PHY IC includes a position and/or motion tracking RF section, a controller interface RF section, and a baseband processing module. As with any processing module disclosed herein, the MAC processing module and the baseband processing module may be a single processing device or a plurality of processing devices. Such a processing device may be a microprocessor, micro-controller, digital signal processor, microcomputer, central processing unit, field programmable gate array, programmable logic device, state machine, logic circuitry, analog circuitry, digital circuitry, and/or any device that manipulates signals (analog and/or digital) based on hard coding of the circuitry and/or operational instructions. The processing module may have an associated memory and/or memory element, which may be a single memory device, a plurality of memory devices, and/or embedded circuitry of the processing module. Such a memory device may be a read-only memory, random access memory, volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, cache memory, and/or any device that stores digital information. Note that when the processing module implements one or more of its functions via a state machine, analog circuitry, digital circuitry, and/or logic circuitry, the memory and/or memory element storing the corresponding operational instructions may be embedded within, or external to, the circuitry comprising the state machine, analog circuitry, digital circuitry, and/or logic circuitry. Further note that the memory element stores, and the processing module executes, hard coded and/or operational instructions corresponding to at least some of the steps and/or functions illustrated in the various Figures depicted and described herein.

The MAC processing module triggers position and/or tracking data collection, formats the data, processes the data, and/or controls position and/or tracking data communications and/or controller communications. The position and/or tracking RF section may include circuitry to transmit one or more beamformed RF signals, RF signals for 3D antenna reception, RFID communications, and/or any other RF transmission and/or reception discussed herein.

The game console may use a standardized protocol, a proprietary protocol, and/or a combination thereof to provide the communication between the gaming object and the console. Note that the communication protocol may borrow unused bandwidth from a standardized protocol to facilitate the gaming communication (e.g., utilize unused bandwidth of a WLAN, a cell phone network, etc.).

FIG. 45, FIG. 46, and FIG. 47 are diagrams of various embodiments of methods for determining position and/or motion tracking.

Referring to the method of FIG. 45, the method operates by capturing digital images using multiple digital cameras. The method then performs processing of the digital images to identify characteristics of an object that is depicted within at least some of the digital images. The method then operates by determining position of the object based on the identified characteristics. This determined position is with respect to the locations of at least some of the multiple digital cameras.

Once the position of the object is known, the method can continue by mapping this determined position to a virtual 3D (three-dimensional) coordinate system.

Referring to the method of FIG. 46, the method operates by capturing digital images using multiple digital cameras. The method then performs processing of the digital images to identify characteristics of an object that is depicted within at least some of the digital images. Once these characteristics of the object are identified, the method operates by generating directional vectors based on the identified characteristics. These directional vectors may be viewed as extending from locations of at least some of the multiple digital cameras to a position of the object.

The method then operates by determining position of the object based on the directional vectors. Again, this determined position is with respect to the locations of at least some of the multiple digital cameras as indicated by an intersection of at least some of the directional vectors.

Once the position of the object is known, the method can continue by mapping this determined position to a 3D (three-dimensional) coordinate system.
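
One common way to compute such an intersection, offered here only as an illustrative sketch and not as the method required by this disclosure, is the least-squares point closest to all of the rays; the camera positions and the target used to synthesize the directional vectors below are hypothetical:

```python
import numpy as np

def intersect_rays(origins, directions):
    """Least-squares point closest to the rays origin + t * direction."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)   # projects onto the plane normal to d
        A += M
        b += M @ p
    return np.linalg.solve(A, b)

# Hypothetical camera locations and an object position used to build the
# directional vectors for this demonstration.
target = np.array([2.0, 1.5, 1.0])
cameras = [np.array([0.0, 0.0, 2.5]),
           np.array([5.0, 0.0, 2.5]),
           np.array([0.0, 4.0, 2.5])]
vectors = [target - c for c in cameras]

print(intersect_rays(cameras, vectors))   # approximately [2.0, 1.5, 1.0]
```

With noisy vectors the rays generally do not meet exactly; the least-squares point minimizes the summed squared distances to all of the rays.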

Referring to the method of FIG. 47, the method operates by capturing digital images using multiple digital cameras. The method then performs processing of the digital images to identify at least one sensing tag that is depicted within at least some of the digital images. The sensing tag can be any of a variety of sensing tags, including a light reflective material, a light absorbent material, an infrared source (e.g., when at least one of the digital cameras is infrared sensitive), a color, and/or any other desired type of sensing tag. The sensing tag may be associated with an entirety of an object depicted within at least some of the digital images. As also described herein, the sensing tag may be associated with only a portion of an object depicted within at least some of the digital images (e.g., a corner of an object, a body part of a player, etc.).
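
As an illustrative sketch only, a color sensing tag might be located within a digital image by simple thresholding; the synthetic image and the threshold values below are assumptions, not requirements of this disclosure:

```python
import numpy as np

# Sketch: locate a bright red sensing tag by color thresholding.
# A synthetic 240x320 RGB image stands in for a captured digital image.
img = np.zeros((240, 320, 3), dtype=np.uint8)
img[100:110, 200:210] = (255, 40, 40)     # the tag region

r = img[..., 0].astype(int)
g = img[..., 1].astype(int)
b = img[..., 2].astype(int)
mask = (r > 200) & (g < 100) & (b < 100)  # pixels dominated by red

ys, xs = np.nonzero(mask)
if xs.size:
    print("tag centroid (x, y):", xs.mean(), ys.mean())  # ~ (204.5, 104.5)
```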

Once the sensing tag is identified within at least some of the digital images, the method operates by generating directional vectors based on the identified sensing tag. These directional vectors may be viewed as extending from locations of at least some of the multiple digital cameras to a position of the sensing tag.

The method then operates by determining position of the sensing tag based on the directional vectors. Again, this determined position is with respect to the locations of at least some of the multiple digital cameras as indicated by an intersection of at least some of the directional vectors.

Once the position of the object is known, the method can continue by mapping this determined position to a virtual 3D (three-dimensional) coordinate system.

FIG. 48 is a diagram of an embodiment of a method for determining a distance based on captured digital images.

Referring to the method of FIG. 48, the method operates by capturing digital images using multiple digital cameras. The method then performs processing of the digital images, using pattern recognition, to identify an object depicted within at least some of the digital images. A size of the identified object is predetermined (e.g., such as a predetermined size of a gaming object, a known object, etc.).

In accordance with processing the digital images, the method operates to determine an image size of the identified object (e.g., a size of the object as depicted within at least one of the digital images). Once an image size of an object depicted within a digital image is known, and also when an actual size of the object is known, then the method can associate the known/predetermined size with the image size. This way, a scaling factor can be determined between objects depicted within the digital image and the actual size of objects within the physical environment that includes the object.

The method then operates by determining a distance within the physical environment using the image size of the object and the predetermined size of the object (e.g., based on the scaling factor).
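
For illustration, the pinhole-camera relation provides one such scaling between image size and physical distance; the focal length and object sizes below are hypothetical:

```python
# Sketch: pinhole relation  image_size_px = focal_px * real_size_m / distance_m,
# rearranged to recover the distance. All values below are hypothetical.
focal_px = 800.0        # camera focal length expressed in pixels
real_size_m = 0.20      # predetermined size of the identified object
image_size_px = 40.0    # size of the object as depicted in the image

distance_m = focal_px * real_size_m / image_size_px
print(f"estimated distance: {distance_m:.2f} m")   # 4.00 m
```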

Once a distance, as depicted within at least one digital image, is known, the method can continue by mapping this determined distance within a virtual 3D (three-dimensional) coordinate system.

It is noted that the various modules (e.g., processing modules, baseband processing modules, MAC processing modules, game consoles, etc.) described herein may be a single processing device or a plurality of processing devices. Such a processing device may be a microprocessor, micro-controller, digital signal processor, microcomputer, central processing unit, field programmable gate array, programmable logic device, state machine, logic circuitry, analog circuitry, digital circuitry, and/or any device that manipulates signals (analog and/or digital) based on operational instructions. The operational instructions may be stored in a memory. The memory may be a single memory device or a plurality of memory devices. Such a memory device may be a read-only memory, random access memory, volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, and/or any device that stores digital information. It is also noted that when the processing module implements one or more of its functions via a state machine, analog circuitry, digital circuitry, and/or logic circuitry, the memory storing the corresponding operational instructions is embedded within the circuitry comprising the state machine, analog circuitry, digital circuitry, and/or logic circuitry. In such an embodiment, a memory stores, and a processing module coupled thereto executes, operational instructions corresponding to at least some of the steps and/or functions illustrated and/or described herein.

The present invention has also been described above with the aid of method steps illustrating the performance of specified functions and relationships thereof. The boundaries and sequence of these functional building blocks and method steps have been arbitrarily defined herein for convenience of description. Alternate boundaries and sequences can be defined so long as the specified functions and relationships are appropriately performed. Any such alternate boundaries or sequences are thus within the scope and spirit of the claimed invention.

The present invention has been described above with the aid of functional building blocks illustrating the performance of certain significant functions. The boundaries of these functional building blocks have been arbitrarily defined for convenience of description. Alternate boundaries could be defined as long as the certain significant functions are appropriately performed. Similarly, flow diagram blocks may also have been arbitrarily defined herein to illustrate certain significant functionality. To the extent used, the flow diagram block boundaries and sequence could have been defined otherwise and still perform the certain significant functionality. Such alternate definitions of both functional building blocks and flow diagram blocks and sequences are thus within the scope and spirit of the claimed invention.

One of average skill in the art will also recognize that the functional building blocks, and other illustrative blocks, modules and components herein, can be implemented as illustrated or by discrete components, application specific integrated circuits, processors executing appropriate software and the like or any combination thereof.

Moreover, although described in detail for purposes of clarity and understanding by way of the aforementioned embodiments, the present invention is not limited to such embodiments. It will be obvious to one of average skill in the art that various changes and modifications may be practiced within the spirit and scope of the invention, as limited only by the scope of the appended claims.

Claims

1. An apparatus, comprising:

a plurality of digital cameras that generates a plurality of digital images, wherein an object is depicted within at least some of the plurality of digital images; and
a processing module coupled to: receive the plurality of digital images; identify characteristics of the object within the at least some of the plurality of digital images to produce identified object characteristics; and determine position of the object with respect to the plurality of digital cameras based on the identified object characteristics.

2. The apparatus of claim 1, wherein:

the identified object characteristics includes a plurality of directional vectors extending from at least some of the plurality of digital cameras to the object; and
the processing module determines the position of the object with respect to the plurality of digital cameras based on the plurality of directional vectors.

3. The apparatus of claim 2, wherein:

one of the plurality of directional vectors extends from a reference point of a digital image sensor of one digital camera to a physical pixel within the digital image sensor that corresponds to an image pixel of one digital image captured by the one digital camera.

4. The apparatus of claim 1, wherein:

the processing module employs a pattern recognition process to identify the characteristics of the object.

5. The apparatus of claim 1, wherein:

the object includes a sensing tag; and
at least one of the identified object characteristics is the sensing tag.

6. The apparatus of claim 5, wherein:

the sensing tag is at least one of:
a light reflective material;
a light absorbent material;
an infrared source such that at least one of the plurality of digital cameras is infrared sensitive; and
a color.

7. The apparatus of claim 1, wherein:

the object has a predetermined size;
the processing module employs a pattern recognition process to identify the object within the at least some of the plurality of digital images;
the identified object has an image size; and
based on the predetermined size and the image size, the processing module determines a distance between the object and the processing module or at least one of the plurality of digital cameras.

8. The apparatus of claim 1, wherein:

the processing module maps the position of the object within a virtual three-dimensional coordinate system.

9. The apparatus of claim 1, wherein:

a field of view of one camera of the plurality of digital cameras is adjusted based on the position of the object.

10. The apparatus of claim 1, wherein:

an image capture rate of one of the plurality of digital cameras is adjusted based on at least one of:
a predetermined setting within the processing module;
a user-selected setting within the processing module;
a movement history of the object;
a current movement of the object; and
an expected future movement of the object.

11. The apparatus of claim 1, wherein:

the processing module determines the position of the object during a first time;
the processing module determines at least one additional position of the object during a second time; and
the processing module estimates movement of the object by comparing the determined position and the at least one additional determined position.

12. The apparatus of claim 1, wherein:

the object includes a first radio frequency (RF) transceiver;
the processing module includes a second RF transceiver; and
based on an RF signal transmitted from the first RF transceiver to the second RF transceiver, the processing module determines a distance between the processing module and the object.

13. The apparatus of claim 1, wherein:

one of the plurality of digital cameras includes a first radio frequency (RF) transceiver;
the processing module includes a second RF transceiver; and
based on an RF signal transmitted from the first RF transceiver to the second RF transceiver, the processing module determines a distance between the processing module and the one digital camera.

14. The apparatus of claim 1, wherein:

a plurality of integrated circuits is distributed throughout a region in which the object is located; and
one of the plurality of digital cameras is a digital image sensor implemented on a surface of one of the plurality of integrated circuits.

15. An apparatus, comprising:

a gaming object for use within a gaming environment;
a plurality of digital cameras that generates a plurality of digital images, wherein the gaming object is depicted within at least some of the plurality of digital images; and
a game console coupled to: receive the plurality of digital images; identify characteristics of the gaming object within the at least some of the plurality of digital images to produce identified object characteristics; and determine position of the gaming object within the gaming environment with respect to the plurality of digital cameras based on the identified object characteristics.

16. The apparatus of claim 15, wherein:

the gaming object is associated with a player located within the gaming environment; and
the game console determines position of the player based on the position of the gaming object.

17. The apparatus of claim 15, wherein:

the identified object characteristics includes a plurality of directional vectors extending from at least some of the plurality of digital cameras to the gaming object; and
the game console determines the position of the gaming object with respect to the plurality of digital cameras based on the plurality of directional vectors.

18. The apparatus of claim 17, wherein:

one of the plurality of directional vectors extends from a reference point of a digital image sensor of one digital camera to a physical pixel within the digital image sensor that corresponds to an image pixel of one digital image captured by the one digital camera.

19. The apparatus of claim 15, wherein:

the game console employs a pattern recognition process to identify the characteristics of the gaming object.

20. The apparatus of claim 15, wherein:

the gaming object includes a sensing tag; and
at least one of the identified object characteristics is the sensing tag.

21. The apparatus of claim 20, wherein:

the sensing tag is at least one of:
a light reflective material;
a light absorbent material;
an infrared source such that at least one of the plurality of digital cameras is infrared sensitive; and
a color.

22. The apparatus of claim 15, wherein:

the gaming object has a predetermined size;
the game console employs a pattern recognition process to identify the gaming object within the at least some of the plurality of digital images;
the identified gaming object has an image size; and
based on the predetermined size and the image size, the game console determines a distance between the gaming object and the game console or at least one of the plurality of digital cameras.

23. The apparatus of claim 15, wherein:

the game console maps the position of the gaming object within a virtual three-dimensional coordinate system.

24. The apparatus of claim 15, wherein:

an image capture rate of one of the plurality of digital cameras is adjusted based on at least one of:
a predetermined setting within the game console;
a player-selected setting within the game console;
a movement history of the gaming object;
a current movement of the gaming object; and
an expected future movement of the gaming object.

25. The apparatus of claim 15, wherein:

the position is a first position;
the game console determines the first position during a first time;
the game console determines a second position of the gaming object during a second time; and
the game console estimates movement of the gaming object by comparing the first position and the second position.

26. An apparatus, comprising:

a plurality of digital cameras, associated with a gaming object, that generates a plurality of digital images such that a plurality of predetermined references is depicted within at least some of the plurality of digital images; and
a game console coupled to: receive the plurality of digital images; identify characteristics of at least some of the plurality of predetermined references to produce identified characteristics; and determine position of the gaming object with respect to the plurality of predetermined references based on the identified characteristics.

27. The apparatus of claim 26, wherein:

the identified characteristics includes a plurality of directional vectors extending from at least some of the plurality of digital cameras to the at least some of the plurality of predetermined references; and
the game console determines the position of the gaming object with respect to the plurality of digital cameras based on the plurality of directional vectors.

28. The apparatus of claim 27, wherein:

one of the plurality of directional vectors extends from a physical pixel within a digital image sensor of one digital camera, that corresponds to an image pixel of one digital image captured by the one digital camera, to a reference point of the digital image sensor.

29. The apparatus of claim 26, wherein:

the gaming object is associated with a player located within the gaming environment; and
the game console determines position of the player based on the position of the gaming object.

30. The apparatus of claim 26, wherein:

the game console employs a pattern recognition process to identify the characteristics of at least some of the plurality of predetermined references.

31. The apparatus of claim 26, wherein:

the game console maps the position of the gaming object within a virtual three-dimensional coordinate system.

32. The apparatus of claim 26, wherein:

the position is a first position;
the game console determines the first position during a first time;
the game console determines a second position of the gaming object during a second time; and
the game console estimates movement of the gaming object by comparing the first position and the second position.
Patent History
Publication number: 20080316324
Type: Application
Filed: Jun 9, 2008
Publication Date: Dec 25, 2008
Applicant: BROADCOM CORPORATION (IRVINE, CA)
Inventors: AHMADREZA (REZA) ROFOUGARAN (Newport Coast, CA), Maryam Rofougaran (Rancho Palos Verdes, CA)
Application Number: 12/135,332
Classifications
Current U.S. Class: Combined Image Signal Generator And General Image Signal Processing (348/222.1); Target Tracking Or Detecting (382/103); 348/E05.024
International Classification: H04N 5/228 (20060101); G06K 9/00 (20060101);