Network gaming system with a distribution server
A network gaming system includes a distribution server to provide gaming devices with communication with backend servers. The system includes at least one backend server, at least one gaming device, and at least one distribution server, wherein each gaming device is in communication with at least one backend server through at least one distribution server.
Traditionally, gaming networks have been custom designed for gaming purposes only. In this regard, gaming networks have been constructed only to include gaming functionality and have lagged behind the rapid growth of network and communications capability available in the computing, communications and Internet industries.
In many older, or “legacy,” slot systems, data lines are constructed for robust and reliable communications in the harsh environment of the casino, where in many cases slot systems remain up 24 hours a day, 365 days a year. Certain legacy slot systems, such as the SDS® “Slotline” system by Bally Gaming & Systems, Inc. of Las Vegas, Nev., were developed in the early 1970's, before Internet protocol (IP) and packet-based networks, such as the Internet and Ethernet networks, had developed to their current level. The legacy systems were originally designed to provide security and accounting information from the gaming device to the backend server over a cable, which was a serial (narrowband) network. Security information included door opens, machine breakdowns, and tilt conditions. Accounting information was related to profit and loss of the operation and was used to detect cheating, skimming, and misreporting for tax purposes. The data transmission needs were modest and sporadic in nature, and were implemented with a network bridge performing a polling protocol to communicate with the gaming devices. A data rate of 7,200 bits per second (bps) was a more than adequate selection for transmission speed, since that data rate provided reliable and robust communication and, by virtue of its unusual value, offered a measure of security through obscurity.
Recently, however, casino owners have become aware that the addition of features to gaming machines and the increasing need for operational efficiency are driving the current proprietary gaming networks toward much greater capabilities, such as full-duplex (two-way) connectivity, higher speeds (e.g., 10 Mbps or greater), and improved analytic features. These improvements are expected to bring the player greater game choices, more rapid renewal of the slot floor entertainment options, and greater operational efficiency for the operator. These translate into increased revenue generation and improved profits.
However, it is costly to install complete new networks to handle the high-speed traffic necessary for the improved features. In the casino environment especially, it is costly to install new high-speed cable, because slot floors typically employ “Walker Duct,” in which the communications cables are buried inside the concrete floor. Casino owners are, at best, unwilling to close down their casinos for the time it would take to tear up the casino floor to install the new cabling.
One solution is to keep the previous network infrastructure in place and to have the backend servers that provide the enhanced features convert high-speed protocols, for example Ethernet, into narrowband protocols, for example Slotline, to send and receive data through the older network cabling and equipment to and from the gaming devices in the network. However, this conversion requires significant overhead for the server or servers performing the conversion, while still causing a bottleneck in network traffic at the point of conversion to the older protocol.
Thus, it would be desirable to be able to provide high-speed communications in a gaming network using older or legacy network cabling and equipment, without burdening the backend servers in the gaming network with the task of protocol conversion.
SUMMARY OF THE INVENTION
Briefly, and in general terms, the claimed invention resolves the above and other problems by providing a gaming system having a distribution server to provide gaming devices with communication with backend servers. The system includes at least one backend server, at least one gaming device, and at least one distribution server, wherein each gaming device is in communication with at least one backend server through at least one distribution server.
In one embodiment, a gaming system includes at least one backend server, at least one gaming device, and at least one data cache server. Each gaming device is in communication with at least one backend server through at least one data cache server.
In one embodiment, a gaming network includes at least one gaming device, a core layer, and a distribution layer, wherein each gaming device communicates with the core layer via the distribution layer.
In another embodiment, a method eliminates asymmetrical data flow in a gaming network. A backend server and one or more gaming devices are established. A distribution server is established. Data is transmitted between the gaming devices and the backend server through the distribution server.
In another embodiment, an improvement in a gaming network includes a distribution server means for caching data, whereby offload processing and network efficiency are enhanced.
In another embodiment, an improvement in a gaming network comprises a data cache means for caching data between a gaming device and a backend server, whereby offload processing and network efficiency are enhanced.
In another embodiment, an improvement in a gaming network includes a distribution server means for caching data, whereby asymmetrical data flow is minimized between a gaming device and a backend server.
One embodiment of a network gaming system, referred to herein as the “Tahoe Network” and constructed in accordance with the claimed invention, is directed towards a gaming system having a distribution server to provide gaming devices with communication with backend servers.
The Tahoe Network is capable of rapidly transporting large data loads using, in one embodiment, mostly industry-standard technology and off-the-shelf hardware and software. For example, Ethernet network topology is used as a network interface in this embodiment due to the worldwide acceptance of Ethernet as a network standard and its relative ease of use. For example, Ethernet networks have the ability to scale from single point-to-point connections to large installations encompassing thousands of network devices.
In one embodiment, an upgraded proprietary network is used to connect at least some components. A hybrid network of the two technologies (Ethernet and proprietary) in which the strengths of Ethernet are exploited while its weaknesses are eliminated with the proprietary network is preferred for some embodiments, but not necessarily all embodiments.
Referring now to the drawings, like reference numerals denote like or corresponding parts throughout the drawings.
In one embodiment, L2 or L3 switches 220 connect the layers 100, 200 and 300 together. 10BASE-T or 100BASE-T Ethernet cable is used with the switches 220 to eventually connect to managed switches 322 on the gaming floor, which connect to the gaming devices 302 through a backbone switch 320.
In one embodiment, the cabling for the network 10 includes, at least in part, legacy cable, for example serial cable, which is used alongside newly installed cable for the network 10 to maintain the operation and integrity of a casino's existing systems, such as a slot accounting system. Eventually, the whole network 10, including cabling and other older components, may be phased out and ported to the new cabling and servers 102 and 202. For example, the slot accounting system is installed in the core layer server 102 as a software process that communicates over the network 10 instead of through legacy serial connections to the gaming devices 302.
In one embodiment, the 3Com® NJ200 Network Jack is used as a managed switch. The NJ200 is designed to be installed in an office network data port opening or electrical wall outlet. It features four 10/100 downlink ports and one 10/100 uplink port, and is powered via an external power supply (wall transformer) or by power-over-Ethernet (PoE).
The NJ200 is designed to be installed in an office network data port or electrical wall outlet, neither of which is present inside a gaming machine or accompanying slot stand. The proposed solution is to mount the NJ200 inside an inexpensive plastic electrical box and then mount the assembly into the slot stand.
In one embodiment, multiple upstream ports are concentrated within a carousel 310 with another switch 322.
In another embodiment, the Etherwan 1808C is used as an industrial managed switch 322 that comes in a “desktop” form, and provides 7 downlink ports and 1 uplink port. The advantage of using the Etherwan 1808C over the 3Com NJ200 is the expanded number of ports. Etherwan further offers several models of the Etherwan 1808 equipped with fiber ports in the event that fiber cable is used on the casino floor.
The Moxa ED6008 is yet another access layer switch 322 that can be used in one embodiment. The ED6008 is a small form-factor industrial DIN-mounted device equipped for dual-redundant power to attain a high MTBF. Although the ED6008 is an 8-port industrial managed switch, it can also function as a dynamic host configuration protocol (DHCP) server.
In one embodiment, instead of using cabling to carry Ethernet signals over the network, connections between various components are wireless. In another embodiment, long range Ethernet (LRE) is used for the connections in some of the components of the network 10.
Data traffic flow in a modern gaming network is typically asymmetrical. Data flow is typically heavy from the core layer server(s) 102 to the gaming devices 302 on the gaming floor in the access layer 300, due to the data download requirements of modern gaming devices 302, while data traffic flow is light from the gaming devices 302 to the core layer server(s) 102. This asymmetry, and the accompanying data bottleneck it creates, is alleviated by the distribution server(s) 202, also referred to as data cache servers 202. The distribution server 202 is located at a midpoint of the network, namely, the distribution layer. One or more distribution servers 202 operate to cache data for download to floor devices 302 (e.g., code updates, device content), thus presenting a lighter load on the backend servers 102 and speeding up data distribution.
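By way of illustration only, the following Python sketch models the caching behavior just described: a distribution server answers repeated download requests from floor devices out of a local cache and contacts the backend only on a miss. The class and function names are hypothetical and are not taken from the Tahoe implementation.

```python
# Minimal sketch of the caching behavior described above (hypothetical names;
# not the actual Tahoe server implementation). A distribution server answers
# download requests from floor devices out of its local cache and only falls
# back to the backend server on a cache miss.

from typing import Callable, Dict


class DistributionCache:
    def __init__(self, fetch_from_backend: Callable[[str], bytes]):
        self._fetch_from_backend = fetch_from_backend  # e.g., a pull from the core layer
        self._cache: Dict[str, bytes] = {}

    def get(self, content_id: str) -> bytes:
        # Serve cached content (code updates, device content) without touching
        # the backend, keeping the load on the core layer light.
        if content_id not in self._cache:
            self._cache[content_id] = self._fetch_from_backend(content_id)
        return self._cache[content_id]


if __name__ == "__main__":
    backend_calls = []

    def fake_backend(content_id: str) -> bytes:
        backend_calls.append(content_id)           # count backend hits
        return b"payload-for-" + content_id.encode()

    cache = DistributionCache(fake_backend)
    for _ in range(16):                            # 16 games pulling the same update
        cache.get("game-update-1.0")
    print("backend fetches:", len(backend_calls))  # -> 1, not 16
```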
The distribution servers 202 are not confined to the role of data caches. By way of example, and not by way of limitation, dynamic host configuration protocol (DHCP) relay, bus translation (e.g., Ethernet to RSL), and distributed computing are other functions the distribution server 202 performs in some embodiments.
Distribution servers 202, or data caches, can be stand-alone devices (e.g., servers) or can be built into network hardware, such as the RSL hubs described below. Distribution servers 202 allow for parallel processing across the gaming floor. In one embodiment, distribution servers offload processing loads from the backend servers 102, cache or back up data from games played on gaming devices 302, and speed up network 10 transactions. Other embodiments include redundant cabling in the core and distribution layers 100 and 200 for failover contingencies, or an expanded core layer 100 that combines the distribution and core layers 200 and 100 to reduce device count.
Cabling
In one embodiment, link 210 is a fiber optic link connecting the core and distribution layers 100 and 200, which serves a dual purpose of providing long-distance hauls and high bandwidth. Fiber optic cable has an advantage over copper (CAT-5) cabling in that it can transmit data over longer distances and, depending on the fiber type, has a much higher bandwidth. Fiber cable is also immune to electromagnetic noise and interference.
Fiber optic cable may be, however, more expensive than copper cable (approximately $0.65/ft. for multi-mode 12-fiber indoor/outdoor fiber optic cable vs. $0.25/ft. for CAT-5 cable). In some embodiments, fiber-optic cable is used in instances where copper cable is not adequate, for example, when the distance between the core layer 100 and the distribution layer 200 exceeds the 300-foot CAT-5 Ethernet limit.
In one embodiment, gigabit Ethernet over copper cable is used when high bandwidth is needed and cable distances allow for it. The cabling requirements for copper gigabit Ethernet are the same as those for fast Ethernet (100 Mb/s) because gigabit Ethernet operates at the same frequency as fast Ethernet (100 MHz). Gigabit Ethernet attains its higher data bandwidth by using all 4 pairs (8 wires) of the CAT-5 cable, whereas fast Ethernet uses only 2 pairs (4 wires). The end result is that copper gigabit Ethernet will operate on any network that currently operates at fast Ethernet speeds, provided that all 8 wires are properly terminated within the CAT-5 cable's RJ45 plugs.
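As a rough illustration of the pair-count arithmetic above, the following sketch computes the aggregate rates. The symbol rate and bits-per-symbol figures (125 Mbaud, 4B5B coding for 100BASE-TX, PAM-5 coding for 1000BASE-T) are general networking background assumptions, not values stated in this description.

```python
# Back-of-the-envelope pair arithmetic for the paragraph above. Line-coding
# figures are general 100BASE-TX / 1000BASE-T background assumptions.

SYMBOL_RATE_MBAUD = 125          # both PHYs signal at 125 Mbaud per pair

# 100BASE-TX: 4B5B coding -> 0.8 data bits per symbol, one pair per direction
fast_ethernet = SYMBOL_RATE_MBAUD * 0.8 * 1            # 100 Mb/s

# 1000BASE-T: PAM-5 -> 2 data bits per symbol per pair, all 4 pairs used at once
gigabit_ethernet = SYMBOL_RATE_MBAUD * 2 * 4           # 1000 Mb/s

print(f"Fast Ethernet:    {fast_ethernet:.0f} Mb/s")
print(f"Gigabit Ethernet: {gigabit_ethernet:.0f} Mb/s")
```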
In one embodiment, bandwidth is increased by using a method called trunking (also known as aggregation). Trunking is a method of combining multiple communications channels (cables) to form one large-bandwidth channel. For example, and not by way of limitation, in one embodiment four fast Ethernet channels that operate individually at 100 Mb/s are combined into one trunk to produce a single channel operating at 400 Mb/s. In this embodiment, the network switches 220, and other network hardware, have the trunking feature built in. Another benefit of trunking is redundancy. If one channel (cable) of a trunk fails for any reason, the entire link does not fail. The bandwidth of the link is reduced, but the link is still functional.
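The trunking arithmetic and failover behavior can be summarized with a small sketch (illustrative only; actual link aggregation is performed by the switch hardware):

```python
# Simple model of the trunking behavior described above: member channels add
# their bandwidth, and losing one member degrades the trunk rather than
# failing it.

def trunk_bandwidth(channel_mbps: float, healthy_channels: int) -> float:
    return channel_mbps * healthy_channels

print(trunk_bandwidth(100, 4))  # four fast Ethernet channels -> 400 Mb/s
print(trunk_bandwidth(100, 3))  # one cable fails -> 300 Mb/s, link still up
```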
Distribution to Access Layer Ethernet Cabling
In one embodiment, the cabling running from the distribution layer 200 to the access layer 300 is CAT-5 copper cable, to function with the game node switches 322 in the access layer 300 that in some embodiments have CAT-5 interfaces. In one embodiment, these game node switches 322 are located inside slot stands and are relatively small fast Ethernet units with 5 to 8 ports. In some embodiments, as is the case in many casinos today, the arrangement of the gaming devices 302 on the casino floor is a “carousel” 310.
In one embodiment, each gaming device 302 is equipped with a combination user interface and game monitoring unit device. The game monitoring unit (GMU) is a device that monitors game activity in the gaming device 302 and, among other functions, provides data regarding game play and status to the aforementioned user interface device. In one embodiment, the user interface device uses the game play data for bonus games or progressives. For network games, the interface device forwards the data to the backend servers 102. In one embodiment, the user interface is an iView device available from Bally Gaming & Systems, Inc. of Las Vegas, Nev.
The combination user interface device and GMU is referred to herein as a TahoeEPI.
In some embodiments, 4 TahoeEPI devices 500 are daisy-chained together through their integrated Ethernet hubs 502, with the last TahoeEPI device 500 connected to the carousel switch. This arrangement eliminates the external game node switch 322 and effectively distributes the function of the game node switch 322 into each of the TahoeEPI devices 500, providing cost savings. Further, gaming devices 302 in one embodiment integrate Ethernet ports into their systems. The embedded Ethernet hub 502 on the TahoeEPI device 500 concentrates the Ethernet connections within a gaming device 302 into a single Ethernet port. This eliminates the need for redundant cabling to a gaming device 302 and at least some upstream hardware to interface to the additional Ethernet ports.
Distribution to Access Layer RSL Cabling
In one embodiment, the aforementioned RSL network integrates an older or existing slot cabling system while using active hubs 350 instead of passive switches 220 or line concentrators (“harmonicas”). The use of the active hub 350 allows for point-to-point signalling and higher data speeds.
Older networks typically use a slot line switch that is a 6-port device (1 line in, 1 line out, 4 device ports), whereas, in one embodiment, the RSL active hub 350 has provisions for 8 devices, or twice the capacity of a typical slot line harmonica. In one embodiment, the RSL active hub 350 follows the slot line switch topology in that it can be daisy-chained, for example, with 8 hubs 350.
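Assuming every device port on every hub is populated, the capacity comparison implied by these figures works out as follows (an illustrative calculation, not a limit stated in the description):

```python
# Rough capacity comparison implied by the figures above (illustrative only;
# assumes fully populated ports on each hub in an 8-hub daisy chain).

LEGACY_PORTS_PER_HARMONICA = 4    # 6-port device: 1 line in, 1 line out, 4 device ports
RSL_PORTS_PER_HUB = 8
CHAINED_HUBS = 8

print("Devices per legacy harmonica:", LEGACY_PORTS_PER_HARMONICA)
print("Devices per RSL active hub:  ", RSL_PORTS_PER_HUB)
print("Devices per 8-hub RSL chain: ", RSL_PORTS_PER_HUB * CHAINED_HUBS)  # 64
```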
Long Range Ethernet
In one embodiment, a long-range, low-bandwidth Ethernet network is used. One system is available from Hatteras Networks of Durham, N.C., a company that specializes in Long range Ethernet (LRE) solutions. The HN400 by Hatteras converts a standard 10/100B-TX Ethernet interface into an IEEE standard 2BASE-TL interface, which allows for Ethernet over long cables. The HN400 can operate at 2.3 Mb/s over 11,000 ft (3,350 meters) per wire pair. The 2BASE-TL IEEE specification uses “bonded pairs”, which means that two or more wire pairs can be combined to create a higher bandwidth trunk (similar to trunking described above).
The maximum bandwidth on a single LRE bonded cable 610 (4 pairs bonded, 2.3 Mb/s per bonded pair) is typically 9.2 Mb/s. Dividing this bandwidth among the games of a 16-game carousel gives each game approximately 575 Kb/s of bandwidth. That is the main limitation of LRE: the bandwidth is limited unless multiple cables are bundled to produce a higher bandwidth trunk. LRE also does not obviate the need for access layer switches 320, 322 at the carousel 310, since the Hatteras NIUs 600 have only a single Ethernet port.
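The per-game bandwidth figure above follows directly from the bonded-pair arithmetic, as the following short calculation shows:

```python
# The per-game bandwidth arithmetic from the paragraph above.

MBPS_PER_BONDED_PAIR = 2.3
PAIRS_PER_CABLE = 4
GAMES_PER_CAROUSEL = 16

cable_mbps = MBPS_PER_BONDED_PAIR * PAIRS_PER_CABLE          # 9.2 Mb/s
per_game_kbps = cable_mbps * 1000 / GAMES_PER_CAROUSEL       # ~575 Kb/s

print(f"Bonded cable bandwidth: {cable_mbps:.1f} Mb/s")
print(f"Per-game share:         {per_game_kbps:.0f} Kb/s")
```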
Hatteras Networks also produces the HN4000, which can be used in place of the HN400 for the NIU 600. The HN4000 is a multi-channel version of the HN400 for the core layer 100 NIU 600. The HN4000 provides an interface for up to 40 wire pairs and a 100B-TX/LX or 1000B-TX/LX Ethernet interface, and is stackable up to five units.
The HN400 and HN4000 NIUs each have redundant power inputs through external power supplies (power bricks), and support in-band and out-of-band network management, including SNMP.
Wireless
Wireless networking is used in one embodiment because it frees the gaming device 302 from a physical network interface (the network cable), allowing the casino owner the freedom to easily re-position gaming devices 302 about the casino floor.
Wireless (or “802.11” or “Wi-Fi”) systems are broken down into three fundamental standards: IEEE 802.11a, b, and g. 802.11g is the most current standard and is backward compatible with 802.11b. 802.11a is used in dense user areas where higher bandwidth and a greater number of channels are needed; its relatively small coverage area allows more wireless “access points” (analogous to a wired hub or switch) to be concentrated in a certain area.
The 2.4 GHz band that 802.11b and 802.11g operate in and the 5 GHz band that 802.11a operates in are called the ISM bands. These bands of frequencies are unlicensed and open to anyone to use with few restrictions. The 2.4 GHz band, for instance, is the same band that Bluetooth equipment and cordless phones operate in, and the use of these devices in a wireless networking environment may cause a decrease in the wireless network bandwidth.
In one embodiment, wired equivalent privacy (WEP) is employed in wireless networks as a baseline of security by encrypting data as it is transmitted. However, WEP does not have any provision for user authentication. In another embodiment, Wi-Fi protected access (WPA), a subset of the IEEE 802.1X security standard, is used to provide the user authentication mechanism that WEP lacks. In another embodiment, a virtual private network (VPN) is employed on Wi-Fi systems as an alternative to 802.1X.
Antennas in wireless networking systems typically radiate in an omni-directional pattern to maximize coverage. In one embodiment, this is changed to achieve more directional coverage. In this embodiment, patch or yagi antennas are deployed for directional coverage and parabolic antennas are used for building-to-building links.
Hardware Management
In one embodiment, the switches used in the network are similar to off-the-shelf desktop switches, with the exception that they have been upgraded for industrial use and include a hardware management protocol, such as SNMP, to aid in monitoring and troubleshooting the network 10. Preferably, the distribution and core layer switches 220 use a management package such that the entire network 10 is managed from a central location.
Power
In some embodiments, where power is not readily available for additional network devices, power over Ethernet (PoE) is used. PoE enables devices (such as the access layer switches 322) to be powered over CAT-5 cable.
In another embodiment, redundant power is used for devices in the core and distribution layers 100 and 200. Those devices equipped for redundant power use auxiliary DC power inputs, not dual AC power inputs, and use external rackmount power supply units. Blade-style switch chassis are equipped with redundant, hot-swappable internal AC-DC power supplies.
Hardware Redundancy
In one embodiment, redundancy is used at critical bottlenecks in the system, such as the core layer 100. In one embodiment, the core layer 100 represents a single network access point that can severely impact the performance and availability of the entire network should it, or any piece of it, fail. Thus, in this embodiment, redundancy is preferably used for many components in the core layer 100, at least for those components that do not have a favorable mean time to repair (MTTR) rating, or for which replacing a failed component is not quick and easy. In one embodiment, spare components are pre-configured, stocked, and mounted with the online hardware as a “hot spare.”
Distribution Layer
The distribution layer switches 220 concentrate the access layer switch uplinks into a high-bandwidth link to the core layer 100. The distribution layer hardware is optional and is eliminated in some embodiments with small installations. In one embodiment, the distribution layer hardware includes 24- and/or 48-port rackmount managed L2 switches 220. These switches are mounted on the periphery of the casino floor within 300 feet (copper Ethernet range) of the access layer switches 320, 322 they interface to. In one embodiment, the L2 switch used is the Nortel® BayStack® 470-24T available from Nortel Networks of Santa Clara, Calif. This L2 switch features 24 RJ45 10/100 downlink ports and 2 gigabit fiber uplink ports. It is stackable (multiple units can be connected together to produce one large logical switch), and the uplink ports can be trunked together to produce a single high-bandwidth pipeline. Multiple uplink ports (trunking) also provide a level of redundancy at single points of failure.
Core Layer
The core layer 100 is the interface between the main backend server(s) 120 (Tahoe servers) and the rest of the network 10. A backend server 120 interfaces with each individual gaming device 302, or iView device described above. In one embodiment, the core layer 100 includes fast L3 switches 220, which come in many different configurations, ranging from 1U rackmount devices up to hot-swappable, dual-redundant enterprise blade switches 220.
The Netgear GSM7324 managed L3 switch is one such switch that is used in one embodiment. It is a 1U rackmount chassis featuring 24 RJ45 10/100/1000 ports and 4 SFP ports that are configurable as RJ45 (copper) or SC (fiber) connections. In another embodiment, the 3Com GSM7324 is used as an L3 switch 220.
In another embodiment, 3Com's 4050 L3 Switch is used, which features 12 10/100/1000 ports and 12 fiber ports. The 3Com 4060 and 4070 switches are similar to the 4050 with the exception of the port configuration, wherein the 4060 has 6 10/100/1000 ports and 18 fiber ports, and the 4070 has 24 fiber ports. In some embodiments, any of these L3 switches are used as the L3 switch 220 in the core layer 100.
In yet another embodiment, the Cisco® 4507R modular blade switch, available from Cisco Systems of San Jose, Calif. is used as the L3 switch 220 in the core layer 100.
RSL
In the past, prior art slot networks, such as the “Slotline” network by Bally Gaming & Systems, Inc. of Las Vegas, Nev., operated at a relatively low data rate of 7,200 bps, which has served the needs of slot data systems (SDS) over the last 20 years. However, the demands of new applications and the desire of casino operators to deliver a more compelling player experience have pushed the existing architecture to its breaking point.
In one embodiment, a “rapid slot line” (RapidSL) system includes a networking solution that is similar in concept and physical layout to the existing Slotline network, while providing a substantial increase in network throughput as well as retaining the deterministic timing and throughput for network traffic that exists in the system. In one embodiment, a throughput of 30 Mb/sec is achieved. In another embodiment, the throughput is greater than 100 Mb/sec. In one embodiment, RapidSL is based on the TIA/EIA RS-485 signalling standard (RS-485) used in a number of non-gaming industries. The RS-485 standard is commonly used in industrial networking, process control, and consumer applications. For example, it forms the core of the Profibus industrial networking system and is often used as the physical layer for the widely used controller area network (CAN) protocol.
RSL Hardware Functional Description
In one embodiment, the RapidSL system architecture includes three major functional blocks: a head end, a repeater/hub, and node hardware. The architecture of a RapidSL-based slot floor is similar to that of the original Slotline concept.
RSL Signaling
One benefit of the RapidSL system is that a point-to-point link is provided for every gaming device. Because every link has its own transceiver, data rates are correspondingly higher and more reliable. In this embodiment, every link adheres to the TIA/EIA RS-485 specification in order to improve bus integrity.
RSL Head End Hardware
With the hardware assistance provided by the FPGA, a streamlined Slotline protocol is implemented that is not saddled with the immense delays inherent in the current Slotline protocol, while still maintaining the deterministic and timely throughput that has made the existing protocol such a success over the last two decades. The transmission protocol retains the high level of determinism by keeping, and enhancing through hardware components, the polling-based transmission scheme of the existing protocol.
It should be noted that this protocol does not define the content of the data packet. Any data can be transmitted, which allows the flexibility to layer any desired protocol atop this transmission medium.
In the event that a GNB 202 has system data to transmit to a GMU, a time slice with attached data is sent instead of a single byte address.
In one embodiment, the RapidSL protocol is time delimited. The end of a transmission is marked with a >2 microsecond period of no transmission. Start-of-transmission timeout is the same value as that of the end of transmission marker. Transmission timeout values are based on the worst-case propagation times through eight repeater/hubs attached to a total length of 200 m of CAT5 cable. Transmission times from the hubs to the attached GMUs are negligible and are not considered in the delay model.
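The following Python sketch models the time-slice discipline described above: the head end cycles through GMU addresses, attaches queued downstream data to a time slice when present, and separates transmissions with a silent gap. The frame layout and queue handling are illustrative assumptions; the description does not fix a byte-level format.

```python
# Schematic sketch of the time-slice discipline described above. The frame
# layout (single address byte, optional attached payload) and the queue
# handling are illustrative assumptions, not a defined packet format.

import time
from collections import deque
from typing import Deque, Dict


END_OF_TX_GAP_S = 2e-6   # >2 microseconds of silence marks end of transmission


def transmit(frame: bytes) -> None:
    print("TX:", frame.hex())


def run_polling_cycle(addresses, pending: Dict[int, Deque[bytes]]) -> None:
    """One pass of the head end's time-division engine over all GMU addresses."""
    for addr in addresses:
        if pending.get(addr):
            frame = bytes([addr]) + pending[addr].popleft()  # time slice + attached data
        else:
            frame = bytes([addr])                            # bare address byte
        transmit(frame)
        time.sleep(END_OF_TX_GAP_S)                          # inter-transmission silence


if __name__ == "__main__":
    downlink = {0x03: deque([b"\x10\x20\x30"])}              # system data queued for GMU 3
    run_polling_cycle(range(1, 9), downlink)
```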
In one embodiment, the RapidSL hardware presents a register and FIFO based interface to the applications that need to access it. Since the hardware only notifies the application that it needs to be serviced when it has a complete message available, the application is liberated from the stringent real-time requirements currently imposed by the Slotline system. The interface includes a control register, a message size register, an address register, read and write FIFOs, and an interrupt.
In one embodiment, writing a message to the RapidSL communication channel includes three operations. First, if interrupts are enabled, the application receives a transmitter-empty interrupt; if interrupts are not enabled, the control register is checked to verify that the transmitter is empty. Second, the bytes of the message are written to the transmit FIFO. Third, the address of the GMU that the message is bound for is written to the address register. The act of writing the address to the address register causes the RapidSL hardware to begin transmission as soon as that address's time slice arrives.
If there is no message waiting to be transmitted in the FIFO then the hardware time division engine continues issuing time slices.
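A minimal sketch of the three-step write sequence, run against a mocked-up version of the register/FIFO interface, is shown below. The register names and the polling of a status flag are hypothetical stand-ins for whatever the actual RapidSL hardware exposes.

```python
# Sketch of the three-step write sequence above against a mocked-up version of
# the register/FIFO interface. Register names are hypothetical stand-ins.

class RapidSLPort:
    def __init__(self):
        self.control = {"tx_empty": True}
        self.tx_fifo = []
        self.address_register = None

    def write_message(self, gmu_address: int, payload: bytes) -> None:
        # Step 1: confirm the transmitter is empty (polled here; could be an interrupt).
        while not self.control["tx_empty"]:
            pass
        # Step 2: push the message bytes into the transmit FIFO.
        self.tx_fifo.extend(payload)
        # Step 3: writing the address register arms the hardware to transmit
        # on that address's next time slice.
        self.address_register = gmu_address


port = RapidSLPort()
port.write_message(0x03, b"\x01\x02\x03")
```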
In one embodiment, a message is read using a three-step process. First, if interrupts are enabled, the application receives a “message waiting” interrupt; if interrupts are not enabled, the control register is checked to see whether a message is waiting. Second, the message size register is read. Third, the number of bytes encoded in the message size register is read from the FIFO.
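A matching sketch of the three-step read sequence, again with hypothetical register names, assumes the hardware has already placed a complete message in the receive FIFO and loaded the message size register.

```python
# Sketch of the three-step read sequence, using the same kind of mocked
# interface (names hypothetical). The hardware is assumed to have already
# deposited a complete message in the receive FIFO.

class RapidSLReceive:
    def __init__(self, message: bytes):
        self.control = {"msg_waiting": True}
        self.message_size_register = len(message)
        self.rx_fifo = list(message)

    def read_message(self) -> bytes:
        # Step 1: confirm a message is waiting (polled here; could be an interrupt).
        while not self.control["msg_waiting"]:
            pass
        # Step 2: read the message size register.
        size = self.message_size_register
        # Step 3: pull exactly that many bytes from the receive FIFO.
        data = bytes(self.rx_fifo[:size])
        del self.rx_fifo[:size]
        self.control["msg_waiting"] = bool(self.rx_fifo)
        return data


print(RapidSLReceive(b"\xaa\xbb\xcc").read_message().hex())
```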
In some embodiments, the 30 Mb/sec data rate is not the highest bandwidth level that the RapidSL system is capable of achieving. Performance levels up to 100 Mb/sec are possible with existing RS-485 cable with the right selection of components and management of cable lengths. The basic architecture is not specifically mated to any particular transceiver technology, so, in some embodiments, RocketIO, Infiniband, LVDS, and 1 Gb Ethernet networks are used. Further, a reasonable base target bandwidth for a fiber-based RapidSL system is 2.5 Gb/sec if the Slotline cable were replaced with high bandwidth cabling.
Although the invention has been described in language specific to computer structural features, methodological acts, and computer readable media, it is to be understood that the invention defined in the appended claims is not necessarily limited to the specific structures, acts, or media described. Therefore, the specific structural features, acts, and media are disclosed as exemplary embodiments implementing the claimed invention.
Furthermore, the various embodiments described above are provided by way of illustration only and should not be construed to limit the invention. Those skilled in the art will readily recognize various modifications and changes that may be made to the claimed invention without following the example embodiments and applications illustrated and described herein, and without departing from the true spirit and scope of the claimed invention, which is set forth in the following claims.
Claims
1. A gaming system, comprising:
- at least one backend server;
- at least one gaming device; and
- at least one distribution server, wherein each gaming device is in communication with at least one backend server through at least one distribution server.
2. A gaming system, comprising:
- at least one backend server;
- at least one gaming device; and
- at least one data cache server, wherein each gaming device is in communication with at least one backend server through at least one data cache server, whereby effects of asymmetrical data flow are reduced.
3. A gaming network, comprising:
- at least one gaming device; and
- a core layer and a distribution layer; wherein each gaming device communicates with the core layer via the distribution layer.
4. A method of eliminating asymmetrical data flow in a gaming network, comprising:
- establishing a backend server;
- establishing one or more gaming devices;
- establishing a distribution server; and
- transmitting data between the gaming devices and the backend server through the distribution server.
5. In a gaming network, the improvement comprising:
- distribution server means for caching data, whereby offload processing and network efficiency are enhanced.
6. In a gaming network, an improvement comprising:
- a data cache means for caching data between a gaming device and a backend server, whereby offload processing and network efficiency are enhanced.
7. In a gaming network, an improvement comprising:
- distribution server means for caching data, whereby asymmetrical data flow is minimized between a gaming device and a backend server.
Type: Application
Filed: Sep 12, 2005
Publication Date: Mar 15, 2007
Inventors: Randy Osgood (Reno, NV), Carmen DiMichele (Sparks, NV), James Morrow (Sparks, NV), Harold Robb (Reno, NV)
Application Number: 11/224,902
International Classification: A63F 9/24 (20060101);