Building and maintaining a network

- Aerohive Networks, Inc.

Techniques and systems for establishing and maintaining networks. The technique includes assigning a network device to an interregional redirector system and load balancer systems. The network device can be assigned based upon the regions or subregions of the network device. The technique includes the load balancer systems assigning the network device to network device management engines. The status of the network device management engines can be monitored to determine if one of the network device management engines has failed. In the event that a network device management engine has failed, the network device can be assigned to a different network device management engine.

Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority to U.S. Provisional Patent Application Ser. No. 61/788,621, filed Mar. 15, 2013, which is incorporated herein by reference.

BACKGROUND

An area of ongoing research and development is improving the ease by which a person or an enterprise can set up a network. Of particular importance is improving the ease by which a person or an enterprise can add devices to an already existing network to further expand and improve the network. Specifically, in establishing a network or adding devices to an already existing network, an administrator must configure each device in order to establish the new network or incorporate the new device into the already existing network. There therefore exists a need for systems in which a person or an enterprise can easily set up a network or add devices to an already existing network without having to configure the device.

Another area of ongoing research and development is improving the ease by which a network can be monitored and managed to continue to function if a device fails. Typical systems usually connect a plurality of network devices to a single server that manages the network devices. Therefore, if the server fails, all of the network devices managed by the failed server are inoperable. There therefore exists a need for a system that monitors the servers or engines that manage network devices to determine whether or not they have failed. There also exists a need for a system capable of reassigning the network devices to different servers or engines that manage network devices in the event that a server or engine that manages network devices has failed.

The foregoing examples of the related art are intended to be illustrative and not exclusive. Other limitations of the relevant art will become apparent to those of skill in the art upon a reading of the specification and a study of the drawings.

SUMMARY

The following implementations and aspects thereof are described and illustrated in conjunction with systems, tools, and methods that are meant to be exemplary and illustrative, not necessarily limiting in scope. In various implementations, one or more of the above-described problems have been addressed, while other implementations are directed to other improvements.

Techniques and systems for building and maintaining a network. The technique involves assigning network devices to device management engines that manage the flow of data packets into and out of the network devices. The technique can include connecting a network device to an interregional redirector system. The network device can be a newly purchased device that is being powered on for the first time by the purchaser. The technique can include the interregional redirector system receiving network device information about the network device. The technique can also include the interregional redirector system validating the network device. The interregional redirector system can then assign the network device to a load balancer system. The load balancer system can be associated with or part of one or multiple regional device management systems. The regional device management systems can be regionally unique in that they contain engines in specific regions or subregions. The load balancer systems can be regionally unique in that they are associated with or part of one or multiple regional network device management systems that are regionally unique. The interregional redirector system can assign the network device to a load balancer based upon the regions or subregions of the regional network device management systems, or of the engines themselves, with which the load balancer systems are associated.

The technique can also involve a load balancer system assigning a network device to a network device management engine. The load balancer system can receive both network device information and network device management engine information. The load balancer system can assign the network device to a network device management engine based upon the regions or subregions of the network device and the regions or subregions of the other network devices that the network device management engine already manages.

The technique can also include the load balancer system monitoring the status of the network device management engines associated with it and reassigning network devices to different management engines in the event that one of the management engines fails. The load balancer system can monitor the status of the network device management engines associated with the load balancer system by retrieving network device management engine status messages from a network device management engine message queue. The status messages can be sent to the network device management engine message queue by the network device management engines. The load balancer system can use the status messages of the network device management engines to determine whether or not a network device management engine has failed. If the load balancer system determines that a network device management engine has failed, then the load balancer system can reassign the network device to another network device management engine that is not failing. The load balancer system can also send a notification to an administrator system that the network device management engine has failed.
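
For readers who want a concrete picture of the flow summarized above, the following sketch traces a device from the interregional redirector, to a regionally appropriate load balancer, to a management engine, including reassignment on failure. All class, method, and variable names are hypothetical illustrations, not the claimed implementation.

```python
# Illustrative end-to-end flow: redirector -> load balancer -> management
# engine, with reassignment on engine failure. All names are hypothetical.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ManagementEngine:
    engine_id: str
    region: str
    healthy: bool = True
    devices: List[str] = field(default_factory=list)


@dataclass
class LoadBalancer:
    region: str
    engines: List[ManagementEngine] = field(default_factory=list)

    def assign(self, device_id: str, device_region: str) -> ManagementEngine:
        # Prefer a healthy engine that already serves the device's region.
        candidates = [e for e in self.engines
                      if e.healthy and e.region == device_region]
        if not candidates:
            candidates = [e for e in self.engines if e.healthy]
        if not candidates:
            raise RuntimeError("no healthy management engine available")
        engine = candidates[0]
        engine.devices.append(device_id)
        return engine

    def reassign_on_failure(self, failed: ManagementEngine) -> None:
        # Mark the engine failed and move its devices to healthy engines.
        failed.healthy = False
        for device_id in list(failed.devices):
            failed.devices.remove(device_id)
            self.assign(device_id, failed.region)


def provision(device_id: str, device_region: str,
              balancers_by_region: Dict[str, LoadBalancer]) -> ManagementEngine:
    # The interregional redirector validates the device (validation omitted
    # here) and hands it to the regionally appropriate load balancer.
    balancer = balancers_by_region[device_region]
    return balancer.assign(device_id, device_region)
```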

These and other advantages will become apparent to those skilled in the relevant art upon a reading of the following descriptions and a study of the several examples of the drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 depicts a diagram of an example of a system configured to couple a network device to a regional network device management system.

FIG. 2 depicts a diagram of an example of a system configured to couple a network device to a network device management engine and monitor the network device management engine.

FIG. 3 depicts a diagram of an example of a load balancer system.

FIG. 4 depicts a flowchart of an example of a method for assigning a network device to a regional network device management system.

FIG. 5 depicts a flowchart of an example of a method of a load balancer system for assigning a network device to a network device management engine.

FIG. 6 depicts a flowchart of an example of a method by which a network device managed by a network device management engine determines that the network device management engine has failed.

DETAILED DESCRIPTION OF THE INVENTION

FIG. 1 depicts a diagram 100 of an example of a system configured to couple a network device to a regional network device management system. The system includes an interregional redirector system 102, a load balancer system 104, regional network device management systems 106-1 . . . 106-n, a computer readable medium 108, and network devices 110-1 . . . 110-n. As used in this paper, a system can be implemented as an engine or a plurality of engines.

While the system is shown to include multiple network devices 110-1 . . . 110-n, in a specific implementation, the system can include only one network device (e.g. 110-1). The network devices 110-1 . . . 110-n are coupled to client devices 112-1 . . . 112-n. Each client device can be coupled to a single network device 110-1 . . . 110-n (e.g. client device 112-1) or can be coupled to more than one network device (e.g. client device 112-2). The client devices 112-1 . . . 112-n can include a client wireless device, such as a laptop computer or a smart phone. The client devices 112-1 . . . 112-n can also include a repeater or a plurality of linked repeaters. Therefore, the client devices 112-1 . . . 112-n can comprise a plurality of repeaters and a client wireless device coupled together as a chain.

A network device, as is used in this paper, can be an applicable device used in connecting a client device to a network. For example, a network device can be a virtual private network (hereinafter referred to as “VPN”) gateway, a router, an access point (hereinafter referred to as “AP”), or a device switch. The network devices 110-1 . . . 110-n can be integrated as part of router devices or as stand-alone devices coupled to upstream router devices. The network devices 110-1 . . . 110-n can be coupled to the client devices 112-1 . . . 112-n through either a wireless or a wired medium. The wireless connection may or may not be IEEE 802.11-compatible. In this paper, 802.11 standards terminology is used by way of relatively well-understood example to discuss implementations that include wireless techniques that connect stations through a wireless medium. A station, as used in this paper, may be referred to as a device with a media access control (MAC) address and a physical layer (PHY) interface to a wireless medium that complies with the IEEE 802.11 standard. Thus, for example, client devices 112-1 . . . 112-n and network devices 110-1 . . . 110-n with which the client devices 112-1 . . . 112-n associate can be referred to as stations, if applicable. IEEE 802.11a-1999, IEEE 802.11b-1999, IEEE 802.11g-2003, IEEE 802.11-2007, and IEEE 802.11n TGn Draft 8.0 (2009) are incorporated by reference.

As used in this paper, a system that is 802.11 standards-compatible or 802.11 standards-compliant complies with at least some of one or more of the incorporated documents' requirements and/or recommendations, or requirements and/or recommendations from earlier drafts of the documents, and includes Wi-Fi systems. Wi-Fi is a non-technical description generally correlated with the IEEE 802.11 standards, as well as Wi-Fi Protected Access (WPA) and WPA2 security standards, and the Extensible Authentication Protocol (EAP) standard. In alternative implementations, a station may comply with a different standard than Wi-Fi or IEEE 802.11 and may be referred to as something other than a “station,” and may have different interfaces to a wireless or other medium.

IEEE 802.3 is a working group and a collection of IEEE standards produced by the working group defining the physical layer and data link layer's MAC of wired Ethernet. This is generally a local area network technology with some wide area network applications. Physical connections are typically made between nodes and/or infrastructure devices (hubs, switches, routers) by various types of copper or fiber cable. IEEE 802.3 is a technology that supports the IEEE 802.1 network architecture. As is well-known in the relevant art, IEEE 802.11 is a working group and collection of standards for implementing wireless local area network (WLAN) computer communication in the 2.4, 3.6 and 5 GHz frequency bands. The base version of the standard IEEE 802.11-2007 has had subsequent amendments. These standards provide the basis for wireless network products using the Wi-Fi brand. IEEE 802.1 and 802.3 are incorporated by reference.

The network devices 110-1 . . . 110-n are coupled to the interregional redirector system 102, the load balancer system 104 and regional network device management systems 106-1 . . . 106-n through a computer-readable medium 108. The computer-readable medium 108 is intended to represent a variety of potentially applicable technologies. For example, the computer-readable medium 108 can be used to form a network or part of a network. Where two components are co-located on a device, the computer-readable medium 108 can include a bus or other data conduit or plane. Where a first component is co-located on one device and a second component is located on a different device, the computer-readable medium 108 can include a wireless or wired back-end network or LAN. The computer-readable medium 108 can also encompass a relevant portion of a WAN or other network, if applicable.

The computer-readable medium 108, the interregional redirector system 102, the load balancer system 104, the regional network device management systems 106-1 . . . 106-n, and other applicable systems described in this paper can be implemented as parts of a computer system or a plurality of computer systems. A computer system, as used in this paper, is intended to be construed broadly. In general, a computer system will include a processor, memory, non-volatile storage, and an interface. A typical computer system will usually include at least a processor, memory, and a device (e.g., a bus) coupling the memory to the processor. The processor can be, for example, a general-purpose central processing unit (CPU), such as a microprocessor, or a special-purpose processor, such as a microcontroller.

The memory can include, by way of example but not limitation, random access memory (RAM), such as dynamic RAM (DRAM) and static RAM (SRAM). The memory can be local, remote, or distributed. As used in this paper, the term “computer-readable storage medium” is intended to include only physical media, such as memory. As used in this paper, a computer-readable medium is intended to include all mediums that are statutory (e.g., in the United States, under 35 U.S.C. 101), and to specifically exclude all mediums that are non-statutory in nature to the extent that the exclusion is necessary for a claim that includes the computer-readable medium to be valid. Known statutory computer-readable mediums include hardware (e.g., registers, random access memory (RAM), non-volatile (NV) storage, to name a few), but may or may not be limited to hardware.

The bus can also couple the processor to the non-volatile storage. The non-volatile storage is often a magnetic floppy or hard disk, a magnetic-optical disk, an optical disk, a read-only memory (ROM), such as a CD-ROM, EPROM, or EEPROM, a magnetic or optical card, or another form of storage for large amounts of data. Some of this data is often written, by a direct memory access process, into memory during execution of software on the computer system. The non-volatile storage can be local, remote, or distributed. The non-volatile storage is optional because systems can be created with all applicable data available in memory.

Software is typically stored in the non-volatile storage. Indeed, for large programs, it may not even be possible to store the entire program in the memory. Nevertheless, it should be understood that for software to run, if necessary, it is moved to a computer-readable location appropriate for processing, and for illustrative purposes, that location is referred to as the memory in this paper. Even when software is moved to the memory for execution, the processor will typically make use of hardware registers to store values associated with the software, and local cache that, ideally, serves to speed up execution. As used herein, a software program is assumed to be stored at an applicable known or convenient location (from non-volatile storage to hardware registers) when the software program is referred to as “implemented in a computer-readable storage medium.” A processor is considered to be “configured to execute a program” when at least one value associated with the program is stored in a register readable by the processor.

In one example of operation, a computer system can be controlled by operating system software, which is a software program that includes a file management system, such as a disk operating system. One example of operating system software with associated file management system software is the family of operating systems known as Windows® from Microsoft Corporation of Redmond, Wash., and their associated file management systems. Another example of operating system software with its associated file management system software is the Linux operating system and its associated file management system. The file management system is typically stored in the non-volatile storage and causes the processor to execute the various acts required by the operating system to input and output data and to store data in the memory, including storing files on the non-volatile storage.

The bus can also couple the processor to the interface. The interface can include one or more input and/or output (I/O) devices. The I/O devices can include, by way of example but not limitation, a keyboard, a mouse or other pointing device, disk drives, printers, a scanner, and other I/O devices, including a display device. The display device can include, by way of example but not limitation, a cathode ray tube (CRT), liquid crystal display (LCD), or some other applicable known or convenient display device. The interface can include one or more of a modem or network interface. It will be appreciated that a modem or network interface can be considered to be part of the computer system. The interface can include an analog modem, ISDN modem, cable modem, token ring interface, satellite transmission interface (e.g. “direct PC”), or other interfaces for coupling a computer system to other computer systems. Interfaces enable computer systems and other devices to be coupled together in a network.

The computer systems described throughout this paper can be compatible with or implemented through one or a plurality of cloud-based computing systems. As used in this paper, a cloud-based computing system is a system that provides virtualized computing resources, software and/or information to client devices. The computing resources, software and/or information can be virtualized by maintaining centralized services and resources that the client devices can access over a communication interface, such as a network. “Cloud” may be a marketing term and for the purposes of this paper can include any of the networks described herein. The cloud-based computing system can involve a subscription for services or use a utility pricing model. Users can access the protocols of the cloud-based computing system through a web browser or other container application located on their client device.

The computer systems described throughout this paper can be implemented as or can include engines to perform the functions of each system. An engine, as used in this paper, includes a dedicated or shared processor and, typically, firmware or software modules executed by the processor. Depending upon implementation-specific or other considerations, an engine can be centralized or its functionality distributed. An engine can include special purpose hardware, firmware, or software embodied in a computer-readable medium for execution by the processor.

The engines described throughout this paper can be cloud-based engines. A cloud-based engine is an engine that can run applications and/or functionalities using a cloud-based computing system. All or portions of the applications and/or functionalities can be distributed across multiple computing devices, and need not be restricted to only one computing device. In some embodiments, the cloud-based engines can execute functionalities and/or modules that end users access through a web browser or container application without having the functionalities and/or modules installed locally on the end-users' computing devices.

The computer systems described throughout this paper can include datastores. A datastore, as described in this paper, can be a cloud-based datastore compatible with a cloud-based computing system.

The regional network device management systems 106-1 . . . 106-n can function to manage the network devices 110-1 . . . 110-n. Each regional network device management system 106-1 . . . 106-n can include a plurality of engines that manage the network devices 110-1 . . . 110-n. The engines can be grouped into regional network device management systems 106-1 . . . 106-n based upon regions and subregions of the network devices 110-1 . . . 110-n that the engines manage. Therefore the regional network device management systems 106-1 . . . 106-n can be characterized by the regions and subregions of the network devices 110-1 . . . 110-n that the engines within a specific network device management system 106-1 . . . 106-n manage. For example, the engines that manage network devices 110-1 . . . 110-n in the same region or subregion can be grouped into the same regional network device management system (e.g. 106-1). As a result, the regional network device management systems 106-1 . . . 106-n can be regionally unique in that they contain engines that manage network devices 110-1 . . . 110-n within specific regions or subregions.

As the regional network device management systems 106-1 . . . 106-n can be implemented as a cloud-based system, and as the regional network device management systems 106-1 . . . 106-n can be characterized by the region and subregions of the network devices 110-1 . . . 110-n, the regional network device management systems 106-1 . . . 106-n can be organized or located at regions within the cloud based upon the regions and subregions of the network devices 110-1 . . . 110-n. Specifically, the regional network device management systems 106-1 . . . 106-n can be organized or located at regions within the cloud based upon the regions or the subregions of the network devices 110-1 . . . 110-n that the regional network device management systems 106-1 . . . 106-n manage. In a specific implementation, the subregions of the network devices 110-1 . . . 110-n together form a region of the network devices 110-1 . . . 110-n.

The regions or subregions of the network devices 110-1 . . . 110-n can be defined based upon geography, an enterprise network or a combination of both geography and an enterprise network. In a specific implementation, the region can be defined based upon geography to include the network devices 110-1 . . . 110-n associated with or located within a geographical area or location, such as a city or a building within a city. Similarly, a subregion can be defined to include the network devices 110-1 . . . 110-n located in or associated with a geographical area or location within the geographical area or location used to define the region. For example, the region can be defined to include the network devices 110-1 . . . 110-n located in or associated with a state, while the subregion can be defined to include the network devices 110-1 . . . 110-n located in or associated with a city in the state that defines the region. In another implementation, the region can be defined based upon an enterprise to include the network devices 110-1 . . . 110-n associated with or used in an enterprise network. In yet another implementation, the region can be defined based upon a combination of both geography and an enterprise to include the network devices 110-1 . . . 110-n associated with or located within a geographical location or area within an enterprise network. For example, the region can include the network devices 110-1 . . . 110-n associated with or located within a specific office site of the enterprise.

The regions of the network devices 110-1 . . . 110-n can not only be defined according to the previously described classifications but can also be defined based upon the number of network devices 110-1 . . . 110-n in or associated with the region. In a specific implementation, the region can be defined to include only a basic service set (BSS). A BSS includes one network device and all of the stations or other devices (i.e. repeaters) coupled to the network device. The BSS can be identified by a unique basic service set identification (BSSID). The BSSID can be the MAC address of the network device in the BSS. In another implementation, the region can be defined to include an extended service set (ESS) that comprises a plurality of BSSs. The plurality of BSSs can be interconnected so that stations or devices are connected to multiple network devices within the ESS. The ESS can be identified by a unique extended service set identification (ESSID). The ESSID can be the MAC addresses of the network devices in the ESS.
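
As a rough illustration of the BSS- and ESS-based region definitions above, the sketch below models a BSS keyed by its network device's MAC address (its BSSID) and an ESS identified by the MAC addresses of its member devices. The class names are hypothetical and follow the definitions in this paragraph rather than any particular implementation.

```python
# Sketch of BSS/ESS region definitions from this paragraph; class names are
# hypothetical. The BSSID is the MAC address of the BSS's network device, and
# the ESSID here is the set of member devices' MAC addresses, per the text.
from dataclasses import dataclass, field
from typing import FrozenSet, List


@dataclass
class BasicServiceSet:
    network_device_mac: str                             # one network device per BSS
    stations: List[str] = field(default_factory=list)   # stations/repeaters coupled to it

    @property
    def bssid(self) -> str:
        return self.network_device_mac


@dataclass
class ExtendedServiceSet:
    members: List[BasicServiceSet] = field(default_factory=list)  # a plurality of BSSs

    @property
    def essid(self) -> FrozenSet[str]:
        return frozenset(bss.bssid for bss in self.members)
```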

The system shown in FIG. 1 includes a load balancer system 104 coupled to the regional network device management systems 106-1 . . . 106-n and the network devices 110-1 . . . 110-n through the computer readable medium 108. The load balancer system 104 is also coupled to the interregional redirector system 102 through the computer-readable medium 108. The system can include multiple load balancer systems 104 that can be coupled to and associated with different regional network device management systems 106-1 . . . 106-n. In a specific implementation, the specific regional network device management systems 106-1 . . . 106-n that the load balancer system 104 is coupled to and associated with can be based upon the regions and subregions of the network devices 110-1 . . . 110-n that the engines within the specific regional network device management systems 106-1 . . . 106-n manage. For example, a load balancer system 104 can be coupled to and associated with the regional network device management systems 106-1 . . . 106-n that manage the network devices 110-1 . . . 110-n within an entire state. As the load balancer systems 104 can be coupled to and associated with specific regional network device management systems 106-1 . . . 106-n based on the regions or the subregions of the network devices 110-1 . . . 110-n that the engines within the specific regional network device management systems 106-1 . . . 106-n manage, the load balancer systems 104 can be regionally unique. For example, the load balancer systems 104 can be regionally unique in that they are associated with regional network device management systems 106-1 . . . 106-n that manage network devices 110-1 . . . 110-n in the same enterprise network.

Additionally, a specific regional network device management system 106-1 . . . 106-n can be coupled to or associated with multiple load balancer systems 104 based upon the regions and subregions of the network devices 110-1 . . . 110-n managed by the engines in the specific regional network device management system 106-1 . . . 106-n. For example, a specific regional network device management system 106-1 . . . 106-n can be coupled to or associated with a first load balancer system 104 because the specific regional network device management system 106-1 . . . 106-n contains engines that manage network devices 110-1 . . . 110-n in a specific region, such as a state. Additionally, the specific regional network device management system 106-1 . . . 106-n can also be coupled to or associated with a second load balancer system 104 because the specific regional network device management system 106-1 . . . 106-n contains engines that manage network devices 110-1 . . . 110-n in a subregion of the specific region, such as a city within the state.

The load balancer system 104, as will be discussed in greater detail later with respect to FIG. 2, can function to monitor the usage of specific engines grouped into regional network device management systems 106-1 . . . 106-n as the engines within the regional network device management systems 106-1 . . . 106-n manage various network devices 110-1 . . . 110-n. The load balancer system 104 can also function to assign a network device 110-1 . . . 110-n to one or a plurality of engines within one or more of the regional network device management systems 106-1 . . . 106-n so that the assigned engine or engines can manage the assigned network devices 110-1 . . . 110-n. The load balancer system 104 can also function to assign a network device 110-1 . . . 110-n to another load balancer system 104 that can then assign the network devices 110-1 . . . 110-n to another load balancer system 104 or one or a plurality of engines within one or more of the regional network device management systems 106-1 . . . 106-n. In a specific implementation, the load balancer systems 104 can assign a newly purchased network device 110-1 . . . 110-n to either or both another load balancer system 104 and engines in a regional network device management system 106-1 . . . 106-n.

The load balancer systems 104 can assign the network devices 110-1 . . . 110-n to specific engines within the regional network device management systems 106-1 . . . 106-n based upon the regions or subregions of the other network devices 110-1 . . . 110-n that the specific engines manage. The load balancer systems 104 can assign the network devices 110-1 . . . 110-n to other load balancer systems. The other load balancer systems can be coupled to or associated with specific engines within the regional network device management systems 106-1 . . . 106-n. Specifically, the other load balancer systems can be associated with specific engines within the regional network device management systems 106-1 . . . 106-n based upon the regions or subregions of the network devices 110-1 . . . 110-n that the other load balancer systems assign.

The system shown in the example of FIG. 1 includes an interregional redirector system 102. The interregional redirector system 102 is coupled to the network devices 110-1 . . . 110-n and the load balancer systems 104 through the computer readable medium 108. In a specific implementation, the interregional redirector system 102 is not associated with any specific region. Specifically, the interregional redirector system 102 can be coupled to all of the load balancer systems 104, and through the load balancer systems to all of the regional network device management systems 106-1 . . . 106-n. As the regional network device management systems 106-1 . . . 106-n and the load balancer systems 104 can be regionally unique, and as the interregional redirector system 102 can be coupled to all of the regional network device management systems 106-1 . . . 106-n, the interregional redirector system 102 is associated with every region or subregion. Therefore, the interregional redirector system 102 is not unique to a single region, but is rather globally applicable to at least a subplurality of the regions.

In being coupled to the network devices 110-1 . . . 110-n, the interregional redirector system 102 can function to receive identification information from the network devices 110-1 . . . 110-n and validate the network devices. In being coupled to the load balancer systems 104, the interregional redirector system 102 can further function to assign specific network devices 110-1 . . . 110-n to one or a plurality of load balancer systems 104. As the load balancer systems can be regionally unique, the interregional redirector system 102 can assign the network devices 110-1 . . . 110-n to one or a plurality of specific load balancer systems 104 based upon the regions or subregions of the network devices 110-1 . . . 110-n that are being assigned.

In a specific implementation, a newly purchased network device 110-1 . . . 110-n is configured to be directed to the interregional redirector system 102 when the network device 110-1 . . . 110-n is first powered on by the purchaser of the network device. In being directed to the interregional redirector system 102, the network device 110-1 . . . 110-n can send the identification information of the network device to the interregional redirector system 102. The interregional redirector system 102 can both validate the network device 110-1 . . . 110-n and assign the newly purchased network device 110-1 . . . 110-n, based on the region or subregion of the network device, to one or a plurality of load balancer systems 104. The one or a plurality of load balancer systems 104 can then assign the newly purchased network device 110-1 . . . 110-n to one or a plurality of regional network device management systems 106-1 . . . 106-n. In another example, additional server resources, such as additional new regional network device management systems, can be added and the load balancer system 104 can assign the newly purchased network device 110-1 . . . 110-n to an added new regional network device management system.

In a specific implementation, the regions or subregions of the network devices 110-1 . . . 110-n can be part of the identification information received by the interregional redirector system 102 from the network devices 110-1 . . . 110-n. In another implementation, the interregional redirector system 102 can trace through the computer readable medium 108 to determine the region or subregion of the newly purchased network device 110-1 . . . 110-n. Alternatively, the interregional redirector system 102 can trace through the computer readable medium 108 the regions or subregions of already activated network devices 110-1 . . . 110-n that neighbor the newly purchased network device 110-1 . . . 110-n either physically or on a network structure level to determine the region or subregion of the newly purchased network device 110-1 . . . 110-n. The interregional redirector system 102 can determine the regions or subregions of neighboring network devices 110-1 . . . 110-n based upon the MAC addresses of the neighboring network devices 110-1 . . . 110-n. In an alternate implementation, the interregional redirector system 102 can determine the region or subregion of the newly purchased network device 110-1 . . . 110-n through the identity of the purchaser of the network device. Specifically, the interregional redirector system 102 can use the MAC address of the newly purchased network device 110-1 . . . 110-n, which can be received from the newly purchased network device 110-1 . . . 110-n, to determine the identity of the purchaser of the network device 110-1 . . . 110-n and the region or subregion of the network device 110-1 . . . 110-n. For example, the interregional redirector system 102 can determine that company A purchased the network device 110-1 . . . 110-n and, because company A occupies a specific location within a region, such as a city, determine that the city is the region of the newly purchased network device 110-1 . . . 110-n.
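
The three ways of determining a device's region described above (region reported in the identification information, inference from neighboring devices' MAC addresses, and lookup through the purchaser's identity) could be combined roughly as follows. The function name and lookup tables are assumptions for illustration only.

```python
# Illustrative region lookup for a newly powered-on device; the helper name
# and lookup tables are hypothetical.
from typing import Dict, Optional


def determine_region(device_info: Dict[str, object],
                     neighbor_regions: Dict[str, str],
                     purchaser_regions: Dict[str, str]) -> Optional[str]:
    # 1. The device may report its region or subregion directly in the
    #    identification information sent to the interregional redirector.
    region = device_info.get("region")
    if region:
        return str(region)

    # 2. Otherwise, infer the region from already-activated neighboring
    #    devices, keyed here by their MAC addresses.
    for mac in device_info.get("neighbor_macs", []):
        if mac in neighbor_regions:
            return neighbor_regions[mac]

    # 3. Otherwise, map the device's own MAC address to its purchaser's known
    #    location (e.g. the city company A occupies) and use that as the region.
    return purchaser_regions.get(str(device_info.get("mac")))


# Example with hypothetical data: the region is inferred from a neighbor.
info = {"mac": "00:11:22:33:44:55", "neighbor_macs": ["aa:bb:cc:dd:ee:ff"]}
print(determine_region(info, {"aa:bb:cc:dd:ee:ff": "springfield"}, {}))
```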

FIG. 2 depicts a diagram 200 of an example of a system configured to couple a network device to a network device management engine and monitor the network device management engine. The system includes a regional network device management system 202 coupled to network devices 204-1, 204-2 and 204-3. While only three network devices 204-1, 204-2 and 204-3 are shown, the regional network device management system 202 can be coupled to more or fewer than three network devices. The system can also include an administrator system 214 coupled to the regional network device management system 202.

The regional network device management system 202 includes a load balancer system 206, network device management engines 208-1, 208-2 and 208-3 and a network device management engine message queue 210. While only three network device management engines 208-1, 208-2 and 208-3 are shown, the regional network device management system 202 can include more or fewer than three network device management engines.

The network device management engines 208-1, 208-2 and 208-3, within the regional network device management system 202, are coupled to the network devices 204-1, 204-2 and 204-3. A network device (e.g. 204-3) can be coupled to more than one network device management engine (e.g. 208-2 and 208-3). The network device management engines (e.g. 208-1) can manage the flow of data into and out of the network devices (e.g. 204-1) coupled to the specific network device management engines (e.g. 208-1). The network device management engines (e.g. 208-1) can be regionally unique in that they manage the flow of data into and out of the network devices in specific regions or subregions. Furthermore, the network device management engines can be grouped into a network device management system 202 based upon the regions or subregions of the network devices that the network device management engines manage. For example, network device management engines 208-1, 208-2 and 208-3 that manage network devices 204-1, 204-2 and 204-3 within the same region or subregion can be grouped into the same network device management system 202.

In a specific implementation, the network device management engines 208-1, 208-2 and 208-3 can manage the flow of data into and out of the network devices 204-1, 204-2 and 204-3 by controlling routers connected to the network devices. In another implementation, the network device management engines 208-1, 208-2 and 208-3 can control the flow of data into and out of the network devices 204-1, 204-2 and 204-3 by themselves functioning as routers and switching between different data paths coupled to the network device management engines.

The network device management engines 208-1, 208-2 and 208-3 can each be a server that performs the previously described functions. In a specific implementation, the network device management engines 208-1, 208-2 and 208-3 can be configured in accordance with the control and provisioning of wireless access points (CAPWAP) protocol. Specifically, the network device management engines 208-1, 208-2 and 208-3 can be CAPWAP servers. CAPWAP servers are servers that are configured in accordance with the CAPWAP protocol. The CAPWAP protocol is similar to the Lightweight Access Point Protocol (LWAPP), but differs in that it includes the integration of a full Datagram Transport Layer Security (DTLS) tunnel. Data is transmitted through the CAPWAP protocol over an unencrypted data channel, while control messages are transmitted in the DTLS tunnel. The CAPWAP protocol is described in RFC 5415 (2009), which is hereby incorporated by reference, and in IEEE 802.11, which was previously incorporated by reference.
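
For orientation only, the CAPWAP channel split described above can be pictured as in the sketch below, which assumes the standard ports from RFC 5415 (control over a DTLS tunnel on UDP port 5246, data over an unencrypted channel on UDP port 5247). It is a descriptive sketch, not a working CAPWAP stack.

```python
# Descriptive sketch of the CAPWAP channel split (RFC 5415): control messages
# travel inside a DTLS tunnel, while data travels on a separate, unencrypted
# channel. This mapping is illustrative only, not a working CAPWAP stack.
CAPWAP_CHANNELS = {
    "control": {"udp_port": 5246, "transport": "DTLS"},  # encrypted control tunnel
    "data": {"udp_port": 5247, "transport": "UDP"},      # unencrypted data channel
}
```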

In a specific implementation, the network devices 204-1, 204-2 and 204-3 can function to determine whether a network device management engine 208-1, 208-2 and 208-3 that the network devices are coupled to has failed. For example, if a network device 204-1, 204-2 and 204-3 does not receive traffic from a network device management engine 208-1, 208-2 and 208-3 that is coupled to the network device, then the network device can determine that the network device management engine has failed. Further in the specific implementation, the network device 204-1, 204-2 and 204-3 can alert the load balancer system 206 to a failure of a network device management engine 208-1, 208-2 and 208-3. For example, upon detecting a failure in a network device management engine 208-1, 208-2 and 208-3, the network device can generate and send a network device management engine failure message to the load balancer system 206. In one example, the network device management engine failure message identifies the specific network device management engine 208-1, 208-2 and 208-3 that has failed.
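
A minimal sketch of the device-side check described in this paragraph is shown below; the timeout value, timestamp handling, and failure-message fields are illustrative assumptions.

```python
# Sketch of a network device deciding that its management engine has failed
# because no traffic has arrived within a timeout; the timeout and message
# fields are hypothetical.
import time
from typing import Optional


def check_engine_failure(last_traffic_ts: float, engine_id: str,
                         timeout_s: float = 60.0) -> Optional[dict]:
    """Return a failure message for the load balancer, or None if healthy."""
    if time.time() - last_traffic_ts <= timeout_s:
        return None
    # The failure message identifies the specific management engine that failed.
    return {"type": "engine_failure", "engine_id": engine_id,
            "detected_at": time.time()}
```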

The network device management engines 208-1, 208-2 and 208-3 are coupled to the network device management engine message queue 210. The network device management engine message queue 210 is coupled to the load balancer system 206. The network device management engine message queue 210 can function to receive status messages sent from the network device management engines 208-1, 208-2 and 208-3. The status messages can be sent from the network device management engines 208-1, 208-2 and 208-3 periodically, after a predetermined interval of time. In another implementation the status messages can be sent from the network device management engines 208-1, 208-2 and 208-3 when the load balancer system 206 sends a status request to the network device management engines 208-1, 208-2 and 208-3.

The status messages sent from the network device management engines 208-1, 208-2 and 208-3 can include the amount of used bandwidth and available bandwidth that exist on network devices coupled to the specific network device management engines 208-1, 208-2 and 208-3. The status messages can also include information about the number of network devices 204-1, 204-2 and 204-3 that the network device management engines 208-1, 208-2 and 208-3 are managing. The status messages can further include the amount of bandwidth on the network device management engines 208-1, 208-2 and 208-3 that each network device 204-1, 204-2 and 204-3 is using. In a specific implementation, the regions and subregions of the network devices 204-1, 204-2 and 204-3 that the network device management engines 208-1, 208-2 and 208-3 are managing are included in the status messages. Further, the status messages can include the amount of memory available to and being used by the network device management engines 208-1, 208-2 and 208-3, and how much memory of the network device management engines is being used by each network device 204-1, 204-2 and 204-3 managed by the network device management engines 208-1, 208-2 and 208-3.
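
One possible shape for such a status message is sketched below. The field names mirror the contents described in this paragraph but are otherwise assumptions; no particular wire format is implied by the specification.

```python
# Hypothetical shape of a management-engine status message placed on the
# message queue; field names mirror this paragraph but imply no wire format.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class EngineStatusMessage:
    engine_id: str
    sent_at: float                       # when the message was queued
    used_bandwidth: float                # bandwidth in use on managed devices
    available_bandwidth: float           # bandwidth still available
    managed_device_count: int            # number of devices the engine manages
    memory_available: float = 0.0        # memory available to the engine
    memory_used: float = 0.0             # memory in use on the engine
    per_device_bandwidth: Dict[str, float] = field(default_factory=dict)
    per_device_memory: Dict[str, float] = field(default_factory=dict)
    device_regions: List[str] = field(default_factory=list)  # regions/subregions managed
```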

The load balancer system 206 is also coupled to the network devices 204-1, 204-2 and 204-3. The load balancer system 206 can become coupled to network devices 204-1, 204-2 and 204-3 when the network device 204-1, 204-2 and 204-3 is assigned to the specific load balancer system 206 of a specific regional network device management system 202. The network devices 204-1, 204-2 and 204-3 can be assigned to a specific load balancer system 206 within a specific regional network device management system 202 by either another load balancer system 104 or the interregional redirector system 102, shown in FIG. 1. As discussed previously with FIG. 1, the network device 204-1, 204-2 and 204-3 can be assigned to a specific regional network device management system 202 based on the region or subregion of the network device 204-1, 204-2 and 204-3.

The load balancer system 206 can function to assign a network device 204-1, 204-2 and 204-3 to a network device management engine 208-1, 208-2 and 208-3 when the network device 204-1, 204-2 and 204-3 is assigned to the load balancer system 206. In a specific implementation, the load balancer system 206 can assign a newly purchased network device 204-1, 204-2 and 204-3 to a network device management engine 208-1, 208-2 and 208-3. The load balancer system 206 can assign a network device 204-1, 204-2 and 204-3 to a network device management engine 208-1, 208-2 and 208-3 based upon the region or subregion of the network devices already assigned to a network device management engine. For example, the load balancer system 206 can assign a network device 204-1, 204-2 and 204-3 to a network device management engine 208-1, 208-2 and 208-3 that already manages network devices in the same or related region or subregion of the network device that is being assigned.

Additionally, the load balancer system 206 can assign the network device 204-1, 204-2 and 204-3 to one or a plurality of network device management engines 208-1, 208-2 and 208-3 based in part upon the status message that the load balancer system 206 reads for each network device management engine 208-1, 208-2 and 208-3 from the network device management engine message queue 210. For example, if the status messages retrieved by the load balancer system 206 indicate that network device management engine 208-1 has a greater amount of available bandwidth than network device management engine 208-2, the load balancer system 206 can assign network device 204-1 to network device management engine 208-1. As a result, network device 204-1 is managed by network device management engine 208-1. In another implementation, the load balancer system 206 can also assign the network device 204-1, 204-2 and 204-3 to a network device management engine 208-1, 208-2 and 208-3 based not only on the available bandwidth of the network device management engines, but also on the expected amount of resources, such as bandwidth, that the specific network device 204-1, 204-2 and 204-3 will use from the network device management engines.
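
The bandwidth-aware assignment described above might be expressed as in the sketch below, which prefers engines already serving the device's region and then picks the engine with the most headroom after subtracting the device's expected bandwidth. The selection rule and data layout are assumptions for illustration.

```python
# Sketch of picking a management engine using the latest status messages;
# statuses maps engine_id -> a dict with "available_bandwidth" and
# "device_regions" keys (mirroring the status-message sketch above). The
# selection rule is an assumption, not the claimed method.
from typing import Dict, Optional


def pick_engine(statuses: Dict[str, dict], device_region: str,
                expected_bandwidth: float) -> Optional[str]:
    best_id, best_headroom = None, float("-inf")
    for engine_id, status in statuses.items():
        # Prefer engines already managing devices in the same region/subregion.
        if device_region not in status.get("device_regions", []):
            continue
        # Headroom left after accounting for the device's expected usage.
        headroom = status["available_bandwidth"] - expected_bandwidth
        if headroom > best_headroom:
            best_id, best_headroom = engine_id, headroom
    return best_id
```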

The load balancer system 206 can also function to monitor the status of the network device management engines 208-1, 208-2 and 208-3 and reassign the network devices 204-1, 204-2 and 204-3 to other network device management engines in the event of a failure of the network device management engine or engines 208-1, 208-2 and 208-3 to which specific network devices are assigned. For example, the load balancer system 206 can detect a failure in network device management engine 208-1 connected along dashed line 212 to network device 204-2. In response to the failure of network device management engine 208-1, the load balancer system 206 can assign the network device 204-2 to network device management engine 208-2 that is not failing.

In a specific implementation, the load balancer system 206 detects a failure of a network device management engine 208-1, 208-2 and 208-3 when the network device management engine does not send a status message to the network device management engine message queue 210. In another implementation, the load balancer system 206 detects a failure of a network device management engine 208-1, 208-2 and 208-3 when the engine does not send a specific number of status messages to the network device management engine message queue 210. The number of status messages that a network device management engine 208-1, 208-2 and 208-3 fails to send to the network device management engine message queue 210 before the load balancer system 206 determines that a failure has occurred can be predefined. In another implementation, the load balancer system 206 detects a failure of a network device management engine 208-1, 208-2 and 208-3 when the status message sent by a network device management engine indicates that the resources of the engine reach a certain level. For example, the load balancer system 206 can detect a failure of one of the specific network device management engine 208-1, 208-2 and 208-3 when the amount of available bandwidth of the specific network device management engines falls below a certain predefined available bandwidth level.
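
The failure tests described in this paragraph can be summarized as two checks, sketched below with assumed threshold values (reporting interval, allowed missed reports, and minimum available bandwidth).

```python
# Sketch of the load balancer's failure tests: a missed-status-message count
# and a minimum-bandwidth floor. Thresholds and field names are assumptions.
def engine_has_failed(last_status: dict, now: float,
                      report_interval_s: float = 30.0,
                      max_missed_reports: int = 3,
                      min_available_bandwidth: float = 1.0) -> bool:
    # Case 1: a predefined number of expected status messages never arrived.
    missed_reports = (now - last_status["sent_at"]) / report_interval_s
    if missed_reports >= max_missed_reports:
        return True
    # Case 2: the engine's reported available bandwidth has fallen below a
    # predefined level.
    return last_status["available_bandwidth"] < min_available_bandwidth
```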

In a specific implementation, the load balancer system 206 functions to detect a failure of a network device management engine based on network device management engine failure messages generated by the network devices 204-1, 204-2 and 204-3. For example, if the load balancer system 206 receives a network device management engine failure message from the network devices 204-1, 204-2 and 204-3 identifying the specific network device management engine 208-1, 208-2 and 208-3 that has failed, then the load balancer system 206 can determine/detect that the specific network device management engine 208-1, 208-2 and 208-3 has failed.

In a specific implementation, if the load balancer system 206 detects a failure in one of the network device management engines 208-1, 208-2 and 208-3, the load balancer system 206 can reassign the one or plurality of network devices 204-1, 204-2 and 204-3 connected to the network device management engine to other network device management engines in either the same regional network device management system 202 or different regional network device management systems. Alternatively, the load balancer system 206 can reassign all of the network devices 204-1, 204-2 and 204-3 connected to a failed network device management engine 208-1, 208-2 and 208-3 to one or a plurality of other network device management engines. In yet another alternative, the load balancer system can reassign a portion of the network devices 204-1, 204-2 and 204-3 connected to a failed network device management engine 208-1, 208-2 and 208-3 so that the failed network device management engine is cured and is no longer failing. For example, if a network device management engine is failing due to a lack of available bandwidth, the load balancer system 206 can reassign a portion of the network devices assigned to the failed network device management engine so that the available bandwidth of the failing network device management engine is increased to a level where the network device management engine is no longer failing.
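
The partial-reassignment option described above, in which only enough devices are moved to cure the failing engine, might look roughly like the following. The dictionary layout and the target-bandwidth rule are assumptions for illustration.

```python
# Sketch of reassigning only enough devices from a failing engine to restore
# its available bandwidth; the dictionary layout and target value are
# illustrative assumptions.
def shed_load(failed_engine: dict, healthy_engines: list,
              target_available_bandwidth: float) -> None:
    """Each engine dict holds "available_bandwidth" (float) and "devices"
    (a device_id -> bandwidth_used mapping)."""
    # Move the heaviest devices first until the failing engine recovers.
    devices = sorted(failed_engine["devices"].items(),
                     key=lambda kv: kv[1], reverse=True)
    for device_id, used in devices:
        if failed_engine["available_bandwidth"] >= target_available_bandwidth:
            break
        destination = max(healthy_engines,
                          key=lambda e: e["available_bandwidth"])
        destination["devices"][device_id] = used
        destination["available_bandwidth"] -= used
        del failed_engine["devices"][device_id]
        failed_engine["available_bandwidth"] += used
```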

The load balancer system 206 can also be coupled to the administrator system 214, thereby coupling the regional network device management system 202 to the administrator system 214. The load balancer system 206 can send a notification to the administrator system 214 in the event that the load balancer system detects a failure of one of the network device management engines 208-1, 208-2 and 208-3 from the status messages sent to the network device management engine message queue 210. In a specific implementation, when the load balancer system 206 detects a failure of one of the network device management engines 208-1, 208-2 and 208-3 because a specific network device management engine does not send a status message to the network device management engine message queue 210, the load balancer system can send a notification to the administrator system 214. The notification sent to the administrator system can include the reason why the load balancer system 206 detected a fault in a specific network device management engine, such as a failure caused by not sending a status message to the network device management engine message queue 210, or a failure caused by the resources of a specific network device management engine having reached a specific level. The administrator system 214 can include a computer implemented process for fixing the failed network device management engine based upon the reason why the load balancer system 206 detected a failure in the specific network device management engine.
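
A minimal sketch of the administrator notification is shown below, with the two failure reasons taken from this paragraph; the notification format and the notify callable are hypothetical.

```python
# Sketch of the failure notification sent to the administrator system; the
# reason strings and the notify callable are hypothetical.
from typing import Callable


def notify_administrator(notify: Callable[[dict], None], engine_id: str,
                         missed_status: bool, resources_exhausted: bool) -> None:
    reasons = []
    if missed_status:
        reasons.append("no status message received on the message queue")
    if resources_exhausted:
        reasons.append("engine resources reached a predefined level")
    # The notification identifies the failed engine and why it was flagged.
    notify({"engine_id": engine_id, "reasons": reasons})


# Example: print in place of a real administrator system.
notify_administrator(print, "engine-208-1", missed_status=True,
                     resources_exhausted=False)
```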

FIG. 3 is a diagram 300 of an example of a load balancer system 302. The load balancer system 302 can be configured to assign network devices to network device management engines, monitor the status of the network management engines, reassign a network device to a new network device management engine if the network device management engine fails and notify the administrator system of a failure of a network management engine.

In the example of FIG. 3, the load balancer system 302 is coupled through computer-readable medium 304 to an administrator system 306, network devices 308 and the network device management engine message queue 310. The load balancer system 302 includes a message queue access engine 314 coupled through the computer readable medium to the network device management engine message queue 310. The message queue access engine 314 can be configured to retrieve status information of network device management engines from the status messages in the network device management engine message queue 310. The status messages can include information as to the status of the network device management engines, such as the amount of available bandwidth on the network device management engines. The status message also can include information as to when a status message was sent to the network device management engine message queue 310, which can be used to determine whether a network device management engine has stopped sending status messages, and thus may have failed. The message queue access engine 314 can be configured to retrieve status information each time a status message is sent to the network device management engine message queue 310. The status information retrieved by the message queue access engine 314 can be stored on a network device management engine status profiles datastore 318.

The load balancer system 302 can also include a network device access engine 312. The network device access engine 312 can be coupled to network devices 308 coupled to the load balancer system 302 through computer-readable medium 304. The network device access engine 312 can be configured to retrieve or receive information from network devices 308 coupled to the load balancer system 302. In a specific implementation, the network device access engine 312 is configured to retrieve or receive information from newly purchased network devices 308 coupled to the load balancer system 302 for the first time. The newly purchased network devices can become coupled to the load balancer system 302 after being assigned to the load balancer system 302 by either or both another load balancer system or an interregional redirector system, as is shown in FIG. 1. The information retrieved or received by the network device access engine 312 can include the region or subregions of the network device 308. The information can also include the amount of bandwidth that the network device 308 expects to use. The network device access engine 312 can store the information retrieved or received from the network devices 308 on a network device profiles datastore 316.

The load balancer system 302 includes a network device assignment engine 320. The network device assignment engine 320 is coupled to the network device management engine status profiles datastore 318 and the network device profiles datastore 316. The network device assignment engine 320 is also coupled to the network devices 308 through the computer-readable medium 304. The network device assignment engine 320 can function to assign a network device 308 to one or a plurality of network device management engines. Specifically, as the network devices 308 are coupled to the network device assignment engine 320, in assigning the network devices 308 to network device management engines, the network device assignment engine 320 can direct the network devices 308 to couple to network device management engines, so that the engines can manage the flow of data packets into and out of the network devices 308. The network device assignment engine 320 can store the assignment information on the network device management engine assignment profiles datastore 322. The assignment information can include which network devices 308 are assigned to be managed by specific network device management engines.

The network device assignment engine 320 can also function to determine that a network device 308 is assigned to a failing network device management engine and reassign the network device 308 to another one or plurality of network device management engines that are not failing. Specifically, the network device assignment engine 320 can determine that a network device management engine is failing from the information stored in the network device management engine status profiles datastore 318. The network device assignment engine can then determine which network devices 308 are being managed by the specific network device management engine that is failing from the network device management engine assignment profiles datastore 322. The network device assignment engine 320 can then reassign the network devices 308 that are being managed by failing network device management engines to different network device management engines that are not failing. In a specific implementation, in reassigning the network devices 308 to different network device management engines, the network device assignment engine 320 can use the information about the network devices 308 stored in the network device profiles datastore 316. For example, the network device assignment engine 320 can use the information about the region or subregion of the network device 308 to reassign the network device 308 to another network device management engine.

The network device assignment engine 320 can also be coupled to the administrator system notification engine 324. The administrator system notification engine 324 is coupled to the administrator system 306 through the computer-readable medium 304. In a specific implementation, the network device assignment engine 320 can function to initiate the sending of a notification about the failure of a network device management engine to the administrator system 306. Specifically, the network device assignment engine 320 can send failure information about a network device management engine to the administrator system notification engine 324. The failure information can include why the network device assignment engine 320 has determined that a network device management engine has failed. The administrator system notification engine 324 can send a notification to the administrator system 306 that a network device management engine has failed, as can be determined by the network device assignment engine 320. The notification sent by the administrator system notification engine 324 can include the information used by the network device assignment engine 320 to determine that the network device management engine has failed.

FIG. 4 depicts a flowchart 400 of an example of a method for assigning a network device to a regional network device management system. The flowchart starts at module 402 with powering on a network device. In a specific implementation, the network device can be a newly purchased device that is powered on for the first time by the purchaser of the network device.

In the example of FIG. 4, the flowchart continues to module 404 with connecting the network device to the interregional redirector system. The flowchart continues to module 406 where the interregional redirector system receives information about the network device connected to the interregional redirector system at module 404. The information about the network device can include information about the region or subregion of the network device. The information about the network device can also include the MAC address of the network device and information about the purchaser of the network device. The information about the network device can also include the amount of bandwidth that the network device expects to use.

The flowchart then continues to module 408, where the network device is validated. In one example, the interregional redirector system validates the network device by using the MAC address received from the network device. The flowchart continues to module 410, where the network device is assigned to a load balancer system. In one example, the interregional redirector system can assign the network device to a load balancer system based on the region or the subregion of the network device. In another example, the load balancer system can be associated with one or multiple regional network device management systems. In still another example, the region or the subregion of the network device can be determined from the information about the network device received at module 406.
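The sketch below walks through modules 406-410 under stated assumptions: the device information fields, the whitelist-style MAC validation, and the region-to-load-balancer lookup table are all hypothetical placeholders chosen only to make the flow concrete.

```python
# Sketch of the FIG. 4 flow: receive device information, validate by MAC
# address, then pick a load balancer for the device's region. The dataclass
# fields, KNOWN_MACS, and LOAD_BALANCERS are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class NetworkDeviceInfo:
    mac: str
    region: str
    purchaser: str
    expected_bandwidth_mbps: float


KNOWN_MACS = {"00:11:22:33:44:55"}          # e.g. MACs registered at purchase
LOAD_BALANCERS = {"us-west": "lb-us-west", "us-east": "lb-us-east"}


def validate_device(info: NetworkDeviceInfo) -> bool:
    """Module 408: validate the device using its MAC address."""
    return info.mac in KNOWN_MACS


def assign_to_load_balancer(info: NetworkDeviceInfo) -> str:
    """Module 410: choose a load balancer based on the device's region."""
    if not validate_device(info):
        raise ValueError("unknown device MAC")
    return LOAD_BALANCERS[info.region]


if __name__ == "__main__":
    device = NetworkDeviceInfo("00:11:22:33:44:55", "us-west",
                               "Example Corp", 100.0)
    print(assign_to_load_balancer(device))  # lb-us-west
```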

FIG. 5 depicts a flowchart 500 of an example of a method of a load balancer system assigning a network device to a network device management engine. In one example, the flowchart can further include the load balancer system determining whether or not a network device management engine has failed and reassigning network devices that are being managed by the failed network device management engine to other network device management engines.

The flowchart begins at module 502, where a load balancer system receives network device information. The network device information can be received from a network device assigned to the load balancer system or from another load balancer system or interregional redirector system that assigns the network device to the load balancer system. The network device information can include information about the region or the subregion of the network device assigned to the load balancer system. The flowchart continues to module 504, where the load balancer system receives network device management engine information. The network device management engine information can be status information of the network device management engines. The status information can be determined by the load balancer system from messages sent to a network device management engine message queue from network device management engines. The status information can include the amount of bandwidth available on a network device management engine. The status information can also include whether or not the network device management engine has failed. The status information can also include any other information related to network device management engines that has been discussed in this paper.

The flowchart continues to module 506, where the load balancer system assigns a network device to a network device management engine or a plurality of network device management engines for management of the network device. As discussed previously, the load balancer system can assign a network device to a network device management engine based on the region of the network device and the regions or subregions of the other network devices that the assigned network device management engine is managing. The load balancer system can also assign a network device to a network device management engine based on the amount of available bandwidth that the network device management engine has, or any other method described in this paper.
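The following sketch shows one possible selection rule for module 506: among non-failed engines in the device's region, pick the engine reporting the most available bandwidth. The status dictionary shape and the "most spare bandwidth" heuristic are assumptions; the description allows other assignment methods.

```python
# Sketch of module 506: pick a management engine for a device using the
# device's region and the engines' reported available bandwidth.

def choose_management_engine(device_region, engine_status):
    """engine_status: engine_id -> {"region": str, "available_mbps": float,
    "failed": bool}. Returns the chosen engine_id, or None if no engine in
    the device's region is available."""
    candidates = [
        (status["available_mbps"], engine_id)
        for engine_id, status in engine_status.items()
        if status["region"] == device_region and not status["failed"]
    ]
    if not candidates:
        return None
    # Prefer the engine with the most available bandwidth.
    return max(candidates)[1]


if __name__ == "__main__":
    status = {
        "eng-a": {"region": "us-west", "available_mbps": 40.0, "failed": False},
        "eng-b": {"region": "us-west", "available_mbps": 120.0, "failed": False},
        "eng-c": {"region": "us-east", "available_mbps": 500.0, "failed": False},
    }
    print(choose_management_engine("us-west", status))  # eng-b
```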

The flowchart continues to module 508, where the load balancer system retrieves network device management engine status messages from a network device management engine message queue. The status messages can include information as to the amount of available bandwidth that a network device management engine has. The status messages can also include time stamps used to determine when the status message was sent to the network device management engine message queue by the network device management engines. The load balancer system can continuously retrieve status messages from the network device management engine message queue, or at set times when the network device management engines are scheduled to send a status message.
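A minimal sketch of module 508 follows, using Python's standard-library queue as a stand-in for whatever queueing service a deployment would actually use; the message fields ("engine", "available_mbps", "sent_at") are hypothetical.

```python
# Sketch of module 508: drain time-stamped status messages from a
# management-engine message queue without blocking.
import queue
import time


def retrieve_status_messages(message_queue):
    """Pull every currently queued status message."""
    messages = []
    while True:
        try:
            messages.append(message_queue.get_nowait())
        except queue.Empty:
            return messages


if __name__ == "__main__":
    q = queue.Queue()
    q.put({"engine": "eng-a", "available_mbps": 80.0, "sent_at": time.time()})
    q.put({"engine": "eng-b", "available_mbps": 10.0, "sent_at": time.time()})
    for msg in retrieve_status_messages(q):
        print(msg["engine"], msg["available_mbps"])
```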

The flowchart continues to module 510, where the load balancer system monitors a network device management engine and determines the status of a network device management engine. The load balancer system can use the number of times that a network device management engine was supposed to send a status message and did not do so in order to determine the status of the network device management engine. Alternatively, the load balancer system can use the available bandwidth to determine the status of a network device management engine.
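The sketch below expresses module 510 as a small function that judges an engine's status from the number of missed scheduled status messages or from its reported available bandwidth. The thresholds are arbitrary illustrations, not values from this description.

```python
# Sketch of module 510: determine an engine's status from missed status
# messages or reported bandwidth. Thresholds are illustrative assumptions.
MAX_MISSED_MESSAGES = 3
MIN_AVAILABLE_MBPS = 5.0


def engine_status(missed_messages, available_mbps):
    """Return "failed" or "healthy" for a management engine."""
    if missed_messages >= MAX_MISSED_MESSAGES:
        return "failed"
    if available_mbps is not None and available_mbps < MIN_AVAILABLE_MBPS:
        return "failed"
    return "healthy"


if __name__ == "__main__":
    print(engine_status(missed_messages=4, available_mbps=50.0))  # failed
    print(engine_status(missed_messages=0, available_mbps=50.0))  # healthy
```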

The flowchart continues to decision point 512, where the load balancer system determines whether the network device management engine has failed. The load balancer system can determine whether a network device management engine has failed based on the status of the network device management engine determined at module 510. For example, if the network device management engine was supposed to send a status message and did not do so, then the load balancer system can determine that the network device management engine has failed. Alternatively, if the network device management engine does not have enough available bandwidth, or the network devices coupled to the network device management engine do not have enough available bandwidth, then the load balancer system can determine that the network device management engine has failed. If it is determined at decision point 512 that the network device management engine has not failed, then the flowchart returns to module 508, where the load balancer system retrieves network device management engine status messages. If it is determined at decision point 512 that a network device management engine has failed, then the flowchart continues to module 514, where the load balancer system sends a notification to an administrator system that the specific network device management engine has failed. The flowchart then proceeds to module 506, where the load balancer system reassigns the network device to a new network device management engine. In an alternative implementation, if the load balancer system determines at decision point 512 that a network device management engine has failed, then the flowchart skips module 514 and proceeds directly to module 506, where the load balancer system reassigns the network device to a new network device management engine.
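Tying modules 508-514 together, the sketch below runs one pass of the monitoring loop: evaluate each engine's status, optionally notify an administrator about failed engines, and reassign their devices to healthy engines in the same region. The data shapes, thresholds, and the optional notify callable are assumptions used only to illustrate the loop's structure.

```python
# Sketch of one pass of the FIG. 5 monitoring loop (modules 508-514, then
# back to 506). Everything here is an illustrative stand-in.

def monitor_once(statuses, assignments, engine_regions, device_regions,
                 notify=None):
    """statuses: engine_id -> {"missed": int, "available_mbps": float}
    assignments: device_id -> engine_id (mutated in place and returned)."""
    failed = {e for e, s in statuses.items()
              if s["missed"] >= 3 or s["available_mbps"] < 5.0}
    for engine in failed:
        if notify is not None:                             # module 514 (optional)
            notify(f"management engine {engine} failed")
        for device, assigned in list(assignments.items()):  # back to module 506
            if assigned != engine:
                continue
            region = device_regions[device]
            healthy = [e for e in statuses
                       if e not in failed and engine_regions[e] == region]
            if healthy:
                assignments[device] = healthy[0]
    return assignments


if __name__ == "__main__":
    out = monitor_once(
        statuses={"eng-a": {"missed": 4, "available_mbps": 50.0},
                  "eng-b": {"missed": 0, "available_mbps": 80.0}},
        assignments={"ap-1": "eng-a"},
        engine_regions={"eng-a": "us-west", "eng-b": "us-west"},
        device_regions={"ap-1": "us-west"},
        notify=print,
    )
    print(out)  # {'ap-1': 'eng-b'}
```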

FIG. 6 depicts a flowchart 600 of an example of a method for determining that a network device management engine has failed by a network device managed by the network device management engine. The flowchart begins at module 602, with determining that a network device management engine has failed by a network device that is managed by the network device management engine. In one example, the network device determines that the network device management engine has failed when the network device stops receiving traffic from the network device management engine.

The flowchart continues to module 604 where a network device management engine failure message is sent from the network device to a load balancer system. In one example, the network device generates and sends the network device management engine failure message to the load balancer system after determining that the network device management engine has failed. In another example, the network device management engine failure message identifies the network device management engine that has failed.

The flowchart continues to module 606 where the load balancer system detects that the network device management engine has failed. In one example, the load balancer system detects that the network device management engine has failed after receiving the network device management engine failure message sent from the network device at module 604. In another example, the load balancer system determines the identification of the failed network device management engine from the network device management engine failure message sent by the network device.

The flowchart continues to module 608 where the load balancer reassigns a new network device management engine to the network device. In one example, the new network device management engine is in the same region or subregion as the network device. In another example, the new network device management engine manages other network devices in the same region or subregion as the network device to which the network device management engine is being assigned.
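The sketch below traces the FIG. 6 path end to end under stated assumptions: the traffic-silence timeout, the failure-message fields, and the "first healthy engine in the device's region" reassignment rule are all hypothetical choices made only to illustrate modules 602-608.

```python
# Sketch of the FIG. 6 path: a device infers engine failure from a lapse in
# traffic, reports it to the load balancer, and the load balancer reassigns
# the device. Timeout and message shape are illustrative assumptions.
import time

TRAFFIC_TIMEOUT_SECONDS = 30.0


def device_detects_failure(last_traffic_at, now=None):
    """Module 602: failure is inferred when traffic stops arriving."""
    now = time.time() if now is None else now
    return (now - last_traffic_at) > TRAFFIC_TIMEOUT_SECONDS


def build_failure_message(device_id, engine_id):
    """Module 604: identify the engine believed to have failed."""
    return {"type": "engine_failure", "device": device_id, "engine": engine_id}


def handle_failure_message(message, assignments, healthy_engines_in_region):
    """Modules 606-608: the load balancer reads the failed engine's identity
    and reassigns the reporting device to a healthy engine in its region."""
    device = message["device"]
    if healthy_engines_in_region:
        assignments[device] = healthy_engines_in_region[0]
    return assignments


if __name__ == "__main__":
    if device_detects_failure(last_traffic_at=time.time() - 60):
        msg = build_failure_message("ap-1", "eng-a")
        print(handle_failure_message(msg, {"ap-1": "eng-a"}, ["eng-b"]))
```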

While preferred implementations of the present inventive apparatus and method have been described, it is to be understood that the implementations described are illustrative only and that the scope of the implementations of the present inventive apparatus and method is to be defined solely by the appended claims when accorded a full range of equivalence, many variations and modifications naturally occurring to those of skill in the art from a perusal thereof.

Claims

1. A method for building and maintaining a network, the method comprising:

operationally connecting an access point in a region that is connectable to one or more client devices in the region to an interregional redirector engine associated with a plurality of regions including the region;
receiving at the interregional redirector engine network device information of the access point, the network device information including geography information of the region and enterprise network information of the access point;
determining, by the interregional redirector engine, based on the network device information, a load balancer system uniquely associated with the region selectively from a plurality of load balancer systems that are uniquely associated with different regions and coupled to the interregional redirector engine associated with the plurality of regions;
assigning, by the interregional redirector engine, the access point to the load balancer system;
assigning, by the load balancer system, the access point to a regional network device management engine associated with the region based on the network device information, the regional network device management engine being determined selectively from a plurality of regional network device management engines that are associated with the region and coupled to different sets of one or more access points;
managing, by the load balancer system, a failure of the regional network device management engine in communication with the access point based on network device management engine failure information provided from the access point to the load balancer system without passing through the failed regional network device management engine;
managing, by the regional network device management engine, the access point in providing access to an enterprise network.

2. The method of claim 1, further comprising validating the access point.

3. The method of claim 1, further comprising:

receiving, by the load balancer system, network device management engine status information from the network device management engine, which is a first network device management engine of the plurality of network device management engines, and a second network device management engine of the plurality of network device management engines;
reassigning, by the load balancer system, the access point to the second network device management engine based on the network device management engine status information from the first and second network device management engines.

4. The method of claim 1, further comprising:

receiving, by the load balancer system, network device management engine status information from the network device management engine, which is a first network device management engine of the plurality of network device management engines;
determining the first network device management engine has failed based on the network device management engine status information;
reassigning, by the load balancer system, the access point to a second network device management engine of the plurality of network device management engines.

5. The method of claim 4, further comprising assigning, by the load balancer system, a second access point to the second network device management engine.

6. The method of claim 4, further comprising sending, from the load balancer system, a network device management engine status notification to an administration engine, the network device management engine status notification indicating a reason why the first network device management engine was determined as failed.

7. A system for building and maintaining a network, the system comprising:

a plurality of access points provided in a region and configured to provide access to an enterprise network to one or more client devices in the region;
a plurality of load balancer systems uniquely associated with different regions;
a plurality of regional network device management engines associated with the region and coupled to different sets of one or more of the access points;
an interregional redirector engine associated with a plurality of regions including the region, coupled to the plurality of load balancer systems and the access points, and configured to: receive network device information from one of the access points, the network device information including geography information of the region and enterprise network information of said one of the access points; determine, based on the network device information, a load balancer system uniquely associated with the region selectively from the plurality of load balancer systems; assign said one of the access points to the load balancer system;
the load balancer system configured to assign said one of the access points to a regional network device management engine determined selectively from the plurality of regional network device management engines based on the network device information and manage a failure of the regional network device management engine in communication with said one of the access points based on network device management engine failure information provided from said one of the access points to the load balancer system without passing through the failed regional network device management engine, the regional network device management engine configured to manage the access points in providing access to the enterprise network.

8. The system of claim 7, wherein the interregional redirector engine is further configured to validate said one of the access points.

9. The system of claim 7, wherein the load balancer system is configured to:

receive network device management engine status information from the network device management engine, which is a first network device management engine of the plurality of network device management engines, and a second network device management engine of the plurality of network device management engines;
reassign said one of the access points to the second network device management engine based on the network device management engine status information from the first and second network device management engines.

10. The system of claim 7, wherein the load balancer system is configured to:

receive network device management engine status information from the network device management engine, which is a first network device management engine of the plurality of network device management engines;
determine that the first network device management engine has failed;
reassign the access point to a second network device management engine of the plurality of network device management engines.

11. The system of claim 10, wherein the load balancer system is further configured to assign a second access point of the plurality of access points to the second network device management engine.

12. The system of claim 10, wherein the load balancer system is further configured to send a network device management engine status notification to an administration engine, the network device management engine status notification indicating a reason why the first network device management engine was determined as failed.

13. The method of claim 1, further comprising:

receiving, by the load balancer system, network device management engine status information from the network device management engine, which is a first network device management engine of the plurality of network device management engines, and a second network device management engine of the plurality of network device management engines;
reassigning, by the load balancer system, a first portion of a plurality of access points assigned to the first network device management engine, including the access point, to the second network device management engine without reassigning a second portion of the plurality of access points assigned to the first network device management engine, based on the network device management engine status information received from the first and second network device management engines.

14. The method of claim 1, further comprising:

receiving, by a network device management engine message queue uniquely associated with the region, network device management engine status information from the network device management engine, which is a first network device management engine of the plurality of network device management engines, and a second network device management engine of the plurality of network device management engines;
retrieving, by the load balancer system, the network device management engine status information received from the first and second network device management engines from the network device management engine message queue;
reassigning, by the load balancer system, a first portion of a plurality of access points assigned to the first network device management engine, including the access point, to the second network device management engine without reassigning a second portion of the plurality of access points assigned to the first network device management engine, based on the network device management engine status information retrieved from the network device management engine message queue.

15. The method of claim 1, further comprising:

receiving, by the load balancer system, the network device management engine failure information from the access point coupled to the failed network device management engine, which is a first network device management engine of the plurality of network device management engines;
reassigning, by the load balancer system, the access point to a second network device management engine of the plurality of network device management engines, based on the network device management engine failure information from the access point.

16. The system of claim 7, wherein the load balancer system is further configured to:

receive network device management engine status information from the network device management engine, which is a first network device management engine of the plurality of network device management engines, and a second network device management engine of the plurality of network device management engines;
reassign a first portion of the plurality of access points assigned to the first network device management engine, including said one of the access points, to the second network device management engine without reassigning a second portion of the plurality of access points assigned to the first network device management engine, based on the network device management engine status information received from the first and second network device management engines.

17. The system of claim 7, further comprising:

a network device management engine message queue uniquely associated with the region and configured to receive network device management engine status information from the network device management engine, which is a first network device management engine of the plurality of network device management engines, and a second network device management engine of the plurality of network device management engines,
wherein the load balancer system is further configured to:
retrieve the network device management engine status information received from the first and second network device management engines from the network device management engine message queue;
reassign a first portion of a plurality of access points assigned to the first network device management engine, including said one of the access points, to the second network device management engine without reassigning a second portion of the plurality of access points assigned to the first network device management engine, based on the network device management engine status information retrieved from the network device management engine message queue.

18. The system of claim 7, wherein the load balancer system is further configured to:

receive the network device management engine failure information from said one of the access points coupled to the failed network device management engine, which is a first network device management engine of the plurality of network device management engines;
reassign said one of the access points to a second network device management engine of the plurality of network device management engines, based on the network device management engine failure information from said one of the access points.
References Cited
U.S. Patent Documents
5471671 November 28, 1995 Wang et al.
5697059 December 9, 1997 Carney
5726984 March 10, 1998 Kubler et al.
5956643 September 21, 1999 Benveniste
6061799 May 9, 2000 Eldridge et al.
6112092 August 29, 2000 Benveniste
6154655 November 28, 2000 Borst et al.
6201792 March 13, 2001 Lahat
6233222 May 15, 2001 Wallentin
6314294 November 6, 2001 Benveniste
6473413 October 29, 2002 Chiou et al.
6496699 December 17, 2002 Benveniste
6519461 February 11, 2003 Andersson et al.
6628623 September 30, 2003 Noy
6628938 September 30, 2003 Rachabathuni et al.
6636498 October 21, 2003 Leung
6775549 August 10, 2004 Benveniste
6865393 March 8, 2005 Baum et al.
6957067 October 18, 2005 Iyer et al.
7002943 February 21, 2006 Bhagwat et al.
7057566 June 6, 2006 Theobold
7085224 August 1, 2006 Oran
7085241 August 1, 2006 O'Neill et al.
7130629 October 31, 2006 Leung et al.
7154874 December 26, 2006 Bhagwat et al.
7164667 January 16, 2007 Rayment et al.
7174170 February 6, 2007 Steer et al.
7177646 February 13, 2007 O'Neill et al.
7181530 February 20, 2007 Halasz et al.
7224697 May 29, 2007 Banerjea et al.
7216365 May 8, 2007 Bhagwat et al.
7251238 July 31, 2007 Joshi et al.
7336670 February 26, 2008 Calhoun
7339914 March 4, 2008 Bhagwat et al.
7346338 March 18, 2008 Calhoun et al.
7366894 April 29, 2008 Kalimuthu et al.
7369489 May 6, 2008 Bhattacharya
7370362 May 6, 2008 Olson et al.
7440434 October 21, 2008 Chaskar et al.
7512379 March 31, 2009 Nguyen
7536723 May 19, 2009 Bhagwat et al.
7562384 July 14, 2009 Huang
7593356 September 22, 2009 Friday et al.
7656822 February 2, 2010 AbdelAziz et al.
7706789 April 27, 2010 Qi et al.
7716370 May 11, 2010 Devarapalli
7751393 July 6, 2010 Chaskar et al.
7768952 August 3, 2010 Lee
7793104 September 7, 2010 Zheng et al.
7804808 September 28, 2010 Bhagwat et al.
7843907 November 30, 2010 Abou-Emara et al.
7844057 November 30, 2010 Meier et al.
7856209 December 21, 2010 Rawat
7921185 April 5, 2011 Chawla et al.
7949342 May 24, 2011 Cuffaro et al.
7961725 June 14, 2011 Nagarajan et al.
7970894 June 28, 2011 Patwardhan
8000308 August 16, 2011 Dietrich et al.
8069483 November 29, 2011 Matlock
8219688 July 10, 2012 Wang
8249606 August 21, 2012 Neophytou et al.
8493918 July 23, 2013 Karaoguz et al.
8553612 October 8, 2013 Alexandre
8789191 July 22, 2014 Bhagwat et al.
8824448 September 2, 2014 Narayana
8948046 February 3, 2015 Kang et al.
8953453 February 10, 2015 Xiao
9003527 April 7, 2015 Bhagwat et al.
20010006508 July 5, 2001 Pankaj et al.
20020012320 January 31, 2002 Ogier et al.
20020021689 February 21, 2002 Robbins et al.
20020041566 April 11, 2002 Yang
20020071422 June 13, 2002 Amicangioli
20020091813 July 11, 2002 Lamberton et al.
20020114303 August 22, 2002 Crosbie
20020116463 August 22, 2002 Hart
20020128984 September 12, 2002 Mehta et al.
20030005100 January 2, 2003 Barnard et al.
20030039212 February 27, 2003 Lloyd et al.
20030084104 May 1, 2003 Salem
20030087629 May 8, 2003 Juitt
20030104814 June 5, 2003 Gwon et al.
20030129988 July 10, 2003 Lee et al.
20030145091 July 31, 2003 Peng et al.
20030179742 September 25, 2003 Ogier et al.
20030198207 October 23, 2003 Lee
20040003285 January 1, 2004 Whelan et al.
20040013118 January 22, 2004 Borella
20040022222 February 5, 2004 Clisham
20040054774 March 18, 2004 Barber et al.
20040064467 April 1, 2004 Kola et al.
20040077341 April 22, 2004 Chandranmenon et al.
20040103282 May 27, 2004 Meier et al.
20040109466 June 10, 2004 Van Ackere et al.
20040162037 August 19, 2004 Shpak
20040185876 September 23, 2004 Groenendaal
20040192312 September 30, 2004 Li et al.
20040196977 October 7, 2004 Johnson et al.
20040236939 November 25, 2004 Watanabe et al.
20040255028 December 16, 2004 Chu et al.
20050053003 March 10, 2005 Cain et al.
20050074015 April 7, 2005 Chari et al.
20050085235 April 21, 2005 Park
20050099983 May 12, 2005 Nakamura et al.
20050122946 June 9, 2005 Won
20050154774 July 14, 2005 Giaffreda et al.
20050207417 September 22, 2005 Ogawa et al.
20050259682 November 24, 2005 Yosef et al.
20050262266 November 24, 2005 Wiberg et al.
20050265288 December 1, 2005 Liu et al.
20050266848 December 1, 2005 Kim
20060010250 January 12, 2006 Eisl et al.
20060013179 January 19, 2006 Yamane
20060026289 February 2, 2006 Lyndersay et al.
20060062250 March 23, 2006 Payne, III
20060107050 May 18, 2006 Shih
20060117018 June 1, 2006 Christiansen et al.
20060140123 June 29, 2006 Conner et al.
20060146748 July 6, 2006 Ng et al.
20060146846 July 6, 2006 Yarvis et al.
20060165015 July 27, 2006 Melick et al.
20060187949 August 24, 2006 Seshan et al.
20060221920 October 5, 2006 Gopalakrishnan et al.
20060233128 October 19, 2006 Sood et al.
20060234701 October 19, 2006 Wang et al.
20060245442 November 2, 2006 Srikrishna et al.
20060251256 November 9, 2006 Asokan et al.
20060268802 November 30, 2006 Faccin
20060294246 December 28, 2006 Stieglitz et al.
20070004394 January 4, 2007 Chu et al.
20070010231 January 11, 2007 Du
20070025274 February 1, 2007 Rahman et al.
20070025298 February 1, 2007 Jung
20070030826 February 8, 2007 Zhang
20070049323 March 1, 2007 Wang et al.
20070077937 April 5, 2007 Ramakrishnan et al.
20070078663 April 5, 2007 Grace
20070082656 April 12, 2007 Stieglitz et al.
20070087756 April 19, 2007 Hoffberg
20070091859 April 26, 2007 Sethi et al.
20070115847 May 24, 2007 Strutt et al.
20070116011 May 24, 2007 Lim et al.
20070121947 May 31, 2007 Sood et al.
20070133407 June 14, 2007 Choi et al.
20070140191 June 21, 2007 Kojima
20070150720 June 28, 2007 Oh et al.
20070153697 July 5, 2007 Kwan
20070153741 July 5, 2007 Blanchette et al.
20070156804 July 5, 2007 Mo
20070160017 July 12, 2007 Meier et al.
20070171885 July 26, 2007 Bhagwat et al.
20070192862 August 16, 2007 Vermeulen et al.
20070195761 August 23, 2007 Tatar et al.
20070206552 September 6, 2007 Yaqub
20070247303 October 25, 2007 Payton
20070248014 October 25, 2007 Xie
20070249324 October 25, 2007 Jou et al.
20070263532 November 15, 2007 Mirtorabi et al.
20070280481 December 6, 2007 Eastlake et al.
20070288997 December 13, 2007 Meier et al.
20080002642 January 3, 2008 Borkar et al.
20080022392 January 24, 2008 Karpati et al.
20080037552 February 14, 2008 Dos Remedios et al.
20080080369 April 3, 2008 Sumioka
20080080377 April 3, 2008 Sasaki et al.
20080090575 April 17, 2008 Barak et al.
20080095094 April 24, 2008 Innami
20080095163 April 24, 2008 Chen et al.
20080107027 May 8, 2008 Allan et al.
20080109879 May 8, 2008 Bhagwat et al.
20080130495 June 5, 2008 Dos Remedios et al.
20080146240 June 19, 2008 Trudeau
20080151751 June 26, 2008 Ponnuswamy et al.
20080159128 July 3, 2008 Shaffer
20080159135 July 3, 2008 Caram
20080170527 July 17, 2008 Lundsgaard et al.
20080186932 August 7, 2008 Do et al.
20080194271 August 14, 2008 Bedekar et al.
20080207215 August 28, 2008 Chu et al.
20080209186 August 28, 2008 Boden
20080212562 September 4, 2008 Bedekar et al.
20080219286 September 11, 2008 Ji et al.
20080225857 September 18, 2008 Lange
20080229095 September 18, 2008 Kalimuthu et al.
20080240128 October 2, 2008 Elrod
20080253370 October 16, 2008 Cremin et al.
20080273520 November 6, 2008 Kim et al.
20080279161 November 13, 2008 Stirbu et al.
20090019521 January 15, 2009 Vasudevan
20090028052 January 29, 2009 Strater et al.
20090040989 February 12, 2009 da Costa et al.
20090043901 February 12, 2009 Mizikovsky et al.
20090082025 March 26, 2009 Song
20090088152 April 2, 2009 Orlassino
20090097436 April 16, 2009 Vasudevan et al.
20090111468 April 30, 2009 Burgess et al.
20090113018 April 30, 2009 Thomson
20090141692 June 4, 2009 Kasslin et al.
20090144740 June 4, 2009 Gao
20090168645 July 2, 2009 Tester et al.
20090172151 July 2, 2009 Davis
20090197597 August 6, 2009 Kotecha
20090207806 August 20, 2009 Makela et al.
20090239531 September 24, 2009 Andreasen et al.
20090240789 September 24, 2009 Dandabany
20090247170 October 1, 2009 Balasubramanian et al.
20090257380 October 15, 2009 Meier
20090303883 December 10, 2009 Kucharczyk et al.
20090310557 December 17, 2009 Shinozaki
20100020753 January 28, 2010 Fulknier
20100046368 February 25, 2010 Kaempfer et al.
20100057930 March 4, 2010 DeHaan
20100061234 March 11, 2010 Pai et al.
20100067379 March 18, 2010 Zhao et al.
20100112540 May 6, 2010 Gross et al.
20100115278 May 6, 2010 Shen et al.
20100115576 May 6, 2010 Hale et al.
20100132040 May 27, 2010 Bhagwat et al.
20100195585 August 5, 2010 Horn
20100208614 August 19, 2010 Harmatos
20100228843 September 9, 2010 Ok et al.
20100238871 September 23, 2010 Tosic
20100240313 September 23, 2010 Kawai
20100254316 October 7, 2010 Sendrowicz
20100260091 October 14, 2010 Seok
20100290397 November 18, 2010 Narayana
20100304738 December 2, 2010 Lim et al.
20100311420 December 9, 2010 Reza et al.
20100322217 December 23, 2010 Jin et al.
20100325720 December 23, 2010 Etchegoyen
20110004913 January 6, 2011 Nagarajan et al.
20110040867 February 17, 2011 Kalbag
20110051677 March 3, 2011 Jetcheva et al.
20110055326 March 3, 2011 Michaelis et al.
20110055928 March 3, 2011 Brindza
20110058524 March 10, 2011 Hart et al.
20110064065 March 17, 2011 Nakajima et al.
20110085464 April 14, 2011 Nordmark et al.
20110182225 July 28, 2011 Song et al.
20110185231 July 28, 2011 Balestrieri et al.
20110222484 September 15, 2011 Pedersen
20110258641 October 20, 2011 Armstrong et al.
20110292897 December 1, 2011 Wu et al.
20120014386 January 19, 2012 Xiong et al.
20120290650 November 15, 2012 Montuno et al.
20120322435 December 20, 2012 Erceg
20130003729 January 3, 2013 Raman et al.
20130003739 January 3, 2013 Raman et al.
20130003747 January 3, 2013 Raman et al.
20130028158 January 31, 2013 Lee et al.
20130059570 March 7, 2013 Hara et al.
20130086403 April 4, 2013 Jenne et al.
20130103833 April 25, 2013 Ringland et al.
20130188539 July 25, 2013 Han
20130227306 August 29, 2013 Santos et al.
20130227645 August 29, 2013 Lim
20130230020 September 5, 2013 Backes
20130250811 September 26, 2013 Vasseur et al.
20140269327 September 18, 2014 Fulknier et al.
20140298467 October 2, 2014 Bhagwat et al.
20150120864 April 30, 2015 Unnimadhavan et al.
Foreign Patent Documents
1642143 July 2005 CN
0940999 September 1999 EP
1732276 December 2006 EP
1771026 April 2007 EP
1490773 January 2013 EP
0059251 October 2000 WO
0179992 October 2001 WO
2004042971 May 2004 WO
2006129287 December 2006 WO
2009141016 November 2009 WO
Other references
  • Clausen, T., et al., “Optimized Link State Routing Protocol (OLSR),” Network Working Group, pp. 1-71, Oct. 2003.
  • He, Changhua et al., “Analysis of the 802.11i 4-Way Handshake,” Proceedings of the 3rd ACM Workshop on Wireless Security, pp. 43-50, Oct. 2004.
  • Lee, Jae Woo et al., "z2z: Discovering Zeroconf Services Beyond Local Link," 2007 IEEE Globecom Workshops, pp. 1-7, Nov. 26, 2007.
  • Perkins, C., et al., “Ad hoc On-Demand Distance Vector (AODV) Routing,” Network Working Group, pp. 1-35, Oct. 2003.
  • International Application No. PCT/US2008/061674, International Search Report and Written Opinion dated Oct. 14, 2008.
  • International Application No. PCT/US2011/047591, International Search Report and Written Opinion dated Dec. 19, 2011.
  • International Application No. PCT/US2012/059093, International Search Report and Written Opinion dated Jan. 4, 2013.
  • Chirumamilla, Mohan K. et al., "Agent Based Intrusion Detection and Response System for Wireless LANs," CSE Conference and Workshop Papers, Paper 64, Jan. 1, 2003.
  • Craiger, J. Philip, “802.11, 802.1x, and Wireless Security,” SANS Institute InfoSec Reading Room, Jun. 23, 2002.
  • Finlayson, Ross et al., "A Reverse Address Resolution Protocol," Network Working Group, Request for Comments: 903 (RFC 903), Jun. 1984.
  • Wu, Haitao et al., “Layer 2.5 SoftMAC: End-System Based Media Streaming Support on Home Networks,” IEEE Global Telecommunications Conference (GLOBECOM '05), vol. 1, pp. 235-239, Nov. 2005.
  • European Patent Application No. 12879114.2, Search Report dated Jan. 21, 2016.
  • European Patent Application No. 11823931.8, Search Report dated Aug. 29, 2016.
  • IEEE Computer Society, "IEEE Std 802.11i—Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications—Amendment 6: Medium Access Control (MAC) Security Enhancements," Section H.4.1, pp. 165-166, Jul. 23, 2004.
  • Cisco Systems, Inc., “Wi-Fi Protected Access 2 (WPA 2) Configuration Example,” Document ID 67134, Jan. 21, 2008 [retrieved online at https://www.cisco.com/c/en/us/support/docs/wireless-mobility/wireless-lan-wlan/67134-wpa2-config.html on Dec. 4, 2018].
Patent History
Patent number: 10326707
Type: Grant
Filed: Jan 17, 2014
Date of Patent: Jun 18, 2019
Patent Publication Number: 20140280967
Assignee: Aerohive Networks, Inc. (Milpitas, CA)
Inventors: Dalun Bao (San Jose, CA), Changming Liu (Cupertino, CA)
Primary Examiner: Nicholas R Taylor
Assistant Examiner: Sanjoy Roy
Application Number: 14/158,661
Classifications
Current U.S. Class: Channel Assignment (370/329)
International Classification: H04L 12/58 (20060101); G06F 12/14 (20060101); H04L 12/911 (20130101);