METHODS AND SYSTEMS FOR CLOCK SYNCHRONIZATION IN A NETWORK

Systems and methods for synchronizing clocks of a plurality of access points (APs) in a network are disclosed. In one implementation, a method includes receiving first information from at least one AP in the network. The first information may indicate whether each of the at least one AP is able to synchronize with a reference signal from a timing source. The method further includes obtaining network connectivity information of the plurality of APs, designating a first AP of the plurality of APs as a master node based on the first information and the network connectivity information, and assigning a second AP of the plurality of APs as a slave node to the master node based on the first information and the network connectivity information. The slave node may synchronize its clock to timing information provided by the master node.

Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of each of U.S. Provisional Application Ser. No. 62/158,959, filed May 8, 2015, U.S. Provisional Application Ser. No. 62/163,624, filed May 19, 2015, U.S. Provisional Application Ser. No. 62/163,743, filed May 19, 2015, U.S. Provisional Application Ser. No. 62/164,949, filed May 21, 2015, and U.S. Provisional Application Ser. No. 62/165,018, filed May 21, 2015, each of which is hereby incorporated by reference in its entirety.

FIELD OF INVENTION

The present invention generally relates to clock synchronization and, more particularly, to a method and apparatus for synchronizing time and/or frequency of clocks in a wireless access infrastructure.

BACKGROUND

Conventional Wireless Access Infrastructure

A conventional wireless access infrastructure includes a radio access network and a core network typically owned, managed, and controlled by a single wireless service provider, called the wireless carrier. The radio access network, such as the Evolved Universal Terrestrial Radio Access (E-UTRA) defined in 3GPP's Long Term Evolution (LTE) standard, contains the network and equipment for connecting user equipment (UE), such as mobile devices and computers having wireless connectivity, to the core network. The core network, such as the Evolved Packet Core (EPC) defined in the LTE standard, contains the network and equipment for providing mobile voice and data services within the carrier's service environment and to external networks, such as the Internet, and other carriers' networks.

The LTE standard, for example, defines specific network nodes and communication interfaces for implementing the E-UTRAN and EPC. According to the standard, the E-UTRAN includes one or more eNodeBs (base stations) configured to communicate with UEs and the EPC core network. The EPC includes at least a Mobility Management Entity (MME), which manages session states, authentication, paging, and mobility and roaming functions; a Packet-Data Gateway (PGW), which sends/receives data packets to/from an external data network, such as the Internet; a Serving Gateway (SG-W), which routes data packets between the PGW and an eNodeB; and a Policy and Charging Rules Function (PCRF), which manages users, applications, and network resources based on carrier-configured rules.

FIG. 1A is a schematic block diagram of an exemplary LTE wireless access infrastructure 1000 including an E-UTRAN 1100 and an EPC 1200. The E-UTRAN 1100 includes at least one eNodeB 1102 configured to communicate with UEs 1002A and 1002B over wireless links. The EPC 1200 contains network nodes including an MME 1202, SG-W 1204, PGW 1206, and PCRF 1208. While the exemplary infrastructure 1000 is depicted with only one PGW 1206 connected to an external packet-data network, such as the Internet, the EPC 1200 alternatively may contain multiple PGWs, each connecting the EPC 1200 to a different packet data network. The MME 1202, SG-W 1204, PGW 1206, and PCRF 1208 are implemented in software on dedicated hardware (computers) 1302, 1304, 1306, and 1308. The dedicated hardware may be a single server or a cluster of servers. The LTE network nodes 1202, 1204, 1206, and 1208 are typically implemented as monolithic software modules that execute on their respective dedicated hardware 1302, 1304, 1306, and 1308.

The LTE standard not only defines functionalities in each of the MME 1202, SG-W 1204, PGW 1206, and PCRF 1208, but also defines the communication interfaces between them. The LTE standard defines several interfaces including, for example, an “S1-MME” interface between the eNodeB 1102 and the MME 1202, an “S1-U” interface between the eNodeB 1102 and the SG-W 1204, an “S11” interface between the MME 1202 and the SG-W 1204, an “S5” interface between the SG-W 1204 and the PGW 1206, and a “Gx” interface between the PCRF 1208 and the PGW 1206. The exemplary infrastructure 1000 illustrates these standardized interfaces.

Because the communication interfaces and network nodes in the LTE wireless access infrastructure 1000 are standardized, they ensure compatibility between the MME 1202, SG-W 1204, PGW 1206, and PCRF 1208, even when those nodes are programmed and/or developed by different manufacturers. Such standardization also ensures backward compatibility with legacy versions of any nodes that may have been previously deployed in the infrastructure 1000.

The need for multiple, dedicated network nodes makes deployment of an LTE wireless access infrastructure, such as the exemplary infrastructure 1000, costly and complex. Further, the standardization of functions performed by nodes in the radio access and core networks, and the standardized communication interfaces between nodes in those networks, makes integration of these types of networks with solutions outside the standard, such as enterprise solutions (e.g., deployed within a proprietary enterprise network), challenging. The standardized nodes and interfaces in conventional wireless access infrastructures also make scaling the infrastructure challenging. For example, it may be difficult to deploy only a subset of the functions and/or communication interfaces defined by the standard. Furthermore, conventional wireless access infrastructures may not utilize resources efficiently within the infrastructure. In some conventional wireless access solutions, for example, a UE may be denied voice and/or data services because one of the network nodes is unable to handle an additional user even though other nodes are not being fully utilized. In other words, the capacity of the conventional infrastructure may be limited by the capacity of each node.

Synchronizing Clocks of Nodes in a Wireless Access Infrastructure

In many wireless access infrastructures, it is desirable to synchronize time (phase) and/or frequency of clocks used by the various network nodes. Such synchronization may enable more efficient and robust operation in the wireless infrastructure, which in turn can reduce the probability of interference between calls, dropped calls, and packet loss/collisions, among other things. In the LTE wireless access infrastructure 1000, for example, it may be desirable to synchronize the frequencies of clocks in the UEs 1002A-B and eNodeB 1102 to an accuracy of 50 parts-per-billion (ppb) or less, and synchronize the clocks in nodes within the EPC 1200 to an accuracy of 16 ppb or less. For a time-division duplex (TDD) LTE wireless access infrastructure, the phase of different clocks in the infrastructure may be synchronized, for example, to an accuracy of 1.5 microseconds or less.

One technique for synchronizing time or frequency of clocks in network nodes uses an atomic-clock signal transmitted from the Global Navigation Satellite System (GNSS). FIG. 1B illustrates an exemplary wireless access infrastructure 1000 using this technique. In FIG. 1B, every node in the network (including the eNodeB 1102) may include, or is otherwise connected to, a GNSS receiver. In this example, each GNSS receiver receives the same atomic-clock signal from GNSS satellites 1400, and the atomic-clock signal may be used to synchronize local clocks running in each network node. This solution, however, may be impractical in situations where some of the nodes are in a location that cannot receive the GNSS signals, such as inside buildings without windows. Additionally, equipping every node with a GNSS receiver may be prohibitively costly for many applications.

Another technique for synchronizing time or frequency of clocks uses one or more dedicated “master” nodes that distribute a reference clock signal to other “slave” nodes, which may be the nodes in the EPC 1200 and E-UTRAN 1100, for example. In FIG. 1C, for example, every node in the exemplary wireless access infrastructure 1000 synchronizes the time and/or frequency of a local clock to a reference clock signal received from a master node 1500. In this example, the nodes 1102, 1202, 1204, 1206, and 1208 are slave nodes relative to the master node 1500. In some implementations, the master node 1500 may be configured as a Precision Time Protocol (PTP) server, as defined in the IEEE 1588 standard, that sends and receives a series of time-stamped messages to/from the slave nodes 1102, 1202, 1204, 1206, and 1208. The slave nodes may use the time-stamped messages to account for phase and/or frequency offsets of their locally running clocks, thereby allowing them to synchronize the time or frequency of their local clocks with the master node 1500. The master node 1500 may include a GNSS receiver 1502 and may synchronize its time and/or frequency with a reference timing source, such as a signal received from GNSS satellites 1400, or with another timing source, such as a “grandmaster” node 1600.
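
By way of a non-limiting illustration, the offset and delay that a slave may estimate from such a two-way exchange of time-stamped messages can be computed as follows (a minimal Python sketch; the function name and timestamp values are illustrative assumptions, and a symmetric path delay is assumed, so this is not a complete IEEE 1588 implementation):

def ptp_offset_and_delay(t1, t2, t3, t4):
    """Estimate clock offset and one-way delay from a PTP-style exchange.

    t1: master sends Sync               (master clock)
    t2: slave receives Sync             (slave clock)
    t3: slave sends Delay_Req           (slave clock)
    t4: master receives Delay_Req       (master clock)
    Assumes the forward and reverse path delays are equal.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2.0  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0   # mean one-way path delay
    return offset, delay

# Example: a slave 1.5 ms ahead of the master across a 2 ms path.
offset, delay = ptp_offset_and_delay(t1=0.0, t2=0.0035, t3=0.0040, t4=0.0045)
# offset ≈ 0.0015 and delay ≈ 0.0020; the slave subtracts the offset
# from its local clock to align time with the master.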

TERMINOLOGY

“Frequency synchronization” broadly refers to adjusting the operating frequency of one or more clock signals to synchronize their frequency using a reference clock signal.

“Phase synchronization” broadly refers to adjusting the phase of one or more clock signals to synchronize their relative phases using a reference clock signal. Frequency synchronization is a prerequisite for phase synchronization.

“Time synchronization” broadly refers to adjusting one or more clock signals so they are synchronized to the same absolute time value. Both frequency and phase synchronization are prerequisites for time synchronization.

“Clock synchronization” may refer to frequency synchronization, phase synchronization, or time synchronization. As used herein, clock synchronization may be referred to as synchronizing in time and/or frequency with one or more clock signals.
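
The relationship among these three levels of synchronization may be illustrated with a toy clock model (a Python sketch; the class, attribute names, and tick period are illustrative assumptions, and floating-point roundoff is ignored):

class LocalClock:
    """Toy model: local_time = true_time * (1 + freq_error) + phase_offset."""
    def __init__(self, freq_error, phase_offset):
        self.freq_error = freq_error      # fractional error, e.g. 50e-9 = 50 ppb
        self.phase_offset = phase_offset  # seconds ahead of the reference

TICK = 1e-3  # nominal tick period of the clock signal, in seconds

clock = LocalClock(freq_error=50e-9, phase_offset=2.0005)  # 2 s + 0.5 ms off

# 1. Frequency synchronization: discipline the oscillator so ticks advance
#    at the reference rate; a constant offset may remain.
clock.freq_error = 0.0

# 2. Phase synchronization: align tick edges with the reference, removing
#    the offset modulo one tick (requires frequency synchronization first);
#    a whole number of ticks (here 2 s) of absolute error may remain.
clock.phase_offset -= clock.phase_offset % TICK

# 3. Time synchronization: set the clock to the reference's absolute time
#    value (requires both frequency and phase synchronization).
clock.phase_offset = 0.0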

A “timing source” broadly refers to any software or hardware that may be used to provide one or more reference signals used for clock synchronization. The timing source may be implemented at a single location, such as in a network node, or may be distributed across multiple locations, such as GNSS satellites. The reference signals may reflect an absolute time signal, indicative of a current time, or a relative time signal, indicative of a time measured from a predetermined starting point.

SUMMARY

The invention provides a novel system and method for clock synchronization in cloud-based wireless access infrastructures.

The disclosed embodiments of the invention include a novel set of functions that dynamically designates each access point (AP) as a master node or a slave node based on information received from the APs as well as information related to the network connectivity of the APs. The information received from the APs includes, for example, whether an AP has the necessary hardware and/or software to synchronize its local clock with GNSS satellites. The information related to the network connectivity of the APs may include, for example, a network map.

An AP may be used to provide wireless network access to one or more UEs in accordance with the disclosed embodiments. The AP may provide one or more radio-network functions and one or more core-network functions for each UE in communication with the AP. In some embodiments, radio-network functions in the AP may be configured to receive information from the UE and pass that information to core-network functions allocated for the UE. The AP also may include a distributed portion of a service configured to receive the information from the core-network functions and communicate the information to a corresponding cloud portion of the service running on a cloud platform. The cloud portion of the service on the cloud platform may process the information and return a response to its corresponding distributed portion on the AP.

The disclosed embodiments of the invention may include one or more APs that are configured to receive a reference clock signal from a timing source, such as GNSS satellites, and further configured to synchronize time and/or frequency of one or more clock signals with the reference clock signal. The APs may communicate with a location management function (LMF). For example, an AP configured to receive GNSS signals (e.g., using a GNSS receiver) may notify the LMF whether the AP is able to obtain a lock on the GNSS signals.

Further to the disclosed embodiments, a timing management function (TMF) may designate each AP as a master node or a slave node. The TMF may assign one or more slave nodes to a master node. In some embodiments, the TMF may assign a slave node to a plurality of master nodes. In these embodiments, the slave node may be configured to synchronize with a first master node of the plurality of master nodes and switch over to a second master node when the first master node becomes unavailable (e.g., when the first master node loses its lock on a GNSS signal) or when the number of slave nodes assigned to the first master node exceeds a predetermined number. In some embodiments, the TMF may determine which APs to designate as master or slave nodes based on whether an AP is configured to receive signals from a timing source (e.g., whether an AP has a lock on a GNSS signal). For example, the TMF may designate an AP as a master node when the AP has a lock on a GNSS signal. The APs may send information to the LMF, and the TMF may obtain the information from the LMF.
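
A minimal sketch of this switch-over behavior follows (Python; the attribute names and the per-master slave limit are illustrative assumptions, not values from the disclosure):

MAX_SLAVES_PER_MASTER = 16  # illustrative threshold

def select_master(candidate_masters):
    """Pick the first usable master from a slave's ordered assignment list.

    A candidate is skipped when it has lost its lock on the timing source
    or when the number of slaves assigned to it exceeds the limit.
    """
    for master in candidate_masters:
        if master.has_gnss_lock and len(master.slaves) < MAX_SLAVES_PER_MASTER:
            return master
    return None  # no usable master; the TMF may re-designate nodes

# A slave assigned to [master_a, master_b] synchronizes to master_a and,
# when master_a loses its GNSS lock or becomes overloaded, switches to master_b.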

Additionally, the TMF may rely on connectivity information to determine which APs to designate as master or slave nodes and to determine which master node each slave node should be assigned to. The connectivity information may include, for example, a network map including the performance and/or cost associated with each interconnection between nodes in the network. In some embodiments, the TMF may obtain the connectivity information from a connectivity management function (CMF) on the AP. The TMF may communicate with the LMF to obtain information relating to an AP's ability to receive signals from a timing source and synchronize frequency and/or time of clocks with the timing-source signal.

According to the disclosed embodiments, the TMF may dynamically change whether an AP is designated as a slave or master node, and which slave nodes are assigned to a master node. In some embodiments, the TMF may change the master/slave designation or assignment of an AP in response to information obtained from at least one of the CMF and LMF. The information obtained from the CMF and/or LMF may include a list of additional APs that are now able to synchronize with the timing source, a list of current “master” APs that are now unable to synchronize with the timing source, or updated network connectivity information (e.g., identification of new switches or routers in the network, upgraded network nodes, etc.), to provide some examples. The TMF may receive other types of information from the CMF and/or LMF in addition to, or instead of, any of the exemplary types of information noted above.

According to the disclosed embodiments, the TMF may designate one AP as a master node when none of the APs successfully synchronizes with the timing source. In some embodiments, the TMF may designate as a master node an AP that successfully synchronizes one or more clock signals with a non-absolute timing-source signal (i.e., a reference clock signal not tied to a specific time). For example, the TMF may designate as a master node an AP that supports the Network Time Protocol (NTP), a standard protocol. In some embodiments, the TMF may designate an AP as a slave node even when the AP has the necessary hardware and/or software and is able to synchronize with the timing source. In some embodiments, the TMF may designate an AP as a master node, but not assign any slave nodes to the AP.
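
One way to express this designation order is a simple priority rule (a Python sketch under assumed attribute names; the actual TMF policy may differ):

def pick_master(aps):
    """Illustrative priority when designating a master node:
    1. an AP locked to an absolute timing source (e.g., GNSS);
    2. an AP synchronized to a non-absolute source (e.g., NTP);
    3. otherwise, an arbitrary AP.
    """
    for ap in aps:
        if ap.has_gnss_lock:
            return ap
    for ap in aps:
        if ap.supports_ntp:
            return ap
    return aps[0]  # arbitrary designation when no AP reaches a timing source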

According to the disclosed embodiments, one or more of the TMF, CMF, and LMF may be executed in a cloud platform, in a server of an enterprise network, or within one or more APs. In some embodiments, one or more of the TMF, CMF, and LMF may be combined at the software level and/or executed on the same hardware platform. In accordance with the disclosed embodiments, one or more of the TMF, CMF, and LMF may be provided as one or more services having a cloud portion and a distributed portion in a cloud platform.

According to the disclosed embodiments, a method for synchronizing clocks of a plurality of access points (APs) in a network includes receiving first information from at least one AP in the network. The first information may indicate whether each of the at least one AP is able to synchronize with a reference signal from a timing source. The method further includes obtaining network connectivity information of the plurality of APs, designating a first AP of the plurality of APs as a master node based on the first information and the network connectivity information, and assigning a second AP of the plurality of APs as a slave node to the master node based on the first information and the network connectivity information. The slave node may synchronize its clock to timing information provided by the master node.

According to the disclosed embodiments, a system for synchronizing clocks of a plurality of access points (APs) in a network includes a location management function (LMF) configured to receive first information from at least one AP of the plurality of APs. The first information may indicate whether each of the at least one AP is able to synchronize with a reference signal from a timing source. The system further includes a connectivity management function (CMF) configured to obtain network connectivity information of the plurality of APs and a timing management function (TMF). The TMF may be configured to designate a first AP of the plurality of APs as a master node based on the first information and the network connectivity information, and assign a second AP of the plurality of APs as a slave node to the master node based on the first information and the network connectivity information. The slave node may synchronize its clock to timing information provided by the master node.

BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate various disclosed embodiments. In the drawings:

FIGS. 1A-C are schematic block diagrams of exemplary wireless access infrastructures.

FIG. 2 illustrates a schematic block diagram of an exemplary cloud-based wireless access infrastructure in accordance with the disclosed embodiments.

FIG. 3 is another block diagram of the exemplary cloud-based wireless access infrastructure of FIG. 2 in accordance with the disclosed embodiments.

FIG. 4 is another block diagram of the exemplary cloud-based wireless access infrastructure of FIGS. 2 and 3 in accordance with the disclosed embodiments.

FIG. 5 illustrates a process performed by a set of functions including a CMF, LMF, and TMF in accordance with the disclosed embodiments.

DETAILED DESCRIPTION OF DISCLOSED EMBODIMENTS

The following detailed description refers to the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and the following description to refer to the same or similar parts. While several illustrative embodiments are described herein, modifications, adaptations and other implementations are possible. For example, substitutions, additions, or modifications may be made to the nodes and steps illustrated in the drawings, and the illustrative methods described herein may be modified by substituting, reordering, removing, or adding steps to the disclosed methods. Accordingly, the following detailed description is not limited to the disclosed embodiments and examples. Instead, the proper scope of the invention is defined by the appended claims.

Cloud-Based Wireless Access Infrastructure

FIG. 2 illustrates a block diagram of an exemplary cloud-based wireless access infrastructure 2000 in accordance with the disclosed embodiments of the invention. The exemplary cloud-based wireless access infrastructure 2000 may provide one or more access points (APs) 2110 through which users may communicate to access standardized wireless voice and/or data services, such as defined in the LTE standard, as well as enterprise-level applications and services that would be available to the user in an enterprise network of a corporate, governmental, academic, non-profit, or other organization or entity. For example, in accordance with the disclosed embodiments, an organization may deploy the APs 2110 in a building to provide its employees in that building with wireless access to both LTE and enterprise-level services.

The exemplary cloud-based wireless access infrastructure 2000 includes at least first and second UEs 2120A-B, one or more antennas 2130, the APs 2110, one or more network devices 2150, a network controller 2500, a cloud platform 2200, an enterprise network 2300, and an internet protocol exchange (IPX) 2400.

As shown in FIG. 2, each of the UEs 2120A-B may communicate with the APs 2110 through the antenna 2130 electrically coupled to the APs 2110. While a single antenna is shown in FIG. 2, the cloud-based wireless access infrastructure 2000 may alternatively employ multiple antennas, each electrically coupled to the APs 2110. In some embodiments, one or more antennas 2130 may connect to one AP in the APs 2110, while other antennas connect to different APs in the APs 2110. Each AP in the APs 2110 may be implemented on one or more computer systems. An AP, for example, may execute one or more software programs on a single computer or on a cluster of computers. Alternatively, an AP may be implemented as one or more software programs executing on one or more virtual computers.

In the disclosed embodiments, the APs 2110 may be connected to one or more network devices 2150, which may be configured to forward data between the UEs 2120A-B (via the APs 2110) and external data networks, such as the Internet 2600 and/or the cloud platform 2200. The network devices 2150 may include, for example, a hub, switch, router, virtual switch/router, distributed virtual switch (vSwitch), DHCP server, and/or any combination thereof.

In some embodiments, at least a subset of the network devices 2150 may be dynamically configured by a software-defined networking (SDN) controller. For example, as shown in FIG. 2, an SDN controller 2500 may configure one or more layer-two devices (e.g., switches) or layer-three devices (e.g., routers) in the set of network devices 2150, such that data packets or frames may be routed, processed, and/or blocked at the network devices based on various parameters, such as, but not limited to, the origin or destination of the data, type of data, and/or carrier or enterprise policies. Additionally, or alternatively, the SDN controller 2500 may configure at least a subset of the network devices 2150 to provide different qualities of service (QoS) to different UEs based on one or more policies associated with each UE. For example, the SDN controller 2500 may configure the one or more network devices 2150 to ensure that the UE 2120A, which may be associated with a business customer, receives a higher QoS compared with the UE 2120B, which may be associated with a non-business customer.
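
By way of a non-limiting illustration, such per-UE QoS differentiation may be expressed as flow rules pushed to the network devices 2150 (Python; the SDNController class, rule format, and install_rule method are hypothetical, not the API of any particular controller):

class SDNController:
    """Hypothetical controller mapping per-UE policies to flow rules."""
    def __init__(self, network_devices):
        self.network_devices = network_devices

    def apply_qos(self, ue_id, policy):
        # Business subscribers are steered to a higher-priority queue.
        rule = {
            "match": {"ue_id": ue_id},
            "action": {"queue": 0 if policy == "business" else 1},
        }
        for device in self.network_devices:
            device.install_rule(rule)  # hypothetical device method

# controller.apply_qos("UE-2120A", "business")      # higher QoS
# controller.apply_qos("UE-2120B", "non-business")  # best effort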

In some embodiments, the SDN controller 2500 may configure one or more of the network devices 2150 based on data (including, for example, messages, notifications, instructions, measurements, authorizations, approvals, or other information) received from one or more services running in the cloud-based wireless access infrastructure 2000. For example, the SDN controller 2500 may receive instructions on how and which of the network devices 2150 to configure from a service on the cloud platform 2200.

In accordance with the disclosed embodiments, the cloud platform 2200 may communicate with the enterprise network 2300 and/or the IPX 2400. In some embodiments, the cloud platform 2200 may include direct connections to the enterprise network 2300 or may employ indirect connections, such as using the Internet 2600 (via the network device 2150), to communicate with the enterprise network 2300. For example, the cloud platform 2200 may communicate with the enterprise network 2300 through the Internet 2600 using a tunneling protocol or technology, such as the IPSec protocol, or may communicate with an LTE EPC 1200 node of another carrier via the IPX 2400 using one or more standardized interfaces, such as the Gy, Gz, Gx, and S6a interfaces as defined in the LTE standard. In FIG. 2, the enterprise network 2300 is shown to be separate from, but electrically coupled to, the cloud platform 2200. In other embodiments (not shown), however, the enterprise network 2300 may be implemented on the cloud platform 2200.

Services of Cloud-Based Wireless Access Infrastructure

FIG. 3 is another block diagram of the exemplary cloud-based wireless access infrastructure 2000 of FIG. 2 in accordance with the disclosed embodiments. FIG. 3 illustrates additional implementation details of the APs 2110, cloud platform 2200, and enterprise network 2300 that may be used in the exemplary cloud-based wireless access infrastructure 2000.

As shown in FIG. 3, at least one AP in the APs 2110 may be configured to execute one or more instances of a software program configured to implement functions of a base station and one or more instances of a software program configured to implement functions of a core network. For example, in FIG. 3, eNodeB Functions 2112A-B represent at least two instances of a software program configured to provide at least a subset of functions of an LTE base station, such as the eNodeB 1102. Similarly, EPC Functions 2114A-B represent at least two instances of a software program configured to provide at least a subset of functions of an LTE core network, such as the EPC 1200. In some embodiments, the AP may be configured to execute one or more instances of a single software program configured to implement both the eNodeB Functions and EPC Functions.

In some embodiments, a fixed number of instances of eNodeB Function 2112A-B and a fixed number of instances of EPC Function 2114A-B may be instantiated and maintained in an AP. The number of instances of the eNodeB Functions 2112A-B and the number of instances of the EPC Functions 2114A-B may be the same or different. In some embodiments, when a UE 2120A wirelessly connects to the AP, an existing instance of eNodeB Function 2112A and an existing instance of EPC Function 2114A may be assigned to handle communications with the UE 2120A. In other embodiments (e.g., when existing instances of eNodeB Function 2112A and EPC Function 2114A are unavailable to assign to the UE 2120A), the AP may instantiate a new instance of an eNodeB Function and a new instance of an EPC Function for the UE 2120A. In alternative embodiments, the AP may dynamically instantiate and assign a new instance of eNodeB Functions and a new instance of EPC Functions for each UE.
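
A minimal sketch of this instance-assignment logic, covering both the fixed-pool and on-demand cases, follows (Python; the class and method names are illustrative assumptions):

class FunctionPool:
    """Illustrative pool of eNodeB/EPC Function instances within an AP."""
    def __init__(self, make_instance, initial=2):
        self.make_instance = make_instance
        self.idle = [make_instance() for _ in range(initial)]  # fixed pool
        self.assigned = {}  # ue_id -> instance

    def attach(self, ue_id):
        # Prefer an existing idle instance; instantiate a new one otherwise.
        instance = self.idle.pop() if self.idle else self.make_instance()
        self.assigned[ue_id] = instance
        return instance

    def detach(self, ue_id):
        # Return the instance to the pool when the UE disconnects.
        self.idle.append(self.assigned.pop(ue_id))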

According to the disclosed embodiments, an instance of the eNodeB Functions 2112A may be configured to provide all radio-related functions needed to send/receive data to/from a UE 2120A. For example, an instance of eNodeB Function 2112A may perform at least a subset of functions of an eNodeB as defined in the LTE standard including, but not limited to, functions of a physical (PHY) layer, media access control (MAC) layer, radio resource management (RRM), and/or self-organizing network (SON). Functions of a PHY layer (as defined in the LTE standard) may include, for example, channel coding, rate matching, scrambling, modulation mapping, layer mapping, pre-coding, resource mapping, orthogonal frequency-division multiplexing (OFDM), and/or cyclic redundancy checking (CRC). Functions of a MAC layer (as defined in the LTE standard) may include, for example, scheduling, multiplexing, and/or hybrid automatic repeat request (HARQ) operations. Functions of RRM (as defined in the LTE standard) may include, for example, allocating, modifying, and releasing resources for transmission over the radio interface between a UE 2120A and the AP. Functions of a SON (as defined in the LTE standard) may include, for example, functions to self-configure, self-optimize, and self-heal the network devices 2150. Alternatively, or additionally, an instance of eNodeB Function 2112A may perform at least a subset of functions of an element equivalent to an eNodeB in other wireless standards, such as, but not limited to, functions of a base transceiver station (BTS) as defined in the GSM/EDGE standard or a NodeB as defined in the UMTS/HSPA standard. In some embodiments, a UE 2120A may wirelessly connect to the AP in the 3.5 GHz shared band.

According to the disclosed embodiments, an instance of eNodeB Function 2112A may be further configured to send/receive data to/from a corresponding instance of EPC Function 2114A. However, in contrast with the conventional wireless access infrastructure 1000 of FIG. 1A that only uses standardized communication interfaces, an instance of the eNodeB Function 2112A in the AP may communicate with an instance of the EPC Function 2114A also executing in the AP using any interface or protocol. Because the eNodeB and EPC Functions execute on the same AP, they do not need to be constrained to standardized communication interfaces. Instances of eNodeB Functions 2112A and EPC Functions 2114A may communicate with one another using, among other things, language-level method or procedure calls, remote-procedure call (RPC), Simple Object Access Protocol (SOAP), or Representational State Transfer (REST).
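
Because both functions execute on the same AP, the interface between them can collapse to a language-level call; a minimal sketch follows (Python; the class and method names are illustrative assumptions, and RPC, SOAP, or REST could be substituted without changing the design):

def forward_to_external_network(packet):
    # Stub: in a real AP this would route toward the Internet or IPX.
    return ("forwarded", packet)

class EPCFunction:
    """Illustrative core-network function instance co-located on the AP."""
    def handle_uplink(self, packet):
        return forward_to_external_network(packet)  # OPF-style forwarding

class ENodeBFunction:
    """Illustrative radio function instance paired with an EPC instance."""
    def __init__(self, epc):
        self.epc = epc  # a plain object reference, not an S1 interface

    def on_radio_uplink(self, packet):
        # A direct method call replaces the standardized S1-U interface.
        return self.epc.handle_uplink(packet)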

In accordance with the disclosed embodiments, an instance of the EPC Functions 2114A may be configured to provide at least some functions of a core network. For example, the exemplary instance of EPC Function 2114A may include functions such as, but not limited to, at least a subset of functions of the MME 1202, PGW 1206, SG-W 1204, and/or PCRF 1208 of EPC 1200 as defined in the LTE standard. An instance of the EPC Function 2114A, for example, may include a Mobility Management Function (MMF), which may perform at least a subset of functions of the MME 1202 (e.g., authentication functions), and an Optimized Packet Function (OPF), which may perform at least a subset of functions of the SG-W 1204 and/or the PGW 1206 node (e.g., forwarding packets between the UE 2120A and one or more external data networks, such as the Internet 2600 and IPX 2400).

In contrast with the MME 1202 node defined in the LTE standard, the MMF executing in the AP may communicate with the OPF using any protocol because both functions are implemented in the same EPC Function 2114A. In the EPC 1200, on the other hand, the MME 1202 node is connected to the SG-W 1204 using the standardized interface S11, and the SG-W 1204 is connected to the PGW 1206 using the standardized interfaces S5/S8. In the disclosed embodiments, for example, the MMF and the OPF may communicate with one another using language-level method or procedure calls, RPC, SOAP, or HTTP/REST.

Advantageously, an instance of eNodeB Function 2112A and/or EPC Function 2114A may implement the functions (or a subset of functions) of the eNodeB 1102 and/or the EPC 1200 using one or more services in accordance with the disclosed embodiments. For example, a service 2210A may include a distributed portion 2212A and a cloud portion 2214A. The distributed portion 2212A may be implemented within the AP and may provide application programming interfaces (APIs) that may be accessible by instances of eNodeB Functions 2112A-B and/or EPC Functions 2114A-B. The cloud portion 2214A of the service 2210A may be utilized by instances of the eNodeB Functions 2112A-B and/or EPC Functions 2114A-B through the associated distributed portion 2212A running on the AP.

Unlike the conventional wireless access infrastructure 1000 of FIG. 1A, the exemplary cloud-based wireless access infrastructure 2000 may utilize available resources more efficiently, in part, because the services (e.g., 2210A-B) share the same pool of cloud-platform resources, and further, the cloud platform 2200 may dynamically reallocate resources to and from each service based on the service's resource needs. For example, in the cloud-based wireless access infrastructure 2000, the cloud platform 2200 may dynamically allocate computing resources, such as memory and CPU time, to various services based on each service's real-time demand for such resources. In contrast, a predetermined amount of resources would be dedicated to each node in the conventional wireless access infrastructure 1000, and these resources cannot be distributed among the other nodes dynamically. Therefore, situations may exist in the conventional wireless access infrastructure 1000 where the UE 1002A is denied service because one of the nodes (e.g., the MME 1202 of the EPC 1200) does not have a sufficient amount of resources available for the UE 1002A, even when resources of other nodes have not been fully utilized.

Moreover, the capacity of the exemplary cloud-based wireless access infrastructure 2000 may be scaled up or down more simply and easily than the capacity of the conventional wireless access infrastructure 1000. For example, the capacity of the cloud-based wireless access infrastructure 2000 may be increased by adding resources to the cloud platform 2200 and/or to the APs 2110. In contrast, the capacities of multiple EPC 1200 nodes may need to be increased to increase the capacity of the conventional wireless access infrastructure 1000.

According to the disclosed embodiments, the cloud portion 2214A of the service 2210A may be implemented on the cloud platform 2200. Examples of cloud platforms include Eucalyptus (an open-source cloud platform), OpenStack (an open-source cloud platform), and Amazon Web Services (AWS). In some embodiments, the cloud portion 2214A of the service 2210A may be stateless and communicate with the distributed portion 2212A of the service 2210A using a protocol supported by the cloud platform 2200 (e.g., HTTP/REST and SOAP are supported by AWS). In some disclosed embodiments, the cloud portion 2214A of the service 2210A may utilize a cloud portion 2214B of another service 2210B. In other disclosed embodiments, a cloud portion 2214C of a service 2210C may communicate with a conventional core network node in the IPX 2400 via a standardized interface. In some embodiments, the cloud portion 2214C of the service 2210C may communicate with a server/application (e.g., Enterprise Identity and Authentication Application (EIAA) 2310) of the enterprise network 2300. And in some embodiments, the cloud portion 2214C of the service 2210C may communicate with the SDN controller 2500 to provide instructions on how and which network devices of the network devices 2150 to configure/reconfigure. In some embodiments, a service may have a cloud portion only (i.e., without a corresponding distributed portion), such as the cloud portion 2214B of the service 2210B.

In some embodiments of the invention, the distributed portion 2212A of the service 2210A, in addition to exposing APIs to instances of eNodeB Functions 2112A-B and/or EPC Functions 2114A-B, may provide additional functions, such as caching. For example, when an API of the distributed portion 2212A of the service 2210A is utilized to request data, the distributed portion 2212A, prior to communicating with its associated cloud portion 2214A to obtain the requested data, may determine whether the data is cached and/or whether the cached data is still valid.
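
A minimal sketch of that cache check follows (Python; the names and the time-to-live value are assumptions for illustration):

import time

class DistributedPortion:
    """Illustrative distributed portion of a service that caches responses."""
    TTL = 30.0  # seconds a cached entry stays valid (assumed value)

    def __init__(self, cloud_portion):
        self.cloud = cloud_portion
        self._cache = {}  # key -> (value, timestamp)

    def request(self, key):
        entry = self._cache.get(key)
        if entry is not None and time.time() - entry[1] < self.TTL:
            return entry[0]            # cached data still valid
        value = self.cloud.fetch(key)  # otherwise query the cloud portion
        self._cache[key] = (value, time.time())
        return value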

Time/Frequency Synchronization of APs

FIG. 4 is another block diagram of the exemplary cloud-based wireless access infrastructure 2000 of FIGS. 2 and 3 in accordance with the disclosed embodiments. FIG. 4 illustrates additional implementation details of the APs 2110 and the cloud platform 2200, including components for synchronizing time and/or frequency among the APs 2110.

FIG. 4 shows the APs 2110 including a first AP 2110A, a second AP 2110B, and a third AP 2110C. The first AP 2110A and the second AP 2110B include GNSS receivers 2116A and 2116B, respectively, configured to synchronize time and/or frequency with GNSS satellites 1400. In the exemplary infrastructure of FIG. 4, the first AP 2110A may successfully synchronize time and/or frequency with the GNSS satellites 1400, but the second AP 2110B may be prevented from synchronizing time and/or frequency with the GNSS satellites because of adverse weather, for example. In other words, while both the first AP 2110A and the second AP 2110B are considered to have the necessary hardware and/or software to synchronize in time and/or frequency with the GNSS satellites, only the first AP 2110A of the two APs is considered to be in a situation where it is able to synchronize time and/or frequency with the satellite signal. The third AP 2110C may not synchronize time or frequency with the GNSS satellites because it does not have a GNSS receiver. That is, the third AP 2110C is not considered to have the necessary hardware/software to synchronize in time and/or frequency.

FIG. 4 further shows a connectivity management function (CMF) service, a timing management function (TMF) service, and a location management function (LMF) service. A “function” broadly refers to software or hardware configured to perform one or more predetermined operations. A function may be implemented as software executing on hardware; alternatively, the function may be implemented as dedicated hardware. A function may also be implemented as a service on a cloud platform.

A service (e.g., the TMF, CMF, or LMF service) may include a distributed portion (e.g., 4020A, 4040A, or 4060A) executing in each AP 2110A-C and a corresponding cloud portion (e.g., 4020B, 4040B, or 4060B) executing on the cloud platform 2200. As noted above, a distributed portion of a service in an AP may communicate with the corresponding cloud portion of the service via the network devices 2150. The APs 2110A-C may also communicate with one another via the network devices 2150. The cloud portions of services may communicate with one another within the cloud platform 2200 using an internal network infrastructure of the cloud platform 2200, or alternatively may communicate with other service cloud portions via the network devices 2150.

In some embodiments, APs that have the necessary hardware and/or software and are in a situation to synchronize time and/or frequency with the timing source, such as the GNSS satellites 1400, may communicate with the LMF service. In the exemplary infrastructure of FIG. 4, for example, the first AP 2110A may notify the cloud portion of the LMF service (“C-LMF”) 4060B, using the distributed portion of the LMF service (“D-LMF”) 4060A executing in the first AP 2110A, that the first AP 2110A includes, or has access to, a GNSS receiver and that the GNSS receiver has a lock on a GNSS signal. Further, the second AP 2110B may notify the C-LMF 4060B, using the D-LMF 4060A in the second AP 2110B, that the second AP 2110B includes, or has access to, a GNSS receiver but that the GNSS receiver does not have a lock on a GNSS signal. Additionally, the third AP 2110C may notify the C-LMF 4060B, using the D-LMF 4060A in the third AP, that the third AP 2110C does not include, or have access to, a GNSS receiver. Alternatively, the third AP 2110C (or any other AP that does not have a GNSS receiver) may not communicate with the C-LMF, and the C-LMF may assume that an AP does not have the hardware/software necessary to synchronize with the GNSS satellites unless the AP communicates otherwise. In some embodiments, the C-LMF may have prior information as to which APs have the necessary hardware/software to synchronize time and/or frequency with the timing source.
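
The notifications the D-LMFs send to the C-LMF may be as simple as a small status record per AP; a sketch follows (Python; the field names are illustrative assumptions):

def gnss_status_report(ap_id, has_gnss_receiver, has_gnss_lock):
    """Build the first-information record a D-LMF reports to the C-LMF."""
    return {
        "ap_id": ap_id,
        "has_gnss_receiver": has_gnss_receiver,  # necessary hardware/software
        "has_gnss_lock": has_gnss_lock,          # currently able to synchronize
    }

# Per FIG. 4: AP 2110A reports a receiver with a lock, AP 2110B a receiver
# without a lock, and AP 2110C no receiver at all.
reports = [
    gnss_status_report("2110A", True, True),
    gnss_status_report("2110B", True, False),
    gnss_status_report("2110C", False, False),
]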

According to the disclosed embodiments, the TMF service may designate an AP as a master node or a slave node. In the exemplary infrastructure of FIG. 4, for example, a cloud portion of the TMF service (“C-TMF”) 4020B may designate an AP as a master node or a slave node and assign one or more slave nodes to a designated master node. In some embodiments, the C-TMF 4020B may designate an AP as a master or slave node based on whether the AP has the necessary hardware and/or software to synchronize in time and/or frequency with a signal from a timing source (e.g., by having a GNSS receiver), and/or whether the AP is in a position where it is able to synchronize in time and/or frequency with the timing source (e.g., by having a lock on a GNSS signal). In the exemplary infrastructure 2000 of FIG. 4, for example, the C-TMF 4020B may communicate with the D-TMF 4020A of the first AP 2110A to designate the first AP 2110A as a master node, since the first AP 2110A has a GNSS receiver 2116A and the GNSS receiver 2116A has a lock on a GNSS signal. The C-TMF 4020B may further communicate with the D-TMF 4020A of the second AP 2110B to designate the second AP 2110B as a slave node, since the second AP 2110B has a GNSS receiver 2116B but the GNSS receiver 2116B is unable to lock on a GNSS signal because of, for example, inclement weather or any other condition resulting in a low signal-to-noise ratio. The C-TMF 4020B may also communicate with the D-TMF 4020A of the second AP 2110B to assign the second AP 2110B as a slave node to the first AP 2110A designated as a master node. Additionally, the C-TMF 4020B may communicate with the D-TMF 4020A of the third AP 2110C to assign the third AP 2110C as a slave node to the first AP 2110A.

According to the disclosed embodiments, the TMF service may rely on connectivity information to determine which APs to designate as master or slave nodes and to determine which master node each slave node should be assigned to. In the exemplary infrastructure of FIG. 4, for example, the C-TMF 4020B may rely on network connectivity information when making these designations and assignments. The connectivity information may include, for example, a network map including the performance and/or cost associated with each interconnection between the nodes in the network.

Alternatively, the C-TMF 4020B may assign the second and/or third APs 2110B-C as slave nodes to another AP designated as a master node. For example, where the second and third APs 2110B-C are able to communicate with another AP that has been designated as a master node using fewer hops, less delay, or lower cost as compared to the first AP 2110A, which is also a master node, the C-TMF 4020B may assign the second and third APs 2110B-C to be slave nodes to the new AP master node instead of using the first AP 2110A as their master node. In some embodiments, the C-TMF 4020B may communicate with the C-LMF 4060B to obtain information on whether an AP has the necessary hardware and/or software for clock synchronization, and whether the AP is able to synchronize in time and/or frequency with a reference signal from a timing source. In some embodiments, the C-TMF 4020B may obtain the network connectivity information from the C-CMF 4040B. In some embodiments, the C-CMF 4040B may obtain the network connectivity information from a network administrator.
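
Choosing the lowest-cost reachable master for each slave may be sketched as a shortest-path selection over the network map (Python; the map format, a dictionary of per-link costs, is an assumption for illustration):

import heapq

def path_cost(network_map, src, dst):
    """Dijkstra over a map of the form {node: {neighbor: link_cost}}."""
    dist, seen = {src: 0.0}, set()
    heap = [(0.0, src)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == dst:
            return cost
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in network_map.get(node, {}).items():
            if nbr not in seen and cost + w < dist.get(nbr, float("inf")):
                dist[nbr] = cost + w
                heapq.heappush(heap, (cost + w, nbr))
    return float("inf")

def assign_master(network_map, slave, masters):
    """Assign the slave to the master with the cheapest path from it."""
    return min(masters, key=lambda m: path_cost(network_map, slave, m))

# Example: if AP 2110B reaches another master with fewer hops or lower cost
# than AP 2110A, the C-TMF may reassign AP 2110B to that master.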

In embodiments where a slave node can be assigned to a plurality of master nodes, the C-TMF 4020B may assign the slave node to master nodes that use different network technologies and/or are located in different geographical locations. In these embodiments, the probability of all master nodes assigned to the slave node failing at the same time may be reduced.

According to the disclosed embodiments, the C-TMF 4020B may dynamically re-designate an AP as a slave or master node. In some embodiments, the C-TMF 4020B may change the designation of an AP in response to information obtained from the C-CMF 4040B and/or C-LMF 4060B. The information obtained from the C-CMF 4040B and/or C-LMF 4060B may include a list of master APs that are now able or unable to synchronize with the timing source, or updated network connectivity information (e.g., a new router, an upgraded network connection, etc.), to provide some examples.

According to the disclosed embodiments, the C-TMF 4020B may dynamically change which master node a slave node is assigned to. In some embodiments, the C-TMF 4020B may change the assignment of an AP in response to information obtained from the C-CMF 4040B and/or C-LMF 4060B. The information obtained from the C-CMF 4040B and/or C-LMF 4060B may include a list of master APs that are now unable to synchronize with the timing source or an updated set of network connectivity information (e.g., identifying new routers in the network, upgraded network connections, etc.), to provide some examples.

Further to the disclosed embodiments, the C-TMF 4020B may designate an AP as a master node when none of the APs successfully synchronizes with the timing source. In the exemplary infrastructure 2000 of FIG. 4, for example, when the master-node AP 2110A loses its lock on the GNSS signal and no other APs are able to synchronize time and/or frequency with a signal from the GNSS satellites, the C-TMF 4020B may arbitrarily designate the AP 2110B as a master node and assign all other APs to be slave nodes to the AP 2110B. Alternatively, the C-TMF 4020B may designate an AP that successfully synchronizes with a non-absolute timing source as a master node. For example, in the exemplary infrastructure 2000 of FIG. 4, when the AP 2110C supports the Network Time Protocol (NTP), a standard protocol, the C-TMF 4020B may designate the AP 2110C as a master node even though the AP 2110C is not able to synchronize time or frequency with the GNSS satellites.

In some embodiments, the C-TMF 4020B may designate an AP as a master node, but not assign any slave nodes to the AP.

FIG. 5 illustrates a process 5000 performed by a set of functions including a CMF, LMF, and TMF in accordance with the disclosed embodiments. The process may synchronize clocks of a plurality of access points (APs).

At step 5010, an LMF may receive first information from at least one AP of the plurality of APs, the first information including information indicating whether each of the at least one AP is able to synchronize with a signal from a timing source, such as a reference clock signal from the timing source. In some embodiments, the LMF, CMF, and TMF may be separate services executing on at least one cloud platform. In some embodiments, the LMF may receive the first information through a distributed portion of the LMF. The distributed portion of the LMF may execute on the at least one AP. In some embodiments, the first information may further include information indicating whether each of the at least one AP has the hardware and/or software for synchronizing with the signal from the timing source. In some embodiments, the timing source may be a global navigational satellite system (GNSS) satellite and the signal may be a satellite signal from the GNSS satellite. In these embodiments, the first information may further include information indicating whether each of the at least one AP has a lock on the satellite signal.

At step 5020, a CMF may obtain network connectivity information of the plurality of APs. In some embodiments, the network connectivity information may include a network map. At step 5030, a TMF may designate a first AP of the plurality of APs as a master node based on the first information and the network connectivity information. At step 5040, the TMF may assign a second AP of the plurality of APs as a slave node to the master node based on the first information and the network connectivity information.
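
The four steps of process 5000 may be wired together as in the following driver (a Python sketch; the LMF, CMF, and TMF objects and their methods are illustrative assumptions):

def synchronize_clocks(lmf, cmf, tmf, aps):
    """Illustrative end-to-end driver for process 5000 of FIG. 5."""
    first_information = lmf.receive_reports(aps)   # step 5010
    connectivity = cmf.get_network_map()           # step 5020
    master = tmf.designate_master(first_information, connectivity)  # step 5030
    for ap in aps:                                 # step 5040
        if ap is not master:
            tmf.assign_slave(ap, master, first_information, connectivity)
    return master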

While illustrative embodiments have been described herein, the scope of the invention includes any and all embodiments having equivalent elements, modifications, omissions, combinations (e.g., of aspects across various embodiments), adaptations and/or alterations as would be appreciated by those skilled in the art based on the present disclosure. The limitations in the claims are to be interpreted broadly based on the language employed in the claims and are not limited to examples described in the present specification or during the prosecution of the application. The examples are to be construed as non-exclusive. Furthermore, the steps of the disclosed routines may be modified in any manner, including by reordering steps and/or inserting or deleting steps. It is intended, therefore, that the specification and examples be considered as illustrative only, with a true scope and spirit being indicated by the following claims and their full scope of equivalents.

Claims

1. A method of clock synchronization in a network comprising a plurality of access points (APs), the method comprising:

receiving first information from at least one AP in the network, the first information indicating whether each of the at least one AP is able to synchronize with a reference signal from a timing source;
obtaining network connectivity information of the plurality of APs;
designating a first AP of the plurality of APs as a master node based on the first information and the network connectivity information; and
assigning a second AP of the plurality of APs as a slave node to the master node based on the first information and the network connectivity information, the slave node to synchronize its clock to timing information provided by the master node.

2. The method of claim 1, wherein a timing management function (TMF) designates the first AP as the master node and the second AP as the slave node.

3. The method of claim 2, wherein a location management function (LMF) receives the first information from the at least one AP in the network.

4. The method of claim 3, wherein the LMF and TMF are separate services executing on at least one cloud platform.

5. The method of claim 3, wherein a connectivity management function (CMF) obtains the network connectivity information.

6. The method of claim 5, wherein the CMF and TMF are separate services executing on at least one cloud platform.

7. The method of claim 4, wherein the LMF comprises a distributed portion and a cloud portion, and the LMF receives the first information through the distributed portion executing on the at least one AP.

8. The method of claim 1, wherein the first information comprises information indicating whether each of the at least one AP has at least one of hardware and software for synchronizing with the reference signal from the timing source.

9. The method of claim 1, wherein the timing source is a global navigational satellite system (GNSS) satellite and the reference signal is a satellite signal from the GNSS satellite.

10. The method of claim 9, wherein the first information comprises information indicating whether each of the at least one AP has a lock on the satellite signal.

11. The method of claim 1, wherein the network connectivity information includes a network map.

12. The method of claim 1, wherein the first information comprises (i) information indicating whether the at least one AP is configured to receive the reference signal from the timing source, and (ii) information indicating whether the at least one AP is configured to synchronize to the reference signal.

13. A system for synchronizing clocks of a plurality of access points (APs) in a network, the system comprising:

a location management function (LMF) configured to receive first information from at least one AP of the plurality of APs, the first information indicating whether each of the at least one AP is able to synchronize with a reference signal from a timing source;
a connectivity management function (CMF) configured to obtain network connectivity information of the plurality of APs; and
a timing management function (TMF) configured to: designate a first AP of the plurality of APs as a master node based on the first information and the network connectivity information, and assign a second AP of the plurality of APs as a slave node to the master node based on the first information and the network connectivity information, the slave node to synchronize its clock to timing information provided by the master node.

14. The system of claim 13, wherein the LMF, CMF, and TMF are services executing on at least one cloud platform.

15. The system of claim 13, wherein the LMF is configured to receive the first information through a distributed portion of the LMF executing on the at least one AP.

16. The system of claim 13, wherein the first information comprises information indicating whether each of the at least one AP has at least one of hardware and software for synchronizing with the reference signal from the timing source.

17. The system of claim 13, wherein the timing source is a global navigational satellite system (GNSS) satellite and the reference signal is a satellite signal from the GNSS satellite.

18. The system of claim 17, wherein the first information comprises information indicating whether each of the at least one AP has a lock on the satellite signal.

19. The system of claim 13, wherein the network connectivity information includes a network map.

20. The system of claim 13, wherein the first information comprises (i) information indicating whether the at least one AP is configured to receive the reference signal from the timing source, and (ii) information indicating whether the at least one AP is configured to synchronize to the reference signal.

21. A system for clock synchronizing in a network comprising a plurality of access points (APs), the system comprising:

means for receiving first information from at least one AP of the plurality of APs, the first information indicating whether each of the at least one AP is able to synchronize with a reference signal from a timing source;
means for obtaining network connectivity information of the plurality of APs; and
means for designating a first AP of the plurality of APs as a master node based on the first information and the network connectivity information and assigning a second AP of the plurality of APs as a slave node to the master node based on the first information and the network connectivity information, the slave node to synchronize its clock to timing information provided by the master node.

22. The system of claim 21, wherein the timing source is a global navigational satellite system (GNSS) satellite and the reference signal is a satellite signal from the GNSS satellite.

23. The system of claim 22, wherein the first information comprises information indicating whether each of the at least one AP has a lock on the satellite signal.

Patent History
Publication number: 20160330707
Type: Application
Filed: May 9, 2016
Publication Date: Nov 10, 2016
Inventors: Deepak DAS (Lexington, MA), Matthew PROBST (Southborough, MA)
Application Number: 15/150,386
Classifications
International Classification: H04W 56/00 (20060101); H04B 7/185 (20060101); H04L 5/00 (20060101);